are you in the business of running investigations? then you have a budget
if it's a paid relay and the client asks for a delete, then you gotta stop sending it out
but that doesn't mean that you can't charge for an extra service to access that data
as a relay operator, i have no obligation to send you anything unless i'm paid to do it, and if you don't pay me extra, why should i rat on my customers?
deleting spammy notes is necessary, because storage space is limited, and garbage is infinite
maybe you just didn't think about how much volume that entails?
to be honest, if spammers ask to have their shit deleted, good, but that costs extra
you gotta be harsh with these slimebags, you know... make the game hard for them to win, then you get them out of your neighbourhood
I have an alpha draft of a tool that would allow clients to train their own custom filters. It works pretty well, but it's a real bitch trying to get enough data, data being the text of spammy notes (rough sketch below).
How much extra storage space can spammy text content really take up on a relay? If you're hosting images, sure, nuke those, but keeping suspect notes up for a couple of weeks would be very helpful.
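A minimal sketch of the kind of filter such a tool might train: a tiny naive-Bayes classifier over note text, which is exactly why a corpus of spammy/deleted notes matters. Everything here is made up for illustration, it is not the actual alpha draft.

```go
// Toy naive-Bayes spam filter over note content. Illustrative only; all names
// and training examples are hypothetical.
package main

import (
	"fmt"
	"math"
	"strings"
)

// classifier keeps per-token counts for the spam and ham classes.
type classifier struct {
	spamTokens, hamTokens map[string]int
	spamNotes, hamNotes   int
}

func newClassifier() *classifier {
	return &classifier{spamTokens: map[string]int{}, hamTokens: map[string]int{}}
}

// train adds one note's tokens to the counts for its class.
func (c *classifier) train(content string, spam bool) {
	for _, tok := range strings.Fields(strings.ToLower(content)) {
		if spam {
			c.spamTokens[tok]++
		} else {
			c.hamTokens[tok]++
		}
	}
	if spam {
		c.spamNotes++
	} else {
		c.hamNotes++
	}
}

// spamScore returns a log-odds score; positive means "looks like spam".
func (c *classifier) spamScore(content string) float64 {
	score := math.Log(float64(c.spamNotes+1)) - math.Log(float64(c.hamNotes+1))
	spamTotal, hamTotal := 0, 0
	for _, n := range c.spamTokens {
		spamTotal += n
	}
	for _, n := range c.hamTokens {
		hamTotal += n
	}
	for _, tok := range strings.Fields(strings.ToLower(content)) {
		// add-one smoothing so tokens unseen in one class don't zero out the score
		pSpam := float64(c.spamTokens[tok]+1) / float64(spamTotal+2)
		pHam := float64(c.hamTokens[tok]+1) / float64(hamTotal+2)
		score += math.Log(pSpam) - math.Log(pHam)
	}
	return score
}

func main() {
	c := newClassifier()
	c.train("buy cheap sats click this link now", true)
	c.train("free airdrop claim your reward", true)
	c.train("great talk at the conference today", false)
	c.train("anyone running a relay on a raspberry pi", false)
	fmt.Printf("score: %.2f\n", c.spamScore("claim free sats now"))
}
```

The more labelled spammy text it sees, the better the scores get, which is the whole reason deleted-note corpora are valuable here.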
i think you could probably easily get relay operators to feed their deleted events into your midden if you just asked... it's a matter of just adding a tiny feature "when delete, send to dumbass who wants deleted events"
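A rough sketch of what that "when delete, forward a copy" hook could look like, assuming a kind-5 deletion event per NIP-09 and a made-up collector endpoint; the relay plumbing (storage lookup, when the hook fires, the actual removal) is left abstract.

```go
// Hypothetical relay hook: on a kind-5 deletion, forward copies of the
// referenced events to a collector before they are removed locally.
package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
)

// event mirrors the standard nostr event fields.
type event struct {
	ID        string     `json:"id"`
	PubKey    string     `json:"pubkey"`
	CreatedAt int64      `json:"created_at"`
	Kind      int        `json:"kind"`
	Tags      [][]string `json:"tags"`
	Content   string     `json:"content"`
	Sig       string     `json:"sig"`
}

// collectorURL is a made-up endpoint run by whoever is gathering deleted
// events to build a training corpus.
const collectorURL = "https://midden.example.com/deleted"

// onDelete forwards the events referenced by a kind-5 deletion's "e" tags as
// a JSON array; the caller is assumed to handle the actual deletion afterwards.
func onDelete(deletion event, lookup func(id string) (event, bool)) {
	if deletion.Kind != 5 {
		return
	}
	var doomed []event
	for _, tag := range deletion.Tags {
		if len(tag) >= 2 && tag[0] == "e" {
			if ev, ok := lookup(tag[1]); ok {
				doomed = append(doomed, ev)
			}
		}
	}
	if len(doomed) == 0 {
		return
	}
	body, err := json.Marshal(doomed)
	if err != nil {
		log.Printf("marshal deleted events: %v", err)
		return
	}
	resp, err := http.Post(collectorURL, "application/json", bytes.NewReader(body))
	if err != nil {
		log.Printf("forward to collector: %v", err)
		return
	}
	resp.Body.Close()
}

func main() {
	// stub storage lookup just to demonstrate the call
	stored := map[string]event{
		"abc123": {ID: "abc123", Kind: 1, Content: "spammy note text"},
	}
	lookup := func(id string) (event, bool) { ev, ok := stored[id]; return ev, ok }
	onDelete(event{Kind: 5, Tags: [][]string{{"e", "abc123"}}}, lookup)
}
```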
in fact, i am just about to build out a two-level caching algorithm that lets me keep reasonable limits on the inbuilt cache database and maintain searchability (via simple filter searches) while pushing the event itself out to a secondary store
that's practically half of what you are looking for
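A bare-bones sketch of that two-level layout, with in-memory maps standing in for the real stores: level one keeps only the fields that simple filters match on, level two holds the full serialized event and is only touched when a query actually hits. Names are placeholders, not the implementation being built.

```go
// Sketch of a two-level event store: a small searchable index plus a blob
// store for full event bodies. Maps stand in for real databases.
package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the standard nostr event fields.
type event struct {
	ID        string     `json:"id"`
	PubKey    string     `json:"pubkey"`
	CreatedAt int64      `json:"created_at"`
	Kind      int        `json:"kind"`
	Tags      [][]string `json:"tags"`
	Content   string     `json:"content"`
	Sig       string     `json:"sig"`
}

// indexRecord holds only the filterable fields kept in the hot first level.
type indexRecord struct {
	ID        string
	PubKey    string
	Kind      int
	CreatedAt int64
}

// twoLevelStore keeps searchable metadata in level one and the full
// serialized event in level two, keyed by event id.
type twoLevelStore struct {
	index map[string]indexRecord
	blobs map[string][]byte
}

func newTwoLevelStore() *twoLevelStore {
	return &twoLevelStore{index: map[string]indexRecord{}, blobs: map[string][]byte{}}
}

// put splits an incoming event into its index record and its serialized body.
func (s *twoLevelStore) put(ev event) error {
	raw, err := json.Marshal(ev)
	if err != nil {
		return err
	}
	s.index[ev.ID] = indexRecord{ID: ev.ID, PubKey: ev.PubKey, Kind: ev.Kind, CreatedAt: ev.CreatedAt}
	s.blobs[ev.ID] = raw
	return nil
}

// query scans the index with a simple kind/author filter and only pulls full
// events from the secondary store for the ids that match.
func (s *twoLevelStore) query(kind int, pubkey string) ([]event, error) {
	var out []event
	for id, rec := range s.index {
		if rec.Kind != kind || (pubkey != "" && rec.PubKey != pubkey) {
			continue
		}
		var ev event
		if err := json.Unmarshal(s.blobs[id], &ev); err != nil {
			return nil, err
		}
		out = append(out, ev)
	}
	return out, nil
}

func main() {
	s := newTwoLevelStore()
	s.put(event{ID: "a1", PubKey: "npub_x", Kind: 1, CreatedAt: 1700000000, Content: "hello"})
	s.put(event{ID: "a2", PubKey: "npub_y", Kind: 1, CreatedAt: 1700000100, Content: "spam spam"})
	got, _ := s.query(1, "npub_y")
	fmt.Println(len(got), got[0].Content)
}
```

Pointing the second level at cheap disk or object storage is what makes keeping suspect notes around for a few weeks affordable.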
but i think you are barking up the wrong tree looking for preemptive methods of blocking spam
web of trust will do most of that for you; spammers can't win long-term confidence from people, so they have to constantly make new identities, which keeps them from getting deep into the web
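To make that exclusion concrete, here is a toy hop-distance check over a follow graph: a fresh spammer identity that nobody follows never becomes reachable, so a depth-limited web of trust simply never sees it. The graph is hard-coded; a real client would build it from contact lists.

```go
// Toy web-of-trust check: breadth-first search over a follow graph to see how
// many hops separate you from an author. Graph data is hard-coded; in practice
// it would come from followers' contact lists.
package main

import "fmt"

// hopDistance returns how many follow-hops separate `from` and `to`, or -1 if
// `to` is unreachable within maxDepth.
func hopDistance(follows map[string][]string, from, to string, maxDepth int) int {
	dist := map[string]int{from: 0}
	queue := []string{from}
	for len(queue) > 0 {
		cur := queue[0]
		queue = queue[1:]
		if cur == to {
			return dist[cur]
		}
		if dist[cur] >= maxDepth {
			continue
		}
		for _, next := range follows[cur] {
			if _, seen := dist[next]; !seen {
				dist[next] = dist[cur] + 1
				queue = append(queue, next)
			}
		}
	}
	return -1
}

func main() {
	follows := map[string][]string{
		"me":    {"alice", "bob"},
		"alice": {"carol"},
		"carol": {"dave"},
		// a fresh spammer identity that nobody follows never appears as a
		// value here, so it stays unreachable no matter the depth
	}
	fmt.Println(hopDistance(follows, "me", "dave", 3))    // 3: inside the web
	fmt.Println(hopDistance(follows, "me", "spammer", 3)) // -1: outside the web
}
```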
Multilevel caching sounds like a great idea.
I am a big believer in WoT, but I also believe preemptive filtering is a "must have" for many, many users. Not me and not you, but many.
Spammers are already using LLM-generated content. They can fail as many times as they like, and one human user only has to fail once for a spammer to get into the WoT for a while.
the thing is that fakes can't get deep into the social graph without being like the people in it
ultimately if you fall for a fake you're gonna get tricked
there's many helpful things we can build into the systems to add friction for malicious actors, but ultimately social manipulation in general is something that requires, *requires*, personal responsibility, skepticism, alertness, and emotional maturity to defeat
100%, but partial successes still annoy nostriches and waste their time.
We need to do all the above.
Winners study the British SDS undercover training manuals.
Losers study "ChatGPT For Dummies".