the first point can be solved by serving the delete request instead, proving that the spammer deleted their note

secondly, you can have access control lists on a relay that bypass that and show the event itself AND the delete event. again, another thing that paid relays can solve

yeah, it really was given the right number, fuck censorship
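(A minimal sketch of that "serve the delete instead" flow, assuming a toy event store; `EventStore`, `deletionOf`, and `aclAllows` are illustrative names, not any real relay's API.)

```ts
// Sketch: when a requested event has a kind-5 delete pointing at it, serve
// the delete event as proof of deletion; ACL'd (paid) requesters get both.
interface NostrEvent {
  id: string;
  pubkey: string;
  kind: number;
  created_at: number;
  tags: string[][];
  content: string;
  sig: string;
}

interface EventStore {
  get(id: string): NostrEvent | undefined;
  // the kind-5 deletion event whose "e" tags reference `id`, if any
  deletionOf(id: string): NostrEvent | undefined;
}

function serveEvent(
  store: EventStore,
  requester: string,
  aclAllows: (pubkey: string) => boolean,
  id: string,
): NostrEvent[] {
  const event = store.get(id);
  const deletion = store.deletionOf(id);
  if (!event) return deletion ? [deletion] : [];
  if (!deletion) return [event];
  // default: prove the author deleted it by returning the kind-5 instead;
  // privileged requesters see the event itself AND the delete event
  return aclAllows(requester) ? [event, deletion] : [deletion];
}
```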
Truth, but ACLs mean signing up for a list on multiple other people's relays. Would prefer if it were in the protocol. As it stands, I'd much rather the relay just issue a 1984 on the note and let clients decide what to do about it. Actually deleting spammy notes silently is the real creepy 1984 behavior. I get that it's sometimes required by law for certain content (criticising the King, for siamstr), but that's why more relays need to be on Tor...
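(For reference, a NIP-56 report is just a kind 1984 event, roughly like the sketch below; the pubkey, ids, and content are placeholders.)

```ts
// Shape of a kind-1984 report a relay could publish instead of silently
// deleting. The third tag element is the NIP-56 report type, here "spam".
const report = {
  kind: 1984,
  pubkey: "<reporter or relay operator pubkey>",
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    ["e", "<id of the suspect note>", "spam"],
    ["p", "<pubkey of the suspected spammer>", "spam"],
  ],
  content: "flagged by relay spam heuristics",
  // id and sig are computed and signed as for any other event
};
```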
are you in the business of making investigations? then you have a budget.

if it's a paid relay and the client asks for delete, then you gotta stop sending it out, but that doesn't mean you can't charge for an extra service to access that data. as a relay operator, i have no obligation to send you anything unless i'm paid to do it, and if you don't pay me extra, why should i rat on my customers?

deleting spammy notes is necessary, because storage space is limited and garbage is infinite. you just didn't think about how much volume it may entail, maybe?
to be honest, if spammers ask to have their shit deleted, good, but that costs extra
I have an alpha draft of a tool that would allow clients to train their own custom filters. It works pretty well, but it's a real bitch trying to get enough data. Data being the text of spammy notes. How much extra storage space can spammy text content really take up on a relay? If you're hosting images, sure, nuke those, but keeping suspect notes up for a couple of weeks would be very helpful.
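(The tool itself isn't shown in the thread; purely as an illustration of what client-trained text filtering can look like, here is a tiny naive Bayes classifier over note content.)

```ts
// Naive Bayes spam classifier with Laplace smoothing, trained on raw note text.
type Label = "spam" | "ham";

class NaiveBayes {
  private counts: Record<Label, Map<string, number>> = { spam: new Map(), ham: new Map() };
  private totals: Record<Label, number> = { spam: 0, ham: 0 }; // token counts
  private docs: Record<Label, number> = { spam: 0, ham: 0 };   // note counts
  private vocab = new Set<string>();

  private tokenize(text: string): string[] {
    return text.toLowerCase().split(/[^a-z0-9#@]+/).filter(Boolean);
  }

  train(text: string, label: Label): void {
    this.docs[label]++;
    for (const tok of this.tokenize(text)) {
      this.counts[label].set(tok, (this.counts[label].get(tok) ?? 0) + 1);
      this.totals[label]++;
      this.vocab.add(tok);
    }
  }

  spamProbability(text: string): number {
    const v = this.vocab.size;
    const total = this.docs.spam + this.docs.ham + 2;
    let logSpam = Math.log((this.docs.spam + 1) / total);
    let logHam = Math.log((this.docs.ham + 1) / total);
    for (const tok of this.tokenize(text)) {
      logSpam += Math.log(((this.counts.spam.get(tok) ?? 0) + 1) / (this.totals.spam + v));
      logHam += Math.log(((this.counts.ham.get(tok) ?? 0) + 1) / (this.totals.ham + v));
    }
    return 1 / (1 + Math.exp(logHam - logSpam));
  }
}
```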
i think you could probably easily get relay operators to feed their deleted events into your midden if you just asked... it's a matter of just adding a tiny feature: "when delete, send to dumbass who wants deleted events"

in fact, i am just about to build out a two-level caching algorithm that lets me maintain some reasonable limits on the inbuilt cache database and maintain searchability (via simple filter searches) but push the event itself to a secondary store. that's practically half of what you are looking for (a sketch of the idea follows below)

but i think you are barking up the wrong tree looking for preemptive methods of blocking spam. web of trust will do most of that for you; spammers can't win long-term confidence in people, and they have to constantly make new identities, which excludes them from getting deep into the web
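(A rough sketch of that two-level arrangement; `ColdStore` and the in-memory hot tier are stand-ins, not the actual implementation.)

```ts
// Hot tier: full events up to a size limit, plus an always-resident index of
// the fields simple filter searches touch. Cold tier: evicted event bodies.
interface IndexEntry {
  id: string;
  pubkey: string;
  kind: number;
  created_at: number;
  tags: string[][];
}

interface ColdStore {
  put(id: string, raw: string): Promise<void>;
  get(id: string): Promise<string | undefined>;
}

class TwoLevelCache {
  private hot = new Map<string, string>();       // id -> raw event JSON
  private index = new Map<string, IndexEntry>(); // id -> filterable fields

  constructor(private cold: ColdStore, private hotLimit: number) {}

  async put(raw: string): Promise<void> {
    const ev = JSON.parse(raw);
    this.index.set(ev.id, {
      id: ev.id, pubkey: ev.pubkey, kind: ev.kind,
      created_at: ev.created_at, tags: ev.tags,
    });
    this.hot.set(ev.id, raw);
    if (this.hot.size > this.hotLimit) {
      // evict the oldest-inserted body to the cold store; its index entry
      // stays, so the event remains searchable
      const [oldId, oldRaw] = this.hot.entries().next().value!;
      await this.cold.put(oldId, oldRaw);
      this.hot.delete(oldId);
    }
  }

  // simple filter search over the index; bodies fetched from whichever tier
  async query(match: (e: IndexEntry) => boolean): Promise<string[]> {
    const hits: string[] = [];
    for (const entry of this.index.values()) {
      if (!match(entry)) continue;
      const raw = this.hot.get(entry.id) ?? (await this.cold.get(entry.id));
      if (raw) hits.push(raw);
    }
    return hits;
  }
}
```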
Multilevel caching sounds like a great idea. I am a big believer in WoT, but I also believe preemptive filtering is a "must have" for many, many users. Not me and not you, but many. Spammers are already using LLM-generated content. They can fail as many times as they like, and one human user only has to fail once for a spammer to get into the WoT for a while.
the thing is that fakes can't get deep into the social graph without being like the people in it. ultimately, if you are a fake, you're gonna get tricked.

there's many helpful things we can build into the systems to add friction for malicious actors, but ultimately social manipulation in general is something that *requires* personal responsibility, skepticism, alertness, and emotional maturity to defeat
100%, but partial successes still annoy nostriches and waste their time. We need to do all the above.
Winners study the British SDS undercover training manuals. Losers study "ChatGPT For Dummies".
also, "would prefer it was in the protocol" is requiring the protocol to not just be a relay protocol but also a consensus no just had this conversation with regard to semisol's idea about cursors being jammed into REQ/EOSE envelopes no, this is a separate protocol, like i said to @Semisol - make a new query type that only returns event IDs, problem solved, no state to save, far less data cost, and the client is free to paginate it as they wish
Well, 1984s ARE in the protocol and they do exactly what I need already. Or would, if relay operators could leave the suspect notes on Death Row for a while. Otherwise, 100% agree
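(One way that "Death Row" holding period could work, as a sketch; the names and the two-week window are assumptions.)

```ts
// Reported notes are withheld from normal queries but not purged until a
// grace period elapses, leaving a window for filter training.
const GRACE_SECONDS = 14 * 24 * 3600; // e.g. two weeks on death row

interface Row { raw: string; purgeAfter: number; }
const deathRow = new Map<string, Row>(); // event id -> held note

function condemn(id: string, raw: string, now: number): void {
  deathRow.set(id, { raw, purgeAfter: now + GRACE_SECONDS });
}

function sweep(now: number): void {
  for (const [id, row] of deathRow) {
    if (row.purgeAfter <= now) deathRow.delete(id); // only now is it gone
  }
}
```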
i'm making a mental note about this: 1984'd events will not be served to the reporter. this will ultimately lead to a concept of "web of distrust", which could be a marketable data set too
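(That suppression rule as a sketch; `reportsBy` is a hypothetical lookup from a pubkey to the event ids it has 1984'd.)

```ts
// Don't serve an event back to someone who reported it; aggregating these
// per-pubkey suppression sets is the raw material of a "web of distrust".
function visibleTo(
  eventId: string,
  requester: string,
  reportsBy: (pubkey: string) => Set<string>,
): boolean {
  return !reportsBy(requester).has(eventId);
}
```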
Except, it's the reporter that most needs that event for training their own filter. Hmmmm...
@Bob_stores_nostr's idea of an "archive relay" solves my data problem. If he doesn't build it I may have to...
Happy to collab or learn about your use case. What subset of Nostr notes specifically would be enabling for your filter training?
Thank you, Bob! Anything that:
- is a Kind-1984, or
- is the note reported in a Kind-1984

The idea being that client apps can train their own filter models, or at least a marketplace of data vending machines can build them on demand. Two-tiered access to your archive could work: free access to last month's data, maybe, and paid access to everything...
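(A sketch of pulling exactly that subset over plain relay protocol messages; `send` stands in for a WebSocket write.)

```ts
// First the kind-1984 reports, then the notes they reference via "e" tags.
type Report = { tags: string[][] };

function requestReports(send: (msg: string) => void, since: number): void {
  // all reports filed since `since` (unix seconds)
  send(JSON.stringify(["REQ", "reports", { kinds: [1984], since }]));
}

function requestReportedNotes(send: (msg: string) => void, reports: Report[]): void {
  // collect the reported event ids from the reports' "e" tags...
  const ids = new Set<string>();
  for (const r of reports) {
    for (const tag of r.tags) {
      if (tag[0] === "e" && tag[1]) ids.add(tag[1]);
    }
  }
  // ...then ask for those notes directly by id
  send(JSON.stringify(["REQ", "reported", { ids: [...ids] }]));
}
```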
It would be great if an archive could generate revenue, but honestly, my focus is on building relationships and trust with relay operators to get the data without disrupting their primary function. Once the data is coming in, then it will be down to working with people like yourself to figure out how to serve the data subset you need in the time frame you need it with the resources I have available. And if the resources are lacking, how to get them.