How do we proceed with this? The only thing that comes to my mind is shared blocklists, with popular relays refusing posts based on presence on those lists. Do you have a proposed solution, or steps we can take?
I doubt popular relays would sign up to block notes from certain npubs, but a first step could be letting npubs express a preference not to be mentioned by certain other npubs. I don't see why those types of lists wouldn't be welcomed by everyone: if someone is publicly willing to say "I don't want this npub to interact with me", the rest of the network should support that.
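Something like this, as a rough sketch in TypeScript (the kind number and tag layout are made up for illustration, not a real NIP):

```ts
// Hypothetical "no-mention" list expressed as a replaceable list event.
interface NostrEvent {
  kind: number;
  pubkey: string;
  created_at: number;
  tags: string[][];
  content: string;
}

const NO_MENTION_LIST_KIND = 10050; // hypothetical kind, not a real NIP

// Alice publishes the pubkeys she refuses mentions from:
const aliceList: NostrEvent = {
  kind: NO_MENTION_LIST_KIND,
  pubkey: "<alice-hex-pubkey>",
  created_at: Math.floor(Date.now() / 1000),
  tags: [["p", "<hex-pubkey-alice-blocks>"]],
  content: "",
};

// Any client or relay could then check a note against such a list:
function violatesNoMentionList(note: NostrEvent, list: NostrEvent): boolean {
  const blocked = new Set(list.tags.filter(t => t[0] === "p").map(t => t[1]));
  const mentionsOwner = note.tags.some(t => t[0] === "p" && t[1] === list.pubkey);
  return mentionsOwner && blocked.has(note.pubkey);
}
```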
And yeah, Primal already has this in a way. You can still post messages to relays... but it's possible to subscribe to each other's mute lists. So it's more a problem of organizing the community, and then having client support, to maybe eventually get to some kind of default "no assholes" list. https://m.primal.net/HqUQ.png
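Merging subscribed mute lists is simple; here's a minimal sketch assuming the kind-10000 mute lists (NIP-51) have already been fetched. NIP-51 also allows privately encrypted entries in the content field, which this skips:

```ts
interface NostrEvent {
  kind: number;
  pubkey: string;
  created_at: number;
  tags: string[][];
  content: string;
}

// Given the latest kind-10000 event from each account you trust,
// build one combined "no assholes" set of muted pubkeys.
function mergeMuteLists(muteLists: NostrEvent[]): Set<string> {
  const muted = new Set<string>();
  for (const list of muteLists) {
    for (const tag of list.tags) {
      if (tag[0] === "p" && tag[1]) muted.add(tag[1]);
    }
  }
  return muted;
}

function isHidden(note: NostrEvent, muted: Set<string>): boolean {
  return muted.has(note.pubkey);
}
```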
Iris by @Martti Malmi implemented a configurable WoT hop filter: don't show me anyone/anything outside my WoT, outside my WoT + 1 hop, etc.
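The hop filter itself is just a breadth-first walk over follow lists. A sketch, assuming kind-3 follow lists have already been fetched into a map (this is my reading of the feature, not Iris's actual code):

```ts
type Pubkey = string;

// Compute each pubkey's hop distance from you, up to maxHops.
function computeHops(
  me: Pubkey,
  follows: Map<Pubkey, Pubkey[]>, // pubkey -> pubkeys they follow (from kind 3)
  maxHops: number,
): Map<Pubkey, number> {
  const hops = new Map<Pubkey, number>([[me, 0]]);
  let frontier: Pubkey[] = [me];
  for (let depth = 1; depth <= maxHops; depth++) {
    const next: Pubkey[] = [];
    for (const pk of frontier) {
      for (const followed of follows.get(pk) ?? []) {
        if (!hops.has(followed)) {
          hops.set(followed, depth);
          next.push(followed);
        }
      }
    }
    frontier = next;
  }
  return hops;
}

// "My WoT" = maxHops 1 (your follows); "WoT + 1 hop" = maxHops 2
// (their follows too). Show an event only if hops.has(event.pubkey).
```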
With new npubs so quick to create, bad actors can easily keep operating as filters fail to keep up, no?
Yes, but we could create filters for new users and use a web of trust to filter out these accounts. Eventually, with good LLMs, you could create troll accounts that do the work of appearing legit before they start their abuse, but for now we're not yet facing those issues.
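A sketch of what such a new-account filter could look like; the one-week threshold is arbitrary, and firstSeen would have to be tracked by your client or relay:

```ts
interface NostrEvent { pubkey: string; created_at: number; }

const MIN_ACCOUNT_AGE_SECONDS = 7 * 24 * 60 * 60; // one week, arbitrary

// New accounts pass only if they're already inside your web of trust;
// established accounts pass unconditionally.
function passesNewAccountFilter(
  ev: NostrEvent,
  firstSeen: Map<string, number>, // pubkey -> unix timestamp first observed
  wot: Set<string>,               // pubkeys inside your web of trust
): boolean {
  const seen = firstSeen.get(ev.pubkey);
  const isNew = seen === undefined || ev.created_at - seen < MIN_ACCOUNT_AGE_SECONDS;
  return !isNew || wot.has(ev.pubkey);
}
```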
Would it be possible to build a limiter like what Twitter has, where people you don't follow can't reply or otherwise engage with you? If people who are harassed are given control to consent to social interactions as a starting point, that might be worth considering.
Yeah, that's basically what I'm proposing. On the protocol level it's possible to craft any type of note, but relays could check: if a note references an npub that blocks mentions from you, they wouldn't accept or propagate your note. They could also stop returning your notes when the request comes from certain npubs.
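A sketch of that relay-side check; how the relay learns each user's block list is out of scope here, and this is just one way to implement it:

```ts
interface NostrEvent { pubkey: string; kind: number; tags: string[][]; content: string; }

// blockLists maps a pubkey to the set of pubkeys it refuses mentions from.
function shouldAccept(ev: NostrEvent, blockLists: Map<string, Set<string>>): boolean {
  for (const tag of ev.tags) {
    if (tag[0] !== "p") continue;
    const mentioned = tag[1];
    if (blockLists.get(mentioned)?.has(ev.pubkey)) {
      return false; // mentioned user has blocked mentions from this author
    }
  }
  return true;
}
```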
I see this as a client failure to implement relay configuration / the outbox model. Clients experimented and went down a path of free and open relays with zero-conf relay configs. They thought they could rely on WoT and mute lists, and put off implementing outbox.

With the outbox model, you can set which relay to use as your INBOX and which as your OUTBOX. The INBOX is the ONLY place you read comments and reactions from, and you can set up a relay to do this like @Laeserin has already done. She has a relay that limits posting to only her follows (using listr) plus any other pubkeys she wants on there (set up via relay.tools). The only additional thing relays can do is add NIP-42 auth, so that reads could be restricted to that same set of pubkeys. This would enable tiers of protection from anything nostr throws at you.

The benefit of outbox vs. mute lists is the pre-emptive ability to control what you see. Seeing stuff is disturbing, and muting it after seeing it is not good enough on its own. And unfortunately, even when someone sets up a relay config in one client that works, if they jump to a client that doesn't use relay configs properly, they get a barrage of all the stuff they didn't want to see (like Primal).

Anyway, I gotta run, but I wanted to help describe what I see on my end, since I spend a lot of time on the relay side of things and I'm well familiar with comment bots (which are technically no different from humans following someone around and commenting on them).
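For reference, the inbox/outbox split is expressed via a NIP-65 relay list (kind 10002); the URLs here are placeholders:

```ts
// "read" relays are your INBOX (where others should send mentions/replies),
// "write" relays are your OUTBOX (where you publish).
const relayListEvent = {
  kind: 10002,
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    ["r", "wss://my-locked-down-relay.example", "read"],  // inbox, e.g. restricted to my follows
    ["r", "wss://my-public-relay.example", "write"],      // outbox
  ],
  content: "",
};

// A well-behaved client should fetch replies and reactions to you ONLY
// from your "read" relays, which is what makes the protection pre-emptive
// rather than mute-after-the-fact.
```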
Yes, the more variety of means a user has to control their OWN experience, the better. Being here doesn't mean every individual has the same shared experience.
Relay-based filtering. Only load comments, replies and mentions from relays that are deemed (by you) to be safe. That should be the default in all clients. What relays are safe? Relays that will require some form of friction for accepting events, and that will actively ban people that have been flagged with kind 1984.
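A sketch of such a ban policy based on kind-1984 reports (NIP-56); the threshold and the set of trusted reporters are whatever the relay operator chooses:

```ts
interface NostrEvent { kind: number; pubkey: string; tags: string[][]; }

const BAN_THRESHOLD = 3; // arbitrary

// Count reports against each pubkey, ignoring reporters the relay
// doesn't trust, and ban anyone at or past the threshold.
function bannedPubkeys(reports: NostrEvent[], trustedReporters: Set<string>): Set<string> {
  const counts = new Map<string, number>();
  for (const r of reports) {
    if (r.kind !== 1984) continue;
    if (!trustedReporters.has(r.pubkey)) continue;
    for (const tag of r.tags) {
      if (tag[0] === "p" && tag[1]) {
        counts.set(tag[1], (counts.get(tag[1]) ?? 0) + 1);
      }
    }
  }
  return new Set([...counts].filter(([, n]) => n >= BAN_THRESHOLD).map(([pk]) => pk));
}
```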
We're working on a moderation bot that will accept encrypted report notes; the bot can then review the content via AI and a human, and issue a report on that content. Basically, it's not always a good idea to be public about your report. We also clearly need the ability to vet reports, because often a report is not accurate or isn't enough. So I can say: sure, Bob reported Jane as a spammer, but I don't agree, or this moderation group looked at Jane's content and decided Bob's report didn't have merit. Others might agree with Bob's report, but the point is that we need a second level, and the ability to have some reviews.
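One way that second level could be expressed as an event, purely as a sketch (the review kind and tag layout are hypothetical, not a real NIP):

```ts
const REPORT_REVIEW_KIND = 4984; // hypothetical

// A moderation group endorses or rejects an existing kind-1984 report.
const review = {
  kind: REPORT_REVIEW_KIND,
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    ["e", "<id-of-bobs-kind-1984-report>"], // the report being reviewed
    ["p", "<janes-pubkey>"],                // the account the report targeted
    ["verdict", "rejected"],                // or "upheld"
  ],
  content: "Reviewed Jane's content; Bob's report didn't have merit.",
};

// Relays and clients that trust this moderation group's pubkey could
// then discount reports whose reviews say "rejected".
```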
I think that approach is valid, but I don't know if it's worth trying to arrive at the "best" judgement of whether someone should be banned or not. It's better to foster an ecosystem in which relays can ban freely according to whatever absurd criteria they decide, and users and clients can pick relays wisely according to their preferences (most clients are not ready for this as far as I know, but it wouldn't take much). On https://pyramid.fiatjaf.com/ my plan was to let the relay's hierarchy somehow police itself. If people higher in the tree don't like someone lower down, they can just ban that person, and even if they are not in the same subtree, their vote still counts more than someone lower. Or something like that. Of course I didn't implement any of that yet; I was even blocking kind 1984 by mistake until a minute ago.
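For what it's worth, the weighted-vote part could look something like this; none of it reflects how pyramid actually works, it's just one reading of "their vote counts more":

```ts
type Pubkey = string;

// Depth of a member in the invite tree; the root has no parent.
function depth(pk: Pubkey, parent: Map<Pubkey, Pubkey>): number {
  let d = 0;
  for (let cur = parent.get(pk); cur !== undefined; cur = parent.get(cur)) d++;
  return d;
}

// Weight each ban vote by 1 / (1 + depth): the root's vote counts 1,
// its direct invitees 0.5, and so on; ban when the total passes a threshold.
function banScore(votes: Pubkey[], parent: Map<Pubkey, Pubkey>): number {
  return votes.reduce((sum, voter) => sum + 1 / (1 + depth(voter, parent)), 0);
}
```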