 There are two items that can help based on the discussion a couple weeks ago:
1. If someone is harassing a user, it is not enough to mute that user, because the harasser is still in the replies. In some cases the harasser’s friends see the replies and pile on. Meanwhile, the person being harassed doesn’t know the harassment is continuing and is blindsided by the additional harassment coming their way. Freedom from harassment needs to extend to freedom from having someone sit in your replies and keep harassing you. The current model assumes Nostr is a level playing field that exists in a cultural vacuum, and that is not the case. As a result, those who experience harassment IRL also feel the brunt of it here. Direct harassment is different from saying whatever you want: it involves being in someone’s replies or mentioning them directly. People need the ability to say no to both.
2. The second issue women reported was being found by random jerks. This happens because some apps have aggregator feeds. Users need the option to opt out of these.
I also think there’s more user research needed to understand if other problems exist. 
 Do you see these types of changes being needed at the protocol layer, relay layer or can they be done at the client layer? 
 I think the first one is at the protocol layer. For the second, I’m guessing it’s something that would be tied to the npub, so updating that NIP to support this additional metadata could help.
 Not sure I agree on the harassment issue. The protocol layer is dedicated to providing a stable foundation where anyone can post anything, and I don't see the issue of harassment changing that, because whether a note or reply is harassment is a value judgement. It might not be the intention, and there's a spectrum that needs to be considered. The protocol layer can't really deal with that level of complexity. I think the functionality that you are looking for is in network logistics, where we could be given the option of seeing whether an npub is a close connection with someone else that we have already muted, and make it an easy single click to mute that npub as well.  That would work well at the client level, if the client has the type of network analytics that I'm thinking about.  That kind of development work is going to be coming, but right now I haven't seen anything.
And I think the second issue of "random jerks" is also going to be related to network analytics as well. We can see a list of Nostr Highlights in our feed.  Maybe a client could dig a bit and provide a list of "most muted npubs". If someone decides to be a complete jerk to anyone and everyone, they can post whatever they want as long as they are in a silo that protects the rest of the network from abuse. So put them on a list and give people the option to mute jerks sight unseen. 
 As @brugeman pointed out below, Nos has something like this. However, in user research multiple people have noted they don’t like having posts hidden that are outside their network. It forces users to do extra work, which is not really freedom.

If you haven’t tried it out, try Nos for a week and LMK what you think. 

One thing that would be helpful is to look at the entire user journey when thinking about freedom. We focus a lot on the initial post, but we don’t pay enough attention to the subsequent dialogue that ensues. Freedom within dialogue is very different from speaking into the public square. In the public square I can walk away from someone by unfollowing them. In a dialogue (a thread or via mentions) I cannot walk away. With mute I can put on headphones and hope the harasser’s buddies don’t show up. If they do, I have to keep adding additional sets of headphones. Then my friends have to step in, be subject to the harassment as well, or put on headphones themselves. Within the dialogue, the power is in the hands of the harassers if mute is the only option.


If Nostr adds the ability to limit replies and mentions, then the person being harassed has the freedom to walk away. 

The harasser can still speak in the public square; they simply can’t speak in a dialogue when the other person doesn’t want them there.
 I hear the problem. 

I don’t see a solution possible without a moderated community, or a private or pay to write relay. 

Maybe yall have a galaxy brain NIP for this 👀 
 I'm on Android and/or Web, but glad they have something close to what you are looking for. 
 Yeah, I was mistaken about the protocol layer as the place where mutes of replies and mentions should happen. Based on some conversations today, it makes the most sense for clients and relays to screen for these.

In terms of filtering out the random jerks: other protocols are experimenting with mute lists, similar to what you suggest above, and I think it’s a good option for Nostr as well. A user can subscribe to a mute list of their choice based on whatever criteria they have; it could be words/content or people. Another framing is feed curation.
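A subscribable mute list could be sketched on top of NIP-51-style lists (kind 10000 mute-list events whose "p" tags carry muted pubkeys). The kind number and tag layout here follow my reading of that NIP; treat the helper names as illustrative, not any client's actual API:

```typescript
// Minimal event shape for the sketch.
interface NostrEvent {
  pubkey: string;
  kind: number;
  tags: string[][];
  content: string;
}

// Extract muted pubkeys from a NIP-51-style mute list ("p" tags).
function mutedPubkeys(muteList: NostrEvent): Set<string> {
  return new Set(muteList.tags.filter((t) => t[0] === "p").map((t) => t[1]));
}

// Union of your own mute list and any curated lists you subscribe to.
function combinedMutes(lists: NostrEvent[]): Set<string> {
  const all = new Set<string>();
  for (const list of lists) {
    for (const pk of mutedPubkeys(list)) all.add(pk);
  }
  return all;
}

function isMuted(ev: NostrEvent, mutes: Set<string>): boolean {
  return mutes.has(ev.pubkey);
}
```

The point of the merge is that subscribing to a curator's list costs the user one decision (trust the curator) instead of one decision per jerk.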

The beauty of Nostr at its basic level is that the user is in control of their feed, rather than an algorithm. Sorting through harassment challenges makes that control truly possible.  
 Exactly. If I don't want to see a post, then I need to be the one to make that choice. And the reverse is also true. If I do want to see a post, I need to be the one in control. Giving algorithms or advertisements control of pushing and pulling is what I hate about all of the other centralized corporate social media networks. It's why we are here instead of there. Because nobody else has control. 
 @darashi 
@miljan @brugeman @Marko have yall considered a “do not index me” request for yalls aggregators? See Linda’s latter point.

I think Google search has a do-not-index flag for websites
 Primal's indexer respects deletion requests. As for “do not index me” directives, I don't think we have those on Nostr yet.  
 Searchnos also handles NIP-09 event deletion, and my deployment does not keep indexes for long periods of time (30 days with the current configuration).
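For context, a NIP-09-aware indexer can honor deletion requests roughly like this. The sketch assumes kind-5 deletion events whose "e" tags name the event ids to drop, honored only when the request comes from the deleted event's author; the `EventIndex` class is a hypothetical in-memory stand-in, not Searchnos code:

```typescript
interface NostrEvent {
  id: string;
  pubkey: string;
  kind: number;
  tags: string[][];
  created_at: number;
}

// In-memory stand-in for a search index keyed by event id.
class EventIndex {
  private events = new Map<string, NostrEvent>();

  add(ev: NostrEvent): void {
    if (ev.kind === 5) {
      // NIP-09 deletion request: don't index it, act on it.
      this.applyDeletion(ev);
      return;
    }
    this.events.set(ev.id, ev);
  }

  // Drop every referenced event, but only when the deletion request
  // comes from the author of the event being deleted.
  private applyDeletion(del: NostrEvent): void {
    for (const tag of del.tags) {
      if (tag[0] !== "e") continue;
      const target = this.events.get(tag[1]);
      if (target && target.pubkey === del.pubkey) {
        this.events.delete(tag[1]);
      }
    }
  }

  has(id: string): boolean {
    return this.events.has(id);
  }
}
```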
 Jerks' daily job is to find targets, so I'm not sure deindexing from aggregators would help much; you would then need to deindex from major public relays too, and only post your stuff on paid relays (paid to read, not only to write). Meaning, you shouldn't be on public Nostr if you're afraid of being found by someone determined to find you.

I think the only anti-harassment solution that could work on Nostr is client-side filtering, based on contact/mute lists, friends' reports, etc. Don't show replies from people you don't follow, or from people who were reported/muted many times by people you follow, or replies from public relays. I bet some of these policies are implemented on nos.social, but the issue is that everyone's using Damus/Amethyst/Primal, and those have nothing like that.
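A minimal sketch of that kind of client-side reply policy, assuming the client already knows the user's follow set and how many of those follows have muted each author. The `ReplyPolicy` shape, field names, and threshold are all illustrative, not any existing client's API:

```typescript
interface ReplyPolicy {
  follows: Set<string>;            // pubkeys the user follows
  muteCounts: Map<string, number>; // how many of those follows muted each pubkey
  muteThreshold: number;           // hide once this many follows muted the author
  hideNonFollows: boolean;         // strict mode: hide replies from non-follows
}

function shouldHideReply(authorPubkey: string, policy: ReplyPolicy): boolean {
  // Strict mode: anything outside the follow graph stays hidden.
  if (policy.hideNonFollows && !policy.follows.has(authorPubkey)) return true;
  // Otherwise hide authors that enough trusted people have already muted.
  const muted = policy.muteCounts.get(authorPubkey) ?? 0;
  return muted >= policy.muteThreshold;
}
```

The mute-count rule is what makes this "network logistics": the user never sees the jerk because people they trust already made the judgment call.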

The way I see it, we should have a separate pluggable layer/API/NIP for content post-filtering that can be plugged into any app: the app forms a feed (main/replies/notifs/anything) and then passes all the events from the feed to the filter, the filter returns various labels (spam/harassment/nsfw/impersonation/...), and the app covers the content of each labeled event and shows the labels above it. This way apps don't have to rebuild their feed-building logic; they just apply another layer above it, and users would specify the filtering API endpoint in the settings and get the filtering they want. Safe mode could be 'cover notes from users I don't follow until the filter returns its labels; uncover if no bad labels are returned', and a more reckless mode could be 'show notes first, only hide them if the filter returns some bad labels'.
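The pluggable post-filtering layer described above could look roughly like this. The label vocabulary, the filter-function shape, and both mode names are assumptions for the sketch, not an existing NIP (in a real client the filter would call the user-configured API endpoint):

```typescript
type Label = "spam" | "harassment" | "nsfw" | "impersonation";

// The pluggable piece: given event ids, return labels per id
// (empty array = checked and clean, missing id = not yet answered).
type FilterFn = (eventIds: string[]) => Map<string, Label[]>;

// The app builds its feed as usual, then asks the filter for labels.
function annotateFeed(eventIds: string[], filter: FilterFn): Map<string, Label[]> {
  return filter(eventIds);
}

// Safe mode: keep an event covered until the filter has answered,
// and uncover only when it comes back with no bad labels.
function coveredInSafeMode(id: string, labels: Map<string, Label[]>): boolean {
  const ls = labels.get(id);
  return ls === undefined || ls.length > 0;
}

// Reckless mode: show everything, hide only events with bad labels.
function hiddenInRecklessMode(id: string, labels: Map<string, Label[]>): boolean {
  const ls = labels.get(id);
  return ls !== undefined && ls.length > 0;
}
```

The design choice worth noting: because the filter only maps ids to labels, swapping filtering services is a settings change, and the feed-building code never changes.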

If nos or anyone is interested in experimenting with me in this area, let me know. 
 Amethyst has this, too. It used to be activated by default until people started forking because of this feature.  
 Sounds like some half decent logic 👍🏻