If we can’t make a social media ecosystem where people feel safe enough to share, then they’ll go somewhere else. It’s like a dark alley in a city where no one is doing public safety. You can walk down these back alleys, but it’s risky. If you’re strong and have guns you’re fine. Or if you’ve got a security guard. Or if you have a lot of social capital protecting you, most people won’t mess with you. 

But on nostr we see folks who don’t have that power or protection get attacked with death threats for posting a selfie, or doxxed for publishing an accurate content report. This kind of harassment silences speech. We need tools that let folks feel safe enough to share and build communities. Otherwise they’ll opt for the evil mall cop on corporate social media platforms, because at least there is some safety in that authoritarianism. 

Read the experiences of women on Nostr 

nostr:note1n2h9d5wxfdhrncyp66xvqnh3vagz4kcakh5wardx7j6xde8g3fgsgn640z 

nostr:note1e8gnytp9da5xdk06e5hxwufgahpc0jytnqat7lw03tatdunpth2q0aegzt  
 This is why we love ya, Rabble 🫡 
 🥳🥳🥳
nostr:nevent1qqs2kxdxeeer87wgpktkljjqcc544rax5wa33z6334sgx2wxkl32w6gprfmhxue69uh4g6r9gehhyetnwshxummnw3erztnrdaksygrkcud2uwjfruweamz8ewshug5umfq38g9mkmn2u9mk6ajru2w2lgpsgqqqqqqsg8wx9p 
 How do we proceed with this?

The only thing that comes to my mind is shared blocklists, with popular relays refusing posts from npubs that appear on those blocklists.

Do you have a proposed solution or steps we can take? 
I doubt popular relays would sign up to block notes from certain npubs, but a first step could be letting npubs express a preference not to be mentioned by certain other npubs.

I don't see why those types of lists wouldn't be welcomed by everyone. If someone is publicly willing to say "I don't want this npub to interact with me," the rest of the network should support that. 
 And yeah, Primal already has this in a way. You can still post messages to relays... but it's possible to subscribe to each other's mute lists.

So it's more a problem of organizing the community, and then having client support, to maybe eventually get to some kind of default "no assholes" list.

https://m.primal.net/HqUQ.png  
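To make the shared mute-list idea concrete, here is a minimal sketch of how a client might merge the public mute lists of accounts you follow into one blocklist. It assumes the NIP-51 kind-10000 mute-list format; the fetchEvents function is a placeholder for whatever relay query mechanism the client already has.

```typescript
// Minimal sketch: merge the public mute lists (kind 10000, NIP-51) of everyone
// you follow into one shared blocklist. `fetchEvents` is a placeholder for
// whatever relay query mechanism a client already has.
interface NostrEvent {
  kind: number;
  pubkey: string;
  created_at: number;
  tags: string[][];
  content: string;
}

type FetchEvents = (filter: { kinds: number[]; authors: string[] }) => Promise<NostrEvent[]>;

async function sharedMuteList(follows: string[], fetchEvents: FetchEvents): Promise<Set<string>> {
  const muted = new Set<string>();
  const muteLists = await fetchEvents({ kinds: [10000], authors: follows });
  for (const list of muteLists) {
    // Public mutes are stored as ["p", <pubkey>] tags on the kind-10000 event.
    for (const tag of list.tags) {
      if (tag[0] === "p" && tag[1]) muted.add(tag[1]);
    }
  }
  return muted; // hide or demote notes whose author is in this set
}
```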
Iris by @Martti Malmi implemented a configurable WoT hop filter: don't show me anyone/anything outside my WoT, outside my WoT + 1 hop, etc. 
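Roughly, a hop filter like that can be built by walking contact lists outward from your own key. A sketch, assuming a placeholder fetchContacts function that returns the "p"-tag pubkeys from someone's latest kind-3 contact list:

```typescript
// Sketch of a WoT hop filter: breadth-first search over contact lists (kind 3),
// keeping every pubkey within `maxHops` of `me`. `fetchContacts` is a
// placeholder that returns the "p"-tag pubkeys from someone's latest kind-3 event.
async function webOfTrust(
  me: string,
  maxHops: number,
  fetchContacts: (pubkey: string) => Promise<string[]>
): Promise<Set<string>> {
  const seen = new Set<string>([me]);
  let frontier = [me];
  for (let hop = 0; hop < maxHops; hop++) {
    const next: string[] = [];
    for (const pk of frontier) {
      for (const follow of await fetchContacts(pk)) {
        if (!seen.has(follow)) {
          seen.add(follow);
          next.push(follow);
        }
      }
    }
    frontier = next;
  }
  return seen; // "WoT + 1 hop" would be maxHops = 2 from your own key
}
```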
 With new npubs so quick to create, bad actors can easily keep operating as filters fail to keep up, no? 
Yes, but we could create filters for new users and use a web of trust to filter out these accounts. Eventually, with good LLMs, you could create troll accounts that do the work of being legit before they start their abuse, but for now we’re not yet facing those issues.  
Would it be possible to build a limiter like what Twitter has, where people you don't follow can't reply or otherwise engage with you?

If people who are harassed are given control to consent to social interactions as a starting point, that might be worth considering. 
 Yeah, that's basically what I'm proposing. 

At the protocol level it's possible to craft any type of note, but relays could check: if a note contains a reference to an npub that blocks mentions from you, they wouldn't accept or propagate your note.

They could also stop returning your notes if the request comes from a certain npub. 
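A minimal sketch of that relay-side check, assuming the relay already has some way to learn which npubs refuse mentions from whom (e.g. from a list event the user publishes; the data structure here is hypothetical):

```typescript
// Sketch of the relay-side check described above: refuse an incoming note if it
// mentions (p-tags) an npub that has said they don't want mentions from the
// author. How `blocksMentionsFrom` gets populated is left open; this only shows
// the gatekeeping step.
interface IncomingEvent {
  pubkey: string;    // author of the note
  tags: string[][];  // includes ["p", <mentioned pubkey>] tags
}

function shouldAccept(
  event: IncomingEvent,
  blocksMentionsFrom: Map<string, Set<string>> // mentioned pubkey -> authors they refuse
): boolean {
  for (const tag of event.tags) {
    if (tag[0] !== "p" || !tag[1]) continue;
    if (blocksMentionsFrom.get(tag[1])?.has(event.pubkey)) {
      return false; // don't accept or propagate this note
    }
  }
  return true;
}
```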
 Ah, I see. Thanks for the clarification. I'm not at all knowledgeable on how this protocol works in technical terms. 
I see this as a client failure to implement relay configuration/outbox model.  Clients experimented and went down a path of free and open relays, with zero-conf relay configs.  They thought they could rely on WoT and mute lists, and put off implementing outbox.  With the outbox model, you can set which relays to use as your INBOX and OUTBOX.  The INBOX is the ONLY place you read comments and reactions from, and you can set up a relay to do this like @Laeserin has already done.  She has a relay that limits posting to only her follows (using listr) + any other pubkeys she wants on there (set up via relay.tools).  The only additional thing relays can do is to add NIP-42 auth so that reads could be restricted to that same set of pubkeys.  This would enable tiers of protection from anything nostr throws at you.

The benefit of outbox vs. mute lists is the pre-emptive ability to control what you see.  Seeing stuff is disturbing, and muting it after seeing it is not good enough on its own.  And unfortunately, even when someone sets up a relay config in one client that works, if they jump to a client that doesn't use relay configs properly they get a barrage of all that stuff they didn't want to see (like Primal).

Anyway, I gotta run, but I wanted to help describe what I see on my end, since I spend a lot of time on the relay side of things and I am well familiar with comment bots (which are technically no different than humans following someone around and commenting on them). 
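A sketch of the inbox-relay policy described above, assuming an allowlist built from the owner's follows plus manually added pubkeys (this is an illustration, not the actual relay.tools or NIP-42 implementation):

```typescript
// Sketch of the inbox-relay policy described above: only accept writes from an
// allowlist built from the owner's follows plus extra pubkeys, and with NIP-42
// auth the same check can gate reads too.
interface InboxPolicy {
  allowed: Set<string>; // owner's follows + manually added pubkeys
}

function acceptWrite(policy: InboxPolicy, authorPubkey: string): boolean {
  return policy.allowed.has(authorPubkey);
}

function acceptRead(policy: InboxPolicy, authedPubkey: string | null): boolean {
  // With NIP-42 the relay knows who is asking; unauthenticated reads are refused.
  return authedPubkey !== null && policy.allowed.has(authedPubkey);
}
```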
 This is a pretty good idea. 
Yes, the more variety of means a user has to control their OWN experience, the better. Because being here doesn’t mean every individual has the same shared experience. 
 Thank you everyone for making this point 
 Relay-based filtering. Only load comments, replies and mentions from relays that are deemed (by you) to be safe.
That should be the default in all clients.

What relays are safe? Relays that will require some form of friction for accepting events, and that will actively ban people that have been flagged with kind 1984. 
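A sketch of how a relay might act on those kind-1984 flags, assuming it only counts reports from reporters it trusts and bans past a threshold (the trusted set and the threshold are relay policy choices, not defined by any NIP):

```typescript
// Sketch of a relay acting on NIP-56 reports (kind 1984): count how many
// distinct trusted reporters have flagged a pubkey and ban past a threshold.
interface ReportEvent {
  kind: 1984;
  pubkey: string;    // who filed the report
  tags: string[][];  // ["p", <reported pubkey>, <report type>] per NIP-56
}

function bannedPubkeys(
  reports: ReportEvent[],
  trustedReporters: Set<string>,
  threshold: number
): Set<string> {
  const reporters = new Map<string, Set<string>>(); // reported pubkey -> distinct reporters
  for (const report of reports) {
    if (!trustedReporters.has(report.pubkey)) continue;
    for (const tag of report.tags) {
      if (tag[0] !== "p" || !tag[1]) continue;
      if (!reporters.has(tag[1])) reporters.set(tag[1], new Set());
      reporters.get(tag[1])!.add(report.pubkey);
    }
  }
  const banned = new Set<string>();
  for (const [pubkey, who] of reporters) {
    if (who.size >= threshold) banned.add(pubkey);
  }
  return banned;
}
```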
We’re working on a moderation bot that will accept encrypted report notes; the bot can then review the content via AI and a human, and then issue a report on that content. Basically, it’s not always a good idea to be public about your report. Also, clearly we need the ability to vet reports, because often the report is not accurate or isn’t enough. So I can say: sure, Bob reported Jane as being a spammer, but I don’t agree, or this moderation group looked at Jane’s content and decided Bob’s report didn’t have merit. Sure, others might agree with Bob’s report, but the point is that we need a second level, and the ability to have some reviews.  
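A very rough sketch of that vetting flow, with hypothetical names and the encryption of the inbound submission left out: an AI pass can triage, a human decides, and only approved reports are published as public kind-1984 events.

```typescript
// Rough sketch of the review pipeline (hypothetical types; encryption of the
// inbound submission is elided): only human-approved reports become public.
interface ReportSubmission {
  reportedPubkey: string;
  reportedEventId?: string;
  reason: string; // e.g. "spam"
}

interface ReviewDecision {
  approved: boolean;
  note: string; // e.g. "reviewed Jane's content; Bob's report has no merit"
}

function toPublicReport(sub: ReportSubmission, decision: ReviewDecision) {
  if (!decision.approved) return null; // rejected reports are never published
  return {
    kind: 1984,
    content: decision.note,
    tags: [
      ["p", sub.reportedPubkey, sub.reason],
      ...(sub.reportedEventId ? [["e", sub.reportedEventId, sub.reason]] : []),
    ],
  };
}
```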
 I think that approach is valid, but I don't know if it's worth trying to arrive at the "best" judgement of whether someone should be banned or not. It's better to foster an ecosystem in which relays can ban freely according to whatever absurd criteria they decide and users and clients can pick relays wisely according to their preferences (most clients are not ready for this as far as I know, but it wouldn't take much).

On https://pyramid.fiatjaf.com/ my plan was to let the hierarchy of the relay somehow police itself. If people higher in the tree don't like someone lower down they can just ban that person -- and if they are not in the same subtree their vote still counts more than someone who is lower. Or something like that.

Of course I didn't implement any of that yet, I was even blocking kind 1984 by mistake until a minute ago. 
What happened? I was out of the loop for a couple of days. 
 roya deleted her account 
 What about an indicator that says "This profile has been blocked/muted by ## users on Nostr."  That way before someone chooses to follow a profile, we can see troublemakers before they can see us.

A troublemaker can post whatever they want, but if I don't want to see  it, I don't see it. 

 @des Nostr DOES allow users to be muted, blocked and reported. But it's not clear how effective that is, and having a clear indicator would help. 
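A sketch of how a client could compute that indicator from the public mute lists it has already fetched (purely illustrative; private, encrypted mutes would be invisible to this count):

```typescript
// Sketch of the "muted by ## users" indicator: given public mute-list events
// (kind 10000), count how many distinct users include a profile.
interface MuteListEvent {
  kind: 10000;
  pubkey: string;    // owner of the mute list
  tags: string[][];  // ["p", <muted pubkey>] entries
}

function mutedByCount(target: string, muteLists: MuteListEvent[]): number {
  const muters = new Set<string>();
  for (const list of muteLists) {
    if (list.tags.some((tag) => tag[0] === "p" && tag[1] === target)) {
      muters.add(list.pubkey);
    }
  }
  return muters.size;
}
```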
I feel like this would not fly 😂 I would of course be totally fine with it, but then you have to think about people abusing the blocking function. Then the numbers wouldn’t truly represent the user and their content (I’m sure I’d have a lot of blocks, but if you know me, I’ve only been kind on here). I think it’s just a matter of considering options and seeing it as an issue, like the poster mentioned, versus completely ignoring it. I’m glad a lot of different people are talking about it today 🥹 
 And a lot of that is thanks to you putting your perspective out there, which is at the core of real social media. We can't control what other people do or say, but we can feed positive or negative energy with our reaction.  Like  @Toshi mentioned, humanity is humanity and that includes both positive and negative.

We just want to make sure that we are feeding the positive. 
 Yes!!! 🙌🏻 
  @rabble has anyone done work to create a 'social circles' visualization based on nodes and follows on the Nostr network yet?  There's a lot of opportunity there for our social network to be open and clear about the "six degrees of separation" between profiles, and I think catching it early enough so that we could see those circles form and grow would be amazing to watch. 
The best way is probably for client devs and/or others to create a fake woman's profile to experience firsthand the problems they are facing.  
 Or recruit women to work on clients and nostr services. A more diverse team building the apps will mean that the app serves the needs of more kinds of people. Talk to users! Talk to people who’ve decided not to use your apps. Ask open ended questions. 

Read this: The Mom Test: How to Talk to Customers and Learn If Your Business is a Good Idea when Everyone is Lying to You 
User-created moderation lists with the ability to subscribe; each person would choose their own moderators... that is my solution, and I don't know how difficult it is to implement that here 👀 
The goal is to be free to post. Carefree. Not worrying about those who won’t make you feel safe! At some point we have to realize the naysayers don’t matter 
orrr we could realize that there’s no need to make someone feel unsafe. communication should matter - it’s important to hear the naysayers. after all, we all have the ability to grow together, regardless of who was “right” or “wrong” first. 
feeling free to post is needed, yes, but so is building spaces in which everyone can thrive and be challenged without being pushed out. 
get it? 
i agree. that’s well put 
 😚 
Hello 👋, how are you doing? I'm a professional Bitcoin miner. Our team here helps educate people all around the world to help them start making profit and income through Bitcoin mining.
We work closely with them to get them learning and earning as quickly as possible. I can help you mine Bitcoin and make an awesome profit. Just let me know if you're interested 
I am interested in mining ⛏ very much! Send sats to my fund, I shall buy my own miner to mine in Kenya 🇰🇪 :) https://geyser.fund/project/basicswithasics ⚡️ 🫂 
I hope you know how it works  
I don’t need to! I shall learn on the job! 
 Then join my mining pool https://braiins.com/pool 
First the weirdos build something cool, then some more weirdos come and make it cooler, then tourists hear of the cool place and corpos project economic potential, and they both demand the sharp edges be rounded and the weirdos be muzzled for tourist safety and brand safety.

This is Cultural Gentrification and I have seen it happen countless times before.

nostr:nevent1qqs2kxdxeeer87wgpktkljjqcc544rax5wa33z6334sgx2wxkl32w6gpp4mhxue69uhkummn9ekx7mqzypmvwx4w8fy378v7a3ruhgt7y2wd5sgn5zamde4wzamdwep798905qcyqqqqqqgzxdlfe 
 💯 This will become our hardest challenge to solve while still preserving free speech.
nostr:note1p044jc5vesthx282rsf7cvne6xa82t3z273pezca7pjnca54tv4qsc5he2 
Interesting; so far this is one of the few notes I have read in these threads trying to solve the issue.  A mute list of previous offenders could be useful to the women of Nostr looking for a safer experience. Opt-in, of course. 
 > ... where people feel safe ...

This is the problem.  You can make people "feel" safe but you can't actually make them safe.  The 'safety' provided by centralized platforms is an illusion.  I can go on any platform, sign up and post horrible things.  Sure, my account will get deleted, but it will be too late.  Fortunately, the danger is mostly an illusion too.  No one can physically harm you over the internet.  Sticks and stones, etc.  If your mental state is really so fragile that you can't handle any criticism, you need to seek help.

Doxxing is a real problem.  The only protection for that is taking privacy and anonymity seriously and YOU are the only person that can realistically do that.  No rule, law or policeman can prevent you from sharing your personal information.  It is completely brain dead to use your real name on public forums (including Facebook, Twitter and Nostr) and still expect to retain privacy.  When you put your information out there, you become a public figure and lose both practical and legal privacy protections.  I always hear people saying things like "oh it's all out there anyway" or "the government already knows everything."  First, those are bullshit.  Second, you're not improving matters by being lazy.  There are real steps you can take that will make you much safer.  If you would rather make up excuses than learn, that's your problem.


"Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety” 
Maybe devs could make a #sparta button? One that mutes and blocks both ways… like being kicked into the void - that would give anyone feeling under threat a way of feeling a bit safer… if they came back with another npub, just do it again… just my two sats worth 
There is something to the Bluesky feedback about nostr being more akin to 4chan (with b!tcoin chat lol). 

But e.g. Coracle's web of trust slider and Amethyst's security filters solve a lot of this if you don't want that experience. No idea what Damus offers, but I'm mostly on these two clients and happy with them.

I rarely ever look at the global feed (mostly because it's just nonsense and even more AI b!tcoin 'art' and slogans - so I don't even use Amethyst's filters). It's like looking at your spam folder; it's not even as entertaining as 4chan.

nostr:nevent1qqs2kxdxeeer87wgpktkljjqcc544rax5wa33z6334sgx2wxkl32w6gppemhxue69uhkummn9ekx7mp0qgs8d3c64cayj8canmky0jap0c3fekjpzwsthdhx4cthd4my8c5u47srqsqqqqqpe43aj9 
It would be great to ban posting threats or sending messages containing threats by coding an algorithm that recognizes the commonly used words for that. 
 Too easily gamed.

733T-speak is enough to defeat a regex filter.

"Shakespearean" insults and threats get past the smaller LLMs, too.

Any LLM smart enough to defeat a level-two troll is not something a client (or even most relays) can afford to run.

Actively shared mute lists work for email. 
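To illustrate why wordlist filters are so easy to game, here is a toy normalizer that folds common leetspeak substitutions before matching. It catches the obvious cases and is still trivially defeated by spacing, homoglyphs, or creative phrasing.

```typescript
// Toy illustration: fold common leetspeak substitutions before keyword matching.
// This catches obvious cases but is still easily defeated.
const LEET: Record<string, string> = {
  "0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t", "@": "a", "$": "s",
};

function normalize(text: string): string {
  return text.toLowerCase().split("").map((c) => LEET[c] ?? c).join("");
}

function containsBlockedWord(text: string, blocked: string[]): boolean {
  const normalized = normalize(text);
  return blocked.some((word) => normalized.includes(word));
}

// containsBlockedWord("you 1d10t", ["idiot"]) -> true
// containsBlockedWord("you 1 d 1 0 t", ["idiot"]) -> false: already defeated
```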
 I will talk to the CEO and we will get this fixed right away! We don’t want anyone to feel unsafe!!!! 🥳 
 It is sad to see that some women on Nostr are getting harassed. I know there are no perfect solutions yet, but I appreciate those who are thinking it out and working on it. Sadly, there are always going to be assholes. The Mute button works for most of them, but when it comes to serial harassment, these people can spin up new accounts easily and continue to harass. 

Good idea of improving ways to curate and share Mute Lists.
nostr:note1p044jc5vesthx282rsf7cvne6xa82t3z273pezca7pjnca54tv4qsc5he2 


nostr:note14vv6dnnjx0uusrvhdl9yp33ft286dgamrz94rrtqsv5uddlz5a5suv2klc 
 
I’ve gone through the replies to this thread and muted the obvious offenders, though I’m wishing for some way to ensure my presence here benefits them in no way, and that my computation is not used to convey their toxic bits. I liked that property/idea of Scuttlebutt, even though it was maybe more of an idea than it was effective. 
Well said, Rabble. It's hard to find a working solution though. We're never going to have one that makes everyone happy, but I do have faith that we can have one that makes the masses happy. We just need to attack this with the understanding that no system is perfect, but we're looking for the one that checks the most boxes and can give us the most user success. 
 aim low!

LFG 
 bro are you the leader of nostr? it is impossible to make "the masses" happy.  
 Yes. Fiatjaf gave me the master key.

You can please the majority, but never please everyone. We should build tools for the masses, but also not completely alienate the small groups of users either. It's a tough job, but as I said, I have faith. We have the smartest people working to build us a better future. 
Name one example of a central authority pleasing a majority at any point in history that didn't end in genocide or total destruction. I'll wait.  
 You seem to be misunderstanding what I said. That's okay. We'll build tools for you too. 
 Love that complete philosophical dodge and superiority complex. So grateful that we have people like you taking care of us and knowing what is best for everyone <3 
 🫂 
You are beginning to see what we went through on the Fediverse side with the censorship-happy policing of speech. Nostr clients have the mute button... that's all you need 
 I agree. I am in favor of more tools to help with this.  I want to support NIP-32 and NIP-56.  Does anybody have a reference to a vocabulary that people are using? 
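For reference, a sketch of the two event shapes in question: a NIP-56 report (kind 1984, with the report type carried on the "p"/"e" tag) and a NIP-32 label (kind 1985, with an "L" namespace tag and an "l" label tag). The namespace and label values below are made-up examples, not an established vocabulary.

```typescript
// Sketch of a NIP-56 report and a NIP-32 label; placeholder values throughout.
const report = {
  kind: 1984,
  content: "harassing replies under her thread",
  tags: [
    ["p", "<reported pubkey>", "other"],      // NIP-56 types include spam, illegal, impersonation, other
    ["e", "<reported event id>", "other"],
  ],
};

const label = {
  kind: 1985,
  content: "",
  tags: [
    ["L", "social.example.moderation"],                 // hypothetical label namespace
    ["l", "harassment", "social.example.moderation"],   // hypothetical label value
    ["p", "<labeled pubkey>"],
  ],
};
```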
 if you want a safe space then maybe try Facebook? 
 @Rabble what is the problem?
Trolls or people not agreeing with someone's post???

I have only seen maybe four social-justice-warrior trolls and a couple of troll-trolls.  But definitely not an issue with nostr.

Also the example you gave is heavily biased and shows nothing of the problem.  The "I am a marginalized woman and you don't understand me" doesn't work.  We are all marginalized these days and everybody is a unique individual.  This means nothing to me. Please give examples of the exact problem. 
 Nostr is like putting things on a telephone pole on a busy street corner. You would never do this if you wanted to be “safe”. 

How nostr works is not conducive to safety. Users are second-class and relays are first-class.

We should look to other protocols for safe and private communication where users have control of their data. Something like Urbit or maybe the next iteration of SSB https://www.manyver.se/blog/2024-03-05 
Nostr is anonymous unless users doxx themselves.  Snowflakes are mistaking “unsafe” for “offended.” 
 What happened with the idea of community relays? Once you have a closed community it becomes really hard to harass. 
Is there a 100% safe place online? Instead of trying to build something “perfect” for people to feel safe posting, self-protection and awareness of the potential for harassment on the internet are more essential, since you cannot control who has access to online content. 
 how do we provide a safe nostr experience? 
I object to the notion of “anyone doing public safety.” We all should be part of public safety. You cannot buy or employ it. If each individual stepped up, there’d be no dark alley. Like in the little remote village where everyone knew each other. In functionally proxied social groups the concept of “public safety” is not needed. We should rather work on the proxy mechanisms. 
 I've long been a hater of Twitter's "flat" model, where everything comes out of the same firehose. It means it's impossible to moderate communities like you can on forums or on sites like reddit. I see Twitter-clone nostr falling into this same trap. Unmoderated internet fucking sucks.

This comic is from 2004ish (pre modern social media) and is still just as true today, if not more so now that we're on impossible-to-moderate platforms.


https://m.primal.net/HqaJ.jpg
 
Nah, that's why I like Twitter more than Reddit.
We can have clients that focus on a Reddit/Discord-like experience. 
Honestly, I prefer chat over both. People are usually more cordial with each other since there's not an audience to talk to. 
I struggle to understand how the mute button doesn't solve this. In my experience, the mute button seems to be incredibly effective in eliminating noise. I would say it's way more effective than my Twitter mute experience.  
 Maybe you should leave. 

Honest question. 

Risky freedom is the only freedom. 

There is no safe freedom. Never has been. Never will be. 

Personally I hope you stay! 
Here's another:

nostr:nevent1qqs2kxdxeeer87wgpktkljjqcc544rax5wa33z6334sgx2wxkl32w6gpgdmhxw309a3xjarrda5kuu3kv3jn2mrtweurgarswajx67njv3nxgurvvy6hx7tpxfskvamsvdsky6n4wqe8surfx4j82mrzv9jzummwd9hkuq3qwmr34t36fy03m8hvgl96zl3znndyzyaqhwmwdtshwmtkg03fetaqxpqqqqqqzy2snmw