 I have yet to see any civil discourse happening in response to a report on Nostr. It’s a nice idea, but in practice I am seeing only outrage and division. Most of the people who harass, bully, and name-call on the internet are creatures of opportunity. Sure, they could go look at the reports in another application, but they won’t, and they aren’t; they’re seeing them in Amethyst. When they receive a notification that they were reported, they get angry and take it out on others. We’ve seen several people get harassed for reporting things over the past couple of months (under their own accounts, not through a bot). I can share some pretty nasty links in a DM if you want to see for yourself.

Eventually on Nos we’d like to have strong enough moderation and web of trust features to prevent this. In the meantime I just want you to be aware of the ick being generated by this feature in Amethyst. Again, I’m not arguing that the data shouldn’t be public; I’m saying the way it’s being presented is not a net positive for Nostr (in my experience).
 What's the share of all reports by the bot that generated this reaction? Because of course no one is going to reply happily that the bot reported them. What you are seeing is self-selected to be argumentative, to say the least. The vast majority of users won't care about your bot's notification.

I just loaded some of your bot's reports. All of the ones I see are related to language slurs. I don't think they should be reports. They should be content ratings in a kind 1985 label event, not kind 1984 reports. I don't understand why you are forcing the issue.

For instance, one of the reports happened simply because the user said "Sick fuck". That is a terrible way to behave online, but it's not a reportable thing. What are relays going to do about this? Delete everyone that used `fuck` in their posts? That will wipe most of Nostr out. 

If you used labels, we could have content ratings that avoid showing these slurs when kids are using the app. But adults should be able to see them just fine.
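
For clarity, this is roughly what the difference looks like on the wire. These are just sketches of unsigned event templates following NIP-32 and NIP-56; the ontology namespace, ids, and pubkeys are placeholders, not what any bot actually publishes.

```typescript
// Sketch only: unsigned event templates with placeholder ids/pubkeys and a
// made-up "social.example.ontology" namespace.

// Content rating as a NIP-32 label (kind 1985): clients can use it to gate
// display (e.g. hide behind a content warning for kids) without implying
// the note should be removed.
const contentRating = {
  kind: 1985,
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    ["L", "social.example.ontology"],               // label namespace
    ["l", "profanity", "social.example.ontology"],  // the rating itself
    ["e", "<id of the rated note>"],
    ["p", "<pubkey of its author>"],
  ],
  content: "",
};

// Report per NIP-56 (kind 1984): the report type rides on the e/p tags.
const report = {
  kind: 1984,
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    ["e", "<id of the reported note>", "profanity"],
    ["p", "<pubkey of its author>"],
  ],
  content: "optional human-readable reason",
};
```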

Was this "Other" category created just to find a way to use labels in the 1984 event? If so, I am starting to think that was a mistake.  
 In fact, I am confused why you are even calling the bot a Reportinator. It should be a Content Rater.  
 My goal was to bring to your attention how real humans are being harassed when they make totally justified reports, and how Amethyst is part of the pipeline. I probably shouldn’t have brought up the bot in the first place; it was just an easy example, because I don’t want to dox/elevate any of the nastier stuff. Anyway, I think I’ve made that point.

I do want to dive into the details of how the events are structured and hear your thoughts on that, but I’ll put it in a separate message. 
 We are actually already in the process of changing the ontology and renaming it. It was thrown out there as a provocative experiment and POC. We’ve even talked about making a dumber bot that reports random people for random things, because if Nostr users (and their clients) can’t handle a pluralistic set of rules for what is objectionable, then none of this is really going to work. (We probably won’t actually do this, but somebody should.)

> “What are relays going to do about this? Delete everyone that used fuck in their posts?”

This is not how we are viewing reports at Nos at all. Reports are opinionated labels for objectionable content. They do not carry with them the implication that the content should be deleted from a relay. This is where the word “report” deviates from what it means on other big social platforms, and we’re talking about removing the word from our UI. Maybe we shouldn’t use kind 1984. But it exists, and it’s what other apps (like Amethyst) are using, so it made the most sense at the time.

In our ideal world, reports are published by all kinds of people: regular users, relay operators, bots, professional moderators. And they are consumed differently by different people. Some folks might want a client that only listens to reports from a specific list of moderators they choose. Some folks might want notes reported by those moderators to be hidden completely, or just put behind a content warning, or hidden only if 2 or more moderators agree. Relay owners might filter reports down to just the things that are illegal in their jurisdiction and delete events matching those criteria. Some users will want to ignore reports altogether and browse the network as if they don’t exist. This is how we create a place where any type of speech can find a home, and everyone can follow the law and have a good time: each user has full control over what they see, and each agent (user, dev, relay operator) acts according to their own convictions.
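
To make one of those policies concrete, here is a minimal client-side sketch (hypothetical helper names and shapes, not Nos code): hide a note only when enough moderators the user trusts have reported it.

```typescript
// Sketch of one possible consumption policy for kind 1984 reports.
// `reports` would come from an ordinary relay subscription; the names and
// shapes here are hypothetical, not Nos's implementation.

interface Report {
  pubkey: string;          // who published the report
  reportedEventId: string; // taken from the report's "e" tag
}

// Hide a note only if at least `threshold` trusted moderators reported it.
function shouldHide(
  eventId: string,
  reports: Report[],
  trustedModerators: Set<string>,
  threshold = 2,
): boolean {
  const reporters = new Set(
    reports
      .filter((r) => r.reportedEventId === eventId)
      .filter((r) => trustedModerators.has(r.pubkey))
      .map((r) => r.pubkey),
  );
  return reporters.size >= threshold;
}
```

A relay operator could run the same data through a completely different policy (e.g. only acting on report types that map to something illegal locally), and a user who wants no moderation at all simply never consults the reports.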

Currently we are using 1984 events because they carry with them the semantic notion that “this content is objectionable to someone, and if you care you can review it”. We attach NIP-32 labels to them because that allows us to be more specific and add important categories (like harassment) that NIP-56 doesn’t cover. (Our current ontology is proving to be too specific, at least the way we are using it; we are working to simplify it.) We could just publish label events by themselves and it would work mostly the same way.
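
Roughly, that means a normal NIP-56 report with NIP-32 label tags added on top. The sketch below uses a placeholder namespace and ids, not our actual ontology:

```typescript
// Sketch of a kind 1984 report carrying NIP-32 label tags: the 1984 kind keeps
// the "someone flagged this, review it if you care" semantics, while the
// L/l tags add a finer-grained category (harassment) that NIP-56's fixed
// report types don't include. Namespace and ids are placeholders.
const labeledReport = {
  kind: 1984,
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    ["e", "<id of the reported note>", "other"],      // NIP-56 report type
    ["p", "<pubkey of its author>"],
    ["L", "social.example.ontology"],                 // NIP-32 namespace
    ["l", "harassment", "social.example.ontology"],   // NIP-32 label
  ],
  content: "",
};
```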

@Vitor Pamplona do you share this overall vision? If so what event structure do you think fits best?