 The goal of Reportinator is to provide a service that people can opt in to using for content warnings. It’s not perfect, and we’re not telling anyone they have to use it. Don’t like it? Mute it. Don’t host its events on your relays. 

It doesn’t always get it right, but we’re tweaking it. We’re also working on adding an easier way for people to review and edit or delete reports. 

 Our goal is that there should be many different Reportinator-like services with different takes. The Thank God For Nostr folks can do one consistent with Christian values. The sex workers can do one from the perspective of embracing sexual content while helping protect those same creators from harassment. A bitcoiner might make one that labels all non-bitcoin crypto content as “shitcoin propaganda”. 

 Relays can then decide what content and users they want to host; having content labeled makes that process a lot easier. Most relays already have algorithms to decide what content they’ll host and what they’ll reject. This just gives them better tools. If you don’t like the policies of existing relays, then run your own!  
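As a rough illustration of that relay-side decision, here is a minimal TypeScript sketch of a write policy that consults published kind-1984 report events (NIP-56) from labelers the operator chooses to trust. The trusted-pubkey set, the threshold, and the event shape used here are illustrative assumptions, not Reportinator's or any relay's actual API.

```typescript
// Hypothetical sketch: a relay-side write policy that consults published
// kind-1984 report events (NIP-56) from labelers the operator trusts.
// Trusted pubkeys, the threshold, and the Event shape are assumptions.

interface Event {
  id: string;
  pubkey: string;
  kind: number;
  tags: string[][];
  content: string;
}

// Labeler pubkeys this relay operator has chosen to trust (placeholder value).
const TRUSTED_LABELERS = new Set<string>(["<labeler-pubkey-hex>"]);

// Reports previously seen by this relay, indexed by the reported pubkey.
const reportsByAuthor = new Map<string, Event[]>();

function rememberReport(report: Event): void {
  if (report.kind !== 1984 || !TRUSTED_LABELERS.has(report.pubkey)) return;
  // NIP-56 reports tag the reported pubkey with a "p" tag.
  const reported = report.tags.find((t) => t[0] === "p")?.[1];
  if (!reported) return;
  const list = reportsByAuthor.get(reported) ?? [];
  list.push(report);
  reportsByAuthor.set(reported, list);
}

// Accept a note unless trusted labelers have reported its author more than
// `threshold` times. A real relay would apply far more nuanced policy.
function shouldAccept(event: Event, threshold = 3): boolean {
  const reports = reportsByAuthor.get(event.pubkey) ?? [];
  return reports.length <= threshold;
}
```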
 Could we see which relays are accepting @Reportinator reports, and not showing the notes that are reported? 
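One hedged way to answer the first half of this question is to ask each relay directly with a plain NIP-01 REQ for kind-1984 reports authored by the bot; whether a reported note is then hidden is usually a client-side choice and can't be detected this way. The relay URLs and pubkey below are placeholders, and a runtime with a global WebSocket is assumed.

```typescript
// Sketch: count how many of Reportinator's kind-1984 reports each relay
// serves, by sending a NIP-01 REQ. URLs and the pubkey are placeholders.

const RELAYS = ["wss://relay.example.com", "wss://other.example.net"];
const REPORTINATOR_PUBKEY_HEX = "<reportinator-pubkey-hex>";

function countReports(relayUrl: string): Promise<number> {
  return new Promise((resolve) => {
    const ws = new WebSocket(relayUrl); // assumes a runtime with a global WebSocket
    let count = 0;
    ws.onopen = () => {
      // Ask for report events (kind 1984, NIP-56) authored by the labeler.
      ws.send(
        JSON.stringify([
          "REQ",
          "reports",
          { kinds: [1984], authors: [REPORTINATOR_PUBKEY_HEX], limit: 500 },
        ])
      );
    };
    ws.onmessage = (msg) => {
      const [type] = JSON.parse(msg.data.toString());
      if (type === "EVENT") count += 1;
      if (type === "EOSE") { // end of stored events
        ws.close();
        resolve(count);
      }
    };
    ws.onerror = () => resolve(0);
  });
}

for (const url of RELAYS) {
  countReports(url).then((n) => console.log(`${url}: ${n} reports`));
}
```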
 I didn't opt in to being followed around and having every note reported or my language evaluated. "If you don't like it, mute it" could also be said about spam or harassment, and I can say the same about your intentions: don't like my grammar, don't follow me, mute me.

Your approach to your goals is dystopian: everyone will get reported by N random AI bots. Even if none of the people interested are following me or interacting with me, let's just keep everyone under watch and surveillance and label them. 

That type of curation should be done by people themselves and by WoT. Like you said, if I don't like the language of someone I don't follow, I mute them, and probably others in their circle. If you want restricted content, do the opposite: allow the ones that are approved in your WoT, or label notes on the positive side with an "OK". 
Want a Christian relay? Run your own relay, and label people on your relay with your rules if you want. In that case I really can opt in or opt out. 

Your bot, and your vision of a future with N npubs following my every note, is not very different from harassment. 
 
 Ronin, your view on this is exactly correct.  Thank you for voicing it. 
 I’m not sure I see the actual difference between: 

1. An account that consumes the global feed, reports on content quality, and can be followed by others for recommendations. 

2. An account (already followed) that mutes unwanted content and accounts, and whose followers also mute said content and accounts. 

It’s true that WoT tools can be improved to provide better implementations of 2. It’s also true that 1 can fit nicely into WoT-based filtering. 

AI bots are a red herring. Both 1 and 2 can be accomplished effectively (hopefully with transparency) by bots OR by human “influencers”. Both are respectable options with their own strengths and weaknesses. 

Dystopian these are not. With freedom to choose and a free market of choices … WoT-based filtering SHOULD be available in as many forms as can be imagined.  

WoT needs better tools … not fewer choices. 
 The problem is just one: you can implement your goal and vision with the minimum scope required to achieve it, or, like in this case, make the scope every note from everyone, which is not a good experience for everyone. I'm not the first one complaining; there probably aren't more only because many clients don't show it. 

If you want content moderation for some specific topics or policies, you don't need to target everyone all the time; you can reduce your scope to threads that include people the bot's subscribers follow, because the subscribers don't care about notes that aren't even in their feeds.

A person reporting will not report every note from everyone en masse like the bot does. It's a poor and selfish implementation decision made without considering the targets. Reportinator has 15 followers who have nothing to do with me, yet all my notes can be flagged for everyone. 

It's like saying: I want my neighbourhood to be safe and to know which neighbourhoods are safe, so let's put everyone under surveillance and report everything they do wrong. Don't want it? Just don't look at the camera. You will still be reported, but if you don't look at it you won't see the cam and you'll feel better.  
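A minimal sketch of the narrower scope proposed in this post, assuming the bot could read its subscribers' follow lists (kind-3 contact lists) and the participants of each thread; both are stubbed here with placeholder data rather than real relay queries.

```typescript
// Sketch of the narrower scope proposed above: only classify a note when it
// touches a bot subscriber's feed. Follow lists and thread participants are
// stubbed; a real bot would read kind-3 contact lists and thread tags.

interface Note {
  id: string;
  pubkey: string;               // author of the note
  threadParticipants: string[]; // pubkeys replied to / mentioned in the thread
}

// pubkeys each subscriber follows (placeholder data)
const subscriberFollows: Map<string, Set<string>> = new Map();

function isInSomeSubscriberScope(note: Note): boolean {
  for (const follows of subscriberFollows.values()) {
    if (follows.has(note.pubkey)) return true;
    if (note.threadParticipants.some((p) => follows.has(p))) return true;
  }
  return false;
}

function maybeClassify(note: Note): void {
  if (!isInSomeSubscriberScope(note)) return; // skip notes outside every subscriber's feed
  // ...run the content classifier and publish a label only for in-scope notes...
}
```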
 If you DON’T want your notes to be PUBLICLY viewable and filterable by ANYBODY, you have ONLY two options. 

Either ENCRYPT all your notes (nip44) and post to private groups on paid relays OR don’t use Nostr. 

Data on Nostr is by DEFAULT open for ANYONE to do anything they like. This is Nostr’s SUPERPOWER, and what makes (public data on) Nostr so censorship resistant. 

NO, a bot that scrapes ALL content is NOT akin to “big brother” surveillance, simply because it has no special access to (or power to restrict access to) PUBLIC data.

I think you need to shift your paradigm. Problem solving in a decentralized space is actually more challenging …  
 If I create a bot that spins up a new npub to reply to your every note with harassment and improper content, and even reports your npub, is that fine? I mean, it's a decentralized open network, so it's fine, right? Or maybe don't use Nostr. Everything is possible, but there are good and bad ways to achieve something. This is a community, and you can be a good or a bad actor for the whole experience on Nostr. If "suck it up, I do what I want" is your view, fine. It's not mine.  
 Glad this is staying a civil but spirited debate because it's an interesting philosophical argument.

On one hand, people shouldn't expect privacy in a public space; that is true both online and offline. If, however, in physical space I followed you around 24/7 and provided commentary on every comment you made, you would want to punch me in the face.

For a moment, let's set aside the technical responses such as "encrypt your messages" or "run your own private relay." That kinda defeats the purpose of participating in a public network. 

So, is the act of someone programmatically categorizing your content a form of harassment? Would providing an opt-out mechanism satisfy both parties? 
 Harassment is not desirable. While the definition of “unwanted content” is up to each individual, everybody should have accessible tools to curate their own best content experience. Indeed … one person’s WoT filter may in fact be another person’s spam bot, and that’s OK. 

Nostr needs better WoT tools, not fewer. 

A scalable WoT solution for Nostr will inevitably involve some standard for users to choose their own “is trusted” accounts and content filters across clients. This is an interesting and active space for discussion… 

nostr:note1m69f00rzyqrgwlrcsx56dxr8572rj4psl6zqr0cpqwvjl83mfyrsugevgv 
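As a hedged sketch of the kind of client-side WoT filtering discussed above: honor a report only if the reporter sits inside the user's own trust set (someone they follow, or someone followed by someone they follow). The data structures and the two-hop rule are illustrative assumptions, not an existing standard.

```typescript
// Sketch of client-side WoT filtering: a report only affects what I see if
// the reporter is inside my trust set. Shapes and the two-hop rule are
// illustrative assumptions.

interface Report {
  reporterPubkey: string;
  reportedEventId: string;
  reason: string;
}

// Trust set = accounts I follow plus accounts they follow (two hops).
function buildTrustSet(
  myFollows: Set<string>,
  followsOf: Map<string, Set<string>>
): Set<string> {
  const trusted = new Set(myFollows);
  for (const followed of myFollows) {
    for (const second of followsOf.get(followed) ?? []) trusted.add(second);
  }
  return trusted;
}

// Decide whether to show a content warning on an event, honoring only
// reports whose authors are inside my trust set.
function shouldWarn(eventId: string, reports: Report[], trusted: Set<string>): boolean {
  return reports.some(
    (r) => r.reportedEventId === eventId && trusted.has(r.reporterPubkey)
  );
}
```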
 My opinion is that, unfortunately, anonymity and a complete lack of censorship always result in undesirable content persisting. A person can go uncensored if there is forced personal accountability for content, subject to community dissent and law enforcement. If there is anonymity, there must be centralized censorship to moderate content for the platform to remain civilized. 
 Absolutely. I would even accept being reported by the bot in a thread where a subscriber is engaging, or even where they aren't engaging but I replied to someone they follow, so I'm in their feed. The way I see it, the bot is just doing what the user would probably do anyway, and the scope is reduced. But currently I can be reported forever, at any time. A user would not go out of their way to report every note from everyone, including people outside their broad circle.  
 @Ronin I appreciate your concerns, and what Reportinator is right now is a flawed experiment. Sometimes it helps; sometimes it gets things wrong. Let’s talk. Also, I think there are some straight-up bugs where it reports a person when it should be reporting a post, or where it flags people talking about being harassed as harassment. I also think this is about content labeling so users can be empowered to shape their own experience; the name “report” comes from app-store requirements and is causing a lot of confusion. This can be better.  
 Sure, no problem. Sent you a DM.  
 "Who says we have to fit into neat little categories? Why not embrace the chaos and complexity of both options? Let's celebrate diversity in filtering methods and explore all the possibilities. WoT tools should be as diverse as the people using them. Let's keep pushing for innovation and choice!" 
 horrible 
 No, the goal of reportinator is to censor people via rating their speech and to create a twitter like experience from a ridiculous rabid left nut PoV.  It's disgusting and wrong.  Wasn't needed to begin with and isn't needed with any "tweaks" at all.  It should be simply removed.  We didn't have to endure it before, we shouldn't have to endure it now.   You think there should be "many different reportinator" like services.  I think there should be exactly zero.  Stop polluting this space with crap. 
 > It should be simply removed.
> I think there should be exactly zero.

Freedom of choice is a hallmark of Nostr. “Censoring” is only possible when choices become inaccessible. Ranting about “left vs right” is just low-signal hyperbole, when we’re all in the same boat. 

For Nostr, freedom of choice is what matters. Stay focused on that, and we all win. Lose focus, and somebody loses, and nobody wins.