We are actually already in the process of changing the ontology and renaming it. It was thrown out there as a provocative experiment and POC. We've even talked about making a dumber bot that reports random people for random things, because if Nostr users (and their clients) can't handle a pluralistic set of rules for what is objectionable, then none of this is really going to work. (We probably won't actually do this, but somebody should.)
> “What are relays going to do about this? Delete everyone that used fuck in there reports”
This is not how we are viewing reports at Nos at all. Reports are opinionated labels for objectionable content; they do not carry with them the implication that the content should be deleted from a relay. This is where the word "report" deviates from what it means on other big social platforms, and we're talking about removing the word from our UI. Maybe we shouldn't use kind 1984, but it exists and it's what other apps (like Amethyst) are using, so it made the most sense at the time.
In our ideal world reports are published by all kinds of people: regular users, relay operators, bots, professional moderators. And they are consumed differently by different people. Some folks might want a client that only listens to reports from a specific list of moderators they choose. Some folks might want notes reported by those moderators to be hidden completely, or just placed behind a content warning, or maybe only if 2 or more moderators agree. Relay owners might filter reports down to just the things that are illegal in their jurisdiction and delete events matching that criteria. Some users will want to ignore reports altogether and browse the network as if they don't exist. This is how we create a place where any type of speech can find a home and everyone can follow the law and have a good time: each user has full control over what they see, and each agent (user, dev, relay operator) acts according to their own convictions.
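To make that concrete, here's a minimal sketch (TypeScript, not Nos's actual code) of how a client could apply one user's preferences to incoming reports: a chosen set of trusted moderators, a required level of agreement, and a choice between hiding a note and putting it behind a content warning. All of the names and the preferences shape are hypothetical.

```typescript
// Trimmed event shape: just the fields this sketch needs.
interface ReportEvent {
  pubkey: string;   // author of the report
  tags: string[][]; // NIP-56 style tags, e.g. ["e", <event-id>, <report-type>]
}

type Action = "show" | "warn" | "hide";

// Hypothetical per-user moderation preferences.
interface ModerationPrefs {
  trustedModerators: Set<string>; // pubkeys of reporters this user listens to
  threshold: number;              // e.g. 2 = "only if 2 or more moderators agree"
  onMatch: "warn" | "hide";       // what to do when the threshold is met
}

function actionForNote(
  noteId: string,
  reports: ReportEvent[],
  prefs: ModerationPrefs
): Action {
  // Count distinct trusted moderators whose reports reference this note.
  const agreeing = new Set(
    reports
      .filter((r) => prefs.trustedModerators.has(r.pubkey))
      .filter((r) => r.tags.some((t) => t[0] === "e" && t[1] === noteId))
      .map((r) => r.pubkey)
  );
  return agreeing.size >= prefs.threshold ? prefs.onMatch : "show";
}
```

A user who wants to browse as if reports don't exist just supplies an empty trusted set (or an unreachable threshold); a stricter user sets `threshold` to 1 and `onMatch` to `"hide"`. The point is that the same published events support all of these policies.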
Currently we are using 1984 events because they carry with them the semantic notion that "this content is objectionable to someone, and if you care you can review it". We attach NIP-32 labels to them because it allows us to be more specific and add important categories (like harassment) that NIP-56 doesn't cover. (Our current ontology is proving to be too specific, at least the way we are using it, so we are working to simplify it.) We could just publish label events by themselves and it would work mostly the same way.
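For reference, this is roughly the event structure that approach produces: a standard NIP-56 report with NIP-32 label tags layered on top. The namespace and label values here are illustrative placeholders, not our final ontology.

```typescript
// Sketch of a kind 1984 report carrying NIP-32 labels. The
// "social.nos.ontology" namespace and "harassment" label are
// placeholders for whatever the ontology settles on.
const report = {
  kind: 1984,
  content: "Targeted harassment of another user",
  tags: [
    ["e", "<id of the reported note>", "other"], // NIP-56 report tag ("other" is a NIP-56 report type)
    ["p", "<pubkey of the reported author>"],
    ["L", "social.nos.ontology"],                // NIP-32 label namespace
    ["l", "harassment", "social.nos.ontology"],  // NIP-32 label in that namespace
  ],
  // created_at, pubkey, id, and sig are filled in at signing time
};
```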
nostr:npub1gcxzte5zlkncx26j68ez60fzkvtkm9e0vrwdcvsjakxf9mu9qewqlfnj5z do you share this overall vision? If so, what event structure do you think fits best?