“Cowering” is to not pursue a “decentralized content filter” architecture (one that would keep Nostr’s “trusted users” buffered from the onslaught of bots and bad actors that is sure to come) simply because the current paradigm of “content moderation” is centralized (run in black boxes by a handful of companies) and prone to government manipulation.
> “Perfecting truly decentralized content filtering (where users choose, design, and share their own filters across clients) is Nostr’s game to lose. If we don’t develop these tools today, then tomorrow it may be too late. We need to EMBRACE the fact that nobody notices us yet, rather than LOCK DOWN and prepare for the worst.”
I only see the notes of people I follow. Is that not a pretty good filter?
As long as it works for you … but TBH are you SURE that your client isn’t filtering behind the scenes for you?
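For what it’s worth, “only seeing who you follow” is itself an explicit filter in the protocol. Here is a minimal sketch of a NIP-01 follow-list subscription in TypeScript; the relay URL and pubkey are placeholders, not real endpoints:

```typescript
// Minimal sketch of a NIP-01 follow-list subscription.
// "Only seeing who you follow" is a filter the client sends openly to
// the relay, as opposed to filtering done behind the scenes.
const followed: string[] = [
  "0000000000000000000000000000000000000000000000000000000000000001", // placeholder hex pubkey
];

const ws = new WebSocket("wss://relay.example.com"); // hypothetical relay

ws.onopen = () => {
  // NIP-01 wire format: ["REQ", <subscription id>, <filter>]
  // This filter asks only for kind-1 text notes authored by followed pubkeys.
  ws.send(JSON.stringify(["REQ", "feed", { kinds: [1], authors: followed, limit: 100 }]));
};

ws.onmessage = (msg) => {
  const data = JSON.parse(msg.data as string);
  if (data[0] === "EVENT") {
    const event = data[2]; // ["EVENT", <subscription id>, <event>]
    console.log(event.pubkey, event.content);
  }
};
```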
I’m using Damus, primarily. I don’t know that. I would think that filtering should be done at the relay level. Having an assortment of PG, PG-13, and R-rated paid relays would provide a potential source of revenue for relay operators as well as options for users. People want different things, so there’s a place for all of them.
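To make the relay-tier idea concrete, here is a hedged sketch of how a paid relay might gate what it accepts. Nothing below is a standardized NIP; the `Rating` labels and `RelayPolicy` shape are purely illustrative assumptions:

```typescript
// Hypothetical relay-side rating policy, illustrating the PG / PG-13 / R
// tiers suggested above. How an event would actually get a rating
// (operator review, labeling events, client tags) is left open here.
type Rating = "PG" | "PG-13" | "R";

interface RelayPolicy {
  maxRating: Rating; // most permissive rating this relay accepts
  paidOnly: boolean; // whether the relay requires a paid subscription
}

const order: Rating[] = ["PG", "PG-13", "R"];

function acceptsEvent(policy: RelayPolicy, eventRating: Rating, senderHasPaid: boolean): boolean {
  if (policy.paidOnly && !senderHasPaid) return false;
  return order.indexOf(eventRating) <= order.indexOf(policy.maxRating);
}

// A PG relay rejects R-rated content; an R relay accepts everything.
console.log(acceptsEvent({ maxRating: "PG", paidOnly: true }, "R", true)); // false
console.log(acceptsEvent({ maxRating: "R", paidOnly: true }, "PG-13", true)); // true
```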
Right. There is a way to accomplish filtering on relays and clients WITHOUT assuming the “legal” role of a publisher. This is the backdrop of the conversation above.
The nostr experience is 100% what the user makes of it.