@hh your points are insightful, but pretty entrenched in a deep dystopia. On the whole, our societies are not “there” yet. Whether we “will be” or not and “what to do if” are all fine questions… but these have little to do with “what we build today”.
My point is: unknown number of causal paths may lead to dystopia. If we keep “narrowing down” what Nostr is for the sake of avoiding these possible futures, then Nostr will be nothing.
Dumb relays have limited usefulness. And what’s to stop dystopian govt from coming after clients also? So clients have to be dumb? No “content filters” for anyone?
This is just counterproductive. If Nostr is gonna grow AT ALL to a scale where Govt cares, then so will bots and bad actors polluting the network with shit content. Without the ability for relays AND clients to provide content filtering (and yes… based on “trusted” user feedback mechanisms like follow, mute, trust, report, etc…) then Nostr itself will fail.
Content filtering is NOT some legacy practice of “centralized” walled garden socials. It is an essential tool for maintaining network health and vitality.
Perfecting truly decentralized content filtering (where users choose, design, and share their own filters across clients) is Nostr’s game to lose. If we don’t develop these tools today, then tomorrow it may be too late. We need to EMBRACE the fact that nobody notices us yet, rather than LOCK DOWN and prepare for the worst.
"We're not there". OK. As I said, it seems we need to have our own Nostr Ross Ulbricht before people are convinced that, in fact, we are.
Right. So we’re not “there” yet. Nostr “elites” are not being persecuted ATM.
This is not a case of “once we have our own persecuted martyr, then people will see that we are being persecuted”. We are NOT being persecuted ATM, and we should take advantage of this by “redefining” and “growing” the tools for decentralization.
To cower for fear of being persecuted is to forget the entire reason for existing.
Why does Nostr exist? What can Nostr accomplish?
I'm not saying we have to "cower". It's exactly the opposite. "Cowering" is asking for compliance with censorship laws like this was Twitter already.
“Cowering” is to not pursue a “decentralized content filter” architecture (that would keep Nostr “trusted users” buffered from the onslaught of bots and bad actors that are sure to come) simply because the current paradigm of “content moderation” is centralized (run in black boxes by a handful of companies) and prone to being manipulated by govt.
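For illustration, here’s a minimal sketch of what a user-owned, shareable client-side filter could look like. Everything here is hypothetical: events are plain dicts, and the follow/mute/report sets stand in for signals a client could read from the user’s own lists (in Nostr terms these map roughly to contact lists, mute lists, and reports), not any real client’s API.

```python
# Hypothetical sketch: a user-defined, shareable content filter.
# "events" are plain dicts with "pubkey" and "content" keys; the
# follows/mutes/reported sets stand in for trust signals a client
# could pull from the user's own lists.

def make_filter(follows, mutes, reported, max_reports=2):
    """Build a filter the user controls and could share across clients."""
    def allow(event):
        author = event["pubkey"]
        if author in mutes:
            return False  # hard block: the user muted this key
        if reported.get(author, 0) >= max_reports:
            return False  # too many reports from trusted peers
        return author in follows  # default rule: only people I follow
    return allow

follows = {"alice", "bob"}
mutes = {"spammer"}
reported = {"bob": 3}  # bob was reported 3 times by trusted peers

allow = make_filter(follows, mutes, reported)
feed = [
    {"pubkey": "alice", "content": "gm"},
    {"pubkey": "bob", "content": "buy my coin"},
    {"pubkey": "spammer", "content": "!!!"},
]
visible = [e["content"] for e in feed if allow(e)]
print(visible)  # ["gm"]
```

The point being: the filter is a plain function the user owns. Swap the rule set, publish it, or import someone else’s, with no relay or company in the loop.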
> “Perfecting truly decentralized content filtering (where users choose, design, and share their own filters across clients) is Nostr’s game to lose. If we don’t develop these tools today, then tomorrow it may be too late. We need to EMBRACE the fact that nobody notices us yet, rather than LOCK DOWN and prepare for the worst.”
I only see the notes of people I follow. Is that not a pretty good filter?
As long as it works for you … but TBH are you SURE that your client isn’t filtering behind the scenes for you?
I’m using Damus, primarily. I don’t know that. I would think that filtering should be done at the relay level. Having an assortment of PG, PG-13, and R-rated paid relays would provide a potential source of revenue for relay operators as well as options for users. People want different things, so there’s a place for all of them.
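The rating-tier idea above can be sketched as a simple relay-side accept policy. This is purely illustrative: `RATING_ORDER` and the `rating` label are assumptions (a relay operator would have to attach ratings through their own moderation), not part of any real relay implementation.

```python
# Hypothetical sketch: a relay-side accept policy keyed to a rating tier.
# A "PG" relay accepts only PG content; an "R" relay accepts PG through R.

RATING_ORDER = ["PG", "PG-13", "R"]  # assumed tiers, mildest first

def make_relay_policy(max_rating):
    """Return an accept() check for a relay capped at max_rating."""
    cutoff = RATING_ORDER.index(max_rating)
    def accepts(event_rating):
        return RATING_ORDER.index(event_rating) <= cutoff
    return accepts

pg_relay = make_relay_policy("PG")
r_relay = make_relay_policy("R")
print(pg_relay("PG-13"))  # False: too racy for the PG relay
print(r_relay("PG-13"))   # True: within the R relay's cap
```

A user picks the relay (and pays for it) whose cap matches what they want to see, so the “filter” is a market choice rather than something imposed on everyone.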
Right. There is a way to accomplish filtering on relays and clients WITHOUT assuming the “legal” role of a publisher. This is the backdrop of the conversation above.
The nostr experience is 100% what the user makes of it.