GM. Reporting is censoring. It's surprising to see some of you who fled to #nostr to avoid censorship now endorsing it.
For real. Ignore/block is enough
GM 🫂
Hello 🫡
I report gross racism when I see it. No one should have to come across that garbage.
I don’t want anyone to decide for me what I should not see
🫂
Then don’t use those relays or clients doing it.
How do I find out which ones do?
Excellent question. Keeping your ear to the ground and asking around, probably. Over the years I assume people will make spreadsheets comparing the relays and clients, and those columns will be on there. For now, it's about finding out. For example, I recently contacted one of the major app-dev/relay-hosters about potentially-illegal content referenced in a post. They said they wouldn't do anything about it because the content wasn't hosted by them.
Which is why clients say "hey, this has been reported: click to see anyway," whereas most censorship, in the traditional sense, is "YOU CAN'T VIEW THIS BECAUSE I SAID SO." https://media1.tenor.com/m/eFiK4z8Q2hEAAAAd/still-same.gif
"publisher censoring" is irrelevant when there are infinite publishers available all connected via the same network. You can't be "censored" on Nostr, but a relay is well within its rights to block you (including after user suggestions. "reports", as it were) If you're on a "single publisher" network (like Twitter): 1. You're asking for it. 2. They own your content, not you, so it's not censoring. It's their property to do what they want with. If you want to truly be censorship-immune, you must be fully sovereign in your compute and your network.
On nostr, censoring will be almost impossible! There will be more relays that won't censor than relays that will. So it is only possible to do this on centralized platforms.
Right. And if a platform *does* allow it, you should expect it. And it's fair game. Move to an alternative
You don't have such options on Twitter, only here on nostr.
Then do not cry when the big bad government comes knocking to demand that relay owners also KYC people, or comply with any other legal requirement they come up with.
The goal is to make such things impossible or irrelevant. Know the system and work around it. One way is for every user to have their own proxy relay that directly connects peer to peer to other users' proxy relays, running on sovereign personal servers in the cloud (or in your home + an IP-cloaking remote proxy). This is what we're enabling at https://vaporware.network Another way is going extremely hard on mesh networks and various darknet protocols. There is always a way. Privacy will win. https://image.nostr.build/46346879bfd9b53fdd4f85597a8c4585d95486caee746e255bc16241fa20aad9.jpg You can read the rest at blog.vaporware.network
I like the idea of individual relays being as easy and cheap and commonly used as possible. For the reasons you mention (censorship by centralized relays becomes effectively irrelevant if we can just route around them), and also because if we want web of trust to work, we're going to want something like a personal relay to keep track of our WoT. Why? Because calculating WoT on the fly is not going to work for anything beyond the simplest and most basic of implementations. A personal relay would be perfect for this. #WoT
Yes exactly. You're getting it. I had worked on a WoT style project on another decentralized network - one in which each user has a full VM (which can do your WoT computation, as you rightfully point out as a need). You may be interested in reading a summary of it as it stood at the time: https://gist.github.com/vcavallo/e008ed60968e9b5c08a9650c712f63bd (I plan on rewriting this for a Nostr context) The language is specific to the platform (Urbit) but you can glean the general ideas. We're working on bringing the "personal algorithm" and WoT idea to fruition generally, for all decentralized social platforms.
Awesome. I will def take a look! 😃 I am in the process of demonstrating what I’m calling the influence score as a method of nostr content stratification and discovery. You can see it in action at https://brainSToRm.ninja where I use the influence score to stratify wiki articles on nostrapedia. One of the advantages I hope to convey of the influence score is that it can see past 2 hops on one’s social graph, unlike most WoT score implementations that give a score of zero for anyone more than 2 hops away. And a second advantage of the influence score is that it is not a popularity contest: your score can be high even if you do not have a high follower count. I haven’t incorporated context yet, but it’s pretty straightforward to make influence scores that are context-specific, and that’s coming up soon on my roadmap. I would LOVE to run a personal relay that maintains my influence score calculations for me, that would be 🔥 🔥 🔥 🔥 🔥
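(Illustration only, not the actual brainSToRm.ninja implementation: one simple way a score can stay non-zero past 2 hops is to decay contributions by hop distance instead of cutting them off. A rough TypeScript sketch, with the decay constant and traversal depth as assumed values.)

```typescript
// Illustration only (not the actual influence-score implementation):
// a hop-decayed reach score over the follow graph stays non-zero past
// 2 hops, unlike WoT scores with a hard 2-hop cutoff. `decay` and
// `maxHops` are assumed values.

type Pubkey = string;

// follows: pubkey -> set of pubkeys it follows (e.g. built from kind-3
// contact lists pulled off your relays).
function influenceScores(
  me: Pubkey,
  follows: Map<Pubkey, Set<Pubkey>>,
  decay = 0.5,
  maxHops = 6
): Map<Pubkey, number> {
  const scores = new Map<Pubkey, number>();
  let frontier = new Set<Pubkey>([me]);
  const visited = new Set<Pubkey>([me]);

  for (let hop = 1; hop <= maxHops; hop++) {
    const next = new Set<Pubkey>();
    const hopWeight = Math.pow(decay, hop - 1);
    for (const node of frontier) {
      for (const followed of follows.get(node) ?? []) {
        // Each follow from within your reachable graph adds a small,
        // hop-weighted amount, so the score reflects endorsement paths
        // from you rather than raw global follower count.
        scores.set(followed, (scores.get(followed) ?? 0) + hopWeight);
        if (!visited.has(followed)) {
          visited.add(followed);
          next.add(followed);
        }
      }
    }
    frontier = next;
  }
  return scores;
}
```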
A lot of similarities between aera and @Lez's proposed NIP-77. Both are weighted, both are contextual. The main difference is that NIP-77 has a field for confidence. I like it!
The problematic part is, you can't control which relays people use. If there are only 5 decent relays running and they censor someone or a note, it will be censored for most people. So I think you either need tools on the reader's/viewer's side to combat censorship, or clients need a better discovery feature that shows you the people you follow even if they got censored on the relays you use. Or that at least tells you that you might want to add a relay to your list, because the others censored the person you follow.
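(One possible shape for that discovery feature, sketched under the assumption of NIP-65 relay lists: if a followed author's notes are missing from your relays, look up their kind-10002 relay-list event and suggest those relays. `fetchLatest` here is a hypothetical stand-in for whatever query helper a client already has.)

```typescript
// Sketch of the "discovery" idea above, assuming NIP-65 relay lists.

interface NostrEvent {
  kind: number;
  pubkey: string;
  tags: string[][];
  content: string;
  created_at: number;
}

// Hypothetical helper standing in for the client's relay query layer.
declare function fetchLatest(
  relays: string[],
  filter: { kinds: number[]; authors: string[] }
): Promise<NostrEvent | null>;

async function suggestRelaysFor(
  author: string,
  myRelays: string[],
  indexerRelays: string[] // relays likely to carry relay-list events
): Promise<string[]> {
  // kind 10002 is the author's advertised relay list (NIP-65).
  const relayList = await fetchLatest(indexerRelays, {
    kinds: [10002],
    authors: [author],
  });
  if (!relayList) return [];

  // "r" tags hold the author's preferred relays; an optional third
  // element marks a relay as read-only or write-only.
  const writeRelays = relayList.tags
    .filter((t) => t[0] === "r" && (t[2] === undefined || t[2] === "write"))
    .map((t) => t[1]);

  // Only suggest relays you aren't already reading from.
  return writeRelays.filter((url) => !myRelays.includes(url));
}
```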
I had a reply to this concern elsewhere in the thread: nostr:nevent1qqsvfnf87s5gsg4unc502whwz6wjm7vmz02peghwupnhr0m34a7w4rgpr3mhxue69uh5ummnw3ezucmvda6kgtnkd9hxuete9eu8j7szyqh04fc4hw6xm4d7dd7634msqfndz9n5hyfms9u2mk6u9e3anpenzqcyqqqqqqgy8tj68
Good suggestions! 🤙 Still seems like hard work to me. 1. If you want this to work at scale, you need it at a button click. 2. Somehow we have to make it sustainable, but hosting these costs money, so in the end you have to compete with "free" solutions. The hard part, it seems to me, is that as a content creator, for example, it does not only depend on you.
Yes, at the click of a button. We've demonstrated that already elsewhere - it works nicely. And yes, hosting. But, to make a long story very short: the market will handle that. Our computational model is quite well suited to something like a bidding-based resource-hosting market (or you could always pay a more traditional VPS service and run there, again at the click of a button. Or host it off an old laptop or Pi in your closet. Or all of the above if you're a fan of lots of failover redundancy).
Then let's go! Let's make it accessible via one-click relay setup on something computationally cheap.
And war is peace?
Ah, you mean reporting a profile. I thought you meant writing articles.
Both 😉
You're not supposed to go full retard.
Fiatjaf doxxed himself a long time ago! Preaching on nostr about freedom and then going to daddy Elon to remove the article is a full retard move!
I don't know much of what you're talking about. I know Fiatjaf didn't hide very well, and people are blaming someone other than Fiatjaf. If that's what you're talking about, then yeah, everyone slamming the reporter for revealing a name does not understand the concept of personal responsibility. Since I find personal responsibility to be a basic and simple concept, anyone that doesn't understand it is retarded, in my opinion. Anyway, I disagree that someone writing an article (reporting) is censorship. To think that "creating words is censorship" is 100% backwards; i.e. retarded.
It's just not
It is
Does Coracle (that's your project, right?) use this kind-1984 event? (I hate that name, 1984. I know it was an attempt at humor, but given how close we are to Orwell's vision, it's a little too on the nose for me to laugh at it.)
No, but only because I haven't figured out a satisfying way to implement reports that doesn't have negative externalities
Do you use the Microsoft API too?
Nope, but all things considered that is a good thing for image hosts to use.
Thanks man.
It's just a report event; how to handle it specifically is up to the client.
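(For reference, a kind-1984 report is just a small signed event, roughly this shape as I read NIP-56; what a client or relay does with it is its own policy.)

```typescript
// Roughly the shape of a kind-1984 report as I read NIP-56: the report
// type ("spam", "nudity", "illegal", "impersonation", ...) rides on the
// "p"/"e" tags, and the content is an optional free-text reason.
const report = {
  kind: 1984,
  pubkey: "<reporter's pubkey>",
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    ["p", "<reported profile's pubkey>", "spam"],
    ["e", "<reported note's id>", "spam"],
  ],
  content: "optional free-text reason",
  // id and sig are filled in when the event is signed
};
```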
Yeah, that's exactly the reason why my accounts on Twitter were banned or suspended. Someone soft on there doesn't like the word "retard" in a sentence, goes and reports it, and you get censored for it by the company.
Like snitching on your neighbor is just reporting? It is not always that simple. You are lacking communist experience.
Uh, no. Reporting to a central authority is probably censoring. Reporting things as spam, or categorizing something NSFW that wasn't labeled properly, is helping others get signal on something. In this case, since everything is mostly decentralized, that signal can be used by those who choose to use it to better their own experience of the protocol. Don't conflate things that are not the same in function just because they are the same in name.
GM. Disagree. What is done with those reports could be considered censorship though. Which I would argue is good. Also, it’s presumptive that people “fled censorship”.
You said disagree but in the first sentence you stated “could be considered censorship”. So agree or disagree?
I disagree that reporting it is censoring. Notes are not always censored upon reporting. Relay operators (or apps, depending on whether they collect the report in-app) are the censors. My apologies if I was unclear. I only meant that a report alone doesn't automatically censor, but aids in it. A relay operator or app can choose to ignore the report entirely, or wait for 1000 reports per note before censoring if they choose to. Up to them.
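(A purely illustrative sketch of that relay-side policy: the threshold and the action taken are whatever the operator decides.)

```typescript
// Illustrative relay-side policy: only flag a note for review (or removal)
// once N distinct pubkeys have reported it. N and the action taken are
// entirely the operator's choice; they can also just ignore reports.
const REPORT_THRESHOLD = 1000; // the "wait for 1000 reports" example above

// reportersByNote: note id -> distinct reporter pubkeys, accumulated
// as kind-1984 events arrive at the relay.
function shouldFlagForReview(
  noteId: string,
  reportersByNote: Map<string, Set<string>>
): boolean {
  return (reportersByNote.get(noteId)?.size ?? 0) >= REPORT_THRESHOLD;
}
```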
I fled censorship, I know that much. The anti-scientific debacle known as "The Lockdowns" was a two-headed beast; one head was called propaganda, and the other head was called censorship. Many many many decent, honest and expert opinions that desperately tried to steer the debate on draconian lockdowns into the realm of sense and reason were chewed to pieces by rabid censorship. It has only gotten worse and more sophisticated since. There is a digital gulag archipelago forming around us, and the number of meaningful spaces where you can speak your mind without fear of censorship is evaporating rapidly. Each of us flees to higher ground each time the waters of censorship rise, eventually inhabiting our individual desert islands, sad and alone, our shouts reaching nobody.
Okay. Let me rephrase. “Also, it’s presumptive that **all people here** “fled censorship”.” You fled what you believe was censorship. I did not flee censorship.
You didn’t pay attention to my note! It says “SOME of you who fled”
I hear you, that's fair.
Jesus. You thought The UVF was bad😳.
Using their weapons against them is making them taste their own poison.
It is absolutely censorship. Not government or company censorship, but censorship nonetheless. However, there is nothing wrong with user and peer censorship, as it isn't forced onto another user on nostr. They can use or create a relay that mirrors the large relays and simply ignores reported content. Unlike with governments or centralized entities like 'X'.
So Nostr relays don't have to follow the laws of their jurisdictions? What type of illegal content are you hosting on your relay?
If relays follow "the laws of their jurisdictions," then every second note on here will be censored lmao
So you are advocating for relays to ignore laws and just allow kiddy porn and murder porn? I'm sure that will work out 👌
Where did I say this ?!
if I report kiddy porn you claim that is censorship.......
I don’t care what you report! Read my note and don’t twist it !
Yeah man, I'm with you on this one, the censorship that is going on today is fucking WILD.
do what you preach and run a free public relay that accepts anything from anyone then
I am not preaching! Stating the facts! I did run a relay. Then I figured out that I can be a target. New Tor relays are incoming! No worries!
As long as irresponsible relay owners continue to push for censorship, running a non-censored relay is suicidal. The pro-censorship relays will inevitably attract the attention of regulators and State actors, because they are in fact not relays but publishers, and will be forced into submission and compliance, under threat of prosecution. Their irresponsible actions and obsession with replicating centralized social media practices "for the good of the protocol", "for adoption", "for common sense", etc., endangers everybody else and is the single worst point of failure of Nostr and what will effectively kill it, in time.
How about paid relays? The initial payment already acts as a spam filter.
So you'll be responsible for whatever your clients, who give you money for your services, post? What will you do when the EU tells you that your paid "relay" is publishing anti-vaccine disinformation and that you have to ban and permanently block those accounts who do? And that you have to give assurances that you will no further allow them to do it? What if they decide that you're committing financial crimes because you could prevent them from zapping, and you don't? Will you KYC and report to the tax authorities your clients? Will you keep a centralized database of the transactions? What if some of them are zapping people in Russia or China, or in a tax haven?
This shit is coming and it will be easy to catch everyone on here. Most of the IPs are being leaked and zaps are public. Nobody is paying attention to or cares about this, yet!
I don't know how to solve this technically, other than with pure "dumb" relays, and that is only because I think you would have a chance at plausible deniability when they arrest you, at least. But even that may be insufficient, as we're seeing with the kangaroo courts that are being organized to jail people for writing software, as State bureaucracies become more and more brazen and drop all pretense to even act as a law-based regime.
@hh your points are insightful, but pretty entrenched in a deep dystopia. On the whole, our societies are not "there" yet. Whether we "will be" or not and "what to do if" are all fine questions… but these have little to do with "what we build today". My point is: an unknown number of causal paths may lead to dystopia. If we keep "narrowing down" what Nostr is for the sake of avoiding these possible futures, then Nostr will be nothing. Dumb relays have limited usefulness. And what's to stop a dystopian govt from coming after clients also? So clients have to be dumb? No "content filters" for anyone? This is just counterproductive. If Nostr is gonna grow AT ALL to a scale where govt cares, then so will bots and bad actors polluting the network with shit content. Without the ability for relays AND clients to provide content filtering (and yes… based on "trusted" user feedback mechanisms like follow, mute, trust, report, etc.), Nostr itself will fail. Content filtering is NOT some legacy practice of "centralized" walled-garden socials. It is an essential tool for maintaining network health and vitality. Perfecting truly decentralized content filtering (where users choose, design, and share their own filters across clients) is Nostr's game to lose. If we don't develop these tools today, then tomorrow it may be too late. We need to EMBRACE the fact that nobody notices us yet, rather than LOCK DOWN and prepare for the worst.
"We're not there". OK. As I said, it seems we need to have our own Nostr Ross Ulbricht before people are convinced that, in fact, we are.
Right. So we're not "there" yet. Nostr "elites" are not being persecuted ATM. This is not a case of "once we have our persecuted, then people will see that we are being persecuted". We are NOT being persecuted ATM, and we should take advantage of this by "redefining" and "growing" the tools for decentralization. To cower for fear of being persecuted is to forget the entire reason for existence. Why does Nostr exist? What can Nostr accomplish?
I'm not saying we have to "cower". It's exactly the opposite. "Cowering" is asking for compliance with censorship laws like this was Twitter already.
"Cowering" is to not pursue a "decentralized content filter" architecture (one that would keep Nostr's "trusted users" buffered from the onslaught of bots and bad actors that are sure to come) simply because the current paradigm of "content moderation" is centralized (run in black boxes by a handful of companies) and prone to being manipulated by govt.
> "Perfecting truly decentralized content filtering (where users choose, design, and share their own filters across clients) is Nostr's game to lose. If we don't develop these tools today, then tomorrow it may be too late. We need to EMBRACE the fact that nobody notices us yet, rather than LOCK DOWN and prepare for the worst."
I only see the notes of people I follow. Is that not a pretty good filter?
As long as it works for you … but TBH are you SURE that your client isn’t filtering behind the scenes for you?
I'm using Damus, primarily. I don't know that. I would think that filtering should be done at the relay level. Having an assortment of PG, PG-13, and R-rated paid relays would provide a potential source of revenue for the relay operators as well as options for the users. People want different things, so there's a place for all of them.
Right. There is a way to accomplish filtering on relays and clients WITHOUT assuming the “legal” role of a publisher. This is the backdrop of the conversation above.
The nostr experience is 100% what the user makes of it.
Notice you are agreeing with a guy who supports Israel's extermination campaign in Gaza.
What will you do? I'll move myself and my relay outside of the EU. Do I also become responsible for transactions going through my Lightning routing node?
Are you gonna keep using a relay if they censor you or someone else? Are you gonna keep using Alby for zaps if they censor you?
IMO that's just the way it is. I believe it's natural for people to want to build walls around their community. This behavior has happened for millennia. These will be the growing pains of nostr. Some of my first notes (nearly a year ago) were arguments about my concerns with anyone suggesting pure censorship. That being said, since this exists, I suppose it's really on relay operators to ignore reports for the time being anyway. I'm sure there will be some other censorship authority that clients can send reports to, and look up reported content from, and censor client-side. We need to be aware of its conception. The best part of most clients is that they are open source; we can always fork if necessary. It will be a lot of work, but on paper it can be done.
I think of it as a library.
Reporting is applying a label/category.
Books in libraries are categorized.
Ppl not interested in science will not visit the science section.
That doesn't mean that categorizing books as science is censorship.
Censorship would be if the library were forbidden from holding science books.
It is up to nostr users how they treat reported content.
Some nostr users report their own content for nudity or profanity.
Some nostr users may *prefer* reported content - it would be trivial to write nostr clients and relays that specialize in reported content. E.g. a nostr nsfw app.
Web of Trust greatly mitigates false reports.
And it's impossible for relays to effectively censor across the network. The most they are likely to do is scan content reported as illegal and remove it if it is.
nostr:nevent1qqsf0qkhy3n7ldak3m2heqrsf9upajht5g07ms7hekgn9zr55zueknspr4mhxue69uhkummnw3ezucnfw33k76twv4ezuum0vd5kzmp0qgsp2alytxwazryxxjv0u0pqhkp247hc9xjetn5rch8c4s6xx5cmpxcrqsqqqqqpg9jhqs
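(A sketch of that "up to the user" idea, assuming a client that only counts reports from the user's own web of trust and lets each user map labels to actions. All names and types here are illustrative, not any existing client's API.)

```typescript
// Sketch of "reports are just labels, and the user decides what a label
// does". Only reports from the user's own web of trust are counted,
// which blunts false or spammy reports. All names here are illustrative.

type Action = "show" | "blur" | "hide" | "surface"; // "surface" = the nsfw-app case

interface Report {
  reporter: string; // pubkey that signed the kind-1984 event
  noteId: string;
  label: string;    // e.g. "nudity", "spam", "illegal"
}

interface UserPolicy {
  webOfTrust: Set<string>;          // pubkeys whose reports I count
  perLabel: Record<string, Action>; // my chosen action per label
}

function actionFor(noteId: string, reports: Report[], policy: UserPolicy): Action {
  for (const r of reports) {
    if (r.noteId !== noteId) continue;
    if (!policy.webOfTrust.has(r.reporter)) continue; // ignore strangers' reports
    const action = policy.perLabel[r.label];
    if (action && action !== "show") return action;
  }
  return "show";
}

// One user blurs nudity and hides illegal content; an "nsfw client" could
// instead surface the same labels.
const cautious: UserPolicy = {
  webOfTrust: new Set(["<friend pubkey>"]),
  perLabel: { nudity: "blur", illegal: "hide" },
};
const nsfwClient: UserPolicy = {
  webOfTrust: new Set(["<friend pubkey>"]),
  perLabel: { nudity: "surface" },
};
```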