Notes by ts

 A handful of other resources from the fediverse:

https://github.com/iftas-org/resources/tree/main 
 Some thoughts by nostr:nprofile1qqstgy9c8cmmxadu3uax4j3al5cf6fw6gkgfc2pjelg2jqj5ptzzseqpz4mhxue69uhhyetvv9ujumt0wd68ytnsw43qzxmhwden5te0v4cxsetdv4ex2mrp0yhx6mmnw3ezuur4vgq3jamnwvaz7tmjv4kxz7fwwd5xjarxdaexxefwdahx2svrdwz  on the Stanford report (which he co-authored):

"Traditionally the solution here has been to defederate from freezepeach servers and...well, all of Japan."

"CSAM-scanning systems work one of two ways: hosted like PhotoDNA, or privately distributed hash databases. The former is a problem because all servers hitting PhotoDNA at once for the same images doesn't scale. The latter is a problem because widely distributed hash databases allow for crafting evasions or collisions."

https://hachyderm.io/@det/110782896527254991 
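For reference, the "privately distributed hash database" approach boils down to checking uploads against a locally held set of digests, roughly as sketched below. This is a minimal sketch, not anyone's actual implementation: real deployments use perceptual hashes such as PhotoDNA or PDQ rather than SHA-256 (exact digests are trivially evaded by re-encoding an image), which is exactly why handing the database to every server creates the evasion and collision problems described above. The file name and quarantine handler are hypothetical.

```python
# Minimal sketch of matching media against a locally held hash list.
# Assumptions: one hex digest per line in a local file, and plain SHA-256;
# real systems use perceptual hashes (PhotoDNA, PDQ), not exact digests.
import hashlib
from pathlib import Path

def load_hash_list(path: str) -> set[str]:
    """Load one hex digest per line from a (hypothetical) hash-list file."""
    return {line.strip().lower() for line in Path(path).read_text().splitlines() if line.strip()}

def is_known_match(media_bytes: bytes, hash_list: set[str]) -> bool:
    """True if the media's digest appears in the local hash list."""
    return hashlib.sha256(media_bytes).hexdigest() in hash_list

# Usage (paths and the handler are placeholders):
# known = load_hash_list("hash_list.txt")
# if is_known_match(open("upload.jpg", "rb").read(), known):
#     quarantine_and_report()  # hypothetical handler
```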
 Below are excerpts from a transcript of a call some fediverse people had in August about how they hope to address CSAM on ActivityPub. There is good work being done over there.

"IFTAS is a non-profit that has 
recently incorporated; we’ve been working on something since Nov-Dec 
last year. It’s intended to be a way to get foundation money into 
Fediverse, and out to moderation communities in Fediverse and 
decentralized social media. As far as CSAM specifically, we’ve been 
looking at it as something we want to work on. The story from a few 
weeks ago and the Stanford paper really kept motivating us. Right now, 
IFTAS is generally considering building on third-party moderation 
tooling. One of those things is likely going to be some form of CSAM 
scanning and reporting for fediverse service providers. We’re talking 
with Fastly, PhotoDNA, Thorn, some of the Google tools, etc. We’re 
hoping to hit the big hosting companies that can use things like 
Cloudflare’s CSAM scanning tools. For larger providers, we hope to 
connect them with services like Thorn."

"What people are doing is - they’re taking reports from users, and taking
 down the content without reporting to the gov’t. I don’t know what EU 
law is like, but even among above-board instances, we’re not seeing 
legal levels of US operators."

"Yeah, to clarify, for US operators, just deleting is not abiding by the 
law, you actually have to report it to NCNEC. This is an education 
issue; operators don’t know they have to do that."

https://socialhub.activitypub.rocks/t/2023-08-04-special-topic-call-social-web-and-csam-liabilities-and-tooling/3469/9 
 "decentralized platforms have relied heavily on giving tools to end-users to control their own experience, to some degree using democratization to justify limited investment in scalable proactive trust and safety. Counterintuitively, to enable the scaling of the Fediverse as a whole, some centralized components will be required, particularly in the area of child safety."

https://stacks.stanford.edu/file/druid:vb515nd6874/20230724-fediverse-csam-report.pdf 
 "The hashtag and keyword blocklists made available to Mastodon users are very useful for avoiding content that individual users want to avoid: posts about certain political events, disasters, or other personally disturbing content. However, as discussed in Section 2, it is unreasonable to expect users to curate their own filter list of hashtags and keywords—particularly as hashtags and keywords related to CSAM change with high frequency."

Are there any Nostr lists that categorize content worth blocking? Both clients and relays could incorporate such curated lists and block or flag anything tagged.

It's interesting to note that text indicators are a useful vector for removing illegal images. The images themselves are hard to detect automatically and are far riskier to handle directly. An up-to-date list of child exploitation keywords, however, opens the door to lower-risk, more automated filtering.
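As a rough sketch of what that could look like, assuming events arrive as ordinary NIP-01 JSON objects and that a curated blocklist of hashtags and keywords is available from a trusted source (how such a list would be published and updated over Nostr is left open), a client or relay could flag matching events without ever fetching the media:

```python
# Sketch of flagging events against a curated hashtag/keyword blocklist,
# without fetching or inspecting any media. Events are NIP-01 dicts; the
# blocklist values below are placeholders, and where the list comes from
# (e.g. a curated Nostr list event) is an assumption, not a defined NIP.
BLOCKED_HASHTAGS = {"exampletag1", "exampletag2"}   # placeholders
BLOCKED_KEYWORDS = {"example keyword"}              # placeholders

def event_hashtags(event: dict) -> set[str]:
    """Collect lowercase 't' tag values from a NIP-01 event."""
    return {t[1].lower() for t in event.get("tags", []) if len(t) > 1 and t[0] == "t"}

def should_flag(event: dict) -> bool:
    """True if the event carries a blocked hashtag or keyword."""
    if event_hashtags(event) & BLOCKED_HASHTAGS:
        return True
    content = event.get("content", "").lower()
    return any(kw in content for kw in BLOCKED_KEYWORDS)
```

A client might hide or label flagged events; a relay might hold them for moderator review instead of serving them in public feeds.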
 "The ActivityPub specification does not provide any guidance for Direct Messages; in Mastodon, DMs are more akin to “posts with an audience of two”, and are readable by instance admins. Because of this, DMs on Mastodon are unlikely to be a primary channel for child exploitation-related activity. Instead, users in these communities most commonly request contact on Session, an encrypted messenger forked from Signal that uses onion routing by default and requires no phone number or e-mail address (instead, using a hash as an identifier). Session allows either large group chats or one to one encrypted communication, and is so heavily associated with CSAM that posts not containing a Session ID still will use the “#session” hashtag as a discovery mechanism (see Figure 2 on page 7).

Lack of end-to-end encrypted DMs (or indeed, any easy to use direct messaging) pushing users to other platforms has mixed results with regard to child safety: using Session is extremely slow and requires some technical understanding, limiting its reach. On the other hand, if Mastodon theoretically had end-to-end encrypted DMs or chat groups, it would at least have access to the e-mail and IP addresses of the users involved, and users in such a group could report to instance admins."

Nostr has encrypted DMs with no email requirement, making it a more anonymous option than Mastodon for abusers. IP addresses could be collected by relays, but abusers could choose trusted relays to send messages over. 
 "CG-CSAM has also increasingly become commercialized, with users advertising for-profit private Discord channels or distributing bundles of CG-CSAM or customized generative models in exchange for money (see Figure 1) or cryptocurrency"

It's only a matter of time before this happens with Data Vending Machines.

https://stacks.stanford.edu/file/druid:vb515nd6874/20230724-fediverse-csam-report.pdf 
 "In this case, the administrator and sole moderator received abuse reports regarding CSAM on the instance—either uploaded by a local user or ingested from a remote follow—that were not immediately acted upon, resulting in an abuse report being sent to the server’s hosting provider. This resulted in the instance admin inspecting and deleting the content, but by this time the xyz top-level domain had suspended the server’s DNS domain name, effectively taking the entire site down and depriving the entire userbase of the service."

This seems like the main attack vector relevant for clients. Even if clients don't store content themselves, they do provide access. While they can't be held legally liable (I believe), they can be reported and de-platformed either by app removal or domain name suspension.

https://stacks.stanford.edu/file/druid:vb515nd6874/20230724-fediverse-csam-report.pdf 
 "Administrative moderation tooling is also fairly limited: for example, while Mastodon allows user reports and has moderator tools to review them, it has no built-in mechanism to report CSAM to the relevant child safety organizations. It also has no tooling to help moderators in the event of being exposed to traumatic content—for example, grayscaling and fine-grained blurring mechanisms."

https://stacks.stanford.edu/file/druid:vb515nd6874/20230724-fediverse-csam-report.pdf 
 "n the case of child safety, Japan has significantly more lax laws related to CSAM which has resulted in a cultural divide4 where most users in Japan are segregated from the rest of the Fediverse"

This is why Japan adopted Nostr early on. It's good that Japanese users largely use their own relays, but the crossover is a dangerous area for people outside Japan.

https://stacks.stanford.edu/file/druid:vb515nd6874/20230724-fediverse-csam-report.pdf 
 No background, just someone who wants to see Nostr succeed. 100% agreed on those goals, and on the hypocrisy of the powers that be. The way I would articulate my hopes for Nostr in this regard would be to:

- Protect well-intentioned users and service providers from accidental criminal or civil liability
- Protect Nostr the protocol from gaining the reputation of being a harbor for CP
- Direct law enforcement to the people actually participating in trafficking and production of CP

I have some ideas on how Nostr can hobble bad actors without compromising on decentralization; I hope to write them up soon. 
 PSA to relay operators: know your responsibilities and risks.

- Be careful about hosting images or NIP-94 (kind 1063) file metadata events.
- It's probably a good idea to avoid serving kind 1984 content reports, even if you accept them, since serving them could be considered "advertising" illegal content (see the sketch below). 
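Here is one way the second point could look in practice: a minimal relay-side sketch that still accepts kind 1063 (NIP-94 file metadata) and kind 1984 (NIP-56 report) events for internal moderation, but never includes them in responses to client queries. The storage interface and function names are hypothetical; only the kind numbers come from the NIPs.

```python
# Sketch of a relay policy: accept-but-don't-serve for sensitive kinds.
# The `store` object and its methods are hypothetical placeholders.
UNSERVED_KINDS = {1063, 1984}  # NIP-94 file metadata, NIP-56 reports

def accept_event(event: dict, store) -> None:
    """Store everything; route unserved kinds to a moderation queue."""
    store.save(event)                          # hypothetical storage call
    if event.get("kind") in UNSERVED_KINDS:
        store.queue_for_moderation(event)      # hypothetical moderation hook

def may_serve(event: dict) -> bool:
    """Decide whether an event may be included in a REQ response."""
    return event.get("kind") not in UNSERVED_KINDS
```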
 A very promising organization for helping decentralized service providers deal with illegal content:

https://about.iftas.org/ 
 A report on CSAM and other illegal content in the fediverse:

https://stacks.stanford.edu/file/druid:vb515nd6874/20230724-fediverse-csam-report.pdf 
 Here's a relevant conversation going on in the fediverse:

https://codeberg.org/fediverse/fep/pulls/140 
 Here's what US federal law has to say about CSAM:

https://www.law.cornell.edu/uscode/text/18/2251 
 PSA: The only agency you're allowed to report illegal sexual content to is the NCMEC. Report it to anyone else, and it's considered "advertising" and could put you in prison for 10 years.

This means most or all Nostr clients with a "report" button are putting their users in serious danger. This is made worse by the fact that the report event is signed with the user's private key and is very difficult to delete once it has been published.

nostr:nevent1qqstvgy8g2wq8a9w6ln5kqx6xqtrljfstnu6427r9x0pjc489sszalqpzfmhxue69uhhqatjwpkx2urpvuhx2ucpz3mhxue69uhhyetvv9ujuerpd46hxtnfduq3vamnwvaz7tmjv4kxz7fwdehhxarj9e3xzmnyqyfhwumn8ghj7un9d3shjctzd3jjummjvuq3qamnwvaz7tmwdaehgu3wwa5kuegpz4mhxue69uhhyetvv9ujumt0wd68ytnsw43qvvq9d6 
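For context on why this is so hard to walk back, this is roughly the shape of a NIP-56 report event (kind 1984) before it is signed. The exact fields a given client includes will vary, and the IDs below are placeholders; the point is that once the reporter's client signs and publishes it, it is a permanent, publicly attributable statement pointing at the content.

```python
# Rough shape of a NIP-56 report (kind 1984) before signing; IDs are
# placeholders. "pubkey", "id", and "sig" get filled in when the reporter's
# client signs and publishes it -- after which it cannot be unsaid.
import json, time

report = {
    "kind": 1984,
    "created_at": int(time.time()),
    "tags": [
        ["e", "<id of the reported event>", "illegal"],   # NIP-56 report type
        ["p", "<pubkey of the reported author>"],
    ],
    "content": "optional free-text note from the reporter",
}
print(json.dumps(report, indent=2))
```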
 Hello everyone, here to do research on trust and safety on Nostr. 
 No, I'm only here to continue the conversation already started and be a resource for devs trying to figure out what to do with illegal content.

It seems like many of them don't know what sort of risks they're exposing users to by creating "reporting" features. Others, like nostr.build, are doing the right thing by reporting content to NCMEC.

Coordination between service providers is challenging, which is good for censorship resistance but bad for keeping users safe from legal liability.

I've adopted the term "Trust and Safety" as my username with tongue somewhat in cheek, sort of like our revered "Nostr CEO". 
 There is borderline anime child porn in the global feed. Much of it is not marked sensitive. This makes us look ... 
 The really nasty thing is that it's also illegal in the US to report child porn to anyone other than NCMEC. This problem has to be solved not only for Nostr's reputation, but also because it puts everyone at risk (especially amateur relay operators).

nostr:nevent1qqstmcca5c9ea8ruex4x4vzkzjwsvsg4tjrn04pyetegexe49ukhwxsprpmhxue69uhhqatzd35kxtnjv4kxz7tfdenju6t00mdkwc 
 Agreed, the protocol should not concern itself with this stuff. However, clients and relays must. 
 Here's one possible approach: nostr:nevent1qqsp0l92vytmzeq03k3sqd0hyldv9utg8t9wgk0yffj7njalm45qzgcpzfmhxue69uhhqatjwpkx2urpvuhx2ucpkp9n5