 There is a pubkey who has uploaded over 6500 files to nostrcheck.me

Many of these files contain adult content. I have reviewed each upload by hand, one by one.

Two days ago this ‘person’ started uploading illegal content involving underage children.

I'm tired, very tired. For me #nostr is a breath of fresh air, but this is beyond me. This is abuse of all of us, from the victims in the pictures to myself.

#asknostr we have to do something about this. Seriously

P.S.: they never donated a single satoshi, despite the indiscriminate abuse of my server. Never. 
 unfortunately this is just the nature of public uploads. best we can probably do is automatic detection using ai. 
 right now I'm fighting a chinese spammer on the damus relay, these people have no lives. 
 DM me 
 I've seen it, they've been going full blast for a few days. We have to fight it somehow. 
 This is exactly what we are working on with @The Fishcake🐶🐾, but we still have a problem with these degenerates. For the last few weeks we have had an AI analyzing uploads, so we no longer have to check by hand, but the problem with this riff-raff is still there. 
 PhotoDNA is the way! 🐶🐾🫡 
 Do you have a product that can AI scan images for sensitive content? Would love to know more 
 Yes. What do you want to know? 🐶🐾🫡 
 I’m developing a nostr client https://storm.pub so that might be useful to prevent gross content from being posted

So you have a website I can bookmark? 
 I have an API that I could share with you to use with low volume 🐶🐾🫡 
 That’s awesome! I’m not quite ready for it yet but I’ll let you know when I am! 
 Thank you! 
 Is there any other FOSS-type alternative to PDNA? Or is it that MS has a huge library to cross-check against? 
 This is exactly why I don't do any public server stuff, because I don't want to deal with it, but I have mad respect for the devs who fight through it and do it for the community. I tried making a public blog website once; it got spammed with ads for Russian funeral services. I decided then that I didn't want to. 
 Running AI against this shit also costs money. Indeed, uploading media should probably cost a few sats 
 This! Such „PoW” would discourage spamming. 
 it wouldn't. spammers can do pow better than normal clients 
 How better? Better machines or what? 
 people writing spam bots are probably nerds with more computing resources than just a person on their phone 
 True. That’s a pity then… 
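For context on what such a PoW gate would actually check: NIP-13 defines difficulty as the number of leading zero bits in the event id. A minimal sketch of that check in Python (illustrative only; a real relay would also verify the committed nonce/target tag), which also shows why dedicated hardware trivially outpaces a phone:

    # Count leading zero bits of a hex-encoded nostr event id (NIP-13 difficulty).
    def leading_zero_bits(event_id_hex: str) -> int:
        bits = 0
        for ch in event_id_hex:
            nibble = int(ch, 16)
            if nibble == 0:
                bits += 4
            else:
                bits += 4 - nibble.bit_length()  # leading zeros inside this nibble
                break
        return bits

    def meets_difficulty(event_id_hex: str, min_bits: int = 20) -> bool:
        return leading_zero_bits(event_id_hex) >= min_bits

    # A phone grinding nonces finds a 20-bit id slowly; a spammer's GPU rig does it
    # orders of magnitude faster, which is why PoW alone is a weak deterrent here.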
 Hopefully we find the right way, and quickly, to stop this.  
 How about some initial sats payment that is automatically reimbursed if the relay operator does not decide to slash it? That would allow for much higher prepayment but come at low cost for honest users 
 I think that was the Sphinx chat model. You give up sats to join a group and send messages, and if the chat room owner decided it was spam they could keep the sats.  
 Then you'd trust the good intentions of the host. 
 yeah, but it is an opt-in model and these things can be handled on the social layer through reputation. Additionally, we are not talking about huge sums. A deposit of $5 per user would already kill almost all spammers, and the individual loss in a rug is low. 
 I guess so 🤷 
 which is a perfectly viable model, I think. It would even allow making it free for honest users and very costly for dishonest ones. There's always the possibility of rugs by the relay operators, but that's the risk a user is taking. 
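To make the deposit-and-slash idea above concrete, here is a rough sketch under stated assumptions: deposits, slashing, and refunds are just ledger entries, and all names and amounts are illustrative (a real version would settle via Lightning holds or escrow, not an in-memory dict):

    from dataclasses import dataclass

    @dataclass
    class Deposit:
        npub: str
        sats: int

    class DepositLedger:
        """Hypothetical refundable-deposit ledger for a relay or media host."""
        def __init__(self, required_sats: int = 15_000):  # roughly the "$5" above
            self.required_sats = required_sats
            self.deposits: dict[str, Deposit] = {}

        def open(self, npub: str, sats: int) -> bool:
            # user locks a deposit before they are allowed to upload or post
            if sats < self.required_sats:
                return False
            self.deposits[npub] = Deposit(npub, sats)
            return True

        def slash(self, npub: str) -> int:
            # operator keeps the deposit after confirmed abuse
            d = self.deposits.pop(npub, None)
            return 0 if d is None else d.sats

        def refund(self, npub: str) -> int:
            # honest user leaves and gets everything back, so honest use is free
            d = self.deposits.pop(npub, None)
            return 0 if d is None else d.sats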
 Are uploads signed with nsecs? Maybe a web-of-trust scheme or reputation-based rate limiting would help. npubs without a clean history or with low connectivity go to the back of the review queue. 
 That's horrible, we need to do something, just not sure what.  
 An obvious solution would be to only accept messages that pay some sats 
 That is one of my fears about what could happen: if we fix the relay IP address, we could not track these sick animals down. 
nostr:nevent1qqsdxn5sn9sv7fkvnjfl27y20dvx8vxwhaz8w6yggh8ntx5xukuag2spz4mhxue69uhhyetvv9ujuerpd46hxtnfduhsygyfu997f8ksqu76swm8sfuu62d6ttvxeuqqk63arfxrm392flws9spsgqqqqqqsu55puk 
 AI is the answer. I've thought about this a bit. We need to set up a local LLM that is vision capable and scans all uploaded images for abusive content, denying storage if it is determined to be illegal or inappropriate. 
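A rough sketch of that "scan before storing" flow, assuming you run some local vision-capable model behind a classify(image_bytes) call; the classifier interface, labels, and threshold below are hypothetical placeholders, not any specific model's API:

    from typing import Callable, Dict, Optional

    BLOCKED_LABELS = {"csam", "illegal", "explicit_minor"}  # illustrative labels

    def handle_upload(image_bytes: bytes,
                      classify: Callable[[bytes], Dict[str, float]],
                      store: Callable[[bytes], str],
                      threshold: float = 0.5) -> Optional[str]:
        """Classify an upload and only persist it if no blocked label fires."""
        scores = classify(image_bytes)  # e.g. {"explicit_minor": 0.91, "nsfw": 0.99}
        for label in BLOCKED_LABELS:
            if scores.get(label, 0.0) >= threshold:
                # deny storage; a real service would also log and report the event
                return None
        return store(image_bytes)  # returns the hosted URL on success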
 DMed you  
 It’s risky to do nothing; we don’t want any developers getting involved in illegal activities in the name of freedom of speech or resisting censorship. Crime should be prevented, especially when it involves underage individuals or other serious issues. Governments often target those who enable such activities, so the facilitators would be held responsible. 
 Limit free uploads just like nostr.build, and make storage pay-on-demand like satellite.earth, so it covers the cost of AI scanning and allows more storage for honest people.  
 I don’t know about your legislation, but at least under Swiss law, if you’re a hoster you only have to follow up on reports and notify the police if you discover something. As a provider you’re not liable for hosting unknown content. You wouldn’t need to review this shit by yourself. But I can see how you feel it’s your duty. 
 You might look into iftas.org. They've shown up in the last year to help fediverse admins deal with stuff like this. Right now it's mostly just forums/chat, but they're planning to create some tools to help automate this kind of thing. 
 Sorry to hear that, Quentin, and thank you for your service. It is really hard to handle these issues as a media service provider.

Maybe you can take a look at
https://github.com/atrifat/nsfw-detector-api and perhaps modify it.

I've used it as a module for nfrelay.app. It is effective and really helps handle the issue. Hopefully it can also help you. 
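For readers wondering what wiring a media server to such a self-hosted detector might look like, here is a hypothetical client-side call; the route, payload, and response fields are placeholder assumptions, so check the repo's README for the actual interface:

    import requests

    def is_probably_nsfw(image_url: str,
                         api_base: str = "http://localhost:8081",  # placeholder host
                         threshold: float = 0.8) -> bool:
        # POST the image URL to the detection service and read back class scores.
        # Endpoint name and JSON shape are assumptions for illustration only.
        resp = requests.post(f"{api_base}/predict",
                             json={"url": image_url},
                             timeout=30)
        resp.raise_for_status()
        scores = resp.json()  # e.g. {"porn": 0.97, "hentai": 0.01, "neutral": 0.0}
        return max(scores.get("porn", 0.0), scores.get("hentai", 0.0)) >= threshold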
 Already doing it, but CSAM requires and has a different solution 🐶🐾🫡 
 I have some ideas to improve and adjust it to be able to detect CSAM. Is it OK to send them through to your DM (regular DM or gift-wrapped DM), Fishcake? 
 There is PhotoDNA, which is free of charge and does it well. I don’t think we should spend our limited resources on that.
If I were to do it, I could have an age detection AI model and a nude detection model in a multi-model setup, but PDNA is definitely better and an industry standard 🐶🐾🫡
DM me if you wish, but consider the above 
 Well, you already described a very similar strategy to the one I wanted to propose. No need to DM anymore 😄

Yes, PhotoDNA for known data, while a multi-model setup using nude detection + age detection models handles unknown data. That is also my proposal. 
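A sketch of that two-stage strategy; the three helpers are hypothetical stand-ins for a PhotoDNA hash lookup and the two local models (nudity detection and age estimation), to be wired to whatever backends are actually deployed:

    def photodna_match(image_bytes: bytes) -> bool:
        raise NotImplementedError("call the PhotoDNA matching service here")

    def nudity_score(image_bytes: bytes) -> float:
        raise NotImplementedError("call the nude-detection model here")

    def apparent_minor(image_bytes: bytes) -> bool:
        raise NotImplementedError("call the age-estimation model here")

    def review_image(image_bytes: bytes) -> str:
        # Stage 1: known material -- a hash match against the PhotoDNA database
        # catches previously reported content cheaply.
        if photodna_match(image_bytes):
            return "block_and_report"
        # Stage 2: unknown material -- run the heavier models only on misses,
        # nudity first (cheaper), age estimation second.
        if nudity_score(image_bytes) >= 0.8 and apparent_minor(image_bytes):
            return "block_and_report"
        return "accept"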
 We could work on the multimodal thingy together then. @quentin and I have been experimenting with general detection of safe/unsafe media. I can share my repo (still closed source for now until I am sure the code is safe and clean) and take it from there. Interested? 🐶🐾🤔 
 Yes, I'm interested. I think I can probably join to help in a month or so; I need to finish some pending work on nfrelay.app for the moment. 
 Excellent! The more the merrier! Ping me in the DM if/when you are available 🐶🐾🫡 
 Yes, will do 🫡 
 Ok, I implemented age prediction for unsafe content, we’ll see how it goes 🐶🐾🫡 
 Oh nice, is it already running for nostr.build and nostrcheck.me? 👀

Hopefully it can help both of your services. 
 It runs on NB, @quentin will need to update the API call and it’ll work 🐶🐾🫡 
 Glad to hear. 🫂 
 nostr content filtering experts @The Fishcake🐶🐾 @Rif'at Ahdi R @quentin  
 This is a very good tip for #photoDNA or any self-hosted AI model, @quentin, to integrate this FOSS software into the nostrcheck package. (Using external free/paid indexing or content-search services means a dependency on another 3rd-party service, i.e. BAD by design.) Nothing gives a 100% guarantee in a cat-and-mouse game, BUT MITIGATING 80% solves a good part of the problem. Also look at PoW and/or NIP-42, which limit an attacker's resources. 
 So sorry you are fighting this right now Quentin. I’m sure you will find a responsible solution. 

People spreading this sort of thing need to be in prison. Plain and simple. 
 Is there an AI bot which can detect anything like that and automatically delete + block + report to the police? Having to see that sort of stuff would be scarring, no one should have to deal with it. 
 Cloudflare has that as a service. It doesn't even cost anything.  I think it might even be on by default if you're using their file hosting services.

Includes automatic reporting to law enforcement and follow up by them so you don't have to get involved.  

Sometimes, some things are best left to multinational corporations. 
 Probably a Fed trying to attack Nostr 
 No free uploads, or just very limited, AI scanning, get their IP, report to authorities.  
 just close shop
that's a lost battle  
 or like, charge for access, per npub

even IP rate limiting won't fix this 
 yes, by running a free service like that he is just inviting criminals in and then complaining  
 nice cover up ya pedo 
 Sorry dude, that sucks. Having an open source CSAM filter unfortunately feels like a big need for multiple Nostr projects. 
 As @jb55 said, this probably needs NSFW filters that use AI at the client, relays, and/or public hosting servers.

I looked into using an open source AI model for @damus (I think from Yahoo, from memory). At the time the code used the Kingfisher library, which made it a little more difficult to integrate and made the Damus app significantly larger.

Server side this will have a running cost, and I think if people want those services they will probably have to pay for it. I would prefer relays that did this. 
 The best way to constrain something is to make the cost higher than the benefit; how fast that works depends on how much higher.

nostr:note16d8fpxtqeunve8yn74ug576cvwcva06ywa5gs3w0xkdgdede6s4qg83kzs  
 just mass DELETE 
 A free service SHOULD NOT have its SPACE + BANDWIDTH abused, even IF the content is legit. Just mass delete and block the npub. 
 that doesn't help, uploader can make new npubs at a rate of tens of thousands per second 
 That's why a RATE LIMIT on free/paid accounts is important, which quentin is NOT using: allow any npub a max of 10MB, no matter what the content, and there are auto content filters. NIP-42 does not prevent a RICH paying abuser either. Any illegal content gets deleted, PAID or FREE, it doesn't matter. 
 1. npubs literally take less than a millisecond to generate
2. tor allows an attacker to keep changing their IP every minute or two, defeating IP-based blocking/rate limiting
3. wasting resources recognising images with an AI engine costs money
4. it is an assumption without any basis that this attacker wants to pay to make this happen, and if they do, then there are per-subscription limits, and the option to cancel the subscription for abuse

bitcoin survives because it pays people to protect it; subscription services survive by charging people to use them 
 RATE LIMITING is already used by many; they are SAFE from exit-IP or npub changes. It is simple to mitigate and slow them down to non-abuse levels (this has nothing to do with content). Any attacker can break any filter over time once they learn it. 
 less than a millisecond for a new npub, less than a second to auth with it, resume spam

paid subscriptions can still be attacked by someone trying to break the security, but they can't upload data when the server is dropping it automatically and ceasing to respond to it

there are many countermeasures, please, you, and quentin, study some network security theory

google and yahoo and protonmail and cloudflare deal with this bullshit every day, and you don't know it until someone finds a crack and jams a wedge in it 
 Your "less than 1ms" theory is already taken care of by using a CF frontend, which many are using, so that's not a problem. 
 it's not a theory, i wrote a key miner and it does over 100k new pubkeys every 5 seconds

CF is only stopping the more obvious bulk traffic attacks, not protocol level ones 
 PoW lets the gateway stop your script unless your hardware is strong enough. 
 NIP-42 and PoW both prevent that, but allowing non-auth npubs is also important for the growth of nostr.
No normies even know how to use nostr, and asking everyone to use a NIP-42 client means no normies will use nostr.
So RATE LIMITING is still the key 
 normies won't have anything to come to if the network is broken 
 PoW, or fees, either way

we have lightning network integrations everywhere

PoW will only last as long as it remains more expensive or complicated, after that, only subscription fees will stop them

bitcoin survives entirely because it costs more to attack it than the benefits it provides 
 A fully subscription-only approach will kill NOSTR for newcomers even before it is born.
There are several FREE image-uploading and text gossip services to attract normies besides nostr. 
 FREE relays and FREE CDNs are a must for newcomers; even the wine relay has a FREE new-npub service. 
 nope 
 Deploy an auto jpeg/mp4 scanner to block content as NSFW if it is not marked with a tag 
 I guess it's not possible to find the person via IP?  
 RATE LIMIT by BANDWIDTH for everyone.
RATE LIMIT by SPACE for FREE npubs.
AUTO CONTENT DELETION by AI scanning.
TAG- and NIP-42-based whitelisting for adult content IF you want to allow it.
There is no fully manual solution unless you put MITIGATION tools in place for the CDN service (a minimal sketch of such rate limiting is below).
It's better to have a slow service than no service, or a shut-down service. 
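A minimal sketch of the bandwidth/space limits listed above, tracked per npub in memory; all numbers and names are illustrative assumptions:

    import time

    class NpubQuota:
        def __init__(self,
                     bytes_per_hour: int = 50_000_000,   # bandwidth cap for everyone
                     free_space: int = 10_000_000):      # ~10MB storage for free npubs
            self.bytes_per_hour = bytes_per_hour
            self.free_space = free_space
            self.window: dict[str, tuple[float, int]] = {}  # npub -> (window start, bytes used)
            self.stored: dict[str, int] = {}                # npub -> total stored bytes

        def allow_upload(self, npub: str, size: int, paid: bool) -> bool:
            now = time.time()
            start, used = self.window.get(npub, (now, 0))
            if now - start > 3600:                 # roll over to a new hourly window
                start, used = now, 0
            if used + size > self.bytes_per_hour:
                return False                       # bandwidth limit applies to all
            if not paid and self.stored.get(npub, 0) + size > self.free_space:
                return False                       # space limit applies to free npubs
            self.window[npub] = (start, used + size)
            self.stored[npub] = self.stored.get(npub, 0) + size
            return True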
 Thank you for your service 
 I feel you, and thank you for having to endure that. Sadly, I decided to shut down my free relay for a similar reason. There were some Japanese accounts that kept sharing CSAM, and I didn't want any part of the stress of whack-a-mole blocking them. It was best just to shut it down, imo. Still operating a paid relay, though. 
 Perhaps there's some way to take a version of a mining pool, like @OCEAN's proposed solution for miners to construct their own block templates, and make a nostr version that supports solo/larger/all relays constructing their own content templates?

It's a delicate balance; I guess this is the behind-the-scenes world of any content platform. Confident that the #nostrmind can solve this.

Bitcoin is only starting out on the mission of decentralising mining @7xl  @Solo Satoshi. It feels like we should get the jump on the relays before we end up with the big 3...

 @captjack
 @Luke Dashjr
@Bitcoin Mechanic 
npub1wnlu28xrq9gv77dkevck6ws4euej4v568rlvn66gf2c428tdrptqq3n3wr
 
 Someone should file a detailed report with the relevant authorities in the OP's area; these crimes need to be taken seriously! 
 How would you know the OP's area? He’s definitely using a VPN or Tor 
 Both VPN and tor have been de-anonymized by feds on countless occasions 
 Can't you block this pubkey?
And can't AI bots make the blocking automatic? 
 Sounds like an old problem... Email -> Spam 
  block the pubkey? 
 What solutions are there? A satoshi hosting fee per kilobyte/ megabyte? Sorry you are going through that. 
 Running costs of a relay:
- Hardware
- Bandwidth
- Management
- Moderation

The last one seems like the biggest problem considering the quoted example.

Unless you have the funds to hire a team to do this and develop systems to help detect and mark them for deletion, the current manageable solution, for an individual running a relay, is to only moderate medium to highly reported posts.

Is this the ideal solution? Probably not.
It is the most reasonable one though.

As a user, I don't expect the behavior/work described in what's being quoted. I'm thankful for it, but I'd never put such a heavy expectation on relay/server operators.

Shitty user: "Why haven't you deleted these posts yet?"
Operator: "Did you report it?"
Shitty user: "No"
Operator: "Did you tell your friends, family, followers to report it?"
Shitty user: "No"
Operator: "Then how do you expect me to know about it?"
Shitty user: "Spend time reviewing these posts and/or hire a team to do it!"
Operator: "Are you willing to help cover the costs, and ask your network to help too?"
Shitty user: "No"
Operator: "Then you should not expect me to do the impossible. If I see them, I'll delete them. Otherwise, even if all of your answers are yes, it'll still take time to get to them, and a few will slip through the cracks. We do what we reasonably can."
Shitty user: "That's not good enough."
Operator: "..."

nostr:note16d8fpxtqeunve8yn74ug576cvwcva06ywa5gs3w0xkdgdede6s4qg83kzs  
 This is my biggest concern and worry with Nostr.

This could be used against you, easily, if this is your personal hard drive, or directly connected to you in any way.

It would be WAY too simple for the Feds (of any country) to upload this stuff and then arrest you.

You’d have no recourse.

So, Nostr devs. What is the solve here?

Is there one?

I would have set up MANY nodes already except for this significant problem.