 Oh god, just finished the most dreaded part of my responsibilities and fully automated CSAM reporting to NCMEC. It still requires a human (fuck my eyes) to confirm and press the submit button in the isolated environment. 😭😭😭😭😭😭😭😭😭😭😭😭😭
(Why you upload CSAM, anon? Stop it, get help! Please, I beg you!) 
 How does this process work exactly? Is there an AI API service you use to detect whether uploads contain illicit content, and does it return a confidence score that requires your approval when it’s below a certain threshold?

This sounds brutal 
 Simple and complex:
User reports.
A moderator catching it.
AI detection of underage nudity.
A PhotoDNA edge hash match.

Each then has to be manually confirmed by the admin (me or NB), and we have to click a button to package and submit a report to the NCMEC people.
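
Roughly, the flow looks something like this. This is just a simplified sketch in Python, all names made up, not the actual implementation:

```python
# Simplified sketch of the flagging flow (hypothetical names, not the real code).
from dataclasses import dataclass
from enum import Enum


class FlagSource(Enum):
    USER_REPORT = "user_report"
    MODERATOR = "moderator"
    AI_CLASSIFIER = "ai_classifier"
    PHOTODNA_MATCH = "photodna_match"


@dataclass
class FlaggedMedia:
    media_id: str
    source: FlagSource
    confirmed_by_admin: bool = False


review_queue: list[FlaggedMedia] = []


def flag(media_id: str, source: FlagSource) -> None:
    # Any of the four detection paths lands the item in the same manual review queue.
    review_queue.append(FlaggedMedia(media_id, source))


def package_ncmec_report(item: FlaggedMedia) -> dict:
    # Hypothetical: build the report payload for NCMEC.
    return {"media_id": item.media_id, "source": item.source.value}


def confirm_and_submit(item: FlaggedMedia) -> None:
    # Nothing is reported until an admin has looked at it and pressed the button.
    item.confirmed_by_admin = True
    report = package_ncmec_report(item)
    print("submitting to NCMEC:", report)  # stand-in for the actual submission step
```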
 Couldn’t a hash match be reported automatically? I’m assuming this means an image’s hash matches another known-bad image.

Do user reports automatically undergo AI detection as secondary verification?

If a moderator catches one, then why not have it automatically reported? Unless this is a balance-of-power thing between you and the moderators.

Sorry for the barrage of questions, just trying to learn in case I ever have to accept uploads and store content.
 The hash is not a cryptographic hash, because that is super easy to change; it’s a special image hash that works on the visuals (a perceptual hash). It is not perfect, so it requires review.
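
For the curious, the general idea is something like this. PhotoDNA itself is proprietary, so the open-source pHash from the imagehash library stands in here; the file names and the distance threshold are just example values:

```python
# Illustration of perceptual vs. cryptographic hashing (imagehash pHash as a stand-in).
import hashlib

import imagehash          # pip install ImageHash
from PIL import Image     # pip install Pillow


def crypto_hash(path: str) -> str:
    # Cryptographic hash: change one pixel and the digest is completely different.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def visual_hash(path: str) -> imagehash.ImageHash:
    # Perceptual hash: visually similar images produce nearby hashes.
    return imagehash.phash(Image.open(path))


# A "match" means visually close enough, not byte-identical,
# so false positives are possible and a human still has to review it.
known_bad = visual_hash("known_bad.jpg")            # hypothetical file names
candidate = visual_hash("recompressed_copy.jpg")
if known_bad - candidate <= 8:                      # Hamming distance, example threshold
    print("possible match, queue for manual review")
```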

AI detection is done on all media. Users may report something that neither the AI nor a moderator could catch, so it needs verification.
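
To the earlier question about confidence thresholds, it’s conceptually something like this; the classifier, names, and the number are all made up for illustration:

```python
# Rough sketch of scoring every upload (hypothetical classifier and threshold).
from typing import Callable

REVIEW_THRESHOLD = 0.5  # example value, not the real setting

review_queue: list[tuple[str, str]] = []  # (media_id, reason)


def flag_for_review(media_id: str, reason: str) -> None:
    review_queue.append((media_id, reason))


def on_upload(media_id: str, image_bytes: bytes, classify: Callable[[bytes], float]) -> None:
    # classify() is assumed to return a confidence in [0, 1]; every upload goes through it.
    if classify(image_bytes) >= REVIEW_THRESHOLD:
        flag_for_review(media_id, "ai_classifier")


def on_user_report(media_id: str) -> None:
    # Reports go straight to manual verification, since people catch things the model misses.
    flag_for_review(media_id, "user_report")
```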

Moderators are the admins (only we are allowed to mark something as CSAM). It’s two-stage, so we don’t file false reports when a finger slips or some other issue/bug occurs.
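
The two-stage part is roughly this shape; a hypothetical sketch, not our real moderation code:

```python
# Sketch of the two-stage confirmation idea (hypothetical state machine).
from enum import Enum

ADMINS = {"admin_a", "admin_b"}  # placeholder names: only admins may mark or submit


class ReportState(Enum):
    FLAGGED = "flagged"      # detected or reported, nothing filed yet
    MARKED = "marked"        # stage one: an admin marked it as CSAM
    SUBMITTED = "submitted"  # stage two: a separate action packaged and filed the report


def mark_as_csam(state: ReportState, actor: str) -> ReportState:
    if actor not in ADMINS:
        raise PermissionError("only admins can mark media as CSAM")
    if state is not ReportState.FLAGGED:
        raise ValueError("only flagged items can be marked")
    return ReportState.MARKED


def submit_report(state: ReportState, actor: str) -> ReportState:
    # Submission is a second, deliberate click, so a slipped finger or a UI bug
    # can't mark and file a report in a single step.
    if actor not in ADMINS:
        raise PermissionError("only admins can submit NCMEC reports")
    if state is not ReportState.MARKED:
        raise ValueError("item must be marked before it can be submitted")
    return ReportState.SUBMITTED
```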

No problems.