Simple and complex:
User reports.
A moderator catching it.
AI detection of underage nudity.
A PhotoDNA edge-hash match.

Each then has to be manually confirmed by an admin (me or NB), and we have to click a button to package and submit a report to NCMEC.
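For anyone curious how that fits together, here is a minimal sketch (all names hypothetical, not Oddbean's actual code) of the four flag sources feeding one review queue, with the NCMEC submission gated behind an explicit admin click:

```python
from enum import Enum, auto

# A sketch, not the real implementation: the four flag sources
# described above, all feeding one manual review queue.
class FlagSource(Enum):
    USER_REPORT = auto()    # a user files a report
    MODERATOR = auto()      # a moderator catches it directly
    AI_CLASSIFIER = auto()  # automated underage-nudity detection
    HASH_MATCH = auto()     # PhotoDNA-style hash hit

review_queue: list[tuple[str, FlagSource]] = []

def flag(media_id: str, source: FlagSource) -> None:
    """Any source can flag, but flagging only enqueues for review."""
    review_queue.append((media_id, source))

def submit_to_ncmec(media_id: str) -> None:
    """Runs only when an admin explicitly clicks the submit button."""
    print(f"packaging and submitting NCMEC report for {media_id}")

flag("media-123", FlagSource.HASH_MATCH)  # automatic: queued, not reported
submit_to_ncmec("media-123")              # manual: the admin's click
```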
Couldn’t a hash match automatically be reported? I’m assuming this means an image’s hash matches another known-bad image.

Do user reports automatically undergo AI detection as secondary verification?

If a moderator catches one, then why not have it report automatically? Unless this is a balance-of-power thing between you and the moderators.

Sorry for the barrage of questions; just trying to learn in case I ever have to upload and store content.
The hash is not a cryptographic hash, because changing that is super easy; it’s a special image hash that works on the visuals. It is not perfect, so it requires review.
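To illustrate the difference, here is a small sketch using the open-source ImageHash library's pHash as a stand-in (PhotoDNA itself is proprietary and access-restricted):

```python
import hashlib

import imagehash       # pip install ImageHash
from PIL import Image  # pip install Pillow

# Two images that differ by a single pixel.
a = Image.new("RGB", (256, 256), "gray")
b = a.copy()
b.putpixel((0, 0), (255, 255, 255))

# A cryptographic hash changes completely after a one-pixel edit,
# so it is trivial to evade by re-encoding or nudging the image.
print(hashlib.sha256(a.tobytes()).hexdigest()[:16])
print(hashlib.sha256(b.tobytes()).hexdigest()[:16])

# A perceptual hash is derived from the visual content, so the two
# images land on (nearly) the same value.
ha, hb = imagehash.phash(a), imagehash.phash(b)
distance = ha - hb  # Hamming distance in bits
print(distance)     # 0 or close to it

# Matching means "distance under a threshold", not equality, which
# is why a hit is a signal for human review rather than proof.
THRESHOLD = 8  # illustrative; real deployments tune this
print("possible match" if distance <= THRESHOLD else "no match")
```

That thresholded matching is also why a hash hit can't simply be auto-reported: a near-match can be a false positive.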

AI detection is done on all media; users may report something that neither the AI nor a moderator could catch. That needs verification.

Moderator here means admin (only we are allowed to mark something as CSAM). It’s two-stage so that we don’t file a false report when someone’s finger slips or some other issue or bug occurs.
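Roughly, the gate looks like this (a sketch with hypothetical names, not the real implementation):

```python
# Two-stage gate: only admins can act, and a report needs two
# separate deliberate steps, so one slipped click (or a bug)
# can never file a report on its own.
ADMINS = {"admin_a", "admin_b"}
marked: dict[str, str] = {}  # media_id -> admin who marked it

def mark_as_csam(media_id: str, actor: str) -> None:
    """Stage one: an admin marks the media."""
    if actor not in ADMINS:
        raise PermissionError("only admins may mark something as CSAM")
    marked[media_id] = actor

def confirm_and_report(media_id: str, actor: str) -> None:
    """Stage two: a second explicit action actually files the report."""
    if actor not in ADMINS:
        raise PermissionError("only admins may confirm and report")
    if media_id not in marked:
        raise ValueError("media must be marked before it can be reported")
    print(f"report filed for {media_id} "
          f"(marked by {marked[media_id]}, confirmed by {actor})")
```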

No problems.