Somebody’s going to write an AI bot that actually publishes all the hallucinated software libraries that AI chatbots and other tools keep trying to use. Then they’ll be able to put backdoors in those packages, and we’ll have a security nightmare: nobody is ever checking the code that’s being generated and used all through these new large language models.
On the optimistic side, we could also have AI that scrutinizes all code and flags vulnerabilities. Not all software gets audited, and now we’d have a tireless auditor. Maybe too optimistic, but just an alternate take 😛
For AI to scan for vulnerabilities, it needs to build a map of the application logic: how every small detail interacts, and how those details may combine into an exploit chain. That’s not possible with a text predictor.
Agreed, but it can pick up small details, like whether a method with known vulnerabilities is used or an unscoped variable is exposed. It’s obviously not foolproof, but it could be a great helping hand.
Have you heard of static or dynamic analysis? 😆
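The kind of check described above (flagging use of known-risky methods) is exactly what static analysis tools do today, no LLM required. A minimal sketch in Python, walking a file’s AST and flagging calls to functions with well-known security pitfalls; the list of risky names here is illustrative, not exhaustive:

```python
# Minimal static-analysis sketch: parse Python source into an AST and
# flag calls to functions with well-known security pitfalls.
import ast

# Illustrative list only; real linters (e.g. Bandit) ship far larger rule sets.
RISKY_CALLS = {"eval", "exec", "pickle.loads", "yaml.load", "os.system"}

def call_name(node: ast.Call) -> str:
    """Best-effort dotted name for a call node, e.g. 'pickle.loads'."""
    func = node.func
    if isinstance(func, ast.Name):
        return func.id
    if isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
        return f"{func.value.id}.{func.attr}"
    return ""

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, call_name) pairs for every risky call found."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            name = call_name(node)
            if name in RISKY_CALLS:
                findings.append((node.lineno, name))
    return findings

snippet = "import os\nuser_input = input()\neval(user_input)\nos.system(user_input)\n"
print(scan(snippet))  # → [(3, 'eval'), (4, 'os.system')]
```

This only catches locally visible patterns; it says nothing about how data flows across the whole application, which is the harder exploit-chain problem raised earlier in the thread.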
Imagine a Library of Congress whose curation is 100% untrustworthy. nostr:note1kpjrlf06x3yvegjsyghtpjq7ppjkcgt0gs6f6ppkqpdk4ly4d6sse3qew3
They already have.