 #infosec people, help me out here.

There was an idea where you'd select people you trust for a certain domain, and then check to see if they reviewed #software and attested that it is "good" in some way.

For example, maybe you trust me to verify #security, but someone else to speak to #performance, and maybe a third person for #usability or something.
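
Roughly what I have in mind, as a minimal sketch (all the names and the data model here are hypothetical): attestations are scoped to a domain, and I only count the ones signed by people I've explicitly trusted for that domain.

```python
from dataclasses import dataclass

# Hypothetical trust map: reviewers I trust, scoped per domain.
TRUSTED = {
    "security":    {"alice"},
    "performance": {"bob"},
    "usability":   {"carol"},
}

@dataclass
class Attestation:
    reviewer: str  # who signed the review
    domain: str    # e.g. "security"
    package: str   # what was reviewed
    verdict: str   # e.g. "good"

def endorsements(package, attestations):
    """Which of my trusted reviewers vouched for this package, per domain."""
    out = {}
    for a in attestations:
        if a.package == package and a.reviewer in TRUSTED.get(a.domain, set()):
            out.setdefault(a.domain, []).append(a.reviewer)
    return out
```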

This is something I've heard at lobbycons all over the place, but never seen formally presented or implemented.

Does anyone know if progress has been made on this concept? Has it been tried and failed? Am I the only one who remembers people talking about this at the hotel bars? 
 I have thought of this as well. It's like what modern credentialism wants to be but fails at.  
 I've seen bitcoinbinary.org do something like that, at least for people attesting that they can do reproducible builds. I'd be interested in more like that for other domains. 
 Yeah, reproducible builds are good at making sure the code matches the executable, but they say nothing about the quality of the code.
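
To make the distinction concrete, a sketch (paths are hypothetical): the reproducible-build check is just comparing the digest of an independently rebuilt artifact against the published one. A match ties the binary to the source; it says nothing about the source itself.

```python
import hashlib

def sha256_file(path):
    """Digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical paths: artifact rebuilt locally from source vs. the published binary.
print(sha256_file("build/libfoo.so") == sha256_file("dist/libfoo.so"))
```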

That's where reviews come in. It doesn't even have to be human review, although automated review systems can frequently be gamed.

An example: if I attested that my CI process compiled a library without any warnings using `gcc -Wall`, it means something. Maybe it means the developer put inline compiler warning suppressions all over the place, or maybe they fixed up all the things the compiler was warning about.
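
A sketch of what that CI step might look like (the file name is made up); `-Werror` turns any warning into a failed compile, so a pass really does mean no warnings under `-Wall`:

```python
import subprocess

# Hypothetical CI check: compile one translation unit with warnings as errors.
result = subprocess.run(
    ["gcc", "-Wall", "-Werror", "-c", "libfoo.c", "-o", "libfoo.o"],
    capture_output=True,
    text=True,
)
attestation = {
    "check": "compiles clean under gcc -Wall",
    "passed": result.returncode == 0,
    "compiler_output": result.stderr,
}
```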

Now if that same library also had stats about warning suppressions, that might be interesting too. The same could be done with automated test suites passing, code coverage, operating system compatibility, static and dynamic security tools, and a bunch of other things.
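
One way those suppression stats could be gathered, as a rough sketch (this only looks for GCC's pragma; a real tool would cover other suppression mechanisms too):

```python
import re
from pathlib import Path

# Count inline GCC diagnostic suppressions in a source tree, so a
# "zero warnings" claim can be weighed against how it was achieved.
SUPPRESSION = re.compile(r"#pragma\s+GCC\s+diagnostic\s+ignored")

def count_suppressions(src_root):
    return sum(
        len(SUPPRESSION.findall(p.read_text(errors="ignore")))
        for p in Path(src_root).rglob("*.[ch]")
    )
```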

If a person I know reviewed it, that would likely have more influence on whether I'd want to use it, since it's harder for developers to undermine a manual review. Humans can frequently spot sketchy heuristic bypasses of the automated checks. And they can find things like logic errors, which scanners can almost never find. 
 agreed 
 I mean, I believe software audit companies do this internally.