Yeah, reproducible builds are good at making sure the code matches the executable, but they say nothing about the quality of the code.
That's where reviews come in. It doesn't even have to be human review, although automated review systems can frequently be gamed.
An example: if I attest that my CI process compiled a library without any warnings under `gcc -Wall`, that means something, but it's ambiguous what. Maybe the developer scattered inline warning suppressions all over the place, or maybe they actually fixed everything the compiler was warning about.
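To make that concrete, here's a minimal sketch of the suppression trick. The `#pragma GCC diagnostic` directives are standard GCC; the function and variable names are just made-up example code. This file compiles cleanly under `gcc -Wall` even though it contains exactly the kind of thing `-Wall` exists to flag:

```c
#include <stdio.h>

/* Normally `gcc -Wall` would flag `unused` below via -Wunused-variable.
 * Wrapping the function in diagnostic pragmas silences the warning, so a
 * "compiled clean with -Wall" attestation still technically holds. */
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-variable"
static void do_work(void) {
    int unused = 42;  /* would trigger a warning without the pragma */
    puts("work done");
}
#pragma GCC diagnostic pop

int main(void) {
    do_work();
    return 0;
}
```

The upside is that these pragmas are trivially greppable, which is exactly why the suppression stats mentioned below would be a useful companion signal.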
Now if that same library also published stats on warning suppressions, that would be interesting too. The same could be done for passing automated test suites, code coverage, operating system compatibility, static and dynamic security tooling, and a bunch of other things.
If a person I know reviewed it, that would carry more weight with me in deciding whether to use it, since a manual review is harder for developers to undermine. Humans can often spot the sketchy tricks used to bypass automated checks, and they can catch things like logic errors, which scanners almost never find.