@4ebb1885 @0683504d what types of traps have you identified so far?
@918f62c6 @0683504d the biggest one by far is hallucination - they can produce extremely convincing answers to all sorts of things which are entirely invented and unrelated to reality. Actually less of a problem for code, because if they hallucinate an API the code won't work when you test it!
@4ebb1885 @918f62c6 @0683504d from a coding perspective, if the model had access to fast compilers or code analysers, it could self-correct those hallucinations.
@815c3ddb @918f62c6 @0683504d that's exactly what ChatGPT Code Interpreter / Advanced Data Analysis does - it has Python, but you can extend it to be able to run other languages too https://til.simonwillison.net/llms/code-interpreter-expansions
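The self-correction loop sketched in this thread - run the model's code, catch the failure from a hallucinated API, feed the error back - could look something like this minimal Python sketch. The `check_generated_code` helper is hypothetical; a real system would send the captured traceback back to the LLM for another attempt.

```python
# Minimal sketch of the self-correction loop described above:
# execute model-generated code, and if it fails (e.g. because of a
# hallucinated API), capture the traceback so it can be fed back to
# the model. check_generated_code is a hypothetical helper, not part
# of any real product's API.
import traceback


def check_generated_code(source: str):
    """Execute candidate code in a fresh namespace.

    Returns None on success, or the traceback text that a driver
    loop would send back to the model as a correction prompt.
    """
    try:
        exec(compile(source, "<generated>", "exec"), {})
        return None
    except Exception:
        return traceback.format_exc()


# A hallucinated API fails immediately and produces feedback:
error = check_generated_code("import math\nmath.frobnicate(1)")
print(error is not None)  # the traceback names the bad attribute

# Valid code passes the check:
ok = check_generated_code("import math\nx = math.sqrt(16)")
print(ok is None)
```

This is the same principle the thread points at: execution is a cheap oracle, so invented APIs surface as errors rather than silently convincing prose.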