 @4ebb1885 I always enjoy reading what you write, and there are some really interesting parts in the piece - and you also, en passant, pointed out that my carbon maths was wrong for LLMs (they're not as bad as I was claiming).

I'll also take the viewpoint that these things now exist, and so it is on us (as senior technologists) to see how we can use them to help new programmers - and that might, in turn, help democratise programming - which is, effectively, still a priest class today. 1/ 
 @4ebb1885 My concerns are twofold.

First, which I think you addressed in your piece, is the "grain of salt" - the skepticism that is required to take the output of LLMs and apply it.

You and I know that what they are producing is "spicy autocomplete" or "eerily accurate madlibs", depending upon your viewpoint. But the general population does not - a population in which 20+% of people do not understand how ranked-choice voting works. 2/ 
 @4ebb1885 This population is likely to treat this convincing, authoritative output as something it is not (one reason why I think it was massively irresponsible to package LLMs with this style of interface, and especially with one that projects faux-selfhood). FWIW, we've known about this problem since Eliza, and yet nothing was done.

Second, and related, is the myth of algorithmic infallibility, e.g.: https://emptycity.substack.com/p/computer-says-guilty-an-introduction 3/ 
 @4ebb1885 I'm not sure where we go from here - but in general, I feel like we should be defaulting to caution in a way that we very much are not right now.

One technical point that I do want to push back on - I don't see how LLMs evolve from their current state of statistical bins to a "model of the world", which they currently lack. Adding more parameters and ingesting an increasingly polluted input set may simply not give any better results than today. 4/ 
 @640e31bb I agree with everything you just said

I'm not convinced LLMs can "model the world" just by making them bigger, and I'm very worried that we won't find a way to teach people not to fall victim to science-fiction thinking about what these things are capable of

That's why I talk about LLMs and not "AI" - I'm trying to emphasize that this is mainly about a subset of machine learning - spicy autocomplete - not about Data/Jarvis/Skynet 
 I absolutely hate how LLMs imitate humans and use "I" pronouns and express their opinions - I talked about that here: https://simonwillison.net/2023/Aug/27/wordcamp-llms/#llm-work-for-you.036.jpeg