 @4ebb1885 This population is likely to take the convincing, authoritative voice as something it is not (one reason why I think it was massively irresponsible to package LLMs with this style of interface, and especially with a style that projects faux-selfhood). FWIW, we've known about this problem since ELIZA, and yet nothing was done.

Second, and related, is the myth of algorithmic infallibility, e.g.: https://emptycity.substack.com/p/computer-says-guilty-an-introduction 3/ 
 @4ebb1885 I'm not sure where we go from here - but in general, I feel like we should be defaulting to caution in a way that we very much are not right now.

One technical point that I do want to push back on - I don't see how LLMs evolve from their current state of statistical bins into the "model of the world" that they currently lack. Adding more parameters and ingesting an increasingly polluted input set may simply not give any better results than today. /4 
 @640e31bb I agree with everything you just said

I'm not convinced LLMs can "model the world" if we keep making them bigger, and I'm very worried that we won't find a way to teach people not to fall victim to science-fiction thinking about what these things are capable of

That's why I talk about LLMs and not "AI" - I'm trying to emphasize that this is mainly about a subset of machine learning - spicy autocomplete - not about Data/Jarvis/Skynet 
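
To make the "spicy autocomplete" point concrete, here's a minimal, purely illustrative Python sketch: a toy bigram sampler that produces fluent-looking text from nothing but word co-occurrence counts. Real LLMs use transformers trained on vast corpora rather than bigram tables, but the core operation - predicting the next token from statistics of the input set - is the same kind of thing:

```python
# Toy illustration of "spicy autocomplete": a bigram next-token sampler.
# A deliberately tiny stand-in for what LLMs do at vastly larger scale:
# estimate P(next token | context) from training text and sample from it.
import random
from collections import defaultdict, Counter

def train_bigram(text):
    """Count which word follows which in the training text."""
    words = text.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(follows, start, length=10):
    """Sample a continuation one token at a time, weighted by observed counts."""
    out = [start]
    for _ in range(length):
        counts = follows.get(out[-1])
        if not counts:
            break  # dead end: this word was never followed by anything
        tokens, weights = zip(*counts.items())
        out.append(random.choices(tokens, weights=weights)[0])
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigram(corpus)
print(generate(model, "the"))
# e.g. "the cat sat on the rug" - fluent-looking output, no world model:
# the program only knows co-occurrence statistics, never meaning.
```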
 I absolutely hate how LLMs imitate humans and use "I" pronouns and express their opinions - I talked about that here: https://simonwillison.net/2023/Aug/27/wordcamp-llms/#llm-work-for-you.036.jpeg