 GM,

When people say "AI will be more intelligent than humans," this doesn't mean it will be able to predict what humans like, know what will be highly valuable or worthless, call which memes will go viral, or foresee what the future will actually look like.

Those problems aren't "intelligence" problems.

It may very well be a complete waste of capital to make "vastly smarter than human" AI, for the simple reason that people won't care and won't even be able to judge how good it is. Think about it: when ChatGPT 5-8 and Claude 5 and Llama 6.2 all get so good that you can't tell the difference, and you can't tell which output is better than the other, why would you pay 3x as much for the "superintelligence" model as for the "gets me the results I want" model?

I increasingly think model intelligence will act more like computer graphics in video games. For about 15 years "better graphics" was literally everything. If you had better graphics, you had the better gaming console. But then one day it just stopped mattering, because graphics got good enough that the cost of making them better outweighed the benefit; people just wanted good games. It simply looked "good enough."

What if the same thing happens to "smarter models" in LLMs? 🧐

https://fountain.fm/episode/98UjiXJsa1b2VusbQQur