 GM,

When people say "AI will be more intelligent than humans," that doesn't mean it will be able to predict what humans like, what will be highly valuable or worthless, which memes will go viral, or what the future will actually look like.

Those problems aren't "intelligence" problems.

It may very well be a complete waste of capital to build "vastly smarter than human" AI, for the simple reason that people won't care and won't even be able to judge how good it is. Think about it: when ChatGPT 5-8 and Claude 5 and Llama 6.2 all get so good that you can't tell the difference, and you can't tell which output is better than the other, why would you pay 3x as much for the "super intelligence" model as for the "gets me the results I want" model?

I increasingly think model intelligence will act more like computer graphics in video games. For about 15 years, "better graphics" was literally everything. If you had better graphics, you had the better gaming console. But then one day it just stopped mattering, because graphics got good enough that the cost of making them better stopped being worth it; people just wanted good games. It simply looked "good enough."

What if this same thing happens to the "smarter models" in LLMs? 🧐

https://fountain.fm/episode/98UjiXJsa1b2VusbQQur 
 GM  🤙🏽☕️☀️💪🏽 🤖 
 I believe the main issue we might face, even in the short run, is that LLMs depend on content created by humans. When there is no incentive for humans to create new content, where will LLMs get their input? From themselves, creating an endless loop of "knowledge"? 
 Yes and no, actually. It's less about getting "more intelligent source material" and more about extracting greater amounts of intelligence/knowledge from what it is given. Right now, if you feed a math book into an LLM, it won't be able to do math after you fine-tune it. It's actually horrifically stupid in the sense of extracting specific knowledge. It will be many layers of algorithmic improvements that begin to enable what some are calling "deep thinking" on a single piece of material, such that you can train it on a math book and it will actually extract and self-evaluate all of the lessons in the book until it legit can DO math because you gave it a math book. And I don't think that's a stretch either, it's just a few layers up from what we are doing now.

I actually cover it on the show in a few episodes because I thought this was a fundamental limitation too, but it isn't as serious a limitation as it seems, simply because of how LLMs scale with *compute*. 

In other words, a good LLM can, in fact, train a better one. (there's more to it than that but too long for a nostr note) 
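
A rough toy sketch of the loop I'm gesturing at (Python, every name here is made up purely for illustration): the model proposes "lessons" from a fixed piece of material, checks its own work, and only trains on what survives the check. A real pipeline would use actual inference and training steps; the arithmetic check below is only a stand-in for self-evaluation.

```python
# Toy sketch, NOT a real training pipeline. The function names are
# illustrative placeholders for inference/training calls.

def extract_candidate_lessons(source_text):
    """Stand-in for the model proposing worked examples from the material."""
    return [line.strip() for line in source_text.splitlines() if "=" in line]

def self_evaluate(lesson):
    """Stand-in for the model grading its own work (here: a toy arithmetic check)."""
    left, right = lesson.split("=", 1)
    try:
        return abs(eval(left) - float(right)) < 1e-9
    except Exception:
        return False

def fine_tune(model, verified_lessons):
    """Stand-in for a training step: the 'model' just absorbs verified lessons."""
    for lesson in verified_lessons:
        if lesson not in model["knowledge"]:
            model["knowledge"].append(lesson)
    return model

source = "2 + 2 = 4\n3 * 7 = 21\n5 - 1 = 7\nthe sky is blue"
model = {"knowledge": []}

candidates = extract_candidate_lessons(source)
verified = [c for c in candidates if self_evaluate(c)]
model = fine_tune(model, verified)

print(model["knowledge"])  # the wrong "lesson" (5 - 1 = 7) gets rejected
```

The point of the sketch is just that the useful training signal comes from the model's own verification loop over the same fixed material, not from finding new human-written text.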

However, you are correct in the sense that there is only so much knowledge (or even correct information) to draw from any particular piece of material. And it raises the question of whether the information is correct in the first place. If it's "super intelligent" about shit that's simply wrong, then it is meaningless. 

 
 No amount of computational power can ever make silicon and software intelligent. It still can only parse the data fed to it. When a system that is only fed cat pictures figures out how to do calculus, then I will grant it actual intelligence 
 Shortsighted...AI is the most dangerous invention man has ever created...

If not "computational power" what differentiates human intelligence from AI? 

(I'd argue that currently it's the "computational power" of the human mind)

If so, then adding "computational power" to AI would be the critical piece to creating actual intelligence.

 
 Yea, bombs are a bit more dangerous than a glorified search engine. What differentiates it is exactly what I wrote. I can go learn anything on my own. These glorified search engines need to be fed data. If no data about dogs is fed to them, they cannot know dogs exist. 
 Today...that is true. Tomorrow it will not be--

To envision today's risk requires one to be able to envision the future.  
 Added to my queue 🔥 
 More processing power does not equal more intelligence. 
 That's correct, but that's not what I'm referencing with the size or capability of LLMs. 
 there is no intelligence in ai

just increased computing power and speed 
 If you can just throw enough computing power at it, you will discover that the answer to the ultimate question of life, the universe, and everything is 42 
 This is true 
 ...water cannot rise above its source. 
 That’s good in theory, but it doesn’t really apply here, because the gap between the amount of knowledge we have and the amount that any one person can understand or take into account is orders of magnitude.

So in this case the “source” isn’t what an intelligent human can produce, it’s the collection of everything that all intelligent humans have produced.

So LLMs at a large enough scale can account for much more than the average human, and even display “logic” far better than most humans, merely by consequence of accounting for greater pools of knowledge/information.

But you are correct in theory, just not in practice with what I think you are implying.

My evidence is just that state-of-the-art LLMs are already more intellectually reliable and have stronger logic than the average person. The average person is real dumb, honestly. 
 Beyond that, people in charge (CEOs, politicians) often ignore the recommendations of very intelligent humans. They will likely do the same with AGI/ASI, even if they pay big dollars for it. I've witnessed CEOs pay millions to consulting firms and then turn around and ignore all the recommendations.