Why I think the idea of “superintelligence” and “AGI” is HEAVILY exaggerated or misunderstood:
Assume we have AI much smarter than the average human, smarter than the typical PhD (granted, "smart" and "PhD" are not at all equal, but for the sake of simplicity). If (or when) this occurs, it will not mean AI can just invent whatever we need or make every decision better than anyone else. And I think all we have to do is look at humans to make this simple assessment —
• If we asked a physicist and a biologist what the most important thing to focus our time and resources on is, do you suspect the physicist would find something related to physics and the biologist something biological?
This points to the question of specialty. What an AI is trained on will determine what it values and how, and no amount of information will make it perfect and forever aligned with the truth at all times. It will always be weighted toward something, because the question of WHAT to value, in training and in dedicating resources, is present at all stages. Assuming AI will just magically come up with the answer presupposes that we already have it.
• In addition, the answer to "where should we devote resources?" isn't static. It changes year to year, month to month, sometimes even minute to minute. It is a question of value and judgment. The only way to sort out this relationship is through trade and competition, which points to the **necessity** of AIs that compete and exchange data and resources.
• General intelligence is useful, but extremely inefficient. Generalists are great to have for combining and relating ideas, but specialists drill down into the true details and do the dirty work of actually building and fine-tuning the world. Specialization isn't just an economic phenomenon; it's a physical reality of the universe. It will be the same with AI, because AI doesn't defy universal laws. It's just a computer program.
— A giant, trillion-dollar cluster AGI will not be as valuable, or produce nearly as good results or decision-making, as 10,000 much smaller, specialized AIs, each focused on its own corner and trading resources with others to accomplish its task or test the paths of progress apparent from its vantage point (see the toy sketch below). Nothing in nature resembles the former.
• Intelligence isn't an omnipotent, unrestricted power, and mental intelligence isn't the only kind of intelligence. I think as humans we have become deeply arrogant about the fact that we are "so smart," and we have begun to worship our own intelligence in such a way that if we ever imagine something smarter, it MUST be God, without any limits or flaws at all. Yet there is nothing to suggest this. The "smartest" people today often have the greatest blinders on, and everyone is only as good as the information they have and the lens of values through which they see everything.
While the intelligence explosion will be shockingly disruptive and revolutionary in many ways, and while I do see it as an extremely likely outcome in the rather near future, I think the vision of a giant, all-powerful AGI dropped on the world like a nuclear bomb is increasingly a projection of our own ignorance and arrogance. It simply doesn't hold water, imo.
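To make the generalist-vs-specialists comparison concrete, here's a deliberately crude toy simulation. Every number in it is an invented assumption, not a measurement, and the imperfect "router" stands in for the trade and exchange between AIs:

```python
import random

random.seed(0)

DOMAINS = ["physics", "biology", "economics", "materials"]

# Every number below is an invented assumption, for illustration only.
GENERALIST_ACC = 0.70   # one big model, decent at everything
SPECIALIST_ACC = 0.95   # a small model working inside its own domain
OFF_DOMAIN_ACC = 0.30   # a specialist handed a task outside its domain
ROUTER_ACC = 0.90       # how often the "market" routes a task correctly

def generalist(task_domain):
    # The giant cluster attempts every task at the same moderate skill.
    return random.random() < GENERALIST_ACC

def routed_specialists(task_domain):
    # The routing step stands in for the trade/exchange between AIs.
    routed_correctly = random.random() < ROUTER_ACC
    acc = SPECIALIST_ACC if routed_correctly else OFF_DOMAIN_ACC
    return random.random() < acc

tasks = [random.choice(DOMAINS) for _ in range(100_000)]
print("generalist :", sum(map(generalist, tasks)) / len(tasks))
print("specialists:", sum(map(routed_specialists, tasks)) / len(tasks))
```

Under these made-up numbers, even with 10% of tasks misrouted, the specialists come out ahead (~0.89 vs ~0.70). The sketch proves nothing; the point is only that coordination overhead doesn't automatically erase the gains from specialization.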
Covered a lot of these ideas in the 31st episode of Ai Unchained:
https://fountain.fm/episode/98UjiXJsa1b2VusbQQur
See my previous notes on this subject. We agree. I've said I don't think that AGI is unachievable, but we will discover the cure for cancer far before we discover AGI. AGI is nothing like the LLMs we currently have. What we have is predictive text running on a machine with lots of compute, having seen billions of examples, feeding us a response. That's NOTHING like what AGI will look like. If you think AGI is around the corner, you must also think we're about to cure cancer in a few months. The compute required for the former is an order of magnitude more than what is required for the latter.
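To spell out what I mean by "predictive text," stripped of all the scale: the core mechanism is just sampling the next token from what the model has seen before. A deliberately tiny sketch, with a word-level frequency table standing in for a real model (the corpus and everything else here is made up for illustration):

```python
import random
from collections import Counter, defaultdict

random.seed(1)

# A tiny stand-in corpus; real models have seen billions of examples.
corpus = ("the model predicts the next word . the next word is chosen "
          "from what the model has seen . the model has no plan .").split()

# The entire "model" is a table of which word follows which, and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Sample the next word in proportion to how often it followed `word`.
    options = follows[word]
    return random.choices(list(options), weights=list(options.values()))[0]

word, output = "the", ["the"]
for _ in range(10):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))
```

Obviously a real LLM replaces the frequency table with a transformer over billions of parameters, but the interface is the same: given what came before, emit a likely next token.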
Absolutely agree. The idea that an LLM is anything close to AGI is maddening to me.
People want an AGI for *judgment*, which is different than, say, doing multivariable calculus.
The same people who so eagerly outsourced their own judgment to laughably flawed “experts” the last few years are the ones in whom the fantasy is the strongest.
Dude, yes… I see people pin all sorts of hopes and desires on AGI and it really disturbs me if I’m being honest.
I said, and have tried to be clearer about my wording more recently, "the intelligence explosion is in our very near future," not AGI. I increasingly think "AGI" suffers from a fundamental semantic problem: it's undefinable.
This is a really thoughtful take on AI and its limitations. I like how you point out that specialization is crucial, even in AI, and that intelligence isn't some all-powerful force. It's true, what AI values will depend on what it's trained on, and that won't always align with every situation or need. The idea that smaller, specialized AIs working together could be more effective than one giant AGI makes a lot of sense too. Definitely gives me something to think about!
You nailed it on the idea that we worship our own intelligence, which is ironic when you consider that most people alive today aren't nearly as smart or resourceful as previous generations.
Generally I agree, although we might see surprising things emerge from the connectivity of a large number of specialized AIs. They could even be connected by a free market such as Bitcoin and create an emergent superintelligence?
And when you say that a superintelligence having the answer requires that we already have the answer, I think it's possible that we already have many answers to our questions without being aware that we have them. In fact, we might already have the answers to all of our questions if we just knew how to parse the data.
Maybe have a general AI oversee a bunch of specialized AIs to try to combine depth and breadth?
That might be too likely to be misused.
What happens to AGI when the interest rates come back to reality?
It merely gets revealed that any massive, AGI-focused model is actually far more inefficient than we had originally suspected it would be.
Interesting take. While I'm not familiar enough with the space to even guess, I truly hope you are right.
Tech innovation is good, but I'm not convinced AGI is good.
I try to focus on learning about #bitcoin because, from my 3 years of research, it's going to revolutionize the world in more ways than we can know!
As long as we can flip the power switch off, AGI won't be a problem.
Good piece. IMO, it's all about control and prioritization by those with it.
I was recently on the project team of a half-billion-dollar (and extremely basic) AI implementation that ended up being scrapped because of exactly that. Those with the overarching control didn't want to cede an inch and ultimately blew up the entire thing.
Seeing someone seriously discussing something like this just makes me think that AI really is smarter.
Discussing why AGI won't really be a thing, and how we've misunderstood the idea of intelligence, makes you think that AI is smarter than people?
It makes me think that people, even intelligent people, have accepted the discussion.
There are discussions that should not be accepted, premises that should never be put on the table, because the simple fact of treating them as a valid discussion denotes stupidity and proves that the individual who agreed to discuss in such terms has already validated the opponent's ideas.
For example: discussions about gender identity. Because some idiot at some college in Canada started this kind of crazy conversation, today there are places where we must ask people's pronouns before we talk to them, because we can be arrested if we address them the wrong way.
If this is not living in a mental hospital, I don't know what is.
Therefore, the mere discussion of AI as intelligence is typical of those who do not understand what intelligence is, or who learned about intelligence in a Kantian way.
Are you implying that you think AGI is going to happen and that it will just replace humans? I'd love for you to defend that claim if that's what you are arguing.
I was being sarcastic. I can't even conceive of a discussion that takes AI seriously.