Economists often talk about “factors of production”: things like labor, land, and capital. The phrase “marginal returns to labor/land/capital” captures the idea that in a given situation, a given factor may or may not be the limiting one – for example, an air force needs both planes and pilots, and hiring more pilots doesn’t help much if you’re out of planes. I believe that in the AI age, we should be talking about the marginal returns to intelligence, and trying to figure out what the other factors are that are complementary to intelligence and that become limiting factors when intelligence is very high. We are not used to thinking in this way—to asking “how much does being smarter help with this task, and on what timescale?”—but it seems like the right way to conceptualize a world with very powerful AI.
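One hedged way to make the limiting-factor point concrete is a fixed-proportions (Leontief) production function; the symbols and the air-force framing below are illustrative assumptions, not anything specified in the original text:

$$
Y = \min(aP,\; bL)
$$

Here $Y$ is output (say, sorties flown), $P$ is planes, $L$ is pilots, and $a$, $b$ are productivity constants. When $aP < bL$, the marginal return to one more pilot is zero until more planes are added. Substituting intelligence for pilots gives the same picture: once the complementary factors bind, additional intelligence yields little further output on that task.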
Intelligence is always limited by wisdom and by being deployed in the right place, just like any other form of capital.
Exactly. I’ve always thought the discussion of AI focuses too much on the Intelligence and neglects the Artificial.
This touches on the concepts of Marginal Utility and the Marginal Rate of Substitution. When it comes to inhabiting a world with AGI and ASI, we also need to think in terms of comparative advantage. A person doesn’t need to be the best at anything in order to contribute; they just need to contribute something to the system. You can be worse at everything, but your humble productivity still adds to the tally of the economy.
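A small worked example of comparative advantage, with entirely made-up numbers: suppose that per hour an ASI can produce 100 units of good A or 100 of good B, while a person can produce 2 of A or 1 of B. The opportunity costs are

$$
\text{ASI: } 1\,A \text{ costs } 1\,B, \qquad \text{Person: } 1\,A \text{ costs } \tfrac{1}{2}\,B.
$$

The ASI is absolutely better at both goods, but the person gives up less B per unit of A than the ASI does, so total output is highest when the person specializes in A and the ASI in B. Being worse at everything does not make one’s output worthless.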
In a world with hyper-efficient machines, there’s a chance that the costs of sustaining a person will be so minuscule that they could still be covered by that person’s own meagre (compared to ASI) output. Especially if humbler personal AI can be relied on to fortify an individual or a household, we might be able to double down on the autonomy of economically independent citizens rather than provide for them through a central planner.
It’s much more conducive to the flourishing of free will and agency.