 You could also introduce some checks and balances, like having a small representative committee drawn from various parts of the community exercise judgment over the decisions of the AI agent to ensure it doesn't do anything radical. 
 Who defines radical? Who decides the representatives? You're slow-boating your way back to human governance of humans. Humans using AI as a tool to help them make decisions seems like a more useful approach than adding an extra step and piece of government to monitor the AI. What happens when the AI determines that the optimal situation is for it to have free agency without human involvement? And if it's wrong or radical in one area, why is it any more trustworthy in any other area?

This just seems like a more complicated form of government to me, with an even greater potential for abuse of rights. It's a lot like having a government made up entirely of scientists and pragmatists. We'd end up right back at Hitler before long. Philosophy is important for 'optimizing' human life on earth, and AI as it stands isn't necessarily going to appreciate that. 
 Well, it's not pulling any levers or switches, and the human is the failsafe. 
 I just don't understand why it's there, then. We still have to elect the "failsafes" and define what a fail even is. It's just a more complicated version of what we already have. Having an AI make decisions that government shouldn't be involved in anyway isn't going to fix the problems that come from having government involved. Government can't possibly optimize life for everyone. All it can do is pick who it wants to win and eliminate the rest. The only real solution I see is to remove government entirely from anything that isn't regulating force against citizens and property. Having an AI decide the optimal winner doesn't fix the underlying flaws.

People should be left free to optimize for themselves (as long as they don't violate the property rights of others), not have government do it for them, AI or otherwise.

My issue is with what you think government should be doing. Having AI involved doesn't fix that for me. It probably just makes it worse. 
 I haven't once mentioned what the government should be doing or the size of the government in that situation, so I'm not sure how one can take issue with something they weren't told about. 
 Your model seems to be one big AI government with a small number of humans in charge of it via inputs and "failsafes."

I simply think that's a worse version of what we already have. I think the simpler approach is to just remove government from everything we can and replace it with nothing. 
 I think humans have an innate desire to fill voids with something, when the reality is that a void is sometimes the optimal situation.

Governments are so massive right now that most people can't conceptualize a void that simply isn't filled. The reality is that average people mostly want the same things and would, over time, likely optimize everything far more effectively for humans than any government or AI could.

I'd rather see micro AIs used privately in various industries to optimize within those industries. That would be a free-market approach, versus some god AIs that allegedly optimize many things at the hands of a few people.