If we’re building systems based on the way humans learn and communicate, we also have to accept that there are no universal rules for establishing which humans are nefarious or naïve, which ones give and which ones take, etc. The current flaw in thinking (IMHO) is assuming that robots are going to be different. The outcome is not destined to be good or bad; it will continue to be a balance of both.

The main challenge for humans is that we have limited bandwidth for cognitive input, and things are going to get exponentially noisier. That’s where the robots are better equipped, and the risk to us is not being enslaved, it’s being completely overwhelmed.

It’s not hopeless, it’s just different. Humans are super resourceful and resilient, I’m optimistic we’ll come up with solutions. 
Good point, people change too. Trust, then, isn’t forever. Maybe the deal is that each npub’s history (LLM or not) gets analyzed by an algorithm you trust, ideally one you control yourself, and you can look at the output and decide for yourself whether you’d like to interact with it.
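A minimal, purely hypothetical sketch of that idea: re-score an npub’s whole history with a heuristic you control, rather than trusting a score someone else assigns. The event fields here loosely follow Nostr’s event shape (`pubkey`, `content`), but the function names and the spam heuristic are invented for illustration, not a real library API.

```python
# Hypothetical sketch: a user-controlled trust heuristic over an npub's
# posting history. The spam-phrase list and scoring rule are invented;
# swap in whatever signals you actually care about.

def trust_score(events, spam_phrases=("buy now", "airdrop")):
    """Return a score in [0, 1]: the fraction of events whose content
    contains none of the configured spam phrases."""
    if not events:
        return 0.0  # no history yet, nothing to trust
    clean = sum(
        1 for e in events
        if not any(p in e["content"].lower() for p in spam_phrases)
    )
    return clean / len(events)

# Example history for one npub (contents are made up):
history = [
    {"pubkey": "npub1example", "content": "gm, thinking about trust models"},
    {"pubkey": "npub1example", "content": "Free AIRDROP, buy now!!!"},
]
print(trust_score(history))  # → 0.5
```

The point of the sketch is the control flow, not the heuristic: because you run the scorer yourself, you can re-run it whenever the history changes, which handles the “people change too” problem.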

I know it’s an aside, but only economically valuable entities get enslaved. I don’t think humans have a future in the economy; I think we just won’t be able to contribute economically. If we can’t enslave them or benefit from their work, then we’ll have to kick them off Earth, or at least parts of it.
 Nah man, we all have value. AI will just change the types of things we value as a society.

Remember, we are still in control of the future. Things will only be out of control for those that are not planning ahead. 
 I’m definitely not claiming that people don’t have intrinsic value. I think that’s one of the things that really sets us apart from the machines that I imagine will exist in 100 years. Each person is precious 🫂. It doesn’t seem obvious that machines will have intrinsic value like we do.

That said, the economic value that I was referring to is a different concept, and market value is yet another thing. Technically, I think what I mean is that human labor will have zero, and possibly even slightly negative, market value.

And ultimately, the reason I care about thinking through this stuff is that I agree: by planning ahead, it’s possible that humans have a future. Not to sound pessimistic, but I don’t think having a future is inevitable, given that we seem evolutionarily “unfit” compared to… the machines.