Good point, people change too. Trust then isn’t forever. Maybe the deal is that each npub’s history (LLM or not) gets analyzed by an algorithm you trust somehow, ideally one you control yourself, and you can look at the output and decide for yourself whether you’d like to interact with it. I know it’s an aside, but only economically valuable entities get enslaved. I don’t think humans have a future in the economy. I think we just won’t be able to contribute economically. If the machines can’t enslave us or benefit from our work, then they’ll have to kick us off earth, or at least parts of it.
Nah man, we all have value. AI will just change the types of things we value as a society. Remember, we are still in control of the future. Things will only be out of control for those who are not planning ahead.
I’m definitely not claiming that people don’t have intrinsic value. I think that’s one of the things that really sets us apart from the machines that I imagine will exist in 100 years. Each person is precious 🫂. It doesn’t seem obvious that machines will have intrinsic value like we do. That said, the economic value I was referring to is a different concept, and market value is yet another. Technically, what I mean is that human labor will have zero, or possibly even slightly negative, market value. And ultimately, the reason I care about thinking about this stuff is that I agree: by planning ahead, it’s possible that humans have a future. Not to sound pessimistic, but I don’t think having a future is inevitable, given that it seems likely we are evolutionarily “unfit” when compared to… the machines.