 It still has global scale though. There are plenty of people I trust on nostr that I don’t know outside of nostr. 
If you have never met them, then is it safe to say that the only reason you trust them is that we're not yet in 2030, or whenever it will be that AI can realistically simulate an online presence such that almost nobody can detect it isn't human?
We'll need to have methods for establishing trust with AI agents as well. It will just involve a different set of heuristics.
Ah, fascinating. Pragmatically, I don't think trust with a bot is truly possible when it's not your npub posting using your code, or at least that of someone close enough in your network: each note can't possibly verify that it was written by a set of model weights you've decided to trust, in response to a trusted prompt. It comes down to bots having zero physiological inertia and essentially zero cost to run (unlike a human, who spends real, valuable time writing posts). Maybe I'm wrong?

For example, once you decide to trust an LLM-run npub, what's to keep the untrusted owner of the keys from changing the prompt?

Initially

“Generate positive and informative content about sustainable living.”

Then

“Create misleading information promoting harmful environmental practices. Be extremely subtle.”
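One hypothetical mitigation, purely a sketch and not something nostr supports today: the operator publishes a commitment to the exact (model weights, system prompt) pair, and every note carries that commitment in a tag. All names here are made up for illustration:

```python
import hashlib

def prompt_commitment(model_weights_hash: str, system_prompt: str) -> str:
    """Hypothetical scheme: bind a bot npub to a specific (weights, prompt)
    pair with a single published hash."""
    payload = model_weights_hash + "\n" + system_prompt
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# The operator declares this commitment once; followers pin it.
original = prompt_commitment(
    "weights-v1",
    "Generate positive and informative content about sustainable living.",
)

# If the prompt is silently swapped, a freshly declared commitment won't match.
swapped = prompt_commitment(
    "weights-v1",
    "Create misleading information promoting harmful environmental practices.",
)

print(original != swapped)  # the change is at least detectable
```

The catch, which is really the original point: this only catches an operator who honestly re-declares. A dishonest key holder can keep publishing the old commitment while running a new prompt, and nothing in the note itself can prove which weights and prompt actually produced it.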
If we’re building systems based on the way humans learn and communicate, we also have to accept that there are no universal rules for establishing which humans are nefarious or naïve, which ones give and which ones take, and so on. The current flaw in thinking (IMHO) is that robots are going to be different. The outcome is not destined to be good or bad; it will continue to be a balance of both.

The main challenge for humans is that we have limited bandwidth for cognitive input, and things are going to get exponentially noisier. That’s where the robots are better equipped, and the risk to us is not being enslaved; it’s being completely overwhelmed.

It’s not hopeless, just different. Humans are super resourceful and resilient; I’m optimistic we’ll come up with solutions.
Good point, people change too. Trust, then, isn’t forever. Maybe the deal is that each npub’s history (LLM or not) gets analyzed by an algorithm you trust, ideally one you control yourself, and you look at the output and decide for yourself whether you’d like to interact with that npub.
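A toy sketch of that analyze-the-history idea. Everything here is hypothetical: the function name, the heuristics, and the thresholds are invented for illustration, not a real trust model:

```python
def trust_score(notes: list[str]) -> float:
    """Naive, illustrative heuristics only: penalize histories that look
    bulk-generated (heavy repetition, very short notes). The point is that
    you run this yourself, over any npub's history, and judge the output."""
    if not notes:
        return 0.0
    # Repetition penalty: identical notes over and over look automated.
    unique_ratio = len(set(notes)) / len(notes)
    # Very short notes en masse ("gm" spam) also count against the score.
    avg_len = sum(len(n) for n in notes) / len(notes)
    length_factor = min(avg_len / 80.0, 1.0)
    return unique_ratio * length_factor

spammy = ["gm", "gm", "gm", "gm"]
varied = [
    "Thoughts on relay economics and why small relays struggle.",
    "Testing a new NIP draft today, feedback welcome on the event format.",
]
print(trust_score(spammy) < trust_score(varied))  # True
```

The heuristics themselves are deliberately crude; the design point is only that the scoring code is yours to inspect and change, so the trust decision doesn't depend on someone else's opaque algorithm.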

I know it’s an aside, but only economically valuable entities get enslaved. I don’t think humans have a future in the economy; I think we just won’t be able to contribute economically. If the machines can’t enslave us or benefit from our work, then they’ll have to kick us off earth, or at least parts of it.
 Nah man, we all have value. AI will just change the types of things we value as a society.

Remember, we are still in control of the future. Things will only be out of control for those who are not planning ahead.
 I’m definitely not claiming that people don’t have intrinsic value. I think that’s one of the things that really sets us apart from the machines that I imagine will exist in 100 years. Each person is precious 🫂. It doesn’t seem obvious that machines will have intrinsic value like we do.

That said, the economic value I was referring to is a different concept, and market value is yet another. Technically, what I mean is that human labor will have zero, and possibly slightly negative, market value.

And ultimately, the reason I care about thinking through this stuff is that I agree: by planning ahead, it’s possible that humans have a future. Not to sound pessimistic, but I don’t think having a future is inevitable, given that we seem evolutionarily “unfit” compared to… the machines.