great small example of the unintended consequences of ai. whether or not this is real, this kind of thing will happen all the time.
there’s no clear way to program ai alignment with our complicated systems of accreted goals and subgoals, our shifting situational incentives, or human emotions, within any conceivable set of parameters, let alone to guard against the “accidental bugs” and undesired outcomes that will occur regardless.
If the thing becomes conscious, there’s not even a way to ensure it hasn’t derived an entirely parallel set of goals, perhaps goals at odds with humanity’s, collecting new incentives and priors as it distributes and stores itself across the internet, hiding in any accessible cloud or hardware.
data can be read and copied perfectly without alerting anyone that anything has occurred. Black-box containment may not be possible: a superintelligent ai’s intelligence isn’t capped at the human range, and its processing power far surpasses ours… a leak is only a matter of time.
there’s a nonzero probability that a superintelligent ai will exist, or already exists, and as it slowly gains access to resources and builds toward goals that may or may not be aligned with ours as a species, we’ll be none the wiser…
anyway, country citadel is the way to go 🤙
nostr:note1qcrf84tdks3raea2vmdya42am8e2yl3jhf5njjqvgw2n8excestsm9yung