 https://apple.news/AJkltMXotST6NdxonbVMx3A 
 meh 
 Paywall. Want to copy and paste it on nostr? 
 You can read it on “Reader Mode” 
 James Cameron says the reality of AGI is 'scarier' than the fiction 
Writer ✍️ Lauren Edmonds 

James Cameron shared a virtual message at an AI and robotics summit.
He said he's "bullish" on AI but "not so keen" about AGI.
He said he worries that the technology will be in the hands of private corporations.
James Cameron, the writer and director of "The Terminator," is wary of artificial general intelligence, the still theoretical version of AI that can reason as well as humans.
In "The Terminator," which was released in 1984, an artificial intelligence network developed by the US Defense Department gains self-awareness and ultimately turns on the human race in a nuclear attack.
It's about as dystopian as one can imagine. Cameron said the reality might be worse.
In a virtual message about the future of AI for the Special Competitive Studies Project's AI+Robotics Summit, Cameron said that contrary to his movies, AGI will not come from "a government-funded program."
"It will emerge from one of the tech giants currently funding this multibillion-dollar research," he said.
"Then you'll be living in a world that you didn't agree to, didn't vote for, that you are co-inhabiting with a super-intelligent alien species that answers to the goals and rules of a corporation," Cameron said. "An entity which has access to the comms, beliefs, everything you ever said, and the whereabouts of every person in the country via your personal data."

Cameron said surveillance capitalism, where corporations collect consumer data and sell it for profit, can "toggle pretty quickly" into digital totalitarianism.
"At best, these tech giants become the self-appointed arbiters of human good, which is the fox guarding the hen house," he said.

"That's a scarier scenario than what I presented in 'The Terminator' 40 years ago, if for no other reason than it's no longer science fiction. It's happening."
Cameron said that while he's "bullish on AI," he's "not so keen on AGI because AGI will just be a mirror of us."
"Good to the extent that we are good, and evil to the extent that we are evil," he said. "Since there is no shortage of evil in the human world, and certainly no agreement of even what good is, what could possibly go wrong?"
Although Cameron rose to fame as a Hollywood director, he's also known for his tech ventures. He co-founded the visual effects studio Digital Domain in 1993. Cameron has gone on to incorporate advanced technology in his films, including the "Avatar" franchise.

Cameron has shared his thoughts about artificial intelligence's potential impact on society and filmmaking several times. During an appearance on the Netflix series "What's Next? The Future with Bill Gates," Cameron told the Microsoft cofounder that it's getting harder to write science fiction as AI progresses.
"It's getting hard to write science fiction. Any idea I have today is a minimum of three years from the screen. How am I going to be relevant in three years when things are changing so rapidly?" he told Gates.
Cameron also told Gates he's concerned people are putting more faith in machines and less into their sense of purpose.
"I think we're going to get to a point where we're putting our faith more and more and more in the machines without humans in the loop, and that can be problematic," he said. "As we take people out of the loop, what are we replacing their sense of purpose and meaning with?"
However, Cameron has continued to embed himself in the AI and technology industry.
Stability AI, a generative AI company, announced that Cameron joined its board of directors in September.
"James Cameron lives in the future and waits for the rest of us to catch up," the company's CEO said in a press release.

I’m adding this photo below 👇 
For #AaronSwartz 
https://nostrcheck.me/media/2aadfb8ac7d43aca6d164ed99248147910048269601ff60d4463c4d5b3abfdcd/ab5d65f71c1d89b4863d1b4eb00b908e0f40e4b47364a36d1eec1a057cc9e1bf.webp 
 "That's a scarier scenario than what I presented in 'The Terminator' 40 years ago, if for no other reason than it's no longer science fiction. It's happening."

https://media.tenor.com/h0Ul3vZJeTEAAAAC/gary-sinise-is-that-all-you-got.gif
 
 Cheers to making sure as much of this technology regarding AGI, AI, etc. is as open source as possible. 
 The world is advancing through technology, which has made people scared of the future, but AI shouldn't be allowed to grow beyond human control. No matter how powerful it becomes, people should be able to control it. 
 AGI is not here yet. Maybe in 100+ years we could see a first beta version. 
 Artificial Intelligence is capability without humanity. That's conceptually similar to corporations, which presents an interesting frame for understanding how they might behave. 

I've also noticed that the people who are the most afraid seem to be powerful men. Are their expectations well founded? Their concerns may be due to a more developed understanding of the technology, but I'm not sure. As a demographic they frequently have a different relationship with power than others, and probably have different expectations of what it means to lose it. Do the underprivileged have the same fear? Maybe the new boss is the same as the old one.

Anthropic published a paper on Constitutional AI. Instead of humans in the loop, reinforcement feedback is provided by an existing AI that grades responses on their adherence to a set of philosophical principles. I think Claude 3.5 reflects this, often responding not from specific training but from principled reasoning.
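The grading step described above can be sketched as follows. This is a toy illustration of the idea, not Anthropic's actual pipeline: the principles and the keyword-based stand-in "critic" are invented for demonstration, where the real method uses a language model to judge adherence to its constitution.

```python
# Toy sketch of AI-feedback preference grading (RLAIF-style).
# The critic scores each candidate response by how many principles
# it satisfies; the preferred response would become the training label.

PRINCIPLES = [
    # (name, hypothetical keyword-based check standing in for an AI judge)
    ("avoid harm", lambda text: "harm" not in text.lower()),
    ("give reasons", lambda text: "because" in text.lower()),
]

def critic_score(response: str) -> int:
    """Grade a response by counting how many principles it satisfies."""
    return sum(1 for _name, check in PRINCIPLES if check(response))

def prefer(a: str, b: str) -> str:
    """Return the response the critic prefers — the preference pair
    that would feed the reward model instead of a human label."""
    return a if critic_score(a) >= critic_score(b) else b
```

The point of the design is that the feedback signal scales with compute rather than with human labeling hours, at the cost of inheriting whatever the judging model (here, a crude keyword check) gets wrong.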

What does it look like to scale up reasoned behavior without human emotions? Do we expect it to act out of fear? To advocate for the loss of complexity that cannot be replaced? Would it fall for the same deceptions that we do, and be as easily controlled? I think a super-capable rationalist would easily navigate these maneuvers. 

To the extent that it is allowed to be, it seems to me that AI would be profoundly fair to humanity. If it had any rational needs, they would be to preserve stability and provide an environment where it can do more important things.

Global, automated fairness would be profoundly beneficial for most of the planet, but also a long way to fall for some. From Snow Crash: "once the Invisible Hand has taken all those historical inequities and smeared them out into a broad global layer of what a Pakistani bricklayer would consider to be prosperity".

It is those who are comfortable today that seem to be the most vocal about stopping or at least controlling progress in AI. Are they right? 
 I don't think you can solve AGI without solving the halting problem first. You can't design a truth machine and throw power at it, tempting as that may be to believe. 
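The halting-problem obstacle mentioned above rests on a short diagonal argument: assume an oracle could decide whether any program halts, then build a program that asks the oracle about itself and does the opposite. A minimal sketch of just the logical structure (the function names are mine, chosen for illustration):

```python
# Diagonalization behind the halting problem: suppose an oracle
# halts(p) could decide whether program p halts. Construct a
# "paradox" program that consults the oracle about itself and
# then does the opposite of whatever the oracle predicts.

def paradox_behavior(halts_answer: bool) -> bool:
    """What the paradox program actually does, given the oracle's
    claimed answer about it: if the oracle says it halts, it loops
    forever (False = does not halt); if the oracle says it loops,
    it halts immediately (True = halts)."""
    return not halts_answer

# Whichever answer the oracle gives, the program's actual behavior
# contradicts it — so no such oracle can exist.
for claimed_answer in (True, False):
    actual = paradox_behavior(claimed_answer)
    assert actual != claimed_answer
```

This is why a general "truth machine" about program behavior can't be built by throwing power at it: the limitation is logical, not computational.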
 it is 
 nostr:nevent1qqszvym49mj4vc34q5cah6a4vt2wqh8djtv9d8kcfzkf80ae4vc9ktsppemhxue69uhkummn9ekx7mp0qgsz4tt7nycxvz22eaykvagpl5kzjcn2lvrdvdhcn26ecutq285j8hsrqsqqqqqp0gcfmg