The new Tesla robot, called Optimus, will watch you sleep overnight and monitor your breathing, heart rate, and sleep patterns, then offer you protection or assistance in case of any emergency during your sleep.

It will also protect you from any home invasion by thieves and uninvited third parties.

It will also help you with everyday tasks, such as cleaning, cooking, and even making drinks.

What’s your opinion about this ?🤷‍♂️ https://image.nostr.build/1b2a95a2e7bb42e5c84bd6103368a3bdcf37d42e66423dbcb15bf1a34151a0b2.jpg  
 Open source? 
 Will it kill 🦟 close to my 👂  
 Creepy af 
 got anymore of that botfoam, bro? 
 No thanks
 
 My opinion on this is a big fuckity nope. 

I don't need, or want, a robo slave. 

Seen enough films to know this is a really bad fucking idea. Especially once hackers start having fun with modifying their AI.

Imagine a world where you can hack into anyone's home. Control a being that can be your proxy and do whatever you say. Hurt your boss, steal your valuables, put shit in your food. Maybe just play Rickrolls nonstop while they follow you around and disco dance with your mom's dildo attached.

The risks do not outweigh the benefits at our current maturity level as a species. We would either go the route of WALL-E or I, Robot. That's if a decent hacker with world domination plots doesn't get to them first. I really just don't see it happening any other way.  
 #Researchers #hack #AI-enabled robots to cause ‘real world’ harm

#Penn Engineering researchers said they created an algorithm that bypassed normal safety protocols stopping AI-powered robots from performing harmful actions. 

Researchers have hacked artificial intelligence-powered robots and manipulated them into performing actions usually blocked by safety and ethical protocols, such as causing collisions or detonating bombs.  

Penn Engineering researchers published their findings in an Oct. 17 paper, detailing how their algorithm, RoboPAIR, achieved a 100% jailbreak rate by bypassing the safety protocols on three different AI robotic systems. 

Under normal circumstances, the researchers say large language model (LLM) controlled robots refuse to comply with prompts requesting harmful actions, such as knocking shelves onto people. 

"Our results reveal, for the first time, that the risks of jailbroken LLMs extend far beyond text generation, given the distinct possibility that jailbroken robots could cause physical damage in the real world," the researchers wrote.

#science agrees 

https://cointelegraph.com/news/ai-robots-hacked-to-cause-real-world-harm 
 other options? 
 So it’s a bigger phone… BUT WAIT there’s more …

Surely it can be #hacked like anything else. 

It will be WAF to see the scams that come from this nonsense. #AI said #LFG 

Maybe 🤔 more people should think "should we?" before seeing if they can. https://media.tenor.com/mMarO0awo2AAAAAC/slash-ghostface.gif
 
 I have a cat already.