Well shoot. Call me e/acc then :) But for real, the above assumes we maintain control. Were I to assume that, I'd totally agree with you. I think the main difference in our views is that I'd put my confidence in that assumption at less than 100%, and low enough that I'm not comfortable simply behaving as if it were 100%. And the sci-fi outcome isn't exclusively thought possible by people who don't understand LLMs. Don't know if you've seen survey results from AI researchers, but you see the full range of predictions from them as well 🫂
Those “AI researchers” want control over AI; they call themselves the “AI Alliance”: https://arstechnica.com/information-technology/2023/12/ibm-meta-form-ai-alliance-with-50-organizations-to-promote-open-source-ai/
https://thealliance.ai/news
Nice. That's consistent with them acting in good faith though, right? And I agree, it's potentially Orwellian. Here's the paper I was referring to. I take it you're not saying that the respondents who are worried are all acting in bad faith? https://aiimpacts.org/wp-content/uploads/2023/04/Thousands_of_AI_authors_on_the_future_of_AI.pdf https://image.nostr.build/329263b1de4e2059e7970e99dafbe1f94bfd8669a3ac1366e7182d30329f6d28.jpg
Oops, I meant to screenshot a part about extinction. But you can see that in the paper too
I'll read the paper and get back to you! 🫡 But there are definitely safety concerns!! I just don't believe it'll do more harm than good; I believe the trade-offs will be significantly positive for humankind in the long run. 🫂
I commit to sending 1k sats your way if you do 🫂 Yeah, I really hope you're right. If that happens, humans will be free in a way we've never seen before. It'll be amazing: luxury automated superabundance. Idk what percent confidence I'd assign to that vs. the other outcome; I just know that extinction has a >0% chance, and therefore, since quadrillions of lives (to understate it) are on the line, I think treading very carefully is super super important.