I just see no way that doesn’t end in humans losing the ability to be part of more than essentially 0% of the economy. That might also mean extinction, idk. So it makes me think e/acc is not the way. It’s a point of dissonance in my mind for sure, because it implies restricting freedom.
no way is it extinction. if anything it’ll be a boom of growth, both in numbers and in wealth, for everyone involved. dangerous jobs out of the picture, no shortage of food, etc. science fiction movies are written by people who didn’t understand LLMs; it’s pure fantasy. we need to touch base with the reality of this technology.
Well shoot. Call me e/acc then :) But for real. The above assumes we maintain control. Were I to assume that, I’d totally agree with you. I think the main difference in our views is that I would put my confidence in that assumption at less than 100%, and also small enough that I’m not comfortable simply behaving as if it is 100%. And the sci-fi outcome is not exclusively thought to be possible by people who don’t understand LLMs. Don’t know if you’ve seen survey results from AI researchers, but you see the full range of predictions from them as well 🫂
Those “ai researchers” want control over ai, they call themselves “Ai Alliance” https://arstechnica.com/information-technology/2023/12/ibm-meta-form-ai-alliance-with-50-organizations-to-promote-open-source-ai/
https://thealliance.ai/news
Nice. That is consistent, though, with them acting in good faith, right? And I agree, it’s potentially Orwellian. Here’s the paper I was referring to. I assume you’re not saying that all of the respondents who are worried are acting in bad faith? https://aiimpacts.org/wp-content/uploads/2023/04/Thousands_of_AI_authors_on_the_future_of_AI.pdf https://image.nostr.build/329263b1de4e2059e7970e99dafbe1f94bfd8669a3ac1366e7182d30329f6d28.jpg
Oops, I meant to screenshot a part about extinction. But you can see that in the paper too
I’ll read the paper n get back to you! 🫡 but there are definitely safety concerns!! I just don’t believe it’ll do more harm than good; I believe the trade-offs will be significantly positive for humankind in the long run. 🫂
I commit to sending 1k sats your way if you do 🫂 Yeah, I really hope you’re right. If that happens, humans will be free in a way we’ve never seen before. It’ll be amazing. Luxury automated superabundance. Idk what percent confidence I’d assign that vs the other outcome, just know that extinction is >0% chance, and therefore, since quadrillions (to understate it) of lives are on the line, I think treading very carefully is super super important.
But also an increase in useless people who can't contribute meaningfully. They are relegated to being supported via UBI or whatever. Then you have people who have never fed themselves. They have only ever been able to eat by taking.
nostr:note1duj66gdgq6sc76t2gj7z90slg5cnp0m7sh62dkkpf0fd48qtatcsz3th4s With full acknowledgement that even I could be one of those people, or my offspring. Living a life where you're useless is not exactly a future I am excited about for myself, or for anyone else for that matter.
I’d just soften this. People are inherently valuable whether or not they are economically productive. We are the only reason the universe contains wonder and love 🫂🫂🫂
I don't know if that's true. I would love to believe that, I really would. Was Hitler valuable? Stalin? Terrorists? Degenerates of every stripe? There are some about whom many would agree that is not the case. Not to mention that the people above, or people like them, would feel the same way about you or me. And if the mighty think you are not valuable, what are you going to do to stop them from implementing whatever machinations they have in mind for you?
That’s part of why it’s a big problem if people don’t see everyone as valuable. I would say those people are profoundly and tragically mistaken, and still individually valuable. That’s part of the worldview where you endeavor to avoid harming anyone, helping everyone you can, but also defending yourself. 🫂🫂🫂
Hell is other people; we learn from them n their choices, and we make advancements in reinforcing measures so their mistakes don't repeat over n over again.. like Mark says, everyone is individually valuable n offers some contribution to humanity’s path forward.
there is no economy without humans. the whole scare campaign about AI is so overblown. AIs are not in any state to replace us for anything meaningful, and most of the technologies they are based on are already being deployed in many products. it's just a scare campaign by a bunch of misanthropic cannibal baby rapist "philanthropists" who love us so much they want to genocide us whenever they judge there are too many of us; they have been doing this for thousands of years
Help me out with why you say “there is no economy without humans”. Maybe I’m missing something 🫂
because robots don't have any needs to satisfy, so they don't have any way to value anything. price is created by the meshing of everyone's scale of preferences, and robots don't have preferences that you didn't give them, so there's no real market. it's pretty funny, i do wonder what exact jobs have even yet been replaced by AI anyway https://i.giphy.com/xTka034bGJ8H7wH1io.webp
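That "meshing of everyone's scale of preferences" can be sketched as a toy market-clearing exercise. This is only an illustration with made-up numbers, not anyone's actual model: each buyer has a maximum they'd pay and each seller a minimum they'd accept, and trades happen only where those preference scales overlap.

```python
# Toy illustration of price emerging from individual preference scales.
# All bids/asks are hypothetical; each participant trades one unit.

buyer_bids = [10, 8, 7, 5, 3]   # max each buyer is willing to pay
seller_asks = [2, 4, 6, 9, 12]  # min each seller is willing to accept

def clearing_trades(bids, asks):
    """Match the highest bids with the lowest asks while bid >= ask."""
    trades = []
    for bid, ask in zip(sorted(bids, reverse=True), sorted(asks)):
        if bid >= ask:
            trades.append((bid, ask))
    return trades

trades = clearing_trades(buyer_bids, seller_asks)
print(trades)  # [(10, 2), (8, 4), (7, 6)] — the (5, 9) pair never trades
```

The point of the sketch: the three trades (and the price range they settle in) come entirely from the participants' own valuations. If the machines have no valuations of their own, there is nothing to mesh.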
They don’t actually have needs, yes, but hang with me here on this: in the same sense, we also don’t have needs. Whether you think that makes sense or not, both we and machines can behave with the idea that we do have needs, and both can work out what we are willing and able to pay to satisfy those needs. Or am I misunderstanding what you said? https://image.nostr.build/ba994f79e40fe2b2f9592b3987ca9f329563a350446030973f0716da56ded74a.jpg
um... yeah, i have needs. when i have a headache, the need is called relief from the headache; when i'm hungry, it's called eating food. an AI making an estimate based on a wide spectrum of market data doesn't say anything about the AI's ability to calculate price; it's just doing statistics, based on frequencies, medians and averages
I’m not sure if you’re thinking about needs at a subjective or an objective level… but I’m thinking objectively. Let’s up the ante though to clarify. You feel thirsty. That is to say your sensors are telling you you’re low on a resource that is important to your goal of staying alive. You consciously think “I need water”. You get water. Similarly AI responds to a prompt to check in on battery voltage. Sensors indicate low voltage. LLM reads this as a need for charging and schedules a task to move to the charger. AI charges its battery (and can negotiate the price to pay the charger if it’s a negotiable price) It’s a different level of complexity but the same process works when AI mines copper. I’m not saying that *all* AI agents will be really good at prices for things… the bad ones will run out of battery power and “die”.
well, current AIs don't need anything except internet connections and lots of power to run farms of GPUs, and there aren't AIs running the farms yet... so, yeah, logically, if there were autonomous devices that did much of their own management, they would have to become able to process price data and propagate this information. and that would probably end up being a major ballache for collectivists too, who currently think AI will be some kind of panacea for all the woes of price gouging and price instability, even though these twits don't even understand that prices exist because of people's changing needs and the changes in the supply of resources to satisfy them
100% And we’re at least a few years (lol) away from them managing their needs fully. Maybe as many as 500 years away. <shrug>
Humans need goods. Food, clothes, housing, tools, etc.
AI needs silicon, copper, cooling, housing, etc 🫂🫂🫂