What if Microsoft's control of OpenAI, combined with a huge share of the population using its products, forces us onto a manipulated source of data?
The same goes for Amazon and Anthropic, Google with Bard, and China with WeChat.
What if, in the near future, the only source of information and knowledge is an AI bot fine-tuned to feed you whatever they want, whatever they judge best for themselves?
How do we protect our collective knowledge from the monopoly of the tech giants?
Mars Colonization: A New Frontier or an Elite Escape?
The concept of colonizing Mars has long captivated the imagination of humanity. But what if the journey to Mars is not just a giant leap for mankind but a strategic move by Earth's elite? This article explores the implications of a Mars colonization that serves as a retreat for the privileged, with the option of leaving Earth reserved exclusively for them.
Space colonization offers a new realm for human expansion, but it also presents unique challenges and opportunities. The idea that a select few might use this as a chance to control Earth’s resources raises critical questions about inequality and power dynamics in society. How would the selection process work, and what criteria would determine who stays on Earth and who goes to Mars?
The rapid evolution of technology, particularly in AI and robotics, plays a significant role in this scenario. As companies like Boston Dynamics advance their tech, the integration of AI in everyday life becomes more imminent. This technological leap could significantly alter social structures and labor markets.
This scenario forces us to confront ethical dilemmas about the fate of humanity and our planet. Who decides who colonizes Mars and who remains on Earth? What does this mean for the future of human rights and autonomy?
Colonizing Mars could be the largest social experiment in human history. It raises questions about the development of new socio-political systems on Mars, distinct from those on Earth, and the evolution of 'Martian' identity over generations.
Mars colonists would likely be heavily dependent on Earth for technology and resources, especially in the early stages. This dependence could create a power imbalance between Earth and the Mars colony.
The prospect of Mars colonization by a select elite is a multifaceted issue that intertwines technology, ethics, and social dynamics. It's a scenario that challenges us to think about the type of future we are creating and the kind of world we want to live in. As we stand on the brink of potential interplanetary habitation, these discussions are not just speculative; they are crucial in shaping our collective future.
OpenAI had women and other non-white Silicon Valley individuals on its team. Now it's run entirely by Silicon Valley guys. They're handling a system that has already suffered heavy data breaches. They didn't have multi-factor authentication until recently, and their support has a service-level agreement of a week or more.
Elon Musk did an interview with Lex Fridman not long before Sam Altman was fired by the board. In this interview, Musk praised Ilya Sutskever and expressed concerns about the lack of AI safety, referencing an argument with Larry Page from Google.
Then Microsoft, already an investor, stepped in. Soon after, the entire board was fired, including Ilya, whom Musk had praised for being a good human. The lack of security in most new apps and technologies nowadays is alarming. Tons of kids are hacking into systems as if it's nothing.
AI is on the loose. People and governments are jailbreaking LLMs everywhere.
Then we complain about Nigerian Prince emails, call center scammers from India, and tons of malware and crypto frauds from around the globe.
Get ready. If things keep going on this path, soon all of us will face challenges far greater than we can handle.
Not to mention that most of this stuff is Silicon Valley-based.
But you and I, we're not involved in Silicon Valley stuff.
#nostr #grownostr
Elon Musk did an interview with Lex Fridman explaining how he basically created OpenAI with Larry Page back in the day, giving it $40 million and its name (which came from "open source"), how Larry didn't care about safety, and how OpenAI turned to pure profit, which is not good karma.
Then, a few days later, Sam Altman and several other OpenAI employees were fired or quit.
Coincidence? I don't think so.