Billionaires in the technology field seem to be preparing for the Apocalypse. Should we be worried?

Elon Musk and other technology leaders are said to have bought parcels of land with underground spaces, ideal for conversion into million-dollar luxury bunkers. What are they preparing for?

Mark Zuckerberg is said to have begun work on Koolau Ranch, his 600-hectare compound on the Hawaiian island of Kauai, as early as 2014. It will include a shelter with its own energy and food supplies.

Workers on site are bound by confidentiality agreements not to speak about it, according to an article in Wired magazine. A two-meter wall blocks the view from a nearby road, the BBC reports.

Asked last year whether he was building an apocalyptic bunker, the Facebook founder categorically replied "no." The underground space, spanning approximately 460 square meters, is "just like a little shelter, it's like a basement," he explained.

But that hasn't stopped speculation about his decision to purchase 11 properties in the Crescent Park neighborhood of Palo Alto, California, reportedly adding some 650 square meters of underground space.

Although his building permits refer to basements, according to The New York Times, some of his neighbors call them bunkers.

Speculation has also arisen about other technology leaders who appear to have bought parcels of land with underground spaces, ideal for conversion into million-dollar luxury bunkers.

Reid Hoffman, the co-founder of LinkedIn, has talked about "apocalypse insurance." Roughly half of the super-rich have it, he has previously claimed, with New Zealand a popular destination for such arrangements.

So, what are they preparing for? War, the effects of climate change, or another catastrophic event that we regular folks are not yet aware of?

### The Rapid Evolution of Artificial Intelligence: The New Threat

In recent years, progress in artificial intelligence (AI) has been added to the list of potential existential problems. Many are deeply concerned about the immense speed of progress in this field.

Ilya Sutskever, chief scientist and co-founder of OpenAI, is among them. By mid-2023, the San Francisco-based company had launched ChatGPT – the chatbot now used by hundreds of millions of people worldwide – and was working rapidly on updates.

But by that summer, Sutskever had become increasingly convinced that computer scientists were on the verge of developing artificial general intelligence (AGI) – the point at which machines match human intelligence – according to a book by journalist Karen Hao.

At a meeting, Sutskever suggested to colleagues that they dig an underground shelter for the company's top scientists before such powerful technology was released into the world, Hao reports.

"We will definitely build a bunker before launching AGI," he reportedly said, although it is unclear whom he meant by "we."

It is revealing that many renowned computer scientists and technology leaders, some of whom are working hard to develop highly intelligent AI, also seem deeply afraid of what it might one day do, the BBC notes.

So, when might AGI actually appear? And could it truly be transformative enough to make ordinary people fearful?

### It Will Come "Sooner Than We Think"

Technology leaders claim that artificial general intelligence (AGI) is imminent. OpenAI's CEO, Sam Altman, stated in December 2024 that it will come "sooner than most people around the world believe."

Demis Hassabis, the co-founder of DeepMind, has predicted that this moment will arrive within the next five to ten years, while Anthropic's founder, Dario Amodei, wrote last year that his preferred term – "powerful AI" – could be with us as early as 2026.

Others are skeptical. "They keep changing their position," said Wendy Hall, a computer science professor at the University of Southampton. "It depends on who you talk to." The interview took place over the phone, the BBC's reporter noted, but you could almost hear the eye roll.

"The scientific community says AI technology is amazing, but it's nowhere near human intelligence," she added.

### Scientific Nonsense

Neil Lawrence is a machine learning professor at the University of Cambridge. For him, all this debate is nonsense.

Discussions about AGI are a distraction, he believes. "The technology we've already built allows, for the first time, ordinary people to communicate directly with a machine and potentially get it to do what they intend. That is absolutely extraordinary… and completely transformative. The big concern is that we are so drawn to the big tech companies' narratives about AGI that we miss the ways in which we need to improve things for people," Lawrence said.

Current AI tools are trained on massive amounts of data and are good at spotting patterns, whether tumor markers in scans or the next word in a sequence. But no matter how convincing their responses may seem, they lack the ability to "feel," the expert explained.

"There are a few 'deceptive' ways to make a large language model (the technology behind AI chatbots) act as if it has memory and learns, but these are unsatisfactory and quite inferior to humans," said Babak Hodjat, chief technology officer for AI at the tech company Cognizant.
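
To make the pattern-spotting point concrete, here is a minimal sketch in Python (a toy word-pair counter, nothing like the systems discussed in this article): it learns which word tends to follow which in a tiny sample text, then "predicts" the most frequent continuation.

```python
# Toy illustration of next-word prediction: count which word follows
# which in a small corpus, then predict the most common continuation.
# Real LLMs use neural networks trained on billions of documents;
# this only makes the basic principle visible.
from collections import Counter, defaultdict

corpus = (
    "the brain adapts to new information "
    "the model predicts the next word "
    "the next word follows from patterns in the data"
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in 'training'."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))    # -> "next" (the most common pattern here)
print(predict_next("brain"))  # -> "adapts"
```

The point of the toy: there is no understanding anywhere in it, only statistics over what came before, which is the pattern-matching the experts quoted above are describing at vastly larger scale.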

### Intelligence Without Consciousness

In some ways, AI has already overtaken the human brain. A generative AI tool can be an expert on medieval history one minute and solve complex mathematical equations the next.

Meta says there are signs that its AI systems are improving themselves.

Ultimately, however, **no matter how intelligent machines become, the human brain still wins biologically**. It has about 86 billion neurons and 600 trillion synapses, far more than its artificial equivalents, the BBC notes.

The brain doesn’t need breaks between interactions and constantly adapts to new information.

"If you tell a human that life has been discovered on an exoplanet, they learn it immediately, and it shapes their view of the world from then on. An LLM [large language model] will know it only for as long as you keep repeating it as a fact," Hodjat said.
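
Hodjat's memory point can be pictured with another deliberately crude sketch, assuming a fixed-size context window (the window size and helper functions here are invented for illustration, not any chatbot's real implementation): the "model" knows a fact only while it remains inside the window sent along with each request.

```python
# Toy simulation of context-bound "memory": a fact is "known" only
# while it sits inside a fixed-size context window. Once newer
# messages push it out, it is silently forgotten.
from collections import deque

CONTEXT_WINDOW = 3  # real models hold thousands of tokens; 3 keeps the demo visible

context = deque(maxlen=CONTEXT_WINDOW)

def tell(window: deque, message: str) -> None:
    window.append(message)  # oldest entries drop out automatically

def knows(window: deque, fact: str) -> bool:
    # The "model" can only condition on what is currently in its window.
    return fact in window

tell(context, "life was discovered on an exoplanet")
print(knows(context, "life was discovered on an exoplanet"))  # True

# Keep chatting; older messages scroll out of the fixed-size window.
for message in ["hello", "what's the weather", "tell me a joke"]:
    tell(context, message)

print(knows(context, "life was discovered on an exoplanet"))  # False: forgotten
```

A human, by contrast, integrates the new fact permanently, which is exactly the asymmetry Hodjat describes.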

"LLM systems also lack metacognition, meaning they don't quite know what they know. Humans seem to have an introspective capacity, sometimes called consciousness, that allows them to know what they know," he explained.

It is a fundamental part of human intelligence – and one that has not yet been replicated in a lab.

T.D.

