“A major risk for civilization”: Elon Musk explains the three conditions for artificial intelligence not to get out of control

Elon Musk has reiterated warnings about the risks of artificial intelligence and presented three principles that he considers vital for the technology to evolve in a beneficial direction.

The billionaire, who heads Tesla, SpaceX, xAI, X, and The Boring Company, made the remarks on a podcast hosted by Indian businessman Nikhil Kamath, as reported by CNBC.

"We do not have the guarantee of a positive future with AI. When you create a powerful technology, it can also become potentially destructive," Musk said.

The founder of xAI, who launched the Grok chatbot in 2023, has repeatedly criticized the accelerated pace of AI development and described artificial intelligence as one of the "greatest risks to civilization."

In the discussion with Kamath, Musk emphasized that AI must pursue truth and avoid absorbing erroneous information from the internet.

"You can drive an AI crazy if you force it to believe untrue things, because it will reach wrong conclusions," he warned, referring to the phenomenon of "hallucinations," incorrect or fabricated responses.

Musk summarized the three essential principles for AI as follows: "Truth, beauty, and curiosity."

According to him, without these guiding principles the technology can become unsafe, and models risk "absorbing lies incompatible with reality."

He added that an appreciation of beauty matters because it helps AI recognize harmony and coherence, and that curiosity is essential so the technology explores the nature of reality rather than being tempted to "exterminate humanity."

Musk's warnings come in a context where other pioneers in the field have expressed similar concerns. Geoffrey Hinton, a former vice president at Google who has been dubbed the "godfather of artificial intelligence," recently estimated that there is "a 10%-20% probability" that AI could become an existential threat. In the short term, Hinton listed risks such as hallucinations and the mass automation of entry-level jobs.

The debate on controlling and directing AI is intensifying as technology companies launch increasingly powerful models, and industry leaders try to outline a safety framework in a period of accelerated innovation.
