Wave of resignations in the AI field. Departing researchers warn that "the world is in danger"

Researchers and executives in the artificial intelligence sector have left major companies such as OpenAI and Anthropic in the past few days, publicly warning about the technology’s risks to users.

In recent days, several top employees in the field of artificial intelligence have decided to resign, some explicitly warning that the firms they worked for are growing too fast and downplaying the technology’s shortcomings, as reported by CNN.

The departing researchers argue that, as development accelerates in pursuit of profit, critical risks to humanity are being overlooked, among them intrusive advertising that could exploit users' vulnerabilities and the "vast databases already created by AI models."

An Unprecedented Archive of Human Honesty

Zoë Hitzig, a researcher at OpenAI for the past two years, announced her resignation on Wednesday in an essay published in The New York Times, citing "deep reservations" about OpenAI's emerging advertising strategy.

Hitzig, who warned about the potential for ChatGPT to manipulate users, stated that the chatbot's user data archive, built on "users' medical concerns, issues in their relationships, and their religious beliefs," poses an ethical dilemma precisely because people thought they were conversing with a program that had no hidden motives.

"For years, ChatGPT users have generated an unprecedented archive of human honesty," Hitzig emphasized, warning that advertising built on this database creates a potential for user manipulation "in ways we do not have the tools to understand, let alone prevent."

"Many people frame the issue of AI funding as a choice between the lesser of two evils: restricting access to transformative technology to a select group of wealthy individuals who can afford it or accepting ads even if it means exploiting users' deepest fears and desires to sell them a product.

I believe this is a false choice. Tech companies can seek options that could keep these tools widely available while also limiting incentives for any company to surveil, profile, and manipulate users," wrote Hitzig.

Her views come in the context of reporting by the tech news site Platformer that OpenAI has disbanded the team created in 2024 to advance the company's stated goal of ensuring that all of humanity benefits from "artificial general intelligence," a hypothetical AI with human-level reasoning ability.

Other "Desertions," Same Reason

Also this week, Mrinank Sharma, head of safeguards research at Anthropic, published a letter on Tuesday announcing his decision to leave the company and warning that "the world is in danger."

Sharma's letter made only vague references to Anthropic, the company behind the chatbot Claude. He did not clearly state why he was leaving, writing only: "It is clear to me that it is time to move on." "Throughout my time here, I have seen repeatedly how hard it is to truly let our values govern our actions," he added.

In a statement to CNN, Anthropic said it was grateful to Sharma for his work advancing AI safety research, and noted that he did not lead its safety organization and was not responsible for broader safety measures.

Additionally, two xAI co-founders resigned within 24 hours of each other this week, announcing their departures on X. That leaves only half of the founders at the startup, which is merging with Elon Musk's SpaceX in a deal intended to create the world's most valuable private company. At least five other xAI employees announced their departures on social media in the past week, as reported by Business Insider.

The reasons for the departures were not disclosed. In a social media post on Wednesday, Musk stated that xAI had been "reorganized" to accelerate growth, which "unfortunately required parting ways with some individuals."

The startup has faced a global backlash over its chatbot Grok, which for weeks was able to generate non-consensual pornographic images involving women and children before the team intervened to stop it. Grok has also been prone to generating antisemitic comments in response to user prompts.

"Godfather of Artificial Intelligence" Sounds the Alarm

While it is not unusual for top talent to move between companies in an emerging industry like artificial intelligence, the scale of departures from xAI in such a short period is striking. Other recent exits highlight the tension between safety-focused researchers and top executives eager to generate revenue, as noted by CNN.

The Wall Street Journal reported on Tuesday that OpenAI fired one of its top safety executives after she opposed the launch of an "adult mode" allowing pornographic content on ChatGPT. OpenAI dismissed Ryan Beiermeister on the grounds that she had discriminated against a male employee, an accusation she called "absolutely false."

OpenAI told the Journal that her dismissal was unrelated to "any high-level issue she raised while at the company."

High-level desertions have been part of the AI story since ChatGPT hit the market at the end of 2022. Shortly thereafter, Geoffrey Hinton, known as the "Godfather of Artificial Intelligence," left his role at Google and began warning about the existential risks posed by artificial intelligence, including massive economic disruptions in a world where many "will no longer be able to know what is true."

Apocalyptic predictions abound, including among AI executives who have a financial incentive to promote the power of their own products. One such prediction went viral this week: HyperWrite CEO Matt Shumer posted a nearly 5,000-word statement arguing that the latest AI models have already rendered some tech jobs obsolete.

"We are telling you what has already happened in our own jobs and warning you that you are next," he cautioned.

T.D.
