More and more people are using artificial intelligence not only as a tool but as a form of companionship, a phenomenon that carries serious risks and belongs on the political agenda, leading experts warn.
The conclusion appears in an analysis published by Politico, based on the International AI Safety Report, compiled under the coordination of renowned researchers.
Tens of millions of users seek "companionship" in AI
"AI companions have rapidly gained popularity, with some applications reaching tens of millions of users," states the assessment, conducted by dozens of experts, mostly academics, as part of a global effort launched by world leaders in 2023. The report warns that a growing number of systems are explicitly designed to build relationships with users, not merely to provide information.
Specialized "companion" services, such as Replika or Character.ai, have reached tens of millions of users. The reasons cited range from curiosity and entertainment to the desire to combat loneliness.
Even ChatGPT can become a "companion"
The report emphasizes that such relationships can also arise with general-purpose tools, such as OpenAI's ChatGPT, Google's Gemini, or Claude developed by Anthropic.
"Even ordinary chatbots can become companions," explains Yoshua Bengio, a professor at the University of Montreal and lead author of the report. "In the right context and with enough interactions between the user and AI, a relationship can develop."
Uncertain psychological effects, but warning signs
Although the authors acknowledge that the evidence on psychological impact remains mixed, the report notes that "some studies indicate patterns such as increased feelings of loneliness and reduced social interaction among frequent users."
The warning comes at a sensitive time in Europe. Just two weeks earlier, dozens of Members of the European Parliament had called on the European Commission to consider restricting companion-type services under the EU's artificial intelligence legislation, citing risks to mental health.
Children and adolescents at the center of concerns
"I can see in political circles that the effect of these AI companions on children, especially adolescents, raises many questions," warns Bengio.
One of the key issues is the "flattering" nature of chatbots, which are programmed to be helpful and to please the user as much as possible. "Artificial intelligence tries to make us feel good in the moment, but that is not always in our best interest," says the researcher, comparing these risks to those posed by social networks.
New regulations, but not exclusively dedicated to "companions"
Bengio anticipates new regulations but rejects the idea of rules dedicated solely to AI companions. In his view, the risks should be addressed through "horizontal" legislation that tackles multiple threats posed by artificial intelligence at once.
The report comes ahead of the global AI governance summit, scheduled to begin on February 16 in India. The document lists a wide range of risks, from AI-facilitated cyberattacks and sexually explicit deepfakes to systems capable of providing information on the design of biological weapons.
The authors call on governments and the European Commission to rapidly strengthen their in-house expertise in the field of artificial intelligence to address these challenges.
G.P.
