AI is not all good news for dictators – Interview with Yuval Noah Harari

You argue that the main issue with authoritarian regimes isn’t human nature or psychology, but rather how people are informed and how the big stories and political narratives are created, told, and received. Does this mean that the more information we receive, the better our political choices become? Or is the opposite true?

Nexus argues that the fault is not with our nature, but with our information. Wise and good people make bad decisions if they are fed bad information. And unfortunately, humans are flooded by bad information. From Stone Age tribes to modern states, human societies repeatedly fall victim to mass delusions. Whereas my previous books told the story of humans, the hero of Nexus is information. The book explores how information shapes history, and why over time, information gave us a lot of power but little wisdom.

Some people believe that over thousands of years, our information should improve. They argue that truthful information, like scientific facts, gives people power, whereas delusions make us weak. So the truth should spread and delusions should disappear. But this is not how history works. In history, power doesn’t depend just on truth. It depends even more on order. And it is easier to create order by spreading fictions and fantasies than by spreading facts.

Consider, for example, a country that wants to produce an atom bomb. To do so, it definitely needs to know some scientific facts. If you try to build an atom bomb and you ignore the facts of physics, your bomb will not explode. But to produce a bomb you also need millions of people to cooperate on this project. You need people to mine uranium, build a reactor, and provide food for all the miners, builders and physicists. And that’s the key point: it is easier to make millions of people cooperate by telling them a fiction than by telling them the truth.

So over thousands of years humans learned more scientific facts, but they also built more powerful religions and ideologies that ignored those facts. And in most cases, scientists who are experts in physics or chemistry take their orders from ideological and religious leaders who are experts in mythology. That’s why, despite all our scientific knowledge, we are prone to doing stupid and self-destructive things. That’s how we got, for example, to Nazism and Stalinism. These were exceptionally powerful networks, held together by exceptionally deluded ideas. As George Orwell put it, ignorance is strength. The danger of bad information is now particularly menacing, due to the rise of a new information technology – AI. In the 21st century, AI may form the nexus for such a powerful network of delusions that it could prevent future generations from even attempting to expose its lies and fictions.

One of the biggest themes today is the relationship between information and truth. Tell us a little about this relationship – we tend to say that an informed person is one who knows the truth, or at least recognizes it.

The root problem is that information isn’t truth. It was common to think, especially in the early days of the Internet, that more sophisticated information technology would necessarily unite humanity, because it would spread the truth. However, it is costly to produce truthful information. You need to invest a lot of time and energy in research. In addition, the truth is often complicated and painful. Even on the individual level, to know the truth about myself I have to understand the relations between dozens of different psychological and biological forces within me, and it is uncomfortable to acknowledge my defects and the misery that I have occasionally inflicted on other people – and on myself. That’s why people have to spend years in therapy or meditation to get to know the truth about themselves. On the collective level it is even more complicated to understand what shapes the history and politics of a nation, and people are extremely reluctant to acknowledge the defects of their nation and the crimes it has committed.

In contrast, it is very cheap to produce fictions and fantasies, and you can make them as simple and attractive as you want. The easiest way to connect people is by telling them flattering myths. Consequently, most information in the world isn’t truthful. The most successful book ever written is the Bible – which is full of comforting myths, but not a lot of truth. On the Internet, too, most information isn’t truth.

We can compare information to food. A hundred years ago food was scarce, so humans ate anything they could find, and they particularly liked food with a lot of fat and sugar. Today food is abundant, and we are flooded by junk food that is artificially high in fat and sugar. If people eat too much junk food, they become sick. The same is true of information, which is the food of the mind. In the past information was scarce, so we consumed any information we could get. Now we are flooded by too much information, and by a lot of junk information. Junk information is artificially filled with greed, hate and fear – things that grab our attention. All this junk information makes our minds and our societies sick. We need an information diet.

We have been debating AI for a long time now, and you have been warning about it for years. What has changed in your concern, and what was the turning point? Why is AI more dangerous than the printing press was in its time?

One turning point was the violence that occurred in Myanmar in 2016-17. During that time, outrageous conspiracy theories spread on Facebook, and these fueled an ethnic cleansing campaign against the Rohingya minority there.

When these events unfolded, I had just written a book called Homo Deus. In that book I discussed the various dangers that AI might pose in the distant future. But things developed much faster than I anticipated.

To understand why AI is so dangerous, and why it’s different from the printing press, it helps to look at how exactly social media algorithms contributed to the outbreak of violence in Myanmar. Facebook gave its algorithms a seemingly benign goal: to make more people spend more time on Facebook. But the algorithms then discovered by trial and error that the easiest way to achieve this goal was by spreading hate and outrage. If a user in Myanmar finished watching an innocent video on Facebook, the algorithm would immediately recommend or even auto-play a video of some hate-filled conspiracy theory. This was meant to keep users glued to the screen. In the case of one such video, internal research at Facebook estimated that 70 percent of the video’s views came from being auto-played by the Facebook algorithm. The same research estimated that, altogether, 53 percent of all the videos watched in Myanmar were being auto-played for users by the algorithm. In other words, people weren’t choosing what to see. The algorithm was choosing for them. And the hate the algorithm chose to spread contributed to the subsequent massacre of the Rohingya.
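
To make that trial-and-error dynamic concrete, here is a minimal, hypothetical sketch in Python of an engagement-maximizing recommender. Everything in it – the category names, the engagement probabilities, the epsilon-greedy learning rule – is invented for illustration and has nothing to do with Facebook’s actual code; it only shows how an algorithm that optimizes watch time, while blind to content, gravitates toward whatever grips attention most.

```python
import random

# Toy model: each content category has a hidden probability that a user
# keeps watching. The algorithm never sees the content itself – only
# whether the user stayed engaged. (All values here are invented.)
CATEGORIES = {"cooking": 0.30, "sports": 0.35, "outrage": 0.70}

def recommend(estimates, epsilon):
    """Usually exploit the best-looking category; occasionally explore."""
    if random.random() < epsilon:
        return random.choice(list(estimates))
    return max(estimates, key=estimates.get)

def run(steps=10_000, epsilon=0.1):
    estimates = {c: 0.0 for c in CATEGORIES}  # learned engagement rates
    counts = {c: 0 for c in CATEGORIES}       # times each was recommended
    for _ in range(steps):
        choice = recommend(estimates, epsilon)
        engaged = random.random() < CATEGORIES[choice]  # user kept watching?
        counts[choice] += 1
        # Incremental average: nudge the estimate toward the new observation.
        estimates[choice] += (engaged - estimates[choice]) / counts[choice]
    return counts

if __name__ == "__main__":
    # After enough trials, "outrage" dominates the recommendations –
    # not because anyone chose it, but because it best serves the goal.
    print(run())
```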

What happened in Myanmar with Facebook’s algorithm is an example of the new power of AI to make decisions that have dangerous unanticipated consequences. This shows how AI is very different from the printing press. Gutenberg’s printing press could make copies of a controversial book. But it could never recommend hate-filled books to people to keep them reading. And it certainly couldn’t produce a hate-filled book itself. AI can now do both of those things.

If human nature is not to blame, how could AI tell a better story than a human? Stories are never just a compilation of information, which is what AI produces; they also involve emotions.

It is true that AI does not feel emotions. When AlphaGo wins at the board game Go, it doesn’t feel the joy of victory. And when AlphaFold correctly predicts the structure of a protein, it doesn’t feel the joy of discovery. While intelligence is the ability to solve problems, consciousness is the ability to feel emotions. And there is no indication that computers are on the road to developing consciousness. Just as airplanes fly faster than birds without ever developing feathers, so computers have come to solve certain problems much better than humans without ever developing feelings.

But the fact that an AI has no emotions of its own doesn’t mean that it cannot create stories that manipulate the emotions of humans. By experimenting on billions of human guinea pigs, and absorbing billions of pages of text, AI has learned how to use language. And language is the operating system of human civilization. Having become expert at using language, AI is now prepared to hack the operating system of our civilization.

Across the world, democracy seems to be falling apart. We are not only facing dictatorships, which are easy to identify, but also illiberal regimes and soft autocracies. Even Russia’s war in Ukraine has been seen as a war against democratic political culture. How can AI tip the scales? And which of the two systems, democracy or autocracy, is more vulnerable in this regard? Which one has the upper hand?

Democracy is a conversation. It is therefore built on information technology. For most of history, the available technology didn’t allow large-scale political conversations. All ancient democracies, like ancient Athens, were limited to a single tribe or a single city. We don’t know of any large-scale ancient democracy. Only when modern information technologies like newspapers, radio and television appeared did large-scale democracies become possible. The new information technologies of the 21st century might again make large-scale democracy impossible, because they might make large-scale conversations impossible. If most voices in a conversation belong to non-human agents like bots and algorithms, democracy collapses. There are a lot of arguments about politics in Europe nowadays, but I think everybody can agree on one thing: we have the most sophisticated information technology in history, and at precisely this moment Europeans are losing the ability to hold a rational conversation.

One idea to help protect democracy is to forbid counterfeiting humans, the same way we forbid counterfeiting money. Prior to the rise of AI, it was impossible to create fake humans, so nobody bothered to outlaw it. Now the world is being flooded with fake humans – bots pretending to be humans – which is one key reason for the inability of humans to trust each other and to hold a conversation. Governments should ban this. If anyone complains that such measures violate freedom of speech, they should be reminded that bots don’t have freedom of speech – only humans have rights. 

While AI is a threat to democracies, in some ways it could help dictators. AI facilitates the concentration of all information and power in one hub. In the twentieth century, distributed information networks like the USA functioned better than centralized information networks like the USSR, because the human apparatchiks at the center just couldn’t analyze all the information efficiently. Replacing apparatchiks with AIs might make Soviet-style centralized networks superior. 

Nevertheless, AI is not all good news for dictators. Throughout history, the biggest threat to autocrats usually came from their own subordinates. No Roman emperor or Soviet premier was toppled by a democratic revolution, but they were always in danger of being overthrown or turned into puppets by those beneath them. If a twenty-first-century autocrat gives AIs too much power, that autocrat might become their puppet. The last thing a dictator wants is to create something more powerful than himself that he does not know how to control.

Dictatorships are far more vulnerable to algorithmic takeover than democracies. It would be difficult for even a super-Machiavellian AI to amass power in a decentralized democratic system like the United States. Even if the AI learns to manipulate the U.S. president, it might face opposition from Congress, the Supreme Court, state governors, the media, major corporations, and various NGOs. Seizing power in a highly centralized system is much easier. To take control of an authoritarian network, the AI needs to manipulate just a single paranoid individual.

Polarization is one of the most pressing issues within our democratic societies today. We live in networks, in social media groups, with other people who are just like us. We want, and even need, to confirm our own biases and beliefs. How can AI influence this situation?

In the early years of the Internet, the metaphor that dominated it was “the web” – something that connects everyone together. But it now seems that a better metaphor is the cocoon. People are becoming enclosed within separate information cocoons, and cannot communicate with one another at all.

Moreover, in the past all the fictions and fantasies we believed in were invented by humans. Now AI can invent fictions and fantasies. For thousands of years humans lived inside the dreams of other humans. We have worshipped gods, pursued ideals of beauty, and dedicated our lives to causes that originated in the imagination of some prophet, poet or politician. In the coming decades we might find ourselves cocooned inside the dreams of an alien intelligence.

Fear of powerful computers has haunted humankind only since the beginning of the computer age in the middle of the twentieth century. But for thousands of years humans have been haunted by a much deeper fear. We have always appreciated the power of stories and images to manipulate our minds and to create illusions. Consequently, since ancient times humans have feared being trapped in a world of illusions. In ancient Greece, Plato told the famous Allegory of the Cave, in which a group of people are chained inside a cave all their lives, facing a blank wall – a screen. On that screen they see various shadows projected, and the prisoners mistake these illusions for reality. Ancient Buddhist sages pointed out that all humans live trapped inside Maya – the world of illusions. What we normally take to be “reality” is often just a fiction in our own minds. People may wage entire wars, killing others and being willing to be killed themselves, because of their belief in this or that illusion.

The computer revolution is bringing us into Plato’s cave, into the Buddhist Maya. If we are not careful, we might be trapped behind a curtain of illusions that we cannot tear away – or even realize is there. This danger threatens all humans – Chinese and Americans, Israelis and Romanians. That actually gives us reason for hope: all humans have an interest in preventing such a scenario. If AI succeeds in trapping us inside a cocoon of illusions, this will not benefit any human group – it will make all humans the slaves of AI.

What are your biggest fears regarding AI? How about your biggest expectations, maybe even hopes?

I don’t fear science fiction scenarios like a single computer that decides to kill all the humans and take over the world. This is extremely unlikely. Instead, I fear the rise of new surveillance empires managed by millions of AIs. In its heyday, the Securitate had about 40,000 agents and 400,000 civilian informers. But even with that large number of people, Ceauşescu’s regime couldn’t keep track of everyone all the time. A future Ceauşescu won’t need millions of human agents to spy on everyone. Smartphones, computers, cameras, microphones and drones could do it much more easily. Nor would the dictator need millions of human analysts. AI could process the enormous flood of information, and punish any dissent. This is already happening in some parts of the world. In Iran, for example, there are strict laws forcing women to wear the hijab whenever they leave the house. Previously, it was difficult to enforce these laws. But the Iranian regime now uses AI to do it. Even if a woman drives her own private car without a hijab, facial recognition cameras identify this “crime”, and immediately punish her by, for example, confiscating her car. This is a glimpse of the kind of future that frightens me.

My biggest hope is that AI will not only help humans deal with many of our current problems – from disease to climate change – but will also help us get to know ourselves better. As I’ve pointed out already, following the advice of the ancient sages to ‘know thyself’ is very hard and sometimes painful work! In a best-case scenario, AI will help us along that path.

