Israeli historian Yuval Noah Harari speaks during an interview in Tel Aviv. (The Asahi Shimbun)

Israeli historian and best-selling author Yuval Noah Harari warns that advances in artificial intelligence, surveillance and other technologies could give rise to an Orwellian society in its most extreme form.

“We are now reaching a point when new technologies--the combination of artificial intelligence and biotechnology, biometric sensors, face recognition, voice recognition--make it possible for the first time in history for a dictatorial government to follow all the citizens all the time,” Harari told The Asahi Shimbun in a recent interview in Tel Aviv.

“This could lead to the creation of totalitarian regimes much, much worse than anything we saw in the 20th century, worse than Stalin, worse than Hitler. And we don’t have much time to stop this.”

Harari, 43, is a professor of history at the Hebrew University of Jerusalem. His “Sapiens: A Brief History of Humankind” and “Homo Deus: A Brief History of Tomorrow” have together sold 18 million copies around the world. Their readers include many political and business leaders, among them former U.S. President Barack Obama.

Harari envisions a world in which mankind is ruled by the power of computing--more specifically, ever-improving algorithms that process large amounts of data to solve problems.

“The new technology will give immense power to a small number of people. But looking beyond that, on an even deeper level, the real power will be in the hands of algorithms because there is so much data being gathered and analyzed that no human being can do it,” he said. “The real ruler will actually be the algorithm.”

Harari said people should pressure the government to regulate the ownership of personal data to prevent it from being used against them by the government or businesses.

He also advised citizens to “know yourself better” to realize their weaknesses, such as a bias against immigrants, and guard against being manipulated by the government or businesses.

A Japanese translation of Harari’s “21 Lessons for the 21st Century,” which has sold 2 million copies, will be published in November.

Excerpts from the interview follow:

* * *

Question: What are the important global issues we are facing or we will be facing in 10 or 20 years?

Harari: We have three big issues which impact every country in the world. The first is the threat of global war, especially nuclear war. The second is ecological collapse, in particular climate change. It’s not just climate change, but climate change is the biggest process that is threatening ecological collapse.

The third, and maybe the most complicated, is “technological disruption.” It’s the rise of artificial intelligence and biotechnology, which will completely change the economy, the political system, our lives, over the next two, three, four decades.

It will change the job market as AI and robots increasingly replace humans in more and more tasks. Nobody knows today what the job market will look like in 30 years. It means that nobody knows what to teach children today in school because we don’t know what skills they will need.

Another change, which again will have universal impact, is the rise of new surveillance technologies, which could lead to the creation of the most totalitarian regimes that ever existed in history.

Previously, even in a totalitarian country like the USSR, the regime could not follow everybody all the time. It was practically impossible.

But we are now reaching a point when new technologies--the combination of artificial intelligence and biotechnology, biometric sensors, face recognition, voice recognition--make it possible for the first time in history for a dictatorial government to follow all the citizens all the time.

This could lead to the creation of totalitarian regimes much, much worse than anything we saw in the 20th century, worse than Stalin, worse than Hitler. And we don’t have much time to stop this.

Q: Will technological disruption benefit dictatorships more than democracies?

A: Regarding the more general issue of the shape of the economy, of the political system, centralized systems, whether in economics or politics, were very inefficient in the 20th century because they couldn’t process data and make good decisions.

In the times of Mao in China or Stalin in the USSR, the idea of the command economy was that all information comes to one place, Beijing or Moscow. There, they make all the important decisions and transmit orders back.

It was extremely inefficient because technology in the 1930s or 1950s was such that you could not gather all the data, and even the data you did gather in one place, nobody could analyze fast enough. The system was extremely inefficient, and they made terrible decisions, stupid decisions.

In contrast, in the West, in Japan, in the United States or in France, they distributed information and power, so individual consumers, individual managers of companies could make decisions by themselves. It was far more efficient. And this is why in the Cold War, the United States defeated the Soviet Union.

Now, technology is changing. It’s becoming easier and more efficient to concentrate an enormous amount of information in one place, and use AI and machine learning to analyze it.

The more information you have, the better the AI becomes. A system where all the information is in one place is much more efficient, from this perspective, than a system which distributes the information between 10 different centers.

To take an example, in genetics, if you have one big database of DNA from a billion people, you are going to have much better algorithms than if you have lots of small companies, each with a database of just a million people.

So the danger is that suddenly command economies and dictatorial governments will have a technological advantage over democracies.

It’s not inevitable. We can still do something about it. But we need to be not naive and not to think that there is a law of history that democracy and the free market always work better. They work better under particular technological circumstances. When the circumstances change, the balance of power changes.

Q: Who will rule the world in the 21st century? Is it a person or something like algorithms or data?

A: It’s not a prophecy; it’s just a warning, and we still can prevent it. But if we don’t take action, then the new technologies might empower a very small elite, or in some countries a dictatorial government, and give them more power than at any previous time in history.

Imagine the USSR in the time of Stalin, but one in which Stalin can actually follow every Soviet citizen 24 hours a day. This gives immense power to a very small group of people and, once it happens, it’s almost impossible to resist it, because if you even think about resisting, if you meet a couple of people, five people, to try and organize something, they immediately know. You can’t form any resistance.

In this scenario, the new technology will give immense power to a small number of people. But looking beyond that, on an even deeper level, the real power will be in the hands of algorithms because there is so much data being gathered and analyzed that no human being can do it.

It can be a dictatorship or it can be a democracy.

Still, the real ruler will actually be the algorithm. Take the financial system as an example. How many people in the world understand how the financial system works? Maybe 1 percent. Maybe 0.01 percent. In 30 years, the number of people who will understand the financial system will be zero.

The financial system, because of the rise of AI and the immense amount of data and the speed of analysis, will reach a point when no human being can understand it.

Maybe you still have a human as prime minister, a human as president. But actually, the real power is the algorithm because the algorithm comes to the president and says, “Hey, Mr. President, we are facing a financial disaster.”

The president says, “Why?” The AI says, “Well, you’re a human; you can’t understand. I gathered 36 trillion bits of data. I analyzed them, and I am telling you, if we don’t take action, by tomorrow morning, everything will collapse.”

The president says, “OK, so what should we do?” The AI says, “OK, you have option one, option two, option three.”

And the president says, “Why are these the three options? Why not this?”

The AI says: “Oh, you’re a human; you can’t understand. I analyzed the potential impact of doing this and doing that, and I can tell you these are the three options. But I can’t explain to you why because to understand this you need to go over 36 trillion bits of data, and you can’t; you’re a human.”

So the official power is still with the president, but he or she is making decisions about something they don’t understand.

Q: How close are we to the “point of no return” where algorithms surpass humans?

A: We are very close. We have--I don’t know--maybe five years, 10 years, 20 years, depending upon the country.

If countries don’t take action now, it will be too late in 20 years. An AI arms race (will progress) mainly between China and the United States, and many countries will be left far behind.

RETRAINING WORKERS

Q: What is the problem for ordinary people, workers and companies?

A: There are lots of potential problems. The worst problem is that their job disappears. It’s being taken over by an AI, by a robot, by a self-driving vehicle, and so forth. Of course, new jobs will appear, but the big problem is to retrain and reinvent yourself.

Some people have this extreme vision that the robots will come and take all the jobs. I think this is unlikely. Some jobs will disappear. Some new jobs will appear. A lot of jobs will change their nature, their characteristics.

The main problem will not be the absolute lack of jobs; the main problem will be to retrain yourself to fill the new job. Let’s say you are a bus driver and you lost your job because they have a self-driving vehicle. But there is a new job in designing vehicles or in writing software. Now, how do you retrain a 40-year-old bus driver to be a software engineer?

Some countries might have the financial ability to do it. The rich countries.

But what about poorer countries? Let’s say countries like Bangladesh or Honduras close down all the factories because now it’s cheaper to produce shirts in Japan or in Germany, and there is a demand for software engineers. You can’t just transform millions of Bangladeshi textile workers into software engineers. It takes a lot of time and money.

So there is the problem of financing and retraining. Then there is a psychological problem.

Reinventing yourself, beyond a certain age, is very stressful. You worked in a particular job, a teacher, bus driver or whatever, you are now 40 or 50, your job is gone, and there is a new job. Even if the government gives you (a chance) to retrain yourself for one year, do you have the psychological flexibility to reinvent yourself--and not just once, but many times? Because the automation revolution will not be a single watershed event.

Let’s say we have a big automation revolution in 2025. A lot of jobs disappear, and new jobs emerge. We have a couple of difficult years, but then people settle down to a new equilibrium and everything is OK. It won’t be like that.

AI is just getting better and better. In 2025, you have a big disruption, but in 2035 you have a bigger disruption. In 2045, an even bigger disruption. People have to reinvent themselves, not just once but two, three, four times. Psychologically, the levels of stress might be too much for most people.

What will add to that is surveillance and algorithms. Just imagine that you are constantly being surveilled and, say, what you do now in this room is being recorded, and that 10 years from now, you go for an interview to some other newspaper or some other job and an algorithm goes over all the data, including what you do now, and based on that decides whether to give you the job.

Your whole life, your entire life, is one long, stressful job interview. Every single moment, say, when you go to a beach, you’re being watched, and the algorithm can analyze, “Oh, when he went to the beach he did this, he did that, so he’s not reliable, don’t take him.” Just think what it means to live your entire life as one long, never-ending job interview.

Q: When I covered the U.S. presidential election in 2016, Republican candidate Donald Trump told many lies, and I was surprised by the post-truth phenomenon. Is believing and spreading lies a fundamental feature of human nature?

A: The phenomenon of fake news and post-truth is very worrying, but it’s not new. It’s been around for as long as human history.

Actually, things were even worse in the past than they are today, if you look at the kind of propaganda that was very common in the early 20th century, with fascism and communism. People told even worse lies and managed to convince millions.

Similarly, if you go back in history, there were witch hunts and pogroms in the Middle Ages.

People today accuse social media of inciting hatred on the basis of conspiracy theories and fake news.

In the Middle Ages, there is no Twitter, there is no Facebook, there are no mobile phones. But you do have the very fast spread of hatred based on conspiracy theories and false rumors.

You live in a small town, and suddenly somebody comes and tells you, “Look! You know this old woman who lives by the forest? She’s actually a witch. I saw her flying on a broomstick last night,” or “meeting a devil.”

Within an hour or two, this rumor spreads all over the town, and you have a mob of villagers with pitchforks and torches, coming to burn this poor old lady to death. This has been a feature of humanity throughout our history.

Today, the technology is different. You would spread it on Facebook. But the phenomenon itself is not new.

PROBLEM WITH POPULISTS

Q: Why are populists on the rise all over the world today and post-truth politics so prevalent?

A: I think that the rise of populism is not solely about this post-truth phenomenon, which, as I said, has been common in the past as well.

For a vast majority of people, the way they understand the world is through a “story.” They don’t think in terms of statistics or facts.

In the 20th century, we had three big stories to explain the world: the communist story, the fascist story and the liberal story. You can see the 20th century as a struggle between these three stories. Each of them tried to explain the whole of human history and predict the future.

But the stories collapsed one after the other. First, the fascist story collapsed in World War II. Then, you had a struggle between communism and liberalism, and communism collapsed.

For 20 or 30 years, we felt, or many people felt, that this is the end of history: now we have just one story that explains everything. This gave a lot of confidence and assurance that we know everything, we know what will happen and we have all the answers.

But it turned out the liberal story is also collapsing. People are losing faith in it. It is unable to solve many of the difficult problems we’re facing, whether it’s climate change or whether it’s the rise of automation and of artificial intelligence.

Now, there is a vacuum. There is no credible story that explains the world and predicts the future or explains what is to be done in the future. This vacuum is being filled not with sensible visions for the future. It is being filled with nostalgic fantasies about the past.

Populist politicians have no real vision for how to organize the world to deal with climate change, to deal with nuclear war, to deal with the rise of artificial intelligence. Instead, what they tell people is that in the past it was much better and we will somehow be able to go back to the past.

You see it with Trump in the United States, promising to make America great again, to return to the 1950s or something. You see it with Putin in Russia, basically promising to go back to the days of the czar. In Israel, we are like the most extreme. Here we have people who want to go back 2,000 years to biblical times.

My biggest fear is that it’s obvious this is not going to work. You cannot go back to the past. Populists don’t really have an answer to any of the big questions of the world. They don’t even know how to organize the world.

They have ideas about individual countries, but they don’t have any plan for how to organize the countries of the world together or how international relations will function.

The problem with populists is that they will never admit that they have been wrong or that they don’t know what to do. Whenever they fail, they say, “No, it’s not our fault. It’s because of enemies outside and traitors inside. We failed because we did not have enough power, so give us more power.”

If you give them more power and they fail again because they have no vision for the future, they will say, “Yes, because there are still traitors. Give us even more power.”

You see it in what’s happening in Venezuela. You see it in Turkey, in Russia, and now even with Brexit.

When people realize what a terrible mistake it was, the people responsible for that don’t say, “OK, we made some mistakes.” They say, “We need even more power to push it through.” It just becomes more and more extreme.

Q: In your new book, “21 Lessons for the 21st Century,” you wrote, “Never underestimate human stupidity,” ringing an alarm bell about the danger of war. How likely is it that a major war or nuclear terrorism will take place?

A: I don’t know. Again, I’m not a prophet. I think the likelihood is not high, but it’s still significant and should cause us concern.

Five years ago, I would have told you the likelihood was extremely, extremely small. But since 2016, the state of the global geopolitical system has rapidly deteriorated. Not only do we see more and more tension and violations of international norms, but the United States and Britain, two of the dominant powers that saw themselves as the champions of the free world, have basically resigned from the job.

In 2016, the Americans and the British kind of said, “We don’t want this job any more. We don’t care about the world. We care only about ourselves.” Nobody wants to follow a leader whose motto is “Me first,” or “America first.”

The leader has resigned. The tensions are building up. There are more and more violations of international rules and norms. This raises far more concern about a deterioration into global war and even nuclear war.

This would be a disaster for everybody. It would be an economic disaster, a human disaster, for everybody.

One of the lessons of the previous century is that even when war is bad for everybody, it can still happen if we are not careful. Humans make stupid mistakes.

If you look at the two previous world wars, nobody really benefited from them, and still they occurred.

In my new book, I also give the example of Japan and Germany. One of the most interesting things about World War II is that after the war the losers actually were far more prosperous than ever before. Why did they need the war if they could lose a war and still enjoy such prosperity? What was the point?

It was simply a miscalculation. They thought they needed war in order to prosper. It wasn’t true, but they still made this mistake. This should teach us that even though war would be a disaster for everybody, it still doesn’t mean that everybody will be wise enough to avoid war. This is why there is room for concern.

WE ARE ALL FALLIBLE

Q: In “Homo Deus,” you discuss the risk of a small cadre of elites controlling a large majority of people as a “useless class.” Do you believe that leaders should be self-critical and modest because people are fallible?

A: We need to take into account the fact that we are fallible, we will continue to be fallible. I think any political system that assumes the government will always make the right decision is a terrible system. You need to design a political system, taking into account the fact that the government will make mistakes.

Similarly, any personal philosophy that assumes that you are infallible is a terrible philosophy. You need to take error into account in everything you build, in everything you design.

If you build a car, you have to ask, “OK, what happens when the driver makes a mistake?” If you build a car that works well only when the driver always makes the right decision, it’s a terrible car.

Q: You emphasized the need to “run faster than algorithms.” How can we do that?

A: As individuals, I think the most important thing is to get to know ourselves faster and better, using whatever method works for you. It doesn’t have to be meditation, as it is for me. It can be a different method.

At the same time, it’s obvious that individuals can’t really change the world if they don’t organize. Organization is the key to success.

When it comes to data and algorithms, the key is to regulate the ownership of data. Who owns the data? Who can use it--a private company or the government--and under what regulations and rules?

And this is a political issue. I cannot regulate data by myself. It’s not that I can say, “I don’t have a smartphone, I’m OK.” People around me have smartphones. There are cameras on every street. Everything I buy, everything I do, something is watching. I can’t do it by myself.

If I want to prevent data being gathered on me and used against me, I need to organize myself with many other citizens and put pressure on the government because governments are the only powers that can effectively regulate the big corporations and the world as a whole.

This is not something a single country can do alone. We need international cooperation on the regulation of data, especially among the countries which are left behind in the technological race. They have the greatest interest in regulating surveillance and the ownership and use of data.

I think this should be one of the most important political issues in every election campaign, in every political system in the world. Unfortunately, it still doesn’t get enough attention, which is one of the reasons I write these books and I do these articles to try and change the political conversation.

There is also the question of what kind of AI we are developing. We can develop each technology in different directions. At present, most of the AI and surveillance tools enable governments and corporations to monitor individuals.

But technically, we can reverse it. We can tell engineers, “We don’t want tools that monitor individuals in the service of the government. Please develop an AI system that monitors the government in the service of individuals, of citizens, for instance, to make sure that there is no corruption.”

Q: How can people understand themselves better so as not to become slaves of technology?

A: People were always advised to get to know yourself better. You hear it from Socrates, from Buddha and from all the big thinkers of humanity.

Now, it’s more urgent because you have the corporations and the governments surveilling you and trying to decipher you. Once they get to know you better than you know yourself, it’s very easy to manipulate you.

This is how all these populists work. They find out what people already hate and fear, and they press these emotional buttons and create even more hatred, even more fear.

To resist them, you first need to realize what your own weaknesses are and be on your guard.

“Oh, I should be careful. I already have a bias against immigrants, against Muslims, or against whatever. It’s very easy to fool me, with fake news stories about these groups, so I should be more careful.”

For that, you need to get to know yourself.

There are many different ways to do it. I practice meditation, but I don’t think it will work for everybody. You can go to a psychologist. You can use art. People go on hikes in the mountains and use that as a mechanism to get in touch with themselves and get to know themselves better.

Whatever works for you is good. The important thing is to invest time in it because you are in a competition. You have Facebook, Google and the government trying to get to know you better so that they can more easily predict your actions and manipulate you. If you don’t get to know yourself better than them, you lose control of your life.

* * *

(This article is based on an interview by Takeshi Yamawaki, a senior staff writer, and Junki Watanabe.)