If Bill Gates, Elon Musk and Stephen Hawking Are Worried, Shouldn’t You Be?

Talk to most policy wonks about the prospect of artificial intelligence, and eyes start to roll. Religious conservatives aren’t afraid of AI because they don’t believe humans can create truly human-like intelligence. Fiscal conservatives aren’t worried because they see the robotic age as generating more wealth for shareholders, and because they’re far more worried about this quarter’s earnings reports than 20- to 30-year projections for the future. Liberals, meanwhile, tend to be skeptical of predictions of scientific breakthroughs that could fundamentally reorganize society, and downright hostile to the idea that middle-class labor could actually be threatened by fundamental economic and technological forces rather than the political and legal manipulation of the wealthy and corporate elites.

Worrying about artificial intelligence, then, is inconvenient for both sides of the political aisle, and therefore gets waved off as the province of nerds too nerdy even for politics. When it’s addressed at all, conventional wisdom is to repeat the mantra that technological progress is always accompanied by increased productivity leading to new and more abundant jobs–even though that industrial-revolution-era accident of history has not been operative in the internet age. Taking climate change seriously is at least convenient politically to the left, even though it’s massively inconvenient to the right. The prospect of transformational artificial intelligence is convenient to neither, so it tends to be dismissed as a crackpot concern.

But it’s not crackpot to the people most engaged with the technology–folks like Bill Gates, Elon Musk and Stephen Hawking:

“I am in the camp that is concerned about super intelligence,” Gates wrote. “First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Here’s what Elon Musk had to say:

I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.

Keep in mind that most people working in the field of artificial intelligence believe that human-level artificial intelligence is only two to three decades away–and that a great many of those think that super-intelligence is only just around the corner from human-level intelligence. Their concern is the survival of the human race in the face of this development; the assumption that smart machines are going to replace most human jobs is, for them, almost a foregone conclusion. Bill Gates has even said in the past that governments will need to beg corporations to employ people rather than programs and machines within just a few decades.

Which means that there is a gigantic disconnect between what the smartest and most informed people in the field of artificial intelligence think is going to happen, and what most policy wonks and economists think is going to happen. AI scientists are shouting from the rooftops to anyone who will listen, but no one in serious policy circles is hearing them. That’s very similar to the situation climate scientists faced a decade or two ago, particularly prior to the screenings of An Inconvenient Truth.

Economists and insider policy wonks are usually the last people to realize when there are big systemic problems that need addressing, even as political hacks continue to relitigate the 1960s. We need to do much better than that if we’re going to survive as a species.