The CEO of Tesla and founder of SpaceX says it's imperative that society regulate the advance of artificial intelligence.
No longer a sci-fi novelty, artificial intelligence is a reality with great potential. While most of the news has focused on AI’s potential for good, some pundits are now pointing out its potential for harm. They include none other than Elon Musk.
As the founder, CEO, and CTO of SpaceX and the CEO of Tesla, a key player in the emergence of the self-driving car, Musk is certainly no Luddite. When he talks about AI, he is talking about technology that is absolutely integral to his business model. Nonetheless, Musk believes it is imperative that society regulate the advance of AI, as he noted in an interview before an audience at the National Governors Association Summer Meeting this month.
In the course of the interview, Musk referred to his “exposure to the most cutting-edge AI” and warned, “I think people should be really concerned.” The point is not to live in dread of the potential repercussions of AI and respond reactively to them, he said, but to plan proactively for them.
That would require a break with past patterns of behavior in dealing with technology’s unintended consequences. “Normally, a whole bunch of bad things happen. There’s a public outcry. After a few years, regulations come out. Then companies oppose that. Anyway, it takes forever,” Musk noted.
Society doesn’t have the luxury of waiting to see what happens with AI, Musk argued, because AI poses a “fundamental risk to the existence of human civilization.”
The consequences of earlier technological disruptions have included “car accidents, airplane crashes, faulty drugs, or bad food” — that is to say, outcomes that have been “harmful to individuals but not to society as a whole,” Musk said. In contrast, AI poses a “fundamental, existential risk,” he said, “and I don’t think people appreciate that.”
That’s why he is pushing for regulation now.
As a businessman working in the highly regulated auto and space industries, Musk said he appreciates that “it’s not fun being regulated” and declared his opposition to “over-regulation.” Nonetheless, he said, “I really think we need government regulation” on AI, and it should be done “pronto.”
Urgency is needed, he said, because the nature of competition in the industry will compel companies to “race to build AI.” If your competitor is building AI systems, “you’re going to be crushed” if you fall behind.
That’s why it’s necessary for “regulators to come in to say, ‘You have to pause to make sure this is safe,’ ” Musk said. Unless they “do that for all the teams in the game,” individual companies will feel pressured to accelerate AI development. AI is already developing rapidly, he pointed out.
In response to a question from Gov. John Hickenlooper of Colorado, Musk illustrated “the remarkable achievements of artificial intelligence” with an account of how quickly DeepMind had gotten its AlphaGo to progress from beating the single best human player of the game Go to playing multiple top-ranked human players simultaneously and beating them all. Before AlphaGo’s win, it had been widely assumed that an AI capable of beating the best human player was at least 20 years away.
He cited another DeepMind project, in which AI taught itself to walk “from nothing within hours—way faster than any biological being.”
Why does Musk find AI’s progress so alarming? For starters, AI could manipulate information and tamper just enough with systems to “start a war.” He posited a “purely hypothetical” example to illustrate how that could work and what might motivate it: “a goal to maximize the value of a portfolio of stock” that is “long on defense.” It’s a worst-case scenario that sounds like a movie plot, which is why people might be tempted to shrug it off.
There is a more real and present danger, however, and Musk touched on it in passing. There “certainly will be a lot of job disruption,” he said, “because robots are going to be able to do everything better than us.” He added, “I mean all of us,” and then acknowledged that he was “not sure what to do about this.”
Society may be waking up to the fact that automation translates not just into greater efficiency, but also into fewer jobs. A July 19 article in The Wall Street Journal, “Robots Are Replacing Workers Where You Shop,” details some of the tech that is already replacing human staff at Walmart stores. It includes a table from Citi Research estimating the threat to jobs from automation by the year 2030. At the less-threatened end of the spectrum is insurance and finance, where 54 percent of jobs are at high risk; at the most-threatened end is accommodation and food services, where the figure is 86 percent.
That would amount to a huge number of people booted from their jobs — a sea change that would not just harm the displaced individuals but could prove devastating to society and the economy at large.
Musk’s fears about an AI future appear to center on nefarious plots of the type that make for good films, but the more mundane danger of automation-related job loss warrants serious attention.
Like Musk, I don’t have the answer, but we have to begin by asking the question. We cannot risk the unintended consequences of accepting all technological progress as good.
— Ariella Brown is a writer, editor and social media consultant based in New York.