Artificial intelligence is the future. Google, Microsoft, Amazon and Apple are all making big bets on AI. (Amazon owner Jeff Bezos also owns The Washington Post.) Congress has held hearings and even formed a bipartisan Artificial Intelligence Caucus. From health care to transportation to national security, AI has the potential to improve lives. But it comes with fears about economic disruption and a brewing “AI arms race.” Like any transformational change, it’s complicated. Perhaps the biggest AI myth is that we can be confident about its future effects. Here are five others.
It is certainly true that conversations with AI chatbots are often unintentionally funny. And no one who interacts with Alexa or Siri or Cortana is going to say they pass the Turing Test. “Their responses, often cobbled together out of fragments of stored conversations, make sense at a local level but lack long-term coherence,” Brian Christian wrote in a 2012 Smithsonian Magazine article. Garbled sentences and ridiculous responses often make clear just how poorly machines mimic human capabilities — or even, sometimes, how they process information. “Machines don’t have understanding,” Garry Kasparov told TechCrunch last year. “They don’t recognize strategical patterns. Machines don’t have purpose.”
But AI is already writing financial news, sports stories and weather reports, and readers aren’t noticing. From the Associated Press to The Washington Post, it’s becoming increasingly common. AI is also producing “deep fake” videos — from invented speeches by politicians to pornography featuring celebrities’ computer-generated faces — that many people think are real. These rapid advances present significant concerns, shaking the public’s confidence in what they see and hear. As a 2017 Harvard study warned, “The existence of widespread AI forgery capabilities will erode social trust, as previously reliable evidence becomes highly uncertain.”
China’s national strategy to lead the world in artificial intelligence — which calls for “the training and gathering of high-end AI talent” — has elicited fear and loathing in the United States. “China’s prowess in the field will help fortify its position as the dominant economic power in the world,” Will Knight observed in MIT Technology Review in 2017. Writing in The Hill, Tom Daschle and David Bier warned in January that “the U.S. government is behind the curve.”
While there is clearly reason for concern about the United States’ standing, China’s strategic document admits that “there is still a gap between China’s overall level of development of AI relative to that of developed countries.” According to Jeffrey Ding, a University of Oxford researcher, “China trails the U.S. in every driver except for access to data.” The United States also has more AI experts, who publish more papers on the topic through the Association for the Advancement of Artificial Intelligence, and it attracts far more commercial investment in the field.
That said, given China’s dedication to pursuing AI, the United States will need to take a concerted societal approach if it wants to maintain its dominant position. Such efforts are already underway: In March, the New York Times reported that the Pentagon is attempting to work with Silicon Valley companies to push projects ahead.
As early as 1964, a group of Nobel Prize winners known as the Ad Hoc Committee on the Triple Revolution warned that machines would usher in “a system of almost unlimited productive capacity” that would cause disruptive levels of unemployment. More recently, a Mother Jones headline proclaimed, “You Will Lose Your Job to a Robot — and Sooner Than You Think.” The article noted that traditional blue-collar and white-collar workers alike may be displaced, leading to joblessness and poverty. Taking up that torch, one truck driver worried in the Guardian that “we will soon be extraneous — roadkill, so to speak, except we won’t be dead.”
But in transforming work, AI may also create new jobs. As Joel Mokyr, an economic historian at Northwestern University, observed, “We can’t predict what jobs will be created in the future, but it’s always been like that.” Historically, technological change has initially diminished, but then later boosted, employment and living standards by enabling new industries and sectors to emerge.
We don’t yet know how AI will affect employment in the long term. In the meantime, there may still be disruptions, and we’ll have to grapple with the growing gap between those who have the skills to thrive in a changing world and those who don’t.
It’s easy to imagine that relying on computers to make critical decisions would take human bias out of the equation. “Humans are hindered by both their unconscious assumptions and their simple inability to process huge amounts of information,” wrote Digitalist Magazine last year. Judges around the United States are using AI tools in sentencing decisions, on the assumption that these systems can offer “the most objective information available to make fair decisions about prisoners.”
If only it were that simple. In one example that shows AI’s vulnerability to bias, ProPublica found that a program intended to play a key role in criminal justice decisions from bail to sentencing was almost twice as likely to rate black defendants as probable repeat offenders as it was white defendants. The program also incorrectly rated white defendants as low-risk more often than black defendants. “It’s often wrong — and biased against blacks,” ProPublica wrote.
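The disparity ProPublica measured can be described as a gap in false positive rates: among people who did not reoffend, what share of each group was wrongly flagged as high-risk? A minimal sketch of that calculation, using invented toy data rather than the actual COMPAS records, assuming a simple list of (group, flagged, reoffended) tuples:

```python
# Hypothetical illustration only — toy data, not ProPublica's dataset or methodology.
# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", False, False), ("B", False, False), ("B", True, True),
]

def false_positive_rate(group):
    """Share of non-reoffenders in `group` who were wrongly flagged high-risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    wrongly_flagged = [r for r in non_reoffenders if r[1]]
    return len(wrongly_flagged) / len(non_reoffenders)

for g in ("A", "B"):
    print(g, round(false_positive_rate(g), 2))
```

In this toy data, group A's non-reoffenders are flagged at twice the rate of group B's, even though the tool never looks at group membership directly; a disparity like this is what auditors check for when evaluating such systems.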
In another example, a 2015 Carnegie Mellon University experiment found that women were shown far fewer online ads for jobs paying more than $200,000 than men were. “Many important decisions about the ads we see are being made by online systems,” said Anupam Datta, associate professor of computer science and electrical and computer engineering at Carnegie Mellon. “Oversight of these ‘black boxes’ is necessary to make sure they don’t compromise our values.” Researchers are already addressing the bias issue, seeking to head off mistakes and build more transparent algorithms.
Some prominent science and technology leaders have raised grave concerns about the implications of AI for humanity’s future. “The danger of AI is much greater than the danger of nuclear warheads by a lot, and nobody would suggest that we allow anyone to build nuclear warheads if they want,” Elon Musk said at the South by Southwest conference in March. “I fear that AI may replace humans altogether,” Stephen Hawking told Wired in 2017.
The truth is we simply don’t know where AI will lead us, but that doesn’t mean murderous terminators are going to start stalking the streets. In a 2015 open letter, experts associated with the nonprofit Future of Life Institute warned against the rise of autonomous weapons systems, which could be abused by ill-intentioned humans. The more pressing concern might not be that AI is a risk to us, but that we’re a risk to ourselves if we don’t exercise caution in how we push ahead with our AI experiments.
In some contexts, AI can save lives. In March, a self-driving car struck and killed a pedestrian in Arizona, an incident that presaged trouble for the emerging technology. Nevertheless, many researchers have long held that self-driving vehicles will help reduce traffic fatalities overall. A 2017 Rand Corp. report, for example, concludes that introducing autonomous automobiles to the streets sooner could prevent hundreds of thousands of deaths.