In just the past few years, Robert McMillan writes in Wired magazine, dramatic advances have been made in the field of artificial intelligence. With Skype’s “Star Trek-like instant translation capabilities,” Google’s self-driving cars, and computers that can teach themselves to humiliate humans at arcade games, he says, the new developments are both exhilarating and scary.
In “AI Has Arrived, and That Really Worries the World’s Brightest Minds,” he reports on the fears of experts who gathered at a closed-door conference in Puerto Rico in early January; among the industry talents were Elon Musk of SpaceX and Tesla, Skype co-founder Jaan Tallinn and Google AI expert Shane Legg. Think of that game-winning computer, Tallinn told the meeting: Though “the technologist in me marveled at the achievement, the other thought I had was that I was witnessing a toy model of how an AI disaster would begin, a sudden demonstration of an unexpected intellectual capability.” In other words, it’s a little too much like a precursor to “The Terminator.”
Delegates to the conference signed an open letter pledging to conduct AI research only for good. A companion letter outlined research priorities, including studying the economic and legal effects of robots that could take away human jobs or manipulate financial markets; Musk kicked in $10 million to help pay for the research.
It’s not a new fear, McMillan notes: Last year a Canadian company, Clearpath Robotics, promised not to build autonomous robots for military use, posting this statement on its website: “To the people against killer robots: We support you.”