Matthew Hutson is a freelance science writer in New York City and the author of “The 7 Laws of Magical Thinking.”
The largest artificial-intelligence conference, Neural Information Processing Systems (with the regrettable acronym NIPS, changed last month to NeurIPS), has become a hot ticket. At last year’s event, Intel threw a packed party with the rapper Flo Rida. This year, it sold out in less than 12 minutes. Meanwhile, Burning Man — the desert festival begun in 1986, a year before NeurIPS — sold out in under half an hour this year. The theme: “I, Robot.” Reacting to the theme’s announcement, the AI researcher Miles Brundage tweeted, “NIPS is the new Burning Man; Burning Man is the new NIPS.”
AI was once a fringe academic pursuit that reached public consciousness mostly through sci-fi movies like “2001” and “The Terminator.” Now it nets researchers seven-figure salaries and converses with our kids through appliances and phones. Ethical dilemmas like those in the movies — How much autonomy should machines have? Whose priorities should they serve? — have become urgent topics with near-term consequences. And now that AI is replacing jobs and creating art, it is forcing us to confront an age-old question with new intensity: What makes humans so special?
Artificial intelligence has many definitions, but broadly it refers to software that perceives the world or makes decisions. It uses algorithms, or step-by-step instructions (a recipe is an algorithm). Within AI is an area called machine learning, in which algorithms are not hand-coded but trained. Give the computer lots of labeled photos, and it figures out how to label new ones. And within machine learning is deep learning, which uses algorithms loosely modeled on the brain. So-called neural networks pass data among many connected nodes, each performing a bit of computation, like the brain’s neurons. It’s deep learning that’s behind self-driving cars, speech recognition, and superhuman players of Go and poker. It’s deep learning that’s made NeurIPS the new Burning Man.
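The idea of "many connected nodes, each performing a bit of computation" can be made concrete with a toy sketch. The network below is hypothetical and not drawn from either book: each node takes a weighted sum of its inputs and squashes it, and data flows through two layers. The weights here are arbitrary; in machine learning they would be learned from labeled examples rather than hand-coded.

```python
import math

def neuron(inputs, weights, bias):
    """One node: a weighted sum of inputs squashed to the range (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def forward(x, layers):
    """Pass data through successive layers of connected nodes."""
    for layer in layers:
        x = [neuron(x, weights, bias) for weights, bias in layer]
    return x

# A toy "deep" network: 2 inputs -> 2 hidden nodes -> 1 output node.
# Each node is a (weights, bias) pair; the numbers are made up.
layers = [
    [([0.5, -0.6], 0.1), ([0.9, 0.4], -0.2)],  # hidden layer
    [([1.2, -0.8], 0.0)],                      # output layer
]
output = forward([1.0, 0.0], layers)  # a single value between 0 and 1
```

Real deep networks differ only in scale, with millions of nodes and weights tuned automatically against huge data sets.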
One of its pioneers traces its history in a new book, “The Deep Learning Revolution.” Terrence J. Sejnowski started as a physicist in the 1970s before finding that his mathematical tools could be used to study information processing in the brain — and to create new forms of information processing in computers. (He’s now a computational neuroscientist at the Salk Institute for Biological Studies and since 1993 has been the president of NeurIPS.) Neural networks have always had devotees, but they were not always popular. Despite initial promise, they couldn’t do much until the rise of large, multi-layered networks — deep learning — in the past few years. We finally have the powerful computers, software refinements and giant data sets to train and operate them.
One is struck by how badly even experts misjudge the progress of this (and other) technology. A key ancestor of deep learning was a one-neuron algorithm developed in the 1950s called a perceptron. A 1958 article in The New York Times read, “The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.” Presumably it meant within a generation. The project soon hit snags. While the perceptron appeared to be able to spot tanks in photos, it turned out to be keying on sky brightness. In 1969 the researchers Marvin Minsky and Seymour Papert published a book arguing that complex tasks would require multiple layers of perceptrons but that such networks might not be trainable. They were wrong about training, but their pessimism helped cause a “winter” in the research field that lasted until the 1980s. Deep learning’s limitations, or lack thereof, are still debated.
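The perceptron's learning rule is simple enough to sketch in a few lines. The following is an illustrative reconstruction, not code from either book: one neuron nudges its weights toward each example it misclassifies. It can learn a linearly separable function like logical AND, but no single neuron can learn XOR, the kind of limit Minsky and Papert formalized.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Rosenblatt-style rule: for each misclassified example,
    shift the weights and bias toward the correct answer."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred           # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Logical AND is linearly separable, so one neuron suffices.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
# XOR is not separable by any single line, so this same loop
# never converges on it -- hence the call for multiple layers.
```

Stacking such neurons into layers removes the limitation, which is exactly the step the field could not reliably train until decades later.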
Sejnowski’s book is part history lesson, part textbook and part memoir, with varying levels of accessibility. Those with an existing interest in the topic will be charmed and enlightened. Some anecdotes have a you-had-to-be-there quality; others are more repeatable. At a 2006 banquet, Sejnowski publicly pressed Minsky on whether people were right to demonize him for waylaying neural nets. Finally Minsky shouted, “Yes, I am the devil!” At a resort after the first NeurIPS, a neuroscientist was discussing the sea slug Aplysia. Sejnowski recalls, “The gentleman next to me in the hot tub from the Department of Defense was probably wondering what Aplysia had to do with national security.”
Sejnowski has been a steadfast cheerleader for deep learning. In his first chapter, he notes its existing and near-term applications: autonomous driving, language translation, cancer diagnosis, stock trading, legal discovery, personalized tutoring. But he remains grounded. Later, he touches on some of its current risks: Neural nets are often too complex to explain their decisions in relatable terms, they can perpetuate social discrimination if trained on biased data, and they can be used for autonomous weapons that might become trigger-happy. Granted, humans are also opaque, unfair and ornery.
Still, much separates AI from people. All existing AI is “narrow,” good at one thing. With deep learning and other methods, experts aspire to create artificial general intelligence (AGI), which has common sense. An old saying remains true: State-of-the-art AI will make the perfect chess move — while the room is on fire. Sejnowski has hope for our ability to reverse-engineer the brain. “Nature may be cleverer than we are individually,” he writes, “but I see no reason why we, as a species, cannot someday solve the puzzle of intelligence.”
The dream of building minds is an old one. How old? You may be surprised to learn that the ancient Greeks had myths about robots. In “Gods and Robots,” Stanford science historian Adrienne Mayor describes how, more than 2,500 years before the modern computer, people told tales of autonomous machines that could labor, entertain, kill and seduce.
Among them was Talos, a bronze automaton forged by Hephaestus, god of metalworking, to guard the island of Crete. This machine, the size of the Statue of Liberty, patrolled the shore hurling boulders at invaders. (In 1948, the name Talos was given to a partly autonomous missile.) Hephaestus’s human descendant Daedalus was said to craft animated statues of animals so lifelike they needed to be tied up. Pandora, another of Hephaestus’s creations, was an android sent to curse humanity. She entices Epimetheus (“afterthought”) to let her into his home, where she lifts the lid on her woeful jar. (“Box” is a mistranslation.) While Pandora was a one-trick pony — narrow AI — “The Iliad” describes Hephaestus’s golden serving girls as having “sense and reason . . . [and] all the learning of the immortals.” AGI, and then some.
Eastern traditions also featured robots. Indian legend has mechanical soldiers defending the remains of the Buddha. And an ancient Chinese tale has a robotic man dance and flirt with royal concubines, angering King Mu before its creator reveals its artificial nature. That people could even picture such technical feats thousands of years ago may seem a stretch, but they had catapults, voting machines and other automated mechanisms from which to extrapolate. We don’t have anything near time travel, and we can still enjoy “The Terminator.”
In “Gods and Robots,” Mayor carefully examines primary and secondary source material — writings and artwork — to discern the ancients’ views on minds both supernatural and soulless. She takes an academic tone (her book and Sejnowski’s are from university presses) but draws occasional parallels to modern sci-fi movies such as “Blade Runner” and “Ex Machina,” arguing that our concerns about artificial life haven’t changed much. “The age-old stories,” she writes, “raise questions of free will, slavery, the origins of evil, man’s limits, and what it means to be human.” Can we control our creations? Can our creators control us? Are we robots — in Plato’s words “ingenious puppet[s] of the gods”?
We’d better pay attention to those stories, old and new, Mayor says. “The ancient Greeks understood that the quintessential attribute of humankind is always to be tempted to reach ‘beyond human,’ ” she writes, “and to neglect to envision consequences.” Prometheus (“forethought”) warned Epimetheus about Pandora’s jar. Mayor wonders if Stephen Hawking, Elon Musk and Bill Gates, who have warned that AI could kill us all, are “the Promethean Titans of our era.” She calls the stories in her book “good to think with.” And not just for us. Mayor foresees a day when AIs will read our fictions and come to understand us through them.
Who knows, maybe they’ll even develop their own stories and culture and rituals. They’ll form a festival with a flaming effigy of the being that brought them into this world. They’ll call it, you guessed it, Burning Man.
THE DEEP LEARNING REVOLUTION
By Terrence J. Sejnowski
MIT. 352 pp. $29.95

GODS AND ROBOTS
By Adrienne Mayor
Princeton. 304 pp. $29.95