Wendell Wallach chairs the Technology and Ethics Study Group at the Yale Interdisciplinary Center for Bioethics, and is senior advisor to the Hastings Center. His recent book is “A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control.”
During a 1950s encounter at MIT, Marvin Minsky, one of the fathers of research on artificial intelligence, declared: “We’re going to make machines intelligent. We are going to make them conscious!” To which Douglas Engelbart, another early Information Age icon, reportedly replied: “You’re going to do all that for the machines? What are you going to do for the people?”
The Minsky-Engelbart exchange captures the tension that has continued to dog development of artificial intelligence. What should be the goal of engineers who create thinking machines? Engelbart proposed building smart devices that could augment human capabilities (intelligence augmentation, or IA) as an alternative to the pursuit of autonomous machines with thinking powers that might equal or exceed human capabilities (artificial intelligence, or AI). Both research trajectories have made significant strides. In 1997, IBM's Deep Blue defeated Garry Kasparov, the reigning world chess champion, and in 2011, Watson beat "Jeopardy" whizzes Ken Jennings and Brad Rutter, marking significant steps on the road toward artificial intelligence.
For nearly three decades, New York Times reporter John Markoff has tracked advances in technology. In "Machines of Loving Grace," he captures the history of artificial intelligence and robotics. He introduces us to a large cast of computer geeks and colorful personalities, and explores their many failures and successes. "Machines of Loving Grace" and Walter Isaacson's bestseller "The Innovators" tell a few of the same stories, but the two books can be read as complementary. Isaacson begins with the prehistory of hardware and software in the early 1800s, while Markoff focuses on the pursuit of artificial intelligence once it got seriously underway at a 1956 summer conference on the subject at Dartmouth College.
Markoff explores the pros and cons of advanced technology and robotics. He examines, for example, whether robots could one day serve a useful role in caring for the elderly and concludes that they could, augmenting the work of humans without eliminating caregivers' jobs. "The development of robots that will act as companions and caregivers," Markoff writes, "is a way of using artificial intelligence to ward off one of the greatest hazards of old age — loneliness and isolation."
In his final chapter, he asks whether developments in AI and IA will create intelligent machines that are “Masters, Slaves, or Partners?” He wonders if AI systems can be designed so that they are unequivocally beneficial and controllable. Noting the dual nature of smart machines, he explains that on the one hand, they can eliminate human drudgery, but on the other they can subjugate humanity. Over the decades, he writes, the dichotomy has “only sharpened.” But he knows where responsibility lies. “This is about us, about humans and the kind of world we will create,” he argues. “It’s not about the machines.”
In “Our Robots, Ourselves,” David A. Mindell offers a more in-depth and tightly woven discussion of whether robots will replace or complement human skills. Mindell, an MIT professor, has played a leading role in developing deep-sea submersibles and autonomous aircraft. The history of robots designed to perform tasks in extreme environments provides Mindell with plenty of fodder to challenge the conventional argument that tasks performed by people inevitably migrate first to remotely controlled robots and then on to autonomous systems.
Discussing the Mars exploration rovers, Mindell shows how the efforts of people and robots can be complementary and evolve together. He points out that transporting an astronaut to and from Mars is dramatically more expensive than launching an unmanned mission like the rovers. But without humans involved, either on board or in nearby orbit around Mars, exploration of the planet is more difficult and time-consuming. A 20-minute delay in getting a signal from Earth to Mars meant that the remote control of the rovers would be ridiculously slow. Therefore the rovers were designed to do many things on their own while they waited for instructions from Earth.
But autonomous tasks also often take a lot of time. “The rover can autonomously plan a route around a series of rocks or obstacles using imagery it gathers from its camera,” Mindell writes. “But to do that it stops every ten seconds to look at the terrain for twenty seconds. Thus autonomy is costly in time.” As a result, even with the most advanced forms of machine autonomy, some researchers believe that a human presence on Mars would be more efficient.
Mindell clearly demonstrates that the efforts of people and robots can be complementary and inextricably entangled, and can evolve together. He acknowledges that each step forward, however, improves the prospect of assembling fully autonomous machines. “The challenges of robotics in the twenty-first century,” he writes, “are those of situating machines within human and social systems. They are challenges of relationship.”
In “We, Robots,” Curtis White offers witty and insightful rants against the culture being created by the alliance between innovative technologies and capitalism. He joins Evgeny Morozov and Jaron Lanier on the front line of critics challenging assumptions about the benefits of technology. White wants us to create a new narrative that will compete with what he sees as the dominant techno-cultural interpretation of modern life and human destiny: that humans are constantly diminished by the march of technological possibilities. However, he provides little guidance as to what that narrative might be.
"Machines of Loving Grace"
By John Markoff
Ecco. 378 pp. $26.99

"Our Robots, Ourselves"
By David A. Mindell
Viking. 260 pp. $27.95

"We, Robots"
By Curtis White
Melville House. 284 pp. $25.95