The White House Office of Science and Technology Policy recently organized a symposium of top robotics experts at Worcester Polytechnic Institute to brainstorm how field robots could be used in future Ebola-like pandemics. While the researchers came up with a number of innovative short-term and long-term ideas for how robots could be used to fight Ebola — everything from cleaning and decontaminating rooms to actually administering IVs to patients under medical care — there are still a number of important issues to clarify before we hand over the task of fighting Ebola to the robots.
On the surface, of course, handing over the dirty work of cleaning up after an Ebola outbreak to the robots sounds like a no-brainer. Instead of putting humans into harm’s way, why not just send in a robot? That’s the logic behind the Xenex germ-zapping robots, the TRU-D sanitation robots, the HStar Technologies medical robot that can lift and carry patients in its arms, and the QinetiQ unmanned ground vehicles that can be used to detect and remove hazardous materials. There’s even an amazing medical robot that can remove the sheets from a patient’s bed and then discard the contaminated linen. No hands, no fuss. Robots can’t develop symptoms from Ebola, they are relatively easy to disinfect (except for their wheels), they dutifully carry out tasks without talking back, and they can dispose of hazardous waste efficiently.
Scratch the surface, though, and you can start to see the moral and philosophical questions that arise once robots start doing more than just grunt-level decontamination work. In short, everything changes once robots also become human-like caregivers of Ebola patients rather than just repurposed industrial robots.
Even assuming that wise and highly moral technologists have created robots according to something approximating Isaac Asimov’s Three Laws of Robotics, there is still plenty of potential for things to go wrong as robots go about trying to observe those laws. Just read any of Asimov’s “Robot” stories (or, better yet, watch the Will Smith movie) to understand how things might go awry.
Here’s just one real-world example: What would an Ebola robot following Asimov’s Three Laws of Robotics have made of Kaci Hickox, the controversial Ebola nurse, and all the questions – both legal and moral – raised by her decision to flout quarantine guidelines while in New Jersey and then in Maine? How would you possibly program an artificially intelligent robot to deal with that scenario?
The First Law of Robotics seems to be fairly incontrovertible: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Okay, so if we know that a potentially exposed Ebola nurse (showing no symptoms) still has some small statistical chance of spreading Ebola to others, what would a robot caregiver do if it saw the nurse taking bike rides around her community, potentially enabling the spread of Ebola? The robot couldn’t physically harm the nurse, of course. And it probably couldn’t impose a mandatory quarantine, either, since that might harm the individual by infringing on constitutional rights. However, the “inaction” clause of the First Law opens up a loophole of its own: the caregiver robot couldn’t exactly sit around and watch patients take bike rides whenever they wanted if those rides potentially posed a risk to other human beings.
So, the robot would be forced to revert to the Second Law of Robotics: “A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.” Easy, you say: the Ebola robot wouldn’t actually have to worry about what to do in these cases, because it would be told what to do by humans. But, as we know from reality, what New Jersey Gov. Chris Christie might give as an order (mandatory quarantine) could differ from what Maine Gov. Paul LePage and state health authorities might order (voluntary quarantine). Which order, then, should the robot follow?
Faced with all these potential contradictions, the robot would have to resort to the Third Law of Robotics: “A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.” Okay, so now you’d have an Ebola-fighting robot with a limited degree of freedom, able to make its own decisions about what’s right for humanity. Even if you add in the law Asimov himself later appended, the Zeroth Law (“A robot may not harm humanity, or, by inaction, allow humanity to come to harm”), you suddenly have robots making decisions about what to do about Ebola on a global, rather than local, level. What might be good for humanity in general might not be good for an individual Ebola patient, that patient’s lovable pet dog, or that patient’s immediate family.
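As a purely illustrative sketch, the precedence ordering the laws impose can be modeled as an ordered veto list. Everything here is an assumption made for illustration: the boolean action flags (`harms_individual`, `ordered_by_human`, and so on) stand in for exactly the harm judgments that, as the quarantine example shows, are the genuinely hard part.

```python
# Illustrative only: Asimov's laws as an ordered veto list over toy
# action descriptors. The boolean "harm" flags are assumptions; deciding
# what actually counts as harm is the hard problem discussed above.

ASIMOV_LAWS = [
    # Asimov's later addition, the Zeroth Law (sometimes numbered fourth):
    # a robot may not harm humanity, or by inaction allow it to come to harm.
    ("Zeroth Law", lambda a: not a.get("harms_humanity", False)),
    # First Law: may not injure a human being.
    ("First Law", lambda a: not a.get("harms_individual", False)),
    # Second Law: must obey orders from humans, unless vetoed above.
    ("Second Law", lambda a: a.get("ordered_by_human", True)),
    # Third Law: must protect its own existence, unless vetoed above.
    ("Third Law", lambda a: not a.get("destroys_self", False)),
]

def first_violation(action):
    """Return the highest-priority law the action violates, or None."""
    for name, law in ASIMOV_LAWS:
        if not law(action):
            return name
    return None

# The two-governors problem: a mandatory quarantine (modeled here as
# harming the individual) gets vetoed, but the laws say nothing about
# which of two lawful, conflicting orders the robot should obey.
print(first_violation({"ordered_by_human": True, "harms_individual": True}))   # First Law
print(first_violation({"ordered_by_human": True, "harms_individual": False}))  # None
```

Even this toy version makes the underlying problem visible: the laws act as filters, not tie-breakers, so two lawful but contradictory orders pass through entirely unresolved.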
Maybe all of this is over-thinking the matter, colored by one too many dystopian novels about robot overlords. After all, as Dmitry Berenson, an assistant professor in Worcester Polytechnic Institute’s robotics program, emphasized at the Ebola robotics symposium, “We’re not trying to make this a completely automated process.” Theoretically, human operators would always remain a safe distance away, whether they are helping workers remove hazmat clothing, operating a telepresence robot, or overseeing the cleanup of an Ebola-infected area. Robots would not be working autonomously, so there would be no scary dystopian scenario of robots taking over.
However, there are clearly a lot of moral and philosophical considerations to take into account before the White House signs off on any proposal for the extensive use of robots in fighting Ebola. Robots might be great at performing simple instrumental tasks like cleaning up an infected room, but they are far less equipped to take on human-like qualities as Ebola caregivers. Even the concept of robotic burials — robots carrying out full burials of humans in order to protect a community from potential infection risks — could face strong pushback in some communities. Would you want your loved one’s body handled by a robot? And there’s always the issue of whether using robotic caretakers would somehow stigmatize or psychologically traumatize a patient. (You know it’s the beginning of the end when a robot shows up to treat you instead of a human doctor.)
Let’s just hope that, in the future, Ebola-fighting robots will be just as accomplished at navigating the complex moral and philosophical issues of tackling a pandemic as they are at navigating the complex terrain of physical-world obstacles.