I’m at the WeRobot2014 conference at the University of Miami over the next two days, Friday and Saturday (Co-Conspirator David Post is also on the program). The topic this year is the diffusion of robots and robotic technologies into social life, the ordinary social settings where most of us live and work, and the legal and policy issues and implications that follow.  Organized this year by University of Miami law professor Michael Froomkin, one of the leaders in this field, it is a remarkably interdisciplinary meeting – engineers, designers, and technologists, social psychologists, philosophers, lawyers, business and investment people, and some tech journalists and writers.  The range of issues includes drones in the civil airspace, robots as “moral proxies” in self-driving cars, and many more. If you want to follow it, the live-stream is here, papers and sessions are posted at the program page, and the Twitter hashtag is #WeRobot.

There’s a huge part of technology that is just about technology: engineering, materials science, and whether you can get an artifact to rotate a gripper or to distinguish door one from door two. But insofar as these machines are intended to take part in society, the problems are no longer just about engineering a thing; they are about the interaction of humans and machines. At that point, especially as these machines are supposed to interact with ordinary folks in their ordinary social spaces – homes, schools, eldercare residential facilities, hospitals, offices, etc. – questions of law, regulation, and ethics can’t really be elided.

I’ve sometimes referred to this as “normative engineering,” to emphasize the now widely held understanding in the robotics community that the normative aspects arising from human-machine social interactions need to be considered in design from the beginning of the process. Otherwise you might wind up with a very expensive investment in a machine that will not survive tort and product liability suits – and, to be clear, not necessarily because of an overly burdensome litigation system (though this might frequently be a problem), but because the machine was not developed with attention, for example, to what psychologists could tell its designers about how humans would be likely to interpret its behavior.

Although there is a lot of discussion at this conference about the future of technology and society, it is emphatically not science fiction. As much as many – most? all? – of the people attending this conference love science fiction and could recite the Three Laws of Robotics and all that, the technology has in fact moved past the point at which we can appeal to science fiction for answers, as we could before there was an actual path of technology to examine. At that point, the normative engineering has to be drawn from the tools of society at hand.  This is a striking feature of this and other conferences I’ve attended in the past couple of years – originally a self-conscious decision, and today internalized within this intellectual community, not to frame things in science fiction or pop culture terms, but to address the very real challenges in law, ethics, and other aspects of social organization posed by the new technologies as they actually emerge. Indeed, I expect that one of these days journalists writing about robotic technologies, drones, etc., will feel sufficiently foolish about constantly framing serious policy issues in today’s emerging technology by invoking the Terminator or Skynet that they … won’t.

This raises the important question, then, of how to define a robot by reference to today’s technologies and the ones that can be seen emerging now.  How to define it not for pedantry’s sake, but in order to say what distinguishes it from other technologies in ways that might affect how we see it for social purposes, including its legal regulation. In that regard, Ryan Calo (a law professor at the University of Washington, a member of the WeRobot2014 organizing committee, and a major intellectual in the law and technology field) has a new paper at this conference, “Robots and the New Cyberlaw.” It lays out, better I think than any other current account, what makes “robots” distinctive in terms of how law, regulation, and ethics need to frame them.  They are different from automation or cyber, for example, and Calo’s paper identifies three features in particular: “embodiment,” meaning physical extension, mobility, and action in the world; “emergence,” by which he means machine learning, self-learning, and gradually increasing intelligence capabilities; and “social meaning.” As he says in the paper’s abstract:

Two decades of analysis have produced a rich set of insights as to how the law should apply to the Internet’s peculiar characteristics. But, in the meantime, technology has not stood still. The same public and private institutions that developed the Internet, from the armed forces to search engines, have initiated a significant shift toward robotics and artificial intelligence … Robotics has a different set of essential qualities than the Internet and, accordingly, will raise distinct issues of law and policy. Robotics combines, for the first time, the promiscuity of data with the capacity to do physical harm; robotic systems accomplish tasks in ways that cannot be anticipated in advance; and robots increasingly blur the line between person and instrument.

I’ll be posting over the next couple of days about themes raised at the conference. My congratulations to Professor Froomkin and his staff, the WeRobot2014 organizing committee, the attendees, and all of us who are trying to think seriously through the issues raised by the “social life of things.”