How to punish robots when they inevitably turn against us

Gabriel Hallevy is a professor at Ono Academic College's Faculty of Law in Kiryat Ono, Israel. He specializes in criminal law, and in particular the interface between criminal law and new technologies such as robots and other machines equipped with artificial intelligence. His work was recently featured in a Boston Globe article by Leon Neyfakh on the ability of the legal system to handle robots.

His new book, out next month, is "When Robots Kill: Artificial Intelligence under Criminal Law."  We spoke Tuesday morning; a lightly edited transcript follows.

Dylan Matthews: A lot of people would assume that criminal responsibility should lie with the manufacturer and owner of a robot, rather than with the robot himself (or herself).

Gabriel Hallevy: Well, if we impose criminal responsibility on the robot itself, it does not mitigate liability on the part of the programmer or the manufacturer or the user.

Dylan Matthews: So you're adding liability to the robot, rather than shifting it from the manufacturer.

Gabriel Hallevy: Yes. The criminal liability of the robot is additional to the current criminal liability, and the current criminal liability of the manufacturer is very limited. If we're talking about AI, you can impose liability on the programmer or manufacturer only for negligence, because the manufacturer can claim in court that the robot learned by itself how to commit the offense, or that the user taught the robot what to do and not to do, and so the user bears most of the responsibility for the commission of the offense.

Dylan Matthews: A lot of criminal law relies on subjective mental states. An intentional homicide is punished more harshly than one caused by negligence, for instance. How can the law make judgments on robots based on such factors?

Gabriel Hallevy: The key point here is that the current definitions in criminal law are too narrow and as a result today in most modern legal systems, including the American legal system, we have no legal process to impose criminal liability upon machines.

The key term here is "awareness." Generally, in order to impose criminal liability we must prove two elements: the factual element and the mental element. The factual element means the act. You cannot be criminally liable for homicide unless you take the knife or the gun and stab or shoot the victim. A machine can do that as well: a robot can have arms, it can move, and so on.

The problem is with the mental element. In most criminal codes, including those of the United States, Israel and most Western European countries, awareness is the key term for the mental element: for most criminal offenses, the awareness of the offender must be proven. For humans, when we ask ourselves "what is awareness?", we think about the very deep philosophical significance of what it means to be aware. But in criminal law the definition is very narrow. It is the capability of absorbing sensory data and processing it. Processing sensory data is not what we would call understanding something; it just means creating an internal image.

Robots, and AI systems since the 1980s, have had this ability whenever they are equipped with cameras. Think of the robots South Korea started using a few years ago as prison guards. They move through the center of the prison, and when they see something that moves, they only have to identify whether it is a prisoner trying to escape. If they identify that this is the situation, they start a process of alerting the human guards. This is awareness.
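In software terms, that narrow definition of awareness is just a sense-process-act loop. Here is a minimal sketch of the prison-guard example, assuming hypothetical process and alert_guards functions; nothing here reflects the actual South Korean system:

```python
from dataclasses import dataclass
from typing import Iterable, Optional

# A minimal sketch of "awareness" in the narrow legal sense Hallevy
# describes: absorb sensory data, process it into an internal image,
# and act on that image. All names here are illustrative placeholders.

@dataclass
class InternalImage:
    label: str                  # e.g. "prisoner", "guard"
    moving_toward_exit: bool

def process(frame) -> Optional[InternalImage]:
    """Turn raw camera data into an internal image.
    Stubbed out here; a real system would run a vision model."""
    return None

def patrol(frames: Iterable, alert_guards) -> None:
    for frame in frames:                        # absorb sensory data
        image = process(frame)                  # process it into an internal image
        if image is None:
            continue                            # nothing recognized in this frame
        if image.label == "prisoner" and image.moving_toward_exit:
            alert_guards(image)                 # start alerting the human guards
```

Nothing in this loop "understands" anything; it only builds and acts on an internal image, which is exactly why the narrow legal definition of awareness can be said to apply.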

The problem is that when we think of the typical murderer or the typical rapist in non-legal terms, we think of the murderer as an evil person, the rapist as an evil offender. But evil is not required here. What is required is only awareness. Some offenses have additional requirements, like specific intent; in murder we require that. But intent is derived from awareness: you must be aware in order to intend something. With current technology, we have the capability to do that. We have the capability of applying awareness, by the criminal law definition, to machines.

I should say that the plan of the book is a bit different. I'm not calling for the imposition of criminal liability on machines. The point is to call for a change in the criminal law and its narrow definitions: either we accept the possibility of imposing criminal liability on non-human entities, as we have done with corporations since the 17th century, or we change the legal definitions. This is the point.

Watson seems like he'd be likelier to commit white-collar crimes. (Seth Wenig - AP)

Dylan Matthews: A question many people would probably have is, "But how do you punish a robot?" The analogy seems less clear there.

Gabriel Hallevy: This is a wonderful question. The book has six chapters, and the biggest of them, the sixth, deals with this question. After all, if you can impose liability but you can't punish the robot, what have we done here?

The rule is very simple. Any punishment that we may impose on humans, we can impose on corporations, on robots, or on any other non-human entity. You need some fine-tuning adjustments. We can impose imprisonment on corporations; we have no problem with it. I'm not talking about putting the people who manage the corporation in prison. The legal technique for corporations is to ask, "What is the meaning of imprisonment?" It is to negate the offender's freedom. The freedom of any corporation is the legal capability to do business. Therefore, when you impose six years of imprisonment on a corporation, you do not allow the corporation to do business during that period.

For robots it is the same technique, but it may lead to different consequences. When we impose imprisonment, we should ask what that punishment means for the robot. It means to negate its freedom, and that freedom is the freedom to carry out its useful daily tasks. So you ban it from doing those daily tasks.

I don't think that imprisonment would be as effective for robots as it is for humans. There are other punishments that may be more effective for robots than for humans. For a corporation, the most effective punishment isn't imprisonment; it's a fine. For robots, I can think of community service. For example, in the near future, when I hire the services of a robot to help me with my daily tasks and the robot commits a criminal offense, for the next few months it may help the community by doing daily tasks for it: helping in the community library, helping to clean the streets, or other things that contribute to the community.

This is not the only punishment; any punishment can be adjusted to the robot. The death penalty, of course, where we still have this punishment, would have the simple solution of a shutdown: shutting down the robot. If there is no other option, you must end its life, and that means shutting it down.

Dylan Matthews: This really cuts to the heart of a lot of debates about whether punishments are meant to rehabilitate offenders or deter future wrongdoing. With a robot that relies on machine learning algorithms, you could imagine rehabilitation programs changing its behavior, perhaps more readily than with humans.

Gabriel Hallevy: You're right that the purposes of punishment include not only deterrence but rehabilitation. That is a legitimate purpose of punishment, and it works for robots too. A robot equipped with artificial intelligence can learn; it has the capability of learning. So it acts, it does things, and somehow, somewhere, at some time it commits a criminal offense. Now our criminal law system gives us the opportunity to make it learn how not to commit offenses, regardless of the question of evil.

When we put the robot into the criminal process, the punishment is interpreted by that entity as drawing the borders of what it is expected to do and not to do. This is, in fact, the process of rehabilitation and deterrence. The robot now knows that if it crosses these borders, it should expect a sanction, and it is programmed not to want that. The same process occurs in humans and in robots, for both deterrence and rehabilitation.
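For a learning robot, this account of deterrence resembles reinforcement learning with a penalty signal. Here is a minimal sketch under that assumption, using a simple tabular agent; the actions, rewards and sanction value are invented for illustration, not drawn from any real system:

```python
import random
from collections import defaultdict

# Minimal sketch: a sanction as a negative reward that teaches an
# agent not to cross a forbidden "border". All names and numbers
# here are illustrative.

ACTIONS = ["do_task", "cross_border"]
SANCTION = -10.0      # the punishment the robot "is programmed not to want"
TASK_REWARD = 1.0

q = defaultdict(float)   # action -> learned expected value
alpha = 0.5              # learning rate

def choose_action(epsilon=0.1):
    if random.random() < epsilon:             # occasional exploration
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[a])   # otherwise act greedily

for step in range(1000):
    action = choose_action()
    reward = SANCTION if action == "cross_border" else TASK_REWARD
    q[action] += alpha * (reward - q[action])  # update expected value

# After training, crossing the border has a clearly negative value,
# so the greedy policy avoids it -- "deterrence" in Hallevy's terms.
print(dict(q))
```

The sanction never makes the agent "evil" or "good"; it only shifts the expected value of crossing the border, which is all the narrow legal account requires.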

In corporations, the process is broader. A corporation may have to initiate internal processes to heal itself, to get rid of criminal subcultures. When we're talking about an individual robot or human, it's much simpler, because you don't have to change other entities: you learn something and you implement it yourself. The purposes of punishment are relevant, but they should be adjusted to the new mode of being a robot.

A robotic prison guard in South Korea, plotting its next move. (Reuters)

Dylan Matthews: Robots are autonomous actors but they're also property. Why does your legal system get to say what I'm allowed to do with my robot?

Gabriel Hallevy: Take this example. Say my pet is a tiger, and I walk with my tiger, which is my property, down the street of my city. Now, can anyone tell me not to use my property the way I want? Well, of course! When you jeopardize the community, the community has the authority to restrict the usage of your property, the same way we restrict the usage of guns. What's the difference? I can use a gun as a hobby or to commit offenses, and in the same way I can use robots for my personal needs or to commit offenses through them.

The community has the authority to restrict the usage of my private property when I jeopardize the community. And if I am using robots in such a manner that may jeopardize the community, the community has the right to defend itself from the robot and the user of the robot.

Dylan Matthews: In cases like self-defense, even intentional homicide is excused in many legal codes. If a robot feels threatened, should it be allowed to kill intentionally?

Gabriel Hallevy: It depends on the character of the robot, the technological character of the robot. I assume that advanced robots, with fourth-generation artificial intelligence, would be closer to humans in their ability to identify jeopardy. I think this is a pure technological question. I don't think that the law has, or should have, different answers here for humans and for robots. The key here is given by the technology, not the law.
