HitchBOT, a hitchhiking robot, is formally introduced to an American audience before his tragic demise. (Stephan Savoia/AP)

You don't know that I am a human any more than I know that you are one. Unless I am reading over your shoulder right now, which I probably am not, this collection of words appearing under the obviously phony byline of "Philip Bump" is, from where you're looking, as likely the result of a computer program that slaps random words together as it is the result of deliberate action by an actual human. From my perspective, the odds are pretty good that the entity processing these words is not a person and is instead an automated computer program that scans text as part of our weird online economic system. The point being that we make a lot of assumptions about the humanity of one another on not very much evidence.

Over time, that will become more difficult. Your phone's personal assistant is laughably bad, but if you could have handed Siri to your 20-years-ago self, you would have thought it incredible. You might even have assumed that a real person was on the other end of your spoken commands. As we get smarter about building things that can process information and return human-like responses, it will be harder for us to know whether we're interacting with a person or a thing. Eventually, we won't be able to tell; the things will be indistinguishable from people unless we're in the same room — and even that's assuming that our currently laughably bad robots aren't also improving dramatically.

As you may know, that inability to tell man from machine has a name. Alan Turing, one of the giants of computer science history, proposed precisely that as the test (1) of a machine's ability to think. If, after posing a series of questions to two unseen interlocutors, you couldn't tell which was the human and which was the artificial intelligence, then that AI, in Turing's estimation, would have demonstrated its ability to think. (More on this at the end of the article.) This is harder than it perhaps sounds, but it remains a benchmark in thinking about machine intelligence.

So let's say that we've hit that point. There is an artificial intelligence that thinks and speaks like a person, perhaps embedded in an android that walks around and looks like a human, too. This AI/robot is so good that it is indistinguishable from a person; it appears to think and laugh and cry and bleed. Maybe it is operating off of a highly complex set of guidelines to determine its responses — the fabled Chinese Room scenario (2). Maybe its intelligence has emerged in some other way.

I had a question about such a scenario, which I'll present here: Could that android be elected president?

This is a less random question than it might at first seem. The Constitution relies on language that seems obvious but which isn't always clear-cut (to which the Supreme Court can attest). There's the examination of how the Constitution adapts to new situations, given that the possibility of a machine that was indistinguishable from a human likely wasn't at the front of the founding fathers' minds. And, more broadly, there's the general question of what constitutes a "person."

That question arises immediately when the qualifications for the presidency are considered. Spelled out in Article II of the Constitution, they seem prohibitive for our President Bot.


No person except a natural-born citizen or a citizen of the United States at the time of the adoption of this Constitution shall be eligible to the office of President; neither shall any person be eligible to that office who shall not have attained the age of thirty-five years, and been fourteen years a resident within the United States.

A number of words in that sentence ("person," "natural-born citizen," the age and residency requirements) are hard to apply to a robot/android/artificial intelligence at first glance. At a second glance, some of them are not.

What is a "person"?

The Constitution does not stipulate that the president must be a human. It does, however, stipulate that no person except a natural-born citizen can be president, suggesting that one must at least be a person. But "person" and "human" are not necessarily the same.

"The answer to the question depends on for what purpose you want to define a person," said Michael Dorf, professor of constitutional law at Cornell University, when we spoke to him by phone on Wednesday. "The answer might be different for different purposes." He pointed to corporations, which are granted personhood in some contexts (remember Mitt Romney?). "The U.S. Code defines them that way, so that when the U.S. Code uses the word 'person,' it presumptively includes corporations," Dorf said.

"Personhood is simply an amorphous concept," said Prof. David Cassuto of Pace Law School. Cassuto specializes in issues of personhood, particularly as they might apply to animals. Something with personhood is "either a being that has rights, or it's the thing that one becomes when one gets rights," he said. "It's not clear that you need to have rights to be a person, or be a person to have rights."

If you are a corporation, you are given personhood, which grants you rights. If you are a chimpanzee and you want your rights recognized, you need personhood — and a friendly judge.

Earlier this year, the Nonhuman Rights Project argued for the release of two chimpanzees from a university in New York. The organization's lawsuit, filed in December 2013, argued that the chimps, Hercules and Leo, should not be held in captivity and sought a writ of habeas corpus — a legal tactic meant to prevent illegal confinement. Granting that writ would have been an acknowledgement of the personhood of Hercules and Leo. (The judge declined to do so.)

Cassuto notes that the general intermingling of the idea of "person" and "human" is itself less certain than you might think. "There's no specific biological criterion that makes someone human," he said. He used an analogy that's known in philosophy as the "ship of Theseus" (3). Humanity "is an aggregation of characteristics," he said. "For example, you give someone a pig heart or a bionic arm: Do they become less human? The answer is: Nobody thinks so. So the question is, why shouldn't there be a continuum towards human."

Dorf agrees that biological constraints on personhood are irrelevant — as is the presence of traditional biology. "My view is that if you had true artificial intelligence — that is to say, a sentient being that happens to be based on silicon rather than carbon — I see no reason why that being should have lesser status than carbon-based sentient beings."


Boston Dynamics' Atlas robot. (YouTube)

For what it's worth, he also thinks it might be easier to make the case for personhood of an animal than a robot — a position with which Alan Turing would appear to agree. "There is a greater difference, to my mind," Turing wrote when outlining his test, "between the typical animate and the inanimate than there is between man and the other animals."

What being a person gets you

"I think we have to distinguish between 'humanity' and the collection of rights to which humans are entitled — and by entitled I mean the collection of rights we give ourselves," Cassuto said. "Are we interested in extending that umbrella of rights to cover an artificial intelligence? That's the basic question, and that's a sociological question."

Ironically, it may be corporations that help smooth the social path toward personhood for chimps and robots and others. Steven Wise, president of the Nonhuman Rights Project, pointed to a recent Supreme Court ruling as having been key.

"The argument that was once really the province of a small coterie of animal-rights philosophers has exploded," he told me over the phone, speaking about the personhood issue. "What really seemed to touch it off was the Citizens United case" — the Court's 2010 decision about corporate money and politics. "Lawyers, we all know that corporations have been persons for hundreds of years. But the average American didn't understand that until Citizens United came down. That was really a turning point for the understanding that anybody could be a person."

Hercules and Leo lost, but Wise is optimistic about the prospect of personhood for nonhumans. "We are moving closer and closer. We're going to have a breakthrough, probably sooner than I once might have thought," he said.

Even if the AI got personhood, of course, that doesn't mean it gets to do everything a human can. Corporations can't vote (yet); their personhood is contextual. And personhood sufficient for constitutional eligibility to serve as president is different from the personhood granted to chimpanzees so that they need not live lives of confinement.

An undercurrent to all of these conversations is that we are considering the question of artificially intelligent persons outside of the context of the years of evolution that will lead us to that point. Giving Siri personhood is silly. Considering personhood for a computer or robot that expresses admiration and sadness and annoyance and flirts with you will be more interesting, especially once robot sadness and flirtation are things we take for granted.

By then, we'll likely have established where artificial intelligences fall in relation to humans when it comes to the rights that follow from being granted personhood (if such a thing comes to pass). Could AI robots eventually be given the right to vote? And, if not, would they go on strike against us, writing lengthy arguments that they publish widely to try to convince us that they should be able to?

Dorf, the constitutional law professor, says that granting the right to vote to additional parties by extending personhood could arguably violate the equal protection clause of the Constitution, by diluting the value of an individual vote. But we'll cross that bridge in 2116.


Back to the president thing

So let's say that a century from now, artificially intelligent androids are granted personhood "for the purposes of defining the political community," as Dorf put it. In other words, they're given the right to vote or otherwise participate in politics. The next stumbling block for the android's presidential aspirations is the constitutional need to be a "natural-born citizen."

(We're drawing something of a false distinction here, by the way. Anyone can "run for president" — even the teenaged Deez Nuts. We explored this in August, with Dorf's help. The eligibility requirements kick in once someone is to assume the job. But it's simpler to think about it this way.)

So. There are a few ways of looking at the "natural-born" phrase. One is as a complete phrase — a citizen, naturally born — which is to say a person who was born in the United States or to an American and is granted citizenship by virtue of the 14th Amendment. Another is that "natural-born" imposes its own standard: that the candidate must have been born through nature, an impossibility for a robot.

(We'll skip the idea that an artificial intelligence might supplant or merge with an actual human's brain — the singularity (4), as it's known. Feel free to chase that rabbit down that hole to your heart's content.)

What's a natural birth? That's changed since the 1700s. "Think about somebody born as a consequence of artificial insemination," Dorf said. "You might say, well, that's a person who was unnaturally born. Well, nobody thinks that in the naturally born citizen clause 'natural' means natural in that sense."

"Nobody really knows what the point of the naturally born citizen clause is," he added. "I think it's kind of odious, and I think most people agree with that." He suggested that the phrase should be interpreted in light of the purpose it was meant to serve. If the purpose was to establish citizenship, not to verify the means of birth, androids should be okay.

This sort of consideration of the Constitution in light of new developments is not uncommon. Dorf noted thermal-imaging technology that allows law enforcement to see the heat output of buildings and thereby find places that might be using grow lamps for marijuana cultivation. The federal government used such a device to scan the home of Danny Kyllo in Oregon, where Kyllo was doing just that. The scan led to a search warrant and to Kyllo's arrest. But the Supreme Court threw out his guilty plea in 2001, deciding that the use of the new technology was an unreasonable search under the Fourth Amendment. The purpose of the amendment trumped a situation the Constitution's authors couldn't have foreseen.

If an android is considered to have personhood sufficient to vote and run for office, and if the "natural-born citizen" clause is read simply to require citizenship from birth, questions of age and residency — included in the Article II criteria — become somewhat less critical, particularly if you again weigh purpose against language.

Besides, the qualifications of being president are "kind of irrational" in a modern context, Wise said. "Right now, you could have a person who has an IQ of 1 who, as long as he's 35, could run for president. The age of 35 I'm assuming is because you're supposed to reach a certain amount of maturity. But there's no maturity test for it. It's a really blunt instrument."

This entire exercise is a thought experiment that is awfully forgiving in terms of the ease of getting the robot on the ballot. Cassuto, the personhood expert, was skeptical that all of those legal limitations could be overcome in order to allow a robot to run. But if that long shot happened, the robot might as well run. Which is where its luck would likely run out.

"Would that person get any votes?" he wondered. "Even longer shot."


Further reading, as they say:

1. The Turing test. A game proposed by Alan Turing meant to help answer the question of whether or not a machine can think (a question Turing described as "meaningless" even as he proposed the game). The game involves two people and an interrogator. In its original formulation, one person is a man and the other a woman, and each tries to convince the interrogator (who can't see either) that he or she is the woman. Turing proposes a modification: A computer and a human try to convince the interrogator that he or it is the human. Reports of computer programs having passed the Turing test are generally inaccurate.
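Turing's setup is, at bottom, a simple protocol, and it can be caricatured in a few lines of code. Below is a toy harness for the game as described above; the stub respondents, their canned reply and the interrogator's guessing rule are all invented for illustration, not drawn from Turing's paper.

```python
import random

# Toy imitation game: an interrogator questions two unseen parties,
# labeled A and B, and must name the human.

def human(question: str) -> str:
    return "Let me think about that for a moment."

def machine(question: str) -> str:
    # A "passing" machine mimics the human's style of answer.
    return "Let me think about that for a moment."

def imitation_game(questions, guesser) -> bool:
    labels = ["A", "B"]
    random.shuffle(labels)                      # hide which channel is which
    channels = dict(zip(labels, (human, machine)))
    transcript = {label: [respond(q) for q in questions]
                  for label, respond in channels.items()}
    guess = guesser(transcript)                 # interrogator names the human
    return channels[guess] is human             # True if the guess was right

# With indistinguishable answers, guessing is a coin flip: over many
# rounds the interrogator is right only about half the time.
wins = sum(imitation_game(["Are you human?"], lambda t: "A")
           for _ in range(1000))
```

When the machine's answers are indistinguishable from the human's, `wins` hovers around 500 out of 1,000, which is roughly what "passing" means in Turing's formulation.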

2. The Chinese room scenario. A thought experiment created by John Searle that compares programmed intelligence to a person who doesn't speak Chinese and is placed in a room with a rulebook of Chinese characters. Handed a slip of paper with some characters on it, he consults the rulebook and cobbles together other characters to return a slip of paper in response. Searle's argument is that this is how a machine might simulate conversation well enough to pass the Turing test: It doesn't know what it's saying, but it's participating in a conversation regardless.
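Searle's rulebook can be caricatured as a lookup table. The sketch below is illustrative only: the "rules" are a hypothetical handful of question-and-answer slips, and the program, like the man in the room, matches symbols without understanding any of them.

```python
# The room's rulebook: "if handed this slip, return that slip."
# These pairings are invented for illustration.
RULEBOOK = {
    "你好吗?": "我很好。",        # "How are you?" -> "I'm fine."
    "你是谁?": "我是一个朋友。",  # "Who are you?" -> "I'm a friend."
}

def chinese_room(slip: str) -> str:
    """Produce a response slip by mechanical lookup, with no comprehension."""
    return RULEBOOK.get(slip, "请再说一遍。")  # fallback: "Please say that again."

reply = chinese_room("你好吗?")  # the room "converses" without knowing what it said
```

From outside the room, the exchange looks like conversation; inside, it is only pattern matching, which is the heart of Searle's objection.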

3. The ship of Theseus. This one has a story, derived from Plutarch. Imagine that Theseus set sail from Crete to Athens in a boat that was battered by storms its entire trip. The mast splinters; it is replaced. Boards are broken and damaged; new ones are slotted in. By the time the ship reaches its destination, there is not a single piece of wood that hasn't been replaced during the voyage. The question is: Did the same boat arrive in Athens as left Crete?
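Programmers run into a version of this puzzle whenever they distinguish structural equality from identity. In the sketch below (the plank names are invented), the ship that arrives is equal, piece for piece, to the one that departed, yet shares no parts with it:

```python
class Plank:
    """One replaceable piece of the ship."""
    def __init__(self, name: str):
        self.name = name
    def __eq__(self, other):
        return isinstance(other, Plank) and self.name == other.name

departed = [Plank(n) for n in ("mast", "hull", "deck", "rudder")]
ship = list(departed)             # the boat that leaves Crete

for i, old in enumerate(ship):
    ship[i] = Plank(old.name)     # storm damage: swap in a fresh plank

same_structure = ship == departed                                    # True
shares_a_part = any(new is old for new, old in zip(ship, departed))  # False
```

Whether `same_structure` or `shares_a_part` is the "right" test for sameness is precisely what the thought experiment refuses to settle.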

4. The singularity. This is where we let technologist Ray Kurzweil talk.