You can read our draft here, but I’ll also post parts of the article directly on the blog next week. I hope our readers will find them interesting, and I’d love to have feedback; we have plenty of time to improve the article.
A word about my coauthor: Mark is probably the most-cited scholar of his generation, and certainly one of the two or three top intellectual property scholars and law-and-technology scholars in the country. A good rule for life in the legal academy: Always take every opportunity to cowrite with Mark Lemley.
To start with, here is a brief summary of what our article discusses:
* * *
AR and VR both present legal questions for courts, companies and users. Some are new takes on classic legal questions. People will die using AR and VR — indeed, some already have. They will injure themselves and others. Some will use the technology to threaten or defraud others.
Sorting out who is responsible will require courts to understand the technology and how it differs from the world that came before. But it won’t necessarily require a fundamental rethinking of legal doctrines. A death threat via AR or VR is legally the same as a death threat via an oral conversation, a letter, an e-mail or a fax.
But AR and VR will also create new legal questions. Virtual interactions will be conducted through devices and networks that are privately owned and operated. Those interactions may therefore be subject to contractual terms and conditions that users will likely never see or consider, but that significantly limit the privacy, property and liberty rights of those users.
The interactions may not happen in any one physical jurisdiction, and therefore may be harder to regulate effectively. This move — from conducting most of our business in public spaces with public rules, largely located in a single jurisdiction, to private spaces with private rules in which the parties seem next to each other but are really physically in many jurisdictions — may cause us to rethink just what constitutes a legally binding contract and what things we want governed by public rather than private rules.
And AR and VR can also raise other questions that are more fundamental. VR isn’t “real” in the way we normally mean that term. It is an artificial construct, bits cobbled together to produce sounds and images that we observe. But it feels real in a way that is hard to understand until you’ve experienced it. The same may be true with AR, if it can overlay vivid and realistic images of people and objects over the real reality that we see.
This gut feeling of realness can cast doubt on legal doctrines that tend to distinguish between physical contact and physical danger and things that are “just” audio and visual communication. We base many rules on the distinction between the mental and the visceral, between things we perceive and things we experience. VR and AR will make it harder to draw that line, and may push us to think hard about why we punish certain kinds of conduct and not others in the physical world. Indeed, they may even lead us to rethink the notion of what is “real” in a world where more and more of our most significant experiences aren’t “real” in the classic understanding of that term.
VR and AR aren’t the first technologies to challenge legal doctrine. We can, for instance, learn some important lessons from our efforts to apply legal rules to the Internet over the past quarter-century. But most of those efforts happened haphazardly, not deliberately. Thinking deeply now about how the law will apply to VR and AR requires us to tread new ground. The reward — hopefully — will be not only a solid framework for applying legal doctrine to some tricky new questions, but also a better understanding of doctrines we take for granted in the physical world.
We begin in Part I by discussing the rise of VR and AR and how people experience those technologies. We then turn in Part II to how the law is likely to treat “street crimes” in VR — behavior such as disturbing the peace, indecent exposure, deliberately harmful visuals (such as strobe lighting used to provoke seizures in people with epilepsy) and “virtual groping.” Two key aspects of this, we will argue, are the Bangladesh problem (which will make criminal law very hard to enforce in practice) and technologically enabled self-help (which will offer an attractive alternative, but also a further excuse for real-world police departments not to get involved).
In Part III, we turn to tort lawsuits: by users against users, by users against VR and AR environment operators, by outsiders (such as copyright owners whose works are being copied by users) against users, and by outsiders against the environment operators. In Part IV, we discuss users’ alteration of other users’ avatars, or creation of their own avatars that borrow someone else’s name and likeness, and consider whether that should be viewed as tortious.
We then consider in Part V the likelihood that VR and AR systems will pervasively store all the sensory information that they present to their users (and that they gather in the course of presenting it), and discuss the privacy implications of such data collection and potential disclosure. And we close in Part VI by talking about two overarching issues — order without law and the speech-conduct distinction — that bear on broader debates even outside VR and AR.
Our article primarily aims to identify the interesting coming questions, and to outline some possible answers. We will sometimes suggest which answers are best, but that’s not the main value that we seek to add. Rather, we simply hope that, by thinking ahead about such matters, all of us can better decide how to develop both VR and AR law and VR and AR technology, and perhaps learn something about the role of law in the physical world as well.