
Weitzner: Encryption solution in wake of Paris should come from Washington, not Silicon Valley

In the wake of the Paris terror attacks, some lawmakers and law enforcement officials want technologists like Apple CEO Tim Cook to find ways to access encrypted emails and messages that are being used by terrorists. (AP Photo/Eric Risberg)

The Outrage Machine is a weekly opinion column by voices from the left and right on Washington.

In the wake of the terrible terrorist attacks in Paris, law enforcement officials in Washington are again calling on technology designers to dumb down users’ Internet security to enable guaranteed access to all data and communications, even if encrypted.

Several months ago, after a round of exhortations from FBI Director James Comey and British Prime Minister David Cameron, along with hearings, academic papers and urging from privacy advocates, the White House publicly stood up for strong encryption even though it might impair some law enforcement surveillance. At that time, President Obama declined to endorse any legislative initiative to force technology companies to build “back doors” into encrypted systems.

But now, after Paris, top law enforcement officials have renewed their hope that the wizards of Silicon Valley might just design their way out of this problem. As Attorney General Loretta Lynch said, “The hope, perhaps, is that Silicon Valley, having engineered a problem, might just engineer a solution too.”

Will it help to ask Silicon Valley to go back and try harder? I doubt it.

One thing is certain: the basic facts about how secure systems work have not changed. Over the summer, 14 colleagues of mine, including the world’s leading cryptographers, published a paper, “Keys Under Doormats,” which showed that there are grave risks in building “exceptional access” systems for law enforcement. Exceptional access refers to a technical feature of a system that gives government access to data or communications that would otherwise be available only to those directly granted access. Once those back doors are there for the FBI, all of our private communications become much more vulnerable to attack by malicious criminals and terrorists.

To guarantee exceptional access for law enforcement, systems must be designed to keep the keys that unlock the secret information in some secure location, possibly for months or years, so that police can get the data on demand. Keeping keys around in storage creates attractive targets for attackers and runs counter to the lessons we’ve learned over the years about secure system design.

For the sake of argument, let’s make a big hypothetical leap and assume that someone discovers a technically secure exceptional access system. Does this solve the problem? Not in the least.

The real problem with law enforcement’s calls for mandated exceptional access is one of policy and trust, not just a search for the right technology. It bears remembering how we got here to begin with.

The Edward Snowden disclosures, which began in 2013, revealed that U.S. intelligence agencies were collecting large amounts of data about both American citizens and Internet users globally. What’s more, it appears the NSA had actually cut its own back doors into systems run by Google, Yahoo and others by tapping into the undersea fiber optic cables that connect tech companies’ data centers around the world.

The reaction in the U.S. and around the world was dramatic. American companies were embarrassed. Commercial competitors in Europe and elsewhere immediately launched marketing campaigns urging non-U.S. Internet customers to switch to products and services operated outside the U.S., to avoid the risk of NSA surveillance.

At the height of the Snowden disclosures, Deutsche Telekom ran ads in the European Wall Street Journal touting “E-mail Made In Germany” as safer than e-mail from U.S. providers. Responding to this very real market pressure (and likely to soothe their own wounded pride), Apple, Google and others announced that their mobile devices (iPhones and Android phones) would ship with encryption turned on by default.

The impact on law enforcement is this: if government officials want to tap an iMessage conversation or access the contents of an Android phone found at a crime scene, they may not be able to do so in the future. What’s more, Apple’s and Google’s systems are designed so that neither company any longer has the ability to unlock the encrypted data itself. All of a sudden, in a fast-moving tech industry that has not always prioritized secure design, market pressures pushed world-leading companies to build systems impervious to all but the most sophisticated law enforcement agencies.

We got here not because of arbitrary or willful design choices by companies, but because of a rapidly escalating cycle of distrust by users and customers who demand systems designed against many threats, including the perceived threat of some government surveillance.

Finding the hypothesized technical solution to exceptional access would actually open a Pandora’s box.

There is simply no way to consider such technology apart from the basic fact that any surveillance capability, once deployed globally as part of smartphones, apps or web-based services, will be available to all governments. We will not be able to limit it to governments with good human rights records. So even if we think we have an exceptional access solution for Apple or Google to deploy, we have to ask whether it is tolerable for it to end up in the hands of bad actors. This puts both users and Internet companies in the impossible position of either compromising basic human rights or forgoing access to the world’s largest markets, such as China and Russia.

A real trust gap has brought us to this point. But the actual problem we should be working on is strengthening the public policy framework that governs surveillance, both domestically and globally. Rebuilding trust requires extending to the global arena the basic civil liberties and human rights principles that govern domestic law enforcement.

First, we need comprehensive transparency regarding government surveillance practices. Leading Internet and telecom companies have begun regularly reporting the number and kind of surveillance requests they receive and how they respond.

U.S. law is becoming more open to transparency, but many countries outside the U.S. prohibit transparency reporting by tech companies. The U.S. government should be a leader in encouraging our allies to allow greater transparency. If we look carefully at the surveillance actually conducted under the programs Snowden disclosed, the fact is that there were no major government abuses. Though the scope of the programs was extreme and rested on questionable court rulings, most of the surveillance was technically within the law. More proactive transparency will help narrow the trust gap between users, companies and the government.

Second, global surveillance must be subject to strict independent approval and oversight. This means that an independent judicial entity must approve each surveillance action and provide ongoing oversight of government access to and use of that data. The U.S. legal system provides such oversight for law enforcement surveillance and certain intelligence activities. Other democratic countries are far behind.

The U.K. is considering a law that would allow global surveillance overseen not by courts but only by other parts of the executive branch. France recently enacted a law allowing widespread surveillance without independent judicial controls.

And finally, recognizing the uniquely intrusive nature of surveillance, our laws must ensure that these tools are used only when necessary and when the privacy risks are proportionate to the public safety threat. After all, digital surveillance tools are now inexpensive enough that they could be employed to investigate minor offenses such as late tax payments or parking violations. That would be overkill, so we need strong legal limits on the scope of surveillance.

Law enforcement rightly cautions us about the risks of a global Internet environment in which more and more human activity (including criminal behavior) takes place in impenetrable dark corners.

The way to encourage people out into the light is not to build increasingly intrusive surveillance tools with back doors everywhere, but to show that there is a genuine human rights framework protecting Internet users from unjustified and unchecked surveillance.

Daniel Weitzner is director of the MIT Internet Policy Research Initiative and principal research scientist at MIT’s Computer Science and Artificial Intelligence Lab. From 2011 to 2012 he was White House deputy chief technology officer for Internet policy.  
