Last week, leaks revealed that the Web sites most people use every day are sharing users' private information with the government. Companies participating in the National Security Agency's program, code-named PRISM, include Google, Facebook, Apple and Microsoft.
It wasn't supposed to be this way. During the 1990s, a "cypherpunk" movement predicted that ubiquitous, user-friendly cryptographic software would make it impossible for governments to spy on ordinary users' private communications.
The government seemed to believe this story, too. "The ability of just about everybody to encrypt their messages is rapidly outrunning our ability to decode them," a U.S. intelligence official told U.S. News & World Report in 1995. The government classified cryptographic software as a munition, banning its export outside the United States. And it proposed requiring that cryptographic systems have "back doors" for government interception.
The cypherpunks won that battle. By the end of the Clinton administration, the government conceded that the Internet had made it impossible to control the spread of strong cryptographic software. But more than a decade later, the cypherpunks seem to have lost the war. Software capable of withstanding NSA snooping is widely available, but hardly anyone uses it. Instead, we use Gmail, Skype, Facebook, AOL Instant Messenger and other applications whose data is reportedly accessible through PRISM.
And that's not a coincidence: Adding strong encryption to the most popular Internet products would make them less useful, less profitable and less fun.
"Security is very rarely free," says J. Alex Halderman, a computer science professor at the University of Michigan. "There are trade-offs between convenience and usability and security."
Most people's priority: Convenience
Consumers have overwhelmingly chosen convenience and usability. Mainstream communications tools are more user-friendly than their cryptographically secure competitors and have features that would be difficult to implement in an NSA-proof fashion.
And while most types of software get more user-friendly over time, user-friendly cryptography seems to be intrinsically difficult. Experts are not much closer to solving the problem today than they were two decades ago.
Ordinarily, the way companies make sophisticated software accessible to regular users is by performing complex, technical tasks on their behalf. The complexity of Google, Microsoft and Apple's vast infrastructure is hidden behind the simple, polished interfaces of their Web and mobile apps. But delegating basic security decisions to a third party means giving it the ability to access your private content and share it with others, including the government.
Most modern online services do make use of encryption. Popular Web services such as Gmail and Hotmail support an encryption standard called SSL. If you visit a Web site and see a "lock" icon in the corner of your browser window, that means SSL encryption is enabled. But while this kind of encryption will protect users against ordinary bad guys, it's useless against governments.
That's because SSL only protects data moving between your device and the servers operated by Google, Apple or Microsoft. Those service providers have access to unencrypted copies of your data. So if the government suspects criminal behavior, it can compel tech companies to turn over private e-mails or Facebook posts.
That problem can be avoided with "end-to-end" encryption. In this scheme, messages are encrypted on the sender's computer and decrypted on the recipient's device. Intermediaries such as Google or Microsoft only see the encrypted version of the message, making it impossible for them to turn over copies to the government.
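The flow can be sketched in a few lines of toy code. This is purely illustrative, not real cryptography: a one-time-pad XOR stands in for a vetted cipher, and the "server" is just a variable. The point is where encryption and decryption happen, and what the intermediary ever sees.

```python
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy one-time pad: XOR each byte with the shared key.
    # Real end-to-end systems use vetted ciphers; this only
    # illustrates the data flow, not secure practice.
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

# Sender and recipient share a key; the intermediary never sees it.
key = secrets.token_bytes(32)
message = b"meet at noon"

ciphertext = encrypt(key, message)          # encrypted on the sender's device

# A Gmail- or Facebook-like server only ever stores and forwards
# the ciphertext -- it has nothing readable to hand over.
stored_on_server = ciphertext

recovered = decrypt(key, stored_on_server)  # decrypted on the recipient's device
assert recovered == message
```

Because the server holds only `stored_on_server`, a subpoena to the intermediary yields ciphertext, not the message.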
Software like that exists. One of the oldest is PGP, e-mail encryption software released in 1991. Others include OTR (for "off the record"), which enables secure instant messaging, and the Internet telephony apps Silent Circle and Redphone.
But it's difficult to add new features to applications with end-to-end encryption. Take Gmail, for example. "If you wanted to prevent government snooping, you'd have to prevent Google's servers from having a copy of the text of your messages," Halderman says. "But that would make it much harder for Google to provide features like search over your messages." Filtering spam also becomes difficult. And end-to-end encryption would make it harder for Google to make money on the service, since it couldn't use the content of messages to target ads.
A similar point applies to Facebook. The company doesn't just transmit information from one user to another. It automatically resizes users' photos and allows them to "tag" themselves and their friends. Facebook filters the avalanche of posts generated by your friends to display the ones you are most likely to find interesting. And it indexes the information users post to make it searchable.
These features depend on Facebook's servers having access to a person's private data, and it would be difficult to implement them in a system based on end-to-end encryption. While computer scientists are working on techniques for creating more secure social-media sites, these techniques aren't yet mature enough to support all of Facebook's features or efficient enough to serve hundreds of millions of users.
Other user headaches
End-to-end encryption creates other headaches for users. Conventional online services offer mechanisms for people to reset lost passwords. These mechanisms work because Apple, Microsoft and other online service providers have access to unencrypted data.
In contrast, when a system has end-to-end encryption, losing a password is catastrophic; it means losing all data in the user's account.
Also, encryption is effective only if you're communicating with the party you think you're communicating with. This security relies on keys — large numbers associated with particular people that make it possible to scramble a message on one end and decode it on the other. In a maneuver cryptographers call a "man in the middle" attack, a malicious party impersonates a message's intended recipient and tricks the sender into using the wrong encryption key. To thwart this kind of attack, sender and recipient need a way to securely exchange and verify each other's encryption keys.
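One common way to do that verification, used by PGP and similar tools, is to compare key "fingerprints" over a separate channel such as a phone call. The sketch below is a simplified illustration with made-up key material; real fingerprint formats differ by tool.

```python
import hashlib

def fingerprint(public_key: bytes) -> str:
    # A short, human-comparable digest of a key, in the spirit of
    # PGP fingerprints (the exact format here is illustrative).
    return hashlib.sha256(public_key).hexdigest()[:16]

# Hypothetical key material for illustration.
alice_real_key = b"alice-public-key-material"
mallory_key = b"mallory-public-key-material"

# Bob receives a key claiming to be Alice's. Before trusting it, he
# compares its fingerprint with one Alice reads to him over the phone.
received = mallory_key
if fingerprint(received) != fingerprint(alice_real_key):
    print("fingerprint mismatch -- possible man-in-the-middle")
```

If a man in the middle has substituted his own key, the fingerprints won't match, and the sender knows not to encrypt to it.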
"A key is supposed to be associated closely with a person, which means you want a person to be involved in creating their own key, and in verifying the keys of people they communicate with," says Ed Felten, a computer scientist at Princeton University. "Those steps tend to be awkward and confusing."
And even those who are willing to make the effort are likely to make mistakes that compromise security. The computer scientists Alma Whitten and J.D. Tygar explored these problems in a famous 1999 paper called "Why Johnny Can't Encrypt." They focused on PGP, which was (and still is) one of the most popular tools for users to send encrypted e-mail.
PGP "is not usable enough to provide effective security for most computer users," the authors wrote.
Users expect software to "just work" without worrying too much about the technical details. But the researchers discovered that users tended to make mistakes that compromised their security. Users are supposed to send other people their "public key," used to encode messages addressed to them, and to keep their private key a secret. Yet some users foolishly did the opposite, sending others the private key that allowed eavesdroppers to unscramble e-mail addressed to them. Others failed to make backup copies of their private encryption keys, so when their hard drives crashed, they lost access to their encrypted e-mail.
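The asymmetry between the two keys can be shown with textbook RSA and toy-sized numbers. This is strictly a classroom illustration: real keys use primes thousands of bits long, plus padding and other machinery omitted here.

```python
# Textbook RSA with toy-sized primes, purely to illustrate the
# roles of the two keys; nothing here is secure as written.
p, q = 61, 53
n = p * q   # modulus, part of both keys
e = 17      # public exponent: share (n, e) freely
d = 2753    # private exponent: (e * d) % ((p-1)*(q-1)) == 1; keep secret

def encrypt_with_public_key(m: int) -> int:
    return pow(m, e, n)   # anyone holding (n, e) can encrypt to you

def decrypt_with_private_key(c: int) -> int:
    return pow(c, d, n)   # only the private-key holder can decrypt

c = encrypt_with_public_key(42)
assert decrypt_with_private_key(c) == 42

# Mailing someone your *private* key -- the Johnny-style mistake --
# would let them decrypt everything ever encrypted to you.
```

Sending `d` instead of `e` is exactly the error Whitten and Tygar observed: the labels look interchangeable to a novice, but only one of them is safe to share.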
Using PGP is such a hassle that even those with a strong need for secure communication resist its use. When Edward Snowden, the man who leaked the details of the PRISM program, first contacted Glenn Greenwald at the Guardian in February, he asked the journalist to set up PGP on his computer so the two could communicate securely. He even sent Greenwald a video with step-by-step directions for setting up the software. But Greenwald, who didn't yet know the significance of Snowden's leaks, dragged his feet. He did not set up the software until late March, after filmmaker Laura Poitras, who was also in contact with Snowden, met with Greenwald and alerted him to the significance of his disclosures.
Going with the flow
Felten argues that another barrier to adopting strong cryptography is a chicken-and-egg problem: It is only useful if you know other people are also using it. Even people who have gone to the trouble of setting up PGP still send most of their e-mail in plain text because most recipients don't have the capability to receive encrypted e-mail. People tend to use what's installed on their computer. So even those who have Redphone will make most of their calls with Skype because that's what other people use.
Halderman isn't optimistic that strong cryptography will catch on with ordinary users anytime soon. In recent years, the companies behind the most popular Web browsers have beefed up their cryptographic capabilities, which could make more secure online services possible. But the broader trend is that users are moving more and more data from their hard drives to cloud computing platforms, which makes data even more vulnerable to government snooping.
Strong cryptographic software is available to those who want to use it. Whistleblowers, dissidents, criminals and governments use it every day. But cryptographic software is too complex and confusing to reach a mass audience anytime soon. Most people simply aren't willing to invest the time and effort required to ensure the NSA can't read their e-mail or listen to their phone calls. And so for the masses, online privacy depends more on legal safeguards than technological wizardry.
The cypherpunks dreamed of a future where technology protected people from government spying. But end-to-end encryption doesn't work well if people don't understand it. And the glory of Google or Facebook, after all, is that anyone can use them without really knowing how they work.