Understanding cyberspace is key to defending against digital attacks
Charlie Miller prepared his cyberattack in a bedroom office at his Midwestern suburban home.
Brilliant and boyish-looking, Miller has a PhD in math from the University of Notre Dame and spent five years at the National Security Agency, where he secretly hacked into foreign computer systems for the U.S. government. Now, he was turning his attention to the Apple iPhone.
At just 5 ounces and 4 1/2 inches long, the iPhone is an elegant computing powerhouse. Its microscopic transistors and millions of lines of code enable owners to make calls, send e-mail, take photos, listen to music, play games and conduct business, almost simultaneously. Nearly 200 million iPhones have been sold around the world.
The idea of a former cyberwarrior using his talents to hack a wildly popular consumer device might seem like a lark. But his campaign, aimed at winning a little-known hacker contest last year, points to a paradox of our digital age. The same code that unleashed a communications revolution has also created profound vulnerabilities for societies that depend on code for national security and economic survival.
Miller’s iPhone offensive showed how anything connected to networks these days can be a target.
He began by connecting his computer to another laptop holding the same software used by the iPhone. Then he typed a command to launch a program that randomly changed data in a file being processed by the software.
The alteration might be as mundane as substituting 58 for F0 in a string of data such as “0F 00 04 F0.” His plan was to constantly launch such random changes, cause the software to crash, then figure out why the substitutions triggered a problem. A software flaw could open a door and let him inside.
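The single-byte substitution described above can be sketched in a few lines of Python. This is an illustration, not Miller’s actual tooling; the `mutate` helper and the sample bytes are hypothetical:

```python
import random

def mutate(data: bytes) -> bytes:
    """Return a copy of `data` with one randomly chosen byte replaced.

    For example, the byte F0 in "0F 00 04 F0" might become 58 --
    the kind of one-byte change described in the text.
    """
    mutated = bytearray(data)
    mutated[random.randrange(len(mutated))] = random.randrange(256)
    return bytes(mutated)

original = bytes.fromhex("0F0004F0")
print(mutate(original).hex())  # same length, at most one byte changed
```

Feeding thousands of such mutated files to a program, and watching for crashes, is the essence of the technique Miller used.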
“I know I can do it,” Miller, now a cybersecurity consultant, told himself. “I can hack anything.”
After weeks of searching, he found what he was looking for: a “zero day,” a vulnerability in the software that has never been made public and for which there is no known fix.
The door was open, and Miller was about to walk through.
Holes in the system
The words “zero day” strike fear in military, intelligence and corporate leaders. The term is used by hackers and security specialists to describe a newly discovered flaw, still unknown to the software’s maker, that can be exploited to break into a system. The name refers to how long defenders have had to fix it: zero days.
In recent years, there has been one stunning revelation after the next about how such unknown vulnerabilities were used to break into systems that were assumed to be secure.
One came in 2009, targeting Google, Northrop Grumman, Dow Chemical and hundreds of other firms. Hackers from China took advantage of a flaw in Microsoft’s Internet Explorer browser and used it to penetrate the targeted computer systems. Over several months, the hackers siphoned off oceans of data, including the source code that runs Google’s systems.
Another attack last year took aim at cybersecurity giant RSA, which protects most of the Fortune 500 companies. That vulnerability involved Microsoft Excel, a spreadsheet program. The outcome was the same: A zero-day exploit enabled hackers to secretly infiltrate RSA’s computers and crack the security it sold. The firm had to pay $66 million in the following months to remediate client problems.
The most sensational zero-day attack became public in the summer of 2010. It occurred at Iran’s nuclear processing facility in Natanz. Known as Stuxnet, the attack involved a computer “worm” — a kind of code designed to move throughout the Internet while replicating itself. Last week, the New York Times reported that President Obama had approved the operation as part of a secret U.S.-Israeli cyberwar campaign against Iran begun under the Bush administration.
Among other things, the worm was built to infect thumb drives. Investigators think that when one of the infected drives was inserted into a computer at the Natanz plant, its code quickly found its target: It made hundreds of centrifuges designed to refine uranium run too fast and self-destruct, while sending signals to monitors that all was well.
To complete its mission, the Stuxnet worm relied on four zero days.
Just days ago, researchers released information about Flame, another cyberattack. It appears to be designed as a massive espionage and surveillance tool, also aimed at Iran, that can steal data and listen in on phone calls.
Some researchers believe it exploits zero-day vulnerabilities similar to those in Stuxnet.
The vastness of cyberspace
Miller and his kind are masters of code. At a fundamental level, there is almost nothing simpler than the stuff of their obsessions. There is software, which is instructions written in a computer language. Computers transform software into machine code, which is simply 0’s and 1’s. Those “binary digits,” or bits, organized in trillions of combinations, serve as both the DNA and digital blood of our modern electronic world.
Bits guide the electrical impulses that tell the world’s computers what to do. They enable the seemingly magical applications that computer and smartphone users take for granted. Bits have also given life to the most dynamic man-made environment on Earth: cyberspace.
Not too long ago, “cyberspace” was pure fiction. The word appeared in “Neuromancer,” a 1984 novel that described a digital realm in which people, properly jacked in, could navigate with their minds. Author William Gibson described it as a “consensual hallucination experienced daily by billions of legitimate operators.”
Now cyberspace is a vital reality that includes billions of people, computers and machines. Almost anything that relies on code and has a link to a network could be a part of cyberspace. That includes smartphones, such as the iPhone and devices running Android, home computers and, of course, the Internet. Growing numbers of other kinds of machines and “smart” devices are also linked in: security cameras, elevators and CT scan machines; global positioning systems and satellites; jet fighters and global banking networks; commuter trains and the computers that control power grids and water systems.
So much of the world’s activity takes place in cyberspace — including military communications and operations — that the Pentagon last year declared it a domain of war.
All of it is shot through with zero days.
“We have built our future upon a capability that we have not learned how to protect,” former CIA director George J. Tenet has said.
Researchers and hackers, the good guys and bad, are racing to understand the fundamental nature of cyberspace. For clues about how to improve security — or to mount better attacks — they have turned to physics, mathematics, economics and even agriculture. Some researchers consider cyberspace akin to an organism, its security analogous to a public health issue.
One of the things they know for sure is that the problem begins with code and involves what “Neuromancer” described as the “unthinkable complexity” of humans and machines interacting online.
“The truth is that the cyber-universe is complex well beyond anyone’s understanding and exhibits behavior that no one predicted, and sometimes can’t even be explained well,” concluded JASON, an independent advisory group of the nation’s top scientists, in a November 2010 report to the Pentagon. “Our current security approaches have had limited success and have become an arms race with our adversaries.”
To picture the scale of cyberspace and the scope of the cybersecurity problem, think of the flow of electronic data around the world as filaments of light. Those virtual threads form a vast, brilliant cocoon around the globe.
The electronic impulses that carry the data move at lightning speed. A round-trip between Washington and Beijing online typically occurs in less time than it takes for a major leaguer’s fastball to cross home plate. Blink, and you miss it.
It almost doesn’t matter where hackers work. In the physics governing cyberspace, hackers, terrorists and cyberwarriors can operate virtually next door to regular people browsing the World Wide Web or sending e-mails or phone texts.
Charlie Miller works in suburban St. Louis, in a room that has a small desk, a laptop, a large monitor and power cords that snake across the floor. A wooden bookshelf holds technical manuals alongside his kids’ plastic toys and stuffed animals.
The main clue about what he does for a living is a wall poster for the movie “Hackers.” “Their Crime Is Curiosity,” it says.
The 39-year-old Miller is regarded by some as among the best hackers in the world, but he does not fit the stereotype of an alienated outsider. For starters, he is one of the good guys, a white-hat hacker. He is a security consultant, and he hunts zero days as a hobby. A father of two, trim and balding, he is deceptively modest about his special talents. But his résumé entry about his NSA experience speaks volumes:
“Performed computer network scanning and reconnaissance. Identified weaknesses and vulnerabilities in computer networks. Executed numerous computer network exploitations against foreign targets.”
Apple would not be happy about his plan to attack the iPhone. Like other technology companies, Apple does not want questions about security to taint its products. The company has a well-deserved reputation for developing strong software systems. (Apple officials declined to comment for this article.)
But Miller wasn’t being malicious. He wanted to have fun, prove that it could be done and let the attack serve as a warning about the insecurity of the networked world.
Most of all, he wanted to win a prestigious annual contest where hackers convene to show off the skills that they generally keep to themselves. To win the contest, known as “Pwn2Own,” Miller had to discover a zero day and exploit it. (Pwn is hacker lingo for taking control of a computer.)
If he won, he would receive $15,000, the device he had pwned and a white blazer (modeled on the green jacket worn by winners of the Masters golf tournament). He had won the prize before for hacking Apple products, but it was getting harder.
As he settled into a large black swivel chair in his office, Miller knew he had a challenge on his hands. He did not doubt that he would find a flaw. He only wondered how bad it would be.
Cracking the iPhone
In December 2010, Miller reached out to a friend and security colleague, Dionysus Blazakis.
Blazakis, 30, started hacking in 1994 and has been breaking code ever since. But rather than break the law, he chose to become a software developer. He and Miller worked for the same computer security firm in Baltimore, Independent Security Evaluators, and like Miller, he is a zero-day hunter.
In instant chat messages, the two bantered about the technical details of the iPhone’s software. Like hackers everywhere, they wanted to find the easiest route to a vulnerability that would let them take control. Unlike most hackers, they had a deadline: The contest began on March 9, 2011.
“Where do you start? . . . What do you focus on?” Miller recalled asking himself. “The hard part is figuring out the soft part to go after.”
Reading through all the software instructions was out of the question. That might have worked two decades ago, when computer systems were simpler and the Web was still a novelty. A desktop computer then might have a million lines of software. Today, the software in a desktop computer could have 80 million lines or more. Finding the zero days by hand would be like searching a beach for a grain of sand of a particular shade of tan.
Miller and Blazakis decided to rely on a hacker technique known as “fuzzing” — inserting random data into applications and trying to force them to crash.
Making systems crash is easier than it might seem. Software programs are miracles of human ingenuity, veritable cathedrals made of letters and digits. But unlike Notre Dame in Paris or the Duomo in Milan — which took lifetimes to build and remain sturdy to this day — digital architecture is constantly evolving and can be made to crumble with the right push at the wrong spot.
Miller attributes that fragility to companies that place sales and novel applications over computer security.
“Companies want to make money,” he said. “They don’t want to sit around and make their software perfect.”
Many of those vulnerabilities are related to errors in code designed to parse, or sort through, data files sent over the Internet. A typical computer has hundreds of parsers in its operating system. One good example is an image parser. It identifies the information that makes up a digital photo, processes it and then sends the file to the part of the machine designed to display the image.
Hackers insert corrupted data into a photo file to disrupt the parser software, cause it to crash and open the way for it to be hijacked.
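To illustrate why parsers are such fertile ground, here is a minimal Python sketch of a toy parser for a made-up image format. The format, the `parse_image` function and the corrupted input are all hypothetical, but the failure mode, trusting a length field that an attacker controls, mirrors the real bugs described above:

```python
import struct

def parse_image(blob: bytes) -> list[bytes]:
    """Toy parser for a made-up image format: a 2-byte big-endian row
    count, then rows of 4 bytes each. Like many real parsers, it trusts
    the count field embedded in the file."""
    (row_count,) = struct.unpack_from(">H", blob, 0)
    rows, offset = [], 2
    for _ in range(row_count):
        # struct.error is raised when the count outruns the actual data --
        # the parser "crashes," which is exactly what a fuzzer watches for.
        rows.append(struct.unpack_from(">4s", blob, offset)[0])
        offset += 4
    return rows

good = struct.pack(">H", 2) + bytes(range(8))
print(len(parse_image(good)))              # prints 2

bad = struct.pack(">H", 0xFFFF) + good[2:]  # corrupt the count field
try:
    parse_image(bad)
except struct.error:
    print("parser crashed on corrupted input")
```

Here the crash is a clean Python exception; in a real parser written in C, the same mistake can corrupt memory and hand an attacker control.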
“If an application has never been fuzzed, any form of fuzzing is likely to find bugs,” Microsoft researchers said in a recent paper on the use of fuzzing to improve security.
No human being fuzzing by hand could cause a sufficient number of crashes to routinely allow a hacker to identify a zero day. So Miller and others write programs to do it. Miller’s fuzzing program enables him to connect to a variety of computers and keep track of thousands of crashes, including where in the software the crash took place.
“99.999 percent of the time, nothing bad happens,” Miller explained. “But I do it a billion times, and it happens enough times it’s interesting.”
The heart of his program is a function that randomly substitutes data in a targeted software program. He called the 200 lines of code that make up this function his “special sauce.”
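Miller’s actual harness is not public. A heavily simplified sketch of the general approach, with a hypothetical `fragile_target` standing in for the real software under test, might look like this:

```python
import random

def mutate(data: bytes) -> bytes:
    """Replace one randomly chosen byte with a random value."""
    out = bytearray(data)
    out[random.randrange(len(out))] = random.randrange(256)
    return bytes(out)

def fuzz(target, seed_input: bytes, iterations: int):
    """Run `target` on many mutated copies of `seed_input`, recording
    each input that makes it blow up. Most iterations do nothing
    interesting, which is why the loop has to run constantly."""
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed_input)
        try:
            target(candidate)
        except Exception as exc:        # a "crash" in this sketch
            crashes.append((candidate, exc))
    return crashes

# Hypothetical target: chokes whenever the first byte is zero.
def fragile_target(data: bytes) -> None:
    if data[0] == 0:
        raise ValueError("unexpected header byte")

random.seed(1)
found = fuzz(fragile_target, b"\x0f\x00\x04\xf0", 2000)
print(f"{len(found)} crashing inputs out of 2000 runs")
```

Real fuzzers run targets in separate processes and log where each crash occurred, but the shape of the loop is the same: mutate, run, record, repeat.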
To begin his iPhone hack, he took four Apple computers, one a laptop borrowed from his wife, and connected them to another computer holding the iPhone’s software, the entire amalgamation spread over the benchlike desks of his home office. The homey set-up, complete with an overstuffed bookcase crowned by a bowling pin, looked like the lair of a graduate student pursuing a science project.
Miller ran the mini-network 24 hours a day for weeks. One machine served as the quarterback, launching and coordinating the fuzz attacks, tracking the crashes and collecting the details. Before 7 most mornings, he woke up, went into the office, signed into the quarterback computer and checked on the progress, like a kid hoping for snow.
He was on the lookout in particular for failures that involved computer memory management — a serious flaw that could offer the way in.
“The memory manager keeps track of where things are, where new things should go, et cetera,” Miller recalled. “If a program crashes in the memory manager, it means the computer is confused about what things are located where. This is pretty serious, because it means it is in a state where it might be persuaded to think my data is something it thinks is entirely something else.”
For now, most of the crashes were trivial. February was approaching, and time was short. Miller and Blazakis still did not have their zero day.
The hunt for flaws
Zero days have become the stuff of digital legend. In the 1996 science-fiction movie “Independence Day,” characters played by Will Smith and Jeff Goldblum launched a “virus” that took advantage of a zero-day vulnerability, crashed the computer system of an alien mothership and saved the world.
But they have always been more than just science fiction. For decades, hackers and security specialists have known about the existence of zero days. And as software proliferated, along with computers and networks, so have zero days. The researchers who found them often had no incentive to share their finds with the affected companies. Sometimes the researchers simply released the vulnerabilities publicly on the Internet to warn the public at large.
Government agencies that secretly engaged in hacking operations, along with some affected software makers, bought information on zero days from a thriving gray market, according to interviews with hackers and security specialists.
In 2005, a security firm called TippingPoint began offering bounties to researchers. Executives of the Austin-based firm reasoned that they could learn much for their own use while spurring the industry to fix threats by creating a master list. They called their effort the Zero Day Initiative.
Since then, more than 1,600 researchers have been paid for reporting almost 5,000 zero days. Starting at hundreds of dollars, the bounties soar into the tens of thousands. A hacker in Shanghai named Wu Shi has earned close to $300,000 for reporting more than 100 flaws in Web browsers.
The system seemed ideal, except for one thing: The software makers often failed to heed the warnings. Some vulnerabilities remained for two years or more.
In 2007, TippingPoint, now owned by Hewlett-Packard, decided to underscore the problem by holding a high-profile event. The Pwn2Own contest would require hackers not only to find zero days but to put them into action in what is known as an “exploit,” or attack.
On Jan. 24, 2011, Miller and Blazakis saw a glimmer of hope. An especially promising crash appeared ripe for exploitation.
“Figuring out what to look at,” Miller wrote to his partner, “so we’re ready to rock.”
They had found it inside the part of the browser software that enables iPhone users to view PowerPoint presentations. It involved portions of the file that stored information about the location and size of shapes, such as a circle, square or triangle that would appear on a page of a presentation.
“Really, it was just bytes in a file. It just happened that it had something to do with a shape. We didn’t really care,” Miller said later. “As long as it was doing something wrong with the data.”
This could be their zero day, but more testing was required to see if they could exploit it.
Both men dived back into the technical details of the iPhone’s PowerPoint software. It was hard labor, even for highly skilled hackers. Blazakis stopped shaving and grew a “hacker’s beard.” He put in 18-hour days as he tried to reverse engineer the PowerPoint application in order to take control of it without causing too much disruption.
Bit by bit, they began mastering the layout of the PowerPoint software. They developed an understanding of it that rivaled those who designed it.
Finally, they found a way to insert their malicious code into the application and take control of a part of the iPhone.
“I think it’s under control now,” Miller wrote during an instant-message exchange on Jan. 27. “Sweet.”
Now they had to complete the exploit by figuring out a way to insert that code into an iPhone and ensuring that they could consistently hijack the device. Unlike the movies, where hackers are portrayed as breaking into computers as if they were cracking into digital safes, successful hacks often require deception and the unwitting complicity of the victim.
On Feb. 3, Miller joked to his friend about their struggle: “Looking for bugs fame money girls glory.”
Miller and Blazakis decided to create a way to lure an iPhone user to a bogus Web page. They would set up the page and trick a user into downloading a PowerPoint file. The file would appear normal, but it would contain their malicious code. (Known as “social engineering,” it’s the same technique used in the Google and RSA attacks.)
With the deadline looming, they began having video conference calls. They linked their computers in cyberspace and worked in tandem. They were a tired but formidable pair, cutting corners on their day jobs as security researchers as they closed in on the elusive exploit.
“The last two days were chaotic,” Blazakis said. “I stayed up most of the night doing this.”
On March 8, Miller flew to the contest, which was part of a security conference in Vancouver, B.C. But they still were not sure of the exploit. They continued fiddling with it right up to the eve of the event, including during Miller’s stopover in Seattle.
Their chance came on March 10. As he sat with judges and other hackers in a narrow conference room set up in the hotel, Miller had lingering fears that the hack still might not work on demand. Under the contest rules, he had just five tries to make it work.
When Miller’s turn arrived, he went behind a long table at one end of the room, where the judges sat with their own computers. Yellow cables snaked through the area (the hackers use cables instead of wireless to prevent other hackers from swiping the zero days in play). Miller connected his old white Apple laptop and looked out at other hackers, spectators and some reporters milling about.
A judge played the role of the unwitting iPhone user. The test phone was placed in an aluminum box to block unwanted wireless signals as an additional measure against any attempted theft of a zero-day exploit by other hackers. Miller told him to browse to the phony Web page holding a PowerPoint presentation that Miller had created. Hidden in the presentation’s data was the malicious code.
The image of the phone’s browser was projected onto a large screen. The judge typed in an address for the Web page, but the presentation never appeared. Instead, the image on the screen jumped back to the home page of the phone.
Miller, sitting with his own computer, knew just what had happened. In that moment, he had gained access to all the names and other information on the phone’s address book. He had found a way to strip privacy protections from a key part of the device.
He nudged one of the judges sitting near him and pointed to his screen, which was displaying the iPhone’s address book. He and Blazakis, who was looking on via a video feed to an iPhone he was holding in Baltimore, had won.
The next day, Miller received an oversize check worth $15,000 and beamed as he put on the white winner’s jacket.
Several weeks later, Apple acknowledged the exploit indirectly when the company issued a “patch.” As a result of the hackers’ work, the flaw they found and exploited was no longer a zero day.
Miller and Blazakis knew that behind the contest’s irreverent fun was a sobering reality.
“We’re smart and have skills and such, but we’re not that extraordinary,” Miller said later. “Imagine if you were a government or a Russian mob or a criminal syndicate and you could get 100 guys like us or 1,000 guys?”