Net of insecurity: Part 1

A flaw in the design

The Internet’s founders saw its promise
but didn’t foresee users attacking one another

Published on May 30, 2015

David D. Clark, an MIT scientist whose air of genial wisdom earned him the nickname “Albus Dumbledore,” can remember exactly when he grasped the Internet’s dark side. He was presiding over a meeting of network engineers when news broke that a dangerous computer worm — the first to spread widely — was slithering across the wires.

One of the engineers, working for a leading computer company, piped up with a claim of responsibility for the security flaw that the worm was exploiting. “Damn,” he said. “I thought I had fixed that bug.”

The making of a vulnerable Internet: This story is the first of a multi-part project on the Internet’s inherent vulnerabilities and why they may never be fixed.

Part 2: The long life of a ‘quick fix.’
Part 3: These hackers warned the Internet would become a security disaster. Nobody listened.
Part 4: How yesterday’s flaws are being built into tomorrow’s connected world.
Part 5: The kernel of the argument.
Read the eBook: “The Threatened Net: How the Internet Became a Perilous Place.”

But as the attack raged in November 1988, crashing thousands of machines and causing millions of dollars in damage, it became clear that the failure went beyond a single man. The worm was using the Internet’s essential nature — fast, open and frictionless — to deliver malicious code along computer lines designed to carry harmless files or e-mails.

Decades later, after hundreds of billions of dollars spent on computer security, the threat posed by the Internet seems to grow worse each year. Where hackers once attacked only computers, the penchant for destruction has now leapt beyond the virtual realm to threaten banks, retailers, government agencies, a Hollywood studio and, experts worry, critical mechanical systems in dams, power plants and aircraft.

Video: ‘Real problems that require serious protection’ (Vinton G. Cerf, who designed key building blocks of the Internet in the 1970s and ’80s)

These developments, though perhaps inevitable in hindsight, have shocked many of those whose work brought the network to life, those pioneers now say. Even as scientists spent years developing the Internet, few imagined how popular and essential it would become. Fewer still imagined that eventually it would be available for almost anybody to use, or to misuse.

“It’s not that we didn’t think about security,” Clark recalled. “We knew that there were untrustworthy people out there, and we thought we could exclude them.”

How wrong they were. What began as an online community for a few dozen researchers now is accessible to an estimated 3 billion people. That’s roughly the population of the entire planet in the early 1960s, when talk began of building a revolutionary new computer network.

Those who helped design this network over subsequent decades focused on the technical challenges of moving information quickly and reliably. When they thought about security, they foresaw the need to protect the network against potential intruders or military threats, but they didn’t anticipate that the Internet’s own users would someday use the network to attack one another.

Computer worm

A standalone piece of software that can make copies of itself and spread to other computers. A destructive worm can make so many copies of itself that it overwhelms host computers, causing them to crash.
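
The glossary’s point, that a worm copies itself from host to host until machines are overwhelmed, can be illustrated with a toy simulation (a hypothetical Python sketch; the host numbering and fan-out value are invented and bear no relation to the actual Morris Worm):

```python
# Toy simulation of worm-style spread: each infected host copies the
# "worm" to a handful of neighbors every round. The neighbor formula is
# a deterministic stand-in for real network scanning.

def simulate_spread(total_hosts, fanout, rounds):
    infected = {0}  # patient zero
    for _ in range(rounds):
        newly = set()
        for host in infected:
            # each infected host reaches `fanout` further machines
            for k in range(1, fanout + 1):
                newly.add((host * fanout + k) % total_hosts)
        infected |= newly
    return len(infected)

print(simulate_spread(total_hosts=60000, fanout=3, rounds=5))
```

Growth is roughly geometric until the population saturates, which is why a single worm could cripple thousands of machines so quickly.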

“We didn’t focus on how you could wreck this system intentionally,” said Vinton G. Cerf, a dapper, ebullient Google vice president who in the 1970s and ’80s designed key building blocks of the Internet. “You could argue with hindsight that we should have, but getting this thing to work at all was non-trivial.”

Those involved from the early days — what might be called the network’s founding generation — bristle at the notion that they somehow could have prevented today’s insecurity, as if road designers were responsible for highway robbery or urban planners for muggings. These pioneers often say that online crime and aggression are the inevitable manifestation of basic human failings, beyond easy technological solutions.

“I believe that we don’t know how to solve these problems today, so the idea that we could have solved them 30, 40 years ago is silly,” said David H. Crocker, who started working on computer networking in the early 1970s and helped develop modern e-mail systems.

Yet 1988’s attack by the “Morris Worm” — named for Robert T. Morris, the Cornell University graduate student who created it — was a wake-up call for the Internet’s architects, who had done their original work in an era before smartphones, before cybercafes, before even the widespread adoption of the personal computer. The attack sparked both rage that a member of their community would harm the Internet and alarm that the network was so vulnerable to misdeeds by an insider.

When NBC’s “Today” aired an urgent report on the worm’s rampage, it became clear that the Internet and its problems were destined to outgrow the idealistic world of scientists and engineers — what Cerf fondly recalled as “a bunch of geeks who didn’t have any intention of destroying the network.”

But the realization came too late. The Internet’s founding generation was no longer in charge. Nobody really was. Those with dark intentions would soon find the Internet well suited to their goals, allowing fast, easy, inexpensive ways to reach anyone or anything on the network. Soon enough, that would come to include much of the planet.

David D. Clark, pictured at his MIT lab, says the Internet’s founders did have security concerns. “We knew that there were untrustworthy people out there, and we thought we could exclude them,” he says. (Josh Reynolds for The Washington Post)

Bracing for nuclear war

The Internet was born of a big idea: Messages could be chopped into chunks, sent through a network in a series of transmissions, then reassembled by destination computers quickly and efficiently. Historians credit seminal insights to Welsh scientist Donald W. Davies and American engineer Paul Baran — a man determined to brace his nation for the possibility of nuclear war.

Baran described his bleak vision in an influential paper in 1960 when he was working for the Rand Corp., a think tank. “The cloud-of-doom attitude that nuclear war spells the end of the earth is slowly lifting,” Baran wrote, endorsing the view that “the possibility of war exists but there is much that can be done to minimize the consequences.”

Among those was a rugged communication system with redundant links so that it could still function in the aftermath of a Soviet strike, allowing survivors to provide aid to one another, preserve democratic governance and potentially launch a counterattack. This, Baran wrote, would help “the survivors of the holocaust to shuck their ashes and reconstruct the economy swiftly.”

ARPANET

A pioneering computer network built by the Pentagon’s Advanced Research Projects Agency (ARPA). Established in 1969, it eventually linked more than 100 universities and military sites, becoming the forerunner to today’s Internet.

Davies had a more placid vision. Computers in that era were huge, costly behemoths that could fill a room and needed to serve multiple users at the same time. But logging on to them often required keeping expensive telephone lines open continuously even though there were long periods of silence between individual transmissions.

Davies began proposing in the mid-1960s that it would be better to slice data into pieces that could be sent back and forth almost continuously, allowing several users to share the same telephone line while gaining access to a remote computer. Davies also set up a small network in Britain, demonstrating the viability of the idea.

These two visions, the one for war and the one for peace, worked in tandem as the Internet moved from concept to prototype to reality.

The most important institutional force behind this development was the Pentagon’s Advanced Research Projects Agency (ARPA), created in 1958 during the aftermath of the Soviet Union’s launch of the Sputnik satellite, amid mounting fears of an international gap in scientific achievement.

A decade later, as ARPA began work on a groundbreaking computer network, the agency recruited scientists affiliated with the nation’s top universities. This group — including several who during the Vietnam War and its polarizing aftermath would have been uneasy working on a strictly military project — formed the collegial core of the Internet’s founding generation.

When the network made its first connections in 1969, among three universities in California and one in Utah, the goals were modest: It was a research project with a strongly academic character. Those on the ARPANET, as the most important predecessor to the Internet was named, soon would use it to trade messages, exchange files and gain remote access to computers.

It would have taken enormous foresight, said Virginia Tech historian Janet Abbate, for those planting these early seeds of the Internet to envision the security consequences years later, when it would take a central place in the world’s economy, culture and conflicts. Not only were there few obvious threats during the ARPANET era of the 1970s and early 1980s, but there also was little on that network worth stealing or even spying on.

“People don’t break into banks because they’re not secure. They break into banks because that’s where the money is,” said Abbate, author of “Inventing the Internet,” on the network and its creators.

She added, “They thought they were building a classroom, and it turned into a bank.”

UCLA scientist Leonard Kleinrock stands next to a specialized computer, a forerunner to today’s routers, that sent the first message over the Internet in 1969 from his original laboratory on the school’s campus. (Bret Hartman for The Washington Post)

The first ‘killer app’

Fueling that early work was the shared intellectual challenge of developing a technology many thought doomed to failure. Several Internet pioneers felt particular frustration with AT&T’s Bell telephone system, which they saw as a rigid, expensive, heavily regulated monopoly — everything they didn’t want their new computer network to be.

Baran, who died in 2011, once told of a meeting with Bell system engineers in which he tried to explain his digital networking concept but was stopped mid-sentence. “The old analog engineer looked stunned,” Baran said in an oral history for the Institute of Electrical and Electronics Engineers, a professional group. “He looked at his colleagues in the room while his eyeballs rolled up, sending a signal of his utter disbelief. He paused for a while, and then said, ‘Son, here’s how a telephone works . . .’ And then he went on with a patronizing explanation of how a carbon button telephone worked. It was a conceptual impasse.”

Yet it was on AT&T’s lines that ARPANET first sparked to life, with data flowing between two giant Interface Message Processors — forerunners to today’s routers — each the size of a phone booth. The first, installed at UCLA, sent a message to the second, at the Stanford Research Institute more than 300 miles away, on Oct. 29, 1969. The goal was to log on remotely, but they only got as far as the “LO” of “LOGIN” when the Stanford computer crashed.

Leonard Kleinrock, a UCLA computer scientist who was among the earliest pioneers of networking technology, was at first crestfallen by the uninspiring nature of that seminal message — especially when compared with the instantly famous “That’s one small step for man, one giant leap for mankind” line delivered during the first moon landing a few months earlier.

But Kleinrock later reasoned that “LO” could be understood as the beginning of “Lo and behold,” a worthy christening for an advance that many would come to consider equally transformative. “We couldn’t have prepared a more succinct, more powerful, more prophetic message than we did by accident,” he said years later.

As the ARPANET developed in its first years, soon connecting computers in 15 locations across the country, the key barrier was neither technological limits nor AT&T’s lack of interest. It simply wasn’t clear what the network’s practical purpose was. There was only so much file sharing that needed to be done, and accessing computers remotely in that era was cumbersome.

What proved highly appealing, however, was conversing across the fledgling network with friends and colleagues. The network’s first “killer app,” introduced in 1972, was e-mail. By the following year, it was responsible for 75 percent of ARPANET’s traffic.

The rapid adoption of e-mail foreshadowed how computer networking would eventually supplant traditional communications technologies such as letters, telegraphs and phone calls. E-mail also would, decades later, become a leading source of insecurity in cyberspace.

Such issues were of little concern during the ARPANET era, when the dilemmas were related to building the network and demonstrating its value. At a three-day computer conference at the Washington Hilton hotel in October 1972, the ARPA team mounted the first public demonstration of its budding network and an initial suite of applications, including an artificial-intelligence game in which a networked computer mimicked a psychotherapist’s patter of questions and observations.

Though the event is remembered by those involved as a huge success, there was one sour note. Robert Metcalfe, a Harvard University doctoral student who would later co-invent Ethernet technology and found networking giant 3Com, was demonstrating the ARPANET’s capabilities for a visiting delegation of AT&T executives when the system abruptly crashed.

The system was down only briefly, but it was enough to upset Metcalfe — whose embarrassment turned to rage when he noticed that the AT&T executives, dressed in seemingly identical pinstriped suits, were laughing.

“They were happy. They were chuckling,” he recalled of this early encounter between telephone technology and computer networking. “They didn’t realize how threatening it was. . . . [The crash] seemed to confirm that it was a toy.”

Robert T. Morris leaves a federal courthouse with his mother, Anne, in Syracuse, N.Y., in January 1989. He released the “Morris Worm,” the first Internet attack to spread widely, in November 1988. (Michael J. Okoniewski/Associated Press)

Vinton G. Cerf, now a Google executive, directs ARPANET team members during the first successful transmission of TCP packets over radio in Silicon Valley in 1974. (SRI International)

‘It’s kind of like safe sex’

The rivalry eventually would harden into a caricature, with the pioneering “Netheads” taking on the stodgy “Bellheads,” recalled Billy Brackenridge, an early computer programmer who later worked at Microsoft. “The Bellheads needed total control of everything,” he said. “The Netheads were anarchists.”

For this there were cultural reasons — the young newcomers vs. the establishment — but also technological ones. Telephone networks, it was often said, had an intelligent core — the switches that ran everything — and “dumb” edges, meaning the handsets in nearly every home and business in the nation. The Internet, by contrast, would have a “dumb” core — all the network did was carry data — with intelligent edges, meaning the individual computers controlled by users.

A “dumb” core offered few opportunities for centralized forms of security but made it easy for new users to join. This model worked so long as the edges were controlled by colleagues who shared motives and a high degree of trust. But that left the edges with a responsibility to serve as gatekeepers to the network.

“We’ve ended up at this place of security through individual vigilance,” said Abbate, the Virginia Tech historian. “It’s kind of like safe sex. It’s sort of ‘the Internet is this risky activity, and it’s up to each person to protect themselves from what’s out there.’ . . . There’s this sense that the [Internet] provider’s not going to protect you. The government’s not going to protect you. It’s kind of up to you to protect yourself.”

Video: ‘This was not the Internet for ordinary people’ (Janet Abbate, Virginia Tech historian)

Few embraced this need for constant vigilance during the ARPANET era. Anyone with access to a user name and password — whether officially issued to them, a colleague or just a friend — typically could sign on to the network; in some cases all it took was access to a terminal and the phone number of the right computer.

This created risks that some warned about even in the earliest days. Metcalfe posted a formal message to the ARPANET Working Group in December 1973 warning that it was too easy for outsiders to log on to the network.

“All of this would be quite humorous and cause for raucous eye winking and elbow nudging, if it weren’t for the fact that in recent weeks at least two major serving hosts were crashed under suspicious circumstances by people who knew what they were risking; on yet a third system, the system wheel password was compromised — by two high school students in Los Angeles no less,” Metcalfe wrote. “We suspect that the number of dangerous security violations is larger than any of us know [and] is growing.”

As the number of officially sanctioned users grew, there also was rising discord over the purpose of the network. Though the network was nominally under the control of the Pentagon, efforts by military authorities to impose order sometimes ran into resistance from an emerging online community that was more experimental, valuing freedom over strict adherence to rules. Unauthorized uses such as an e-mail group for science fiction fans quietly thrived online.

Tensions among users would only expand as the Internet itself arrived in the 1980s, the World Wide Web in the 1990s and smartphones in the 2000s. This ever-expanding network grew to include people increasingly working at cross purposes: Musicians vs. listeners who wanted free music. People seeking to communicate privately vs. government eavesdroppers. Criminal hackers vs. their victims.

Clark, the MIT scientist, dubbed these ongoing conflicts “tussles.” They were tensions, largely unanticipated by the Internet’s creators, that had become central to how the network actually worked. “The common purpose that launched and nurtured it no longer prevails,” Clark wrote in 2002. “There are, and have been for some time, important and powerful players that make up the Internet milieu with interests directly at odds with each other.”

A sign of trouble ahead arrived as early as 1978, when a marketer for Digital Equipment Corp. sent a message to hundreds of ARPANET users announcing events in California to demonstrate new computers. Internet historians regard it as the first bit of “spam,” the catch-all term for unwanted e-mail blasts.

It prompted a terse, all-caps response from the Pentagon official overseeing the network, who sent a message calling it “A FLAGRANT VIOLATION” of the rules. “APPROPRIATE ACTION IS BEING TAKEN TO PRECLUDE ITS OCCURRENCE AGAIN.”

Amid this and other grumbling, collected by Brad Templeton, a board member for the civil liberties group Electronic Frontier Foundation, some users sent messages defending the idea of an Internet open to many purposes — even commercial ones.

“Would a dating service for people on the net be ‘frowned upon’?” wrote Richard Stallman of MIT, a leading advocate for online freedom. “I hope not. But even if it is, don’t let that stop you from notifying me via net mail if you start one.”

Steve Crocker, who worked on early networking technology during the ARPANET era, is chairman of the Internet Corporation for Assigned Names and Numbers, a nonprofit group that oversees the designation of Web addresses worldwide. (Bill O'Leary/The Washington Post)

Concerns from the NSA

Traditional telephone systems work by maintaining open lines between callers for the duration of a conversation, while charging them by the minute. The Internet, by contrast, shoots its chunks of data from computer to computer in brief digital bursts, as capacity becomes available. These chunks — which are written in binary code, just ones and zeros arranged according to set rules — are called “packets.” The system of transmitting them is called “packet switching.”

Binary code

A combination of zeros and ones that together can represent any letter or number. Computer commands typically are transmitted in binary code, making it the underlying alphabet of the digital world.
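
As a small, hypothetical Python sketch of this entry (using the standard ASCII encoding; the helper name is invented):

```python
# Each character has a numeric code (here, ASCII); that number can be
# written as eight binary digits -- the bits a network actually carries.

def to_binary(text):
    return [format(byte, "08b") for byte in text.encode("ascii")]

print(to_binary("LO"))  # the first ARPANET message, bit by bit
```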

Packet switching

A system for chopping data into a series of smaller pieces and transmitting them over a network. This allows for greater efficiency but requires that recipient computers have the ability to reassemble the data packets in the correct order to form coherent messages.

The result is something like a vast system of pneumatic tubes capable of carrying anything that fits in a capsule to any destination on the network. The key — and this is how the Internet’s founders spent much of their time — was making sure that the network routed the packets correctly and kept track of which ones arrived safely. That allowed the packets that got lost along the way to be re-sent repeatedly, perhaps along different paths, in search of a successful route to their destination.
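
The chop-and-reassemble idea can be sketched in a few lines (a hypothetical Python illustration; the packet size and helper names are invented, and real protocols add headers, checksums and retransmission):

```python
# Minimal sketch of packet switching: chop a message into numbered
# packets, deliver them in any order, and reassemble by sequence number.

def make_packets(message, size):
    # pair each chunk with a sequence number so order can be restored
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    # the receiver sorts by sequence number; arrival order is irrelevant
    return "".join(chunk for _, chunk in sorted(packets))

packets = make_packets("LO AND BEHOLD", size=4)
packets.reverse()  # simulate out-of-order arrival
print(reassemble(packets))
```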

The technology required a high degree of precision, but, remarkably, packet-switched networks can function without a central authority. Though the Pentagon oversaw the ARPANET during the years when it was footing the bill for deployment, its power gradually dwindled. Today, no U.S. government agency has a degree of control over the Internet that approaches what almost every nation in the world maintains over its telephone system.

The ARPANET in its first years ran on a protocol — essentially a set of rules allowing different computers to work together — that allowed basic functions. But as that network grew, so did others. Some were largely academic systems, linking university computers together over land lines. Others used radio signals and even satellites to help computers communicate across expanses of land or water.

Connecting these networks required writing new protocols, a job taken on by Cerf and fellow computer scientist Robert E. Kahn during the 1970s, in work undertaken at the behest of ARPA (renamed DARPA in 1972, for Defense Advanced Research Projects Agency). The result of that work, called TCP/IP, allowed virtually any computer network in the world to communicate directly with any other, no matter what hardware, software or underlying computer language the systems used.

But switching from the relatively confined world of ARPANET to a global network created new security concerns that Cerf and Kahn both appreciated.

“We were well aware of the importance of security . . . but from a military standpoint, operating in a hostile environment,” recalled Cerf. “I was not so much thinking about it in terms of the public and commercial setting as in the military setting.”

One answer was to design TCP/IP in a way that required encryption, the practice of coding messages in ways that only the intended recipient, using a mathematical “key,” could decode. Though primitive forms of encryption dated back centuries, a new generation of advanced computerized versions began appearing in the 1970s, as Cerf and Kahn worked on TCP/IP.

Successful deployment of encryption would have made the network resistant to eavesdropping and also made it easier to know who sent a particular communication. If somebody holding a certain encryption key is a trusted correspondent, other messages created with that key are probably authentic. This is true even if the correspondent’s legal name is not used — or even necessarily known.

Though clearly useful in a military setting, where intercepted or falsified messages could have disastrous consequences, the widespread deployment of encryption technology could have offered a significant degree of privacy and security to civilian users as well. But in the years that Cerf and Kahn were designing TCP/IP, implementing encryption proved daunting.

Encrypting and decrypting messages consumed large amounts of computing power, likely requiring expensive new pieces of hardware to work properly. It also was not clear how to safely distribute the necessary keys — an issue that complicates encryption systems even today.
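
To make the key problem concrete, here is a deliberately toy symmetric cipher (a hypothetical Python sketch; real encryption uses far stronger algorithms, but the sketch shows why both ends must somehow share the same key, which is exactly the distribution problem described above):

```python
# Toy symmetric encryption: XOR each byte with a repeating key.
# This is a teaching sketch only, never a real cipher.
from itertools import cycle

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"secret"  # hypothetical shared key
scrambled = xor_crypt(b"ATTACK AT DAWN", key)
print(xor_crypt(scrambled, key))  # same key recovers the plaintext
```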

Video: ‘Basically owned the technology in cryptography’ (Steve Crocker, who worked on early networking technology for DARPA)

Yet lurking in the background were political issues as well: The National Security Agency, which Cerf said was an enthusiastic supporter of secure packet-switching technology for military uses, had serious reservations about making encryption available on public or commercial networks. Encryption algorithms themselves were considered a potential threat to national security, covered by government export restrictions on military technologies.

Steve Crocker, the brother of David Crocker and a lifelong friend of Cerf who also worked on early networking technology for DARPA, said, “Back in those days, the NSA still had the ability to visit a professor and say, ‘Do not publish that paper on cryptography.’ ”

As the ’70s wound down, Cerf and Kahn abandoned their efforts to bake cryptography into TCP/IP, bowing to what they considered insurmountable barriers.

It was still possible to encrypt traffic using hardware or software designed for that purpose, but the Internet developed into a communication system that operated mostly in the clear — meaning anyone with access to the network could monitor transmissions. With encryption rare, it also was difficult for anyone online to be sure who he or she was communicating with.

Kleinrock, the UCLA scientist, said the result was a network that combined unprecedented reach, speed and efficiency with the ability to act anonymously. “That’s a perfect formula,” he said, “for the dark side.”

Vinton G. Cerf, now a Google executive, says he wishes that he and fellow computer scientist Robert E. Kahn had been able to build encryption into TCP/IP from the beginning. (Bill O'Leary/The Washington Post)

‘Operation Looking Glass’

TCP/IP proved a historic engineering triumph, allowing a remarkably disparate group of networks to work together to an unprecedented degree. From the late 1970s through the early 1980s, DARPA sponsored a series of tests to gauge the ability of the protocols to efficiently and reliably transmit data over challenging terrain, from portable antennas set up at an outdoor bar to vans rolling along coastal highways to small aircraft flying above.

TCP/IP

A set of protocols that are the fundamental technology of the Internet. They provide a common language for a disparate group of computers and networks, allowing them to work together across the world.

Encryption

A way of encoding information so that only the sender and recipient can understand it. When computers exchange encrypted information, they use complex mathematical algorithms along with a designated digital “key.” This allows for greater privacy and also authentication of the identity of the sender and recipient.

There also was an explicitly military component. Cerf had a “personal goal,” he said years later, of proving the viability of Baran’s vision of a communication system resilient enough to help the nation recover from a nuclear attack. That idea fueled a series of exercises in which digital radios made TCP/IP connections in increasingly complex scenarios.

The most ambitious tests sought to mimic “Operation Looking Glass,” a Cold War campaign to make sure that at least one airborne command center was aloft at all times, beyond the reach of possible nuclear destruction below. This involved a nearly continuous cycle of takeoffs and landings, from Strategic Air Command near Omaha, in precise shifts over the course of 29 years.

One day in the early 1980s, two Air Force tankers flew above the Midwestern plains as a specially outfitted van, carrying its own ground-based mobile command center, drove on highways below, said people involved in the exercise. Digital radios transmitting TCP/IP messages linked the air- and ground-based computers together into a temporary “net” that stretched for hundreds of miles and also included Strategic Air Command’s underground bunker.

To demonstrate the ability to maintain communications, the command centers transmitted among themselves a mock file representing the nation’s surviving military assets — necessary to direct a nuclear counterattack. The process typically took hours over the voice radios that were the standard technology of the time, said Michael S. Frankel, who oversaw the exercises for contractor SRI International and later became a top Pentagon official.

Over the TCP/IP connections, the same process took less than a minute, demonstrating how the protocols could allow computers to share information quickly and easily, potentially knitting together even a network that had been fractured by war.

A network is born

On Jan. 1, 1983, years of work by Cerf, Kahn and countless others culminated on what they dubbed “Flag Day,” a term that refers to the reboot of a system so total that it’s difficult to go back. Every computer on the ARPANET and other networks that wanted to communicate with it had to start using TCP/IP. And gradually they did, linking disparate networks together in a new, global whole.

So was born the Internet.

There were, of course, still practical barriers to entry given the expense of computers and the lines for transmitting data. Most people online in the 1970s and ’80s were affiliated with universities, government agencies or unusually tech-savvy companies. But those barriers shrank away, gradually creating a community that was bigger than any nation yet all but ungoverned.

The U.S. military would create its own networks using TCP/IP and eventually implement encryption to protect the security of its communications. But the civilian Internet would take decades to achieve widespread deployment of this basic security technology — a process that remains incomplete even today despite a surge of deployment in 2013, in the aftermath of revelations about the extent of NSA spying on the Internet.

Encryption would not have prevented all of today’s problems, many of which stem from the fundamentally open nature of the Internet and the astronomical value of the information and systems now connected to it. But it would have limited eavesdropping and made it easier for the recipient of messages to verify their source — two long-standing issues that remain unresolved.

Cerf said he still wishes that he and Kahn had been able to build encryption into TCP/IP from the beginning. “We would have had much more regular end-to-end encryption in the Internet” today, he said. “I can easily imagine this alternative universe.”

Debate remains, however, about whether widespread use of encryption was feasible in the early days of the Internet. The heavy computing demands, some experts say, could have made TCP/IP too difficult to implement, leading to some other protocol — and some network other than the Internet — becoming dominant.

“I don’t think the Internet would have succeeded as it did if they had the [encryption] requirements from the beginning,” Johns Hopkins cryptologist Matthew Green said. “I think they made the right call.”

Video: ‘If we had a giant nuclear exchange’ (Vinton G. Cerf, who designed key building blocks of the Internet in the 1970s and ’80s)

Old flaws, new dangers

From its unlikely roots in a Pentagon research agency, the Internet developed into a global communications network with no checkpoints, no tariffs, no police, no army, no regulators and no passports or any other reliable way to check a fellow user’s identity. Governments would eventually insinuate themselves into cyberspace — to enforce their laws, impose security measures and attack one another — but belatedly and incompletely.

The Morris Worm dramatically revealed the downside of such a system, with a “dumb” core and intelligent edges. This design pushed security to the edges as well. That is where the vast majority of hacks happen today: They are launched from one computer against another computer. The Internet is not the setting for most attacks. It is the delivery system.

The Morris Worm offers one other lesson: It can be difficult to fix problems even once they are widely known. Robert Morris — who was convicted of computer crime and given probation before becoming an entrepreneur and an MIT professor — was not looking to crash the Internet. He was experimenting with self-replicating programs and took advantage of a flaw called “buffer overflow” that computer researchers had identified in the 1960s. It was still a problem in 1988, when Morris made his worm, and it is still used by hackers today, a half-century after its discovery.
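The flaw is easier to see in miniature. In a toy model (the names and sizes below are illustrative, not Morris’s actual exploit), memory is a flat array of bytes and a fixed-size buffer sits directly beside other important data; copying input without checking its length lets the input spill into that neighboring data:

```python
# Toy model of a buffer overflow: a fixed-size buffer sits next to other
# data in memory (here, a stand-in "return address"). Copying input with
# no bounds check lets the input overwrite its neighbor.

BUF_SIZE = 8

def unsafe_copy(memory: bytearray, data: bytes) -> None:
    """Copies data into the buffer at offset 0 with NO length check."""
    memory[0:len(data)] = data  # writes past the buffer if data is too long

def safe_copy(memory: bytearray, data: bytes) -> None:
    """Copies at most BUF_SIZE bytes -- the fix a bounds check provides."""
    memory[0:BUF_SIZE] = data[:BUF_SIZE].ljust(BUF_SIZE, b"\x00")

# 8-byte buffer followed by 8 bytes of adjacent data.
memory = bytearray(b"\x00" * BUF_SIZE + b"RETADDR!")
unsafe_copy(memory, b"A" * 12)          # four bytes too many
print(memory[BUF_SIZE:])                # b'AAAADDR!' -- neighbor corrupted
```

An attacker who controls what those overflowing bytes contain can steer the program, which is why the same basic bug still powers exploits today.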

The trouble with retrofitting security into networks built for a different era has convinced some scientists that it’s time to scrap much of the current Internet and start over. DARPA has spent more than $100 million over the past five years on a “Clean Slate” initiative to deal with issues not fully appreciated during the ARPANET days.

“The fundamental problem is that security is always difficult, and people always say, ‘Oh, we can tackle it later,’ or, ‘We can add it on later.’ But you can’t add it on later,” said Peter G. Neumann, a computer science pioneer who has chronicled security threats on the online “RISKS Digest” since 1985. “You can’t add security to something that wasn’t designed to be secure.”

‘A network that’s going to change mankind’


Steve Crocker worked on early networking technology for DARPA.

Others don’t go as far, but the mixed legacy of the Internet — so amazing, yet so insecure — continues to cause unease among much of its founding generation.

“I wished then and I certainly continue to wish now that we could have done a better job,” said Steve Crocker, who wrestles with security issues often as the chairman of the Internet Corporation for Assigned Names and Numbers, a nonprofit group that oversees the designation of Web addresses worldwide. In designing the network, Crocker said, “We could have done more, and most of what we did was in response to issues as opposed to in anticipation of issues.”

Similar themes appear repeatedly in the work of Clark, the MIT scientist. In a widely read paper written in 1988, just a few months before the Morris Worm hit, he recalled the priorities of the Internet’s designers. His list of seven important design goals did not include the word “security” at all.

Twenty years later, in 2008, Clark crafted a new list of priorities for a National Science Foundation project on building a better Internet. The first item was, simply, “Security.”

Computer worm

A standalone piece of software that can make copies of itself and spread to other computers. A destructive worm can make so many copies of itself that it overwhelms host computers, causing them to crash.

ARPANET

A pioneering computer network built by the Pentagon’s Advanced Research Projects Agency (ARPA). Established in 1969, it eventually linked more than 100 universities and military sites, becoming the forerunner to today’s Internet.

Binary code

A combination of zeroes and ones that together can represent any letter or number. Computer commands typically are transmitted in binary code, making it the underlying alphabet of the digital world.
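A short sketch of that alphabet at work, mapping each character to its standard numeric code and then to eight bits:

```python
# Convert text to binary: each character has a numeric code (ASCII),
# and each code is written as eight zeroes and ones.
def to_binary(text: str) -> str:
    return " ".join(format(ord(ch), "08b") for ch in text)

print(to_binary("Hi"))   # 01001000 01101001
```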

Packet switching

A system for chopping data into a series of smaller pieces and transmitting them over a network. This allows for greater efficiency but requires that recipient computers be able to reassemble the data packets in the correct order to form coherent messages.
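The idea can be sketched in a few lines: the sender tags each chunk with a sequence number, the network may deliver the chunks in any order, and the recipient sorts them back together (the payload size below is arbitrary):

```python
import random

PAYLOAD = 10  # bytes of data per packet in this sketch

def packetize(message: bytes) -> list[tuple[int, bytes]]:
    """Chop a message into (sequence number, chunk) packets."""
    return [(i, message[i:i + PAYLOAD])
            for i in range(0, len(message), PAYLOAD)]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Sort packets by sequence number and rejoin them, as the recipient must."""
    return b"".join(chunk for _, chunk in sorted(packets))

msg = b"The Internet chops every message into packets."
packets = packetize(msg)
random.shuffle(packets)             # the network may deliver them out of order
assert reassemble(packets) == msg   # the recipient restores the original
```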

TCP/IP

A set of protocols that are the fundamental technology of the Internet. They provide a common language for a disparate group of computers and networks, allowing them to work together across the world.
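That “common language” is what ordinary programs speak when they open a network connection. A minimal sketch, sending a message to an echo server over the local loopback address (the port is chosen by the operating system; nothing here is specific to any real service):

```python
import socket
import threading

# TCP gives both programs a reliable, ordered stream of bytes;
# IP (here, the loopback address 127.0.0.1) handles the addressing.

def echo_server(server: socket.socket) -> None:
    conn, _addr = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))   # echo the bytes straight back

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
server.listen(1)
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.create_connection(server.getsockname())
client.sendall(b"hello, internet")
reply = client.recv(1024)
client.close()
server.close()
print(reply)   # b'hello, internet'
```

Note what is absent: nothing in the exchange identifies or authenticates either side, which is the gap the article describes.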

Encryption

A way of encoding information so that only the sender and recipient can understand it. When computers exchange encrypted information, they use complex mathematical algorithms along with a designated digital “key.” This allows for greater privacy and also authentication of the identity of the sender and recipient.

About the series

This is a multi-part project on the Internet’s inherent vulnerabilities and why they may never be fixed.