The fallout from a FEMA error that compromised the personal information of 2.3 million disaster victims is just getting started.

It sets up a test for President Trump: The mishap — which occurred when FEMA improperly shared personal and banking data with a contractor that helped place disaster victims in hotels — marks the first significant data loss by a government agency since a 2017 executive order in which Trump pledged to hold top government officials accountable for securing their data against breaches and improper disclosure. The department's inspector general called the incident a “direct violation” of government data-handling rules, and FEMA itself admitted it was a “major privacy incident.”

Congress is also seeking more information. The Senate Homeland Security Committee has requested a briefing from FEMA officials on the data loss, which included banking information for about 1.8 million victims, a committee spokesman told me. The House Homeland Security Committee is considering a letter requesting more details about the incident, a spokesman said.

Experts hope the enormous compromise — and any potential consequences — serves as a wake-up call for other agencies. After all, the victims were already reeling from tragedy — forced out of their homes by Hurricanes Harvey, Irma and Maria and the 2017 California wildfires.

“Anytime you have a potential exposure of people’s personal information it causes you to sit up straight and pay attention and certainly in this case, that’s very attention-getting,” Tony Scott, a former U.S. chief information officer, told me. 

“I hope this serves as yet another wake-up call and helps agencies think about where they might have an exposure in a slightly different way than they were looking at it in the past,” said Scott, who is now a senior adviser with the Squire Patton Boggs law firm. “In this case it happens to be FEMA, but there are a lot of other cases where agencies are collecting data about people with health-care issues or financial difficulties or legal issues.”

The enormous compromise is a stark reminder that the government holds some of the most sensitive personal information out there — but after years of effort, and dozens of high-profile incidents, it still isn't managing that data securely.

It's true the FEMA incident, which was disclosed in an inspector general's report Friday, differs from most major government data breaches, in which criminal hackers or nation-states cracked into government systems to steal data.

In this case, no one had to hack the data because FEMA shared it directly with its contractor. Yet that sort of data sharing failure is in line with a litany of recent incidents in which government agencies failed to properly manage where sensitive data was secured or who had access to it. That poor management can be just as damaging as a malicious breach if the data falls into the wrong hands, former officials told me.

In the FEMA case, the agency created a template for sharing information with contractors under its Transitional Sheltering Assistance program, but the agency never updated that template when requirements for the program changed, according to the inspector general’s report.

By the time of the 2017 natural disasters, the agency was passing along 20 unnecessary data fields about disaster victims, including the name of their financial institutions and their bank transit numbers.

Since identifying the data incident, FEMA has “taken aggressive measures to correct this error,” press secretary Lizzie Litzow said in a statement. That includes working with the contractor — which FEMA declined to name — to scrub all the improperly shared data from its systems, Litzow said.

The agency hasn’t found any evidence the data was compromised after it was sent to the contractor, Litzow said.

FEMA notified Congress about the incident in December and again in February, the agency said.

“The agency has taken aggressive steps to remediate this issue and we will continue to cooperate with Congress,” a spokesperson said.

The White House did not respond to queries about the issue.

One major problem, Scott said, is that the government relies heavily on outdated computer systems that weren’t designed with privacy and security in mind.

“If you had to retrofit all of the safety things on a modern car onto a ’65 Mustang, it would be a bubble wrap and duct tape sort of exercise and it wouldn’t work very well,” he said. “The same is true for a lot of these legacy systems.”

If the government invested more money in updating its IT systems or moving more of them to third-party services in computer clouds, that would fix a lot of cybersecurity vulnerabilities, he said. It would also make it easier to put controls on how and when agencies can share certain sensitive data, he said.

When he was in the Obama administration, Scott led a governmentwide "cybersecurity sprint" after the 2015 Office of Personnel Management breach, which compromised sensitive security clearance information about more than 20 million current and former federal employees and was widely viewed as an indictment of the government's cybersecurity capabilities.

That sprint included mandating that agencies limit who had access to sensitive data and requiring those people to digitally verify their identities. But it didn't fix the inherent vulnerabilities created by outdated technology, he said. 

Congress made some additional progress by passing the Modernizing Government Technology Act in 2017, which provided $500 million for IT upgrades, but that was just “a drop in the bucket” of what’s needed, Scott told me.

The government could also take a cue from industry, which is revising many of the ways it handles and processes sensitive data to comply with a strict new California data privacy law and the European Union’s General Data Protection Regulation, Philip Reitinger, a former top Homeland Security Department cybersecurity official, told me.

“This isn’t about technical controls but human mistakes,” Reitinger said. “We need an overall approach that helps you not collect the data you don’t need and, if you do have sensitive data, to only share it when you really intend to.”


PINGED: Hackers compromised a system the Taiwanese computer-maker ASUS uses to send software updates to its customers and manipulated that system to install malicious back doors in about half a million computers worldwide, according to an explosive report from the cybersecurity company Kaspersky Lab.

The hackers, who aren’t identified in the report, appear to have been targeting only about 600 of those computers, according to reporter Kim Zetter, who broke the story on Motherboard. The attack lasted about five months. Once the backdoor was installed, the hackers — whom Kaspersky dubbed ShadowHammer — had nearly unfettered ability to spy on the computers or to use them to infect other targets.

Kaspersky's findings were independently verified by Symantec. 

Here’s more from Motherboard: “The issue highlights the growing threat from so-called supply-chain attacks, where malicious software or components get installed on systems as they’re manufactured or assembled, or afterward via trusted vendor channels. Last year the U.S. launched a supply chain task force to examine the issue after a number of supply-chain attacks were uncovered in recent years.”

“Although most attention on supply-chain attacks focuses on the potential for malicious implants to be added to hardware or software during manufacturing, vendor software updates are an ideal way for attackers to deliver malware to systems after they’re sold, because customers trust vendor updates, especially if they’re signed with a vendor’s legitimate digital certificate.”

Ironically, the U.S. government considers Kaspersky a supply chain vulnerability because of the company's links to the Kremlin and has banned it from U.S. government networks. 

Here’s Zetter on Twitter with more of the backstory:

The report got a lot of attention in the information security Twittersphere, including from Rob Joyce, NSA’s senior adviser for cybersecurity:

Here’s a thread from security researcher Robert Graham about the breach’s broader implications:

PATCHED: A House Oversight Committee panel hearing today will probe actions the Federal Trade Commission and Consumer Financial Protection Bureau could take to improve the cybersecurity of companies that buy and sell consumers’ personal data.

Those consumer reporting agencies “possess troves of highly sensitive personal information about nearly every American” and are prime targets for hacking, but they face no consumer pressure to ensure they’re adequately securing that data, according to a committee statement. The massive breach at the credit reporting agency Equifax is example No. 1, the committee said.

The hearing will also focus on a Government Accountability Office report, released this morning, that outlines actions needed to strengthen government oversight of consumer reporting agencies.

Watch the hearing here.

PWNED: Researchers have found a second flaw in a Swiss electronic voting system just weeks after finding a first flaw, Cyberscoop’s Jeff Stone reported.

The discovery is likely to breed even more skepticism about the secure use of online voting systems, which some U.S. states use for military and overseas voters.

From Cyberscoop: “The vulnerability involves a problem with the implementation of a cryptographic protocol used to generate decryption proofs, a weakness that could be leveraged 'to change valid votes into nonsense that could not be counted,' researchers Sarah Jamie Lewis, Olivier Pereira and Vanessa Teague wrote in a paper published Monday.”

And from the researchers: “We are a small team of researchers investigating this code base for the first time. In a few weeks, and while spending a small fraction of our time on this investigation, we have found critical breaks...We only inspected a small fraction of this voting system, and we therefore have no reason to believe that it does not contain other critical issues.”


Cybersecurity news from the public sector:


Cybersecurity news from the private sector: