Cybersecurity Summit: Threat Assessment 2019

Welcome

MS. CORATTI: Hello, good morning.

CROWD: Good morning.

MS. CORATTI: Good morning. My name is Kris Coratti. I'm Vice President of Communications here at The Washington Post and General Manager of Washington Post Live.

Really appreciate you all joining us here for our 9th Annual Cybersecurity Summit.

Before we begin, I'd like to first acknowledge today as a really meaningful one for The Washington Post and for the cause of press freedom around the world.

One year ago, this morning, journalist and Washington Post contributing columnist, Jamal Khashoggi, was brutally murdered in the Saudi Consulate in Istanbul.

The Post has been paying tribute to Jamal throughout this week. And just moments ago, Post Publisher, Fred Ryan, and The Post's owner, Jeff Bezos, spoke at a memorial for him in Istanbul just steps from where Jamal was killed.

Jamal's courage and his work will not be forgotten.

For me, Jamal inevitably brings to mind the importance of a free press. But, more broadly, our ability to access information and communicate openly is the lifeblood of our democratic society.

So much of our communication is done digitally, right, over texts or emails, on social media platforms. Our digital lives are encoded in data. They're ping-ponging around servers all over the country, on networks throughout the world. Securing these spaces is critical for free expression and for free enterprise.

And understanding the threats we are facing is an important first step. But issues of cybersecurity are not always clear-cut. The same technology that keeps classified government information from getting into the wrong hands can be used to shield criminals in the darkest corners of the Web.

The hacking tools used to track journalists and dissidents are similar to what law enforcement uses to track criminals and terrorists.

Today, we're going to try to sort through these issues with experts on the front lines of cybersecurity innovation and governance, who are working to keep everything, from our elections to our smart phones and emails, safe.

Before we get started, I'd like to thank our nonprofit sponsor, The Washington Institute; and our supporting sponsor, the University of Virginia.

I'd especially like to thank our presenting sponsor, Raytheon, and welcome to the stage Jon Check, Senior Director of Cyber Protection Solutions at Raytheon. He's going to say a few words. Thank you.

[Applause]

MR. CHECK: Thank you, Kris.

Good morning.

CROWD: Good morning.

MR. CHECK: Thank you to Kris Coratti and The Washington Post for convening today's discussion on such an important topic to our national security.

I'd also like to thank the University of Virginia, my alma mater, "Go Hoos!" And The Washington Institute for supporting this event alongside Raytheon.

The speakers we will hear from today are pioneers and key thought leaders on the front lines, protecting our nation, our businesses, and our lives from cyber threats. Each of these speakers brings a unique perspective on how we can come together as a community to address the challenges we face.

And what better time to come together than National Cybersecurity Awareness Month? This year's theme is "Own it, secure it, protect it." Our goal is to make practicing good cyber hygiene a lifetime campaign, not just something we focus on once a month.

I believe cybersecurity truly is a shared responsibility. Not only is our data at stake in this contested environment, so, too, is our democracy. Over the years, we've seen interference in our elections, critical infrastructure, and throughout the private sector.

Today's experts will address three key themes: the need for trusted public-private partnerships; the importance of information-sharing; and the advanced technology to help combat the threat.

The convergence of these topics is where the power truly lies. Together, we make it more difficult for our adversaries to breach our critical infrastructure, erode trust [phonetic 00:04:16], and impact our safety.

Raytheon is proud to be a trusted security provider, providing technical solutions such as cyber as a service that matures the security posture and improves the resiliency of government and commercial organizations.

Thank you, again. I look forward to a great exchange of ideas this morning.

[Applause]

Defending Democracy: Protecting 2020

[Video played]

MR. IGNATIUS: So, ladies and gentlemen, thank you for coming this morning. I just want to join in what Kris Coratti said at the outset: This is an anniversary that has a lot of meaning for us at The Post. The fact that our Publisher, Fred Ryan, and our owner, Jeff Bezos, travelled all the way to Istanbul to speak on behalf of my colleague and friend, Jamal Khashoggi, illustrates a commitment that they have made personally, which I think all of us journalists here at The Post feel very grateful for. And so, I just wanted to share that with you.

Today, we're going to talk about cybersecurity, interference in our 2020 presidential elections, and a very innovative new way of trying to deal with that. And we're going to talk with two of the people who are most familiar with these issues: first, former Secretary of Homeland Security, Michael Chertoff; second, former Director of National Intelligence, James Clapper.

Each knows cyber and these issues and the difficult political and legal background as well as anybody who's served in government.

I want to start, gentlemen, with a question that's on everybody's minds this week. It involves the question of interference in our elections, but this is the complaint that's been raised by the still-unidentified whistleblower, whose complaint is now before the House Intelligence Committee and is subject of an intense national discussion going all the way to the issue of impeachment.

Without asking you what you think about whether the President should be impeached, I do want to ask you each the baseline question, whether you, as experts in this area, find the whistleblower's complaint, which we now read, "urgent and credible;" those were the words that were used.

And then, second, whether you would think that it ought to be investigated to determine whether it's accurate.

MR. CLAPPER: Well, maybe I should start, since it's the intelligence community, and I'm very familiar with the Intelligence Community Whistleblower Protection Act and the complaints that are submitted under it.

I would say that, of all the whistleblower complaints I ever saw during my six-and-a-half years as DNI, this one was the best written, best prepared, footnoted, and caveated, as appropriately it should be.

And the law prescribes that, once a whistleblower complaint is submitted, it goes directly to the intelligence community Inspector-General, which became statutory during my time as DNI, and accordingly, it acts independently.

The Inspector-General makes a determination about whether the complaint is credible. I don't recall ever having one that was declared to be "urgent." And so, that was done. The whistleblower complied meticulously with the provisions of the law.

And for me, it was one of the most credible, compelling such complaints I've ever seen.

Should it be investigated? Absolutely. That's the whole premise of the Whistleblower Protection Act: that serious, credible complaints of wrongdoing should be investigated accordingly.

MR. IGNATIUS: Mike, what's your feeling about the same issues? Was it credible, urgent, and should it be investigated?

MR. CHERTOFF: Well, I can't judge whether it's credible, because I think you have to obviously investigate. You have to determine what the basis of knowledge is.

Does the person--were they in a position to know certain things or not know certain things? There are probably going to be other people who would have to be talked to.

What I would say is this, though: Obviously, it's a matter of significant concern. Any investigation ought to be dispassionate, fair, thorough, and expeditious. What should not happen is people announcing the result they think they're going to get before the investigation is done, because that impairs the credibility of the whole process.

MR. CLAPPER: If I could add just one other point--

MR. IGNATIUS: Yes.

MR. CLAPPER: --just to be clear, that the law stipulates a period of 14 days, I believe, where the Inspector-General can investigate the allegations contained in the complaint. And that was done in this case where there was--within the time limit of 14 days, corroboration, at least in the IG's mind, before he forwarded it.

MR. IGNATIUS: And Jim, let me ask you, because you were in the position that Acting DNI Joe Maguire found himself in, just after taking office. He made a decision when he received the complaint from his Inspector-General to go to the White House and the White House Counsel and then to the Justice Department, the Office of Legal Counsel, both institutions, in a sense, part of the whistleblower's complaint.

Do you think that was appropriate?

MR. CLAPPER: Well, he was in a tough place. Here he'd been Acting--Acting--Director of National Intelligence for about six weeks, and this, you know, arrives on his doorstep.

So, I think the way I've answered this--I've been--this is beginning to be an FAQ, a frequently asked question. And the way I've responded in the past is I think, institutionally, Joe did the right thing.

The problem, of course, is that by consulting with the DOJ and the White House--and he had a genuine concern about violating Executive Privilege, where he doesn't have the authority to waive Executive Privilege.

Now, you can argue that until the cows come home, but was that the right thing to do, where he is consulting with an element of the government that's implicated in the complaint? And you know, that's a judgment call that he made. If it were me, I honestly don't know what I would have done. I trust what I would have had is a very extensive and deep conversation with my General Counsel about the pros and cons of doing that. And I'm sure Joe did the same thing.

MR. IGNATIUS: Mike, I want to ask you about a question that's becoming more and more central now, and that is how can Congress compel testimony, either through subpoenaed witnesses or depositions, other documents, in an investigation that it deems essential but where administration officials are withholding that information? What happens next?

MR. CHERTOFF: You know, typically what's happened in the past, particularly when you get a subpoena, but even if Congress just wants you to testify, is that, because they hold the power of the purse through appropriations, generally government officials go along with it, because the sanction they face is the money gets cut off.

I guess if you're going to be technical about it, what would happen is the subpoena would issue. If someone would fail to appear, they would then go to court. Congress would go to court. They would get a court order mandating the person to appear. And then, if the person still failed to appear, they would, in theory, be held in contempt of court.

The other possibility is someone could appear and decline to answer certain questions on the grounds that they are privileged. That gets you into some tricky legal issues about whether Congress has the direct ability to impose contempt, or whether Congress has to go to court.

As with most things in the American legal system, you usually wind up with a potentially extended litigation because you're dealing with unprecedented issues, and that means everybody is going to wind up being careful about how they deal with them.

MR. IGNATIUS: And would you guess, based on your experience, that this issue is going to end up in the Supreme Court before it's done?

MR. CHERTOFF: It's quite possible. Obviously, everybody remembers back in the early '70s with the Nixon case. But the court, given its schedule, only has a certain amount of bandwidth and, in some ways, by the time it gets up to the Supreme Court, you're talking about months having gone by. So, there may be a tension between the tempo of these investigations and the tempo of the court system.

But again, it's a little hard to speculate because we don't--we haven't yet seen a concrete dispute that emerges that is ripe for court.

MR. IGNATIUS: So, I want to turn now to our main subject of political interference going forward in the 2020 elections.

And I want to invite our audience here and also those watching this on livestream: If you have questions, you can send them to me right here on this little iPad; it's #PostLive, and I, in theory, will see them here and I'll try to look and ask any questions.

But let me ask Jim first and then Mike to give us a sense as we head toward 2020 of how well prepared you think we are to protect our elections from the kind of interference that we've seen now, powerfully, in 2016 and 2018, too.

MR. CLAPPER: Well, having happily left the government, I just don't know.

It's my impression that a lot has been done, certainly among the key federal agencies: FBI, Department of Homeland Security, National Security Agency, all those that are stakeholders and can help this.

So, I think a lot has been done over the situation where we were in 2016. But you got to remember, you know, our voting apparatus is very decentralized. It's done at the state and local level, not at the federal level.

I was really taken aback during 2016, when we were seeing what the Russians were doing, when Jeh Johnson, then Secretary of Homeland Security, reached out to voting officials, election commissions, and the like at the state level and got a lot of pushback; you know, "We don't want the Feds messing with us," sort of thing.

So, I think--but having said all that, I am confident that a lot has been done to make it better.

If I may, David, just make a point here which I--whenever this topic comes up. Securing the voting apparatus: voting machines, computation of votes, the transmission of votes and all that, that's hugely important.

But that, to me at least, is one bin of the problem. The other bin is what I might call, for lack of a better term, intellectual security, meaning, how do you get people to question what they see, read, and hear on the Internet?

And this is where the Russians exploited us, exploited our divisiveness by using social media. So, that part of the problem, I'm not sure about.

MR. IGNATIUS: Mike, let me ask you the same thing of how vulnerable you think we are heading into 2020, whether the resistance that Jim describes to federal help to state and local governments, whether that's changing.

And then, also, maybe you'd comment on the broader question that Jim raises about the way in which our information space as a whole now has been--it looks like--contaminated?

MR. CHERTOFF: So, first of all, I agree with Jim. I think that the federal government has been much more active and I think the states have been much more willing to accept help. I think you'll hear more in some of the later panels about that.

I also agree that actually the machines themselves in some ways are the least vulnerable because, (a), they're decentralized; and (b), they are normally not hooked up to the Internet except very, very briefly. So, to tamper with them, you'd have to get physical access.

Where I think we have greater challenges are the registration databases, the tabulation databases, and all the infrastructure around voting which includes, you know, is the power working; is transportation working; can people get to the polls?

And these issues require not just preparing to raise your cybersecurity level against hacking, but it also means resilience. If there is something that makes it difficult to vote on Election Day--either the database goes down and therefore you can't verify who's entitled to vote, or the trains stop running because of a cyber-attack--is there a plan for what you do next?

And that is the essence of resilience: You've got to have thought through that in advance. You have to make sure you know what the plan is, that you have the authorities, and that you have the capabilities. And I think that's an area we ought to look at.

On what Jim called the second bin, which is disinformation, I think this is a challenge that's broader than the election itself. Obviously, one of the approaches that the Russians and, frankly, the Chinese also, take to geopolitical conflict is the information space, what they used to call "active measures."

And the idea here is if you can disrupt the unity of effort of the United States or Europe or other democratic countries, then, basically, you win without firing a shot, because people don't trust each other and they don't trust institutions. And I think that is what we have seen over the last ten years. In fact, it goes back decades.

What has changed most recently is social media, and the ability to manipulate that to drive very carefully tailored messages to particular individuals. And that's an area where I think we're still trying to implement standards and approaches that would mitigate the effect of that.

And job number one is to get people to be critical in their thinking when they see a story and not simply accept that it's true because, quote, "It's on the Internet."

MR. IGNATIUS: So, just going to this point that Jim--and you both now have discussed, the more that we talk about the insecurity of our election systems, in a sense the more people have it in their mind that there's something wrong here.

MR. CHERTOFF: Yeah.

MR. IGNATIUS: A friend who runs cybersecurity for one of the big social media companies said to me recently, "What the Russians really are doing is weaponizing uncertainty," that the very fact that you're uncertain whether these systems may be attacked leads to less faith in the outcome.

I just want to ask you, I think it is one of the hardest questions there is, is there any way to reduce that weaponized uncertainty that you can think of that's appropriate for a democratic government?

Jim? Mike?

MR. CHERTOFF: Well, I would say this: I mean, one of the points that's been made repeatedly is you need to have a verifiable, auditable system for actually recording votes. And whether it's a paper ballot or the various kinds of tools that are now being developed that would encrypt a copy of the ballot, the ability to assure people that, if there were a dispute, it might take a little bit of time, but you could go back and you could actually manually count--I think that is an important confidence-building measure.
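[For readers who want to see the auditability idea made concrete, here is a minimal, purely illustrative Python sketch of an append-only, hash-chained record of ballot entries--one simple way an audit trail could support the kind of after-the-fact manual check Mr. Chertoff describes. The record format and field names are hypothetical, and this does not represent any actual voting product or the encrypted-ballot tools he mentions.]

    # Illustrative toy example only: an append-only, hash-chained log of
    # ballot records. Tampering with any record breaks every subsequent
    # hash, so a later manual audit can detect it.
    import hashlib
    import json

    def append_record(log, ballot_record):
        """Append a ballot record, chaining its hash to the previous entry."""
        prev_hash = log[-1]["hash"] if log else "0" * 64
        payload = json.dumps(ballot_record, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        log.append({"record": ballot_record, "prev": prev_hash, "hash": entry_hash})

    def verify_log(log):
        """Recompute the whole chain; returns False if any entry was altered."""
        prev_hash = "0" * 64
        for entry in log:
            payload = json.dumps(entry["record"], sort_keys=True)
            expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
            if entry["prev"] != prev_hash or entry["hash"] != expected:
                return False
            prev_hash = entry["hash"]
        return True

    audit_log = []
    append_record(audit_log, {"precinct": "12-A", "ballot_id": "0001", "choices": ["..."]})
    append_record(audit_log, {"precinct": "12-A", "ballot_id": "0002", "choices": ["..."]})
    assert verify_log(audit_log)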

MR. IGNATIUS: So, any thoughts, Jim?

MR. CLAPPER: Well, you know, I don't have any silver bullet suggestion here, other than imploring people to think critically, try to corroborate the information they're absorbing. Pick and choose your sources, that sort of thing.

I've often fantasized about some sort of national fact-checker, unassociated with the government, perhaps. I don't know quite how you would constitute this, that the fact-checker would be seen as uniformly and universally credible. But somebody like that could verify or refute what is being said out there on--particularly on social media.

MR. IGNATIUS: It's tricky. We don't want a single authority telling us what's true and what isn't.

MR. CLAPPER: Yeah.

MR. IGNATIUS: That sounds like Big Brother.

MR. CLAPPER: George Orwell.

MR. IGNATIUS: But there's got to be a solution.

So, I want to get to something that's really encouraging that you're both involved in, and it's a creative effort to deal with this problem and draw the public in: It's called CyberDome.

And maybe I could ask each of you just to explain the basic idea of this, what sorts of services CyberDome will offer to candidates around the country in 2020, and hopefully for many years to come.

Jim, why don't you start that off?

MR. CLAPPER: Well, I was approached by this group, which is a group of public-spirited, public-minded citizens who have aligned themselves with cybersecurity experts and put together an organization which is designed, on a bipartisan basis, to support and assist campaigns and, particularly, the two national committees in securing themselves.

It's not a government thing. They're seeking funding outside the government, and Mike and I were both approached about it and are serving on their Board of Advisors.

Mike.

MR. CHERTOFF: Yeah, the idea here is a nonprofit organization that will offer free of charge to campaigns cybersecurity advice.

Now, we've had campaigns hacked for years. I mean, I remember back in 2008 campaigns were hacked. What was different in 2016 is what they call "doxing." Not only were the campaigns hacked by foreigners in order to see what the campaign was thinking about from a policy standpoint, but some of the content was actually disseminated by the Russians and put out there in the run-up to the 2016 election in a way, again, to try to unnerve and demoralize the Democratic Party and its supporters.

So, that, I think, took the weaponization to a new level, and part of what we're trying to do is get the candidates to raise their game when it comes to protecting against these kinds of intrusions, which can then be, as has been said, weaponized against them.

MR. IGNATIUS: So, I urge people to take a look at what this CyberDome is proposing. It's a creative idea. It's not the government doing it, but private citizens, in a way that should make it easier for people to draw on help.

And as we think about how we're going to protect our democracy, which turns out to be more fragile than we realized, this is a pretty good idea, and I'm really pleased to have these two people who are associated with it here with us.

I want to ask another question that lurks under the surface of our national debate, now, and it's a hard one, but there are a lot of people out there, it's clear, who think that there's something that they call the "Deep State." And they think--probably people like the two of you, experienced national security--

[Laughter]

MR. IGNATIUS: --no criticism intended, but they think of experienced national security officials, people like Jim Clapper, who served, if I remember, over 50 years as an intelligence officer, one way or another. They think about Mike Chertoff, who's been a U.S. Attorney, who's served in various agencies, who's seen every part of our government.

And they worry that you've got a kind of hidden hand on the nation's steering wheel that surfaced in the whistleblower complaint. People say, "What the heck is this CIA guy doing, you know, seconded to the NSC staff, investigating the President?"

So, I think it would be interesting for people if each of you would just respond from this long experience you had to this argument that's out there in America. And what is it, Jim, that you'd want to say?

MR. CLAPPER: Well, I had never heard of the term "Deep State." Maybe it was ignorant bliss or something, but I never heard of that until the campaign and afterwards. Allegedly, this is a conspiracy of career government public servants who have somehow organized themselves to undermine or overthrow the President, which, on its face, is ridiculous.

You know, the intelligence community, it's almost holy writ: truth to power and under whatever difficult circumstances that may be. Even if the power ignores the truth, they still have to keep telling it.

And my experience has been that, sure, people in the intelligence community, they're just like everybody else. They have their political views, but they--again, my observation has been consistently that they park those political preferences at the door before they walk into the office.

So, now, unfortunately, this recent whistleblower complaint coming from a member of the intelligence community just fuels that conspiratorial fire that there is such a thing as the Deep State.

MR. CHERTOFF: So, Deep State is a concept that really comes out of an entirely different context. It has to do with countries where the military is so powerful, they also control a lot of the industrial base.

If you look, for example, at the Revolutionary Guard in Iran, in addition to having military capability, they actually control industry. We don't have any of that here.

As Jim knows, our military is completely under civilian control, and they stay in their lane. Likewise, the intelligence community is very, very carefully hedged with a lot of rules. And we have courts that supervise almost everything.

And if you look at some of the history, for example, of surveillance programs and the controversy that's arisen about those, those have always occurred because somebody was uncomfortable with a decision being made, and then it got to court, perhaps, or Congress changed the rule. So, we are kind of the opposite of the Deep State.

Now, I understand that Americans traditionally have had a certain suspicion of government, but that's not so much a question of the civil service. I think it is more generally a question of not having the government overstep its role in the private sector. And our solution in our Constitution is we break the government into three parts, and we also have federalism.

What people miss sometimes is much of the real power is at the state level, in terms of the police and the enforcement mechanisms. And that's one of the things that guarantees that our government cannot overstep or really commit misconduct.

MR. IGNATIUS: Final question, again, one I think every member of this audience probably would want me to ask you: What's the damage to the national security agencies, to the people of the CIA, the other intelligence agencies, the FBI that you work with closely, Mike--of this period, when you have the President calling the whistleblower, a CIA officer, a spy and accusing him of treason?

What damage does that do to the people who work for these agencies, and also to the partners we have around the world who are our central liaisons?

MR. CLAPPER: Well, obviously, it's not good. It's not a good thing and I think it affects, you know, a lot of people in the intelligence community.

But I have to say it's a dangerous thing to try to characterize--again, another FAQ: You know, what's the morale of the intelligence community?

Well, the intelligence community is a large, complex, globally dispersed enterprise. And there are thousands of people in the intelligence community that aren't affected by this stuff, at all.

So, if you're at a mission ground station someplace, here in Denver or Menwith Hill or Pine Gap, or you're in Embassy X someplace as an intelligence officer, you're just there doing your job and you're just not affected by this.

So, the specific elements that are really directly affected within the intelligence community are, of course, my old office, the Office of Director of National Intelligence; obviously the Agency, CIA; and the FBI. It does have an effect on them. But there are, you know, vast parts of the intelligence community that just aren't directly affected.

Now, just because they are a part of the intelligence community and are getting, you know, pretty regular badmouthing, that's not good for morale.

And it isn't good, as well, for our intelligence partners who share with us, in good faith, you know, information that they believe is germane to our national security.

MR. IGNATIUS: Mike.

MR. CHERTOFF: I guess I'd say two things.

My observation is that, by and large, in the agencies, you know, when there are ups and downs and controversies, people still go about their business professionally. And the vast majority are dedicated to their work, and whether things are uncomfortable or not, it's not going to change their mission.

The other thing I will say is, generally--and I think Jim will attest to this--our relations with our good partners overseas at an operational level have generally been able to resist the vicissitudes of politics. Even when the politicians are at each other's throats, the professionals, particularly those in the security space, know how to work together and how to trust each other.

So, this will pass, but I would leave you with this thought: I happen to be Chairman of the Board of Freedom House, which we set up, you know, over 50 years ago to promote freedom around the world. People look to the U.S. as a beacon for the values of democracy and freedom and the rule of law. And when we stand for that, not only do we earn friends but we actually earn admirers.

And I remember meeting people who, when I was in office, in Central and Eastern Europe, who had been high school students during the Cold War and under the boot of the Soviet Union. And they said to me when I met them many years later, the fact that Americans like Ronald Reagan spoke up for freedom, "Tear down this wall," inspired us to keep strong and to keep struggling for freedom.

And that is one of the most powerful weapons we have, and it would be a shame to lose it.

MR. IGNATIUS: So, we've had two of the very best people in national security to kick off our discussion this morning on cybersecurity.

Please join me in thanking both of them.

[Applause]

America Held Hostage: How to Fight Ransomware

[Video played]

MR. MARKS: Hello, everyone. My name is Joe Marks. I am a cybersecurity reporter for The Washington Post. I write the Cybersecurity 202 Newsletter.

And I'm here with Jeanette Manfra, who is the Assistant Director for Cybersecurity at the Department of Homeland Security.

And we're here to talk, at least partly about ransomware, which I think a lot of people are familiar with. It's when hackers not only steal your computer files, they also--they lock them up and won't release them until you pay a ransom in bitcoin. And this has just been a huge problem that has hit cities, including Baltimore and Atlanta, some major industrial players; just small towns, police stations across the United States.

What's DHS and the government doing about it?

MS. MANFRA: Sure. So, for those of you who don't know, I'm from the Cybersecurity and Infrastructure Security Agency, which was established by Congress close to a year ago to be the federal government's central point for leading cybersecurity and physical infrastructure security, and working with our partners in the private sector and state and locals.

And so, first, also, if I may, today is the second day of National Cybersecurity Awareness Month. For those of you who are not aware, you are now aware.

And so--and the recent sort of spate of ransomware attacks really highlights the theme that we've decided to focus on, which is about accountability. And both as an individual, we're all consumers, we're all employees of an organization. Some of us run organizations.

And so, how do we think about how we own IT, how we secure it, and how we protect it?

And importantly, we are also very much focused on those organizations who don't have the hundreds of millions of dollars of resources to do all of these things. Oftentimes, in cybersecurity circles, we talk about very advanced, sophisticated, sexy concepts. And the reality, as the ransomware attacks have shown, is a willingness to attack the most vulnerable organizations--people who are willing to stop schools from functioning, hospitals from functioning, municipalities. That takes a certain low kind of criminal, and we're really trying to step that up.

MR. MARKS: And it's also--and it's also, I mean, in addition to being pretty malicious, are these people relying on the brightest and best, new hacking technology?

MS. MANFRA: No, not at all. Much of the technology that they're using is, you know, sort of commodity malware that anybody can find and run.

There is some more sophisticated stuff, and there's definitely some money in this. And in many cases, the incentives are a bit misaligned when you have--you know, we don't want anybody to pay out, because that just encourages future "enormous problems." [phonetic 00:37:57]

MR. MARKS: Is there ever a situation when they should pay out?

MS. MANFRA: You know, I always say you should--you shouldn't pay out. That being said, I'm not the person in the midst of making that tough decision about what's going on, and I don't fully understand what their risk calculus is.

And when you have insurers and others that are going to cover that, that furthers our problem of misalignment of incentives.

We're trying to focus more on building the resilience and getting the tools. We're going to be releasing, very soon, a set of cyber essentials, you know, just to give people a place to start--a lot of small, medium businesses, state and locals, they come to us. And while we spend a lot of time focusing on very high-end threats--you know, the electric sector, our elections, these things--a lot of people just come and say, "Where can I start? What do I need to do if I have five dollars? Where am I putting that five dollars towards?"

And so, this month, and really beyond, with our essentials, we're going to continue to focus on that community.

MR. MARKS: Is that a new thing for DHS, to be focusing on the small and medium businesses, the five-dollar problems rather than the five million-dollar problems?

MS. MANFRA: I wouldn't say it's new. We have--you know, we've worked closely with, you know, state and locals, with small- and medium-sized businesses. I think what's new is that we're really stepping up and prioritizing our efforts, there. Oftentimes, the five-dollar problem can turn into a five million-dollar problem.

And many times, these--you know, just the interconnectedness of everything, many of these organizations might be public safety, or they might be connected somehow in the supply chain of a larger sort of traditional critical infrastructure. So, we don't think we can separate those two communities, as much.

MR. MARKS: One ransomware problem that your office has talked a lot about is the concern about a ransomware attack from Russia, from anyone that targets statewide voter databases in advance of the 2020 election.

What are you doing to prevent that?

MS. MANFRA: Well, so, first I want to be clear that there's not a specific threat that we're aware of. It's more a sort of logical extension: as we're seeing this, that is a potential scenario.

And there are very basic things to prevent yourself from becoming a victim of ransomware: backing up your systems, updating. And so, that's not something the federal government can do for these organizations, nor do I believe it's our role to do that.

But what we are doing is publishing more documents. In August, we published a specific ransomware document, partnering with associations and state and local leaders, mayors and others, to get that message out. It is, you know, thinking about where they're taking that IT money and spending it on preventive measures; and also, being able to understand how the federal government can help them in a response scenario.
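[As a concrete illustration of the "back up your systems" basics Ms. Manfra mentions, here is a minimal Python sketch that copies a directory to a dated snapshot and writes a checksum manifest so a restore can be verified after an incident. The paths are hypothetical placeholders, and a real backup strategy would also need offline or offsite copies, tested restores, and patched systems; this is not CISA guidance or tooling.]

    # Illustrative sketch only: snapshot a directory and record SHA-256
    # checksums so restored files can be verified later.
    import hashlib
    import shutil
    from datetime import date
    from pathlib import Path

    SOURCE = Path("/srv/records")        # hypothetical data directory
    BACKUP_ROOT = Path("/mnt/backup")    # hypothetical backup volume (ideally offline/offsite)

    def snapshot(source: Path, backup_root: Path) -> Path:
        """Copy the source tree to a dated snapshot and write a checksum manifest."""
        dest = backup_root / f"{source.name}-{date.today().isoformat()}"
        shutil.copytree(source, dest)
        lines = []
        for path in sorted(p for p in dest.rglob("*") if p.is_file()):
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            lines.append(f"{digest}  {path.relative_to(dest)}")
        (dest / "MANIFEST.sha256").write_text("\n".join(lines) + "\n")
        return dest

    if __name__ == "__main__":
        print(f"Snapshot written to {snapshot(SOURCE, BACKUP_ROOT)}")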

MR. MARKS: So, big picture, after two years working on this problem since the 2016 election, how confident should Americans be that the 2020 election will not suffer from a compromise by Russia or another hostile actor?

MS. MANFRA: I think I remain very, very confident that the tally of the votes, the actual vote count itself, will be faithful to what the voter actually put into the machine.

And Former Secretary Chertoff talked a little bit about the broader sort of architecture.

Some of the things that we've really focused on that increase our confidence--and I'm talking just about election infrastructure, not the disinformation, which is separate but related--is that, in 2016, I saw sort of three main gaps.

The first was around visibility, you know, how the federal, state, and locals have common visibility on both the threat and how it's actually manifesting in their systems, recognizing that it's not the voting machines that are necessarily connected, but there are systems that are potentially accessible remotely.

So, we focused a lot on visibility. We spent a lot of time and effort to the point now where we have sensors covering all 50 states. And so, that's a huge improvement. And that allows us to take intelligence information or others--either from the federal government or from threat intelligence companies--and quickly sort of ping those sensors.
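[To make the "ping those sensors" idea concrete, here is a purely illustrative Python sketch that checks network-log records against a list of threat indicators. The file formats and field names are made up for the example; this is not how CISA's actual sensor platform works.]

    # Illustrative only: match log rows against a set of known-bad
    # indicators (IP addresses or domains). Formats are hypothetical.
    import csv

    def load_indicators(path):
        """Read one indicator per line from a plain text file."""
        with open(path) as handle:
            return {line.strip() for line in handle if line.strip()}

    def find_matches(log_path, indicators):
        """Yield CSV log rows whose 'src' or 'dst' field matches an indicator."""
        with open(log_path, newline="") as handle:
            for row in csv.DictReader(handle):
                if row.get("src") in indicators or row.get("dst") in indicators:
                    yield row

    if __name__ == "__main__":
        hits = list(find_matches("sensor_log.csv", load_indicators("indicators.txt")))
        print(f"{len(hits)} log records matched known indicators")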

The other thing was ensuring that we had an understanding of a communications protocol. So, in 2016, if we had intelligence that somebody was a potential victim or a target, you know, our practice is to go to the owner of that system.

And we needed to work out how to make sure that the senior official in charge of elections in the state also had visibility. So, that was something that we worked out in exercising that.

And the last thing was really about how to speak to the public and make sure that the public is really getting the facts. And this gets into the disinformation side. And so, we did some really unique things, having an exercise with media so that they would understand how Election Day would unfold; making sure that we had quick abilities to run it down if somebody's posting on Twitter that a voting machine is behaving erratically, as did happen in 2018. We were able to quickly run it down and realize that, you know, nothing was going on there, and we were able to get the facts to the media and the public.

So, those three areas we continue to focus on. And I think in 2018 we were able to really demonstrate a level of, like, cross-party, cross-sector, cross federal, state, local sort of coordination that we weren't able to do in 2016.

And we'll continue to expand that, including the private sector--those who make the voting machines and the e-poll books, all of those--in that coordination leading up to the elections, from the time that first absentee ballot is mailed out all the way until the final vote is tallied.

MR. MARKS: So, I mean, despite all the work from DHS and other agencies, hackers at the DEF CON Cybersecurity Conference in Las Vegas looked at a bunch of voting machines that are going to be used in 2020 and found vulnerabilities of some sort in all of them.

There have been other reports about voting machines connected to the Internet when they shouldn't be; possible supply chain issues.

Should the American public be concerned about that, and how should they think about those vulnerabilities?

MS. MANFRA: You know, I think it's important to think of these in context.

We still need to work through the report from the DEF CON Voting Village, but we want to make sure that what was done there reflects how real life sort of happens. So, that's an important thing.

People who work in cybersecurity, we have a term called "defense in depth." You're not sort of dependent on one machine being fully secure all the time and never able to be hacked. You put a lot of things in place, both physical and personnel, as well as technology. And that's really what we're focused on with state and locals, and frankly something that they've done for years.

If you even just think about the transparency of the voting process, every time votes are tallied you have observers from both parties looking at the tally of those votes. And so, there's going to be a lot of indicators in place if something wasn't adding up, if it seemed that there was a sort of misalignment of votes.

We still remain, you know, sort of focused on any actors who seek to spread disinformation or dissuade people from voting, and that's always a concern, and that starts way before Election Day. And so, we're going to continue to work to, you know, make sure that people understand where the authoritative sources are, so that they can, you know, get a provisional ballot, even if something in the, you know, registration is not showing that they're eligible to vote.

MR. MARKS: So, you spent a lot of the last two years trying to get technology from companies that you don't trust and nations that you don't trust--the Russian antivirus maker, Kaspersky, and the Chinese telecom Huawei--off of government systems.

Are you working on--and has there been any progress--in thinking about a way to get things more secure up-front so that you don't have such a long process for the next Kaspersky, the next Huawei?

MS. MANFRA: Well, so, there's a few things there and I could easily take multi-hours to talk about it.

There's the kind of secure-by-design sort of concept: thinking about how do we have more secure code. And there's a lot in the software community that is working on this. We're continuing to work on how you build more secure coding practices.

How is there transparency, so you know--a lot of products are a compilation of different sort of code that comes from different places or different programmers, that may come from different countries. How do you have transparency in that? That's something we'll just have to continue to evolve.

Hardware is similar: how do you have that transparency in where your hardware came from?

From our perspective, what Kaspersky really taught us is you can't have a very sort of blunt approach and say, "Well, everything from X country is bad and we can't use that." Our economy just doesn't support that. We've chosen to outsource a lot of things over decades and we can't just flip that switch.

We do want to get to a point where we have more trusted capabilities. But what we really learned is that the threat is important, but you cannot just sort of hope that you will get to a point where you will have this perfect case--a company is a witting agent of a foreign intelligence entity and, there you go, let's just get rid of that.

Instead, we're a very risk-based organization. What we kind of came to is sort of three components of thinking about this--and we would encourage others to think about them when procuring IT products and services. The first is that the laws of the country where the product comes from, or where the data is stored, are important.

And there are certain laws that, regardless of whether a company wants to--would want to cooperate with the government or not, there are laws in Russia and China and others that would compel that company to provide that data, which was the case in Kaspersky, and we weren't comfortable with that.

The second part is the level of access to your system or data that that IT product or service has. There's a lot of things in IT that don't have a tremendous amount of access to data. And so, that's a really important consideration, and an antivirus tool has a lot of access. And so, that is kind of the second.

And then, the last thing is really thinking about market penetration. We're coming at it from the U.S. perspective--you know, if it meets those first two things but it's just not something that's used in the U.S., that's something to keep an eye on, but it's also not something that we need to sort of overly focus on.

So, when Congress passed the Secure Technology Act last December, it didn't get a ton of press; it happened to be passed the day of the shutdown. So, other things were happening.

But that was a really important piece of legislation, because it set up the framework by which the government could do what we did in Kaspersky, but it gave us the tools to do it in a more sustainable, enduring fashion.

And so, that's what's happening right now: we've stood up the Federal Acquisition Security Council. I represent DHS on it, and that will allow us to have a more systematic and open process for being able to ban these things.

The other thing that we learned from Kaspersky is it's important to do it in an unclassified and, quite frankly, even public way. The reason we did it publicly was for due process. We wanted to ensure that anybody who would potentially be negatively impacted would be able to have a voice in our decision.

And what that resulted in is that a lot of people who don't fall under our direct authority are now following our guidance. And so, we're able to impact the larger ecosystem that doesn't necessarily have to follow our orders in any way.

MR. MARKS: And it looks like we're out of time. Thank you so much, Jeanette, for coming.

Thank everyone.

MS. MANFRA: Thank you.

[Applause]

Hacker Trackers: This is Personal

MR. MARKS: Hello again. My name is Joe Marks; as you probably remember, I write the Cybersecurity 202 newsletter, and I am here with a great private-sector panel. We have Google's head of counterespionage, Shane Huntley; Director of Cybersecurity at the Electronic Frontier Foundation, Eva Galperin; and Senior Researcher at Citizen Lab, John Scott-Railton.

So it occurred to me as I was thinking about this panel, you guys all look at a vast array of bad people and bad organizations that are targeting the people you work with, from criminals to foreign governments to sometimes people's own governments and intelligence services, and even stalkers--malicious people in your life, stalkers and sometimes partners and exes.

So I thought a good way to start was heading down the line, starting with Shane: tell me who, rather than what, are the main people causing problems for the people you're trying to protect online.

MR. HUNTLEY: Well, in my case my team really focuses a lot but not solely on government-backed threats against our users and against Google. And really what we're seeing these days is that pretty much every government or most governments are really engaged in this activity for espionage, for destructive reasons, for disinformation, and it's just growing over time.

So internally we have this, like, big map where we color in all the different countries where we actually see activity that we believe is coming from that country. And year over year, there are fewer and fewer countries that are white and more and more countries that are red, and really a very small number of countries now are left white on my map. So it really is everyone. It's growing.

And I think the sophistication is also growing, and the gap is closing between the high end and what was the kind of low-end sophistication. This has become more accessible. So we're seeing more and more players from the Middle East and around the world able to either build this capability or buy this capability. So, day in, day out, we're seeing these users targeted.

MR. MARKS: Are you still seeing Russia, China, Iran, North Korea as the biggest threats, or is it really democratizing, to use a very strange word there?

MR. HUNTLEY: It's always hard to say who's the biggest threat. It really depends who you are, right? So those four are definitely four of the biggest players in the space. But as I said, it's a lot more broad. Like, if you're somewhere in the Middle East, you might be targeted by your own government specifically.

And we, like, equally warn all these users--we have this warning that we put out every month saying we believe you're the target of a government-backed attack. And to give you an idea of the scale, we warned 36,000 users last year that we believed they were the target of some form of phishing or malware attack that we saw going to them. That's not a compromise. That just means they were targeted. So that is sort of the sphere of what we're looking at in my team.

MR. MARKS: And, Eva, from your vantage point working on digital rights and civil liberties, what are the groups you're most concerned about?

MS. GALPERIN: Well, I started out my work really focused on activists, mostly activists outside of the United States, often in North Africa and the Middle East. And over the last decade or so, my work has expanded to get broader and broader and broader.

So first we started seeing international activists being targeted. Then we started seeing journalists being targeted, human rights lawyers, scientists. Then in 2016 we experienced a tremendous spike in sort of domestic activists suddenly very interested in their privacy and security.

MR. MARKS: Can you expand on that a little bit? Who are the domestic activists?

MS. GALPERIN: Oh, we've actually seen a lot of pro-choice organizations that are really concerned about their safety, a lot of civil liberties organizations. A lot of immigrant protection organizations are really concerned, and just immigrants in general, especially including legal immigrants in the United States are very concerned about their digital privacy and security.

And I have in some ways an even bigger problem than Shane in that Shane only needs to secure people's Google accounts.

MR. HUNTLEY: Only Google.

MS. GALPERIN: Just Google!

MR. HUNTLEY: And all their Android devices, and [unclear]. About every user in the world. It's an easy job, Eva.

MR. SCOTT-RAILTON: Things are getting contentious already.

MS. GALPERIN: This is why you make the big bucks.

But the problem that I have is that people come to me and they don't just need to secure the Google environment but also everything else about their lives and all of their other accounts and things which are not owned by Google, which gets somehow even more stressful, and I have fewer resources with which to do it.

And then finally, in the sort of ultimate expansion of my work, I started looking at the victims of domestic abuse. So it turns out that most people who are being spied on in their lives are not being spied on by governments or law enforcement. They are being spied on by stalkers or by exes or by people with whom they are currently in an abusive relationship.

And one of our biggest problems with sort of building a threat model for that is that companies often assume, when they're locking down devices, that if you have the user name and the password and access to somebody's phone, that you have legitimate access to the person's account. And abuse often involves access to all of these things at once. So now we need to completely rethink our threat models just in case we did not have enough to worry about.

MR. MARKS: Just to stick on that for a second before we get to John, you made a big address about this at the Kaspersky Conference a couple of months ago. Companies including Symantec and McAfee said they're going to start taking this seriously, they're going to start alerting people. Are companies getting better about this, and is it complicated? Because presumably there are some situations where apps like this have legitimate purposes.

MS. GALPERIN: Well, to begin with, I wouldn't want Symantec and McAfee to get credit that they don't deserve. Neither of them made a statement. The companies that did make statements were Kaspersky, Lookout, and Malwarebytes. So currently we have sort of three companies on board.

And right now, since we're just now kicking off both Domestic Violence Awareness Month and Cybersecurity Awareness Month--and Halloween--so just all the spooky things at once--we are really working on getting the anti-virus industry all on the same page to take these threats a lot more seriously.

Are there legitimate uses for this stuff? It depends on what you mean by a legitimate use and whether or not you're just talking about, like, is it strictly legal. Often this software is violating the law. But the real question is: the law where? What jurisdiction are you in? State laws are all different. Federal laws are all different. People exist in different countries.

The place where I have decided to draw the line is software which is sold commercially and is designed to fool the user into thinking it's not there. So if for example you are a parent and you are concerned about where your children are going and you want to see their text messages and you want to know where they are and you want to do some parenting, that's fine, as long as you don't feel the need to install this software on their device which leads them to believe they're not being watched.

MR. MARKS: So just to clarify, Symantec and McAfee came from a follow-up article I did, in which they said they were working on it. But again, I don't want to give them credit that they don't deserve if they have not done anything since then.

John, what should we be scared about?

MR. SCOTT-RAILTON: Hey, so it's an interesting question. The Citizen Lab works with these, like, high-risk groups, kind of similar to what Eva has done. And I feel like our conclusion sounds a lot like what Shane has said, which is wherever we scratch, we find bad stuff.

And I kind of think of it like Neapolitan ice cream. Like, the strawberry is the nation-state actors who've got, like, a development pipeline and good STEM capability, and then your vanilla is the actors who can't necessarily develop in house but can pay for it.

MR. MARKS: Can you give us examples of that? Can you name names?

MR. SCOTT-RAILTON: Yeah. So like, for example, Citizen Lab has done work for years on the proliferation of what we call, like, nation-state spyware. So this is stuff made by companies that allege that they sell to governments only for the purposes of like tracking terrorists and child pornographers.

In practice, it looks more like an international espionage set of technologies. And they sell to countries like Saudi Arabia and Mexico, who then slosh around and use these things for targeting their own civil society groups. And that stuff often gets a lot of attention in press because maybe it involves like zero-day vulnerabilities and other, like, sexy, exciting stuff.

But the third flavor, which is chocolate, and by far, like, the most overrepresented, is the "my cousin knows computers" approach to cyber espionage. Like, it doesn't need to be fancy. It just works.

And this is because, like, human behavior is fucking unpatchable, right? So the same deception that worked 20 years ago will work again in different digital guises, which is what drives Shane's team nuts.

But it also is a big overlap between the stuff that Eva and we are concerned with, which is at the simplest level--and for like a fucking decade we've seen--I'm so sorry. I'll stop.

[Laughter].

MR. MARKS: This is an R-rated panel.

MR. SCOTT-RAILTON: Yeah. I'm sorry whoever is doing the moderation on this audio. I thought I would limit myself to one.

But we've seen nation-state actors using basically the same kind of spyware that abusive partners wind up using. And increasingly, like a lot of that problem space ends up in the hands of someone like Shane and other device manufacturers and operating system manufacturers whose systems are still constantly locked in battle with those really simple technologies.

And so, I don't know. I feel like one of the biggest problems that we face is that the entry cost is so stupid low, that anyone can do it. And it ends up looking a lot like a public health problem, with all of the sort of behavioral complexity that comes from something where, like, people love using their devices and they're not going to fundamentally change how they use those devices. The platforms that they use are not always designed in the most sort of high-risk focused ways.

But we don't really know who the next clutch of activists is going to be. They have no idea who they are going to be yet, right? And yet they're going to be targeted over these platforms. People who are in a domestic situation that's going to end up in some kind of, like, digital spousal abuse don't necessarily know when they get their Android phone that two years later, they're going to have, like, a spook sharing their bedroom, right, fundamentally. So, I don't know. It's a big problem.

MR. MARKS: It is. So to take one specific example, I think this is either vanilla or strawberry. Lost the Neapolitan.

But, Shane, Google worked on exposing a very microtargeted attack on Apple devices, where you guys didn't identify who the actor was. There has been reporting since then that says this was Chinese government-linked hackers who were targeting the Muslim Uighur minority.

John, you've even done work since then and said they were also targeting Tibetans with this.

MR. SCOTT-RAILTON: Probably all the five poisons got targeted. We just know about the Uighurs and the Tibetans.

MR. MARKS: Yeah. So tell me how common is something like this, and how concerned should we be about these really microtargeted attacks?

MR. HUNTLEY: Well, I think what was really interesting about this attack--and kind of all these [unclear] details; we really did publish our research here--was the fact that this was one example where our team has found zero-day exploits.

MR. MARKS: Can you explain what a zero-day exploit is?

MR. HUNTLEY: Yeah, I was about to explain exactly what that is. A zero-day exploit is an exploit where--most of the exploits out there, if you have patched your devices, if you've installed all your updates, then you're actually protected because all the holes have been fixed in what's actually gone on. So really, they still work a lot because people don't update their devices, people don't patch.

But what we consider a zero-day exploit is the exploit, there isn't a patch available. And that's what this was, which is one of--we treat this very seriously because there's not a lot a user can do in many cases against a zero-day exploit. So we have this policy if we ever find one--and over the last 12 months my team has actually found five or six different zero-day exploits against different platforms, and this is with multiple different companies.

And our policy is, we tell the company, we help work with them to get it fixed, but we say that there's a seven-day deadline here. We don't, like, expand this out so that you've got months. So it's like we're going to start telling people how to protect themselves within seven days. And the Apple case was one of these, and that's why this was such a significant case.

But, again, this is the rarity, right? So it's actually somewhat the exceptional circumstance where we actually do see the zero-day exploits being used, and that's why we treat it so seriously. And I think we're having a really good effect of making it a lot harder to use these exploits. And, yeah, that's really the background of what that was. And then Project Zero did the complete analysis of the exploit, trying to learn the details of it. Because we really believe that learning more about these techniques, working out how to fix them, working out how to make sure these sorts of bugs don't happen in the future is how we actually secure the entire ecosystem in the world, because this is a very microtargeted threat.

And this is not the biggest threat you're going to face. Like, you're going to generally be hacked because somebody's going to trick you for your password or somebody's going to trick you into installing something. But this really serious threat is one that we do have to take very seriously and something we're fighting.
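A minimal sketch of the distinction Huntley draws between a patched bug and a zero-day, and of the short warning clock he describes. This is illustrative only: the data structure, the example identifier, and the 90-day window for patched bugs are assumptions, not Google's actual tooling or policy; the seven-day figure is the one cited above.

```python
# Illustrative sketch only: triaging an exploit report by patch status.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ExploitReport:
    identifier: str        # hypothetical tracking ID
    patch_available: bool  # has the vendor already shipped a fix?
    reported_on: date

def warning_deadline(report: ExploitReport) -> date:
    """Date by which users should start being told how to protect themselves."""
    if not report.patch_available:
        # Zero-day: no patch exists, so users can do little on their own,
        # and the clock is short (seven days, per the policy described above).
        return report.reported_on + timedelta(days=7)
    # Patched bug: exposure comes from users who haven't updated, so a
    # longer coordinated-disclosure window (assumed here) applies.
    return report.reported_on + timedelta(days=90)

print(warning_deadline(ExploitReport("EXAMPLE-0001", False, date(2019, 10, 2))))
```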

MR. MARKS: John.

MR. SCOTT-RAILTON: Yeah, so I think part of what's interesting about this case that just happened is--and part of why it's such fun drama--is how much trouble companies have with the public communication and narrative aspect of these cases, right? So Google didn't attribute, got a lot of flak for it, and later others did some sort of attribution.

And I feel like it's kind of an interesting space because we're putting a lot of emphasis on companies basically stopping nation-states doing nation-state surveillance stuff. But those companies have, like, lots of different incentives, lots of different public relations incentives, different markets.

And I feel like there's a bigger problem which is the pipeline that public and policy makers have for getting like meaningful timely information about the full scope of the threats that they or other groups face is fundamentally constricted by the different incentives of the different players.

So for example, Shane, what was the number for nation-state warnings that you guys did?

MR. HUNTLEY: 36,000.

MR. SCOTT-RAILTON: 36,000. Which is great, right? Holy smoke, that's a meaningful number. But it's also still challenging. For example, if I was to ask Shane, 36,000, how many from each country, right? How many from each threat actor? Google is limited in what they can say, and completely reasonably.

But at the same time, researchers and others, we need to know that. We need to know who are the states that are the worst actors. We need to know how they're doing it. Users don't even know, right, when they get those warnings. So I think we're in kind of a weird place.

In some sense, the other "going dark" problem is information, including attribution, about threat actors and what they're really doing and where they're doing it.

MR. MARKS: Go ahead.

MS. GALPERIN: Okay, I'm going to be mean. I promise not to swear, though. Nation-state targeting warnings don't work, and this has actually been one of my bitter disappointments from the last few years. I spent many years going around talking about the threat of nation-state actors and nation-state spying. And one of the things that I did was I called on companies to give users these warnings so that they know to up their game.

And then it turned out that often these warnings were too vague, that they did not give the users enough information, that they just scared the pants off the users and they didn't know what to do next. On occasion, they would go in exactly the opposite direction, where they would not believe the warning and believe that this is just a thing that Google does every once in a while to keep them on their toes.

So I think that now is a good time for platforms to rethink the nation-state warning and think about what kind of information you can give to users that they will actually act on and that will help to protect them in the future instead of just scaring their pants off or getting to the point where you can no longer scare their pants off and they have no pants.
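A hypothetical sketch of the kind of more actionable warning Galperin is calling for. None of these fields or the placeholder URL come from any real platform's warning format; they are assumptions chosen to show concrete next steps rather than a bare alarm.

```python
# Hypothetical warning payload: alarm plus concrete, doable steps.
actionable_warning = {
    "summary": "A government-backed attacker tried to access your account.",
    "what_we_saw": "A password-reset phishing email sent to you this week.",
    "confidence": "medium",  # how sure the platform is
    "recommended_actions": [
        "Turn on two-factor authentication with a security key.",
        "Review recent account activity and connected apps.",
        "Treat unexpected password-reset emails as suspicious.",
    ],
    "where_to_get_help": "https://example.org/targeted-user-help",  # placeholder URL
}

print(actionable_warning["summary"])
for step in actionable_warning["recommended_actions"]:
    print("-", step)
```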

MR. HUNTLEY: I would say, like, a slight defense, as the person who initially rolled out these warnings way back in the day: this is the big challenge. Like, how much can we communicate without revealing how we're detecting things? Because if we give up our detections, then we can't protect future users. And how do we actually cause the user to make a change?

We have gotten feedback. Like, some users definitely do secure things. I think we have come a long way in the last eight years that I've been doing this. When I started this, nobody believed in nation-state threats. Now we're having these sorts of conversations where everybody takes it as a given. And if anything, people are becoming blasé to the whole threat. But when I talk to election campaigns, when I talk to activists, people do care and believe that there are nation-state threats out there.

I do think giving a warning sometimes is a wake-up call to people. And we have seen some users for whom this is the mechanism of, oh, I didn't think about it and now I'm actually going to take some action. So we've measured that. But ideally, yes, we do want users to take more action. I think there is more research to do on how to make this more the default.

But I also think that we as platforms and as everyone else in sort of industry, we can't just put all the blame on the users as well. It's sort of like car safety. You can't just tell everybody to drive safer. You actually have to build safer cars. And I think we are trying to work very hard to build safer operating systems, to build more security by default, to make it so the user has to do some things themselves, but we can also do a lot for the users to help them secure.

MR. MARKS: And it's interesting that you mention campaigns. It's organizations and not just users that aren't able to do anything with this information. DHS has run into the same problem, where since 2016 they've been trying to get as much information as they can to campaigns and state and local election officials. A lot of times they say, what the heck can we do with this flood of information? We don't know how to respond to that. Is there something in particular--we'll start with governments and then corporations--that governments should be doing to improve the situation?

MR. SCOTT-RAILTON: I'll just take a freebie. I feel like it's really great to have big think, thought-leading conversations about cybersecurity with a bunch of government folks. But the problem is when they talk about cybersecurity, it's their show. And they like to think about cybersecurity issues as the great game, right? It's super exciting, and they play it with each other, and users always come second or maybe third.

And the problem is, by volume, most of the bad stuff happening on the internet is happening to individuals who don't have anybody who really has their back and who have to depend on the largesse and quality of teams like Shane's and others.

But for the most part, their governments really don't have their back. Like the number of cases where Citizen Lab has gone to users and said you've got this problem, or we've worked with users, and like, nothing happens, right? They have no meaningful recourse. It's remarkable.

And I feel like there's an ethos here, and it's like everyone has watched videos online of people getting arrested in the U.S. And basically everybody who gets arrested has some version of like, wait, I know my rights, you can't do that. They have that experience, like I know my rights, you can't do this to me.

Nobody ever says that or experiences that when they get a nation-state warning, right? No one ever says that or experiences that when they're a victim of phishing. And I feel like that's a huge problem and it doesn't get changed by folks in government basically continuing to view cybersecurity as them playing with other states.

MR. MARKS: Eva, is there a discrete thing that either government or industry can do to make the people you work with more secure?

MS. GALPERIN: I get really suspicious when somebody asks whether there is something the government can do, because I spend a lot of time protecting people from governments. So I'm not here to come and tell you that governments and law enforcement are the good guys. And in fact, I'm really suspicious of giving them power, and I'm very suspicious of any remedy that involves asking the government or law enforcement to somehow be better and rescue us from ourselves.

I think that what we need to start doing is really to start organizing as civil society. And there are kind of two ways to go about this.

One is that the people who are speaking truth to power, the journalists and human rights lawyers and people who get out and demonstrate in the streets, need to have a very solid threat model of who's going after them and how and why. And as part of that, it involves the kind of work that I do and that John does over at Citizen Lab, which is writing reports about the kinds of threats that they face so that people can then do the right thing.

But the other half of that is the work that Shane does, which is just making everyone's communications private and secure by default so that you don't have to sit there and worry about what's going to happen when the government comes calling.

And then finally there is sort of the last group of people who really often get pushed to the side, and that is victims of domestic abuse. And they have the hardest threat model to deal with because you're dealing with somebody who actually has physical access to your stuff. And I think it is really up to the companies and the platforms to start thinking about ways to deal with that particular threat model that they haven't before because I get way more calls, I get way more complaints and I get way more work than a single person can possibly do.

MR. MARKS: Just before we go on, quick, we're taking audience questions over Twitter. If you'd like to toss one in, we still have time.

MS. GALPERIN: Nothing will go wrong.

MR. MARKS: Tweet them using the #PostLive, and I will try to get them to some of our guests.

So, John, you wanted to say something.

MR. SCOTT-RAILTON: I was going to say, so Eva makes a really interesting point about changing threat models. And I feel like one of the things that we see a lot of in our research is device compromise, same as ever, right? But I feel like the new form of this, or at least what we're seeing, is more of a smash-and-grab approach, even from sophisticated actors, where they get on a device and they grab logs and then they go.

And so one of the challenges there is like, man, the chat apps and so on that we use end up putting a bunch of stuff on devices. So, I was super excited to read yesterday, as I'm sure some of you folks have, it looks like WhatsApp has begun to experiment with ephemerality. Did you folks see this? There was a report yesterday saying they were starting this with group chats. I feel like that stuff is really important because the number of cases that I've looked at where threat actors have gone on and then gotten all their juice because they spent 20 minutes on a person's phone or laptop and pulled everything, is huge.

And it also addresses some of the issues around intimate partner surveillance, because it means that if you get a device at time A, you don't get one, two, and three years' worth of personal stuff from before that. So I feel like that kind of experimentation is really good and important. But I also feel like, and I worry, that there is a national security narrative right now around access to secure and encrypted communications, pulled along by a frankly scary narrative about dark players who use these things for pornography and terrorism.
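A minimal sketch of the retention idea Scott-Railton describes: if messages expire, a device grabbed at time A no longer yields years of history. The one-week window and the message format are assumptions for illustration, not WhatsApp's actual design.

```python
# Toy retention pruning: messages carry a timestamp, and anything older
# than the window is purged before it can be swept up in a "smash and grab."
import time

RETENTION_SECONDS = 7 * 24 * 3600  # assumed one-week retention window

def prune_expired(messages, now=None):
    """Keep only messages still inside the retention window."""
    now = time.time() if now is None else now
    return [m for m in messages if now - m["sent_at"] <= RETENTION_SECONDS]

messages = [
    {"text": "plans from months ago", "sent_at": time.time() - 90 * 24 * 3600},
    {"text": "note from an hour ago", "sent_at": time.time() - 3600},
]
print(prune_expired(messages))  # only the recent note survives a device grab
```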

MR. MARKS: And that's sort of rebounded since 2014.

MR. SCOTT-RAILTON: It's really come into its own recently.

MR. MARKS: And there is a Justice Department conference on it on Friday where both the FBI director and attorney general are going to speak. Shane, you wanted to talk about this?

MR. HUNTLEY: Yeah, I think the encryption debate never seems to die, unfortunately. Like, we're against the backdoors. The argument here is about trying to balance law enforcement--like, everybody thinks there is this magical solution where we can give access to everybody's communications only to the so-called good guys but keep all the bad guys out. We really have to, as mentioned here before, create secure platforms, because we really have to weigh the risks here. And the risk of having these platforms created to be open, or backdoored for supposedly good reasons, is just way too high to run.

MR. MARKS: Why is that? Can you give the 30-second explanation of why having a backdoor to encryption is a problem?

MS. GALPERIN: It means you don't have encryption.

MR. HUNTLEY: Because, one, you don't have encryption. Two, like, it means somebody has to secure that backdoor, right? So, like, who holds that magical backdoor key? Who do you think can keep that key secret?

And I've never heard any really solid arguments about, okay, what happens if the secret backdoor key is stolen. What happens if some insider risk at some telecommunications provider or manufacturer gets access to it? This is just creating some other new mechanism where people can have their data stolen in some massive way.
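A minimal sketch of the point Huntley and Galperin are making, assuming the third-party Python `cryptography` package is installed: once an escrowed "backdoor" key exists, whoever holds or steals that single key can read every escrowed message, so the system's security collapses to the secrecy of one secret.

```python
# Sketch only: why an escrowed "backdoor" key undermines encryption.
from cryptography.fernet import Fernet

# No backdoor: only the conversation's own key can decrypt its messages.
conversation_key = Fernet.generate_key()
ciphertext = Fernet(conversation_key).encrypt(b"meet at 9")

# With a backdoor: a copy of each message is also wrapped under a single
# escrow key held by some third party (vendor, telco, or government).
escrow_key = Fernet.generate_key()
escrowed_copy = Fernet(escrow_key).encrypt(b"meet at 9")

# Anyone who obtains escrow_key -- an insider, a thief, or a future
# government -- can decrypt every escrowed message, not just one target's.
print(Fernet(escrow_key).decrypt(escrowed_copy))
```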

MR. MARKS: Is this a debate inside the cybersecurity community?

MR. SCOTT-RAILTON: I feel like it keeps coming from without, right? Every couple of years a certain set of folks who are struggling with very legitimate law enforcement challenges are like, you know what, let's take another crack at this encryption pinata here and maybe we've got the case that will do it this time, right?

I think within our world it is fair to say most of us believe, from a mix of maybe ideology, maybe historical experience or suspicion, that this is probably going to result in bad things if we go down that path.

And we come at it from different reasons. Like, my argument is we have no idea what the next couple of years look like in most countries, right, if we've learned anything in the past few years. And we have no idea what happens when capricious folks with access to the ability to request that data decide to do so in ways that their underlings have trouble refusing, right? And that itself is a good argument for the importance of encryption.

MR. MARKS: So before we run out of time, I want to ask, big picture, is there any light on the horizon for things getting better for the average person or for highly targeted people in the next five years?

MR. HUNTLEY: Yes. I think there's light at the end of this tunnel. Maybe I'm the optimist in the room.

MR. SCOTT-RAILTON: Tell us, Shane.

MS. GALPERIN: Because we're going to be all no.

MR. HUNTLEY: So one, what we're seeing is the attackers are having to work harder, right? The dumb attacks of three years ago are now just being blocked. Like, the rate of phishing and malware and all those sorts of things being blocked by platforms, by systems, is increasing. So attackers are having to work harder, which is a good thing. We're seeing these bugs being killed at a faster rate.

And we're also seeing that there are more things users can do. We have things like Advanced Protection: if you really want to defend your Google account, you can sign in with security keys and all these other sorts of mechanisms. The levers are there for somebody who really does want these extra protections, which, to be honest, I don't think was the case four or five years ago; there was not as much you could do.

But I want people to walk away not thinking that it's all hopeless, that there's nothing you can do, you're going to get hacked, so give up. What we really do see is that if you do take some protections, and the platforms work at it, and you trust platforms that are doing a good job here, then your risks drop a lot--you are a lot more secure and you actually have pretty good odds.

Of course there is the bolt out of the blue zero day super targeted stuff that might hit you the same way like getting hit by lightning does in the real world, but in the real world you should probably be worried about like getting fit and not having a heart attack, not about lightning strikes. You should also be more worried about the basic stuff. And I think the overall security level is increasing.

MR. SCOTT-RAILTON: Oh, Eva, did you want to?

MS. GALPERIN: Sure. So I'm going to take a dissenting view. Surprise. Yes, to some extent some of our accounts and some of our platforms are becoming safer and we have more options, and that is great.

But our attack surface is also expanding exponentially with every passing year. We are filling our homes and our offices with microphones and cameras that are extremely insecure and that are often manufactured by companies that don't have security and privacy as a particularly high value and that certainly don't think about nation-state-level APTs in their threat model, and they don't think about law enforcement.

For example, there is a great deal of argument about the installation of Ring doorbells in neighborhoods and sort of their partnership with local law enforcement. And Amazon continues to insist that this actually cuts down on crime, whereas the research seems to indicate that filling your neighborhood with cameras that everybody can see does not actually cut down on crime very much. It just increases the amount of surveillance that you have.

MR. MARKS: Real quick, before we run out of time here, I don't want to go through a panel without talking about election security. Big picture, how confident should we be, do you guys think, from a private-sector perspective, about the 2020 contest?

MR. SCOTT-RAILTON: My observation is every time we have looked at elections outside of the U.S. in the past couple of years, so every time we have scratched, we found all kinds of players, domestic and foreign, mucking around in those elections. I cannot think of an election that has happened in the past few years where there hasn't been experimentation and muckery. And the biggest thing that bugs me--

MR. MARKS: You said muckery, just to be clear.

[Laughter]

MR. SCOTT-RAILTON: The biggest thing that freaks me out is that so many of our analogies and the way that we're talking are still shaped by the 2016 narrative and access; it's just pulling our intuitions back towards that. And I think that the problem space just looks really different, and I'm not at all convinced that we've got a good handle on it right now.

MR. MARKS: Guys, quick?

MR. HUNTLEY: I wouldn't say we've got a handle on it. I would say that unlike 2016--and I went through the 2016 things--that there is a lot more people working this problem. There are a lot more people taking this more seriously. The government's taking it more seriously, industry, people working together. And it is like the top priority of everyone. So watch this space to see how it plays out. But if anything does happen, it's not going to be due to a lack of effort by the platforms or anyone else, because I think people are taking these threats seriously.

MR. MARKS: That's all the time we have. Thank you, everyone. Please hold on for our final segment.

[Applause]

Threats on the Horizon: Securing our Digital Future Today

MS. NAKASHIMA: Hello again, everyone. Ellen Nakashima with The Washington Post, national security reporter. And for the last conversation of the morning we are so proud and honored to have Bill Evanina, the top U.S. counterintelligence official and Director of the National Counterintelligence and Security Center of the United States; as well as David Hickton, the first U.S. attorney to obtain an indictment of Chinese military spies for economic espionage--or, as Bill likes to call them, the OG of Chinese espionage cases--and the founder of the University of Pittsburgh Institute for Cyber Law, Policy and Security.

So our conversation today is going to focus on the top counterintelligence priority for the country, China. And we often hear of the challenge of a rising China. It's an indispensable trading partner, and at the same time it's a rival on the global stage. So China has a complicated relationship with the United States, especially when it comes to technological advancement and global market dominance.

So, Bill, as the head of U.S. counterintelligence, you have a unique vantage point. When it comes to China, where is the U.S. most vulnerable? Is it from IP theft, or economic espionage? Is it the race to dominate advanced technologies? Is this Chinese spy agencies versus U.S. spy agencies, or is it Chinese spy agencies versus the U.S. private sector and academia? How do you frame the challenge?

MR. EVANINA: So I will choose E, the answer being all of the above. And I think when you look at it from a strategic perspective of the U.S. government and private sector, we have to look at all of those vectors individually, but as a group of one. And I think it's important for our audience to understand that geopolitically, militarily, economically, China is all of one, right?

So in America we have had the opportunity to grow up in a society where we have clear bifurcation between the government, the private sector, and the criminal element. And that's not the case in the People's Republic of China, or in Russia, or Iran. So it's an unfair playing field. And they utilize all those resources as one to combat us.

And I think for this conversation, the important part of answer D was that right now our struggle is that it's an intelligence services battle against our private industry, and that's not the way we do business. So we're trying to combat that and allow and alleviate the threat by integrating the private sector as part of the battle. And that's our biggest challenge right now.

MS. NAKASHIMA: Yeah. Well, and, Dave, as I mentioned, you led the case against hackers working for the People's Liberation Army of China, but that's just one of the many precedent-setting cases you've spearheaded in cyberspace. But in some sense, how many of them have actually wound up in prison? Once in a while you get lucky and some defendant travels to a country with an extradition treaty and gets picked up and sent over here. But Chinese hackers are not likely to do that. So how do we hold these malign Chinese actors in cyberspace accountable?

MR. HICKTON: Well, you're correct, but I think that the case we brought in 2014 led to the agreement between President Obama and President Xi, which is an even greater result, which everybody agrees reduced intellectual property theft down until virtually the election of 2016. But you're making a very good point that we don't have an extradition treaty, and this is one of the challenges of the borderless nature of cybercrime.

I argue that unmasking cyber criminals has virtue in and of itself, because the principal currency of cyber criminals is their anonymity. And if you unmask them and declare that they did it, that's the first step. By the time I left the government, I was trying to expand the forums for adjudicating these cases beyond criminal investigations into the World Trade Organization, Commerce, and Treasury. My belief is that we need to hold foreign actors to the same standard we would hold American citizens, so that if they steal from our industry, particularly intellectual property, they ought not participate in our markets.

MR. EVANINA: I want to jump on that, because I believe that that was a seminal moment in our government's ability to combat theft of IP and trade secrets, because it turned out to be a marketing endeavor where we were able to educate and inform the American public, as well as the entire government writ large, of an intelligence service's--in this case, the People's Liberation Army's--theft of our business and economic ingenuity and know-how for their military purposes. And I think that was a watershed moment. That was knowledge we had always kept inside the government, but this was the first time we were able to shed light on that theft.

MS. NAKASHIMA: And one of the key achievements in that, Dave, was your ability to get these private sector companies--who traditionally, historically, do not like to come forward and admit that they've been hacked or compromised and have their names out there publicly, because it harms their reputation--you got them to actually agree to be public about it, have their names mentioned in the indictment.

MR. HICKTON: Right.

MS. NAKASHIMA: Talk about that. How did you get them to come forward? And why is that so significant?

MR. HICKTON: I was an unusual United States attorney because I hadn't served in the Department of Justice and I had represented many of these people and known many of them since childhood. But I spent most of my time trying to make sure that we could not only bring the case but tell the story: by putting a picture of the defendants at the back of the indictment--that iconic picture that came off a wanted poster--which showed the public who did it, and also, departing from what would have been the norm of Company A, B, C, D, E, by painting a picture of who the victims were.

And then when we announced the case, I described how this affected real people.

MS. NAKASHIMA: U.S. Steel.

MR. HICKTON: U.S. Steel, the United Steelworkers, Alcoa, Westinghouse, and how this led to factory closings and lost jobs, and why we needed to care about this.

MS. NAKASHIMA: So, Bill, expand on that. That was 2014, wasn't it? And now here we are five years later. It's not just steel and trade secrets that the Chinese are after. They're moving into biopharma and genetics. Can you talk a little bit about what you're seeing?

MR. EVANINA: So publicly we talk about the span of influence and the requirements that, I would say, the Ministry of State Security works with the Communist Party to develop, to come here and actually steal our innovation. And it goes from biopharma to green energy to leading technology to future markets to gas, oil, shale, clean energy. And we saw a few years ago, with the Monsanto case, the stealing of hybrid grains and seeds, because they have to feed 1.4 billion people. So they would rather not create their own research and development arm when they can come over here to the West and take it. They go first to market, their patent program is quicker and more effective than ours, and they immediately gain a local or international market at 30 cents on the dollar.

MS. NAKASHIMA: Right. This idea of, then, stealing or working with genetic mapping companies in the U.S., I hadn't heard about that. What's going on there?

MR. EVANINA: So it's complicated. Not only do they use their intelligence arms and their nontraditional collectors to steal our intellectual property and trade secrets. In a recent case involving the utilization of Duke's and Yale's capability for genome mapping, sometimes we actually engage with them and do great collaborative work with their research and development and their academic work. And they take it anyway. So it's a no-win environment.

But they took that technology on genome and DNA, and they used it to imprison over a million Uighurs, right? So even great technology that we utilize for great purposes sometimes is used nefariously by intelligence services of rogue nations.

MS. NAKASHIMA: So this was done by the MSS, which is sort of their major intelligence service.

MR. EVANINA: Yeah.

MS. NAKASHIMA: They took this through legitimate lawful research partnership.

MR. EVANINA: Some lawful, some unwitting, and some illegal, right? And I think that's the idea: they utilize a whole-of-country approach to the theft of our intellectual property and trade secrets. They'll use collaborative mindsets in academia, joint ventures, private equity, venture capital--all tools, a whole-of-society approach to obtaining our secrets.

MS. NAKASHIMA: Talk a little bit, both of you, I guess, about the academic approach that the Chinese are making--this issue now with the Chinese using, or gaining access to, universities and university secrets, but also maybe trying to influence academics or Chinese students and researchers there. How much of a challenge or threat really is it, and what is the government's role here to do anything about it?

MR. HICKTON: Well, in my view, it's a huge threat. Look, the good news is we are still the cradle of innovation and the best academic country in the world. Everybody wants to send their kids to school here. And lost in the shadow of the PLA case, which I did in 2014, was that in 2015 I exposed a network of hired guns--fictitious test-takers or fraudulent test-takers--who existed in this country and were taking the SAT and the GRE for students in China. And somehow they were getting passports, they would get admission to our colleges, and then they would get a student visa, and then go home after they were educated here.

And this was an organized network that, at the least, deprived American students--who might have been paying taxes for some of these state-related colleges--of space in those universities.

There is invasion of our research; there have been cases brought there.

So, I believe this is a real threat. I believe what the government should do about it is the same we do with intellectual property.

It seems to me that if we're going to have digital space and we are the number one economy and we are the number one research and development location in the world, American citizens should be treated equally with citizens around the world, and nation-state intrusions should be treated as a real and present threat. So, I cheer the expansion of this initiative.

MR. EVANINA: I'll double down on the threat. We believe it's critical--up there next to, you know, 5G--moving forward.

But what are we doing about it? So, this past year, we've been working under the leadership of Senator Burr and Senator Warner, a bipartisan effort in Congress. My--

MS. NAKASHIMA: The Chairman and Vice Chairman of the Senate Intelligence Committee.

MR. EVANINA: Right. We utilized my office, the FBI, and DHS, and we met personally with over 150 university and college presidents to talk about the threat and what it's like. We gave them a one-day classified reading so they could understand the intention of these foreign leaders, as well as, "Here's the threat and here's how it's manifested. Here's the number of investigations being done by the FBI. Let's work together to find a solution that is not only effective and efficient for you as universities and colleges, but also doesn't, I would say, promote any type of racism." Because the argument has been that this is a racist issue--and the Chinese intelligence services have been pushing that envelope very, very effectively here in the U.S.--but it's not.

When you look at the amount of investigations the FBI has, which is over 900, 95 percent of them are from the People's Republic of China.

MS. NAKASHIMA: Over 900 espionage or counter-espionage investigations?

MR. EVANINA: Yeah, with respect to economic espionage.

MS. NAKASHIMA: Oh, economic espionage, right. But in fact, there have been a few cases where the Department has had to drop the case, or the case got thrown out for lack of evidence, and these were cases of, I believe, economic espionage against Chinese-American, often, academics or researchers at universities, which has led to criticism that the Justice Department is overreaching and it's sort of seeing a Chinese threat amongst the Chinese-American community that doesn't really exist.

MR. HICKTON: Well, I'm in academia now, and I think that's a valid concern. And we still, in our institutions of higher education, aspire to have a worldwide student body and the educational opportunity, and a diverse population is valued.

So, I think we have to be real careful to get that point right.

MR. EVANINA: And I'll double down on the importance of understanding the threat versus actually who is committing the threat.

Recently, the FBI and DOJ charged and indicted an American citizen on a university campus for spying for the Chinese intelligence services. So, it's not about the Chinese individuals and students who are here; it's about the Communist Party of China and how they manifest their efforts here in the U.S. through the Ministry of State Security, as well as the Confucius Institutes and the Thousand Talents programs. It's a holistic program, but it's certainly not about the legitimate students coming from China to study here in, as my partner talked about, the greatest college and university system ever invented.

MS. NAKASHIMA: Bill, China is said to be making great strides in the use of artificial intelligence. Where exactly in the field of AI is China most advanced, and what is the role of the U.S. Government in enhancing U.S. competitiveness here?

MR. EVANINA: So, I'll pass on the role of--advancing our competitiveness. I'll stay in the threat perspective.

MS. NAKASHIMA: Okay, maybe--

MR. EVANINA: I think that it is a significant threat. And their ability--if you map their allocation of government funds to facilitating AI and ML, it's in the billions of dollars--is dramatic.

What they also have, which is an unfair playing-field advantage, is all the PII they've stolen, not only here in America but around the world. That theft of PII helps facilitate--

MS. NAKASHIMA: That's personally identifiable information.

MR. EVANINA: That's correct. And that allows them to use those datasets--hundreds of thousands of petabytes of data. Just recently, with the Anthem health care breach, 78 million Americans had their health care records stolen--they use that in their AI to be able to promulgate advanced analytics.

So, the more data they steal from us--PII, even children's records--they use that to facilitate testing of their AI platforms.

MS. NAKASHIMA: Like the OPM breach, as well, right?

MR. EVANINA: Twenty-one million Americans' records.

MS. NAKASHIMA: All went into big databases over which the Chinese run their AI algorithms to--

MR. EVANINA: Right. Some of the current estimates say that more than 50 percent of the American adults have had all of their PII stolen by the People's Republic of China.

MS. NAKASHIMA: Wow, half of us here.

Dave, did you have--

MR. HICKTON: I mean, the current denigration of facts and science is a threat to us. The retreat on investment in scientific research is a threat to us. But you know, we in Pittsburgh have been the home of a lot of great advances, whether it's manufacturing, medicine, or technology. And those have all been sponsored by partnership between the government, the academic community, and private industry. And we need to continue that so we can make Pittsburgh or Detroit or Philadelphia the envy of Shanghai, instead of the other way around.

MS. NAKASHIMA: Well, China has one advantage in this sense, in that they are much more of a command and control economy. And they--you know, the government can basically order companies and universities to do its bidding here. We have a much more free-market system and we try to keep independence from that market.

But is there now some--is there more of a need, do you think, for the government to sort of maybe direct areas of research, fund them, give incentives, so that we're not left behind?

MR. HICKTON: I mean, perhaps, but even when the government sponsors or directs it, it's still driven by the scientists.

I'd like to really address the premise of your question, though. Some think that the initiative, particularly the work I did, was anti-China and it was exactly the opposite.

I personally believe that China is the lynchpin in developing norms and laws in the emerging world of digital space, because they are the number two economy in the world. And at some point, they're going to appreciate that they have as much to lose as the number one economy has to lose.

I mean, there is the old saw in law enforcement from Willie Sutton: Why do you rob banks? Because that's where the money is.

If you look at the threat vectors, they're all coming at the United States because we have things to lose. They can barely turn the lights on in North Korea and Russia, but China is not like that. So, applying law to digital space was the essence of my mission in my former job. And I think, if we do that correctly, it becomes part of a strategy, as opposed to a tactic, to make China our partner.

MS. NAKASHIMA: And that was the strategy for years, of engagement, right, with--between the U.S. and China, to open our markets to them so that they would maybe become more like us, want to be part of the free market and abide by the rules of the WTO. They haven't followed those. They aren't following the same norms in cyberspace about respecting free and open Internet.

So, how likely is it that we'll be able to get China to become more like a rule-of-law nation and abide by Western norms and traditions?

MR. HICKTON: Well, I can't predict the end of that, but I can say that any effort like this requires persistence and is going to take time.

The one thing I think has been successful is that if you just look at China writ large, they largely have become more Western. Their young people are more Western. And I think our engagement strategy has worked, it just requires continuous effort.

MR. EVANINA: So, I will differ with my partner on this one a little bit. I think, under the leadership of Xi Jinping, they have become the most amazing surveillance state we've seen in decades. And the social score they have and the ability to photograph--and their facial recognition of every second of everybody's life over there is really--and you see what's going on in Hong Kong, right now. You see the power.

Secondly, I think that with any change we just talked about has to come an agreement on ceasing the theft of intellectual property and trade secrets. If we're somewhere in the middle of $400 to $600 billion a year in economic loss due to their theft, that's about $4,000 per American family, after taxes. So, we have to be able to stem the tide of their theft.

If we can't, then I don't know how we get to a place where we get back to the diplomacy of hope.
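A back-of-the-envelope check of the per-family figure Evanina cites. The household count (roughly 128 million U.S. households) is an assumption here, not a number from the panel.

```python
# Rough arithmetic check of the "~$4,000 per American family" figure.
low_loss, high_loss = 400e9, 600e9   # estimated annual IP-theft loss, USD (cited above)
households = 128e6                   # assumed number of U.S. households

print(f"${low_loss / households:,.0f} to ${high_loss / households:,.0f} per household per year")
# prints roughly $3,125 to $4,688 -- consistent with the ~$4,000 figure cited
```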

MS. NAKASHIMA: What will stem that? I mean, sanctions--there was a point at which the Obama administration was about to impose economic sanctions on China for economic espionage.

MR. EVANINA: I think that's going to take a multifaceted approach, multiple levers, to be able to do that, to include policy from the White House and legislation from Congress. And I think a change of mindset of the American people to understand the damage and the value added--or the value subtracted--from this effort. And I think that's going to be, literally, a whole-country approach to stem the tide.

MR. HICKTON: I don't really disagree with what Bill said, but I think that we do need sanctions, and I think we have to really appreciate that the current trade war has impacted us negatively. We have divided our assets in the current trade war, as opposed to multiplying them by combining the application of the rule of law with the issue of selling below cost within our markets. When we could have multiplied that conversation, we have halved it, in my view.

MS. NAKASHIMA: Okay--I want to get to a question from the audience, but first I wanted to get to the issue of Huawei, the Chinese telecom equipment maker that is a big issue for the U.S. Government, and in Europe, too, especially as we're moving into 5G super-fast, super-advanced telecom networks.

The U.S. Government has been pressing allies, in Europe especially, to bar Huawei from their 5G networks, with mixed results of success. The argument there is that allowing Huawei into your networks will open a door for either Chinese surveillance or cyberattacks that could disrupt the network at a critical moment.

But Sue Gordon, who was, until recently, the Deputy Director of National Intelligence--you know her well, Bill--has publicly argued that, you know, we have to take a pragmatic view. That even if we don't have Huawei in 5G here, there will be other countries around the world that do have Huawei in their networks, and we interconnect with those networks. So, you've got to manage risk and presume a dirty network.

What do you think, Bill, is she right?

MR. EVANINA: Well, in my space, whether she's right or not, I hate to even think about having to presume a dirty network. In the world I live in on counterintelligence, I think that is the beginning of the end.

I think from a practical standpoint, she may be right, but I think our efforts in the intelligence community and the counterintelligence writ large is to not have that dirty network.

And I think we've been able to prove, around the globe, the nefarious activity of Huawei and what they're capable of doing now, never mind when we have a 5G platform.

I would also say that Huawei, to me, in my position, is not the problem; it's the Communist Party of China. So, if Huawei goes away, there's another company that's going to facilitate the Communist Party of China and Xi Jinping's effort to be the global supplier of telecommunications. And I think that is the threat we face, not necessarily the company Huawei.

MR. HICKTON: I agree completely. One of the last cases I worked on ultimately led to the indictment of the so-called Boyusec Group, which was Advanced Persistent Threat Group III. When the case was originally presented to me, it was enforcement of the Obama-Xi agreement. It later developed to be a global positioning satellite case. So, think Google Maps, think bombs, drones, and then, somebody talked to me about precision agriculture, which I didn't know about. And then, it later became written up as the spy arm of Huawei.

So, the Huawei conversation sounds like a 5G conversation that just emerged, but the Huawei conversation has been going on for some time.

And I agree that if they go away, someone else will replace them, until we address with China what our understanding is going to be. And I'm confident we can get there; it's just going to be very hard.

MS. NAKASHIMA: So, I have a question from the audience about the law enforcement tool of indictments. A person asked, "Have the indictments against Chinese hackers done any good?" I mean, I think, Dave, you mentioned--at one point, this all led to the Obama-Xi agreement, the pledge not to conduct economic espionage in cyberspace, which worked, it seemed, for about a year or so. But then, where the PLA started tailing off on its hacking, the MSS picked up. So now that agreement didn't seem to really be meaningful.

So, what do you think?

MR. HICKTON: I think it's an expectation issue. No one would suggest for a minute that the FBI, which started out investigating bankruptcies--I'm sorry, bank robberies--is useless because we've never solved bank robbery.

Our expectation in law enforcement is to reduce, not eliminate crime. I think it was a very important start. I would be the first to admit it was extremely controversial. And we did not bring them to Pittsburgh. I may be the only one left who believes they're ultimately going to be tried in Pittsburgh. But it did lead to the Obama-Xi agreement.

And that was something no one thought of at the time. And so, imagine: Do we give them three squares and a roof over their head for ten years, or do the Presidents of China and the United States get together and reach an agreement which, everybody agrees, for a period of time reduced--

MS. NAKASHIMA: Could I also--

MR. HICKTON: --intellectual property theft down to zero.

MS. NAKASHIMA: They also came and did--Xi came over and did the agreement in part because of, I think, the threat of economic sanctions.

MR. HICKTON: Correct.

MS. NAKASHIMA: Which, you know, Washington Post reported were about to happen. And I think that combined with the indictments may have pushed them to come and make the agreement.

MR. EVANINA: Well, two things.

Number one, I think the agreement with Xi is at the forefront of the conversation, and I agree that he, as President, agreed to stop the economic espionage from a cyber perspective. It did not stop it from a human perspective, the insider threat. So, that increased dramatically. They never stopped stealing; it just made the transition from the PLA to the MSS, a more human-based effort.

Secondly, I think the indictments are critical because I spent a lot of time with our partners, especially in the Five Eyes. The recent two Huawei indictments have been earth-shattering, I think, in terms of DOJ getting the facts out on what the indictments are and what they mean for private-sector industry.

So, when I go to Australia, New Zealand, Canada, Great Britain, they look at these indictments very carefully and see how that manifests in their country.

So, as much as we are exposing the People's Republic of China for their nefarious activity, there is positive impact with our partners around the globe about that same activity in their country.

MS. NAKASHIMA: Well, I'm afraid that's all the time we have, but let's thank Bill and Dave for a wonderful conversation today.

Thank you.

[Applause]

[End recorded session]