THERE’S PLENTY to do to secure our democracy against adversaries’ cyberattacks — but one way to stay safe is to persuade enemies not to attack in the first place because they fear the consequences if they do. It will prove difficult to deter adversaries in this way, though, if we can’t figure out who they are.

False-flag operations are becoming more common for nation-states, and more sophisticated. The National Security Agency this week released a warning that hacks in 35 countries that appeared at first glance to come from Iran were not what they seemed: The intruders instead were Russians who had hijacked Iranian servers to spy in disguise. Days before, a Wired investigation meticulously recounted an assault that threatened to cripple the PyeongChang Winter Olympics last year, also by Russia, which in this case made it seem as though the attack came from North Korea.

Russia and other nations have tried this sort of obfuscation before. The 2016 hack on the Democratic National Committee was designed as if it originated with a solitary Romanian named Guccifer 2.0. North Korea’s retaliation against Sony Pictures for “The Interview’s” mockery of Kim Jong Un relied on a made-up group called “Guardians of Peace.” But those masks were easily yanked off. Today’s trickery seems more as if hackers are wearing second skins.

The trend holds true for disinformation campaigns, which of course rely on trickery to work at all. Russian Internet Research Agency trolls originally paid for advertisements on Facebook and other social media sites in rubles; a Twitter account that feigned affiliation with the Tennessee Republican Party, and whose machinations made their way into the mainstream, was even registered to a Russian phone number. Now, operatives use virtual private networks to shield their locations — and they’re evolving still.

Facebook disabled a network of Russia-linked accounts this week that, to avoid detection, took pains to recraft popular memes from scratch rather than copy them over. The accounts also opted to post photos on Instagram or to reproduce existing text written by Americans rather than risk grammatical errors — “What the future for their children will be?” asked one misfire with distinctly Russian syntax. Facebook removed Iranian accounts, too, and China has been showing an increasing interest in exporting disinformation. What if one of those countries decided to copy Moscow’s fingerprints along with its tactics?

President Trump has already made a mess of this country’s deterrence capabilities by pandering to Russian President Vladimir Putin instead of standing up to him. The threat that future investigators might not discover who meddled for weeks, or months, or at all makes the problem worse. It also further muddies the nation’s already turbid understanding of what is real. Strengthening our ability to accurately attribute attacks is as challenging as it is essential. Strengthening societal resistance to division and distortion may prove even more essential — and even more challenging.