DEEPFAKES ARE dangerous — not only the synthetic videos themselves but also the way they promise to blur reality in a country where facts are already up for debate.

Congress held a hearing last week on a development that researchers are concerned could become the next front in our disinformation wars: Deepfake technology allows legitimate creators and malicious actors alike to forge lifelike footage of any figure they please, and it is getting better, fast. A distorted clip of a “drunk” House Speaker Nancy Pelosi (D-Calif.), more “cheapfake” than deepfake, showed last month how even rudimentary manipulation can fool its way into virality. Imagine similar stunts, but more of them and more convincing, in the hands of adversaries who have already proved themselves eager to sow discord.

This is a real threat. For now, though, that is mostly all it is: a threat. Reports of possible political deepfakes exist abroad, yet here the technology appears primarily in the world of online pornography. It's difficult to regulate something that has not yet happened, which is why lawmakers should focus first on investigating how existing copyright, defamation and harassment law might be enforced against the most sinister deepfakes. Reaching further in an attempt to ban a specific technology risks stepping on online posts from, say, political parodists. And telling platforms they will be liable for damaging deepfakes they fail to remove could end up outsourcing that same censorship to them.

These risks are a reminder of pitfalls we have already witnessed in the fight over false content. Legitimate concerns about misleading the public have been co-opted into cries of “fake news” by President Trump and others determined to discredit the media. Worse still, authoritarian leaders have used those worries, and the regulation they have prompted, as an excuse to suppress speech they don’t like. Already, Mr. Trump has suggested that the “Access Hollywood” tape, in which he bragged about assaulting women, was doctored. Experts on the deepfake phenomenon have coined a term for the harm created both by the proliferation of doctored videos and by the panic surrounding them: the liar’s dividend.

Deepfakes may just be the most technologically advanced manifestation of a much bigger problem. Trust is eroding, social media is accelerating the disintegration by allowing lies to spread at unprecedented speeds, and the leader of this country is joining our enemies in taking advantage of it. Government should invest in developing technology to detect deepfakes, and it should push platforms to do the same, as well as to label content that impersonates people in ways the human eye cannot detect. The task is to get ready without getting hysterical. Hysteria helps only the liars, after all.