Opinion Congress wants to regulate AI. Here’s where to start.


The conversation about artificial intelligence tends to devolve into panic over humanity’s eventual extinction, or at the very least subjugation: Will robot overlords one day rule the world? But machine learning is more than a hypothetical, and it presents plenty of immediate problems that deserve attention, from the mass production of misinformation to discrimination to the expansion of the surveillance state. These harms — many of which have been with us for years — ought to be the focus of AI regulation today.

The good news is that Congress is on guard, holding hearings and drafting bills that attempt to grapple with these new systems that can absorb and process information in a manner that has typically required human input. Bipartisan legislation is under discussion, spearheaded by Senate Majority Leader Charles E. Schumer (D-N.Y.).

The bad news is that nothing so far is close to comprehensive — and piecing these ideas together with steps the White House and federal agencies have already taken entails some conflict and confusion. Before the country can even start to agree on a single, clear set of rules for these rapidly evolving tools, regulators need to agree on some basic principles.

AI systems should be safe and effective

This one is pretty basic. Anyone designing these tools should conduct a thorough evaluation of any harm they might cause, take steps to prevent it and measure the rate at which that harm occurs. Guarding against misuse or abuse could be trickiest of all. Already, con artists are using AI apps to simulate the voices of victims’ loved ones to persuade them to fork over cash; deepfake videos of celebrities and political candidates could threaten reputations or even democracy.

More generally, systems should be able to demonstrate some baseline accuracy. They should, in short, do what they say they’re going to do. But what exactly the threshold ought to be depends on the tool’s impact: A false negative in an initial test for cancer, for instance, is far more damaging than a false positive likely to be reassessed after further evaluation.
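The asymmetry described above can be made concrete. A minimal sketch, with invented toy data and purely illustrative costs, of how a regulator or auditor might measure a screening tool's two error rates and weigh them differently:

```python
# Sketch: measuring false-negative and false-positive rates separately,
# since (as in the cancer-screening example) their costs differ sharply.
# All data and cost figures below are invented for illustration.

def error_rates(predictions, truths):
    """Return (false_negative_rate, false_positive_rate).

    predictions/truths: parallel lists of 0/1 flags, where 1 means
    "has the condition" (truths) or "flagged by the tool" (predictions).
    """
    fn = sum(1 for p, t in zip(predictions, truths) if t and not p)
    fp = sum(1 for p, t in zip(predictions, truths) if p and not t)
    positives = sum(truths)
    negatives = len(truths) - positives
    return fn / positives, fp / negatives

truths      = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
predictions = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]

fnr, fpr = error_rates(predictions, truths)
print(round(fnr, 3), round(fpr, 3))  # 0.25 0.167

# A missed cancer (false negative) is far costlier than a flag that a
# follow-up test corrects (false positive); an acceptable accuracy
# threshold depends on that ratio, not on raw accuracy alone.
COST_FN, COST_FP = 100.0, 5.0  # hypothetical relative costs
expected_cost = fnr * COST_FN + fpr * COST_FP
```

The point is not the particular numbers but the structure: a single "accuracy" figure hides exactly the distinction the cancer example turns on.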

A chatbot such as ChatGPT “hallucinating,” or fabricating facts and figures out of nothing, can produce manifold inaccuracies — including, on the more worrying end of things, false claims of sexual assault against a law professor and plagiarism against an author. Maybe we’re willing to tolerate these flaws to some degree in a general-purpose assistant. But what about in a specialized one designed to offer legal advice or diagnose an ailment?

Then there are dangers that perhaps shouldn’t be tolerated, full stop. Think of a gun that determines when to discharge a bullet.

A final way to evaluate an AI’s performance is to weigh it against the alternative: the status quo, usually involving human input. What benefits does it provide, what problems could it cause and how do those stack up against each other? In other words, is it worth it?

AI systems shouldn’t discriminate

This principle nicely ties in with the safety and effectiveness guarantee — impact assessments, for instance, can help guard against discrimination if they measure effects by demographic group.

But to root out bias, it will also be essential to examine the data used to train these algorithms. Consider data drawn from criminal justice databases where higher arrest rates of minorities are baked in. Reusing those numbers to, for example, predict a convict’s chances of recidivism could end up reinforcing racist policing and punishment.

Data should be representative of the community in which a system will be deployed — for instance, facial recognition models crafted mostly from troves of photos of White men are likely to flop when it comes to identifying Black women.

Data should also be reviewed with an eye toward its historical context. For instance, technology companies that have tended to promote men to higher positions should realize that relying on those past statistics to measure the potential of would-be hires could disadvantage female applicants. With that knowledge, tool designers can correct for disparities in how a system favors or disfavors members of a protected class.
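One common way to operationalize the demographic impact assessments described above is to compare a tool's favorable-outcome rate across groups. A hedged sketch, using toy hiring-tool data and the "four-fifths" rule of thumb drawn from U.S. employment-selection guidelines (the threshold here is illustrative, not a legal determination):

```python
# Sketch: flagging disparate impact by comparing selection rates
# across demographic groups. Data below is invented.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, selected) pairs -> {group: rate}."""
    counts, chosen = defaultdict(int), defaultdict(int)
    for group, selected in records:
        counts[group] += 1
        chosen[group] += 1 if selected else 0
    return {g: chosen[g] / counts[g] for g in counts}

def disparate_impact_ratio(rates):
    """Lowest group rate over highest; below ~0.8 warrants review
    under the common four-fifths rule of thumb."""
    return min(rates.values()) / max(rates.values())

# Toy outcomes from a hypothetical résumé-screening tool:
records = ([("A", 1)] * 6 + [("A", 0)] * 4 +
           [("B", 1)] * 3 + [("B", 0)] * 7)

rates = selection_rates(records)       # {"A": 0.6, "B": 0.3}
ratio = disparate_impact_ratio(rates)  # 0.5 -> flagged for review
```

A check like this is only a starting point; it detects a disparity but says nothing about its cause, which is why examining the training data itself remains essential.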


AI systems should respect civil liberties

As always when personal data is involved, privacy is key. Essentially, what companies can and can’t do should depend on what consumers would reasonably expect in that particular context. For example, it makes sense that Netflix is vacuuming up viewer preferences to finely tune its recommendation algorithms; it would make a lot less sense if Netflix collected the precise locations of those viewers to build a tool unrelated to streaming. It would make even less sense if Netflix sold its viewers’ preferences to a Cambridge Analytica-style political consulting firm.

Then there’s the question of privacy in how these systems are used. The Chinese Communist Party has notoriously installed more than 500 million cameras around the country; it’s impossible to hire 500 million people to monitor them, so AI does the job. President Xi Jinping’s regime is pushing to sell these tools to other governments around the world. The United States can’t allow these violations of privacy to happen here — but the lack of oversight that has already allowed firms such as Clearview AI to scrape more than 30 billion images from social media sites and sell them to law enforcement agencies across the country suggests we’re not as far off as we might like to think.

AI systems should be transparent and explainable

People also need to know when they’re interacting with an AI system, period — not only so that no one falls in love with their search engine but also so that anyone injured by one of these tools has an avenue to seek recourse. That’s why it’s important for AI systems to explain both that they’re AI and how they work.

This second part is far more difficult. Making accuracy and impact assessments accessible or disclosing the sources of training data is one thing. But detailing the causes of an AI tool’s behavior is another altogether — because, in many cases, those who build and run these tools have no way to peer into the black boxes they’re overseeing. The matter of “explainability” is front of mind for researchers today, but that does little good in a world where these systems are already making all sorts of decisions about our lives.

The extent to which a given tool must be able to explain itself should vary based on what it’s doing for or to us. Think, for example, of a toaster: Do consumers really need to know why this appliance determines a piece of bread has reached optimal crispness? Then think of a lending algorithm. The individual whose ability to rent an apartment depends on the algorithm’s answer has a right to understand the reasons they have been turned away. What about the person whose medical claim has been rejected by an insurance company without anyone even looking at the patient file? And somewhere in between lie the systems that social media sites rely on to feed posts and other content to their users.
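For the lending example above, "explainability" at the level a rejected applicant needs can be quite simple if the scoring rule is transparent. A sketch in which every feature, weight, and threshold is hypothetical:

```python
# Sketch: a transparent scoring rule that reports which factors drove
# a denial, so the applicant gets concrete reasons. All features,
# weights, and the threshold are hypothetical.

WEIGHTS = {"income_ratio": 40, "on_time_payments": 35, "years_at_job": 25}
THRESHOLD = 60

def decide(applicant):
    """Return (approved, score, reasons).

    applicant: feature values normalized to [0, 1].
    reasons: factor names ordered weakest-first, i.e. the factors
    that contributed least to the score.
    """
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    score = sum(contributions.values())
    reasons = sorted(contributions, key=contributions.get)
    return score >= THRESHOLD, score, reasons

approved, score, reasons = decide(
    {"income_ratio": 0.5, "on_time_payments": 0.4, "years_at_job": 0.2}
)
# approved is False here, and reasons[0] names the weakest factor,
# giving the applicant a specific explanation for the denial.
```

A deep neural network offers no such itemization out of the box, which is exactly the gap between systems that can explain themselves and those that, for now, cannot.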

Just as the possibility of serious harm posed by some AI systems might be too great for them to be deployed without meeting stringent standards for safety and effectiveness, the inability of the same sort of systems to explain themselves might mean that, for now, they should remain undeployed.

Putting principles to work

AI isn’t one thing — it’s a tool that allows for new ways of doing many things. Applying a single set of requirements to all machine-learning models wouldn’t make much sense. But to figure out what those requirements should be, case by case, the country does need a single set of goals.

Putting these principles — versions of which appear in the White House’s nonbinding Blueprint for an AI Bill of Rights as well as a framework from the National Institute of Standards and Technology — into practice won’t be easy. Even the question of who should be responsible for enforcing whatever regulations emerge is uncertain: a new federal agency? Existing agencies using their existing authorities? Or perhaps there’s a middle ground, a sort of coordinating body that reviews agency standards, reconciles authorities where they overlap and fills in any gaps.

There’s the matter of applying today’s laws to AI systems, and the matter of asking whether those systems create the need for new laws in new areas. What do we do about legal liability for speech that’s algorithmically generated? What about copyright? None of this is even to mention the potential for job loss at a massive scale in some areas (many experts point to accounting as an easy example) and perhaps equally dramatic job creation in others (humans will need to train and maintain these models, after all). That subject will likely need addressing outside the scope of safety regulations. And finally, there’s the matter of ensuring that any barriers to AI innovation don’t lock in the market power of today’s tech titans that can most easily afford compliance and computing costs.

The catch with any stringent AI regulation is that these technologies are going to exist regardless of whether the United States allows them to. If this country bows out, it will be countries such as China that build them, without the commitment to democratic values that our nation could ensure. Certainly, it’s better for the United States to be involved and influential than to sacrifice its ability to point this powerful technology in a less terrifying direction. But that’s exactly why these principles are the essential place to begin: Without them, there’s no direction at all.

The Post’s View | About the Editorial Board

Editorials represent the views of The Post as an institution, as determined through debate among members of the Editorial Board, based in the Opinions section and separate from the newsroom.

Members of the Editorial Board and areas of focus: Opinion Editor David Shipley; Deputy Opinion Editor Karen Tumulty; Associate Opinion Editor Stephen Stromberg (national politics and policy); Lee Hockstader (European affairs, based in Paris); David E. Hoffman (global public health); James Hohmann (domestic policy and electoral politics, including the White House, Congress and governors); Charles Lane (foreign affairs, national security, international economics); Heather Long (economics); Associate Editor Ruth Marcus; Mili Mitra (public policy solutions and audience development); Keith B. Richburg (foreign affairs); and Molly Roberts (technology and society).