THE UNITED STATES sat by and watched five years ago as the European Union passed the General Data Protection Regulation, setting a standard for data privacy that has come to govern companies around the world. Now, the same thing appears to be happening with respect to artificial intelligence.
The E.U. last week revealed a detailed proposal of rules for AI — all of it. The aim is a noble one: ensure that the next frontier in technological development is explored ethically, by furthering powerful tools’ positive uses and guarding against the insidious ways in which they are being, and could be, exploited. This is essential work for democracies today; it’s up to them, after all, to present an alternative to the Chinese surveillance state. The European plan offers the White House an opportunity to engage in crafting this alternative. President Biden must not let it pass by.
There’s much to like in the E.U.’s outline, which wisely refuses to treat AI as a monolith. The approach is risk-based: Some AI makes life easier without presenting much downside. Think, for instance, of a customer service bot on a retail website. Some AI could cause harm should it malfunction, but it exists in industries that are already regulated, from machinery to toys to medical devices. And some AI could cause harm and is entirely novel, such as algorithms that tell employers whom to hire or judges whom to grant bail.
The E.U. would impose different obligations on different categories, with higher-risk systems subject to more stringent restrictions: For the most part, providers must conduct self-assessments of their systems’ “trustworthiness” by, for instance, testing training data for errors or bias. In many cases, a human must remain in charge, and firms face heavy fines if they fall short. Some uses of AI deemed inherently malicious, such as government “social scoring” systems, are outlawed altogether.
This generally makes sense, but the devil is in the definitions. What should qualify as high-risk as technology changes? And more troublingly, within the banned applications, what does it mean for a system to deploy “subliminal techniques” or “exploit vulnerabilities” to cause “physical or psychological harm”? Many would argue that most AI, and certainly those algorithms that undergird top social media sites, does precisely this.
These problems and more may well be addressed over the years it could take for this draft to become final. The United States should play an active role in addressing them: helping to shape the E.U.’s answers in a manner that enshrines liberties without stifling innovation. The United States should also let the E.U.’s answers inform its own. Many of this nation’s most successful companies will be subject to these rules because they serve E.U. citizens. Yet, though the Federal Trade Commission recently showed interest in policing deceptive algorithms, and a hodgepodge of bills is floating around Congress, this country has presented no comprehensive vision for AI. Without any such vision, other governments will fill the void.