Meet the scientist teaching AI to police human speech

Alexis Conneau’s work has helped Facebook and Google build artificial intelligence systems that can understand dozens of languages with startling accuracy

(Jean-Francois Podevin for The Washington Post)

In his tiny corner of a seemingly endless expanse of workspace, the artificial-intelligence research scientist Alexis Conneau tapped his keyboard for a few seconds and then, suddenly, there everything was: hundreds of billions of words, an immense torrent of human knowledge, raining down a window on his MacBook Pro screen.

For years, automated “crawlers” had been vacuuming the Internet — its old poems and angry comments and dessert recipes and everything else — into this gargantuan database in 100 languages: Arabic, Malagasy, Urdu and dozens more. Conneau couldn’t read it himself. But his creation, XLM-RoBERTa, had read it many times: This was its brain matter, the code with which the machine could, in some way, learn to emulate how people speak.

It was a quiet afternoon last year inside Facebook’s secretive AI lab in Menlo Park, Calif., where the trillion-dollar company had assembled a 60-building compound of glass edifices with giant monitors and catered lunch. Around Conneau’s desk, in the edifice known as MPK 21, the tech company’s young and extraordinarily well-paid workforce walked beneath snaking rivers of blue Ethernet cables and past walls of corporately acquired pop art: “You Already Know What You Need,” “Don’t Be Afraid,” “It Is Possible.”

As he looked at his screen, Conneau couldn’t help but feel that the inspirational company posters were right. The machine-learning system he had helped build could understand dozens of languages better than the best systems of yesteryear, even those trained in a single tongue.

“We were trying to understand all languages … doing things we didn’t really believe were possible a few years back,” Conneau said. But “this potential dream, it is actually something that we have right now.”

It can be hard to overstate how far artificial intelligence has advanced in the last few years. Supercharged by tens of billions of dollars in yearly investment, massive volumes of data, seemingly unlimited computing power, and a race between global tech giants whose clout rivals that of nation-states, the field has outgrown its academic roots to become a corporate juggernaut, with machines themselves rewriting how technology gets made.

Yet at its most basic level, the software that powers this artificial intelligence revolution is being built by people like Conneau, a brainy machine-learning obsessive who sees this race to the future as a set of engineering problems destined to be solved — and who believes, like many AI researchers, that the ethical questions piling up around the field are better decided by someone else.

Ten years ago, Conneau was a classically trained French math nerd who loved arcane physics and barely knew how to write the code that tells computers what to do. Today, he works at the bleeding edge of an industry working to design imitation minds; in April, he jumped from Facebook to Google, where at the age of 30, as a staff research scientist, he makes nearly a million dollars a year.

But as the field has progressed, researchers like Conneau have found themselves in a moment of intense global upheaval that they can’t engineer themselves out of, with disconcerting questions being asked both about the tech conglomerates’ grip on the research-lab development of AI and about the corrupting power of its real-world use.

Nearly everything is now up for debate: the racial bias of its algorithms and development teams; the anxieties over intellectual freedom and corporate suppression; and even more existential questions about the field’s gob-smacking imbalances of money, energy and power.

The industry crisis has shaken confidence in a groundbreaking field famous for its utopian optimism, pitting researchers, thinkers, executives and engineers against one another at a time when competitive tensions are at an all-time high. The winners of these debates will probably shape a technology that is already changing the lives of millions of people worldwide.

Conneau has helped spearhead a category of AI known as natural-language processing that has redefined how we communicate on the Web. He has led research into AI that Facebook and others used to refine their automatic blocking systems for bullying, bigotry and hate speech, tackling the coarsening influence of the Web faster and more rigorously than any human moderator ever could.

But Conneau does not carry himself like a foot soldier in a battle for the future of the Web. He listens to trance music while coding and twiddles his pen while lost in thought, tallying his workday feats of AI training — such as “Prepare 1 billion sentences” — on Post-it notes he sticks to his laptop screen. No one asks him about big company policy decisions, even though systems like his can give those policies their mechanical might, making it possible for tech giants to even attempt to comprehend what so many people see online.

In a perfect world, Conneau believes his work could empower an automated speech watchdog to shield people from the worst of humankind and build a kinder, happier Internet. These systems, he believes, will be critical to navigating the signature tug-of-war of online speech: encouraging free expression while suppressing bigotry and bile. The challenge, to him, feels momentous — and deeply personal.

“The stuff we’ve been doing on hate speech and bullying classification: There is no other way than an automatic way to solve these problems,” he said. “There is no other way.”

But in his critics’ world, what they would call reality, his work will become just another tool for deep-pocketed companies to misuse: a biased and invasive force empowering more targeted advertising, more automated surveillance and more mass deception on a global scale.

In that world, Conneau and other AI researchers are sometimes held up as just mercenaries, paid extravagant amounts of money that the companies earned from practically everyone on the Web — captive audiences that provide clues about themselves with every search, click and scroll.

These are the debates that Conneau, and many like him, find extremely uncomfortable: He is a machine-learning scientist, not a politician or policymaker. And while he said he thinks often about the big societal and ethical questions raised by his work, he believes answering them is a task best left to people with more global context and public power.

Yet these questions increasingly overshadow the entire field. As AI researchers’ work gradually reshapes the foundation of a new society, are they — are we — prepared for the world their tools might help create?

***

Growing up in northwest France, Conneau fell in love with the mind-bending properties of abstract math — how it broke existence into its most elemental parts: numbers, patterns, ideas. His father grew up on a farm and later managed a local bank branch, but Conneau credits his stay-at-home mother with pushing him and his brothers to work so hard in school, which in the French tradition involved punishing levels of advanced science and math.

Conneau studied pure mathematics at a prized research university outside Paris, École Polytechnique, and pursued advanced degrees at two of France’s most selective grandes écoles. He didn’t really care about computers, but he liked thinking about numbers, and the tech industry had turned playing with numbers into a multibillion-dollar art form. So when he graduated, he chose tech.

By 2012, the once-sleepy field of machine learning was experiencing a renaissance. A new wave of “neural networks” — chunks of software loosely modeled on brain neurons and the chemical interplay that gives us thoughts and memories — had begun dominating long-established methods of recognizing patterns and imagery.

Researchers were using superpowered graphics cards, the chips then relegated mostly to video games, to sprint past the old bottlenecks of computing. The Internet was exploding. Money was flowing. Everything felt exciting, visceral, real.

In 2015, Conneau joined Facebook’s new AI lab in Paris, launched amid a global colonization effort by America’s top tech giants: Everyone wanted to scoop up the most talented research and engineering students around the world before they could graduate.

Facebook in particular believed it was sitting on an AI gold mine — hundreds of millions of photos, including people’s faces, that were being posted to the social network every day. The right algorithms could use it all to craft the next lucrative technological masterpiece.

Advances in AI “computer vision,” which developed software that could learn what a tiger or tree or stop sign looked like, were also pushing other fields: Couldn’t the same techniques of training and repetition work with written languages too?

“It was fascinating to be able to model text as you model any other thing.”
— Alexis Conneau

Fueled by Internet-sized libraries of text, AI models could start to guess how different words were used: what concepts they related to, what other ideas they resembled, what feelings they seemed meant to convey. The voluminous written works of human civilization could be quantified and analyzed in a mathematical way by models that could scout for patterns and predict future use.

The idea had already begun reshaping the way people interact with the Web: It is in the genetic code of nearly every smartphone auto-correct, personalized recommendation and search result we see today. The AI systems don’t know the words’ definition or significance, like people do. But what they lose in meaning and reasoning, they gain in pattern recognition: the ability to guess what other words or phrases might come next. “It was fascinating to be able to model text as you model any other thing,” Conneau said.
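
The mechanics can be sketched with a toy version of that guessing game. The snippet below is a minimal, hypothetical illustration in Python, not code from Facebook or Google: it simply counts which word tends to follow which in a tiny sample of text and guesses from those counts. Production systems use neural networks trained on billions of sentences, but the basic move, spotting patterns in past text to predict what comes next, is the same.

```python
# A toy "language model": count which word follows which in a tiny corpus,
# then guess the most likely next word from those counts. Real systems use
# neural networks over billions of sentences, but rely on the same idea of
# pattern recognition over observed text, with no grasp of meaning.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count bigrams: how often each word is followed by each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("sat"))  # 'on'
print(predict_next("the"))  # 'cat' (ties broken by first appearance)
```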

It had all felt conceptual then. But in 2016, Conneau faced a staggering loss: His wife’s younger sister committed suicide at the age of 21. She had been bullied relentlessly at school and on the Internet, including on the social network he now worked for. The family was devastated.

“There used to be a time when [the bullying] stopped, right? It stopped at 5 p.m., or … whenever you stopped school, and then that’s it, right?” he said. “Now it can happen to you 24/7, wherever you are; just a little message reminding you how worthless you are. One of the most painful parts is to see families broken by this. I’ve seen that with my own eyes.”

***

Facebook funds a ring of research labs — in New York, Pittsburgh, Seattle, London, Montreal, Paris and Tel Aviv — that have chased some of the wildest “moonshot” ideas in the field, from virtually simulated “AI habitats” to six-legged spider robots. But its AI Applied Research group, which Conneau began contributing to, developed products for the here and now: Software to make online ads grabbier, news feeds stickier, and global audiences that much more compelled to use Facebook in their daily lives.

Before the coronavirus shut down most of Silicon Valley, he had worked in a cavernous office at the center of Facebook’s 60-building compound in Menlo Park. His building, MPK 21, overlooked the salt marshes of San Francisco Bay and had been designed as a mix of fortress and fantasyland, with foxes scampering along a half-mile walking trail on its tree-lined rooftop park.

It was there, in the lab, that Conneau dove headfirst into refining AI that could plumb the depths of human speech. He and his fellow researchers published papers with names such as “Word Translation Without Parallel Data” and “Cross-lingual Language Model Pretraining” that helped push the state-of-the-art forward on “unsupervised machine translation”: software that could flit between languages without human intervention or a massive trove of rigorously labeled speech data.

“Maybe there is one message that we can classify [as harmful], that this person doesn’t receive. Maybe that totally changes their day, or changes their life.”
— Alexis Conneau

Each expanded on a single idea: Instead of defining words by their meanings, as people do, neural networks could transform words into blips in a vast multidimensional space. Each of these “embeddings” would be defined by its relation to others — “boy” to “man,” “girl” to “woman” — based on massive surveys of how the words had been used by real people around the Web. Instead of muddled definitions, words would become numerical calculations, the kind computers are quite good at. Language, in other words, would become math.
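
A rough way to picture those embeddings is as lists of numbers whose geometry encodes relationships. The sketch below uses hand-invented three-number vectors purely for illustration (real embeddings have hundreds of dimensions and are learned from data, and every value here is made up); the arithmetic, though, is the kind a trained embedding space supports: subtract “boy” from “man,” add “girl,” and the nearest stored word is “woman.”

```python
# Toy word embeddings: each word becomes a point in space (3 dimensions here,
# with made-up values; trained embeddings use hundreds of dimensions).
# Relationships show up as directions: "boy" is to "man" as "girl" is to "woman".
import numpy as np

vectors = {
    "boy":   np.array([0.9, 0.1, 0.2]),
    "man":   np.array([0.9, 0.1, 0.8]),   # "boy" plus an "adult" direction
    "girl":  np.array([0.1, 0.9, 0.2]),
    "woman": np.array([0.1, 0.9, 0.8]),
    "dog":   np.array([0.5, 0.5, 0.2]),   # an unrelated word for comparison
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest(query, exclude):
    """Return the stored word whose vector points most nearly the same way."""
    return max((w for w in vectors if w not in exclude),
               key=lambda w: cosine(vectors[w], query))

# man - boy + girl lands on "woman": language, in other words, as math.
query = vectors["man"] - vectors["boy"] + vectors["girl"]
print(nearest(query, exclude={"man", "boy", "girl"}))  # woman
```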

Conneau and his team took the breakthrough even further: Could those sprawling maps of a single language share similarities with all the others? He was especially energized by its potential impact on “low-resource” languages with smaller digitized histories, often ignored by tech giants in the West. The same caliber of hate-speech blockers shielding English speakers in the United States could also help protect teenagers speaking Swahili or Uzbek.

In 2019, Conneau and other researchers began training an AI model to sift through different languages simultaneously, speed-reading movie subtitles, U.N. meeting transcripts and other written works in different languages, pairing the sentences together as it went. Their theory was that the sentences could become bridges between languages, letting the model learn and generalize the universal concepts binding languages together — the basic code of the words themselves.

Conneau’s model, XLM-R (short for XLM-RoBERTa), was built on “the Transformer,” an AI breakthrough developed in 2017 best known as the driving force for systems, such as BERT, used in Google’s English-language search results.

“Training” the system required a slog of preparatory work: coding algorithms that follow sets of rules; gathering and processing data into a form the machine can read; designing tests; analyzing the results. The model was spoon-fed hundreds of millions of webpages, with each word stripped into smaller sub-word chunks for embedding — 295 billion in all. To make sure the computer wouldn’t later just copy the text it had seen during training, they also “masked” the text, removing words here and there, forcing the computer to fill in the gaps.
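
That masking step can be sketched in a few lines. This is a simplified, hypothetical version of the preprocessing, not Facebook’s pipeline: real training masks sub-word chunks rather than whole words and runs over billions of sentences, but the principle of hiding a fraction of the text and asking the model to reconstruct it is the same.

```python
# Simplified masked-language-model preprocessing: hide roughly 15 percent of
# the tokens and remember what was hidden, so the model can later be scored
# on how well it fills the gaps. Real pipelines mask sub-word chunks instead.
import random

MASK = "<mask>"
MASK_RATE = 0.15

def mask_sentence(sentence, rng=random.Random(0)):
    tokens = sentence.split()
    masked, answers = [], {}
    for position, token in enumerate(tokens):
        if rng.random() < MASK_RATE:
            masked.append(MASK)
            answers[position] = token   # what the model is trained to predict
        else:
            masked.append(token)
    return " ".join(masked), answers

masked_text, answers = mask_sentence("the quick brown fox jumps over the lazy dog")
print(masked_text)   # e.g. "the quick brown <mask> jumps over the lazy dog"
print(answers)       # e.g. {3: "fox"}
```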

To run the calculations, Conneau depended on an immense data-center cluster of processors able to run trillions of calculations every second; XLM-R’s training for “unsupervised cross-lingual representation learning” relied on 500 Nvidia Tesla V100 graphics cards, each costing more than $8,000. Once the cluster was running, all the researchers could do was wait, checking the training progress on their phones.

When Conneau and his colleagues ran the system through some of the language-understanding tests that international AI researchers use for benchmarks, they were stunned: The 100-language model’s accuracy closely matched those of its specialized single-language rivals. That meant the world’s biggest social network — with its core business model of news-feed algorithms, relationship graphs and targeted ads — could begin using the system to scan every single post, in tens of milliseconds, uploaded by its 3 billion users every day.
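
The model was later released publicly, so anyone can poke at what it learned. The sketch below assumes the open-source Hugging Face transformers library and its released xlm-roberta-base checkpoint: the same network proposes fillers for a blanked-out word in English and in French.

```python
# Querying the publicly released XLM-R checkpoint with the Hugging Face
# transformers library: one network, trained on roughly 100 languages,
# proposes fillers for a masked word in any of them.
from transformers import pipeline

fill = pipeline("fill-mask", model="xlm-roberta-base")

for prompt in ["The capital of France is <mask>.",
               "La capitale de la France est <mask>."]:
    best = fill(prompt)[0]  # the highest-scoring completion
    print(prompt, "->", best["token_str"], round(best["score"], 3))
```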

In April, Conneau moved to Google as a staff research scientist. He had been impressed for years by the company’s natural-language research and — if he and his wife hadn’t gotten the opportunity, and the visa, to come to the United States through Facebook — he believes he would have ended up in Toronto instead of California a few years ago, working with “AI godfather” Geoff Hinton and the company’s research team, Google Brain.

One month later, Facebook unveiled one of his last contributions there, a new speech-recognition system he had helped design with other researchers, wav2vec-U. The tool held a critical advantage over its rivals: It had learned not by reading lots of human-transcribed speech but by listening to tons of audio and figuring out the words itself.

The “unsupervised” learning technique seen in wav2vec-U had long been AI researchers’ holy grail, because it could work on the many languages for which giant vats of hand-labeled training data don’t exist. But a lot of scientists, including Conneau, had been skeptical that it could ever truly work without all the human hand-holding these systems had always required.

“Nothing was working in the beginning, for months,” he said. But in recent tests, the system, which has been trained in Swahili, Kyrgyz and Tatar, has proved nearly as accurate as the meticulously trained systems that were state-of-the-art just two years ago.
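
The trained wav2vec-U checkpoints themselves are not packaged for one-line use, but the finished product behaves like any other speech recognizer. As a rough stand-in, the sketch below calls the earlier, supervised wav2vec 2.0 model that Facebook released publicly on Hugging Face; the audio file name is hypothetical.

```python
# A stand-in sketch: transcribing an audio clip with Facebook's released
# supervised wav2vec 2.0 checkpoint via Hugging Face. wav2vec-U learns its
# recognizer without transcripts, but once trained it is used the same way.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
result = asr("interview_clip.wav")  # hypothetical local audio file
print(result["text"])
```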

Facebook and other companies have said AI represents humanity’s best chance at confronting the most divisive, hateful and harmful online speech. Facebook said in May that its AI systems now proactively detect 97 percent of the hate speech that ends up getting removed from the site before it’s even reported, and tens of millions of takedowns in recent months have been of rule-breaking content in non-English languages.

Facebook says its AI now can analyze a post’s images, video, text and comments together, instead of in isolation, to better assess whether a meme is, say, a rude attack or a friendly joke.

It’s that promise of a better world, Conneau said, that keeps him going. “Maybe there is one message that we can classify [as harmful], that this person doesn’t receive. Maybe that totally changes their day, or changes their life,” he said. “When I was in school, I was solving problems that I knew had a solution. But in research we are trying to solve problems for which we have no idea whether there is a solution or not.”

***

But should we trust him?

While Conneau was running benchmarks on AI models, the field was becoming, to some, a symbol of the dangerous hubris of modern tech. The systems its researchers built were facing criticism for being easily abused, warped by bias or too powerful to control. And it did not help that the most influential players in AI are also the biggest tech companies, whose cash-rich recruiting of top talent has meant that most major advances in the field become new products, not public goods.

U.S. regulators have voiced interest in more aggressively policing how supposedly neutral algorithms can subtly reinforce old prejudices that could block people from housing, jobs and health care. And in draft regulation released in April, the European Union said it might ban AI use in “high risk” areas such as “indiscriminate surveillance” and “algorithmic social scoring.”

But the companies themselves, after years of funding divisions devoted to building “ethical AI,” are also facing questions of whether they can be trusted to support research that might make them look bad or undermine the technologies they want to sell.

Some of Google’s most notable researchers, such as Timnit Gebru and Margaret Mitchell, have resigned or been fired in recent months amid an internal outcry about how the company treats people of color and influences supposedly “independent” research. Samy Bengio, who had managed hundreds of top AI researchers during 14 years at Google, also jumped ship to Apple this spring.

For researchers like Conneau, the Internet had always been an almost miraculous resource, fully loaded with machine-readable text that could be poured into fast-learning software. And yet anyone who has been on the Web knows that so much of it is not the kind of language you would want to show someone just beginning to learn to speak. As Gebru and other researchers have argued, all the horrible things people say to one another — racism, sexism, violent threats — can also end up baked into AI systems’ brains for them to process and duplicate.

The most optimistic AI researchers believe this is a problem of engineering: Train the systems on “good” speech — more public-radio archives, fewer outrage blogs — and the toxic patterns will fade away. But Gebru and other researchers say the danger of these AI “parrots” could undermine the entire enterprise, and that downplaying the problem could be disastrous, given that the systems will increasingly shape how the modern world lives and communicates.

A great AI blocker could mean fewer people would see hateful insults in their inboxes, and fewer paid content moderators might have to look at videos of gory violence and sexual abuse all day. But today’s systems are imperfect: Researchers at the University of Oxford and the Alan Turing Institute in May tested more than two dozen language AI systems trained to detect hate speech and found a number of “critical model weaknesses,” either because they missed problematic content or flagged harmless speech.

And while algorithms can catch problematic content, they can promote it, too, because they’ve been designed to boost user engagement and respond to how people react. The systems could be used to amplify scams, fuel harassment campaigns or spit out long pieces of text, such as fake news stories, that would appear to have been written by a human hand. They could also be abused for censorship: A university research team in China, where the government bans speech it deems “subversive,” said in April that its text-censoring AI could filter out “sensitive information from online news media” with more than 90 percent accuracy.

Asked about the ongoing anger at Google, Conneau expressed discomfort: These are new conversations for many in Silicon Valley, a place that for him still rarely feels like home. He says he just wants to stay focused on his work and away from the personal dynamics ripping through his new workplace.

To critics, this kind of researcher self-appraisal can come across as self-serving and naive. But to Conneau, it’s a reflection of the trade-offs one has to deal with in a field that has become a flash point for some of the most heated debates in AI.

“In scientific terms, some would say it’s an ‘optimization under constraint’: You maximize open discourse under the constraint that hateful content is minimized. Or equivalently you minimize hate speech under the constraint that free speech is maximized,” he said.

“Those optimization problems are difficult. I like to think that we as a society will get closer to an optimal solution to such problems by continuing research work,” he added. “I don’t think this is naive, but pragmatic. And we might as well be optimistic.”
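
His framing can be written down loosely in the language of constrained optimization. The formulation below is only a sketch of the idea, with invented symbols, not a formula Conneau or any company actually uses: choose a moderation policy p that preserves as much legitimate speech S(p) as possible while holding hate speech H(p) below a tolerance, or swap the roles and minimize H(p) subject to a floor on S(p).

```latex
% A loose formalization of the trade-off described above (hypothetical symbols,
% not a production objective).
\max_{p} \; S(p) \quad \text{subject to} \quad H(p) \le \varepsilon
\qquad \text{or} \qquad
\min_{p} \; H(p) \quad \text{subject to} \quad S(p) \ge s_{\min}
```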

“You give it an input. It’s producing an output. It’s just a machine. Right?”
— Alexis Conneau

For heads-down engineers, these questions of ethics and abuse are becoming increasingly difficult to dodge. The industry’s top machine-learning conference, NeurIPS, where thousands of elite researchers every year compete for attention and awards, announced last year that all submitted work would for the first time need to analyze “not only the beneficial applications … but also potential nefarious uses and the consequences of failure.”

The change was not universally welcomed, and such shifts don’t happen overnight. One deep-learning researcher tweeted that most researchers wouldn’t “do good enough scholarship to say something meaningful” about the technology’s societal impacts. And the Partnership on AI, a nonprofit group founded by Silicon Valley’s biggest names, recommended in May that AI researchers and their bosses “normalize” talking about the real-world effects of their work.

“Some researchers saw the consideration of downstream consequences as outside their remit and believed their focus should be on scientific progress alone,” the industry report stated. “Others expressed a sense of personal or professional responsibility … but felt ill-equipped or confused when it came to what, if anything, to do about it.”

There are some clues that AI researchers are now more often reckoning with what they’ve built. In May, when Google debuted the new language-AI system LaMDA, which it said had been trained to emulate the meandering style of human dialogue that leads most by-the-book chatbots to awkwardly fail, the company acknowledged that such systems could also learn to internalize online biases, replicate hateful speech or misleading information, or otherwise be “put to ill use.” The company said it was “working to ensure we minimize such risks,” though some, including Gebru, were skeptical, panning the claims as “ethics washing.”

As their work progresses, Conneau said he expects that researchers will find themselves increasingly at the center of these battles over the evolution of online dialogue and the limits of learning machines. “We are all writing this history, somehow,” he said. It’s “the role of everyone, society, democracy, including public power, to decide.”

He is hopeful about the future, though he is quick to note one sticking point about the technology that he believes most people don’t understand. Even with all of its advances, the best speech-understanding models still can’t grasp sarcasm, be funny or talk philosophically. The AI is not getting smarter, he said. It’s just reading, processing and manipulating the data in more convincingly human ways.

“We’re not creating something to me that feels particularly intelligent,” he said. “There are things that are happening that are really blowing my mind. … But when we train a language model, it’s able to produce text. It’s not thinking.”

He knows some people see in AI a source of fear: the threat of a superintelligence that could conquer us all. But what some regard as mysticism, he sees as cold, pure math — a problem, numerically solved. “You give it an input. It’s producing an output,” he said. “It’s just a machine. Right?”
