The Washington Post
Democracy Dies in Darkness
The Technology 202

A newsletter briefing on the intersection of technology and politics.

Former Google CEO speaks out on working with the Pentagon


Good morning! I'm Gerrit De Vynck, a tech reporter at The Post who covers Google and the algorithms that affect our lives. I'm filling in for Cristiano today. In today's newsletter: new revelations about Facebook's algorithm and a delay in the company's plans to roll out encryption. First up:

Google's former CEO says the military must invest in artificial intelligence

Since stepping down as chairman of Google in 2019, Eric Schmidt has funded oceanographic research, bought some expensive real estate and even had his personal life splashed in the New York tabloids. But the biggest focus of his post-Google career has been on trying to get the U.S. government, especially the military, to adopt new technologies such as artificial intelligence.

In Schmidt’s view, artificial intelligence will change everything about how the world works, especially global competition between nations. He believes if the U.S. wants to counter China’s rising power, it needs to invest more in basic science and encourage tighter collaboration between the military and the country’s biggest tech companies, which employ the smartest tech minds. The military has been a major customer and funder of tech companies for decades, and Schmidt argues that will also apply to the next generation of tech, including artificial intelligence. He’s made his case at numerous events, chaired a government commission on national security and AI, and most recently put out a book with Cold War diplomat Henry Kissinger and computer vision pioneer Daniel Huttenlocher.

But his position is at odds with some Google employees. In 2018, thousands of them signed a petition asking the company to stop its work on Project Maven, a Pentagon initiative that would have used Google’s image recognition tech to scan reams of drone and satellite images, potentially helping the Defense Department choose new targets to bomb. Google dropped the project and ended up putting together a set of AI principles that included a commitment not to build AI weapons.

“I was chairman of Google at the time this happened, but I was not allowed to participate because I had a military role. I disagreed with the decision that Google made,” Schmidt said in a wide-ranging interview he and Huttenlocher gave The Technology 202 ahead of the book’s launch. Either way, other companies stepped in to take over the work, he said. “Whatever they’re not willing to work on, there will be another company who will be willing to work on it for the military.”

Despite the flare-up over Maven and the promise not to put its AI tech into weapons, Google’s current executives are keen to work with the Pentagon. The company is eager to bid on the military’s latest effort to get more of its systems on the cloud, and Google says its AI is already being used to help the Air Force predict when to replace aircraft parts. 

In their book, Schmidt, Kissinger and Huttenlocher argue that AI could affect conflict the same way technologies like mass production and railroads made World War I particularly devastating.

“Developments in the civilian economy that had massive implications on the ability to wage war that really should have been considered as people entered into conflict, weren’t,” Huttenlocher said in the interview. “The worst war known to humankind, up to that point, came out of it.”

In a future conflict, AI systems that can make decisions faster than humans might ratchet up a fight before the generals and politicians in charge even know what is happening. Because current machine learning algorithms operate in ways that are difficult or impossible to understand, a country that deploys a military AI may not even know exactly what it’s capable of ahead of time, the trio write in their book.

Despite the cautious note, Schmidt has already shown his support for the military doubling down on AI. The final report from the AI and national security commission Schmidt chaired argued that the U.S. should continue investing in AI weapons and said the military already had the checks in place to ensure humans always had the final say over whether to take a life or not. That’s at odds with some human rights campaigners and activists, who have been pushing for a broad ban on all AI weapons for years now. 

Schmidt is skeptical of other regulations on AI, too. Earlier this year, European lawmakers proposed strict new rules for AI systems, including the prospect of making companies explain how their AI algorithms make decisions, something computer scientists say might not always be possible. “It effectively stops research in that area for a very long time,” Schmidt said. “You should regulate last, not first.”

Our top tabs

Facebook’s race-blind policies came at expense of Black users, documents show

Although the majority of the “worst of the worst” language on Facebook was directed at minorities, top executives at the social media network feared that an overhaul of its algorithms would have tilted the scales for some minority groups over others, Elizabeth Dwoskin, Nitasha Tiku and Craig Timberg report.

“The previously unreported debate is an example of how Facebook’s decisions in the name of being neutral and race-blind in fact come at the expense of minorities and particularly people of color,” Elizabeth, Nitasha and Craig write. “Far from protecting Black and other minority users, Facebook executives wound up instituting half-measures after the ‘worst of the worst’ project that left minorities more likely to encounter derogatory and racist language on the site,” people familiar with the debate said.

The algorithm had its problems. It was “aggressively detecting comments denigrating White people more than attacks on every other group,” according to documents my colleagues obtained. Facebook didn’t tell civil rights auditors about research that found that minority groups were disproportionately harmed by its algorithms, according to Laura Murphy, the leader of the audit.

Facebook spokesman Andy Stone defended the company’s policies and transparency with the auditors. He said the company has made progress on racial issues, but did not implement some of the “worst of the worst” project’s recommendations “as doing so would have actually meant fewer automated removals of hate speech such as statements of inferiority about women or expressions of contempt about multiracial people.” 

Facebook and Instagram owner Meta is delaying the rollout of encryption as default on its platforms

The company won’t add end-to-end encryption as the worldwide default on all of its messaging apps until 2023, Meta global head of safety Antigone Davis wrote in the Telegraph. In the meantime, the company is adding technology to proactively detect suspicious and concerning activity and accounts, giving users more controls and urging users to report harmful posts, Davis wrote. 

The delay comes after criticism by high-profile officials in the United Kingdom over the tech giant’s encryption plans. As U.K. lawmakers consider new legislation to combat harmful online content, Home Secretary Priti Patel has blasted the company’s encryption rollout, arguing that it would enable sexual abuse of children. U.S. officials have also argued that encryption would shield criminals online. Cybersecurity and privacy advocates have pushed back on that argument, saying the increased protection against hacking and privacy invasions outweighs any drawbacks.

Andy Burrows, online policy lead at the National Society for the Prevention of Cruelty to Children, called Meta’s delay “a welcome step, but only if it signals a genuine reset to reflect child safety concerns and isn’t just an attempt to weather current storms.” 

Seniors are juggling surveillance and independence

Seniors have embraced smart home technology as companies like Amazon and Apple unveil new products designed to be less invasive than earlier offerings on the market, Heather Kelly reports. But the technology aimed at the elderly can have its fair share of downsides.

“But the devices, many of which grew out of security and surveillance systems, can take privacy and control away from a population that is less likely to know how to manage the technology themselves,” Heather writes. “The idea of using tech to help people as they age is not a problem, say experts, but how it’s designed, used and communicated can be. Done wrong or without consent, it is one-way surveillance that can lead to neglect. Done right, it can help aging people be more independent.”

Rant and rave

Adele thanked Spotify after the company removed the default setting to shuffle her new album.


Inside the industry

A robot wrote this book review (The New York Times)

The Amazon lobbyists who kill U.S. consumer privacy protections (Reuters)

How Facebook and Google fund global misinformation (MIT Technology Review)

India police charge Amazon execs in alleged marijuana smuggling case (Reuters)

Workforce report

Apple tells workers they have right to discuss wages, working conditions (Reuters)


Adele asked Spotify not to shuffle her carefully curated album by default. The streaming service listened. (Jennifer Hassan)

Before you log off

That's all for today — thank you so much for joining us! Make sure to tell others to subscribe to The Technology 202 here. Get in touch with tips, feedback or greetings on Twitter or email.