with Tonya Riley

Ctrl + N

The hottest trend among Silicon Valley executives is embracing regulation, as long as the rules are written on their terms.

Sundar Pichai, the new chief executive of Google’s parent company Alphabet, was the latest tech titan to jump on board: He wrote in the Financial Times yesterday that “there’s no question in my mind that artificial intelligence needs to be regulated.” 

“There are real concerns about the potential negative consequences of AI, from deepfakes to nefarious uses of facial recognition,” Pichai wrote. “While there is already some work being done to address these concerns, there will inevitably be more challenges ahead that no one company or industry can solve alone.”

The op-ed, which was released the same day that Pichai gave a policy speech in Brussels, signals that Google wants a seat at the table as Europe aims to develop legislation that would address ethics in artificial intelligence. Pichai said he was open to “sensible” policies that take a “proportionate approach” to weigh the potential risks of A.I. against its social benefits. Europe’s privacy law, the General Data Protection Regulation, could serve as a model for A.I.-specific rules, Pichai proposed. 

Pichai's public outreach is a sign of how far the tech industry’s relationship with regulators around the globe has shifted amid the “techlash.” Companies, after years of simply not engaging with regulators, have accepted that regulation is coming no matter what -- and are actively working to shape the debate so they can live with the results. 

Pichai has plenty of company: Apple chief executive Tim Cook took this tack in influencing the global privacy debate, calling on policymakers to focus on the shady practices of data brokers and free tech services that suck up people’s data. Microsoft President Brad Smith has called for facial recognition regulation to ensure that there are guardrails to limit surveillance or bias. Facebook chief executive Mark Zuckerberg published a carefully crafted op-ed last year, calling for multiple forms of regulation as an alternative to breaking the company up. 

Pichai's call for A.I. regulation was panned for being short on specifics. After all, it's just a first step to say it's time for regulation. It's an entirely different challenge to determine what specifically such regulations entail. Jamie Susskind, a fellow at the Leverhulme Centre for the Future of Intelligence, was among the critics making that point.

But Pichai did make clear that one of his top priorities in the A.I. debate is ensuring there's "international alignment" -- and not a patchwork of laws around the world. The European Commission is expected to release a series of proposals to regulate the tech industry, including a white paper due next month on possible rules for A.I., according to the Wall Street Journal. Trump administration officials, meanwhile, have favored taking a lighter-touch regulatory approach when it comes to A.I. 

That has so far left companies to develop their own norms and ethical guidelines as they develop the technology. Google published its own A.I. principles in 2018, which prevent the company from deploying A.I. for weapons or to violate human rights. 

Competition may also be an incentive for Google and other tech titans to ensure governments adopt regulation. After all, companies have a financial motivation to ensure their rivals have to uphold the same ethical guidelines. 

Pichai's warning about facial recognition was timely in light of a New York Times story over the weekend by Kashmir Hill, which revealed that a small start-up has developed a database that can match unknown people to their online photos. The company is already working with 600 different law enforcement agencies. Hill wrote that tech companies including Google that were capable of releasing such a tool refrained from doing so because of ethical considerations. In 2011, Google’s then-chairman said it was the one technology the company had resisted because it could be used “in a very bad way.”


BITS: Apple dropped an effort about two years ago to allow iPhone users to fully encrypt iCloud backups of their devices after the FBI warned the plan would hamper its work, Reuters's Joseph Menn writes. The new revelation sheds light on how far the smartphone maker is willing to go to help authorities, despite its efforts to cast itself as a defender of customers' privacy. 

Apple did, for instance, turn over the Pensacola shooter’s iCloud backups, highlighting how Apple has attempted to work with law enforcement in the case. Just last week, U.S. Attorney General William P. Barr publicly called on Apple to unlock two iPhones used by the gunman who killed three people at a Florida naval base last year. President Trump also piled on, attacking the company for refusing to crack phones. 

Behind the scenes, Apple has given the FBI “more sweeping help,” Joseph reports, that was not related to any specific probe.

One person told Reuters that Apple did not want to risk being attacked by law enforcement for shielding criminals or used as a new excuse to regulate encryption. 

“They decided they weren’t going to poke the bear anymore,” the person said, referring to Apple’s battle with the FBI in 2016 over opening an iPhone used by one of the suspects in a San Bernardino, California, shooting. 

An Apple spokesman declined to comment on the company’s handling of the encryption issue or any discussions it has had with the FBI. The FBI did not respond to requests for comment on any discussions with Apple.

NIBBLES: Companies are struggling to comply with California's new privacy law during its first few weeks in effect, often turning over too little or far too much data, my colleague Greg Bensinger reports.

The law was expected to provide new clarity about privacy practices because it lets consumers ask companies to turn over their data. But compliance remains haphazard, and many people are uncertain how to interpret their data even when they obtain it. 

Both Uber and Lyft failed to disclose all the data they collect on consumers in requests made under the law, Greg found. Lyft declined to comment on why it left out some data. “Our privacy policy and the tools and options we provide regarding data reflect our respect for customer data and privacy,” Lyft spokesman Adrian Durbin told Greg. 

Other companies are using the law's 45-day period to respond to requests as a grace period to figure out compliance, Greg reports. Many companies, including PayPal, still lack a means to request information beyond general customer service channels.

Part of the issue is lack of resources, says Mary Stone Ross, who helped design the legislation and is now associate director of the Electronic Privacy Information Center, a nonprofit research center focused on privacy in Washington. “Compliance is all over the map and will be until the rules are clear and there are actual penalties for noncompliance,” Ross told Greg.

Still, other experts say companies have no excuse. “Companies are required to disclose all the individual pieces of data they collect on consumers, and if they are not releasing that, that’s a violation of the law,” Adam Schwartz, a senior staff attorney with the Electronic Frontier Foundation, a nonprofit digital rights group, told Greg.

BYTES: French President Emmanuel Macron and President Trump have agreed to a cease-fire in a potential trade war over France's 3 percent digital tax hitting U.S. tech giants including Google, Apple and Facebook. The leaders said they are working on agreement to avoid tariffs the Trump administration proposed in retaliation, and the two countries will now extend negotiation deadlines for a global framework on digital taxes until the end of 2020, Ania Nussbaum and William Horobin at Bloomberg report.

Macron announced the agreement on Twitter, and Trump confirmed and praised the move in a retweet, calling Macron's tweet "Excellent!"

The Trump administration just last month proposed tariffs of up to 100 percent on $2.4 billion of French products in retaliation for the digital tax. A report from the U.S. Trade Representative claims the tax “discriminates against U.S. companies, is inconsistent with prevailing principles of international tax policy and is unusually burdensome for affected U.S. companies.” Tech companies heavily lobbied the USTR to express opposition to the tax.


-- Party officials in Iowa are training staffers to combat disinformation ahead of next month's caucuses, my colleague Isaac Stanley-Becker reports. They're trying to avoid a repeat of the chaos and confusion on social media that dominated the process four years ago — and was subsequently exploited by Russian trolls to sow doubt about the entire process.

“Disinformation is something new we saw last election cycle, but people didn’t know it was happening at the time,” said Troy Price, the chairman of the Iowa Democratic Party. “Here, we know it’s going on, and we’ve had time to prepare for it.”

This time, they've made changes to make the process more transparent and clear to Iowa voters. But there will be new challenges in 2020: Iowa Democrats worry that President Trump may also amplify rumors online, they tell Isaac.

Iowa Democratic and Republican Party leaders in November brought in Harvard's Defending Digital Democracy Project to conduct a simulation of results night. Officials developed plans to address hypothetical scenarios, such as tweets advertising the wrong caucus time or reports that the mobile apps reporting returns had malfunctioned. The contingency plans involved bringing in the Department of Homeland Security and contacting executives at Twitter. 


— A slight majority of Americans believe Trump has personally encouraged election interference by U.S. adversaries, a new poll from NPR, PBS NewsHour and Marist finds. The results highlight serious skepticism from the American public just days before primary season kicks off, reports NPR's Brett Neely.

The poll also found that 41 percent of Americans believe the United States is not very prepared or not prepared at all to keep the November election safe. Four in 10 people expressed concern that foreign powers would tamper with votes to change election results, despite no evidence that Russia changed any votes in 2016. 

There was less of a partisan divide over disinformation on social media. A staggering 82 percent of surveyed Americans said they expect to read misleading information about the election on social media, with a similar percentage expecting foreign countries to be behind spreading disinformation. Three-quarters of respondents are not confident tech companies will prevent misuse of their platforms during the election, an increase from a 2018 survey.

— Coming up:

  • The Senate Commerce Committee will host a hearing on “the 5G Workforce and Obstacles to Broadband Deployment” at 10 a.m. on Wednesday.
  • Silicon Flatirons will host its "Technology Optimism and Pessimism" conference Feb. 9 and 10 at the University of Colorado Law School in Boulder, Colorado. Speakers include FCC Commissioner Michael O'Rielly and FTC Commissioner Rohit Chopra.
  • Mobile World Congress takes place Feb. 24 to 27 in Barcelona.


The New York Times's Farhad Manjoo reacts to Amazon's new electric rickshaws. (Amazon CEO Jeff Bezos owns The Washington Post.)