with Tonya Riley
Twitter just rolled out voice tweets, which allow users to add up to 140 seconds of audio. But hold your jokes about how this is just the new voicemail: Tech experts warn that the audio feature could be a new vector for harmful content that's even harder to police than text.
The feature arrives as the 2020 election, civil unrest and the coronavirus pandemic raise the stakes in Big Tech's fight against abuse and disinformation on its platforms.
“Unfortunately, if the company gives users a new way to express themselves, some of those users are likely to take advantage of the situation to spew the sort of harmful content that has long been associated with Twitter,” said Paul Barrett, deputy director of the NYU Center for Business and Human Rights.
Twitter's handling of disinformation, violence and hate speech is already under the microscope.
Voice tweets will add “an additional layer of complexity” for Twitter, which has faced criticism for not doing enough to police falsehoods and other harmful content in traditional tweets, said John Redgrave, the chief executive of the start-up Sentropy, which makes content moderation tools.
“Any new medium for distribution creates additional moderation challenges,” Redgrave said.
Twitter is initially making the feature available to only a limited number of people as it works out these issues.
“We’re working to incorporate additional monitoring systems ahead of bringing this to everyone,” said Twitter spokeswoman Katie Rosborough. “We’ll review any reported voice tweets in line with our rules, and take action, including labeling as needed.”
But Twitter has been criticized for being inconsistent about how it enforces its rules for regular tweets, and moderating audio could be especially challenging when it comes from public figures.
“If it's someone like Trump's account, it becomes harder to ascertain which of that content is legitimate,” said Ashkan Soltani, former chief technologist of the Federal Trade Commission.
Twitter in particular has taken a hard line on manipulated media, at times appending labels to doctored videos in tweets. The new audio feature could open the company up to greater audio manipulation at a time when sound-editing technology is rapidly advancing.
There are also technical reasons audio has traditionally been more challenging for tech companies to monitor than text.
“Moderating audio takes more resources because it takes more manual analysis and currently has less options for applying automated tools,” said Graham Brookie, director and managing editor of the Digital Forensic Research Lab at the Atlantic Council.
To analyze audio, human moderators may have to listen to the file in full, which at 140 seconds in length could be much more time-consuming than scanning a 140-character tweet. They also may rely on transcription services to convert the audio to text, which could then be reviewed by humans or algorithms, Redgrave told me.
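The transcribe-then-review approach Redgrave describes can be sketched in a few lines. This is a hypothetical illustration only, not Twitter's actual moderation code: the `transcribe` stub stands in for a real speech-to-text service, and the banned-term list is a stand-in for a real policy lexicon.

```python
def transcribe(audio_clip: bytes) -> str:
    """Placeholder: a real pipeline would call a speech-to-text service here."""
    # Hypothetical canned output for illustration.
    return "this is an example voice tweet"

# Stand-in for a real policy lexicon of prohibited terms.
BANNED_TERMS = {"banned_term_a", "banned_term_b"}

def review_voice_tweet(audio_clip: bytes) -> dict:
    """Convert audio to text, then flag it for human review if any banned terms appear."""
    transcript = transcribe(audio_clip)
    flagged = [word for word in transcript.split() if word in BANNED_TERMS]
    return {
        "transcript": transcript,
        "flagged_terms": flagged,
        "needs_human_review": bool(flagged),
    }
```

Even in this toy form, the extra step is visible: text tweets can be screened directly, while audio must first pass through transcription, adding cost, latency and a new source of errors.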
Given the current available technology, there is “no doubt” this will make Twitter’s challenge to moderate content more difficult, Dipayan Ghosh, a former Facebook employee who recently wrote the book “Terms of Disservice,” told me.
But this won’t be an entirely new challenge for Twitter.
Brookie points out that Twitter already allows video on its service, so it has dealt with similar issues with that content. And other companies that allow shared audio on their platforms are dealing with this, too.
It also remains to be seen how widely the new voice tweet feature is used. Already, it’s become something of a punchline on Twitter.
From the Verge's Casey Newton:
And others are still holding out for an edit button:
Have thoughts on voice tweets? Let me know at email@example.com or @Cat_Zakrzewski.
Our newsletters will be off tomorrow in honor of Juneteenth. Here's an explainer by Karen Attiah, The Washington Post's Global Opinions editor, about the significance of the holiday.
Our top tabs
Privacy policies are broken. Sen. Sherrod Brown has a radical new proposal to limit companies' data collection.
The Ohio Democrat will begin circulating a draft of legislation today that pushes industry standards away from broad opt-in consent to limiting online data collection for specific services requested by customers. The legislation could effectively put a halt to the business of data brokers that amass vast troves of online data for profit, Geoffrey A. Fowler reports.
“We just failed to establish clear rules about corporations using big data to dig into our private lives, and those days should be behind us,” Brown told Geoffrey. His bill, modeled on the law that protects consumers' financial data, would give consumers a reset.
“There are a lot of proposals out there on privacy, but we think this is probably broader and deeper with some different ideas,” Brown said. “Fundamentally, to me, privacy is a civil rights issue.”
But online data is much more diverse than financial data, and regulating it is a big challenge. So far Congress has been unable to reach a consensus on the privacy legislation many lawmakers called for after the Cambridge Analytica scandal: there are six other pieces of legislation pending in addition to Brown's.
And industry players will likely try to fight the bill's mission to dismantle digital targeted advertising, a system many companies argue is key to allowing them to provide their services free.
Facebook, Twitter and Google are in the hot seat on disinformation today.
Senior officials from the social media titans are expected to tell the House Intelligence Committee about the investments they've made to prevent foreign actors from manipulating their services. Lawmakers will also ask the companies whether they've seen any foreign misinformation linked to the coronavirus pandemic or protests against racism and police brutality.
Nick Pickles, Twitter's director of public policy strategy and development, plans to tell lawmakers that the threat of foreign interference in elections is “real and evolving,” according to excerpts from prepared testimony shared with the Technology 202. Pickles will say that the company is watching for signs of foreign interference in conversations about the protests, after Russian actors sought to exploit U.S. racial divisions in their 2016 campaign to divide American voters.
“As America has responded to the death of George Floyd and cities across the country have seen protestors take to the streets, the public conversation on Twitter has highlighted the deep-rooted nature of issues related to race, justice, and equality,” he will say. “While we have not seen evidence of concerted foreign state-backed efforts to manipulate the public conversation in recent weeks, we remain vigilant.”
Nathaniel Gleicher, Facebook's head of security policy, intends to focus on the company's efforts to support voters, including the new Voter Information Hub the company unveiled this week. Yesterday he told reporters on a press call that he's monitoring for signs of foreign interference targeting the protests.
“Since the protests started at the end of May, we’ve seen some speculation about foreign interference targeting those protests. We’re actively looking and we haven’t yet seen coordinated inauthentic behavior targeting us,” Gleicher said. “We have seen isolated inauthentic accounts looking to impersonate authentic activists and where we’ve found that, we’ve taken action against them.”
Google declined to comment.
More than 70 employees at Mark Zuckerberg and Priscilla Chan's philanthropy are calling on the organization to address systemic racism.
The dozens of employees have put forth a list of 12 changes to guide the Chan Zuckerberg Initiative's investments in overhauling American education, Theodore Schleifer at Recode reports.
The proposals include increasing the number of black leaders in CZI's management and creating a committee of individuals from marginalized backgrounds to guide the changes. It's a rare instance of bottom-up activism in the philanthropy world, Theodore writes. The letter follows a push from a group of CZI-backed scientists, who challenged Zuckerberg on Facebook's content moderation. A small handful of employees also signed that letter.
“We welcome feedback from our team members and have worked hard to create a safe environment for employees to make their voices heard,” CZI representative Raymonde Charles said in response to the letter. “We’re proud of the many CZI employees who have raised their hands in recent weeks to help in this new call to action.”
Meanwhile, Google announced a new plan to increase black representation at the company, including a goal of improving leadership representation of underrepresented groups by 30 percent by 2025. The company will also introduce new anti-racism educational programs and establish new talent liaisons focused on hiring and retention, Google chief executive Sundar Pichai wrote in a company-wide note.
Pinterest chief executive Ben Silbermann told employees that the company would hire a non-white board member, Bloomberg's Sarah Frier reports. He also promised to evaluate managers on diversity hiring, and to hire outside experts to comprehensively review employee compensation and evaluate possible unfairness. His action follows employee backlash to the company's response to two former black female employees' allegations of racial discrimination at the company.
The Justice Department proposed legislation that could punish Facebook, Google and Twitter for harmful content online.
The proposal the agency unveiled builds on an executive order from Trump earlier this month that threatens to erode a long-standing legal shield for tech companies from liability for content on their sites, Tony Romm reports.
It also raises the possibility that companies could lose their legal protection if their security practices hamper law enforcement, giving the department another potent tool in its war against encrypted technologies.
But Congress would need to get behind the proposal, which is a long shot in an election year. Both parties have called in recent months for overhauling the legal shield, Section 230 of the Communications Decency Act. Democrats tell Tony they're unlikely to line up behind the Justice Department.
“I’ve certainly been one of Congress’s loudest critics of Section 230, but I have no interest in being an agent of Bill Barr’s speech police," Democratic Sen. Richard Blumenthal (Conn.), one of the authors of bipartisan legislation to change the law, said in a statement Wednesday. “I’m deeply concerned that President Trump and Attorney General Barr are exploiting Big Tech’s complicity in human misery to advance their own political agenda.”
Representatives for Facebook, Google and Twitter declined to comment. The Internet Association, a Washington-based trade group representing those companies and other online services, sharply rebuked the Justice Department for its proposal.
“Rolling back Section 230 protections will make it harder, not easier, for online platforms to make their platforms safe," Jon Berroya, interim president of the group, said in a statement.
Rant and rave
Trump lashed out at NBC News after the outlet reported that Google banned Zero Hedge, a far-right website, from its advertising platform due to policy violations in its comment section. The company also issued a warning to The Federalist, another outlet.
The tweet adds to the conservative backlash against tech, as some Republicans cheered on the Justice Department's efforts to limit tech's immunity protections. From Rep. Doug Collins (R-Ga.):
I’m glad to see Attorney General Barr taking action to roll back Section 230 immunity. Google — along with every other big tech company — shouldn’t be allowed to get away with content filtering or censorship. Section 230 must be repealed! https://t.co/EZCOk4GvRh — Rep. Doug Collins (@RepDougCollins) June 17, 2020
Sen. Tom Cotton (R-Ark.) called out Google:
Sen. Kelly Loeffler (R-Ga.) promoted her legislation with Sen. Josh Hawley (R-Mo.) that would adjust Section 230:
Big Tech must be held accountable. I sent a letter asking the @FCC to provide clarity on #Section230. I also cosponsored @HawleyMO’s bill to more clearly define Section 230 to prevent Big Tech from stifling Americans’ right to free speech. It must end. https://t.co/yO48aqWr4K — Senator Kelly Loeffler (@SenatorLoeffler) June 17, 2020
MSNBC host Mika Brzezinski and her husband Joe Scarborough, who has been at the center of a conspiracy theory promoted by Trump, called for changing the law to rein in extremists.
The United States is withdrawing from negotiations over an international digital tax as the pandemic persists.
The Organization for Economic Cooperation and Development in Paris has been hosting negotiations aimed at brokering a compromise between the United States and European countries. The Trump administration argues the proposed taxes unfairly target American companies. U.S. Trade Representative Robert E. Lighthizer confirmed the decision while testifying before the House Ways and Means Committee yesterday, David J. Lynch reports.
“They all came together and agreed they'd screw America,” he said of the talks.
More White House news:
Zoom will provide the highest level of encryption to all customers starting in July.
That's a reversal of an earlier decision that gave only paying customers access to end-to-end encryption, Rachel Lerman reports.
The company faced an initial backlash from privacy advocates and users after it said it wouldn't provide the top-shelf encryption to unpaid users, in part because law enforcement might need access to videoconference data with a warrant.
Chief executive Eric Yuan said he consulted with child safety experts and civil liberties groups ahead of yesterday's decision. Zoom will require free users to provide personal information, including their phone number, to verify their identity for use of the service.
“We are confident that by implementing risk-based authentication, in combination with our current mix of tools — including our Report a User function — we can continue to prevent and fight abuse,” Yuan wrote in a blog post.
Microsoft pitched its facial recognition software to government agencies for years, new public records show.
It pitched the technology to federal agencies, including the Drug Enforcement Administration, in the months leading up to its July 2018 call for increased regulation of facial recognition, emails obtained by the American Civil Liberties Union via a public information request show.
It's unclear whether the DEA ever moved forward with using the offerings, but a partnership would concern privacy advocates.
“The DEA has a long history of racially disparate or racist practices and has been engaged in wildly inappropriate mass surveillance,” Kade Crockford, director of the ACLU of Massachusetts's Technology for Liberty Program, told BuzzFeed News. Microsoft recently said it would no longer provide the technology to police departments but has declined to commit to ending any federal partnerships.
Amazon has also placed a moratorium on police use of its facial recognition software but has not said whether that ban will apply to federal agencies. (Amazon CEO Jeff Bezos owns The Washington Post.)
- House Intelligence Committee will hold a virtual open hearing with Facebook, Google and Twitter on Foreign Influence and Election Security today at noon.
- Elizabeth Dwoskin will interview YouTube CEO Susan Wojcicki for Post Live online today at 5:15 p.m.
- Carnegie's Partnership for Countering Influence Operations and Twitter will host an event on influence operations on Twitter on July 9 at 1 p.m.
- The Energy and Commerce Committee will host a hearing on online disinformation on June 24. The hearing will cover disinformation related to covid-19 and the recent racial unrest.
Before you log off
More on John Bolton's book from Seth Meyers: