Trump denounced the company's decision to label his misleading tweets … on Twitter.
He did so even as the company declined to remove or correct several tweets in which the president promoted a baseless conspiracy theory accusing former congressman and now MSNBC host Joe Scarborough of involvement in the death of a former staffer. The company held off even after the woman's widower pleaded with it to take down the tweets.
“Throughout the day Tuesday, Twitter executives debated whether to apply its label to Trump’s statements about the widower’s wife, and if so, how to classify the statement under the companies’ current policies, according to a person familiar with the discussions who spoke on the condition of anonymity to describe the private deliberations. They debated whether the tweet constituted misinformation that has the potential to cause harm, or is a form of harassment, said the person, who was not authorized to speak publicly about ongoing deliberations,” my colleagues Toluse Olorunnipa, Elizabeth Dwoskin and John Wagner reported.
Trump instead blasted social media companies for their anti-conservative bias and vowed to “strongly regulate” or even shut down the companies.
Trump also accused Twitter of “interfering in the 2020 Presidential Election.”
Trump's attacks could signal more skirmishes to come this election year as Twitter gets tougher on posts by politicians.
Twitter and other social networks have long given leaders like Trump greater leeway in what they can post under what's known as a “newsworthiness exemption.” But in deciding to tackle incorrect information on some subjects from Trump, Twitter is entering a Wild West and taking on a role social media companies have long sought to avoid.
The decision to join the fray signals the company could be more aggressive about moderating election misinformation during 2020 — potentially setting up partisan battles over what comments from politicians and candidates the company would call out.
Critics have pressured Twitter for years to police Trump's speech when he's harassed people or incited violence, and some have even called on the company to remove the president from the platform altogether. In recent months, the company has taken small steps to curb inaccurate information tweeted by the president.
In March, Twitter labeled a manipulated video of presumptive Democratic nominee Joe Biden that was retweeted by Trump. Earlier this month, the company adopted a new policy saying it would label or add a warning message to tweets containing coronavirus-related misinformation, even when the information does not violate its policies. The company said at the time that it would apply these labels to world leaders, including Trump. It has removed misleading tweets about the virus by Venezuelan President Nicolás Maduro and Brazilian President Jair Bolsonaro.
Twitter spokeswoman Katie Rosborough says Trump's tweets about mail-in voting “contain potentially misleading information about voting processes and have been labeled to provide additional context around mail-in ballots.”
But Twitter's approach appears inconsistent even as it's stepping up its enforcement.
Twitter's decision not to label or remove the president's post about the deceased Florida woman, Lori Klausutis, caused broad backlash.
The company maintains those tweets don't violate its existing policies.
“We are deeply sorry about the pain these statements, and the attention they are drawing, are causing the family,” the company said in a statement. “We’ve been working to expand existing product features and policies so we can more effectively address things like this going forward, and we hope to have those changes in place shortly.”
And other tech companies are not necessarily following suit.
Facebook is not taking action against the president's false claims that mail-in ballots are fraudulent. The president posted the same message that Twitter labeled on Facebook, where it has been shared at least 20,000 times with no warning.
“We believe that people should be able to have a robust debate about the electoral process, which is why we have crafted our policies to focus on misrepresentations that would interfere with the vote," Facebook spokesman Andy Stone says.
The companies' divergent decisions highlight the fact that a politician could simply turn to a different social network if one takes action against a post and another doesn't, further complicating efforts to stamp out falsehoods online. Some experts say Twitter's decision puts more pressure on Facebook to moderate the president's posts. From Yael Eisenstat, a visiting fellow at Cornell Tech's Digital Life Initiative:
Enforcing policies against the commander-in-chief comes with high political stakes.
Tech companies don't just have to worry about attacks in tweets. The Wall Street Journal recently reported that the White House is considering establishing a panel to review complaints of bias against conservatives. The plans are still under discussion, and the administration could also encourage similar reviews by federal regulatory agencies, such as the Federal Communications Commission and the Federal Election Commission, the Journal reported.
This could add to the tech industry's political headaches in Washington as it already faces antitrust investigations.
And some observers are concerned that Twitter's labels don't go far enough.
Some tech journalists and commentators noted the wording of Twitter's new label could be confusing for observers. From Bloomberg's Sarah Frier, who authored “No Filter: The Inside Story of Instagram”:
Greg Bensinger of the New York Times questioned how many people would click through to see the news articles fact-checking the tweet:
Our top tabs
Whistleblowers allege Facebook withheld information about illegal activity on the platform from investors in a complaint filed today.
A new complaint alleges Facebook violated its fiduciary duties by not informing shareholders about activity such as drug sales, Nitasha Tiku reports. The National Whistleblowers Center, an advocacy group, today filed the complaint with the Securities and Exchange Commission.
The complaint includes dozens of images documenting sales of opioids and other drugs on Facebook and Instagram. One of the whistleblowers behind the complaint, an ex-employee at a cybersecurity firm hired by Purdue Pharma to police online counterfeiters of its drug OxyContin, said Facebook refused to take down illegal offers to sell the drug.
Another whistleblower, a former content moderator, said there was no way to flag Facebook about use of its internal payment technology for illegal goods including child pornography and drugs. “There were groups where pornographic content related to children was auctioned. And they used [Facebook] systems for all of it, from what I could see,” the ex-moderator wrote in a sworn statement.
Outside watchdogs and reporters have for years flagged the existence of illegal activity on Facebook. But the company has been shielded from lawsuits by a decades-old law that gives Internet companies a liability shield for content posted by users. Advocates at the National Whistleblowers Center hope that by targeting Facebook through financial laws, they can get around those protections, known as Section 230 of the Communications Decency Act.
Facebook ignored its own internal research that shows it polarizes users.
Top executives saw the research before the company faced a public reckoning for its role in influencing the public discourse in the wake of the 2016 election, documents and inside accounts obtained by the Wall Street Journal's Jeff Horwitz and Deepa Seetharaman reveal.
For instance, a 2016 presentation looking at extremist groups in Germany found that “64% of all extremist group joins are due to our recommendation tools.”
“Our recommendation systems grow the problem,” Facebook researcher Monica Lee noted in the presentation.
The company created a now-disbanded task force dubbed “Common Ground” to work on features to reduce polarization. But the solutions the team suggested were shot down over fears they would dampen engagement and force the company to “take a moral stance,” the Journal reports. People familiar with the process say executives also expressed concerns that the proposed changes would disproportionately affect conservative users.
Facebook's vice president of global public policy, Joel Kaplan, played a key role in vetting the proposed changes and shot down some of the task force's ideas as “paternalistic,” people familiar with his comments told the Journal. His vetting process became known as “Eat Your Veggies.”
Scrapped projects included efforts to suppress political clickbait. Another project aimed at decreasing the influence of hyperactive users was weakened after Kaplan argued it could hurt Girl Scout troops selling cookies.
“We’ve learned a lot since 2016 and are not the same company today,” a Facebook spokeswoman told the Journal.
Activists will today put pressure on Facebook and Amazon at the companies' annual shareholders' meetings amid concerns about their responses to the coronavirus.
Some Amazon investors plan to challenge the company's efforts to protect workers as the coronavirus hits its warehouses. Some investors intend to call on the company to be more transparent about how many of its workers are infected or have died of the novel coronavirus. The activist investors also want Amazon to appoint an independent board member to oversee worker safety. (Amazon CEO Jeff Bezos owns The Washington Post.)
“At the same time that covid-19 has benefited Amazon's bottom line it's also exposed a weakness to its reputation,” said Illinois State Treasurer Mike Frerichs, who oversees the state's $31 billion investment portfolio in companies including Amazon and Facebook.
Meanwhile, Facebook shareholders say the pandemic has highlighted the problem of hate speech and discrimination proliferating on its platform.
“When it came to dangerous coronavirus misinformation, we saw Facebook take swift and effective action,” Color of Change President Rashad Robinson will tell the board today. But “even then, Facebook initially ignored content related to race: the false idea that Black people couldn’t get the new coronavirus spread rapidly, and has proven very dangerous.” Robinson will propose urging the board of directors to oversee management's preparation of a report focused on civil and human rights.
Facebook stakeholders are urging the company to immediately adopt informal resolutions calling for the removal of all coronavirus misinformation and events promoting armed protests, such as those against stay-at-home orders.
The chances of activist resolutions passing at either company are slim. Both companies have urged shareholders to oppose the resolutions, and key activist-led resolutions failed at both companies last year, though by narrower margins than in previous years.
Rant and rave
Instead of revealing worker infection rates, Amazon has taken another route: offering prescripted news segments to media outlets. Zach Rael, an anchor at KOCO in Oklahoma City, first drew attention to the practice.
But it turned out that Rael wasn't alone. Amazon had shopped the segment to multiple other news outlets. And at least 11 ran with it, progressive news outlet Courier found.
Timothy Burke, the reporter who broke the story:
“Amazon responded by stating the video and script were published to Business Wire as are many other companies’ in-house produced content for media organizations,” Burke reported. But Rael said Amazon reached out to him directly:
The incident drew scrutiny on Twitter. Politico's Cristiano Lima:
“This type of video was created to share an inside look into the health and safety measures we’ve rolled out in our buildings and was intended for reporters who for a variety of reasons weren’t able to come tour one of our sites themselves,” Amazon spokeswoman Alyssa Bronikowski said in a statement.
This has been updated to include a statement from Amazon.
Inside the industry
Amazon is reportedly in talks to buy autonomous vehicle company Zoox.
But an agreement is uncertain and could be weeks away, sources told the Journal. Amazon has made several other investments in driverless vehicles geared toward its delivery business.
More industry news:
Uber and Lyft drivers sued the state of New York for allegedly failing to pay out unemployment benefits.
The drivers' lawsuit says that the state makes drivers wait far longer to receive benefits than other unemployed workers, Noam Scheiber at the New York Times reports. The complaint, which is being brought by four drivers and the advocacy group New York Taxi Workers Alliance, is seeking an injunction requiring the state to pay benefits owed to drivers immediately.
The lawsuit also says the state has put an undue burden on drivers applying for the benefits by failing to force companies such as Uber and Lyft to provide earnings data normally supplied in the process. Both Uber and Lyft said they were working with the state to provide that data.
Uber was also in the spotlight yesterday when former vice president Joe Biden slammed the company's efforts to roll back a California law granting some gig workers employee status.
The company is currently fighting a lawsuit from the state alleging it misclassified workers under the new law.
More tech workforce news: