A fringe social network called 8chan is in the spotlight after users on the anonymous message board cheered on the man suspected of attacking a California synagogue on Saturday.
Hours before a 19-year-old man opened fire at the San Diego-area synagogue, killing one and injuring three, an anti-Semitic screed was posted under his name on the website the Anti-Defamation League has compared to “round-the-clock digital white supremacist rallies.” 8chan is the same service that the alleged gunman in the New Zealand attacks used to blast his message of hate just weeks ago.
The pair of manifestos that appeared to preview real-world violence is rekindling calls for policymakers to look beyond just household names such as Facebook, YouTube and Twitter to curtail the spread of harmful content. Tech experts and civil rights advocates say there's far less awareness of the role extremist enclaves such as 8chan -- which does not appear in Google search results and is blocked by many corporate firewalls -- play in stoking hate and violence online.
“The consequences of such a concentrated echo chamber of hate have played out multiple times around the world,” Oren Segal, the director of the ADL's Center on Extremism, told me. “At this point, we believe that if certain companies can’t take responsibility to regulate themselves, to protect their users, to protect others, then perhaps it’s time for policymakers to step in and to regulate them.”
In a tweet over the weekend, 8chan said it took swift action to remove the post connected with the attack. But in the aftermath of the New Zealand attack, experts in online extremism said white supremacists' discussions on the forum appeared similar to online activity typically tied to Islamic militants, my colleagues Drew Harwell and Craig Timberg reported.
Some experts are calling for the website -- which describes itself as the "darkest reaches of the Internet" -- to be monitored like a terrorist recruiting website, so law enforcement would intervene more when chatter becomes violent, my colleagues wrote.
While the House held a hearing on white nationalism last month, experts said there wasn't nearly enough attention paid to these fringe sites.
“These digital hate rallies are leading to violence on the ground,” Segal told me. “If we can’t try to understand the role of social media in not just encouraging hatred but spreading violence now, when will we be prepared to?”
The message is trickling to the campaign trail. Brianna Wu, a software engineer who is running as a Democrat for a House seat in Massachusetts, told me she is “angry” that law enforcement has not done more to rein in 8chan, which has also been connected to the circulation of child pornography and is a place where people are frequently doxxed.
After Wu herself was targeted on the website in 2014 with death threats during the Internet culture war known as Gamergate, she says she documented "tons of illegal activity" on 8chan and shared her findings with the FBI. She believes it's possible the recent shootings could have been avoided if law enforcement had taken greater action, she said, and wants to increase funding for the FBI to investigate online crime if elected to Congress.
“We need to fund a specific task force within the FBI that is very tech literate and tasked to prosecute these types of online crimes,” she said. More from Wu:
7/ If you want to stop future mass shootings planned on #8chan, the course of action is very simple. Get a warrant, and START PROSECUTING all the blatantly illegal actions taken there. It is a crime to post stolen credit card information. It is a crime to host child pornography. — Brianna Wu (@BriannaWu) April 28, 2019
But it may not just be up to policymakers or law enforcement: There could be a role to play for Big Tech to stop the spread of hateful content even from fringe sites.
Alex Stamos, the former chief security officer at Facebook, tweeted that technology platforms should ban content from services like 8chan and other sites known as hotbeds for white supremacist terrorism. "While they work on improving the quality of moderation using their current standards, I would also propose a much more extreme step: a blockade of the small number of sites responsible for the majority of WST community and recruiting," he tweeted.
While the platforms have their own issues disrupting the on-platform radicalization cycle, they can take a big step by creating a moat that makes it less likely that an individual starts with slightly alt-right videos and ends up cheering on a shooter on a *chan. — Alex Stamos (@alexstamos) April 28, 2019
BITS: The Securities and Exchange Commission and Elon Musk resolved their dispute over the Tesla chief executive's tweets, according to a Friday court filing obtained by my colleague Faiz Siddiqui.
The filing amends a previous deal Musk made with the agency that required the company to more closely monitor Musk's use of social media. The new agreement more specifically outlines which of Musk's tweets need to be reviewed by the company's lawyers.
"The new agreement details nine categories of information that must be screened, such as Tesla’s 'financial condition, statements, or results, including earnings or guidance,'" Faiz wrote. "The categories also include such information as possible acquisitions and business deals, as well as production, sales and delivery numbers that include forecast or projected figures. Senior personnel or board changes would also have to be approved, among other nonpublic disclosures."
A judge still has to approve the agreement. The agency sought stricter limits on Musk's use of Twitter after it said Musk violated their earlier agreement in February, when he tweeted that Tesla would deliver half a million vehicles this year.
NIBBLES: Apple has removed or restricted at least 11 of the 17 most-downloaded screen-time monitoring apps since launching its own competing app last year, Jack Nicas reports in the New York Times.
"In some cases, Apple forced companies to remove features that allowed parents to control their children’s devices or that blocked children’s access to certain apps and adult content," Nicas reported. "In other cases, it simply pulled the apps from its App Store."
Some of the apps shut down, while others say their future is in jeopardy.
"The screen-time app makers are the latest companies to suddenly find themselves both competing against Apple and at the mercy of the tech titan," Nicas writes. "By controlling the iPhone App Store, where companies find some of their most lucrative customers, Apple has unusual power over the fortunes of other corporations."
The company defended its decision to remove or restrict the apps. Apple spokeswoman Tammy Levine told the New York Times that developers could gain too much access to people's devices through the monitoring apps and that the timing was unrelated to the company's own screen-time app launch. Philip W. Schiller, Apple’s senior vice president of worldwide marketing, said in emails published by MacRumors that Apple “acted extremely responsibly in this matter, helping to protect our children from technologies that could be used to violate their privacy and security.”
BYTES: When the Mueller report came out, YouTube recommended RT -- a global media outfit funded by the Russian government -- hundreds of thousands of times, according to my colleagues Drew Harwell and Craig Timberg.
The watchdog AlgoTransparency analyzed recommendations made by the 1,000 YouTube channels it tracks daily. The group discovered that 236 of those channels recommended RT’s “On Contact: Russiagate & Mueller Report w/ Aaron Mate” more than 400,000 times.
"The video interview it recommended, posted by RT’s U.S.-focused division RT America, is sharply critical of American press coverage of the Mueller report and calls journalists 'Russiagate conspiracy theorists' and 'gossiping courtiers to the elite,'" my colleagues wrote.
YouTube pushed back on the watchdog's methodology, data and findings.
"The AlgoTransparency tool was created outside of YouTube and does not accurately reflect how YouTube’s recommendations work, or how users watch and interact with YouTube,” YouTube spokesman Farshad Shadloo said. “We’ve designed our systems to help ensure that content from more authoritative sources is surfaced prominently in search results and ‘watch next’ recommendations in certain contexts, including when a viewer is watching news-related content on YouTube.”
Technology news from the private sector:
-- The consumer tech sector directly employed 5.1 million workers and supported an additional 13.1 million jobs in 2017, according to a new report that The Consumer Technology Association provided exclusively to the Technology 202. That's 9.3% of the U.S. workforce in that year.
The study, conducted by PricewaterhouseCoopers, gives a picture of the industry's economic impact in the United States. Gary Shapiro, president and chief executive of the CTA, says the report's findings show the tech sector "is an essential driver of our nation's economy."
"And the effects go beyond just the products our industry sells – we drive productivity for virtually every sector of the economy," Shapiro said in a statement.
Notably, the study also provides some insight into the political ramifications of tech's growth amid mounting backlash against the industry. The study shows the industry has an increasing on-the-ground presence in blue states, such as New York, Massachusetts and California.
Technology news from the public sector:
Tech news generating buzz online: