Take Back the Tech’s “report card” on social media and violence against women. (TBTT)

Major American tech companies regularly publish “transparency” statistics on matters such as how many items they take down for copyright violations and how often governments request user data.

But according to a scathing new report, when it comes to user abuse on their platforms — an issue that has seen an explosion of interest and concern over the past 12 months — social networks remain obstinately, even disingenuously, mum.

The report, which was released as part of the Association for Progressive Communications’ “Take Back the Tech” campaign and funded in part by the Dutch government, analyzed the user policies of Twitter, Facebook and YouTube, as well as the companies’ public responses to international abuse incidents over the past five years.

While the report’s findings vary widely by platform, it points to one sweeping issue: a total lack of transparency around how much abuse actually occurs on social media, particularly abuse directed at women, and how social media companies deal with it.

“These companies are responsible to their users, yet so much of what they do happens behind closed doors,” said Sara Baker, global coordinator of the APC’s “Take Back the Tech” campaign. “We would love to see data on how many people submit reports, their general demographics (including country and language) and the overall results of those reports. We also want to know more about the people making decisions behind the scenes. What countries do they live in? How are they trained?”

These questions spring from more than mere curiosity, the report points out: Without reliable data on abuse, there’s no real way to scrutinize networks’ response or hold them accountable to their users.


A screenshot of Twitter’s automatic response to abuse reports.

As things stand, YouTube, Twitter and Facebook all rely on user reports to flag inappropriate or abusive content. When someone flags something on YouTube, the content is surfaced to a 24-hour team for review. (An algorithm determines the priority of review, so a victim has no way of knowing how long that will take.) Meanwhile, when a user flags content on Twitter, that user receives an automated e-mail urging “patience” and promising Twitter’s safety team will “review your report and take action if the user is found to be in violation of our abusive behavior policy.”

There is never any indication, the APC report points out, of who reviews the report, what factors they consider or how long the process will take. There is also no way to appeal a decision.

“When content is reported to us that violates our rules, which include a ban on targeted abuse and direct violent threats, we suspend those accounts,” a Twitter spokesman said, by way of explanation. “We evaluate and refine our safety policies based on input from users, while working with outside organizations to ensure that we have industry best practices in place.”

Facebook is, to some extent, the outlier here: Like Twitter and YouTube, the network is quick to promote its collaboration with outside victim-support and advocacy groups. But unlike Twitter and YouTube, Facebook has publicly explained the intricacies of its reporting process and advertised a ballpark timeline of 72 hours for “the majority” of reviews. The company’s safety teams, operating in 20 languages, escalate some reports to law enforcement and crisis hotlines. And since 2012, a feature called the Support Dashboard has let victims track the status of their abuse reports and see why each decision came down the way it did.


Facebook’s reporting guide lays out its abuse-reporting process. (Facebook)

That helps explain why Facebook alone of the three networks received a “passing” grade from the APC; Twitter and YouTube both scored Fs on issues such as “engagement with stakeholder groups” and “proactive steps to eradicate violence against women.”

“We work hard to make sure that our community of more than one billion people has the best experience possible,” said Monika Bickert, Facebook’s director of public policy. “We’ve created a global set of Community Standards, we make it easy for people to report anything they see on Facebook, and we have a team of hundreds of people who respond to reports twenty-four hours a day, seven days a week, in more than two dozen languages.”

Still, even Facebook doesn’t publish its abuse numbers — the one metric, activists say, that would make the social network truly accountable to its users.

A Facebook spokesperson said the network is currently exploring ways to track more, and better, abuse-reporting data, and wouldn’t “rule out” the possibility of publishing it in the future. The site’s reporting system isn’t presently designed to provide detailed breakdowns of the data users report.

But Baker, of the APC, points out that there is some precedent for this type of disclosure. Many tech companies have deployed startlingly effective, transparent policies around copyright, she notes. (Google’s running Transparency Report goes so far as to log exactly how many copyright-takedown requests it receives per week, from which rights holders and for which domains.) This year, in response to public pressure, Facebook, Twitter and YouTube parent Google all released diversity reports that broke down the racial and gender composition of their staffs. It is neither inconceivable nor technologically impossible for those companies to do the same with their data on abuse-report volume and response times.

Until they do, the APC is asking users to publish data of their own: The organization is collecting user ratings of Twitter, Facebook and YouTube via the “Take Back the Tech” Web site, and, notably, it promises to publish that data soon.