Facebook, Google and other tech giants have long faced criticism from Congress for failing to crack down on a wide array of abusive posts, photos and videos that attack people on the basis of race, gender or other traits. These companies and their peers, including Twitter, explicitly bar such attacks. But their heightened attention to the issue and investments in more content reviewers, along with more potent artificial intelligence tools, still haven't thwarted the proliferation of troubling content.
The shooting at two mosques in the New Zealand city of Christchurch brought this into sharp relief, after videos of the attack targeting Muslims spread rapidly on social-media sites. The problem is even more pronounced on lesser-known services, such as Gab.ai, a haven for alt-right users, and anonymous forums including 8chan. The suspect behind the shooting at a Pittsburgh synagogue last year, for example, posted anti-Semitic screeds on Gab, and 8chan remained a place where users celebrated the New Zealand killings for days.
Facebook and Google did not immediately respond to requests for comment.
Last week, Facebook said it would ban photos, posts and other content that reference white nationalism and white separatism. Previously, Facebook had barred only white supremacy from its site. The new policy, which also applies to Instagram, came in response to months of criticism from civil-rights groups, which fretted that Facebook had created a loophole for extremism through its inconsistent rules. Many recalled the tragedy two years earlier, when self-avowed white nationalists used Facebook as an organizing tool ahead of the "Unite the Right" rally in Charlottesville, Virginia, which ultimately left three dead.
One of the organizations that pressured Facebook to change its policy, the Lawyers' Committee for Civil Rights Under Law, is set to testify at the House hearing next week, along with representatives from groups including the Anti-Defamation League. Testifying on behalf of Facebook is Neil Potts, a public policy director, and testifying for Google is Alexandria Walden, a counsel for free expression and human rights. Committee leaders said Wednesday that additional witnesses could be scheduled.