Parler has resisted placing limits on what appears on its social network, and its leaders have equated blocking hate speech to totalitarian censorship, according to Amy Peikoff, its chief policy officer. But Peikoff, who leads Parler’s content moderation, says she recognizes the importance of the Apple relationship to Parler’s future and has sought common ground between the two companies.
“At Parler we embrace the entire First Amendment, meaning freedom of expression and conscience are protected,” Peikoff said. “We permit a maximum amount of legally protected speech.”
Apple declined to comment.
Apple, like other major tech companies, took Parler down in the wake of the Jan. 6 Capitol riot. The app had been used to glorify and encourage the attack. Google also booted Parler from its app store. Shortly after, Amazon Web Services cut off Parler’s cloud computing power, essentially shutting it down. (Amazon founder Jeff Bezos owns The Washington Post.)
Last month, Apple confirmed it would let Parler back on its store with the proposed changes to the app’s moderation policies.
Parler’s maneuvering to get back on the App Store — where it was the number one app when it was taken down on Jan. 9 — shows the sway Apple and other major tech companies have over businesses that exist on their platforms. As the sole gatekeeper of what apps can appear on iPhones, Apple is as important an arbiter of online speech as Facebook or Twitter — though it is more often overlooked.
Parler is still pressing Apple to allow a feature on iPhones that would show users a warning label over hate speech and let them click through to see it. But blocking hate speech outright was a condition of reinstatement on the App Store.
“Where Parler is different [from Apple], is where content is legal, we prefer to put the tools in the hands of users to decide what ends up in their feeds,” Peikoff said.
She added that the version on iOS, Apple’s mobile operating system, could be called “Parler Lite or Parler PG.”
Apps can be downloaded on iPhones only through the App Store, and Apple requires that social networking apps meet a certain standard for vetting content. While tech giants such as Facebook and Google’s YouTube can afford to employ thousands of human moderators in addition to artificial intelligence, smaller companies often lean more heavily on limited human staff or technological tools. Unlike Parler, the major social media platforms apply their policies universally, regardless of where users see a post.
Parler burst onto the scene in 2018 touting itself as a place for unfettered free speech. It rapidly gained steam last year as Facebook and Twitter started penalizing former president Donald Trump for spreading misinformation. Politicians such as Sen. Ted Cruz (R-Tex.) and Rep. Kevin McCarthy (R-Calif.) joined. And the number of users surged, particularly among groups of people who supported Trump and leaned to the right.
As Parler grew, a major problem emerged: its hands-off approach to content on its site. It relied on a system of voluntary, unpaid, Parler-trained community jurors: users reported violating content, jurors reviewed the reports and referred potential violations to a five-person quorum that rendered a verdict. Users were able to appeal.
Peikoff says the juror system was especially difficult to scale when the platform was experiencing wild growth during the 2020 presidential election, particularly around Nov. 3 and Jan. 6. After the vote, the site was filled with election misinformation. And on Jan. 6, some users egged on violence at the Capitol, while others used the platform for planning.
In a letter to lawmakers, Apple said it had communicated to Parler before and after the Jan. 6 riot about repeated instances of incitement and hate speech on its platform and the need to fix its moderation. It took the app down after it deemed Parler’s response insufficient to its urgent demand for a “moderation improvement plan.”
Parler had shunned the more stringent moderation practices used by mainstream social media sites such as Facebook and Twitter before eventually adopting some similar methods.
One of Parler’s first moves to try to get back online was to approach Amazon, according to former CEO John Matze, who was fired in February. He offered to explore using Amazon’s Rekognition AI tool, which reads faces, objects and scenes in images and videos and is used for content moderation by some of Amazon’s customers. Amazon’s own Trust & Safety team, which has fewer than 100 workers, acts only on complaints it receives, and it did receive complaints about Parler. But according to Matze, Amazon said implementing that tool wouldn’t be enough to fix Parler’s problem.
Amazon Web Services spokesperson Kristin Brown confirmed that while Rekognition can effectively moderate for image and video content, it does not yet have the same capability for text, which is a critical need for Parler.
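A minimal sketch of how an image-moderation call to Rekognition typically looks through the AWS SDK for Python may help illustrate the gap Brown described: the service labels unsafe imagery, but the text of a post never passes through it. The bucket, object key and confidence threshold below are placeholders, not settings described by Amazon or Parler.

```python
# Illustrative sketch only: scans one image stored in S3 for unsafe content.
# Bucket name, object key and the 80 percent threshold are placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def unsafe_image_labels(bucket: str, key: str, min_confidence: float = 80.0) -> list:
    """Return the moderation labels Rekognition assigns to an image in S3."""
    response = rekognition.detect_moderation_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=min_confidence,
    )
    return [label["Name"] for label in response["ModerationLabels"]]

# An empty list means nothing crossed the threshold. Only the image is
# inspected; any text in the post is untouched, which is the limitation
# Amazon described for Parler.
```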
Parler managed to get its site back up through hosting provider SkySilk in February.
Matze hired Hive, an AI-based content moderation company that does work for Reddit and Chatroulette, on Jan. 18. Hive, based in San Francisco, employs more than 2.5 million contractors who are paid per task, often in bitcoin, and annotate images, video, text and audio content collected from the Web. That feeds into Hive’s machine learning and algorithms, allowing it to better police content.
Every post on Parler runs through Hive’s AI for analysis. The algorithms are the first filter for content. More than 99.5 percent of posts on Parler are deemed safe based on algorithmic review, according to Hive and Parler; the remaining 0.5 percent are flagged for Hive’s human moderators to evaluate.
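Hive’s interface is not described in detail, but the split outlined above, an automated first pass that clears nearly all posts and routes a small remainder to people, follows a common triage pattern. The sketch below is hypothetical: the classifier and threshold are stand-ins, not Hive’s actual model or settings.

```python
# Hypothetical triage sketch: an automated classifier clears most posts and
# queues the uncertain remainder for human moderators. The classifier and the
# threshold are illustrative stand-ins, not Hive's actual model or settings.
from dataclasses import dataclass

@dataclass
class ModerationResult:
    post_id: str
    decision: str   # "safe" or "needs_human_review"
    score: float    # estimated probability that the post violates policy

def score_post(text: str) -> float:
    """Placeholder for a trained content classifier returning P(violation)."""
    raise NotImplementedError

def triage(post_id: str, text: str, review_threshold: float = 0.5) -> ModerationResult:
    score = score_post(text)
    if score < review_threshold:
        # Per Hive and Parler, roughly 99.5 percent of posts end here and are
        # published without human involvement.
        return ModerationResult(post_id, "safe", score)
    # The remaining roughly 0.5 percent is flagged for human review.
    return ModerationResult(post_id, "needs_human_review", score)
```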
For Parler alone, Hive has contracted with more than 1,000 moderators, said Hive’s CEO, Kevin Guo.
Social media content moderation still has a long way to go, experts say. Even Facebook’s and Twitter’s efforts fall short on occasion or make the wrong call, though those companies have put far more money into the problem than Parler has.
AI moderation is “decently good at identifying the most obviously harmful material. It’s not so great at capturing nuance, and that’s where human moderation becomes necessary,” said Sarah Myers West, a postdoctoral researcher with NYU’s AI Now Institute. Making content moderation decisions is “highly skilled labor,” West said.
Parler sets the guidelines on what Hive looks for. For example, all content that the algorithms flag as “incitement,” or illegal content threatening physical violence, is removed for all users, Peikoff and Guo said. That includes threats of violence against politicians or against immigrants seeking to cross the border.
But Parler had to compromise on hate speech, Peikoff said. Those using iPhones won’t see anything deemed to be in that category. The default setting on Android devices and the website shows labels warning “trolling content detected,” with the option to “show content anyway.” Users have the option to change the setting and, like iOS users, never be exposed to posts flagged as hate.
Peikoff said the “hate” flag from the AI review will cue two different experiences for users, depending on the platform they use. Parler’s tech team is continuing to run tests on the dual paths to make sure each runs consistently as intended.
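The article does not describe Parler’s code, but the dual experience Peikoff outlines reduces to a simple branch on platform and user preference. The field and function names below are hypothetical illustrations of that logic.

```python
# Hypothetical sketch of the dual-path handling of posts flagged as "hate".
# Function and field names are illustrative; Parler's implementation is not
# described in the article.
def render_flagged_post(post: dict, platform: str, hide_hate_for_user: bool) -> dict:
    """Decide how a 'hate'-flagged post is presented on each client."""
    if post.get("flag") != "hate":
        return {"display": "full", "post": post}

    if platform == "ios" or hide_hate_for_user:
        # iPhone users, and anyone who opts in to hiding, never see the post.
        return {"display": "hidden"}

    # Android and web default: a warning label with a click-through.
    return {
        "display": "warning",
        "label": "trolling content detected",
        "action": "show content anyway",
        "post": post,
    }
```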
The chief policy officer acknowledges that many in her world view content moderation as anti-free speech and pro-censorship. Some Parler users decry the blocking of hate speech as a sellout to Big Tech.
But Peikoff said she believes revamped moderation allows Parler to better enforce its guidelines and uphold the Constitution by effectively removing inciteful content, which is illegal and threatens free speech.
She praised Apple for its user privacy protections, something she said aligns with Parler’s goals to do the same. She said Parler also argued to Apple that it had been unfairly singled out for contributing to the Jan. 6 violence and in fact had notified the FBI of threats of violence being planned at the Capitol.
Peikoff said that Apple, after testing and browsing the app, has now observed Parler’s improved moderation and safer user experience and buys into the new moderation approach.
But content moderation, whether performed by AI, by humans or by both in concert, is famously imperfect. Peikoff said Hive recently flagged her favorite piece of art, “To Mennesker,” a sculpture of naked figures by Danish artist Stephan Sinding, for nudity when she posted it. The image was immediately covered with a splash screen indicating it was unsafe.
“Even the best AI moderation has some error rate,” Guo said. He said the company’s models show that one to two posts out of every 10,000 viewed by the AI should have been caught on Parler but aren’t.
In mid-April, Apple’s App Review Board told Parler that it would be allowed back onto its App Store.
In an April 19 letter to Sen. Mike Lee (R-Utah) and Rep. Ken Buck (R-Colo.), who had asked Apple to explain why Parler was removed from the store, Apple stated that Parler’s “proposed updates to its app and the app’s content moderation practices” were now sufficient to comply with its rules.
Peikoff said she looks forward to the platform getting a new life through the App Store and perhaps a fresh look from people who are not part of the right-wing political base. She also hopes Apple will come around to Parler’s proposal giving its users the choice to opt out of receiving hate speech content, just like Apple’s users can now opt out of being tracked.
Timed with the restoration of Parler on the App Store on Monday, Parler named George Farmer as its new chief executive. Prior to joining Parler in March as chief operating officer, Farmer worked in London as a partner in a hedge fund and has been a Brexit Party supporter and candidate. He is married to conservative pundit Candace Owens.
Meanwhile, Parler is in no rush to return to the Google Play store.
Google spokesperson Dan Jackson says the Play Store will consider reinstatement once Parler submits an updated app for Google Play. But there has been little communication between Parler and Google over the past few months, both companies said.
Peikoff said Parler is not currently pursuing a return to the Play Store, since the updated app can be sideloaded onto Android phones through Parler’s site.
Correction: A previous version of this story misquoted Sarah Myers West, a postdoctoral researcher with NYU’s AI Now Institute, as saying it’s important for highly skilled labor to make content moderation decisions. West said making the content moderation decisions constitutes highly skilled labor. The story has been corrected.
Kevin Randall is a freelance writer whose work has appeared in the New York Times, The Economist, Vanity Fair and WIRED.