The Washington Post

Can the Chinese government really control the Internet? We found cracks in the Great Firewall.

A man on a mobile phone passes a portrait of Mao Zedong in Beijing on May 20, 2016. Mobile device usage and e-commerce are in wide use in China despite serious restrictions on Internet access. (Michael Robinson Chavez/The Washington Post)

The Great Firewall of China, the vast hardware and software system the Chinese government uses to prevent access to certain Internet content, is often depicted as monolithic and Orwellian.

Our research uses new online analysis methods and reveals serious cracks. Although extensive, the Chinese Communist Party's (CCP) system of control over online social interaction is quite diffuse — and at times incoherent. New moves by Beijing, however, appear designed to shore up the Firewall.

Our analysis found many inconsistencies in government astroturfing, or faking “grass-roots” comments on news sites and social media. Using metadata from a database of more than 50 million posts on Chinese news websites, we identified government astroturfers (known colloquially as the Fifty Cent Party).



Chinese astroturfing work is highly decentralized and erratic, and the fake posts are often at odds with the expressed preferences of the central government. This is most likely a reflection of the fragmentation and internal conflict within the CCP, across different bureaucracies and regions.

Our research on what gets deleted shows similar inconsistencies. China’s Firewall relies on censors to delete content that suggests collective action against the government — protests, strikes, petitions — or taunts to the leadership.

Government censors do not uniformly and exclusively target collective action, as others have suggested. Censors target political humor almost as frequently. Posts depicting President Xi Jinping as Winnie the Pooh or his baby doppelgänger are heavily censored, for instance.

Here’s a closer look at how information control in China works — or not:

1) The government hogs the Internet sofa

Recent work holds that astroturfing is about strategic distraction through "cheerleading" for the state. But the Chinese government's expressed intent is to guide the formation of opinions, changing minds through the manipulation of available information.

Manuals on “online opinion guidance” instruct paid commentators in China to “hog the seats on the Internet sofa” by bombarding social media with pro-government messages. By quickly “diluting negative attitudes online” and spreading “positive energy” (正能量), government commentators can “guide public opinion as it develops … flood the comment section … and tirelessly grasp for the right to speak.”


How does this work in practice? At the Group of 20 meeting in September 2016, President Xi Jinping misread his teleprompter, telling people to “loosen their clothes” instead of “lighten the burden of farmers.” Government censors deleted online news references to the flub.

At the same meeting, Xi's wife drew favorable attention for her outfit, a traditional dress that emphasized her figure. These comments also disappeared. Astroturfers quickly filled the comment sections of G-20 news stories with "sofa" posts such as, "I support Xi Jinping! Under his leadership, China will become stronger and stronger!"

These comments take the first seats on the Internet sofa, pushing down negative comments such as the teleprompter mistake. In a sea of “upvoted,” highly visible fawning posts, “negative” comments are diluted until the censors can delete them.


These "positive energy" comments, especially when posted early and often, may discourage those with antigovernment opinions from voicing their complaints — or prompt them to reassess their positions once the astroturfed commentary appears.

2) Censors and astroturfers work at cross purposes, at times

We found examples where censors in one agency were deleting content identical to content produced by astroturfers.

After the July 2016 arbitration ruling on the South China Sea, handed down in The Hague in favor of the Philippines, China's censors worked overtime to dampen protests from nationalistic "angry youth." The censors' targets weren't surprising: international content, posts supporting the ruling and calls for collective action. However, most of the censored posts were from ultranationalists advocating war with the United States or the Philippines.

But some astroturfers used racist, offensive language to demean the Philippines and the United States — both of which they blamed for meddling in the South China Sea. Instead of de-escalating per government instructions, astroturfers posted the same vitriolic messages that were heavily censored on Sina Weibo, such as, "It's no use arguing with jackals, it's more effective to hunt them down with a rifle," and "Dispatch warships … Provoke the enemy. Open-fire when the Americans come." Others called Filipinos "misbehaving monkeys."

This astroturfing behavior is inconsistent with other scholars' findings, which reported a predominance of "cheerleading" comments in Chinese astroturfing. Our data show only 36 percent of comments related to the Hague ruling were "cheerleading." More than 50 percent of the astroturfed commentary argued for war, included racial slurs against Filipinos, insulted other netizens or expressed rage over the ruling.

3) Beijing puts all hands on deck for ‘Public Opinion Emergencies’

Censorship and astroturfing are part of a broader information control strategy. Government opinion management manuals suggest that the same employees responsible for astroturfing also compile information on public opinion during collective action events — and use this to revise contingency plans for future “public opinion emergencies.”

What happened online after the August 2015 explosions at a Tianjin warehouse shows how Beijing manages a "public opinion emergency." The explosions killed 173 people, devastated a large area of Tianjin's port and displaced nearly 3,500 residents.

This was China's most-censored event of 2015. It's likely the government had a contingency plan in place with instructions on how to defuse threatening or antigovernment opinion.

Censors aimed to restrict access to nonofficial content — to limit the spread of information calling attention to government malfeasance and health concerns. As shown in Figure 1, astroturfers vigorously posted comments that contrasted with those of angry and indignant netizens who blamed the explosion on lax enforcement. Astroturfers reframed the conversations to highlight emotions of sadness, pride and unity, eulogizing the firefighters.

In the Tianjin crisis, the quick and unified online response suggests a great deal of government planning, coordination and iterative updating of information strategies. But the response to the South China Sea ruling points to a lack of preparation and coordination — and perhaps even disagreement between different parts of the government.

Our findings suggest that information control in China is more varied and decentralized than we thought, as a Washington Post editorial also argued. China’s ability to control information is impressive, but decentralization makes the system hard to tightly control. For example, although the surveillance powers granted by China’s recently issued Cybersecurity Law are formidable, its implementation has been far from disciplined. A recent Washington Post report details how private data obtained using these new powers is being openly sold on the market, perhaps by rogue government officials.

Steps like the Cybersecurity Law suggest that Beijing is moving to solidify control over the Web — and to make sure what's out there on the Internet are the messages the CCP wants China's citizens to read.

China’s direction is clear. Beijing’s goal is what MERICS researchers Mirjam Meissner and Jost Wübbeke call “IT-backed authoritarianism,” a tighter strategy of mass surveillance and individual targeting that aims not only to guide discourse online, but to leverage it for more effective governance, propaganda and political control.

Blake Miller is a PhD candidate in political science and master’s student in statistics at the University of Michigan. He specializes in information manipulation, authoritarian politics, and natural language processing. Follow him on Twitter @bapmiller.

Mary Gallagher is a professor of political science at the University of Michigan where she also directs the Lieberthal-Rogel Center for Chinese Studies.