Credit: J.J. Alcantara/The Washington Post; iStock

Three-year-old Madeleine McCann disappeared from a resort apartment in Portugal in 2007 while on a family vacation. Her British parents were dining nearby. The McCann disappearance, which has never been resolved, became a major British tabloid story. The daily media frenzy has since faded, but a decade later, a tight-knit group of Twitter trolls who are convinced that they have proven the McCann parents’ guilt in their daughter’s disappearance still discuss the case online every single day.

John Synnott, a senior lecturer in investigative psychology at the University of Huddersfield in Britain, has long been interested in the McCann trolling, ever since he first saw it at work in about 2012.

“It was somewhat organized, it was repetitive, but the volume of information was the real surprising thing,” Synnott told me in a Skype interview recently. To find it, all you have to do is navigate to the #McCann hashtag on Twitter. #McCann will be there, as if embedded into the platform.

What you’ll see: a steady stream of chatter from a tight-knit community with a common goal. In a new paper on the #McCann group, Synnott and his co-authors tried to shed some light on how this devoted, online trolling community works.

There’s a lot of interest in trolling right now, but the body of academic work on the subject is still relatively small. That’s one reason Synnott’s team took up the topic: By better understanding “the damaging impact the McCann trolls’ behavior has had on those victimized, both online and offline,” the team hopes to eventually be able to better identify effective ways of dealing with this sort of behavior online.

“As psychologists we need to explore this behavior and see how we can contribute to our understanding of it,” Synnott said. “Our findings show that trolls approach their activity in a systematic way; they work within supportive communities, follow routines, and don’t appear to be open to changing this behavior because they don’t welcome opposing viewpoints.”

“Trolling” is hard to define, in part because it doesn’t always look the same from case to case. “Trolling” implies a combination of insincerity, provocativeness and causing harm, but not always in that order, or with the same balance. In the case of the McCann hashtag group, it would be a mistake to assume that its most devoted participants are in it for the “lulz” — joy or entertainment at someone else’s expense.

“What I’m interested in is something that people see as part of their identity,” Synnott told me. For the core members of the McCann group, “they wake up and engage in this.” Perhaps paradoxically, many of the identity trolls Synnott encountered during his team’s research worked entirely anonymously online. That anonymity, Synnott said, “affords a level of protection” to the McCann trolls on Twitter, one that “enables the boundaries to be pushed, of what might be considered constructive discourse.”

(For Americans, the closest Internet phenomenon to the McCann group might be the #pizzagate conspiracy hashtag that exploded online after the election, or those who harass the parents of the children who died in the Sandy Hook shootings, believing a conspiracy theory that the massacre was a “false flag.”)

The group is defined as much by its belief in the guilt of McCann’s parents as by its behavior toward those who disagree. “You’re either with them or against them. If you’re against them, you can expect to hear from them,” Synnott said. McCann hashtaggers have a way of showing up in the mentions of people who express opinions on the case on Twitter, in other words. Some of them also spend a lot of time distributing offensive images and memes meant to get a rise out of another dedicated subset of Twitter users: those who regularly defend the McCanns against the McCann trolls, and believe they were not involved in their daughter’s death.

While the content produced by the McCann trolls is readily available, Synnott and his team wanted to know how the McCann group actually worked. What strategies bind them together? How do these conversations work? And how can researchers figure out ways to understand what’s going on within groups like these? That third question, as the paper makes clear, is pretty challenging to answer.

The researchers tried to talk to the people they had identified as core members of the #McCann community, but that attempt was pretty unsuccessful. “If you try to speak to them directly, they don’t want to. They know you’re going to tell them stuff they don’t want to hear,” he said. So they tried something else: setting up a Twitter account and joining in the discussion. They did so by introducing into the discussion a scientific paper that debunks the core piece of evidence cited by the McCann group. From the paper:

This conversation was initiated on the evening of July 5th, 2015, after the researcher posted a tweet stating that cadaver dogs make false positive errors 10-20% of the time when working in hot temperatures, alongside a link to the journal article from which this finding was obtained. The trolls’ responses were plentiful and instantaneous, prompting an in-depth discussion which lasted approximately 3 hours.

And this is how Synnott and his team got to observe the importance of the word “shill.” Because their Twitter account was new, the #McCann hashtag members soon became suspicious of the researcher account. That led to the inevitable conclusion that the account must be a paid infiltrator, hence the “shill” label. Once the researcher account was labeled as a “shill,” things really took off:

This accusation prompted the involvement of several more anti-McCann as well as pro-McCann users, though any genuine attempts to discuss the case were rebuffed. Where references were made to the journal article, responses became hostile, with one user stating that he did not “care about the stupid article.” The discussion soon escalated, resulting in the anti-McCanns posting various insulting statements targeted at anyone who disagreed with them.

“Shill,” the team observed, acted like a bat signal for the rest of the group. As soon as someone was called a shill, several other anti-McCann trolls joined in to attack the person in question. When I asked Synnott to elaborate further on what kind of abuse he and his team received as a result of this research, he said he’d prefer not to discuss it, because he’d like his paper to “stand on its own merits.”

The work, in Synnott’s words, simply scratches the surface of the bigger questions that prompted him to try to engage with the trolls in the first place. “The paper is a vehicle for much larger issues,” he told me. “What does our online identity really represent? What does anonymity represent, and how do we understand this as psychologists?”
