Spotting an online troll is pretty easy for your average Internet user: They're the jerk hijacking otherwise earnest online conversations for their own amusement, often with the help of straw man arguments and profanities.

And a lot of online moderation today relies on a scaled-up version of this individual judgment: sites employ people whose job is to review posts that users have flagged as abusive or otherwise in violation of a site's commenting guidelines.

But there could be a better way. What if there were software that could predict whether a user was going to be a troll before their behavior could tear online communities apart?

That's one of the questions that a study submitted this month to the 9th International Conference on Web and Social Media by researchers at Stanford and Cornell universities hopes to answer.

The researchers -- Justin Cheng, Cristian Danescu-Niculescu-Mizil and Jure Leskovec -- waded through 18 months of user activity in the comment sections of news site, conservative political news site and gaming site, looking for antisocial activity. Using data provided by commenting platform Disqus, they were eventually able to identify what they called "future banned users" -- commenters who were later blocked from the site for bad behavior.

Those users, they found, tended to focus their comments on a small number of stories and were more likely to post things otherwise irrelevant to the overall conversation. Trolls' behavior also tended to get worse over time, according to the researchers -- and they were generally successful at getting a rise out of those in an online community.

"They receive more replies than average users, suggesting that they might be successful in luring others into fruitless, time-consuming discussions," the researchers said.

The researchers used their findings to design a program that could predict who would be banned in the future -- looking at things like post content, user activity, and community and moderator responses. And they had some success: They determined that they could predict with more than 80 percent accuracy whether a user would later be banned -- and needed only five to 10 of a user's posts to make a prediction.
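The paper's actual model isn't reproduced here, but the basic idea can be sketched as a logistic score over a handful of per-user signals drawn from a commenter's first few posts. Every feature name, weight and threshold below is invented for illustration; a real system would learn its weights from labeled moderation data.

```python
# Illustrative sketch only: the features and weights are assumptions,
# not the researchers' actual model. It scores a user's early posts on
# signals like off-topic rate, replies drawn, and moderator deletions.
import math

def extract_features(posts):
    """posts: list of dicts with hypothetical keys
    'off_topic' (bool), 'replies' (int), 'deleted' (bool)."""
    n = len(posts)
    return [
        sum(p["off_topic"] for p in posts) / n,  # share of off-topic posts
        sum(p["replies"] for p in posts) / n,    # average replies per post
        sum(p["deleted"] for p in posts) / n,    # share removed by moderators
    ]

def ban_probability(posts, weights=(2.0, 0.3, 3.0), bias=-2.5):
    """Logistic score in (0, 1): higher means more troll-like."""
    z = bias + sum(w * f for w, f in zip(weights, extract_features(posts)))
    return 1 / (1 + math.exp(-z))

# A user whose first five posts are mostly off-topic, heavily replied-to
# and often deleted scores far higher than a well-behaved one.
trolly = [{"off_topic": True, "replies": 6, "deleted": True}] * 5
civil = [{"off_topic": False, "replies": 1, "deleted": False}] * 5
print(ban_probability(trolly))  # high score: flag for moderator review
print(ban_probability(civil))   # low score: leave alone
```

The appeal of this kind of scoring is that it runs on just a handful of early posts, which is what makes the researchers' five-to-10-post result notable.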

But those results might not yet be good enough for sites walking a narrow line between protecting users from truly abusive behavior and banning unpopular opinions.

Some platforms are looking at ways to limit the impact of disruptive users without actually banning them. Twitter, for instance, said this week that it is testing out a product that flags potentially abusive tweets based on a "wide range of signals," such as account age and similarities to previous messages that staff deemed abusive, and then limits their reach.

But regardless of how sites approach it, online harassment is a serious problem: Some 40 percent of adult Internet users have experienced it, according to a Pew Research Center study last year. And reining in the bad actors behind some of the most aggressive behavior is a struggle for many sites, including Facebook and Twitter.

Unsurprisingly, research suggests these uncivil commenters tend to have some pretty dark personality traits. Research led by University of Manitoba graduate student Erin Buckels published in the journal Personality and Individual Differences last year found links between online trolling and narcissism, Machiavellianism, psychopathy and sadism.

The issue is particularly important for news sites. There's evidence that trolling behavior, such as ad hominem attacks in the comments of stories, can sway readers' opinions, regardless of the underlying facts -- that's one reason that Popular Science decided to do away with comments altogether in 2013.

But that's just not an option for every site, especially considering how important user contributions have become in our increasingly social online experiences. That's why systematic approaches could be a godsend to sites that otherwise must use a lot of manpower to stop their comment sections from turning into cesspools.
