Signage is displayed outside Facebook’s headquarters in Menlo Park, Calif., on Oct. 30. (David Paul Morris/Bloomberg News)

The U.S. midterm elections are over, but many Americans — and people around the world — are still worried about “fake news,” particularly disinformation. Now that social media has replaced mainstream news outlets as the most common source of information about our world, how can a reader know the difference between fact and conspiratorial fiction?

Nick Adams of Goodly Labs may have an answer. With colleagues, he has put together a tool called Public Editor that helps ordinary citizens to vet the quality of online news articles. I recently reached out to Nick with questions. What follows is a lightly edited version of our online conversation.

Joshua Tucker: What is Public Editor?

Nick Adams: Public Editor is a free online system allowing the public to take more responsibility for vetting the news articles that so many of us are reading, liking and sharing through social media. It takes in a news article, and 30 minutes later, after a lot of well-organized volunteer effort, the article is covered with labels pointing out fallacies and biases in its content, issues with tone, and even subtle inferential mistakes.

It also gives the article a 0-100 credibility score that content platforms like Facebook, Google and Twitter can use to promote higher-quality articles in their search results and news feeds. And all of Public Editor’s labels and scores will appear automatically over news content for readers who install our browser plug-in.

JT: What led you to want to create Public Editor?

NA: My commitment to the project really kicked into high gear as “fake news” and misinformation became a hot issue around the 2016 elections. But Public Editor really began as a sort of back-burner project for me and Nobel Prize-winning physicist Saul Perlmutter back in 2015. At that time, I was in the process of creating collaborative software (called TagWorks) that people can use to deeply analyze documents — tons of documents — with the help of Internet-based workers.

Back then, I was a fellow at the UC Berkeley Institute for Data Science, where Saul is still director. He asked me if the software might be useful for teaching the students in his own very popular critical thinking course to find examples of thinking errors in the news.

We started hashing out some designs. Once it became clear that some bad actors had weaponized news-sharing through social media to intentionally misinform voters, my nonprofit citizen science lab, the Goodly Labs, really kicked into high gear on the project.

JT: How can volunteers on the Internet be utilized for a task as complex as identifying the reliability of news?

NA: Well, science is a process of inquiry anyone can use to test the validity of some idea. And people do little bits of science all the time — maybe a household experiment to see whether vegetables stay fresher on the counter or in the fridge. But too many people feel like science is only for experts in ivory towers. Certainly, academics are often great scientists. They are usually building on and improving centuries of carefully sedimented knowledge. But a lot of scientific work — if it’s well-organized — can absolutely be done by citizens, by everyday people.

At the Goodly Labs, we don’t put untrained citizens in charge of our statistical analyses, but we can get their valuable help collecting and curating data. Birdwatching societies have been doing this for hundreds of years to track migratory patterns. And Chris Lintott, the British astronomer behind Galaxy Zoo, has gotten citizens to accurately classify millions of galaxies. Now, we’re getting the help of citizens to measure the features of the social world we all participate in creating and interpreting.

JT: So what exactly are these volunteer workers doing as part of the Public Editor process?

NA: There are a number of different tasks volunteers can perform. In every case, Public Editor presents some passage of text to the volunteer — sometimes a full article, but usually a shorter passage — and asks them to look for different types of information. Prompted by a series of questions, the volunteers hunt for information in the text that might reveal problems with language or tone, logical fallacies, improper use of evidence, or mistaken reasoning about probabilities.

JT: So what sort of public participation is needed to make this work, and how do you think you can achieve these levels of participation?

NA: We believe that we can label and score somewhere between one-third and two-thirds of any given news reader’s content just by evaluating the top 100 most-shared articles per day. That could have a substantial positive impact. Since an article often remains most-shared for a couple of days or so, we’re really talking about evaluating an average of 50 articles per day. It only takes 40 people working in parallel to evaluate an article. That means we would need 2,000 people to spend 15 minutes each day in Public Editor.
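The staffing figures in this answer follow from simple arithmetic. A minimal back-of-the-envelope sketch (the numbers are the interview’s own; the variable names are mine):

```python
# Public Editor staffing estimate, using the figures from the interview.

TOP_SHARED_PER_DAY = 100       # most-shared articles tracked each day
DAYS_ARTICLE_STAYS_SHARED = 2  # an article stays most-shared ~2 days
VOLUNTEERS_PER_ARTICLE = 40    # people evaluating one article in parallel
MINUTES_PER_VOLUNTEER = 15     # daily time each volunteer contributes

# Because articles persist on the most-shared list for about two days,
# only roughly half of the top 100 are new on any given day.
new_articles_per_day = TOP_SHARED_PER_DAY // DAYS_ARTICLE_STAYS_SHARED

# If each volunteer covers one article's worth of tasks per day,
# the total pool needed is articles x evaluators-per-article.
volunteers_needed = new_articles_per_day * VOLUNTEERS_PER_ARTICLE

print(new_articles_per_day)  # 50
print(volunteers_needed)     # 2000
```

So the 2,000-volunteer figure assumes each person contributes one 15-minute session per day and each article is fully covered by 40 parallel evaluators.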

We think we can achieve that with some good recruitment campaigns. Tens of millions of people do the crossword or Sudoku each day, and those puzzles are usually more complex and time-consuming than the tasks we’re asking people to perform. Public Editor tasks give people a similar sense of accomplishment. That feeling is compounded when volunteers know their work is improving democracy, when they’re earning points and badges they can brag about on Facebook, and when they get to reap the rewards of their teamwork every time they start reading a popular news article.

JT: How can people get involved as public editors, and what does this entail?

NA: Anyone interested in volunteering should click the “Contact Us” button on our website — publiceditor.io. There’s plenty to do! It’s really simple. We give folks a little packet of information explaining the system, have them watch a seven-minute training video, and get them going on tasks. Beginners are totally encouraged to learn by doing. We give a lot of structured feedback, and we don’t start counting a volunteer’s work until they have built up their own confidence — usually after a dozen or more tasks. Everything is designed to be learning-friendly and low-stress, while also engaging and challenging.

This article is one in a series supported by the MacArthur Foundation Research Network on Opening Governance that seeks to work collaboratively to increase our understanding of how to design more effective and legitimate democratic institutions using new technologies and new methods. Neither the MacArthur Foundation nor the network is responsible for the article’s specific content. Other posts can be found here.