This is a guest post by Michael Horowitz, an associate professor at the University of Pennsylvania and currently a Council on Foreign Relations fellow at the Pentagon. —Erik Voeten

The Economist’s The World in 2014 issue just hit newsstands, focusing international attention on the geopolitical outcomes we can expect to see over the next 12-14 months. The issue features an article by University of Pennsylvania psychologist Phil Tetlock and journalist Dan Gardner on the Good Judgment Project, a research study funded by the Intelligence Advanced Research Projects Activity (IARPA, the intelligence community’s analog to DARPA), which makes such geopolitical predictions every day.

Since 2011, IARPA has posed about 100-150 questions each year to research teams participating in its ACE forecasting tournament, on topics such as the Syrian civil war, the stability of the Eurozone and Sino-Japanese relations. Each research team is required to gather individual predictions from hundreds or thousands of forecasters online and to generate daily collective forecasts that assign realistic probabilities to possible outcomes.

The Good Judgment Project emerged as the clear winner, and its forecasters have demonstrated the ability to generate increasingly accurate forecasts that have exceeded even some of the most optimistic estimates at the beginning of the tournament. The accompanying graphic shows the predictions from three GJP forecasting methods on a recent question concerning whether the first round of chemical weapons inspections in Syria would be completed before Dec. 1, an incredibly important and complex topic. Each prediction is a probability forecast that can range from 0 (a forecast of a 0% chance that the event will occur) to 1 (a forecast that the event is certain to occur).

Graph by Michael Horowitz

In this case, the question resolved as a “yes” because the Organization for the Prohibition of Chemical Weapons (OPCW), which was conducting the inspections, confirmed that it had completed the first round of inspections before Dec. 1. Since the question resolved as a “yes,” the closer each forecasting method’s predictions were to 1, the better that method did. As the graph shows, after some initial uncertainty about whether inspections would occur, our forecasters generally converged on the correct answer well before the outcome.
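The “closer to 1, the better” logic can be made concrete with the Brier score, the standard accuracy measure in forecasting tournaments like ACE: the squared distance between a probability forecast and the actual 0/1 outcome, where lower scores are better. A minimal sketch (the function name and example numbers are illustrative, not the tournament’s actual scoring code):

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast (0 to 1)
    and the resolved outcome (0 = no, 1 = yes). Lower is better."""
    return (forecast - outcome) ** 2

# The Syria inspections question resolved "yes" (outcome = 1),
# so a confident forecast near 1 scores far better than hedging at 0.5.
print(round(brier_score(0.9, 1), 3))   # confident and correct
print(round(brier_score(0.5, 1), 3))   # maximally uncertain
```

Because the penalty is quadratic, the score rewards forecasters who are both well calibrated and decisive, which is why converging on the correct answer early matters.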

How does the Good Judgment Project achieve such strikingly accurate results? The Project uses modern social-science methods ranging from harnessing the wisdom of crowds and running prediction markets to assembling teams of forecasters. The GJP research team attributes its success to a blend of getting the right people (i.e., the “right” individual forecasters) on the bus, offering basic tutorials on inferential traps to avoid and best practices to embrace, concentrating the most talented forecasters into super teams, and constantly fine-tuning the aggregation algorithms it uses to combine individual forecasts into a collective prediction on each forecasting question. The Project’s best forecasters are typically talented and highly motivated amateurs, rather than subject matter experts.
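To give a flavor of what an aggregation algorithm does, here is a deliberately simple sketch: average the individual forecasts, then “extremize” the pooled probability toward 0 or 1 to compensate for individuals sharing information and hedging. The exponent below is an illustrative assumption, and GJP’s actual algorithms are considerably more sophisticated (for example, weighting forecasters by track record):

```python
def aggregate(forecasts: list[float], a: float = 2.0) -> float:
    """Combine individual probability forecasts into one collective
    prediction: simple mean, then an extremizing transform that pushes
    the pooled probability toward 0 or 1 (a > 1 sharpens; a = 1 is a
    plain average). The value a = 2.0 is an illustrative choice."""
    p = sum(forecasts) / len(forecasts)
    return p**a / (p**a + (1 - p)**a)

# Three forecasters lean "yes"; the extremized collective forecast
# is more confident than any individual.
print(round(aggregate([0.6, 0.7, 0.8]), 3))
```

The intuition behind extremizing is that when many partially independent forecasters all lean the same way, the evidence supporting that direction is stronger than any one of their cautious individual numbers suggests.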

Here’s your chance to “get on the GJP bus.” If you enjoy world politics and appreciate a good challenge, consider joining the Good Judgment Project, which has openings right now for Season 3 forecasters. The Project will give you the opportunity to receive training, to get regular feedback on your forecasting accuracy, and to test your forecasting skills against those of some of the most accurate forecasters around. Interested? To find out more and to register, go to