The big idea: Is it better to rely on the best expert or the average of many experts’ forecasts?

The scenario: For more than 30 years, the Wall Street Journal has surveyed economists for their forecasts of economic indicators such as gross domestic product, inflation and unemployment. Annually, the newspaper scores and ranks these forecasts based on their accuracy. Top-ranked forecasters are announced and celebrated on the Journal’s Web site.

In 2012, for the first time, the Journal included the average of the economists’ forecasts as an additional panelist. How did the “crowd” perform? How did it compare to “chasing the expert”? Among the 49 panelists in 2012, the average forecast ranked 12th. Don Leavens and Tim Gill, the top-ranked team in 2011, came in fifth in 2012.

James Surowiecki, in his bestselling book, “The Wisdom of Crowds,” popularized the notion that asking a crowd of people to forecast an event is often better than trying to find the one person who has the right answer. Nonetheless, when a given population has a dramatic range of forecasting expertise, some individuals in the crowd might stand out. In such cases, it may be better to seek out the best expert rather than rely on an average forecast.

Rick Larrick and Jack Soll, business professors at Duke University, have shown that when given a chance to do so, people often prefer to rely on experts. In laboratory experiments, they found that when experts disagreed, people would single out the "most able" among them and trust that individual's judgment more.

Despite this perception, the average forecast often outperforms the best individual's forecast. Such outperformance happens when forecasts bracket the true result, that is, when some forecasts fall above the truth and others fall below it.

When two forecasts bracket the truth, the error of their average will always be less than the average of their individual errors, because errors on opposite sides of the truth partially cancel. And in many cases, the average forecast's error will be smaller than any individual's.
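A small numerical sketch makes the bracketing effect concrete. The numbers here are hypothetical, not drawn from either survey: suppose the true value is 3.0 and two forecasters say 2.0 and 5.0.

```python
# Bracketing effect: when individual forecasts straddle the truth,
# the error of the average forecast is smaller than the average of
# the individual errors. (Hypothetical numbers for illustration.)

truth = 3.0                # the true value of the indicator
forecasts = [2.0, 5.0]     # two forecasts that bracket the truth

avg_forecast = sum(forecasts) / len(forecasts)                    # 3.5
error_of_avg = abs(avg_forecast - truth)                          # 0.5
avg_of_errors = sum(abs(f - truth) for f in forecasts) / len(forecasts)  # 1.5

print(error_of_avg, avg_of_errors)  # 0.5 1.5

# The errors partially cancel: 0.5 < 1.5. If both forecasts sat on
# the same side of the truth (no bracketing), the two quantities
# would be equal, so averaging can never do worse than the average
# individual error.
```

This is the arithmetic behind the claim that averaging "can do no worse than the average expert": without bracketing the average forecast matches the average individual error, and with bracketing it beats it.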

The Federal Reserve Bank of Philadelphia has been surveying professional forecasters for decades about macroeconomic variables. Its focus, however, is on the panelists' average forecast; no single panelist's accuracy is highlighted. The bank also makes its long history of individual panelists' forecasts conveniently available. How has its crowd performed relative to the best expert?

The resolution: For 2003-12, the Philadelphia Fed’s data favor the crowd.

When the top-ranked expert at forecasting nominal GDP in one quarter was ranked again in a subsequent quarter, that expert scored at the 49th percentile on average, and as low as the second percentile. The average forecast, on the other hand, scored at the 60th percentile on average and never fell below the 47th percentile. The crowd beat the expert in 63 percent of the 40 quarters. Not surprisingly, bracketing was common: a pair of forecasters drawn from the survey bracketed the truth 28 percent of the time, on average.

The lesson: Unlike with stock picking, when it comes to forecasting, averaging does not yield average performance. Averaging can do no worse than the average expert, and often does better than the best expert. By relying on the average forecast, one can also avoid the large forecasting errors that even the best expert occasionally makes.

Yael Grushka-Cockayne and Kenneth C. Lichtendahl Jr.

Grushka-Cockayne is an assistant professor and Lichtendahl is an associate professor at the University of Virginia’s Darden School of Business.