Getting a research grant from the National Science Foundation depends to a "significant" degree on "chance," a new five-year study of the government agency's funding process concludes.
While the odds are better than those in Las Vegas, a trio of New York researchers found that spinning the scientific roulette wheel is largely a matter of "which reviewers happen to be selected" to evaluate a given grant.
"The fate of a particular grant application is roughly half determined by the characteristics of the proposal and the principal investigator, and about half by apparently random elements which might be characterized as the 'luck of the draw,' " said State University of New York sociologist Stephen Cole and his colleagues Jonathan Cole and Gary A. Simon.
The National Science Foundation funds about 13,000 grants each year--about half of the applications submitted--for a total of $800 million.
In a system known as "peer review," NSF officials select a group of four or five scientists to rate a given grant on the basis of scientific merit and the ability of the researcher involved. This evaluation plays a major role in the final judgment as to which grants will be funded.
The Cole review of the process found there was "no evidence of systematic bias" in selection of NSF reviewers. Instead, there appeared to be "substantial disagreement" among the scientists selected as to whether a given proposal merited government funding.
Cole and his colleagues discovered this by conducting their own experiment: resubmitting 150 proposals in chemical dynamics, economics and solid state physics for independent evaluation by a new set of reviewers. They found the new reviewers would have "reversed" the funding outcome about one-fourth of the time.
They questioned whether the current funding approach "is the most rational one" and noted that the more proposals a given researcher submits, the greater the chance of being funded.

The Cole study suggested that the disagreement is "probably a result of real and legitimate differences of opinion among experts about what good science is or should be." While individual scientists may suffer, the study noted, the selection process may have "little effect on the rate of development of science as a whole."

NSF assistant director Jack Sanderson yesterday defended the peer review approach as "fundamentally sound" and said that emphasizing the role of chance "does not recognize the decision-making role played by the foundation staff in establishing funding priorities."

The members of the New York team were themselves good gamblers. They conducted their study for the National Academy of Sciences' Committee on Science and Public Policy, which in turn got its funding, naturally, from NSF.

The findings are published in the Nov. 20 issue of Science magazine.