Brian Nosek, director of the Center for Open Science, led a 2015 study that is now being harshly criticized. (Bill O'Leary/The Washington Post)

In a blistering announcement Thursday, scientists at Harvard University and the University of Virginia condemned the results of a landmark 2015 study that concluded more than half of 100 published psychology studies were not replicable.

The scientists said the research methods used to reproduce those studies were poorly designed and inappropriately applied, and that they introduced statistical error into the data. The result: a gross overestimation of the failure rate.

The 2015 meta-analysis, conducted by the nonprofit Center for Open Science and published in the journal Science, made headlines around the world. At the time, the journal's senior editor declared that "we should be less confident about many of the experimental results that were provided as empirical evidence in support of those theories."

Harvard psychologist Daniel Gilbert, a lead author of the critique, noted that such conclusions did significant harm to psychological research.

“This paper has had an extraordinary impact,” Gilbert said in a statement released Thursday. “It led to changes in policy at many scientific journals, changes in priorities at funding agencies, and it seriously undermined public perceptions of psychology.”

The first problem that he and his team noted was the center's non-random selection of studies to replicate.

"What they did is created an idiosyncratic, arbitrary list of sampling rules that excluded the majority of psychology subfields from the sample, that excluded entire classes of studies whose methods are probably among the best in science from the sample, and so on," according to the Harvard release. "Then they proceeded to violate all of their own rules. ... So the first thing we realized was that no matter what they found — good news or bad news — they never had any chance of estimating the reproducibility of psychological science, which is what the very title of their paper claims they did."

Among the most egregious errors: The replicated research was anything but a repeat of the original experiment. One example was a study of race in which white and black students at Stanford University discussed affirmative action. Instead of reproducing the experiment at Stanford, however, the center's scientists ran it with students at the University of Amsterdam.


After realizing this lack of fidelity to the original research, the center's scientists sought to remedy the situation by running the replication again, this time at Stanford. When they did, Gilbert and his team found, the results were indeed reproducible. But this outcome was never acknowledged in the 2015 study.

Once the mistakes in that research were accounted for, the reproducibility rate was "about what we should expect if every single one of the original findings had been true," said co-author Gary King, director of Harvard's Institute for Quantitative Social Science.
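The statistical point is that even faithful replications of genuinely true findings will sometimes miss significance purely because of sampling error and limited statistical power. The simulation below is a minimal sketch of that idea; the effect size, sample sizes and significance threshold are hypothetical assumptions chosen for illustration, not figures from the 2015 study or the critique.

```python
# Illustrative sketch: why replications of true effects can still "fail."
# Every simulated effect below is real, yet only a fraction of replications
# reach p < alpha, because of sampling error and limited power.
# The parameters are hypothetical, not drawn from either paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def replication_success_rate(true_effect=0.4, n_per_group=50,
                             n_studies=10_000, alpha=0.05):
    """Fraction of two-sample replications reaching p < alpha
    when the underlying effect is genuinely true in every case."""
    successes = 0
    for _ in range(n_studies):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(true_effect, 1.0, n_per_group)
        _, p_value = stats.ttest_ind(treated, control)
        if p_value < alpha:
            successes += 1
    return successes / n_studies

# Under these assumptions, only about half of replications "succeed,"
# even though every original finding is true.
print(replication_success_rate())
```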

"So the public hears that 'Yet another psychology study doesn't replicate' instead of 'Yet another psychology study replicates just fine if you do it right and not if you do it wrong,' which isn't a very exciting headline," King said.

The 2015 study took four years and 270 scientists to conduct and was led by Brian Nosek, director of the Center for Open Science and a University of Virginia psychology researcher.

Nosek, who took part in the new investigation, said Thursday night that the bottom-line message of the original undertaking was not that 60 percent of studies were wrong "but that 40 percent were reproduced, and that's the starting point."

As for the follow-up critique, it's another way of looking at the data, he said. Its authors "came to an explanation that the problems were in the replication. Our explanation is that the data is inconclusive."

Gilbert stressed that his team's work was a straightforward review. "Let's be clear, no one involved in this study was trying to deceive anyone," he said. "They just made mistakes, as scientists sometimes do. ... So this is not a personal attack, this is a scientific critique. We all care about the same things: doing science well and finding out what's true."

The critique is being published Friday as a commentary in Science.
