Diederik Stapel, a professor of social psychology in the Netherlands, had been a rock-star scientist — regularly appearing on television and publishing in top journals. Among his striking discoveries was that people exposed to litter and abandoned objects are more likely to be bigoted.

And yet there was often something odd about Stapel’s research. When students asked to see the data behind his work, he couldn’t produce it readily. And colleagues would sometimes look at his data and think: It’s beautiful. Too beautiful. Most scientists have messy data, contradictory data, incomplete data, ambiguous data. This data was too good to be true.


Staff members at the Center for Open Science in Charlottesville, Va., develop software that allows researchers to share their data, part of a movement to improve reproducibility in experiments. (Bill O’Leary/The Washington Post)

In late 2011, Stapel admitted that he’d been fabricating data for many years.

The Stapel case was an outlier, an extreme example of scientific fraud. But this and several other high-profile cases of misconduct resonated in the scientific community because of a much broader, more pernicious problem: Too often, experimental results can’t be reproduced.

That doesn’t mean the results are fraudulent or even wrong. But in science, a result is supposed to be verifiable by a subsequent experiment. An irreproducible result is inherently squishy.

And so there’s a movement afoot, one that is rapidly building momentum. Roughly four centuries after the invention of the scientific method, the leaders of the scientific community are recalibrating their requirements, pushing for the sharing of data and greater experimental transparency.

Top-tier journals, such as Science and Nature, have announced new guidelines for the research they publish.

“We need to go back to basics,” said Ritu Dhand, the editorial director of the Nature group of journals. “We need to train our students over what is okay and what is not okay, and not assume that they know.”


Brian Nosek is a co-founder of the Center for Open Science. “We are incentivized to make our research more beautiful than it is,” Nosek says. (Bill O’Leary/The Washington Post)

The pharmaceutical companies are part of this movement. Big Pharma has massive amounts of money at stake and wants to see more rigorous pre-clinical results from outside laboratories. The academic laboratories act as lead-generators for companies that make drugs and put them into clinical trials. Too often these leads turn out to be dead ends.

Some pharmaceutical companies are now even willing to share data with each other, a major change in policy in a competitive business.

“It’s really been amazing the last 18 months, the movement of more and more companies getting in line with the philosophy of enhanced data-sharing,” says Jeff Helterbrand, global head of biometrics for Roche in South San Francisco.

But Ivan Oransky, a co-founder of the blog Retraction Watch, says data-sharing isn’t enough. The incentive structure in science remains a problem, he says, because there is too much emphasis on getting published in top journals. Science is competitive, funding is hard to get and tenure harder, and so even an honest researcher may wind up stretching the data to fit a publishable conclusion.

“Everything in science is based on publishing a peer-reviewed paper in a high-ranking journal. Absolutely everything,” Oransky said. “You want to get a grant, you want to get promoted, you want to get tenure. That’s how you do it. That’s the currency of the realm.”


Software developers Chris Seto, left, and Lyndsy Simon work on laptops at the Center for Open Science. (Bill O’Leary/The Washington Post)
A core scientific principle

Reproducibility is a core scientific principle. A result that can’t be reproduced is not necessarily erroneous: Perhaps there were simply variables in the experiment that no one detected or accounted for. Still, science sets high standards for itself, and if experimental results can’t be reproduced, it’s hard to know what to make of them.

“The whole point of science, the way we know something, is not that I trust Isaac Newton because I think he was a great guy. The whole point is that I can do it myself,” said Brian Nosek, a co-founder of a start-up in Charlottesville, Va., called the Center for Open Science. “Show me the data, show me the process, show me the method, and then if I want to, I can reproduce it.”

The reproducibility issue is closely associated with a Greek researcher, John Ioannidis, who published a paper in 2005 with the startling title “Why Most Published Research Findings Are False.”

Ioannidis, now at Stanford, has started a program to help researchers improve the reliability of their experiments. He said the surge of interest in reproducibility was in part a reflection of the explosive growth of science around the world. The Internet is a factor, too: It’s easier for researchers to see what everyone else is doing.

“We have far more papers, far more scientists working on them, and far more opportunity to see these kinds of errors and for the errors to be consequential,” Ioannidis said.

Errors can emerge from a practice called “data dredging”: When an initial hypothesis doesn’t pan out, the researcher will scan the data for something that looks like a story. The researcher will see a bump in the data and think it’s significant, but the next researcher to come along won’t see it, because the bump was a statistical fluke.
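To see how that plays out, here is a rough sketch in Python. The two groups, the 100 measured variables and every number in it are invented for illustration, not drawn from any study mentioned here. It scans pure noise for a bump, finds a few results that clear the usual p < 0.05 bar, and then checks whether any of them survive a second, independent experiment.

```python
# Illustrative only: invented data, with no real effects anywhere.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_variables = 50, 100

# Two groups measured on 100 variables, all pure noise.
group_a = rng.normal(size=(n_subjects, n_variables))
group_b = rng.normal(size=(n_subjects, n_variables))

# Data dredging: test every variable and keep whatever clears p < 0.05.
p_values = stats.ttest_ind(group_a, group_b).pvalue
hits = np.flatnonzero(p_values < 0.05)
print(f"Variables that look 'significant' on the first pass: {hits.size} of {n_variables}")

# The next researcher repeats the experiment and checks only those hits.
rep_a = rng.normal(size=(n_subjects, n_variables))
rep_b = rng.normal(size=(n_subjects, n_variables))
replicated = (stats.ttest_ind(rep_a, rep_b).pvalue[hits] < 0.05).sum()
print(f"Hits that reappear in the replication: {replicated}")
```

Run it a few times and the first count hovers around five, the fluke rate a 0.05 threshold guarantees across 100 tests, while the replication count is usually zero.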


From left: Cailey Fitzgerald, Saman Ehsan, Sara Bowman and Josh Carp work on a problem in the lunchroom. (Bill O’Leary/The Washington Post)

“There’s an aphorism: ‘If you torture the data long enough, they will confess.’ You can always get the data to produce something that is publishable,” says the Center for Open Science’s Nosek, who is a University of Virginia professor of psychology.

His center is known among its employees as “the COS,” which is both an acronym and, spoken aloud, a homophone: They’re really talking about “the cause,” the struggle to make science more robust.

Nosek’s operation has grown from two employees in April 2013 to 53 employees today, about half of them interns, with everyone crammed into an office about a block from the downtown pedestrian mall. They spend much of their time designing software programs that let researchers share their data.

So far about 7,000 people are using that service, and the center has received commitments for $14 million in grants, with partners that include the National Science Foundation and the National Institutes of Health, Nosek said.

Another COS initiative will help researchers register their experiments in advance, telling the world exactly what they plan to do, what questions they will ask. This would avoid the data-dredging maneuver in which researchers who are disappointed go on a deep dive for something publishable.

Nosek and other reformers talk about “publication bias.” Positive results get reported, negative results ignored. Someone reading a journal article may never know about all the similar experiments that came to naught.
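A back-of-the-envelope simulation makes the point. In this invented scenario (the effect size, sample sizes and study count are all assumptions, chosen only for illustration), a thousand small studies measure the same modest effect, but only the ones that happen to clear statistical significance are treated as publishable.

```python
# Illustrative only: the effect size, sample sizes and study count are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect, n_per_group, n_studies = 0.2, 30, 1000

published_effects = []
for _ in range(n_studies):
    treated = rng.normal(true_effect, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    # Only "positive" (statistically significant) results get written up.
    if stats.ttest_ind(treated, control).pvalue < 0.05:
        published_effects.append(treated.mean() - control.mean())

print(f"True effect: {true_effect}")
print(f"Studies published: {len(published_effects)} of {n_studies}")
print(f"Average effect in the published papers: {np.mean(published_effects):.2f}")
```

In runs like this, the “published” average typically comes out at two to three times the true effect, and a reader of the journals never sees the hundreds of experiments that came to naught.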

There’s a natural tendency to tidy up the experiment, to make the result prettier and less ambiguous, Nosek said. Call it airbrushed science.

“What is able to get published is positive, innovative, novel, and it’s really clean and beautiful. But most research in the laboratory doesn’t look like that,” Nosek says. “We are incentivized to make our research more beautiful than it is.”

Embarrassing cases

Scientific errors get a lot of publicity, but these embarrassing cases often demonstrate science at its self-correcting best.

Consider “cold fusion”: In 1989, two scientists claimed to have achieved nuclear fusion at room temperature, previously considered impossible. It was a bombshell announcement — but no one else could replicate their work. Cold fusion didn’t take off because mainstream scientists realized it wasn’t real.

A more recent case involved “arsenic life.” In 2010 a paper in Science suggested that a bacterium in Mono Lake, Calif., used arsenic instead of phosphorus in its DNA and represented a new form of life. Rosemary Redfield, a microbiologist, cast doubt on the conclusion, and other researchers couldn’t replicate the finding. The consensus is that it was a misinterpretation.

In early 2014, the scientific world was rocked by a tragic case in Japan. A young scientist, Haruko Obokata, claimed to have found evidence for a phenomenon called “STAP,” stimulus-triggered acquisition of pluripotency — a way to manipulate ordinary cells to turn them into stem cells capable of growing into a variety of tissues.


Haruko Obokata speaks at a news conference in Osaka, Japan, following claims that her groundbreaking stem cell study was fabricated. (Jiji Press/Agence France-Presse via Getty Images)

But no one else could reproduce the experiment. An investigation found Obokata guilty of misconduct, and she later resigned from her institute. The journal Nature retracted the STAP papers, and then the case took a horrific turn in August, when Obokata’s mentor, the highly respected scientist Yoshiki Sasai, hanged himself.

Betsy Levy-Paluck, an associate professor of psychology and public policy at Princeton, said of the reproducibility movement, “I think it’s the future.” But there has been controversy at the laboratory level: Some researchers have complained that the reformers are going overboard.

“There are worries about there being witch hunts,” Levy-Paluck said. She said it’s frightening to think about someone discovering your mistake after publication.

Some veteran scientists have sounded a cautious note when discussing the reproducibility surge.

“Look, science is complicated, because the world is complicated,” says Eric Lander, head of the Broad Institute of MIT and Harvard and co-chair of President Obama’s Council of Advisors on Science and Technology.


Eric Lander speaks in Cambridge, Mass., following the announcement of the creation of the Eli and Edythe L. Broad Institute for biomedical research. (Josh Reynolds/Associated Press)

Lander, who played a leading role in the decoding of the human genome, says the irreproducibility problem is caused in part by the many variables that go into any experiment. To take one simple example: During his research on the genome, he and his colleagues discovered that experiments were influenced by humidity levels in the lab. They had to control for that. He says the genomics community has also tightened the standard for a “significant” result, precisely to overcome the problem of statistical flukes being mistaken for discoveries.
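The logic behind that stricter standard can be sketched with a toy calculation. The numbers below are illustrative: a nominal one million tests, roughly the scale of a genome-wide scan, with no real effects anywhere, compared at the conventional 0.05 cutoff and at a Bonferroni-style cutoff of 0.05 divided by the number of tests (about 5 × 10⁻⁸, in the neighborhood of the threshold genomics settled on).

```python
# Illustrative only: a million tests with no true effects anywhere,
# so every p-value is just uniform noise between 0 and 1.
import numpy as np

rng = np.random.default_rng(2)
n_tests = 1_000_000
p_values = rng.uniform(size=n_tests)

loose = 0.05                 # the conventional single-test threshold
strict = 0.05 / n_tests      # Bonferroni-style threshold, about 5e-8

print(f"Flukes that clear p < 0.05:  {(p_values < loose).sum():,}")   # around 50,000
print(f"Flukes that clear p < 5e-8: {(p_values < strict).sum():,}")   # almost always 0
```

At the loose cutoff, tens of thousands of pure flukes look like discoveries; at the strict one, essentially none do.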

“Nobody tells you in advance what variables are going to matter. There’s an art to doing science,” Lander said. “Reproducibility is actually the heart of science. The fact that not everything is reproducible is not a surprise. The remarkable thing about the scientific enterprise is that we try to reproduce things, and we worry about it.”

The scientific enterprise is growing at phenomenal speed, spurred by a hunger for knowledge and an awareness that science usually delivers reliable answers about the nature of the world.

Brian Nosek offers up a stunning factoid: More than half of the scientists who have ever lived are alive today.

Though, yeah, someone ought to double-check that.