One Duke University surgeon called it a “new frontier” in cancer treatment. Another said it could save “10,000 lives a year” or more. A researcher at Mass General Hospital called it “a very, very exciting tool” in the fight against lung cancer. As news spread in 2006 and 2007 of the work of Anil Potti, a star cancer researcher at Duke, the excitement grew.
What he had claimed to achieve, in leading medical journals, was a genomic technology that could predict with up to 90 percent accuracy which early-stage lung cancer patients were likely to have a recurrence and therefore benefit from chemotherapy.
He had developed, Potti said in interviews at the time, a genomic “fingerprint unique to the individual patient” that would predict the chances of survival of early-stage lung cancer patients.
It was considered a breakthrough because, as the Economist explained at the time, chemotherapy is “a blunt instrument … In most cases a patient’s survival depends on whether he dies from the side effects of chemotherapy before the chemotherapy kills the cancer, or vice versa. A way to pick the right type of chemotherapy would make a big difference. Anil Potti and colleagues, of Duke University in North Carolina, have proven — in principle, at least — that they can do exactly that. Instead of prescribing chemotherapies according to a doctor’s best guess, they propose a genetic analysis to predict which type of chemotherapy would stand the greatest chance of zapping cancerous cells.”
And they had ample reason for their praise. After all, the revolutionary findings by Anil Potti and his team were first published in Nature Medicine, one of the most prestigious peer-reviewed journals in the field, and later in a host of other prestigious journals.
Now, the Office of Research Integrity (ORI), the agency that investigates fraud in federally funded medical research, has officially declared that the data generated by Potti were not only flawed but “false.”
The data were “altered,” it said in a report published Monday in the Federal Register, to produce the results the researchers desired. False data were also submitted to obtain further research grants, it concluded, citing Potti’s claim that 6 of 33 patients had responded favorably to a test when only four patients were enrolled in the trial, none of whom responded positively.
The false data, the report said, were used in papers in nine journals, including the New England Journal of Medicine, Nature Medicine, the Journal of the American Medical Association and Lancet Oncology; all of the articles have since been retracted.
The news, first reported by Retraction Watch, came as no surprise to those who had been following the case.
Questions were first raised about a year after the research was unveiled, when other researchers reported that they were unable to replicate Potti’s work.
Then, in 2010, his résumé began unraveling as well, when The Cancer Letter discovered that Potti’s claims on grant applications of having been a Rhodes Scholar were false. “It took that to make people sit up and take notice,” Steven Goodman, a professor of oncology at Johns Hopkins University, told the New York Times in 2011.
By November of that year, Duke had determined that the data from trials conducted by Potti and colleagues on dozens of patients were flawed, and he resigned. Retractions, lawsuits and a “60 Minutes” exposé followed.
Duke University had already settled, on undisclosed terms, a lawsuit brought by subjects in clinical trials based on Potti’s work, Retraction Watch reported. Potti had already been reprimanded by two medical boards, one in North Carolina and another in Missouri.
According to Retraction Watch and an answering service at the Cancer Center of North Dakota in Grand Forks, where Potti was last reported to be working, he is still practicing medicine.
The answering service declined to forward a request for comment from The Washington Post, and his lawyer did not respond to requests for comment.
According to the ORI, Potti has “entered into a voluntary settlement” in which he “neither admits nor denies” the ORI findings. Under the terms of the settlement, any research conducted by Potti using federal funds must be supervised for a period of five years. In addition, any institution employing him in federally funded research must certify that any data he provided “are based on actual experiments and are otherwise legitimately derived.”
In a statement to Retraction Watch, a spokesman for Duke Medicine said “We are pleased with the finding of research misconduct by the federal Office of Research Integrity related to work done by Dr. Anil Potti. We trust this will serve to fully absolve the clinicians and researchers who were unwittingly associated with his actions, and bring closure to others who were affected.”
The Cancer Letter reported in January that the whole scandal could have been avoided if officials at Duke had listened to a third-year medical student, Bradford Perez, who, in April 2008, as the clinical trials were getting started, sent Duke Medical School deans a three-page document warning of Potti’s misconduct.
It reported that Potti’s collaborators and deans at the school pressured Perez into not taking the matter further.
Responding to that report, Duke officials said the university had “acknowledged years ago there are many aspects of this situation that would have been handled differently had there been more complete information at the time decisions were made.”
Instead of heeding the warnings, Duke went on to use Potti’s research in advertising for its cancer center: one Duke physician featured on “60 Minutes” called his findings the “holy grail” of cancer research.
Potti was neither the first nor the last “star” researcher to have work fall apart under scrutiny. In late 2011, Diederik Stapel, a social psychologist in the Netherlands, wrote a much-discussed research paper suggesting that people exposed to litter and abandoned objects are more likely to be bigoted, reported the Post’s Joel Achenbach. When students asked to see the data behind his work, he could not readily produce it, and Stapel later admitted that he had been fabricating data for many years.
Last year, a stem cell scientist who co-wrote two highly publicized stem cell articles in Nature committed suicide in the wake of a scandal over fraudulent research. In May, a highly publicized and influential scholarly study about people’s views on same-sex marriage was disavowed by one of its authors and ultimately retracted.
“Every day, on average, a scientific paper is retracted because of misconduct,” Ivan Oransky and Adam Marcus, who run Retraction Watch, wrote in a New York Times op-ed in May.
“Two percent of scientists admit to tinkering with their data in some kind of improper way. That number might appear small, but remember: Researchers publish some 2 million articles a year, often with taxpayer funding. In each of the last few years, the Office of Research Integrity, part of the United States Department of Health and Human Services, has sanctioned a dozen or so scientists for misconduct ranging from plagiarism to fabrication of results. Not surprisingly, the problem appears to get worse as the stakes get higher.”