‘The failure to win “the war on cancer” has been blamed on many factors… But recently a new culprit has emerged: too many basic scientific discoveries… are wrong.’ ~C. Glenn Begley
It’s nothing new. Over three decades ago the problem was already well known, as memorialized in the premier journal Science in its article “Reevaluation of cancer data eagerly awaited.”
Mark Spector, having already written his thesis, was due to receive his Ph.D. in 1981 from Cornell University. He was the darling of his advisor, Efraim Racker, being groomed to take over the lab while conducting “seemingly spectacular experiments on a class of enzymes related to cell transformation.” Recognized in his lab–and outside it–as a ‘superstar,’ his phenomenal data brought together research as far-flung as that on RNA tumor viruses and the biochemistry of cancer cells into what one professor called a theory “as unifying and simplifying for studying the metabolic basis of cancer as Newton’s work was for studying mechanics.” Not tepid praise, I’d say.
Spector had the magic touch–it seemed that any experiment he ran turned to proverbial gold. And he worked hard–all recognized that–allowing Racker to spend less hands-on time in research and more time overseeing the lab and planning for his upcoming retirement. Spector was, they guessed, just a wunderkind. It was almost too good to be true.
And, with the wisdom I’ve accumulated through age, I’ve found that if something seems too good to be true–you fill in the blank.
Doubts crept into the minds of those who worked with Spector, and they brought their concerns to Racker. Racker worked hard to defend his student, but when he went to repeat certain key experiments himself, Spector’s wonder-finds could not be replicated.
Racker put his lab coat back on, prepared to re-run every single one of Spector’s experiments–estimating that the task would take at least a year, and understanding that many would not work out.
As for Spector, he withdrew his dissertation–and withdrew from Cornell University for good measure–all the while maintaining that his work was impeccable. As time went by, and experiment after experiment failed to reproduce, he became one of very few who believed that.
Although Spector’s story has its own unique touches, the inability to reproduce cancer research has become a major cause for concern in the scientific community–as well as to those waiting to benefit from what the research might discover.
I wrote a post some time back called “Play it Again Sam: Trying–and Failing–To Reproduce Scientific Results,” in which I addressed the problem of replicability at large–in Alzheimer’s studies, mental health studies, and heart disease research. In that piece I touched upon the issue with cancer research as well:
In 2009, a group of Boston researchers published a study that described how they had destroyed cancer tumors by specifically targeting the protein STK33. This research was too good to pass up.
So the biotechnology firm Amgen Inc. quickly assigned two dozen researchers to try to replicate the experiment. End goal: turning the findings into a drug that could revolutionize cancer treatment.
After six months of intensive work, the scientists threw in the towel. There simply was no replicating the results. This, apparently, came as no surprise to Amgen.
“I was disappointed but not surprised,” says Glenn Begley, at the time vice president of research at Amgen of Thousand Oaks, CA, in a December 2, 2011, Wall Street Journal article entitled “Scientists’ Elusive Goal: Reproducing Study Results.” “More often than not, we are unable to reproduce findings published by researchers in journals.”
Just to be clear, the research we’re discussing here is something called preclinical research. These studies with disastrous replicability records are not about treating humans with a given drug: long before people are treated with medications, there is an extensive stage of research that precedes clinical trials (testing on humans). This preclinical development phase is heavy on theory and addresses whether a given drug might even have the possibility of working against a given tumor. Safety data is also collected–and, in theory, replicability is assessed.
However, these preclinical studies do not exist in a bubble. They are the direct pipeline into drug manufacturing and then clinical trials, as Begley emphasizes:
“These are the studies the pharmaceutical industry relies on to identify new targets for drug development. But if you’re going to place a $1 million or $2 million or $5 million bet on an observation, you need to be sure it’s true. As we tried to reproduce these papers we became convinced you can’t take anything at face value.”
Be that as it may, the rather forlorn preclinical research situation doesn’t seem to be making a quick turnaround.
In a scandal that cut deep into cancer research, the Office of Research Integrity at the U.S. Department of Health and Human Services found that a Boston University cancer scientist, Sheng Weng, Ph.D., had made up his findings whole-cloth.
His work was published in two journals in 2009, and he has been ordered to retract the papers. But scientists learn from other scientists–and from journals–and a commentary in Gaia Health suggested that the fiasco could invalidate ten years’ worth of studies at an institution as important as the Mayo Clinic.
Reported the Wall Street Journal:
At the Mayo Clinic, a decade of cancer research, partly taxpayer-funded, went down the drain when the prestigious Minnesota institution concluded that intriguing data about harnessing the immune system to fight cancer had been fabricated. Seventeen scholarly papers published in nine research journals had to be retracted. A researcher, who protests his innocence, was fired. In another major flameout, 18 research journals have said they are planning to retract a total of 89 published studies by a German anesthesiologist …
Re-enter Mr. Begley–C. Glenn Begley–who lists himself as former head of cancer research at pharmaceutical giant Amgen and now senior vice-president of the biotechnology company TetraLogic.
Together with Lee M. Ellis, a cancer researcher at the University of Texas, he has published a paper in Nature sure to be unpopular with researcher, clinician, and consumer alike.
In “Raise standards for preclinical cancer research,” they report that, after much analysis and combing through studies at Amgen, of 53 published studies described as ‘landmark,’ only six could be successfully replicated.
An Amgen replication team of about 100 scientists had gotten to work–fast–and, sure enough, could not confirm the results. They promptly contacted the authors of the studies–and some researchers kicked in to help attack the problem. They discussed why the results didn’t replicate; some let Amgen borrow materials used in the original studies.
But some had a different approach. Some authors required that the scientists sign a confidentiality agreement that would bar them from disclosing data that contradicted the original findings.
“The world will never know” which 47 studies — many of them highly cited — are apparently wrong, Begley told journalist Sharon Begley [no relation].
Seems like he’s got something of a point there.
Amgen does not stand alone. Bayer HealthCare in Germany analyzed its in-house studies–and was singularly unimpressed. In a 2011 paper that wins points, in my mind, for its clever title, “Believe it or not,” Bayer scientists looked at ‘exciting published data’ underlying their projects. Sadly, all too much of that data could not be reproduced. Bayer soon halted nearly two-thirds of its drug projects because experiments couldn’t replicate claims from the literature–and a full 70% of those studies were cancer research.
As an additional cause of distress, some of these non-reproducible preclinical papers have taken on a life of their own and are quoted by secondary publications as if their word is law. Says Begley, these studies have “spawned an entire field, with hundreds of secondary publications that expanded on elements of the original observation, but did not actually seek to confirm or falsify its fundamental basis” [see chart below].
CHART: REPRODUCIBILITY OF RESEARCH FINDINGS
Preclinical research generates many secondary publications, even when results cannot be reproduced.
| Journal impact factor | Number of articles | Mean citations of non-reproduced articles* | Mean citations of reproduced articles |
|---|---|---|---|
| >20 | 21 | 248 | 231 |
| 5–19 | 32 | 169 | 13 |
*Source of citations: Google Scholar, May 2011.
If we’ve got, say, 248 citations in scientific journals to a study that is, fundamentally, no good, the problem has spread from the self-contained lab throughout the scientific community, and perhaps into the clinical one, as well.
There are a number of reasons for the lack of replicability, and the vast majority of them have nothing to do with fraud, hiding results, or anything of the sort. Some might be as simple as the standard ‘conflict-of-interest’ issues that affect so much of research: according to a 2009 article in Cancer, almost a third of cancer research published in premier journals involved some form of financial conflict. That might start us off in our quest to understand what’s gone wrong.
But in the interest of length, and of keeping the topics separate, I ask you to please hang on until the next post for how the situation got to be this way–and how scientists can start to get out of it.
Let’s leave poor Begley, whose frustration may be reaching its limit, to finish off for us with a story that nearly led him to blow a gasket–hasn’t the poor, frustrated man earned the final narrative of the piece?
While working on his project to reproduce promising studies, Begley met a lead scientist of one of the problematic studies for a (hopefully) friendly breakfast. [Note that the scientist's name is never revealed.]
As quoted in “In cancer science, many ‘discoveries’ don’t hold up,” Begley describes his approach:
We went through the paper line by line, figure by figure. I explained that we re-did their experiment 50 times and never got their result. He said they’d done it six times and got this result once, but put it in the paper because it made the best story. It’s very disillusioning.
Or maybe just the best story for my blog, at the moment, if perhaps not the best for the future of cancer research.
It’s a fine mess we’re in [we're 'in a pickle,' as my husband likes to say] when it comes to preclinical cancer research. Government and investment money pours into the labs, but the labs produce relatively little, given the investment, to aid the sick. If Begley’s right, the War on Cancer is floundering–because of sloppy mistakes in research.
And where, precisely, does that leave cancer sufferers, waiting, hope against hope, that a new cure will be found, before it is too late? I shudder to think of it.
But let’s address two simpler questions: How’d labs get to such a sad state of affairs–and what can be done?
Let’s check it out next time.
Begley CG, Ellis LM. Raise standards for preclinical cancer research. Nature 2012; 483:531-533.
Begley S. In cancer science, many “discoveries” don’t hold up. NewsDaily 03/28/2012.
Hawkes N. Most laboratory cancer studies cannot be replicated, study shows. BMJ 2012; 344:e2555.
Jagsi R, et al. Frequency, nature, effects, and correlates of conflicts of interest in published clinical cancer research. Cancer 2009; 115(12):2783-91.
Kolata GB. Reevaluation of cancer data eagerly awaited. Science 1981; 214(4518):316-318.