Markets in Everything: Research Validation Service
Science Exchange, in partnership with the open-access publisher PLOS and the open data repository figshare, announced today the launch of the Reproducibility Initiative – a new program to help scientists, institutions, and funding agencies validate their critical research findings.
“In the last year, problems in reproducing academic research have
drawn a lot of public attention, particularly in the context of
translating research into medical advances. Recent studies indicate that
up to 70% of research from academic labs cannot be reproduced,
representing an enormous waste of money and effort,” said Dr. Elizabeth
Iorns, Science Exchange’s co-founder and CEO. “In my experience as a
researcher, I found that the problem lay primarily in the lack of
incentives and opportunities for validation—the Reproducibility
Initiative directly tackles these missing pieces.”
The Reproducibility Initiative provides both a mechanism for
scientists to independently replicate their findings and a reward for
doing so. Scientists who apply to have their studies replicated are
matched with experimental service providers based on the expertise
required. The Initiative leverages Science Exchange’s existing
marketplace for scientific services, which contains a network of over
1000 expert providers at core facilities and contract research
organizations (CROs). “Core facilities and commercial scientific service
providers are the solution to this problem,” said Dr. Iorns. “They are
experts at specific experimental techniques, and operate outside the
current academic incentive structure.”
See the related Reuters article here, which highlights why there is a need for research validation:
"Last year, Bayer Healthcare reported that its
scientists could not reproduce some 75 percent of published findings in
cardiovascular disease, cancer and women's health.
In March, Lee Ellis of M.D. Anderson Cancer Center and C. Glenn Begley, the former head of global cancer research at Amgen, reported that when the company's scientists tried to replicate 53 prominent studies in basic cancer biology, hoping to build on them for drug discovery, they were able to confirm the results of only six."
3 Comments:
There was an EconTalk episode about this with Ed Yong a couple of months ago, in which Russ Roberts also mentioned the study where they could only replicate 6 out of 53 cancer studies: "a study that identified 53 landmark studies in cancer research and allegedly 47 of these 53--47!--could not be replicated. And the part that I found--that really of course confirmed my own biases, you have to be careful here--is that the author, Begley, was talking to the author of one of these studies that couldn't be replicated, and he said: We went through the paper line by line, figure by figure; I explained that we re-did their experiment 50 times and never got their result. So, I'm interrupting here to remark: 50 times; they gave it a good shot. And then, the quote continues: he said, he'd done it--he meaning the author of the original study that had made the claim--he said he'd done it 6 times and got this result once but put it in the paper because it made the best story. And that really sums up the problem. I mean, good grief! We're talking about cancer research. It's one thing to say people can be primed by language, or you put people in a blue room and they're more creative--I don't believe these kinds of studies until they've been replicated. But cancer research? People read these things, and they get scared, and they actually change their lives about what they eat and what they do. It's kind of more important. And they've got the same problem."
That podcast was more about the problems with social science experiments, but it also pointed out the problems in other sciences; well worth listening to. I suspect that all of these studies will have to put a lot more info online - all the failed results, all the data massaging that was done - to ever be viable in the future. Right now, it is a joke how little of the experimental setup and data are actually shared, especially given how easy it is to distribute info on the internet, even if it isn't drop-dead simple yet to collect all the data together. However, that's only a matter of time and can be written into all the software soon enough.
The question to ask is, why did reporting the result of the 1-out-of-6 (or now, 1-out-of-56) trial make the "best story"? Perhaps it was in their own best interest? Perhaps it matched the prevailing narrative at the most prominent institutions? Perhaps they would be able to get more grant money for more studies? (A quick back-of-the-envelope simulation of this kind of selective reporting is sketched below.)
So the real lesson here is that even our best scientific researchers are flawed human beings, not the philosopher kings so many wish they were.
Perhaps we should recognize this is true for all people, no matter what level of prominence they achieve.
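The arithmetic behind that "best story" is worth making explicit. Here is a minimal Monte Carlo sketch, assuming a true null effect and a hypothetical 5% per-run false-positive rate (numbers chosen for illustration, not taken from the studies quoted above): if a lab runs an experiment six times and writes up whichever run "works", something publishable turns up roughly a quarter of the time.

```python
import random

ALPHA = 0.05        # assumed per-run false-positive rate (hypothetical)
RUNS_PER_LAB = 6    # "he'd done it 6 times"
N_LABS = 100_000    # simulated labs, all studying a true null effect

# A lab "publishes" if any one of its runs comes up positive by chance.
published = sum(
    any(random.random() < ALPHA for _ in range(RUNS_PER_LAB))
    for _ in range(N_LABS)
)

print(f"Per-run false-positive rate: {ALPHA:.0%}")
print(f"Chance at least one of {RUNS_PER_LAB} runs 'works': "
      f"{published / N_LABS:.1%}")   # analytically 1 - 0.95**6, about 26.5%
```

Reporting only the one run that came out positive quietly turns a nominal 1-in-20 error rate into better than 1-in-4, which is one mechanical reason the "best story" keeps beating the honest one.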
Wolf Howling called attention to this issue over 18 months ago:
The Scientific Method & Its Limits - The Decline Effect
His article points out that the problem isn't really the scientific method; it's lousy experimental design as the "science" gets softer and fuzzier and more dependent on statistical analysis. It's a reliance on peer review as a process when experiments are never reproduced (or can't be, as with so much AGW) or, if they are, are redone with an eye towards validation rather than "let the chips fall where they may" -- much less with any actual animosity or desire to prove the opposition wrong.