Science has lost its way, at a big cost to humanity
In today's world, brimful as it is with opinion and falsehoods masquerading as facts, you'd think the one place you can depend on for verifiable facts is science.
You'd be wrong. Many billions of dollars' worth of wrong.
A few years ago, scientists at the Thousand Oaks biotech firm Amgen set out to double-check the results of 53 landmark papers in their fields of cancer research and blood biology.
The idea was to make sure that research on which Amgen was spending millions of development dollars still held up. They figured that a few of the studies would fail the test, with the original results proving irreproducible because the findings were especially novel or described fresh therapeutic approaches.
http://www.latimes.com/business/la-fi-hiltzik-20131027,0,1228881.column
Warren Stupidity
(48,181 posts)
The philosophical underpinnings of the scientific method require reproducibility as one of the lynchpins of scientific knowledge.
bemildred
(90,061 posts)
What is essentially a doctrine and methodology of extreme doubt and empiricism, phenomenology essentially, is being misused to flog propaganda for dogmatic and authoritarian commercial and political bullshit, and we are submerged in that.
bemildred
(90,061 posts)
It sounds like a lot of cases of outlier results being followed by reversion to the norm, plus a certain optimism about the reliability of statistical methods, and confirmation biases; it's sort of like magical thinking. I'm appalled at what I read there, now that I think about it.
Warren Stupidity
(48,181 posts)
HuckleB
(35,773 posts)
Warren Stupidity
(48,181 posts)
Read the linked article. What we "know" is highly questionable.
HuckleB
(35,773 posts)
xocet
(3,871 posts)It seems that the main initial complaint is that a (possibly large) number of biomedical research studies have not been confirmed by independent studies and that the original researchers are driven to publish questionable and sensational results so that they can achieve a measure of professional success - however fleeting that may be.
The irony is that in order to achieve journalistic success - however fleeting that may be - the author of the article resorts to a sensational, over-broad headline to garner attention for his story.
Science is a lot more than just biomedical research, and the fact that other scientists are going back over original research to check it is not in itself a problem. The real problem is that science is not properly supported financially, but one cannot have that when the House Committee on Science, Space, and Technology is run by a bunch of Republican Congressmen.
At any rate, the article's sensationalistic headline serves neither the scientific community nor the journalistic community.
Researchers are rewarded for splashy findings, not for double-checking accuracy. So many scientists looking for cures to diseases have been building on ideas that aren't even true....
By Michael Hiltzik
October 27, 2013
...
The Economist recently estimated spending on biomedical R&D in industrialized countries at $59 billion a year. That's how much could be at risk from faulty fundamental research.
...
http://www.latimes.com/business/la-fi-hiltzik-20131027,0,1228881.column#axzz2ixV2MVQI
Here is a suggestion for a follow-up article:
Journalists are rewarded for splashy headlines, not for accuracy. So many citizens looking for reliable information upon which to base their opinions have been building on ideas that aren't even true....
By Michael Hiltzik
October 28, 2013
....
http://www.latimes.com/business/la-fi-hiltzik-20131038,1,2339992.column#axzz3ixV3MVQII
404
HuckleB
(35,773 posts)
Pilot studies are almost always "wrong." That's nothing new. It doesn't mean they're not valuable.
Are Most Medical Studies Wrong?
http://theness.com/neurologicablog/index.php/are-most-medical-studies-wrong/
Reporting Preliminary Findings
http://www.sciencebasedmedicine.org/reporting-preliminary-findings/
Further, there is a growing movement to push for the publication of negative studies. (THANK GOODNESS!)
Negative results in medical research and clinical trials: an interview with Ben Goldacre
http://blog.f1000research.com/2013/06/10/negative-results-in-medical-research-and-clinical-trials-an-interview-with-ben-goldacre/
bemildred
(90,061 posts)
I thought that was the point of the OP, really: more skepticism. So we can certainly turn that back on it, and thereby reaffirm its point.
WRT your posts, I would want any serious work published, positive, inconclusive, or negative. Of course, you are going to have fights over what is and is not "serious", but sometimes there is just no closed-form solution and you have to rely on heuristics.
I think all findings are preliminary, and ought to stay that way.
I do not know enough to comment on what ought or ought not be published in medical research, otherwise than I have, and that was my math training speaking.
bemildred
(90,061 posts)
I don't really see that they disagree with each other much.
I do find Ioannidis' argument very questionable, but since he is just arguing for plausibility, it seems OK.
It's a very narrow discussion, which Hiltzik transposes into a wider context.
I think Ioannidis' summary is as good as any:
Is it unavoidable that most research findings are false, or can we improve the situation? A major problem is that it is impossible to know with 100% certainty what the truth is in any research question. In this regard, the pure gold standard is unattainable. However, there are several approaches to improve the post-study probability.
Better powered evidence, e.g., large studies or low-bias meta-analyses, may help, as it comes closer to the unknown gold standard. However, large studies may still have biases and these should be acknowledged and avoided. Moreover, large-scale evidence is impossible to obtain for all of the millions and trillions of research questions posed in current research. Large-scale evidence should be targeted for research questions where the pre-study probability is already considerably high, so that a significant research finding will lead to a post-test probability that would be considered quite definitive. Large-scale evidence is also particularly indicated when it can test major concepts rather than narrow, specific questions. A negative finding can then refute not only a specific proposed claim, but a whole field or considerable portion thereof. Selecting the performance of large-scale studies based on narrow-minded criteria, such as the marketing promotion of a specific drug, is largely wasted research. Moreover, one should be cautious that extremely large studies may be more likely to find a formally statistically significant difference for a trivial effect that is not really meaningfully different from the null [32–34].
Second, most research questions are addressed by many teams, and it is misleading to emphasize the statistically significant findings of any single team. What matters is the totality of the evidence. Diminishing bias through enhanced research standards and curtailing of prejudices may also help. However, this may require a change in scientific mentality that might be difficult to achieve. In some research designs, efforts may also be more successful with upfront registration of studies, e.g., randomized controlled trials.
Finally, instead of chasing statistical significance, we should improve our understanding of the range of R values (the pre-study odds) where research efforts operate [10]. Before running an experiment, investigators should consider what they believe the chances are that they are testing a true rather than a non-true relationship. Speculated high R values may sometimes then be ascertained. As described above, whenever ethically acceptable, large studies with minimal bias should be performed on research findings that are considered relatively established, to see how often they are indeed confirmed. I suspect several established classics will fail the test [36].
Nevertheless, most new discoveries will continue to stem from hypothesis-generating research with low or very low pre-study odds. We should then acknowledge that statistical significance testing in the report of a single study gives only a partial picture, without knowing how much testing has been done outside the report and in the relevant field at large. Despite a large statistical literature for multiple testing corrections [37], usually it is impossible to decipher how much data dredging by the reporting authors or other research teams has preceded a reported research finding. Even if determining this were feasible, this would not inform us about the pre-study odds. Thus, it is unavoidable that one should make approximate assumptions on how many relationships are expected to be true among those probed across the relevant research fields and research designs. The wider field may yield some guidance for estimating this probability for the isolated research project. Experiences from biases detected in other neighboring fields would also be useful to draw upon. Even though these assumptions would be considerably subjective, they would still be very useful in interpreting research claims and putting them in context.
Since he is basically arguing for more context, considering studies in their context, less enthusiasm and more doubt and looking around, I would think that would be good.
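Ioannidis' pre-study-odds argument is easy to make concrete. In his 2005 paper, the positive predictive value of a claimed finding, given pre-study odds R, significance level alpha, and power 1 - beta, is PPV = (1 - beta)R / (R - beta*R + alpha). A minimal sketch in Python; the example odds below are illustrative, not taken from the article:

```python
def ppv(R, alpha=0.05, power=0.8):
    """Positive predictive value of a statistically significant finding.

    R     : pre-study odds that the probed relationship is true
    alpha : significance threshold (type I error rate)
    power : 1 - beta, probability of detecting a true effect
    """
    beta = 1.0 - power
    return (power * R) / (R - beta * R + alpha)

# Well-motivated confirmatory question: 1:1 odds the effect is real.
print(round(ppv(1.0), 3))    # 0.941 - a significant result is probably true

# Exploratory screen: only ~1 in 1000 probed relationships is true.
print(round(ppv(0.001), 3))  # 0.016 - most "significant" findings are false
```

This is exactly the asymmetry he describes: the same alpha and power give very different post-study probabilities depending on where the research effort operates on the R scale.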
Igel
(35,337 posts)
It's not "science." It's "scientists."
Reproducibility matters, but there's no professional gain in duplicating research. You wind up reproducing research only as graduate-student training, when you use it in your own research and fail to get the expected results and go backtracking for the reason, or when you think it's whacked.
Lots of research never gets checked. Much of that gets believed, however.
Often the problem is that the scientists, psychologists, etc., have taken a "statistics for the _________ sciences" class, one that focuses on how to use various statistical techniques without actually having to understand the underpinnings of the techniques you're using.
I saw dozens of prestigious, peer-reviewed papers come unravelled when a guy with an MS in statistics took on his PI's groundbreaking paper and showed that, for that technique, the most common stat reference had omitted an important point: you start counting degrees of freedom not from 1, but from 0. (Later on a +1 is added to n, which simplifies things.) Everybody started counting from 1. Oops. That was only discovered because the PI thought a competitor's article was horribly wrong but couldn't find the error. So he had a couple of students replicate it. When it didn't fail replication, he had them reanalyze his own data. There was a problem, so he had a student who was about to dissertate summarize the literature for the lab. Anybody without a good grounding in stats would have beaten his head against the wall.
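The off-by-one hazard in that anecdote generalizes: near a significance threshold, a single unit of degrees of freedom can flip a verdict. A sketch with hypothetical numbers (not the actual papers in question), using scipy:

```python
from scipy import stats

def two_sided_p(t_stat, df):
    """Two-sided p-value for a t statistic with the given degrees of freedom."""
    return 2 * stats.t.sf(abs(t_stat), df)

# A borderline result from a small sample (n = 5 observations).
t_stat, n = 2.70, 5
p_right = two_sided_p(t_stat, n - 1)  # correct df for a one-sample t-test
p_wrong = two_sided_p(t_stat, n)      # off-by-one in the df count

print(f"df={n - 1}: p={p_right:.4f}")  # just above 0.05: not significant
print(f"df={n}:   p={p_wrong:.4f}")    # below 0.05: looks "significant"
```

With large samples the discrepancy vanishes, which is why this kind of error survives in the literature: it only bites in exactly the small-n studies least able to absorb it.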
I've seen other papers that were assumed to be random but which were carefully constructed. The stats weren't applicable, but you couldn't tell that from the paper or the dataset that was made available online.
This, of course, is a different matter from the sweeping generalizations you see in the popular science press. And the even more egregious examples in mainstream science reporting.
I am reminded of the propensity of software engineers to fail to test their own code thoroughly, to incorporate a test harness as they go, or to test it at all as they go.
I remember the start-from-zero thing from the stats course, but it never became natural until I started coding in C, where it is the norm for everything except public display.
Every time I look at randomness I start thinking we don't have any idea what we are talking about. But it's useful.
adirondacker
(2,921 posts)by Katherine Cirullo
"Recently, Steve Horn of the DeSmog Blog uncovered shocking information that leaves us shaking our head at our nations leaders and our once trusted scholars. Embedded in the Energy Policy Act of 2005 is section 999, which describes the U.S. Department of Energy-run Research Partnership to Secure Energy for America (RPSEA). We knew previously that oil and gas companies and industry executives have funded and advised academic research on fracking, but the U.S. government has a major role in these projects, too. Federal funding of oil and gas industry controlled frackademia leaves us concerned for the future of fracking, and for our air, water and public safety."
https://www.commondreams.org/view/2013/09/18-5
bemildred
(90,061 posts)
kristopher
(29,798 posts)
This is encouraging news, imo. It shows movement towards open access and long-term accountability - things that translate directly into better quality science.
The Commons is currently in its pilot phase, during which only registered users among the cadre of researchers whose work appears in PubMed (the NCBI's clearinghouse for citations from biomedical journals and online sources) can post comments and read them. Once the full system is launched, possibly within weeks, commenters still will have to be members of that select group, but the comments will be public.
Science and Nature both acknowledge that peer review is imperfect. Science's executive editor, Monica Bradford, told me by email that her journal, which is published by the American Assn. for the Advancement of Science, understands that for papers based on large volumes of statistical data (where cherry-picking or flawed interpretation can contribute to erroneous conclusions) "increased vigilance is required." Nature says that it now commissions expert statisticians to examine data in some papers.
But they both defend pre-publication peer review as an essential element in the scientific process, a "reasonable and fair" process, Bradford says.
Yet there's been some push-back by the prestige journals against the idea that they're encouraging flawed work ...
jsr
(7,712 posts)
It's well known that much of medical research is make-work garbage.