Scientific Breakthroughs in Medical Research
"Less than one percent of published biomedical research is both scientifically valid and clinically useful."
Dr. Brian Haynes, clinical epidemiology professor, McMaster University
"Much of what biomedical researchers conclude in published studies is misleading, exaggerated, and often flat-out wrong."
"There is increasing concern that most current published research findings are false."
"The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance."
"Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias."
John Ioannidis, expert on credibility of medical research, Stanford University
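The framework Ioannidis describes in that quote can be made concrete. In his 2005 PLoS Medicine paper, the post-study probability that a claimed finding is true (the positive predictive value, PPV) is a function of the pre-study odds R that a probed relationship is real, the type I and II error rates, and a bias term u. The short Python sketch below is not from his paper verbatim; it is an illustrative rendering of that calculation, and the parameter values chosen are examples only:

```python
# Sketch of the positive predictive value (PPV) framework from
# Ioannidis, "Why Most Published Research Findings Are False"
# (PLoS Medicine, 2005). Parameter values below are illustrative.

def ppv(R, alpha=0.05, power=0.80, u=0.0):
    """Post-study probability that a claimed finding is true.

    R     -- pre-study odds that a probed relationship is true
    alpha -- type I error rate (significance threshold)
    power -- 1 - beta, probability of detecting a true effect
    u     -- proportion of null results reported as positive due to bias
    """
    beta = 1.0 - power
    true_pos = (1.0 - beta) * R + u * beta * R   # genuine effects claimed true
    false_pos = alpha + u * (1.0 - alpha)        # null effects claimed true
    return true_pos / (true_pos + false_pos)

# A field where one in ten probed relationships is real, no bias:
print(f"unbiased:  {ppv(R=0.1):.2f}")         # ~0.62
# The same field with modest bias (u = 0.2):
print(f"with bias: {ppv(R=0.1, u=0.2):.2f}")  # ~0.26
```

Even at the conventional 80 percent power and 0.05 significance threshold, modest bias is enough to drag the probability that a published claim is true below one in three, which is the arithmetic behind the quotes above.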
Image: Universal Biomedical Research Laboratory
Doesn't the public get all worked up with excited anticipation on reading newspaper accounts of radical new health research that has succeeded in finding that elusive cure for whatever ails the anxious reader? That reader will hope against hope to live long enough to benefit from the breakthrough, knowing that a decade may pass from laboratory finding to animal modelling to human clinical trials before government health bodies give regulatory assent to the new techniques, medicines, and protocols.
The trouble is that so many of these research projects produce papers published to great acclaim whose conclusions lean on a qualifier such as "may", "might", "can", or "could". Despite the hedging, the research and its conclusion swiftly get reported as headline news, with emphasis added here and there to garner public interest and sell papers. And though people are at first thrilled at the prospect of a cure around the corner, some researchers in the field may think differently.
A seasoned researcher, who would never dream of accelerating output on the basis of a fragile thread of supposition, might ask what selective methods were used to conclude that a specific piece of research points to a cure. Was the research contrived with a bias, an outcome determined beforehand? Was this a rigorous, double- or triple-blind study, one whose conclusion could be trusted?
Researchers anxious to publish often resort to deceptive methods, eager for the professional credit of producing yet another influential paper cited by other researchers, and for the career advancement that follows. This has become a motivating factor for many in health science seeking to impress those who may then regard them as promising experts in a particular field of study, for whom an attractive position in academia may be forthcoming.
Selective study designs use data selectively, taking care to choose the analyses or reporting of results that most closely match a predetermined outcome. The result is a heavily biased study, a common failing in research, but one that has little difficulty finding a place in science publications of good reputation. In air-pollution epidemiology, for example, researchers reporting health impacts can select from previous studies only the outcomes that suit their purposes.
Ignoring studies that fail to demonstrate the impacts a researcher is looking for heavily biases his or her own conclusions, rendering them worthless. By the same token, scientists told that their study appears to rest on bias will be affronted and go into vigorous denial. But it is not the 'evidence' unearthed that reflects the nature of science; it is the methods used. Flawed methodology produces flawed outcomes, and bias nullifies whatever evidence such studies yield.
The specific purpose of double- or triple-blinding a study is to reduce bias on the part of the scientists conducting it. Peer reviewers, for their part, are meant to read papers carefully before publication, ask probing questions, and recommend changes that clarify issues; done well by knowledgeable reviewers, the process precludes bias and assures publishers that the paper they print is scientifically defensible.
Journal publishers and their editors, on the other hand, look for novel research that will pique readers' interest. A study that fails to demonstrate health impacts may be rejected for publication, and when this occurs the bias is on the part of the publisher. As for the media, methods are irrelevant; they look for results, and the more spectacular those results can be made to appear in print, dressed in superlatives and excess, the more saleable the product.
This is an issue that starts not in the laboratory but in the classroom, where professors all too often train students in what they must think rather than in how to think their way to reliable conclusions, muddying science with prevalent social attitudes. Often the hope of procuring more research funding motivates academics to rush work to publication to impress funding agencies.
Academic education has a purpose: to teach students how to think analytically. Failing that purpose, degrees are mere papers signifying little, supporting the cynic's view that life experience aligned with plain old common sense is the practical equal of a doctorate.
Labels: Academia, Bioscience, Human Fallibility, Research