
Questioning the Validity of Science

Published online by Cambridge University Press: 12 February 2014



Type: Editorial

Copyright © World Association for Disaster and Emergency Medicine 2014

In October 2013, the British weekly The Economist published a series of articles questioning the validity of much of modern scientific research.1 In essence, The Economist reported that much of currently published “scientific” research is rubbish that squanders money, stymies progress, and may put lives at risk.1 The observations of The Economist are enlightening and compelling, and they support the claim that much of modern published science is flawed.1

As a relative newcomer to the area of accepted research, emergency and disaster health research is at risk of following the flawed publishing path described by The Economist. If this occurs, the field will fail to validly advance the science that affects the lives of millions across the globe. At present, common errors in disaster planning and response are repeated more often than they are corrected. For example, failures that occurred during Hurricane Katrina in the United States were later repeated in Japan and the Philippines, despite the opportunity to scientifically study and improve the field of emergency medical management and response. Further, organized emergency medical services continue to devote substantial resources to advanced life support activities (such as adult endotracheal intubation and the use of vasopressors to manage ventricular fibrillation cardiac arrest) without scientific exploration of the value of these activities. As a new field of scientific inquiry, emergency/disaster health and medical research is at risk of failing because of a global breakdown in the manner in which modern research is conducted.

As pointed out in The Economist, important published research results often cannot be replicated when retested by reputable organizations such as the Amgen and Bayer pharmaceutical companies.1 The validity of a scientific finding is established by the ability to repeat the original study and obtain the same results. Without such verification, the findings of a single reported research study are unconvincing, no matter how compellingly the original paper is written or how well the original research was conducted. An example is the use of thrombolytic therapy in the management of acute myocardial infarction. Although original studies two decades ago were compelling for the use of thrombolytic therapy in acute ST-elevation myocardial infarction, the intervention was accepted clinically and scientifically only after repeated studies confirmed the original findings. The same verification is needed for many of the assumed scientific principles accepted in prehospital and disaster medicine.

To replicate an original study, one must have the recipe used to conduct the first study. This recipe is the Methods Section, which should allow any researcher to repeat the study and determine whether the reported results are the same. The Methods Section of a published scientific study is often scrutinized by readers and other researchers to determine whether the validity of the presented research can be accepted; indeed, the methodology may be more important than the statistical analysis techniques chosen by the researcher. Of the papers submitted to Prehospital and Disaster Medicine this past year that were returned to authors and not published, none escaped significant criticism of the methods described for the study. In addition to study methods that tended to allow bias, manuscripts that were not published commonly lacked a description of methods sufficient for another researcher to repeat the study.

The need to publish for career advancement is also problematic for some researchers. Publication of research is expected of many academics and professionals, and this pressure makes publication itself the goal rather than the conduct of valid scientific research. The drive to publish can lead to exaggeration of study results and over-interpretation of the significance of findings. Medical journals add another obstacle to valid science through a bias toward publishing papers with positive outcomes; journals tend to avoid publishing papers that replicate previous studies or that fail to prove a hypothesis (negative-results papers). Many journals are also interested in journalistic impact, similar to newspapers, as opposed to solid scientific work. This tendency stems from competition to sell journals to university libraries at inflated subscription rates, to be indexed in various indexing services, and to achieve a high impact factor (a measure of how often papers published in a journal are cited). In essence, current medical “scientific” publishing is a sophisticated form of journalism designed to sell subscriptions and draw authors of sensational, and not necessarily valid, studies.

Science is a higher goal than publication. Valid science should be published no matter how mundane the study. At this stage in the development of the science of emergency/disaster health and medicine, publication of research should prioritize methodologically valid studies, regardless of potentially negative results or limited appeal outside the discipline. Citation indexes, subscription revenues, and media headlines are dangerous distractions for those whose goal is conducting and publishing valid scientific work. Beginning its 29th year of publication, Prehospital and Disaster Medicine will continue to focus on the publication of valid science, with the goal of advancing global knowledge of emergency/disaster health and medicine.

References

1. How science goes wrong. The Economist. October 19, 2013:13.