Sunday, April 28, 2013

Science, Media & The Mind

On April 25, The New York Times published Gary Gutting's essay "What Do Scientific Studies Show?", written for The Stone. The author argues that much scientific reporting in the media suffers from journalistic misconceptions about the methodological limitations inherent in the reported studies. As a consequence, the significance of the findings tends to be exaggerated out of proportion in the news, particularly when advances in medical treatments are at stake.

Because epidemiological studies, as much as empirical studies in the social sciences, depend profoundly on statistical analyses of covariance and correlation, Professor Gutting admonishes that association is too easily confused with causation. He proposes that journalists judge the value of the observations they wish to report by ranking studies according to methodological rigor. Essentially, the author encourages the media to evaluate a study's scientific merit before reporting the implications of its findings in the news. He places responsibility squarely with news editors and journalists.
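The pitfall Professor Gutting warns about can be made concrete with a small simulation (a hypothetical sketch of my own, not drawn from his essay): two variables that share a hidden common cause correlate strongly even though neither causes the other, and the association vanishes once the confounder is taken into account.

```python
import numpy as np

# Hypothetical illustration: a hidden confounder z drives both x and y.
# Neither x nor y causes the other, yet they correlate strongly.
rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)             # unobserved confounder
x = z + 0.5 * rng.normal(size=n)   # "exposure", driven by z
y = z + 0.5 * rng.normal(size=n)   # "outcome", also driven by z

r_xy = np.corrcoef(x, y)[0, 1]

# Controlling for z (partial correlation via residuals) removes the association.
rx = x - z * (np.cov(x, z)[0, 1] / np.var(z))
ry = y - z * (np.cov(y, z)[0, 1] / np.var(z))
r_partial = np.corrcoef(rx, ry)[0, 1]

print(f"raw correlation:     {r_xy:.2f}")       # strong
print(f"partial correlation: {r_partial:.2f}")  # near zero
```

An observational study that measures only x and y would report a robust association here; only knowledge of z reveals that the causal claim is unwarranted.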

I agree with the author that quantitative studies commonly incur the risk of unrepresentative sampling (read my essay titled "Representative Sampling & The Mind" dated Mar. 18, 2011) and that professional science correspondents should strive to understand the limitations of the empirical sciences and their statistical methods. Best practice and scientific integrity are of utmost importance because of the widespread skepticism of science we find in this country today.

However, in some cases unprofessional judgment by the media is not the sole culprit in the undue embellishment of new research findings. Rather, the exaggeration may begin with the investigators themselves and with the public relations departments of the academic institutions with which they are affiliated. Below I provide one example.

The Case
Since the catastrophic nuclear reactor meltdown near Chernobyl, Ukraine, in 1986, which blanketed vast areas of Europe with radioactive fallout, the effects of low-level ionizing radiation on public health have been of particular research interest. The three reactor meltdowns in Fukushima Prefecture, Japan, two years ago brought the importance of this topic even more into the awareness of public health professionals. Which levels of ionizing radiation can be considered safe continues to be hotly debated among scientists as much as in the public, while the US Environmental Protection Agency is striving to revise its guidelines on recommended limits (Radiation Protection; Protective Action Guide updates, Mar. 2013).

Roughly a year ago, the journal Environmental Health Perspectives published a research study conducted with mice at the Massachusetts Institute of Technology (MIT) (Olipitz and others, 2012). The authors could not find any statistically significant effect of low-level ionizing radiation in the mice. An online discussion brought this paper to my attention.

I found profound shortcomings in the design of the MIT study and in the evaluation of the data. Specifically, the chosen time of exposure to ionizing radiation was shorter than that used in other studies that have shown dose-related chromosomal aberrations at low dose rates. Moreover, the investigators elected to integrate the results of separate experiments using different techniques, but carried out no comprehensive statistical tests on the combined results. My concerns were published in a letter to the editor (Melzer P, 2012), to which the paper's senior authors responded (Engelward and Yanch, 2012).

Despite the study's shortcomings, and of direct relevance to the debate over Professor Gutting's stone of contention in The New York Times, the principal investigators brazenly chose to advertise their findings on MITnews as evidence that low-level ionizing radiation may be harmless to our health and that current emergency planning for radiological accidents may be too cautious in its assessment of the public health risks. I quote from Ann Trafton's post titled "A new look at prolonged radiation exposure," published May 15, 2012:

“There are no data that say that’s a dangerous level,” says Yanch, a senior lecturer in MIT’s Department of Nuclear Science and Engineering. “This paper shows that you could go 400 times higher than average background levels and you’re still not detecting genetic damage. It could potentially have a big impact on tens if not hundreds of thousands of people in the vicinity of a nuclear power plant accident or a nuclear bomb detonation, if we figure out just when we should evacuate and when it’s OK to stay where we are.” 

In my letter to the editor-in-chief of Environmental Health Perspectives, I explained in no uncertain terms why the findings of this study remain ambiguous at best. In their response, the senior authors conceded that they understood their study's limitations. Therefore, it remains difficult to comprehend why the authors made such extremely far-fetched claims with profound implications for public health policy in the MITnews release.

Sadly, news releases that recklessly misrepresent research findings should perhaps come as no surprise. Federal funding, vital to investigators and host institutions alike, has been tight over the last decade and took another significant cut with this year's sequestration. The National Institutes of Health currently fund fewer than 1 in 10 investigator-initiated applications for research grants. I have written previously about the crucial role of federal funding in US biomedical research in my post titled "Research Funding & Lost Treasures of the Mind" dated Oct. 23, 2008.

Regardless of the difficult times, studies like the one discussed here should never be embellished to influence decision making in public health policy. Moreover, news releases like the one above ought never to be used to inform the public.

