Media reporting of subsequent findings
You know that thing where a media report splashes out about some new ‘discovery’ as if it were the definitive word, then, a few months later when new research calls it into question, there’s little more than a ripple or, more often than not, silence?
A group of French scientists decided to look into how the media portrayed a topic as it developed, using ADHD as their case example. They set out to test whether the media reported later developments in a topic and, if they did, whether they reported the context of those later research findings.[2] (Readers can tackle the paper for themselves; it’s not too hard to follow.[1])
One element of meaningful reporting of a topic, to my mind, is to report the ‘state of play’ of the topic. This should be particularly true for initial reports. Initial findings are tentative, an argument for a case put forward to scientific peers for consideration. How well did the media in their case examples do?
The study used,
“47 scientific publications reporting primary observations related to ADHD and echoed at least once by newspapers during the nine days following their publication date”
along with related research and later findings on the original topic. At points they contrast the findings for the “top ten” papers against the remainder of the 47 studies selected.
The researchers noted that most of the initial findings were subsequently not confirmed, were altered, or were shown to be incorrect,
Among “top 10” publications only two studies passed the test of the years, three studies were fully refuted, four were substantially attenuated by subsequent articles and one was not confirmed or refuted though its main conclusion appears unlikely.
They put the lack of coverage of follow-up research in part down to a tendency to cover the major journals at the expense of ‘lesser’ journals and observed a correlation with university ranking: findings from prominent universities received more newspaper coverage,
We hypothesized that this much lower coverage of subsequent studies was related to the lower impact factor of the journals that published them. Our observations are strongly consistent with this prediction when we compared the coverage of “top 10” publications with that of their 67 related studies. However, regarding the newspaper coverage of the 47 scientific publications of our initial search, its amplitude is not significantly correlated with the impact factor, but rather with the ranking of the university where the study was performed. This suggests that the publication of a scientific study in a high impact factor journal is a prerequisite, but does not guarantee a strong media coverage. The prestige of the university seems to exert an additional influence. Indeed, famous universities have powerful press offices that may help their researchers obtain press coverage
The authors pointed to the lack of coverage of scientific debate,
Scientific knowledge always matures from initial and uncertain findings to validated findings. This process often results from the debate of conflicting opinions in the scientific literature. Accordingly, apart from Wolraich’s study, our “top 10” publications were involved in scientific debates. However, their press coverage never reflected these debates, apart from a notable, but restricted, exception
This might reflect both a lack of wider reading or investigation on the part of reporters and a tendency to report single papers. (As, ironically, I am doing here…)
I like that this paper has a Limitations section. I’d like to see that done more widely.
Reference
Why Most Biomedical Findings Echoed by Newspapers Turn Out to be False: The Case of Attention Deficit Hyperactivity Disorder
Gonon F, Konsman J-P, Cohen D, Boraud T (2012)
PLoS ONE 7(9): e44275. doi:10.1371/journal.pone.0044275
http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0044275
Footnotes
1. If you’re disinclined to read the whole thing, the first four paragraphs of the discussion will give the main points. The passages I’ve excerpted cover most of this.
2. In their words: “the devaluation trend of initial findings is largely ignored by the media”
Other articles in Code for life:
Media thought: Ask what is known, not the expert’s opinion
XMRV prompts media thought: ask for the ‘state of play’
Note to science communicators – alleles not ‘disease genes’
Banished from science writing. Words, that is.
When the abstract or conclusions aren’t accurate or enough
6 Responses to “Media reporting of subsequent findings”
Seth Mnookin, author of The Panic Virus*, has touched on a number of happenings in science communication in recent months, including the paper covered in my piece above.
—-
* Which I’ve now read and hope to review here some time.
If you’re looking for follow-up reading on the paper I’ve covered in my article, there is:
An article in The Economist, ‘Journalistic deficit disorder’: http://www.economist.com/node/21563275
This Twitter conversation: https://twitter.com/noahWG/status/249223214079811585
Excerpt from The Economist: “A sensible prescription is hard. The matter goes beyond simply not believing what you read in the newspapers. Rather, it is a question of remembering that if you do not read subsequent confirmation, then the original conclusion may have fallen by the wayside.”
A similar thing happens in science – if there’s a suspicious lack of follow-on papers on a topic, there’s a fair chance it died because the field decided that thread of exploration was looking unlikely or wrong.
Andrew Revkin, writing at the New York Times Opinion Pages, has remarked on the same research paper, picking up on the interpretation of abstracts.
In an earlier blog post, I remarked on another study of abstracts, also covering ADHD papers, an earlier paper from the same research team.
There is now further commentary in various forums with an interest in science writing/journalism, including at the Knight Science Tracker (which notes a few more sources):
http://ksj.mit.edu/tracker/2012/10/adhd-and-practice-science-journalism-def