
You know that thing where a media report splashes out about some new ‘discovery’ as if it’s the definitive thing, then a few months later when new research calls it into question there’s little more than a ripple or, more often than not, silence?

A group of French scientists decided to look into how the media portrayed a topic as it developed, using ADHD as their case example. They set out to test whether the media reported later developments on a topic and, if they did, whether they reported the context of those later research findings.[2] (Readers can tackle the paper for themselves; it's not too hard to follow.[1])

One element of meaningful reporting of a topic, to my mind, is to report the 'state of play' of the topic. This should be particularly true for initial reports. Initial findings are tentative, an argument put forward to scientific peers for consideration. How well did the media do in their case examples?

The study used,

“47 scientific publications reporting primary observations related to ADHD and echoed at least once by newspapers during the nine days following their publication date”

along with related research and later findings on the original topic. At points they contrast the findings for the "top ten" papers against the remainder of the 47 studies selected.

The researchers noted that most of the initial findings were subsequently not confirmed, were altered, or were shown to be incorrect,

Among “top 10” publications only two studies passed the test of the years, three studies were fully refuted, four were substantially attenuated by subsequent articles and one was not confirmed or refuted though its main conclusion appears unlikely.

They put the lack of coverage of follow-up research in part down to a tendency to cover the major journals at the expense of ‘lesser’ journals and observed a correlation with university ranking: findings from prominent universities received more newspaper coverage,

We hypothesized that this much lower coverage of subsequent studies was related to the lower impact factor of the journals that published them. Our observations are strongly consistent with this prediction when we compared the coverage of “top 10” publications with that of their 67 related studies. However, regarding the newspaper coverage of the 47 scientific publications of our initial search, its amplitude is not significantly correlated with the impact factor, but rather with the ranking of the university where the study was performed. This suggests that the publication of a scientific study in a high impact factor journal is a prerequisite, but does not guarantee a strong media coverage. The prestige of the university seems to exert an additional influence. Indeed, famous universities have powerful press offices that may help their researchers obtain press coverage

The authors pointed to the lack of coverage of scientific debate,

Scientific knowledge always matures from initial and uncertain findings to validated findings. This process often results from the debate of conflicting opinions in the scientific literature. Accordingly, apart from Wolraich’s study, our “top 10” publications were involved in scientific debates. However, their press coverage never reflected these debates, apart from a notable, but restricted, exception

This might reflect both a lack of wider reading or investigation on the part of reporters and a tendency to report single papers. (As, ironically, I am doing here…)

I like that this paper has a Limitations section. I’d like to see that done more widely.

Reference

Gonon F, Konsman J-P, Cohen D, Boraud T (2012) Why Most Biomedical Findings Echoed by Newspapers Turn Out to be False: The Case of Attention Deficit Hyperactivity Disorder. PLoS ONE 7(9): e44275. doi:10.1371/journal.pone.0044275

http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0044275

Footnotes

1. If you’re disinclined to read the whole thing, the first four paragraphs of the discussion will give the main points. The passages I’ve excerpted cover most of this.

2. In their words: “the devaluation trend of initial findings is largely ignored by the media”


Other articles in Code for life:

Media thought: Ask what is known, not the expert’s opinion

XMRV prompts media thought: ask for the ‘state of play’

Note to science communicators – alleles not ‘disease genes’

Banished from science writing. Words, that is.

When the abstract or conclusions aren’t accurate or enough

Do TED lectures need better vetting?

Science blogging in the New Zealand media