Hyperbole from university press offices

By Grant Jacobs 11/12/2014

A newly-released study suggests university press releases are a key source of hyperbole seen in science stories in media, concluding that –

Exaggeration in news is strongly associated with exaggeration in press releases. Improving the accuracy of academic press releases could represent a key opportunity for reducing misleading health related news.

The study is now available to read* (and fortunately is open access) and is accompanied by an editorial by Ben Goldacre. Goldacre’s editorial points to several other studies of press releases, including another study published by the BMJ.

There is widespread commentary on this research in science communication circles (with no doubt more in coming days), including:

One story relates how the research group came to study press releases as a consequence of a ‘media circus’ surrounding one of their own articles: Science and health news hype: where does it come from? (The Guardian, UK) –

Unfortunately, we made the novice mistake of issuing the press release about our research during the riots, prompting a media circus.

Well-known science writer Ed Yong has tweeted his view,


The research study authors’ article offers something similar –

This doesn’t let journalists off the hook. After all, what is the job of a journalist if not to investigate and critique hyped claims? Still, we have no choice but to accept the sombre reality that many journalists who report science don’t have time to conduct investigations or challenge the content of press releases. Competition between universities and the drive toward self-promotion have combined with economic pressures on journalism to create an unhealthy ecosystem.

but within the context of a caution about shifting blame (citations removed for clarity & brevity) –

It is important that these results are not perceived as simply shifting the blame from one group of non-scientists (journalists) to another (press officers). Most press releases issued by universities are drafted in dialogue between scientists and press officers and are not released without the approval of scientists (and confirmed in our survey), and thus most of the responsibility for exaggeration must lie with the scientific authors. At the other end of the chain, journalists have a continuing responsibility to cross check their sources even if their working conditions make that increasingly difficult. The blame—if it can be meaningfully apportioned—lies mainly with the increasing culture of university competition and self promotion, interacting with the increasing pressures on journalists to do more with less time. […]

Our findings may seem like bad news but we prefer to view them positively: if the majority of exaggeration occurs within academic establishments, then the academic community has the opportunity to make an important difference […]

The study aimed to go beyond previous studies of press releases to

… clarify how often news contains claims or advice from health related research that go beyond those in the peer reviewed journal articles, and to identify the likely source of these exaggerations (press releases or news). Furthermore, we tested whether exaggerations in press releases were associated with a higher likelihood of news coverage, compared with press releases without exaggeration.

They concluded that their

… principle findings were that most of the inflation detected in our study did not occur de novo in the media but was already present in the text of the press releases produced by academics and their establishments.

… the odds of exaggerated news were substantially higher when the press releases issued by the academic institutions were exaggerated (odds ratios 6.5, 20, and 56, respectively).

and note that

… contrary to common assumption, we did not find evidence that exaggerated statements in press releases are more likely to attract news uptake or substantially increase the number of news articles when they do occur. We also found no indication that caveats in press releases reduce uptake, although presumably the fear that they do is the reason caveats are so rare.

They note the limitations of retrospective studies (e.g. that they cannot infer cause-and-effect) and outline their future plans –

To dig deeper we need to move beyond observational research and conduct an experiment. With funding from the Economic and Social Research Council we are now preparing to conduct a randomised trial on how different styles of press releases, and variants in specific phrasing, influence the accuracy and quantity of science news. To do this we’re partnering with press offices around the UK.

I’m not going to offer a breakdown of their analysis: interested readers are encouraged to read the original paper, which is mostly straightforward.

One thing I prefer to see is not just pointers to issues, but suggestions for improvement.** Goldacre’s editorial offers some:

Accountability is straightforward: all academic press releases should have named authors, including both the press officers involved and the individual named academics from the original academic paper. This would create professional reputational consequences for misrepresenting scientific findings in a press release, which would parallel the risks around misrepresenting science in an academic paper.

Transparency is similarly straightforward. Press releases are a crucial part of communicating science, often more impactful than the paper, but they are often only sent privately to journalists and are rarely linked from academic papers. Instead, press releases should be treated as a part of the scientific publication, linked to the paper, referenced directly from the academic paper being promoted, and presented through existing infrastructure as online data appendices, in full view of peers.

Feedback requires a modest extension of current norms. At present, researchers who exaggerate in an academic paper are publicly corrected—and held to account—in commentaries and letters to the publishing journal, through the process of post-publication peer review. This could be extended. Press releases are a key part of the publication of the science: journals should reflect this and publish commentary and letters about misrepresentations in the press release, just as they publish commentary on the academic paper itself.

According to a comment by Pete Etchells in reply to Hughes’ blog, the authors of the study offered a suggestion a few years ago –

Chris Chambers came up with a great idea for this a while back – a ‘gist’ section in PRs that tells you in a couple of bullet points what the study does, and does not, show

I offered something akin to this in an earlier blog post, a ‘science sidebar’. My suggestion was for independent experts to offer these to media; I can’t see any reason researchers couldn’t do something similar for their press offices.

Many journals now ask authors to write an ‘author summary’ intended for non-specialists. One example is the PLoS family of journals, such as PLoS Genetics.

Just my opinion, but these aren’t (usually) suited to non-scientist readers. By way of example here’s one from a paper that interests me, that happens to be open in my web browser:

In eukaryotic genomes, recombination plays a central role by ensuring the proper segregation of chromosomes during meiosis and increasing genetic diversity at the population scale. Recombination events are not uniformly distributed along chromosomes, but cluster in narrow regions called hotspots. The absence of overlap between human and chimpanzee hotspots indicates that the location of these hotspots evolves rapidly. However, the reasons for this rapid dynamic are still unknown. To gain insight into the processes driving the evolution of recombination hotspots we analyzed the recent history of human hotspots, using the genome of a closely related archaic hominid, Denisovan. We searched for genomic signatures of past recombination activity and compared them to present-day patterns of recombination in humans. Our results show that human hotspots are younger than previously thought and that they are not conserved in Denisovans. Moreover, we confirm that hotspots are subject to a self-destruction process, due to biased gene conversion. We quantified this process, and showed that its intensity is strong enough to cause the fast turnover of human hotspots.

A few journals take a different route, offering their own simplified takes, either penned by an editor (as some medical journals routinely do) or written by science communicators (as eLife, for example, does).

In principle these ought to be less subject to ‘hype’. (Having written that, in my experience some of the science communicators’ efforts within the journals have their own, different, faults.)

I guess a point to factor into all this is that there are issues throughout the process of getting science to a general audience.

There are no two ways about it: press releases frequently overplay their hand. My own thoughts are that while press releases might be used as sources for story ideas, they shouldn’t be used as the main or sole basis of the story itself. I know I’m not alone in thinking press releases should be treated with caution. (Like this recent example from fellow sciblogger, Alison.)

I would like to think that most journalists are already well aware that press releases are played up, and that releases differ only in the degree to which they are played up. At least that’s been my impression for years – admittedly judged only anecdotally from press release services’ content.

Don’t get me wrong: I’m not suggesting I accept this practice, far from it. Universities no doubt see that a role of their press office is to promote the university’s work, and with that, potentially, come advertorial aspects and pressure to create a competitive impression. Even without overt hype, what’s left out can misrepresent. Weaknesses in the work are unlikely to be pointed out. Competitors’ work goes unmentioned. Previous studies or wider context are not stated (or not clearly). And so on. I’ve seen what read to me as clearly well-meant media coverage founder on what was not said, too.

Scientist-communicators might play a role, too. I’ll state my bias here: it’s a role I’d love to see encouraged, despite the obstacles that would need overcoming. Scientists working in the same general area are likely to see easily if hands have been over-played, and to be more aware of the wider context and implications, the weaknesses of particular types of studies, and so on. Goldacre mentioned post-publication review of press releases. Potentially, scientist-communicators could offer this concurrently with media reports.

(Ideally I’d like to see scientist-communicators be able to write for the media, but that would need embargo systems to be revised and scientific institutions to support it — topics for other articles.)

I’ll leave you with this diagram offered by Ed Yong. I don’t agree with all of the details, but the overall message I do agree with: the ‘us’ v ‘them’ finger-pointing approaches to debating this issue are mostly just a nuisance (and old and tired for those who’ve seen several rounds of it…) Champion good approaches instead.



A tangential thought is whether backlash about science from some sectors of the public is, in part, a push-back of sorts against widespread over-stating of science in the media. Not a particularly comfortable suggestion, perhaps, but maybe one that should be considered?

* I dislike it when research isn’t available at the time embargoes lift; it means readers can’t check commentary against the original research. To be fair, I’d like to see the embargo system itself improved, so I’m probably biased towards criticising it.

** It’s something I try to remember to do when I write criticism in my articles on Code for life.

Other articles on Code for life related to science in the media:

When the abstract or conclusions aren’t accurate or enough

Three kinds of knowledge about science and journalism

Science journalism–critical analysis, not debate

Media reporting of subsequent findings

Media thought: Ask what is known, not the expert’s opinion

Fact or fallacy, a survey of immunisation statements in the print media

XMRV prompts media thought: ask for the ’state of play’

Post-embargo publication delays: be gone

Should all research papers have a ‘Limitations’ section?

Media now only report on public ignorance (cartoon)

Responses to “Hyperbole from university press offices”

  • Hi David,

    Thanks for the link and welcome to Code for life. Now that you’ve had your first comment approved, you should be able to comment at will.

    I did consider offering examples, but didn’t for a few reasons – the main one simply being that I felt my post was already saying a lot.

    I also felt that it was widely recognised that press releases frequently overstate, as I noted in the article. In fact, in my opinion, if people wanted examples they could take themselves to any press release service and pretty much take all but a handful of the PRs there: my own reading suggests (note: anecdotal) most PRs are guilty of playing their hand to at least some degree, particularly once you consider omission, as I noted in my article. (Mark Henderson also highlighted this point; see my previous comment for a link to his article.)

    For what it’s worth, ‘the’ mild example of an over-played hand in my head at this time is close to my own research interests – I’d be too harsh a critic, and placed in this context the criticism would be overbearing. I might have been able to provide an independent voice for the journalist (I am listed in the NZ SMC experts file) as I just happen to have deep interests in the same specialist area.

  • One thing I didn’t write about, which is unusual for me, is the role editors might play in this. Usually the role of editors is something I’m keen to include.

    Editor Melissa McEwan offered some thoughts along these lines on Twitter in response to news coverage of this story in The Atlantic.

    She accepted that press releases could be improved (I agree), but also widened the scope to include editors:

    “also to blame: editors. As an editor I often catch and remove questionable science in articles.”


    “it’s unreasonable to expect everyone writing about sci to know it well, but editor trained in sci can fix”

    (Mark Henderson made related remarks; see link in my first comment.)

    I would add to her remark scientists who become editors, not just (paraphrasing) “editors trained in science”. In this mix we might, perhaps, also include freelance specialist editors?

    I’ve argued something similar elsewhere, that it’s hard to expect every journalist to have enough science background to suss out where to draw the line, or sense an over-played hand, but one specialist editor can serve this role for many articles. (Of course, that comes back to money—someone has to pay the specialist editor—and, I guess, caring about the product as well as the bottom-line.)

    I also can’t help speculating that this problem is, in part, circular, rather than linear. (I’m not trying to excuse scientists, but be mindful of the wider picture.)

    Subjects I may return to in a later post.