More fraud behind paper retractions than you might have thought

By Aimee Whitcroft 02/10/2012

A subject that’s come up in discussion with my friend a couple of times recently is the increase in retractions of scientific papers from journals. I’ve always staunchly defended scientists against allegations that this might be down to naughtiness on their part.

I am now having to make my own retraction about that.

According to research (as yet unretracted) in Nature last year, the number of retractions has increased more than tenfold in the last decade, to more than 300 a year. At the time, the article noted how difficult it was to analyse what was causing the increase.

Today, famed science writer and scientific tattoo collector* Carl Zimmer published an article in the Washington Post with a very upsetting piece of news: while it’s generally been thought these retractions were due to error, it now turns out more may be due to misconduct and fraud than many of us had thought.

Which makes me want to go and kick something. Possibly a disingenuous scientist. But more on why I’m so angry later in this rant.

A new study, published in PNAS, went and looked a little deeper than previous studies. Examining the 2,047 retracted papers related to the biomedical and life sciences in PubMed, the authors found that misconduct (of which fraud was a major component)** was responsible for fully 67.4% of the retractions for which the cause could be determined.

That’s appalling.

Now, one must remember that this still accounts for a very small percentage of the papers submitted. Quoting from Zimmer’s article:

Dr. Benjamin G. Druss, a professor of health policy at Emory University, said he found the statistics in the paper to be sound but added that they “need to be kept in perspective.” Only about one in 10,000 papers in PubMed have been officially retracted, he noted. By contrast, 112,908 papers have had published corrections.

I can’t read the paper (hello, paywall), so I can’t say whether the total increase in retractions is in line with the increase in paper publication over the years (i.e. is the proportion of retractions increasing too?). However, the authors do state that the number of retractions due to fraud has increased tenfold since 1975. UPDATE: having now read the paper (gonna protect my source), it would appear that “retractions for fraud or suspected fraud as a percentage of total articles have increased nearly 10-fold since 1975”. So the abstract could have been a bit clearer, then :)

While retractions are still a very small percentage of total publications, this development is extremely worrying.

It’s being postulated by some that the increasing pressure on scientists to ‘Publish or Perish’ is pushing them too far – where a published paper can mean the difference between tenure and unemployment, suddenly the temptation to cheat can become unbearable. Thankfully, the problem’s been noticed, and there are tonnes of people already talking about how broken the current publishing system is and what could be done to fix it (eagerly opposed, as a rule, by the journals).

The Publish or Perish culture is also extremely unfair to scientists whose work is very practical, or who work in organisations which focus on applied rather than basic research.

Finally, however, and possibly most scarily – this simply fuels anti-science sentiment and propaganda. Those out there who believe that scientists lie and twist the truth in the ongoing battle for research grants are going to seize upon this as proof positive that they’re right: science, and scientists, cannot be trusted. Something which, to be sure, is demonstrably untrue, but for which even the smallest numbers will be triumphantly used.

So. For shame, to the scientists who cheat. You do yourselves, your work and your science a great disservice. But for shame, too, to the systems which encourage these scientists to do so.

Both need a long, hard look.

Further update:

The full paper also shows some other interesting numbers: amongst these, that journal impact factor shows “a highly significant correlation with the number of retractions for fraud or suspected fraud”.

Additionally, below are the numbers by country of origin for each retraction type. These graphs would, I think, have been more useful had they included how many papers in total each country had published, allowing us to see each country’s proportional representation, but yes. Still interesting.


Fang et al (2012). Misconduct accounts for the majority of retracted scientific publications. PNAS.


Related posts:

The (threat) challenge to science publishing

Geopolitics and science activity: 30 years’ worth


* Well, he collects pictures of them. I have no idea whether he collects them personally 😛

** The breakdown of this misconduct is as follows: “fraud or suspected fraud (43.4%), duplicate publication (14.2%), and plagiarism (9.8%)” (quotation from paper abstract)
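(For the arithmetically inclined: the three misconduct categories in the abstract sum exactly to the 67.4% figure quoted above. A quick sanity check, using the percentages as given:)

```python
# Misconduct breakdown from the Fang et al. (2012) abstract, as percentages
# of retractions attributable to each cause.
fraud = 43.4        # fraud or suspected fraud
duplicate = 14.2    # duplicate publication
plagiarism = 9.8    # plagiarism

# Round to one decimal place to avoid floating-point noise.
misconduct_total = round(fraud + duplicate + plagiarism, 1)
print(misconduct_total)  # 67.4 – the share of retractions due to misconduct
```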

6 Responses to “More fraud behind paper retractions than you might have thought”

  • For those wanting even more (!), Alok Jha has written a long piece at the Guardian on the wider topic, and Ivan Oransky has a piece up at Retraction Watch. And I’ve cited you in my latest post :)

    About Carl’s tattoo collection – he has a book of them, Science Ink. It leans towards a coffee table affair and has been available in NZ for a while now. IIRC it started with him writing a blog post on just one or two examples. Readers started posting photos of the science-inked parts of their bodies and it took off from there.

    • He does indeed have a book of them, I know :) Been following the whole saga with much interest and grinning :)

  • And note Carl Zimmer’s article ‘Misconduct Widespread in Retracted Science Papers’ in the New York Times, page D3, published October 2, 2012.

    • Thanks, I noted that. It was published after I wrote this, though, so it’s why I didn’t get to include it in the original :)

  • “. . . extremely unfair to scientists who, for example, work very practically, or who work in organisations which focus on applied rather than research work.”
    A bit peripheral perhaps, but the same unfairness applies to journals that publish mainly applied research.
    While I was editor of the New Zealand Journal of Agricultural Research it proved impossible to raise the journal’s Impact Factor above about 0.65. The half-life of the average journal article was calculated at about 10 years (some papers published in the 1970s and 1980s are still frequently cited), but the Impact Factor is based on citations in the first 2 years after publication.
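    The mismatch this commenter describes can be sketched numerically: the Impact Factor counts only citations in the first couple of years after publication, so a slow-burn applied journal with a long citation half-life scores low even when its lifetime citation rate is healthy. A minimal illustration (the citation counts and cohort size below are entirely hypothetical, not data from the NZ Journal of Agricultural Research):

```python
# Hypothetical yearly citations to one cohort of articles in a slow-burn
# applied journal: few citations early on, a long tail of later ones.
citations_by_year = [5, 8, 12, 15, 14, 12, 10, 9, 8, 7]  # years 1..10 after publication
citable_items = 30  # articles in the cohort (hypothetical)

# Impact-Factor-style rate: only the first two years count.
if_window = sum(citations_by_year[:2]) / citable_items

# Lifetime citation rate over the full decade.
lifetime = sum(citations_by_year) / citable_items

print(round(if_window, 2))  # 0.43 – looks poor, much like the ~0.65 described above
print(round(lifetime, 2))   # 3.33 – the same articles, counted over their real half-life
```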
