The recent and widely reported retraction of a study on the safety of genetically modified (GM) maize has once again raised the question of what peer review offers.[1] This perennial topic includes what a scientific paper really is, how scientists respond to extraordinary claims (and what an extraordinary claim is) and, of course, what peer review contributes. With all the fuss the retraction of this paper has brought, I thought it worth briefly looking at what these mean, aimed at non-scientists.

What is a research paper?

Research papers present data along with an interpretation of that data that argues a case for what the data might show.

Once it’s understood that conclusions drawn from research papers should be read as arguments for a case, much of the rest of how research should be treated follows naturally.

Because they present an argument for a case, initial reports are rarely ‘a done deal’.

Research is more complex than the simple summaries you read in newspapers might suggest! Accounts for general readers usually present a ‘gloss’ of the broad questions the research hoped to probe.

By contrast, the validity of the conclusions often depends on details that are buried in the particulars of the methods used in the research. These methods are—understandably—rarely presented in accounts of research in the media or in press releases.[2]

Anyone who holds up a new paper in an argument, presenting it as definitive, is most likely overstating the case; they are certainly overstating it if the claims made in the research paper are extraordinary. (By a new research paper, I mean one that has yet to be followed up by further research, or criticism.)

Responding to extraordinary claims

An extraordinary claim is one that flies in the face of what is already thought to be correct. (It might also be a very strong claim being made using limited or poor evidence.)

Overturning previous findings is a regular occurrence, and not something that is greeted with dismay. While a scientist’s response to new findings is generally “that’s interesting”, the case has to be strong to overturn previous results. As a result, a sound—and wise—response to extraordinary claims is to pause and think “let’s look closer and make sure this thing is right.”

Taking extraordinary claims at face value, uncritically, is generally imprudent and can reveal a bias, a wish that the claim were true.

Here’s one response, on Twitter, to an extraordinary claim that had just been published: that rats can pass on the memory of a smell to their offspring.[3] As you can see, it’s focused on “let’s check this thing is right”, offering some specific issues that might want attention. (You’ll see from the writer’s profile that he’s a specialist in olfaction, too.)

You can argue these things back and forth – that’s just how it goes. Uncertainty about how data should be interpreted can take time to resolve: think months, even years. It’s not helped by the fact that subsequent findings are often not reported by the media. Of course, we’d wish that the media reported the initial findings with appropriate caution in the first place. (If they don’t, an illusion of a ping-pong effect can take place, where science as presented in the media apparently keeps changing its mind about something.)

What peer-review offers

Peer review by scientific journals is not a final arbiter of truth, and cannot be.

If reviewers were to rule that the conclusions were true for all time, they’d have to anticipate all future developments and objections. Obviously that’s impossible. (This is also why it’s reasonable for new findings to overturn older ones.)

That’s an unreasonable request, clearly. More reasonably, reviewers might be tasked with ruling whether the conclusions were ‘true’ to current knowledge. There are two problems with this. Firstly, it’s not the job of peer review to ask that all work be consistent with current knowledge. Some results simply won’t be. Work like that might remain a bit of a puzzle for some time, sometimes many years, even decades. Secondly, the reviewers would have to have an impossibly wide knowledge. Experts though they may be, reviewers cannot represent the wider range of expertise of the scientific community.

It’s a practical reality that reviewers simply can’t fully determine if a paper is ‘right’, and that isn’t what is aimed for anyway.

If someone presents a paper in an argument, saying that because it was published in a peer-reviewed journal it “must” be ‘right’, or similar, they’re pulling a con. They’re not presenting the weaknesses and strengths of the argument, but putting something else in its place. (It had 5 peer reviewers? So what.)

So what is aimed for in peer review?

In broad terms, reviewers aim to eliminate papers that don’t meet the aims or standards of the journal, and to try to eliminate weak aspects of otherwise acceptable papers. Ideally you’d want high standards and all the arguments water-tight all of the time, but neither is truthfully possible.

A key thing that peer review does is to check that the logic, the statistics and the experimental methods are reasonable. Not whether the conclusions are ‘right’, but how those conclusions were arrived at—the bits making up the argument for the case in the paper—and whether the conclusions are in fact consistent with the data.[4]

The standard of review will reflect the journal in question.[5]

From time to time peer review will fail, basically because people are fallible. We’d like to be perfect and all-knowing,[6] but we’re not. Ditto for reviewers.

Philosophically we might argue that peer review always fails – just by different degrees. Short of a purely mathematical paper or a computer algorithm given with a proof, rigorous unfettered truth is hard to find.

Once a paper is published, the wider range of expertise of the scientific community quickly points out any issues missed by the journal’s reviewers – usually more vocally the more prominent a research paper and its claims are! (In some cases, especially for less prominent research, specific issues can be known informally by the niche group working on the specific topic without reaching wider recognition by those outside the niche. This can be a problem for newcomers to an area of research.)

The final peer review of a research paper is its acceptance by the scientific community as a whole.

That can’t be over-emphasised. The peer review of a paper by a journal is not its acceptance by science, the community. The peer review in journals is basically an attempt to eliminate the inappropriate and the obviously bad, and to ensure reasonable arguments, and that’s all. The acceptance of the work — the conclusions offered — comes later, after the community has seen it and had time to explore any issues with the argument for those conclusions.

One more: putting right wrongs

Frequently, the research that draws attention to peer review in the media is research being touted by advocates for a topical issue or cause: GMOs, climate change, a particular medical condition or illness.

One of the frustrations of seeing advocacy groups using initial findings to support their causes is that claims based on early reports that subsequently prove wrong are incredibly difficult to put right. A few persist in clinging to their original claim no matter that the research it is based on has been overturned. These claims can circulate on-line for years, misleading others.

You’d wish that people, including scientists, who promote research to advocacy groups would put the same effort into putting out word when something they espoused earlier has since been shown to be wrong as they did into their original espousal of the now-incorrect claims.

Readers can help themselves by asking if the person presenting a new research ‘finding’ is expressing appropriate caution, by remembering that research papers are arguments for a case, not ‘a done deal’, and by remembering that acceptance by the wider scientific community is the true peer review of research work, and that it can take time. (As Yoda said, patience you must have, my young padawan!)


1. I originally started on this at the time a report arguing that a species of arsenic-tolerant bacteria incorporates arsenic into its chemistry was published. That extraordinary claim received a lot of (justified) criticism, and promptly.

The ‘GMO’ paper in question is Séralini et al, Long term toxicity of a Roundup herbicide and a Roundup-tolerant genetically modified maize, Food and Chemical Toxicology, Volume 50, Issue 11, November 2012, Pages 4221–4231. As you can see, it’s preceded by a number of letters to the editor. It received considerably more fuss in the media, in part because of the unusual approach to reporting the authors’ views of the conclusions (which prevented journalists from obtaining independent comment from other scientists).

2. An example might be research claiming that tiny (really tiny!) amounts of small RNAs in rice might affect humans; recent work aimed at testing this claim was unable to reproduce the original findings.

3. Put more correctly, it’s the retention of a fear-conditioned response to an odour in the offspring. I’d like to get a copy of this paper (it’s pay-walled…) to check one thing that bothers me from reading the abstract (the authors’ summary of the research). One thing I felt in two minds about was this work being presented on-line before the research paper was available, as no-one (at that time) could look to see if the claim made might be sound or not.

4. You might think data just “is”, but data also has standards, standards set from exploration of the methodology. I’d elaborate on this but I don’t want to clutter the article.

5. The standard of a journal is a fraught question. Some appear to have no standards! Others set very onerous review standards, but still get their share of papers that prove unsound. Suffice to say here, it’s a topic in its own right.

6. Not many people really would, but let me run with it. You got my point, right?

Related articles on Code for life:

Initial reports are not a done deal

Media reporting of subsequent findings

Arsenic life – more criticism, formally published

Trust science, not scientists

When the abstract or conclusions aren’t accurate or enough

XMRV-CFS, further retraction

Reproducible research and computational biology