The public and new research: peer review, initial reports and responses to extraordinary claims

By Grant Jacobs 03/12/2013 8

The recent and widely-reported retraction of a study on the safety of genetically-modified (GM) maize has once again raised the topic of what peer review offers.[1] This perennial topic includes what a scientific paper really is, how scientists respond to extraordinary claims (and what an extraordinary claim is) and, of course, what peer review contributes. With all the fuss the retraction of this paper has brought, I thought it worth briefly looking at what these mean, aimed at non-scientists.

What is a research paper?

Research papers present data along with interpretation of that data that argues a case for what the data might show.

Once it’s understood that conclusions drawn from research papers should be read as arguments for a case much of the rest of how research should be treated follows naturally.

Because they present an argument for a case, initial reports are rarely ‘a done deal’.

Research is more complex than the simple summaries you read in newspapers might suggest! Accounts for general readers usually present a ‘gloss’ of the broad questions the research hoped to probe.

By contrast, the validity of the conclusions often depends on details that are buried in the particulars of the methods used in the research. These methods are—understandably—rarely presented in accounts of research in the media or in press releases.[2]

Anyone who holds up a new paper in an argument, presenting it as definitive, is most likely overstating the case; they are certainly overstating it if the claims made in the research paper are extraordinary. (By a new research paper, I mean one that has yet to be followed up by further research or criticism.)

Responding to extraordinary claims

An extraordinary claim is one that flies in the face of what is already thought to be correct. (It might also be a very strong claim being made using limited or poor evidence.)

Overturning previous findings is a regular occurrence, something that is not greeted with dismay. While a scientist’s response to new findings is generally “that’s interesting”, the case has to be strong to overturn previous results. As a result, a sound—and wise—response to extraordinary claims is to pause and think “let’s look closer and make sure this thing is right.”

Taking extraordinary claims at face value, uncritically, is generally imprudent and can reveal a bias, a wish that the claim were true.

Here’s one response, on Twitter, to an extraordinary claim that has just been published: that rats can pass on the memory of a smell to their offspring.[3] As you can see, it’s focused on “let’s check this thing is right”, offering some specific issues that might want attention. (You’ll see from the writer’s profile that he’s a specialist on olfaction, too.)

You can argue these things back-and-forth – that’s just how it goes. Uncertainty about how data should be interpreted can take time to resolve: think months, even years. It’s not helped by the fact that subsequent findings are often not reported by the media. Of course, we’d wish that the media reported the initial findings with appropriate caution in the first place. (If they don’t, an illusion of a ping-pong effect can take place, where science as presented in the media apparently keeps changing its mind about something.)

What peer-review offers

Peer review by scientific journals is not a final arbiter of truth and cannot be.

If reviewers were to rule that the conclusion were true for all time, they’d have to anticipate all future developments and objections. Obviously that’s impossible. (This is also why it’s reasonable for new findings to overturn older ones.)

That’s an unreasonable request, clearly. More reasonably, reviewers might be tasked with ruling whether the conclusions were ‘true’ to current knowledge. There are two problems with this. Firstly, it’s not the job of peer review to ask that all work be consistent with current knowledge. Some results simply won’t be. Work like that might remain a bit of a puzzle for some time, sometimes many years, even decades. Secondly, the reviewers would have to have an impossibly wide knowledge. Experts though they may be, reviewers cannot represent the wider range of expertise of the scientific community.

It’s a practical reality that reviewers simply can’t fully determine if a paper is ‘right’ – nor is that what is aimed for.

If someone presents a paper in an argument, saying that because it was published in a peer-reviewed journal it “must” be ‘right’, or similar, they’re pulling a con. They’re not presenting the weaknesses and strengths of the argument, but putting something else in its place. (It had 5 peer-reviewers? So what.)

So what is aimed for in peer review?

In broad terms, reviewers aim to eliminate papers that don’t meet the aims of the journal or its standards, and to try to eliminate weak aspects of otherwise acceptable papers. Ideally you’d want high standards and to ensure that all the arguments are water-tight all of the time, but neither is truthfully possible.

A key thing that peer review does is to try to check if the logic, the statistics and the experimental methods are reasonable. Not if the conclusions are ‘right’ or not, but how those conclusions were arrived at—the bits making up the argument for the case in the paper—and if the conclusions are in fact consistent with the data.[4]

The standard of review will reflect the journal in question.[5]

From time to time peer review will fail, basically because people are fallible. We’d like to be perfect and all-knowing,[6] but we’re not. Ditto for reviewers.

Philosophically we might argue that peer review always fails – just by different degrees. Short of a purely mathematical paper or computer algorithm given with a proof, rigorous unfettered truth is hard to find.

Once a paper is published, the wider range of expertise of the scientific community quickly points out any issues missed by the scientific journal’s reviewers – usually more vocally the more prominent a research paper and its claims are! (In some cases, especially for less prominent research, specific issues can be known informally by the niche group working on the specific topic without reaching wider recognition by those outside the niche. This can be a problem for newcomers to an area of research.)

The final peer review of a research paper is its acceptance by the scientific community as a whole.

That can’t be over-emphasised. The peer review of a paper by a journal is not its acceptance by science, the community. The peer review in journals is basically an attempt to eliminate the inappropriate and the obviously bad, and to ensure reasonable arguments – that’s all. The acceptance of the work — the conclusions offered — comes later, after the community has seen it and had time to explore any issues with the argument for those conclusions.

One more: putting right wrongs

Frequently, the research that draws media attention to peer review is research being touted by advocates for a topical issue or cause: GMOs, climate change, a particular medical condition or illness.

One of the frustrations of seeing advocacy groups using initial findings to support their causes is that claims made (by advocates of a particular position) based on early reports that subsequently prove wrong are incredibly difficult to put right. A few persist in clinging to their original claim no matter that the research it is based on has been overturned. These can circulate on-line for years, misleading others.

You’d wish that people, including scientists, who promote research to advocacy groups would put the same effort into putting out word when something they espoused earlier has since been shown to be wrong as they did in their original espousing of the now-incorrect claims.

Readers can help themselves by asking if the person presenting a new research ‘finding’ is expressing appropriate caution, by remembering that research papers are arguments for a case, not ‘a done deal’, and that acceptance by the wider scientific community is the true peer review of research work – and that that can take time. (As Yoda said, patience you must have, my young padawan!)


1. I originally started on this at the time a report arguing that a species of arsenic-tolerant bacteria incorporates arsenic into its chemistry was published. That extraordinary claim received a lot of (justified) criticism, and promptly.

The ‘GMO’ paper in question is Séralini et al, Long term toxicity of a Roundup herbicide and a Roundup-tolerant genetically modified maize, Food and Chemical Toxicology, Volume 50, Issue 11, November 2012, Pages 4221–4231. As you can see, it’s preceded by a number of letters to the editor. It received considerably more fuss in the media, in part because of the unusual approach to reporting the authors’ views of the conclusions (which prevented journalists from obtaining independent comment from other scientists).

2. An example might be research claiming that tiny (really tiny!) amounts of small RNAs in rice might affect humans; recent work aimed at testing this claim was unable to reproduce the original findings.

3. Put more correctly, it’s retention of a fear-conditioned response to an odour in the offspring. I’d like to get a copy of this paper (it’s pay-walled…) to check one thing that bothers me from reading the abstract (the authors’ summary of the research). One thing I felt in two minds about was this work being presented on-line before the research paper was available, as no-one (at that time) could look to see if the claim made might be sound or not.

4. You might think data just “is”, but data also has standards, standards set from exploration of the methodology. I’d elaborate on this but I don’t want to clutter the article.

5. The standard of a journal is a fraught question. Some appear to have no standards! Others set very onerous review standards, but get their share of papers that prove unsound. Suffice to say here, it’s a topic in its own right.

6. Not many people really would, but let me run with it. You got my point, right?

Related articles on Code for life:

Initial reports are not a done deal

Media reporting of subsequent findings

Arsenic life – more criticism, formally published

Trust science, not scientists

When the abstract or conclusions aren’t accurate or enough

XMRV-CFS, further retraction

Reproducible research and computational biology

8 Responses to “The public and new research: peer review, initial reports and responses to extraordinary claims”

  • Put more correctly, it’s retention of a fear-conditioned response to an odour in the offspring.

    Pure Neo-Lamarckian inheritance. I am happy with epigenetics. I am not happy with a suggestion that a particular molecule, stimulating olfactory receptors in a rodent’s nose, triggered a systemic response within that rodent’s body which somehow located the specific genes in the cells of its testicles which coded for that particular receptor, and methylated them.

    Is anyone running a Retraction Watch Sweepstake for that article yet?

  • herr doktor bimler,

I hear you. There are a lot of things bothering me about various claims for epigenetic inheritance; I would like to cover some of this under my ‘Not Just DNA’ series. I’d like to read that paper, but it’s pay-walled so it’ll have to wait until I get hold of a copy of it.

    On another note – it’s nice to know at least one person reads my Footnotes! 🙂

  • I ended my piece with suggestions to readers. You could add plenty more things to look for. This article about limitations to look for in research closes with these two:

    Extraordinary claims require extraordinary evidence

    The single study that goes against accepted scientific wisdom is probably wrong

    The New Zealand Science Media Centre has a booklet that readers can try if they want more ideas. It’s really intended for journalists but others should find it useful too.

  • I’d like to read that paper, but it’s pay-walled so it’ll have to wait until I get a hold of a copy of it.

    Do you want to send me your e-address?

  • Superb, article, Grant! This should be required reading for anyone writing science-related stories in the media (and their editors, if they have them). It will also be a sobering reminder for anyone reading such articles.

One thing a good peer-reviewer will do is check to see if the author honestly cites the most relevant previous research in the literature, and whether the citations actually support the claims made by the author. Some scientists use the sleight-of-hand of citing their own previous paper to support a claim, but the previous paper may not actually provide this support—instead, having a citation to yet a different paper. This can lead to the perception of “established fact” in the eyes of an unsophisticated reviewer who is unwilling to validate that the citations are appropriate. Another trick is for some authors to obfuscate by having a list of references so long that almost no-one will be willing to wade through them (recent papers by Samsel and Seneff come to mind). I, personally, give the references in papers with “extraordinary” claims a higher level of scrutiny.

I particularly like your point that papers are “arguments”, rather than presentations of information. In this regard, authors are somewhat similar to lawyers—presenting selected pieces of evidence in an attempt to make a case. (How many authors present “the truth, the whole truth, and nothing but the truth”?)

One final comment: reporters often rely on press releases from universities. We need to remember that press releases are marketing tools for the universities, and are NOT peer-reviewed. They typically emphasize what the author would LIKE the truth to be, rather than giving an objective summary of the research.

    • Good points, Peter & thanks for replying.

      One thought – many journals have limitations on the number of references for research papers (but may allow more leeway for review articles).

Whatever the case is for a particular research journal, extensive reference lists are certainly used by those pushing pseudo-science on the ’web. (I seem to think Seneff’s curious papers—she falls into the wild ideas category; check out her hypothesis pieces—are often in pay-to-publish journals that may not have high standards or, some suggest, any real standards at all. Some also suggest these journals don’t have real peer review, and in some cases there is evidence for this, such as where people have submitted random-word papers and had them accepted.)

A few scientists are very good at laying down their findings, then systematically attacking them. It’s not as common as you’d like. (Well, as I’d like.) I wish I could offer an example or two – it would be good to hold these up for students, journalists, etc., to see. Anyone with possible examples — open access, please — is welcome to offer some. (If I motivate myself, I might put up a short post asking for examples later.)

You’re quite right about the press releases. These often (I would think, usually) are written by non-scientists. Relying on press releases is common, and a lot of science ‘reporting’ looks to me to be, effectively, ghost re-written press release material. One particularly egregious error is to take the forward-looking statements, about where the research might go, and present them as outcomes of the research!