By Grant Jacobs 31/12/2016

‘Substantially equivalent’ is the term used by regulatory bodies as part of confirming a GM crop is safe for consumption. Earlier work claims GM corn is ‘substantially equivalent’ to non-GM corn.

Earlier this month a study was published in Scientific Reports claiming genetically modified corn is not substantially equivalent to non-GM corn, “Our molecular profiling results show that NK603 and its isogenic control are not substantially equivalent.”

Plant biologists have said this research doesn’t show what it claims to.

Rather than repeat at length what others have already said, I’m going to offer a brief summary of their points.

For those in a hurry, the main point that has been made is that they haven’t first established what the typical range of amounts of each protein* is, and without that you can’t tell whether the differences they found are unexpected or not: the results end up hanging in the air, neither here nor there.

There are other points, too, such as whether the main differences observed are due to a fungal infection.

Since this was written a few other pieces offering criticism have appeared:

I’ve excerpted portions of these in the comments below this piece. If there are others, let me know in the comments below.

The authors and the scientific journal

The authors are an international group of scientists, the best known of whom would be Gilles-Eric Séralini, whose previous work has been widely criticised. The first author has collaborated with Séralini for several years (judging from the references cited).

The journal the work is published in is Scientific Reports, not Nature as many are saying. Nature is a very prominent journal. Scientific Reports is a much more modest affair offered by the same large publishing company, Nature Publishing Group. Scientific Reports is an open-access journal. Like a few other open-access efforts, it accepts papers on technical soundness, rather than on whether the work is judged especially significant or not.

Expert reaction at the UK Science Media Centre

Some expert comments are available at the UK Science Media Centre.

The first comment is by Dr Dan MacLean, Head of Bioinformatics at The Sainsbury Laboratory. The Sainsbury Laboratory is one of the biggest (if not the biggest) plant laboratories in the UK. They have a lot of experience studying plants. Loosely speaking, bioinformatics is the field that works with biological data using computational (mathematical, statistical) methods.**

His first point is the same as the one I made earlier,

A big issue with this analysis is that materials were collected under potentially quite different conditions. Different parts of the same farm, potentially different chemical makeups in the soil, different water contents, different elevations, exposures and temperatures. Under tight laboratory conditions the metabolome and proteome are very variable and the statistics presented here do not go anywhere near controlling for those factors.

“There are a huge amount of things that could be affecting the expression and levels of everything in those plants and no exploratory and controlling statistics are presented. The analysis just jumps straight into ‘everything is equal, let’s do tests’ […]

Dr Joe Perry, former Chair of the European Food Safety Authority GMO Panel picks up on this, too, noting that the EFSA checks,

In contrast with compositional analysis, which is done for every application, and reported by EFSA, and which involves proper replicated field trials, this study appears to have been done with single, unreplicated plots.

Therefore it is not possible to say with any certainty whether the differences reported are due to differences between the treatments or differences between the two fields (or two plots within the fields) used.

In other words the basic tenets of experimental design seem not to have been followed. For that reason I could not yet describe this as a thorough piece of science.

His last sentence is pretty damning, really. He’s saying the science has a basic blunder in it: with a little thought, it should have been obvious before the work was attempted that it couldn’t show what they wanted to test.

In order to show that a difference is biologically significant, you need to know what the ‘normal’ range of differences is. Is the difference you see within that normal range, or is it well outside it? To answer that, you first need to know the normal range.
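As a toy illustration of that reasoning (the numbers below are invented for illustration, not from any real dataset), an observed difference only stands out once you have a reference range of natural variation to compare it against:

```python
# Toy illustration (invented numbers, not data from the paper):
# an observed protein level only looks unusual if it falls outside
# the range of natural variation seen across ordinary cultivars.

# Hypothetical levels of one protein across several non-GM cultivars
natural_range = [12.1, 15.6, 9.8, 14.2, 11.0, 16.3, 10.5]

def is_outside_normal_range(observed, reference):
    """Flag a value only if it lies outside the reference range."""
    return observed < min(reference) or observed > max(reference)

print(is_outside_normal_range(13.0, natural_range))  # within the cultivar range -> False
print(is_outside_normal_range(25.0, natural_range))  # well outside it -> True
```

Without the reference range, both values would just be “different from the control”, and you couldn’t say whether either difference was unexpected.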

Even if you accept a difference is a valid difference, you want to know if it’s meaningful. The last expert, Prof. Johnjoe McFadden, Professor of Molecular Genetics at the University of Surrey, picks up on that. He seems to have taken the authors’ conclusion on faith, but suggests that all these types of studies are fundamentally flawed, that the amounts of proteins in plants shift too easily for the differences to be meaningful,

“How equivalent does it need to be? If you perform this detailed level of analysis on any perturbation of any organism you will detect this level of change – organisms are extraordinary sensitive and, for example, similar changes are produced when treated with e.g. pesticide or herbicides or when attacked by pests.

I would expect that practically any perturbation to an organism will generate a response that can be detected by these powerful techniques – that is after all what life does.

So all it shows is that GM, like pesticides, herbicides, drought, predation or even growing in a different field will produce a response by the organism. If GM was banned on these grounds then so would all herbicide pesticides and indeed anything that causes a change (which is everything).

You could read this as saying that these comparative tests are mostly an effort to soothe, to be politically correct: even if a difference were observed, it most often wouldn’t mean much. It’s worth remembering that these comparative tests aren’t meant to be the last word, but rather to find those cases that might warrant looking into, and to eliminate the rest.

This, to me, brings back a core point about complaints about GMOs: what matters is whether something has an effect that isn’t wanted. That’s not about whether the plant is a GMO or not, just what effects it has. It’s why I continue to say that the term ‘GMO’ is a red herring. (And a damn waste of time, too!)

MacLean and Perry both point out that, having identified a (major) issue in how the data was collected, no clear conclusions can really be drawn,

This has the effect of making the decisions about what pathways are changing moot. No clear conclusions can be reached, and certainly not on the basis of p-values. Hence all downstream analyses could not be expected to show clearly any patterns because of considerable noise in the list of things that are changing.

Further details about the conduct of the experiment would be useful to confirm or otherwise this initial impression.

Comments following the paper

Another source of commentary are the comments following the research paper itself.

In many ways these points are moot given the issue of not having established what the normal variation is. Nonetheless a few interesting questions are asked.

These comments are unfortunately polluted by some uninformed commenters, as well as some sloppy comments by some who, in my opinion, should know better.

It’s further confounded by the fact that some of the comments have been removed. According to one comment, at one point there were over 130 comments; as I write there are 84.

With that in mind, I’ll select just a few that are more meaningful, each under their own header so that you can skim for the ones that interest you.

Others call out the basic design of the experiment

Paul Vincelli offers,

By my reading of this paper, the three corn plantings that represented the source of grain samples for the three experimental treatments studied were not spatially randomized/replicated in the field. Therefore, the statistical effect of treatment is confounded with the effect of planting position, making it seemingly impossible to statistically separate one effect from the other. Even in such highly controlled environments as growth chambers, plants in different positions can produce significant differences in growth. In the field, position effects are even more likely, as variation in the physical, chemical, and biological environment can be substantial even in sites which appear to be superficially similar. Therefore, the conclusions about the treatment effects reported here are called into question. I know of no statistical analysis that can overcome this design flaw. Others have independently identified this as a major concern, as well (

The paper seems not to discuss -omics effects of conventional plant breeding approaches, which are substantial (Sustainability 2016, 8(5), 495; doi:10.3390/su8050495).

For the record, I report no conflicts of interest in the topic of genetically engineered crops (GMOs).

Similarly, Rod Herman writes,

Before deeply considering the importance of the analytical results, I would think one would look at the experimental design of the field experiment from which the grain samples came. Can someone please explain how grain samples from unreplicated field plots can be used to determine anything about the effects of the crop genetics or production method? Am I missing something? Is it now scientifically acceptable to evaluate crop varieties from single unreplicated plots at one location?

The main difference is likely to be a fungal protein

‘Mem_somerville’ has asked if there is fungal contamination in the GM corn,

There’s a lot of nonsense drama below now, but I want to hear from the authors (Robin Mesnage asked me to post here, but I can’t see if he’s responding):

1. What is your explanation for the fact that top fold-change proteins in your data set are fungal proteins (and it’s a known maize pathogen)?

2. Are you aware that fungal contamination could result in similar changes in regards to the pathway changes that you describe? Did you consider this at all? Why didn’t you address this in your paper?

3. If you wish to dismiss your own top reported proteins, how can you stand by the importance of the fold-change claims you are making about other proteins?

Thanks for your guidance on this. It’s very perplexing.

According to others, these techniques are able to detect very early stages of infections that are not yet visible to the eye. I wouldn’t be surprised – these are very sensitive techniques. In fact, in many ways they’re too sensitive. This is a recurring problem in all these system-wide screens using highly sensitive molecular techniques: you end up having to be extremely careful to test whether the variations you see reflect what you are testing, not something else, as the techniques are able to pick up differences caused by the most mundane reasons.

One issue for these studies is contamination; testing for it needs to be built in.  Infection is a type of contamination in a sense – the sample isn’t just the plant. Infections can cause metabolic changes, of course, which would be a possible reason for the differences observed.

One suggestion, by ‘Rightbiotech’ was that the ‘top’ difference might be from a worm, rather than a fungus. If so, it would also be problematic (if not fatal) for this study, as that would be contamination. (Contamination large enough to show up dominantly in the results.)

For the curious, the fungus in question is Gibberella moniliformis, a pathogen of corn.

Does the study test what it claims to?

This one struck me, too.

On reading the abstract I realised the paper set out to test if there was any difference in metabolic activity, not the ‘substantial equivalence’ used by the regulatory bodies that it claimed to want to address.

Related to that, there is an inherent problem in these large systemic surveys of ‘gold digging’: if you try hard enough, you will find things, meaningful or not.
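The ‘gold digging’ problem is easy to demonstrate with a toy simulation (this is a generic statistical point, not the paper’s actual data): if no real differences exist at all, thousands of significance tests at p < 0.05 will still hand you hundreds of ‘findings’.

```python
# Toy simulation: under the null hypothesis (no real differences),
# p-values are uniformly distributed, so each test has a ~5% chance
# of coming out "significant" by chance alone.
import random

random.seed(42)

n_tests = 10_000   # e.g. number of proteins/metabolites screened
alpha = 0.05       # conventional significance threshold

p_values = [random.random() for _ in range(n_tests)]
false_hits = sum(p < alpha for p in p_values)

print(false_hits)  # hundreds of "significant" results from pure noise
```

This is part of why large omics screens need multiple-testing corrections and, more importantly here, a baseline of natural variation to judge any hits against.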

Chris Preston replied to Damian, writing,

If you look at the paper, the stated intention was to address the issue of substantial equivalence: “In an effort to provide insight into the substantial equivalence classification of a Roundup tolerant NK603 GM maize”. However, the authors do not address substantial equivalence as interpreted by regulatory agencies, instead they address something much closer to what you suggest.

What Damian suggested the aim was is,

whether the inserted allele in NK603 caused a phenotypic difference (at proteome and metabolic levels),

Damian goes on to ask,

isn’t the inactivated complement the correct control, and not the “closest isogenic line”.

What he’s suggesting is that the correct comparison for seeing whether there is a difference caused by carrying an inserted gene would be to compare with the same plant with the inserted gene inactivated, rather than with a different strain of the plant, which, because it’s a different strain, will have differences in its biochemistry too.

Another approach is to compare a range of corns, and learn what is typical. (And then you still have to consider if the differences are meaningful.)

More comprehensively, Chris Preston goes on to write,

The control used should have been the one that addressed the hypothesis put forward. In the case of substantial equivalence, what regulatory agencies look at is whether there is evidence that the crop has a composition that might be outside the range of what humans are already exposed to in their diet, as that may indicate the need for more testing. Therefore, you will see most substantial equivalence tests address not only the non-transformed isogenic line and/or a null transformant, but also the range of known compositions for that crop.

If the authors truly wanted to address substantial equivalence, they should have tested the NK603 maize against a range of maize cultivars common in diets. If the idea was to address whether the specific transformation caused differences, then a null transformant would have been more appropriate. There are likely to be a reasonable number of differences between DKC 2678 and DKC 2675 irrespective of the transgene, so the experimental design used would be unable to address that question adequately.

In short, this study was flawed from the beginning as it wasn’t set up to test the hypothesis that the authors claimed they were testing. This is irrespective of the issue with Gibberella moniliformis infection that means the conclusions drawn by the authors are unsafe. The presence of significantly more of these proteins in the non-GM corn, despite the authors running a screen just for the maize proteome, means that it is impossible to tell whether any differences between the two samples were due to fungal infection or the insertion of the gene.

The multiple flaws in this study mean that you really cannot conclude anything from the results.

You’ll see he talks about “the range of known compositions for that crop” — that’s the range of levels of proteins (etc) that was referred to at the top of this article.

Note also that he points out that even if you are looking for differences from adding the new gene, the experimental design isn’t able to address that, “There are likely to be a reasonable number of differences between DKC 2678 and DKC 2675 irrespective of the transgene, so the experimental design used would be unable to address that question adequately.”

At Genetic Literacy

It’s worth reading a post at the Genetic Literacy blog as a companion piece to my own, one that delves into a little more detail.

Kevin Folta, who has long been involved in communicating about genetic modification, particularly of crops, has expressed a few thoughts in the comments –

My favorite part of the paper is that they did NOT detect glyphosate on plants sprayed with glyphosate. However, activists claim to detect it in food.

The rest of this paper confirms well that the products are essentially the same. The differences observed are not much more than you’d expect from small environmental variations in plant biology. I would have liked to have seen a comparison within samples from the control group (the isoline). I have a funny feeling you’d see variation there too. Small differences in moisture, etc could account for the differences.

On the other hand there could be small collateral changes induced by a transgene. No surprise there. The question is, is there any reason to believe the changes observed in metabolites are problematic? No. Not at all. Other plants make the same polyamine compounds in mountains relative to corn.

The title and discussion were completely inappropriate for a scientific journal and should have been revised. But obviously soft reviewers and editor that let it slide.

More to think about

There’s more to think about, but I’ll leave it at this. This is more than enough to start with! In time there might also be comments at the PubMed source of the research paper.

But let me toss in two minor points, worth noting in a different way –

Claims that it’s the first

They claim it’s the first study of this kind. Sort-of-ish but not quite really. This paper also examines the metabolome of the GM and non-GM varieties of corn. It’s been out since earlier this year, so the authors should have been aware of it. They don’t cite this paper in their references.

They will have used different techniques, but essentially all research papers do that.

Some papers try to inflate their work with a claim to a ‘first’. It’s a distraction really: better to just focus on the data and what it might mean, in my humble opinion. Besides, I think claims to a ‘first’ are best left to editorials.

Not a good title

I’d also quibble, rather strongly, that the title is inappropriate. It might seem nit-picky of me, but this is the sort of detail tougher scientific journal editors insist on. The title reads, “An integrated multi-omics analysis of the NK603 Roundup-tolerant GM maize reveals metabolism disturbances caused by the transformation process”.

The trouble is that last bit.

Firstly, “the transformation process”, strictly speaking, refers to the steps of genetically modifying the plant. They’re not studying that active process; they examine a resulting product of it, seeds. Also, ‘disturbances’ is a loaded term. It implies the differences were caused by something ‘disturbing’ the ‘natural’ situation. At best they should say ‘differences’.


The research paper refers to maize, rather than corn, but corn is the more familiar term in New Zealand.

* I’m using ‘proteins’ as a bit of a short-cut to keep things simpler.

** It also happens to be my field.

Featured image

Kenyans examining insect-resistant transgenic Bt corn. Source: Wikipedia. From: Gewin V, Genetically Modified Corn—Environmental Benefits and Risks, PLoS Biology Vol. 1, No. 1, e8, doi:10.1371/journal.pbio.0000008. Creative Commons Attribution 2.5 Generic license.

Responses to “Is GM corn really different to non-GM corn?”

  • Great blog! Sounds like sloppy research that was just headline hunting. I’m surprised Scientific Reports published it. Modest journal, but still I normally like the research published in it. Peer review let us down here.

    • “Headline hunting”? I rather suspect that the headline was written first, and the research conducted as an afterthought.

  • The presence of a fungal pathogen absolutely could be affecting these results. And that’s got to be considered. But I agree that the statistical problems are the most important aspects of this.

    But I couldn’t resist using their own data to ask them questions about their protein fold changes. They can’t simultaneously claim the fold changes they care about are important, while dismissing their own top fold changes.

    The fact that they can’t detect any herbicide in their own samples is kind of funny, too–since they used the presence of herbicide in animal feed as the foundation for many of their previous claims. They really want to have it both ways, all the time.

  • Grant, in addition to your own comment about this being the first study of its kind, I remember reading a paper from 2005, on potatoes I think, looking at the effect of genetic modification on the proteome. That research showed that there were many fewer changes due to genetic modification than there were between cultivars of the same crop. I will dig the paper out when I get back to my office and post the reference for you.

    • Hi Chris –

      Wouldn’t be this paper by chance? (Had a little peek with Google Scholar…) Thanks for the heads-up.

      “There was much less variation between GM lines and their non-GM controls compared with that found between different varieties and landraces.”

      Comparison of Tuber Proteomes of Potato Varieties, Landraces, and Genetically Modified Lines1
      Satu J. Lehesranta, Howard V. Davies, Louise V.T. Shepherd, Naoise Nunan2, Jim W. McNicol, Seppo Auriola, Kaisa M. Koistinen, Soile Suomalainen, Harri I. Kokko and Sirpa O. Kärenlampi*

      Plant Physiology July 2005 vol. 138 no. 3 1690-1699


      Crop improvement by genetic modification remains controversial, one of the major issues being the potential for unintended effects. Comparative safety assessment includes targeted analysis of key nutrients and antinutritional factors, but broader scale-profiling or “omics” methods could increase the chances of detecting unintended effects. Comparative assessment should consider the extent of natural variation and not simply compare genetically modified (GM) lines and parental controls. In this study, potato (Solanum tuberosum) proteome diversity has been assessed using a range of diverse non-GM germplasm. In addition, a selection of GM potato lines was compared to assess the potential for unintended differences in protein profiles. Clear qualitative and quantitative differences were found in the protein patterns of the varieties and landraces examined, with 1,077 of 1,111 protein spots analyzed showing statistically significant differences. The diploid species Solanum phureja could be clearly differentiated from tetraploid (Solanum tuberosum) genotypes. Many of the proteins apparently contributing to genotype differentiation are involved in disease and defense responses, the glycolytic pathway, and sugar metabolism or protein targeting/storage. Only nine proteins out of 730 showed significant differences between GM lines and their controls. There was much less variation between GM lines and their non-GM controls compared with that found between different varieties and landraces. A number of proteins were identified by mass spectrometry and added to a potato tuber two-dimensional protein map.

  • In addition to the problems I introduced, it has been pointed out that there are problems with the corn lines used:

    One key message out for you from what he writes –

    “The choice of lines used in this study introduces several major sources of variation that make it impossible to account for. Because the lines are not isogenic (or near-isogenic) and there is no information on if they were hybridized to the same parent line, it is impossible to say if the observed differences are due to the transgenic trait or due the fact that lines with differing genetics were used.”

    The way researchers doing careful work resolve this is to generate their own isogenic lines, like the earlier Harrigan et al paper I linked to (which studies possible effects of the same GM trait as the Mesnage et al that is covered here) – see section Claims that it’s the first.

  • Somewhat off the main topic is this story, which while bizarre appears to have substance to it (I don’t have time to dig around verifying it). From a skim it seems that Seralini is, in part, funded by a company producing a homeopathic product that it claims treats glyphosate poisoning. Seralini is a consultant to the company, and gets funded (paid?) by the company to test it for ‘treating’ glyphosate poisoning, while some members of that company join him in testing for ‘damage’ by glyphosate. Aside from the obvious conflicts of interest, this reads as a company playing both sides of the fence –

    • (I don’t have time to dig around verifying it). From a skim it seems that Seralini is, in part, funded by a company producing a homeopathic product that it claims to treat glyphosate poisoning.
      I had more time to dig around. Yes, Séralini has previously published two advertisements (disguised as scientific papers) claiming that his employer’s homeopathic products could protect against the purported liver- and kidney-damaging effects. More recently he found that these products also protect against the purported locomotor injuries.

      Sevene — the homeopathy company — is itself a branch of a French New-age / scientology / religious group of whackjobs that is weird even by my standards.

  • For those still (!) following this, there are further comments following the original research paper.

    (If the link above doesn’t open the comments, you’ll need to drop down to near the end of the page, just above the links to other sections in the publisher’s website, and wait for Disqus to load the comments.)

    The senior (last) author of the paper has written a comment (see the comment by Michael Antoniou) in which he addresses the fungal infection concerns raised. Limiting myself to thoughts not related to the data itself,* the offered reason for not including an explanation of the fungal protein findings [paraphrasing] that it is outside the scope of the paper seems weak to me, as the presence of the fungal proteins (peptides) is part of the reported results, they are the largest (fold) differences observed, and he says they were aware of it before doing the analysis.

    (* I have other thoughts, but my main focus here and above is on collating others’ thoughts in a form that might be useful for others.)

    There is also a new concern raised by ‘Claire’ that the paper is measuring the broken-up fragments of proteins (peptides), but [in places] reporting these as proteins, and that this might mean that “The major statistical problem in this paper is that if any single peptide in a protein is found enriched, the full protein has been counted as enriched, even if all other peptides of that protein are found at equivalent or opposite levels between the two samples”, and “This is not enough information to know if any of the full proteins made up of multiple peptides are statistically enriched or depleted.”

    Claire points, as an example of her concern, to a case where one fragment (peptide) of a protein is found at twice the level, but another fragment from the same protein is found at half the level.

    Her concern, to my reading, is that the variation in amounts of peptides from the same protein needs to be discussed, reconciled, and made part of the analysis.

    She also says the data represents 105 proteins, not 156 — i.e. that the data is 156 peptide fragments from 105 proteins.

    I haven’t time to do much on this at the moment (I am travelling and have a bus to catch!), but perhaps the paper needs to be gone over carefully to ensure that these terms (peptide, protein) have been used appropriately?

    Using just a simple word search on the paper to locate a possible example, this statement –

    “While only one protein is newly produced as a result of the transgene insertion, a total of 117 proteins and 91 metabolites have been altered”

    would be incorrect if there are only 105 proteins.
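    To illustrate Claire’s concern (with invented numbers, nothing from the paper’s dataset), here is a toy sketch of the difference between flagging a protein because any one of its peptides is enriched, and asking whether its peptides agree overall:

```python
# Toy sketch (invented data): flagging a protein as "changed" if ANY
# single peptide is enriched, versus averaging its peptides first.
from collections import defaultdict

# peptide -> (protein it maps to, log2 fold-change between samples)
peptides = {
    "pep1": ("proteinA",  1.0),   # doubled
    "pep2": ("proteinA", -1.0),   # halved -- contradicts pep1
    "pep3": ("proteinB",  1.2),
    "pep4": ("proteinB",  0.9),
    "pep5": ("proteinC",  0.1),
}

threshold = 0.8  # enrichment cut-off on |log2 fold-change|

# The approach Claire describes: any one enriched peptide flags the protein.
flagged_any = {prot for prot, fc in peptides.values() if abs(fc) >= threshold}

# A protein-level view: average the peptides for each protein first.
by_protein = defaultdict(list)
for prot, fc in peptides.values():
    by_protein[prot].append(fc)
flagged_mean = {p for p, fcs in by_protein.items()
                if abs(sum(fcs) / len(fcs)) >= threshold}

print(sorted(flagged_any))   # proteinA and proteinB both flagged
print(sorted(flagged_mean))  # proteinA drops out: its peptides cancel
```

    (Real proteomics software does something more sophisticated than a simple mean, of course; the point is only that peptide-level hits and protein-level conclusions are not the same thing.)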

    • I asked Robin Mesnage on twitter to explain how the same protein is both upregulated and downregulated as a result of the transformation process. He told me that it was because of post-translational modification.

      I asked how they demonstrated the PTM when there are multiple entries. No answer.

      I asked how they handled that in their pathway analysis. No answer.

    • Just to make a small correction, the 156 peptides mapping to 105 proteins was from the one experiment in my example (out of the two experiments). Together the two experiments cover 117 proteins. Though they do describe looking for annotations for 156 proteins, though it would only ever be possible to find max 117… Though the 117 number really just represents proteins with at least one peptide enriched, not 117 proteins with statistically different levels…

      To explain the problem more, all proteomics software outputs both protein quantification and peptide quantifications, with proteins used to answer protein level questions, and with peptides mainly used for QC or very specific peptide-level analyses. I’m certain now that the authors here used the peptide quantification as if it represented protein quantification, even though there’s no justification for doing this. If you don’t see a difference at the protein level, that’s it.

      With -in most cases- only a single peptide per protein being found to be different between samples, I really doubt that the full proteins would show differences, and definitely not the protein with both enriched and depleted peptides! I bet these corns look just about the same.

      Interestingly, in same authors’ other proteomics paper that just came out in Sci Reports, they are very explicit about doing peptide level quantification, but then go on to use it as if it was protein level quantification, which is still wrong.

      • Hi Claire,

        Just so you know, now that I’ve approved your first comment you should be able to comment at will. (Holding up the first comment is just an attempt to head off spam.)

        “Together the two experiments cover 117 proteins.”

        Ah, thank you. It did seem unexpected, and I didn’t know quite what to make of it.

        I follow you re the protein v peptide issue.

        Personally I’ve been wondering if there is sense in comparing their data with that from Harrigan et al. (

  • If you were wondering how two random cultivars of wheat were selected and somehow presented as identical (apart from the Roundup-resistance gene), and how the experimental wheat samples turned out to be contaminated with pathogens, notice this in the description of authors’ contributions:
    G.E.S. conceived the animal feeding trial and provided maize samples for analysis.

    • It makes you wonder if the research model is that GES heads & organises these things, farming out* technical molecular work to collaborators.

      While I’m writing, are you aware of the latest from these research groups? Someone has spotted that a figure used to illustrate a control was re-used in a paper three years later, but assigned to rats of a different sex.

      There’s also a new paper just out in Scientific Reports, one that uses tissue from the earlier work on feeding rats GM corn + RoundUp. Mesnage is the first author. There is a Daily Fail piece out on it. More on this later maybe? I can’t help noting the first author thanked ‘The Food Babe’ for touting his paper, which isn’t a promising sign given ‘The Food Babe’ is notorious for promoting nonsense.

      Someone has also pointed out that one reviewer (of the earlier paper with the reproduced control illustration) had collaborated with the authors within the previous five years, which apparently is against the journal’s rules (and unethical anyway). I’m also left wondering if journals need to do a better job of checking the peer reviews. (Yes, I know, checking the checkers.) It seems to me checking for collaboration conflicts of interest could be partially automated, and if it’s not being done, it might be a good idea?

      * I’d say no pun intended, but it’s almost too good not to! 🙂

    • Jack fails to do the first step, to critique if the papers warrant a re-examination of regulatory policy. I’d suggest that presuming that is the case, as he has, fails to do proper academic duty.

      The first paper at least certainly doesn’t warrant anything being based on it. There is just too much wrong.

      I’ve only skimmed the more recent paper (it’s only just out and I have other things I am trying to do), but I can’t help noting it also does not address the key point others noted above; see “For those in a hurry the main point […]”

      To give them a little grace, the effect is that the work rests entirely on statistical difference (not biological, note), and my initial impression is that the second study doesn’t look good there either, unfortunately.

  • [Author’s note: please note the points Michael Antoniou replies to below are not mine (Grant Jacobs), but the respective people I have quoted in my round-up. In particular, the quote he attributes to me is in fact Chris Preston’s words. Thanks.]

    Response to Dr Grant Jacobs’s article, “Is GM corn really different to non-GM corn?”
    Dr Michael Antoniou, Department of Medical and Molecular Genetics, King’s College London, UK

    This is a response to Dr Grant Jacobs’ blog post titled “Is GM corn really different to non-GM corn?”, in which he dismisses the significance of our findings that GM NK603 maize is not substantially equivalent to its non-GM counterpart (closest relative).

    In our state-of-the-art integrative multiomics analysis, we found significant differences in both protein and metabolite profiles in the GM maize. The alteration in protein profile indicated an imbalance in energy metabolism, whereas the most pronounced difference in metabolite profile was higher levels of polyamines, including the potential toxins putrescine and cadaverine.

    Dr Jacobs implies that we should have first established the typical range of each protein across the totality of maize crops, so that we could see how the GM maize under test fitted into this range of natural variation. The aim here appears to be to minimize the importance of any changes seen in the GM maize by stating that they are within the range of natural variation.
    However, Dr Jacobs misses the point of the study, which was to specifically ascertain the effect of the GM transformation process on the composition of this maize variety. As a result, the only scientifically valid comparator in this investigation is the non-GM closest (isogenic) relative. Comparisons with different varieties of maize that have been grown under different conditions would only serve to increase variation in the dataset and thus mask rather than highlight the effects of the GM transformation process, thereby negating the very purpose of the study.

    Scientifically invalid comparisons

    Dr Jacobs criticises us for not following the common regulatory practice of comparing the GM crop not only with the non-GM isogenic counterpart but also with “the range of what humans are already exposed to in their diet, as that may indicate the need for more testing. Therefore, you will see most substantial equivalence tests address not only the non-transformed isogenic line and/or a null transformant, but also the range of known compositions for that crop.”

    It is correct that regulators look at the range of normal variation across many varieties of maize and see whether the GM crop falls within that range; if it does, they often consider it “substantially equivalent” in spite of large differences in composition of the GM crop compared with the non-GM isogenic crop. However, as mentioned above, this practice is scientifically invalid because it only serves to mask differences in the GM crop, even though it is the purpose of GMO regulation to highlight such differences and investigate their importance, if any.

    We were not surprised to find significant proteome and metabolome differences between the GM NK603 maize and its non-GM counterpart. What is of interest is the quality and quantity of changes that we found. Such changes are part of the innate nature of transgenic technology and stem from a combination of transgene insertional mutagenesis, tissue culture-induced genome wide mutations, and the resulting novel combinations of gene functions.

    The changes observed in the metabolome and proteome profiles reflect a mixture of effects and outcomes. Some of the changes are a consequence of the GM transformation process whilst others can be due to downstream interventions such as backcrossing or outcrossing of the initial GM event. However, our interactome analysis makes us confident that the majority of the changes observed can be attributed to a metabolic effect of the heterologous EPSPS-CP4 transgene (Figure 5 in our publication). The analysis of predicted interactions of biochemicals and proteins that might have a link to the transgene-associated EPSPS-CP4 pathway reveals that some proteins or metabolites altered in the NK603 GM maize are interacting with EPSPS-CP4. We are nonetheless aware that our study does not present an absolute truth. We are building on previously published studies to make progress in the use of omics methods in the investigation of GM crops. We acknowledge the potential limitations of our study in the discussion section and suggest ways forward. For instance, we stated that “further experiments made under different environmental conditions would be needed to determine the full range of effects of the GM transformation process on NK603 phenotype”.

    Major regulatory implications

    We were aware that our multiomics analysis could not address the safety of this GM food product, unless it revealed something unexpectedly dramatic that was unequivocally harmful. As we state, the main purpose of our study was to see if the claims of NK603 being substantially equivalent to its non-GM counterpart stand up to an in-depth molecular profiling, rather than gross nutritional compositional analysis. Our results clearly show that they do not. And although we cannot make any definitive inferences regarding the safety of consuming this product, our findings nevertheless have major regulatory implications, since the starting point and indeed the foundation that underpins the safety evaluation of a GM food is whether it is “substantially equivalent” to its non-GM counterpart. If this is found to be the case (based on a crude nutritional compositional analysis only), then little or no further safety evaluation is usually required.

    Our multiomics results clearly call into question and indeed challenge the claim that NK603 maize is substantially equivalent to its non-GM counterpart. This in turn implies that industry and regulatory assertions of the safety of this product based on substantial equivalence are unfounded. Thus our study suggests that the health risk assessment of this product should be revisited, to possibly include more generic safety testing based on long-term animal toxicity feeding.

    We believe our study significantly builds on previous work demonstrating the value of using omics analyses to evaluate the effects of the GM transformation process on a crop. This approach can clearly provide initial insight into the safety of GM foods and can potentially inform more targeted toxicity follow-up studies in lab animals.

    A complete response to Dr Jacobs’ blog including a reply to comments on our paper posted on the Science Media Centre (UK) website and Mary Mangan (MEM_somerville) on the Scientific Reports journal site can be found here:

    • “in which he dismisses the significance of our findings”

      “Dr Jacobs implies that …”


      Please note carefully that my article for the very large part is a collation of other people’s responses, not my own.

      I collated others’ responses, adding “lite” recaps of their concerns in simpler words for the benefit of the non-scientist readers my article is aimed at.

      I offered almost no suggestions of my own bar a couple of minor points near the end.

      This should have been obvious from a reading of my article. Certainly another reader picked up on this: “Thanks for this interesting round up Grant”, suggesting others were able to read it for what it was.

      I made this explicit in the opening paragraphs: “I’m going to offer a brief explanation of what they have said.”

      Perhaps you read and responded too hurriedly?

      I cannot be held to these concerns, nor “defend” them, as they are not my points: they are other people’s.

      I have not yet read your Google Docs article beyond a keyword search for my surname (I have work I would like to attend to*), but on a hurried keyword-search-based skim of it, it seems as if throughout you have made out that these claims are mine. (* I’m not sure if I will find time, either, I’m afraid.)

      If so, that is not correct. And if so, would you please correct your Google Docs article? A correct way to do this would be to note at the top that you erred in an earlier version and misattributed the claims to me, rather than to the actual authors of those claims. (This way readers are aware that an earlier edition of the response had issues, and that the document has been altered.)

      If you are concerned about these people’s claims, you will have to address them, I’m afraid. As I said earlier, they’re not mine to defend.

      I did consider replying with my own concerns, but I felt it would be more useful to just collate what others had already said, seeing as a lot had been said. For what little it’s worth, ideally my own approach would have been to take the data and, if possible, compare it with other sources. (One problem is that I’m not sure I have time to do that, and certainly no-one is paying me to do it!)

      All this said, I did offer two lesser points near the end, after “But let me toss in two minor points, […]”.

      To a few other points:

      “However, Dr Jacobs misses the point of the study, which was to specifically ascertain the effect of the GM transformation process on the composition of this maize variety.”

      Writing definitively, as you have there, has the effect of putting words in my mouth, and I don’t really appreciate that: best to let the person speak for themselves. (You’re not right.) I hope you haven’t also done this in your GoogleDocs.

      “In our state-of-the-art integrative multiomics analysis”

      Aside from the fact that I always find this sort of grandstanding silly, it really doesn’t help discussion, as how grand your techniques are is not relevant: crap work can be done with the fanciest tools.

      “As a result, the only scientifically valid comparator in this investigation is the non-GM closest (isogenic) relative. Comparisons with different varieties of maize that have been grown under different conditions would only serve to increase variation in the dataset and thus mask rather than highlight the effects of the GM transformation process, thereby negating the very purpose of the study.”

      I agree with others that this is not correct. Perhaps you don’t understand why others think that? But that’s for another time, I’m afraid, as I have to get to other things right now. It might need another blog post to explain – the general form of this is a recurring problem with analysis of biology data, especially large-scale data, that has bothered (some) bioinformaticians for a long, long time. (Makes me think there might already be something out there.)

      • “As a result, the only scientifically valid comparator in this investigation is the non-GM closest (isogenic) relative. Comparisons with different varieties of maize that have been grown under different conditions would only serve to increase variation in the dataset and thus mask rather than highlight the effects of the GM transformation process, thereby negating the very purpose of the study.”

        Since the comparison grain in this study was not isogenic, and was grown under different conditions (it wasn’t infected with rust, and had not been sitting around for several years prior to the study), Dr Antoniou seems to have admitted that the study needs to be retracted.

  • Michael Antoniou,

    Just to quickly add to my previous comment, you wrote:

    Dr Jacobs criticises us for not following the common regulatory practice of comparing the GM crop not only with the non-GM isogenic counterpart but also with “the range of what humans are already exposed to in their diet, as that may indicate the need for more testing. Therefore, you will see most substantial equivalence tests address not only the non-transformed isogenic line and/or a null transformant, but also the range of known compositions for that crop.”

    Note that the quotation that you give is from a quoted passage in my article, presented as a block quote, and introduced with,

    “More comprehensively, Chris Preston goes on to write,”

    It is not “Dr Jacobs criticises us” but “Chris Preston criticises us”.

    (To be up-front, I’m struggling to see how you’ve gotten this wrong; it is quite obvious they are not my words.)

  • They have absolutely not demonstrated that this effect is due to the “GM transformation process”. There are flaws in this study at every step–from experimental design through analysis–and they are doing nothing but persisting in making claims their data can’t cash.

    And it is not merely a theory that the top fold change protein in your data is (twice) a maize pathogen protein. It is your data. You continue to fail to explain why the top fold change protein is a pathogen that could cause polyamine response changes.

    In short, no one in a regulatory capacity will ever take this paper seriously because of its flaws. Some anti-GMO folks who don’t understand the issues will certainly be misled, I’m sure.

    However, it would be a wise step to deposit all of the raw data in a public repository so more of your new claims can be checked. Please submit the data and let us know when that’s been done.

  • Until Dr Antoniou showed up, this whole discussion was entirely focused on all the ways in which his work was terrible.

    After Dr Antoniou provided a clear scientific defense of the paper in question, Dr Jacobs exploded, but his main outrage is not over the science, but instead about the suggestion that he, Dr Jacobs, might be criticising the paper.

    Dr Jacobs should stop pretending he is not criticising the paper, get down off his high horse, and engage in proper scientific discussion with Dr Antoniou.

    • Hi John,

      I have not “exploded” – the suggestion is hilarious! 🙂

      What I am is (very!) puzzled at how Michael has managed to attribute other people’s words to me. They’re plainly quoted, after all, and clearly attributed to those who wrote them. He can’t make them out to be my errors, or ask that I “defend” them, as I didn’t write them. It’s just something he’ll have to fix.


  • What I am (very!) puzzled about is

    1. why aren’t you eager to discuss the science with Dr Antoniou?
    2. your basis for taking umbrage, bearing in mind your obviously very low opinion of the work (crap, etc) – why are you so resistant to owning your own opinion?

  • Grant – if someone writes an article quoting extensively from several sources all taking a similar line (in this case critical of the paper showing GM and non-GM maize are not substantially equivalent), all quoted with apparent approval (certainly no disapproval), is it not reasonable, indeed common practice, to assume the author agrees with those criticisms? All the more so when you quote the well-known pro-GM campaigners, the UK Science Media Centre, and (as you admit) add criticisms of your own. Why on earth are you pretending you do not share those views? Are you ashamed of them? Are they actually indefensible, which is the most rational response to your angry denials at having anything to do with them? Peter Melchett (UK organic farmer and campaigner opposed to GM crops)

    • It would have been good if you had read the other comments first, as I have actually addressed most of this earlier!

      Most of what you say can be covered by remembering that I’m reporting others’ words. You wouldn’t assume a reporter agrees or not with what they cover or quote. (The important word is ‘assume’: you can’t make the assumption.)

      I have written recaps for many of the responses in a way that hopefully helps others understand what is being said. I get the impression you are confusing that with me writing for myself, but I think I’ve made it pretty clear in the piece what I’m doing.

      “is it not reasonable, indeed common practice, to assume the author agrees with those criticisms?”

      Actually, no, that would be assuming to know what the author thinks – see my earlier point about assumptions. (If the author explicitly says what they think, of course then you can.)

      Just a side note: worth remembering that because something is common practice doesn’t mean it’s “correct” or right. There’s a logical fallacy along those lines: the argument from popularity, aka argumentum ad populum. (I’ve even written a post on it, but it’s from a different angle than would suit here.)

      I happen to agree with most of what is said, but it’s not as simple as straight ‘yes’ or ‘no’ because there are technical details involved. But note that I can’t speak for others, more on this below.

      “and (as you admit) add criticisms of your own”

      You seem to be over-reading here (also see my earlier point about my recaps).

      I added very little of my own, and have said so earlier in these comments. I said I’ve really added only the two smaller points at the end & that almost all of the rest is reporting other’s responses. I think this is pretty clear to a fair reading.

      “Why on earth are you pretending you do not share those views?”

      I’m not pretending anything. You can see what I’ve written in the comments, for example.

      “Are they actually indefensible, which is the most rational response to your angry denials at having anything to do with them?”

      No, that’s not the correct view, and I’ve explained the correct view earlier on these comments. (To be up-front: that’s a pretty weird way of reading it, and reads as trying to pin a position on me.)

      The problem is that I can’t speak for others, as I related earlier.

      Michael Antoniou has replied to others’ words but addressing me as if I wrote them – an error on his part. I can’t speak for those people. It has the effect of asking the messenger to defend others’ messages. (I could certainly add my thoughts, but it’s not for me to speak for them. There are a few other errors in what he’s written, but I haven’t time to address them.)

      There is no “angry denial” – I’ve already explained that earlier in the comments, too. I was just surprised at his misattributing others’ words to me, even direct quotes that were clearly attributed to others. I offered him the benefit of the doubt that he’d just read too fast.

      Re obtaining approval, all quotes are taken from public sources & are linked in the piece (which has the additional value that people can verify their original context).

  • I posted this at the journal site, but the comment system there is absurd. It will likely be suppressed by trolls who can down-vote it away. So I’m posting it here as well.


    Just to summarize, for those who have come along later in the discussion and have lost access to comments because of poor moderation at this site:

    1. Authors admit that the top fold protein change in their data is a protein from a pathogen of maize, which could affect polyamine levels in maize.

    2. Authors admit that they knew about this, and despite this declined to address it in their paper in any manner.

    3. Authors claim to have mycotoxin data which they will share with academics (but not everyone, despite it being potentially influential in this analysis). They do not address the fact that mycotoxin production does not directly correlate with the amount of fungal infection present, so it still doesn’t tell us much, even if they would provide this for everyone to see.

    4. Authors admit that they included non-maize proteins in their analysis to illustrate the differences in maize proteins.

    5. Authors used peptide-level data to incorrectly make claims about protein level changes, without supporting evidence. This has consequences for subsequent analysis steps as well.

    6. The claims of the suitability of their isogenic line are not supported by evidence. The conclusions about the differences arising from the “transformation event” are not supported by this work.

    7. Poor study design means that any statistical assessments are flawed right from the beginning. However, with proper design, some of the other items might have been addressed.

    8. More than one request for submission of the raw data to a public repository has been made, and authors have not complied with this request.

    Anything else? It may be time to take this up with the editors of this publication.

    • Authors chose to combine two years of observations rather than report both sets and allow the reproducibility of the results to be examined.

      • Sorry, just getting back to this now. There is a large orange fog over some issues in the US right now that have diverted my energy for science nonsense.

        @herr doktor bimler: Yes, quite so. In my head I bundled that into “poor study design” but I should be more specific on that point.

        Also on the back-channel, someone suggested that the lack of adequate literature assessment by this team means that they ignored crucial papers that provide context for this work. It also means their claim of being the first to evaluate these things is unsupported. I can include that.

        I was also waiting to see if the raw data became available in a public repository before going to the journal, but that time is running out. I know that multiple requests have been made and ignored. So that will be noted in the upcoming letter.

        There’s still time for any other issues you may come across. But I expect to send off a letter to the journal in the near future.

  • herr doktor bimler, Claire, and, well, anyone! –

    I wonder how many of these things could be resolved if the original data, or a more appropriate data set were available.

    On that note, I’m a big fan of data standards efforts, including determining what is needed for critical examination, reproducibility, etc. (I’ve written about this general issue previously, a number of years ago, and am known to bore people with it 😀 )

    This initiative, for example, might be a place to start – the HUPO Proteomics Standards Initiative.

    Their overview gives their aims as,

    “The HUPO Proteomics Standards Initiative (PSI) defines community standards for data representation in proteomics to facilitate data comparison, exchange and verification.”

    Perhaps Claire (or anyone else) can comment on this?

    On a tangential note, it helps when journals encourage (or insist on) these standards, as it brings the data into a more useable form for everyone. DNA sequences are a prime example of this: once, you didn’t have to deposit your sequence data in a database to get published. Once journals started insisting on a GenBank accession number, things took off.

    (I realise metabolic products would have to be included.)

  • Just to carry this over from Scientific Reports for any who may find it useful. Readers might recall some earlier comments referring to the use of peptides v. proteins in presenting the results.

    The senior author (Michael Antoniou) maintains that this is a correct approach; Claire has replied to clarify why she feels this is wrong. I’ve copied both below and added a brief thought of mine at the end. (Sharp readers will note Michael has also repeated his mistake of attributing to the ‘messenger’ the words of others being quoted, just as he did in replying to me above!)

    In her last posting mem_somerville (Mary Mangan) states:

    “Authors used peptide-level data to incorrectly make claims about protein level changes, without supporting evidence. This has consequences for subsequent analysis steps as well.”

    “The major statistical problem in this paper is that if any single peptide in a protein is found enriched, the full protein has been counted as enriched, even if all other peptides of that protein are found at equivalent or opposite levels between the two samples.”

    These statements seem to reflect a fundamental lack of understanding of how a contemporary proteomics analysis, such as we present in our publication, is undertaken.

    The proteomics approach used in our study constitutes a standard mass spectrometry analysis. In brief, mass spectrometry not only allows the precise determination of the molecular mass of peptides and the proteins from which they are derived, but also the determination of their sequence, especially when used with tandem mass techniques, such as we employed. Therefore, fragmentation of peptides and proteins is necessary to give sequence information that can be used for protein identification, de novo sequencing, and identification and localization of post-translational or other covalent modifications. There is absolutely no misinterpretation of peptide data for the identification of proteins. Information about mass spectrometry principles and applications can be easily obtained online or from this book: Mass Spectrometry: Principles and Applications, 3rd Edition. Edmond de Hoffmann, Vincent Stroobant. ISBN: 978-0-470-03310-4.

    As explained before, such a mass spectrometry approach also allows the detection of post-translational or other covalent modifications. This is the reason why peptides with the same amino acid sequence can have a different mass and thus are found separately in our dataset. Because of these differing post-translational modifications, which can lead to different enzymatic and other functions, they cannot be treated as belonging to the same protein and thus their fold changes cannot be grouped together into a single entity, as wrongly suggested by some commentators.

    Although we have controlled the rate of false positives by applying the Benjamini-Hochberg procedure, some statistically significant differences could be due to chance. As stated in our paper “P-values calculated by a pairwise Welch’s t-tests and adjusted by the Benjamini-Hochberg multi-test adjustment method for the high number of comparisons were below 5%”. This means that 5% of the results indicating alterations in levels of proteins could be false positives. However, this level of uncertainty is standard for this type of statistical analysis dealing large numbers of comparisons. Therefore, the finding of inconsistencies at the level of a few individual proteins does not invalidate the conclusions drawn from findings based on the overall proteome.

    Our interpretation of the data obtained follows basic standard procedures for mass spectrometry proteomics analysis, which have been developed and evolved over decades and are now held to be correct. We therefore encourage critics of our study who believe we have used the wrong analytical methods, to publish their views in specialized peer-reviewed journals for evaluation by experts in the field.

    Claire, who clearly (no pun intended!) has familiarity with analysis of protein/peptide mass spectrometry, replied trying to clarify her reason for objecting to presenting peptide results rather than proteins:

    Sorry, but you’ve misunderstood the issue in my (not Mary Mangan’s) comment. In this paper, you’ve taken the peptide-level differences and represented them as if they were biologically relevant protein-level quantifications. This is a fundamental misunderstanding of how protein expression differences are calculated from peptide data in any proteomics experiment. The combined measures of multiple peptides in a TMT experiment gives statistical support for differences in a protein’s expression.

    Proteome Discoverer’s TMT workflow (which you used according to the methods) gives both peptide-level and protein-level changes between conditions on two different tabs in the interface. You can see this clearly on page 3 of this Thermo manual, which also states “A more biologically relevant representation of these [peptide] metrics is the data for individual proteins (Table 2), as expression differences for individual proteins is most of interest to biologists.” So you took the top scoring different peptides for this paper. Did the protein quantification show any significant differences? If there weren’t, then there are no significant differences in protein expression between these maizes.

    Some reading to give background of pep vs prot quantification:

    It’s also absolutely fine to combine peptides w/ different PTMs from the same protein when calculating protein differences.

    A quick comment here (I’d write more but have to head out) – you’d want to check if there is any difference in protein expression first (and perhaps also gene expression) — seems essential context for understanding this.
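    To make the peptide-versus-protein point concrete, here is a minimal sketch (in Python, with invented numbers; this is not the authors’ pipeline or data) of the kind of roll-up Claire describes: summarising each protein by a robust combination of all its peptides’ log2 fold changes, rather than reporting the single most extreme peptide:

    ```python
    from statistics import median

    # Hypothetical peptide-level log2 fold changes (GM vs non-GM),
    # keyed by the protein each peptide maps to. All numbers invented.
    peptide_log2fc = {
        "proteinA": [2.1, -0.1, 0.0, 0.2],   # one extreme peptide, rest flat
        "proteinB": [1.3, 1.1, 1.5],         # consistent shift across peptides
    }

    def protein_level_estimate(peptide_fcs):
        """Summarise a protein's change as the median of its peptides' log2
        fold changes, an outlier-robust roll-up. Reporting only the single
        most extreme peptide would overstate the protein-level change."""
        return median(peptide_fcs)

    for protein, fcs in peptide_log2fc.items():
        print(protein, round(protein_level_estimate(fcs), 2))
    ```

    On this toy data, proteinA’s estimate (0.1) is far smaller than its top peptide (2.1), while proteinB’s peptides agree so its estimate (1.3) remains meaningful; any significance testing (with multiple-testing correction) would then be done on these protein-level values.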

    On a side note, I find this suggestion hand-waving (or worse):

    At present we can only speculate as to the mechanisms that may explain these effects but they may have their basis in epigenetic programming of gene expression patterns with consequent longer term effects. The spraying of Roundup could have acted as a signal causing an alteration in gene expression patterns in the growing maize. […]

    Epigenetics is the latest “trendy thing” invoked to explain just about anything people can’t explain. I would have thought a much simpler explanation would be environmental effects, but it’s not offered – it should have been, I think. (Discussions can include speculation, but it should be reasonable, and if simpler alternatives are on the cards, those should be included.)

  • On the off chance anyone wanders by this way (!), and adding belatedly as much as a note to myself if I revisit this – another study has found, as Harrigan et al. did, that variation in the composition of GM maize (corn) was due more to genotype and environment than to the genetic modification –

    Bernillon S, Maucourt M, Deborde C, Chéreau S, Jacob D, Priymenko N, Laporte B, Coumoul X, Salles B, Rogowsky PM, Richard-Forget F, Moing A. Characterization of GMO or glyphosate effects on the composition of maize grain and maize-based diet for rat feeding. Metabolomics. 2018 Feb 17;14(3):36. doi: 10.1007/s11306-018-1329-9.

    (Harrigan et al. is linked in my section, “Claims that it’s the first”;