By Guest Work 11/01/2017


by Professor Jack Heinemann

New studies published by Nature’s journal Scientific Reports are questioning the basis of how to determine the safety of products used in agriculture and at home.

The first of these featured reports is on the application of ‘omics’ techniques to a long familiar GM maize line called NK603. The second featured report is on the application of omics to rats that eat Roundup, one of the glyphosate-based herbicides used on NK603.

Where is the science in scientific risk assessment?

The basis for most risk assessments of genetically modified plants is ‘comparative’. The GM product is compared to something already assumed to be safe. The risk assessment is informed by scientific tests that measure the similarity or otherwise of the engineered product to, usually but not always, the closest parent that has not been genetically engineered.

The comparative method is informed by science but isn’t scientific data. For starters, someone has to decide what constituents of the GM and non-GM plant will be isolated for comparison. What could be important might not be known at the time this decision is made. In addition, even what constituents can be detected can vary as scientific instrumentation and methodologies change.

After measurements are made, the crop developer and regulators decide whether they believe that any detected dissimilarities are worrisome and whether the sum of similarities is reassuring enough to consider the new product ‘as safe as’ non-GM alternatives. That step is often referred to as the standard of ‘substantial equivalence’.

Determining substantial equivalence is an action of experts, including scientists, but is not itself ‘science’ or the data that comes from a particular scientific experiment. In technology risk assessment, the public deserves to know where the science ends and expert scientific judgment begins.

Substantial equivalence

A GM plant is intended to be different in at least one important way, such as in tolerating a herbicide. A decision on product safety follows considering potential adverse effects of both the intended change in traits and the unintended differences. Substantial equivalence to a non-GM relative implies that there were no important unintended changes in the GM plant.

NK603 was engineered to live after being treated with herbicide (e.g. Roundup). Regulatory approvals for cultivation of NK603 date back 17 years and it is approved for cultivation in 13 countries. It is one of the oldest and most widely adopted GM products in history. There should be no surprises from this maize if substantial equivalence is being used effectively to evaluate safety.

Challenge to substantial equivalence

Using methodologies that were not reasonably available when this product was developed in the late 1990s, the first featured study found previously undetected changes in the proteome and metabolome of NK603 compared to its non-GM relative. The proteome is the collection of proteins and the metabolome is the collection of small molecule biochemicals, in each case taken from specific tissues at a particular time.

Importantly, the study also compared herbicide sprayed NK603 to unsprayed NK603. Differences were found in maize seeds from sprayed and unsprayed plants too.

Differences per se may not make the product harmful to people, animals or the environment. However, identifying differences is a critical first step for constructing specific hypotheses about how those differences could cause harm. If this first step in a comparative risk assessment is compromised, then the final risk assessment might be too.

A critic of the study asked: “How equivalent does it need to be [to be safe]?” This is a reasonable question. Because the answer is a judgment, not the outcome of an experiment, it is contestable, a point made by the authors of this study. On whose judgment do we rely, and how might that judgment change depending on the context of how the product is used or who benefits from its use?

For example, pharmaceuticals have side-effects and don’t always work. The decision to use a particular drug is affected by both the health and history of the patient and the patient’s assessment that the side-effects and other risks are less important than the benefits the drug promises to deliver.

GM crop plants neither undergo safety trials equivalent to drug testing nor is their use as controlled as prescriptions. GM products may be distributed in much more varied ways, for example inhaled as flour or ingested in complex cooked mixtures that vary from country to country, and they are eaten by everyone from babies to adults.

Substantial equivalence works best when significant differences have been found, but it is marginal as a method for ensuring safety because it may not be informed by a full description of relevant differences.

Challenge to pesticide safety

The second featured study describes some important physiological changes in rats that have chronically ingested herbicide at and below legally allowed levels in food and drinking water.

This study used livers from rats fed Roundup in an earlier study that reported significant changes to blood and urine biochemistry, histological alterations reflective of structural damage, functional disturbances in liver and kidneys and tumour formations, especially those of the mammary gland. While the latter observation was contested, the former findings, to my knowledge, have received little challenge.

Significant differences were seen between the proteome and metabolome profiles of the livers of treated and control female rats. The changes caused by low levels of herbicide exposure were consistent with the manifestations of non-alcoholic fatty liver disease and its progression to non-alcoholic steatohepatitis.


These liver diseases are important and growing in frequency. The study makes no claim that all or even most occurrences are due to herbicide residues, leaving this to future investigations. However, when symptoms of these diseases in rats are linked to herbicide exposures it makes sense to reconsider the risk assessment for Roundup and other herbicides based on glyphosate.

While much time is devoted to carcinogenicity, the glyphosate-based herbicides have been associated with a variety of other health effects, from inducing antibiotic resistance to endocrine system disruption. Were these herbicides obscure formulations used in specialist factories, such concerns might not be of such great public interest. But they are used worldwide more than any other kind of pesticide, and herbicides are some of the most common chemicals released into food and the environment.

Concerns about glyphosate-based herbicides are often countered by threats that their elimination would cause greater use of more toxic alternatives. This threat rings hollow, both because excessive use is already producing resistant weeds that drive farmers to use other herbicides, and because it is a false choice.

Let’s not swap glyphosate-based herbicides for those that have different toxic effects. Rather, let’s use science to reduce the use of herbicides and the products of technology that are dependent upon them.

Professor Jack Heinemann is a lecturer in genetics at the University of Canterbury.


29 Responses to “GM crops and herbicides: time to reassess risk assessment methods”

  • I fully agree – our regulators need to stop relying on secret studies paid for by those selling biotech & pesticides.

    Some scientists (http://sciblogs.co.nz/code-for-life/2016/12/31/gm-corn-really-different-non-gm-corn/) contest the first paper, including through ad-hominem attacks, but they do not address these much broader and more important questions about how to avoid regulatory capture, and how/when to rely on new scientific methods and the new findings they produce.

    Regulators really should be up with the play scientifically. These two papers are like coal-mine canaries – indications that stuff we can’t see may be harming us. If our regulators were truly independent, they’d be funding investigations like these, but at the very least they should be welcoming the new information.

    Debates over food safety:
    – can never prove safety, though they can disprove concerns about particular forms of “unsafety”
    – are advancing rapidly, allowing the testing of new hypotheses about “unsafety”

  • These two papers should be considered in the context of the recent finding by the WHO that glyphosate is a probable carcinogen – and the response of the Australian chemical regulator (Australian Pesticides and Veterinary Medicines Authority), which decided in December 2016 that it disagreed with that finding and that no formal review would take place. It continues to insist that glyphosate has a low risk of harm – the classification under which it was originally approved. Interestingly, part of the reason given by the APVMA for refusing to review glyphosate was that it was in possession of ‘proprietary’ data (read: industry funded, non-peer-reviewed) that the WHO didn’t have access to.

    A second reason for coming to such a different conclusion is that the regulators in Australia don’t assess full formulations of glyphosate, solely the so-called active ingredient, which is never used alone.

    It is hard to understand how such evidence can be so thoroughly ignored except by recognising how deeply compromised these agencies actually are.

  • The previous inability to fully characterise a GM product gave “substantial equivalence” a certain protection from scrutiny. But partial analysis was never going to provide an adequate assurance standard long term, and technology has now shone a light on the weak underpinnings of this methodology.

  • I note that Jack fails to do the first step, to first check if these papers warrant anything being based on them. (This also has him putting his advocacy ahead of the science.)

    Just because some research uses a newer technique does not in itself make it useful or sound: how the work is done matters, too. Crap science can be done with the fanciest tools!

    Long story short, the first is simply not good enough to base anything on. I’ve yet to look at the second (it’s only just come out and I have other things to do), but I suspect it also does not address the key point others noted about the first.

    You can see some of the concerns about the first paper (the corn study) gathered in an earlier post: http://sciblogs.co.nz/code-for-life/2016/12/31/gm-corn-really-different-non-gm-corn/

  • Ah, the start-on-an-ad-hominem approach.
    1. Your comments, Grant, don’t apply at all to the first half of my blog. That half is not dependent on an assessment of the two new papers.
    2. All science, and especially that on the cutting edge, is contestable. Even the science that you don’t criticise, Grant. The contrasting science is the collection of industry studies that the regulators used, which is generally not available for scrutiny because it is usually never published nor put through blind peer review, much less bravely put out by open access so that everyone can read it.
    The question isn’t does a study have flaws – they all do; the question is do they have data that provide new knowledge. These studies do. They are not ‘crap’. And republishing various comments collected from those who make broad sweeping, speculative and rhetorical statements without author right of reply doesn’t make your blog more convincing to me.
    3. Some of the comments, though, are of a scientific nature and could be useful to inform a follow-up study. That a study generates more questions for future work, and even for replication, is NORMAL science, not the hallmark of bad science. The rigorous blind peer review process of a top publishing house, as is behind these papers, increases our confidence that there is value in taking the science seriously, warranting further test and replication. Dismissing them in entirety, especially through rhetoric and speculative arguments in blogs and media releases is fair free speech, but it is not the scientific process.
    4. I disagree with what you think is your most important criticism, that in making a comparison between NK603 and the near isogenic parent, the authors fail to take into account the possible variation from all maize.
    As I’ve argued (e.g. Environ. Plan. Law J. 24, 157-160 (2007), Env. Int. 37, 1285-1293 (2011)) there are important reasons to consider both isogenic comparisons (which are the bedrock of most genetic experiments and have been since, well, forever) and species wide variations. Comparisons to the near isogenic parent are not only valid, but are the primary requirement in international guidance on risk assessment for products such as NK603. It wasn’t the standard invented by these authors.
    Species wide variation can also be useful if it takes into account the particular combination of deviations from the isogenic parent, not each variation one at a time. The reason for this is that we don’t know in most cases what allows e.g. a particular protein or metabolite to vary over a particular range in a ‘safe’ physiology. Some of what might allow it to vary outside of these parameters could be one of the other significant variations in the GM plant. Thus, how rare the combination of changes is would be a useful measure, not just how often any particular change is seen in some other study, done under different conditions, possibly decades old and not available for replication.
    It would be like saying that you could find a 1970s Ford Pinto safe because the extreme position of its petrol tank was not any more extreme than the position of all gas tanks on all cars ever made, or even all cars made by Ford. The particular position of the petrol tank on that car, and the other particular aspects of that car, need to be taken into account. So I’d encourage you to do a study along the lines of what you think is important and then let us know what you find.

    • Jack,
      In the comment above, you reported that the Mesnage study used near-isogenic lines and that this is a requirement for assessments; however, they provided no evidence in the paper that they used near-isogenic lines (NILs). Only a genotyping assay for identity and purity is mentioned, and no results are presented. Moreover, the paper claims that the transgenic variety is DKC 2678, that the non-transgenic variety is both “isogenic” and “near-isogenic” at the same time, and that it is both variety DKC 2675 and variety DKC 2575 depending on which sentence you read. Getting past the sloppy documentation of this information, it is clear that they used two different commercially available field corn hybrids with different parents, which cannot be considered near-isogenic lines without data supporting this claim. I have direct experience with determining the isogenicity of maize NILs, and it cannot be assumed. NILs must be generated through painstaking backcrossing, or generated de novo through transgenesis (which would not be possible for evaluating an already-existing transgenic event). I might add that your statement that NK603 is a “line” is incorrect – it is an “event.” In science, specific terminology is exceedingly important, because it allows us to determine what an experiment actually finds. The difference between two varieties can be very large, and different hybrid varieties cannot be casually confused with NILs or isogenic lines.
      My question to you is, if the different hybrids they used are not near-isogenic lines, would you agree that their conclusion that the observed differences are the result of the transgene and/or the transformation process is unwarranted and incorrect?

      • “if the different hybrids they used are not near-isogenic lines, would you agree that their conclusion that the observed differences are the result of the transgene and/or the transformation process is unwarranted and incorrect?”

        Karl,
        I should have said event for NK603.

        To your question: I think that the relevant science question is whether they have drawn legitimate conclusions from their data. The conclusion that they have drawn is reasonable. Reasonable doesn’t mean that it would be expressed that way by everyone. The way that they have gone about it, though, is constructive because the paper allows one to further test and increase or decrease certainty in the conclusion. This can be done by introducing even more closely related control lines if they are available.

        International guidance never quantifies ‘near’ isogenic. Therefore, it is a judgment call. The comparisons that I’ve seen in the studies provided by crop developers to regulators have never provided more detail on verification of the isogenic lines than this paper does. In those industry studies, the comparisons were made between lines that have had multiple breeding cycles separating them. Yet with all that breeding introducing variation, these other studies have come to the conclusion that, in effect, the variation is small enough to assert equivalence. Indeed, the extra breeding is done in part to remove unintended additional integrations and thus to increase similarity.

        How then does the extra breeding decrease unintended changes when the developer does it but increase variation when lines are compared by these authors? Well, either they measured different things (and that is true) or the industry data actually had so much variation that they couldn’t separate the samples. Despite using that as a confidence builder for safety, others would see it as accepting a low correlation between similarity and equivalence.

        Differences between lines can be large, as you say. But this challenge has always been with the comparative approach and, as this paper points out, is a weakness of basing safety on estimates of equivalence. There is nothing in this new study to suggest to me that the techniques used to ensure as close a relationship as possible were less robust than those in studies previously used by regulators. So while I would love there to be perfect information on comparators, there isn’t and that is a real world constraint that equally affects studies used to argue for substantial equivalence and ones that don’t.

        What is the cause of this variation? After using the nearest relatives that they could source (and possibly as near as any used in other studies) a reasonable hypothesis is other changes arising during the process of developing the line, including unintended changes at the time of event creation. Others might prefer the hypothesis that these changes had different causes. Fine. Test those hypotheses.

        I don’t know why they list DKC2675 and DKC2575. I assume that the former was a typo, because on the Dekalb site 2575 is RR2 and 2675 is VT2P.

        • Jack,
          You dodged the question. They claimed changes from the transformation process, and that requires a near-isogenic line and a demonstration of its isogenicity. You seem to agree that they have not demonstrated this – and are instead pointing to what other people have done to avoid agreeing with my conclusion. This is a simple question of isolating variables and doing good science. You said that it comes down to a judgement call – can you tell me where in the paper they presented the evidence behind that judgement call, or was it simply assumed?
          It has already been demonstrated that the majority of differences detected between even related transgenic and non-transgenic lines are due to breeding, so that hypothesis has already been tested, and you are no doubt aware of it.
          “Evaluation of metabolomics profiles of grain from maize hybrids derived from near-isogenic GM positive and negative segregant inbreds demonstrates that observed differences cannot be attributed unequivocally to the GM trait.”
          https://www.ncbi.nlm.nih.gov/pubmed/27453709
          I would like to give you another opportunity to answer the question that I asked. Did their data as presented actually test the hypothesis that they claimed?

          • Kia Ora Karl,
            As real working farmers and horticulturalists my partner and I like to know the research we are getting is free from any Conflicts of Interest.
            We have learned to value independent science as past advice from researchers directly working for or sponsored by the pesticide industry in supposedly Public Universities or Crown Research Institutes has cost us dearly.
            The research you linked to here is from Monsanto’s own lab. Have you considered that Monsanto’s researchers and those in partnership with Monsanto may not want metabolomics to demonstrate that “…observed differences cannot be attributed unequivocally to the GM trait”?
            It is not in Monsanto’s or any other GM seed company best interests to find problems with their GM products and publish it. Is it?

    • Jack,

      I’ve seen you so often assert that something that opposes you is an ad-hominem approach/attack/whatever, irrespective of whether it in fact was or not, that it looks like it’s your stock-in-trade response when you feel annoyed. I suggest you don’t make the accusation as you too often use it wrongly.

      It also has the effect of poisoning the well, as the expression goes, which, to be polite, isn’t helpful for discussion.

      I’ve had some people accuse me of this online, in ways that seem to stem from your incorrect accusation. Some of these people don’t check whether it’s right, unfortunately, so the wider effect is to start a smear attack on someone.

      The term really means an argument directed against a person rather than the position they are maintaining. By contrast, I wrote regarding what you offered. You can disagree, but it’s not ad hominem, and I think my points stand.

      I’m not going to reply to all of your points point by point as I haven’t time (I’ve only just found enough time to revisit this post, let alone read the other comments), but a general observation: you’ve mostly replied to things I haven’t written by taking my words out of context.

      But a few points nevertheless –

      re 2. “All science, and especially that on the cutting edge, is contestable.” – so? That would also mean that my points are worth hearing too, as are the points others made in the collation I drew up.

      “The question isn’t does a study have flaws – they all do; the question is do they have data that provide new knowledge. These studies do. They are not ‘crap’. And republishing various comments collected from those who make broad sweeping, speculative and rhetorical statements without author right of reply doesn’t make your blog more convincing to me.”

      Just being combative, I think. (Or defensive?)

      You must consider whether a study has flaws, especially if the flaws mean that the conclusions are not valid or useful. (The raw data will usually stand, provided it’s not contaminated, etc., but most people, including the authors, are looking at the conclusions to be drawn.)

      I didn’t write “crap” in the context you’ve shifted it to. “Just because some research uses a newer technique does not in itself make it useful or sound: how the work is done matters, too. Crap science can be done with the fanciest tools!” Straight-forward and uncontroversial, with a dose of light-heartedness. Perhaps you read past ‘in itself’? (By the way, it is fair to dismiss work if it really isn’t good at all; you pretty much have to if that’s the case.)

      “And republishing various comments collected from those who make broad sweeping, speculative and rhetorical statements without author right of reply doesn’t make your blog more convincing to me.”

      This last bit is silly and tries to dismiss out of hand. It’s a valid thing to bring together distributed discussion. (Review papers in the literature do something similar, for that matter.) While I’m writing, note I’ve linked to later contributions from others in the comments, as I often do.

      Some points are speculative because the paper hasn’t elaborated what they needed to know; they say that too – they’re not idly speculating, they’re trying to understand what was presented.

      I have no idea what “without author right of reply” is supposed to refer to. Antoniou has written in my comments; I certainly haven’t blocked that. If you mean he couldn’t comment on other forums, perhaps you should be thanking me for gathering these in a place where anyone could comment, instead of being dismissive?

      re “4. I disagree with what you think is your most important criticism, that in making a comparison between NK603 and the near isogenic parent, the authors’ fail to take into account the possible variation from all maize.”

      You can disagree, but I didn’t write “all maize” – you added that, and with it have changed the meaning of what I wrote. (Can you please read more carefully? You too often shift the meaning of what people have written when you reply.)

      You then go on to “address” what you referred to, which misses the point I was making. Ironically some of your reply actually makes my point, but you don’t seem to be aware of that.

      I’m running out of time so to be brief: all these -omics methods always find a difference. The question is not whether you find differences (these methods always will) but whether the differences are meaningful.

      It’s not a new thing I’m pointing out, or particular to this study. The equivalent issue is found all over the place with biologists putting datasets, especially large datasets, to comparative study.

      Something I wrote on my post (http://sciblogs.co.nz/code-for-life/2016/12/31/gm-corn-really-different-non-gm-corn/):

      “This is a recurring problem in all these system-wide screen[s] using highly sensitive molecular techniques: you end up having to be extremely careful to test if the variations you see reflect what you are testing, not something else as the techniques are able to pick up differences that have been caused by the most mundane reasons.”

      I’m a computational biologist, so I will look at that side of things more than the rest. Over years I’ve seen technique after technique declaring they’ve “found” something, only for it to later have to be withdrawn because they didn’t first determine what the meaningful variation for what they were testing was. (Genomics has had many examples of this, for example.)
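      [Editor’s note] The point about sensitive screens can be illustrated with a toy simulation (the numbers below are hypothetical, not taken from either study): draw two groups of replicates from the same distribution for many ‘metabolites’ and run an uncorrected two-sample test on each variable. Roughly 5% of variables come out ‘significant’ even though no true difference exists.

```python
import random
import statistics

random.seed(42)

def welch_t(a, b):
    """Two-sample Welch t statistic (plain stdlib, no scipy)."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

n_vars = 1000     # number of 'metabolites' measured (hypothetical)
n_samples = 10    # replicates per group (hypothetical)

hits = 0
for _ in range(n_vars):
    # Both 'GM' and 'non-GM' groups are drawn from the SAME
    # distribution, so any 'signal' detected here is pure noise.
    g1 = [random.gauss(0.0, 1.0) for _ in range(n_samples)]
    g2 = [random.gauss(0.0, 1.0) for _ in range(n_samples)]
    if abs(welch_t(g1, g2)) > 2.1:   # roughly p < 0.05 at these sample sizes
        hits += 1

print(f"{hits} of {n_vars} variables flagged with no real difference present")
```

      Without multiple-testing correction or a pre-specified definition of meaningful variation, dozens of spurious ‘differences’ appear by chance alone, which is why finding differences in an -omics comparison is only the start of the argument, not the end of it.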

      This is in addition to the issue with isogenic lines, which seem clearly important, esp. if you bear in mind that (back-)crosses will change the genetics and hence the expression levels, as will other breeding. Karl seems to be your man for that!

      I’d suggest you also look at Harrigan et al too – work on the same NK603 line that Mesnage et al examined: “Results demonstrated that the largest effects on metabolomic variation were associated with different growing locations and the female tester. They further demonstrated that differences observed between GM and non-GM comparators, even in stringent tests utilizing near-isogenic positive and negative segregants, can simply reflect minor genomic differences associated with conventional back-crossing practices.” AFAICT Mesnage et al don’t cite or discuss these results in their paper.

      “3. Some of the comments, though, are of a scientific nature and could be useful to inform a follow-up study. That a study generates more questions for future work, and even for replication, is NORMAL science, not the hallmark of bad science. The rigorous blind peer review process of a top publishing house, as is behind these papers, increases our confidence that there is value in taking the science seriously, warranting further test and replication. Dismissing them in entirety, especially through rhetoric and speculative arguments in blogs and media releases is fair free speech, but it is not the scientific process.”

      Actually these go against the conclusions of the current work, rather than indicating a need for follow-up work. Also, there’s no need to try to teach grandmother to suck eggs, or to try to prop the paper up: judge the paper, not its setting. The latter sentence is very loaded and unwarranted.

      I have to go to other things so I can’t finish this, sorry. Incidentally, there are a number of things I think are wrong in your essay/post, but I haven’t time to address that either. That’s life.

      • There is nothing not ad hominem about this sentence, Grant.

        “(This also has him putting his advocacy ahead of the science.)”

  • Interesting comments everyone.

    Weight of evidence, over decades, that I can see, is on the safety and positive environmental benefits of approved GMOs, so I think that Grant has a point.

    Jack, I wonder what “All science, and especially that on the cutting edge, is contestable” means to you? Why ‘especially… cutting edge’? I think I know but I’d like to know what you mean by ‘all’ and ‘especially’.

    You also say ‘NORMAL’ science. What do you mean by this? What is ‘normal’ and what is not? Surely, science is science – as I now get on to…

    Can I also ask what you mean by “The rigorous blind peer review process”? I’m not aware that peer review is particularly blind, nor overly rigorous, which is why falsifiability and repeatability are what really matter in science and in life.

    • Hi Gary

      “Weight of evidence, over decades, that I can see, is on the safety and positive environmental benefits of approved GMOs, so I think that Grant has a point.”
      I’m not sure what that point is, but to me the point of the paper was an evaluation of how we do risk assessment, not whether corn derived from event NK603 is ‘safe’. The authors do discuss how plausible human health hypotheses could be constructed to test newly discovered significant differences, but they make no claim to my memory that the products would or would not be safe for consumption. There are many more environmental hypotheses that regulators might have considered based on the differences too, if they had known about them. I guess I’m saying that NK603 can turn out to be safe, but it shouldn’t be by chance but because the way comparative risk assessment is done gives certainty in that outcome.

      “I wonder what “All science, and especially that on the cutting edge, is contestable” means to you?”
      What I mean is just that. Science doesn’t make claims that perfect knowledge on something has been achieved. That allows science to progress, because as techniques evolve and new discoveries are made, previous understandings can change. What I’ve observed is that ‘at the cutting edge’ of technique development, who does the experiment can have a big effect on the outcome. That can happen until the techniques become more widely adopted and the materials used get more standardised. For example, in the 1970s there were mixed reports of bacterial DNA in plants, with some scientists finding it and others not. It took about 10 years for even scientists who initially couldn’t find it to suddenly find it too, leading to agreement that DNA from Agrobacterium did transfer to plants. At the cutting edge, results are, in a sense, more personal, and this can lead to greater uncertainty about findings.
      By calling the studies ‘crap’, Grant seems simply to want to discredit these papers because he disagrees about either methodology or outcome. I think this is anti-science. The papers are valid contributions to knowledge. No less valid than the studies that were provided to regulators supporting the case for release. The certainty we have in the results will improve with replication (which will either support or not support the conclusions). That is the way I think science works, and I think it should be no different for these papers.

      “You also say ‘NORMAL’ science. What do you mean by this? ”
      Science is a process. It is normal in science for the outcome of someone’s work to lead to more questions that generate new hypotheses for testing. That these papers invoke just that is not, as I think Grant was implying, a sign that they are somehow specially flawed. Interesting science has always attracted strong opinions from critics who are sometimes right and sometimes wrong. It might be interesting because it does engage some deep question of humankind, or because it has financial or reputational risks for someone. It is a smear in my view to select just these critical opinions and then imply to a broad reading audience that it is an unusual experience in science for this to happen, and it does not mean that the critics are necessarily correct.

      “Can I also ask what you mean by “The rigorous blind peer review process”?”
      Peer review conducted by good journals is blind to the authors, and sometimes to the referees as well. The journal editor selects the referees and their identities remain confidential, allowing them to speak their minds without fear of any repercussions from the authors. There are other processes some people call peer review. For example, when regulatory agencies issue ‘peer reviewed’ reports, it usually means that the agency selected the reviewers and served as its own editor. This isn’t a blind peer review process. The journal that published these papers uses blind peer review.
      But I absolutely agree that having been peer reviewed, even by the gold standard of processes, is far from a guarantee that the paper is without flaw, or even all that good. It is just a much higher standard than critical comments published by SMC UK or in blogs, and a much higher standard than is used by our regulators for their decisions on products that can affect millions of people. I find that odd.
      And I absolutely agree that the real test of any particular finding is time and replication. Critiquing a paper on technical grounds can lead to better follow up experiments. But do the follow up, don’t try to smear a paper out of existence.

    • Hi John. Repeat and replicate matter. Being ‘new’ does not, to me, do either.

      Why do you think it might matter? This is why I don’t ‘reject’ (if I said that I apologise) Jack’s reply. It’s why I’ve asked for reasoning.

  • Well no, but “new” is of course a necessary precursor to “repeat & replicate”, not to mention that it’s also the reason people do science.

    I thought you implicitly rejected Jack’s reply to Grant – by not addressing its substance.

  • As scientists, we know that studies using new methods can provide new insights into an issue or debate, and that therefore studies using new methods should not be dismissed out of hand. In fact, the use of a new scientific method is often the thing that progresses our scientific understanding of an issue. I therefore agree with Prof. Jack Heinemann that we should actually consider the results of these new methods and disagree with Grant Jacobs that they should be disregarded and ignored (11/1/17, 9.48pm).

    Karl Haro von Mogel (12/01/2017; 12:58 pm) says “Only a genotyping assay for identity and purity is mentioned, and no results are presented.” Having reviewed the safety assessments of 28 GM crops assessed by the food regulator of Australia and New Zealand (FSANZ)*, I can assure you that Jack is correct when he says that “The comparisons …. provided by crop developers to regulators never have provided more detail on verification of the isogenic lines than this paper does.” I can also assure you that in those industry studies, it was routine for the industry to compare the composition of the new GM variety of the plant to several different varieties of that plant that are far from near-isogenic varieties. In some instances, the industry did not even provide proper identifying names of the varieties they used as comparators, or even said whether the comparators had themselves been genetically modified or not, let alone providing an actual genotyping assay. The approach they regularly use is a kind of “normal range” approach, where they compare the composition of the new GM crop to a group of other varieties of that crop, and if the GM crop lies somewhere in that comparator range, then there is a conclusion that the GM crop is compositionally comparable to other varieties of that crop and hence the GM crop is substantially equivalent to other varieties of that crop.
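    The “normal range” approach described above can be sketched in a few lines of code. This is a minimal illustration of the logic only, not any regulator’s actual procedure, and the analyte names and values are entirely hypothetical:

```python
# Minimal sketch of the "normal range" approach; all values hypothetical.

def within_normal_range(gm_value, comparator_values):
    """Return True if the GM measurement falls inside the span of values
    observed across the chosen comparator varieties."""
    return min(comparator_values) <= gm_value <= max(comparator_values)

# Hypothetical lysine content (g/100 g protein) in four comparator varieties
comparators = [2.6, 2.9, 3.1, 3.4]

print(within_normal_range(3.0, comparators))  # True: inside the span
print(within_normal_range(3.9, comparators))  # False: outside the span
```

    Note that the pass condition only gets easier to satisfy as more, and more varied, comparator varieties are added, since the comparator range can only widen.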

    In contrast, the authors of the multi-omics paper have found and used a near-isogenic variety, thereby reducing the variability inherent in the industry approach. In addition, the authors of the multi-omics paper used two different cultivation years, thereby measuring and controlling for between-season cultivation variability/error. They also planted the GM and non-GM varieties close together, thereby controlling for environmental factors such as soil type, rainfall etc. And they further compared samples of the GM crop that had been sprayed with glyphosate to samples that had not been sprayed, thereby controlling for the use of glyphosate. It should be noted that the GM corn variety under consideration here (NK603) has been genetically engineered to be sprayed with the herbicide Roundup (containing the active ingredient glyphosate), so that it is likely that at least some of the NK603 entering animal feed and human food would have been sprayed with glyphosate. However, compositional comparison data provided to regulators by industry were obtained from samples of NK603 that had not been sprayed with glyphosate. That is, an assumption was made that the application of glyphosate to NK603 would not change the composition of the corn. No experimental evidence was provided that this was in fact the case. In comparison, the authors of the multi-omics paper made no such assumption. Rather, they measured whether this was the case using experimental methods.
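    The design described above (near-isogenic comparator, two cultivation years, sprayed vs unsprayed samples) amounts to a paired comparison within each condition. A minimal sketch, in which the years, sample layout and metabolite levels are all hypothetical:

```python
# Sketch of a paired GM vs near-isogenic comparison within each
# (year, sprayed) condition, so season and glyphosate application are
# controlled rather than assumed away. All numbers are hypothetical.
from statistics import mean

samples = [
    # (variety, year, sprayed, hypothetical metabolite level)
    ("GM", 2012, False, 10.2), ("isogenic", 2012, False, 9.8),
    ("GM", 2012, True, 11.5), ("isogenic", 2012, True, 9.9),
    ("GM", 2013, False, 10.0), ("isogenic", 2013, False, 9.7),
    ("GM", 2013, True, 11.8), ("isogenic", 2013, True, 10.1),
]

def paired_difference(samples, year, sprayed):
    """Mean GM level minus mean near-isogenic level within one condition."""
    def level(variety):
        return mean(v for n, y, s, v in samples
                    if n == variety and y == year and s == sprayed)
    return round(level("GM") - level("isogenic"), 2)

for year in (2012, 2013):
    for sprayed in (False, True):
        print(year, "sprayed" if sprayed else "unsprayed",
              paired_difference(samples, year, sprayed))
```

    The point of the design is visible in the output: any GM-versus-comparator difference is measured separately under each season and spraying condition instead of being diluted across a wide pool of unrelated varieties.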

    Furthermore, in my review of the 28 GM crop varieties, I found that there was no threshold upon which a regulator would decide what passes or fails a substantial equivalence test, so everything passed the test. For example, for the GM corn variety MON810, almost half of the amino acids were statistically significantly different in the GM corn variety, but it was still assessed as being substantially equivalent by FSANZ, because there was no prior determination that if a GM crop got beyond x% of amino acids being statistically significantly different, then the crop was no longer substantially equivalent.
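    The missing pass/fail threshold can be illustrated with a short sketch. The 5% significance level, the 20% cut-off and the p-values below are all hypothetical, chosen only to show what a pre-declared rule could look like:

```python
# Sketch of a pre-declared pass/fail rule for substantial equivalence.
# The alpha, threshold and p-values are hypothetical illustrations.

def fails_equivalence(p_values, alpha=0.05, threshold=0.2):
    """Flag the crop when more than `threshold` of the measured analytes
    differ significantly (p < alpha) from the comparator."""
    significant = sum(1 for p in p_values if p < alpha)
    return significant / len(p_values) > threshold

# Hypothetical per-amino-acid p-values from a GM vs comparator test
amino_acid_p_values = [0.01, 0.03, 0.20, 0.40, 0.04, 0.60, 0.02, 0.55]

print(fails_equivalence(amino_acid_p_values))  # True: 4 of 8 (50%) differ
```

    With a rule like this declared in advance, a case such as the MON810 example above (almost half the amino acids significantly different) could not pass by default.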

    In addition, I found that the substances that were being compared in industry studies for substantial equivalence were not very relevant to determine if there had been an important change to the crop that may affect the health of those that eat it. For example, one of the concerns about GM crops is that they may inadvertently produce one or more proteins that may cause a problem such as an allergic reaction. Yet none of the 28 GM crop varieties contained a comparison of important proteins that may be produced by the crop. Rather, proteins in the crops were broken down into their constituent amino acids and those were compared. Since amino acids do not cause disease**, but proteins can cause illness (allergies etc), you have in effect destroyed the thing that may cause disease in order to measure the things that do not cause disease.

    In comparison, the authors of the multi-omics paper have measured substances that are far more relevant to health.

    For all of these reasons, I consider that this new paper should be considered as an important contributor to the debate and not dismissed.

    * Carman C (2004). Is GM food safe to eat? In: Hindmarsh R, Lawrence G editors. Recoding Nature: Critical Perspectives on Genetic Engineering. Sydney: UNSW Press; p. 82-93, references 228-229.
    ** They do not cause disease unless you have one of a few rare inborn errors of metabolism and need to control the intake of certain amino acids in your diet to prevent damage. For example, if you have maple syrup urine disease (MSUD; branched-chain ketoaciduria), you need to carefully control your dietary intake of the amino acids leucine, isoleucine and valine to prevent neurological damage.

  • Hi Judy,

    A quick foreword before my reply proper — I am struck by how those opposing GM have so far done one of two things to me, or sometimes both: (a) tried to shoot the messenger, and/or (b) misrepresented what I wrote, then objected to their alternative version (i.e. not what I wrote).

    I did not dismiss the study out-of-hand, nor did I dismiss the general method in and of itself either — both of which you claim I have done; both of which I did not do.

    You’ve misrepresented what I wrote, then objected to something I did not write.

    I was quite clear, writing:

    “Just because some research uses a newer technique does not in itself make it useful or sound: how the work is done matters, too. Crap science can be done with the fanciest tools!”

    Note the emphasis added for your benefit.

    That some research paper uses a new method does not make the conclusions of that research paper worthy or meaningful if the research is done badly. (Note that I took care to write conclusions.)

    Both Jack and the authors have touted that it uses some new method, but that in itself does not make the work worthy or the conclusions sound. You have to consider the whole study.

    On the note of considering relevant things, you might want to consider the concerns others have raised. These have been collated on my post: http://sciblogs.co.nz/code-for-life/2016/12/31/gm-corn-really-different-non-gm-corn/ I linked to this in the comment you are replying to. Note that the examinations of the Mesnage et al paper that came after I wrote my round-up post are introduced in comments. (I probably should shift these to the post when I can find time.)

    If papers are unable to support the conclusions they claim to have reached, or there are too many uncertainties over the conclusions, then those conclusions are set aside.

    A paper being found to not be able to conclude what it claims to conclude is also one reason why papers get retracted, sometimes voluntarily by their authors.

    I’ll let Karl speak for what you have addressed to him, but re

    “It should be noted that the GM corn variety under consideration here (NK603) has been genetically engineered to be sprayed with the herbicide Roundup (containing the active ingredient glyphosate), so that it is likely that at least some of the NK603 entering animal feed and human food would have been sprayed with glyphosate. […]”

    If my recollection of Mesnage et al’s paper is correct, you would want to add that the authors found the resulting corn did not contain glyphosate in (or on?) it.

    (Excuse my writing to a general audience here; I prefer to try to keep non-scientist readers in the loop.)

  • “It is just a much higher standard than critical comments published by SMC UK or in blogs,”

    The way you are presenting this is misleading, I think. Criticism should be taken on the merit of what was said, not the medium it is presented in.

    Peer review is just a filter, and one that sometimes fails at that. (It’s a human endeavour after all.)

    Equally good, sometimes excellent, material is written with no peer review. All peer review says is that a filter was applied; it cannot say that work written without that filter is “poor” or “lesser”.

    Furthermore, peer review is not a reason to accept the research without critique. That’s never true, whatever the journal the research is published in.

    Generally, peer review mostly gets rid of the worst, obvious issues. A wider audience can often spot problems that were not seen in peer review. Sometimes these will be major issues, especially if the paper wasn’t reviewed as well as it might have been, as happens on occasion.

    “much less bravely put out by open access so that everyone can read it.”

    Blog posts are open publication, too. Science on posts is, if anything, more like bioRxiv where work is presented to open criticism by anyone.

    The ‘bravely’ bit is silly. Open-access isn’t “brave”. It simply gives what is published wider access.

    It’s also worth remembering that a few who want their work to be ‘in the limelight’ use open-access publishing so that it can get wider exposure. In that case, it’s not “brave”, but self-serving!

    While I’m writing: you might want to edit “Nature’s journal” to “Nature Publishing Group’s journal” or delete the ‘Nature’ bit. You’re trying a bit too hard to rub Nature’s (the journal’s) credibility off on these papers. Like most larger publishing groups, NPG also has its “lesser” journals, and in their case Scientific Reports is one of them (whatever their blurbs say!)

    • Kia Ora Grant, Thanks Grant for keeping us non-scientist readers in the loop. As a non-scientist, I am to take from your response that:
      *peer review is a filtering process.
      *sometimes issues can be spotted by a wider audience
      *claims of bravery are silly if a scientific team is going open access.
      *Researchers may be seeking the limelight for their work and risk being seen as self serving.
      *Nature Publishing Group’s journal “Scientific Reports” is not a good place to mention as a scientist may be trying to rub off “Nature’s credibility” and not really attempting to inform readers of Sciblognz where the studies are to be found.
      It should not surprise you Grant that comments like these are not helpful to those Special Needs teachers like me having to read widely and filter through a wide range of research into the role dietary allergies may be contributing to some of the difficulties those with new forms of autism face.
      The database run by Richard E Goodman at the University of Nebraska does not include GM proteins, which is surprising and also unhelpful.
      Could you please address Dr Carman’s point that “substantial equivalence” enables the GM crop industry “not” to measure something they probably if not legally should measure when seeking regulatory approval from Government Food Safety agencies such as FSANZ.
      “Yet none of the 28 GM crop varieties contained a comparison of important proteins that may be produced by the crop. Rather, proteins in the crops were broken down into their constituent amino acids and those were compared. Since amino acids do not cause disease**, but proteins can cause illness (allergies etc), you have in effect destroyed the thing that may cause the disease in order to measure the things that do not cause the disease”
      Further use of proteome and metabolome profiles when assessing all GM crops may benefit those children with NAFLD which, as Professor Heinemann says, appears to be “increasing”. What do you think?

  • Hi all

    Sorry about the repeat posting of my original post. On my computer, it looked like my original post had not gone up.

    Hi Grant

    Sorry you didn’t understand what I wrote. Also, you seem to be including me in a group of “those opposing GM”. Please stand corrected.

  • Philli,

    “*Researchers may be seeking the limelight for their work and risk being seen as self serving.”

    I didn’t write that. You’ve changed the meaning; the latter bit doesn’t reflect what I wrote.

    “*Nature Publishing Group’s journal “Scientific Reports” is not a good place to mention as a scientist may be trying to rub off “Nature’s credibility” and not really attempting to inform readers of Sciblognz where the studies are to be found”

    I didn’t write that either – I suggest you try reading it again a bit more carefully!

    “It should not surprise you Grant that comments like these are not helpful to those Special Needs teachers like me having to read widely and filter through a wide range of research into the role dietary allergies may be contributing to some of the difficulties those with new forms of autism face .”

    No idea what you’re trying to say, sorry, but it reads as though you might want to re-read what I wrote in its context. I’m replying to the quoted bit at the top, which Jack wrote. It hasn’t much to do with you! (It’s good practice to quote what you’re replying to – easier for people to follow and understand.)

    “Could you please address Dr Carman’s point that “substantial equivalence” enables the GM crop industry “not” to measure something they probably if not legally should measure when seeking regulatory approval from Government Food Safety agencies such as FSANZ.”

    That is addressed to Karl, not me (everything after the first paragraph in Judy’s comment is addressed to Karl).

    “The data base run by Richard E Goodman at the University of Nebraska does not include GM proteins which is surprising and also unhelpful.”

    This doesn’t appear relevant to anything I’ve written. I’ve never mentioned it, and “Goodman” doesn’t turn up anywhere in this post or its comments. Perhaps you’ve crossed with something you’re writing somewhere else?

    • Kia Ora Grant,
      Thank you for responding to me. I am not a scientist, and probably that is why I am having difficulty understanding who you are referring to in your lengthy responses. I was trying to summarise your points as best I could. Thank you for clarifying; in future I will try to read your many responses a little more carefully and work out what you actually mean. Although I have to say it is not always easy as a lay person.
      I suggest you go back to the two studies and the use of metabolomic and proteomic profiling. Does it not surprise you that no allergy testing appeared to be carried out before NK603 corn was released? Allergy testing was available 17 years ago. Why didn’t FSANZ, or NZFSA as it was then, insist on allergy tests for all GM foods? Wouldn’t it be sensible for FSANZ and the US FDA independent toxicologists to use multi-omics before introducing the new GM products soon to reach our country, e.g. Enlist Duo corn with 2,4-D and glyphosate tolerance, rather than using “substantial equivalence”?
      I am particularly concerned with the increase in Non Alcoholic Fatty Liver Disease in obese and non obese teens from reading current District Health Board 2015 stats and also the apparent increase in Irritable Bowel disease. I appreciate there are likely to be many factors you as a scientist can point to and yes, I do understand correlation is not causation however the two studies we are all discussing here raise public health concerns that FSANZ probably need to address.
      I work with a particularly vulnerable group with food sensory issues on a highly processed corn and soy diet. As you are no doubt aware hepatitis B and liver disease is a concern for Maori and Pacifica. I am very sorry you have no idea what I am trying to say but I will point out I have been a child advocate for a long time and in my field advocacy is not an insult.
      I am sorry I had no idea you could not respond to Dr Carman’s points or to the points raised by Professor Heinemann on the limitations of “substantial equivalence” when assessing GM crops and that only Karl Von Mogel should. I hope he does address the points raised by Dr Carman.
      I apologise for bringing up Richard E Goodman’s FARRP service as this may have appeared a red herring. FARRP is the laboratory used by submitters to FSANZ to assess the safety of the GM corn and soy now so ubiquitous in nearly all processed foods in NZ. I thought you might see my concerns that GM proteins are not included when assessing GM food safety.
      Thank you for pointing out that raising the FARRP laboratory was inappropriate and “doesn’t appear relevant to anything I have written.” I can see you are right. I apologise again.
      Finally it does appear relevant to me when discussing these 2 new studies published by Nature’s Journal Scientific Reports to consider that the 90 day trial conducted by Monsanto and used by FSANZ to assess the safety of NK603 corn after it was released may be inadequate.
      The EU food safety authority EFSA has stopped the use of only 90 day feeding trials for GM foods so perhaps FSANZ will look at these two new studies raising concerns regarding Non Alcoholic Fatty Liver disease and reconsider their GM food safety assessment procedures.

  • Judy,

    “Sorry you didn’t understand what I wrote.”

    I understood you perfectly well – that sort of reply is just trying to dismiss what I wrote.

    You misrepresented my message in your earlier reply to me, and I’m entitled to correct that.

    I was writing to the effect that you can’t claim a paper is “worthy” or its conclusions sound just because it uses a new method.

    You made me out to have written to the effect that “studies using new methods” … “should be disregarded and ignored” which I certainly did not write.

    “Also, you seem to be including me in a group of “those opposing GM”. Please stand corrected.”

    Choose whatever label you like, my points stand.

    (Just while I’m writing: personally, I prefer people to be honest about their positions. I find those that claim not to oppose GM, but whose actions and writing indicate they do aren’t being very honest. It’s a bit like anti-vaccine groups claiming they offer “vaccine advice” when in practice they mean to oppose vaccines.)

    • Dear Grant

      My previous post stands and I am honest about my position. I am not anti-GM. And please do not liken me to anti-vaccine groups. I advise people to be fully vaccinated.

    • Kia Ora Grant,
      Does Otago University pay you to deflect Sciblogs readers away from the real issues raised by these two studies? I hope not, as deflecting like this is not a particularly effective technique – it is one often used by the teenagers I work with. You are letting yourself down as a science communicator and this is disappointing.
      Can you please stick to discussing the two papers without trying to introduce other issues or I will begin to suspect you work for the Ag biotech and/or the AgChem industry.
      Can you please address Dr Carman’s point that “…there was no threshold upon which a regulator would decide what passes or fails a substantial equivalence test, so everything passed the test”
      With Enlist Duo GM corn and GM soy products/oils such as Enlist E3 soy with glufosinate and 2,4-D tolerance on the way, don’t you think FSANZ needs to use a range of more robust tests and independent, non-industry toxicology rather than just approving potentially unsafe food products only tested by industry to be “substantially equivalent”? This is a public health issue.

  • Even in the early days, knowing the mode of action of glyphosate on the shikimate pathway (chelation of manganese, magnesium, etc.), its effect on production of essential amino acids like tryptophan, and the resulting effects on production of serotonin and dopamine and the consequential neurotoxic effects, would dictate that these be included in the assessment of “substantial equivalence”. Along with this, the increases in known allergens, like lectins and trypsin inhibitors, in Dow AgroSciences soy a few years ago are required to be included, but FSANZ refused to do further testing as they are required to do (they dismissed increases of 25% and 35% respectively as irrelevant).
    So it appears there are two aspects of this discussion:
    1. the relative merits of the testing science, and
    2. the failure of companies to provide all the data, and of the regulators to do their job and
    a) demand all the data from applicants and independent researchers, and
    b) assess all the data and far more of the unintended effects of the GM process itself and the associated pesticides, not just the active ingredient.
    The more tools that can be used to control the adverse effects the better, even if they are debatable, for they are far better than the system of no or irrelevant data provision and lack of adequate assessment that we have at the moment.

  • Dear all,
    Thank you for fruitful discussions.
    I tend to disagree that the methods used in the paper are new methods, as if there were little standardization or harmonization in the field. Just as a quick example, there are specialized journals dedicated to the field of proteomics that have been running since 2001, 2002 and 2008 (Proteomics, Journal of Proteome Research and Journal of Proteomics). There are, then, at least 15 years of recorded scientific advancement in proteomics methods.
    On the other hand, it is true that the application of omics methods in risk assessment of GMOs is an evolving view among regulators. But it is also true that some regulators have already acknowledged their potential usefulness. This is the case in the EFSA guidance on selection of comparators for the risk assessment, the UN Convention on Biological Diversity guidance on GMO RA, and also the outcomes of the EFSA workshop on the RA of RNAi-based GM plants. Therefore, the study actually fits within the context of what is being discussed for the improvement of RA methods in both academic and regulatory environments.
