Water quality models – are they good enough for management?

By Waiology 02/12/2013


By Sandy Elliott

From the Waiology series “Un-muddying the Waters” (Oct–Dec 2013).

Water quality models are making their way to a farm or catchment near you – so what are they, and how good are they?

Models are being used in New Zealand to address water quality impacts of land use from national to farm scales. At national scale, the CLUES model was recently linked to a land-use evolution model to predict future changes in water quality in a study for the Parliamentary Commissioner for the Environment (see figure for an example of the predictions). The Freshwater Reforms propose that models be used in community deliberation processes for catchments. Down on the farm, the leaching model OVERSEER is being used under the Waikato Regional Plan Variation No 5 to regulate nutrient emissions in the Lake Taupo catchment.

Figure: Predicted nitrogen yield increase from 2008 to 2020.

How do water quality models work?

Most water quality models are built around mass budgeting approaches, whereby the water quality constituents are generated and transformed according to approximate mathematical representations of the processes or rates – an example is the dynamic point-scale crop growth and leaching model APSIM. But others are purely statistical, such as a recent model to estimate median concentrations for every stream reach in New Zealand as a statistical function of catchment characteristics (PDF) – and there are various mixtures of these approaches. New Zealand has a fairly rich array of climate, land-use, hydrometric, and water quality data to drive and calibrate these models. And models of different environments are being linked: for example, recent modelling in the Ruataniwha Basin linked OVERSEER, a groundwater model, and a stream periphyton growth model.
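To make the mass-budgeting idea concrete, here is a minimal sketch of an export-coefficient style calculation. The land-use categories, coefficients and attenuation value are invented for illustration; they are not taken from CLUES, APSIM, OVERSEER or any other model.

```python
# Minimal sketch of a mass-budget (export coefficient) water quality calculation.
# All names and numbers are illustrative only.

# Hypothetical nitrogen export coefficients by land use (kg N per hectare per year)
EXPORT_KG_PER_HA = {"dairy": 35.0, "sheep_beef": 12.0, "forest": 3.0, "urban": 8.0}

def annual_n_load(areas_ha, attenuation=0.3):
    """Annual nitrogen load (kg/yr) delivered to the catchment outlet.

    areas_ha    : dict mapping land-use name -> area in hectares
    attenuation : fraction of generated nitrogen lost in transit
                  (groundwater and in-stream processes), assumed constant here
    """
    generated = sum(EXPORT_KG_PER_HA[lu] * area for lu, area in areas_ha.items())
    return generated * (1.0 - attenuation)

# Example: a small, mixed-use catchment (hypothetical areas)
catchment = {"dairy": 2000, "sheep_beef": 5000, "forest": 3000, "urban": 200}
print(f"Predicted load: {annual_n_load(catchment):,.0f} kg N/yr")
```

Real models layer much more onto this skeleton – spatially varying coefficients, hydrological routing, and process-based transformations – but the mass-balance core is the same.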

How do model errors affect their usefulness?

All models entail errors and uncertainty, which arise from errors in inputs such as rainfall or land use, uncertainty in estimating various coefficients, or inappropriate representation of the processes. And often as we look closer – for example at a fine time scale, or at an individual stream reach in the figure above – the errors get larger. Even at the scale of a whole catchment and annual timescales, we can expect a standard error of around 30% for nitrogen loads from CLUES, and larger errors for phosphorus, sediment, and microbial indicators. Uncertainty in OVERSEER has also been discussed, which is particularly relevant when the model is used for regulatory purposes. And using purely statistical models, or throwing more and more detail into the model, doesn’t solve the problem.
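For a feel for what a standard error of around 30% implies, here is a small sketch, assuming (purely for illustration) normally distributed, unbiased errors around a hypothetical predicted load:

```python
# Rough illustration of what a ~30% standard error means for a predicted load.
# Assumes normally distributed, unbiased errors - a simplification for illustration.

predicted_load = 100.0   # hypothetical load, tonnes N per year
relative_se = 0.30       # ~30% standard error, as quoted for CLUES nitrogen loads

se = relative_se * predicted_load
print(f"~68% interval: {predicted_load - se:.0f} to {predicted_load + se:.0f} t/yr")
print(f"~95% interval: {predicted_load - 2*se:.0f} to {predicted_load + 2*se:.0f} t/yr")
```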

So in a sense all models are wrong – but are they so wrong that they are useless? Here are a few thoughts on this topic:

  1. Over time, water quality models are getting better. For example, the first versions of OVERSEER provided only crude estimates of nitrogen leaching losses, but now the situation has improved considerably. Other models, such as flood models, were once on the fringes but are now relied on for predicting hazards, and this gives hope for water quality modelling.
  2. There are more data available for input to models and for calibration of parameters. For example, the amount of water quality data for calibrating models has exploded, from about 70 sites in 1990 to about 1000 now, and this allows improved calibration of models such as CLUES.
  3. Information technology and number-crunching power continues to increase, making more detailed modelling tractable and advanced uncertainty methods possible.
  4. We are getting more savvy at using models in a relative sense, fusing observations with model predictions. We can use the projected factor change in model predictions between scenarios to adjust measured concentrations, which reduces errors in predictions of future conditions (a small sketch of this follows the list).
  5. Techniques for establishing and communicating uncertainty are improving. In the climate arena, probabilistic model predictions using a range of models are becoming routine, and knowledge of uncertainty helps decision-makers decide what weight to give the model predictions. We haven’t seen this much in water quality modelling yet, but it is bound to come.
  6. People are starting to take a more mature view of modelling, rather than a completely trusting or disbelieving attitude. This involves more knowledge about the strengths and weaknesses of models, a bigger knowledge base of applications, smarter ways of driving the models, and better representation and communication of their uncertainty.
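Here is a minimal sketch of the relative (factor-change) use of models mentioned in point 4. The concentrations are invented, and the approach assumes the model’s systematic bias is similar under current and future conditions, so that it largely cancels in the ratio.

```python
# Sketch of using a model in a relative (factor-change) sense:
# apply the model's *ratio* of future to current predictions to an observed value,
# rather than using the future prediction directly. Numbers are hypothetical.

observed_current = 1.8   # measured median nitrate-N concentration, mg/L
model_current = 1.4      # model prediction for current conditions, mg/L
model_future = 2.1       # model prediction for a future land-use scenario, mg/L

change_factor = model_future / model_current      # model-predicted relative change
adjusted_future = observed_current * change_factor

print(f"Model change factor: {change_factor:.2f}")
print(f"Projected future concentration: {adjusted_future:.2f} mg/L")
```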

Overall, I think water quality models have met the usability threshold, and this will only get better with time.


Dr Sandy Elliott is a catchment modeller at NIWA.


7 Responses to “Water quality models – are they good enough for management?”

  • “People are starting to take a more mature view of modelling, rather than a completely trusting or disbelieving attitude”
    I’m not so sure.

    http://www.stuff.co.nz/southland-times/opinion/letters/9439122/Don-t-tar-all-with-one-dirty-dairying-brush

    http://www.stuff.co.nz/southland-times/opinion/letters/9441672/Letter-Action-needed

    A 30% SE means that only about 68% of results will fall within ±30% of the true value.
    That makes it an interesting result, but not one to bank on.

    Dr Sandy, Dr Jan’s work integrated two models. What happens when you add the errors together – or multiply them, as the case may be? I studied this at Uni but it has long gone from my mind.

  • PS
    I’d point out the unscientific use of colours, but I’ve beaten that drum before and, although recognised, the practice seems to continue unfettered.

  • That first letter doesn’t touch on perceptions of modelling, but on perceptions of media coverage. Its point that one shouldn’t generalise about farmers is great. And then it generalises about “Greenie” letters. The second letter doesn’t express an opinion on modelling either. Instead it proposes some water quality solutions.

    Personally, I would love to know what levels of certainty are useful for different types of decision-making. It would be really helpful to guide the science.

    Propagation of errors is indeed an important issue, but it’s not as simple as multiplying or adding (there is a short sketch at the end of these comments). See here for a better explanation than I could offer.

    As for colours, it goes unfettered because it is rather the norm. The job of graphics is to communicate the scientific content as clearly and accurately as possible to the audiences. Colours matter. But not all audiences respond the same way. When seeing red, some “see red” as you’ve pointed out; some see “danger”, some see “extreme”, some see “hey, look here, it’s important”. I’ve tried to find discussions about the use of “danger” colours in scientific contexts but haven’t found anything. But I’m still inclined to think that criticising the use of red is like shooting the messenger. Both the science and the delivery of the science are important.

    Sandy may have more to add.
    [Daniel]

  • I pretty much concur with Daniel’s comments.

    Re colours, green to red is pretty conventional – people can understand the map without having to consult a legend (green = low, red = high (‘hot spots’), at least in a relative sense). But I do understand that there are some sensitivities about colour choices.

  • Regarding those letters: both referred to the work of Dr Jan – one apparently trusting it, the other at the polar, mistrusting end of the debate.

    Regarding the use of colour. I know this is a blog designed for debate but I can’t believe this is a debatable issue.
    Colours, particularly red, can present emotively and with bias. A spade is a spade. Neither emotions nor bias belong in environmental science. Environmental science, in its pursuit of acceptance, needs to present better than this. That’s how I see it.

    I have no issue with you tolerating it Daniel. This blog is about learning and it is well delivered. Not many blogs have editors.

  • In the interests of better science communication, if a graphic is misleading to a large enough audience (certainly 10%, and I’d say 5%), then it is worth evolving. Green-to-red and blue-to-red are conventional palettes in science. This stems from a mix of reasons: how our eyes have evolved to perceive colours, how nature has evolved to advertise danger, and obvious palettes in nature (rainbows; hot-cold). But science is supposed to be value-neutral. Edward Tufte, W.S. Cleveland and Felice Frankel all have wisdom to offer on this issue…To be continued… [Daniel]

  • Daniel,
    If that is the case, if the colours are only there to provide contrast, swap them. I double dare NIWA scientists to do that. Actually I would double dare our local council to do the same.

    Studies showing the effect of colours are numerous. The effect of red and yellow is thoroughly taught in marketing courses at Otago University, and adopters of such techniques (McDonald’s) are well recognised. Science shows colours affect our perceptions.

    This series is about “un-muddying”. My opinion is that this begins with delivering unbiased, un-emotive science.
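On the propagation-of-errors question raised above, here is a rough sketch of the standard first-order rule for chained (multiplied) model outputs. It assumes the two models’ errors are independent, which is a simplification, and the error values are invented.

```python
import math

# First-order error propagation for chained (multiplied) model outputs.
# Assumes the errors are independent - often not strictly true in practice.

rel_err_model_a = 0.30   # hypothetical ~30% relative standard error
rel_err_model_b = 0.20   # hypothetical ~20% relative standard error

# For a product of independent quantities, relative errors add in quadrature.
combined = math.sqrt(rel_err_model_a**2 + rel_err_model_b**2)
print(f"Combined relative error: about {combined:.0%}")   # ~36%, not 30% + 20% = 50%
```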