By Marcus Wilson 22/01/2016

Going back to my last post, our fancy balance proclaims that it weighs objects from 0 to 200 g with a precision of 0.001 g (that’s one milligram).  

And it does – put an object on and the balance gives you an attractive-looking number on its prominent display, reading 184.139 g or something similar. It is precise to 1 milligram. It’s not reading 184.138 g, nor is it reading 184.140 g; it is reading 184.139 g.

So does that mean our test object has a mass of 184.139 g? Unfortunately not. Just because the balance gives us that number that precisely, it doesn’t mean that it is that accurate. University lecturers always have a good giggle when some poor unsuspecting first-year student records an answer to a wildly inappropriate number of significant figures – for example, she might measure a speed in the lab of 1.48392348837 m/s. Precise, yes. Accurate, no. However, when a third-year student does the same thing (I’ve usually got the message across by then, aided by deducting marks on assignments for stupid use of significant figures), the humour turns into despair.

So what does our test object weigh? (What is its mass, I mean?) Well, I can weigh it several times and see how the results are spread. I’ve done that. It’s a few milligrams. On taking the object off, and putting it on again, I don’t see the recorded mass change by more than three or four milligrams. If I take a lot of measurements, and work out the mean mass and its standard uncertainty (with a bit of statistics), I can get something that has a random uncertainty of only a milligram or so.
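The ‘bit of statistics’ here is just the mean of the readings and the standard error of that mean. A minimal Python sketch, using invented readings in place of the real lab data:

```python
import statistics

# Invented repeated readings of the test object, in grams
# (illustrative only -- not the actual lab data).
readings = [184.139, 184.141, 184.138, 184.140, 184.139, 184.142]

mean = statistics.mean(readings)
# Standard uncertainty of the mean: sample standard deviation / sqrt(n)
std_unc = statistics.stdev(readings) / len(readings) ** 0.5

print(f"mean = {mean:.4f} g, standard uncertainty ~ {std_unc * 1000:.1f} mg")
```

With half a dozen readings scattered over a few milligrams, the standard uncertainty of the mean comes out below a milligram – the ‘only a milligram or so’ figure above.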

However, that still doesn’t mean our test object has a mass of 184.139 g (or whatever our calculation says). I may have accounted for random uncertainty by weighing it multiple times, but there are certainly other, systematic, sources of uncertainty. These are significant. Just look at the graph. This is the mass of the test object (as recorded by the balance) over the period starting Tuesday morning this week. Our object is getting lighter! Quite a lot lighter, too – it’s moved about 40 mg over three days. That’s about one part in 5000.


What’s happening? One interesting thing to note is that we initially calibrated the balance with a nominal 200.000 g mass (sent with the equipment) and a completely empty pan (0.000 g). I’ve also been weighing the calibration mass and the empty pan over the course of the three days too. They have shown no drift at all – just a couple of milligrams of random uncertainty, as far as I can see.

The manual suggests that the equipment is affected by temperature and humidity. Now, Monday was one of those horrible Waikato days with a warm, damp atmosphere and lots of rain – one of those days where, if the humidity got any higher, it would be raining in your office. And there was a lot of rain Monday night, before I made the measurements. On Tuesday morning, everything ‘felt’ damp – but we’ve been drying out ever since. Is it a long-term drop in humidity in the lab that’s caused the change? And is it because the test object was actually heavier (maybe there was some condensation on it), or is it because the humidity has made the balance ‘stick’, or affected the electronics in some way? I’m not sure.

But what I am sure about is that saying I have a 184.139 g test mass is, at present, unjustified.


Featured image: Flickr CC, Güldem Üstün.

Responses to “Don’t confuse accuracy with precision”

  • Great post Marcus! This is a distinction that I had a hard time grasping as a student, but I think you’ve explained it well here. I’ve actually found this concept very useful once I understood it.

    A few years ago I noticed a red flag in an ad claiming to reduce cellulite by 25.68%. That’s very precise, but I struggled to figure out how they could possibly have measured it so accurately. Perhaps unsurprisingly, it turned out to be a bit of a farce.

  • I think, though, that when a product is pitched as being more precise, accuracy is assumed. The dubious assumption is that, given the same accuracy (or even, in principle, perfect accuracy), more precision is always better. That is probably not the case. The world’s most boring book is supposed to be an approximation of pi to something like 10 million decimal places! I doubt that this has (m)any practical applications; 3.14159 is sufficient for most purposes! So the moral of the story is not to be suckered into buying more precision than one actually needs. Of course, accuracy at low precision is cheap! So, don’t be suckered either into buying greater accuracy at the cost of less precision!

    • The graphs of mean global temperature have error bars clearly indicated. Personally, I wouldn’t call it a major blunder. Giving the value to 0.1 of a degree Celsius might be better – but I don’t think 0.01 is silly.

  • It looks more like science that way, and less like the alarmist propaganda that it probably is!

    Funny how nobody seems to actually be able to measure changes in average sea level! The implication is always that if the average global temperature increases, then so will the sea level, and many lowland areas may be inundated. But is there a measurable rise in sea level?

  • Great post, Marcus. This stuff is important, but rarely discussed.

    Measurement results, after all, are critical inputs to a bewilderingly large number of decisions made in today’s world. The idea of measurement uncertainty, and the notion of traceability in measurement that goes along with it, is needed to make informed, reliable, decisions.

    Another way to think about your weighing is that the balance only produces an estimate of the mass that you want to know. It displays a number `m` that is approximately equal to the actual mass `M`. The difference between `m` and `M` is the measurement error `E`, such that `M = m – E`. This error is the combination of many small factors: some fluctuating wildly, leading to randomness in your readings, and others almost static, so you would not notice their contribution in a quick sample of readings. The effects of temperature and humidity you describe are nice examples of slowly changing factors, but there may be others that change even more slowly.
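    A toy simulation of this error model (Python; the mass, offset and noise values are all invented for illustration) shows why averaging helps with the fast-fluctuating factors but not the near-static ones:

```python
import random

random.seed(1)                 # reproducible illustration
TRUE_MASS = 184.120            # g, hypothetical actual mass M
SLOW_OFFSET = 0.019            # g, a near-static (systematic) error
NOISE_SD = 0.002               # g, fast random fluctuations

def reading():
    """One balance reading: m = M + slow offset + random noise."""
    return TRUE_MASS + SLOW_OFFSET + random.gauss(0, NOISE_SD)

sample = [reading() for _ in range(50)]
mean = sum(sample) / len(sample)
# The mean settles near M + offset, not near M itself:
print(f"mean of 50 readings = {mean:.3f} g, true mass = {TRUE_MASS} g")
```

    Averaging shrinks the random part of `E` roughly as 1/√n, but the near-static offset passes straight through into the mean.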

    The nature of the different contributions to the measurement error is what makes accuracy different from precision. A large fixed, or very slowly changing, error detracts from the accuracy of measurement results, even if there is very little in the way of observable fluctuation. That, incidentally, is why measuring equipment should be regularly calibrated to ensure that it is accurate – even at universities! 🙂

    Traditionally, measurement errors get sorted into one of two buckets: ‘random’ or ‘systematic’; but the distinction is not always clear cut, as your example clearly shows.

    The same terminology, ‘random’ and ‘systematic’, is not used to qualify different ‘types’ of uncertainty, though. If, for example, you decide to use the number `m` as an estimate of `M`, then there will be uncertainty: you know that `M` could be larger, or smaller, than `m` – but which is it, and by how much?

    The topic of ‘measurement uncertainty’ deals with a blend of measurement science and basic statistics that can provide an answer to that question. The document that your post links to above is a good beginner’s guide. Unfortunately, for historical reasons, it shies away from the term ‘error’. That is a pity because, from a physical point of view, the problem is all about errors!

  • @Blair It is not just measurement error which makes accuracy different from precision. Suppose there was no error. One can still specify the measurement with more or less decimal places depending on the appropriate or desired precision. The accuracy of the measurement would always be perfect (no error), regardless of the precision.

  • @stho002, There is always measurement error!

    Incidentally, regarding significant figures, when you express a result using a finite number of digits you generate another error: suppose `X` is a value (infinite precision) and `x` is that value expressed using finite precision: the error `E = X – x` is generated by using the finite-digit representation. This becomes just one more contribution to the overall error (often people say a component of uncertainty) that needs to be considered when assessing the measurement accuracy.
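    That representation error is easy to demonstrate (a trivial Python sketch; the high-precision value is invented for illustration):

```python
# The "true" value X (taken here to high precision) versus its
# finite-digit representation x, rounded to the balance's 1 mg digit.
X = 184.1398333          # hypothetical high-precision value, in grams
x = round(X, 3)          # the three-decimal display value: 184.140
E = X - x                # error introduced by the finite representation
print(f"x = {x} g, representation error = {E * 1000:.2f} mg")
```

    Rounding to the display’s last digit can contribute at most half that digit – here 0.5 mg – but it is still one more term in the error budget.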

  • Incidentally, the reading had plummeted over the weekend and was down to a mere 184.110 g on Monday morning, but is now slowly on its way back up again. Curiously, the calibration mass still gives something between 199.999 g and 200.001 g. When the promised rain finally comes, it will be interesting to see whether the reading rises considerably.

  • Marcus, a more interesting meteorological component to this metrological problem (note the subtle difference there) is the change of atmospheric (barometric) pressure. A mass has a tendency to “float” in the air, much as it seems to lose “weight” when it is immersed in water. If the atmospheric pressure varies, then the mass measured on your balance will vary too. If the object you are weighing has a density less than that of the 200 g calibration mass, the error due to the pressure change will be even bigger. For a 200 g mass of density 8000 kg/m3 (stainless steel) and a 10% change in atmospheric pressure (not unheard of in NZ), there will be a change in apparent weight of around 3 mg. If the object has half the density of stainless steel, the change can be up to 10 mg; at one tenth the density of stainless (wood, for instance?), the apparent change could be upwards of 30 mg. That is getting to be some proportion of the change in mass you are observing, and could be a component of the “error”. The other interesting point is that even if you calibrate the balance again with the 200 g mass, the “error” will still be there if the atmospheric variation is having an effect.

    Looking at Weather Underground for Auckland the barometric pressure was at a minimum last Tuesday (20 Jan) of around 1006 hPa and on Friday/Saturday was around 1020 hPa. Denser air will mean the “object” (if less dense than SS) will weigh less.
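    These figures are easy to check with a back-of-envelope sketch (Python; it assumes a nominal sea-level air density of 1.2 kg/m³ and takes the 10% density change from the comment above):

```python
RHO_AIR = 1.2        # kg/m^3, nominal sea-level air density (assumed)
DELTA = 0.10         # fractional change in air density (per the comment)
MASS = 0.200         # kg, the object on the pan

def buoyancy_shift_mg(rho_object):
    """Apparent mass change, in mg, for a 10% air-density change."""
    volume = MASS / rho_object             # m^3 of air displaced
    return RHO_AIR * DELTA * volume * 1e6  # kg -> mg

for rho in (8000, 4000, 800):   # stainless steel, half, one tenth
    print(f"density {rho:5d} kg/m3 -> shift ~ {buoyancy_shift_mg(rho):.0f} mg")
```

    This reproduces the ~3 mg figure for stainless steel and ~30 mg for a wood-like density; the half-density case works out at about 6 mg.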

  • I have just reread your prior post and noted that the mass in both articles is, surprisingly, the same. So let’s take a guess here: you are measuring the same titanium powder sample? If so, then humidity too will have a significant effect on the weight. Nasty, nasty water sucks itself onto the surface and pores of the powder even more than you can imagine. Relative humidity sensors utilise this very effect to gather water vapour and thus change an electrical property of the dielectric in the sensor. It will be interesting to see how the humidity has changed at your site as well, and then to try and separate (at least) two (conflicting?) effects.

  • One more exercise: what’s the time in the header photo at the top of the article?
    Is a “behind time” or “ahead of time” clock ever “accurate”? At least a stopped clock is “accurate” twice a day…

    • By the same token, a clock that runs backwards is right four times a day 😉

      • Yeah, yeah, yeah. And a clock that runs backwards at twice normal speed is right six times a day. Enough!

  • A stopped clock is right twice a day and uses up no energy!