Summer of stats part 7 – margins of error

By Guest Author 25/01/2010

A running series from Statistics New Zealand helping us make sense of political polls…

No matter how often we deny it, results from political polls make irresistible reading. But the most important information is often in the last line of the story — if you’re lucky.

A survey in November reported ‘little change in the ratings for the two large political parties’. News bulletins told us support for National was ‘down 1 point’ and support for Labour was ‘down 2 points’. But the Greens got ‘a significant three-point boost’.

But the last line of the report read: ‘The poll of 1000 voters had a margin of error of 3.1 percent.’ Which means the poll did not confidently indicate any change at all, even for the Greens.

So what is a ‘margin of error’? It’s a measure of how accurately the results of a poll reflect the views of the whole ‘population’. In a political poll, the whole ‘population’ means all potential voters. The margin of error tells you how confident you should be about drawing conclusions from the results.

For example, let’s say Party A is supported by 50 percent of those polled. If the margin of error is 3 percent, you can be 95 percent confident that the true value of Party A’s support is somewhere between 53 percent and 47 percent.

Let’s say Party B had 47 percent support. The true value of their support is somewhere between 50 percent and 44 percent.

The bands for Party A and Party B overlap. What we’ve got is a ‘statistical dead heat’ — we can’t separate the parties with much confidence. In fact, they could both be on 50 percent.

If we look at the extremes, there’s a good chance that Party A could be rating as high as 53 percent and Party B as low as 44 percent – so Party A could be as much as 9 points ahead. But it could also be 3 points behind. We just can’t tell for sure.
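The arithmetic above can be checked with a short sketch (using the poll’s reported figures, and taking the 3-point margin of error as given):

```python
# Reported support for each party and the poll's stated margin of error,
# all expressed as fractions.
party_a = 0.50
party_b = 0.47
margin = 0.03

# 95 percent confidence intervals: reported value plus or minus the margin.
a_low, a_high = party_a - margin, party_a + margin
b_low, b_high = party_b - margin, party_b + margin

print(f"Party A: {a_low:.0%} to {a_high:.0%}")  # 47% to 53%
print(f"Party B: {b_low:.0%} to {b_high:.0%}")  # 44% to 50%

# The intervals overlap, so the poll cannot separate the parties.
overlap = a_low <= b_high
print("Statistical dead heat:", overlap)
```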

It generally takes a sample of just over 1,000 to get a margin of error of about three percent. You can decrease the margin of error by polling more people, but the more you poll, the more it costs. And to reduce the margin of error even a little, you have to increase the number of people polled by a lot: halving the margin of error takes roughly four times the sample.
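You can see that trade-off with the common textbook approximation of 1.96 standard errors at the worst case of 50 percent support (a sketch; pollsters’ exact methods vary):

```python
import math

def sample_size(margin):
    """People needed for a given 95% margin of error, worst case p = 0.5."""
    return math.ceil((1.96 / margin) ** 2 * 0.25)

for m in (0.03, 0.02, 0.01):
    print(f"margin {m:.0%}: about {sample_size(m):,} people")
```

A 3 percent margin needs a sample of just over 1,000; pushing it down to 1 percent needs nearly 10,000.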

If a poll doesn’t declare its margin of error, be very careful about drawing conclusions from the results. It could be like deciding the winner of a close horse race from a grainy photograph.

A majority of whom?

Opinion pollsters love to be able to report that ‘a majority’ supported or opposed something. But what about the ones who didn’t care either way?

Take a survey that asked ‘Do you think lettuce is good for your health?’.

Let’s say you ask 100 people and 45 say ‘yes’ and 35 say ‘no’. The other 20 apparently didn’t know, didn’t care, didn’t want to be bothered by a surveyor, or asked something awkward like ‘What variety of lettuce?’.

How do you report results from this survey? The 45 who said ‘yes’ make up 56 percent of the 80 people who had an opinion. So on the surface, especially if you are promoting the health benefits of lettuces, you could conclude ‘most people think lettuce is good for their health’.

But shouldn’t you include those whose opinion you don’t know about? That would mean only 45 percent think lettuce is good for their health. You have not proved that ‘most people think lettuce is good for their health’.
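The two ways of reporting the same answers come down to a choice of denominator:

```python
yes, no, no_opinion = 45, 35, 20

# Denominator 1: only the people who expressed an opinion.
share_of_opinionated = yes / (yes + no)            # 45 of 80
# Denominator 2: everyone who was asked.
share_of_everyone = yes / (yes + no + no_opinion)  # 45 of 100

print(f"'Yes' among those with an opinion: {share_of_opinionated:.0%}")  # 56%
print(f"'Yes' among everyone asked:        {share_of_everyone:.0%}")     # 45%
```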

Unpack those averages

Then there’s the story about the man with his head in the oven and his legs in the freezer. He reckoned he’d be fine – his average temperature was about normal.

Just before Christmas, we heard there was ‘no overall change in average weekly expenditure on housing’ in the year to June 2009. This would have come as a surprise to many who were renting, but also to many with mortgages.

They needed to read on. It cost more to rent but less to pay off a mortgage. Rents were up 8.1 percent but mortgage payments were down (mortgage principal repayments were down 7.1 percent and interest payments were down 2.8 percent). It was the old averages trick — some numbers go up and some go down, but the average stays the same.

At the same time we heard average household income was up 5.6 percent. That would have surprised lots of people too.

Of course, you don’t need an increase across the board to increase an average. If some earners get an increase and some don’t, the average will go up. If lots of well-paid people lose their jobs and others don’t, the average will go down, and vice versa.

In fact, fewer people received a wage or salary in the year to June 2009. There was more total money earned (up by 8.2 percent), but it was shared across slightly fewer people (down 1 percent). Also, there were fewer people in the lower income brackets (below $44,300) and more in the higher income brackets.

Some people had a wage increase, some would have had no increase, and others dropped out altogether, so the average income increased.
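A toy illustration with made-up incomes (the numbers are hypothetical, chosen only to show the effect): if the lowest earner drops out and nobody gets a raise, the average still rises.

```python
# Hypothetical weekly incomes; nobody gets a raise.
before = [300, 400, 500, 600]
after = [400, 500, 600]  # the lowest earner drops out of the workforce

avg_before = sum(before) / len(before)  # 450.0
avg_after = sum(after) / len(after)     # 500.0
print(f"Average before: {avg_before}, after: {avg_after}")
```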

Compared with what?

The price of food fell in November 2009. Food prices were down for the fourth month in a row. That hadn’t happened since 2004. So why were you not dancing in the confectionery aisle? Maybe you know about the folly of short-term comparisons.

But food prices were only 0.9 percent higher than a year before. That was the smallest annual increase since 2005. Surely that was good news?

Maybe you remember that the latest prices were still 11.4 percent higher than they were two years ago. If anything is up or down, always ask ‘Compared with what?’.
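The two comparisons fit together: if prices are 11.4 percent above two years ago but only 0.9 percent above a year ago, nearly all of the rise came in the first of those two years. A back-of-envelope check, assuming the annual changes compound:

```python
two_year_rise = 0.114   # prices vs two years ago
last_year_rise = 0.009  # prices vs one year ago

# The implied rise over the first of the two years.
first_year_rise = (1 + two_year_rise) / (1 + last_year_rise) - 1
print(f"Implied first-year rise: {first_year_rise:.1%}")  # about 10.4%
```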

People who watch their weight know about comparisons. Losing five kilograms last month sounds great – you can congratulate yourself. But you should also face the facts – you are still 15 kilos heavier than you were a year ago.

Responses to “Summer of stats part 7 – margins of error”

  • “margin of error”? 1 Standard Deviation or 2??? Coverage factor ??? Polls rarely say. Could you clarify please?

  • Hi Ross, I’ve had a couple of questions about this, I’ll get in touch with the author at Stats NZ and get them to respond… stand by…

  • I suspect the uncertainty analysis used in sampling theory is based on the binomial distribution. If that’s the case then the “margin of error” is roughly two standard deviations, giving a 95% confidence interval *if* there are two parties with close to 50% support each. But really the margin of error shouldn’t apply to the poll itself – rather a separate margin of error should be applied to each party’s level of support. So if in a poll of 1000 people, the Nats poll 50% support, we would estimate a margin of error of 3.1% in their support. However if the Greens poll 5% support in the same poll, we would estimate a margin of error of only 1.4% (i.e. 2 sqrt(p(1-p)/n) where p is the true level of support, estimated here by the sample statistic, and n is the sample size). I would be interested in what our Stats NZ author thinks about this.
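The commenter’s two-standard-error formula is easy to check numerically:

```python
import math

def margin(p, n):
    """Roughly-95% margin of error: two standard errors of a proportion."""
    return 2 * math.sqrt(p * (1 - p) / n)

print(f"p = 0.50, n = 1000: {margin(0.50, 1000):.1%}")  # about 3.2%
print(f"p = 0.05, n = 1000: {margin(0.05, 1000):.1%}")  # about 1.4%
```

The 50-percent case lands close to the 3.1 percent quoted in the poll, while a small party’s margin is indeed much narrower.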