Reading opinion polls in the papers tends to cause despair and hilarity in roughly equal measure. Despair, not because my current favourite party might be nose-diving in the polls, but because of journalists' scant regard for statistics; hilarity, for the way a story is built up where no story exists.
A really, really rough understanding of statistics is all that's required here. Crudely put, the more people you interview, the more accurate your poll is going to be. Of course, that assumes you are actually carrying out the poll well in the first place, not just interviewing whoever you come across on the street or whoever answers the telephone, because such crude selection methods will inevitably bias your results. Opinion pollsters know better than this.
Every poll that’s published has, buried away at the bottom, the number of people interviewed and the ‘margin of error’ (call it M percent). This latter is something to take note of – basically it means that if the poll says the major party’s share of the vote is X percent, their true popularity is most likely to lie in the range of X – M to X + M. Given that many polls are based on around 750 people, giving a margin of error of around 3.5%, that’s a huge range. So if a party polled at, say, 40% one week, and 38% the following week, their sudden drop in popularity isn’t a story at all. It may well be that they have dropped, but you can’t tell on the basis of these two polls alone. It’s equally possible that their popularity hasn’t moved at all.
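The arithmetic behind that range is worth seeing. Here is a minimal sketch in Python of the standard 95% margin-of-error formula for a sampled proportion (the function name and the 1.96 z-value for 95% confidence are my choices for illustration, not anything taken from the polls themselves):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p measured in a sample of n people."""
    return z * math.sqrt(p * (1 - p) / n)

# A party polling 40% in a 750-person sample:
m = margin_of_error(0.40, 750)
print(f"+/- {m * 100:.1f} points")  # about +/- 3.5 points
# So a reading of 38% the following week sits comfortably inside 40 +/- 3.5.
```

A two-point week-on-week move is well within that band, which is exactly why it isn’t a story.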
For minor parties, struggling on 5% or lower, the margin of error needs recalculating. Compared with the fraction of the vote they are getting, the margin of error can be very large indeed.
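To put a rough number on that, the same formula (assuming the usual 95% confidence level and a 750-person sample) shows how the margin compares with the share itself for a big party and a small one:

```python
import math

def margin_of_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

for p in (0.40, 0.05):
    m = margin_of_error(p, 750)
    print(f"polling {p:.0%}: +/- {m * 100:.1f} points, i.e. {m / p:.0%} of their share")
# The 5% party's margin is about 1.6 points - nearly a third of its entire support.
```

The absolute margin is smaller for the minor party, but relative to their share it is enormous.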
Of course, what you can do to improve matters is combine the results of lots of polls. Basically, what you are doing here is pushing your sample size upwards, and so reducing your margin of error. A helpful geek or two has been doing this for you – have a look at this Wikipedia page, which shows how the different parties have polled since the last election and, more importantly, gives a measure of the margin of error in the calculations. Note its size for the minor parties, compared with the fraction of the vote they are getting.
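Pooling can be sketched the same way: treat five polls of 750 people as one poll of 3,750 (this assumes the polls are independent and conducted identically, which real poll aggregators have to correct for):

```python
import math

def margin_of_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

print(f"one poll:   +/- {margin_of_error(0.40, 750) * 100:.1f} points")
print(f"five polls: +/- {margin_of_error(0.40, 5 * 750) * 100:.1f} points")
# Five times the sample roughly halves the margin (it shrinks as 1/sqrt(n)).
```

That 1/sqrt(n) scaling is why you need a lot more data for a modest gain in precision.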
So where does the anti-matter come in? Physicists are currently looking at some results from the LHCb experiment at the Large Hadron Collider, which appear to indicate charge-parity (CP) violation in a particular decay process. What that means is that there is an asymmetry between the way matter and its corresponding anti-matter behave. Swap matter for anti-matter, reverse the directions in space, and you don’t get the same results, as one might naively expect you would. Now CP violation isn’t new, but this would be a new manifestation of it.
The question, then, is: do the statistics show CP violation? Is the measured asymmetry bigger than the margin of error (what we’d call the standard uncertainty) in the results? Yes it is, and very much so – in fact, about 3.5 times bigger. So, by the standards of even the most cautious political reporter, we have a story here. For physicists, however, three and a half times still isn’t quite big enough. There’s still a small chance (roughly 1 in a thousand) that this result is just a statistical fluctuation. One in a thousand doesn’t sound big, but when you have an interesting result like this one, it pays to be cautious. The ‘standard’ for this kind of experiment is five margins of error before people will start jumping up and down to say that they have discovered something.
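Those chances come straight from the tail of the Gaussian (bell-curve) distribution. A sketch, assuming Gaussian statistics and the one-sided convention often used in particle physics (the exact figure depends on which convention you pick, but it lands in the same ballpark as the rough numbers above):

```python
import math

def tail_probability(sigma):
    """One-sided Gaussian tail: the chance of a fluctuation at least
    this many standard uncertainties away from the expectation."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

print(f"3.5 sigma: {tail_probability(3.5):.1e}")  # about 2.3e-04
print(f"5.0 sigma: {tail_probability(5.0):.1e}")  # about 2.9e-07
```

Going from 3.5 to 5 margins of error takes the odds of a fluke from a few parts in ten thousand down to roughly one in 3.5 million, which is why the 5-sigma bar is set where it is.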
How can we go from 3.5 times to 5 times? Simple: more data – just like for the opinion polls.
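How much more data? Assuming the significance grows as the square root of the amount of data collected (true for simple counting statistics), a quick back-of-the-envelope estimate:

```python
# Significance scales as sqrt(data), so to stretch 3.5 sigma to 5 sigma
# the experiment needs (5 / 3.5)^2 times the data.
factor = (5 / 3.5) ** 2
print(f"about {factor:.1f}x the current data set")  # about 2.0x
```

Roughly doubling the data set should settle it one way or the other – provided, of course, that the effect is real.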