We've just had our first session at the NZ Institute of Physics Conference. The focus was on astrophysics, and we heard from Richard Easther about 'Precision Cosmology' – measuring things about the universe accurately enough to test theories and models of the universe. We also heard about binary stars and supernovae, and evidence for the existence of dark matter from observing high energy gamma rays.
Perhaps the most telling insight into cosmology was given in an off-the-cuff comment from one of our speakers, David Wiltshire. It went something like this. “In cosmology, if you have a model that fits all the experimental data then your model will be wrong, because you can guarantee that some of the data will be wrong.”
Testing models against experimental observation is a necessary step in their development. We call it validation. Take known experimental results for a situation and ask the model to reproduce them. If it can't (or can't get close enough) then the model is either wrong or it's missing some important factor(s). Of course, this relies on your experimental observations being correct. And, if they're not, you're going to struggle to develop good models and good understanding about a situation.
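The validation idea can be sketched in a few lines of code. Here's a minimal, entirely hypothetical example: a toy straight-line model compared against a handful of made-up 'observations' with stated uncertainties, using a reduced chi-squared as the measure of agreement. The model, the data, and the numbers are all fabricated for illustration.

```python
# Hypothetical validation sketch: compare a toy model's predictions
# against (fabricated) observed data points with known uncertainties.

def model(x, a=2.0, b=1.0):
    """Toy straight-line model y = a*x + b that we want to validate."""
    return a * x + b

# (x, observed y, uncertainty in y) -- invented illustrative data
observations = [
    (0.0, 1.1, 0.2),
    (1.0, 2.9, 0.2),
    (2.0, 5.2, 0.3),
    (3.0, 6.8, 0.3),
]

# Reduced chi-squared: on average, how far the model sits from the
# data, measured in units of each point's uncertainty.
chi2 = sum(((y - model(x)) / sigma) ** 2 for x, y, sigma in observations)
reduced_chi2 = chi2 / len(observations)

print(f"reduced chi-squared = {reduced_chi2:.2f}")
# A value near 1 means the model reproduces the data to within its
# uncertainties; a much larger value suggests the model is wrong or
# missing something -- or, as above, that some of the data is wrong.
```

The point of the last comment is exactly Wiltshire's: a bad fit doesn't tell you *which* side failed, the model or the data.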
The problem with astrophysics and cosmology is that experimental data is usually difficult and expensive to collect. There's not a lot of it – you don't tend to have twenty experiments sitting in orbit all measuring the same thing to offer you cross-checks of results – so if something goes wrong it might not be immediately apparent. And if you can't cross-check, you can't be terribly sure that your results are correct. It's a very standard idea across all of science – don't measure something just once or twice (as so many of my students want to do); keep going until you are certain that you have agreement.
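Why does repetition help? A quick sketch, with fabricated readings: the scatter of individual measurements stays roughly constant, but the uncertainty in the *mean* shrinks as you take more of them, which is what lets you say how confident you are in the final number.

```python
import math
import statistics

# Hypothetical repeated readings of the same quantity (say, g in m/s^2).
# All values are invented for illustration.
readings = [9.79, 9.83, 9.81, 9.78, 9.82, 9.80, 9.84, 9.79]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)            # scatter of individual readings
std_error = stdev / math.sqrt(len(readings))  # uncertainty in the mean,
                                              # shrinks like 1/sqrt(N)

print(f"mean = {mean:.3f} +/- {std_error:.3f}")
```

With only one or two readings you can't even compute the scatter, let alone claim agreement – which is the trap the once-or-twice approach falls into.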
Little wonder, then, that people have only very recently taken the words 'precision cosmology' at all seriously.