This weekend my fellow scibloggers Siouxie Wiles and aimee whitcroft and I are privileged to be speaking at the New Zealand Skeptics conference alongside the likes of Pamela Gay and Kylie Sturgess. Needless to say, I’m more than a little excited (especially as I get to show off the capabilities of the Carter Observatory planetarium the following week with Dr. Gay)!
During the panel discussion we will be broaching some rather difficult topics (I hope). So I wrote this post to give sciblogs readers the opportunity to comment on some of my ethical musings and hopefully have their thoughts feed into our discussion.*
I grew up in a household with a very open mentality. All religions, philosophies, myths and legends were fair game for debate, discussion and belief or disbelief. If my whanau had a catchcry it would be “If a belief doesn’t harm anybody, then where is the harm in people believing it?”. It’s a remarkably pervasive philosophy that I believe many can relate to, yet it is simultaneously naïve, and I wonder if that’s because it conveniently side-steps many of the complicated ethical questions associated with belief. The simplest is obvious: “What happens when a person’s belief DOES harm another?” And once you start down that path there’s no stopping the flood of follow-up questions: Is it O.K. to allow an animal to be harmed for the continuation of a person’s belief? What about a child? And what precisely constitutes harm? As a society we have answered some of these questions (at least legally, if not morally) but certainly not all and, of course, we’re encountering new ones every day.
This is where I see the role of modern skepticism – not simply being skeptical for the sake of being skeptical, but continuously re-evaluating the social and ethical implications of the research and technologies we develop in light of new information. One of the many issues, though, is that humans continuously find new ways to ask old questions as well – and keeping up with both is a daunting task. More troubling still is the human brain’s propensity to preferentially store surprising or interesting information, regardless of its truth.
Unstated behind much of the outreach that scientists do today is the assumption that people make better decisions when they are given better information. I contend that this is simply untrue. If what people retain is intimately linked with how they feel about a given piece of information, then providing information will never be enough to counteract the influence of vocal minorities who, by their very definition, espouse views contrary to the majority view. Listeners will find those views surprising or novel, and will thus be more likely to retain them than a boring old (true!) fact.
So if you want to prevent this, the course of action is clear: you make YOUR information just as interesting and novel, or you attempt to instill your listeners with skepticism about all information. (Or you could eliminate the vocal minority, but a) it wouldn’t work because there’s always a minority and b) eliminating a contrary viewpoint simply because it’s contrary is HIGHLY unethical – at least in my book!) However, by doing this you’re also making a judgement call and deliberately trying to manipulate your audience into sharing your point of view. Of course this isn’t new; it happens daily in politics, the media and the comments sections of skeptic and non-skeptic web pages – so one could counter-argue: “If everyone else is doing it, what’s the harm in me doing it too?”
So the crux of this blog post, and something I hope to raise at the skeptics conference, is this: the human brain and the person who inhabits it are innately curious and inventive. We ask questions, we question answers, we imagine how things could be better or worse than the world we inhabit. Yet our brains are uniquely inept at certain problems. Look at our innate sense of probability, or optical illusions, or our ability to create and retrieve memories. These are all inherently flawed processes – which is why we need the scientific method to help us ascertain when our intuition is wrong. The problem is that when this becomes a majority viewpoint,** there exists a stable population who will oppose it simply because it IS a majority viewpoint. And no amount of education or information can ever make this stop. Yet we have EVERY incentive, morally and ethically, to minimise the impact of non-skeptical viewpoints on our society because – if for no other reason – it saves lives (look at the effect of anti-vaccine promoters and the increased incidence of measles for one current example).
So when is enough skepticism enough? What beliefs must we counter more strongly than others if we wish to minimise suffering and save lives? And is there a way to do this that doesn’t come across as simply a majority attempting to stamp out the beliefs of a contrary vocal minority?
I hope we’re in for an interesting panel discussion!
* This blog post is an OPINION PIECE that is intended to promote discussion. I have attempted to include references where I can (mostly to interesting Wikipedia articles I’ve read recently) so any statements I present or imply as fact should be treated with … you guessed it … extreme skepticism at best. Any corrections/references/criticisms relevant to statements I have made in this blog would be welcomed in the comments below.
** Personally, I don’t think skepticism/scientific literacy IS a majority viewpoint in NZ yet – which is why I’m involved with so much science communication. But then again, there’s a pretty strong selection bias there, isn’t there!