Archive June 2010

why an evolutionary image merits a ‘fail’ Alison Campbell Jun 29

Last year I commented that the following image, while funny, was a ‘fail’ in scientific terms:

[Image: ‘evolution of the cat’]

A recent commenter asked, so is this image scientifically correct or incorrect? (My first thought was that teh lolcat at the end should be a clue…) 

But no, it’s not scientifically correct (lolcats aside). It’s another in the long line of images of ‘evolutionary iconography’ that portray evolution as an inexorable march towards some sort of progress – a generalisation that isn’t particularly helpful in explaining how evolution actually works.

It’s not good on the particulars of feline evolution, either…

The word ‘cats’, in its broadest sense, encompasses 38 different living species, which fall into 8 major groups comprising 11 genera (Johnson et al., 2006). All the extant species have evolved relatively recently: a combination of fossil & DNA analyses suggests their radiation began no more than 11 million years ago (mya) in the late Miocene (ibid.). The earliest divergence (10.8 mya) was between the lineage leading to the ‘big cats’ (lion, tiger, leopard, jaguar, snow leopard & clouded leopard) and ‘the rest’. In other words, domestic cats are not particularly closely related to lions, despite the iconography above.

Taxonomists have found classifying the various felids a difficult problem, due to the paucity of recent fossils (notwithstanding the classic sabre-toothed cats of the Pleistocene), a shortage of distinctive skeletal features, & some confusing distribution patterns. Johnson & his team obtained sequences from autosomal and X- and Y-linked genes, plus mitochondrial DNA, for a total of 39 gene segments, which they then compared across all living cat species. A group of 7 distantly-related species – including hyaenas, which are more closely related to cats than to dogs – made up the ‘outgroup’, something that’s used in a phylogenetic analysis in order to distinguish between ‘ancestral’ & ‘derived’ features. (Basically, if a feature is found in the outgroup as well as the group of interest, then it’s likely to be ancestral & so won’t be particularly informative about patterns of evolution in your study group.) And the molecular dates were calibrated using 16 sets of fossil remains.
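
If you like to see the logic in code form, here’s a toy sketch of outgroup comparison in Python. It’s purely illustrative – the species, the single ‘character’ & its states are all made up, & this is not the actual analysis that Johnson’s team performed:

```python
# Toy illustration of outgroup comparison (hypothetical data, not from
# Johnson et al.): a character state shared with the outgroup is treated
# as ancestral & therefore uninformative; states that differ are
# candidate 'derived' states that may mark out sub-groups.

def classify_states(ingroup_states, outgroup_state):
    """Label each ingroup species' state as ancestral or derived,
    relative to the state seen in the outgroup."""
    return {
        species: ("ancestral" if state == outgroup_state else "derived")
        for species, state in ingroup_states.items()
    }

# One hypothetical DNA site, with 'A' in the outgroup (say, a hyaena):
states = {"lion": "A", "domestic cat": "G", "ocelot": "G"}
labels = classify_states(states, outgroup_state="A")
# lion's 'A' matches the outgroup, so it's probably ancestral; the shared
# derived 'G' is the sort of feature that could unite domestic cat & ocelot
```

Real analyses, of course, use thousands of sites & proper tree-building methods – but the ancestral/derived distinction works just as sketched above.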

The team found that the 8 major cat lineages evolved relatively quickly, over about 4.6 million years. Between 6.4 & 2.9 mya these lineages in turn underwent a fair bit of adaptive radiation, at a time when sea levels were around 100m higher than they are at the moment. There was another burst of divergence 3.1-0.7 mya which produced 27 of the extant cat species. This was at a time when sea levels were on average relatively low.

The sea level part is important, because during periods of low sea level it would have been possible for species to migrate via land bridges into previously inaccessible areas. Based on their molecular data & available information on sea level changes, Johnson et al. suggest that modern cats evolved in Asia with that divergence between the big cats (Panthera) and all other felids. Somewhere between 8.5 & 5.6 mya the ancestors of caracals, servals & golden cats arrived in Africa. Then, between 8.5 & 8.0 mya, felids arrived in North America for the first time via the Bering Strait land bridge. This immigrant group seems to have been the common ancestor to ocelots, puma, leopard cats, lynxes – and the domestic cat. When the Panamanian land bridge formed 2.7 mya this opened up more new ecological opportunities for the feline explorers.

Subsequently there were other migrations back from the Americas to Eurasia & then further west. Cheetahs, for example, are now found in Africa, but the genetic analyses by Johnson’s team indicate that their closest relatives are the North American pumas. Similarly, members of the genus Felis must have crossed back into Eurasia at least once, given that the domestication of the common moggy seems to have occurred in the Near East, at about the same time that agricultural settlements were developing in the Fertile Crescent (Driscoll et al., 2007). (Other American species moved across the Bering land bridge to Eurasia, & hence Europe, at various times – most notably the various horse species. The fossil remains of this particular sequence of species migrations were interpreted by T.H.Huxley as evidence for a European origin of the horses, a view he rapidly & happily relinquished when presented with evidence of the horses’ long evolutionary history in America.)

Once more – that simple linear iconography is not a scientific representation of feline evolution, and a long way from the much more complex and fascinating reality.

C.A.Driscoll, M.Menotti-Raymond, A.L.Roca, K.Hupe, W.E.Johnson, E.Geffen, E.H.Harley, M.Delibes, D.Pontier, A.C.Kitchener, N.Yamaguchi, S.J.O’Brien & D.W.Macdonald (2007) The Near Eastern origin of cat domestication. Science 317: 519-523

W.E.Johnson, E.Eizirik, J.Pecon-Slattery, W.J.Murphy, A.Antunes, E.Teeling & S.J.O’Brien (2006) The late Miocene radiation of modern Felidae: a genetic assessment. Science 311(5757): 73-77. doi 10.1126/science.1122277

another early hominin specimen, & other things to read Alison Campbell Jun 27

I’m catching up on my reading of other people’s blogs, so here are some interesting posts to share with you.

At Laelaps Brian Switek has commented on the latest fossil hominin find. Dubbed ‘Kadanuumuu’ (or ‘Big Man’), this is a partial Australopithecus afarensis skeleton. Kadanuumuu was much larger than the more familiar (& more recent) ‘Lucy’, & because of this, & because of features of the pelvis, the scientists who described the remains feel they were probably those of a male. There’s also the suggestion (see the comments thread for Brian’s article) that these remains may overturn the current hypothesis that afarensis’s ribcage was funnel-shaped. Or may not – we probably need more data on this one.

There’s an interesting discussion on Pharyngula  around the separation of science & belief. Part of the post, & the ensuing comments thread, focus on a post by another blogger that appears to be making an argument for students’ personal beliefs to count as valid answers in science exams. Every now & then I’ve seen a student answer a question in this way, rather than giving a reasoned scientific response to said question. In each case I have marked them down, & it’s not because I deny students the right to personal belief systems. It’s because the question has been science-based, & that’s what I expect the answer to be as well. Anyway, the post & discussion are interesting & thought-provoking.

And the Silly Beliefs team have taken a critical look at a recent item on ’60 Minutes’ that took an extremely credulous stance on the issue of UFOs & alien visitations. I had wondered whether to watch the program but the promos made me think that this would do damage to my blood pressure. Presenting information that turns out to be at least a decade old as something new & exciting doesn’t strike me as particularly good journalism…

Enjoy :-)

the genetics of lactase persistence Alison Campbell Jun 25


Some time ago now I wrote about lactose intolerance in humans & the domestication of cattle. Last year the Schol Bio exam included a question that looked more deeply into lactase non-persistence (which is the normal genetic condition: around 70% of all adults can’t digest the milk sugar lactose because the gene coding for the necessary enzyme is ‘switched off’ in early childhood). The examiner asked students to 

[D]iscuss the presence & occurrence of lactase persistence in different regions of the world. In your discussion consider: the genetics & inheritance of the lactase persistence allele in humans; the role of cultural evolution in the selection of lactase persistence in only certain regions of the world; & the reasons for the current frequency distribution of lactase persistence.

It’s an interesting question & so I thought I’d talk more about the whole lactase thing here.

As I said in my previous post, people first domesticated cattle around 8000 years ago; probably the animals were first kept for meat but at some point farmers thought of using the milk as well. Now, in terms of the human population as a whole, most adults can’t digest the lactose that milk contains, & can be described as lactose-intolerant. Rather than being broken down into the monosaccharide sugars glucose & galactose (which are small enough to be absorbed across the wall of the small intestine), the lactose passes on to the large intestine where bacteria use it as an energy source. Unfortunately this bacterial fermentation also produces a lot of gas (which can be a bit anti-social & more than a little uncomfortable) & a range of other by-products that can cause considerable discomfort to those concerned. (This includes diarrhoea: the sugars that remain in the gut raise its osmotic potential & this means that a lot of water’s retained in the faeces, with unpleasantly sloppy results.)

It turns out that human lactase production is under the control of a gene on chromosome 2 (which means that it’s not sex-linked). The gene’s switched on in babies – as for all young mammals – & as a result the lactase enzyme is produced in the infant’s small intestine, allowing them to completely digest their milky diet. And, as in all young mammals, there’s a developmental pattern of gene expression in the small intestine: the gene is turned ‘off’ (ie the DNA is altered in a way that means that the gene cannot be expressed) when the infant is weaned. In other words, this change in gene expression is induced by an environmental change – an example of epigenetics. The result is ‘lactase non-persistence’ in the majority of human adults. However, this non-persistence is neither universal nor distributed evenly in human populations. Instead, lactase persistence is the norm in some parts of the world, and for most people in these populations the lactase gene remains active, continuing to express lactase in the small intestine.

It seems that the gene remains active in these individuals because they all carry a dominant mutation that prevents the permanent inactivation of the lactase gene. (‘Dominant’ means that the mutation is expressed in everyone who carries at least a single copy of that allele. For a ‘recessive’ allele to be expressed you need to have 2 copies of it – unless it’s sex-linked, that is.) What’s really interesting, in terms of the early history of agriculture, is that this mutation has become fixed in more than one regional population. It appears to have occurred – quite independently – in populations in northern Europe & also in parts of Africa, around the same time that milk cattle were domesticated in these areas. Now, realistically this mutation could have occurred many times over. But it wouldn’t have become fixed in a population until environmental conditions meant that it conveyed a selective advantage – in this case, the ability to digest milk & milk products, & thus take advantage of a novel source of protein, vitamins, & calories not available to the rest of the population. Bear that in mind when you look at the following map.
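
Because the persistence allele is dominant, you can sketch the relationship between allele frequency & phenotype frequency with the standard Hardy-Weinberg arithmetic. The following Python snippet is purely illustrative (the 0.7 allele frequency is a made-up number, not real data for any population):

```python
# Under Hardy-Weinberg assumptions, a dominant allele at frequency p is
# expressed in everyone carrying at least one copy: p^2 homozygotes
# plus 2p(1-p) heterozygotes. Illustrative numbers only.

def persistence_phenotype_freq(p):
    """Fraction of a population showing lactase persistence, given a
    dominant persistence allele at frequency p."""
    q = 1 - p                  # frequency of the non-persistence allele
    return p**2 + 2 * p * q    # homozygous + heterozygous carriers

# e.g. a (hypothetical) allele frequency of 0.7:
freq = persistence_phenotype_freq(0.7)   # 0.49 + 0.42 = 0.91
```

Notice how quickly a dominant allele shows up in phenotype frequencies: even at an allele frequency of 0.3, just over half the population (0.09 + 0.42 = 0.51) would be lactase-persistent.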

The distribution of lactose-intolerance is shown on the image below (if you click on it you’ll get to a higher-res form), where bright red represents the highest frequency of lactose-intolerant individuals (91-100%) & hence the highest frequency of lactase non-persistence. Bright green shows the lowest frequency (0%) [& apologies to those of my readers who are colour-blind! Blame wikipedia...].


So, let’s look in a bit more detail at the distribution of lactase non-persistence (shades of red) & persistence (shades of green). The high frequency of lactase persistence in parts of Europe & North Africa is related to the fact that these are areas where dairy farming was independently ‘invented’. Once people thought of drinking milk, those with the mutant allele that allows lactase persistence would be at an advantage because of their ability to access a good-quality source of nutrients & calories. If they produced more children, on average, than non-milk-drinkers, & some of those children carried the mutant allele, then it would spread through the population & milk drinking would become more common. The similarly high frequency of the allele in North American and Australasian populations can be put down to high migration rates from Europe.

There are of course exceptions to that last statement. Indigenous populations in both North America & Australia have a high frequency of lactase non-persistence, as do African Americans. Not to mention Asia & southern Africa: for an excellent Schol answer you’d need to suggest a reason for all this.

For the indigenous populations of Australia & North America, lactase non-persistence (& thus lactose intolerance) would be expected to be at high frequency in the populations because these are areas where early human populations did not develop dairying. Thus there’d have been no selection pressure & no ‘fixing’ of any ‘persistence’ mutations that occurred. You could also suggest that until recently there’s been little gene flow into these countries from the areas where dairying developed (with the resultant high frequency of the lactase persistence allele). It’s also likely that, where migration did occur, it didn’t equate to high levels of interbreeding (which would be necessary to introduce the persistence allele into indigenous populations). And you could also suggest that, for African Americans, the source of Africans taken to the US by the slave trade would have something to do with it.

See? I said it was an interesting question :-)

a sponge makes the top 10 Alison Campbell Jun 24

Sponges are strange organisms – classified as animals, they definitely look the odd one out. I rather like them: no real tissue development, no organs, immobile, & a growth habit that looks distinctly plant-like. Instead, what you get is an organism formed from just a few types of loosely-organised cells, all sitting (& moving) on & within a ‘skeleton’ made either of a protein (aptly enough, called ‘spongin’) or of spicules, which are something like fibreglass. You would not want to use a spicule-sponge in the bath, unless you were intending some serious exfoliating.

The simplest sponge body is rather like a hollow tube with perforated walls: the perforations, or pores, are what gives the phylum its Latin name, Porifera (literally, ‘pore-bearer’). Each pore is lined with a type of cell known as a porocyte (where ‘-cyte’ means ‘cell’) & leads into the sponge’s inner cavity (the ‘spongocoel’). Choanocytes (‘collar’ cells) line the cavity & their beating flagella draw water in through the pores; they also trap small food particles drawn into the sponge on the water currents. Amoebocytes are able to move through the sponge’s body & transfer digested food from the choanocytes to porocytes & the epidermal cells that cover the outside of the sponge.

So, sponges are filter feeders. Well, most of them are. But some deep-sea sponges turn out to be carnivores. One of them lives in NZ waters and this year was chosen as one of the international Top 10 New Species of 2010. Chondrocladia (Meliiderma) turbiformis is just 2cm long & lives at depths of around 1000 metres on the Chatham Rise, off the eastern NZ coast.

It’s hard to see how a small, sessile organism with no apparent means of catching prey can lead a carnivorous lifestyle. It turns out that the outer surface of C.turbiformis is covered by most unusual spicules. One type of spicule (‘D’ & ‘E’ on the image below) is C-shaped, while another type is described by the scientists who discovered the sponge (NIWA scientist Michelle Kelly & French sponge expert Jean Vacelet) as reminiscent of spinning tops – they’re labelled ‘G’ on the image, which also shows the sponge itself (‘A’) (image by Jean Vacelet).


Apparently the C-shaped spicules are sticky (must be to do with their spiky ends) & trap small animals that brush against them. The amoebocytes then swing into action, moving towards the trapped animal & each engulfing a tiny portion of it to take back to the rest of the cells of the sponge’s body. It seems that this strange lifestyle is an ancient one – because the only other known sponges with ‘spinning top’ spicules come from early Jurassic fossils.

Catching your dinner with velcro sounds unusual enough, but there is another carnivorous sponge that lassos its prey! Asbestopluma hypogea extrudes tiny filaments, all covered with hook-like spicules, that snag small prey out of the water column.


More filaments grow out to entangle and surround the prey & amoebocytes flock to the captured organism & engulf it, piece by tiny piece. Death by a thousand nibbles? A slow death, anyway – apparently it takes 8-10 days for a large prey organism to be consumed in this way.

And I’m sure there are stranger creatures yet living in the deep oceans of the world…

the camel’s hump Alison Campbell Jun 22

Right now, like many of my colleagues, I’m busy marking end-of-semester exams. (In my case this process is complicated by the worst cold I’ve had in ages…) However, I’m happily procrastinating – as far as the marking’s concerned – because something a student wrote in an essay triggered this post :-)

One of my essay questions asked for a discussion of the ways in which terrestrial animals manage the problem of water loss in what is a rather dehydrating environment. With examples. Anyhow, in the course of their answer someone mentioned camels & the widely-believed-but-inaccurate factoid that these desert-dwelling mammals store quantities of water in their humps…

Which they don’t. We’d looked (albeit briefly) at this in lectures, partly because I know from my secondary-teaching experience how widespread that particular misconception is. Seeing that answer made me realise that I need to think carefully how I approach that one when I teach the topic again next year, as it was a timely reminder of how strongly-held some misconceptions can be; it’s not simply a matter of presenting the accurate information a few times & assuming that this will replace the existing alternative conceptions.

What camels do do is more complex – & more fascinating – than pumping a hump full of water. (One way to approach that misconception could be to ask the class to consider how the water would get there, how it could be stored, & how it could be mobilised. The stomach – which is I suspect the most likely candidate they’d put forward for a storage organ – doesn’t extend to the hump. That’s above the backbone; the stomach, with the rest of the gut, is slung below.) They have a suite of adaptations that mean that camels can go without water for several days while being physically active in the extremely dry, & dehydrating, desert environment. In fact, they can lose a volume of water equivalent to around 40% of their body weight – humans can cope with losing no more than 10%.

One of the issues with dehydration for most mammals is that blood plasma volume decreases: the blood becomes thick & ‘gluggy’ & the heart has to work much harder to shift it around the body, which can in turn cause a whole range of problems. But not in camels. They manage to retain blood plasma volume at the expense of other body tissues, & in addition their red blood cells can still move smoothly even when the blood does become more viscous.

In addition, camels’ kidneys can produce extremely hyperosmotic urine – many times more concentrated than their blood. Humans, like all mammals, can also produce hyperosmotic urine, but in our case it’s only around 4 times the concentration of blood plasma. Camel urine’s been described as ‘syrupy’ & extremely salty. (It must also be very dark brown in colour. I remember when I went down to Antarctica, one of the pre-flight talks was about the dangers of dehydration – Antarctica is a very dry place – & we were told to keep an eye on urine colour as a measure of how dehydrated we were. Pale straw-coloured, good; dark brown, not good at all!) That camels can do this suggests that they must have very long loops of Henle in their kidneys, relative to kidney size, as it’s these fine tubular structures that set up the conditions for final urine concentration.
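
The advantage of concentrated urine is easy to quantify: the minimum water you must lose in urine is the solute load divided by the maximum concentration the kidney can achieve. The numbers in this little Python sketch are rough, textbook-style figures (plasma ~300 mOsm/L; the daily solute load & the camel’s concentrating ability are hypothetical):

```python
# Minimum obligatory urine volume = solute to be excreted / maximum urine
# concentration. All numbers below are illustrative, not measured values.

PLASMA_MOSM_PER_L = 300      # approximate mammalian plasma osmolarity

def min_urine_volume_l(solute_load_mosm, max_conc_mosm_per_l):
    return solute_load_mosm / max_conc_mosm_per_l

daily_load = 1200   # mOsm of solute to excrete per day (hypothetical)
human = min_urine_volume_l(daily_load, 4 * PLASMA_MOSM_PER_L)  # ~4x plasma
camel = min_urine_volume_l(daily_load, 8 * PLASMA_MOSM_PER_L)  # say, 8x plasma
# human: 1.0 L vs camel: 0.5 L -- doubling the concentrating ability
# halves the water that must be lost for the same solute load
```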

It pains me to think about it, but camels also produce very dry faeces – a common adaptation in desert animals. How they avoid terminal constipation I do not know :-) And we’re talking faecal pellets here, rather than big ploppy poos – small pellets have a high surface area:volume ratio & so ‘lose’ more water back across the gut wall & into the blood stream than a large single faecal mass would. Plus, a camel doesn’t begin to sweat until its body temperature reaches 42°C, which would be dangerously high in a human.
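
The surface area:volume argument for pellets is easy to check with a bit of Python (treating the faeces as spheres, which is obviously a simplification):

```python
# For a sphere, surface/volume = 3/r, so a smaller radius means a higher
# SA:V ratio. Compare one big mass with many small pellets of equal
# total volume (illustrative geometry only!).
import math

def sphere_sa_v(r):
    surface = 4 * math.pi * r**2
    volume = (4 / 3) * math.pi * r**3
    return surface / volume    # simplifies to 3/r

big = sphere_sa_v(3.0)     # one mass of radius 3: SA:V = 1.0
small = sphere_sa_v(1.0)   # 27 pellets of radius 1 (same total volume): SA:V = 3.0
# the pellets expose 3x the surface per unit volume, so more water can be
# reclaimed across the gut wall before the faeces are egested
```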

And when they do get a chance to drink, they drink! And drink. And drink. Up to 57L at one sitting. Taking on a really big volume of water at one session is a Bad Thing for most animals: if the water’s absorbed rapidly then it can dilute blood plasma & cellular fluids to dangerously low levels. One of the side effects of this would be lysis of red blood cells as they absorbed water & swelled past the point that could be contained by the cell membrane. Apparently camels get around this one by absorbing water only very slowly, & their red blood cells can swell to more than twice their normal size before they burst. 

So what is in the camel’s hump? Fat. It’s a food reserve – & one that does supply the animal with some water. This is because when the fat is metabolised, water is released as a by-product. In some desert animals, such as the kangaroo rat, this ‘metabolic’ water is the sole source of water for the organism. For much of the time, kangaroo rats do not drink at all.
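
You can put rough numbers on the metabolic water idea. Complete oxidation of fat yields roughly 1.07 g of water per gram of fat – a commonly quoted textbook approximation (the exact figure depends on the fatty acids involved) – & the 20 kg hump in this sketch is a made-up example:

```python
# Metabolic water from fat oxidation: ~1.07 g water per g fat is a
# standard textbook approximation (illustrative, not camel-specific).
WATER_PER_G_FAT = 1.07

def metabolic_water_g(fat_g):
    """Grams of water released by completely oxidising fat_g grams of fat."""
    return fat_g * WATER_PER_G_FAT

# a hypothetical 20 kg of hump fat:
water_l = metabolic_water_g(20_000) / 1000   # roughly 21 litres
```

In practice the picture’s more complicated – breathing harder to supply the oxygen for all that oxidation costs water too – but it shows why hump fat is a useful reserve, even if it isn’t a water tank.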

And with that, I really must get back to my marking!

contaminated dietary supplements Alison Campbell Jun 19

Trawling through my ‘blogging’ folder, wondering what to write about, I came across a paper from the New England Journal of Medicine that discusses problems with contaminated dietary supplements in the US (Cohen, 2009). I’ve previously written about the recall of ‘natural’ treatments for impotence, & Grant’s talked on more than one occasion about the need for ‘truth in labelling’ for such supplements & other forms of complementary & alternative medicine. So I thought it was about time for a follow-up.

Every so often the issue of regulating complementary & alternative health products comes to the fore in NZ. And when it does there are usually fairly strident arguments made against the suggestion. I’ve never really been able to understand why, given the evidence that these products can be adulterated, or aren’t standardised in terms of dosage. I’m also at a loss to see how some (many?) of these products can be described as ‘natural’, and held to be much better for you than those made by ‘Big Pharma’, when in fact they’re often (as we shall see) not all-natural & also often produced by the same pharmaceutical companies that make prescription drugs.

Cohen’s paper begins with the cautionary tale of an American police officer who took what was presented as a ‘natural’ weight-loss supplement to help him lose some excess kilos. The supplement also lost him his job – it contained amphetamine, which was detected in a routine urine test & led to him being fired. Apparently, by August 2009 the US Food & Drug Administration had identified more than 140 ‘natural’ products containing active pharmaceutical ingredients, most of them marketed as dietary supplements – & this is regarded as the tip of the iceberg.

Apparently, before 1994 herbal products fell under the rubric of food additives; manufacturers had to prove their products were safe before they could market them. These days, since the 1994 US Dietary Supplement Health & Education Act, it’s assumed that supplements are safe & there’s little control over their marketing. However, it seems that this deregulated environment isn’t well understood by either consumers or doctors. The majority of consumers believe that the supplements they take have been approved by the relevant government agency, & must carry warning labels about any side effects that may exist. (The absence of such warnings is then taken to mean that the product is safe, when in fact no warnings are required in the first place.) Similarly, a survey of doctors in training found that a large minority also believed that the products had to have FDA approval, while most didn’t know that adverse events had to be reported to the FDA.

Cohen’s list of contaminants found in ‘health’ products on the US market is alarming. While poisonous plant materials, heavy metals, & bacterial contamination are commonly found, what’s even worse are the many supplements – touted as ‘natural’ – that contain ‘prescription medications, controlled substances, experimental compounds, or drugs rejected by the FDA because of safety concerns’ (Cohen 2009). They’re most often detected in products sold to enhance sexual or athletic performance, or for weight loss, but are also found in supplements aimed at people with serious health concerns, such as diabetes. In the light of all this I find it more than a little ironic that a New Zealand natural health products website – arguing against regulation of these products in NZ – says that "the status quo, mirrored in the USA, has been shown over many years to be safe, inexpensive and efficacious. Existing legislation protects consumers from dangerous products and misleading advertising." As Cohen has pointed out, this is a long way from reality.

Given patients’ tendency not to tell their GPs what supplements they are using (unless the doctor asks fairly specific questions), & the fact that some supplements can act as antagonists to prescription medicines**, & that the contaminants themselves can have serious health effects, such widespread contamination may well represent a significant public health risk in the US (where it’s estimated that 114 million Americans use some sort of dietary supplement: Cohen, 2009). And in New Zealand. You might argue that ‘our’ products are much better formulated – but remember that many supplements are readily available on-line from overseas sellers, or are imported from overseas and, as the recent withdrawal of sexual enhancement products shows, may be subject to the same serious problems as those discussed by Cohen.

Personally, I think there’s a good argument for regulation of dietary supplements & other over-the-counter health products. You may argue that it’s a case of ‘buyer beware’ & individual freedom to choose – but there’s likely to be a significant cost to the individual and to the public health service when (& it is when, rather than if) things go wrong.

** For example, Oneschuk & Younus (2008) note that while some ‘natural health products’ may have the potential (based on animal & in vitro studies) to help cancer patients manage the side effects of chemo- & radiotherapy, others significantly reduce the effectiveness of chemotherapeutic drugs.

PS (23/06/2010) Anyone interested in a more detailed coverage of the US situation should have a look at Steven Barrett’s post on Quackwatch.

P.A.Cohen (2009) American roulette – contaminated dietary supplements. New England Journal of Medicine 361(16): 1523-1525. doi 10.1056/NEJMp0904768

D.Oneschuk & J.Younus (2008) Natural health products and cancer chemotherapy and radiation therapy. Oncology Review 1: 233-242. doi 10.1007/s12156-008-0028-6 

a follow-up on bleeding for the cause Alison Campbell Jun 18


A couple of days ago, on my post about World Blood Donor Day, one of my commenters noted that the NZ Blood Service is apparently going to follow their Canadian & Australian counterparts in banning people from giving blood if they’ve ever had Chronic Fatigue Syndrome. (At the moment folks who’ve had CFS are OK to donate once they’re fully recovered.) The reason for doing so is a purported link between CFS & a particular retrovirus (XMRV, or xenotropic murine leukaemia virus-related virus).

But the link between CFS & XMRV is not particularly clear-cut. A study in Nevada found that >60% of CFS sufferers (N=101) also had traces of XMRV in their blood, compared to <4% of healthy controls (N=218). (Sorry, the link is to Science & may not work for all.) This sounded like something that ERV would be interested in & I was fairly sure she’d written something on it earlier, so I checked. I was right: she’s got a very interesting commentary on the methods used by the Nevada researchers. But she’s also cautious about the overall conclusions: fairly obviously XMRV isn’t the sole agent involved in CFS (if it is an agent), given that 33% of CFS patients didn’t express it in their blood. It would also be important to know where the samples came from: if the individuals with CFS lived where XMRV infection is common, then this would skew the results & make any relationship appear stronger than it is. And it does look as if at least some other labs haven’t been able to replicate these findings.

I can understand the Blood Service wanting to err on the side of caution, given issues with contaminated blood in the past (the Hep C/haemophilia problem, for example). Consequently I have to disagree with Smut on this one – it probably is better to be safe than sorry. A ban can always be reversed if the apparent XMRV-CFS link turns out to be non-existent after all.

On the other hand, I find it concerning that various commenters, including the lead researcher in the Nevada study, have made statements explicitly linking CFS & XMRV – when a causal relationship has yet to be demonstrated. (It could equally well be an opportunistic infection.) A commercial test for XMRV is now available. While this is valuable as a research tool (in measuring the incidence of infection, for example), identifying a particular individual as +ve for the virus can’t at present assist in actually treating the patient. However, in at least some cases people with both CFS and an XMRV infection are taking powerful anti-retroviral drugs (commonly used against AIDS) that can themselves have significant side effects, in the hope that ridding themselves of the virus will also cure the CFS. This seems to be drawing a long bow indeed.

pseudoscientific gambits Alison Campbell Jun 16

This one struck a chord with me – it highlights the ‘Intelligent Design’ (cdesign proponentsists) tactics during the Dover trial, and also various anti-vaccination shenanigans such as the use of celebrity endorsements. Well, any anti-science shenanigans, actually…

From Tree Lobsters, via the Millenium Project.

world blood donor day Alison Campbell Jun 14

Today (June 14) is World Blood Donor Day. Blood’s not a product that keeps particularly well (about a month, if we’re talking whole blood) & blood banks are always looking for new donors. In New Zealand, around 3,000 donations per week are needed in order to meet the demand.

So, if you’ve been considering giving blood, put the thought into action & rock on down to your nearest NZ Blood Service centre. Giving blood’s a straightforward process & can make a profound difference to the lives of others.

the tyranny of powerpoint Alison Campbell Jun 13

I began my university teaching career in the years B.P. (Before Powerpoint). Blackboards, chalk, & overhead transparencies (often hand-written & hand-drawn) were the order of the day. Since then, Powerpoint has become an almost universal tool & ‘chalk-&-talk’ is a rarity. But Powerpoint is just a tool, & using it doesn’t guarantee a good presentation. (Slides that simply present large blocks of text; blocks of text in tiny fonts; lines of text that ‘fly’ in from one side or the other; typewriter sounds as letters appear on the screen – don’t do it! Please don’t go there!)

Anyway, a colleague has just given me a copy of Yiannis Gabriel's 2008 paper looking at the use (& abuse) of Powerpoint as a teaching tool. And it's really got me thinking.

Gabriel begins by noting that Powerpoint "accomplish[ed] what earlier technologies did (overhead transparencies, slides, chalk and blackboard) only more efficiently, more stylishly." However, it's probably had more widespread, more pervasive effects: Powerpoint has become the basic lecture tool, but simply relying on it without thinking about how it's used can have some far-reaching effects on the nature of the learning that goes on in lecture theatres. One of his concerns is that, while Powerpoint is great for showing information in visual form (graphs, diagrams, photos, embedded videos), it may also affect students' abilities to analyse & think critically about information. (It can also act as a prop – how many lecturers these days would feel comfortable giving a lecture without Powerpoint, if the power goes down or the technology fails?) In fact, he expresses his own concern that "Powerpoint inevitably leads to comfortable, incontestable, uncritical, visually seductive and intellectually dulling communication."

Now, like almost all my colleagues I use Powerpoint on pretty much an everyday basis, & so Gabriel's ideas gave me considerable food for thought. It's easy to slip into using this technology routinely, in a way that's really just 'chalk-&-talk' elevated to another level. I try hard to avoid this: I use images & phrases as something to talk around & as cues for students to think about concepts, & I try to encourage discussion around the 'big ideas' of each lecture, using things like pop quizzes to start things off. (I really enjoy it when students ask probing questions that require a bit of thought for me to answer properly, not least because it lets me model how scientists think about things.) But is this enough?

Certainly the technology has its shortcomings, although these tend to be in how it’s applied rather than inherent in Powerpoint itself. You’ve planned your lecture in advance, all the images & words are assembled onto your slides – how easy is it to deviate from this if during the course of the lecture it becomes obvious that some in the class don’t understand what you’re saying, or want to ask questions around a particular issue? It could be argued that you just have to get through that material – it’s needed as the basis for the next lecture or some other paper – & the students will have to come to tutorials or ‘office hours’ to fill the gaps. But by then the moment’s passed.

Myself, I don’t see the value in that. Better by far to address the issues that students raise, on the spot – after all, how can I expect them to understand the material that follows if they haven’t ‘got’ what I’m talking about at the moment? You can deal with this with Powerpoint, as you would have done in the ‘old days’: I had the experience a few weeks ago where it became clear that many in the class hadn’t a clue about meiosis, & without it much of the rest of the lecture wasn’t going to make much sense to them. We ended up with an impromptu tutorial, with me using the computer mouse to ‘draw’ on my slides (having changed it from the usual arrow to a virtual felt-tip pen) to illustrate the points we were talking about. Yes, we didn’t get through everything I’d intended to for that class – but I was able to do an extra panopto recording later that day for the students to follow, & there were always the tutorials…

So I thought I was doing OK – & then Gabriel mentioned bullet-point lists… These are pretty much the standard way to present information in Powerpoint, but Gabriel points out that they contain some fish-hooks for teacher & student alike: "many people (and most students) confronting a list will assume that it is exhaustive, that the items on it are co-equivalent…, and that they are mutually exclusive. In reality, few lists meet these requirements, and yet they block thinking into precise areas of overlap or items that are absent from the list." There's also a risk that students will see the lists as completely authoritative where they may actually be tentative. And it's easy to use them to gloss over things that the lecturer's not sure about, or doesn't want to discuss – just don't put those items on the list!

When I think about it, I can see some of these things coming through in students’ test papers. For example, in teaching about the different ‘major phyla’ of animals, it’s easy to list the key features of each phylum in a series of bullet-points. I make the point in lectures that there may be other interesting features in a particular phylum – but in a test, for many students it’s as if I’d never said that; the bullet-point items seem to be all-important. This suggests to me that these students haven’t thought about other things that were said in lecture, or maybe those other things didn’t even register. And it’s made me wonder if there are other steps I could take to get this information across in a meaningful way that prompts the class to think carefully about what’s being said & why it matters.

Gabriel criticises images as well. And I agree with him – it's quite easy to put together a sequence of images that can engross the audience, to the point where they don't actually think critically about what's being said. But I also strongly believe that imagery can enhance student learning & understanding of things like anatomy or physics. Diagrams, too, are a double-edged sword. Used simply to present large amounts of information they can be both boring & overwhelming – but they can "also open up new possibilities of creative thinking, communication and learning."

I can see that I’ve got a lot of thinking and reorganising to do. I’d like to re-jig my Powerpoints to encourage a number of skills in my students, to enhance their learning – and because many of the skills that Gabriel identifies as desirable emphasise aspects of the nature of science itself:

  • filtering out the irrelevant & focusing on the memorable and significant;
  • tolerating uncertainty;
  • coping with ambiguity;
  • recognising & enjoying the fact that we don’t have clear, permanent solutions to every puzzle & problem;
  • developing the capacity for analytical, critical thought.

Using Powerpoint in a way that goes beyond it being merely a tool for presenting information can only enhance students’ learning (& – speaking personally – my enjoyment of teaching).

Y. Gabriel (2008) Against the tyranny of Powerpoint: technology-in-use and technology abuse. Organization Studies 29: 255-276. doi: 10.1177/0170840607079536
