I know I’m creeping into Marcus’s territory here but the research I’m going to discuss today would apply to pretty much any tertiary classroom :-)
This story got a bit of press about a month ago, with the Herald carrying a story under the headline: It’s not teacher, but method that matters. The news article went on to say that “students who had to engage interactively using the TV remote-like devices [aka 'clickers'] scored about twice as high on a test compared to those who heard the normal lecture.” However, as I suspected (being familiar with Carl Wieman’s work), there was a lot more to this intervention than using a bit of technology to ‘vote’ on quiz answers :-)
The methods traditionally used to teach at university (i.e. classes where the lecturer lectures & the students take notes) have been around for a very long time & they work for some – after all, people of my generation were taught that way at uni, & it’s not uncommon to hear statements like, ‘we succeeded & today’s students can do it too’. But transmission methods of teaching don’t reach a lot of students particularly well, nor do they really engage students with the subject as well as they might. (And goodness knows, we need to engage students with science!)
Wieman has already documented the impact (or lack of it) of traditional teaching methods on student learning in physics, but this paper (Deslauriers, Schelew & Wieman, 2011) goes further in examining the effect on student learning and engagement of changing teaching methods in one group of first-year students in a large undergraduate physics class. It can be hard to manage a class of 850 students, and so the lecturers at the University of British Columbia had split it into 3 groups, with each group taught by a different lecturer. While the lecturers prepared and taught the course material independently, exams, assignments and lab work were the same for all students.
Two of the three groups of students were involved in the week-long experiment; one continued to be taught by its regular, highly experienced instructor, while the other group was taught by a graduate student (Deslauriers) who’d been trained in ‘active learning’ techniques known to be effective in enhancing student learning. And ‘active learning’ wasn’t just using clickers: the ‘experimental’ group had “pre-class reading assignments, pre-class reading quizzes [on-line, true/false quizzes based on that reading], in-class clicker questions…, small-group active learning tasks, and targeted in-class instructor feedback” (Deslauriers et al, 2011). Students worked on challenging questions and practised scientific reasoning skills to solve problems, all with frequent feedback from the instructor. There was no formal lecturing at all; the pre-class reading was intended to cover the factual content normally delivered in class time. While the control group’s lecturer also used clickers, this was simply to gain class answers to quiz questions & wasn’t combined with student–student discussion, as it was in the experimental class.
One reason often given by lecturers for not trying new things in the classroom is that the students might resist the changes. But you can avoid that. I know Marcus finds his students are very accepting of change if he explains in advance what he’s doing & how the innovation will hopefully enhance their learning, and Deslauriers, Schelew & Wieman did the same, explaining to students “why the material was being taught this way and how research showed that this approach would increase their learning.”
So, what was the effect of this classroom innovation? Well, it was assessed in several ways. During the experiment, observers assessed how much the students seemed to be engaged in & involved with the learning process; they also counted heads to see what attendance was like. At the end of the intervention, learning was assessed using a multichoice test written by both instructors – prior to this, all learning materials were provided to both groups of students. And students were asked to complete a questionnaire looking at their attitudes to the intervention.
Prior to the experiment, only 55-57% of students in either class actually attended. Attendance remained at this level in the control group, but shot up to 75% during the experimental teaching sessions. Engagement prior to the intervention was the same in both groups, at 45%, but nearly doubled to 85% in the experimental cohort. Scores on a test taken in the week before the experiment were essentially identical for the two groups (an average mark of 47%, which doesn’t sound very flash) – but the post-intervention test told a completely different story. The average score for the control group was 41% and for the experimental class it was 74% (with a standard deviation in each case of 13%). And the intervention was very well received by students, with 77% feeling that they’d have learned more if the entire first-year course had been taught using interactive methods, rather than just that one week’s intervention.
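For the numerically inclined: the gap between those post-test means works out to a very large effect size. Here’s a quick back-of-envelope sketch in plain Python, using only the figures quoted above (means of 41% & 74%, standard deviation 13% in each group) – nothing here beyond that simple arithmetic:

```python
# Back-of-envelope effect size (Cohen's d) from the figures quoted above:
# control mean 41%, experimental mean 74%, standard deviation 13% in each group.

def cohens_d(mean_experimental, mean_control, pooled_sd):
    """Standardised difference between two group means, in SD units."""
    return (mean_experimental - mean_control) / pooled_sd

d = cohens_d(74, 41, 13)
print(f"Effect size: about {d:.1f} standard deviations")
```

That’s roughly 2.5 standard deviations – enormous by the standards of education research, where an effect of even half a standard deviation is usually considered substantial.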
Which is fairly compelling evidence that there really are better ways of teaching than the standard ‘transmission-of-knowledge’ lecture format. I try to use a lot of interactive techniques anyway – but reading this paper has cemented my intention to try something completely different next year, giving readings before a class on excretion (a subject which a large proportion of the class always seem to struggle with), and using the lecture time for questions, discussion, and probably a quiz that carries a small amount of credit, based on the readings they’ll have done. And of course, carefully explaining to the students about what I’m doing.
I’ll keep you posted :-)
Deslauriers L, Schelew E, & Wieman C (2011). Improved learning in a large-enrollment physics class. Science, 332(6031), 862-864. PMID: 21566198