
Next week my first-year biology students will be doing an appraisal of this semester’s paper, & of those academic staff involved in teaching it. They’re asked about the perceived difficulty of the paper, the amount of work they’re expected to do for it, whether they’ve been intellectually stimulated, the amount of feedback they receive on their work, how approachable staff are, & much else besides. (The feedback one was always my worst-scoring attribute – until I asked the students what they thought ‘feedback’ meant. It turned out that they felt this described one-to-one verbal communication. We had a discussion about all the other ways in which staff can give feedback – & the scores went up.) The results are always extremely useful, as not only do we find out what’s working, but we also discover what’s not (or at least, what the students perceive as not working) & so may need further attention.

Anyway, my friend Annette has just drawn my attention to a lengthy post in The Atlantic, by Amanda Ridley. It made fascinating reading.

In towns around the country this past school year, a quarter-million students took a special survey designed to capture what they thought of their teachers and their classroom culture. Unlike the vast majority of surveys in human history, this one had been carefully field-tested. That research had shown something remarkable: if you asked kids the right questions, they could identify, with uncanny accuracy, their most – and least – effective teachers.

Ridley, reporting for the Atlantic, was able to follow a 4-month pilot project that was run in 6 schools in the District of Columbia. She notes that about half the states in the US use student test data to evaluate how teachers are doing.

Now, this approach is fraught with difficulty. It doesn’t tell you why children aren’t learning something, for example (or why they do, which is just as interesting). And it puts huge pressure on teachers to ‘teach to the test’ (although Ridley says that in fact “most [American] teachers still do not teach the subjects or grade levels covered by mandatory standardized tests”). It ignores the fact that student learning success can be influenced by a wide range of factors, some of which are outside the schools’ control. (And it makes me wonder how I’d have done, back when I was teaching a high school ‘home room’ class in Palmerston North. Those students made a fair bit of progress, and we all learned a lot, but they would likely not have done too well on a standardised test of academic learning, applied across the board in the way that National Standards are now.)

So, the survey. It grew out of a project on effective teaching funded by the Bill & Melinda Gates Foundation, which found that the top 5 questions – in terms of correlation with student learning – were

  1. Students in this class treat the teacher with respect.
  2. My classmates behave the way my teacher wants them to.
  3. Our class stays busy and doesn’t waste time.
  4. In this class, we learn a lot almost every day.
  5. In this class, we learn to correct our mistakes.

and the version used with high school students in the survey Ridley writes about contained 127 questions. That sounds like an awful lot to me, but apparently most kids soldiered on & answered them all. Nor did they simply choose the same answer for each & every question, or try to skew the results:

Students who don’t read the questions might give the same response to every item. But when Ferguson [one of the researchers] recently examined 199,000 surveys, he found that less than one-half of 1 percent of students did so in the first 10 questions. Kids, he believes, find the questions interesting, so they tend to pay attention. And the ‘right’ answer is not always apparent, so even kids who want to skew the results would not necessarily know how to do it.

OK – kids (asked the right questions) can indicate who is a good, effective teacher. What use is made of these results, in the US? The researchers say that they shouldn’t be given too much weighting in assessing teachers – 20-30% – & only after multiple runs through the instrument, though at present few schools actually use them that way. This is important – no appraisal system should rely on just one tool.

That’s only part of it, of course, because the results are sent through to teachers themselves, just as I get appraisal results back each semester. So the potential’s there for the survey results to provide the basis of considerable reflective learning, given the desire to do so, & time to do it in. Yet only 1/3 of teachers involved in this project even looked at them.

This is a problem in the NZ tertiary system too, & I know it’s something that staff in our own Teaching Development Unit grapple with. Is it the way the results are presented? Would it be useful to be given a summary with key findings highlighted? Do we need a guide on how to interpret them? Do people avoid possibly being upset by the personal comments that can creep into responses (something that can be avoided/minimised by explaining in advance the value of constructive criticism – and by being seen to pay attention to what students have to say)?

Overall, this is an interesting study & one whose results may well inform our own continuing debate on how best to identify excellent teaching practice. What we need to avoid is wholesale duplication and implementation in our own school system without first considering what such surveys can & can’t tell us, and how they may be incorporated as one part of a reliable, transparent system of professional development and goal-setting. And that, of course, is going to require discussion with and support from all parties concerned – not implementation from above.