The University put out a press release yesterday describing some summer research I did on this topic with a summer student, Marcus Downs. It has been picked up by a few electronic media outlets, including the Herald, and the two of us were interviewed by One News, with a plan to include it on the Six O'Clock News tonight.

What we did was to go through the Cricinfo commentary for 122 ODI matches played in 2011 and 2012, and identify every ball where there was an opportunity for a fielder to bring about a dismissal, either by taking a catch, effecting a run-out, or making a stumping. We then used the commentary to characterise the degree of difficulty of each opportunity (blinder, difficult, normal, absolute dolly), and found the probability that the dismissal would be made at each of these difficulty levels.
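The probability estimate at each difficulty level is just dismissals divided by opportunities. A minimal sketch of that step, with made-up tallies (the real counts come from coding the commentary; these numbers are purely illustrative):

```python
# Hypothetical tallies per difficulty level: (opportunities, dismissals).
# The values below are invented for illustration; the actual counts come
# from coding the ball-by-ball commentary.
tallies = {
    "blinder":   (40, 6),
    "difficult": (180, 70),
    "normal":    (600, 480),
    "dolly":     (250, 238),
}

def dismissal_probabilities(tallies):
    """Estimate P(dismissal made) for each difficulty level as
    dismissals / opportunities."""
    return {level: taken / chances
            for level, (chances, taken) in tallies.items()}

probs = dismissal_probabilities(tallies)
```

With these invented tallies, a dolly is held almost every time while a blinder comes off only rarely, which is the pattern the difficulty grading is meant to capture.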

We then used the same analysis that produces the first-innings score predictor in the WASP (see my previous post on that here) to calculate how much a batting team's expected first-innings score increases or decreases after each ball. Batters get credit (or discredit) for all of that change, whereas the credit or discredit is shared between bowlers and fielders on balls where there is a fielding dismissal opportunity, with fielders getting more of the credit for a blinder, and more of the penalty for dropping a dolly.
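The credit-sharing rule can be sketched as a weight on the expected-score change, with the fielder's share rising with difficulty. The weights below are my own illustrative assumptions, not the ones used in the actual analysis:

```python
# Hypothetical fielder share of the credit by difficulty level: the harder
# the chance, the more of the expected-run swing goes to the fielder.
# These weights are illustrative assumptions only.
FIELDER_SHARE = {"blinder": 0.8, "difficult": 0.5, "normal": 0.2, "dolly": 0.05}

def split_credit(delta_expected_runs, difficulty):
    """Split the change in the batting team's expected first-innings score
    between bowler and fielder.

    delta_expected_runs is negative when a wicket falls (the batting team's
    expected score drops), so both shares are 'credit' from the fielding
    side's point of view.
    """
    share = FIELDER_SHARE[difficulty]
    fielder = share * delta_expected_runs
    bowler = (1 - share) * delta_expected_runs
    return bowler, fielder

# A wicket taken via a blinder: most of the swing is credited to the fielder.
bowler_credit, fielder_credit = split_credit(-20.0, "blinder")
```

The two shares always sum to the full change, so no run value is created or lost by the split.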

We calculated a distribution across all batters, bowlers, and fielders in our database. What we found was that a batsman who is one standard deviation above the average contributes about 8 runs more to his team than an average batsman; a bowler who is one s.d. above average contributes about 6 runs more (that is, he restricts the opposition's score by about 6 runs more than an average bowler); but a one-s.d.-above-average fielder contributes less than 2 extra runs. 8 runs may not sound like much, but an additional 8 runs can make quite a difference to the chances of a first-innings score being successfully chased.
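The final step is mechanical: sum each player's per-ball credits into a rating, then look at the spread of those ratings across players. A minimal sketch, with invented per-ball credits standing in for the real data:

```python
import statistics

# Hypothetical per-ball run credits for three batters; the real analysis
# uses the WASP-derived credits for every ball in the 122-match dataset.
credits = {
    "batter_A": [0.9, 1.2, -0.5, 2.0],
    "batter_B": [0.1, -0.3, 0.4, 0.6],
    "batter_C": [1.5, 0.8, 1.1, -0.2],
}

# Each player's rating is the total of his per-ball credits.
ratings = {player: sum(vals) for player, vals in credits.items()}

# The spread of ratings across players: "one s.d. above average" in the
# post refers to this standard deviation (about 8 runs for batters in
# the real data; the number below is meaningless illustration).
spread = statistics.stdev(ratings.values())
```

Comparing that spread across the batter, bowler, and fielder distributions is what gives the 8 / 6 / under-2 runs comparison above.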

We still have some improvements to make to the analysis, but they are only likely to reduce the relative importance of fielding further.

There are two main reasons why catches and run-outs are not that important (notwithstanding the recent 2nd ODI between NZ and South Africa, where 5 run-outs tipped the balance in New Zealand's favour). The first is that a lot of the run-outs and catches in ODI games occur near the end of the innings, where their impact on the score is not so great. The second is that most of the opportunities that arise are ones that are (or should be) straightforward for an international cricketer. We all recall moments of fielding brilliance, but those opportunities simply don't arise often enough to make the contributions of great fielders worth a place in the team for that reason alone.

There are a few caveats to any coaches taking policy conclusions from this.
  1. We have only looked at dismissal chances. If we were able to get good data on ground fielding, it might make a difference. I suspect not, though.
  2. We have only looked at ODI cricket. I suspect the role of catching might be greater in test cricket. (I am showing my age here, but I continue to believe that Jeremy Coney should have been in the NZ team in the 1976-78 period, simply to make sure there was someone who could hold on to the slip catches that Richard Hadlee was generating and having continually dropped at that time.)
  3. It may be that, relative to batting and bowling, fielding depends more on coaching and practice than on natural talent. If so, the reason the better-than-average fielders are not that much better than average may be that coaches have correctly emphasised bringing all fielders up to a minimum standard, and have not selected players who don't meet that threshold.
Notwithstanding the caveats, I think it is still fair to say that the "catches win matches" cliché should be put to bed.