The mantra that “the biggest sin a team batting first in an ODI can commit is to not bat out its overs” has long been a bugbear of mine. As Dan Liebke noted in a rant about net run rate the other day:

> We’ve had Duckworth Lewis for decades now and, even if the mathematics of it is beyond most casual fans, the basic concept that wickets remaining are a resource that needs to be considered along with overs remaining is pretty well established.

Yes, a team has two resources. If it is a sin not to use one of those two resources to the max, why is it not also a sin to bat out 50 overs leaving capable batsmen in the pavilion with their pads on? A batting team has to manage both declining resources with no certainty as to the effect its actions will have on either the rate of scoring or the loss of wickets.

So I was very happy to see Chris Smith take on this mantra on his *Declaration Game* blog, and also to see him quote a former player, Geoff Lawson, who was prepared to take a contrarian view.

> ‘Why?’ asked Geoff Lawson, who went on to rationalise that if all the batting side attempted was to survive the 50 overs, they were very unlikely to set a winning total. ‘Wouldn’t it be better’, Lawson argued, ‘to hit out with the aim of setting a challenging target, accepting the risk that they could be bowled out, than to crawl to an unsatisfactory total?’

Lawson is right, though not quite for the reason he gives. In this quote he seems to be suggesting that a team heading towards a very low score might as well start taking more risks to get to a competitive total. That logic would be a manifestation of a mathematical theorem known as Jensen’s inequality, which applies when optimising over a relationship that is not linear. But actually, the relationship between the total score and the probability of winning is pretty much linear *over the range of possibilities that can occur on any particular ball*. That means that a batting team should always ignore the current score, treat bygones as bygones, and base its level of aggression on how many balls and wickets it has remaining.
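The role linearity plays here can be sketched in a few lines. This is purely illustrative, not the WASP model: the two win-probability curves and all the numbers below are invented for the demonstration.

```python
# Sketch: why linearity makes the current score a bygone. If
# P(win | first-innings total) is linear over the range of totals reachable
# from any one ball, only the EXPECTED runs added matters, so extra risk
# buys nothing. With a convex curve, Jensen's inequality would instead
# reward risk-taking when the expected total is low -- the Lawson argument.

def expected_win_prob(outcomes, win_prob):
    """outcomes: list of (probability, total_score) pairs."""
    return sum(p * win_prob(total) for p, total in outcomes)

# Hypothetical win-probability curves (illustrative numbers only).
linear = lambda total: min(1.0, max(0.0, (total - 150) / 200))
convex = lambda total: min(1.0, max(0.0, ((total - 150) / 200) ** 2))

# Two strategies with the SAME expected total of 220:
safe  = [(1.0, 220)]                 # a certain 220
risky = [(0.5, 180), (0.5, 260)]     # 50/50 between 180 and 260

# Under the linear curve the two strategies are exactly equivalent...
assert abs(expected_win_prob(safe, linear) - expected_win_prob(risky, linear)) < 1e-9

# ...but under the convex curve the risky strategy is strictly better
# (Jensen's inequality), which is the effect Lawson's quote gestures at.
assert expected_win_prob(risky, convex) > expected_win_prob(safe, convex)
```

Because the real curve is close to linear ball-by-ball, the first case applies: the optimal level of risk depends on balls and wickets remaining, not on the score so far.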

As it happens, we can quantify this decision reasonably precisely. The graph below gives a measure of what I like to call “deathness” for the first innings. The particular metric I use is the payoff to a risky single. Imagine that the batsmen have to choose between trying for a run or not. If they choose not to run, they will score 0 runs but not lose a wicket. If they try for the run, the attempt may succeed, or there is some probability that it will fail and one batsman will be run out. What probability of being run out would be too high for the risk to be worth taking? The graph shows that cross-over probability as a function of the number of overs bowled, for each possible number of wickets lost. The higher the probability, the greater the risk worth taking and so the greater the level of deathness (so called because the final overs of an innings, when batsmen start to take higher levels of risk, are often termed “the death”). The actual numbers aren’t particularly interesting (most decisions on aggression are about striking the ball, not about whether to attempt a run), but the comparison across the different lines in the graph is revealing. So, for example, the graph reveals that if a particular level of aggression is warranted after 40 overs when a team is 5 wickets down, then the same level can be justified at 23 overs if no wickets have been lost.
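The cross-over calculation itself is simple arithmetic. Here is a minimal sketch with made-up wicket costs; the real costs would come from the WASP expected-score tables, which are not reproduced here.

```python
# Attempting a risky single gains 1 run with probability (1 - p), and with
# probability p loses a wicket costing C expected runs. The batsmen are
# indifferent when (1 - p) * 1 = p * C, i.e. at p* = 1 / (1 + C).
# A cheap wicket (small C, e.g. late in the innings) pushes p* toward 1:
# almost any run-out risk is then worth taking.

def crossover_prob(wicket_cost_runs):
    """Run-out probability at which a risky single breaks even."""
    return 1.0 / (1.0 + wicket_cost_runs)

# Hypothetical costs of the next wicket (in expected runs) at three game
# states -- purely illustrative numbers, not WASP output:
for overs, wickets, cost in [(10, 0, 30.0), (40, 5, 9.0), (49.5, 8, 0.5)]:
    print(f"{overs} ov, {wickets} down: "
          f"run if P(run out) < {crossover_prob(cost):.0%}")
```

Note that `crossover_prob(0.0)` is exactly 1, which is why every line in the graph reaches 100% by the final ball: on the last delivery a wicket costs nothing.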

Before getting to batting out your overs, a few things to note about this graph:

- It is based on WASP data that predates the rule change to two new balls and only four fielders outside the circle. That said, the basic story would not change using more recent data or some other estimate of the cost of a wicket, such as the Duckworth-Lewis tables.
- The graph indicates what the expected *payoffs* are to different levels of risk and return in different game situations; it does not show what different risk-return combinations are possible. So, for 0-7 wickets down, the graphs indicate that the cost of risk is high at the start of the innings (the probability of a run-out has to be very low to justify attempting a run). With the fielding restrictions in the first 10 overs, however, it can be that the return to batsmen from a particular level of risk is much higher than in the middle overs, so that a high-risk strategy is still worthwhile, despite the costs.
- The graphs all hit 100% for the final ball of the innings. That makes sense. It is simply saying that as long as there is any probability whatsoever of not being run out, you might as well keep running until you lose your wicket on the final ball.
- Interestingly, though, for 1, 3 and 6 wickets lost, the graphs hit 100% *before* getting to the final ball of the innings. Remember that this is based on average-team versus average-team data. What is going on here is that, on average, the batters deeper in the batting order are better at power slugging than those further up. So, for example, it is common for a batting order to have two aggressive openers followed by an accumulating #3 to take the team through the middle overs. If a team gets to 43 overs with only one wicket down, it might be better to go for a suicidal run (with the #3 coming to the danger end) and bring in a power hitter than to play out a dot ball.
- The graph for 9 wickets down slopes down for most of the innings. This mostly reflects out-of-sample extrapolation (there is no actual data for games where a team is 9 wickets down after 2 overs), but also the fact that when a team is 9 wickets down very early, there is almost no chance it will bat out its overs: the last wicket is likely to fall at any time, so it is worthwhile for the batters to take risky singles while they are still there to do so. The longer the innings progresses, the less reason there is to think that the next wicket is imminent, and so the more need for caution.
- While there is a general tendency for the graph to be lower the more wickets that have been lost, this tendency is not absolute. This is because, while losing a wicket will reduce the expected number of runs the team will score, the cost of the next wicket is not necessarily greater. For example, after about 46 overs, the incremental cost to a team of losing its 7th wicket is less than losing its 5th or 6th at that stage, so a team being 6 wickets down should be more aggressive than one that has lost only 4 or 5 wickets.
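The last bullet turns entirely on the incremental cost of the *next* wicket, and a short sketch makes the mechanics concrete. The expected-runs figures below are invented for illustration; the real ones would come from the WASP model.

```python
# Hypothetical expected runs remaining at ~46 overs, by wickets lost.
# Chosen so the 7th wicket is cheaper than the 5th or 6th, mimicking the
# non-monotonicity described in the text (numbers are made up).
exp_runs = {4: 34.0, 5: 28.0, 6: 24.0, 7: 22.5, 8: 15.0}

def next_wicket_cost(wickets_down):
    """Expected runs lost if the next wicket falls right now."""
    return exp_runs[wickets_down] - exp_runs[wickets_down + 1]

def crossover_prob(wickets_down):
    """Break-even run-out probability for a risky single: 1 / (1 + cost)."""
    return 1.0 / (1.0 + next_wicket_cost(wickets_down))

# With these numbers, a team 6 down faces a cheaper next wicket than one
# 4 or 5 down, so it can justify MORE aggression, not less:
assert next_wicket_cost(6) < next_wicket_cost(4)
assert next_wicket_cost(6) < next_wicket_cost(5)
assert crossover_prob(6) > max(crossover_prob(4), crossover_prob(5))
```

The general point: how many wickets have already fallen matters only through the cost of the next one, and that cost need not shrink or grow monotonically down the order.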

So let’s now think about batting out your overs. In the World Cup game between New Zealand and England, England, batting first, lost their 6th wicket at 28.1 overs, their 7th later in the same over, and their 8th at 30.4 overs. Looking at the purple, yellow and pink lines, the deathness measures at 28-30 overs are all pretty much the same. Yes, a lot more caution was called for than if they had been only 2 wickets down at that point (and so Broad’s approach was probably not beyond reproach), but neither was the optimal strategy for the team to go into its shell. Rather, the situation called for playing in much the same style as any team should in the middle overs (10-30), while delaying all-out aggression for a bit longer than if they had more wickets in hand. This pretty much describes any situation where the “make sure you bat out your overs” comment is likely to arise. A team should probably delay its all-out assault for a bit if it loses too many wickets, but at no point should it bat more conservatively than in a normal middle-overs situation.

It is an interesting question. First-class cricket has the same general format as test cricket (no limitation on overs, so the emphasis for bowlers is on getting batsmen out rather than simply restricting runs, and the emphasis for batters is on batting for long periods). On the other hand, we can all think of players who have been successful playing for their province, state or county but were unable to make the step up to international level. So performance in first-class cricket is perhaps the better measure of a player’s skill set, while performance in ODIs is the better measure of his temperament, and also of whether there are technical flaws in his game that will be found out at the highest level.

I don’t know the correct answer to this. My guess would be that the stronger the competition at sub-national level, the better first-class cricket would be as a predictor. So, for instance, I would expect first-class cricket to be a better predictor of Australian players’ success at test level than it is for New Zealand players. And I suspect that, overall, even in countries with weak domestic competitions, first-class cricket would be the better correlate, but that is just a guess.

But this raises a different question: what measure of performance in ODI cricket would be the best measure of success to correlate with test averages? This arises, again, because of the different format. Batting and bowling averages are a very good measure of performance in test or first-class cricket. Yes, we need to adjust for the conditions in which different players have played, and for the quality of the opposition, but in general maximising your batting average and minimising your bowling average is the way to maximise your team’s chance of winning. In ODI cricket, this is not the case. The relative importance of runs and wickets changes with the game context, so averages are not a good measure of a player’s contribution to his team. Naturally, I would prefer a measure like the player’s contribution to the WASP, as maximising that is what a player should be doing to help his team win.* But let’s say you had two players, one with a better average and one with a better WASP-based performance. Does the difference arise because the latter is a better player in general, or does it indicate that the former has a skill set more suited to test cricket? I really don’t know. Maybe that is a future Honours project.

* Actually, WASP-based measures would only be useful for comparing players at a similar point in the batting order. My student, Marcus Downs, has been writing an Honours dissertation this year developing an adjusted WASP-based measure that enables better comparisons across players with different positions in the order, but this is secondary to the main point.