By Dave Heatley 28/06/2019

Can automation tech keep improving at the current rate?

First, how good is today’s AI-enabled software? There is no question that it is a lot better at specific tasks than its predecessors. These tasks are more than toys – they include classifying photos, language translation, speech recognition and text synthesis.

My photo library has lived on an iPad for many years now. More recently, Apple added AI-based image classification to its operating system. Search for “beach”, and scenes like this pop up:

Photo: Dave Heatley.

But Apple’s AI makes some perplexing errors. On the left, Turoa ski field is a “beach” (a false positive), while on the right, Murray Beach on Stewart Island doesn’t make the cut (a false negative).

Such classification errors are fun and of low consequence in a photo search. But should I ask an autonomous vehicle to “take me to the beach”, I wouldn’t expect it to head for the snow!
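Those two error types can be made concrete in a few lines of Python. The photo labels below are invented for illustration; this is not Apple’s data or code:

```python
# Each entry: (photo, is_actually_a_beach, classifier_says_beach).
# Labels are made up for illustration.
photos = [
    ("Turoa ski field", False, True),   # not a beach, tagged as one: false positive
    ("Murray Beach",    True,  False),  # a beach, but missed: false negative
    ("Piha",            True,  True),   # correctly tagged: true positive
    ("Office desk",     False, False),  # correctly ignored: true negative
]

false_positives = sum(1 for _, actual, predicted in photos if predicted and not actual)
false_negatives = sum(1 for _, actual, predicted in photos if actual and not predicted)
print(false_positives, false_negatives)  # 1 1
```

In a photo search both counts are a curiosity; in a safety-critical system each kind of error carries a very different cost.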

Competing predictions follow a patchy history of AI performance

AI research has gone through several cycles of high promise and deep pessimism. Histories document two major “AI winters”, covering 1974–80 and 1987–93, along with several smaller episodes of slow or seemingly backwards progress. Forecasters offer three starkly contrasting trajectories, as shown in this stylised chart.

Which trajectory is more credible?

Clearly, current technology has room to improve, and it is yet to reach all possible applications. Still, deep learning has inherent limitations, and it is debatable whether the approach can be extended to situations that require more “general” intelligence. As Marcus points out, while deep learning is very good at interpolation (cases between known training examples), it performs poorly at extrapolation (cases beyond the range of those examples). The problem is that, in many situations, it is precisely the ability to cope with unusual cases that matters.
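Marcus’s distinction shows up even in the simplest possible “learner” – a piecewise-linear interpolator standing in for a trained model. This is a deliberately crude sketch of the idea, not a deep network:

```python
from bisect import bisect_left

def interp(x, xs, ys):
    """Piecewise-linear 'learner': exact interpolation between training
    points; outside their range it can only clamp to the nearest endpoint."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect_left(xs, x)
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

xs = [i * 0.5 for i in range(21)]   # training inputs: 0.0 .. 10.0
ys = [x * x for x in xs]            # true function: x squared

print(interp(5.3, xs, ys), 5.3 ** 2)    # inside the range: close (~28.15 vs 28.09)
print(interp(20.0, xs, ys), 20.0 ** 2)  # outside the range: 100.0 vs 400.0
```

Between training points the prediction is close; beyond them the model has nothing to generalise from and simply repeats the last value it saw – a toy version of the extrapolation problem.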

The self-driving cars of Waymo, Uber and Tesla, for example, have been unable to reach human-equivalent driving performance, let alone earlier predictions of better-than-human performance. According to commentator Timothy B Lee:

Driverless cars seemed to reach peak hype some time in late 2017. Then in 2018, the industry plunged into the trough of disillusionment, with some people wondering if driverless technology might be decades away.

An illustrative example is responding to an unusual object on the road. A human driver can draw on other experience to distinguish between, for example, a cushion and a similarly shaped rock on the road, and respond accordingly. A self-driving car is unlikely to have sufficient examples of both in its training set, and so may take (potentially dangerous) evasive action to avoid hitting a soft cushion! Google’s Chief Economist, Hal Varian, points out that data has diminishing returns to scale – the value of data scales with the square root of its quantity. Collecting huge amounts of data cannot guarantee that all unusual cases are included in the training dataset.
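Varian’s square-root point can be illustrated with a generic statistical simulation (my own toy example, not his): estimate a fair coin’s bias from n flips, and watch how slowly the error shrinks as n grows.

```python
import random

random.seed(1)

def mean_abs_error(n_flips, trials=2000):
    """Average absolute error when estimating a fair coin's bias from n_flips flips."""
    total = 0.0
    for _ in range(trials):
        estimate = sum(random.random() < 0.5 for _ in range(n_flips)) / n_flips
        total += abs(estimate - 0.5)
    return total / trials

# Quadrupling the data only roughly halves the error: accuracy grows like sqrt(n).
print(mean_abs_error(100), mean_abs_error(400))
```

Each further gain in accuracy costs disproportionately more data – and no finite dataset guarantees coverage of the rare cases.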

Overall, I’m inclined to forecast a near-term plateau in performance improvement, for two reasons:

  • The exponential and linear improvement projections discount earlier experience, in which sharp improvements were punctuated by “winters” spent waiting for better methods and algorithms.
  • Marcus’ case is strong. That is, current AI technologies face some impending technical roadblocks, and it will take time – perhaps decades – to deal with them comprehensively.

What does this mean for the future of work?

Early waves of automation tech replaced routine manual work with machines. They haven’t necessarily eliminated occupations. The quintessential manual worker – depicted on road signs as a person leaning on a spade – is still required, but today’s worker is most often seen working alongside an excavator. The excavator (and its operator) does the heavy lifting, while the spade operator handles the “edge cases”, exploiting their dexterity and decision-making capability to work near power and communication lines, for example.

Later waves of automation tech replaced routine cognitive work with machines. For example, “computers” – at one time, humans employed to do maths – were replaced by their electronic namesakes. But again, electronic computers handle the regular and standard cases, and we still need humans with maths skills to program those computers and to deal with the irregular.

I think what we are now looking at is a progressively expanding definition of “routine”, as AI technologies take on routine decision-making. Human decision makers will still be necessary for the non-routine – the special cases. And we will be even more valuable for that purpose. But I can’t see any strong evidence that this wave of automation will hit harder or faster than did previous ones.

Dave Heatley is a principal advisor with the Productivity Commission.

Responses to “Is the post-2012 acceleration in automation tech sustainable?”

  • Hi Dave

    I agree that technological progress isn’t a smooth curve. Current AI methods have been harvesting the low-hanging pattern-matching fruit. A signal of some of the challenges ahead is the recruitment of animal neuroscientists into the AI tech industry to improve understanding of brain functions. Marcus and others are also shifting focus to teaching AI how to learn like a baby, rather than having a smart AI spring Athena-like, ready for action, from the head of its creator.

  • Well, pattern recognition is indeed the low-hanging fruit of AI. That doesn’t change the fact that when DeepMind made a “pattern recognition” engine for chess, what it was “pattern recognizing” was the tree of consequences of a given chess action/position. In order to get an “image” of that tree of consequences, it used Monte Carlo techniques, and then ran pattern recognition on that “image”.

    That’s a (very, very, very) huge oversimplification, but the message is that the innovation that enabled the chess breakthrough was the coupling of pattern recognition with another nifty algorithm, in order to pattern recognize the Right Thing.

    So there are two fronts on which “immediate” progress can be pushed forward: 1. better theory and implementation of neural networks (which might need to go through a mathematical detour with applications of information geometry, off the top of my head). 2. Niftier tools than Monte Carlo tree search engines to expand the scope of what pattern recognition is applied to.

    Before AI can “handle unusual cases” systematically, my guess/insight is that a game like Hanabi needs to be solved “first”.

    Suppose X is a bunch of data (say an image). The idea you have of X, the human mental representation of X, has an AI analogue that can be thought of as what X looks like after it went through pattern recognition. Handling unusual cases means, for a human, at a minimum, having an idea of how inadequate an idea of X can be. So technically, it involves a bit of nested thought (such as the idea of an idea of X). Whether AI can pull that off or not? I believe it can.

    But it will have to pull it off first by cracking Hanabi, in my opinion.

    So yeah, we may have another AI winter. I do not outright deny the possibility. But will it be a long AI winter? Somehow, I cannot bring myself to believe that. It seems quite crackable to me. And companies like DeepMind seem so thorough in their methodology for investigating AI that I do not believe they will leave any stone unturned.

    (Honestly, I’m feeling a bit negative about how AI will be put to use. I see quite some potential for abuse of AI technology.)
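The chess “coupling” described a few paragraphs up can be cartooned in a few lines: a search algorithm chooses which positions to examine, and a stand-in “pattern recognizer” scores them. The names, scores and toy tree below are invented; this is not DeepMind’s actual method, which pairs Monte Carlo tree search with a trained neural network.

```python
def evaluate(position):
    """Stand-in for a learned evaluator: a lookup table of invented scores."""
    scores = {"a1": 0.2, "a2": 0.9, "b1": 0.5, "b2": 0.1}
    return scores[position]

# Toy game tree: our move -> the opponent's possible replies.
tree = {"a": ["a1", "a2"], "b": ["b1", "b2"]}

def best_move(tree):
    # Depth-2 minimax: assume the opponent replies to minimise our score,
    # then pick the move whose worst-case reply scores highest for us.
    return max(tree, key=lambda move: min(evaluate(reply) for reply in tree[move]))

print(best_move(tree))  # "a": its worst-case score (0.2) beats "b"'s (0.1)
```

The search decides *what* to evaluate; the evaluator decides *how good* it looks. Neither component alone would have produced the breakthrough.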

  • Hi Robert. Thank you for your comment and the interesting links. Recommended reading! Do you think that autonomous vehicles on public roads fit within “low hanging pattern matching fruit”? If not, what else do their developers need to solve? Thanks, Dave

  • There is often a lack of clarity about what is meant by “autonomous” (there are six levels [0 to 5] in the official definition of vehicle automation, with levels 3 and above able to self-drive under different degrees of human oversight). There is already a lot of automation within most vehicles today (braking systems, speed control, etc). That is the real low-hanging fruit.

    For self-driving vehicles, the fruit are higher up. Self-driving vehicles on well-made, non-winding roads in daylight and fine weather, with no human drivers around, are lowish-hanging fruit (though a decade ago we wouldn’t have thought that). More testing is required on non-highway and rural roads.

    The biggest challenge is humans – other drivers, pedestrians and cyclists (scooterists too, now). Google’s data for its cars shows that most of their accidents are rear-enders – where the human following the AV didn’t anticipate what the car in front was going to do. Local customs and behaviours can be difficult to navigate as well – do human-driven cars really stop at stop signs, and do they indicate?

    So developers need to include not just engineers but also anthropologists and other social and behavioural scientists. Future visions usually imagine every vehicle being autonomous, whereas reality will, for quite a while, be a mix of human-driven and self-driving vehicles. A key question is whether we want self-driving vehicles to be perfect, or just as good as (or a little better than) current road users.

    There’s another useful report just out from KPMG – the Autonomous Vehicles Readiness Index – which looks not only at technology, but also at policy and regulation, infrastructure and consumer acceptance. The Netherlands leads the field (NZ is at 11, down from 8 in 2018).
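For reference, the six automation levels mentioned above come from the SAE J3016 scheme. A quick sketch, with descriptions paraphrased:

```python
# SAE J3016 driving-automation levels (descriptions paraphrased).
SAE_LEVELS = {
    0: "No automation",
    1: "Driver assistance",
    2: "Partial automation",
    3: "Conditional automation",
    4: "High automation",
    5: "Full automation",
}

def can_self_drive(level):
    """Per the comment above: levels 3 and up can self-drive,
    with varying degrees of human oversight."""
    return level >= 3

print([lvl for lvl in SAE_LEVELS if can_self_drive(lvl)])  # [3, 4, 5]
```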

  • I read the article about Gary Marcus’s view on AI. There’s also LeCun’s perspective, developed here:

    “LeCun tells Science that translational invariance, too, could eventually emerge on its own with better general learning mechanisms. “A lot of those items will kind of spontaneously pop up as a consequence of learning how the world works,” he says. Geoffrey Hinton, a pioneer of deep learning at the University of Toronto in Canada, agrees. “Most of the people who believe in strong innate knowledge have an unfounded belief that it’s hard to learn billions of parameters from scratch,” he says. “I think recent progress in deep learning has shown that it is actually surprisingly easy.””

    I tend to strongly agree with LeCun. From what I’ve seen, most behaviours can be emergent properties of various forms of deep learning. There’s no strong argument, I feel, against this position.

    However, it may be true that, to accelerate application to real-world problems, some inspiration from humans or animals can help quite a lot in conceptualizing what the roadmap to AGI could be – or even not only conceptualizing it but implementing it.

    But on a theoretical level, I side with LeCun. And I also believe that downplaying LeCun’s position reflects an anthropomorphic bias in conceptualizing what AI can be. And if we want to discuss what dangers there may be in AI, I believe that this anthropomorphic bias stifles discussion of the real issues.