Biotech. Nanotech. Cleantech. Gene tech. Cloud computing tech. Successive waves of technological change are the norm for those of us in the developed world. These waves typically rise, peak and ebb while barely raising a concern about wider labour market effects.
Further, it seems strange to be worrying about unemployment – technologically induced or otherwise – when, according to The Economist, an extraordinary jobs boom is under way across the rich world. In New Zealand, workforce participation is at a historic high, while unemployment is close to its second-lowest point in nearly 40 years.
So, what is this “tech” that has everyone worried? And how does it differ from regular, garden-variety tech? Most of the noise is around robots, autonomous vehicles, bots and artificial intelligence (AI). Broadly speaking, these are automation technologies – that is, technologies that can potentially replace human workers with machines. That’s nothing new in itself – it was the source of the Luddites’ complaints in 1811–16.
Robots, autonomous vehicles, bots and AI have been around in one form or another for many decades:
- Unimate, the first industrial robot, was installed in 1959.
- The London Underground’s Victoria line has had automatic trains since 1968.
- ELIZA, the first chatbot, was developed in 1966.
- AI research dates from 1956.
What is different is that the software that controls their behaviour has recently got much, much better at making decisions. Starting from around 2009, artificial neural networks (an “old” AI technique dating back to 1965) became a practical technique for pattern recognition and image classification, and there have been significant ongoing improvements since then.
These improvements are due to four factors:
- better algorithms that allow systems to “learn” from large datasets, and then apply this training to making decisions about data not previously encountered;
- better and cheaper hardware (including special-purpose hardware for training and classification);
- larger, more reliable and cheaper datasets to use for training; and
- the commodification of the preceding three factors as on-demand services, allowing low-cost experimentation and widespread application.
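The “learn from a dataset, then decide about data not previously encountered” idea in the first factor can be sketched with the simplest possible artificial neural network: a single perceptron. This is an illustrative toy (the rule being learned and the training data are my own example, not from any real system):

```python
# A single artificial neuron (perceptron) trained on labelled examples,
# then used to classify inputs it has never seen. A minimal sketch of
# "learning from data" - not a real deep learning system.

def train(examples, epochs=20, lr=0.1):
    """Learn weights and a bias from (inputs, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # 0 when correct; +/-1 when wrong
            w[0] += lr * err * x1       # nudge weights toward the
            w[1] += lr * err * x2       # correct answer
            b += lr * err
    return w, b

def classify(w, b, x1, x2):
    """Apply the learned rule to a new input."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Training data: label is 1 when x1 + x2 is clearly greater than 1
examples = [((0, 0), 0), ((1, 0), 0), ((0, 1), 0), ((1, 1), 1),
            ((0.9, 0.9), 1), ((0.2, 0.3), 0)]
w, b = train(examples)

# Decisions about inputs not in the training data
print(classify(w, b, 1.5, 1.5))   # 1
print(classify(w, b, 0.1, 0.2))   # 0
```

Modern deep learning stacks many thousands of such units into layers, but the loop above – predict, compare with the label, adjust – is the same basic idea that the better algorithms, hardware and datasets have made practical at scale.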
Exploding interest in this combination of software, hardware and datasets started around 2012. Furman and Seamans (2018) document a huge jump in venture capital funding for AI since 2014, a significant climb in global industrial robot shipments starting in 2013, and a spike in AI-related US patent applications starting around 2014. The widespread commodification of AI tech is even more recent.
Systems incorporating AI (or more specifically, an AI technology generally called “deep learning”) can substitute for human decision makers. Put it into a car or plane, and you have an autonomous vehicle. A bot, living in a data centre, might substitute for humans in a call centre.
Why has this spawned a “future of work” industry?
For three reasons, I think.
First, this time around, automation appears to threaten service-sector employment. Automation has hit the agricultural and goods-producing sectors of the economy over the past century or so. The share of jobs in agriculture has plummeted in New Zealand since 1891, and goods-sector jobs have seen a less dramatic but still significant decline since they peaked (as a share of employment) in 1975. Jobs moved to the service sector – but now at least some of those jobs seem threatened.
Second, service-sector employment includes many well-paid professional jobs. Professionals make a big investment in their careers (e.g., through years of tertiary study) and are naturally fearful of anything that might devalue their skills. Such professionals are generally comfortable with using technology but not with the idea they may be replaced by it. Professionals are very influential in society and expect to have their opinions heard and respected.
Third, fears of job insecurity due to technology have been with us for at least two centuries. Mokyr, Vickers and Ziebarth (2015) point out that “from generation to generation, literature has often portrayed technology as alien, incomprehensible, increasingly powerful and threatening, and possibly uncontrollable”. They document concerns about machines replacing human workers dating back to 1772; more recent concerns peaked in the 1920s and 1970s. Jobs seem central to both income and identity, and it doesn’t seem to take much to fan the flames of the fearful.
Can automation tech keep improving at this rate?
Is the post-2012 acceleration in automation tech sustainable? I’ll offer my view on this in my next post.
- Furman, Jason, and Robert Seamans. 2019. “AI and the Economy.” In Innovation Policy and the Economy, Volume 19, edited by Lerner and Stern.
- Employment by industry sector breakdown for the US, 1850-2015.
- Mokyr, Joel, Chris Vickers, and Nicolas L. Ziebarth. 2015. “The History of Technological Anxiety and the Future of Economic Growth: Is This Time Different?” Journal of Economic Perspectives, 29 (3): 31-50.
- Some (deeply technical) breakthrough papers on deep learning: Hinton (2007) Learning multiple layers of representation; Raina, Madhavan & Ng (2009) Large-scale Deep Unsupervised Learning using Graphics Processors; and Krizhevsky, Sutskever & Hinton (2012) ImageNet Classification with Deep Convolutional Neural Networks.
Dave Heatley is a principal advisor with the Productivity Commission.
This post was originally published on the Productivity Commission's website.