By Associate Professor Colin Gavaghan, Director of the New Zealand Law Foundation Centre for Law and Policy in Emerging Technologies, University of Otago
The European Parliament’s Draft Report on Robotics has certainly captured the public imagination, and it’s easy to see why.
Proposals about ‘smart robots,’ ‘robot killswitches’ and ‘electronic persons’ perfectly capture the zeitgeist, where talk of the perils and promises of artificial intelligence and smart robots is a daily feature in our media, and robot-themed TV series like Humans and the remade Westworld attract critical acclaim.
But what exactly is the EU Parliament proposing? And are they the sorts of proposals that we ought to be considering for New Zealand?
The last of those questions is likely to command a lot of my own processing power over the next few years. Together with my Otago University colleagues Ali Knott and James Maclaurin, I have been awarded a grant by the New Zealand Law Foundation to investigate the legal and social implications of artificial intelligence.
The research is just getting underway, so I don't have any conclusions to share yet. But some of the questions and suggestions in the Parliament's Report reflect the sorts of things we'll be considering too.
Creating laws for robots
The Report was produced and agreed not by the whole of the EU Parliament, but by the 25-member Committee on Legal Affairs. It will be the subject of a vote by the full Parliament next month, and if it receives an absolute majority (i.e. a majority of all those eligible to vote) it will be passed on to the EU Commission, with a recommendation that it produce draft legislation.
There are, then, a few hurdles to overcome before this could conceivably lead to any actual law.
The Report is concerned with ‘smart robots’, which it defines as those that acquire information through sensors or info-sharing, are ‘self-learning’ and capable of adapting their behaviours and actions to their environment. For us lawyers, these qualifying criteria would need some more detailed definition if they are to be usable, but this is a general proposal and the Committee is doubtless aware of the fine-grained work that lies ahead.
The Report’s suggestions include a Register of smart robots and an Agency for Robotics and Artificial Intelligence. No doubt the more UKIP-inclined will view this as another example of EU bureaucracy gone mad. On the face of it, though, when faced with an uncertain new technology, a degree of monitoring and supervision doesn’t seem like a ridiculous idea.
Whether this achieves much of value will depend on such things as whether registration is compulsory, and whether that compulsion is backed with effective enforcement measures. (The best law in the world is only as good as its enforcement and compliance mechanisms.)
Elsewhere, the Report uses familiar terms like ‘precautionary approach’ and ‘minimal risk.’ Again, the devil will be in the details. Put five stakeholders in a room, and you’ll get ten different interpretations of ‘precaution’. But most of the credible iterations allow for the possibility that something might go wrong at some point.
Which brings us to the next of the Report’s areas of interest: liability. Who’s responsible when a ‘smart robot’ causes damage or injury?
The blame game
Until now, there hasn’t been too much difficulty in attributing responsibility for machine injuries – other than deciding whether the problem was a manufacturing defect, an operator error or just a chance event. The new challenge with ‘smart robots’ is that they will be programmed to adapt, learn and depart from their original programming, and typically, they won’t have an operator at all. So the question is: who’ll be responsible if they go wrong?
The Report considers various options, including a strict liability scheme, a mandatory insurance scheme, or payment into a central compensation fund – maybe something like our ACC scheme. But it seems to take as its starting point that the current rules around responsibility won’t be a very good fit for ‘smart robots’.
There has been lots of debate about the likely impact of smart robots on employment (examples include this report from the Oxford Martin Programme, and discussion in the Journal of Economic Perspectives, the Economist and even here on Sciblogs).
The more optimistic think we’ll simply see one sort of job replaced by another. They point out that these prophecies of human redundancy have been around for ages, and we haven’t yet seen the end of work. When the car replaced the horse, that was bad news for blacksmiths, but plenty of other opportunities arose.
But others suspect that this time may be different – that we’re actually running out of things that humans can do better than machines.
The Report doesn’t take a firm position on that question, other than to note that ‘the development of robotics and AI may result in a large part of the work now done by humans being taken over by robots.’ Aside from the obvious implications for those reliant on wages – for which it advocates serious consideration of the currently very fashionable idea of a general basic income – the Report also considers other likely impacts. Many social security systems are funded through wage transactions, via National Insurance contributions or employer levies of various sorts (including much of our ACC). Will those mechanisms survive if robot workers increasingly take the place of humans?
Again, the Report’s most concrete proposal is fairly modest – that employers should be required to disclose ‘the savings made in social security contributions through the use of robotics in place of human personnel.’
Probably the most controversial and eye-catching proposal relates to the idea of ‘artificial personhood.’ This is probably the point where, for many people, the Report departs from the staid business of EU regulation and catches the train into Westworld. But again, the Report stops well short of any specific proposal for such recognition, noting only that the emergence of smart robots raises questions about whether they should be regarded as natural persons, legal persons, animals or objects – or whether a new category should be created, with its own specific features and implications as regards the attribution of rights and duties.
Overall, the Report does a good and timely job of drawing attention to a range of questions that are likely to become a lot more urgent in the near future. That it doesn’t propose many implementable solutions is no great failure at this very early stage of the process. The test – and the challenge for those of us working in the field – will be whether the next stages see something a bit more concrete emerge from the worthy but somewhat woolly aspirations that have thus far commanded widespread support.
The Sciblogs Horizon Scan
This post is part of the Sciblogs Horizon Scan summer series, featuring posts from New Zealand researchers exploring what the future holds across a range of fields.
Featured image: Banksy street art. Credit: Scott Lynch / Wikimedia