By Robert Hickson 01/05/2017

The biggest fear about artificial intelligence is usually that it will become smart enough to get rid of humans. But what if one of the biggest threats is instead that it regulates us to death, or at least into subservience?

“Efficiency and effectiveness” is a common refrain from those advocating the use of artificial intelligence in government. There is certainly plenty of opportunity for improving government (central and local) processes and services with smarter automation.

Even, perhaps, up to and including “Fully automated luxury communism” as Brian Merchant called it in the Guardian two years ago.

However, one aspect of AI and governance that I haven’t seen discussed much yet is the potential for over-regulation.

Common Good, which calls itself a “nonpartisan reform coalition” in the US, argues for more common sense in law-making, and for keeping people, not rules, in charge. In contrast to those who view the government as an insidious “dead hand” that stifles, they don’t advocate government getting out of the way, nor wholesale deregulation. But they do suggest fewer mindless rules and more human responsibility.


A similar message was given by New Zealand’s Rules Reduction Taskforce last year. Though they did note that there are some myths about so-called “loopy rules”.

That got me thinking about the role artificial intelligence – machine learning, cognitive systems, and their ilk – could play in the regulatory space.

I mentioned previously the deep fat fryer analogy Maciej Cegłowski used for artificial intelligence. Governments may become too attracted to automated regulatory systems and rules, taking the view that automation and smart algorithms make it easier to regulate and to be regulated.

However, such an approach is likely to take us even further away from Common Good’s desire for simple, clearly bounded frameworks. Just because a set of rules can be defined doesn’t mean those rules are good or should be applied, particularly if it’s not clear why. Rules imply precision, but as Common Good highlights, what we probably need is clearer boundaries rather than narrowly prescribed paths.

We need to ask what outcomes may arise from automated processes.

The important issue at the moment is not that we are under-regulated (though in some areas we are) but that we are often already overly, and inconsistently, regulated.

The New Zealand situation

Sir Geoffrey Palmer, who knows a thing or two about NZ legislation, noted a couple of years ago the swampiness of our statute books [PDF].

In 2014, Sir Geoffrey pointed out, there were over 1,000 primary acts of parliament in force, and 1,800 amending acts: over 90,000 pages of less than concise and consistent text. He also noted that the average length of statutes is increasing. In addition, there were 2,400 legislative instruments (i.e. regulations), and an unknown number of local government acts.

We, like most other governments, are good at creating laws, but bad at removing outdated ones.

As both Sir Geoffrey and the Rules Reduction Taskforce noted, there’s plenty of scope for cleaning up what laws we already have. It is just so difficult to do.

A better use of AI and similar systems in the legislative and regulatory space would be to first have them review what we already have and identify what’s duplicated, outdated, or inconsistent, so we can simplify the system.
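To make that concrete, here is a minimal sketch of the simplest version of such a review: flagging near-duplicate provisions by text similarity. The statute sections and their labels are invented for illustration, and real legislative analysis would need far more sophisticated language processing, but the principle of machine-assisted duplication detection is the same.

```python
# Sketch: flag near-duplicate provisions in a (hypothetical) corpus of
# statute sections using simple sequence similarity from the standard library.
from difflib import SequenceMatcher
from itertools import combinations

# Invented example sections for illustration only.
sections = {
    "Act A, s 12": "The occupier of any premises must keep every stairway in a safe condition.",
    "Act B, s 7":  "An occupier of premises shall keep all stairways in a safe condition.",
    "Act C, s 3":  "Dogs must be kept on a leash in all public reserves.",
}

def near_duplicates(texts, threshold=0.7):
    """Return (label1, label2, score) for pairs of sections with similar wording."""
    pairs = []
    for (id1, t1), (id2, t2) in combinations(texts.items(), 2):
        score = SequenceMatcher(None, t1.lower(), t2.lower()).ratio()
        if score >= threshold:
            pairs.append((id1, id2, round(score, 2)))
    return pairs

for a, b, score in near_duplicates(sections):
    print(f"{a} overlaps {b} (similarity {score})")
```

Run against the toy corpus above, this flags the two stairway provisions and ignores the unrelated dog-control rule. A real system would work at the scale of those 90,000-plus pages, which is exactly where automation beats human review.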

It will have come to a sad state of affairs if we have to use AI systems like IBM’s Watson just to identify our regulatory requirements.


Featured image: Courtesy of Elysium: The Art of the Film © 2013 Tristar Pictures