By Guest Work 08/01/2018


This opinion piece by Dr Matt Boyd and Professor Nick Wilson kicks off a series that will run over the next couple of weeks looking at tech giant Google’s AI aspirations.

In a just-published paper in the NZ journal ‘Policy Quarterly’ we look at how the NZ Government might respond to rapid developments in artificial intelligence (AI). In this blog we summarise some of the issues and argue that government agencies need to seriously invest in understanding both the benefits and risks of AI.

Transformative advances in artificial intelligence (AI) have generated much hype and a burst of dialogue in NZ. Past technological change has led to adaptation, but adaptation takes time and the pace with which AI is arriving appears to be accelerating. For example, recent news about the unfolding ‘Russia Investigation’ may be just a prelude to what is possible if AI tools hijack our social systems.

Technology offers us opportunities to do things we previously could not, but in doing so the use of technology also changes us, and it changes the systems and norms of society.

Unchecked advances “self-defeating”

AI is a global issue and presents great opportunities for benefit, but also great risk. We argue that these risks are insufficiently articulated in NZ Government reports to date, and there is an obligation for NZ Government agencies to consider the kind of society we wish to live in and our role in the emerging global transition to a world with extensive AI.

AI digests information and the world has created vast databases that represent aspects of the human reality. These datasets are filled with sequences of cause and effect, associations, beliefs, emotions, goals, hopes, dreams, and behaviour. AI advances by consuming such data.

However, unchecked technological advance is self-defeating if it introduces significant threats. We don’t want to throw away advances in democracy and human rights in blind pursuit of efficiency and productivity. Organisations (and governments) that adopt a risk-aware mindset around AI will create the true long-term opportunities. Here we summarise the key risks AI poses and some potential responses (with further details in our just-published paper).

The risks of bias and injustice

Algorithmic bias can cause AI systems to behave in unintended ways. Microsoft’s Twitter chatbot Tay notoriously began producing racist content within hours of its launch. The public may be much less forgiving of a biased machine than a biased individual. This suggests the need for careful pre-testing of such systems and constant vigilance – particularly as there are already serious inequalities in NZ society (by socio-economic status, by ethnicity and by gender).

The risks of AI dominance of media discourse

The moot at a recent Oxford Union debate read, ‘This house believes that fake news is a serious threat to democracy and truth.’ Yet the problem is potentially far worse than that. Content created by AI could win the fake news game and exploit the psychological weaknesses of human beings. Human minds are essentially ‘hackable’, as classic experiments in power and authority, conformity, bias and ideology demonstrate, and social media databases can be exploited to psychologically profile every user on Earth.

The scale of such impacts is already unprecedented. Autonomous agents now account for 45% of social media posts in some countries. Masses of fake content can give the impression of popularity and cause conformist behavioural effects. Almost all internet traffic and content could become AI-generated. No one knows where ‘persuasive computing’ and ‘computational propaganda’ may lead us.

The risks of economic chaos and the transformation of work

AI has the potential to disrupt economic systems. Automation could lead to mass unemployment, and the new jobs created may not be ones that NZ’s labour market is equipped to capitalise upon. Job loss and productivity gain may exacerbate inequalities, which is concerning given the important relationship between socio-economic conditions and health.

Security and existential risks

Security risks include the vulnerability of AI systems to cyber attack, and advances in foreign weapon and intelligence systems (in particular, autonomous weapon systems). Autonomous weapon systems could be designed to be extremely difficult to ‘turn off’, so as to evade enemy interference. Alternatively, AI could even pose an existential threat by doing something accidental or unexpected, such as detonating nuclear weapons or releasing a pandemic virus. Serious authors on AI have written about how existential risks from AI may arise (eg, Bostrom in his book ‘Superintelligence’).

AI and Policy

Policy around AI needs to be flexible and based upon core societal values. However, NZ Government agencies seem strikingly upbeat about AI and articulate few risks. For example, the Ministry of Business, Innovation and Employment (MBIE) argues that we ought to promote NZ as a test bed for emerging technologies. Yet we don’t know whether a fully informed NZ public would concur with this position.

We suggest that the NZ Government needs to see AI as more than just a tool for increasing GDP, and needs to be transparent in communicating the changes AI poses for society. Ultimately we probably need to be designing for sustainable human well-being and for the long-term functioning of democracy.

These needs are underscored in an open letter from the Future of Life Institute and an opinion article in Scientific American offering warnings about some of the most insidious risks of AI. Many of these authors concur that we need some form of global governance board on AI.

So what are the potential policy goals? We suggest at a minimum these could include:

  • Monitoring of AI development milestones to aid prediction of how AI is developing
  • Devising mechanisms to ensure fair distribution of the benefits of AI
  • Supporting informational self-determination and popular participation
  • Improving transparency and minimising ‘information pollution’
  • Improving collaboration at national and global levels – to maximise benefits while minimising the risks of AI
  • Promoting responsible behaviour by citizens and organisations through digital literacy and digital ethics

Similarly, there are a number of questions pertaining to AI that may require a local answer by NZ Government agencies. These include whether our present legal tools are suitable, whether we ought to take a stance on banning autonomous weapons, and whether we need to regulate AI-driven persuasion systems that threaten to undermine a truthfully informed public.

In moving forward, we suggest that NZ Government might consider the following:

  1. Funding research and reports on AI that include the ethical, philosophical, social, and psychological issues (from a range of NZ perspectives – including Māori perspectives).
  2. Distilling a wide range of academic publications on AI risks so that the NZ public remains informed and empowered.
  3. Producing clear policy recommendations to address the risks of bias and injustice, the dominance of media discourse by AI, and threats to autonomy, employment and security.
  4. Supporting the formation of a global singleton body on AI benefits and risks (perhaps like the successful Intergovernmental Panel on Climate Change [IPCC]).
  5. Maintaining a vision for NZ as a society that strives for equality, empowerment, respect for autonomy, and with rights to truthful information.

It is not simply a matter of ensuring this country ‘stays ahead’ by using AI, and it is not a matter of ‘how can NZ win some race to make use of AI’. These sorts of soundbites are dangerous and risk shifting the public toward a predetermined mindset. It is time for us to have a much richer and more engaged discussion – one supported by serious research investment by NZ Government agencies.

Dr Matt Boyd is a Wellington-based freelance science writer and researcher. Professor Nick Wilson is a researcher in the Department of Public Health, University of Otago, Wellington.