By Guest Author 28/05/2017

The following is a synopsis of a presentation last week by a Wellington business strategy consultant, David Miller. He expressed deep concerns about the nature and extent of the global risks of future superintelligence. Miller’s presentation was part of Hutt City Council’s Science, Technology, Engineering, Manufacturing and Mathematics (STEMM) Festival.

The arrival of superintelligence – and specifically the singularity, the point at which machine intelligence might exceed aggregate human intelligence before rapidly accelerating much further – is probably still several decades away.

However, an extremely wide range of organisations and people are pursuing superintelligence.

The participants include multinational corporations and smaller companies, right down to start-ups – seeking financial returns; military branches and other government agencies – seeking military or political power; universities and scientific institutions, individual scientists and engineers – seeking world-class reputations; and terrorists and criminal syndicates – seeking ideological or illegal gains.

We need to differentiate superintelligence from more conventional artificial intelligence. We can usefully compare superintelligence risks with those applicable to the nuclear threat, pandemics and a potential asteroid hitting the earth.

Oxford Professor Nick Bostrom defines several categories of future superintelligence, varying from analytical tools targeting particular goals to “Sovereigns” that could operate autonomously and define and solve a wide range of problems.

A non-sentient form of extreme intelligence might be particularly dangerous. It might not be possible to reason with it, and it might pursue its goals relentlessly at all costs, consuming resources that could even include humans – for example, as a source of energy.

There are all sorts of possible scenarios including some in which the superintelligence might not divulge its status until it is in a position to escape and operate autonomously.

Controlling superintelligence will be extremely difficult because of the nature and extent of the global risks.

The areas of technological development we will need to monitor span advanced software and robotics; autonomous military weapons; several biotechnology disciplines; the protocols and regulatory regimes used to restrict or control the autonomy of advanced technologies; and the intellectual property – likely to be prized and protected by participants – that is critical to the development of superintelligence.

It is likely that corporate, military, government and scientific participants will want to get as close to dangerous superintelligence as they can, because that is where they will get the greatest bang for their buck. But experimenting with artificial intelligence that is close to the boundary of superintelligence may have devastating implications for humanity.

Comparable global risks associated with other potentially catastrophic events are qualitatively different. The nuclear threat is characterised by a relatively small number of nations, and we have a long and successful history of explicit control, despite a few issues with North Korea just at the moment. Climate change is a slow and visible process, and humanity hopefully has time to respond to the threat. Pandemics, along with a hypothetical asteroid collision, are very much a common threat, and people across national boundaries are highly motivated to address the threats with the best technology available.

Consider, for example, the risks of superintelligence vis-à-vis an asteroid collision with the earth.

An asteroid collision threat would be clearly definable and measurable, and the timing of its arrival predictable; it would also clearly be a common threat to all; and it would not be difficult to garner global support to address it. By contrast, superintelligence could derive from any one of numerous sources; the timing is likely to be unpredictable; the perpetrator may well perceive themselves to be a winner from it; it may be very difficult to focus resources to resist or overcome it because of national or military boundaries; and it may happen extremely fast, with no time for otherwise valuable iterative information feedback loops. We may not appreciate the downside until it is too late.

Some experts believe that machine learning – and particularly more advanced neural networks and predictive analytics – is inherently unpredictable.

There are several methods that could be used to control superintelligence. These include boxing it – rather like a dangerous pathogen in a secure laboratory; providing incentives for good behaviour; implementing kill switches, tripwires and honeypots for direct control and monitoring; controlling energy sources; and running parallel forms of superintelligence with identical initial instructions in order to watch for undesirable behaviour. All of these have significant weaknesses.

Nick Bostrom concludes that early installation of good values into superintelligence is one of the better options. The idea would be to inform and encourage superintelligence to place a high value on humanity and its future. Even this is complicated: who will decide on the values? We have many different political systems and ideologies on the planet, and it is not easy to get people to agree on important principles of how we should all live together in peace, prosperity and harmony.

Successful future international collaboration is critical. There are a number of successful precedents for this, e.g. the International Atomic Energy Agency, CERN, the International Space Station, the Human Genome Project, the Square Kilometre Array and international agencies such as the Red Cross/Red Crescent, UNDP and UNESCO.

However, these organisations receive variable support from different nations, in terms of both membership and resource commitments. And this is in spite of most of them serving a clear public good, with minimal if any competition amongst the participants.

The case of superintelligence will be much more complex and challenging. We could face numerous players with multiple, powerful, selfish motives. We will have to deal with an existing lack of trust between governments, militaries and even government agencies within nations. The proponents of superintelligence may be strongly committed to protecting their intellectual property and not sharing it with the world. Further, government agencies are typically slow to react and to make policy, and this may be incompatible with the problem we face.

Approaching superintelligence will require unprecedented accountability of world political, military and business leaders, along with those involved directly – and perhaps even indirectly – in the creation of even near-superintelligence.

We will need a powerful global monitoring and control entity operating for and on behalf of humanity, with some kind of “technological hit squad” able to act rapidly, decisively and effectively anywhere in the world.

The Trumps, Xis, Putins, Merkels, Macrons and Mays of the coming decades – and all political, military, business and scientific leaders, scientists and engineers involved – will need to think in terms of new international treaties and accountability to a powerful International Criminal Court. We’re talking about outcomes much worse than treason – potentially unimaginable crimes against humanity. The accountabilities will need to be such that if superintelligence even begins to get away from us, the severity of the response will make the Nuremberg trials after World War II look like a kindergarten picnic.

We need to ensure that the public is well educated and exerts pressure on political, military and business leaders and on scientists and engineers working in the space. We need to collate and integrate numerous analyses and reports that are already being undertaken around the world by government agencies, AI research and educational institutions and professional and industry bodies. And we should include analysis of protocols on autonomous weapons.

There is some excellent work being undertaken by entities such as the Centre for the Study of Existential Risk at the University of Cambridge, Singularity University in California and many others. We are now approaching the time when their outputs need to be integrated into international policy frameworks.

We have some time. We have a good record of responding to massive risks (e.g. nuclear, pandemics). We have a responsibility not to give our future generations what in rugby is called a ‘hospital pass’ – the one where you get smashed by the opposition. We must think beyond money, military superiority and political power and think of future generations – right across the planet.

New Zealand is particularly competent in information technology and biotechnology and, as an independent and respected participant in world affairs, could play a useful role in gaining long-term support for international collaboration, agreement and action.


See also EU Parliament release: Robots and artificial intelligence: MEPs call for EU-wide liability rules.