By Guest Work 11/05/2017


Guest post from David Miller, Vantage Consulting

We need to be thinking about the long-term risks of super-intelligence.

Next week I’ll be giving a talk on this as part of the Hutt STEMM Festival. It’s an area I’ve had a keen interest in for several years. Although I have no domain expertise in the technical disciplines associated with artificial intelligence, I have found it fascinating to read about the potential for the so-called “singularity” – a hypothetical point when artificial intelligence exceeds and then accelerates far beyond human intelligence.

While argument from authority is never valid, it is interesting to note that some of the world’s leading thinkers on the subject have expressed significant concerns, e.g. Stephen Hawking and Elon Musk.

The writings and thinking of Prof Nick Bostrom at Oxford University are especially stimulating, and I will draw on several of his important ideas. There are of course some who assure us that there is no risk. Remember the bright sparks (sometimes “experts” in their day) who assured us that aeroplanes, computers and telephones had no future when they were first invented?

Many writers, such as Ray Kurzweil, predict that machine intelligence will eventually far exceed the combined intelligence of all humans. Kurzweil forecasts that by 2045 the non-biological intelligence created in that year will be a billion times greater than the sum of all human intelligence today.
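To get a sense of the scale implied by “a billion times greater”, here is a small back-of-envelope sketch in Python. The steady yearly doubling is my own illustrative assumption, not Kurzweil’s precise model; the point is simply that a billion-fold increase corresponds to roughly thirty doublings.

```python
import math

# Illustrative back-of-envelope only (assumes a steady doubling of machine
# capability, which is an assumption for illustration, not Kurzweil's model):
# how many doublings are needed for a billion-fold increase?
target_factor = 1e9
doublings_needed = math.log2(target_factor)  # log2(10^9) is roughly 29.9

print(f"Doublings needed for a billion-fold increase: {doublings_needed:.1f}")
# About 30 doublings -- so one assumed doubling per year, starting in the
# mid-2010s, would reach a billion-fold increase around the mid-2040s.
```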

Nick Bostrom in Superintelligence outlines a cogent and detailed case for why superintelligent AI poses a risk to humans. The degree of autonomy given to advanced AI is a particular risk, and Bostrom’s analysis goes far beyond Isaac Asimov’s three simple laws of robotics. He classifies several different types of superintelligent system – the most powerful of which is a Sovereign, which would have maximum autonomy and be by far the most difficult to control. He also treats biological superintelligence as a separate risk category from machine superintelligence.

We need to think about mitigating these risks. We need to think in terms of the economic, political, scientific and military environments in which superintelligence might emerge. It will take a very long time indeed to institute a set of internationally agreed legal and other protective protocols that work across the globe. But we have to do so, because with a possible singularity, unlike with pandemics and nuclear explosions, we mightn’t get a second chance. Think along the lines of how to deal with a prospective asteroid collision with the Earth.

The topics of my presentation and the discussion to follow include:

  • What sorts of super-intelligence might develop, and what are the different risks associated with these?
  • In assessing risks, how important are self-awareness and sentience vis-à-vis sheer intelligence?
  • What type of sneaky short-term strategies might superintelligence adopt?
  • What timeframe are we talking about?
  • What are the likely human sources of superintelligence and what are the risk implications? (I believe these are hugely significant)
  • Can we learn from academia, industry, science fiction writers and producers and from social science?
  • What mechanisms and approaches are possible to minimise the risks?
  • How might the global community (i.e. the human race) respond and develop strategies to protect future generations? What precedents are there, and what is different about superintelligence that is particularly concerning?

This is not a session designed for the presenter to demonstrate any particular knowledge or to provide any answers.

Following the presentation, which is a “starter for 10”, there will be plenty of time for questions and discussion and perhaps to cover off some global issues which don’t seem to have been well covered in the literature to date. So be prepared to pitch in!

David Miller, Vantage Consulting director, will be leading a presentation and discussion on the global risks arising from super-intelligence as part of the Hutt City Council’s STEMM Festival, Thursday May 18 from 5.30-6.30pm at The Dowse Art Museum.