By Robert Hickson 19/01/2017


The biggest futurist thing for me this last year has been the progress and hype around artificial intelligence.

Three quotes encapsulate the current state of artificial intelligence, and aspects of its future over the short term.

Stephen Hawking made a Dickensian AI gambit with his quote:

 Artificial Intelligence is “either the best, or the worst thing, ever to happen to humanity”

That just about sums up the whole field – no one really knows what will happen, but many gravitate toward one extreme or the other.

A more accurate call on the current environment came from Maciej Cegłowski:

 Machine learning is like a deep-fat fryer. If you’ve never deep-fried something before, you think to yourself: “This is amazing! I bet this would work on anything!”

As with many new technologies, we can exhibit irrational exuberance over how we can apply it.

Margaret Mitchell, from Microsoft, noted that AI has a

 “sea of dudes” problem

Too many (white) guys are involved, which influences the approaches and types of questions being asked.

AI is ultimately less about the machines that take over some of our cognitive tasks and more about our own mindset. How we approach designing and using systems like machine learning to help solve more complex or cognitive problems will be what matters most.

Mindset

AI isn’t a deep-fat fryer, or a hammer, that suits every problem. Careful thought needs to be given to which problems the techniques are appropriate for, and to the information they will require. At the moment, AI can only handle some types of well-defined problems.

In addition, realising the potential power of AI approaches may often require thinking about an old problem in a new way, particularly as we start to see more applications that don’t simply replace human tasks (such as chatbots) but work to augment human capabilities. Just adding AI to an existing process won’t necessarily be a good strategy.

 AI works well with probabilities, not possibilities.

For AI to be effective the problem must be clear, and the outcomes need to be predictable (like keeping a car on the road and not hitting other objects, identifying the best moves in a game, and analysing a set of images to find a particular object or anomaly).
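As a rough illustration of what “working with probabilities” looks like in practice, here is a minimal sketch (assuming Python and the scikit-learn library, neither of which is discussed in this post, and using synthetic data) of a classifier that returns a probability for each outcome rather than a definite yes or no:

```python
# Minimal sketch: a classifier gives probabilities, not certainties.
# Assumes Python with scikit-learn installed; the data is synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# predict_proba returns P(class) for each example, e.g. [0.08, 0.92],
# rather than a flat yes/no answer.
print(model.predict_proba(X_test[:3]))
```

A downstream system, or a human, can then decide how confident the prediction needs to be before acting on it.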

Ill-defined problems – where it’s not clear what the critical question is, how to approach answering it, or what a solution would look like – aren’t suitable for machines at the moment.

Things like social inequality, water allocation, and responses to the effects of climate change involve getting groups of people (often with conflicting or different interests) to sit down, help define the issue, and then contest, concede and collaborate their way to agreement on what the core issues are and how they could be addressed.

You need to have an understanding of the methodology and assumptions in your AI, and have confidence that it makes intuitive sense. AI systems shouldn’t be viewed or used as black boxes, whose workings are unknowable, and whose answers are unchallengeable.
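To make the “no black boxes” point concrete, here is a small sketch (again assuming Python and scikit-learn; the feature names and data are entirely hypothetical) of inspecting what a simple model has actually learned, so its assumptions can be sanity-checked:

```python
# Sketch: inspect what a simple model has learned, rather than treating it
# as an unchallengeable black box. Feature names and data are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "age", "prior_defaults"]  # hypothetical inputs
X = np.array([[40, 25, 0], [80, 45, 1], [20, 30, 2], [60, 50, 0]], dtype=float)
y = np.array([1, 1, 0, 1])  # hypothetical outcomes

model = LogisticRegression(max_iter=1000).fit(X, y)

# The coefficients show which inputs push the prediction up or down,
# so the assumptions baked into the model can be challenged.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```

More complex models take more work to interrogate, but the principle is the same: if you can’t explain roughly why the system gives the answers it does, you shouldn’t treat those answers as unchallengeable.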

Having access to the right information to train the algorithm(s), and then to find potential solutions, is also critical. As with “big data” generally, the quality and relevance of the information is usually of more importance than simply the quantity. “Garbage In, Garbage Out” doesn’t disappear with smarter machines.
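A small sketch of “Garbage In, Garbage Out” in action (synthetic data, scikit-learn assumed): the same model trained on deliberately corrupted labels will typically score worse on a clean test set, even though the quantity of training data is unchanged:

```python
# Sketch: label noise ("garbage in") hurts a model even when the amount
# of data stays the same. Synthetic data; scikit-learn assumed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Corrupt 30% of the training labels to simulate poor-quality input data.
rng = np.random.default_rng(1)
noisy = y_train.copy()
flip = rng.random(len(noisy)) < 0.3
noisy[flip] = 1 - noisy[flip]

clean_acc = RandomForestClassifier(random_state=1).fit(X_train, y_train).score(X_test, y_test)
noisy_acc = RandomForestClassifier(random_state=1).fit(X_train, noisy).score(X_test, y_test)
print(f"clean labels: {clean_acc:.2f}, noisy labels: {noisy_acc:.2f}")
```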

Having both a clear and solvable problem and access to the relevant data is the opportunity space of AI.

 

The opportunity space for artificial intelligence

Another factor to consciously address is our risk appetite for relying on software. What’s the consequence if the AI prediction or solution is wrong? Possibly not so significant if it is just a chatbot providing non-essential information, but more so if it is informing criminal sentencing or medical diagnoses, or controlling a vehicle or a city’s power supply.

There is also the risk that being over-reliant on automated systems leads us into learned helplessness, which can have trivial consequences (taking the wrong road) or tragic ones (crashing an airliner). We will need to think more about when such “helplessness” becomes dangerous, and how we can reduce the risks.

The broader concept of mindset matters too. Many writers on AI note the importance of education – getting more students to take science, technology, engineering and mathematics so they learn academic and technical skills and have better job prospects.

But if AI and its technological siblings have the profound impact on work and life that many anticipate, then more will need to be done to help both young and old develop a growth mindset, so they can adapt to rapid change and inform the nature of acceptable changes.

As AI advances, so does the terminology that we use to describe it – machine learning, or machine intelligence, or cognitive computing, or intelligence augmentation. We’ll keep refining our definitions of what intelligence is. But we’ll also keep developing our attitudes to where and how we can best use cognitive technologies to improve the lives of most of us.

 Our mindset, not a machine’s software, will be the best or worst thing to happen to us.

The Sciblogs Horizon Scan

This post, originally published 22nd Dec 2016, has been re-posted as part of the Sciblogs Horizon Scan summer series, featuring posts from New Zealand researchers exploring what the future holds across a range of fields.


2 Responses to “Thinking about machines that “Think””

  • Hi, I am an atheist. As AI progresses, what will happen to “God”? Will God somehow appear in the coding? What exposure should AI be given to the concept of God?

    • Based on current approaches, an AI may undertake a probabilistic assessment of whether a god, or gods, exist based on the material provided to it, perhaps like the Catholic Church in its process of beatification. We could probably do that with the IBM Watson system now, but that wouldn’t be very meaningful, since one person’s miracle (or god) is someone else’s coincidence or natural phenomenon.

      Also based on current work, it would seem straightforward to train an AI system to identify religious or godly images, but that wouldn’t mean it understands the concept of “god”, just as current systems don’t understand the concept of “cat”.

      How would/could an atheist recognise a god appearing in the code?

      Some think that we’ll start looking on “super intelligent” AI as a new god.

      Some are starting to think that the pursuit of “artificial super intelligence” is beginning to look like a religion – http://blogs.gartner.com/magnus-revang/2016/12/12/artificial-super-intelligence-is-a-modern-day-religion/
