Rise of the machines: how computers could control our lives

By Guest Author 15/03/2012


by Professor Tony Walsh, research director at NICTA (National ICT Australia Ltd), Australia’s Information and Communications Technology Research Centre of Excellence

Predicting the future is a risky business. If it wasn’t, we’d all be very wealthy by now. The Danish physicist Niels Bohr famously opined: ’Prediction is very difficult, especially about the future’.

Despite this, I confidently predict that machines will come to run our lives. And I’m not alone in this view. US mathematician Claude Shannon, one of the fathers of computation, wrote: ’I visualize a time when we will be to robots what dogs are to humans, and I’m rooting for the machines.’

And physicist Stephen Hawking, who is never short of a quote on life, the universe and everything, has said: ’Unless mankind redesigns itself by changing our DNA through altering our genetic makeup, computer-generated robots will take over our world’.

So how can we be so sure? Well, in a sense, it’s already happened. Computers are in charge of many aspects of our lives and it’s probably too late to turn them off.

Last month, medical bills in Australia couldn’t be paid. The cause? Computer software in the Australian Health Industry Claims and Payments Service (HICAPS) system that didn’t know about the leap day.
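The exact HICAPS fault was never spelled out publicly, but leap-day failures usually come from date logic that assumes 29 February doesn’t exist. A hypothetical sketch in Python of that class of bug (not the actual HICAPS code):

    from datetime import date

    def naive_one_year_later(d: date) -> date:
        # Buggy: assumes the same month and day exist next year.
        return date(d.year + 1, d.month, d.day)

    def safe_one_year_later(d: date) -> date:
        # Clamps 29 February to 28 February when the next year is not a leap year.
        try:
            return date(d.year + 1, d.month, d.day)
        except ValueError:
            return date(d.year + 1, d.month, 28)

    leap_day = date(2012, 2, 29)
    print(safe_one_year_later(leap_day))            # 2013-02-28
    try:
        naive_one_year_later(leap_day)
    except ValueError as err:
        print("naive version fails on a leap day:", err)   # day is out of range for month

Anything downstream of an unhandled error like that, such as a payment run, simply stops.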

In November 2009, the entire air traffic control system of the United States crashed, causing chaos to travellers. The cause? The failure of a single router board.

And in August 2003, a power cut in the United States put 55 million people in the dark. The cause? Faulty software on a single computer that failed to detect what should have been a harmless local outage.

And there are many more examples. When computers fail, we see just how dependent we have become on them.

Historians will probably look back from the 22nd century and observe that the rise of machines became inevitable the day we first picked up a rock and started using it as a tool. Since then, we’ve been using machines to amplify our physical and, more recently, our mental capabilities.

Computers are now embedded into almost every aspect of our lives. Sometimes they’re even making life and death decisions.

Given these incidents (and others), it is unsurprising there is concern in some quarters about the risk of giving up control to machines. As a scientist, I welcome this discussion.

Roman Yampolskiy, a computer scientist at the University of Louisville in Kentucky, recently joined this debate with an article in the March issue of the Journal of Consciousness Studies (yes, such a scholarly tome does exist).

Yampolskiy proposed that any artificial intelligence we develop should be confined within a secure computing environment. In practical terms, this could mean severely limiting the ability of the AI to interact with the outside world. The AI would live in a virtual ’prison’.

Confining AI in this way would prevent harmful effects, since the computer would not be able to take direct actions, only offer advice. However, it would still allow humanity to benefit from the AI’s super-intelligence.

This might sound like a good idea, but there are many arguments against this strategy.


First, it’s probably not possible. Where mankind has faced other, similar threats, confinement has been a controversial option.

For instance, while the smallpox virus is now confined to just two laboratories around the world, many believe this leaves us exposed to bioterrorist threats.

And cinema is full of examples where artificial intelligence manages to escape any such controls — think of films such as Blade Runner, The Matrix series and The Terminator series. Sure, these are just films, but fiction has a terrible habit of becoming fact. Our imaginations are often the best tool we have for predicting the future.

Second, confining AI is not desirable. Artificial intelligence can help us tackle many of the environmental, financial and other problems facing society today. This just won’t be possible if we isolate machines. If you isolate a child, they will struggle to learn and develop intelligence.

Many scientists, myself included, believe intelligence doesn’t exist in isolation, but emerges from our interaction with the ever-changing world.

Third, confining AI creates a false sense of security. Isaac Asimov had the right idea here: we need to ensure the DNA of any machine is designed to prevent harm. Asimov’s First Law of Robotics — which appeared in his 1942 short story, Runaround — states:

’A robot may not injure a human being or, through inaction, allow a human being to come to harm.’

Like all technologies, computers offer immense potential for good and for bad. It is our duty to properly train the next generation of computer scientists so ’good’ is programmed into the very DNA of future computers.

This article was originally published at The Conversation.
Read the original article.


3 Responses to “Rise of the machines: how computers could control our lives”

  • Tony Walsh, an interesting post, and I agree that over time we humans will become over-reliant on computers to do everything for us. The danger is that we may end up subservient to machines. Machines may come to view us as their pets, much as we view cats.

    Former Sun Microsystems chief scientist (and co-founder) Bill Joy wrote an article about a decade ago in which he highlighted a similar scenario, a future in which machines come to dominate. His lengthy article in Wired magazine is linked below:

    Why the future doesn’t need us.

    The Patriot missile disaster in Saudi Arabia was attributed to rounding errors in the numerical computation of the tracking algorithm. There have been quite a few numerical-computation errors that have led to disasters.

    Some disasters attributable to bad numerical computing

    This is a type of error I face all the time when developing my own numerical code. It is very hard to track down exactly which while-loop or for-loop the rounding or overflow/underflow error occurs in, because when the input to an algorithm is, say, a matrix of 5,000 rows by 3,000 columns of numerical values, it is practically impossible to pinpoint the exact lines of code where the problem arises. The original [5000 x 3000] input matrix spawns many temporary matrices inside the routine: it gets split into sub-matrices, transposed, inverted (as may any of the temporaries), and multiplied, added, subtracted or divided with other newly created temporaries, and so forth.

    If one had to debug this by single-stepping to track the error, it could take a week or even a month, because the internal data explodes (it can easily exceed a million double-precision floating-point values, even for a small 5000 x 3000 dataset). The difference between, say, 5.125321454442247 in one iteration and 5.125321454442241 in the next is so small that no debugging tool can detect where the change (i.e. the potential rounding or overflow/underflow error) occurred. To the best of my knowledge there is no development tool, commercial or open source, specifically tailored to numerical algorithm development. Any software application that relies on numerical libraries is at the mercy of the numerical algorithm developer: if the developer doesn’t account for rounding and overflow/underflow errors during implementation, then God help the users of the software, because disasters may arise. A small sketch of how this kind of drift accumulates is given below.
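    To make the drift concrete, here is a minimal, hypothetical sketch in Python (the real Patriot system used coarse fixed-point hardware arithmetic, not Python; the constants below simply follow the widely cited post-mortem analyses of the bug). The value 0.1 has no exact binary representation, so a clock that adds a chopped 0.1 every tenth of a second slowly diverges from true time:

        # Hypothetical sketch of Patriot-style clock drift (not the actual code).
        FRAC_BITS = 23                             # coarse fixed-point fraction, per the
                                                   # widely cited post-mortem analyses
        step = int(0.1 * 2**FRAC_BITS)             # "0.1 s" chopped to FRAC_BITS bits
        ticks = 100 * 60 * 60 * 10                 # 100 hours of uptime, in tenths of a second

        clock = (step * ticks) / 2**FRAC_BITS      # what the register reports after 100 hours
        true_time = ticks / 10                     # 360,000 seconds really elapsed
        print(f"clock drift after 100 hours: {true_time - clock:.3f} seconds")   # ~0.34 s

        # Each tick silently loses only ~9.5e-8 s, so no single addition looks
        # wrong in a debugger; at Scud closing speeds the accumulated drift
        # corresponds to a tracking error of hundreds of metres.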

  • Tony Walsh said…
    And cinema is full of examples where artificial intelligence manages to escape any such controls – think of films such as Blade Runner, The Matrix series and The Terminator series.

    Some things depicted in science-fiction movies are actual realities, but fantasy is mingled in as well. The neural-network chip depicted in the Terminator movies is real in the sense of a self-learning algorithm that can run on an embedded device; such neural-net chips have been available in the market since around the time the first Terminator film came out in the 1980s. However, the idea of a machine with a neural net that can think like a human and dominate humans is far-fetched fantasy.

  • Tony Walsh said…
    Roman Yampolskiy, a computer scientist at the University of Louisville in Kentucky, recently joined this debate with an article in the March issue of the Journal of Consciousness Studies (yes, such a scholarly tome does exist).

    I have to declare up front that my knowledge of neuroscience is basically zero, but I often come across research publications in neuroscience (by physicists, computer scientists, mathematicians and statisticians) that apply a theory originating in statistical mechanics in physics (as in complex systems theory, CST) to neuroscience.

    This theory is called SOC (self-organized criticality) and was first proposed in the literature (Physical Review Letters) in the late 80s; a toy sketch of its canonical model is given after the reference list below.

    From my limited understanding of what I’ve read about consciousness in neuroscience, I believe physicists are on the right track in their interpretation of the mechanics, or physicality, of the conscious process, because it lands right on the doorstep of complexity theory.

    As a trained physicist myself, I don’t want to be dismissive of the work of neuroscientists, but I genuinely believe they should look into the work of physicists in the domain of complexity theory and try to accommodate ideas from there, because, after all, the brain is made up of physical components that interact with each other dynamically (both linearly and non-linearly). Having said that, I have come across a few publications from the neuroscience community that adopt physicists’ complex systems theory in their research.

    There are more in the literature, but the following are just a few examples. Only the link for paper #1 is given here as downloadable; the rest are available too, but you will have to Google for them, since only two links can be posted in a single blog comment.

    #1) Consciousness Viewed in the Framework of Brain Phase Space Dynamics, Criticality, and the Re-normalization Group

    #2) Metastability, Criticality and Phase Transitions in brain and its model (Essay)

    #3) Scaling and self-organized criticality in proteins: Lysozyme c (Physical Review E 80, 051916)

    #4) Ising-like dynamics in large scale brain functional networks (Physical Review E 79, 061922)

    #5) Critical brain networks (Physica A: Statistical Mechanics and its Applications – Volume 391, Issue 12, 2004)

    The list above is not exhaustive; there are numerous publications across the scientific literature applying complexity-theory ideas from physics to neuroscience.
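    For readers unfamiliar with SOC, the canonical toy model is the Bak–Tang–Wiesenfeld sandpile from that late-80s Physical Review Letters work. A minimal sketch in Python, purely illustrative and not taken from any of the papers above, gives a feel for what "self-organized" and "critical" mean: without any parameter tuning, the pile drives itself to a state where one extra grain can trigger avalanches of wildly different sizes.

        import random

        # Bak-Tang-Wiesenfeld sandpile: drop grains at random; any site holding
        # 4 or more grains topples, giving one grain to each neighbour, which
        # can trigger further topplings (an "avalanche"). Grains falling off
        # the edge of the lattice are lost.
        N = 20
        grid = [[0] * N for _ in range(N)]

        def drop_grain():
            """Add one grain at a random site, relax the pile, return the avalanche size."""
            i, j = random.randrange(N), random.randrange(N)
            grid[i][j] += 1
            unstable = [(i, j)]
            avalanche = 0
            while unstable:
                x, y = unstable.pop()
                if grid[x][y] < 4:
                    continue
                grid[x][y] -= 4
                avalanche += 1
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < N and 0 <= ny < N:
                        grid[nx][ny] += 1
                        if grid[nx][ny] >= 4:
                            unstable.append((nx, ny))
            return avalanche

        sizes = [drop_grain() for _ in range(20000)]
        print("largest avalanche:", max(sizes))
        print("avalanches bigger than 50 topplings:", sum(s > 50 for s in sizes))

    After an initial transient the avalanche-size distribution is heavy-tailed, which is the kind of signature the papers above look for in recordings of brain activity.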