By Robert Hickson 14/10/2016

Artificial Intelligence (AI) is everywhere these days, especially in opinion pieces and reports to or from governments. It’s almost like you need IBM Watson Analytics to keep up with it all.

Here’s a sampling from the last week or so.

Venture capitalist Marc Andreessen notes that it’s a new architecture, not just a new feature.

Andrew Ng, chief scientist at Baidu Research, suggests “AI is the new electricity”.

Some of the big internet companies are funding a partnership to “formulate best practices on AI technologies, to advance the public’s understanding of AI”, and to discourage governments from over-regulating the field.

Stanford’s AI100 project has just released its first report describing current developments and what may be feasible in 2030. Its main message is “don’t panic!”. Though one columnist has pointed out that the report panel is made up mostly of those with an interest in AI development, and doesn’t include real social scientists.

The UK Parliament’s Science and Technology Committee is recommending their government look more closely at it:

“It is too soon to set down sector-wide regulations for this nascent field but it is vital that careful scrutiny of the ethical, legal and societal ramifications of artificially intelligent systems begins now.”


The White House has also put out a report, Preparing for the Future of Artificial Intelligence.

And President Obama is the guest editor of November’s edition of Wired magazine, which features an extensive interview with him and MIT’s Joi Ito on AI. President Obama makes the point that he doesn’t want to see AI R&D over-regulated in the basic research stage, but that government should be involved in funding it and in encouraging discussion about where the research may lead.

And just today (Friday) New Zealand’s Institute of Directors and Chapman Tripp released a paper calling for our government to establish a high-level working group on AI to:

  • consider the potential impacts for New Zealand
  • identify major areas of opportunity and concern, and
  • make recommendations about how New Zealand should prepare for AI-driven change.


Plus, I’m giving a talk on AI at Wellington’s Space Place (formerly Carter Observatory) next month. That was booked months ago, so how’s that for excellent timing?

As the recent activity above shows, AI advances have reached a tipping point where industry, academia and governments are starting to get nervous about potential societal effects. Less about AI killing us all, and more about it taking more of “our” jobs.

I’ve written about AI, automation and robotics quite a lot in this blog before, so I won’t go into details again here. But in a future post I’ll set out some of the main thoughts for my upcoming talk, and include an actual futures methodological approach that helps with considering some of the issues that AI raises.

Addressing technology policy challenges

The main thing that struck me from the various governmental reports noted above is that we have been here before, in a technology policy sense. Biotechnology and nanotechnology prompted governments (somewhat belatedly in the former case, more proactively in the latter) and industries to take a similar approach.

Every decade or so a new suite of technologies emerges, prompting similar flurries of activities. Reports are made (“opportunities to warrant continued R&D, some challenges that need to be addressed through further research”, etc), some committees are established, regulations clarified, a few social science research and community engagement projects get set up. Then attention moves elsewhere, until the next technological issue emerges.

That may be a good way of dealing with such issues, and it may give the public some assurance that they are being addressed. But it seems like we always reinvent parts of the same wheel.

AI is raising some important issues that do need to be considered. It’s how we do it that I think needs more attention.

A weakness in the current approach is that we continually focus largely on the technologies in question. This risks missing broader economic and social contexts, and systems issues. For example, you can’t effectively discuss the impacts of AI without having a broader discussion about automation, global trade trends, and other augmentation technologies that will also have important influences on the outcomes under consideration (such as employment, or what it means to be human).

That, obviously, is a much harder issue to look at objectively. But that broader systems thinking is where we need to be heading as the world gets more complex. Otherwise we are just putting more rubber on the same old wheel.

Maybe we could develop an AI agent that identifies the most efficient and effective process to help us with this approach.


Featured image: By Alejandro Zorrilal Cruz [Public domain], via Wikimedia Commons

Responses to “AI, why?”

  • I sigh a little when I think of our government approaching issues like Friendly AI and AI as autonomous agents.