Deep Learning

Our June 2017 article How Dangerous Is Deep Learning? reviewed the (sometimes acrimonious) deep learning arguments as “part of an ongoing controversy which engages some of the best minds on the planet”. New evidence is offering further insights for and against the proposition that “Artificial Intelligence (AI) will soon be able to set its own agenda, leading to AI control of the human race”.

A 20 Dec 2017 WIRED UK article by Sian Bradley, All the creepy, crazy and amazing things that happened in AI in 2017, gives a good overview, noting:

“It’s been a busy year in artificial intelligence. Kicking off with DeepMind beating a world champion at his own game and reaching an impressive crescendo with a realistic humanoid that wants a baby, this was the year that AI breakthroughs – and our fears about them – went mainstream.”

The article lists the “creepy, crazy and amazing things” under various headings, starting with:

AI became more human

“On October 11, 2017, Sophia was introduced to the United Nations. Fame beckoned. That same month, she was granted citizenship in Saudi Arabia; becoming the first robot to have a nationality and simultaneously raised a lot of eyebrows, and even more questions on robot rights. It was also somewhat ironic that a robot was granted rights in a country where women were only recently allowed to drive.”

The article notes that “Whilst she is a fascinating development in AI technology, Sophia is mostly evidence of our ability to laugh at the possibility of robots overthrowing us. She even got into a Twitter feud with Elon Musk.”

Other headings were:

AI was plagued with human bias

AI outdid humans… on several occasions

All the big tech companies wanted a taste

Alexa, are you self aware?

AI got creative

It got harder to tell what’s real

AI got involved in law. And healthcare

So where do humans fit into this AI future?

For further details, see the full article at http://www.wired.co.uk/article/what-happened-in-ai-in-2017

However, although in 2017 “Sophia is mostly evidence of our ability to laugh at the possibility of robots overthrowing us”, that possibility may not be so laughable in 2018, according to a 10 January 2018 ScienceDaily article from the U.S. Department of Energy’s Oak Ridge National Laboratory, Accelerating design, training of deep learning networks. That article reports that a team of ORNL researchers “has married artificial intelligence and high-performance computing to achieve a peak speed of 20 petaflops (20 thousand trillion calculations per second) in the generation and training of deep learning networks on the laboratory’s Titan supercomputer.”

The ORNL research has already been used to tackle a current scientific data challenge, achieving in only 24 hours results that could easily have taken scientists months to accomplish. The research team expects even further improvement following the upgrade to the Oak Ridge Leadership Computing Facility’s (OLCF’s) next leadership computing system, Summit, which will come online this year.
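The ScienceDaily article does not spell out the underlying method, but automated “generation and training” of networks is typically done with an evolutionary search: many candidate network designs are trained in parallel across compute nodes, the best performers are kept, and mutated variants are tried in the next round. Below is a minimal, purely illustrative sketch of that idea in Python. It is not the ORNL code, and the stand-in fitness function replaces what would really be an expensive training run on a supercomputer node.

```python
import random

# Toy evolutionary search over network hyperparameters (illustrative only).
SEARCH_SPACE = {
    "layers":        [2, 4, 6, 8],
    "filters":       [16, 32, 64, 128],
    "kernel_size":   [3, 5, 7],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def random_network():
    # A candidate design is just one choice per hyperparameter.
    return {key: random.choice(values) for key, values in SEARCH_SPACE.items()}

def fitness(net):
    # Stand-in for "train this network and return its validation accuracy".
    # In a real HPC setting, this is the expensive step farmed out to nodes.
    return random.random()

def mutate(net):
    # Copy a parent and change one hyperparameter at random.
    child = dict(net)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child

def evolve(pop_size=20, generations=10, survivors=5):
    population = [random_network() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[:survivors]  # keep the best designs
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - survivors)]
    return max(population, key=fitness)

print(evolve())
```

In the real setting, evaluating each candidate’s fitness is where the petaflops go: every design can be trained on its own node, which is why a machine like Titan can sift through thousands of designs at once.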

While this is all good news for the ORNL scientists, there is some unease in other quarters. We touched on some of the ethical concerns in the December 2017 article Engineers, philosophers and sociologists release ethical design guidelines for future technology. Now a December 29, 2017 “Long Read” article in The Conversation, AI is learning from our encounters with nature – and that’s a concern, by Andrew Robinson, discusses “a bigger concern – whether such apps change what it means to be human.”

This is perhaps the most persuasive of the recent contributions to the deep learning debate. In a thoughtful, wide-ranging “Long Read” article, the author gives examples of how AI has been used to create massive databases. These include the Global Biodiversity Information Facility (GBIF, an online database run out of Copenhagen), which overall holds more than 850 million observations of over a million different species of flora and fauna. Other examples include “more than 42,000 recorded sightings from more than 5,000 participants using WhaleShark.org” and, “on a bigger scale, the millions of bird observations generated through an app called eBird have allowed us to visualise the precise migratory routes of over a hundred different bird species.”
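These databases are open to programmatic queries, which is part of what makes them such attractive training material. As a small illustration (not from the article), the sketch below asks GBIF’s public occurrence API how many recorded observations it holds for the whale shark; the endpoint is part of GBIF’s documented REST API at api.gbif.org, though the exact response field used here is an assumption based on the current version of that API.

```python
import requests

def occurrence_count(species):
    # Query GBIF's public occurrence search endpoint; limit=0 asks for
    # just the total count, with no individual records returned.
    resp = requests.get(
        "https://api.gbif.org/v1/occurrence/search",
        params={"scientificName": species, "limit": 0},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["count"]  # total matching observations (assumed field)

print(occurrence_count("Rhincodon typus"))  # the whale shark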

However, the article warns that “At the same time, in an outcome largely unforeseen by its early collectors, info-engineers are using the data to train artificial intelligence (AI), particularly computer vision apps to help us interpret the plants and animals we see around us. And these tools are raising some interesting, sometimes troubling questions.”
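The typical recipe behind such an identification app is transfer learning: take an image classifier pretrained on a large general dataset and retrain only its final layer on labelled photos of species. The sketch below shows that pattern with PyTorch/torchvision; it is a generic illustration, not the code behind any particular app, and NUM_SPECIES is a hypothetical figure.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_SPECIES = 1000  # hypothetical number of species in the training set

# Start from a classifier pretrained on general imagery.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False  # freeze the pretrained visual features

# Replace the final layer with a new head that predicts species labels.
model.fc = nn.Linear(model.fc.in_features, NUM_SPECIES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A training step on a batch of (images, species_labels) would then be:
#   logits = model(images)
#   loss = loss_fn(logits, species_labels)
#   loss.backward(); optimizer.step(); optimizer.zero_grad()
```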

The troubling questions include some we may have heard before, e.g. questions about ethics, data bias, ownership, data appropriation and human agency. Others are newer, e.g. the question of empire building, “where the only meaningful descriptions of nature existed in empire-approved systems of classified truth: museums, libraries, the biology labs of universities.”

The author stresses: “But there are also questions about these AI tools interfering with our ability – perhaps a human need – to easily transfer our unique nature expertise to, or gain expertise from, other people.”

Some of these questions have been addressed by other participants in the debate, e.g.:

“Everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone’s list. Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last.” Stephen Hawking

“Pattern recognition and association make up the core of our thought. These activities involve millions of operations carried out in parallel, outside the field of our consciousness. If AI appeared to hit a brick wall after a few quick victories, it did so owing to its inability to emulate these processes.” Daniel Crevier

“By their very nature, heuristic shortcuts will produce biases, and that is true for both humans and artificial intelligence, but the heuristics of AI are not necessarily the human ones.” Daniel Kahneman

“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. I mean with artificial intelligence we’re summoning the demon.” Elon Musk

All would agree with the author’s summary: “One thing’s for sure: when it comes to developing AI, there’s an urgent need for more thinking, more consideration, a broader diversity of viewpoints. In developing AI tools, can we program them to value the creative act of human perception – the authentic, the spontaneous, the unpredictable?”

For further details, see the full article at https://theconversation.cmail20.com/t/r-l-jrihkihy-ihihuyyuhh-k/.

