How Dangerous Is Deep Learning?
By Ted Smillie on Friday, June 23rd, 2017
Features in QESP Newsletter
Volume 29, Issue 6 - ISSN 1325-2070
Recent and ongoing debate on the dangers of deep learning tends to focus on whether Artificial Intelligence (AI) will soon be able to set its own agenda, leading to AI control of the human race. However, lessons from the computer-based disasters of the past 50 years suggest that the real danger is human error.
“There was a Wired article titled ‘Deep Learning isn’t a Dangerous Genie, it is Just Math’. This is really the most vacuous statement I’ve heard!”
The above argument between WIRED and Intuition Machine dates back to 2016 but is part of an ongoing controversy which engages some of the best minds on the planet. In Why Stephen Hawking and Bill Gates Are Terrified of Artificial Intelligence, the WorldPost/HuffPost explains the need to add the safeguards which are currently absent in Deep Learning. The podcasts from The Conversation, No problem too big #1: Artificial intelligence and killer robots, are based on the question “Imagine a world where artificial intelligence is in control and humans are on the brink of extinction. What went wrong? What could we have done?”
The questions What went wrong? What could we have done? were being asked and addressed as part of the Systems Theory work on intercontinental ballistic missile (ICBM) systems of the 1950s/1960s. Nancy G. Leveson, Professor of Aeronautics and Astronautics and Engineering Systems at MIT, is an acknowledged leader in the field of safety engineering and has worked to improve safety in nearly every industry over the past thirty years. In a 2011 MIT presentation, Engineering a Safer and More Secure World, Professor Leveson gives examples of a range of aerospace disasters and shows how they could have been prevented by using the System-Theoretic Accident Model and Processes (STAMP). The presentation is in terms a layman can understand, including many photos, graphics and cartoons. It identifies changes since the 1980s which have caused failures, including:
- Introduction of computer control
- Exponential increases in complexity
- New technology
- Changes in human roles
The presentation also identifies a number of STAMP Uses Beyond Traditional System Safety:
- Quality
- Producibility (of aircraft)
- Nuclear security
- Banking and finance
- Engineering process optimization
- Organizational culture
- Workplace safety

Clearly, STAMP can also contribute to the AI discussion.
The bad news is that the exponential increases in complexity are also increasing the risk of human error. A June 1, 2017 ScienceDaily article from Rice University, Scientists slash computations for deep learning, carries the subtitle “‘Hashing’ can eliminate more than 95 percent of computations”. Hashing is a widely used technique for rapid data lookup, and the researchers give details of an approach which “could reduce computation by as much as 95 percent and still be within 1 percent of the accuracy obtained with standard approaches.”
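As a rough illustration of where the savings come from: the Rice approach uses locality-sensitive hashing to pick out, for each input, only the handful of neurons in a layer that are likely to produce large activations, and skips the rest. The sketch below is a minimal single-table version of that idea using random-hyperplane (SimHash) hashing in NumPy; all names, sizes and parameters are illustrative assumptions, not the researchers' actual implementation, which uses multiple hash tables to keep accuracy within the quoted 1 percent.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

# Illustrative sizes: input dimension, layer width, hash bits.
d, n_neurons, n_bits = 64, 1024, 6

W = rng.standard_normal((n_neurons, d))    # one weight vector per neuron
planes = rng.standard_normal((n_bits, d))  # random hyperplanes shared by all hashes

def simhash(v):
    """Map a vector to an n_bits-bit code from the signs of random projections."""
    bits = (planes @ v) > 0
    return int(bits @ (1 << np.arange(n_bits)))

# Pre-bucket neurons by the hash of their weight vectors (done once, offline).
buckets = defaultdict(list)
for i in range(n_neurons):
    buckets[simhash(W[i])].append(i)

def sparse_forward(x):
    """Evaluate only the neurons that hash into the same bucket as the input."""
    active = buckets.get(simhash(x), [])
    out = np.zeros(n_neurons)
    if active:  # a single table can miss; real systems probe several tables
        out[active] = np.maximum(W[active] @ x, 0.0)  # ReLU on the selected few
    return out, len(active)

x = rng.standard_normal(d)
out, n_active = sparse_forward(x)
print(f"computed {n_active} of {n_neurons} neurons "
      f"({100 * (1 - n_active / n_neurons):.1f}% of dot products skipped)")
```

With one table and six hash bits, a typical run evaluates only a few percent of the layer’s neurons; whether the resulting approximation error is tolerable is exactly the question raised next.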
So who decides whether 1% inaccuracy is acceptable? A properly implemented STAMP analysis would make exactly that decision explicit.
Tags: Artificial Intelligence, Deep Learning, Intuition Machine, MIT/STAMP, ScienceDaily/Rice University/‘Hashing’, The Conversation, WIRED, WorldPost/HuffPost