Newsletter Volume 30 Issue 1, Jan 2018
From The Editor
“Most computer bugs go unnoticed by the general public — as opposed to our tech milieu. An update fixes the problem and the typical user is none the wiser. But two recent security vulnerabilities, Meltdown and Spectre, were momentous enough for headlines to spill outside of the tech world.”
The above quote is from Jean-Louis Gassée’s Jan 14, 2018 Monday Note, a “5 minute read” which reveals that the Meltdown and Spectre vulnerabilities were understood more than 20 years ago. The author gives a clear explanation of the problem and warns that “A malicious designer can hide undocumented and virtually undetectable functions inside their chip. This would amount to a mole inside any home or industrial systems that uses devices built with the diabolical processor.” See below, Beyond Spectre & Meltdown CPU Bugs. (Other interesting Monday Note contributions include a very good Jan 2 “67 minute read” story from Code Like A Girl which pulls no punches; see The Divine Comedy of the Tech Sisterhood.) Also, RSA Conference has provided a January 10, 2018 Spectre and Meltdown podcast featuring Paul Kocher, the researcher credited with co-discovering Spectre and a co-author of the Meltdown research paper.
Our September 2017 article, Democracy and the 15 Hour Week, discussed John Maynard Keynes’s 1930 prediction that, thanks to the power of compound interest and to technological advances, our grandchildren would be working a 15-hour week by 2030. Now a January 2018 report from The McKell Institute shows that “in recent years, the fair go has been under threat, particularly as wage and income inequality has widened, leaving more Australians behind.” See below, Mapping Opportunity: a national index on wages and income, for a reprint of the Executive Summary and a link to the full Report.
Our June 2017 article How Dangerous Is Deep Learning? reviewed the (sometimes acrimonious) deep learning arguments as “part of an ongoing controversy which engages some of the best minds on the planet”. New evidence is offering further insights for and against the proposition that “Artificial Intelligence (AI) will soon be able to set its own agenda, leading to AI control of the human race”. See below, More Dangers of Deep Learning.
“Multifaceted design of the mantis shrimp club is inspiring advanced composite materials for airplanes and football helmets”
Yes, another example of AI at work. The above quote is from a 16 January 2018 ScienceDaily article from the University of California, Riverside, which notes: “We believe the role of the fiber-reinforced striated region in the smasher’s club is much like the hand wrap used by boxers when they fight: to compress the club and prevent catastrophic cracking. Together, the impact, periodic and striated regions form a club of incredible strength, durability and impact resistance.” See below, How mantis shrimp pack the meanest punch.
Articles in the current Issue cover:
“As this flaw is generally difficult to detect with normal analysis techniques, we have developed a detection tool that is semi-automated and easy to operate. This will help developers and penetration testers ensure their apps are secure against this attack.”
“Interestingly, some research for other purposes could be applied to prevent cybercrime. An example is a 30 November 2017 ScienceDaily article from Columbia University School of Engineering and Applied Science: New software can verify someone’s identity by their DNA in minutes.”
“One thing’s for sure: when it comes to developing AI, there’s an urgent need for more thinking, more consideration, a broader diversity of viewpoints. In developing AI tools, can we program them to value the creative act of human perception – the authentic, the spontaneous, the unpredictable?”
“Interestingly, aerodynamic cycling helmets and golf clubs already incorporate this design, suggesting that nature was one step ahead of humans in achieving high performance structures. The natural world can provide many more design cues that will enable us to develop high performance synthetic materials.”
A 9 Jan 2018 blog by Clem Colman from the Australian Government Digital Transformation Agency (DTA) debunks some myths and gives some tips on running a modern security operation centre (SOC). See Shields up! And other modern security operation centre (SOC) myths.
We are planning ACOSM18 as a QESP/ACS event, to be scheduled in April after Easter. The plan is for an evening event (5.30 for 6.00): a keynote, two speakers and a Forum till 7.30, then drinks and finger food till 8.00. Further details will be provided in the February 2018 Newsletter.
Quote of the Day
The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast—it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year timeframe. 10 years at most. – Elon Musk
Quote from Yesteryear
I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. – Alan Turing.