Is Dumb AI Making Us Dumber?
By Ted Smillie on Monday, October 24th, 2016
Features in QESP Newsletter
Volume 28, Issue 10 - ISSN 1325-2070
Social media algorithms have come under scrutiny recently as examples of unintelligent AI.
A September 14, 2016 article in The Conversation, “Facebook’s algorithms give it more editorial responsibility – not less”, reviews the unfortunate consequences of a move to replace humans with AI: “After accusations of political bias, Facebook recently decided to fire the human editors of its ‘trending’ news section and replace them entirely with algorithms. Yet this software was less-than-perfect and Facebook engineers now check the automatically calculated trending stories after several inaccurate or sexually explicit posts were promoted”. Author Ansgar Koene, Senior Research Fellow, Horizon Digital Economy, UnBias, University of Nottingham, explains why “the idea that algorithms are ‘neutral’ and rely on logical decisions that are inherently free from prejudice is deeply flawed.”
The above is one of a series of articles in The Conversation on The social media revolution, which looks at how social media “has changed the media, politics, health, education and the law.” Another is the October 20, 2016 article, “Algorithms might be everywhere, but like us, they’re deeply flawed”, by Jonathan Albright, Assistant Professor, Elon University. Here, the message is that “algorithms are important because they are the key process in artificial intelligence: decision-making… Yet, as many researchers argue, due to their design by humans, algorithms can never be neutral.” The author gives examples to show that “As algorithms ‘learn’ more about us through our financial data, location history, biometric features, voice patterns, social networks, stored memories, and ‘smart home’ devices, we move towards a reality constructed by imperfect machine learning systems which try to understand us through other people’s expectations and sets of ‘rules’.”
The above article also gives the interesting historic example of the first AI to beat a chess world champion. This was in May 1997, when the IBM supercomputer Deep Blue beat Garry Kasparov, who blamed the defeat on a single move made by the IBM machine. Kasparov and other chess masters thought the move was too sophisticated for a computer, suggesting there had been some sort of human intervention during the game. However, a September 2012 Wired article, “Did a Computer Bug Help Deep Blue Beat Kasparov?”, reveals that “Fifteen years later, one of Big Blue’s designers says the move was the result of a bug in Deep Blue’s software.”
On a brighter note, an October 6, 2016 ScienceDaily article from the University of Nottingham, “Online emancipation: Protecting users from algorithmic bias”, describes a joint research project “to establish a system of auditability and build trust and transparency when it comes to internet use.” Researchers from the University of Nottingham, the University of Oxford and the University of Edinburgh will study the concerns and perspectives of internet users with the aim of drawing up policy recommendations, ethical guidelines and a ‘fairness toolkit’. Input to the research project will include the above article in The Conversation, “Facebook’s algorithms give it more editorial responsibility – not less”.
Tags: algorithmic bias, Deep Blue, Garry Kasparov, Social Media, Social media algorithms, The Conversation, The social media revolution, UnBias, unintelligent AI