Brain made of circuits

“Democratizing data science is the notion that anyone, with little to no expertise, can do data science if provided ample data and user-friendly analytics tools. Supporting that idea, the new tool ingests datasets and generates sophisticated statistical models typically used by experts to analyze, interpret, and predict underlying patterns in data.” So says a 15 January 2019 ScienceDaily article from the Massachusetts Institute of Technology, “Tool for nonstatisticians automatically generates models that glean insights from complex datasets”. “Researchers are hoping to advance the democratization of data science with a new tool for nonstatisticians that automatically generates models for analyzing raw data.”

The researchers’ high-level goal is “making data science accessible to people who are not experts in statistics.” They note that “Ultimately, the tool addresses a bottleneck in the data science field… There is a widely recognized shortage of people who understand how to model data well. This is a problem in governments, the nonprofit sector, and places where people can’t afford data scientists.” The paper explains in detail how “Bayesian models can be used to ‘forecast’ — predict an unknown value in the dataset — and to uncover patterns in data and relationships between variables”, using the example of how “users can check if a time series dataset like airline traffic volume has seasonal variation just by reading the code.”
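The researchers’ tool synthesizes such models automatically; as a rough, hand-rolled illustration (not the MIT tool’s API, and using synthetic data rather than real airline figures), a period-12 autocorrelation check is one simple way to see whether a monthly series such as airline traffic volume carries yearly seasonal variation:

```python
import math

def autocorr(series, lag):
    """Sample autocorrelation of `series` at the given lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[t] - mean) * (series[t + lag] - mean)
              for t in range(n - lag))
    return cov / var

# Synthetic "monthly traffic"-style series: a baseline of 100 plus a
# yearly (period-12) seasonal swing, over 10 years of months.
monthly = [100 + 30 * math.sin(2 * math.pi * t / 12) for t in range(120)]

# A strong positive autocorrelation at lag 12 (and a strong negative one
# at lag 6, half a year out of phase) signals yearly seasonality.
print(round(autocorr(monthly, 12), 2))  # close to 1: seasonal
print(round(autocorr(monthly, 6), 2))   # strongly negative: anti-phase
```

A probabilistic program would go further, fitting the seasonal component as a latent variable so the model itself reads as a statement about the data; the point here is only that seasonality is a checkable, quantitative property of a series.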

The paper concludes that “Probabilistic programming is an emerging field at the intersection of programming languages, artificial intelligence, and statistics”, and refers to the first International Conference on Probabilistic Programming, hosted in 2018 by MIT, which had more than 200 attendees, including leading industry players in probabilistic programming such as Microsoft, Uber, and Google (see videos). (See also in this Issue the Blog, which gives a link to the YouTube video AI in 2019, posted by Siraj Raval on 31 December 2018.)

Democratizing data science for AIs?

A 25 January 2019 article from MIT, Self-driving cars, robots: Identifying AI ‘blind spots’, describes how “A novel model developed by MIT and Microsoft researchers identifies instances in which autonomous systems have ‘learned’ from training examples that don’t match what’s actually happening in the real world.”

The researchers describe a model that uses human input to uncover training “blind spots”. “A human closely monitors the system’s actions as it acts in the real world, providing feedback when the system made, or was about to make, any mistakes.”

The article concludes:

“When the system is deployed into the real world, it can use this learned model to act more cautiously and intelligently. If the learned model predicts a state to be a blind spot with high probability, the system can query a human for the acceptable action, allowing for safer execution.” (See also in this Issue the 25 January 2019 article from The Conversation, To protect us from the risks of advanced artificial intelligence, we need to act now.)
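The researchers’ actual method is more sophisticated, but the idea of the quoted passage can be caricatured in a few lines: tally human corrections per situation during training to estimate a blind-spot probability, then defer to a human whenever that estimate is high. Everything below (class name, states, threshold) is a hypothetical sketch, not the paper’s model:

```python
from collections import defaultdict

class BlindSpotMonitor:
    """Toy blind-spot estimator: defers to a human in risky states."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.corrections = defaultdict(int)  # times a human flagged a mistake
        self.visits = defaultdict(int)       # times the state was encountered

    def record(self, state, human_flagged_mistake):
        """Training phase: a human watches the system act and flags mistakes."""
        self.visits[state] += 1
        if human_flagged_mistake:
            self.corrections[state] += 1

    def blind_spot_prob(self, state):
        if self.visits[state] == 0:
            return 1.0  # never-seen state: be maximally cautious
        return self.corrections[state] / self.visits[state]

    def choose(self, state, policy_action):
        """Deployment phase: act, but query a human in likely blind spots."""
        if self.blind_spot_prob(state) > self.threshold:
            return "ask_human"
        return policy_action

monitor = BlindSpotMonitor()
# e.g. a car whose training never distinguished ambulances from large cars
for _ in range(8):
    monitor.record("white_car_ahead", human_flagged_mistake=False)
for _ in range(8):
    monitor.record("ambulance_ahead", human_flagged_mistake=True)

print(monitor.choose("white_car_ahead", "keep_driving"))   # keep_driving
print(monitor.choose("ambulance_ahead", "keep_driving"))   # ask_human
```

The design choice mirrors the quote: the learned blind-spot estimate doesn’t replace the driving policy, it gates it, so uncertainty is routed to a human rather than acted on blindly.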

The Safety Institute of Australia

The Safety Institute of Australia gives an occupational health and safety (OHS) view on analyzing big data in a 22 January 2019 article, Where OHS can get the most benefit out of digital technology. The article quotes Nicole Meacock, Senior Manager HSE Consulting for EY: “Often OHS functions equate big data to large spreadsheets, and analytics as simple graphs and calculations… More digitally capable professionals understand that big datasets are those that are large in volume, high velocity (e.g. real time), and highly variable (e.g. not simply counting inspections).”

In terms of digital workplace health and safety (WHS) initiatives, Meacock said those that are successful require three elements: “subject matter expertise; technology that works; and human-centred design.” The article gives examples of how these three areas should be addressed collectively, noting that “In our experience it is the final element (human-centred design) that organisations find the most challenging.”

Meacock will be talking about organisations following a high-level, strategic roadmap to success at the SIA National Health and Safety Conference, which will be held from 22–23 May 2019 at the International Convention Centre (ICC) Sydney. For more information or to register, please call (03) 8336 1995, email [email protected] or visit https://sianationalconference.com.au/.

(See also in this Issue the Summary from a 21 January 2019 Australian National Audit Office (ANAO) Report on the ACIC Administration of the BIS Project.)

Tags: ScienceDaily, Massachusetts Institute of Technology, Artificial intelligence, Autonomous vehicles, Artificial general intelligence, Killer robots, Safety Institute of Australia, ANAO

