“if we actually began relying on the claims made by big data surveillance in public health, we would come to some peculiar conclusions. Some of these conclusions may even pose serious public health harm.”

This quote comes from a July 2, 2014 ScienceDaily article, Finding real value in big data for public health, which provides an update to our March 2014 article When big isn’t better: How the flu bug bit Google. The new research shows that the promise of big data can be fulfilled by “revising the inner plumbing of the Google Flu Trends (GFT) digital disease surveillance system.”

The authors refer to the earlier criticism of GFT, noting that “Assuming you can’t use big data to improve public health is simply wrong. Existing shortcomings are a result of methodologies, not the data themselves.” The article describes how the existing GFT model was “beefed up” to give more accurate influenza predictions. The authors also note the need for “greater transparency in the reporting of studies and better methods when using big data in public health.”

The article concludes: “We certainly don’t want any single entity or investigator, let alone Google — who has been at the forefront of developing and maintaining these systems — to feel like they are unfairly the targets of our criticism. It’s going to take the entire community recognizing and rectifying existing shortcomings. When we do, big data will certainly yield big impacts.”

See San Diego State University. “Finding real value in big data for public health.” ScienceDaily. ScienceDaily, 2 July 2014. <www.sciencedaily.com/releases/2014/07/140702122432.htm>

The original research paper, published in Science on 14 March 2014, was titled The Parable of Google Flu: Traps in Big Data Analysis. The authors discuss two issues that contributed to GFT’s mistakes: big data hubris and algorithm dynamics. “Big data hubris” is defined as “the often implicit assumption that big data are a substitute for, rather than a supplement to, traditional data collection and analysis.” Algorithm dynamics are “the changes made by engineers to improve the commercial service and by consumers in using that service.” The authors suggest that “by combining GFT and lagged CDC (Centers for Disease Control and Prevention) data, as well as dynamically recalibrating GFT, we can substantially improve on the performance of GFT or the CDC alone.”

See http://blogs.iq.harvard.edu/netgov/The%20Parable%20of%20Google%20Flu%20%28WP-Final%29.pdf
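
To make the paper’s suggestion concrete, here is a rough sketch of what “combining GFT and lagged CDC data” could look like in practice: a simple least-squares blend of a GFT-style nowcast with last week’s surveillance figure, run on synthetic data. The series, the lag-1 term and the linear model are illustrative assumptions, not the authors’ actual recalibration method.

```python
# Illustrative sketch only: blends a hypothetical GFT-style signal with
# lagged surveillance data, in the spirit of the paper's suggestion.
# The synthetic series and the simple lag-1 linear model are assumptions,
# not the authors' actual recalibration procedure.
import numpy as np

rng = np.random.default_rng(0)
weeks = 150

# Synthetic "true" weekly flu activity, with a seasonal swing plus noise.
true_ili = 2.0 + 1.5 * np.sin(np.arange(weeks) * 2 * np.pi / 52) + rng.normal(0, 0.2, weeks)

# A GFT-like nowcast: correlated with the truth but prone to overshooting.
gft = 1.4 * true_ili + rng.normal(0, 0.5, weeks)

# Official reporting arrives with a delay, so only last week's value is usable.
cdc_lag1 = np.roll(true_ili, 1)

# Fit a blended model on the first 100 weeks: ili_t ~ gft_t + cdc_(t-1).
X = np.column_stack([np.ones(weeks), gft, cdc_lag1])
train, test = slice(1, 100), slice(100, weeks)
coef, *_ = np.linalg.lstsq(X[train], true_ili[train], rcond=None)

# Compare each signal on the held-out weeks.
blend = X[test] @ coef
mae = lambda pred: np.mean(np.abs(pred - true_ili[test]))
print(f"GFT alone     MAE: {mae(gft[test]):.3f}")
print(f"Lagged CDC    MAE: {mae(cdc_lag1[test]):.3f}")
print(f"Blended model MAE: {mae(blend):.3f}")
```

In this toy setting the blend generally tracks the target more closely than either input on its own, which is the behaviour the authors describe on real data.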

However, these suggestions are debated in a March 27, 2014 article in The Atlantic, titled In Defense of Google Flu Trends – If the system failed, it did so only in the outsize dreams of Big Data acolytes. The author picks up on the suggestion to improve performance by combining GFT and lagged CDC data and makes the point that this was never the intention. From the initial concept in 2004, the Google team’s early and frequent contact with the CDC shaped the way GFT was built: “The goal was to build a complementary signal to other signals.” In practice, combining GFT and CDC data would offer less to the CDC, even if it gave GFT greater predictive value. A Google spokesman noted, “In talking with the various public health officials over the years, we’ve gotten the feeling that it is very beneficial to see multiple angles on the flu.” Another point is that

“By keeping the Flu Trends signal pure, officials can check to see whether the GFT data reflects a real deviation from the standard trend.”

The author also quotes research out of Johns Hopkins from early 2013, which found that Flu Trends data “was the only source of external information to provide statistically significant forecast improvements over the base model” – in other words, GFT was doing what it was designed for.

The author ends with his own take on the GFT “Parable”:

“New technology comes along. The hype that surrounds it exceeds that which its creators intended. The technology fails to live up to that false hope and is therefore declared a failure in the court of public opinion.” He adds: “Luckily, that’s not the only arena that matters. Researchers both in and outside epidemiology have found Google Flu Trends and its methods useful and relevant. The initial Nature paper describing the experiment now has over 1,000 citations in many different fields.”

See http://www.theatlantic.com/technology/archive/2014/03/in-defense-of-google-flu-trends/359688/

However, the definitive take on the Big Data debate comes in a substantial FT Magazine article, Big data: are we making a big mistake? by Tim Harford, author of The Undercover Economist and The Undercover Economist Strikes Back. In this article, Tim does some “myth-busting” on recent Big Data claims, quoting David Spiegelhalter, Winton Professor of the Public Understanding of Risk at Cambridge University, who says that at worst they can be “complete bollocks. Absolute nonsense.”

The article traces “Big data hubris” back to 1936, with a highly readable account of the pre-election surveys for the contest between Republican Alfred Landon and President Franklin Delano Roosevelt. Tim explains why The Literary Digest predicted the wrong outcome from a survey yielding 2.4 million returns, whereas opinion-poll pioneer George Gallup got it right from a sample of just 3,000: the Digest’s enormous sample was drawn and returned in a way that skewed it against Roosevelt, while Gallup’s far smaller sample was chosen to be representative of the electorate.

Tim notes “The big data craze threatens to be The Literary Digest all over again.”
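
The statistical point behind the 1936 story is easy to demonstrate. The toy simulation below (every figure is invented; these are not the 1936 numbers) compares a huge straw poll whose respondents are skewed against one candidate with a small, genuinely random sample.

```python
# Illustration only: a huge biased sample vs. a small random one.
# All numbers are invented; they are not the actual 1936 data.
import numpy as np

rng = np.random.default_rng(1936)

# Hypothetical electorate: 10 million voters, 55% genuinely favour candidate A.
population = rng.random(10_000_000) < 0.55

# "Literary Digest"-style straw poll: ballots go to 4 million voters, but
# supporters of candidate A send theirs back only half as often.
mailed = rng.choice(population, size=4_000_000)
returned = rng.random(mailed.size) < np.where(mailed, 0.4, 0.8)
digest_sample = mailed[returned]

# "Gallup"-style poll: 3,000 respondents sampled uniformly at random.
gallup_sample = rng.choice(population, size=3_000, replace=False)

print(f"True support for A:                    {population.mean():.1%}")
print(f"Biased poll ({digest_sample.size:,} returns): {digest_sample.mean():.1%}")
print(f"Random poll (3,000 returns):           {gallup_sample.mean():.1%}")
```

Despite its millions of returns, the biased poll calls the election for the wrong candidate, while the 3,000-person random sample should land within a couple of percentage points of the truth – which is essentially the Digest-versus-Gallup story.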

The current version of “Big data hubris” is the “N = All” delusion. Tim quotes Professor Viktor Mayer-Schönberger of the Oxford Internet Institute, co-author of Big Data, whose “favoured definition of a big data set is one where ‘N = All’ – where we no longer have to sample, but we have the entire background population.” Unfortunately, “N = All” is unrealistic, as Tim illustrates with a few anecdotes, including Boston’s “Street Bump” smartphone app and Target’s “uncanny success” in predicting pregnancy. The article also highlights some of the pitfalls, such as the “multiple-comparisons problem” and a lack of transparency. His conclusion is that “Big data” has arrived, but big insights have not: “The challenge now is to solve new problems and gain new answers – without making the same old statistical mistakes on a grander scale than ever.”

See http://www.ft.com/cms/s/2/21a6e7d8-b479-11e3-a09a-00144feabdc0.html#ixzz37JOTDTbZ
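
The “multiple-comparisons problem” Tim mentions is also easy to see in a toy example: screen enough unrelated series against a target and some of them will look statistically significant purely by chance. The number of candidate series and the 0.05 threshold below are arbitrary illustrative choices, not anything taken from GFT’s actual query-selection procedure.

```python
# Illustration only: with enough unrelated candidate predictors, some will
# correlate "significantly" with any target series by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_weeks, n_candidates = 104, 10_000

# A target series (think: two years of weekly flu activity) and
# thousands of pure-noise "search term" series.
flu = rng.normal(size=n_weeks)
terms = rng.normal(size=(n_candidates, n_weeks))

# Pearson-correlation p-value of each noise series against the target.
pvals = np.array([stats.pearsonr(term, flu)[1] for term in terms])

hits = int((pvals < 0.05).sum())
print(f"{hits} of {n_candidates} unrelated series look 'significant' at p < 0.05")
print(f"(roughly {0.05 * n_candidates:.0f} expected by chance)")
```

Around one in twenty of these meaningless series clears the conventional significance bar, which is why screening huge numbers of search terms against a flu curve demands far more care than a single hypothesis test.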