(Editor’s Note: The following is an extract from a Science Daily report on work by University of Houston researchers, published recently in the journal Science. The original, with photograph and links, is available at http://www.sciencedaily.com/releases/2014/03/140313142608.htm )
Numbers and data can be critical tools in bringing complex issues into crisp focus. The understanding of diseases, for example, benefits from algorithms that help monitor their spread. But without context, a number may just be a number, or worse, misleading.
“The Parable of Google Flu: Traps in Big Data Analysis,” published in the journal Science and funded in part by a grant from the National Science Foundation, examines Google’s data-aggregating tool Google Flu Trends (GFT), which was designed to provide real-time monitoring of flu cases around the world based on Google searches that matched terms for flu-related activity.
“Google Flu Trend is an amazing piece of engineering and a very useful tool, but it also illustrates where ‘big data’ analysis can go wrong,” said Ryan Kennedy, University of Houston political science professor. He and co-researchers David Lazer (Northeastern University/Harvard University), Alex Vespignani (Northeastern University) and Gary King (Harvard University) detail new research about the problematic use of big data from aggregators such as Google.
Even with modifications to the GFT over many years, the tool that set out to improve response to flu outbreaks has overestimated peak flu cases in the U.S. over the past two years.
“Many sources of ‘big data’ come from private companies, who, just like Google, are constantly changing their service in accordance with their business model,” said Kennedy, who also teaches research methods and statistics for political scientists. “We need a better understanding of how this affects the data they produce; otherwise we run the risk of drawing incorrect conclusions and adopting improper policies.”
GFT overestimated the prevalence of flu in both the 2012-2013 and 2011-2012 seasons by more than 50 percent, according to the research. Additionally, from August 2011 to September 2013, GFT over-predicted the prevalence of flu in 100 out of 108 weeks.
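The statistics above come from comparing GFT's weekly estimates against official surveillance figures. As a purely illustrative sketch (using made-up numbers, not the actual GFT or CDC series), this is how an over-prediction count and an overall overestimate would be tallied from two paired weekly series:

```python
# Hypothetical weekly flu levels: a CDC-style surveillance series and a
# search-based estimate that runs consistently high. These values are
# invented for illustration only.
cdc_ili = [1.2, 1.5, 2.0, 2.8, 3.5, 2.9, 2.1, 1.6]   # reported % influenza-like illness
gft_est = [1.9, 2.4, 3.1, 4.5, 5.6, 4.4, 3.2, 2.3]   # search-based estimates

# Count the weeks in which the search-based tool over-predicted.
over_weeks = sum(1 for g, c in zip(gft_est, cdc_ili) if g > c)
print(f"Over-predicted {over_weeks} of {len(cdc_ili)} weeks")

# Relative overestimate across the whole period.
overall = (sum(gft_est) - sum(cdc_ili)) / sum(cdc_ili)
print(f"Overall overestimate: {overall:.0%}")
```

With these synthetic numbers the tool over-predicts in all 8 weeks and overestimates total prevalence by about 56 percent, mirroring the kind of systematic bias the researchers report.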
The team also questions data collected from platforms such as Twitter and Facebook (used, for example, to gauge polling trends and market popularity), since campaigns and companies can manipulate these platforms to ensure their products are trending.
Still, the article contends there is room for data from the Googles and Twitters of the Internet to be combined with more traditional methodologies to create a deeper and more accurate understanding of human behavior.
“Our analysis of Google Flu demonstrates that the best results come from combining information and techniques from both sources,” Kennedy said. “Instead of talking about a ‘big data revolution,’ we should be discussing an ‘all data revolution,’ where new technologies and techniques allow us to do more and better analysis of all kinds.”
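The "combine both sources" idea can be sketched very simply. The following is not the paper's actual method; it is a minimal illustration, with an invented function name and an arbitrary weight, of blending a lagged traditional surveillance value with a real-time search-based estimate, down-weighting the search signal because of its known bias:

```python
# Illustrative only: blend a (lagged) surveillance figure with a real-time
# search-based estimate via a weighted average. The function name, inputs,
# and the 0.3 weight are assumptions for this sketch, not the authors' model.
def combined_nowcast(last_surveillance_value, search_estimate, search_weight=0.3):
    """Weighted blend of a traditional surveillance value and a search-based nowcast."""
    return (1 - search_weight) * last_surveillance_value + search_weight * search_estimate

# Example: surveillance last reported 2.0, search-based tool currently says 3.0.
print(combined_nowcast(2.0, 3.0))  # 2.3
```

The point of such a blend is the one Kennedy makes: neither source alone is best; the timely but biased signal corrects the lag in the reliable one, and vice versa.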
The above story is based on materials provided by University of Houston. The original article was written by Marisa Ramirez. Note: Materials may be edited for content and length.
D. Lazer, R. Kennedy, G. King, A. Vespignani. The Parable of Google Flu: Traps in Big Data Analysis. Science, 2014; 343 (6176): 1203 DOI: 10.1126/science.1248506