This is Part 2 of 7 of the highlights from “Illuminating the Path: The Research and Development Agenda for Visual Analytics.” Please see this post for an introduction to the study and access to the other 6 parts.
“Data flooding” is a term I use to describe the fact that “our ability to collect data is increasing at a faster rate than our ability to analyze it.” To this end, I completely agree with the recommendation that new methods are required to “allow the analyst to examine this massive, multi-dimensional, multi-source, time-varying information stream to make decisions in a time-critical manner.”
We don’t want less information but more, since “large data volumes allow analysts to discover more complete information about a situation.” Indeed, “scale brings opportunities as well.” For example, “analysts may be able to determine more easily when expected information is missing,” which sometimes “offers important clues […].”
However, while computer processing power and memory density have changed radically over the decades, “basic human skills and abilities do not change significantly over time.” Technological advances can certainly leverage our skills “but there are fundamental limits that we are asymptotically approaching,” hence the notion of information glut.
In other words, “human skills and abilities do not scale.” That said, the number of humans involved in analytical problem-solving does scale. Unfortunately, “most published techniques for supporting analysis are targeted for a single user at a time.” This means that new techniques that “gracefully scale from a single user to a collaborative (multi-user) environment” need to be developed.
However, current technologies and platforms being used in the humanitarian and human rights communities do not address the needs for handling ever-changing volumes of information. “Furthermore, current tools provide very little in the way of support for the complex tasks of analysis and discovery process.” There clearly is a platform scarcity.
Admittedly, “creating effective visual representations is a labor-intensive process that requires a solid understanding of the visualization pipeline, characteristics of the data to be displayed, and the tasks to be performed.”
However, as is clear from the crisis mapping projects I have consulted on, “most visualization software is written with incomplete knowledge of at least some of this information.” Indeed, it is rarely possible for “the analyst, who has the best understanding of the data and task, to construct new tools.”
The NVAC study thus recommends that “research is needed to create software that supports the most complex and time-consuming portions of the analytical process, so that analysts can respond to increasingly more complex questions.” To be sure, “we need real-time analytical monitoring that can alert first responders to unusual situations in advance.”
Thanks for the summary Patrick, sounds like a fantastic report.
Mulling over this post today, I am inclined to characterize data flooding as 1.) an interaction design problem and 2.) an application design problem.
The interaction design problem is about humans and software usability: We need tools that make it easier for groups of “entry level” analysts to participate and make sense of data.
The application design problem is (perhaps more surprisingly) about interoperability and data standards: we need to be able to thread data together in ways that were not preconceived by the application designer.
Granted, these are not trivial issues to solve, but they are now relevant to dozens of industries, and their import is increasingly apparent. We have a large body of literature on usability and machine interoperability, and profound motivations in every information-driven field.
To my eye the humanitarian opportunities are hanging low and heavy on the tree.
So what forces are holding us back from creating more advanced tools for situational awareness?
Thanks as always for sharing your thoughts on iRevolution, Chris, much appreciated indeed. You make very good points and have also preempted my next series of posts, which I shall publish shortly; I look forward to any further comments you might have.