
Towards a “Theory” (or analogy) of Crisis Mapping?

The etymology of the word “theory” is particularly interesting. The word originates from the ancient Greek: theoros means “spectator,” from thea, “a view,” + horan, “to see.” By 1638, theory was used to describe “an explanation based on observation and reasoning.” How fitting that the etymology of “theory” resonates with the purpose of crisis mapping.

But is there a formal theory of crisis mapping per se?  There are little bits and pieces here and there, sprinkled across various disciplines, peer-reviewed journals and conference presentations. But I have yet to come across a “unified theory” of crisis mapping. This may be because the theory (or theories) are implicit and self-evident. Even so, there may be value in rendering the implicit—why we do crisis mapping—more visible.

Crises occur in time and space. Yet our study of crises (and conflict in particular) has generally focused on identifying trends over time rather than over space. Why? Because unlike the field of disaster management, we do not have seismographs scattered around the planet that precisely pinpoint the source of escalating social tremors. This means that the bulk of our datasets describe conflict as an event happening in countries and years, not cities and days, let alone towns and hours.

This is starting to change thanks to several factors: political scientists are now painstakingly geo-referencing conflict data (example); natural language processing algorithms are increasingly able to extract time and place data from online media and user-generated content (example);  and innovative crowdsourcing platforms are producing new geo-referenced conflict datasets (example).
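
To make the shift from country-year to town-hour concrete, here is a minimal sketch in Python of what a disaggregated, geo-referenced event record might look like (the field names and values are purely illustrative):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ConflictEvent:
    """One geo-referenced, time-stamped conflict event."""
    event_type: str      # e.g. "battle", "riot"
    lat: float           # latitude in decimal degrees
    lon: float           # longitude in decimal degrees
    timestamp: datetime  # a day or even an hour, not just a year
    source: str          # press report, field report, crowdsourced SMS

# A country-year datum ("Angola, 1993") becomes many records like this:
event = ConflictEvent("battle", -12.35, 17.35, datetime(1993, 2, 14, 6, 0),
                      "press report")
```

The point is simply that each record carries its own coordinates and timestamp, so analysis can aggregate up to any scale rather than being locked into countries and years.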

In other words, we have access to more disaggregated data, which allows us to study conflict dynamics at a more appropriate scale. By the way, this stands in contrast to the “goal of the modern state [which] is to reduce the chaotic, disorderly, constantly changing social reality beneath it to something more closely resembling the administrative grid of its observations” (1). Instead of Seeing Like a State, crisis mapping corrects the myopic grid to give us The View from Below.

Crises are patterns; by this I mean that crises are not random. Military or militia tactics are not random either. There is a method to the madness—the fog of war notwithstanding. Peace is also a pattern. Crisis mapping gives us the opportunity to detect peace and conflict patterns at a finer temporal and spatial resolution than previously possible; a resolution that more closely reflects reality at the human scale.

Why do scientists increasingly build more sophisticated microscopes? So they can get more micro-level data that might explain patterns at a macro-scale. (I wonder whether this means we’ll get to a point where we cannot reconcile quantum conflict mechanics with the general theory of conflict relativity). But I digress.

Compare analog televisions with today’s high-definition digital televisions. The latter are a closer reflection of reality. Or picture a crystal-clear lake on a fine spring day. You peer over the water and see the pattern of rocks on the bottom of the lake. You also see a perfect reflection of the leaves on the trees by the lake shore. If the wind picks up, however, or if rain begins to fall, the drops cause ripples (“noise” in the data) that prevent us from seeing the same patterns as clearly. Crisis mapping calms the waters.

Keeping with the lake analogy, the ripples form certain patterns. Conflict is also the result of ripples in the socio-political fabric. The question is how to dampen or absorb the ripples without causing unintended ripples elsewhere? What kinds of new patterns might we generate to “cancel out” conflict patterns and amplify peaceful patterns? Thinking about patterns and anti-patterns in time and space may be a useful way to describe a theory of crisis mapping.

Some patterns may be more visible or detectable at certain temporal-spatial scales or resolutions than at others. Crisis mapping allows us to vary this scale freely; to see the Nazca Lines of conflict from another perspective and at different altitudes. In short, crisis mapping allows us to escape the linear, two-dimensional world of Euclidean political science to see patterns that otherwise remain hidden.

In theory then, adding spatial data should improve the accuracy and explanatory power of conflict models. This should provide us with better and more rapid ways to detect the patterns behind conflict ripples before they become warring tsunamis. But we need more rigorous and data-driven studies that demonstrate this theory in practice.

This is one theory of crisis mapping. Problem is, I have many others! There’s more to crisis mapping than modeling. In theory, crisis mapping should also provide better decision support, for example. Also, crisis mapping should theoretically be more conducive to tactical early response, not to mention monitoring & evaluation. Why? I’ll ramble on about that some other day. In the meantime, I’d be grateful for feedback on the above.

Patrick Philippe Meier

Proposing the Field of Crisis Mapping

There are no books on Crisis Mapping, no peer-reviewed journals, no undergraduate or graduate courses, no professional seminars. And yet, after co-directing the Harvard Humanitarian Initiative’s (HHI) Program on Crisis Mapping and Early Warning (CM&EW) for two years, I can confirm that an informal field and community of crisis mapping is veritably thriving.

The incredible interest around the first International Conference on Crisis Mapping (ICCM 2009) is further testament to this. Over 50 organizations are expected to participate and three leading donors have come together to generously support the formalization of Crisis Mapping as a field of study and practice. The conference is co-organized by myself at HHI and my colleague Professor Jen Ziemke at John Carroll University (JCU).

The findings from HHI’s 2-year program on Crisis Mapping were invaluable in developing a proposed research agenda for the field. This agenda serves as the basis of ICCM 2009. I regularly refer to this research agenda when asked by colleagues: “What is crisis mapping?” Crisis Mapping is more than mapping crisis data. There are three key pillars that I have identified as being integral to crisis mapping.

1. Crisis Map Sourcing (CMS)
2. Crisis Mapping Analysis (CMA)
3. Crisis Mapping Response (CMR)

Each of these three pillars constitutes an important area of research for crisis mapping. I briefly describe what each of these constitutes below. Professor Ziemke and I are working together to further develop the crisis mapping taxonomy I crafted at HHI. If we are to begin formalizing the field, then the community may benefit from a common language. So we’re co-authoring a paper on the topic and look forward to sharing it in the near future.

Crisis Map Sourcing

How does one collect information in such a way that mapping can add value? There are four principal methodologies in crisis map sourcing: (1) Crisis Map Coding, (2) Participatory Crisis Mapping, (3) Mobile Crisis Mapping and (4) Automated Crisis Mapping. The common thread between the four is that they each look to extract event-data for crisis mapping purposes.

Crisis Map Coding (CMC) draws on hand-coding geo-referenced event-data, as in the ACLED project at the Peace Research Institute, Oslo (PRIO). This methodology is widely used by political scientists, as evidenced by the peer-reviewed literature on the topic. See also Jen Ziemke’s work on hand-coding conflict data on the Angolan civil war. While manually coding event data is certainly not a new approach, the focus on geo-referencing this data is relatively recent.

Participatory Crisis Mapping (PCM) is participatory mapping with a focus on crises. A good example is the UNDP’s Threat and Risk Mapping Analysis (TRMA) project in the Sudan, which uses focus groups to map local knowledge on threats and risks at the community level.

Mobile Crisis Mapping (MCM) seeks to leverage mobile technologies for crisis mapping. This includes the use of mobile phones, geospatial technologies and unmanned aerial vehicles (UAVs). Ushahidi, AAAS and ITHACA are all good examples of mobile crisis mapping in action. These different technologies enable us to experiment with new methodologies such as the crowdsourcing of crisis information, automated change detection using satellite imagery and real-time mashups with UAVs. More information on MCM is available here.

Automated Crisis Mapping (ACM) looks to natural language processing and computational linguistics to extract event-data. While this field of study is not new, it has been progressing rapidly over the years, as evidenced by Crimson Hexagon’s work on sentiment extraction and the European Media Monitor’s (EMM) clustering algorithms. What is new in this area is the focus on automated mapping like GDACS and the use of semantic web parsing like BioCaster.
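
To illustrate the basic idea only, and emphatically not the actual algorithms behind Crimson Hexagon, EMM, GDACS or BioCaster, here is a toy event extractor built from a hypothetical gazetteer and event lexicon:

```python
import re
from datetime import datetime

# Toy gazetteer and event lexicon (both invented); real systems rely on
# far larger resources and statistical language models.
GAZETTEER = {"Kassala": (15.45, 36.40), "Luanda": (-8.84, 13.23)}
EVENT_WORDS = {"clash": "violent clash", "protest": "protest", "flood": "flood"}

def extract_event(sentence):
    """Return (event_type, place, coords, date) from one sentence, or None."""
    etype = next((v for k, v in EVENT_WORDS.items() if k in sentence.lower()), None)
    place = next((p for p in GAZETTEER if p in sentence), None)
    m = re.search(r"(\d{4})-(\d{2})-(\d{2})", sentence)
    if not (etype and place and m):
        return None
    return (etype, place, GAZETTEER[place], datetime(*map(int, m.groups())))
```

For example, `extract_event("A clash was reported near Kassala on 2009-06-12.")` yields a geo-referenced, time-stamped event ready for mapping.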

Crisis Mapping Analysis

How does one analyze crisis mapping data to identify patterns over space and time? There are three principal approaches in crisis mapping analysis: (1) Crisis Mapping Visualization, (2) Crisis Mapping Analytics and (3) Crisis Map Modeling. Note that there is a pressing need to enable more collaboration in the analytical process. Platforms that facilitate collaborative analytics are few and far between. In addition, there is a shift towards mobile crisis mapping analysis, that is, leveraging mobile technologies to carry out analysis using tools such as Folksomaps and Cartagen.

Crisis Mapping Visualization (CMV) seeks to visualize data in such a way that patterns are identifiable through visual analysis, i.e., using the human eye. For example, patterns may be discernible at one spatial and/or temporal scale but not at another. Or patterns may not appear in 2D visualization but only in 3D, as with GeoTime. Varying the speed of data animation over time may also shed light on certain patterns. More on visualization here and here.
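
The scale-dependence of patterns is easy to demonstrate: binning the same (invented) event coordinates into grid cells of two different sizes makes a cluster appear at one resolution that is invisible at the other.

```python
from collections import Counter

def grid_counts(events, cell_size):
    """Bin (lat, lon) points into square cells of `cell_size` degrees."""
    return Counter((int(lat // cell_size), int(lon // cell_size))
                   for lat, lon in events)

events = [(0.1, 0.10), (0.2, 0.15), (0.9, 0.9), (5.0, 5.0)]
coarse = grid_counts(events, 10.0)  # one big cell: the map looks uniform
fine = grid_counts(events, 0.5)     # a cluster near the origin emerges
```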

Crisis Mapping Analytics (CMA) is GIS analysis applied to crisis data. This approach draws on applied geo-statistics and spatial econometrics to identify crisis patterns otherwise hidden to the human eye. This includes Exploratory Spatial Data Analysis (ESDA). Good examples of crisis mapping analytics are HunchLab and other crime mapping analysis platforms like GeoSurveillance.
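
By way of illustration only, and not how HunchLab or GeoSurveillance actually work, here is a toy hot-spot detector that flags grid cells whose event counts are statistical outliers relative to the rest of the grid:

```python
from statistics import mean, stdev

def hot_spots(cell_counts, threshold=2.0):
    """Return z-scores for cells whose event count exceeds `threshold`
    standard deviations above the grid-wide mean."""
    counts = list(cell_counts.values())
    mu, sigma = mean(counts), stdev(counts)
    return {cell: (n - mu) / sigma
            for cell, n in cell_counts.items()
            if sigma > 0 and (n - mu) / sigma > threshold}

# A 3x3 grid of (invented) event counts with one anomalous cell.
cells = {(i, j): 1 for i in range(3) for j in range(3)}
cells[(2, 2)] = 30
```

Real ESDA goes much further, for instance by weighting each cell by its neighbors, but the spirit is the same: let the statistics surface what the eye would miss.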

Crisis Map Modeling (CMM) combines GIS analysis with agent-based modeling. See this example published in Science. While the conclusions of the article are suspect, the general approach highlights the purpose of crisis map modeling. The point is to use empirical data to simulate different scenarios using agent-based models. My colleague Nils Weidmann is doing some of the most interesting work in this area.
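
The general recipe can be sketched as follows (a toy illustration, not any published model): historical event counts seed a risk surface, and simulated agents moving across it generate hypothetical incidents under different scenarios.

```python
import random

def simulate(risk_grid, agents, steps, seed=42):
    """Toy agent-based model: agents random-walk on a grid and an
    'incident' fires when one lands on a high-risk cell."""
    rng = random.Random(seed)
    size = len(risk_grid)
    incidents = []
    for t in range(steps):
        moved = []
        for x, y in agents:
            x = min(max(x + rng.choice([-1, 0, 1]), 0), size - 1)
            y = min(max(y + rng.choice([-1, 0, 1]), 0), size - 1)
            moved.append((x, y))
            if rng.random() < risk_grid[x][y]:
                incidents.append((t, x, y))
        agents = moved
    return incidents

# Risk surface derived (hypothetically) from normalized historical counts.
risk = [[0.0, 0.0, 0.0],
        [0.0, 0.1, 0.0],
        [0.0, 0.0, 0.9]]
incidents = simulate(risk, [(1, 1), (2, 2)], steps=20)
```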

Crisis Mapping Response

Like early warning, there is little point in doing crisis mapping if it is not connected to strategic and/or operational response. There are three principal components of crisis mapping response: (1) Crisis Map Dissemination, (2) Crisis Map Decision Support, and (3) Crisis Map Monitoring and Evaluation.

Crisis Map Dissemination (CMD) seeks to disseminate maps and/or share information provided by maps. Maps can be shared in hard copy format, such as with Walking Papers. They can also be shared electronically and the underlying data synchronized using Mesh4X. Another approach is crowdfeeding, where indicator alerts are subscribed to via email or SMS.

Crisis Map Decision Support (CMDS) leverages decision-support tools specifically for crisis mapping response. This approach entails the use of interactive mapping platforms that users can employ to query crisis data. There is thus a strong link with crisis mapping analysis since the decision process is informed by the patterns identified using crisis mapping analytics. In other words, the point is to identify patterns so we can amplify, mitigate or change them. It is vital that crisis map decision support platforms have well designed user interfaces.

Crisis Map Monitoring and Evaluation (CMME) combines crisis mapping with monitoring and evaluation (M&E) to produce basemaps (baselines mapped in space and time). This approach seeks to identify project impact or lack thereof by comparing basemaps with new data being collected throughout the project cycle. More information on this approach is available here.

I’d be grateful for feedback on this proposed taxonomy.

Patrick Philippe Meier

Crisis Mapping Angola

My colleague Jen Ziemke did her dissertation research on identifying patterns of civil war abuse in Angola. In particular, she sought to determine why combatants sometimes target civilians and at other times refrain from doing so. She explored this question by looking for both spatial and temporal patterns of conflict.


Jen was one of the first scholars to do applied research in the field of crisis mapping and continues to be an invaluable source of expertise on the subject. For her doctoral work, she coded and geo-referenced 41 years of conflict data on Angola by drawing on more than 180 different sources. This yielded a stunning 9,216 events of violence between 1961 and 2002.

Jen’s spatial analysis of the data yielded the following conclusion:

Resource wealth, ethnicity, and ‘war by other means’ do not adequately explain the observed patterns of civilian abuse in the Angolan civil war. However, losses on the battlefield escalate patterns of violence against civilians. Belligerent actors do not always abuse civilians; rather they participate in these behaviors in relatively predictable ways: when they are losing and as the war progresses.  In short, a theory of loss best explains the variation in levels of civilian abuse across time in the Angolan war than any other explanation.

More on Jen’s research is available in a paper (PDF) she presented at Yale University last year entitled: “From Battles to Massacres: Explaining the patterns of civil war abuse in Angola, 1961-2002.”

Patrick Philippe Meier

Crisis Mapping for Monitoring & Evaluation

I was pleasantly surprised when local government ministry representatives in the Sudan (specifically Kassala) directly requested training on how to use the UNDP’s Threat and Risk Mapping Analysis (TRMA) platforms to monitor and evaluate their own programs.


The use of crisis mapping for monitoring and evaluation (M&E) had cropped up earlier this year in separate conversations with the Open Society Institute (OSI) and MercyCorps. The specific platform in mind was Ushahidi, and the two organizations were interested in exploring the possibility of using the platform to monitor the impact of their funding and/or projects.

As far as I know, however, little to no rigorous research has been done on the use of crisis mapping for M&E. The field of M&E is far more focused on change over time than over space. Clearly, however, post-conflict recovery programs are implemented in both time and space. Furthermore, any conflict sensitivity programming must necessarily take into account spatial factors.


The only reference to mapping for M&E that I was able to find online was one paragraph in relation to the Cartametrix 4D map player. Here’s the paragraph (which I have split in two to ease legibility) and below a short video demo I created:

“The Cartametrix 4D map player is visually compelling and fun to use, but in terms of tracking results of development and relief programs, it can be much more than a communications/PR tool. Through analyzing impact and results across time and space, the 4D map player also serves as a good program management tool. The map administrator has the opportunity to set quarterly, annual, and life of project indicator targets based on program components, regions, etc.

Tracking increases in results via the 4D map players, gives a program manager a sense of the pace at which targets are being reached (or not). Filtering by types of activities also provides for a quick and easy way to visualize which types of activities are most effectively resulting in achievements toward indicator targets. Of course, depending on the success of the program, an organization may or may not want to make the map (or at least all facets of the map) public. Cartametrix understands this and is able to create internal program management map applications alongside the publicly available map that doesn’t necessarily present all of the available data and analysis tools.”

Mapping Baselines

I expect that it will only be a matter of time until the M&E field recognizes the added value of mapping. Indeed, why not use mapping as a contributing tool in the M&E process, particularly within the context of formative evaluation?

Clearly, mapping can be one contributing tool in the M&E process. To be sure, baseline data can be collected, time-stamped and mapped. Mobile phones further facilitate this spatially decentralized process of information collection. Once baseline data is collected, the organization would map the expected outcomes of the projects they’re rolling out, along with estimated impact dates, against this baseline data.

The organization would then implement local development and/or conflict management programs in certain geographical areas and continue to monitor local tensions by regularly collecting geo-referenced data on the indicators that said projects are set to influence. Again, these trends would be compared to the initial baseline.

These programs could then be mapped and data on local tensions animated over time and space. The dynamic mapping would provide an intuitive and compelling way to demonstrate impact (or the lack thereof) in certain geographical areas where the projects were rolled out, as compared to other similar areas with no parallel projects. Furthermore, using spatial analysis for M&E could be a way to carry out a gap analysis and to assess whether resources are being allocated efficiently in more complex environments.
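
The basic logic can be sketched in a few lines (area names and indicator values are invented): compare baseline and follow-up values of a tension indicator, split by whether an area received a project.

```python
def impact_by_area(baseline, followup, project_areas):
    """Change in a tension indicator per area, split into project areas
    and comparison areas."""
    change = {a: followup[a] - baseline[a] for a in baseline}
    treated = {a: d for a, d in change.items() if a in project_areas}
    control = {a: d for a, d in change.items() if a not in project_areas}
    return treated, control

baseline = {"Area A": 8, "Area B": 7, "Area C": 8}
followup = {"Area A": 5, "Area B": 7, "Area C": 9}
treated, control = impact_by_area(baseline, followup, {"Area A"})
# Tension fell in the project area but not in the comparison areas.
```

Mapping `treated` and `control` side by side, and animating the changes over the project cycle, is precisely the kind of dynamic comparison against a baseline described above.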

Next Steps

One of my tasks at TRMA is to develop a short document on using crisis mapping for M&E so if anyone has any leads on applied research in this area, I would be much obliged.

Patrick Philippe Meier

GeoTime: Crisis Mapping Analysis in 3D

I just came across GeoTime, a very neat GIS platform for network analysis in time and space. GeoTime is developed by a company called Oculus and does a great job of presenting an appealing 3D visual interface for the temporal analysis of geo-referenced data. The platform integrates timeline comparisons, chart statistics and network analysis tools to support decision making. GeoTime also includes plug-ins for Excel and ArcGIS.


GeoTime includes a number of important functionalities including:

  • Movement Trails: to display history and behavior as paths in space-time;
  • Animation: to play back sequences and see how events unfold (both the direction and speed of the animation can be adjusted);
  • Pattern Recognition: to automatically identify key behaviors;
  • Annotate and Sketch: to add notes directly in the scene and save views as reports;
  • Fast Maps: to automatically adjust the level of detail;
  • Interactive Chain and Network Analysis: to identify related events.


Below is an excerpt of a video demo of GeoTime which is well worth watching to get a sense of how these functionalities come into play:

The demo above uses hurricane data to highlight GeoTime’s integrated functionalities. But the application can be used to analyze a wide range of data, such as crime incidents, to identify patterns in space and time. A demo for that is available here.


My interest in GeoTime stems from its potential application to analyzing conflict datasets. Problem is, the platform will set you back a cool $3,925. They do have a university rate of $1,675, but what’s missing is a rate for humanitarian NGOs or even a limited trial version.

Patrick Philippe Meier

Part 7: A Taxonomy for Visual Analytics

This is Part 7 of 7 of the highlights from the National Visualization and Analytics Center (NVAC). Unlike previous parts, this one focuses on a May 2009 article. Please see this post for an introduction to the study and access to the other 6 parts.

Jim Thomas, co-author of “Illuminating the Path: The Research and Development Agenda for Visual Analytics” and Director of the National Visualization and Analytics Center (NVAC), recently called for the development of a taxonomy for visual analytics. Jim explains the importance of visual analytics as follows:

“Visual analytics are valuable because the tool helps to detect the expected, and discover the unexpected. Visual analytics combines the art of human intuition and the science of mathematical deduction to perceive patterns and derive knowledge and insight from them. With our success in developing and delivering new technologies, we are paving the way for fundamentally new tools to deal with the huge digital libraries of the future, whether for terrorist threat detection or new interactions with potentially life-saving drugs.”

In the latest edition of VAC Views, Jim expresses NVAC’s interest in helping to “define the study of visual analytics by providing an order and arrangement of topics—the taxa that are at the heart of studying visual analytics. The reason for such a “definition” is to more clearly describe the scope and intent of impact for the field of visual analytics.”

Jim and colleagues propose the following higher-order classifications:

  • Domain/Applications
  • Analytic Methods/Goals
  • Science and Technology
  • Data Types/Structures.

In his article in VAC Views, Jim requests feedback and suggestions for improving the more detailed taxonomy that he provides in the graphic below. The latter was not produced in very high resolution in VAC Views and does not reproduce well here, so I summarize below whilst giving feedback.


1. Domain/Applications

While Security (and Health) are included in the draft NVAC proposal as domains/applications, Humanitarian Crises, Conflict Prevention and Disaster Management are missing.

Perhaps “domain/applications” should not be combined, since “applications” tend to be a subset of associated “domains,” which creates some confusion. For example, law enforcement is a domain and crime mapping analysis could be considered an application of visual analytics.

2. Analytic Methods/Goals

Predictive, Surveillance, Watch/Warn/Alert, Relationship Mapping, Rare Event Identification are included. There are a host of other methods not referred to here such as cluster detection, a core focus of spatial analysis. See the methods table in my previous blog post for examples of spatial cluster detection.

Again I find that combining both “analytic methods” and “goals” makes the classification somewhat confusing.

3. Science and Technology

This classification includes the following entries (each of which are elaborated on individually later):

  • Analytic reasoning and human processes
  • Interactive visualization
  • Data representations and theory of knowledge
  • Theory of communications
  • Systems and evaluations.

4. Data Types/Structures

This includes Text, Image, Video, Graph Structures, Models/Simulations, Geospatial Coordinates, Time, etc.

Returning now to the sub-classifications under “Science and Technology”:

Analytic reasoning and human processes

This sub-classification, for example, includes the following items:

  • Modes of inference
  • Knowledge creation
  • Modeling
  • Hypothesis refinement
  • Human processes (e.g., perception, decision-making).

Interactive visualization

This is comprised of:

  • The Science of Visualization
  • The Science of Interaction.

The former includes icons, positioning, motion, abstraction, etc., while the latter includes language of discourse, design and art, user-tailored interaction and simulation interaction.

Data representations and theory of knowledge

This includes (but is not limited to):

  • Data Sourcing
  • Scale and Complexity
  • Aggregation
  • Ontology
  • Predictions Representations.

Theory of communications

This sub-classification includes for example the following:

  • Story Creation
  • Theme Flow/Dynamics
  • Reasoning representation.

Systems and evaluations

This last sub-classification comprises:

  • Application Programming Interface
  • Lightweight Standards
  • Privacy.

Patrick Philippe Meier

Part 6: Mobile Technologies and Collaborative Analytics

This is Part 6 of 7 of the highlights from “Illuminating the Path: The Research and Development Agenda for Visual Analytics.” Please see this post for an introduction to the study and access to the other 6 parts.

Mobile Technologies

The National Visualization and Analytics Center (NVAC) study recognizes that “mobile technologies will play a role in visual analytics, especially to users at the front line of homeland security.” To this end, researchers must “devise new methods to best employ these technologies and provide a means to allow data to scale between high-resolution displays in command and control centers to field-deployable displays.”

Collaborative Analytics

While collaborative platforms from wikis to Google Docs allow many individuals to work collaboratively, these functionalities rarely feature in crisis mapping platforms. And yet, humanitarian crises (just like homeland security challenges) are so complex that they cannot be addressed by individuals working in silos.

On the contrary, crisis analysis, civilian protection and humanitarian response efforts are “sufficiently large scale and important that they must be addressed through the coordinated action of multiple groups of people, often with different backgrounds working in disparate locations with differing information.”

In other words, “the issue of human scalability plays a critical role, as systems must support the communications needs of these groups of people working together across space and time, in high-stress and time-sensitive environments, to make critical decisions.”

Patrick Philippe Meier

Part 5: Data Visualization and Interactive Interface Design

This is Part 5 of 7 of the highlights from “Illuminating the Path: The Research and Development Agenda for Visual Analytics.” Please see this post for an introduction to the study and access to the other 6 parts.

Data Visualization

The visualization of information “amplifies human cognitive capabilities in six basic ways” by:

  • Increasing cognitive resources, such as by using a visual resource to expand human working memory;
  • Reducing search, such as by representing a large amount of data in a small place;
  • Enhancing the recognition of patterns, such as when information is organized in space by its time relationships;
  • Supporting the easy perceptual inference of relationships that are otherwise more difficult to induce;
  • Enabling perceptual monitoring of a large number of potential events;
  • Providing a manipulable medium that, unlike static diagrams, enables the exploration of a space of parameter values.

The table below provides additional information on how visualization amplifies cognition:


Clearly, “these capabilities of information visualization, combined with computational data analysis, can be applied to analytic reasoning to support the sense-making process.” The National Visualization and Analytics Center (NVAC) thus recommends developing “visually based methods to support the entire analytic reasoning process, including the analysis of data as well as structured reasoning techniques such as the construction of arguments, convergent-divergent investigation, and evaluation of alternatives.”

Since “well-crafted visual representations can play a critical role in making information clear […], the visual representations and interactions we develop must readily support users of varying backgrounds and expertise.” To be sure, “visual representations and interactions must be developed with the full range of users in mind, from the experienced user to the novice working under intense pressure […].”

As NVAC notes, “visual representations are the equivalent of power tools for analytical reasoning.” But just like real power tools, they can cause harm if used carelessly. Indeed, it is important to note that “poorly designed visualizations may lead to an incorrect decision and great harm. A famous example is the poor visualization of the O-ring data produced before the disastrous launch of the Challenger space shuttle […].”

Effective Depictions

This is why we need some basic principles for developing effective depictions, such as the following:

  • Appropriateness Principle: the visual representation should provide neither more nor less information than that needed for the task at hand. Additional information may be distracting and make the task more difficult.
  • Naturalness Principle: experiential cognition is most effective when the properties of the visual representation most closely match the information being represented. This principle supports the idea that new visual metaphors are only useful for representing information when they match the user’s cognitive model of the information. Purely artificial visual metaphors can actually hinder understanding.
  • Matching Principle: representations of information are most effective when they match the task to be performed by the user. Effective visual representations should present affordances suggestive of the appropriate action.
  • Congruence Principle: the structure and content of a visualization should correspond to the structure and content of the desired mental representation.
  • Apprehension Principle: the structure and content of a visualization should be readily and accurately perceived and comprehended.

Further research is needed to understand “how best to combine time and space in visual representation.” For example, in the flow map, “spatial information is primary” in that it defines the coordinate system, but “why is this the case, and are there visual representations where time is foregrounded that could also be used to support analytical tasks?”

In sum, we must deepen our understanding of temporal reasoning and “create task-appropriate methods for integrating spatial and temporal dimensions of data into visual representations.”

Interactive Interface Design

It is important in the visual analytics process that researchers focus on visual representations of data and interaction design in equal measure. “We need to develop a ‘science of interaction’ rooted in a deep understanding of the different forms of interaction and their respective benefits.”

For example, one promising approach for simplifying interactions is to use 3D graphical user interfaces. Another is to move beyond single modality (or human sense) interaction techniques.

Indeed, recent research suggests that “multi-modal interfaces can overcome problems that any one modality may have. For example, voice and deictic (e.g., pointing) gestures can complement each other and make it easier for the user to accomplish certain tasks.” In fact, studies suggest that “users prefer combined voice and gestural communication over either modality alone when attempting graphics manipulation.”

Patrick Philippe Meier

Part 4: Automated Analysis and Uncertainty Visualized

This is Part 4 of 7 of the highlights from “Illuminating the Path: The Research and Development Agenda for Visual Analytics.” Please see this post for an introduction to the study and access to the other 6 parts.

As the flood of data increases, the human eye may have difficulty focusing on patterns. To this end, visual analytics (VA) systems should have “semi-automated analytic engines and user-driven interfaces.” Indeed, “an ideal environment for analysis would have a seamless integration of computational and visual techniques.”

For example, “the visual overview may be based on some preliminary data transformations […]. Interactive focusing, selecting, and filtering could be used to isolate data associated with a hypothesis, which could then be passed to an analysis engine with informed parameter settings. Results could be superimposed on the original information to show the difference between the raw data and the computed model, with errors highlighted visually.”
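
That workflow can be caricatured in a few lines of Python, with a trivial “model” (the mean) standing in for a real analysis engine:

```python
def fit_and_flag(data, window, tolerance=5.0):
    """Select points in an analyst-chosen time window, fit a trivial
    model (the mean), and return residuals large enough that the
    visualization should highlight them."""
    selected = [(t, v) for t, v in data if window[0] <= t <= window[1]]
    model = sum(v for _, v in selected) / len(selected)
    flagged = [(t, v, v - model) for t, v in selected
               if abs(v - model) > tolerance]
    return model, flagged

# A hypothetical indicator time series with one anomalous reading.
series = [(1, 10), (2, 11), (3, 9), (4, 22), (5, 10)]
model, outliers = fit_and_flag(series, window=(1, 5))
```

Superimposing `outliers` on the raw series is the “errors highlighted visually” step the excerpt describes.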

Yet current mathematical techniques “for representing pattern and structure, as well as visualizing correlations, time patterns, metadata relationships, and networks of linked information,” do not work well “for more complex reasoning tasks—particularly temporal reasoning and combined time and space reasoning […], much work remains to be done.” Furthermore, “existing techniques also fail when faced with the massive scale, rapidly changing data, and variety of information types we expect for visual analytics tasks.”

Furthermore, “the complexity of this problem will require algorithmic advances to address the establishment and maintenance of uncertainty measures at varying levels of data abstraction.” There is presently “no accepted methodology to represent potentially erroneous information, such as varying precision, error, conflicting evidence, or incomplete information.”

To this end, “interactive visualization methods are needed that allow users to see what is missing, what is known, what is unknown, and what is conjectured, so that they may infer possible alternative explanations.”

In sum, “uncertainty must be displayed if it is to be reasoned with and incorporated into the visual analytics process. In existing visualizations, much of the information is displayed as if it were true.”
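
One simple way to operationalize this is to attach an explicit epistemic status to every datum and let that status drive the visual encoding (the statuses and styles below are purely illustrative):

```python
# Each report carries an explicit status instead of being rendered as if
# it were true; the style table maps status to a visual treatment.
REPORTS = [
    {"event": "clash", "place": "Area A", "status": "confirmed"},
    {"event": "clash", "place": "Area B", "status": "conjectured"},
    {"event": None,    "place": "Area C", "status": "missing"},
]

STYLE = {"confirmed": "solid", "conjectured": "dashed", "missing": "outline"}

def render(reports):
    """Map each report to a visual style that encodes its uncertainty."""
    return [(r["place"], STYLE[r["status"]]) for r in reports]
```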

Patrick Philippe Meier

Part 3: Data Tetris and Information Synthesis

This is Part 3 of 7 of the highlights from “Illuminating the Path: The Research and Development Agenda for Visual Analytics.” Please see this post for an introduction to the study and access to the other 6 parts.

Visual Analytics (VA) tools need to integrate and visualize different data types. But the integration of this data needs to be “based on their meaning rather than the original data type” in order to “facilitate knowledge discovery through information synthesis.” However, “many existing visual analytics systems are data-type-centric. That is, they focus on a particular type of data […].”

We know that different types of data are regularly required to conduct solid analysis, so developing a data synthesis capability is particularly important. This means the ability to “bring data of different types together in a single environment […] to concentrate on the meaning of the data rather than on the form in which it was originally packaged.”

To be sure, information synthesis needs to “extend beyond the current data-type modes of analysis to permit the analyst to consider dynamic information of all types in [a] seamless environment.” So we need to “eliminate the artificial constraints imposed by data type so that we can aid the analyst in reaching deeper analytical insight.”
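
A minimal sketch of what meaning-centric synthesis implies (field names invented): records from different source types are normalized into a single what/where/when schema before any analysis takes place.

```python
def normalize(record):
    """Map heterogeneous source records onto one common event schema so
    that analysis works on meaning, not on the original data type."""
    if record["type"] == "sms":
        return {"what": record["msg"], "where": record["cell_tower"],
                "when": record["received"]}
    if record["type"] == "news":
        return {"what": record["headline"], "where": record["dateline"],
                "when": record["published"]}
    raise ValueError(f"unknown source type: {record['type']}")

sms = {"type": "sms", "msg": "looting", "cell_tower": "Area A",
       "received": "2009-06-12"}
news = {"type": "news", "headline": "looting reported", "dateline": "Area A",
        "published": "2009-06-12"}
# Both collapse to the same what/where/when structure.
```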

To this end, we need breakthroughs in “automatic or semi-automatic approaches for identifying [and coding] content of imagery and video data.” A semi-automatic approach could draw on crowdsourcing, much like Ushahidi’s Swift River.

In other words, we need to develop visual analytical tools that do not force the analyst to “perceptually and cognitively integrate multiple elements. […] Systems that force a user to view sequence after sequence of information are time-consuming and error-prone.” New techniques are also needed to do away with the separation of “what I want and the act of doing it.”

Patrick Philippe Meier