Tag Archives: Crisis Mapping

GeoTime: Crisis Mapping Analysis in 3D

I just came across GeoTime, a very neat GIS platform for network analysis in time and space. GeoTime is developed by a company called Oculus and does a great job of presenting an appealing 3D visual interface for the temporal analysis of geo-referenced data. The platform integrates timeline comparisons, chart statistics and network analysis tools to support decision making. GeoTime also includes plug-ins for Excel and ArcGIS.

GeoTime

GeoTime includes a number of important functionalities including:

  • Movement trails: to display history and behavior as paths in space-time (see the sketch after this list);
  • Animation: to play back sequences and see how events unfold, with adjustable direction and speed;
  • Pattern recognition: to automatically identify key behaviors;
  • Annotate and Sketch: to add notes directly in the scene and save views as reports;
  • Fast Maps: to automatically adjust the level of detail;
  • Interactive Chain and Network Analysis: to identify related events.
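GeoTime itself is proprietary, so purely to illustrate the space-time idea behind movement trails, here is a minimal sketch of a "space-time cube" plot in Python; the DataFrame and its column names (entity, timestamp, lat, lon) are hypothetical stand-ins for whatever tracking data one might have, not anything from GeoTime.

```python
# Minimal sketch of a "space-time cube" movement trail (not GeoTime's actual
# implementation). Assumes a DataFrame with hypothetical columns:
# entity, timestamp, lat, lon.
import pandas as pd
import matplotlib.pyplot as plt

def plot_trails(df: pd.DataFrame) -> None:
    fig = plt.figure()
    ax = fig.add_subplot(111, projection="3d")
    for entity, track in df.groupby("entity"):
        track = track.sort_values("timestamp")
        # Elapsed time (hours since the first fix) becomes the vertical axis.
        hours = (track["timestamp"] - track["timestamp"].min()).dt.total_seconds() / 3600
        ax.plot(track["lon"], track["lat"], hours, marker="o", label=str(entity))
    ax.set_xlabel("Longitude")
    ax.set_ylabel("Latitude")
    ax.set_zlabel("Hours elapsed")
    ax.legend()
    plt.show()
```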

GeoTime2

Below is an excerpt of a video demo of GeoTime which is well worth watching to get a sense of how these functionalities come into play:

The demo above uses hurricane data to highlight GeoTime’s integrated functionalities. But the application can be used to analyze a wide range of data, such as crime incidents, to identify patterns in space and time. A demo for that is available here.

GeoTime3

My interest in GeoTime stems from its potential application to analyzing conflict datasets. The problem is that the platform will set you back a cool $3,925. They do have a university rate of $1,675, but what’s missing is a rate for humanitarian NGOs or even a limited trial version.

Patrick Philippe Meier

Part 7: A Taxonomy for Visual Analytics

This is Part 7 of 7 of the highlights from the National Visualization and Analytics Center (NVAC). Unlike previous parts, this one focuses on a May 2009 article. Please see this post for an introduction to the study and access to the other 6 parts.

Jim Thomas, co-author of “Illuminating the Path: The Research and Development Agenda for Visual Analytics” and Director of the National Visualization and Analytics Center (NVAC), recently called for the development of a taxonomy for visual analytics. Jim explains the importance of visual analytics as follows:

“Visual analytics are valuable because the tool helps to detect the expected, and discover the unexpected. Visual analytics combines the art of human intuition and the science of mathematical deduction to perceive patterns and derive knowledge and insight from them. With our success in developing and delivering new technologies, we are paving the way for fundamentally new tools to deal with the huge digital libraries of the future, whether for terrorist threat detection or new interactions with potentially life-saving drugs.”

In the latest edition of VAC Views, Jim expresses NVAC’s interest in helping to “define the study of visual analytics by providing an order and arrangement of topics—the taxa that are at the heart of studying visual analytics. The reason for such a ‘definition’ is to more clearly describe the scope and intent of impact for the field of visual analytics.”

Jim and colleagues propose the following higher-order classifications:

  • Domain/Applications
  • Analytic Methods/Goals
  • Science and Technology
  • Data Types/Structures.

In his article in VAC Views, Jim requests feedback and suggestions for improving the more detailed taxonomy he provides in the graphic below. The graphic was not produced at high resolution in VAC Views and does not reproduce well here, so I summarize it below while giving feedback.

VacViews

1. Domain/Applications

While Security (and Health) are included as domains/applications in the draft NVAC proposal, Humanitarian Crises, Conflict Prevention and Disaster Management are missing.

Perhaps “domains” and “applications” should not be combined, since applications tend to be subsets of their associated domains, which creates some confusion. For example, law enforcement is a domain, and crime mapping analysis could be considered an application of visual analytics within it.

2. Analytic Methods/Goals

The entries here include Predictive, Surveillance, Watch/Warn/Alert, Relationship Mapping and Rare Event Identification. A host of other methods are not referred to, such as cluster detection, a core focus of spatial analysis. See the methods table in my previous blog post for examples of spatial cluster detection.
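To make the cluster detection point more concrete, here is a minimal sketch using DBSCAN, a generic density-based method (not anything NVAC prescribes), to group geo-referenced incident points; the coordinates and parameters are illustrative only.

```python
# Minimal sketch of spatial cluster detection on incident coordinates using
# DBSCAN with a haversine distance; all parameters are illustrative.
import numpy as np
from sklearn.cluster import DBSCAN

def detect_clusters(coords_deg: np.ndarray, eps_km: float = 10.0, min_events: int = 3):
    """coords_deg: array of shape (n, 2) holding [lat, lon] in degrees."""
    coords_rad = np.radians(coords_deg)
    earth_radius_km = 6371.0
    labels = DBSCAN(eps=eps_km / earth_radius_km,  # convert km to radians
                    min_samples=min_events,
                    metric="haversine").fit_predict(coords_rad)
    return labels  # -1 marks noise; other integers are cluster ids

# Example with a few synthetic incident locations (three nearby, one far away):
coords = np.array([[0.05, 35.00], [0.06, 35.01], [0.04, 34.99], [2.00, 37.00]])
print(detect_clusters(coords))  # e.g. [0, 0, 0, -1]
```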

Again I find that combining both “analytic methods” and “goals” makes the classification somewhat confusing.

3. Science and Technology

This classification includes the following entries (each of which are elaborated on individually later):

  • Analytic reasoning and human processes
  • Interactive visualization
  • Data representations and theory of knowledge
  • Theory of communications
  • Systems and evaluations.

4. Data Types/Structures

This includes Text, Image, Video, Graph Structures, Models/Simulations, Geospatial Coordinates, time, etc.

Returning now to the sub-classifications under “Science and Technology”:

Analytic reasoning and human processes

This sub-classification, for example, includes the following items:

  • Modes of inference
  • Knowledge creation
  • Modeling
  • Hypothesis refinement
  • Human processes (e.g., perception, decision-making).

Interactive visualization

This is comprised of:

  • The Science of Visualization
  • The Science of Interaction.

The former includes icons, positioning, motion, abstraction, etc., while the latter includes the language of discourse, design and art, user-tailored interaction and simulation interaction.

Data representations and theory of knowledge

This includes (but is not limited to):

  • Data Sourcing
  • Scale and Complexity
  • Aggregation
  • Ontology
  • Predictions Representations.

Theory of communications

This sub-classification includes for example the following:

  • Story Creation
  • Theme Flow/Dynamics
  • Reasoning representation.

Systems and evaluations

This last sub-classification comprises:

  • Application Programming Interface
  • Lightweight Standards
  • Privacy.

Patrick Philippe Meier

Part 6: Mobile Technologies and Collaborative Analytics

This is Part 6 of 7 of the highlights from “Illuminating the Path: The Research and Development Agenda for Visual Analytics.” Please see this post for an introduction to the study and access to the other 6 parts.

Mobile Technologies

The National Visualization and Analytics Center (NVAC) study recognizes that “mobile technologies will play a role in visual analytics, especially to users at the front line of homeland security.” To this end, researchers must “devise new methods to best employ these technologies and provide a means to allow data to scale between high-resolution displays in command and control centers to field-deployable displays.”

Collaborative Analytics

While collaborative platforms from wikis to Google Docs allow many individuals to work together, these functionalities rarely feature in crisis mapping platforms. And yet, humanitarian crises (just like homeland security challenges) are so complex that they cannot be addressed by individuals working in silos.

On the contrary, crisis analysis, civilian protection and humanitarian response efforts are “sufficiently large scale and important that they must be addressed through the coordinated action of multiple groups of people, often with different backgrounds working in disparate locations with differing information.”

In other words, “the issue of human scalability plays a critical role, as systems must support the communications needs of these groups of people working together across space and time, in high-stress and time-sensitive environments, to make critical decisions.”

Patrick Philippe Meier

Updated: Humanitarian Situation Risk Index (HSRI)

The Humanitarian Situation Risk Index (HSRI) is a tool created by UN OCHA in Colombia. The objective of HSRI is to determine the probability that a humanitarian situation occurs in each of the country’s municipalities in relation to the ongoing complex emergency. HSRI’s overall purpose is to serve as a “complementary analytical tool in decision-making allowing for humanitarian assistance prioritization in different regions as needed.”

UPDATE: I actually got in touch with the HSRI group back in February 2009 to let them know about Ushahidi and they have since “been running some beta-testing on Ushahidi, and may as of next week start up a pilot effort to organize a large number of actors in northeastern Colombia to feed data into [their] on-line information system.” In addition, they “plan to move from a logit model calculating probability of a displacement situation for each of the 1,120 Colombian municipalities, to cluster analysis, and have been running the identical model on data [they] have for confined communities.”

hsrimap

HSRI uses statistical tools (principal component analysis and the Logit model) to estimate the risk indexes. The indexes range from 0 to 1, where 0 is no risk and 1 is maximum risk. The team behind the project clearly state that the tool does not indicate the current situation in each municipality given that the data is not collected in real-time. Nor does the tool quantify the precise number of persons at risk.
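Although OCHA’s exact specification is not reproduced here, the general PCA-plus-logit approach can be sketched in a few lines with scikit-learn; the indicator columns and outcome variable below are hypothetical placeholders, not the actual HSRI variables.

```python
# Hedged sketch of a PCA + logit risk index, in the spirit of the HSRI approach
# described above (OCHA's actual model may differ). Assumes a DataFrame of
# municipal indicators plus a binary column 'humanitarian_event'.
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

def estimate_risk_index(df: pd.DataFrame, indicator_cols: list) -> pd.Series:
    X, y = df[indicator_cols], df["humanitarian_event"]
    model = make_pipeline(
        StandardScaler(),                  # put indicators on a common scale
        PCA(n_components=0.9),             # keep components explaining ~90% of variance
        LogisticRegression(max_iter=1000)  # logit model on the principal components
    )
    model.fit(X, y)
    # Predicted probability of a humanitarian situation: a risk index between 0 and 1.
    return pd.Series(model.predict_proba(X)[:, 1], index=df.index, name="risk_index")
```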

The data used to estimate the Humanitarian Situation Risk Index “mostly comes from official sources, due to the fact that the vast majority of data collected and processed are from State entities, and in the remaining cases the data is from non-governmental or multilateral institutions.” The following table depicts the data collected.

hsri

I’d be interested to know whether the project will move towards doing any temporal analysis of the data. This would enable trend analysis, which could inform decision-making more directly than a static map representing static data. One other thought would be to complement this “baseline” type data with event-data, using mobile phones and a “bounded crowdsourcing” approach a la Ushahidi.
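If the indicators were re-estimated periodically, even a very simple per-municipality trend would add that temporal dimension. A minimal sketch, assuming a long-format table with hypothetical columns municipality, date and risk_index:

```python
# Minimal sketch of per-municipality trend analysis, assuming periodic
# re-estimates of the risk index (column names are hypothetical).
import numpy as np
import pandas as pd

def risk_trends(df: pd.DataFrame) -> pd.Series:
    """Return the change in risk_index per year for each municipality."""
    def slope(group: pd.DataFrame) -> float:
        years = (group["date"] - group["date"].min()).dt.days / 365.25
        return np.polyfit(years, group["risk_index"], deg=1)[0]
    return df.sort_values("date").groupby("municipality").apply(slope)
```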

Patrick Philippe Meier

Part 5: Data Visualization and Interactive Interface Design

This is Part 5 of 7 of the highlights from “Illuminating the Path: The Research and Development Agenda for Visual Analytics.” Please see this post for an introduction to the study and access to the other 6 parts.

Data Visualization

The visualization of information “amplifies human cognitive capabilities in six basic ways” by:

  • Increasing cognitive resources, such as by using a visual resource to expand human working memory;
  • Reducing search, such as by representing a large amount of data in a small place;
  • Enhancing the recognition of patterns, such as when information is organized in space by its time relationships;
  • Supporting the easy perceptual inference of relationships that are otherwise more difficult to induce;
  • Enabling perceptual monitoring of a large number of potential events;
  • Providing a manipulable medium that, unlike static diagrams, enables the exploration of a space of parameter values.

The table below provides additional information on how visualization amplifies cognition:

NAVCsTable

Clearly, “these capabilities of information visualization, combined with computational data analysis, can be applied to analytic reasoning to support the sense-making process.” The National Visualization and Analytics Center (NVAC) thus recommends developing “visually based methods to support the entire analytic reasoning process, including the analysis of data as well as structured reasoning techniques such as the construction of arguments, convergent-divergent investigation, and evaluation of alternatives.”

Since “well-crafted visual representations can play a critical role in making information clear […], the visual representations and interactions we develop must readily support users of varying backgrounds and expertise.” To be sure, “visual representations and interactions must be developed with the full range of users in mind, from the experienced user to the novice working under intense pressure […].”

As NVAC notes, “visual representations are the equivalent of power tools for analytical reasoning.” But just like real power tools, they can cause harm if used carelessly. Indeed, it is important to note that “poorly designed visualizations may lead to an incorrect decision and great harm. A famous example is the poor visualization of the O-ring data produced before the disastrous launch of the Challenger space shuttle […].”

Effective Depictions

This is why we need some basic principles for developing effective depictions, such as the following:

  • Appropriateness Principle: the visual representation should provide neither more nor less information than is needed for the task at hand. Additional information may be distracting and make the task more difficult.
  • Naturalness Principle: experiential cognition is most effective when the properties of the visual representation most closely match the information being represented. This principle supports the idea that new visual metaphors are only useful for representing information when they match the user’s cognitive model of the information. Purely artificial visual metaphors can actually hinder understanding.
  • Matching Principle: representations of information are most effective when they match the task to be performed by the user. Effective visual representations should present affordances suggestive of the appropriate action.
  • Congruence Principle: the structure and content of a visualization should correspond to the structure and content of the desired mental representation.
  • Apprehension Principle: the structure and content of a visualization should be readily and accurately perceived and comprehended.

Further research is needed to understand “how best to combine time and space in visual representation.” For example, “in the flow map, spatial information is primary” in that it defines the coordinate system, but “why is this the case, and are there visual representations where time is foregrounded that could also be used to support analytical tasks?”

In sum, we must deepen our understanding of temporal reasoning and “create task-appropriate methods for integrating spatial and temporal dimensions of data into visual representations.”

Interactive Interface Design

It is important in the visual analytics process that researchers focus on visual representations of data and interaction design in equal measure. “We need to develop a ‘science of interaction’ rooted in a deep understanding of the different forms of interaction and their respective benefits.”

For example, one promising approach for simplifying interactions is to use 3D graphical user interfaces. Another is to move beyond single modality (or human sense) interaction techniques.

Indeed, recent research suggests that “multi-modal interfaces can overcome problems that any one modality may have. For example, voice and deictic (e.g., pointing) gestures can complement each other and make it easier for the user to accomplish certain tasks.” In fact, studies suggest that “users prefer combined voice and gestural communication over either modality alone when attempting graphics manipulation.”

Patrick Philippe Meier

Part 4: Automated Analysis and Uncertainty Visualized

This is Part 4 of 7 of the highlights from “Illuminating the Path: The Research and Development Agenda for Visual Analytics.” Please see this post for an introduction to the study and access to the other 6 parts.

As data flooding increases, the human eye may have difficulty focusing on patterns. To this end, VA systems should have “semi-automated analytic engines and user-driven interfaces.” Indeed, “an ideal environment for analysis would have a seamless integration of computational and visual techniques.”

For example, “the visual overview may be based on some preliminary data transformations […]. Interactive focusing, selecting, and filtering could be used to isolate data associated with a hypothesis, which could then be passed to an analysis engine with informed parameter settings. Results could be superimposed on the original information to show the difference between the raw data and the computed model, with errors highlighted visually.”
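That workflow, filtering down to a hypothesis, fitting a model and then superimposing the results on the raw data, can be illustrated with a short sketch; the columns and the choice of a plain linear fit are assumptions made only for illustration.

```python
# Hedged sketch of the filter -> analyze -> superimpose workflow quoted above.
# Assumes a DataFrame of events with hypothetical columns: region, date, count.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

def analyze_and_overlay(df: pd.DataFrame, region: str) -> None:
    # "Interactive focusing, selecting, and filtering", reduced here to a selection.
    subset = df[df["region"] == region].sort_values("date")
    t = np.arange(len(subset))
    # "Analysis engine with informed parameter settings": a plain linear fit.
    coeffs = np.polyfit(t, subset["count"], deg=1)
    fitted = np.polyval(coeffs, t)
    residuals = subset["count"] - fitted
    # Superimpose the computed model on the raw data and highlight large errors.
    plt.plot(subset["date"], subset["count"], "o-", label="raw data")
    plt.plot(subset["date"], fitted, "--", label="computed model")
    outliers = residuals.abs() > 2 * residuals.std()
    plt.scatter(subset["date"][outliers], subset["count"][outliers],
                color="red", zorder=3, label="large errors")
    plt.legend()
    plt.show()
```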

Yet current mathematical techniques “for representing pattern and structure, as well as visualizing correlations, time patterns, metadata relationships, and networks of linked information,” do not work well for more complex reasoning tasks. In particular, for “temporal reasoning and combined time and space reasoning […], much work remains to be done.” Furthermore, “existing techniques also fail when faced with the massive scale, rapidly changing data, and variety of information types we expect for visual analytics tasks.”

In addition, “the complexity of this problem will require algorithmic advances to address the establishment and maintenance of uncertainty measures at varying levels of data abstraction.” There is presently “no accepted methodology to represent potentially erroneous information, such as varying precision, error, conflicting evidence, or incomplete information.”

To this end, “interactive visualization methods are needed that allow users to see what is missing, what is known, what is unknown, and what is conjectured, so that they may infer possible alternative explanations.”

In sum, “uncertainty must be displayed if it is to be reasoned with and incorporated into the visual analytics process. In existing visualizations, much of the information is displayed as if it were true.”
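One plain way to stop displaying uncertain values as if they were true is to carry an uncertainty estimate all the way to the display. A minimal sketch, assuming each reported value comes with a standard error and that gaps in the record should be marked rather than interpolated (the column names are hypothetical):

```python
# Minimal sketch of making uncertainty and missingness visible in a plot
# instead of showing point estimates as if they were exact facts.
import pandas as pd
import matplotlib.pyplot as plt

def plot_with_uncertainty(df: pd.DataFrame) -> None:
    df = df.sort_values("date")
    plt.errorbar(df["date"], df["estimate"], yerr=1.96 * df["std_error"],
                 fmt="o-", capsize=3, label="estimate with 95% interval")
    # Mark where data is missing instead of silently skipping or interpolating it.
    for missing_date in df.loc[df["estimate"].isna(), "date"]:
        plt.axvline(missing_date, color="grey", linestyle=":", alpha=0.6)
    plt.legend()
    plt.show()
```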

Patrick Philippe Meier

Part 3: Data Tetris and Information Synthesis

This is Part 3 of 7 of the highlights from “Illuminating the Path: The Research and Development Agenda for Visual Analytics.” Please see this post for an introduction to the study and access to the other 6 parts.

Visual Analytics (VA) tools need to integrate and visualize different data types. But the integration of this data needs to be “based on their meaning rather than the original data type” in order to “facilitate knowledge discovery through information synthesis.” However, “many existing visual analytics systems are data-type-centric. That is, they focus on a particular type of data […].”

We know that different types of data are regularly required to conduct solid analysis, so developing a data synthesis capability is particularly important. This means the ability to “bring data of different types together in a single environment […] to concentrate on the meaning of the data rather than on the form in which it was originally packaged.”

To be sure, information synthesis needs to “extend beyond the current data-type modes of analysis to permit the analyst to consider dynamic information of all types in [a] seamless environment.” So we need to “eliminate the artificial constraints imposed by data type so that we can aid the analyst in reaching deeper analytical insight.”

To this end, we need breakthroughs in “automatic or semi-automatic approaches for identifying [and coding] content of imagery and video data.” A semi-automatic approach could draw on crowdsourcing, much like Ushahidi‘s Swift River.
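A semi-automatic, crowdsourcing-based coding step could be as simple as aggregating several volunteer labels per image or video clip and flagging disagreements for expert review; the sketch below is a plain majority vote with a made-up agreement threshold, not Swift River’s actual design.

```python
# Hedged sketch of aggregating crowdsourced content labels by majority vote;
# this is a generic illustration, not Ushahidi/Swift River's algorithm.
from collections import Counter

def aggregate_labels(labels_per_item: dict, min_agreement: float = 0.6) -> dict:
    decisions = {}
    for item_id, labels in labels_per_item.items():
        label, votes = Counter(labels).most_common(1)[0]
        # Accept the majority label only if enough volunteers agree;
        # otherwise flag the item for expert review.
        decisions[item_id] = label if votes / len(labels) >= min_agreement else "needs_review"
    return decisions

print(aggregate_labels({"img_01": ["fire", "fire", "smoke"],
                        "img_02": ["crowd", "protest"]}))
# {'img_01': 'fire', 'img_02': 'needs_review'}
```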

In other words, we need to develop visual analytical tools that do not force the analyst to “perceptually and cognitively integrate multiple elements. […] Systems that force a user to view sequence after sequence of information are time-consuming and error-prone.” New techniques are also needed to “do away with the separation of ‘what I want and the act of doing it.’”

Patrick Philippe Meier

Armed Conflict and Location Event Dataset (ACLED)

I joined the Peace Research Institute, Oslo (PRIO) as a researcher in 2006 to do some data development work on a conflict dataset and to work with Norway’s former Secretary of State on assessing the impact of armed conflict on women’s health for the Ministry of Foreign Affairs (MFA).

A related PRIO project, the “Armed Conflict and Location Event Dataset” (ACLED), had recently begun. Having worked with conflict event-datasets as part of operational conflict early warning systems in the Horn of Africa, I quickly took an interest in the project.

While I have referred to ACLED in a number of previous blog posts, two of my main criticisms (until recently) were (1) the lack of data on recent conflicts; and (2) the lack of an interactive interface for geospatial analysis, or at least more compelling visualization platform.

Introducing SpatialKey

Independently, I came across Universal Mind back in November of last year, when Andrew Turner at GeoCommons made a reference to the group’s work in his presentation at an Ushahidi meeting. I featured one of the group’s products, SpatialKey, in my recent video primer on crisis mapping.

As it turns out, ACLED is now using SpatialKey to visualize and analyze some of its data. So the team has definitely come a long way from using ArcGIS and Google Earth, which is great. The screenshot below, for example, depicts the ACLED data on Kenya’s post-election violence using SpatialKey.

ACLEDspatialkey

If the Kenya data is not drawn from Ushahidi, then this could be an exciting research opportunity to compare both datasets using visual analysis and applied geo-statistics. I write “if” because PRIO, somewhat surprisingly, has not made the Kenya data available. They are usually very transparent, so I will follow up with them and hope to get the data. Anyone interested in co-authoring this study?
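Should both datasets become available, one first-pass comparison would be to check how many ACLED events have a nearby Ushahidi report in space and time; the sketch below is one such heuristic (column names and thresholds are assumptions), not a full geo-statistical analysis.

```python
# Hedged sketch of a first-pass comparison between two conflict event datasets:
# the share of events in one set with a nearby counterpart in the other.
# Column names and the distance/time thresholds are illustrative only.
import numpy as np
import pandas as pd

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def match_rate(acled: pd.DataFrame, ushahidi: pd.DataFrame,
               max_km: float = 25.0, max_days: int = 3) -> float:
    matched = 0
    for _, event in acled.iterrows():
        dist_km = haversine_km(event["lat"], event["lon"], ushahidi["lat"], ushahidi["lon"])
        gap_days = (ushahidi["date"] - event["date"]).abs().dt.days
        if ((dist_km <= max_km) & (gap_days <= max_days)).any():
            matched += 1
    return matched / len(acled)  # share of ACLED events with a nearby Ushahidi report
```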

Academics Get Up to Speed

It’s great to see ACLED developing conflict data for more recent conflicts. Data on Chad, Sudan and the Central African Republic (CAR) is also depicted using SpatialKey, but regrettably the underlying spreadsheet data does not appear to be available either. If the data were public, the UN’s Threat and Risk Mapping Analysis (TRMA) project might well have much to gain from using it operationally.

ACLEDspatialkey2

Data Hugging Disorder

I’ll close with just one (perhaps unwarranted) concern, since I still haven’t heard back from ACLED about accessing their data. As academics become increasingly interested in applying geospatial analysis to recent or even current conflicts by developing their own datasets (a very positive move for sure), will they keep their data to themselves until they have published an article in a peer-reviewed journal, which can often take a year or more?

To this end, I share the concern that my colleague Ed Jezierski from InSTEDD articulated in his excellent blog post yesterday: “Academic projects that collect data with preference towards information that will help to publish a paper rather than the information that will be the most actionable or help community health the most.” Worse still, however, would be academics collecting data highly relevant to the humanitarian or human rights community and not sharing it until their academic papers are officially published.

I don’t think there needs to be competition between scholars and like-minded practitioners. There are more and more scholar-practitioners who recognize that they can contribute their research and skills to the benefit of the humanitarian and human rights communities. At the same time, the currency of academia remains the number of peer-reviewed publications. But humanitarian practitioners could simply sign an agreement such that anyone using the data for humanitarian purposes cannot publish any analysis of it in a peer-reviewed forum.

Thoughts?

Patrick Philippe Meier

Part 2: Data Flooding and Platform Scarcity

This is Part 2 of 7 of the highlights from “Illuminating the Path: The Research and Development Agenda for Visual Analytics.” Please see this post for an introduction to the study and access to the other 6 parts.

Data Flooding

Data flooding is a term I use to illustrate the fact that “our ability to collect data is increasing at a faster rate than our ability to analyze it.” To this end, I completely agree with the recommendation that new methods are required to “allow the analyst to examine this massive, multi-dimensional, multi-source, time-varying information stream to make decisions in a time-critical manner.”

We don’t want less, but rather more information since “large data volumes allow analysts to discover more complete information about a situation.” To be sure, “scale brings opportunities as well.” As a result, for example, “analysts may be able to determine more easily when expected information is missing,” which sometimes “offers important clues […].”

However, while computer processing power and memory density have changed radically over the decades, “basic human skills and abilities do not change significantly over time.” Technological advances can certainly leverage our skills “but there are fundamental limits that we are asymptotically approaching,” hence the notion of information glut.

In other words, “human skills and abilities do not scale.” That said, the number of humans involved in analytical problem-solving does scale. Unfortunately, however, “most published techniques for supporting analysis are targeted for a single user at a time.” This means that new techniques that “gracefully scale from a single user to a collaborative (multi-user) environment” need to be developed.

Platform Scarcity

However, current technologies and platforms being used in the humanitarian and human rights communities do not address the needs for handling ever-changing volumes of information. “Furthermore, current tools provide very little in the way of support for the complex tasks of the analysis and discovery process.” There clearly is a platform scarcity.

Admittedly, “creating effective visual representations is a labor-intensive process that requires a solid understanding of the visualization pipeline, characteristics of the data to be displayed, and the tasks to be performed.”

However, as is clear from the crisis mapping projects I have consulted on, “most visualization software is written with incomplete knowledge of at least some of this information.” Indeed, it is rarely possible for “the analyst, who has the best understanding of the data and task, to construct new tools.”

The NVAC study thus recommends that “research is needed to create software that supports the most complex and time-consuming portions of the analytical process, so that analysts can respond to increasingly more complex questions.” To be sure, “we need real-time analytical monitoring that can alert first responders to unusual situations in advance.”
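A bare-bones version of such analytical monitoring is a rolling baseline with an alert threshold: flag any day whose incoming report count exceeds the recent mean by a chosen number of standard deviations. The window and threshold below are assumptions for illustration only.

```python
# Minimal sketch of real-time analytical monitoring: flag unusually high daily
# report counts against a rolling baseline. All thresholds are illustrative.
import pandas as pd

def flag_unusual_days(daily_counts: pd.Series, window: int = 14, n_sigmas: float = 3.0) -> pd.Series:
    """daily_counts: Series indexed by date, holding the number of reports per day."""
    baseline = daily_counts.rolling(window, min_periods=window).mean().shift(1)
    spread = daily_counts.rolling(window, min_periods=window).std().shift(1)
    # A day is 'unusual' if it exceeds the prior window's mean by n_sigmas deviations.
    return daily_counts > baseline + n_sigmas * spread
```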

Patrick Philippe Meier

Part 1: Visual Analytics

This is Part 1 of 7 of the highlights from “Illuminating the Path: The Research and Development Agenda for Visual Analytics.” Please see this post for an introduction to the study and access to the other 6 parts.

NVAC defines Visual Analytics (VA) as “the science of analytical reasoning facilitated by interactive visual interfaces. People use VA tools and techniques to synthesize information and derive insights from massive, dynamic, ambiguous, and often conflicting data; detect the expected and discover the unexpected; provide timely, defensible, and understandable assessments; and communicate assessment effectively for action.”

The field of VA is necessarily multidisciplinary and combines “techniques from information visualization with techniques from computational transformation and analysis of data.” VA includes the following focus areas:

  • Analytical reasoning techniques, “that enable users to obtain deep insights that directly support assessment, planning and decision-making”;
  • Visual representations and interaction techniques, “that take advantage of the human eye’s broad bandwidth pathway into the mind to allow users to see, explore, and understand large amounts of information at once”;
  • Data representation and transformations, “that convert all types of conflicting and dynamic data in ways that support visualization and analysis”;
  • Production, presentation and dissemination techniques, “to communicate information in the appropriate context to a variety of audiences.”

As is well known, “the human mind can understand complex information received through visual channels.” The goal of VA is thus to facilitate the analytical reasoning process “through the creation of software that maximizes human capacity to perceive, understand, and reason about complex and dynamic situations.”

In sum, “the goal is to facilitate high-quality human judgment with a limited investment of the analysts’ time.” This means in part to “expose all relevant data in a way that facilitates the reasoning process to enable action.” To be sure, solving a problem often means representing it so that the solution is more obvious (adapted from Herbert Simon). Sometimes, “the simple act of placing information on a timeline or a map can generate clarity and profound insight.” Indeed, both “temporal relationships and spatial patterns can be revealed through timelines and maps.”

VA also reduces the costs associated with sense-making in two primary ways, by:

  1. Transforming information into forms that allow humans to offload cognition onto easier perceptual processes;
  2. Allowing software agents to do some of the filtering, representation translation, interpretation, and even reasoning.

That said, we should keep in mind that “human-designed visualizations are still much better than those created by our information visualization systems.” That is, there are more “highly evolved and widely used metaphors created by human information designers” than there are “successful new computer-mediated visual representations.”

Patrick Philippe Meier