
Part 2: Data Flooding and Platform Scarcity

This is Part 2 of 7 of the highlights from “Illuminating the Path: The Research and Development Agenda for Visual Analytics.” Please see this post for an introduction to the study and access to the other 6 parts.

Data Flooding

Data flooding is a term I use to illustrate the fact that “our ability to collect data is increasing at a faster rate than our ability to analyze it.” To this end, I completely agree with the recommendation that new methods are required to “allow the analyst to examine this massive, multi-dimensional, multi-source, time-varying information stream to make decisions in a time-critical manner.”

We don’t want less information but more, since “large data volumes allow analysts to discover more complete information about a situation.” To be sure, “scale brings opportunities as well.” For example, “analysts may be able to determine more easily when expected information is missing,” which sometimes “offers important clues […].”

However, while computer processing power and memory density have changed radically over the decades, “basic human skills and abilities do not change significantly over time.” Technological advances can certainly leverage our skills “but there are fundamental limits that we are asymptotically approaching,” hence the notion of information glut.

In other words, “human skills and abilities do not scale.” That said, the number of humans involved in analytical problem-solving does scale. Unfortunately, however, “most published techniques for supporting analysis are targeted for a single user at a time.” This means that new techniques that “gracefully scale from a single user to a collaborative (multi-user) environment” need to be developed.

Platform Scarcity

However, current technologies and platforms used in the humanitarian and human rights communities do not address the need to handle ever-changing volumes of information. “Furthermore, current tools provide very little in the way of support for the complex tasks of the analysis and discovery process.” There clearly is a platform scarcity.

Admittedly, “creating effective visual representations is a labor-intensive process that requires a solid understanding of the visualization pipeline, characteristics of the data to be displayed, and the tasks to be performed.”

However, as is clear from the crisis mapping projects I have consulted on, “most visualization software is written with incomplete knowledge of at least some of this information.” Indeed, it is rarely possible for “the analyst, who has the best understanding of the data and task, to construct new tools.”

The NVAC study thus recommends that “research is needed to create software that supports the most complex and time-consuming portions of the analytical process, so that analysts can respond to increasingly more complex questions.” To be sure, “we need real-time analytical monitoring that can alert first responders to unusual situations in advance.”

Patrick Philippe Meier

Part 1: Visual Analytics

This is Part 1 of 7 of the highlights from “Illuminating the Path: The Research and Development Agenda for Visual Analytics.” Please see this post for an introduction to the study and access to the other 6 parts.

NVAC defines Visual Analytics (VA) as “the science of analytical reasoning facilitated by interactive visual interfaces. People use VA tools and techniques to synthesize information and derive insights from massive, dynamic, ambiguous, and often conflicting data; detect the expected and discover the unexpected; provide timely, defensible, and understandable assessments; and communicate assessment effectively for action.”

The field of VA is necessarily multidisciplinary and combines “techniques from information visualization with techniques from computational transformation and analysis of data.” VA includes the following focus areas:

  • Analytical reasoning techniques, “that enable users to obtain deep insights that directly support assessment, planning and decision-making”;
  • Visual representations and interaction techniques, “that take advantage of the human eye’s broad bandwidth pathway into the mind to allow users to see, explore, and understand large amounts of information at once”;
  • Data representation and transformations, “that convert all types of conflicting and dynamic data in ways that support visualization and analysis”;
  • Production, presentation and dissemination techniques, “to communicate information in the appropriate context to a variety of audiences.”

As is well known, “the human mind can understand complex information received through visual channels.” The goal of VA is thus to facilitate the analytical reasoning process “through the creation of software that maximizes human capacity to perceive, understand, and reason about complex and dynamic situations.”

In sum, “the goal is to facilitate high-quality human judgment with a limited investment of the analysts’ time.” This means, in part, to “expose all relevant data in a way that facilitates the reasoning process to enable action.” To be sure, solving a problem often means representing it so that the solution is more obvious (adapted from Herbert Simon). Sometimes, the simple act of placing information on a timeline or a map can generate clarity and profound insight. Indeed, both “temporal relationships and spatial patterns can be revealed through timelines and maps.”

VA also reduces the costs associated with sense-making in two primary ways, by:

  1. Transforming information into forms that allow humans to offload cognition onto easier perceptual processes;
  2. Allowing software agents to do some of the filtering, representation translation, interpretation, and even reasoning.

That said, we should keep in mind that “human-designed visualizations are still much better than those created by our information visualization systems.” That is, there are more “highly evolved and widely used metaphors created by human information designers” than there are “successful new computer-mediated visual representations.”

Patrick Philippe Meier

Research Agenda for Visual Analytics

I just finished reading “Illuminating the Path: The Research and Development Agenda for Visual Analytics.” The National Visualization and Analytics Center (NVAC) published the 200-page book in 2005 and the volume is absolutely one of the best treatises I’ve come across on the topic yet. The purpose of the series of posts that follows is to share some highlights and excerpts relevant for crisis mapping.


Co-edited by James Thomas and Kristin Cook, the book focuses specifically on homeland security, but there are numerous insights to be gained on how visual analytics can also illuminate the path for crisis mapping analytics. Recall that the field of conflict early warning originated in part from World War II and the lack of warning before the attack on Pearl Harbor.

Several coordinated systems for the early detection of a Soviet bomber attack on North America were set up in the early days of the Cold War. The Distant Early Warning Line, or DEW Line, was the most sophisticated of these. The point to keep in mind is that the national security establishment is often in the lead when it comes to initiatives that can also be applied for humanitarian purposes.

The motivation behind the launching of NVAC and this study was 9/11. In my opinion, this volume goes a long way toward validating the field of crisis mapping. I highly recommend it to colleagues in both the humanitarian and human rights communities. In fact, the book is directly relevant to my current consulting work with the UN’s Threat and Risk Mapping Analysis (TRMA) project in the Sudan.

So this week, iRevolution will be dedicated to sharing daily highlights from the NVAC study. Taken together, these posts will provide a good summary of the rich and in-depth 200-page study. So check back on this post for live links to NVAC highlights:

Part 1: Visual Analytics

Part 2: Data Flooding and Platform Scarcity

Part 3: Data Tetris and Information Synthesis

Part 4: Automated Analysis and Uncertainty Visualized

Part 5: Data Visualization and Interactive Interface Design

Part 6: Mobile Technologies and Collaborative Analytics

Part 7: Towards a Taxonomy of Visual Analytics

Note that the sequence above does not correspond to specific individual chapters in the NVAC study. This structure for the summary is what made most sense.

Patrick Philippe Meier

JRC: Geo-Spatial Analysis for Global Security

The European Commission’s Joint Research Center (JRC) is doing some phenomenal work on Geo-Spatial Information Analysis for Global Security and Stability. I’ve had several meetings with JRC colleagues over the years and have always been very impressed with their projects.

The group is not very well known outside Europe so the purpose of this blog post is to highlight some of the Center’s projects.

  • Enumeration of Refugee Camps: The project developed an operational methodology to estimate refugee populations using very high resolution (VHR) satellite imagery. “The methodology relies on a combination of machine-assisted procedures, photo-interpretation and statistical sampling.”


  • Benchmarking Hand Held Equipment for Field Data Collection: This project tested new devices for the collection of geo-referenced information. “The assessment of the instruments considered their technical characteristics, like the availability of necessary instruments or functionalities, technical features, hardware specifics, software compatibility and interfaces.”


  • GEOCREW – Study on Geodata and Crisis Early Warning: This project analyzed the use of geo-spatial technology in the decision-making process of institutions dealing with international crises, and aimed to document best practice in that use.
  • Support to Peacekeeping Operations in the Sudan: Maps are generally not available, or are often out of date, for most of the conflict areas in which peacekeeping personnel are deployed. This UNDPKO Darfur mapping initiative aimed to create an alliance of partners that addressed this gap and shared the results.


  • Temporary Settlement Analysis by Remote Sensing: The project analyzes different types of refugee and IDP settlements to identify single structures inside refugee settlements. “The objective of the project is to establish the first comprehensive catalog of image interpretation keys, based on last-generation satellite data and related to the analysis of transitional settlements.”

JRC colleagues often publish papers on their work and I highly recommend having a look at this book when it comes out in June 2009:

[Image: book cover]

Patrick Philippe Meier

Video Introduction to Crisis Mapping

I’ve given many presentations on crisis mapping over the past two years but these were never filmed. So I decided to create this video presentation with narration in order to share my findings more widely and hopefully get a lot of feedback in the process. The presentation is not meant to be exhaustive although the video does run to about 30 minutes.

The topics covered in this presentation include:

  • Crisis Map Sourcing – information collection;
  • Mobile Crisis Mapping – mobile technology;
  • Crisis Mapping Visualization – data visualization;
  • Crisis Mapping Analysis – spatial analysis.

The presentation references several blog posts of mine in addition to several operational projects to illustrate the main concepts behind crisis mapping. The individual blog posts featured in the presentation are listed below:

This research is the product of a 2-year grant provided by Humanity United  (HU) to the Harvard Humanitarian Initiative’s (HHI) Program on Crisis Mapping and Early Warning, where I am a doctoral fellow.

I look forward to any questions/suggestions you may have on the video primer!

Patrick Philippe Meier

Threat and Risk Mapping Analysis in Sudan

Massively informative.

That’s how I would describe my past 10 days with the UNDP‘s Threat and Risk Mapping Analysis (TRMA) project in the Sudan. The team here is doing some of the most exciting work I’ve seen in the field of crisis mapping. Truly pioneering. I can’t think of  a better project to apply the past two years of work I have done with the Harvard Humanitarian Initiative’s (HHI) Crisis Mapping and Early Warning Program.

TRMA combines all the facets of crisis mapping that I’ve been focusing on since 2007. Namely, crisis map sourcing (CMS), mobile crisis mapping (MCM), crisis mapping visualization (CMV), crisis mapping analytics (CMA) and crisis mapping platforms (CMP). I’ll be blogging about each of these in more detail later but wanted to provide a sneak preview in the meantime.

Crisis Map Sourcing (CMS)

The team facilitates 2-day focus groups using participatory mapping methods. Participants identify and map the most pressing crisis factors in their immediate vicinity. It’s really quite stunning to see just how much conversation a map can generate. Rich local knowledge.


What’s more, TRMA conducts these workshops at two levels for each locality (administrative boundaries within a state): the community level and the state level. They can then compare the perceived threats and risks from both points of view. Makes for very interesting comparisons.


In addition to this consultative approach to crisis map sourcing, TRMA has played a pivotal role in setting up an Information Management Working Group (IMWG) in the Sudan, which includes the UN’s leading field-based agencies.

What is truly extraordinary about this initiative is that each agency has formally signed an information sharing protocol to share their geo-referenced data. TRMA had already been using much of this data but the process until now had always been challenging since it required repeated bilateral efforts. TRMA has also developed a close professional relationship with the Central Bureau of Statistics Office.

Mobile Crisis Mapping (MCM)

The team has just partnered with a multinational communications corporation to introduce the use of mobile phones for information collection. I’ll write more about this in the coming weeks. Needless to say, I’m excited. Hopefully it won’t be too late to bring up FrontlineSMS‘s excellent work in this area, as well as Ushahidi‘s.

Crisis Mapping Visualization (CMV)

The team needs some help in this area, but then again, that’s one of the reasons I’m here. Watching first reactions during focus groups when we show participants the large GIS maps of their state is really telling. Lots more to write about on this and lots to contribute to TRMA’s work. I don’t yet know which maps can be made public but I’ll do my utmost to get permission to post one or two in the coming weeks.

Crisis Mapping Analytics (CMA)

The team has produced a rich set of data layers which can be superimposed to identify visual correlations and otherwise hidden patterns. Perhaps one of the most exciting examples is when the team started drawing fault lines on the maps based on the data collected and their own local area expertise. The team subsequently realized that these fault lines could potentially serve as “early warning” markers since a number of conflict incidents subsequently took place along those lines. Like the other crisis mapping components described above, there’s much more to write on this!

Crisis Mapping Platforms (CMP)

TRMA’s GIS team has used ArcGIS but this has been challenging given the US embargo on the Sudan. They therefore developed their own in-house mapping platforms using open-source software. These platforms include the “Threat Mapper” for data entry during (or shortly after) the focus groups and “4Ws” which stands for Who, What, Where and When. The latter tool is operational and will soon be fully developed. 4Ws will actually be used by members of the IMWG to share and visualize their data.
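
For readers who like concrete data structures, here is a purely hypothetical sketch of what a single 4Ws record might look like; the field names and values are my own illustration, not TRMA’s actual schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class FourWsRecord:
    """Hypothetical Who-What-Where-When entry shared among IMWG members."""
    who: str                    # reporting agency
    what: str                   # activity or incident type
    where: tuple[float, float]  # (latitude, longitude) of the locality
    when: date                  # date of the activity or report

# Illustrative values only:
record = FourWsRecord(
    who="UNDP/TRMA",
    what="participatory threat mapping focus group",
    where=(11.79, 34.35),   # roughly Ed Damazin, Blue Nile State
    when=date(2009, 2, 15), # made-up date
)
```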

In addition, TRMA makes its many maps and layers available by distributing a customized DVD with ArcReader (which is free). Lots more on this in the coming weeks and hopefully some screenshots as well.

Closing the Feedback Loop

I’d like to close with one quick thought, which I will also expand on in the next few weeks. I’ve been in Blue Nile State over the past three days, visiting a number of different local ministries and civil society groups, including the Blue Nile’s Nomadic Union. We distributed dozens of poster-size maps and at times had hour-long discussions while poring over these maps. As I hinted above, the data visualization can be improved. But the question I want to pose at the moment is: how can we develop a manual GIS platform?

While the maps we distributed were of huge interest to our local partners, they were static, as hard-copy maps are bound to be. This got me thinking about possibly using transparencies to overlay different data/thematic layers on a general hard-copy map. I know transparencies can be printed on. I’m just not sure what sizes they come in or how expensive they are, but they could start simulating the interactive functionality of ArcReader.


Even if they’re only available in A4 size, we could distribute binders with literally dozens of transparencies, each with a printed layer of data. This would allow community groups to actually start doing some analysis themselves and could be far more compelling than just disseminating poster-size static maps, especially in rural areas. Another idea would be to use transparent folders like those below and hand-draw some of the major layers. Alternatively, there might be a type of thin plastic sheet available in the Sudan.

I’m thinking of trying to pilot this at some point. Any thoughts?

[Image: transparent folders]

Patrick Philippe Meier

Crime Mapping Analytics

There are important parallels between crime prevention and conflict prevention.  About half-a-year ago I wrote a blog post on what crisis mapping might learn from crime mapping. My colleague Joe Bock from Notre Dame recently pointed me to an excellent example of crime mapping analytics.

The Philadelphia Police Department (PPD) has a Crime Analysis and Mapping Unit (CAMU) that uses Geographic Information Systems (GIS) to improve crime analysis. The Unit was set up in 1997 and its GIS data includes a staggering 2.5 million new events per year. The data is coded from emergency distress calls and police reports and overlaid with other data such as bars and liquor stores, nightclubs, locations of surveillance cameras, etc.

For this blog post, I draw on the following two sources: (1) Theodore (2009). “Predictive Modeling Becomes a Crime-Fighting Asset,” Law Officer Journal, 5(2), February 2009; and (2) Avencia (2006). “Crime Spike Detector: Using Advanced GeoStatistics to Develop a Crime Early Warning System,” (Avencia White Paper, January 2006).

Introduction

Police track criminal events or ‘incidents’ which are “the basic informational currency of policing—crime prevention cannot take place if there is no knowledge of the location of crime.” Pin maps were traditionally used to represent this data.


GIS platforms now make new types of analysis possible beyond simply “eyeballing” patterns depicted by push pins. “Hot spot” (or “heat map”) analysis is one popular example in which the density of events is color coded to indicate high or low densities.
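
As a rough illustration (my own sketch, not code from the PPD or Avencia), the density surface behind such a heat map can be computed by simply binning incident coordinates into a regular grid and color coding the counts:

```python
import numpy as np

def density_grid(xs, ys, cell_size, bounds):
    """Count incidents per grid cell: the raw surface a hot spot map color codes."""
    min_x, min_y, max_x, max_y = bounds
    x_edges = np.arange(min_x, max_x + cell_size, cell_size)
    y_edges = np.arange(min_y, max_y + cell_size, cell_size)
    counts, _, _ = np.histogram2d(xs, ys, bins=[x_edges, y_edges])
    return counts

# e.g. 500-foot cells over a 10,000 x 10,000 foot area (made-up coordinates)
xs, ys = np.random.uniform(0, 10_000, 200), np.random.uniform(0, 10_000, 200)
grid = density_grid(xs, ys, 500, (0, 0, 10_000, 10_000))
```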

Hot spot analysis by itself, however, does not tell people much they did not already know. Crime occurs in greater amounts in downtown areas and areas where there are more people. This is common sense. Police organize their operations around these facts already.

The City of Philadelphia recognized that traditional hot spot analysis was of limited value and therefore partnered with Avencia to develop and deploy a crime early warning system known as the Crime Spike Detector.

Crime Spike Detector

The Crime Spike Detector is an excellent example of a crime analytics tool that serves as an early warning system for spikes in crime.

The Crime Spike Detector applies geographic statistical tools to discover  abrupt changes in the geographic clusters of crime in the police incident database. The system isolates these aberrations into a cluster, or ‘crime spike’. When such a cluster is identified, a detailed report is automatically e-mailed to the district command staff responsible for the affected area, allowing them to examine the cluster and take action based on the new information.

The Spike Detector provides a more rapid and highly focused evaluation of current conditions in a police district than was previously possible. The system also looks at clusters that span district boundaries and alerts command staff on both sides of these arbitrary administrative lines, resulting in more effective deployment decisions.


More specifically, the Spike Detector analyzes changes in crime density over time and highlights where the change is statistically significant.

[The tool] does this in automated fashion by examining, on a nightly basis, millions of police incident records, identifying aberrations, and e-mailing appropriate police personnel. The results are viewed on a map, so exactly where these crime spikes are taking place are immediately understandable. The map supports ‘drill-through’ capabilities to show detailed graphs, tables, and actual incident reports of crime at that location.

Spike Detection Methodology

The Spike Detector compares the density of individual crime events over both space and time. To be sure, information is more actionable if it is geographically specified for a given time period regarding a specific type of crime. For example, a significant increase in drug-related incidents in a specific neighborhood on a given day is more concrete and actionable than simply observing a general increase in crime in Philadelphia.

The Spike Detector interface allows the user to specify three main parameters: (1) the type of crime under investigation; and (2) the spatial and (3) the temporal resolutions at which to analyze this incident type.

Obviously, doing this in just one way produces very limited information. So the Spike Detector enables end users to perform its operations across a number of different ways of breaking up time, space and crime type. Each of these is referred to as a user-defined search pattern.

To describe what a search pattern looks like, we first need to understand how the three parameters can be specified.

Space. The Spike Detector divides the city into circles of a given radius. As depicted below, the center points of these circles form a grid. Once the distance between these center points is specified, the radius of the circles is set such that the circles completely cover the map. Thus a pattern contains a definition of the distance between the center points of circles.

[Image: grid of circle center points covering the map]
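
Here is a minimal sketch of that spatial division, assuming a square grid (the white paper does not spell out the exact grid geometry). For circles centered on a square grid, the smallest radius that leaves no gaps is half the diagonal of a grid cell:

```python
import math

def circle_grid(min_x, min_y, max_x, max_y, spacing):
    """Square grid of circle centers plus the smallest radius guaranteeing
    the circles completely cover the map: spacing * sqrt(2) / 2."""
    radius = spacing * math.sqrt(2) / 2
    nx = math.ceil((max_x - min_x) / spacing) + 1
    ny = math.ceil((max_y - min_y) / spacing) + 1
    centers = [(min_x + i * spacing, min_y + j * spacing)
               for i in range(nx) for j in range(ny)]
    return centers, radius

# e.g. center points 1,800 feet apart, as in the sample pattern below
centers, radius = circle_grid(0, 0, 30_000, 30_000, 1_800)  # radius ~1,273 ft
```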

Time. The temporal parameter is specified such that a recent period of criminal incidents can be compared to a previous period. By contrasting the densities in each circle across different time periods, any significant changes in density can be identified. Typically, the most recent month is compared to the previous year. This search pattern is known as a block style comparison. A second search pattern is periodic, which “enables search patterns based on crime types that vary on a seasonal basis.”

Incident. Each crime is assigned a Uniform Crime Reporting code. Taking all three parameters together, a search pattern might look like the following:

“Robberies no Gun, 1800, 30, Block, 365”

This means the user is looking for robberies committed without a gun, with a distance between circle center points of 1,800 feet, over the past 30 days of crime data compared to the previous year’s worth of crime.
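
Concretely, such a search pattern could be represented as a small data structure. The following is a hypothetical rendering; the field names and parser are mine, not Avencia’s:

```python
from dataclasses import dataclass

@dataclass
class SearchPattern:
    crime_type: str       # Uniform Crime Reporting category
    spacing_ft: int       # distance between circle center points, in feet
    recent_days: int      # length of the recent period
    style: str            # "Block" or "Periodic" comparison
    comparison_days: int  # length of the comparison period

def parse_pattern(text: str) -> SearchPattern:
    crime, spacing, recent, style, baseline = (p.strip() for p in text.split(","))
    return SearchPattern(crime, int(spacing), int(recent), style, int(baseline))

pattern = parse_pattern("Robberies no Gun, 1800, 30, Block, 365")
```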

Determining Search Patterns

A good search pattern is determined by a combination of three factors: (1) crime type density; (2) short-term versus long-term patterns; and (3) trial and error. Crime type is typically the first and easiest parameter of the search pattern to be specified. Defining the spatial and temporal resolutions requires more thought.

The goal in dividing up time and space is to have enough incidents such that comparing a recent time period to a comparison time period is meaningful. If the time or space divisions are too small, ‘spikes’ are discovered which represent a single incident or few incidents.

The rule of thumb is to have an average of at least 4-6 crimes in each circle area. More frequent crimes permit smaller circle areas and shorter time periods, which highlights spikes more precisely in time and space.
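
Assuming a roughly uniform incident rate (my own arithmetic, not the white paper’s), the rule of thumb translates directly into a minimum circle size:

```python
import math

def min_radius_miles(incidents_per_sqmi_per_day, period_days, target=5.0):
    """Smallest circle radius whose expected incident count over the period
    meets the 4-6 incidents rule of thumb (target = 5 here)."""
    area = target / (incidents_per_sqmi_per_day * period_days)  # square miles
    return math.sqrt(area / math.pi)

# e.g. a crime type at 0.02 incidents per square mile per day, over 30 days:
print(min_radius_miles(0.02, 30))  # ~1.63 miles; rarer crimes need larger circles
```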

Users are typically interested in shorter and more recent time periods, as these are most useful to law enforcement, “though the longer time frames might be of interest to other user communities studying social change or criminology.” In any event,

Patterns need to be tested in practice to see if they are generating useful information. To facilitate this, several patterns can be set up looking at the same crime type with different time and space parameters. After some time, the most useful pattern will become apparent and the other patterns can be dispensed with.

Running Search Patterns

The spike detection algorithm uses simple statistical analysis to determine the probability that the number of recent crimes in a given circle area, as compared to the comparison period, could have occurred due to chance alone. The user specifies the confidence level or sensitivity of the analysis. This is generally set at a 0.5% probability.

Each pattern results in a probability (or p-value) lattice, with a value assigned to every circle center point. The Spike Detector uses this lattice to construct the maps, graphs and reports that it presents to the user. A hypergeometric distribution is used to determine the p-values:

P(X = x) = \frac{\binom{G}{x} \binom{N-G}{n-x}}{\binom{N}{n}}

Where, for example:

N – total number of incidents in all Philadelphia for both the previous 365 days and the current 30 days.

G – total number of incidents in all Philadelphia for just the past 30 days.

n – number of incidents in just this circle for both the previous 365 days and the past 30 days.

x – number of incidents in just this circle for the past 30 days.
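
To make the computation concrete, here is a minimal sketch using SciPy. I am assuming a one-sided test for upward spikes; the sources excerpted here do not spell out the exact tail used:

```python
from scipy.stats import hypergeom

def spike_p_value(N, G, n, x):
    """P(at least x of the circle's n incidents fall in the recent period
    by chance alone), under the hypergeometric model defined above."""
    # SciPy parameterization: population of N incidents, G of them recent,
    # and the circle's n incidents treated as a sample from that population.
    return hypergeom.sf(x - 1, N, G, n)  # sf(x - 1) = P(X >= x)

# Illustrative numbers: a circle with 12 of its 40 incidents in the past
# 30 days, when only 400 of 5,000 citywide incidents are that recent.
p = spike_p_value(N=5_000, G=400, n=40, x=12)
print(p < 0.005)  # True at the 0.5% sensitivity mentioned above
```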

After the probability lattice is generated, the application displays spikes in order of severity and whether they have increased or decreased as compared to the previous day.

Conclusion

One important element of crisis mapping that is often overlooked is its relevance to monitoring and evaluation. With the Spike Detector, the Police Department “can assess the impact and effectiveness of anticrime strategies.” This will be the subject of a blog post in the near future.

For now, I conclude with the following comment from the Philadelphia Police Department:

GIS is changing the way we operate. All police personnel, from the police commissioner down to the officer in the patrol car, can use maps as part of their daily work. Our online mapping applications needed to be fast and user-friendly because police officers don’t have time to become computer experts. I think we’ve delivered on this goal, and it’s transforming what we do and how we serve the community.

Clearly, crime mapping analytics has a lot to offer those of us interested in crisis mapping of violent conflict in places like the DRC and Zimbabwe. What we need is a Neogeography version of the Spike Detector.

Patrick Philippe Meier

A Brief History of Crisis Mapping (Updated)

Introduction

One of the donors I’m in contact with about the proposed crisis mapping conference wisely recommended I add a big-picture background to crisis mapping. This blog post is my first pass at providing a brief history of the field. In a way, this is a combined summary of several other posts I have written on this blog over the past 12 months plus my latest thoughts on crisis mapping.

Naturally, this account of the history is very much influenced by my own experience, so I may have unintentionally missed a few relevant crisis mapping projects. Note that by crisis I refer specifically to armed conflict and human rights violations. As usual, I welcome any feedback and comments you may have so I can improve my blog posts.

From GIS to Neogeography: 2003-2005

The field of dynamic crisis mapping is new and rapidly changing. The three core drivers of this change are (1) the increasing availability and accessibility of open-source, dynamic mapping tools; (2) mobile data collection technologies; and lastly (3) the development of new methodologies.

Some experts at the cutting-edge of this change call the results “Neogeography,” which is essentially about “people using and creating their own maps, on their own terms and by combining elements of an existing toolset.” The revolution in applications for user-generated content and mobile technology provides the basis for widely distributed information collection and crowdsourcing—a term coined by Wired less than three years ago. The unprecedented rise in citizen journalism is stark evidence of this revolution. New methodologies for conflict trends analysis increasingly take spatial and/or inter-annual dynamics into account and thereby reveal conflict patterns that otherwise remain hidden when using traditional methodologies.

Until recently, traditional mapping tools were expensive and highly technical geographic information systems (GIS), proprietary software that required extensive training to produce static maps.

In terms of information collection, trained experts traditionally collected conflict and human rights data and documented these using hard-copy survey forms, which typically became proprietary once completed. Scholars began coding conflict event-data but data sharing was the exception rather than the rule.

With respect to methodologies, the quantitative study of conflict trends was virtually devoid of techniques that took spatial dynamics into account because conflict data at the time was largely macro-level data constrained by the “country-year straightjacket.”

That is, conflict data was limited to the country level and rarely updated more than once a year, which explains why methodologies did not seek to analyze sub-national and inter-annual variations for patterns of conflict and human rights abuses. In addition, scholars in the political sciences were more interested in identifying when conflict was likely to occur as opposed to where. For a more in-depth discussion of this issue, please see my paper from 2006, “On Scale and Complexity in Conflict Analysis” (PDF).

Neogeography is Born: 2005

The pivotal year for dynamic crisis mapping was 2005. This is the year that Google rolled out Google Earth. The application marks an important milestone in Neogeography because the free, user-friendly platform drastically reduced the cost of dynamic and interactive mapping—cost in terms of both availability and accessibility. Microsoft has since launched Virtual Earth to compete with Google Earth and other potential contenders.

Interest in dynamic crisis mapping did exist prior to the availability of Google Earth. This is evidenced by the dynamic mapping initiatives I undertook at Swisspeace in 2003. I proposed that the organization use GIS tools to visualize, animate and analyze the geo-referenced conflict event-data collected by local Swisspeace field monitors in conflict-ridden countries—a project called FAST. In a 2003 proposal, I defined dynamic crisis maps as follows:

FAST Maps are interactive geographic information systems that enable users of leading agencies to depict a multitude of complex interdependent indicators on a user-friendly and accessible two-dimensional map. […] Users have the option of selecting among a host of single and composite events and event types to investigate linkages [between events]. Events and event types can be superimposed and visualized through time using FAST Map’s animation feature. This enables users to go beyond studying a static picture of linkages to a more realistic dynamic visualization.

I just managed to dig up old documents from 2003 and found the interface I had designed for FAST Maps using the template at the time for Swisspeace’s website.

[Images: FAST Maps interface mockups]

However, GIS software was (and still is) prohibitively expensive and highly technical. As a result, Swisspeace was not compelled to make the necessary investments in 2004 to develop the first crisis mapping platform for producing dynamic crisis maps using geo-referenced conflict data. In hindsight, this was the right decision since Google Earth was rolled out the following year.

Enter PRIO and GROW-net: 2006-2007

With the arrival of Google Earth, a variety of dynamic crisis maps quickly emerged. In fact, one of the first (if not the first) applications of Google Earth for crisis mapping was carried out in 2006 by Jen Ziemke and me. We independently used Google Earth and newly available data from the Peace Research Institute, Oslo (PRIO) to visualize conflict data over time and space. (Note that both Jen and I were researchers at PRIO between 2006 and 2007.)

Jen used Google Earth to explain the dynamics and spatio-temporal variation in violence during the Angolan war. To do this, she first coded nearly 10,000 battle and massacre events, as reported in the Portuguese press, that took place over a 40-year period.

Meanwhile, I produced additional dynamic crisis maps of the conflict in the Democratic Republic of the Congo (DRC) for PRIO and of the Colombian civil war for the Conflict Analysis Resource Center (CARC) in Bogota. At the time, researchers in Oslo and Bogota used proprietary GIS software to produce static maps (PDF) of their newly geo-referenced conflict data. PRIO eventually used Google Earth but only to publicize the novelty of their new geo-referenced historical conflict datasets.

Since then, PRIO has continued to play an important role in analyzing the spatial dynamics of armed conflict by applying new quantitative methodologies. Together with universities in Europe, the Institute formed the Geographic Representations of War-net (GROW-net) in 2006, with the goal of “uncovering the causal mechanisms that generate civil violence within relevant geographical and historical configurations.” In 2007, the Swiss Federal Institute of Technology in Zurich (ETH), a member of GROW-net, produced dynamic crisis maps using Google Earth for a project called WarViews.

Crisis Mapping Evolves: 2007-2008

More recently, Automated Crisis Mapping (ACM) has emerged: real-time, automated information collection mechanisms that use natural language processing (NLP) for the automated and dynamic mapping of disaster and health-related events. Examples of such platforms include the Global Disaster Alert and Crisis System (GDACS), CrisisWire, Havaria and HealthMap. Similar platforms have been developed for automated mapping of other news events, such as Global Incident Map, BuzzTracker, Development Seed’s Managing the News, and the Joint Research Center’s European Media Monitor.

Equally recent is the development of Mobile Crisis Mapping (MCM), mobile crowdsourcing platforms designed for the dynamic mapping of conflict and human rights data as exemplified by Ushahidi (with FrontLineSMS) and the Humanitarian Sensor Web (SensorWeb).

Another important development around this time is the practice of participatory GIS, prompted by the recognition that social maps and conflict maps can empower local communities and be used for conflict resolution. Like maps of natural disasters and environmental degradation, these can be developed and discussed at the community level to encourage conversation and joint decision-making. This is a critical component since one of the goals of crisis mapping is to empower individuals to make better decisions.

HHI’s Crisis Mapping Project: 2007-2009

The Harvard Humanitarian Initiative (HHI) is currently playing a pivotal role in crafting the new field of dynamic crisis mapping. Coordinated by Jennifer Leaning and myself, HHI is completing a two-year applied research project on Crisis Mapping and Early Warning. This project comprised a critical and comprehensive evaluation of the field and the documentation of lessons learned, best practices as well as alternative and innovative approaches to crisis mapping and early warning.

HHI also acts as an incubator for new projects and has supported the conceptual development of new crisis mapping platforms like Ushahidi and the SensorWeb. In addition, HHI produced the first comparative and dynamic crisis map of Kenya by drawing on reports from the mainstream media, citizen journalists and Ushahidi to analyze spatial and temporal patterns of conflict events and communication flows during a crisis.

HHI Sets a Research Agenda: 2009

HHI has articulated an action-oriented research agenda for the future of crisis mapping based on the findings from the two-year crisis mapping project. This research agenda can be categorized into the following three areas, which were coined by HHI:

  1. Crisis Map Sourcing
  2. Mobile Crisis Mapping
  3. Crisis Mapping Analytics

1) Crisis Map Sourcing (CMS) seeks to further research on the challenge of visualizing disparate sets of data ranging from structural and dynamic data to automated and mobile crisis mapping data. The challenge of CMS is to develop appropriate methods and best practices for mashing data from Automated Crisis Mapping (ACM) tools and Mobile Crisis Mapping platforms (see below) to add value to Crisis Mapping Analytics (also below).

2) The purpose of setting an applied-research agenda for Mobile Crisis Mapping, or MCM, is to recognize that the future of distributed information collection and crowdsourcing will be increasingly driven by mobile technologies and new information ecosystems. This presents the crisis mapping community with a host of pressing challenges ranging from data validation and manipulation to data security.

These hurdles need to be addressed directly by the crisis mapping community so that new and creative solutions can be applied earlier rather than later. If the persistent problem of data quality is not adequately resolved, then policy makers may question the reliability of crisis mapping for conflict prevention, rapid response and the documentation of human rights violations. Worse still, inaccurate data may put lives at risk.

3) Crisis Mapping Analytics (CMA) is the third critical area of research set by HHI. CMA is becoming increasingly important given the unprecedented volume of geo-referenced data that is rapidly becoming available. Existing academic platforms like WarViews and operational MCM platforms like Ushahidi do not include features that allow practitioners, scholars and the public to query the data and to visually analyze and identify the underlying spatial dynamics of the conflict and human rights data. This is largely true of Automated Crisis Mapping (ACM) tools as well.

In other words, new and informative metrics need to be developed to identify patterns in human rights abuses and violent conflict, both retrospectively and in real time. In addition, existing techniques from spatial econometrics need to be rendered more accessible to non-statisticians and built into existing dynamic crisis mapping platforms.

Conclusion

Jen Ziemke and I thus conclude that the most pressing need in the field of crisis mapping is to bridge the gap between scholars and practitioners who self-identify as crisis mappers. This is the most pressing issue because bridging that divide will enable the field of crisis mapping to effectively and efficiently move forward by pursuing the three research agendas set out by the Harvard Humanitarian Initiative (HHI).

We think this is key to moving the crisis-mapping field into more mainstream humanitarian and human rights work—i.e., operational response. But doing so first requires that leading crisis mapping scholars and practitioners proactively bridge the existing gap. This is the core goal of the crisis mapping conference that we propose to organize.

Patrick Philippe Meier

LIFT09: LifeStream Data Visualization

I hope the team behind LifeStream uploads their official visualizations video very soon. LifeStream takes a design approach to visualizing large quantities of information. LifeStreamer Jan-Christoph Zoels explores how “design information visualizations make what is hidden, unhidden.” When visualized in certain ways, data moves from information to knowledge, and knowledge to wisdom.

“The representation stage is the single most important stage in a visualization project,” adds Jan-Christoph. Why? “Because decisions made at this stage can necessitate a rethinking of decisions made at earlier stages.” Lifestreaming is not finished once the data is visualized in an engaging and clear way. The next important step is to “add methods to interact with the visualization, to manipulate and control the visible features.” (Incidentally, this is also critical for crisis mapping.)

The team previewed their very neat visualization video at LIFT09 and below is a copy filmed by Mark Krinsky during the presentation.

In sum, LifeStream is about shaping new paradigms in user interface design; “paradigms that will allow us to see and handle more information than traditional interfaces by combining different aspects or perspectives.” The team at LifeStream concluded their superb presentation by sharing their thoughts on what makes the best visualizations so appealing. In their own words, visualizations need to be “Natural, Engaging, Flowing, Climactic, Seamless, Accessible, Forgiving, Multi-Modal/Sensorial and Enjoyable.”

Patrick Philippe Meier

LIFT09: Visualizing City Dynamics (Updated)

There were two neat presentations on data visualization of communication dynamics in urban environments. The first, by Stéphane Distinguin from UrbanMobs, included the following visualization of text messages sent throughout Paris during World Music Day:

The visualization below is of “mobile phone calls in Barcelona during the European Football Championship 2008 final and the day after the victory. You can easily notice the different game phases: kick off, half time, goal, end of the match and celebration of the Spanish team victory.”

Carlo Ratti from MIT’s SENSEable City Lab also gave a really neat talk on dynamic visualizations within cities and the patterns that arise.


Carlo showed engaging visualizations for a series of cities. Take the Real Time Rome project, which aggregated data from mobile phones over different periods in Rome. The video represents the communication patterns across Rome during a Madonna concert.

Time zones influence the global rhythm of communications. In the video below, international calls between New York and 255 countries are visualized over a 24-hour period. “Areas of the world receiving and making fewer phone calls shrink while areas experiencing a greater amount of voice call activity expand.”

Carlo also showed an animation of “The Water Pavilion” located at the entrance to Expo Zaragoza 2008. Carlo and his team wanted to convey the sense of water in digital terms and therefore designed an interactive building made of water. Think of digital water like an inkjet printer on a large scale but with water instead of ink.

Patrick Philippe Meier