Tag Archives: Visual Analytics

Armed Conflict Location and Event Dataset (ACLED)

I joined the Peace Research Institute, Oslo (PRIO) as a researcher in 2006 to do some data development work on a conflict dataset and to work with Norway’s former Secretary of State on assessing the impact of armed conflict on women’s health for the Ministry of Foreign Affairs (MFA).

I quickly became interested in a related PRIO project that had recently begun: the Armed Conflict Location and Event Dataset (ACLED). Having worked with conflict event datasets as part of operational conflict early warning systems in the Horn of Africa, I was naturally drawn to the project.

While I have referred to ACLED in a number of previous blog posts, two of my main criticisms (until recently) were (1) the lack of data on recent conflicts, and (2) the lack of an interactive interface for geospatial analysis, or at least a more compelling visualization platform.

Introducing SpatialKey

Independently, I came across Universal Mind back in November of last year, when Andrew Turner at GeoCommons referred to the group’s work in his presentation at an Ushahidi meeting. I featured one of the group’s products, SpatialKey, in my recent video primer on crisis mapping.

As it turns out, ACLED is now using SpatialKey to visualize and analyze some of its data. The team has definitely come a long way from using ArcGIS and Google Earth, which is great. The screenshot below, for example, depicts the ACLED data on Kenya’s post-election violence using SpatialKey.

[Screenshot: ACLED data on Kenya’s post-election violence visualized in SpatialKey]

If the Kenya data is not drawn from Ushahidi, then this could be an exciting research opportunity to compare the two datasets using visual analysis and applied geo-statistics. I write “if” because PRIO, somewhat surprisingly, has not made the Kenya data available. They are usually very transparent, so I will follow up with them and hope to get the data. Anyone interested in co-authoring this study?
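For what it’s worth, a first pass at such a comparison would not need to be elaborate. The sketch below is purely illustrative: it assumes hypothetical CSV exports of each dataset (the file names and column names are my own, not the actual ACLED or Ushahidi export formats), aggregates events from both sources onto a common half-degree grid, and compares the resulting spatial patterns. Proper geo-statistics would obviously go much further.

```python
# Rough sketch: compare two conflict-event datasets on a common spatial grid.
# The file names and "latitude"/"longitude" column names are assumptions made
# for this example, not the actual ACLED or Ushahidi export formats.
import numpy as np
import pandas as pd

def grid_counts(df: pd.DataFrame, cell: float = 0.5) -> pd.Series:
    """Count events per grid cell of `cell` degrees (~55 km at the equator)."""
    lat_bin = np.floor(df["latitude"] / cell)
    lon_bin = np.floor(df["longitude"] / cell)
    return df.groupby([lat_bin, lon_bin]).size()

acled = pd.read_csv("acled_kenya.csv")        # hypothetical export
ushahidi = pd.read_csv("ushahidi_kenya.csv")  # hypothetical export

# Align the two surfaces on the union of grid cells, filling empty cells with 0.
both = pd.concat(
    [grid_counts(acled), grid_counts(ushahidi)], axis=1, keys=["acled", "ushahidi"]
).fillna(0)

print(both.corr())                                          # similarity of spatial patterns
print(both.sort_values("acled", ascending=False).head(10))  # top reported hotspots
```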

Academics Get Up to Speed

It’s great to see ACLED developing conflict data for more recent conflicts. Data on Chad, Sudan, and the Central African Republic (CAR) is also depicted using SpatialKey, but regrettably the underlying spreadsheet data again does not appear to be available. If the data were public, the UN’s Threat and Risk Mapping Analysis (TRMA) project might well have much to gain from using it operationally.

[Screenshot: ACLED data on Chad, Sudan, and the Central African Republic visualized in SpatialKey]

Data Hugging Disorder

I’ll close with one concern, perhaps unwarranted, since I still haven’t heard back from ACLED about accessing their data. As academics become increasingly interested in applying geospatial analysis to recent or even current conflicts by developing their own datasets (a very positive move, for sure), will they keep their data to themselves until they have published an article in a peer-reviewed journal, a process that can often take a year or more?

To this end, I share the concern that my colleague Ed Jezierski from InSTEDD articulated in his excellent blog post yesterday: “Academic projects that collect data with preference towards information that will help to publish a paper rather than the information that will be the most actionable or help community health the most.” Worse still, however, would be academics collecting data very relevant to the humanitarian or human rights community and not sharing that data until their academic papers are officially published.

I don’t think there needs to be competition between scholars and like-minded practitioners. There are increasingly more scholar-practitioners who recognize that they can contribute their research and skills to the benefit of the humanitarian and human rights communities. At the same time, the currency of academia remains the number of peer-reviewed publications. But humanitarian practitioners could simply sign an agreement stipulating that anyone using the data for humanitarian purposes will not publish any analysis of that data in a peer-reviewed forum.

Thoughts?

Patrick Philippe Meier

Part 2: Data Flooding and Platform Scarcity

This is Part 2 of 7 of the highlights from “Illuminating the Path: The Research and Development Agenda for Visual Analytics.” Please see this post for an introduction to the study and access to the other 6 parts.

Data Flooding

Data flooding is a term I use to illustrate the fact that “our ability to collect data is increasing at a faster rate than our ability to analyze it.” To this end, I completely agree with the recommendation that new methods are required to “allow the analyst to examine this massive, multi-dimensional, multi-source, time-varying information stream to make decisions in a time-critical manner.”

We don’t want less information but more, since “large data volumes allow analysts to discover more complete information about a situation.” To be sure, “scale brings opportunities as well.” For example, “analysts may be able to determine more easily when expected information is missing,” which sometimes “offers important clues […].”

However, while computer processing power and memory density have changed radically over the decades, “basic human skills and abilities do not change significantly over time.” Technological advances can certainly leverage our skills “but there are fundamental limits that we are asymptotically approaching,” hence the notion of information glut.

In other words, “human skills and abilities do not scale.” That said, the number of humans involved in analytical problem-solving does scale. Unfortunately, however, “most published techniques for supporting analysis are targeted for a single user at a time.” This means that new techniques that “gracefully scale from a single user to a collaborative (multi-user) environment” need to be developed.

Platform Scarcity

However, the technologies and platforms currently used in the humanitarian and human rights communities do not address the need to handle ever-changing volumes of information. “Furthermore, current tools provide very little in the way of support for the complex tasks of analysis and discovery process.” There clearly is a platform scarcity.

Admittedly, “creating effective visual representations is a labor-intensive process that requires a solid understanding of the visualization pipeline, characteristics of the data to be displayed, and the tasks to be performed.”

However, as is clear from the crisis mapping projects I have consulted on, “most visualization software is written with incomplete knowledge of at least some of this information.” Indeed, it is rarely possible for “the analyst, who has the best understanding of the data and task, to construct new tools.”

The NVAC study thus recommends that “research is needed to create software that supports the most complex and time-consuming portions of the analytical process, so that analysts can respond to increasingly more complex questions.” To be sure, “we need real-time analytical monitoring that can alert first responders to unusual situations in advance.”
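The phrase “real-time analytical monitoring” can sound more daunting than it needs to. As a purely illustrative sketch, and not anything the NVAC study itself specifies, even a simple rolling-baseline check over a daily event count captures the basic idea of flagging unusual activity for an analyst; the window, threshold, and data layout below are my own assumptions.

```python
# Illustrative sketch only: flag days whose event count deviates sharply from a
# trailing 28-day baseline. The window, threshold, and data layout (a "date"
# column with one row per event) are assumptions made for this example.
import pandas as pd

def unusual_days(events: pd.DataFrame, window: int = 28, z_threshold: float = 3.0) -> pd.Series:
    daily = events.set_index("date").resample("D").size()   # events per day
    baseline = daily.rolling(window).mean().shift(1)         # exclude the current day
    spread = daily.rolling(window).std().shift(1)
    z_score = (daily - baseline) / spread
    return daily[z_score > z_threshold]                      # days worth an analyst's attention

# Usage with a hypothetical file:
# alerts = unusual_days(pd.read_csv("events.csv", parse_dates=["date"]))
```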

Patrick Philippe Meier

Part 1: Visual Analytics

This is Part 1 of 7 of the highlights from “Illuminating the Path: The Research and Development Agenda for Visual Analytics.” Please see this post for an introduction to the study and access to the other 6 parts.

NVAC defines Visual Analytics (VA) as “the science of analytical reasoning facilitated by interactive visual interfaces. People use VA tools and techniques to synthesize information and derive insights from massive, dynamic, ambiguous, and often conflicting data; detect the expected and discover the unexpected; provide timely, defensible, and understandable assessments; and communicate assessment effectively for action.”

The field of VA is necessarily multidisciplinary and combines “techniques from information visualization with techniques from computational transformation and analysis of data.” VA includes the following focus areas:

  • Analytical reasoning techniques, “that enable users to obtain deep insights that directly support assessment, planning and decision-making”;
  • Visual representations and interaction techniques, “that take advantage of the human eye’s broad bandwidth pathway into the mind to allow users to see, explore, and understand large amounts of information at once”;
  • Data representation and transformations, “that convert all types of conflicting and dynamic data in ways that support visualization and analysis”;
  • Production, presentation and dissemination techniques, “to communicate information in the appropriate context to a variety of audiences.”

As is well known, “the human mind can understand complex information received through visual channels.” The goal of VA is thus to facilitate the analytical reasoning process “through the creation of software that maximizes human capacity to perceive, understand, and reason about complex and dynamic situations.”

In sum, “the goal is to facilitate high-quality human judgment with a limited investment of the analysts’ time.” This means in part to “expose all relevant data in a way that facilitates the reasoning process to enable action.” To be sure, solving a problem often means representing it so that the solution is more obvious (adapted from Herbert Simon). Sometimes, “the simple act of placing information on a timeline or a map can generate clarity and profound insight.” Indeed, both “temporal relationships and spatial patterns can be revealed through timelines and maps.”
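To make that last point concrete, here is a minimal sketch using hypothetical event data and matplotlib only: the same handful of events binned on a timeline and scattered on a latitude/longitude plot. Even this crude pairing is enough for temporal clusters and spatial hotspots to start standing out.

```python
# Minimal sketch: the same (hypothetical) events rendered as a timeline and as a
# simple latitude/longitude scatter, which is all a first-pass "map" needs to be.
import matplotlib.pyplot as plt
import pandas as pd

events = pd.DataFrame({
    "date": pd.to_datetime(["2008-01-02", "2008-01-03", "2008-01-03",
                            "2008-01-10", "2008-01-28", "2008-01-29"]),
    "lat":  [-1.29, -0.10, -0.30, -1.29, -0.42, -0.09],   # illustrative coordinates
    "lon":  [36.82, 34.75, 34.60, 36.82, 36.95, 34.77],
})

fig, (timeline, spatial) = plt.subplots(1, 2, figsize=(10, 4))

# Timeline: events per day makes temporal clustering visible at a glance.
events.groupby(events["date"].dt.date).size().plot(kind="bar", ax=timeline)
timeline.set_title("Events per day")

# "Map": a plain scatter of coordinates already hints at spatial hotspots.
spatial.scatter(events["lon"], events["lat"])
spatial.set_xlabel("Longitude")
spatial.set_ylabel("Latitude")
spatial.set_title("Event locations")

plt.tight_layout()
plt.show()
```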

VA also reduces the costs associated with sense-making in two primary ways, by:

  1. Transforming information into forms that allow humans to offload cognition onto easier perceptual processes;
  2. Allowing software agents to do some of the filtering, representation translation, interpretation, and even reasoning.

That said, we should keep in mind that “human-designed visualizations are still much better than those created by our information visualization systems.” That is, there are more “highly evolved and widely used metaphors created by human information designers” than there are “successful new computer-mediated visual representations.”

Patrick Philippe Meier

Research Agenda for Visual Analytics

I just finished reading “Illuminating the Path: The Research and Development Agenda for Visual Analytics.” The National Visualization and Analytics Center (NVAC) published the 200-page book in 2004, and the volume is absolutely one of the best treatises I’ve come across on the topic yet. The purpose of the series of posts that follows is to share some highlights and excerpts relevant to crisis mapping.

[Book cover: Illuminating the Path: The Research and Development Agenda for Visual Analytics]

Co-edited by James Thomas and Kristin Cook, the book focuses specifically on homeland security, but there are numerous insights to be gained into how visual analytics can also illuminate the path for crisis mapping analytics. Recall that the field of conflict early warning originated in part from World War II and the lack of warning before the attack on Pearl Harbor.

Several coordinated systems for the early detection of a Soviet bomber attack on North America were set up in the early days of the Cold War. The Distant Early Warning Line, or DEW Line, was the most sophisticated of these. The point to keep in mind is that the national security establishment is often in the lead when it comes to initiatives that can also be applied for humanitarian purposes.

The motivation behind the launch of NVAC and this study was 9/11. In my opinion, this volume goes a long way toward validating the field of crisis mapping, and I highly recommend it to colleagues in both the humanitarian and human rights communities. In fact, the book is directly relevant to my current consulting work with the UN’s Threat and Risk Mapping Analysis (TRMA) project in the Sudan.

So this week, iRevolution will be dedicated to sharing daily highlights from the NVAC study. Taken together, these posts will provide a good summary of the rich and in-depth 200-page study. Check back on this post for live links to the NVAC highlights:

Part 1: Visual Analytics

Part 2: Data Flooding and Platform Scarcity

Part 3: Data Tetris and Information Synthesis

Part 4: Automated Analysis and Uncertainty Visualized

Part 5: Data Visualization and Interactive Interface Design

Part 6: Mobile Technologies and Collaborative Analytics

Part 7: Towards a Taxonomy of Visual Analytics

Note that the sequence above does not correspond to specific individual chapters in the NVAC study; this structure is simply what made the most sense for the summary.

Patrick Philippe Meier