Tag Archives: Conflict Early Warning

Ushahidi: From Crowdsourcing to Crowdfeeding

Humanitarian organizations at the Internews meetings today made it clear that information during crises is as important as water, food and medicine. There is now a clear consensus on this in the humanitarian community.

This is why I have strongly encouraged Ushahidi developers (as recently as this past weekend) to include a subscription feature that allows crisis-affected communities to subscribe to SMS alerts. In other words, we are not only crowdsourcing crisis information; we are also crowdfeeding crisis information.
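To make the idea concrete, here is a minimal sketch of what such a subscription-and-alert loop might look like. This is purely illustrative: the function names, the SMS gateway hook and the single-message length handling are my own assumptions, not Ushahidi’s actual code.

    # Minimal sketch (not Ushahidi's actual code): let community members subscribe
    # to a location, then push crowdsourced reports back out to them by SMS.

    from collections import defaultdict

    subscribers = defaultdict(set)  # location -> set of phone numbers

    def subscribe(phone_number, location):
        """Register a phone number to receive alerts for a given location."""
        subscribers[location.lower()].add(phone_number)

    def send_sms(phone_number, text):
        """Placeholder for an SMS gateway call (e.g. a FrontlineSMS bridge)."""
        print(f"SMS to {phone_number}: {text}")

    def push_alert(location, report_text):
        """Crowdfeeding: forward a crowdsourced report to everyone subscribed."""
        message = f"ALERT ({location}, unverified): {report_text}"[:160]  # single-SMS limit
        for number in subscribers[location.lower()]:
            send_sms(number, message)

    subscribe("+243990000001", "Goma")
    push_alert("Goma", "Roadblock reported on the airport road")

Note that the alert is explicitly labeled as unverified, which matters for the discussion that follows.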

I raised several red flags when I mentioned this during the Internews meeting, since crowdsourcing typically prompts concerns about data validation, or the lack thereof. Participants at the meeting began painting scenarios in which militias in the DRC would submit false reports to Ushahidi in order to scare villagers (who would receive the alerts by SMS) into fleeing straight into an ambush.

Here’s why I think humanitarian organizations may in part be wrong.

First of all, militias do not need Ushahidi to scare or ambush at-risk communities. In fact, using a platform like Ushahidi would be tactically inefficient and would require more coordination on their part.

Second, local communities are rarely dependent on a single source of information. They have their own trusted social and kinship networks, which they can draw on to validate information. There are also local community radio stations, some of which allow listeners to call or text in with information and questions. Ushahidi doesn’t exist in an information vacuum. We need to understand information and communication as an ecosystem.

Third, Ushahidi makes it clear that the information is crowdsourced and hence not automatically validated. Beneficiaries are not dumb; they can perfectly well understand that SMS alerts are simply alerts and not confirmed reports. I must admit that the conversation that ensued at the meeting reminded me of Peter Uvin’s “Aiding Violence” in which he lays bare our “infantilizing” attitude towards “our beneficiaries.”

Fourth, many of the humanitarian organizations participating in today’s meetings work directly with beneficiaries in conflict zones. Shouldn’t they take an interest in the crowdsourced information and take advantage of being in the field to validate said information?

Fifth, all the humanitarian organizations present during today’s meetings embraced the need for two-way, community-generated information and social media. Yet these same organizations fold their arms and revert to a one-way communication mindset when the issue of crowdsourcing comes up. They forget that they too can generate information in response to rumors and thus counteract misinformation as soon as it spreads. If the US Embassy can do this in Madagascar using Twitter, why can’t humanitarian organizations do the equivalent?

Sixth, Ushahidi-Kenya and Ushahidi-DRC were the first deployments of Ushahidi. The model that Ushahidi has since adopted involves humanitarian organizations like UNICEF in Zimbabwe or Carolina for Kibera in Nairobi, and international media groups like Al-Jazeera in Gaza, using the free, open-source platform for their own projects. In other words, Ushahidi deployments are localized and the crowdsourcing is limited to trusted members of those organizations, or journalists in the case of Al-Jazeera.

Patrick Philippe Meier

Internews, Ushahidi and Communication in Crises

I had the pleasure of participating in two Internews-sponsored meetings in New York today. Fellow participants included OCHA, Oxfam, the Red Cross, Save the Children, World Vision, the BBC World Service Trust, the Thomson Reuters Foundation, the Humanitarian Media Foundation, International Media Support and several others.


The first meeting was a three-hour brainstorming session on “Improving Humanitarian Information for Affected Communities” organized in preparation for the second meeting on “The Unmet Need for Communication in Humanitarian Response,” which was held at the UN General Assembly.


The meetings presented an ideal opportunity for participants to share information on current initiatives that focus on communications with crisis-affected populations. Ushahidi naturally came to mind so I introduced the concept of crowdsourcing crisis information. I should have expected the immediate push back on the issue of data validation.

Crowdsourcing and Data Validation

While I have already blogged about overcoming some of the challenges of data validation in the context of crowdsourcing here, there is clearly more to add since the demand for “fully accurate information” a.k.a. “facts and only facts” was echoed during the second meeting in the General Assembly. I’m hoping this blog post will help move the discourse beyond the black and white concepts that characterize current discussions on data accuracy.

Having worked in the field of conflict early warning and rapid response for the past seven years, I fully understand the critical importance of accurate information. Indeed, a substantial component of my consulting work on CEWARN in the Horn of Africa specifically focused on the data validation process.

To be sure, no one in the humanitarian and human rights community is asking for inaccurate information. We all subscribe to the notion of “Do No Harm.”

Does Time Matter?

What was completely missing from today’s meetings, however, was a reference to time. Nobody noted the importance of timely information during crises, which is rather ironic since both meetings focused on sudden onset emergencies. I suspect that our demand (and partial Western obsession) for fully accurate information has clouded some of our thinking on this issue.

This is particularly ironic given that evidence-based policy-making and data-driven analysis are still the exception rather than the rule in the humanitarian community. Field-based organizations frequently make decisions on coordination, humanitarian relief and logistics without complete and fully accurate, real-time information, especially right after a crisis strikes.

So why is this same community holding crowdsourcing to a higher standard?

Time versus Accuracy

Timely information when a crisis strikes is a critical element for many of us in the humanitarian and human rights communities. Surely then we must recognize the tradeoff between accuracy and timeliness of information. Crisis information is perishable!

The more we demand fully accurate information, the longer the data validation process typically takes, and thus the more likely the information is to become useless. Our public health colleagues who work in emergency medicine know this only too well.

The figure below represents the perishable nature of crisis information. Data validation makes sense during time-periods A and B. Continuing to carry out data validation beyond time B may be beneficial to us, but hardly to crisis-affected communities. We may very well have the luxury of time. Not so for at-risk communities.

[Figure: the relevance of crisis information declines over time; data validation pays off only during time-periods A and B]
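To illustrate the tradeoff, and only to illustrate it, here is a toy model in which confidence grows with validation effort while the relevance of the information decays. The functional forms and half-life numbers are invented, not drawn from any dataset.

    import math

    def confidence(hours_spent_validating, halfway=6.0):
        """Confidence in a report grows with validation effort but levels off."""
        return hours_spent_validating / (hours_spent_validating + halfway)

    def relevance(hours_since_event, half_life=12.0):
        """Relevance of crisis information decays as the event recedes."""
        return math.exp(-math.log(2) * hours_since_event / half_life)

    for hours in (1, 6, 12, 24, 72):
        usefulness = confidence(hours) * relevance(hours)
        print(f"{hours:>3} h spent validating -> usefulness {usefulness:.2f}")

Usefulness peaks after a few hours and then falls: past a certain point, every additional hour of validation costs the affected community more than it gains them.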

This point often gets overlooked when anxieties around inaccurate information surface. Of course we need to ensure that information we produce or relay is as accurate as possible. Of course we want to prevent dangerous rumors from spreading. To this end, the Thomson Reuters Foundation clearly spelled out that their new Emergency Information Service (EIS) would only focus on disseminating facts and only facts. (See my previous post on EIS here).

Yes, we can focus all our efforts on disseminating facts, but are those facts communicated after time-period B above really useful to crisis-affected communities? (Incidentally, since EIS will be based on verifiable facts, their approach may well be likened to Wikipedia’s rules for corrective editing. In any event, I wonder how EIS might define the term “fact”).

Why Ushahidi?

Ushahidi was created within days of the Kenyan elections in 2007 because both the government and national media were seriously under-reporting widespread human rights violations. I was in Nairobi visiting my parents at the time, and it was frustrating to see the majority of international and national NGOs on the ground suffering from “data hugging disorder,” i.e., they had no interest whatsoever in sharing information with each other, or with the public for that matter.

This left the Ushahidi team with few options, which is why they decided to develop a transparent platform that would allow Kenyans to report directly, thereby circumventing the government, media and NGOs, who were working against transparency.

Note that the Ushahidi team is composed entirely of tech experts. Here’s a question: why didn’t the human rights or humanitarian community set up a platform like Ushahidi? Why were a few tech-savvy Kenyans without a humanitarian background able to set up and deploy the platform within a week and not the humanitarian community? Where were we? Shouldn’t we be the ones pushing for better information collection and sharing?

In a recent study for the Harvard Humanitarian Initiative (HHI), I mapped and time-stamped reports of the post-election violence from the mainstream media, citizen journalists and Ushahidi. I then created a Google Earth layer of this data and animated the reports over time and space. I recommend reading the conclusions.
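For readers curious about the mechanics, the sketch below shows one way time-stamped reports can be turned into a Google Earth layer: each report becomes a KML Placemark with a TimeStamp element, which Google Earth’s time slider then animates. The sample records and file name are invented; this is not the HHI study’s actual code.

    reports = [
        {"title": "Riot reported", "lat": -1.2833, "lon": 36.8167, "when": "2008-01-02T10:00:00Z"},
        {"title": "Road blocked",  "lat": -0.1022, "lon": 34.7617, "when": "2008-01-03T14:30:00Z"},
    ]

    def to_kml(reports):
        """Build a minimal KML document whose placemarks carry time stamps."""
        placemarks = []
        for r in reports:
            placemarks.append(
                "  <Placemark>\n"
                f"    <name>{r['title']}</name>\n"
                f"    <TimeStamp><when>{r['when']}</when></TimeStamp>\n"
                f"    <Point><coordinates>{r['lon']},{r['lat']},0</coordinates></Point>\n"
                "  </Placemark>"
            )
        return ('<?xml version="1.0" encoding="UTF-8"?>\n'
                '<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n'
                + "\n".join(placemarks) + "\n</Document>\n</kml>\n")

    with open("kenya_reports.kml", "w") as f:
        f.write(to_kml(reports))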

Accuracy is a Luxury

Having worked in humanitarian settings, we all know that accuracy is more often a luxury than a reality, particularly right after a crisis strikes. Accuracy is not black and white, yes or no. Rather, we need to start thinking in terms of likelihood, i.e., how likely is this piece of information to be accurate? All of us already do this every day, albeit subjectively. Why not think of ways to complement or triangulate our personal subjectivities to determine the accuracy of information?

At CEWARN, we included a “Source of Information” field for each incident report. A field reporter could select one of three options: (1) direct observation; (2) media; or (3) rumor. This gave us a three-point weighted scale that could be used in subsequent analysis.
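A scheme of this kind is straightforward to encode. The numeric weights below are illustrative only, not CEWARN’s actual values.

    # Sketch of a three-point source-of-information weighting; the weights
    # are made up for illustration.
    SOURCE_WEIGHTS = {
        "direct observation": 1.0,  # field reporter saw it first-hand
        "media": 0.6,               # second-hand, but attributable
        "rumor": 0.3,               # unattributed hearsay
    }

    def weighted_report(report):
        """Attach a source-based weight that downstream analysis can use."""
        report["weight"] = SOURCE_WEIGHTS[report["source"]]
        return report

    print(weighted_report({"event": "cattle raid", "source": "rumor"}))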

At Ushahidi, we are working on Swift River, a platform that applies human crowdsourcing and machine analysis (natural language parsing) to filter crisis information produced in real time, i.e., during time-periods A and B above. Colleagues at WikiMapAid are developing similar solutions for data on disease outbreaks. See my recent post on WikiMapAid and data validation here.
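Swift River is still being designed, so the snippet below is not its implementation; it simply illustrates one filtering idea in the same spirit: reports of the same location and time window submitted by more distinct reporters earn a higher credibility score.

    from collections import defaultdict

    def corroboration_scores(reports, window_hours=6):
        """Score each (location, time window) by its number of distinct reporters."""
        buckets = defaultdict(set)
        for r in reports:
            key = (r["location"].lower(), r["hour"] // window_hours)
            buckets[key].add(r["reporter"])
        # Score in [0, 1): 1 reporter -> 0.50, 2 -> 0.67, 3 -> 0.75, and so on.
        return {key: 1 - 1 / (1 + len(who)) for key, who in buckets.items()}

    reports = [
        {"location": "Goma", "hour": 10, "reporter": "+243111"},
        {"location": "Goma", "hour": 11, "reporter": "+243222"},
        {"location": "Bukavu", "hour": 9, "reporter": "+243333"},
    ]
    print(corroboration_scores(reports))

Such a score would be one input among several (source type, reporter history, NLP-extracted details), not a verdict on its own.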

Conclusion

In sum, there are various ways to rate the likelihood that a reported event is true. But again, we are not looking to develop a platform that ensures 100% reliability. If full accuracy were the gold standard of humanitarian response (or military action for that matter), the entire enterprise would come to a grinding halt. The intelligence community has also recognized this, as I have blogged about here.

The purpose of today’s meetings was for us to think more concretely about communication in crises from the perspective of at-risk communities. Yet, as soon as I mentioned crowdsourcing the discussion became about our own demand for fully accurate information with no concerns raised about the importance of timely information for crisis-affected communities.

Ironic, isn’t it?

Patrick Philippe Meier

A Brief History of Crisis Mapping (Updated)

Introduction

One of the donors I’m in contact with about the proposed crisis mapping conference wisely recommended I add a big-picture background to crisis mapping. This blog post is my first pass at providing a brief history of the field. In a way, this is a combined summary of several other posts I have written on this blog over the past 12 months plus my latest thoughts on crisis mapping.

Evidently, this account of history is very much influenced by my own experience so I may have unintentionally missed a few relevant crisis mapping projects. Note that by crisis I refer specifically to armed conflict and human rights violations. As usual, I welcome any feedback and comments you may have so I can improve my blog posts.

From GIS to Neogeography: 2003-2005

The field of dynamic crisis mapping is new and rapidly changing. The three core drivers of this change are the increasing availability and accessibility of (1) open-source, dynamic mapping tools and (2) mobile data collection technologies, together with (3) the development of new methodologies.

Some experts at the cutting-edge of this change call the results “Neogeography,” which is essentially about “people using and creating their own maps, on their own terms and by combining elements of an existing toolset.” The revolution in applications for user-generated content and mobile technology provides the basis for widely distributed information collection and crowdsourcing—a term coined by Wired less than three years ago. The unprecedented rise in citizen journalism is stark evidence of this revolution. New methodologies for conflict trends analysis increasingly take spatial and/or inter-annual dynamics into account and thereby reveal conflict patterns that otherwise remain hidden when using traditional methodologies.

Until recently, traditional mapping tools were expensive and highly technical geographic information systems (GIS), proprietary software that required extensive training to produce static maps.

In terms of information collection, trained experts traditionally collected conflict and human rights data and documented these using hard-copy survey forms, which typically became proprietary once completed. Scholars began coding conflict event-data but data sharing was the exception rather than the rule.

With respect to methodologies, the quantitative study of conflict trends was virtually devoid of techniques that took spatial dynamics into account because conflict data at the time was largely macro-level data constrained by the “country-year straitjacket.”

That is, conflict data was limited to the country level and rarely updated more than once a year, which explains why methodologies did not seek to analyze sub-national and inter-annual variations in patterns of conflict and human rights abuses. In addition, scholars in the political sciences were more interested in identifying when conflict was likely to occur as opposed to where. For a more in-depth discussion of this issue, please see my paper from 2006, “On Scale and Complexity in Conflict Analysis” (PDF).

Neogeography is Born: 2005

The pivotal year for dynamic crisis mapping was 2005. This is the year that Google rolled out Google Earth. The application marks an important milestone in Neogeography because the free, user-friendly platform drastically reduced the cost of dynamic and interactive mapping—cost in terms of both availability and accessibility. Microsoft has since launched Virtual Earth to compete with Google Earth and other potential contenders.

Interest in dynamic crisis mapping did exist prior to the availability of Google Earth. This is evidenced by the dynamic mapping initiatives I took at Swisspeace in 2003. I proposed that the organization use GIS tools to visualize, animate and analyze the geo-referenced conflict event-data collected by local Swisspeace field monitors in conflict-ridden countries—a project called FAST. In a 2003 proposal, I defined dynamic crisis maps as follows:

FAST Maps are interactive geographic information systems that enable users of leading agencies to depict a multitude of complex interdependent indicators on a user-friendly and accessible two-dimensional map. […] Users have the option of selecting among a host of single and composite events and event types to investigate linkages [between events]. Events and event types can be superimposed and visualized through time using FAST Map’s animation feature. This enables users to go beyond studying a static picture of linkages to a more realistic dynamic visualization.

I just managed to dig up old documents from 2003 and found the interface I had designed for FAST Maps using Swisspeace’s website template of the time.

[Figures: FAST Maps interface mockups from the 2003 Swisspeace proposal]

However, GIS software was (and still is) prohibitively expensive and highly technical. As a result, Swisspeace was not compelled in 2004 to make the investments necessary to develop the first crisis mapping platform for producing dynamic crisis maps from geo-referenced conflict data. In hindsight, this was the right decision since Google Earth was rolled out the following year.

Enter PRIO and GROW-net: 2006-2007

With the arrival of Google Earth, a variety of dynamic crisis maps quickly emerged. In fact, one of the first (if not the first) applications of Google Earth for crisis mapping was carried out in 2006 by Jen Ziemke and me. We independently used Google Earth and newly available data from the Peace Research Institute, Oslo (PRIO) to visualize conflict data over time and space. (Note that both Jen and I were researchers at PRIO between 2006 and 2007.)

Jen used Google Earth to explain the dynamics and spatio-temporal variation in violence during the Angolan war. To do this, she first coded nearly 10,000 battle and massacre events, as reported in the Portuguese press, that took place over a 40-year period.

Meanwhile, I produced additional dynamic crisis maps of the conflict in the Democratic Republic of the Congo (DRC) for PRIO and of the Colombian civil war for the Conflict Analysis Resource Center (CARC) in Bogota. At the time, researchers in Oslo and Bogota used proprietary GIS software to produce static maps (PDF) of their newly geo-referenced conflict data. PRIO eventually used Google Earth but only to publicize the novelty of their new geo-referenced historical conflict datasets.

Since then, PRIO has continued to play an important role in analyzing the spatial dynamics of armed conflict by applying new quantitative methodologies. Together with universities in Europe, the Institute formed the Geographic Representations of War-net (GROW-net) in 2006, with the goal of “uncovering the causal mechanisms that generate civil violence within relevant historical and geographical configurations.” In 2007, the Swiss Federal Institute of Technology in Zurich (ETH), a member of GROW-net, produced dynamic crisis maps using Google Earth for a project called WarViews.

Crisis Mapping Evolves: 2007-2008

More recently, Automated Crisis Mapping (ACM) has emerged: real-time, automated information collection mechanisms that use natural language processing (NLP) for the dynamic mapping of disaster and health-related events. Examples of such platforms include the Global Disaster Alert and Coordination System (GDACS), CrisisWire, Havaria and HealthMap. Similar platforms have been developed for the automated mapping of other news events, such as Global Incident Map, BuzzTracker, Development Seed’s Managing the News, and the Joint Research Center’s European Media Monitor.

Equally recent is the development of Mobile Crisis Mapping (MCM), mobile crowdsourcing platforms designed for the dynamic mapping of conflict and human rights data as exemplified by Ushahidi (with FrontLineSMS) and the Humanitarian Sensor Web (SensorWeb).

Another important development around this time is the practice of participatory GIS, driven by the recognition that social maps and conflict maps can empower local communities and be used for conflict resolution. Like maps of natural disasters and environmental degradation, these can be developed and discussed at the community level to foster conversation and joint decision-making. This is a critical component since one of the goals of crisis mapping is to empower individuals to make better decisions.

HHI’s Crisis Mapping Project: 2007-2009

The Harvard Humanitarian Initiative (HHI) is currently playing a pivotal role in crafting the new field of dynamic crisis mapping. Coordinated by Jennifer Leaning and myself, HHI is completing a two-year applied research project on Crisis Mapping and Early Warning. This project comprised a critical and comprehensive evaluation of the field and the documentation of lessons learned and best practices, as well as alternative and innovative approaches to crisis mapping and early warning.

HHI also acts as an incubator for new projects and has supported the conceptual development of new crisis mapping platforms like Ushahidi and the SensorWeb. In addition, HHI produced the first comparative and dynamic crisis map of Kenya by drawing on reports from the mainstream media, citizen journalists and Ushahidi to analyze spatial and temporal patterns of conflict events and communication flows during a crisis.

HHI Sets a Research Agenda: 2009

HHI has articulated an action-oriented research agenda for the future of crisis mapping based on the findings from the two-year crisis mapping project. This research agenda can be categorized into the following three areas, which were coined by HHI:

  1. Crisis Map Sourcing
  2. Mobile Crisis Mapping
  3. Crisis Mapping Analytics

1) Crisis Map Sourcing (CMS) seeks to further research on the challenge of visualizing disparate sets of data ranging from structural and dynamic data to automated and mobile crisis mapping data. The challenge of CMS is to develop appropriate methods and best practices for mashing data from Automated Crisis Mapping (ACM) tools and Mobile Crisis Mapping platforms (see below) to add value to Crisis Mapping Analytics (also below).

2) The purpose of setting an applied-research agenda for Mobile Crisis Mapping, or MCM, is to recognize that the future of distributed information collection and crowdsourcing will be increasingly driven by mobile technologies and new information ecosystems. This presents the crisis mapping community with a host of pressing challenges ranging from data validation and manipulation to data security.

These hurdles need to be addressed directly by the crisis mapping community so that new and creative solutions can be applied earlier rather than later. If the persistent problem of data quality is not adequately resolved, then policy makers may question the reliability of crisis mapping for conflict prevention, rapid response and the documentation of human rights violations. Worse still, inaccurate data may put lives at risk.

3) Crisis Mapping Analytics (CMA) is the third critical area of research set by HHI. CMA is becoming increasingly important given the unprecedented volume of geo-referenced data that is rapidly becoming available. Existing academic platforms like WarViews and operational MCM platforms like Ushahidi do not include features that allow practitioners, scholars and the public to query the data and to visually analyze and identify the underlying spatial dynamics of the conflict and human rights data. This is largely true of Automated Crisis Mapping (ACM) tools as well.

In other words, new and informative metrics need to be developed to identify patterns in human rights abuses and violent conflict both retrospectively and in real time. In addition, existing techniques from spatial econometrics need to be rendered more accessible to non-statisticians and built into existing dynamic crisis mapping platforms.

Conclusion

Jen Ziemke and I thus conclude that the most pressing need in the field of crisis mapping is to bridge the gap between scholars and practitioners who self-identify as crisis mappers. This is the most pressing issue because bridging that divide will enable the field of crisis mapping to effectively and efficiently move forward by pursuing the three research agendas set out by the Harvard Humanitarian Initiative (HHI).

We think this is key to moving the crisis-mapping field into more mainstream humanitarian and human rights work—i.e., operational response. But doing so first requires that leading crisis mapping scholars and practitioners proactively bridge the existing gap. This is the core goal of the crisis mapping conference that we propose to organize.

Patrick Philippe Meier

NeoGeography and Crisis Mapping Analytics

WarViews is Neogeography

Colleagues at the Swiss Federal Institute of Technology Zurich (ETH) are starting to publish their research on the WarViews project. I first wrote about this project in 2007 as part of an HHI deliverable on Crisis Mapping for Humanity United. What appeals to me about WarViews is the initiative’s total “Neogeography” approach.

[Figure: WarViews screenshot]

What is Neogeography? Surprisingly, WarViews’s first formal publication (Weidmann and Kuse, 2009) does not use the term, but my CrisisMappers colleague Andrew Turner wrote the defining chapter on Neogeography for O’Reilly back in 2006:

Neogeography means ‘new geography’ and consists of a set of techniques and tools that fall outside the realm of traditional GIS, Geographic Information Systems. Where historically a professional cartographer might use ArcGIS, talk of Mercator versus Mollweide projections, and resolve land area disputes, a neogeographer uses a mapping API like Google Maps, talks about GPX versus KML, and geotags his photos to make a map of his summer vacation.

Essentially, Neogeography is about people using and creating their own maps, on their own terms and by combining elements of an existing toolset. Neogeography is about sharing location information with friends and visitors, helping shape context, and conveying understanding through knowledge of place.

Compare this language with Weidmann and Kuse (2009):

[The] use of geographic data requires specialized software and substantial training and therefore involves high entry costs for researchers and practitioners. [The] War Views project [aims] to create an easy-to-use front end for the exploration of GIS data on conflict. It takes advantage of the recent proliferation of Internet-based geographic software and makes geographic data on conflict available for these tools.

With WarViews, geographic data on conflict can be accessed, browsed, and time-animated in a few mouse clicks, using only standard software. As a result, a wider audience can take advantage of the valuable data contained in these databases […].

The team in Zurich used the free GIS server software GeoServer, which reads “vector data in various formats, including the shapefile format used for many conflict-related GIS data sets.” This way, WarViews allows users to visualize data both statically and dynamically using Google Earth.

Evidently, the WarViews project is not groundbreaking compared to many of the applied mapping projects carried out by the CrisisMappers Group. (Colleagues and I in Boston created a Google Earth layer of DRC and Colombia conflict data well before WarViews came online).

That said, the academic initiative at ETH Zurich is an important step forward for neogeography and an exciting development for political scientists interested in studying the geographic dimensions of conflict data.

Geographic Data

Geo-tagged conflict data is becoming more widely available. My alma mater, the Peace Research Institute, Oslo (PRIO), has made an important contribution with the Armed Conflict Location and Event Dataset (ACLED). This dataset includes geo-tagged conflict data for 12 countries from 1946 to the present.

In addition to ACLED, Weidmann and Kuse (2009) reference two additional geo-tagged datasets. The first is the Political Instability Task Force’s Worldwide Atrocities Dataset (PITF), which comprises a comprehensive collection of violent events against noncombatants. The second is the Peacekeeping Operations Locations and Event Dataset (Doroussen 2007, PDF), which provides geo-tagged data on interventions in civil wars. This dataset is not yet public.

Weidmann and Kuse (2009) do not mention Ushahidi, a Mobile Crisis Mapping (MCM) platform, nor do they reference HHI’s Google Earth Crisis Map of Kenya’s post-election violence (2008). Both initiatives provide unique geo-tagged peace and conflict data. Ushahidi has since been deployed in the DRC, Zimbabwe and Gaza.

Unlike the academic databases referenced above, the Ushahidi data is crowdsourced and geo-tagged in quasi-real time. Given Ushahidi’s rapid deployment to other conflict zones, we can expect a lot more geo-tagged information in 2009. The question is, will we know how to analyze this data to detect patterns?

Crisis Mapping Analytics (CMA)

The WarViews project is “not designed for sophisticated analyses of geographic data […].” This is perhaps ironic given that academics across multiple disciplines have developed a plethora of computational methods and models to analyze geographic data over time and space. These techniques necessarily require advanced expertise in spatial econometric analysis and statistics.

The full potential of neogeography will only be realized when we have more accessible ways to analyze the data visualized on platforms like Google Earth. Neogeography brought dynamic mapping to many more users, but spatial econometric analysis has no popular equivalent.

This is why I introduced the term Crisis Mapping Analytics (CMA) back in August 2008 and why I blogged about the need to develop the new field of CMA here and here. The Harvard Humanitarian Initiative (HHI) is now spearheading the development of CMA metrics given the pressing need for more accessible yet rigorous methods to identify patterns in crisis mapping for the purposes of early warning. Watching data play back on Google Earth over and over will only take us so far, especially as new volumes of disparate datasets become available in 2009.

HHI is still in conversation with a prospective donor about establishing the new field of CMA, so I’m unable to outline the metrics we developed here. I do hope the donor will provide HHI with funding so we can partner and collaborate with other groups to formalize the field of CMA.

Crisis Mapping Conference

In the meantime, my colleague Jen Ziemke and I are exploring the possibility of organizing a 2-3 day conference on crisis mapping for Fall 2009. The purpose of the conference is to shape the future of crisis mapping by bridging the gap that exists between academics and practitioners working on crisis mapping.

In our opinion, developing the new field of Crisis Mapping Analytics (CMA) will require the close and collegial collaboration between academic institutes like PRIO and operational projects like Ushahidi.

Jen and I are therefore starting formal conversations with donors in order to make this conference happen. Stay tuned for updates in March. In the meantime, if you’d like to provide logistical support or help co-sponsor this unique conference, please email us.

Patrick Philippe Meier

Crowdsourcing Honesty?

I set an all-time personal record this past week: my MacBook was dormant for five consecutive days. I dedicate this triumph to the delightful friends with whom I spent New Year’s. Indeed, I had the pleasure of celebrating with friends from Digital Democracy, The Fletcher School and The Global Justice Center on a Caribbean island for some much needed time off.

Ariely

We all brought some good reading along and I was finally able to enjoy a number of books on my list. One of these, Dan Ariely’s “Predictably Irrational,” was recommended to me by Erik Hersman, and I’m really glad it was. MIT Professor Ariely specializes in behavioral economics. His book gently discredits mainstream economics. Far from being rational agents, we are remarkably irrational in our decision-making, and predictably so.

Ariely draws on a number of social experiments to explicate his thesis.

For social scientists, experiments are like microscopes or strobe lights. They help us slow human behavior to a frame-by-frame narration of events, isolate individual forces, and examine those forces carefully and in more detail. They let us test directly and unambiguously what makes us tick.

In a series of fascinating experiments, Ariely seeks to understand what factors influence our decisions to be honest, especially when we can get away with dishonesty. In one experiment, participants complete a very simple math exercise. When done, the first set of participants (control group) are asked to hand in their answers for independent grading but the second set are subsequently given the answers and asked to report their own scores. At no point do the latter hand in their answers; hence the temptation to cheat.

In this experiment, prior to the math exercise, some students are asked to list the names of 10 books they read in high school while others are asked to write down as many of the Ten Commandments as they can recall. Ariely wanted to know whether this would have any effect on the honesty of participants when reporting their own scores. The statistically significant results surprised even him: “The students who had been asked to recall the Ten Commandments had not cheated at all.”

In fact, they averaged the same score as the (control) group that could not cheat. In contrast, participants who were asked to list their 10 high school books and self-report their scores cheated: they claimed grades that were 33% higher than those who could not cheat (control group).

What especially impressed me about the experiment […] was that the students who could remember only one or two commandments were as affected by them as the students who remembered nearly all ten. This indicated that it was not the Commandments themselves that encouraged honesty, but the mere contemplation of a moral benchmark of some kind.

Ariely carried out a follow-up experiment in which he asked some of his MIT students to sign an honor code instead of listing the Commandments. The results were identical. What’s more, “the effect of signing a statement about an honor code is particularly amazing when we take into account that MIT doesn’t even have an honor code.”

In short, we are far more likely to be honest when reminded of morality, especially when temptation strikes. Ariely thus concludes that the act of taking an oath can make all the difference.

I’m intrigued by this finding and its potential application to crowdsourcing crisis information, e.g., Ushahidi‘s work in the DRC. Could some version of an honor code be introduced in the self-reporting process? Could the Ushahidi team create a control group to determine the impact on data quality? Even if impact were difficult to establish, would introducing an honor code still make sense given Ariely’s findings on basic behavioral psychology?
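Purely as a thought experiment, here is one way such a test could be wired into the reporting flow. Every name and detail below is hypothetical; the point is only that the group assignment and the comparison are easy to automate.

    import hashlib

    HONOR_PROMPT = ("Before you report: I confirm that this report is truthful "
                    "to the best of my knowledge.")

    def assign_group(phone_number):
        """Stable 50/50 split so a given reporter always sees the same flow."""
        digest = hashlib.sha256(phone_number.encode()).hexdigest()
        return "honor_code" if int(digest, 16) % 2 == 0 else "control"

    def intake(phone_number, report_text):
        """Record a report, noting whether the honor-code prompt was shown."""
        group = assign_group(phone_number)
        return {"reporter": phone_number, "text": report_text,
                "group": group, "prompt_shown": group == "honor_code"}

    def verification_rates(reports):
        """Share of reports later confirmed by field staff, per experimental group."""
        by_group = {"honor_code": [], "control": []}
        for r in reports:
            by_group[r["group"]].append(r.get("verified", False))
        return {g: (sum(v) / len(v) if v else None) for g, v in by_group.items()}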

Patrick Philippe Meier

Covering the DRC – opportunities for Ushahidi

This blog entry was inspired by Ory’s recent blog post on “Covering the DRC – challenges for Ushahidi.” The thoughts that follow were originally intended for the comments section of Ushahidi’s blog, but they morphed into a more in-depth reflection. First, however, many thanks to Ory for taking the time to share the team’s experience in the DRC over the past few weeks.

Much of what Ory writes resonates with my own experience in conflict early warning/response. While several factors contribute to the challenge of Ushahidi’s deployment, I think one in particular regrettably remains a constant in my own experience: the link to early response, or rather the lack thereof. The main point I want to make is this: if Ushahidi addresses the warning-response gap, then Ushahidi-DRC is likely to get far more traction on the ground than it currently does.

To explain, if Ushahidi is going to provide a platform that enables the crowdsourcing of crisis information, then it must also facilitate the crowdsourcing of response. Why? Because otherwise the tool is of little added value to the individuals who constitute said crowd, i.e., the Bottom of the Pyramid (BoP) in conflict zones. If individuals at the BoP don’t personally benefit from Ushahidi, they should not be expected to spend time and resources communicating alerts. As one of my dissertation committee members, Peter Walker, wrote in 1991 vis-à-vis famine early warning/response systems in Africa:

It must be clearly understood that the informants are the most important part of the information system. It is their information […] upon which the rest of the system is based […]. The informant must be made to feel, quite rightly, that he or she is an important part of the system, not just a minion at the base, for ever giving and never receiving.

In 1988 (that’s right, ’88), Kumar Rupesinghe published a piece on disaster early warning systems in which he writes that civil society has

… a vital role to play in the development of a global, decentralized early warning system. They now need the capacity to build information systems and to provide the basis for rapid information exchange. In general [civil society] will have to confront the monopolization of information with a demand for the democratic access to information technology.

Information on local concerns must be available to the local structures in society. The right to be informed and the right to information have to find entry into international discussions.

Ushahidi’s crowdsourcing approach has the potential to reverse the monopolization of information and thereby create a demand for access to conflict information. Indeed, Ushahidi is starting to influence the international discourse on early warning (see forthcoming reports by the EC and OECD). However, it is the mobile crowdsourcing of response that will create value for the BoP and thereby demand for Ushahidi.

Put it this way: Twitter would be far less useful if it were limited to one (and only one) global website on which all tweets were displayed. What makes Twitter valuable is the ability to select specific feeds and to have those feeds pushed to us effortlessly, using Twhirl or similar programs, and displayed (in 140 characters or fewer) on our computer screens in real time. At the moment, Ushahidi does the equivalent of the former, but not the latter.

Yet the latter is precisely where the added value to the individual lies. An NGO may be perfectly content with Ushahidi’s current set up, but NGOs do not constitute the BoP; they are not the “fundamental unit” of crowdsourcing—individuals are. (Just imagine if Wikipedia entries could only be written/edited by NGOs).

This mismatch in fundamental units is particularly prevalent in the conflict early warning/response field. NGOs do not have the same incentive structures as individuals. If individuals in at-risk communities who themselves send alerts to Ushahidi were to receive customized alerts on incidents in or near their own towns, they would see a far more direct and immediate return on their investment. Receiving geo-specific alerts in quasi real-time improves situational awareness and enables an individual to make a more informed decision about how to respond. That is added value. The BoP would have an incentive, and a form of empowerment, to crowdsource crisis information.

Here’s a scenario: if an individual texts in an alert for the first time, Ushahidi should (1) contact that person as soon as possible to thank them for their alert and (2) ask them what SMS alerts they would like to receive and for what town(s). I guarantee you this person will spread the word through their own social network and encourage others to send in alerts so that they too may receive alerts. (Incidentally, Erik, this is the strategy I would recommend in places like Jos, Nigeria.)
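As a rough sketch (again hypothetical, not Ushahidi’s code), the two steps of that scenario could be handled by a single inbound-SMS routine:

    seen_reporters = set()
    alert_subscriptions = {}  # phone number -> set of towns

    def reply(phone_number, text):
        """Placeholder for the outbound leg of the SMS gateway."""
        print(f"SMS to {phone_number}: {text}")

    def on_incoming_sms(phone_number, text):
        """Thank first-time reporters and let anyone subscribe to town alerts."""
        if phone_number not in seen_reporters:
            seen_reporters.add(phone_number)
            reply(phone_number,
                  "Thank you for your report. Reply SUBSCRIBE <town> to receive "
                  "SMS alerts for the towns you want to follow.")
        elif text.upper().startswith("SUBSCRIBE "):
            town = text[len("SUBSCRIBE "):].strip().title()
            alert_subscriptions.setdefault(phone_number, set()).add(town)
            reply(phone_number, f"You will now receive alerts for {town}.")
        # Otherwise the message is treated as a new incident report and logged.

    on_incoming_sms("+234800000001", "Clashes near the central market")
    on_incoming_sms("+234800000001", "SUBSCRIBE Jos")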

In summary, while the Ushahidi team faces a multitude of other challenges in the DRC deployment, I believe that addressing the early response dimension will render the other challenges more manageable. While the vast majority of conflict early warning systems are wired vertically (designed by outsiders for outsiders), the genius of Ushahidi is the paradigm shift to horizontally wired, local early warning/response, aka crowdsourcing.

In a way, it’s very simple: If Ushahidi can create value for the BoP, the client base will necessarily expand (substantially). To this end, Ushahidi should not be pitched as an early warning system, but rather as an early response service. This is one of the reasons why I am trying hard to encourage the operationalization of mobile crisis mapping.

Policy Briefing: Information in Humanitarian Responses

The BBC World Service Trust just released an excellent Policy Brief (PDF) on “The Unmet Need for Information in Humanitarian Responses.” The majority of the report’s observations and conclusions are in line with the findings identified during the Harvard Humanitarian Initiative’s (HHI) 18-month applied research project on Conflict Early Warning and Crisis Mapping.

I include below excerpts that resonated particularly strongly.

  • People need information as much as water, food, medicine or shelter. Information can save lives, livelihoods and resources. Information bestows power.
  • Effective information and communication exchange with affected populations are among the least understood and most complex challenges facing the humanitarian sector in the 21st century.
  • Disaster victims need information about their options in order to make any meaningful choices about their future. Poor information flow is undoubtedly the biggest source of dissatisfaction, anger and frustration among affected people.
  • Information—and just as important communication—is essential for people to start claiming a sense of power and purpose over their own destiny.

In this context, recall the purpose of people-centered early warning as defined by the UN International Strategy for Disaster Reduction (UNISDR) during the Third International Conference on Early Warning (EWC III) in 2006:

To empower individuals and communities threatened by hazards to act in sufficient time and in an appropriate manner so as to reduce the possibility of personal injury, loss of life, damage to property and the environment, and loss of livelihoods.


Other important observations worth noting from the Policy Brief:

  • Sometimes information is the only help that can be made available, especially when isolated populations are cut off and beyond the reach of aid.
  • There are still misplaced assumptions and confusion about how and what to think about information and communication—and where organizationally to locate it. Humanitarian actors systematically fail to see the difference between public relations and communication with affected populations, and thus fund neither the expertise nor the infrastructure necessary.
  • The information needs of people affected by disasters remain largely unmet because the people, systems and resources that are required to meet them simply don’t exist in a meaningful way.
  • The humanitarian system is not equipped with either the capacity or the resources to begin tackling the challenge of providing information to those affected by crises.
  • A prior understanding of how populations in disaster prone areas source information is vital in determining the best channels for information flow: for example, local media, local religious networks and local civil society groups.
  • Studies have shown that affected populations go to great lengths to reinstate their media infrastructure and access to information at the earliest opportunity following a disaster. Relief efforts should recognize these community-driven priorities and respond accordingly.

My one criticism of the report has to do with the comments in parentheses in this paragraph:

Rebuilding the local media infrastructure for sustained operations must be prioritized as aid efforts continue. This may be as simple as providing a generator to a radio station that has lost its electricity supply, using UN communications structures such as the World Food Program towers to relay local radio stations (though in politically complex environments this needs careful thought)…

The BBC’s Policy Brief focuses on the unmet need for information in humanitarian responses but leaves “politically complex environments” out of the equation. This is problematic. As the UNISDR remarked in its 2006 Global Survey of Early Warning Systems (PDF), “the occurrence of “natural” disasters amid complex political crises is increasingly widespread: over 140 natural disasters have occurred alongside complex political crises in the past five years alone.”

Operating in politically volatile, repressive environments and conflict zones presents a host of additional issues that the majority of policy briefs and reports tend to ignore. HHI’s research has sought to outline these important challenges and to highlight potential solutions both in terms of technology and tactics.


The importance of technology design has been all but ignored in our field. We may continue to use everyday communication tools and adapt them to our purposes, but these will inevitably come with constraints. Mobile phones were not designed for operation in hostile environments. This means, for example, that mobile phones don’t come preinstalled with encrypted SMS options. Nor are mobile phones designed to operate in a peer-to-peer (mesh) configuration, which would render them less susceptible to repressive regimes switching off entire mobile phone networks.
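To make the first point tangible, the snippet below encrypts a short report with a pre-shared key before it would ever be handed to the SMS channel. It uses Python’s cryptography package purely as an illustration of what handsets do not offer out of the box; the key exchange and the message content are assumptions.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # would have to be shared out-of-band beforehand
    cipher = Fernet(key)

    report = b"Checkpoint erected at north bridge, 3 armed men"
    token = cipher.encrypt(report)   # authenticated ciphertext, base64-encoded text

    # The ciphertext is longer than the plaintext, so it may need to span
    # more than one 160-character SMS.
    print(len(token), "characters")
    assert cipher.decrypt(token) == report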

What are today’s most vexing problems in the field of humanitarian early warning and response? This is a question often posed by my colleague Ted Okada to remind us that we often avoid the most important challenges in the humanitarian field. It’s one thing to respond in a post-disaster environment with easy access to refugee populations and donor funding. It’s quite another to be operating in a conflict zone, with restrictions on mobility, with no clear picture of the affected population and with donors reluctant to fund experimental communication projects.

It is high time we focus our attention and resources on tackling the most vexing issues in our field, since the solutions we develop there will have universal application elsewhere. In contrast, identifying solutions to the less vexing problems will be of little benefit to the large humanitarian community operating in politically complex environments. As my colleague Erik Hersman is fond of saying, “If it works in Africa, it’ll work anywhere.” I’ve been making a similar argument over the past year: “If it works in conflict zones, it’ll work anywhere.”

Patrick Philippe Meier