Tag Archives: Ushahidi

Crowdsourcing in Crisis: A More Critical Reflection

This is a response to Paul’s excellent comments on my recent posts entitled “Internews, Ushahidi and Communication in Crisis” and “Ushahidi: From Crowdsourcing to Crowdfeeding.”

Like Paul, I too find Internews to be a top organization. In fact, of all the participants in New York, the Internews team was actually the most supportive of exploring the crowdsourcing approach further instead of dismissing it entirely. And like Paul, I’m not supportive of the status quo in the humanitarian community either.

Paul’s observations are practical and to the point, which is always appreciated. They encourage me to revisit and test my own assumptions, which I find stimulating. In short, Paul’s comments are conducive to a more critical reflection on crowdsourcing in crisis.

In what follows, I address all his arguments point by point.

Time Still Ignored

Paul first notes that:

Both accuracy and timeliness are core Principles of Humanitarian Information management established at the 2002 Symposium on Best Practices in Humanitarian Information Exchange and reiterated at the 2007 Global Symposium +5. Have those principles been incorporated into the institutions sufficiently? Short answer, no. Is accuracy privileged at the expense of timeliness? Not in the field.

The importance of “time” and “timeliness” was ignored during both New York meetings. Most field-based humanitarian organizations dismissed the use of “crowdsourcing” because of their conviction that “crowdsourced information cannot be verified.” In short, participants did not privilege timeliness at the expense of accuracy because they consider verification virtually impossible.

Crowdsourcing is New

Because crowdsourcing is unfamiliar, it’s untested in the field and it makes fairly large claims that are not well backed by substantial evidence. Having said that, I’m willing to be corrected on this criticism, but I think it’s fair to say that the humanitarian community is legitimately cautious in introducing new concepts when lives are at stake.

Humanitarian organizations make claims about crowdsourcing that are not necessarily backed by substantial evidence because crowdsourcing is fairly new and untested in the field. If we use Ushahidi as the benchmark, then crowdsourcing crisis information is 15 months old, and the focus of the conversation should be on the two Ushahidi deployments (Kenya & DRC) during that time.

The angst is understandable and we should be legitimately cautious. But angst shouldn’t mean we stand back and accept the status quo, a point that both Paul and I agree on.

Conflict Inflammation

Why don’t those who take the strongest stand against crowdsourcing demonstrate that Ushahidi-Kenya and Ushahidi-DRC have led to conflict inflammation? As far as we know, none of the 500+ crowdsourced crisis events in those countries were manufactured to increase violence. If that is indeed the case, then skeptics like Paul should explain why we did not see Ushahidi used to propagate violence.

In any event, if we embrace the concept of human development, then the decision vis-à-vis whether or not to crowdsource and crowdfeed information ultimately lies with the crowd sourcers and feeders. If the majority of users feel compelled to generate and share crisis information when a platform exists, then it is because they find value in doing so. Who are we to say they are not entitled to receive public crisis information?

Incidentally, it is striking to note the parallels between this conversation and skeptics during the early days of Wikipedia.

Double Standards

I would also note that I don’t think the community is necessarily holding crowdsourcing to a higher standard, but exactly the same standard as our usual information systems – and if they haven’t managed to get those systems right yet, I can understand still further why they’re cautious about entertaining an entirely new and untested approach.

Cautious and dismissive are two different things. If the community were holding crowdsourcing to an equal standard, then they would consider both the timeliness and accuracy of crowdsourced information. Instead, they dismiss crowdsourcing without recognizing the tradeoff with timeliness.

What is Crisis Info?

In relation to my graphic on the perishable nature of crisis information, Paul asks:

What “crisis information” are we talking about here? I would argue that ensuring your data is valid is important at all times, so is this an attack on dissemination strategies rather than data validation?

We’re talking about quasi-real time and geo-tagged incident reporting, i.e., reporting using the parameters of incident type, location and time. Of course it is important that data be as accurate as possible. But as I have already argued, accurate information received late is of little operational value.

On the other hand, information that has not yet been validated but is received early gives those who may need the information the most (1) more time to take precautionary measures, and (2) more time to determine its validity.

Unpleasant Surprises

On this note, I just participated in the Harvard Humanitarian Initiative (HHI)’s Humanitarian Action Summit (HAS), where the challenge of data validation came up within the context of public health and emergency medicine. The person giving the presentation had this to say:

We prefer wrong information to no information at all, since in the case of the former we can at least take action to determine the validity of the information.

This reminds me of the known unknowns versus unknown unknowns argument. I’d rather know about a piece of information even though I’m unable to validate it rather than not know and be surprised later in case it turns out to be true.

We should take care not to fall into the classic trap exploited by climate change skeptics. Example: We can’t prove that climate change is really happening since it could simply be that we don’t have enough accurate data to arrive at the correct conclusion. So we need more time and data for the purposes of validation. Meanwhile, skeptics argue, there’s no need to waste resources by taking precautionary measures.

Privileging Time

It also strikes me as odd that Patrick argues that affected communities deserve timely information but not necessarily accurate information. As he notes, it may be a trade-off – but he provides no argument for why he privileges timeliness over accuracy.

I’m not privileging one over the other. I’m simply noting that humanitarian organizations in New York completely ignored the importance of timeliness when communicating with crisis-affected communities, which I still find stunning. It is misleading to talk about accuracy without talking about timeliness and vice versa. So I’m just asking that we take both variables into account.

Obviously the ideal would be to have timely and accurate information. But we’re not dealing with ideal situations when we discuss sudden onset emergencies. Clearly the “right” balance between accuracy and timeliness depends on who the end users are and what context they find themselves in. Ultimately, the end users, not us, should have the right to make that final decision for themselves. While accuracy can save lives, so can timeliness.

Why Obligations?

Does this mean that the government and national media have an obligation to report on absolutely every single violation of human rights taking place in their country?

I don’t understand how this question follows from any of my preceding comments. We need to think about information as an ecosystem with multiple potential sources that may or may not overlap. Obviously governments and national media may not be able to—or compelled to—report accurately and in a timely manner during times of crises. I’m not making an argument about obligation. I’m just making an observation about there being a gap that crowdsourcing can fill, which I showed empirically in this Kenya case study.

Transparency and Cooperation

I’m not sure it’s a constructive approach to accuse NGOs of actively “working against transparency” – it strikes me that there may be some shades of grey in their attitudes towards releasing information about human rights abuses.

You are less pessimistic than I am—didn’t think that was possible. My experience in Africa has been that NGOs (and UN agencies) are reluctant to share information not because of ethical concerns but because of selfish and egotistical reasons. I’d recommend talking with the Ushahidi team who desperately tried to encourage NGOs to share information with each other during the post-election violence.

Ushahidi is Innovation

On my question about why human rights and humanitarian organizations were not the ones to set up a platform like Ushahidi, Paul answers as follows.

I think it might be because the human rights and humanitarian communities were working on their existing projects. The argument that these organisations failed to fulfill an objective when they never actually had that objective in the first place is distinctly shaky – it seems to translate into a protest that they weren’t doing what you wanted them to do.

I think Paul misses the point. I’m surprised he didn’t raise the whole issue of innovation (or rather lack thereof) in the humanitarian community since he has written extensively about this topic.

Perhaps we also have to start thinking in terms of what damage might this information do (whether true or false) if we release it.

I agree. At the same time, I’d like to get the “we” out of the picture and let the “them” (the crowd) do the deciding. This is the rationale behind the Swift River project we’re working on at Ushahidi.

Tech-Savvy Militias

Evidence suggests that armed groups are perfectly happy to use whatever means they can acquire to achieve their goals. I fail to see why Ushahidi would be “tactically inefficient, and would require more co-ordinating” – all they need to do is send a few text messages. The entire point of the platform is that it’s easy to use, isn’t it?

First of all, the technological capacity and sophistication of non-state armed groups varies considerably from conflict to conflict. While I’m no expert, I don’t know of any evidence from Kenya or the DRC—since those are our empirical test cases—that suggests tech-savvy militia members regularly browse the web to identify new Web 2.0 crowdsourcing tools they can use to create more violence.

Al Qaeda is a different story, but we’re not talking about Al Qaeda; we’re talking about Kenya and the DRC. In the case of the former, word about Ushahidi spread through the Kenyan blogosphere. Again, I don’t know of any Kenyan militia groups in the Rift Valley, for example, that monitor the Kenyan blogosphere in order to incite violence.

Second of all, one needs time to learn how to use a platform like Ushahidi for conflict inflammation. Yes, the entire point of the platform is that it’s easy to use to report human rights violations. But it obviously takes more thinking to determine what, where and when to text an event in order to cause a particular outcome. It requires a degree of coordination and decision-making.

That’s why it would be inefficient. All a militia would need to do is fire a few bullets from one end of a village to have the locals run the other way, straight into an ambush. Furthermore, we found no evidence of hate SMS submitted to Ushahidi, even though some were communicated outside of Ushahidi.

Sudan Challenges

The government of Sudan regularly accuses NGOs (well, those NGOs it hasn’t expelled) of misreporting human rights violations. What better tool would the government have for discrediting human rights monitoring than Ushahidi? All it would take would be a few texts a day with false but credible reports, and the government can dismiss the entire system, either by keeping their own involvement covert and claiming that the system is actually being abused, or by revealing their involvement and claiming that the system can be so easily gamed that it isn’t credible.

Good example given that I’m currently in the Sudan. But Paul is mixing human rights reporting for the purposes of advocacy with crisis reporting for the purposes of local operational response.

Of course government officials like those in Khartoum will do, and indeed continue to do, whatever they please. But isn’t this precisely why one might as well make the data open and public, so those facing human rights violations can at least have the opportunity to get out of harm’s way?

Contrast this with the typical way that human rights and humanitarian organizations operate—they typically keep the data for themselves and do not share it with other organizations, let alone with beneficiaries. How is data triangulation possible at all in such a scenario, even if we had all the time in the world? And who loses out, as usual? The local communities who need the information.

Triangulation

While Paul fully agrees that local communities are rarely dependent on a single source of information, which means they can triangulate and validate, he maintains that this “is not an argument for crowdsourcing.” Of course it is: more information allows more triangulation and hence more validation. Would Paul argue that my point is an argument against crowdsourcing?

We don’t need less information; we need more. The time element matters precisely because we want to speed up the collection of information in order to triangulate as quickly as possible.

Ultimately, whether or not a given event is true is a question of probability: the larger your sample size, the more confident you can be, and the quicker you collect that sample, the quicker you can validate. Crowdsourcing is a method that facilitates the rapid collection of large quantities of information, which in turn facilitates triangulation.
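The probability point above can be sketched with a simple Bayesian update. Note that the prior and the per-report accuracy below are illustrative assumptions of mine, not figures from Ushahidi or any actual deployment:

```python
# Hypothetical sketch: how corroborating reports raise confidence that an
# event really happened, assuming each independent report is correct with
# probability p_correct. All numbers are illustrative only.

def posterior_event_true(n_reports, prior=0.5, p_correct=0.7):
    """Bayesian update: the odds multiply by p/(1-p) per corroborating report."""
    odds = prior / (1 - prior)
    likelihood_ratio = p_correct / (1 - p_correct)
    odds *= likelihood_ratio ** n_reports
    return odds / (1 + odds)

for n in (1, 3, 5):
    print(n, round(posterior_event_true(n), 3))
```

Even with individually unreliable reporters, confidence climbs quickly as independent reports accumulate, which is exactly why speed of collection matters.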

Laughing Off Disclaimers

The idea that people pay attention to disclaimers makes me laugh out loud. I don’t think anybody’s accusing affected individuals of being dumb, but I’d be interested to see evidence that supports this claim. When does the validation take place, incidentally? And what recourse do individuals or communities have if an alert turns out to be false?

Humanitarians often treat beneficiaries as dumb, not necessarily intentionally, but I’ve seen this first hand in East and West Africa. Again, if you haven’t read “Aiding Violence” then I’d recommend it.

Second, the typical scenario that comes up when talking about crowdsourcing and the spreading of rumors has to do with refugee camp settings. The DRC militia story is one that I came up with (and have already used in past blog posts) in order to emphasize the distinction with refugee settings.

The scenario that was brought up by others at the Internews meeting was actually one set in a refugee camp. This scenario is a classic case of individuals being highly restricted in the variety of different information sources they have access to, which makes the spread of rumors difficult to counter or dismiss.

Crowdsourcing Response

When I asked why field-based humanitarian organizations that directly work with beneficiaries in conflict zones don’t take an interest in crowdsourced information and the validation thereof, Paul responds as follows.

Yes, because they don’t have enough to do. They’d like to spend their time running around validating other people’s reports, endangering their lives and alienating the government under which they’re working.

I think Paul may be missing the point—and indeed power—of crowdsourcing. We need to start thinking less in traditional top-down centralized ways. The fact is humanitarian organizations could subscribe to specific alerts of concern to them in a specific and limited geographical area.
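As a rough sketch of what such a geographically bounded subscription might look like (the report layout, the bounding box, and the helper functions are my own hypothetical illustration, not Ushahidi’s actual API):

```python
# Illustrative sketch: an organization subscribes to alerts within a
# bounding box, so only reports from its limited area of operations
# reach it for field verification.

def in_bounds(report, box):
    """True if the report's coordinates fall inside the bounding box."""
    lat_min, lat_max, lon_min, lon_max = box
    return lat_min <= report["lat"] <= lat_max and lon_min <= report["lon"] <= lon_max

def alerts_for(reports, box):
    """Filter the full report stream down to one subscriber's area."""
    return [r for r in reports if in_bounds(r, box)]

reports = [
    {"id": 1, "lat": -1.29, "lon": 36.82, "type": "riot"},       # near Nairobi
    {"id": 2, "lat": 0.05, "lon": 34.75, "type": "roadblock"},   # near Kisumu
]
nairobi_box = (-1.45, -1.15, 36.6, 37.0)
print(alerts_for(reports, nairobi_box))  # only the Nairobi report
```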

If they’re onsite where the action is reportedly unfolding and they don’t see any evidence of rumors being true, surely spending 15 seconds to text this info back to HQ (or to send a picture by camera phone) is not a huge burden. This doesn’t endanger their lives since they’re already there and quelling a rumor is likely to calm things down. If we use secure systems, the government wouldn’t be able to attribute the source.

The entire point behind the Swift River project is to crowdsource the filtering process, i.e., to distribute and decentralize the burden of data validation. Those organizations that happen to be there at the right time and place do the filtering; otherwise they simply get on with their work. This is the whole point behind my post last year on crowdsourcing response.

Yes, We Can

Is there any evidence at all that the US Embassy’s Twitter feed had any impact at all on the course of events? I mean, I know it made a good headline in external media, but I don’t see how it’s a good example if there’s no actual evidence that it had any impact.

Yes, the rumors didn’t spread. But we’re fencing with one anecdote after the other. All I’m arguing is that two-way communication and broadcasting should be used to counter misinformation; it is irresponsible for humanitarian organizations to revert to one-way communication mindsets and wash their hands of an unfolding situation without trying to use information and communication technology to do something about it.

Many still don’t understand that the power of P2P meshed communication can go both ways. Unfortunately, as soon as we see new communication technology used for ill, we often react even more negatively by pulling the plug on any communication, which is what the Kenyan government wanted to do during the election violence.

When officials requested that the CEO of Safaricom switch off the SMS network to prevent the spread of hate SMS, he chose instead to broadcast text messages calling for peace and restraint, and warning that those found to be creating hate SMS would be tracked and prosecuted (which the Kenyan Parliament subsequently did).

Again, the whole point is that new communication technologies present real potential for countering rumors. Unless we try using them to maximize positive communication, we will never get sufficient evidence to determine whether using SMS and Twitter to counter rumors can work effectively.

Ushahidi Models

In terms of Ushahidi’s new deployment model being localized with the crowdsourcing limited to members of a given organization, Paul has a point when he suggests this “doesn’t sound like crowdsourcing.” Indeed, the Gaza deployment of Ushahidi is more an example of “bounded crowdsourcing” or “Al Jazeera sourcing” since the crowd is not the entire global population but strictly Al Jazeera journalists.

Perhaps crowdsourcing is not applicable within those contexts since “bounded crowdsourcing” may in effect be an oxymoron. At the same time, however, his conclusion that Ushahidi is more like classic situation reporting is not entirely accurate either.

First of all, the Ushahidi platform provides a way to map incident reports, not situation reports. In other words, Ushahidi focuses on the minimum essential indicators for reporting an event. Second, Ushahidi also focuses on the minimum essential technology to communicate and visualize those events. Third, unlike traditional approaches, the information collected is openly shared.

I’m not sure if this is an issue of language and terminology or if there is a deeper point here. In other words, are we seeing Ushahidi evolve in such a way that new iterations of the platform are becoming increasingly similar to traditional information collection systems?

I don’t think so. The Gaza platform is only one genre of local deployment. Another organization might seek to deploy a customized version of Ushahidi and not impose any restrictions on who can report. This would resemble the Kenya and DRC deployments of Ushahidi. At the moment, I don’t find this problematic because we haven’t found signs that this has led to conflict inflammation. I have given a number of reasons in this blog post why that might be.

In any case, it is still our responsibility to think through some scenarios and to start offering potential solutions. Hence the Swift River project and hence my appreciating Paul’s feedback on my two blog posts.

Patrick Philippe Meier

Ushahidi: From Crowdsourcing to Crowdfeeding

Humanitarian organizations at the Internews meetings today made it clear that information during crises is as important as water, food and medicine. There is now a clear consensus on this in the humanitarian community.

This is why I have strongly encouraged Ushahidi developers (as recently as this past weekend) to include a subscription feature that allows crisis-affected communities to subscribe to SMS alerts. In other words, we are not only crowdsourcing crisis information; we are also crowdfeeding it.

I raised several red flags when I mentioned this during the Internews meeting, since crowdsourcing typically raises concerns about data validation or the lack thereof. Participants at the meeting began painting scenarios whereby militias in the DRC would submit false reports to Ushahidi in order to scare villagers (who would receive the alert by SMS) and have them flee in their direction, where they would ambush them.

Here’s why I think humanitarian organizations may in part be wrong.

First of all, militias do not need Ushahidi to scare or ambush at-risk communities. In fact, using a platform like Ushahidi would be tactically inefficient and would require more coordinating on their part.

Second, local communities are rarely dependent on a single source of information. They have their own trusted social and kinship networks, which they can draw on to validate information. There are local community radios and some of these allow listeners to call in or text in with information and/or questions. Ushahidi doesn’t exist in an information vacuum. We need to understand information communication as an ecosystem.

Third, Ushahidi makes it clear that the information is crowdsourced and hence not automatically validated. Beneficiaries are not dumb; they can perfectly well understand that SMS alerts are simply alerts and not confirmed reports. I must admit that the conversation that ensued at the meeting reminded me of Peter Uvin’s “Aiding Violence” in which he lays bare our “infantilizing” attitude towards “our beneficiaries.”

Fourth, many of the humanitarian organizations participating in today’s meetings work directly with beneficiaries in conflict zones. Shouldn’t they take an interest in the crowdsourced information and take advantage of being in the field to validate said information?

Fifth, all the humanitarian organizations present during today’s meetings embraced the need for two-way, community-generated information and social media. Yet these same organizations fold their arms and revert to a one-way communication mindset when the issue of crowdsourcing comes up. They forget that they too can generate information in response to rumors and thus counteract misinformation as soon as it spreads. If the US Embassy can do this in Madagascar using Twitter, why can’t humanitarian organizations do the equivalent?

Sixth, Ushahidi-Kenya and Ushahidi-DRC were the first deployments of Ushahidi. The model that Ushahidi has since adopted involves humanitarian organizations like UNICEF in Zimbabwe or Carolina for Kibera in Nairobi, and international media groups like Al-Jazeera in Gaza, using the free, open-source platform for their own projects. In other words, Ushahidi deployments are localized, and the crowdsourcing is limited to trusted members of those organizations, or journalists in the case of Al-Jazeera.

Patrick Philippe Meier

Internews, Ushahidi and Communication in Crises

I had the pleasure of participating in two Internews sponsored meetings in New York today. Fellow participants included OCHA, Oxfam, Red Cross, Save the Children, World Vision, BBC World Service Trust, Thomson Reuters Foundation, Humanitarian Media Foundation, International Media Support and several others.


The first meeting was a three-hour brainstorming session on “Improving Humanitarian Information for Affected Communities” organized in preparation for the second meeting on “The Unmet Need for Communication in Humanitarian Response,” which was held at the UN General Assembly.


The meetings presented an ideal opportunity for participants to share information on current initiatives that focus on communications with crisis-affected populations. Ushahidi naturally came to mind so I introduced the concept of crowdsourcing crisis information. I should have expected the immediate push back on the issue of data validation.

Crowdsourcing and Data Validation

While I have already blogged about overcoming some of the challenges of data validation in the context of crowdsourcing here, there is clearly more to add since the demand for “fully accurate information” a.k.a. “facts and only facts” was echoed during the second meeting in the General Assembly. I’m hoping this blog post will help move the discourse beyond the black and white concepts that characterize current discussions on data accuracy.

Having worked in the field of conflict early warning and rapid response for the past seven years, I fully understand the critical importance of accurate information. Indeed, a substantial component of my consulting work on CEWARN in the Horn of Africa specifically focused on the data validation process.

To be sure, no one in the humanitarian and human rights community is asking for inaccurate information. We all subscribe to the notion of “Do No Harm.”

Does Time Matter?

What was completely missing from today’s meetings, however, was a reference to time. Nobody noted the importance of timely information during crises, which is rather ironic since both meetings focused on sudden onset emergencies. I suspect that our demand (and partial Western obsession) for fully accurate information has clouded some of our thinking on this issue.

This is particularly ironic given that evidence-based policy-making and data-driven analysis are still the exception rather than the rule in the humanitarian community. Field-based organizations frequently make decisions on coordination, humanitarian relief and logistics without complete and fully accurate, real-time information, especially right after a crisis strikes.

So why is this same community holding crowdsourcing to a higher standard?

Time versus Accuracy

Timely information when a crisis strikes is a critical element for many of us in the humanitarian and human rights communities. Surely then we must recognize the tradeoff between accuracy and timeliness of information. Crisis information is perishable!

The more we demand fully accurate information, the longer the data validation process typically takes, and thus the more likely the information will become useless. Our public health colleagues who work in emergency medicine know this only too well.

The figure below represents the perishable nature of crisis information. Data validation makes sense during time-periods A and B. Continuing to carry out data validation beyond time B may be beneficial to us, but hardly to crisis affected communities. We may very well have the luxury of time. Not so for at-risk communities.

[Figure: the operational relevance of crisis information decays over time; data validation makes sense during time-periods A and B, but not beyond.]
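The decay depicted in the figure can be made concrete with a toy model. The exponential form and the half-life value are purely illustrative assumptions, not measurements of how fast any real crisis information loses value:

```python
# Toy model of perishable crisis information: a report's operational
# value decays with the hours elapsed since the event, so validation
# that drags on past the useful window no longer helps affected
# communities. Functional form and half-life are assumptions.
import math

def relevance(hours_since_event, half_life_hours=12.0):
    """Fraction of the report's original operational value remaining."""
    return math.exp(-math.log(2) * hours_since_event / half_life_hours)

for t in (0, 12, 48):
    print(t, round(relevance(t), 2))  # 1.0 at the event, 0.5 after one half-life
```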

This point often gets overlooked when anxieties around inaccurate information surface. Of course we need to ensure that the information we produce or relay is as accurate as possible. Of course we want to prevent dangerous rumors from spreading. To this end, the Thomson Reuters Foundation clearly spelled out that their new Emergency Information Service (EIS) would only focus on disseminating facts and only facts. (See my previous post on EIS here).

Yes, we can focus all our efforts on disseminating facts, but are those facts communicated after time-period B above really useful to crisis-affected communities? (Incidentally, since EIS will be based on verifiable facts, their approach may well be linked to Wikipedia’s rules for corrective editing. In any event, I wonder how EIS might define the term “fact”).

Why Ushahidi?

Ushahidi was created within days of the Kenyan elections in 2007 because both the government and national media were seriously under-reporting widespread human rights violations. I was in Nairobi visiting my parents at the time, and it was frustrating to see the majority of international and national NGOs on the ground suffering from “data hugging disorder,” i.e., they had no interest whatsoever in sharing information with each other, or with the public for that matter.

This left the Ushahidi team with few options, which is why they decided to develop a transparent platform that would allow Kenyans to report directly, thereby circumventing the government, media and NGOs, who were working against transparency.

Note that the Ushahidi team is composed entirely of tech experts. Here’s a question: why didn’t the human rights or humanitarian community set up a platform like Ushahidi? Why were a few tech-savvy Kenyans without a humanitarian background able to set up and deploy the platform within a week, and not the humanitarian community? Where were we? Shouldn’t we be the ones pushing for better information collection and sharing?

In a recent study for the Harvard Humanitarian Initiative (HHI), I mapped and time-stamped reports on the post-election violence reported by the mainstream media, citizen journalists and Ushahidi. I then created a Google Earth layer of this data and animated the reports over time and space. I recommend reading the conclusions.

Accuracy is a Luxury

Having worked in humanitarian settings, we all know that accuracy is more often a luxury than a reality, particularly right after a crisis strikes. Accuracy is not black and white, yes or no. Rather, we need to start thinking in terms of likelihood, i.e., how likely is this piece of information to be accurate? All of us already do this every day, albeit subjectively. Why not think of ways to complement or triangulate our personal subjectivities to determine the accuracy of information?

At CEWARN, we included “Source of Information” for each incident report. A field reporter could select from several choices: (1) direct observation; (2) media, and (3) rumor. This gave us a three-point weighted-scale that could be used in subsequent analysis.
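The three-point scale described above can be sketched as a simple weighting scheme. The specific numeric weights below are my own illustration, not CEWARN’s actual values:

```python
# Sketch of CEWARN-style source weighting: direct observation weighted
# highest, media reports next, rumors lowest. Weights are illustrative.

SOURCE_WEIGHTS = {"direct_observation": 3, "media": 2, "rumor": 1}

def weighted_report_score(reports):
    """Sum of source weights, usable as a crude confidence score for an event."""
    return sum(SOURCE_WEIGHTS[r["source"]] for r in reports)

reports = [
    {"event": "cattle raid", "source": "direct_observation"},
    {"event": "cattle raid", "source": "rumor"},
]
print(weighted_report_score(reports))  # 4
```

A directly observed event thus outweighs several rumors in subsequent analysis, which is the point of recording the source alongside each incident report.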

At Ushahidi, we are working on Swift River, a platform that applies human crowdsourcing and machine analysis (natural language parsing) to filter crisis information produced in real time, i.e., during time-periods A and B above. Colleagues at WikiMapAid are developing similar solutions for data on disease outbreaks. See my recent post on WikiMapAid and data validation here.

Conclusion

In sum, there are various ways to rate the likelihood that a reported event is true. But again, we are not looking to develop a platform that ensures 100% reliability. If full accuracy were the gold standard of humanitarian response (or military action for that matter), the entire enterprise would come to a grinding halt. The intelligence community has also recognized this, as I have blogged about here.

The purpose of today’s meetings was for us to think more concretely about communication in crises from the perspective of at-risk communities. Yet, as soon as I mentioned crowdsourcing the discussion became about our own demand for fully accurate information with no concerns raised about the importance of timely information for crisis-affected communities.

Ironic, isn’t it?

Patrick Philippe Meier

WikiMapAid, Ushahidi and Swift River

Keeping up to date with science journals always pays off. New Scientist just published a really interesting piece related to crisis mapping of diseases this morning. I had to hop on a flight back to Boston, so am uploading my post now.

The cholera outbreak in Zimbabwe is becoming increasingly serious, but the data on case numbers and fatalities needed to control the outbreak is difficult to obtain. The World Health Organization (WHO) in Zimbabwe has stated that "any system that improves data collecting and sharing would be beneficial."

This is where WikiMapAid comes in. Developed by Global Map Aid, the wiki enables humanitarian workers to map information on a version of Google Maps that can be viewed by anyone. “The hope is that by circumventing official information channels, a clearer picture of what is happening on the ground can develop.” The website is based on a “Brazilian project called Wikicrimes, launched last year, in which members of the public share information about crime in their local area.”

wikimapaid

WikiMapAid allows users to create markers and attach links to photographs or to post a report of the current situation in the area. Given the context of Zimbabwe, “if people feel they will attract attention from the authorities by posting information, they could perhaps get friends on the outside to post information for them.”

As always with peer-produced data, the validity of the information will depend on those supplying it. While moderators will "edit and keep track of postings," unreliable reporting could still be a problem. To address this, the team behind the project is "developing an algorithm that will rate the reputation of users according to whether the information they post is corroborated, or contradicted."
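A reputation rule of this kind can be sketched in a few lines. The update formula, starting score and learning rate below are my own illustrative assumptions; the actual WikiMapAid algorithm is not public:

```python
def update_reputation(score, corroborated):
    """Nudge a reputation score in [0, 1] toward 1 on corroboration, toward 0 on contradiction."""
    target = 1.0 if corroborated else 0.0
    return score + 0.2 * (target - score)  # the 0.2 learning rate is arbitrary

rep = 0.5  # a new user starts at a neutral reputation
for outcome in [True, True, False]:  # two corroborated posts, one contradicted
    rep = update_reputation(rep, outcome)
print(round(rep, 3))
```

Reports from high-reputation users could then be weighted more heavily when deciding which markers to trust on the map, without ever requiring a binary verified/unverified judgment.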

This is very much in line with the approach we’re taking at Ushahidi for the Swift River project. As WikiMapAid notes, “even if we’re just 80 per cent perfect, we will still have made a huge step forward in terms of being able to galvanize public opinion, raise funds, prioritize need and speed the aid on those who need it most.”

Time to get in touch with the good folks at WikiMapAid.

Patrick Philippe Meier

A Brief History of Crisis Mapping (Updated)

Introduction

One of the donors I’m in contact with about the proposed crisis mapping conference wisely recommended I add a big-picture background to crisis mapping. This blog post is my first pass at providing a brief history of the field. In a way, this is a combined summary of several other posts I have written on this blog over the past 12 months plus my latest thoughts on crisis mapping.

Evidently, this account of history is very much influenced by my own experience, so I may have unintentionally missed a few relevant crisis mapping projects. Note that by crisis I refer specifically to armed conflict and human rights violations. As usual, I welcome any feedback and comments you may have so I can improve my blog posts.

From GIS to Neogeography: 2003-2005

The field of dynamic crisis mapping is new and rapidly changing. The three core drivers of this change are the increasing availability and accessibility of (1) open-source, dynamic mapping tools; (2) mobile data collection technologies; and (3) new methodologies.

Some experts at the cutting-edge of this change call the results “Neogeography,” which is essentially about “people using and creating their own maps, on their own terms and by combining elements of an existing toolset.” The revolution in applications for user-generated content and mobile technology provides the basis for widely distributed information collection and crowdsourcing—a term coined by Wired less than three years ago. The unprecedented rise in citizen journalism is stark evidence of this revolution. New methodologies for conflict trends analysis increasingly take spatial and/or inter-annual dynamics into account and thereby reveal conflict patterns that otherwise remain hidden when using traditional methodologies.

Until recently, traditional mapping tools were expensive and highly technical geographic information systems (GIS), proprietary software that required extensive training to produce static maps.

In terms of information collection, trained experts traditionally collected conflict and human rights data and documented these using hard-copy survey forms, which typically became proprietary once completed. Scholars began coding conflict event-data but data sharing was the exception rather than the rule.

With respect to methodologies, the quantitative study of conflict trends was virtually devoid of techniques that took spatial dynamics into account because conflict data at the time was largely macro-level data constrained by the “country-year straightjacket.”

That is, conflict data was limited to the country level and rarely updated more than once a year, which explains why methodologies did not seek to analyze sub-national and inter-annual variations for patterns of conflict and human rights abuses. In addition, scholars in the political sciences were more interested in identifying when conflict was likely to occur as opposed to where. For a more in-depth discussion of this issue, please see my 2006 paper "On Scale and Complexity in Conflict Analysis" (PDF).

Neogeography is Born: 2005

The pivotal year for dynamic crisis mapping was 2005. This is the year that Google rolled out Google Earth. The application marks an important milestone in Neogeography because the free, user-friendly platform drastically reduced the cost of dynamic and interactive mapping—cost in terms of both availability and accessibility. Microsoft has since launched Virtual Earth to compete with Google Earth and other potential contenders.

Interest in dynamic crisis mapping did exist prior to the availability of Google Earth. This is evidenced by the dynamic mapping initiatives I pursued at Swisspeace in 2003. I proposed that the organization use GIS tools to visualize, animate and analyze the geo-referenced conflict event-data collected by local Swisspeace field monitors in conflict-ridden countries—a project called FAST. In a 2003 proposal, I defined dynamic crisis maps as follows:

FAST Maps are interactive geographic information systems that enable users of leading agencies to depict a multitude of complex interdependent indicators on a user-friendly and accessible two-dimensional map. […] Users have the option of selecting among a host of single and composite events and event types to investigate linkages [between events]. Events and event types can be superimposed and visualized through time using FAST Map’s animation feature. This enables users to go beyond studying a static picture of linkages to a more realistic dynamic visualization.

I just managed to dig up old documents from 2003 and found the interface I had designed for FAST Maps using the template at the time for Swisspeace’s website.

fast-map1

fast-map2

However, GIS software was (and still is) prohibitively expensive and highly technical. As a result, Swisspeace was not compelled to make the necessary investments in 2004 to develop the first crisis mapping platform for producing dynamic crisis maps using geo-referenced conflict data. In hindsight, this was the right decision since Google Earth was rolled out the following year.

Enter PRIO and GROW-net: 2006-2007

With the arrival of Google Earth, a variety of dynamic crisis maps quickly emerged. In fact, one of the first applications of Google Earth for crisis mapping, if not the first, was carried out in 2006 by Jen Ziemke and me. We independently used Google Earth and newly available data from the Peace Research Institute, Oslo (PRIO) to visualize conflict data over time and space. (Note that both Jen and I were researchers at PRIO from 2006 to 2007.)

Jen used Google Earth to explain the dynamics and spatio-temporal variation in violence during the Angolan war. To do this, she first coded nearly 10,000 battle and massacre events as reported in the Portuguese press that took place over a 40 year period.

Meanwhile, I produced additional dynamic crisis maps of the conflict in the Democratic Republic of the Congo (DRC) for PRIO and of the Colombian civil war for the Conflict Analysis Resource Center (CARC) in Bogota. At the time, researchers in Oslo and Bogota used proprietary GIS software to produce static maps (PDF) of their newly geo-referenced conflict data. PRIO eventually used Google Earth but only to publicize the novelty of their new geo-referenced historical conflict datasets.

Since then, PRIO has continued to play an important role in analyzing the spatial dynamics of armed conflict by applying new quantitative methodologies. Together with universities in Europe, the Institute formed the Geographic Representations of War-net (GROW-net) in 2006, with the goal of "uncovering the causal mechanisms that generate civil violence within relevant historical and geographical configurations." In 2007, the Swiss Federal Institute of Technology in Zurich (ETH), a member of GROW-net, produced dynamic crisis maps using Google Earth for a project called WarViews.

Crisis Mapping Evolves: 2007-2008

More recently, Automated Crisis Mapping (ACM) has emerged: real-time, automated information collection mechanisms that use natural language processing (NLP) for the dynamic mapping of disaster and health-related events. Examples of such platforms include the Global Disaster Alert and Coordination System (GDACS), CrisisWire, Havaria and HealthMap. Similar platforms have been developed for automated mapping of other news events, such as Global Incident Map, BuzzTracker, Development Seed's Managing the News, and the Joint Research Center's European Media Monitor.

Equally recent is the development of Mobile Crisis Mapping (MCM), mobile crowdsourcing platforms designed for the dynamic mapping of conflict and human rights data as exemplified by Ushahidi (with FrontLineSMS) and the Humanitarian Sensor Web (SensorWeb).

Another important development around this time is the practice of participatory GIS, preceded by the recognition that social maps and conflict maps can empower local communities and be used for conflict resolution. Like maps of natural disasters and environmental degradation, these can be developed and discussed at the community level to encourage conversation and joint decision-making. This is a critical component since one of the goals of crisis mapping is to empower individuals to make better decisions.

HHI’s Crisis Mapping Project: 2007-2009

The Harvard Humanitarian Initiative (HHI) is currently playing a pivotal role in crafting the new field of dynamic crisis mapping. Coordinated by Jennifer Leaning and myself, HHI is completing a two-year applied research project on Crisis Mapping and Early Warning. This project comprised a critical and comprehensive evaluation of the field and the documentation of lessons learned, best practices as well as alternative and innovative approaches to crisis mapping and early warning.

HHI also acts as an incubator for new projects and  supported the conceptual development of new crisis mapping platforms like Ushahidi and the SensorWeb. In addition, HHI produced the first comparative and dynamic crisis map of Kenya by drawing on reports from the mainstream media, citizen journalists and Ushahidi to analyze spatial and temporal patterns of conflict events and communication flows during a crisis.

HHI Sets a Research Agenda: 2009

HHI has articulated an action-oriented research agenda for the future of crisis mapping based on the findings from the two-year crisis mapping project. This research agenda can be categorized into the following three areas, which were coined by HHI:

  1. Crisis Map Sourcing
  2. Mobile Crisis Mapping
  3. Crisis Mapping Analytics

1) Crisis Map Sourcing (CMS) seeks to further research on the challenge of visualizing disparate sets of data ranging from structural and dynamic data to automated and mobile crisis mapping data. The challenge of CMS is to develop appropriate methods and best practices for mashing data from Automated Crisis Mapping (ACM) tools and Mobile Crisis Mapping platforms (see below) to add value to Crisis Mapping Analytics (also below).

2) The purpose of setting an applied-research agenda for Mobile Crisis Mapping, or MCM, is to recognize that the future of distributed information collection and crowdsourcing will be increasingly driven by mobile technologies and new information ecosystems. This presents the crisis mapping community with a host of pressing challenges ranging from data validation and manipulation to data security.

These hurdles need to be addressed directly by the crisis mapping community so that new and creative solutions can be applied earlier rather than later. If the persistent problem of data quality is not adequately resolved, then policy makers may question the reliability of crisis mapping for conflict prevention, rapid response and the documentation of human rights violations. Worse still, inaccurate data may put lives at risk.

3) Crisis Mapping Analytics (CMA) is the third critical area of research set by HHI. CMA is becoming increasingly important given the unprecedented volume of geo-referenced data that is rapidly becoming available. Existing academic platforms like WarViews and operational MCM platforms like Ushahidi do not include features that allow practitioners, scholars and the public to query the data and to visually analyze and identify the underlying spatial dynamics of the conflict and human rights data. This is largely true of Automated Crisis Mapping (ACM) tools as well.

In other words, new and informative metrics need to be developed to identify patterns in human rights abuses and violent conflict, both retrospectively and in real time. In addition, existing techniques from spatial econometrics need to be rendered more accessible to non-statisticians and built into existing dynamic crisis mapping platforms.

Conclusion

Jen Ziemke and I thus conclude that the most pressing need in the field of crisis mapping is to bridge the gap between scholars and practitioners who self-identify as crisis mappers. This is the most pressing issue because bridging that divide will enable the field of crisis mapping to effectively and efficiently move forward by pursuing the three research agendas set out by the Harvard Humanitarian Initiative (HHI).

We think this is key to moving the crisis-mapping field into more mainstream humanitarian and human rights work—i.e., operational response. But doing so first requires that leading crisis mapping scholars and practitioners proactively bridge the existing gap. This is the core goal of the crisis mapping conference that we propose to organize.

Patrick Philippe Meier

Crisis Mapping Conference Proposal

Bridging the Divide in Crisis Mapping

As mentioned in a recent blog post, my colleague Jen Ziemke and I are organizing a workshop on the topic of crisis mapping. The purpose of this workshop is to bring together a small group of scholars and practitioners who are pioneering the new field of crisis mapping. We are currently exploring funding opportunities with a number of donors and welcome any suggestions you might have for specific sponsors.

The new field of crisis mapping encompasses the collection, dynamic visualization and subsequent analysis of georeferenced information on contemporary conflicts and human rights violations. A wide range of sources are used to create these crisis maps (e.g., events data, newspaper and intelligence parsing, satellite imagery, interview and survey data, SMS, etc.). Scholars have developed several analytical methodologies to identify patterns in dynamic crisis maps. These range from computational methods and visualization techniques to spatial econometrics and "hot spot" analysis.

While scholars employ these sophisticated methods in their academic research, operational crisis mapping platforms developed by practitioners are completely devoid of analytical tools. At the same time, scholars often assume that humanitarian practitioners are conversant in quantitative spatial analysis, which is rarely the case. Furthermore, practitioners who are deploying crisis mapping platforms do not have time to read the academic literature on this topic.

Mobile Crisis Mapping and Crisis Mapping Analytics

In other words, there is a growing divide between scholars and practitioners in the field of crisis mapping. The purpose of this workshop is to bridge this divide by bringing scholars and practitioners together to shape the future of crisis mapping. At the heart of this lie two new developments: Mobile Crisis Mapping (MCM) and Crisis Mapping Analytics (CMA). See previous blog posts on MCM and CMA here and here.

I created these terms to highlight areas in need of further applied research. As MCM platforms like Ushahidi's become more widely available, the amount of crowdsourced data will substantially increase, and so will the challenges around data validation and analysis. This is why we need to think now about developing a field of Crisis Mapping Analytics (CMA) to make sense of the incoming data and identify new and recurring patterns in human rights abuses and conflict.

This entails developing user-friendly CMA metrics that practitioners can build into their MCM platforms. However, there is no need to reinvent the wheel, since scholars who analyze spatial and temporal patterns of conflict already employ sophisticated metrics that can inform the development of CMA metrics. In sum, a dedicated workshop that brings these practitioners and scholars together would help accelerate the developing field of crisis mapping.

Proposed Agenda

Here is a draft agenda that we’ve been sharing with prospective donors. We envisage the workshop to take place over a Friday, Saturday and Sunday. Feedback is very much welcomed.

Day 1 – Friday

Welcome and Introductions

Keynote 1 – The Past & Future of Crisis Mapping

Roundtable 1 – Presentation of Academic and Operational Crisis Mapping projects with Q&A

Lunch

Track 1a – Introduction to Automated Crisis Mapping (ACM): From information collection and data validation to dynamic visualization and dissemination

Track 1b – Introduction to Mobile Crisis Mapping (MCM): From information collection and data validation to dynamic visualization and dissemination

&

Track 2a – Special introduction for newly interested colleagues  and students on spatial thinking in social sciences, using maps to understand crisis, violence and war

Track 2b – Breakout session for students and new faculty: hands-on introduction to GIS and other mapping programs

Dinner

Day 2 – Saturday

Keynote 2 – Crisis Mapping and Patterns Analysis

Roundtable 2 – Interdisciplinary Applications: Innovations & Challenges

Roundtable 3 – Data Collection & Validations: Innovations & Challenges

Lunch

Roundtable 4 – Crisis Mapping Analytics (CMA): Metrics and Taxonomies

Roundtable 5 – Crisis Mapping & Response: Innovations & Challenges

Dinner

Day 3 – Sunday

Keynote 3 – What Happens Next – Shaping the Future of Crisis Mapping

Self-organized Sessions

Wrap-Up

Proposed Participants

Here are some of the main academic institutes and crisis mapping organizations we had in mind:

Institutes

  • John Carroll University (JCU)
  • Harvard Humanitarian Initiative (HHI)
  • Peace Research Institute, Oslo (PRIO)
  • International Conflict Research, ETH Zurich
  • US Institute for Peace (USIP)
  • Political Science Department, Yale University

Organizations

Next Steps

Before we can move forward on any of this, we need to identify potential donors to help co-sponsor the workshop. So please do get in touch if you have any suggestions and/or creative ideas.

Patrick Philippe Meier

LIFT09: Collective Action and Technology

Ramesh Srinivasan and Juliana Rotich spoke about how technologies have changed collective action and solidarity over the past 15 years.

rameshlift09

Ramesh recalled the story of an Indian fisherman who was far out at sea when the Sumatra seaquake launched a tidal wave of Biblical proportions. He had never seen anything like it in his lifetime and used his mobile phone to warn family and friends near the shore, thereby saving many lives. He himself was far enough from the coastline and survived.

Ramesh shared other stories on technology and solidarity. He spoke of a community-based digital video literacy project in India. In his own words, the project enabled "mobility, dissemination and documentation, claim-making based on documented evidence, community, social capital and kinship," and "stimulated dialogue beyond the focus group."

In another project, Ramesh explained how the website "Public Grievances & Redressal," designed by the eGovernments Foundation, allows Indian citizens to file public complaints. Complaints are posted online and only removed when both the plaintiff and the designated government official agree that the issue has been resolved. The length of time a complaint remains on the website impacts future funding for the respective branch of government.

As for the future impact of technology on collective action and solidarity, Ramesh is concerned that design is being coopted by usability. He also pointed to the growing "mismatch of ontology between the policy world and the local," which reminded me of James Scott's Seeing Like a State. Indeed, one of the principal questions that guides both Ramesh and James is: "How do we build websystems that show differences?"

Take Google, for example. While simplicity is a hallmark of the company's successful websystems, if you Google "Africa" the first link directly relevant to Africa appears only after the second page. This is worrying since the vast majority of web users hardly browse beyond the first page of Google results. Search online is no longer about using the intellectual expanse of the mind but about what you can find.

julianalift09

My friend and colleague Juliana presented her work with Global Voices and Ushahidi. She spoke about globalism, mobile technology and the Cloud from the perspective of Africa, which was particularly refreshing.

The mobile phone is becoming increasingly important for Africa and, in my opinion, Africa is becoming more important for the world. For example, some 80% of the BBC's mobile traffic originates from Africa. Juliana also pointed out how Kenyans now use a free text message service that allows users low on credit to text another person so they can call them back, something called "flashing" or "beeping." The only catch is that the text comes with a short ad.

Patrick Philippe Meier

HURIDOCS09: From Wikipedia to Ushahidi

The Panel

I just participated in a panel on “Communicating Human Rights Information Through Technology” at the HURIDOCS conference in Geneva and presented Ushahidi as an alternative model. My fellow panelists included Florence Devouard, Chair of the Wikimedia Foundation, Sam Gregory from Witness.org, Lars Bromley from AAAS and Dan Brickley, a researcher, advocate and developer of Semantic Web technologies.

ushahidi

Out of the hundred or so participants in the plenary, only a handful (five or so) had heard of the Kenyan initiative. So this was a great opportunity to share the Ushahidi story with a diverse coalition of committed human rights workers. There were at least 40 countries or territories represented, ranging from Armenia and Ecuador to Palestine and Zimbabwe.

Since I’ve blogged about Ushahidi extensively already, I will only add a few observations here (see Slideshare for the slides). My presentation followed Florence’s talk on the latest developments at Wikipedia and I really hope to get more of her thoughts on applying lessons learned to the Ushahidi project. Both projects entail crowdsourcing and data validation processes.

Crowdsourcing

“Nobody Knows Everything, but Everyone Knows Something.” I borrowed this line from Florence’s talk to explain the rationale behind Ushahidi. Applied to human rights reporting, “nobody knows about every human rights violation taking place, but everyone may know of some incidents.” The latter is the local knowledge that Ushahidi seeks to render more visible by taking a crowdsourcing approach.

Recognizing the powerful convergence of communication technologies and information ecosystems is key to Ushahidi’s platform. Various deployments of Ushahidi have allowed individuals to report human rights violations online, by SMS and/or via Twitter. Unlike the majority of human rights monitoring platforms, Ushahidi seeks to “close the feedback loop” by allowing individuals to subscribe to alerts in their cities. As we know only too well, monitoring human rights violations is not equivalent to preventing them.

Validation

Given the importance of data validation vis-a-vis human rights reporting, I outlined Ushahidi’s approach and introduced the Swift River initiative which uses crowdsourcing to filter crisis information reported via Twitter, Ushahidi, Flickr, YouTube, local mobile and web social networks. When Ushahidi published their first blog post on Swift River, I commented that Wikipedia was most likely the best at crowdsourcing the filter.

This explains why I’m eager to learn more from Florence regarding her experience with Wikipedia. She mentioned that one new way they track online vandalism of Wikipedia entries is by detecting “sudden changes” in the flow of edits by anonymous users. Edits of this nature must be validated by a third party before being officially published—a new rule being considered by Wikipedia.
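Detecting "sudden changes" in a stream of edits can be illustrated with a simple spike test: flag a period whose anonymous-edit count sits far above the recent average. The window, threshold and sample figures below are assumptions for illustration; I do not know Wikipedia's actual heuristic:

```python
from statistics import mean, pstdev

def is_spike(history, current, k=3.0):
    """True if `current` exceeds the historical mean by k standard deviations."""
    mu, sigma = mean(history), pstdev(history)
    return current > mu + k * max(sigma, 1e-9)  # floor sigma to avoid a zero threshold

# Hypothetical hourly counts of anonymous edits to one article:
edits_per_hour = [2, 3, 1, 2, 4, 2, 3]
print(is_spike(edits_per_hour, 30))  # a sudden burst of 30 edits stands out
print(is_spike(edits_per_hour, 3))   # a typical hour does not
```

The same anomaly-detection logic could flag suspicious bursts of reports on a crowdsourced crisis map, routing them to a human moderator rather than publishing them automatically.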

One other point worth noting, and which I’ve blogged about before, is that Wikipedia continues to be used for real-time reporting of unfolding crises. We saw this during the London bombings back in 2005 and more recently with the Mumbai attacks. The pages were being edited at least a hundred times a day and as far as I know were as accurate as mainstream media reports and more up-to-date.

The point is, if Wikipedia can serve as a platform for accurate, real-time reporting of political crises, then so should Ushahidi. The challenge is to get enough contributors to Ushahidi to constitute “the crowd” and sufficient alerts to constitute a river. The power here is in the numbers. Perhaps in time the Ushahidi platform may become more like a public sphere where different perspectives on alerts might be exchanged. In other words, we may see a shift away from data “deconfliction” which is reductionist.

The Q&A

The Questions and Answers session was productive and lively. Concerns about data validation and the security of those reporting in repressive environments were raised. The point to keep in mind is that Ushahidi does not exist in a vacuum, which is why I showed HHI’s Google Earth Layer of Kenya’s post-election violence. To be sure, Ushahidi does not replace but rather complements traditional sources of reporting like the national media or alternative sources like citizen journalism. Think of a collage as opposed to a painting.

Human rights incidents mapped on the Ushahidi platform may not be fully validated, but the purpose of Ushahidi is not to provide information on human rights violations that meet ICC standards. The point is to document instances of violations so they (1) can be investigated by interested parties, and (2) serve as potential early warnings for communities caught in conflict. In terms of the security of those engaged in reporting alerts using the Ushahidi platform, the team is adding a feature that allows users to report anonymously.

As expected, there were also concerns about “bad guys” gaming the Ushahidi platform. This is a tricky point to respond to because (1) to the best of my knowledge this hasn’t happened; (2) I’m not sure what the “bad guys” would stand to gain tactically and strategically; (3) Ushahidi has a fraction of the audience—and hence political influence—that television and radio stations have; (4) I doubt “bad guys” are immune to the digital “fog of war“; (5)  the point of Swift River is to make gaming difficult by filtering it out.

In any event, it would behoove Ushahidi to consider potential scenarios in which the platform could be used to promote disinformation and violence. At this point, however, I’m really not convinced that “bad guys” will see the Ushahidi platform as a useful tool to further their own ends.

Patrick Philippe Meier

NeoGeography and Crisis Mapping Analytics

WarViews is Neogeography

Colleagues at the Swiss Federal Institute of Technology Zurich (ETH) are starting to publish their research on the WarViews project. I first wrote about this project in 2007 as part of an HHI deliverable on Crisis Mapping for Humanity United. What appeals to me about WarViews is the initiative’s total “Neogeography” approach.

WarView

picture-21

What is Neogeography? Surprisingly, WarViews‘s first formal publication (Weidmann and Kuse, 2009) does not use the term but my CrisisMappers colleague Andrew Turner wrote the defining chapter on Neogeography for O’Reilly back in 2006:

Neogeography means ‘new geography’ and consists of a set of techniques and tools that fall outside the realm of traditional GIS, Geographic Information Systems. Where historically a professional cartographer might use ArcGIS, talk of Mercator versus Mollweide projections, and resolve land area disputes, a neogeographer uses a mapping API like Google Maps, talks about GPX versus KML, and geotags his photos to make a map of his summer vacation.

Essentially, Neogeography is about people using and creating their own maps, on their own terms and by combining elements of an existing toolset. Neogeography is about sharing location information with friends and visitors, helping shape context, and conveying understanding through knowledge of place.

Compare this language with Weidmann and Kuse, 2009:

[The] use of geographic data requires specialized software and substantial training and therefore involves high entry costs for researchers and practitioners. [The] War Views project [aims] to create an easy-to-use front end for the exploration of GIS data on conflict. It takes advantage of the recent proliferation of Internet-based geographic software and makes geographic data on conflict available for these tools.

With WarViews, geographic data on conflict can be accessed, browsed, and time-animated in a few mouse clicks, using only standard software. As a result, a wider audience can take advantage of the valuable data contained in these databases […].

The team in Zurich used the free GIS server software GeoServer, which reads “vector data in various formats, including the shapefile format used for many conflict-related GIS data sets.” This way, WarViews allows users to visualize data both statically and dynamically using Google Earth.
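For readers curious what this pipeline produces, here is a hedged sketch of the kind of time-stamped KML that lets Google Earth animate geo-referenced events over time. The event records and field names are hypothetical, not WarViews' actual data:

```python
# Hypothetical geo-referenced conflict events (name, coordinates, date).
events = [
    {"name": "Event A", "lat": -11.2, "lon": 17.9, "date": "2002-03-15"},
    {"name": "Event B", "lat": -12.5, "lon": 18.5, "date": "2002-06-01"},
]

def to_kml(events):
    """Build a minimal KML document; <TimeStamp> enables Google Earth's time slider."""
    placemarks = "".join(
        f"""  <Placemark>
    <name>{e['name']}</name>
    <TimeStamp><when>{e['date']}</when></TimeStamp>
    <Point><coordinates>{e['lon']},{e['lat']}</coordinates></Point>
  </Placemark>
""" for e in events)
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n'
            + placemarks + '</Document>\n</kml>\n')

print(to_kml(events))
```

Saving this output as a .kml file and opening it in Google Earth displays each event at its coordinates, with the built-in time slider animating the sequence.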

Evidently, the WarViews project is not groundbreaking compared to many of the applied mapping projects carried out by the CrisisMappers Group. (Colleagues and I in Boston created a Google Earth layer of DRC and Colombia conflict data well before WarViews came online).

That said, the academic initiative at ETH Zurich is an important step forward for neogeography and an exciting development for political scientists interested in studying the geographic dimensions of conflict data.

Geographic Data

Geo-tagged conflict data is becoming more widely available. My alma mater, the Peace Research Institute in Oslo (PRIO), has made an important contribution with the Armed Conflict Location and Event Dataset (ACLED). This dataset includes geo-tagged conflict data for 12 countries from 1946 to the present.

In addition to ACLED, Weidmann and Kuse (2009) also reference two additional geo-tagged datasets. The first is the Political Instability Task Force's Worldwide Atrocities Dataset (PITF), which comprises a comprehensive collection of violent events against noncombatants. The second is the Peacekeeping Operations Locations and Event Dataset (Doroussen 2007, PDF), which provides geo-tagged data on interventions in civil wars. This dataset is not yet public.

Weidmann and Kuse (2009) do not mention Ushahidi, a Mobile Crisis Mapping (MCM) platform, nor do they reference HHI’s Google Earth Crisis Map of Kenya’s post-election violence (2008). Both initiatives provide unique geo-tagged peace and conflict data. Ushahidi has since been deployed in the DRC, Zimbabwe and Gaza.

Unlike the academic databases referenced above, the Ushahidi data is crowdsourced and geo-tagged in quasi-real time. Given Ushahidi’s rapid deployment to other conflict zones, we can expect a lot more geo-tagged information in 2009. The question is, will we know how to analyze this data to detect patterns?
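A first step toward answering that question is simply slicing a stream of quasi-real-time reports into time windows, the same temporal binning that underlies both animated maps and trend analysis. The sketch below uses a hypothetical report structure (flat dicts with `timestamp`, `lat`, `lon` fields), not Ushahidi's actual data format.

```python
from collections import defaultdict
from datetime import datetime

def bin_reports_by_week(reports):
    """Group geo-tagged reports (hypothetical fields: timestamp, lat,
    lon) into ISO year-week buckets -- the temporal slicing that a
    time-animated map or a simple trend analysis starts from."""
    bins = defaultdict(list)
    for r in reports:
        ts = datetime.fromisoformat(r["timestamp"])
        year, week, _ = ts.isocalendar()
        bins[(year, week)].append(r)
    return dict(bins)

reports = [
    {"timestamp": "2008-01-02T10:00:00", "lat": -1.28, "lon": 36.82},
    {"timestamp": "2008-01-03T15:30:00", "lat": -0.10, "lon": 34.75},
    {"timestamp": "2008-01-10T09:00:00", "lat": -1.29, "lon": 36.80},
]
counts = {k: len(v) for k, v in bin_reports_by_week(reports).items()}
print(counts)  # two reports fall in ISO week 1 of 2008, one in week 2
```

Comparing the per-window counts (or per-window spatial distributions) across weeks is the simplest possible way to ask whether reporting activity is escalating, and where.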

Crisis Mapping Analytics (CMA)

The WarViews project is “not designed for sophisticated analyses of geographic data […].” This is perhaps ironic given that academics across multiple disciplines have developed a plethora of computational methods and models for analyzing geographic data over time and space. Those techniques, however, require advanced expertise in spatial econometrics and statistics.

The full potential of neogeography will only be realized when we have more accessible ways to analyze the data visualized on platforms like Google Earth. Neogeography brought dynamic mapping to a much wider audience; spatial econometric analysis has no popular equivalent yet.

This is why I introduced the term Crisis Mapping Analytics (CMA) back in August 2008, and why I blogged about the need to develop this new field here and here. The Harvard Humanitarian Initiative (HHI) is now spearheading the development of CMA metrics, given the pressing need for accessible yet rigorous methods to identify patterns in crisis mapping data for the purposes of early warning. Watching data played back on Google Earth over and over will only take us so far, especially as new volumes of disparate datasets become available in 2009.
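To make the idea of a CMA metric concrete, here is a deliberately simple stand-in (my own illustration, not one of the metrics HHI has developed): a grid-based event density count that flags the densest cells as candidate hotspots. Anything a map animation shows qualitatively, a metric like this can report quantitatively.

```python
from collections import Counter

def hotspot_grid(events, cell_deg=0.5):
    """Count events per grid cell (cell_deg degrees on a side) and
    return cells sorted by density. A toy hotspot metric: the top of
    the list is where events cluster geographically."""
    grid = Counter()
    for lat, lon in events:
        # Floor-divide coordinates into integer cell indices
        cell = (int(lat // cell_deg), int(lon // cell_deg))
        grid[cell] += 1
    return grid.most_common()

# Three events near Nairobi, one near Eldoret (illustrative points)
events = [(-1.28, 36.82), (-1.30, 36.79), (-1.27, 36.85), (0.52, 35.27)]
print(hotspot_grid(events)[0])  # densest cell first, with its count of 3
```

Real spatial statistics replace the fixed grid with kernel density estimates or cluster-detection tests, but the payoff is the same: a ranked, comparable summary of where events concentrate, instead of an impression gleaned from replaying the animation.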

HHI is still in conversation with a prospective donor about establishing the new field of CMA, so I’m unable to outline the metrics we have developed here. I hope the donor will provide HHI with funding so we can partner and collaborate with other groups to formalize the field.

Crisis Mapping Conference

In the meantime, my colleague Jen Ziemke and I are exploring the possibility of organizing a 2-3 day conference on crisis mapping for Fall 2009. The purpose of the conference is to shape the future of the field by bridging the gap between the academics and practitioners working in it.

In our opinion, developing the new field of Crisis Mapping Analytics (CMA) will require close and collegial collaboration between academic institutes like PRIO and operational projects like Ushahidi.

Jen and I are therefore starting formal conversations with donors in order to make this conference happen. Stay tuned for updates in March. In the meantime, if you’d like to provide logistical support or help co-sponsor this unique conference, please email us.

Patrick Philippe Meier

ISA 2009: Digital Technologies in Kenya’s Post Election Crisis

The fourth presentation at the ISA panel that I’m chairing will feature research by Joshua Goldstein and Juliana Rotich on the role of digital networked technologies during Kenya’s post-election violence (PDF). Blog posts on the other three presentations are available here on human rights, here on political activism and here on digital resistance.

Introduction

Josh and Juliana pose the following question: do mobile phones and the Internet promote transparency and good governance or do they promote hate speech and conflict? The authors draw on the 2007-2008 Kenyan presidential elections to assess the impact of digitally networked technologies, specifically mobile phones and the Internet, on the post-election violence.

This study is an important contribution to the scholarly research on the impact of digital technology on democracy, since the existing literature is largely written through the lens of established Western democracies. The literature thus “excludes the experience of Sub-Saharan Africa, where meaningful access to digital tools is only beginning to emerge, but where the struggle between failed state and functioning democracy are profound.”

Case Study

Josh and Juliana draw on Kenya as a case study to assess the individual impact of mobile phones and the Internet on the post-election violence. The mobile phone is the most widely used digital application in Kenya and the rest of Africa. The low cost and ease of texting help explain how quickly “hate SMS” began circulating after Kenya’s election day. Some examples of the messages texted:

Fellow Kenyans, the Kikuyu’s have stolen our children’s future… we must deal with them in a way they understand… violence.

No more innocent Kikuyu blood will be shed. We will slaughter them right here in the capital city. For justice, compile a list of Luo’s you know… we will give you numbers to text this information.

The authors are concerned about the troubling trend of hate SMS in East Africa, citing a violent incident in neighboring Uganda that was organized via SMS to protest the government’s sale of a forest to a company. As they note, “mass SMS tools are remarkably useful for organizing this type of explicit, systematic, and publicly organized campaign of mob violence.”

However, the authors also recognize that “since SMS, unlike radio, is a multi-directional tool, there is also hope that voices of moderation can make themselves heard.” They point to the response taken by Michael Joseph, the CEO of Kenya’s largest mobile phone provider Safaricom when he was asked by government officials to consider shutting down the SMS system:

Joseph convinced the government not to shut down the SMS system, and instead to allow SMS providers to send out messages of peace and calm, which Safaricom did to all nine million of its customers.

Josh and Juliana also note that tracking and identifying individuals who promote hate speech is relatively easy for governments and companies to do. “In the aftermath of the violence, contact information for over one thousand seven hundred individuals who allegedly promoted mob violence was forwarded to the Government of Kenya.” While Kenya had no law under which to prosecute hate SMS, Parliament has since begun drafting one.

The Internet in Kenya was also used for both predatory and civic speech. For example, “the leading Kenyan online community, Mashada, became overwhelmed with divisive and hostile messages,” which prompted the moderators to “shut down the site, recognizing that civil discourse was rapidly becoming impossible.”

However, David Kobia, the administrator of Mashada, decided to launch a new site a few days later explicitly centered on constructive dialogue. The site, “I Have No Tribe,” was successful in promoting a more constructive discourse and demonstrates “that one possible response to destructive speech online is to encourage constructive speech.”

Ushahidi combined mobile phones and the Internet to crowdsource reports of human rights violations during the post-election violence. The authors contend that the Ushahidi platform is “revolutionary for human rights campaigns in the way that Wikipedia is revolutionary for encyclopedias; they are tools that allow cooperation on a massive scale.” I have already blogged extensively about Ushahidi here and here, so will not expand on this point other than to emphasize that Ushahidi was not used to promote hate speech.

Josh and Juliana also draw on the role of Kenya’s citizen journalists to highlight another peaceful application of digital technologies. As they note, Kenya has one of the richest blogging traditions in sub-Saharan Africa, which explains why,

Kenyan bloggers became a critical part of the conversation [when] the web traffic from within Kenya shot through the roof. The influence ballooned further when radio broadcasters began to read influential bloggers over the airwaves, helping them reach […] 95% of the Kenyan population.

When the Government of Kenya declared a ban on live news coverage on December 30, 2007, Kenyan bloggers became indispensable in their role as citizen journalists. […] Blogs challenged the government’s version of events as they unfolded.

[…] Further, Blogs became a critical source of information for Kenyans in Nairobi and the diaspora. Rumors spread via SMS were dispelled via an online dialogue that took place on blogs and in the comments section of blogs.

Conclusion

When we talk about the ‘networked public sphere,’ we are usually referring to a Western public sphere; one that facilitates public discourse, increased transparency and positive cooperation. However, as the case study above demonstrates, the narrative is more involved when we talk about an African or Kenyan ‘networked public sphere.’ Indeed, the authors conclude that digital networked technologies catalyzed both “predatory behavior such as ethnic-based mob violence and civic behavior such as journalism and human rights campaigns.”

Several questions remain to be addressed in further research. Namely, how important is a vibrant blogosphere to promoting positive applications of digital technologies in times of crisis? Are networked digital technologies like Ushahidi more susceptible to positive uses than predatory ones? And finally, how does the Kenya case compare to others, like the Orange Revolution in Ukraine?

Patrick Philippe Meier