Tag Archives: Haiti

An Open Letter to the Good People at Benetech

Dear Good People at Benetech,

We’re not quite sure why Benetech went out of its way to discredit ongoing research by the European Commission (EC) that analyzes SMS data crowdsourced during the disaster response to Haiti. Benetech’s area of expertise is human rights (rather than disaster response), so why go after the EC’s findings, which had nothing to do with human rights? To readers who want context, feel free to read this blog post of mine along with these replies by Benetech’s CEO:

Issues with Crowdsourced Data Part 1
Issues with Crowdsourced Data Part 2

The short version of the debate is this: the EC’s exploratory study found that the spatial pattern of text messages from Mission 4636 in Haiti was positively correlated with building damage in Port-au-Prince. This would suggest that crowdsourced SMS data had statistical value in Haiti—in addition to their value in saving lives. But Benetech’s study shows a negative correlation. That’s basically it. If you’d like to read something a little more spicy though, do peruse this recent Fast Company article, fabulously entitled “How Benetech Slays Monsters with Megabytes and Math.” In any case, that’s the back-story.

So let’s return to the Good People at Benetech. I thought I’d offer some of my humble guidance in case you feel threatened again in the future—I do hope you don’t mind and won’t take offense at my unsolicited and certainly imperfect advice. So by all means feel free to ignore everything that follows and focus on the more important work you do in the human rights space.

Next time Benetech wants to try and discredit the findings of a study in some other discipline, I recommend making sure that your own counter-findings are solid. In fact, I would suggest submitting your findings to a respected peer-reviewed journal—preferably one of the top tier scientific journals in your discipline. As you well know, after all, this really is the most objective and rigorous way to assess scientific work. Doing so would bring much more credibility to Benetech’s counter-findings than a couple blog posts.

My reasoning? Benetech prides itself (and rightly so) on carrying out some of the most advanced, cutting-edge quantitative research on patterns of human rights abuses. So if you wanted to discredit studies like the one carried out by the EC, this was an opportunity to publicly demonstrate your advanced expertise in quantitative analysis. But Benetech decided to use a simple non-spatial model to discredit the EC’s findings. Why use such a simplistic approach? Your response would have been more credible had you used statistical models for spatial point data instead. But granted, had you used more advanced models, you would have found evidence of a positive correlation. So you probably won’t want to read this next bit: a more elaborate “Tobit” correlation analysis actually shows the significance of SMS patterns as an explanatory variable in the spatial distribution of damaged buildings. Oh, and the correlation is (unfortunately) positive.
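For readers who want the gist of that analysis: a Tobit model is a censored regression, which suits damage data because many locations register no observable damage at all. The study’s exact specification is not reproduced here, but in generic notation (mine, not the EC’s) the model looks like this:

    y_i^{*} = \beta_0 + \beta_1 \,\mathrm{SMS}_i + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^2)
    y_i = \max(0,\; y_i^{*})

Here y_i is the observed damage intensity at location i, SMS_i is the local density of crowdsourced reports, and a positive, statistically significant \beta_1 is precisely what “positive correlation” means in this debate.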

But that’s really beside the point. As my colleague Erik Hersman just wrote on the Ushahidi blog, one study alone is insufficient. What’s important is this: the last thing you want to do when trying to discredit a study in public is to come across as sloppy or as having ulterior motives (or both for that matter). Of course, you can’t control what other people think. If people find your response sloppy, then they may start asking whether the other methods you do use in your human rights analysis are properly peer-reviewed. They may start asking whether a strong empirical literature exists to back up your work and models. They may even want to know whether your expert statisticians have an accomplished track record and publish regularly in top-tier scientific journals. Other people may think you have ulterior motives and will believe this explains why you tried to discredit the EC’s preliminary findings. This doesn’t help your cause either. So it’s important to think through the implications of going public when trying to discredit someone’s research. Goodness knows I’ve made some poor calls myself on such matters in the past.

But let’s take a step back for a moment. If you’re going to try and discredit research like the EC’s, please make sure you correctly represent the other side’s arguments. Skewing them or fabricating them is unlikely to make you very credible in the debate. For example, the EC study never concluded that Search and Rescue teams should only rely on SMS to save people’s lives. Furthermore, the EC study never claimed that using SMS is preferable to using established data on building density. It’s surely obvious—and you don’t need to demonstrate this statistically—that using a detailed map of building locations would provide a far better picture of potentially damaged buildings than crowdsourced SMS data. But what if this map is not available in a timely manner? As you may know, data layers of building density are not very common. Haiti was a good example of how difficult, expensive and time-consuming the generation of such a detailed inventory is. The authors of the study simply wanted to test whether the SMS spatial pattern matched the damage analysis results, which it does. All they did was propose that SMS patterns could help in structuring the efforts needed for a detailed assessment, especially because SMS data can be received shortly after the event.

So to summarize, no one (I know) has ever claimed that crowdsourced data should replace established methods for information collection and analysis. This has never been an either-or argument. And it won’t help your cause to turn it into a black-and-white debate because people familiar with these issues know full well that the world is more complex than the picture you are painting for them. They also know that people who take an either-or approach often do so when they have either run out of genuine arguments or had few to begin with. So none of this will make you look good. In sum, it’s important to (1) accurately reflect the other side’s arguments, and (2) steer clear of creating an either-or, polarized debate. I know this isn’t easy to do; I’m guilty myself… on multiple counts.

I’ve got a few more suggestions—hope you don’t mind. They follow from the previous ones. The authors of the EC study never used their preliminary findings to extrapolate to other earthquakes, disasters or contexts. These findings were specific to the Haiti quake and the authors never claimed that their model was globally valid. So why did you extrapolate to human rights analysis when that was never the objective of the EC study? Regardless, this just doesn’t make you look good. I understand that Benetech’s focus is on human rights and not disaster response, but the EC study never sought to undermine your good work in the field of human rights. Indeed, the authors of the study hadn’t even heard of Benetech. So in the future, I would recommend not extrapolating findings from one study and assuming they will hold in your own field of expertise, or that they even threaten it. That just doesn’t make any sense.

There are a few more tips I wanted to share with you. Everyone knows full well that crowdsourced data has important limitations—nobody denies this. But a number of us happen to think that some value can still be derived from crowdsourced data. Even Mr. Moreno-Ocampo, the head of the International Criminal Court (ICC), whom I believe you know well, has pointed to the value of crowdsourced data from social media. In an interview with CNN last month, Mr. Moreno-Ocampo emphasized that Libya was the first time that the ICC was able to respond in real time to allegations of atrocities, partially due to social-networking sites such as Facebook. “This triggered a very quick reaction. The (United Nations) Security Council reacted in a few days; the U.N. General Assembly reacted in a few days. So, now because the court is up and running we can do this immediately,” he said. “I think Libya is a new world. How we manage the new challenge — that’s what we will see now.”

Point is, you can’t control the threats that will emerge or even prevent them, but you do control the way you decide to publicly respond to these threats. So I would recommend using your response as an opportunity to be constructive and demonstrate your good work rather than trying to discredit others and botching things up in the process.

But going back to the ICC and the bit in the Fast Company article about mathematics demonstrating the culpability of the Guatemalan government. Someone who has been following your work closely for years emailed me because they felt somewhat irked by all this. By the way, this is yet another unpleasant consequence of trying to publicly discredit others: new critics of your work will emerge. The critic in question finds the claim a “little far fetched” re your mathematics demonstrating the culpability of the Guatemalan government. “There already was massive documented evidence of the culpability of the Guatemalan government in the mass killings of people. If there is a contribution from mathematics it is to estimate the number of victims who were never documented. So the idea is that documented cases are just a fraction of total cases and you can estimate the gap between the two. In order to do this estimation, you have to make a number of very strong assumptions, which means that the estimate may very well be unreliable anyway.”

Now, I personally think that’s not what you, Benetech, meant when you spoke with the journalist, because goodness knows the number of errors that journalists have made writing about Haiti.

In any case, the critic had this to add: “In a court of law, this kind of estimation counts for little. In the latest trial at which Benetech presented their findings, this kind of evidence was specifically rejected. Benetech and others claim that in an earlier trial they nailed Milosevic. But Milosevic was never nailed in the first place—he died before judgment was passed and there was a definite feeling at the time that the trial wasn’t going well. In any case, in a court of law what matters are documented cases, not estimates, so this argument about estimates is really beside the point.”

Now I’m really no expert on any of these issues, so I have no opinion on this case or the statistics or the arguments involved. They may very well be completely wrong, for all I know. I’m not endorsing any of the above statements. I’m simply using them as an illustration of what might happen in the future if you don’t carefully plan your counter-argument before going public. People will take issue and try to discredit you in turn, which can be rather unpleasant.

In conclusion, I would like to remind the Good People at Benetech about what Ushahidi is and isn’t. The Ushahidi platform is not a methodology (as I have already written on iRevolution and the Ushahidi blog). The Ushahidi platform is a mapping tool. The methodology that people choose to use to collect information is entirely up to them. They can use random sampling, controlled surveys, crowdsourcing, or even the methodology used by Benetech. I wonder what the good people at Benetech would say if some of their data were to be visualized on an Ushahidi platform. Would they dismiss the crisis map altogether? And speaking of crisis maps, most Ushahidi maps are not crisis maps. The platform is used in a very wide variety of ways, even to map the best burgers in the US. Is Benetech also going to extrapolate the EC’s findings to burgers?

So to sum up, in case it’s not entirely clear, we know full well that there are important limitations to crowdsourced data in disaster response and have never said that the methodology of crowdsourcing should replace existing methodologies in the human rights space (or any other space for that matter). So please, let’s not keep going in circles.

Now, where do we go from here? Well, I’ve never been a good pen pal, so don’t expect any more letters from me in response to the Good People at Benetech. I think everyone knows that a back and forth would be unproductive and largely a waste of time, not to mention an unnecessary distraction from the good work that we all try to do in the broader community to bring justice, voice and respect to marginalized communities.


New Publications on Haiti, Crowdsourcing and Crisis Mapping

Two new publications that may be of interest to iRevolution readers:

MIT’s journal Innovations: Technology, Governance, Globalization just released a special edition focused on Haiti, which includes lead articles by President Bill Clinton and Digicel’s CEO Denis O’Brien. My colleague Ida Norheim-Hagtun and I were invited to contribute the following piece: Crowdsourcing for Crisis Mapping in Haiti. The edition also includes articles by Mark Summer from Inveneo and my colleague Josh Nesbit from Medic:Mobile.

The SAIS Review of International Affairs recently published a special edition on the cyber challenge: threats and opportunities in a networked world. It includes an opening article on Internet Freedom by Alec Ross. My colleague Robert Munro and I were invited to write the following piece: The Unprecedented Role of SMS in Disaster Response, which focuses specifically on Haiti. Colleagues from Harvard University’s Berkman Center also had a piece on Political Change in the Digital Age, which I reviewed here.

Please feel free to get in touch if you’d like copies of the articles on Haiti. In the meantime, here is a must-read for everyone working in Haiti: “Foreign Friends, Leave January 12th to Haitians.”

How Crowdsourced Data Can Predict Crisis Impact: Findings from Empirical Study on Haiti

One of the inherent concerns about crowdsourced crisis information is that the data is not statistically representative and hence “useless” for any serious kind of statistical analysis. But my colleague Christina Corbane and her team at the European Commission’s Joint Research Center (JRC) have come up with some interesting findings that prove otherwise. They used the reports mapped on the Ushahidi-Haiti platform to show that this crowdsourced data can help predict the spatial distribution of structural damage in Port-au-Prince. The results were presented at this year’s Crisis Mapping Conference (ICCM 2010).

The data on structural damage was obtained using very high resolution aerial imagery. Some 600 experts from 23 different countries joined the World Bank-UNOSAT-JRC team to assess the damage based on this imagery. This massive effort took two months to complete. In contrast, the crowdsourced reports on Ushahidi-Haiti were mapped in near-real time and could “hence represent an invaluable early indicator on the distribution and on the intensity of building damage.”

Corbane and her co-authors “focused on the area of Port-au-Prince (approximately 9 by 9 km) where a total of 1,645 messages have been reported and 161,281 individual buildings have been identified, each classified into one of the 5 different damage grades.” Since the focus of the study is the relationship between crowdsourced reports and the intensity of structural damage, only grades 4 and 5 (structures beyond repair) were taken into account. The result is a bivariate point pattern consisting of two variables: 1,645 crowdsourced reports and 33,800 damaged buildings (grades 4 and 5 combined).

The above graphic simply serves as an illustrative example of the possible relationships between simulated distributions of SMS and damaged buildings. The two figures below represent the actual spatial distribution of crowdsourced reports and damaged buildings according to the data. The figures show that both crowdsourced data and damage patterns are clustered even though the latter is more pronounced. This suggests that some kind of correlation exists between the two distributions.

Corbane and colleagues therefore used spatial point pattern statistics to better understand and characterize the spatial structures of crowdsourced reports and building damage patterns. They used Ripley’s K-function, which is often considered “the most suitable and functional characteristic for analyzing point processes.” The results clearly demonstrate the existence of a statistically significant correlation between the spatial patterns of crowdsourced data and building damage at “distances ranging between 1 and 3 to 4 km.”
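For the curious, here is roughly what that calculation involves. The following is a minimal Python sketch, on toy data, of the bivariate (cross) K-function; it omits the edge corrections a real analysis would apply (e.g., via spatstat in R), and the function name and simulated points are mine, not the JRC’s:

    import numpy as np

    def cross_k(pts_a, pts_b, radii, area):
        # Naive estimate of the cross K-function K_ab(r): the expected
        # number of B-points within distance r of a typical A-point,
        # normalised by the intensity of B. No edge correction.
        d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
        lam_b = len(pts_b) / area
        return np.array([(d <= r).sum() for r in radii]) / (lam_b * len(pts_a))

    # Toy data in a 9 x 9 km window (coordinates in metres): "damage"
    # points are scattered around a subset of the "SMS" points, so the
    # two patterns are spatially associated by construction.
    rng = np.random.default_rng(42)
    sms = rng.uniform(0, 9_000, size=(200, 2))
    damage = sms[:100] + rng.normal(0, 300, size=(100, 2))
    radii = np.linspace(500, 4_000, 8)

    # Under independence, K_ab(r) is about pi * r^2; ratios above 1 mean
    # the patterns attract at that distance, which is the study's finding.
    print(cross_k(sms, damage, radii, area=9_000 ** 2) / (np.pi * radii ** 2))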

The co-authors then used a marked Gibbs point process model to “derive the conditional intensity of building damage based on the pairwise interactions between SMS [crowdsourced reports] and building damages.” The resulting model was then used to compute the predicted damage intensity values, which are depicted below alongside the observed damage intensity.
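In plain terms, a Gibbs model of this kind specifies how intense building damage should be at a location given the points around it. A generic pairwise-interaction form, which may well differ from the paper’s exact specification, is:

    \lambda(u \mid \mathbf{x}) = \exp\Big( \beta + \sum_{x_i \in \mathbf{x}_{\mathrm{SMS}}} \phi\big( \lVert u - x_i \rVert \big) \Big)

where \lambda(u \mid \mathbf{x}) is the conditional intensity of damage at location u, \beta is a baseline level, and \phi is an interaction function of the distance to each SMS report. A fitted \phi that raises the intensity at short distances is the model’s way of saying the two patterns attract.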

The figures clearly show that the similarity between the patterns exhibited by the predictive model and the actual damage pattern is particularly strong. This visual inspection is confirmed by the computed correlation between the observed and predicted damage patterns shown below.

In sum, the results of this empirical study demonstrate the existence of a spatial dependence between crowdsourced data and damaged buildings. The results of the analysis also show how statistical interactions between the patterns of crowdsourced data and building damage can be used for modeling the intensity of structural damage to buildings.

These findings are rather stunning. Data collected using unbounded crowdsourcing (non-representative sampling) largely in the form of SMS from the disaster affected population in Port-au-Prince can predict, with surprisingly high accuracy and statistical significance, the location and extent of structural damage post-earthquake.

The World Bank-UNOSAT-JRC damage assessment took 600 experts 66 days to complete. The cost probably figured in the hundreds of millions of dollars. In contrast, Mission 4636 and Ushahidi-Haiti were both ad-hoc, volunteer-based projects and virtually all the crowdsourced reports used in the study were collected within 14 days of the earthquake (most within 10 days).

But what does this say about the quality/reliability of crowdsourced data? The authors don’t make this connection but I find the implications particularly interesting since the actual content of the 1,645 crowdsourced reports was not factored into the analysis, simply the GPS coordinates, i.e., the metadata.

Here Come the Crowd-Sorcerers: “No We Can’t, No We Won’t” says Muggle Master

Sigh indeed. Yawn, even.

The purpose of this series is not to make it about Paul and Patrick. That’s boring as heck. The idea behind the series was not simply to provoke and use humorous analogies but to dispel confusion about crowdsourcing and thereby provide a more informed understanding of this methodology. I fear this is getting completely lost.

Recall that it was a humanitarian colleague who came up with the label “Crowd Sorcerer”. It made me laugh so I figured we’d have a little fun by using the label Muggle in return. But that’s all it is, good fun. And of course many humanitarians see eye to eye with the Crowd Sorcerer approach, so apologies to those who felt they were wrongly placed in the Muggle category. We’ll use the Sorting Hat next time.

Henry and Erik from Ushahidi

This is not about a division between Crowd Sorcerers and Muggles. As a colleague recently noted, “the line lies somewhere else, between effective implementation of new tools and methodologies versus traditional ways of collecting crisis information.” There are plenty of humanitarians who see value in trying out new approaches. Of course, there are some who simply say “No We Can’t, No We Won’t.”

There’s no point going back and forth with Paul on every one of his issues because many of these actually have little to do with crowdsourcing and more to do with him being provoked. In this post, I’m going to stick to the debate about the ins and outs of crowdsourcing in humanitarian response.

On Verification

Muggle Master: And of course the way in which Patrick interprets those words bears little relation to what those words actually said, which is this: “Unless there are field personnel providing “ground truth” data, consumers will never have reliable information upon which to build decision support products.”

I disagree. Again, the traditional mindset here is that unless you have field personnel (your own people) in charge, then there is no way to get accurate information. This implies that the disaster affected populations are all liars, which is clearly untrue.

Verification is of course important—no one said the contrary. Why would Ushahidi be dedicating time and resources to the Swift platform if the group didn’t think that verification was important?

The reality here is that verification is not always possible regardless of which methodology is employed. So it boils down to this: is having information that is not immediately verified better than having no information at all? If your answer is yes or “it depends,” then you’re probably a Crowd Sorcerer. If your answer is, “let’s try to test some innovative ways to make rapid verification possible,” then again, you likely are a Crowd Sorcerer/ette.

Incidentally, no one I know has advocated for the use of crowdsourced data at the expense of any other information. Crowd Sorcerers (and many humanitarians) are simply suggesting that it be considered one of multiple feeds. Also, as I’ve argued before, a combined approach of bounded and unbounded crowdsourcing is the way to go.

On Impact Evaluation

The Fletcher Team has commissioned an independent evaluation of the Ushahidi deployment in Haiti to go beyond the informal testimonies of success provided by first responders. This is a four-week evaluation led by Dr. Nancy Mock, a seasoned humanitarian and M&E expert with over 30 years of experience in the humanitarian and development field.

Nathan Morrow will be working directly with Nancy. Nathan is a geographer who has worked extensively on humanitarian and development information systems. He is a member of the European Evaluation Society and, like Nancy, a member of the American Evaluation Association. Nathan and Nancy will be aided by a public health student who has several years of experience in community development in Haiti and is a fluent Haitian Creole speaker.

The evaluation team has already gone through much of the data and been in touch with many of the first responders as well as other partners. Their job is to do as rigorous an evaluation as possible and to do it fully transparently. Nancy plans to present her findings publicly at the 2010 Crisis Mappers Conference, where we’ve dedicated a roundtable to reviewing these findings, as well as other reviews.

As for background, the ToR (available here) was drafted by graduate students specializing in M&E and reviewed closely by Professor Cheyanne Church, who teaches advanced graduate courses on M&E. She is considered a leading expert on the subject. The ToR was then shared on a number of listservs including ReliefWeb, the CrisisMappers Group and Pelican (a listserv for professional evaluators).

Nancy and Nathan are both experienced in the method known as utilization-focused evaluation (UFE), an approach chosen by The Fletcher Team to ensure that the evaluation is useful to all primary users as well as the humanitarian field. The UFE approach means that the ToR is a living document and being adapted as necessary by the evaluators to ensure that the information gathered is useful and actionable, not just interesting.

We don’t have anything to hide here, Muggles. This was a complete first in terms of live crisis mapping and mobile crowdsourcing. Unlike the humanitarian community, we weren’t prepared at all, nor trained, nor had prior experience with live crisis mapping and mobile crowdsourcing, nor with the use of crowdsourcing for near real-time translation, nor with managing hundreds of unpaid volunteers, nor did the vast majority of them have any background in disaster response, nor were most able to focus on this full time because of their under/graduate coursework and mid-term exams, nor did they have direct links or contacts with first responders prior to the deployment, nor did the many responders know they existed and/or who they were. In sum, they had all the odds stacked against them.

If the evaluation shows that the deployment and the Fletcher Team’s efforts didn’t save lives or are unlikely to have saved any lives, rescued people, had no impact, etc., none of us will dispute this. Will we give up? Of course not, Crowd Sorcerers don’t give up. We’ll learn and do better next time.

One of the main reasons for having this evaluation is not only to assess the impact of the deployment but to create a concrete list of lessons learned so that what didn’t work then is more likely to work in the future. The point here is to assess the impact just as much as it is to assess the potential added value of the approach for future deployments.

How can anyone innovate in a space riddled with a “No We Can’t, No We Won’t” mindset? Trial and error is not allowed; iterative learning and adaptation is as illegal as the dark arts. Some Muggles really need to read this post “On Technology and Learning, or Why the Wright Brothers Did Not Create the 747.” If die-hard Muggles had had their way, they would have forced the brothers to close up shop after just their first attempt because it “failed.”

Incidentally, the majority of development, humanitarian, aid, etc., projects are never evaluated in any rigorous or meaningful way (if at all, even). But that’s ok because these are double (Muggle) standards.

On Communicating with Local Communities

Concerns over security need not always be used as an excuse for not communicating with local communities. We need to find a way not to exclude potentially important informants. A little innovation and creative thinking wouldn’t hurt. Humanitarians working with Crowd Sorcerers could use SMS to crowdsource reports, triangulate as best as possible using manual means combined with Swift River, cross-reference with official information feeds and investigate reports that appear the most clustered and critical.

That way, if you see a significant number of text messages reporting the lack of water in an area of Port-au-Prince, then at least this gives you an indication that something more serious may be happening in that location and you can cross-reference your other sources to check whether the issue has already been picked up. Again, it’s this clustering effect that can provide important insights on a given situation.
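To make that concrete, here is a deliberately naive Python sketch of the clustering check described above. Every detail in it (coordinates, keywords, the rough 500 m grid, the three-report threshold) is hypothetical and chosen purely for illustration:

    from collections import Counter

    # Hypothetical incoming reports: (lat, lon, text).
    reports = [
        (18.548, -72.339, "pa gen dlo - no water near the market"),
        (18.549, -72.341, "need water urgently for many families"),
        (18.551, -72.340, "water shortage reported again"),
        (18.512, -72.290, "road blocked by debris"),
    ]

    CELL = 0.005  # grid cells of roughly 500 m at this latitude

    # Keep only water-related reports (English or Kreyol keyword).
    water = [r for r in reports if "water" in r[2].lower() or "dlo" in r[2].lower()]

    # Bin reports into grid cells and count reports per cell.
    cells = Counter((round(lat / CELL), round(lon / CELL)) for lat, lon, _ in water)

    # Cells with several independent reports are the ones worth
    # cross-referencing against official feeds before acting.
    for cell, n in cells.items():
        if n >= 3:
            print(f"{n} water-related reports clustered in cell {cell}")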

This would provide a mechanism to allow Haitians to report problems (or complaints for that matter) via SMS, phone, etc. Imogen Wall and other experienced humanitarians have long called for this to change. Hence the newly founded group Communicating with Disaster Affected Communities (CDAC).

Confusion to the End

Me: Despite what some Muggles may think, crowdsourcing is not actually magic. It’s just a methodology like any other, with advantages and disadvantages.

Muggle Master: That’s exactly what “Muggles” think.

Haha, well if that’s exactly what Muggles think, then this is yet more evidence of confusion in the land of Muggles. Crowdsourcing is just a methodology to collect information. There’s nothing new about non-probability sampling. Understanding the advantages and disadvantages of this methodology doesn’t require an advanced degree in statistical physics.

Muggle Master: Crowdsourcing should not form part of our disaster response plans because there are no guarantees that a crowd is going to show up. Crowdsourcing is no different from any other form of volunteer effort, and the reason why we have professional aid workers now is because, while volunteers are important, you can’t afford to make them the backbone of the operation. The technology is there and the support is welcome, but this is not the future of aid work.

This just reinforces what I’ve already observed: many in the humanitarian space are still confused about crowdsourcing. The crowd is always there. Haitians were always there. And crowdsourcing is not about volunteering. Again, crowdsourcing is just a methodology to collect information. When the UN does its rapid needs assessment, does the crowd all of a sudden vanish into thin air? Of course not.

As for volunteers, the folks at Fletcher and SIPA are joining forces to work together on deploying live crisis mapping projects in the future. They’re setting up their own protocols, operating procedures, etc. based on what they’ve learned over the past 6 months in order to replicate the “surge mapping capacity” they demonstrated in response to Haiti and Chile. (Swift River will make a large number of volunteers unnecessary.)

And pray tell who in the world has ever said that volunteers should be the backbone of a humanitarian operation? Please, do tell. That would be a nice magic trick.

Muggle Master: “The technology is there and the support is welcome, but this is not the future of aid work.”

The support is welcome? Great! But who said that crowdsourcing was the future of aid work? It’s just a methodology. How can one sole methodology be the future of aid work?

I’ll close with this observation. The email thread that started this Crowd-Sorcerer series ended with a second email written by the same group that wrote the first. That second email was far more constructive and conducive to building bridges. I’m excited by the prospects expressed in this second email and really appreciate the positive tone and interest they expressed in working together. I definitely look forward to working with them and learning more from them as we move forward in this space and collaboration.

Patrick Philippe Meier

Here Come the Crowd-Sorcerers: Highlighting Some Misunderstandings

Welcome back, folks. Here is the third episode in our “Crowd-Sorcerers Series.” You can read the first episode on “How Technology is Disrupting the Humanitarian Space and Why It’s Easy” right here. The second episode, which in a tongue-in-cheek way asks “Is it Possible to Teach an Old (Humanitarian) Dog New Tech’s?” is available here. Those episodes explain what this “Crowd-Sorcerers Series” is all about.

Oh, but just before we go to episode 3: it seems someone following this series didn’t have the good sense to recognize the sarcasm and humorous tone in my posts and thereby missed the point entirely. I’m just using these silly analogies and metaphors to get some points across. I’m drawing a caricature, so to speak, as some of these points often get overlooked in aid/dev speak.

This is not personal at all, and I very much welcome an open conversation with all interested, i.e., the point of this series. The Muggles and Crowd-Sorcerers comparison is just for fun; it isn’t about classy/not-classy, it’s about getting a point or two across to more than just a narrow segment of the aid/dev industry. So again, like I wrote in my first blog post in the series, let’s please not take ourselves too seriously, ok?

Muggles: Internet-based platforms may be generating good data within a certain segment of the IT community, such as Open Street Maps, and others like Ushahidi are providing an interesting alternative to real-time news channels, but this data is not getting to where it is needed in an operational sense – the guy/gal sitting in the tent with no Internet connection trying to plan a (name your Cluster/sector/need) survey.

The Ushahidi platform allows end-users to subscribe to alerts via SMS. And that core feature is not new to the Haiti deployment; it’s been there for a good while. Not only can users get automated SMS alerts with the Haiti deployment, but they can also define exactly the type of alerts they wish to receive by setting geographic parameters, tags and even keywords. Thanks to a new plugin, visual voicemail is also an option for the Ushahidi platform.
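As a rough illustration of what such alert matching involves, here is a minimal Python sketch. The field names, subscriber schema and thresholds are my own and do not reflect the actual Ushahidi codebase:

    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two lat/lon points, in km.
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dlat, dlon = p2 - p1, math.radians(lon2 - lon1)
        a = (math.sin(dlat / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
        return 2 * 6371 * math.asin(math.sqrt(a))

    def alert_matches(report, sub):
        # A report triggers an alert when it falls inside the
        # subscriber's radius AND mentions one of their keywords.
        near = haversine_km(report["lat"], report["lon"],
                            sub["lat"], sub["lon"]) <= sub["radius_km"]
        keyed = any(k in report["text"].lower() for k in sub["keywords"])
        return near and keyed

    sub = {"lat": 18.54, "lon": -72.34, "radius_km": 5,
           "keywords": ["medical", "trapped"]}
    report = {"lat": 18.55, "lon": -72.33,
              "text": "Three people trapped, medical help needed"}
    print(alert_matches(report, sub))  # True -> queue an outgoing SMS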

In addition, a group deploying the Ushahidi platform can respond to incoming text messages directly from the same interface, allowing for near real-time, two-way communication with the disaster affected communities. See this blog post to find out how that all works.

By the way, not all guys/gals will be sitting in a tent and/or have no Internet access. Also, not all data need to go to guys/gals in tents in the first place.

On Ushahidi being an alternative to real-time news channels, the vast majority of the information mapped on the Haiti platform during the first 5 days (before the 4636 SMS short code went live) came from:

  1. Mainstream media (television, radio, online newspapers)
  2. The Haitian Diaspora
  3. Social media (Twitter, Facebook, Flickr)
  4. Humanitarian sources (emails, situation reports, skype chats, phone calls)

One member of the Diaspora had this to say: “We are the country’s middle and upper class and Haitians living abroad. We monitor the Haiti radio, Facebook feeds and Twitter from all our contacts. Filter it and redistribute it […]. We also have a few contacts on the ground in Haiti. All the information we post has been confirmed to the best of our ability.” (Thanks to Rob Munro of Mission 4636 for sharing this).

Muggles: The crucial link that is required, and that the [Humanitarian Information Management] community seems to be drifting farther and farther from as we are collectively distracted by shiny objects and/or the latest, greatest thing since sliced bread, is field-based NGOs equipped with proper information-sharing platform(s) that can be used even when there is no Internet connectivity or Washington-based (or London-based or Paris-based) IT, mapping and GIS skills and support available.

Mobile phones are not new and shiny. Nor is Google Maps. Integrating both is not new either; SMS/map integration has been around for half a decade. The fact that the humanitarian community faces a challenge in innovating and keeping up with technology is certainly a problem. Free and open source platforms wouldn’t be filling a technology-information void if a gap didn’t exist in the first place.

Crowd-Sorcerers want to help (the ones I know at least) and they realize full well that they’re new to this space and don’t have all the answers. They want to (and actually do) partner with a number of humanitarian organizations on joint projects. But are the rest of the Muggles ready to join forces with sorcerers? Or will it take a disaster like Voldemort to make that happen? (Just in case someone missed the humorous tone here, that was a joke). Incidentally, I never mentioned the humanitarian organization (from the email thread) in my blog posts. So they are completely anonymous unless they choose otherwise.

Actually, the same Muggles that started the email exchange wrote a second email which was far, far more constructive and conducive to building bridges between Muggles and Crowd-Sorcerers. I’m excited by the prospects and really appreciate the positive tone and interest they expressed in working together. I definitely look forward to working with them and learning more from them as we move forward in this space and collaboration.

Patrick Philippe Meier

Here Come the Crowd-Sorcerers: Is it Possible to Teach an Old (Humanitarian) Dog New Tech’s?

Thanks for joining us for the second episode in the new “Crowd-Sorcerers Series.” If you missed the season premiere on “How Technology is Disrupting the Humanitarian Space and Why It’s Easy,” you can read it here. For a quick synopsis of what this is all about, I’m responding to some initial “anti-crowdsourcing” remarks made by a frustrated humanitarian group in a recent email exchange. I’m referring to this group as Muggles after they christened the Crowdsourcing Community as “Crowd-Sorcerers.” The name calling is of course all in good fun.

Here’s more from the original email exchange:

Muggles: [Our] view is that the focus [on crowdsourcing] needs to be turned around. Don’t use crowdsourcing as technology to collect data, but as a means to distribute verified, accurate and reliable information that has been collected according to recognized/accepted standards.

Well, well, well. Isn’t this interesting? Writing that “crowdsourcing is a technology” reveals how out of touch Muggles are. Crowdsourcing is a methodology, not a technology. See my blog posts on “Demystifying Crowdsourcing” and “Know What Ushahidi Is? Think Again.” Worse, to write that crowdsourcing should be used to disseminate information shows just how much confusion exists in the humanitarian space.

The importance of information dissemination has long been documented and has nothing to do with crowdsourcing! Perhaps the term they’re looking for is “crowdfeeding,” but I coined that term to highlight the need for technologies that promote information dissemination by the crowd for the crowd.

Confession: I shudder when reading language like “according to recognized/accepted standards.” Not because standards are not important, but because I’m wary of the exclusive and at times elitist attitude that tends to come with this language. I get flashbacks from “Seeing Like a State.”

Perhaps an astute reader will have recognized that the title of this blog post (“Here Come the Crowd-Sorcerers”) is inspired by Clay Shirky’s book “Here Comes Everybody: The Power of Organizing Without Organizations.” I won’t try to summarize all of Clay’s many lucid observations here but I do highly recommend the book to Muggles (along with Seeing Like a State).

This type of tension between regulation and innovation has been playing out in several other sectors as well, including banking (vs. mobile banking) and perhaps most notably in journalism (vs. citizen journalism). But the tensions there have matured somewhat (at least relatively). In the latter case, people are increasingly recognizing the value of citizen journalism while better understanding its limits—so much so that large media companies have themselves started to leverage crowdsourcing for content in their programming.

The journalism community’s initial reaction against bloggers is not too dissimilar to the frustration expressed by Muggles who keep hoping that crowdsourcing will just go away if they pout and stamp their feet hard enough. (Reminds me of the way that some Muggles freaked out at the invention of the printing press and later the telephone).

Here’s the bad news folks, you’ve seen nothing yet. The Crowd-Sorcerers are just getting warmed up. The level of crowdsourcing we’ve seen to date is just the tip of the wand. Haiti was a first, just a first. User-generated content is not about to vanish any time soon. In fact, it will continue growing exponentially. The vast majority of content available on the web will soon be user-generated.

The good news? Muggles can take this as an opportunity to demonstrate leadership and share their savoir-faire. What should Muggles not do? Let me share a real example from another sector: election monitoring. One of the world’s leading election monitoring groups actively discouraged local NGOs in a developing country from contributing any reports to an Ushahidi deployment that was run in-country by a local civil society network—let’s call them the Gryffindors.

The Gryffindors discovered this interference when they spoke with other local NGOs. They want to partner with these NGOs for the next elections but these groups are now hesitant. So here we have a Western (i.e., external) group directly interfering by telling local NGOs they cannot participate in a local initiative to document their own elections in their own country. (Sound familiar? Recall the LogBase example from Episode 1.) Who do the elections belong to? Citizens or foreigners?

Muggles have the opportunity to provide unique thought leadership here. Make Crowd-Sorcerers part of the solution, not the problem.

There’s more good news. Despite what some Muggles may think, crowdsourcing is not actually magic. It’s just a methodology like any other, with advantages and disadvantages. At the end of the day, you’re just collecting information and this information can also be triangulated and verified like any other type of information.

That’s the whole point behind Swift River, to provide a free and open source platform that can help validate large quantities of information in near real time. Is it the silver bullet that we’ve all been dreaming of? Of course not, this ain’t Hogwarts. What Swift River does, however, is make the triangulation of crowdsourced information far more efficient for Muggles than ever before. So to suggest that crowdsourced information is inherently unverifiable is rather shortsighted.
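To give a feel for what triangulation-by-software can mean, here is a toy Python sketch of a corroboration score. This is my own naive illustration of the general idea, not how Swift River actually works; the source types, weights and cap are all invented:

    # A cluster of reports about one incident gains confidence when
    # independent source types agree; repeat reports from the same
    # channel add nothing.
    WEIGHTS = {"official": 0.9, "mainstream_media": 0.7,
               "sms": 0.4, "twitter": 0.3}

    def confidence(cluster):
        seen, score = set(), 0.0
        for source, _text in cluster:
            if source not in seen:
                seen.add(source)
                score += WEIGHTS.get(source, 0.2)
        return min(score, 1.0)

    incident = [("sms", "building collapsed on rue X"),
                ("twitter", "collapse reported near rue X"),
                ("mainstream_media", "radio confirms collapse, rue X")]
    print(confidence(incident))  # 1.0 -> surface for human review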

Was the technology community’s response to Haiti perfect? Not even close, hence the current M&E on the Ushahidi deployment and the blog posts that I wrote up earlier this year.

In fact, much of my own frustration during the emergency period stemmed from the reckless behavior of some in the technology community. In addition, some tech folks who mean well end up producing tech solutions that don’t solve anything and never get used. So as I’ve blogged about before, tech folks need to get up to speed and get their act together. Hacking away every other weekend is all well and good as long as the tech produced is actually in line with the needs of the humanitarian and disaster affected communities.

But let’s be clear that the humanitarian community’s response to Haiti was hardly stellar (cf. John Holmes’s leaked email, etc.). No one’s perfect, of course, and that includes Crowd-Sorcerers. The volunteer community that mobilized around the Ushahidi platform had never done anything like this before (because nothing quite like this had happened before): they had no prior training, nor did they have much (if any) humanitarian experience to speak of. I, for one, had never launched an Ushahidi platform before. So boy did we all learn a heck of a lot.

Haiti was a complete first as far as live crisis mapping and mobile crowdsourcing go. Yet Muggles blame Crowd-Sorcerers for not getting everything right on their first try. The importance of standards is repeatedly voiced by Muggles, as noted above. Well, I call this a double standard.

Stay tuned for Episode 3 in the new series: “Here Come the Crowd-Sorcerers: Highlighting Some Misunderstandings.”

Patrick Philippe Meier

Here Come the Crowd-Sorcerers: How Technology is Disrupting the Humanitarian Space and Why It’s Easy

I’ve recently been cc’d on an email thread in which a humanitarian group has started to “air out some latent issues and frustrations” vis-a-vis the use of crowdsourcing in emergencies. I applaud them for speaking up and credit them for coining the term “crowd-sorcerers,” which is brilliant! The group is apparently preparing to publish a report concerning Humanitarian Information Management in Haiti. I really hope to appear in their chapter on “The Crowd-Sorcerers.”

I wonder which kind of sorcerer I am...

I too prefer candid conversations over diplomatic pillow talk. Let’s be honest, it’s not actually difficult to disrupt the humanitarian system. It’s hierarchical, overly bureaucratic, slow, often unaccountable and at times spectacularly corrupt. But I want to make sure my tone here is not misunderstood. I want to be constructive but playful and provocative at the same time, to “lighten things up” a bit. We take ourselves way too seriously, way too often. That’s why I absolutely love the term Crowd-Sorcerer! Let’s use Muggles for our humanitarian friends.

I’ll first lay out some of the frustrations aired by the Muggles in their own words so I don’t misrepresent their concerns—some of which are obviously valid (but not necessarily new). I’ll be reviewing these concerns in a series of blog posts, so stay tuned for future episodes in the new Crowd-Sorcerers Series! Caution: in case it’s not yet obvious, I will be deliberately provocative and playful in this series.

Muggles: Unless there are field personnel providing “ground truth” data, consumers will never have reliable information upon which to build decision support products. Crowdsourcing may be a quick way to get a message out, but it is not good information unless there is on-the-ground verification going on.

Not sure how you’d interpret these words but what they say to me is this: unless information comes from official field personnel, i.e., Muggles, it’s absolutely useless and should be dumped in the trash. I personally find that somewhat… is colonial too provocative?

Crisis information that was crowdsourced using the distributed short code 4636 in Haiti helped save hundreds of lives according to the Marine Corps. The vast majority of this information could not be verified and yet both the Marine Corps and Coast Guard used this as one of their feeds while FEMA encouraged the crowd-sorcerers to continue mapping, calling the crisis map of Haiti the most comprehensive and up-to-date source of information available to Muggles.

There’s another extraordinary story here, and that’s the story of Mission 4636. Tens of thousands of incoming text messages from disaster affected communities in Haiti were translated from Haitian Kreyol to English in near real-time thanks to crowdsourcing. These text messages were translated by thousands of Haitian Kreyol speaking volunteers from all around the world.

Map of volunteer locations

Without this crowdsourcing, the Marine Corps, Coast Guard, FEMA and others could not have used the information streaming in from 4636 as effectively as they did. And guess what? The original platform that was used to do this translation-by-crowdsourcing was built overnight by Brian Herbert, a 20-something tech developer at Ushahidi.

Where were the Muggles then? I’m sorry to put it in these terms but if we listened to (and waited for) Muggles all the time, then perhaps several hundred more people would have needlessly lost their lives in Haiti.

A forthcoming USIP report that reviews the deployment of the Ushahidi platform found that Haitian NGOs and local civil society groups were physically barred from entering LogBase—the humanitarian community’s compound near the airport in Port-au-Prince. One Haitian NGO rep who was interviewed said he felt like a foreigner in his own country when he wasn’t allowed to enter LogBase and attend meetings where he could share vital information on urgent needs.

Now tell me, how is trashing Haitian text messages any different from physically excluding Haitians from having a voice at LogBase? Because the so-called “unwashed masses” don’t have the “right” credentials as defined by the Muggles? Either way, they are excluded from having a stake in the hierarchical system that is supposed to help them.

Incidentally, a fully independent evaluation led by a team of three accomplished experts in M&E (monitoring and evaluation) is currently under way to assess the impact of the Ushahidi deployment during the emergency period. The team will be in Haiti for the field work and yes, one member speaks fluent Kreyol. The PI from Tulane University has over 20 years of relevant experience. It would make absolutely no sense for Ushahidi to carry out this review.

Ushahidi has little to no expertise in M&E and such a review would likely be viewed as biased if Ushahidi were authoring it. In fact, Ushahidi didn’t even commission the evaluation, The Fletcher Team did, and they should be applauded for doing so. By the way, as I have blogged here, it is misguided to assume that experts in, say, development, are by definition experts at evaluating development projects. M&E is a separate area of expertise and a profession in its own right. Anyone who has taken M&E 101 will know this from the first lecture.

We’re going to a commercial break now, but stay tuned for the next episode: “Here Come the Crowd-Sorcerers: Is it Possible to Teach an Old (Humanitarian) Dog New Tech’s?”

Patrick Philippe Meier