Calling 911: What Humanitarians Can Learn from 50 Years of Crowdsourcing

Before emergency telephone numbers existed, one would simply pick up the receiver and say “get me the police” when the operator answered. Operators thus became the first point of contact for emergency dispatch. They kept lists of key local numbers (the fire department, the town doctor, etc.) to provide a personalized emergency service and fast-track response.


London was the first city to deploy an emergency number system. The number 999 was launched on June 30, 1937. When called, “a buzzer sounded and a red light flashed in the exchange to attract an operator’s attention” (1). The first number to be used in North America was the 911 system deployed in Winnipeg, Canada in 1959. The first US system, also using the 911 number, was launched in Alabama and Alaska in 1968. It wasn’t until the 1980s, however, that 911 was adopted as the standard number across most of the country.

Today, about 240 million 911 calls are made in the US each year; 30%-50% of these are placed using wireless services, and that share is increasing steadily.

When discussing the use of crowdsourcing to collect crisis information in humanitarian disasters, one of the overriding concerns is: “But what if the information is false? How can we trust that the information reported is true?” We forget that national emergency telephone systems have faced the same challenge for half a century. Indeed, 911 is a 50-year-old crowdsourcing system! So our colleagues in law enforcement may have learned a few things during this time, which could inform our work in the humanitarian field.

Incidentally, this may be a silly question, but why in the world did governments set up emergency phone numbers if the information collected using this crowdsourced approach is not immediately verifiable? Have the police gone nuts? What were/are they thinking? Were police crowdsourcing reports before telephone lines sprang up across the country? Maybe one had to run, bike or drive to the police station. Or if you were lucky, perhaps you’d have a police officer strolling the streets at just the right time.

So why not keep that good old analog system then? Well, let’s face it, do we really want to leg it to the station every time something’s strange in the neighborhood? No, we want to be able to call…

Can we assume that we’ll always be mobile during an emergency? Do we want to leave it up to chance that a fire truck might be patrolling the streets when the house next door goes up in flames? Probably not.

In fact, the world’s oldest emergency (crowdsourcing!) call service—the UK’s 999—was introduced over 70 years ago after a London fire on November 10, 1935 killed five women. A neighbor had tried to phone the fire brigade but was held up in a queue by the telephone exchange. Neighbor Norman was so outraged that he wrote a letter to the editor of The Times.

A public outcry resulted (one which, today, could have been crowdsourced and mapped on an Ushahidi platform as a complaints mechanism).

This prompted a government inquiry. And thus was born the largest crowdsourcing system of the day. Rather ironic that it was ultimately user generated content that created today’s national emergency phone services. Wouldn’t it be great to get a UN inquiry along those same lines that established a crowdsourcing system for humanitarian crises? Sounds crazy, I know, but hey, 999 probably sounded a little nuts back in 1937 as well. And yet, they decided to test the idea in London, then extended trials in Glasgow and within 10 years the entire country was covered. I don’t see why a similar iterative approach couldn’t work in disaster response.

But what challenges does an emergency phone system face? The misuse and abuse of 911 can be divided into two categories: unintentional and intentional calls (2). The former includes phantom calls, misdials and hang-up calls. Let’s focus on the latter, which can be divided into the following categories: Non-emergency Calls, Prank Calls, Exaggerated Calls and Lonely Complainant Calls.

  • Non-emergency Calls: reports suggest that non-emergency calls account for a large percentage of 911 calls. For example, callers will phone in to report their car radio getting stolen, or ask for the results of a football game, the time of day, etc. 911 operators even get callers who ask them to transfer their calls to another number since calling 911 is free.
  • Prank Calls: most agencies apparently do not keep figures on the total number of prank calls, but these generally come from children and teenagers. Diversionary calls represent a sub-category of prank calls. Callers will dial 911 to send the police to a location where no emergency has occurred, sometimes to divert attention away from criminal activity committed by the caller. “There are only a few ways to determine if a call is diversionary: if the caller admits it; if someone informs on the caller; or if the dispatcher or police compare the caller’s location with that of the alleged emergency, to determine if the caller could plausibly claim an emergency at the called-in location” (4).
  • Exaggerated Emergency Calls: callers will sometimes intentionally exaggerate the seriousness of an emergency expecting that the police will respond faster. It is reportedly unclear how extensive this problem is.
  • Lonely Complainant Calls: other callers will repeatedly report an emergency over a series of months or years but the police never find evidence of there being one. These calls are often made by the elderly and those with mental health problems.

As these news articles here & here show, false reports to 911 can claim lives. Does this mean that law enforcement is considering pulling the plug on the 911 system? Of course not. So how does law enforcement deal with all this? Let’s stick to prank and diversionary calls since these come closest to the most pressing concerns we face in the humanitarian context. (Note that the other issues listed above are typically addressed by educating the public.)

Law enforcement’s response to prank calls involves targeting violators and applying graduated sanctions, such as fines or jail time. In Ohio, a public service announcement made clear to users that “we know where you are” when you call 911. Prank calls reportedly dropped as a result (5). Police can also take action by targeting specific phones that are used for prank calls. In another example, a hotel in Vegas routed all 911 calls to hotel security for triage after a large number of false 911 reports were made to the fire department.

Could we do something similar within a humanitarian operation? There’s already precedent to prosecute hate-based SMS, as happened in Kenya. We could work with the telcos in question to send out a mass SMS broadcast to all subscribers letting them know they can be prosecuted for deliberately reporting false information.

That’s not a silver bullet, of course, but it seems to help national emergency phone systems. We could also draw on natural language processing (NLP) technologies like Swift River to create veracity ratings for crowdsourced reports. Of course, when confronted with a major disaster, everyone may be calling 911 at the same time, thus overwhelming capacity to respond.

In terms of operational response, one partial answer may be revitalizing Community Emergency Response Teams (CERTs). Another partial answer may be the idea of crowdfeeding, or crowdsourcing response, as I blogged about here based on a recent presentation I gave at the Red Cross Conference in DC. During that same conference, the Red Cross revealed the results of a study on the role of social media in emergencies, which showed that more than 70% of those surveyed expect a response within an hour of posting a request on a social media platform. Now, there are too few disaster response professionals to assign to every street to meet those expectations (not to mention the cost implications). So they can’t always be there, but the crowd, by definition, is always there.

In the case of national phone emergency systems, there are usually laws that require the police to respond (not that they always do). This may be difficult to work out in the context of humanitarian response. So let me share an anecdote from the Ushahidi-Haiti Project. One of our overriding concerns after launching the 4636 short code with colleagues was raising expectations among those texting that someone would respond to their SMSs. Three points:

1) Colleagues and I spent hours on Haitian Diaspora radio and television answering questions from listeners and viewers about the purpose of 4636. We made it very clear that the service was simply an information service and that we couldn’t guarantee any kind of response. We also explained that some responders like the US Coast Guard and Marine Corps were prioritizing life and death situations and therefore were not responding to every text message. This helped callers understand the purpose and limits of the service.

2) As a result of these concerns regarding expectations, my colleague Jaroslav Valuch and I recommended that PakReport.org adjust their public messaging campaign by asking people to report their observations instead of their needs. One could also invite people to text in their complaints, thus crowdsourcing perceptions (real or otherwise) of frustration and discontent which could provide humanitarians with important situational awareness. But this too may raise expectations of response. So sticking with simple reports based on observations is sometimes more prudent.

3) As studies from Aceh (the 2004 tsunami) and Pakistan (the 2005 earthquake) showed, it is important to communicate with disaster affected communities, even if the message is that help is not yet on the way. See Imogen Hall’s research and the CDAC consortium, for example.

I’m using the 911 emergency system as an analogy and don’t pretend that the model can be automatically applied to the humanitarian context. But these phone-based emergency crowdsourcing systems have been around for half a century, and it would be naive to discount the lessons learned and best practices that this wealth of experience has produced at such a large scale.

Analyzing the Veracity of Tweets during a Major Crisis

A research team at Yahoo recently completed an empirical study (PDF) on the behavior of Twitter users after the 8.8 magnitude earthquake in Chile. The study was based on 4,727,524 indexed tweets, about 20% of which were replies to other tweets. What is particularly interesting about this study is that the team also analyzed the spread of false rumors and confirmed news that were disseminated on Twitter.

The authors “manually selected some relevant cases of valid news items, which were confirmed at some point by reliable sources.” In addition, they “manually selected important cases of baseless rumors which emerged during the crisis (confirmed to be false at some point).” Their goal was to determine whether users interacted differently when faced with valid news vs false rumors.

The study shows that about 95% of tweets related to confirmed reports validated that information. In contrast, only 0.03% of tweets denied the validity of these true cases. Interestingly, the results also show that “the number of tweets that deny information becomes much larger when the information corresponds to a false rumor.” In fact, about 50% of tweets will deny the validity of false reports. The table below lists the full results.

The authors conclude that “the propagation of tweets that correspond to rumors differs from tweets that spread news because rumors tend to be questioned more than news by the Twitter community. Notice that this fact suggests that the Twitter community works like a collaborative filter of information. This result suggests also a very promising research line: it could [be] possible to detect rumors by using aggregate analysis on tweets.”
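What would that “aggregate analysis” look like in practice? Here is a minimal, illustrative Python sketch, not anything from the study itself: it classifies tweets as denials using a naive keyword list (my own assumption; the study relied on manual labeling) and flags a story as a likely rumor when its denial rate sits far above the roughly 0.03% observed for confirmed news. The sample tweets are invented.

```python
# Naive markers for tweets that deny a story. The markers and sample
# tweets below are illustrative assumptions, not data from the study.
DENIAL_MARKERS = ("false", "fake", "rumor", "hoax", "not true", "debunked")

def denial_rate(tweets):
    """Fraction of tweets that appear to deny the story."""
    if not tweets:
        return 0.0
    denials = sum(
        1 for t in tweets if any(m in t.lower() for m in DENIAL_MARKERS)
    )
    return denials / len(tweets)

def likely_rumor(tweets, threshold=0.25):
    # The study found roughly 50% denials for false rumors versus 0.03%
    # for confirmed news, so any threshold in between separates the two.
    return denial_rate(tweets) > threshold

story = [
    "Tsunami warning issued for Valparaiso",
    "This is false, no warning has been issued",
    "Rumor alert: the tsunami warning report is fake",
]
print(likely_rumor(story))  # → True (2 of 3 tweets deny the report)
```

A real system would replace the keyword list with a trained classifier, but the aggregate signal (the denial rate across many tweets) is the same one the authors propose exploiting.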

I think these findings are particularly important for projects like Swift River, which try to validate crowdsourced crisis information in real time. I would also be interested to see a similar study on tweets around the Haitian earthquake to explore whether this “collaborative filter” dynamic is an emergent phenomenon in this complex system or simply an artifact of something else.

Interested in learning more about “information forensics”? See this link and the related articles.

Disaster Relief 2.0: Towards a Multipolar System?

My colleague Adele Waugaman from the UN Foundation & Vodafone Foundation Technology Partnerships has kindly invited some colleagues and me to participate on the following panel at the Mashable Social Good Summit:

Disaster relief 2.0: collaborative technologies & the future of aid
In humanitarian crises, information-sharing and coordination among relief agencies is essential. But what about communications between aid groups and individuals? From Haiti to Pakistan, collaborative technologies are enabling survivors and concerned citizens alike to become important sources of information. Join innovation experts to discuss how new citizen-centered technologies are shaping the future of disaster relief.

I expect that the panel will set the stage and tone for the upcoming 2010 International Conference on Crisis Mapping (ICCM 2010) in Boston in two weeks’ time. What follows, then, is a quick recap of where we are in the field of disaster 2.0 and where we might be headed. The recap is based on conversations with colleagues at OCHA and the Crisis Mappers Network, particularly with Oliver Hall and Nigel Snoad.

What Happened

I think it’s fair to say that the disaster response to Haiti was a departure from the past in more ways than one. The Crisis Mappers Network, while relatively new, played an impressive role in catalyzing rapid collaboration and open information sharing, as detailed in this empirical study. In addition, the response to Haiti saw widespread and global volunteer involvement from the Haitian Diaspora and university students: more than 2,000 volunteers based in some 40 countries used their cognitive surplus to try and help those affected by the earthquake thousands of miles away. Ushahidi-Haiti and Mission4636 are both examples of volunteer-based projects.


My friend and colleague Chris Blow described the change best in his phenomenal presentation on Crisis Mapping and Interaction Design. The following slides depict what we are all experiencing–the shift to a multipolar system:

In sum, the rise in informal volunteer networks is shifting the disaster response system towards a more multipolar one, not only in terms of actors but also in terms of the new technologies they employ.

Where To From Here

What does this mean for the future of disaster relief? I think this remains to be seen. The new “world order” brings new possibilities and new consequences. The shift will require both formal and informal actors to adapt and interface in different ways. If the analogy to international relations (unipolar vs multipolar world orders) is apt, then this suggests that new “institutions” are perhaps needed to manage the new constellation of actors.

The launch of the Crisis Mappers Network is a direct response to this transition. The network comprises some 900 members from both state and non-state, formal and informal actors including all the major humanitarian organizations and technology groups in the world. The newly established group Communications with Disaster Affected Communities (CDAC) is an important member. The purpose of the Crisis Mappers Network is to catalyze information sharing, collaboration, partnerships and joint learning in this rapidly changing space—hence the Crisis Mapping Conference series, Annual Meeting of the Crisis Mappers Group, Crisis Mapping Trainings, monthly webinars, blog posts, online discussions and the dedicated Crisis Mappers Google Group.

I believe the Crisis Mappers Group provides an ideal forum to help shape the new conversations, policies and applied research necessary to improve disaster relief 2.0. We need to consider new coordination and cooperation frameworks that connect formal actors with informal networks. As a community, we also need to catalyze joint learning so that informal actors deploying new technologies can learn from more experienced actors who have established best practices in disaster response.

Our Questions

The world of disaster relief 2.0 also brings new possibilities to render humanitarian response more effective and accountable. How can formal actors and informal networks collaborate to foster and implement innovations in humanitarian technology? How do we evaluate this collaboration and the impact of individual crisis mapping initiatives? Information sharing and interoperability are two additional challenges that need to be tackled by the Crisis Mapping Community. This inevitably means that some basic data standards need to be defined—or have existing ones communicated in an accessible manner to volunteer and informal networks.

Some of the most pressing questions in this new “world order” have to do with replicability and sustainability—not to mention leadership—of new crisis mapping initiatives. The challenge of future replicability (and hence reliability) is an issue that colleagues at OCHA communicated to me just weeks after Haiti; rightly so since Ushahidi-Haiti and Mission4636 were both self-organized volunteer driven efforts. I do believe there is room to professionalize some volunteer groups, hence the launch of Universities for Ushahidi (U4U) next month. It is also worth noting that Ushahidi-Chile deployed even more rapidly than Ushahidi-Haiti, even though the former was also purely volunteer driven. The same is true of the Ushahidi-Pakistan deployment called PakReport.

Sustainability in my opinion is less of a challenge if these volunteer groups are well organized and linked to local communities. In the case of Ushahidi-Haiti, for example, the project was successfully transitioned to Solutions.ht, a local software company in Port-au-Prince. But the question of leadership—or governance—is one that has not been sufficiently addressed. An accessible code of conduct is needed to guide informal actors who wish to volunteer their time in aid of disaster response projects. Not all volunteers will add value. Some may actually be disruptive if not destructive. How should state-actors and informal networks manage such situations? This code of conduct should also focus on establishing standards for local ownership, data privacy and data security.

In Sum…

The level of information sharing, collaboration and volunteer involvement in the disaster response to Haiti was unprecedented. This also means it was completely reactive, which is why the disaster relief 2.0 panel and Crisis Mapping Conference are important. They give us the opportunity to begin aligning expectations and catalyze new, responsible partnerships between established actors and informal networks so they can be more deliberate and less reactive in future responses.

In sum, while the transition to a multipolar system may initially bring some disruption, we can all choose to collaborate, iterate and learn quickly to become a more adaptive, transparent and effective community.

Swarming Crisis Response

[Cross-posted from my Conflict Early Warning Blog]

John Arquilla had a very interesting Op-Ed in the New York Times this weekend on “The Coming Swarm.” I’ve been interested in John’s work for years given his application of complexity science to the study of terrorism and would have assigned this Op-Ed as required reading for the graduate course I co-taught on “Complexity Science and International Relations.”

John writes about the recent simultaneous suicide attacks in Kabul last week and argues that a new ‘Mumbai model’ of swarming, smaller-scale terrorist violence is emerging:

The basic concept is that hitting several targets at once, even with just a few fighters at each site, can cause fits for elite counterterrorist forces that are often manpower-heavy, far away and organized to deal with only one crisis at a time.

[…] This pattern suggests that Americans should brace for a coming swarm. Right now, most of our cities would be as hard-pressed as Mumbai was to deal with several simultaneous attacks. Our elite federal and military counterterrorist units would most likely find their responses slowed, to varying degrees, by distance and the need to clarify jurisdiction.

Current strategy for counterterrorism contemplates having to respond using “overwhelming force” to as many as three simultaneous terrorist attacks. This would imply mobilizing as many as 3,000 ground troops to each site.

If that’s an accurate picture, it doesn’t bode well. We would most likely have far too few such elite units for dealing with a large number of small terrorist teams carrying out simultaneous attacks across a region or even a single city.

“So how are swarms to be countered?” John asks. In his opinion,

The simplest way is to create many more units able to respond to simultaneous, small-scale attacks and spread them around the country. This means jettisoning the idea of overwhelming force in favor of small units that are not “elite” but rather “good enough” to tangle with terrorist teams. In dealing with swarms, economizing on force is essential.

For the defense of American cities against terrorist swarms, the key would be to use local police officers as the first line of defense instead of relying on the military. The first step would be to create lots of small counterterrorism posts throughout urban areas instead of keeping police officers in large, centralized precinct houses. This is consistent with existing notions of community-based policing […]

At the federal level, we should stop thinking in terms of moving thousands of troops across the country and instead distribute small response units far more widely.

I think John’s recommendations are very important and directly applicable to the field of operational crisis early warning and rapid response, particularly on the response side. This means taking more of a people-centered or community-based approach to early response and shifting away from the top-down mentality of “The Responsibility to Protect” to one of “The Responsibility to Empower” from the bottom-up.

The Future of Digital Activism and How to Stop It

I’ve been following a “debate” on a technology listserv which represents the absolute worst of the discourse on digital activism. Even writing the word debate in quotes is too generous. It was like watching Bill O’Reilly or Glenn Beck go all out on Fox News.

The arguments were mostly one-sided and mixed with insults to create public ridicule. It was blatantly obvious that those doing the verbal lynching were driven by other motives. They have a history of being aggressive and seeking provocation in public because it gets them attention, which further bloats their egos. They thrive on it. The irony? Neither of them has much of a track record to speak of in the field of digital activism. All they seem to do is talk about tech in the context of insulting others who get engaged operationally and try to make a difference. Constructive criticism is important, but this hardly qualifies. This is a shame as these individuals are otherwise quite sharp.

So how do we prevent a Fox-styled future of Digital Activism? First, ignore these poisonous debates. If people were serious about digital activism, the discourse would take on a very different tone, a professional one. Second, don’t be fooled: most of the conversations on digital activism are mixed with anecdotes, selection bias and hype, often to get media attention. You’ll find that most involved in the “study” of digital activism have no idea about methodology and research design. Third, help make data-driven, mixed-methods research on digital activism possible by adding data to the Global Digital Activism Data Set (GDADS). The Meta-Activism Project (MAP) recently launched this data project to catalyze more empirical research on digital activism.

Another title for this post might have been “Here Come the Crowd-Sorcerers…” I’ll be following up with Crowd-Sorcerer sequels soon (to answer many readers who have been asking) but before I do, I want to look at a prequel. In 2005, Charles Leadbeater gave what is without doubt one of my all-time favorite TED Talks. The examples he shares—mountain bikes, telescopes and computer games—provide excellent insights into the opportunities and challenges that companies like Ushahidi face. This talk foretells what may very well be the future of crisis mapping.

If you don’t have 20 minutes to watch the talk, just continue reading since I tease out the most salient points in this post. Charles gave this talk in 2005, before Jeff Howe had even coined the term “crowdsourcing”;  before Brafman and Beckstrom’s book “Spider and the Starfish: The Unstoppable Power of Leaderless Organizations”; and way before Clay Shirky wrote his book “Here Comes Everybody: The Power of Organizing without Organizations.”

Charles starts by asking: who invented the mountain bike? Not a company with a large R&D team. Nor a lone innovative genius in some garage. The mountain bike came from young users in northern California who were frustrated by heavy traditional bikes and old racing bikes. So they hacked a few bikes and voila, the mountain bike was born. But it wasn’t until 10-15 years later that a small company thought to create a business out of these hacked bikes. Today, mountain bike sales account for some 65% of the bike market in the US alone.

And so, the mountain bike was created entirely by consumers, not by the mainstream bike market because they didn’t see the need, opportunity or have the incentive to create the mountain bike.

Charles argues that it is now possible to “organize without organizations: you don’t need an organization to organize, to achieve large and complex tasks like innovating new software programs” (hint hint). He notes that people (previously consumers, now producers) are increasingly becoming the source of big disruptive ideas. Some of these individuals are amateurs, so “they do what they do for the love of it but they want to do it to very high standards.” They take their leisure very seriously, they refine their skills, they invest their own time, etc. This has huge organizational implications for many sectors.

Take astronomy, for example. Some 30 years ago, only professional astronomers with huge and very expensive telescopes could see far into space. Today, individuals using “open source” telescopes and the Internet can do what only professional astronomers could do and help discover new stars and meteors at virtually no cost. “So there is a huge competitive argument about sustaining capacity for open source and consumer-driven innovation because it is one of the greatest competitive levers against monopoly,” says Charles.

As a former journalist, Charles recounts from a personal view the significant change that has happened in his profession. He describes the thrill of seeing others in the subway reading his article. At the same time though, he notes that readers only had two places where they could contribute: letters to the editor or the op-ed page. In the case of the former, editors would select the ones they liked, cut them in half and print them three days later. As for op-eds, if readers “knew the editor, been to school with them, slept with their wife, then they could write an article for the op-ed page.”

“Shock horror now, the readers want to be writers and publishers. That’s not their role, they’re supposed to read what we write! But they don’t want to be journalists. The journalists think that the bloggers want to be journalists. They don’t want to be journalists. They just want to have a voice, they want to have a dialogue, a conversation. They want to be part of that flow of information.

So there’s going to be tremendous struggle. But also there’s going to be tremendous movement, from the closed to the open. What you’ll see is two things that are critical, and these are two challenges for the open movement. The first is, can we really survive on volunteers? If this is so critical, do we not need this funded, organized, supported in much more structured ways? Can we really organize that just on volunteers?

And finally, what you will see is the intelligent, closed organizations moving increasingly in the open direction. So it’s not going to be a contest between two camps, but in-between them you’ll find all sorts of interesting places that people will occupy. New organization models coming about, mixing closed and open in tricky ways. […] And those organizational models it turns out are incredibly powerful and the people who can understand them will be very very successful.”

Charles ends his presentation with a final example, the biggest computer games company in China with 250,000,000 subscribers. The CEO of the company only employs 500 people to service these gamers. “How can this be?” asks Charles.

“Because basically he doesn’t service them, he gives them a platform, he gives them some rules, he gives them the tools and then he kind of orchestrates the conversation, he orchestrates the action. But actually a lot of the content is created by the users themselves. And this creates a kind of stickiness between the community and the company which is really, really powerful. […] So this is about companies built on communities that provide communities with tools, resources platforms with which they can share.”

Wanted for Pakistan: A Turksourcing Plugin for Crisis Mapping

A few days after the Haiti earthquake, Ushahidi‘s Brian Herbert set up a dedicated website to crowdsource the translation and geo-location of text messages from Haitian Kreyol to English. This allowed thousands of volunteers from across the globe to help out in the disaster response. We need something similar for crisis mapping Pakistan, but Mechanical Turk style.

I coined the term “turksourcing” a while back to mean crowdsourcing applied to micro-tasks. See this previous blog post for a quick introduction. A colleague from Pakistan recently launched this Crowdmap and short code to map flood related incidents. What I’d really like to see happen now is the development of a Turksourcing plugin for this and any other crisis mapping initiatives in Pakistan.

The idea would be to set up a simple website to which incoming text messages could be pushed for tagging and geo-location. Volunteers would use their email address and a password to access the platform. Once they log in, they simply select an incoming SMS, which they tag based on pre-set categories like those displayed on the Crowdmap for Pakistan. Volunteers would also map the location of the incident being reported. They would then press submit and move on to the next text message.

Each SMS would have to be validated by 3 or 5 volunteers before being officially mapped. This means that a given text message is only mapped if 3+ volunteers have each assigned the SMS the same tag(s) and approximate location. This is to ensure the quality of the data. If a given user consistently mis-tags/geo-locates incoming text messages, their contributions could be automatically ignored. (As opposed to barring them from the system which would prompt them to try and game it some other way).
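The agreement rule described above can be sketched in a few lines of Python. This is a hypothetical illustration (the category names, coordinates and rounding precision are my own assumptions), not an existing plugin:

```python
from collections import Counter

# Hypothetical sketch of the validation rule described above: an SMS is
# only mapped once MIN_AGREE volunteers have independently assigned it
# the same category and (rounded) location.

MIN_AGREE = 3

def consensus(submissions):
    """submissions: list of (volunteer_id, category, (lat, lon)) tuples.
    Returns the agreed (category, location), or None if no consensus yet."""
    votes = Counter()
    for _volunteer, category, (lat, lon) in submissions:
        # Round coordinates so nearby pins count as the same location.
        votes[(category, (round(lat, 2), round(lon, 2)))] += 1
    if not votes:
        return None
    answer, count = votes.most_common(1)[0]
    return answer if count >= MIN_AGREE else None

reports = [
    ("vol_a", "flooded_road", (27.7061, 68.8571)),
    ("vol_b", "flooded_road", (27.7058, 68.8573)),
    ("vol_c", "flooded_road", (27.7062, 68.8569)),
    ("vol_d", "shelter_needed", (27.7061, 68.8571)),
]
print(consensus(reports))  # → ('flooded_road', (27.71, 68.86))
```

With only two matching submissions, `consensus` returns None and the message simply stays in the queue, which is exactly the behavior we want: nothing gets mapped on a single volunteer’s say-so.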

Volunteers could also be awarded points for each correctly tagged and geo-located SMS. A public scoreboard could display the rank of the most prolific volunteers to create further incentives to help out by rewarding turksourcing efforts. This introduces a gaming component to crisis mapping, as I blogged about here. Colleagues of mine at Revision Labs in Seattle have termed this “Playsourcing”.
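The points-and-scoreboard idea could be equally simple. A hypothetical sketch (the volunteer names and one-point-per-match scoring rule are illustrative): credit every volunteer whose tag matched the eventual consensus, then rank contributors.

```python
from collections import Counter

# Running point totals for each volunteer (the playsourcing incentive).
scores = Counter()

def record_result(submissions, agreed_category):
    """Credit every volunteer whose tag matched the consensus category."""
    for volunteer, category in submissions:
        if category == agreed_category:
            scores[volunteer] += 1

record_result(
    [("ana", "flooded_road"),
     ("ben", "shelter_needed"),
     ("carla", "flooded_road")],
    agreed_category="flooded_road",
)
# Public scoreboard, most prolific volunteers first.
print(scores.most_common())  # → [('ana', 1), ('carla', 1)]
```

Note that volunteers are only scored against the consensus after it forms, so mis-taggers earn nothing without ever being told they were wrong, which makes the system harder to game.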

The map below represents the location of volunteers who helped out with the Kreyol text messages in January. There’s no reason why we can’t rally volunteers around the world to do the same for the 20 million affected Pakistanis.

I have touched base with friends at Stanford, Crowdflower and with CrisisCommons and hope someone will be able to develop a quick turksourcing plugin for crisis mapping Pakistan and future disasters. Please do get in touch if you have bandwidth to take this on or help out. My email address is patrick at irevolution dot net.

The Crowd is Always There: A Marketplace for Crowdsourcing Crisis Response

This blog post is based on the recent presentation I gave at the Emergency Social Data Summit organized by the Red Cross this week. The title of my talk was “Collaborative Crisis Mapping” and the slides are available here.

What I want to expand on is the notion of a “marketplace for crowdsourcing” that I introduced at the Summit. The idea stems from my experience in the field of conflict early warning, the Ushahidi-Haiti deployment and my observations of the Ushahidi-DC and Ushahidi-Russia initiatives.

The crowd is always there. Paid Search & Rescue (SAR) teams and salaried emergency responders aren’t. Nor can they be on the corners of every street, whether that’s in Port-au-Prince, Haiti, Washington DC or Sukkur, Pakistan. But the real first responders, the disaster affected communities, are always there. Moreover, not all communities are equally affected by a crisis. The challenge is to link those who are most affected with those who are less affected (at least until external help arrives).

This is precisely what PIC Net and the Washington Post did when they partnered to deploy this Ushahidi platform in response to the massive snow storm that paralyzed Washington DC earlier this year. They provided a way for affected residents to map their needs and for those less affected to map the resources they could share to help others. You don’t need to be a disaster response professional to help your neighbor dig out their car.

More recently, friends at Global Voices launched the most ambitious crowdsourcing initiative in Russia in response to the massive forest fires. But they didn’t use this Ushahidi platform to map the fires. Instead, they customized the public map so that those who needed help could find those who wanted to help. In effect, they created an online marketplace to crowdsource crisis response. You don’t need professional certification in disaster response to drive someone’s grandparents to the next town over.

There’s a lot that disaster affected populations can do (and already do) to help each other out in times of crisis. What may help is to combine the crowdsourcing of crisis information with what I call crowdfeeding in order to create an efficient marketplace for crowdsourcing response. By crowdfeeding, I mean taking crowdsourced information and feeding it right back to the crowd. Surely they need that information as much if not more than external, paid responders who won’t get to the scene for hours or days.

We talk about top-down and bottom-up approaches. Crowdfeeding is a “bottom-bottom” approach; horizontal, meshed communication for local rapid response. Information of the crowd, by the crowd and for the crowd. For the marketplace to work at the technical level, users should easily be able to map their needs or map the resources they have to help others. They should be able to do this via webform, SMS, Twitter, smart phone apps, phone call, etc.

But users shouldn’t have to keep looking back at the map to check whether anyone has posted offers to help in their area, or vice versa. They should get an automated email and/or text message when a potential match is found. The matching should be done by a simple algorithm, a Match.com for crowdsourcing crisis response. (Just like online dating, users should take appropriate precautions when contacting their match). On a practical level, this marketplace will work best if it draws many traders. That’s why the data should be easily shared across platforms.
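A minimal sketch of what that matching algorithm might look like follows. The function names, the record shape, and the 10 km radius are all illustrative assumptions; the point is simply that pairing needs with nearby same-category offers is a short, simple piece of code:

```python
import math

# Hypothetical sketch of the "Match.com" step: pair each mapped need with
# nearby offers in the same category, so both parties can be notified by
# email/SMS instead of having to keep re-checking the map.

def distance_km(a, b):
    """Haversine distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def match_needs_to_offers(needs, offers, max_km=10):
    """needs/offers: dicts with 'id', 'category', and 'location' (lat, lon).

    Returns (need_id, offer_id) pairs for same-category reports within range.
    """
    matches = []
    for need in needs:
        for offer in offers:
            if (need["category"] == offer["category"]
                    and distance_km(need["location"], offer["location"]) <= max_km):
                matches.append((need["id"], offer["id"]))
    return matches
```

Each returned pair would then trigger the automated email or text message to both users; as noted above, the contact step itself still calls for the same precautions as online dating.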

During the Summit, the Red Cross presented findings from this study which revealed that 75% of people now expect an almost-immediate response after posting a call for help on a social media platform during a disaster. The Red Cross and other humanitarian organizations are particularly troubled by this figure. They shouldn’t be. As the Head of FEMA noted at the summit, it is high time that crisis response organizations start viewing the public as part of the team. One way to make them part of the team is to create an open marketplace for crowdsourcing crisis response.

Is Ushahidi a Liberation Technology?

Professor Larry Diamond, one of my dissertation advisers, recently published a piece on “Liberation Technology” (PDF) in the Journal of Democracy in which he cites Ushahidi and FrontlineSMS amongst other tools. Is Ushahidi really a liberation technology?

Larry recently set up the Program on Liberation Technology at Stanford University together with colleagues Joshua Cohen and Terry Winograd to catalyze more rigorous, applied research on the role of technology in repressive environments—both in terms of liberation and repression. This explains why I’ll be joining the group as a Visiting Fellow this year. The program focuses on the core questions I’m exploring in my dissertation research and ties in technologies like Ushahidi which I’m directly working on.

What is Liberation Technology? Larry defines this technology as,

“… any form of information and communication technology (ICT) that can expand political, social, and economic freedom. In the contemporary era, it means essentially the modern, interrelated forms of digital ICT—the computer, the Internet, the mobile phone, and countless innovative applications for them, including “new social media” such as Facebook and Twitter.”

As is perfectly well known, however, technology can also be used to repress. This should not be breaking news. Liberation Technology vs Digital Repression. My dissertation describes this competition as an arms-race, a cyber game of cat-and-mouse. But the technology variable is not the most critical piece, as I argue in this recent Newsweek article:

“The technology variable doesn’t matter the most,” says Patrick Meier […] “It is the organizational structure that will matter the most. Rigid structures are unable to adapt as quickly to a rapidly changing environment as a decentralized system. Ultimately, it is a battle of organizational theory.”

As Larry writes,

“Democrats and autocrats now compete to master these technologies. Ultimately, however, not just technology but political organization and strategy and deep-rooted normative, social, and economic forces will determine who ‘wins’ the race.”

That is precisely the hypothesis I am testing in my dissertation research. As the Newsweek article put it,

“The only way to stay ahead in this cyberwar, though, is to play offense, not defense. ‘If it is a cat-and-mouse game,’ says Meier of Ushahidi, ‘by definition, the cat will adopt the mouse’s technology, and vice versa.’ His view is that activists will have to get better at adopting some of the same tactics states use. Just as authoritarian governments try to block Voice of America broadcasts, so protest movements could use newer technology to jam state propaganda on radio or TV.”

Larry rightly notes that,

“In the end, technology is merely a tool, open to both noble and nefarious purposes. Just as radio and TV could be vehicles of information pluralism and rational debate, so they could also be commandeered by totalitarian regimes for fanatical mobilization and total state control. Authoritarian states could commandeer digital ICT to a similar effect. Yet to the extent that innovative citizens can improve and better use these tools, they can bring authoritarianism down—as in several cases they have.”

A bold statement for sure. But as Larry recognizes, it is particularly challenging to disentangle political, social and technology factors. This is why more empirical research is needed in a space that is still largely limited to qualitative case studies. We need to bring mixed-methods research to the study of digital activism in repressive environments. This is why I’m part of the Meta-Activism Project (MAP) and why I’m particularly excited to be collaborating on the development of a Global Digital Activism Dataset (GDADS).

Larry writes that Liberation Technology is also “Accountability Technology” in that “it provides efficient and powerful tools for transparency and monitoring.” This is where he describes the FrontlineSMS and Ushahidi platforms. In some respects, these tools have already served as liberation technologies. The question is, will innovative citizens improve these tools and use them more effectively to be able to bring down dictators? I’d love to know your thoughts.

Patrick Philippe Meier

Here Come the Crowd-Sorcerers: “No We Can’t, No We Won’t” says Muggle Master

Sigh indeed. Yawn, even.

The purpose of this series is not to make it about Paul and Patrick. That’s boring as heck. The idea behind the series was not simply to provoke and use humorous analogies but to dispel confusion about crowdsourcing and thereby provide a more informed understanding of this methodology. I fear this is getting completely lost.

Recall that it was a humanitarian colleague who came up with the label “Crowd Sorcerer”. It made me laugh so I figured we’d have a little fun by using the label Muggle in return. But that’s all it is, good fun. And of course many humanitarians see eye to eye with the Crowd Sorcerer approach, so apologies to those who felt they were wrongly placed in the Muggle category. We’ll use the Sorting Hat next time.

Henry and Erik from Ushahidi

This is not about a division between Crowd Sorcerers and Muggles. As a colleague recently noted, “the line lies somewhere else, between effective implementation of new tools and methodologies versus traditional ways of collecting crisis information.” There are plenty of humanitarians who see value in trying out new approaches. Of course, there are some who simply say “No We Can’t, No We Won’t.”

There’s no point going back and forth with Paul on every one of his issues because many of them actually have little to do with crowdsourcing and more to do with him being provoked. In this post, I’m going to stick to the debate about the ins and outs of crowdsourcing in humanitarian response.

On Verification

Muggle Master: And of course the way in which Patrick interprets those words bears little relation to what those words actually said, which is this: “Unless there are field personnel providing “ground truth” data, consumers will never have reliable information upon which to build decision support products.”

I disagree. Again, the traditional mindset here is that unless you have field personnel (your own people) in charge, then there is no way to get accurate information. This implies that the disaster affected populations are all liars, which is clearly untrue.

Verification is of course important—no one said the contrary. Why would Ushahidi be dedicating time and resources to the Swift platform if the group didn’t think that verification was important?

The reality here is that verification is not always possible regardless of which methodology is employed. So it boils down to this: is having information that is not immediately verified better than having no information at all? If your answer is yes or “it depends”, then you’re probably a Crowd Sorcerer. If your answer is, “let’s try to test some innovative ways to make rapid verification possible,” then again, you likely are a Crowd Sorcerer/ette.

Incidentally, no one I know has advocated for the use of crowdsourced data at the expense of any other information. Crowd Sorcerers (and many humanitarians) are simply suggesting that it be considered one of multiple feeds. Also, as I’ve argued before, a combined approach of bounded and unbounded crowdsourcing is the way to go.

On Impact Evaluation

The Fletcher Team has commissioned an independent evaluation of the Ushahidi deployment in Haiti to go beyond the informal testimonies of success provided by first responders. This is a four-week evaluation led by Dr. Nancy Mock, a seasoned humanitarian and M&E expert with over 30 years of experience in the humanitarian and development field.

Nathan Morrow will be working directly with Nancy. Nathan is a geographer who has worked extensively on humanitarian and development information systems. He is a member of the European Evaluation Society and, like Nancy, a member of the American Evaluation Association. Nathan and Nancy will be aided by a public health student who has several years of experience in community development in Haiti and is a fluent Haitian Creole speaker.

The evaluation team has already gone through much of the data and been in touch with many of the first responders as well as other partners. Their job is to do as rigorous an evaluation as possible and do this fully transparently. Nancy plans to present her findings publicly at the 2010 Crisis Mappers Conference where we’ve dedicated a roundtable to reviewing these findings, as well as other reviews.

As for background, the ToR (available here) was drafted by graduate students specializing in M&E and reviewed closely by Professor Cheyanne Church, who teaches advanced graduate courses on M&E and is considered a leading expert on the subject. The ToR was then shared on a number of listservs including ReliefWeb, the CrisisMappers Group and Pelican (a listserv for professional evaluators).

Nancy and Nathan are both experienced in the method known as utilization-focused evaluation (UFE), an approach chosen by The Fletcher Team to ensure that the evaluation is useful to all primary users as well as the humanitarian field. The UFE approach means that the ToR is a living document and being adapted as necessary by the evaluators to ensure that the information gathered is useful and actionable, not just interesting.

We don’t have anything to hide here, Muggles. This was a complete first in terms of live crisis mapping and mobile crowdsourcing. Unlike the humanitarian community, we weren’t prepared at all, nor trained, nor had prior experience with live crisis mapping and mobile crowdsourcing, nor with the use of crowdsourcing for near real-time translation, nor with managing hundreds of unpaid volunteers, nor did the vast majority of them have any background in disaster response, nor were most able to focus on this full time because of their under/graduate coursework and mid-term exams, nor did they have direct links or contacts with first responders prior to the deployment, nor did the many responders know they existed and/or who they were. In sum, they had all the odds stacked against them.

If the evaluation shows that the deployment and the Fletcher Team’s efforts didn’t save lives or are unlikely to have saved any lives, rescued people, had no impact, etc., none of us will dispute this. Will we give up? Of course not, Crowd Sorcerers don’t give up. We’ll learn and do better next time.

One of the main reasons for having this evaluation is not only to assess the impact of the deployment but to create a concrete list of lessons learned so that what didn’t work then is more likely to work in the future. The point here is to assess the impact just as much as it is to assess the potential added value of the approach for future deployments.

How can anyone innovate in a space riddled with a “No We Can’t, No We Won’t” mindset? Trial and error is not allowed; iterative learning and adaptation is as illegal as the dark arts. Some Muggles really need to read this post, “On Technology and Learning, or Why the Wright Brothers Did Not Create the 747.” If die-hard Muggles had had their way, they would have forced the brothers to close up shop after just their first attempt because it “failed.”

Incidentally, the majority of development, humanitarian and aid projects are never evaluated in any rigorous or meaningful way, if they are evaluated at all. But that’s ok because these are double (Muggle) standards.

On Communicating with Local Communities

Concerns over security need not always be used as an excuse for not communicating with local communities. We need to find a way not to exclude potentially important informants. A little innovation and creative thinking wouldn’t hurt. Humanitarians working with Crowd Sorcerers could use SMS to crowdsource reports, triangulate as best as possible using manual means combined with Swift River, cross-reference with official information feeds and investigate reports that appear the most clustered and critical.

That way, if you see a significant number of text messages reporting the lack of water in an area of Port-au-Prince then at least this gives you an indication that something more serious may be happening in that location and you can cross-reference your other sources to check whether the issue has already been picked up. Again, it’s this clustering effect that can provide important insights on a given situation.
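The clustering step described above can be sketched very simply. This is an illustrative grid-binning approach of my own, not an existing Swift River feature; the cell size and report threshold are assumptions:

```python
from collections import Counter

# Rough sketch: bin incoming SMS reports into grid cells roughly 1 km
# across and flag cells where reports of the same category pile up above
# a threshold, signalling something worth cross-referencing against
# official feeds and investigating first.

CELL_SIZE = 0.01  # degrees; roughly 1 km at these latitudes

def grid_cell(lat, lon, size=CELL_SIZE):
    """Map a (lat, lon) point to a discrete grid cell."""
    return (round(lat / size), round(lon / size))

def flag_clusters(reports, min_reports=5):
    """reports: list of (category, lat, lon) tuples.

    Returns (category, cell) pairs with at least min_reports reports.
    """
    counts = Counter((cat, grid_cell(lat, lon)) for cat, lat, lon in reports)
    return [key for key, n in counts.items() if n >= min_reports]
```

A flagged (category, cell) pair is exactly the “significant number of text messages reporting the lack of water in an area of Port-au-Prince” signal: not verified on its own, but a pointer for where to direct cross-referencing and follow-up.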

This would provide a mechanism to allow Haitians to report problems (or complaints for that matter) via SMS, phone, etc. Imogen Wall and other experienced humanitarians have long called for this to change. Hence the newly founded group Communicating with Disaster Affected Communities (CDAC).

Confusion to the End

Me: Despite what some Muggles may think, crowdsourcing is not actually magic. It’s just a methodology like any other, with advantages and disadvantages.

Muggle Master: That’s exactly what “Muggles” think.

Haha, well if that’s exactly what Muggles think, then this is yet more evidence of confusion in the land of Muggles. Crowdsourcing is just a methodology to collect information. There’s nothing new about non-probability sampling. Understanding the advantages and disadvantages of this methodology doesn’t require an advanced degree in statistical physics.

Muggle Master: Crowdsourcing should not form part of our disaster response plans because there are no guarantees that a crowd is going to show up. Crowdsourcing is no different from any other form of volunteer effort, and the reason why we have professional aid workers now is because, while volunteers are important, you can’t afford to make them the backbone of the operation. The technology is there and the support is welcome, but this is not the future of aid work.

This just reinforces what I’ve already observed: many in the humanitarian space are still confused about crowdsourcing. The crowd is always there. Haitians were always there. And crowdsourcing is not about volunteering. Again, crowdsourcing is just a methodology to collect information. When the UN does its rapid needs assessment, does the crowd all of a sudden vanish into thin air? Of course not.

As for volunteers, the folks at Fletcher and SIPA are joining forces to work together on deploying live crisis mapping projects in the future. They’re setting up their own protocols, operating procedures, etc. based on what they’ve learned over the past 6 months in order to replicate the “surge mapping capacity” they demonstrated in response to Haiti and Chile. (Swift River will also make a large number of volunteers unnecessary.)

And pray tell who in the world has ever said that volunteers should be the backbone of a humanitarian operation? Please, do tell. That would be a nice magic trick.

Muggle Master: “The technology is there and the support is welcome, but this is not the future of aid work.”

The support is welcome? Great! But who said that crowdsourcing was the future of aid work? It’s just a methodology. How can one sole methodology be the future of aid work?

I’ll close with this observation. The email thread that started this Crowd-Sorcerer series ended with a second email written by the same group that wrote the first. That second email was far more constructive and conducive to building bridges. I’m excited by the prospects expressed in this second email and really appreciate the positive tone and interest they expressed in working together. I definitely look forward to working with them and learning more from them as we move forward together in this space.

Patrick Philippe Meier