
Towards a Twitter Dashboard for the Humanitarian Cluster System

One of the principal Research and Development (R&D) projects I’m spearheading with colleagues at the Qatar Computing Research Institute (QCRI) has been getting a great response from several key contacts at the UN’s Office for the Coordination of Humanitarian Affairs (OCHA). In fact, their input has been instrumental in laying the foundations for our early R&D efforts. I therefore highlighted the initiative during my recent talk at the UN’s ECOSOC panel in New York, which was moderated by OCHA Under-Secretary General Valerie Amos. The response there was also very positive. So what’s the idea? To develop the foundations for a Twitter Dashboard for the Humanitarian Cluster System.

The purpose of the Twitter Dashboard for Humanitarian Clusters is to extract relevant information from Twitter and aggregate this information by cluster for analytical purposes. As the above graphic shows, clusters focus on core humanitarian issues including Protection, Shelter, Education, etc. Our plan is to go beyond standard keyword search and simple Natural Language Processing (NLP) approaches to more advanced Machine Learning (ML) techniques and social computing methods. We’ve spent the past month asking various contacts whether anyone has developed such a dashboard but thus far have not come across any pre-existing efforts. We’ve also spent this time getting input from key colleagues at OCHA to ensure that what we’re developing will be useful to them.
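To make the idea more concrete, here is a minimal sketch of the classification step, i.e., routing tweets to humanitarian clusters. This is purely illustrative and not QCRI’s actual system: the training tweets, labels and model choice below are invented for demonstration, and a real dashboard would use far larger corpora and richer ML/NLP features.

```python
# Toy sketch: route tweets to humanitarian clusters using a simple
# TF-IDF + logistic regression baseline, one step beyond raw keyword
# search. The training tweets and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_tweets = [
    "families sleeping outside, tents urgently needed",
    "school buildings collapsed, children have no classes",
    "reports of attacks on displaced women near the camp",
    "no clean drinking water in the settlement since monday",
]
train_labels = ["Shelter", "Education", "Protection", "WASH"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_tweets, train_labels)

# Classify an incoming tweet into its most likely cluster.
print(model.predict(["hundreds of families need tents before the rains"]))
```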

It is important to emphasize that the project is purely experimental for now. This is one of the big advantages of being part of an institute for advanced computing R&D; we get to experiment and carry out applied research on next-generation humanitarian technology solutions. We are fully aware of the many challenges and limitations of using Twitter as an information source, so I won’t repeat these here. The point is not to suggest that a would-be Twitter Dashboard should be used instead of existing information management platforms. As United Nations colleagues themselves have noted, such a dashboard would simply be another dial on their own dashboards, which may at times prove useful, especially when compared or integrated with other sources of information.

Furthermore, if we’re serious about communicating with disaster-affected communities and the latter at times share crisis information on Twitter, then we may want to listen to what they are saying. This includes Diasporas as well. The point, quite simply, is to make full use of Twitter by at least extracting all relevant and meaningful information that contributes to situational awareness. The plan, therefore, is to have the Twitter Dashboard for Humanitarian Clusters aggregate information relevant to each specific cluster and to then provide key analytics for this content in order to reveal potentially interesting trends and outliers within each cluster.
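As a hedged illustration of what “trends and outliers” could mean in practice, the toy sketch below flags days where a cluster’s tweet volume deviates sharply from its recent average. The counts and the two-sigma threshold are invented; the real analytics would be considerably more sophisticated.

```python
# Toy sketch of the analytics layer: flag outlier days in per-cluster
# tweet volume using a simple z-score rule. All numbers are invented.
from statistics import mean, stdev

daily_counts = {
    "Shelter": [120, 130, 125, 118, 540, 122, 127],  # hypothetical week
}

for cluster, counts in daily_counts.items():
    mu, sigma = mean(counts), stdev(counts)
    for day, n in enumerate(counts):
        if sigma and abs(n - mu) / sigma > 2:
            print(f"{cluster}: day {day} looks anomalous ({n} tweets)")
```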

Depending on how the R&D goes, we envision adding “credibility computing” to the Dashboard and expect to collaborate with our Arabic Language Technology Center to add Arabic tweets as well. Other languages could also be added in the future depending on initial results. Also, while we’re presently referring to this platform as a “Twitter” Dashboard, adding SMS, RSS feeds, etc., could be part of a subsequent phase. The focus would remain specifically on the Humanitarian Cluster system and the clusters’ underlying minimum essential indicators for decision-making.

The software and crisis ontologies we are developing as part of these R&D efforts will all be open source. Hopefully, we’ll have some initial results worth sharing by the time the International Conference of Crisis Mappers (ICCM 2012) rolls around in mid-October. In the meantime, we continue collaborating with OCHA and other colleagues and as always welcome any constructive feedback from iRevolution readers.

Become a (Social Media) Data Donor and Save a Life

I was recently in New York where I met up with my colleague Fernando Diaz from Microsoft Research. We were discussing the uses of social media in humanitarian crises and the various constraints of social media platforms like Twitter vis-a-vis their Terms of Service. And then this occurred to me: we have organ donation initiatives and organ donor cards that many of us carry around in our wallets. So why not become a “Data Donor” as well in the event of an emergency? After all, it has long been recognized that access to information during a crisis is as important as access to food, water, shelter and medical aid.

This would mean having a setting that gives others during a crisis the right (for a limited time) to use your public tweets or Facebook status updates for the express purpose of supporting emergency response operations, such as live crisis maps. Perhaps switching this setting on would also come with the provision that the user confirms that s/he will not knowingly spread false or misleading information as part of their data donation. Of course, the other option is to simply continue doing what many have been doing all along, i.e., keep using social media updates for humanitarian response regardless of whether or not they violate the various Terms of Service.
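For what it’s worth, the mechanics of such a setting would be simple. Below is a hedged sketch of what a time-limited “data donor” flag might look like; every field name is hypothetical, and no such setting exists in Twitter’s or Facebook’s actual APIs.

```python
# Hypothetical sketch of a time-limited "data donor" consent flag.
# None of this corresponds to a real Twitter or Facebook API setting.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DataDonorConsent:
    user_id: str
    granted_at: datetime
    valid_for: timedelta = timedelta(days=14)  # limited time, per the post

    def usable_for_response(self, now: datetime) -> bool:
        """May responders reuse this user's public posts right now?"""
        return self.granted_at <= now < self.granted_at + self.valid_for

consent = DataDonorConsent("user42", granted_at=datetime(2012, 6, 1))
print(consent.usable_for_response(datetime(2012, 6, 5)))  # True
```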

Disaster Response, Self-Organization and Resilience: Shocking Insights from the Haiti Humanitarian Assistance Evaluation

Tulane University and the State University of Haiti just released a rather damning evaluation of the humanitarian response to the 2010 earthquake that struck Haiti on January 12th. The comprehensive assessment, which takes a participatory approach and applies a novel resilience framework, finds that despite several billion dollars in “aid”, humanitarian assistance did not make a detectable contribution to the resilience of the Haitian population and in some cases increased certain communities’ vulnerability and even caused harm. Welcome to supply-side humanitarian assistance directed by external actors.

“All we need is information. Why can’t we get information?” A quote taken from one of many focus groups conducted by the evaluators. “There was little to no information exchange between the international community tasked with humanitarian response and the Haitian NGOs, civil society or affected persons / communities themselves.” Information is critical for effective humanitarian assistance, which should include two objectives: “preventing excess mortality and human suffering in the immediate, and in the longer term, improving the community’s ability to respond to potential future shocks.” This longer term objective thus focuses on resilience, which the evaluation team defines as follows:

“Resilience is the capacity of the affected community to self-organize, learn from and vigorously recover from adverse situations stronger than it was before.”

This link between resilience and capacity for self-organization is truly profound and incredibly important. To be sure, the evaluation reveals that “the humanitarian response frequently undermined the capacity of Haitian individuals and organizations.” This completely violates the Hippocratic Oath of Do No Harm. The evaluators thus “promote the attainment of self-sufficiency, rather than the ongoing dependency on standard humanitarian assistance.” Indeed, “focus groups indicated that solutions to help people help themselves were desired.”

I find it particularly telling that many aid organizations interviewed for this assessment were reluctant to assist the evaluators in fully capturing and analyzing resource flows, which are critical for impact evaluation. “The lack of transparency in program dispersal of resources was a major constraint in our research of effective program evaluation.” To this end, the evaluation team argue that “by strengthening Haitian institutions’ ability to monitor and evaluate, Haitians will more easily be able to track and monitor international efforts.”

I completely disagree with this remedy. The institutions are part of the problem, and besides, institution-building takes years if not decades. To assume there is even political will and the resources for such efforts is at best misguided. If resilience is about strengthening the capacity of affected communities to self-organize, then I would focus on just that, applying existing technologies and processes that both catalyze and facilitate demand-side, people-centered self-organization. My previous blog post on “Technology and Building Resilient Societies to Mitigate the Impact of Disasters” elaborates on this point.

In sum, “resilience is the critical link between disaster and development; monitoring it will ensure that relief efforts are supporting, and not eroding, household and community capabilities.” This explains why crowdsourcing and data mining efforts like those of Ushahidi, HealthMap and UN Global Pulse are important for disaster response, self-organization and resilience.

Does the Humanitarian Industry Have a Future in The Digital Age?

I recently had the distinct honor of being on the opening plenary of the 2012 Skoll World Forum in Oxford. The panel, “Innovation in Times of Flux: Opportunities on the Heels of Crisis,” was moderated by Judith Rodin, CEO of the Rockefeller Foundation. I’ve spent the past six years creating linkages between the humanitarian space and technology community, so the conversations we began during the panel prompted me to think more deeply about innovation in the humanitarian industry. Clearly, humanitarian crises have catalyzed a number of important innovations in recent years. At the same time, however, these crises extend the cracks that ultimately reveal the inadequacies of existing organizations, particularly those resistant to change; and “any organization that is not changing is a battlefield monument” (While 1992).

These cracks, or gaps, are increasingly filled by disaster-affected communities themselves thanks in part to the rapid commercialization of communication technology. The question is: will the multi-billion dollar humanitarian industry change rapidly enough to avoid being left in the dustbin of history?

Crises often reveal that “existing routines are inadequate or even counter-productive [since] response will necessarily operate beyond the boundary of planned and resourced capabilities” (Leonard and Howitt 2007). More formally, “the ‘symmetry-breaking’ effects of disasters undermine linearly designed and centralized administrative activities” (Corbacioglu 2006). This may explain why “increasing attention is now paid to the capacity of disaster-affected communities to ‘bounce back’ or to recover with little or no external assistance following a disaster” (Manyena 2006).

But disaster-affected populations have always self-organized in times of crisis. Indeed, first responders are by definition those very communities affected by disasters. So local communities—rather than humanitarian professionals—save the most lives following a disaster (Gilbert 1998). Many of the needs arising after a disaster can often be met and responded to locally. One doesn’t need 10 years of work experience with the UN in Darfur or a Master’s degree to know basic first aid or to pull a neighbor out of the rubble, for example. In fact, estimates suggest that “no more than 10% of survival in emergencies can be attributed to external sources of relief aid” (Hilhorst 2004).

This figure may be higher today since disaster-affected communities now benefit from radically wider access to information and communication technologies (ICTs). After all, a “disaster is first of all seen as a crisis in communicating within a community—that is as a difficulty for someone to get informed and to inform other people” (Gilbert 1998). This communication challenge is far less acute today because disaster-affected communities are increasingly digital, and thus more and more the primary source of information communicated following a crisis. Of course, these communities were always sources of information but being a source in an analog world is fundamentally different than being a source of information in the digital age. The difference between “read-only” versus “read-write” comes to mind as an analogy. And so, while humanitarian organizations typically faced a vacuum of information following sudden onset disasters—limited situational awareness that could only be filled by humanitarians on the ground or via established news organizations—one of the major challenges today is the Big Data produced by disaster-affected communities themselves.

Indeed, vacuums are not empty and local communities are not invisible. One could say that disaster-affected communities are joining the quantified self (QS) movement given that they are increasingly quantifying themselves. If information is power, then the shift of information sourcing and sharing from the select few—the humanitarian professionals—to the masses must also engender a shift in power. Indeed, humanitarians rarely have access to exclusive information any longer. And even though affected populations are increasingly digital, some groups believe that humanitarian organizations have largely failed at communicating with disaster-affected communities. (Naturally, there are important and noteworthy exceptions).

So “Will Twitter Put the UN Out of Business?” (Reuters), or will humanitarian organizations cope with these radical changes by changing themselves and reshaping their role as institutions before it’s too late? Indeed, “a business that doesn’t communicate with its customers won’t stay in business very long—it’ll soon lose track of what its clients want, and clients won’t know what products or services are on offer,” whilst other actors fill the gaps (Reuters). “In the multi-billion dollar humanitarian aid industry, relief agencies are businesses and their beneficiaries are customers. Yet many agencies have muddled along for decades with scarcely a nod towards communicating with the folks they’re supposed to be serving” (Reuters).

The music and news industries were muddling along as well for decades. Today, however, they are facing tremendous pressures and are undergoing radical structural changes—none of them by choice. Of course, it would be different if affected communities were paying for humanitarian services but how much longer do humanitarian organizations have until they feel similar pressures?

Whether humanitarian organizations like it or not, disaster-affected communities will increasingly communicate their needs publicly and many will expect a response from the humanitarian industry. This survey carried out by the American Red Cross two years ago already revealed that during a crisis the majority of the public expect a response to needs they communicate via social media. Moreover, they expect this response to materialize within an hour. Humanitarian organizations simply don’t have the capacity to deal with this surge in requests for help, nor are they organizationally structured to do so. But the fact of the matter is that humanitarian organizations have never been capable of dealing with this volume of requests in the first place. So “What Good is Crowdsourcing When Everyone Needs Help?” (Reuters). Perhaps “crowdsourcing” is finally revealing all the cracks in the system, which may not be a bad thing. Surely by now it is no longer a surprise that many people may be in need of help after a disaster, hence the importance of disaster risk reduction and preparedness.

Naturally, humanitarian organizations could very well choose to continue ignoring calls for help and decide that communicating with disaster-affected communities is simply not tenable. In the analog world of the past, the humanitarian industry was protected by the fact that their “clients” did not have a voice because they could not speak out digitally. So the cracks didn’t show. Today, “many traditional humanitarian players see crowdsourcing as an unwelcome distraction at a time when they are already overwhelmed. They worry that the noise-to-signal ratio is just too high” (Reuters). I think there’s an important disconnect here worth emphasizing. Crowdsourced information is simply user-generated content. If humanitarians are to ignore user-generated content, then they can forget about two-way communications with disaster-affected communities and drop all the rhetoric. On the other hand, “if aid agencies are to invest time and resources in handling torrents of crowdsourced information in disaster zones, they should be confident it’s worth their while” (Reuters).

This last comment is … rather problematic for several reasons (how’s that for being diplomatic?). First of all, this kind of statement continues to propel the myth that we the West are the rescuers and aid does not start until we arrive (Barrs 2006). Unfortunately, we rarely arrive: how many “neglected crises” and so-called “forgotten emergencies” have we failed to intervene in? This kind of mindset may explain why humanitarian interventions often have the “propensity to follow a paternalistic mode that can lead to a skewing of activities towards supply rather than demand” and towards informing at the expense of listening (Manyena 2006).

Secondly, the assumption that crowdsourced data would be for the exclusive purpose of the humanitarian cavalry is somewhat arrogant and ignores the reality that local communities are by definition the first responders in a crisis. Disaster-affected communities (and Diasporas) are already collecting (and yes crowdsourcing) information to create their own crisis maps in times of need as a forthcoming report shows. And they’ll keep doing this whether or not humanitarian organizations approve or leverage that information. As my colleague Tim McNamara has noted “Crisis mapping is not simply a technological shift, it is also a process of rapid decentralization of power. With extremely low barriers to entry, many new entrants are appearing in the fields of emergency and disaster response. They are ignoring the traditional hierarchies, because the new entrants perceive that there is something that they can do which benefits others.”

Thirdly, humanitarian organizations are far more open to using free and open source software than they were just two years ago. So the resources required to monitor and map crowdsourced information need not break the bank. Indeed, the Syria Crisis Map uses a free and open source data-mining platform called HealthMap, which has been monitoring some 2,000 English-based sources on a daily basis for months. The technology powering the map itself, Ushahidi, is also free and open source. Moreover, the team behind the project is comprised of just a handful of volunteers doing this in their own free time (for almost an entire year now). And as a result of this initiative, I am collaborating with a colleague from UNDP to pilot HealthMap’s data mining feature for conflict monitoring and peacebuilding purposes.

Fourth, other than UN Global Pulse, humanitarian agencies are not investing time and resources to manage Big (Crisis) Data. Why? Because they have neither the time nor the know-how. To this end, they are starting to “outsource” and indeed “crowdsource” these tasks—just as private sector businesses have been doing for years in order to extend their reach. Anyone actually familiar with this space and developments since Haiti already knows this. The CrisisMappers Network, Standby Volunteer Task Force (SBTF), Humanitarian OpenStreetMap (HOT) and Crisis Commons (CC) are four volunteer/technical networks that have already collaborated actively with a number of humanitarian organizations since Haiti to provide the “surge capacity” requested by the latter; this includes UN OCHA in Libya and Colombia, UNHCR in Somalia and WHO in Libya, to name a few. In fact, these groups even have their own acronym: Volunteer & Technical Communities (V&TCs).

As the former head of OCHA’s Information Services Section (ISS) noted after the SBTF launched the Libya Crisis Map, “Your efforts at tackling a difficult problem have definitely reduced the information overload; sorting through the multitude of signals on the crisis is no easy task” (March 8, 2011). Furthermore, the crowdsourced social media information mapped on the Libya Crisis Map was integrated into official UN OCHA information products. I dare say activating the SBTF was worth OCHA’s while. And it cost the UN a grand total of $0 to benefit from this support.


The rapid rise of V&TCs has catalyzed the launch of the Digital Humanitarian Network (DHN), formerly called the Humanitarian Standby Task Force (H-SBTF). Digital Humanitarians is a network-of-networks catalyzed by the UN and comprising some of the most active members of the volunteer & technical community. The purpose of the Digital Humanitarian platform (powered by Ning) is to provide a dedicated interface for traditional humanitarian organizations to outsource and crowdsource important information management tasks during and in-between crises. OCHA has also launched the Communities of Interest (COIs) platform to further leverage volunteer engagement in other areas of humanitarian response.

These are not isolated efforts. During the massive Russian fires of 2010, volunteers launched their own citizen-based disaster response agency that was seen by many as more visible and effective than the Kremlin’s response. In Egypt, volunteers used IntaFeen.com to crowdsource and coordinate their own humanitarian convoys to Libya, for example. LinkedIn has also taken innovative steps to enable the matching of volunteers with various needs. The company recently added a “Volunteer and Causes” field to its member profile pages, which is now available to 150 million LinkedIn users worldwide. Sparked.com is yet another group engaged in matching volunteers with needs. The company is the world’s first micro-volunteering network, sending challenges to registered volunteers that are targeted to their skill set and the causes they are most passionate about.

It is not farfetched to envisage how these technologies could be repurposed or simply applied to facilitate and streamline volunteer management following a disaster. Indeed, researchers at the University of Queensland in Australia have already developed a new smartphone app to help mobilize and coordinate volunteer efforts during and following major disasters. The app not only provides information on preparedness but also gives real-time updates on volunteering opportunities by local area. For example, volunteers can register for a variety of tasks including community response to extreme weather events.

Meanwhile, the American Red Cross just launched a Digital Operations Center in partnership with Dell Labs, which allows them to leverage digital volunteers and Dell’s social media monitoring platforms to reduce the noise-to-signal ratio. This is a novel “social media-based operation devoted to humanitarian relief, demonstrating the growing importance of social media in emergency situations.” As part of this center, the Red Cross also “announced a Digital Volunteer program to help respond to questions from and provide information to the public during disasters.”

While important challenges do exist, there are many positive externalities to leveraging digital volunteers. As the deputy high commissioner of UNHCR noted about this UNHCR-volunteer project in Somalia, these types of projects create more citizen engagement and raise awareness of humanitarian organizations and projects. This in part explains why UNHCR wants more, not less, engagement with digital volunteers. Indeed, these volunteers also develop important skills that will be increasingly sought after by humanitarian organizations recruiting for junior full-time positions. Humanitarian organizations are likely to become smarter and more up to speed on humanitarian technologies and digital humanitarian skills as a result. This change should be embraced.

So given the rise of “self-quantified” disaster-affected communities and digitally empowered volunteer communities, is there a future for traditional humanitarian organizations? Of course there is; anyone who suggests otherwise is seriously misguided and out of touch with innovation in the humanitarian space. Twitter will not put the UN out of business. Humanitarian organizations will continue to play some very important roles, especially those relating to logistics and coordination. These organizations will continue outsourcing some roles but will also take on some new roles. The issue here is simply one of comparative advantage. Humanitarian organizations used to have a comparative advantage in some areas, but this has shifted for all the reasons described above. So outsourcing in some cases makes perfect sense.

Interestingly, organizations like UN OCHA are also changing some of their own internal information management processes as a result of their collaboration with volunteer networks like the SBTF, which they expect will lead to a number of efficiency gains. Furthermore, OCHA is behind the Digital Humanitarians initiative and has also been developing a check-in app for humanitarian professionals to use in disaster response—clear signs of innovation and change. Meanwhile, the UK’s Department for International Development (DfID) has just launched a $75+ million fund to leverage new technologies in support of humanitarian response; this includes mobile phones, satellite imagery, Twitter as well as other social media technologies, digital mapping and gaming technologies. Given that crisis mapping integrates these new technologies and has been at the cutting edge of innovation in the humanitarian space, I’ve invited DfID to participate in this year’s International Conference of Crisis Mappers (ICCM 2012).

In conclusion, and as argued two years ago, the humanitarian industry is shifting towards a more multi-polar system. The rise of new actors, from digitally empowered disaster-affected communities to digital volunteer networks, has been driven by the rapid commercialization of communication technology—particularly the mobile phone and social networking platforms. These trends are unlikely to change soon and crises will continue to spur innovations in this space. This does not mean that traditional humanitarian organizations are becoming obsolete. Their roles are simply changing and this change is proof that they are not battlefield monuments. Of course, only time will tell whether they change fast enough.

Imagery and Humanitarian Assistance: Gems, Errors and Omissions

The Center for Technology and National Security Policy based at National Defense University’s Institute for National Strategic Studies just published an 88-page report entitled “Constructive Convergence: Imagery and Humanitarian Assistance.” As noted by the author, “the goal of this paper is to illustrate to the technical community and interested humanitarian users the breadth of the tools and techniques now available for imagery collection, analysis, and distribution, and to provide brief recommendations with suggestions for next steps.” In addition, the report “presents a brief overview of the growing power of imagery, especially from volunteers and victims in disasters, and its place in emergency response. It also highlights an increasing technical convergence between professional and volunteer responders—and its limits.”

The study contains a number of really interesting gems, just a few errors and some surprising omissions. The point of this blog post is not to criticize but rather to provide constructive-and-hopefully-useful feedback should the report be updated in the future.

Let’s begin with the important gems, excerpted below.

“The most serious issues overlooked involve liability protections by both the publishers and sources of imagery and its data. As far as our research shows there is no universally adopted Good Samaritan law that can protect volunteers who translate emergency help messages, map them, and distribute that map to response teams in the field.”

Whether a Good Samaritan law could ever realistically be universally adopted remains to be seen, but the point is that all of the official humanitarian data protection standards that I’ve reviewed thus far simply don’t take into account the rise of new digitally-empowered global volunteer networks (let alone the existence of social media). The good news is that some colleagues and I are working with the International Committee of the Red Cross (ICRC) and a consortium of major humanitarian organizations to update existing data protection protocols to take some of these new factors into account. This new document will hopefully be made publicly available in October 2012.

“Mobile devices such as tablets and mobile phones are now the primary mode for both collecting and sharing information in a response effort. A January 2011 report published by the Mobile Computing Promotion Consortium of Japan surveyed users of smart phones. Of those who had smart phones, 55 percent used a map application, the third most common application after Web browsing and email.”

I find this absolutely fascinating and thus read the January 2011 report, which is where I found the graphic below.

“The rapid deployment of Cellular on Wheels [COW] is dramatically improving. The Alcatel-Lucent Light Radio is 300 grams (about 10 ounces) and stackable. It also consumes very little power, eliminating large generation and storage requirements. It is capable of operating by solar, wind and/or battery power. Each cube fits into the size of a human hand and is fully integrated with radio processing, antenna, transmission, and software management of frequency. The device can operate on multiple frequencies simultaneously and work with existing infrastructure.”

“In Haiti, USSOUTHCOM found imagery, digital open source maps, and websites that hosted them (such as Ushahidi and OpenStreetMap) to occasionally be of greater value than their own assets.”

“It is recommended that clearly defined and restricted use of specialized #hashtags be implemented using a common crisis taxonomy. For example:

#country + location + emergency code + supplemental data

The above example, if located in Washington, DC, U.S.A., would be published as:

#USAWashingtonDC911Trapped

The specialized use of #hashtags could be implemented in the same cultural manner as 911, 999, and other emergency phone number systems. Metadata using these tags would also be given priority when sent over the Internet through communication networks (landline, broadband Internet, or mobile text or data). Abuse of ratified emergency #hashtags would be a prosecutable offense. Implementing such a system could reduce the amount of data that crisis mappers and other response organizations need to monitor and improve the quality of data to be filtered. Other forms of #hashtag syntax can also be implemented, such as:

#country + location + information code (411) + supplemental data
#country + location + water (H20) + supplemental data
#country + location + Fire (FD) + supplemental data”

I found this very interesting and relevant to this earlier blog post: “Calling 911: What Humanitarians Can Learn from 50 Years of Crowdsourcing.” Perhaps a reference to Tweak the Tweet would have been worthwhile.
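Out of curiosity, here’s a quick sketch of how the proposed tag syntax might be parsed in practice. Since the scheme uses no delimiters, this toy assumes a three-letter country prefix and a fixed list of emergency codes; a real taxonomy would have to resolve such ambiguities explicitly.

```python
# Toy parser for the report's proposed emergency hashtag syntax:
# #country + location + emergency code + supplemental data.
# Assumes a 3-letter country code and a known list of emergency codes.
import re

EMERGENCY_CODES = ["911", "411", "H20", "FD"]

def parse_crisis_tag(tag):
    body = tag.lstrip("#")
    country, rest = body[:3], body[3:]
    for code in EMERGENCY_CODES:
        match = re.search(re.escape(code), rest)
        if match:
            return {
                "country": country,
                "location": rest[:match.start()],
                "code": code,
                "supplemental": rest[match.end():],
            }
    return None

print(parse_crisis_tag("#USAWashingtonDC911Trapped"))
# -> {'country': 'USA', 'location': 'WashingtonDC',
#     'code': '911', 'supplemental': 'Trapped'}
```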

I also had not come across some of the platforms used in response to the 2011 earthquake in New Zealand. But the report did an excellent job sharing these.

EQviewer.co.nz

Some errors that need correcting:

“Open source mapping tools such as Google Earth use imagery as a foundation for layering field data.”

Google Earth is not an open source tool.

“CrisisMappers.net, mentioned earlier, is a group of more than 1,600 volunteers that have been brought together by Patrick Meier and Jen Ziemke. It is the core of collaboration efforts that can be deployed anywhere in the world. CrisisMappers has established workshops and steering committees to set guidelines and standardize functions and capabilities for sites that deliver imagery and layered datasets. This group, which today consists of diverse and talented volunteers from all walks of life, might soon evolve into a professional volunteer organization of trusted capabilities and skill sets and they are worth watching.”

CrisisMappers is not a volunteer network or an organization that deploys in any formal sense of the word. The CrisisMappers website explains what the mission and purpose of this informal network is. The initiative has some 3,500 members.

“Figure 16. How Ushahidi’s Volunteer Standby Task Force was Structured for Libya. Ushahidi’s platform success stems from its use by organized volunteers, each with skill sets that extract data from multiple sources for publication.”

The Standby Volunteer Task Force (SBTF) does not belong to Ushahidi, nor is the SBTF an Ushahidi project. A link to the SBTF website would have been appropriate. Also, the majority of applications of the Ushahidi platform have nothing to do with crises, or the SBTF, or any other large volunteer networks. The SBTF’s original success stems from organized volunteers who were well versed in the Ushahidi platform.

“Ushahidi accepts KML and KMZ if there is an agreement and technical assistance resources are available. An end user cannot on their own manipulate a Ushahidi portal as an individual, nor can external third party groups unless that group has an arrangement with the principal operators of the site. This offers new collaboration going forward. The majority of Ushahidi disaster portals are operated by volunteer organizations and not government agencies.”

The first sentence is unclear. If someone sets up an Ushahidi platform and they have KML/KMZ files that they want to upload, they can go ahead and do so. An end-user can do some manipulation of an Ushahidi portal and can also pull the Ushahidi data into their own platform (via the GeoRSS feed, for example). Thanks to the ESRI-Ushahidi plugin, they can then perform a range of more advanced GIS analysis. As for volunteers versus government agencies: indeed, it appears the former are leading the way vis-a-vis innovation.
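To illustrate just how low the barrier is, here is a minimal sketch of pulling geotagged reports out of an Ushahidi deployment’s GeoRSS feed using only Python’s standard library. The deployment URL is hypothetical, and feed paths and namespaces may vary across Ushahidi versions.

```python
# Minimal sketch: pull geotagged reports from an Ushahidi GeoRSS feed.
# The deployment URL below is hypothetical; feed paths vary by version.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example-deployment.ushahidi.com/feed"  # hypothetical
GEORSS = "{http://www.georss.org/georss}"

with urllib.request.urlopen(FEED_URL) as response:
    root = ET.fromstring(response.read())

for item in root.iter("item"):
    title = item.findtext("title", default="")
    point = item.findtext(GEORSS + "point")  # "lat lon" if geotagged
    if point:
        lat, lon = point.split()
        print(f"{title}: ({lat}, {lon})")
```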

Finally, below are some omissions and areas that I would have been very interested to learn more about. For some reason, the section on the Ushahidi deployment in New Zealand makes no reference to Ushahidi.

Staying on the topic of the earthquake in Christchurch, I was surprised to see no reference to the Tomnod deployment.

I had also hoped to read more about the use of drones (UAVs) in disaster response since these were used both in Haiti and Japan. What about the rise of DIY drones and balloon mapping? Finally, the report’s reference to Broadband Global Area Network (BGAN) doesn’t provide information on the range of costs associated with using BGANs in disasters.

In conclusion, the report is definitely an important contribution to the field of crisis mapping and should be required reading.

The Horn of Africa and the Crisis Mapping Community

“… the Horn of Africa famine and the associated crises gravely affecting millions of people has not animated the crisis-mapping community and its online platforms to the extent of post-Haiti or, more recently, following the 2011 earthquake in Japan.”

I’m somewhat concerned by the phrasing of this statement, which comes from this recent article published by ICT4Peace. Perhaps the author is simply unaware of the repeated offers made by the crisis mapping community to provide crisis mapping solutions, mobile information collection platforms, short codes, call center services, etc., to several humanitarian organizations including UN OCHA, UNDP and WFP over the past three months.

In the case of OCHA, the team in Somalia replied that they had everything under control. In terms of UNDP, the colleagues we spoke with simply did/do not have the capacity, time or skill-set to leverage new crisis mapping solutions to improve their situational awareness or better communicate with disaster-affected communities. And WFP explained that lack of access rather than information was the most pressing challenge they were facing (at least two months ago), an issue echoed by two other humanitarian organizations.

This excellent report by Internews details the complete humanitarian technology failure in Dadaab refugee camp and underscores how limited and behind some humanitarian organizations still are vis-a-vis the prioritization of “new” information and communication technologies (ICTs) to improve humanitarian response and the lives of refugees in crisis situations. These organizations require support and core funding to “upgrade”. Throwing crisis mapping technologies at the problem is not going to solve many problems if the underlying humanitarian mechanisms are not in place to leverage these solutions.

This is not a criticism of humanitarian organizations but rather hard reality. I’ve had numerous conversations with both technology and humanitarian colleagues over the past three months about how to reach for low-hanging fruit and catalyze quick wins with even the most minimal ICT interventions. But as is often the case, the humanitarian community is understandably overwhelmed and genuinely trying to do the best they can given the very difficult circumstances. Indeed, Somalia presents a host of obvious challenges and risks that were not present in either Haiti or Japan. (Incidentally, only a fraction of the crisis mapping community was involved in Japan compared to overall efforts in Somalia).

Perhaps ICT4Peace is also unaware that some colleagues and I spent many long days and nights in August and September preparing the launch of a live crisis map for Somalia, which ESRI, Google, Nethope and several other groups provided critical input on. See my blog post on this initiative here. But the project was torpedoed by a humanitarian organization that was worried about the consequences of empowering the Somali Diaspora, i.e., that they would become more critical of the US government’s perceived inaction as a result of the information they collected—a consequence I personally would have championed as an indicator of success.

Maybe ICT4Peace is also unaware that no humanitarian organization formally requested the activation of the Standby Volunteer Task Force (SBTF) in August. That said, the SBTF did engage in this pilot project to crowdsource the geo-tagging of shelters in Somalia in September as a simple trial run. Since then, the SBTF has officially partnered with UNHCR and the Joint Research Center (JRC) to geo-tag IDP camps in specific regions of Somalia next month. DigitalGlobe is a formal partner in this project, as is Tomnod. Incidentally, JRC is co-hosting this year’s International Conference of Crisis Mappers (ICCM 2011).

ICT4Peace is perhaps also not aware of a joint project between Ushahidi and UN OCHA Kenya to provide crisis mapping support, or of recent conversations with Al Jazeera, Souktel, the Virgin Group, K’naan, PopTech, CeaseFire, PeaceTXT, GSMA, DevSeed and others on implementing crisis mapping and SMS solutions for Somalia. In addition, the Humanitarian Open Street Map Team (HOT) has been busy improving the data for Somalia and the only reason they haven’t been able to go full throttle is because of data licensing issues beyond their control. Colleagues from the Harvard Humanitarian Initiative (HHI) have also been offering their help where and when they can.

In sum, to say that the crisis mapping community has not been as “animated” in response to the crisis in the Horn is misleading and rather unfortunate given that ICT4Peace is co-hosting this year’s International Conference of Crisis Mappers (ICCM 2011). All ICT4Peace had to do was to send one simple email to the CrisisMappers.net membership to get all the above information (and likely more). Just because these efforts are not captured on CNN or on the front pages of the UN Chronicle does not mean that there haven’t been numerous ongoing efforts behind the scenes by dozens of different partners and members of the crisis mapping community.

I would therefore not be so quick to dismiss the perceived inaction of this community. I would also not make an automatic assumption that crisis mapping platforms and mobile technology solutions will always be “easy” or feasible to deploy in every context, especially if this is attempted reactively in the middle of a complex humanitarian crisis. Both Haiti and Japan provided permissive environments, unlike recent crisis mapping projects in Libya, Egypt and the Sudan, which presented serious security challenges. Finally, if direct offers of support by the crisis mapping community are not leveraged by field-based humanitarian organizations, then how exactly is said crisis mapping community supposed to be more animated?

Crowdsourcing Will Solve All Humanitarian Problems

Here’s one of my favorite false arguments: “There are some people who believe that crowdsourcing will solve all humanitarian challenges….” So said a good colleague of mine vis-a-vis crisis response at a recent strategy meeting. Of course, when I pressed him for names, he didn’t have a reply. I don’t know anyone who subscribes to the above-mentioned point of view. While I understand that he made the statement in jest and primarily to position himself, I’m concerned that some in the humanitarian community actually believe this comment to be true.

First of all, suggesting that some individuals subscribe to an extreme point of view is a cheap debating tactic and a real pet peeve of mine. Simply label your “opponent” as holding a fundamentalist view of the world and everything you say following that statement holds true, easily discrediting your competition in the eyes of the jury. Surely we’ve moved beyond these types of false arguments in the crisis mapping community.

Secondly, crowdsourcing is simply one among several methodologies that can, in some cases, be useful to collect information following a crisis. And as mentioned in this previous blog post entitled, “Demystifying Crowdsourcing: An Introduction to Non-Random Sampling,” the use of crowdsourcing, like any methodology, comes with advantages and disadvantages that depend both on goals and context. Surely, this is now common knowledge.

My point here is neither to defend nor to dismiss the use of crowdsourcing. My hope is that we move away from such false, dichotomous debates to conversations that recognize the complexities of an evolving situation; dialogues that value having more methodologies in the toolbox rather than fewer—and corresponding manuals that give us clarification on trade-offs and appropriate guidance on when to use which methods, why and how. Crowdsourcing crisis information has never been an either-or argument, so let’s not turn it into one. Polarizing the conversation with fictitious claims will only get in the way of learning and innovation.

OpenStreetMap’s New Micro-Tasking Platform for Satellite Imagery Tracing

The Humanitarian OpenStreetMap Team’s (HOT) response to Haiti remains one of the most remarkable examples of what’s possible when volunteers, open source software and open data intersect. When the 7.0 magnitude earthquake struck on January 12th, 2010, the Google Map of downtown Port-au-Prince was simply too incomplete to be used for humanitarian response. Within days, however, several hundred volunteers from the OpenStreetMap (OSM) community used satellite imagery to trace roads, shelters and other important features to create the most detailed map of Haiti ever made.

OpenStreetMap – Project Haiti from ItoWorld on Vimeo.

The video animation above shows just how spectacular this initiative was. More than 1.4 million edits were made to the map during the first month following the earthquake. These individual edits are highlighted as bright flashes of light in the video. This detailed map went a long way to supporting the humanitarian community’s response in Haiti. In addition, the map enabled my colleagues and I at The Fletcher School to geo-locate reports from crowdsourced text messages from Mission 4636 on the Ushahidi Haiti Map.

HOT’s response was truly remarkable. They created wikis to facilitate mass collaboration, such as this page on “What needs to be mapped?” They also used this “OSM Matrix” to depict which areas required more mapping:

The purpose of OSM’s new micro-tasking platform is to streamline mass and rapid collaboration on future satellite image tracing projects. I recently reached out to HOT’s Kate Chapman and Nicolas Chavent to get an overview of their new platform. After logging in using my OSM username and password, I can click through a list of various on-going projects. The one below relates to a very neat HOT project in Indonesia. As you can tell, the region that needs to be mapped on the right-hand side of the screen is divided into a grid.

After I click on “Take a task randomly”, the screen below appears, pointing me to one specific cell in the grid above. I then have the option of opening and editing this cell within JOSM, the standard interface for editing OpenStreetMap. I would then trace all roads and buildings in my square and submit the edit. (I was excited to also see a link to WalkingPapers which allows you to print out and annotate that cell using pen & paper and then digitize the result for import back into OSM).
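The underlying workflow is easy to picture in code. Below is a toy model of the “take a task randomly” step; it only mimics the idea and is in no way the Tasking Server’s actual implementation.

```python
# Toy model of the micro-tasking workflow: split the mapping area
# into grid cells, hand volunteers a random open cell, and mark it
# done once traced. Not the OSM Tasking Server's actual code.
import random

class TaskGrid:
    def __init__(self, rows, cols):
        self.open_cells = {(r, c) for r in range(rows) for c in range(cols)}
        self.done_cells = set()

    def take_task_randomly(self):
        """Assign one untraced cell, like the 'Take a task randomly' button."""
        cell = random.choice(sorted(self.open_cells))
        self.open_cells.remove(cell)
        return cell

    def mark_done(self, cell):
        self.done_cells.add(cell)

grid = TaskGrid(rows=4, cols=6)
cell = grid.take_task_randomly()
print(f"Trace all roads and buildings in cell {cell}, then submit.")
grid.mark_done(cell)
```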

There’s no doubt that this new Tasking Server will go a long way toward coordinating and streamlining future live tracing efforts, such as for Somalia. For now, the team is mapping Somalia’s road network using their wiki approach. In the future, I hope that the platform will also enable basic feature tagging and back-end triangulation for quality assurance purposes—much like Tomnod. In the meantime, however, it’s important to note that OSM is far more than just a global open source map. OSM’s open data advocacy is imperative for disaster preparedness and response: open data saves lives.

Seeking the Trustworthy Tweet: Can “Tweetsourcing” Ever Fit the Needs of Humanitarian Organizations?

Can microblogged data fit the information needs of humanitarian organizations? This is the question asked by a group of academics at Pennsylvania State University’s College of Information Sciences and Technology. Their study (PDF) is an important contribution to the discourse on humanitarian technology and crisis information. The applied research provides key insights based on a series of interviews with humanitarian professionals. While I agree with the majority of the arguments presented in this study, I do have questions regarding the framing of the problem and some of the assertions made.

The authors note that “despite the evidence of strong value to those experiencing the disaster and those seeking information concerning the disaster, there has been very little uptake of message data by large-scale, international humanitarian relief organizations.” This is because real-time message data is “deemed as unverifiable and untrustworthy, and it has not been incorporated into established mechanisms for organizational decision-making.” To this end, “committing to the mobilization of valuable and time sensitive relief supplies and personnel, based on what may turn out be illegitimate claims, has been perceived to be too great a risk.” Thus far, the authors argue, “no mechanisms have been fashioned for harvesting microblogged data from the public in a manner, which facilitates organizational decisions.”

I don’t think this latter assertion is entirely true if one looks at the use of Twitter by the private sector. Take for example the services offered by Crimson Hexagon, which I blogged about 3 years ago. This successful start-up launched by Gary King out of Harvard University provides companies with real-time sentiment analysis of brand perceptions in the Twittersphere precisely to help inform their decision making. Another example is Storyful, which harvests data from authenticated Twitter users to provide highly curated, real-time information via microblogging. Given that the humanitarian community lags behind in the use and adoption of new technologies, it behooves us to look at those sectors that are ahead of the curve to better understand the opportunities that do exist.

Since the study principally focused on Twitter, I’m surprised that the authors did not reference the empirical study that came out last year on the behavior of Twitter users after the 8.8 magnitude earthquake in Chile. The study shows that about 95% of tweets related to confirmed reports validated that information. In contrast, only 0.03% of tweets denied the validity of these true cases. Interestingly, the results also show that “the number of tweets that deny information becomes much larger when the information corresponds to a false rumor.” In fact, about 50% of tweets will deny the validity of false reports. This means it may very well be possible to detect rumors by using aggregate analysis on tweets.
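As a back-of-the-envelope illustration of that aggregate signal, the sketch below flags a claim as a likely rumor when the share of denial tweets is high. The deny-markers, sample tweets and threshold are all invented; a real classifier would be trained on labeled data as in the Chile study.

```python
# Toy rumor detector based on the Chile study's insight: false rumors
# attract a much higher share of denial tweets than confirmed reports.
# Markers, tweets and threshold below are invented for illustration.
DENY_MARKERS = ("fake", "false", "not true", "rumor", "debunked")

def denial_ratio(tweets):
    denials = sum(any(m in t.lower() for m in DENY_MARKERS) for t in tweets)
    return denials / len(tweets)

claim_tweets = [
    "Bridge on route 5 has collapsed!!",
    "That bridge collapse story is FALSE, just drove over it",
    "hearing the route 5 collapse is a rumor",
]

# ~0.03% of tweets deny true reports vs ~50% for false rumors, so even
# a modest threshold separates the two regimes.
if denial_ratio(claim_tweets) > 0.2:
    print("Likely rumor: unusually high denial ratio")
```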

On framing, I believe the focus on microblogging and Twitter in particular misses the bigger picture which ultimately is about the methodology of crowdsourcing rather than the technology. To be sure, the study by Penn State could just as well have been titled “Seeking the Trustworthy SMS.” I think this important research on microblogging would be stronger if this distinction were made and the resulting analysis tied more closely to the ongoing debate on crowdsourcing crisis information that began during the response to Haiti’s earthquake in 2010.

Also, as was noted during the Red Cross Summit in 2010, more than two-thirds of survey respondents said they would expect a response within an hour if they posted a need for help on a social media platform (and not just Twitter) during a crisis. So whether humanitarian organizations like it or not, crowdsourced social media information cannot be ignored.

The authors carried out a series of insightful interviews with about a dozen international humanitarian organizations to try and better understand the hesitation around the use of Twitter for humanitarian response. As noted earlier, however, it is not Twitter per se that is a concern but the underlying methodology of crowdsourcing.

As expected, interviewees noted that they prioritize the veracity of information over the speed of communication. “I don’t think speed is necessarily the number one tool that an emergency operator needs to use.” Another interviewee opined that “It might be hard to trust the data. I mean, I don’t think you can make major decisions based on a couple of tweets, on one or two tweets.” What’s interesting about this latter comment is that it implies that only one channel of information, Twitter, is to be used in decision-making, which is a false argument and one that nobody I know has ever made.

Either way, the trade-off between speed and accuracy is a well known one. As mentioned in this blog post from 2009, information is perishable and accuracy is often a luxury in the first few hours and days following a major disaster. As the authors of the study rightly note, “uncertainty is ‘always expected, if sometimes crippling’ (Benini, 1997) for NGOs involved in humanitarian relief.” Ultimately, the question posed by the authors of the Penn study can be boiled down to this: is some information better than no information if it cannot be immediately verified? In my opinion, yes: if you have some information, then at least you can investigate its veracity, which may lead to action.

Based on the interviews, the authors found that organizations engaged in immediate emergency response were less likely to make use of Twitter (or crowdsourced information) as a channel for information. As one interviewee put it, “Lives are on the line. Every moment counts. We have it down to a science. We know what information we need and we get in and get it…” In contrast, those organizations engaged in subsequent phases of disaster response were thought more likely to make use of crowdsourced data.

I’m not entirely convinced by this: “We know what information we need and we get in and get it…”. Yes, humanitarian organizations typically know, but whether they get it, and in time, is certainly not a given. Just look at the humanitarian responses to Haiti and Libya, for example. Organizations may very well be “unwilling to trade data assurance, veracity and authenticity for speed,” but sometimes this mindset will mean having absolutely no information. This is why OCHA asked the Standby Volunteer Task Force to provide them with a live crowdsourced social media map of Libya. In Haiti, while the UN is not thought to have used crowdsourced SMS data from Mission 4636, other responders like the Marine Corps did.

Still, according to one interviewee, “fast is good, but bad information fast can kill people. It’s got to be good, and maybe fast too.” This assumes that a lack of information doesn’t kill people. And good information that arrives late can also kill people. As one of the interviewees admitted when using traditional methods, “it can be quite slow before all that [information] trickles through all the layers to get to us.” The authors of the study also noted that, “Many [interviewees] were frustrated with how slow the traditional methods of gathering post-disaster data had remained despite the growing ubiquity of smart phones and high quality connectivity and power worldwide.”

On a side note, I found the following comment during the interviews especially revealing: “When we do needs assessments, we drive around and we look with our eyes and we talk to people and we assess what’s on the ground and that’s how we make our evaluations.” One of the common criticisms leveled against the use of crowdsourced information is that it isn’t representative. But then again, driving around, checking things out and chatting with people is hardly going to yield a representative sample either.

One of the main findings from this research has to do with a problem in attitude on the part of humanitarian organizations. “Each of the interviewees stated that their organization did not have the organizational will to try out new technologies. Most expressed this as a lack of resources, support, leadership and interest to adopt new technologies.” As one interviewee noted, “We tried to get the president and CEO both to use Twitter. We failed abysmally, so they’re not– they almost never use it.” Interestingly, “most of the respondents admitted that many of their technological changes were motivated by the demands of their donors. At this point in time their donors have not demanded that these organizations make use of microblogged data. The subjects believed they would need to wait until this occurred for real change to begin.”

For me the lack of will has less to do with available resources and limited capacity and far more to do with a generational gap. When today’s young professionals in the humanitarian space work their way up to more executive positions, we’ll see a significant change in attitude within these organizations. I’m thinking in particular of the many dozens of core volunteers who played a pivotal role in the crisis mapping operations in Haiti, Chile, Pakistan, Russia and most recently Libya. And when attitude changes, resources can be reallocated and new priorities can be rationalized.

What’s interesting about these interviews is that despite all the concerns and criticisms of crowdsourced Twitter data, all interviewees still see microblogged data as a “vast trove of potentially useful information concerning a disaster zone.” One of the professionals interviewed said, “Yes! Yes! Because that would – again, it would tell us what resources are already in the ground, what resources are still needed, who has the right staff, what we could provide. I mean, it would just – it would give you so much more real-time data, so that as we’re putting our plans together we can react based on what is already known as opposed to getting there and discovering, oh, they don’t really need medical supplies. What they really need is construction supplies or whatever.”

Another professional stated that, “Twitter data could potentially be used the same way… for crisis mapping. When an emergency happens there are so many things going on in the ground, and an emergency response is simply prioritization, taking care of the most important things first and knowing what those are. The difficult thing is that things change so quickly. So being able to gather information quickly…. <with Twitter> There’s enormous power.”

The authors propose three possible future directions. The first is bounded microblogging, which I have long referred to as “bounded crowdsourcing.” It doesn’t make sense to focus on the technology instead of the methodology because at the heart of the issue are the methods for information collection. In “bounded crowdsourcing,” membership is “controlled to only those vetted by a particular organization or community.” This is the approach taken by Storyful, for example. One interviewee acknowledged that “Twitter might be useful right after a disaster, but only if the person doing the Tweeting was from <NGO name removed>, you know, our own people. I guess if our own people were sending us back Tweets about the situation it could help.”

Bounded crowdsourcing overcomes the challenge of authentication and verification but obviously with a tradeoff in the volume of data collected “if an additional means were not created to enable new members through an automatic authentication system, to the bounded microblogging community.” However, the authors feel that bounded crowdsourcing environments “undermine the value of the system” since “the power of the medium lies in the fact that people, out of their own volition, make localized observations and that organizations could harness that multitude of data. The bounded environment argument neutralizes that, so in effect, at that point, when you have a group of people vetted to join a trusted circle, the data does not scale, because that pool by necessity would be small.”

That said, I believe the authors are spot on when they write that “Bounded environments might be a way of introducing Twitter into the humanitarian centric organizational discourse, as a starting point, because these organizations, as seen from the evidence presented above, are not likely to initially embrace the medium. Bounded environments could hence demonstrate the potential for Twitter to move beyond the PR and Communications departments.”
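In practice, bounded crowdsourcing amounts to a simple membership filter on the incoming stream. Here’s a hedged sketch of that filter; the usernames are invented, and a real system would tie vetting to verified identities rather than screen names.

```python
# Sketch of "bounded crowdsourcing" as a stream filter: only reports
# from vetted members pass through. Usernames are invented; real
# vetting would rely on verified identities, not screen names.
VETTED_MEMBERS = {"ngo_field_team_1", "ngo_field_team_2"}  # hypothetical

incoming_reports = [
    {"user": "ngo_field_team_1", "text": "clinic in sector 4 needs fuel"},
    {"user": "random_account", "text": "everything is on fire everywhere"},
]

trusted = [r for r in incoming_reports if r["user"] in VETTED_MEMBERS]
for report in trusted:
    print(f"{report['user']}: {report['text']}")
```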

The second possible future direction is to treat crowdsourced data as ambient, “contextual information rather than instrumental information, (i.e., factual in nature).” This grassroots information could be considered as an “add-on to traditional, trusted institutional lines of information gathering.” As one interviewee noted, “Usually information exists. The question is the context doesn’t exist…. that’s really what I see as the biggest value [of crowdsourced information] and why would you use that in the future is creating the context…”.

The authors rightly suggest that “adding contextual information through microblogged data may alleviate some of the uncertainty during the time of disaster. Since the microblogged data would not be the single data source upon which decisions would be made, the standards for authentication and security could be less stringent. This solution would offer the organization rich contextual data, while reducing the need for absolute data authentication, reducing the need for the organization to structurally change, and reducing the need for significant resources.” This is exactly how I consider and treat crowdsourced data.

The third and final forward-looking solution is computational. The authors “believe better computational models will eventually deduce informational snippets with acceptable levels of trust.” They refer to Ushahidi’s SwiftRiver project as an example.

In sum, this study is an important contribution to the discourse. The challenges around using crowdsourced crisis information are well known. If I come across as optimistic, it is for two reasons. First, I do think a lot can be done to address the challenges. Second, I do believe that attitudes in the humanitarian sector will continue to change.

Discussing the Recommendations of the Disaster 2.0 Report

It’s been well over a month since the Disaster 2.0 Report was publicly launched and while some conversations on the report have figured on the Crisis Mappers Network list-serve and the Standby Task Force blog, much of this discussion has overlooked the report’s detailed recommendations. I had hoped by now that someone would have taken the lead on catalyzing a debate around these recommendations, but since that still hasn’t happened, I might as well start.

The report’s authors clearly state that, “the development of an interface between the Volunteer and Technical Communities (V&TCs) and formal humanitarian system is a design problem that must be left to the stakeholders.” In addition, they clarify that “the purpose of this document is not to set forth the final word on how to connect new information flows into the international humanitarian system; but to initiate a conversation about the design challenges involved with this endeavor.” This conversation has yet to happen.

While the humanitarian community has proposed some design ideas, V&TCs have not responded in any detail to these proposals—although to be fair, no deadline for feedback has been suggested either. In any case, the proposed designs are meant to create an interface between humanitarian organizations and V&TCs—the two main stakeholders discussed in the Disaster 2.0 Report. It would be unfortunate and probably defeat the purpose of the report if the final interface were operationalized before any V&TCs had the chance to explain what would work best for them in terms of interface. Indeed, without an open and proactive conversation that includes both stakeholder groups, it is unlikely that the final interface design will gain buy-in from both groups, which would result in wasted funding.

So here’s an open and editable Google Doc that includes the report’s recommendations. I have already added some of my comments to the Google Doc and hope others will as well. On June 1st, I will publish a new blog post that will summarize all the feedback added to the Google Doc. I hope this summary will serve to move the conversations forward so we can co-develop an interface that will prove useful and effective to all those concerned.