Category Archives: Humanitarian Technologies

Augmented Reality for Crisis Mapping and Humanitarian Response

Could we leverage Augmented Reality (AR) apps for Crisis Mapping? I’ve been thinking about this question for a while but finally decided to experiment after bumping into Autonomy here at the IPI World Congress in Taipei. The company has a free AR app called Aurasma, which basically lets the user create their own AR action. So I gave it a spin, figuring that if people could animate the odd T-Rex splish-splashing in the Bay Area, there might also be some humanitarian applications worth exploring.

Autonomy’s AR app is available for the iPhone, iPad and Android. What is especially neat is that you can cache the AR data and therefore use the app off-line, always a plus for crisis response. I experimented by using: (1) the amazing Humanitarian OpenStreetMap animated video of Haiti, (2) Internews’s excellent humanitarian technology report on Dadaab (a must read), and (3) a printout of WalkingPapers for some location in California.

I had to use an iPhone and iPad at the same time to film the AR in action, so apologies in advance for the less than smooth panning. In this first video, I point the iPad’s camera at a screenshot print-out of the OpenStreetMap (OSM) video, the cover of Internews’s report and a paper-based map from WalkingPapers. For the OSM screenshot, I superimposed the video animation. I added a dynamic visualization of an Ushahidi platform for Somalia on the front page of the Internews report and added AR red dots to the WalkingPapers handout.

The OSM video animation in AR is a little wobbly but comes out much nicer on this video, which demos the iPhone app.

The Aurasma app was easy to use and the Autonomy Marketing Executive I spoke to said he’d be happy to support humanitarian applications of the platform. One idea would be to visualize MapAction GIS products in the field with an AR layer for crowdsourced data, for example. In other words, the hard copy maps could serve as informative base maps on top of which dynamic event-data could be visualized (and updated) via the Aurasma app. A related idea: visualize projected weather forecasts on top of a hard copy map of flood prone areas. Of course, the same types of visualizations could be done from a GIS platform but I’m thinking about mobile, rapid and off-line options for humanitarian professionals not conversant in GIS.

How would you apply AR to crisis mapping? Is AR even useful for humanitarian response or yet another unnecessary gadget? Feel free to share your thoughts in the comments section below. (As for fans of Daniel Suarez’s book Daemon, yes, this brings us one step closer to Matthew Sobol’s vision).

How to Crowdsource Crisis Response

I recently had the distinct pleasure of giving this year’s keynote address at the Global Communications Forum (#RCcom on Twitter) organized by the International Committee of the Red Cross (ICRC) in Geneva. The conversations that followed were thoroughly fruitful and enjoyable.

Like many other humanitarian organizations, the ICRC is thinking hard about how to manage the social media challenge. In 2010, this study carried out by the American Red Cross (ARC) found that the public increasingly expects humanitarian organizations to respond to pleas for help posted on social media platforms like Facebook, Twitter, etc. The question is, how in the world are humanitarian organizations supposed to handle this significant increase in “customer service” requests? Even during non-emergencies, ARC’s Facebook page receives a large number of comments on a daily basis, many of which solicit replies. This figure escalates significantly during crises. So what to do?

The answer, in my opinion, requires some organizational change. Clearly, the dramatic rise in customer service requests posted on social media platforms cannot be managed through existing organizational structures and work flows. Moreover, the vast majority of posted requests don’t reflect life threatening situations. In other words, responses to many requests don’t require professional emergency responders. So humanitarian organizations should consider taking a two-pronged strategy to address the social media challenge. The first is to upgrade their “customer service systems” and the second is to connect these systems with local networks of citizen crisis responders.

How do large private sector companies deal with the social media challenge? Well, some obviously do better than others. (Incidentally, this question was a recurring topic of conversation at the Same Wavelength conference in London where I spoke after Geneva). This explains why I recommended that my ICRC colleagues consider various social media customer service models used in the private sector and identify examples of positive deviance.

The latest innovation in the customer service space was just launched at TechCrunch Disrupt this week. TalkTo “allows consumers to send text messages to any business and get quick responses to questions, feedback, and more.” As TechCrunch writes, “no one wants to wait on the phone, and email can be slow as well. SMS Messaging is a natural form of communication these days and the most efficient for simple questions. It makes sense to bring this communication to businesses.” If successful, I wonder whether TalkTo will add Twitter and Facebook to their service as other communication media.

Some companies leverage crowdsourcing, like Best Buy’s TwelpForce. Over time, Best Buy “found that with some good foundational guideposts and training tools, the crowd began to self-organize and govern itself. Leaders in the space popped up as coaches, or mentors – and pretty soon they had a really good support network in place.”

On the humanitarian side, the American Red Cross has begun to leverage their trained volunteers to manage responses to the organization’s official Facebook page, for example. With some good foundational guideposts and training tools, they should be able to scale this solution. In some ways, one could say that humanitarian organizations are increasingly required to play the role of “telephone” operator. So I’d be very interested in getting feedback from iRevolution readers on alternative, social media approaches to customer service in the private sector. If you know of any innovative ones, please feel free to share in the comments section below.

The second strategy that humanitarian organizations need to consider is linking this new customer service system to networks of citizen crisis responders. An “operator” on the ARC Facebook page, for example, would triage the incoming posts by “pushing” them into different bins according to topic and urgency. Posts that don’t reflect a life-threatening situation but still require operational response could simply be forwarded to local citizen crisis responders. The rest can be re-routed to professional emergency responders. Geo-fenced alerts from crisis mapping platforms could also play an important role in this respect.
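To make this concrete, here is a minimal sketch of what the operator’s routing step might look like in code. The bin names, topics and rules are my own illustrative assumptions, not any organization’s actual workflow:

```python
from dataclasses import dataclass

@dataclass
class SocialMediaPost:
    text: str
    topic: str              # e.g. "shelter", "medical", "cleanup"
    life_threatening: bool  # as judged by the human operator

def route(post: SocialMediaPost) -> str:
    """Triage a post into a response bin, per the two-pronged strategy."""
    if post.life_threatening:
        return "professional_responders"   # escalate immediately
    if post.topic in ("shelter", "supplies", "cleanup"):
        return "local_citizen_responders"  # crowdfeeding channel
    return "customer_service_queue"        # an informational reply suffices

# A non-urgent request gets forwarded to local volunteers:
post = SocialMediaPost("Need help clearing debris on Elm St", "cleanup", False)
print(route(post))  # -> local_citizen_responders
```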

One should remember that the majority of crisis responses are “crowdsourced” by definition since the real first responders are always local communities. For example, “it is well known that in case of earthquakes, such as the one that happened in Mexico City, the assistance to the victims comes first of all from the other survivors […]” (Gilbert 1998). In fact, estimates suggest that, “no more than 10 per cent of survival in emergencies can be contributed to external sources of relief aid” (Hillhorst 2004). So why not connect humanitarian customer service systems to local citizen crisis responders and thereby make the latter’s response more targeted and efficient rather than simply ad hoc? I’ve used the term “crowdfeeding” to describe this idea in previous blog posts like this one and this one. We basically need a Match.com for citizen based crisis response in which both problems and solutions are crowdsourced.

So where are these “new” citizen crisis responders to come from? How about leveraging existing networks like Community Emergency Response Teams (CERTs), the UN Volunteer system (UNVs), Red Cross volunteer networks and platforms like Red Cross Volunteer Match? Why not make use of existing training materials like FEMA’s online courses? Universities could also promote the idea of student crisis responders and offer credit for relevant courses.

Update: New app helps Queensland coordinate volunteers.

Crowdsourcing Satellite Imagery Analysis for Somalia: Results of Trial Run

We’ve just completed our very first trial run of the Standby Volunteer Task Force (SBTF) Satellite Team. As mentioned in this blog post last week, the UN approached us a couple weeks ago to explore whether basic satellite imagery analysis for Somalia could be crowdsourced using a distributed mechanical turk approach. I had actually floated the idea in this blog post during the floods in Pakistan a year earlier. In any case, a colleague at Digital Globe (DG) read my post on Somalia and said: “Let’s do it.”

So I reached out to Luke Barrington at Tomnod to set up a distributed micro-tasking platform for Somalia. To learn more about Tomnod’s neat technology, see this previous blog post. Within just a few days we had high resolution satellite imagery from DG and a dedicated crowdsourcing platform for imagery analysis, courtesy of Tomnod. All that was missing were some willing and able “mapsters” from the SBTF to tag the location of shelters in this imagery. So I sent out an email to the group and some 50 mapsters signed up within 48 hours. We ran our pilot from August 26th to August 30th. The idea here was to see what would go wrong (and right!) and thus learn as much as we could before doing this for real in the coming weeks.

It is worth emphasizing that the purpose of this trial run (and entire exercise) is not to replicate the kind of advanced and highly-skilled satellite imagery analysis that professionals already carry out. This is not just about Somalia over the next few weeks and months. This is about Libya, Syria, Yemen, Afghanistan, Iraq, Pakistan, North Korea, Zimbabwe, Burma, etc. Professional satellite imagery experts who have plenty of time to volunteer their skills are few and far between. Meanwhile, a staggering amount of new satellite imagery is produced every day; millions of square kilometers’ worth according to one knowledgeable colleague.

This is a big data problem that needs mass human intervention until the software can catch up. Moreover, crowdsourcing has proven to be a workable solution in many other projects and sectors. The “crowd” can indeed scan vast volumes of satellite imagery data and tag features of interest. A number of these crowdsourcing platforms also have built-in quality assurance mechanisms that take into account the reliability of the taggers and tags. Tomnod’s CrowdRank algorithm, for example, only validates imagery analysis if a certain number of users have tagged the same image in exactly the same way. In our case, only shelters that get tagged identically by three SBTF mapsters get their locations sent to experts for review. The point here is not to replace the experts but to take some of the easier (but time-consuming) tasks off their shoulders so they can focus on applying their skill set to the harder stuff vis-a-vis imagery interpretation and analysis.
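Tomnod’s CrowdRank algorithm itself is proprietary, but the consensus rule described above is easy to sketch: snap each volunteer’s tag to a coarse grid so that near-identical tags count as “the same”, and confirm only locations tagged by at least three distinct volunteers. The grid size and all names below are illustrative assumptions:

```python
from collections import defaultdict

def confirmed_shelters(tags, cell_size=0.0005, min_agreement=3):
    """
    tags: list of (volunteer_id, lat, lon) shelter tags.
    Snap each tag to a coarse grid cell so small pixel-level differences
    still count as agreement, then keep only cells tagged by at least
    `min_agreement` distinct volunteers.
    """
    cells = defaultdict(set)
    for volunteer, lat, lon in tags:
        cell = (round(lat / cell_size), round(lon / cell_size))
        cells[cell].add(volunteer)
    return [
        (cell[0] * cell_size, cell[1] * cell_size)
        for cell, volunteers in cells.items()
        if len(volunteers) >= min_agreement
    ]

tags = [
    ("ann", 2.1380, 45.1201), ("bob", 2.1381, 45.1202),
    ("cat", 2.1380, 45.1202), ("dan", 9.9999, 44.0000),  # lone tag: dropped
]
print(confirmed_shelters(tags))  # one confirmed shelter location
```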

The purpose of this initial trial run was simply to give SBTF mapsters the chance to test drive the Tomnod platform and to provide feedback both on the technology and the work flows we put together. They were asked to tag a specific type of shelter in the imagery they received via the web-based Tomnod platform.

There’s much that we would do differently in the future but that was exactly the point of the trial run. We had hoped to receive a “crash course” in satellite imagery analysis from the Satellite Sentinel Project (SSP) team but our colleagues had hardly slept in days because of some very important analysis they were doing on the Sudan. So we did the best we could on our own. We do have several satellite imagery experts on the SBTF team though, so their input throughout the process was very helpful.

Our entire work flow along with comments and feedback on the trial run is available in this open and editable Google Doc. You’ll note the pages (and pages) of comments, questions and answers. This is gold and the entire point of the trial run. We definitely welcome additional feedback on our approach from anyone with experience in satellite imagery interpretation and analysis.

The result? SBTF mapsters analyzed a whopping 3,700+ individual images and tagged more than 9,400 shelters in the green-shaded area below. Known as the “Afgooye corridor,” this area marks the road between Mogadishu and Afgooye which, due to displacement from war and famine in the past year, has become one of the largest urban areas in Somalia. [Note, all screen shots come from Tomnod].

Last year, UNHCR used “satellite imaging both to estimate how many people are living there, and to give the corridor a concrete reality. The images of the camps have led the UN’s refugee agency to estimate that the number of people living in the Afgooye Corridor is a staggering 410,000. Previous estimates, in September 2009, had put the number at 366,000” (1).

The yellow rectangles depict the 3,700+ individual images that SBTF volunteers individually analyzed for shelters. And here’s the output of three days’ worth of shelter tagging: 9,400+ tags.

Thanks to Tomnod’s CrowdRank algorithm, we were able to analyze consensus between mapsters and pull out the triangulated shelter locations. In total, we get 1,423 confirmed locations for the types of shelters described in our work flows. A first cursory glance at a handful (“random sample”) of these confirmed locations indicates they are spot on. As a next step, we could crowdsource (or SBTF-source, rather) the analysis of just these 1,423 images to triple check consensus. Incidentally, these 1,423 locations could easily be added to Google Earth or a password-protected Ushahidi map.

We’ve learned a lot during this trial run and Luke got really good feedback on how to improve their platform moving forward. The data collected should also help us provide targeted feedback to SBTF mapsters in the coming days so they can further refine their skills. On my end, I should have been a lot more specific and detailed on exactly what types of shelters qualified for tagging. As the Q&A section on the Google Doc shows, many mapsters weren’t exactly sure at first because my original guidelines were simply too vague. So moving forward, it’s clear that we’ll need a far more detailed “code book” with many more examples of the features to look for along with features that do not qualify. A colleague of mine suggested that we set up an interactive, online quiz that takes volunteers through a series of examples of what to tag and not to tag. Only when a volunteer answers all questions correctly do they move on to live tagging. I have no doubt whatsoever that this would significantly increase consensus in subsequent imagery analysis.

Please note: the analysis carried out in this trial run is not for humanitarian organizations or to improve situational awareness; it is for testing purposes only. The point was to try something new and in the process work out the kinks so that when the UN is ready to provide us with official dedicated tasks we don’t have to scramble and climb the steep learning curve there and then.

In related news, the Humanitarian Open Street Map Team (HOT) provided SBTF mapsters with an introductory course on the OSM platform this past weekend. The HOT team has been working hard since the response to Haiti to develop an OSM Tasking Server that would allow them to micro-task the tracing of satellite imagery. They demo’d the platform to me last week and I’m very excited about this new tool in the OSM ecosystem. As soon as the system is ready for prime time, I’ll get access to the backend again and will write up a blog post specifically on the Tasking Server.

Why Geo-Fencing Will Revolutionize Crisis Mapping

A “geo-fence” is a virtual perimeter for a real-world geographic area—a virtually fenced off geographic location. Geo-fences can take any shape. They can also be user-generated thanks to custom-digitized geo-fencing options. Combine geo-fencing with a dynamic alerts feature and you’ve got yourself the next evolution of live mapping technologies for crisis mapping. Indeed, the ability to customize automated geo-fenced alerts is going to revolutionize the way we use crisis maps.

Several live mapping technologies like Ushahidi already have a very basic geo-fenced alerts feature. Let’s say I am interested in the town of Dhuusamareeb in Somalia. I can point my cursor to that location, specify the radius of interest around this point and then subscribe to receive email and/or SMS updates for any new reports that get mapped within this area. If I’m particularly interested in food security, then I can instead subscribe to receive alerts only when reports tagged with the category “food security” are mapped within this area. This allows end users to define for themselves the type of granular information they need—a demand-based solution as opposed to a supply-side (spam-side?) solution.
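Under the hood, such a subscription check boils down to a great-circle distance test plus a category filter. A minimal sketch, not Ushahidi’s actual code, assuming reports and subscriptions are plain dictionaries:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def matches_subscription(report, sub):
    """Should this new report trigger an email/SMS to this subscriber?"""
    close_enough = haversine_km(report["lat"], report["lon"],
                                sub["lat"], sub["lon"]) <= sub["radius_km"]
    wanted = sub["category"] is None or sub["category"] in report["categories"]
    return close_enough and wanted

sub = {"lat": 5.5353, "lon": 46.3861, "radius_km": 20, "category": "food security"}
report = {"lat": 5.60, "lon": 46.40, "categories": ["food security", "health"]}
print(matches_subscription(report, sub))  # True -> send the alert
```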

But this feature is old news. There’s only so much use one can get from a simple subscribe-to-alerts feature. We need to bring more spatial and temporal geo-fencing solutions to crisis mapping. For example, I should be able to geo-fence different parts of a refugee camp and customize the automated alerts feature such that a 10% increase over a 24-hour period in the number of reports tagged with certain categories and geo-located within specified geo-fences sends an email and/or SMS to the appropriate teams in charge.

The variables that ought to be customizable by the user include the individual geo-fences, the time period over which a percentage change threshold is specified (e.g., 1 hour, 4 hours, 24 hours, 2 days, 1 week, etc.), the actual percentage change (e.g., 5%, 20%, 80%, etc.) and the type of categories (e.g., food security, health access, etc.). In addition to percentages, users should be able to specify basic report counts, e.g., notify me when at least 20 reports have been mapped within this section of the refugee camp.
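Here is a sketch of how those user-set variables might fit together in a percentage-change trigger. The fence is passed in as a point-in-fence test, and every name and default below is an assumption for illustration:

```python
from datetime import datetime, timedelta

def threshold_breached(reports, fence_contains, categories,
                       window=timedelta(hours=24), pct_change=10.0,
                       now=None):
    """
    reports: dicts with 'time' (datetime), 'lat', 'lon', 'categories'.
    fence_contains: callable(lat, lon) -> bool for the user-drawn fence.
    Fires when matching reports in the current window exceed the
    previous window's count by at least pct_change percent.
    """
    now = now or datetime.utcnow()

    def count(start, end):
        return sum(1 for r in reports
                   if start <= r["time"] < end
                   and fence_contains(r["lat"], r["lon"])
                   and categories & set(r["categories"]))

    current = count(now - window, now)
    previous = count(now - 2 * window, now - window)
    if previous == 0:
        return current > 0  # any activity where there was none before
    return 100.0 * (current - previous) / previous >= pct_change

# e.g. alert the camp health team on a 10% rise in "health" reports
# in sector B over 24 hours:
#   if threshold_breached(reports, sector_b_contains, {"health"}):
#       send_alerts()  # hypothetical email/SMS fan-out
```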

This kind of automated, customized geo-fencing threshold alerts feature has many, many applications, from public health, to conflict early warning, to disaster response, etc. Combining this type of geo-fenced alerts feature with check-in’s will make crisis mapping a lot more compelling for decision support. One could customize specific actions depending on where/when check-in’s take place and any additional content (eg., status update) included in the check-in. See my previous blog post on “Check-In’s with a Purpose: Applications for Disaster Response.”

These actions could include emails, SMS, twitter alerts, etc., to specific individuals with pre-determined content based on the parameters of a check-in. Actions could also be “physical” actions, not just information communication. For example, a certain type of customized geo-fenced alert, depending on where/when it happens, could execute a program that turns on a water pump, changes the temperature of a fridge storing vaccines, etc.
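In code, this amounts to a dispatch table from (fence, alert type) pairs to handler functions. A toy sketch; the pump and SMS handlers are hypothetical stand-ins for real device and messaging APIs:

```python
def notify_team(alert):
    print(f"SMS to water team: {alert['message']}")  # stand-in for an SMS gateway

def start_water_pump(alert):
    print(f"Starting pump at {alert['location']}")   # stand-in for a device API call

# Map (fence, alert type) pairs to one or more actions.
ACTIONS = {
    ("camp_sector_b", "water_shortage"): [notify_team, start_water_pump],
    ("camp_sector_b", "health_spike"):   [notify_team],
}

def dispatch(alert):
    for action in ACTIONS.get((alert["fence"], alert["type"]), []):
        action(alert)

dispatch({"fence": "camp_sector_b", "type": "water_shortage",
          "location": "well 3", "message": "check-ins report dry taps"})
```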

We need to make live mapping technologies more relevant for real-time decision support and I believe that geo-fencing is an important step to that effect.

p.s. Naturally, GIGO (garbage-in, garbage-out) still applies. That is, if the data is of poor quality or nonexistent, adding automated geo-fencing alerts is not going to improve decision-making. But that’s the case for any data processing feature. So my colleague Brian Herbert and I are in the process of adding geo-fenced alerts features to the Ushahidi platform.

Analyzing Satellite Imagery of the Somali Crisis Using Crowdsourcing

Update: results of satellite imagery analysis available here.

You gotta love Twitter. Just two hours after I tweeted the above—in reference to this project—a colleague of mine from the UN who just got back from the Horn of Africa called me up: “Saw your tweet, what’s going on?” The last thing I wanted to do was talk about the über frustrating day I’d just had. So he said, “Hey, listen, I’ve got an idea.” He reminded me of this blog post I had written a year ago on “Crowdsourcing the Analysis of Satellite Imagery for Disaster Response” and said, “Why not try this for Somalia? We could definitely use that kind of information.” I quickly forgot about my frustrating day.

Here’s the plan. He talks to UNOSAT and Google about acquiring high-resolution satellite imagery for those geographic areas they need more information on. A colleague of mine in San Diego just launched his own company to develop mechanical turk & micro-tasking solutions for disaster response. He takes this satellite imagery and cuts it into, say, 50×50 kilometer square images for micro-tasking purposes.

We then develop a web-based interface where volunteers from the Standby Volunteer Task Force (SBTF) sign in and get one high resolution 50×50 km image displayed to them at a time. For each image, they answer the question: “Are there any human shelters discernible in this picture? [Yes/No]. If yes, what would you approximate the population of that shelter to be? [1-20; 21-50; 50-100; 100+].” Additional questions could be added. Note that we’d provide them with guidelines on how to identify human shelters and estimate population figures.

No shelters discernible in this image

Each 50×50 image would get rated by at least 3 volunteers for data triangulation and quality assurance purposes. That is, if 3 volunteers each tag an image as depicting a shelter (or more than one shelter) and each of the 3 volunteers approximate the same population range, then that image would get automatically pushed to an Ushahidi map, automatically turned into a geo-tagged incident report and automatically categorized by the population estimate. One could then filter by population range on the Ushahidi map and click on those reports to see the actual image.
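Since this triangulation rule is mechanical, it is easy to sketch. A toy version below; the tile ID, field names and the final push to Ushahidi are illustrative assumptions, not an actual Ushahidi API:

```python
POPULATION_BINS = ["1-20", "21-50", "50-100", "100+"]

def triangulate(image_id, ratings):
    """
    ratings: list of (volunteer, has_shelter, population_bin) for one
    50x50 image. Returns an auto-report dict if all three volunteers
    agree, else None (the image goes back to the queue or to an expert).
    """
    if len(ratings) < 3:
        return None
    shelter_votes = {r[1] for r in ratings}
    bin_votes = {r[2] for r in ratings}
    if shelter_votes == {True} and len(bin_votes) == 1:
        return {
            "title": f"Shelter(s) spotted in tile {image_id}",
            "category": f"population {bin_votes.pop()}",
            # the tile centroid's lat/lon would be attached here, then the
            # report pushed to the (password-protected) Ushahidi map
        }
    return None

ratings = [("ann", True, "21-50"), ("bob", True, "21-50"), ("cat", True, "21-50")]
print(triangulate("tile_0042", ratings))  # consensus -> auto-generated report
```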

If satellite imagery licensing is an issue, then said images need not be pushed to the Ushahidi map. Only the report including the location of where a shelter has been spotted would be mapped along with the associated population estimate. The satellite imagery would never be released in full, only small bits and pieces of that imagery would be shared with a trusted network of SBTF volunteers. In other words, the 50×50 images could not be reconstituted and patched together because volunteers would not get contiguous 50×50 images. Moreover, volunteers would sign a code of conduct whereby they pledge not to share any of the imagery with anyone else. Because we track which volunteers see which 50×50 images, we could easily trace any leaked 50×50 image back to the volunteer responsible.
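The leak-tracing and non-contiguity safeguards amount to simple bookkeeping: log every tile shown to every volunteer, and never hand one volunteer two adjacent tiles. A minimal sketch, assuming tiles are indexed by (row, col) grid positions:

```python
import random
from collections import defaultdict

assignment_log = defaultdict(set)  # volunteer -> set of (row, col) tiles seen

def adjacent(a, b):
    """True if two tiles touch (or are the same tile)."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1])) <= 1

def assign_tile(volunteer, available_tiles):
    """Pick a tile not adjacent to anything this volunteer has seen,
    so leaked tiles can never be stitched back together."""
    seen = assignment_log[volunteer]
    candidates = [t for t in available_tiles
                  if all(not adjacent(t, s) for s in seen)]
    if not candidates:
        return None
    tile = random.choice(candidates)
    assignment_log[volunteer].add(tile)  # audit trail for leak tracing
    return tile

tiles = [(r, c) for r in range(4) for c in range(4)]
print(assign_tile("ann", tiles), assign_tile("ann", tiles))  # never adjacent
```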

Note that for security reasons, we could make the Ushahidi map password protected and have a public version of the map with very limited spatial resolution so that the location of individual shelters would not be discernible.

I’d love to get feedback on this idea from iRevolution readers, so if you have thoughts (including constructive criticisms), please do share in the comments section below.

On Technology and Building Resilient Societies to Mitigate the Impact of Disasters

I recently caught up with a colleague at the World Bank and learned that “resilience” is set to be the new “buzz word” in the international development community. I think this is very good news. Yes, discourse does matter. A single word can alter the way we frame problems and lead to new conceptual frameworks that inform the design and implementation of development projects and disaster risk reduction strategies.

The term resilience is important because it focuses not on us, the development and disaster community, but rather on local at-risk communities. The terms “vulnerability” and “fragility” were used in past discourse but they focus on the negative and seem to invoke the need for external protection, overlooking the possibility that local coping mechanisms do exist. From the perspective of this top-down approach, international organizations are the rescuers and aid does not arrive until they arrive.

Resilience, in contrast, implies radical self-sufficiency, and self-sufficiency suggests a degree of autonomy; self-dependence rather than dependence on an external entity that may or may not arrive, that may or may not be effective, and that may or may not stay the course. In the field of ecology, the term resilience is defined as “the capacity of an ecosystem to respond to a perturbation or disturbance by resisting damage and recovering quickly.” There are thus at least two ways for “social ecosystems” to be resilient:

  1. Resist damage by absorbing and dampening the perturbation.
  2. Recover quickly by bouncing back.

So how does a society resist damage from a disaster? As noted in an earlier blog post, “Disaster Theory for Techies”, there is no such thing as a “natural disaster”. There are natural hazards and there are social systems. If social systems are not sufficiently resilient to absorb the impact of a natural hazard such as an earthquake, then disaster unfolds. In other words, hazards are exogenous while disasters are the result of endogenous political, economic, social and cultural processes. Indeed, “it is generally accepted among environmental geographers that there is no such thing as a natural disaster. In every phase and aspect of a disaster—causes, vulnerability, preparedness, results and response, and reconstruction—the contours of disaster and the difference between who lives and dies is to a greater or lesser extent a social calculus” (Smith 2006).

So how do we take this understanding of disasters and apply it to building more resilient communities? Focusing on people-centered early warning systems is one way to do this. In 2006, the UN’s International Strategy for Disaster Reduction (ISDR) recognized that top-down early warning systems for disaster response were increasingly ineffective. They therefore called for a more bottom-up approach in the form of people-centered early warning systems. The UN ISDR’s Global Survey of Early Warning Systems (PDF) defines the purpose of people-centered early warning systems as follows:

“… to empower individuals and communities threatened by hazards to act in sufficient time and in an appropriate manner so as to reduce the possibility of personal injury, loss of life, damage to property and the environment, and loss of livelihoods.”

Information plays a central role here. Acting in sufficient time requires having timely information about (1) the hazard(s) and (2) how to respond. As some scholars have argued, a disaster is first of all “a crisis in communicating within a community—that is, a difficulty for someone to get informed and to inform other people” (Gilbert 1998). Improving ways for local communities to communicate internally is thus an important part of building more resilient societies. This is where information and communication technologies (ICTs) play an important role. Free and open source software like Ushahidi can also be used (the subject of a future blog post).

Open data is equally important. Local communities need to access data that will enable them to make more effective decisions on how to best minimize the impact of certain hazards on their livelihoods. This means accessing both internal community data in real time (the previous paragraph) and data external to the community that bears relevance to the decision-making calculus at the local level. This is why I’m particularly interested in the Open Data for Resilience Initiative (OpenDRI) spearheaded by the World Bank’s Global Facility for Disaster Reduction and Recovery (GFDRR). Institutionalizing OpenDRI at the state level will no doubt be a challenge in and of itself, but I do hope the initiative will also be localized using a people-centered approach like the one described above.

The second way to grow more resilient societies is by enabling them to recover quickly following a disaster. As Manyena wrote in 2006, “increasing attention is now paid to the capacity of disaster-affected communities to ‘bounce back’ or to recover with little or no external assistance following a disaster.” So what factors accelerate recovery in ecosystems in general? “To recover itself, a forest ecosystem needs suitable interactions among climate conditions and bio-actions, and enough area.” In terms of social ecosystems, these interactions can take the form of information exchange.

Identifying needs following a disaster and matching them to available resources is an important part of the process. Accelerating the rate of (1) identification; (2) matching and, (3) allocation, is one way to speed up overall recovery. In ecological terms, how quickly the damaged part of an ecosystem can repair itself depends on how many feedback loops (network connections) it has to the non- (or less-) damaged parts of the ecosystem(s). Some call this an adaptive system. This is where crowdfeeding comes in, as I’ve blogged about here (The Crowd is Always There: A Marketplace for Crowdsourcing Crisis Response) and here (Why Crowdsourcing and Crowdfeeding May be the Answer to Crisis Response).
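The Match.com analogy can be made concrete as a matching routine over crowdsourced needs and crowdsourced offers. A greedy toy sketch with made-up fields, nothing more:

```python
from math import cos, radians, hypot

def km_apart(a, b):
    """Crude equirectangular distance; fine at neighborhood scale."""
    dlat = (a["lat"] - b["lat"]) * 111.0
    dlon = (a["lon"] - b["lon"]) * 111.0 * cos(radians(a["lat"]))
    return hypot(dlat, dlon)

def match_needs_to_offers(needs, offers, max_km=10.0):
    """Greedily pair each need with the first unclaimed nearby offer in
    the same category: crowdsourced problems meet crowdsourced solutions."""
    matches, taken = [], set()
    for need in needs:
        for i, offer in enumerate(offers):
            if i in taken or offer["category"] != need["category"]:
                continue
            if km_apart(need, offer) <= max_km:
                matches.append((need["id"], offer["id"]))
                taken.add(i)
                break
    return matches

needs = [{"id": "n1", "category": "water", "lat": 2.05, "lon": 45.30}]
offers = [{"id": "o1", "category": "water", "lat": 2.06, "lon": 45.31}]
print(match_needs_to_offers(needs, offers))  # [('n1', 'o1')]
```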

Internal connectivity and communication is important for crowdfeeding to work, as is preparedness. This is why ICTs are central to growing more resilient societies. They can accelerate the identification of needs, matching and allocation of resources. Free and open source platforms like Ushahidi can also play a role in this respect, as per my recent blog post entitled “Check-In’s With a Purpose: Applications for Disaster Response.” But without sufficient focus on disaster preparedness, these technologies are more likely to facilitate spontaneous response rather than a planned and thus efficient response. As Louis Pasteur famously noted, “Chance favors the prepared mind.” Hence the rationale for the Standby Volunteer Task Force for Live Mapping (SBTF), for example. Open data is also important in this respect. The OpenDRI initiative is thus important for both damage resistance and quick recovery.

I’m enjoying the process of thinking through these issues again. It’s been a while since I published and presented on the topic of resilience and adaptation. So I plan to read through some of my papers from a while back that addressed these issues in the context of violent conflict and climate change. What I need to do is update them based on what I’ve learned over the past four or five years.

If you’re curious and feel like jumping into some of these papers yourself, I recommend these two as a start:

  • Meier, Patrick. 2007. “New Strategies for Effective Early Response: Insights from Complexity Science.” Paper prepared for the 48th Annual Convention of the International Studies Association (ISA) in Chicago. Available online.
  • Meier, Patrick. 2007. “Networking Disaster and Conflict Early Warning Systems.” Paper prepared for the 48th Annual Convention of the International Studies Association (ISA) in Chicago. Available online.

More papers are available on my Publications page. This earlier blog post on “Failing Gracefully in Complex Systems: A Note on Resilience” may also be of interest to some readers.


Crisis Mapping Somalia with the Diaspora

The state of Minnesota is home to the largest population of Somalis in North America. Like any Diaspora, the estimated 25,000 Somalis who live there are closely linked to family members back home. They make thousands of phone calls every week to numerous different locations across Somalia. So why not make the Somali Diaspora a key partner in the humanitarian response taking place half-way across the world?

In Haiti, Mission 4636 was launched to crowdsource micro needs assessments from the disaster affected population via SMS. The project could not have happened without hundreds of volunteers from the Haitian Diaspora who translated and geo-referenced the incoming text messages. There’s no doubt that Diasporas can play a pivotal role in humanitarian response but they are typically ignored by large humanitarian organizations. This is why I’m excited to be part of an initiative that plans to partner with key members of the Diaspora to create a live crisis map of Somalia.

This is a mock-up for illustration only

The project is still in very early stages so there’s not much to show right now but I’m hopeful that the stars will align next week so we can formally launch the initiative. The basic game plan is as follows:

  • A short survey of some 10 questions is being drafted by public health professionals with experience in humanitarian response. These questions will try to capture the most essential indicators. More questions will be added at a later stage.
  • Humanitarian colleagues who have been working with the Somali Diaspora in Minnesota for years are now in the process of recruiting trusted members of the community.
  • These trusted members of the Diaspora will participate in a training this weekend on basic survey and interview methods. The training will also provide them with a hands-on introduction to the Ushahidi platform where they’ll enter the survey results.
  • If everything goes well, these community members will each make several phone calls to friends and relatives back home next week. They’ll ask the questions from the survey and add the answers to the Ushahidi map. Elders in the community will fill out a paper-based form for other colleagues to enter online.
  • Trusted members of the Diaspora will continue to survey contacts back home on a weekly basis. New survey questions are likely to be added based on feedback from other humanitarian organizations. Surveys may also be carried out every other day or even on a daily basis for some of the questions.

If the pilot is successful, then colleagues in Minnesota may recruit additional trusted members of the community to participate in this live crisis mapping effort. There’s a lot more to the project including several subsequent phases but we’re still at the early stages so who knows where this will go. But yes, we’re thinking through the security implications, verification issues, data visualization features, necessary analytics, etc. If all goes well, there’ll be a lot more information to share next week in which case I’ll add more info here and also post an update on the Ushahidi blog.

Google+ for Crowdsourcing Crisis Information, Crisis Mapping and Disaster Response

Facebook is increasingly used to crowdsource crisis information and response, as is Twitter. So is it just a matter of time until we see similar use cases with Google+? Another question I have is whether such use cases will simply reflect more of the same or whether we’ll see new, unexpected applications and dynamics? Of course, it may be premature to entertain the role that Google+ might play in disaster response just days after its private beta launch, but the company seems fully committed to making this new venture succeed. Entertaining how Google+ (G+) might be used as a humanitarian technology thus seems worthwhile.

The fact that G+ is open and searchable is probably one of the starkest differences with the walled-garden that is Facebook; that, and their Data Liberation policy. This will make activity on G+ relatively easier to find—Google is the King of Search, after all. This openness will render serendipity and synergies more likely.

The much talked about “Circles” feature is also very appealing for the kind of organic and collaborative crowdsourcing work that we see emerging following a crisis. Think about these “Circles” not only as networks but also as “honeycombs” for “flash” projects—i.e., short-term and temporary—very much along the lines that Skype is used for live collaborative crisis mapping operations.

Google+’s new Hangout feature could also be used instead of Skype chat and video, with the advantage of having multi-person video-conferencing. With a little more work, the Sparks feature could facilitate media monitoring—an important component of live crisis mapping. And then there’s Google+ mobile, which is accessible on most phones with a browser and already includes a “check-in” feature as well as geo-referenced status updates. The native app for the Android is already available and the iPhone app is coming soon.

Clicking on my status update above produces the Google Maps page below. What’s particularly telling about this is how “underwhelming” the use of Google Maps currently is within G+. There’s no doubt this will change dramatically as G+ evolves. The Google+ team has noted that they already have dozens of new features ready to be rolled out in the coming months. So expect G+ to make full use of Google’s formidable presence on the Geo Web—think MapMaker+ and Earth Engine+. This could be a big plus for live crowdsourced crisis mapping, especially of the multimedia kind.

One stark difference with Facebook’s status updates and check-in’s is that G+ allows you to decide which Circles (or networks of contacts) to share your updates and check-in’s with. This is an important difference that could allow for more efficient information sharing in near real-time. You could set up your Circles as different teams, perhaps even along UN Cluster lines.

As the G+ mobile website reveals, the team will also be integrating SMS, which is definitely key for crisis response. I imagine there will also be a way to connect your Twitter feed with Google+ in the near future. This will make G+ even more compelling as a mobile humanitarian technology platform. In addition, I expect there are also plans to integrate Google News, Google Reader, Google Groups, Google Docs and Google Translate with G+. GMail, YouTube and Picasa are already integrated.

One feature that will be important for humanitarian applications is offline functionality. Google Reader and GMail already have this feature (Google Gears), which I imagine could be added to G+’s Stream and perhaps eventually with Google Maps? In addition, if Google can provide customizable uses of G+, then this could also make the new platform more compelling for humanitarian organizations, e.g., if OCHA could have their own G+ (“iG+”) by customizing and branding their G+ interface; much like the flexibility afforded by the Ning platform. One first step in that direction might be to offer a range of “themes” for G+, just like Google does with GMail.

Finally, the ability to develop third party apps for G+ could be a big win. Think of a G+ store (in contrast to an App Store). I’d love to see a G+ app for Ushahidi and OSM, for example.

If successful, G+ could be the best example of “What Technology Wants” to date. G+ is convergence technology par excellence. It is a hub that connects many of Google’s excellent products and from the looks of it, the G+ team is just getting warmed up with the converging.

I’d love to hear from others who are also brainstorming about possible applications of Google+ in the humanitarian space. Am I off on any of the ideas above? What am I missing? Maybe we could set up a Google+ 4 Disaster Response Circle and get on Hangout to brainstorm together?

A List of Completely Wrong Assumptions About Technology Use in Emerging Economies

I’ve spent the past week at the iLab in Liberia and got what I came for: an updated reality check on the limitations of technology adoption in developing countries. Below are some of the assumptions that I took for granted. They’re perfectly obvious in hindsight and I’m annoyed at myself for not having realized their obviousness sooner. I’d be very interested in hearing from others about these and reading their lists. This need not be limited to one particular sector like ICT for Development (ICT4D) or Mobile Health (mHealth). Many of these assumptions have repercussions across multiple disciplines.

The following examples come from conversations with my colleague Kate Cummings who directs Ushahidi Liberia and the iLab here in Monrovia. She and her truly outstanding team—Kpetermeni Siakor, Carter Draper, Luther Jeke and Anthony Kamah—spearheaded a number of excellent training workshops over the past few days. At one point we began discussing the reasons for the limited use of SMS in Liberia. There are the usual and obvious reasons. But the one hurdle I had not expected to hear was Nokia’s predictive text functionality. This feature is incredibly helpful since the mobile phone basically guesses which words you’re trying to write so you don’t have to type every single letter.

But as soon as she pointed out how confusing this can be, I immediately understood what she meant. If I had never seen or been warned about this feature before, I’d honestly think the phone was broken. It would really be impossible to type with. I’d get frustrated and give up (the tiny screen further adds to the frustration). And if I was new to mobile phones, it wouldn’t be obvious how to switch that feature off either. (There are several tutorials online on how to use the predictive text feature and how to turn it off, which clearly proves they’re not intuitive).

In one of the training workshops we just had, I was explaining what Walking Papers was about and how it might be useful in Liberia. So I showed the example below and continued talking. But Kate jumped in and asked participants: “What do you see in this picture? Do you see the trees, the little roads?” She pointed at the features as she described the individual shapes. This is when it dawned on me that there is absolutely nothing inherently intuitive about satellite images. Most people on this planet have not been on an airplane or a tall building. So why would a bird’s eye view of their village be anything remotely recognizable? I really kicked myself on that one. So I’ll write it again: there is nothing intuitive about satellite imagery. Nor is there anything intuitive about GPS and the existence of a latitude and longitude coordinate system.

Kate went on to explain that this kind of picture is what you would see if you were flying high like a bird. That was the way I should have introduced the image but I had taken it completely for granted that satellite imagery was self-explanatory when it simply isn’t. In further conversations with Kate, she explained that they too had made that assumption early on when trying to introduce the ins and outs of the Ushahidi platform. They quickly realized that they had to rethink their approach and decided to provide introductory courses on Google Maps instead.

More wrong assumptions revealed themselves during the workshops. For example, the “+” and “-” markers on Google Maps are not intuitive either, nor is the concept of zooming in and out. How are you supposed to understand that pressing these buttons still shows the same map but at a different scale and not an entirely different picture instead? Again, when I took a moment to think about this, I realized how completely confusing that could be. And again I kicked myself. But contrast this to an entirely different setting, San Francisco, where some friends recently told me how their five year old went up to a framed picture in their living room and started pinching at it with his fingers, the exact same gestures one would use on an iPhone to zoom in and out of a picture. “Broken, broken” is all the five year old said after that disappointing experience.

The final example actually comes from Haiti where my colleague Chrissy Martin is one of the main drivers behind the Digicel Group’s mobile banking efforts in the country. There were of course a number of expected challenges on the road to launching Haiti’s first successful mobile banking service, TchoTcho Mobile. The hurdle that I had not expected, however, had to do with the pin code. To use the service, you would enter your own personal pin number on your mobile phone in order to access your account. Seems perfectly straightforward. But it really isn’t.

The concept of a pin number is one that many of us take completely for granted. But the idea is often foreign to many would-be users of mobile banking services and not just in Haiti. Think about it: all one has to do to access all my money is to simply enter four numbers on my phone. That does genuinely sound crazy to me at a certain level. Granted, if you guess the pin wrong three times, the phone gets blocked and you have to call TchoTcho’s customer service. But still, I can understand the initial hesitation that many users had. When I asked Chrissy how they overcame the hurdle, her answer was simply this: training. It takes time for users to begin trusting a completely new technology.

So those are some of the assumptions I’ve gotten wrong. I’d be grateful if readers could share theirs as there must be plenty of other assumptions I’m making which don’t fit reality. Incidentally, I realize that emerging economies vary widely in technology diffusion and adoption—not to mention sub-nationally as well. This is why having the iLab in Liberia is so important. Identifying which assumptions are wrong in more challenging environments is really important if our goal is to use technology to help contribute meaningfully to a community’s empowerment, development and independence.

Seeking the Trustworthy Tweet: Can “Tweetsourcing” Ever Fit the Needs of Humanitarian Organizations?

Can microblogged data fit the information needs of humanitarian organizations? This is the question asked by a group of academics at Pennsylvania State University’s College of Information Sciences and Technology. Their study (PDF) is an important contribution to the discourse on humanitarian technology and crisis information. The applied research provides key insights based on a series of interviews with humanitarian professionals. While I largely agree with the majority of the arguments presented in this study, I do have questions regarding the framing of the problem and some of the assertions made.

The authors note that “despite the evidence of strong value to those experiencing the disaster and those seeking information concerning the disaster, there has been very little uptake of message data by large-scale, international humanitarian relief organizations.” This is because real-time message data is “deemed as unverifiable and untrustworthy, and it has not been incorporated into established mechanisms for organizational decision-making.” To this end, “committing to the mobilization of valuable and time sensitive relief supplies and personnel, based on what may turn out be illegitimate claims, has been perceived to be too great a risk.” Thus far, the authors argue, “no mechanisms have been fashioned for harvesting microblogged data from the public in a manner, which facilitates organizational decisions.”

I don’t think this latter assertion is entirely true if one looks at the use of Twitter by the private sector. Take for example the services offered by Crimson Hexagon, which I blogged about 3 years ago. This successful start-up launched by Gary King out of Harvard University provides companies with real-time sentiment analysis of brand perceptions in the Twittersphere precisely to help inform their decision making. Another example is Storyful, which harvests data from authenticated Twitter users to provide highly curated, real-time information via microblogging. Given that the humanitarian community lags behind in the use and adoption of new technologies, it behooves us to look at those sectors that are ahead of the curve to better understand the opportunities that do exist.

Since the study principally focused on Twitter, I’m surprised that the authors did not reference the empirical study that came out last year on the behavior of Twitter users after the 8.8 magnitude earthquake in Chile. The study shows that about 95% of tweets related to confirmed reports validated that information. In contrast, only 0.03% of tweets denied the validity of these true cases. Interestingly, the results also show that “the number of tweets that deny information becomes much larger when the information corresponds to a false rumor.” In fact, about 50% of tweets will deny the validity of false reports. This means it may very well be possible to detect rumors by using aggregate analysis on tweets.
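That finding suggests a simple aggregate signal: the share of tweets about a claim that dispute it. A sketch, with the deny classifier stubbed out as keyword matching; a real system would need proper natural language processing:

```python
DENY_MARKERS = ("false", "fake", "rumor", "not true", "hoax", "debunked")

def deny_ratio(tweets_about_claim):
    """Fraction of tweets about a claim that dispute it."""
    denies = sum(1 for t in tweets_about_claim
                 if any(m in t.lower() for m in DENY_MARKERS))
    return denies / len(tweets_about_claim)

def looks_like_rumor(tweets_about_claim, threshold=0.25):
    # Per the Chile study: ~0.03% denials for confirmed reports vs ~50%
    # for false rumors, so even a rough threshold separates the two.
    return deny_ratio(tweets_about_claim) >= threshold

tweets = ["Bridge on Route 5 collapsed!", "this is false, just drove over it",
          "RT: bridge collapse is a hoax", "bridge fine, rumor debunked"]
print(looks_like_rumor(tweets))  # True: 3 of 4 tweets deny the claim
```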

On framing, I believe the focus on microblogging and Twitter in particular misses the bigger picture which ultimately is about the methodology of crowdsourcing rather than the technology. To be sure, the study by Penn State could just as well have been titled “Seeking the Trustworthy SMS.” I think this important research on microblogging would be stronger if this distinction were made and the resulting analysis tied more closely to the ongoing debate on crowdsourcing crisis information that began during the response to Haiti’s earthquake in 2010.

Also, as was noted during the Red Cross Summit in 2010, more than two-thirds of respondents to a survey noted that they would expect a response within an hour if they posted a need for help on a social media platform (and not just Twitter) during a crisis. So whether humanitarian organizations like it or not, crowdsourced social media information cannot be ignored.

The authors carried out a series of insightful interviews with about a dozen international humanitarian organizations to try and better understand the hesitation around the use of Twitter for humanitarian response. As noted earlier, however, it is not Twitter per se that is a concern but the underlying methodology of crowdsourcing.

As expected, interviewees noted that they prioritize the veracity of information over the speed of communication. “I don’t think speed is necessarily the number one tool that an emergency operator needs to use.” Another interviewee opined that “It might be hard to trust the data. I mean, I don’t think you can make major decisions based on a couple of tweets, on one or two tweets.” What’s interesting about this latter comment is that it implies that only one channel of information, Twitter, is to be used in decision-making, which is a false argument and one that nobody I know has ever made.

Either way, the trade-off between speed and accuracy is a well-known one. As mentioned in this blog post from 2009, information is perishable and accuracy is often a luxury in the first few hours and days following a major disaster. As the authors of the study rightly note, “uncertainty is ‘always expected, if sometimes crippling’ (Benini, 1997) for NGOs involved in humanitarian relief.” Ultimately, the question posed by the authors of the Penn State study can be boiled down to this: is some information better than no information if it cannot be immediately verified? In my opinion, yes. If you have some information, then at least you can investigate its veracity, which may lead to action. Even from a purely philosophical point of view, I believe the answer would still be yes.

Based on the interviews, the authors found that organizations engaged in immediate emergency response were less likely to make use of Twitter (or crowdsourced information) as a channel for information. As one interviewee put it, “Lives are on the line. Every moment counts. We have it down to a science. We know what information we need and we get in and get it…” In contrast, those organizations engaged in subsequent phases of disaster response were thought more likely to make use of crowdsourced data.

I’m not entirely convinced by this: “We know what information we need and we get in and get it…”. Yes, humanitarian organizations typically know, but whether they get it, and in time, is certainly not a given. Just look at the humanitarian responses to Haiti and Libya, for example. Organizations may very well be “unwilling to trade data assurance, veracity and authenticity for speed,” but sometimes this mindset will mean having absolutely no information. This is why OCHA asked the Standby Volunteer Task Force to provide them with a live crowdsourced social media map of Libya. In Haiti, while the UN is not thought to have used crowdsourced SMS data from Mission 4636, other responders like the Marine Corps did.

Still, according to one interviewee, “fast is good, but bad information fast can kill people. It’s got to be good, and maybe fast too.” This assumes that no information doesn’t kill people. Good information that arrives late can also kill people. As one of the interviewees admitted when using traditional methods, “it can be quite slow before all that [information] trickles through all the layers to get to us.” The authors of the study also noted that “Many [interviewees] were frustrated with how slow the traditional methods of gathering post-disaster data had remained despite the growing ubiquity of smart phones and high quality connectivity and power worldwide.”

On a side note, I found the following comment during the interviews especially revealing: “When we do needs assessments, we drive around and we look with our eyes and we talk to people and we assess what’s on the ground and that’s how we make our evaluations.” One of the common criticisms leveled against the use of crowdsourced information is that it isn’t representative. But then again, driving around, checking things out and chatting with people is hardly going to yield a representative sample either.

One of the main findings from this research has to do with a problem in attitude on the part of humanitarian organizations. “Each of the interviewees stated that their organization did not have the organizational will to try out new technologies. Most expressed this as a lack of resources, support, leadership and interest to adopt new technologies.” As one interviewee noted, “We tried to get the president and CEO both to use Twitter. We failed abysmally, so they’re not– they almost never use it.” Interestingly, “most of the respondents admitted that many of their technological changes were motivated by the demands of their donors. At this point in time their donors have not demanded that these organizations make use of microblogged data. The subjects believed they would need to wait until this occurred for real change to begin.”

For me the lack of will has less to do with available resources and limited capacity and far more to do with a generational gap. When today’s young professionals in the humanitarian space work their way up to more executive positions, we’ll see a significant change in attitude within these organizations. I’m thinking in particular of the many dozens of core volunteers who played a pivotal role in the crisis mapping operations in Haiti, Chile, Pakistan, Russia and most recently Libya. And when attitude changes, resources can be reallocated and new priorities can be rationalized.

What’s interesting about these interviews is that despite all the concerns and criticisms of crowdsourced Twitter data, all interviewees still see microblogged data as a “vast trove of potentially useful information concerning a disaster zone.” One of the professionals interviewed said, “Yes! Yes! Because that would – again, it would tell us what resources are already in the ground, what resources are still needed, who has the right staff, what we could provide. I mean, it would just – it would give you so much more real-time data, so that as we’re putting our plans together we can react based on what is already known as opposed to getting there and discovering, oh, they don’t really need medical supplies. What they really need is construction supplies or whatever.”

Another professional stated that, “Twitter data could potentially be used the same way… for crisis mapping. When an emergency happens there are so many things going on in the ground, and an emergency response is simply prioritization, taking care of the most important things first and knowing what those are. The difficult thing is that things change so quickly. So being able to gather information quickly…. <with Twitter> There’s enormous power.”

The authors propose three possible future directions. The first is bounded microblogging, which I have long referred to as “bounded crowdsourcing.” It doesn’t make sense to focus on the technology instead of the methodology because at the heart of the issue are the methods for information collection. In “bounded crowdsourcing,” membership is “controlled to only those vetted by a particular organization or community.” This is the approach taken by Storyful, for example. One interviewee acknowledged that “Twitter might be useful right after a disaster, but only if the person doing the Tweeting was from <NGO name removed>, you know, our own people. I guess if our own people were sending us back Tweets about the situation it could help.”

Bounded crowdsourcing overcomes the challenge of authentication and verification but obviously with a tradeoff in the volume of data collected “if an additional means were not created to enable new members through an automatic authentication system, to the bounded microblogging community.” However, the authors feel that bounded crowdsourcing environments “undermine the value of the system” since “the power of the medium lies in the fact that people, out of their own volition, make localized observations and that organizations could harness that multitude of data. The bounded environment argument neutralizes that, so in effect, at that point, when you have a group of people vetted to join a trusted circle, the data does not scale, because that pool by necessity would be small.”

That said, I believe the authors are spot on when they write that “Bounded environments might be a way of introducing Twitter into the humanitarian centric organizational discourse, as a starting point, because these organizations, as seen from the evidence presented above, are not likely to initially embrace the medium. Bounded environments could hence demonstrate the potential for Twitter to move beyond the PR and Communications departments.”

The second possible future direction is to treat crowdsourced data as ambient, “contextual information rather than instrumental information, (i.e., factual in nature).” This grassroots information could be considered as an “add-on to traditional, trusted institutional lines of information gathering.” As one interviewee noted, “Usually information exists. The question is the context doesn’t exist…. that’s really what I see as the biggest value [of crowdsourced information] and why would you use that in the future is creating the context…”.

The authors rightly suggest that “adding contextual information through microblogged data may alleviate some of the uncertainty during the time of disaster. Since the microblogged data would not be the single data source upon which decisions would be made, the standards for authentication and security could be less stringent. This solution would offer the organization rich contextual data, while reducing the need for absolute data authentication, reducing the need for the organization to structurally change, and reducing the need for significant resources.” This is exactly how I consider and treat crowdsourced data.

The third and final forward-looking solution is computational. The authors “believe better computational models will eventually deduce informational snippets with acceptable levels of trust.” They refer to Ushahidi’s SwiftRiver project as an example.

In sum, this study is an important contribution to the discourse. The challenges around using crowdsourced crisis information are well known. If I come across as optimistic, it is for two reasons. First, I do think a lot can be done to address the challenges. Second, I do believe that attitudes in the humanitarian sector will continue to change.