Category Archives: Crisis Mapping

How the UN Used Social Media in Response to Typhoon Pablo (Updated)

Our mission as digital humanitarians was to deliver a detailed dataset of pictures and videos (posted on Twitter) depicting damage and flooding following the typhoon. An overview of this digital response is available here. The task of our United Nations colleagues at the Office for the Coordination of Humanitarian Affairs (OCHA) was to rapidly consolidate and analyze our data to compile a customized Situation Report for OCHA’s team in the Philippines. The maps, charts and figures below are taken from this official report (click to enlarge).

[Map: Typhoon Pablo social media mapping, OCHA, 6 Dec 2012]

This map is the first ever official UN crisis map entirely based on data collected from social media. Note the “Map data sources” at the bottom left of the map: “The Digital Humanitarian Network’s Solution Team: Standby Volunteer Task Force (SBTF) and Humanity Road (HR).” In addition to several UN agencies, the government of the Philippines has also made use of this information.



The cleaned data was subsequently added to this Google Map and also made public on the official Google Crisis Map of the Philippines.


One of my main priorities now is to make sure we do a far better job of leveraging advanced computing and microtasking platforms so that we are better prepared the next time we’re asked to repeat this kind of deployment. On the advanced computing side, it should be perfectly feasible to develop an automated way to crawl Twitter and identify links to images and videos. My colleagues at QCRI are already looking into this. As for microtasking, I am collaborating with PyBossa and CrowdFlower to ensure that we have highly customizable platforms on standby so we can immediately upload the results of QCRI’s algorithms. In sum, we have to move beyond simple crowdsourcing and adopt more agile microtasking and social computing platforms, as both are far more scalable.
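To make the crawling idea concrete, here is a minimal Python sketch of how such a crawler might separate out media links. Everything here is illustrative, not QCRI's actual pipeline: the host list, the heuristics and the function names are my own.

    import re
    import requests

    URL_PATTERN = re.compile(r"https?://\S+")
    # Hosts that predominantly served photos/videos in 2012 (illustrative, not exhaustive).
    MEDIA_HOSTS = ("twitpic.com", "instagram.com", "yfrog.com", "youtube.com", "youtu.be")

    def find_media_links(tweets):
        """Return (tweet, url) pairs whose links appear to point to images or videos.

        Assumes each tweet is a dict with a "text" key.
        """
        results = []
        for tweet in tweets:
            for url in URL_PATTERN.findall(tweet["text"]):
                if any(host in url for host in MEDIA_HOSTS):
                    results.append((tweet, url))
                    continue
                try:
                    # Follow shorteners like t.co and inspect the content type.
                    resp = requests.head(url, allow_redirects=True, timeout=5)
                    if resp.headers.get("Content-Type", "").startswith(("image/", "video/")):
                        results.append((tweet, url))
                except requests.RequestException:
                    pass  # Dead or slow link; skip it.
        return results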

In the meantime, a big big thanks once again to all our digital volunteers who made this entire effort possible and highly insightful.

Summary: Digital Disaster Response to Philippine Typhoon

Update: How the UN Used Social Media in Response to Typhoon Pablo

The United Nations Office for the Coordination of Humanitarian Affairs (OCHA) activated the Digital Humanitarian Network (DHN) on December 5th at 3pm Geneva time (9am New York). The activation request? To collect all relevant tweets about Typhoon Pablo posted on December 4th and 5th; identify pictures and videos of damage/flooding shared in those tweets; geo-locate, time-stamp and categorize this content. The UN requested that this database be shared with them by 5am Geneva time the following day. As per DHN protocol, the activation request was reviewed within an hour. The UN was informed that the request had been granted and that the DHN was formally activated at 4pm Geneva.


The DHN is composed of several members who form Solution Teams when the network is activated. The purpose of Digital Humanitarians is to support humanitarian organizations in their disaster response efforts around the world. Given the nature of the UN’s request, both the Standby Volunteer Task Force (SBTF) and Humanity Road (HR) joined the Solution Team. HR focused on analyzing all tweets posted December 4th while the SBTF worked on tweets posted December 5th. Over 20,000 tweets were analyzed. As HR will have a blog post describing their efforts shortly (please check here), I will focus on the SBTF.

[Screenshot: Geofeedia map of geo-tagged Typhoon Pablo media]

The Task Force first used Geofeedia to identify all relevant pictures/videos that were already geo-tagged by users. About a dozen were identified in this manner. Meanwhile, the SBTF partnered with the Qatar Computing Research Institute’s (QCRI) Crisis Computing Team to collect all tweets posted on December 5th with the hashtags endorsed by the Philippine Government. QCRI ran algorithms on the dataset to remove (1) all retweets and (2) all tweets without links (URLs). Given the very short turnaround time requested by the UN, the SBTF and QCRI teams elected to take a two-pronged approach in the hope that at least one would be successful.
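Those two filters are simple to express in code. Below is a minimal sketch, assuming each tweet is a plain dict with a "text" field; QCRI's actual algorithms operated on full Twitter data structures, which carry explicit retweet and URL entities.

    def prefilter(tweets):
        """Apply the two filters described above: drop retweets, drop tweets with no URL."""
        kept = []
        for tweet in tweets:
            text = tweet["text"]
            if text.startswith("RT @") or tweet.get("retweeted_status"):
                continue  # filter (1): retweets
            if "http://" not in text and "https://" not in text:
                continue  # filter (2): no link to check for pictures/videos
            kept.append(tweet)
        return kept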

The first approach used CrowdFlower (CF), introduced here. Workers on CrowdFlower were asked to check each tweet’s URL and determine whether it linked to a picture or video; the purpose was to filter out URLs that linked to news articles. CF workers were also asked to assess whether the tweets (or pictures/videos) provided sufficient geographic information for them to be mapped. This methodology worked for about two-thirds of all the tweets in the database. A review of lessons learned and how to use CrowdFlower for disaster response will be posted in the future.
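CrowdFlower ingests work as spreadsheet-style rows, one "unit" per row, so preparing the job amounts to exporting each tweet/URL pair. A hedged sketch follows; the column names are mine, not the schema used during the deployment.

    import csv

    def write_crowdflower_units(tweets_with_urls, path="pablo_units.csv"):
        """Write one microtask unit per (tweet, url) pair.

        Workers would then be asked, per row: does the URL link to a photo,
        a video or a news article, and is there enough geographic detail to map it?
        """
        with open(path, "w", newline="", encoding="utf-8") as f:
            writer = csv.writer(f)
            writer.writerow(["tweet_text", "url"])  # illustrative column names
            for tweet, url in tweets_with_urls:
                writer.writerow([tweet["text"], url])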

[Screenshot: PyBossa microtasking platform, Philippines deployment]

The second approach was made possible thanks to a partnership with PyBossa, a free, open-source crowdsourcing and microtasking platform. This effort is described here in more detail. While we are still reviewing the results of this approach, we expect that this tool will become the standard for future activations of the Digital Humanitarian Network. I will thus continue working closely with the PyBossa team to set up a standby PyBossa platform, ready for use at a moment’s notice, so that Digital Humanitarians can be fully prepared for the next activation.
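For readers curious what uploading the results of the filtering steps to PyBossa could look like, here is a sketch using the pybossa-client Python package. The endpoint, API key and project details are placeholders, and the helper names follow my reading of the client's documentation; consult the PyBossa docs for the authoritative API.

    import pbclient  # pip install pybossa-client

    pbclient.set("endpoint", "http://example-pybossa-server.org")  # placeholder server
    pbclient.set("api_key", "YOUR-API-KEY")                        # placeholder key

    # Create an application (project) and load one task per tweet/URL pair.
    app = pbclient.create_app(
        "Typhoon Pablo Media Tagging",                  # hypothetical project name
        "pablo_media",
        "Tag tweeted photos/videos of typhoon damage",
    )
    for tweet, url in tweets_with_urls:  # output of the filtering steps above
        pbclient.create_task(app.id, info={"text": tweet["text"], "url": url})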

Now for the results of the activation. Within 10 hours, over 20,000 tweets were analyzed using a mix of methodologies. By 4.30am Geneva time, the combined efforts of HR and the SBTF had resulted in a database of 138 highly annotated tweets. The following meta-data was collected for each tweet (a minimal schema sketch follows the list):

  • Media Type (Photo or Video)
  • Type of Damage (e.g., large-scale housing damage)
  • Analysis of Damage (e.g., 5 houses flooded, 1 damaged roof)
  • GPS coordinates (latitude/longitude)
  • Province
  • Region
  • Date
  • Link to Photo or Video
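
In code form, each record in that database might look like the following dataclass; the field names are my own, as the actual database was a shared spreadsheet.

    from dataclasses import dataclass

    @dataclass
    class AnnotatedTweet:
        """One curated record, mirroring the meta-data fields listed above."""
        media_type: str       # "photo" or "video"
        damage_type: str      # e.g., "large-scale housing damage"
        damage_analysis: str  # e.g., "5 houses flooded, 1 damaged roof"
        latitude: float       # GPS coordinates
        longitude: float
        province: str
        region: str
        date: str             # e.g., "2012-12-05"
        media_url: str        # link to the photo or video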

The vast majority of curated tweets had latitude and longitude coordinates. One SBTF volunteer (“Mapster”) created this map below to plot the data collected. Another Mapster created a similar map, which is available here.

[Screenshot: Typhoon Pablo crisis map of Twitter multimedia]

The completed database was shared with UN OCHA at 4.55am Geneva time. Our humanitarian colleagues are now in the process of analyzing the data collected and writing up a final report, which they will share with OCHA Philippines today by 5pm Geneva time.

Needless to say, we all learned a lot thanks to the deployment of the Digital Humanitarian Network in the Philippines. This was the first time we were activated to carry out a task of this type. We are now actively reviewing our combined efforts with the concerted aim of streamlining our workflows and methodologies to make this type of effort far easier and quicker to complete in the future. If you have suggestions and/or technologies that could facilitate this kind of digital humanitarian work, then please do get in touch, either by posting your ideas in the comments section below or by sending me an email.

Lastly, but definitely most importantly, a big HUGE thanks to everyone who volunteered their time to support the UN’s disaster response efforts in the Philippines at such short notice! We want to publicly recognize everyone who came to the rescue, so here’s a list of volunteers who contributed their time (more to be added!). Without you, there would be no database to share with the UN, no learning, no innovating and no demonstration that digital volunteers can and do make a difference. Thank you for caring. Thank you for daring.

Why USAID’s Crisis Map of Syria is so Unique

While static, this crisis map includes a truly unique detail. Click on the map below to see a larger version as this may help you spot what is so striking.

For a hint, click this link. Still stumped? Look at the sources listed in the Key.

[Map: USAID Crisis Map of Syria]

The Most Impressive Live Global Twitter Map, Ever?

My colleague Kalev Leetaru has just launched The Global Twitter Heartbeat Project in partnership with the Cyber Infrastructure and Geospatial Information Laboratory (CIGI) and GNIP. He shared more information on this impressive initiative with the CrisisMappers Network this morning.

According to Kalev, the project “uses an SGI super-computer to visualize the Twitter Decahose live, applying fulltext geocoding to bring the number of geo-located tweets from 1% to 25% (using a full disambiguating geocoder that uses all of the user’s available information in the Twitter stream, not just looking for mentions of major cities), tone-coding each tweet using a twitter-customized dictionary of 30,000 terms, and applying a brand-new four-stage heatmap engine (this is where the supercomputer comes in) that makes a map of the number of tweets from or about each location on earth, a second map of the average tone of all tweets for each location, a third analysis of spatial proximity (how close tweets are in an area), and a fourth map as needed for the percent of all of those tweets about a particular topic, which are then all brought together into a single heatmap that takes all of these factors into account, rather than a sequence of multiple maps.”

Kalev added that, “For the purposes of this demonstration we are processing English only, but are seeing a nearly identical spatial profile to geotagged all-languages tweets (though this will affect the tonal results).” The Twitterbeat team is running a live demo showing both a US and a world map, updated in real time at Supercomputing on a PufferSphere and every few seconds on the SGI website here.
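To give a feel for the first two layers Kalev describes, tweet counts per location and average tone per location, here is a toy aggregation. The real engine does full-text geocoding on an SGI supercomputer; this sketch just bins already-geocoded tweets into grid cells, and all names are mine.

    from collections import defaultdict

    def heatmap_layers(tweets, cell_deg=1.0):
        """Bin geocoded tweets into lat/lon cells; return count and mean tone per cell.

        Assumes each tweet is a dict with numeric "lat", "lon" and "tone" keys.
        """
        cells = defaultdict(lambda: [0, 0.0])  # cell -> [count, tone_sum]
        for t in tweets:
            key = (int(t["lat"] // cell_deg), int(t["lon"] // cell_deg))
            cells[key][0] += 1
            cells[key][1] += t["tone"]
        return {k: {"count": c, "mean_tone": s / c} for k, (c, s) in cells.items()}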


So why did Kalev share all this with the CrisisMappers Network? Because he and his team created a rather unique crisis map composed of all tweets about Hurricane Sandy; see the YouTube video above. “[Y]ou can see how the whole country lights up and how tweets don’t just move linearly up the coast as the storm progresses, capturing the advance impact of such a large storm and its peripheral effects across the country.” The team also did a “similar visualization of the recent US Presidential election showing the chaotic nature of political communication in the Twittersphere.”


To learn more about the project, I recommend watching Kalev’s 2-minute introductory video above.

Crowdsourcing the Evaluation of Post-Sandy Building Damage Using Aerial Imagery

Update (Nov 2): 5,739 aerial images tagged by over 3,000 volunteers. Please keep up the outstanding work!

My colleague Schuyler Erle from the Humanitarian OpenStreetMap Team just launched a very interesting effort in response to Hurricane Sandy. He shared the info below via CrisisMappers earlier this morning, which I’m turning into this blog post to help him recruit more volunteers.

Schuyler and team just got their hands on the Civil Air Patrol’s (CAP) super-high-resolution aerial imagery of the disaster-affected areas. They’ve imported this imagery into MapMill, their microtasking server created by Jeff Warren, and are now asking volunteers to help tag the images in terms of the damage depicted in each photo. “The 531 images on the site were taken from the air by CAP over New York, New Jersey, Rhode Island, and Massachusetts on 31 Oct 2012.”

To access this platform, simply click here: http://sandy.hotosm.org. If that link doesn’t work, please try sandy.locative.us.

“For each photo shown, please select ‘ok’ if no building or infrastructure damage is evident; please select ‘not ok’ if some damage or flooding is evident; and please select ‘bad’ if buildings etc. seem to be significantly damaged or underwater. Our *hope* is that the aggregation of the ok/not ok/bad ratings can be used to help guide FEMA resource deployment, or so was indicated might be the case during RELIEF at Camp Roberts this summer.”

A disaster response professional working in the affected areas for FEMA responded (via CrisisMappers) to Schuyler’s effort, confirming that:

“[G]overnment agencies are working on exploiting satellite imagery for damage assessments and flood extents. The best way that you can help is to help categorize photos using the tool Schuyler provides […].  CAP imagery is critical to our decision making as they are able to work around some of the limitations with satellite imagery so that we can get an area of where the worst damage is. Due to the size of this event there is an overwhelming amount of imagery coming in, your assistance will be greatly appreciated and truly aid in response efforts.  Thank you all for your willingness to help.”

Schuyler notes that volunteers can click on the Grid link from the home page of the microtasking platform to “zoom in to the coastlines of Massachusetts or New Jersey” and see “judgements about building damages beginning to aggregate in US National Grid cells, which is what FEMA use operationally. Again, the idea and intention is that, as volunteers judge the level of damage evident in each photo, the heat map will change color and indicate at a glance where the worst damage has occurred.” See the screenshot above.
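How might those individual ok/not ok/bad judgements become a heat-map color? One simple possibility is to score and average them per grid cell, as sketched below; MapMill's actual aggregation logic may well differ.

    SCORES = {"ok": 0, "not ok": 1, "bad": 2}

    def cell_severity(ratings_by_cell):
        """Average redundant volunteer ratings into one severity score per grid cell.

        `ratings_by_cell` maps a US National Grid cell id to the list of
        "ok"/"not ok"/"bad" tags volunteers assigned to photos in that cell.
        """
        return {
            cell: sum(SCORES[tag] for tag in tags) / len(tags)
            for cell, tags in ratings_by_cell.items()
        }

    # e.g. cell_severity({"18TWL8040": ["ok", "bad", "not ok"]}) -> {"18TWL8040": 1.0}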

Even if you just spend 5 or 10 minutes tagging the imagery, this will still go a long way toward supporting FEMA’s response efforts. You can also help by spreading the word and recruiting others to the cause. Thank you!

What Was Novel About Social Media Use During Hurricane Sandy?

We saw the usual spikes in Twitter activity and the typical (reactive) launch of crowdsourced crisis maps. We also saw map mashups combining user-generated content with scientific weather data. Facebook was once again used to inform our social networks: “We are ok” became the most common status update on the site. In addition, thousands of pictures were shared on Instagram (600 per minute), documenting both the impending danger and the resulting impact of Hurricane Sandy. But was there anything really novel about the use of social media during this latest disaster?

I’m asking not because I claim to know the answer but because I’m genuinely interested and curious. One possible “novelty” that caught my eye was this FrankenFlow experiment to “algorithmically curate” pictures shared on social media. Perhaps another “novelty” was the embedding of webcams within a number of crisis maps, such as those below launched by #HurricaneHacker and Team Rubicon respectively.

Another “novelty” that struck me was how much focus there was on debunking false information being circulated during the hurricane—particularly images. The speed of this debunking was also striking. As regular iRevolution readers will know, “information forensics” is a major interest of mine.

This Tumblr post was one of the first to emerge in response to the fake pictures (30+) of the hurricane swirling around the social media whirlwind. Snopes.com also got in on the action with this post. Within hours, The Atlantic Wire followed with this piece entitled “Think Before You Retweet: How to Spot a Fake Storm Photo.” Shortly after, Alexis Madrigal from The Atlantic published this piece on “Sorting the Real Sandy Photos from the Fakes,” like the one below.

These rapid rumor-bashing efforts led BuzzFeed’s John Herrman to claim that Twitter acted as a truth machine: “Twitter’s capacity to spread false information is more than cancelled out by its savage self-correction.” This is not the first time that journalists or researchers have highlighted Twitter’s tendency for self-correction. This peer-reviewed, data-driven study of disaster tweets generated during the 2010 Chile Earthquake reports the same finding.

What other novelties did you come across? Are there other interesting, original and creative uses of social media that ought to be documented for future disaster response efforts? I’d love to hear from you via the comments section below. Thanks!

The Best Way to Crowdsource Satellite Imagery Analysis for Disaster Response

My colleague Kirk Morris recently pointed me to this very neat study on iterative versus parallel models of crowdsourcing for the analysis of satellite imagery. The study was carried out by French researcher and engineer Nicolas Maisonneuve for the GIScience 2012 conference.

Nicolas finds that after reaching a certain threshold, adding more volunteers to the parallel model does “not change the representativeness of opinion and thus will not change the consensual output.” His analysis also shows that the value of this threshold has a significant impact on the resulting quality of the parallel work and thus should be chosen carefully. In terms of the iterative approach, Nicolas finds that “the first iterations have a high impact on the final results due to a path dependency effect.” To this end, “stronger commitment during the first steps are thus a primary concern for using such model,” which means that “asking expert/committed users to start” is important.

Nicolas’s study also reveals that the parallel approach is better able to correct wrong annotations (wrong analysis of the satellite imagery) than the iterative model for images that are fairly straightforward to interpret. In contrast, the iterative model is better suited to handling more ambiguous imagery. But there is a catch: the potential path dependency effect in the iterative model means that “mistakes could be propagated, generating more easily type I errors as the iterations proceed.” In terms of spatial coverage, the iterative model is more efficient, since the parallel model leverages redundancy to ensure data quality. Still, Nicolas concludes that the “parallel model provides an output which is more reliable than that of a basic iterative [because] the latter is sensitive to vandalism or knowledge destruction.”

So the question that naturally follows is this: how can the parallel and iterative methodologies be combined to produce a better overall result? Perhaps the parallel approach could be used as the default to begin with, while images that are considered difficult to interpret get pushed from the parallel workflow to the iterative workflow. The latter would first be processed by experts in order to create favorable path dependency. Could this hybrid approach be the winning strategy?
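A minimal sketch of that routing logic, with a made-up consensus threshold that would need tuning:

    from collections import Counter

    def route(votes, consensus=0.7):
        """Decide whether an image stays in the parallel workflow or moves to the
        expert-seeded iterative workflow. `votes` is the list of crowd labels
        for one image; the 0.7 threshold is a hypothetical parameter.
        """
        label, top = Counter(votes).most_common(1)[0]
        if top / len(votes) >= consensus:
            return ("parallel", label)   # clear consensus: accept the majority label
        return ("iterative", None)       # ambiguous image: experts start the chain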

Could Social Media Have Prevented the Largest Mass Poisoning of a Population in History?

I just finished reading a phenomenal book. Resilience: Why Things Bounce Back was co-authored by my good friend Andrew Zolli of PopTech fame and his wonderful colleague Ann Marie Healy. I could easily write several dozen blog posts on this brilliant book. Consider this the first of possibly many more posts to follow. Some will summarize and highlight insights that really resonated with me, while others, like the one below, will use the book as a springboard to explore related questions and themes.

Among the many interesting case studies that Andrew and Ann discuss in their book, the following may very well be the biggest #FAIL in all of development history. The vast majority of Bangladeshis did not have access to clean water during the early 1970s, which contributed to numerous diseases that claimed hundreds of thousands of lives every year. So UNICEF launched a “nationwide program to sink shallow tube wells across the country. Once a small hand pump was installed to the top of the tube, clean water rose quickly to the surface.”

By the end of the 1970s, over 300,000 tube wells had been installed, and some 10 million more went into operation by the late 1990s. With access to clean water, the child mortality rate dropped by more than half, from 24% to less than 10%. UNICEF’s solution was thus “touted as a model for South Asia and the world.” In the early 1980s, however, signs of widespread arsenic poisoning began to appear across the country. “UNICEF had mistaken deep water for clean water and never tested its tube wells for this poison.” WHO soon predicted that “one in a hundred Bangladeshis drinking from the contaminated wells would die from an arsenic-related cancer.” The government estimated that about half of the 10 million wells were contaminated. A few years later, WHO announced that Bangladesh was “facing the largest mass poisoning of a population in history.”

In a typical move that proves James Scott’s thesis in Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed, the Bangladeshi government partnered with the World Bank to paint the spout of each well red if the water was contaminated and green if safe to drink. Five years and over $40 million later, the project had only been able to test half of the 10 million wells. “Officially, this intervention was hailed as almost instantaneous success.” But the widespread negative socio-economic impact and community-based conflicts that resulted call the purported success of this one-off, top-down intervention into question.

As Andrew and Ann explain, water use in Bangladesh (like in many other countries) starts and ends with women and girls. “They are the ones who will determine if a switch to a green well is warranted because they are the ones who fetch the water numerous times a day.” The location of these green wells largely determines “whether or not women and girls can access them in a way that is deemed socially appropriate.” As was the case with many of these wells, “the religious and cultural norms impeded a successful switch.”

In addition, “negotiating use of someone else’s green well was an act fraught with potential conflict.” As a result, some still used water from red-painted wells. In fact, “reports started to come in of families and communities chipping away at the red paint on their wells,” with some even repainting theirs with green. Such was the stigma of being a family linked to a red well. Indeed, “young girls living within the vicinity of contaminated wells [recall that there were an estimated 5 million such wells] suffered from diminishing marriage prospects, if they were able to marry at all.” In addition, because the government was unable to provide alternative sources of clean water for half of the communities with a red well, “many women and girls returned to surface water sources like ponds and lakes, significantly more likely to be contaminated with fecal pathogens.” As a result, “researchers estimated that abandonment of shallow tube wells increased a household’s risk of diarrheal disease by 20%.”

In 2009, a water quality survey carried out by the government found that “approximately 20 million people were still being exposed to excessive quantities of arsenic.” And so, “while the experts and politicians discuss how to find a solution for the unintended consequences of the intervention, the people of Bangladesh continue bringing their buckets to the wells while crossing their fingers behind their backs.”

I have several questions (and will omit the ones that start with WTF?). Could social media have mitigated this catastrophic disaster? It took an entire decade for UNICEF and the Bangladeshi government to admit that massive arsenic poisoning was taking place. And even then, when UNICEF finally responded to the crisis in 1998, they said: “We are wedded to safe water, not tube wells, but at this time tube wells remain a good, affordable idea and our program will go on.” By then it was too late anyway, since arsenic from the wells had “found their way into the food supply. Rice irrigated with the tube wells was found to contain more than nine times the normal amount of arsenic. Rice concentrated the poison; even if one managed to avoid drinking contaminated well water, concentrated amounts would just end up in one’s food.”

Could social media, had they existed in the 1980s, have been used to support the early findings published by local scientists 15 years before UNICEF publicly recognized (but still ignored) the crisis? Could scientists and activists have launched a public social media campaign to name and shame? Could hundreds of pictures posted on Flickr and videos uploaded to YouTube have made a difference by directly revealing the awful human consequences of arsenic poisoning?

Could an Ushahidi platform powered by FrontlineSMS have been used to create a crowdsourced complaints mechanism? Could digital humanitarian volunteers from the Standby Volunteer Task Force (SBTF) have worked with local counterparts to create a live, country-wide map of concerns posted anonymously by girls and women across thousands of communities in Bangladesh? Could an interactive voice response (IVR) system like this one have been set up to address the concerns and needs of illiterate individuals? Could a PeaceTXT approach have been used to catalyze behavior change? Could these technologies help build more resilient societies, ones better able to bounce back from crises like these?

And since mass arsenic poisoning is still happening in Bangladesh today, 40 years after UNICEF’s first intervention, are initiatives like the ones described above being tried at all?

Crisis Mapping for Disaster Preparedness, Mitigation and Resilience

Crisis mapping for disaster preparedness is nothing new. In 2004, my colleague Suha Ulgen spearheaded an innovative project in Istanbul that combined public participation and mobile geospatial technologies for the purposes of disaster mitigation. Suha subsequently published an excellent overview of the project entitled “Public Participation Geographic Information Sharing Systems for Community Based Urban Disaster Mitigation,” available in this edited book on Geo-Information for Disaster Management. I have referred to this project in countless conversations since 2007, so it is high time I blog about it as well.

Suha’s project included a novel “Neighborhood Geographic Information Sharing System,” which “provided volunteers with skills and tools for identification of seismic risks and response assets in their neighborhoods. Field data collection volunteers used low-cost hand-held computers and data compiled was fed into a geospatial database accessible over the Internet. Interactive thematic maps enabled discussion of mitigation measures and action alternatives. This pilot evolved into a proposal for sustained implementation with local fire stations.” Below is a screenshot of the web-based system that enabled data entry and query.

There’s no reason why a similar approach could not be taken today, one that uses a dedicated smartphone app with integrated gamification and social networking features. The idea would be to make community mapping fun and rewarding; a way to foster a more active and connected community—which would in turn build more social capital. In the event of a disaster, this same smartphone app would allow users to simply “check in” to receive information on the nearest shelter areas (response assets) as well as danger zones such as overpasses. This is why geo-fencing is so important for crisis mapping.
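As a rough illustration of that check-in feature, the sketch below finds the nearest shelter and flags danger zones inside a simple radius geofence. The asset format and the radius are my assumptions, not anything from Suha's project.

    from math import asin, cos, radians, sin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in kilometres."""
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * 6371 * asin(sqrt(a))

    def check_in(lat, lon, assets, danger_radius_km=0.5):
        """Return the nearest shelter and any danger zones within the geofence.

        `assets` is assumed to be a list of dicts with "name", "kind"
        ("shelter" or "danger"), "lat" and "lon" keys, i.e. the response
        assets and risks mapped by neighborhood volunteers.
        """
        shelters = [a for a in assets if a["kind"] == "shelter"]
        nearest = min(shelters, key=lambda a: haversine_km(lat, lon, a["lat"], a["lon"]))
        dangers = [
            a for a in assets
            if a["kind"] == "danger"
            and haversine_km(lat, lon, a["lat"], a["lon"]) <= danger_radius_km
        ]
        return nearest, dangers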

(Incidentally, Suha’s project also included a “School Commute Contingency Pilot” designed to track school-bus routes in Istanbul and thus “stimulate contingency planning for commute-time emergencies when 400,000 students travel an average of 45 minutes each way on 20,000 service buses. [GPS] data loggers were used to determine service bus routes displayed on printed maps highlighting nearest schools along the route.” Suha proposed that “bus-drivers, parents and school managers be issued route maps with nearest schools that could serve as both meeting places and shelters.”)

Fast forward to 2012 and the Humanitarian OpenStreetMap Team’s (HOT) novel project “Community Mapping for Exposure in Indonesia,” which resulted in the mapping of over 160,000 buildings and numerous village-level maps in under ten months. The team also organized a university competition to create incentives for the mapping of urban areas. “The students were not only tasked to digitize buildings, but to also collect building information such as building structure, wall type, roof type and the number of floors.” This contributed to the mapping and codification of some 30,000 buildings.

As Suha rightly noted almost 10 years ago, “for disaster mitigation measures to be effective they need to be developed in recognition of the local differences and adopted by the active participation of each community.” OSM’s work in Indonesia fully embodies the importance of mapping local differences and provides important insights on how to catalyze community participation. The buildup of social capital is another important outcome of these efforts. Social capital facilitates collective action and increases local capacity for self-organization, resulting in greater social resilience. In sum, these novel projects demonstrate that technologies used for crisis mapping can be used for disaster preparedness, mitigation and resilience.

Crowdsourcing Crisis Response Following Philippine Floods

Widespread and heavy rains resulting from Typhoon Haikui have flooded the Philippine capital, Manila. Over 800,000 people have been affected by the flooding and some 250,000 have been relocated to evacuation centers. Given the gravity of the situation, “some resourceful Filipinos put up an online spreadsheet where concerned citizens can list down places where help is most urgently needed” (1). Meanwhile, Google’s Crisis Response Team has launched this resource page, which includes links to news updates, emergency contact information, Person Finder and this shelter map.

Filipino volunteers are using an open (but not editable) Google Spreadsheet and crowdsourcing reports of urgent needs using this Google Form. The spreadsheet (please click the screenshot below to enlarge) includes the time of the incident, the location (physical address), a description of the alert (many include personal names and phone numbers) and who reported it. Additional fields include the status of the alert, its urgency and whether action has been taken. The latter is also color-coded.
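Because the spreadsheet is public, any rescue group could also export it and triage programmatically. A sketch, with guessed column names based on the fields just described, not the volunteers' actual headers:

    import csv

    def urgent_open_reports(path):
        """Return high-urgency reports where no action has been taken yet.

        Expects a CSV export of the public spreadsheet; the column names
        ("urgency", "action_taken") are assumptions for illustration.
        """
        with open(path, newline="", encoding="utf-8") as f:
            rows = list(csv.DictReader(f))
        return [
            r for r in rows
            if r.get("urgency", "").strip().lower() == "high"
            and r.get("action_taken", "").strip().lower() in ("", "no", "none")
        ]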

“The spreadsheet can easily be referenced by any rescue group that can access the web, and is constantly updated by volunteers real-time” (2). This reminds me a lot of the Google Spreadsheets we used following the Haiti Earthquake of 2010. The Standby Volunteer Task Force (SBTF) continues to use Google Spreadsheets in similar ways, but for the purposes of media monitoring, and these are typically not made public. What is noteworthy about these important volunteer efforts in the Philippines is that the spreadsheet was made completely public in order to crowdsource the response.

As I’ve noted before, emergency management professionals cannot be everywhere at the same time, but the crowd is always there. The tradeoff with using open data to crowdsource crisis response is obviously privacy and data protection. Volunteers may therefore want to let those filling out the Google Form know that any information they provide will or may be made public. I would also recommend that they create an “About Us” or “Who We Are” link to cultivate a sense of trust in the initiative. Finally, crowdsourcing offers of help may facilitate the “matchmaking” of needs and available resources.

I would give the same advice to the volunteers who recently set up this Crowdmap of the floods. I would also suggest they set up their own Standby Volunteer Task Force (SBTF) in order to deploy again in the future. In the meantime, reports on flood levels can be submitted to the crisis map via webform, email and SMS.