Why USAID’s Crisis Map of Syria is so Unique

While static, this crisis map includes a truly unique detail. Click on the map below to see a larger version as this may help you spot what is so striking.

For a hint, click this link. Still stumped? Look at the sources listed in the Key.

 

Using E-Mail Data to Estimate International Migration Rates

As is well known, “estimates of demographic flows are inexistent, outdated, or largely inconsistent, for most countries.” I would add costly to that list as well. So my QCRI colleague Ingmar Weber co-authored a very interesting study on the use of e-mail data to estimate international migration rates.

The study analyzes a large sample of Yahoo! emails sent by 43 million users between September 2009 and June 2011. “For each message, we know the date when it was sent and the geographic location from where it was sent. In addition, we could link the message with the person who sent it, and with the user’s demographic information (date of birth and gender), that was self reported when he or she signed up for a Yahoo! account. We estimated the geographic location from where each email message was sent using the IP address of the user.”

The authors used data on existing migration rates for a dozen countries and international statistics on Internet diffusion rates by age and gender in order to correct for selection bias. For example, “estimated number of migrants, by age group and gender, is multiplied by a correction factor to adjust for over-representation of more educated and mobile people in groups for which the Internet penetration is low.” The graphs below are estimates of age- and gender-specific immigration rates for the Philippines. “The gray area represents the size of the bias correction.” This means that “without any correction for bias, the point estimates would be at the upper end of the gray area.” These methods “correct for the fact that the group of users in the sample, although very large, is not representative of the entire population.”
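
To make the correction-factor idea a bit more concrete, here is a minimal Python sketch of how such a bias adjustment might be computed. All of the numbers, group labels and the choice of correction factor below are illustrative assumptions on my part, not figures or formulas from the study.

```python
# Illustrative sketch of the bias-correction idea described above.
# All numbers are made up for demonstration; the actual study used
# observed migration statistics and Internet-diffusion data.

# Estimated migrants observed in the e-mail sample, by (age group, gender).
observed_migrants = {
    ("20-24", "F"): 1200,
    ("20-24", "M"): 1500,
    ("40-44", "F"): 300,
}

# Assumed Internet penetration for each group in the general population (0-1).
internet_penetration = {
    ("20-24", "F"): 0.60,
    ("20-24", "M"): 0.65,
    ("40-44", "F"): 0.25,
}

# Hypothetical reference population size for each group.
population = {
    ("20-24", "F"): 500_000,
    ("20-24", "M"): 520_000,
    ("40-44", "F"): 400_000,
}

def corrected_migration_rate(group):
    """Scale the naive online-sample rate by a correction factor so that
    groups with low Internet penetration (where the online sample
    over-represents the more educated and mobile) are adjusted downward."""
    penetration = internet_penetration[group]
    online_population = population[group] * penetration
    naive_rate = observed_migrants[group] / online_population
    # Simplistic placeholder for the study's correction factor: the lower
    # the penetration, the stronger the downward correction.
    correction_factor = penetration
    return naive_rate * correction_factor

for group in observed_migrants:
    print(group, round(corrected_migration_rate(group), 4))
```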

The results? Ingmar and his co-author Emilio Zagheni were able to “estimate migration rates that are consistent with the ones published by those few countries that compile migration statistics. By using the same method for all geographic regions, we obtained country statistics in a consistent way, and we generated new information for those countries that do not have registration systems in place (e.g., developing countries), or that do not collect data on out-migration (e.g., the United States).” Overall, the study documented a “global trend of increasing mobility,” which is “growing at a faster pace for females than males. The rate of increase for different age groups varies across countries.”

The authors argue that this approach could also be used in the context of “natural” disasters and man-made disasters. In terms of future research, they are interested in evaluating “whether sending a high proportion of e-mail messages to a particular country (which is a proxy for having a strong social network in the country) is related to the decision of actually moving to the country.” Naturally, they are also interested in analyzing Twitter data. “In addition to mobility or migration rates, we could evaluate sentiments pro or against migration for different geographic areas. This would help us understand how sentiments change near an international border or in regions with different migration rates and economic conditions.”

I’m very excited to have Ingmar at QCRI so we can explore these ideas further and in the context of humanitarian and development challenges. I’ve been discussing similar research ideas with my colleagues at UN Global Pulse and there may be a real sweet spot for collaboration here, particularly with the recently launched Pulse Lab in Jakarta. The possibility of collaborating with my colleagues at Flowminder could also be really interesting given their important study of population movement following the Haiti Earthquake. In conclusion, I fully share the authors’ sentiment when they highlight the fact that it is “more and more important to develop models for data sharing between private companies and the academic world, that allow for both protection of users’ privacy & private companies’ interests, as well as reproducibility in scientific publishing.”

Rapidly Verifying the Credibility of Information Sources on Twitter

One of the advantages of working at QCRI is that I’m regularly exposed to peer-reviewed papers presented at top computing conferences. This is how I came across an initiative called “Seriously Rapid Source Review” or SRSR. As many iRevolution readers know, I’m very interested in information forensics as applied to crisis situations. So SRSR certainly caught my attention.

The team behind SRSR took a human-centered design approach in order to integrate journalistic practices within the platform. There are four features worth noting in this respect. The first feature to note in the figure below is the automated filter function, which allows one to view tweets generated by “Ordinary People,” “Journalists/Bloggers,” “Organizations,” “Eyewitnesses” and “Uncategorized.” The second feature, Location, “shows a set of pie charts indicating the top three locations where the user’s Twitter contacts are located. This cue provides more location information and indicates whether the source has a ‘tie’ or other personal interest in the location of the event, an aspect of sourcing exposed through our preliminary interviews and suggested by related work.”


The third feature worth noting is the “Eyewitness” icon. The SRSR team developed the first-ever automatic classifier to identify eyewitness reports shared on Twitter. My team and I at QCRI are developing a second one that focuses specifically on automatically classifying eyewitness reports during sudden-onset natural disasters. The fourth feature is “Entities,” which displays the top five entities that the user has mentioned in their tweet history. These include references to organizations, people and places, which can reveal important patterns about the Twitter user in question.
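
As a rough illustration of what an eyewitness classifier of this kind might look like, here is a minimal sketch using scikit-learn. The training examples, features and model choice are my own assumptions for demonstration purposes and not the actual SRSR (or QCRI) classifier.

```python
# Minimal sketch of a tweet-level eyewitness classifier, assuming a small
# hand-labeled training set. This is NOT the SRSR implementation, just an
# illustration of the general supervised-learning approach.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = eyewitness report, 0 = not.
train_tweets = [
    "I just felt the whole building shake, everyone ran outside",
    "Flood water is up to my knees on Main Street right now",
    "Praying for everyone affected by the earthquake",
    "Officials say the storm will make landfall tomorrow",
]
train_labels = [1, 1, 0, 0]

classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(),
)
classifier.fit(train_tweets, train_labels)

# Score new tweets; in practice one would threshold the probability and
# surface an "Eyewitness" badge for users whose tweets score highly.
new_tweets = ["Power just went out across our whole neighborhood"]
print(classifier.predict_proba(new_tweets))
```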

Journalists participating in this applied research found the “Location” feature particularly important when assessing the credibility of users on Twitter. They noted that “sources that had friends in the location of the event were more believable, indicating that showing friends’ locations can be an indicator of credibility.” One journalist shared the following: “I think if it’s someone without any friends in the region that they’re tweeting about then that’s not nearly as authoritative, whereas if I find somebody who has 50% of friends are in [the disaster area], I would immediately look at that.”

In addition, the automatic identification of “eyewitnesses” was deemed essential by journalists who participated in the SRSR study. This should not be surprising since “news organizations often use eyewitnesses to add credibility to reports by virtue of the correspondent’s on-site proximity to the event.” Indeed, “Witnessing and reporting on what the journalist had witnessed have long been seen as quintessential acts of journalism.” To this end, “social media provides a platform where once passive witnesses can become active and share their eyewitness testimony with the world, including with journalists who may choose to amplify their report.”

In sum, SRSR could be used to accelerate the verification of social media content, i.e., go beyond source verification alone. For more on SRSR, please see this computing paper (PDF), which was authored by Nicholas Diakopoulos, Munmun De Choudhury and Mor Naaman.

The Most Impressive Live Global Twitter Map, Ever?

My colleague Kalev Leetaru has just launched The Global Twitter Heartbeat Project in partnership with the Cyber Infrastructure and Geospatial Information Laboratory (CIGI) and GNIP. He shared more information on this impressive initiative with the CrisisMappers Network this morning.

According to Kalev, the project “uses an SGI super-computer to visualize the Twitter Decahose live, applying fulltext geocoding to bring the number of geo-located tweets from 1% to 25% (using a full disambiguating geocoder that uses all of the user’s available information in the Twitter stream, not just looking for mentions of major cities), tone-coding each tweet using a twitter-customized dictionary of 30,000 terms, and applying a brand-new four-stage heatmap engine (this is where the supercomputer comes in) that makes a map of the number of tweets from or about each location on earth, a second map of the average tone of all tweets for each location, a third analysis of spatial proximity (how close tweets are in an area), and a fourth map as needed for the percent of all of those tweets about a particular topic, which are then all brought together into a single heatmap that takes all of these factors into account, rather than a sequence of multiple maps.”
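
To give a sense of how several per-location layers (tweet counts, tone, spatial proximity, topic share) might be blended into a single heatmap, here is a small illustrative sketch. The grid, weights and scaling choices are assumptions of mine and not the actual Global Twitter Heartbeat engine.

```python
import numpy as np

# Hypothetical per-cell layers on a coarse lat/lon grid: tweet counts,
# average tone (-1..1), spatial proximity score (0..1), and the share of
# tweets about a chosen topic (0..1). Real layers would come from the
# Twitter Decahose; these are random placeholders.
rng = np.random.default_rng(42)
shape = (180, 360)  # a 1-degree grid, purely illustrative
counts    = rng.poisson(5, shape).astype(float)
tone      = rng.uniform(-1, 1, shape)
proximity = rng.uniform(0, 1, shape)
topic     = rng.uniform(0, 1, shape)

def combined_heatmap(counts, tone, proximity, topic,
                     w_count=0.4, w_tone=0.2, w_prox=0.2, w_topic=0.2):
    """Blend the four layers into a single 0..1 intensity surface.
    The log scaling and weights are arbitrary choices for this sketch."""
    count_layer = np.log1p(counts) / np.log1p(counts).max()
    tone_layer = (tone + 1) / 2  # rescale -1..1 to 0..1
    return (w_count * count_layer + w_tone * tone_layer
            + w_prox * proximity + w_topic * topic)

heatmap = combined_heatmap(counts, tone, proximity, topic)
print(heatmap.shape, float(heatmap.min()), float(heatmap.max()))
```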

Kalev added that, “For the purposes of this demonstration we are processing English only, but are seeing a nearly identical spatial profile to geotagged all-languages tweets (though this will affect the tonal results).” The Twitterbeat team is running a live demo showing both a US and world map updated in real time at Supercomputing on a PufferSphere and every few seconds on the SGI website here.


So why did Kalev share all this with the CrisisMappers Network? Because he and his team created a rather unique crisis map composed of all tweets about Hurricane Sandy (see the YouTube video above). “[Y]ou can see how the whole country lights up and how tweets don’t just move linearly up the coast as the storm progresses, capturing the advance impact of such a large storm and its peripheral effects across the country.” The team also did a “similar visualization of the recent US Presidential election showing the chaotic nature of political communication in the Twittersphere.”


To learn more about the project, I recommend watching Kalev’s 2-minute introductory video above.

What Percentage of Tweets Generated During a Crisis Are Relevant for Humanitarian Response?

More than half-a-million tweets were generated during the first three days of Hurricane Sandy and well over 400,000 pictures were shared via Instagram. Last year, over one million tweets were generated every five minutes on the day that Japan was struck by a devastating earthquake and tsunami. Humanitarian organizations are ill-equipped to manage this volume and velocity of information. In fact, the lack of analysis of this “Big Data” has spawned all kinds of suppositions about the perceived value—or lack thereof—that social media holds for emergency response operations. So just what percentage of tweets are relevant for humanitarian response?

One of the very few rigorous and data-driven studies that addresses this question is Dr. Sarah Vieweg‘s 2012 doctoral dissertation on “Situational Awareness in Mass Emergency: Behavioral and Linguistic Analysis of Disaster Tweets.” After manually analyzing four distinct disaster datasets, Vieweg finds that only 8% to 20% of tweets generated during a crisis provide situational awareness. This implies that the vast majority of tweets generated during a crisis have zero added value vis-à-vis humanitarian response. So critics have good reason to be skeptical about the value of social media for disaster response.

At the same time, however, even if we take Vieweg’s lower bound estimate, 8%, this means that over 40,000 tweets generated during the first 72 hours of Hurricane Sandy may very well have provided increased situational awareness. In the case of Japan, more than 100,000 tweets generated every 5 minutes may have provided additional situational awareness. This volume of relevant information is much higher and more real-time than the information available to humanitarian responders via traditional channels.
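
Taking Vieweg’s 8% to 20% range at face value, a quick back-of-the-envelope calculation with the tweet volumes cited above gives a sense of the absolute numbers involved. The volumes below are the rounded floors mentioned in the text (the true counts were higher), so these are conservative estimates.

```python
# Back-of-the-envelope estimate of situational-awareness tweets, using the
# floor volumes cited above and Vieweg's 8%-20% range.
sandy_tweets_72h = 500_000         # "more than half-a-million" in 3 days
japan_tweets_per_5min = 1_000_000  # "over one million" every five minutes

for share in (0.08, 0.20):
    print(f"Sandy, {share:.0%} relevant: at least {int(sandy_tweets_72h * share):,} tweets")
    print(f"Japan, {share:.0%} relevant: at least {int(japan_tweets_per_5min * share):,} tweets / 5 min")
```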

Furthermore, preliminary research by QCRI’s Crisis Computing Team shows that 55.8% of 206,764 tweets generated during a major disaster last year were “Informative,” versus 22% that were “Personal” in nature. In addition, 19% of all tweets represented “Eye-Witness” accounts, 17.4% related to information about “Casualty/Damage,” 37.3% related to “Caution/Advice,” while 16.6% related to “Donations/Other Offers.” Incidentally, the tweets were automatically classified using algorithms developed by QCRI. The accuracy rate of these classifiers ranged from 75% to 81% for the “Informative Classifier,” for example. A hybrid platform could then push those tweets that are inaccurately classified to a micro-tasking platform for manual classification, if need be.
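
The hybrid idea, i.e., letting the classifier handle confident cases and routing uncertain tweets to human volunteers, could be sketched roughly as follows. The confidence threshold, training examples and model below are illustrative assumptions, not QCRI's actual pipeline.

```python
# Sketch of a hybrid pipeline: automatic classification for confident cases,
# micro-tasking (human review) for the rest. Threshold and model are
# illustrative assumptions; this is not the actual QCRI system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off for trusting the classifier

# Hypothetical training data: 1 = "Informative", 0 = "Personal".
train = [
    "Bridge on 5th Ave closed due to flooding",
    "Shelter open at the high school gym",
    "So scared right now, hope my family is ok",
    "Can't sleep, thinking about the storm",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train, labels)

def triage(tweets):
    """Return (auto_labeled, needs_human_review)."""
    auto, manual = [], []
    for tweet, probs in zip(tweets, model.predict_proba(tweets)):
        confidence = probs.max()
        if confidence >= CONFIDENCE_THRESHOLD:
            auto.append((tweet, int(probs.argmax()), float(confidence)))
        else:
            manual.append(tweet)  # push to a micro-tasking platform
    return auto, manual

auto, manual = triage(["Road to the hospital is blocked by debris",
                       "What a crazy week this has been"])
print(len(auto), "auto-labeled;", len(manual), "sent to volunteers")
```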

This research at QCRI constitutes the first phase of our work to develop a Twitter Dashboard for the Humanitarian Cluster System, which you can read more about in this blog post. We are in the process of analyzing several other twitter datasets in order to refine our automatic classifiers. I’ll be sure to share our preliminary observations and final analysis via this blog.

Crowdsourcing the Evaluation of Post-Sandy Building Damage Using Aerial Imagery

Update (Nov 2): 5,739 aerial images tagged by over 3,000 volunteers. Please keep up the outstanding work!

My colleague Schuyler Erle from Humanitarian OpenStreetMap just launched a very interesting effort in response to Hurricane Sandy. He shared the info below via CrisisMappers earlier this morning, which I’m turning into this blog post to help him recruit more volunteers.

Schuyler and team just got their hands on the Civil Air Patrol’s (CAP) super high resolution aerial imagery of the disaster affected areas. They’ve imported this imagery into their Micro-Tasking Server MapMill created by Jeff Warren and are now asking volunteers to help tag the images in terms of the damage depicted in each photo. “The 531 images on the site were taken from the air by CAP over New York, New Jersey, Rhode Island, and Massachusetts on 31 Oct 2012.”

To access this platform, simply click here: http://sandy.hotosm.org. If that link doesn’t work, please try sandy.locative.us.

“For each photo shown, please select ‘ok’ if no building or infrastructure damage is evident; please select ‘not ok’ if some damage or flooding is evident; and please select ‘bad’ if buildings etc. seem to be significantly damaged or underwater. Our *hope* is that the aggregation of the ok/not ok/bad ratings can be used to help guide FEMA resource deployment, or so was indicated might be the case during RELIEF at Camp Roberts this summer.”

A disaster response professional working in the affected areas for FEMA replied (via CrisisMappers) to Schuyler’s efforts to confirm that:

“[G]overnment agencies are working on exploiting satellite imagery for damage assessments and flood extents. The best way that you can help is to help categorize photos using the tool Schuyler provides […].  CAP imagery is critical to our decision making as they are able to work around some of the limitations with satellite imagery so that we can get an area of where the worst damage is. Due to the size of this event there is an overwhelming amount of imagery coming in, your assistance will be greatly appreciated and truly aid in response efforts.  Thank you all for your willingness to help.”

Schuyler notes that volunteers can click on the Grid link from the home page of the Micro-Tasking platform to “zoom in to the coastlines of Massachusetts or New Jersey” and see “judgements about building damages beginning to aggregate in US National Grid cells, which is what FEMA use operationally. Again, the idea and intention is that, as volunteers judge the level of damage evident in each photo, the heat map will change color and indicate at a glance where the worst damage has occurred.” See above screenshot.
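
A rough sketch of how per-photo “ok”/“not ok”/“bad” ratings might be rolled up into per-grid-cell damage scores is shown below. The grid cell IDs, vote weights and sample votes are hypothetical and not taken from the MapMill platform.

```python
from collections import defaultdict

# Hypothetical volunteer votes: (US National Grid cell, rating) pairs.
# Assumed scoring scheme: ok = 0, not ok = 1, bad = 2.
VOTE_WEIGHT = {"ok": 0, "not ok": 1, "bad": 2}

votes = [
    ("18TWL8040", "bad"), ("18TWL8040", "bad"), ("18TWL8040", "not ok"),
    ("18TWL8141", "ok"),  ("18TWL8141", "not ok"),
]

def damage_by_cell(votes):
    """Average the vote weights per grid cell; higher means worse damage,
    which could then drive the colors of a heat map like the one above."""
    totals = defaultdict(list)
    for cell, rating in votes:
        totals[cell].append(VOTE_WEIGHT[rating])
    return {cell: sum(ws) / len(ws) for cell, ws in totals.items()}

for cell, score in sorted(damage_by_cell(votes).items(), key=lambda kv: -kv[1]):
    print(cell, round(score, 2))
```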

Even if you just spend 5 or 10 minutes tagging the imagery, this will still go a long way to supporting FEMA’s response efforts. You can also help by spreading the word and recruiting others to your cause. Thank you!

What Was Novel About Social Media Use During Hurricane Sandy?

We saw the usual spikes in Twitter activity and the typical (reactive) launch of crowdsourced crisis maps. We also saw map mashups combining user-generated content with scientific weather data. Facebook was once again used to inform our social networks: “We are ok” became the most common status update on the site. In addition, thousands of pictures were shared on Instagram (600/minute), documenting both the impending danger & resulting impact of Hurricane Sandy. But was there anything really novel about the use of social media during this latest disaster?

I’m asking not because I claim to know the answer but because I’m genuinely interested and curious. One possible “novelty” that caught my eye was this FrankenFlow experiment to “algorithmically curate” pictures shared on social media. Perhaps another “novelty” was the embedding of webcams within a number of crisis maps, such as those below launched by #HurricaneHacker and Team Rubicon respectively.

Another “novelty” that struck me was how much focus there was on debunking false information being circulated during the hurricane—particularly images. The speed of this debunking was also striking. As regular iRevolution readers will know, “information forensics” is a major interest of mine.

This Tumblr post was one of the first to emerge in response to the fake pictures (30+) of the hurricane swirling around the social media whirlwind. Snopes.com also got in on the action with this post. Within hours, The Atlantic Wire followed with this piece entitled “Think Before You Retweet: How to Spot a Fake Storm Photo.” Shortly after, Alexis Madrigal from The Atlantic published this piece on “Sorting the Real Sandy Photos from the Fakes,” like the one below.

These rapid rumor-bashing efforts led BuzzFeed’s John Herman to claim that Twitter acted as a truth machine: “Twitter’s capacity to spread false information is more than cancelled out by its savage self-correction.” This is not the first time that journalists or researchers have highlighted Twitter’s tendency for self-correction. This peer-reviewed, data-driven study of disaster tweets generated during the 2010 Chile Earthquake reports the same finding.

What other novelties did you come across? Are there other interesting, original and creative uses of social media that ought to be documented for future disaster response efforts? I’d love to hear from you via the comments section below. Thanks!

The Limits of Crowdsourcing Crisis Information and The Promise of Advanced Computing


First, I want to express my sincere gratitude to the dozen or so iRevolution readers who recently contacted me. I have indeed not been blogging for the past few weeks but this does not mean I have decided to stop blogging altogether. I’ve simply been ridiculously busy (and still am!). But I truly, truly appreciate the kind encouragement to continue blogging, so thanks again to all of you who wrote in.

Now, despite the (catchy?) title of this blog post, I am not bashing crowdsourcing or worshipping at the altar of technology. My purpose here is simply to suggest that the crowdsourcing of crisis information is an approach that does not scale very well. I have lost count of the number of humanitarian organizations who said they simply didn’t have hundreds of volunteers available to manually monitor social media and create a live crisis map. Hence my interest in advanced computing solutions.

The past few months at the Qatar Computing Research Institute (QCRI) have made it clear to me that developing and applying advanced computing solutions to address major humanitarian challenges is anything but trivial. I have learned heaps about social computing, machine learning and big data analytics. So I am now more aware of the hurdles but am even more excited than before about the promise that advanced computing holds for the development of next-generation humanitarian technology.

The way forward combines both crowdsourcing and advanced computing. The next generation of humanitarian technologies will take a hybrid approach—at times prioritizing “smart crowdsourcing” and at other times leading with automated algorithms. I shall explain what I mean by smart crowdsourcing in a future post. In the meantime, the video above from my recent talk at TEDxSendai expands on the themes I have just described.

MAQSA: Social Analytics of User Responses to News

Designed by QCRI in partnership with MIT and Al-Jazeera, MAQSA provides an interactive topic-centric dashboard that summarizes news articles and user responses (comments, tweets, etc.) to these news items. The platform thus helps editors and publishers in newsrooms like Al-Jazeera’s better “understand user engagement and audience sentiment evolution on various topics of interest.” In addition, MAQSA “helps news consumers explore public reaction on articles relevant to a topic and refine their exploration via related entities, topics, articles and tweets.” The pilot platform currently uses Al-Jazeera data such as Op-Eds from Al-Jazeera English.

Given a topic such as “The Arab Spring” or “Oil Spill,” the platform combines time, geography and topic to “generate a detailed activity dashboard around relevant articles. The dashboard contains an annotated comment timeline and a social graph of comments. It utilizes commenters’ locations to build maps of comment sentiment and topics by region of the world. Finally, to facilitate exploration, MAQSA provides listings of related entities, articles, and tweets. It algorithmically processes large collections of articles and tweets, and enables the dynamic specification of topics and dates for exploration.”
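
As a toy illustration of the “sentiment by region” idea, the sketch below groups comments by commenter region and averages a per-comment sentiment score. The regions, scores and data structures are my assumptions and say nothing about MAQSA’s internals.

```python
from collections import defaultdict

# Hypothetical comments: (region, sentiment score in -1..1).
# In a real dashboard the sentiment would come from an automatic sentiment
# analyzer and the region from the commenter's stated or inferred location.
comments = [
    ("Middle East", 0.6), ("Middle East", 0.1), ("Europe", -0.4),
    ("North America", 0.2), ("Europe", -0.1),
]

def sentiment_by_region(comments):
    """Average the per-comment sentiment for each region, e.g. to color a map."""
    by_region = defaultdict(list)
    for region, score in comments:
        by_region[region].append(score)
    return {region: sum(s) / len(s) for region, s in by_region.items()}

print(sentiment_by_region(comments))
```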

While others have tried to develop similar dashboards in the past, these have “not taken a topic-centric approach to viewing a collection of news articles with a focus on their user comments in the way we propose.” The team at QCRI has since added a number of exciting new features for Al-Jazeera to try out as widgets on their site. I’ll be sure to blog about these and other updates when they are officially launched. Note that other media companies (e.g., UK Guardian) will also be able to use this platform and widgets once they become public.

As always with such new initiatives, my very first thought and question is: how might we apply them in a humanitarian context? For example, perhaps MAQSA could be repurposed to do social analytics of responses from local stakeholders with respect to humanitarian news articles produced by IRIN, an award-winning humanitarian news and analysis service covering the parts of the world often under-reported, misunderstood or ignored. Perhaps an SMS component could also be added to a MAQSA-IRIN platform to facilitate this. Or perhaps there’s an application for the work that Internews carries out with local journalists and consumers of information around the world. What do you think?

The Best Way to Crowdsource Satellite Imagery Analysis for Disaster Response

My colleague Kirk Morris recently pointed me to this very neat study on iterative versus parallel models of crowdsourcing for the analysis of satellite imagery. The study was carried out by French researcher & engineer Nicolas Maisonneuve for the GIScience 2012 conference.

Nicolas finds that after reaching a certain threshold, adding more volunteers to the parallel model does “not change the representativeness of opinion and thus will not change the consensual output.” His analysis also shows that the value of this threshold has a significant impact on the resulting quality of the parallel work and thus should be chosen carefully. In terms of the iterative approach, Nicolas finds that “the first iterations have a high impact on the final results due to a path dependency effect.” To this end, “stronger commitment during the first steps are thus a primary concern for using such model,” which means that “asking expert/committed users to start,” is important.

Nicolas’s study also reveals that the parallel approach is better able to correct wrong annotations (wrong analysis of the satellite imagery) than the iterative model for images that are fairly straightforward to interpret. In contrast, the iterative model is better suited for handling more ambiguous imagery. But there is a catch: the potential path dependency effect in the iterative model means that “mistakes could be propagated, generating more easily type I errors as the iterations proceed.” In terms of spatial coverage, the iterative model is more efficient since the parallel model leverages redundancy to ensure data quality. Still, Nicolas concludes that the “parallel model provides an output which is more reliable than that of a basic iterative [because] the latter is sensitive to vandalism or knowledge destruction.”

So the question that naturally follows is this: how can parallel and iterative methodologies be combined to produce a better overall result? Perhaps the parallel approach could be used as the default to begin with. However, images that are considered difficult to interpret would get pushed from the parallel workflow to the iterative workflow. The latter would first be processed by experts in order to create favorable path dependency. Could this hybrid approach be the winning strategy?
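
One way to prototype such a hybrid would be to route each image based on how much an initial batch of parallel annotators disagree about it. The sketch below assumes hypothetical labels and an arbitrary agreement threshold; it is only meant to illustrate the routing logic.

```python
from collections import Counter

# Hypothetical first-pass parallel annotations: image -> list of labels.
first_pass = {
    "img_001": ["damaged", "damaged", "damaged", "intact", "damaged"],
    "img_002": ["damaged", "intact", "intact", "damaged", "flooded"],
}

AGREEMENT_THRESHOLD = 0.8  # assumed cut-off; below this the image is "ambiguous"

def route(annotations):
    """Keep the parallel consensus for clear-cut images; send ambiguous ones
    to an iterative, expert-seeded workflow (as suggested above)."""
    consensus, iterative_queue = {}, []
    for image, labels in annotations.items():
        label, count = Counter(labels).most_common(1)[0]
        agreement = count / len(labels)
        if agreement >= AGREEMENT_THRESHOLD:
            consensus[image] = label
        else:
            iterative_queue.append(image)  # experts start the iterative chain
    return consensus, iterative_queue

print(route(first_pass))
```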