Digital Humanitarians: From Haiti Earthquake to Typhoon Yolanda

We’ve been able to process and make sense of a quarter of a million tweets in the aftermath of Typhoon Yolanda. Using both AIDR (still under development) and Twitris, we were able to collect these tweets in real-time and use automated algorithms to filter for both relevancy and uniqueness. The resulting ~55,000 tweets were then uploaded to MicroMappers (still under development). Digital volunteers from the world over used this humanitarian technology platform to tag tweets and, more recently, images from the disaster. At one point, volunteers tagged some 1,500 tweets in just 10 minutes. In parallel, we used machine learning classifiers to automatically identify tweets referring to both urgent needs and offers of help. In sum, the response to Typhoon Yolanda is the first to make full use of advanced computing, i.e., both human computing and machine computing, to make sense of Big (Crisis) Data.
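
For the technically curious, here’s a rough sketch (in Python) of what relevancy and uniqueness filtering can look like. To be clear, this is not the actual AIDR/Twitris pipeline—the keyword list, normalization and hashing scheme below are simplifying assumptions, and the real systems use trained classifiers rather than a fixed vocabulary.

```python
import hashlib
import re

# Hypothetical keyword list; the real AIDR/Twitris filters rely on
# trained classifiers rather than a fixed vocabulary.
RELEVANT_TERMS = {"yolanda", "haiyan", "typhoon", "rescue", "damage", "relief"}

def is_relevant(tweet_text):
    """Crude relevancy filter: keep tweets mentioning disaster terms."""
    words = set(re.findall(r"\w+", tweet_text.lower()))
    return bool(words & RELEVANT_TERMS)

def dedupe_key(tweet_text):
    """Normalize away retweet markers, mentions and URLs, then hash,
    so near-verbatim copies collapse to a single key."""
    text = re.sub(r"(RT\s+)?@\w+:?|https?://\S+", "", tweet_text)
    text = re.sub(r"\W+", " ", text).strip().lower()
    return hashlib.md5(text.encode("utf-8")).hexdigest()

def filter_stream(tweets):
    """Yield unique, relevant tweets from a raw stream."""
    seen = set()
    for tweet in tweets:
        key = dedupe_key(tweet)
        if is_relevant(tweet) and key not in seen:
            seen.add(key)
            yield tweet
```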

ImageClicker YolandaPH

We’ve come a long way since the tragic Haiti Earthquake. There was no way we would’ve been able to pull off the above with the Ushahidi platform. We weren’t able to keep up with even a few thousand tweets a day back then, not to mention images. (Incidentally, MicroMappers can also be used to tag SMS). Furthermore, we had no trained volunteers on standby back when the quake struck. Today, not only do we have a highly experienced network of volunteers from the Standby Volunteer Task Force (SBTF) who serve as first (digital) responders, we also have an ecosystem of volunteers from the Digital Humanitarian Network (DHN). In the case of Typhoon Yolanda, we also had a formal partner, the UN Office for the Coordination of Humanitarian Affairs (OCHA), that officially requested digital humanitarian support. In other words, our efforts are directly in response to clearly articulated information needs. In contrast, the response to Haiti was “supply based” in that we simply pushed out all information that we figured might be of use to humanitarian responders. We did not have a formal partner from the humanitarian sector going into the Haiti operation.

Yolanda Prezi

What this new digital humanitarian operation makes clear is that preparedness, partnerships & appropriate humanitarian technology go a long way to ensuring that our efforts as digital humanitarians add value to the field-based operations in disaster zones. The above Prezi by SBTF co-founder Anahi gives an excellent overview of how these digital humanitarian efforts are being coordinated in response to Yolanda. SBTF Core Team member Justine Mackinnon is spearheading the bulk of these efforts.

While there are many differences between the digital response to Haiti and Yolanda, several key similarities have also emerged. First, neither was perfect, meaning that we learned a lot in both deployments; taking a few steps forward, then a few steps back. Such is the path of innovation, learning by doing. Second, as in Haiti, there’s no way we could do this digital response work without Skype. Third, our operations were affected by telecommunications going offline in the hardest hit areas. We saw an 18.7% drop in relevant tweets on Saturday compared to the day before, for example. Fourth, while the (very) new technologies we are deploying are promising, they are still under development and have a long way to go. Fifth, the biggest heroes in response to Haiti were the volunteers—both from the Haitian Diaspora and beyond. The same is true of Yolanda, with hundreds of volunteers from the world over (including the Philippines and the Diaspora) mobilizing online to offer assistance.

A Filipino humanitarian worker in Quezon City, Philippines, for example, is volunteering her time on MicroMappers. As is a customer care advisor from Eurostar in the UK and a fire officer from Belgium who recruited his uniformed colleagues to join the clicking. We have other volunteer Clickers from Makati (Philippines), Cape Town (South Africa), Canberra & Gold Coast (Australia), Berkeley, Brooklyn, Citrus Heights & Hinesburg (US), Kamloops (Canada), Paris & Marcoussis (France), Geneva (Switzerland), Sevilla (Spain), Den Haag (Holland), Munich (Germany) and Stokkermarke (Denmark) to name just a few! So this is as much a human story as it is one about technology. This is why online communities like MicroMappers are important. So please join our list-serve if you want to be notified when humanitarian organizations need your help.


Typhoon Yolanda: UN Needs Your Help Tagging Crisis Tweets for Disaster Response (Updated)

Final Update 14 [Nov 13th @ 4pm London]: Thank you for clicking to support the UN’s relief operations in the Philippines! We have now completed our mission as digital humanitarian volunteers. The early results of our collective online efforts are described here. Thank you for caring and clicking. Feel free to join our list-serve if you want to be notified when humanitarian organizations need your help again during the next disaster—which we really hope won’t be for a long, long time. In the meantime, our hearts and prayers go out to those affected by this devastating Typhoon.

The United Nations Office for the Coordination of Humanitarian Affairs (OCHA) just activated the Digital Humanitarian Network (DHN) in response to Typhoon Yolanda, which has already been described as possibly one of the strongest Category 5 storms in history. The Standby Volunteer Task Force (SBTF) was thus activated by the DHN to carry out a rapid needs & damage assessment by tagging reports posted to social media. So Ji Lucas and I at QCRI (+ Hemant & Andrew) and Justine Mackinnon from SBTF have launched MicroMappers to microtask the tagging of tweets & images. We need all the help we can get given the volume we’ve collected (and are continuing to collect). This is where you come in!

TweetClicker_PH2

You don’t need any prior experience or training, nor do you need to create an account or even login to use the MicroMappers TweetClicker. If you can read and use a computer mouse, then you’re all set to be a Digital Humanitarian! Just click here to get started. Every tweet will get tagged by 3 different volunteers (to ensure quality control) and those tweets that get identical tags will be shared with our UN colleagues in the Philippines. All this and more is explained in the link above, which will give you a quick intro so you can get started right away. Our UN colleagues need these tags to better understand who needs help and what areas have been affected.
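
For those wondering what the agreement rule looks like in practice, here’s a minimal sketch assuming exactly three tags per tweet and a unanimity rule. The label names are made up; this is an illustration, not MicroMappers’ actual code.

```python
from collections import Counter

def consensus_tag(tags):
    """Return the label if all three volunteers agree, else None.

    `tags` holds the labels assigned to one tweet by three volunteers,
    e.g. ["urgent_need", "urgent_need", "urgent_need"].
    """
    label, votes = Counter(tags).most_common(1)[0]
    return label if votes == len(tags) == 3 else None

# Only unanimously tagged tweets are forwarded to the UN team.
assert consensus_tag(["urgent_need"] * 3) == "urgent_need"
assert consensus_tag(["urgent_need", "damage", "urgent_need"]) is None
```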

ImageClicker YolandaPH

It only takes 3 seconds to tag a tweet or image, so if that’s all the time you have then that’s plenty! Better yet, share this link with family, friends, colleagues etc., and invite them to tag along. We have also launched the ImageClicker to tag images by level of damage. What we need is the World Wide Crowd to mobilize in support of those affected by this devastating disaster. So please spread the word. And keep in mind that this is only the second time we’re using MicroMappers, so we know it is not (yet) perfect : ) Thank you!


p.s. If you wish to receive an alert next time MicroMappers is activated for disaster response, then please join the MicroMappers list-serve here. Thanks!

Previous updates:

Update 1: If you notice that all the tweets (tasks) have been completed, then please check back in 1/2 hour as we’re uploading more tweets on the fly. Thanks!

Update 2: Thanks for all your help! We are getting lots of traffic, so the Clicker is responding very slowly right now. We’re working on improving speed, thanks for your patience!

Update 3: We collected 182,000+ tweets on Friday from 5am-7pm (local time) and have automatically filtered this down to 35,175 tweets based on relevancy and uniqueness. These 35K tweets are being uploaded to the TweetClicker a few thousand tweets at a time. We’ll be repeating all this for just one more day tomorrow (Saturday). Thanks for your continued support!

Update 4: We/you have clicked through all of Friday’s 35K tweets and are currently clicking through today’s 28,202 tweets; we are about 75% of the way through. Many thanks for tagging along with us, please keep up the top class clicking, we’re almost there! (Sunday, 1pm NY time)

Update 5: Thanks for all your help! We’ll be uploading more tweets tomorrow (Monday, November 11th). To be notified, simply join this list-serve. Thanks again! [updated post on Sunday, November 10th at 5.30pm New York]

Update 6: We’ve uploaded more tweets! This is the final stretch, thanks for helping us on this last sprint of clicks!  Feel free to join our list-serve if you want to be notified when new tweets are available, many thanks! If the system says all tweets have been completed, please check again in 1/2hr as we are uploading new tweets around the clock. [updated Monday, November 11th at 9am London]

Update 7 [Nov 11th @ 1pm London]: We’ve just launched the ImageClicker to support the UN’s relief efforts. So please join us in tagging images to provide rapid damage assessments to our humanitarian partners. Our TweetClicker is still in need of your clicks too. If the Clickers are slow, then kindly be patient. If all the tasks are done, please come back in 1/2hr as we’re uploading content to both clickers around the clock. Thanks for caring and helping the relief efforts. An update on the overall digital humanitarian effort is available here.

Update 8 [Nov 11th @ 6.30pm NY]: We’ll be uploading more tweets and images to the TweetClicker & ImageClicker by 7am London on Nov 12th. Thank you very much for supporting these digital humanitarian efforts, the results of which are displayed here. Feel free to join our list-serve if you want to be notified when the Clickers have been fed!

Update 9 [Nov 12th @ 6.30am London]: We’ve fed both our TweetClicker and ImageClicker with new tweets and images. So please join us in clicking away to provide our UN partners with the situational awareness they need to coordinate their important relief efforts on the ground. The results of all our clicks are displayed here. Thank you for helping and for caring. If the Clickers are empty or offline temporarily, please check back again soon for more clicks.

Update 10 [Nov 12th @ 10am New York]: We’re continuing to feed both our TweetClicker and ImageClicker with new tweets and images. So please join us in clicking away to provide our UN partners with the situational awareness they need to coordinate their important relief efforts on the ground. The results of all our clicks are displayed here. Thank you for helping and for caring. If the Clickers are empty or offline temporarily, please check back again soon for more clicks. Try different browsers if the tweets/images are not showing up.

Update 11 [Nov 12th @ 5pm New York]: Only one more day to go! We’ll be feeding our TweetClicker and ImageClicker with new tweets and images by 7am London on the 13th. We will phase out operations by 2pm London, so this is the final sprint. The results of all our clicks are displayed here. Thank you for helping and for caring. If the Clickers are empty or offline temporarily, please check back again soon for more clicks. Try different browsers if the tweets/images are not showing up.

Update 12 [Nov 13th @ 9am London]: This is the last stretch, Clickers! We’ve fed our TweetClicker and ImageClicker with new tweets and images. We’ll be refilling them until 2pm London (10pm Manila) and phasing out shortly thereafter. Given that MicroMappers is still under development, we are pleased with how well this deployment has gone. The results of all our clicks are displayed here. Thank you for helping and for caring. If the Clickers are empty or offline temporarily, please check back again soon for more clicks. Try different browsers if the tweets/images are not showing up.

Update 13 [Nov 13th @ 11am London]: Just 3 hours left! Our UN OCHA colleagues have just asked us to prioritize the ImageClicker, so please focus on that Clicker. We’ll be refilling the ImageClicker until 2pm London (10pm Manila) and phasing out shortly thereafter. Given that MicroMappers is still under development, we are pleased with how well this deployment has gone. The results of all our clicks are displayed here. Thank you for helping and for caring. If the ImageClicker is empty or offline temporarily, please check back again soon for more clicks. Try different browsers if images are not showing up.

Social Media, Disaster Response and the Streetlight Effect

A police officer sees a man searching for his coin under a streetlight. After helping for several minutes, the exasperated officer asks if the man is sure that he lost his coin there. The man says, “No, I lost it in the park a few blocks down the street.” The incredulous officer asks why he’s searching under the streetlight. The man replies, “Well, this is where the light is.”[1] This parable describes the “streetlight effect,” the observational bias that results from using the easiest way to collect information. The streetlight effect is one of the most important criticisms leveled against the use of social media for emergency management. This certainly is a valid concern, but one that needs to be placed into context.

I had the honor of speaking on a UN panel with Hans Rosling in New York last year. During the Q&A, Hans showed Member States a map of cell phone coverage in the Democratic Republic of the Congo (DRC). The map was striking. Barely 10% of the country seemed to have coverage. This one map shut down the entire conversation about the value of mobile technology for data collection during disasters. Now, what Hans didn’t show was a map of the DRC’s population distribution, which reveals that the majority of the country’s population lives in urban areas; areas that have cell phone coverage. Hans’s map was also static and thus did not convey the fact that the number of cell phone subscribers increased by roughly 50% in the year leading up to the panel and ~50% again the year after.

Of course, the number of social media users in the DRC is far, far lower than the country’s 12.4 million unique cell phone subscribers. The map below, for example, shows the location of Twitter users over a 10-day period in October 2013. Now keep in mind that only 2% of users actually geo-tag their tweets. Also, as my colleague Kalev Leetaru recently discovered, the correlation between the location of Twitter users and access to electricity is very high, which means that every place on Earth that is electrified has a high probability of having some level of Twitter activity. Furthermore, Twitter was only launched 7 years ago compared to the first cell phone, which was built 30 years ago. So these are still early days for Twitter. But that doesn’t change the fact that there is clearly very little Twitter traffic in the DRC today. And just like the man in the parable above, we only have access to answers where an “electrified tweet” exists (if we restrict ourselves to the Twitter streetlight).

DRC twitter map 2

But this raises the following question, which is almost always overlooked: too little traffic for what? This study by Harvard colleagues, for example, found that Twitter was faster than (and as accurate as) official sources at detecting the start and early progress of cholera after the 2010 earthquake. And yet, the corresponding Twitter map of Haiti does not show significantly more activity than the DRC map over the same 10-day period. Keep in mind there were far fewer Twitter users in Haiti four years ago (i.e., before the earthquake). Other researchers have recently shown that “micro-crises” can also be detected via Twitter even though said crises elicit very few tweets by definition. More on that here.

Haiti twitter map

But why limit ourselves to the Twitter streetlight? Only a handful of “puzzle pieces” in our Haiti jigsaw may be tweets, but that doesn’t mean they can’t complement other pieces taken from traditional datasets and even other social media channels. Remember that there are five times more Facebook users than Twitter users. In certain contexts, however, social media may be of zero added value. I’ve reiterated this point again in recent talks at the Council on Foreign Relations and the UN. Social media is forming a new “nervous system” for our planet, but one that is still very young, even premature in places and certainly imperfect in representation. Then again, so was 911 in the 1970s and 1980s, as explained here. In any event, focusing on more developed parts of the system (like Indonesia’s Twitter footprint below) makes more sense for some questions, as does complementing this new nervous system with other, more mature data sources such as mainstream media via GDELT, as advocated here.

Indonesia twitter map2

The Twitter map of the Manila area below is also the result of 10-day traffic. While “only” ~12 million Filipinos (13% of the country) live in Manila, it behooves us to remember that urban populations across the world are booming. In just over 2,000 days, more than half of the population in the world’s developing regions will be living in urban areas, according to the UN. Meanwhile, the rural population of developing countries will decline by half-a-billion in the coming decades. At the same time, these rural populations will also grow a larger social media footprint since mobile phone penetration rates already stand at 89% in developing countries according to the latest ITU study (PDF). With Google and Facebook making it their (for-profit) mission to connect those off the digital grid, it is only a matter of time until very rural communities get online and click on ads.

Manila twitter map

The radical increase in population density means that urban areas will become even more vulnerable to major disasters (hence the Rockefeller Foundation’s program on 100 Resilient Cities). To be sure, as Rousseau noted in a letter to Voltaire after the massive 1755 Portugal Earthquake, “an earthquake occurring in wilderness would not be important to society.” In other words, disaster risk is a function of population density. At the same time, however, a denser population also means more proverbial streetlights. But just as we don’t need a high density of streetlights to find our way at night, we hardly need everyone to be on social media for tweets and Instagram pictures to shed some light during disasters and facilitate self-organized disaster response at the grassroots level.

Credit: Heidi RYDER Photography

My good friend Jaroslav Valůch recounted a recent conversation he had with an old fireman in a very small town in Eastern Europe who had never heard of Twitter, Facebook or crowdsourcing. The old man said: “During crisis, for us, the firemen, it is like having a dark house where only some rooms are lit (i.e., information from mayors and other official local sources in villages and cities affected). What you do [with social media and crowdsourcing], is that you are lighting up more rooms for us. So don’t worry, it is enough.”

No doubt Hans Rosling will show another dramatic map if I happen to sit on another panel with him. But this time I’ll offer context so that instead of ending the discussion, his map will hopefully catalyze a more informed debate. In any event, I suspect (and hope that) Hans won’t be the only one objecting to my optimism in this blog post. So as always, I welcome feedback from iRevolution readers. And as my colleague Andrew Zolli is fond of reminding folks at PopTech:

“Be tough on ideas, gentle on people.”


Big Data & Disaster Response: Even More Wrong Assumptions

“Arguing that Big Data isn’t all it’s cracked up to be is a straw man, pure and simple—because no one should think it’s magic to begin with.” Since citing this point in my previous post on Big Data for Disaster Response: A List of Wrong Assumptions, I’ve come across more mischaracterizations of Big (Crisis) Data. Most of these fallacies originate from the Ivory Towers; from [a small number of] social scientists who have carried out one or two studies on the use of social media during disasters and repeat their findings ad nauseam as if their conclusions are the final word on a very new area of research.


The mischaracterization of “Big Data and Sample Bias”, for example, typically arises when academics point out that marginalized communities do not have access to social media. First things first: I highly recommend reading “Big Data and Its Exclusions,” published by Stanford Law Review. While the piece does not address Big Crisis Data, it is nevertheless instructive when thinking about social media for emergency management. Secondly, identifying who “speaks” (and who does not speak) on social media during humanitarian crises is of course imperative, but that’s exactly why the argument about sample bias is such a straw man—all of my humanitarian colleagues know full well that social media reports are not representative. They live in the real world where the vast majority of data they have access to is unrepresentative and imperfect—hence the importance of drawing on as many sources as possible, including social media. Random sampling during disasters is a quixotic luxury, which explains why humanitarian colleagues seek “good enough” data and methods.

Some academics also seem to believe that disaster responders ignore all other traditional sources of crisis information in favor of social media. This means, to follow their argument, that marginalized communities have no access to other communication lifelines if they are not active on social media. One popular observation is the “revelation” that some marginalized neighborhoods in New York posted very few tweets during Hurricane Sandy. Why some academics want us to be surprised by this, I know not. And why they seem to imply that emergency management centers will thus ignore these communities (since they apparently only respond to Twitter) is also a mystery. What I do know is that social capital and the use of traditional emergency communication channels do not disappear just because academics choose to study tweets. Social media is simply another node in the pre-existing ecosystem of crisis information.

negative space

Furthermore, the fact that very few tweets came out of the Rockaways during Hurricane Sandy can be valuable information for disaster responders, a point that academics often overlook. To be sure, monitoring  social media footprints during disasters can help humanitarians get a better picture of the “negative space” and thus infer what they might be missing, especially when comparing these “negative footprints” with data from traditional sources. Indeed, knowing what you don’t know is a key component of situational awareness. No one wants blind spots, and knowing who is not speaking on social media during disasters can help correct said blind spots. Moreover, the contours of a community’s social media footprint during a disaster can shed light on how neighboring areas (that are not documented on social media) may have been affected. When I spoke about this with humanitarian colleagues in Geneva this week, they fully agreed with my line of reasoning and even added that they already apply “good enough” methods of inference with traditional crisis data.

My PopTech colleague Andrew Zolli is fond of saying that we shape the world by the questions we ask. My UN colleague Andrej Verity recently reminded me that one of the most valuable aspects of social media for humanitarian response is that it helps us to ask important questions (that would not otherwise be posed) when coordinating disaster relief. So the next time you hear an academic go on about [a presentation on] issues of bias and exclusion, feel free to share the above along with this list of wrong assumptions.

Most importantly, tell them [say] this: “Arguing that Big Data isn’t all it’s cracked up to be is a straw man, pure and simple—because no one should think it’s magic to begin with.” It is high time we stop mischaracterizing Big Crisis Data. What we need instead is a can-do, problem-solving attitude. Otherwise we’ll all fall prey to the Smart-Talk trap.


Mining Mainstream Media for Emergency Management 2.0

There is so much attention (and hype) around the use of social media for emergency management (SMEM) that we often forget about mainstream media when it comes to next generation humanitarian technologies. The news media across the globe has become increasingly digital in recent years—and thus analyzable in real-time. Twitter added little value during the recent Pakistan Earthquake, for example. Instead, it was the Pakistani mainstream media that provided the immediate situational awareness necessary for a preliminary damage and needs assessment. This means that our humanitarian technologies need to ingest both social media and mainstream media feeds. 

Newspaper-covers

Now, this is hardly revolutionary. I used to work for a data mining company ten years ago that focused on analyzing Reuters Newswires in real-time using natural language processing (NLP). This was for a conflict early warning system we were developing. The added value of monitoring mainstream media for crisis mapping purposes has also been demonstrated repeatedly in recent years. In this study from 2008, I showed that a crisis map of Kenya was more complete when sources included mainstream media as well as user-generated content.

So why revisit mainstream media now? Simple: GDELT, the Global Database of Events, Language and Tone that my colleague Kalev Leetaru launched earlier this year. GDELT is the single largest public and global event-data catalog ever developed. Digital Humanitarians need no longer monitor mainstream media manually. We can simply develop a dedicated interface on top of GDELT to automatically extract situational awareness information for disaster response purposes. We’re already doing this with Twitter, so why not extend the approach to global digital mainstream media as well?

GDELT data is drawn from a “cross-section of all major international, national, regional, local, and hyper-local news sources, both print and broadcast, from nearly every corner of the globe, in both English and vernacular.” All identified events are automatically coded using the CAMEO coding framework (although Kalev has since added several dozen additional event-types). In short, GDELT codes all events by the actors involved, the type of event, location, time and other meta-data attributes. For example, actors include “Refugees,” “United Nations,” and “NGO”. Event-types include variables such as “Affect”, which captures everything from refugees to displaced persons, evacuations, etc. Humanitarian crises, aid, disasters and disaster relief are also included as event-types; the “Provision of Humanitarian Aid” is one example. GDELT data is currently updated every 24 hours, and Kalev has plans to provide hourly updates in the near future and ultimately 30-minute updates.
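
To give a concrete sense of how a humanitarian feed could be built on top of GDELT, here’s a rough Python sketch that pulls one daily event file and keeps only “Provide humanitarian aid” events. The URL pattern and column positions reflect my reading of the GDELT 1.0 event codebook and should be double-checked against the official documentation; consider this an illustration rather than production code.

```python
import io
import urllib.request
import zipfile

import pandas as pd

# Daily GDELT 1.0 event export: tab-delimited, no header row. The URL
# pattern and column positions below follow the GDELT event codebook;
# verify them against the current documentation before relying on this.
URL = "http://data.gdeltproject.org/events/20131111.export.CSV.zip"

raw = urllib.request.urlopen(URL).read()
with zipfile.ZipFile(io.BytesIO(raw)) as zf:
    df = pd.read_csv(zf.open(zf.namelist()[0]), sep="\t",
                     header=None, dtype=str)

# Assumed column indices: 6 = Actor1Name, 26 = EventCode,
# 53/54 = ActionGeo_Lat/Long, 57 = SOURCEURL.
df = df.rename(columns={6: "actor1", 26: "event_code",
                        53: "lat", 54: "lon", 57: "source_url"})

# CAMEO event code 073 = "Provide humanitarian aid".
aid_events = df[df["event_code"] == "073"]
print(aid_events[["actor1", "lat", "lon", "source_url"]].head())
```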

GDELT GKG

If this isn’t impressive enough, Kalev and colleagues have just launched the GDELT Global Knowledge Graph (GKG). “To sum up the GKG in a single sentence, it connects every person, organization, location, count, theme, news source, and event across the planet into a single massive network that captures what’s happening around the world, what its context is and who’s involved, and how the world is feeling about it, every single day.” The figure above is based on a subset of a single day of the GDELT Knowledge Graph, showing “how the cities of the world are connected to each other in a single day’s worth of news.” A customized version of the GKG could perhaps prove useful for UN OCHA’s “Who Does What, Where” (3Ws) directory in the future.

I’ve had long conversations with Kalev this month about leveraging GDELT for disaster response and he is very supportive of the idea. My hope is that we’ll be able to add a GDELT feed to MicroMappers next year. I’m also wondering whether we could eventually create a version of the AIDR platform that ingests GDELT data instead of (or in addition to) Twitter. There is real potential here, which is why I’m excited that my colleagues at OCHA are exploring GDELT for humanitarian response. I’ll be meeting with them this week and next to explore ways to collaborate on making the most of GDELT for humanitarian response.


Note: Mainstream media obviously includes television and radio as well. Some colleagues of mine in Austria are looking at triangulating television broadcasts with text-based media and social media for a European project.

Automatically Identifying Eyewitness Reporters on Twitter During Disasters

My colleague Kate Starbird recently shared a very neat study entitled “Learning from the Crowd: Collaborative Filtering Techniques for Identifying On-the-Ground Twitterers during Mass Disruptions” (PDF). As she and her co-authors rightly argue, “most Twitter activity during mass disruption events is generated by the remote crowd.” So can we use advanced computing to rapidly identify Twitter users who are reporting from ground zero? The answer is yes.


An important indicator of whether or not a Twitter user is reporting from the scene of a crisis is the number of times they are retweeted. During the Egyptian revolution in early 2011, “nearly 30% of highly retweeted Twitter users were physically present at those protest events.” Kate et al. drew on this insight to study tweets posted during the Occupy Wall Street (OWS) protests in September 2011. The authors manually analyzed a sample of more than 2,300 Twitter users to determine which were tweeting from the protests. They found that 4.5% of Twitter users in their sample were actually onsite. Using this dataset as training data, Kate et al. were able to develop a classifier that can automatically identify Twitter users reporting from the protests with an accuracy just shy of 70%. I expect that more training data could very well help increase this accuracy score.
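
To make the approach concrete, here’s a hedged sketch of this kind of supervised classifier. The features and synthetic training data are purely illustrative stand-ins—the actual study derives its features from how the remote crowd interacts with each user—but the train-and-evaluate pattern is the same.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a hand-labeled sample (~4.5% on the ground);
# both the features and their distributions are illustrative, not the
# paper's actual crowd-derived feature set.
rng = np.random.default_rng(0)
n = 2300
y = (rng.random(n) < 0.045).astype(int)  # 1 = tweeting from ground zero
X = np.column_stack([
    rng.poisson(5 + 40 * y),             # times retweeted by the crowd
    rng.poisson(300, n),                 # follower count
    rng.poisson(3 + 10 * y),             # event tweets posted
    rng.random(n) < (0.02 + 0.3 * y),    # posts geotagged tweets
]).astype(float)

clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```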

In any event, “the information resulting from this or any filtering technique must be further combined with human judgment to assess its accuracy.” As the authors rightly note, “this ‘limitation’ fits well within an information space that is witnessing the rise of digital volunteer communities who monitor multiple data sources, including social media, looking to identify and amplify new information coming from the ground.” To be sure, “For volunteers like these, the use of techniques that increase the signal to noise ratio in the data has the potential to drastically reduce the amount of work they must do. The model that we have outlined does not result in perfect classification, but it does increase this signal-to-noise ratio substantially—tripling it in fact.”

I really hope that someone will leverage Kate’s important work to develop a standalone platform that automatically generates a list of Twitter users who are reporting from disaster-affected areas. This would be a very worthwhile contribution to the ecosystem of next-generation humanitarian technologies. In the meantime, perhaps QCRI’s Artificial Intelligence for Disaster Response (AIDR) platform will help digital humanitarians automatically identify tweets posted by eyewitnesses. I’m optimistic since we were able to create a machine learning classifier with an accuracy of 80%-90% for eyewitness tweets. More on this in our recent study.

MOchin - talked to family

One question remains: how do we automatically identify tweets like the one above? This person is not an eyewitness but was likely on the phone with her family, who are closer to the action. How do we develop a classifier to catch these “second-hand” eyewitness reports?


Why Anonymity is Important for Truth and Trustworthiness Online

Philosophy professor Karen Frost-Arnold has just published a highly lucid analysis of the dangers that come with Internet accountability (PDF). While the anonymity provided by social media can facilitate the spread of lies, Karen rightly argues that preventing anonymity can undermine online communities by stifling communication and spreading ignorance, thus leading to a larger volume of untrustworthy information. Her insights are instructive for those interested in information forensics and digital humanitarian action.


To make her case, Karen distinguishes between error-avoidance and truth-attainment. The former seeks to avoid false beliefs while the latter seeks to attain true belief. Take mainstream and social media, for example. Some argue that the “value of traditional media surpasses that of the blogosphere […] because the traditional media are superior at filtering out false claims” since professional journalists “reduce the number of errors that might otherwise be reported and believed.” Others counter this assertion: “People who confine themselves to a filtered medium may well avoid believing falsehoods (if the filters are working well), but inevitably they will also miss out on valuable knowledge,” including many true beliefs.

Karen argues that Internet anonymity is at odds with both error-avoiding purists and truth-seeking purists. For example, “some experimental evidence indicates that anonymity in computer-mediated discussion increases the quantity and novelty of ideas shared.” In addition, anonymity provides a measure of safety. This is particularly important for digital activists and others who are vulnerable and oppressed. Without this anonymity, important knowledge may not be shared. To this end, “Removal of anonymity could deprive the community of true beliefs spread by reports from socially threatened groups. Without online anonymity, activists, citizen journalists, and members of many socially stigmatized groups are much less likely to take the risk of sharing what they know with others.”

This leads to decreased participation, which in turn undermines the diversity of online communities and their ability to detect errors. To be sure, “anonymity can enhance error-detection by enabling increased transformative criticism to weed out error and bias.” In fact, “anonymity enables such groups to share criticisms of false beliefs. These criticisms can lead community members to reject or suspend judgment on false claims.” In other words, “Blogging and tweeting are not simply means of disseminating knowledge claims; they are also means of challenging, criticizing & uncovering errors in others’ knowledge claims.” As Karen rightly notes, “The error-uncovering efficacy of such criticism is enhanced by the anonymity that facilitates participation by diverse groups who would otherwise, for fear of sanction, not join the discussion. Removing anonymity risks silencing their valuable criticisms.” In sum, “anonymity facilitates error detection as well as truth attainment.”


Karen thus argues for internet norms of civility instead of barring Internet anonymity. She also outlines the many costs of enforcing the use of real-world identities online. Detecting false identities is both time and resource intensive. I experienced this first-hand during the Libya Crisis Map operation. Investigating online identities diverts time and resources away from obtaining other valuable truths and detecting other important errors. Moreover, this type of investigative accountability “can have a dampening effect on internet speech as those who desire anonymity avoid making surprising claims that might raise the suspicions of potential investigators.” This curtails the sharing of valuable truths.

“To prevent the problem of disproportionate investigation of marginalized and minority users,” Karen writes that online communities “need mechanisms for checking the biases of potential investigators.” To this end, “if the question of whether some internet speech merits investigation is debated within a community, then as the diversity of that community increases, the likelihood increases that biased reasons for suspicion will be challenged.”

Karen also turns to recent research in behavioral and experimental economics, sociology and psychology for potential solutions. For example, “People appear less likely to lie when the lie only gives them a small benefit but does the recipient a great harm.” Making this possible harm more visible to would-be perpetrators may dissuade dishonest actions. Research also shows that “when people are asked to reflect on their own moral values or read a code of ethics before being tempted with an opportunity for profitable deception, they are less likely to be dishonest, even when there is no risk of dishonesty being detected.” This is precisely the rationale behind my piece on crowdsourcing honesty.


See also:

  • Crowdsourcing Critical Thinking to Verify Social Media [link]
  • Truth in the Age of Social Media: A Big Data Challenge [link]

Analyzing Fake Content on Twitter During Boston Marathon Bombings

As iRevolution readers already know, the application of Information Forensics to social media is one of my primary areas of interest. So I’m always on the lookout for new and related studies, such as this one (PDF), which was just published by colleagues of mine in India. The study by Aditi Gupta et al. analyzes fake content shared on Twitter during the Boston Marathon Bombings earlier this year.


Gupta et al. collected close to 8 million unique tweets posted by 3.7 million unique users between April 15-19th, 2013. The table below provides more details. The authors found that rumors and fake content comprised 29% of the content that went viral on Twitter, while 51% of the content constituted generic opinions and comments. The remaining 20% relayed true information. Interestingly, approximately 75% of fake tweets were posted via mobile devices, compared to 64% of true tweets.

Table1 Gupta et al

The authors also found that many users with high social reputation and verified accounts were responsible for spreading the bulk of the fake content posted to Twitter. Indeed, the study shows that fake content did not travel rapidly during the first hour after the bombing. Rumors and fake information only go viral after Twitter users with large numbers of followers start propagating the fake content. To this end, “determining whether some information is true or fake, based on only factors based on high number of followers and verified accounts is not possible in the initial hours.”

Gupta et al. also identified close to 32,000 new Twitter accounts created between April 15-19 that also posted at least one tweet about the bombings. About 20% (6,073 accounts) of these new accounts were subsequently suspended by Twitter. The authors found that 98.7% of these suspended accounts did not include the word Boston in their names and usernames. They also note that some of these deleted accounts were “quite influential” during the Boston tragedy. The figure below depicts the number of suspended Twitter accounts created in the hours and days following the blast.

Figure 2 Gupta et al

The authors also carried out some basic social network analysis of the suspended Twitter accounts. First, they removed from the analysis all suspended accounts that did not interact with each other, which left just 69 accounts. Next, they analyzed the network topology of these 69 accounts, which produced four distinct graph structures: Single Link, Closed Community, Star Topology and Self-Loops. These are displayed in the figure below.

Figure 3 Gupta et al

The two most interesting graphs are the Closed Community and Star Topology graphs—the second and third graphs in the figure above.

Closed Community: Users that retweet and mention each other, forming a closed community as indicated by the high closeness centrality values produced by the social network analysis. “All these nodes have similar usernames too, all usernames have the same prefix and only numbers in the suffixes are different. This indicates that either these profiles were created by same or similar minded people for posting common propaganda posts.” Gupta et al. analyzed the content posted by these users and found that all were “tweeting the same propaganda and hate filled tweet.”

Star Topology: Easily mistaken for the authentic “BostonMarathon” Twitter account, the fake account “BostonMarathons” created plenty of confusion. Many users propagated the fake content posted by the BostonMarathons account. As the authors note, “Impersonation or creating fake profiles is a crime that results in identity theft and is punishable by law in many countries.”

The automatic detection of these network structures on Twitter may enable us to detect and counter fake content in the future. In the meantime, my colleagues and I at QCRI are collaborating with Aditi Gupta et al. to develop a “Credibility Plugin” for Twitter based on this analysis and earlier peer-reviewed research carried out by my colleague ChaTo. Stay tuned for updates.
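
To illustrate what such automatic detection might look like, here’s a small sketch using NetworkX. The interaction graph is invented, and the two heuristics simply flag the structures described above: a hub absorbing most in-links (star topology) and a strongly connected cluster (closed community).

```python
import networkx as nx

# Toy retweet/mention graph among suspended accounts; the edges and
# account names are invented for illustration only.
G = nx.DiGraph()
G.add_edges_from([
    ("acct_01", "BostonMarathons"),  # many accounts point at one hub
    ("acct_02", "BostonMarathons"),
    ("acct_03", "BostonMarathons"),
    ("spam_1", "spam_2"), ("spam_2", "spam_3"), ("spam_3", "spam_1"),
])

# A star topology surfaces as a single node absorbing most in-links.
hub, links = max(dict(G.in_degree()).items(), key=lambda kv: kv[1])
if links >= 3:
    print(f"possible star hub: {hub} ({links} in-links)")

# A closed community surfaces as a strongly connected cluster whose
# members also score high on closeness centrality.
closeness = nx.closeness_centrality(G)
for community in nx.strongly_connected_components(G):
    if len(community) >= 3:
        print("closed community:", sorted(community),
              "closeness:", [round(closeness[n], 2) for n in sorted(community)])
```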


See also:

  • Boston Bombings: Analyzing First 1,000 Seconds on Twitter [link]
  • Taking the Pulse of the Boston Bombings on Twitter [link]
  • Predicting the Credibility of Disaster Tweets Automatically [link]
  • Auto-Ranking Credibility of Tweets During Major Events [link]
  • Auto-Identifying Fake Images on Twitter During Disasters [link]
  • How to Verify Crowdsourced Information from Social Media [link]
  • Crowdsourcing Critical Thinking to Verify Social Media [link]

World Disasters Report: Next Generation Humanitarian Technology

This year’s World Disasters Report was just released this morning. I had the honor of authoring Chapter 3 on “Strengthening Humanitarian Information: The Role of Technology.” The chapter focuses on the rise of “Digital Humanitarians” and explains how “Next Generation Humanitarian Technology” is used to manage Big (Crisis) Data. The chapter complements the groundbreaking report “Humanitarianism in the Network Age” published by UN OCHA earlier this year.

The key topics addressed in the chapter include:

  • Big (Crisis) Data
  • Self-Organized Disaster Response
  • Crowdsourcing & Bounded Crowdsourcing
  • Verifying Crowdsourced Information
  • Volunteer & Technical Communities
  • Digital Humanitarians
  • Libya Crisis Map
  • Typhoon Pablo Crisis Map
  • Syria Crisis Map
  • Microtasking for Disaster Response
  • MicroMappers
  • Machine Learning for Disaster Response
  • Artificial Intelligence for Disaster Response (AIDR)
  • American Red Cross Digital Operations Center
  • Data Protection and Security
  • Policymaking for Humanitarian Technology

I’m particularly interested in getting feedback on this chapter, so feel free to pose any comments or questions you may have in the comments section below.


See also:

  • What is Big (Crisis) Data? [link]
  • Humanitarianism in the Network Age [link]
  • Predicting Credibility of Disaster Tweets [link]
  • Crowdsourced Verification for Disaster Response [link]
  • MicroMappers: Microtasking for Disaster Response [link]
  • AIDR: Artificial Intelligence for Disaster Response [link]
  • Research Agenda for Next Generation Humanitarian Tech [link]

Humanitarian Crisis Computing 101

Disaster-affected communities are increasingly becoming “digital” communities. That is, they increasingly use mobile technology & social media to communicate during crises. I often refer to this user-generated content as Big (Crisis) Data. Humanitarian crisis computing seeks to rapidly identify informative, actionable and credible content in this growing stack of real-time information. The challenge is akin to finding the proverbial needle in the haystack since the vast majority of reports posted on social media are often not relevant for humanitarian response. This is largely a result of the demand versus supply problem described here.

bd0

In any event, the few “needles” of information that are relevant can relay information that is vital and indeed life-saving for relief efforts—both traditional top-down efforts and more bottom-up grassroots efforts. When disaster strikes, we increasingly see social media traffic explode. We know there are important “pins” of relevant information hidden in this growing stack of information, but how do we find them in real-time?

bd2

Humanitarian organizations are ill-equipped to manage the deluge of Big Crisis Data. They tend to sift through the stack of information manually, which means they aren’t able to process more than a small volume of information. This is represented by the dotted green line in the picture below. Big Data is often described as a problem of filter failure. Our manual filters cannot manage the large volume, velocity and variety of information posted on social media during disasters. So all the information above the dotted line, Big Data, is completely ignored.

bd3

This is where Advanced Computing comes in. Advanced Computing uses Human and Machine Computing to manage Big Data and reduce filter failure, thus allowing humanitarian organizations to process a larger volume, velocity and variety of crisis information in less time. In other words, Advanced Computing helps us push the dotted green line up the information stack.

bd4

In the early days of digital humanitarian response, we used crowdsourcing to search through the haystack of user-generated content posted during disasters. Note that said content can also include text messages (SMS), as in Haiti. Crowdsourcing crisis information is not as much fun as the picture below would suggest, however. In fact, crowdsourcing crisis information was (and can still be) quite a mess and a big pain in the haystack. Needless to say, crowdsourcing is not the best filter to make sense of Big Crisis Data.

bd5

Recently, digital humanitarians have turned to microtasking crisis information as described here and here. The UK Guardian and Wired have also written about this novel shift from crowdsourcing to microtasking.

bd6

Microtasking basically turns a haystack into little blocks of stacks. Each micro-stack is then processed by one or more digital humanitarian volunteers. Unlike crowdsourcing, a microtasking approach to filtering crisis information is highly scalable, which is why we recently launched MicroMappers.

bd7

The smaller the micro-stack, the easier the tasks and the faster they can be carried out by a greater number of volunteers. For example, instead of having 10 people classify 10,000 tweets based on the Cluster System, microtasking makes it very easy for 1,000 people to classify 10 tweets each. The former would take hours while the latter mere minutes. In response to the recent earthquake in Pakistan, some 100 volunteers used MicroMappers to classify 30,000+ tweets in about 30 hours, for example.
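
The scaling logic is simple enough to express in a few lines. The sketch below is a generic illustration of chunking a stack into micro-tasks, not MicroMappers’ actual implementation.

```python
def microtask_batches(items, batch_size=10):
    """Split a big stack of items into micro-stacks of `batch_size`."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

tweets = [f"tweet {i}" for i in range(10_000)]
batches = list(microtask_batches(tweets))
# 10,000 tweets become 1,000 micro-tasks of 10 tweets each, so a
# thousand volunteers doing one tiny task each finish in minutes.
print(len(batches), "micro-tasks of", len(batches[0]), "tweets each")
```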

bd8

Machine Computing, in contrast, uses natural language processing (NLP) and machine learning (ML) to “quantify” the haystack of user-generated content posted on social media during disasters. This enables us to automatically identify relevant “needles” of information.

bd9

An example of a Machine Learning approach to crisis computing is the Artificial Intelligence for Disaster Response (AIDR) platform. Using AIDR, users can teach the platform to automatically identify relevant information from Twitter during disasters. For example, AIDR can be used to automatically identify individual tweets that relay urgent needs from a haystack of millions of tweets.
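
For readers who want a feel for the underlying mechanics, here’s a minimal sketch of that supervised-learning loop using scikit-learn. The example tweets and labels are hypothetical, and AIDR’s actual classifiers are more sophisticated; this simply illustrates the train-then-filter pattern.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical volunteer-labeled tweets; AIDR gathers such labels via
# microtasking and retrains its classifiers as new labels arrive.
train_texts = [
    "we urgently need water and medicine in Tacloban",
    "no food for 3 days, please send help",
    "praying for everyone affected by the typhoon",
    "news report: typhoon makes landfall in the Philippines",
]
train_labels = [1, 1, 0, 0]  # 1 = urgent need, 0 = other

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(train_texts, train_labels)

# Score incoming tweets and surface probable urgent needs.
incoming = ["family trapped on roof, needs rescue and clean water"]
print(model.predict_proba(incoming)[0][1])  # P(urgent need)
```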

bd11

The pictures above are taken from the slide deck I put together for a keynote address I recently gave at the Canadian Ministry of Foreign Affairs.
