Category Archives: Humanitarian Technologies

Results: Analyzing 2 Million Disaster Tweets from Oklahoma Tornado

Thanks to the excellent work carried out by my colleagues Hemant Purohit and Professor Amit Sheth, we were able to collect 2.7 million tweets posted in the aftermath of the EF5 Tornado that devastated Moore, Oklahoma. Hemant, who recently spent half a year with us at QCRI, kindly took the lead on carrying out some preliminary analysis of the disaster data. He sampled 2.1 million tweets posted during the first 48 hours for the analysis below.


About 7% of these tweets (~146,000 tweets) were related to donations of resources and services such as money, shelter, food, clothing, medical supplies and volunteer assistance. Many of the donations-related tweets were informative in nature, e.g.: “As President Obama said this morning, if you want to help the people of Moore, visit [link]”. Approximately 1.3% of the tweets (about 30,000 tweets) referred to the provision of financial assistance to the disaster-affected population. Just over 400 unique tweets sought non-monetary donations, such as: “please help get the word out, we are accepting kid clothes to send to the lil angels in Oklahoma. Drop off […]”.

Exactly 152 unique tweets related to offers of help were posted within the first 48 hours of the Tornado. The vast majority of these were asking how to get involved in helping others affected by the disaster. For example: “Anyone know how to get involved to help the tornado victims in Oklahoma??#tornado #oklahomacity” and “I want to donate to the Oklahoma cause shoes clothes even food if I can.” These two offers of help are actually automatically “matchable”, making the notion of a “Match.com” for disaster response a distinct possibility. Indeed, Hemant has been working with my team and me at QCRI to develop algorithms (classifiers) that not only identify relevant needs/offers from Twitter automatically but also suggest matches as a result.
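To make the matching idea concrete, here is a minimal Python sketch of how needs/offers might be identified and paired. The keyword patterns and resource list are illustrative stand-ins, not the actual QCRI classifiers, which are trained on labeled disaster tweets:

```python
import re

# Hypothetical hand-written patterns; a production system would use
# supervised classifiers trained on labeled disaster tweets instead.
OFFER_PATTERNS = [r"\bi want to donate\b",
                  r"\bhow (do i|to) (get involved|help)\b",
                  r"\bwe are accepting\b"]
REQUEST_PATTERNS = [r"\bwe need\b", r"\bplease send\b",
                    r"\blooking for donations\b"]
RESOURCES = ["clothes", "food", "shoes", "money", "shelter", "medical"]

def classify(tweet):
    """Label a tweet as an 'offer', a 'request', or neither (None)."""
    text = tweet.lower()
    if any(re.search(p, text) for p in OFFER_PATTERNS):
        return "offer"
    if any(re.search(p, text) for p in REQUEST_PATTERNS):
        return "request"
    return None

def resources_in(tweet):
    """Extract the resource types mentioned in a tweet."""
    text = tweet.lower()
    return {r for r in RESOURCES if r in text}

def match(offers, requests):
    """Pair each offer with every request sharing a resource type."""
    return [(o, r) for o in offers for r in requests
            if resources_in(o) & resources_in(r)]
```

A real system would also add location and time constraints to the matching step, since an offer of clothes is only useful if it can physically reach the request.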

Some readers may be surprised to learn that “only” several hundred unique tweets (out of 2+ million) were related to needs/offers. The first point to keep in mind is that social media complements rather than replaces traditional information sources. All of us working in this space fully recognize that we are looking for the equivalent of needles in a haystack. But these “needles” may contain real-time, life-saving information. Second, a significant number of disaster tweets are retweets. This is not a negative; Twitter is particularly useful for rapid information dissemination during crises. Third, while there were “only” 152 unique tweets offering help, this still represents over 130 Twitter users who were actively seeking ways to help pro bono within 48 hours of the disaster. Plus, they are automatically identifiable and directly contactable. So these volunteers could also be recruited as digital humanitarian volunteers for MicroMappers, for example. Fourth, the number of Twitter users continues to skyrocket. In 2011, Twitter had 100 million monthly active users. This figure doubled in 2012. Fifth, as I’ve explained here, if disaster responders want to increase the number of relevant disaster tweets, they need to create demand for them. Enlightened leadership and policy are necessary. This brings me to point six: we were “only” able to collect ~2 million tweets but suspect that as many as 10 million were posted during the first 48 hours. So humanitarian organizations, along with their partners, need access to the Twitter Firehose. Hence my lobbying for Big Data Philanthropy.

Finally, needs/offers are hardly the only type of useful information available on Twitter during crises, which is why we developed several automatic classifiers to extract data on: caution and advice, infrastructure damage, casualties and injuries, missing people and eyewitness accounts. In the near future, when our AIDR platform is ready, colleagues from the American Red Cross, FEMA, UN, etc., will be able to create their own classifiers on the fly to automatically collect information that is directly relevant to them and their relief operations. AIDR is spearheaded by QCRI colleague ChaTo and myself.

For now though, we simply emailed relevant geo-tagged and time-stamped data on needs/offers to colleagues at the American Red Cross who had requested this information. We also shared data related to gas leaks with colleagues at FEMA and ESRI, as per their request. The entire process was particularly insightful for Hemant and me, so we plan to follow up with these responders to learn how we can best support them again until AIDR becomes operational. In the meantime, check out the Twitris+ platform developed by Amit, Hemant and team at Kno.e.sis.


See also: Analysis of Multimedia Shared on Twitter After Tornado [Link]

Project Loon: Google Blimps for Disaster Response (Updated)

A blimp is a floating airship that does not have any internal supporting framework or keel. The airship is typically filled with helium and is controlled remotely using steerable fans. Project Loon is a Google initiative to launch a fleet of blimps to extend Internet/wifi access across Africa and Asia. Some believe that “these high-flying networks would spend their days floating over areas outside of major cities where Internet access is either scarce or simply nonexistent.” Small-scale prototypes are reportedly being piloted in South Africa “where a base station is broadcasting signals to wireless access boxes in high schools over several kilometres.” The US military has been using similar technology for years.


Google notes that the technology is “well-suited to provide low cost connectivity to rural communities with poor telecommunications infrastructure, and for expanding coverage of wireless broadband in densely populated urban areas.” Might Google Blimps also be used by Google’s Crisis Response Team in the future? Indeed, Google Blimps could be used to provide Internet access to disaster-affected communities. The blimps could also be used to capture very high-resolution aerial imagery for damage assessment purposes. Simply adding a digital camera to said blimps would do the trick. In fact, they could simply take the fourth-generation cameras used for Google Street View and mount them on the blimps to create Google Sky View. As always, however, these innovations are fraught with privacy and data protection issues. Also, the use of UAVs and balloons for disaster response has been discussed for years already.


Jointly: Peer-to-Peer Disaster Recovery App

My colleague Samia Kallidis is launching a brilliant self-help app to facilitate community-based disaster recovery efforts. Samia is an MFA Candidate at the School of Visual Arts in New York. While her work on this peer-to-peer app began as part of her thesis, she has since been accepted to the NEA Studio Incubator Program to make her app a reality. NEA provides venture capital to help innovative entrepreneurs build transformational initiatives around the world. So huge congrats to Samia on this outstanding accomplishment. I was already hooked back in February when she presented her project at NYU and am even more excited now. Indeed, there are exciting synergies with the MatchApp project I’m working on with QCRI and MIT-CSAIL, which is why we’re happily exploring ways to collaborate & complement our respective initiatives.

Samia’s app is aptly called Jointly and carries the tag line: “More Recovery, Less Red Tape.” In her February presentation, Samia made many compelling arguments for a self-help approach to disaster response based on her field research and the interviews she conducted following Hurricane Sandy. She rightly noted that many needs that arise during the days, weeks and months following a disaster do not require the attention of expert disaster response professionals—in fact, these responders may not have the necessary skills to meet the needs that frequently arise after a disaster (assuming said responders even stay the course). Samia also remarked that solving the little challenges and addressing the little needs that surface post-disaster can make the biggest difference. Hence Jointly. In her own words:

“Jointly is a decentralized mobile application that helps communities self-organize disaster relief without relying on bureaucratic organizations. By directly connecting disaster victims with volunteers, Jointly allows individuals to request help through services and donations, and to find skilled volunteers who are available to fulfill those needs. This minimizes waste of resources, reduces duplication of services, and significantly shortens recovery time for individuals and communities.”

Samia kindly shared the above video and screenshots of Jointly below.


I’m thrilled to see Jointly move forward and am excited to be collaborating with Samia on the Jointly and MatchApp connection. We certainly share the same goal: to help people help themselves. Indeed, increasing this capacity for self-organization builds resilience. These connection technologies and apps provide for more rapid and efficient self-help actions in times of need. This doesn’t mean that professional disaster response organizations are obsolete—quite the contrary, in fact. Organizations like the American Red Cross can feed relevant service delivery data to the apps so that affected communities also know where, when and how to access these services. In Jointly, official resources will be geo-tagged and updated live in the “Resources” part of the app.

You can contact Samia directly at: hello@jointly.us should you be interested in learning more or collaborating with her.


Web App Tracks Breaking News Using Wikipedia Edits

A colleague of mine at Google recently shared a new and very interesting Web App that tracks breaking news events by monitoring Wikipedia edits in real-time. The App, Wikipedia Live Monitor, alerts users to breaking news based on the frequency of edits to certain articles. Almost every significant news event has a Wikipedia page that gets updated in near real-time and thus acts as a single, powerful cluster for tracking an evolving crisis.


Social media, in contrast, is far more distributed, which makes it more difficult to track. In addition, social media is highly prone to false positives. These, however, are almost immediately corrected on Wikipedia thanks to dedicated editors. Wikipedia Live Monitor currently works across several dozen languages and also “cross-checks edits with social media updates on Twitter, Google Plus and Facebook to help users get a better sense of what is trending” (1).

I’m really excited to explore the use of this Live Monitor for crisis response and possible integration with some of the humanitarian technology platforms that my colleagues and I at QCRI are developing. For example, the Monitor could be used to supplement crisis information collected via social media using the Artificial Intelligence for Disaster Response (AIDR) platform. In addition, the Wikipedia Monitor could also be used to triangulate reports posted to our Verily platform, which leverages time-critical crowdsourcing techniques to verify user-generated content posted on social media during disasters.
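The core trick here, detecting a spike in the edit rate of a given article, can be sketched in a few lines of Python. The window and threshold values below are made up for illustration; the actual Wikipedia Live Monitor presumably tunes these and reads edit events live from Wikipedia's recent-changes feed rather than from a list:

```python
from collections import defaultdict

def breaking_candidates(edits, window=300, threshold=5):
    """Flag articles that look like breaking news.

    `edits` is a sequence of (timestamp_seconds, article_title) events.
    An article is flagged if any sliding time window of `window` seconds
    contains at least `threshold` edits to it.
    """
    by_article = defaultdict(list)
    for ts, article in edits:
        by_article[article].append(ts)

    flagged = set()
    for article, times in by_article.items():
        times.sort()
        left = 0
        for right in range(len(times)):
            # shrink the window from the left until it spans <= `window` seconds
            while times[right] - times[left] > window:
                left += 1
            if right - left + 1 >= threshold:
                flagged.add(article)
                break
    return flagged
```

A live deployment would feed this function from a streaming source of recent changes and re-evaluate as events arrive, rather than batching them as shown here.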


Social Media for Emergency Management: Question of Supply and Demand

I’m always amazed by folks who dismiss the value of social media for emergency management based on the perception that said content is useless for disaster response. By that logic, libraries are also useless: the few books you’re looking for rarely represent more than 1% of all the books available in a major library. Does that mean libraries are useless? Of course not. Is social media useless for disaster response? Of course not. Even if only 0.01% of the 20+ million tweets posted during Hurricane Sandy were useful, and only half of these were accurate, this would still mean over 1,000 real-time and informative tweets, or some 15,000 words: the equivalent of a 25-page, single-spaced document composed exclusively of relevant, actionable and timely disaster information.
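The back-of-envelope arithmetic can be checked directly. All the rates below are illustrative assumptions rather than measured values:

```python
total_tweets = 20_000_000   # tweets posted during Hurricane Sandy
useful_rate = 0.0001        # assume 0.01% are useful
accurate_rate = 0.5         # assume half of those are accurate
words_per_tweet = 15        # rough average length of a tweet
words_per_page = 600        # rough single-spaced page

useful_tweets = total_tweets * useful_rate * accurate_rate
words = useful_tweets * words_per_tweet
pages = words / words_per_page

print(f"{useful_tweets:.0f} tweets, {words:.0f} words, {pages:.0f} pages")
# → 1000 tweets, 15000 words, 25 pages
```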


Empirical studies clearly show that social media reports can be informative for disaster response. Numerous case studies have also described how social media has saved lives during crises. That said, if emergency responders do not actively or explicitly create demand for relevant and high-quality social media content during crises, then why should supply follow? If the 911 emergency number (999 in the UK) were never advertised, would anyone call? If 911 were simply a voicemail inbox with no instructions, would callers know what type of actionable information to relay after the beep?

While the majority of emergency management centers do not create the demand for crowdsourced crisis information, members of the public are increasingly demanding that said responders monitor social media for “emergency posts”. But most responders fear that opening up social media as a crisis communication channel with the public will result in an unmanageable flood of requests. The London Fire Brigade seems to think otherwise, however. So let’s carefully unpack the fear of information flooding.

First of all, New York City’s 911 operators receive over 10 million calls every year that are accidental, false or hoaxes. Does this mean we should abolish the 911 system? Of course not. Now, assuming that 10% of these calls take an operator 10 seconds each to manage, this represents close to 3,000 hours, or 115 days, worth of “wasted work”. But this filtering is absolutely critical and requires human intervention. In contrast, “emergency posts” published on social media can be automatically filtered and triaged thanks to Big Data Analytics and Social Computing, which could save operators time. The Digital Operations Center at the American Red Cross is currently exploring this automated filtering approach. Moreover, just as it is illegal to report false emergency information to 911, there’s no reason why the same laws could not apply to social media when these communication channels are used for emergency purposes.

Second, if individuals prefer to share disaster-related information and/or needs via social media, they are less likely to call in as well. In other words, double reporting is unlikely to occur and could also be discouraged and/or penalized. The volume of emergency reports from “the crowd” therefore need not increase substantially after all. Those who use the phone to report an emergency today may in the future opt for social media instead. The only significant change here is the ease of reporting for the person in need. Again, the question is one of supply and demand. Even if relevant emergency posts were to increase without a comparable fall in calls, this would simply reveal that the current voice-based system creates a barrier to reporting that discriminates against certain users in need.

Third, not all emergency calls/posts require immediate response by a paid professional with 10+ years of experience. In other words, the various types of needs can be triaged and responded to accordingly. As part of their police training or internships, new cadets could be tasked to respond to less serious needs, leaving the more seasoned professionals to focus on the more difficult situations. While this approach certainly has some limitations in the context of 911, these same limitations are far less pronounced for disaster response efforts in which most needs are met locally by the affected communities themselves anyway. In fact, the Filipino government actively promotes the use of social media reporting and crisis hashtags to crowdsource disaster response.
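A toy sketch of this kind of severity-based triage follows, with entirely hypothetical keyword lists and queue names; an operational system would use trained classifiers plus human review rather than keyword matching:

```python
# Illustrative keyword lists, not an operational severity taxonomy.
SEVERE_KEYWORDS = {"trapped", "injured", "fire", "collapsed", "bleeding"}
MINOR_KEYWORDS = {"debris", "power outage", "road blocked", "tree down"}

def triage(post):
    """Route an emergency post to a response queue by rough severity:
    severe posts go to seasoned professionals, minor ones to trainees,
    and everything else to a human review queue."""
    text = post.lower()
    if any(k in text for k in SEVERE_KEYWORDS):
        return "professional"
    if any(k in text for k in MINOR_KEYWORDS):
        return "trainee"
    return "review"
```

The "review" fallback matters: anything the rules cannot confidently place should default to human eyes, not to the bottom of a queue.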

In sum, if disaster responders and emergency management professionals are not content with the quality of crisis reporting found on social media, then they should do something about it by implementing the appropriate policies to create the demand for higher quality and more structured reporting. The first emergency telephone service was launched in London some 80 years ago in response to a devastating fire. At the time, the idea of using a phone to report emergencies was controversial. Today, the London Fire Brigade is paving the way forward by introducing Twitter as a reporting channel. This move may seem controversial to some today, but give it a few years and people will look back and ask what took us so long to adopt new social media channels for crisis reporting.


Self-Organized Crisis Response to #BostonMarathon Attack

I’m going to keep this blog post technical because the emotions from yesterday’s events are still too difficult to deal with. Within an hour of the bombs going off, I received several emails asking me to comment on the use of social media in Boston and how it differed from the digital humanitarian response efforts I am typically engaged in. So here are just a few notes, nothing too polished, but some initial reactions.


Once again, we saw the outpouring of operational support from the “Crowd” with over two thousand people in the Boston area volunteering to take people in if they needed help, and this within 60 minutes of the attack. This was coordinated via a Google Spreadsheet & Google Form. This is not the first time that these web-based solutions were used for disaster response. For example, Google Spreadsheets was used to coordinate grassroots response efforts during the major Philippine floods in 2012.

We’re not all affected the same way during a crisis, and those of us who are less affected almost always look for ways to help. Unlike the era of television broadcasting, the crowd can now become an operational actor in disaster response. To be sure, paid disaster response professionals cannot be everywhere at the same time, but the crowd is always there. This explains why I have long called for a “Match.com for disaster response” to match local needs with local resources. So while I received numerous pings on Twitter, Skype and email about launching a crisis map for Boston, I am skeptical that doing so would have added much value.

What was/is needed is real-time filtering of social media content and matching of local needs (information and material needs) with local resources. There are two complementary ways to do this: human computing (e.g., crowdsourcing, microtasking, etc) and machine computing (natural language processing, machine learning, etc), which is why my team and I at QCRI are working on developing these solutions.

Other observations from the response to yesterday’s tragedy:

  • Boston Police made active use of their Twitter account to inform and advise. They also asked other Twitter users to spread their request for everyone to leave the city center area. The police and other emergency services also actively crowdsourced photographs and video footage to begin their criminal investigations. There was such heavy multimedia social media activity in the area that one could no doubt develop a Photosynth rendering of the scene.
  • There were calls for residents to unlock their Wifi networks to enable people in the streets to get access to the Internet. This was especially important after the cellphone network was taken offline for security reasons. To be sure, access to information is as important as access to water, food, shelter, etc., during a crisis.

I’d welcome any other observations from readers, e.g., similarities and differences between the use of technologies for domestic emergency management versus international humanitarian efforts. I would also be interested to hear thoughts about how the two could be integrated or at the very least learn from each other.


Introducing MicroMappers for Digital Disaster Response

The UN activated the Digital Humanitarian Network (DHN) on December 3, 2012 to carry out a rapid damage needs assessment in response to Typhoon Pablo in the Philippines. More specifically, the UN requested that Digital Humanitarians collect and geo-reference all tweets with links to pictures or video footage capturing Typhoon damage. To complete this mission, I reached out to my colleagues at CrowdCrafting. Together, we customized a microtasking app to filter, classify and geo-reference thousands of tweets. This type of rapid damage assessment request was the first of its kind, which means that setting up the appropriate workflows and technologies took a while, leaving less time for the tagging, verification and analysis of the multimedia content pointed to in the disaster tweets. Such is the nature of innovation; optimization takes place through iteration and learning.

Microtasking is key to the future of digital humanitarian response, which is precisely why I am launching MicroMappers in partnership with CrowdCrafting. MicroMappers, which combines the terms Micro-Tasking and Crisis-Mappers, is a collection of free & open source microtasking apps specifically customized and optimized for digital disaster response. The first series of apps focuses on rapid damage assessment activations and includes Translate, Locate and Assess. The Translate and Locate Apps are self-explanatory. The Assess App enables digital volunteers to quickly tag disaster tweets that link to relevant multimedia capturing disaster damage. This app also invites volunteers to rate the level of damage in each image and video.

For example, say an earthquake strikes Mexico City. We upload disaster tweets with links to the Translate App. Volunteer translators only translate tweets with location information. These get automatically pushed to the Assess App, where digital volunteers tag tweets that point to relevant images/videos. They also rate the level of damage in each. (On a side note, my colleagues and I at QCRI are also developing a crawler that will automatically identify whether links posted on Twitter actually point to images/videos.) Assessed tweets are then pushed in real-time to the Locate App for geo-referencing. The resulting tweets are subsequently published to a live map where the underlying data can also be downloaded. Both the map and data download feature can be password protected.
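The Translate, Assess, Locate flow can be sketched as a simple pipeline. The function names and tweet fields here are hypothetical placeholders for the volunteer-facing apps, not the actual MicroMappers implementation:

```python
def has_location(tweet):
    """Stand-in for the Translate step's check for location information."""
    return tweet.get("location") is not None

def run_pipeline(tweets, assess, locate):
    """Illustrative MicroMappers-style flow. `assess` and `locate` stand in
    for the volunteer-facing Assess and Locate apps: `assess` returns a
    damage rating (or None if the linked media is irrelevant) and `locate`
    returns coordinates for the tweet."""
    results = []
    for tweet in tweets:
        if not has_location(tweet):
            continue                    # Translate: skip tweets without location info
        damage = assess(tweet)          # Assess: volunteers rate damage in linked media
        if damage is None:
            continue                    # the linked media was not relevant
        results.append({**tweet, "damage": damage, "coords": locate(tweet)})
    return results                      # ready to publish to the live map
```

The point of the sketch is the ordering: cheap filters (location, relevance) run first so that the scarce resource, volunteer attention for geo-referencing, is only spent on tweets that survive them.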

The plan is to have these apps online and live 24/7 in the event of an activation request. When a request does come in, volunteers with the Digital Humanitarian Network will simply go to MicroMappers.com (not yet live) to start using the apps right away. Members of the public will also be invited to support these efforts and work alongside digital humanitarian volunteers. In other words, the purpose of the MicroMappers Apps is not only to facilitate and accelerate digital humanitarian efforts but also to radically democratize these efforts by increasing the participation base. To be sure, one doesn’t need prior training to microtask; simply being able to read and access the web will make you an invaluable member of the team.

We plan to have the MicroMappers Apps completed by September for testing by members of the Digital Humanitarian Network. In the meantime, huge thanks to our awesome partners at CrowdCrafting for making all of this possible! If you’re a coder and interested in contributing to these efforts, please feel free to get in touch with me. We may be able to launch and test these apps earlier with your help. After all, disasters won’t wait until we’re ready, and we have several more disaster response apps that are in need of customization.


Humanitarianism in the Network Age: Groundbreaking Study

My colleagues at the United Nations Office for the Coordination of Humanitarian Affairs (OCHA) have just published a groundbreaking must-read study on Humanitarianism in the Network Age; an important and forward-thinking policy document on humanitarian technology and innovation. The report “imagines how a world of increasingly informed, connected and self-reliant communities will affect the delivery of humanitarian aid. Its conclusions suggest a fundamental shift in power from capital and headquarters to the people [that] aid agencies aim to assist.” The latter is an unsettling prospect for many. To be sure, Humanitarianism in the Network Age calls for “more diverse and bottom-up forms of decision-making—something that most Governments and humanitarian organizations were not designed for. Systems constructed to move information up and down hierarchies are facing a new reality where information can be generated by anyone, shared with anyone and acted on by anyone.”


The purpose of this blog post (available as a PDF) is to summarize the 120-page OCHA study. In this summary, I specifically highlight the most important insights and profound implications. I also fill what I believe are some of the report’s most important gaps. I strongly recommend reading the OCHA publication in full, but if you don’t have time to leaf through the study, reading this summary will ensure that you don’t miss a beat. Unless otherwise stated, all quotes and figures below are taken directly from the OCHA report.

All in all, this is an outstanding, accurate, radical and impressively cross-disciplinary study. In fact, what strikes me most about this report is how far we’ve come since the devastating Haiti Earthquake of 2010. Just three short years ago, speaking the word “crowdsourcing” was blasphemous, like “Voldemort” (for all you Harry Potter fans). This explains why some humanitarians called me the CrowdSorcerer at the time (thinking it was a derogatory term). CrisisMappers was only launched three months before Haiti. The Standby Volunteer Task Force (SBTF) didn’t even exist at the time and the Digital Humanitarian Network (DHN) was to be launched 2 years hence. And here we are, just three short years later, with this official, high-profile humanitarian policy document that promotes crowdsourcing, digital humanitarian response and next generation humanitarian technology. Exciting times. While great challenges remain, I dare say we’re trying our darned best to find some solutions, and this time through collaboration, CrowdSorcerers and all. The OCHA report is a testament to this collaboration.


Summary

the Rise of big (crisis) data

Over 100 countries have more mobile phone subscriptions than they have people. One in four individuals in developing countries use the Internet. This figure will double within 20 months. About 70% of Africa’s total population are mobile subscribers. In short, “The planet has gone online, producing and sharing vast quantities of information.” Meanwhile, however, hundreds of millions of people are affected by disasters every year—more than 250 million in 2010 alone. There have been over 1 billion new mobile phone subscriptions since 2010. In other words, disaster-affected communities are becoming increasingly “digital” as a result of the information revolution. These new digital technologies are evolving into a new nervous system for our planet, taking the pulse of our social, economic and political networks in real-time.

“Filipinos sent an average of 2 billion SMS messages every day in early 2012,” for example. When disaster strikes, many of these messages are likely to relay crisis information. In Japan, over half-a-million new users joined Twitter the day after the 2011 Earthquake. More than 177 million tweets about the disaster were posted that same day—that is, 2,000 tweets per second on average. Welcome to “The Rise of Big (Crisis) Data.” Meanwhile, back in the US, 80% of the American public expects emergency responders to monitor social media; and almost as many expect them to respond within three hours of posting a request on social media (1). These expectations have been shown to increase year-on-year. “At the same time,” however, the OCHA report notes that “there are greater numbers of people […] who are willing and able to respond to needs.”

communities first

A few brave humanitarian organizations are embracing these changes and new realities, “reorienting their approaches around the essential objectives of helping people to help themselves.” That said, “the frontline of humanitarian action has always consisted of communities helping themselves before outside aid arrives.” What is new, however, is “affected people using technology to communicate, interact with and mobilize their social networks quicker than ever before […].” To this end, “by rethinking how aid agencies work and communicate with people in crisis, there is a chance that many more lives can be saved.” In sum, “the increased reach of communications networks and the growing network of people willing and able to help, are defining a new age—a network age—for humanitarian assistance.”

This stands in stark contrast to traditional notions of humanitarian assistance, which refer to “a small group of established international organizations, often based in and funded by high-income countries, providing help to people in a major crisis. This view is now out of date.” As my colleague Tim McNamara noted on the CrisisMappers list-serve, (cited in the OCHA report), this is “…not simply a technological shift [but] also a process of rapid decentralization of power. With extremely low barriers to entry, many new entrants are appearing in the fields of emergency and disaster response. They are ignoring the traditional hierarchies, because the new entrants perceive that there is something they can do which benefits others.” In other words, the humanitarian “world order” is shifting towards a more multipolar system. And so, while Tim was “referring to the specific case of volunteer crisis mappers […], the point holds true across all types of humanitarian work.”

Take the case of Somalia Speaks, for example. A journalist recently asked me to list the projects I am most proud of in this field. Somalia Speaks ranks very high. I originally pitched the idea to my Al-Jazeera colleagues back in September 2011; the project was launched three months later. Together with my colleagues at Souktel, we texted 5,000 Somalis across the country to ask how they were personally affected by the crisis.


As the OCHA study notes, we received over 3,000 responses, which were translated into English and geotagged by the Diaspora and subsequently added to a crisis map hosted on the Al-Jazeera website. From the OCHA report: “effective communication can also be seen as an end itself in promoting human dignity. More than 3,000 Somalis responded to the Somalia Speaks project, and they seemed to feel that speaking out was a worthwhile activity.” In sum, “The Somalia Speaks project enabled the voices of people from one of the world’s most inaccessible, conflict-ridden areas, in a language known to few outside their community, to be heard by decision makers from across the planet.” The project has since been replicated several times; see Uganda Speaks for example. The OCHA study refers to Somalia Speaks at least four times, highlighting the project as an example of networked humanitarianism.

PRIVACY, SECURITY & PROTECTION

The report also emphasizes the critical importance of data security, privacy and protection in the network age. OCHA’s honest and balanced approach to the topic is another reason why this report is so radical and forward thinking. “Concern over the protection of information and data is not a sufficient reason to avoid using new communications technologies in emergencies, but it must be taken into account. To adapt to increased ethical risks, humanitarian responders and partners need explicit guidelines and codes of conduct for managing new data sources.” This is precisely why I worked with GSMA’s Disaster Response Program to draft and publish the first ever Code of Conduct for the Use of SMS in Disaster Response. I have also provided extensive feedback to the International Committee of the Red Cross’s (ICRC) latest edition of the “Professional Standards for Protection Work,” which was just launched in Geneva this month. My colleagues Emmanuel Letouzé and Patrick Vinck also included a section on data security and ethics in our recent publication on the use of Big Data for Conflict Prevention. In addition, I have blogged about this topic quite a bit: here, here and here, for example.

crisis in decision making

“As the 2010 Haiti crisis revealed, the usefulness of new forms of information gathering is limited by the awareness of responders that new data sources exist, and their applicability to existing systems of humanitarian decision-making.” The fact of the matter is that humanitarian decision-making structures are simply not geared towards using Big Crisis Data let alone new data sources. More pointedly, however, humanitarian decision-making processes are often not based on empirical data in the first place, even when the data originate from traditional sources. As DfID notes in this 2012 strategy document, “Even when good data is available, it is not always used to inform decisions. There are a number of reasons for this, including data not being available in the right format, not widely dispersed, not easily accessible by users, not being transmitted through training and poor information management. Also, data may arrive too late to be able to influence decision-making in real time operations or may not be valued by actors who are more focused on immediate action.”

This is the classic warning-response gap, which has been discussed ad nauseam for decades in the fields of famine early warning and conflict early warning. More data in no way implies action. Take the 2011 Somalia Famine, one of the best-documented crises to date; the famine did not occur for lack of data. “Would more data have driven a better decision making process that could have averted disaster? Unfortunately, this does not appear to be the case. There had, in fact, been eleven months of escalating warnings emanating from the famine early warning systems that monitor Somalia. Somalia was, at the time, one of the most frequently surveyed countries in the world, with detailed data available on malnutrition prevalence, mortality rates, and many other indicators. The evolution of the famine was reported in almost real time, yet there was no adequate scaling up of humanitarian intervention until too late” (2).

At other times, “Information is sporadic,” which is why OCHA notes that “decisions can be made on the basis of anecdote rather than fact.” Indeed, “Media reports can significantly influence allocations, often more than directly transmitted community statements of need, because they are more widely read or better trusted.” (It is worth keeping in mind that the media makes mistakes; the New York Times alone makes over 7,000 errors every year). Furthermore, as acknowledged by OCHA, “The evidence suggests that new information sources are no less representative or reliable than more traditional sources, which are also imperfect in crisis settings.” This is one of the most radical statements in the entire report. OCHA should be applauded for their remarkable fortitude in plunging into this rapidly shifting information landscape. Indeed, they go on to state that, “Crowdsourcing has been used to validate information, map events, translate text and integrate data useful to humanitarian decision makers.”


The vast majority of disaster datasets are not perfect, regardless of whether they are drawn from traditional or non-traditional sources. “So instead of criticizing the lack of 100% data accuracy, we need to use it as a base and ensure our Monitoring and Evaluation (M&E) and community engagement pieces are strong enough to keep our programming relevant” (Bartosiak 2013). And so, perhaps the biggest impact of new technologies and recent disasters on the humanitarian sector is the self-disrobing of the Emperor’s Clothes (or Data). “Analyses of emergency response during the past five years reveal that poor information management has severely hampered effective action, costing many lives.” Disasters increasingly serve as brutal audits of traditional humanitarian organizations; and the cracks are increasingly difficult to hide in an always-on social media world. The OCHA study makes clear that decision-makers need to figure out “how to incorporate these sources into decisions.”

Fact is, “To exploit the opportunity of the network age, humanitarians must understand how to use the new range of available data sources and have the capacity to transform this data into useful information.” Furthermore, it is imperative “to ensure new partners have a better understanding of how [these] decisions are made and what information is useful to improve humanitarian action.” These new partners include the members of the Digital Humanitarian Network (DHN), for example. Finally, decision-makers also need to “invest in building analytic capacity across the entire humanitarian network.” This analytic capacity can no longer rest on manual solutions alone. The private sector already makes use of advanced computing platforms for decision-making purposes. The humanitarian industry would be well served to recognize that their problems are hardly unique. Of course, investing in greater analytic capacity is an obvious solution but many organizations are already dealing with limited budgets and facing serious capacity constraints. I provide some creative solutions to this challenge below, which I refer to as “Data Science Philanthropy“.

Commentary

Near Perfection

OCHA’s report is brilliant, honest and forward thinking. This is by far the most important official policy document yet on humanitarian technology and digital humanitarian response—and thus on the very future of humanitarian action. The study should be required reading for everyone in the humanitarian and technology communities, which is why I plan to organize a panel on the report at CrisisMappers 2013 and will refer to the strategy document in all of my forthcoming talks and many a future blog post. In the meantime, I would like to highlight and address some of the issues that I feel need to be discussed to take this conversation further.

Ironically, some of these gaps appear to reflect a rather limited understanding of advanced computing & next generation humanitarian technology. The following topics, for example, are missing from the OCHA report: Microtasking, Sentiment Analysis and Information Forensics. In addition, the report does not relate OCHA’s important work to disaster resilience and people-centered early warning. So I’m planning to expand on the OCHA report in the technology chapter for this year’s World Disaster Report (WDR 2013). This high-profile policy document is an ideal opportunity to amplify OCHA’s radical insights and to take these to their natural and logical conclusions vis-à-vis Big (Crisis) Data. To be clear, and I must repeat this, the OCHA report is the most important forward thinking policy document yet on the future of humanitarian response. The gaps I seek to fill in no way make the previous statement any less valid. The team at OCHA should be applauded, recognized and thanked for their tremendous work on this report. So despite some of the key shortcomings described below, this policy document is by far the most honest, enlightened and refreshing look at the state of the humanitarian response today; a grounded and well-researched study that provides hope, leadership and a clear vision for the future of humanitarianism in the network age.

BIG DATA HOW

OCHA recognizes that “there is a significant opportunity to use big data to save lives,” and they also get that, “finding ways to make big data useful to humanitarian decision makers is one of the great challenges, and opportunities, of the network age.” Moreover, they realize that “While valuable information can be generated anywhere, detecting the value of a given piece of data requires analysis and understanding.” So they warn, quite rightly, that “the search for more data can obscure the need for more analysis.” To this end, they correctly conclude that “identifying the best uses of crowdsourcing and how to blend automated and crowdsourced approaches is a critical area for study.” But the report does not take these insights to their natural and logical conclusions. Nor does the report explore how to tap these new data sources let alone analyze them in real time.

Yet these Big Data challenges are hardly unique. Our problems in the humanitarian space are not that “special” or different. OCHA rightly notes that “Understanding which bits of information are valuable to saving lives is a challenge when faced with this ocean of data.” Yes. But such challenges have been around for over a decade in other disciplines. The field of digital disease detection, for example, is years ahead when it comes to real-time analysis of crowdsourced big data, not to mention private sector companies, research institutes and even new startups whose expertise is Big Data Analytics. I can also speak to this from my own professional experience. About a decade ago, I worked with a company specializing in conflict forecasting and early warning using Reuters news data (Big Data).

In sum, the OCHA report should have highlighted the fact that solutions to many of these Big Data challenges already exist, which is precisely why I joined the Qatar Computing Research Institute (QCRI). What’s more, a number of humanitarian technology projects at QCRI are already developing prototypes based on these solutions; and OCHA is actually the main partner in one such project, so it is a shame this work did not get credited in their own report.

sentiment analysis

While I introduced the use of sentiment analysis during the Haiti Earthquake, this has yet to be replicated in other humanitarian settings. Why is sentiment analysis key to humanitarianism in the network age? The answer is simple: “Communities know best what works for them; external actors need to listen and model their response accordingly.” Indeed, “Affected people’s needs must be the starting point.” Actively listening to millions of voices is a Big Data challenge that has already been solved by the private sector. One such solution is real-time sentiment analysis to capture brand perception. This is a rapidly growing multimillion dollar market, which is why companies like Crimson Hexagon exist. Numerous Fortune 500 companies have been actively using automated sentiment analysis for years now. Why? Because these advanced listening solutions enable them to better understand customer perceptions.


In Haiti, I applied this approach to tens of thousands of text messages sent by the disaster-affected population. It allowed us to track the general mood of this population on a daily basis. This is important because sentiment analysis as a feedback loop works particularly well with Big Data, which explains why the private sector is all over it. If just one or two individuals in a community are displeased with service delivery during a disaster, they may simply be “outliers” or perhaps exaggerating. But if the sentiment analysis at the community level suddenly starts to dip, then this means hundreds, perhaps thousands of affected individuals are now all feeling the same way about a situation. In other words, sentiment analysis serves as a triangulating mechanism. The fact that the OCHA report makes no mention of this existing solution is unfortunate since sentiment feedback loops enable organizations to assess the impact of their interventions by capturing their clients’ perceptions.
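The mechanics of such a community-level sentiment feedback loop can be sketched in a few lines of Python. To be clear, this is a hypothetical illustration: the word lists, place names and scoring rule below are invented for the sketch, not the actual approach used in Haiti, where much of the translation and coding was done by volunteers rather than software.

```python
from collections import defaultdict

# Toy lexicon for illustration only; real systems rely on trained models
# or commercial platforms, not a handful of hand-picked words.
POSITIVE = {"thanks", "received", "safe", "helped"}
NEGATIVE = {"nothing", "waiting", "hungry", "angry", "no"}

def message_score(text):
    """Score one message: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def daily_community_sentiment(messages):
    """Average sentiment per (day, community) pair -- a sudden dip at this
    aggregate level is the triangulation signal described above."""
    scores = defaultdict(list)
    for day, community, text in messages:
        scores[(day, community)].append(message_score(text))
    return {key: sum(s) / len(s) for key, s in scores.items()}

msgs = [
    ("2010-01-20", "Carrefour", "still waiting no water nothing"),
    ("2010-01-20", "Carrefour", "we are hungry no food here"),
    ("2010-01-20", "Petionville", "thanks water received we are safe"),
]
print(daily_community_sentiment(msgs))
```

A single message scoring -3 could be an outlier; a community average trending downward day over day is the signal worth acting on.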

Information forensics

“When dealing with the vast volume and complexity of information available in the network age, understanding how to assess the accuracy and utility of any data source becomes critical.” Indeed, and the BBC’s User-Generated Content (UGC) Hub has been doing just this since 2005—when Twitter didn’t even exist. The field of digital information forensics may be new to the humanitarian sector, but that doesn’t mean it is new to every other sector on the planet. Furthermore, recent research on crisis computing has revealed that the credibility of social media reporting can be modeled and even predicted. Twitter has even been called a “Truth Machine” because of the self-correcting dynamic that has been empirically observed. Finally, one of QCRI’s humanitarian technology projects, Verily, focuses precisely on the issue of verifying crowdsourced information from social media. And the first organization I reached out to for feedback on this project was OCHA.

microtasking

The OCHA report overlooks microtasking as well. Yes, the study does address and promote the use of crowdsourcing repeatedly, but again, this tends to focus on the collection of information rather than the processing of said information. Microtasking applications in the humanitarian space are not totally unheard of, however. Microtasking was used to translate and geolocate tens of thousands of text messages following the Haiti Earthquake. (As the OCHA study notes, “some experts estimated that 90 per cent [of the SMS’s] were ‘repetition’, or ‘white noise’, meaning useless chatter”). There have been several other high profile uses of microtasking for humanitarian operations such as this one thanks to OCHA’s leadership in response to Typhoon Pablo. In sum, microtasking has been used extensively in other sectors to manage the big data and quality control challenge for many years now. So this important human computation solution really ought to have appeared in the OCHA report along with the immense potential of microtasking humanitarian information using massive online multiplayer games (more here).

Open Data is Open Power

OCHA argues that “while information can be used by anyone, power remains concentrated in the hands of a limited number of decision makers.” So if the latter “do not use this information to make decisions in the interests of the people they serve, its value is lost.” I don’t agree that the value is lost. One of the report’s main themes is the high-impact agency and ingenuity of disaster-affected communities. As OCHA rightly points out, “The terrain is continually shifting, and people are finding new and brilliant ways to cope with crises every day.” Openly accessible crisis information posted on social media has already been used by affected populations for almost a decade now. In other words, communities affected by crises are (quite rightly) taking matters into their own hands in today’s networked world—just like they did in the analog era of yesteryear. As noted earlier, “affected people [are] using technology to communicate, interact with and mobilize their social networks quicker than ever before […].” This explains why “the failure to share [information] is no longer a matter of institutional recalcitrance: it can cost lives.”

creative partnerships

The OCHA study emphasizes that “Humanitarian agencies can learn from other agencies, such as fire departments or militaries, on how to effectively respond to large amounts of often confusing information during a fast-moving crisis.” This is spot on. Situational awareness is first and foremost a military term. The latest Revolution in Military Affairs (RMA) provides important insights into the future of humanitarian technology—see these recent developments, for example. Meanwhile, the London Fire Brigade has announced plans to add Twitter as a communication channel, which means city residents will have the option of reporting a fire alert via Twitter. Moreover, the 911 service in the US (999 in the UK) is quite possibly the oldest and longest running crowdsourced emergency service in the world. So there is much that humanitarians can learn from 911. But the fact of the matter is that most domestic emergency response agencies are completely unprepared to deal with the tidal wave of Big (Crisis) Data, which is precisely why the Fire Department of New York City (FDNY) and San Francisco City’s Emergency Response Team have recently reached out to me.


But some fields are way ahead of the curve. The OCHA report should thus have pointed to crime mapping and digital disease detection since these fields have more effectively navigated the big data challenge. As for the American Red Cross’s Digital Operations Center, the main technology they are using, Radian6, has been used by private sector clients for years now. And while the latter can afford the very expensive licensing fees, it is unlikely that cash-strapped domestic emergency response officers and international humanitarian organizations will ever be able to afford these advanced solutions. This is why we need more than just “Data Philanthropy“.

We also need “Data Science Philanthropy“. As the OCHA report states, decision-makers need to “invest in building analytic capacity across the entire humanitarian network.” This is an obvious recommendation, but perhaps not particularly realistic given the limited budgets and capacity constraints in the humanitarian space. This means we need to create more partnerships with Data Science groups like DataKind, Kaggle and the University of Chicago’s Data Science for Social Good program. I’m in touch with these groups and others for this reason. I’ve also been (quietly) building a global academic network called “Data Science for Humanitarian Action” which will launch very soon. Open Source solutions are also imperative for building analytic capacity, which is why the humanitarian technology platforms being developed by QCRI will all be Open Source and freely available.

DISASTER RESILIENCE

This points to the following gap in the OCHA report: there is no reference whatsoever to resilience. While the study does recognize that collective self-help behavior is typical in disaster response and should be amplified, the report does not make the connection that this age-old mutual-aid dynamic is the humanitarian sector’s own lifeline during a major disaster. Resilience has to do with a community’s capacity for self-organization. Communication technologies increasingly play a pivotal role in self-organization. This explains why disaster preparedness and disaster risk reduction programs ought to place greater emphasis on building the capacity of at-risk communities to self-organize and mitigate the impact of disasters on their livelihoods. More about this here. Creating resilience through big data is also more than academic curiosity, as explained here.

DECENTRALIZING RESPONSE

As more and more disaster-affected communities turn to social media in time of need, “Governments and responders will soon need answers to the questions: ‘Where were you? We Facebooked/tweeted/texted for help, why didn’t someone come?'” Again, customer support challenges are hardly unique to the humanitarian sector. Private sector companies have had to manage parallel problems by developing more advanced customer service platforms. Some have even turned to crowdsourcing to manage customer support. I blogged about this here to drive the point home that solutions to these humanitarian challenges already exist in other sectors.

Yes, that’s right, I am promoting the idea of crowdsourcing crisis response. Fact is, disaster response has always been crowdsourced. The real first responders are the disaster affected communities themselves. Thanks to new technologies, this crowdsourced response can be accelerated and made more efficient. And yes, there’s an app (in the making) for that: MatchApp. This too is a QCRI humanitarian technology project (in partnership with MIT’s Computer Science and Artificial Intelligence Lab). The purpose of MatchApp is to decentralize disaster response. After all, the many small needs that arise following a disaster rarely require the attention of paid and experienced emergency responders. Furthermore, as a colleague of mine at NYU shared based on her disaster efforts following Hurricane Sandy, “Solving little challenges can make the biggest differences” for disaster-affected communities.
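The matching idea at the heart of MatchApp can be illustrated with a deliberately crude sketch. MatchApp’s actual algorithms are classifier-based and are not described in detail here; the fixed resource vocabulary and keyword-overlap rule below are my own invented stand-ins, and the two example messages paraphrase the kinds of “matchable” needs and offers discussed earlier.

```python
# Hypothetical resource vocabulary; a real system would use trained
# classifiers and entity extraction rather than a fixed word list.
RESOURCES = {"water", "food", "clothes", "shoes", "shelter", "blankets"}

def resources_in(text):
    """Return the known resource terms mentioned in a message."""
    return set(text.lower().replace(",", " ").split()) & RESOURCES

def match_offers_to_needs(needs, offers):
    """Pair each need with every offer that shares at least one resource."""
    matches = []
    for need in needs:
        need_res = resources_in(need)
        for offer in offers:
            shared = need_res & resources_in(offer)
            if shared:
                matches.append((need, offer, sorted(shared)))
    return matches

needs = ["we are accepting kid clothes and shoes for Oklahoma"]
offers = ["I want to donate shoes clothes even food if I can"]
print(match_offers_to_needs(needs, offers))
```

A production matcher would of course also need geolocation, deduplication and verification before suggesting any pairing.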

As noted above, more and more individuals believe that emergency responders should monitor social media during disasters and respond accordingly. This is “likely to increase the pressure on humanitarian responders to define what they can and cannot provide. The extent of communities’ desires may exceed their immediate life-saving needs, raising expectations beyond those that humanitarian responders can meet. This can have dangerous consequences. Expectation management has always been important; it will become more so in the network age.”


PEOPLE-CENTERED

“Community early warning systems (CEWS) can buy time for people to implement plans and reach safety during a crisis. The best CEWS link to external sources of assistance and include the pre-positioning of essential supplies.” At the same time, “communities do not need to wait for information to come from outside sources, […] they can monitor local hazards and vulnerabilities themselves and then shape the response.” This sense and shaping capacity builds resilience, which explains why “international humanitarian organizations must embrace the shift of warning systems to the community level, and help Governments and communities to prepare for, react and respond to emergencies using their own resources and networks.”

This is absolutely spot on and at least 7 years old as far as UN policy goes. In 2006, the UN’s International Strategy for Disaster Risk Reduction (UNISDR) published this policy document advocating for a people-centered approach to early warning and response systems. They defined the purpose of such systems as follows:

“… to empower individuals and communities threatened by hazards to act in sufficient time and in an appropriate manner so as to reduce the possibility of personal injury, loss of life, damage to property and the environment, and loss of livelihoods.”

Unfortunately, the OCHA report does not drive these insights to their logical conclusion. Disaster-affected communities are even more ill-equipped to manage the rise of Big (Crisis) Data. Storing, let alone analyzing, Big Crisis Data in real time is a major technical challenge. As noted here vis-à-vis Big Data Analytics on Twitter, “only corporate actors and regulators—who possess both the intellectual and financial resources to succeed in this race—can afford to participate […].” Indeed, only a handful of research institutes have the technical ability and large funding base to carry out the real-time analysis of Big (Crisis) Data. My team and I at QCRI, along with colleagues at UN Global Pulse and GSMA, are trying to change this. In the meantime, however, the “Big Data Divide” is already here and very real.

information > Food

“Information is not water, food or shelter; on its own, it will not save lives. But in the list of priorities, it must come shortly after these.” While I understand the logic behind this assertion, I consider it a step back, not forward from the 2005 World Disaster Report (WDR 2005), which states that “People need information as much as water, food, medicine or shelter. Information can save lives, livelihoods and resources.” In fact, OCHA’s assertion contradicts an earlier statement in the report; namely that “information in itself is a life-saving need for people in crisis. It is as important as water, food and shelter.” Fact is: without information, how does one know where/when and from whom clean water and food might be available? How does one know which shelters are open, whether they can accommodate your family and whether the road to the shelter is safe to drive on?


OCHA writes that, “Easy access to data and analysis, through technology, can help people make better life-saving decisions for themselves and mobilize the right types of external support. This can be as simple as ensuring that people know where to go and how to get help. But to do so effectively requires a clear understanding of how information flows locally and how people make decisions.” In sum, access to information is paramount, which means that local communities should have easy access to next generation humanitarian technologies that can manage and analyze Big Crisis Data. As a seasoned humanitarian colleague recently told me, “humanitarians sometimes have a misconception that all aid and relief comes through agencies. In fact, (especially with things such as shelter) people start to recover on their own or within their communities. Thus, information is vital in assuring that they do this safely and properly. Think of the Haiti build-back-better campaign and the issues with cholera outbreaks.”

Them not us

The technologies of the network age should not be restricted to empowering second- and third-level responders. Unfortunately, as OCHA rightly observes, “there is still a tendency for people removed from a crisis to decide what is best for the people living through that crisis.” Moreover, these paid responders cannot be everywhere at the same time. But the crowd is always there. And as OCHA points out, there are “growing groups of people willing and able to help those in need;” groups that unlike their analog counterparts of yesteryear now operate in the “network age with its increased reach of communications networks.” So information is not simply or “primarily a tool for agencies to decide how to help people, it must be understood as a product, or service, to help affected communities determine their own priorities.” Recall the above definition of people-centered early warning. This definition does not all of a sudden become obsolete in the network age. The purpose of next generation technologies is to “empower individuals and communities threatened by hazards to act in sufficient time and in an appropriate manner so as to reduce the possibility of personal injury, loss of life, damage to property and the environment, and loss of livelihoods.”


Digital humanitarian volunteers are also highly unprepared to deal with the rise of Big Crisis Data, even though they are at the frontlines and indeed the pioneers of digital response. This explains why the Standby Volunteer Task Force (SBTF), a network of digital volunteers that OCHA refers to half-a-dozen times throughout the report, is actively looking to become an early adopter of next generation humanitarian technologies. Burnout is a serious issue with digital volunteers. They too require access to these next generation technologies, which is precisely why the American Red Cross equips their digital volunteers with advanced computing platforms as part of their Digital Operations Center. Unfortunately, some humanitarians still think that they can just as easily throw more (virtual) volunteers at the Big Crisis Data challenge. Not only are they terribly misguided but also insensitive, which is why, as OCHA notes, “Using new forms of data may also require empowering technical experts to overrule the decisions of their less informed superiors.” As the OCHA study concludes, “Crowdsourcing is a powerful tool, but ensuring that scarce volunteer and technical resources are properly deployed will take further research and the expansion of collaborative models, such as SBTF.”

Conclusion

So will next generation humanitarian technology solve everything? Of course not; I don’t know anyone naïve enough to make this kind of claim. (But it is a common tactic used by the ignorant to attack humanitarian innovation). I have already warned about techno-centric tendencies in the past, such as here and here (see epilogue). Furthermore, one of the principal findings from this OECD report published in 2008 is that “An external, interventionist, and state-centric approach in early warning fuels disjointed and top down responses in situations that require integrated and multilevel action.” You can throw all the advanced computing technology you want at this dysfunctional structural problem but it won’t solve a thing. The OECD thus advocates for “micro-level” responses to crises because “these kinds of responses save lives.” Preparedness is obviously central to these micro-level responses and self-organization strategies. Shockingly, however, the OCHA study reveals that, “only 3% of humanitarian aid goes to disaster prevention and preparedness,” while barely “1% of all other development assistance goes towards disaster risk reduction.” This is no way to build disaster resilience. I doubt these figures will increase substantially in the near future.

This reality makes it even more pressing to heed OCHA’s warning that ensuring “responders listen to affected people and find ways to respond to their priorities will require a mindset change.” To be sure, “If aid organizations are willing to listen, learn and encourage innovation on the front lines, they can play a critical role in building a more inclusive and more effective humanitarian system.” This need to listen and learn is why next generation humanitarian technologies are not optional. Ensuring that first, second and third-level responders have access to next generation humanitarian technologies is critical for the purposes of self-help, mutual aid and external response.


Automatically Extracting Disaster-Relevant Information from Social Media

Latest update on AIDR available here

My team and I at QCRI have just had this paper (PDF) accepted at the Social Web for Disaster Management workshop at the World Wide Web (WWW 2013) conference in Rio next month. The paper relates directly to our Artificial Intelligence for Disaster Response (AIDR) project. One of our main missions at QCRI is to develop open source and freely available next generation humanitarian technologies to better manage Big (Crisis) Data. Over 20 million tweets and half-a-million Instagram pictures were posted during Hurricane Sandy, for example. In Japan, more than 2,000 tweets were posted every second the day after the devastating earthquake and Tsunami struck the Eastern Coast. Recent empirical studies have shown that a significant percentage of tweets posted during disasters are informative and even actionable. The challenge before us is how to find those proverbial needles in the haystack and how to do so in as close to real-time as possible.


So we analyzed disaster tweets posted during Hurricane Sandy (2012) and the Joplin Tornado (2011). We demonstrate that disaster-relevant information can be automatically extracted from these datasets. The results indicate that 40% to 80% of tweets that contain disaster-related information can be automatically detected. We also demonstrate that we can correctly identify the type of disaster information 80% to 90% of the time. This means, for example, that once we identify a disaster tweet, we can automatically correctly determine whether that tweet was written by an eyewitness 80%-90% of the time. Because these classifiers are developed using machine learning, they get more accurate with more data. This explains why we are building AIDR. Our aim is not to replace human involvement and oversight but to take much of the weight off the shoulders of humans.

The classifiers we’ve developed automatically identify tweets that are personal in nature and those that are informative—that is, tweets that are of interest to others beyond the author’s immediate circle. We also created classifiers to differentiate between informative content shared by eyewitnesses versus content that is simply recycled by other sources such as the media, and classifiers to distinguish between various types of informative content. In addition to classifying, we extract key phrases from each tweet. A key phrase summarizes the essential message of a tweet in a few words, allowing for better visualization and aggregation of content. Below, we list real-world examples of tweets in each class. The underlined text is what the extraction system finds to be the key phrase of each tweet:

Caution and Advice: message conveys/reports information about some warning or a piece of advice about a possible hazard.

  • .@NYGovCuomo orders closing of NYC bridges. Only Staten Island bridges unaffected at this time. Bridges must close by 7pm. #Sandy

Casualties and Damage: message mentions casualties or infrastructure damage related to the disaster.

  • At least 39 dead; millions without power in Sandy’s aftermath. http//[Link].

Donations and Offers: message speaks about goods or services offered or needed by the victims of an incident.

  • 400 Volunteers are needed for areas that #Sandy destroyed.
  • I want to volunteer to help the hurricane Sandy victims. If anyone knows how I can get involved please let me know!

People Missing, Found, or Seen: message reports about a missing or found person affected by an incident, or reports reaction or visit of a celebrity.

  • rt @911buff: public help needed: 2 boys 2 & 4 missing nearly 24 hours after they got separated from their mom when car submerged in si. #sandy #911buff

Information Sources: message points to information sources, photos, videos; or mentions a website, TV or radio station providing extensive coverage.

  • RT @NBCNewsPictures: Photos of the unbelievable scenes left in #Hurricane #Sandy’s wake http//[Link] #NYC #NJ


The two metrics used to assess the results of our analysis are “Detection Rate” and “Hit Ratio”. The best way to explain these metrics is by way of analogy. The Detection Rate measures how good your fishing net is. If you know (thanks to sonar) that there are 10 fish in the pond and your net is good enough to catch all 10, then your Detection Rate is 100%. If you catch 8 out of 10, your rate is 80%. In other words, the Detection Rate is a measure of sensitivity, or recall. Now say you’ve designed the world’s first ever “Smart Net”, which only catches salmon and leaves all other fish in the pond alone. Say you’ve caught 5 fish. If all 5 are salmon, your Hit Ratio is 100%. If only 2 of them are salmon, then your Hit Ratio is 40%. In other words, the Hit Ratio is a measure of precision.
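In standard information-retrieval terms, the Detection Rate corresponds to recall and the Hit Ratio to precision. The fishing-net numbers above work out as follows (a trivial sketch, just to pin down the arithmetic):

```python
def detection_rate(relevant_caught: int, relevant_total: int) -> float:
    """Recall: what fraction of the fish in the pond did the net catch?"""
    return relevant_caught / relevant_total

def hit_ratio(wanted_caught: int, total_caught: int) -> float:
    """Precision: what fraction of the catch is the fish we wanted?"""
    return wanted_caught / total_caught

# The examples from the analogy above:
print(detection_rate(8, 10))  # 0.8 -> an 80% Detection Rate
print(hit_ratio(2, 5))        # 0.4 -> a 40% Hit Ratio
```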

Turning to our results, the Detection Rate was higher for Joplin (78%) than for Sandy (41%). The Hit Ratio was also higher for Joplin (90%) than for Sandy (78%). In other words, our classifiers find the Sandy dataset more challenging to decode. That said, the Hit Ratio is rather high in both cases, indicating that when our system extracts some part of a tweet, it is usually the correct part.

This tweet-level extraction is key to producing more reliable high-level information. Observing, for instance, that a large number of tweets from similar locations report the same infrastructure as damaged may be a strong indicator that this is indeed the case. So we are very much continuing our research and working hard to increase both Detection Rates and Hit Ratios.



Using Crowdsourcing to Counter the Spread of False Rumors on Social Media During Crises

My new colleague Professor Yasuaki Sakamoto at the Stevens Institute of Technology (SIT) has been carrying out intriguing research on the spread of rumors via social media, particularly on Twitter and during crises. In his latest research, “Toward a Social-Technological System that Inactivates False Rumors through the Critical Thinking of Crowds,” Yasu uses behavioral psychology to understand why exposure to public criticism changes rumor-spreading behavior on Twitter during disasters. This fascinating research builds very nicely on the excellent work carried out by my QCRI colleague ChaTo, who used this “criticism dynamic” to show that the credibility of tweets can be predicted (by topic) without analyzing their content. Yasu’s study also seeks to find the psychological basis for Twitter’s self-correcting behavior identified by ChaTo and also by John Herman, who described Twitter as a “Truth Machine” during Hurricane Sandy.


Twitter is still a relatively new platform, but the existence and spread of false rumors is certainly not. In fact, a very interesting study from 1950 found that “in the past 1,000 years the same types of rumors related to earthquakes appear again and again in different locations.” Early academic studies on the spread of rumors revealed that “psychological factors, such as accuracy, anxiety, and importance of rumors, affect rumor transmission.” One such study proposed that the spread of a rumor “will vary with the importance of the subject to the individuals concerned times the ambiguity of the evidence pertaining to the topic at issue.” Later studies added “anxiety as another key element in rumormongering,” since “the likelihood of sharing a rumor was related to how anxious the rumor made people feel.” At the same time, however, the literature also reveals that countermeasures do exist. Critical thinking, for example, decreases the spread of rumors. The literature defines critical thinking as “reasonable reflective thinking focused on deciding what to believe or do.”

“Given the growing use and participatory nature of social media, critical thinking is considered an important element of media literacy that individuals in a society should possess.” Indeed, while social media can “help people make sense of their situation during a disaster, social media can also become a rumor mill and create social problems.” As discussed above, psychological factors can influence rumor spreading, particularly when experiencing stress and mental pressure following a disaster. Recent studies have also corroborated this finding, confirming that “differences in people’s critical thinking ability […] contributed to the rumor behavior.” So Yasu and his team ask the following interesting question: can critical thinking be crowdsourced?


“Not everyone needs to be a critical thinker all the time,” writes Yasu et al. As long as some individuals are good critical thinkers in a specific domain, their timely criticisms can result in an emergent critical thinking social system that can mitigate the spread of false information. This goes to the heart of the self-correcting behavior often observed on social media and Twitter in particular. Yasu’s insight also provides a basis for a bounded crowdsourcing approach to disaster response. More on this here, here and here.

“Related to critical thinking, a number of studies have paid attention to the role of denial or rebuttal messages in impeding the transmission of rumor.” This is the more “visible” dynamic behind the self-correcting behavior observed on Twitter during disasters. So while some may spread false rumors, others often try to counter this spread by posting tweets criticizing rumor-tweets directly. The following questions thus naturally arise: “Are criticisms on Twitter effective in mitigating the spread of false rumors? Can exposure to criticisms minimize the spread of rumors?”

Yasu and his colleagues set out to test the following hypotheses: exposure to criticisms reduces people’s intent to spread rumors; that is, exposure to criticisms lowers the perceived accuracy, anxiety, and importance of rumors. They tested these hypotheses on 87 Japanese undergraduate and graduate students by using 20 rumor-tweets related to the 2011 Japan Earthquake and 10 criticism-tweets that criticized the corresponding rumor-tweets. For example:

Rumor-tweet: “Air drop of supplies is not allowed in Japan! I though it has already been done by the Self- Defense Forces. Without it, the isolated people will die! I’m trembling with anger. Please retweet!”

Criticism-tweet: “Air drop of supplies is not prohibited by the law. Please don’t spread rumor. Please see 4-(1)-4-.”

The researchers found that “exposing people to criticisms can reduce their intent to spread rumors that are associated with the criticisms, providing support for the system.” In fact, “Exposure to criticisms increased the proportion of people who stop the spread of rumor-tweets approximately 1.5 times [150%]. This result indicates that whether a receiver is exposed to rumor or criticism first makes a difference in her decision to spread the rumor. Another interpretation of the result is that, even if a receiver is exposed to a number of criticisms, she will benefit less from this exposure when she sees rumors first than when she sees criticisms before rumors.”


Findings also revealed three psychological factors that were related to the differences in the spread of rumor-tweets: one’s own perception of the tweet’s accuracy, the anxiety caused by the tweet, and the tweet’s perceived importance. The results also indicate that “exposure to criticisms reduces the perceived accuracy of the succeeding rumor-tweets, paralleling the findings by previous research that refutations or denials decrease the degree of belief in rumor.” In addition, the perceived accuracy of criticism-tweets by those exposed to rumors first was significantly higher than in the criticism-first group. The results were similar vis-à-vis anxiety: “Seeing criticisms before rumors reduced anxiety associated with rumor-tweets relative to seeing rumors first. This result is also consistent with previous research findings that denial messages reduce anxiety about rumors. Participants in the criticism-first group also perceived rumor-tweets to be less important than those in the rumor-first group.” The same was true vis-à-vis the perceived importance of a tweet. That said, “When the rumor-tweets are perceived as more accurate, the intent to spread the rumor-tweets is stronger; when rumor-tweets cause more anxiety, the intent to spread the rumor-tweets is stronger; when the rumor-tweets are perceived as more important, the intent to spread the rumor-tweets is also stronger.”

So how do we use these findings to enhance the critical thinking of crowds and design crowdsourced verification platforms such as Verily? Ideally, such a platform would connect rumor-tweets with criticism-tweets directly. “By this design, information system itself can enhance the critical thinking of the crowds.” That said, the findings clearly show that sequencing matters: being exposed to rumor-tweets first vs. criticism-tweets first makes a big difference vis-à-vis rumor contagion. The purpose of a platform like Verily is to act as a repository for crowdsourced criticisms and rebuttals; that is, crowdsourced critical thinking. Thus, the majority of Verily users would first be exposed to questions about rumors, such as: “Has the Vincent Thomas Bridge in Los Angeles been destroyed by the Earthquake?” Users would then be exposed to the crowdsourced criticisms and rebuttals.

In conclusion, the spread of false rumors during disasters will never go away. “It is human nature to transmit rumors under uncertainty.” But social-technological platforms like Verily can provide a repository of critical thinking and educate users on critical thinking processes themselves. In this way, we may be able to enhance the critical thinking of crowds.



See also:

  • Wiki on Truthiness resources (Link)
  • How to Verify and Counter Rumors in Social Media (Link)
  • Social Media and Life Cycle of Rumors during Crises (Link)
  • How to Verify Crowdsourced Information from Social Media (Link)
  • Analyzing the Veracity of Tweets During a Crisis (Link)
  • Crowdsourcing for Human Rights: Challenges and Opportunities for Information Collection & Verification (Link)
  • The Crowdsourcing Detective: Crisis, Deception and Intrigue in the Twittersphere (Link)