Update: Twitter Dashboard for Disaster Response

Project name: Artificial Intelligence for Disaster Response (AIDR). For a more recent update, please click here.

My Crisis Computing Team and I at QCRI have been working hard on the Twitter Dashboard for Disaster Response. We first announced the project on iRevolution last year. The experimental research we’ve carried out since has been particularly insightful vis-a-vis the opportunities and challenges of building such a Dashboard. We’re now using the findings from our empirical research to inform the next phase of the project—namely building the prototype for our humanitarian colleagues to experiment with so we can iterate and improve the platform as we move forward.


Manually processing disaster tweets is becoming increasingly difficult and unrealistic. Over 20 million tweets were posted during Hurricane Sandy, for example. This is the main problem that our Twitter Dashboard aims to solve. There are two ways to manage this challenge of Big (Crisis) Data: Advanced Computing and Human Computation. The former entails the use of machine learning algorithms to automatically tag tweets while the latter involves the use of microtasking, which I often refer to as Smart Crowdsourcing. Our Twitter Dashboard seeks to combine the best of both methodologies.

On the Advanced Computing side, we’ve developed a number of classifiers that automatically identify tweets that:

  • Contain informative content (in contrast to personal messages or information unhelpful for disaster response);
  • Are posted by eye-witnesses (as opposed to 2nd-hand reporting);
  • Include pictures, video footage, or mentions of TV/radio coverage;
  • Report casualties and infrastructure damage;
  • Relate to people missing, seen and/or found;
  • Communicate caution and advice;
  • Call for help and important needs;
  • Offer help and support.

These classifiers are developed using state-of-the-art machine learning techniques. This simply means that we take a Twitter dataset from a disaster, say Hurricane Sandy, and develop clear definitions for “Informative Content,” “Eye-witness accounts,” etc. We use this classification system to tag a random sample of tweets from the dataset (usually 100+ tweets). We then “teach” algorithms to find these different topics in the rest of the dataset. We tweak said algorithms to make them as accurate as possible; much like teaching a dog new tricks like go-fetch (wink).
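
To make the training step above more concrete, here is a minimal sketch of what “teaching” one such classifier could look like, using scikit-learn as a stand-in. The example tweets, labels and model choices are illustrative assumptions, not the project’s actual pipeline.

    # A minimal, hypothetical sketch of the training step described above,
    # using scikit-learn as a stand-in; not the project's actual pipeline.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # A tiny, hand-tagged sample (made-up tweets) for one classifier,
    # e.g. "Informative Content" vs. everything else.
    labeled_tweets = [
        ("Power lines down on Main St, two houses flooded", "informative"),
        ("I can't believe this storm, so scary!!", "not_informative"),
        ("Red Cross shelter open at Lincoln High, bring blankets", "informative"),
        ("thoughts and prayers to everyone out there", "not_informative"),
    ]
    texts, labels = zip(*labeled_tweets)

    # Bag-of-words features plus a linear model, trained on the tagged sample.
    classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    classifier.fit(texts, labels)

    # "Find these different topics" in the rest of the dataset (or new tweets).
    print(classifier.predict(["Bridge closed due to storm damage, use Route 9"]))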


We’ve found from this research that the classifiers are quite accurate but sensitive to the type of disaster being analyzed and to the country in which said disaster occurs. For example, a set of classifiers developed from tweets posted during Hurricane Sandy tends to be less accurate when applied to tweets posted during New Zealand’s earthquake. Because each classifier is developed from tweets posted during a specific disaster, the classifiers can be highly accurate (i.e., tweets are correctly tagged as being damage-related, for example), but they only tend to be accurate for the type of disaster they’ve been trained on, e.g., weather-related disasters (tornadoes), earth-related (earthquakes) and water-related (floods).
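
The cross-disaster sensitivity described above can be measured by training on one event and testing on another. The sketch below assumes two hand-labeled CSV files (hypothetical file names) in a simple text,label format; it is an illustration of the evaluation idea, not our actual experimental setup.

    # Hypothetical sketch of the cross-disaster check described above: train on
    # tweets from one event, then measure accuracy on a different event type.
    import csv
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    def load_labeled_tweets(path):
        """Read a two-column CSV of (tweet_text, label); file names are placeholders."""
        with open(path, newline="", encoding="utf-8") as f:
            rows = [row for row in csv.reader(f) if row]
        texts, labels = zip(*rows)
        return list(texts), list(labels)

    sandy_texts, sandy_labels = load_labeled_tweets("sandy_2012_labeled.csv")        # hurricane
    quake_texts, quake_labels = load_labeled_tweets("christchurch_2011_labeled.csv") # earthquake

    # Hold out part of the Sandy data so the in-domain score is honest.
    train_texts, test_texts, train_labels, test_labels = train_test_split(
        sandy_texts, sandy_labels, test_size=0.2, random_state=0)

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(train_texts, train_labels)

    # Accuracy on the disaster type the classifier was trained for is typically
    # much higher than accuracy on a different disaster type.
    print("Sandy -> held-out Sandy:", accuracy_score(test_labels, model.predict(test_texts)))
    print("Sandy -> Christchurch:", accuracy_score(quake_labels, model.predict(quake_texts)))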

So we’ve been busy trying to collect as many Twitter datasets of different disasters as possible, which has been particularly challenging and seriously time-consuming given Twitter’s highly restrictive Terms of Service, which prevents the direct sharing of Twitter datasets—even for humanitarian purposes. This means we’ve had to spend a considerable amount of time re-creating Twitter datasets for past disasters; datasets that other research groups and academics have already crawled and collected. Thank you, Twitter. Clearly, we can’t collect every single tweet for every disaster that has occurred over the past five years or we’ll never get to actually developing the Dashboard.
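
One common workaround within those Terms of Service is to share lists of tweet IDs rather than tweet content, and then “re-hydrate” the tweets via the Twitter API. The sketch below assumes a plain-text file of IDs and uses tweepy as an example client; the credentials and file name are placeholders.

    # Hypothetical sketch of "re-hydrating" a shared list of tweet IDs back into
    # full tweets via the Twitter API (tweepy used here as an example client).
    import tweepy

    # Placeholder credentials; a real run needs a registered Twitter app.
    auth = tweepy.OAuthHandler("API_KEY", "API_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    api = tweepy.API(auth, wait_on_rate_limit=True)

    with open("sandy_tweet_ids.txt") as f:  # one tweet ID per line (placeholder file)
        tweet_ids = [line.strip() for line in f if line.strip()]

    tweets = []
    # The statuses/lookup endpoint accepts up to 100 IDs per request.
    for i in range(0, len(tweet_ids), 100):
        batch = tweet_ids[i:i + 100]
        # tweepy 4.x; in tweepy 3.x this method was called statuses_lookup().
        tweets.extend(api.lookup_statuses(batch))

    # Deleted or protected tweets simply won't come back, so counts rarely match.
    print(f"Re-hydrated {len(tweets)} of {len(tweet_ids)} tweets")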

That said, some of the most interesting Twitter disaster datasets are of recent (and indeed future) disasters. Truth be told, tweets were still largely US-centric before 2010. But the international coverage has since increased, along with the number of new Twitter users, which almost doubled in 2012 alone (more neat stats here). This in part explains why more and more Twitter users actively tweet during disasters. There is also a demonstration effect. That is, the international media coverage of social media use during Hurricane Sandy, for example, is likely to prompt citizens in other countries to replicate this kind of pro-active social media use when disaster knocks on their doors.

So where does this leave us vis-a-vis the Twitter Dashboard for Disaster Response? Simply that a hybrid approach is necessary (see TEDx talk above). That is, the Dashboard we’re developing will ship with a number of pre-developed classifiers based on as many datasets as we can get our hands on (categorized by disaster type). In addition, the Dashboard will allow users to create their own classifiers on the fly by leveraging human computation; they’ll also be able to microtask the creation of new classifiers.

In other words, what they’ll do is this (a rough code sketch follows the list):

  • Enter a search query on the dashboard, e.g., #Sandy.
  • Click on “Create Classifier” for #Sandy.
  • Create a label for the new classifier, e.g., “Animal Rescue”.
  • Tag 50+ #Sandy tweets that convey content about animal rescue.
  • Click “Run Animal Rescue Classifier” on new incoming tweets.
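
Here is a rough, hypothetical sketch of that workflow in code. The function names, labels and example tweets are illustrative only and are not the Dashboard’s actual API.

    # Rough, hypothetical sketch of the on-the-fly classifier workflow above;
    # function names and example tweets are illustrative, not the Dashboard's API.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def create_classifier(label_name, tagged_tweets):
        """Steps 2-4: train a new binary classifier from ~50 user-tagged tweets."""
        texts = [text for text, _ in tagged_tweets]
        labels = [1 if is_match else 0 for _, is_match in tagged_tweets]
        model = make_pipeline(TfidfVectorizer(), LogisticRegression())
        model.fit(texts, labels)
        model.label_name = label_name
        return model

    def run_classifier(model, incoming_tweets):
        """Step 5: tag new incoming tweets with the user-defined label."""
        hits = model.predict(incoming_tweets)
        return [(tweet, model.label_name) for tweet, hit in zip(incoming_tweets, hits) if hit]

    # Two toy training examples stand in for the 50+ tagged #Sandy tweets.
    animal_rescue = create_classifier("Animal Rescue", [
        ("Dog stranded on a rooftop in Hoboken, needs rescue #Sandy", True),
        ("Gas lines are insane right now #Sandy", False),
    ])
    print(run_classifier(animal_rescue, ["Cat shelter flooded, animals need help #Sandy"]))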

The new classifier will then automatically tag incoming tweets. Of course, the classifier won’t get it completely right. But the beauty here is that the user can “teach” the classifier not to make the same mistakes, which means the classifier continues to learn and improve over time. On the geo-location side of things, it is indeed true that only ~3% of all tweets are geotagged by users. But this figure can be boosted to 30% using full-text geo-coding (as was done in the TwitterBeat project). Some believe it can be pushed even higher, towards 75%, by applying Google Translate to the full-text geo-coding. The remaining users can be queried via Twitter for their location and that of the events they are reporting.
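
The “keeps learning from its mistakes” behavior can be approximated with incremental (online) learning, where user corrections are folded back into the model without retraining from scratch. The sketch below is one possible way to do that, not necessarily how the Dashboard implements it.

    # Hypothetical sketch of folding user corrections back into the classifier
    # via incremental (online) learning; one possible approach only.
    from sklearn.feature_extraction.text import HashingVectorizer
    from sklearn.linear_model import SGDClassifier

    vectorizer = HashingVectorizer(n_features=2**18)   # stateless, so no re-fitting needed
    model = SGDClassifier()
    classes = [0, 1]                                   # 0 = not relevant, 1 = relevant

    def learn(texts, labels):
        """Update the model with newly tagged (or corrected) tweets."""
        model.partial_fit(vectorizer.transform(texts), labels, classes=classes)

    # Initial training from the user's first batch of tags.
    learn(["Dog stranded on a rooftop, needs rescue", "Stuck in traffic again"], [1, 0])

    # The classifier mislabels a tweet; the user corrects it, and the correction
    # becomes new training data, so the same mistake is less likely next time.
    tweet = "Horses trapped in a flooded barn near the shore"
    if model.predict(vectorizer.transform([tweet]))[0] != 1:   # user says this IS relevant
        learn([tweet], [1])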

So that’s where we’re at with the project. Ultimately, we envision these classifiers as individual apps that can be created, used, and dragged and dropped onto an intuitive, widget-like dashboard with various data visualization options. As noted in my previous post, everything we’re building will be freely accessible and open source. And of course we hope to include classifiers for languages beyond English, such as Arabic, Spanish and French. Again, however, this is purely experimental research for the time being; we want to be crystal clear about this in order to manage expectations. There is still much work to be done.

In the meantime, please feel free to get in touch if you have disaster datasets you can contribute to these efforts (we promise not to tell Twitter). If you’ve developed classifiers that you think could be used for disaster response and you’re willing to share them, please also get in touch. If you’d like to join this project and have the required skill sets, then get in touch; we may be able to hire you! Finally, if you’re an interested end-user or want to share some thoughts and suggestions as we embark on this next phase of the project, please do also get in touch. Thank you!


25 responses to “Update: Twitter Dashboard for Disaster Response”

  1. The concept of a dashboard or “Common Operating Picture” needs to be redefined, as the goal should be the availability of relevant data across the various personnel echelons geographically distributed in or outside the scene of a disaster. The reality is that an incident commander in the field has different data needs than the Director of the National Operations Center. (This is also true at the same level of responsibility, where state operations, logistics and planning all need to visualize data from a different perspective.) All stakeholders should have access to, and knowledge of how to incorporate, data into their operational picture. Users should have internal processes and representation capabilities to ingest a variety of data in the field and make it available, as relevant, to all actors; this includes local datasets as well as social media.

    Development of a user-defined/dynamic dashboard: this is done for information at each level (local, state, national, NGO, etc.). Often we believe that interoperability is about the movement of data from one system to another, whereas interoperability should really be about access to the right data. Moving data in bulk without intelligently processing it and extracting the relevant information for varied purposes increases overload and complexity, as well as the digital liability of storing and maintaining it. Intelligent processing would allow each operational unit to query datasets and only move and manage what is critical and/or required. Each unit can determine what information it needs for its operational picture, creating dynamic, user-based operational pictures that answer questions.

    • Thanks for sharing, Eric. Our first focus re the end-users of the Dashboard is OCHA’s Information Services Section. It is important that we co-develop this with them in order to ensure that what we iterate on gets as close as possible to what they need most. If we can achieve this, then later we can look to collaborate with other groups such as ICRC, Oxfam, etc., to determine what additional features/classifiers might be relevant to their work. But what we want to avoid here is trying to be everything to all users at all levels of analysis.

  2. This looks interesting, really great progress for an essential technology.

    Given the importance of SNS for disasters, what is the scope for creating a custom-built social networking platform from the ground up rather than using Twitter? Are there significant advantages to using Twitter? I’m guessing access to large load servers is one huge advantage, along with being able to quickly process huge amounts of traffic.

    • Thanks for reading and writing in, Richard. Yes, Twitter has a huge head-start given the traction of 200 million active users. I doubt a new social network service could be developed to rival those already in place (Twitter, FB, Google+, LinkedIn, etc).

  3. Hi Patrick, I am interested in your initiative. It would be interesting to eventually compare the data coming in from Twitter with the data collected from more traditional cluster assessment methods.

    We are working on a data integration project at OCHA. I can send you some background but just want to be sure that this is the best way to reach you.

    Thanks, Sarah

    Sarah Telford Head, Reporting Unit Communications Services Branch Office for the Coordination of Humanitarian Affairs United Nations, NY

    Tel: +1 (917) 367-4361 BlackBerry: +1 917 294 7021 Email: telford@un.org, ochareporting@un.org

  4. This looks very interesting. It can help in small agencies that have limited personnel resources early in a disaster.

  5. Super interesting… it would be good to learn how, as you mentioned, it can be applied to “future” disasters in terms of forecasting/anticipation, especially when we talk about extensive disasters and risks (e.g., extreme weather events).

  6. Pingback: Verily: Crowdsourcing Evidence During Disasters | iRevolution

  7. Pingback: La plataforma para clasificar tweets de ayuda a respuesta humanitaria de QCRI será híbrida, abierta y gratuita. | iRescate

  8. Pingback: Social Media as Passive Polling: Prospects for Development & Disaster Response | iRevolution

  9. Pingback: Launching: SMS Code of Conduct for Disaster Response | iRevolution

  10. Pingback: A Research Framework for Next Generation Humanitarian Technology and Innovation | iRevolution

  11. Pingback: GeoFeedia: Ready for Digital Disaster Response | iRevolution

  12. Pingback: Zooniverse: The Answer to Big (Crisis) Data? | iRevolution

  13. Pingback: GDACSmobile: Disaster Responders Turn to Bounded Crowdsourcing | iRevolution

  14. Pingback: Automatically Extracting Disaster-Relevant Information from Social Media | iRevolution

  15. Pingback: Automatically Extracting Disaster-Relevant Information from Social Media ~iRevolution | DisasterMap.net Blog

  16. Pingback: Humanitarianism in the Network Age: Groundbreaking Study | iRevolution

  17. Pingback: Automatically Classifying Crowdsourced Election Reports | iRevolution

  18. Pingback: Automatically Classifying Crowdsourced Election Report « Afronline – The Voice Of Africa

  19. Pingback: Big Data en las crisis humanitarias - train2manage

  20. Hi Mr Patrick,
    I am a PhD student working on the following domains: social media, Volunteered and Ambient Geographic Data, and disaster management. I find your work very interesting. Is it possible to get some Twitter datasets of two disasters that you have extracted and classified using your application, so I can analyze them and get a clear idea of how I can explore Twitter in a disaster case?

  21. Pingback: AIDR platform: Inteligencia Artificial y voluntarios unidos para verificar información ciudadana | Periodismo Ciudadano
