The Value of Timely Information During Disasters (Measured in Hours)

In its 2005 World Disasters Report (PDF), the International Federation of the Red Cross states unequivocally that access to information during disasters is as important as access to food, water, shelter and medication. Of all these commodities, however, crisis information is the most perishable. In other words, the “sell-by” or “use-by” date of information for decision-making during a crisis is very short. Put simply: information rots fast, especially in the field (assuming that information even exists in the first place). But how fast, exactly, as measured in hours and days?

[Figure: FEMA graph showing how the value of information for decision-making depreciates with time since disaster]

Enter this handy FEMA graph, based on a large survey of emergency management professionals across the US. As you’ll note, there is a very clear cut-off at 72 hours post-disaster, by which time the value of information for decision-making purposes has depreciated by 60% to 85%. Even at 48 hours, information has lost 35% to 65% of its initial tactical value. Disaster responders don’t have the luxury of waiting around for actionable information to inform their decisions during the first 24-72 hours after a disaster, so they make those decisions whether or not timely data is available to guide them.
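To make those depreciation figures concrete, here is a minimal Python sketch (my own illustration, not FEMA’s model) that linearly interpolates the midpoints of the bands cited in this post: roughly 10%-35% depreciation by 24 hours, 35%-65% by 48 hours, and 60%-85% by 72 hours. The piecewise-linear shape between those points is an assumption.

```python
# Minimal sketch: estimate how much of the initial value of crisis information
# remains as hours pass, by interpolating the midpoints of the depreciation
# bands cited in this post. Illustrative only; not FEMA's actual survey model.

# (hours since disaster, midpoint of remaining value as a fraction)
BAND_MIDPOINTS = [
    (0, 1.00),    # disaster strikes: full tactical value
    (24, 0.775),  # 10%-35% depreciation -> ~77.5% remaining
    (48, 0.50),   # 35%-65% depreciation -> ~50% remaining
    (72, 0.275),  # 60%-85% depreciation -> ~27.5% remaining
]

def remaining_value(hours: float) -> float:
    """Piecewise-linear estimate of the remaining value of information."""
    points = BAND_MIDPOINTS
    if hours <= points[0][0]:
        return points[0][1]
    if hours >= points[-1][0]:
        return points[-1][1]
    for (h0, v0), (h1, v1) in zip(points, points[1:]):
        if h0 <= hours <= h1:
            t = (hours - h0) / (h1 - h0)
            return v0 + t * (v1 - v0)
    return points[-1][1]

if __name__ == "__main__":
    for h in (6, 12, 24, 36, 48, 72):
        print(f"{h:>2} hours: ~{remaining_value(h):.0%} of initial value remains")
```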

In a way, the graph also serves as a “historical caricature” of the availability of crisis information over the past 25 years:

[Figure: the FEMA graph annotated as a historical caricature of crisis information availability]

During the early 1990s, when the web and mobile phones were still in their infancy, it often took weeks to collect detailed information on disaster damage and needs following major disasters. Towards the end of the 2000s, thanks to the rapid growth of smartphones and social media, the increasing availability of satellite imagery, and improvements in humanitarian information management systems, the time it took to collect crisis information shortened. One could say we crossed the 72-hour barrier on January 12, 2010, when a devastating earthquake struck Haiti. Five years later, during the Nepal earthquake of April 2015, a number of formal responders may have crossed the 48-hour threshold.

While these observations are at best the broad brushstrokes of a caricature, the continued need for timely information is very real, especially for tactical decision-making in the field. This is why we need to shift further left in the FEMA graph. Of course, information older than 48 hours is still useful, particularly for decision-makers at headquarters who do not need to make tactical decisions.

[Figure: annotated FEMA graph]

In fact, the real win would be to generate and access actionable information within the first 12 to 24 hours. By the 24-hour mark, the value of information has “only” depreciated by 10% to 35%. So how do we get to the top-left corner of the graph? How do we get to “Win”?

[Figure: FEMA graph with the top-left “Win” corner highlighted]

By integrating new and existing sensors and combining these with automated analysis solutions. New sensors: Planet Labs’ growing constellation of micro-satellites, for example, which will eventually image the entire planet once every 24 hours at around 3-meter resolution. And new automated analysis solutions: powered by crowdsourcing and artificial intelligence (AI), in particular deep learning techniques, to process the Big Data generated by these “neo-sensors” in near real-time, including multimedia posted to social media sites and the Web in general.
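To illustrate what such an integration might look like in practice, here is a minimal, hypothetical sketch of a triage pipeline: a deep learning classifier scores incoming imagery, high-confidence detections go straight to the map, and borderline cases are routed to crowdsourced verification. The `fetch_recent_posts` and `classify_damage` functions are placeholders invented purely for illustration; no specific platform API or model is implied.

```python
# Hypothetical sketch of a near-real-time triage pipeline for disaster imagery
# posted to social media. The fetch and classification steps are stand-ins; a
# real deployment would plug in an actual social media client and a trained
# deep learning model, plus crowdsourced verification of borderline results.

from dataclasses import dataclass
from typing import Iterable

@dataclass
class Post:
    post_id: str
    image_bytes: bytes
    latitude: float
    longitude: float

def fetch_recent_posts(keyword: str) -> Iterable[Post]:
    """Placeholder for a social media search client (hypothetical)."""
    return []  # swap in a real client here

def classify_damage(image_bytes: bytes) -> float:
    """Placeholder for a trained image classifier (hypothetical).
    Returns the model's confidence (0-1) that the image shows damage."""
    return 0.0  # swap in a real model here

def triage(posts: Iterable[Post], auto_threshold: float = 0.9,
           review_threshold: float = 0.5) -> dict:
    """Route each post: high confidence -> map directly,
    mid confidence -> crowdsourced verification, low -> discard."""
    routed = {"map": [], "crowd_review": [], "discard": []}
    for post in posts:
        score = classify_damage(post.image_bytes)
        if score >= auto_threshold:
            routed["map"].append(post)
        elif score >= review_threshold:
            routed["crowd_review"].append(post)
        else:
            routed["discard"].append(post)
    return routed

if __name__ == "__main__":
    results = triage(fetch_recent_posts("earthquake"))
    print({bucket: len(items) for bucket, items in results.items()})
```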

And the need for baseline data is no less important, for comparative analysis and change detection purposes. As a colleague of mine recently noted, the value of baseline information is at an all-time high right before a major disaster, but it too depreciates post-disaster.

[Figure: value of baseline information peaking before a disaster and depreciating after it]
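As a toy illustration of why the “before” data matters, here is a simple pixel-differencing sketch: with a co-registered baseline image, post-disaster change jumps out; without it, there is nothing to compare against. Real satellite change detection involves co-registration, radiometric correction and far more robust methods; this only shows the principle, on simulated arrays.

```python
# Toy sketch of change detection between a pre-disaster baseline image and a
# post-disaster image. Purely illustrative: real workflows on satellite imagery
# require co-registration, radiometric correction and more robust metrics.

import numpy as np

def change_mask(before: np.ndarray, after: np.ndarray,
                threshold: float = 0.2) -> np.ndarray:
    """Flag pixels whose normalized intensity changed by more than `threshold`."""
    if before.shape != after.shape:
        raise ValueError("images must be co-registered and the same size")
    before = before.astype(float) / 255.0
    after = after.astype(float) / 255.0
    return np.abs(after - before) > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pre = rng.integers(0, 256, size=(64, 64))   # stand-in baseline image
    post = pre.copy()
    post[20:30, 20:30] = 0                      # simulate a damaged block
    mask = change_mask(pre, post)
    print(f"{mask.mean():.1%} of pixels flagged as changed")
```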

Of course, access to real-time information does not make a humanitarian organization a real-time response organization. There are always delays, regardless of how timely (or not) the information is (assuming it is even available). But the real first responders are the local communities. So the real win here would be to make this real-time analysis directly available to local partners in disaster-prone countries, who often have more of an immediate incentive to generate and consume timely, tactical information. I described this information flow as “crowdfeeding” years ago.

In sum, the democratization of crisis information is key (keeping in mind data-protection protocols). But said democratization isn’t enough. The know-how and technologies to generate and analyze crisis information during the first 12-24 hours must also be democratized. The local capacity to respond quickly and effectively must exist; otherwise timely, tactical information will just rot away.


I’d be very interested to hear from human rights practitioners on how and when the above crisis information framework does, and does not, apply to human rights monitoring.

14 responses to “The Value of Timely Information During Disasters (Measured in Hours)”

  1. Yes, I fully agree; that’s why at EMSC we focus on the very first few hours after an earthquake to crowdsource information from eyewitnesses using social media.
    Information has another value: “better than any medication we know, information treats anxiety in a crisis” (Saathoff and Everly, 2002).

    Now a short question, Patrick: I don’t really understand the label of the Y axis, “Percentage of states”??

  2. Very interesting. Indeed, a lot could be gained from channeling more information earlier on in the response. Key challenges remain, though, in the analysis of such data, but also, importantly, in its quality; the latter tends to increase over time. No doubt cross-tabulating timeliness and quality (where such data exist) would make for another informative graph.

    • Thanks Loek, yes, the analysis of “Big Data” generated during disasters remains a challenge. Hence my point above re new automated analysis solutions being needed, “powered by crowdsourcing and artificial intelligence (AI), and in particular deep learning techniques to process the Big Data generated by these ‘neo-sensors’ in near real-time, including multimedia posted to social media sites and the Web in general.” Based on my experience, I’m not sure that the quality of the data necessarily increases, but certainly as more time is spent on data analysis, more goes into verification, which can lead to increased data accuracy over time. See my post on this here: http://iRevolutions.org/2009/03/27/internews-ushahidi-and-communication-in-crises

  3. Matthew Lloyd

    The message I’m taking from this is that there is a difference between tactical front-line decisions and strategic HQ decisions. HQ decisions require access to longer-range communication links; tactical decisions may not have anything but local communications to support them. With that in mind, I don’t think that crowdsourcing can be assumed to be reliably available to the front line. Therefore my interest is in decision support systems that can run within the limited computing power available at the front line. Can AI run on a laptop?

  4. Pingback: Real-time Mapping Software Aids In Disaster Response | MotionDSP Inc.

  5. Pingback: Democratizar la información en crisis ayudaría en desastres, señala @PatrickMeier | iRescate, revista digital de crisis y emergencias

  6. Pingback: Links round-up | The CSAE Blog

  7. Hi Patrick – can you post a link to the original FEMA work please. I have tried (but failed) to locate it. Many thanks

  8. Hey Patrick,
    At what point does democratizing information become inefficient?
    Is there any worth putting information in the hands of everyone in a crisis situation?

  9. Pingback: Today in disaster resilience (14 April 2017) – Disaster Resilience News

  10. Pingback: Info 281-13 Blog Post #4: The Future of Crisis Mapping – newzealandhistorian
