In the 2005 World Disasters Report (PDF), the International Federation of the Red Cross states unequivocally that access to information during disasters is as important as access to food, water, shelter and medication. Of all these commodities, however, crisis information is the most perishable. In other words, the “sell-by” or “use-by” date of information for decision-making during a crisis is very short. Put simply: information rots fast, especially in the field (assuming that information even exists in the first place). But how fast exactly, measured in hours and days?
Enter this handy graph by FEMA, which is based on a large survey of emergency management professionals across the US. As you’ll note, there is a very clear cut-off at 72 hours post-disaster, by which time the value of information for decision-making purposes has depreciated by 60% to 85%. Even at 48 hours, information has lost 35% to 65% of its initial tactical value. Disaster responders don’t have the luxury of waiting around for actionable information to inform their decisions during the first 24-72 hours after a disaster. So obviously they’ll make those decisions whether or not timely data is available to guide them.
In a way, the graph also serves as a “historical caricature” of the availability of crisis information over the past 25 years:
During the early 1990s, when the web and mobile phones were still in their infancy, it often took weeks to collect detailed information on disaster damage and needs following major disasters. Towards the end of the 2000s, thanks to the rapid growth in smartphones, social media and the increasing availability of satellite imagery, plus improvements in humanitarian information management systems, the time it took to collect crisis information shortened considerably. One could say we crossed the 72-hour time barrier on January 12, 2010, when a devastating earthquake struck Haiti. Five years later, the Nepal earthquake in April 2015 may have seen a number of formal responders crossing the 48-hour threshold.
While these observations are at best the broad brushstrokes of a caricature, the continued need for timely information is very real, especially for tactical decision making in the field. This is why we need to shift further left in the FEMA graph. Of course, information that is older than 48 hours is still useful, particularly for decision-makers at headquarters who do not need to make tactical decisions.
In fact, the real win would be to generate and access actionable information within the first 12 to 24 hours. At the 24-hour mark, the value of information has “only” depreciated by 10% to 35%. So how do we get to the top left corner of the graph? How do we get to “Win”?
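The depreciation figures quoted from the FEMA graph can be sketched as a simple decay curve. This is purely illustrative: the exponential form, the ~48-hour half-life, and the range midpoints are my assumptions, not FEMA’s actual model.

```python
# Illustrative decay model for the tactical value of crisis
# information, loosely fitted to the depreciation ranges quoted
# from the FEMA graph. The exponential form and the ~48-hour
# half-life are assumptions for illustration only.

# Midpoints of the quoted loss ranges (assumed, not FEMA data):
#   24 h -> ~22.5% lost, 48 h -> ~50% lost, 72 h -> ~72.5% lost
quoted_loss = {24: 0.225, 48: 0.50, 72: 0.725}

def value_remaining(hours, half_life=48.0):
    """Fraction of initial tactical value left after `hours`,
    assuming exponential decay with the given half-life."""
    return 0.5 ** (hours / half_life)

for h in sorted(quoted_loss):
    model_loss = 1 - value_remaining(h)
    print(f"t={h:2d}h  quoted loss ~{quoted_loss[h]:.0%}  "
          f"model loss ~{model_loss:.0%}")
```

A 48-hour half-life keeps the modeled loss inside each of the quoted ranges (roughly 29% at 24 h, 50% at 48 h, 65% at 72 h), which is all this sketch is meant to show: why every hour saved before the 24-hour mark is worth so much.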
By integrating new and existing sensors and combining these with automated analysis solutions. New sensors: like Planet Labs’ growing constellation of micro-satellites, which will eventually image the entire planet once every 24 hours at around 3-meter resolution. And new automated analysis solutions: powered by crowdsourcing and artificial intelligence (AI), and in particular deep learning techniques, to process the Big Data generated by these “neo-sensors” in near real-time, including multimedia posted to social media sites and the Web in general.
And the need for baseline data is no less important, for comparative analysis and change detection purposes. As a colleague of mine recently noted, the value of baseline information before a major disaster is at an all-time high, but then it too depreciates post-disaster.
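The change-detection point can be made concrete with a minimal sketch: difference a pre-disaster baseline grid against a post-disaster grid and flag the cells that changed sharply. The grid values and threshold here are hypothetical stand-ins for real imagery, and the point is that without the baseline there is nothing to difference against.

```python
# Minimal change-detection sketch: compare a pre-disaster baseline
# grid with a post-disaster grid and flag cells whose value changed
# beyond a threshold. All values are hypothetical.

baseline = [
    [0.8, 0.7, 0.9],
    [0.6, 0.8, 0.7],
    [0.9, 0.9, 0.8],
]
post_disaster = [
    [0.8, 0.2, 0.9],   # cell (0, 1) dropped sharply: possible damage
    [0.6, 0.8, 0.1],   # cell (1, 2) dropped sharply: possible damage
    [0.9, 0.9, 0.8],
]

def changed_cells(before, after, threshold=0.3):
    """Return (row, col) of cells that changed by more than
    `threshold` -- a crude stand-in for imagery change detection."""
    return [
        (r, c)
        for r, row in enumerate(before)
        for c, b in enumerate(row)
        if abs(after[r][c] - b) > threshold
    ]

print(changed_cells(baseline, post_disaster))
```

Running the sketch flags the two cells that dropped sharply. Real pipelines do this over co-registered satellite imagery, but the dependency is the same: the analysis is only as good as the baseline it starts from.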
Of course, access to real-time information does not make a humanitarian organization a real-time response organization. There are always delays, regardless of how timely (or not) the information is (assuming it is even available). But the real first responders are the local communities. So the real win here would be to make this real-time analysis directly available to local partners in disaster-prone countries. They often have more of an immediate incentive to generate and consume timely, tactical information. I described this information flow as “crowdfeeding” years ago.
In sum, the democratization of crisis information is key (keeping in mind data-protection protocols). But said democratization isn’t enough. The know-how and technologies to generate and analyze crisis information during the first 12-24 hours must also be democratized. The local capacity to respond quickly and effectively must exist; otherwise timely, tactical information will just rot away.
I’d be very interested to hear from human rights practitioners to get their thoughts on how and when the above crisis information framework does, and does not, apply to human rights monitoring.