Planet has an unparalleled constellation of satellites in orbit. In addition to its current constellation of 130 micro-satellites, the company operates 5 RapidEye satellites and the 7 SkySat satellites (recently acquired from Google). What’s more, 48 new micro-satellites were launched into orbit this July, bringing the total number of Planet satellites to 190. Once these 48 satellites begin imaging, Planet will have daily coverage of the entire Earth, capturing over 150 million square kilometers every day. Never before has the humanitarian community had access to such a vast amount of timely satellite imagery.
As described in my book, Digital Humanitarians, this vast amount of new data adds to the rapidly growing Big Data challenge that humanitarian organizations are facing. As such, what humanitarians need is not just data philanthropy—i.e., free and rapid access to relevant data—they also need insight philanthropy. This is where Planet’s new Rapid Response Team comes in.
Planet just launched this new digital volunteer program in partnership with the Digital Humanitarian Network to help ensure that Planet’s data and insights get to the right people at the right time to accelerate and improve humanitarian response. After major disasters hit, members of the Rapid Response Team can provide the latest satellite images available and/or geospatial analysis directly to field-based aid organizations.
So if you’re an established humanitarian group and need rapid access to satellite imagery and/or analysis after major disasters, simply activate the Digital Humanitarian Network. You can request satellite images of disaster-affected areas on a daily basis, as well as before/after analysis (sliders) of those areas as shown above. This is an exciting and generous new resource being made available to the international humanitarian community by Planet, so please do take advantage of it.
In the meantime, if you have any questions or suggestions, please feel free to get in touch by email or via the comments section below. I serve as an advisor to Planet and am keen to make the Rapid Response initiative as useful as possible to humanitarian organizations.
The Multi-Cluster/Sector Initial Rapid Assessment (MIRA) is the methodology used by UN agencies to assess and analyze humanitarian needs within two weeks of a sudden-onset disaster. A detailed overview of the process, methodologies and tools behind MIRA is available here (PDF). These reports are particularly insightful when compared with the processes and methodologies used by digital humanitarians to carry out their rapid damage assessments (typically done within 48-72 hours of a disaster).
Take the November 2013 MIRA report for Typhoon Haiyan in the Philippines. I am really impressed by how transparent the report is vis-à-vis the very real limitations behind the assessment. For example:
“The barangays [districts] surveyed do not constitute a representative sample of affected areas. Results are skewed towards more heavily impacted municipalities […].”
“Key informant interviews were predominantly held with barangay captains or secretaries and they may or may not have included other informants including health workers, teachers, civil and worker group representatives among others.”
“Barangay captains and local government staff often needed to make their best estimate on a number of questions and therefore there’s considerable risk of potential bias.”
“Given the number of organizations involved, assessment teams were not trained in how to administrate the questionnaire and there may have been confusion on the use of terms or misrepresentation on the intent of the questions.”
“Only in a limited number of questions did the MIRA checklist contain before and after questions. Therefore to correctly interpret the information it would need to be cross-checked with available secondary data.”
In sum: the data collected was not representative; the selection of interviewees was biased, since it was based on a convenience sample; interviewees had to estimate (guesstimate?) the answers to several questions, introducing additional bias into the data; because assessment teams were not trained to administer the questionnaire, inter-coder reliability is limited, which in turn limits the ability to compare survey results; and the data still needs to be validated against secondary data.
I do not share the above to criticize, only to relay what the real world of rapid assessments looks like when you peer “under the hood”. What is striking is how similar the above challenges are to those that digital humanitarians have been facing when carrying out rapid damage assessments. And yet, I distinctly recall rather pointed criticisms leveled by professional humanitarians against groups using social media and crowdsourcing for humanitarian response back in 2010 and 2011. These criticisms dismissed social media reports as being unrepresentative, unreliable, fraught with selection bias, etc. (Some myopic criticisms continue to this day.) I find it rather interesting that many of the shortcomings attributed to crowdsourced social media reports are also true of traditional information collection methodologies like MIRA.
The fact is this: no data or methodology is perfect. The real world is messy, both off- and online. Being transparent about these limitations is important, especially for those who seek to combine both off- and online methodologies to create more robust and timely damage assessments.
I thrive when working across disciplines, building diverse cross-cutting coalitions to create, translate and apply innovative strategies driven by shared values. This has enabled the 20+ organizations I’ve worked with, and those I’ve led, to accelerate meaningful and inclusive social impact.
This is why successful innovators have called me a social entrepreneur and a translational leader. President Clinton once called me a digital pioneer, while recent colleagues describe me as kind, dedicated, values-driven, authentic, creative, ethical, and impactful.