Category Archives: Crisis Mapping

What Happens When the Media Sends Drone Teams to Disasters?

Media companies like AFP and CNN are increasingly capturing dramatic aerial footage following major disasters around the world. These companies can be part of the solution when it comes to adding value to humanitarian efforts on the ground. But they can also be a part of the problem.


Media teams are increasingly showing up to disasters with small drones (UAVs) to document the damage. They’re at times the first with drones on the scene and thus able to quickly capture dramatic aerial footage of the devastation below. These media assets lead to more views and thus traffic on news websites, which increases the probability that more readers click on ads. Cue Noam Chomsky’s Manufacturing Consent: The Political Economy of the Mass Media, my favorite book whilst in high school.

Aerial footage can also increase situational awareness for disaster responders if that footage is geo-located. Labeling individual scenes in video footage with the name of the towns or villages being flown over would go a long way. This is what I asked one journalist to do in the aftermath of the Nepal Earthquake after he sent me dozens of his aerial videos. I also struck up an informal agreement with CNN to gain access to their raw aerial footage in future disasters. On a related note, I was pleased when my CNN contact expressed an interest in following the Humanitarian UAV Code of Conduct.
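Where a drone's GPS flight log is available, this kind of scene labeling could even be semi-automated by matching each timestamped GPS fix to the nearest named settlement. Here is a minimal sketch in Python; the village coordinates and flight log below are purely invented for illustration:

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical gazetteer of settlements (name -> (lat, lon)); invented values.
VILLAGES = {
    "Chautara": (27.777, 85.712),
    "Melamchi": (27.832, 85.577),
    "Barabise": (27.789, 85.895),
}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def label_scene(lat, lon):
    """Return the name of the nearest known settlement to a video frame's GPS fix."""
    return min(VILLAGES, key=lambda v: haversine_km(lat, lon, *VILLAGES[v]))

# Label each (seconds-into-video, lat, lon) entry of a hypothetical flight log.
flight_log = [(0, 27.780, 85.720), (120, 27.830, 85.580)]
labels = [(t, label_scene(lat, lon)) for t, lat, lon in flight_log]
```

In practice one would pull the gazetteer from an existing dataset rather than hard-code it, but the principle is the same: the journalist's raw footage becomes far more useful to responders once each scene carries a place name.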

In an ideal world, there would be a network of professional drone journalists with established news agencies that humanitarian organizations could quickly contact for geo-tagged video footage after major disasters to improve their situational awareness. Perhaps the Professional Society of Drone Journalists (PSDJ) could be part of the solution. In any case, the network would either have its own Code of Conduct or follow the humanitarian one. Perhaps they could post their footage and pictures directly to the Humanitarian UAV Network (UAViators) Crisis Map. Either way, the media has long played an important role in humanitarian disasters, and their increasing use of drones makes them even more valuable partners to increase situational awareness.

The above scenario describes the ideal world. But the media can be (and has been) part of the problem as well. “If it bleeds, it leads,” as the saying goes. Increased competition between media companies to be the first to capture dramatic aerial video that goes viral means that they may take shortcuts. They may not want to waste time getting formal approval from a country’s civil aviation authority. In Nepal after the earthquake, one leading company’s drone team was briefly detained by authorities for not getting official permission.


Media companies may not care to engage with local communities. They may be on a tight deadline and thus dispense with getting community buy-in. They may not have the time to reassure traumatized communities about the robots flying overhead. Media companies may overlook or ignore potential data privacy repercussions of publishing their aerial videos online. They may also not venture out to isolated and rural areas, thus biasing the video footage towards easy-to-access locations.

So how do we in the humanitarian space make media drone teams part of the solution rather than part of the problem? How do we make them partners in these efforts? One way forward is to start a conversation with these media teams and their relevant networks. Perhaps we start with a few informal agreements and learn by doing. If anyone is interested in working with me on this and/or has any suggestions on how to make this happen, please do get in touch. Thanks!

Why Robots Are Flying Over Zanzibar and the Source of the Nile

An expedition in 1858 revealed that Lake Victoria was the source of the Nile. We found ourselves on the shores of Africa’s majestic lake this October, a month after a 5.9 magnitude earthquake struck Tanzania’s Kagera Region. Hundreds were injured and dozens killed. This was the biggest tragedy in decades for the peaceful lakeside town of Bukoba. The Ministry of Home Affairs invited WeRobotics to support the recovery and reconstruction efforts by carrying out aerial surveys of the affected areas. 


The mission of WeRobotics is to build local capacity for the safe and effective use of appropriate robotics solutions. We do this by co-creating local robotics labs that we call Flying Labs. We use these Labs to transfer the professional skills and relevant robotics solutions to outstanding local partners. Our explicit focus on capacity building explains why we took the opportunity whilst in Kagera to train two Tanzanian colleagues. Khadija and Yussuf joined us from the State University of Zanzibar (SUZA). They were both wonderful to work with and quick learners too. We look forward to working with them and other partners to co-create our Flying Labs in Tanzania. More on this in a future post.

Aerial Surveys of Kagera Region After The Earthquake

We surveyed multiple areas in the region based on the priorities of our local partners as well as reports provided by local villagers. We used the Cumulus One UAV from our technology partner DanOffice to carry out the flights. The Cumulus has a stated 2.5 hour flight time and 50 kilometer radio range. We’re using software from our partner Pix4D to process the 3,000+ very high resolution images captured during our 2 days around Bukoba.


Above, Khadija and Yussuf on the left with a local engineer and a local member of the community on the right, respectively. The video below shows how the Cumulus takes off and lands. The landing is automatic and simply involves the UAV stalling and gently gliding to the ground.

We engaged directly with local communities before our flights to explain our project and get their permissions to fly. Learn more about our Code of Conduct.


Aerial mapping with fixed-wing UAVs can identify large-scale damage over large areas and serve as a good base map for reconstruction. A lot of the damage, however, can be limited to large cracks in walls, which cannot be seen with nadir (vertical) imagery. We thus flew over some areas using a Parrot Bebop2 to capture oblique imagery and to get closer to the damage. We then took dozens of geo-tagged images from ground-level with our phones in order to ground-truth the aerial imagery.


We’re still processing the resulting imagery, so the results below are simply low-resolution previews from one of the three surveys we carried out.


Both Khadija and Yussuf were very quick learners and a real delight to work with. Below are more pictures documenting our recent work in Kagera. You can follow all our trainings and projects live via our Twitter feed (@werobotics) and our Facebook page. Sincerest thanks to both Linx Global Intelligence and UR Group for making our work in Kagera possible. Linx provided the introduction to the Ministry of Home Affairs while the UR Group provided invaluable support on the logistics and permissions.


Yussuf programming the flight plan of the Cumulus


Khadija setting up the Cumulus for a full day of flying around the Bukoba area


Khadija wants to use aerial robots to map Zanzibar, which is where she’s from


Community engagement is absolutely imperative


Local community members inspecting the Parrot Bebop2

From the shores of Lake Victoria to the coastlines of Zanzibar

Together with the outstanding drone team from the State University of Zanzibar, we mapped Jozani Forest and part of the island’s eastern coastline. This allowed us to further field-test our long-range platform and to continue our local capacity building efforts following our surveys near the Ugandan border. Here’s a picture-based summary of our joint efforts.


Flying Labs Coordinator Yussuf sets up the Cumulus UAV for flight


Turns out selfie sticks are popular in Zanzibar and kids love robots.


Khairat from Team SUZA is operating the mobile air traffic control tower. Team SUZA uses senseFly eBees for other projects on the island.


Another successful takeoff, courtesy of Flying Labs Coordinator Yussuf.


We flew the Cumulus at a speed of 65km/h and at an altitude of 265m.


The Cumulus flew for 2 hours, making this our longest UAV flight in Zanzibar so far.


Khadija from Team SUZA explains to local villagers how and why she maps Zanzibar using flying robots.


Tide starts rushing back in. It’s important to take the moon into account when mapping coastlines, as the tide can change drastically during a single flight and thus affect the stitching process.

The content above is cross-posted from WeRobotics.

How Can Digital Humanitarians Best Organize for Disaster Response?

I published a blog post with the same question in 2012. The question stemmed from earlier conversations I had at 10 Downing Street with colleague Duncan Watts from Microsoft Research. We subsequently embarked on a collaboration with the Standby Task Force (SBTF), a group I co-founded back in 2010. The SBTF was one of the early pioneers of digital humanitarian action. The purpose of this collaboration was to empirically explore the relationship between team size and productivity during crisis mapping efforts.


Duncan and his team at Microsoft simulated the SBTF’s crisis mapping efforts in response to Typhoon Pablo in 2012. At the time, the United Nations Office for the Coordination of Humanitarian Affairs (UN/OCHA) had activated the Digital Humanitarian Network (DHN) to create a crisis map of disaster impact (final version pictured above). OCHA requested the map within 24 hours. While we could have deployed the SBTF using the traditional crowdsourcing approach as before, we decided to try something different: microtasking. This was admittedly a gamble on our part.

We reached out to the team at PyBossa to ask them to customize their microtasking platform so that we could rapidly filter through both images and videos of disaster damage posted on Twitter. Note that we had never been in touch with the PyBossa team before this (hence the gamble) nor had we ever used their CrowdCrafting platform (which was still very new at the time). But thanks to PyBossa’s quick and positive response to our call for help, we were able to launch this microtasking app several hours after OCHA’s request.

Fast forward to the present research study. We gave Duncan and colleagues at Microsoft the same database of tweets for their simulation experiment. To conduct this experiment and replicate the critical features of crisis mapping, they created their own “CrowdMapper” platform pictured below.


The CrowdMapper experiments suggest that the positive effects of coordination between digital humanitarian volunteers, i.e., teams, dominate the negative effects of social loafing, i.e., volunteers working independently from others. In social psychology, “social loafing is the phenomenon of people exerting less effort to achieve a goal when they work in a group than when they work alone” (1). In the CrowdMapper exercise, the teams performed comparably to the SBTF deployment following Typhoon Pablo. This suggests that such experiments can “help solve practical problems as well as advancing the science of collective intelligence.”

Our MicroMappers deployments have always included a live chat (IM) feature in the user interface precisely to support collaboration. Skype has also been used extensively during digital humanitarian efforts and Slack is now becoming more common as well. So while we’ve actively promoted community building and facilitated active collaboration over the past 6+ years of crisis mapping efforts, we now have empirical evidence that confirms we’re on the right track.

The full study by Duncan et al. is available here. As they note vis-a-vis areas for future research, we definitely need more studies on the division of labor in crisis mapping efforts. So I hope they or other colleagues will pursue this further.

Many thanks to the Microsoft Team and to SBTF for collaborating on this applied research, one of the few such studies in the field of crisis mapping and digital humanitarian action.

The main point I would push back on vis-a-vis Duncan et al.’s study is comparing their simulated deployment with the SBTF’s real-world deployment. The reason it took the SBTF 12 hours to create the map was precisely because we didn’t take the usual crowdsourcing approach. As such, most of the 12 hours was spent on reaching out to PyBossa, customizing their microtasking app, testing said app and then finally deploying the platform. The Microsoft Team also had the dataset handed over to them while we had to use a very early, untested version of the AIDR platform to collect and filter the tweets, which created a number of hiccups. So this too took time. Finally, it should be noted that OCHA’s activation came during early evening (local time) and I for one pulled an all-nighter that night to ensure we had a map by sunrise.

A 10 Year Vision: Future Trends in Geospatial Information Management


The United Nations Committee of Experts on Global Geospatial Information Management (UN-GGIM) recently published their second edition of Future Trends in Geospatial Information Management. I blogged about the first edition here. Below are some of the excerpts I found interesting or noteworthy. The report itself is a 50-page document (PDF 7.1Mb).

  • The integration of smart technologies and efficient governance models will increase and the mantra of ‘doing more for less’ is more relevant than ever before.
  • There is an increasing tendency to bring together data from multiple sources: official statistics, geospatial information, satellite data, big data and crowdsourced data among them.
  • New data sources and new data collection technologies must be carefully applied to avoid a bias that favors countries that are wealthier and with established data infrastructures. The use of innovative tools might also favor those who have greater means to access technology, thus widening the gap between the ‘data poor’ and the ‘data rich’.
  • The paradigm of geospatial information is changing; no longer is it used just for mapping and visualization, but also for integrating with other data sources, data analytics, modeling and policy-making.
  • Our ability to create data is still, on the whole, ahead of our ability to solve complex problems by using the data.  The need to address this problem will rely on the development of both Big Data technologies and techniques (that is technologies that enable the analysis of vast quantities of information within usable and practical timeframes) and artificial intelligence (AI) or machine learning technologies that will enable the data to be processed more efficiently.
  • In the future we may expect society to make increasing use of autonomous machines and robots, thanks to a combination of an aging population, rapid technological advancement in unmanned autonomous systems and AI, and the pure volume of data being beyond a human’s ability to process it.
  • Developments in AI are beginning to transform the way machines interact with the world. Up to now machines have mainly carried out well-defined tasks such as robotic assembly, or data analysis using pre-defined criteria, but we are moving into an age where machine learning will allow machines to interact with their environment in more flexible and adaptive ways. This is a trend we expect to see major growth in over the next 5 to 10 years as the technologies–and understanding of the technologies–become more widely recognized.
  • Processes based on these principles, and the learning of geospatial concepts (locational accuracy, precision, proximity etc.), can be expected to improve the interpretation of aerial and satellite imagery, by improving the accuracy with which geospatial features can be identified.
  • Tools may run persistently on continuous streams of data, alerting interested parties to new discoveries and events. Another branch of AI that has long been of interest has been the expert system, in which the knowledge and experience of human experts is taught to a machine.
  • The principle of collecting data once only at the highest resolution needed, and generalizing ‘on the fly’ as required, can become reality. Developments in augmented and virtual reality will allow humans to interact with data in new ways.
  • The future of data will not be the conflation of multiple data sources into a single new dataset, rather there will be a growth in the number of datasets that are connected and provide models to be used across the world.
  • Efforts should be devoted to integrating involuntary sensors (mobile phones, RFID sensors and so on), which aside from their primary purpose may capture information that was previously difficult to collect. This leads to more real-time information being generated.
  • Many developing nations have leapfrogged in areas such as mobile communications, but the lack of core processing power may inhibit some from taking advantage of the opportunities afforded by these technologies.
  • Disaggregating data from high levels down to small area geographies will increase the need to evaluate and adopt alternative statistical modeling techniques to ensure that statistics can be produced at the right geographic level, whilst still maintaining the quality to allow them to be reported against.
  • The information generated through use of social media and the use of everyday devices will further reveal patterns and the prediction of behaviour. This is not a new trend, but as the use of social media for providing real-time information and expanded functionality increases, it offers new opportunities for location based services.
  • There seems to have been a breakthrough from 2D to 3D information, and this is becoming more prevalent. Software already exists to process this information, and to incorporate the time information to create 4D products and services. It is recognized that a growth area over the next five to ten years will be the use of 4D information in a wide variety of industries. The temporal element is crucial to a number of applications such as emergency service response, for simulations and analytics, and the tracking of moving objects. 4D is particularly relevant in the context of real-time information; this has been linked to virtual reality technologies.
  • Greater coverage, quality and resolution have been achieved through the availability of low-cost and affordable satellite systems and unmanned aerial vehicles (UAVs). This has not only increased the speed of collection and acquisition in remote areas, but also reduced the cost barriers of entry.
  • UAVs can provide real-time information to decision-makers on the ground, providing, for example, information for disaster management. They are an invaluable tool when additional information is needed to improve vital decision-making capabilities, and such use of UAVs will increase.
  • The licensing of data in an increasingly online world is proving to be very challenging. There is a growth in organisations adopting simple machine-readable licences, but these have not resolved the issues. Emerging technologies such as web services and the growth of big data solutions drawn from multiple sources will continue to create challenges for the licensing of data.
  • A wider issue is the training and education of a broader community of developers and users of location-enabled content. At the same time there is a need for more automated approaches to ensuring the non-geospatial professional community get the right data at the right time. Investment in formal training in the use of geospatial data and its implementation is still indispensable.
  • Both ‘open’ and ‘closed’ VGI data play an important and necessary part of the wider data ecosystem.

UN Crisis Map of Fiji Uses Aerial Imagery (Updated)

Update 1: The Crisis Map below was produced pro bono by Tonkin + Taylor so they should be credited accordingly.

Update 2: On my analysis of Ovalau below, I’ve been in touch with the excellent team at Tonkin + Taylor. It would seem that the few images I randomly sampled were outliers since the majority of the images taken around Ovalau reportedly show damage, hence the reason for Tonkin + Taylor color-coding the island red. Per the team’s explanation: “[We] have gone through 40 or so photographs of Ovalau. The area is marked red because the majority of photographs meet the definition of severe, i.e.,: 1) More than 50% of all buildings sustaining partial loss of amenity/roof; and 2) More than 20% of damaged buildings with substantial loss of amenity/roof.” Big thanks to the team for their generous time and for their good work on this crisis map.

Fiji Crisis Map

Fiji recently experienced the strongest tropical cyclone in its history. Named Cyclone Winston, the Category 5 Cyclone unleashed 285km/h (180 mph) winds. Total damage is estimated at close to half-a-billion US dollars. Approximately 80% of the country’s population lost power; 40,000 people required immediate assistance; some 24,000 homes were damaged or destroyed leaving around 120,000 people in need of shelter assistance; 43 people tragically lost their lives.

As a World Bank consultant on UAVs (aerial robotics), I was asked to start making preparations for the possible deployment of a UAV team to Fiji should an official request be made. I’ve therefore been in close contact with the Civil Aviation Authority of Fiji, as well as with several professional and certified UAV teams. The purpose of this humanitarian robotics mission—if requested and authorized by relevant authorities—would be to assess disaster damage in support of the Post Disaster Needs Assessment (PDNA) process. I supported a similar effort last year in neighboring Vanuatu after Cyclone Pam.

World Bank colleagues are currently looking into selecting priority sites for the possible aerial surveys using a sampling method that would make said sites representative of the disaster’s overall impact. This is an approach that we were unable to take in Vanuatu following Cyclone Pam due to the lack of information. As part of this survey sampling effort, I came across the United Nations Office for the Coordination of Humanitarian Affairs (UN/OCHA) crisis map below, which depicts areas of disaster damage.

Fiji Crisis Map 2

I was immediately struck by the fact that the main dataset used to assess the damage depicted on this map comes from (declassified) aerial imagery provided by the Royal New Zealand Air Force (RNZAF). Several hundred high-resolution oblique aerial images populate the crisis map along with dozens of ground-based photographs like the ones below. Note that the positional accuracy of the aerial images is +/- 500m (meaning not particularly accurate).



I reached out to OCHA colleagues in Fiji who confirmed that they were using the crisis map as one source of information to get a rough idea about which areas were the most affected. What makes this data useful, according to OCHA, is that it had good coverage over a large area. In contrast, satellite imagery could only provide small snapshots of random villages, which were not as useful for trying to understand the scale and scope of a disaster. The limited value added of satellite imagery was reportedly due to cloud cover, which is typical after atmospheric hazards like cyclones.

Below is the damage assessment methodology used to interpret the aerial imagery. Note that this preliminary assessment was carried out not by the UN but by an independent company.

Fiji Crisis Map 3

  • Severe Building Damage (Red): More than 50% of all buildings sustaining partial loss of amenity/roof or more than 20% of damaged buildings with substantial loss of amenity/roof.
  • Moderate Building Damage (Orange): Damage generally exceeding minor [damage] with up to 50% of all buildings sustaining partial loss of amenity/roof and up to 20% of damaged buildings with substantial loss of amenity/roof.
  • Minor Building Damage (Blue):  Up to 5% of all buildings with partial loss of amenity/roof or up to 1% of damaged buildings with substantial loss of amenity/roof.
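Treated as decision rules, this legend can be sketched in code. Note that the bands in the original legend are not crisply mutually exclusive, so the illustrative Python version below simply checks the severest band first; the function name and percentage-based interface are my own, not part of the crisis map:

```python
def classify_building_damage(pct_all_partial_loss, pct_damaged_substantial_loss):
    """Classify regional building damage per the crisis map legend.

    pct_all_partial_loss: % of ALL buildings with partial loss of amenity/roof.
    pct_damaged_substantial_loss: % of DAMAGED buildings with substantial loss.
    Returns 'severe', 'moderate', 'minor', or None when below all thresholds.
    """
    if pct_all_partial_loss > 50 or pct_damaged_substantial_loss > 20:
        return "severe"   # red
    if pct_all_partial_loss > 5 or pct_damaged_substantial_loss > 1:
        return "moderate" # orange
    if pct_all_partial_loss > 0 or pct_damaged_substantial_loss > 0:
        return "minor"    # blue
    return None
```

Writing the legend out this way makes the regional-scale caveat below concrete: the inputs are aggregate percentages over an area exceeding 100 km2, so a single classification can mask considerable local variation.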

The Fiji Crisis Map includes an important note: The primary objective of this preliminary assessment was to communicate rapid high-level building damage trends on a regional scale. This assessment has been undertaken on a regional scale (generally exceeding 100 km2) and thus may not accurately reflect local variation in damage. I wish more crisis maps provided qualifiers like the above. That said, while I haven’t had the time to review the hundreds of aerial images on the crisis map to personally assess the level of damage depicted in each, I was struck by the assessment of Ovalau, which I selected at random.

Fiji Crisis Map 4

As you’ll note, the entire island is color-coded as severe damage. But I selected several aerial images at random and none showed severe building damage. The images I reviewed are included below.


This last image may seem to show disaster damage, but a closer inspection by zooming in reveals that the vast majority of buildings are largely intact.


I shall investigate this further to better understand the possible discrepancy. In any event, I’m particularly pleased to see the UN (and others) make use of aerial imagery in their disaster damage assessment efforts. I’d also like to see the use of aerial robotics for the collection of very high resolution, orthorectified aerial imagery. But using these robotics solutions to their full potential for damage assessment purposes requires regulatory approval and robust coordination mechanisms. Both are absolutely possible as we demonstrated in neighboring Vanuatu last year.

Increasing the Reliability of Aerial Imagery Analysis for Damage Assessments

In March 2015, I was invited by the World Bank to spearhead an ambitious humanitarian aerial robotics (UAV) mission to Vanuatu following Cyclone Pam, a devastating Category 5 Cyclone. This mission was coordinated with Heliwest and X-Craft, two outstanding UAV companies who were identified through the Humanitarian UAV Network (UAViators) Roster of Pilots. You can learn more about the mission and see pictures here. Lessons learned from this mission (and many others) are available here.


The World Bank and partners were unable to immediately analyze the aerial imagery we had collected because they faced a Big Data challenge. So I suggested the Bank activate the Digital Humanitarian Network (DHN) to request digital volunteer assistance. As a result, Humanitarian OpenStreetMap (HOT) analyzed some of the orthorectified mosaics and MicroMappers focused on analyzing the oblique images (more on both here).

This in turn produced a number of challenges. To cite just one, the Bank needed digital humanitarians to identify which houses or buildings were completely destroyed, versus partially damaged versus largely intact. But there was little guidance on how to determine what constituted fully destroyed versus partially damaged or what such structures in Vanuatu look like when damaged by a cyclone. As a result, data quality was not as high as it could have been. In my capacity as consultant for the World Bank’s UAVs for Resilience Program, I decided to do something about this lack of guidelines for imagery interpretation.

I turned to my colleagues at the Harvard Humanitarian Initiative (where I had previously co-founded and co-directed the HHI Program on Crisis Mapping) and invited them to develop a rigorous guide that could inform the consistent interpretation of aerial imagery of disaster damage in Vanuatu (and nearby Island States). Note that Vanuatu is number one on the World Bank’s Risk Index of most disaster-prone countries. The imagery analysis guide has just been published (PDF) by the Signal Program on Human Security and Technology at HHI.

Big thanks to the HHI team for having worked on this guide and for my Bank colleagues and other reviewers for their detailed feedback on earlier drafts. The guide is another important step towards improving data quality for satellite and aerial imagery analysis in the context of damage assessments. Better data quality is also important for the use of Artificial Intelligence (AI) and computer vision as explained here. If a humanitarian UAV mission does happen in response to the recent disaster in Fiji, then the guide may also be of assistance there depending on how similar the building materials and architecture are. For now, many thanks to HHI for having produced this imagery guide.

Assessing Disaster Damage: How Close Do You Need to Be?

“What is the optimal spatial resolution for the analysis of disaster damage?”

I posed this question during an hour-long presentation I gave to the World Bank in May 2015. The talk was on the use of remote sensing aerial robotics (UAVs) for disaster damage assessments; I used Cyclone Pam in Vanuatu and the double Nepal Earthquakes as case studies.

One advantage of aerial robotics over space robotics (satellites) is that the former can capture imagery at far higher spatial resolutions—sub 1-centimeter if need be. Without hesitation, a World Bank analyst in the conference room replied to my question: “Fifty centimeters.” I was taken aback by the rapid reply. Had I perchance missed something? Was it so obvious that 50 cm resolution was optimal? So I asked, “Why 50 centimeters?” Again, without hesitation: “Because that’s the resolution we’ve been using.” Ah ha! “But how do you know this is the optimal resolution if you haven’t tried 30 cm or even 10 cm?”

Let’s go back to the fundamentals. We know that “rapid damage assessment is essential after disaster events, especially in densely built up urban areas where the assessment results provide guidance for rescue forces and other immediate relief efforts, as well as subsequent rehabilitation and reconstruction. Ground-based mapping is too slow, and typically hindered by disaster-related site access difficulties” (Gerke & Kerle 2011). Indeed, studies have shown that the inability to physically access damaged areas results in field teams underestimating the damage (Lemoine et al 2013). Hence one reason for the use of remote sensing.

We know that remote sensing can be used for two purposes following disasters. The first, “Rapid Mapping”, aims at providing impact assessments as quickly as possible after a disaster. The second, “Economic Mapping” assists in quantifying the economic impact of the damage. “Major distinctions between the two categories are timeliness, completeness and accuracies” (2013).  In addition, Rapid Mapping aims to identify the relative severity of the damage (low, medium, high) rather than absolute damage figures. Results from Economic Mapping are combined with other geospatial data (building size, building type, etc) and economic parameters (e.g., cost per unit area, relocation costs, etc) to compute a total cost estimate (2013). The Post-Disaster Needs Assessment (PDNA) is an example of Economic Mapping.
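The Economic Mapping computation described above amounts to a simple aggregation over per-building damage and cost parameters. Here is a rough sketch; every figure below (footprint areas, damage fractions, unit costs, relocation costs) is invented purely for illustration and does not come from any actual PDNA:

```python
# Each record: building footprint area (m2), assessed damage fraction (0..1),
# reconstruction cost per m2 (USD), and a fixed relocation cost applied
# when a structure is fully destroyed (USD). All values are hypothetical.
buildings = [
    {"area_m2": 120, "damage": 1.0, "cost_per_m2": 300, "relocation": 2000},
    {"area_m2": 80,  "damage": 0.4, "cost_per_m2": 250, "relocation": 2000},
    {"area_m2": 200, "damage": 0.0, "cost_per_m2": 350, "relocation": 2000},
]

def total_cost_estimate(records):
    """Sum repair costs (area x damage fraction x unit cost), adding the
    relocation cost for fully destroyed structures."""
    total = 0.0
    for b in records:
        total += b["area_m2"] * b["damage"] * b["cost_per_m2"]
        if b["damage"] >= 1.0:
            total += b["relocation"]
    return total
```

The sketch also makes the timeliness/accuracy distinction concrete: Rapid Mapping only needs the relative ordering of damage fractions, whereas Economic Mapping needs them to be accurate enough to multiply against real cost parameters.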

It is worth noting that a partially destroyed building may be seen as a complete economic loss, identical to a totally destroyed structure (2011). “From a casualty / fatality assessment perspective, however, a structure that is still partly standing […] offers a different survival potential” (2011).


We also know that “damage detectability is primarily a function of image resolution” (2011). The Haiti Earthquake was “one of the first major disasters in which very high resolution satellite and airborne imagery was embraced to delineate the event impact” (2013). It was also “the first time that the PDNA was based on damage assessment produced with remotely sensed data” (2013). Imagery analysis of the impact confirmed that a “moderate resolution increase from 41 cm to 15 cm has profound effects on damage mapping accuracy” (2011). Indeed, a number of validation studies carried out since 2010 have confirmed that “the higher detail airborne imagery performs much better [than lower resolution] satellite imagery” (2013). More specifically, the detection of very heavy damage and destruction in Haiti aerial imagery “is approximately a factor 8 greater than in the satellite imagery.”

Comparing the aerial imagery analysis with field surveys, Lemoine et al. find that “the number of heavily affected and damaged buildings in the aerial point set is slightly higher than that obtained from the field survey” (2013). The correlation between the results of the aerial imagery analysis and field surveys is sensitive to land-use (e.g., commercial, downtown, industrial, residential high density, shanty, etc). In highly populated areas such as shanty zones, “the over-estimation of building damage from aerial imagery could simply be a result of an incomplete field survey while, in downtown, where field surveys seem to have been conducted in a more systematic way, the damage assessment from aerial imagery matches very well the one obtained from the field” (2013).


In sum, the results from Haiti suggest that the “damage assessment from aerial imagery currently represents the best possible compromise between timeliness and accuracy” (2013). The Haiti case study also “showed that the damage derived from satellite imagery was underestimated by a factor of eight, compared to the damage derived from aerial imagery. These results suggest that despite the fast availability of very high resolution satellite imagery […], the spatial resolution of 50 cm is not sufficient for an accurate interpretation of building damage.”

In other words, even though “damage assessments depend very much on the timeliness of the aerial images acquisitions and requires a considerable effort of visual interpretation as an element of compromise; [aerial imagery] remains the best trade-off in terms of required quality and timeliness for producing detailed damage assessments over large affected areas compared to satellite based assessments (insufficient quality) and exhaustive field inventories (too slow).” But there’s a rub with respect to aerial imagery. While the above results do “show that the identification of building damage from aerial imagery […] provides a realistic estimate of the spatial pattern and intensity of damage,” the aerial imagery analysis still “suffers from several limitations due to the nadir imagery” (2013).

“Essentially all conventional airborne and spaceborne image data are taken from a quasi-vertical perspective” (2011). Vertical (or nadir) imagery is particularly useful for a wide range of applications, for sure. But when it comes to damage mapping, vertical data have great limitations, particularly concerning structural building damage (2011). While complete collapse can be readily identified using vertical imagery (e.g., disintegrated roof structures and associated high texture values, or adjacent rubble piles), “lower levels of damage are much harder to map. This is because such damage effects are largely expressed along the façades, which are not visible in such imagery” (2011). According to Gerke and Kerle, aerial oblique imagery is more useful for disaster damage assessment than aerial or satellite imagery taken at a vertical angle. I elaborated on this point vis-à-vis a more recent study in this blog post.


Clearly, “much of the challenge in detecting damage stems from the complex nature of damage” (2011). For some types and sizes of damage, using only vertical (nadir) imagery will result in missed damage (2013). The question, therefore, is not simply one of spatial resolution but also of the angle at which the aerial or space-based image is taken, the land-use and the type of damage that needs to be quantified. Still, we do have a partial answer to the first question. Technically, the optimal spatial resolution for disaster damage assessments is certainly not 50 cm, since 15 cm proved far more useful in Haiti.

Of course, if higher-resolution imagery is not available in time (or at all), then 50 cm imagery is clearly better than no imagery at all. In fact, even 5-meter imagery that is available within 24-48 hours of a disaster can add value if it comes with baseline imagery, i.e., imagery of the location of interest from before the disaster. Baseline data enables the use of automated change-detection algorithms that can provide a first estimate of damage severity and the location or scope of that severity. What’s more, these change-detection algorithms could automatically plot a series of waypoints to chart the flight plan of an autonomous aerial robot (UAV) for closer inspection. In other words, satellite and aerial data can be complementary, and the drawbacks of low-resolution imagery can be offset if said imagery is available at a higher temporal frequency.
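The pipeline just described—baseline comparison, change flags, inspection waypoints—can be sketched in a few lines. This is a toy illustration, not any operational system: the “images” are tiny grids of brightness values, and a real pipeline would work on co-registered rasters with far more robust change metrics:

```python
# Toy sketch of baseline-enabled change detection feeding UAV waypoints.
# Grids, coordinates and thresholds are all invented for illustration.

def detect_change(before, after, threshold):
    """Return (row, col) cells whose absolute brightness change exceeds threshold."""
    flagged = []
    for r, (row_b, row_a) in enumerate(zip(before, after)):
        for c, (b, a) in enumerate(zip(row_b, row_a)):
            if abs(a - b) > threshold:
                flagged.append((r, c))
    return flagged

def to_waypoints(cells, origin_lat, origin_lon, cell_deg):
    """Convert flagged grid cells to approximate lat/lon waypoints."""
    return [(origin_lat - r * cell_deg, origin_lon + c * cell_deg)
            for r, c in cells]

before = [[10, 10, 10],
          [10, 10, 10]]
after  = [[10, 90, 10],   # bright rubble-like signature appears in one cell
          [10, 10, 80]]

cells = detect_change(before, after, threshold=50)
waypoints = to_waypoints(cells, origin_lat=18.55, origin_lon=-72.34, cell_deg=0.001)
print(waypoints)
```

The flagged cells become a candidate flight plan: the UAV only inspects closely where the coarse imagery says something changed, which is exactly how low-resolution-but-frequent satellite data and high-resolution-but-targeted aerial data complement each other.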


On the flip side, just because the aerial imagery used in the Haiti study was captured at 15 cm resolution does not imply that 15 cm is the optimal spatial resolution for disaster damage. It could very well be 10 cm. This depends entirely on the statistical distribution of the size of damaged features (e.g., the size of a crack, a window, a tile, etc.), the local architecture (e.g., type of building materials used and so on) and the type of hazard (e.g., hurricane, earthquake, etc.). That said, “individual indicators of destruction, such as a roof or façade damage, do not linearly add to a given damage class [e.g., low, medium, high]” (2011). This limitation is alas “fundamental in remote sensing where no assessment of internal structural integrity is possible” (2011).
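A crude back-of-the-envelope calculation shows why the optimal resolution hinges on that size distribution. A common rule of thumb is that a feature must span at least two pixels to be resolvable at all, so the ground sample distance (GSD) caps which damage features are even visible. The feature sizes below are my own illustrative guesses, not measurements from any study:

```python
# Back-of-the-envelope sketch: fraction of damage features resolvable at a
# given ground sample distance (GSD), assuming a feature must span ~2 pixels
# to be detectable. Feature sizes are invented for illustration.

def detectable_fraction(feature_sizes_cm, gsd_cm, pixels_needed=2):
    """Fraction of features at least `pixels_needed` pixels across."""
    hits = sum(1 for s in feature_sizes_cm if s >= pixels_needed * gsd_cm)
    return hits / len(feature_sizes_cm)

# Illustrative mix: cracks (~5 cm), missing tiles (~30 cm),
# blown-out windows (~100 cm), collapsed roof sections (~400 cm)
sizes = [5, 30, 100, 400]

for gsd in (50, 15, 10):  # satellite vs. aerial vs. finer aerial
    print(gsd, detectable_fraction(sizes, gsd))
```

With this (made-up) distribution, dropping from 50 cm to 15 cm roughly doubles the share of detectable features, while going from 15 cm to 10 cm changes nothing—the remaining features (fine cracks) are too small for either. That is the sense in which "optimal" resolution is a property of the damage, not of the sensor.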

In any event, one point is certain: there was no need to capture aerial imagery (both nadir and obliques) at 5 centimeter resolution during the World Bank’s humanitarian UAV mission after Cyclone Pam—at least not for the purposes of 2D imagery analysis. This leads me to one final point. As I recently noted during my opening Keynote at the 2016 International Drones and Robotics for Good Awards in Dubai, disaster damage is a 3D phenomenon, not a 2D one. “Buildings are three-dimensional, and even the most detailed view at only one of those dimensions is ill-suited to describe the status of such features” (2011). In other words, we need “additional perspectives to provide a more comprehensive view” (2011).

There are very few peer-reviewed scientific papers that evaluate the use of high-resolution 3D models for the purposes of damage assessments. This one is likely the most up-to-date study. An earlier research effort by Booth et al. found that the overall correlation between the results from field surveys and 3D analysis of disaster damage was “an encouraging 74 percent.” But this visual analysis was carried out manually, which could have introduced non-random errors. After all, “the subjectivity inherent in visual structure damage mapping is considerable” (2011). Could semi-automated methods for the analysis of 3D models thus yield a higher correlation? This is the research question posed by Gerke and Kerle in their 2011 study.


The authors tested this question using aerial imagery from the Haiti Earthquake. When 3D features were removed from their automated analysis, “classification performance declined, for example by some 10 percent for façades, the class that benefited most from the 3D derivatives” (2011). The researchers also found that trained imagery analysts agreed at most 76% of the time in their visual interpretation and assessments of aerial data. This is in part due to the lack of standards for damage categories (2011).
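It is worth pausing on how a figure like that 76% gets measured, and why raw percent agreement flatters the analysts: with only a handful of damage classes, two people will agree some of the time purely by chance. Chance-corrected scores such as Cohen's kappa account for this. The labels below are invented for illustration:

```python
# Sketch of inter-analyst agreement metrics: raw percent agreement vs.
# Cohen's kappa (chance-corrected). The damage labels are hypothetical.

from collections import Counter

def percent_agreement(a, b):
    """Fraction of items the two analysts label identically."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement corrected for the agreement expected by chance."""
    po = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    n = len(a)
    # Expected chance agreement from each analyst's label frequencies
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (po - pe) / (1 - pe)

analyst_1 = ["high", "high", "medium", "low", "high", "medium", "low", "low"]
analyst_2 = ["high", "medium", "medium", "low", "high", "high", "low", "medium"]

print(percent_agreement(analyst_1, analyst_2))
print(cohens_kappa(analyst_1, analyst_2))
```

In this toy example the two analysts agree 62.5% of the time, but the kappa score is only about 0.44—much of the apparent agreement is chance. Without standardized damage categories, even the raw agreement ceiling stays low.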


I made this point more bluntly in this earlier blog post. Just imagine when you have hundreds of professionals and/or digital volunteers analyzing imagery (e.g., through crowdsourcing) and no standardized categories of disaster damage to inform the consistent interpretation of said imagery. This collective subjectivity introduces a non-random error into the overall analysis. And because it is non-random, this error cannot be accounted for. In contrast, a more semi-automated solution would render this error more random, which means the overall model could potentially be adjusted accordingly.

Gerke and Kerle conclude that high quality 3D models are “in principle well-suited for comprehensive semi-automated damage mapping. In particular façades, which are critical [to the assessment process], can be assessed [when] multiple views are provided.” That said, the methodology used by the authors was “still essentially based on projecting 3D data into 2D space, with conceptual and geometric limitations. [As such], one goal should be to perform the actual damage assessment and classification in 3D.” This explains why I’ve been advocating for Virtual Reality (VR) based solutions as described here.


Gerke and Kerle also “required extensive training data and substantial subjective evidence integration in the final damage class assessment. This raises the question to what extent rules could be formulated to create a damage ontology as the basis for per-building damage scoring.” This explains why I invited the Harvard Humanitarian Initiative (HHI) to create a damage ontology based on the aerial imagery from Cyclone Pam in Vanuatu. This ontology is based on 2D imagery, however. Ironically, very high spatial resolution 2D imagery can be more difficult to interpret than lower-res imagery, since the high resolution inevitably adds more “noise” to the data.

Ultimately, we’ll need to move on to 3D damage ontologies that can be visualized using VR headsets. 3D analysis is naturally more intuitive to us since we live in a mega-high resolution 3D world rather than a 2D one. As a result, I suspect there would be more agreement between different analysts studying dynamic, very high-resolution 3D models versus 2D static images at the same spatial res.


Taiwan’s National Cheng Kung University created this 3D model from aerial imagery captured by UAV. This model was created and uploaded to Sketchfab on the same day the earthquake struck. Note that Sketchfab recently added a VR feature to their platform, which I tried out on this model. Simply open this page on your mobile device to view the disaster damage in Taiwan in VR. I must say it works rather well, and even seems to be higher resolution in VR mode compared to the 2D projections of the 3D model above. More on the Taiwan model here.


But to be useful for disaster damage assessments, the VR headset would need to be combined with wearable technology that enables the end-user to digitally annotate (or draw on) the 3D models directly within the same VR environment. This would render the analysis process more intuitive while also producing 3D training data for the purposes of machine learning—and thus automated feature detection.
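To give a sense of what such VR-drawn annotations might look like as machine-learning training data, here is a hypothetical sketch of a per-annotation record: a labeled set of 3D vertices traced on the model's surface, tagged with the analyst who drew it (which, incidentally, is what would make the agreement analysis above possible in 3D). None of this reflects any existing platform's format:

```python
# Hypothetical record for a damage annotation drawn on a 3D model in VR.
# Field names and structure are invented for illustration.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DamageAnnotation3D:
    model_id: str                       # which 3D reconstruction
    analyst_id: str                     # who drew it (for agreement checks)
    damage_class: str                   # e.g. "low" / "medium" / "high"
    vertices: List[Tuple[float, float, float]] = field(default_factory=list)

    def centroid(self):
        """Rough 3D location of the annotated damage, e.g. for map plotting."""
        n = len(self.vertices)
        return tuple(sum(v[i] for v in self.vertices) / n for i in range(3))

ann = DamageAnnotation3D(
    model_id="taiwan-2016-02",
    analyst_id="analyst-07",
    damage_class="high",
    vertices=[(1.0, 2.0, 0.5), (1.5, 2.0, 0.5), (1.25, 2.5, 1.0)],
)
print(ann.centroid())
```

A corpus of such records—thousands of labeled 3D surface patches—is precisely the kind of training data that automated feature detection in 3D would need.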

I’m still actively looking for a VR platform that will enable this, so please do get in touch if you know of any group, company, research institute, etc., that would be interested in piloting the 3D analysis of disaster damage from the Nepal Earthquake entirely within a VR solution. Thank you!