A 10 Year Vision: Future Trends in Geospatial Information Management

The United Nations Committee of Experts on Global Geospatial Information Management (UN-GGIM) recently published the second edition of Future Trends in Geospatial Information Management. I blogged about the first edition here. Below are some of the excerpts I found interesting or noteworthy. The report itself is a 50-page document (PDF, 7.1 MB).

  • The integration of smart technologies and efficient governance models will increase and the mantra of ‘doing more for less’ is more relevant than ever before.
  • There is an increasing tendency to bring together data from multiple sources: official statistics, geospatial information, satellite data, big data and crowdsourced data among them.
  • New data sources and new data collection technologies must be carefully applied to avoid a bias that favors wealthier countries with established data infrastructures. The use of innovative tools might also favor those who have greater means to access technology, thus widening the gap between the ‘data poor’ and the ‘data rich’.
  • The paradigm of geospatial information is changing; no longer is it used just for mapping and visualization, but also for integrating with other data sources, data analytics, modeling and policy-making.
  • Our ability to create data is still, on the whole, ahead of our ability to solve complex problems by using the data. Addressing this problem will rely on the development of both Big Data technologies and techniques (that is, technologies that enable the analysis of vast quantities of information within usable and practical timeframes) and artificial intelligence (AI) or machine learning technologies that will enable the data to be processed more efficiently.
  • In the future we may expect society to make increasing use of autonomous machines and robots, thanks to a combination of an aging population, rapid technological advancement in unmanned autonomous systems and AI, and the sheer volume of data being beyond a human’s ability to process.
  • Developments in AI are beginning to transform the way machines interact with the world. Up to now machines have mainly carried out well-defined tasks such as robotic assembly, or data analysis using pre-defined criteria, but we are moving into an age where machine learning will allow machines to interact with their environment in more flexible and adaptive ways. This is a trend we expect to see major growth in over the next 5 to 10 years as the technologies, and understanding of the technologies, become more widely recognized.
  • Processes based on these principles, and the learning of geospatial concepts (locational accuracy, precision, proximity etc.), can be expected to improve the interpretation of aerial and satellite imagery, by improving the accuracy with which geospatial features can be identified.
  • Tools may run persistently on continuous streams of data, alerting interested parties to new discoveries and events (see the sketch after this list). Another branch of AI that has long been of interest is the expert system, in which the knowledge and experience of human experts is taught to a machine.
  • The principle of collecting data once only, at the highest resolution needed, and generalizing ‘on the fly’ as required can become reality. Developments in augmented and virtual reality will allow humans to interact with data in new ways.
  • The future of data will not be the conflation of multiple data sources into a single new dataset; rather, there will be a growth in the number of datasets that are connected and provide models to be used across the world.
  • Efforts should be devoted to integrating involuntary sensors (mobile phones, RFID sensors and so on), which, aside from their primary purpose, may yield information that was previously difficult to collect. This leads to more real-time information being generated.
  • Many developing nations have leapfrogged in areas such as mobile communications, but the lack of core processing power may inhibit some from taking advantage of the opportunities afforded by these technologies.
  • Data gathered at high levels will increasingly be disaggregated down to small-area geographies. This will increase the need to evaluate and adopt alternative statistical modeling techniques to ensure that statistics can be produced at the right geographic level while still maintaining sufficient quality for reporting.
  • The information generated through the use of social media and everyday devices will further reveal patterns and enable the prediction of behaviour. This is not a new trend, but as the use of social media for providing real-time information and expanded functionality increases, it offers new opportunities for location-based services.
  • There seems to have been a breakthrough from 2D to 3D information, and this is becoming more prevalent. Software already exists to process this information, and to incorporate time information to create 4D products and services. It is recognized that a growth area over the next five to ten years will be the use of 4D information in a wide variety of industries.
  • The temporal element is crucial to a number of applications, such as emergency service response, simulations and analytics, and the tracking of moving objects. 4D is particularly relevant in the context of real-time information; this has been linked to virtual reality technologies.
  • Greater coverage, quality and resolution have been achieved through the availability of low-cost, affordable satellite systems and unmanned aerial vehicles (UAVs). This has not only increased the speed of collection and acquisition in remote areas, but also reduced the cost barriers to entry.
  • UAVs can provide real-time information to decision-makers on the ground, for example for disaster management. They are an invaluable tool when additional information is needed to improve vital decision-making capabilities, and such use of UAVs will increase.
  • The licensing of data in an increasingly online world is proving to be very challenging. There is a growth in organisations adopting simple machine-readable licences, but these have not resolved the underlying issues relating to data. Emerging technologies such as web services and the growth of big data solutions drawn from multiple sources will continue to create challenges for the licensing of data.
  • A wider issue is the training and education of a broader community of developers and users of location-enabled content. At the same time there is a need for more automated approaches to ensuring the non-geospatial professional community gets the right data at the right time. Investment in formal training in the use of geospatial data and its implementation remains indispensable.
  • Both ‘open’ and ‘closed’ VGI data play an important and necessary part in the wider data ecosystem.
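
To make the "tools running persistently on continuous streams of data" trend tangible, here is a minimal sketch of a geofenced alerting loop in Python. It is purely illustrative and not from the UN-GGIM report: the event stream, area of interest and radius are all assumptions of mine.

```python
import math

# Hypothetical area of interest and alert radius (illustrative values).
ALERT_LAT, ALERT_LON = -17.8, 178.0
ALERT_RADIUS_KM = 50.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def watch(stream):
    """Alert whenever an event in the stream falls inside the area of interest."""
    for event in stream:  # each event: {"lat": ..., "lon": ..., "kind": ...}
        d = haversine_km(ALERT_LAT, ALERT_LON, event["lat"], event["lon"])
        if d <= ALERT_RADIUS_KM:
            print(f"ALERT: {event['kind']} detected {d:.1f} km from the area of interest")

# Synthetic stream for demonstration; in practice this would be a live feed.
watch([{"lat": -17.7, "lon": 178.1, "kind": "building-damage report"},
       {"lat": -35.0, "lon": 150.0, "kind": "unrelated event"}])
```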

UN Crisis Map of Fiji Uses Aerial Imagery (Updated)

Update 1: The Crisis Map below was produced pro bono by Tonkin + Taylor, so they should be credited accordingly.

Update 2: On my analysis of Ovalau below, I’ve been in touch with the excellent team at Tonkin + Taylor. It would seem that the few images I randomly sampled were outliers, since the majority of the images taken around Ovalau reportedly show damage, hence the reason for Tonkin + Taylor color-coding the island red. Per the team’s explanation: “[We] have gone through 40 or so photographs of Ovalau. The area is marked red because the majority of photographs meet the definition of severe, i.e.: 1) More than 50% of all buildings sustaining partial loss of amenity/roof; and 2) More than 20% of damaged buildings with substantial loss of amenity/roof.” Big thanks to the team for their generous time and for their good work on this crisis map.


Fiji Crisis Map

Fiji recently experienced the strongest tropical cyclone in its history. Named Cyclone Winston, the Category 5 cyclone unleashed 285 km/h (180 mph) winds. Total damage is estimated at close to half a billion US dollars. Approximately 80% of the country’s population lost power; 40,000 people required immediate assistance; some 24,000 homes were damaged or destroyed, leaving around 120,000 people in need of shelter assistance; 43 people tragically lost their lives.

As a World Bank consultant on UAVs (aerial robotics), I was asked to start making preparations for the possible deployment of a UAV team to Fiji should an official request be made. I’ve therefore been in close contact with the Civil Aviation Authority of Fiji as well as several professional and certified UAV teams. The purpose of this humanitarian robotics mission, if requested and authorized by the relevant authorities, would be to assess disaster damage in support of the Post Disaster Needs Assessment (PDNA) process. I supported a similar effort last year in neighboring Vanuatu after Cyclone Pam.

World Bank colleagues are currently looking into selecting priority sites for the possible aerial surveys using a sampling method that would make said sites representative of the disaster’s overall impact. This is an approach that we were unable to take in Vanuatu following Cyclone Pam due to the lack of information. As part of this survey sampling effort, I came across the United Nations Office for the Coordination of Humanitarian Affairs (UN/OCHA) crisis map below, which depicts areas of disaster damage.
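
I don’t know which sampling design will ultimately be used, but a simple stratified random sample is one way to make survey sites representative of the overall impact. Below is a hypothetical sketch; the strata, field names and sample sizes are all assumptions of mine, not the Bank’s actual method.

```python
import random

random.seed(42)  # reproducible demo

# Candidate survey sites, each tagged with a damage-severity stratum
# (in practice the stratum would come from a preliminary damage proxy).
sites = [{"id": i, "stratum": random.choice(["severe", "moderate", "minor"])}
         for i in range(100)]

def stratified_sample(sites, per_stratum=3):
    """Draw an equal number of sites from each damage stratum."""
    by_stratum = {}
    for site in sites:
        by_stratum.setdefault(site["stratum"], []).append(site)
    return {stratum: random.sample(members, min(per_stratum, len(members)))
            for stratum, members in by_stratum.items()}

for stratum, chosen in stratified_sample(sites).items():
    print(stratum, [site["id"] for site in chosen])
```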

Fiji Crisis Map 2

I was immediately struck by the fact that the main dataset used to assess the damage depicted on this map comes from (declassified) aerial imagery provided by the Royal New Zealand Air Force (RNZAF). Several hundred high-resolution oblique aerial images populate the crisis map, along with dozens of ground-based photographs. Note that the positional accuracy of the aerial images is +/- 500 m (meaning not particularly accurate).

I reached out to OCHA colleagues in Fiji who confirmed that they were using the crisis map as one source of information to get a rough idea about which areas were the most affected. What makes this data useful, according to OCHA, is that it had good coverage over a large area. In contrast, satellite imagery could only provide small snapshots of random villages, which were not as useful for trying to understand the scale and scope of the disaster. The limited value added of satellite imagery was reportedly due to cloud cover, which is typical after atmospheric hazards like cyclones.

Below is the damage assessment methodology used to interpret the aerial imagery (a small sketch of these rules as code follows the list). Note that this preliminary assessment was not carried out by the UN but rather by an independent company.

Fiji Crisis Map 3

  • Severe Building Damage (Red): More than 50% of all buildings sustaining partial loss of amenity/roof or more than 20% of damaged buildings with substantial loss of amenity/roof.
  • Moderate Building Damage (Orange): Damage generally exceeding minor [damage] with up to 50% of all buildings sustaining partial loss of amenity/roof and up to 20% of damaged buildings with substantial loss of amenity/roof.
  • Minor Building Damage (Blue):  Up to 5% of all buildings with partial loss of amenity/roof or up to 1% of damaged buildings with substantial loss of amenity/roof.
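
To make the legend’s thresholds concrete, here is a minimal sketch of the color-coding logic as I read it. The function and variable names are mine, and the “generally exceeding minor” clause for moderate damage is interpreted as anything above the minor thresholds that does not meet the severe ones.

```python
def damage_class(pct_partial_all, pct_substantial_damaged):
    """Classify regional building damage per the crisis map legend.

    pct_partial_all: % of ALL buildings with partial loss of amenity/roof.
    pct_substantial_damaged: % of DAMAGED buildings with substantial loss.
    """
    if pct_partial_all > 50 or pct_substantial_damaged > 20:
        return "Severe (red)"
    if pct_partial_all > 5 or pct_substantial_damaged > 1:
        return "Moderate (orange)"
    return "Minor (blue)"

print(damage_class(60, 10))   # -> Severe (red)
print(damage_class(30, 5))    # -> Moderate (orange)
print(damage_class(3, 0.5))   # -> Minor (blue)
```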

The Fiji Crisis Map includes an important note: “The primary objective of this preliminary assessment was to communicate rapid high-level building damage trends on a regional scale. This assessment has been undertaken on a regional scale (generally exceeding 100 km²) and thus may not accurately reflect local variation in damage.” I wish more crisis maps provided qualifiers like the above. That said, while I haven’t had the time to review the hundreds of aerial images on the crisis map to personally assess the level of damage depicted in each, I was struck by the assessment of Ovalau, which I selected at random.

Fiji Crisis Map 4

As you’ll note, the entire island is color coded as severe damage. But I selected several aerial images at random and none showed severe building damage. The images I reviewed are included below.

This last one may seem to show disaster damage, but a closer inspection (by zooming in) reveals that the vast majority of buildings are largely intact.

I shall investigate this further to better understand the possible discrepancy. In any event, I’m particularly pleased to see the UN (and others) make use of aerial imagery in their disaster damage assessment efforts. I’d also like to see the use of aerial robotics for the collection of very high resolution, orthorectified aerial imagery. But using these robotics solutions to their full potential for damage assessment purposes requires regulatory approval and robust coordination mechanisms. Both are absolutely possible as we demonstrated in neighboring Vanuatu last year.

Increasing the Reliability of Aerial Imagery Analysis for Damage Assessments

In March 2015, I was invited by the World Bank to spearhead an ambitious humanitarian aerial robotics (UAV) mission to Vanuatu following Cyclone Pam, a devastating Category 5 cyclone. This mission was coordinated with Heliwest and X-Craft, two outstanding UAV companies that were identified through the Humanitarian UAV Network (UAViators) Roster of Pilots. You can learn more about the mission and see pictures here. Lessons learned from this mission (and many others) are available here.

The World Bank and partners were unable to immediately analyze the aerial imagery we had collected because they faced a Big Data challenge. So I suggested the Bank activate the Digital Humanitarian Network (DHN) to request digital volunteer assistance. As a result, the Humanitarian OpenStreetMap Team (HOT) analyzed some of the orthorectified mosaics and MicroMappers focused on analyzing the oblique images (more on both here).

This in turn produced a number of challenges. To cite just one, the Bank needed digital humanitarians to identify which houses or buildings were completely destroyed, versus partially damaged, versus largely intact. But there was little guidance on how to determine what constituted fully destroyed versus partially damaged, or on what such structures in Vanuatu look like when damaged by a cyclone. As a result, data quality was not as high as it could have been. In my capacity as a consultant for the World Bank’s UAVs for Resilience Program, I decided to do something about this lack of guidelines for imagery interpretation.

I turned to my colleagues at the Harvard Humanitarian Initiative (where I had previously co-founded and co-directed the HHI Program on Crisis Mapping) and invited them to develop a rigorous guide that could inform the consistent interpretation of aerial imagery of disaster damage in Vanuatu (and nearby Island States). Note that Vanuatu is number one on the World Bank’s Risk Index of most disaster-prone countries. The imagery analysis guide has just been published (PDF) by the Signal Program on Human Security and Technology at HHI.

Big thanks to the HHI team for having worked on this guide, and to my Bank colleagues and other reviewers for their detailed feedback on earlier drafts. The guide is another important step towards improving data quality for satellite and aerial imagery analysis in the context of damage assessments. Better data quality is also important for the use of Artificial Intelligence (AI) and computer vision, as explained here. If a humanitarian UAV mission does happen in response to the recent disaster in Fiji, then the guide may also be of assistance there, depending on how similar the building materials and architecture are. For now, many thanks to HHI for having produced this imagery guide.

Aerial Robotics for Search & Rescue: State of the Art?

WeRobotics is co-creating a global network of labs to transfer robotics solutions to those who need them most. These “Flying Labs” take on different local flavors based on the needs and priorities of local partners. Santiago Flying Labs is one of the labs under consideration. Our local partners in Chile are interested in the application of robotics for disaster preparedness and Search & Rescue (SaR) operations. So what is the state of the art in rescue robotics?

One answer may lie several thousand miles away in Lushan, China, which experienced a 7.0 magnitude earthquake in 2013. The mountainous area made it nearly impossible for the Chinese International Search and Rescue Team (CISAR) to implement a rapid search and post-seismic evaluation. So the State Key Robotics Lab at the Shenyang Institute of Automation offered aerial support to CISAR. They used their aerial robot (UAV) to automatically and accurately detect collapsed buildings for ground rescue guidance. This saved the SaR teams considerable time. Here’s how.

A quicker SaR response leads to a higher survival rate. The survival rate is around 90% within the first 30 minutes but closer to 20% by day four. “In traditional search methods, ground rescuers are distributed to all possible places, which is time consuming and inefficient.” An aerial inspection of the disaster damage can help accelerate the ground search for survivors by prioritizing which areas to search first.

State Key Labs used a ServoHeli aerial robot to capture live video footage of the damaged region. And this is where it gets interesting. “Because the remains of a collapsed building often fall onto the ground in arbitrary directions, their shapes will exhibit random gradients without particular orientations. Thus, in successive aerial images, the random shape of a collapsed building will lead to particular motion features that can be used to discriminate collapsed from non-collapsed buildings.”

These distinct motion features can be quantified using a histogram of oriented gradients (HOG).

As the paper’s histograms make clear, the “HOG variation of a normal building will be much larger than that of a collapsed one.” The team at State Key Labs had already employed this technique to train and test their automated feature-detection algorithm using aerial video footage from the 2010 Haiti Earthquake. In the sample results, red rectangles denote where the algorithm was successful in identifying damage, blue rectangles are false alarms, and orange rectangles are missed detections.
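
For readers who want a feel for the technique, below is a rough single-image sketch of the HOG-variation idea using scikit-image. This is my own reconstruction, not the State Key Lab implementation (which works on motion features across successive video frames), and the collapse threshold is an arbitrary placeholder.

```python
import numpy as np
from skimage.feature import hog

def hog_variation(patch):
    """Variance of the averaged 9-bin orientation histogram of a grayscale patch."""
    features = hog(patch, orientations=9, pixels_per_cell=(16, 16),
                   cells_per_block=(1, 1), feature_vector=True)
    histogram = features.reshape(-1, 9).mean(axis=0)  # average per-cell histograms
    return histogram.var()

def looks_collapsed(patch, threshold=0.002):  # threshold is a placeholder
    # Intact buildings have dominant edge orientations (peaked histogram, high
    # variance across bins); rubble has random gradients (flat histogram).
    return hog_variation(patch) < threshold

# Toy demo: a striped, building-like patch vs a noisy, rubble-like patch.
stripes = np.tile(np.repeat([0.0, 1.0], 8), (64, 4))   # strong edges, one orientation
rubble = np.random.default_rng(1).random((64, 64))     # gradients in all directions
print(hog_variation(stripes) > hog_variation(rubble))  # expected: True
```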

While the team achieved increasingly accurate detection rates for Haiti, the initial results for Lushan were not as robust because Lushan is far more rural than Port-au-Prince, which tripped up the algorithm. Eventually, however, the software achieved an accuracy rate of 83.4% without any missed collapses. The use of aerial robotics and automated feature-detection algorithms in Xinglong Village (9.5 sq. km) enabled CISAR to cut their search time in half. In sum, the team concluded that videos are more valuable for SaR operations than static images.

To learn more about this deployment, see the excellent write-up “Search and Rescue Rotary-Wing UAV and Its Application to the Lushan Ms 7.0 Earthquake” published in the Journal of Field Robotics. I wish all robotics deployments were this well documented. Another point that I find particularly noteworthy about this operation is that it was conducted three years ago already. In other words, real-time feature detection of disaster damage from live aerial video footage was already used operationally years ago.

What’s more, this paper published in 2002 (!) used computer vision to detect collapsed buildings, with a 90% accuracy rate, in aerial footage of the 1995 Kobe earthquake captured by television crews in helicopters. Perhaps in the near future we’ll have automated feature-detection algorithms for disaster damage assessments running on live video footage from news channels and aerial robots. These could then be complemented by automated change-detection algorithms running on satellite imagery. In any event, the importance of applied research is clearly demonstrated by the Lushan deployments. This explains why WeRobotics always aims to have local universities involved in Flying Labs.


Thanks to the ICARUS Team for pointing me to this deployment.

When Bushmen Race Aerial Robots to Protect Wildlife

The San people are bushmen who span multiple countries in Southern Africa. “For millennia, they have lived as hunter-gatherers in the Kalahari desert relying on an intimate knowledge of the environment.” Thus begins Sonja’s story, written with Matthew Parkan. Friend and colleague Sonja Betschart is a fellow co-founder at WeRobotics. In 2015, she teamed up with partners in Namibia to support their wildlife conservation efforts. Sonja and team used aerial robots (UAVs) to count the number of wild animals in Kuzikus Wildlife Reserve. All the images in this blog post are credited to Drone Adventures.

If that name sounds familiar, it’s because of this digital crowdsourcing project, which I spearheaded with the same Wildlife Reserve back in 2014. The project, which was covered on CNN, used crowdsourcing to identify wild animals in very high-resolution aerial imagery. As Sonja notes, having updated counts for the number of animals in each species is instrumental to nature conservation efforts.

While the crowdsourcing project worked rather well to identify the presence of animals, distinguishing between animal species is more difficult for “the crowd”, even when crowdsourcing is combined with Artificial Intelligence. Local indigenous knowledge is unbeatable, or is it? Sonja and team decided to put this indigenous technical knowledge to the test: find the baby rhino that had been spotted in the area earlier during the mission. Armed with the latest aerial robotics technology, they knew full well that the robots would be stiff, even unfair, competition for the bushmen. After all, the aerial robots could travel much faster and much farther than the fastest bushman in Namibia.

Game on: Bushmen vs Robots. Ready, get set, go!

The drones weren’t even up in the air when the bushmen strolled back into camp to make themselves a cup of tea. Yes, they had already identified the location of the rhino. “This technologically humbling experience was a real eye opener,” writes Sonja, “a clear demonstration of how valuable traditional skills are despite new technologies. It was also one of the reasons why we started thinking about ways to involve the trackers in our work.”

They invited the bushmen to analyze the thousands of aerial images taken by the aerial robots to identify the various animal species visible in each image. After providing the San with a crash course on how to use a computer (to navigate between images), the bushmen “provided us with consistent interpretations of the species and sometimes even of the tracks in the high grass. However, what impressed us most was that the San pointed out subtle differences in the shadows [of the animals] that would give away the animals’ identities […].”

Take one aerial image, for example. “The San identified three different species on this image: oryx (red circles), blesbok (blue circle) and zebra (green circle).” In another image, “the bushmen were even able to give us the name of one of the Black Rhinos they spotted! For people who had never seen aerial imagery, their performance was truly remarkable.” Clearly, “with a bit of additional training, the San trackers could easily become expert image analysts.”

As Sonja rightly concluded after the mission, the project in Namibia was yet more evidence that combining indigenous knowledge with modern technology is both an elegant way of keeping this knowledge alive and an effective approach to address some of our technical and scientific limitations. As such, this is precisely the kind of project that Sonja and I want to see WeRobotics launch in 2016. It is fully in line with our mission to democratize the Fourth Industrial Revolution.


“The Namibia mission was part of SAVMAP, a joint initiative between the Laboratory of Geographic Information Systems (LASIG) of EPFL, Kuzikus Wildlife Reserve & Drone Adventures that aims at better integrating geospatial technologies with conservation and wildlife management in the Kalahari.”

360° Aerial View of Taiwan Earthquake Damage

The latest news coming from Taiwan is just tragic and breaks my heart. More than 100 may still be trapped under the rubble. I’ve been blogging about the role that emerging technologies can play in humanitarian response since 2008 but it definitely doesn’t get any easier emotionally to witness these tragic events. I try to draw a psychological line when I can so as not to lose hope and fall apart, but I’m not always successful. Sometimes it is just easier psychologically to focus on the technology and nothing else. My thoughts go out to all the families affected.


My colleague Preston Ward from DronePan kindly connected me to a friend of his, Elton, in Taiwan. Elton is with Super720, a company that creates impressive Virtual Reality (VR) panoramas. They deployed within 4 hours of the earthquake and used aerial robots (UAVs) at 5 different locations to capture both 360° aerial panoramas and 4K video to document the earthquake damage.

360° visual analysis may prove useful and complementary for inspecting disaster damage. If there were a way to geo-reference and orthorectify these panos, they could potentially play a larger role in the disaster assessment process, which relies on geospatial data and Geographic Information Systems (GIS).

The team in Taiwan also shared aerial video footage of the damage.

As one of my upcoming blog posts will note, we should be able to run computer vision algorithms on aerial videos to automatically detect disaster damage. We should also be able to use Virtual Reality to carry out damage assessments. You can already use VR to inspect the earthquake damage in Taiwan here.

Assessing Disaster Damage: How Close Do You Need to Be?

“What is the optimal spatial resolution for the analysis of disaster damage?”

I posed this question during an hour-long presentation I gave to the World Bank in May 2015. The talk was on the use of aerial robotics (UAVs) for remote sensing and disaster damage assessments; I used Cyclone Pam in Vanuatu and the double Nepal earthquakes as case studies.

One advantage of aerial robotics over space robotics (satellites) is that the former can capture imagery at far higher spatial resolutions (sub 1-centimeter if need be). Without hesitation, a World Bank analyst in the conference room replied to my question: “Fifty centimeters.” I was taken aback by the rapid reply. Had I perchance missed something? Was it so obvious that 50 cm resolution was optimal? So I asked, “Why 50 centimeters?” Again, without hesitation: “Because that’s the resolution we’ve been using.” Ah ha! “But how do you know this is the optimal resolution if you haven’t tried 30 cm or even 10 cm?”

Let’s go back to the fundamentals. We know that “rapid damage assessment is essential after disaster events, especially in densely built up urban areas where the assessment results provide guidance for rescue forces and other immediate relief efforts, as well as subsequent rehabilitation and reconstruction. Ground-based mapping is too slow, and typically hindered by disaster-related site access difficulties” (Gerke & Kerle 2011). Indeed, studies have shown that the inability to physically access damaged areas results in field teams underestimating the damage (Lemoine et al. 2013). Hence one reason for the use of remote sensing.

We know that remote sensing can be used for two purposes following disasters. The first, “Rapid Mapping”, aims at providing impact assessments as quickly as possible after a disaster. The second, “Economic Mapping”, assists in quantifying the economic impact of the damage. “Major distinctions between the two categories are timeliness, completeness and accuracies” (2013). In addition, Rapid Mapping aims to identify the relative severity of the damage (low, medium, high) rather than absolute damage figures. Results from Economic Mapping are combined with other geospatial data (building size, building type, etc.) and economic parameters (e.g., cost per unit area, relocation costs, etc.) to compute a total cost estimate (2013). The Post-Disaster Needs Assessment (PDNA) is an example of Economic Mapping.
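
As a purely illustrative sketch of what that Economic Mapping computation looks like, consider the toy example below. The unit costs, damage-to-loss ratios and building records are invented placeholders, not actual PDNA parameters.

```python
# Assumed economic parameters (USD per square meter) and loss fractions.
COST_PER_M2 = {"residential": 400, "commercial": 900}
DAMAGE_RATIO = {"minor": 0.1, "moderate": 0.5, "severe": 1.0}

# Building records: (type, footprint in m2, damage grade), as would be
# derived from imagery analysis joined with GIS building footprints.
buildings = [
    ("residential", 120, "severe"),
    ("residential", 90, "moderate"),
    ("commercial", 400, "minor"),
]

total = sum(COST_PER_M2[btype] * area * DAMAGE_RATIO[grade]
            for btype, area, grade in buildings)
print(f"Estimated direct loss: ${total:,.0f}")  # -> $102,000
```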

It is worth noting that a partially destroyed building may be seen as a complete economic loss, identical to a totally destroyed structure (2011). “From a casualty / fatality assessment perspective, however, a structure that is still partly standing […] offers a different survival potential” (2011).

We also know that “damage detectability is primarily a function of image resolution” (2011). The Haiti Earthquake was “one of the first major disasters in which very high resolution satellite and airborne imagery was embraced to delineate the event impact” (2013). It was also “the first time that the PDNA was based on damage assessment produced with remotely sensed data” (2013). Imagery analysis of the impact confirmed that a “moderate resolution increase from 41 cm to 15 cm has profound effects on damage mapping accuracy” (2011). Indeed, a number of validation studies carried out since 2010 have confirmed that “the higher detail airborne imagery performs much better [than lower resolution] satellite imagery” (2013). More specifically, the detection of very heavy damage and destruction in Haiti aerial imagery “is approximately a factor 8 greater than in the satellite imagery.”

Comparing the aerial imagery analysis with field surveys, Lemoine et al. find that “the number of heavily affected and damaged buildings in the aerial point set is slightly higher than that obtained from the field survey” (2013). The correlation between the results of the aerial imagery analysis and field surveys is sensitive to land-use (e.g., commercial, downtown, industrial, residential high density, shanty, etc). In highly populated areas such as shanty zones, “the over-estimation of building damage from aerial imagery could simply be a result of an incomplete field survey while, in downtown, where field surveys seem to have been conducted in a more systematic way, the damage assessment from aerial imagery matches very well the one obtained from the field” (2013).

In sum, the results from Haiti suggest that the “damage assessment from aerial imagery currently represents the best possible compromise between timeliness and accuracy” (2013). The Haiti case study also “showed that the damage derived from satellite imagery was underestimated by a factor of eight, compared to the damage derived from aerial imagery. These results suggest that despite the fast availability of very high resolution satellite imagery […], the spatial resolution of 50 cm is not sufficient for an accurate interpretation of building damage.”

In other words, even though “damage assessments depend very much on the timeliness of the aerial images acquisitions and requires a considerable effort of visual interpretation as an element of compromise; [aerial imagery] remains the best trade-off in terms of required quality and timeliness for producing detailed damage assessments over large affected areas compared to satellite based assessments (insufficient quality) and exhaustive field inventories (too slow).” But there’s a rub with respect to aerial imagery. While the above results do “show that the identification of building damage from aerial imagery […] provides a realistic estimate of the spatial pattern and intensity of damage,” the aerial imagery analysis still “suffers from several limitations due to the nadir imagery” (2013).

“Essentially all conventional airborne and spaceborne image data are taken from a quasi-vertical perspective” (2011). Vertical (or nadir) imagery is particularly useful for a wide range of applications, for sure. But when it comes to damage mapping, vertical data have great limitations, particularly concerning structural building damage (2011). While complete collapse can be readily identified using vertical imagery (e.g., disintegrated roof structures and associated high texture values, or adjacent rubble piles), “lower levels of damage are much harder to map. This is because such damage effects are largely expressed along the façades, which are not visible in such imagery” (2011). According to Gerke and Kerle, aerial oblique imagery is more useful for disaster damage assessment than aerial or satellite imagery taken at a vertical angle. I elaborated on this point vis-a-vis a more recent study in this blog post.

Clearly, “much of the challenge in detecting damage stems from the complex nature of damage” (2011). For some types and sizes of damage, using only vertical (nadir) imagery will result in missed damage (2013). The question, therefore, is not simply one of spatial resolution, but also of the angle at which the aerial or space-based image is taken, the land use, and the type of damage that needs to be quantified. Still, we do have a partial answer to the opening question: technically, the optimal spatial resolution for disaster damage assessments is certainly not 50 cm, since 15 cm proved far more useful in Haiti.

Of course, if higher-resolution imagery is not available in time (or at all), then 50 cm imagery is clearly far better than no imagery. In fact, even 5 meter imagery that is available within 24-48 hours of a disaster can add value if it comes with baseline imagery, i.e., imagery of the location of interest before the disaster. Baseline data enables the use of automated change-detection algorithms that can provide a first estimate of damage severity and the location or scope of that severity. What’s more, these change-detection algorithms could automatically plot a series of waypoints to chart the flight plan of an autonomous aerial robot (UAV) for closer inspection. In other words, satellite and aerial data can be complementary, and the drawbacks of low resolution imagery can be offset if said imagery is available at a higher temporal frequency.
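
Here is a hedged sketch of that change-detection-to-waypoints idea on synthetic arrays: a blockwise difference between co-registered before/after images flags likely-damaged cells, whose centres could then seed a UAV flight plan. A real pipeline would operate on georeferenced, radiometrically corrected rasters; the cell size and threshold below are arbitrary.

```python
import numpy as np

def change_waypoints(before, after, cell=32, threshold=30.0):
    """Return (row, col) centres of cells whose mean absolute difference
    between the two images exceeds the threshold."""
    diff = np.abs(after.astype(float) - before.astype(float))
    waypoints = []
    for r in range(0, diff.shape[0] - cell + 1, cell):
        for c in range(0, diff.shape[1] - cell + 1, cell):
            if diff[r:r + cell, c:c + cell].mean() > threshold:
                waypoints.append((r + cell // 2, c + cell // 2))
    return waypoints

# Toy demonstration with synthetic "imagery".
rng = np.random.default_rng(0)
before = rng.integers(0, 255, (128, 128)).astype(np.uint8)
after = before.copy()
after[32:64, 64:96] = 0  # simulate a destroyed block
print(change_waypoints(before, after))  # -> [(48, 80)]
```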

On the flip side, just because the aerial imagery used in the Haiti study was captured at 15 cm resolution does not imply that 15 cm is the optimal spatial resolution for disaster damage. It could very well be 10 cm. This depends entirely on the statistical distribution of the size of damaged features (e.g., the size of a crack, a window, a tile, etc.), the local architecture (e.g., the type of building materials used and so on) and the type of hazard (e.g., hurricane, earthquake, etc.). That said, “individual indicators of destruction, such as a roof or façade damage, do not linearly add to a given damage class [e.g., low, medium, high]” (2011). This limitation is alas “fundamental in remote sensing where no assessment of internal structural integrity is possible” (2011).
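
One way to reason about that dependency is a Nyquist-style rule of thumb: to reliably detect a feature you want at least roughly two pixels across it, which bounds the ground sampling distance (GSD) required. The feature sizes below are invented examples, not measured distributions.

```python
# Smallest damage feature one needs to resolve, in centimetres (assumed values).
FEATURES_CM = {"collapsed roof section": 200, "missing roof tiles": 30, "wall crack": 5}

for feature, size_cm in FEATURES_CM.items():
    max_gsd = size_cm / 2  # coarsest GSD that still puts ~2 pixels across the feature
    print(f"{feature}: requires <= {max_gsd:.1f} cm/pixel")
```

By this heuristic, 50 cm imagery would miss anything smaller than about a meter across, which may help explain why 15 cm performed so much better in Haiti.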

In any event, one point is certain: there was no need to capture aerial imagery (both nadir and obliques) at 5 centimeter resolution during the World Bank’s humanitarian UAV mission after Cyclone Pam—at least not for the purposes of 2D imagery analysis. This leads me to one final point. As recently noted during my opening Keynote at the 2016 International Drones and Robotics for Good Awards in Dubai, disaster damage is a 3D phenomenon, not a 2D experience. “Buildings are three-dimensional, and even the most detailed view at only one of those dimensions is ill-suited to describe the status of such features” (2011). In other words, we need “additional perspectives to provide a more comprehensive view” (2011).

There are very few peer-reviewed scientific papers that evaluate the use of high-resolution 3D models for the purposes of damage assessments. This one is likely the most up-to-date study. An earlier research effort by Booth et al. found that the overall correlation between the results from field surveys and 3D analysis of disaster damage was “an encouraging 74 percent.” But this visual analysis was carried out manually, which could have introduced non-random errors. After all, “the subjectivity inherent in visual structure damage mapping is considerable” (2011). Could semi-automated methods for the analysis of 3D models thus yield a higher correlation? This is the research question posed by Gerke and Kerle in their 2011 study.

The authors tested this question using aerial imagery from the Haiti Earthquake. When 3D features were removed from their automated analysis, “classification performance declined, for example by some 10 percent for façades, the class that benefited most from the 3D derivates” (2011). The researchers also found that trained imagery analysts only agreed at most 76% of the time in their visual interpretation and assessments of aerial data. This is in part due to the lack of standards for damage categories (2011).

I made this point more bluntly in this earlier blog post. Just imagine when you have hundreds of professionals and/or digital volunteers analyzing imagery (e.g., through crowdsourcing) and no standardized categories of disaster damage to inform the consistent interpretation of said imagery. This collective subjectivity introduces a non-random error into the overall analysis. And because it is non-random, this error cannot be accounted for. In contrast, a more semi-automated solution would render this error more random, which means the overall model could potentially be adjusted accordingly.
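
Disagreement of this kind can at least be measured. A standard tool is Cohen’s kappa, which corrects raw inter-analyst agreement (like the 76% figure above) for chance agreement. The short sketch below computes it for two hypothetical analysts labeling the same buildings; the labels are invented.

```python
from collections import Counter

def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in set(a) | set(b)) / n ** 2
    return (observed - expected) / (1 - expected)

analyst1 = ["severe", "moderate", "severe", "minor", "moderate", "severe"]
analyst2 = ["severe", "severe", "severe", "minor", "moderate", "moderate"]
print(f"kappa = {cohens_kappa(analyst1, analyst2):.2f}")  # raw agreement 0.67, kappa 0.45
```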

Gerke and Kerle conclude that high quality 3D models are “in principle well-suited for comprehensive semi-automated damage mapping. In particular façades, which are critical [to the assessment process], can be assessed [when] multiple views are provided.” That said, the methodology used by the authors was “still essentially based on projecting 3D data into 2D space, with conceptual and geometric limitations. [As such], one goal should be to perform the actual damage assessment and classification in 3D.” This explains why I’ve been advocating for Virtual Reality (VR) based solutions as described here.

Gerke and Kerle also “required extensive training data and substantial subjective evidence integration in the final damage class assessment. This raises the question to what extent rules could be formulated to create a damage ontology as the basis for per-building damage scoring.” This explains why I invited the Harvard Humanitarian Initiative (HHI) to create a damage ontology based on the aerial imagery from Cyclone Pam in Vanuatu. This ontology is based on 2D imagery, however. Ironically, very high spatial resolution 2D imagery can be more difficult to interpret than lower-res imagery since the high resolution imagery inevitably adds more “noise” to the data.

Ultimately, we’ll need to move on to 3D damage ontologies that can be visualized using VR headsets. 3D analysis is naturally more intuitive to us since we live in a mega-high resolution 3D world rather than a 2D one. As a result, I suspect there would be more agreement between different analysts studying dynamic, very high-resolution 3D models versus 2D static images at the same spatial res.

Taiwan’s National Cheng Kung University created this 3D model from aerial imagery captured by UAV. This model was created and uploaded to Sketchfab on the same day the earthquake struck. Note that Sketchfab recently added a VR feature to their platform, which I tried out on this model. Simply open this page on your mobile device to view the disaster damage in Taiwan in VR. I must say it works rather well, and even seems to be higher resolution in VR mode compared to the 2D projections of the 3D model above. More on the Taiwan model here.

But to be useful for disaster damage assessments, the VR headset would need to be combined with wearable technology that enables the end-user to digitally annotate (or draw on) the 3D models directly within the same VR environment. This would render the analysis process more intuitive while also producing 3D training data for the purposes of machine learning—and thus automated feature detection.

I’m still actively looking for a VR platform that will enable this, so please do get in touch if you know of any group, company, research institute, etc., that would be interested in piloting the 3D analysis of disaster damage from the Nepal Earthquake entirely within a VR solution. Thank you!