Category Archives: Crisis Mapping

Using Aerial Robotics and Virtual Reality to Inspect Earthquake Damage in Taiwan (Updated)

A tragic 6.4 magnitude earthquake struck southern Taiwan shortly before 4am on Saturday, February 6th. Later that day, aerial robots (UAVs) were used to capture aerial videos and images of the disaster damage.


Within 10 hours of the earthquake, Dean Hosp at Taiwan’s National Cheng Kung University used screenshots of aerial videos posted on YouTube by various media outlets to create a first 3D model of the disaster damage. Because Dean worked from this “second hand” data, the model is low resolution; access to the original imagery would enable a far higher-resolution model. As Dean puts it: “If I can fly myself, results can produce more fine and faster.”


Update: About 48 hours after the earthquake, Dean and his team used their own UAV to create a much higher-resolution version, which they also annotated.


Here’s the embedded 3D model:

These 3D models were processed using Agisoft PhotoScan and then uploaded to Sketchfab on the same day the earthquake struck. I’ve blogged about Sketchfab in the past—see this first-ever 3D model of a refugee camp, for example. A few weeks ago, Sketchfab added a Virtual Reality feature to their platform, so I tried it out on the above model.


The model appears equally crisp when viewed in VR mode on a mobile device (using Google Cardboard in my case); simply open this page on your mobile device to view the disaster damage in Virtual Reality.


This is a good first step vis-a-vis VR applications. As a second step, we need to develop 3D disaster ontologies to ensure that imagery analysts actually interpret 3D models in the same way. As a third step, we need to combine VR headsets with wearable technology that enables the end-user to annotate (or draw on) the 3D models directly within the same VR environment. This would make the damage assessment process more intuitive while also producing 3D training data for the purposes of machine learning—and thus automated feature detection.
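To make the ontology idea more concrete, here is a hypothetical sketch in Python of what a minimal, shared 3D damage vocabulary could look like. The damage classes and annotation fields below are purely illustrative assumptions on my part, not an agreed humanitarian standard:

```python
# A hypothetical sketch of a minimal "3D disaster ontology": a shared,
# controlled vocabulary so that two analysts annotating the same 3D model
# label damage the same way. Categories and fields are illustrative
# assumptions, not an existing standard.
from dataclasses import dataclass
from enum import Enum

class DamageClass(Enum):
    INTACT = "intact"
    PARTIALLY_DAMAGED = "partially_damaged"
    FULLY_DESTROYED = "fully_destroyed"

@dataclass
class Annotation3D:
    x: float          # position within the 3D model (metres)
    y: float
    z: float
    label: DamageClass
    analyst_id: str   # who drew the annotation, for quality control

# Consistently labelled annotations double as 3D training data for
# machine learning and, eventually, automated feature detection.
example = Annotation3D(12.4, 7.9, 3.1, DamageClass.PARTIALLY_DAMAGED, "analyst_42")
print(example)
```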

I’m still actively looking for a VR platform that will enable this, so please do get in touch if you know of any group, company, research institute, etc., that would be interested in piloting the 3D analysis of disaster damage from the Taiwan or Nepal Earthquakes entirely within a VR solution. Thank you.

Click here to view 360° aerial panoramas of the disaster damage.

Many thanks to Sebastien Hodapp for pointing me to the Taiwan model.

Ranking Aerial Imagery for Disaster Damage Assessments

Analyzing satellite and aerial imagery to assess disaster damage is fraught with challenges, for digital humanitarians and professional imagery analysts alike. Why? Because distinguishing between infrastructure that is fully destroyed and infrastructure that is only partially damaged can be particularly challenging. Professional imagery analysts with years of experience readily admit that trained analysts regularly interpret the same sets of images differently. Consistency in the interpretation of satellite and aerial imagery is clearly no easy task. My colleague Joel Kaiser from Medair recently suggested another approach.


Joel and I both serve on the “Core Team” of the Humanitarian UAV Network (UAViators). It is in this context that we’ve been exploring ways to render aerial imagery more actionable for rapid disaster damage assessments and tactical decision-making. To overcome some of the challenges around the consistent analysis of aerial imagery, Joel suggested we take a rank-order approach. His proposal is quite simple: display two geo-tagged aerial images side by side with the following question: “Which of the two images shows more disaster damage?” Each pair of images would be shown to multiple individuals. Images voted as depicting more damage would “graduate” to the next stage and be compared with each other, while images voted as showing less damage would likewise be compared with one another.

In short, a dedicated algorithm would intelligently select the right combination of images to display side by side. The number and type of votes could be tabulated to compute reliability and confidence scores for the rankings. Each image would have a unique damage score which could potentially be used to identify thresholds for fully destroyed versus partially damaged versus largely intact infrastructure. Much of this could be done on MicroMappers or similar microtasking solutions. Such an approach would do away with the need for detailed imagery interpretation guides. As noted above, consistent analysis is difficult even when such guides are available. The rank-order approach could help quickly identify and map the most severely affected areas to prioritize tactical response efforts.  Note that this approach could be used with both crowd-sourced analysis and professional analysis. Note also that the GPS coordinates for each image would not be made publicly available for data privacy reasons.
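To make Joel’s proposal more concrete, here is a minimal sketch of how the pairwise votes could be turned into damage scores using an Elo-style rating update (the kind used to rank chess players). The image IDs, votes and K-factor below are illustrative assumptions, not part of any existing MicroMappers implementation:

```python
# A minimal sketch of the rank-order idea using Elo-style scoring.
# Image IDs, vote data, and the K-factor are illustrative assumptions.

def expected(score_a, score_b):
    """Probability that image A is voted as showing more damage."""
    return 1 / (1 + 10 ** ((score_b - score_a) / 400))

def update(scores, winner, loser, k=32):
    """Update damage scores after one 'which shows more damage?' vote."""
    exp_win = expected(scores[winner], scores[loser])
    scores[winner] += k * (1 - exp_win)
    scores[loser] -= k * (1 - exp_win)

# Every image starts with the same baseline damage score.
scores = {img: 1000.0 for img in ["img_001", "img_002", "img_003"]}

# Each vote is a (more_damage, less_damage) pair from one volunteer.
votes = [("img_002", "img_001"), ("img_002", "img_003"), ("img_001", "img_003")]
for winner, loser in votes:
    update(scores, winner, loser)

# Images can then be ranked, and thresholds applied to flag the most
# severely damaged areas for tactical response.
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)
```

The number of votes per pair would feed the reliability and confidence scores mentioned above; the more volunteers agree on a pair, the more stable the resulting damage score.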

Is this strategy worth pursuing? What are we missing? Joel and I would be keen to get some feedback. So please feel free to use the comments section below to share your thoughts or to send an email here.

Video: Crisis Mapping Nepal with Aerial Robotics


I had the honor of spearheading this disaster recovery UAV mission in Nepal a few weeks ago as part of Kathmandu Flying Labs. I’ve been working on this new initiative (in my own time) with Kathmandu Living Labs (KLL), Kathmandu University (KU), DJI and Pix4D. This Flying Lab is the first of several local UAV innovation labs that I am setting up (in my personal capacity and during my holiday time) with friends and colleagues in disaster-prone countries around the world. The short film documentary above was launched just minutes ago by DJI and describes how we teamed up with local partners in Kathmandu to make use of aerial robotics (UAVs) to map Nepal’s recovery efforts.

Here are some of the 3D results, courtesy of Pix4D:



Why work in 3D? Because disaster damage is a 3D phenomenon. This newfound ability to work in 3D has important implications for Digital Humanitarians. To be sure, the analysis of these 3D models could be crowdsourced and eventually carried out entirely within a Virtual Reality environment.

Since most of our local partners in Nepal don’t have easy access to computers or VR headsets, I found another way to unlock and liberate this digital data by printing our high-resolution maps on large, rollable banners.



We brought these banner maps back to the local community and invited them to hack the map. How? Directly, by physically adding their local knowledge to the map; knowledge about the location of debris, temporary shelters, drinking water and lots more. We brought tape and color-coded paper with us to code this knowledge so that the community could annotate the map themselves.


In other words, we crowdsourced a highly participatory crisis map of Panga. The result was a rich, contextual social layer on top of the base map, which further informed community discussions on the strategies and priorities guiding their recovery efforts. For the first time ever, the community of Panga was working off one and the same dataset to inform their rebuilding. In short, our humanitarian mission combined aerial robotics, computer vision, waterproof banners, local knowledge, tape, paper and crowdsourcing to engage local communities in the reconstruction process.


I’m now spending my evenings & weekends working with friends and colleagues to plan a follow-up mission in early 2016. We’ll be returning to Kathmandu Flying Labs with new technology partners to train our local partners on how to use fixed-wing UAVs for large scale mapping efforts. In the meantime, we’re also exploring the possibility of co-creating Jakarta Flying Labs, Monrovia Flying Labs and Santiago Flying Labs in 2016.

I’m quitting my day job next week to devote myself full time to these efforts. Fact is, I’ve been using all of my free time (meaning evenings, weekends and many, many weeks of holiday time) to pursue my passion in aid robotics and to carry out volunteer-based UAV missions like the one in Nepal. I’ve also used holiday time (and my own savings) to travel across the globe to present this volunteer-work at high-profile events, such as the 2015 Web Summit here in Dublin where the DJI film documentary was just publicly launched.


My Nepali friends & I need your help to make sure that Kathmandu Flying Labs takes off and becomes a thriving and sustainable center of social entrepreneurship. To this end, we’re actively looking for both partners and sponsors to make all this happen, so please do get in touch if you share our vision. And if you’d like to learn more about how UAVs and other emerging technologies are changing the face of humanitarian action, then check out my new book Digital Humanitarians.

In the meantime, big, big thanks to our Nepali partners and technology partners for making our good work in Kathmandu possible!

3D Digital Humanitarians: The Irony

In 2009 I wrote this blog post entitled “The Biggest Problem with Crisis Maps.” The gist of the post: crises are dynamic over time and space but our crisis maps are 2D and static. More than half-a-decade later, Digital Humanitarians have still not escaped from Plato’s Cave. Instead, they continue tracing 2D shadows cast by crisis data projected on their 2D crisis maps. Is there value in breaking free from our 2D data chains? Yes. And the time will soon come when Digital Humanitarians will have to make a 3D run for it.


Aerial imagery captured by UAVs (Unmanned Aerial Vehicles) can be used to create very high-resolution 3D point clouds like the one below. It only took a 4-minute UAV flight to capture the imagery for this point cloud. Of course, the processing time to convert the 2D imagery to 3D took longer. But solutions already exist to create 3D point clouds on the fly, and these solutions will only get more sophisticated over time.

Stitching 2D aerial imagery into larger “mosaics” is already standard practice in the UAV space. But that’s so 2014. What we need is the ability to stitch together 3D point clouds. In other words, I should be able to mesh my 3D point cloud of a given area with other point clouds that overlap spatially with mine. This would enable us to generate high-resolution 3D point clouds for larger areas. Let’s call these accumulated point clouds Cumulus Clouds. We could then create baseline data in the form of Cumulus Clouds. And when a disaster happens, we could create updated Cumulus Clouds for the affected area and compare them with our baseline Cumulus Cloud for changes. That way, instead of solely generating 2D mapping data for the Missing Maps Project, we could also contribute Cumulus Clouds.
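As a rough illustration of the stitching step, here is a minimal sketch using the open-source Open3D library’s ICP (Iterative Closest Point) registration to merge two overlapping point clouds into one “Cumulus Cloud”. The file names are placeholders, and real-world clouds would need a rough initial alignment (from GPS metadata, say) for ICP to converge:

```python
# A minimal "Cumulus Cloud" stitching sketch using Open3D and ICP.
# File names are placeholders; the 0.5 m correspondence radius is an
# illustrative assumption.
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("my_point_cloud.ply")     # my survey
target = o3d.io.read_point_cloud("overlapping_cloud.ply")  # a neighbor's

# Refine the alignment with point-to-point ICP.
result = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence_distance=0.5,
    init=np.identity(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

# Apply the estimated transform and merge the two clouds into one.
source.transform(result.transformation)
cumulus = source + target
o3d.io.write_point_cloud("cumulus_cloud.ply", cumulus)
```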

Meanwhile, breakthroughs in Virtual Reality will enable Digital Humanitarians to swarm through these Cumulus Clouds. Innovations such as the Oculus Rift, one of the first consumer-targeted virtual reality headsets, may become the pièce de résistance of future Digital Humanitarians. This shift to 3D doesn’t mean that our methods for analyzing 2D crisis maps are obsolete when we leave Plato’s Cave. We simply need to extend our microtasking and crowdsourcing solutions to the 3D space. As such, a 3D “tasking manager” would just assign specific areas of a Cumulus Cloud to individual Digital Jedis. This is no different from how field-based disaster assessment surveys get carried out in the “Solid World” (Real World). Our Oculus headsets would “simply” need to allow Digital Jedis to annotate (or trace) sections of the Cumulus Clouds just like they already do with 2D maps; otherwise we’ll be nothing more than disaster tourists.
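Here is a hypothetical sketch of what such a 3D tasking manager could look like: split a Cumulus Cloud’s footprint into tiles and assign each tile to several volunteers. The tile size, round-robin assignment and five-volunteer redundancy are my own illustrative assumptions:

```python
# Hypothetical sketch of a 3D "tasking manager": split a Cumulus Cloud's
# bounding box into grid tiles and assign each tile to several volunteers.
# Tile size and the round-robin assignment are illustrative assumptions.
import numpy as np

def make_tasks(points, tile_size=20.0):
    """Bucket 3D points (N x 3 array, metres) into tile IDs on the x-y plane."""
    mins = points[:, :2].min(axis=0)
    tile_ids = np.floor((points[:, :2] - mins) / tile_size).astype(int)
    # Each unique (col, row) pair becomes one microtask.
    return {tuple(t) for t in tile_ids}

def assign(tasks, volunteers, redundancy=5):
    """Assign every tile to `redundancy` volunteers for quality control."""
    assignments = {task: [] for task in tasks}
    i = 0
    for task in tasks:
        for _ in range(redundancy):
            assignments[task].append(volunteers[i % len(volunteers)])
            i += 1
    return assignments

points = np.random.rand(10_000, 3) * 100  # stand-in for a real point cloud
tasks = make_tasks(points)
assignments = assign(tasks, ["vol_a", "vol_b", "vol_c", "vol_d", "vol_e", "vol_f"])
print(len(tasks), "tiles assigned, 5 volunteers per tile")
```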


The shift to 3D is not without challenges. It necessarily increases visual complexity. Indeed, 2D images are a radical (and often welcome) simplification of the Solid World. This simplification comes with a number of advantages, like improving the signal-to-noise ratio. But 2D imagery, like satellite imagery, “hides” information, which is one reason why imagery interpretation and analysis is difficult, often requiring expert training. 3D, by contrast, is more intuitive; 3D is the world we live in. Interpreting signs of damage in 3D may thus be easier than doing so with far less information in 2D. Of course, this also depends on the level of detail required for the 3D damage assessments. Regardless, appropriate tutorials will need to be developed to guide the analysis of 3D point clouds and Cumulus Clouds. Wait a minute—shouldn’t existing assessment methodologies used for field-based surveys in the Solid World do the trick? After all, the “Real World” is in 3D, last time I checked.

Ah, there’s the rub. Some of the existing methodologies developed by the UN and World Bank to assess disaster damage are largely dysfunctional. Take for example the formal definition of “partial damage” used by the Bank to carry out post-disaster damage and needs assessments: “the classification used is to say that if a building is 40% damaged, it needs to be repaired. In my view this is too vague a description and not much help. When we say 40%, is it the volume of the building we are talking about or the structural components?” The question is posed by a World Bank colleague with 15+ years of experience. Since high-resolution 3D data enables more of us to more easily see more details, our assessment methodologies will necessarily need to become more detailed, for both manual and automated analysis. This does add complexity, but such is the price of reliable damage assessments.

Isn’t it ironic that our shift to Virtual Reality may ultimately improve the methodologies (and thus data quality) of field-based surveys carried out in the Solid World? In any event, I can already “hear” the usual critics complaining; the usual theatrics of cave-bound humanitarians who eagerly dismiss any technology that appears after the radio (and maybe SMS). Such is life. Moving along. I’m exploring practical ways to annotate 3D point clouds here, but if anyone has additional ideas, do please get in touch. I’m also looking for any solutions out there (imperfect ones are fine too) that can help us build Cumulus Clouds—i.e., stitch overlapping 3D point clouds. Lastly, I’d love to know what it would take to annotate Cumulus Clouds via Virtual Reality. Thanks!

Acknowledgements: Thanks to colleagues from OpenAerialMap, Cadasta and MapBox for helping me think through some of the ideas above.

Social Media for Disaster Response – Done Right!

To say that Indonesia’s capital is prone to flooding would be an understatement. Well over 40% of Jakarta is at or below sea level. Add to this a rapidly growing population of over 10 million and you have a recipe for recurring disasters. Increasing the resilience of the city’s residents to flooding is thus imperative. Resilience is the capacity of affected individuals to self-organize effectively, which requires timely decision-making based on accurate, actionable and real-time information. But Jakarta is also flooded with information during disasters. Indeed, the Indonesian capital is the world’s most active Twitter city.


So even if relevant, actionable information on rising flood levels could somehow be gleaned from millions of tweets in real-time, these reports could be inaccurate or completely false. Besides, only 3% of tweets on average are geo-located, which means any reliable evidence of flooding reported via Twitter is typically not actionable—that is, unless local residents and responders know where waters are rising, they can’t take tactical action in a timely manner. These major challenges explain why most discount the value of social media for disaster response.

But Digital Humanitarians in Jakarta aren’t your average Digital Humanitarians. These Digital Jedis recently launched one of the most promising humanitarian technology initiatives I’ve seen in years. Code-named Peta Jakarta, the project takes social media and digital humanitarian action to the next level. Whenever someone posts a tweet with the word banjir (flood), they receive an automated tweet reply from @PetaJkt inviting them to confirm whether they see signs of flooding in their area: “Flooding? Enable geo-location, tweet @petajkt #banjir and check.” The user can confirm their report by turning geo-location on and simply replying with the keyword banjir or flood. The result gets added to a live, public crisis map, like the one below.
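For readers curious about the mechanics, here is a minimal sketch of this auto-reply pattern using the tweepy library (version 3.x idioms). This is purely an illustration on my part, not Peta Jakarta’s actual implementation; the credentials and reply text are placeholders:

```python
# A minimal sketch of a keyword-triggered auto-reply bot with tweepy 3.x.
# Credentials are placeholders; this is not Peta Jakarta's actual code.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

class FloodListener(tweepy.StreamListener):
    def on_status(self, status):
        if status.coordinates is None:
            # No geo-location: invite the user to confirm their report.
            api.update_status(
                status="@%s Flooding? Enable geo-location, tweet @petajkt "
                       "#banjir and check petajakarta.org" % status.user.screen_name,
                in_reply_to_status_id=status.id)
        else:
            # Geo-tagged reply with the keyword: a confirmed report that
            # could be added to the live, public crisis map.
            print(status.coordinates, status.text)

stream = tweepy.Stream(auth=api.auth, listener=FloodListener())
stream.filter(track=["banjir", "flood"])
```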

Credit: Peta Jakarta

Over the course of the 2014/2015 monsoon season, Peta Jakarta automatically sent 89,000 tweets to citizens in Jakarta as a call to action to confirm flood conditions. These automated invitation tweets served to inform the user about the project and linked to the video below (via Twitter Cards) to provide simple instructions on how to submit a confirmed report with approximate flood levels. If a Twitter user forgets to turn on the geo-location feature of their smartphone, they receive an automated tweet reminding them to enable geo-location and resubmit their tweet. Finally, the platform “generates a thank you message confirming the receipt of the user’s report and directing them to see their contribution to the map.” Note that the “overall aim of sending programmatic messages is not to simply solicit a high volume of replies, but to reach active, committed citizen-users willing to participate in civic co-management by sharing nontrivial data that can benefit other users and government agencies in decision-making during disaster scenarios.”

A report is considered verified when a confirmed geo-tagged tweet includes a picture of the flooding, like in the tweet below. These confirmed and verified tweets get automatically mapped and also shared with Jakarta’s Emergency Management Agency (BPBD DKI Jakarta). The latter are directly involved in this initiative since they’re “regularly faced with the difficult challenge of anticipating & responding to floods hazards and related extreme weather events in Jakarta.” This direct partnership also serves to limit the “Data Rot Syndrome” where data is gathered but not utilized. Note that Peta Jakarta is able to carry out additional verification measures by manually assessing the validity of tweets and pictures by cross-checking other Twitter reports from the same district and also by monitoring “television and internet news sites, to follow coverage of flooded areas and cross-check reports.”


During the latest monsoon season, Peta Jakarta “received and mapped 1,119 confirmed reports of flooding. These reports were formed by 877 users, indicating an average tweet to user ratio of 1.27 tweets per user. A further 2,091 confirmed reports were received without the required geolocation metadata to be mapped, highlighting the value of the programmatic geo-location ‘reminders’ […]. With regard to unconfirmed reports, Peta Jakarta recorded and mapped a total of 25,584 over the course of the monsoon.”

The Live Crisis Maps could be viewed via two different interfaces depending on the end user. For local residents, the maps could be accessed via smartphone with the visual display designed specifically for more tactical decision-making, showing flood reports at the neighborhood level and only for the past hour.


For institutional partners, the data is visualized in more aggregate terms for strategic decision-making based on trend analysis and data integration. “When viewed on a desktop computer, the web-application scaled the map to show a situational overview of the city.”

Credit: Peta Jakarta

Peta Jakarta has “proven the value and utility of social media as a mega-city methodology for crowdsourcing relevant situational information to aid in decision-making and response coordination during extreme weather events.” The initiative enables “autonomous users to make independent decisions on safety and navigation in response to the flood in real-time, thereby helping increase the resilience of the city’s residents to flooding and its attendant difficulties.” In addition, by “providing decision support at the various spatial and temporal scales required by the different actors within city, Peta Jakarta offers an innovative and inexpensive method for the crowdsourcing of time-critical situational information in disaster scenarios.” The resulting confirmed and verified tweets were used by BPBD DKI Jakarta to “cross-validate formal reports of flooding from traditional data sources, supporting the creation of information for flood assessment, response, and management in real-time.”

My blog post is based on several conversations I had with the Peta Jakarta team and on this white paper, which was just published a week ago. The report runs close to 100 pages and should absolutely be considered required reading for all Digital Humanitarians and CrisisMappers. The paper includes several dozen insights which a short blog post simply cannot do justice to. If you can’t find the time to read the report, then please see the key excerpts below. In a future blog post, I’ll describe how the Peta Jakarta team plans to leverage UAVs to complement social media reporting.

  • Extracting knowledge from the “noise” of social media requires designed engagement and filtering processes to eliminate unwanted information, reward valuable reports, and display useful data in a manner that further enables users, governments, or other agencies to make non-trivial, actionable decisions in a time-critical manner.
  • While the utility of passively-mined social media data can offer insights for offline analytics and derivative studies for future planning scenarios, the critical issue for frontline emergency responders is the organization and coordination of actionable, real-time data related to disaster situations.
  • User anonymity in the reporting process was embedded within the Peta Jakarta project. Whilst the data produced by Twitter reports of flooding is in the public domain, the objective was not to create an archive of users who submitted potentially sensitive reports about flooding events, outside of the Twitter platform. Peta Jakarta was thus designed to anonymize reports collected by separating reports from their respective users. Furthermore, the text content of tweets is only stored when the report is confirmed, that is, when the user has opted to send a message to the @petajkt account to describe their situation. Similarly, when usernames are stored, they are encrypted using a one-way hash function (see the sketch after this list for what such a hash could look like).
  • In developing the Peta Jakarta brand as the public face of the project, it was important to ensure that the interface and map were presented as community-owned, rather than as a government product or academic research tool. Aiming to appeal to first adopters—the young, tech-savvy Twitter-public of Jakarta—the language used in all the outreach materials (Twitter replies, the outreach video, graphics, and print advertisements) was intentionally casual and concise. Because of the recurrence of flood events during the monsoon, and the continuation of daily activities around and through these flood events, the messages were intentionally designed to be more like normal Twitter chatter and less like public service announcements.
  • It was important to design the user interaction to create a user experience that highlighted the community-resource element of the project (similar to the Waze traffic app), rather than an emergency or information service. With this aim in mind, the graphics and language are casual and light in tone. The video, auto-replies, and print advertisements never used alarmist or moralizing language; instead, the graphic identity is one of casual, opt-in, community participation.
  • The most frequent question directed to @petajkt on Twitter was about how to activate the geo-location function for tweets. So far, this question has been addressed manually by sending a reply tweet with a graphic instruction describing how to activate geo-location functionality.
  • Critical to the success of the project was its official public launch with, and promotion by, the Governor. This endorsement gave the platform very high visibility and increased legitimacy among other government agencies and public users; it also produced a very successful media event, which led to substantial media coverage and subsequent public attention.

  • The aggregation of the tweets (designed to match the spatio-temporal structure of flood reporting in the system of the Jakarta Disaster Management Agency) was still inadequate for social media because it could result in overlooking reports from areas of especially low Twitter activity. Instead, the Agency used the @petajkt Twitter stream to direct their use of the map and to verify and cross-check information about flood-affected areas in real-time. While this use of social media was productive overall, the findings from the Joint Pilot Study have led to the proposal for the development of a more robust Risk Evaluation Matrix (REM) that would enable Peta Jakarta to serve a wider community of users & optimize the data collection process through an open API.
  • Developing a more robust integration of social media data also means leveraging other potential data sets to increase the intelligence produced by the system through hybridity; these other sources could include, but are not limited to, government, private sector, and NGO applications (‘apps’) for on-the-ground data collection, LIDAR or UAV-sourced elevation data, and fixed ground control points with various types of sensor data. The “citizen-as-sensor” paradigm for urban data collection will advance most effectively if other types of sensors and their attendant data sources are developed in concert with social media sourced information.
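As an illustration of the one-way hashing mentioned in the anonymity excerpt above, here is a minimal sketch in Python. The salting scheme is my own assumption for the example; the white paper does not specify the exact function used:

```python
# A minimal sketch of one-way username hashing: usernames are stored only
# as salted SHA-256 digests, so stored reports cannot be traced back to
# Twitter users. Salt handling here is an illustrative assumption.
import hashlib
import os

SALT = os.environ.get("REPORT_SALT", "change-me").encode()

def anonymize(username: str) -> str:
    """Return an irreversible identifier for a Twitter username."""
    return hashlib.sha256(SALT + username.lower().encode()).hexdigest()

print(anonymize("some_user"))  # same input -> same digest, never reversible
```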

A Force for Good: How Digital Jedis are Responding to the Nepal Earthquake (Updated)

Digital Humanitarians are responding in full force to the devastating earthquake that struck Nepal. Information sharing and coordination are taking place online via CrisisMappers and on multiple dedicated Skype chats. The Standby Task Force (SBTF), Humanitarian OpenStreetMap Team (HOT) and others from the Digital Humanitarian Network (DHN) have also deployed in response to the tragedy. This blog post provides a quick summary of some of these digital humanitarian efforts along with what’s coming in terms of new deployments.

Update: A list of Crisis Maps for Nepal is available below.


At the request of the UN Office for the Coordination of Humanitarian Affairs (OCHA), the SBTF is using QCRI’s MicroMappers platform to crowdsource the analysis of tweets and mainstream media (the latter via GDELT) to rapidly 1) assess disaster damage & needs; and 2) identify where humanitarian groups are deploying (3W’s). The MicroMappers CrisisMaps are already live and publicly available below (simply click on the maps to open the live versions). Both Crisis Maps are being updated hourly (at times every 15 minutes). Note that MicroMappers also uses both crowdsourcing and Artificial Intelligence (AIDR).

Update: More than 1,200 Digital Jedis have used MicroMappers to sift through a staggering 35,000 images and 7,000 tweets! This has so far resulted in 300+ relevant pictures of disaster damage displayed on the Image Crisis Map and over 100 relevant disaster tweets on the Tweet Crisis Map.

Live CrisisMap of pictures from both Twitter and Mainstream Media showing disaster damage:

MM Nepal Earthquake ImageMap

Live CrisisMap of Urgent Needs, Damage and Response Efforts posted on Twitter:

MM Nepal Earthquake TweetMap

Note: the outstanding Kathmandu Living Labs (KLL) team has also launched an Ushahidi Crisis Map in collaboration with the Nepal Red Cross. We’ve already invited KLL to take all of the MicroMappers data and add it to their crisis map. Supporting local efforts is absolutely key.


The Humanitarian UAV Network (UAViators) has also been activated to identify, mobilize and coordinate UAV assets & teams. Several professional UAV teams are already on their way to Kathmandu. The UAV pilots will be producing high-resolution nadir imagery, oblique imagery and 3D point clouds. UAViators will be pushing this imagery to both HOT and MicroMappers for rapid crowdsourced analysis (just as was done with the aerial imagery from Vanuatu after Cyclone Pam, more on that here). A leading UAV manufacturer is also donating several UAVs to UAViators for use in Nepal. These UAVs will be sent to KLL to support their efforts. In the meantime, DigitalGlobe, Planet Labs and SkyBox are each sharing their satellite imagery with CrisisMappers, HOT and others in the Digital Humanitarian Network.

There are several other efforts going on, so the above is certainly not a complete list but simply reflects those digital humanitarian efforts that I am involved in or most familiar with. If you know of other major efforts, then please feel free to post them in the comments section. Thank you. More on the state of the art in digital humanitarian action in my new book, Digital Humanitarians.

List of Nepal Crisis Maps

Please add to the list below by posting new links in this Google Spreadsheet. Also, someone should really create one map that pulls from each of the listed maps.

Code for Nepal Casualty Crisis Map: 

DigitalGlobe Crowdsourced Damage Assessment Map:

Disaster OpenRouteService Map for Nepal:

ESRI Damage Assessment Map:

Harvard WorldMap Tweets of Nepal: 

Humanitarian OpenStreetMap Nepal:

Kathmandu Living Labs Crowdsourced Crisis Map:

MicroMappers Disaster Image Map of Damage:

MicroMappers Disaster Damage Tweet Map of Needs:

NepalQuake Status Map:

UAViators Crisis Map of Damage from Aerial Pics/Vids: (takes a while to load)

Visions SDSU Tweet Crisis Map of Nepal:

Crowdsourcing Point Clouds for Disaster Response

Point Clouds, or 3D models derived from high-resolution aerial imagery, are in fact nothing new. Several software platforms already exist to reconstruct a series of 2D aerial images into fully-fledged 3D fly-through models. Check out these very neat examples from my colleagues at Pix4D and SenseFly:

What do a castle, Jesus and a mountain have to do with humanitarian action? As noted in my previous blog post, there’s only so much disaster damage one can glean from nadir (that is, vertical) imagery and oblique imagery. Let’s suppose a nadir image of a group of houses was taken by an orbiting satellite or flying UAV right after an earthquake, for example. How can you possibly assess disaster damage from that one picture alone? Even if you had nadir imagery of the same houses from before the earthquake, your ability to assess structural damage would be limited.


This explains why we also captured oblique imagery for the World Bank’s UAV response to Cyclone Pam in Vanuatu (more here on that humanitarian mission). But even with oblique photographs, you’re stuck with one fixed perspective. Who knows what the photographed houses look like from the other side; your UAV may have simply captured one side only. And even if you had pictures from all possible angles, you’d literally have hundreds of pictures to leaf through and make sense of.


What’s that famous quote by Henry Ford again? “If I had asked people what they wanted, they would have said faster horses.” We don’t need faster UAVs; we simply need to turn the imagery we already have into Point Clouds, which I’m indeed hoping to do with the aerial imagery from Vanuatu, by the way. The Point Cloud below was made from nothing but individual 2D aerial images.

It isn’t perfect, but we don’t need perfection in disaster response; we need good enough. So when we as humanitarian UAV teams go into the next post-disaster deployment and ask humanitarians what they need, they may say “faster horses” because they’re not (yet) familiar with what’s really possible with the imagery-processing solutions available today. That obviously doesn’t mean we should ignore their information needs. It simply means we should seek to expand their imaginations vis-a-vis the art of the possible with UAVs and aerial imagery. Here is a 3D model of a village in Vanuatu constructed using 2D aerial imagery:

Now, the title of my blog post does lead with the word crowdsourcing. Why? For several reasons. First, it takes some decent computing power (and time) to create these Point Clouds. But if the underlying 2D imagery is made available to hundreds of Digital Humanitarians, we could use this distributed computing power to rapidly crowdsource the creation of 3D models. Second, each model can then be pushed to MicroMappers for crowdsourced analysis. Why? Because having a dozen eyes scrutinizing one Point Cloud is better than two. Note that for quality-control purposes, each Point Cloud would be shown to 5 different Digital Humanitarian volunteers; we already do this with MicroMappers for tweets, pictures, videos, satellite images and of course aerial images as well. Each digital volunteer would then trace areas in the Point Cloud where they spot damage. If the traces from the different volunteers match, then bingo, there’s likely damage at those x, y and z coordinates. Here’s the idea:
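As a rough sketch of that matching step, the snippet below flags likely damage wherever enough of the 5 volunteers traced points close to one another. The 1-meter radius and 3-of-5 agreement threshold are illustrative assumptions:

```python
# A sketch of consensus checking for volunteer traces on one Point Cloud:
# if enough distinct volunteers marked damage within a small radius of
# each other, flag likely damage at that (x, y, z) location.
import numpy as np

def consensus(traces, radius=1.0, min_votes=3):
    """traces: one (N_i x 3) numpy array of traced points per volunteer."""
    all_points = [(vol, p) for vol, t in enumerate(traces) for p in t]
    flagged = []
    for _, p in all_points:
        # Count how many distinct volunteers marked a point within `radius` of p.
        voters = {vol for vol, q in all_points if np.linalg.norm(p - q) <= radius}
        if len(voters) >= min_votes:
            flagged.append(p)
    return np.array(flagged)

traces = [np.random.rand(4, 3) * 10 for _ in range(5)]  # stand-in traces
print(consensus(traces))
```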

We could easily use iPads to turn the process into a Virtual Reality experience for digital volunteers. In other words, you’d be able to move around and above the actual Point Cloud by simply changing the position of your iPad accordingly. This technology already exists and has for several years now. Tracing features in the 3D models that appear to be damaged would be as simple as using your finger to outline the damage on your iPad.

What about the inevitable challenge of Big Data? What if thousands of Point Clouds are generated during a disaster? Sure, we could try to scale our crowdsourcing efforts by recruiting more Digital Humanitarian volunteers, but wouldn’t that just be asking for a “faster horse”? Just like we’ve already done with MicroMappers for tweets and text messages, we would seek to combine crowdsourcing and Artificial Intelligence to automatically detect features of interest in 3D models. This sounds to me like an excellent applied research project for an institute engaged in advanced computing R&D.

I would love to see the results of this applied research integrated directly within MicroMappers. This would allow us to combine the results of social media analysis via MicroMappers (e.g., tweets, Instagram pictures, YouTube videos) directly with the results of satellite imagery analysis as well as 2D and 3D aerial imagery analysis.

Anyone interested in working on this?