Aerial imagery, particularly oblique imagery, will soon become a Big Data problem for humanitarian response. This was confirmed to me by a number of imagery experts in both the US (FEMA) and Europe (JRC). Aerial imagery taken at an angle is referred to as oblique imagery, in contrast to vertical imagery, which is captured by cameras pointing straight down (like most satellite imagery). The Humanitarian OpenStreetMap Team (HOT) is already well equipped to make sense of vertical aerial imagery. They do this by microtasking the tracing of said imagery, as depicted below. So how do we rapidly analyze oblique images, which often provide more detail vis-à-vis infrastructure damage than vertical pictures?
One approach is to microtask the tagging of oblique images. This was carried out very successfully after Hurricane Sandy (screenshot below).
This solution did not include any tracing, however, and was not designed to inform the development of machine learning classifiers that could automatically identify features of interest, such as damaged buildings. Making sense of Big (Aerial) Data will ultimately require the combined use of human computing (microtasking) and machine learning. If volunteers use microtasking to trace enough features of interest, such as damaged buildings pictured in oblique aerial imagery, machine learning algorithms may be able to learn to detect these features automatically. There is obviously value in doing automated feature detection with vertical imagery as well. So my team and I at QCRI have been collaborating with a local Filipino UAV start-up (SkyEye) to develop a new “Clicker” for our MicroMappers collection. We’ll be testing the “Aerial Clicker” below with our Filipino partners this summer. Incidentally, SkyEye is on the Advisory Board of the Humanitarian UAV Network (UAViators).
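To make the tracing-to-training idea more concrete, here is a minimal sketch in Python (not the actual MicroMappers code) of how volunteer traces, exported as GeoJSON polygons, might be converted into labeled image chips for a classifier; the file names and the “tag” property are hypothetical:

```python
# Minimal sketch: turn volunteer-traced polygons (GeoJSON) into labeled
# image chips for classifier training. File names and the "tag" property
# are hypothetical placeholders.
import json

import numpy as np
import rasterio
from rasterio.windows import from_bounds
from shapely.geometry import shape

def extract_chips(image_path, traces_path):
    """Yield (label, pixel_array) pairs, one per traced feature."""
    with open(traces_path) as f:
        traces = json.load(f)  # GeoJSON FeatureCollection of volunteer traces

    with rasterio.open(image_path) as src:
        for feature in traces["features"]:
            geom = shape(feature["geometry"])
            label = feature["properties"].get("tag", "unknown")  # e.g. "damaged_building"
            # Read only the pixels inside the trace's bounding box.
            window = from_bounds(*geom.bounds, transform=src.transform)
            chip = src.read(window=window)         # (bands, rows, cols)
            yield label, np.moveaxis(chip, 0, -1)  # (rows, cols, bands)

# Example usage (paths are placeholders):
# chips = list(extract_chips("ortho_tacloban.tif", "volunteer_traces.geojson"))
```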
SkyEye is interested in developing a machine learning classifier to automatically identify coconut trees, for example. Why? Because coconut trees are an important source of livelihood for many Filipinos. Being able to rapidly identify trees that are still standing versus uprooted would enable SkyEye to quickly assess the impact of a typhoon on local agriculture, which is important for food security and long-term recovery. So we hope to use the Aerial Clicker to microtask the tracing of coconut trees in order to significantly improve the accuracy of the machine learning classifier that SkyEye has already developed.
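Purely as an illustration, and not SkyEye’s actual classifier, here is how those labeled chips might feed a simple scikit-learn model, with tags like “standing_coconut” and “uprooted_coconut” standing in for whatever labels the Aerial Clicker would actually produce:

```python
# Minimal sketch (not SkyEye's classifier): train a simple scikit-learn
# model on the labeled chips produced by the previous sketch.
import numpy as np
from skimage.transform import resize
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def chips_to_features(chips, size=(32, 32)):
    """Resize each chip to a fixed size and flatten it into a feature vector."""
    X, y = [], []
    for label, pixels in chips:
        resized = resize(pixels, size, anti_aliasing=True)
        X.append(resized.reshape(-1))
        y.append(label)
    return np.array(X), np.array(y)

# chips = list(extract_chips("ortho_tacloban.tif", "volunteer_traces.geojson"))
# X, y = chips_to_features(chips)
# clf = RandomForestClassifier(n_estimators=200, random_state=0)
# print(cross_val_score(clf, X, y, cv=5))  # accuracy should rise as volunteers trace more trees
```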
Will this be successful? One way to find out is by experimenting. I realize that developing a “visual version” of AIDR is anything but trivial. AIDR was developed to automatically identify tweets (i.e., text) of interest during disasters by combining microtasking and machine learning; what if we also had a free and open-source platform to microtask and then automatically identify visual features of interest in both vertical and oblique imagery captured by UAVs? To be honest, I’m not sure how feasible this is vis-à-vis oblique imagery. As an imagery analyst at FEMA recently told me, this is still a research question for now. I’m hoping to take on this research at QCRI, but I do not want to duplicate any existing efforts in this space, so I would be grateful for feedback on this idea and any related research that iRevolution readers may recommend.
In the meantime, here’s another idea I’m toying with for the Aerial Clicker:
I often see this in the aftermath of major disasters: affected communities turning to “analog social media” to communicate when cell phone towers are down. The aerial imagery above was taken following Typhoon Yolanda in the Philippines. And this is just one of several dozen images with analog media messages that I came across. So what if our Aerial Clicker were to ask digital volunteers to transcribe or categorize these messages? Since every image is already geo-referenced, this would enable us to quickly create a crisis map of needs based on said content. Thoughts?
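As a rough sketch of what this could look like on the back end (field names are hypothetical, and I’m simply assuming each transcription inherits the lat/lon of the geo-referenced image it came from), the transcribed and categorized messages could be written out as a GeoJSON layer for a crisis map:

```python
# Minimal sketch: turn volunteer transcriptions of "analog social media"
# into a GeoJSON layer for a crisis map. Each record is assumed to carry
# the lat/lon of the geo-referenced aerial image it came from; field
# names are hypothetical.
import json

def to_crisis_map_layer(transcriptions, out_path="needs_map.geojson"):
    features = []
    for record in transcriptions:
        features.append({
            "type": "Feature",
            "geometry": {"type": "Point",
                         "coordinates": [record["lon"], record["lat"]]},
            "properties": {"message": record["text"],        # e.g. "HELP. FOOD. WATER."
                           "category": record["category"]},  # e.g. "food", "medical"
        })
    with open(out_path, "w") as f:
        json.dump({"type": "FeatureCollection", "features": features}, f)

# Example usage (coordinates and message are placeholders):
# to_crisis_map_layer([{"lat": 11.24, "lon": 125.00,
#                       "text": "HELP. FOOD. WATER.", "category": "food"}])
```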
See Also:
- Welcome to the Humanitarian UAV Network [link]
- How UAVs are Making a Difference in Disaster Response [link]
- Humanitarians Using UAVs for Post Disaster Recovery [link]
- Grassroots UAVs for Disaster Response [link]
- Using UAVs for Search & Rescue [link]
- Debrief: UAV/Drone Search & Rescue Challenge [link]
- Crowdsourcing Analysis of UAV Imagery for Search/Rescue [link]
- Check-List for Flying UAVs in Humanitarian Settings [link]
The fact is that this is a technology (microtasking of photo interpretation) that could be deployed easily by a developing country with some help from humanitarian tech communities. Setting it up in advance could be part of its disaster response preparedness plan, and it could be used for floods, earthquakes, crop estimates for famine, etc. You might even be able to use it for wildlife counts on the Serengeti plains, or what other examples might we come up with? And it would be a way for the connected digiterati in the capital to help those in the countryside.
Thanks for reading and for your comment, David. Yes, one of our first pilots for the Aerial Clicker will use aerial imagery captured via UAV in Namibia; the project is to identify wildlife in a game park. So we will hopefully be able to experiment with this in the coming weeks. Thanks again!