Category Archives: Drones/UAVs

Towards Persistent Humanitarian Robotics

Aerial robots (or UAVs) represent the first wave of robotics to impact the humanitarian space. Thus far, this first wave has largely been characterized by the application of relatively new technologies in one-off deployments. This needs to change. We must shift towards more persistent and autonomous solutions. This applies equally to the use of aerial robots for data collection as it does to the use of said robots for the transportation of payloads.

By “persistent” I mean UAV platforms that are far more durable and robust. It is all well and good to fly a UAV a few dozen times in favorable weather. That’s elementary. It is far less trivial to develop UAVs that can successfully operate for thousands of flight hours in unfavorable weather conditions. By “autonomous” I mean UAV platforms that are pre-programmed and equipped with collision avoidance technology, allowing them to operate with little need for human intervention. The real revolution is not robotics per se but rather robotics powered by artificial intelligence, which increasingly allows mobile robots to operate autonomously. In contrast, the manual operation of UAVs very much limits the value they can add to humanitarian efforts. What’s more, pilot error is the leading cause of accidents in both manned and unmanned aviation.

Humanitarian organizations exploring the use of aerial robotics should make sure to raise questions around persistent and autonomous solutions in their discussions with UAV partners. This second wave of robotics is at most a ripple at the moment, so it is important to manage expectations. But this next wave is also inevitable. So the sooner humanitarian organizations start addressing the importance of both persistent and autonomous solutions, the earlier we can scale the positive impact of aerial, terrestrial and maritime robotics across a wide range of humanitarian efforts.

Introducing WeRobotics

WeRobotics accelerates the use of robotics to solve humanitarian challenges. More specifically, we accelerate the ethical, safe and effective use of robotics to address aid, development and environmental challenges. Robots, such as drones or UAVs, are already transforming multiple industries through rapid & dramatic gains in efficiency and productivity. The impact on the social good sector will be no different. We want to create a world in which every social good organization has access to robotics technologies to solve global and local challenges.

Robots are technologies that radically enhance the ability of people to sense and affect physical change. The state of the art in robotics is quickly shifting from manually controlled systems to increasingly intelligent autonomous systems. Aid, development & environmental organizations have yet to take full advantage of these new solutions. The field of robotics is evolving so quickly that the social good sector is largely unaware of the possibilities, let alone how to translate this potential into meaningful impact. As a result, robotics solutions are often overlooked or, worse, seen as a threat rather than a unique opportunity to accelerate social good. We turn this perceived threat into powerful new opportunities for both the technology and social good sectors.

Aerial robotics (or UAVs) represent the first wave of robotics to impact the social good sector by disrupting traditional modes of data collection and payload delivery. In fact, aerial robotics is the current leading edge of humanitarian robotics applications. UAVs stand to offer cost-saving, time-saving and even life-saving solutions by enabling novel ways to collect data and transport payloads. Both timely data and the capacity to act on this data are integral to aid, development and environmental projects. Aerial robotics can thus fulfill two key roles: data collection and payload delivery. Indeed, we’ve already witnessed this use of aerial robotics for social good in dozens of countries in recent years, including Albania, Bosnia, China, Guyana, Haiti, Japan, Kenya, Liberia, Namibia, Nepal, Papua New Guinea, Philippines, South Africa, Tanzania, Thailand and Vanuatu.

The rapid commercialization of consumer UAVs explains why aerial robots are the first—but certainly not the last—wave of intelligent robots to impact the social good space. The second and third waves are already in plain sight: industry and academia are making tremendous strides in both terrestrial and maritime robotics. Like aerial robots, terrestrial and maritime robots will significantly extend people’s ability to collect data and transport payloads. WeRobotics thus seeks to fast-track the social good sector’s access to aerial, terrestrial and maritime robotics to expand the impact of aid, development and environmental projects.

The combined impact of increasingly autonomous systems on the social good sector will bring massive change, and this change will need to be managed carefully. WeRobotics offers dedicated platforms to channel this rapid change ethically, safely and effectively. Since cities are the main drivers of innovation, social change, population growth and risk, WeRobotics is co-creating a global network of city-level labs in countries experiencing cascading risks, rapid development and/or environmental challenges. These localized platforms—our Flying Labs—are co-created with local and international partners to seed the social good sector with direct access to aerial, terrestrial and maritime robotics. The first phase will prioritize the deployment of aerial robotics, hence the name Flying Labs. Once terrestrial and maritime robotics become integrated into the Labs, these will become WeRobotics Labs.

Our Lab partners will include industry, academia and social good organizations as evidenced by our recent co-creation of Kathmandu Flying Labs in Nepal. Other local partners in Africa, Asia and South America have recently approached us to explore the possibility of co-creating Flying Labs in their cities as well. Each Lab will focus on sector-based applications of robotics that are directly relevant to the local area’s needs, interests and opportunities. As such, one Lab might lead with the deployment of aerial robotics for data collection in environmental projects while another might prioritize maritime robotics for payload delivery in development projects. A third might focus on autonomous terrestrial robotics for sensing and payload delivery in aid projects. In short, our co-created Labs are launchpads where robotics solutions can be deployed ethically, safely and effectively within each social good sector.

WeRobotics will manage the core activities of the Labs through our dedicated sector-based programs—AidRobotics, DevRobotics and EcoRobotics. These programs will partner with aid, development and environmental organizations respectively and with technology partners to carry out joint activities in each Lab. As such, each program is responsible for catalyzing and managing its own sector’s strategic partnerships, hands-on trainings, operational projects, applied research and key events within each of the Labs. Future programs might include HealthRobotics, AgriRobotics and RightsRobotics.

Once a Lab is fully trained in one type of robotics technology, such as aerial robotics, local Lab partners carry out future aerial robotics projects themselves. Meanwhile, WeRobotics works on introducing other relevant robotics solutions to the Labs—such as terrestrial and maritime robotics—in close collaboration with other technology partners. WeRobotics also ensures that learning and innovations generated in each Lab are disseminated to all Labs in order to accelerate cross-pollination around use cases and new robotics solutions.

WeRobotics is the missing link between robotics companies, academia and social good organizations. We want to catalyze strategic, cross-sectoral partnerships by creating a common purpose, platform and opportunity for these diverse partners to collaborate on meaningful social good projects. WeRobotics is currently a joint exploration among four accomplished professionals. Together we bring decades of experience from the humanitarian sector, the robotics industry and the private sector. Feedback and/or questions are welcome via email.

Video: Crisis Mapping Nepal with Aerial Robotics

[Embedded video]

I had the honor of spearheading this disaster recovery UAV mission in Nepal a few weeks ago as part of Kathmandu Flying Labs. I’ve been working on this new initiative (in my own time) with Kathmandu Living Labs (KLL), Kathmandu University (KU), DJI and Pix4D. This Flying Lab is the first of several local UAV innovation labs that I am setting up (in my personal capacity and during my holiday time) with friends and colleagues in disaster-prone countries around the world. The short film documentary above was launched just minutes ago by DJI and describes how we teamed up with local partners in Kathmandu to make use of aerial robotics (UAVs) to map Nepal’s recovery efforts.

Here are some of the 3D results, courtesy of Pix4D (click to enlarge):

[3D model screenshots, courtesy of Pix4D]

Why work in 3D? Because disaster damage is a 3D phenomenon. This newfound ability to work in 3D has important implications for Digital Humanitarians. To be sure, the analysis of these 3D models could potentially be crowdsourced and eventually analyzed entirely within a Virtual Reality environment.

Since most of our local partners in Nepal don’t have easy access to computers or VR headsets, I found another way to unlock and liberate this digital data by printing our high-resolution maps on large, rollable banners.

We brought these banner maps back to the local community and invited them to hack the map. How? Directly, by physically adding their local knowledge to the map; knowledge about the location of debris, temporary shelters, drinking water and lots more. We brought tape and color-coded paper with us to code this knowledge so that the community could annotate the map themselves.

In other words, we crowdsourced a highly participatory crisis map of Panga. The result was a rich, contextual social layer on top of the base map, which further informed community discussions on the strategies and priorities guiding recovery efforts. For the first time ever, the community of Panga was working off one and the same dataset to inform its rebuilding. In short, our humanitarian mission combined aerial robotics, computer vision, waterproof banners, local knowledge, tape, paper and crowdsourcing to engage local communities in the reconstruction process.

I’m now spending my evenings & weekends working with friends and colleagues to plan a follow-up mission in early 2016. We’ll be returning to Kathmandu Flying Labs with new technology partners to train our local partners on how to use fixed-wing UAVs for large scale mapping efforts. In the meantime, we’re also exploring the possibility of co-creating Jakarta Flying Labs, Monrovia Flying Labs and Santiago Flying Labs in 2016.

I’m quitting my day job next week to devote myself full time to these efforts. Fact is, I’ve been using all of my free time (meaning evenings, weekends and many, many weeks of holiday time) to pursue my passion in aid robotics and to carry out volunteer-based UAV missions like the one in Nepal. I’ve also used holiday time (and my own savings) to travel across the globe to present this volunteer-work at high-profile events, such as the 2015 Web Summit here in Dublin where the DJI film documentary was just publicly launched.

My Nepali friends & I need your help to make sure that Kathmandu Flying Labs takes off and becomes a thriving and sustainable center of social entrepreneurship. To this end, we’re actively looking for both partners and sponsors to make all this happen, so please do get in touch if you share our vision. And if you’d like to learn more about how UAVs and other emerging technologies are changing the face of humanitarian action, then check out my new book Digital Humanitarians.

In the meantime, big, big thanks to our Nepali partners and technology partners for making our good work in Kathmandu possible!

QED – Goodbye Doha, Hello Adventure!

Quod Erat Demonstrandum (QED) is Latin for “that which was to be demonstrated.” This abbreviation is traditionally placed at the end of mathematical proofs to signal their completion. I joined the Qatar Computing Research Institute (QCRI) well over 3 years ago with a very specific mission and mandate: to develop and deploy next generation humanitarian technologies. So I built the Institute’s Social Innovation Program from the ground up and recruited the majority of the full-time experts (scientists, engineers, research assistants, interns & project manager) who have become integral to the Program’s success. During these 3+ years, my team and I partnered directly with humanitarian and development organizations to empirically prove that methods from advanced computing can be used to make sense of Big (Crisis) Data. The time has thus come to add “QED” to the end of that proof and move on to new adventures. But first a reflection.

Over the past 3.5 years, my team and I at QCRI developed free and open source solutions powered by crowdsourcing and artificial intelligence to make sense of Tweets, text messages, pictures, videos, satellite and aerial imagery for a wide range of humanitarian and development projects. We co-developed and co-deployed these platforms (AIDR and MicroMappers) with the United Nations and the World Bank in response to major disasters such as Typhoons Haiyan and Ruby, Cyclone Pam and the Nepal & Chile Earthquakes. In addition, we carried out peer-reviewed, scientific research on these deployments to better understand how to meet the information needs of our humanitarian partners. We also tackled the information reliability question, experimenting with crowdsourcing (Verily) and machine learning (TweetCred) to assess the credibility of information generated during disasters. All of these initiatives were firsts in the humanitarian technology space.

We later developed AIDR-SMS to auto-classify text messages, a platform that UNICEF successfully tested in Zambia and which the World Food Program (WFP) and the International Federation of the Red Cross (IFRC) now plan to pilot. AIDR was also used to monitor a recent election, and our partners are now looking to use AIDR again for upcoming election monitoring efforts. In terms of MicroMappers, we extended the platform (considerably) in order to crowdsource the analysis of oblique aerial imagery captured via small UAVs, which was another first in the humanitarian space. We also teamed up with excellent research partners to crowdsource the analysis of aerial video footage and to develop automated feature-detection algorithms for oblique imagery analysis based on crowdsourced results derived from MicroMappers. We developed these Big Data solutions to support damage assessment efforts, food security projects and even this wildlife protection initiative.

In addition to the above accomplishments, we launched the Internet Response League (IRL) to explore the possibility of leveraging massive multiplayer online games to process Big Crisis Data. Along similar lines, we developed the first ever spam filter to make sense of Big Crisis Data. Furthermore, we got directly engaged in the field of robotics by launching the Humanitarian UAV Network (UAViators), yet another first in the humanitarian space. In the process, we created the largest repository of aerial imagery and videos of disaster damage, which is ripe for cutting-edge computer vision research. We also spearheaded the World Bank’s UAV response to Category 5 Cyclone Pam in Vanuatu and also directed a unique disaster recovery UAV mission in Nepal after the devastating earthquakes. (I took time off from QCRI to carry out both of these missions and also took holiday time to support UN relief efforts in the Philippines following Typhoon Haiyan in 2013). Lastly, on the robotics front, we championed the development of international guidelines to inform the safe, ethical & responsible use of this new technology in both humanitarian and development settings. To be sure, innovation is not just about the technology but also about crafting appropriate processes to leverage this technology. Hence also the rationale behind the Humanitarian UAV Experts Meetings that we’ve held at the United Nations Secretariat, the Rockefeller Foundation and MIT.

All of the above pioneering and experimental projects have resulted in extensive media coverage, which has placed QCRI squarely on the radar of international humanitarian and development groups. This media coverage has included the New York Times, Washington Post, Wall Street Journal, CNN, BBC News, UK Guardian, The Economist, Forbes and Time magazines, the New Yorker, NPR, Wired, Mashable, TechCrunch, Fast Company, Nature, New Scientist, Scientific American and more. In addition, our good work and applied research has been featured in numerous international conference presentations and keynotes. In sum, I know of no other institute for advanced computing research that has contributed this much to the international humanitarian space in terms of thought-leadership, strategic partnerships, applied research and operational expertise through real-world co-deployments during and after major disasters.

There is, of course, a lot more to be done in the humanitarian technology space. But what we have accomplished over the past 3 years clearly demonstrates that techniques from advanced computing can indeed provide part of the solution to the pressing Big Data challenge that humanitarian & development organizations face. At the same time, as I wrote in the concluding chapter of my new book, Digital Humanitarians, solving the Big Data challenge does not alas imply that international aid organizations will actually make use of the resulting filtered data or any other data for that matter—even if they ask for this data in the first place. So until humanitarian organizations truly shift towards both strategic and tactical evidence-based analysis & data-driven decision-making, this disconnect will surely continue unabated for many more years to come.

Reflecting on the past 3.5 years at QCRI, it is crystal clear to me that the most important lesson I (re)learned is that you can do anything if you have an outstanding, super-smart and highly dedicated team that continually goes way above and beyond the call of duty. It is one thing for me to have had the vision for AIDR, MicroMappers, IRL, UAViators, etc., but vision alone does not amount to much. Implementing said vision is what delivers results and learning. And I simply couldn’t have asked for a more talented & stellar team to translate these visions into reality over the past 3+ years. You each know who you are, partners included; it has truly been a privilege and an honor working with you. I can’t wait to see what you do next at/with QCRI. Thank you for trusting me; thank you for sharing my vision; thank you for your sense of humor; and thank you for your dedication and loyalty to science and social innovation.

So what’s next for me? I’ll be lining up independent consulting work with several organizations (likely including QCRI). In short, I’ll be open for business. I’m also planning to work on a new project that I’m very excited about, so stay tuned for updates; I’ll be sure to blog about this new adventure when the time is right. For now, I’m busy wrapping up my work as Director of Social Innovation at QCRI and working with the best team there is. QED.

Using Computer Vision to Analyze Aerial Big Data from UAVs During Disasters

Recent scientific research has shown that aerial imagery captured during a single 20-minute UAV flight can take more than half-a-day to analyze. We flew several dozen flights during the World Bank’s humanitarian UAV mission in response to Cyclone Pam earlier this year. The imagery we captured would’ve taken a single expert analyst a minimum of 20 full-time workdays to make sense of. In other words, aerial imagery is already a Big Data problem. So my team and I are using human computing (crowdsourcing), machine computing (artificial intelligence) and computer vision to make sense of this new Big Data source.

For example, we recently teamed up with the University of Southampton and EPFL to analyze aerial imagery of the devastation caused by Cyclone Pam in Vanuatu. The purpose of this research is to generate timely answers. Aid groups want more than high-resolution aerial images of disaster-affected areas; they want answers: the number and location of damaged buildings, the number and location of displaced peoples, and which roads are still usable for the delivery of aid. Simply handing over the imagery is not good enough. As demonstrated in my new book, Digital Humanitarians, both aid and development organizations are already overwhelmed by the vast volume and velocity of Big Data generated during and after disasters. Adding yet another source, Big Aerial Data, may be pointless since these organizations may simply not have the time or capacity to make sense of this new data, let alone integrate the results with their other datasets.

We therefore analyzed the crowdsourced results from the deployment of our MicroMappers platform following Cyclone Pam to determine whether those results could be used to train algorithms to automatically detect disaster damage in future disasters in Vanuatu. During this MicroMappers deployment, digital volunteers analyzed over 3,000 high-resolution oblique aerial images, tracing houses that were fully destroyed, partially damaged and largely intact. My colleague Ferda Ofli and I teamed up with Nicolas Rey (a graduate student from EPFL who interned with us over the summer) to explore whether these traces could be used to train our algorithms. The results below were written with Ferda and Nicolas. Our research is not just an academic exercise. Vanuatu is the most disaster-prone country in the world. What’s more, this year’s El Niño is expected to be one of the strongest in half-a-century.

[Histogram: number of buildings per image]

According to the crowdsourced results, 1,145 of the high-resolution images did not contain any buildings. Above is a simple histogram depicting the number of buildings per image. The aerial images of Vanuatu are very heterogeneous: they vary not only in the diversity of features they exhibit but also in the angle of view and the altitude at which they were taken. While the vast majority of the images are oblique, some are almost nadir images, and some were taken very close to the ground or even before takeoff.
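As a rough illustration of how such a histogram is tallied (using made-up counts, not our actual Vanuatu data), the crowdsourced per-image building counts reduce to a frequency table in a few lines of Python:

```python
from collections import Counter

# Hypothetical per-image building counts derived from crowdsourced traces
buildings_per_image = [0, 3, 0, 7, 2, 0, 1, 5, 2, 0]

histogram = Counter(buildings_per_image)
empty_images = histogram[0]  # images containing no buildings at all

print(empty_images)  # → 4
print(sorted(histogram.items()))
```

The key on the `Counter` is the number of buildings traced in an image; the value is how many images share that count, which is exactly what the histogram above plots.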

The heterogeneity of our dataset of images makes the automated analysis of this imagery a lot more difficult. Furthermore, buildings that are under construction, of which there are many in our dataset, represent a major difficulty because they look very similar to damaged buildings. Our first task thus focused on training our algorithms to determine whether or not any given aerial image shows some kind of building. This is an important task given that roughly 30% of the images in our dataset do not contain buildings. As such, if we can develop an accurate algorithm to automatically filter out these irrelevant images (like the “noise” below), this will allow us to focus the crowdsourced analysis on relevant images only.

[Example “noise” image without buildings]

While our results are purely preliminary, we are pleased with our findings thus far. We’ve been able to train our algorithms to determine whether or not an aerial image includes a building with just over 90% accuracy at the tile level. More specifically, our algorithms were able to recognize and filter out 60% of the images that do not contain any buildings, while only 10% of the images that do contain buildings were mistakenly discarded. There are still quite a number of major challenges, however, so we want to be sure not to over-promise anything at this stage. In terms of next steps, we would like to explore whether our computer vision algorithms can distinguish between destroyed and intact buildings.
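To make those filtering numbers concrete, here is a minimal sketch of how such metrics are computed from a confusion matrix. The counts below are hypothetical, chosen only to reproduce the percentages above; they are not our actual results:

```python
def filter_metrics(tn, fp, fn, tp):
    """Metrics for a binary 'does this image contain a building?' filter.

    tn: no-building images correctly filtered out
    fp: no-building images kept by mistake
    fn: building images mistakenly discarded
    tp: building images correctly kept
    """
    return {
        # share of irrelevant (no-building) images successfully removed
        "filtered_out": tn / (tn + fp),
        # share of relevant (building) images lost by the filter
        "mistakenly_discarded": fn / (fn + tp),
    }

# Hypothetical counts for a 3,000-image dataset with 1,145 building-free images
m = filter_metrics(tn=687, fp=458, fn=186, tp=1669)
print(round(m["filtered_out"], 2))          # → 0.6
print(round(m["mistakenly_discarded"], 2))  # → 0.1
```

The trade-off between these two numbers is the crux of the filtering task: discarding more of the empty images usually means losing more of the images that do contain buildings.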

The UAVs we flew in Vanuatu required us to land them in order to access the collected imagery. Increasingly, newer UAVs offer the option of broadcasting the aerial images and videos back to base in real time. DJI’s new Phantom 3 UAV, for example, allows you to broadcast your live aerial video feed directly to YouTube (assuming you have connectivity). There’s absolutely no doubt that this is where the UAV industry is headed: towards real-time data collection and analysis. For humanitarian applications such as search and rescue, real-time data analysis is clearly preferable.

This explains why my team and I recently teamed up with Elliot Salisbury & Sarvapali Ramchurn from the University of Southampton to crowdsource the analysis of live aerial video footage of disaster zones and to combine this crowdsourcing with (hopefully) near real-time machine learning and automated feature detection. In other words, as digital volunteers are busy tagging disaster damage in video footage, we want our algorithms to learn from these volunteers in real-time. That is, we’d like the algorithms to learn what disaster damage looks like so they can automatically identify any remaining disaster damage in a given aerial video.
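As a toy illustration of this kind of online learning, the sketch below uses a bare-bones perceptron that updates its weights each time a volunteer tags a frame. This is a simplification for illustration only, not the actual system our Southampton colleagues are building, and the feature vectors are invented:

```python
def predict(w, x):
    """Classify a feature vector x as damage (+1) or no damage (-1)."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1

def online_update(w, x, label, lr=0.1):
    """Perceptron-style update from a single volunteer-provided label."""
    if predict(w, x) != label:
        w = [wi + lr * label * xi for wi, xi in zip(w, x)]
    return w

# Hypothetical stream of (features, volunteer label) pairs; a feature vector
# might be e.g. [bias term, rubble-texture score] extracted from a video frame
stream = [([1.0, 2.0], 1), ([1.0, -1.5], -1), ([1.0, 3.0], 1), ([1.0, -2.0], -1)]

w = [0.0, 0.0]
for x, label in stream * 3:  # replay the stream a few times
    w = online_update(w, x, label)

print([predict(w, x) for x, _ in stream])  # → [1, -1, 1, -1]
```

The point of the sketch is the loop structure: each volunteer click arrives as a labeled example, the model updates immediately, and its predictions on the remaining footage improve while the UAV is still in the air.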

So we recently carried out a MicroMappers test-deployment using aerial videos from the humanitarian UAV mission to Vanuatu. Close to 100 digital volunteers participated in this deployment. Their task? To click on any parts of the videos that show disaster damage. And whenever 80% or more of these volunteers clicked on the same areas, we would automatically highlight these areas to provide near real-time feedback to the UAV pilot and humanitarian teams.
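The 80% consensus rule can be sketched as follows. This is a simplified illustration: real click data is tied to individual video frames, and the grid cell size here is an arbitrary assumption:

```python
from collections import defaultdict

def consensus_cells(clicks, n_volunteers, cell_size=50, threshold=0.8):
    """Group volunteer clicks into grid cells and return the cells that at
    least `threshold` of all volunteers clicked on.

    clicks: iterable of (volunteer_id, x, y) pixel coordinates.
    """
    voters = defaultdict(set)
    for vol_id, x, y in clicks:
        voters[(x // cell_size, y // cell_size)].add(vol_id)
    return {cell for cell, vols in voters.items()
            if len(vols) / n_volunteers >= threshold}

# Five volunteers; four click near the same damaged rooftop, one clicks elsewhere
clicks = [(0, 10, 10), (1, 20, 30), (2, 45, 5), (3, 12, 48), (4, 400, 260)]
print(consensus_cells(clicks, n_volunteers=5))  # → {(0, 0)}
```

Using a set per cell means repeated clicks by the same volunteer count only once, so a single enthusiastic clicker cannot push a cell over the consensus threshold on their own.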

At one point during the simulations, we had some 30 digital volunteers clicking on aerial videos at the same time, resulting in an average of 12 clicks per second for more than 5 minutes. In fact, we collectively clicked on the videos a total of 49,706 times! This provided more than enough real-time data for MicroMappers to act as a human-intelligence sensor for disaster damage assessments. In terms of accuracy, the collective clicks were about 87% accurate. Here’s what the simulations looked like to the UAV pilots as we were all clicking away.

Thanks to all this clicking, we can export only the most important and relevant parts of the video footage while the UAV is still flying. These snippets, such as this one and this one, can then be pushed to MicroMappers for additional verification. These animations are small and quick, and reduce a long aerial video down to just the most important footage. We’re now analyzing the areas that were tagged in order to determine whether we can use this data to train our algorithms accordingly. Again, this is far more than just an academic curiosity. If we can develop robust algorithms during the next few months, we’ll be ready to use them effectively during the next Typhoon season in the Pacific.

In closing, big thanks to my team at QCRI for translating my vision of Micro-Mappers into reality and for trusting me well over a year ago when I said we needed to extend our work to aerial imagery. All of the above research would simply not have been possible without MicroMappers existing. Big thanks as well to our excellent partners at EPFL and Southampton for sharing our vision and for their hard work on our joint projects. Last but certainly not least, sincerest thanks to digital volunteers from SBTF and beyond for participating in these digital humanitarian deployments.

Aerial Robotics in the Land of Buddha

Buddhist Temples adorn Nepal’s blessed land. Their stupas, like Everest, stretch to the heavens, yearning to democratize the sky. We felt the same yearning after landing in Kathmandu with our UAVs. While some prefer the word “drone” over “UAVs”, the reason our Nepali partners use the latter dates back some 3,000 years to the spiritual epic Mahabharata (Great Story of Bharatas). The ancient story features Drona, a master of advanced military arts who slayed hundreds of thousands with his bow & arrows. This strong military connotation explains why our Nepali partners use “UAV” instead, which is the term we also used for our Humanitarian UAV Mission in the land of Buddha. Our purpose: to democratize the sky.

Unmanned Aerial Vehicles (UAVs) are aerial robots. They are the first wave of robotics to impact the humanitarian space. The mission of the Humanitarian UAV Network (UAViators) is to enable the safe, responsible and effective use of UAVs in a wide range of humanitarian and development settings. We thus spearheaded a unique and weeklong UAV Mission in Nepal in close collaboration with Kathmandu University (KU), Kathmandu Living Labs (KLL), DJI and Pix4D. This mission represents the first major milestone for Kathmandu Flying Labs (please see end of this post for background on KFL).

Our joint UAV mission combined both hands-on training and operational deployments. The full program is available here. The first day comprised a series of presentations on Humanitarian UAV Applications, Missions, Best Practices, Guidelines, Technologies, Software and Regulations. These talks were given by myself, KU, DJI, KLL and the Civil Aviation Authority (CAA) of Nepal. The second day focused on direct hands-on training. DJI took the lead by training 30+ participants on how to use the Phantom 3 UAVs safely and responsibly. Pix4D, also on site, followed up by introducing their imagery-analysis software.

The second half of the day was dedicated to operations. We had already received written permission from the CAA to carry out all UAV flights thanks to KU’s outstanding leadership. KU also selected the deployment sites and enabled us to team up with the very pro-active Community Disaster Management Committee (CDMC-9) of Kirtipur to survey the town of Panga, which had been severely affected by the earthquake just months earlier. The CDMC was particularly keen to gain access to very high-resolution aerial imagery of the area to build back faster and better, so we spent half-a-day flying half-a-dozen Phantom 3s over parts of Panga as requested by our local partners.

The best part of this operation came at the end of the day, when we had finished the mission and were packing up: our Nepali partners politely noted that we had not, in fact, finished the job; we still had a lot more area to cover. They wanted us back in Panga the following day to complete our mapping mission. We thus changed our plans and returned the next day, during which—thanks to DJI & Pix4D—we flew several dozen additional UAV flights from four different locations across Panga (without taking a single break; no lunch was had). Our local partners were absolutely invaluable throughout, since they were the ones informing the flight plans. They also made it possible for us to launch and land all our flights from the highest rooftops across town.

Meanwhile, back at KU, our Pix4D partners provided hands-on training on how to use their software to analyze the aerial imagery we had collected the day before. KLL also provided training on how to use the Humanitarian OpenStreetMap Tasking Manager to trace this aerial imagery. All in all, we flew well over 60 UAV flights over the course of the mission, including all our training flights on campus as well as our aerial survey of a teaching hospital. Not a single incident or accident occurred; everyone followed safety guidelines and the technology worked flawlessly.


With more than 800 aerial photographs in hand, the Pix4D team worked through the night to produce a very high-resolution orthorectified mosaic of Panga. Here are some of the results.


Compare these results with the resolution and colors of the satellite imagery for the same area (maximum zoom).


We can now use MicroMappers to crowdsource the analysis and digital annotation of oblique aerial pictures and videos collected throughout the mission. This is an important step in the development of automated feature-detection algorithms using techniques from computer vision and machine learning. The reason we want automated solutions is that aerial imagery already presents a Big Data challenge for humanitarian and development organizations. Indeed, a single 20-minute UAV flight can generate some 800 images. A trained analyst needs at least one minute to analyze a single image, which means that more than 13 hours of human time are needed to analyze the imagery captured during just one 20-minute UAV flight. More on this Big Data challenge here.
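This back-of-the-envelope arithmetic is easy to sanity-check. A minimal sketch using the figures quoted above:

```python
# Analyst time needed to review the imagery from a single UAV flight,
# using the figures quoted in the text above.
images_per_flight = 800   # images generated by one 20-minute flight
minutes_per_image = 1     # minimum time a trained analyst needs per image

total_hours = images_per_flight * minutes_per_image / 60
print(f"{total_hours:.1f} hours of analyst time per flight")  # → 13.3 hours
```

Multiply that by the dozens of flights flown during a single mission and the case for automated feature detection makes itself.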

Incidentally, since Pix4D also used their software to produce a number of stunning 3D models, I’m keen to explore ways to crowdsource 3D models via MicroMappers and to explore possible Virtual Reality solutions to the Big Data challenge. In any event, we generated all the aerial data requested by our local partners by the end of the day.

While this technically meant that we had successfully completed our mission, it didn’t feel finished to me. I really wanted to “liberate” the data completely and place it directly into the hands of the CDMC and the local community in Panga. What’s the point of “open data” if most of Panga’s residents are not able to view or interact with the resulting maps? So I canceled my return flight and stayed an extra day to print out our aerial maps on large, rollable, waterproof banners (which are more durable than paper-based maps).


We thus used these banner-maps and participatory mapping methods to engage the local community directly. We invited community members to annotate the very high-resolution aerial maps themselves using tape and color-coded paper we had brought along. In other words, we used the aerial imagery as a base map to catalyze a community-wide discussion; to crowdsource and visualize the community’s local knowledge. Participatory mapping and public participatory GIS (PPGIS) can play an impactful role in humanitarian and development projects, hence this initiative with our local partners (more here on community mapping).

In short, our humanitarian mission combined aerial robotics, computer vision, waterproof banners, tape, paper and crowdsourcing to inform the rebuilding process at the community level.


The engagement from the community was absolutely phenomenal and, for me, definitely the highlight of the mission. Our CDMC partners were equally thrilled with the community engagement that the maps elicited. There were smiles all around. When we left Panga some four hours later, dozens of community members were still discussing the map, which our partners had hung up near a popular local teashop.

There’s so much more to share from this UAV mission; so many angles, side-stories and insights. The above is really just a brief and incomplete teaser. So stay tuned, there’s a lot more coming up from DJI and Pix4D. Also, the outstanding film crew that DJI invited along is already reviewing the vast volume of footage captured during the week. We’re excited to see the professionally edited video in coming weeks, not to mention the professional photographs that both DJI and Pix4D took throughout the mission. We’re especially keen to see what our trainees at KU and KLL do next with the technology and software that are now in their hands. Indeed, the entire point of our mission was to help build local capacity for UAV missions in Nepal by transferring knowledge, skills and technology. It is now their turn to democratize the skies of Nepal.


Acknowledgements: Some serious acknowledgements are in order. First, huge thanks to Lecturer Uma Shankar Panday from KU for co-sponsoring this mission, for hosting us and for making our joint efforts a resounding success. The warm welcome and kind hospitality we received from him, KU’s faculty and executive leadership was truly very touching. Second, special thanks to the CAA of Nepal for participating in our training and for giving us permission to fly. Third, big, big thanks to the entire DJI and Pix4D Teams for joining this UAViators mission and for all their very, very hard work throughout the week. Many thanks also to DJI for kindly donating 10 Smartisan phones and 10 Phantom 3’s to KU and KLL; and kind thanks to Pix4D for generously donating licenses of their software to both KU and KLL. Fourth, many thanks to KLL for contributing to the training and for sharing our vision behind Kathmandu Flying Labs. Fifth, I’d like to express my sincere gratitude to Smartisan for co-sponsoring this mission. Sixth, deepest thanks to CDMC and Dhulikhel Hospital for partnering with us on the ops side of the mission. Their commitment and life-saving work are truly inspiring. Seventh, special thanks to the film and photography crew for being so engaged throughout the mission; they were absolutely part of the team. In closing, I want to specifically thank my colleagues Andrew Schroeder from UAViators and Paul & William from DJI for all the heavy lifting they did to make this entire mission possible. On a final and personal note, I’ve made new friends for life as a result of this UAV mission, and for that I am infinitely grateful.


Kathmandu Flying Labs: My colleague Dr. Nama Budhathoki and I began discussing the potential role that small UAVs could play in his country in early 2014, well over a year and a half before Nepal’s tragic earthquakes. Nama is the Director of Kathmandu Living Labs (KLL), a crack team of Digital Humanitarians whose hard work has been featured in The New York Times and the BBC. Nama and team create open-data maps for disaster risk reduction and response. They use the Humanitarian OpenStreetMap Tasking Manager to trace buildings and roads visible from orbiting satellites in order to produce these invaluable maps. Their primary source of satellite imagery for this is Bing. Alas, said imagery is both low-resolution and out of date. And they’re not sure they’ll have free access to said imagery indefinitely either.


So Nama and I decided to launch a UAV Innovation Lab in Nepal, which I’ve been referring to as Kathmandu Flying Labs. A year and a half later, the tragic earthquake struck. So I reached out to DJI in my capacity as founder of the Humanitarian UAV Network (UAViators). The mission of UAViators is to enable the safe, responsible and effective use of UAVs in a wide range of humanitarian and development settings. DJI, who are on the Advisory Board of UAViators, had deployed a UAV team in response to the magnitude-6.1 earthquake in China the year before. Alas, they weren’t able to deploy to Nepal. But they very kindly donated two Phantom 2s to KLL.

A few months later, my colleague Andrew Schroeder from UAViators and Direct Relief reconnected with DJI to explore the possibility of a post-disaster UAV Mission focused on recovery and rebuilding. Both DJI and Pix4D were game to make this mission happen, so I reached out to KLL and KU to discuss logistics. Professor Uma at KU worked tirelessly to set everything up. The rest, as they say, is history. There is of course a lot more to be done, which is why Nama, Uma and I are already planning the next important milestones for Kathmandu Flying Labs. Do please get in touch if you’d like to be involved and contribute to this truly unique initiative. We’re also exploring payload delivery options via UAVs and gearing up for new humanitarian UAV missions in other parts of the planet.

World Bank Using UAVs for Disaster Risk Reduction in Tanzania

An innovative World Bank team in Tanzania is exploring the use of UAVs for disaster risk reduction efforts. Spearheaded by colleague Edward Anderson, the team recently partnered with friends at Drone Adventures to capture very high-resolution images of flood-prone areas of Dar es Salaam, the country’s largest city. This imagery is now being used to generate Digital Terrain Models to develop more reliable flood-inundation models at an unparalleled level of resolution. This project is a joint effort with the Commission for Science and Technology (COSTECH) and kindly supported by the Swedish International Development Agency and the Global Facility for Disaster Risk Reduction (GFDRR), working in partnership with the Tanzania Red Cross.
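To see why a Digital Terrain Model is the key input here, consider a toy “bathtub” inundation sketch (this is not the Bank team’s actual model, and all elevation values are invented for illustration): any grid cell whose ground elevation sits below an assumed water surface is flagged as flooded.

```python
import numpy as np

# Toy "bathtub" flood model over a DEM grid: any cell whose ground
# elevation is below the assumed water surface counts as flooded.
# All numbers are invented for illustration; real flood-inundation
# models also account for flow connectivity and hydraulics.
dem = np.array([              # ground elevations in meters
    [2.1, 1.8, 1.5, 1.2],
    [2.4, 1.9, 1.1, 0.9],
    [2.8, 2.2, 1.4, 1.0],
])
flood_level = 1.5             # hypothetical water surface elevation (m)

inundated = dem < flood_level
print(f"{inundated.mean():.0%} of cells inundated")  # → 42% of cells inundated
```

The centimeter-scale imagery matters precisely because, at coarser resolutions, the small drains, walls and embankments that control where water actually goes disappear from the terrain model.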

Drone Adventures flew dozens of flights over the course of 10 days, covering close to 90 km² at a resolution of 4 cm-8 cm. They used eBees, which weigh about 700 grams and are 95% foam-based with a small rear-facing propeller, which makes the UAV extra safe. Here are some pictures from the recent mission in Dar es Salaam, courtesy of Mark Iliffe from the Bank.


The World Bank Team also used a DJI Phantom 2 UAV pictured below. Like Drone Adventures, they also took the time to engage local communities. This approach to community engagement in UAV projects is an important component of the UAViators Code of Conduct and Guidelines. The team is using the DJI Phantom to inform urban planning and transportation conversations, and to quickly assess flood impact, as this video explains.


Most of the resulting imagery has already been added to OpenAerialMap here. The imagery is also being used here as part of the Missing Maps project. This has already improved the level of detail of Dar es Salaam’s maps. For example, compare the level of detail in this map before the aerial imagery was made available:


With these more detailed maps enabled by the availability of aerial imagery:


And here’s a comparison of a satellite image (taken from Google Earth) of a neighborhood in Dar es Salaam with an aerial image (from an eBee) at around the same spatial resolution.


As Mark from the World Bank noted during a recent conversation, making this aerial imagery open and making the data derived from this imagery open “gives agencies and municipalities data that they’ve not had access to previously. But there are still outstanding questions such as authoritativeness that need to be resolved. There is a lot of institutional work with statistics and mapping agencies that is ongoing to validate the data and ensure they’re happy with it, prior to it augmenting traditional mapping practices. That’s where we’re at currently.”

Acknowledgements: Many thanks to Edward & Mark for sharing their efforts.

The First Ever 3D Model of a Refugee Camp Made with UAV Imagery

A colleague of mine just returned from overseas where he flew a UAV as part of an independent exploratory project. He did so with permission and also engaged directly with local communities in the process—as per the guidelines listed in the Humanitarian UAV Code of Conduct. He subsequently sent me this aerial video footage of a camp, which he recorded using a DJI Phantom 2 Vision+:

The analysis of aerial imagery for humanitarian & development purposes is an active area of research at UAViators. He thus kindly gave me permission to share this footage with colleague Matt Shroyer so that we could explore the possibility of creating a mosaic and 3D model from the video.


Incidentally, the image below is the highest resolution and most recent satellite image available of the camp on Google Maps. As you can tell, the satellite image is very much out of date.


And here is the mosaic, which Matt kindly produced by taking hundreds of screenshots of the aerial video footage (click to enlarge):

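Taking hundreds of screenshots by hand works, but the spacing between grabs is itself a small calculation: for mosaicking you generally want roughly 70-80% forward overlap between consecutive frames. A sketch with entirely hypothetical flight numbers:

```python
# How often to grab a frame from aerial video so that consecutive frames
# overlap enough for mosaicking. All flight numbers are hypothetical.
ground_speed_m_s = 5.0      # UAV forward speed over the ground
footprint_length_m = 60.0   # along-track ground footprint of one frame
target_overlap = 0.8        # 80% forward overlap, typical for photogrammetry
video_fps = 30              # frame rate of the recorded video

advance_per_frame_m = footprint_length_m * (1 - target_overlap)   # 12 m
seconds_between_grabs = advance_per_frame_m / ground_speed_m_s    # 2.4 s
grab_every_n = round(seconds_between_grabs * video_fps)           # 72 frames

print(f"Keep one video frame out of every {grab_every_n}")
```

Grab frames too sparsely and the stitching software cannot match features between neighbors; too densely and you simply multiply the processing time.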

A close up:


We then explored the possibility of creating a 3D model of the camp using the screenshots and SketchFab. The results are displayed below (click to enlarge). The numbers are annotations we added to provide relevant information on the camp. Perhaps in the future we’ll be able to add photographs & videos (captured from hand-held cameras) and other types of data to the 3D model.


It’s worth noting that this 3D model would be far higher resolution if the UAV had been flown with the express purpose of creating a 3D model. Either way, you’ll note that no individuals appear in either the mosaic or the 3D model, which is important for data privacy and security.

Here are two short video fly-throughs of the 3D model:

You can also fly through the model yourself here.

The purpose of this visual exploration is to solicit feedback from humanitarian organizations vis-a-vis the potential added value that this imagery could provide for camp management and related humanitarian efforts. So please feel free to get in touch via email and/or to post comments below with your feedback. In the meantime, a big thanks to my colleague for sharing the aerial videos and equally big thanks to Matt for all his time on the imagery processing. UAViators will be carrying out additional projects like this one over the coming months. So if you’d like to get involved, please do get in touch.

Using UAVs to Map Diamond Mines and Reduce Conflict in Africa

In June 2014, a joint USAID and USGS team used a small UAV to map artisanal diamond mining sites in Western Guinea. The purpose of this UAV mission was to support the “Kimberley Process (KP), an international initiative aimed at preventing the flow of conflict diamonds.” Adhering to the Process’s regulations is proving challenging for “countries whose diamonds are produced through artisanal and small-scale mining (ASM).” These mines are “often remote and spread over vast territories, and the diamonds found are frequently sold into informal networks,” which makes it “very difficult to track production—a key requirement of the KP.” National governments have recently taken important steps to formalize ASM by “registering miners, delineating mining zones, and establishing a legal flow chain through which production is intended to move. The ability to map and monitor artisanal diamond mining sites is a necessary step towards achieving formalization. Doing so helps to identify where mining is taking place, the extent of activities, the amount of production, and how the activity and production change over time.”


While the US Geological Survey (USGS) has been using satellite imagery to “identify ASM activities and estimate the production in diamond mining zones throughout the region,” satellite imagery presents a number of limitations. These include “atmospheric constraints (cloud cover, haze, smoke, etc.)” as well as “temporal resolutions that fail to capture the dynamic nature of ASM sites and spatial resolutions that can be inadequate for identifying fine-scale features.” Hence the use of UAVs to “support USAID’s Property Rights and Artisanal Diamond Development (PRADD) project’s efforts to formalize ASM in Guinea.” USAID and USGS deployed a joint team in June 2014 to “create detailed site maps and generate very-high resolution digital elevation models (DEMs) of the region to better inform diamond production evaluations.” The team flew a DJI Phantom 1, a multi-rotor UAV (pictured below), to “collect data at seven artisanal diamond mining sites in the Forecariah Prefecture of western Guinea.” The DJI UAV was flown within Visual Line of Sight (VLOS).


The resulting aerial imagery allowed the team to “clearly distinguish active pits from inactive pits, locate and measure piles of extracted gravel and sedimentary layers, and detect changes in water color and sediment properties. The ability to map an entire site from one or two field locations is particularly beneficial for ASM research, as mine sites are often located in remote areas, can be several square kilometers in size, and sections of sites may be inaccessible or even dangerous for researchers to traverse due to a lack of roads, surficial disturbance due to mining, or other challenging terrain.” The team’s use of the multi-rotor UAV enabled them to “acquire complete aerial coverage of a site in under an hour.” A small fixed-wing UAV like an eBee would likely take under 15 minutes; but fixed-wings can be more challenging to operate (they do not take off and land vertically like the multi-rotors, for example) and can also cost a lot more.
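The “under an hour” figure is easy to make concrete with a rough lawn-mower-pattern estimate. All numbers below are hypothetical and not taken from the USGS/USAID mission report:

```python
# Rough flight-time estimate for mapping one mine site with a lawn-mower
# survey pattern. All numbers are hypothetical, not from the mission report.
site_area_m2 = 1_000_000   # a 1 km² site
swath_width_m = 50         # effective ground swath per pass, after sidelap
cruise_speed_m_s = 6.0     # multi-rotor cruise speed

track_length_m = site_area_m2 / swath_width_m     # total flight-line length
flight_minutes = track_length_m / cruise_speed_m_s / 60

print(f"~{flight_minutes:.0f} minutes of flight time")  # → ~56 minutes
```

A faster fixed-wing simply flies the same track length more quickly, which is why an eBee can cover the same site in a fraction of the time.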


The team is using the nadir imagery collected from the UAVs to develop 10cm resolution Digital Elevation Models (DEM) of each mine site. In addition, USAID is using the “aerial videos and oblique still imagery captured using Camera 1 (wide angle lens) to conduct participatory mapping with local communities in Forecariah Prefecture to delineate mining and agricultural zones. This will assist with the formalization of property rights, thus reducing local-scale conflicts over land use.” Note that UAV regulations do not exist in Guinea, which is why it was “the team’s responsibility to identify a process for contacting the appropriate authorities in Guinea to acquire permission to fly the UAS. This involved receiving signed letters from the Minister of Mines and Geology, the Minister of Transportation, and consent from the ministers of Defense and the Interior.”


And this is especially refreshing (not least because of the Ebola outbreak at the time): “Equally as important as acquiring permission at the national government level was informing local communities near the field sites about the planned [UAV] mission.” To accomplish this, the team “traveled to villages and mining sites to conduct a public relations campaign to notify local populations that the [UAV] would be flown in the area and to explain why it was being flown and what to expect. During the flight missions the team immediately downloaded and played video collected by the [UAV] for miners & villagers as a follow-up to the information campaign and to let them see their local landscapes from a birds-eye perspective. These steps added significant time to the field mission, but were essential to gaining the trust of local populations.” These steps are also in line with the Code of Conduct produced by the Humanitarian UAV Network (UAViators) in early 2014. This Code of Conduct has since been revised by several dozen humanitarian organizations who have also drafted guidelines on community engagement and data ethics for UAV projects (more on this here).


In conclusion, “an abundance of information is being gathered from these [images], ranging from the scope of mining activities, the location of mining within the landscape, the amount of activity at each site, the impact of mining on the surrounding environment, and the type of mining activities being conducted at the time of image collection.” The results will enable the “Guinean government to select appropriate zones to parcel for artisanal mining based on diamond potential, an important step towards formalization and resource governance. Interpretation of the data will also assist with the identification of abandoned mine sites that can be remediated into other income-generating activities, such as fish farming and vegetable gardens, thus helping to reduce the long-term environmental degradation caused by ASM.”


I was recently introduced to Pete Chirico, one of the team members from USGS who deployed to Guinea. You can read his write-up of the Guinea mission here (PDF). Pete will soon be headed back to Africa to carry out a similar mission, and I look forward to meeting up with him to learn more about his good work. The Guinea project represents the very first time that USAID made use of a UAV in one of its projects. Alas, many at USAID are not aware of this. I know this because I was recently invited by USAID to give a talk on UAV applications and this important precedent was not brought up. In any event, USAID’s former Administrator Rajiv Shah (pictured above with colleague Frank Pichel) was briefed on the initiative last year.

Rescue Robotics: An Introduction

I recently had the pleasure of meeting Dr. Robin Murphy when she participated in the 3-day Policy Forum on Humanitarian UAVs, which I organized and ran at the Rockefeller Center in Italy last month. Anyone serious about next-generation humanitarian technology should read Robin’s book on rescue robotics. The book provides a superb introduction to the use of robotics in search and rescue missions and doubles as a valuable “how-to” manual packed with deep insights, lessons learned and best practices. Rescue robots enable “responders and other stakeholders to sense and act at a distance from the site of a disaster or extreme incident.” While Robin’s focus is predominantly on the use of rescue robots for missions in the US, international humanitarian organizations should not overlook the important lessons learned from this experience.


As Robin rightly notes, “the impact of earthquakes, hurricanes, flooding […] is increasing, so the need for robots for all phases of a disaster, from prevention to response and recovery, will increase as well.” This is particularly true of aerial robots, or Unmanned Aerial Vehicles (UAVs), which represent the first widespread use of robotics in international humanitarian efforts. As such, this blog post relays some of the key insights from the field of rescue robots, and aerial UAVs in particular. For another excellent book on the use of UAVs for search and rescue, please see Gene Robinson’s book entitled First to Deploy.

The main use-case for rescue robotics is data collection. “Rescue robots are a category of mobile robots that are generally small enough and portable enough to be transported, used and operated on demand by the group needing the information; such a robot is called a tactical, organic system […].” Tactical means that “the robot is directly controlled by stakeholders with ‘boots on the ground’—people who need to make fairly rapid decisions about the event. Organic means that the robot is deployed, maintained, transported, and tasked and directed by the stakeholder, though, of course, the information can be shared with other stakeholders […].” These mobile robots are “often referred to as unmanned systems to distinguish them from robots used for factory automation.”


There are three types or modalities of mobile robots: Unmanned Ground Vehicles (UGVs), Unmanned Marine Vehicles (UMVs) and Unmanned Aerial Vehicles (UAVs). UGVs are typically used to enter coal mines following cave-ins or collapsed buildings to search for survivors. Indeed, “mine disasters are the most frequent users or requesters of rescue robots.” As an aside, I found it quite striking that “urban structures are likely to be manually inspected at least four times by different stakeholders” following a disaster. In any event, “few formal response organizations own rescue robots, which explains the average lag time of 6.5 days for a robot to be used [in] disaster [response].” That said, Robin notes that this lag time is reduced to 0.5 days when a “command institution had a robot or an existing partnership with a group that had robots […].” While “robots are still far from perfect, they are useful.” Robin is careful to note that the failures and gaps described in her book “should not be used as reasons to reject use of a robot but rather as decision aids in selecting a currently available robot and for proactively preparing a field team for what to expect.”

The Florida State Emergency Response Team carried out the first documented use of small UAVs for disaster response, following Hurricane Katrina in 2005. Robin Murphy’s Center for Robot-Assisted Search & Rescue (CRASAR) also flew two types of small UAVs to assist with the rescue phase: an AeroVironment Raven (a fixed-wing UAV) and an iSENSYS T-Rex variant miniature helicopter (pictured below). Two flights were carried out to “determine whether people were stranded in the area around Pearlington, Mississippi, and if the cresting Pearl River was posing immediate threats.” These affected areas were “unreachable by truck due to trees in the road.” The Raven UAV unfortunately crashed “into a set of power lines […] while landing in a demolished neighborhood.” CRASAR subsequently carried out an additional 32 flights with an iSENSYS IP-3 miniature helicopter to examine “structural damage at seven multistory buildings.”


The second documented deployment of UAVs in Robin’s book occurred in 2009, when a quadrotor was used by Sapienza University of Rome in the aftermath of the L’Aquila earthquake. Members of the University’s Cognitive Cooperative Robotics Lab deployed the UAV on behalf of the L’Aquila Fire Department. “The deployment in the debris concentrated on demonstrating mobility to fire rescue agencies.” The third documented use of UAVs occurred in Haiti after the 2010 earthquake. An Elbit Skylark (fixed-wing) UAV was used to survey the state of a distant orphanage near Leogane, just outside the capital.

Several UAV deployments occurred in 2011. After the Christchurch earthquake in New Zealand, a consumer Parrot AR drone was initially used to fly into a cathedral to inspect the damage (aerial photo below). That same year, a Pelican UAV was used in response to the Japan earthquake and tsunami to “test multi-robot collaborative mapping in a damaged building at Tohoku University.” In this case, multi-robot means that “the UAV was carried by a UGV” to get the former inside the rubble so it could fly inside the damaged building. At least two additional UAVs were used for the emergency at the Fukushima Daiichi nuclear power plant. Note that a “recording radiological sensor was zip tied to [one of the UAVs] in order to get low-altitude surveys.” Still in 2011, two UAVs were used in Cyprus after an explosion damaged a power plant. The UAVs were deployed to “inspect the damage and create a three-dimensional image of the power plant.” This mission “suggested that multiple UAVs could simultaneously map a face of the structure, [thus] accelerating the reconnaissance process.” Finally, at least two fixed-wing UAVs were used in Bangkok following the Great Thailand Flood in 2011. These aerial robots were used to “monitor large areas and allow disaster scientists to predict and prevent flooding.”


In 2012, a project funded by the European Union (EU) fielded two UAVs following earthquakes in Northern Italy to assess the exteriors of “two churches that had not been entered [for] safety reasons. The robots were successful and provided engineers and cultural historians with information that could not have been obtained otherwise.” UAV deployments following disasters in Haiti in 2012 and the Philippines in 2013 do not appear in the book, unfortunately. In any event, Robin notes that the main barrier to deploying UGVs, UMVs and UAVs “is not a technical issue but an administrative one.” I would add regulatory constraints as another major hurdle.

Robin’s book provides some excellent operational guidance on how to carry out rescue-robot missions successfully. These guidance notes also identify existing gaps in recent missions. One such gap is the “lack of ability to integrate UAV data with satellite imagery and other geographical sources,” an area that I’m actively working on (see MicroMappers). Robin makes an important observation on the gaps—or more precisely the data gaps that exist in the field of rescue robotics. “Surprisingly few deployments have been reported in the scientific or professional literature, and even fewer have been analyzed in any depth.” And even when “data are collected, many reports lack a unifying framework or conceptual model for analysis.”

This should not be surprising. Rescue robotics, and humanitarian UAVs in particular, “are new areas of discovery.” As such, “their newness means there is a lag in understanding how best to capture performance and even the dimensions that make up performance.” To be sure, “performance goes beyond simple binary declarations of mission success: it requires knowing what worked and why.” Furthermore, the use of UAVs in aid and development requires a “holistic evaluation of the technology in the larger socio-technical system.” I wholeheartedly agree with Robin, which is precisely why I’ve been developing standardized indicators to assess the performance of humanitarian UAVs used for data collection, payload transportation and communication services in international humanitarian aid. Such standards are needed sooner rather than later since “the current state of reporting deployments is ad hoc,” which means “there is no guarantee that all deployments have been recorded, much less documented in a manner to support scientific understanding or improved devices and concepts of operations.” I’ll be writing more about the standardized indicators I’ve been developing in a future blog post.

As Robin also notes, “it is not easy to determine if a robot accomplished the mission optimally, was resilient to conditions it did not encounter, or missed an important cue of a victim or structural hazard.” What’s more, “good performance of a robot in one hurricane does not necessarily mean good performance in another hurricane because so many factors can be different.” The fact is, rescue robotics has a “very small corpus of natural world observations […],” meaning that there is limited documentation based on direct observation of UAV missions in the field. This is also true of humanitarian UAVs. Unlike the science of rescue robotics, many other sciences have a “large corpus of prior observations, and thus ideation may not require new fundamental observations of the natural world.” What does this mean for rescue robotics (and humanitarian UAVs)? According to Robin, the very small corpus of real-world observations suggests that lab experimentation and simulations will have “limited utility as there is little information to create meaning[ful] models or to know what aspect of the natural world to duplicate.”

I’m still a strong proponent of simulations and disaster response exercises; they are key to catalyzing learning around emerging (humanitarian) technologies in non-high-stakes environments. But I certainly take Robin’s point. What’s very clear is that a lot more fieldwork is needed in rescue-robotics (and especially in the humanitarian UAV space). This fieldwork can be carried out in several ways:

  • Controlled Experimentation
  • Participation in an Exercise
  • Concept Experimentation
  • Participant-Observer Research

Controlled experimentation is “highly focused, either on testing a hypothesis or capturing a performance metric(s) […].” Participation in an exercise occurs in simulated-but-realistic environments; this type of fieldwork focuses on “reinforcing good practices […].” Concept experimentation can occur both in simulated environments and in the real world. “The experimentation is focused on generating concepts of how a new technology or protocol can be used […].” This type of experimentation also “identifies new uses or missions for the robot.” Lastly, participant-observer research “is conducted while the robot is actually deployed to a disaster, and is a form of ethnography.”

There are many more important operational insights in Robin’s book. I highly recommend reading sections 3-6 in Chapter 6 since they provide very practical advice on how to carry out rescue-robotics missions. These sections are packed with hands-on lessons learned and best practices, which very much mirror my own experience in the humanitarian UAV space, as documented in this best practices guide. For example, she emphasizes the critical importance of having a “Data Manager” as part of your deployment team: “The first priority of the data manager is to gather all the incoming data, and perform backups.” In addition, Robin Murphy strongly recommends that an expert participant-observer researcher be embedded in the mission team, another suggestion I completely agree with. In terms of good etiquette, “Do not attempt first contact during a disaster” is another suggestion that I wholeheartedly agree with. This is precisely why the UN asked UAV operators in Nepal to first check in with the Humanitarian UAV Network (UAViators).

In closing, big thanks to Robin for writing this book and for participating in the recent Policy Forum on Humanitarian UAVs.