Category Archives: Humanitarian Technologies

Think Global, Fly Local: The Future of Aerial Robotics for Disaster Response

First responders during disasters are not the United Nations or the Red Cross. The real first responders, by definition, are the local communities; always have been, always will be. So the question is: can robotics empower local communities to respond and recover both faster and better? I believe the answer is Yes.

But let's look at the alternative. As we’ve seen from recent disasters, the majority of teams that deploy with aerial robotics (UAVs) do so from the US, Europe and Australia. The mobilization costs involved in flying a professional team across the world—not to mention their robotics equipment—are not insignificant. And this doesn’t even include the hotel costs for a multi-person team over the course of a mission. When you factor in these costs on top of the consulting fees owed to professional international robotics teams, then of course the use of aerial robotics versus space robotics (satellites) becomes harder to justify.

There is also an important time factor. The time it takes for international teams to obtain the necessary export/import permits and customs clearance can be highly unpredictable. More than one international UAV team that (self) deployed to Nepal after the tragic 2015 Earthquake had their robotics platforms held up in customs for days. And of course there’s the question of getting regulatory approval for robotics flights. Lastly, international teams (especially companies and start-ups) may have little to no prior experience working in the country they’re deploying to; they may not know the culture or speak the language. This too creates friction and can slow down a humanitarian robotics mission.


What if you had fully trained teams on the ground already? Not an international team, but a local expert robotics team that obviously speaks the local language, understands local customs and already has a relationship with the country’s Civil Aviation Authority. A local team does not need to waste time with export/import permits or customs clearance; doesn’t need expensive international flights or weeks’ worth of hotel accommodations. They’re on site, and ready to deploy at a moment’s notice. Not only would this response be faster, it would be orders of magnitude cheaper and more sustainable to carry through to the recovery and reconstruction phase.

In sum, we need to co-create local Flying Labs with local partners including universities, NGOs, companies and government partners. Not only would these Labs be far more agile and rapid vis-a-vis disaster response efforts, they would also be far more sustainable and their impact more scalable than deploying international robotics teams. This is one of the main reasons why my team and I at WeRobotics are looking to co-create and connect a number of Flying Labs in disaster-prone countries across Asia, Africa and Latin America. With these Flying Labs in place, the cost of rapidly acquiring high quality aerial imagery will fall significantly. Think Global, Fly Local.

Aerial Robotics for Payload Delivery in Developing Countries: Open Questions

Should developing countries seek to manufacture their own robotics solutions in order to establish payload delivery services? What business models make the most sense to sustain these services? Do decision-support tools already exist to determine which delivery routes are best served by aerial robots (drones) rather than traditional systems (such as motorbikes)? And what mechanisms should be in place to ensure that the impact of robotics solutions on local employment is one of net job creation rather than job loss?


These are some of the questions I’ve been thinking about and discussing with various colleagues over the past year vis-a-vis humanitarian applications. So let me take the first two questions and explore them further here. I’ll plan on writing a follow-up post in the near future to address the other two questions.

First, should developing countries take advantage of commercial solutions that already exist to build their robotics delivery infrastructure? Or should they seek instead to manufacture these robotics platforms locally? The way I see it, this does not have to be an either/or situation. Developing countries can both benefit from the robust robotics technologies that already exist and take steps to manufacture their own solutions over time.

This is not a hypothetical debate. I’ve spent the past few months going back and forth with a government official in a developing country about this very question. The official is not interested in leveraging existing commercial solutions from the West. As he rightly notes, there are many bright engineers in-country who are able and willing to build these robotics solutions locally.

Here’s the rub, however: this official has no idea just how much work, time and money is needed to develop robust, reliable and safe robotics solutions. In fact, many companies in both Europe and the US have themselves completely underestimated just how technically challenging (and very expensive) it is to develop reliable aerial robotics solutions to deliver payloads. This endeavor easily takes years and millions of dollars to have a shot at success. It is far from trivial.

The government official in question wants his country’s engineers to build these solutions locally in order to transport essential medicines and vaccines between health clinics and remote villages. Providing this service is relatively urgent because existing delivery mechanisms are slow, unreliable and at times dangerous. So this official will have to raise a substantial amount of funds to pay local engineers to build home-grown robotics solutions and iterate accordingly. This could take years (with absolutely no guarantee of success, mind you).

On the other hand, this same official could decide to welcome the use of existing commercial solutions as part of field-tests in-country. The funding for this would not have to come from the government and the platforms could be field-tested as early as this summer. Not only would this provide local engineers with the ability to learn from the tests and gain important engineering insights, they could also be hired to actually operate the cargo delivery services over the long-term, thus gaining the skills to maintain and fix the platforms. Learning by doing would give these engineers practical training that they could use to build their own home-grown solutions.

One could be even more provocative: Why invest so much time and effort in local manufacturing when in-country engineers and entrepreneurs could simply use commercial solutions that already exist to make money sooner rather than later by providing robotics as a service? We’ve seen, historically, the transition from manufacturing to service-based economies. There’s plenty of profit to be made from the latter with a lot less start-up time and capital required. And again, one strategy does not preclude the other, so why forgo both early training and business opportunities when these same opportunities could help develop and fund the local robotics industry?

Admittedly, I’m somewhat surprised by the official’s zero tolerance for the use of foreign commercial technology to improve his country’s public health services; that same official is using computers, phones, cars, televisions, etc., that are certainly not made in-country. He does not have a background in robotics, so perhaps he assumes that building robust robotics solutions is relatively easy. Simply perusing the past 2 years of crowdfunded aerial robotics projects will clearly demonstrate that most have resulted in complete failure despite raising millions of dollars. That robotics graveyard keeps growing.

But I fully respect the government official’s position even if I disagree with it. In my most recent exchange with said official, I politely re-iterated that one strategy (local manufacturing) does not preclude the other (local business opportunities around robotics as a service using foreign commercial solutions). Surely, the country in question can leverage foreign technology while also building a local manufacturing base to produce its own robotics solutions.


Second, on business models: which models can provide sustainability by making aerial delivery services profitable earlier rather than later? I was recently speaking to a good colleague of mine who works for a very well-respected humanitarian group about their plans to pilot the use of aerial robotics for the delivery of essential medicines. When I asked him about his organization’s business model for sustaining these delivery services, he simply said there was no model; his humanitarian organization would foot the bill.

Surely we can do better. Just think how absurd it would be for a humanitarian organization to pay for its own 50-kilometer paved road to transport essential medicines by truck and then decide not to recoup those major costs. You’ve paid for a perfectly good road that your organization uses only a few times a day, yet 80% of the time there is no one else on that road. Humanitarians who seek to embark on robotics delivery projects should really take the time to understand local demand for transportation services and use-cases, and explore strategies to recoup part of their investments in building the aerial robotics infrastructure.

Surely remote communities who are disconnected from health services are also disconnected from access to other commodities. Of course, these local villages may not benefit from high levels of income; but I’m not suggesting that we look for high margins of return. Point is, if you’ve already purchased an aerial robot (drone) and it spends 80% of its time on the ground, then talk about a missed opportunity. Take commercial aviation as an analogy. Airlines do not make money when their planes are parked at the gate. They make money when said planes fly from point A to point B. The more they fly, the more they transport, the more they profit. So pray tell what is the point of investing in aerial robots only to have them spend most of their lives on the ground? Why not “charter” these robots for other purposes when they’re not busy flying medicines?

The fixed costs are the biggest hurdle with respect to aerial robotics, not the variable costs. Autonomous flights themselves cost virtually nothing; only one or two people’s time to operate the robot and swap batteries & payloads. Just like their big sisters (manually piloted aircraft), aerial robots should be spending the bulk of their time in the sky. So humanitarian organizations really ought to be thinking earlier rather than later about how to recoup part of their fixed costs by offering to transport other high-demand goods. For example, by allowing local businesses to use existing robotics aircraft and routes to transport top-up cards or SIM cards for mobile phones. What is the weight of 500 top-up or SIM cards? Around 0.5 kg, which is easily transportable via aerial robot. Better yet, identify perishable commodities with a short shelf-life and allow businesses to fly those via aerial robot.

The business model that I’m most interested in at the moment is a “Per Flight Savings” model. One reason to introduce robotics solutions is to save on costs—variable costs in particular. Let’s say that the variable cost of operating robotics solutions is 20% lower than the costs of traditional delivery mechanisms (per flight versus per drive, for example). You offer the client a 10% cost saving and pocket the other 10% as revenue. Over time, with sufficient flights (transactions) and growing demand, you break even and start to turn a profit. I realize this is a hugely simplistic description, but this need not be unnecessarily complicated either. The key will obviously be the level of demand for these transactions.
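To make the arithmetic concrete, here is a minimal sketch of that break-even logic in Python. Every figure in it (the traditional delivery cost, the fixed costs) is a hypothetical assumption for illustration, not field data.

```python
# Illustrative break-even sketch for a "Per Flight Savings" model.
# All figures are hypothetical assumptions, not field data.

TRADITIONAL_COST_PER_DELIVERY = 10.00  # e.g., motorbike courier, per trip (USD)
ROBOT_VARIABLE_COST = TRADITIONAL_COST_PER_DELIVERY * 0.80  # 20% cheaper per flight
CLIENT_PRICE = TRADITIONAL_COST_PER_DELIVERY * 0.90         # client keeps a 10% saving
REVENUE_PER_FLIGHT = CLIENT_PRICE - ROBOT_VARIABLE_COST     # operator pockets the other 10%

FIXED_COSTS = 50_000.00  # platform, permits, training, etc. (assumed)

flights_to_break_even = FIXED_COSTS / REVENUE_PER_FLIGHT
print(f"Revenue per flight: ${REVENUE_PER_FLIGHT:.2f}")
print(f"Flights to break even: {flights_to_break_even:,.0f}")
# With these numbers: $1.00 per flight -> 50,000 flights, which is
# exactly why the level of demand (transaction volume) is the key variable.
```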

The way I see it, regardless of the business model, there will be a huge first-mover advantage in developing countries given the massive barriers to entry. Said barriers are primarily due to regulatory issues and air traffic management challenges. For example, once a robotics company manages to get regulatory approval and specific flight permissions for designated delivery routes to supply essential medicines, a second company that seeks to enter the market may face even greater barriers. Why? Because managing aerial robotics platforms from one company and segregating that airspace from manned aircraft can already be a challenge (not to mention a source of concern for Civil Aviation Authorities).

So adding new (and different types of) robots from a second company requires new communication protocols between the different robotics platforms operated by the two companies. In sum, the challenges become more complex more quickly as new competitors seek entry. And for an Aviation Authority that may already be wary of flying robots, the proposal of adding a second fleet from a different company in order to increase competition around aerial deliveries may take said Authority some time to digest. Of course, if these companies can each operate in completely different parts of a given country, then technically this is an easier challenge to manage (and less anxiety-provoking for authorities).

But these barriers are not only technical (though surmountable). They also include identifying those (few?) use-cases that clearly make the most business sense for recouping one’s investments earlier rather than later, given the very high start-up fixed costs associated with developing robotics platforms. Identifying these business cases is typically not something that’s easily done remotely. A considerable amount of time and effort must be spent on-site to identify and meet possible stakeholders in order to brainstorm and discover key use-cases. And my sense is that aerial robots often need to be designed to meet a specific use-case. So even when new use-cases are identified, there may still be a need for Research and Development (R&D) to modify a given robotics platform so it can most efficiently cater to the new use-cases.

There are other business models worth thinking through for related services, such as those around the provision of battery-charging services, for example. The group Mobisol has installed solar home systems on the roofs of over 40,000 households in Rwanda and Tanzania to tackle the challenge of energy poverty. Mobisol claims to already cover much of Tanzania with solar panels that are no more than 5 kilometers apart. This could enable aerial robots (UAVs) to hop from recharging station to recharging station, an opportunity that Mobisol is already actively exploring. Practical challenges aside, this network of charging stations could lead to an interesting business model around the provision of aerial robotics services.

As the astute reader will have gathered, much of the above is simply a written transcript of me thinking out loud. So I’d very much welcome some intellectual company here along with constructive feedback. What am I missing? Is my logic sound? What else should I be taking into account?

New Findings: Rapid Assessment of Disaster Damage Using Social Media

The latest peer-reviewed scientific research on social media & crisis computing has just been published in the prestigious journal Science. The authors pose a question that many of us in the international humanitarian space have been asking, debating and answering since 2009: Can social media data aid in disaster response and damage assessment?

To answer this question, the authors of the new study carry out “a multiscale analysis of Twitter activity before, during, and after Hurricane Sandy” and “examine the online response of 50 metropolitan areas of the US.” They find a “strong relationship between proximity to Sandy’s path and hurricane-related social media activity.” In addition, they “show that real and perceived threats, together with physical disaster effects, are directly observable through the intensity and composition of Twitter’s message stream.”


What’s more, they actually “demonstrate that per-capita Twitter activity strongly correlates with the per-capita economic damage inflicted by the hurricane.” The authors found these results to hold true for a “wide range of [US-based] disasters and suggest that massive online social networks can be used for rapid assessment of damage caused by a large-scale disaster.”
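For readers who want to play with this idea, here is a minimal sketch (in Python) of the kind of per-capita normalization and correlation check the authors describe. The numbers below are hypothetical placeholders, not the study’s data.

```python
# Minimal sketch of a per-capita correlation check of the kind the study
# describes; the figures below are hypothetical placeholders, not its data.
import numpy as np
from scipy.stats import pearsonr

# One entry per metropolitan area: tweet count, population, damage in USD
tweets = np.array([12_000, 3_400, 55_000, 800, 9_100])
population = np.array([2_100_000, 650_000, 8_400_000, 150_000, 1_200_000])
damage_usd = np.array([9.0e8, 1.1e8, 6.5e9, 2.0e7, 4.2e8])

# Normalize both signals by population before correlating
tweets_per_capita = tweets / population
damage_per_capita = damage_usd / population

r, p = pearsonr(tweets_per_capita, damage_per_capita)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```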


Unlike the vast majority of crisis computing studies in the scientific literature, this is one of the few (perhaps the only?) studies of its kind that uses actual post-disaster damage data, i.e., actual ground-truthing, to demonstrate that “the per-capita number of Twitter messages corresponds directly to disaster-inflicted monetary damage.” What’s more, “The correlation is especially pronounced for persistent post-disaster activity and is weakest at the peak of the disaster.”


The authors thus conclude that social media is a “viable platform for preliminary rapid damage assessment in the chaotic time immediately after a disaster.” As such, their results suggest that “officials should pay attention to normalized activity levels, rates of original content creation, and rates of content rebroadcast to identify the hardest hit areas in real time. Immediately after a disaster, they should focus on persistence in activity levels to assess which areas are likely to need the most assistance.”


In sum, the authors found that “Twitter activity during a large-scale natural disaster—in this instance Hurricane Sandy—is related to the proximity of the region to the path of the hurricane. Activity drops as the distance from the hurricane increases; after a distance of approximately 1200 to 1500 km, the influence of proximity disappears. High-level analysis of the composition of the message stream reveals additional findings. Geo-enriched data (with location of tweets inferred from users’ profiles) show that the areas close to the disaster generate more original content […].”

Five years ago, professional humanitarians were still largely dismissive of social media’s added value in disasters. Three years ago, it was the turn of ivory tower academics in the social sciences to dismiss the value added of social media for disaster response. The criticisms focused on the notion that reports posted on social media were simply untrustworthy and hardly representative. The above peer-reviewed scientific study dismisses these limitations as inconsequential.


A 10 Year Vision: Future Trends in Geospatial Information Management


The United Nations Committee of Experts on Global Geospatial Information Management (UN-GGIM) recently published the second edition of its Future Trends in Geospatial Information Management. I blogged about the first edition here. Below are some of the excerpts I found interesting or noteworthy. The report itself is a 50-page document (PDF, 7.1 MB).

  • The integration of smart technologies and efficient governance models will increase, and the mantra of ‘doing more for less’ is more relevant than ever before.
  • There is an increasing tendency to bring together data from multiple sources: official statistics, geospatial information, satellite data, big data and crowdsourced data among them.
  • New data sources and new data collection technologies must be carefully applied to avoid a bias that favors countries that are wealthier and with established data infrastructures. The use of innovative tools might also favor those who have greater means to access technology, thus widening the gap between the ‘data poor’ and the ‘data rich’.
  • The paradigm of geospatial information is changing; no longer is it used just for mapping and visualization, but also for integrating with other data sources, data analytics, modeling and policy-making.
  • Our ability to create data is still, on the whole, ahead of our ability to solve complex problems by using the data. Addressing this problem will rely on the development of both Big Data technologies and techniques (that is, technologies that enable the analysis of vast quantities of information within usable and practical timeframes) and artificial intelligence (AI) or machine learning technologies that will enable the data to be processed more efficiently.
  • In the future we may expect society to make increasing use of autonomous machines and robots, thanks to a combination of an aging population, rapid technological advancement in unmanned autonomous systems and AI, and the sheer volume of data being beyond a human’s ability to process it.
  • Developments in AI are beginning to transform the way machines interact with the world. Up to now machines have mainly carried out well-defined tasks such as robotic assembly, or data analysis using pre-defined criteria, but we are moving into an age where machine learning will allow machines to interact with their environment in more flexible and adaptive ways. This is a trend we expect to see major growth in over the next 5 to 10 years as the technologies–and understanding of the technologies–become more widely recognized.
  • Processes based on these principles, and the learning of geospatial concepts (locational accuracy, precision, proximity, etc.), can be expected to improve the interpretation of aerial and satellite imagery by improving the accuracy with which geospatial features can be identified.
  • Tools may run persistently on continuous streams of data, alerting interested parties to new discoveries and events. Another branch of AI that has long been of interest is the expert system, in which the knowledge and experience of human experts is taught to a machine.
  • The principle of collecting data once only, at the highest resolution needed, and generalizing ‘on the fly’ as required, can become reality. Developments in augmented and virtual reality will allow humans to interact with data in new ways.
  • The future of data will not be the conflation of multiple data sources into a single new dataset; rather, there will be a growth in the number of datasets that are connected and provide models to be used across the world.
  • Efforts should be devoted to integrating involuntary sensors–mobile phones, RFID sensors and so on–which, aside from their primary purpose, may produce previously difficult-to-collect information. This leads to more real-time information being generated.
  • Many developing nations have leapfrogged in areas such as mobile communications, but the lack of core processing power may inhibit some from taking advantage of the opportunities afforded by these technologies.
  • Disaggregating data from high levels down to small-area geographies will increase the need to evaluate and adopt alternative statistical modeling techniques to ensure that statistics can be produced at the right geographic level, whilst still maintaining the quality to allow them to be reported against.
  • The information generated through the use of social media and everyday devices will further reveal patterns and enable the prediction of behaviour. This is not a new trend, but as the use of social media for providing real-time information and expanded functionality increases, it offers new opportunities for location-based services.
  • There seems to have been a breakthrough from 2D to 3D information, and this is becoming more prevalent. Software already exists to process this information and to incorporate the time element to create 4D products and services. It is recognized that a growth area over the next five to ten years will be the use of 4D information in a wide variety of industries.
  • The temporal element is crucial to a number of applications such as emergency service response, simulations and analytics, and the tracking of moving objects. 4D is particularly relevant in the context of real-time information; this has been linked to virtual reality technologies.
  • Greater coverage, quality and resolution have been achieved through the availability of low-cost and affordable satellite systems and unmanned aerial vehicles (UAVs). This has increased the speed of collection and acquisition in remote areas, and also reduced the cost barriers to entry.
  • UAVs can provide real-time information to decision-makers on the ground, providing, for example, information for disaster management. They are an invaluable tool when additional information is needed to improve vital decision-making capabilities, and such use of UAVs will increase.
  • The licensing of data in an increasingly online world is proving to be very challenging. There is a growth in organisations adopting simple machine-readable licences, but these have not resolved the issues around data licensing. Emerging technologies such as web services and the growth of big data solutions drawn from multiple sources will continue to create challenges for the licensing of data.
  • A wider issue is the training and education of a broader community of developers and users of location-enabled content. At the same time there is a need for more automated approaches to ensuring that the non-geospatial professional community gets the right data at the right time. Investment in formal training in the use of geospatial data and its implementation is still indispensable.
  • Both ‘open’ and ‘closed’ VGI data play an important and necessary part in the wider data ecosystem.

Increasing the Reliability of Aerial Imagery Analysis for Damage Assessments

In March 2015, I was invited by the World Bank to spearhead an ambitious humanitarian aerial robotics (UAV) mission to Vanuatu following Cyclone Pam, a devastating Category 5 cyclone. This mission was coordinated with Heliwest and X-Craft, two outstanding UAV companies that were identified through the Humanitarian UAV Network (UAViators) Roster of Pilots. You can learn more about the mission and see pictures here. Lessons learned from this mission (and many others) are available here.


The World Bank and partners were unable to immediately analyze the aerial imagery we had collected because they faced a Big Data challenge. So I suggested the Bank activate the Digital Humanitarian Network (DHN) to request digital volunteer assistance. As a result, Humanitarian OpenStreetMap (HOT) analyzed some of the orthorectified mosaics and MicroMappers focused on analyzing the oblique images (more on both here).

This in turn produced a number of challenges. To cite just one, the Bank needed digital humanitarians to identify which houses or buildings were completely destroyed versus partially damaged versus largely intact. But there was little guidance on how to determine what constituted fully destroyed versus partially damaged, or on what such structures in Vanuatu look like when damaged by a cyclone. As a result, data quality was not as high as it could have been. In my capacity as consultant for the World Bank’s UAVs for Resilience Program, I decided to do something about this lack of guidelines for imagery interpretation.

I turned to my colleagues at the Harvard Humanitarian Initiative (where I had previously co-founded and co-directed the HHI Program on Crisis Mapping) and invited them to develop a rigorous guide that could inform the consistent interpretation of aerial imagery of disaster damage in Vanuatu (and nearby Island States). Note that Vanuatu is number one on the World Bank’s Risk Index of most disaster-prone countries. The imagery analysis guide has just been published (PDF) by the Signal Program on Human Security and Technology at HHI.

Big thanks to the HHI team for having worked on this guide, and to my Bank colleagues and other reviewers for their detailed feedback on earlier drafts. The guide is another important step towards improving data quality for satellite and aerial imagery analysis in the context of damage assessments. Better data quality is also important for the use of Artificial Intelligence (AI) and computer vision as explained here. If a humanitarian UAV mission does happen in response to the recent disaster in Fiji, then the guide may also be of assistance there depending on how similar the building materials and architecture are. For now, many thanks to HHI for having produced this imagery guide.

Aerial Robotics for Search & Rescue: State of the Art?

WeRobotics is co-creating a global network of labs to transfer robotics solutions to those who need them most. These “Flying Labs” take on different local flavors based on the needs and priorities of local partners. Santiago Flying Labs is one of the labs under consideration. Our local partners in Chile are interested in the application of robotics for disaster preparedness and Search & Rescue (SaR) operations. So what is the state of the art in rescue robotics?


One answer may lie several thousand miles away in Lushan, China, which experienced a 7.0 magnitude earthquake in 2013. The mountainous terrain made it nearly impossible for the Chinese International Search and Rescue Team (CISAR) to implement a rapid search and post-seismic evaluation. So the State Key Robotics Lab at Shenyang Institute of Automation offered aerial support to CISAR. They used their aerial robot (UAV) to automatically and accurately detect collapsed buildings for ground rescue guidance. This saved the SaR teams considerable time. Here’s how.


A quicker SaR response leads to a higher survival rate. The survival rate is around 90% within the first 30 minutes but closer to 20% by day four. “In traditional search methods, ground rescuers are distributed to all possible places, which is time consuming and inefficient.” An aerial inspection of the disaster damage can help accelerate the ground search for survivors by prioritizing which areas to search first.


State Key Labs used a ServoHeli aerial robot to capture live video footage of the damaged region. And this is where it gets interesting. “Because the remains of a collapsed building often fall onto the ground in arbitrary directions, their shapes will exhibit random gradients without particular orientations. Thus, in successive aerial images, the random shape of a collapsed building will lead to particular motion features that can be used to discriminate collapsed from non-collapsed buildings.”


These distinct motion features can be quantified using a histogram of oriented gradients (HOG).

As the histograms in the study make clear, the “HOG variation of a normal building will be much larger than that of a collapsed one.” The team at State Key Labs had already employed this technique to train and test their automated feature-detection algorithm using aerial video footage from the 2010 Haiti Earthquake. In sample results from that testing, red rectangles denote where the algorithm successfully identified damage, blue rectangles are false alarms, and orange rectangles are missed detections.
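For the technically inclined, here is a toy sketch of that HOG intuition in Python using scikit-image. This illustrates the principle only and is not the State Key Lab’s actual pipeline; the patch size and decision threshold are assumptions that would need tuning on labeled footage.

```python
# Toy sketch of the HOG intuition described above (not the State Key Lab's
# pipeline): an intact building's roof lines and facades produce strongly
# peaked gradient orientations (high variance across HOG bins), while rubble
# produces near-uniform orientations (low variance).
import numpy as np
from skimage.feature import hog
from skimage.color import rgb2gray
from skimage.io import imread

def orientation_variance(patch_gray, bins=9):
    """Variance across orientation bins of one HOG histogram pooled
    over the entire patch (one cell = the whole patch)."""
    h = hog(patch_gray, orientations=bins,
            pixels_per_cell=patch_gray.shape,
            cells_per_block=(1, 1), feature_vector=True)
    return np.var(h)

def classify_patch(patch_gray, threshold=1e-3):
    """Hypothetical threshold; would need tuning on labeled footage."""
    return "intact" if orientation_variance(patch_gray) > threshold else "collapsed"

# Usage (file path is a placeholder):
# frame = rgb2gray(imread("aerial_frame.png"))
# print(classify_patch(frame[0:128, 0:128]))
```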


While the team achieved increasingly accurate detection rates for Haiti, the initial results for Lushan were not as robust because Lushan is far more rural than Port-au-Prince, which tripped up the algorithm. Eventually, however, the software achieved an accuracy rate of 83.4% without any missed collapses. The use of aerial robotics and automated feature-detection algorithms in Xinglong Village (9.5 sq. km) enabled CISAR to cut their search time in half. In sum, the team concluded that videos are more valuable for SaR operations than static images.


To learn more about this deployment, see the excellent write-up “Search and Rescue Rotary-Wing UAV and Its Application to the Lushan Ms 7.0 Earthquake” published in the Journal of Field Robotics. I wish all robotics deployments were this well documented. Another point that I find particularly noteworthy about this operation is that it was conducted three years ago already. In other words, real-time feature detection of disaster damage from live aerial video footage was already used operationally years ago.

What’s more, this paper published in 2002 (!) used computer vision to detect, with a 90% accuracy rate, collapsed buildings in aerial footage of the 1995 Kobe earthquake captured by television crews in helicopters. Perhaps in the near future we’ll have automated feature detection algorithms for disaster damage assessments running on live video footage from news channels and aerial robots. These could then be complemented by automated change-detection algorithms running on satellite imagery. In any event, the importance of applied research is clearly demonstrated by the Lushan deployments. This explains why WeRobotics always aims to have local universities involved in Flying Labs.


Thanks to the ICARUS Team for pointing me to this deployment.

When Bushmen Race Aerial Robots to Protect Wildlife


The San people are bushmen who span multiple countries in Southern Africa. “For millennia, they have lived as hunter-gatherers in the Kalahari desert relying on an intimate knowledge of the environment.” Thus begins Sonja’s story, written with Matthew Parkan. Friend and colleague Sonja Betschart is a fellow co-founder at WeRobotics. In 2015, she teamed up with partners in Namibia to support their wildlife conservation efforts. Sonja and team used aerial robots (UAVs) to count the number of wild animals in Kuzikus Wildlife Reserve. All the images in this blog post are credited to Drone Adventures.


If that name sounds familiar, it’s because of this digital crowdsourcing project, which I spearheaded with the same Wildlife Reserve back in 2014. The project, which was covered on CNN, used crowdsourcing to identify wild animals in very high-resolution aerial imagery. As Sonja notes, having updated counts for the number of animals in each species is instrumental to nature conservation efforts.

While the crowdsourcing project worked rather well to identify the presence of animals, distinguishing between animal species is more difficult for “the crowd,” even when crowdsourcing is combined with Artificial Intelligence. Local indigenous knowledge is unbeatable, or is it? Sonja and team decided to put this indigenous technical knowledge to the test: find the baby rhino that had been spotted in the area earlier during the mission. Armed with the latest aerial robotics technology, they knew full well that the robots would be stiff—even unfair—competition for the bushmen. After all, the aerial robots could travel much faster and much farther than the fastest bushman in Namibia.

Game on: Bushmen vs Robots. Ready, get set, go!


The drones weren’t even up in the air when the bushmen strolled back into camp to make themselves a cup of tea. Yes, they had already identified the location of the rhino. “This technologically humbling experience was a real eye opener,” writes Sonja, “a clear demonstration of how valuable traditional skills are despite new technologies. It was also one of the reasons why we started thinking about ways to involve the trackers in our work.”

They invited the bushmen to analyze the thousands of aerial images taken by the aerial robots to identify the various animal species visible in each image. After providing the San with a crash course on how to use a computer (to navigate between images), the bushmen “provided us with consistent interpretations of the species and sometimes even of the tracks in the high grass. However, what impressed us most was that the San pointed out subtle differences in the shadows [of the animals] that would give away the animals’ identities […].”


Take this aerial image, for example. See anything? “The San identified three different species on this image: oryx (red circles), blesbok (blue circle) and zebra (green circle).” In another image, “the bushmen were even able to give us the name of one of the Black Rhinos they spotted! For people who had never seen aerial imagery, their performance was truly remarkable.” Clearly, “with a bit of additional training, the San trackers could easily become expert image analysts.”

As Sonja rightly concluded after the mission, the project in Namibia was yet more evidence that combining indigenous knowledge with modern technology is both an elegant way of keeping this knowledge alive and an effective approach to address some of our technical and scientific limitations. As such, this is precisely the kind of project that Sonja and I want to see WeRobotics launch in 2016. It is fully in line with our mission to democratize the Fourth Industrial Revolution.


“The Namibia mission was part of SAVMAP, a joint initiative between the Laboratory of Geographic Information Systems (LASIG) of EPFL, Kuzikus Wildlife Reserve & Drone Adventures that aims at better integrating geospatial technologies with conservation and wildlife management in the Kalahari.”

360° Aerial View of Taiwan Earthquake Damage

The latest news coming from Taiwan is just tragic and breaks my heart. More than 100 may still be trapped under the rubble. I’ve been blogging about the role that emerging technologies can play in humanitarian response since 2008 but it definitely doesn’t get any easier emotionally to witness these tragic events. I try to draw a psychological line when I can so as not to lose hope and fall apart, but I’m not always successful. Sometimes it is just easier psychologically to focus on the technology and nothing else. My thoughts go out to all the families affected.


My colleague Preston Ward from DronePan kindly connected me to a friend of his, Elton, in Taiwan. Elton is with Super720, a company that creates impressive Virtual Reality (VR) panoramas. They deployed within 4 hours of the earthquake and used aerial robots (UAVs) at 5 different locations to capture both 360° aerial panoramas and 4K video to document the earthquake damage.


360° visual analysis may prove useful and complementary for inspecting disaster damage. If there were a way to geo-reference and orthorectify these panos, they could potentially play a larger role in the disaster assessment process which uses geospatial data and Geographic Information Systems (GIS).

The team in Taiwan also shared aerial video footage of the damage.

As one of my upcoming blog posts will note, we should be able to run computer vision algorithms on aerial videos to automatically detect disaster damage. We should also be able to use Virtual Reality to carry out damage assessments. You can already use VR to inspect the earthquake damage in Taiwan here.

Assessing Disaster Damage: How Close Do You Need to Be?

“What is the optimal spatial resolution for the analysis of disaster damage?”

I posed this question during an hour-long presentation I gave to the World Bank in May 2015. The talk was on the use of remote sensing aerial robotics (UAVs) for disaster damage assessments; I used Cyclone Pam in Vanuatu and the double Nepal Earthquakes as case studies.

One advantage of aerial robotics over space robotics (satellites) is that the former can capture imagery at far higher spatial resolutions—sub 1-centimeter if need be. Without hesitation, a World Bank analyst in the conference room replied to my question: “Fifty centimeters.” I was taken aback by the rapid reply. Had I perchance missed something? Was it so obvious that 50 cm resolution was optimal? So I asked, “Why 50 centimeters?” Again, without hesitation: “Because that’s the resolution we’ve been using.” Ah ha! “But how do you know this is the optimal resolution if you haven’t tried 30 cm or even 10 cm?”

Let’s go back to the fundamentals. We know that “rapid damage assessment is essential after disaster events, especially in densely built up urban areas where the assessment results provide guidance for rescue forces and other immediate relief efforts, as well as subsequent rehabilitation and reconstruction. Ground-based mapping is too slow, and typically hindered by disaster-related site access difficulties” (Gerke & Kerle 2011). Indeed, studies have shown that the inability to physically access damaged areas results in field teams underestimating the damage (Lemoine et al 2013). Hence one reason for the use of remote sensing.

We know that remote sensing can be used for two purposes following disasters. The first, “Rapid Mapping”, aims at providing impact assessments as quickly as possible after a disaster. The second, “Economic Mapping”, assists in quantifying the economic impact of the damage. “Major distinctions between the two categories are timeliness, completeness and accuracies” (2013). In addition, Rapid Mapping aims to identify the relative severity of the damage (low, medium, high) rather than absolute damage figures. Results from Economic Mapping are combined with other geospatial data (building size, building type, etc.) and economic parameters (e.g., cost per unit area, relocation costs, etc.) to compute a total cost estimate (2013). The Post-Disaster Needs Assessment (PDNA) is an example of Economic Mapping.
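As a hypothetical illustration of how such a total cost estimate might be rolled up, consider the sketch below. The damage factors and unit costs are invented for illustration and are not drawn from the studies cited here.

```python
# Hypothetical sketch of an Economic Mapping roll-up: combine per-building
# damage classes with building size and assumed economic parameters.
# Multipliers and unit costs are illustrative, not from the cited studies.

COST_PER_SQM = {"residential": 400.0, "commercial": 900.0}  # USD, assumed
DAMAGE_FACTOR = {"low": 0.1, "medium": 0.5, "high": 1.0}    # share of replacement cost

buildings = [
    # (footprint m^2, land use, damage class) -- placeholder survey rows
    (120, "residential", "high"),
    (450, "commercial", "medium"),
    (95,  "residential", "low"),
]

total = sum(area * COST_PER_SQM[use] * DAMAGE_FACTOR[dmg]
            for area, use, dmg in buildings)
print(f"Estimated damage: ${total:,.0f}")
# 48,000 + 202,500 + 3,800 = $254,300 for these placeholder rows
```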

It is worth noting that a partially destroyed building may be seen as a complete economic loss, identical to a totally destroyed structure (2011). “From a casualty / fatality assessment perspective, however, a structure that is still partly standing […] offers a different survival potential” (2011).


We also know that “damage detectability is primarily a function of image resolution” (2011). The Haiti Earthquake was “one of the first major disasters in which very high resolution satellite and airborne imagery was embraced to delineate the event impact” (2013). It was also “the first time that the PDNA was based on damage assessment produced with remotely sensed data” (2013). Imagery analysis of the impact confirmed that a “moderate resolution increase from 41 cm to 15 cm has profound effects on damage mapping accuracy” (2011). Indeed, a number of validation studies carried out since 2010 have confirmed that “the higher detail airborne imagery performs much better [than lower resolution] satellite imagery” (2013). More specifically, the detection of very heavy damage and destruction in Haiti aerial imagery “is approximately a factor 8 greater than in the satellite imagery.”

Comparing the aerial imagery analysis with field surveys, Lemoine et al. find that “the number of heavily affected and damaged buildings in the aerial point set is slightly higher than that obtained from the field survey” (2013). The correlation between the results of the aerial imagery analysis and field surveys is sensitive to land-use (e.g., commercial, downtown, industrial, residential high density, shanty, etc). In highly populated areas such as shanty zones, “the over-estimation of building damage from aerial imagery could simply be a result of an incomplete field survey while, in downtown, where field surveys seem to have been conducted in a more systematic way, the damage assessment from aerial imagery matches very well the one obtained from the field” (2013).


In sum, the results from Haiti suggest that the “damage assessment from aerial imagery currently represents the best possible compromise between timeliness and accuracy” (2013). The Haiti case study also “showed that the damage derived from satellite imagery was underestimated by a factor of eight, compared to the damage derived from aerial imagery. These results suggest that despite the fast availability of very high resolution satellite imagery […], the spatial resolution of 50 cm is not sufficient for an accurate interpretation of building damage.”

In other words, even though “damage assessments depend very much on the timeliness of the aerial images acquisitions and requires a considerable effort of visual interpretation as an element of compromise; [aerial imagery] remains the best trade-off in terms of required quality and timeliness for producing detailed damage assessments over large affected areas compared to satellite based assessments (insufficient quality) and exhaustive field inventories (too slow).” But there’s a rub with respect to aerial imagery. While the above results do “show that the identification of building damage from aerial imagery […] provides a realistic estimate of the spatial pattern and intensity of damage,” the aerial imagery analysis still “suffers from several limitations due to the nadir imagery” (2013).

“Essentially all conventional airborne and spaceborne image data are taken from a quasi-vertical perspective” (2011). Vertical (or nadir) imagery is particularly useful for a wide range of applications, for sure. But when it comes to damage mapping, vertical data have great limitations, particularly when concerning structural building damage (2011). While complete collapse can be readily identified using vertical imagery (e.g., disintegrated roof structures and associated high texture values, or adjacent rubble piles, etc), “lower levels of damage are much harder to map. This is because such damage effects are largely expressed along the façades, which are not visible in such imagery” (2011). According to Gerke and Kerle, aerial oblique imagery is more useful for disaster damage assessment than aerial or satellite imagery taken with a vertical angle. I elaborated on this point vis-a-vis a more recent study in this blog post.


Clearly, “much of the challenge in detecting damage stems from the complex nature of damage” (2011). For some types and sizes of damage, using only vertical (nadir) imagery will result in missed damage (2013). The question, therefore, is not simply one of spatial resolution, but also of the angle at which the aerial or space-based image is taken, the land-use, and the type of damage that needs to be quantified. Still, we do have a partial answer to the first question. Technically, the optimal spatial resolution for disaster damage assessments is certainly not 50 cm, since 15 cm proved far more useful in Haiti.

Of course, if higher-resolution imagery is not available in time (or at all), then clearly 50 cm imagery is infinitely better than no imagery. In fact, even 5 meter imagery that is available within 24-48 hours of a disaster can add value if this imagery comes with baseline imagery, i.e., imagery of the location of interest before the disaster. Baseline data enables the use of automated change-detection algorithms that can provide a first estimate of damage severity and the location or scope of that severity. What’s more, these change-detection algorithms could automatically plot a series of waypoints to chart the flight plan of an autonomous aerial robot (UAV) to carry out closer inspection. In other words, satellite and aerial data can be complementary, and the drawbacks of low resolution imagery can be offset if said imagery is available at a higher temporal frequency.
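Here is a minimal sketch of that idea in Python: difference two co-registered before/after images, flag the largest change blobs, and emit their centroids as candidate waypoints for closer UAV inspection. The thresholds and file names are assumptions; a real pipeline would also need georeferencing and radiometric normalization.

```python
# Hedged sketch: difference co-registered before/after imagery, flag large
# high-change regions, and return their centroids as candidate waypoints.
import numpy as np
from skimage.io import imread
from skimage.color import rgb2gray
from skimage.util import img_as_float
from skimage.measure import label, regionprops

def _load_gray(path):
    """Load an image as float grayscale in [0, 1]."""
    img = img_as_float(imread(path))
    return rgb2gray(img) if img.ndim == 3 else img

def change_waypoints(before_path, after_path, threshold=0.25, min_area=50):
    """Return (row, col) centroids of large high-change regions."""
    diff = np.abs(_load_gray(after_path) - _load_gray(before_path))
    mask = diff > threshold              # binary "changed" mask
    regions = regionprops(label(mask))   # connected components of the mask
    return [r.centroid for r in regions if r.area >= min_area]

# Usage (paths are placeholders); centroids are pixel coordinates that a
# real pipeline would convert to lat/lon before handing to a flight planner:
# waypoints = change_waypoints("before.tif", "after.tif")
```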


On the flip side, just because the aerial imagery used in the Haiti study was captured at 15 cm resolution does not imply that 15 cm is the optimal spatial resolution for disaster damage. It could very well be 10 cm. This depends entirely on the statistical distribution of the size of damaged features (e.g., the size of a crack, a window, a tile, etc.), the local architecture (e.g., type of building materials used and so on) and the type of hazard (e.g., hurricane, earthquake, etc.). That said, “individual indicators of destruction, such as a roof or façade damage, do not linearly add to a given damage class [e.g., low, medium, high]” (2011). This limitation is alas “fundamental in remote sensing where no assessment of internal structural integrity is possible” (2011).

In any event, one point is certain: there was no need to capture aerial imagery (both nadir and oblique) at 5 centimeter resolution during the World Bank’s humanitarian UAV mission after Cyclone Pam—at least not for the purposes of 2D imagery analysis. This leads me to one final point. As recently noted during my opening Keynote at the 2016 International Drones and Robotics for Good Awards in Dubai, disaster damage is a 3D phenomenon, not a 2D experience. “Buildings are three-dimensional, and even the most detailed view at only one of those dimensions is ill-suited to describe the status of such features” (2011). In other words, we need “additional perspectives to provide a more comprehensive view” (2011).

There are very few peer-reviewed scientific papers that evaluate the use of high-resolution 3D models for the purposes of damage assessments. This one is likely the most up-to-date study. An earlier research effort by Booth et al. found that the overall correlation between the results from field surveys and 3D analysis of disaster damage was “an encouraging 74 percent.” But this visual analysis was carried out manually, which could have introduced non-random errors. After all, “the subjectivity inherent in visual structure damage mapping is considerable” (2011). Could semi-automated methods for the analysis of 3D models thus yield a higher correlation? This is the research question posed by Gerke and Kerle in their 2011 study.


The authors tested this question using aerial imagery from the Haiti Earthquake. When 3D features were removed from their automated analysis, “classification performance declined, for example by some 10 percent for façades, the class that benefited most from the 3D derivates” (2011). The researchers also found that trained imagery analysts agreed at most 76% of the time in their visual interpretation and assessments of aerial data. This is in part due to the lack of standards for damage categories (2011).


I made this point more bluntly in this earlier blog post. Just imagine when you have hundreds of professionals and/or digital volunteers analyzing imagery (e.g., through crowdsourcing) and no standardized categories of disaster damage to inform the consistent interpretation of said imagery. This collective subjectivity introduces a non-random error into the overall analysis. And because it is non-random, this error cannot be accounted for. In contrast, a more semi-automated solution would render this error more random, which means the overall model could potentially be adjusted accordingly.
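A tiny simulation makes the point: zero-mean (random) annotation error averages out as more analysts weigh in, while a shared (non-random) bias does not. The numbers below are invented for illustration.

```python
# Tiny simulation of the argument above: random (zero-mean) annotation error
# averages out across analysts; a shared (non-random) bias does not.
import numpy as np

rng = np.random.default_rng(42)
true_damage = 0.60            # hypothetical true damage score for a building
n_analysts = 200

random_err = true_damage + rng.normal(0, 0.15, n_analysts)          # unbiased noise
shared_bias = true_damage + 0.20 + rng.normal(0, 0.15, n_analysts)  # systematic over-call

print(f"mean with random error: {random_err.mean():.2f}")   # ~0.60, recoverable
print(f"mean with shared bias:  {shared_bias.mean():.2f}")  # ~0.80, unrecoverable
```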

Gerke and Kerle conclude that high quality 3D models are “in principle well-suited for comprehensive semi-automated damage mapping. In particular façades, which are critical [to the assessment process], can be assessed [when] multiple views are provided.” That said, the methodology used by the authors was “still essentially based on projecting 3D data into 2D space, with conceptual and geometric limitations. [As such], one goal should be to perform the actual damage assessment and classification in 3D.” This explains why I’ve been advocating for Virtual Reality (VR) based solutions as described here.


Gerke and Kerle also “required extensive training data and substantial subjective evidence integration in the final damage class assessment. This raises the question to what extent rules could be formulated to create a damage ontology as the basis for per-building damage scoring.” This explains why I invited the Harvard Humanitarian Initiative (HHI) to create a damage ontology based on the aerial imagery from Cyclone Pam in Vanuatu. This ontology is based on 2D imagery, however. Ironically, very high spatial resolution 2D imagery can be more difficult to interpret than lower-res imagery since the high resolution imagery inevitably adds more “noise” to the data.

Ultimately, we’ll need to move on to 3D damage ontologies that can be visualized using VR headsets. 3D analysis is naturally more intuitive to us since we live in a mega-high resolution 3D world rather than a 2D one. As a result, I suspect there would be more agreement between different analysts studying dynamic, very high-resolution 3D models versus 2D static images at the same spatial res.


Taiwan’s National Cheng Kung University created this 3D model from aerial imagery captured by UAV. The model was created and uploaded to Sketchfab on the same day the earthquake struck. Note that Sketchfab recently added a VR feature to their platform, which I tried out on this model. Simply open this page on your mobile device to view the disaster damage in Taiwan in VR. I must say it works rather well, and the model even seems to be higher resolution in VR mode compared to its flat 2D projections. More on the Taiwan model here.


But to be useful for disaster damage assessments, the VR headset would need to be combined with wearable technology that enables the end-user to digitally annotate (or draw on) the 3D models directly within the same VR environment. This would render the analysis process more intuitive while also producing 3D training data for the purposes of machine learning—and thus automated feature detection.

I’m still actively looking for a VR platform that will enable this, so please do get in touch if you know of any group, company, research institute, etc., that would be interested in piloting the 3D analysis of disaster damage from the Nepal Earthquake entirely within a VR solution. Thank you!

Using Aerial Robotics and Virtual Reality to Inspect Earthquake Damage in Taiwan (Updated)

A tragic 6.4 magnitude earthquake struck southern Taiwan shortly before 4 in the morning on Saturday, February 6th. Later in the day, aerial robots were used to capture aerial videos and images of the disaster damage.


Within 10 hours of the earthquake, Dean Hosp at Taiwan’s National Cheng Kung University used screenshots of aerial videos posted on YouTube by various media outlets to create a first 3D model. As such, Dean used “second-hand” data to create the model, which is why it is low resolution. Having the original imagery first-hand would enable a far higher-res 3D model. Says Dean: “If I can fly myself, results can produce more fine and faster.”



Update: About 48 hours after the earthquake, Dean and team used their own UAV to create a much higher resolution version of the model, which they also annotated.



These 3D models were processed using AgiSoft PhotoScan and then uploaded to Sketchfab on the same day the earthquake struck. I’ve blogged about Sketchfab in the past—see this first-ever 3D model of a refugee camp, for example. A few weeks ago, Sketchfab added a Virtual Reality feature to their platform, so I just tried this out on the above model.


The model appears equally crisp when viewed in VR mode on a mobile device (using Google Cardboard in my case). Simply open this page on your mobile device to view the disaster damage in VR. This works rather well; the model does seem to be of high resolution in Virtual Reality as well.


This is a good first step vis-a-vis VR applications. As a second step, we need to develop 3D disaster ontologies to ensure that imagery analysts actually interpret 3D models in the same way. As a third step, we need to combine VR headsets with wearable technology that enables the end-user to annotate (or draw on) the 3D models directly within the same VR environment. This would make the damage assessment process more intuitive while also producing 3D training data for the purposes of machine learning—and thus automated feature detection.

I’m still actively looking for a VR platform that will enable this, so please do get in touch if you know of any group, company, research institute, etc., that would be interested in piloting the 3D analysis of disaster damage from the Taiwan or Nepal Earthquakes entirely within a VR solution. Thank you.

Click here to view 360 aerial visual panoramas of the disaster damage.


Many thanks to Sebastien Hodapp for pointing me to the Taiwan model.