Increasing the Reliability of Aerial Imagery Analysis for Damage Assessments

In March 2015, I was invited by the World Bank to spearhead an ambitious humanitarian aerial robotics (UAV) mission to Vanuatu following Cyclone Pam, a devastating Category 5 Cyclone. This mission was coordinated with Heliwest and X-Craft, two outstanding UAV companies who were identified through the Humanitarian UAV Network (UAViators) Roster of Pilots. You can learn more about the mission and see pictures here. Lessons learned from this mission (and many others) are available here.


The World Bank and partners were unable to immediately analyze the aerial imagery we had collected because they faced a Big Data challenge. So I suggested the Bank activate the Digital Humanitarian Network (DHN) to request digital volunteer assistance. As a result, Humanitarian OpenStreetMap (HOT) analyzed some of the orthorectified mosaics and MicroMappers focused on analyzing the oblique images (more on both here).

This in turn produced a number of challenges. To cite just one, the Bank needed digital humanitarians to identify which houses and buildings were completely destroyed, which were partially damaged and which were largely intact. But there was little guidance on what constituted “completely destroyed” versus “partially damaged”, or on what such structures in Vanuatu look like after cyclone damage. As a result, data quality was not as high as it could have been. In my capacity as consultant for the World Bank’s UAVs for Resilience Program, I decided to do something about this lack of guidelines for imagery interpretation.

I turned to my colleagues at the Harvard Humanitarian Initiative (where I had previously co-founded and co-directed the HHI Program on Crisis Mapping) and invited them to develop a rigorous guide that could inform the consistent interpretation of aerial imagery of disaster damage in Vanuatu (and nearby Island States). Note that Vanuatu is number one on the World Bank’s Risk Index of most disaster-prone countries. The imagery analysis guide has just been published (PDF) by the Signal Program on Human Security and Technology at HHI.

Big thanks to the HHI team for their work on this guide, and to my Bank colleagues and other reviewers for their detailed feedback on earlier drafts. The guide is another important step towards improving data quality for satellite and aerial imagery analysis in the context of damage assessments. Better data quality is also important for the use of Artificial Intelligence (AI) and computer vision, as explained here. If a humanitarian UAV mission does happen in response to the recent disaster in Fiji, then the guide may also be of assistance there, depending on how similar the building materials and architecture are. For now, many thanks to HHI for producing this imagery guide.

Aerial Robotics for Search & Rescue: State of the Art?

WeRobotics is co-creating a global network of labs to transfer robotics solutions to those who need them most. These “Flying Labs” take on different local flavors based on the needs and priorities of local partners. Santiago Flying Labs is one of the labs under consideration. Our local partners in Chile are interested in the application of robotics for disaster preparedness and Search & Rescue (SaR) operations. So what is the state of the art in rescue robotics?


One answer may lie several thousand miles away in Lushan, China, which experienced a 7.0 magnitude earthquake in 2013. The mountainous terrain made it nearly impossible for the Chinese International Search and Rescue Team (CISAR) to carry out a rapid search and post-seismic evaluation. So the State Key Robotics Lab at the Shenyang Institute of Automation offered aerial support to CISAR, using their aerial robot (UAV) to automatically and accurately detect collapsed buildings in order to guide the ground rescue. This saved the SaR teams considerable time. Here’s how.


A quicker SaR response leads to a higher survival rate. The survival rate is around 90% within the first 30 minutes but closer to 20% by day four. “In traditional search methods, ground rescuers are distributed to all possible places, which is time consuming and inefficient.” An aerial inspection of the disaster damage can help accelerate the ground search for survivors by prioritizing which areas to search first.


State Key Labs used a ServoHeli aerial robot to capture live video footage of the damaged region. And this is where it gets interesting. “Because the remains of a collapsed building often fall onto the ground in arbitrary directions, their shapes will exhibit random gradients without particular orientations. Thus, in successive aerial images, the random shape of a collapsed building will lead to particular motion features that can be used to discriminate collapsed from non-collapsed buildings.”


These distinct motion features can be quantified using a histogram of oriented gradient (HOG) as depicted here (click to enlarge):

As is clearly evident from the histograms, the “HOG variation of a normal building will be much larger than that of a collapsed one.” The team at State Key Labs had already employed this technique to train and test their automated feature-detection algorithm using aerial video footage from the 2010 Haiti Earthquake. Sample results of this are displayed below. Red rectangles denote where the algorithm was successful in identifying damage. Blue rectangles are false alarms while orange rectangles are missed detections.
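To make the intuition concrete, here is a minimal Python sketch (toy images only, not the State Key Lab's actual code) of how an orientation histogram separates an intact structure from rubble: an intact building's edges share one dominant orientation, so its histogram is peaked and the variance across bins is high, whereas rubble spreads gradients across all orientations.

```python
import math
import random

def orientation_histogram(img, bins=8):
    """Magnitude-weighted histogram of gradient orientations for a grayscale
    image given as a list of lists; normalized to sum to 1."""
    hist = [0.0] * bins
    h, w = len(img), len(img[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]       # horizontal gradient
            gy = img[y + 1][x] - img[y - 1][x]       # vertical gradient
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            ang = math.atan2(gy, gx) % math.pi       # unsigned orientation [0, pi)
            hist[min(int(ang / math.pi * bins), bins - 1)] += mag
    total = sum(hist) or 1.0
    return [v / total for v in hist]

def hog_variance(hist):
    """Variance across histogram bins: high when one edge direction dominates."""
    mean = sum(hist) / len(hist)
    return sum((v - mean) ** 2 for v in hist) / len(hist)

# Intact building facade: regular vertical edges (one dominant orientation).
intact = [[255 if x % 4 == 0 else 0 for x in range(32)] for _ in range(32)]
# Collapsed building: rubble with edges in arbitrary directions.
rng = random.Random(0)
rubble = [[rng.randrange(256) for _ in range(32)] for _ in range(32)]

v_intact = hog_variance(orientation_histogram(intact))
v_rubble = hog_variance(orientation_histogram(rubble))
assert v_intact > v_rubble   # peaked histogram for the regular structure
```

In the paper's setup the same idea is applied over successive video frames; here a single synthetic frame per class is enough to show the direction of the effect.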


While the team achieved increasingly accurate detection rates for Haiti, the initial results for Lushan were not as robust; Lushan is far more rural than Port-au-Prince, which tripped up the algorithm. Eventually, however, the software achieved an accuracy rate of 83.4% without any missed collapses. The use of aerial robotics and automated feature-detection algorithms in Xinglong Village (9.5 sq. km) enabled CISAR to cut their search time in half. In sum, the team concluded that videos are more valuable for SaR operations than static images.
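As an aside, the red/blue/orange counts in figures like these map directly onto standard detection metrics. A small sketch, using made-up counts rather than the paper's actual numbers:

```python
def detection_scores(correct, false_alarms, misses):
    """Precision and recall from counts of correct detections (red),
    false alarms (blue) and missed detections (orange)."""
    precision = correct / (correct + false_alarms)
    recall = correct / (correct + misses)
    return precision, recall

# Hypothetical counts for one aerial frame:
precision, recall = detection_scores(correct=25, false_alarms=5, misses=0)
assert recall == 1.0   # "no missed collapses" means perfect recall
```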


To learn more about this deployment, see the excellent write-up “Search and Rescue Rotary-Wing UAV and Its Application to the Lushan Ms 7.0 Earthquake” published in the Journal of Field Robotics. I wish all robotics deployments were this well documented. Another point that I find particularly noteworthy about this operation is that it was conducted three years ago already. In other words, real-time feature detection of disaster damage from live aerial video footage was already used operationally years ago.

What’s more, this paper published in 2002 (!) used computer vision to detect collapsed buildings with 90% accuracy in aerial footage of the 1995 Kobe earthquake captured by television crews in helicopters. Perhaps in the near future we’ll have automated feature-detection algorithms for disaster damage assessments running on live video footage from news channels and aerial robots. These could then be complemented by automated change-detection algorithms running on satellite imagery. In any event, the importance of applied research is clearly demonstrated by the Lushan deployments. This explains why WeRobotics always aims to involve local universities in Flying Labs.


Thanks to the ICARUS Team for pointing me to this deployment.

When Bushmen Race Aerial Robots to Protect Wildlife


The San people are bushmen who span multiple countries in Southern Africa. “For millennia, they have lived as hunter-gatherers in the Kalahari desert relying on an intimate knowledge of the environment.” Thus begins Sonja’s story, written with Matthew Parkan. Friend and colleague Sonja Betschart is a fellow co-founder at WeRobotics. In 2015, she teamed up with partners in Namibia to support their wildlife conservation efforts. Sonja and team used aerial robots (UAVs) to count the wild animals in Kuzikus Wildlife Reserve. All the images in this blog post are credited to Drone Adventures.


If that name sounds familiar, it’s because of this digital crowdsourcing project, which I spearheaded with the same Wildlife Reserve back in 2014. The project, which was covered on CNN, used crowdsourcing to identify wild animals in very high-resolution aerial imagery. As Sonja notes, having updated counts for the number of animals in each species is instrumental to nature conservation efforts.

While the crowdsourcing project worked rather well for identifying the presence of animals, distinguishing between species is more difficult for “the crowd,” even when crowdsourcing is combined with Artificial Intelligence. Local indigenous knowledge is unbeatable, or is it? They decided to put this indigenous technical knowledge to the test: find the baby rhino that had been spotted in the area earlier during the mission. Armed with the latest aerial robotics technology, Sonja and team knew full well that the robots would be stiff, even unfair, competition for the bushmen. After all, the aerial robots could travel much faster and much farther than the fastest bushman in Namibia.

Game on: Bushmen vs Robots. Ready, get set, go!


The drones weren’t even up in the air when the bushmen strolled back into camp to make themselves a cup of tea. Yes, they had already identified the location of the rhino. “This technologically humbling experience was a real eye opener,” writes Sonja, “a clear demonstration of how valuable traditional skills are despite new technologies. It was also one of the reasons why we started thinking about ways to involve the trackers in our work.”

They invited the bushmen to analyze the thousands of aerial images taken by the aerial robots to identify the various animal species visible in each image. After providing the San with a crash course on how to use a computer (to navigate between images), the bushmen “provided us with consistent interpretations of the species and sometimes even of the tracks in the high grass. However, what impressed us most was that the San pointed out subtle differences in the shadows [of the animals] that would give away the animals’ identities […].”
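Crowdsourced or expert annotations like these are typically reconciled per image by majority vote before being used for species counts. A minimal sketch with hypothetical labels:

```python
from collections import Counter

def consensus_label(labels):
    """Most common species label among annotators, plus the agreement ratio."""
    label, votes = Counter(labels).most_common(1)[0]
    return label, votes / len(labels)

# Hypothetical: three trackers label the same animal in one aerial image.
species, agreement = consensus_label(["oryx", "oryx", "zebra"])
assert species == "oryx" and agreement == 2 / 3
```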


Take this aerial image, for example. See anything? “The San identified three different species on this image: oryx (red circles), blesbok (blue circle) and zebra (green circle).” In another image, “the bushmen were even able to give us the name of one of the Black Rhinos they spotted! For people who had never seen aerial imagery, their performance was truly remarkable.” Clearly, “with a bit of additional training, the San trackers could easily become expert image analysts.”

As Sonja rightly concluded after the mission, the project in Namibia was yet more evidence that combining indigenous knowledge with modern technology is both an elegant way of keeping this knowledge alive and an effective approach to address some of our technical and scientific limitations. As such, this is precisely the kind of project that Sonja and I want to see WeRobotics launch in 2016. It is fully in line with our mission to democratize the Fourth Industrial Revolution.


“The Namibia mission was part of SAVMAP, a joint initiative between the Laboratory of Geographic Information Systems (LASIG) of EPFL, Kuzikus Wildlife Reserve & Drone Adventures that aims at better integrating geospatial technologies with conservation and wildlife management in the Kalahari.”

360° Aerial View of Taiwan Earthquake Damage

The latest news coming from Taiwan is just tragic and breaks my heart. More than 100 may still be trapped under the rubble. I’ve been blogging about the role that emerging technologies can play in humanitarian response since 2008 but it definitely doesn’t get any easier emotionally to witness these tragic events. I try to draw a psychological line when I can so as not to lose hope and fall apart, but I’m not always successful. Sometimes it is just easier psychologically to focus on the technology and nothing else. My thoughts go out to all the families affected.


My colleague Preston Ward from DronePan kindly connected me to a friend of his, Elton, in Taiwan. Elton is with Super720, a company that creates impressive Virtual Reality (VR) panoramas. They deployed within 4 hours of the earthquake and used aerial robots (UAVs) at 5 different locations to capture both 360° aerial panoramas and 4K video to document the earthquake damage.

Click on the image below to view the pano in full 360° (takes a while to load).

360° visual analysis may prove useful and complementary for inspecting disaster damage. If there were a way to geo-reference and orthorectify these panos, they could potentially play a larger role in the disaster assessment process which uses geospatial data and Geographic Information Systems (GIS).

The team in Taiwan also shared this aerial video footage of the damage:

As one of my upcoming blog posts will note, we should be able to run computer vision algorithms on aerial videos to automatically detect disaster damage. We should also be able to use Virtual Reality to carry out damage assessments. You can already use VR to inspect the earthquake damage in Taiwan here.

Assessing Disaster Damage: How Close Do You Need to Be?

“What is the optimal spatial resolution for the analysis of disaster damage?”

I posed this question during an hour-long presentation I gave to the World Bank in May 2015. The talk was on the use of remote sensing aerial robotics (UAVs) for disaster damage assessments; I used Cyclone Pam in Vanuatu and the double Nepal Earthquakes as case studies.

One advantage of aerial robotics over space robotics (satellites) is that the former can capture imagery at far higher spatial resolutions (sub 1-centimeter if need be). Without hesitation, a World Bank analyst in the conference room replied to my question: “Fifty centimeters.” I was taken aback by the rapid reply. Had I perhaps missed something? Was it so obvious that 50 cm resolution was optimal? So I asked, “Why 50 centimeters?” Again, without hesitation: “Because that’s the resolution we’ve been using.” Ah ha! “But how do you know this is the optimal resolution if you haven’t tried 30 cm or even 10 cm?”

Let’s go back to the fundamentals. We know that “rapid damage assessment is essential after disaster events, especially in densely built up urban areas where the assessment results provide guidance for rescue forces and other immediate relief efforts, as well as subsequent rehabilitation and reconstruction. Ground-based mapping is too slow, and typically hindered by disaster-related site access difficulties” (Gerke & Kerle 2011). Indeed, studies have shown that the inability to physically access damaged areas results in field teams underestimating the damage (Lemoine et al. 2013). Hence one reason for the use of remote sensing.

We know that remote sensing can be used for two purposes following disasters. The first, “Rapid Mapping”, aims at providing impact assessments as quickly as possible after a disaster. The second, “Economic Mapping”, assists in quantifying the economic impact of the damage. “Major distinctions between the two categories are timeliness, completeness and accuracies” (2013). In addition, Rapid Mapping aims to identify the relative severity of the damage (low, medium, high) rather than absolute damage figures. Results from Economic Mapping are combined with other geospatial data (building size, building type, etc.) and economic parameters (e.g., cost per unit area, relocation costs, etc.) to compute a total cost estimate (2013). The Post-Disaster Needs Assessment (PDNA) is an example of Economic Mapping.
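The Economic Mapping computation described above can be sketched as follows. The per-building records, loss ratios and unit costs are illustrative assumptions, not figures from the studies cited:

```python
# Illustrative loss ratios per damage class (assumed for this sketch):
LOSS_RATIO = {"destroyed": 1.0, "partial": 0.4, "intact": 0.0}

def total_damage_cost(buildings):
    """Total cost = sum of loss_ratio * footprint area * unit cost + relocation."""
    return sum(
        LOSS_RATIO[b["damage"]] * b["area_m2"] * b["cost_per_m2"] + b["relocation"]
        for b in buildings
    )

# Hypothetical per-building records produced by imagery analysis + GIS layers:
buildings = [
    {"damage": "destroyed", "area_m2": 120, "cost_per_m2": 350, "relocation": 2000},
    {"damage": "partial",   "area_m2": 200, "cost_per_m2": 350, "relocation": 0},
    {"damage": "intact",    "area_m2": 150, "cost_per_m2": 350, "relocation": 0},
]
cost = total_damage_cost(buildings)   # 44,000 + 28,000 + 0
assert cost == 72000
```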

It is worth noting that a partially destroyed building may be seen as a complete economic loss, identical to a totally destroyed structure (2011). “From a casualty / fatality assessment perspective, however, a structure that is still partly standing […] offers a different survival potential” (2011).


We also know that “damage detectability is primarily a function of image resolution” (2011). The Haiti Earthquake was “one of the first major disasters in which very high resolution satellite and airborne imagery was embraced to delineate the event impact” (2013). It was also “the first time that the PDNA was based on damage assessment produced with remotely sensed data” (2013). Imagery analysis of the impact confirmed that a “moderate resolution increase from 41 cm to 15 cm has profound effects on damage mapping accuracy” (2011). Indeed, a number of validation studies carried out since 2010 have confirmed that “the higher detail airborne imagery performs much better [than lower resolution] satellite imagery” (2013). More specifically, the detection of very heavy damage and destruction in Haiti aerial imagery “is approximately a factor 8 greater than in the satellite imagery.”

Comparing the aerial imagery analysis with field surveys, Lemoine et al. find that “the number of heavily affected and damaged buildings in the aerial point set is slightly higher than that obtained from the field survey” (2013). The correlation between the results of the aerial imagery analysis and field surveys is sensitive to land-use (e.g., commercial, downtown, industrial, residential high density, shanty, etc). In highly populated areas such as shanty zones, “the over-estimation of building damage from aerial imagery could simply be a result of an incomplete field survey while, in downtown, where field surveys seem to have been conducted in a more systematic way, the damage assessment from aerial imagery matches very well the one obtained from the field” (2013).


In sum, the results from Haiti suggest that the “damage assessment from aerial imagery currently represents the best possible compromise between timeliness and accuracy” (2013). The Haiti case study also “showed that the damage derived from satellite imagery was underestimated by a factor of eight, compared to the damage derived from aerial imagery. These results suggest that despite the fast availability of very high resolution satellite imagery […], the spatial resolution of 50 cm is not sufficient for an accurate interpretation of building damage.”

In other words, even though “damage assessments depend very much on the timeliness of the aerial images acquisitions and requires a considerable effort of visual interpretation as an element of compromise; [aerial imagery] remains the best trade-off in terms of required quality and timeliness for producing detailed damage assessments over large affected areas compared to satellite based assessments (insufficient quality) and exhaustive field inventories (too slow).” But there’s a rub with respect to aerial imagery. While the above results do “show that the identification of building damage from aerial imagery […] provides a realistic estimate of the spatial pattern and intensity of damage,” the aerial imagery analysis still “suffers from several limitations due to the nadir imagery” (2013).

“Essentially all conventional airborne and spaceborne image data are taken from a quasi-vertical perspective” (2011). Vertical (or nadir) imagery is particularly useful for a wide range of applications, for sure. But when it comes to damage mapping, vertical data have great limitations, particularly concerning structural building damage (2011). While complete collapse can be readily identified using vertical imagery (e.g., disintegrated roof structures and associated high texture values, or adjacent rubble piles, etc.), “lower levels of damage are much harder to map. This is because such damage effects are largely expressed along the façades, which are not visible in such imagery” (2011). According to Gerke and Kerle, aerial oblique imagery is more useful for disaster damage assessment than aerial or satellite imagery taken at a vertical angle. I elaborated on this point vis-a-vis a more recent study in this blog post.


Clearly, “much of the challenge in detecting damage stems from the complex nature of damage” (2011). For some types and sizes of damage, using only vertical (nadir) imagery will result in missed damage (2013). The question, therefore, is not simply one of spatial resolution, but also of the angle at which the aerial or space-based image is taken, the land-use and the type of damage that needs to be quantified. Still, we do have a partial answer to the first question. Technically, the optimal spatial resolution for disaster damage assessments is certainly not 50 cm, since 15 cm proved far more useful in Haiti.

Of course, if higher-resolution imagery is not available in time (or at all), then clearly 50 cm imagery is far better than no imagery. In fact, even 5 meter imagery that is available within 24-48 hours of a disaster can add value if it comes with baseline imagery, i.e., imagery of the location of interest before the disaster. Baseline data enables the use of automated change-detection algorithms that can provide a first estimate of damage severity and the location or scope of that severity. What’s more, these change-detection algorithms could automatically plot a series of waypoints to chart the flight plan of an autonomous aerial robot (UAV) for closer inspection. In other words, satellite and aerial data can be complementary, and the drawbacks of low resolution imagery can be offset if said imagery is available at a higher temporal frequency.
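A toy sketch of that pipeline: flag cells that changed markedly between the baseline and post-disaster grids, then turn the flagged cells into waypoints for a follow-up UAV survey. The grids, threshold and coordinates are all hypothetical:

```python
def changed_cells(before, after, threshold=50):
    """Flag grid cells whose brightness changed by more than the threshold."""
    flagged = []
    for y, (row_b, row_a) in enumerate(zip(before, after)):
        for x, (b, a) in enumerate(zip(row_b, row_a)):
            if abs(a - b) > threshold:
                flagged.append((x, y))
    return flagged

def to_waypoints(cells, origin=(-16.5, 168.3), cell_deg=0.001):
    """Map flagged cells to (lat, lon) waypoints for closer aerial inspection."""
    lat0, lon0 = origin
    return [(lat0 + y * cell_deg, lon0 + x * cell_deg) for x, y in cells]

before = [[100, 100, 100],
          [100, 100, 100]]
after  = [[100, 180, 100],
          [100, 100,  30]]

cells = changed_cells(before, after)
assert cells == [(1, 0), (2, 1)]      # two cells changed beyond the threshold
waypoints = to_waypoints(cells)
```

A real implementation would of course work on co-registered rasters rather than toy lists, but the flag-then-fly logic is the same.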


On the flip side, just because the aerial imagery used in the Haiti study was captured at 15 cm resolution does not imply that 15 cm is the optimal spatial resolution for disaster damage. It could very well be 10 cm. This depends entirely on the statistical distribution of the size of damaged features (e.g., the size of a crack, a window, a tile, etc.), the local architecture (e.g., the type of building materials used) and the type of hazard (e.g., hurricane, earthquake, etc.). That said, “individual indicators of destruction, such as a roof or façade damage, do not linearly add to a given damage class [e.g., low, medium, high]” (2011). This limitation is alas “fundamental in remote sensing where no assessment of internal structural integrity is possible” (2011).

In any event, one point is certain: there was no need to capture aerial imagery (both nadir and obliques) at 5 centimeter resolution during the World Bank’s humanitarian UAV mission after Cyclone Pam—at least not for the purposes of 2D imagery analysis. This leads me to one final point. As recently noted during my opening Keynote at the 2016 International Drones and Robotics for Good Awards in Dubai, disaster damage is a 3D phenomenon, not a 2D experience. “Buildings are three-dimensional, and even the most detailed view at only one of those dimensions is ill-suited to describe the status of such features” (2011). In other words, we need “additional perspectives to provide a more comprehensive view” (2011).

There are very few peer-reviewed scientific papers that evaluate the use of high-resolution 3D models for the purposes of damage assessments. This one is likely the most up-to-date study. An earlier research effort by Booth et al. found that the overall correlation between the results from field surveys and 3D analysis of disaster damage was “an encouraging 74 percent.” But this visual analysis was carried out manually, which could have introduced non-random errors. After all, “the subjectivity inherent in visual structure damage mapping is considerable” (2011). Could semi-automated methods for the analysis of 3D models thus yield a higher correlation? This is the research question posed by Gerke and Kerle in their 2011 study.


The authors tested this question using aerial imagery from the Haiti Earthquake. When 3D features were removed from their automated analysis, “classification performance declined, for example by some 10 percent for façades, the class that benefited most from the 3D derivates” (2011). The researchers also found that trained imagery analysts only agreed at most 76% of the time in their visual interpretation and assessments of aerial data. This is in part due to the lack of standards for damage categories (2011).


I made this point more bluntly in this earlier blog post. Just imagine hundreds of professionals and/or digital volunteers analyzing imagery (e.g., through crowdsourcing) with no standardized categories of disaster damage to inform the consistent interpretation of said imagery. This collective subjectivity introduces a non-random error into the overall analysis. And because it is non-random, this error cannot be accounted for. In contrast, a more semi-automated solution would render this error more random, which means the overall model could potentially be adjusted accordingly.
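A quick simulation illustrates the distinction: unbiased (random) analyst error averages out as more assessments are pooled, while a shared (non-random) interpretation bias survives any amount of averaging. The numbers are made up for illustration:

```python
import random

rng = random.Random(42)
true_damage = 0.30   # hypothetical true fraction of destroyed buildings

# Random error: each analyst's estimate is unbiased noise around the truth.
random_est = [true_damage + rng.gauss(0, 0.05) for _ in range(1000)]

# Non-random error: every analyst shares the same interpretation bias.
bias = 0.10
biased_est = [true_damage + bias + rng.gauss(0, 0.05) for _ in range(1000)]

mean = lambda xs: sum(xs) / len(xs)
assert abs(mean(random_est) - true_damage) < 0.01    # noise cancels out
assert abs(mean(biased_est) - true_damage) > 0.05    # bias does not
```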

Gerke and Kerle conclude that high quality 3D models are “in principle well-suited for comprehensive semi-automated damage mapping. In particular façades, which are critical [to the assessment process], can be assessed [when] multiple views are provided.” That said, the methodology used by the authors was “still essentially based on projecting 3D data into 2D space, with conceptual and geometric limitations. [As such], one goal should be to perform the actual damage assessment and classification in 3D.” This explains why I’ve been advocating for Virtual Reality (VR) based solutions as described here.


Gerke and Kerle also “required extensive training data and substantial subjective evidence integration in the final damage class assessment. This raises the question to what extent rules could be formulated to create a damage ontology as the basis for per-building damage scoring.” This explains why I invited the Harvard Humanitarian Initiative (HHI) to create a damage ontology based on the aerial imagery from Cyclone Pam in Vanuatu. This ontology is based on 2D imagery, however. Ironically, very high spatial resolution 2D imagery can be more difficult to interpret than lower-resolution imagery, since the higher resolution inevitably adds more “noise” to the data.

Ultimately, we’ll need to move on to 3D damage ontologies that can be visualized using VR headsets. 3D analysis is naturally more intuitive to us since we live in a mega-high resolution 3D world rather than a 2D one. As a result, I suspect there would be more agreement between different analysts studying dynamic, very high-resolution 3D models versus 2D static images at the same spatial res.


Taiwan’s National Cheng Kung University created this 3D model from aerial imagery captured by UAV. This model was created and uploaded to Sketchfab on the same day the earthquake struck. Note that Sketchfab recently added a VR feature to their platform, which I tried out on this model. Simply open this page on your mobile device to view the disaster damage in Taiwan in VR. I must say it works rather well, and even seems to be higher resolution in VR mode compared to the 2D projections of the 3D model above. More on the Taiwan model here.


But to be useful for disaster damage assessments, the VR headset would need to be combined with wearable technology that enables the end-user to digitally annotate (or draw on) the 3D models directly within the same VR environment. This would render the analysis process more intuitive while also producing 3D training data for the purposes of machine learning—and thus automated feature detection.

I’m still actively looking for a VR platform that will enable this, so please do get in touch if you know of any group, company, research institute, etc., that would be interested in piloting the 3D analysis of disaster damage from the Nepal Earthquake entirely within a VR solution. Thank you!

Using Aerial Robotics and Virtual Reality to Inspect Earthquake Damage in Taiwan (Updated)

A tragic 6.4 magnitude earthquake struck southern Taiwan shortly before 4 in the morning on Saturday, February 6th. Later in the day, aerial robots were used to capture aerial videos and images of the disaster damage, like those below.


Within 10 hours of the earthquake, Dean Hosp at Taiwan’s National Cheng Kung University used screenshots of aerial videos posted on YouTube by various media outlets to create the 3D model below. As such, Dean used “second-hand” data to create the model, which is why it is low resolution. Having the original imagery first-hand would enable a far higher-resolution 3D model. Says Dean: “If I can fly myself, results can produce more fine and faster.”


Update: About 48 hours after the earthquake, Dean and team used their own UAV to create this much higher resolution version (see below), which they also annotated (click to enlarge).


Here’s the embedded 3D model:

These 3D models were processed using AgiSoft PhotoScan and then uploaded to Sketchfab on the same day the earthquake struck. I’ve blogged about Sketchfab in the past—see this first-ever 3D model of a refugee camp, for example. A few weeks ago, Sketchfab added a Virtual Reality feature to their platform, so I just tried this out on the above model.


The model appears equally crisp when viewed in VR mode on a mobile device (using Google Cardboard in my case). Simply open this page on your mobile device to view the disaster damage in VR. This works rather well; the model does seem to be of high resolution in Virtual Reality as well.


This is a good first step vis-a-vis VR applications. As a second step, we need to develop 3D disaster ontologies to ensure that imagery analysts actually interpret 3D models in the same way. As a third step, we need to combine VR headsets with wearable technology that enables the end-user to annotate (or draw on) the 3D models directly within the same VR environment. This would make the damage assessment process more intuitive while also producing 3D training data for the purposes of machine learning—and thus automated feature detection.

I’m still actively looking for a VR platform that will enable this, so please do get in touch if you know of any group, company, research institute, etc., that would be interested in piloting the 3D analysis of disaster damage from the Taiwan or Nepal Earthquakes entirely within a VR solution. Thank you.

Click here to view 360 aerial visual panoramas of the disaster damage.


Many thanks to Sebastien Hodapp for pointing me to the Taiwan model.

From Aerial to Maritime Robotics for Payload Delivery?

WeRobotics is co-creating a global network of social innovation labs to accelerate the transfer of appropriate robotics solutions to local partners who need them the most. Launched last week at the prestigious 2016 International Drones & Robotics for Good Awards in Dubai, WeRobotics seeks to democratize the Fourth Industrial Revolution. One of several new labs under consideration is Cuyo Flying Labs in the Cuyo Archipelago of the Philippines.


Our local partners in this district are outstanding public health professionals who have made a clear and data-driven case for exploring robotics solutions to improve the delivery of essential medicines across the islands of the Archipelago. WeRobotics has reached out to its network of Technology Partners to identify a cross-section of aerial robotics solutions. We plan to pilot and evaluate these solutions together with our local partners this year; we’re just waiting to hear back on a number of grant proposals.


Aerial robots are not the only robotics solutions out there, however. Indeed, a number of contestants at the International Drones & Robotics Awards presented maritime robotics platforms. One team from NYU Abu Dhabi presented their Reef Rover solution, which I gave high marks to (I served on the panel of judges for the finals of the championship and also had the honor of giving the opening Keynote). Fellow keynote speaker David Lang from OpenROV also presented his group’s maritime robotics solution.

So I caught up with David and told him about the possible Cuyo project. I asked him about possible autonomous maritime robotics solutions that could carry a 1-2 kg payload across a 20-mile stretch. Given that GPS signals and radio waves do not travel well underwater, the solution would need to be a surface-water robot. That said, Team NYU did come up with a neat solution to this issue, as shown in the above video. But that solution may be problematic for long distances. In any event, I was surprised to learn from David that existing surface-water robotics solutions would likely cost in the hundreds of thousands of dollars. So I asked David about the possibility of hacking a remote-control boat, since these toys cost a couple hundred dollars at most.


This could totally be done, according to David. He encouraged me to blog about the humanitarian applications of small autonomous boats and to share my post with the DIY community for feedback. So here we are. I also did some research on autonomous boats and came across this high school project—aptly titled DIY boat—along with the short video below. These kinds of boats can travel at 10-25 miles per hour. Battery power won’t be an issue, but salt water and robotics are not the best of friends. So developing a persistent and robust solution with sense-and-avoid technology will need more thought.

I also found this more recent article (February 2, 2016) on swarming robot boats that apparently demonstrate learning. More on this applied research project in the video below.

As for operating this in Cuyo, we could test the autonomous boat late at night, when no commercial boats or fishermen are likely to be out and about. The water routes would be programmed in advance so the maritime robot would simply do several runs back and forth. In other words, even if the boat can “only” carry 1-2 kg at a time, the frequency of the trips could make up for the low payload weight, just like with aerial robotics. Of course, we’d have to look up the Philippines’ maritime regulations to figure out what kinds of permits might be necessary, but I assume these regulations may not be as demanding as aviation regulations.
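Back-of-the-envelope numbers for such a shuttle run are easy to work out; the cargo weight, payload and speed below are hypothetical:

```python
import math

def shuttle_plan(total_kg, payload_kg, distance_miles, speed_mph):
    """Number of trips and total travel time for a fixed back-and-forth route."""
    trips = math.ceil(total_kg / payload_kg)
    hours = trips * 2 * distance_miles / speed_mph   # out and back per trip
    return trips, hours

# Hypothetical: 10 kg of medicines, 2 kg payload, 20-mile route at 15 mph.
trips, hours = shuttle_plan(10, 2, 20, 15)
assert trips == 5
print(f"{trips} trips, {hours:.1f} hours of travel")   # about 13.3 hours
```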


In any event, WeRobotics seeks to work across multiple robotics platforms, from aerial robotics to terrestrial robotics to maritime robotics. For now, though, most of the world’s attention is focused on the hype surrounding small autonomous aerial robots. But aerial robotics won’t be the one and only wave of robotics to impact the social good space. So if you know of existing maritime solutions that we could pilot this year as part of Cuyo Flying Labs, I’d be most grateful if you could either get in touch or leave a comment below. Many thanks!