The Most Comprehensive Study on Drones in Humanitarian Action

In August 2015, the Swiss humanitarian organization FSD kindly hired me as a consultant to work on the EU-funded Drones in Humanitarian Action program. I had the pleasure of working closely with the FSD team over the past 16 months. Today represents the exciting culmination of a lot of hard work by many dedicated individuals.

Today we're launching our comprehensive report on "Drones in Humanitarian Action: A Guide to the Use of Airborne Systems in Humanitarian Crises." The full report is available here (PDF). Our study draws on extensive research and many, many consultations carried out over a year and a half. The report covers the principal actors & technologies along with key applications and case studies on both mapping and cargo drones. Note that the section on cargo delivery is drawn from a larger 20+ page study I carried out. Please contact me directly if you'd like a copy of this more detailed study. In the meantime, I want to sincerely thank my fellow co-authors Denise Soesilo, Audrey Lessard-Fontaine, Jessica Du Plessis & Christina Stuhlberger for this productive and meaningful collaboration.


Aerial Robotics and Agriculture: Opportunities for the Majority World

The majority of studies and articles on the use of drones/UAVs for agriculture seem to focus on examples and opportunities in the US, Europe or Japan. These reports talk about the need for large-scale aerial surveys over massive farms, machine learning algorithms for automated crop detection, and the development of sophisticated forecasting models to inform decisions at the very micro level. This is the realm of precision agriculture. But what about small-holder and family farms in the Majority World? Do flying robots make sense for them? Yes, in some cases, but not in the same way that this technology makes sense for large farms in highly industrialized countries.

First things first, smallholder and family farms won't have as much need for long-range fixed-wing UAVs as ranchers do in the US. According to this FAO study (PDF), smallholder farms typically cover less than 0.02 square kilometers. Secondly, these farms do not necessarily need access to very high-resolution, orthorectified mosaics or fancy 3D models. Mosaics and 3D models require data-processing software; that software requires a computer to run it, not to mention the time to learn how to use it. Without software, a farmer could still upload her aerial images to the cloud for processing, but that requires a reliable and relatively fast Internet connection. Data processing also means having to store that data before and after processing. So now the farmer needs software, a computer and a hard disk (or two for backup).
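
To make the storage-and-bandwidth point concrete, here is a rough back-of-envelope sketch in Python. The survey size, image size and connection speed are illustrative assumptions, not figures from the FAO study or from any particular farm.

```python
# Rough estimate of what cloud processing asks of a smallholder farmer's
# internet connection. All figures below are illustrative assumptions.

images_per_survey = 200      # assumed number of photos for a small plot
image_size_mb = 5.0          # assumed size of one geotagged JPEG (MB)
uplink_mbps = 2.0            # assumed upload speed (megabits per second)

total_mb = images_per_survey * image_size_mb
upload_hours = (total_mb * 8) / uplink_mbps / 3600

print(f"Survey size: {total_mb:.0f} MB")
print(f"Upload time at {uplink_mbps} Mbps: {upload_hours:.1f} hours")
# ~1,000 MB at 2 Mbps works out to roughly 1.1 hours of sustained uploading,
# before any processing time or downloading of the finished mosaic.
```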

I’m not suggesting that very high-resolution orthorectified mosaics, 3D models and multi-spectral sensors cannot add value to smallholder and family farms. Of course they can. Farmers have always needed accurate as well as up-to-date information on their crops and on the environmental conditions of the land on which their crops grow. I’m just suggesting a more practical approach to begin with. Simply getting a live video feed from a bird’s eye view can already reveal patterns that show everything from irrigation problems to soil variation and even pest and fungal infestations that aren’t apparent at eye level.

At the end of the day, farming is an input-output problem. Local farmers can use a live video feed from the sky to reduce their inputs—water and pesticides—while maintaining the same output. But let's unpack that a bit. Once a farmer detects an irrigation problem from the video feed, they don't need another piece of robotics tech to intervene. They can easily see where the drone is hovering and simply walk over to the area with basic tools to intervene as needed. Smallholder and family farms do not have access to variable-rate sprayers and other fancy tractor tools that can take precision data and respond with precision interventions. So very high-res mosaics and 3D models may add little value in the context of smallholder farms in developing countries.


Of course, some farmers may prefer to pay consultants or local companies to carry out these aerial surveys instead of leasing or owning a drone and carrying out the surveys themselves. In fact, some companies actually found it too “tedious to teach farmers how to use the drones they had made. Instead, they decided to focus on providing the much-needed service of mapping out farms and sites” (1). In stark contrast, my team & I at WeRobotics have really enjoyed training local partners in Nepal and Tanzania. We don’t find it tedious at all but rather highly rewarding. Building local capacity around the use of appropriate robotics solutions goes to the heart of our mission.

This explains why we’re creating local robotics labs—we call them Flying Labs—to transfer the skills and technologies that our partners need to scale the positive impact of their local efforts. So we’re especially keen to work with smallholder and family farms so they can use robotics solutions to improve their yields. They could lease small drones from the labs for a nominal fee, and I’m willing to bet that some savvy young men and women working on these farms will be keen to learn a new set of skills that could lead to an increase in income. We’re also keen to work with local drone consultants or local companies to enable them to expand their services to include agriculture. The key, either way, is to design and deliver effective trainings to local farmers, consultants and/or companies while providing each with long-term support through the Flying Labs. 


Thanks to colleagues from WeRobotics for feedback on an earlier version of this post. I'm keen to receive additional input from iRevolution readers.

What Happens When the Media Sends Drone Teams to Disasters?

Media companies like AFP, CNN and others are increasingly capturing dramatic aerial footage following major disasters around the world. These companies can be part of the solution when it comes to adding value to humanitarian efforts on the ground. But they can also be a part of the problem.


Media teams are increasingly showing up to disasters with small drones (UAVs) to document the damage. They’re at times the first with drones on the scene and thus able to quickly capture dramatic aerial footage of the devastation below. These media assets lead to more views and thus traffic on news websites, which increases the probability that more readers click on ads. Cue Noam Chomsky’s Manufacturing Consent: The Political Economy of the Mass Media, my favorite book whilst in high school.

Aerial footage can also increase situational awareness for disaster responders if that footage is geo-located. Labeling individual scenes in video footage with the name of the towns or villages being flown over would go a long way. This is what I asked one journalist to do in the aftermath of the Nepal Earthquake after he sent me dozens of his aerial videos. I also struck up an informal agreement with CNN to gain access to their raw aerial footage in future disasters. On a related note, I was pleased when my CNN contact expressed an interest in following the Humanitarian UAV Code of Conduct.
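
For illustration, here is a minimal sketch of what that kind of scene-level labeling could look like in practice: a simple table that ties time ranges in a video to the place being overflown. The place names, coordinates and timings are hypothetical examples, not data from the Nepal footage.

```python
# Hypothetical scene labels for one aerial video: each row ties a time range
# (in seconds) to the town or village being flown over.
import csv

scene_labels = [
    # (start_s, end_s, place, lat, lon) -- illustrative values only
    (0,   95,  "Sankhu",   27.758, 85.476),
    (95,  210, "Chautara", 27.784, 85.713),
    (210, 340, "Melamchi", 27.833, 85.567),
]

with open("aerial_video_scene_labels.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["start_s", "end_s", "place", "lat", "lon"])
    writer.writerows(scene_labels)

# Responders can now match each segment of footage to a point on a crisis map
# without having to scrub through the raw video themselves.
```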

In an ideal world, there would be a network of professional drone journalists with established news agencies that humanitarian organizations could quickly contact for geo-tagged video footage after major disasters to improve their situational awareness. Perhaps the Professional Society of Drone Journalists (PSDJ) could be part of the solution. In any case, the network would either have its own Code of Conduct or follow the humanitarian one. Perhaps they could post their footage and pictures directly to the Humanitarian UAV Network (UAViators) Crisis Map. Either way, the media has long played an important role in humanitarian disasters, and their increasing use of drones makes them even more valuable partners to increase situational awareness.

The above scenario describes the ideal world. But the media can be (and has been) part of the problem as well. "If it bleeds, it leads," as the saying goes. Increased competition between media companies to be the first to capture dramatic aerial video that goes viral means that they may take shortcuts. They may not want to waste time getting formal approval from a country's civil aviation authority. In Nepal after the earthquake, one leading company's drone team was briefly detained by authorities for not getting official permission.


Media companies may not care to engage with local communities. They may be on a tight deadline and thus dispense with getting community buy-in. They may not have the time to reassure traumatized communities about the robots flying overhead. Media companies may overlook or ignore potential data privacy repercussions of publishing their aerial videos online. They may also not venture out to isolated and rural areas, thus biasing the video footage towards easy-to-access locations.

So how do we in the humanitarian space make media drone teams part of the solution rather than part of the problem? How do we make them partners in these efforts? One way forward is to start a conversation with these media teams and their relevant networks. Perhaps we start with a few informal agreements and learn by doing. If anyone is interested in working with me on this and/or has any suggestions on how to make this happen, please do get in touch. Thanks!

Why Robots Are Flying Over Zanzibar and the Source of the Nile

An expedition in 1858 revealed that Lake Victoria was the source of the Nile. We found ourselves on the shores of Africa’s majestic lake this October, a month after a 5.9 magnitude earthquake struck Tanzania’s Kagera Region. Hundreds were injured and dozens killed. This was the biggest tragedy in decades for the peaceful lakeside town of Bukoba. The Ministry of Home Affairs invited WeRobotics to support the recovery and reconstruction efforts by carrying out aerial surveys of the affected areas. 


The mission of WeRobotics is to build local capacity for the safe and effective use of appropriate robotics solutions. We do this by co-creating local robotics labs that we call Flying Labs. We use these Labs to transfer the professional skills and relevant robotics solutions to outstanding local partners. Our explicit focus on capacity building explains why we took the opportunity whilst in Kagera to train two Tanzanian colleagues. Khadija and Yussuf joined us from the State University of Zanzibar (SUZA). They were both wonderful to work with and quick learners too. We look forward to working with them and other partners to co-create our Flying Labs in Tanzania. More on this in a future post.

Aerial Surveys of Kagera Region After The Earthquake

We surveyed multiple areas in the region based on the priorities of our local partners as well as reports provided by local villagers. We used the Cumulus One UAV from our technology partner DanOffice to carry out the flights. The Cumulus has a stated 2.5 hour flight time and 50 kilometer radio range. We’re using software from our partner Pix4D to process the 3,000+ very high resolution images captured during our 2 days around Bukoba.
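
As a rough illustration of the survey arithmetic behind a mission like this, here is a short Python sketch. The endurance, speed and altitude figures come from this post; the camera footprint and image overlap are generic assumptions rather than the actual Cumulus mission settings.

```python
# Back-of-envelope coverage estimate for a fixed-wing mapping flight.
altitude_m = 265.0     # flight altitude (as flown later in Zanzibar)
speed_kmh = 65.0       # cruise speed
endurance_h = 2.5      # stated Cumulus flight time

# Assumed camera: ~35 degree half-angle across track, 70% side overlap
swath_m = 2 * altitude_m * 0.7
line_spacing_m = swath_m * (1 - 0.7)

distance_km = speed_kmh * endurance_h
area_km2 = distance_km * 1000 * line_spacing_m / 1e6

print(f"Spacing between flight lines: {line_spacing_m:.0f} m")
print(f"Distance flown per battery:   {distance_km:.0f} km")
print(f"Area mappable per flight:     ~{area_km2:.0f} km^2")
```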


Above, Khadija and Yussuf on the left with a local engineer and a local member of the community on the right, respectively. The video below shows how the Cumulus takes off and lands. The landing is automatic and simply involves the UAV stalling and gently gliding to the ground.

We engaged directly with local communities before our flights to explain our project and get their permission to fly. Learn more about our Code of Conduct.


Aerial mapping with fixed-wing UAVs can identify large-scale damage over large areas and serve as a good base map for reconstruction. A lot of the damage, however, can be limited to large cracks in walls, which cannot be seen with nadir (vertical) imagery. We thus flew over some areas using a Parrot Bebop2 to capture oblique imagery and to get closer to the damage. We then took dozens of geo-tagged images from ground-level with our phones in order to ground-truth the aerial imagery.
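
As a small aside, pulling the GPS coordinates out of those geo-tagged phone photos is straightforward to script. The sketch below uses the Pillow library and a hypothetical file name; it is just one way to do it, not the exact workflow we used in the field.

```python
# Extract GPS coordinates from geo-tagged ground-truth photos using Pillow.
from PIL import Image, ExifTags

def to_degrees(dms, ref):
    """Convert EXIF degrees/minutes/seconds to a signed decimal degree."""
    deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
    return -deg if ref in ("S", "W") else deg

def photo_location(path):
    exif = Image.open(path)._getexif() or {}
    tags = {ExifTags.TAGS.get(k): v for k, v in exif.items()}
    gps = {ExifTags.GPSTAGS.get(k): v for k, v in tags.get("GPSInfo", {}).items()}
    if not gps:
        return None
    lat = to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"])
    lon = to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    return lat, lon

# Hypothetical file name for illustration
print(photo_location("ground_truth_photo_001.jpg"))
```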


We're still processing the resulting imagery, so the image below is simply a low-resolution preview of one of the three surveys we carried out.


Both Khadija and Yussuf were very quick learners and a real delight to work with. Below are more pictures documenting our recent work in Kagera. You can follow all our trainings and projects live via our Twitter feed (@werobotics) and our Facebook page. Sincerest thanks to both Linx Global Intelligence and UR Group for making our work in Kagera possible. Linx provided the introduction to the Ministry of Home Affairs while the UR Group provided invaluable support on the logistics and permissions.


Yussuf programming the flight plan of the Cumulus


Khadija setting up the Cumulus for a full day of flying around the Bukoba area


Khadija wants to use aerial robots to map Zanzibar, which is where she’s from


Community engagement is absolutely imperative


Local community members inspecting the Parrot Bebop2

From the shores of Lake Victoria to the coastlines of Zanzibar

Together with the outstanding drone team from the State University of Zanzibar, we mapped Jozani Forest and part of the island’s eastern coastline. This allowed us to further field-test our long-range platform and to continue our local capacity building efforts following our surveys near the Ugandan border. Here’s a picture-based summary of our joint efforts.


Flying Labs Coordinator Yussuf sets up the Cumulus UAV for flight


Turns out selfie sticks are popular in Zanzibar and kids love robots.


Khairat from Team SUZA is operating the mobile air traffic control tower. Team SUZA uses senseFly eBees for other projects on the island.


Another successful takeoff, courtesy of Flying Labs Coordinator Yussuf.


We flew the Cumulus at a speed of 65km/h and at an altitude of 265m.


The Cumulus flew for 2 hours, making this our longest UAV flight in Zanzibar so far.


Khadija from Team SUZA explains to local villagers how and why she maps Zanzibar using flying robots.


The tide starts rushing back in. It's important to take the moon into account when mapping coastlines, as the tide can change drastically during a single flight and thus affect the stitching process.

The content above is cross-posted from WeRobotics.

Using Sound and Artificial Intelligence to Detect Human Rights Violations

Video continues to be a powerful way to capture human rights abuses around the world. Videos posted to social media can be used to hold perpetrators of gross violations accountable. But video footage poses a "Big Data" challenge to human rights organizations. Two billion smartphone users means almost as many video cameras. This leads to massive amounts of visual content of both suffering and wrong-doing in conflict zones. Reviewing these videos manually is a very labor-intensive, time-consuming, expensive and often traumatic task. So my colleague Jay Aronson at CMU has been exploring how artificial intelligence, and machine learning in particular, might help solve this challenge.


As Jay and team rightly note in a recent publication (PDF), "the dissemination of conflict and human rights related video has vastly outpaced the ability of researchers to keep up with it – particularly when immediate political action or rapid humanitarian response is required." The consequences of this are similar to what I've observed in humanitarian aid: "At some point (which will vary from organization to organization), time and resource limitations will necessitate an end to the collection, archiving, and analysis of user generated content unless the process can be automated." In sum, information overload can "prevent human rights researchers from uncovering widely dispersed events taking place over long periods of time or large geographic areas that amount to systematic human rights violations."


To take on this Big Data challenge, Jay and team have developed a new machine learning-based audio processing system that "enables both synchronization of multiple audio-rich videos of the same event, and discovery of specific sounds (such as wind, screaming, gunshots, airplane noise, music, and explosions) at the frame level within a video." The system basically "creates a unique 'soundprint' for each video in a collection, synchronizes videos that are recorded at the same time and location based on the pattern of these signatures, and also enables these signatures to be used to locate specific sounds precisely within a video. The use of this tool for synchronization ultimately provides a multi-perspectival view of a specific event, enabling more efficient event reconstruction and analysis by investigators."
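
To make the idea of a "soundprint" and audio-based synchronization more concrete, here is a minimal sketch in Python. It is not the CMU team's system: it simply derives an onset-strength envelope per soundtrack (using librosa) and cross-correlates two envelopes to estimate the time offset between recordings of the same event. The file names are hypothetical.

```python
import librosa
import numpy as np
from scipy.signal import correlate

def soundprint(path, sr=22050, hop=512):
    """Onset-strength envelope used as a crude audio signature for one recording."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    env = librosa.onset.onset_strength(y=y, sr=sr, hop_length=hop)
    env = (env - env.mean()) / (env.std() + 1e-9)   # normalize for correlation
    return env, sr / hop                            # signature, frames per second

env_a, fps = soundprint("camera_A.wav")   # hypothetical extracted soundtracks
env_b, _   = soundprint("camera_B.wav")

# Lag (in frames) at which the two envelopes line up best
xcorr = correlate(env_a, env_b, mode="full")
lag_frames = int(xcorr.argmax()) - (len(env_b) - 1)
print(f"Camera B is offset by {lag_frames / fps:+.2f} seconds relative to camera A")
```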

Synchronizing image features is far more complex than synchronizing sound. “When an object is occluded, poorly illuminated, or not visually distinct from the background, it cannot always be detected by computer vision systems. Further, while computer vision can provide investigators with confirmation that a particular video was shot from a particular location based on the similarity of the background physical environment, it is less adept at synchronizing multiple videos over time because it cannot recognize that a video might be capturing the same event from different angles or distances. In both cases, audio sensors function better so long as the relevant videos include reasonably good audio.”

Ukrainian human rights practitioners working with families of protesters killed during the 2013-2014 Euromaidan Protests recently approached Jay and company to analyze videos from those events. They wanted to "locate every video available in their collection of the moments before, during, and just after a specific set of killings. They wanted to extract information from these videos, including visual depictions of these killings, whether the protesters in question were an immediate and direct threat to the security forces, plus any other information that could be used to corroborate or refute other forms of evidence or testimony available for their cases."


Their plan had originally been to manually synchronize more than 65 hours of video footage from 520 videos taken during the morning of February 20, 2014. But after working full-time over several months, they were only able to stitch together about 4 hours of the total video using visual and audio cues in the recordings. So Jay and team used their system to make sense of the collection and were able to automatically synchronize over 4 hours of the footage. The figure above shows an example of video clips synchronized by the system.

Users can also “select a segment within the video containing the event they are interested in (for example, a series of explosions in a plaza), and search in other videos for a similar segment that shows similar looking buildings or persons, or that contains a similar sounding noise. A user may for example select a shooting scene with a significant series of gunshots, and may search for segments with a similar sounding series of gunshots. This method increases the chances for finding video scenes of an event displaying different angles of the scene or parallel events.”
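
A query-by-example search of this kind can be sketched in a few lines as well. Building on the soundprint() helper sketched above, the snippet below slides the selected segment's signature over another recording's signature and ranks the windows by similarity; the frame indices in the usage comment are made up for illustration.

```python
import numpy as np

def find_similar_segments(query_env, target_env, top_k=5):
    """Rank windows of target_env by cosine similarity to query_env."""
    n = len(query_env)
    q_norm = np.linalg.norm(query_env) + 1e-9
    scores = []
    for start in range(len(target_env) - n + 1):
        window = target_env[start:start + n]
        sim = float(window @ query_env) / (np.linalg.norm(window) * q_norm + 1e-9)
        scores.append((sim, start))
    return sorted(scores, reverse=True)[:top_k]

# Example: the analyst selects frames 1200-1400 of camera A (a burst of gunshots)
# query = env_a[1200:1400]
# for sim, start in find_similar_segments(query, env_b):
#     print(f"candidate match in camera B at frame {start} (similarity {sim:.2f})")
```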

Jay and team are quick to emphasize that their system "does not eliminate human involvement in the process because machine learning systems provide probabilistic, not certain, results." To be sure, "the synchronization of several videos is noisy and will likely include mistakes—this is precisely why human involvement in the process is crucial."

I’ve been following Jay’s applied research for many years now and continue to be a fan of his approach given the overlap with my own work in the use of machine learning to make sense of the Big Data generated during major natural disasters. I wholeheartedly agree with Jay when he reflected during a recent call that the use of advanced techniques alone is not the answer. Effective cross-disciplinary collaboration between computer scientists and human rights (or humanitarian) practitioners is really hard but absolutely essential. This explains why I wrote this practical handbook on how to create effective collaboration and successful projects between computer scientists and humanitarian organizations.

Using Swimming Robots to Warn Villages of Himalayan Tsunamis

Cross-posted from National Geographic 

Climate change is having a devastating impact on the Himalaya. On the Ngozumpa glacier, one of the largest and longest in the region, hundreds of supraglacial lakes dot the glacier surface. One lake in particular is known for purging an enormous volume of water every year. Near the start of the monsoon this summer, in less than 48 hours, it lost enough water to fill over 40 Olympic-sized swimming pools. To make matters worse, these glacial lakes act like cancers: they consume Himalayan glaciers from the inside out, making some of them melt twice as fast. As a result, villages down-valley from these glacial lakes are becoming increasingly prone to violent flash floods, which locals call Himalayan Tsunamis.

To provide early warnings of these flash floods requires that we collect a lot more geophysical and hydrologic information on these glacial lakes. So scientists like Ulyana (co-author) are racing to understand exactly how these glacial lakes form and grow, and how they’re connected to each other through seemingly secret subterranean channels. We need to know how deep and steep these lakes are, what the lake floors look like and of what materials they are composed (e.g., mud, rock, bare ice).
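
To show why that bathymetric information matters, here is a minimal sketch of the kind of calculation it enables: turning a gridded map of lake depths into an estimate of stored water volume and bed steepness. The tiny depth grid below is made-up illustrative data, not measurements from Ngozumpa.

```python
import numpy as np

cell_size_m = 25.0                      # assumed grid resolution (m per cell)
depths_m = np.array([                   # hypothetical sonar-derived depths (m)
    [2.0,  4.0,  5.0, 3.0],
    [4.0,  9.0, 12.0, 6.0],
    [5.0, 12.0, 15.0, 7.0],
    [3.0,  6.0,  7.0, 4.0],
])

volume_m3 = depths_m.sum() * cell_size_m ** 2      # water column per cell x cell area
dzdy, dzdx = np.gradient(depths_m, cell_size_m)    # bed slope components
slope_deg = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

pools = volume_m3 / 2500.0                          # ~2,500 m^3 per Olympic pool
print(f"Stored water: {volume_m3:,.0f} m^3 (~{pools:.0f} Olympic pools)")
print(f"Steepest bed slope: {slope_deg.max():.1f} degrees")
```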

Ulyana, her colleagues and a small local team of Sherpa have recently started using autonomous swimming robots to automatically map lake floors and look for cracks that may trigger mountain tsunamis. Using robotics to do this is both faster and more accurate than having humans take the measurements. What's more, robots are significantly safer. Indeed, even getting near these lakes (let alone in them!) is dangerous enough due to unpredictable collapses of ice called calving and large boulders rolling off of surrounding ice cliffs and into the lakes below. Just imagine being on a small inflatable boat floating on ice-cold water when one of those icefalls happens.

We (Ulyana and Patrick) are actively looking to utilize diving robots as well—specifically the one in the video footage below. This OpenROV Trident robot will enable us to get to the bottom of these glacial lakes to identify deepening ‘hotspots’ before they’re visible from the lake’s surface or from the air. Our plan next year is to pool our efforts, bringing diving, swimming and flying robots to Nepal so we can train our partners—Sherpas and local engineers—on how to use these robotic solutions to essentially take the ‘pulse’ of the changing Himalaya. This way they’ll be able to educate as well as warn nearby villages before the next mountain floods hit.

We plan to integrate these efforts with WeRobotics (co-founded by co-author Patrick) and in particular with the local robotics lab that WeRobotics is already setting up in Kathmandu. This lab has a number of flying robots and trained Nepali engineers. To learn more about how these flying robots are being used in Nepal, check out the pictures here.

We'll soon be adding diving robots to the robotics lab's portfolio in Nepal thanks to WeRobotics's partnership with OpenROV. What's more, all WeRobotics labs have an express goal of spinning off local businesses that offer robotics as services. Thus, the robotics start-up that spins off from our lab in Nepal will offer a range of mapping services using both flying and diving robots. As such, we want to create local jobs that use robotics (jobs that local partners want!) so that our Nepali friends can make a career out of saving their beautiful mountains.

Please do get in touch if you'd like to get involved or support this work in other ways! Email us at ulyana@scienceinthewild.com and patrick@werobotics.org

Could These Swimming Robots Help Local Communities?

Flying robots are all over the news these days. But UAVs/drones are hardly the only autonomous robotics solutions out there. Driving robots like the one below by Starship Technologies have already driven more than 5,000 miles in 39 cities across 12 countries, gently moving around hundreds of thousands of people in the process of delivering small packages. I'm already in touch with Starship and related companies to explore a range of humanitarian applications. Perhaps less well known, however, are the swimming robots that can be found floating and diving in oceans, lakes and rivers around the world.

These swimming robots are often referred to as maritime or marine robots, aquatic robots, remotely operated vehicles and autonomous surface (or underwater) vehicles. I'm interested in swimming robots for the same reason I'm interested in flying and driving robots: they allow us to collect data, transport cargo and take samples in more efficient and productive ways. Flying robots, for example, can be used to transport essential vaccines and medicines. They can also collect data by taking pictures to support precision agriculture, and they can take air samples to test for pollution. The equivalent is true for swimming and diving robots.

So I’d like to introduce you to this cast of characters and will then elaborate on how they can and have been used to make a difference. Do please let me know if I’m missing any major ones—robots and use-cases.

OpenROV Trident
This tethered diving robot can reach depths of up to 100 meters with a maximum speed of 2 meters per second (7 km/hour). The Trident has a maximum run-time of 3 hours, weighs just under 3 kg and easily fits in a backpack. It comes with a 25 meter tether (although longer tethers are also available). The robot, which relays a live video feed back to the surface, can be programmed to swim in long straight lines (transects) over a given area to generate a continuous map of the seafloor. The OpenROV software is open source. More here.
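
For a sense of what "programmed to swim in long straight lines" involves, here is a minimal sketch of generating lawnmower-style transect waypoints for a small survey box. The coordinates and spacing are illustrative; this is not OpenROV's planning code.

```python
def lawnmower_waypoints(width_m, height_m, spacing_m):
    """Return (x, y) waypoints, in meters, covering a rectangle in parallel transects."""
    waypoints, y, going_right = [], 0.0, True
    while y <= height_m:
        xs = (0.0, width_m) if going_right else (width_m, 0.0)
        waypoints += [(xs[0], y), (xs[1], y)]  # out-and-back leg endpoints
        y += spacing_m
        going_right = not going_right
    return waypoints

# 80 m x 40 m survey box with 10 m between transects
for x, y in lawnmower_waypoints(80, 40, 10):
    print(f"waypoint: {x:5.1f} m east, {y:5.1f} m north")
```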

MidWest ROV
This remotely operated swimming robot has a maximum cruising speed of just under 5 km per hour and weighs 25 kg. The ROV is a meter long and has a run time of approximately 4 hours. The platform has full digital and audio recording capabilities, with a sonar scanner that can record a swath roughly 60 meters wide at a depth of 180 meters. This sturdy robot has been swimming in one of the fastest changing glacier lakes in the Himalayas to assess flood hazards. More here. See also MarineTech.

Hydromea Vertex AUV

This small swimming robot can cover an area of several square kilometers at a depth of up to 300 meters with a maximum speed of 1 meter per second (3.5 km/hour). The Vertex can automatically scan vertically, horizontally, or at any other angle for that matter, and from multiple locations. The platform, which weighs only 7 kg and has a length of 70 cm, can be used to create 3D scans of the seafloor with up to 10 robots operating simultaneously in parallel, thanks to communication and localization technology that enables them to cooperate as a team. More here.

Liquid Robotics Wave Glider
The Wave Glider is an autonomous swimming robot powered by both wave and solar energy, enabling it to cruise at 5.5 km/hour. The surface component, which measures 3 meters in length, contains solar panels that power the platform and onboard sensors. The tether and underwater component enables the platform to use waves for thrust. This Glider operates individually or in fleets to deliver real-time data for up to a year with no fuel. The platform has already traveled well over one million kilometers and through a range of weather conditions including hurricanes and typhoons. More here.

SeaDrone
This tethered robot weighs 5 kg and can operate at a depth of 100 meters with a maximum speed of 1.5 meters per second (5.4 km/hour). Tethers are available in lengths of 30 to 50 meters. The platform has a battery life of 3 hours and provides a live, high-definition video feed. The SeaDrone platform can be easily controlled from an iOS tablet. More here.

Clearpath Robotics Heron
This surface water swimming robot can cruise at a maximum speed of 1.7 meters per second (6 km/hour) for around 2 hours. The Heron, which weighs 28 kg, offers a payload bay for submerged sensors and a mounting system for those above water. The robot can carry a maximum payload of 10 kg. A single operator can control multiple Herons simultaneously. The platform, like others described below, is ideal for ecosystem assessments and bathymetry surveys (to map the topography of lakes and ocean floors). More here.

SailDrone
The Saildrone navigates to its destination using wind power alone, typically cruising at an average speed of 5.5 km/hour. The robot can then stay at a designated spot or perform survey patterns. Like other robots introduced here, the Saildrone can carry a range of sensors for data collection. The data is then transmitted back to shore via satellite. The Saildrone is also capable of carrying an additional 100 kg worth of payload. More here.

EMILY
EMILY, an acronym for Emergency Integrated Lifesaving Lanyard, is a robotic device used by lifeguards for rescuing swimmers. It runs on battery power and is operated by remote control after being dropped into the water from shore, a boat, a pier or a helicopter. EMILY has a maximum cruising speed of 35 km per hour (much faster than a human lifeguard can swim) and functions as a flotation device for 4-6 people. The platform was used in Greece to assist in ocean rescues of refugees crossing the Aegean Sea from Turkey. More here. The same company has also created Searcher, an autonomous marine robot that I hope to learn more about soon.

Platypus
Platypus manufactures four different types of swimming robots, one of which is depicted above. Called the Serval, this platform has a maximum speed of 15 km/hour with a runtime of 4 hours. The Serval weighs 35 kg and can carry a payload of 70 kg. The Serval can use either wireless, 3G or Edge to communicate. Platypus also offers a base-station package that includes a wireless router and antenna with a range of up to 2.5 km. The Felis, another Platypus robot, has a max speed of 30 km/hour and a max payload of 200 kg. The platform can operate for 12 hours. These platforms can be used for autonomous mapping. More here.

AquaBot
The aim of the AquaBot project is to develop an underwater tethered robot that automates the tasks of visual inspection of fish farm nets and mooring systems. There is little up-to-date information on this project so it is unclear how many prototypes and tests were carried out. Specs for this diving robot don’t seem to be readily available online. More here.

There are of course many more marine robots out there. Have a look at these other companies: Bluefin Robotics, Ocean Server, Riptide Autonomous Systems, Seabotix, Blue Robotics, YSI, AC-CESS and Juice Robotics, for example. The range of applications for maritime robotics is also growing. At WeRobotics, we're actively exploring a wide number of use-cases to determine if and where maritime robots might be able to add value to the work of our local partners in developing countries.


Take aquaculture (also known as aquafarming), for example. Aquaculture is the fastest growing sector of global food production. But many types of aquaculture remain labor intensive. In addition, a combination of “social and environmental pressures and biological necessities are creating opportunities for aquatic farms to locate in more exposed waters further offshore,” which increases both risks and costs, “particularly those associated with the logistics of human maintenance and intervention activities.” These and other factors make “this an excellent time to examine the possibilities for various forms of automation to improve the efficiency and cost-effectiveness of farming the oceans.”

Just like land-based agriculture, aquaculture can also be devastated by major disasters. To this end, aquaculture represents an important food security issue for local communities directly dependent on seafood for their livelihoods. As such, restoring aquafarms can be a vital element of disaster recovery. After the 2011 Japan Earthquake and Tsunami, for example, maritime robots were used to “remediate fishing nets and accelerate the restarting of the fishing economy.” As further noted in the book Disaster Robotics, the robots “cleared fishing beds from debris and pollution” by mapping the “remaining debris in the prime fishing and aquaculture areas, particularly looking for cars and boats leaking oil and gas and for debris that would snag and tear fishing nets.”


Ports and shipping channels in both Japan and Haiti were also reopened using marine robots following the major earthquakes in 2011 and 2010. They mapped the debris field that could damage or prevent ships from entering the port. The clearance of this debris allowed "relief supplies to enter devastated areas and economic activities to resume." To be sure, coastal damage caused by earthquakes and tsunamis can render ports inoperable. Marine robots can thus accelerate both response and recovery efforts by reopening ports that represent primary routes for relief supplies, as noted in the book Disaster Robotics.

In sum, marine robotics can be used for aquaculture, structural inspections, estimation of debris volume and type, victim recovery, forensics, environmental monitoring as well as search, reconnaissance and mapping. While marine robots remain relatively expensive, new types of low-cost solutions are starting to enter the market. As these become cheaper and more sophisticated in terms of their autonomous capabilities, I am hopeful that they will become increasingly useful and accessible to local communities around the world. Check out WeRobotics to learn about how appropriate robotics solutions can support local livelihoods.