
What Happens When the Media Sends Drone Teams to Disasters?

Media companies like AFP, CNN and others are increasingly capturing dramatic aerial footage following major disasters around the world. These companies can be part of the solution when it comes to adding value to humanitarian efforts on the ground. But they can also be a part of the problem.


Media teams are increasingly showing up to disasters with small drones (UAVs) to document the damage. They’re at times the first with drones on the scene and thus able to quickly capture dramatic aerial footage of the devastation below. These media assets lead to more views and thus traffic on news websites, which increases the probability that more readers click on ads. Cue Noam Chomsky’s Manufacturing Consent: The Political Economy of the Mass Media, my favorite book whilst in high school.

Aerial footage can also increase situational awareness for disaster responders if that footage is geo-located. Labeling individual scenes in video footage with the name of the towns or villages being flown over would go a long way. This is what I asked one journalist to do in the aftermath of the Nepal Earthquake after he sent me dozens of his aerial videos. I also struck up an informal agreement with CNN to gain access to their raw aerial footage in future disasters. On a related note, I was pleased when my CNN contact expressed an interest in following the Humanitarian UAV Code of Conduct.
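As a very rough illustration of what that labeling could look like in practice, here is a minimal sketch in Python that tags a video clip with the nearest town from a small, hand-made gazetteer. The place names, coordinates, file name and helper functions are all hypothetical; a real workflow would pull positions from the drone’s flight log or the video’s embedded GPS track.

```python
import math

# Hypothetical mini-gazetteer of towns in the flight area: (name, lat, lon).
GAZETTEER = [
    ("Chautara", 27.773, 85.717),
    ("Melamchi", 27.832, 85.577),
    ("Barabise", 27.789, 85.895),
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dlon / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearest_town(lat, lon):
    """Return the (name, lat, lon) gazetteer entry closest to the clip's position."""
    return min(GAZETTEER, key=lambda t: haversine_km(lat, lon, t[1], t[2]))

# Hypothetical clip position taken from a flight log or the video's GPS track.
clip = {"file": "aerial_clip_042.mp4", "lat": 27.80, "lon": 85.70}
town, tlat, tlon = nearest_town(clip["lat"], clip["lon"])
distance = haversine_km(clip["lat"], clip["lon"], tlat, tlon)
print(f"{clip['file']} -> nearest town: {town} ({distance:.1f} km away)")
```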

In an ideal world, there would be a network of professional drone journalists with established news agencies that humanitarian organizations could quickly contact for geo-tagged video footage after major disasters to improve their situational awareness. Perhaps the Professional Society of Drone Journalists (PSDJ) could be part of the solution. In any case, the network would either have its own Code of Conduct or follow the humanitarian one. Perhaps they could post their footage and pictures directly to the Humanitarian UAV Network (UAViators) Crisis Map. Either way, the media has long played an important role in humanitarian disasters, and their increasing use of drones makes them even more valuable partners to increase situational awareness.

The above scenario describes the ideal world. But the media can be (and has been) part of the problem as well. “If it bleeds, it leads,” as the saying goes. Increased competition between media companies to be the first to capture dramatic aerial video that goes viral means that they may take shortcuts. They may not want to waste time getting formal approval from a country’s civil aviation authority. In Nepal after the earthquake, one leading company’s drone team was briefly detained by authorities for not getting official permission.


Media companies may not care to engage with local communities. They may be on a tight deadline and thus dispense with getting community buy-in. They may not have the time to reassure traumatized communities about the robots flying overhead. Media companies may overlook or ignore potential data privacy repercussions of publishing their aerial videos online. They may also not venture out to isolated and rural areas, thus biasing the video footage towards easy-to-access locations.

So how do we in the humanitarian space make media drone teams part of the solution rather than part of the problem? How do we make them partners in these efforts? One way forward is to start a conversation with these media teams and their relevant networks. Perhaps we start with a few informal agreements and learn by doing. If anyone is interested in working with me on this and/or has any suggestions on how to make this happen, please do get in touch. Thanks!

Why Robots Are Flying Over Zanzibar and the Source of the Nile

An expedition in 1858 revealed that Lake Victoria was the source of the Nile. We found ourselves on the shores of Africa’s majestic lake this October, a month after a 5.9 magnitude earthquake struck Tanzania’s Kagera Region. Hundreds were injured and dozens killed. This was the biggest tragedy in decades for the peaceful lakeside town of Bukoba. The Ministry of Home Affairs invited WeRobotics to support the recovery and reconstruction efforts by carrying out aerial surveys of the affected areas. 


The mission of WeRobotics is to build local capacity for the safe and effective use of appropriate robotics solutions. We do this by co-creating local robotics labs that we call Flying Labs. We use these Labs to transfer the professional skills and relevant robotics solutions to outstanding local partners. Our explicit focus on capacity building explains why we took the opportunity whilst in Kagera to train two Tanzanian colleagues. Khadija and Yussuf joined us from the State University of Zanzibar (SUZA). They were both wonderful to work with and quick learners too. We look forward to working with them and other partners to co-create our Flying Labs in Tanzania. More on this in a future post.

Aerial Surveys of Kagera Region After The Earthquake

We surveyed multiple areas in the region based on the priorities of our local partners as well as reports provided by local villagers. We used the Cumulus One UAV from our technology partner DanOffice to carry out the flights. The Cumulus has a stated 2.5 hour flight time and 50 kilometer radio range. We’re using software from our partner Pix4D to process the 3,000+ very high resolution images captured during our 2 days around Bukoba.
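For readers curious about what preparing such a dataset can involve, below is a minimal sketch, assuming standard geotagged JPEGs and the Pillow library, that inventories the GPS positions embedded in the survey images and flags any frames missing geotags before they are handed to the stitching software. The folder and file naming are illustrative assumptions, not our actual setup.

```python
from pathlib import Path

from PIL import Image
from PIL.ExifTags import GPSTAGS

GPSINFO_TAG = 34853  # EXIF tag ID for the GPSInfo block

def to_degrees(dms, ref):
    """Convert EXIF degree/minute/second rationals into signed decimal degrees."""
    deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
    return -deg if ref in ("S", "W") else deg

def gps_of(path):
    """Return (lat, lon) from a JPEG's EXIF GPS block, or None if it has no geotag."""
    exif = Image.open(path)._getexif() or {}
    raw = exif.get(GPSINFO_TAG)
    if not raw:
        return None
    gps = {GPSTAGS.get(k, k): v for k, v in raw.items()}
    return (to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
            to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"]))

# Hypothetical folder of survey images; list positions and flag missing geotags.
missing = []
for jpg in sorted(Path("bukoba_survey_day1").glob("*.JPG")):
    fix = gps_of(jpg)
    if fix is None:
        missing.append(jpg.name)
    else:
        print(f"{jpg.name}: lat={fix[0]:.6f}, lon={fix[1]:.6f}")
print(f"{len(missing)} image(s) without GPS data: {missing}")
```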


Above, Khadija and Yussuf on the left with a local engineer and a local member of the community on the right, respectively. The video below shows how the Cumulus takes off and lands. The landing is automatic and simply involves the UAV stalling and gently gliding to the ground.

We engaged directly with local communities before our flights to explain our project and get their permissions to fly. Learn more about our Code of Conduct.


Aerial mapping with fixed-wing UAVs can identify large-scale damage over large areas and serve as a good base map for reconstruction. A lot of the damage, however, can be limited to large cracks in walls, which cannot be seen with nadir (vertical) imagery. We thus flew over some areas using a Parrot Bebop2 to capture oblique imagery and to get closer to the damage. We then took dozens of geo-tagged images from ground-level with our phones in order to ground-truth the aerial imagery.
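To make the resolution point concrete, here is a rough ground sampling distance (GSD) calculation. The camera parameters below are illustrative assumptions rather than the actual Cumulus payload specifications, and the altitude is simply an assumed mapping altitude of 265 m.

```python
def gsd_cm_per_pixel(sensor_width_mm, focal_length_mm, altitude_m, image_width_px):
    """Ground sampling distance of nadir imagery, in centimeters per pixel."""
    return (sensor_width_mm * altitude_m * 100) / (focal_length_mm * image_width_px)

# Illustrative values only: a 1-inch-class sensor (13.2 mm wide, 5472 px across)
# behind a 10.4 mm lens, flown at an assumed mapping altitude of 265 m.
gsd = gsd_cm_per_pixel(sensor_width_mm=13.2, focal_length_mm=10.4,
                       altitude_m=265, image_width_px=5472)
print(f"GSD at 265 m: ~{gsd:.1f} cm per pixel")  # roughly 6 cm per pixel

# A crack a centimeter or two wide therefore spans well under one pixel in the
# nadir map, which is why lower, oblique flights with the Bebop2 were needed.
```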


We’re still processing the resulting imagery, so the results below are simply low-resolution previews of one of the three surveys we carried out.


Both Khadija and Yussuf were very quick learners and a real delight to work with. Below are more pictures documenting our recent work in Kagera. You can follow all our trainings and projects live via our Twitter feed (@werobotics) and our Facebook page. Sincerest thanks to both Linx Global Intelligence and UR Group for making our work in Kagera possible. Linx provided the introduction to the Ministry of Home Affairs while the UR Group provided invaluable support on the logistics and permissions.


Yussuf programming the flight plan of the Cumulus


Khadija setting up the Cumulus for a full day of flying around the Bukoba area


Khadija wants to use aerial robots to map Zanzibar, which is where she’s from


Community engagement is absolutely imperative


Local community members inspecting the Parrot Bebop2

From the shores of Lake Victoria to the coastlines of Zanzibar

Together with the outstanding drone team from the State University of Zanzibar, we mapped Jozani Forest and part of the island’s eastern coastline. This allowed us to further field-test our long-range platform and to continue our local capacity building efforts following our surveys near the Ugandan border. Here’s a picture-based summary of our joint efforts.


Flying Labs Coordinator Yussuf sets up the Cumulus UAV for flight


Turns out selfie sticks are popular in Zanzibar and kids love robots.


Khairat from Team SUZA is operating the mobile air traffic control tower. Team SUZA uses senseFly eBees for other projects on the island.


Another successful takeoff, courtesy of Flying Labs Coordinator Yussuf.


We flew the Cumulus at a speed of 65km/h and at an altitude of 265m.


The Cumulus flew for 2 hours, making this our longest UAV flight in Zanzibar so far.


Khadija from Team SUZA explains to local villagers how and why she maps Zanzibar using flying robots.


Tide starts rushing back in. It’s important to take the moon into account when mapping coastlines, as the tide can change drastically during a single flight and thus affect the stitching process.

The content above is cross-posted from WeRobotics.

Using Sound and Artificial Intelligence to Detect Human Rights Violations

Video continues to be a powerful way to capture human rights abuses around the world. Videos posted to social media can be used to hold perpetrators of gross violations accountable. But video footage poses a “Big Data” challenge to human rights organizations. Two billion smartphone users means almost as many video cameras. This leads to massive amounts of visual content of both suffering and wrong-doing in conflict zones. Reviewing these videos manually is a very labor-intensive, time-consuming, expensive and often traumatic task. So my colleague Jay Aronson at CMU has been exploring how artificial intelligence, and in particular machine learning, might solve this challenge.


As Jay and team rightly note in a recent publication (PDF), “the dissemination of conflict and human rights related video has vastly outpaced the ability of researchers to keep up with it – particularly when immediate political action or rapid humanitarian response is required.” The consequences of this are similar to what I’ve observed in humanitarian aid: “At some point (which will vary from organization to organization), time and resource limitations will necessitate an end to the collection, archiving, and analysis of user generated content unless the process can be automated.” In sum, information overload can “prevent human rights researchers from uncovering widely dispersed events taking place over long periods of time or large geographic areas that amount to systematic human rights violations.”


To take on this Big Data challenge, Jay and team have developed a new machine learning-based audio processing system that “enables both synchronization of multiple audio-rich videos of the same event, and discovery of specific sounds (such as wind, screaming, gunshots, airplane noise, music, and explosions) at the frame level within a video.” The system basically “creates a unique ‘soundprint’ for each video in a collection, synchronizes videos that are recorded at the same time and location based on the pattern of these signatures, and also enables these signatures to be used to locate specific sounds precisely within a video. The use of this tool for synchronization ultimately provides a multi-perspectival view of a specific event, enabling more efficient event reconstruction and analysis by investigators.”
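The team’s own pipeline is more sophisticated than this, but the core idea of aligning two recordings by their audio signatures can be sketched with off-the-shelf tools. The snippet below, assuming the librosa and scipy libraries and hypothetical file names, estimates the time offset between two clips’ audio tracks by cross-correlating their onset-strength envelopes; it is a minimal illustration, not the system described above.

```python
import numpy as np
import librosa
from scipy.signal import correlate

SR = 16000   # sample rate used for both audio tracks
HOP = 512    # hop length for the onset-strength envelopes

def onset_envelope(path):
    """Load an audio track and return its normalized onset-strength envelope."""
    y, _ = librosa.load(path, sr=SR, mono=True)
    env = librosa.onset.onset_strength(y=y, sr=SR, hop_length=HOP)
    return (env - env.mean()) / (env.std() + 1e-9)

def estimate_offset_seconds(path_a, path_b):
    """Estimate how many seconds clip B starts after clip A (negative if before)."""
    a, b = onset_envelope(path_a), onset_envelope(path_b)
    corr = correlate(a, b, mode="full")
    lag_frames = int(np.argmax(corr)) - (len(b) - 1)
    return lag_frames * HOP / SR

# Hypothetical audio tracks extracted from two phone videos of the same event.
offset = estimate_offset_seconds("phone_clip_01.wav", "phone_clip_02.wav")
print(f"Clip 2 appears to start {offset:+.2f} s relative to clip 1")
```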

Synchronizing image features is far more complex than synchronizing sound. “When an object is occluded, poorly illuminated, or not visually distinct from the background, it cannot always be detected by computer vision systems. Further, while computer vision can provide investigators with confirmation that a particular video was shot from a particular location based on the similarity of the background physical environment, it is less adept at synchronizing multiple videos over time because it cannot recognize that a video might be capturing the same event from different angles or distances. In both cases, audio sensors function better so long as the relevant videos include reasonably good audio.”

Ukrainian human rights practitioners working with families of protestors killed during the 2013-2014 Euromaidan Protests recently approached Jay and company to analyze videos from those events. They wanted to “locate every video available in their collection of the moments before, during, and just after a specific set of killings. They wanted to extract information from these videos, including visual depictions of these killings, whether the protesters in question were an immediate and direct threat to the security forces, plus any other information that could be used to corroborate or refute other forms of evidence or testimony available for their cases.”


Their plan had originally been to manually synchronize more than 65 hours of video footage from 520 videos taken during the morning of February 20, 2014. But after working full-time over several months, they were only able to stitch together about 4 hours of the total video using visual and audio cues in the recordings. So Jay and team used their system to make sense of the footage. They were able to automatically synchronize over 4 hours of the footage. The figure above shows an example of video clips synchronized by the system.

Users can also “select a segment within the video containing the event they are interested in (for example, a series of explosions in a plaza), and search in other videos for a similar segment that shows similar looking buildings or persons, or that contains a similar sounding noise. A user may for example select a shooting scene with a significant series of gunshots, and may search for segments with a similar sounding series of gunshots. This method increases the chances for finding video scenes of an event displaying different angles of the scene or parallel events.”
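A stripped-down version of that “query by sound” idea can also be sketched. The snippet below, again assuming librosa and hypothetical file names, takes the MFCC frames of a user-selected segment from one video’s audio and slides them across another video’s audio track, reporting where the cosine similarity peaks. The segment boundaries and window choices are illustrative assumptions, not the actual method used by Jay’s team.

```python
import numpy as np
import librosa

SR, HOP = 16000, 512

def mfcc_frames(y):
    """MFCC frames (unit-normalized columns) for a mono audio signal."""
    m = librosa.feature.mfcc(y=y, sr=SR, n_mfcc=20, hop_length=HOP)
    return m / (np.linalg.norm(m, axis=0, keepdims=True) + 1e-9)

# Query: a hypothetical segment (seconds 112-118) selected from video A's audio.
y_a, _ = librosa.load("video_a.wav", sr=SR, mono=True)
query = mfcc_frames(y_a[112 * SR:118 * SR])

# Target: the full audio track of video B.
y_b, _ = librosa.load("video_b.wav", sr=SR, mono=True)
target = mfcc_frames(y_b)

# Slide the query over the target and score each position by mean cosine similarity.
q_len = query.shape[1]
scores = [float(np.mean(np.sum(query * target[:, i:i + q_len], axis=0)))
          for i in range(target.shape[1] - q_len)]
best = int(np.argmax(scores))
print(f"Best match in video B around {best * HOP / SR:.1f} s "
      f"(similarity {scores[best]:.2f})")
```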

Jay and team are quick to emphasize that their system “does not eliminate human involvement in the process because machine learning systems provide probabilistic, not certain, results.” To be sure, “the synchronization of several videos is noisy and will likely include mistakes—this is precisely why human involvement in the process is crucial.”

I’ve been following Jay’s applied research for many years now and continue to be a fan of his approach given the overlap with my own work in the use of machine learning to make sense of the Big Data generated during major natural disasters. I wholeheartedly agree with Jay when he reflected during a recent call that the use of advanced techniques alone is not the answer. Effective cross-disciplinary collaboration between computer scientists and human rights (or humanitarian) practitioners is really hard but absolutely essential. This explains why I wrote this practical handbook on how to create effective collaboration and successful projects between computer scientists and humanitarian organizations.

Using Swimming Robots to Warn Villages of Himalayan Tsunamis

Cross-posted from National Geographic 

Climate change is having a devastating impact on the Himalaya. On the Ngozumpa glacier, one of the largest and longest in the region, hundreds of supraglacial lakes dot the glacier surface. One lake in particular is known for purging much of its volume every year. Near the start of the monsoon this summer, in less than 48 hours, it lost enough water to fill over 40 Olympic-sized swimming pools. To make matters worse, these glacial lakes act like cancers: they consume Himalayan glaciers from the inside out, making some of them melt twice as fast. As a result, villages down-valley from these glacial lakes are becoming increasingly prone to violent flash floods, which locals call Himalayan Tsunamis.
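To put that purge in perspective, a quick back-of-the-envelope calculation (assuming the standard ~2,500 cubic meter volume of an Olympic pool) converts “40 pools in 48 hours” into an average discharge rate.

```python
# Back-of-the-envelope: convert "40 Olympic pools in 48 hours" into a discharge rate.
olympic_pool_m3 = 2_500     # assumed nominal Olympic pool volume (50 m x 25 m x 2 m)
pools_drained = 40
hours = 48

volume_m3 = pools_drained * olympic_pool_m3           # ~100,000 cubic meters
discharge_m3_per_s = volume_m3 / (hours * 3600)       # average rate over the purge
print(f"~{volume_m3:,} m^3 lost, i.e. ~{discharge_m3_per_s:.2f} m^3/s on average")
```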

Providing early warnings of these flash floods requires collecting much more geophysical and hydrologic information on these glacial lakes. So scientists like Ulyana (co-author) are racing to understand exactly how these glacial lakes form and grow, and how they’re connected to each other through seemingly secret subterranean channels. We need to know how deep and steep these lakes are, what the lake floors look like and of what materials they are composed (e.g., mud, rock, bare ice).

Ulyana, her colleagues and a small local team of Sherpa have recently started using autonomous swimming robots to automatically map lake floors and look for cracks that may trigger mountain tsunamis. Using robotics to do this is both faster and more accurate than having humans take the measurements. What’s more, robots are significantly safer. Indeed, even getting near these lakes (let alone in them!) is dangerous enough due to unpredictable collapses of ice called calving and large boulders rolling off of surrounding ice cliffs and into the lakes below. Just imagine being on a small inflatable boat floating on ice-cold water when one of those icefalls happens.

We (Ulyana and Patrick) are actively looking to utilize diving robots as well—specifically the one in the video footage below. This OpenROV Trident robot will enable us to get to the bottom of these glacial lakes to identify deepening ‘hotspots’ before they’re visible from the lake’s surface or from the air. Our plan next year is to pool our efforts, bringing diving, swimming and flying robots to Nepal so we can train our partners—Sherpas and local engineers—on how to use these robotic solutions to essentially take the ‘pulse’ of the changing Himalaya. This way they’ll be able to educate as well as warn nearby villages before the next mountain floods hit.

We plan to integrate these efforts with WeRobotics (co-founded by co-author Patrick) and in particular with the local robotics lab that WeRobotics is already setting up in Kathmandu. This lab has a number of flying robots and trained Nepali engineers. To learn more about how these flying robots are being used in Nepal, check out the pictures here.

We’ll soon be adding diving robots to the robotics lab’s portfolio in Nepal thanks to WeRobotics’s partnership with OpenROV. What’s more, all WeRobotics labs have an express goal of spinning off local businesses that offer robotics as services. Thus, the robotics start-up that spins off from our lab in Nepal will offer a range of mapping services using both flying and diving robots. As such, we want to create local jobs that use robotics (jobs that local partners want!) so that our Nepali friends can make a career out of saving their beautiful mountains.

Please do get in touch if you’d like to get involved or support this work in other ways! Email us.

Could These Swimming Robots Help Local Communities?

Flying robots are all over the news these days. But UAVs/drones are hardly the only autonomous robotics solutions out there. Driving robots like the one below by Starship Technologies have already driven more than 5,000 miles in 39 cities across 12 countries, gently moving around hundreds of thousands of people while delivering small packages. I’m already in touch with Starship and related companies to explore a range of humanitarian applications. Perhaps less well known, however, are the swimming robots that can be found floating and diving in oceans, lakes and rivers around the world.

These swimming robots are often referred to as maritime or marine robots, aquatic robots, remotely operated vehicles and autonomous surface water (or underwater) vehicles. I’m interested in swimming robots for the same reason I’m interested in flying and driving robots: they allow us to collect data, transport cargo and take samples in more efficient and productive ways. Flying robots, for example, can be used to transport essential vaccines and medicines. They can also collect data by taking pictures to support precision agriculture, and they can take air samples to test for pollution. The equivalent is true for swimming and diving robots.

So I’d like to introduce you to this cast of characters and will then elaborate on how they can and have been used to make a difference. Do please let me know if I’m missing any major ones—robots and use-cases.

OpenROV Trident
This tethered diving robot can reach depths of up to 100 meters with a maximum speed of 2 meters per second (7 km/hour). The Trident has a maximum run-time of 3 hours, weighs just under 3 kg and easily fits in a backpack. It comes with a 25 meter tether (although longer tethers are also available). The robot, which relays a live video feed back to the surface, can be programmed to swim in long straight lines (transects) over a given area to generate a continuous map of the seafloor. The OpenROV software is open source. More here.
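As an illustration of what “programmed to swim in long straight lines” can mean in practice, here is a minimal sketch that generates a lawnmower pattern of transect waypoints over a rectangular survey box. The coordinates and spacing are made-up values, and a real mission would of course be planned with the robot’s own control software.

```python
def lawnmower_waypoints(x_min, x_max, y_min, y_max, spacing):
    """Back-and-forth transect waypoints covering a rectangular box (meters)."""
    waypoints = []
    y, heading_east = y_min, True
    while y <= y_max:
        if heading_east:
            waypoints += [(x_min, y), (x_max, y)]
        else:
            waypoints += [(x_max, y), (x_min, y)]
        heading_east = not heading_east
        y += spacing
    return waypoints

# Hypothetical 100 m x 60 m survey box with 10 m between transects.
for i, (x, y) in enumerate(lawnmower_waypoints(0, 100, 0, 60, 10), start=1):
    print(f"WP{i:02d}: x={x:5.1f} m, y={y:5.1f} m")
```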

MidWest ROV
This remotely operated swimming robot has a maximum cruising speed of just under 5 km per hour and weighs 25kg. The ROV is a meter long and has a run time of approximately 4 hours. The platform has full digital and audio recording capabilities, with a sonar scanner that can record a swath roughly 60 meters wide at a depth of 180 meters. This sturdy robot has been swimming in one of the fastest changing glacier lakes in the Himalayas to assess flood hazards. More here. See also MarineTech.

Hydromea Vertex AUV

This small swimming robot can cover an area of several square kilometers at a depth of up to 300 meters with a maximum speed of 1 meter per second (3.5 km/hour). The Vertex can automatically scan vertically, horizontally or at any other angle, and from multiple locations. The platform, which only weighs 7 kg and has a length of 70 cm, can be used to create 3D scans of the seafloor with up to 10 robots operating simultaneously in parallel, thanks to communication and localization technology that enables them to cooperate as a team. More here.

Liquid Robotics Wave Glider
The Wave Glider is an autonomous swimming robot powered by both wave and solar energy, enabling it to cruise at 5.5 km/hour. The surface component, which measures 3 meters in length, contains solar panels that power the platform and onboard sensors. The tether and underwater component enables the platform to use waves for thrust. This Glider operates individually or in fleets to deliver real-time data for up to a year with no fuel. The platform has already traveled well over one million kilometers and through a range of weather conditions including hurricanes and typhoons. More here.

SeaDrone
This tethered robot weighs 5kg and can operate at a depth of 100 meters with a maximum speed of 1.5 meters per second. Tethers are available in lengths of 30 to 50 meters. The platform has a battery life of 3 hours and provides a live, high-definition video feed. The SeaDrone platform can be easily controlled from an iOS tablet. More here.

Clear Path Robotics Heron
This surface water swimming robot can cruise at a maximum speed of 1.7 meters per second (6 km/hour) for around 2 hours. The Heron, which weighs 28kg, offers a payload bay for submerged sensors and a mounting system for those above water. The robot can carry a maximum payload of 10kg. A single operator can control multiple Herons simultaneously. The platform, like others described below, is ideal for ecosystem assessments and bathymetry surveys (to map the topography of lakes and ocean floors). More here.

Saildrone
The Saildrone navigates to its destination using wind power alone, typically cruising at an average speed of 5.5 km/hour. The robot can then stay at a designated spot or perform survey patterns. Like other robots introduced here, the Saildrone can carry a range of sensors for data collection. The data is then transmitted back to shore via satellite. The Saildrone is also capable of carrying an additional 100 kg worth of payload. More here.

EMILY, an acronym for Emergency Integrated Lifesaving Lanyard, is a robotic device used by lifeguards for rescuing swimmers. It runs on battery power and is operated by remote control after being dropped into the water from shore, a boat or pier, or a helicopter. EMILY has a maximum cruising speed of 35km per hour (much faster than a human lifeguard can swim) and functions as a flotation device for up to 4-6 people. The platform was used in Greece to assist in ocean rescues of refugees crossing the Aegean Sea from Turkey. More here. The same company has also created Searcher, an autonomous marine robot that I hope to learn more about soon.

Platypus manufactures four different types of swimming robots, one of which is depicted above. Called the Serval, this platform has a maximum speed of 15 km/hour with a runtime of 4 hours. The Serval weighs 35kg and can carry a payload of 70kg. The Serval can use either wireless, 3G or EDGE to communicate. Platypus also offers a base-station package that includes a wireless router and antenna with a range of up to 2.5 km. The Felis, another Platypus robot, has a max speed of 30km/hour and a max payload of 200kg. The platform can operate for 12 hours. These platforms can be used for autonomous mapping. More here.

The aim of the AquaBot project is to develop an underwater tethered robot that automates the tasks of visual inspection of fish farm nets and mooring systems. There is little up-to-date information on this project so it is unclear how many prototypes and tests were carried out. Specs for this diving robot don’t seem to be readily available online. More here.

There are of course many more marine robots out there. Have a look at these other companies: Bluefin Robotics, Ocean Server, Riptide Autonomous Systems, Seabotix, Blue Robotics, YSI, AC-CESS and Juice Robotics, for example. The range of applications for maritime robotics is also growing. At WeRobotics, we’re actively exploring a number of use-cases to determine if and where maritime robots might be able to add value to the work of our local partners in developing countries.


Take aquaculture (also known as aquafarming), for example. Aquaculture is the fastest growing sector of global food production. But many types of aquaculture remain labor intensive. In addition, a combination of “social and environmental pressures and biological necessities are creating opportunities for aquatic farms to locate in more exposed waters further offshore,” which increases both risks and costs, “particularly those associated with the logistics of human maintenance and intervention activities.” These and other factors make “this an excellent time to examine the possibilities for various forms of automation to improve the efficiency and cost-effectiveness of farming the oceans.”

Just like land-based agriculture, aquaculture can also be devastated by major disasters. Aquaculture thus represents an important food security issue for local communities directly dependent on seafood for their livelihoods, and restoring aquafarms can be a vital element of disaster recovery. After the 2011 Japan Earthquake and Tsunami, for example, maritime robots were used to “remediate fishing nets and accelerate the restarting of the fishing economy.” As further noted in the book Disaster Robotics, the robots “cleared fishing beds from debris and pollution” by mapping the “remaining debris in the prime fishing and aquaculture areas, particularly looking for cars and boats leaking oil and gas and for debris that would snag and tear fishing nets.”


Ports and shipping channels in both Japan and Haiti were also reopened using marine robots following the major earthquakes in 2011 and 2010. The robots mapped the debris fields that could damage ships or prevent them from entering the port. The clearance of this debris allowed “relief supplies to enter devastated areas and economic activities to resume.” To be sure, coastal damage caused by earthquakes and tsunamis can render ports inoperable. Marine robots can thus accelerate both response and recovery efforts by reopening ports that represent primary routes for relief supplies, as noted in the book Disaster Robotics.

In sum, marine robotics can be used for aquaculture, structural inspections, estimation of debris volume and type, victim recovery, forensics, environmental monitoring as well as search, reconnaissance and mapping. While marine robots remain relatively expensive, new types of low-cost solutions are starting to enter the market. As these become cheaper and more sophisticated in terms of their autonomous capabilities, I am hopeful that they will become increasingly useful and accessible to local communities around the world. Check out WeRobotics to learn about how appropriate robotics solutions can support local livelihoods.

Reverse Robotics: A Brief Thought Experiment

Imagine a world in which manually controlled technologies simply do not exist. The very thought of manual technologies is, in fact, hardly conceivable, let alone comprehensible. Instead, this seemingly alien world is seamlessly powered by intelligent and autonomous robotics systems. Let’s call this world Planet AI.


Planet AI’s versions of airplanes, cars, trains and ships are completely unmanned. That is, they are fully autonomous—a silent symphony of large and small robots waltzing around with no conductor in sight. One fateful night, a young PhD student awakens in a sweat, momentarily unable to breathe. The nightmare: all the swirling robots of Planet AI were no longer autonomous. Each of them had to be told exactly what to do by the Planet’s inhabitants. Madness.

She couldn’t go back to sleep. The thought of having to tell her robotics transport unit (RTU) in the morning how to get from her studio to the university gave her a panic attack. She would inevitably get lost or worse yet crash, maybe even hurt someone. She’d need weeks of practice to manually control her RTU. And even if she could somehow master manual steering, she wouldn’t be able to steer and work on her dissertation at the same time during the 36-minute drive. What’s more, that drive would easily become a 100-minute drive since there’s no way she would manually steer the RTU at 100 kilometers an hour—the standard autonomous speed of RTUs; more like 30km/h.

And what about the other eight billion inhabitants of Planet AI? The thought of having billions of manually controlled RTUs flying, driving & swimming through the massive metropolis of New AI was surely the ultimate horror story. Indeed, civilization would inevitably come to an end. Millions would die in horrific RTU collisions. Transportation would slow to a crawl before collapsing. And the many billions of hours spent working, resting or playing in automated RTUs every day would quickly evaporate into billions of hours of total stress and anxiety. The Planet’s Global GDP would go into free fall. RTUs carrying essential cargo automatically from one side of the planet to the other would need to be steered manually. Where would the millions of workers required for such extensive manual labor come from? Who in their right mind would even want to take such a dangerous and dull assignment? Who would provide the training and certification? And who in the world would be able to pay for all the salaries anyway?

At this point, the PhD student was on her feet. “Call RTU,” she instructed her personal AI assistant. An RTU swung by while she was putting on her shoes. Good, so far so good, she told herself. She got in slowly and carefully, studying the RTU’s behavior suspiciously. No, she thought to herself, nothing out of the ordinary here either. It was just a bad dream. The RTU’s soft purring power source put her at ease; she had always enjoyed its calming sound. For the first time since she awoke from her horrible nightmare, she started to breathe more easily. She took an extra deep and long breath.


The RTU was already waltzing with ease at 100km per hour through the metropolis, the speed barely noticeable from inside the cocoon. Forty-six, forty-seven and forty-eight; she was counting the number of other RTUs that were speeding right alongside hers, below and above as well. She arrived on campus in 35 minutes and 48 seconds—exactly the time it had taken the RTU during her 372 earlier rides. She breathed a deep sigh of relief and said “Home Please.” It was just past 3am and she definitely needed more sleep.

She thought of her fiancée on the way home. What would she think about her crazy nightmare given her work in the humanitarian space? Oh no. Her heart began to race again. Just imagine the impact that manually steered RTUs would have on humanitarian efforts. Talk about a total horror story. Life-saving aid, essential medicines, food, water, shelter; each of these would have to be transported manually to disaster-affected communities. The logistics would be near impossible to manage manually. Everything would grind to a halt. Damage assessments would have to be carried out manually as well, by somehow steering hundreds of robotics data units (RDUs) to collect data on affected areas. Goodness, it would take days if not weeks to assess disaster damage. Those in need would be left stranded. “Call Fiancée,” she instructed, shivering at the thought of her fiancée having to carry out her important life-saving relief work entirely manually.

The point of this story and thought experiment? While some on Planet Earth may find the notion of autonomous robotics systems insane and worry about accidents, it is worth noting that a future world like Planet AI would feel exactly the same way about our manually controlled technologies. Over 80% of airplane accidents are due to human pilot error and 90% of car accidents are the result of human driver error. Our PhD student on Planet AI would describe our use of manually controlled technologies as suicidal, not to mention a massive waste of precious human time.


An average person in the US spends 101 minutes per day driving (which adds up to more than 4 years over a lifetime). There are 214 million licensed car drivers in the US. This means that over 360 million hours of human time in the US alone is spent manually steering a car from point A to point B every day. This results in more than 30,000 people killed per year. And again, that’s just for the US. There are over 1 billion manually controlled motor vehicles on Earth. Imagine what we could achieve with an additional billion hours every day if we had Planet AI’s autonomous systems to free up this massive cognitive surplus. And let’s not forget the devastating environmental impact of individually-owned, manually controlled vehicles.
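For the curious, the back-of-the-envelope arithmetic behind those figures looks like this (the driver count and minutes per day are the ones cited above; the sixty-year driving span is my own assumption):

```python
drivers = 214_000_000    # licensed US car drivers (figure cited above)
minutes_per_day = 101    # average daily driving time per driver (figure cited above)

total_hours_per_day = drivers * minutes_per_day / 60
print(f"{total_hours_per_day / 1e6:.0f} million driver-hours per day")  # ~360 million

driving_years = 60       # assumption: roughly six decades behind the wheel
lifetime_days = minutes_per_day * 365 * driving_years / (60 * 24)
print(f"~{lifetime_days / 365:.1f} years of a lifetime spent driving")  # ~4.2 years
```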

If you had the choice, would you prefer to live on Earth or on Planet AI if everything else were held equal?

Humanitarian Robotics: The $15 Billion Question?

The International Community spends around $25 Billion per year to provide life saving assistance to people devastated by wars and natural disasters. According to the United Nations, this is $15 Billion short of what is urgently needed; that’s $15 Billion short every year. So how do we double the impact of humanitarian efforts and do so at half the cost?


Perhaps one way to deal with this stunning 40% gap in funding is to scale the positive impact of the aid industry. How? By radically increasing the efficiency (time-savings) and productivity (cost-savings) of humanitarian efforts. This is where Artificial Intelligence (AI) and Autonomous Robotics come in. The World Economic Forum refers to this powerful new combination as the 4th Industrial Revolution. Amazon, Facebook, Google and other Top 100 Fortune companies are powering this revolution with billions of dollars in R&D. So whether we like it or not, the robotics arms race will impact the humanitarian industry just like it is impacting other industries: through radical gains in efficiency & productivity.
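The “40% gap” follows directly from the two figures above:

```python
spent = 25e9        # annual humanitarian spending, USD (figure cited above)
shortfall = 15e9    # additional amount the UN says is urgently needed

needed = spent + shortfall
gap_share = shortfall / needed
print(f"Total need: ${needed / 1e9:.0f}B; unmet share: {gap_share:.0%}")  # ~38%, i.e. roughly 40%
```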

Take Amazon, for example. The company uses some 30,000 Kiva robots in its warehouses across the globe (pictured below). These ground-based, terrestrial robotics solutions have already reduced Amazon’s operating expenses by no less than 20%. And each new warehouse that integrates these self-driving robots will save the company around $22 million in fulfillment expenses alone. According to Deutsche Bank, “Bringing the Kivas to the 100 or so distribution centers that still haven’t implemented the tech would save Amazon a further $2.5 billion.” As is well known, the company is also experimenting with aerial robotics (drones). A recent study by PwC (PDF) notes that “the labor costs and services that can be replaced by the use of these devices account for about $127 billion today, and that the main sectors that will be affected are infrastructure, agriculture, and transportation.” Meanwhile, Walmart and others are finally starting to enter the robotics arms race. Walmart is using ground-based robots to ship apparel and is actively exploring the use of aerial robotics to “photograph warehouse shelves as part of an effort to reduce the time it takes to catalogue inventory.”
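A quick sanity check shows how the per-warehouse figure lines up with the Deutsche Bank estimate (both numbers are the ones quoted above; the rounding is mine):

```python
savings_per_warehouse = 22e6   # fulfillment savings per robot-equipped warehouse (cited above)
warehouses_remaining = 100     # distribution centers not yet using the robots (cited above)

projected = savings_per_warehouse * warehouses_remaining
print(f"~${projected / 1e9:.1f}B in further savings")  # ~$2.2B, close to Deutsche Bank's $2.5B
```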

Amazon Robotics

What makes this new industrial revolution different from those that preceded it is the fundamental shift from manually controlled technologies—a world we’re all very familiar with—to a world powered by increasingly intelligent and autonomous systems—an entirely different kind of world. One might describe this as a shift towards extreme automation. And whether extreme automation powers aerial robotics, terrestrial robotics or maritime robots (pictured below) is beside the point. The disruption here is the one-way shift towards increasingly intelligent and autonomous systems.


Why does this fundamental shift matter to those of us working in humanitarian aid? For at least two reasons: the collection of humanitarian information and the transportation of humanitarian cargo. Whether we like it or not, the rise of increasingly autonomous systems will impact both the way we collect data and transport cargo by making these processes faster, safer and more cost-effective. Naturally, this won’t happen overnight: disruption is a process.

Humanitarian organizations cannot stop the 4th Industrial Revolution. But they can apply their humanitarian principles and ideals to inform how autonomous robotics are used in humanitarian contexts. Take the importance of localizing aid, for example, a priority that gained unanimous support at the recent World Humanitarian Summit. If we apply this priority to humanitarian robotics, the question becomes: how can access to appropriate robotics solutions be localized so that local partners can double the positive impact of their own humanitarian efforts? In other words, how do we democratize the 4th Industrial Revolution? Doing so may be an important step towards closing the $15 billion gap. It could render the humanitarian industry more efficient and productive while localizing aid and creating local jobs in new industries.