Author Archives: Patrick Meier

Could These Swimming Robots Help Local Communities?

Flying robots are all over the news these days. But UAVs/drones are hardly the only autonomous robotics solutions out there. Driving robots like the one below by Starship Technologies have already driven more than 5,000 miles in 39 cities across 12 countries, gently moving around hundreds of thousands of people in the process of delivering small packages. I’m already in touch with Starship and related companies to explore a range of humanitarian applications. Perhaps less well known, however, are the swimming robots that can be found floating and diving in oceans, lakes and rivers around the world.

These swimming robots are often referred to as maritime or marine robots, aquatic robots, remotely operated vehicles and autonomous surface water (or underwater) vehicles. I’m interested in swimming robots for the same reason I’m interested in flying and driving robots: they allow us to collect data, transport cargo and take samples in more efficient and productive ways. Flying robots, for example, can be used to transport essential vaccines and medicines. They can also collect data by taking pictures to support precision agriculture and they can take air samples to test for pollution. The equivalent is true for swimming and diving robots.

So I’d like to introduce you to this cast of characters and will then elaborate on how they can and have been used to make a difference. Do please let me know if I’m missing any major ones—robots and use-cases.

OpenROV Trident
This tethered diving robot can reach depths of up to 100 meters with a maximum speed of 2 meters per second (7 km/hour). The Trident has a maximum run-time of 3 hours, weighs just under 3 kg and easily fits in a backpack. It comes with a 25 meter tether (although longer tethers are also available). The robot, which relays a live video feed back to the surface, can be programmed to swim in long straight lines (transects) over a given area to generate a continuous map of the seafloor. The OpenROV software is open source. More here.
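The speeds in these profiles are quoted in both meters per second and km/hour; the conversion is just a factor of 3.6 (3600 seconds per hour divided by 1000 meters per kilometer), which makes the quoted figures easy to sanity-check. A minimal sketch, using the m/s figures from the profiles:

```python
def ms_to_kmh(speed_ms: float) -> float:
    """Convert meters per second to kilometers per hour (x 3600/1000)."""
    return speed_ms * 3.6

# Speeds quoted in the robot profiles:
print(ms_to_kmh(2.0))  # OpenROV Trident: 7.2 km/h
print(ms_to_kmh(1.0))  # Hydromea Vertex: 3.6 km/h
print(ms_to_kmh(1.5))  # SeaDrone: 5.4 km/h
```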

MidWest ROV
This remotely operated swimming robot has a maximum cruising speed of just under 5 km per hour and weighs 25 kg. The ROV is a meter long and has a run time of approximately 4 hours. The platform has full digital and audio recording capabilities, with a sonar scanner that can record a swath ~60 meters wide at a depth of 180 meters. This sturdy robot has been swimming in one of the fastest-changing glacier lakes in the Himalayas to assess flood hazards. More here. See also MarineTech.

Hydromea Vertex AUV

This small swimming robot can cover an area of several square kilometers at depths of up to 300 meters, with a maximum speed of 1 meter per second (3.6 km/hour). The Vertex can automatically scan vertically, horizontally, or at any other angle, and from multiple locations. The platform, which weighs only 7 kg and has a length of 70 cm, can be used to create 3D scans of the seafloor with up to 10 robots operating simultaneously in parallel thanks to communication and localization technology that enables them to cooperate as a team. More here.

Liquid Robotics Wave Glider
The Wave Glider is an autonomous swimming robot powered by both wave and solar energy, enabling it to cruise at 5.5 km/hour. The surface component, which measures 3 meters in length, contains solar panels that power the platform and onboard sensors. The tether and underwater component enables the platform to use waves for thrust. This Glider operates individually or in fleets to deliver real-time data for up to a year with no fuel. The platform has already traveled well over one million kilometers and through a range of weather conditions including hurricanes and typhoons. More here.

SeaDrone
This tethered robot weighs 5 kg and can operate at a depth of 100 meters with a maximum speed of 1.5 meters per second (5.4 km/hour). Tethers are available in lengths of 30 to 50 meters. The platform has a battery life of 3 hours and provides a live, high-definition video feed. The SeaDrone platform can be easily controlled from an iOS tablet. More here.

Clear Path Robotics Heron
This surface water swimming robot can cruise at a maximum speed of 1.7 meters per second (6 km/hour) for around 2 hours. The Heron, which weighs 28 kg, offers a payload bay for submerged sensors and a mounting system for those above water. The robot can carry a maximum payload of 10 kg. A single operator can control multiple Herons simultaneously. The platform, like others described below, is ideal for ecosystem assessments and bathymetry surveys (to map the topography of lakes and ocean floors). More here.

SailDrone
The Saildrone navigates to its destination using wind power alone, typically cruising at an average speed of 5.5 km/hour. The robot can then stay at a designated spot or perform survey patterns. Like other robots introduced here, the Saildrone can carry a range of sensors for data collection. The data is then transmitted back to shore via satellite. The Saildrone is also capable of carrying an additional 100 kg worth of payload. More here.

EMILY
EMILY, an acronym for Emergency Integrated Lifesaving Lanyard, is a robotic device used by lifeguards for rescuing swimmers. It runs on battery power and is operated by remote control after being dropped into the water from shore, a boat, a pier or a helicopter. EMILY has a maximum cruising speed of 35 km per hour (much faster than a human lifeguard can swim) and functions as a flotation device for up to 4-6 people. The platform was used in Greece to assist in ocean rescues of refugees crossing the Aegean Sea from Turkey. More here. The same company has also created Searcher, an autonomous marine robot that I hope to learn more about soon.

Platypus
Platypus manufactures four different types of swimming robots, one of which is depicted above. Called the Serval, this platform has a maximum speed of 15 km/hour with a runtime of 4 hours. The Serval weighs 35 kg and can carry a payload of 70 kg. It can communicate over Wi-Fi, 3G or EDGE. Platypus also offers a base-station package that includes a wireless router and antenna with a range of up to 2.5 km. The Felis, another Platypus robot, has a max speed of 30 km/hour and a max payload of 200 kg. The platform can operate for 12 hours. These platforms can be used for autonomous mapping. More here.

AquaBot
The aim of the AquaBot project is to develop an underwater tethered robot that automates the tasks of visual inspection of fish farm nets and mooring systems. There is little up-to-date information on this project so it is unclear how many prototypes and tests were carried out. Specs for this diving robot don’t seem to be readily available online. More here.

There are of course many more marine robots out there. Have a look at these other companies: Bluefin Robotics, Ocean Server, Riptide Autonomous Systems, Seabotix, Blue Robotics, YSI, AC-CESS and Juice Robotics, for example. The range of applications for maritime robotics is also growing. At WeRobotics, we’re actively exploring a wide number of use-cases to determine if and where maritime robots might be able to add value to the work of our local partners in developing countries.


Take aquaculture (also known as aquafarming), for example. Aquaculture is the fastest growing sector of global food production. But many types of aquaculture remain labor intensive. In addition, a combination of “social and environmental pressures and biological necessities are creating opportunities for aquatic farms to locate in more exposed waters further offshore,” which increases both risks and costs, “particularly those associated with the logistics of human maintenance and intervention activities.” These and other factors make “this an excellent time to examine the possibilities for various forms of automation to improve the efficiency and cost-effectiveness of farming the oceans.”

Just like land-based agriculture, aquaculture can also be devastated by major disasters. To this end, aquaculture represents an important food security issue for local communities directly dependent on seafood for their livelihoods. As such, restoring aquafarms can be a vital element of disaster recovery. After the 2011 Japan Earthquake and Tsunami, for example, maritime robots were used to “remediate fishing nets and accelerate the restarting of the fishing economy.” As further noted in the book Disaster Robotics, the robots “cleared fishing beds from debris and pollution” by mapping the “remaining debris in the prime fishing and aquaculture areas, particularly looking for cars and boats leaking oil and gas and for debris that would snag and tear fishing nets.”


Ports and shipping channels in both Japan and Haiti were also reopened using marine robots following the major earthquakes in 2011 and 2010. The robots mapped the debris fields that could damage or prevent ships from entering the ports. The clearance of this debris allowed “relief supplies to enter devastated areas and economic activities to resume.” To be sure, coastal damage caused by earthquakes and tsunamis can render ports inoperable. Marine robots can thus accelerate both response and recovery efforts by reopening ports that represent primary routes for relief supplies, as noted in the book Disaster Robotics.

In sum, marine robotics can be used for aquaculture, structural inspections, estimation of debris volume and type, victim recovery, forensics, environmental monitoring as well as search, reconnaissance and mapping. While marine robots remain relatively expensive, new types of low-cost solutions are starting to enter the market. As these become cheaper and more sophisticated in terms of their autonomous capabilities, I am hopeful that they will become increasingly useful and accessible to local communities around the world. Check out WeRobotics to learn about how appropriate robotics solutions can support local livelihoods.

Reverse Robotics: A Brief Thought Experiment

Imagine a world in which manually controlled technologies simply do not exist. The very thought of manual technologies is, in actual fact, hardly conceivable let alone comprehensible. Instead, this seemingly alien world is seamlessly powered by intelligent and autonomous robotics systems. Let’s call this world Planet AI.


Planet AI’s versions of airplanes, cars, trains and ships are completely unmanned. That is, they are fully autonomous—a silent symphony of large and small robots waltzing around with no conductor in sight. One fateful night, a young PhD student awakens in a sweat, momentarily unable to breathe. The nightmare: all the swirling robots of Planet AI were no longer autonomous. Each of them had to be told exactly what to do by the Planet’s inhabitants. Madness.

She couldn’t go back to sleep. The thought of having to tell her robotics transport unit (RTU) in the morning how to get from her studio to the university gave her a panic attack. She would inevitably get lost or worse yet crash, maybe even hurt someone. She’d need weeks of practice to manually control her RTU. And even if she could somehow master manual steering, she wouldn’t be able to steer and work on her dissertation at the same time during the 36-minute drive. What’s more, that drive would easily become a 100-minute drive since there’s no way she would manually steer the RTU at 100 kilometers an hour—the standard autonomous speed of RTUs; more like 30km/h.

And what about the other eight billion inhabitants of Planet AI? The thought of billions of manually controlled RTUs flying, driving & swimming through the massive metropolis of New AI was surely the ultimate horror story. Indeed, civilization would inevitably come to an end. Millions would die in horrific RTU collisions. Transportation would slow to a crawl before collapsing. And the many billions of hours spent working, resting or playing in automated RTUs every day would quickly evaporate into billions of hours of total stress and anxiety. The Planet’s Global GDP would go into free fall. RTUs carrying essential cargo automatically from one side of the planet to the other would need to be steered manually. Where would the millions of workers required for such extensive manual labor come from? Who in their right mind would even want to take such a dangerous and dull assignment? Who would provide the training and certification? And who in the world would be able to pay for all the salaries anyway?

At this point, the PhD student was on her feet. “Call RTU,” she instructed her personal AI assistant. An RTU swung by while she was putting on her shoes. Good, so far so good, she told herself. She got in slowly and carefully, studying the RTU’s behavior suspiciously. No, she thought to herself, nothing out of the ordinary here either. It was just a bad dream. The RTU’s soft purring power source put her at ease; she had always enjoyed the RTU’s calming sound. For the first time since she awoke from her horrible nightmare, she started to breathe more easily. She took an extra deep and long breath.


The RTU was already waltzing with ease at 100 km per hour through the metropolis, the speed barely noticeable from inside the cocoon. Forty-six, forty-seven and forty-eight; she was counting the number of other RTUs that were speeding right alongside hers, below and above as well. She arrived on campus in 35 minutes and 48 seconds—exactly the time it had taken the RTU during her 372 earlier rides. She breathed a deep sigh of relief and said “Home Please.” It was just past 3am and she definitely needed more sleep.

She thought of her fiancée on the way home. What would she think about her crazy nightmare given her work in the humanitarian space? Oh no. Her heart began to race again. Just imagine the impact that manually steered RTUs would have on humanitarian efforts. Talk about a total horror story. Life-saving aid, essential medicines, food, water, shelter; each of these would have to be transported manually to disaster-affected communities. The logistics would be near impossible to manage manually. Everything would grind to a halt. Damage assessments would have to be carried out manually as well, by somehow steering hundreds of robotics data units (RDUs) to collect data on affected areas. Goodness, it would take days if not weeks to assess disaster damage. Those in need would be left stranded. “Call Fiancée,” she instructed, shivering at the thought of her fiancée having to carry out her important life-saving relief work entirely manually.


The point of this story and thought experiment? While some on Planet Earth may find the notion of autonomous robotics systems insane and worry about accidents, it is worth noting that a future world like Planet AI would feel exactly the same way about our manually controlled technologies. Over 80% of airplane accidents are due to human pilot error and 90% of car accidents are the result of human driver error. Our PhD student on Planet AI would describe our use of manually controlled technologies as suicidal, not to mention a massive waste of precious human time.


An average person in the US spends 101 minutes per day driving (which adds up to more than four years over a lifetime). There are 214 million licensed car drivers in the US. This means that over 360 million hours of human time in the US alone are spent manually steering cars from point A to point B every day. US road accidents kill more than 30,000 people per year. And again, that’s just for the US. There are over 1 billion manually controlled motor vehicles on Earth. Imagine what we could achieve with an additional billion hours every day if we had Planet AI’s autonomous systems to free up this massive cognitive surplus. And let’s not forget the devastating environmental impact of individually-owned, manually controlled vehicles.
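Those figures are easy to verify. A back-of-the-envelope check, using the numbers quoted above and assuming roughly 60 years of driving per person:

```python
# Back-of-the-envelope check of the US driving-time figures quoted above.
MINUTES_PER_DAY = 101            # average daily driving time per person
LICENSED_DRIVERS = 214_000_000   # licensed car drivers in the US

daily_hours = LICENSED_DRIVERS * MINUTES_PER_DAY / 60
print(f"{daily_hours / 1e6:.0f} million hours per day")  # ~360 million

# 101 minutes a day over roughly 60 years of driving:
lifetime_years = (MINUTES_PER_DAY * 365 * 60) / (60 * 24 * 365)
print(f"{lifetime_years:.1f} years behind the wheel")    # ~4.2 years
```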

If you had the choice, would you prefer to live on Earth or on Planet AI if everything else were held equal?

Humanitarian Robotics: The $15 Billion Question?

The International Community spends around $25 Billion per year to provide life-saving assistance to people devastated by wars and natural disasters. According to the United Nations, this is $15 Billion short of what is urgently needed; that’s $15 Billion short every year. So how do we double the impact of humanitarian efforts and do so at half the cost?
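The size of the funding gap follows directly from those two figures. A quick check:

```python
spent = 25e9      # annual humanitarian spending (USD)
shortfall = 15e9  # annual unmet need (USD)

gap = shortfall / (spent + shortfall)
print(f"funding gap: {gap:.0%}")  # roughly 40% of total need goes unfunded
```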


Perhaps one way to deal with this stunning 40% gap in funding is to scale the positive impact of the aid industry. How? By radically increasing the efficiency (time-savings) and productivity (cost-savings) of humanitarian efforts. This is where Artificial Intelligence (AI) and Autonomous Robotics come in. The World Economic Forum refers to this powerful new combination as the 4th Industrial Revolution. Amazon, Facebook, Google and other Fortune 100 companies are powering this revolution with billions of dollars in R&D. So whether we like it or not, the robotics arms race will impact the humanitarian industry just like it is impacting other industries: through radical gains in efficiency & productivity.

Take Amazon, for example. The company uses some 30,000 Kiva robots in its warehouses across the globe (pictured below). These ground-based, terrestrial robotics solutions have already reduced Amazon’s operating expenses by no less than 20%. And each new warehouse that integrates these self-driving robots will save the company around $22 million in fulfillment expenses alone. According to Deutsche Bank, “Bringing the Kivas to the 100 or so distribution centers that still haven’t implemented the tech would save Amazon a further $2.5 billion.” As is well known, the company is also experimenting with aerial robotics (drones). A recent study by PwC (PDF) notes that “the labor costs and services that can be replaced by the use of these devices account for about $127 billion today, and that the main sectors that will be affected are infrastructure, agriculture, and transportation.” Meanwhile, Walmart and others are finally starting to enter the robotics arms race. Walmart is using ground-based robots to ship apparel and is actively exploring the use of aerial robotics to “photograph warehouse shelves as part of an effort to reduce the time it takes to catalogue inventory.”

Amazon Robotics
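The Deutsche Bank estimate is roughly consistent with the per-warehouse savings quoted above. A quick sketch, treating the ~100 remaining distribution centers and ~$22 million per warehouse as round numbers from the text:

```python
savings_per_warehouse = 22e6  # fulfillment savings per robotized warehouse (USD)
remaining_warehouses = 100    # distribution centers not yet using Kiva robots

total = savings_per_warehouse * remaining_warehouses
print(f"${total / 1e9:.1f} billion")  # ~$2.2 billion, in line with the ~$2.5B estimate
```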

What makes this new industrial revolution different from those that preceded it is the fundamental shift from manually controlled technologies—a world we’re all very familiar with—to a world powered by increasingly intelligent and autonomous systems—an entirely different kind of world. One might describe this as a shift towards extreme automation. And whether extreme automation powers aerial, terrestrial or maritime robots (pictured below) is beside the point. The disruption here is the one-way shift towards increasingly intelligent and autonomous systems.


Why does this fundamental shift matter to those of us working in humanitarian aid? For at least two reasons: the collection of humanitarian information and the transportation of humanitarian cargo. Whether we like it or not, the rise of increasingly autonomous systems will impact both the way we collect data and transport cargo by making these processes faster, safer and more cost-effective. Naturally, this won’t happen overnight: disruption is a process.

Humanitarian organizations cannot stop the 4th Industrial Revolution. But they can apply their humanitarian principles and ideals to inform how autonomous robotics are used in humanitarian contexts. Take the importance of localizing aid, for example, a priority that gained unanimous support at the recent World Humanitarian Summit. If we apply this priority to humanitarian robotics, the question becomes: how can access to appropriate robotics solutions be localized so that local partners can double the positive impact of their own humanitarian efforts? In other words, how do we democratize the 4th Industrial Revolution? Doing so may be an important step towards closing the $15 billion gap. It could render the humanitarian industry more efficient and productive while localizing aid and creating local jobs in new industries.

This is What Happens When You Send Flying Robots to Nepal

In September 2015, we were invited by our partner Kathmandu University to provide them and other key stakeholders with professional hands-on training to help them scale the positive impact of their humanitarian efforts following the devastating earthquakes. More specifically, our partners were looking to get trained on how to use aerial robotics solutions (drones) safely and effectively to support their disaster risk reduction and early recovery efforts. So we co-created Kathmandu Flying Labs to ensure the long-term sustainability of our capacity building efforts. Kathmandu Flying Labs is kindly hosted by our lead partner, Kathmandu University (KU). This is already well known. What is hardly known, however, is what happened after we left the country.


Our Flying Labs are local innovation labs used to transfer both relevant skills and appropriate robotics solutions sustainably to outstanding local partners who need these the most. The co-creation of these Flying Labs includes both joint training and applied projects customized to meet the specific needs & priorities of our local partners. In Nepal, we provided both KU and Kathmandu Living Labs (KLL) with the professional hands-on training they requested. What’s more, thanks to our Technology Partner DJI, we were able to transfer 10 DJI Phantoms (aerial robotics solutions) to our Nepali partners (6 to KU and 4 to KLL). In addition, thanks to another Technology Partner, Pix4D, we provided both KU and KLL with free licenses of the Pix4D software and relevant training so they could easily process and analyze the imagery they captured using their DJI platforms. Finally, we carried out joint aerial surveys of Panga, one of the towns hardest-hit by the 2015 Earthquake. Joint projects are an integral element of our capacity building efforts. These projects serve to reinforce the training and enable our local partners to create immediate added value using aerial robotics. This important phase of Kathmandu Flying Labs is already well documented.


What is less known, however, is what KU did with the technology and software after we left Nepal. Indeed, the results of this next phase of the Flying Labs process (during which we provide remote support as needed) have not been shared widely, until now. KU’s first order of business was to finish the joint project we had started with them in Panga. It turns out that our original aerial surveys there were incomplete, as denoted by the red circle below.

Map_Before

But because we had taken the time to train our partners and transfer both our skills and the robotics technologies, the outstanding team at KU’s School of Engineering returned to Panga to get the job done without needing any further assistance from us at WeRobotics. They filled the gap:

Map_After

The KU team didn’t stop there. They carried out a detailed aerial survey of a nearby hospital to create the 3D model below (at the hospital’s request). They also created detailed 3D models of the university and a nearby temple that had been partially damaged by the 2015 earthquakes. Furthermore, they carried out additional disaster damage assessments in Manekharka and Sindhupalchowk, again entirely on their own.

Yesterday, KU kindly told us about their collaboration with the World Wildlife Fund (WWF). Together, they are conducting a study to determine the ecological flow of Kaligandaki river, one of the largest rivers in Nepal. According to KU, the river’s ecosystem is particularly “complex as it includes aquatic invertebrates, flora, vertebrates, hydrology, geo-morphology, hydraulics, sociology-cultural and livelihood aspects.” The Associate Dean at KU’s School of Engineering wrote “We are deploying both traditional and modern technology to get the information from ground including UAVs. In this case we are using the DJI Phantoms,” which “reduced largely our field investigation time. The results are interesting and promising.” I look forward to sharing these results in a future blog post.


Lastly, KU’s Engineering Department has integrated the use of the robotics platforms directly into their courses, enabling Geomatics Engineering students to use the robots as part of their end-of-semester projects. In sum, KU has done truly outstanding work following our capacity building efforts and deserves extensive praise. (Alas, it seems that KLL has made little to no use of the aerial technologies or the software since our training 10 months ago.)

Several months after the training in Nepal, we were approached by a British company that needed aerial surveys of specific areas for a project that the Nepal Government had contracted them to carry out. So they wanted to hire us for this project. We proposed instead that they hire our partners at Kathmandu Flying Labs, since the latter are more than capable of carrying out the surveys themselves. In other words, we actively drive business opportunities to Flying Labs partners. Helping to create local jobs and local businesses around robotics as a service is one of our key goals and the final phase of the Flying Labs framework.

So when we heard last week that USAID’s Global Development Lab was looking to hire a foreign company to carry out aerial surveys for a food security project in Nepal, we jumped on a call with USAID to let them know about the good work carried out by Kathmandu Flying Labs. We clearly communicated to our USAID colleagues that there are perfectly qualified Nepali pilots who can carry out the same aerial surveys. USAID’s Development Lab will be meeting with Kathmandu Flying Labs during their next visit in September.


On a related note, one of the participants who we trained in September was hired soon after by Build Change to support the organization’s shelter programs by producing Digital Surface Models (DSMs) from aerial images captured using DJI platforms. More recently, we heard from another student who emailed us with the following: “I had an opportunity to participate in the Humanitarian UAV Training mission in Nepal. It’s because of this training I was able learn how to fly drones and now I can conduct aerial Survey on my own with any hardware.  I would like to thank you and your team for the knowledge transfer sessions.”

This same student (who graduated from KU) added: “The workshop that your team did last time gave us the opportunity to learn how to fly and now we are handling some professional works along with major research. My question to you is ‘How can young graduates from developing countries like ours strengthen their capacity and keep up with their passion on working with technology like UAVs […]? The immediate concern for a graduate in Nepal is a simple job where he can make some money for him and prove to his family that he has done something in return for all the investments they have been doing upon him […]’.


This is one of several reasons why our approach at WeRobotics is not limited to scaling the positive impact of local humanitarian, development, environmental and public health projects. Our demand-driven Flying Labs model goes the extra (aeronautical) mile to deliberately create local jobs and businesses. Our Flying Labs partners want to make money off the skills and technologies they gain from WeRobotics. They want to take advantage of the new career opportunities afforded by these new AI-powered robotics solutions. And they want their efforts to be sustainable.

In Nepal, we are now interviewing the KU graduate who posed the question above because we’re looking to hire an outstanding and passionate Coordinator for Kathmandu Flying Labs. Indeed, there is much work to be done as we are returning to Nepal in coming months for three reasons: 1) Our local partners have asked us to provide them with the technology and training they need to carry out large scale mapping efforts using long-distance fixed-wing platforms; 2) A new local partner needs to create very high-resolution topographical maps of large priority areas for disaster risk reduction and planning efforts, which requires the use of a fixed-wing platform; 3) We need to meet with KU’s Business Incubation Center to explore partnership opportunities since we are keen to help incubate local businesses that offer robotics as a service in Nepal.

How to Democratize Humanitarian Robotics

Our world is experiencing an unprecedented shift from manually controlled technologies to increasingly intelligent and autonomous systems powered by artificial intelligence (AI). I believe that this radical shift in both efficiency and productivity can have significant positive social impact when it is channeled responsibly, locally and sustainably.


This is why my team and I founded WeRobotics, the only organization fully dedicated to accelerating and scaling the positive impact of humanitarian, development and environmental projects through the appropriate use of AI-powered robotics solutions. I’m thrilled to announce that the prestigious Rockefeller Foundation shares our vision—indeed, the Foundation has just awarded WeRobotics a start-up grant to take Humanitarian Robotics to the next level. We’re excited to leverage the positive power of robotics to help build a more resilient world in line with Rockefeller’s important vision.


Aerial Robotics (drones/UAVs) represent the first wave of robotics to impact humanitarian sectors by disrupting traditional modes of data collection and cargo delivery. Both timely data and the capacity to act on this data are integral to aid, development and environmental projects. This is why we are co-creating and co-hosting a global network of “Flying Labs” to transfer appropriate aerial robotics solutions and relevant skills to outstanding local partners in developing countries who need these the most.

Our local innovation labs also present unique opportunities for our Technology Partners—robotics companies and institutes. Indeed, our growing network of Flying Labs offers a multitude of geographical, environmental and social conditions for ethical social good projects and responsible field-testing; from high-altitude glaciers and remote archipelagos experiencing rapid climate change to dense urban environments in the tropics subject to intense flooding and endangered ecosystems facing cascading environmental risks.

The Labs also provide our Technology Partners with direct access to local knowledge, talent and markets, and in turn provide local companies and entrepreneurs with facilitated access to novel robotics solutions. In the process, our local partners become experts in different aspects of robotics, enabling them to become service providers and drive new growth through local start-ups and companies. The Labs thus seek to offer robotics-as-a-service across multiple local sectors. As such, the Labs follow a demand-driven social entrepreneurship model designed to catalyze local businesses while nurturing learning and innovation.

Of course, there’s more to robotics than just aerial robotics. This is why we’re also exploring the use of AI-powered terrestrial and maritime robotics for data collection and cargo delivery. We’ll add these solutions to our portfolio as they become more accessible in the future. In the meantime, sincerest thanks to the Rockefeller Foundation for their trust and invaluable support. Big thanks also to our outstanding Board of Directors and to key colleagues for their essential feedback and guidance.

On Humanitarian Innovation versus Robotic Natives

I recently read an excellent piece entitled “Humanitarian Innovation and the Art of the Possible,” which appeared in the latest issue of the Humanitarian Practice Network’s (HPN) magazine. The author warns that humanitarian innovation will have limited systemic impact unless there is a notable shift in the culture and underlying politics of the aid system. Turns out I had written a similar piece (although not nearly as articulate) during the first year of my PhD in 2005. I had, at the time, just re-read Alex de Waal’s Famine Crimes: Politics and the Disaster Relief Industry in Africa and Peter Uvin’s Aiding Violence.

[Photo: local community members in Nepal reviewing a map created with aerial robots]

Kim Scriven, the author of the HPN piece and one of the leading thinkers in the humanitarian innovation space, questions whether innovation efforts are truly “free from the political and institutional blockages curtailing other initiatives” in the humanitarian space. He no doubt relates to “field-based humanitarians who have looked on incredulously as technological quick fixes are deployed from afar to combat essentially political blockages to the provision of aid.” This got me thinking about the now well-accepted notion that information is aid.

What kinds of political blockages exist vis-a-vis the provision of information (communication) during or after humanitarian crises? “For example,” writes Kim, “the adoption of new technology like SMS messaging may help close the gap between aid giver and aid recipient, but it will not be sufficient to ensure that aid givers respond to the views and wishes of affected people.” One paragraph later, Kim warns that we must also “look beyond stated benefits [of innovation] to unintended consequences, for instance around how the growing use of drones and remote communication technologies in the humanitarian sphere may be contributing to the increased use of remote management practices, increasing the separation between agencies and those they seek to assist.”

I find this all very intriguing for several reasons. First, the concern regarding the separation—taken to be the physical distance—between agencies and those they seek to assist is an age-old concern. I first came across said concern while at the Harvard Humanitarian Initiative (HHI) in 2007. At the time, ironically, it was the use of SMS in humanitarian and development projects that provoked separation anxiety amongst aid groups. By 2012, humanitarian organizations were starting to fear that social media would further increase the separation. But as we’ve said, communication is aid, and unlike food and medication, digital information doesn’t need to hitch a ride on UN planes and convoys to reach its destination. Furthermore, studies in social psychology have shown that access to timely information during crises can reduce stress, anxiety and despair. So now, in 2016, it seems to be the turn of drones; surely this emerging technology will finally create the separation anxiety that some humanitarians have long feared (more on this in a bit).

The second reason I find Kim’s points intriguing is because of all the talk around the importance of two-way communication with disaster-affected communities. Take the dire refugee crisis in Europe. When Syrians finally escape the horrid violence in their country and make it alive to Europe, their first question is: “Where am I?” and their second: “Do you have WiFi?” In other words, they want to use their smartphones to communicate & access digital information precisely because mobile technology allows for remote communication and access.

Young humanitarian professionals understand this; they too are Digital Natives. If crisis-affected communities prefer to communicate using mobile phones, then is it not the duty of humanitarian organizations to adapt and use those digital communication channels rather than force their analog channels on others? The priority here shouldn’t be about us and our preferences. But is there a political economy—an entrenched humanitarian industrial complex—that would prefer business as usual since innovation could disrupt existing funding channels? Could these be some of the political & institutional blockages that Kim hints at?

The third reason is the reference to drones. Kim warns that the “growing use of drones and remote communication technologies in the humanitarian sphere may be contributing to the increased use of remote management practices, increasing the separation between agencies and those they seek to assist.” Ironically, the same HPN magazine issue that Kim’s piece appears in also features this article on “Automation for the People: Opportunities and Challenges of Humanitarian Robotics,” co-authored by Dr. Andrew Schroeder & myself. Incidentally, drones (also known as UAVs) are aerial robots.

Kim kindly provided Andrew and me with valuable feedback on earlier drafts. So he is familiar with the Humanitarian UAV Code of Conduct and its focus on Community Engagement since we delve into this in our HPN piece. In fact, the header image featured in Kim’s article (also displayed above) is a photograph I took whilst in Nepal, showing local community members using a map created with aerial robots as part of a damage assessment exercise. Clearly, the resulting map did not create physical separation—quite the contrary, it brought the community and robotics operators together, as has happened in Haiti, Tanzania, the Philippines and elsewhere.

(As an aside, a number of UAV teams in Ecuador used the Code of Conduct in their response efforts, more here. Also, I’m co-organizing an Experts Meeting in the UK this June that will, amongst other deliverables, extend said code of conduct to include the use of aerial robotics for cargo transportation).

What’s more, Andrew and I used our article for HPN to advocate for locally managed and operated robotics solutions enabled through local innovation labs (Flying Labs) to empower local responders. In other words, and to quote Kim’s own concluding paragraph, we agree that “those who focus on innovation must do a better job of relocating innovation capacity from HQ to the field, providing tools and guidance to support those seeking to solve problems in the delivery of aid.” Hence, in part, the Flying Labs.

In fact, we’ve already started co-creating Kathmandu Flying Labs, and thanks to both the relevant training and the appropriate robotics technologies that we transferred to members of Kathmandu Flying Labs following the devastating earthquakes in 2015, one of these partners—Kathmandu University—has since carried out multiple damage assessments using aerial robotics, without needing any assistance from us, or our permission for that matter. The Labs are also about letting go of control, and deliberately so. Which projects Kathmandu Flying Labs partners decide to pursue with their new aerial robotics platforms is entirely their decision, not ours. Trust is key. Besides, the Flying Labs are not only about providing access to appropriate robotics solutions and relevant skills; they are just as much about helping to connect & turbocharge the local capacity for innovation that already exists, and disseminating that innovation globally.

Kathmandu University’s damage assessments didn’t create a separation between themselves and the local communities. KU followed the UAV Code of Conduct and worked directly with local communities throughout. So there is nothing inherent to robotics as a technology that innately creates the separation that Kim refers to. Nor is there anything inherent to robotics that will ensure that aid givers (or robots) respond to the needs of disaster-affected communities. This is also true of SMS, as Kim points out above. Technology is just a tool; how we choose to use technology is a human decision.

The fourth and final reason I find Kim’s piece intriguing is because it suggests that remote management practices and physical separations between agencies and those they seek to assist are to be avoided. But the fact of the matter is that remote management is sometimes the most efficient solution; in some cases, it is the only solution, as clearly evidenced in the protracted response to the complex humanitarian crisis in Syria. In fact, the United Nations’ Inter-Agency Standing Committee (IASC) suggests bolstering remote management in some cases. And besides, the vast majority of humanitarian interventions engage in some level of remote management.

So if we can use aerial robotics to deliver essential supplies more quickly, more reliably and at lower cost (like in Rwanda), then how exactly does using fewer motorbikes or trucks to deliver said supplies create more separation between agencies and those they seek to assist? In the case of Rwanda, aerial robotics solutions are airlifting much-needed blood supplies to remote health clinics across the country. I’d like to know how exactly this creates a separation between the doctors administering the blood transfusion and the patients receiving said transfusion. As for using aerial robotics solutions to collect data, we’ve already shown that community engagement is key and that local partners can expertly manage the operation of robotics platforms independently. The most obvious alternative to aerial imagery is satellite imagery, but orbiting satellites certainly don’t allow local partners and communities to participate in data collection.

So are there “political and institutional blockages” against the use of robotics in humanitarian efforts? Might humanitarian organizations receive less funding if aerial robotics solutions prove to be cheaper, more effective and more scalable? Is this one reason, to quote Kim, that “Emerging ideas get stuck at the pilot stage or siloed within a single organization unable to achieve scale and impact”? Are political & institutional barriers curtailing in part the entry of new and radically more efficient solutions to deliver aid? If these autonomous solutions require fewer international staff to operate manually, will the underlying politics of the $25 billion-a-year aid industry allow such a shift? Or will it revert to fears over (money) separation anxiety?

We should realize that disaster-affected communities today are increasingly digital communities. As such, Digital Natives do not necessarily share the physical separation anxieties that aid organizations seemingly experience with every new emerging technology. Digital Natives, by definition, prefer a friction-free world. But by the time we catch on, we’ll no doubt struggle to understand the newer world of Robotic Natives. We’ll look on incredulously as the new generation of Robotic and AI Natives prefer to interact with Facebook chatbots over “analog humanitarians” during disasters. Some of us may cry foul when Robotic Natives decide to get their urgent 3D-printed food supplies delivered to them via aerial robotics while riding a driverless robotics car to their automatically built, just-in-time shelter.

In conclusion, yes, we should of course be aware and wary of the unintended consequences that new innovations in technology may have when employed in humanitarian settings. Has anyone ever suggested the contrary? At the same time, we should realize that those same unintended consequences may in some cases be welcomed or even preferred over the status quo, especially by Robotic Natives. In other words, those unintended effects may not always be a bug, but rather a feature. Whether these consequences are viewed as a bug or a feature is ultimately a political decision. And whether or not the culture and underlying politics of the aid system will shift to accommodate the new bug-as-a-feature worldview, we may be deluding ourselves if we think we can change the worldview of Robotic Natives to accommodate our culture and politics. Such is the nature of innovation and systemic impact.

How Can Digital Humanitarians Best Organize for Disaster Response?

I published a blog post with the same question in 2012. The question stemmed from earlier conversations I had at 10 Downing Street with colleague Duncan Watts from Microsoft Research. We subsequently embarked on a collaboration with the Standby Task Force (SBTF), a group I co-founded back in 2010. The SBTF was one of the early pioneers of digital humanitarian action. The purpose of this collaboration was to empirically explore the relationship between team size and productivity during crisis mapping efforts.

[Map: final version of the Typhoon Pablo crisis map produced for UN/OCHA]

Duncan and his team at Microsoft simulated the SBTF’s crisis mapping efforts in response to Typhoon Pablo in 2012. At the time, the United Nations Office for the Coordination of Humanitarian Affairs (UN/OCHA) had activated the Digital Humanitarian Network (DHN) to create a crisis map of disaster impact (final version pictured above). OCHA requested the map within 24 hours. While we could have deployed the SBTF using the traditional crowdsourcing approach as before, we decided to try something different: microtasking. This was admittedly a gamble on our part.

We reached out to the team at PyBossa to ask them to customize their microtasking platform so that we could rapidly filter through both images and videos of disaster damage posted on Twitter. Note that we had never been in touch with the PyBossa team before this (hence the gamble), nor had we ever used their CrowdCrafting platform (which was still very new at the time). But thanks to PyBossa’s quick and positive response to our call for help, we were able to launch this microtasking app several hours after OCHA’s request.
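To give a rough sense of the preprocessing such a microtasking deployment involves, here is a minimal sketch of the first step: pulling out tweets that link to images or videos and wrapping each one as a task payload for volunteers. This is purely illustrative; the field names, URL patterns and question text are my assumptions, not the actual 2012 pipeline, and real task creation would go through PyBossa’s own API.

```python
import re

# Hypothetical media-link pattern: direct image/video files plus a
# couple of common 2012-era media hosts. Illustrative, not exhaustive.
MEDIA_PATTERN = re.compile(
    r"https?://\S+\.(?:jpg|jpeg|png|gif|mp4)"
    r"|https?://(?:twitpic\.com|youtu\.be|www\.youtube\.com)/\S+",
    re.IGNORECASE,
)

def extract_media_tasks(tweets):
    """Return one microtask payload per tweet that contains a media link."""
    tasks = []
    for tweet in tweets:
        match = MEDIA_PATTERN.search(tweet["text"])
        if match:
            tasks.append({
                "tweet_id": tweet["id"],
                "media_url": match.group(0),
                "question": "Does this image/video show disaster damage?",
            })
    return tasks

tweets = [
    {"id": 1, "text": "Flooding in Davao http://twitpic.com/abc123"},
    {"id": 2, "text": "Stay safe everyone, no damage here"},
    {"id": 3, "text": "Roof gone http://example.com/roof.jpg"},
]
print(extract_media_tasks(tweets))
```

Each resulting payload is what a volunteer would then see in the microtasking app: one media item, one simple question.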

Fast forward to the present research study. We gave Duncan and colleagues at Microsoft the same database of tweets for their simulation experiment. To conduct this experiment and replicate the critical features of crisis mapping, they created their own “CrowdMapper” platform pictured below.

[Screenshots: the CrowdMapper platform]

The CrowdMapper experiments suggest that the positive effects of coordination between digital humanitarian volunteers, i.e., teams, dominate the negative effects of social loafing, i.e., volunteers working independently from others. In social psychology, “social loafing is the phenomenon of people exerting less effort to achieve a goal when they work in a group than when they work alone” (1). In the CrowdMapper exercise, the teams performed comparably to the SBTF deployment following Typhoon Pablo. This suggests that such experiments can “help solve practical problems as well as advancing the science of collective intelligence.”
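One way to picture the trade-off the experiments measure is with a toy model (entirely my own illustration, not CrowdMapper’s actual design): independent volunteers duplicate each other’s work, while a coordinated team divides the work up but each member exerts slightly less effort due to social loafing. The pool size, per-person workload and loafing rate below are arbitrary assumptions.

```python
# Toy model: n volunteers label items from a shared pool.
def independent_coverage(n, pool, per_person):
    # Expected number of unique items covered when each volunteer
    # samples per_person items uniformly at random (overlap allowed).
    return pool * (1 - (1 - per_person / pool) ** n)

def team_coverage(n, pool, per_person, loafing=0.1):
    # The team splits the pool with no duplication, but each member
    # completes only (1 - loafing) of their share.
    return min(pool, n * per_person * (1 - loafing))

pool, per_person = 1000, 100
for n in (2, 5, 10):
    print(n, round(independent_coverage(n, pool, per_person)),
          round(team_coverage(n, pool, per_person)))
```

In this toy setting the coordinated team overtakes the independent volunteers as the group grows, because avoided duplication outweighs the small per-member loafing penalty, which is the qualitative pattern the CrowdMapper experiments report.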

Our MicroMappers deployments have always included a live chat (IM) feature in the user interface precisely to support collaboration. Skype has also been used extensively during digital humanitarian efforts and Slack is now becoming more common as well. So while we’ve actively promoted community building and facilitated active collaboration over the past 6+ years of crisis mapping efforts, we now have empirical evidence that confirms we’re on the right track.

The full study by Duncan et al. is available here. As they note vis-a-vis areas for future research, we definitely need more studies on the division of labor in crisis mapping efforts. So I hope they or other colleagues will pursue this further.

Many thanks to the Microsoft Team and to the SBTF for collaborating on this applied research, one of the few such studies in the field of crisis mapping and digital humanitarian action.


The main point I would push back on vis-a-vis Duncan et al.’s study is comparing their simulated deployment with the SBTF’s real-world deployment. The reason it took the SBTF 12 hours to create the map was precisely because we didn’t take the usual crowdsourcing approach. As such, most of the 12 hours was spent on reaching out to PyBossa, customizing their microtasking app, testing said app and then finally deploying the platform. The Microsoft Team also had the dataset handed over to them, while we had to use a very early, untested version of the AIDR platform to collect and filter the tweets, which created a number of hiccups. So this too took time. Finally, it should be noted that OCHA’s activation came during early evening (local time) and I for one pulled an all-nighter that night to ensure we had a map by sunrise.