On Humanitarian Innovation versus Robotic Natives

I recently read an excellent piece entitled “Humanitarian Innovation and the Art of the Possible,” which appeared in the latest issue of the Humanitarian Practice Network’s (HPN) magazine. The author warns that humanitarian innovation will have limited systemic impact unless there is a notable shift in the culture and underlying politics of the aid system. It turns out I had written a similar (though far less articulate) piece during the first year of my PhD in 2005. I had, at the time, just re-read Alex de Waal’s Famine Crimes: Politics and the Disaster Relief Industry in Africa and Peter Uvin’s Aiding Violence.


Kim Scriven, the author of the HPN piece and one of the leading thinkers in the humanitarian innovation space, questions whether innovation efforts are truly “free from the political and institutional blockages curtailing other initiatives” in the humanitarian space. He no doubt relates to “field-based humanitarians who have looked on incredulously as technological quick fixes are deployed from afar to combat essentially political blockages to the provision of aid.” This got me thinking about the now well-accepted notion that information is aid.

What kinds of political blockages exist vis-a-vis the provision of information (communication) during or after humanitarian crises? “For example,” writes Kim, “the adoption of new technology like SMS messaging may help close the gap between aid giver and aid recipient, but it will not be sufficient to ensure that aid givers respond to the views and wishes of affected people.” One paragraph later, Kim warns that we must also “look beyond stated benefits [of innovation] to unintended consequences, for instance around how the growing use of drones and remote communication technologies in the humanitarian sphere may be contributing to the increased use of remote management practices, increasing the separation between agencies and those they seek to assist.”

I find this all very intriguing for several reasons. First, the concern regarding the separation—taken to be the physical distance—between agencies and those they seek to assist is an age-old concern. I first came across said concern while at the Harvard Humanitarian Initiative (HHI) in 2007. At the time, ironically, it was the use of SMS in humanitarian and development projects that provoked separation anxiety amongst aid groups. By 2012, humanitarian organizations were starting to fear that social media would further increase the separation. But as we’ve said, communication is aid, and unlike food and medication, digital information doesn’t need to hitch a ride on UN planes and convoys to reach its destination. Furthermore, studies in social psychology have shown that access to timely information during crises can reduce stress, anxiety and despair. So now, in 2016, it seems to be the turn of drones; surely this emerging technology will finally create the separation anxiety that some humanitarians have long feared (more on this in a bit).

The second reason I find Kim’s points intriguing is because of all the talk around the importance of two-way communication with disaster-affected communities. Take the dire refugee crisis in Europe. When Syrians finally escape the horrid violence in their country and make it alive to Europe, their first question is: “Where am I?” and their second: “Do you have WiFi?” In other words, they want to use their smartphones to communicate & access digital information precisely because mobile technology allows for remote communication and access.

Young humanitarian professionals understand this; they too are Digital Natives. If crisis-affected communities prefer to communicate using mobile phones, then is it not the duty of humanitarian organizations to adapt and use those digital communication channels rather than force their analog channels on others? The priority here shouldn’t be about us and our preferences. But is there a political economy—an entrenched humanitarian industrial complex—that would prefer business as usual since innovation could disrupt existing funding channels? Could these be some of the political & institutional blockages that Kim hints at?

The third reason is the reference to drones. Kim warns that the “growing use of drones and remote communication technologies in the humanitarian sphere may be contributing to the increased use of remote management practices, increasing the separation between agencies and those they seek to assist.” Ironically, the same HPN magazine issue that Kim’s piece appears in also features this article on “Automation for the People: Opportunities and Challenges of Humanitarian Robotics,” co-authored by Dr. Andrew Schroeder & myself. Incidentally, drones (also known as UAVs) are aerial robots.

Kim kindly provided Andrew and me with valuable feedback on earlier drafts. So he is familiar with the Humanitarian UAV Code of Conduct and its focus on Community Engagement, since we delve into this in our HPN piece. In fact, the header image featured in Kim’s article (also displayed above) is a photograph I took whilst in Nepal, showing local community members using a map created with aerial robots as part of a damage assessment exercise. Clearly, the resulting map did not create physical separation—quite the contrary, it brought the community and robotics operators together, as has happened in Haiti, Tanzania, the Philippines and elsewhere.

(As an aside, a number of UAV teams in Ecuador used the Code of Conduct in their response efforts, more here. Also, I’m co-organizing an Experts Meeting in the UK this June that will, amongst other deliverables, extend said code of conduct to include the use of aerial robotics for cargo transportation).

What’s more, Andrew and I used our article for HPN to advocate for locally managed and operated robotics solutions enabled through local innovation labs (Flying Labs) to empower local responders. In other words, and to quote Kim’s own concluding paragraph, we agree that “those who focus on innovation must do a better job of relocating innovation capacity from HQ to the field, providing tools and guidance to support those seeking to solve problems in the delivery of aid.” Hence, in part, the Flying Labs.

In fact, we’ve already started co-creating Kathmandu Flying Labs, and thanks to both the relevant training and the appropriate robotics technologies that we transferred to members of Kathmandu Flying Labs following the devastating earthquakes in 2015, one of these partners, Kathmandu University, has since carried out multiple damage assessments using aerial robotics, without needing any assistance from us, or our permission for that matter. The Labs are also about letting go of control, and deliberately so. Which projects Kathmandu Flying Labs partners decide to pursue with their new aerial robotics platforms is entirely their decision, not ours. Trust is key. Besides, the Flying Labs are not only about providing access to appropriate robotics solutions and relevant skills; they are just as much about helping to connect & turbocharge the local capacity for innovation that already exists, and disseminating that innovation globally.

Kathmandu University’s damage assessments didn’t create a separation between themselves and the local communities. KU followed the UAV Code of Conduct and worked directly with local communities throughout. So there is nothing inherent to robotics as a technology that innately creates the separation that Kim refers to. Nor is there anything inherent to robotics that will ensure that aid givers (or robots) respond to the needs of disaster-affected communities. This is also true of SMS, as Kim points out above. Technology is just a tool; how we choose to use technology is a human decision.

The fourth and final reason I find Kim’s piece intriguing is because it suggests that remote management practices and physical separations between agencies and those they seek to assist are to be avoided. But the fact of the matter is that remote management is sometimes the most efficient solution; in some cases, it is the only solution, as clearly evidenced in the protracted response to the complex humanitarian crisis in Syria. In fact, the United Nations Inter-Agency Standing Committee (IASC) suggests bolstering remote management in some cases. And besides, the vast majority of humanitarian interventions engage in some level of remote management.

So if we can use aerial robotics to deliver essential supplies more quickly, more reliably and at lower cost (like in Rwanda), then how exactly does using fewer motorbikes or trucks to deliver said supplies create more separation between agencies and those they seek to assist? In the case of Rwanda, aerial robotics solutions are airlifting much-needed blood supplies to remote health clinics across the country. I’d like to know how exactly this creates a separation between the doctors administering the blood transfusion and the patients receiving said transfusion. As for using aerial robotics solutions to collect data, we’ve already shown that community engagement is key and that local partners can expertly manage the operation of robotics platforms independently. The most obvious alternative to aerial imagery is satellite imagery, but orbiting satellites certainly don’t allow local partners and communities to participate in data collection.

So are there “political and institutional blockages” against the use of robotics in humanitarian efforts? Might humanitarian organizations receive less funding if aerial robotics solutions prove to be cheaper, more effective and more scalable? Is this one reason, to quote Kim, that “Emerging ideas get stuck at the pilot stage or siloed within a single organization unable to achieve scale and impact”? Are political & institutional barriers curtailing in part the entry of new and radically more efficient solutions to deliver aid? If these autonomous solutions require fewer international staff to operate them manually, will the underlying politics of the $25 billion-a-year aid industry allow such a shift? Or will it revert to fears over (money) separation anxiety?

We should realize that disaster-affected communities today are increasingly digital communities. As such, Digital Natives do not necessarily share the physical separation anxieties that aid organizations seemingly experience with every new emerging technology. Digital Natives, by definition, prefer a friction-free world. But by the time we catch on, we’ll no doubt struggle to understand the newer world of Robotic Natives. We’ll look on incredulously as the new generation of Robotic and AI Natives prefer to interact with Facebook chatbots over “analog humanitarians” during disasters. Some of us may cry foul when Robotic Natives decide to get their urgent 3D-printed food supplies delivered to them via aerial robotics while riding a driverless robotics car to their automatically built just-in-time shelter.

In conclusion, yes, we should of course be aware and wary of the unintended consequences that new innovations in technology may have when employed in humanitarian settings. Has anyone ever suggested the contrary? At the same time, we should realize that those same unintended consequences may in some cases be welcomed or even preferred over the status quo, especially by Robotic Natives. In other words, those unintended effects may not always be a bug, but rather a feature. Whether these consequences are viewed as a bug or a feature is ultimately a political decision. And whether or not the culture and underlying politics of the aid system will shift to accommodate the new bug-as-a-feature worldview, we may be deluding ourselves if we think we can change the worldview of Robotic Natives to accommodate our culture and politics. Such is the nature of innovation and systemic impact.

2 responses to “On Humanitarian Innovation versus Robotic Natives”

  1. I think the trend is strongly towards remote management. As “accountability” is emphasized, it is a brave manager who trusts frontline staff with decisions for which s/he is accountable even though they have a more detailed and nuanced situational awareness. Bravo to Flying Labs for reversing the trend and pushing responsibility back towards the people in the area of operations.
    The other difficulty is that decision makers tend to be more experienced, a good thing except that their experience is from the previous generation and they do not seek the advice of the digital/robotic/next-technology natives. This makes the introduction of new technology difficult, and it is often fresh organisations, like Flying Labs, that can see past the risk in novelty to the efficiency and utility.
