My colleague Helena Puig Larrauri recently published this excellent piece on the ethical problems with, and possible solutions to, using UAVs for good in conflict settings. I highly recommend reading her article. The purpose of my blog post is simply to reflect on the important issues that Helena raises.
One of Helena’s driving questions is this: “Does the local population get a say in what data is collected, and to what purpose?” She asks this in the context of the surveillance drones (pictured above) used by the United Nations’ Department of Peacekeeping Operations (DPKO) in the Democratic Republic of the Congo (DRC). While the use of non-lethal UAVs in conflict zones raises a number of complicated issues, Helena is right to insist that we begin discussing these hard issues sooner rather than later. To this end, she presents “three problems and two possible solutions to start a conversation on drones, ethics and conflict.” I italicized solutions because much of the nascent discourse on this topic seems preoccupied with repeating the problems that have already been identified, leaving little time and consideration for discussions of possible solutions. So kudos to Helena.
Problem 1: Privacy and Consent. How viable is it to obtain consent from those being imaged for UAV-collected data? As noted in this blog post on data protection protocols for crisis mapping, the International Committee of the Red Cross recognizes that, “When such consent cannot be realistically obtained, information allowing the identification of victims or witnesses, should only be relayed in the public domain if the expected protection outcome clearly outweighs the risks. In case of doubt, displaying only aggregated data, with no individual markers, is strongly recommended.” But Helena argues that drawing the line on what is actually life-threatening in a conflict context is particularly hard. “UAVs cannot detect intent, so how are imagery analysts to determine if a situation is likely to result in loss of life?” These are really important questions, and I certainly do not have all, most or any of the answers.
In terms of UAVs not being able to detect intent, could other data sources be used to monitor tactics and strategies that may indicate intent to harm? On a different note, DigitalGlobe’s latest and most sophisticated satellite, WorldView-3, captures images at an astounding 31-centimeter resolution and can even see wildfires beneath the smoke. What happens when commercial satellites are able to capture imagery at 20- or 10-centimeter resolutions? Will DigitalGlobe ask the planet’s population for their consent? Does anyone know of any studies that have analyzed just how much, and what kind, of personally identifying information can be captured via satellite and UAV imagery at various resolutions, especially when linked to other datasets?
Problem 2: Fear and Confusion. Helena kindly refers to this blog post of mine on common misconceptions about UAVs. I gasped when she quite rightly noted that my post didn’t explicitly distinguish between the use of UAVs in response to natural hazards versus violent, armed conflict. To be clear, I was speaking strictly and only about the former. The very real possibility of fear and confusion that Helena and others describe is precisely why I’ve remained steadfast about including the following guideline in the Humanitarian UAV Network’s Code of Conduct:
“Do not operate humanitarian UAVs in conflict zones or in countries under repressive, authoritarian rule; particularly if military drones have recently been used in these countries.”
As Helena notes, a consortium of NGOs working in the DRC has warned that DPKO’s use of surveillance drones in the country could “blur the lines between military and humanitarian actors.” According to Daniel Gilman from the UN Office for the Coordination of Humanitarian Affairs (OCHA), who also authored OCHA’s Policy Brief on Humanitarian UAVs,
“The DRC NGO position piece has to be understood in the context of the Oslo Guidelines on the use of Military and Civil Defense Assets in Disaster Relief – from conversations with some people engaged on the ground, the issue was less the tech itself [i.e., the drones] than the fact that the mission was talking about using this [tech] both for military interventions and ‘humanitarian’ needs, particularly since [DPKO’s] Mission doesn’t have a humanitarian mandate. We should be careful of eliding issues around dual-use by military actors with use by humanitarians in conflicts or with general concerns about privacy” (Email exchange on Sept. 8, 2014, permission to publish this excerpt granted in writing).
This is a very important point. Still, distinguishing between UAVs operated by the military and those used by humanitarian organizations for non-military purposes is no easy task—assuming it is even possible. Does this mean that UAVs should simply not be used for good in conflict zones? I’m conflicted. (As an aside, this dilemma reminds me of the “Security Dilemma” in International Relations Theory, and in particular the related “Offense-Defense Theory”.)
Perhaps an alternative is for DPKO to use its helicopters instead (like the one below), which, for some (most?) civilians, may look scarier than DPKO’s drone above. Keep in mind that such helicopters and military cargo planes are also significantly louder, which may add to the fear. Also, using helicopters to capture aerial imagery doesn’t really solve the privacy and consent problem.
On the plus side, we can at least distinguish these UN-marked helicopters from the military attack helicopters used by repressive regimes. Then again, what prevents a ruthless regime from painting its helicopters white and adding big UN letters to maintain an element of surprise when bombing its own civilians?
Going back to DPKO’s drone, it is perhaps worth emphasizing that these models are definitely on the larger and heavier end of the spectrum. Compare the above with the small, ultralight UAV below, which was used following Typhoon Haiyan in the Philippines. This UAV is almost entirely made of foam and thus weighs only ~600 grams. When airborne, it looks like a bird. So it may elicit less fear even if DPKO ends up using this model in the future.
Problem 3: Response and Deterrence. Helena asks whether it is ethical for DPKO or other UN/NGO actors to deploy UAVs “if they do not have the capacity to respond to increased information on threats?” Could the use of UAVs raise expectations of a response? “One possible counter-argument is to say that the presence of UAVs is in itself a deterrent” to would-be perpetrators of violence, “just as the presence of UN peacekeepers is meant to be a deterrent.” As Helena writes, the head of DPKO has suggested that deterrence is actually a direct aim of the UN’s drone program. “But the notion that a digital Panopticon can deter violent acts is disputable (see for example here), since most conflict actors on the ground are unlikely to be aware that they are being watched and/or are immune to the consequences of surveillance.”
I suppose this leads to the following question: are there ways to make conflict actors on the ground aware that they are perhaps being watched? Then again, if they do realize that they’re being watched, won’t they simply adapt and evolve strategies to evade or shoot down DPKO’s UAVs? This would then force DPKO to change its own strategy, perhaps adopting stealthier UAVs. What broader consequences and unintended impacts could this have on civilian, crisis-affected communities?
Solution 1: Education and Civic Engagement. I completely agree with Helena’s emphasis on both education and civic engagement, two key points I’ve made in a number of posts (here, here & here). I also agree that “This can make way for informed consent about the operation of drones, allowing communities to engage critically, offer grounded advice and hold drone operators to account.” But this brings us back to Helena’s first question: “what happens if a community, after being educated and openly consulted about a UAV program, decides that drones pose too much of a risk or are otherwise not beneficial? In other words, can communities stop UN- or NGO-operated drones from collecting information they have not consented to sharing? Education will be insufficient if there are no mechanisms in place for participatory decision-making on drone use in conflict settings.” So what to do? Perhaps Helena’s second solution may shed some light.
Solution 2: From Civic Engagement to Empowerment. In Helena’s view, “the critical ethical question about drones and conflict is how they shift the balance of power. As with other data-driven, tech-enabled tools, ultimately the only ethical solution (and probably also the most effective at achieving impact) is community-driven implementation of UAV programs.” I completely agree with this as well, which is why I’m very interested in this community-based project in Haiti and this grassroots UAV initiative; in fact, I invited the latter’s team leads to join the Advisory Board of the Humanitarian UAV Network (UAViators) given their expertise in UAVs and their explicit focus on community engagement.
In terms of peacebuilding applications, Helena writes that “there is plenty that local peacebuilders could use drones for in conflict settings: from peace activism using tactics for civil resistance, to citizen journalism that communicates the effects of conflict, to community monitoring and reporting of displacement due to violence.” But as she rightly notes, these novel applications exacerbate the three ethical problems outlined above. So what now?
I have some (unformed) ideas, but this blog post is long enough already. I’ll leave these for a future post and simply add the following for now. First, in terms of civil resistance and the need to distinguish between a regime’s UAVs and activist UAVs, perhaps secret codes could be used to signal that a UAV is flying for a civil resistance mission. This could mean painting certain markings on the UAV or flying in a particular pattern. Of course, this leads back to the age-old challenge of disseminating the codes widely enough while keeping them from falling into the wrong hands.
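To make the dissemination challenge a bit more concrete: one partial mitigation, purely my own illustration and not anything anyone has actually fielded, is to make the “friendly” code time-varying rather than static, so that an intercepted code becomes useless within minutes. The sketch below derives a short rolling code from a shared secret in the style of TOTP (RFC 6238); the names and parameters are all hypothetical.

```python
import hmac
import hashlib
import struct

def flight_code(secret: bytes, time_step: int) -> str:
    """Derive a short rolling code from a shared secret and a time-step
    counter (e.g. the current epoch time divided by 30 seconds)."""
    digest = hmac.new(secret, struct.pack(">Q", time_step), hashlib.sha256).digest()
    # Truncate to a 6-digit code, TOTP-style (RFC 6238 dynamic truncation).
    offset = digest[-1] & 0x0F
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{value % 1_000_000:06d}"

def verify(secret: bytes, code: str, time_step: int, drift: int = 1) -> bool:
    """Accept codes from adjacent time steps to tolerate clock drift
    between the UAV operator and the observer on the ground."""
    return any(
        hmac.compare_digest(flight_code(secret, time_step + d), code)
        for d in range(-drift, drift + 1)
    )
```

A leaked code then only helps an adversary for one short window, though the harder problem, getting the secret itself to the right people and no one else, remains exactly the one described above.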
Second, I used to work extensively in the conflict prevention and conflict early warning space (see my original blog on this). During this time, I was a strong advocate for a people-centered approach to early warning and rapid response systems. The UN’s Global Survey of Early Warning Systems (PDF) defines the purpose of people-centered early warning systems as follows:
“… to empower individuals and communities threatened by hazards to act in sufficient time & in an appropriate manner so as to reduce the possibility of personal injury, loss of life, damage to property and the environment, and loss of livelihoods.”
This shift is ultimately a shift in the balance of power, away from state-centric power to people-power, which is why I wholeheartedly agree with Helena’s closing thoughts: “The more I consider how drones could be used for good in conflict settings, the more I think that local peacebuilders need to turn the ethics discourse on its head: as well as defending privacy and holding drone operators to account, start using the same tools and engage from a place of power.” This is not about us.