Tag Archives: Evaluation

Results: Evaluation of UAVs for Humanitarian Use

UAViators Logo

My team and I at the Humanitarian UAV Network (UAViators) have just completed the first phase of our evaluation and welcome feedback on the results. We reviewed over 150 UAV models along with camera technologies, payload units, and image processing and analysis software. Each of these items has been reviewed in the context of humanitarian applications and with humanitarian practitioners in mind as end users.

The results of the evaluation are available here in the form of an open and editable Google spreadsheet. We are actively looking for feedback and very much welcome additional entries, so feel free to review and add more UAVs and related technologies directly to the spreadsheet. Our second phase will involve scoring/weighting the results to identify the UAVs, cameras and software that may be the best fit for humanitarian organizations.

In the meantime, big thanks to my research assistants who carried out all the research for this review.


See Also:

  • Humanitarian UAV Network: Strategy for 2014-2015 [link]
  • Humanitarians in the Sky: Using UAVs for Disaster Response [link]
  • Humanitarian UAV Missions During Balkan Floods [link]
  • UAVs, Community Mapping & Disaster Risk Reduction in Haiti [link]
  • Crisis Map of UAV Videos for Disaster Response [link]
  • Check-List for Flying UAVs in Humanitarian Settings [link]


Comprehensive List of UAVs for Humanitarian Response

The Humanitarian UAV Network (UAViators) is carrying out a comprehensive evaluation of UAVs and related technologies for use in humanitarian settings. We’ve developed an evaluation framework for this assessment and have now drafted this list of UAV models to determine which are worth evaluating. As you’ll note, the link also points to three other related lists: cameras, sensors and software for image processing and analysis. We’ll be evaluating these as well to identify which are the best fit for use by humanitarians in the field.


We are actively seeking feedback on these preliminary lists in order to identify which items we should prioritize for evaluation. The lists are available in this open and editable Google Spreadsheet. Each list includes a column for you to add any comments you might have about any of the entries. We’re particularly interested in feedback on which items are not worth evaluating and which should be added to the lists. Thank you!


See Also:

  • Crisis Map of UAV/Aerial Videos for Disaster Response [link]
  • How UAVs are Making a Difference in Disaster Response [link]
  • Humanitarians Using UAVs for Post Disaster Recovery [link]
  • Grassroots UAVs for Disaster Response [link]
  • Using UAVs for Search & Rescue [link]
  • Debrief: UAV/Drone Search & Rescue Challenge [link]
  • Crowdsourcing Analysis of UAV Imagery for Search/Rescue [link]
  • Check-List for Flying UAVs in Humanitarian Settings [link]

Evaluating UAVs for Humanitarian Response

The Humanitarian UAV Network is carrying out an evaluation of UAVs and related technologies for use in humanitarian settings. The related technologies being evaluated include cameras, payload units, image processing & analysis software. As a first step, we have created an evaluation framework based on parameters relevant for field-based deployments of UAVs by humanitarian organizations. Before moving to the next step in the evaluation—identifying which UAVs and related technologies to evaluate—we want to make sure that we’re on the right track as far as our evaluation framework goes. So the purpose of this blog post is to seek feedback on said framework.


To recap, we are evaluating four distinct technologies: 1) UAVs; 2) Cameras; 3) Payload units; and 4) Image Processing & Analysis Software specifically for humanitarian use. So below are the evaluation criteria we have identified for each technology.

UAVs: Type, Cost, Size, Weight, Appearance, Noise Factor, Durability, Ease of Use, Ease of Repair, Payload Capacity, Flight Time, Transmitter Range, Autonomy, Camera/Gimbal Compatibility and Legality/Customs.

Most of the parameters are self-explanatory but a few require some elaboration. Type refers to whether the UAV is a fixed-wing, rotary-wing or a hybrid. Appearance seeks to evaluate whether the UAV airframe looks threatening or more like a toy, for example. Autonomy refers to whether the UAV can be flown autonomously. Legality/Customs seeks to assess whether the transportation of the UAV across borders is likely to be easy or challenging.

Cameras: Cost, Size, Weight, Megapixels, Memory, Control, Durability, Ease of Use, Ease of Repair, Lens Type and Gimbal Compatibility.

Payload Units: Cost, Size, Weight, Type of Release Mechanism, Ease of Use and Ease of Repair.

Image Software: Cost, Image Processing, Image Analysis, Ease of Use, System Requirements, Compatibility and Type of License.

We need to make sure that we fill any gaps in our evaluation criteria before proceeding with the assessment. So what parameters are we missing?
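Once the criteria are settled, the scoring step of the evaluation can be as simple as a weighted average over them. Here is a minimal sketch in Python; the function name, weights and scores below are illustrative placeholders, not our actual framework or ratings:

```python
# Illustrative sketch of a weighted-scoring step. Criterion names follow
# the evaluation framework above; the weights and scores are placeholders.
def weighted_score(scores, weights):
    """Combine per-criterion scores (e.g., on a 1-5 scale) into one weighted total."""
    total_weight = sum(weights[c] for c in scores)
    return sum(scores[c] * weights[c] for c in scores) / total_weight

# Hypothetical example: a UAV rated on three criteria, with Ease of Use
# weighted twice as heavily as the others.
scores = {"Cost": 4, "Ease of Use": 5, "Flight Time": 3}
weights = {"Cost": 1, "Ease of Use": 2, "Flight Time": 1}
print(round(weighted_score(scores, weights), 2))  # 4.25
```

The weights themselves are exactly the kind of input we hope reviewers will help us pin down: which criteria matter most to humanitarians in the field.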


See Also:

  • Welcome to the Humanitarian UAV Network [link]
  • How UAVs are Making a Difference in Disaster Response [link]
  • Humanitarians Using UAVs for Post Disaster Recovery [link]
  • Grassroots UAVs for Disaster Response [link]
  • Using UAVs for Search & Rescue [link]
  • Debrief: UAV/Drone Search & Rescue Challenge [link]
  • Crowdsourcing Analysis of UAV Imagery for Search/Rescue [link]
  • Check-List for Flying UAVs in Humanitarian Settings [link]


Humanitarian UAV Network: Strategy for 2014-2015

The purpose of the Humanitarian UAV Network (UAViators) is to guide the responsible and safe use of small UAVs in humanitarian settings while promoting information sharing and enlightened policymaking. As I’ve noted in the past, UAVs are already being used to support a range of humanitarian efforts. So the question is not if, but rather how to facilitate the inevitable expanded use of UAVs in a responsible and safe manner. This is just one of many challenging questions that UAViators was created to address.


UAViators has already drafted a number of documents, including a Code of Conduct and an Operational Check-List for the use of UAVs in humanitarian settings. These documents will continue to be improved throughout the year, so don’t expect a final and perfect version tomorrow. This space is still too new to have all the answers in a first draft. So our community will aim to improve these documents over time. By the end of 2014, we hope to have a solid version of the Code of Conduct for organizations and companies to publicly endorse.


In the meantime, my three Research Assistants (RAs) and I are working on (the first ever?) comprehensive evaluation of 1) Small UAVs; 2) Cameras; 3) Payload Units; and 4) Imagery Software specifically for humanitarian field-workers. The purpose of this evaluation is to rate which technologies are best suited to the needs of humanitarians in the field. We will carry out this research through interviews with seasoned UAV experts coupled with secondary, online research. Our aim is to recommend 2-3 small UAVs, cameras, payload units and software solutions for imagery processing and analysis that make the most sense for humanitarians as end users. These suggestions will then be “peer reviewed” by members of the Humanitarian UAV Network.

Following this evaluation, my three RAs and I will create a detailed end-to-end operational model for the use of UAVs in humanitarian settings. The model will include pre-flight guidance on several key issues including legislation, insurance, safety and coordination. The pre-flight section will also include guidance on how to program the flight-path of the UAVs recommended in the evaluation. But the model does not end with the safe landing of a UAV. It will also include post-flight guidance on imagery processing and analysis for decision support as well as guidelines on information sharing with local communities. Once completed, this operational model will also be “peer reviewed” by members of UAViators.

Photo credit: Drone Adventures

Both deliverables—the evaluation and model—will be further reviewed by the Advisory Board of UAViators and by field-based humanitarians. We hope to have this review completed during the Humanitarian UAV Experts Meeting, which I am co-organizing with OCHA in New York this November. Feedback from this session will be integrated into both deliverables.

Our plan is to subsequently convert these documents into training materials for both online and onsite training. We have thus far identified two sites for this training, one in Southeast Asia and the other in southern Africa. We’re considering a potential third site in South America depending on the availability of funding. These trainings will enable us to further improve our materials and to provide minimum level certification to humanitarians participating in said trainings. To this end, our long-term strategy for the Humanitarian UAV Network is not only to facilitate the coordination of small UAVs in humanitarian settings but also to provide both training and certification in collaboration with multiple humanitarian organizations.

I recognize that the above is highly ambitious. But all the signals I’m getting from humanitarian organizations clearly demonstrate that it is needed. So if you have some expertise in this space and wish to join my Research Assistants and me in this applied and policy-focused research, then please do get in touch. In addition, if your organization or company is interested in funding any of the above, then do get in touch as well. We have the initial funding for the first phase of the 2014-2015 strategy and are in the process of applying for funding to complete the second phase.

One final but important point: while the use of many small and large UAVs in complex airspaces in which piloted (manned) aircraft are also flying poses a major challenge in terms of safety, collision avoidance and coordination, this obviously doesn’t mean that small UAVs should be grounded in humanitarian contexts with far simpler airspaces. Indeed, to argue that small UAVs cannot be responsibly and safely operated in simpler airspaces ignores the obvious fact that they already have—and continue to be responsibly & safely used. Moreover, I for one don’t see the point of flying small UAVs in areas already covered by larger UAVs and piloted aircraft. I’m far more interested in the rapid and local deployment of small UAVs to cover areas that are overlooked or have not yet been reached by mainstream response efforts. In sum, while it will take years to develop effective solutions for large UAV-use in dense and complex airspaces, small UAVs are already being used responsibly and safely by a number of humanitarian organizations and their partners.


See Also:

  • Welcome to the Humanitarian UAV Network [link]
  • How UAVs are Making a Difference in Disaster Response [link]
  • Humanitarians Using UAVs for Post Disaster Recovery [link]
  • Grassroots UAVs for Disaster Response [link]
  • Using UAVs for Search & Rescue [link]
  • Debrief: UAV/Drone Search & Rescue Challenge [link]
  • Crowdsourcing Analysis of UAV Imagery for Search/Rescue [link]
  • Check-List for Flying UAVs in Humanitarian Settings [link]

Could Lonely Planet Render World Bank Projects More Transparent?

That was the unexpected question that my World Bank colleague Johannes Kiess asked me the other day. I was immediately intrigued. So I did some preliminary research and offered to write up a blog post on the idea to solicit some early feedback. According to recent statistics, international tourist arrivals numbered over 1 billion in 2012 alone. Of this population, the demographic that Johannes is interested in comprises those intrepid and socially-conscious backpackers who travel beyond the capitals of developing countries. Perhaps the time is ripe for a new form of tourism: Tourism for Social Good.


There may be a real opportunity to engage a large crowd because travelers—and in particular the backpacker type—are smartphone-savvy, have time on their hands, want to do something meaningful, and are eager to get off the beaten track and explore new spaces where others do not typically trek. Johannes believes this approach could be used to map critical social infrastructure and/or to monitor development projects. Consider a simple smartphone app, perhaps integrated with existing travel guide apps or Tripadvisor. The app would ask travelers to record the quality of the roads they take (using the GPS of their smartphone) and provide feedback on the condition, e.g., bumpy, even, etc., every 50 miles or so.

They could be asked to find the nearest hospital and take a geotagged picture—a scavenger hunt for development (as Johannes calls it); Geocaching for Good? Note that governments often do not know exactly where schools, hospitals and roads are located. The app could automatically alert travelers of a nearby development project or road financed by the World Bank or other international donor. Travelers could be prompted to take (automatically geo-tagged) pictures that would then be forwarded to development organizations for subsequent visual analysis (which could easily be carried out using microtasking). Perhaps a very simple, 30-second, multiple-choice survey could even be presented to travelers who pass by certain donor-funded development projects. For quality control purposes, these pictures and surveys could easily be triangulated. Simple gamification features could also be added to the app; travelers could gain points for social good tourism—collect 100 points and get your next Lonely Planet guide for free? Perhaps if you’re the first person to record a road within the app, then it could be named after you (of course with a notation of the official name). Even Photosynth could be used to create panoramas of visual evidence.
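The every-50-miles prompt described above needs nothing more than two GPS fixes and a great-circle distance. A rough sketch of how such an app might decide when to survey the traveler; the function and parameter names are hypothetical, and I assume the app keeps the fix from the last survey point:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in miles (haversine formula)."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_prompt(last_survey_fix, current_fix, interval_miles=50):
    """Prompt the traveler once they have moved interval_miles from the last survey point."""
    return haversine_miles(*last_survey_fix, *current_fix) >= interval_miles
```

One degree of longitude at the equator is roughly 69 miles, so a traveler moving from (0, 0) to (0, 1) would trigger a 50-mile prompt.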

The obvious advantage of using travelers over the now en vogue stakeholder monitoring approach is that said backpackers are already traveling there anyway and have their phones on them to begin with. Plus, they’d be independent third parties and would not need to be trained. This obviously doesn’t mean that the stakeholder approach is not useful; the traveler strategy would simply be complementary. Furthermore, this tourism strategy comes with several key challenges, such as the safety of backpackers who choose to take on this task. But appropriate legal disclaimers could be put in place, so this challenge seems surmountable. In any event, Johannes, together with his colleagues at the World Bank (and I), hope to explore this idea of Tourism for Social Good further in the coming months.

In the meantime, we would be very grateful for feedback. What might we be overlooking? Would you use such an app if it were available? Where can we find reliable statistics on top backpacker destinations and flows?


See also: 

  • What United Airlines can Teach the World Bank about Mobile Accountability [Link]

Why Ushahidi Should Embrace Open Data

“This is the report that Ushahidi did not want you to see.” Or so the rumors in certain circles would have it. Some go as far as suggesting that Ushahidi tried to bury or delay the publication. On the other hand, some rumors claim that the report was a conspiracy to malign and discredit Ushahidi. Either way, what is clear is this: Ushahidi is an NGO that prides itself on promoting transparency & accountability; an organization prepared to take risks—and yes, fail—in the pursuit of this mission.

The report in question is CrowdGlobe: Mapping the Maps. A Meta-level Analysis of Ushahidi & Crowdmap. Astute observers will discover that I am indeed one of the co-authors. Published by Internews in collaboration with George Washington University, the report (PDF) reveals that 93% of 12,000+ Crowdmaps analyzed had fewer than 10 reports while a full 61% of Crowdmaps had no reports at all. The rest of the findings are depicted in the infographic below (click to enlarge) and eloquently summarized in the above 5-minute presentation delivered at the 2012 Crisis Mappers Conference (ICCM 2012).

CrowdGlobe infographic

Back in 2011, when my colleague Rob Baker (now with Ushahidi) generated the preliminary results of the quantitative analysis that underpins much of the report, we were thrilled to finally have a baseline against which to measure and guide the future progress of Ushahidi & Crowdmap. But when these findings were first publicly shared (August 2012), they were dismissed by critics who argued that the underlying data was obsolete. Indeed, much of the data we used in the analysis dates back to 2010 and 2011. Far from being obsolete, however, this data provides a baseline from which the use of the platform can be measured over time. We are now in 2013 and there are apparently 36,000+ Crowdmaps today rather than just 12,000+.
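Re-running the baseline on fresh data is a trivial computation once you have the per-map report counts. A sketch of the two headline figures; the function name and input format are my assumptions, not Crowdmap’s actual export:

```python
def crowdmap_baseline(report_counts):
    """Summarize deployment activity as in the CrowdGlobe baseline:
    share of maps with zero reports, and share with fewer than ten."""
    n = len(report_counts)
    return {
        "maps": n,
        "pct_empty": 100 * sum(c == 0 for c in report_counts) / n,
        "pct_under_10": 100 * sum(c < 10 for c in report_counts) / n,
    }

# Toy example: four deployments with 0, 0, 5 and 250 reports each.
print(crowdmap_baseline([0, 0, 5, 250]))
# {'maps': 4, 'pct_empty': 50.0, 'pct_under_10': 75.0}
```

Running the same few lines over the 36,000+ Crowdmaps of today would immediately show how the distribution has shifted since 2010-2011.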

To this end, and as a member of Ushahidi’s Advisory Board, I have recommended that my Ushahidi colleagues run the same analysis on the most recent Crowdmap data in order to demonstrate the progress made vis-a-vis the now-outdated public baseline. (This analysis takes no more than a few days to carry out.) I also strongly recommend that all this anonymized meta-data be made public on a live dashboard in the spirit of open data and transparency. Ushahidi, after all, is a public NGO funded by some of the biggest proponents of open data and transparency in the world.

Embracing open data is one of the best ways for Ushahidi to dispel the harmful rumors and conspiracy theories that continue to swirl as a result of the CrowdGlobe report. So I hope that my friends at Ushahidi will share their updated analysis and live dashboard in the coming weeks. If they do, then their bold support of this report and commitment to open data will serve as a model for other organizations to emulate. If they’ve just recently resolved to make this a priority, then even better.

In the meantime, I look forward to collaborating with the entire Ushahidi team on making the upcoming Kenyan elections the most transparent to date. As referenced in this blog post, the Standby Volunteer Task Force (SBTF) is partnering with the good people at PyBossa to customize an awesome micro-tasking platform that will significantly facilitate and accelerate the categorization and geo-location of reports submitted to the Ushahidi platform. So I’m working hard with both of these outstanding teams to make this the most successful, large-scale microtasking effort for election monitoring yet. Now let’s hope for everyone’s sake that the elections remain peaceful. Onwards!

Innovation and the State of the Humanitarian System

Published by ALNAP, the 2012 State of the Humanitarian System report is an important evaluation of the humanitarian community’s efforts over the past two years. “I commend this report to all those responsible for planning and delivering life saving aid around the world,” writes UN Under-Secretary General Valerie Amos in the Preface. “If we are going to improve international humanitarian response we all need to pay attention to the areas of action highlighted in the report.” Below are some of the highlighted areas from the 100+ page evaluation that are ripe for innovative interventions.

Accessing Those in Need

Operational access to populations in need has not improved. Access problems continue and are primarily political or security-related rather than logistical. Indeed, “UN security restrictions often place severe limits on the range of UN-led assessments,” which means that “coverage often can be compromised.” This means that “access constraints in some contexts continue to inhibit an accurate assessment of need. Up to 60% of South Sudan is inaccessible for parts of the year. As a result, critical data, including mortality and morbidity, remain unavailable. Data on nutrition, for example, exist in only 25 of 79 countries where humanitarian partners have conducted surveys.”

Could satellite and/or aerial imagery be used to measure indirect proxies? This would certainly be rather imperfect but perhaps better than nothing. Could crowdseeding be used?

Information and Communication Technologies

“The use of mobile devices and networks is becoming increasingly important, both to deliver cash and for communication with aid recipients.” Some humanitarian organizations are also “experimenting with different types of communication tools, for different uses and in different contexts. Examples include: offering emergency information, collecting information for needs assessments or for monitoring and evaluation, surveying individuals, or obtaining information on remote populations from an appointed individual at the community level.”

“Across a variety of interventions, mobile phone technology is seen as having great potential to increase efficiency. For example, […] the governments of Japan and Thailand used SMS and Twitter to spread messages about the disaster response.” Naturally, in some contexts, “traditional means like radios and call centers are most appropriate.”

In any case, “thanks to new technologies and initiatives to advance communications with affected populations, the voices of aid recipients began, in a small way, to be heard.” Obviously, heard and understood are not the same thing, let alone heard, understood and responded to. Moreover, as disaster-affected communities become increasingly “digital” thanks to the spread of mobile phones, the number of voices will increase significantly. The humanitarian system is largely (if not completely) unprepared to handle this increase in volume (Big Data).

Consulting Local Recipients

Humanitarian organizations have “failed to consult with recipients […] or to use their input in programming.” Indeed, disaster-affected communities are “rarely given opportunities to assess the impact of interventions and to comment on performance.” In fact, “they are rarely treated as end-users of the service.” Aid recipients also report that “the aid they received did not address their ‘most important needs at the time.’” While some field-level accountability mechanisms do exist, they were typically duplicative and very project-oriented. To this end, “it might be more efficient and effective to have more coordination between agencies regarding accountability approaches.”

While the ALNAP report suggests that these shortcomings could “be addressed in the near future by technical advances in methods of needs assessment,” the challenge here is not simply a technical one. Still, there are important efforts underway to address these issues.

Improving Needs Assessments

The Inter-Agency Standing Committee’s (IASC) Needs Assessment Task Force (NATF) and the International NGO-led Assessment Capacities Project (ACAPS) are two such examples of progress. OCHA serves as the secretariat for the NATF through its Assessment and Classification of Emergencies (ACE) Team. ACAPS, which is a consortium of three international NGOs (X, Y and Z) and a member of NATF, aims to “strengthen the capacity of the humanitarian sector in multi-sectoral needs assessment.” ACAPS is considered to have “brought sound technical processes and practical guidelines to common needs assessment.” Note that both ACAPS and ACE have recently reached out to the Digital Humanitarian Network (DHNetwork) to partner on needs-assessment projects in South Sudan and the DRC.

Another promising project is the Humanitarian Emergency Settings Perceived Needs Scale (HESPER). This joint initiative between WHO and King’s College London is designed to rapidly assess the “perceived needs of affected populations and allow their views to be taken into consideration. The project specifically aims to fill the gap between population-based ‘objective’ indicators […] and/or qualitative data based on convenience samples such as focus groups or key informant interviews.” On this note, some NGOs argue that “overall assessment methodologies should focus far more at the community (not individual) level, including an assessment of local capacities […],” since “far too often international aid actors assume there is no local capacity.”

Early Warning and Response

An evaluation of the response in the Horn of Africa found “significant disconnects between early warning systems and response, and between technical assessments and decision-makers.” According to ALNAP, “most commentators agree that the early warning worked, but there was a failure to act on it.” This disconnect is a concern I voiced back in 2009 when UN Global Pulse was first launched. To be sure, real-time information does not turn an organization into a real-time organization. Not surprisingly, most of the aid recipients surveyed for the ALNAP report felt that “the foremost way in which humanitarian organizations could improve would be to: ‘be faster to start delivering aid.'” Interestingly, “this stands in contrast to the survey responses of international aid practitioners who gave fairly high marks to themselves for timeliness […].”

Rapid and Skilled Humanitarians

While the humanitarian system’s surge capacity for the deployment of humanitarian personnel has improved, “findings also suggest that the adequate scale-up of appropriately skilled […] staff is still perceived as problematic for both operations and coordination.” Other evaluations “consistently show that staff in NGOs, UN agencies and clusters were perceived to be ill prepared in terms of basic language and context training in a significant number of contexts.” Failures in knowledge and understanding of humanitarian principles were also raised. Furthermore, evaluations of mega-disasters “predictably note influxes of relatively new staff with limited experience.” Several evaluations noted that the lack of “contextual knowledge caused a net decrease in impact.” This led one senior manager to note:

“If you don’t understand the political, ethnic, tribal contexts it is difficult to be effective… If I had my way I’d first recruit 20 anthropologists and political scientists to help us work out what’s going on in these settings.”

Monitoring and Evaluation

ALNAP found that monitoring and evaluation continues to be a significant shortcoming in the humanitarian system. “Evaluations have made mixed progress, but affected states are still notably absent from evaluating their own response or participating in joint evaluations with counterparts.” Moreover, while there have been important efforts by CDAC and others to “improve accountability to, and communication with, aid recipients,” there is “less evidence to suggest that this new resource of ground-level information is being used strategically to improve humanitarian interventions.” To this end, “relatively few evaluations focus on the views of aid recipients […].” In one case, “although a system was in place with results-based indicators, there was neither the time nor resources to analyze or use the data.”

The most common reasons cited for failing to meet community expectations include the “inability to meet the full spectrum of need, weak understanding of local context, inability to understand the changing nature of need, inadequate information-gathering techniques or an inflexible response approach.” In addition, preconceived notions of vulnerability have “led to inappropriate interventions.” A major study carried out by Tufts University and cited in the ALNAP report concludes that “humanitarian assistance remains driven by ‘anecdote rather than evidence’ […].” One important exception to this is the Danish Refugee Council’s work in Somalia.

Leadership, Risk and Principles

ALNAP identifies “alarming evidence of a growing tendency towards risk aversion” and a “stifling culture of compliance.” In addition, adherence to humanitarian principles was found to have weakened as “many humanitarian organizations have willingly compromised a principled approach in their own conduct through close alignment with political and military activities and actors.” Moreover, “responses in highly politicized contexts are viewed as particularly problematic for the retention of humanitarian principles.” Humanitarian professionals who were interviewed by ALNAP for this report “highlighted multiple occasions when agencies failed to maintain an impartial response when under pressure from strong states, such as Pakistan and Sri Lanka.”

Disaster Response, Self-Organization and Resilience: Shocking Insights from the Haiti Humanitarian Assistance Evaluation

Tulane University and the State University of Haiti just released a rather damning evaluation of the humanitarian response to the 2010 earthquake that struck Haiti on January 12th. The comprehensive assessment, which takes a participatory approach and applies a novel resilience framework, finds that despite several billion dollars in “aid,” humanitarian assistance did not make a detectable contribution to the resilience of the Haitian population and in some cases increased certain communities’ vulnerability and even caused harm. Welcome to supply-side humanitarian assistance directed by external actors.

“All we need is information. Why can’t we get information?” A quote taken from one of many focus groups conducted by the evaluators. “There was little to no information exchange between the international community tasked with humanitarian response and the Haitian NGOs, civil society or affected persons / communities themselves.” Information is critical for effective humanitarian assistance, which should include two objectives: “preventing excess mortality and human suffering in the immediate, and in the longer term, improving the community’s ability to respond to potential future shocks.” This longer term objective thus focuses on resilience, which the evaluation team defines as follows:

“Resilience is the capacity of the affected community to self-organize, learn from and vigorously recover from adverse situations stronger than it was before.”

This link between resilience and capacity for self-organization is truly profound and incredibly important. To be sure, the evaluation reveals that “the humanitarian response frequently undermined the capacity of Haitian individuals and organizations.” This completely violates the Hippocratic Oath of Do No Harm. The evaluators thus “promote the attainment of self-sufficiency, rather than the ongoing dependency on standard humanitarian assistance.” Indeed, “focus groups indicated that solutions to help people help themselves were desired.”

I find it particularly telling that many aid organizations interviewed for this assessment were reluctant to assist the evaluators in fully capturing and analyzing resource flows, which are critical for impact evaluation. “The lack of transparency in program dispersal of resources was a major constraint in our research of effective program evaluation.” To this end, the evaluation team argues that “by strengthening Haitian institutions’ ability to monitor and evaluate, Haitians will more easily be able to track and monitor international efforts.”

I completely disagree with this remedy. The institutions are part of the problem, and besides, institution-building takes years if not decades. To assume there is even political will and the resources for such efforts is at best misguided. If resilience is about strengthening the capacity of affected communities to self-organize, then I would focus on just that, applying existing technologies and processes that both catalyze and facilitate demand-side, people-centered self-organization. My previous blog post on “Technology and Building Resilient Societies to Mitigate the Impact of Disasters” elaborates on this point.

In sum, “resilience is the critical link between disaster and development; monitoring it will ensure that relief efforts are supporting, and not eroding, household and community capabilities.” This explains why crowdsourcing and data mining efforts like those of Ushahidi, HealthMap and UN Global Pulse are important for disaster response, self-organization and resilience.

On Technology and Learning, Or Why the Wright Brothers Did Not Create the 747

I am continuously amazed by critics who outlaw learning curves, especially when it comes to new and innovative projects. These critics expect instantaneous perfection from the outset even when new technologies are involved. So when a new project doesn’t meet their satisfaction (based on their own, often arbitrary measurement of success), they publicly castigate the groups behind the projects for their “failures”.

What would the world look like if these critics were in charge? There would be little to no innovation, progress or breakthroughs. Isaac Newton would never have spoken the words “If I have seen further it is by standing on the shoulders of giants.” There would be no giants, no perspective, and no accumulation of knowledge.

Take the Wright brothers, Orville and Wilbur, who invented and built the world’s first successful airplane by developing controls that made fixed-wing powered flight possible. They too had their critics. Some in “the European aviation community had converted the press to an anti-Wright brothers stance. European newspapers, especially in France, were openly derisive, calling them bluffers.” The Wright brothers certainly failed on numerous occasions. But they kept tinkering and experimenting. It was their failures that eventually made them successful. As the Chinese proverb goes, “Failure is not falling down but refusing to get up.”

The Wright brothers did not create the 747 because they had to start from scratch. They first had to figure out the basic principles of flight. Today’s critics of new technology-for-social impact projects are basically blaming NGOs in the developing world for not creating 747s.

Finally, it is problematic that many critics of technology-for-social impact projects have little to no scholarly or professional background in monitoring and evaluation (M&E). These critics think that because they are technology or development experts they know how to evaluate projects—one of the most common mistakes made by self-styled evaluators as noted at the start of The Fletcher School’s introductory course on M&E.

The field of M&E is a science, not an amateur sport. M&E is an independent, specialized field of expertise in its own right, one that requires months (if not several semesters) of dedicated study and training.

Patrick Philippe Meier

From Baselines to Basemaps: Crisis Mapping for Monitoring & Evaluation (M&E)

I was just in Berlin for meetings with Transparency International and the topic of mapping for Monitoring & Evaluation came up yet again. Earlier this year, Mercy Corps and UNDP Sudan both expressed an interest in exploring the application of crisis mapping platforms for M&E. Problem is, the field of M&E—particularly with regards to peacebuilding and post-conflict reconstruction—is devoid of any references to mapping.

As part of my consulting work with UNDP Sudan, I therefore produced a short concept paper on this topic back in June. Here’s a summary of what I wrote.


Peacebuilding and post-conflict reconstruction programs necessarily operate within a dynamic environment. This means that they must adapt to changing circumstances or else run the risk of misallocating resources or, worse, exacerbating tensions, not to mention creating new sources of violent conflict. Hence the need for conflict sensitive programming, which UNDP’s TRMA initiative is designed to support in the Sudan.

The threat and risk maps produced by UNDP provide spatial risk assessments that can inform programmatic response in Sudan’s post-conflict states. This need not be a one-off decision-support exercise, however. Indeed, the use of spatial risk assessments updated over time is an even more compelling use of crisis maps for decision-support.

A changing post-conflict environment means that projects designed half a year ago may no longer be having the intended impact they were funded to have. To this end, it is important that UN and local-government partners have regular updates on the changing context in order to adapt programming accordingly. Crisis mapping can play a pivotal role in this decision-support process.


I therefore propose a new approach to crisis mapping called “basemapping”. The purpose of basemapping is to combine M&E and crisis mapping to produce basemaps against which projects can be monitored and evaluated. The basemapping process constitutes three distinct mapping steps:

1. Ideal World Basemapping: mapping the ideal world that a given project seeks to achieve over a given period of time.

2. Real World Basemapping: mapping the current state of affairs in the specific world that the project seeks to change.

3. Changed World Basemapping: ongoing mapping to compare the change between the ideal world and real world basemaps.
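The three basemaps above can be thought of as geo-indexed indicator sets, with the “Changed World” map computed as the gap between the other two. Here is a minimal sketch in Python; the place names, indicators and values are purely hypothetical illustrations, not drawn from the concept paper:

```python
# Minimal sketch: each basemap is a mapping from location to indicator scores.
# All locations, indicators and values below are hypothetical.

ideal_world = {  # target indicator levels the project seeks to achieve
    "Juba": {"security_incidents": 0, "schools_open": 10},
    "Malakal": {"security_incidents": 0, "schools_open": 6},
}

real_world = {  # current (baseline) indicator levels
    "Juba": {"security_incidents": 12, "schools_open": 4},
    "Malakal": {"security_incidents": 7, "schools_open": 2},
}

def changed_world(ideal, real):
    """Per-location gap between the ideal and real basemaps."""
    gaps = {}
    for place, targets in ideal.items():
        gaps[place] = {
            indicator: target - real[place][indicator]
            for indicator, target in targets.items()
        }
    return gaps

print(changed_world(ideal_world, real_world))
```

Re-running the comparison whenever the real-world basemap is updated is what makes the third step an ongoing mapping exercise rather than a one-off snapshot.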

The following section provides a brief background to M&E and formulates a proposed methodology for basemapping. As basemapping is a new concept that has little to no precedent, the purpose of the proposed methodology is to catalyze discussion on the subject and to make the methodology more sophisticated over time.

Monitoring and Evaluation (M&E)

A strong M&E framework will include theories and types of change, an achievable goal with clear objectives, outputs and activities as well as reliable indicators and baselines. However, M&E frameworks should be case-specific and must be tailored to the purpose of individual projects. This deliverable assumes that UNDP is already well versed in M&E frameworks and methodologies. The analysis that follows thus focuses specifically on the contributing role of crisis mapping to the M&E process.

Baselines are the most often forgotten component within design, monitoring and evaluation, yet they are key to proving that change has truly taken place. They also provide the most intuitive link to crisis mapping. While baselines typically represent a snapshot in time and space against which deviations represent progress or failure, the concept of “basemaps” can play the same role albeit dynamically.

Perhaps the closest analogy to baselines in the field of conflict analysis is the conflict assessment. A conflict assessment is an exploration of the realities of the conflict and an analysis of its underlying causes. An assessment can be done at any time, independently of a program or as a part of an existing program. Assessments are often conducted to determine whether an intervention is needed and, if so, what type of intervention.

Assessments (or risk assessments) are also typically carried out as a first step in the development of a conflict early warning system. They serve to identify appropriate conflict early warning indicators. In a sense, an assessment is the basis from which the programming will be designed. Conversely, a baseline identifies the status of the targeted change before the project starts but after it has been designed.

M&E experts caution that assessments and baselines should not be blended together. Nor do they suggest using one as a substitute for the other since their raison d’être, focus, and implementation are very different. We beg to differ and would even go so far as proposing that dynamic crisis mapping platforms can bring both conflict assessments and M&E baselines together with considerable added value.

To explain the potential of basemaps in more detail, a sound understanding of baselines is important. A baseline provides a starting point or reference from which a comparison can be made.  Baselines are conducted prior to the beginning of a program intervention and are the point of comparison for monitoring and evaluation data. The bulk of baseline studies focus on the intended outcomes of a project. They can also take into account secondary outcomes and assumptions, though these are not the primary emphasis.

Baseline information can be used in a number of ways. Perhaps the most intuitive application is the comparison of baseline information with subsequent information to show the change (or lack thereof) that has taken place over time (again, change over space is all-too often ignored). Baseline information can also be used to refine programming decisions or set achievable and realistic targets. Finally, baseline information enables monitoring data to have greater utility earlier in the project cycle.

Baselines have three possible focus areas: change, secondary outcomes and assumptions. It is important that DAI and partners agree on the formulation of an M&E framework that clearly focuses on one of these areas. Note that the first area, change, is required of all baselines, while the other two are optional depending on the project.

Towards Dynamic Basemaps

Conflict, or threat and risk data, typically has a geographic dimension. Consequently, baseline data on conflict dynamics also have a geographic element. To this end, it makes far more sense to use basemaps than baselines since the former include spatially relevant information that changes over time, which is especially important for decision-making vis-à-vis programmatic response in post-conflict environments.

UNDP’s TRMA enjoys a distinctive advantage in this respect since the project already maps threat and risk data (via hard-copy print-outs and a dynamic mapping tool called the 4Ws). The main handicap at the moment is the lack of regularly updated threat and risk data with which to visualize change over time. That said, the TRMA team is planning to shift towards more regular data collection, which will make the use of the 4Ws even more compelling for M&E purposes.

In the meantime, drawing on the use of baselines for M&E can guide the development of a general methodology for basemaps. This deliverable assumes that a standard M&E framework has already been developed for a given project. The challenge is to now translate this framework into a basemap. Recall that a strong M&E framework will include theories and types of change, an achievable goal with clear objectives, outputs and activities as well as reliable indicators and baselines. The first step, then, is to consider the theory (or theories) of change formulated for a given project.

Theories of change help planners and evaluators stay aware of the assumptions behind their choices, verify that the activities and objectives are logically aligned, and identify opportunities for integrated programming to spark synergies and leverage greater results. Types of change refer to specific changes expressed in the actual program design and/or evaluation, either as goals, objectives, or indicators.  Common examples include changes in behavior, practice, process, status, etc.  Both the theory of change and the types of changes sought should be evident in a well-designed program.

Although theories and types of change may not have obvious or intuitive geographical dimensions, it is important for “basemapping” to start thinking in geographical terms from the first step in the M&E process. Take “the reduction of violence theory of change,” which suggests that peace will result as we reduce the levels of violence perpetrated by combatants or their representatives. Methods include cease-fires, introduction of peacekeeping forces and conflict sensitive programming, for example. Cease-fires, peacekeeping operations and conflict sensitive development all have a geographical component. So the point is simply to ask the question “Where?” and to ensure the answer is woven into the theory of change.

Next, the basemapping process should consider the program design, e.g., the design hierarchy:

  • Goal: broadest change in the conflict.
  • Objectives: types of changes that are prerequisites to achieve stated goal.
  • Outputs: deliverables or products, often tangible from the activities.
  • Activities: concrete events or services performed.

Each element in the design hierarchy has a spatial component. The goal is to be achieved in a specific location or locations. So are the objectives, outputs and activities. Again, this geographical dimension needs to be explicitly articulated in each step of the hierarchy. The first phase of the basemapping process is about fully geographically mapping a project’s design hierarchy (social network mapping is also possible but not included in this deliverable).
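A geographically tagged design hierarchy could be sketched as a simple nested data structure, with each element carrying the locations it targets. The class name, project names and places below are hypothetical illustrations under the assumptions stated in the text:

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    """One level of the design hierarchy (goal, objective, output or
    activity), tagged with the locations it explicitly targets."""
    name: str
    locations: list            # place names or (lat, lon) coordinates
    children: list = field(default_factory=list)

# Hypothetical hierarchy: goal -> objective -> output -> activity.
goal = Element("Reduced communal violence", ["Jonglei"], [
    Element("Functioning local peace committees", ["Bor", "Pibor"], [
        Element("12 committees trained", ["Bor"], [
            Element("Weekly training sessions", ["Bor"]),
        ]),
    ]),
])

def all_locations(element):
    """Collect every location referenced anywhere in the hierarchy,
    e.g. to check that each level has its spatial dimension filled in."""
    places = set(element.locations)
    for child in element.children:
        places |= all_locations(child)
    return places

print(sorted(all_locations(goal)))
```

Walking the hierarchy like this makes it easy to flag any goal, objective, output or activity whose geographic dimension has been left unarticulated.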

In essence, the first phase of basemapping is to map the “ideal world” that a given project is meant to achieve. The second phase of basemapping comprises the mapping of a standard albeit georeferenced baseline of indicators. This phase seeks to capture an accurate picture of the current state of affairs in the “real world” and is in effect a conflict or risk assessment. Many conflict/risk assessment frameworks have already been developed and applied so this will not be duplicated here.

The third phase of basemapping comes after “Ideal World” and “Real World” mapping. The purpose of “Changed World” basemapping is to compare ideal and real world basemaps in order to isolate any positive changes that can be attributed to the project being implemented. The purpose of basemaps is not to prove causation but rather to suggest correlation.

Perhaps the most critical component of “Changed World” basemaps is the selection of the “change indicators”. That is, identifying those indicators that can geographically denote whether program activities are in fact changing the real world as intended. Often, these indicators will already have been identified during standard baseline studies and simply need to be tied to specific geographic coordinates. In other situations, proxy indicators with a deliberate geographic dimension will need to be identified.

The temporal resolution of “change indicators” also needs to be a deliberate decision. For example, these geo-referenced indicators can be monitored on a daily, weekly or monthly basis. Mobile technology such as mobile phones and PDAs can be used to document change indicators and map them in quasi-real time. The Ushahidi mapping platform could be ideal for basemapping in this regard.

Basemapping and Decision-Support

Basemapping should not be considered distinct from decision-support processes and tools. This is particularly true of “Changed World Basemapping” which is meant to highlight in space and time the difference between the “Ideal World” and “Real World” basemaps. In other words, “Changed World Basemapping” should draw on geospatial analysis to create “heat maps” (and other relevant visualization techniques) to depict progress towards the ideal world in both time and space.

It is precisely because heat maps depict change that they should be considered as decision-support tools, particularly if these are viewed on a dynamic platform that permits the user to query and analyze the heat map. To this end, the purpose of basemapping is to combine M&E, conflict assessment and decision support by using a single integrated dynamic mapping tool.

Basemap Challenges

Like any new idea and methodology, there are important challenges that need to be addressed. For example, projects are likely to have impact at different spatial levels. How do we capture cross-scale effects? In addition, how do we introduce (spatial) control variables in order to isolate intervening (spatial) variables? Finally, can control groups be introduced in order to provide compelling evidence of impact or does this raise some important ethical issues as has happened in other fields?

Patrick Philippe Meier