
Humanitarianism in the Network Age: Groundbreaking Study

My colleagues at the United Nations Office for the Coordination of Humanitarian Affairs (OCHA) have just published a groundbreaking, must-read study on Humanitarianism in the Network Age, an important and forward-thinking policy document on humanitarian technology and innovation. The report “imagines how a world of increasingly informed, connected and self-reliant communities will affect the delivery of humanitarian aid. Its conclusions suggest a fundamental shift in power from capital and headquarters to the people [that] aid agencies aim to assist.” The latter is an unsettling prospect for many. To be sure, Humanitarianism in the Network Age calls for “more diverse and bottom-up forms of decision-making—something that most Governments and humanitarian organizations were not designed for. Systems constructed to move information up and down hierarchies are facing a new reality where information can be generated by anyone, shared with anyone and acted on by anyone.”


The purpose of this blog post (available as a PDF) is to summarize the 120-page OCHA study. In this summary, I specifically highlight the most important insights and profound implications. I also fill what I believe are some of the report’s most important gaps. I strongly recommend reading the OCHA publication in full, but if you don’t have time to leaf through the study, reading this summary will ensure that you don’t miss a beat. Unless otherwise stated, all quotes and figures below are taken directly from the OCHA report.

All in all, this is an outstanding, accurate, radical and impressively cross-disciplinary study. In fact, what strikes me most about this report is how far we’ve come since the devastating Haiti Earthquake of 2010. Just three short years ago, speaking the word “crowdsourcing” was blasphemous, like “Voldemort” (for all you Harry Potter fans). This explains why some humanitarians called me the CrowdSorcerer at the time (thinking it was a derogatory term). CrisisMappers was only launched three months before Haiti. The Standby Volunteer Task Force (SBTF) didn’t even exist at the time, and the Digital Humanitarian Network (DHN) would not be launched for another two years. And here we are, just three short years later, with this official, high-profile humanitarian policy document that promotes crowdsourcing, digital humanitarian response and next generation humanitarian technology. Exciting times. While great challenges remain, I dare say we’re trying our darned best to find some solutions, and this time through collaboration, CrowdSorcerers and all. The OCHA report is a testament to this collaboration.


Summary

The Rise of Big (Crisis) Data

Over 100 countries have more mobile phone subscriptions than they have people. One in four individuals in developing countries uses the Internet, a figure expected to double within 20 months. About 70% of Africa’s total population are mobile subscribers. In short, “The planet has gone online, producing and sharing vast quantities of information.” Meanwhile, hundreds of millions of people are affected by disasters every year—more than 250 million in 2010 alone. There have been over 1 billion new mobile phone subscriptions since 2010. In other words, disaster-affected communities are becoming increasingly “digital” as a result of the information revolution. These new digital technologies are evolving into a new nervous system for our planet, taking the pulse of our social, economic and political networks in real time.

“Filipinos sent an average of 2 billion SMS messages every day in early 2012,” for example. When disaster strikes, many of these messages are likely to relay crisis information. In Japan, over half-a-million new users joined Twitter the day after the 2011 Earthquake. More than 177 million tweets about the disaster were posted that same day—that is, 2,000 tweets per second on average. Welcome to “The Rise of Big (Crisis) Data.” Meanwhile, back in the US, 80% of the American public expects emergency responders to monitor social media, and almost as many expect them to respond within three hours of posting a request on social media (1). These expectations have been shown to increase year-on-year. “At the same time,” however, the OCHA report notes that “there are greater numbers of people […] who are willing and able to respond to needs.”

Communities First

A few brave humanitarian organizations are embracing these changes and new realities, “reorienting their approaches around the essential objectives of helping people to help themselves.” That said, “the frontline of humanitarian action has always consisted of communities helping themselves before outside aid arrives.” What is new, however, is “affected people using technology to communicate, interact with and mobilize their social networks quicker than ever before […].” To this end, “by rethinking how aid agencies work and communicate with people in crisis, there is a chance that many more lives can be saved.” In sum, “the increased reach of communications networks and the growing network of people willing and able to help, are defining a new age—a network age—for humanitarian assistance.”

This stands in stark contrast to traditional notions of humanitarian assistance, which refer to “a small group of established international organizations, often based in and funded by high-income countries, providing help to people in a major crisis. This view is now out of date.” As my colleague Tim McNamara noted on the CrisisMappers list-serve, (cited in the OCHA report), this is “…not simply a technological shift [but] also a process of rapid decentralization of power. With extremely low barriers to entry, many new entrants are appearing in the fields of emergency and disaster response. They are ignoring the traditional hierarchies, because the new entrants perceive that there is something they can do which benefits others.” In other words, the humanitarian “world order” is shifting towards a more multipolar system. And so, while Tim was “referring to the specific case of volunteer crisis mappers […], the point holds true across all types of humanitarian work.”

Take the case of Somalia Speaks, for example. A journalist recently asked me to list the projects I am most proud of in this field. Somalia Speaks ranks very high. I originally pitched the idea to my Al-Jazeera colleagues back in September 2011; the project was launched three months later. Together with my colleagues at Souktel, we texted 5,000 Somalis across the country to ask how they were personally affected by the crisis.


As the OCHA study notes, we received over 3,000 responses, which were translated into English and geotagged by the Diaspora and subsequently added to a crisis map hosted on the Al-Jazeera website. From the OCHA report: “effective communication can also be seen as an end itself in promoting human dignity. More than 3,000 Somalis responded to the Somalia Speaks project, and they seemed to feel that speaking out was a worthwhile activity.” In sum, “The Somalia Speaks project enabled the voices of people from one of the world’s most inaccessible, conflict-ridden areas, in a language known to few outside their community, to be heard by decision makers from across the planet.” The project has since been replicated several times; see Uganda Speaks for example. The OCHA study refers to Somalia Speaks at least four times, highlighting the project as an example of networked humanitarianism.

Privacy, Security & Protection

The report also emphasizes the critical importance of data security, privacy and protection in the network age. OCHA’s honest and balanced approach to the topic is another reason why this report is so radical and forward thinking. “Concern over the protection of information and data is not a sufficient reason to avoid using new communications technologies in emergencies, but it must be taken into account. To adapt to increased ethical risks, humanitarian responders and partners need explicit guidelines and codes of conduct for managing new data sources.” This is precisely why I worked with GSMA’s Disaster Response Program to draft and publish the first ever Code of Conduct for the Use of SMS in Disaster Response. I have also provided extensive feedback to the International Committee of the Red Cross’s (ICRC) latest edition of the “Professional Standards for Protection Work,” which was just launched in Geneva this month. My colleagues Emmanuel Letouzé and Patrick Vinck also included a section on data security and ethics in our recent publication on the use of Big Data for Conflict Prevention. In addition, I have blogged about this topic quite a bit: here, here and here, for example.

Crisis in Decision Making

“As the 2010 Haiti crisis revealed, the usefulness of new forms of information gathering is limited by the awareness of responders that new data sources exist, and their applicability to existing systems of humanitarian decision-making.” The fact of the matter is that humanitarian decision-making structures are simply not geared towards using Big Crisis Data, let alone new data sources. More pointedly, however, humanitarian decision-making processes are often not based on empirical data in the first place, even when the data originate from traditional sources. As DFID notes in this 2012 strategy document, “Even when good data is available, it is not always used to inform decisions. There are a number of reasons for this, including data not being available in the right format, not widely dispersed, not easily accessible by users, not being transmitted through training and poor information management. Also, data may arrive too late to be able to influence decision-making in real time operations or may not be valued by actors who are more focused on immediate action.”

This is the classic warning-response gap, which has been discussed ad nauseam for decades in the fields of famine early warning and conflict early warning. More data in no way implies action. Take the 2011 Somalia Famine, one of the best documented crises yet. The famine did not occur because data was lacking. “Would more data have driven a better decision making process that could have averted disaster? Unfortunately, this does not appear to be the case. There had, in fact, been eleven months of escalating warnings emanating from the famine early warning systems that monitor Somalia. Somalia was, at the time, one of the most frequently surveyed countries in the world, with detailed data available on malnutrition prevalence, mortality rates, and many other indicators. The evolution of the famine was reported in almost real time, yet there was no adequate scaling up of humanitarian intervention until too late” (2).

At other times, “Information is sporadic,” which is why OCHA notes that “decisions can be made on the basis of anecdote rather than fact.” Indeed, “Media reports can significantly influence allocations, often more than directly transmitted community statements of need, because they are more widely read or better trusted.” (It is worth keeping in mind that the media makes mistakes; the New York Times alone makes over 7,000 errors every year.) Furthermore, as acknowledged by OCHA, “The evidence suggests that new information sources are no less representative or reliable than more traditional sources, which are also imperfect in crisis settings.” This is one of the most radical statements in the entire report. OCHA should be applauded for their remarkable fortitude in plunging into this rapidly shifting information landscape. Indeed, they go on to state that, “Crowdsourcing has been used to validate information, map events, translate text and integrate data useful to humanitarian decision makers.”

Screen Shot 2013-04-04 at 10.40.50 AM

The vast majority of disaster datasets are not perfect, regardless of whether they are drawn from traditional or non-traditional sources. “So instead of criticizing the lack of 100% data accuracy, we need to use it as a base and ensure our Monitoring and Evaluation (M&E) and community engagement pieces are strong enough to keep our programming relevant” (Bartosiak 2013). And so, perhaps the biggest impact of new technologies and recent disasters on the humanitarian sector is the self-disrobing of the Emperor’s Clothes (or Data). “Analyses of emergency response during the past five years reveal that poor information management has severely hampered effective action, costing many lives.” Disasters increasingly serve as brutal audits of traditional humanitarian organizations, and the cracks are increasingly difficult to hide in an always-on social media world. The OCHA study makes clear that decision-makers need to figure out “how to incorporate these sources into decisions.”

Fact is, “To exploit the opportunity of the network age, humanitarians must understand how to use the new range of available data sources and have the capacity to transform this data into useful information.” Furthermore, it is imperative “to ensure new partners have a better understanding of how [these] decisions are made and what information is useful to improve humanitarian action.” These new partners include the members of the Digital Humanitarian Network (DHN), for example. Finally, decision-makers also need to “invest in building analytic capacity across the entire humanitarian network.” This analytic capacity can no longer rest on manual solutions alone. The private sector already makes use of advanced computing platforms for decision-making purposes. The humanitarian industry would be well served to recognize that their problems are hardly unique. Of course, investing in greater analytic capacity is an obvious solution but many organizations are already dealing with limited budgets and facing serious capacity constraints. I provide some creative solutions to this challenge below, which I refer to as “Data Science Philanthropy”.

Commentary

Near Perfection

OCHA’s report is brilliant, honest and forward thinking. This is by far the most important official policy document yet on humanitarian technology and digital humanitarian response—and thus on the very future of humanitarian action. The study should be required reading for everyone in the humanitarian and technology communities, which is why I plan to organize a panel on the report at CrisisMappers 2013 and will refer to the strategy document in all of my forthcoming talks and many a future blog post. In the meantime, I would like to highlight and address some of the issues that I feel need to be discussed to take this conversation further.

Ironically, some of these gaps appear to reflect a rather limited understanding of advanced computing & next generation humanitarian technology. The following topics, for example, are missing from the OCHA report: Microtasking, Sentiment Analysis and Information Forensics. In addition, the report does not relate OCHA’s important work to disaster resilience and people-centered early warning. So I’m planning to expand on the OCHA report in the technology chapter for this year’s World Disaster Report (WDR 2013). This high-profile policy document is an ideal opportunity to amplify OCHA’s radical insights and to take these to their natural and logical conclusions vis-à-vis Big (Crisis) Data. To be clear, and I must repeat this, the OCHA report is the most important forward thinking policy document yet on the future of humanitarian response. The gaps I seek to fill in no way make the previous statement any less valid. The team at OCHA should be applauded, recognized and thanked for their tremendous work on this report. So despite some of the key shortcomings described below, this policy document is by far the most honest, enlightened and refreshing look at the state of the humanitarian response today; a grounded and well-researched study that provides hope, leadership and a clear vision for the future of humanitarianism in the network age.

Big Data How

OCHA recognizes that “there is a significant opportunity to use big data to save lives,” and they also get that “finding ways to make big data useful to humanitarian decision makers is one of the great challenges, and opportunities, of the network age.” Moreover, they realize that “While valuable information can be generated anywhere, detecting the value of a given piece of data requires analysis and understanding.” So they warn, quite rightly, that “the search for more data can obscure the need for more analysis.” To this end, they correctly conclude that “identifying the best uses of crowdsourcing and how to blend automated and crowdsourced approaches is a critical area for study.” But the report does not take these insights to their natural and logical conclusions. Nor does it explore how to tap these new data sources, let alone analyze them in real time.

Yet these Big Data challenges are hardly unique. Our problems in the humanitarian space are not that “special” or different. OCHA rightly notes that “Understanding which bits of information are valuable to saving lives is a challenge when faced with this ocean of data.” Yes. But such challenges have been around for over a decade in other disciplines. The field of digital disease detection, for example, is years ahead when it comes to real-time analysis of crowdsourced big data, not to mention private sector companies, research institutes and even new startups whose expertise is Big Data Analytics. I can also speak to this from my own professional experience. About a decade ago, I worked with a company specializing in conflict forecasting and early warning using Reuters news data (Big Data).

In sum, the OCHA report should have highlighted the fact that solutions to many of these Big Data challenges already exist, which is precisely why I joined the Qatar Computing Research Institute (QCRI). What’s more, a number of humanitarian technology projects at QCRI are already developing prototypes based on these solutions; and OCHA is actually the main partner in one such project, so it is a shame they did not get credit for this in their own report.

Sentiment Analysis

While I introduced the use of sentiment analysis during the Haiti Earthquake, this has yet to be replicated in other humanitarian settings. Why is sentiment analysis key to humanitarianism in the network age? The answer is simple: “Communities know best what works for them; external actors need to listen and model their response accordingly.” Indeed, “Affected people’s needs must be the starting point.” Actively listening to millions of voices is a Big Data challenge that has already been solved by the private sector. One such solution is real-time sentiment analysis to capture brand perception. This is a rapidly growing multimillion-dollar market, which is why companies like Crimson Hexagon exist. Numerous Fortune 500 companies have been actively using automated sentiment analysis for years now. Why? Because these advanced listening solutions enable them to better understand customer perceptions.

Screen Shot 2013-04-08 at 5.49.56 AM

In Haiti, I applied this approach to tens of thousands of text messages sent by the disaster-affected population. It allowed us to track the general mood of this population on a daily basis. This is important because sentiment analysis as a feedback loop works particularly well with Big Data, which explains why the private sector is all over it. If just one or two individuals in a community are displeased with service delivery during a disaster, they may simply be “outliers” or perhaps exaggerating. But if the sentiment analysis at the community level suddenly starts to dip, then this means hundreds, perhaps thousands of affected individuals are now all feeling the same way about a situation. In other words, sentiment analysis serves as a triangulating mechanism. The fact that the OCHA report makes no mention of this existing solution is unfortunate, since sentiment feedback loops enable organizations to assess the impact of their interventions by capturing their clients’ perceptions.
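To make this feedback loop concrete, here is a minimal sketch of community-level mood tracking in Python. The toy lexicon, scoring rule and sample messages are my own illustrative assumptions, not the actual system used in Haiti:

```python
# A minimal sketch of sentiment analysis as a community feedback loop:
# score each message against a toy lexicon, then average the scores by day.
# The word lists and messages below are illustrative assumptions.
from collections import defaultdict
from statistics import mean

POSITIVE = {"safe", "thanks", "received", "helped"}
NEGATIVE = {"angry", "nothing", "waiting", "hungry", "sick"}

def score(text):
    """Crude lexicon score: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def daily_mood(messages):
    """messages: list of (date, text) pairs -> mean sentiment per date."""
    by_day = defaultdict(list)
    for date, text in messages:
        by_day[date].append(score(text))
    return {date: mean(scores) for date, scores in sorted(by_day.items())}

msgs = [("2010-01-20", "thanks water received"),
        ("2010-01-20", "still waiting for food"),
        ("2010-01-21", "hungry and waiting nothing received"),
        ("2010-01-21", "angry nothing is working")]
print(daily_mood(msgs))  # a day-on-day dip signals a community-wide problem
```

A real deployment would swap the toy lexicon for a trained model, but the aggregation logic is the point: individual scores are noisy, while the daily community average is what triangulates.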

Information Forensics

“When dealing with the vast volume and complexity of information available in the network age, understanding how to assess the accuracy and utility of any data source becomes critical.” Indeed, and the BBC’s User-Generated Content (UGC) Hub has been doing just this since 2005—when Twitter didn’t even exist. The field of digital information forensics may be new to the humanitarian sector, but that doesn’t mean it is new to every other sector on the planet. Furthermore, recent research on crisis computing has revealed that the credibility of social media reporting can be modeled and even predicted. Twitter has even been called a “Truth Machine” because of the self-correcting dynamic that has been empirically observed. Finally, one of QCRI’s humanitarian technology projects, Verily, focuses precisely on the issue of verifying crowdsourced information from social media. And the first organization I reached out to for feedback on this project was OCHA.
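As a hedged illustration of what “modeling credibility” means in practice, the sketch below trains a simple supervised classifier on a handful of hand-labeled tweets. The features, field names and examples are hypothetical stand-ins in the spirit of the crisis computing literature, not a reproduction of any published model:

```python
# A minimal sketch of supervised credibility scoring for social media reports.
# Feature set and training data are illustrative assumptions.
from sklearn.linear_model import LogisticRegression

def features(tweet):
    """Map a tweet (dict) to simple credibility signals."""
    return [
        len(tweet["text"]),            # longer posts tend to carry more detail
        int("http" in tweet["text"]),  # links to external evidence
        tweet["retweets"],             # propagation by other users
        tweet["account_age_days"],     # established vs. throwaway accounts
    ]

# Hypothetical labeled examples: 1 = judged credible, 0 = not.
train = [
    ({"text": "Bridge on Route 2 collapsed, photo: http://t.co/x",
      "retweets": 120, "account_age_days": 900}, 1),
    ({"text": "everything is gone!!!", "retweets": 2, "account_age_days": 3}, 0),
    ({"text": "Clinic flooded, volunteers on site http://t.co/y",
      "retweets": 60, "account_age_days": 400}, 1),
    ({"text": "heard the whole city burned down", "retweets": 1,
      "account_age_days": 10}, 0),
]
model = LogisticRegression().fit([features(t) for t, _ in train],
                                 [label for _, label in train])

new = {"text": "Shelter at the school is full http://t.co/z",
       "retweets": 45, "account_age_days": 600}
print(model.predict_proba([features(new)])[0][1])  # estimated credibility
```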

Microtasking

The OCHA report overlooks microtasking as well. Yes, the study does address and promote the use of crowdsourcing repeatedly, but again, this tends to focus on the collection of information rather than the processing of said information. Microtasking applications in the humanitarian space are not totally unheard of, however. Microtasking was used to translate and geolocate tens of thousands of text messages following the Haiti Earthquake. (As the OCHA study notes, “some experts estimated that 90 per cent [of the SMS’s] were ‘repetition’, or ‘white noise’, meaning useless chatter”). There have been several other high-profile uses of microtasking for humanitarian operations, such as this one thanks to OCHA’s leadership in response to Typhoon Pablo. In sum, microtasking has been used extensively in other sectors for many years now to manage the Big Data and quality-control challenges. So this important human computing solution really ought to have appeared in the OCHA report, along with the immense potential of microtasking humanitarian information using massive online multiplayer games (more here).
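For readers unfamiliar with the mechanics, here is a minimal sketch of the two halves of a microtasking pipeline: splitting a large dataset into small redundant assignments, then aggregating worker answers by majority vote. The function names and toy data are illustrative, not any particular platform’s API:

```python
# A minimal sketch of microtasking: redundant assignment plus majority voting.
from collections import Counter

def make_assignments(items, workers, redundancy=3):
    """Assign each item to `redundancy` different workers, round-robin."""
    return [(workers[(i + r) % len(workers)], item)
            for i, item in enumerate(items)
            for r in range(redundancy)]

def aggregate(answers):
    """answers: (item_id, label) pairs from workers -> majority label per item."""
    votes = {}
    for item_id, label in answers:
        votes.setdefault(item_id, []).append(label)
    return {item_id: Counter(labels).most_common(1)[0][0]
            for item_id, labels in votes.items()}

items = ["tweet-1", "tweet-2"]
workers = ["w1", "w2", "w3", "w4"]
print(make_assignments(items, workers))
print(aggregate([("tweet-1", "damage"), ("tweet-1", "damage"),
                 ("tweet-1", "no-damage"), ("tweet-2", "no-damage")]))
# Redundancy plus voting is the basic quality-control mechanism that makes
# crowd labels usable despite individual worker error.
```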

Open Data is Open Power

OCHA argues that “while information can be used by anyone, power remains concentrated in the hands of a limited number of decision makers.” So if the latter “do not use this information to make decisions in the interests of the people they serve, its value is lost.” I don’t agree that the value is lost. One of the report’s main themes is the high-impact agency and ingenuity of disaster-affected communities. As OCHA rightly points out, “The terrain is continually shifting, and people are finding new and brilliant ways to cope with crises every day.” Openly accessible crisis information posted on social media has already been used by affected populations for almost a decade now. In other words, communities affected by crises are (quite rightly) taking matters into their own hands in today’s networked world—just like they did in the analog era of yesteryear. As noted earlier, “affected people [are] using technology to communicate, interact with and mobilize their social networks quicker than ever before […].” This explains why “the failure to share [information] is no longer a matter of institutional recalcitrance: it can cost lives.”

Creative Partnerships

The OCHA study emphasizes that “Humanitarian agencies can learn from other agencies, such as fire departments or militaries, on how to effectively respond to large amounts of often confusing information during a fast-moving crisis.” This is spot on. Situational awareness is first and foremost a military term. The latest Revolution in Military Affairs (RMA) provides important insights into the future of humanitarian technology—see these recent developments, for example. Meanwhile, the London Fire Brigade has announced plans to add Twitter as a communication channel, which means city residents will have the option of reporting a fire alert via Twitter. Moreover, the 911 service in the US (999 in the UK) is quite possibly the oldest and longest running crowdsourced emergency service in the world. So there is much that humanitarians can learn from 911. But the fact of the matter is that most domestic emergency response agencies are completely unprepared to deal with the tidal wave of Big (Crisis) Data, which is precisely why the Fire Department of New York City (FDNY) and San Francisco City’s Emergency Response Team have recently reached out to me.


But some fields are way ahead of the curve. The OCHA report should thus have pointed to crime mapping and digital disease detection since these fields have more effectively navigated the big data challenge. As for the American Red Cross’s Digital Operations Center, the main technology they are using, Radian6, has been used by private sector clients for years now. And while the latter can afford the very expensive licensing fees, it is unlikely that cash-strapped domestic emergency response officers and international humanitarian organizations will ever be able to afford these advanced solutions. This is why we need more than just “Data Philanthropy”.

We also need “Data Science Philanthropy”. As the OCHA report states, decision-makers need to “invest in building analytic capacity across the entire humanitarian network.” This is an obvious recommendation, but perhaps not particularly realistic given the limited budgets and capacity constraints in the humanitarian space. This means we need to create more partnerships with Data Science groups like DataKind, Kaggle and the University of Chicago’s Data Science for Social Good program. I’m in touch with these groups and others for this reason. I’ve also been (quietly) building a global academic network called “Data Science for Humanitarian Action,” which will launch very soon. Open Source solutions are also imperative for building analytic capacity, which is why the humanitarian technology platforms being developed by QCRI will all be Open Source and freely available.

Disaster Resilience

This points to the following gap in the OCHA report: there is no reference whatsoever to resilience. While the study does recognize that collective self-help behavior is typical in disaster response and should be amplified, the report does not make the connection that this age-old mutual-aid dynamic is the humanitarian sector’s own lifeline during a major disaster. Resilience has to do with a community’s capacity for self-organization, and communication technologies increasingly play a pivotal role in self-organization. This explains why disaster preparedness and disaster risk reduction programs ought to place greater emphasis on building the capacity of at-risk communities to self-organize and mitigate the impact of disasters on their livelihoods. More about this here. Creating resilience through big data is also more than academic curiosity, as explained here.

Decentralizing Response

As more and more disaster-affected communities turn to social media in time of need, “Governments and responders will soon need answers to the questions: ‘Where were you? We Facebooked/tweeted/texted for help, why didn’t someone come?'” Again, customer support challenges are hardly unique to the humanitarian sector. Private sector companies have had to manage parallel problems by developing more advanced customer service platforms. Some have even turned to crowdsourcing to manage customer support. I blogged about this here to drive the point home that solutions to these humanitarian challenges already exist in other sectors.

Yes, that’s right, I am promoting the idea of crowdsourcing crisis response. Fact is, disaster response has always been crowdsourced. The real first responders are the disaster-affected communities themselves. Thanks to new technologies, this crowdsourced response can be accelerated and made more efficient. And yes, there’s an app (in the making) for that: MatchApp. This too is a QCRI humanitarian technology project (in partnership with MIT’s Computer Science and Artificial Intelligence Lab). The purpose of MatchApp is to decentralize disaster response. After all, the many small needs that arise following a disaster rarely require the attention of paid and experienced emergency responders. Furthermore, as a colleague of mine at NYU shared based on her disaster efforts following Hurricane Sandy, “Solving little challenges can make the biggest differences” for disaster-affected communities.
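To make the decentralization idea concrete, here is a minimal sketch of the kind of matching logic an app like MatchApp might use: pair each small need with the nearest unclaimed offer of help in the same category. The data structures, greedy strategy and 10 km cutoff are illustrative assumptions on my part, not the actual MatchApp design:

```python
# A minimal sketch of need-to-offer matching for decentralized response.
import math

def distance_km(a, b):
    """Equirectangular approximation; adequate at city scale."""
    dlat = math.radians(b[0] - a[0])
    dlon = math.radians(b[1] - a[1]) * math.cos(math.radians((a[0] + b[0]) / 2))
    return 6371 * math.hypot(dlat, dlon)

def match(needs, offers, max_km=10):
    """Greedily pair each need with the nearest unclaimed same-category offer."""
    matches, taken = [], set()
    for need in needs:
        candidates = [o for o in offers
                      if o["category"] == need["category"] and o["id"] not in taken]
        candidates.sort(key=lambda o: distance_km(need["loc"], o["loc"]))
        if candidates and distance_km(need["loc"], candidates[0]["loc"]) <= max_km:
            taken.add(candidates[0]["id"])
            matches.append((need["id"], candidates[0]["id"]))
    return matches

needs = [{"id": "n1", "category": "debris", "loc": (40.67, -73.94)}]
offers = [{"id": "o1", "category": "debris", "loc": (40.70, -73.95)},
          {"id": "o2", "category": "food", "loc": (40.68, -73.93)}]
print(match(needs, offers))  # [('n1', 'o1')]
```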

As noted above, more and more individuals believe that emergency responders should monitor social media during disasters and respond accordingly. This is “likely to increase the pressure on humanitarian responders to define what they can and cannot provide. The extent of communities’ desires may exceed their immediate life-saving needs, raising expectations beyond those that humanitarian responders can meet. This can have dangerous consequences. Expectation management has always been important; it will become more so in the network age.”


People-Centered

“Community early warning systems (CEWS) can buy time for people to implement plans and reach safety during a crisis. The best CEWS link to external sources of assistance and include the pre-positioning of essential supplies.” At the same time, “communities do not need to wait for information to come from outside sources, […] they can monitor local hazards and vulnerabilities themselves and then shape the response.” This sensing and shaping capacity builds resilience, which explains why “international humanitarian organizations must embrace the shift of warning systems to the community level, and help Governments and communities to prepare for, react and respond to emergencies using their own resources and networks.”

This is absolutely spot on and at least seven years old as far as UN policy goes. In 2006, the UN’s International Strategy for Disaster Risk Reduction (UNISDR) published this policy document advocating for a people-centered approach to early warning and response systems. They defined the purpose of such systems as follows:

“… to empower individuals and communities threatened by hazards to act in sufficient time and in an appropriate manner so as to reduce the possibility of personal injury, loss of life, damage to property and the environment, and loss of livelihoods.”

Unfortunately, the OCHA report does not drive these insights to their logical conclusion. Disaster-affected communities are even more ill-equipped to manage the rise of Big (Crisis) Data. Storing, let alone analyzing, Big Crisis Data in real time is a major technical challenge. As noted here vis-à-vis Big Data Analytics on Twitter, “only corporate actors and regulators—who possess both the intellectual and financial resources to succeed in this race—can afford to participate […].” Indeed, only a handful of research institutes have the technical ability and large funding base to carry out the real-time analysis of Big (Crisis) Data. My team and I at QCRI, along with colleagues at UN Global Pulse and GSMA, are trying to change this. In the meantime, however, the “Big Data Divide” is already here and very real.

Information > Food

“Information is not water, food or shelter; on its own, it will not save lives. But in the list of priorities, it must come shortly after these.” While I understand the logic behind this assertion, I consider it a step back, not forward, from the 2005 World Disaster Report (WDR 2005), which states that “People need information as much as water, food, medicine or shelter. Information can save lives, livelihoods and resources.” In fact, OCHA’s assertion contradicts an earlier statement in the report; namely, that “information in itself is a life-saving need for people in crisis. It is as important as water, food and shelter.” Fact is: without information, how does one know where, when and from whom clean water and food might be available? How does one know which shelters are open, whether they can accommodate one’s family and whether the road to the shelter is safe to drive on?


OCHA writes that, “Easy access to data and analysis, through technology, can help people make better life-saving decisions for themselves and mobilize the right types of external support. This can be as simple as ensuring that people know where to go and how to get help. But to do so effectively requires a clear understanding of how information flows locally and how people make decisions.” In sum, access to information is paramount, which means that local communities should have easy access to next generation humanitarian technologies that can manage and analyze Big Crisis Data. As a seasoned humanitarian colleague recently told me, “humanitarians sometimes have a misconception that all aid and relief comes through agencies. In fact (especially with things such as shelter), people start to recover on their own or within their communities. Thus, information is vital in assuring that they do this safely and properly. Think of Haiti’s build-back-better campaign and the issues with cholera outbreaks.”

Them Not Us

The technologies of the network age should not be restricted to empowering second- and third-level responders. Unfortunately, as OCHA rightly observes, “there is still a tendency for people removed from a crisis to decide what is best for the people living through that crisis.” Moreover, these paid responders cannot be everywhere at the same time. But the crowd is always there. And as OCHA points out, there are “growing groups of people willing and able to help those in need;” groups that, unlike their analog counterparts of yesteryear, now operate in the “network age with its increased reach of communications networks.” So information is not “primarily a tool for agencies to decide how to help people, it must be understood as a product, or service, to help affected communities determine their own priorities.” Recall the above definition of people-centered early warning. This definition does not all of a sudden become obsolete in the network age. The purpose of next generation technologies is to “empower individuals and communities threatened by hazards to act in sufficient time and in an appropriate manner so as to reduce the possibility of personal injury, loss of life, damage to property and the environment, and loss of livelihoods.”

Screen Shot 2013-04-08 at 5.36.05 AM

Digital humanitarian volunteers are also highly unprepared to deal with the rise of Big Crisis Data, even though they are at the frontlines and indeed the pioneers of digital response. This explains why the Standby Volunteer Task Force (SBTF), a network of digital volunteers that OCHA refers to half-a-dozen times throughout the report, is actively looking to become an early adopter of next generation humanitarian technologies. Burnout is a serious issue with digital volunteers. They too require access to these next generation technologies, which is precisely why the American Red Cross equips their digital volunteers with advanced computing platforms as part of their Digital Operations Center. Unfortunately, some humanitarians still think that they can just as easily throw more (virtual) volunteers at the Big Crisis Data challenge. They are not only terribly misguided but also insensitive, which is why, as OCHA notes, “Using new forms of data may also require empowering technical experts to overrule the decisions of their less informed superiors.” As the OCHA study concludes, “Crowdsourcing is a powerful tool, but ensuring that scarce volunteer and technical resources are properly deployed will take further research and the expansion of collaborative models, such as SBTF.”

Conclusion

So will next generation humanitarian technology solve everything? Of course not, I don’t know anyone naïve enough to make this kind of claim. (But it is a common tactic used by the ignorant to attack humanitarian innovation). I have already warned about techno-centric tendencies in the past, such as here and here (see epilogue). Furthermore, one of the principal findings from this OECD report published in 2008 is that “An external, interventionist, and state-centric approach in early warning fuels disjointed and top down responses in situations that require integrated and multilevel action.” You can throw all the advanced computing technology you want at this dysfunctional structural problem but it won’t solve a thing. The OECD thus advocates for “micro-level” responses to crises because “these kinds of responses save lives.” Preparedness is obviously central to these micro-level responses and self-organization strategies. Shockingly, however, the OCHA study reveals that, “only 3% of humanitarian aid goes to disaster prevention and preparedness,” while barely “1% of all other development assistance goes towards disaster risk reduction.” This is no way to build disaster resilience. I doubt these figures will increase substantially in the near future.

This reality makes it even more pressing that “responders listen to affected people and find ways to respond to their priorities,” which “will require a mindset change.” To be sure, “If aid organizations are willing to listen, learn and encourage innovation on the front lines, they can play a critical role in building a more inclusive and more effective humanitarian system.” This need to listen and learn is why next generation humanitarian technologies are not optional. Ensuring that first, second and third-level responders have access to next generation humanitarian technologies is critical for the purposes of self-help, mutual aid and external response.


Using CrowdFlower to Microtask Disaster Response

Cross-posted from CrowdFlower blog

A devastating earthquake struck Port-au-Prince on January 12, 2010. Two weeks later, on January 27th, CrowdFlower was used to translate text messages from Haitian Creole to English. Tens of thousands of messages were sent by affected Haitians over the course of several months. All of these were heroically translated by hundreds of dedicated Creole-speaking volunteers based in dozens of countries across the globe. While Ushahidi took the lead by developing the initial translation platform used just days after the earthquake, the translation efforts were eventually rerouted to CrowdFlower. Why? Three simple reasons:

  1. CrowdFlower is one of the leading and most robust micro-tasking platforms available;
  2. CrowdFlower’s leadership is highly committed to supporting digital humanitarian response efforts;
  3. Haitians in Haiti could now be paid for their translation work.

The CrowdFlower project was launched 15 days after the earthquake, i.e., following the completion of search and rescue operations; indeed, every single digital humanitarian effort in Haiti was reactive. The key takeaway here was the proof of concept–namely that large-scale micro-tasking could play an important role in humanitarian information management. This was confirmed months later when devastating floods inundated much of Pakistan. CrowdFlower was once again used to translate incoming messages from the disaster-affected population. While still reactive, this second use of CrowdFlower demonstrated replicability.

The most recent and perhaps most powerful use of CrowdFlower for disaster response occurred right after Typhoon Pablo devastated the Philippines in early December 2012. The UN Office for the Coordination of Humanitarian Affairs (OCHA) activated the Digital Humanitarian Network (DHN) to rapidly deliver a detailed dataset of geo-tagged pictures and video footage (posted on Twitter) depicting the damage caused by the Typhoon. The UN needed this dataset within 12 hours, which required that 20,000 tweets be analyzed as quickly as possible. The Standby Volunteer Task Force (SBTF), a member of the Digital Humanitarian Network, immediately used CrowdFlower to identify all tweets with links to pictures and video footage. SBTF volunteers subsequently analyzed those pictures and videos for damage and geographic information using other means.

This was the most rapid use of CrowdFlower following a disaster. In fact, this use of CrowdFlower was pioneering in many respects. It was the first time that a member of the Digital Humanitarian Network made use of CrowdFlower (and thus micro-tasking) for disaster response. It was also the first time that CrowdFlower’s existing workforce was used for disaster response. In addition, this was the first time that data processed by CrowdFlower contributed to an official crisis map produced by the UN for disaster response (see above).

These three use-cases, Haiti, Pakistan and the Philippines, clearly demonstrate the added value of micro-tasking (and hence CrowdFlower) for disaster response. If CrowdFlower had not been available in Haiti, the alternative would have been to pay a handful of professional translators. The total price could have come to some $10,000 for 50,000 text messages (at $0.20 per word). Thanks to CrowdFlower, Haitians in Haiti were given the chance to make some of that money by translating the text messages themselves. Income generation programs are absolutely critical to rapid recovery following major disasters. In Pakistan, the use of CrowdFlower enabled Pakistani students and the Diaspora to volunteer their time and thus accelerate the translation work for free. Following Typhoon Pablo, paid CrowdFlower workers from the Philippines, India and Australia categorized several thousand tweets in just a couple of hours while volunteers from the Standby Volunteer Task Force geo-tagged the results. Had CrowdFlower not been available then, it is highly unlikely that the mission would have succeeded given the very short turnaround required by the UN.

While impressive, the above use-cases were also reactive. We need to be a lot more proactive, which is why I’m excited to be collaborating with CrowdFlower colleagues to customize a standby platform for use by the Digital Humanitarian Network. Having a platform ready to go within minutes is key. And while digital volunteers will be able to use this standby platform, I strongly believe that paid CrowdFlower workers also have a key role to play in the digital humanitarian ecosystem. Indeed, CrowdFlower’s large, multinational and multilingual global workforce is simply unparalleled and has the distinct advantage of being very well versed in the CrowdFlower platform.

In sum, it is high time that the digital humanitarian space move from crowdsourcing to micro-tasking. It has been three years since the tragic earthquake in Haiti, but we have yet to adopt micro-tasking more widely. CrowdFlower should thus play a key role in promoting and enabling this important shift. Their continued leadership in digital humanitarian response should also serve as a model for other private sector companies in the US and across the globe.


How the UN Used Social Media in Response to Typhoon Pablo (Updated)

Our mission as digital humanitarians was to deliver a detailed dataset of pictures and videos (posted on Twitter) which depict damage and flooding following the Typhoon. An overview of this digital response is available here. The task of our United Nations colleagues at the Office for the Coordination of Humanitarian Affairs (OCHA) was to rapidly consolidate and analyze our data to compile a customized Situation Report for OCHA’s team in the Philippines. The maps, charts and figures below are taken from this official report.

[Map: Typhoon Pablo social media mapping, OCHA, 6 December 2012]

This map is the first ever official UN crisis map entirely based on data collected from social media. Note the “Map data sources” at the bottom left of the map: “The Digital Humanitarian Network’s Solution Team: Standby Volunteer Task Force (SBTF) and Humanity Road (HR).” In addition to several UN agencies, the government of the Philippines has also made use of this information.



The cleaned data was subsequently added to this Google Map and also made public on the official Google Crisis Map of the Philippines.


One of my main priorities now is to make sure we do a far better job at leveraging advanced computing and microtasking platforms so that we are better prepared the next time we’re asked to repeat this kind of deployment. On the advanced computing side, it should be perfectly feasible to develop an automated way to crawl Twitter and identify links to images and videos. My colleagues at QCRI are already looking into this. As for microtasking, I am collaborating with PyBossa and CrowdFlower to ensure that we have highly customizable platforms on standby so we can immediately upload the results of QCRI’s algorithms. In sum, we have got to move beyond simple crowdsourcing and adopt more agile microtasking and social computing platforms, as both are far more scalable.
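As a rough illustration of such an automated pre-filter, the sketch below scans tweet text and keeps only posts linking to images or videos. The host list and patterns are assumptions for illustration; a production crawler would work against the Twitter API rather than raw text:

```python
# A minimal sketch of filtering tweets for links to pictures and videos.
import re

MEDIA_HOSTS = ("twitpic.com", "instagram.com", "youtube.com", "youtu.be",
               "pic.twitter.com", "yfrog.com")
IMAGE_EXT = re.compile(r"\.(jpe?g|png|gif)(\?|$)", re.IGNORECASE)
URL = re.compile(r"https?://\S+")

def has_media_link(tweet_text):
    """True if any URL in the tweet points at a known media host or image file."""
    return any(any(host in url for host in MEDIA_HOSTS) or IMAGE_EXT.search(url)
               for url in URL.findall(tweet_text))

tweets = ["Flooding in Compostela Valley http://twitpic.com/abc123 #PabloPH",
          "Stay safe everyone #PabloPH"]
print([t for t in tweets if has_media_link(t)])  # keeps only the first tweet
```

Everything that survives this cheap automated pass would then go to microtaskers for the judgment calls machines still get wrong: does the picture actually show damage, and where was it taken?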

In the meantime, a big big thanks once again to all our digital volunteers who made this entire effort possible and highly insightful.

Disaster Relief 2.0: Between a Signac and a Picasso

The United Nations Foundation, Vodafone Foundation, OCHA and my “alma mater” the Harvard Humanitarian Initiative just launched an important report that seeks to chart the future of disaster response based on critical lessons learned from Haiti. The report, entitled “Disaster Relief 2.0: The Future of Information Sharing in Humanitarian Emergencies,” builds on a previous UN/Vodafone Foundation Report co-authored by Diane Coyle and myself just before the Haiti earthquake: “New Technologies in Emergencies and Conflict: The Role of Information and Social Networks.”

The authors of the new study begin with a warning: “this report sounds an alarm bell. If decision makers wish to have access to (near) real-time assessments of complex emergencies, they will need to figure out how to process information flows from many more thousands of individuals than the current system can handle.” In any given crisis, “everyone has a piece of information, everyone has a piece of that picture.” And more want to share their piece of the picture. So part of the new challenge lies in how to collect and combine multiple feeds of information such that the result paints a coherent and clear picture of an evolving crisis situation. What we need is a Signac, not a Picasso.

The former, Paul Signac, is known for using “pointillism,” a technique in which “small, distinct dots of pure color are applied in patterns to form an image.” Think of these dots as data points drawn from diverse palettes but combined to depict an appealing and consistent whole. In contrast, Pablo Picasso’s paintings from his Cubism and Surrealism period often resemble unfinished collages of fragmented objects. A Picasso gives the impression of impossible puzzle pieces in contrast to the single legible harmony of a Signac.

This Picasso effect, or “information fragmentation” as the humanitarian community calls it, was one of the core information management challenges that the humanitarian community faced in Haiti: “the division of data resources and analysis into silos that are difficult to aggregate, fuse, or otherwise reintegrate into composite pictures.” This plagued information management efforts between and within UN clusters, which made absorbing new and alternative sources of information–like crowdsourced SMS reports–even less possible.

These new information sources exist in part thanks to new players in the disaster response field, the so-called Volunteer Technical Communities (VTCs). This shift towards a more multi-polar system of humanitarian response brings both new opportunities and new challenges. One way to overcome “information fragmentation” and create a Signac is for humanitarian organizations and VTCs to work more closely together. Indeed, as “volunteer and technical communities continue to engage with humanitarian crises they will increasingly add to the information overload problem. Unless they can become part of the solution.” This is in large part why we launched the Standby Volunteer Task Force at the 2010 International Conference on Crisis Mapping (ICCM 2010): to avoid information overload by creating a common canvas and style between volunteer crisis mappers and the humanitarian community.

What is perhaps most striking about this new report is the fact that it went to press the same month that two of the largest crisis mapping operations since Haiti were launched, namely the Libya and Japan Crisis Maps. One could already write an entirely new UN/Vodafone Foundation Report on just the past 3 months of crisis mapping operations. The speed with which learning and adaptation is happening in some VTCs is truly astounding. As I noted in this earlier blog post, “Crisis Mapping Libya: This is no Haiti“, we have come a long way since the Haiti response. Indeed, lessons from last year have been identified, they have been learned and operationally applied by VTCs like the Task Force. The fact that OCHA formally requested activation of the Task Force to provide a live crisis map of Libya just months after the Task Force was launched is a clear indication that we are on the right track. This is no Picasso.

Referring to lessons learned in Haiti will continue to be important, but as my colleague Nigel Snoad has noted, Haiti represents an outlier in terms of disasters. We are already learning new lessons and implementing better practices in response to crises that couldn’t be more different from Haiti, e.g., crisis mapping hostile, non-permissive environments like Egypt, Sudan and Libya. In Japan, we are also learning how a more hierarchical society with a highly developed and media-rich environment presents a different set of opportunities and challenges for crisis mapping. This is why VTCs will continue to be at the forefront of Disaster Relief 2.0 and why reports like this one are so key: they clearly show that a Signac is well within our reach if we continue working together.

Changing the World One Map at a Time

The response to last year’s crises in Haiti, Chile and Pakistan revealed an exciting potential. Volunteers from thousands of miles away could possibly play an important role in humanitarian operations by using social networking platforms and free, open source software to create live crisis maps. Today’s volunteer efforts on the Libya Crisis Map are turning that potential into reality.

When I called Ushahidi’s David Kobia to launch the Haiti Crisis Map just hours after the earthquake that struck Port-au-Prince, that was purely an emotional reaction. I had no plan. I just needed to do something because watching the first reports coming in on CNN was agonizing and unbearable. Some of my closest friends from The Fletcher School were in Haiti at the time and I had no idea whether they were still alive. Little did I know that several hundred volunteers from dozens of countries would soon join the efforts to create a live crisis map of the disaster-struck country.

I called David again a few weeks later just hours after another earthquake had struck, this time Chile. Unlike Haiti, I now had a better sense of what it would take to launch a crisis map, but I had no idea who might volunteer to keep this map alive around the clock since volunteers working on Haiti were either over-stretched or burnt out, or both (like I was). As luck would have it, I was due to give a talk at Columbia University that same day on our experience in Haiti. So I used my speaking slot to recruit volunteers for Chile. Several came up to me after the presentation and some sixty new volunteers were trained within 48 hours. This is how the Chile Crisis Map got started.

Pakistan was different. I didn’t launch a crisis map; someone else did, and from Karachi. But he needed volunteer support to create the Pakistan Crisis Map so we turned to the incredible volunteers who had helped out in Haiti and Chile and recruited new ones along the way. By now, we had a core set of volunteers with an impressive track record in live crisis mapping.

This is when I realized what the next logical step was: to give these volunteers a name and visibility. We needed to give them the opportunity to share what they had learned and train new recruits. Thus was born the Standby Volunteer Task Force: an online community for live mapping.

We got to work right away after launching in October 2010. Our first step was to create protocols and establish workflows in order to streamline crisis mapping processes and render them as efficient and effective as possible. We had the opportunity to test our first drafts thanks to the UN OCHA Colombia team who invited us to participate in an official earthquake disaster simulation exercise just weeks after we launched. This provided us with invaluable feedback which we used to revise our protocols.

In January of this year, we activated the Task Force to provide live mapping support to monitor the referendum in Southern Sudan. We also learned a lot from that experience and improved our workflows accordingly. Last month, New Zealand was struck by a powerful earthquake so we activated the Task Force at the request of local disaster response colleagues. Again, there were some important lessons gained from that deployment, and again we went back to our protocols and workflows to improve them further.

This week, the Information Services Section of OCHA in Geneva requested that the Task Force be activated for Libya. This was a first. Unlike Haiti, we had a direct channel from day one to the main coordinating body of the UN for humanitarian assistance. We also had a trained network of volunteers on standby with protocols and workflows that had already been revised and tested several times over almost half-a-year. It is also important to emphasize that many Task Force volunteers are skilled professionals, including humanitarian professionals. This is a self-selected group and while many new volunteers who join may have little experience in crisis mapping, they go through a structured training process managed by the most experienced volunteers on the team.

The result? A Crisis Map of Libya launched within hours and public institutional support expressed within days. Some of the awesome volunteers crisis mapping Libya brought their experience from the Haiti, Chile and Pakistan days. Most, however, are newly trained and bring renewed energy, dedication and good cheer to the Task Force.

Below is a new interface option developed as a plugin for an Ushahidi project in Liberia that is being used for Libya as well. Our colleagues at OCHA are using this interface almost exclusively as it provides a number of important functionalities for data visualization and comparative analysis:

The public tweets below are amazing and unprecedented in so many ways! Thank you UN and Josette, we really appreciate your public support!

Note that Josette Sheeran is the Executive Director of WFP.

We still have a long way to go with the Task Force, but boy have we covered even more ground since Haiti. There are for me two powerful narratives in this story:

The first is a reminder that being human is about helping others in need. And thanks to today’s easy mapping platforms, volunteers can help respond to a crisis from thousands of miles away by collaborating online to create a live map that can be used to support humanitarian operations. They can use social networking platforms to connect, organize, recruit and train. There’s so much we as volunteers can do online to help, especially if we’re prepared and are ready to work hard. This is why I think it’s time for established volunteer networks like UN Volunteers (UNVs) to offer both field-based and web-based opportunities. Why not train UNVs in online crisis mapping so they can be activated to directly support UN operations via web?

The second powerful narrative for me is the collaboration between large established organizations and new decentralized volunteer networks. OCHA took a bold move when they decided to bet on the Volunteer Task Force for the Libya Crisis Map. They should be applauded. They’ve never done this before and neither have we (vis-a-vis direct collaboration with a UN office during a major crisis). I find this unprecedented move a powerful indication that learning by doing is almost always better than learning by just talking. This impromptu collaboration also shows that large organizations and small volunteer networks can work together in a way that creates more added value than flying solo does.

It’s the beginning of a new world for humanitarian response; the Prologue, if you will. I’m excited about what comes next. I know there’s a lot to figure out and many obstacles to overcome. I have no illusions about that. But I’m hopeful; as ready as I’ll ever be; and I have the honor and privilege of working with and learning from the best volunteer network of crisis mappers on the planet. They are the true heroes, for without them the map would be barren. Onwards.

Updated: Humanitarian Situation Risk Index (HSRI)

The Humanitarian Situation Risk Index (HSRI) is a tool created by UN OCHA in Colombia. The objective of the HSRI is to estimate the probability that a humanitarian situation will occur in each of the country’s municipalities in relation to the ongoing complex emergency. The HSRI’s overall purpose is to serve as a “complementary analytical tool in decision-making allowing for humanitarian assistance prioritization in different regions as needed.”

UPDATE: I actually got in touch with the HSRI group back in February 2009 to let them know about Ushahidi and they have since “been running some beta-testing on Ushahidi, and may as of next week start up a pilot effort to organize a large number of actors in northeastern Colombia to feed data into [their] on-line information system.” In addition, they “plan to move from a logit model calculating probability of a displacement situation for each of the 1,120 Colombian municipalities, to cluster analysis, and have been running the identical model on data [they] have for confined communities.”

[Figure: HSRI risk map of Colombian municipalities]

HSRI uses statistical tools (principal component analysis and the logit model) to estimate the risk indexes. The indexes range from 0 to 1, where 0 is no risk and 1 is maximum risk. The team behind the project clearly states that the tool does not indicate the current situation in each municipality, given that the data is not collected in real time. Nor does the tool quantify the precise number of persons at risk.
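
To make the mechanics concrete, here is a minimal sketch of how a logit-based risk index of this kind could be estimated in Python. To be clear, the indicators, data and coefficients below are entirely hypothetical; this is not the HSRI team’s actual model.

```python
# Minimal sketch of a logit-based risk index (illustrative only; the
# indicators, data and outcome below are hypothetical, not the HSRI model).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical municipality-level indicators, one row per municipality
# (e.g., past displacement events, armed-group presence, poverty rate).
n_municipalities = 1120
X = rng.random((n_municipalities, 3))

# Hypothetical binary outcome: 1 if a humanitarian situation occurred.
y = (X @ np.array([2.0, 1.5, 0.5])
     + rng.normal(0, 0.5, n_municipalities) > 2.0).astype(int)

# Fit the logit model; the predicted probabilities serve as the risk
# index, ranging from 0 (no risk) to 1 (maximum risk). In practice,
# principal component analysis could first collapse many correlated
# indicators into a few components before the logit fit.
model = LogisticRegression().fit(X, y)
risk_index = model.predict_proba(X)[:, 1]
print(risk_index[:5])
```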

The data used to estimate the Humanitarian Situation Risk Index “mostly comes from official sources, due to the fact that the vast majority of data collected and processed are from State entities, and in the remaining cases the data is from non-governmental or multilateral institutions.” The following table depicts the data collected.

[Table: data used to estimate the HSRI]

I’d be interested to know whether the project will move towards doing temporal analysis of the data. This would enable trend analysis, which could inform decision-making more directly than a static map representing static data. One other thought would be to complement this “baseline” type of data with event-data, using mobile phones and a “bounded crowdsourcing” approach à la Ushahidi.
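
As a quick illustration of what such trend analysis might look like, here is a small sketch; the municipalities, dates and risk scores are invented for the example.

```python
# Illustrative sketch of temporal trend analysis on municipality-level
# risk scores; the municipalities, dates and values are invented.
import pandas as pd

# Suppose each row is one (municipality, date, risk_index) observation
# from repeated HSRI-style estimations over time.
df = pd.DataFrame({
    "municipality": ["A", "A", "A", "B", "B", "B"],
    "date": pd.to_datetime(["2009-01-01", "2009-02-01", "2009-03-01"] * 2),
    "risk_index": [0.42, 0.51, 0.63, 0.30, 0.28, 0.25],
})

# Average month-to-month change per municipality: a positive trend
# flags places where risk is growing, which is more actionable for
# prioritization than a single static snapshot.
trend = (
    df.sort_values("date")
      .groupby("municipality")["risk_index"]
      .apply(lambda s: s.diff().mean())
)
print(trend)  # A: rising (~ +0.105/month), B: falling (~ -0.025/month)
```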

Patrick Philippe Meier

Internews, Ushahidi and Communication in Crises

I had the pleasure of participating in two Internews-sponsored meetings in New York today. Fellow participants included OCHA, Oxfam, the Red Cross, Save the Children, World Vision, the BBC World Service Trust, the Thomson Reuters Foundation, the Humanitarian Media Foundation, International Media Support and several others.

The first meeting was a three-hour brainstorming session on “Improving Humanitarian Information for Affected Communities” organized in preparation for the second meeting on “The Unmet Need for Communication in Humanitarian Response,” which was held at the UN General Assembly.

The meetings presented an ideal opportunity for participants to share information on current initiatives that focus on communications with crisis-affected populations. Ushahidi naturally came to mind, so I introduced the concept of crowdsourcing crisis information. I should have expected the immediate pushback on the issue of data validation.

Crowdsourcing and Data Validation

While I have already blogged about overcoming some of the challenges of data validation in the context of crowdsourcing here, there is clearly more to add, since the demand for “fully accurate information,” a.k.a. “facts and only facts,” was echoed during the second meeting in the General Assembly. I’m hoping this blog post will help move the discourse beyond the black-and-white concepts that characterize current discussions of data accuracy.

Having worked in the field of conflict early warning and rapid response for the past seven years, I fully understand the critical importance of accurate information. Indeed, a substantial component of my consulting work on CEWARN in the Horn of Africa specifically focused on the data validation process.

To be sure, no one in the humanitarian and human rights community is asking for inaccurate information. We all subscribe to the notion of “Do No Harm.”

Does Time Matter?

What was completely missing from today’s meetings, however, was any reference to time. Nobody noted the importance of timely information during crises, which is rather ironic since both meetings focused on sudden-onset emergencies. I suspect that our demand for (and partly Western obsession with) fully accurate information has clouded some of our thinking on this issue.

This is particularly ironic given that evidence-based policy-making and data-driven analysis are still the exception rather than the rule in the humanitarian community. Field-based organizations frequently make decisions on coordination, humanitarian relief and logistics without complete and fully accurate, real-time information, especially right after a crisis strikes.

So why is this same community holding crowdsourcing to a higher standard?

Time versus Accuracy

Timely information is a critical element for many of us in the humanitarian and human rights communities when a crisis strikes. Surely, then, we must recognize the tradeoff between the accuracy and the timeliness of information. Crisis information is perishable!

The more we demand fully accurate information, the longer the data validation process typically takes, and thus the more likely the information will become useless. Our public health colleagues who work in emergency medicine know this only too well.

The figure below represents the perishable nature of crisis information. Data validation makes sense during time periods A and B. Continuing to carry out data validation beyond time B may be beneficial to us, but hardly to crisis-affected communities. We may very well have the luxury of time. Not so for at-risk communities.

[Figure: the relevance of crisis information declining over time, with validation windows A and B]
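
To make the tradeoff explicit, purely as a toy illustration, one can model the expected usefulness of a report as the product of its verified accuracy (which rises with time) and its relevance (which decays). The functional forms and parameters below are invented:

```python
# Toy model of the accuracy/timeliness tradeoff (the functional forms
# and parameters are invented purely for illustration).
import numpy as np

t = np.linspace(0, 48, 200)        # hours since the crisis event

accuracy = 1 - np.exp(-t / 6.0)    # verification improves with time...
relevance = np.exp(-t / 12.0)      # ...but crisis information is perishable

usefulness = accuracy * relevance  # expected value to affected communities

peak = t[np.argmax(usefulness)]
print(f"Usefulness peaks roughly {peak:.1f} hours in; validating far "
      "beyond that point benefits the validators, not the affected.")
```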

This point often gets overlooked when anxieties about inaccurate information surface. Of course we need to ensure that the information we produce or relay is as accurate as possible. Of course we want to prevent dangerous rumors from spreading. To this end, the Thomson Reuters Foundation clearly spelled out that their new Emergency Information Service (EIS) would focus on disseminating facts and only facts. (See my previous post on EIS here.)

Yes, we can focus all our efforts on disseminating facts, but are facts communicated after time period B above really useful to crisis-affected communities? (Incidentally, since EIS will be based on verifiable facts, their approach may well be likened to Wikipedia’s rules for corrective editing. In any event, I wonder how EIS might define the term “fact.”)

Why Ushahidi?

Ushahidi was created within days of the Kenyan elections in 2007 because both the government and the national media were seriously under-reporting widespread human rights violations. I was in Nairobi visiting my parents at the time, and it was frustrating to see the majority of international and national NGOs on the ground suffering from “data hugging disorder,” i.e., they had no interest whatsoever in sharing information with each other, or with the public for that matter.

This left the Ushahidi team with few options, which is why they decided to develop a transparent platform that would allow Kenyans to report directly, thereby circumventing the government, media and NGOs, who were working against transparency.

Note that the Ushahidi team is made up entirely of tech experts. Here’s a question: why didn’t the human rights or humanitarian community set up a platform like Ushahidi? Why were a few tech-savvy Kenyans without a humanitarian background able to set up and deploy the platform within a week, and not the humanitarian community? Where were we? Shouldn’t we be the ones pushing for better information collection and sharing?

In a recent study for the Harvard Humanitarian Initiative (HHI), I mapped and time-stamped reports on the post-election violence reported by the mainstream media, citizen journalists and Ushahidi. I then created a Google Earth layer of this data and animated the reports over time and space. I recommend reading the conclusions.

Accuracy is a Luxury

Having worked in humanitarian settings, we all know that accuracy is more often a luxury than a reality, particularly right after a crisis strikes. Accuracy is not black and white, yes or no. Rather, we need to start thinking in terms of likelihood, i.e., how likely is this piece of information to be accurate? All of us already do this every day, albeit subjectively. Why not think of ways to complement or triangulate our personal subjectivities to determine the accuracy of information?

At CEWARN, we included a “Source of Information” field for each incident report. A field reporter could select from three choices: (1) direct observation; (2) media; and (3) rumor. This gave us a three-point weighted scale that could be used in subsequent analysis.
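
Here is a rough sketch of how such a source-based scale can feed into analysis; the weights and reports below are hypothetical, not CEWARN’s actual values.

```python
# Sketch of weighting incident reports by source reliability; the
# weights here are hypothetical, not CEWARN's actual values.
SOURCE_WEIGHTS = {
    "direct_observation": 1.0,  # highest confidence
    "media": 0.6,               # second-hand but checkable
    "rumor": 0.3,               # lowest confidence, still informative
}

reports = [
    {"event": "cattle raid", "source": "direct_observation"},
    {"event": "cattle raid", "source": "rumor"},
    {"event": "road ambush", "source": "media"},
]

# Aggregate a confidence score per reported event: several low-weight
# reports can jointly corroborate what one high-weight report asserts.
scores = {}
for r in reports:
    scores[r["event"]] = scores.get(r["event"], 0.0) + SOURCE_WEIGHTS[r["source"]]

print(scores)  # {'cattle raid': 1.3, 'road ambush': 0.6}
```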

At Ushahidi, we are working on Swift River, a platform that applies human crowdsourcing and machine analysis (natural language parsing) to filter crisis information produced in real time, i.e., during time periods A and B above. Colleagues at WikiMapAid are developing similar solutions for data on disease outbreaks. See my recent post on WikiMapAid and data validation here.

Conclusion

In sum, there are various ways to rate the likelihood that a reported event is true. But again, we are not looking to develop a platform that ensures 100% reliability. If full accuracy were the gold standard of humanitarian response (or of military action, for that matter), the entire enterprise would come to a grinding halt. The intelligence community has recognized this as well, as I have blogged about here.

The purpose of today’s meetings was for us to think more concretely about communication in crises from the perspective of at-risk communities. Yet as soon as I mentioned crowdsourcing, the discussion turned to our own demand for fully accurate information, with no concerns raised about the importance of timely information for crisis-affected communities.

Ironic, isn’t it?

Patrick Philippe Meier

Crisis Mapping Conference Proposal

Bridging the Divide in Crisis Mapping

As mentioned in a recent blog post, my colleague Jen Ziemke and I are organizing a workshop on the topic of crisis mapping. The purpose of this workshop is to bring together a small group of scholars and practitioners who are pioneering the new field of crisis mapping. We are currently exploring funding opportunities with a number of donors and welcome any suggestions you might have for specific sponsors.

The new field of crisis mapping encompasses the collection, dynamic visualization and subsequent analysis of georeferenced information on contemporary conflicts and human rights violations. A wide range of sources is used to create these crisis maps (e.g., events data, newspaper and intelligence parsing, satellite imagery, interview and survey data, SMS, etc.). Scholars have developed several analytical methodologies to identify patterns in dynamic crisis maps, ranging from computational methods and visualization techniques to spatial econometrics and “hot spot” analysis.
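
As a simple illustration of the “hot spot” end of that methodological spectrum, here is a minimal grid-based sketch; the event coordinates and the density threshold are invented:

```python
# Minimal grid-based "hot spot" sketch over georeferenced event data;
# the coordinates and the density threshold are invented.
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical event locations (latitude, longitude), deliberately
# clustered so that at least one dense cell emerges.
lats = np.concatenate([rng.normal(0.5, 1.0, 400), rng.normal(1.0, 0.1, 100)])
lons = np.concatenate([rng.normal(36.0, 1.0, 400), rng.normal(36.5, 0.1, 100)])

# Bin events into a coarse grid and flag unusually dense cells.
counts, lat_edges, lon_edges = np.histogram2d(lats, lons, bins=10)
threshold = counts.mean() + 2 * counts.std()

for i, j in np.argwhere(counts > threshold):
    print(f"Hot spot near lat {lat_edges[i]:.2f}, lon {lon_edges[j]:.2f}: "
          f"{int(counts[i, j])} events")
```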

While scholars employ these sophisticated methods in their academic research, operational crisis mapping platforms developed by practitioners are completely devoid of analytical tools. At the same time, scholars often assume that humanitarian practitioners are conversant in quantitative spatial analysis, which is rarely the case. Furthermore, practitioners who are deploying crisis mapping platforms do not have time to read the academic literature on this topic.

Mobile Crisis Mapping and Crisis Mapping Analytics

In other words, there is a growing divide between scholars and practitioners in the field of crisis mapping. The purpose of this workshop is to bridge this divide by bringing scholars and practitioners together to shape the future of crisis mapping. At the heart of this lie two new developments: Mobile Crisis Mapping (MCM) and Crisis Mapping Analytics (CMA). See previous blog posts on MCM and CMA here and here.

I created these terms to highlight areas in need of further applied research. As MCM platforms like Ushahidi’s become more widely available, the amount of crowdsourced data will increase substantially, and so will many of the challenges around data validation and analysis. This is why we need to think now about developing a field of Crisis Mapping Analytics (CMA) to make sense of the incoming data and identify new and recurring patterns in human rights abuses and conflict.

This entails developing user-friendly CMA metrics that practitioners can build into their MCM platforms. However, there is no need to reinvent the wheel, since scholars who analyze spatial and temporal patterns of conflict already employ sophisticated metrics that can inform the development of CMA metrics. In sum, a dedicated workshop that brings these practitioners and scholars together would help accelerate the developing field of crisis mapping.

Proposed Agenda

Here is a draft agenda that we’ve been sharing with prospective donors. We envisage the workshop taking place over a Friday, Saturday and Sunday. Feedback is very much welcome.

Day 1 – Friday

Welcome and Introductions

Keynote 1 – The Past & Future of Crisis Mapping

Roundtable 1 – Presentation of Academic and Operational Crisis Mapping projects with Q&A

Lunch

Track 1a – Introduction to Automated Crisis Mapping (ACM): From information collection and data validation to dynamic visualization and dissemination

Track 1b – Introduction to Mobile Crisis Mapping (MCM): From information collection and data validation to dynamic visualization and dissemination

&

Track 2a – Special introduction for newly interested colleagues and students on spatial thinking in the social sciences, using maps to understand crisis, violence and war

Track 2b – Breakout session for students and new faculty: hands-on introduction to GIS and other mapping programs

Dinner

Day 2 – Saturday

Keynote 2 – Crisis Mapping and Patterns Analysis

Roundtable 2 – Interdisciplinary Applications: Innovations & Challenges

Roundtable 3 – Data Collection & Validation: Innovations & Challenges

Lunch

Roundtable 4 – Crisis Mapping Analytics (CMA): Metrics and Taxonomies

Roundtable 5 – Crisis Mapping & Response: Innovations & Challenges

Dinner

Day 3 – Sunday

Keynote 3 – What Happens Next – Shaping the Future of Crisis Mapping

Self-organized Sessions

Wrap-Up

Proposed Participants

Here are some of the main academic institutes and crisis mapping organizations we had in mind:

Institutes

  • John Carroll University (JCU)
  • Harvard Humanitarian Initiative (HHI)
  • Peace Research Institute Oslo (PRIO)
  • International Conflict Research, ETH Zurich
  • US Institute of Peace (USIP)
  • Political Science Department, Yale University

Organizations

Next Steps

Before we can move forward on any of this, we need to identify potential donors to help co-sponsor the workshop. So please do get in touch if you have any suggestions and/or creative ideas.

Patrick Philippe Meier