Tag Archives: UN

How the UN Used Social Media in Response to Typhoon Pablo (Updated)

Our mission as digital humanitarians was to deliver a detailed dataset of pictures and videos (posted on Twitter) which depict damage and flooding following the typhoon. An overview of this digital response is available here. The task of our United Nations colleagues at the Office for the Coordination of Humanitarian Affairs (OCHA) was to rapidly consolidate and analyze our data to compile a customized Situation Report for OCHA’s team in the Philippines. The maps, charts and figures below are taken from this official report (click to enlarge).

Typhoon Pablo Social Media Mapping (OCHA, 6 December 2012)

This map is the first ever official UN crisis map entirely based on data collected from social media. Note the “Map data sources” at the bottom left of the map: “The Digital Humanitarian Network’s Solution Team: Standby Volunteer Task Force (SBTF) and Humanity Road (HR).” In addition to several UN agencies, the government of the Philippines has also made use of this information.


The cleaned data was subsequently added to this Google Map and also made public on the official Google Crisis Map of the Philippines.


One of my main priorities now is to make sure we do a far better job at leveraging advanced computing and microtasking platforms so that we are better prepared the next time we’re asked to repeat this kind of deployment. On the advanced computing side, it should be perfectly feasible to develop an automated way to crawl Twitter and identify links to images and videos. My colleagues at QCRI are already looking into this. As for microtasking, I am collaborating with PyBossa and Crowdflower to ensure that we have highly customizable platforms on standby so we can immediately upload the results of QCRI’s algorithms. In sum, we have got to move beyond simple crowdsourcing and adopt more agile microtasking and social computing platforms, as both are far more scalable.
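To give a concrete sense of what such an automated filter might involve, here is a minimal Python sketch. It assumes a batch of tweets has already been collected; the tweet structure, media hosts and keywords are my own illustrative assumptions, not QCRI's actual pipeline.

```python
# Minimal sketch: filter already-collected tweets for links to images or
# videos that appear to depict damage or flooding. The tweet format below
# is hypothetical; adapt it to whatever your crawler actually returns.
import re

# Hosts commonly used to share photos/videos (illustrative, not exhaustive)
MEDIA_HOSTS = ("twitpic.com", "instagram.com", "youtube.com", "youtu.be",
               "pic.twitter.com", "flickr.com")

# Keywords suggesting damage or flooding (English/Filipino examples)
DAMAGE_TERMS = re.compile(r"damage|flood|baha|destroy|pablo", re.I)

def looks_relevant(tweet):
    """Return True if the tweet links to media and mentions damage/flooding."""
    text = tweet.get("text", "")
    urls = tweet.get("urls", [])
    has_media = any(host in url for url in urls for host in MEDIA_HOSTS)
    return has_media and bool(DAMAGE_TERMS.search(text))

def candidate_reports(tweets):
    """Yield tweets worth sending to a microtasking platform for review."""
    for tweet in tweets:
        if looks_relevant(tweet):
            yield tweet

# Example usage with made-up data:
sample = [{"text": "Flooding in Compostela Valley #pablo",
           "urls": ["http://twitpic.com/abc123"]}]
print(list(candidate_reports(sample)))
```

The point is not the filter itself but the division of labor: algorithms surface candidate reports at scale, and volunteers on a microtasking platform verify and geo-tag them.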

In the meantime, a big big thanks once again to all our digital volunteers who made this entire effort possible and highly insightful.

Big Data for Development: Challenges and Opportunities

The UN Global Pulse report on Big Data for Development ought to be required reading for anyone interested in humanitarian applications of Big Data. The purpose of this post is not to summarize this excellent 50-page document but to relay the most important insights contained therein. In addition, I question the motivation behind the unbalanced commentary on Haiti, which is my only major criticism of this otherwise authoritative report.

Real-time “does not always mean occurring immediately. Rather, “real-time” can be understood as information which is produced and made available in a relatively short and relevant period of time, and information which is made available within a timeframe that allows action to be taken in response i.e. creating a feedback loop. Importantly, it is the intrinsic time dimensionality of the data, and that of the feedback loop that jointly define its characteristic as real-time. (One could also add that the real-time nature of the data is ultimately contingent on the analysis being conducted in real-time, and by extension, where action is required, used in real-time).”

Data privacy “is the most sensitive issue, with conceptual, legal, and technological implications.” To be sure, “because privacy is a pillar of democracy, we must remain alert to the possibility that it might be compromised by the rise of new technologies, and put in place all necessary safeguards.” Privacy is defined by the International Telecommunications Union as the “right of individuals to control or influence what information related to them may be disclosed.” Moving forward, “these concerns must nurture and shape on-going debates around data privacy in the digital age in a constructive manner in order to devise strong principles and strict rules—backed by adequate tools and systems—to ensure ‘privacy-preserving analysis.’”

Non-representative data is often dismissed outright since findings based on such data cannot be generalized beyond that sample. “But while findings based on non-representative datasets need to be treated with caution, they are not valueless […].” Indeed, while the “sampling selection bias can clearly be a challenge, especially in regions or communities where technological penetration is low […], this does not mean that the data has no value. For one, data from “non-representative” samples (such as mobile phone users) provide representative information about the sample itself—and do so in close to real time and on a potentially large and growing scale, such that the challenge will become less and less salient as technology spreads across and within developing countries.”

Perceptions rather than reality is what social media captures. Moreover, these perceptions can also be wrong. But only those individuals “who wrongfully assume that the data is an accurate picture of reality can be deceived. Furthermore, there are instances where wrong perceptions are precisely what is desirable to monitor because they might determine collective behaviors in ways that can have catastrophic effects.” In other words, “perceptions can also shape reality. Detecting and understanding perceptions quickly can help change outcomes.”

False data and hoaxes are part and parcel of user-generated content. While the challenges around reliability and verifiability are real, some media organizations, such as the BBC, stand by the utility of citizen reporting of current events: “there are many brave people out there, and some of them are prolific bloggers and Tweeters. We should not ignore the real ones because we were fooled by a fake one.” These organizations have devised internal strategies to confirm the veracity of the information they receive and choose to report, offering an example of what can be done to mitigate the challenge of false information. See for example my 20-page study on how to verify crowdsourced social media data, a field I refer to as information forensics. In any event, “whether false negatives are more or less problematic than false positives depends on what is being monitored, and why it is being monitored.”

“The United States Geological Survey (USGS) has developed a system that monitors Twitter for significant spikes in the volume of messages about earthquakes,” and as it turns out, 90% of user-generated reports that trigger an alert have turned out to be valid. “Similarly, a recent retrospective analysis of the 2010 cholera outbreak in Haiti conducted by researchers at Harvard Medical School and Children’s Hospital Boston demonstrated that mining Twitter and online news reports could have provided health officials a highly accurate indication of the actual spread of the disease with two weeks lead time.”
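I don't know the internals of the USGS system, but the underlying idea of flagging unusual spikes in message volume can be sketched in a few lines of Python. The window size and threshold below are purely illustrative assumptions.

```python
# Minimal sketch of spike detection on message counts per time window.
# A window is flagged when its count exceeds the rolling baseline mean by a
# chosen number of standard deviations. All thresholds here are assumptions.
from statistics import mean, stdev

def detect_spikes(counts, baseline_window=12, threshold_sigma=3.0):
    """Return indices of windows whose count spikes above the rolling baseline."""
    spikes = []
    for i in range(baseline_window, len(counts)):
        baseline = counts[i - baseline_window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and counts[i] > mu + threshold_sigma * sigma:
            spikes.append(i)
    return spikes

# Example: hourly counts of earthquake-related tweets (made-up numbers)
hourly = [4, 5, 3, 6, 4, 5, 4, 6, 5, 4, 5, 6, 120]
print(detect_spikes(hourly))  # -> [12]
```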

This leads to the other Haiti example raised in the report, namely the finding that SMS data was correlated with building damage. Please see my previous blog posts here and here for context. What the authors seem to overlook is that Benetech apparently did not submit their counter-findings for independent peer-review whereas the team at the European Commission’s Joint Research Center did—and the latter passed the peer-review process. Peer-review is how rigorous scientific work is validated. The fact that Benetech never submitted their blog post for peer-review is actually quite telling.

In sum, while this Big Data report is otherwise strong and balanced, I am really surprised that they cite a blog post as “evidence” while completely ignoring the JRC’s peer-reviewed scientific paper published in the Journal of the European Geosciences Union. Until counter-findings are submitted for peer review, the JRC’s results stand: unverified, non-representative crowdsourced text messages from the disaster-affected population in Port-au-Prince, translated from Haitian Creole to English through a novel crowdsourced volunteer effort and geo-referenced by hundreds of volunteers without any quality control, produced a statistically significant, positive correlation with building damage.

In conclusion, “any challenge with utilizing Big Data sources of information cannot be assessed divorced from the intended use of the information. These new, digital data sources may not be the best suited to conduct airtight scientific analysis, but they have a huge potential for a whole range of other applications that can greatly affect development outcomes.”

One such application is disaster response. Earlier this year, FEMA Administrator Craig Fugate gave a superb presentation on “Real Time Awareness” in which he relayed an example of how he and his team used Big Data (Twitter) during a series of devastating tornadoes in 2011:

“Mr. Fugate proposed dispatching relief supplies to the long list of locations immediately and received pushback from his team who were concerned that they did not yet have an accurate estimate of the level of damage. His challenge was to get the staff to understand that the priority should be one of changing outcomes, and thus even if half of the supplies dispatched were never used and sent back later, there would be no chance of reaching communities in need if they were in fact suffering tornado damage already, without getting trucks out immediately. He explained, “if you’re waiting to react to the aftermath of an event until you have a formal assessment, you’re going to lose 12-to-24 hours…Perhaps we shouldn’t be waiting for that. Perhaps we should make the assumption that if something bad happens, it’s bad. Speed in response is the most perishable commodity you have…We looked at social media as the public telling us enough information to suggest this was worse than we thought and to make decisions to spend [taxpayer] money to get moving without waiting for formal request, without waiting for assessments, without waiting to know how bad because we needed to change that outcome.”

“Fugate also emphasized that using social media as an information source isn’t a precise science and the response isn’t going to be precise either. “Disasters are like horseshoes, hand grenades and thermal nuclear devices, you just need to be close— preferably more than less.”

Twitter, Crises and Early Detection: Why “Small Data” Still Matters

My colleagues John Brownstein and Rumi Chunara at Harvard University’s HealthMap project are continuing to break new ground in the field of Digital Disease Detection. Using data obtained from tweets and online news, the team was able to identify a cholera outbreak in Haiti weeks before health officials acknowledged the problem publicly. Meanwhile, my colleagues from UN Global Pulse partnered with Crimson Hexagon to forecast food prices in Indonesia by carrying out sentiment analysis of tweets. I had actually written this blog post on Crimson Hexagon four years ago to explore how the platform could be used for early warning purposes, so I’m thrilled to see this potential realized.

There is a lot that intrigues me about the work that HealthMap and Global Pulse are doing. But one point that really struck me vis-a-vis the former is just how little data was necessary to identify the outbreak. To be sure, not many Haitians are on Twitter and my impression is that most humanitarians have not really taken to Twitter either (I’m not sure about the Haitian Diaspora). This would suggest that accurate, early detection is possible even without Big Data; even with “Small Data” that is neither representative nor verified. (Interestingly, Rumi notes that the Haiti dataset is actually larger than datasets typically used for this kind of study.)

In related news, a recent peer-reviewed study by the European Commission found that the spatial distribution of crowdsourced text messages (SMS) following the earthquake in Haiti was strongly correlated with building damage. Again, the dataset of text messages was relatively small. And again, this data was neither collected using random sampling (i.e., it was crowdsourced) nor was it verified for accuracy. Yet the analysis of this small dataset still yielded some particularly interesting findings that have important implications for rapid damage detection in post-emergency contexts.
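For readers curious what such an analysis involves at its simplest, here is a rough sketch. It is not the JRC's actual methodology and the per-cell counts are made up, but it shows the kind of correlation being computed between SMS volumes and damage scores aggregated over the same spatial grid cells.

```python
# Hedged sketch: correlate crowdsourced SMS counts with building-damage
# scores aggregated over the same grid cells. All numbers are illustrative.
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Per-grid-cell counts of SMS reports and remote-sensing damage scores
sms_per_cell    = [3, 14, 27, 8, 41, 5, 19, 33]   # made-up values
damage_per_cell = [2, 11, 30, 6, 38, 4, 15, 29]   # made-up values

print(round(pearson(sms_per_cell, damage_per_cell), 3))
```

A real analysis would of course also test for statistical significance; the sketch only illustrates the basic comparison.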

While I’m no expert in econometrics, what these studies suggest to me is that detecting change over time is ultimately more critical than having a large-N dataset, let alone one that is obtained via random sampling or even vetted for quality control purposes. That doesn’t mean that the latter factors are not important; it simply means that the outcome of the analysis is relatively less sensitive to these specific variables. Changes in the baseline volume/location of tweets on a given topic appear to be strongly correlated with offline dynamics.

What are the implications for crowdsourced crisis maps and disaster response? Could similar statistical analyses be carried out on Crowdmap data, for example? How small can a dataset be and still yield actionable findings like those mentioned in this blog post?

On Technology and Building Resilient Societies to Mitigate the Impact of Disasters

I recently caught up with a colleague at the World Bank and learned that “resilience” is set to be the new “buzz word” in the international development community. I think this is very good news. Yes, discourse does matter. A single word can alter the way we frame problems and lead to new conceptual frameworks that inform the design and implementation of development projects and disaster risk reduction strategies.
 

The term resilience is important because it focuses not on us, the development and disaster community, but rather on local at-risk communities. The terms “vulnerability” and “fragility” were used in past discourse but they focus on the negative and seem to invoke the need for external protection, overlooking the possibility that local coping mechanisms do exist. From the perspective of this top-down approach, international organizations are the rescuers and aid does not arrive until they arrive.

Resilience, in contrast, implies radical self-sufficiency, and self-sufficiency suggests a degree of autonomy; self-dependence rather than dependence on an external entity that may or may not arrive, that may or may not be effective, and that may or may not stay the course. In the field of ecology, the term resilience is defined as “the capacity of an ecosystem to respond to a perturbation or disturbance by resisting damage and recovering quickly.” There are thus at least two ways for “social ecosystems” to be resilient:

  1. Resist damage by absorbing and dampening the perturbation.
  2. Recover quickly by bouncing back.

So how does a society resist damage from a disaster? As noted in an earlier blog post, “Disaster Theory for Techies“, there is no such thing as a “natural disaster”. There are natural hazards and there are social systems. If social systems are not sufficiently resilient to absorb the impact of a natural hazard such as an earthquake, then disaster unfolds. In other words, hazards are exogenous while disasters are the result of endogenous political, economic, social and cultural processes. Indeed, “it is generally accepted among environmental geographers that there is no such thing as a natural disaster. In every phase and aspect of a disaster—causes, vulnerability, preparedness, results and response, and reconstruction—the contours of disaster and the difference between who lives and dies is to a greater or lesser extent a social calculus” (Smith 2006).

So how do we take this understanding of disasters and apply it to building more resilient communities? Focusing on people-centered early warning systems is one way to do this. In 2006, the UN’s International Strategy for Disaster Reduction (ISDR) recognized that top-down early warning systems for disaster response were increasingly ineffective. They therefore called for a more bottom-up approach in the form of people-centered early warning systems. The UN ISDR’s Global Survey of Early Warning Systems (PDF) defines the purpose of people-centered early warning systems as follows:

“… to empower individuals and communities threatened by hazards to act in sufficient time and in an appropriate manner so as to reduce the possibility of personal injury, loss of life, damage to property and the environment, and loss of livelihoods.”

Information plays a central role here. Acting in sufficient time requires having timely information about (1) the hazard(s) and (2) how to respond. As some scholars have argued, a disaster is first of all “a crisis in communicating within a community—that is, a difficulty for someone to get informed and to inform other people” (Gilbert 1998). Improving ways for local communities to communicate internally is thus an important part of building more resilient societies. This is where information and communication technologies (ICTs) play an important role. Free and open source software like Ushahidi can also be used (the subject of a future blog post).

Open data is equally important. Local communities need to access data that will enable them to make more effective decisions on how to best minimize the impact of certain hazards on their livelihoods. This means accessing both internal community data in real time (the previous paragraph) and data external to the community that bears relevance to the decision-making calculus at the local level. This is why I’m particularly interested in the Open Data for Resilience Initiative (OpenDRI) spearheaded by the World Bank’s Global Facility for Disaster Reduction and Recovery (GFDRR). Institutionalizing OpenDRI at the state level will no doubt be a challenge in and of itself, but I do hope the initiative will also be localized using a people-centered approach like the one described above.

The second way to grow more resilient societies is by enabling them to recover quickly following a disaster. As Manyena wrote in 2006, “increasing attention is now paid to the capacity of disaster-affected communities to ‘bounce back’ or to recover with little or no external assistance following a disaster.” So what factors accelerate recovery in ecosystems in general? “To recover itself, a forest ecosystem needs suitable interactions among climate conditions and bio-actions, and enough area.” In terms of social ecosystems, these interactions can take the form of information exchange.

Identifying needs following a disaster and matching them to available resources is an important part of the process. Accelerating the rate of (1) identification, (2) matching and (3) allocation is one way to speed up overall recovery. In ecological terms, how quickly the damaged part of an ecosystem can repair itself depends on how many feedback loops (network connections) it has to the non- (or less-) damaged parts of the ecosystem(s). Some call this an adaptive system. This is where crowdfeeding comes in, as I’ve blogged about here (The Crowd is Always There: A Marketplace for Crowdsourcing Crisis Response) and here (Why Crowdsourcing and Crowdfeeding May be the Answer to Crisis Response).
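As a toy illustration of that identification-matching-allocation loop (and not the logic of any particular platform), a greedy matcher that pairs each reported need with the nearest unallocated resource of the right type might look like this; the data structures and coordinates are hypothetical.

```python
# Toy sketch: match reported needs to available resources by type and distance.
# The data structures and the greedy strategy are illustrative assumptions.
from math import hypot

needs = [
    {"id": "n1", "item": "water", "location": (18.54, -72.34)},
    {"id": "n2", "item": "tarps", "location": (18.55, -72.30)},
]
resources = [
    {"id": "r1", "item": "water", "location": (18.53, -72.33), "allocated": False},
    {"id": "r2", "item": "tarps", "location": (18.60, -72.31), "allocated": False},
]

def distance(a, b):
    """Rough planar distance; fine for a toy example, not for real geodesy."""
    return hypot(a[0] - b[0], a[1] - b[1])

def match(needs, resources):
    """Greedily allocate the nearest unallocated resource of the right type."""
    allocations = []
    for need in needs:
        candidates = [r for r in resources
                      if r["item"] == need["item"] and not r["allocated"]]
        if not candidates:
            continue  # unmet need: no matching resource available
        best = min(candidates,
                   key=lambda r: distance(r["location"], need["location"]))
        best["allocated"] = True
        allocations.append((need["id"], best["id"]))
    return allocations

print(match(needs, resources))  # -> [('n1', 'r1'), ('n2', 'r2')]
```

The speed of each of those three steps, not the sophistication of the matcher, is what the crowdfeeding argument is really about.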

Internal connectivity and communication are important for crowdfeeding to work, as is preparedness. This is why ICTs are central to growing more resilient societies. They can accelerate the identification of needs and the matching and allocation of resources. Free and open source platforms like Ushahidi can also play a role in this respect, as per my recent blog post entitled “Check-In’s With a Purpose: Applications for Disaster Response.” But without sufficient focus on disaster preparedness, these technologies are more likely to facilitate spontaneous response rather than a planned and thus efficient response. As Louis Pasteur famously noted, “Chance favors the prepared mind.” Hence the rationale for the Standby Volunteer Task Force for Live Mapping (SBTF), for example. Open data is also important in this respect. The OpenDRI initiative is thus important for both damage resistance and quick recovery.

I’m enjoying the process of thinking through these issues again. It’s been a while since I published and presented on the topic of resilience and adaptation. So I plan to read through some of my papers from a while back that addressed these issues in the context of violent conflict and climate change. What I need to do is update them based on what I’ve learned over the past four or five years.

If you’re curious and feel like jumping into some of these papers yourself, I recommend these two as a start:

  • Meier, Patrick. 2007. “New Strategies for Effective Early Response: Insights from Complexity Science.” Paper prepared for the 48th Annual Convention of the International Studies Association (ISA) in Chicago. Available online.
  • Meier, Patrick. 2007. “Networking Disaster and Conflict Early Warning Systems.” Paper prepared for the 48th Annual Convention of the International Studies Association (ISA) in Chicago. Available online.

More papers are available on my Publications page. This earlier blog post on “Failing Gracefully in Complex Systems: A Note on Resilience” may also be of interest to some readers.


Disaster Relief 2.0: Between a Signac and a Picasso

The United Nations Foundation, Vodafone Foundation, OCHA and my “alma mater” the Harvard Humanitarian Initiative just launched an important report that seeks to chart the future of disaster response based on critical lessons learned from Haiti. The report, entitled “Disaster Relief 2.0: The Future of Information Sharing in Humanitarian Emergencies,” builds on a previous UN/Vodafone Foundation Report co-authored by Diane Coyle and myself just before the Haiti earthquake: “New Technologies in Emergencies and Conflict: The Role of Information and Social Networks.”

The authors of the new study begin with a warning: “this report sounds an alarm bell. If decision makers wish to have access to (near) real-time assessments of complex emergencies, they will need to figure out how to process information flows from many more thousands of individuals than the current system can handle.” In any given crisis, “everyone has a piece of information, everyone has a piece of that picture.” And more want to share their piece of the picture. So part of the new challenge lies in how to collect and combine multiple feeds of information such that the result paints a coherent and clear picture of an evolving crisis situation. What we need is a Signac, not a Picasso.

The former, Paul Signac, is known for using “pointillism,” a technique in which “small, distinct dots of pure color are applied in patterns to form an image.” Think of these dots as data points drawn from diverse palettes but combined to depict an appealing and consistent whole. In contrast, Pablo Picasso’s paintings from his Cubism and Surrealism period often resemble unfinished collages of fragmented objects. A Picasso gives the impression of impossible puzzle pieces in contrast to the single legible harmony of a Signac.

This Picasso effect, or “information fragmentation” as the humanitarian community calls it, was one of the core information management challenges that the humanitarian community faced in Haiti: “the division of data resources and analysis into silos that are difficult to aggregate, fuse, or otherwise reintegrate into composite pictures.” This plagued information management efforts between and within UN clusters, which made absorbing new and alternative sources of information–like crowdsourced SMS reports–even less possible.

These new information sources exist in part thanks to new players in the disaster response field, the so-called Volunteer Technical Communities (VTCs). This shift towards a more multi-polar system of humanitarian response brings both new opportunities and new challenges. One way to overcome “information fragmentation” and create a Signac is for humanitarian organizations and VTCs to work more closely together. Indeed, as “volunteer and technical communities continue to engage with humanitarian crises they will increasingly add to the information overload problem. Unless they can become part of the solution.” This is in large part why we launched the Standby Volunteer Task Force at the 2010 International Conference on Crisis Mapping (ICCM 2010): to avoid information overload by creating a common canvas and style between volunteer crisis mappers and the humanitarian community.

What is perhaps most striking about this new report is the fact that it went to press the same month that two of the largest crisis mapping operations since Haiti were launched, namely the Libya and Japan Crisis Maps. One could already write an entirely new UN/Vodafone Foundation Report on just the past 3 months of crisis mapping operations. The speed with which learning and adaptation is happening in some VTCs is truly astounding. As I noted in this earlier blog post, “Crisis Mapping Libya: This is no Haiti“, we have come a long way since the Haiti response. Indeed, lessons from last year have been identified, they have been learned and operationally applied by VTCs like the Task Force. The fact that OCHA formally requested activation of the Task Force to provide a live crisis map of Libya just months after the Task Force was launched is a clear indication that we are on the right track. This is no Picasso.

Referring to lessons learned in Haiti will continue to be important, but as my colleague Nigel Snoad has noted, Haiti represents an outlier in terms of disasters. We are already learning new lessons and implementing better practices in response to crises that couldn’t be more different than Haiti, e.g., crisis mapping hostile, non-permissive environments like Egypt, Sudan and Libya. In Japan, we are also learning how a more hierarchical society with a highly developed and media rich environment presents a different set of opportunities and challenges for crisis mapping. This is why VTCs will continue to be at the forefront of Disaster 2.0 and why reports like this one are so key: they clearly show that a Signac is well within our reach if we continue working together.

MDG Monitor: Combining GIS and Network Analysis

I had some fruitful conversations with colleagues at the UN this week and learned about an interesting initiative called the MDG Monitor. The platform is being developed in collaboration with the Parsons Institute for Information Mapping (PIIM).

Introduction

The purpose of the MDG Monitor is to provide a dynamic and interactive mapping platform to visualize complex data and systems relevant to the Millennium Development Goals (MDGs). The team is particularly interested in having the MDG Monitor facilitate the visualization of linkages, connections and relationships between the MDGs and underlying indicators: “We want to understand how complex systems work.”

The icons above represent the 8 development goals.

The MDG Monitor is thus designed to be a “one-stop-shop for information on progress towards the MDGs, globally and at the country level.” The platform is for “policymakers, development practitioners, journalists, students and others interested in learning about the Goals and tracking progress toward them.”

The platform is under development but I saw a series of compelling mock-ups and very much look forward to testing the user interface when the tool becomes public. I was particularly pleased to learn about the team’s interest in visualizing both “high frequency” and “low frequency” data, the former being rapidly changing data and the latter slowly changing data.

In addition, the platform will allow users to drill down below the country admin level and overlay multiple layers. As one colleague mentioned, “We want to provide policy makers with the equivalent of a magnifying glass.”

Network Analysis

Perhaps most impressive but challenging is the team’s interest in combining spatial analysis with social network analysis (SNA), for example, visualizing data or projects based not only on their geographic relationships but also on their functional relationships. I worked on a similar project at the Santa Fe Institute (SFI) back in 2006, when colleagues and I developed an Agent Based Model (ABM) to simulate internal displacement of ethnic groups following a crisis.


Agent Based Model of Crisis Displacement

As the screenshot above depicts, we were interested in understanding how groups would move based on their geographical and ethnic or social ties. In any case, if the MDG Monitor team can combine the two types of dynamic maps, this will certainly be a notable advance in the field of crisis mapping.
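For readers curious what combining geographic and social ties can mean computationally, here is a heavily simplified sketch in the spirit of that model, not the actual SFI model: agents are pushed away from a crisis point while being pulled toward the centroid of their own group, so the resulting displacement reflects both geography and group membership. All parameters are illustrative assumptions.

```python
# Minimal, illustrative agent-based sketch (not the actual SFI model):
# agents flee a crisis point while being pulled toward their own group,
# so displacement reflects both geographic and social ties.
import random

random.seed(1)

CRISIS = (0.5, 0.5)                    # location of the shock
PULL_GROUP, PUSH_CRISIS = 0.10, 0.05   # assumed weights of the two forces

agents = [{"group": g, "pos": [random.random(), random.random()]}
          for g in ("A", "B") for _ in range(50)]

def centroid(members):
    """Mean position of a set of agents."""
    xs = [a["pos"][0] for a in members]
    ys = [a["pos"][1] for a in members]
    return sum(xs) / len(xs), sum(ys) / len(ys)

def step(agents):
    """Move each agent toward its group centroid and away from the crisis."""
    for a in agents:
        cx, cy = centroid([b for b in agents if b["group"] == a["group"]])
        x, y = a["pos"]
        x += PULL_GROUP * (cx - x) - PUSH_CRISIS * (CRISIS[0] - x)
        y += PULL_GROUP * (cy - y) - PUSH_CRISIS * (CRISIS[1] - y)
        a["pos"] = [x, y]

for _ in range(20):
    step(agents)

print("Group A centroid:", centroid([a for a in agents if a["group"] == "A"]))
print("Group B centroid:", centroid([a for a in agents if a["group"] == "B"]))
```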

Patrick Philippe Meier

Global Impact and Vulnerability Alert System (GIVAS): A New Early Warning Initiative?

Update: This project is now called UN Global Pulse.

UN Secretary-General Ban Ki-Moon is calling for better real-time data on the impact of the financial crisis on the poor. To this end, he is committing the UN to the development of a Global Impact and Vulnerability Alert System (or GIVAS) in the coming months.  While I commend the initiative’s focus on innovative data collection, I’m concerned that this is yet another “early warning system” that will fail to bridge alert and operational response.

The platform is being developed in collaboration with the World Bank and will use real-time data to assess the vulnerability of particular countries or populations. “This will provide the evidence needed to determine specific and appropriate responses,” according to UNDP. UN-Habitat opines that the GIVAS will be a “vital tool to know what is happening and to hold ourselves accountable to those who most need our help.”

According to sources, the objective for the GIVAS is to “ensure that in times of global crisis, the fate of the poorest and most vulnerable populations is not marginalized in the international community’s response. By closely monitoring emerging and dramatically worsening vulnerabilities on the ground, the Alert would fill the information gap that currently exists between the point when a global crisis hits vulnerable populations and when information reaches decision makers through official statistical channels.”

GIVAS will draw on both high frequency and low frequency indicators:

“The lower frequency contextual indicators would allow the Alert system to add layers of analysis to the real time “evidence” generated by the high frequency indicators. Contextual indicators would provide information, for example, on a country’s capacity to respond to a crisis (resilience) or its exposure to a crisis (transmission channels). Contextual indicators could be relatively easily drawn from existing data bases. Given their lesser crisis sensitivity, they are generally collected less frequently without losing significantly in relevance.”

“The high frequency indicators would allow the system to pick up significant and immediately felt changes in vulnerability at sentinel sites in specific countries. This data would constitute the heart of the Alert system, and would provide the real-time evidence – both qualitative and quantitative – of the effects of external shocks on the most vulnerable populations. Data would be collected by participating partners and would be uploaded into the Alert’s technical platform.”

“The pulse indicators would have to be highly crisis sensitive (i.e. provide early signals that there is a significant impact), should be available in high periodicity and should be able to be collected with relative ease and at a reasonable cost. Data would be collected using a variety of methodologies, including mobile communication tools (i.e. text messaging), quick impact assessment surveys, satellite imagery and sophisticated media tracking systems.”

The GIVAS is also expected to use natural language processing (NLP) to extract data from the web. In addition, GIVAS will emphasize the importance of data presentation and possibly draw on Gapminder’s Trendalyzer software.
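To make the high/low frequency distinction more concrete, here is a hedged sketch of how fast-moving pulse indicators could be blended with slower contextual indicators into a single alert score. This is my own illustration, not GIVAS's methodology; the indicator names, weights and threshold are all assumptions.

```python
# Illustrative sketch only: combine fast-moving "pulse" indicators with
# slow-moving contextual indicators into a single vulnerability alert score.
# Indicator names, weights and the alert threshold are assumptions.

def zscore(value, mean, std):
    """How unusual a reading is relative to its historical baseline."""
    return (value - mean) / std if std else 0.0

def alert_score(pulse, context, weights=(0.7, 0.3)):
    """Weighted blend of pulse anomalies and contextual vulnerability."""
    pulse_signal = sum(zscore(*p) for p in pulse) / len(pulse)
    context_signal = sum(context.values()) / len(context)
    w_pulse, w_context = weights
    return w_pulse * pulse_signal + w_context * context_signal

# Pulse indicators: (latest value, baseline mean, baseline std deviation)
pulse = [
    (145, 100, 15),      # e.g. weekly food-price mentions from an SMS survey
    (0.32, 0.20, 0.05),  # e.g. share of negative-sentiment media reports
]
# Contextual indicators: pre-computed 0-1 vulnerability ratings
context = {"fiscal_space": 0.6, "import_dependence": 0.8}

score = alert_score(pulse, context)
print(round(score, 2), "ALERT" if score > 1.5 else "watch")
```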

There’s a lot more to say on GIVAS and I will definitely blog more about this new initiative as more information becomes public. My main question at this point is simple: How will GIVAS seek to bridge the alert-response gap? Oh, and a related question: has the GIVAS team reviewed past successes and failures of early warning/response systems?

Patrick Philippe Meier

UN Sudan Information Management Working (Group)

I’m back in the Sudan to continue my work with the UNDP’s Threat and Risk Mapping Analysis (TRMA) project. UN agencies typically suffer from what a colleague calls “Data Hugging Disorder (DHD),” i.e., they rarely share data. This is generally the rule, not the exception.

UN Exception

There is an exception, however: the recently established UN Information Management Working Group (IMWG) in the Sudan. The general goal of the IMWG is to “facilitate the development of a coherent information management approach for the UN Agencies and INGOs in Sudan in close cooperation with local authorities and institutions.”

More specifically, the IMWG seeks to:

  1. Support and advise the UNDAF Technical Working Groups and Work Plan sectors in the accessing and utilization of available data for improved development planning and programming;
  2. Develop, or advise on the development of, a Sudan-specific tool, or set of tools, to support decentralized information-sharing and common GIS mapping, in such a way that it will be consistent with the DevInfo system development, and can eventually be adopted/integrated as a standard plug-in for the same.

To accomplish these goals, the IMWG will collectively assume a number of responsibilities including the following:

  • Agree on information sharing protocols, including modalities for updating shared information;
  • Review current information management mechanisms to ensure a coherent approach.

The core members of the working group include: IOM, WHO, FAO, UNICEF, UNHCR, UNFPA, WFP, OCHA and UNDP.

Information Sharing Protocol

These members recently signed and endorsed an “Information Sharing Protocol”. The protocol sets out the preconditions, the responsibilities and the rights of the IMWG members for sharing, updating and accessing the data of the information providers.

With this protocol, each member commits to sharing specific datasets, in specific formats and at specific intervals. The data provided is classified as either public access or classified access. The latter is further disaggregated into three categories:

  1. UN partners only;
  2. IMWG members only;
  3. [Agency/group] only.

There is also a restricted access category, which is granted on a case-by-case basis only.
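As a purely illustrative sketch of how such access categories might be encoded in an information-sharing tool (the category names mirror the protocol above, but the logic is my assumption, not the IMWG's actual implementation):

```python
# Illustrative sketch: enforce the IMWG data classification described above.
# Category names mirror the protocol; the implementation is an assumption.

def may_access(dataset, requester):
    """Return True if the requester may see the dataset under its classification."""
    category = dataset["classification"]
    if category == "public":
        return True
    if category == "un_partners_only":
        return requester["is_un_partner"]
    if category == "imwg_members_only":
        return requester["is_imwg_member"]
    if category == "agency_only":
        return requester["agency"] == dataset["owner"]
    return False  # "restricted": granted case by case, outside this sketch

dataset = {"name": "settlement_gps_points",
           "classification": "imwg_members_only", "owner": "UNDP"}
requester = {"agency": "WFP", "is_un_partner": True, "is_imwg_member": True}
print(may_access(dataset, requester))  # -> True
```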

UNDP/TRMA’s Role

UNDP’s role (via TRMA) in the IMWG is to technically support the administration of information-sharing between IMWG members. More specifically, UNDP will provide ongoing technical support for the development and upgrading of the IMWG database tool in accordance with the needs of the Working Group.

In addition, UNDP’s role is to receive data updates, to update the IMWG tool and to circulate data according to the classification of access determined by individual contributing agencies. Might a more seamless information-sharing approach work, one in which UNDP does not have to be the repository of the data, let alone manually update the information?

In any case, the very existence of a UN Information Management Working Group in the Sudan suggests that Data Hugging Disorders (DHDs) can be cured.

Patrick Philippe Meier