Tag Archives: Information

The Value of Timely Information During Disasters (Measured in Hours)

In the 2005 World Disaster Report (PDF), the International Federation of the Red Cross states unequivocally that access to information during disasters is just as important as access to food, water, shelter and medication. Of all these commodities, however, crisis information is the most perishable. In other words, the “sell-by” or “use-by” date of information for decision-making during a crisis is very short. Put simply: information rots fast, especially in the field (assuming that information even exists in the first place). But how fast exactly, as measured in hours and days?

[Figure: FEMA graph showing how the value of information for disaster decision-making depreciates over time]

Enter this handy graph from FEMA, based on a large survey of emergency management professionals across the US. As you’ll note, there is a very clear cut-off at 72 hours post-disaster, by which time the value of information for decision-making purposes has depreciated by 60% to 85%. Even at 48 hours, information has lost 35% to 65% of its initial tactical value. Disaster responders don’t have the luxury of waiting around for actionable information to inform their decisions during the first 24-72 hours after a disaster, so they will make those decisions whether or not timely data is available to guide them.
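For readers who like to tinker, here is a minimal sketch in Python that turns the depreciation figures above into a toy decay curve. The anchor values are rough midpoints of the ranges quoted above, not FEMA’s actual curve, so treat the output as purely illustrative.

```python
# Toy model of how the tactical value of crisis information decays with time.
# Anchor points are rough midpoints of the ranges quoted above (FEMA survey);
# they are illustrative only, not FEMA's actual curve.

ANCHORS = [
    (0, 1.00),    # at impact, full value
    (24, 0.775),  # ~10-35% depreciation by 24h -> ~77.5% value remaining
    (48, 0.50),   # ~35-65% depreciation by 48h
    (72, 0.275),  # ~60-85% depreciation by 72h
]

def information_value(hours_since_disaster: float) -> float:
    """Linearly interpolate the remaining decision-making value of information."""
    if hours_since_disaster <= 0:
        return 1.0
    for (t0, v0), (t1, v1) in zip(ANCHORS, ANCHORS[1:]):
        if hours_since_disaster <= t1:
            frac = (hours_since_disaster - t0) / (t1 - t0)
            return v0 + frac * (v1 - v0)
    return ANCHORS[-1][1]  # assume a floor value beyond 72 hours

if __name__ == "__main__":
    for h in (6, 12, 24, 48, 72):
        print(f"{h:>2}h after disaster: ~{information_value(h):.0%} of initial value")
```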

In a way, the graph also serves as a “historical caricature” of the availability of crisis information over the past 25 years:


During the early 1990s, when the web and mobile phones were still in their infancy, it often took weeks to collect detailed information on damage and needs following major disasters. Towards the end of the 2000s, the rapid growth of smartphones and social media, the increasing availability of satellite imagery, and improvements in humanitarian information management systems shortened the time it took to collect crisis information. One could say we crossed the 72-hour barrier on January 12, 2010, when a devastating earthquake struck Haiti. Five years later, the Nepal earthquake in April 2015 may have seen a number of formal responders crossing the 48-hour threshold.

While these observations are at best the broad brushstrokes of a caricature, the continued need for timely information is very real, especially for tactical decision making in the field. This is why we need to shift further left in the FEMA graph. Of course, information that is older than 48 hours is still useful, particularly for decision-makers at headquarters who do not need to make tactical decisions.


In fact, the real win would be to generate and access actionable information within the first 12 to 24 hours. By the end of the first 24 hours, the value of information has “only” depreciated by 10% to 35%. So how do we get to the top left corner of the graph? How do we get to “Win”?


By integrating new and existing sensors and combining these with automated analysis solutions. New sensors: like Planet Labs’ growing constellation of micro-satellites, which will eventually image the entire planet once every 24 hours at around 3-meter resolution. And new automated analysis solutions: powered by crowdsourcing and artificial intelligence (AI), in particular deep learning techniques, to process the Big Data generated by these “neo-sensors” in near real-time, including multimedia posted to social media sites and the Web in general.
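To give a very rough flavor of what “automated analysis” of crisis multimedia can look like, here is a sketch that uses a pretrained image backbone (PyTorch/torchvision) with a placeholder “damage / no damage” head. The label set and the classification head are assumptions; any operational system would need to fine-tune on labeled disaster imagery, which is not shown here.

```python
# Minimal sketch: score social-media images for possible disaster damage using a
# pretrained backbone. The "damage"/"no_damage" head is untrained and purely
# illustrative; a real pipeline would fine-tune it on labeled disaster imagery.
import torch
import torch.nn as nn
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT      # pretrained ImageNet weights
preprocess = weights.transforms()              # matching resize/normalize transforms

model = models.resnet18(weights=weights)
model.fc = nn.Linear(model.fc.in_features, 2)  # hypothetical damage / no-damage head
model.eval()

LABELS = ["no_damage", "damage"]               # assumed label set

def score_image(path: str) -> dict:
    """Return (illustrative) class probabilities for one image file."""
    img = Image.open(path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)        # shape: [1, 3, H, W]
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]
    return dict(zip(LABELS, probs.tolist()))

# Hypothetical usage: print(score_image("tweet_photo_001.jpg"))
```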

The need for baseline data is no less important, for comparative analysis and change detection purposes. As a colleague of mine recently noted, the value of baseline information is at an all-time high just before a major disaster, but it too depreciates post-disaster.


Of course, access to real-time information does not make a humanitarian organization a real-time response organization. There are always delays regardless of how timely (or not) the information is (assuming it is even available). But the real first responders are the local communities. So the real win here would be to make this real-time analysis directly available to local partners in disaster-prone countries. They often have more of an immediate incentive to generate and consume timely, tactical information. I described this information flow as “crowdfeeding” years ago.

In sum, the democratization of crisis information is key (keeping in mind data-protection protocols). But said democratization isn’t enough. The know-how and technologies to generate and analyze crisis information during the first 12-24 hours must also be democratized. The local capacity to respond quickly and effectively must exist; otherwise timely, tactical information will just rot away.


I’d be very interested to hear from human rights practitioners to get their thoughts on how and when the above crisis information framework does, and does not, apply to human rights monitoring.

A 10 Year Vision: Future Trends in Geospatial Information Management


The United Nations Committee of Experts on Global Geospatial Information Management (UN-GGIM) recently published the second edition of Future Trends in Geospatial Information Management. I blogged about the first edition here. Below are some of the excerpts I found interesting or noteworthy. The report itself is a 50-page document (PDF, 7.1 MB).

  • The integration of smart technologies and efficient governance models will increase and the mantra of ‘doing more for less’ is more relevant than ever before.
  • There is an increasing tendency to bring together data from multiple sources: official statistics, geospatial information, satellite data, big data and crowdsourced data among them.
  • New data sources and new data collection technologies must be carefully applied to avoid a bias that favors countries that are wealthier and with established data infrastructures. The use of innovative tools might also favor those who have greater means to access technology, thus widening the gap between the ‘data poor’ and the ‘data rich’.
  • The paradigm of geospatial information is changing; no longer is it used just for mapping and visualization, but also for integrating with other data sources, data analytics, modeling and policy-making.
  • Our ability to create data is still, on the whole, ahead of our ability to solve complex problems by using the data. Addressing this problem will rely on the development of both Big Data technologies and techniques (that is, technologies that enable the analysis of vast quantities of information within usable and practical timeframes) and artificial intelligence (AI) or machine learning technologies that will enable the data to be processed more efficiently.
  • In the future we may expect society to make increasing use of autonomous machines and robots, thanks to a combination of an aging population, rapid technological advancement in unmanned autonomous systems and AI, and the sheer volume of data being beyond a human’s ability to process it.
  • Developments in AI are beginning to transform the way machines interact with the world. Up to now machines have mainly carried out well-defined tasks such as robotic assembly, or data analysis using pre-defined criteria, but we are moving into an age where machine learning will allow machines to interact with their environment in more flexible and adaptive ways. This is a trend we expect to see major growth in over the next 5 to 10 years as the technologies–and understanding of the technologies–become more widely recognized.
  • Processes based on these principles, and the learning of geospatial concepts (locational accuracy, precision, proximity etc.), can be expected to improve the interpretation of aerial and satellite imagery, by improving the accuracy with which geospatial features can be identified.
  • Tools may run persistently on continuous streams of data, alerting interested parties to new discoveries and events. Another branch of AI that has long been of interest has been the expert system, in which the knowledge and experience of human experts is taught to a machine.
  • The principle of collecting data once only at the highest resolution needed, and generalizing ‘on the fly’ as required, can become reality.  Developments of augmented and virtual reality will allow humans to interact with data in new ways.
  • The future of data will not be the conflation of multiple data sources into a single new dataset, rather there will be a growth in the number of datasets that are connected and provide models to be used across the world.
  • Efforts should be devoted to integrating involuntary sensors – mobile phones, RFID sensors and so on – which, aside from their primary purpose, may capture information that was previously difficult to collect. This leads to more real-time information being generated.
  • Many developing nations have leapfrogged in areas such as mobile communications, but the lack of core processing power may inhibit some from taking advantage of the opportunities afforded by these technologies.
  • Disaggregating data from high levels down to small-area geographies will increase the need to evaluate and adopt alternative statistical modeling techniques to ensure that statistics can be produced at the right geographic level, whilst still maintaining the quality to allow them to be reported against.
  • The information generated through use of social media and the use of everyday devices will further reveal patterns and the prediction of behaviour. This is not a new trend, but as the use of social media for providing real-time information and expanded functionality increases it offers new opportunities for location based services.
  • There seems to have been a breakthrough from 2D to 3D information, and this is becoming more prevalent. Software already exists to process this information, and to incorporate the time information to create 4D products and services. It is recognized that a growth area over the next five to ten years will be the use of 4D information in a wide variety of industries.
  • The temporal element is crucial to a number of applications such as emergency service response, for simulations and analytics, and the tracking of moving objects. 4D is particularly relevant in the context of real-time information; this has been linked to virtual reality technologies.
  • Greater coverage, quality and resolution have been achieved through the availability of low-cost, affordable satellite systems and unmanned aerial vehicles (UAVs). This has not only increased the speed of collection and acquisition in remote areas, but also reduced the cost barriers to entry.
  • UAVs can provide real-time information to decision-makers on the ground, providing, for example, information for disaster management. They are an invaluable tool when additional information is needed to improve vital decision-making capabilities, and such use of UAVs will increase.
  • The licensing of data in an increasingly online world is proving to be very challenging. There is a growth in organisations adopting simple machine-readable licences, but these have not fully resolved the issues around data licensing. Emerging technologies such as web services and the growth of big data solutions drawn from multiple sources will continue to create challenges for the licensing of data.
  • A wider issue is the training and education of a broader community of developers and users of location-enabled content. At the same time there is a need for more automated approaches to ensuring the non-geospatial professional community gets the right data at the right time. Investment in formal training in the use of geospatial data and its implementation is still indispensable.
  • Both ‘open’ and ‘closed’ VGI data play an important and necessary part in the wider data ecosystem.

Quantifying Information Flow During Emergencies

I was particularly pleased to see this study appear in the top-tier journal, Nature. (Thanks to my colleague Sarah Vieweg for flagging). Earlier studies have shown that “human communications are both temporally & spatially localized following the onset of emergencies, indicating that social propagation is a primary means to propagate situational awareness.” In this new study, the authors analyze crisis events using country-wide mobile phone data. To this end, they also analyze the communication patterns of mobile phone users outside the affected area. So the question driving this study is this: how do the communication patterns of non-affected mobile phone users differ from those affected? Why ask this question? Understanding the communication patterns of mobile phone users outside the affected areas sheds light on how situational awareness spreads during disasters.

[Figure: change in call volume over time for G0 and G1 users across three crisis events and one non-emergency event]

The graphs above simply depict the change in call volume for three crisis events and one non-emergency event for the two types of mobile phone users. The set of users directly affected by a crisis is labeled G0, while the users they contact during the emergency are labeled G1. Note that G1 users are not affected by the crisis. Since the study seeks to assess how G1 users change their communication patterns following a crisis, one logical question is this: does the call volume of G1 users increase like that of G0 users? The graphs above reveal that G1 and G0 users have instantaneous and corresponding spikes for crisis events. This is not the case for the non-emergency event.

“As the activity spikes for G0 users for emergency events are both temporally and spatially localized, the communication of G1 users becomes the most important means of spreading situational awareness.” To quantify the reach of situational awareness, the authors study the communication patterns of G1 users after they receive a call or SMS from the affected set of G0 users. They find 3 types of communication patterns for G1 users, as depicted below.

[Figure: the three communication patterns of G1 users after contact from G0 users]

Pattern 1: G1 users call back G0 users (orange edges). Pattern 2: G1 users call forward to G2 users (purple edges). Pattern 3: G1 users call other G1 users (green edges). Which of these 3 patterns is most pronounced during a crisis? Pattern 1, call backs, constitutes 25% of all G1 communication responses. Pattern 2, call forwards, constitutes 70% of communications. Pattern 3, calls between G1 users, represents only 5% of all communications. This means that the spikes in call volumes shown in the above graphs are overwhelmingly driven by Patterns 1 and 2: call backs and call forwards.

The graphs below show call volumes by communication patterns 1 and 2. In these graphs, Pattern 1 is the orange line and Pattern 2 the dashed purple line. In all three crisis events, Pattern 1 (call backs) has clear volume spikes. “That is, G1 users prefer to interact back with G0 users rather than contacting with new users (G2), a phenomenon that limits the spreading of information.” In effect, Pattern 1 is a measure of reciprocal communication and indeed social capital, “representing correspondence and coordination calls between social neighbors.” In contrast, Pattern 2 measures the “dissemination of situational awareness, corresponding to information cascades that penetrate the underlying social network.”

[Figure: call volumes for Pattern 1 (call backs) and Pattern 2 (call forwards) across the three crisis events]

The histogram below shows average levels of reciprocal communication for the 4 events under study. These results clearly show a spike in reciprocal behavior for the three crisis events compared to the baseline. The opposite is true for the non-emergency event.

[Figure: histogram of average reciprocal communication for the four events]

In sum, a crisis early warning system based on communication patterns should seek to monitor changes in the following two indicators: (1) Volume of Call Backs; and (2) Deviation of Call Backs from baseline. Given that access to mobile phone data is near-impossible for the vast majority of academics and humanitarian professionals, one question worth exploring is whether similar communication dynamics can be observed on social networks like Twitter and Facebook.
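For the technically inclined, the two indicators above are straightforward to compute once call records are available. Below is a minimal sketch assuming records are simple (hour, caller, callee) tuples and that the set of affected (G0) users is already known; both assumptions are mine, not the study’s.

```python
# Sketch of the two early-warning indicators suggested above:
# (1) volume of call backs (G1 -> G0) per time window, and
# (2) deviation of that volume from a pre-disaster baseline (z-score).
# Call records are assumed to be (hour, caller_id, callee_id) tuples.
from collections import Counter
from statistics import mean, stdev

def call_back_volume(calls, g0):
    """Hourly counts of calls from G1 users back to affected G0 users."""
    # G1 = users contacted by someone in G0, excluding G0 itself
    g1 = {callee for _, caller, callee in calls if caller in g0} - g0
    return Counter(hour for hour, caller, callee in calls
                   if caller in g1 and callee in g0)

def deviation_from_baseline(current_count, baseline_counts):
    """Z-score of the current call-back volume against baseline hourly volumes."""
    mu, sigma = mean(baseline_counts), stdev(baseline_counts)
    return (current_count - mu) / sigma if sigma else 0.0

# Hypothetical usage:
# counts = call_back_volume(calls, g0={"user_17", "user_42"})
# z = deviation_from_baseline(counts[hour_of_interest], baseline_hourly_counts)
```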


Can Official Disaster Response Apps Compete with Twitter?

There are over half-a-billion Twitter users, with an average of 135,000 new users signing up on a daily basis (1). Can emergency management and disaster response organizations win over some Twitter users by convincing them to use their apps in addition to Twitter? For example, will FEMA’s smartphone app gain as much “market share”? The app’s new crowdsourcing feature, “Disaster Reporter,” allows users to submit geo-tagged disaster-related images, which are then added to a public crisis map. So the question is, will more images be captured via FEMA’s app or from Twitter users posting Instagram pictures?


This question is perhaps poorly stated. While FEMA may not get millions of users to share disaster-related pictures via their app, it is absolutely critical for disaster response organizations to explicitly solicit crisis information from the crowd. See my blog post “Social Media for Emergency Management: Question of Supply and Demand” for more information on the importance of demand-driven crowdsourcing. The advantage of soliciting crisis information via a smartphone app is that the sourced information is structured and thus easily machine readable. For example, the pictures taken with FEMA’s app are automatically geo-tagged, which means they can be automatically mapped if need be.

While many, many more pictures may be posted on Twitter, these may be more difficult to map. The vast majority of tweets are not geo-tagged, which means more sophisticated computational solutions are necessary. Instagram pictures are geo-tagged, but this information is not publicly available. So smartphone apps are a good way to overcome these challenges. But we shouldn’t overlook the value of pictures shared on Twitter. Many can be geo-tagged, as demonstrated by the Digital Humanitarian Network’s efforts in response to Typhoon Pablo. Moreover, about 40% of pictures shared on Twitter in the immediate aftermath of the Oklahoma Tornado had geographic data. In other words, while the FEMA app may have 10,000 users who submit a picture during a disaster, Twitter may have 100,000 users posting pictures. And even if only 40% of the latter pictures are geo-tagged, this would still mean 40,000 pictures compared to FEMA’s 10,000. Recall that over half-a-million Instagram pictures were posted during Hurricane Sandy alone.

The main point, however, is that FEMA could also solicit pictures via Twitter and ask eyewitnesses to simply geo-tag their tweets during disasters. They could also speak with Instagram and perhaps ask them to share geo-tag data for solicited images. These strategies would render tweets and pictures machine-readable and thus automatically mappable, just like the pictures coming from FEMA’s app. In sum, the key issue here is one of policy, and the best solution is to leverage multiple platforms to crowdsource crisis information. The technical challenge is how to deal with the high volume of pictures shared in real-time across multiple platforms. This is where microtasking comes in and why MicroMappers is being developed. For tweets and images that do not contain geo-tagged data, MicroMappers has a microtasking app specifically developed to crowdsource the manual tagging of images.
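To make the machine-readability point concrete, here is a tiny sketch that partitions incoming picture posts into those that can be mapped automatically (they carry coordinates, as app submissions and geo-tagged tweets do) and those that would need to be routed to a microtasking queue. The field names are my own assumptions, not any platform’s actual schema.

```python
# Partition incoming picture posts into "auto-mappable" (coordinates present)
# and "needs manual geo-tagging" (route to a microtasking queue).
# The dict fields below are illustrative, not Twitter's or FEMA's actual schema.

def partition_posts(posts):
    auto_mappable, needs_microtasking = [], []
    for post in posts:
        coords = post.get("coordinates")   # e.g. (lat, lon) or None
        if post.get("image_url") and coords:
            auto_mappable.append(post)
        elif post.get("image_url"):
            needs_microtasking.append(post)
    return auto_mappable, needs_microtasking

posts = [
    {"source": "app",     "image_url": "a.jpg", "coordinates": (35.47, -97.52)},
    {"source": "twitter", "image_url": "b.jpg", "coordinates": None},
]
mappable, queue = partition_posts(posts)
print(len(mappable), "auto-mappable /", len(queue), "sent to microtasking")
```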

In sum, there are trade-offs. The good news is that we don’t have to choose one solution over the other; they are complementary. We can leverage both a dedicated smartphone app and very popular social media platforms like Twitter and Facebook to crowdsource the collection of crisis information. Either way, a demand-driven approach to soliciting relevant information will work best, both for smartphone apps and social media platforms.


 

Tweets, Crises and Behavioral Psychology: On Credibility and Information Sharing

How we feel about the content we read on Twitter influences whether we accept and share it—particularly during disasters. My colleague Yasuaki Sakamoto at the Stevens Institute of Technology (SIT) and his PhD students analyzed this dynamic more closely in a recent study entitled “Perspective Matters: Sharing of Crisis Information in Social Media”. Using a series of behavioral psychology experiments, they examined “how individuals share information related to the 9.0 magnitude earthquake, which hit northeastern Japan on March 11th, 2011.” Their results indicate that individuals were more likely to share crisis information (1) when they imagined that they were close to the disaster center, (2) when they were thinking about themselves, and (3) when they experienced negative emotions as a result of reading the information.


Yasu and team are particularly interested in “the effects of perspective taking – considering self or other – and location on individuals’ intention to pass on information in a Twitter-like environment.” In other words: does empathy influence information sharing (retweeting) during crises? Does thinking of others in need eliminate the individual differences in perception that arise when thinking of one’s self instead? The authors hypothesize that “individuals’ information sharing decision can be influenced by (1) their imagined proximity, being close to or distant from the disaster center, (2) the perspective that they take, thinking about self or other, and (3) how they feel about the information that they are exposed to in social media, positive, negative or neutral.”

To test these hypotheses, Yasu and company collected one year’s worth of tweets posted by two major news agencies and five individuals following the Japan Earthquake of March 11, 2011. They randomly sampled 100 media tweets and 100 tweets produced by individuals, resulting in a combined sample of 200 tweets. Sampling from these two sources (media vs user-generated) enabled Yasu and team to test whether people treat the resulting content differently. Next, they recruited 468 volunteers from Amazon’s Mechanical Turk and paid them a nominal fee for their participation in a series of three behavioral psychology experiments.

In the first experiment, the “control” condition, volunteers read through the list of tweets and simply rated the likelihood of sharing a given tweet. The second experiment asked volunteers to read through the list and imagine they were in Fukushima. They were then asked to document their feelings and rate whether they would pass along a given message. Experiment three introduced a hypothetical person, John, based in Fukushima and prompted users to describe how each tweet might make John feel and rate whether they would share the tweet.
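For readers curious how sharing-likelihood ratings are typically compared across conditions like these, here is a minimal sketch using a one-way ANOVA. The ratings in the sketch are made-up placeholders and this is not the authors’ actual analysis or data.

```python
# Illustrative comparison of sharing-likelihood ratings across three experimental
# conditions (control, imagine-self-in-Fukushima, imagine-John-in-Fukushima).
# The ratings below are fabricated placeholders, NOT the study's data.
from scipy import stats

control    = [3, 4, 2, 5, 3, 4]
self_cond  = [5, 6, 5, 4, 6, 5]
other_cond = [4, 4, 5, 3, 4, 5]

f_stat, p_value = stats.f_oneway(control, self_cond, other_cond)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```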


The results of these experiments suggest that “people are more likely to spread crisis information when they think about themselves in the disaster situation. During disasters, then, one recommendation we can give to citizens would be to think about others instead of self, and think about others who are not in the disaster center. Doing so might allow citizens to perceive the information in a different way, and reduce the likelihood of impulsively spreading any seemingly useful but false information.” Yasu and his students also found that “people are more likely to share information associated with negative feelings.” Since rumors tend to evoke negativity, they spread more quickly. The authors entertain possible ways to manage this problem, such as “surrounding negative messages with positive ones,” for example.

In conclusion, Yasu and his students consider the design principles that ought to be considered when designing social media systems to verify and counter rumors. “In practice, designers need to devote significant efforts to understanding the effects of perspective taking and location, as shown in the current work, and develop techniques to mitigate negative influences of unproved information in social media.”


For more on Yasu’s work, see:

  • Using Crowdsourcing to Counter False Rumors on Social Media During Crises [Link]

Using Rapportive for Source and Information Verification

I’ve been using Rapportive for several weeks now and have found the tool rather useful for assessing the trustworthiness of a source. Rapportive is an extension for Gmail that allows you to automatically visualize an email sender’s complete profile information right inside your inbox.

So now, when receiving emails from strangers, I can immediately see their profile picture, short bio, twitter handle (including latest tweets), links to their Facebook page, Google+ account, LinkedIn profile, blog, SkypeID, recent pictures they’ve posted, etc. As explained in my blog posts on information forensics, this type of meta-data can be particularly useful when assessing the importance or credibility of a source. To be sure, having a source’s entire digital footprint on hand can be quite revealing (as marketers know full well). Moreover, this type of meta-data was exactly what the Standby Volunteer Task Force was manually looking for when it sought to verify the identity of volunteers during the Libya Crisis Map project with the UN last year.

Obviously, the use of Rapportive alone is not a silver bullet to fully determine the credibility of a source or the authenticity of a source’s identity. But it does add contextual information that can make a difference when seeking to better understand the reliability of an email. I’d be curious to know whether Rapportive will be available as a stand-alone platform in the future so it can be used outside of Gmail. A simple web-based search box that allows one to search by email address, twitter handle, etc., with the result being a structured profile of that individual’s entire digital footprint. Anyone know whether similar platforms already exist? They could serve as ideal plugins for platforms like CrisisTracker.

From Crowdsourcing Crisis Information to Crowdseeding Conflict Zones (Updated)

Friends Peter van der Windt and Gregory Asmolov are two of the sharpest minds I know when it comes to crowdsourcing crisis information and crisis response. So it was a real treat to catch up with them in Berlin this past weekend during the “ICTs in Limited Statehood” workshop. An edited book of the same title is due out next year and promises to be an absolute must-read for all interested in the impact of Information and Communication Technologies (ICTs) on politics, crises and development.

I blogged about Gregory’s presentation following last year’s workshop, so this year I’ll relay Peter’s talk on research design and methodology vis-a-vis the collection of security incidents in conflict environments using SMS. Peter and mentor Macartan Humphreys completed their Voix des Kivus project in the DRC last year, which ran for just over 16 months. During this time, they received 4,783 text messages on security incidents using the FrontlineSMS platform. These messages were triaged and rerouted to several NGOs in the Kivus as well as the UN Mission there, MONUSCO.

How did they collect this information in the first place? Well, they considered crowdsourcing but quickly realized this was the wrong methodology for their project, which was to assess the impact of a major conflict mitigation program in the region. (Relaying text messages to various actors on the ground was not initially part of the plan). They needed high-quality, reliable, timely, regular and representative conflict event-data for their monitoring and evaluation project. Crowdsourcing is obviously not always the most appropriate methodology for the collection of information—as explained in this blog post.

Peter explained the pros and cons of using crowdsourcing by sharing the framework above. “Knowledge” refers to the fact that only those who have knowledge of a given crowdsourcing project will know that participating is even an option. “Means” denotes whether or not an individual has the ability to participate. One would typically need access to a mobile phone and enough credit to send text messages to Voix des Kivus. In the case of the DRC, the size of subset “D” (no knowledge / no means) would easily dwarf the number of individuals comprising subset “A” (knowledge / means). In Peter’s own words:

“Crowdseeding brings the population (the crowd) from only A (what you get with crowdsourcing) to A+B+C+D: because you give phones & credit and you go to and inform the phone holders about the project. So the crowd increases from A to A+B+C+D. And then from A+B+C+D one takes a representative sample. So two important benefits. And then a third: the relationship with the phone holder: stronger incentive to tell the truth, and no bad people hacking into the system.”

In sum, Peter and Macartan devised the concept of “crowdseeding” to increase the crowd and render that subset a representative sample of the overall population. In addition, the crowdseeding methodology they developed generated more reliable information than crowdsourcing would have, and did so in a way that was safer and more sustainable.

Peter traveled to 18 villages across the Kivus and in each identified three representatives to serve as the eyes and ears of the village. These representatives were selected in collaboration with the elders and always included a female representative. They were each given a mobile phone and received extensive training. A code book was also shared which codified different types of security incidents. That way, the reps simply had to type the number corresponding to a given incident (or several numbers if more than one incident had taken place). Anyone in the village could approach these reps with relevant information, which would then be texted to Peter and Macartan.

The table above is the first page of the codebook. Note that the numerous security risks of doing this SMS reporting were discussed at length with each community before embarking on the selection of 3 village reps. Each community then voted on whether to participate despite the risks. Interestingly, not a single village voted against launching the project. However, Peter and Macartan chose not to scale the project beyond 18 villages for fear that it would get the attention of the militias operating in the region.

A local field representative would travel to the villages every two weeks or so to individually review the text messages sent by each representative and to verify whether these incidents had actually taken place by asking others in the village for confirmation. The fact that there were 3 representatives per village also made the triangulation of some text messages possible. Because the 18 villages were randomly selected as part of the randomized control trial (RCT) for the monitoring and evaluation project, the text messages were relaying a representative sample of information.

But what was the incentive? Why did a total of 54 village representatives from 18 villages send thousands of text messages to Voix des Kivus over a year and a half? On the financial side, Peter and Macartan devised an automated way to reimburse the cost of each text message sent on a monthly basis, and provided an additional $1.50/month on top of that. The only ask they made of the reps was that each had to send at least one text message per week, even if that message had the code 00, which referred to “no security incident”.
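A rough sketch of how coded text messages like these can be parsed and aggregated is below. Code 00 (“no security incident”) comes from the post itself; all other codes and their meanings are invented for illustration, since the actual Voix des Kivus codebook is not reproduced here.

```python
# Parse coded SMS reports of the kind described above: each message is a string
# of numeric event codes, e.g. "00" (no incident) or "03 07". Code 00 is from
# the post; the other code meanings below are invented for illustration.
from collections import Counter

CODEBOOK = {
    "00": "no security incident",
    "03": "armed group movement (illustrative)",
    "07": "looting (illustrative)",
}

def parse_message(text):
    """Return the list of human-readable event types reported in one SMS."""
    return [CODEBOOK.get(code, f"unknown code {code}") for code in text.split()]

def weekly_summary(messages):
    """Count reported event types across a week's worth of (village, text) messages."""
    counts = Counter()
    for village, text in messages:
        counts.update(parse_message(text))
    return counts

messages = [("village_A", "00"), ("village_B", "03 07"), ("village_A", "07")]
print(weekly_summary(messages))
```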

The figure above depicts the number of text messages received throughout the project, which formally ended in January 2011. In Peter’s own words:

“We gave $20 at the end to say thanks but also to learn a particular thing. During the project we heard often: ‘How important is that weekly $1.5?’ ‘Would people still send messages if you only reimburse them for their sent messages (and stop giving them the weekly $1.5)?’ So at the end of the project […] we gave the phone holder $20 and told them: the project continues exactly the same, the only difference is we can no longer send you the $1.5. We will still reimburse you for the sent messages, we will still share the bulletins, etc. While some phone holders kept on sending textmessages, most stopped. In other words, the financial incentive of $1.5 (in the form of phonecredit) was important.”

Peter and Macartan have learned a lot during this project, and I urge colleagues interested in applying their approach to get in touch with them–I’m happy to provide an email introduction. I wish Swisspeace’s Early Warning System (FAST) had adopted this methodology before running out of funding several years ago. But the leadership at the time was perhaps not forward thinking enough. I’m not sure whether the Conflict Early Warning and Response Network (CEWARN) in the Horn has fared any better vis-a-vis demonstrated impact or lack thereof.

To learn more about crowdsourcing as a methodology for information collection, I recommend the following three articles:

Predicting the Future of Global Geospatial Information Management

The United Nations Committee of Experts on Global Geospatial Information Management (UN-GGIM) recently organized a meeting of thought-leaders and visionaries in the geospatial world to identify the future of this space over the next 5-10 years. These experts came up with some 80+ individual predictions. I’ve included some of the more interesting ones below.

  • The use of Unmanned Aerial Vehicles (UAVs) as a tool for rapid geospatial data collection will increase.
  • 3D and even 4D geospatial information, incorporating time as the fourth dimension, will increase.
  • Technology will move faster than legal and governance structures.
  • The link between geospatial information and social media, plus other actor networks, will become more and more important.
  • Real-time info will enable more dynamic modeling & response to disasters.
  • Free and open source software will continue to grow as viable alternatives both in terms of software, and potentially in analysis and processing.
  • Geospatial computation will increasingly be non-human consumable in nature, with an increase in fully-automated decision systems.
  • Businesses and Governments will increasingly invest in tools and resources to manage Big Data. The technologies required for this will enable greater use of raw data feeds from sensors and other sources of data.
  • In ten years’ time it is likely that all smartphones will be able to film 360 degree 3D video at incredibly high resolution by today’s standards & wirelessly stream it in real time.
  • There will be a need for geospatial use governance in order to discern the real world from the virtual/modelled world in a 3D geospatial environment.
  • Free and open access to data will become the norm and geospatial information will increasingly be seen as an essential public good.
  • Funding models to ensure full data coverage even in non-profitable areas will continue to be a challenge.
  • Rapid growth will lead to confusion and lack of clarity over data ownership, distribution rights, liabilities and other aspects.
  • In ten years, there will be a clear dividing line between winning and losing nations, dependent upon whether the appropriate legal and policy frameworks have been developed that enable a location-enabled society to flourish.
  • Some governments will use geospatial technology as a means to monitor or restrict the movements and personal interactions of their citizens. Individuals in these countries may be unwilling to use LBS or applications that require location for fear of this information being shared with authorities.
  • The deployment of sensors and the broader use of geospatial data within society will force public policy and law to move into a direction to protect the interests and rights of the people.
  • Spatial literacy will not be about learning GIS in schools but will be more centered on increasing spatial awareness and an understanding of the value of understanding place as context.
  • The role of National Mapping Agencies as an authoritative supplier of high quality data and of arbitrator of other geospatial data sources will continue to be crucial.
  • Monopolies held by National Mapping Agencies in some areas of specialized spatial data will be eroded completely.
  • More activities carried out by National Mapping Agencies will be outsourced and crowdsourced.
  • Crowdsourced data will push National Mapping Agencies towards niche markets.
  • National Mapping Agencies will be required to find new business models to provide simplified licenses and meet the demands for more free data from mapping agencies.
  • The integration of crowdsourced data with government data will increase over the next 5 to 10 years.
  • Crowdsourced content will decrease cost, improve accuracy and increase availability of rich geospatial information.
  •  There will be increased combining of imagery with crowdsourced data to create datasets that could not have been created affordably on their own.
  • Progress will be made on bridging the gap between authoritative data and crowdsourced data, moving towards true collaboration.
  • There will be an accelerated take-up of Volunteer Geographic Information over the next five years.
  • Within five years the level of detail on transport systems within OpenStreetMap will exceed virtually all other data sources & will be respected/used by major organisations & governments across the globe.
  • Community-based mapping will continue to grow.
  • There is unlikely to be a market for datasets like those currently sold to power navigation and location-based services solutions in 5 years, as they will have been superseded by crowdsourced datasets from OpenStreetMap or other comparable initiatives.

Which trends have the experts missed? Do you think they’re completely off on any of the above? The full set of predictions on the future of global geospatial information management is available here as a PDF.

Does the Humanitarian Industry Have a Future in The Digital Age?

I recently had the distinct honor of being on the opening plenary of the 2012 Skoll World Forum in Oxford. The panel, “Innovation in Times of Flux: Opportunities on the Heels of Crisis,” was moderated by Judith Rodin, CEO of the Rockefeller Foundation. I’ve spent the past six years creating linkages between the humanitarian space and the technology community, so the conversations we began during the panel prompted me to think more deeply about innovation in the humanitarian industry. Clearly, humanitarian crises have catalyzed a number of important innovations in recent years. At the same time, however, these crises extend the cracks that ultimately reveal the inadequacies of existing organizations, particularly those resistant to change; and “any organization that is not changing is a battlefield monument” (While 1992).

These cracks, or gaps, are increasingly filled by disaster-affected communities themselves thanks in part to the rapid commercialization of communication technology. Question is: will the multi-billion dollar humanitarian industry change rapidly enough to avoid being left in the dustbin of history?

Crises often reveal that “existing routines are inadequate or even counter-productive [since] response will necessarily operate beyond the boundary of planned and resourced capabilities” (Leonard and Howitt 2007). More formally, “the ‘symmetry-breaking’ effects of disasters undermine linearly designed and centralized administrative activities” (Corbacioglu 2006). This may explain why “increasing attention is now paid to the capacity of disaster-affected communities to ‘bounce back’ or to recover with little or no external assistance following a disaster” (Manyena 2006).

But disaster-affected populations have always self-organized in times of crisis. Indeed, first responders are by definition those very communities affected by disasters. So local communities—rather than humanitarian professionals—save the most lives following a disaster (Gilbert 1998). Many of the needs arising after a disaster can often be met and responded to locally. One doesn’t need 10 years of work experience with the UN in Darfur or a Masters degree to know basic first aid or to pull a neighbor out of the rubble, for example. In fact, estimates suggest that “no more than 10% of survival in emergencies can be attributed to external sources of relief aid” (Hilhorst 2004).

This figure may be higher today since disaster-affected communities now benefit from radically wider access to information and communication technologies (ICTs). After all, a “disaster is first of all seen as a crisis in communicating within a community—that is as a difficulty for someone to get informed and to inform other people” (Gilbert 1998). This communication challenge is far less acute today because disaster-affected communities are increasingly digital, and thus more and more the primary source of information communicated following a crisis. Of course, these communities were always sources of information but being a source in an analog world is fundamentally different than being a source of information in the digital age. The difference between “read-only” versus “read-write” comes to mind as an analogy. And so, while humanitarian organizations typically faced a vacuum of information following sudden onset disasters—limited situational awareness that could only be filled by humanitarians on the ground or via established news organizations—one of the major challenges today is the Big Data produced by disaster-affected communities themselves.

Indeed, vacuums are not empty and local communities are not invisible. One could say that disaster-affected communities are joining the quantified self (QS) movement given that they are increasingly quantifying themselves. If information is power, then the shift of information sourcing and sharing from the select few—the humanitarian professionals—to the masses must also engender a shift in power. Indeed, humanitarians rarely have access to exclusive information any longer. And even though affected populations are increasingly digital, some groups believe that humanitarian organizations have largely failed at communicating with disaster-affected communities. (Naturally, there are important and noteworthy exceptions).

So “Will Twitter Put the UN Out of Business?” (Reuters), or will humanitarian organizations cope with these radical changes by changing themselves and reshaping their role as institutions before it’s too late? Indeed, “a business that doesn’t communicate with its customers won’t stay in business very long—it’ll soon lose track of what its clients want, and clients won’t know what products or services are on offer,” whilst other actors fill the gaps (Reuters). “In the multi-billion dollar humanitarian aid industry, relief agencies are businesses and their beneficiaries are customers. Yet many agencies have muddled along for decades with scarcely a nod towards communicating with the folks they’re supposed to be serving” (Reuters).

The music and news industries were muddling along as well for decades. Today, however, they are facing tremendous pressures and are undergoing radical structural changes—none of them by choice. Of course, it would be different if affected communities were paying for humanitarian services but how much longer do humanitarian organizations have until they feel similar pressures?

Whether humanitarian organizations like it or not, disaster-affected communities will increasingly communicate their needs publicly and many will expect a response from the humanitarian industry. This survey carried out by the American Red Cross two years ago already revealed that during a crisis the majority of the public expect a response to needs they communicate via social media. Moreover, they expect this response to materialize within an hour. Humanitarian organizations simply don’t have the capacity to deal with this surge in requests for help, nor are they organizationally structured to do so. But the fact of the matter is that humanitarian organizations have never been capable of dealing with this volume of requests in the first place. So “What Good is Crowdsourcing When Everyone Needs Help?” (Reuters). Perhaps “crowdsourcing” is finally revealing all the cracks in the system, which may not be a bad thing. Surely by now it is no longer a surprise that many people may be in need of help after a disaster, hence the importance of disaster risk reduction and preparedness.

Naturally, humanitarian organizations could very well choose to continue ignoring calls for help and decide that communicating with disaster-affected communities is simply not tenable. In the analog world of the past, the humanitarian industry was protected by the fact that their “clients” did not have a voice because they could not speak out digitally. So the cracks didn’t show. Today, “many traditional humanitarian players see crowdsourcing as an unwelcome distraction at a time when they are already overwhelmed. They worry that the noise-to-signal ratio is just too high” (Reuters). I think there’s an important disconnect here worth emphasizing. Crowdsourced information is simply user-generated content. If humanitarians are to ignore user-generated content, then they can forget about two-way communications with disaster-affected communities and drop all the rhetoric. On the other hand, “if aid agencies are to invest time and resources in handling torrents of crowdsourced information in disaster zones, they should be confident it’s worth their while” (Reuters).

This last comment is … rather problematic for several reasons (how’s that for being diplomatic?). First of all, this kind of statement continues to propel the myth that we the West are the rescuers and aid does not start until we arrive (Barrs 2006). Unfortunately, we rarely arrive: how many “neglected crises” and so-called “forgotten emergencies” have we failed to intervene in? This kind of mindset may explain why humanitarian interventions often have the “propensity to follow a paternalistic mode that can lead to a skewing of activities towards supply rather than demand” and towards informing at the expense of listening (Manyena 2006).

Secondly, the assumption that crowdsourced data would be for the exclusive purpose of the humanitarian cavalry is somewhat arrogant and ignores the reality that local communities are by definition the first responders in a crisis. Disaster-affected communities (and Diasporas) are already collecting (and yes crowdsourcing) information to create their own crisis maps in times of need, as a forthcoming report shows. And they’ll keep doing this whether or not humanitarian organizations approve or leverage that information. As my colleague Tim McNamara has noted, “Crisis mapping is not simply a technological shift, it is also a process of rapid decentralization of power. With extremely low barriers to entry, many new entrants are appearing in the fields of emergency and disaster response. They are ignoring the traditional hierarchies, because the new entrants perceive that there is something that they can do which benefits others.”

Thirdly, humanitarian organizations are far more open to using free and open source software than they were just two years ago. So the resources required to monitor and map crowdsourced information need not break the bank. Indeed, the Syria Crisis Map uses a free and open source data-mining platform called HealthMap, which has been monitoring some 2,000 English-language sources on a daily basis for months. The technology powering the map itself, Ushahidi, is also free and open source. Moreover, the team behind the project comprises just a handful of volunteers doing this in their own free time (for almost an entire year now). And as a result of this initiative, I am collaborating with a colleague from UNDP to pilot HealthMap’s data mining feature for conflict monitoring and peacebuilding purposes.

Fourth, other than UN Global Pulse, humanitarian agencies are not investing time and resources to manage Big (Crisis) Data. Why? Because they have neither the time nor the know-how. To this end, they are starting to “outsource” and indeed “crowdsource” these tasks—just as private sector businesses have been doing for years in order to extend their reach. Anyone actually familiar with this space and developments since Haiti already knows this. The CrisisMappers Network, Standby Volunteer Task Force (SBTF), Humanitarian OpenStreetMap (HOT) and Crisis Commons (CC) are four volunteer/technical networks that have already collaborated actively with a number of humanitarian organizations since Haiti to provide the “surge capacity” requested by the latter; this includes UN OCHA in Libya and Colombia, UNHCR in Somalia and WHO in Libya, to name a few. In fact, these groups even have their own acronym: Volunteer & Technical Communities (V&TCs).

As the former head of OCHA’s Information Services Section (ISS) noted after the SBTF launched the Libya Crisis Map, “Your efforts at tackling a difficult problem have definitely reduced the information overload; sorting through the multitude of signals on the crisis is not easy task” (March 8, 2011). Furthermore, the crowdsourced social media information mapped on the Libya Crisis Map was integrated into official UN OCHA information products. I dare say activating the SBTF was worth OCHA’s while. And it cost the UN a grand total of $0 to benefit from this support.

Credit: Chris Bow

The rapid rise of V&TCs has catalyzed the launch of the Digital Humanitarian Network (DHN), formerly called the Humanitarian Standby Task Force (H-SBTF). Digital Humanitarians is a network-of-networks catalyzed by the UN and comprising some of the most active members of the volunteer & technical community. The purpose of the Digital Humanitarian platform (powered by Ning) is to provide a dedicated interface for traditional humanitarian organizations to outsource and crowdsource important information management tasks during and in-between crises. OCHA has also launched the Communities of Interest (COIs) platform to further leverage volunteer engagement in other areas of humanitarian response.

These are not isolated efforts. During the massive Russian fires of 2010, volunteers launched their own citizen-based disaster response agency that was seen by many as more visible and effective than the Kremlin’s response. In Egypt, volunteers used IntaFeen.com to crowdsource and coordinate their own humanitarian convoys to Libya, for example. The company LinkedIn has also taken innovative steps to enable the matching of volunteers with various needs. It recently added a “Volunteer and Causes” field to its member profile page, which is now available to 150 million LinkedIn users worldwide. Sparked.com is yet another group engaged in matching volunteers with needs. The company is the world’s first micro-volunteering network, sending challenges to registered volunteers that are targeted to their skill set and the causes that they are most passionate about.

It is not farfetched to envisage how these technologies could be repurposed or simply applied to facilitate and streamline volunteer management following a disaster. Indeed, researchers at the University of Queensland in Australia have already developed a new smartphone app to help mobilize and coordinate volunteer efforts during and following major disasters. The app not only provides information on preparedness but also gives real-time updates on volunteering opportunities by local area. For example, volunteers can register for a variety of tasks including community response to extreme weather events.

Meanwhile, the American Red Cross just launched a Digital Operations Center in partnership with Dell Labs, which allows them to leverage digital volunteers and Dell’s social media monitoring platforms to reduce the noise-to-signal ratio. This is a novel “social media-based operation devoted to humanitarian relief, demonstrating the growing importance of social media in emergency situations.” As part of this center, the Red Cross also “announced a Digital Volunteer program to help respond to questions from and provide information to the public during disasters.”

While important challenges do exist, there are many positive externalities to leveraging digital volunteers. As the deputy high commissioner of UNHCR noted about this UNHCR-volunteer project in Somalia, these types of projects create more citizen engagement and raise awareness of humanitarian organizations and projects. This in part explains why UNHCR wants more, not less, engagement with digital volunteers. Indeed, these volunteers also develop important skills that will be increasingly sought after by humanitarian organizations recruiting for junior full-time positions. Humanitarian organizations are likely to become smarter and more up to speed on humanitarian technologies and digital humanitarian skills as a result. This change should be embraced.

So given the rise of “self-quantified” disaster-affected communities and digitally empowered volunteer communities, is there a future for traditional humanitarian organizations? Of course there is; anyone who suggests otherwise is seriously misguided and out of touch with innovation in the humanitarian space. Twitter will not put the UN out of business. Humanitarian organizations will continue to play some very important roles, especially those relating to logistics and coordination. These organizations will continue outsourcing some roles but will also take on some new roles. The issue here is simply one of comparative advantage. Humanitarian organizations used to have a comparative advantage in some areas, but this has shifted for all the reasons described above. So outsourcing in some cases makes perfect sense.

Interestingly, organizations like UN OCHA are also changing some of their own internal information management processes as a result of their collaboration with volunteer networks like the SBTF, which they expect will lead to a number of efficiency gains. Furthermore, OCHA is behind the Digital Humanitarians initiative and has also been developing a check-in app for humanitarian professionals to use in disaster response—clear signs of innovation and change. Meanwhile, the UK’s Department for International Development (DfID) has just launched a $75+ million fund to leverage new technologies in support of humanitarian response; this includes mobile phones, satellite imagery, Twitter as well as other social media technologies, digital mapping and gaming technologies. Given that crisis mapping integrates these new technologies and has been at the cutting edge of innovation in the humanitarian space, I’ve invited DfID to participate in this year’s International Conference on Crisis Mapping (ICCM 2012).

In conclusion, and as argued two years ago, the humanitarian industry is shifting towards a more multi-polar system. The rise of new actors, from digitally empowered disaster-affected communities to digital volunteer networks, has been driven by the rapid commercialization of communication technology—particularly the mobile phone and social networking platforms. These trends are unlikely to change soon and crises will continue to spur innovations in this space. This does not mean that traditional humanitarian organizations are becoming obsolete. Their roles are simply changing and this change is proof that they are not battlefield monuments. Of course, only time will tell whether they change fast enough.

Why Bounded Crowdsourcing is Important for Crisis Mapping and Beyond

I coined the term “bounded crowdsourcing” a couple of years back to distinguish the approach from other methodologies for information collection. As tends to happen, some Muggles (in the humanitarian community) ridiculed the term. They freaked out about the semantics instead of trying to understand the underlying concept. It’s not their fault though; they’ve never been to Hogwarts and have never taken Crowdsourcery 101 (joke!).

Open crowdsourcing or “unbounded crowdsourcing” refers to the collection of information with no intentional constraints. Anyone who hears about an effort to crowdsource information can participate. This definition is in line with the original description put forward by Jeff Howe: outsourcing a task to a generally large group of people in the form of an open call.

In contrast, the point of “bounded crowdsourcing” is to start with a small number of trusted individuals and to have each of these individuals invite, say, 3 additional individuals to join the project–individuals who they fully trust and can vouch for. After joining and working on the project, these individuals in turn invite 3 additional people they fully trust. And so on and so forth, at an exponential rate if desired. Just as crowdsourcing is nothing new in the field of statistics, neither is “bounded crowdsourcing”; its analog is snowball sampling.

In snowball sampling, a number of individuals are identified who meet certain criteria, but unlike in purposive sampling they are asked to recommend others who also meet these same criteria—thus expanding the network of participants. Although these “bounded” methods are unlikely to produce representative samples, they are more likely to produce trustworthy information. In addition, there are times when it may be the best—or indeed only—method available. Incidentally, a recent study that analyzed various field research methodologies for conflict environments concluded that snowball sampling was the most effective method (Cohen and Arieli 2011).

I introduced the concept of bounded crowdsourcing to the field of crisis mapping in response to concerns over the reliability of crowdsourced information. One excellent real-world case study of bounded crowdsourcing for crisis response is this remarkable example from Kyrgyzstan. The “boundary” in bounded crowdsourcing is dynamic and can grow exponentially very quickly. Participants may not all know each other (just like in open crowdsourcing), so in some ways they become a crowd, but one bounded by an invite-only criterion.
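To see just how quickly such an invite-only boundary can grow, here is a toy simulation of the mechanism described above (a few trusted seeds, each participant inviting up to three others per wave). The 70% acceptance rate is my own assumption, added only to keep the numbers realistic.

```python
# Toy simulation of bounded crowdsourcing / snowball recruitment: start from a
# few trusted seeds and let each new participant invite up to 3 people per wave.
# The 70% acceptance rate is an assumption for illustration only.
import random

def simulate_growth(seeds=5, invites_per_person=3, accept_prob=0.7, waves=5, rng_seed=42):
    random.seed(rng_seed)
    sizes = [seeds]       # cumulative participants after each wave
    newcomers = seeds
    for _ in range(waves):
        joined = sum(1 for _ in range(newcomers * invites_per_person)
                     if random.random() < accept_prob)
        newcomers = joined
        sizes.append(sizes[-1] + joined)
    return sizes

print(simulate_growth())  # e.g. participant counts wave by wave
```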

I have since recommended this approach to several groups using the Ushahidi platform, like the #OWS movement. The statistical method known as snowball sampling is decades old, so I’m not introducing a new technique, simply applying a conventional approach from statistics to the field of crisis mapping and calling it bounded to distinguish the methodology from regular crowdsourcing efforts. What is different and exciting about combining snowball sampling with crowdsourcing is that a far larger group can be sampled, a lot more quickly and also more cost-effectively, given today’s real-time, free social networking platforms.