Category Archives: Crisis Mapping

Back to the Future: On National Geographic and Crisis Mapping

[Cross-posted from National Geographic Newswatch]

Published in October 1888, the first issue of National Geographic “was a modest looking scientific brochure with an austere terra-cotta cover” (NG 2003). The inaugural publication comprised a dense academic treatise on the classification of geographic forms by genesis. But that wasn’t all. The first issue also included a riveting account of “The Great White Hurricane” of March 1888, which still ranks as one of the worst winter storms ever in US history.

Wreck at Coleman’s Station, New York & Harlem R. R., March 13, 1888. Photo courtesy NOAA Photo Library.

I’ve just spent a riveting week myself at the 2012 National Geographic Explorers Symposium in Washington DC, the birthplace of the National Geographic Society. I was truly honored to be recognized as a 2012 Emerging Explorer along with such an amazing and accomplished cadre of explorers. So it was with excitement that I began reading up on the history of this unique institution whilst on my flight to Doha following the Symposium.

I’ve been tagged as the “Crisis Mapper” of the Emerging Explorers Class of 2012. So imagine my astonishment when I  began discovering that National Geographic had a long history of covering and mapping natural disasters, humanitarian crises and wars starting from the very first issue of the magazine in 1888. And when World War I broke out:

“Readers opened their August 1914 edition of the magazine to find an up-to-date map of ‘The New Balkan States and Central Europe’ that allowed them to follow the developments of the war. Large maps of the fighting fronts continued to be published throughout the conflict […]” (NG 2003).

Map of ‘The New Balkan States and Central Europe’ from the August 1914 “National Geographic Magazine.” Image courtesy NGS.

National Geographic even established a News Service Bureau to provide bulletins on the geographic aspects of the war for the nation’s newspapers. As the respected war strategist Carl von Clausewitz noted half-a-century before the launch of Geographic, “geography and the character of the ground bear a close and ever present relation to warfare, . . . both as to its course and to its planning and exploitation.”

“When World War II came, the Geographic opened its vast files of photographs, more than 300,000 at that time, to the armed forces. By matching prewar aerial photographs against wartime ones, analysts detected camouflage and gathered intelligence” (NG 2003).

During the 1960s, National Geographic “did not shrink from covering the war in Vietnam.” Staff writers and photographers captured all aspects of the war from “Saigon to the Mekong Delta to villages and rice fields.” In the years and decades that followed, Geographic continued to capture unfolding crises, from occupied Palestine and Apartheid South Africa to war-torn Afghanistan and the drought-stricken Sahel of Africa.

Geographic also covered the tragedy of the Chernobyl nuclear disaster and the dramatic eruption of Mount Saint Helens. The gripping account of the latter would in fact become the most popular article in all of National Geographic history. Today,

“New technologies–remote sensing, lasers, computer graphics, x-rays and CT scans–allow National Geographic to picture the world in new ways.” This is equally true of maps. “Since the first map was published in the magazine in 1888, maps have been an integral component of many magazine articles, books and television programs […]. Originally drafted by hand on large projections, today’s maps are created by state-of-the-art computers to map everything from the Grand Canyon to the outer reaches of the universe” (NG 2003). And crises.

“Pick up a newspaper and every single day you’ll see how geography plays a dominant role in giving a third dimension to life,” wrote Gil Grosvenor, the former Editor in Chief of National Geographic (NG 2003). And as we know only too well, many of the headlines in today’s newspapers relay stories of crises the world over. National Geographic has a tremendous opportunity to shed a third dimension on emerging crises around the globe using new live mapping technologies. Indeed, to map the world is to know it, and to map the world live is to change it live before it’s too late. The next post in this series will illustrate why with an example from the 2010 Haiti Earthquake.

Patrick Meier is a 2012 National Geographic Emerging Explorer. He is an internationally recognized thought leader on the application of new technologies for positive social change. He currently serves as Director of Social Innovation at the Qatar Foundation’s Computing Research Institute (QCRI). Patrick also authors the respected iRevolution blog & tweets at @patrickmeier. This piece was originally published here on National Geographic.

Does Digital Crime Mapping Work? Insights on Engagement, Empowerment & Transparency

In 2008, police forces across the United Kingdom (UK) launched an online crime mapping tool “to help improve the credibility and confidence that the public had in police-recorded crime levels, address perceptions of crime, promote community engagement and empowerment, and support greater public service transparency and accountability.” How effective has this large scale digital mapping effort been? “There continues to be a lack of evidence that publishing crime statistics using crime mapping actually supports improvements in community engagement and empowerment.” This blog post evaluates the project’s impact by summarizing the findings from a recent peer-reviewed study entitled: “Engagement, Empowerment and Transparency: Publishing Crime Statistics using Online Crime Mapping.” Insights from this study have important implications for crisis mapping projects.

The rationale for publishing up-to-date crime statistics online was to address the “reassurance gap” which “relates to the counterintuitive relationship between fear of crime and the reality of crime.” While crime in the UK has decreased steadily over the past 15 years, there was no corresponding decrease in the public’s fear of crime during this period. Studies subsequently found a relationship between a person’s confidence in the criminal justice system and the level to which a person felt informed about crime and justice issues. “Hence, barriers to accurate information were one of the main reasons why the reassurance gap, and lack of confidence in the police, was believed to exist.”

A related study found that people’s opinions on crime levels were “heavily influenced by media depictions, demographic qualities, and personal experience.” Meanwhile, “the countervailing source of information—nationally reported crime statistics—was not being heard. Simply put, the message that crime levels were falling was not getting through to the populace over the cacophony of competing information.” Hence the move to publish crime statistics online using a crime map.

Studies have long inferred that “publically disseminating crime information engages the public and empowers them to get involved in their communities. This has been a key principle in the adoption of community policing, where the public are considered just as much a part of community safety as the police themselves. Increasing public access to crime information is seen as integral to this whole agenda.” In addition, digital crime mapping was “seen as a ‘key mechanism for encouraging the public to take greater responsibility for holding the local police to account for their performance.’ In other words, it is believed that publishing information on crime at a local level facilitates greater public scrutiny of how well the police are doing at suppressing local crime and serves as a basis for dialogue between the public and their local police.”

While these are all great reasons to launch a nationwide crime mapping initiative online, “the evidence base that should form the foundations of the policy to publish crime statistics using crime maps is distinctly absent.” When the project launched, “no police force had conducted a survey (robust or anecdotal) that had measured the impact that publishing crime statistics had on improving the credibility of these data or the way in which the information was being used to inform, reassure, and engage with the public.” In fact, “the only research-derived knowledge available on the impact of crime maps on public perceptions of crime was generated in the USA, on a small and conveniently selected sample.”

Moreover, many practitioners had “concerns with the geocoding accuracy of some crime data and how this would be represented on the national crime mapping site (i.e. many records cannot be geographically referenced to the exact location where the crime took place) […] which could make the interpretation of street-level data misleading and confusing to the public.” The authors thus argue that “an areal visualization method such as kernel density estimation that is commonly used in police forces for visualizing the geographic distribution of crime would have been more appropriate.”
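To make the kernel density suggestion concrete, here is a minimal sketch, assuming a hypothetical list of incident coordinates, of how an areal density surface could be produced with off-the-shelf Python tools instead of plotting imprecisely geocoded points:

```python
# Minimal sketch: kernel density estimate over hypothetical crime incident
# coordinates, as an alternative to plotting imprecise point locations.
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical incident locations (longitude, latitude)
incidents = np.array([
    [-0.1276, 51.5072], [-0.1280, 51.5080], [-0.1300, 51.5060],
    [-0.1250, 51.5100], [-0.1265, 51.5075], [-0.1290, 51.5068],
])

# Fit a 2-D Gaussian kernel density estimate to the incident points
kde = gaussian_kde(incidents.T)

# Evaluate the density surface on a regular grid covering the area
lon = np.linspace(incidents[:, 0].min(), incidents[:, 0].max(), 100)
lat = np.linspace(incidents[:, 1].min(), incidents[:, 1].max(), 100)
grid_lon, grid_lat = np.meshgrid(lon, lat)
density = kde(np.vstack([grid_lon.ravel(), grid_lat.ravel()])).reshape(grid_lon.shape)

# 'density' can now be rendered as a heat map layer instead of raw points,
# which avoids implying street-level precision the underlying geocoding lacks.
```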

The media was particularly critical of the confusion provoked by the map and the negative media attention “may have done long-term damage to the reputation of crime statistics. Whilst these inaccuracies are few in number, they have been high profile and risk undermining the legitimacy and confidence in the accuracy of all the crime statistics published on the site. Indeed, these concerns were highlighted further in October 2011 when the crime statistics for the month of August appeared to show no resemblance to the riots […].”

Furthermore, “contemporary research has stressed that information provision needs to be relevant to the recipients, and should emphasize police responsiveness to local issues to chime with the public’s priorities.” Yet the UK’s online crime map does not “tailor sub-neighborhood reassurance messages alongside the publishing of crime statistics (e.g., saying how little crime there is in your neighborhood). General messages of reassurance and crime prevention often fail to resonate. This also underscores the need for tailored information that is actively passed on to local communities at times of heightened crime risk, which local residents can then use to minimize their own immediate risk of victimization and improve local public safety.”

In addition, the presentation of “crime statistics on the national website is very passive, offering little that will draw people back and keep them interested on crime trends and policing in their area.” For example, the project did not require users to register their email addresses and home post (zip) codes when using the map. This meant the police had no way to inform interested audiences with locally relevant crime information such as “specific and tailored crime prevention advice regarding a known local crime issue (e.g. a spate of burglaries), directly promoting messages of reassurance and used as a means to publicize police activity.” I would personally argue for the use of automated alerts and messages of reassurance via geo-fencing. (The LA Crime Map provides automated alerts, for example). I would also recommend adding social networking tools such as Facebook and Twitter to the map.
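To illustrate the kind of geo-fenced alerting I have in mind, here is a hypothetical sketch (not how the UK site or the LA Crime Map actually works) in which each subscriber registers a neighborhood polygon and newly published incidents are checked against it:

```python
# Hypothetical sketch of geo-fenced crime alerts: each subscriber registers a
# neighborhood polygon, and new incidents falling inside it trigger a message.
from shapely.geometry import Point, Polygon

# Subscribers with a contact address and a neighborhood geo-fence (lon, lat pairs)
subscribers = [
    {
        "email": "resident@example.org",
        "fence": Polygon([(-0.13, 51.50), (-0.12, 51.50),
                          (-0.12, 51.51), (-0.13, 51.51)]),
    },
]

def notify(email, incident):
    # Placeholder for an email/SMS gateway call
    print(f"Alert to {email}: {incident['category']} reported nearby")

def process_incident(incident):
    """Check a newly published incident against every subscriber's geo-fence."""
    location = Point(incident["lon"], incident["lat"])
    for sub in subscribers:
        if sub["fence"].contains(location):
            notify(sub["email"], incident)

process_incident({"category": "burglary", "lon": -0.125, "lat": 51.505})
```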

In conclusion, the authors question the “assumption that all police-recorded crime data are fit for purpose for mapping at street level.” They recommend using the Management of Police Information (MOPI) protocol, which states that “information must fulfill a necessary purpose for it to be recorded and retained by the police.” MOPI would “help to qualify what should and what should not be published, and the mechanism by which it is published.” Instead of mapping everything and anything, the authors advocate for the provision of “better quality information that the public can actually do something with to minimize their risk of victimization, or use as a basis for dialogue with their local policing teams.” In sum, “the purpose of publishing the crime statistics must not lose sight of the important potential it can contribute to improving the dialogue and involvement of local communities in improving community safety, and must avoid becoming an exercise in promoting political transparency when the data it offers provides little that encourages the public to react.”


How Can Innovative Technology Make Conflict Prevention More Effective?

I’ve been asked to participate in an expert working group in support of a research project launched by the International Peace Institute (IPI) on new technologies for conflict prevention. Both UNDP and USAID are also partners in this effort. To this end, I’ve been invited to make some introductory remarks during our upcoming working group meeting. The purpose of this blog post is to share my preliminary thoughts on this research and provide some initial suggestions.

Before I launch into said thoughts, some context may be in order. I spent several years studying, launching and improving conflict early warning systems for violence prevention. While I haven’t recently blogged about conflict prevention on iRevolution, you’ll find my writings on this topic posted on my other blog, Conflict Early Warning. I have also published and presented several papers on conflict prevention, most of which are available here. The most relevant ones include the following:

  • Meier, Patrick. 2011. Early Warning Systems and the Prevention of Violent Conflict. In Peacebuilding in the Information Age: Sifting Hype from Reality, ed. Daniel Stauffacher et al. Geneva: ICT4Peace. Available online.
  • Leaning, Jennifer and Patrick Meier. 2009. “The Untapped Potential of Information Communication Technology for Conflict Early Warning and Crisis Mapping,” Working Paper Series, Harvard Humanitarian Initiative (HHI), Harvard University. Available online.
  • Leaning, Jennifer and Patrick Meier. 2008. “Community Based Conflict Early Warning and Response Systems: Opportunities and Challenges.” Working Paper Series, Harvard Humanitarian Initiative (HHI), Harvard University. Available online.
  • Leaning, Jennifer and Patrick Meier. 2008. “Conflict Early Warning and Response: A Critical Reassessment.” Working Paper Series, Harvard Humanitarian Initiative (HHI), Harvard University. Available online.
  • Meier, Patrick. 2008. “Upgrading the Role of Information Communication Technology (ICT) for Tactical Early Warning/Response.” Paper prepared for the 49th Annual Convention of the International Studies Association (ISA) in San Francisco. Available online.
  • Meier, Patrick. 2007. “New Strategies for Effective Early Response: Insights from Complexity Science.” Paper prepared for the 48th Annual Convention of the International Studies Association (ISA) in Chicago. Available online.
  • Campbell, Susanna and Patrick Meier. 2007. “Deciding to Prevent Violent Conflict: Early Warning and Decision-Making at the United Nations.” Paper prepared for the 48th Annual Convention of the International Studies Association (ISA) in Chicago. Available online.
  • Meier, Patrick. 2007. From Disaster to Conflict Early Warning: A People-Centred Approach. Monday Developments 25, no. 4, 12-14. Available online.
  • Meier, Patrick. 2006. “Early Warning for Cowardly Lions: Response in Disaster & Conflict Early Warning Systems.” Unpublished academic paper, The Fletcher School. Available online.
  • I was also invited to be an official reviewer of this 100+ page workshop summary on “Communication and Technology for Violence Prevention” (PDF), which was just published by the National Academy of Sciences. In addition, I was an official referee for this important OECD report on “Preventing Violence, War and State Collapse: The Future of Conflict Early Warning and Response.”

An obvious first step for IPI’s research would be to identify the conceptual touch-points between the individual functions or components of conflict early warning systems and information & communication technology (ICT). Using the conceptual framework put forward by ISDR would be a good place to start.

That said, colleagues at IPI should take care not to fall prey to technological determinism. The first order of business should be to understand exactly why previous (and existing) conflict early warning systems are complete failures—a topic I have written extensively about and been particularly vocal on since 2004. Throwing innovative technology at failed systems will not turn them into successful operations. Furthermore, IPI should also take note of the relatively new discourse on people-centered approaches to early warning and distinguish between first, second, third and fourth generation conflict early warning systems.

On this note, IPI ought to focus in particular on third and fourth generation systems vis-a-vis the role of innovative technology. Why? Because first and second generation systems are structured for failure due to constraints explained by organizational theory. They should thus explore the critical importance of conflict preparedness and the role that technology can play in this respect since preparedness is key to the success of third and fourth generation systems. In addition, IPI should consider the implications of crowdsourcing, crisis mapping, Big Data and satellite imagery, as well as the role that social media analytics might play in the early detection of and response to violent conflict. They should also take care not to ignore critical insights from the field of nonviolent civil resistance vis-a-vis preparedness and tactical approaches to community-based early response. Finally, they should take note of new and experimental initiatives in this space, such as PeaceTXT.

IPI plans to write up several case studies on conflict early warning systems to understand how innovative technology might make (or may already be making) these more effective. I would recommend focusing on specific systems in Kenya, Kyrgyzstan, Sri Lanka and Timor-Leste. Note that some community-based systems are too sensitive to make public, such as one in Burma for example. In terms of additional experts worth consulting, I would recommend David Nyheim, Joe Bock, Maria Stephan, Sanjana Hattotuwa, Scott Edwards and Casey Barrs. I would also shy away from inviting too many academics or technology companies. The former tend to focus too much on theory while the latter often have a singular focus on technology.

Many thanks to UNDP for including me in the team of experts. I look forward to the first working group meeting and reviewing IPI’s early drafts. In the meantime, if iRevolution readers have certain examples or questions they’d like me to relay to the working group, please do let me know via the comments section below and I’ll be sure to share.

Big Data for Development: Challenges and Opportunities

The UN Global Pulse report on Big Data for Development ought to be required reading for anyone interested in humanitarian applications of Big Data. The purpose of this post is not to summarize this excellent 50-page document but to relay the most important insights contained therein. In addition, I question the motivation behind the unbalanced commentary on Haiti, which is my only major criticism of this otherwise authoritative report.

Real-time “does not always mean occurring immediately. Rather, “real-time” can be understood as information which is produced and made available in a relatively short and relevant period of time, and information which is made available within a timeframe that allows action to be taken in response i.e. creating a feedback loop. Importantly, it is the intrinsic time dimensionality of the data, and that of the feedback loop that jointly define its characteristic as real-time. (One could also add that the real-time nature of the data is ultimately contingent on the analysis being conducted in real-time, and by extension, where action is required, used in real-time).”

Data privacy “is the most sensitive issue, with conceptual, legal, and technological implications.” To be sure, “because privacy is a pillar of democracy, we must remain alert to the possibility that it might be compromised by the rise of new technologies, and put in place all necessary safeguards.” Privacy is defined by the International Telecommunications Union as the “right of individuals to control or influence what information related to them may be disclosed.” Moving forward, “these concerns must nurture and shape on-going debates around data privacy in the digital age in a constructive manner in order to devise strong principles and strict rules—backed by adequate tools and systems—to ensure “privacy-preserving analysis.”

Non-representative data is often dismissed outright since findings based on such data cannot be generalized beyond that sample. “But while findings based on non-representative datasets need to be treated with caution, they are not valueless […].” Indeed, while the “sampling selection bias can clearly be a challenge, especially in regions or communities where technological penetration is low […],  this does not mean that the data has no value. For one, data from “non-representative” samples (such as mobile phone users) provide representative information about the sample itself—and do so in close to real time and on a potentially large and growing scale, such that the challenge will become less and less salient as technology spreads across and within developing countries.”

Perceptions rather than reality is what social media captures. Moreover, these perceptions can also be wrong. But only those individuals “who wrongfully assume that the data is an accurate picture of reality can be deceived. Furthermore, there are instances where wrong perceptions are precisely what is desirable to monitor because they might determine collective behaviors in ways that can have catastrophic effects.” In other words, “perceptions can also shape reality. Detecting and understanding perceptions quickly can help change outcomes.”

False data and hoaxes are part and parcel of user-generated content. While the challenges around reliability and verifiability are real, some media organizations, such as the BBC, stand by the utility of citizen reporting of current events: “there are many brave people out there, and some of them are prolific bloggers and Tweeters. We should not ignore the real ones because we were fooled by a fake one.” They have thus devised internal strategies “to confirm the veracity of the information they receive and choose to report, offering an example of what can be done to mitigate the challenge of false information.” See for example my 20-page study on how to verify crowdsourced social media data, a field I refer to as information forensics. In any event, “whether false negatives are more or less problematic than false positives depends on what is being monitored, and why it is being monitored.”

“The United States Geological Survey (USGS) has developed a system that monitors Twitter for significant spikes in the volume of messages about earthquakes,” and as it turns out, 90% of user-generated reports that trigger an alert have turned out to be valid. “Similarly, a recent retrospective analysis of the 2010 cholera outbreak in Haiti conducted by researchers at Harvard Medical School and Children’s Hospital Boston demonstrated that mining Twitter and online news reports could have provided health officials a highly accurate indication of the actual spread of the disease with two weeks lead time.”
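The basic detection logic behind such systems can be approximated very simply: compare the current volume of keyword-matched messages against a recent baseline and raise an alert when the spike is large enough. Below is a rough sketch with made-up thresholds; it is not the USGS’s actual algorithm:

```python
# Rough sketch of volume-spike detection over keyword-matched tweet counts.
# The thresholds are illustrative; this is not the USGS's actual algorithm.
from collections import deque

WINDOW = 60          # number of past one-minute buckets used as the baseline
SPIKE_FACTOR = 5.0   # alert when current volume exceeds baseline by this factor
MIN_COUNT = 20       # ignore spikes on very small absolute counts

history = deque(maxlen=WINDOW)

def check_bucket(count_this_minute):
    """Return True if the latest one-minute count constitutes a spike."""
    baseline = (sum(history) / len(history)) if history else 0.0
    history.append(count_this_minute)
    if count_this_minute < MIN_COUNT:
        return False
    return count_this_minute > SPIKE_FACTOR * max(baseline, 1.0)

# Example: a quiet baseline followed by a sudden burst of "earthquake" tweets
for count in [3, 4, 2, 5, 3, 150]:
    if check_bucket(count):
        print(f"Possible event: {count} matching tweets in the last minute")
```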

This leads to the other Haiti example raised in the report, namely the finding that SMS data was correlated with building damage. Please see my previous blog posts here and here for context. What the authors seem to overlook is that Benetech apparently did not submit their counter-findings for independent peer-review whereas the team at the European Commission’s Joint Research Center did—and the latter passed the peer-review process. Peer-review is how rigorous scientific work is validated. The fact that Benetech never submitted their blog post for peer-review is actually quite telling.

In sum, while this Big Data report is otherwise strong and balanced, I am really surprised that they cite a blog post as “evidence” while completely ignoring the JRC’s peer-reviewed scientific paper published in the Journal of the European Geosciences Union. Until counter-findings are submitted for peer review, the JRC’s results stand: unverified, non-representative crowd-sourced text messages from the disaster-affected population in Port-au-Prince, translated from Haitian Creole to English via a novel crowdsourced volunteer effort and subsequently geo-referenced by hundreds of volunteers without any formal quality control, produced a statistically significant, positive correlation with building damage.
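For readers curious about what such a test involves, here is a bare-bones sketch of the kind of analysis at issue: aggregate geo-referenced SMS counts and assessed building damage onto the same spatial grid and test for correlation. The numbers below are made up; the JRC paper describes the actual methodology and data:

```python
# Bare-bones sketch of a spatial correlation test between SMS density and
# building damage, aggregated to grid cells. All numbers are made up; the
# JRC paper describes the actual methodology and data.
from scipy.stats import spearmanr

# Counts of geo-referenced SMS per grid cell (hypothetical)
sms_per_cell    = [120, 45, 200, 10, 80, 150, 5, 60]
# Assessed damaged buildings per grid cell (hypothetical)
damage_per_cell = [ 90, 30, 170, 12, 70, 130, 8, 40]

rho, p_value = spearmanr(sms_per_cell, damage_per_cell)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
# A significant positive rho would indicate that cells with more SMS traffic
# also tended to have more building damage.
```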

In conclusion, “any challenge with utilizing Big Data sources of information cannot be assessed divorced from the intended use of the information. These new, digital data sources may not be the best suited to conduct airtight scientific analysis, but they have a huge potential for a whole range of other applications that can greatly affect development outcomes.”

One such application is disaster response. Earlier this year, FEMA Administrator Craig Fugate gave a superb presentation on “Real Time Awareness” in which he relayed an example of how he and his team used Big Data (Twitter) during a series of devastating tornadoes in 2011:

“Mr. Fugate proposed dispatching relief supplies to the long list of locations immediately and received pushback from his team who were concerned that they did not yet have an accurate estimate of the level of damage. His challenge was to get the staff to understand that the priority should be one of changing outcomes, and thus even if half of the supplies dispatched were never used and sent back later, there would be no chance of reaching communities in need if they were in fact suffering tornado damage already, without getting trucks out immediately. He explained, “if you’re waiting to react to the aftermath of an event until you have a formal assessment, you’re going to lose 12-to-24 hours…Perhaps we shouldn’t be waiting for that. Perhaps we should make the assumption that if something bad happens, it’s bad. Speed in response is the most perishable commodity you have…We looked at social media as the public telling us enough information to suggest this was worse than we thought and to make decisions to spend [taxpayer] money to get moving without waiting for formal request, without waiting for assessments, without waiting to know how bad because we needed to change that outcome.”

Fugate also emphasized that using social media as an information source isn’t a precise science and the response isn’t going to be precise either. “Disasters are like horseshoes, hand grenades and thermal nuclear devices, you just need to be close—preferably more than less.”

Big Data Philanthropy for Humanitarian Response

My colleague Robert Kirkpatrick from Global Pulse has been actively promoting the concept of “data philanthropy” within the context of development. Data philanthropy involves companies sharing proprietary datasets for social good. I believe we urgently need big (social) data philanthropy for humanitarian response as well. Disaster-affected communities are increasingly the source of big data, which they generate and share via social media platforms like Twitter. Processing this data manually, however, is very time-consuming and resource intensive. Indeed, large numbers of digital humanitarian volunteers are often needed to monitor and process user-generated content from disaster-affected communities in near real-time.

Meanwhile, companies like Crimson Hexagon, Geofeedia, NetBase, Netvibes, RecordedFuture and Social Flow are defining the cutting edge of automated methods for media monitoring and analysis. So why not set up a Big Data Philanthropy group for humanitarian response in partnership with the Digital Humanitarian Network? Call it Corporate Social Responsibility (CSR) for digital humanitarian response. These companies would benefit from the publicity of supporting such positive and highly visible efforts. They would also receive expert feedback on their tools.

This “Emergency Access Initiative” could be modeled along the lines of the International Charter whereby certain criteria vis-a-vis the disaster would need to be met before an activation request could be made to the Big Data Philanthropy group for humanitarian response. These companies would then provide a dedicated account to the Digital Humanitarian Network (DHNet). These accounts would be available for 72 hours only and also be monitored by said companies to ensure they aren’t being abused. We would simply need to  have relevant members of the DHNet trained on these platforms and draft the appropriate protocols, data privacy measures and MoUs.

I’ve had preliminary conversations with humanitarian colleagues from the United Nations and DHNet who confirm that “this type of collaboration would be seen very positively from the coordination area within the traditional humanitarian sector.” On the business development end, this setup would enable companies to get their foot in the door of the humanitarian sector—a multi-billion dollar industry. Members of the DHNet are early adopters of humanitarian technology and are ideally placed to demonstrate the added value of these platforms since they regularly partner with large humanitarian organizations. Indeed, DHNet operates as a partnership model. This would enable humanitarian professionals to learn about new Big Data tools, see them in action and, possibly, purchase full licenses for their organizations. In sum, data philanthropy is good for business.

I have colleagues at most of the companies listed above and thus plan to actively pursue this idea further. In the meantime, I’d be very grateful for any feedback and suggestions, particularly on the suggested protocols and MoUs. So I’ve set up this open and editable Google Doc for feedback.

Big thanks to the team at the Disaster Information Management Research Center (DIMRC) for planting the seeds of this idea during our recent meeting. Check out their very neat Emergency Access Initiative.

Geofeedia: Next Generation Crisis Mapping Technology?

My colleague Jeannine Lemaire from the Core Team of the Standby Volunteer Task Force (SBTF) recently pointed me to Geofeedia, which may very well be the next generation in crisis mapping technology. So I spent over an hour talking with Geofeedia’s CEO, Phil Harris, to learn more about the platform and discuss potential applications for humanitarian response. The short version: I’m impressed; not just with the technology itself and its potential, but also by Phil’s deep intuition and genuine interest in building a platform that enables others to scale positive social impact.

Situational awareness is absolutely key to emergency response, hence the rise of crisis mapping. The challenge? Processing and geo-referencing Big Data from social media sources to produce live maps has largely been a manual (and arduous) task for many in the humanitarian space. In fact, a number of humanitarian colleagues I’ve spoken to recently have complained that the manual labor required to create (and maintain) live maps is precisely why they aren’t able to launch their own crisis maps. I know this is also true of several international media organizations.

There have been several attempts at creating automated live maps. Take Havaria and Global Incidents Map, for example. But neither of these provides the customizability necessary for users to apply the platforms in meaningful ways. Enter Geofeedia. Let’s take the recent earthquake and 800 aftershocks in Emilia, Italy. Simply type in the place name (or an exact address) and hit enter. Geofeedia automatically parses Twitter, YouTube, Flickr, Picasa and Instagram for the latest updates in that area and populates the map with this content. The algorithm pulls in data that is already geo-tagged and designated as public.

The geo-tagging happens on the smartphone or laptop/desktop when an image or Tweet is generated. The platform then allows you to pivot between the map and a collage of the automatically harvested content. Note that each entry includes a time stamp. Of course, since the search function is purely geo-based, the result will not be restricted to earthquake-related updates, hence the picture of friends at a picnic.
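Under the hood, this kind of location-first search amounts to a distance-and-recency filter over already geo-tagged public posts. The sketch below illustrates the general idea only and is not Geofeedia’s actual implementation:

```python
# Simplified sketch of a location-first, recency-aware search over geo-tagged
# posts. This illustrates the idea only; it is not Geofeedia's implementation.
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt

def haversine_km(lon1, lat1, lon2, lat2):
    """Great-circle distance between two points, in kilometers."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    dlon, dlat = lon2 - lon1, lat2 - lat1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def search(posts, center_lon, center_lat, radius_km=5, max_age_hours=24):
    """Return recent, nearby posts sorted newest-first."""
    cutoff = datetime.utcnow() - timedelta(hours=max_age_hours)
    hits = [
        p for p in posts
        if p["timestamp"] >= cutoff
        and haversine_km(p["lon"], p["lat"], center_lon, center_lat) <= radius_km
    ]
    return sorted(hits, key=lambda p: p["timestamp"], reverse=True)

# Example post harvested from a public, geo-tagged source (hypothetical)
posts = [{"source": "twitter", "lon": 11.04, "lat": 44.84,
          "timestamp": datetime.utcnow(), "text": "Collapsed roof near the square"}]
print(search(posts, center_lon=11.05, center_lat=44.85))
```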

But let’s click on the picture of the collapsed roof directly to the left. This opens up a new page with the following: the original picture and a map displaying where this picture was taken.

In between these, you’ll note the source of the picture, the time it was uploaded and the author. Directly below this you’ll find the option to query the map further by geographic distance. Let’s click on the 300 meters option. The result is the updated collage below.

We now see a lot more content relevant to the earthquake than we did after the initial search. Geofeedia only parses for recently published information, which adds temporal relevance to the geographic search. The result of combining these two dimensions is a more filtered result. Incidentally, Geofeedia allows you to save and very easily share these searches and results. Now let’s click on the first picture on the top left.

Geofeedia allows you to create collections (top right-hand corner). I’ve called mine “Earthquake Damage” so I can collect all the relevant Tweets, pictures and video footage of the disaster. The platform gives me the option of inviting specific colleagues to view and help curate this new collection by adding other relevant content such as tweets and video footage. Together with Geofeedia’s multi-media approach, these features make it very easy to cluster and triangulate multi-media data.

Now let’s pivot from these search results in collage form to the search results in map view. This display can also be saved and shared with others.

One of the clear strengths of Geofeedia is the simplicity of the user-interface. Key features and functions are esthetically designed. For example, if we wish to view the YouTube footage that is closest to the circle’s center, simply click on the icon and the video can be watched in the pop-up on the same page.

Now notice the menu just to the right of the YouTube video. Geofeedia allows you to create geo-fences on the fly. For example, we can click on “Search by Polygon” and draw a “digital fence” of that shape directly onto the map with just a few clicks of the mouse. Say we’re interested in the residential area just north of Via Statale. Simply trace the area, double-click to finish and then press on the magnifying glass icon to search for the latest social media updates and Geofeedia will return all content with relevant geo-tags.

The platform allows us to filter these results further via the “Settings” menu as displayed below. On the technical side, the tool’s API supports ATOM/RSS, JSON and GeoRSS formats.
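As a quick illustration of what consuming that GeoRSS output might look like, here is a generic sketch using only the Python standard library; the feed URL is a placeholder and this is not based on Geofeedia’s actual API documentation:

```python
# Generic illustration of consuming a GeoRSS feed with the standard library.
# The feed URL and field layout are placeholders, not Geofeedia's actual API.
import urllib.request
import xml.etree.ElementTree as ET

GEORSS_NS = "{http://www.georss.org/georss}"

def fetch_georss_points(url):
    """Yield (title, latitude, longitude) tuples from a GeoRSS feed."""
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    for item in tree.iter("item"):
        title = item.findtext("title", default="")
        point = item.findtext(f"{GEORSS_NS}point")
        if point:
            lat, lon = (float(x) for x in point.split())
            yield title, lat, lon

# for title, lat, lon in fetch_georss_points("https://example.org/feed.georss"):
#     print(title, lat, lon)
```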

Geofeedia has a lot of potential vis-a-vis humanitarian applications, which is why the Standby Volunteer Task Force (SBTF) is partnering with the group to explore this potential further. A forthcoming blog post on the SBTF blog will outline this partnership in more detail.

In the meantime, below are a few thoughts and suggestions for Phil and team on how they can make Geofeedia even more relevant and compelling for humanitarian applications. A quick qualifier is in order beforehand, however. I often have a tendency to ask for the moon when discovering a new platform I’m excited about. The suggestions that follow are thus not criticism at all but rather the result of my imagination gone wild. So big congrats to Phil and team for having built what is already a very, very neat platform!

  • Topical search feature that enables users to search by location and a specific theme or topic.
  • Delete function that allows users to delete content that is not relevant to them either from the Map or Collage interface. In the future, perhaps some “basic” machine learning algorithms could be added to learn what types of content the user does not want displayed or prioritized.
  • Add function that gives users the option of adding relevant multi-media content, say perhaps from a blog post, a Wikipedia entry, news article or (Geo)RSS feed. I would be particularly interested in seeing a Storyful feed integrated into Geofeedia, for example. The ability to add KML files could also be interesting, e.g., a KML of an earthquake’s epicenter and estimated impact.
  • Commenting function that enables users to comment on individual data points (Tweets, pictures, etc) and a “discussion forum” feature that enables users to engage in text-based conversation vis-a-vis a specific data point.
  • Storify feature that gives users the ability to turn their curated content into a storify-like story board with narrative. A Storify plugin perhaps.
  • Ushahidi feature that enables users to export an item (Tweet, picture, etc) directly to an Ushahidi platform with just one click. This feature should also allow for the automatic publishing of said item on an Ushahidi map.
  • Alerts function that allows one to turn a geo-fence into an automated alert feature. For example, once I’ve created my geo-fence, having an option that allows me (and others) to subscribe to this geo-fence for future updates could be particularly interesting. These alerts would be sent out as emails (and maybe SMS) with a link to the new picture or Tweet that has been geo-tagged within the geographical area of the geo-fence. Perhaps each geo-fence could tweet updates directly to anyone subscribed to that Geofeedia deployment.
  • Trends alert feature that gives users the option of subscribing to specific trends of interest. For example, I’d like to be notified if the number of data points in my geo-fence increases by more than 25% within a 24-hour time period. Or more specifically whether the number of pictures has suddenly increased. These meta-level trends can provide important insights vis-a-vis early detection & response. (A rough sketch of this kind of check follows after this list.)
  • Analytics function that produces summary statistics and trends analysis for a geo-fence of interest. This is where Geofeedia could better capture temporal dynamics by including charts, graphs and simple time-series analysis to depict how events have been unfolding over the past hour vs 12 hours, 24 hours, etc.
  • Sentiment analysis feature that enables users to have an at-a-glance understanding of the sentiments and moods being expressed in the harvested social media content.
  • Augmented Reality feature … just kidding (sort-of).
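
To make the trends-alert suggestion above more concrete, here is a rough sketch of the 25% check I have in mind; the logic is purely illustrative and not an existing Geofeedia feature:

```python
# Hypothetical sketch of a trend alert over a geo-fence: flag when the number
# of geo-tagged items in the last 24 hours exceeds the previous 24 hours by
# more than 25%. This is illustrative logic, not an existing Geofeedia feature.
from datetime import datetime, timedelta

THRESHOLD = 0.25  # 25% increase

def trend_alert(item_timestamps, now=None):
    """Return True if activity in the last 24h grew more than 25% over the prior 24h."""
    now = now or datetime.utcnow()
    last_24h = sum(1 for t in item_timestamps
                   if now - timedelta(hours=24) <= t <= now)
    prior_24h = sum(1 for t in item_timestamps
                    if now - timedelta(hours=48) <= t < now - timedelta(hours=24))
    if prior_24h == 0:
        return last_24h > 0
    return (last_24h - prior_24h) / prior_24h > THRESHOLD

# Example: 10 items in the prior day and 14 in the last day is a 40% increase,
# so trend_alert() would return True and trigger a notification.
```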

Naturally, most or all of the above may not be in line with Geofeedia’s vision, purpose or business model. But I very much look forward to collaborating with Phil & team vis-a-vis our SBTF partnership. A big thanks to Jeannine once again for pointing me to Geofeedia, and equally big thanks to my SBTF colleague Timo Luege for his blog post on the platform. I’m thrilled to see more colleagues actively blog about the application of new technologies for disaster response.

On this note, anyone familiar with this new Iremos platform (above picture) from France? They recently contacted me to offer a demo.

The Future of Crisis Mapping? Full-Sized Arcade Pinball Machines

Remember those awesome pinball machines (of the analog kind)? You’d launch the ball and see it bounce all over, reacting wildly to various fun objects as you accumulate bonus points. The picture below hardly does justice so have a look on YouTube for some neat videos. I wish today’s crisis maps were that dynamic. Instead, they’re still largely static and hardly as interactive or user-friendly.

Do we live in an inert, static universe? No, obviously we don’t, and yet the state of our crisis mapping platforms would seem to suggest otherwise; a rather linear and flat world, which reminds me more of this game:

Things are always changing and interacting around us. So we need maps with automated geo-fencing alerts that can trigger kinetic and non-kinetic actions. To this end, dynamic check-in features should be part and parcel of crisis mapping platforms as well. My check-in at a certain location and time of day should trigger relevant messages to certain individuals and things (cue the Internet of Things) both nearby and at a distance based on the weather and latest crime statistics, for example. In addition, crisis mapping platforms need to have more gamification options and “special effects”. Indeed, they should be more game-like in terms of consoles and user-interface design. They also ought to be easier to use and be more rewarding.

This explains why I blogged about the “Fisher Price Theory of Crisis Mapping” back in 2008. We’ve made progress over the past four years, for sure, but the ultimate pinball machine of crisis mapping still seems to be missing from the arcade of humanitarian technology.

State of the Art in Digital Disease Detection

Larry Brilliant’s TED Talk back in 2006 played an important role in catalyzing my own personal interest in humanitarian technology. Larry spoke about the use of natural language processing and computational linguistics for the early detection and early response to epidemics. So it was with tremendous honor and deep gratitude that I delivered the first keynote presentation at Harvard University’s Digital Disease Detection (DDD) conference earlier this year.

The field of digital disease detection has remained way ahead of the curve since 2006 in terms of leveraging natural language processing, computational linguistics and now crowdsourcing for the purposes of early detection of critical events. I thus highly, highly recommend watching the videos of the DDD Ignite Talks and panel presentations, which are all available here. Topics include “Participatory Surveillance,” “Monitoring Rumors,” “Twitter and Disease Detection,” “Search Query Surveillance,” “Open Source Surveillance,” “Mobile Disease Detection,” etc. The presentation on BioCaster is also well worth watching. I blogged about BioCaster here over three years ago and the platform is as impressive as ever.

These public health experts are really operating at the cutting-edge and their insights are proving important to the broader humanitarian technology community. To be sure, the potential added value of cross-fertilization between fields is tremendous. Just take this example of a public health data mining platform (HealthMap) being used by Syrian activists to detect evidence of killings and human rights violations.

From Gunfire at Sea to Maps of War: Implications for Humanitarian Innovation

MIT Professor Eric von Hippel is the author of Democratizing Innovation, a book I should have read when it was first published seven years ago. The purpose of this blog post, however, is to share some thoughts on “Gunfire at Sea: A Case Study in Innovation” (PDF), which Eric recently instructed me to read. Authored by Elting Morison in 1968, this piece is definitely required reading for anyone engaged in disruptive innovation, particularly in the humanitarian space. Morison was one of the most distinguished historians of the last century and the founder of MIT’s Program in Science, Technology and Society (STS). The Boston Globe called him “an educator and industrial historian who believed that technology could only be harnessed to serve human beings when scientists and poets could meet with mutual understanding.”

Morison details in intriguing fashion the challenges of using light artillery at sea in the late 1800s to illustrate how new technologies and new forms of power collide and indeed, “bombard the fixed structure of our habits of mind and behavior.” The first major innovative disruption in naval gunfire technology is the result of one person’s acute observation. Admiral Sir Percy Scott happened to watch his men during target practice one day while the ship they were on was pitching and rolling acutely due to heavy weather. The resulting accuracy of the shots was dismal save for one man who was doing something slightly different to account for the swaying. Scott observed this positive deviance carefully and cobbled together existing technology to render the strategy easier to repeat and replicate. Within a year, his gun crews were remarkably accurate.

Note that Scott was not responsible for the invention of the basic instruments he cobbled together to scale the positive deviance he observed. Scott’s contribution, rather, was  a mashup of existing technology made possible thanks to mechanical ingenuity and a keen eye for behavioral processes. As for the personality of the innovator, Scott possessed “a savage indignation directed ordinarily at the inelastic intelligence of all constituted authority, especially the British Admiralty.” Chance also plays a role in this story. “Fortune (in this case, the unaware gun pointer) indeed favors the prepared mind, but even fortune and the prepared mind need a favorable environment before they can conspire to produce sudden change. No intelligence can proceed very far above the threshold of existing data or the binding combinations of existing data.”

Whilst stationed in China several years later, Admiral Scott crosses paths with William Sims, an American Junior Officer of similar temperament. Sims’s efforts to reform the naval service are perhaps best told in his own words: “I am perfectly willing that those holding views differing from mine should continue to live, but with every fibre of my being I loathe indirection and shiftiness, and where it occurs in high place, and is used to save face at the expense of the vital interests of our great service (in which silly people place such a child-like trust), I want that man’s blood and I will have it no matter what it costs me personally.” Sims built on Scott’s inventions and made further modifications, resulting in new records in accuracy. “These elements were brought into successful combination by minds not interested in the instruments for themselves but in what they could do with them.”

“Sure of the usefulness of his gunnery methods, Sims then turned to the task of educating the Navy at large.” And this is where the fun really begins. His first strategy was to relay in writing the results of his methods “with a mass of factual data.” Sims authored over a dozen detailed data-driven reports on innovations in naval gunfire, which he sent from his China Station to the powers that be in Washington DC. At first, there was no response from DC. Sims thus decided to change his tone by using deliberately shocking language in subsequent reports. Writes Sims: “I therefore made up my mind I would give these later papers such a form that they would be dangerous documents to leave neglected in the files.” Sims also decided to share his reports with other officers in the fleet to force a response from the men in Washington.

The response, however, was not exactly what Sims had hoped. Washington’s opinion was that American technology was generally as good as the British, which implied that the trouble was with the men operating the technology, which thus meant that ship officers ought to conduct more training. What probably annoyed Sims most, however, was Washington’s comments vis-a-vis the new records in accuracy that Sims claimed to have achieved. Headquarters simply waved these off as impossible. So while the first reaction was dead silence, DC’s second strategy was to try and “meet Sims’s claims by logical, rational rebuttal.”

I agree with the author, Elting Morison, that this second stage reaction, “the apparent resort to reason,” is the “most entertaining and instructive in our investigation of the responses to innovation.” That said, the third stage, name-calling, can be just as entertaining for some, and Sims took the argumentum ad hominem as evidence that “he was being attacked by shifty, dishonest men who were the victims, as he said, of insufferable conceit and ignorance.” He thus took the extraordinary step of writing directly to the President of the United States, Theodore Roosevelt, to inform him of the remarkable achievements in accuracy that he and Admiral Scott had achieved. “Roosevelt, who always liked to respond to such appeals when he conveniently could, brought Sims back from China late in 1902 and installed him as Inspector of Target Practice […]. And when he left, after many spirited encounters […], he was universally acclaimed as ‘the man who taught us how to shoot.'”

What fascinates Morison in this story is the concerted resistance triggered by Sims’s innovation. Why so much resistance? Morison identifies three main sources: “honest disbelief in the dramatic but substantiated claims of the new process; protection of the existing devices and instruments with which they identified themselves; and maintenance of the existing society with which they were identified.” He argues that the latter explanation is the most important, i.e., resistance due to the “fixed structure of our habits of mind and behavior” and the fact that relatively small innovations in gunfire accuracy could quite conceivably unravel the entire fabric of naval doctrine. Indeed,

“From changes in gunnery flowed an extraordinary complex of changes: in shipboard routines, ship design, and fleet tactics. There was, too, a social change. In the days when gunnery was taken lightly, the gunnery officer was taken lightly. After 1903, he became one of the most significant and powerful members of a ship’s company, and this shift of emphasis naturally was shortly reflected in promotion lists. Each one of these changes provoked a dislocation in the naval society, and with man’s troubled foresight and natural indisposition to break up classic forms, the men in Washington withstood the Sims onslaught as long as they could. It is very significant that they withstood it until an agent from outside, outside and above, who was not clearly identified with the naval society, entered to force change.”

The resistance to change thus “springs from the normal human instinct to protect oneself, and more especially, one’s way of life.” Interestingly, “the deadlock between those who sought change and those who sought to retain things as they were was broken only by an appeal to superior force, a force removed from and unidentified with the mores, conventions, devices of the society. This seems to me a very important point.” The appeal to Roosevelt suggests perhaps that no organization “should or can undertake to reform itself. It must seek assistance from outside.”

I am absolutely intrigued by what these insights might imply vis-a-vis innovation (and resistance to innovation) in the humanitarian sector. Whether it be the result of combining existing technologies to produce open-source crisis mapping platforms or the use of new information management processes such as crowdsourcing, is concerted resistance to such innovation in the humanitarian space inevitable as well? Do we have a Roosevelt equivalent, i.e., an external and somewhat independent actor who might disrupt the resistance? I can definitely trace the same stages of resistance to innovations in humanitarian technology as those identified by Morison: (1) dead silence; (2) reasoned dismissal; and (3) name-calling. But as Morison himself is compelled to ask: “How then can we find the means to accept with less pain to ourselves and less damage to our social organization the dislocations in our society that are produced by innovation?”

This question, or rather Morison’s insights in tackling it, are profound and have important implications vis-a-vis innovation in the humanitarian space. Morison hones in on the imperative of “identification” in innovation:

“It cannot have escaped notice that some men identified themselves with their creations - sights, gun, gear, and so forth - and thus obtained a presumed satisfaction from the thing itself, a satisfaction that prevented them from thinking too closely on either the use or the defects of the thing; that others identified themselves with a settled way of life they had inherited or accepted with minor modification and thus found their satisfaction in attempting to maintain that way of life unchanged; and that still others identified themselves as rebellious spirits, men of the insurgent cast of mind, and thus obtained a satisfaction from the act of revolt itself.”

This purely personal identification is a powerful barrier to innovation. So can this identifying process be tempered in order to facilitate change that is ultimately in everyone’s interest? Morison recommends that we “spend some time and thought on the possibility of enlarging the sphere of our identifications from the part to the whole.” In addition, he suggests an emphasis on process rather than product. If we take this advice to heart, what specific changes should we seek to make in the humanitarian technology space? How do we enlarge the sphere of our identifications and in doing so focus on processes rather than products? There’s no doubt that these are major challenges in and of themselves, but ignoring them may very well mean that important innovations in life-saving technologies and processes will go unadopted by large humanitarian organizations for many years to come.

The KoBo Platform: Data Collection for Real Practitioners

Update: be sure to check out the excellent points in the comments section below.

I recently visited my alma mater, the Harvard Humanitarian Initiative (HHI), where I learned more about the free and open source KoBo ToolBox project that my colleagues Phuong Pham, Patrick Vinck and John Etherton have been working on. What really attracts me about KoBo, which means transfer in Acholi, is that the entire initiative is driven by highly experienced and respected practitioners. Often, software developers are the ones who build these types of platforms in the hopes that they add value to the work of practitioners. In the case of KoBo, a team of seasoned practitioners is fully in the driver’s seat. The result is a highly dedicated, customized and relevant solution.

Phuong and Patrick first piloted handheld digital data collection in 2007 in Northern Uganda. This early experience informed the development of KoBo, which continues to be driven by actual field-based needs and challenges such as limited technical know-how. In short, KoBo provides an integrated suite of applications for handheld data collection that are specifically designed for a non-technical audience, i.e., the vast majority of human rights and humanitarian practitioners out there. This suite of applications enables users to collect and analyze field data in virtually real-time.

KoBoForm allows you to build multimedia surveys for data collection purposes, integrating special datatypes like bar-codes, images and audio. Time stamps and geo-location via GPS let you know exactly where and when the data was collected (important for monitoring and evaluation, for example). KoBoForm’s optional data constraints and skip logic further ensure data accuracy. KoBoCollect is an Android app built on ODK. Surveys built with KoBoForm are easily uploaded to any number of Android phones sporting the KoBoCollect app, which can also be used offline and automatically synced when back in range. KoBoSync pushes survey data from the Android(s) to your computer for data analysis while KoBoMap lets you display your results in an interactive map with a user-friendly interface. Importantly, KoBoMap is optimized for low-bandwidth connections.
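To give a flavor of the constraints and skip logic described above, here is an illustrative sketch of how such rules behave; the structure below is hypothetical and is not KoBoForm’s actual format, which builds on the ODK standard:

```python
# Illustrative sketch of a tiny survey with a constraint and skip logic.
# The structure below is hypothetical and not KoBoForm's actual file format,
# which builds on the ODK/XForm standard.
survey = [
    {"name": "household_size", "type": "integer",
     "label": "How many people live in this household?",
     "constraint": lambda v: 0 < v <= 30},                     # reject implausible answers
    {"name": "children_under_5", "type": "integer",
     "label": "How many are children under five?",
     "relevant": lambda a: a.get("household_size", 0) > 1},    # skip for one-person homes
]

def validate(question, value):
    check = question.get("constraint")
    return check(value) if check else True

def is_relevant(question, answers):
    cond = question.get("relevant")
    return cond(answers) if cond else True

answers = {}
answers["household_size"] = 4
assert validate(survey[0], answers["household_size"])
if is_relevant(survey[1], answers):
    answers["children_under_5"] = 1
print(answers)
```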

The KoBo platform has been used to conduct large-scale population studies in places like the Central African Republic, Northern Uganda and Liberia. In total, Phuong and Patrick have interviewed more than 25,000 individuals in these countries using KoBo, so the tool has certainly been tried and tested. The resulting data, by the way, is available via this data-visualization portal. The team is currently building new features for KoBo to apply the tool in the Democratic Republic of the Congo (DRC). They are also collaborating with UNDP to develop a judicial monitoring project in the DRC using KoBoToolbox, which will help them “think through some of the requirements for longitudinal data collection and tracking of cases.”

In sum, the expert team behind KoBo is building these software solutions first and foremost for their own field work. As Patrick notes here, “the use of these tools was instrumental to the success of many of our projects.” This makes all the difference vis-a-vis the resulting technology.