Big Data for Development: Challenges and Opportunities

The UN Global Pulse report on Big Data for Development ought to be required reading for anyone interested in humanitarian applications of Big Data. The purpose of this post is not to summarize this excellent 50-page document but to relay the most important insights contained therein. In addition, I question the motivation behind the unbalanced commentary on Haiti, which is my only major criticism of this otherwise authoritative report.

Real-time “does not always mean occurring immediately. Rather, “real-time” can be understood as information which is produced and made available in a relatively short and relevant period of time, and information which is made available within a timeframe that allows action to be taken in response i.e. creating a feedback loop. Importantly, it is the intrinsic time dimensionality of the data, and that of the feedback loop that jointly define its characteristic as real-time. (One could also add that the real-time nature of the data is ultimately contingent on the analysis being conducted in real-time, and by extension, where action is required, used in real-time).”

Data privacy “is the most sensitive issue, with conceptual, legal, and technological implications.” To be sure, “because privacy is a pillar of democracy, we must remain alert to the possibility that it might be compromised by the rise of new technologies, and put in place all necessary safeguards.” Privacy is defined by the International Telecommunication Union as “the right of individuals to control or influence what information related to them may be disclosed.” Moving forward, “these concerns must nurture and shape on-going debates around data privacy in the digital age in a constructive manner in order to devise strong principles and strict rules—backed by adequate tools and systems—to ensure ‘privacy-preserving analysis.’”

Non-representative data is often dismissed outright since findings based on such data cannot be generalized beyond that sample. “But while findings based on non-representative datasets need to be treated with caution, they are not valueless […].” Indeed, while the “sampling selection bias can clearly be a challenge, especially in regions or communities where technological penetration is low […], this does not mean that the data has no value. For one, data from ‘non-representative’ samples (such as mobile phone users) provide representative information about the sample itself—and do so in close to real time and on a potentially large and growing scale, such that the challenge will become less and less salient as technology spreads across and within developing countries.”

Perceptions rather than reality are what social media captures, and these perceptions can also be wrong. But only those individuals “who wrongfully assume that the data is an accurate picture of reality can be deceived. Furthermore, there are instances where wrong perceptions are precisely what is desirable to monitor because they might determine collective behaviors in ways that can have catastrophic effects.” In other words, “perceptions can also shape reality. Detecting and understanding perceptions quickly can help change outcomes.”

False data and hoaxes are part and parcel of user-generated content. While the challenges around reliability and verifiability are real, some media organizations, such as the BBC, stand by the utility of citizen reporting of current events: “there are many brave people out there, and some of them are prolific bloggers and Tweeters. We should not ignore the real ones because we were fooled by a fake one.” These organizations “have thus devised internal strategies to confirm the veracity of the information they receive and choose to report, offering an example of what can be done to mitigate the challenge of false information.” See for example my 20-page study on how to verify crowdsourced social media data, a field I refer to as information forensics. In any event, “whether false negatives are more or less problematic than false positives depends on what is being monitored, and why it is being monitored.”

“The United States Geological Survey (USGS) has developed a system that monitors Twitter for significant spikes in the volume of messages about earthquakes,” and as it turns out, 90% of user-generated reports that trigger an alert have turned out to be valid. “Similarly, a recent retrospective analysis of the 2010 cholera outbreak in Haiti conducted by researchers at Harvard Medical School and Children’s Hospital Boston demonstrated that mining Twitter and online news reports could have provided health officials a highly accurate indication of the actual spread of the disease with two weeks lead time.”
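For readers who want a concrete sense of what such volume-based detection involves, here is a minimal sketch of spike detection on a stream of tweet counts. This is not the USGS’s actual system; the window size, threshold and sample counts are all my own illustrative assumptions.

```python
from collections import deque
import statistics

def spike_detector(counts, window=24, threshold=3.0):
    """Yield (index, count) whenever the tweet count for a time bin
    exceeds the rolling mean by `threshold` standard deviations.

    `counts` is an iterable of tweet counts per time bin (e.g. per minute).
    Window size and threshold are illustrative assumptions, not USGS values.
    """
    history = deque(maxlen=window)
    for i, count in enumerate(counts):
        if len(history) == window:
            mean = statistics.mean(history)
            stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
            if (count - mean) / stdev > threshold:
                yield i, count
        history.append(count)

# Example: a quiet baseline followed by a sudden burst of "earthquake" tweets
minute_counts = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3] * 3 + [55]
for idx, c in spike_detector(minute_counts, window=10):
    print(f"Possible earthquake signal at minute {idx}: {c} tweets")
```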

This leads to the other Haiti example raised in the report, namely the finding that SMS data was correlated with building damage. Please see my previous blog posts here and here for context. What the authors seem to overlook is that Benetech apparently did not submit their counter-findings for independent peer-review whereas the team at the European Commission’s Joint Research Center did—and the latter passed the peer-review process. Peer-review is how rigorous scientific work is validated. The fact that Benetech never submitted their blog post for peer-review is actually quite telling.

In sum, while this Big Data report is otherwise strong and balanced, I am really surprised that they cite a blog post as “evidence” while completely ignoring the JRC’s peer-reviewed scientific paper published in the Journal of the European Geosciences Union. Until counter-findings are submitted for peer review, the JRC’s results stand: unverified, non-representative crowdsourced text messages from the disaster-affected population in Port-au-Prince, translated from Haitian Creole to English by a novel crowdsourced volunteer effort and then geo-referenced by hundreds of volunteers without any quality control, produced a statistically significant, positive correlation with building damage.
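To make the statistical claim itself concrete, the sketch below shows the kind of calculation at stake: correlating per-area SMS counts with per-area damage figures and checking significance. The numbers are invented for illustration and are not the JRC’s or Mission 4636’s actual data.

```python
from scipy.stats import pearsonr

# Hypothetical per-grid-cell counts: number of text messages received versus
# the number of buildings assessed as damaged (illustrative values only).
sms_counts        = [12, 45, 7, 88, 23, 56, 3, 67, 31, 19]
damaged_buildings = [15, 60, 10, 95, 30, 70, 5, 80, 40, 22]

r, p_value = pearsonr(sms_counts, damaged_buildings)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")
# A positive r with a small p-value is what a "statistically significant,
# positive correlation" refers to in this kind of analysis.
```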

In conclusion, “any challenge with utilizing Big Data sources of information cannot be assessed divorced from the intended use of the information. These new, digital data sources may not be the best suited to conduct airtight scientific analysis, but they have a huge potential for a whole range of other applications that can greatly affect development outcomes.”

One such application is disaster response. Earlier this year, FEMA Administrator Craig Fugate gave a superb presentation on “Real Time Awareness” in which he relayed an example of how he and his team used Big Data (Twitter) during a series of devastating tornadoes in 2011:

“Mr. Fugate proposed dispatching relief supplies to the long list of locations immediately and received pushback from his team who were concerned that they did not yet have an accurate estimate of the level of damage. His challenge was to get the staff to understand that the priority should be one of changing outcomes, and thus even if half of the supplies dispatched were never used and sent back later, there would be no chance of reaching communities in need if they were in fact suffering tornado damage already, without getting trucks out immediately. He explained, “if you’re waiting to react to the aftermath of an event until you have a formal assessment, you’re going to lose 12-to-24 hours…Perhaps we shouldn’t be waiting for that. Perhaps we should make the assumption that if something bad happens, it’s bad. Speed in response is the most perishable commodity you have…We looked at social media as the public telling us enough information to suggest this was worse than we thought and to make decisions to spend [taxpayer] money to get moving without waiting for formal request, without waiting for assessments, without waiting to know how bad because we needed to change that outcome.”

“Fugate also emphasized that using social media as an information source isn’t a precise science and the response isn’t going to be precise either. “Disasters are like horseshoes, hand grenades and thermal nuclear devices, you just need to be close— preferably more than less.”

Big Data Philanthropy for Humanitarian Response

My colleague Robert Kirkpatrick from Global Pulse has been actively promoting the concept of “data philanthropy” within the context of development. Data philanthropy involves companies sharing proprietary datasets for social good. I believe we urgently need big (social) data philanthropy for humanitarian response as well. Disaster-affected communities are increasingly the source of big data, which they generate and share via social media platforms like Twitter. Processing this data manually, however, is very time-consuming and resource-intensive. Indeed, large numbers of digital humanitarian volunteers are often needed to monitor and process user-generated content from disaster-affected communities in near real-time.

Meanwhile, companies like Crimson Hexagon, Geofeedia, NetBase, Netvibes, RecordedFuture and Social Flow are defining the cutting edge of automated methods for media monitoring and analysis. So why not set up a Big Data Philanthropy group for humanitarian response in partnership with the Digital Humanitarian Network? Call it Corporate Social Responsibility (CSR) for digital humanitarian response. These companies would benefit from the publicity of supporting such positive and highly visible efforts. They would also receive expert feedback on their tools.

This “Emergency Access Initiative” could be modeled along the lines of the International Charter whereby certain criteria vis-a-vis the disaster would need to be met before an activation request could be made to the Big Data Philanthropy group for humanitarian response. These companies would then provide a dedicated account to the Digital Humanitarian Network (DHNet). These accounts would be available for 72 hours only and also be monitored by said companies to ensure they aren’t being abused. We would simply need to have relevant members of the DHNet trained on these platforms and draft the appropriate protocols, data privacy measures and MoUs.

I’ve had preliminary conversations with humanitarian colleagues from the United Nations and the DHNet who confirm that “this type of collaboration would be seen very positively from the coordination area within the traditional humanitarian sector.” On the business development end, this setup would enable companies to get their foot in the door of the humanitarian sector—a multi-billion dollar industry. Members of the DHNet are early adopters of humanitarian technology and are ideally placed to demonstrate the added value of these platforms since they regularly partner with large humanitarian organizations. Indeed, DHNet operates as a partnership model. This would enable humanitarian professionals to learn about new Big Data tools, see them in action and, possibly, purchase full licenses for their organizations. In sum, data philanthropy is good for business.

I have colleagues at most of the companies listed above and thus plan to actively pursue this idea further. In the meantime, I’d be very grateful for any feedback and suggestions, particularly on the suggested protocols and MoUs. So I’ve set up this open and editable Google Doc for feedback.

Big thanks to the team at the Disaster Information Management Research Center (DIMRC) for planting the seeds of this idea during our recent meeting. Check out their very neat Emergency Access Initiative.

Geofeedia: Next Generation Crisis Mapping Technology?

My colleague Jeannine Lemaire from the Core Team of the Standby Volunteer Task Force (SBTF) recently pointed me to Geofeedia, which may very well be the next generation in crisis mapping technology. So I spent over an hour talking with Geofeedia’s CEO, Phil Harris, to learn more about the platform and discuss potential applications for humanitarian response. The short version: I’m impressed, not just with the technology itself and its potential, but also by Phil’s deep intuition and genuine interest in building a platform that enables others to scale positive social impact.

Situational awareness is absolutely key to emergency response, hence the rise of crisis mapping. The challenge? Processing and geo-referencing Big Data from social media sources to produce live maps has largely been a manual (and arduous) task for many in the humanitarian space. In fact, a number of humanitarian colleagues I’ve spoken to recently have complained that the manual labor required to create (and maintain) live maps is precisely why they aren’t able to launch their own crisis maps. I know this is also true of several international media organizations.

There have been several attempts at creating automated live maps. Take Havaria and Global Incidents Map, for example. But neither of these provides the customizability necessary for users to apply the platforms in meaningful ways. Enter Geofeedia. Let’s take the recent earthquake and 800 aftershocks in Emilia, Italy. Simply type in the place name (or an exact address) and hit enter. Geofeedia automatically parses Twitter, YouTube, Flickr, Picasa and Instagram for the latest updates in that area and populates the map with this content. The algorithm pulls in data that is already geo-tagged and designated as public.

The geo-tagging happens on the smartphone, laptop or desktop when an image or Tweet is generated. The platform then allows you to pivot between the map and a collage of the automatically harvested content. Note that each entry includes a time stamp. Of course, since the search function is purely geo-based, the results will not be restricted to earthquake-related updates, hence the picture of friends at a picnic.
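Geofeedia’s internals are not public, but the core of a purely geo-based query can be sketched as a distance filter over already geo-tagged, public posts: keep whatever falls within a given radius of the search point, regardless of topic (which is exactly why the picnic photo appears). The haversine helper and sample posts below are my own illustration, not Geofeedia’s code.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def posts_near(posts, centre_lat, centre_lon, radius_km):
    """Filter already geo-tagged, public posts by distance from a centre point."""
    return [p for p in posts
            if haversine_km(p["lat"], p["lon"], centre_lat, centre_lon) <= radius_km]

# Illustrative posts around Emilia, Italy (coordinates and content invented)
posts = [
    {"source": "twitter",   "lat": 44.88, "lon": 11.00, "text": "Roof collapsed #terremoto"},
    {"source": "instagram", "lat": 44.90, "lon": 11.05, "text": "Picnic with friends"},
    {"source": "flickr",    "lat": 45.46, "lon": 9.19,  "text": "Milan at night"},
]
print(posts_near(posts, 44.89, 11.02, radius_km=10))
```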

But let’s click on the picture of the collapsed roof directly to the left. This opens up a new page showing the original picture and a map displaying where the picture was taken.

In between these, you’ll note the source of the picture, the time it was uploaded and the author. Directly below this you’ll find the option to query the map further by geographic distance. Let’s click on the 300 meters option. The result is the updated collage below.

We now see a lot more content relevant to the earthquake than we did after the initial search. Geofeedia only parses for recently published information, which adds temporal relevance to the geographic search. The result of combining these two dimensions is a more filtered result. Incidentally, Geofeedia allows you to save and very easily share these searches and results. Now let’s click on the first picture on the top left.

Geofeedia allows you to create collections (top right-hand corner). I’ve called mine “Earthquake Damage” so I can collect all the relevant Tweets, pictures and video footage of the disaster. The platform gives me the option of inviting specific colleagues to view and help curate this new collection by adding other relevant content such as tweets and video footage. Together with Geofeedia’s multi-media approach, these features make it very easy to cluster and triangulate multi-media data.

Now let’s pivot from these search results in collage form to the search results in map view. This display can also be saved and shared with others.

One of the clear strengths of Geofeedia is the simplicity of the user interface. Key features and functions are aesthetically designed. For example, to view the YouTube footage closest to the circle’s center, simply click on the icon and the video can be watched in a pop-up on the same page.

Now notice the menu just to the right of the YouTube video. Geofeedia allows you to create geo-fences on the fly. For example, we can click on “Search by Polygon” and draw a “digital fence” of that shape directly onto the map with just a few clicks of the mouse. Say we’re interested in the residential area just north of Via Statale. Simply trace the area, double-click to finish and then press on the magnifying glass icon to search for the latest social media updates and Geofeedia will return all content with relevant geo-tags.
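Conceptually, a “Search by Polygon” query reduces to a point-in-polygon test against each geo-tagged item. Here is a minimal sketch using the shapely library; the library choice and the fence coordinates are my own assumptions, not a description of Geofeedia’s implementation.

```python
from shapely.geometry import Point, Polygon

# A hypothetical "digital fence" traced around a residential area
# (coordinates invented for illustration).
fence = Polygon([
    (11.000, 44.900),
    (11.030, 44.900),
    (11.030, 44.920),
    (11.000, 44.920),
])

def within_geofence(items, polygon):
    """Return only the items whose (lon, lat) geo-tag falls inside the polygon."""
    return [item for item in items if polygon.contains(Point(item["lon"], item["lat"]))]

items = [
    {"lon": 11.015, "lat": 44.910, "text": "Cracked wall on our street"},
    {"lon": 11.100, "lat": 44.950, "text": "Outside the fence"},
]
print(within_geofence(items, fence))
```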

The platform allows us to filter these results further via the “Settings” menu as displayed below. On the technical side, the tool’s API supports ATOM/RSS, JSON and GeoRSS formats.
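Because GeoRSS is one of the supported output formats, harvested items can be consumed with nothing more than the standard library. The snippet below parses a GeoRSS-Simple entry; the XML is an invented example, not an actual Geofeedia response.

```python
import xml.etree.ElementTree as ET

# Minimal GeoRSS-Simple snippet (invented example for illustration)
feed_xml = """<feed xmlns="http://www.w3.org/2005/Atom"
                    xmlns:georss="http://www.georss.org/georss">
  <entry>
    <title>Collapsed roof near the town centre</title>
    <georss:point>44.89 11.02</georss:point>
  </entry>
</feed>"""

NS = {"atom": "http://www.w3.org/2005/Atom", "georss": "http://www.georss.org/georss"}
for entry in ET.fromstring(feed_xml).findall("atom:entry", NS):
    title = entry.findtext("atom:title", namespaces=NS)
    lat, lon = map(float, entry.findtext("georss:point", namespaces=NS).split())
    print(title, lat, lon)
```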

Geofeedia has a lot of potential vis-a-vis humanitarian applications, which is why the Standby Volunteer Task Force (SBTF) is partnering with the group to explore this potential further. A forthcoming blog post on the SBTF blog will outline this partnership in more detail.

In the meantime, below are a few thoughts and suggestions for Phil and team on how they can make Geofeedia even more relevant and compelling for humanitarian applications. A quick qualifier is in order beforehand, however. I often have a tendency to ask for the moon when discovering a new platform I’m excited about. The suggestions that follow are thus not criticism at all but rather the result of my imagination gone wild. So big congrats to Phil and team for having built what is already a very, very neat platform!

  • Topical search feature that enables users to search by location and a specific theme or topic.
  • Delete function that allows users to delete content that is not relevant to them either from the Map or Collage interface. In the future, perhaps some “basic” machine learning algorithms could be added to learn what types of content the user does not want displayed or prioritized.
  • Add function that gives users the option of adding relevant multi-media content, say perhaps from a blog post, a Wikipedia entry, news article or (Geo)RSS feed. I would be particularly interested in seeing a Storyful feed integrated into Geofeedia, for example. The ability to add KML files could also be interesting, e.g., a KML of an earthquake’s epicenter and estimated impact.
  • Commenting function that enables users to comment on individual data points (Tweets, pictures, etc) and a “discussion forum” feature that enables users to engage in text-based conversation vis-a-vis a specific data point.
  • Storify feature that gives users the ability to turn their curated content into a storify-like story board with narrative. A Storify plugin perhaps.
  • Ushahidi feature that enables users to export an item (Tweet, picture, etc) directly to an Ushahidi platform with just one click. This feature should also allow for the automatic publishing of said item on an Ushahidi map.
  • Alerts function that allows one to turn a geo-fence into an automated alert feature. For example, once I’ve created my geo-fence, having an option that allows me (and others) to subscribe to this geo-fence for future updates could be particularly interesting. These alerts would be sent out as emails (and maybe SMS) with a link to the new picture or Tweet that has been geo-tagged within the geographical area of the geo-fence. Perhaps each geo-fence could tweet updates directly to anyone subscribed to that Geofeedia deployment.
  • Trends alert feature that gives users the option of subscribing to specific trends of interest. For example, I’d like to be notified if the number of data points in my geo-fence increases by more than 25% within a 24-hour time period (see the sketch after this list). Or more specifically whether the number of pictures has suddenly increased. These meta-level trends can provide important insights vis-a-vis early detection & response.
  • Analytics function that produces summary statistics and trends analysis for a geo-fence of interest. This is where Geofeedia could better capture temporal dynamics by including charts, graphs and simple time-series analysis to depict how events have been unfolding over the past hour vs 12 hours, 24 hours, etc.
  • Sentiment analysis feature that enables users to have an at-a-glance understanding of the sentiments and moods being expressed in the harvested social media content.
  • Augmented Reality feature … just kidding (sort-of).
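To illustrate the trends-alert idea from the list above, here is a minimal sketch of the underlying threshold logic: compare activity inside a geo-fence over the most recent 24 hours with the preceding 24 hours and fire an alert when the increase exceeds a configurable percentage. The function name and default threshold are illustrative assumptions, not an existing Geofeedia feature.

```python
def trend_alert(previous_24h_count, current_24h_count, threshold_pct=25.0):
    """Return True if activity inside a geo-fence grew by more than
    `threshold_pct` percent from one 24-hour window to the next.

    Function name and default threshold are illustrative assumptions.
    """
    if previous_24h_count == 0:
        return current_24h_count > 0
    change_pct = 100.0 * (current_24h_count - previous_24h_count) / previous_24h_count
    return change_pct > threshold_pct

# Example: 40 geo-tagged items yesterday, 60 today -> +50%, so the alert fires
if trend_alert(previous_24h_count=40, current_24h_count=60):
    print("Geo-fence activity up more than 25% in the last 24 hours")
```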

Naturally, most or all of the above may not be in line with Geofeedia’s vision, purpose or business model. But I very much look forward to collaborating with Phil & team vis-a-vis our SBTF partnership. A big thanks to Jeannine once again for pointing me to Geofeedia, and equally big thanks to my SBTF colleague Timo Luege for his blog post on the platform. I’m thrilled to see more colleagues actively blog about the application of new technologies for disaster response.

On this note, anyone familiar with this new Iremos platform (above picture) from France? They recently contacted me to offer a demo.

The Future of Crisis Mapping? Full-Sized Arcade Pinball Machines

Remember those awesome pinball machines (of the analog kind)? You’d launch the ball and see it bounce all over, reacting wildly to various fun objects as you accumulate bonus points. The picture below hardly does justice so have a look on YouTube for some neat videos. I wish today’s crisis maps were that dynamic. Instead, they’re still largely static and hardly as interactive or user-friendly.

Do we live in an inert, static universe? No, obviously we don’t, and yet the state of our crisis mapping platforms would seem to suggest otherwise; a rather linear and flat world, which reminds me more of this game:

Things are always changing and interacting around us. So we need maps with automated geo-fencing alerts that can trigger kinetic and non-kinetic actions. To this end, dynamic check-in features should be part and parcel of crisis mapping platforms as well. My check-in at a certain location and time of day should trigger relevant messages to certain individuals and things (cue the Internet of Things) both nearby and at a distance based on the weather and latest crime statistics, for example. In addition, crisis mapping platforms need to have more gamification options and “special effects”. Indeed, they should be more game-like in terms of consoles and user-interface design. They also ought to be easier to use and be more rewarding.
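As a rough sketch of what a check-in-driven trigger could look like, the snippet below fires different (hypothetical) messages depending on whether a check-in lands inside a monitored geo-fence and on the time of day; the fence, rules and subscriber list are all invented for illustration.

```python
from datetime import datetime
from shapely.geometry import Point, Polygon  # point-in-polygon test (illustrative choice)

# Hypothetical monitored area and subscriber list (invented for illustration)
flood_zone = Polygon([(36.80, -1.30), (36.85, -1.30), (36.85, -1.25), (36.80, -1.25)])
subscribers = ["+254700000001", "+254700000002"]

def handle_checkin(lon, lat, when=None):
    """Trigger illustrative, non-kinetic actions when a check-in lands
    inside the monitored geo-fence, with a different message after dark."""
    when = when or datetime.utcnow()
    if not flood_zone.contains(Point(lon, lat)):
        return []
    message = ("Night-time check-in in flood zone: nearest shelter info sent"
               if when.hour >= 19 or when.hour < 6
               else "Check-in in flood zone: latest weather advisory sent")
    # In a real system this is where SMS / push notifications would be dispatched.
    return [(number, message) for number in subscribers]

print(handle_checkin(36.82, -1.28, datetime(2012, 6, 1, 21, 30)))
```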

This explains why I blogged about the “Fisher Price Theory of Crisis Mapping” back in 2008. We’ve made progress over the past four years, for sure, but the ultimate pinball machine of crisis mapping still seems to be missing from the arcade of humanitarian technology.

State of the Art in Digital Disease Detection

Larry Brilliant’s TED Talk back in 2006 played an important role in catalyzing my own personal interest in humanitarian technology. Larry spoke about the use of natural language processing and computational linguistics for the early detection and early response to epidemics. So it was with tremendous honor and deep gratitude that I delivered the first keynote presentation at Harvard University’s Digital Disease Detection (DDD) conference earlier this year.

The field of digital disease detection has remained way ahead of the curve since 2006 in terms of leveraging natural language processing, computational linguistics and now crowdsourcing for the purposes of early detection of critical events. I thus highly, highly recommend watching the videos of the DDD Ignite Talks and panel presentations, which are all available here. Topics include “Participatory Surveillance,” “Monitoring Rumors,” “Twitter and Disease Detection,” “Search Query Surveillance,” “Open Source Surveillance,” “Mobile Disease Detection,” etc. The presentation on BioCaster is also well worth watching. I blogged about BioCaster here over three years ago and the platform is as impressive as ever.

These public health experts are really operating at the cutting-edge and their insights are proving important to the broader humanitarian technology community. To be sure, the potential added value of cross-fertilization between fields is tremendous. Just take this example of a public health data mining platform (HealthMap) being used by Syrian activists to detect evidence of killings and human rights violations.

Does Your Brand Have a Plot? How Great Leaders Inspire Action

“Does your brand have a plot?” I overheard this intriguing question whilst at SXSW 2012 earlier this year. The question came to mind again recently while watching Simon Sinek’s excellent TEDx talk on “How Great Leaders Inspire Action.” All too often, most companies seek to inspire customers (to purchase their product or service) by explaining “What they do” rather than “Why they do” what they do. This approach, as it turns out, is exactly the wrong way to catalyze inspiration. Sinek demonstrates how starting with why makes all the difference when seeking to inspire others.

Take Apple as an example:

“If Apple were like everyone else, a marketing message from them might sound like this: we make great computers; they are beautifully designed, simple to use and user-friendly. Want to buy one? … And that’s how most of us communicate, that’s how most marketing is done, that’s how most sales is done […]. We say what we do, we say how we’re different or how we’re better, and we expect some sort of behavior, a purchase or vote […]. But it’s uninspiring. Here’s how Apple actually communicates. Everything we do, we believe in changing the status quo, we believe in thinking differently. The way we challenge the status quo is by making our products beautifully designed, simple to use, and user-friendly. We just happen to make great computers. Want to buy one?”

Sinek’s main takeaway message from this example (and indeed his entire talk) is that people don’t buy what you do, they buy why you do it. This explains the importance of starting with why: your purpose, your cause, your belief. “The goal is not to do business with everybody who needs what you have; the goal is to do business with people who believe what you believe.” If you talk about what you believe, you will attract those who believe what you believe. These are your early adopters who have the potential to change the world. Again, “what you do simply proves what you believe.”

Why you believe what you believe is ultimately a story, a narrative. So what is your story? What is your plot? The answer to these questions is what will inspire others to join you in your cause. “If you want to build a ship,” wrote Antoine de St. Exupery, then “don’t drum up the men to gather wood, divide the work and give orders. Instead, teach them to yearn for the vast and endless sea.” Why you yearn for that open horizon is your story. Dr. Martin Luther King didn’t give the “I have a plan” speech on the steps of the Lincoln Memorial on that hot summer day in August 1963, he gave the “I have a dream” speech.

Disaster Response, Self-Organization and Resilience: Shocking Insights from the Haiti Humanitarian Assistance Evaluation

Tulane University and the State University of Haiti just released a rather damning evaluation of the humanitarian response to the 2010 earthquake that struck Haiti on January 12th. The comprehensive assessment, which takes a participatory approach and applies a novel resilience framework, finds that despite several billion dollars in “aid”, humanitarian assistance did not make a detectable contribution to the resilience of the Haitian population and in some cases increased certain communities’ vulnerability and even caused harm. Welcome to supply-side humanitarian assistance directed by external actors.

“All we need is information. Why can’t we get information?” This quote is taken from one of the many focus groups conducted by the evaluators. “There was little to no information exchange between the international community tasked with humanitarian response and the Haitian NGOs, civil society or affected persons / communities themselves.” Information is critical for effective humanitarian assistance, which should include two objectives: “preventing excess mortality and human suffering in the immediate, and in the longer term, improving the community’s ability to respond to potential future shocks.” This longer-term objective thus focuses on resilience, which the evaluation team defines as follows:

“Resilience is the capacity of the affected community to self-organize, learn from and vigorously recover from adverse situations stronger than it was before.”

This link between resilience and capacity for self-organization is truly profound and incredibly important. To be sure, the evaluation reveals that “the humanitarian response frequently undermined the capacity of Haitian individuals and organizations.” This completely violates the Hippocratic Oath of Do No Harm. The evaluators thus “promote the attainment of self-sufficiency, rather than the ongoing dependency on standard humanitarian assistance.” Indeed, “focus groups indicated that solutions to help people help themselves were desired.”

I find it particularly telling that many aid organizations interviewed for this assessment were reluctant to assist the evaluators in fully capturing and analyzing resource flows, which are critical for impact evaluation. “The lack of transparency in program dispersal of resources was a major constraint in our research of effective program evaluation.” To this end, the evaluation team argue that “by strengthening Haitian institutions’ ability to monitor and evaluate, Haitians will more easily be able to track and monitor international efforts.”

I completely disagree with this remedy. The institutions are part of the problem, and besides, institution-building takes years if not decades. To assume there is even political will and the resources for such efforts is at best misguided. If resilience is about strengthening the capacity of affected communities to self-organize, then I would focus on just that, applying existing technologies and processes that both catalyze and facilitate demand-side, people-centered self-organization. My previous blog post on “Technology and Building Resilient Societies to Mitigate the Impact of Disasters” elaborates on this point.

In sum, “resilience is the critical link between disaster and development; monitoring it will ensure that relief efforts are supporting, and not eroding, household and community capabilities.” This explains why crowdsourcing and data mining efforts like those of Ushahidi, HealthMap and UN Global Pulse are important for disaster response, self-organization and resilience.

Using Rayesna to Track the 2012 Egyptian Presidential Candidates on Twitter

My (future) colleagues at the Qatar Foundation’s Computing Research Institute (QCRI) have just launched a new platform that Al Jazeera is using to track the 2012 Egyptian Presidential Candidates on Twitter. Called Rayesna, which means “our president” in colloquial Egyptian Arabic, this fully automated platform uses cutting-edge Arabic computational linguistics processing developed by the Arabic Language Technology (ALT) group at QCRI.

“Through Rayesna, you can find out how many times a candidate is mentioned, which other candidate he is likely to appear with, and the most popular tweets for a candidate, with a special category for the most retweeted jokes about the candidates. The site also has a time-series to explore and compares the mentions of the candidate day-by-day. Caveats: 1. The site reflects only the people who choose to tweet, and this group may not be representative of general society; 2. Tweets often contain foul language and we do not perform any filtering.”
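The counting behind such a dashboard is simple to sketch: tally mentions and pairwise co-occurrences of candidate names across a stream of tweets. The snippet below uses English stand-ins and invented tweets purely for illustration; QCRI’s actual Arabic pipeline (name variants, dialect handling, retweet detection) is far more sophisticated.

```python
from collections import Counter
from itertools import combinations

# Illustrative candidate list and tweets (invented, English stand-ins;
# Rayesna itself works on Arabic text with proper name-variant matching).
candidates = ["Morsi", "Shafik", "Sabahi", "Moussa"]
tweets = [
    "Morsi and Shafik head to the runoff",
    "Sabahi supporters contest the count",
    "Morsi rally downtown today",
    "Shafik responds to Moussa's statement",
]

mentions = Counter()
co_mentions = Counter()
for tweet in tweets:
    found = [c for c in candidates if c.lower() in tweet.lower()]
    mentions.update(found)                              # per-candidate mention counts
    co_mentions.update(combinations(sorted(found), 2))  # which candidates appear together

print("Mentions:", mentions.most_common())
print("Co-mentions:", co_mentions.most_common())
```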

I look forward to collaborating with the ALT group and exploring how their platform might also be used in the context of humanitarian response in the Arab World and beyond. There may also be important synergies with the work of the UN Global Pulse, particularly vis-a-vis their use of Twitter for real-time analysis of vulnerable communities.

From Gunfire at Sea to Maps of War: Implications for Humanitarian Innovation

MIT Professor Eric von Hippel is the author of Democratizing Innovation, a book I should have read when it was first published seven years ago. The purpose of this blog post, however, is to share some thoughts on “Gunfire at Sea: A Case Study in Innovation” (PDF), which Eric recently instructed me to read. Authored by Elting Morison in 1968, this piece is definitely required reading for anyone engaged in disruptive innovation, particularly in the humanitarian space. Morison was one of the most distinguished historians of the last century and the founder of MIT’s Program in Science, Technology and Society (STS). The Boston Globe called him “an educator and industrial historian who believed that technology could only be harnessed to serve human beings when scientists and poets could meet with mutual understanding.”

Morison details in intriguing fashion the challenges of using light artillery at sea in the late 1800s to illustrate how new technologies and new forms of power collide and indeed, “bombard the fixed structure of our habits of mind and behavior.” The first major innovative disruption in naval gunfire technology was the result of one person’s acute observation. Admiral Sir Percy Scott happened to watch his men during target practice one day while the ship they were on was pitching and rolling acutely due to heavy weather. The resulting accuracy of the shots was dismal save for one man who was doing something slightly different to account for the swaying. Scott observed this positive deviance carefully and cobbled together existing technology to render the strategy easier to repeat and replicate. Within a year, his gun crews were remarkably accurate.

Note that Scott was not responsible for the invention of the basic instruments he cobbled together to scale the positive deviance he observed. Scott’s contribution, rather, was a mashup of existing technology made possible thanks to mechanical ingenuity and a keen eye for behavioral processes. As for the personality of the innovator, Scott possessed “a savage indignation directed ordinarily at the inelastic intelligence of all constituted authority, especially the British Admiralty.” Chance also plays a role in this story. “Fortune (in this case, the unaware gun pointer) indeed favors the prepared mind, but even fortune and the prepared mind need a favorable environment before they can conspire to produce sudden change. No intelligence can proceed very far above the threshold of existing data or the binding combinations of existing data.”

Whilst stationed in China several years later, Admiral Scott crosses paths with William Sims, an American Junior Officer of similar temperament. Sims’s efforts to reform the naval service are perhaps best told in his own words: “I am perfectly willing that those holding views differing from mine should continue to live, but with every fibre of my being I loathe indirection and shiftiness, and where it occurs in high place, and is used to save face at the expense of the vital interests of our great service (in which silly people place such a child-like trust), I want that man’s blood and I will have it no matter what it costs me personally.” Sims built on Scott’s inventions and made further modifications, resulting in new records in accuracy. “These elements were brought into successful combination by minds not interested in the instruments for themselves but in what they could do with them.”

“Sure of the usefulness of his gunnery methods, Sims then turned to the task of educating the Navy at large.” And this is where the fun really begins. His first strategy was to relay in writing the results of his methods “with a mass of factual data.” Sims authored over a dozen detailed data-driven reports on innovations in naval gunnery which he sent from his China Station to the powers that be in Washington DC. At first, there was no response from DC. Sims thus decided to change his tone by using deliberately shocking language in subsequent reports. Writes Sims: “I therefore made up my mind I would give these later papers such a form that they would be dangerous documents to leave neglected in the files.” Sims also decided to share his reports with other officers in the fleet to force a response from the men in Washington.

The response, however, was not exactly what Sims had hoped for. Washington’s opinion was that American technology was generally as good as the British, which implied that the trouble was with the men operating the technology, which thus meant that ship officers ought to conduct more training. What probably annoyed Sims most, however, was Washington’s comments vis-a-vis the new records in accuracy that Sims claimed to have achieved. Headquarters simply waved these off as impossible. So while the first reaction was dead silence, DC’s second strategy was to try and “meet Sims’s claims by logical, rational rebuttal.”

I agree with the author, Elting Morison, that this second stage reaction, “the apparent resort to reason,” is the “most entertaining and instructive in our investigation of the responses to innovation.” That said, the third stage, name-calling, can be just as entertaining for some, and Sims took the argumentum ad hominem as evidence that “he was being attacked by shifty, dishonest men who were the victims, as he said, of insufferable conceit and ignorance.” He thus took the extraordinary step of writing directly to the President of the United States, Theodore Roosevelt, to inform him of the remarkable achievements in accuracy that he and Admiral Scott had achieved. “Roosevelt, who always liked to respond to such appeals when he conveniently could, brought Sims back from China late in 1902 and installed him as Inspector of Target Practice […]. And when he left, after many spirited encounters […], he was universally acclaimed as ‘the man who taught us how to shoot.'”

What fascinates Morison in this story is the concerted resistance triggered by Sims’s innovation. Why so much resistance? Morison identifies three main sources: “honest disbelief in the dramatic but substantiated claims of the new process; protection of the existing devices and instruments with which they identified themselves; and maintenance of the existing society with which they were identified.” He argues that the latter explanation is the most important, i.e., resistance due to the “fixed structure of our habits of mind and behavior” and the fact that relatively small innovations in gunfire accuracy could quite conceivably unravel the entire fabric of naval doctrine. Indeed,

“From changes in gunnery flowed an extraordinary complex of changes: in shipboard routines, ship design, and fleet tactics. There was, too, a social change. In the days when gunnery was taken lightly, the gunnery officer was taken lightly. After 1903, he became one of the most significant and powerful members of a ship’s company, and this shift of emphasis naturally was shortly reflected in promotion lists. Each one of these changes provoked a dislocation in the naval society, and with man’s troubled foresight and natural indisposition to break up classic forms, the men in Washington withstood the Sims onslaught as long as they could. It is very significant that they withstood it until an agent from outside, outside and above, who was not clearly identified with the naval society, entered to force change.”

The resistance to change thus “springs from the normal human instinct to protect oneself, and more especially, one’s way of life.” Interestingly, “the deadlock between those who sought change and those who sought to retain things as they were was broken only by an appeal to superior force, a force removed from and unidentified with the mores, conventions, devices of the society. This seems to me a very important point.” The appeal to Roosevelt suggests perhaps that no organization “should or can undertake to reform itself. It must seek assistance from outside.”

I am absolutely intrigued by what these insights might imply vis-a-vis innovation (and resistance to innovation) in the humanitarian sector. Whether it be the result of combining existing technologies to produce open-source crisis mapping platforms or the use of new information management processes such as crowdsourcing, is concerted resistance to such innovation in the humanitarian space inevitable as well? Do we have a Roosevelt equivalent, i.e., an external and somewhat independent actor who might disrupt the resistance? I can definitely trace the same stages of resistance to innovations in humanitarian technology as those identified by Morison: (1) dead silence; (2) reasoned dismissal; and (3) name-calling. But as Morison himself is compelled to ask: “How then can we find the means to accept with less pain to ourselves and less damage to our social organization the dislocations in our society that are produced by innovation?”

This question, or rather Morison’s insights in tackling it, are profound and have important implications vis-a-vis innovation in the humanitarian space. Morison homes in on the imperative of “identification” in innovation:

“It cannot have escaped notice that some men identified themselves with their creations (sights, gun, gear, and so forth) and thus obtained a presumed satisfaction from the thing itself, a satisfaction that prevented them from thinking too closely on either the use or the defects of the thing; that others identified themselves with a settled way of life they had inherited or accepted with minor modification and thus found their satisfaction in attempting to maintain that way of life unchanged; and that still others identified themselves as rebellious spirits, men of the insurgent cast of mind, and thus obtained a satisfaction from the act of revolt itself.”

This purely personal identification is a powerful barrier to innovation. So can this identifying process be tampered with in order to facilitate change that is ultimately in everyone’s interest? Morison recommends that we “spend some time and thought on the possibility of enlarging the sphere of our identifications from the part to the whole.” In addition, he suggests an emphasis on process rather than product. If we take this advice to heart, what specific changes should we seek to make in the humanitarian technology space? How do we enlarge the sphere of our identifications and in doing so focus on processes rather than products? There’s no doubt that these are major challenges in and of themselves, but ignoring them may very well mean that important innovations in life-saving technologies and processes will go unadopted by large humanitarian organizations for many years to come.

Joining the Qatar Foundation to Advance Humanitarian Technology

Big news! I’ll be taking a senior level position at the Qatar Foundation to work on the next generation of humanitarian technology solutions. I’ll be based at the Foundation’s Computing Research Institute (QCRI) and be working alongside some truly amazing minds defining the cutting edge of social and scientific computing, computational linguistics, big data, etc. My role at QCRI will be to leverage the expertise within the Institute, the region and beyond to drive technology solutions for humanitarian and social impact globally—think of it as Computing for Good backed by some serious resources.  I’ll spend just part of the time in Doha. The rest of my time will be based wherever necessary to have the greatest impact. Needless to say, I’m excited!

My mission over the past five years has been to catalyze strategic linkages between the technology and humanitarian space to promote both innovation and change, so this new position feels like the perfect next chapter in the adventure. I’ve had the good fortune and distinct honor of working with some truly inspiring and knowledgeable colleagues who have helped me define and pursue my passions over the years. Needless to say, I’ve learned a great deal from these colleagues; knowledge, contacts and partnerships that I plan to fully leverage at the Qatar Foundation.

It really has been an amazing five years. I joined the Harvard Humanitarian Initiative (HHI) in 2007 to co-found and co-direct the Program on Crisis Mapping and Early Warning. The purpose of the program was to assess how new technologies were changing the humanitarian space and how these could be deliberately leveraged to yield more significant impact. As part of my time at HHI, I consulted on a number of cutting-edge projects including the UNDP’s Crisis and Risk Mapping Analysis (CRMA) Program in the Sudan. I also leveraged this iRevolution blog extensively to share my findings and learnings with both the humanitarian and technology communities. In addition, I co-authored the UN Foundation & Vodafone Foundation Report on “New Technologies in Emergencies and Conflicts” (PDF).

Towards the end of HHI’s program in 2009, I co-launched the Humanitarian Technology Network, CrisisMappers, and have co-organized and curated each International Conference of Crisis Mappers (ICCM) since then. The Network now includes close to 4,000 members based in some 200 countries around the world. Last year, ICCM 2011 brought together more than 400 participants to Geneva, Switzerland to explore and define the cutting edge of humanitarian technology. This year, ICCM 2012 is being hosted by the World Bank and will no doubt draw an even greater number of experts from the humanitarian & technology space.

I joined Ushahidi as Director of Crisis Mapping shortly after launching the Crisis Mappers Network. My goal was to better understand the field of crisis mapping from the perspective of a technology company and to engage directly with international humanitarian, human rights and media organizations so they too could better understand how to leverage the technologies in the Ushahidi ecosystem. There, I spearheaded several defining crisis mapping projects including Haiti, Libya, Somalia and Syria in partnership with key humanitarian, human rights and media organizations. I also spoke at many high-profile conferences to share many of the lessons learned and best practices resulting from these projects. I am very grateful to these conference organizers for giving me the stage at so many important events, thank you very much. And of course, special thanks to the team at Ushahidi for the truly life-changing experience.

Whilst at Ushahidi, I also completed my PhD during my pre-doctoral fellowship at Stanford and co-founded the award-winning Standby Volunteer Task Force (SBTF) to provide partner organizations with surge capacity for live mapping support. I co-created the SBTF’s Satellite Imagery Team to apply crowdsourcing and micro-tasking to satellite imagery analysis in support of humanitarian operations. I also explored a number of promising data mining solutions for social media analysis vis-a-vis crisis response. More recently, I co-launched the Digital Humanitarian Network (DHN) in partnership with a UN colleague.

The words “co-founded,” “co-launched,” and “co-directed” appear throughout the above because all these initiatives are the direct result of major teamwork, truly amazing partners and inspiring mentors. You all know who you are. Thank you very much for your guidance, expertise, friendship and for your camaraderie throughout. I look forward to collaborating with you even more once I get settled at the Qatar Foundation.

To learn more about QCRI’s work thus far, I recommend watching the above presentation given by the Institute’s Director who has brought together an incredible team—professionals who all share his ambition and exciting vision. When we began to discuss my job description at the Foundation, I was simply told: “Think Big.” The Institute’s Advisory Board is also a source of excitement for me: Joichi Ito (MIT) and Rich DeMillo (Georgia Tech), to name a few.

Naturally, the Qatar Foundation also has access to tremendous resources and an amazing set of partners from multiple sectors in Doha, the region and across the globe. In short, the opportunity for QCRI to become an important contributor to the humanitarian technology space is huge. I look forward to collaborating with many existing colleagues and partners to turn this exciting opportunity into reality and look forward to continuing this adventure with an amazing team of experts in Doha who are some of the best in their fields. More soon!