How to Create Resilience Through Big Data

Revised! I have edited this article several dozen times since posting the initial draft. I have also made a number of substantial changes to the flow of the article after discovering new connections, synergies and insights. In addition, I have greatly benefited from reader feedback as well as the very rich conversations that took place during the PopTech & Rockefeller workshop—a warm thank you to all participants for their important questions and feedback!

Introduction

I’ve been invited by PopTech and the Rockefeller Foundation to give the opening remarks at an upcoming event on interdisciplinary dimensions of resilience, which is being hosted at Georgetown University. This event is connected to their new program focus on “Creating Resilience Through Big Data.” I’m absolutely delighted to be involved and am very much looking forward to the conversations. The purpose of this blog post is to summarize the presentation I intend to give and to solicit feedback from readers. So please feel free to use the comments section below to share your thoughts. My focus is primarily on disaster resilience. Why? Because understanding how to bolster resilience to extreme events will provide insights on how to manage less extreme events as well, while the converse may not be true.

Big Data Resilience

Terminology

One of the guiding questions for the meeting is this: “How do you understand resilience conceptually at present?” First, discourse matters. The term resilience is important because it focuses not on us, the development and disaster response community, but rather on local at-risk communities. While “vulnerability” and “fragility” were used in past discourse, these terms focus on the negative and seem to invoke the need for external protection, overlooking the fact that many local coping mechanisms do exist. From the perspective of this top-down approach, international organizations are the rescuers and aid does not arrive until these institutions mobilize.

In contrast, the term resilience suggests radical self-sufficiency, and self-sufficiency implies a degree of autonomy: self-dependence rather than dependence on an external entity that may or may not arrive, that may or may not be effective, and that may or may not stay the course. The term “antifragile,” recently introduced by Nassim Taleb, also appeals to me. Antifragile systems thrive on disruption. But let’s stick with the term resilience, as antifragility will be the subject of a future blog post, i.e., I first need to finish reading Nassim’s book! I personally subscribe to the following definition of resilience: the capacity for self-organization; I shall expand on this shortly.

(See the Epilogue at the end of this blog post on political versus technical definitions of resilience and the role of the so-called “expert.” And keep in mind that poverty, cancer, terrorism, etc., are also resilient systems. Hint: we have much to learn from pernicious resilience and the organizational & collective action models that render those systems so resilient. In their book on resilience, Andrew Zolli and Ann Marie Healy note the strong similarities between Al-Qaeda & tuberculosis, one of which is the two systems’ ability to regulate their metabolism.)

Hazards vs Disasters

In the meantime, I first began to study the notion of resilience from the context of complex systems and in particular the field of ecology, which defines resilience as “the capacity of an ecosystem to respond to a perturbation or disturbance by resisting damage and recovering quickly.” Now let’s unpack this notion of perturbation. There is a subtle but fundamental difference between disasters (processes) and hazards (events); a distinction that Jean-Jacques Rousseau first articulated in 1755 when Portugal was shaken by an earthquake. In a letter to Voltaire one year later, Rousseau notes that “nature had not built [process] the houses which collapsed and suggested that Lisbon’s high population density [process] contributed to the toll” (1). In other words, natural events are hazards and exogenous, while disasters are the result of endogenous social processes. As Rousseau added in his note to Voltaire, “an earthquake occurring in wilderness would not be important to society” (2). That is, a hazard need not turn into disaster, since the latter is strictly a product or calculus of social processes (structural violence).

And so, while disasters were traditionally perceived as “sudden and short lived events, there is now a tendency to look upon disasters in African countries in particular, as continuous processes of gradual deterioration and growing vulnerability,” which has important “implications on the way the response to disasters ought to be made” (3). (Strictly speaking, the technical difference between events and processes is one of scale, both temporal and spatial, but that need not distract us here). This shift towards disasters as processes is particularly profound for the creation of resilience, not least through Big Data. To understand why requires a basic introduction to complex systems.

Complex Systems

All complex systems tend to veer towards critical change. This is explained by the process of Self-Organized Criticality (SOC). Over time, non-equilibrium systems with extended degrees of freedom and a high level of nonlinearity become increasingly vulnerable to collapse. Social, economic and political systems certainly qualify as complex systems. As my “alma mater” the Santa Fe Institute (SFI) notes, “The archetype of a self-organized critical system is a sand pile. Sand is slowly dropped onto a surface, forming a pile. As the pile grows, avalanches occur which carry sand from the top to the bottom of the pile” (4). That is, the sand pile becomes increasingly unstable over time.

Consider an hourglass or sand clock as an illustration of self-organized criticality. Grains of sand sifting through the narrowest point of the hourglass represent individual events or natural hazards. Over time a sand pile starts to form. How this process unfolds depends on how society chooses to manage risk. A laissez-faire attitude will result in a steeper pile. And a grain of sand falling on an increasingly steep pile will eventually trigger an avalanche. Disaster ensues.
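For readers who prefer code to metaphors, the sand-pile dynamic can be simulated directly. Below is a minimal sketch of the classic Bak–Tang–Wiesenfeld sandpile model; the grid size, toppling threshold and number of drops are arbitrary choices for illustration. Grains land at random sites, and any site holding too many grains topples onto its neighbors, possibly triggering a cascade.

```python
import random

def drop_and_topple(grid, size, threshold=4):
    """Drop one grain at a random site, then topple every site that
    reaches the threshold; return the avalanche size (topple count)."""
    x, y = random.randrange(size), random.randrange(size)
    grid[x][y] += 1
    avalanche = 0
    unstable = [(x, y)]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < threshold:
            continue
        grid[i][j] -= threshold  # the site "topples"
        avalanche += 1
        # One grain goes to each neighbor; grains at the edge fall off.
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < size and 0 <= nj < size:
                grid[ni][nj] += 1
                if grid[ni][nj] >= threshold:
                    unstable.append((ni, nj))
    return avalanche

random.seed(42)
size = 20
grid = [[0] * size for _ in range(size)]
sizes = [drop_and_topple(grid, size) for _ in range(20000)]
print(max(sizes[:1000]), max(sizes))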

Why does the avalanche occur? One might ascribe the cause of the avalanche to that one grain of sand, i.e., a single event. On the other hand, a complex systems approach to resilience would associate the avalanche with the pile’s increasing slope, a historical process which renders the structure increasingly vulnerable to falling grains. From this perspective, “all disasters are slow onset when realistically and locally related to conditions of susceptibility”. A hazard event might be rapid-onset, but the disaster, requiring much more than a hazard, is a long-term process, not a one-off event. The resilience of a given system is therefore not simply dependent on the outcome of future events. Resilience is the complex product of past social, political, economic and even cultural processes.

Dealing with Avalanches

Scholars like Thomas Homer-Dixon argue that we are becoming increasingly prone to domino effects or cascading changes across systems, thus increasing the likelihood of total synchronous failure. “A long view of human history reveals not regular change but spasmodic, catastrophic disruptions followed by long periods of reinvention and development.” We must therefore “reduce as much as we can the force of the underlying tectonic stresses in order to lower the risk of synchronous failure—that is, of catastrophic collapse that cascades across boundaries between technological, social and ecological systems” (5).

Unlike the clock’s lifeless grains of sand, human beings can adapt and maximize their resilience to exogenous shocks through disaster preparedness, mitigation and adaptation—which all require political will. As a colleague of mine recently noted, “I wish it were widely spread amongst society how important being a grain of sand can be.” Individuals can “flatten” the structure of the sand pile into a less hierarchical but more resilient system, thereby distributing and diffusing the risk and size of an avalanche. Call it distributed adaptation.

Operationalizing Resilience

As noted earlier, the field of ecology defines resilience as “the capacity of an ecosystem to respond to a perturbation or disturbance by resisting damage and recovering quickly.” Using this understanding of resilience, there are at least two ways to create more resilient “social ecosystems”:

  1. Resist damage by absorbing and dampening the perturbation.
  2. Recover quickly by bouncing back or rather forward.

Resisting Damage

So how does a society resist damage from a disaster? As hinted earlier, there is no such thing as a “natural” disaster. There are natural hazards and there are social systems. If social systems are not sufficiently resilient to absorb the impact of a natural hazard such as an earthquake, then disaster unfolds. In other words, hazards are exogenous while disasters are the result of endogenous political, economic, social and cultural processes. Indeed, “it is generally accepted among environmental geographers that there is no such thing as a natural disaster. In every phase and aspect of a disaster—causes, vulnerability, preparedness, results and response, and reconstruction—the contours of disaster and the difference between who lives and dies is to a greater or lesser extent a social calculus” (6).

So how do we apply this understanding of disasters and build more resilient communities? Focusing on people-centered early warning systems is one way to do this. In 2006, the UN’s International Strategy for Disaster Reduction (ISDR) recognized that top-down early warning systems for disaster response were increasingly ineffective. They thus called for a more bottom-up approach in the form of people-centered early warning systems. The UN ISDR’s Global Survey of Early Warning Systems (PDF), defines the purpose of people-centered early warning systems as follows:

“… to empower individuals and communities threatened by hazards to act in sufficient time and in an appropriate manner so as to reduce the possibility of personal injury, loss of life, damage to property and the environment, and loss of livelihoods.”

Information plays a central role here. Acting in sufficient time requires having timely information about (1) the hazard(s), (2) our resilience and (3) how to respond. This is where information and communication technologies (ICTs), social media and Big Data play an important role. Take the latter, for example. One reason for the considerable interest in Big Data is prediction and anomaly detection. Weather and climatic sensors provide meteorologists with the copious amounts of data necessary for the timely prediction of weather patterns and early detection of atmospheric hazards. In other words, Big Data Analytics can be used to anticipate the falling grains of sand.
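To give a flavor of the kind of anomaly detection involved, here is a minimal sketch. The data, window size and threshold are all invented for illustration (real meteorological systems use far more sophisticated models): it simply flags readings that deviate sharply from their recent history.

```python
from statistics import mean, stdev

def rolling_anomalies(readings, window=24, threshold=3.0):
    """Flag indices where a reading deviates from the trailing
    window's mean by more than `threshold` standard deviations."""
    flagged = []
    for t in range(window, len(readings)):
        history = readings[t - window:t]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(readings[t] - mu) / sigma > threshold:
            flagged.append(t)
    return flagged

# Synthetic barometric-pressure series: stable readings, then a sharp
# drop of the kind that precedes a storm.
pressure = [1013.0 + 0.1 * (i % 5) for i in range(48)] + [995.0, 990.0]
print(rolling_anomalies(pressure))
```

The point is not the statistics but the feedback loop: a flagged anomaly is a falling grain of sand detected in time to act.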

Now, predictions are often not correct. But the analysis of Big Data can also help us characterize the sand pile itself, i.e., our resilience, along with the associated trends towards self-organized criticality. Recall that complex systems tend towards instability over time (think of the hourglass above). Thanks to ICTs, social media and Big Data, we now have the opportunity to better characterize in real-time the social, economic and political processes driving our sand pile. Now, this doesn’t mean that we have a perfect picture of the road to collapse; simply that our picture is clearer than ever before in human history. In other words, we can better measure our own resilience. Think of it as the Quantified Self movement applied to an entirely different scale, that of societies and cities. The point is that Big Data can provide us with more real-time feedback loops than ever before. And as scholars of complex systems know, feedback loops are critical for adaptation and change. Thanks to social media, these loops also include peer-to-peer feedback loops.

An example of monitoring resilience in real-time (and potentially anticipating future changes in resilience) is the UN Global Pulse’s project on food security in Indonesia. They partnered with Crimson Hexagon to forecast food prices in Indonesia by analyzing tweets referring to the price of rice. They found an interesting relationship between said tweets and government statistics on food price inflation. Some have described the rise of social media as a new nervous system for the planet, capturing the pulse of our social systems. My colleagues and I at QCRI are therefore in the process of applying this approach to the study of the Arabic Twittersphere. Incidentally, this is yet another critical reason why Open Data is so important (check out the work of OpenDRI, the Open Data for Resilience Initiative. See also this post on Democratizing ICT for Development with DIY Innovation and Open Data). More on open data and data philanthropy in the conclusion.
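To make the Global Pulse example concrete: at its core, such an analysis correlates two time series. The sketch below uses made-up weekly numbers purely for illustration (the actual project worked with Crimson Hexagon's platform and real tweet volumes).

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical weekly counts of tweets mentioning the price of rice,
# alongside a hypothetical official food-price index for the same weeks.
tweet_counts = [120, 135, 150, 180, 260, 310, 400, 420]
price_index = [100, 101, 103, 106, 113, 118, 126, 129]

print(round(pearson(tweet_counts, price_index), 3))
```

A strong correlation like this is what makes the social-media signal usable as a real-time proxy for slower official statistics.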

Finally, new technologies can also provide guidance on how to respond. Think of Foursquare but applied to disaster response. Instead of “Break Glass in Case of Emergency,” how about “Check-In in Case of Emergency”? Numerous smartphone apps such as Waze already provide this kind of at-a-glance, real-time situational awareness. It is only a matter of time until humanitarian organizations develop disaster response apps that will enable disaster-affected communities to check in for real-time guidance on what to do given their current location and level of resilience. Several disaster preparedness apps already exist. Social computing and Big Data Analytics can power these apps in real-time.

Quick Recovery

As already noted, there are at least two ways to create more resilient “social ecosystems”. We just discussed the first: resisting damage by absorbing and dampening the perturbation. The second way to grow more resilient societies is by enabling them to rapidly recover following a disaster.

As Manyena writes, “increasing attention is now paid to the capacity of disaster-affected communities to ‘bounce back’ or to recover with little or no external assistance following a disaster.” So what factors accelerate recovery in ecosystems in general? In ecological terms, how quickly the damaged part of an ecosystem can repair itself depends on how many feedback loops it has to the non- (or less-) damaged parts of the ecosystem(s). These feedback loops are what enable adaptation and recovery. In social ecosystems, these feedback loops can consist of information in addition to the transfer of tangible resources. As some scholars have argued, a disaster is first of all “a crisis in communicating within a community—that is, a difficulty for someone to get informed and to inform other people” (7).

Improving ways for local communities to communicate internally and externally is thus an important part of building more resilient societies. Indeed, as Homer-Dixon notes, “the part of the system that has been damaged recovers by drawing resources and information from undamaged parts.” Identifying needs following a disaster and matching them to available resources is an important part of the process. Indeed, accelerating the rate of (1) identification, (2) matching and (3) allocation are important ways to speed up overall recovery.

This explains why ICTs, social media and Big Data are central to growing more resilient societies. They can accelerate impact evaluations and needs assessments at the local level. Population displacement following disasters poses a serious public health risk. So rapidly identifying these risks can help affected populations recover more quickly. Take the work carried out by my colleagues at Flowminder, for example. They empirically demonstrated that mobile phone data (Big Data!) can be used to predict population displacement after major disasters. Take also this study which analyzed call dynamics to demonstrate that telecommunications data could be used to rapidly assess the impact of earthquakes. A related study showed similar results when analyzing SMS traffic and building damage in Haiti after the 2010 earthquake.


Resilience as Self-Organization and Emergence

Connection technologies such as mobile phones allow individual “grains of sand” in our societal “sand pile” to make the connections and decisions necessary to self-organize and rapidly recover from disasters. With appropriate incentives, preparedness measures and policies, these local decisions can render a complex system more resilient. At the core here is behavior change and thus the importance of understanding behavior change models. Recall also Thomas Schelling’s observation that micro-motives can lead to macro-behavior. To be sure, as Thomas Homer-Dixon rightly notes, “Resilience is an emergent property of a system—it’s not a result of any one of the system’s parts but of the synergy between all of its parts. So as a rough and ready rule, boosting the ability of each part to take care of itself in a crisis boosts overall resilience.” (For complexity science readers, the notion of transformation through phase transitions is relevant to this discussion).

In other words, “Resilience is the capacity of the affected community to self-organize, learn from and vigorously recover from adverse situations stronger than it was before” (8). This link between resilience and capacity for self-organization is very important, which explains why a recent and major evaluation of the 2010 Haiti Earthquake disaster response promotes the “attainment of self-sufficiency, rather than the ongoing dependency on standard humanitarian assistance.” Indeed, “focus groups indicated that solutions to help people help themselves were desired.”

The fact of the matter is that we are not all affected in the same way during a disaster. (Recall the distinction between hazards and disasters discussed earlier). Those of us who are less affected almost always want to help those in need. Herein lies the critical role of peer-to-peer feedback loops. To be sure, the speed at which the damaged part of an ecosystem can repair itself depends on how many feedback loops it has to the non- (or less-) damaged parts of the ecosystem(s). These feedback loops are what enable adaptation and recovery.

Lastly, disaster response professionals cannot be everywhere at the same time. But the crowd is always there. Moreover, the vast majority of lives saved following major disasters cannot be attributed to external aid. One study estimates that at most 10% of external aid contributes to saving lives. Why? Because the real first responders are the disaster-affected communities themselves, the local population. That is, the first feedback loops are always local. This dynamic of mutual aid facilitated by social media is certainly not new, however. My colleagues in Russia did this back in 2010 during the major forest fires that ravaged their country.

While I do have a bias towards people-centered interventions, this does not mean that I discount the importance of feedback loops to external actors such as traditional institutions and humanitarian organizations. Nor do I mean to romanticize the notion of “indigenous technical knowledge” or local coping mechanisms. Some violate my own definition of human rights, for example. However, my bias stems from the fact that I am particularly interested in disaster resilience within the context of areas of limited statehood, where said institutions and organizations are either absent or ineffective. But I certainly recognize the importance of scale jumping, particularly within the context of social capital and social media.

Resilience Through Social Capital

Information-based feedback loops generate social capital, and the latter has been shown to improve disaster resilience and recovery. In his recent book entitled “Building Resilience: Social Capital in Post-Disaster Recovery,” Daniel Aldrich draws on both qualitative and quantitative evidence to demonstrate that “social resources, at least as much as material ones, prove to be the foundation for resilience and recovery.” His case studies suggest that social capital is more important for disaster resilience than physical and financial capital, and more important than conventional explanations. So the question that naturally follows given our interest in resilience & technology is this: can social media (which is not restricted by geography) influence social capital?

Social Capital

Building on Daniel’s research and my own direct experience in digital humanitarian response, I argue that social media does indeed nurture social capital during disasters. “By providing norms, information, and trust, denser social networks can implement a faster recovery.” Such norms also evolve on Twitter, as does information sharing and trust building. Indeed, “social ties can serve as informal insurance, providing victims with information, financial help and physical assistance.” This informal insurance, “or mutual assistance involves friends and neighbors providing each other with information, tools, living space, and other help.” Again, this bonding is not limited to offline dynamics but occurs also within and across online social networks. Recall the sand pile analogy. Social capital facilitates the transformation of the sand pile away (temporarily) from self-organized criticality. On a related note vis-a-vis open source software, “the least important part of open source software is the code.” Indeed, more important than the code is the fact that open source fosters social ties, networks, communities and thus social capital.

(Incidentally, social capital generated during disasters is social capital that can subsequently be used to facilitate self-organization for non-violent civil resistance and vice versa).

Resilience Through Big Data

My empirical research on tweets posted during disasters clearly shows that while many use Twitter (and social media more generally) to post needs during a crisis, those who are less affected in the social ecosystem will often post offers to help. So where does Big Data fit into this particular equation? When disaster strikes, access to information is as important as access to food and water. This link between information, disaster response and aid was officially recognized by the Secretary General of the International Federation of Red Cross & Red Crescent Societies in the World Disasters Report published in 2005. Since then, disaster-affected populations have become increasingly digital thanks to the very rapid and widespread adoption of mobile technologies. Indeed, as a result of these mobile technologies, affected populations are increasingly able to source, share and generate a vast amount of information, which is completely transforming disaster response.

In other words, disaster-affected communities are increasingly becoming the source of Big (Crisis) Data during and following major disasters. There were over 20 million tweets posted during Hurricane Sandy. And when the major earthquake and tsunami hit Japan in early 2011, over 5,000 tweets were being posted every second. That is 1.5 million tweets every 5 minutes. So how can Big Data Analytics create more resilience in this respect? More specifically, how can Big Data Analytics accelerate disaster recovery? Manually monitoring millions of tweets per minute is hardly feasible. This explains why I often “joke” that we need a local Match.com for rapid disaster recovery. Thanks to social computing, artificial intelligence, machine learning and Big Data Analytics, we can absolutely develop a “Match.com” for rapid recovery. In fact, I’m working on just such a project with my colleagues at QCRI. We are also developing algorithms to automatically identify informative and actionable information shared on Twitter, for example. (Incidentally, a by-product of developing a robust Match.com for disaster response could very well be an increase in social capital).
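What might such a “Match.com” for disaster recovery look like at its very simplest? The sketch below is purely illustrative: the keyword lists, messages and first-come matching rule are all invented, and real systems (including our work at QCRI) rely on machine learning rather than keyword matching. But it conveys the basic idea of pairing needs with offers.

```python
NEED_WORDS = {"need", "require", "looking for", "out of"}
OFFER_WORDS = {"offering", "can provide", "have spare", "donating"}
RESOURCES = {"water", "shelter", "food", "medicine", "blankets"}

def classify(message):
    """Return ('need'|'offer'|None, resource|None) for one message."""
    text = message.lower()
    kind = ("need" if any(w in text for w in NEED_WORDS)
            else "offer" if any(w in text for w in OFFER_WORDS)
            else None)
    resource = next((r for r in RESOURCES if r in text), None)
    return kind, resource

def match(messages):
    """Pair each need with the earliest unmatched offer of the same
    resource. (In this toy version an offer must arrive before the
    need it satisfies.)"""
    offers, pairs = {}, []
    for msg in messages:
        kind, resource = classify(msg)
        if kind == "offer" and resource:
            offers.setdefault(resource, []).append(msg)
        elif kind == "need" and resource and offers.get(resource):
            pairs.append((msg, offers[resource].pop(0)))
    return pairs

stream = [
    "Offering bottled water at the stadium",
    "We need water urgently in Carrefour",
    "Can provide blankets, DM me",
]
for need, offer in match(stream):
    print(need, "->", offer)
```

Scaling this from three messages to millions of tweets per minute is precisely where social computing and Big Data Analytics come in.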

There are several other ways that advanced computing can create disaster resilience using Big Data. One major challenge in digital humanitarian response is the verification of crowdsourced, user-generated content. Indeed, misinformation and rumors can be highly damaging. If access to information is as important as access to food, as noted by the Red Cross, then misinformation is like poisoned food. But Big Data Analytics has already shed some light on how to develop potential solutions. As it turns out, non-credible disaster information shared on Twitter propagates differently than credible information, which means that the credibility of tweets could be predicted automatically.
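To give a flavor of what automatic credibility prediction involves, here is a deliberately simplified scoring sketch. The features and weights below are invented for illustration only; published research derives such weights from large labeled datasets and uses many more signals, including the propagation patterns mentioned above.

```python
import math

# Illustrative features and hand-picked weights (made up for this
# sketch; a real classifier learns these from labeled training data).
WEIGHTS = {
    "has_url": 0.9,            # tweets citing sources tend to be more credible
    "from_verified": 1.2,      # account-level signal
    "exclamation_marks": -0.6, # sensationalist style counts against
    "bias": -0.2,
}

def credibility_score(tweet):
    """Logistic score in (0, 1): higher means more likely credible."""
    z = (WEIGHTS["bias"]
         + WEIGHTS["has_url"] * tweet["has_url"]
         + WEIGHTS["from_verified"] * tweet["from_verified"]
         + WEIGHTS["exclamation_marks"] * tweet["exclamation_marks"])
    return 1 / (1 + math.exp(-z))

credible = {"has_url": 1, "from_verified": 1, "exclamation_marks": 0}
dubious = {"has_url": 0, "from_verified": 0, "exclamation_marks": 3}
print(round(credibility_score(credible), 2), round(credibility_score(dubious), 2))
```

Even a crude filter like this, applied upstream of human verification, can reduce the volume of poisoned food in the information diet.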

Conclusion

In sum, “resilience is the critical link between disaster and development; monitoring it [in real-time] will ensure that relief efforts are supporting, and not eroding […] community capabilities” (9). While the focus of this blog post has been on disaster resilience, I believe the insights provided are equally informative for less extreme events. So I’d like to end on two major points. The first has to do with data philanthropy while the second emphasizes the critical importance of failing gracefully.

Big Data is Closed and Centralized

A considerable amount of “Big Data” is Big Closed and Centralized Data. Flowminder’s study mentioned above draws on highly proprietary telecommunications data. Facebook data, which has immense potential for humanitarian response, is also closed. The same is true of Twitter data, unless you have millions of dollars to pay for access to the full Firehose, or even Decahose. While access to the Twitter API is free, the number of tweets that can be downloaded and analyzed is limited to several thousand a day. Contrast this with the 5,000 tweets per second posted after the earthquake and tsunami in Japan. We therefore need some serious political will from the corporate sector to engage in “data philanthropy”. Data philanthropy involves companies sharing proprietary datasets for social good. Call it Corporate Social Responsibility (CSR) for digital humanitarian response. More here on how this would work.

Failing Gracefully

Lastly, on failure. As noted, complex systems tend towards instability, i.e., self-organized criticality, which is why Homer-Dixon introduces the notion of failing gracefully. “Somehow we have to find the middle ground between dangerous rigidity and catastrophic collapse.” He adds that:

“In our organizations, social and political systems, and individual lives, we need to create the possibility for what computer programmers and disaster planners call ‘graceful’ failure. When a system fails gracefully, damage is limited, and options for recovery are preserved. Also, the part of the system that has been damaged recovers by drawing resources and information from undamaged parts.” Homer-Dixon explains that “breakdown is something that human social systems must go through to adapt successfully to changing conditions over the long term. But if we want to have any control over our direction in breakdown’s aftermath, we must keep breakdown constrained. Reducing as much as we can the force of underlying tectonic stresses helps, as does making our societies more resilient. We have to do other things too, and advance planning for breakdown is undoubtedly the most important.”

As Louis Pasteur famously noted, “Chance favors the prepared mind.” Preparing for breakdown is not defeatist or passive. Quite the contrary: it is wise and proactive. Our hubris—including our current infatuation with Big Data—all too often clouds our better judgment. Like Macbeth, rarely do we seriously ask ourselves what we would do “if we should fail.” The answer “then we fail” is an option. But are we truly prepared to live with the devastating consequences of total synchronous failure?

In closing, some lingering (less rhetorical) questions:

  • How can resilience be measured? Is there a lowest common denominator? What is the “atom” of resilience?
  • What are the triggers of resilience, creative capacity, local improvisation, regenerative capacity? Can these be monitored?
  • Where do the concepts of “lived reality” and “positive deviance” enter the conversation on resilience?
  • Is resiliency a right? Do we bear a responsibility to render systems more resilient? If so, recalling that resilience is the capacity to self-organize, do local communities have the right to self-organize? And how does this differ from democratic ideals and freedoms?
  • Recent research in social psychology has demonstrated that mindfulness is an amplifier of resilience for individuals. How can this be scaled up? Do cultures and religions play a role here?
  • Collective memory influences resilience. How can this be leveraged to catalyze more regenerative social systems?


Epilogue: Some colleagues have rightfully pointed out that resilience is ultimately political. I certainly share that view, which is why this point came up in recent conversations with my PopTech colleagues Andrew Zolli & Leetha Filderman. Readers of my post will also have noted my emphasis on distinguishing between hazards and disasters; that the latter are the product of social, economic and political processes. As noted in my blog post, there are no natural disasters. To this end, some academics rightly warn that “Resilience is a very technical, neutral, apolitical term. It was initially designed to characterize systems, and it doesn’t address power, equity or agency… Also, strengthening resilience is not free—you can have some winners and some losers.”

As it turns out, I have a lot to say about the political versus technical argument. First of all, this is hardly a new or original argument, but it is nevertheless an important one. Amartya Sen discussed this issue within the context of famines decades ago, noting that famines do not take place in democracies. In 1997, Alex de Waal published his seminal book, “Famine Crimes: Politics and the Disaster Relief Industry in Africa.” As he rightly notes, “Fighting famine is both a technical and political challenge.” Unfortunately, “one universal tendency stands out: technical solutions are promoted at the expense of political ones.” There is also a tendency to overlook the politics of technical actions, muddle or cover political actions with technical ones, or worse, to use technical measures as an excuse not to undertake needed political action.

De Waal argues that the use of the term “governance” was “an attempt to avoid making the political critique too explicit, and to enable a focus on specific technical aspects of government.” In some evaluations of development and humanitarian projects, “a caveat is sometimes inserted stating that politics lies beyond the scope of this study.” To this end, “there is often a weak call for ‘political will’ to bridge the gap between knowledge of technical measures and action to implement them.” As de Waal rightly notes, “the problem is not a ‘missing link’ but rather an entire political tradition, one manifestation of which is contemporary international humanitarianism.” In sum, “technical ‘solutions’ must be seen in the political context, and politics itself in the light of the dominance of a technocratic approach to problems such as famine.”

From a paper I presented back in 2007: “the technological approach almost always serves those who seek control from a distance.” As a result of this technological drive for pole position, a related “concern exists due to the separation of risk evaluation and risk reduction between science and political decision” so that which is inherently politically complex becomes depoliticized and mechanized. In Toward a Rational Society (1970), the German philosopher Jürgen Habermas describes “the colonization of the public sphere through the use of instrumental technical rationality. In this sphere, complex social problems are reduced to technical questions, effectively removing the plurality of contending perspectives.”

To be sure, Western science tends to pose the question “How?” as opposed to “Why?” What happens then is that “early warning systems tend to be largely conceived as hazard-focused, linear, top-down, expert driven systems, with little or no engagement of end-users or their representatives.” As de Waal rightly notes, “the technical sophistication of early warning systems is offset by a major flaw: response cannot be enforced by the populace. The early warning information is not normally made public.” In other words, disaster prevention requires “not merely identifying causes and testing policy instruments but building a [social and] political movement” since “the framework for response is inherently political, and the task of advocacy for such response cannot be separated from the analytical tasks of warning.”

Recall my emphasis on people-centered early warning above and the definition of resilience as capacity for self-organization. Self-organization is political. Hence my efforts to promote greater linkages between the fields of nonviolent action and early warning years ago. I have a paper (dated 2008) specifically on this topic should anyone care to read it. Anyone who has read my doctoral dissertation will also know that I have long been interested in the impact of technology on the balance of power in political contexts. A relevant summary is available here. Now, why did I not include all this in the main body of my blog post? Because this updated section already runs over 1,000 words.

In closing, I disagree with the over-used criticism that resilience is reactive and about returning to initial conditions. Why would we want to be reactive or return to initial conditions if the latter state contributed to the subsequent disaster we are recovering from? When my colleague Andrew Zolli talks about resilience, he talks about “bouncing forward”, not bouncing back. This is also true of Nassim Taleb’s term antifragility, the ability to thrive on disruption. As Homer-Dixon also notes, preparing to fail gracefully is hardly reactive either.

Personal Reflections: 3 Years After the Haiti Earthquake

The devastating earthquake that struck Port-au-Prince on January 12, 2010 killed as many as 200,000 people. My fiancée and five close friends were in Haiti at the time and narrowly escaped a collapsing building. They were some of the lucky few survivors. But I had no knowledge that they had survived until some 8 hours after the earthquake because we were unable to get any calls through. The Haiti Crisis Map I subsequently spearheaded still stands as the most psychologically and emotionally difficult project I’ve ever been a part of.

The heroes of this initiative and the continuing source of my inspiration today were the hundreds and hundreds of volunteers who ensured the Haiti Crisis Map remained live for so many weeks. The majority of these volunteers were of course the Haitian Diaspora as well as Haitians in country. I had the honor of meeting and working with one of these heroes while in Port-au-Prince, Kurt Jean-Charles, the CEO of the Haitian software company Solutions.ht. I invited Kurt to give the Keynote at the 2010 International Crisis Mappers Conference (ICCM 2010) and highly recommend watching the video above. Kurt speaks directly from the heart.

HaitianDiaspora

Another personal hero of mine (pictured above) is Sabina Carlson—now Sabina Carlson Robillard following her recent wedding to Louino in Port-au-Prince! She volunteered as the Haitian Diaspora Liaison for the Haiti Crisis Map and has been living in Cité Soleil ever since. Needless to say, she continues to inspire all of us who have had the honor of working with her and learning from her.

Last, but certainly not (!) least, are the many, many hundreds of amazing volunteers who tirelessly translated tens of thousands of text messages for this project. Thanks to you, some 1,500 messages from the disaster-affected population were added to the live crisis map of Haiti. This link points to the only independent, rigorous and professional evaluation of the project that exists. I highly recommend reading this report as it comprises a number of important lessons learned in crisis mapping and digital humanitarian response.

Fonkoze

In the meantime, please consider making a donation to Fonkoze, an outstanding local organization committed to the social and economic improvement of the Haitian poor. Fonkoze is close to my heart not only because of the great work that they do but also because its staff and CEO were the ones who ensured the safe return of my fiancée and friends after the earthquake. In fact, my fiancée has continued to collaborate with them ever since and still works on related projects in Haiti. She is headed back to Port-au-Prince this very weekend. To make a tax deductible donation to Fonkoze, please visit this link. Thank you.

My thoughts & prayers go out to all those who lost loved ones in Haiti years ago.

Comparing the Quality of Crisis Tweets Versus 911 Emergency Calls

In 2010, I published this blog post entitled “Calling 911: What Humanitarians Can Learn from 50 Years of Crowdsourcing.” Since then, humanitarian colleagues have become increasingly open to the use of crowdsourcing as a methodology to both collect and process information during disasters. I’ve been studying the use of Twitter in crisis situations and have been particularly interested in the quality, actionability and credibility of such tweets. My findings, however, ought to be placed in context and compared to other, more traditional, reporting channels, such as the use of official emergency telephone numbers. Indeed, “Information that is shared over 9-1-1 dispatch is all unverified information” (1).

911ex

So I did some digging and found the following statistics on 911 (US) & 999 (UK) emergency calls:

  • “An astounding 38% of some 10.4 million calls to 911 [in New York City] during 2010 involved such accidental or false alarm ‘short calls’ of 19 seconds or less — that’s an average of 10,700 false calls a day”.  – Daily News
  • “Last year, seven and a half million emergency calls were made to the police in Britain. But fewer than a quarter of them turned out to be real emergencies, and many were pranks or fakes. Some were just plain stupid.” – ABC News

I also came across the table below in this official report (PDF) published in 2011 by the European Emergency Number Association (EENA). The Greeks top the chart with a staggering 99% of all emergency calls turning out to be false/hoaxes, while Estonians appear to be holier than the Pope with less than 1% of such calls.

[Table: EENA statistics on false emergency calls by country]

Point being: despite these “data quality” issues, European law enforcement agencies have not abandoned the use of emergency phone numbers to crowdsource the reporting of emergencies. They are managing the challenge since the benefits of these numbers still far outweigh the costs. This calculus is unlikely to change as law enforcement agencies shift towards more mobile-based solutions like the use of SMS for 911 in the US. This important shift may explain why traditional emergency response outfits—such as London’s Fire Brigade—are putting in place processes that will enable the public to report via Twitter.

For more information on the verification of crowdsourced social media information for disaster response, please follow this link.

What Waze Can Teach Us About Crowdsourcing and Crisis Mapping

I recently tried out Waze, “the world’s fastest-growing community-based traffic and navigation app.” The smart phone app enables drivers to easily share real-time traffic and road information, “saving everyone time and gas money on their daily commute.” Waze is quite possibly the most successful live, crowdsourced mapping platform there is. Over 30 million drivers use the app to “outsmart traffic and get everyone the best route to work and back, every day.”

Why is Waze so successful? Because the creators of this app recognize full well that “Traffic is more than just red lines on the map.” Likewise, Crisis Mapping is also more than just dots on the map. But unlike Waze, groups like Ushahidi have not adopted dynamic geo-fencing features (despite my best efforts back in 2011). With Waze, drivers get automatically alerted before they approach police, accidents, road hazards or traffic jams, all shared by other drivers in real time. “It’s like a personal heads-up from a few million of your friends on the road.” The creators of Waze have also integrated gaming and social networking features into their app, something I also lobbied Ushahidi to do years ago. Not only is updating Waze super easy and fast, it is also fun and rewarding—which reminds me of the “Fisher Price Theory of Crisis Mapping.”
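For readers curious about the mechanics, dynamic geo-fencing of the kind Waze offers boils down to a proximity check between a user’s position and a stream of crowdsourced reports. Here is a minimal sketch in Python; the radius, report format and helper function are my own illustrative assumptions, not Waze’s actual implementation:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_hazards(position, reports, radius_km=2.0):
    """Return the crowdsourced reports within radius_km of the given position."""
    lat, lon = position
    return [r for r in reports
            if haversine_km(lat, lon, r["lat"], r["lon"]) <= radius_km]

# Hypothetical reports: a driver near the first one gets an alert,
# while the second (some 20 km away) stays silent.
reports = [{"type": "accident", "lat": 40.7580, "lon": -73.9855},
           {"type": "hazard", "lat": 40.6413, "lon": -73.7781}]
alerts = nearby_hazards((40.7590, -73.9845), reports, radius_km=2.0)
```

A crisis mapping platform could run the same check against incident reports to alert responders as they enter an affected area.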

Just like drivers on the motorway, humanitarians do not have the time to keep watching and analyzing dots on a map. They have to keep their hands “on the wheel” and focus on the more important tasks at hand. Crisis Mapping platforms therefore have to be hands-free and more voice-based to limit the distraction of tactile data entry. In other words, the interface needs to become invisible. As computing pioneer Mark Weiser noted, “The best technology should be invisible, get out of your way, and let you live your life.” My colleague Amber Case adds that “We shouldn’t have to fiddle with interfaces. We should be humans; machines should be machines; each amplifying the best of both. Wouldn’t that make for a nice reality?” This is what Waze is doing by integrating very neat user-interface features and voice-based controls, which improve situational awareness and facilitate real-time decision-making.

In conclusion, Waze’s clever user-centered design features are also relevant to map-based development and human rights projects in the majority world (i.e. developing countries). Otherwise, the value of digital maps is more like that of news articles—that is, informative but not necessarily operational and actionable for local decision-making purposes.


See also: Waze Helps Drivers Get Around Damage from Oklahoma Tornadoes

Tweeting is Believing? Analyzing Perceptions of Credibility on Twitter

What factors influence whether or not a tweet is perceived as credible? According to this recent study, users have “difficulty discerning truthfulness based on content alone, with message topic, user name, and user image all impacting judgments of tweets and authors to varying degrees regardless of the actual truthfulness of the item.”

For example, “Features associated with low credibility perceptions were the use of non-standard grammar and punctuation, not replacing the default account image, or using a cartoon or avatar as an account image. Following a large number of users was also associated with lower author credibility, especially when unbalanced in comparison to follower count […].” As for features enhancing a tweet’s credibility, these included “author influence (as measured by follower, retweet, and mention counts), topical expertise (as established through a Twitter homepage bio, history of on-topic tweeting, pages outside of Twitter, or having a location relevant to the topic of the tweet), and reputation (whether an author is someone a user follows, has heard of, or who has an official Twitter account verification seal). Content related features viewed as credibility-enhancing were containing a URL leading to a high-quality site, and the existence of other tweets conveying similar information.”

In general, users’ ability to “judge credibility in practice is largely limited to those features visible at-a-glance in current UIs (user picture, user name, and tweet content). Conversely, features that often are obscured in the user interface, such as the bio of a user, receive little attention despite their ability to impact credibility judgments.” The table below compares a feature’s perceived credibility impact with the attention actually allotted to assessing that feature.

“Message topic influenced perceptions of tweet credibility, with science tweets receiving a higher mean tweet credibility rating than those about either politics or entertainment. Message topic had no statistically significant impact on perceptions of author credibility.” In terms of usernames, “Authors with topical names were considered more credible than those with traditional user names, who were in turn considered more credible than those with internet name styles.” In a follow up experiment, the study analyzed perceptions of credibility vis-a-vis a user’s image, i.e., the profile picture associated with a given Twitter account. “Use of the default Twitter icon significantly lowers ratings of content and marginally lowers ratings of authors […]” in comparison to generic, topical, female and male images.

Obviously, “many of these metrics can be faked to varying extents. Selecting a topical username is trivial for a spam account. Manufacturing a high follower to following ratio or a high number of retweets is more difficult but not impossible. User interface changes that highlight harder to fake factors, such as showing any available relationship between a user’s network and the content in question, should help.” Overall, these results “indicate a discrepancy between features people rate as relevant to determining credibility and those that mainstream social search engines make available.” The authors of the study conclude by suggesting changes in interface design that will enhance a user’s ability to make credibility judgements.

“Firstly, author credentials should be accessible at a glance, since these add value and users rarely take the time to click through to them. Ideally this will include metrics that convey consistency (number of tweets on topic) and legitimization by other users (number of mentions or retweets), as well as details from the author’s Twitter page (bio, location, follower/following counts). Second, for content assessment, metrics on number of retweets or number of times a link has been shared, along with who is retweeting and sharing, will provide consumers with context for assessing credibility. […] seeing clusters of tweets that conveyed similar messages was reassuring to users; displaying such similar clusters runs counter to the current tendency for search engines to strive for high recall by showing a diverse array of retrieved items rather than many similar ones–exploring how to resolve this tension is an interesting area for future work.”
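To make the study’s feature list concrete, the perceived-credibility cues can be expressed as a toy scoring heuristic. The feature names and weights below are my own illustrative choices, not a model from the paper:

```python
def credibility_score(tweet):
    """Toy heuristic: sum signed weights for the credibility cues the
    study found users respond to. All weights are illustrative only."""
    score = 0.0
    if tweet.get("default_avatar"):       # default icon lowers perceived credibility
        score -= 1.0
    if tweet.get("nonstandard_grammar"):  # grammar/punctuation issues lower it
        score -= 1.0
    followers = tweet.get("followers", 0)
    following = tweet.get("following", 1)
    if followers > following:             # healthy follower/following ratio raises it
        score += 1.0
    if tweet.get("verified"):             # official verification seal raises it
        score += 1.0
    if tweet.get("on_topic_history"):     # topical expertise raises it
        score += 0.5
    if tweet.get("similar_tweets"):       # corroborating tweets raise it
        score += 0.5
    return score
```

Even a crude score like this illustrates the paper’s point: most of these signals sit in profile metadata that current interfaces hide from at-a-glance view.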

In sum, the above findings and recommendations explain why platforms such as Rapportive, Seriously Rapid Source Review (SRSR) and CrisisTracker add so much value to the process of assessing the credibility of tweets in near real-time. For related research, see “Predicting the Credibility of Disaster Tweets Automatically” and “Automatically Ranking the Credibility of Tweets During Major Events.”

Social Media = Social Capital = Disaster Resilience?

Do online social networks generate social capital, which, in turn, increases resilience to disasters? How might one answer this question? For example, could we analyze Twitter data to capture levels of social capital in a given country? If so, do countries with higher levels of social capital (as measured using Twitter) demonstrate greater resilience to disasters?

Twitter Heatmap Hurricane

These causal loops are fraught with all kinds of intervening variables, daring assumptions and econometric nightmares. But the link between social capital and disaster resilience is increasingly accepted. In “Building Resilience: Social Capital in Post-Disaster Recovery,” Daniel Aldrich draws on both qualitative and quantitative evidence to demonstrate that “social resources, at least as much as material ones, prove to be the foundation for resilience and recovery.” A concise summary of his book is available in my previous blog post.

So the question that follows is whether the link between social media (i.e., online social networks) and social capital can be established. “Although real-world organizations […] have demonstrated their effectiveness at building bonds, virtual communities are the next frontier for social capital-based policies,” writes Aldrich. Before we jump into the role of online social networks, however, it is important to recognize the function of “offline” communities in disaster response and resilience.

iran-reliefs

“During the disaster and right after the crisis, neighbors and friends—not private firms, government agencies, or NGOs—provide the necessary resources for resilience.” To be sure, “the lack of systematic assistance from government and NGOs [means that] neighbors and community groups are best positioned to undertake efficient initial emergency aid after a disaster.” Since “friends, family, or coworkers of victims and also passersby are always the first and most effective responders,” we should recognize their role on the front line of disasters.

In sum, “social ties can serve as informal insurance, providing victims with information, financial help and physical assistance.” This informal insurance, “or mutual assistance involves friends and neighbors providing each other with information, tools, living space, and other help.” Data-driven research on tweets posted during disasters reveals that many provide victims with exactly this kind of information, assistance, tools and living space. But this support is also extended to complete strangers since it is shared openly and publicly on Twitter. “[…] Despite—or perhaps because of—horrendous conditions after a crisis, survivors work together to solve their problems; […] the amount of (bonding) social capital seems to increase under difficult conditions.” Again, this bonding is not limited to offline dynamics but occurs also within and across online social networks. The tweet below was posted in the aftermath of Hurricane Sandy.

Sandy Tweets Mutual Aid

“By providing norms, information, and trust, denser social networks can implement a faster recovery.” Such norms also evolve on Twitter, as does information sharing and trust building. So is the degree of activity on Twitter directly proportional to the level of community resilience?

This data-driven study, “Do All Birds Tweet the Same? Characterizing Twitter Around the World,” may shed some light in this respect. The authors, Barbara Poblete, Ruth Garcia, Marcelo Mendoza and Alejandro Jaimes, analyze various aspects of social media–such as network structure–for the ten most active countries on Twitter. In total, the working dataset consisted of close to 5 million users and over 5 billion tweets. The study is the largest one carried out to date on Twitter data, “and the first one that specifically examines differences across different countries.”

[Table: Twitter network statistics per country]

The network statistics per country above reveal that Japan, Canada, Indonesia and South Korea have the highest percentage of reciprocity on Twitter. This is important because, according to Poblete et al., “Network reciprocity tells us about the degree of cohesion, trust and social capital in sociology.” In terms of network density, “the highest values correspond to South Korea, Netherlands and Australia.” Incidentally, the authors find that “communities which tend to be less hierarchical and more reciprocal, also displays happier language in their content updates. In this sense countries with high conversation levels (@) … display higher levels of happiness too.”
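For the curious, network reciprocity is simple to compute from a directed edge list: it is the share of follow edges whose reverse edge also exists. A minimal sketch on a made-up toy graph (not the study’s data):

```python
def reciprocity(edges):
    """Fraction of directed edges (a, b) whose reverse (b, a) also exists."""
    edge_set = set(edges)
    if not edge_set:
        return 0.0
    mutual = sum(1 for (a, b) in edge_set if (b, a) in edge_set)
    return mutual / len(edge_set)

# Toy follow graph: A<->B is mutual; A->C and C->D are not reciprocated.
follows = [("A", "B"), ("B", "A"), ("A", "C"), ("C", "D")]
```

On this toy graph, two of the four edges are mutual, giving a reciprocity of 0.5; the study computes the same kind of ratio over billions of follow edges per country.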

If someone is looking for a possible dissertation topic, I would recommend the following comparative case study analysis. Select two of the four countries with the highest percentage of reciprocity on Twitter: Japan, Canada, Indonesia and South Korea. The two you select should each have a close “twin” country. By that I mean a country that has many social, economic and political factors in common. The twin countries should also be in geographic proximity to each other since we ultimately want to assess how they weather similar disasters. The paired candidates that come to mind are thus: Canada & US and Indonesia & Malaysia.

Next, compare the countries’ Twitter networks, particularly degrees of reciprocity since this metric appears to be a suitable proxy for social capital. For example, Canada’s reciprocity score is 26% compared to 19% for the US. In other words, quite a difference. Next, identify recent disasters that both countries have experienced. Do the affected cities in the respective countries weather the disasters differently? Is one community more resilient than the other? If so, do you find a notable quantitative difference in their Twitter networks and degrees of reciprocity? If so, does a subsequent comparative qualitative analysis support these findings?

As cautioned earlier, these causal loops are fraught with all kinds of intervening variables, daring assumptions and econometric nightmares. But if anyone wants to brave the perils of applied social science research, and finds the above research questions of interest, then please do get in touch!

Debating the Value of Tweets For Disaster Response (Intelligently)

With every new tweeted disaster comes the same old question: what is the added value of tweets for disaster response? Only a handful of data-driven studies actually bother to move the debate beyond anecdotes. It is thus high time that a meta-level empirical analysis of the existing evidence be conducted. Only then can we move towards a less superficial debate on the use of social media for disaster response and emergency management.

In her doctoral research, Dr. Sarah Vieweg found that between 8% and 24% of the disaster tweets she studied “contain information that provides tactical, actionable information that can aid people in making decisions, advise others on how to obtain specific information from various sources, or offer immediate post-impact help to those affected by the mass emergency.” Two of the disaster datasets that Vieweg analyzed were the Red River Floods of 2009 and 2010. The tweets from the 2010 disaster showed a small increase in actionable tweets (from ~8% to ~9%). Perhaps Twitter users are becoming more adept at using Twitter during crises? The lowest share of actionable tweets came from the Red River Floods of 2009, whereas the highest came from the Haiti Earthquake of 2010. Again, there is variation—this time over space.

In this separate study, over 64,000 tweets generated during Thailand’s major floods in 2011 were analyzed. The results indicate that about 39% of these tweets belonged to the “Situational Awareness and Alerts” category. “Twitter messages in this category include up-to-date situational and location-based information related to the flood such as water levels, traffic conditions and road conditions in certain areas. In addition, emergency warnings from authorities advising citizens to evacuate areas, seek shelter or take other protective measures are also included.” About 8% of all tweets (over 5,000 unique tweets) were “Requests for Assistance,” while 5% fell into the “Requests for Information” category.
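A crude version of this kind of categorization can be sketched with keyword matching. The category names echo the study’s labels, but the keywords and first-match rule below are my own illustrative stand-ins for what was surely a more careful coding process:

```python
# Illustrative keyword lists only; the study's actual coding scheme
# was done by human annotators, not by these rules.
CATEGORIES = {
    "situational_awareness": ["water level", "road", "traffic", "evacuate", "warning"],
    "request_assistance": ["help", "rescue", "stranded", "need"],
    "request_information": ["where", "how", "?"],
}

def categorize(tweet):
    """Assign a tweet to the first category whose keywords it contains."""
    text = tweet.lower()
    for category, keywords in CATEGORIES.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "other"
```

Real systems replace the keyword lists with trained classifiers, but the pipeline shape (text in, category label out) is the same.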

TwitterDistribution

In this more recent study, researchers mapped flood-related tweets and found a close match between that resulting map and the official government flood map. In the map below, tweets were normalized, such that values greater than one mean more tweets than would be expected in normal Twitter traffic. “Unlike many maps of online phenomena, careful analysis and mapping of Twitter data does NOT simply mirror population densities. Instead concentration of twitter activity (in this case tweets containing the keyword flood) seem to closely reflect the actual locations of floods and flood alerts even when we simply look at the total counts.” This also implies that a relatively high number of flood-related tweets must have contained accurate information.
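The normalization described here is a ratio of observed to expected tweets: an area’s flood-tweet count divided by the count its normal share of Twitter traffic would predict. A minimal sketch with made-up numbers:

```python
def normalized_rate(flood_tweets, total_flood_tweets, baseline_share):
    """Ratio of observed flood-keyword tweets to the count expected from
    an area's share of normal Twitter traffic. Values > 1 mean more
    flood tweets than baseline activity alone would predict."""
    expected = total_flood_tweets * baseline_share
    return flood_tweets / expected if expected else 0.0

# Hypothetical: an area that normally produces 2% of all tweets posts
# 120 of the 3,000 flood-keyword tweets observed nationally.
rate = normalized_rate(120, 3000, 0.02)  # 120 observed vs 60 expected
```

Here the area posts twice as many flood tweets as its baseline share predicts, which is exactly the kind of signal that made the Twitter map track the official flood map rather than population density.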

TwitterMapUKfloods

Shifting from floods to fires, this earlier research analyzed some 1,700 tweets generated during Australia’s worst bushfire in history. About 65% of the tweets had “factual details,” i.e., “more than three of every five tweets had useful information.” In addition, “Almost 22% of the tweets had geographical data thus identifying location of the incident which is critical in crisis reporting.” Around 7% of the tweets were seeking information, help or answers. Finally, close to 5% (about 80 tweets) were considered “directly actionable.”

Preliminary findings from applied research that I am carrying out with my Crisis Computing team at QCRI also reveal variation in value. In one disaster dataset we studied, up to 56% of the tweets were found to be informative. But in two other datasets, we found the number of informative tweets to be very low. Meanwhile, a recent Pew Research study found that 34% of tweets during Hurricane Sandy “involved news organizations providing content, government sources offering information, people sharing their own eyewitness accounts and still more passing along information posted by others.” In addition, “fully 25% [of tweets] involved people sharing photos and videos,” thus indicating “the degree to which visuals have become a more common element of this realm.”

Finally, this recent study analyzed over 35 million tweets posted by ~8 million users based on current trending topics. From this data, the authors identified 14 major events reflected in the tweets. These included the UK riots, Libya crisis, Virginia earthquake and Hurricane Irene, for example. The authors found that “on average, 30% of the content about an event, provides situational awareness information about the event, while 14% was spam.”

So what can we conclude from these few studies? Simply that the value of tweets for disaster response can vary considerably over time and space. The debate should thus not center around whether tweets yield added value for disaster response but rather what drives this variation in value. Identifying these drivers may enable those with influence to incentivize high-value tweets.

This interesting study, “Do All Birds Tweet the Same? Characterizing Twitter Around the World,” reveals some very interesting drivers. The social network analysis (SNA) of some 5 million users and 5 billion tweets across 10 countries reveals that “users in the US give Twitter a more informative purpose, which is reflected in more globalized communities, which are more hierarchical.” The study is available here (PDF). This American penchant for posting “informative” tweets is obviously not universal. To this end, studying network typologies on Twitter may yield further insights on how certain networks can be induced—at a structural level—to post more informative tweets following major disasters.

Twitter Pablo Gov

Regardless of network typology, however, policy still has an important role to play in incentivizing high-value tweets. To be sure, if demand for such tweets is not encouraged, why would supply follow? Take the forward-thinking approach by the Government of the Philippines, for example. The government actively encouraged users to use specific hashtags for disaster tweets days before Typhoon Pablo made landfall. To make this kind of disaster reporting via Twitter more actionable, the Government could also encourage the posting of pictures and the use of a structured reporting syntax—perhaps a simplified version of the Tweak the Tweet approach. Doing so would not only provide the government with greater situational awareness, it would also facilitate self-organized disaster response initiatives.
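To illustrate what a structured reporting syntax buys you, here is a sketch of a parser for a hypothetical simplified hashtag convention. The #need/#loc/#src syntax below is my own invention for illustration, not the actual Tweak the Tweet specification:

```python
import re

def parse_report(tweet):
    """Extract '#key value' pairs from a structured disaster tweet.
    Each recognized hashtag captures the text up to the next hashtag."""
    fields = {}
    for key, value in re.findall(r"#(need|loc|contact|src)\s+([^#]+)", tweet, re.I):
        fields[key.lower()] = value.strip()
    return fields

# Hypothetical structured tweet following the made-up convention above.
report = parse_report("#need drinking water #loc Brgy San Jose, Davao #src @juan")
```

A free-form tweet yields nothing machine-readable, while a structured one parses directly into the fields a response team needs, which is precisely why prompting reporters with a syntax raises the share of actionable content.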

In closing, perhaps we ought to keep in mind that even if only, say, 0.01% of the 20 million+ tweets generated during the first five days of Hurricane Sandy were actionable and only half of these were accurate, this would still mean about a thousand informative, real-time tweets, or about 15,000 words, or 25 pages of single-spaced, relevant, actionable and timely disaster information.

PS. While the credibility and veracity of tweets is an important and related topic of conversation, I have already written at length about this.

Analyzing Tweets From Australia’s Worst Bushfires

As many as 400 fires were identified in Victoria on February 7, 2009. These resulted in Australia’s highest ever loss of life from a bushfire; 173 people were killed and over 400 injured. This analysis of 1,684 tweets generated during these fires found that they were “laden with actionable factual information which contrasts with earlier claims that tweets are of no value made of mere random personal notes.”

Of the 705 unique users who exchanged tweets during the fires, only two could be considered “official sources of communication”; both accounts were held by ABC Radio Melbourne. “This demonstrates the lack of state or government based initiatives to use social media tools for official communication purposes. Perhaps the growth in Twitter usage for political campaigns will force policy makers to reconsider.” In any event, about 65% of the tweets had “factual details,” i.e., “more than three of every five tweets had useful information.” In addition, “Almost 22% of the tweets had geographical data thus identifying location of the incident which is critical in crisis reporting.” Around 7% of the tweets were seeking information, help or answers. Finally, close to 5% (about 80 tweets) were “directly actionable.”

While 5% is obviously low, there’s no reason why this figure has to remain this low. If humanitarian organizations were to create demand for posting actionable information on Twitter, this would likely increase the supply of actionable content. Take for example the pro-active role taken by the Philippine Government vis-a-vis the use of Twitter for disaster response. In any case, the findings from the above study do reveal that 65% of tweets had useful information. Surely contacting the publishers of those tweets could produce even more directly actionable content—which is why the BBC’s User-Generated Content Hub (UGC) uses follow-up as a strategy to verify content posted on social media.

Finally, keep in mind that calls to emergency numbers like “911” in the US and “000” in Australia are not spontaneously actionable. That is, human operators who handle these emergency calls ask a series of detailed questions in order to turn the information into structured, actionable content. Some of these standard questions are: What is your emergency? What is your current location? What is your phone number? What is happening? When did the incident occur? Are there injuries? In other words, without being prompted with specific questions, callers are unlikely to provide as much actionable information. The same is true for the use of Twitter in crisis response.
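In effect, the dispatcher’s question list is a required-slot check: a report only becomes actionable once every slot has an answer. A toy sketch, with field names of my own choosing rather than any official 911 protocol:

```python
# Illustrative slots only; real dispatch protocols define their own fields.
REQUIRED_FIELDS = ("emergency", "location", "callback", "time")

def is_actionable(report):
    """True once every required slot has a non-empty answer."""
    return all(report.get(field) for field in REQUIRED_FIELDS)

# A fully prompted call fills every slot; an unprompted tweet rarely does.
call = {"emergency": "house fire", "location": "12 Main St",
        "callback": "555-0101", "time": "just now"}
```

The same check applied to tweets shows why unprompted reports so often fall short, and why a reporting syntax or follow-up questions raise their value.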

 

Does Social Capital Drive Disaster Resilience?

The link between social capital and disaster resilience is increasingly accepted. In “Building Resilience: Social Capital in Post-Disaster Recovery,” Daniel Aldrich draws on both qualitative and quantitative evidence to demonstrate that “social resources, at least as much as material ones, prove to be the foundation for resilience and recovery.” His case studies suggest that social capital is more important for disaster resilience than physical and financial capital, and more important than conventional explanations.


Aldrich argues that social capital catalyzes increased “participation among networked members; providing information and knowledge to individuals in the group; and creating trustworthiness.” The author goes so far as to use “the phrases social capital and social networks nearly interchangeably.” He finds that communities with “higher levels of social capital work together more effectively to guide resources to where they are needed.” Surveys confirm that “after disasters, most survivors see social connections and community as critical for their recovery.” To this end, “deeper reservoirs of social capital serve as informal insurance and mutual assistance for survivors,” helping them “overcome collective action constraints.”

Capacity for self-organization is thus intimately related to resilience since “social capital can overcome obstacles to collective action that often prevent groups from accomplishing their goals.” In other words, “higher levels of social capital reduce transaction costs, increase the probability of collective action, and make cooperation among individuals more likely.” Social capital is therefore “an asset, a functioning propensity for mutually beneficial collective action […].”

In contrast, communities exhibiting “less resilience fail to mobilize collectively and often must wait for recovery guidance and assistance […].” This implies that vulnerable populations are not solely characterized in terms of age, income, etc., but also in terms of “their lack of connections and embeddedness in social networks.” Put differently, “the most effective—and perhaps least expensive—way to mitigate disasters is to create stronger bonds between individuals in vulnerable populations.”

Social Capital

The author brings conceptual clarity to the notion of social capital when he unpacks the term into Bonding Capital, Bridging Capital and Linking Capital. The figure above explains how these differ but relate to each other. The way this relates and applies to digital humanitarian response is explored in this blog post.

How Can Digital Humanitarians Best Organize for Disaster Response?

My colleague Duncan Watts recently spoke with Scientific American about a new project I am collaborating on with him and colleagues at Microsoft Research. I first met Duncan while at the Santa Fe Institute (SFI) back in 2006. We recently crossed paths again (at 10 Downing Street, of all places) and struck up a conversation about crisis mapping and the Standby Volunteer Task Force (SBTF). So I shared with him some of the challenges we were facing vis-à-vis scaling up our information processing workflows for digital humanitarian response. Duncan expressed a strong interest in working together to address some of these issues. As he told Scientific American, “We’d like to help them by trying to understand in a more scientific manner how to scale up information processing organizations like the SBTF without overloading any part of the system.”


Here are the most relevant sections of his extended interview:

In addition to improving research methods, how might the Web be used to deliver timely, meaningful research results?

Recently, a handful of volunteer “crisis mapping” organizations such as The Standby Task Force [SBTF] have begun to make a difference in crisis situations by performing real-time monitoring of information sources such as Facebook, Twitter and other social media, news reports and so on, and then superposing these reports on a map interface, which can then be used by relief agencies and affected populations alike to improve their understanding of the situation. Their efforts are truly inspiring, and they have learned a lot from experience. We want to build off that real-world model through Web-based crisis-response drills that test the best ways to communicate and coordinate resources during and after a disaster.

How might you improve upon existing crisis-mapping efforts?

The efforts of these crisis mappers are truly inspiring, and groups like the SBTF have learned a lot about how to operate more effectively, mostly from hard-won experience. At the same time, they’ve encountered some limitations to their model, which depends critically on a relatively small number of dedicated individuals, who can easily get overwhelmed or burned out. We’d like to help them by trying to understand in a more scientific manner how to scale up information processing organizations like the SBTF without overloading any part of the system.

How would you do this in the kind of virtual lab environment you’ve been describing?

The basic idea is to put groups of subjects into simulated crisis-mapping drills, systematically vary different ways of organizing them, and measure how quickly and accurately they collectively process the corresponding information. So for any given drill, the organizer would create a particular disaster scenario, including downed power lines, fallen trees, fires and flooded streets and homes. The simulation would then generate a flow of information, like a live tweet stream that resembles the kind of on-the-ground reporting that occurs in real events, but in a controllable way.

As a participant in this drill, imagine you’re monitoring a Twitter feed, or some other stream of reports, and that your job is to try to accurately recreate the organizer’s disaster map based on what you’re reading. So for example, you’re looking at Twitter feeds for everything during Hurricane Sandy that has “#sandy” associated with it. From that information, you want to build a map of New York and the tri-state region that shows everywhere there’s been lost power, everywhere there’s a downed tree, everywhere there’s a fire.

You could of course try to do this on your own, but as the rate of information flow increased, any one person would get overwhelmed; so it would be necessary to have a group of people working on it together. But depending on how the group is organized, you could imagine that they’d do a better or worse job, collectively. The goal of the experiment then would be to measure the performance of different types of organizations—say with different divisions of labor or different hierarchies of management—and discover which work better as a function of the complexity of the scenario you’ve presented and the rate of information being generated. This is something that we’re trying to build right now.
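As a toy illustration of the kind of comparison such a drill could make (the numbers, names, and rules below are entirely invented for this sketch, not part of the Microsoft Research design), the following compares two organizations with identical total capacity against a skewed report stream:

```python
CATEGORIES = ["power", "tree", "fire", "flood"]

def process(stream, volunteers, capacity):
    """Fraction of reports mapped when each volunteer can handle `capacity` reports.

    `volunteers` maps a volunteer id to the categories they monitor; reports
    arriving after a volunteer hits capacity are dropped (the overload and
    burnout failure mode the interview describes).
    """
    load = {v: 0 for v in volunteers}
    mapped = 0
    for report in stream:
        for v, cats in volunteers.items():
            if report in cats and load[v] < capacity:
                load[v] += 1
                mapped += 1
                break
    return mapped / len(stream)

# A skewed, Sandy-like stream: flooding reports dominate the feed.
stream = ["flood"] * 140 + ["power"] * 20 + ["tree"] * 20 + ["fire"] * 20

# Same total capacity (100 reports), two different organizations:
solo = process(stream, {"a": CATEGORIES}, capacity=100)   # one generalist
team = process(stream, {v: [c] for v, c in zip("abcd", CATEGORIES)},
               capacity=25)                               # four rigid specialists
```

In this contrived run the specialist team drops flood reports once its flood-watcher saturates while the other three sit partly idle, so it maps a smaller fraction of the stream than the generalist. The actual experiments would of course vary organization, hierarchy, and information rate far more systematically than this.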

What’s the time frame for implementing such crowdsourced disaster mapping drills?

We’re months away from doing something like this. We still need to set up the logistics and are talking to a colleague [Patrick Meier] who works as a crisis mapper to get a better understanding of how they do things so that we can design the experiment in a way that is motivated by a real problem.

How will you know when your experiments have created something valuable for better managing disaster responses?

There’s no theory that says, here’s the best way to organize n people to process the maximum amount of information reliably. So ideally we would like to design an experiment that is close enough to realistic crisis-mapping scenarios that it could yield some actionable insights. But the experiment would also need to be sufficiently simple and abstract so that we learn something about how groups of people process information that generalizes beyond the very specific case of crisis mapping.

As a scientist, I want to identify causal mechanisms in a nice, clean way and reduce the problem to its essence. But as someone who cares about making a difference in the real world, I would also like to be able to go back to my friend who’s a crisis mapper and say we did the experiment, and here’s what the science says you should do to be more effective.

The full interview is available at Scientific American. Stay tuned for further updates on this research.