My colleague Duncan Watts recently spoke with Scientific American about a new project I am collaborating on with him and colleagues at Microsoft Research. I first met Duncan at the Santa Fe Institute (SFI) back in 2006. We recently crossed paths again (at 10 Downing Street, of all places) and struck up a conversation about crisis mapping and the Standby Volunteer Task Force (SBTF). I shared with him some of the challenges we were facing in scaling up our information processing workflows for digital humanitarian response, and Duncan expressed a strong interest in working together to address some of these issues. As he told Scientific American, “We’d like to help them by trying to understand in a more scientific manner how to scale up information processing organizations like the SBTF without overloading any part of the system.”
Here are the most relevant sections of his extended interview:
In addition to improving research methods, how might the Web be used to deliver timely, meaningful research results?
Recently, a handful of volunteer “crisis mapping” organizations such as The Standby Task Force [SBTF] have begun to make a difference in crisis situations by performing real-time monitoring of information sources such as Facebook, Twitter and other social media, news reports and so on and then superposing these reports on a map interface, which then can be used by relief agencies and affected populations alike to improve their understanding of the situation. Their efforts are truly inspiring, and they have learned a lot from experience. We want to build off that real-world model through Web-based crisis-response drills that test the best ways to communicate and coordinate resources during and after a disaster.
How might you improve upon existing crisis-mapping efforts?
The efforts of these crisis mappers are truly inspiring, and groups like the SBTF have learned a lot about how to operate more effectively, most from hard-won experience. At the same time, they’ve encountered some limitations to their model, which depends critically on a relatively small number of dedicated individuals, who can easily get overwhelmed or burned out. We’d like to help them by trying to understand in a more scientific manner how to scale up information processing organizations like the SBTF without overloading any part of the system.
How would you do this in the kind of virtual lab environment you’ve been describing?
The basic idea is to put groups of subjects into simulated crisis-mapping drills, systematically vary different ways of organizing them, and measure how quickly and accurately they collectively process the corresponding information. So for any given drill, the organizer would create a particular disaster scenario, including downed power lines, fallen trees, fires and flooded streets and homes. The simulation would then generate a flow of information, like a live tweet stream that resembles the kind of on-the-ground reporting that occurs in real events, but in a controllable way.
As a participant in this drill, imagine you’re monitoring a Twitter feed, or some other stream of reports, and that your job is to try to accurately recreate the organizer’s disaster map based on what you’re reading. So for example, you’re looking at Twitter feeds for everything during Hurricane Sandy that has “#sandy” associated with it. From that information, you want to build a map of New York and the tri-state region that shows everywhere there’s been lost power, everywhere there’s a downed tree, everywhere there’s a fire.
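The monitoring task described here, routing “#sandy” reports into map categories like lost power, downed trees and fires, can be sketched in a few lines of Python. This is an illustrative toy only: the category names and keyword lists are my own assumptions for the example, not anything from the actual drill design.

```python
# Toy sketch of routing "#sandy" tweets into disaster-map categories.
# The CATEGORIES keyword lists are illustrative assumptions, not a real
# classifier used by the SBTF or the researchers.

CATEGORIES = {
    "power": ["power", "outage", "blackout"],
    "tree": ["tree", "branch"],
    "fire": ["fire", "smoke"],
}

def categorize(tweet):
    """Return the map categories a '#sandy' tweet appears to report."""
    text = tweet.lower()
    if "#sandy" not in text:
        return []  # not part of the monitored feed
    return [cat for cat, words in CATEGORIES.items()
            if any(w in text for w in words)]

print(categorize("Huge tree down on 5th Ave, no power on the block #Sandy"))
# → ['power', 'tree']
```

In practice a real pipeline would also need to extract a location from each report; that step is where human volunteers currently do most of the work.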
You could of course try to do this on your own, but as the rate of information flow increased, any one person would get overwhelmed; so it would be necessary to have a group of people working on it together. But depending on how the group is organized, you could imagine that they’d do a better or worse job, collectively. The goal of the experiment then would be to measure the performance of different types of organizations—say with different divisions of labor or different hierarchies of management—and discover which work better as a function of the complexity of the scenario you’ve presented and the rate of information being generated. This is something that we’re trying to build right now.
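The experiment Duncan describes—a ground-truth scenario, a simulated report stream, and groups organized in different ways whose collective output is scored against the ground truth—can be sketched as a small simulation. Everything below is a hypothetical illustration under my own simplifying assumptions (per-worker processing budgets, a simple division of labor by report type); it is not the researchers’ actual design.

```python
import random

# Hypothetical sketch of the drill: the function names, parameters and
# the two organizational schemes are assumptions for illustration only.

REPORT_TYPES = ["power_out", "downed_tree", "fire", "flood"]

def make_scenario(n_events, grid=10, seed=0):
    """Organizer's ground-truth disaster map: a set of (type, cell) events."""
    rng = random.Random(seed)
    return {(rng.choice(REPORT_TYPES),
             (rng.randrange(grid), rng.randrange(grid)))
            for _ in range(n_events)}

def make_stream(scenario, noise=0.3, seed=1):
    """Simulated tweet stream: true reports shuffled with irrelevant chatter."""
    rng = random.Random(seed)
    stream = [("report", ev) for ev in scenario]
    stream += [("chatter", None)] * int(len(stream) * noise)
    rng.shuffle(stream)
    return stream

def run_drill(stream, n_workers, capacity, divide_labor):
    """Each worker can process at most `capacity` items; return the map
    the group collectively reconstructs."""
    recovered = set()
    if divide_labor:
        # Each worker is assigned one report type and skips other items.
        for i in range(n_workers):
            my_type = REPORT_TYPES[i % len(REPORT_TYPES)]
            budget = capacity
            for kind, ev in stream:
                if budget == 0:
                    break
                if kind == "report" and ev[0] == my_type:
                    recovered.add(ev)
                    budget -= 1
    else:
        # Undifferentiated pool: workers read the stream front to back.
        for kind, ev in stream[:n_workers * capacity]:
            if kind == "report":
                recovered.add(ev)
    return recovered

scenario = make_scenario(n_events=40)
stream = make_stream(scenario)
for divide in (False, True):
    got = run_drill(stream, n_workers=4, capacity=8, divide_labor=divide)
    print(f"divide_labor={divide}: recall={len(got & scenario) / len(scenario):.2f}")
```

Sweeping the scenario size, noise level and per-worker capacity is the simulated analogue of varying “the complexity of the scenario” and “the rate of information being generated,” and comparing the two schemes is a crude stand-in for comparing organizational forms.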
What’s the time frame for implementing such crowdsourced disaster mapping drills?
We’re months away from doing something like this. We still need to set up the logistics and are talking to a colleague [Patrick Meier] who works as a crisis mapper to get a better understanding of how they do things so that we can design the experiment in a way that is motivated by a real problem.
How will you know when your experiments have created something valuable for better managing disaster responses?
There’s no theory that says, here’s the best way to organize n people to process the maximum amount of information reliably. So ideally we would like to design an experiment that is close enough to realistic crisis-mapping scenarios that it could yield some actionable insights. But the experiment would also need to be sufficiently simple and abstract so that we learn something about how groups of people process information that generalizes beyond the very specific case of crisis mapping.
As a scientist, I want to identify causal mechanisms in a nice, clean way and reduce the problem to its essence. But as someone who cares about making a difference in the real world, I would also like to be able to go back to my friend who’s a crisis mapper and say we did the experiment, and here’s what the science says you should do to be more effective.
The full interview is available at Scientific American. Stay tuned for further updates on this research.