Some Thoughts on Real-Time Awareness for Tech@State

I’ve been invited to present at Tech@State in Washington DC to share some thoughts on the future of real-time awareness. So I thought I’d use my blog to brainstorm and invite feedback from iRevolution readers. The organizers of the event have shared the following questions with me as a way to guide the conversation: Where is all of this headed? What will social media look like in five to ten years and what will we do with all of the data? Knowing that the data stream can only increase in size, what can we do now to prepare and prevent being overwhelmed by the sheer volume of data?

These are big, open-ended questions, and I will only have 5 minutes to share some preliminary thoughts. I shall thus focus on how time-critical crowdsourcing can yield real-time awareness and expand from there.

Two years ago, my good friend and colleague Riley Crane won DARPA’s $40,000 Red Balloon Competition. His team at MIT found the locations of 10 weather balloons hidden across the continental US in under 9 hours. The US covers more than 3.7 million square miles and the balloons were barely 8 feet wide. This was truly a needle-in-the-haystack kind of challenge. So how did they do it? They used crowdsourcing and leveraged social media—Twitter in particular—by using a “recursive incentive mechanism” to recruit thousands of volunteers to the cause. The mechanism rewarded individual participants financially based on how important their contributions were to locating one or more balloons. The result? Real-time, networked awareness.
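
To make the mechanism concrete, here is a minimal sketch of how the payouts can be computed. The halving split and the $4,000-per-balloon prize follow the MIT team’s published scheme; the participant names and the function itself are purely illustrative.

```python
def balloon_payouts(referral_chain, prize_per_balloon=4000.0):
    """Split one balloon's prize up the referral chain.

    referral_chain: list of participants, starting with the person who
    spotted the balloon, followed by whoever recruited them, and so on.
    The finder gets half the prize; each person up the chain gets half
    of what the person below them received.
    """
    payouts = {}
    share = prize_per_balloon / 2
    for person in referral_chain:
        payouts[person] = share
        share /= 2
    return payouts  # in MIT's scheme, the remainder went to charity

# Hypothetical chain: Dana spotted the balloon, Carol recruited Dana,
# and Bob recruited Carol.
print(balloon_payouts(["Dana", "Carol", "Bob"]))
# -> {'Dana': 2000.0, 'Carol': 1000.0, 'Bob': 500.0}
```

The elegance of the scheme is that recruiting others is itself rewarded, which is what propelled the word-of-mouth cascade across Twitter.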

Around the same time that Riley and his team celebrated their victory at MIT, another novel crowdsourcing initiative was taking place just a few miles away at The Fletcher School. Hundreds of students were busy combing through social and mainstream media channels for actionable and mappable information on Haiti following the devastating earthquake that had struck Port-au-Prince. This content was then mapped on the Ushahidi-Haiti Crisis Map, providing real-time situational awareness to first responders like the US Coast Guard and US Marine Corps. At the same time, hundreds of volunteers from the Haitian Diaspora were busy translating and geo-coding tens of thousands of text messages from disaster-affected communities in Haiti who were texting in their location & most urgent needs to a dedicated SMS short code. Fletcher School students filtered and mapped the most urgent and actionable of these text messages as well.
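
To picture that workflow as a pipeline, here is a toy sketch: translate each incoming text message, match it against a gazetteer to geocode it, and keep only the mappable reports. Every name, coordinate, and function here is a stand-in of my own; the real translation and geocoding were done by human volunteers, not software.

```python
# Toy sketch of the Haiti SMS workflow: translate, geocode, filter, map.
GAZETTEER = {"carrefour": (18.54, -72.40),     # illustrative entries only
             "petionville": (18.51, -72.29)}

def translate(text):
    # Stand-in for the Haitian Creole -> English step, which was done
    # by volunteer translators from the diaspora, not by software.
    return text

def geocode(text):
    for place, coords in GAZETTEER.items():
        if place in text.lower():
            return coords
    return None

def triage_sms(messages):
    reports = []
    for sms in messages:
        english = translate(sms)
        coords = geocode(english)
        if coords:  # only mappable messages reach the crisis map
            reports.append({"text": english, "location": coords})
    return reports

print(triage_sms(["Need water in Carrefour", "Hello"]))
```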

One year after Haiti, the United Nations’ Office for the Coordination of Humanitarian Affairs (OCHA) asked the Standby Volunteer Task Force (SBTF), a global network of 700+ volunteers, for a real-time map of crowdsourced social media information on Libya in order to improve their own situational awareness. Thus was born the Libya Crisis Map.

The result? The Head of OCHA’s Information Services Section at the time sent an email to SBTF volunteers to commend them for their novel efforts. In this email, he wrote:

“Your efforts at tackling a difficult problem have definitely reduced the information overload; sorting through the multitude of signals on the crisis is no easy task. The Task Force has given us an output that is manageable and digestible, which in turn contributes to better situational awareness and decision making.”

These three examples from the US, Haiti and Libya demonstrate what is already possible with time-critical crowdsourcing and social media. So where is all this headed? You may have noticed that the success of each example relied on the individual actions of hundreds and sometimes thousands of volunteers. This is primarily because automated solutions to filter and curate the data stream are not yet available (or rather accessible) to the wider public. Indeed, these solutions tend to be proprietary, expensive and/or classified. I thus expect to see free and open source solutions crop up in the near future: solutions that will radically democratize the tools needed to gain shared, real-time awareness.

But automated natural language processing (NLP) and machine learning alone are not likely to succeed, in my opinion. The data stream is not really a stream; it is a massive torrent of non-indexed information, a 24-hour global firehose of real-time, distributed multimedia data that continues to outpace our ability to produce actionable intelligence from this torrential downpour of 0’s and 1’s. Turning this data tsunami into real-time shared awareness will require that our filtering and curation platforms become more automated and collaborative. I believe the key is thus to combine automated solutions with real-time collaborative crowdsourcing tools—that is, platforms that enable crowds to collaboratively filter and curate real-time information, in real time.
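
As a thought experiment, such a hybrid pipeline might look something like the sketch below: an automated classifier handles the messages it is confident about, and everything uncertain is routed to a queue of human curators. The keyword heuristic and the thresholds are placeholders of my own, not a real NLP model.

```python
def relevance_score(text):
    # Placeholder for a real NLP / machine-learning classifier.
    keywords = ("trapped", "medical", "water", "collapsed", "fire")
    hits = sum(word in text.lower() for word in keywords)
    return min(1.0, 0.4 * hits)

def triage(stream, low=0.2, high=0.8):
    auto_mapped, human_queue, discarded = [], [], []
    for text in stream:
        score = relevance_score(text)
        if score >= high:
            auto_mapped.append(text)   # machine is confident: map it
        elif score <= low:
            discarded.append(text)     # machine is confident: drop it
        else:
            human_queue.append(text)   # uncertain: send to volunteers
    return auto_mapped, human_queue, discarded

mapped, for_humans, dropped = triage([
    "Family trapped under collapsed house, need medical help",
    "Beautiful sunset tonight",
    "Water shortage reported downtown",
])
print(mapped, for_humans, dropped)
```

The design choice that matters is the middle band: the machine only delegates what it cannot decide, so scarce human attention is spent where it adds the most value.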

Right now, when we comb through Twitter, for example, we do so on our own, sitting behind our laptops, isolated from others who may be seeking to filter the exact same type of content. We need to develop free and open source platforms that allow for the distributed-but-networked, crowdsourced filtering and curation of information in order to democratize the sense-making of the firehose. Only then will the wider public be able to win the equivalent of Red Balloon competitions without needing $40,000 or a degree from MIT.
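
One way to picture the “distributed-but-networked” part: instead of every volunteer scanning the same firehose alone, items go into one shared queue that curators claim from, so no two people duplicate the same filtering work. In the sketch below, Python threads stand in for networked volunteers on a real platform.

```python
# Sketch: a shared work queue so concurrent curators never grab the
# same item twice. Threads stand in for volunteers on a real platform.
import queue
import threading

work = queue.Queue()
for item in ["tweet 1", "tweet 2", "tweet 3", "tweet 4"]:
    work.put(item)

def volunteer(name):
    while True:
        try:
            item = work.get_nowait()  # atomically claim the next item
        except queue.Empty:
            return                    # queue drained: nothing left to do
        print(f"{name} curates {item}")
        work.task_done()

curators = [threading.Thread(target=volunteer, args=(f"volunteer-{i}",))
            for i in range(3)]
for t in curators:
    t.start()
for t in curators:
    t.join()
```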

I’d love to get feedback from readers about what other compelling cases or arguments I should bring up in my presentation tomorrow. So feel free to post some suggestions in the comments section below. Thank you!

6 responses to “Some Thoughts on Real-Time Awareness for Tech@State”

  1. Hi Patrick! I just finished reading a book called “How We Decide” by Jonah Lehrer, which discusses how decision makers make dramatically important decisions in a split second. They make these decisions despite the fact that they are bombarded with far too many inputs and too much data to parse all at once. I met a Fire Chief in charge of the large fires in SD County last week and wanted to peer inside his mind. How does he decide which way the fire is burning, whether the fire will jump the road, and how to direct firefighters? It is a split-second, life-or-death decision in a high-pressure environment with far too much data at the ready. He has past knowledge in his head about how fires move, plus current wind conditions and terrain conditions, but in the end it is actually the work of the unconscious brain that makes the call, feels the call, intuits the call. Once, when fighting a house fire, he recounts suddenly ordering his men: out of the building, now! A second later, the whole building collapsed. How did he know? He has no idea. I think more could be done parsing how people make decisions under uncertainty, and whether, if someone interviewed him about these tough calls, he could look back and say something like: “you know, upon further reflection, I think the clue was that the floor started moving, or the sound of the fire changed, and that was my signal that we needed to run.”

    The point? I’d love to use crowdsourcing to compile lessons learned from experts like the Chief, like the airline pilot who lands a plane despite faulty equipment, and the seasoned humanitarian worker, on the nuances and nitty-gritty details of making decisions for real-world problems in real time. These pearls of wisdom could be gleaned in a way that reveals underlying patterns in the kinds of data that matter. Volunteer sourcers could also collect people’s wisdom and expertise, meeting them where they are, regarding what kinds of visualizations and analyses of complex data streams would be most useful to them in a stressful environment. I think this would reveal a great deal about which kind of data or category is critical in each crisis environment, how it would best be served to decision makers, and so on.

    Some quick thoughts on a hard problem. Good luck!

  2. Crowdsourced and participatory mapping is a trend that I think will only grow. Maps provide a medium for mashing up data from a variety of sources, and tying them to social media provides a way for people to collaborate in interpreting the results. Look at http://www.databasin.org for a good example of a tool for sharing data using maps. Though developed for conservation, it could easily be used for climate adaptation, disaster risk reduction, conflict mapping, and a host of cross-sectoral applications (looking, for example, at poverty, conflict, and natural resource use). The caveat of course is bandwidth – which is why efforts to increase public access in underserved areas are so important.

  3. Very helpful, many thanks John!

  4. Pingback: On Crowdsourcing, Crisis Mapping and Data Protection Standards « GEODATA POLICY

  5. Pingback: CrisisTracker: Collaborative Social Media Analysis For Disaster Response | iRevolution
