Tag Archives: SBTF

Live Crisis Map of Disaster Damage Reported on Social Media

Update: See early results of MicroMappers deployment here

Digital humanitarian volunteers have been busy tagging images posted to social media in the aftermath of Typhoon Yolanda. More specifically, they’ve been using the new MicroMappers ImageClicker to rate the level of damage they see in each image. Thus far, they have clicked over 7,000 images. Images tagged as “Mild” or “Severe” damage are then geolocated by members of the Standby Volunteer Task Force (SBTF), who have partnered with GISCorps and ESRI to create this live Crisis Map of the disaster damage tagged using the ImageClicker. The map takes a few seconds to load, so please be patient.

[Screenshot: YolandaPH Crisis Map]

The more pictures are clicked using the ImageClicker, the more populated this crisis map will become. So please help out if you have a few seconds to spare; that’s really all it takes to click an image. If there are no pictures left to click or the system is temporarily offline, please come back a while later as we’re uploading images around the clock. And feel free to join our list-serve in the meantime if you wish to be notified when humanitarian organizations need your help in the future. No prior experience or training is necessary. Anyone who knows how to use a computer mouse can become a digital humanitarian.

The SBTF, GISCorps and ESRI are members of the Digital Humanitarian Network (DHN), which my colleague Andrej Verity and I co-founded last year. The DHN serves as the official interface for direct collaboration between traditional “brick-and-mortar” humanitarian organizations and highly skilled digital volunteer networks. The SBTF Yolanda Team, spearheaded by my colleague Justine Mackinnon, for example, has also produced this map based on the triangulated results of the TweetClicker:

[Screenshot: SBTF Yolanda crisis map based on the triangulated TweetClicker results]

There’s a lot of hype around the use of new technologies and social media for disaster response. So I want to be clear that our digital humanitarian operations in the Philippines have not been perfect. This means that we’re learning (a lot) by doing (a lot). Such is the nature of innovation. We don’t have the luxury of locking ourselves up in a lab for a year to build the ultimate humanitarian technology platform. This means we have to work extra, extra hard when deploying new platforms during major disasters: not only do we do our very best to carry out Plan A, but we often have to carry out Plans B and C in parallel just in case Plan A doesn’t pan out. Perhaps Samuel Beckett summed it up best: “Ever tried. Ever failed. No matter. Try again. Fail again. Fail better.”


MicroMappers Launched for Pakistan Earthquake Response (Updated)

Update 1: MicroMappers is now public! Anyone can join to help the efforts!
Update 2: Results of MicroMappers Response to Pakistan Earthquake [Link]

MicroMappers was not due to launch until next month, but my team and I at QCRI received a time-sensitive request from colleagues at the UN to carry out an early test of the platform given yesterday’s 7.7 magnitude earthquake, which killed well over 300 people and injured hundreds more in south-western Pakistan.


Shortly after this request, the UN Office for the Coordination of Humanitarian Affairs (OCHA) in Pakistan officially activated the Digital Humanitarian Network (DHN) to rapidly assess the damage and needs resulting from the earthquake. The award-winning Standby Volunteer Task Force (SBTF), a founding member of the DHN, teamed up with QCRI to use MicroMappers in response to the request by OCHA-Pakistan. This exercise, however, is for testing purposes only. We made this clear to our UN partners since the results may be far from optimal.

MicroMappers is simply a collection of microtasking apps (we call them Clickers) that we have customized for disaster response purposes. We just launched both the Tweet and Image Clickers to support the earthquake relief effort and may also launch the Tweet and Image GeoClickers in the next 24 hours. The TweetClicker is pictured below (click to enlarge).

[Screenshot: MicroMappers TweetClicker]

Thanks to our partnership with GNIP, QCRI automatically collected over 35,000 tweets related to Pakistan and the earthquake (and we’re continuing to collect more in real time). We’ve uploaded these tweets to the TweetClicker and are also filtering links to images for upload to the ImageClicker. Depending on how the initial testing goes, we may be able to invite help from the global digital village. Indeed, “crowdsourcing” is simply another way of saying “It takes a village…” In fact, that’s precisely why MicroMappers was developed: to enable anyone with an Internet connection to become a digital humanitarian volunteer. The Clicker for images is displayed below (click to enlarge).

[Screenshot: MicroMappers ImageClicker]

Now, whether this very first test of the Clickers goes well remains to be seen. As mentioned, we weren’t planning to launch until next month. But we’ve already learned heaps from the past few hours alone. For example, while the Clickers are indeed ready and operational, our automatic pre-processing filters are not yet optimized for rapid response. The purpose of these filters is to automatically identify tweets that link to images and videos so that they can be uploaded to the Clickers directly. In addition, while our ImageClicker is operational, our VideoClicker is still under development, as is our TranslateClicker; both would have been useful in this response. I’m sure we’ll encounter other issues over the next 24-36 hours. We’re keeping track of these in a shared Google Spreadsheet so we can review them next week and make sure to integrate as much of the feedback as possible before the next disaster strikes.
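
To make this concrete, here is a minimal sketch of what such a pre-processing filter might look like. The media-hosting domains, tweet samples and function names below are illustrative assumptions on my part, not QCRI’s actual pipeline.

```python
import re

# Illustrative media-hosting domains; the real filters are assumed
# to be more sophisticated than this simple whitelist.
IMAGE_HOSTS = ("pic.twitter.com", "instagram.com", "twitpic.com", "flickr.com")
VIDEO_HOSTS = ("youtube.com", "youtu.be", "vimeo.com")

# Matches both full URLs and bare pic.twitter.com-style links.
URL_PATTERN = re.compile(r"(?:https?://)?(?:\w+\.)+\w+/\S+")

def classify_tweet(text):
    """Return 'image', 'video', or None based on the links a tweet carries."""
    for url in URL_PATTERN.findall(text):
        if any(host in url for host in IMAGE_HOSTS):
            return "image"
        if any(host in url for host in VIDEO_HOSTS):
            return "video"
    return None

tweets = [
    "Severe damage near Awaran after the quake pic.twitter.com/abc123",
    "Praying for everyone affected by the earthquake",
]
image_queue = [t for t in tweets if classify_tweet(t) == "image"]
print(image_queue)  # only the tweet linking to a picture
```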

Incidentally, we (QCRI) also teamed up with the SBTF to test the very first version of the Artificial Intelligence for Disaster Response (AIDR) platform for about six hours. As far as we know, this test represents the first time that machine learning classifiers for disaster response were created on the fly using crowdsourcing. We expect to launch AIDR publicly at the 2013 CrisisMappers conference this November (ICCM 2013). We’ll be sure to share what worked and what didn’t during this first AIDR pilot test. So stay tuned for future updates via iRevolution. In the meantime, a big, big thanks to the SBTF Team for rallying so quickly and for agreeing to test the platforms! If you’re interested in becoming a digital humanitarian volunteer, simply join us here.


Using CrowdFlower to Microtask Disaster Response

Cross-posted from CrowdFlower blog

A devastating earthquake struck Port-au-Prince on January 12, 2010. Two weeks later, on January 27th, CrowdFlower was used to translate text messages from Haitian Creole to English. Tens of thousands of messages were sent by affected Haitians over the course of several months. All of these were heroically translated by hundreds of dedicated Creole-speaking volunteers based in dozens of countries across the globe. While Ushahidi took the lead by developing the initial translation platform used just days after the earthquake, the translation efforts were eventually rerouted to CrowdFlower. Why? Three simple reasons:

  1. CrowdFlower is one of the leading and most robust micro-tasking platforms available;
  2. CrowdFlower’s leadership is highly committed to supporting digital humanitarian response efforts;
  3. Haitians in Haiti could now be paid for their translation work.

While the CrowdFlower project was launched 15 days after the earthquake, i.e., following the completion of search and rescue operations, every single digital humanitarian effort in Haiti was reactive. The key takeaway here was the proof of concept: large-scale micro-tasking could play an important role in humanitarian information management. This was confirmed months later when devastating floods inundated much of Pakistan. CrowdFlower was once again used to translate incoming messages from the disaster-affected population. While still reactive, this second use of CrowdFlower demonstrated replicability.

The most recent and perhaps most powerful use of CrowdFlower for disaster response occurred right after Typhoon Pablo devastated the Philippines in early December 2012. The UN Office for the Coordination of Humanitarian Affairs (OCHA) activated the Digital Humanitarian Network (DHN) to rapidly deliver a detailed dataset of geo-tagged pictures and video footage (posted on Twitter) depicting the damage caused by the typhoon. The UN needed this dataset within 12 hours, which required that 20,000 tweets be analyzed as quickly as possible. The Standby Volunteer Task Force (SBTF), a member of the Digital Humanitarian Network, immediately used CrowdFlower to identify all tweets with links to pictures and video footage. SBTF volunteers subsequently analyzed those pictures and videos for damage and geographic information using other means.

This was the most rapid use of CrowdFlower following a disaster. In fact, this use of CrowdFlower was pioneering in many respects. It was the first time that a member of the Digital Humanitarian Network made use of CrowdFlower (and thus micro-tasking) for disaster response. It was also the first time that CrowdFlower’s existing workforce was used for disaster response. In addition, this was the first time that data processed by CrowdFlower contributed to an official crisis map produced by the UN for disaster response.

These three use-cases, Haiti, Pakistan and the Philippines, clearly demonstrate the added value of micro-tasking (and hence CrowdFlower) for disaster response. If CrowdFlower had not been available in Haiti, the alternative would have been to pay a handful of professional translators. The total price could have come to some $10,000 for 50,000 text messages (at $0.20 per message). Thanks to CrowdFlower, Haitians in Haiti were given the chance to make some of that money by translating the text messages themselves. Income generation programs are absolutely critical to rapid recovery following major disasters. In Pakistan, the use of CrowdFlower enabled Pakistani students and the diaspora to volunteer their time and thus accelerate the translation work for free. Following Typhoon Pablo, paid CrowdFlower workers from the Philippines, India and Australia categorized several thousand tweets in just a couple of hours while volunteers from the Standby Volunteer Task Force geo-tagged the results. Had CrowdFlower not been available then, it is highly, highly unlikely that the mission would have succeeded given the very short turnaround required by the UN.

While impressive, the above use-cases were also reactive. We need to be a lot more proactive, which is why I’m excited to be collaborating with CrowdFlower colleagues to customize a standby platform for use by the Digital Humanitarian Network. Having a platform ready to go within minutes is key. And while digital volunteers will be able to use this standby platform, I strongly believe that paid CrowdFlower workers also have a key role to play in the digital humanitarian ecosystem. Indeed, CrowdFlower’s large, multinational and multilingual global workforce is simply unparalleled and has the distinct advantage of being very well versed in the CrowdFlower platform.

In sum, it is high time that the digital humanitarian space move from crowdsourcing to micro-tasking. It has been three years since the tragic earthquake in Haiti, but we have yet to adopt micro-tasking more widely. CrowdFlower should thus play a key role in promoting and enabling this important shift. Their continued leadership in digital humanitarian response should also serve as a model for other private sector companies in the US and across the globe.


Video: Minority Report Meets Crisis Mapping

This short video was inspired by the pioneering work of the Standby Volunteer Task Force (SBTF). A global network of 1,000+ digital humanitarians in 80+ countries, the SBTF is responsible for some of the most important live crisis mapping operations that have supported both humanitarian and human rights organizations over the past 2+ years. Today, the SBTF is a founding and active member of the Digital Humanitarian Network (DHN) and remains committed to rapid learning and innovation thanks to an outstanding team of volunteers (“Mapsters”) and their novel use of next-generation humanitarian technologies.

The video first aired on the National Geographic Television Channel in February 2013. A big thanks to the awesome folks from National Geographic and the outstanding Evolve Digital Cinema Team for envisioning the future of digital humanitarian technologies: a future that my Crisis Computing Team and I at QCRI are working to create.

An aside: I tried on several occasions to hack the script and say “We” rather than “I” since crisis mapping is very rarely a solo effort, but the main sponsor insisted that the focus be on one individual. On the upside, one of the scenes in the commercial shows a Situation Room full of Mapsters coupled with the narration: “Our team can map the pulse of the planet, from anywhere, getting aid to the right places.” Our team = SBTF! Which is why the $$ received for being in this commercial will go towards supporting Mapsters.


Digital Humanitarian Response: Moving from Crowdsourcing to Microtasking

A central component of digital humanitarian response is the real-time monitoring, tagging and geo-location of relevant reports published on mainstream and social media. This has typically been a highly manual and time-consuming process, which explains why dozens if not hundreds of digital volunteers are often needed to power digital humanitarian response efforts. To coordinate these efforts, volunteers typically work off Google Spreadsheets which, needless to say, is hardly the most efficient, scalable or enjoyable interface for digital humanitarian response.


The challenge here is one of design. Google Spreadsheets was simply not designed to facilitate real-time monitoring, tagging and geo-location tasks by hundreds of digital volunteers collaborating synchronously and asynchronously across multiple time zones. The use of Google Spreadsheets not only requires up-front training of volunteers but also oversight and management. Perhaps the most problematic feature of Google Spreadsheets is the interface. Who wants to spend hours staring at cells, rows and columns? It is high time we take a more volunteer-centered design approach to digital humanitarian response. It is our responsibility to reduce the “friction” and make it as easy, pleasant and rewarding as possible for digital volunteers to share their time for the greater good. While some deride the rise of “single-click activism,” we have to make it as easy as a double-click of the mouse to support digital humanitarian efforts.

This explains why I have been actively collaborating with my colleagues behind the free and open-source micro-tasking platform PyBossa. I often describe micro-tasking as “smart crowdsourcing.” Micro-tasking is simply the process of taking a large task and breaking it down into a series of smaller tasks. Take the tagging and geo-location of disaster tweets, for example. Instead of using Google Spreadsheets, tweets with designated hashtags can be imported directly into PyBossa, where digital volunteers can tag and geo-locate said tweets as needed. As soon as they are processed, these tweets can be pushed to a live map or database right away for further analysis.
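
For readers curious about what that import step can look like in practice, here is a minimal sketch using PyBossa’s pbclient Python library. The endpoint, API key and project ID are placeholders, and the task format is an assumption of mine rather than the SBTF’s actual setup.

```python
# pip install pybossa-client
import pbclient

pbclient.set('endpoint', 'http://crowdcrafting.org')  # a public PyBossa server
pbclient.set('api_key', 'YOUR-API-KEY')               # placeholder credential

# Hypothetical tweets pulled in via a hashtag filter.
tweets = [
    {"id": 1, "text": "Surigao del Sur: relief good infant needs #pabloPH"},
    {"id": 2, "text": "Roads flooded in Davao Oriental #ReliefPH"},
]

app_id = 42  # hypothetical PyBossa project ID
for tweet in tweets:
    # Each tweet becomes one micro-task for volunteers to tag and geo-locate.
    pbclient.create_task(app_id, info=tweet)
```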

[Screenshot: PyBossa tweet-tagging interface used for Typhoon Pablo]

The Standby Volunteer Task Force (SBTF) used PyBossa in the digital disaster response to Typhoon Pablo in the Philippines. In the above example, a volunteer goes to the PyBossa website and is presented with the next tweet. In this case: “Surigao del Sur: relief good infant needs #pabloPH [Link] #ReliefPH.” If a tweet includes location information, e.g., “Surigao del Sur,” a digital volunteer can simply copy & paste that information into the search box or pinpoint the location in question directly on the map to generate the GPS coordinates. Click on the screenshot above to zoom in.

The PyBossa platform presents a number of important advantages when it comes to digital humanitarian response. One advantage is the user-friendly tutorial feature that introduces new volunteers to the task at hand. Furthermore, no prior experience or additional training is required, and the interface itself can be made available in multiple languages. Another advantage is the built-in quality control mechanism. For example, one can very easily customize the platform so that every tweet is processed by two or three different volunteers. Why would we want to do this? To ensure consensus on what the right answers are when processing a tweet. For example, if three individual volunteers each tag a tweet as having a link that points to a picture of the damage caused by Typhoon Pablo, then we can treat that as more reliable than if only one volunteer tags a tweet as such. One additional advantage of PyBossa is that having 100 or 10,000 volunteers use the platform doesn’t require additional management and oversight, unlike the use of Google Spreadsheets.
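
As a rough illustration of how such redundancy-based quality control can be aggregated, here is a simple majority-vote sketch; the tag names are hypothetical and PyBossa’s own result-analysis tooling may well differ.

```python
from collections import Counter

def consensus(answers, threshold=2):
    """Return the majority tag for a tweet if at least `threshold`
    volunteers agree; otherwise flag the tweet for another pass."""
    tag, votes = Counter(answers).most_common(1)[0]
    return tag if votes >= threshold else "needs_review"

# Three volunteers independently tagged the same tweet:
print(consensus(["damage_photo", "damage_photo", "not_relevant"]))  # damage_photo
print(consensus(["damage_photo", "not_relevant"]))                  # needs_review
```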

There are many more advantages of using PyBossa, which is why my SBTF colleagues and I are collaborating with the PyBossa team with the ultimate aim of customizing a standby platform specifically for digital humanitarian response purposes. As a first step, however, we are working together to customize a PyBossa instance for the upcoming elections in Kenya since the SBTF was activated by Ushahidi to support the election monitoring efforts. The plan is to microtask the processing of reports submitted to Ushahidi in order to significantly accelerate and scale the live mapping process. Stay tuned to iRevolution for updates on this very novel initiative.


The SBTF also made use of CrowdFlower during the response to Typhoon Pablo. Like PyBossa, CrowdFlower is a micro-tasking platform, but one developed by a for-profit company and hence primarily geared towards paying workers to complete tasks. While my focus vis-a-vis digital humanitarian response has chiefly been on (integrating) automated and volunteer-driven micro-tasking solutions, I believe that paid micro-tasking platforms also have a critical role to play in our evolving digital humanitarian ecosystem. Why? CrowdFlower has an unrivaled global workforce of more than 2 million contributors along with rigorous quality control mechanisms.

While this solution may not scale significantly given the costs, I’m hoping that CrowdFlower will offer the Digital Humanitarian Network (DHN) generous discounts moving forward. Either way, identifying which kinds of tasks are best completed by paid workers versus motivated volunteers is a question we must answer to improve our digital humanitarian workflows. This explains why I plan to collaborate with CrowdFlower directly to set up a standby platform for use by members of the Digital Humanitarian Network.

There’s one major catch with all microtasking platforms, however. Without well-designed gamification features, these tools are likely to have a short shelf-life. This is true of any citizen-science project and certainly relevant to digital humanitarian response as well, which explains why I’m a big, big fan of Zooniverse. If there’s a model to follow, a holy grail to seek out, then this is it. Until we master, or better yet partner with, the talented folks at Zooniverse, we’ll be playing catch-up for years to come. I will do my very best to make sure that doesn’t happen.

How Can Digital Humanitarians Best Organize for Disaster Response?

My colleague Duncan Watts recently spoke with Scientific American about a new project I am collaborating on with him and colleagues at Microsoft Research. I first met Duncan while at the Santa Fe Institute (SFI) back in 2006. We recently crossed paths again (at 10 Downing Street, of all places), and struck up a conversation about crisis mapping and the Standby Volunteer Task Force (SBTF). So I shared with him some of the challenges we were facing vis-a-vis the scaling up of our information processing workflows for digital humanitarian response. Duncan expressed a strong interest in working together to address some of these issues. As he told Scientific American, “We’d like to help them by trying to understand in a more scientific manner how to scale up information processing organizations like the SBTF without overloading any part of the system.”


Here are the most relevant sections of his extended interview:

In addition to improving research methods, how might the Web be used to deliver timely, meaningful research results?

Recently, a handful of volunteer “crisis mapping” organizations such as The Standby Task Force [SBTF] have begun to make a difference in crisis situations by performing real-time monitoring of information sources such as Facebook, Twitter and other social media, news reports and so on, and then superposing these reports on a map interface, which can then be used by relief agencies and affected populations alike to improve their understanding of the situation. Their efforts are truly inspiring, and they have learned a lot from experience. We want to build off that real-world model through Web-based crisis-response drills that test the best ways to communicate and coordinate resources during and after a disaster.

How might you improve upon existing crisis-mapping efforts?

The efforts of these crisis mappers are truly inspiring, and groups like the SBTF have learned a lot about how to operate more effectively, mostly from hard-won experience. At the same time, they’ve encountered some limitations to their model, which depends critically on a relatively small number of dedicated individuals, who can easily get overwhelmed or burned out. We’d like to help them by trying to understand in a more scientific manner how to scale up information processing organizations like the SBTF without overloading any part of the system.

How would you do this in the kind of virtual lab environment you’ve been describing?

The basic idea is to put groups of subjects into simulated crisis-mapping drills, systematically vary different ways of organizing them, and measure how quickly and accurately they collectively process the corresponding information. So for any given drill, the organizer would create a particular disaster scenario, including downed power lines, fallen trees, fires and flooded streets and homes. The simulation would then generate a flow of information, like a live tweet stream that resembles the kind of on-the-ground reporting that occurs in real events, but in a controllable way.

As a participant in this drill, imagine you’re monitoring a Twitter feed, or some other stream of reports, and that your job is to try to accurately recreate the organizer’s disaster map based on what you’re reading. So for example, you’re looking at Twitter feeds for everything during Hurricane Sandy that has “#sandy” associated with it. From that information, you want to build a map of New York and the tri-state region that shows everywhere there’s been lost power, everywhere there’s a downed tree, everywhere there’s a fire.

You could of course try to do this on your own, but as the rate of information flow increased, any one person would get overwhelmed; so it would be necessary to have a group of people working on it together. But depending on how the group is organized, you could imagine that they’d do a better or worse job, collectively. The goal of the experiment then would be to measure the performance of different types of organizations, say with different divisions of labor or different hierarchies of management, and discover which work better as a function of the complexity of the scenario you’ve presented and the rate of information being generated. This is something that we’re trying to build right now.

What’s the time frame for implementing such crowdsourced disaster mapping drills?

We’re months away from doing something like this. We still need to set up the logistics and are talking to a colleague [Patrick Meier] who works as a crisis mapper to get a better understanding of how they do things so that we can design the experiment in a way that is motivated by a real problem.

How will you know when your experiments have created something valuable for better managing disaster responses?

There’s no theory that says, here’s the best way to organize n people to process the maximum amount of information reliably. So ideally we would like to design an experiment that is close enough to realistic crisis-mapping scenarios that it could yield some actionable insights. But the experiment would also need to be sufficiently simple and abstract so that we learn something about how groups of people process information that generalizes beyond the very specific case of crisis mapping.

As a scientist, I want to identify causal mechanisms in a nice, clean way and reduce the problem to its essence. But as someone who cares about making a difference in the real world, I would also like to be able to go back to my friend who’s a crisis mapper and say we did the experiment, and here’s what the science says you should do to be more effective.

The full interview is available at Scientific American. Stay tuned for further updates on this research.

Help Tag Tweets from Typhoon Pablo to Support UN Disaster Response!

Update: Summary of digital humanitarian response efforts available here.

The United Nations Office for the Coordination of Humanitarian Affairs (OCHA) has just activated the Digital Humanitarian Network (DHN) to request support in response to Typhoon Pablo. They also need your help! Read on!


The UN has asked for pictures and videos of the damage to be collected from tweets posted over the past 48 hours. These pictures/videos need to be geo-tagged if at all possible, and time-stamped. The Standby Volunteer Task Force (SBTF) and Humanity Road (HR), both members of the Digital Humanitarian Network, are thus collaborating to provide the UN with the requested data, which needs to be submitted by 11pm New York time today (5am Geneva time tomorrow). Given this very short turnaround time (we only have 10 hours!), the Digital Humanitarian Network needs your help!

[Screenshot: PyBossa microtasking platform for Typhoon Pablo]

The SBTF has partnered with colleagues at PyBossa to launch this very useful microtasking platform for you to assist the UN in these efforts. No prior experience necessary. Click here or on the display above to see just how easy it is to support the disaster relief operations on the ground.

A very big thanks to Daniel Lombraña González from PyBossa for turning this around at such short notice! If you have any questions about this project or about volunteering, please feel free to add a comment to this blog post below. Even if you only have time to tag one tweet, it counts! Please help!

Some background information on this project is available here.

PeopleBrowsr: Next-Generation Social Media Analysis for Humanitarian Response?

As noted in this blog post on “Data Philanthropy for Humanitarian Response,” members of the Digital Humanitarian Network (DHNetwork) are still using manual methods for media monitoring. When the United Nations Office for the Coordination of Humanitarian Affairs (OCHA) activated the Standby Volunteer Task Force (SBTF) to crisis map Libya last year, for example, SBTF volunteers manually monitored hundreds of Twitter handles and news sites for several weeks.

SBTF volunteers (Mapsters) did not have access to a smart microtasking platform that could have distributed the tasks more efficiently. Nor did they have access to even semi-automated tools for content monitoring and information retrieval. Instead, they used a Google Spreadsheet to list the sources they were manually monitoring and turned this spreadsheet into a sign-up sheet where each Mapster could sign on for 3-hour shifts every day. The SBTF is basically doing “crowd computing” using the equivalent of a typewriter.

Meanwhile, companies like Crimson Hexagon, NetBase, Recorded Future and several others have each developed sophisticated ways to monitor social and/or mainstream media for various private sector applications such as monitoring brand perception. So after reading my post on Data Philanthropy, my colleague Nazila kindly introduced me to her colleagues at PeopleBrowsr. Last week, Marc from PeopleBrowsr gave me a thorough tour of the platform. I was definitely impressed and am excited that Marc wants us to pilot the platform in support of the Digital Humanitarian Network. So what’s the big deal about PeopleBrowsr? To begin with, the platform has access to 1,000 days of historical social media data and takes in over 3 terabytes of new social data per month.

To put this in terms of information velocity, PeopleBrowsr receives 10,000 social media posts per second from a variety of sources including Twitter, Facebook, fora and blogs. On the latter, they monitor posts from over 40 million blogs including all of Tumblr, Posterous, Blogspot and every WordPress-hosted site. They also pull in content from YouTube and Flickr.

Let’s search for the term “tsunami” on Twitter. (One could enter a complex query, e.g., and/or, not, etc., and also search using Twitter handles, word or hashtag clouds, top URLs, videos, pictures, etc.) PeopleBrowsr summarizes the results by Location and Community. Location simply refers to where those generating content referring to a tsunami are located. Of course, many Twitter users may tweet about an event without actually being eye-witnesses (think of diaspora groups, for example). While PeopleBrowsr doesn’t geo-tag the location of reported events, you can very easily and quickly identify which Twitter users are tweeting the most about a given event and where they are located.

As for Community, PeopleBrowsr has indexed millions of social media users and clustered them into different communities based on their profile/bio information. Given our interest in humanitarian response, we could create our own community of social media users from the humanitarian sector and limit our search to those users only. Communities can also be created based on hashtags. The result of the “tsunami” search is displayed below.

This result can be filtered further by gender, sentiment, number of Twitter followers, urgent words (e.g., alert, help, asap), time period and location, for example. The platform can monitor and display posts in any language. In addition, PeopleBrowsr has its very own Kred score, which quantifies the “credibility” of social media users. The scoring metrics behind Kred are completely transparent and community driven: “Kred is a transparent way to measure influence and outreach in social media. Kred generates unique scores for every domain of expertise. Regardless of follower count, a person is influential if their community is actively listening and engaging with their content.”

Using Kred, PeopleBrowsr can do influence analysis using Twitter across all languages. They’ve also added Facebook to Kred, but only as an opt-in option. PeopleBrowsr also has some great built-in and interactive data analytics tools. In addition, one can download a situation report as a PDF and print it off if there’s a need to go offline.

What appeals to me the most is perhaps the full “drill-down” functionality of PeopleBrowsr’s data analytics tools. For example, I can drill down to the number of tweets per month that reference the word “tsunami” and drill down further per week and per day.

Moreover, I can sort through the individual tweets themselves based on specific filters and even access the underlying tweets complete with Twitter handles, time-stamps, Kred scores, etc.

This latter feature would make it possible for the SBTF to copy & paste individual tweets onto a live crisis map. In fact, the underlying data can be downloaded into a CSV file and added to a Google Spreadsheet for Mapsters to curate. Hopefully the Ushahidi team will also provide an option to upload CSVs to SwiftRiver so users can curate/filter pre-existing datasets as well as content generated live. What if you don’t have time to get on PeopleBrowsr and filter, download, etc.? As part of their customer support, PeopleBrowsr will simply provide the data to you directly.
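
As a simple illustration of that curation step, the sketch below loads a hypothetical CSV export and pulls out the tweets containing urgent words. The file and column names are assumptions on my part; I don’t know PeopleBrowsr’s actual export schema.

```python
import csv

URGENT_WORDS = {"alert", "help", "asap", "urgent"}

# Hypothetical export file with a "text" column.
with open("tsunami_tweets.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Keep only rows whose text contains at least one urgent word.
urgent = [r for r in rows if URGENT_WORDS & set(r["text"].lower().split())]

# Write the curated subset out for mapping or spreadsheet review.
if urgent:
    with open("urgent_tweets.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=urgent[0].keys())
        writer.writeheader()
        writer.writerows(urgent)
```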

So what’s next? Marc and I are taking the following steps:

  1. Schedule an online demo of PeopleBrowsr for the SBTF Core Team (they are for now the only members of the Digital Humanitarian Network with a dedicated and experienced Media Monitoring Team);
  2. SBTF pilots PeopleBrowsr for preparedness purposes;
  3. SBTF deploys PeopleBrowsr during 2-3 official activations of the Digital Humanitarian Network;
  4. SBTF analyzes the added value of PeopleBrowsr for humanitarian response and provides expert feedback to PeopleBrowsr on how to improve the tool for humanitarian response.

Some Thoughts on Real-Time Awareness for Tech@State

I’ve been invited to present at Tech@State in Washington DC to share some thoughts on the future of real-time awareness. So I thought I’d use my blog to brainstorm and invite feedback from iRevolution readers. The organizers of the event have shared the following questions with me as a way to guide the conversation: Where is all of this headed? What will social media look like in five to ten years, and what will we do with all of the data? Knowing that the data stream can only increase in size, what can we do now to prepare and prevent being overwhelmed by the sheer volume of data?

These are big, open-ended questions, and I will only have 5 minutes to share some preliminary thoughts. I shall thus focus on how time-critical crowdsourcing can yield real-time awareness and expand from there.

Two years ago, my good friend and colleague Riley Crane won DARPA’s $40,000 Red Balloon Competition. His team at MIT found the location of 10 weather balloons hidden across the continental US in under 9 hours. The US covers more than 3.7 million square miles and the balloons were barely 8 feet wide. This was truly a needle-in-the-haystack kind of challenge. So how did they do it? They used crowdsourcing and leveraged social media, Twitter in particular, by using a “recursive incentive mechanism” to recruit thousands of volunteers to the cause. This mechanism rewarded individual participants financially based on how important their contributions were to locating one or more balloons. The result? Real-time, networked awareness.
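
For the curious, the payout rule the MIT team described works roughly like this: the finder of a balloon receives half the per-balloon prize, the person who recruited the finder receives half of that, and so on up the referral chain. Here is a small sketch; the names are of course made up.

```python
def recursive_payouts(prize, referral_chain):
    """Split a per-balloon prize along a referral chain,
    halving the reward at each step up the chain."""
    payouts = {}
    share = prize / 2
    for person in referral_chain:
        payouts[person] = share
        share /= 2
    return payouts

# Alice found a balloon; Bob recruited Alice; Carol recruited Bob.
print(recursive_payouts(4000, ["Alice", "Bob", "Carol"]))
# {'Alice': 2000.0, 'Bob': 1000.0, 'Carol': 500.0}
```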

Around the same time that Riley and his team celebrated their victory at MIT, another novel crowdsourcing initiative was taking place just a few miles away at The Fletcher School. Hundreds of students were busy combing through social and mainstream media channels for actionable and mappable information on Haiti following the devastating earthquake that had struck Port-au-Prince. This content was then mapped on the Ushahidi-Haiti Crisis Map, providing real-time situational awareness to first responders like the US Coast Guard and US Marine Corps. At the same time, hundreds of volunteers from the Haitian Diaspora were busy translating and geo-coding tens of thousands of text messages from disaster-affected communities in Haiti who were texting in their location & most urgent needs to a dedicated SMS short code. Fletcher School students filtered and mapped the most urgent and actionable of these text messages as well.

One year after Haiti, the United Nations Office for the Coordination of Humanitarian Affairs (OCHA) asked the Standby Volunteer Task Force (SBTF), a global network of 700+ volunteers, for a real-time map of crowdsourced social media information on Libya in order to improve their own situational awareness. Thus was born the Libya Crisis Map.

The result? The Head of OCHA’s Information Services Section at the time sent an email to SBTF volunteers to commend them for their novel efforts. In this email, he wrote:

“Your efforts at tackling a difficult problem have definitely reduced the information overload; sorting through the multitude of signals on the crisis is no easy task. The Task Force has given us an output that is manageable and digestible, which in turn contributes to better situational awareness and decision making.”

These three examples from the US, Haiti and Libya demonstrate what is already possible with time-critical crowdsourcing and social media. So where is all this headed? You may have noted from each of these examples that their success relied on the individual actions of hundreds and sometimes thousands of volunteers. This is primarily because automated solutions to filter and curate the data stream are not yet available (or rather accessible) to the wider public. Indeed, these solutions tend to be proprietary, expensive and/or classified. I thus expect to see free and open source solutions crop up in the near future; solutions that will radically democratize the tools needed to gain shared, real-time awareness.

But automated natural language processing (NLP) and machine learning alone are not likely to succeed, in my opinion. The data stream is actually not a stream; it is a massive torrent of non-indexed information, a 24-hour global firehose of real-time, distributed multimedia data that continues to outpace our ability to produce actionable intelligence from this torrential downpour of 0’s and 1’s. To turn this data tsunami into real-time shared awareness will require that our filtering and curation platforms become more automated and collaborative. I believe the key is thus to combine automated solutions with real-time collaborative crowdsourcing tools: that is, platforms that enable crowds to collaboratively filter and curate real-time information, in real time.
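
One way to picture this hybrid human-machine pipeline is a confidence-based router: the machine tags what it is sure about and sends everything else to a collaborative crowd queue. The sketch below is purely illustrative; the classifier, labels and threshold are stand-ins, not any existing platform’s design.

```python
def route(tweet, classifier, threshold=0.9):
    """Send high-confidence items to automatic tagging and route
    uncertain items to a collaborative crowdsourcing queue."""
    label, confidence = classifier(tweet)
    if confidence >= threshold:
        return ("auto", label)        # machine-tagged, no human needed
    return ("crowd_queue", tweet)     # humans filter and curate together

def toy_classifier(tweet):
    # Stand-in for a real NLP / machine-learning model.
    score = 0.95 if "bridge collapsed" in tweet else 0.55
    return ("infrastructure_damage", score)

for t in ["bridge collapsed on Main St", "so scared right now"]:
    print(route(t, toy_classifier))
```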

Right now, when we comb through Twitter, for example, we do so on our own, sitting behind our laptop, isolated from others who may be seeking to filter the exact same type of content. We need to develop free and open source platforms that allow for the distributed-but-networked, crowdsourced filtering and curation of information in order to democratize the sense-making of the firehose. Only then will the wider public be able to win the equivalent of Red Balloon competitions without needing $40,000 or a degree from MIT.

I’d love to get feedback from readers about what other compelling cases or arguments I should bring up in my presentation tomorrow. So feel free to post some suggestions in the comments section below. Thank you!

Information Forensics: Five Case Studies on How to Verify Crowdsourced Information from Social Media

My 20+ page study on verifying crowdsourced information is now publicly available here as a PDF and here as an open Google Doc for comments. I very much welcome constructive feedback from iRevolution readers so I can improve the piece before it gets published in an edited book next year.

Abstract

False information can cost lives. But no information can also cost lives, especially in a crisis zone. Indeed, information is perishable, so the potential value of information must be weighed against the urgency of the situation. Correct information that arrives too late is useless. Crowdsourced information can provide rapid situational awareness, especially when added to a live crisis map. But information in the social media space may not be reliable or immediately verifiable. This may explain why humanitarian (and news) organizations are often reluctant to leverage crowdsourced crisis maps. Many believe that verifying crowdsourced information is either too challenging or impossible. The purpose of this paper is to demonstrate that concrete strategies do exist for the verification of geo-referenced crowdsourced social media information. The study first provides a brief introduction to crisis mapping and argues that crowdsourcing is simply non-probability sampling. Next, five case studies comprising various efforts to verify social media content are analyzed to demonstrate how different verification strategies work. The five case studies are: Andy Carvin and Twitter; Kyrgyzstan and Skype; the BBC’s User-Generated Content Hub; the Standby Volunteer Task Force (SBTF); and U-Shahid in Egypt. The final section concludes the study with specific recommendations.

Update: See also this link and my other posts on Information Forensics.