Tag Archives: SBTF

PeopleBrowsr: Next-Generation Social Media Analysis for Humanitarian Response?

As noted in this blog post on “Data Philanthropy for Humanitarian Response,” members of the Digital Humanitarian Network (DHNetwork) are still using manual methods for media monitoring. When the United Nations Office for the Coordination of Humanitarian Affairs (OCHA) activated the Standby Volunteer Task Force (SBTF) to crisis map Libya last year, for example, SBTF volunteers manually monitored hundreds of Twitter handles and news sites for several weeks.

SBTF volunteers (Mapsters) did not have access to a smart microtasking platform that could have distributed the task more efficiently. Nor did they have access to even semi-automated tools for content monitoring and information retrieval. Instead, they used a Google Spreadsheet to list the sources they were manually monitoring and turned this spreadsheet into a sign-up sheet where each Mapster could sign up for 3-hour shifts every day. The SBTF is basically doing “crowd computing” using the equivalent of a typewriter.

Meanwhile, companies like Crimson Hexagon, NetBase, RecordedFuture and several others have each developed sophisticated ways to monitor social and/or mainstream media for various private sector applications such as monitoring brand perception. So my colleague Nazila, after reading my post on Data Philanthropy, kindly introduced me to her colleagues at PeopleBrowsr. Last week, Marc from PeopleBrowsr gave me a thorough tour of the platform. I was definitely impressed and am excited that Marc wants us to pilot the platform in support of the Digital Humanitarian Network. So what’s the big deal about PeopleBrowsr? To begin with, the platform has access to 1,000 days of historical social media data and takes in over 3 terabytes of social data per month.

To put this in terms of information velocity, PeopleBrowsr receives 10,000 social media posts per second from a variety of sources including Twitter, Facebook, fora and blogs. On the latter, they monitor posts from over 40 million blogs including all of Tumblr, Posterous, Blogspot and every WordPress-hosted site. They also pull in content from YouTube and Flickr. (Click on the screenshots below to magnify them).

Let’s search for the term “tsunami” on Twitter. (One could also enter a complex query, e.g., and/or, not, etc., and search using Twitter handles, word or hashtag clouds, top URLs, videos, pictures, etc.) PeopleBrowsr summarizes the result by Location and Community. Location simply refers to where those generating content referring to a tsunami are located. Of course, many Twitter users may tweet about an event without actually being eyewitnesses (think of Diaspora groups, for example). While PeopleBrowsr doesn’t geo-tag the location of reported events, you can very easily and quickly identify which Twitter users are tweeting the most about a given event and where they are located.

As for Community, PeopleBrowsr has indexed millions of social media users and clustered them into different communities based on their profile/bio information. Given our interest in humanitarian response, we could create our own community of social media users from the humanitarian sector and limit our search to those users only. Communities can also be created based on hashtags. The result of the “tsunami” search is displayed below.

This result can be filtered further by gender, sentiment, number of Twitter followers, urgent words (e.g., alert, help, asap), time period and location, for example. The platform can monitor and display posts in any language. In addition, PeopleBrowsr have their very own Kred score, which quantifies the “credibility” of social media users. The metrics behind Kred scores are completely transparent and community driven. “Kred is a transparent way to measure influence and outreach in social media. Kred generates unique scores for every domain of expertise. Regardless of follower count, a person is influential if their community is actively listening and engaging with their content.”
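
To make the filtering idea a bit more concrete, here is a minimal Python sketch of the kind of urgent-word and follower-threshold filter described above. To be clear, this is not PeopleBrowsr’s actual API; the field names and urgent-word list are purely illustrative.

```python
# Illustrative sketch only; this is not PeopleBrowsr's actual API. It shows the
# kind of urgent-word / follower-threshold filtering described above.
from dataclasses import dataclass

URGENT_WORDS = {"alert", "help", "asap", "urgent", "sos"}  # illustrative list

@dataclass
class Post:
    author: str
    text: str
    followers: int

def is_urgent(post: Post, min_followers: int = 0) -> bool:
    """Flag a post that contains an urgent word and meets a follower threshold."""
    words = {w.strip("!?.,#").lower() for w in post.text.split()}
    return post.followers >= min_followers and bool(words & URGENT_WORDS)

posts = [
    Post("@observer1", "Tsunami warning issued, please help ASAP!", 1200),
    Post("@newsfan", "Documentary about the 2004 tsunami airs tonight.", 50000),
]
print([p.author for p in posts if is_urgent(p, min_followers=100)])  # ['@observer1']
```

In practice, a platform like PeopleBrowsr layers many more signals (sentiment, language, community, Kred) on top of this kind of basic keyword matching.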

Using Kred, PeopleBrowsr can do influence analysis using Twitter across all languages. They’ve also added Facebook to Kred, but only as an opt-in option. PeopleBrowsr also has some great built-in and interactive data analytics tools. In addition, one can download a situation report as a PDF and print it off if there’s a need to go offline.

What appeals to me the most is perhaps the full “drill-down” functionality of PeopleBrowsr’s data analytics tools. For example, I can drill down to the number of tweets per month that reference the word “tsunami” and drill down further per week and per day.

Moreover, I can sort through the individual tweets themselves based on specific filters and even access the underlying tweets complete with Twitter handles, time-stamps, Kred scores, etc.

This latter feature would make it possible for the SBTF to copy & paste and map individual tweets on a live crisis map. In fact, the underlying data can be downloaded into a CSV file and added to a Google Spreadsheet for Mapsters to curate. Hopefully the Ushahidi team will also provide an option to upload CSVs to SwiftRiver so users can curate/filter pre-existing datasets as well as content generated live. What if you don’t have time to get on PeopleBrowsr and filter, download, etc? As part of their customer support, PeopleBrowsr will simply provide the data to you directly.
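
As a rough illustration of that last step, here is a short Python sketch that trims an exported CSV of tweets down to the fields volunteers would paste into a shared curation spreadsheet. The column names are hypothetical; an actual PeopleBrowsr export may well be structured differently.

```python
# Hypothetical sketch: the column names below (handle, timestamp, text,
# kred_score) are invented for illustration; an actual PeopleBrowsr export may
# be structured differently. It trims an exported CSV of tweets down to the
# fields volunteers would paste into a shared curation spreadsheet.
import csv

def load_for_curation(path, min_kred=0.0):
    rows = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if float(row.get("kred_score", 0) or 0) >= min_kred:
                rows.append({
                    "handle": row.get("handle", ""),
                    "timestamp": row.get("timestamp", ""),
                    "text": row.get("text", ""),
                })
    return rows

# Example usage (file name and threshold are placeholders):
# curated = load_for_curation("tsunami_tweets.csv", min_kred=500)
```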

So what’s next? Marc and I are taking the following steps: Schedule an online demo of PeopleBrowsr for the SBTF Core Team (they are for now the only members of the Digital Humanitarian Network with a dedicated and experienced Media Monitoring Team); SBTF pilots PeopleBrowsr for preparedness purposes; SBTF deploys PeopleBrowsr during 2-3 official activations of the Digital Humanitarian Network; SBTF analyzes the added value of PeopleBrowsr for humanitarian response and provides expert feedback to PeopleBrowsr on how to improve the tool for humanitarian response.

Some Thoughts on Real-Time Awareness for Tech@State

I’ve been invited to present at Tech@State in Washington DC to share some thoughts on the future of real-time awareness. So I thought I’d use my blog to brainstorm and invite feedback from iRevolution readers. The organizers of the event have shared the following questions with me as a way to guide the conversation: Where is all of this headed? What will social media look like in five to ten years and what will we do with all of the data? Knowing that the data stream can only increase in size, what can we do now to prepare and prevent being overwhelmed by the sheer volume of data?

These are big, open-ended questions, and I will only have 5 minutes to share some preliminary thoughts. I shall thus focus on how time-critical crowdsourcing can yield real-time awareness and expand from there.

Two years ago, my good friend and colleague Riley Crane won DARPA’s $40,000 Red Balloon Competition. His team at MIT found the location of 10 weather balloons hidden across the continental US in under 9 hours. The US covers more than 3.7 million square miles and the balloons were barely 8 feet wide. This was truly a needle-in-the-haystack kind of challenge. So how did they do it? They used crowdsourcing and leveraged social media—Twitter in particular—by using a “recursive incentive mechanism” to recruit thousands of volunteers to the cause. This mechanism basically rewarded individual participants financially based on how important their contributions were to locating one or more balloons. The result? Real-time, networked awareness.
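
For readers curious about the mechanics, here is a toy Python sketch of a recursive incentive payout. The halving-up-the-chain structure mirrors the MIT scheme as it was widely reported, but the amounts and rules below are illustrative rather than a reconstruction of their actual system.

```python
# Toy sketch of a recursive incentive payout. The halving-up-the-chain shape
# mirrors the MIT scheme as widely reported, but the amounts and rules here
# are illustrative rather than a reconstruction of their actual system.
def payouts(referral_chain, base_reward=2000.0):
    """referral_chain: [finder, finder's recruiter, recruiter's recruiter, ...]"""
    rewards = {}
    amount = base_reward
    for person in referral_chain:
        rewards[person] = amount
        amount /= 2  # each level up the chain earns half of the level below it
    return rewards

print(payouts(["alice", "bob", "carol"]))
# {'alice': 2000.0, 'bob': 1000.0, 'carol': 500.0}
```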

Around the same time that Riley and his team celebrated their victory at MIT, another novel crowdsourcing initiative was taking place just a few miles away at The Fletcher School. Hundreds of students were busy combing through social and mainstream media channels for actionable and mappable information on Haiti following the devastating earthquake that had struck Port-au-Prince. This content was then mapped on the Ushahidi-Haiti Crisis Map, providing real-time situational awareness to first responders like the US Coast Guard and US Marine Corps. At the same time, hundreds of volunteers from the Haitian Diaspora were busy translating and geo-coding tens of thousands of text messages from disaster-affected communities in Haiti who were texting in their location & most urgent needs to a dedicated SMS short code. Fletcher School students filtered and mapped the most urgent and actionable of these text messages as well.

One year after Haiti, the United Nations Office for the Coordination of Humanitarian Affairs (OCHA) asked the Standby Volunteer Task Force (SBTF), a global network of 700+ volunteers, for a real-time map of crowdsourced social media information on Libya in order to improve their own situational awareness. Thus was born the Libya Crisis Map.

The result? The Head of OCHA’s Information Services Section at the time sent an email to SBTF volunteers to commend them for their novel efforts. In this email, he wrote:

“Your efforts at tackling a difficult problem have definitely reduced the information overload; sorting through the multitude of signals on the crisis is no easy task. The Task Force has given us an output that is manageable and digestible, which in turn contributes to better situational awareness and decision making.”

These three examples from the US, Haiti and Libya demonstrate what is already possible with time-critical crowdsourcing and social media. So where is all this headed? You may have noted from each of these examples that their success relied on the individual actions of hundreds and sometimes thousands of volunteers. This is primarily because automated solutions to filter and curate the data stream are not yet available (or rather accessible) to the wider public. Indeed, these solutions tend to be proprietary, expensive and/or classified. I thus expect to see free and open source solutions crop up in the near future; solutions that will radically democratize the tools needed to gain shared, real-time awareness.

But automated natural language processing (NLP) and machine learning alone are not likely to succeed, in my opinion. The data stream is actually not a stream; it is a massive torrent of non-indexed information, a 24-hour global firehose of real-time, distributed multi-media data that continues to outpace our ability to produce actionable intelligence from this torrential downpour of 0’s and 1’s. To turn this data tsunami into real-time shared awareness will require that our filtering and curation platforms become more automated and collaborative. I believe the key is thus to combine automated solutions with real-time collaborative crowdsourcing tools—that is, platforms that enable crowds to collaboratively filter and curate real-time information, in real-time.

Right now, when we comb through Twitter, for example, we do so on our own, sitting behind our laptop, isolated from others who may be seeking to filter the exact same type of content. We need to develop free and open source platforms that allow for the distributed-but-networked, crowdsourced filtering and curation of information in order to democratize the sense-making of the firehose. Only then will the wider public be able to win the equivalent of Red Balloon competitions without needing $40,000 or a degree from MIT.

I’d love to get feedback from readers about what other compelling cases or arguments I should bring up in my presentation tomorrow. So feel free to post some suggestions in the comments section below. Thank you!

Information Forensics: Five Case Studies on How to Verify Crowdsourced Information from Social Media

My 20+ page study on verifying crowdsourced information is now publicly available here as a PDF and here as an open Google Doc for comments. I very much welcome constructive feedback from iRevolution readers so I can improve the piece before it gets published in an edited book next year.

Abstract

False information can cost lives. But the absence of information can also cost lives, especially in a crisis zone. Indeed, information is perishable, so the potential value of information must be weighed against the urgency of the situation. Correct information that arrives too late is useless. Crowdsourced information can provide rapid situational awareness, especially when added to a live crisis map. But information in the social media space may not be reliable or immediately verifiable. This may explain why humanitarian (and news) organizations are often reluctant to leverage crowdsourced crisis maps. Many believe that verifying crowdsourced information is either too challenging or impossible. The purpose of this paper is to demonstrate that concrete strategies do exist for the verification of geo-referenced crowdsourced social media information. The study first provides a brief introduction to crisis mapping and argues that crowdsourcing is simply non-probability sampling. Next, five case studies comprising various efforts to verify social media are analyzed to demonstrate how different verification strategies work. The five case studies are: Andy Carvin and Twitter; Kyrgyzstan and Skype; BBC’s User-Generated Content Hub; the Standby Volunteer Task Force (SBTF); and U-Shahid in Egypt. The final section concludes the study with specific recommendations.

Update: See also this link and my other posts on Information Forensics.

Crowdsourcing Satellite Imagery Analysis for UNHCR-Somalia: Latest Results


253,711

That is the total number of tags created by 168 volunteers after processing 3,909 satellite images in just five days. A quarter of a million tags in 120 hours; that’s more than 2,000 tags per hour. Wow. As mentioned in this earlier blog post, volunteers specifically tagged three different types of informal shelters to provide UNHCR with an estimate of the IDP population in the Afgooye Corridor. So what happens now?

Our colleagues at Tomnod are going to use their CrowdRank algorithm to triangulate the data. About 85% of 3,000+ images were analyzed by at least 3 volunteers. So the CrowdRank algorithm will determine which tags had the most consensus across volunteers. This built-in quality control mechanism is a distinct advantage of using micro-tasking platforms like Tomnod. The tags with the most consensus will then be pushed to a dedicated UNHCR Ushahidi platform for further analysis. This project represents an applied research & development initiative. In short, we certainly don’t have all the answers. This next phase is where the assessment and analysis begins.
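
CrowdRank itself is Tomnod’s own algorithm, so the Python snippet below is only a simplified stand-in for the consensus idea: a tag is confirmed when enough different volunteers independently placed the same type of tag in roughly the same place.

```python
# Simplified stand-in for the consensus step (CrowdRank itself is Tomnod's own,
# more sophisticated algorithm). A tag is "confirmed" when at least `min_agree`
# distinct volunteers placed the same type of tag in roughly the same place.
from collections import defaultdict

def confirmed_tags(tags, min_agree=3, cell=10):
    """tags: list of (volunteer_id, tag_type, x, y) tuples for a single image."""
    votes = defaultdict(set)
    for volunteer, tag_type, x, y in tags:
        # Snap each tag to a coarse grid cell so near-identical clicks agree.
        votes[(tag_type, round(x / cell), round(y / cell))].add(volunteer)
    return [key for key, volunteers in votes.items() if len(volunteers) >= min_agree]

print(confirmed_tags([
    ("vol_a", "temporary_shelter", 101, 47),
    ("vol_b", "temporary_shelter", 103, 49),
    ("vol_c", "temporary_shelter", 102, 48),
    ("vol_d", "temporary_shelter", 400, 300),  # stray tag, dropped
]))
# [('temporary_shelter', 10, 5)]
```

A crude grid snap like this can split genuine agreement that falls on a cell boundary, which is exactly the kind of edge case a production consensus algorithm has to handle more gracefully.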

In the meantime, I’ve been in touch with the EC’s Joint Research Center about running their automated shelter detection algorithm on the same set of satellite imagery. The purpose is to compare those results with the crowdsourced tags in order to improve both methodologies. Clearly, none of this would be possible without the imagery and invaluable support from our colleagues at DigitalGlobe, so huge thanks to them.

And of course, there would be no project at all were it not for our incredible volunteers, the best “Mapsters” on the planet. Indeed, none of those 200,000+ tags would exist were it not for the combined effort between the Standby Volunteer Task Force (SBTF) and students from the American Society for Photogrammetry and Remote Sensing (ASPRS); Columbia University’s New Media Task Force (NMTF) who were joined by students from the New School; the Geography Departments at the University of Wisconsin-Madison, the University of Georgia, and George Mason University, and many other volunteers including humanitarian professionals from the United Nations and beyond.

As many already know, my colleague Shadrock Roberts played a pivotal role in this project. Shadrock is my fellow co-lead on the SBTF Satellite Team and he took the important initiative to draft the feature-key and rule-sets for this mission. He also answered numerous questions from many volunteers throughout the past five days. Thank you, Shadrock!

It appears that word about this innovative project has gotten back to UNHCR’s Deputy High Commissioner, Professor Alexander Aleinikoff. Shadrock and I have just been invited to meet with him in Geneva on Monday, just before the 2011 International Conference of Crisis Mappers (ICCM 2011) kicks off. We’ll be sure to share with him how incredible this volunteer network is and we’ll definitely let all volunteers know how the meeting goes. Thanks again for being the best Mapsters around!

 

Crowdsourcing Satellite Imagery Tagging to Support UNHCR in Somalia

The Standby Volunteer Task Force (SBTF) recently launched a new team called the Satellite Imagery Team. This team has been activated twice within the past few months: the first time to carry out this trial run in Somalia and the second in partnership with AI-USA for this human rights project in Syria. We’re now back in Somalia thanks to a new and promising partnership between UNHCR, DigitalGlobe, Tomnod, the SBTF and Ushahidi.

The purpose of this joint project is to crowdsource the geolocation of shelters in Somalia’s Afgooye corridor. This resembles our first trial run, only this time we have developed a formal and more specialized rule-set and feature-key in direct collaboration with our colleagues at UNHCR. As noted in this document, “Because access to the ground is difficult in Somalia, it is hard to know how many people, exactly, are affected and in what areas. By using satellite imagery to identify different types of housing/shelters, etc., we can make a better and more rapid population estimate of the number of people that live in these shelters. These estimates are important for logistics and planning purposes but are also important for understanding how the displaced population is moving and changing over time.” Hence the purpose of this project.

We’ll be tagging three different types of shelters: (1) Large permanent structures; (2) Temporary structures with a metal roof; and (3) Temporary shelters without a metal roof. Each of these shelter types is described in more detail in the rule-set along with real satellite imagery examples—the feature key. The rule-set describes the shape, color, tone and clustering of the different shelter types. As per previous SBTF Satellite Team deployments, we will be using Tomnod’s excellent microtasking platform for satellite imagery analysis.
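
To show how the rule-set and the population-estimation step could fit together, here is a small Python sketch. The three shelter categories come straight from this project, but the attribute descriptions and the assumed persons-per-shelter figures are placeholders, not the actual UNHCR/SBTF values.

```python
# Sketch of how the rule-set and the population-estimation step could fit
# together in code. The three shelter categories come from this project, but
# the attribute descriptions and persons-per-shelter figures below are
# placeholders, not the actual UNHCR/SBTF rule-set or occupancy assumptions.
SHELTER_TYPES = {
    "large_permanent": {"shape": "large rectangular", "roof": "solid",
                        "clustering": "arranged in compounds"},
    "temporary_metal_roof": {"shape": "small rectangular", "roof": "bright metal",
                             "clustering": "dense, irregular"},
    "temporary_no_metal_roof": {"shape": "small round/rectangular", "roof": "dull tarp",
                                "clustering": "dense, irregular"},
}

ASSUMED_OCCUPANCY = {  # persons per shelter; purely illustrative
    "large_permanent": 6,
    "temporary_metal_roof": 5,
    "temporary_no_metal_roof": 5,
}

def estimate_population(shelter_counts):
    """shelter_counts: dict mapping shelter type -> number of tagged shelters."""
    return sum(shelter_counts.get(k, 0) * ASSUMED_OCCUPANCY[k] for k in ASSUMED_OCCUPANCY)

print(estimate_population({"large_permanent": 200, "temporary_no_metal_roof": 1000}))
# 6200
```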

Over 100 members of the SBTF have joined the Satellite Team to support this project. One member of this team, Jamon, is an associate lecturer in the Geography Department at the University of Wisconsin-Madison. He teaches a broad array of technologies and applications of Geographic Information Science, including GPS and satellite imagery analysis. He got in touch today to propose offering this project for class credit to the 36 undergraduate students he will supervise during the exercise.

In addition, my colleague Shadrock Roberts, fellow Satellite Team coordinator at the SBTF, has recruited many graduate students who are members of the American Society for Photogrammetry and Remote Sensing (ASPRS) to join the SBTF team on this project. The experience that these students bring to the team will be invaluable. Shadrock has also played a pivotal role in making this project happen: thanks to his extensive expertise in remote sensing and satellite imagery, he took the lead in developing the rule-set and feature-key in collaboration with UNHCR.

The project officially launches this Friday. The triangulated results will be pushed to a dedicated UNHCR Ushahidi map for review. This will allow UNHCR to add additional contextual data to the maps for further analysis. We also hope that our colleagues at the European Commission’s Joint Research Center (JRC) will run their automated shelter tagging algorithm on the satellite imagery for comparative analysis purposes. This will help us better understand the strengths and shortcomings of both approaches and, more importantly, provide us with insights on how to best improve each individually and in combination.

The Standby Volunteer Task Force: One Year On

The Standby Volunteer Task Force (SBTF) was launched exactly a year ago tomorrow and what a ride it has been! It was on September 26, 2010, that I published the blog post below to begin rallying the first volunteers to the cause.

The first blog post announcing the SBTF

Some three hundred and sixty plus days later, no fewer than 621 volunteers have joined the SBTF. These amazing individuals are based in the following sixty plus countries, including: Afghanistan, Algeria, Argentina, Armenia, Australia, Belgium, Brazil, Canada, Chile, Colombia, Czech Republic, Denmark, Egypt, Finland, France, Germany, Ghana, Greece, Guam, Guatemala, Haiti, Hungary, India, Indonesia, Iran, Ireland, Israel, Italy, Japan, Jordan, Kenya, Republic of South Korea, Lebanon, Liberia, Libya, Mexico, Morocco, Nepal, Netherlands, New Zealand, Nigeria, Pakistan, Palestine, Peru, Philippines, Poland, Portugal, Senegal, Serbia, Singapore, Slovenia, Somalia, South Africa, Spain, Sudan, Switzerland, Tajikistan, Trinidad and Tobago, Tunisia, Turkey, Uganda, United Kingdom, United States and Venezuela.

Most members have added themselves to the SBTF map below.

Between them, members of the SBTF represent several hundred organizations, including the American Red Cross, the American University in Cairo, Australia’s National University, Bertelsmann Foundation, Briceland Volunteer Fire Department, Brussels School of International Studies, Carter Center, Columbia University, Crisis Commons, Deloitte Consulting, Engineers without Borders, European Commission Joint Research Center, Fairfax County International Search & Rescue Team, Fire Department of NYC, Fletcher School, GIS Corps, Global Voices Online, Google, Government of Ontario, Grameen Development Services, Habitat for Humanity, Harvard Humanitarian Initiative, International Labor Organization, International Organization for Migration, John Carroll University, Johns Hopkins University, Lewis and Clark College, Lund University, Mercy Corps, Ministry of Agriculture and Forestry of New Zealand, Medecins Sans Frontieres, NASA, National Emergency Management Association, National Institute for Urban Search and Rescue, Nethope, New York University, OCHA, Open Geospatial Consortium, OpenStreetMap, OSCE, Pan American Health Organization, Portuguese Red Cross, Sahana Software Foundation, Save the Children, Sciences Po Paris, Skoll Foundation, School of Oriental and African Studies, Tallinn University, Tech Change, Tulane University, UC Berkeley,  UN Volunteers, UNAIDS, UNDP Bangladesh, University of Algiers, University of Colorado, University of Portsmouth, UNOPS, Ushahidi-Liberia, WHO, World Bank and Yale University.

Over the past twelve months, major SBTF deployments have included the Colombia Disaster Simulation with UN OCHA Colombia, Sudan Vote Monitor, Cyclone Yasi, Christchurch Earthquake, Libya Crisis Map and the Alabama Tornado. SBTF volunteers were also involved in other projects in Mumbai, Khartoum, Somalia and Syria with partners such as UNHCR and AI-USA. The latter two saw the establishment of a brand new SBTF team, the Satellite Imagery Team, the eleventh team to join the SBTF Group (see figure below). SBTF Coordinators organized and held several trainings for new members in 2011, as have our partners like the Humanitarian OpenStreetMap Team. You can learn more about all this (and join!) by visiting the SBTF blog.

We’re grateful to have been featured in the media on several occasions over the past year, documenting how we’re changing the world, one map at a time. CNN, UK Guardian, The Economist, Fast Company, IRIN News, Washington Post, Technology Review, PBS and NPR all covered our efforts. The SBTF has also been presented at numerous conferences such as TEDxSilicon Valley, The Skoll World Forum, Re:publica, ICRC Global Communications Forum, ESRI User Conference and Share Conference. But absolutely none of this would be possible without the inspiring dedication of SBTF members and Team Coordinators.

Indeed, were it not for them, the Libya Crisis Map that we launched for UN OCHA would have looked like this (as would all the other maps):

So this digital birthday cake goes to every SBTF member who offered their time and thereby made this global network what it is today; you all know who you are and you have my sincere gratitude, respect and deep admiration. SBTF Coordinators and Core Team Members deserve very special thanks and recognition for the many, many extra days and indeed weeks they have committed to the SBTF. We are also most grateful to our partners, including Ning, UN OCHA-Geneva and OCHA-Colombia for their support, camaraderie and mentorship. So a big, big thank you to all and a very happy birthday, Mapsters! I look forward to the second candle!

Crowdsourcing Satellite Imagery Analysis for Somalia: Results of Trial Run

We’ve just completed our very first trial run of the Standby Volunteer Task Force (SBTF) Satellite Team. As mentioned in this blog post last week, the UN approached us a couple weeks ago to explore whether basic satellite imagery analysis for Somalia could be crowdsourced using a distributed mechanical turk approach. I had actually floated the idea in this blog post during the floods in Pakistan a year earlier. In any case, a colleague at DigitalGlobe (DG) read my post on Somalia and said: “Let’s do it.”

So I reached out to Luke Barrington at Tomnod to set up a distributed micro-tasking platform for Somalia. To learn more about Tomnod’s neat technology, see this previous blog post. Within just a few days we had high resolution satellite imagery from DG and a dedicated crowdsourcing platform for imagery analysis, courtesy of Tomnod. All that was missing were some willing and able “mapsters” from the SBTF to tag the location of shelters in this imagery. So I sent out an email to the group and some 50 mapsters signed up within 48 hours. We ran our pilot from August 26th to August 30th. The idea here was to see what would go wrong (and right!) and thus learn as much as we could before doing this for real in the coming weeks.

It is worth emphasizing that the purpose of this trial run (and the entire exercise) is not to replicate the kind of advanced and highly-skilled satellite imagery analysis that professionals already carry out. This is not just about Somalia over the next few weeks and months. This is about Libya, Syria, Yemen, Afghanistan, Iraq, Pakistan, North Korea, Zimbabwe, Burma, etc. Professional satellite imagery experts who have plenty of time to volunteer their skills are few and far between. Meanwhile, a staggering amount of new satellite imagery is produced every day; millions of square kilometers’ worth according to one knowledgeable colleague.

This is a big data problem that needs mass human intervention until the software can catch up. Moreover, crowdsourcing has proven to be a workable solution in many other projects and sectors. The “crowd” can indeed scan vast volumes of satellite imagery data and tag features of interest. A number of these crowdsourcing platforms also have built-in quality assurance mechanisms that take into account the reliability of the taggers and tags. Tomnod’s CrowdRank algorithm, for example, only validates imagery analysis if a certain number of users have tagged the same image in exactly the same way. In our case, only shelters that get tagged identically by three SBTF mapsters get their locations sent to experts for review. The point here is not to replace the experts but to take some of the easier (but time-consuming) tasks off their shoulders so they can focus on applying their skill set to the harder stuff vis-a-vis imagery interpretation and analysis.

The purpose of this initial trial run was simply to give SBTF mapsters the chance to test drive the Tomnod platform and to provide feedback both on the technology and the work flows we put together. They were asked to tag a specific type of shelter in the imagery they received via the web-based Tomnod platform:

There’s much that we would do differently in the future but that was exactly the point of the trial run. We had hoped to receive a “crash course” in satellite imagery analysis from the Satellite Sentinel Project (SSP) team but our colleagues had hardly slept in days because of some very important analysis they were doing on the Sudan. So we did the best we could on our own. We do have several satellite imagery experts on the SBTF team though, so their input throughout the process was very helpful.

Our entire work flow along with comments and feedback on the trial run is available in this open and editable Google Doc. You’ll note the pages (and pages) of comments, questions and answers. This is gold and the entire point of the trial run. We definitely welcome additional feedback on our approach from anyone with experience in satellite imagery interpretation and analysis.

The result? SBTF mapsters analyzed a whopping 3,700+ individual images and tagged more than 9,400 shelters in the green-shaded area below. Known as the “Afgooye corridor,” this area marks the road between Mogadishu and Afgooye which, due to displacement from war and famine in the past year, has become one of the largest urban areas in Somalia. [Note, all screen shots come from Tomnod].

Last year, UNHCR used “satellite imaging both to estimate how many people are living there, and to give the corridor a concrete reality. The images of the camps have led the UN’s refugee agency to estimate that the number of people living in the Afgooye Corridor is a staggering 410,000. Previous estimates, in September 2009, had put the number at 366,000” (1).

The yellow rectangles depict the 3,700+ individual images that SBTF volunteers analyzed for shelters. And here’s the output of three days’ worth of shelter tagging: more than 9,400 tags.

Thanks to Tomnod’s CrowdRank algorithm, we were able to analyze consensus between mapsters and pull out the triangulated shelter locations. In total, we get 1,423 confirmed locations for the types of shelters described in our work flows. A first cursory glance at a handful (a “random sample”) of these confirmed locations indicates they are spot on. As a next step, we could crowdsource (or SBTF-source, rather) the analysis of just these 1,423 images to triple check consensus. Incidentally, these 1,423 locations could easily be added to Google Earth or a password-protected Ushahidi map.

We’ve learned a lot during this trial run and Luke got really good feedback on how to improve their platform moving forward. The data collected should also help us provide targeted feedback to SBTF mapsters in the coming days so they can further refine their skills. On my end, I should have been a lot more specific and detailed on exactly what types of shelters qualified for tagging. As the Q&A section on the Google Doc shows, many mapsters weren’t exactly sure at first because my original guidelines were simply too vague. So moving forward, it’s clear that we’ll need a far more detailed “code book” with many more examples of the features to look for along with features that do not qualify. A colleague of mine suggested that we set up an interactive, online quiz that takes volunteers through a series of examples of what to tag and not to tag. Only when a volunteer answers all questions correctly do they move on to live tagging. I have no doubt whatsoever that this would significantly increase consensus in subsequent imagery analysis.
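
That quiz idea could be as simple as the following Python sketch, in which a volunteer unlocks live tagging only after labeling every training example correctly. The example image IDs and labels are, of course, placeholders.

```python
# Minimal sketch of the suggested qualification quiz: a volunteer unlocks live
# tagging only after labeling every training example correctly. The example
# image IDs and labels are placeholders.
TRAINING_EXAMPLES = [
    {"image_id": "ex-01", "correct_label": "temporary_no_metal_roof"},
    {"image_id": "ex-02", "correct_label": "not_a_shelter"},
    {"image_id": "ex-03", "correct_label": "large_permanent"},
]

def passes_quiz(answers):
    """answers: dict mapping image_id -> the label the volunteer chose."""
    return all(answers.get(ex["image_id"]) == ex["correct_label"]
               for ex in TRAINING_EXAMPLES)

print(passes_quiz({"ex-01": "temporary_no_metal_roof",
                   "ex-02": "not_a_shelter",
                   "ex-03": "large_permanent"}))
# True
```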

Please note: the analysis carried out in this trial run is not for humanitarian organizations or to improve situational awareness; it is for testing purposes only. The point was to try something new and, in the process, work out the kinks so that when the UN is ready to provide us with official dedicated tasks we don’t have to scramble and climb the steep learning curve there and then.

In related news, the Humanitarian OpenStreetMap Team (HOT) provided SBTF mapsters with an introductory course on the OSM platform this past weekend. The HOT team has been working hard since the response to Haiti to develop an OSM Tasking Server that would allow them to micro-task the tracing of satellite imagery. They demo’d the platform to me last week and I’m very excited about this new tool in the OSM ecosystem. As soon as the system is ready for prime time, I’ll get access to the backend again and will write up a blog post specifically on the Tasking Server.