We’ve just completed our very first trial run of the Standby Volunteer Task Force (SBTF) Satellite Team. As mentioned in this blog post last week, the UN approached us a couple of weeks ago to explore whether basic satellite imagery analysis for Somalia could be crowdsourced using a distributed mechanical turk approach. I had actually floated the idea in this blog post during the floods in Pakistan a year earlier. In any case, a colleague at DigitalGlobe (DG) read my post on Somalia and said: “Let’s do it.”
So I reached out to Luke Barrington at Tomnod to set up a distributed micro-tasking platform for Somalia. To learn more about Tomnod’s neat technology, see this previous blog post. Within just a few days we had high resolution satellite imagery from DG and a dedicated crowdsourcing platform for imagery analysis, courtesy of Tomnod. All that was missing were some willing and able “mapsters” from the SBTF to tag the location of shelters in this imagery. So I sent out an email to the group and some 50 mapsters signed up within 48 hours. We ran our pilot from August 26th to August 30th. The idea here was to see what would go wrong (and right!) and thus learn as much as we could before doing this for real in the coming weeks.
It is worth emphasizing that the purpose of this trial run (and entire exercise) is not to replicate the kind of advanced and highly-skilled satellite imagery analysis that professionals already carry out. This is not just about Somalia over the next few weeks and months. This is about Libya, Syria, Yemen, Afghanistan, Iraq, Pakistan, North Korea, Zimbabwe, Burma, etc. Professional satellite imagery experts who have plenty of time to volunteer their skills are few and far between. Meanwhile, a staggering amount of new satellite imagery is produced every day; millions of square kilometers’ worth according to one knowledgeable colleague.
This is a big data problem that needs mass human intervention until the software can catch up. Moreover, crowdsourcing has proven to be a workable solution in many other projects and sectors. The “crowd” can indeed scan vast volumes of satellite imagery data and tag features of interest. A number of these crowdsourcing platforms also have built-in quality assurance mechanisms that take into account the reliability of the taggers and tags. Tomnod’s CrowdRank algorithm, for example, only validates imagery analysis if a certain number of users have tagged the same image in exactly the same way. In our case, only shelters that get tagged identically by three SBTF mapsters get their locations sent to experts for review. The point here is not to replace the experts but to take some of the easier (but time-consuming) tasks off their shoulders so they can focus on applying their skill set to the harder stuff vis-à-vis imagery interpretation and analysis.
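To make the “three identical tags” rule concrete, here is a minimal sketch of consensus-based validation. CrowdRank itself is Tomnod’s proprietary algorithm, so the function name, data format, and grid-cell representation below are all illustrative assumptions, not their actual implementation:

```python
# Illustrative consensus check: a tag is confirmed only when enough
# distinct users tag the exact same cell of the same image.
from collections import defaultdict

def confirmed_locations(tags, required_agreement=3):
    """tags: iterable of (user_id, image_id, (row, col)) tuples.
    Returns the (image_id, cell) keys confirmed by at least
    `required_agreement` distinct users."""
    votes = defaultdict(set)  # (image_id, cell) -> set of user_ids
    for user_id, image_id, cell in tags:
        votes[(image_id, cell)].add(user_id)
    return [key for key, users in votes.items()
            if len(users) >= required_agreement]

tags = [
    ("anna",  "img_001", (120, 340)),
    ("ben",   "img_001", (120, 340)),
    ("carla", "img_001", (120, 340)),  # third identical tag -> confirmed
    ("anna",  "img_002", (55, 90)),
    ("ben",   "img_002", (55, 91)),    # off by one cell -> not confirmed
]
print(confirmed_locations(tags))  # [('img_001', (120, 340))]
```

Note how the near-miss on `img_002` is dropped: requiring exact agreement trades some recall for much higher confidence in what gets passed to the experts.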
The purpose of this initial trial run was simply to give SBTF mapsters the chance to test drive the Tomnod platform and to provide feedback both on the technology and the work flows we put together. They were asked to tag a specific type of shelter in the imagery they received via the web-based Tomnod platform:
There’s much that we would do differently in the future but that was exactly the point of the trial run. We had hoped to receive a “crash course” in satellite imagery analysis from the Satellite Sentinel Project (SSP) team but our colleagues had hardly slept in days because of some very important analysis they were doing on the Sudan. So we did the best we could on our own. We do have several satellite imagery experts on the SBTF team though, so their input throughout the process was very helpful.
Our entire work flow along with comments and feedback on the trial run is available in this open and editable Google Doc. You’ll note the pages (and pages) of comments, questions and answers. This is gold and the entire point of the trial run. We definitely welcome additional feedback on our approach from anyone with experience in satellite imagery interpretation and analysis.
The result? SBTF mapsters analyzed a whopping 3,700+ individual images and tagged more than 9,400 shelters in the green-shaded area below. Known as the “Afgooye corridor,” this area marks the road between Mogadishu and Afgooye which, due to displacement from war and famine in the past year, has become one of the largest urban areas in Somalia. [Note, all screen shots come from Tomnod].
Last year, UNHCR used “satellite imaging both to estimate how many people are living there, and to give the corridor a concrete reality. The images of the camps have led the UN’s refugee agency to estimate that the number of people living in the Afgooye Corridor is a staggering 410,000. Previous estimates, in September 2009, had put the number at 366,000” (1).
The yellow rectangles depict the 3,700+ individual images that SBTF volunteers individually analyzed for shelters: And here’s the output of 3 days’ worth of shelter tagging, 9,400+ tags:
Thanks to Tomnod’s CrowdRank algorithm, we were able to analyze consensus between mapsters and pull out the triangulated shelter locations. In total, we got 1,423 confirmed locations for the types of shelters described in our work flows. A first cursory glance at a handful (“random sample”) of these confirmed locations indicates they are spot on. As a next step, we could crowdsource (or SBTF-source, rather) the analysis of just these 1,423 images to triple check consensus. Incidentally, these 1,423 locations could easily be added to Google Earth or a password-protected Ushahidi map.
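Getting confirmed locations into Google Earth is mostly a matter of writing them out as KML. A hedged sketch of what that export could look like (the coordinates and naming scheme here are made up for illustration, not the actual Afgooye data):

```python
# Minimal KML export of confirmed shelter locations for Google Earth.
def to_kml(locations):
    """locations: list of (name, lon, lat) tuples -> KML 2.2 string.
    KML expects coordinates in lon,lat,altitude order."""
    placemarks = "".join(
        f"<Placemark><name>{name}</name>"
        f"<Point><coordinates>{lon},{lat},0</coordinates></Point>"
        f"</Placemark>"
        for name, lon, lat in locations)
    return ("<?xml version='1.0' encoding='UTF-8'?>"
            "<kml xmlns='http://www.opengis.net/kml/2.2'><Document>"
            f"{placemarks}</Document></kml>")

# Hypothetical confirmed location near the Afgooye corridor:
print(to_kml([("shelter-0001", 45.2, 2.14)]))
```

Saving that string as a `.kml` file makes it directly openable in Google Earth; a similar CSV export would feed an Ushahidi map.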
We’ve learned a lot during this trial run and Luke got really good feedback on how to improve the Tomnod platform moving forward. The data collected should also help us provide targeted feedback to SBTF mapsters in the coming days so they can further refine their skills. On my end, I should have been a lot more specific and detailed on exactly what types of shelters qualified for tagging. As the Q&A section on the Google Doc shows, many mapsters weren’t exactly sure at first because my original guidelines were simply too vague. So moving forward, it’s clear that we’ll need a far more detailed “code book” with many more examples of the features to look for along with features that do not qualify. A colleague of mine suggested that we set up an interactive, online quiz that takes volunteers through a series of examples of what to tag and not to tag. Only when a volunteer answers all questions correctly do they move on to live tagging. I have no doubt whatsoever that this would significantly increase consensus in subsequent imagery analysis.
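The quiz-gating idea above can be sketched in a few lines. Everything here is hypothetical (the quiz items, file names, and perfect-score rule are assumptions about how such a gate might work, not an existing system):

```python
# Hypothetical pre-tagging quiz: volunteers advance to live tagging
# only after a perfect score on worked examples of what to tag.
QUIZ = [
    {"image": "example_shelter_1.png", "should_tag": True},
    {"image": "example_rooftop.png",   "should_tag": False},
    {"image": "example_shelter_2.png", "should_tag": True},
]

def may_start_live_tagging(answers):
    """answers: list of booleans, one per quiz item, in order.
    Returns True only if every answer matches the expected tag."""
    return len(answers) == len(QUIZ) and all(
        a == q["should_tag"] for a, q in zip(answers, QUIZ))

print(may_start_live_tagging([True, False, True]))  # True
print(may_start_live_tagging([True, True, True]))   # False
```

Requiring a perfect score before live tagging is a simple way to push volunteers toward a shared definition of “shelter,” which is exactly what drives up consensus in the CrowdRank step.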
Please note: the analysis carried out in this trial run is not for humanitarian organizations or to improve situational awareness; it is for testing purposes only. The point was to try something new and in the process work out the kinks so when the UN is ready to provide us with official dedicated tasks we don’t have to scramble and climb the steep learning curve there and then.
In related news, the Humanitarian Open Street Map Team (HOT) provided SBTF mapsters with an introductory course on the OSM platform this past weekend. The HOT team has been working hard since the response to Haiti to develop an OSM Tasking Server that would allow them to micro-task the tracing of satellite imagery. They demo’d the platform to me last week and I’m very excited about this new tool in the OSM ecosystem. As soon as the system is ready for prime time, I’ll get access to the backend again and will write up a blog post specifically on the Tasking Server.
Very interesting. Various approaches and technologies are definitely converging.
Just wondered if there is not a typo here: “We ran our pilot from September 26th to September 30th.”
Wasn’t it rather this August?
Oops, yes, August. Just made the correction, thanks for letting me know!
Initial feedback from a geographer:
“They needed to better inform their volunteers about the context of why they were looking at that specific area and why they were not interested in dense collections of structures. Also, they should have given a threshold or rule of thumb for what constitutes structure densities that are too high to be tagged. Shadows seem to confuse human eyes as well as image processing procedures, so that makes me feel better. Regardless, that Turk approach would have been very useful for Darfur. My question is, does a quantitative estimate of the number of structures provide much useful information, especially given that many structures are temporary and that the images were taken at different times? It is a glamorous and fabulously organized application of crowd-sourcing geo-information, I am just wondering what they will use the numbers for. Population estimates? The number of structures used for habitation will likely depend on how densely they are laid out and in what manner they are arranged (i.e. nucleated, linear), so these would be important attributes to tag for clusters of structures as well as possibly distance from nearest high density cluster with a market. That could help determine areas that have poor access to services and shared resources.”
Hi Patrick, hope you are doing great.
I was impressed not only by the pilot but also how nicely crowdsourcing may fit with geo-spatial analysis in general.
As I see it, it will be critical in the future to integrate the efforts of SBTF (or generally, the volunteer technical community) and the work of professional analysts at formal organizations so they complement each other. At least, if I understood your post correctly, the goal was not to replicate what “formal humanitarians” are doing. Has the UN told you guys how they plan to incorporate results coming from the task force into the workflows of professional analysts? I’d love to learn more about this because I believe this is key.
Best wishes!
Very interesting. Keep on doing this good work.
Congrats on an excellent and very fruitful trial, Patrick and the SBTF Team!
This is amazing. You guys are great!!! Can’t believe it.