While static, this crisis map includes a unique detail. Click on the map below to see a larger version, as this may help you spot what is so striking.
For a hint, click this link. Still stumped? Look at the sources listed in the Key.
OK, this is a guess, but does it combine a problem (how many dead/injured/atrocities) with the assistance and aid available on the ground?
It’s nice to see that Syria Tracker, a Crowdmap deployment, is being used as an official source of data 😉
yes… and the source of data… : )
This raises a lot of interesting questions, because normally it is not the business of the UN or the Red Cross Red Crescent Movement to count the dead. That is the role of the government. So what do you do if the government is not credible, either because it is unwilling to publish a count or because it lacks the capacity to count the dead? Secondary data is an obvious answer, and Syria Tracker can shed some light on this. But I have my doubts about the accuracy: I believe that a tool like this can show trends and hot spots, but knowing how elusive numbers are in a disaster or conflict area, I don’t think the numbers are accurate, and I’m surprised that USAID is using them as such.
Using Ushahidi’s Crowdmap product, we developed Syria Tracker [https://syriatracker.crowdmap.com] as a crowdsourced effort in which citizen reporters on the ground or abroad report crimes in Syria via direct web entry, by emailing reports to syriatracker@gmail.com, or via Twitter @syriatracker. Syria Tracker also incorporates complementary situational awareness information by mining news, blogs, Facebook posts, etc., in Arabic and English, including pro-regime sources, where we track crimes against civilians in addition to revenge killings.
The key goal of Syria Tracker has not been to provide the most recent numbers, but to help document what has been happening for the future, when an impartial investigation can be launched. In the long run, we don’t feel that the specific counts are as important as the individual names, dates, and places, which can be edited, updated, and revised over time. We are sure that, at that time, many reports will be removed from the list, but many others will be added. When the most accurate tally is collected and correlated across the different sources, it will be a reflection of all of the reporting groups, and we only hope that we can preserve the memory of victims who may otherwise be forgotten. While getting accurate and timely figures will always be a challenge, we rely on eyewitness reports and supplement and cross-check them with news and social media reports. We occasionally triangulate our numbers with other reporting groups to see where we fit. We have ‘boots on the ground’ [please see here the locations where this information comes from: http://syriatracker.blogspot.com/2012/07/where-do-syria-tracker-reports-come.html], but our approach of preserving anonymity sometimes makes it unfeasible to ask for follow-up information. Having said that, however, roughly 60% of Syria Tracker’s documented killings (more than 40,000 victims documented between March 18, 2011 and November 5, 2012; https://syriatracker.crowdmap.com/reports/view/2875) have at least one verified video or picture. More than 80% of the reports include context about what happened that has been verified and corroborated with other sources.
We never “estimate” figures. We currently push out updates every one or two weeks, depending on the availability of volunteers, under the special category “Summary,” available here: http://goo.gl/a7agQ. If our goal were to make a political statement, we would not wait so long. We have also tried to be as transparent as possible with the data we collect: we publish, in spreadsheet form, the majority of the information we collect for each reported death, in addition to making all reports publicly available and free to download [https://syriatracker.crowdmap.com/reports/download]. (Of course, we cannot share identifying information about the person making the report if they choose to remain anonymous, and we have a backlog of media links to update.) We have welcomed external analysis and hope that more people will take up the offer. (We had an interesting analysis of causes of death: https://syriatracker.crowdmap.com/reports/view/2888.) Our site also allows viewers to tag any reports that seem spurious and need additional investigation, and we welcome visitors to use that feature. We would welcome any critical insight that others wish to offer.
Many thanks for taking the time to contribute this additional information, ST.
The inclusion of the Golan Heights?
I have similar concerns to Timo Luege. The counts of civilian deaths are simply convenience samples that are subject to selection bias.
In your November 26 blogpost, you cite the work of Zagheni and Weber and describe how they adjust for selection bias when estimating migration flows using email data. The raw civilian count data on this map are also subject to selection bias, so the technical challenge is not to plot the points on a map, but rather to adjust for the underlying selection bias of civilian casualty data and then study the *estimates* (not raw counts) in relation to the spatial distribution of refugee camps, logistics/relief supplies, etc.
Crisis mappers really need to follow the lead of Zagheni and Weber by expending more thought, time, and energy on assessing data quality and adjusting for errors and biases in the underlying data. Just plotting raw data on a map is a form of empirical speculation and fails to incorporate the last 300 years of statistical thinking. It really is time to move past speculative analytics (with or without colored maps). Just because we can get more data faster does not mean that we can skip the difficult work of estimation and modeling.
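To make the “estimates, not raw counts” point concrete, here is a minimal Python sketch of two-list capture-recapture (Chapman’s version of the Lincoln-Petersen estimator), one standard way analysts adjust documented casualty lists for under-reporting. The function and the figures below are hypothetical and for illustration only, not Syria Tracker data, and the method itself rests on strong assumptions (independently compiled lists and careful record linkage on name, date, and place).

```python
# Minimal sketch (hypothetical): estimating total deaths from two incomplete,
# independently compiled casualty lists, using Chapman's version of the
# Lincoln-Petersen capture-recapture estimator.

def chapman_estimate(n_a: int, n_b: int, n_both: int) -> float:
    """Estimate the total number of deaths from two overlapping lists.

    n_a    -- deaths documented by list A
    n_b    -- deaths documented by list B
    n_both -- deaths matched across both lists (requires careful record
              linkage on name, date, and place)
    """
    return (n_a + 1) * (n_b + 1) / (n_both + 1) - 1


# Purely illustrative numbers, not real figures from any reporting group:
if __name__ == "__main__":
    estimate = chapman_estimate(n_a=40_000, n_b=35_000, n_both=25_000)
    print(f"Estimated total deaths: {estimate:,.0f}")  # ~56,000 in this example
```

The gap between either raw list and such an estimate is precisely the under-reporting that plotting raw counts on a map cannot reveal.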
Thanks for highlighting our program map, Patrick. We have incorporated crowdsourced data into some of our Syria crisis mapping products to help illustrate the areas most affected by conflict and the relationship between those areas and U.S. government humanitarian programs. As noted on the map, our use of this data does not imply endorsement or acceptance.
We hope that our use of this kind of data — even if it is only for illustrative purposes on a static map — will help generate further discussion in the crisis mapping community and the broader humanitarian community.
Thank you very much for taking the time to comment and contribute to these discussions, USAID/OFDA.
One observation that has been (surprisingly) overlooked is the set of ranges (or scale) used by USAID to represent the casualty data provided by Syria Tracker:
2,001 to 5,694 (i.e., approx. +/- 1,800)
1,001 to 2,000 (+/- 500)
201 to 1,000 (+/- 400)
1 to 200 (+/- 100)
In other words, USAID’s representation of the casualty data is not in absolute numbers but rather in ranges, which could be interpreted as an “error margin” with a lower bound, i.e., a more conservative figure that could be treated as a minimum essential indicator.
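For what it’s worth, the arithmetic behind that reading is easy to check; the short Python sketch below (a hypothetical illustration, with bin edges copied from the map’s Key and the “error margin” interpretation being mine rather than USAID’s) simply recovers the midpoint and half-width of each range.

```python
# Bin edges taken from the USAID map's Key; reading each bin as
# midpoint +/- half-width is an interpretation, not USAID's own framing.
bins = [(1, 200), (201, 1_000), (1_001, 2_000), (2_001, 5_694)]

for lo, hi in bins:
    midpoint = (lo + hi) / 2
    half_width = (hi - lo) / 2
    print(f"{lo:>5} to {hi:<5}: ~{midpoint:,.0f} +/- ~{half_width:,.0f}")
```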
In any event, if USAID is using this map (among other information products) to allocate humanitarian aid to the most hard-hit areas, then perhaps the casualty data itself need only be “approximately right and not perfectly wrong.” After all, the aid provided, as depicted in the map’s Key, is focused primarily on the living, e.g., food assistance; refugee assistance; water, sanitation, and hygiene; etc. To assume that USAID (or any other humanitarian organization) would base its entire decision-making process on one (or primarily one) figure, particularly casualties, is perhaps a bit far-fetched?
The primary objective of this map is to show the geographic and sectoral coverage of our life-saving humanitarian aid programs. Since we were only looking to *illustrate* conflict areas, it was necessary to parse this relatively dense crowdsourced data into ranges.
As you might imagine, we make programmatic decisions based on humanitarian needs identified by various sources, including our partners in the field and the international humanitarian community. We are not, in fact, using this crowdsourced data in our decision-making process.
Many thanks for confirming, USAID/OFDA. Your contributions to this conversation are invaluable.
Patrick,
What is the probabilistic (either Bayesian or Frequentist) justification for interpreting these ranges (for the casualty figures) as a measure of error or uncertainty?
Could you kindly elaborate on this interpretation? There are deep mathematical relationships that underpin probability and measure theory, and the ranges in the USAID map look arbitrary at best.
Sorry, I somehow missed this comment. Would you be willing to carry out this analysis in the rigorous way you propose, given that you are an expert in this area? I’m happy to send you the data so you can let us know exactly what the ranges should be. Thanks for your consideration.