Category Archives: Crisis Mapping

Predicting the Future of Global Geospatial Information Management

The United Nations Committee of Experts on Global Geospatial Information Management (UN-GGIM) recently organized a meeting of thought leaders and visionaries in the geospatial world to identify the future of this space over the next 5-10 years. These experts came up with some 80+ individual predictions. I’ve included some of the more interesting ones below.

  • The use of Unmanned Aerial Vehicles (UAVs) as a tool for rapid geospatial data collection will increase.
  • 3D and even 4D geospatial information, incorporating time as the fourth dimension, will increase.
  • Technology will move faster than legal and governance structures.
  • The link between geospatial information and social media, plus other actor networks, will become more and more important.
  • Real-time info will enable more dynamic modeling & response to disasters.
  • Free and open source software will continue to grow as viable alternatives both in terms of software, and potentially in analysis and processing.
  • Geospatial computation will increasingly be non-human consumable in nature, with an increase in fully-automated decision systems.
  • Businesses and Governments will increasingly invest in tools and resources to manage Big Data. The technologies required for this will enable greater use of raw data feeds from sensors and other sources of data.
  • In ten years’ time it is likely that all smartphones will be able to film 360-degree 3D video at incredibly high resolution by today’s standards & wirelessly stream it in real time.
  • There will be a need for geospatial use governance in order to discern the real world from the virtual/modelled world in a 3D geospatial environment.
  • Free and open access to data will become the norm and geospatial information will increasingly be seen as an essential public good.
  • Funding models to ensure full data coverage even in non-profitable areas will continue to be a challenge.
  • Rapid growth will lead to confusion and lack of clarity over data ownership, distribution rights, liabilities and other aspects.
  • In ten years, there will be a clear dividing line between winning and losing nations, dependent upon whether the appropriate legal and policy frameworks have been developed that enable a location-enabled society to flourish.
  • Some governments will use geospatial technology as a means to monitor or restrict the movements and personal interactions of their citizens. Individuals in these countries may be unwilling to use LBS or applications that require location for fear of this information being shared with authorities.
  • The deployment of sensors and the broader use of geospatial data within society will force public policy and law to move in a direction that protects the interests and rights of the people.
  • Spatial literacy will not be about learning GIS in schools but will be more centered on increasing spatial awareness and an understanding of the value of understanding place as context.
  • The role of National Mapping Agencies as an authoritative supplier of high quality data and of arbitrator of other geospatial data sources will continue to be crucial.
  • Monopolies held by National Mapping Agencies in some areas of specialized spatial data will be eroded completely.
  • More activities carried out by National Mapping Agencies will be outsourced and crowdsourced.
  • Crowdsourced data will push National Mapping Agencies towards niche markets.
  • National Mapping Agencies will be required to find new business models that provide simplified licensing and meet the demand for more free data.
  • The integration of crowdsourced data with government data will increase over the next 5 to 10 years.
  • Crowdsourced content will decrease cost, improve accuracy and increase availability of rich geospatial information.
  • There will be increased combining of imagery with crowdsourced data to create datasets that could not have been created affordably on their own.
  • Progress will be made on bridging the gap between authoritative data and crowdsourced data, moving towards true collaboration.
  • There will be an accelerated take-up of Volunteer Geographic Information over the next five years.
  • Within five years the level of detail on transport systems within OpenStreetMap will exceed virtually all other data sources & will be respected/used by major organisations & governments across the globe.
  • Community-based mapping will continue to grow.
  • There is unlikely to be a market for datasets like those currently sold to power navigation and location-based services solutions in 5 years, as they will have been superseded by crowdsourced datasets from OpenStreetMap or other comparable initiatives.

Which trends have the experts missed? Do you think they’re completely off on any of the above? The full set of predictions on the future of global geospatial information management is available here as a PDF.

Behind the Scenes: The Digital Operations Center of the American Red Cross

The Digital Operations Center at the American Red Cross is an important and exciting development. I recently sat down with Wendy Harman to learn more about the initiative and to exchange some lessons learned in this new world of digital humanitarians. One common challenge in emergency response is scaling. The American Red Cross cannot be everywhere at the same time—and that includes being on social media. More than 4,000 tweets reference the Red Cross on an average day, a figure that skyrockets during disasters. And when crises strike, so does Big Data. The Digital Operations Center is one response to this scaling challenge.

Sponsored by Dell, the Center uses customized software produced by Radian 6 to monitor and analyze social media in real time. The Center itself seats three people, who have access to six customized screens that relay relevant information drawn from various social media channels. The first screen below depicts some of the key topical areas that the Red Cross monitors, e.g., references to the American Red Cross, Storms in 2012, and Delivery Services.

Circle sizes in the first screen depict the volume of references related to that topic area. The color coding (red, green and beige) relates to sentiment analysis (beige being neutral). The dashboard with the “speed dials” right underneath the first screen provides more details on the sentiment analysis.
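To make the idea concrete, here is a toy sketch of how mention volume and sentiment scores might drive such a display. All topic names, scores and thresholds below are invented for illustration; Radian 6’s actual scoring is proprietary and far more sophisticated.

```python
# Toy illustration: circle size ~ mention volume, color ~ average sentiment.
# Topic names, counts and thresholds are hypothetical examples.

def sentiment_color(score: float) -> str:
    """Map an average sentiment score in [-1, 1] to a display color."""
    if score > 0.2:
        return "green"   # predominantly positive mentions
    if score < -0.2:
        return "red"     # predominantly negative mentions
    return "beige"       # neutral or mixed

topics = {
    "American Red Cross": {"mentions": 4200, "avg_sentiment": 0.35},
    "Storms 2012":        {"mentions": 1800, "avg_sentiment": -0.40},
    "Delivery Services":  {"mentions": 600,  "avg_sentiment": 0.05},
}

for name, stats in topics.items():
    radius = stats["mentions"] ** 0.5   # area-proportional sizing
    color = sentiment_color(stats["avg_sentiment"])
    print(f"{name}: radius={radius:.0f}, color={color}")
```

Sizing circles by the square root of mention counts keeps the display area proportional to volume, so high-traffic topics don’t visually dwarf everything else.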

Let’s take a closer look at the circles from the first screen. The dots “orbiting” the central icon relate to the categories of keywords that the Radian 6 platform parses. You can click on these orbiting dots to “drill down” and view the individual keywords that make up that specific category. This circles screen gets updated in near real-time and draws on data from Twitter, Facebook, YouTube, Flickr and blogs. (Note that the distance between the orbiting dots and the center does not represent anything.)

An operations center would of course not be complete without a map, so the Red Cross uses two screens to visualize different data on two heat maps. The one below depicts references made on social media platforms vis-a-vis storms that have occurred during the past 3 days.

The screen below the map highlights the bios of 50 individual Twitter users who have made references to the storms. All this data gets generated from the “Engagement Console” pictured below. The purpose of this web-based tool, which looks a lot like TweetDeck, is to enable the Red Cross to customize the specific types of information they’re looking for, and to respond accordingly.

Let’s look at the Console more closely. In the Workflow section on the left, users decide what types of tags they’re looking for and can also filter by priority level. They can also specify the type of sentiment they’re looking for, e.g., negative feelings vis-a-vis a particular issue. In addition, they can take certain actions in response to each information item. For example, they can reply to a tweet, a Facebook status update, or a blog post; and they can do this directly from the Engagement Console. Based on the license that the Red Cross uses, up to 25 of their team members can access the Console and collaborate in real-time when processing the various tweets and Facebook updates.

The Console also allows users to create customized timelines, charts and word-cloud graphics to better understand trends changing over time in the social media space. To fully leverage this social media monitoring platform, Wendy and team are also launching a digital volunteers program. The goal is for these volunteers to eventually become the prime users of the Radian platform and to filter the bulk of relevant information in the social media space. This would considerably lighten the load for existing staff. In other words, the volunteer program would help the American Red Cross scale in the social media world we live in.

Wendy plans to set up a dedicated 2-hour training for individuals who want to volunteer online in support of the Digital Operations Center. These trainings will be carried out via Webex and will also be available to existing Red Cross staff.


As argued in this previous blog post, the launch of this Digital Operations Center is further evidence that the humanitarian space is ready for innovation and that some technology companies are starting to think about how their solutions might be applied for humanitarian purposes. Indeed, it was Dell that first approached the Red Cross with an expressed interest in contributing to the organization’s efforts in disaster response. The initiative also demonstrates that combining automated natural language processing solutions with a digital volunteer network seems to be a winning strategy, at least for now.

After listening to Wendy describe the various tools she and her colleagues use as part of the Operations Center, I began to wonder whether these types of tools will eventually become free and easy enough for one person to be her very own operations center. I suppose only time will tell. Until then, I look forward to following the Center’s progress and hope it inspires other emergency response organizations to adopt similar solutions.

Twitcident: Filtering Tweets in Real-Time for Crisis Response

The most recent newcomer to the “tweetsourcing” space comes to us from Delft University of Technology in the Netherlands. Twitcident is a web-based filtering system that extracts crisis information from Twitter in real time to support emergency response efforts. Dutch emergency services have been testing the platform over the past 10 months and results “show the system to be far more useful than simple keyword searching of a Twitter feed” (NewScientist).

Here’s how it works. First the dashboard, which shows current events-of-interest being monitored.

Let’s click on “Texas”, which produces the following page. More than 22,000 tweets potentially relate to the actual fire of interest.

This is where the filtering comes in:

The number of relevant tweets is reduced with every applied filter.

Naturally, geo-location is also an optional filter.
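Twitcident’s developers have not published their algorithms, but the successive narrowing described above can be sketched as a chain of simple predicate filters. The tweet fields, keywords and sample data below are hypothetical stand-ins, not the platform’s actual schema.

```python
# Hypothetical sketch of successive filtering: each filter further
# narrows the candidate set, as on the Twitcident dashboard.

tweets = [
    {"text": "Huge fire near the refinery, smoke everywhere", "has_geo": True,  "lang": "en"},
    {"text": "Texas BBQ is on fire tonight!",                 "has_geo": False, "lang": "en"},
    {"text": "Evacuations ordered after explosion",           "has_geo": True,  "lang": "en"},
    {"text": "Nieuwe telefoon gekocht",                       "has_geo": True,  "lang": "nl"},
]

filters = [
    lambda t: t["lang"] == "en",                      # language filter
    lambda t: any(w in t["text"].lower()              # incident-keyword filter
                  for w in ("fire", "smoke", "explosion", "evacuat")),
    lambda t: t["has_geo"],                           # optional geo-location filter
]

candidates = tweets
for f in filters:
    candidates = [t for t in candidates if f(t)]
    print(f"{len(candidates)} tweets remain")
```

Note the false positive that survives the keyword filter (“Texas BBQ is on fire”), which the geo filter here happens to remove; in practice, relevance classification needs considerably more than keyword matching.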

Twitcident also allows for various visualization options, including timelines, word clouds and charts.

The system also allows the user to view the filtered tweets on a map. The pictures and videos shared via twitter are also aggregated and viewable on dedicated tabs.

The developers of the platform have not revealed how their algorithms work but will demo the tool at the World Wide Web 2012 conference in France next week. In the meantime, here’s a graphic that summarizes the platform workflow.

I look forward to following Twitcident’s developments. I’d be particularly interested in learning more about how Dutch emergency services have been using the tool and what features they think would improve the platform’s added value.

Does the Humanitarian Industry Have a Future in The Digital Age?

I recently had the distinct honor of being on the opening plenary of the 2012 Skoll World Forum in Oxford. The panel, “Innovation in Times of Flux: Opportunities on the Heels of Crisis,” was moderated by Judith Rodin, CEO of the Rockefeller Foundation. I’ve spent the past six years creating linkages between the humanitarian space and technology community, so the conversations we began during the panel prompted me to think more deeply about innovation in the humanitarian industry. Clearly, humanitarian crises have catalyzed a number of important innovations in recent years. At the same time, however, these crises extend the cracks that ultimately reveal the inadequacies of existing organizations, particularly those resistant to change; and “any organization that is not changing is a battlefield monument” (While 1992).

These cracks, or gaps, are increasingly filled by disaster-affected communities themselves thanks in part to the rapid commercialization of communication technology. The question is: will the multi-billion dollar humanitarian industry change rapidly enough to avoid being left in the dustbin of history?

Crises often reveal that “existing routines are inadequate or even counter-productive [since] response will necessarily operate beyond the boundary of planned and resourced capabilities” (Leonard and Howitt 2007). More formally, “the ‘symmetry-breaking’ effects of disasters undermine linearly designed and centralized administrative activities” (Corbacioglu 2006). This may explain why “increasing attention is now paid to the capacity of disaster-affected communities to ‘bounce back’ or to recover with little or no external assistance following a disaster” (Manyena 2006).

But disaster-affected populations have always self-organized in times of crisis. Indeed, first responders are by definition those very communities affected by disasters. So local communities—rather than humanitarian professionals—save the most lives following a disaster (Gilbert 1998). Many of the needs arising after a disaster can often be met and responded to locally. One doesn’t need 10 years of work experience with the UN in Darfur or a Masters degree to know basic first aid or to pull a neighbor out of the rubble, for example. In fact, estimates suggest that “no more than 10% of survival in emergencies can be attributed to external sources of relief aid” (Hilhorst 2004).

This figure may be higher today since disaster-affected communities now benefit from radically wider access to information and communication technologies (ICTs). After all, a “disaster is first of all seen as a crisis in communicating within a community—that is as a difficulty for someone to get informed and to inform other people” (Gilbert 1998). This communication challenge is far less acute today because disaster-affected communities are increasingly digital, and thus more and more the primary source of information communicated following a crisis. Of course, these communities were always sources of information but being a source in an analog world is fundamentally different than being a source of information in the digital age. The difference between “read-only” versus “read-write” comes to mind as an analogy. And so, while humanitarian organizations typically faced a vacuum of information following sudden onset disasters—limited situational awareness that could only be filled by humanitarians on the ground or via established news organizations—one of the major challenges today is the Big Data produced by disaster-affected communities themselves.

Indeed, vacuums are not empty and local communities are not invisible. One could say that disaster-affected communities are joining the quantified self (QS) movement given that they are increasingly quantifying themselves. If information is power, then the shift of information sourcing and sharing from the select few—the humanitarian professionals—to the masses must also engender a shift in power. Indeed, humanitarians rarely have access to exclusive information any longer. And even though affected populations are increasingly digital, some groups believe that humanitarian organizations have largely failed at communicating with disaster-affected communities. (Naturally, there are important and noteworthy exceptions.)

So “Will Twitter Put the UN Out of Business?” (Reuters), or will humanitarian organizations cope with these radical changes by changing themselves and reshaping their role as institutions before it’s too late? Indeed, “a business that doesn’t communicate with its customers won’t stay in business very long—it’ll soon lose track of what its clients want, and clients won’t know what products or services are on offer,” whilst other actors fill the gaps (Reuters). “In the multi-billion dollar humanitarian aid industry, relief agencies are businesses and their beneficiaries are customers. Yet many agencies have muddled along for decades with scarcely a nod towards communicating with the folks they’re supposed to be serving” (Reuters).

The music and news industries were muddling along as well for decades. Today, however, they are facing tremendous pressures and are undergoing radical structural changes—none of them by choice. Of course, it would be different if affected communities were paying for humanitarian services but how much longer do humanitarian organizations have until they feel similar pressures?

Whether humanitarian organizations like it or not, disaster-affected communities will increasingly communicate their needs publicly and many will expect a response from the humanitarian industry. This survey carried out by the American Red Cross two years ago already revealed that during a crisis the majority of the public expect a response to needs they communicate via social media. Moreover, they expect this response to materialize within an hour. Humanitarian organizations simply don’t have the capacity to deal with this surge in requests for help, nor are they organizationally structured to do so. But the fact of the matter is that humanitarian organizations have never been capable of dealing with this volume of requests in the first place. So “What Good is Crowdsourcing When Everyone Needs Help?” (Reuters). Perhaps “crowdsourcing” is finally revealing all the cracks in the system, which may not be a bad thing. Surely by now it is no longer a surprise that many people may be in need of help after a disaster, hence the importance of disaster risk reduction and preparedness.

Naturally, humanitarian organizations could very well choose to continue ignoring calls for help and decide that communicating with disaster-affected communities is simply not tenable. In the analog world of the past, the humanitarian industry was protected by the fact that their “clients” did not have a voice because they could not speak out digitally. So the cracks didn’t show. Today, “many traditional humanitarian players see crowdsourcing as an unwelcome distraction at a time when they are already overwhelmed. They worry that the noise-to-signal ratio is just too high” (Reuters). I think there’s an important disconnect here worth emphasizing. Crowdsourced information is simply user-generated content. If humanitarians are to ignore user-generated content, then they can forget about two-way communications with disaster-affected communities and drop all the rhetoric. On the other hand, “if aid agencies are to invest time and resources in handling torrents of crowdsourced information in disaster zones, they should be confident it’s worth their while” (Reuters).

This last comment is … rather problematic for several reasons (how’s that for being diplomatic?). First of all, this kind of statement continues to propel the myth that we the West are the rescuers and aid does not start until we arrive (Barrs 2006). Unfortunately, we rarely arrive: how many “neglected crises” and so-called “forgotten emergencies” have we failed to intervene in? This kind of mindset may explain why humanitarian interventions often have the “propensity to follow a paternalistic mode that can lead to a skewing of activities towards supply rather than demand” and towards informing at the expense of listening (Manyena 2006).

Secondly, the assumption that crowdsourced data would be for the exclusive purpose of the humanitarian cavalry is somewhat arrogant and ignores the reality that local communities are by definition the first responders in a crisis. Disaster-affected communities (and Diasporas) are already collecting (and yes, crowdsourcing) information to create their own crisis maps in times of need, as a forthcoming report shows. And they’ll keep doing this whether or not humanitarian organizations approve or leverage that information. As my colleague Tim McNamara has noted, “Crisis mapping is not simply a technological shift, it is also a process of rapid decentralization of power. With extremely low barriers to entry, many new entrants are appearing in the fields of emergency and disaster response. They are ignoring the traditional hierarchies, because the new entrants perceive that there is something that they can do which benefits others.”

Thirdly, humanitarian organizations are far more open to using free and open source software than they were just two years ago. So the resources required to monitor and map crowdsourced information need not break the bank. Indeed, the Syria Crisis Map uses a free and open source data-mining platform called HealthMap, which has been monitoring some 2,000 English-based sources on a daily basis for months. The technology powering the map itself, Ushahidi, is also free and open source. Moreover, the team behind the project is comprised of just a handful of volunteers doing this in their own free time (for almost an entire year now). And as a result of this initiative, I am collaborating with a colleague from UNDP to pilot HealthMap’s data mining feature for conflict monitoring and peacebuilding purposes.

Fourth, other than UN Global Pulse, humanitarian agencies are not investing time and resources to manage Big (Crisis) Data. Why? Because they have neither the time nor the know-how. To this end, they are starting to “outsource” and indeed “crowdsource” these tasks—just as private sector businesses have been doing for years in order to extend their reach. Anyone actually familiar with this space and developments since Haiti already knows this. The CrisisMappers Network, Standby Volunteer Task Force (SBTF), Humanitarian OpenStreetMap (HOT) and Crisis Commons (CC) are four volunteer/technical networks that have already collaborated actively with a number of humanitarian organizations since Haiti to provide the “surge capacity” requested by the latter; this includes UN OCHA in Libya and Colombia, UNHCR in Somalia and WHO in Libya, to name a few. In fact, these groups even have their own acronym: Volunteer & Technical Communities (V&TCs).

As the former head of OCHA’s Information Services Section (ISS) noted after the SBTF launched the Libya Crisis Map, “Your efforts at tackling a difficult problem have definitely reduced the information overload; sorting through the multitude of signals on the crisis is no easy task” (March 8, 2011). Furthermore, the crowdsourced social media information mapped on the Libya Crisis Map was integrated into official UN OCHA information products. I dare say activating the SBTF was worth OCHA’s while. And it cost the UN a grand total of $0 to benefit from this support.

Credit: Chris Bow

The rapid rise of V&TCs has catalyzed the launch of the Digital Humanitarian Network (DHN), formerly called the Humanitarian Standby Task Force (H-SBTF). Digital Humanitarians is a network-of-networks catalyzed by the UN and comprising some of the most active members of the volunteer & technical community. The purpose of the Digital Humanitarian platform (powered by Ning) is to provide a dedicated interface for traditional humanitarian organizations to outsource and crowdsource important information management tasks during and in-between crises. OCHA has also launched the Communities of Interest (COIs) platform to further leverage volunteer engagement in other areas of humanitarian response.

These are not isolated efforts. During the massive Russian fires of 2010, volunteers launched their own citizen-based disaster response agency that was seen by many as more visible and effective than the Kremlin’s response. In Egypt, volunteers used IntaFeen.com to crowdsource and coordinate their own humanitarian convoys to Libya, for example. LinkedIn has also taken innovative steps to enable the matching of volunteers with various needs, recently adding a “Volunteer and Causes” field to member profile pages, now available to 150 million LinkedIn users worldwide. Sparked.com is yet another group engaged in matching volunteers with needs. The company is the world’s first micro-volunteering network, sending registered volunteers challenges targeted to their skill sets and the causes they are most passionate about.

It is not far-fetched to envisage how these technologies could be repurposed or simply applied to facilitate and streamline volunteer management following a disaster. Indeed, researchers at the University of Queensland in Australia have already developed a new smartphone app to help mobilize and coordinate volunteer efforts during and following major disasters. The app not only provides information on preparedness but also gives real-time updates on volunteering opportunities by local area. For example, volunteers can register for a variety of tasks including community response to extreme weather events.

Meanwhile, the American Red Cross just launched a Digital Operations Center in partnership with Dell Labs, which allows them to leverage digital volunteers and Dell’s social media monitoring platforms to reduce the noise-to-signal ratio. This is a novel “social media-based operation devoted to humanitarian relief, demonstrating the growing importance of social media in emergency situations.” As part of this center, the Red Cross also “announced a Digital Volunteer program to help respond to questions from and provide information to the public during disasters.”

While important challenges do exist, there are many positive externalities to leveraging digital volunteers. As UNHCR’s deputy high commissioner noted about this UNHCR-volunteer project in Somalia, these types of projects create more citizen engagement and raise awareness of humanitarian organizations and projects. This in part explains why UNHCR wants more, not less, engagement with digital volunteers. Indeed, these volunteers also develop important skills that will be increasingly sought after by humanitarian organizations recruiting for junior full-time positions. Humanitarian organizations are likely to become smarter and more up to speed on humanitarian technologies and digital humanitarian skills as a result. This change should be embraced.

So given the rise of “self-quantified” disaster-affected communities and digitally empowered volunteer communities, is there a future for traditional humanitarian organizations? Of course there is: anyone who suggests otherwise is seriously misguided and out of touch with innovation in the humanitarian space. Twitter will not put the UN out of business. Humanitarian organizations will continue to play some very important roles, especially those relating to logistics and coordination. These organizations will continue outsourcing some roles but will also take on some new roles. The issue here is simply one of comparative advantage. Humanitarian organizations used to have a comparative advantage in some areas, but this has shifted for all the reasons described above. So outsourcing in some cases makes perfect sense.

Interestingly, organizations like UN OCHA are also changing some of their own internal information management processes as a result of their collaboration with volunteer networks like the SBTF, which they expect will lead to a number of efficiency gains. Furthermore, OCHA is behind the Digital Humanitarians initiative and has also been developing a check-in app for humanitarian professionals to use in disaster response—clear signs of innovation and change. Meanwhile, the UK’s Department for International Development (DfID) has just launched a $75+ million fund to leverage new technologies in support of humanitarian response; this includes mobile phones, satellite imagery, Twitter as well as other social media technologies, digital mapping and gaming technologies. Given that crisis mapping integrates these new technologies and has been at the cutting edge of innovation in the humanitarian space, I’ve invited DfID to participate in this year’s International Conference on Crisis Mapping (ICCM 2012).

In conclusion, and as argued two years ago, the humanitarian industry is shifting towards a more multi-polar system. The rise of new actors, from digitally empowered disaster-affected communities to digital volunteer networks, has been driven by the rapid commercialization of communication technology—particularly the mobile phone and social networking platforms. These trends are unlikely to change soon and crises will continue to spur innovations in this space. This does not mean that traditional humanitarian organizations are becoming obsolete. Their roles are simply changing and this change is proof that they are not battlefield monuments. Of course, only time will tell whether they change fast enough.

Crowdsourcing Humanitarian Convoys in Libya

Many activists in Egypt donated food and medical supplies to support the Libyan revolution in early 2011. As a result, volunteers set up and coordinated humanitarian convoys from major Egyptian cities to Tripoli. But these convoys faced two major problems. First, volunteers needed to know where the convoys were in order to communicate this to Libyan revolutionaries, who could then wait for the fleet at the border and escort it to Tripoli. Second, because these volunteers were headed into a war zone, their friends and family wanted to keep track of them to make sure they were safe. The solution? IntaFeen.com.

“Inta feen?” means “where are you?” in Arabic, and IntaFeen.com is a mobile check-in service like Foursquare but localized for the Arab world. Convoy drivers used IntaFeen to check in at different stops along the way to Tripoli to provide regular updates on the situation. This is how volunteers back in Egypt who coordinated the convoy kept track of its progress and communicated updates in real time to their Libyan counterparts. Volunteers who went along with the convoys also used IntaFeen, and their check-ins would also get posted to Twitter and Facebook, allowing families and friends in Egypt to track their whereabouts.

Al Amain Road is a highway between Alexandria and Tripoli. These tweets and check-ins acted as a DIY fleet management system for volunteers and activists.

The use of IntaFeen combined with Facebook and Twitter also created an interesting side-effect in terms of social media marketing to promote activism. The sharing of these updates within and across various social networks galvanized more Egyptians to volunteer their time and resulted in more convoys.

I wonder whether these activists knew about another crowdsourced volunteer project taking place at exactly the same time in support of the UN’s humanitarian relief operations: Libya Crisis Map. Much of the content added to the map was sourced from social media. Could the #LibyaConvoy project have benefited from the real-time situational awareness provided by the Libya Crisis Map?

Will we see more convergence between volunteer-run crisis maps and volunteer-run humanitarian response in the near future?

Big thanks to Adel Youssef from IntaFeen.com, who spoke about this fascinating project (and Ushahidi) at Where 2.0 this week. More information on #LibyaConvoy is available here. See also my earlier blog posts on the use of check-ins for activism and disaster response.

Crisis Mapping Syria: Automated Data Mining and Crowdsourced Human Intelligence

The Syria Tracker Crisis Map is without doubt one of the most impressive crisis mapping projects yet. Launched just a few weeks after the protests began one year ago, the crisis map is spearheaded by just a handful of US-based Syrian activists who have meticulously and systematically documented 1,529 reports of human rights violations including a total of 11,147 killings. As recently reported in this NewScientist article, “Mapping the Human Cost of Syria’s Uprising,” the crisis map “could be the most accurate estimate yet of the death toll in Syria’s uprising […].” Their approach? “A combination of automated data mining and crowdsourced human intelligence,” which “could provide a powerful means to assess the human cost of wars and disasters.”

On the data-mining side, Syria Tracker has repurposed the HealthMap platform, which mines thousands of online sources for the purposes of disease detection and then maps the results, “giving public-health officials an easy way to monitor local disease conditions.” The customized version of this platform for Syria Tracker (ST), known as HealthMap Crisis, mines English information sources for evidence of human rights violations, such as killings, torture and detainment. As the ST Team notes, their data mining platform “draws from a broad range of sources to reduce reporting biases.” Between June 2011 and January 2012, for example, the platform collected over 43,000 news articles and blog posts from almost 2,000 English-based sources from around the world (including some pro-regime sources).
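At its simplest, this kind of mining can be pictured as a keyword filter running over a stream of articles. The sketch below is purely illustrative: the keyword list and function names are my own invention, not HealthMap's actual (far more sophisticated) pipeline.

```python
# Minimal sketch of keyword-based event detection over news articles.
# Illustration only; HealthMap's real pipeline involves much more
# (NLP, source weighting, deduplication, etc.).

VIOLATION_KEYWORDS = {"killed", "shelling", "torture", "detained", "arrested"}

def flag_article(title, body):
    """Return the set of violation keywords found in an article."""
    text = (title + " " + body).lower()
    return {kw for kw in VIOLATION_KEYWORDS if kw in text}

def mine(articles):
    """Keep only articles that mention at least one violation keyword."""
    return [a for a in articles if flag_article(a["title"], a["body"])]

articles = [
    {"title": "Shelling reported in Homs", "body": "Dozens killed overnight."},
    {"title": "Football results", "body": "The home side won 2-0."},
]
print(len(mine(articles)))  # -> 1
```

Even a filter this crude shows why casting a wide net over ~2,000 sources matters: any single source can miss or bias an event, but keyword hits across many independent outlets are harder to suppress.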

Syria Tracker combines the results of this sophisticated data mining approach with crowdsourced human intelligence, i.e., field-based eye-witness reports shared via webform, email, Twitter, Facebook, YouTube and voicemail. This naturally presents several important security issues, which explains why the main ST website includes an instructions page detailing security precautions that need to be taken while submitting reports from within Syria. They also link to this practical guide on how to protect your identity and security online and when using mobile phones. The guide is available in both English and Arabic.

Eye-witness reports are subsequently translated, geo-referenced, coded and verified by a group of volunteers who triangulate the information with other sources such as those provided by the HealthMap Crisis platform. They also filter the reports and remove duplicates. Reports that have a low confidence level vis-a-vis veracity are also removed. Volunteers use a vote-up/vote-down feature to “score” the veracity of eye-witness reports. Using this approach, the ST Team and their volunteers have been able to verify almost 90% of the documented killings mapped on their platform thanks to video and/or photographic evidence. They have also been able to associate specific names to about 88% of those reported killed by Syrian forces since the uprising began.
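The scoring-and-filtering step can be sketched as a simple confidence score combining volunteer votes, corroborating sources and media evidence. To be clear, the weights and threshold below are invented for illustration; this is not Syria Tracker's actual formula.

```python
def confidence(up_votes, down_votes, corroborating_sources, has_media):
    """Toy veracity score for an eye-witness report, in [0.0, 1.0].
    Weights and thresholds are illustrative, not Syria Tracker's."""
    vote_total = up_votes + down_votes
    vote_score = up_votes / vote_total if vote_total else 0.5
    source_score = min(corroborating_sources, 3) / 3  # caps at 3 sources
    media_bonus = 0.2 if has_media else 0.0           # video/photo evidence
    return min(1.0, 0.5 * vote_score + 0.3 * source_score + media_bonus)

def keep(report, threshold=0.6):
    """Low-confidence reports are removed before mapping."""
    return confidence(**report) >= threshold

report = {"up_votes": 8, "down_votes": 1,
          "corroborating_sources": 2, "has_media": True}
print(keep(report))  # -> True
```

The design point is that no single signal decides veracity: a report with strong votes but no corroboration, or corroboration but heavy down-voting, falls below the threshold.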

Depending on the levels of violence in Syria, the turnaround time for a report to be mapped on Syria Tracker is between one and three days. The team also produces weekly situation reports based on the data they’ve collected along with detailed graphical analysis. KML files that can be uploaded and viewed using Google Earth are also made available on a regular basis. These provide “a more precisely geo-located tally of deaths per location.”

In sum, Syria Tracker is very much breaking new ground vis-a-vis crisis mapping. They’re combining automated data mining technology with crowdsourced eye-witness reports from Syria. In addition, they’ve been doing this for a year, which makes the project the longest-running crisis map I’ve seen in a hostile environment. Moreover, they’ve been able to sustain these important efforts with just a small team of volunteers. As for the veracity of the collected information, I know of no other public effort that has taken such a meticulous and rigorous approach to documenting the killings in Syria in near real-time. On February 24th, Al-Jazeera posted the following estimates:

Syrian Revolution Coordination Union: 9,073 deaths
Local Coordination Committees: 8,551 deaths
Syrian Observatory for Human Rights: 5,581 deaths

At the time, Syria Tracker had a total of 7,901 documented killings associated with specific names, dates and locations. While some duplicate reports may remain, the team argues that “missing records are a much bigger source of error.” Indeed, they believe that “the higher estimates are more likely, even if one chooses to disregard those reports that came in on some of the most violent days where names were not always recorded.”

The Syria Crisis Map itself has been viewed by visitors from 136 countries around the world and 2,018 cities—with the top 3 cities being Damascus, Washington DC and, interestingly, Riyadh, Saudi Arabia. The witnessing has thus been truly global and collective. When the Syrian regime falls, “the data may help subsequent governments hold him and other senior leaders to account,” writes the New Scientist. This was one of the principal motivations behind the launch of the Ushahidi platform in Kenya over four years ago. Syria Tracker is powered by Ushahidi’s cloud-based platform, Crowdmap. Finally, we know for a fact that the International Criminal Court (ICC) and Amnesty International (AI) closely followed the Libya Crisis Map last year.

Twitter, Crises and Early Detection: Why “Small Data” Still Matters

My colleagues John Brownstein and Rumi Chunara at Harvard University’s HealthMap project are continuing to break new ground in the field of Digital Disease Detection. Using data obtained from tweets and online news, the team was able to identify a cholera outbreak in Haiti weeks before health officials acknowledged the problem publicly. Meanwhile, my colleagues from UN Global Pulse partnered with Crimson Hexagon to forecast food prices in Indonesia by carrying out sentiment analysis of tweets. I had actually written this blog post on Crimson Hexagon four years ago to explore how the platform could be used for early warning purposes, so I’m thrilled to see this potential realized.

There is a lot that intrigues me about the work that HealthMap and Global Pulse are doing. But one point that really struck me vis-a-vis the former is just how little data was necessary to identify the outbreak. To be sure, not many Haitians are on Twitter and my impression is that most humanitarians have not really taken to Twitter either (I’m not sure about the Haitian Diaspora). This would suggest that accurate, early detection is possible even without Big Data; even with “Small Data” that is neither representative nor verified. (Interestingly, Rumi notes that the Haiti dataset is actually larger than datasets typically used for this kind of study).

In related news, a recent peer-reviewed study by the European Commission found that the spatial distribution of crowdsourced text messages (SMS) following the earthquake in Haiti was strongly correlated with building damage. Again, the dataset of text messages was relatively small. And again, this data was neither collected using random sampling (i.e., it was crowdsourced) nor was it verified for accuracy. Yet the analysis of this small dataset still yielded some particularly interesting findings that have important implications for rapid damage detection in post-emergency contexts.

While I’m no expert in econometrics, what these studies suggest to me is that detecting change over time is ultimately more critical than having a large-N dataset, let alone one that is obtained via random sampling or even vetted for quality control purposes. That doesn’t mean that the latter factors are not important, it simply means that the outcome of the analysis is relatively less sensitive to these specific variables. Changes in the baseline volume/location of tweets on a given topic appear to be strongly correlated with offline dynamics.
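To make the change-over-time intuition concrete, here is a minimal rolling-baseline detector on daily message counts. The window size and z-score threshold are arbitrary illustrative choices, not parameters taken from either study.

```python
import statistics

def detect_spikes(daily_counts, window=7, z_threshold=3.0):
    """Flag days whose count deviates sharply from the trailing baseline.
    Window and threshold are arbitrary illustrative choices."""
    spikes = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline) or 1.0  # avoid divide-by-zero
        z = (daily_counts[i] - mean) / stdev
        if z > z_threshold:
            spikes.append(i)
    return spikes

# Ten quiet days, then a sudden surge of topic mentions on day 10.
counts = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5, 42]
print(detect_spikes(counts))  # -> [10]
```

Note that the detector never asks whether any individual message is representative or verified; it only asks whether today looks different from the recent past, which is exactly why Small Data can still be informative.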

What are the implications for crowdsourced crisis maps and disaster response? Could similar statistical analyses be carried out on Crowdmap data, for example? How small can a dataset be and still yield actionable findings like those mentioned in this blog post?

Crisis Mapping Climate Change, Conflict and Aid in Africa

I recently gave a guest lecture at the University of Texas, Austin, and finally had the opportunity to catch up with my colleague Josh Busby who has been working on a promising crisis mapping project as part of the university’s Climate Change and African Political Stability Program (CCAPS).

Josh and team just released the pilot version of its dynamic mapping tool, which aims to provide the most comprehensive view yet of climate change and security in Africa. The platform, developed in partnership with AidData, enables users to “visualize data on climate change vulnerability, conflict, and aid, and to analyze how these issues intersect in Africa.” The tool is powered by ESRI technology and allows researchers as well as policymakers to “select and layer any combination of CCAPS data onto one map to assess how myriad climate change impacts and responses intersect. For example, mapping conflict data over climate vulnerability data can assess how local conflict patterns could exacerbate climate-induced insecurity in a region. It also shows how conflict dynamics are changing over time and space.”

The platform provides hyper-local data on climate change and aid-funded interventions, which can provide important insights on how development assistance might (or might not) be reducing vulnerability. For example, aid projects funded by 27 donors in Malawi (i.e., aid flows) can be layered on top of the climate change vulnerability data to “discern whether adaptation aid is effectively targeting the regions where climate change poses the most significant risk to the sustainable development and political stability of a country.”

If this weren’t impressive enough, I was positively amazed when I learned from Josh and team that the conflict data they’re using, the Armed Conflict Location Event Data (ACLED), will be updated on a weekly basis as part of this project. Back in the day, ACLED was specifically coding historical data. A few years ago they closed the gap by updating some conflict data on a yearly basis. Now the temporal lag will just be one week. Note that the mapping tool already draws on the Social Conflict in Africa Database (SCAD).

This project is an important contribution to the field of crisis mapping and I look forward to following CCAPS’s progress closely over the next few months. I’m hoping that Josh will present this project at the 2012 International Crisis Mappers Conference (ICCM 2012) later this year.

#UgandaSpeaks: Al-Jazeera uses Ushahidi to Amplify Local Voices in Response to #Kony2012

[Cross-posted from the Ushahidi blog]

Invisible Children’s #Kony2012 campaign has set off a massive firestorm of criticism with the debate likely to continue raging for many more weeks and months. In the meantime, our colleagues at Al-Jazeera have repurposed our previous #SomaliaSpeaks project to amplify Ugandan voices responding to the Kony campaign: #UgandaSpeaks.

Other than GlobalVoices, this Al-Jazeera initiative is one of the very few seeking to amplify local reactions to the Kony campaign. Over 70 local voices have been shared and mapped on Al-Jazeera’s Ushahidi platform in the first few hours since the launch. The majority of reactions submitted thus far are critical of the campaign but a few are positive.

One person from Kampala asks, “How come the world now knows more about #Kony2012 than about the Nodding Syndrome in Northern Uganda?” Another person in Gulu complains that “there is nothing new they are showing us. Its like a campaign against our country. […] Did they put on consideration how much its costing our country’s image? It shows as if Uganda is finished.” In nearby Lira, one person shares their story about growing up in Northern Uganda and attending “St. Mary’s College Aboke, a school from which Joseph Kony’s rebels abducted 139 girls in ordinary level […]. For the 4 years that I spent in that school (1999-2002), together with other students, I remember praying the Rosary at the School Grotto on daily basis and in the process, reading out the names of the 30 girls who had remained in captivity after Sr. Rachelle an Italian Nun together with a Ugandan teacher John Bosco rescued only 109 of them.”

The Ushahidi platform was first launched in neighboring Kenya to give ordinary Kenyans a voice during the post election-violence in 2007/2008. Indeed, “ushahidi” means witness or testimony in Swahili. So I am pleased to see this free and open source platform from Africa being used to amplify voices next door in Uganda, voices that are not represented in the #Kony2012 campaign.

Some Ugandan activists are asking why they should respond to “some American video release about something that happened 20 years ago by someone who is not in my country?” Indeed, why should anyone? If the #Kony2012 campaign and underlying message doesn’t bother Ugandans and doesn’t paint the country in a bad light, then there’s no need to respond. If the campaign doesn’t divert attention from current issues that are more pressing to Ugandans and does not adversely affect tourism, then again, why should anyone respond? This is, after all, a personal choice; no one is forced to have their voice heard.

At SXSW yesterday, Ugandan activist Teddy Ruge weighed in on the #Kony2012 campaign with the following:

“We [Ugandans] have such a hard time being given the microphone to talk about our issues that sometimes we have to follow on the coat-tails of Western projects like this one and say that we also have a voice in this matter.”

I believe one way to have those local voices heard is to have them echoed using innovative software “Made in Africa” like Ushahidi and then amplified by a non-Western but international news company like Al-Jazeera. Looking at my Twitter stream this morning, it appears that I’m not the only one. The microphone is yours. Over to you.

Innovation and Counter-Innovation: Digital Resistance in Russia

Want to know what the future of digital activism looks like? Then follow the developments in Russia. I argued a few years back that the fields of digital activism and civil resistance were converging to a point I referred to as “digital resistance.” The pace of tactical innovation and counter-innovation in Russia’s digital battlefield is stunning and rapidly converging to this notion of digital resistance.

“Crisis can be a fruitful time for innovation,” writes Gregory Asmolov. Contested elections are also ripe for innovation, which is why my dissertation case studies focused on elections. “In most cases,” says Asmolov, “innovations are created by the oppressed (the opposition, in Russia’s case), who try to challenge the existing balance of power by using new tools and technologies. But the state can also adapt and adopt some of these technologies to protect the status quo.” These innovations stem not only from the new technologies themselves but are embodied in the creative ways they are used. In other words, tactical innovation (and counter-innovation) is taking place alongside technological innovation. Indeed, “innovation can be seen not only in the new tools, but also in the new forms of protest enabled by the technology.”

Some of my favorite tactics from Russia include the YouTube video of Vladimir Putin arrested for fraud and corruption. The video was made to look like a real “breaking news” announcement on Russian television and got millions of views in just a few days. Another tactic is the use of DIY drones, mobile phone live-streaming and/or 360-degree 3D photo installations to more accurately relay the size of protests. A third tactic entails the use of a Twitter username that resembles that of a well-known individual. Michael McFaul, the US Ambassador to Russia, has the Twitter handle @McFaul. Activists set up the handle @McFauI, which appears identical but actually uses a capital “i” instead of a lowercase “L” as the last letter.
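The lookalike-handle trick is easy to demonstrate in code: the two usernames render near-identically in many fonts, yet they are different strings, so Twitter treats them as entirely distinct accounts.

```python
# The two handles below look nearly the same on screen, but the
# final characters are different codepoints (homoglyph substitution).
real = "McFaul"   # ends in a lowercase "L"
fake = "McFauI"   # ends in a capital "i"

print(real == fake)        # -> False
print(hex(ord(real[-1])))  # -> 0x6c  (lowercase l)
print(hex(ord(fake[-1])))  # -> 0x49  (capital I)
```

This is the same class of confusable-character trick used in phishing domains, which is why some platforms now screen new usernames against existing ones for visual similarity.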

Asmolov lists a number of additional innovations in the Russian context in this excellent write-up: from coordination tools such as the “League of Voters” website, the “Street Art” group on Facebook and the car-based flashmob protests (which attracted more than one thousand cars in one case), to the crowdsourced violations map “Karta Narusheniy” and the “SMS Golos” and “Svodny Protocol” platforms used to collect, analyze and/or map reports from trusted election observers (using bounded crowdsourcing).

One of my favorite tactics is the “solo protest.” According to Russian law, “a protest by one person does not require special permission.” So activist Olesya Shmagun stood in front of Putin’s office with a poster that read “Putin, go and take part in public debates!” While she was questioned by the police and security service, she was not detained since one-person protests are not illegal. Even though she only caught the attention of several dozen people walking by at the time, she published the story of her protest and a few photos on her LiveJournal blog, which drew considerable attention after being shared on many blogs and media outlets. As Asmolov writes, “this story shows the power of what is known as Manuel Castell’s ‘mass self-communication’. Thanks to the presence of one camera, an offline one-person protest found a way to a [much wider] audience online.”

This innovative tactic led to another challenge: how to turn a one-person protest into a massive number of one-person protests? So on top of this original innovation came yet another innovation, the Big White Circle action. The dedicated online tool Feb26.ru was developed specifically to coordinate many simultaneous one-person protests. The platform,

“[…] allowed people to check in at locations of their choice on the map of the Garden Ring circle, and showed what locations were already occupied. Unlike other protests, the Big White Circle did not have any organizational committee or a particular leader. The role of the leader was played by a website. The website suffered from DDoS attacks; as a result, it was closed and deleted by the provider; a day later, it was restored.  The practice of creating special dedicated websites for specific protest events is one of the most interesting innovations of the Russian protests. The initial idea belongs to Ilya Klishin, who launched the dec24.ru website (which doesn’t exist anymore) for the big opposition rally that took place in Moscow on December 24, 2011.”

The reason I like this tactic is because it takes a perfectly legal action and simply multiplies it, thus forcing the regime to potentially come up with a new set of laws that will appear absurd and be ridiculed by a larger segment of the population.

Citizen-based journalism played a pivotal role by “increasing transparency of the coverage of pro-government rallies.” As Asmolov notes, “Internet users were able to provide much content, including high quality YouTube reports that showed that many of those who took a part in these rallies had been forced or paid to participate, without really having any political stance.” This relates to my earlier blog post, “Wag the Dog, or Why Falsifying Crowdsourced Information Can be a Pain.”

Of course, there is plenty of “counter-innovation” coming from the Kremlin and friends. Take this case of pro-Kremlin activists producing an instructional YouTube video on how to manipulate a crowdsourced election-monitoring platform. In addition, Putin loyalists have adapted some of the same tactics as opposition activists, such as the car-based flash-mob protest. The Russian government also decided to create an online system of their own for election monitoring:

“Following an order from Putin, the state communication company Rostelecom developed a website webvybory2012.ru, which allowed people to follow the majority of the Russian polling stations (some 95,000) online on the day of the March 4 presidential election.  Every polling station was equipped with two cameras: one has to be focused on the ballot box and the other has to give the general picture of the polling station. Once the voting was over, one of the cameras broadcasted the counting of the votes. The cost of this project is at least 13 billion rubles (around $500 million). Many bloggers have criticized this system, claiming that it creates an imitation of transparency, when actually the most common election violations cannot be monitored through webcameras (more detailed analysis can be found here). Despite this, the cameras allowed to spot numerous violations (1, 2).”

From the perspective of digital resistance strategies, this is exactly the kind of reaction you want to provoke from a repressive regime. Force them to decentralize, spend hundreds of millions of dollars and hundreds of labor-hours to adopt similar “technologies of liberation” and in the process document voting irregularities on their own websites. In other words, leverage and integrate the regime’s technologies within the election-monitoring ecosystem being created, as this will spawn additional innovation. For example, one Russian activist proposed that this webcam network be complemented by a network of citizen mobile phones. In fact, a group of activists developed a smartphone app that could do just this. “The application Webnablyudatel has a classification of all the violations and makes it possible to instantly share video, photos and reports of violations.”

Putin supporters also made an innovative use of crowdsourcing during the recent elections. The “What Putin Has Done” project is based on a map of Russia where anyone can submit information about Putin’s good deeds. Just like pro-Kremlin activists can game pro-democracy crowdsourcing platforms, so can supporters of the opposition game a platform like this Putin map. In addition, activists could have easily created a Crowdmap and called it “What Putin Has Not Done” and crowdsource that map, which no doubt would be far more populated than the original good-deed map.

One question that comes to mind is how the regime will deal with disinformation on crowdsourcing platforms they set up. Will they need to hire more supporters to vet the information submitted to said platforms? Or will they close up the reporting and use “bounded crowdsourcing” instead? If so, will they have a communications challenge on their hands in trying to convince the public that their trusted reporters are indeed legitimate? Another question has to do with collective action. Pro-Kremlin activists are already innovating on their own but will this create a collective-action challenge for the Russian government? Take the example of the pro-regime “Putin Alarm Clock” (Budilnikputina.ru) tactic which backfired and even prompted Putin’s chief of elections staff to dismiss the initiative as “a provocation organized by the protestors.”

There has always been an interesting asymmetric dynamic in digital activism, with activists as first-movers innovating under oppression and regimes counter-innovating. How will this asymmetry change as digital activism and civil resistance tactics and strategies increasingly converge? Will repressive regimes be pushed to decentralize their digital resistance innovations in order to keep pace with the distributed pro-democracy innovations springing up? Does innovation require less coordination than counter-innovation? And as Gregory Asmolov concludes in his post-script, how will the future ubiquity of crowd-funding platforms and tools for micro-donations/payments online change digital resistance?