Category Archives: Social Media

On Rumors, Repression and Digital Disruption in China: Opening Pandora’s Inbox of Truthiness?

The Economist recently published a brilliant piece on China entitled: “The Power of Microblogs: Zombie Followers and Fake Re-Tweets.” BBC News followed with an equally excellent article: “Damaging Coup Rumors Ricochet Across China.” Combined, these articles reveal just how profound the digital disruption in China is likely to be now that Pandora’s Inbox has been opened.

Credit: The Economist

The Economist article opens with an insightful historical comparison:

“In the year 15AD, during the short-lived Xin dynasty, a rumor spread that a yellow dragon, a symbol of the emperor, had inauspiciously crashed into a temple in the mountains of central China and died. Ten thousand people rushed to the site. The emperor Wang Mang, aggrieved by such seditious gossip, ordered arrests and interrogations to quash the rumor, but never found the source. He was dethroned and killed eight years later, and Han-dynasty rule was restored.”

“The next ruler, Emperor Guangwu, took a different approach, studying rumors as a barometer of public sentiment, according to a recent book Rumors in the Han Dynasty by Lu Zongli, a historian. Guangwu’s government compiled a ‘Rumors Report’, cataloguing people’s complaints about local officials, and making assessments that were passed to the emperor. The early Eastern Han dynasty became known for officials who were less corrupt and more attuned to the people.”

In present-day China, a popular pastime among 250+ million Chinese users of microblogging platforms is to “spread news and rumors, both true and false, that challenge the official script of government officials and state-propaganda organs.” In Domination and the Arts of Resistance: Hidden Transcripts, James Scott distinguishes between public and hidden transcripts. The former describes the open, public discourse that takes place between dominators and the oppressed, while hidden transcripts relate to the critique of power that “goes on offstage”, which the power elites cannot decode. Scott writes that when the oppressed classes publicize this “hidden transcript” (the truthiness?), they become conscious of its common status. Borrowing from Juergen Habermas (as interpreted by Clay Shirky), those who take on the tools of open expression become a public, and a synchronized public increasingly constrains undemocratic rulers while expanding the rights of that public. The result in China? “It is hard to overestimate how much the arrival of [microblogging platforms] has changed the dynamic between rulers and ruled over the past two years” (The Economist).

Chinese authorities have responded to this threat in two predictable ways, one repeating the ill-fated actions of the Xin Dynasty and the other reflecting the more open spirit of Emperor Guangwu. In the latter case, authorities are turning to microblogs as a “listening post” for public opinion and also as a publishing platform. Indeed, “government agencies, party organs and individual officials have set up more than 50,000 weibo accounts [Chinese equivalent of Twitter]” (The Economist). In the former case, the regime has sought to “combat rumors harshly and to tighten controls over the microblogs and their users, censoring posts and closely monitoring troublemakers.” The UK Guardian reports that China is now “taking the toughest steps yet against major microblogs and detaining six people for spreading rumors of a coup amid Beijing’s most serious political crisis for years.”

Beijing’s attempt to regulate microblogging companies by requiring users to sign up with their real names is unlikely to be decisive, however. “No matter how it is enforced, user verification seems unlikely to deter the spread of rumors and information that has so concerned authorities” (The Economist). To be sure, companies are already selling fake verification services for a small fee. Besides, verifying accounts for millions of users is simply too time-consuming and hence costly. Even Twitter gave up its verified account service a while back. The task of countering rumors is even more of a quixotic dream.

Property tycoon Zhang Xin, who has more than 3 million followers, wrote: “What is the best way to stop ‘rumors’? It is transparency and openness. The more speech is discouraged, the more rumors there will be” (UK Guardian).

This may in part explain why Chinese authorities have shifted their approach to one of engagement, as evidenced by those 50,000 new weibo accounts. With this second reaction, however, Beijing is possibly passing the point of no return. “This degree of online engagement can be awkward for authorities used to a comfortable buffer from public opinion,” writes The Economist. This is an understatement; Pandora’s (In)box is now open and the “hidden transcript” is cloaked no longer. The critique of power is decoded and elites are “forced” to devise a public reply as a result of this shared awareness lest they lose legitimacy vis-a-vis the broader population. But the regime doesn’t even have a “customer service” mechanism in place to deal with distributed and potentially high-volume complaints. Censorship is easy compared to engagement.

Recall the “Rumors Report” compiled by Emperor Guangwu’s government to catalogue people’s complaints about local officials. How will these 50,000 new weibo users deal with such complaints now that the report can be crowdsourced, especially given the fact that China’s “Internet users have become increasingly bold in their willingness to discuss current affairs and even sensitive political news […]” (UK Guardian)?

As I have argued in my dissertation, repressive regimes can react to real (or perceived) threats posed by “liberation technologies” by either cracking down and further centralizing control and/or by taking on the same strategies as digital activists, which at times requires less centralization. Either way, they’re taking the first step on a slippery slope. By acknowledging the problem of rumors so publicly, the regime is actually calling more attention to how disruptive these simple speculations can be—the classic Streisand effect.

“By falsely packaging lies and speculation as ‘truth’ and ‘existence’, online rumours undermine the morale of the public, and, if out of control, they will seriously disturb the public order and affect social stability,” said a commentary in the People’s Daily, the official Communist party newspaper (UK Guardian).

Practically speaking, how will those 50,000 new weibo users coordinate their efforts to counter rumors and spread state propaganda? “We have a saying among us: you only need to move your lips to start a rumor, but you need to run until your legs are broken to refute one,” says an employee of a state media outlet (The Economist). How will these new weibo users synchronize collective action in near real-time to counter rumors when any delay is likely to be interpreted as evidence of further guilt? Will they know how to respond to the myriad questions fired at them in real-time by hundreds of thousands of Chinese microbloggers? This may lead to high-pressure situations that are ripe for mistakes, particularly if these government officials are new to microblogging. Indeed, if just one of these state-microbloggers slips, that slip could go viral with a retweet tsunami. Any retreat by authorities from this distributed engagement strategy will only lead to more rumors.

The rumors of the coup d’état continue to ricochet across China, gaining remarkable traction far and wide. Chinese microblogs were also alight last week with talk of corruption and power struggles within the highest ranks of the party, which may have fueled the rumor of an overthrow. This is damaging to China’s Communist Party, which “likes to portray itself as unified and in control,” particularly as it prepares for its once-in-a-decade leadership shuffle. “The problem for China’s Communist Party is that it has no effective way of refuting such talk. There are no official spokesmen who will go on the record, no sources briefing the media on the background. Did it happen? Nobody knows. So the rumors swirl” (BBC News). Even the official media, which is “often found waiting for political guidance, can be slow and unresponsive.”

So if Chinese authorities and state media aren’t even equipped (beyond plain old censorship) to respond to national rumors about an event as important as a coup (can it possibly get more important than that?), then how in the world will they deal with the undercurrent of rumors that continue to fill Chinese microblogs now that these can have 50,000 new targets online? Moreover, “many in China are now so cynical about the level of censorship that they will not believe what comes from the party’s mouthpieces even if it is true. Instead they will give credence to half-truths or fabrications on the web,” which is “corrosive for the party’s authority” (BBC News). This is a serious problem for China’s Communist elite, who are obsessed with the task of projecting an image of total unity and stability.

In contrast, speculators on Chinese microblogging platforms don’t need a highly coordinated strategy to spread conspiracies. They are not handicapped by the centralization and collective action problem that Chinese authorities face; after all, it is clearly far easier to spread a rumor than to debunk one. As noted by The Economist, those spreading rumors have “at their disposal armies of zombie followers and fake re-tweets as well as marketing companies, which help draw attention to rumors until they are spread by a respected user with many real followers, such as a celebrity.” But there’s more at stake here than mere rumors. In fact, as noted by The Economist, the core of the problem has less to do with hunting down rumors of yellow dragons than with “the truth that they reflect: a nervous public. In the age of weibo, it may be that the wisps of truth prove more problematic for authorities than the clouds of falsehood.”

Fascinating epilogues:

China’s censorship can never defeat the internet
China’s censors tested by microbloggers who keep one step ahead of state media

Crisis Mapping Syria: Automated Data Mining and Crowdsourced Human Intelligence

The Syria Tracker Crisis Map is without doubt one of the most impressive crisis mapping projects yet. Launched just a few weeks after the protests began one year ago, the crisis map is spearheaded by just a handful of US-based Syrian activists who have meticulously and systematically documented 1,529 reports of human rights violations, including a total of 11,147 killings. As recently reported in this NewScientist article, “Mapping the Human Cost of Syria’s Uprising,” the crisis map “could be the most accurate estimate yet of the death toll in Syria’s uprising […].” Their approach? “A combination of automated data mining and crowdsourced human intelligence,” which “could provide a powerful means to assess the human cost of wars and disasters.”

On the data-mining side, Syria Tracker has repurposed the HealthMap platform, which mines thousands of online sources for the purposes of disease detection and then maps the results, “giving public-health officials an easy way to monitor local disease conditions.” The customized version of this platform for Syria Tracker (ST), known as HealthMap Crisis, mines English information sources for evidence of human rights violations, such as killings, torture and detainment. As the ST Team notes, their data mining platform “draws from a broad range of sources to reduce reporting biases.” Between June 2011 and January 2012, for example, the platform collected over 43,000 news articles and blog posts from almost 2,000 English-based sources from around the world (including some pro-regime sources).

Syria Tracker combines the results of this sophisticated data mining approach with crowdsourced human intelligence, i.e., field-based eye-witness reports shared via webform, email, Twitter, Facebook, YouTube and voicemail. This naturally presents several important security issues, which explains why the main ST website includes an instructions page detailing security precautions that need to be taken while submitting reports from within Syria. They also link to this practical guide on how to protect your identity and security online and when using mobile phones. The guide is available in both English and Arabic.

Eye-witness reports are subsequently translated, geo-referenced, coded and verified by a group of volunteers who triangulate the information with other sources such as those provided by the HealthMap Crisis platform. They also filter the reports and remove duplicates. Reports that have a low confidence level vis-a-vis veracity are also removed. Volunteers use a vote-up/vote-down feature to “score” the veracity of eye-witness reports. Using this approach, the ST Team and their volunteers have been able to verify almost 90% of the documented killings mapped on their platform thanks to video and/or photographic evidence. They have also been able to associate specific names with about 88% of those reported killed by Syrian forces since the uprising began.
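To make this triage workflow concrete, here is a minimal sketch of how a vote-based confidence score might filter out low-veracity reports. All field names and the scoring formula are my own hypothetical illustration, not Syria Tracker’s actual implementation:

```python
# Sketch: each eye-witness report accumulates up/down votes and
# corroborating sources; reports whose confidence score falls below a
# threshold are filtered out. Names, fields and formula are hypothetical.

from dataclasses import dataclass

@dataclass
class Report:
    text: str
    up_votes: int = 0
    down_votes: int = 0
    corroborating_sources: int = 0  # e.g. matching HealthMap Crisis items

    def confidence(self) -> float:
        """Score in [0, 1]: vote ratio, boosted by independent corroboration."""
        votes = self.up_votes + self.down_votes
        vote_score = self.up_votes / votes if votes else 0.5  # unknown -> neutral
        boost = min(self.corroborating_sources, 3) * 0.1
        return min(vote_score + boost, 1.0)

def triage(reports, threshold=0.6):
    """Keep only reports whose confidence meets the threshold."""
    return [r for r in reports if r.confidence() >= threshold]

reports = [
    Report("shelling reported in Homs", up_votes=8, down_votes=1,
           corroborating_sources=2),
    Report("unconfirmed checkpoint rumor", up_votes=1, down_votes=4),
]
kept = triage(reports)
print(len(kept))  # -> 1: the corroborated report survives, the rumor is dropped
```

The point of the sketch is simply that veracity becomes a threshold on a continuous score rather than a binary verdict, which is how a small volunteer team can process a large report queue consistently.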

Depending on the levels of violence in Syria, the turn-around time for a report to be mapped on Syria Tracker is between one and three days. The team also produces weekly situation reports based on the data they’ve collected, along with detailed graphical analysis. KML files that can be uploaded and viewed using Google Earth are also made available on a regular basis. These provide “a more precisely geo-located tally of deaths per location.”

In sum, Syria Tracker is very much breaking new ground vis-a-vis crisis mapping. They’re combining automated data mining technology with crowdsourced eye-witness reports from Syria. In addition, they’ve been doing this for a year, which makes the project the longest-running crisis map I’ve seen in a hostile environment. Moreover, they’ve been able to sustain these important efforts with just a small team of volunteers. As for the veracity of the collected information, I know of no other public effort that has taken such a meticulous and rigorous approach to documenting the killings in Syria in near real-time. On February 24th, Al-Jazeera posted the following estimates:

Syrian Revolution Coordination Union: 9,073 deaths
Local Coordination Committees: 8,551 deaths
Syrian Observatory for Human Rights: 5,581 deaths

At the time, Syria Tracker had a total of 7,901 documented killings associated with specific names, dates and locations. While some duplicate reports may remain, the team argues that “missing records are a much bigger source of error.” Indeed, they believe that “the higher estimates are more likely, even if one chooses to disregard those reports that came in on some of the most violent days where names were not always recorded.”

The Syria Crisis Map itself has been viewed by visitors from 136 countries around the world and 2,018 cities—with the top 3 cities being Damascus, Washington DC and, interestingly, Riyadh, Saudi Arabia. The witnessing has thus been truly global and collective. When the Syrian regime falls, “the data may help subsequent governments hold him and other senior leaders to account,” writes the New Scientist. This was one of the principal motivations behind the launch of the Ushahidi platform in Kenya over four years ago. Syria Tracker is powered by Ushahidi’s cloud-based platform, Crowdmap. Finally, we know for a fact that the International Criminal Court (ICC) and Amnesty International (AI) closely followed the Libya Crisis Map last year.

Communicating with Disaster Affected Communities (CDAC)

Communication is Aid: Curated tweets and commentary from the CDAC Network’s Media and Technology Fair, London 2012. My commentary in blue. This is the first time I’ve used Storify to curate content. (I bumped into the co-founder of the platform at SXSW, which reminded me that I really needed to get in on the action.)

  1. “After the Japan earthquake, >20% of ALL web queries issued were on tsunamis.” @spangledrongo #commisaid (Thu, Mar 22 2012 08:53:43)
  2. Would be great to see how this type of search data compares to data from Tweets. Take this analysis of tweets following the earthquake in Chile, for example.
  3. RT @UNFPA: Professional systems are being replaced by consumer tools says @Google Crisis Response #commisaid (Thu, Mar 22 2012 08:48:58)
  4. And as a result, crisis-affected communities are increasingly becoming digital as I note in this blog post.
  5. Closed systems closed data will be left behind and unused: crisis response is social and collaboration is empowering @CDACNetwork #commisaid (Thu, Mar 22 2012 08:49:58)
  6. RT @catherinedem: Crisis response is #social – online social collaboration spikes during and after disaster @spangledrongo #commisaid (Thu, Mar 22 2012 08:54:03)
  7. “It is essential to have authoritative content” Nigel Snoad at @CDACNetwork’s Media & Tech Fair #commisaid (Thu, Mar 22 2012 08:58:28)
  8. Does this mean that all user-generated content should be ignored because said content does not necessarily come from a known and authoritative source? Who decides what is authoritative?
  9. we don’t empower communities by giving them info, they empower themselves by giving us info that we can act on – @komunikasikan #commisaid (Thu, Mar 22 2012 11:31:02)
  10. What if this information is not authoritative because it does not come from official sources?
  11. RT @ushahidi: “In a crisis, the mobile internet stays most resilient, even more than SMS.” #commisaid Nigel Snoad (Thu, Mar 22 2012 09:02:53)
  12. RT @jqg: Empower local communities to generate their own tools and figure out their own solutions #commisaid (Thu, Mar 22 2012 09:10:43)
  13. See this blog post on Democratizing ICT for Development Using DIY Innovation and Open Data.
  14. Through #Mission4636 SMS system, radio presenter @carelpedre was able to communicate directly with affected people in Haiti #commisaid (Thu, Mar 22 2012 12:20:14)
  15. This is rather interesting; I hadn’t realized that radio stations in Haiti actively used the information from the Ushahidi Haiti 4636 project.
  16. Fascinating talk with @carelpedre – many of #Haiti’s pre-earthquake twitter users came from @juno7’s lottery push notifications #Commisaid (Thu, Mar 22 2012 11:11:43)
  17. More about @Carelpedre using 4636 project after #Haiti earthquake bit.ly/plgzXJ #commisaid (Thu, Mar 22 2012 12:17:52)
  18. Internews: Innovation comes within, info comes from within, people will find ways to communicate no matter what #commisaid @CDACNetwork (Thu, Mar 22 2012 09:26:25)
  19. So best of luck to those who wish to regulate this space! As my colleague Tim McNamara has noted, “Crisis mapping is not simply a technological shift, it is also a process of rapid decentralisation of power. With extremely low barriers to entry, many new entrants are appearing in the fields of emergency and disaster response. They are ignoring the traditional hierarchies, because the new entrants perceive that there is something that they can do which benefits others.”
  20. @souktel: “be simple, be creative and learn from the community around you” #commisaid (Thu, Mar 22 2012 10:57:09)
  21. Tools shouldn’t own data. RT @whiteafrican: “It’s not about the platform being open, it’s about the data being open” – @jcrowley #commisaid (Thu, Mar 22 2012 11:56:08)
  22. Humanitarians have to get their heads around media and tech or risk being left behind #m4d #media #tech #commisaid (Thu, Mar 22 2012 11:56:58)
  23. Cutting edge is to get the #crowd & the #algorithm to filter each other in filtering massive overload of information in a crisis #commisaid (Thu, Mar 22 2012 12:01:17)
  24. As Robert Kirkpatrick likes to say, “Use the hunch of the expert, machine algorithms and the wisdom of the crowd.”
  25. “How long until the disaster affected communities start analyzing the aid agencies?” #commisaid (Thu, Mar 22 2012 12:14:50)
  26. Yes! Sousveillance meets analysis of big data on the humanitarian sector.
  27. Why is it such a big deal for the humanitarian industry to get feedback from a community? Companies have done it for decades. #commisaid (Thu, Mar 22 2012 12:17:31)
  28. Some of my thoughts on what the humanitarian community can learn from the private sector vis-a-vis customer support.
  29. People who are not traditional humanitarian actors are taking on humanitarian roles, driven by the democratisation of technology #commisaid (Thu, Mar 22 2012 13:00:12)
  30. Indeed, not only are disaster-affected communities increasingly digital, so are global volunteer networks like the Standby Volunteer Task Force (SBTF).
  31. Technology is shifting the power balance – it’s helping local communities to organise their own responses to disasters #commisaid (Thu, Mar 22 2012 13:11:18)
  32. Indeed, as a result of these mobile technologies, affected populations are increasingly able to source, share and generate a vast amount of information, which is completely transforming disaster response. More on this here.
  33. Like Paul Currion’s analogy – humanitarian sector risks obsolescence in the same way the record industry did. Watch out? #commisaid (Thu, Mar 22 2012 13:17:36)
  34. [Image: chart from The Starfish and the Spider tracing the music industry’s history as hierarchies vs. networks] (Thu, Mar 22 2012 14:57:13)
  35. One of my favorite books, The Starfish and the Spider: The Unstoppable Power of Leaderless Organizations, has an excellent case study on the music industry. The above picture is taken from that chapter and charts the history of the industry from the perspective of hierarchies vs. networks. I argued a couple of years ago that the same dynamic is taking place within humanitarian response. See this blog post on Disaster Relief 2.0: Toward a Multipolar System.
  36. A thousand flowers can bloom beautifully IF common data standards allow sharing. Right now no natural selection improving quality #commisaid (Thu, Mar 22 2012 13:43:19)
  37. @GSMADisasterRes: free phone numbers & short codes r not silver bullets 4 meaningful communication w/disaster affected communities #commisaid (Thu, Mar 22 2012 13:58:44)
  38. Scale horizontally, not vertically. The end of command and control?! #commisaid (Thu, Mar 22 2012 14:07:54)
  39. RT @reeniac: what now? communities have the solution, we need to listen. start by including them in the discussions – Dr Jamilah #commisaid (Thu, Mar 22 2012 14:14:55)

Twitter, Crises and Early Detection: Why “Small Data” Still Matters

My colleagues John Brownstein and Rumi Chunara at Harvard University’s HealthMap project are continuing to break new ground in the field of Digital Disease Detection. Using data obtained from tweets and online news, the team was able to identify a cholera outbreak in Haiti weeks before health officials acknowledged the problem publicly. Meanwhile, my colleagues from UN Global Pulse partnered with Crimson Hexagon to forecast food prices in Indonesia by carrying out sentiment analysis of tweets. I had actually written this blog post on Crimson Hexagon four years ago to explore how the platform could be used for early warning purposes, so I’m thrilled to see this potential realized.

There is a lot that intrigues me about the work that HealthMap and Global Pulse are doing. But one point that really struck me vis-a-vis the former is just how little data was necessary to identify the outbreak. To be sure, not many Haitians are on Twitter and my impression is that most humanitarians have not really taken to Twitter either (I’m not sure about the Haitian Diaspora). This would suggest that accurate, early detection is possible even without Big Data; even with “Small Data” that is neither representative nor verified. (Interestingly, Rumi notes that the Haiti dataset is actually larger than datasets typically used for this kind of study).

In related news, a recent peer-reviewed study by the European Commission found that the spatial distribution of crowdsourced text messages (SMS) following the earthquake in Haiti was strongly correlated with building damage. Again, the dataset of text messages was relatively small. And again, this data was neither collected using random sampling (i.e., it was crowdsourced) nor was it verified for accuracy. Yet the analysis of this small dataset still yielded some particularly interesting findings that have important implications for rapid damage detection in post-emergency contexts.

While I’m no expert in econometrics, what these studies suggest to me is that detecting change over time is ultimately more critical than having a large-N dataset, let alone one that is obtained via random sampling or even vetted for quality control purposes. That doesn’t mean that the latter factors are not important; it simply means that the outcome of the analysis is relatively less sensitive to these specific variables. Changes in the baseline volume/location of tweets on a given topic appear to be strongly correlated with offline dynamics.
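As a toy illustration of why change over time can matter more than dataset size, the following sketch flags days whose message volume spikes above a rolling baseline. The counts are invented and the z-score rule is a generic anomaly heuristic, not the method used in the HealthMap or European Commission studies:

```python
# Flag days whose daily message count deviates sharply from the mean of
# the preceding window of days -- a change-over-time signal that works
# even on a small, noisy, unverified dataset.

import statistics

def detect_spikes(daily_counts, window=7, z_threshold=3.0):
    """Return indices of days whose count exceeds the baseline mean by
    more than z_threshold standard deviations."""
    spikes = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline) or 1.0  # avoid division by zero
        z = (daily_counts[i] - mean) / stdev
        if z > z_threshold:
            spikes.append(i)
    return spikes

# Ten days of invented counts: tiny dataset, but day 9 still stands out.
counts = [4, 6, 5, 7, 5, 6, 4, 5, 6, 48]
print(detect_spikes(counts))  # -> [9]
```

Note that nothing here depends on the data being representative or verified; the detector only cares about deviation from the series’ own recent baseline, which is exactly the point the studies above make about small data.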

What are the implications for crowdsourced crisis maps and disaster response? Could similar statistical analyses be carried out on Crowdmap data, for example? How small can a dataset be and still yield actionable findings like those mentioned in this blog post?

#UgandaSpeaks: Al-Jazeera uses Ushahidi to Amplify Local Voices in Response to #Kony2012

[Cross-posted from the Ushahidi blog]

Invisible Children’s #Kony2012 campaign has set off a massive firestorm of criticism with the debate likely to continue raging for many more weeks and months. In the meantime, our colleagues at Al-Jazeera have repurposed our previous #SomaliaSpeaks project to amplify Ugandan voices responding to the Kony campaign: #UgandaSpeaks.

Other than GlobalVoices, this Al-Jazeera initiative is one of the very few seeking to amplify local reactions to the Kony campaign. Over 70 local voices have been shared and mapped on Al-Jazeera’s Ushahidi platform in the first few hours since the launch. The majority of reactions submitted thus far are critical of the campaign but a few are positive.

One person from Kampala asks, “How come the world now knows more about #Kony2012 than about the Nodding Syndrome in Northern Uganda?” Another person in Gulu complains that “there is nothing new they are showing us. Its like a campaign against our country. […] Did they put on consideration how much its costing our country’s image? It shows as if Uganda is finished.” In nearby Lira, one person shares their story about growing up in Northern Uganda and attending “St. Mary’s College Aboke, a school from which Joseph Kony’s rebels abducted 139 girls in ordinary level […]. For the 4 years that I spent in that school (1999-2002), together with other students, I remember praying the Rosary at the School Grotto on daily basis and in the process, reading out the names of the 30 girls who had remained in captivity after Sr. Rachelle an Italian Nun together with a Ugandan teacher John Bosco rescued only 109 of them.”

The Ushahidi platform was first launched in neighboring Kenya to give ordinary Kenyans a voice during the post-election violence in 2007/2008. Indeed, “ushahidi” means witness or testimony in Swahili. So I am pleased to see this free and open source platform from Africa being used to amplify voices next door in Uganda, voices that are not represented in the #Kony2012 campaign.

Some Ugandan activists are asking why they should respond to “some American video release about something that happened 20 years ago by someone who is not in my country?” Indeed, why should anyone? If the #Kony2012 campaign and underlying message doesn’t bother Ugandans and doesn’t paint the country in a bad light, then there’s no need to respond. If the campaign doesn’t divert attention from current issues that are more pressing to Ugandans and does not adversely affect tourism, then again, why should anyone respond? This is, after all, a personal choice; no one is forced to have their voices heard.

At SXSW yesterday, Ugandan activist Teddy Ruge weighed in on the #Kony2012 campaign with the following:

“We [Ugandans] have such a hard time being given the microphone to talk about our issues that sometimes we have to follow on the coat-tails of Western projects like this one and say that we also have a voice in this matter.”

I believe one way to have those local voices heard is to have them echoed using innovative software “Made in Africa” like Ushahidi and then amplified by a non-Western but international news company like Al-Jazeera. Looking at my Twitter stream this morning, it appears that I’m not the only one. The microphone is yours. Over to you.

Truthiness as Probability: Moving Beyond the True or False Dichotomy when Verifying Social Media

I asked the following question at the Berkman Center’s recent Symposium on Truthiness in Digital Media: “Should we think of truthiness in terms of probabilities rather than use a True or False dichotomy?” The wording here is important. The word “truthiness” already suggests a subjective fuzziness around the term. Expressing truthiness as probabilities provides more contextual information than does a binary true or false answer.

When we set out to design the SwiftRiver platform some three years ago, it was already clear to me then that the veracity of crowdsourced information ought to be scored in terms of probabilities. For example, what is the probability that the content of a Tweet referring to the Russian elections is actually true? Why use probabilities? Because it is particularly challenging to instantaneously verify crowdsourced information in the real-time social media world we live in.

There is a common tendency to assume that all unverified information is false until proven otherwise. This is too simplistic, however. We need a fuzzy logic approach to truthiness:

“In contrast with traditional logic theory, where binary sets have two-valued logic: true or false, fuzzy logic variables may have a truth value that ranges in degree between 0 and 1. Fuzzy logic has been extended to handle the concept of partial truth, where the truth value may range between completely true and completely false.”

The majority of user-generated content is unverified at time of birth. (Does said data deserve the “original sin” of being labeled as false, unworthy, until proven otherwise? To digress further, unverified content could be said to have a distinct wave function that enables said data to be both true and false until observed. The act of observation starts the collapse of said wave function. To the astute observer, yes, I’m riffing off Schrödinger’s Cat, and was also pondering how to weave in Heisenberg’s uncertainty principle as an analogy; think of a piece of information characterized by a “probability cloud” of truthiness).

I believe the hard sciences have much to offer in this respect. Why don’t we have error margins for truthiness? Why not take a weather forecast approach to information truthiness in social media? What if we had a truthiness forecast understanding full well that weather forecasts are not always correct? The fact that a 70% chance of rain is forecasted doesn’t prevent us from acting and using that forecast to inform our decision-making. If we applied binary logic to weather forecasts, we’d be left with either a 100% chance of rain or 100% chance of sun. Such weather forecasts would be at best suspect if not wrong rather frequently.

In any case, instead of dismissing content generated in real-time because it is not immediately verifiable, we can draw on Information Forensics to begin assessing the potential validity of said content. Tactics from information forensics can help us create a score card of heuristics to express truthiness in terms of probabilities. (I call this advanced media literacy). There are indeed several factors that one can weigh, e.g., the identity of the messenger relaying the content, the source of the content, the wording of said content, the time of day the information was shared, the geographical proximity of the source to the event being reported, etc.
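As a rough sketch of what such a score card might look like in practice, the snippet below combines weighted heuristic scores into a single truthiness probability. The particular features, weights and logistic mapping are entirely illustrative assumptions; there is no canonical formula for this:

```python
# Combine several weighted heuristics about a message into a probability
# of truthiness. Features, weights and the logistic squashing are
# hypothetical illustrations of the score-card idea, not a real system.

import math

# Each heuristic is scored in [0, 1] by a human analyst or a classifier.
WEIGHTS = {
    "source_reputation": 2.0,        # identity/track record of the messenger
    "content_specificity": 1.0,      # concrete wording vs. vague claims
    "geo_proximity": 1.5,            # how close the source is to the event
    "independent_corroboration": 2.5,
}

def truthiness(features: dict) -> float:
    """Map weighted heuristic scores to a probability via a logistic curve."""
    score = sum(WEIGHTS[name] * value for name, value in features.items())
    midpoint = sum(WEIGHTS.values()) / 2  # perfectly neutral evidence -> 0.5
    return 1.0 / (1.0 + math.exp(-(score - midpoint)))

report = {
    "source_reputation": 0.9,
    "content_specificity": 0.7,
    "geo_proximity": 0.8,
    "independent_corroboration": 0.4,
}
p = truthiness(report)
print(round(p, 2))  # a probability in (0, 1), not a True/False verdict
```

Since truthiness is subjective and temporal, the weights themselves would need to be revisited over time, which is precisely where the expert-plus-machine-learning combination discussed below could come in.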

These weights need not be static as they are largely subjective and temporal; after all, truth is socially constructed and dynamic. So while a “wisdom of the crowds” approach alone may not always be well-suited to generating these weights, perhaps integrating the hunch of the expert coupled with machine learning algorithms (based on lessons learned in information forensics) could result in more useful decision-support tools for truthiness forecasting (or rather “backcasting”).

In sum, thinking of truthiness strictly in terms of true and false prevents us from “complexifying” a scalar variable into a vector (a wave function), which in turn limits our ability to develop new intervention strategies. We need new conceptual frameworks to reflect the complexity and ambiguity of user-generated content.


Innovation and Counter-Innovation: Digital Resistance in Russia

Want to know what the future of digital activism looks like? Then follow the developments in Russia. I argued a few years back that the fields of digital activism and civil resistance were converging to a point I referred to as “digital resistance.” The pace of tactical innovation and counter-innovation on Russia’s digital battlefield is stunning and is rapidly realizing this notion of digital resistance.

“Crisis can be a fruitful time for innovation,” writes Gregory Asmolov. Contested elections are also ripe for innovation, which is why my dissertation case studies focused on elections. “In most cases,” says Asmolov, “innovations are created by the oppressed (the opposition, in Russia’s case), who try to challenge the existing balance of power by using new tools and technologies. But the state can also adapt and adopt some of these technologies to protect the status quo.” These innovations stem not only from the new technologies themselves but are embodied in the creative ways they are used. In other words, tactical innovation (and counter-innovation) is taking place alongside technological innovation. Indeed, “innovation can be seen not only in the new tools, but also in the new forms of protest enabled by the technology.”

Some of my favorite tactics from Russia include the YouTube video of Vladimir Putin arrested for fraud and corruption. The video was made to look like a real “breaking news” announcement on Russian television and drew millions of views in just a few days. Another tactic is the use of DIY drones, mobile phone live-streaming and/or 360-degree 3D photo installations to more accurately relay the size of protests. A third tactic entails the use of a twitter username that resembles that of a well-known individual. Michael McFaul, the US Ambassador to Russia, has the twitter handle @McFaul. Activists set up the twitter handle @McFauI, which appears identical but actually uses a capital “I” instead of a lowercase “l” as the last letter of McFaul.

Asmolov lists a number of additional innovations in the Russian context in this excellent write-up. These range from coordination tools such as the “League of Voters” website, the “Street Art” group on Facebook and the car-based flashmob protests (which attracted more than one thousand cars in one case) to the crowdsourced violations map “Karta Narusheniy“ and the “SMS Golos” and “Svodny Protocol” platforms used to collect, analyze and/or map reports from trusted election observers (using bounded crowdsourcing).

One of my favorite tactics is the “solo protest.” According to Russian law, “a protest by one person does not require special permission.” So activist Olesya Shmagun stood in front of Putin’s office with a poster that read “Putin, go and take part in public debates!” While she was questioned by the police and security service, she was not detained since one-person protests are not illegal. Even though she only caught the attention of several dozen people walking by at the time, she published the story of her protest and a few photos on her LiveJournal blog, which drew considerable attention after being shared on many blogs and media outlets. As Asmolov writes, “this story shows the power of what is known as Manuel Castells’s ‘mass self-communication’. Thanks to the presence of one camera, an offline one-person protest found a way to a [much wider] audience online.”

This innovative tactic led to another challenge: how to turn a one-person protest into a massive number of one-person protests? So on top of this original innovation came yet another: the Big White Circle action. The dedicated online tool Feb26.ru was developed specifically to coordinate many simultaneous one-person protests. The platform,

“[…] allowed people to check in at locations of their choice on the map of the Garden Ring circle, and showed what locations were already occupied. Unlike other protests, the Big White Circle did not have any organizational committee or a particular leader. The role of the leader was played by a website. The website suffered from DDoS attacks; as a result, it was closed and deleted by the provider; a day later, it was restored.  The practice of creating special dedicated websites for specific protest events is one of the most interesting innovations of the Russian protests. The initial idea belongs to Ilya Klishin, who launched the dec24.ru website (which doesn’t exist anymore) for the big opposition rally that took place in Moscow on December 24, 2011.”

The reason I like this tactic is that it takes a perfectly legal action and simply multiplies it, thus forcing the regime to come up with a new set of laws that will appear absurd and be ridiculed by a larger segment of the population.

Citizen-based journalism played a pivotal role by “increasing transparency of the coverage of pro-government rallies.” As Asmolov notes, “Internet users were able to provide much content, including high quality YouTube reports that showed that many of those who took a part in these rallies had been forced or paid to participate, without really having any political stance.” This relates to my earlier blog post, “Wag the Dog, or Why Falsifying Crowdsourced Information Can be a Pain.”

Of course, there is plenty of “counter-innovation” coming from the Kremlin and friends. Take this case of pro-Kremlin activists producing an instructional YouTube video on how to manipulate a crowdsourced election-monitoring platform. In addition, Putin loyalists have adapted some of the same tactics as opposition activists, such as the car-based flash-mob protest. The Russian government also decided to create an online system of their own for election monitoring:

“Following an order from Putin, the state communication company Rostelecom developed a website webvybory2012.ru, which allowed people to follow the majority of the Russian polling stations (some 95,000) online on the day of the March 4 presidential election.  Every polling station was equipped with two cameras: one has to be focused on the ballot box and the other has to give the general picture of the polling station. Once the voting was over, one of the cameras broadcasted the counting of the votes. The cost of this project is at least 13 billion rubles (around $500 million). Many bloggers have criticized this system, claiming that it creates an imitation of transparency, when actually the most common election violations cannot be monitored through webcameras (more detailed analysis can be found here). Despite this, the cameras allowed to spot numerous violations (1, 2).”

From the perspective of digital resistance strategies, this is exactly the kind of reaction you want to provoke from a repressive regime. Force them to decentralize, spend hundreds of millions of dollars and hundreds of labor-hours to adopt similar “technologies of liberation” and in the process document voting irregularities on their own websites. In other words, leverage and integrate the regime’s technologies within the election-monitoring ecosystem being created, as this will spawn additional innovation. For example, one Russian activist proposed that this webcam network be complemented by a network of citizen mobile phones. In fact, a group of activists developed a smartphone app that could do just this. “The application Webnablyudatel has a classification of all the violations and makes it possible to instantly share video, photos and reports of violations.”

Putin supporters also made an innovative use of crowdsourcing during the recent elections. The project “What Putin Has Done” is based on a map of Russia where anyone can submit information about Putin’s good deeds. Just like pro-Kremlin activists can game pro-democracy crowdsourcing platforms, so can supporters of the opposition game a platform like this Putin map. In addition, activists could easily have created a Crowdmap called “What Putin Has Not Done” and crowdsourced that map, which would no doubt be far more populated than the original good-deeds map.

One question that comes to mind is how the regime will deal with disinformation on the crowdsourcing platforms it sets up. Will it need to hire more supporters to vet the information submitted to said platforms? Or will it close up the reporting and use “bounded crowdsourcing” instead? If so, will it face a communications challenge in trying to convince the public that its trusted reporters are indeed legitimate? Another question has to do with collective action. Pro-Kremlin activists are already innovating on their own, but will this create a collective-action challenge for the Russian government? Take the example of the pro-regime “Putin Alarm Clock” (Budilnikputina.ru) tactic, which backfired and even prompted Putin’s chief of elections staff to dismiss the initiative as “a provocation organized by the protestors.”

There has always been an interesting asymmetric dynamic in digital activism, with activists as first-movers innovating under oppression and regimes counter-innovating. How will this asymmetry change as digital activism and civil resistance tactics and strategies increasingly converge? Will repressive regimes be pushed to decentralize their digital resistance innovations in order to keep pace with the distributed pro-democracy innovations springing up? Does innovation require less coordination than counter-innovation? And as Gregory Asmolov concludes in his post-script, how will the future ubiquity of crowd-funding platforms and tools for micro-donations/payments online change digital resistance?

Trails of Trustworthiness in Real-Time Streams

Real-time information channels like Twitter, Facebook and Google have created cascades of information that are becoming increasingly challenging to navigate. “Smart-filters” alone are not the solution since they won’t necessarily help us determine the quality and trustworthiness of the information we receive. I’ve been studying this challenge ever since the idea behind SwiftRiver first emerged several years ago.

I was thus thrilled to come across a short paper on “Trails of Trustworthiness in Real-Time Streams” which describes a start-up project that aims to provide users with a “system that can maintain trails of trustworthiness propagated through real-time information channels,” which will “enable its educated users to evaluate its provenance, its credibility and the independence of the multiple sources that may provide this information.” The authors, Panagiotis Metaxas and Eni Mustafaraj, kindly cite my paper on “Information Forensics” and also reference SwiftRiver in their conclusion.

The paper argues that studying the tactics that propagandists employ in real life can provide insights and even predict the tricks employed by Web spammers.

“To prove the strength of this relationship between propagandistic and spamming techniques, […] we show that one can, in fact, use anti-propagandistic techniques to discover Web spamming networks. In particular, we demonstrate that when starting from an initial untrustworthy site, backwards propagation of distrust (looking at the graph defined by links pointing to an untrustworthy site) is a successful approach to finding clusters of spamming, untrustworthy sites. This approach was inspired by the social behavior associated with distrust: in society, recognition of an untrustworthy entity (person, institution, idea, etc) is reason to question the trustworthiness of those who recommend it. Other entities that are found to strongly support untrustworthy entities become less trustworthy themselves. As in society, distrust is also propagated backwards on the Web graph.”
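The backwards-propagation idea the authors describe can be sketched in a few lines. The link graph, site names and decay factor below are made up for illustration and are not the authors’ actual data or parameters:

```python
# Sketch of "backwards propagation of distrust": starting from a known
# untrustworthy site, follow inbound links (who recommends it?) and assign
# each linker a decayed share of the seed's distrust. Graph and sites are
# hypothetical.
from collections import deque

# links[a] = sites that a links to (i.e., recommends)
links = {
    "spam-hub.example":  ["seed-bad.example"],
    "spam-farm.example": ["spam-hub.example", "seed-bad.example"],
    "news.example":      ["honest.example"],
}

def inbound(target):
    """All sites that link to (recommend) the target site."""
    return [site for site, outs in links.items() if target in outs]

def propagate_distrust(seed, decay=0.5):
    """Breadth-first propagation of distrust backwards along inbound links."""
    distrust = {seed: 1.0}
    queue = deque([seed])
    while queue:
        site = queue.popleft()
        for linker in inbound(site):
            score = distrust[site] * decay
            if score > distrust.get(linker, 0.0):
                distrust[linker] = score
                queue.append(linker)
    return distrust

scores = propagate_distrust("seed-bad.example")
```

Sites that recommend the untrustworthy seed inherit part of its distrust, while sites with no path to it stay untouched, mirroring the social intuition in the quote.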

The authors document that today’s Web spammers are using increasingly sophisticated tricks.

“In cases where there are high stakes, Web spammers’ influence may have important consequences for a whole country. For example, in the 2006 Congressional elections, activists using Google bombs orchestrated an effort to game search engines so that they present information in the search results that was unfavorable to 50 targeted candidates. While this was an operation conducted in the open, spammers prefer to work in secrecy so that their actions are not revealed. So, [we] revealed and documented the first Twitter bomb, which tried to influence the Massachusetts special elections, showing how an Iowa-based political group, hiding its affiliation and profile, was able to serve misinformation a day before the election to more than 60,000 Twitter users that were following the elections. Very recently we saw an increase in political cybersquatting, a phenomenon we reported in [28]. And even more recently, […] we discovered the existence of Pre-fabricated Twitter factories, an effort to provide collaborators pre-compiled tweets that will attack members of the Media while avoiding detection of automatic spam algorithms from Twitter.”

The theoretical foundations for a trustworthiness system:

“Our concept of trustworthiness comes from the epistemology of knowledge. When we believe that some piece of information is trustworthy (e.g., true, or mostly true), we do so for intrinsic and/or extrinsic reasons. Intrinsic reasons are those that we acknowledge because they agree with our own prior experience or belief. Extrinsic reasons are those that we accept because we trust the conveyor of the information. If we have limited information about the conveyor of information, we look for a combination of independent sources that may support the information we receive (e.g., we employ “triangulation” of the information paths). In the design of our system we aim to automatize as much as possible the process of determining the reasons that support the information we receive.”

“We define as trustworthy, information that is deemed reliable enough (i.e., with some probability) to justify action by the receiver in the future. In other words, trustworthiness is observable through actions.”

“The overall trustworthiness of the information we receive is determined by a linear combination of (a) the reputation RZ of the original sender Z, (b) the credibility we associate with the contents of the message itself C(m), and (c) characteristics of the path that the message used to reach us.”

“To compute the trustworthiness of each message from scratch is clearly a huge task. But the research that has been done so far justifies optimism in creating a semi-automatic, personalized tool that will help its users make sense of the information they receive. Clearly, no such system exists right now, but components of our system do exist in some of the popular [real-time information channels]. For a testing and evaluation of our system we plan to use primarily Twitter, but also real-time Google results and Facebook.”
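The linear-combination model quoted above can be sketched directly; the weights and input scores below are hypothetical placeholders, not the authors’ actual parameters:

```python
# The paper's trustworthiness model, sketched with made-up weights: overall
# trustworthiness is a linear combination of (a) the reputation of the
# original sender, (b) the credibility of the message content, and (c) a
# score for the path the message took to reach us.

def trustworthiness(reputation, credibility, path_score,
                    weights=(0.4, 0.4, 0.2)):
    """All inputs in [0, 1]; the weights are hypothetical and sum to 1."""
    w_rep, w_cred, w_path = weights
    return w_rep * reputation + w_cred * credibility + w_path * path_score

# A message from a reputable sender, fairly plausible content, short trusted path:
score = trustworthiness(reputation=0.9, credibility=0.7, path_score=0.8)
```

A personalized tool would presumably let each user tune these weights, e.g., weighting the path more heavily for second-hand retweets than for direct sources.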

In order to provide trails of trustworthiness in real-time streams, the authors plan to address the following challenges:

•  “Establishment of new metrics that will help evaluate the trustworthiness of information people receive, especially from real-time sources, which may demand immediate attention and action. […] we show that coverage of a wider range of opinions, along with independence of results’ provenance, can enhance the quality of organic search results. We plan to extend this work in the area of real-time information so that it does not rely on post-processing procedures that evaluate quality, but on real-time algorithms that maintain a trail of trustworthiness for every piece of information the user receives.”

• “Monitor the evolving ways in which information reaches users, in particular citizens near election time.”

•  “Establish a personalizable model that captures the parameters involved in the determination of trustworthiness of information in real-time information channels, such as Twitter, extending the work of measuring quality in more static information channels, and by applying machine learning and data mining algorithms. To implement this task, we will design online algorithms that support the determination of quality via the maintenance of trails of trustworthiness that each piece of information carries with it, either explicitly or implicitly. Of particular importance, is that these algorithms should help maintain privacy for the user’s trusting network.”

• “Design algorithms that can detect attacks on [real-time information channels]. For example we can automatically detect bursts of activity related to a subject, source, or non-independent sources. We have already made progress in this area. Recently, we advised and provided data to a group of researchers at Indiana University to help them implement “truthy”, a site that monitors bursty activity on Twitter.  We plan to advance, fine-tune and automate this process. In particular, we will develop algorithms that calculate the trust in an information trail based on a score that is affected by the influence and trustworthiness of the informants.”
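A minimal sketch of the kind of burst detection described in this last point, flagging time windows whose activity jumps well above the historical baseline. The counts and threshold are illustrative and are not the authors’ actual algorithm:

```python
# A minimal burst detector for message counts per time window: flag any window
# whose count exceeds the mean of all previous windows by k standard
# deviations. Parameters and counts are hypothetical.
import statistics

def detect_bursts(counts, k=3.0, warmup=5):
    """Return indices of windows whose count is a statistical outlier
    relative to the history of windows before them."""
    bursts = []
    for i in range(warmup, len(counts)):
        history = counts[:i]
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history) or 1.0  # avoid a zero threshold
        if counts[i] > mean + k * stdev:
            bursts.append(i)
    return bursts

# Steady chatter, then a sudden spike of near-identical messages at window 7:
counts = [4, 5, 6, 5, 4, 5, 6, 60, 7]
print(detect_bursts(counts))
```

A real system would of course also look at *what* is bursting and whether the accounts involved are independent, but the statistical trigger is this simple at its core.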

In conclusion, the authors “mention that in a month from this writing, Ushahidi […] plans to release SwiftRiver, a platform that ‘enables the filtering and verification of real-time data from channels like Twitter, SMS, Email and RSS feeds’. Several of the features of Swift River seem similar to what we propose, though a major difference appears to be that our design is personalization at the individual user level.”

Indeed, having been involved in SwiftRiver research since early 2009 and currently testing the private beta, I can confirm that there are important similarities and some differences. Personalization, however, is not one of those differences: Swift allows full personalization at the individual user level.

Another is that we’re hoping to go beyond just text-based information with Swift, i.e., we hope to pull in pictures and video footage (in addition to Tweets, RSS feeds, email, SMS, etc) in order to cross-validate information across media, which we expect will make the falsification of crowdsourced information more challenging, as I argue here. In any case, I very much hope that the system being developed by the authors will be free and open source so that integration might be possible.

A copy of the paper is available here (PDF). I hope to meet the authors at the Berkman Center’s “Truth in Digital Media Symposium” and highly recommend the wiki they’ve put together with additional resources. I’ve added the majority of my research on verification of crowdsourced information to that wiki, such as my 20-page study on “Information Forensics: Five Case Studies on How to Verify Crowdsourced Information from Social Media.”

Mobile Technologies, Crisis Mapping & Disaster Response: My Talk at #MWC12

Many thanks to GSMA for their kind invitation to speak at the 2012 World Mobile Congress (MWC12) in Barcelona, Spain. GSMA is formally launching its Disaster Response Program at MWC12 with an inaugural working group. “The Disaster Response programme seeks to understand how mobile operators can most effectively support each other and improve resilience among networks in disaster scenarios, and identify how the mobile industry can best help citizens and humanitarian organisations on the ground following a crisis.” Below is the presentation I plan to give.

When disaster strikes, access to information is as important as access to food and water. This link between information, disaster response and aid was officially recognized by the Secretary General of the International Federation of the Red Cross and Red Crescent Societies in the 2005 World Disasters Report. Since then, disaster-affected populations have become increasingly digital thanks to the widespread adoption of mobile technologies. Indeed, as a result of these mobile technologies, affected populations are increasingly able to source, share and generate a vast amount of information, which is completely transforming disaster response.

Take the case of Haiti, for example. Within 48 hours of the devastating earthquake that struck Port-au-Prince in 2010, a dedicated SMS short code was set up to crowdsource information on the urgent needs of the disaster-affected population. This would not have been possible without the partnership with Digicel Haiti since they’re the ones who provided the free SMS short code that enabled anyone in Haiti to text in their most urgent needs and location.

This graphic depicts the words that appeared most frequently in the text messages received during the first two weeks after the earthquake. Obviously, the original text messages were in Haitian Creole, so volunteers from the Diaspora translated some 80,000 SMSs during the entire 3-month operation. From these, the most urgent life-and-death text messages were identified and geo-located as quickly as possible.

The result was a live crisis map of Haiti, which became the most comprehensive and up-to-date information available to the humanitarian community. In fact, one first-responder noted that the live map helped them save hundreds of lives during their search and rescue operations.

Live crisis maps are critical for disaster response because they can provide real-time situational awareness, like this official UN Crisis Map of Libya. Because the UN Office for the Coordination of Humanitarian Affairs (UN OCHA) did not have information management officers in-country when the crisis began to escalate, they turned to the crisis-affected population for real-time information on the rapidly changing situation. Indeed, a lot of local and relevant user-generated content was already being shared via Twitter, Flickr and YouTube.

The result was this crowdsourced social media map which was used not only by OCHA but also by the World Food Program and other humanitarian organizations. Needless to say, the majority of the rich, multi-media content that populated this map was generated thanks to mobile technology.

Humanitarian organizations are not the only groups using mobile technologies and crisis mapping platforms. Indeed, the mainstream media plays an instrumental role following a disaster. Their ability to widely and rapidly disseminate information to disaster-affected populations is absolutely critical for disaster response. And they too are turning to live crisis maps to do this. Just a few weeks ago, Al-jazeera launched this live map to document the impact of the snowstorm emergency in the Balkans.

The map became the most viewed page on the Al-jazeera Balkans website for several weeks running, a clear testament to the demand for this type of information and medium. This is actually the third time that Al-jazeera has leveraged mobile technologies for crisis mapping. Just two short months ago, we partnered with Al-jazeera to run a similar project in Somalia using an SMS short code.

There is no doubt that access to information is as important as access to food and water. In fact, sometimes information is the only help that can be made available, especially when isolated populations are cut off and beyond the reach of traditional aid. So while we talk of humanitarian aid and food relief, we also need to talk about “information aid” and “information relief”. Indeed, we have a “World Food Program” but we don’t have a “World Information Program” for communicating with disaster-affected populations.

This explains why I very much welcome and applaud the GSMA for launching their Disaster Response Program. It is perfectly clear that telecommunications companies are pivotal to the efforts just described. I thus look forward to collaborating with this new working group and hope that we’ll begin our conversations by addressing the pressing need and challenge to provide disaster-affected populations with free “information rations” (i.e., limited but free voice calls and SMS) in the immediate aftermath of a major disaster.

Stranger than Fiction: A Few Words About An Ethical Compass for Crisis Mapping

The good people at the Sudan Sentinel Project (SSP), housed at my former “alma mater,” the Harvard Humanitarian Initiative (HHI), have recently written this curious piece on crisis mapping and the need for an “ethical compass” in this new field. They made absolutely sure that I’d read the piece by directly messaging me via the @CrisisMappers twitter feed. Not to worry, good people, I read your masterpiece. Interestingly enough, it was published the day after my blog post reviewing IOM’s data protection standards.

To be honest, I was actually not going to spend any time writing up a response because the piece says absolutely nothing new and is hardly pro-active. Now, before anyone spins and twists my words: the issues they raise are of paramount importance. But if the authors had actually taken the time to speak with their fellow colleagues at HHI, they would know that several of us participated in a brilliant workshop last year which addressed these very issues. Organized by World Vision, the workshop included representatives from the International Committee of the Red Cross (ICRC), Care International, Oxfam GB, UN OCHA, UN Foundation, Standby Volunteer Task Force (SBTF), Ushahidi, the Harvard Humanitarian Initiative (HHI) and obviously World Vision. There were several data protection experts at this workshop, which made the event one of the most important workshops I attended in all of 2011. So a big thanks again to Phoebe Wynn-Pope at World Vision for organizing.

We discussed in-depth issues surrounding Do No Harm, Informed Consent, Verification, Risk Mitigation, Ownership, Ethics and Communication, Impartiality, etc. As expected, the outcome of the workshop was the clear need for data protection standards that are applicable for the new digital context we operate in, i.e., a world of social media, crowdsourcing and volunteer geographical information. Our colleagues at the ICRC have since taken the lead on drafting protocols relevant to a data 2.0 world in which volunteer networks and disaster-affected communities are increasingly digital. We expect to review this latest draft in the coming weeks (after Oxfam GB has added their comments to the document). Incidentally, the summary report of the workshop organized by World Vision is available here (PDF) and highly recommended. It was also shared on the Crisis Mappers Google Group. By the way, my conversations with Phoebe about these and related issues began at this conference in November 2010, just a month after the SBTF launched.

I should confess the following: one of my personal pet peeves has to do with people stating the totally obvious and calling for action but actually doing absolutely nothing else. Talk for talk’s sake just makes it seem like the authors of the article are simply looking for attention. Meanwhile, many of us are working on these new data protection challenges in our own time, as volunteers. And by the way, the SSP project is first and foremost focused on satellite imagery analysis and the Sudan, not on crowdsourcing or on social media. So they’re writing their piece as outsiders and, well, are hence less informed—particularly since they didn’t do their homework.

Their limited knowledge of crisis mapping is blatantly obvious throughout the article. Not only do the authors not reference the World Vision workshop, which HHI itself attended, they also seem rather confused about the term “crisis mappers” which they keep using. This is somewhat unfortunate since the Crisis Mappers Network is an offshoot of HHI. Moreover, SSP participated and spoke at last year’s Crisis Mappers Conference—just a few months ago, in fact. One outcome of this conference was the launch of a dedicated Working Group on Security and Privacy, which will now become two groups, one addressing security issues and the other data protection. This information was shared on the Crisis Mappers Google Group and one of the authors is actually part of the Security Working Group.

To this end, one would have hoped, and indeed expected, that the authors would write a somewhat more informed piece about these issues. At the very least, they really ought to have documented some of the efforts to date in this innovative space. But they didn’t and unfortunately several statements they make in their article are, well… completely false and rather revealing at the same time. (Incidentally, the good people at SSP did their best to dissuade the SBTF from launching a Satellite Team on the premise that only experts are qualified to tag satellite imagery; seems like they’re not interested in citizen science even though some experts I’ve spoken to have referred to SSP as citizen science).

In any case, the authors keep on referring to “crisis mappers this” and “crisis mappers that” throughout their article. But who exactly are they referring to? Who knows. On the one hand, there is the International Network of Crisis Mappers, which is a loose, decentralized, and informal network of some 3,500 members and 1,500 organizations spanning 150+ countries. Then there’s the Standby Volunteer Task Force (SBTF), a distributed, global network of 750+ volunteers who partner with established organizations to support live mapping efforts. And then, easily the largest and most decentralized “group” of all, are all those “anonymous” individuals around the world who launch their own maps using whatever technologies they wish and for whatever purposes they want. By the way, to define crisis mapping as mapping highly volatile and dangerous conflict situations is really far from being accurate either. Also, “equating” crisis mapping with crowdsourcing, which the authors seem to do, is further evidence that they are writing about a subject that they have very little understanding of. Crisis mapping is possible without crowdsourcing or social media. Who knew?

Clearly, the authors are confused. They appear to refer to “crisis mappers” as if the group were a legal entity, with funding, staff, administrative support and brick-and-mortar offices. Furthermore, and what the authors don’t seem to realize, is that much of what they write is actually true of the formal professional humanitarian sector vis-a-vis the need for new data protection standards. But the authors have obviously not done their homework, and again, this shows. They are also confused about the term “crisis mapping” when they refer to “crisis mapping data” which is actually nothing other than geo-referenced data. Finally, a number of paragraphs in the article have absolutely nothing to do with crisis mapping even though the authors seem to insinuate otherwise. Also, some of the sensationalism that permeates the article is simply unnecessary and in poor taste.

The fact of the matter is that the field of crisis mapping is maturing. When Dr. Jennifer Leaning and I co-founded and co-directed HHI’s Program on Crisis Mapping and Early Warning from 2007-2009, the project was very much an exploratory, applied-research program. When Dr. Jen Ziemke and I launched the Crisis Mappers Network in 2009, we were just at the beginning of a new experiment. The field has come a long way since and one of the consequences of rapid innovation is obviously the lack of any how-to-guide or manual. These certainly need to be written and are being written.

So, instead of stating the obvious, repeating the obvious, calling for the obvious and making embarrassing factual errors in a public article (which, by the way, is also quite revealing of the underlying motives), perhaps the authors could actually have done some research and emailed the Crisis Mappers Google Group. Two of the authors also have my email address; one even has my private phone number; oh, and they could also have DM’d me on Twitter like they just did.