
On Rumors, Repression and Digital Disruption in China: Opening Pandora’s Inbox of Truthiness?

The Economist recently published a brilliant piece on China entitled: “The Power of Microblogs: Zombie Followers and Fake Re-Tweets.” BBC News followed with an equally excellent article: “Damaging Coup Rumors Ricochet Across China.” Combined, these articles reveal just how profound the digital disruption in China is likely to be now that Pandora’s Inbox has been opened.


The Economist article opens with an insightful historical comparison:

“In the year 15AD, during the short-lived Xin dynasty, a rumor spread that a yellow dragon, a symbol of the emperor, had inauspiciously crashed into a temple in the mountains of central China and died. Ten thousand people rushed to the site. The emperor Wang Mang, aggrieved by such seditious gossip, ordered arrests and interrogations to quash the rumor, but never found the source. He was dethroned and killed eight years later, and Han-dynasty rule was restored.”

“The next ruler, Emperor Guangwu, took a different approach, studying rumors as a barometer of public sentiment, according to a recent book Rumors in the Han Dynasty by Lu Zongli, a historian. Guangwu’s government compiled a ‘Rumors Report’, cataloguing people’s complaints about local officials, and making assessments that were passed to the emperor. The early Eastern Han dynasty became known for officials who were less corrupt and more attuned to the people.”

In present-day China, a popular pastime among 250+ million Chinese users of microblogging platforms is to “spread news and rumors, both true and false, that challenge the official script of government officials and state-propaganda organs.” In Domination and the Arts of Resistance: Hidden Transcripts, James Scott distinguishes between public and hidden transcripts. The former describes the open, public discourse that takes place between dominators and oppressed while hidden transcripts relate to the critique of power that “goes on offstage”, which the power elites cannot decode. Scott writes that when the oppressed classes publicize this “hidden transcript” (the truthiness?), they become conscious of its common status. Borrowing from Juergen Habermas (as interpreted by Clay Shirky), those who take on the tools of open expression become a public, and a synchronized public increasingly constrains undemocratic rulers while expanding the rights of that public. The result in China? “It is hard to overestimate how much the arrival of [microblogging platforms] has changed the dynamic between rulers and ruled over the past two years” (The Economist).

Chinese authorities have responded to this threat in two predictable ways, one repeating the ill-fated actions of the Xin Dynasty and the other reflecting the more open spirit of Emperor Guangwu. In the latter case, authorities are turning to microblogs as a “listening post” for public opinion and also as a publishing platform. Indeed, “government agencies, party organs and individual officials have set up more than 50,000 weibo accounts [Chinese equivalent of Twitter]” (The Economist). In the former case, the regime has sought to “combat rumors harshly and to tighten controls over the microblogs and their users, censoring posts and closely monitoring troublemakers.” The UK Guardian reports that China is now “taking the toughest steps yet against major microblogs and detaining six people for spreading rumors of a coup amid Beijing’s most serious political crisis for years.”

Beijing’s attempt to regulate microblogging companies by requiring users to sign up with their real names is unlikely to be decisive, however. “No matter how it is enforced, user verification seems unlikely to deter the spread of rumors and information that has so concerned authorities” (The Economist). To be sure, companies are already selling fake verification services for a small fee. Besides, verifying accounts for millions of users is simply too time-consuming and hence costly. Even Twitter gave up their verified account service a while back. The task of countering rumors is even more of a Quixotic dream.

Property tycoon Zhang Xin, who has more than 3 million followers, wrote: “What is the best way to stop ‘rumors’? It is transparency and openness. The more speech is discouraged, the more rumors there will be” (UK Guardian).

This may in part explain why Chinese authorities have shifted their approach to one of engagement as evidenced by those 50,000 new weibo accounts. With this second reaction, however, Beijing is possibly passing the point of no return. “This degree of online engagement can be awkward for authorities used to a comfortable buffer from public opinion,” writes The Economist. This is an understatement; Pandora’s (In)box is now open and the “hidden transcript” is cloaked no longer. The critique of power is decoded and elites are “forced” to devise a public reply as a result of this shared awareness lest they lose legitimacy vis-a-vis the broader population. But the regime doesn’t even have a “customer service” mechanism in place to deal with distributed and potentially high-volume complaints. Censorship is easy compared to engagement.

Recall the “Rumors Report” compiled by Emperor Guangwu’s government to catalogue people’s complaints about local officials. How will these 50,000 new weibo users deal with such complaints now that the report can be crowdsourced, especially given the fact that China’s “Internet users have become increasingly bold in their willingness to discuss current affairs and even sensitive political news […]” (UK Guardian)?

As I have argued in my dissertation, repressive regimes can react to real (or perceived)  threats posed by “liberation technologies” by either cracking down and further centralizing control and/or by taking on the same strategies as digital activists, which at times requires less centralization. Either way, they’re taking the first step on a slippery slope. By acknowledging the problem of rumors so publicly, the regime is actually calling more attention to how disruptive these simple speculations can be—the classic Streisand effect.

“By falsely packaging lies and speculation as ‘truth’ and ‘existence’, online rumours undermine the morale of the public, and, if out of control, they will seriously disturb the public order and affect social stability,” said a commentary in the People’s Daily, the official Communist party newspaper. (UK Guardian).

Practically speaking, how will those 50,000 new weibo users coordinate their efforts to counter rumors and spread state propaganda? “We have a saying among us: you only need to move your lips to start a rumor, but you need to run until your legs are broken to refute one,” says an employee of a state media outlet (The Economist). How will these new weibo users synchronize collective action in near real-time to counter rumors when any delay is likely to be interpreted as evidence of further guilt? Will they know how to respond to the myriad questions bombarded at them in real-time by hundreds of thousands of Chinese microbloggers? This may lead to high-pressure situations that are ripe for mistakes and errors, particularly if these government officials are new to microblogging. Indeed, if just one of these state-microbloggers slips, that slip could go viral with a retweet tsunami. Any retreat by authorities from this distributed engagement strategy will only lead to more rumors.

The rumors of the coup d’état continue to ricochet across China, gaining remarkable traction far and wide. Chinese microblogs were also alight last week with talk of corruption and power struggles within the highest ranks of the party, which may have fueled the rumor of an overthrow. This is damaging to China’s Communist Party which “likes to portray itself as unified and in control,” particularly as it prepares for its once-in-a-decade leadership shuffle. “The problem for China’s Communist Party is that it has no effective way of refuting such talk. There are no official spokesmen who will go on the record, no sources briefing the media on the background. Did it happen? Nobody knows. So the rumors swirl” (BBC News). Even the official media, which is “often found waiting for political guidance, can be slow and unresponsive.”

So if Chinese authorities and state media aren’t even equipped (beyond plain old censorship) to respond to national rumors about an event as important as a coup (can it possibly get more important than that?), then how in the world will they deal with the undercurrent of rumors that continue to fill Chinese microblogs now that these can have 50,000 new targets online? Moreover, “many in China are now so cynical about the level of censorship that they will not believe what comes from the party’s mouthpieces even if it is true. Instead they will give credence to half-truths or fabrications on the web,” which is “corrosive for the party’s authority” (BBC News). This is a serious problem for China’s Communist elite who are obsessed with the task of projecting an image of total unity and stability.

In contrast, speculators on Chinese microblogging platforms don’t need a highly coordinated strategy to spread conspiracies. They are not handicapped by the centralization and collective action problem that Chinese authorities face; after all, it is clearly far easier to spread a rumor than to debunk one. As noted by The Economist, those spreading rumors have “at their disposal armies of zombie followers and fake re-tweets as well as marketing companies, which help draw attention to rumors until they are spread by a respected user with many real followers, such as a celebrity.” But there’s more at stake here than mere rumors. In fact, as noted by The Economist, the core of the problem has less to do with hunting down rumors of yellow dragons than with “the truth that they reflect: a nervous public. In the age of weibo, it may be that the wisps of truth prove more problematic for authorities than the clouds of falsehood.”

Fascinating epilogues:

China’s censorship can never defeat the internet
China’s censors tested by microbloggers who keep one step ahead of state media

Innovation and Counter-Innovation: Digital Resistance in Russia

Want to know what the future of digital activism looks like? Then follow the developments in Russia. I argued a few years back that the fields of digital activism and civil resistance were converging to a point I referred to as  “digital resistance.” The pace of tactical innovation and counter-innovation in Russia’s digital battlefield is stunning and rapidly converging to this notion of digital resistance.

“Crisis can be a fruitful time for innovation,” writes Gregory Asmolov. Contested elections are also ripe for innovation, which is why my dissertation case studies focused on elections. “In most cases,” says Asmolov, “innovations are created by the oppressed (the opposition, in Russia’s case), who try to challenge the existing balance of power by using new tools and technologies. But the state can also adapt and adopt some of these technologies to protect the status quo.” These innovations stem not only from the new technologies themselves but are embodied in the creative ways they are used. In other words, tactical innovation (and counter-innovation) is taking place alongside technological innovation. Indeed, “innovation can be seen not only in the new tools, but also in the new forms of protest enabled by the technology.”

Some of my favorite tactics from Russia include the YouTube video of Vladimir Putin arrested for fraud and corruption. The video was made to look like a real “breaking news” announcement on Russian television. The video got millions of views in just a few days. Another tactic is the use of DIY drones, mobile phone live-streaming and/or 360-degree 3D photo installations to more accurately relay the size of protests. A third tactic entails the use of a twitter username that resembles that of a well-known individual. Michael McFaul, the US Ambassador to Russia, has the twitter handle @McFaul. Activists set up the twitter handle @McFauI that appears identical but actually uses a capital “i” instead of a lower case “L” for the last letter in McFaul.
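To make the look-alike handle trick concrete, here is a minimal Python sketch (my own illustration; the handle strings simply echo the example above) showing why two visually near-identical names are nonetheless distinct account identifiers:

```python
# The real handle ends in a lowercase "l"; the spoof ends in an uppercase "I",
# which renders almost identically in many sans-serif fonts.
real = "McFaul"
spoof = "McFauI"

print(real == spoof)                  # False: they are different strings
print(ord(real[-1]), ord(spoof[-1]))  # 108 vs 73: entirely different characters
```

Because account names are matched character for character, the platform treats the two as unrelated users even though most readers cannot tell them apart at a glance.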

Asmolov lists a number of additional innovations in the Russian context in this excellent write-up. These range from coordination tools such as the “League of Voters” website, the “Street Art” group on Facebook and the car-based flashmob protests, which attracted more than one thousand cars in one case, to the crowdsourced violations map “Karta Narusheniy” and the “SMS Golos” and “Svodny Protocol” platforms used to collect, analyze and/or map reports from trusted election observers (using bounded crowdsourcing).

One of my favorite tactics is the “solo protest.” According to Russian law, “a protest by one person does not require special permission.” So activist Olesya Shmagun stood in front of Putin’s office with a poster that read “Putin, go and take part in public debates!” While she was questioned by the police and security service, she was not detained since one-person protests are not illegal. Even though she only caught the attention of several dozen people walking by at the time, she published the story of her protest and a few photos on her LiveJournal blog, which drew considerable attention after being shared on many blogs and media outlets. As Asmolov writes, “this story shows the power of what is known as Manuel Castells’ ‘mass self-communication’. Thanks to the presence of one camera, an offline one-person protest found a way to a [much wider] audience online.”

This innovative tactic led to another challenge: how to turn a one-person protest into a massive number of one-person protests? So on top of this original innovation came yet another innovation, the Big White Circle action. The dedicated online tool Feb26.ru was developed specifically to coordinate many simultaneous one-person protests. The platform,

“[…] allowed people to check in at locations of their choice on the map of the Garden Ring circle, and showed what locations were already occupied. Unlike other protests, the Big White Circle did not have any organizational committee or a particular leader. The role of the leader was played by a website. The website suffered from DDoS attacks; as a result, it was closed and deleted by the provider; a day later, it was restored.  The practice of creating special dedicated websites for specific protest events is one of the most interesting innovations of the Russian protests. The initial idea belongs to Ilya Klishin, who launched the dec24.ru website (which doesn’t exist anymore) for the big opposition rally that took place in Moscow on December 24, 2011.”

The reason I like this tactic is that it takes a perfectly legal action and simply multiplies it, thus potentially forcing the regime to come up with a new set of laws that will clearly appear absurd and invite ridicule from a larger segment of the population.

Citizen-based journalism played a pivotal role by “increasing transparency of the coverage of pro-government rallies.” As Asmolov notes, “Internet users were able to provide much content, including high quality YouTube reports that showed that many of those who took a part in these rallies had been forced or paid to participate, without really having any political stance.” This relates to my earlier blog post, “Wag the Dog, or Why Falsifying Crowdsourced Information Can be a Pain.”

Of course, there is plenty of “counter-innovation” coming from the Kremlin and friends. Take this case of pro-Kremlin activists producing an instructional YouTube video on how to manipulate a crowdsourced election-monitoring platform. In addition, Putin loyalists have adapted some of the same tactics as opposition activists, such as the car-based flash-mob protest. The Russian government also decided to create an online system of their own for election monitoring:

“Following an order from Putin, the state communication company Rostelecom developed a website webvybory2012.ru, which allowed people to follow the majority of the Russian polling stations (some 95,000) online on the day of the March 4 presidential election.  Every polling station was equipped with two cameras: one has to be focused on the ballot box and the other has to give the general picture of the polling station. Once the voting was over, one of the cameras broadcasted the counting of the votes. The cost of this project is at least 13 billion rubles (around $500 million). Many bloggers have criticized this system, claiming that it creates an imitation of transparency, when actually the most common election violations cannot be monitored through webcameras (more detailed analysis can be found here). Despite this, the cameras allowed to spot numerous violations (1, 2).”

From the perspective of digital resistance strategies, this is exactly the kind of reaction you want to provoke from a repressive regime. Force them to decentralize, spend hundreds of millions of dollars and hundreds of labor-hours to adopt similar “technologies of liberation” and in the process document voting irregularities on their own websites. In other words, leverage and integrate the regime’s technologies within the election-monitoring ecosystem being created, as this will spawn additional innovation. For example, one Russian activist proposed that this webcam network be complemented by a network of citizen mobile phones. In fact, a group of activists developed a smartphone app that could do just this. “The application Webnablyudatel has a classification of all the violations and makes it possible to instantly share video, photos and reports of violations.”

Putin supporters also made an innovative use of crowdsourcing during the recent elections. “What Putin Has Done” is based on a map of Russia where anyone can submit information about Putin’s good deeds. Just like pro-Kremlin activists can game pro-democracy crowdsourcing platforms, so can supporters of the opposition game a platform like this Putin map. In addition, activists could have easily created a Crowdmap and called it “What Putin Has Not Done” and crowdsourced that map, which no doubt would be far more populated than the original good deed map.

One question that comes to mind is how the regime will deal with disinformation on the crowdsourcing platforms they set up. Will they need to hire more supporters to vet the information submitted to said platform? Or will they close up the reporting and use “bounded crowdsourcing” instead? If so, will they have a communications challenge on their hands in trying to convince the public that their trusted reporters are indeed legitimate? Another question has to do with collective action. Pro-Kremlin activists are already innovating on their own but will this create a collective-action challenge for the Russian government? Take the example of the pro-regime “Putin Alarm Clock” (Budilnikputina.ru) tactic which backfired and even prompted Putin’s chief of elections staff to dismiss the initiative as “a provocation organized by the protestors.”

There has always been an interesting asymmetric dynamic in digital activism, with activists as first-movers innovating under oppression and regimes counter-innovating. How will this asymmetry change as digital activism and civil resistance tactics and strategies increasingly converge? Will repressive regimes be pushed to decentralize their digital resistance innovations in order to keep pace with the distributed pro-democracy innovations springing up? Does innovation require less coordination than counter-innovation? And as Gregory Asmolov concludes in his post-script, how will the future ubiquity of crowd-funding platforms and tools for micro-donations/payments online change digital resistance?

Crowdsourcing vs Putin: “Mapping Dots is a Disease on the Map of Russia”

I chose to focus my dissertation research on the impact of information and communication technologies (ICTs) during elections in repressive states. Why? Because the contentious relationship between state and society during elections is accentuated and the stakes are generally higher than during the periods in between elections. To be sure, elections provide momentary opportunities for democratic change. Moreover, the impact of ICTs on competitive events such as contentious elections may be more observable than the impact on state-society relations during the regular calendar year. In other words, the use of ICTs during election periods may shed some light on whether said technologies empower coercive regimes at the expense of civil society or vice versa.

This was certainly the case this past week in Russia as a result of a crowdsourced election-violations map, which was used to monitor the country’s Parliamentary Elections. The map displays over 5,000 reports of election violations spanning a wide range of categories.

While this map is not powered by the Ushahidi platform (contrary to this claim), the many similarities suggest that the project was inspired by the earlier nation-wide use of the Ushahidi platform in 2010, namely the Russia Fires Help Map. In fact, the major initiator of the Violations Map attended a presentation on Ushahidi and Help Map in Boston earlier this year.

The Elections-Violation Map was launched by Golos, the country’s only independent election monitoring organization and Gazeta.ru, Russia’s leading Internet newspaper. This promotional banner for the map was initially displayed on Gazeta.ru’s website but was subsequently taken down by the Editor in Chief who cited commercial reasons for the action: “Right now we have such a period that this advertisement place is needed for commercial advertisement. But we’re still partners with Golos.”

The deputy editor from Gazeta who had curated the map resigned in protest: after it became evident that the Violation Map “no longer suited the leadership and the owners of the website,” he said, it would have been “cowardice” to continue the work. Despite Gazeta.ru’s withdrawal from the project, the Violation Map found another partner, “Slon.ru, a popular blog platform (~1 million unique visitors monthly).”

My colleague Alexey Sidorenko argues that the backlash against the Violations Map “induced the Streisand Effect, whereby any attempt to contain the spread of information results in the opposite reaction.” Indeed, as one Russian blogger tweeted: “Why are ‘United Russia’ representatives so short-sighted? It is evident that now half of the country will know about the Violation Map.” Needless to say, the Violations Map is one of the trending topics being discussed in Russia today (on election day).

As is well known, Golos is funded by both American and European organizations. Not surprisingly, Vladimir Putin is not a fan of Golos, as recently quoted in the Washington Post:

“Representatives of some states are organizing meetings with those who receive money from them, the so-called grant recipients, briefing them on how to ‘work’ in order to influence the course of the election campaign in our country,” Putin said.

“As the saying goes, it’s money down the drain,” he added. “First, because Judas is not the most respected of biblical characters among our people. And, second, they would do better to use that money to redeem their national debt and stop pursuing their costly and ineffective foreign policy.”

As expected by many, hackers took down the Golos website along with the Election-Violations Map. (The Sudanese government did the same last year when independent Sudanese civil society groups used the Ushahidi platform to monitor the country’s first presidential elections in two decades). Incidentally, Slon.ru seems to have evaded the take-down. In any case, the blocking of websites is just one very easy tactic available to hackers and repressive regimes. Take this other tactic, for example:

According to a Russian-speaking colleague of mine (who also pointed me to this pro-Kremlin activist video), the woman says that “mapping dots is a disease on the map of Russia.” The video shows her calling the Map’s dedicated number to report a false message (she gives a location that doesn’t exist) and subsequently fills out a false report online. In other words, this is an instructional video on how to submit false information to a crowdsourcing platform. A fully translated transcript of the video is available here.

This same colleague informed me that one of Russia’s State Television Channels subsequently broadcast a program in which it accused those behind the Violations Map of making false claims about the falsification of reports, accusing the “Maptivists” of using an American tool in efforts against the Russian ruling party. In addition, the head of Russia’s Election Committee submitted a complaint against the map to the court, which resulted in the organizers receiving a $1,000 fine (30,000 Rubles).

Gregory Asmolov, a PhD student at LSE, argues that the Russian government’s nervous reaction to the crowdsourced map and its attempt to delegitimize and limit its presence in cyberspace is clear proof of the project’s impact. Gregory goes on to write that the crowdsourced map is an interim product, not a finished product, which serves as a diagnostic system in which individuals are the sensors. He also argues that crowdsourcing mirrors the reliability of society and thus claims that if there is low confidence in the reliability of crowdsourced information, this is a diagnosis of society and not the crowdsourcing tool itself.

Alexey Sidorenko concludes with the following: “The Violation Map incident is just an indicator of a much deeper trend – the growing will for the need of change, exercised by free, non-falsified elections. In previous election cycles, most journalists would not have resigned and no big portal would have been brave enough to advertise election violation monitoring. Aside from the deeper sociological undercurrent, technology plays a crucial role in all presented stories. […] none of these events would actually have happened if Golos and Gazeta.ru had not united in producing the Violation Map. Golos has had an election violation database since 2008, but it never was as influential as it is now. This suggests the success of the project relies heavily on its online mapping element (if any event gets concrete geographic coordinates it automatically gets more real and more appealing) and having a proper media partner.”

My dissertation research asked the following question: Do New ICTs Change the Balance of Power Between Repressive States and Civil Society? In the case of Russia’s Parliamentary Elections, it would seem so. So my next question is this: If Help Map inspired this week’s Election Violations Map, then what will the latter inspire now that many more have been exposed to the power of crowdsourcing and live maps? Stay tuned for the next round of Crowdsourcing vs. Putin.

Why Architecture and Calendars Are Trojan Horses for Repressive Regimes

The simple thought first occurred to me while visiting Serbia earlier this year. As I walked in front of the country’s parliament, I recalled Steve York’s documentary, “Bringing Down a Dictator.” In one particular scene, a large crowd assembles in front of the Serbian parliament chanting for the resignation of Slobodan Milosevic. Soon after, they storm the building and find thousands of election ballots rigged in the despot’s favor. I then thought of Tahrir Square and how more than a million protestors had assembled there to demand that Hosni Mubarak step down. There was one obvious place for protestors to assemble in Cairo during the recent revolts. The word Tahrir means “liberation” in Arabic. That’s what I call free advertising and framing par excellence.

These scenes play out over and over across the history of revolutions and popular resistance movements. In many ways, state architecture that is meant to project power and authority can just as easily become a magnet and mobilization mechanism for popular dissent; a hardware hack turned against its coders. A Trojan Horse of sorts in the computing sense of the word.

So not only was the hardware vulnerable to attack in Cairo, but the software—and indeed the name of the main variable, Tahrir—was also susceptible to “political hacking”. These factors help synchronize shared awareness and purpose in resistance movements. There’s not much that repressive regimes can do about massive hardware vulnerabilities. Yes, they can block off Tahrir for a certain period of time but the square won’t disappear. Besides, regimes require the hardware to project symbols of legitimacy and order. So these must stay but be secured by the army. The latter must also preempt any disorder. So more soldiers need to be deployed, especially around sensitive dates such as anniversaries of revolutions, massacres, independence movements, etc. These politically sensitive days need not be confined to local events either. They can include dates for international events in contemporary world history.

A colleague of mine recently returned from China where he was doing research for a really interesting book he’s writing on subverting authoritarian control. He relayed how the calendar in China is getting more crowded with sensitive dates. Each date requires the state to deploy at times considerable resources to preempt or quickly put down any unrest. He described how the vast majority of people assembled at a recent protest in Beijing were actually undercover police officers in plain clothes. This is not immediately obvious when watching the news on television. The undercover officers inadvertently make the turnout look far bigger than it actually is.

As authoritarian regimes increase their efforts to control public spaces, they may require more time and resources to do so–a classic civil resistance strategy. They may sometimes resort to absurd measures like in Belarus. According to a Polish colleague of mine, the regime there has gone so far as to outlaw “doing nothing” in public venues. Previously, activists would simply assemble in numbers in specific places but do nothing—just to prove a point. The regime’s attempt to crack down on “doing nothing” makes it look foolish and susceptible to political jokes, a potent weapon in civil resistance. More here on subversive strategies.

The importance of public spaces like Tahrir in Cairo is even more evident when you look at a city like Alexandria. According to my colleague Katherine Maher, one of the main challenges for activists in Alex has been the lack of a central place for mass gathering. In fact, the lack of such hardware means that the “activism software” needs to run differently: activists in Alex are looking to organize marches instead of mass sit-ins, for example. More here on civil resistance strategies and tactics used by Egyptian activists.

Know of other “hardware” hacks? I’d love to hear them. Please feel free to share your thoughts in the comments section below. Thank you!

On Synchrony, Technology and Revolutions: The Political Power of Synchronized Resistance

Synchronized action is a powerful form of resistance against repressive regimes. Even if the action itself is harmless, like walking, meditation or worship, the public synchrony of that action by a number of individuals can threaten an authoritarian state. To be sure, synchronized public action demonstrates independence, which may undermine state propaganda, reverse information cascades and thus challenge the shared perception that the regime is both in control and unchallenged.

This is especially true if the numbers participating in synchrony reaches a tipping point. As Karl Marx writes in Das Kapital, “Merely quantitative differences, beyond a certain point, pass into qualitative changes.” We call this “emergent behavior” or “phase transitions” in the field of complexity science. Take a simple example from the physical world: the heating of water. A one degree increase in temperature is a quantitative change. But keep adding one degree and you’ll soon reach the boiling point of water and surprise! A physical phase transition occurs: liquid turns into gas.
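To make the tipping-point intuition concrete, here is a minimal sketch of Granovetter’s classic threshold model of collective behavior (my own illustration in Python, not drawn from the texts cited above): a one-person change in the threshold distribution, a purely quantitative difference, flips the outcome from a full cascade to a lone protester.

```python
# Toy Granovetter-style threshold model of collective action (illustrative only).
# Each person joins a protest once at least `threshold` others are already protesting.

def cascade(thresholds):
    """Iterate until participation stabilizes; return the number who end up protesting."""
    joined = 0
    while True:
        new = sum(1 for t in thresholds if t <= joined)
        if new == joined:
            return joined
        joined = new

# Thresholds 0, 1, 2, ..., 99: the lone instigator tips person 1, who tips person 2,
# and so on until all 100 are in the street.
print(cascade(list(range(100))))              # 100

# Raise a single threshold from 1 to 2 and the chain breaks after the instigator.
print(cascade([0, 2] + list(range(2, 100))))  # 1
```

The same water is being heated in both runs; only the distribution of individual thresholds differs, yet the qualitative outcome changes entirely.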

In social systems, information creates friction and heat. Moreover, today’s information and communication technologies (ICTs) are perhaps the most revolutionary synchronizing tools for “creating heat” because of their scalability. Indeed, ICTs today can synchronize communities in ways that were unimaginable just a few short years ago. As one Egyptian activist proclaimed shortly before the fall of Mubarak, “We use Facebook to schedule our protests, Twitter to coordinate, and YouTube to tell the world.” The heat is already on.

Synchrony requires that individuals be connected in order to synchronize. Well guess what? ICTs are mass, real-time connection technologies. There is consequently little doubt in my mind that “the advent and power of connection technologies—tools that connect people to vast amounts of information and to one another—will make the twenty-first century all about surprises;” surprises that take the form of “social phase transitions” (Schmidt and Cohen 2011). Indeed, ICTs can dramatically increase the number of synchronized participants while sharply reducing the time it takes to reach the social boiling point. Some refer to this as “punctuated equilibria” or “reversed information cascades” in various academic literatures. Moreover, this can all happen significantly faster than ever before, and as argued in this previous blog post on digital activism, faster is indeed different.

Clay Shirky argues that “this basic hypothesis is an updated version of that outlined by Jürgen Habermas in his 1962 publication, The Structural Transformation of the Public Sphere: an Inquiry into a Category of Bourgeois Society. A group of people, so Habermas’s theory goes, who take on the tools of open expression becomes a public, and the presence of a synchronized public increasingly constrains undemocratic rulers while expanding the rights of that public […].” But to understand the inherent power of synchrony and then leverage it, we must first recognize that synchrony is a fundamental force of nature that goes well beyond social systems.

In his TED Talk from 2004, American mathematician Steven Strogatz argues that synchrony may be one of the most pervasive drivers in all of nature, extending from the subatomic scale to the farthest reaches of the cosmos. In many ways, this deep tendency towards spontaneous order is what pushes back against the second law of thermodynamics, otherwise known as entropy. 

Strogatz shares examples from nature and shows a beautiful ballet of hundreds of birds flocking in unison. He explains that this display of synchrony has to do with defense. “When you’re small and vulnerable […] it helps to swarm to avoid and/or confuse predators.” When a predator strikes, however, all bets are off, and everyone disperses—but only temporarily. “The law of attraction,” says Strogatz, brings them right back together in synchrony within seconds. “There’s this constant splitting and reforming,” grouping and dispersion—swarming—which has several advantages. If you’re in a swarm, the odds of getting caught are far lower. There are also many eyes to spot the danger.

What’s spectacular about these ballets is how quickly they phase from one shape to another, dispersing and regrouping almost instantaneously even across vast distances. Individual changes in altitude, speed and direction are communicated and acted on across half-a-kilometer within just seconds. The same is true of fireflies in Borneo that synchronize their blinking across large distances along the river banks. Thousands and thousands of fireflies somehow overcome the communication delay between the one firefly at one end of the bank and the other firefly at the furthest opposite end. How is this possible? The answer to this question may perhaps provide insights for social synchrony in the context of resistance against repressive regimes.

Strogatz and Duncan Watts eventually discovered the answer, which they published in their seminal paper entitled “Collective dynamics of small-world networks.” Published in the prestigious journal Nature,  the paper became the most highly cited article about networks for 10 years and the sixth most cited paper in all of physics. A small-world network is a type of network in which even though most nodes are not neighbors of one another, most can still be reached from other nodes by a small number of hops or steps. In the context of social systems, this type of network results in the “small world phenomenon of strangers being linked by a mutual acquaintance.”
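To illustrate what “a small number of hops” means in practice, here is a minimal sketch using the Watts-Strogatz generator in the networkx library (my own illustration; the node count and rewiring probabilities are arbitrary choices, not values from the paper):

```python
# Watts-Strogatz small-world graphs: start from a clustered ring lattice and
# rewire a small fraction of edges into random long-range "shortcuts".
import networkx as nx

n, k = 1000, 10            # 1,000 nodes, each initially linked to its 10 nearest neighbors

for p in (0.0, 0.01, 0.1): # probability of rewiring each edge
    G = nx.connected_watts_strogatz_graph(n, k, p, seed=42)
    L = nx.average_shortest_path_length(G)  # typical number of hops between two nodes
    C = nx.average_clustering(G)            # how "cliquish" local neighborhoods remain
    print(f"p={p:<5} avg path length={L:6.2f}  clustering={C:.3f}")

# Rewiring even 1% of the edges collapses the path length toward that of a random
# graph while clustering stays high: the small-world regime Watts and Strogatz describe.
```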

These types of networks often arise out of preferential attachment, an inherently social dynamic. Indeed, small world networks pervade social systems. So what does this mean for synchrony as applied to civil resistance? Are smart-mobs synonymous with synchronized mobs? Do ICTs increase the prevalence of small world networks in social systems—thus increasing robustness and co-synchrony between social networks? Will meshed-communication technologies and features like check-ins alter the topology of small world networks?

Examples of synchrony from nature clearly show that real-time communication and action across large distances don’t require mobile phones. Does that mean the same is possible in social systems? Is it possible to disseminate information instantaneously within a large crowd without using communication technologies? Is strategic synchrony possible in this sense? Can social networks engage in instantaneous dispersion and cohesion tactics to confuse the repressive regime and remain safe?

I recently spoke with a colleague who is one of the world’s leading experts on civil resistance, and was astonished when she mentioned (without my prompting) that many of the tactics around civil resistance have to do with synchronizing cohesion and dispersion. On a different note, some physicists argue that small world networks are more robust to perturbations than other network structures. Indeed, the small world structure may represent an evolutionary advantage.

But how are authoritarian networks structured? Are they too of the small world variety? If not, how do they compare in terms of robustness, flexibility and speed? In many ways, state repression is a form of synchrony itself—so is genocide. Synchrony is clearly not always a good thing. How is synchrony best interrupted or sabotaged? What kind of interference strategies are effective in this context?

Introduction to Digital Origins of Dictatorship and Democracy

Reading Philip Howard’s “Digital Origins of Dictatorship and Democracy” and Evgeny Morozov’s “Net Delusion” back-to-back over a 10-day period in January was quite a trip. The two authors couldn’t possibly be more different in terms of tone, methodology and research design. Howard’s approach is rigorous and balanced. He takes a data-driven, mixed-methods approach that ought to serve as a model for the empirical study of digital activism.

In contrast, Morozov’s approach frequently takes the form of personal attacks, snarky remarks and cheap rhetorical arguments. This regrettably drowns out the important and valid points he does make in some chapters. But what discredits Net Delusion the most lies not in what Morozov writes but in what he hides. To say the book is one-sided would be an understatement. But this has been a common feature of the author’s writings on digital activism, and one of the reasons  I took him to task a couple years ago with my blog posts on anecdote heaven. If you follow that back and forth, you’ll note it ends with personal attacks by Morozov mixed with evasive counter-arguments. For an intelligent and informed critique of Net Delusion, see my colleague Mary Joyce’s blog posts.

In this blog post, I summarize Howard’s introductory chapter. For a summary of his excellent prologue, please see my previous post here.

The introductory chapter to Digital Origins provides a critique of the datasets and methodologies used to study digital activism. Howard notes that the majority of empirical studies, “rely on a few data sources, chiefly the International Telecommunications Union, the World Bank, and the World Resources Institute. Indeed, these organizations often just duplicate each other’s poor quality data. Many researchers rely heavily on this data for their comparative or single-country case studies, rather than collecting original observations or combining data in interesting ways. The same data tables appear over and over again.”

I faced this challenge in my dissertation research. Collecting original data is often a major undertaking. Howard’s book is the culmination of 3-4 years of research supported by important grants and numerous research assistants. Alas, PhD students don’t always get this kind of support. The good news is that Howard and others are sharing their new datasets like the Global Digital Activism Dataset.

In terms of methods, there are limits in the existing literature. As Howard writes,

“Large-scale, quantitative, and cross-sectional studies must often collapse fundamentally different political systems—autocracies, democracies, emerging democracies, and crisis states—into a few categories or narrow indices. […] Area studies that focus on one or two countries get at the rich history of technology diffusion and political development, but rarely offer conclusions that can be useful in understanding some of the seemingly intractable political and security crises in other parts of the world.”

Howard thus takes a different approach, particularly in his quantitative analysis, and introduces fuzzy set logic:

“Fuzzy set logic offers general knowledge through the strategy of looking for shared causal conditions across multiple instances of the same outcome—sometimes called ‘selecting on the dependent variable.’ For large-N, quantitative, and variable oriented researchers, this strategy is unacceptable because neither the outcome nor the shared causal conditions vary across the cases. However, the strategy of selecting on the dependent variable is useful when researchers are interested in studying necessary conditions, and very useful when constructing a new theoretically defined population such as ‘Islamic democracy.’

“Perhaps most important, this strategy is most useful when developing theory grounded in the observed, real-world experience of democratization in the Muslim communities of the developing world, rather than developing theory by privileging null, hypothetical, and unobserved cases.”
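For readers unfamiliar with the method, here is a minimal sketch of the kind of necessity test that fuzzy-set analysis runs (my own toy example with made-up membership scores and labels, not Howard’s data):

```python
# Fuzzy-set necessity test (toy example). Each case gets membership scores in
# [0, 1] for a candidate condition (say, "high technology diffusion") and for
# the outcome ("improvement in democratic institutions"). The condition looks
# necessary when outcome membership rarely exceeds condition membership.

cases = {
    #             condition  outcome   (hypothetical scores)
    "Country A": (0.9, 0.8),
    "Country B": (0.7, 0.6),
    "Country C": (0.4, 0.3),
    "Country D": (0.8, 0.9),   # outcome slightly exceeds the condition here
}

def necessity_consistency(scores):
    """Standard consistency measure for necessity: sum(min(X, Y)) / sum(Y)."""
    numerator = sum(min(x, y) for x, y in scores.values())
    denominator = sum(y for _, y in scores.values())
    return numerator / denominator

print(f"necessity consistency = {necessity_consistency(cases):.2f}")  # ~0.96 here
```

Consistency scores close to 1.0 across the observed cases are what license a claim of the form “X is a necessary condition for Y,” which is the style of inference Howard describes above.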

Using original data and this new innovative statistical approach, Howard finds that “technology diffusion has had a crucial causal role in improvements in democratic institutions.”

“I find that technology diffusion has become, in combination with other factors, both a necessary and sufficient cause of democratic transition or entrenchment.”

“Protests and activist movements have led to successful democratic insurgencies, insurgencies that depended on ICTs for the timing and logistics of protest. Sometimes democratic transitions are the outcome, and sometimes the outcome is slight improvement in the behavior of authoritarian states. Clearly the internet and cell phones have not on their own caused a single democratic transition, but it is safe to conclude that today, no democratic transition is possible without information technologies.”

My next blog post on Howard’s book will summarize Chapter 1: Evolution and Revolution, Transition and Entrenchment.

Political Change in the Digital Age: The Prospect of Smart Mobs in Authoritarian States

The latest edition of the SAIS Review of International Affairs is focused on cyber threats and opportunities. My Stanford colleague Rob Munro and I contributed a piece on crowdsourcing SMS for crisis response. Colleagues at Harvard’s Berkman Center wrote this piece on political change in the digital age—specifically with respect to authoritarian and semi-authoritarian regimes. Their research overlaps considerably with my dissertation so what follows is a short summary of their article.

Bruce Etling, Robert Faris and John Palfrey basically argue that policymakers and scholars have been focusing too narrowly on the role of digital technology in providing unfiltered access to the Internet and independent sources of information. They argue that “more attention should be paid to the means of overcoming the difficulties of online organization in the face of authoritarian governments in an increasingly digital geopolitical environment.” The authors thus seek to distinguish between flow of information and social organization facilitated by digital tools.

“While information and organizing are inextricably linked—photographs and videos play an important and growing role in empowering and motivating social activists—it is helpful to consider them separately as the use of technology entails different opportunities and challenges for each.”

They therefore develop a simple analytical framework to describe the interaction between civil society, media and governments in different types of regimes.

They argue that to understand the role of digital tools on democratic processes, “we must better understand the impact of the use of these tools on the composition and role of civil society.” Etling, Faris and Palfrey therefore assess the influence of digital technologies on the formation and activities of civil society groups—and in particular mobs, movements and civil society organizations. See Figure 2 below.

The authors claim that “hierarchical organizations with strong networks—the mainstay of civil society in consolidated democracies—are not a viable option in authoritarian states.” No news there. They write that civil society organizations (CSOs) are therefore easy targets since their “offline activities are already highly regimented and watched by the state.”

The protests in Burma and Iran are characterized by a “grey area between a flash mob and social movement” and efforts at digital organizing in these cases have been largely ineffective, according to the authors. They do have hope for smart mobs, however, given their ability to emerge organically and take governments by surprise: “In a few cases, the ability of a mob to quickly overwhelm unprepared governments has been successful.” They cite the case of Estrada in the Philippines as well as Kyrgyzstan. The authors don’t elaborate on any of these anecdotes (see my rant on the use of anecdotes in the study of digital activism here).

As iRevolution readers will know, I’m not an advocate of spontaneous protests in the context of authoritarian states. I have argued time and time again that digital activists need more dedicated training in civil resistance and nonviolent action, which emphasizes planning and preparation. The Berkman authors write that success is “likely determined not by the given technology tool, but by the human skill and facility in using the networks that are being mobilized.” Likely? More like “definitely not determined by the technology.”

The authors also write that successful movements:

“… appear to combine the best of ‘classic’ organizing tactics with the improvisation, or “jazz” that is enabled by new Internet tools; for example, constantly updated mobile mapping tools […]. It is less clear how far online organizing and digital communities will be allowed to push states toward drastic political change and greater democratization, especially in states where offline restrictions to civic and political organization are severe. As scholars, we ought to focus our attention on the people involved and their competencies in using digitally-mediated tools to organize themselves and their fellow citizens, whether as flash mobs or through sustained social movements or organizations, rather than the flow of information as such.”

The Berkman scholars are mistaken in their reference to improvisation and jazz. As anyone interested in music will know, playing jazz—and acquiring the skills for jazz improv—takes years of training and hard work. It is therefore foolhardy to advocate for spontaneous mob action in repressive environments or to romanticize their power. The authors only dedicate one sentence to this concern: “Poorly organized mass actions are highly unpredictable and easily manipulated.”

In closing, I’d like to link this Berkman paper to the ongoing conversations around WikiLeaks. As the authors note, the best illustration of the threat that new information flows pose to authoritarian governments is their reaction to it.

Weighing the Scales: The Internet’s Effect on State-Society Relations

The Chair of my dissertation committee, Professor Dan Drezner, just published this piece in the Brown Journal of World Affairs that directly relates to my dissertation research. He presented an earlier version of this paper at a conference in 2005 which was instrumental in helping me frame and refine my dissertation question. I do disagree a bit with the paper’s approach, however.

Professor Drezner first reviews the usual evidence on whether the Internet empowers coercive regimes at the expense of resistance movements or vice versa. Not surprisingly, this perusal doesn’t point to a clear winner. Indeed, as is repeatedly stated in the academic discourse, “parsing out how ICT affects the tug-of-war between states and civil society activists is exceedingly difficult.”

Drezner therefore turns to a transaction costs metaphor for insight. He argues that “metaphorically, the problem is akin to the one economists faced when predicting how the communications revolution would affect the optimal size of the firm.” I’m not convinced this is an appropriate metaphor but let’s proceed and summarize his reasoning on firm size in any case.

Economists argue that the size of a firm is a function of transaction costs. “If these costs of market exchange exceed those of more hierarchical governance structures—i.e., firms—then hierarchy would be the optimal choice.” With the fall in communication costs, economists therefore predicted an associated decline in firm size. “There were lots of predictions about how the communications revolution would lead to an explosion in independent entrepreneurship.”

But Drezner argues that decreasing communication costs (a transaction cost) has not affected aggregate firm size: “Empirically, there has been minimal change.” Unfortunately, he doesn’t cite any literature to back this claim. Regardless, Drezner concludes that firm size has not significantly changed because “the information revolution has lowered the organizational costs of hierarchy as well” and even “increased the optimal size of the firm” in some sectors. “The implications of this [metaphor] for the internet’s effect on states and civil society should be apparent.”

The problem (even if the choice of metaphor were applicable) is that these implications provide minimal insight into the debate on liberation technologies: large organizations or institutions have the opportunity to scale thanks to the Internet; meaning that government monitoring becomes more efficient and sophisticated, making it “easier for the state to anticipate and regulate civic protests.” More specifically, “repressive regimes can monitor opposition websites, read Twitter feeds, and hack e-mails—and crack down on these services when necessary.” Yes, but this is already well known so I’m not sure what the transaction metaphor adds to the discourse.

That said, Drezner does recognize that the Internet could have a “pivotal effect” on state-society relations with respect to “authoritarian and semi-authoritarian states that wish to exploit the economic possibilities of the information society.” Unfortunately, he doesn’t really expand on this point beyond repeating the “Dictator’s Dilemma” argument. But he does address the potential relevance of “information cascades” for the study of digital activism in non-permissive environments.

“An informational cascade takes place when individuals acting in an environment of uncertainty strongly condition their choices on what others have done previously. More formally, an information cascade is a situation in which every actor, based on the observations of others, makes the same choice independent of his/her private information signal. Less formally, an information cascade demonstrates the power of peer pressure—many individuals will choose actions based on what they observe others doing.”

So if others are not protesting, you are unlikely to stick your neck out and start a protest yourself, particularly against a repressive state. But Drezner argues that information cascades can be reversed as a result of a shock to the system such as an election or natural disaster. These events can “trigger spontaneous acts of protest or a reverse in the cascade,” especially since “a little bit of public information can reverse a long-standing informational cascade that contributed to citizen quiescence.” In sum,  “even if people may have previously chosen one action, seemingly little information can induce the same people to choose the exact opposite action in response to a slight increase in information.”
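To see why cascades can be both sticky and fragile, here is a minimal sketch of the standard sequential-choice cascade model (my own illustration in Python; the parameters and the “protest” framing are hypothetical, not drawn from Drezner’s article):

```python
# Toy version of the Bikhchandani-Hirshleifer-Welch information-cascade model.
# Citizens decide in sequence whether to protest. Each has a noisy private signal,
# but once the history of "informative" actions leans two choices one way, everyone
# herds and later actions stop revealing anything new.

import random

def simulate(n, p_correct, protest_is_safe, shock_at=None, shock_value=0):
    diff = 0            # informative "protest" actions minus informative "stay home" actions
    actions = []
    for t in range(n):
        if shock_at is not None and t == shock_at:
            diff += shock_value          # public information arriving at time t
        if diff >= 2:                    # up-cascade: everyone protests
            actions.append(True)
        elif diff <= -2:                 # down-cascade: everyone stays home
            actions.append(False)
        else:                            # no cascade yet: follow own private signal
            signal = protest_is_safe if random.random() < p_correct else not protest_is_safe
            actions.append(signal)
            diff += 1 if signal else -1
        # Actions taken inside a cascade deliberately leave `diff` unchanged:
        # herd behavior carries no information.
    return actions

random.seed(3)
baseline = simulate(200, 0.6, True)
shocked  = simulate(200, 0.6, True, shock_at=100, shock_value=4)
print(sum(baseline), "of 200 citizens protest with no public shock")
print(sum(shocked),  "of 200 citizens protest when public information arrives at t=100")
```

The property the sketch reproduces is that the herd rests on only a couple of informative choices, so a modest burst of credible public information, an election scandal or a disaster for instance, can tip the count back and reverse the cascade, which is the reversal Drezner describes.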

This line of argument seems to cast aside what has been learned about civil disobedience. Drezner suggests that reverse information cascades can catalyze spontaneous protests. Perhaps, but are these “improvised” protests actually effective in achieving their stated aims? The empirical evidence from the literature on civil resistance suggests otherwise: extensive planning and strategizing is more likely to result in success than unplanned spontaneous protests. If I find out that it’s cooler in the frying pan than the fire, will I automatically jump into said pan? A little bit of additional information without prior planning on how to leverage that information into action can be dangerous and counterproductive.

For example:

“The spread of information technology increases the fragility of information cascades that sustain the appearance of authoritarian control. This effect creates windows of opportunity for civil society groups.”

Yes, but this means little if these groups are not adequately prepared to deliberately exploit weaknesses in authoritarian control and cash in on this window of opportunity.

“At moments when a critical mass of citizens recognizes their mutual dissatisfaction with their government, the ability of the state to repress can evaporate.”

Yes, but this rarely happens completely spontaneously. Undermining the pillars of power of a repressive state takes deliberate and calculated work with an appropriate mix of tactics and strategies to delegitimize the regime. There is a reason why civil resistance is often referred to as (nonviolent) guerrilla warfare. The latter is not random or haphazard. Guerrilla campaigns are carefully thought through and successful actions are meticulously planned.

Drezner argues that, “Extremists, criminals, terrorists, and hyper-nationalists have embraced the information society just as eagerly as classical liberals.” Yes, this is already well known but the author doesn’t make the connection to training and planning on the part of extremists. As Thomas Homer-Dixon notes in his book The Upside of Down: “Extremists are often organized in coherent and well-coordinated groups that have clear goals, distinct identities, and strong internal bonds that have grown around a shared radical ideology. As a result, they can mobilize resources and power effectively.” Successful terrorists do not spontaneously terrorize! Furthermore, they create information cascades as much as they react to them.

In conclusion, Drezner criticizes the State Department’s Civil Society 2.0 Initiative. State presumes that technologies will primarily help the “good guys” and  “assumes that the biggest impediment to the flowering of digital liberalism comes from the heavy hand of the state.” (He doesn’t say what the biggest impediment is, however). Drezner ends his piece with the following: “It is certainly possible that the initiative fails because of the coercive apparatus of a repressive government. It is equally likely, however, that the initiative succeeds—in empowering illiberal forces across the globe.” This is already well known. I’m not sure that one needs a transaction metaphor or to refer to the dictator’s dilemma, information cascades, spontaneous protests and extremist groups to reach this conclusion.

Is Ushahidi a Liberation Technology?

Professor Larry Diamond, one of my dissertation advisers, recently published a piece on “Liberation Technology” (PDF) in the Journal of Democracy in which he cites Ushahidi and FrontlineSMS amongst other tools. Is Ushahidi really a liberation technology?

Larry recently set up the Program on Liberation Technology at Stanford University together with colleagues Joshua Cohen and Terry Winograd to catalyze more rigorous, applied research on the role of technology in repressive environments—both in terms of liberation and repression. This explains why I’ll be joining the group as a Visiting Fellow this year. The program focuses on the core questions I’m exploring in my dissertation research and ties in technologies like Ushahidi, which I’m working on directly.

What is Liberation Technology? Larry defines this technology as,

“… any form of information and communication technology (ICT) that can expand political, social, and economic freedom. In the contemporary era, it means essentially the modern, interrelated forms of digital ICT—the computer, the Internet, the mobile phone, and countless innovative applications for them, including “new social media” such as Facebook and Twitter.”

As is perfectly well known, however, technology can also be used to repress. This should not be breaking news. Liberation Technology versus Digital Repression: my dissertation describes this competition as an arms race, a cyber game of cat and mouse. But the technology variable is not the most critical piece, as I argue in this recent Newsweek article:

“The technology variable doesn’t matter the most,” says Patrick Meier […] “It is the organizational structure that will matter the most. Rigid structures are unable to adapt as quickly to a rapidly changing environment as a decentralized system. Ultimately, it is a battle of organizational theory.”

As Larry writes,

“Democrats and autocrats now compete to master these technologies. Ultimately, however, not just technology but political organization and strategy and deep-rooted normative, social, and economic forces will determine who ‘wins’ the race.”

That is precisely the hypothesis I am testing in my dissertation research. As the Newsweek article put it,

“The only way to stay ahead in this cyberwar, though, is to play offense, not defense. ‘If it is a cat-and-mouse game,’ says Meier of Ushahidi, ‘by definition, the cat will adopt the mouse’s technology, and vice versa.’ His view is that activists will have to get better at adopting some of the same tactics states use. Just as authoritarian governments try to block Voice of America broadcasts, so protest movements could use newer technology to jam state propaganda on radio or TV.”

Larry rightly notes that,

“In the end, technology is merely a tool, open to both noble and nefarious purposes. Just as radio and TV could be vehicles of information pluralism and rational debate, so they could also be commandeered by totalitarian regimes for fanatical mobilization and total state control. Authoritarian states could commandeer digital ICT to a similar effect. Yet to the extent that innovative citizens can improve and better use these tools, they can bring authoritarianism down—as in several cases they have.”

A bold statement for sure. But as Larry recognizes, it is particularly challenging to disentangle political, social and technological factors. This is why more empirical research is needed in this space, where the existing work is still largely limited to qualitative case studies. We need to bring mixed-methods research to the study of digital activism in repressive environments. This is why I’m part of the Meta-Activism Project (MAP) and why I’m particularly excited to be collaborating on the development of a Global Digital Activism Dataset (GDADS).

Larry writes that Liberation Technology is also “Accountability Technology” in that “it provides efficient and powerful tools for transparency and monitoring.” This is where he describes the FrontlineSMS and Ushahidi platforms. In some respects, these tools have already served as liberation technologies. The question is: will innovative citizens improve these tools and use them effectively enough to bring down dictators? I’d love to know your thoughts.

Patrick Philippe Meier

Breaking News: Repressive States Use Technologies to Repress!

I kid you not: repressive regimes actually have the nerve to use technologies to repress! Who would’ve guessed?! Nobody could possibly have seen this one coming. I mean, this shocking development is completely unprecedented in the history of state repression. Goodness, how did these repressive regimes even come up with the idea?!

Yes, that was sarcasm. But I never cease to be amazed by the incredible hullabaloo generated by the media every time a new anecdote pops up about a repressive regime caught red-handed with digital technology. Just stunning. It’s as if world history started yesterday.

I hate to state what should be obvious, but repressive states also used technology to repress in 2009, and in 2008, 2007, 2006, 2005, 2004, 2003, 2002, 2001 … You get the point. Hint: tech-based repression didn’t start in 1984 either; try a little earlier. As Brafman and Beckstrom point out,

All phone calls were routed through Moscow [during the time of the Soviet Union]. Why? The Kremlin wanted to keep tabs on what you were talking about–whether plotting to overthrow the government or locating spare parts for your tractor. The Soviets weren’t the first, or the last, to keep central control of communication lines. Even the Roman empire, though spread around the world, maintained a highly centralized transportation system, giving rise to the expression ‘All roads lead to Rome’ (52).

Why the media continues to treat digital repression as a surprise is beyond me. Repressive states have used technologies to repress for hundreds of years. So will someone please tell me why repressive regimes wouldn’t use new technologies as well? Because they’re new? No, that’s probably not it. Wait, because they’re cheap? Or effective? Darn, I don’t know, what’s the answer? Is this a trick question?

As Evgeny Morozov notes,

There is, of course, nothing surprising about it: why wouldn’t governments be doing this? After all, there are many smart techies working for the governments as well – and sometimes they even believe in and like what they are doing.

But you still come across the typical “I told you so!” comment on Twitter, blogs, etc.: “I told you that repressive states would use technology to repress!” And so the anecdotes keep flying and the “oooh’s” and “aaah’s” keep coming. The media freaks out, everyone gets excited. And the next day is exactly the same, since the media thrives on repetitive soundbites, especially very catchy (preferably one-word) soundbites, which explains why I increasingly feel like I’m stuck in a digital Groundhog Day.

If I had more time, I’d write a blog post entitled “10 Easy Steps to Writing the Best Anecdote on Digital Repression Ever” along the lines of Evgeny’s fun post on “10 Easy Steps to Writing the Scariest Cyberwarfare Article Ever.” But my post would be a lot shorter:

1) Find an anecdote in the mainstream media;
2) Formulate a blockbuster title ending with an exclamation mark;
3) Preface your post with a note that no one but you saw this coming;
4) Quote at least one full paragraph on the anecdote from another source;
5) For extra credit, create your own new one-word soundbite;
6) Conclude with a few snarky lines about how this clearly refutes all the dumb hype on digital technologies.

Some applaud the media’s focus on digital repression. They are grateful to the media for countering the Utopian hype. Fair enough, but this refrain is quickly becoming an excuse to spew out more anecdotes instead of contributing solid analysis. Moreover, the media is largely responsible for promoting the techno-Utopian hype to begin with. This inevitably triggers an arms race of anecdotes, which only leads to mutually assured confusion. But don’t panic, we’ve always got our catchy one-word soundbites to clear things up!

So here’s a practical thought: why doesn’t someone aggregate and code all these anecdotes to analyze them and look for trends? I realize that’s a little harder than writing up daily blog posts on the latest anecdotes, so why not do this together? Let’s set up an open spreadsheet to keep track of digital repression event-data. Then, when we have six months or more of event-data for a particular country, let’s analyze this data so we can actually say something more informative about the dynamics of digital repression.
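To make this concrete, here is a minimal sketch (in Python, using pandas) of what the analysis step might look like once the anecdotes have been coded into a shared spreadsheet and exported as a CSV. The file name and columns (date, country, tactic, source URL) are illustrative placeholders for a coding scheme we would still have to agree on, not an existing dataset.

```python
# A minimal sketch, not a finished coding scheme: assumes each anecdote has
# been coded as one row in a shared spreadsheet exported to CSV, with
# hypothetical columns "date", "country", "tactic" (e.g. filtering, arrest,
# shutdown) and "source_url".
import pandas as pd

events = pd.read_csv("digital_repression_events.csv", parse_dates=["date"])

# Monthly counts of repression events per country and tactic.
monthly = (
    events
    .groupby([events["date"].dt.to_period("M"), "country", "tactic"])
    .size()
    .rename("event_count")
    .reset_index()
)

# A simple trend question: are a given country's counts rising, and which
# tactics account for the rise?
china = monthly[monthly["country"] == "China"]
print(china.pivot_table(index="date", columns="tactic",
                        values="event_count", fill_value=0))
```

Even something this simple would move the conversation from “here’s another anecdote” to “here is how the mix of tactics in country X has shifted over the past six months.”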

Come to think of it, Global Voices Advocacy, Herdict and the OpenNet Initiative are already doing a lot of this information collection, and very well. Still, it would be great if they could turn this information into event-data and expand beyond the Internet to include mobile phones and other digital technologies. Something along these lines, perhaps.

This won’t answer all our questions, but it would give us the underlying event-data to study digital repression at the tactical level over time. (Would asking for daily data updates be too much?)

The next step would be to do the same for “digital liberation”, i.e., capturing event-data on how/when/where civil society groups evade digital repression. Analyzing both datasets would allow us to get a grasp on the cat-and-mouse dynamics that may characterize the race between digital activists and repressive states. I think the analysis would show that states are more often than not reactive. But who knows. Such is life in data hell.
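For what it’s worth, here is one simple way the “states are reactive” hunch could be probed once both event streams exist: count repression and evasion events per day and check whether repression correlates most strongly with evasion a few days earlier. The file names and columns below are hypothetical placeholders, and a lead-lag correlation is only suggestive, not proof of reactivity.

```python
# A rough sketch of the lead-lag question, assuming two coded event streams
# for one country: repression events and evasion ("digital liberation")
# events, each with a hypothetical "date" column.
import pandas as pd

repression = pd.read_csv("repression_events.csv", parse_dates=["date"])
evasion = pd.read_csv("evasion_events.csv", parse_dates=["date"])

def daily_counts(df):
    # One row per event -> number of events per calendar day.
    return df.groupby(df["date"].dt.date).size()

r = daily_counts(repression)
e = daily_counts(evasion)

# Align both series on a common daily index, filling quiet days with zero.
idx = pd.date_range(min(r.index.min(), e.index.min()),
                    max(r.index.max(), e.index.max()), freq="D").date
r, e = r.reindex(idx, fill_value=0), e.reindex(idx, fill_value=0)

# If repression tracks evasion shifted a few days earlier, that is at least
# consistent with the state playing catch-up rather than leading.
for lag in range(0, 8):
    corr = r.corr(e.shift(lag))
    print(f"repression vs evasion {lag} day(s) earlier: r = {corr:.2f}")
```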

Patrick Philippe Meier