Tag Archives: Impact

Proof: How Crowdsourced Election Monitoring Makes a Difference

My colleagues Catie Bailard & Steven Livingston have just published the results of their empirical study on the impact of citizen-based crowdsourced election monitoring. Readers of iRevolution may recall that my doctoral dissertation analyzed the use of crowdsourcing in repressive environments and specifically during contested elections. This explains my keen interest in the results of my colleagues’ new data-driven study, which suggests that crowdsourcing does have a measurable and positive impact on voter turnout.


Catie and Steven are “interested in digitally enabled collective action initiatives” spearheaded by “nonstate actors, especially in places where the state is incapable of meeting the expectations of democratic governance.” They are particularly interested in measuring the impact of said initiatives. “By leveraging the efficiencies found in small, incremental, digitally enabled contributions (an SMS text, phone call, email or tweet) to a public good (a more transparent election process), crowdsourced elections monitoring constitutes [an] important example of digitally-enabled collective action.” To be sure, “the successful deployment of a crowdsourced elections monitoring initiative can generate information about a specific political process—information that would otherwise be impossible to generate in nations and geographic spaces with limited organizational and administrative capacity.”

To this end, their new study tests for the effects of citizen-based crowdsourced election monitoring efforts on the 2011 Nigerian presidential elections. More specifically, they analyzed close to 30,000 citizen-generated reports of failures, abuses and successes which were publicly crowdsourced and mapped as part of the Reclaim Naija project. Controlling for a number of factors, Catie and Steven find that the number and nature of crowdsourced reports is “significantly correlated with increased voter turnout.”


What explains this correlation? The authors “do not argue that this increased turnout is a result of crowdsourced reports increasing citizens’ motivation or desire to vote.” They emphasize that their data does not speak to individual citizen motivations. Instead, Catie and Steven show that “crowdsourced reports provided operationally critical information about the functionality of the elections process to government officials. Specifically, crowdsourced information led to the reallocation of resources to specific polling stations (those found to be in some way defective by information provided by crowdsourced reports) in preparation for the presidential elections.”

(As an aside, this finding is also relevant for crowdsourced crisis mapping efforts in response to natural disasters. In these situations, citizen-generated disaster reports can—and in some cases do—provide humanitarian organizations with operationally critical information on disaster damage and resulting needs).

In sum, “the electoral deficiencies revealed by crowdsourced reports […] provided actionable information to officials that enabled them to reallocate election resources in preparation for the presidential election […]. This strengthened the functionality of those polling stations, thereby increasing the number of votes that could be successfully cast and counted–an argument that is supported by both quantitative and qualitative data brought to bear in this analysis.” Another important finding is that the resulting “higher turnout in the presidential election was of particular benefit to the incumbent candidate.” As Catie and Steven rightly note, “this has important implications for how various actors may choose to utilize the information generated by new [technologies].”

In conclusion, the authors argue that “digital technologies fundamentally change information environments and, by doing so, alter the opportunities and constraints that the political actors face.” This new study is an important contribution to the literature and should be required reading for anyone interested in digitally-enabled, crowdsourced collective action. Of course, the analysis focuses on “just” one case study, which means that the effects identified in Nigeria may not occur in other crowdsourced, election monitoring efforts. But that’s another reason why this study is important—it will no doubt catalyze future research to determine just how generalizable these initial findings are.


See also:

  • Traditional Election Monitoring Versus Crowdsourced Monitoring: Which Has More Impact? [link]
  • Artificial Intelligence for Monitoring Elections (AIME) [link]
  • Automatically Classifying Crowdsourced Election Reports [link]
  • Evolution in Live Mapping: The Egyptian Elections [link]

Could Lonely Planet Render World Bank Projects More Transparent?

That was the unexpected question that my World Bank colleague Johannes Kiess asked me the other day. I was immediately intrigued. So I did some preliminary research and offered to write up a blog post on the idea to solicit some early feedback. According to recent statistics, international tourist arrivals numbered over 1 billion in 2012 alone. Of this population, the demographic that Johannes is interested in comprises those intrepid and socially-conscious backpackers who travel beyond the capitals of developing countries. Perhaps the time is ripe for a new form of tourism: Tourism for Social Good.


There may be a real opportunity to engage a large crowd because travelers—and in particular the backpacker type—are smartphone savvy, have time on their hands, want to do something meaningful, are eager to get off the beaten track and explore new spaces where others do not typically trek. Johannes believes this approach could be used to map critical social infrastructure and/or to monitor development projects. Consider a simple smartphone app, perhaps integrated with existing travel guide apps or Tripadvisor. The app would ask travelers to record the quality of the roads they take (with the GPS of their smartphone) and provide feedback on the condition, e.g.,  bumpy, even, etc., every 50 miles or so.

They could be asked to find the nearest hospital and take a geotagged picture—a scavenger hunt for development (as Johannes calls it); Geocaching for Good? Note that governments often do not know exactly where schools, hospitals and roads are located. The app could automatically alert travelers of a nearby development project or road financed by the World Bank or other international donor. Travelers could be prompted to take (automatically geo-tagged) pictures that would then be forwarded to development organizations for subsequent visual analysis (which could easily be carried out using microtasking). Perhaps a very simple, 30-second, multiple-choice survey could even be presented to travelers who pass by certain donor-funded development projects. For quality control purposes, these pictures and surveys could easily be triangulated. Simple gamification features could also be added to the app; travelers could gain points for social good tourism—collect 100 points and get your next Lonely Planet guide for free? Perhaps if you’re the first person to record a road within the app, then it could be named after you (of course with a notation of the official name). Even Photosynth could be used to create panoramas of visual evidence.

The obvious advantage of using travelers over the now en vogue stakeholder monitoring approach is that said backpackers are already traveling there anyway and have their phones on them to begin with. Plus, they’d be independent third parties and would not need to be trained. This obviously doesn’t mean that the stakeholder approach is not useful; the travelers strategy would simply be complementary. Furthermore, this tourism strategy comes with several key challenges, such as the safety of backpackers who choose to take on this task. But appropriate legal disclaimers could be put in place, so this challenge seems surmountable. In any event, Johannes, his colleagues at the World Bank and I hope to explore this idea of Tourism for Social Good further in the coming months.

In the meantime, we would be very grateful for feedback. What might we be overlooking? Would you use such an app if it were available? Where can we find reliable statistics on top backpacker destinations and flows?


See also: 

  • What United Airlines can Teach the World Bank about Mobile Accountability [Link]

Tweeting is Believing? Analyzing Perceptions of Credibility on Twitter

What factors influence whether or not a tweet is perceived as credible? According to this recent study, users have “difficulty discerning truthfulness based on content alone, with message topic, user name, and user image all impacting judgments of tweets and authors to varying degrees regardless of the actual truthfulness of the item.”

For example, “Features associated with low credibility perceptions were the use of non-standard grammar and punctuation, not replacing the default account image, or using a cartoon or avatar as an account image. Following a large number of users was also associated with lower author credibility, especially when unbalanced in comparison to follower count […].” As for features enhancing a tweet’s credibility, these included “author influence (as measured by follower, retweet, and mention counts), topical expertise (as established through a Twitter homepage bio, history of on-topic tweeting, pages outside of Twitter, or having a location relevant to the topic of the tweet), and reputation (whether an author is someone a user follows, has heard of, or who has an official Twitter account verification seal). Content related features viewed as credibility-enhancing were containing a URL leading to a high-quality site, and the existence of other tweets conveying similar information.”

In general, users’ ability to “judge credibility in practice is largely limited to those features visible at-a-glance in current UIs (user picture, user name, and tweet content). Conversely, features that often are obscured in the user interface, such as the bio of a user, receive little attention despite their ability to impact credibility judgments.” The table below compares a feature’s perceived credibility impact with the attention actually allotted to assessing that feature.

“Message topic influenced perceptions of tweet credibility, with science tweets receiving a higher mean tweet credibility rating than those about either politics  or entertainment. Message topic had no statistically significant impact on perceptions of author credibility.” In terms of usernames, “Authors with topical names were considered more credible than those with traditional user names, who were in turn considered more credible than those with internet name styles.” In a follow up experiment, the study analyzed perceptions of credibility vis-a-vis a user’s image, i.e., the profile picture associated with a given Twitter account. “Use of the default Twitter icon significantly lowers ratings of content and marginally lowers ratings of authors […]” in comparison to generic, topical, female and male images.

Obviously, “many of these metrics can be faked to varying extents. Selecting a topical username is trivial for a spam account. Manufacturing a high follower to following ratio or a high number of retweets is more difficult but not impossible. User interface changes that highlight harder to fake factors, such as showing any available relationship between a user’s network and the content in question, should help.” Overall, these results “indicate a discrepancy between features people rate as relevant to determining credibility and those that mainstream social search engines make available.” The authors of the study conclude by suggesting changes in interface design that will enhance a user’s ability to make credibility judgements.

“Firstly, author credentials should be accessible at a glance, since these add value and users rarely take the time to click through to them. Ideally this will include metrics that convey consistency (number of tweets on topic) and legitimization by other users (number of mentions or retweets), as well as details from the author’s Twitter page (bio, location, follower/following counts). Second, for content assessment, metrics on number of retweets or number of times a link has been shared, along with who is retweeting and sharing, will provide consumers with context for assessing credibility. […] seeing clusters of tweets that conveyed similar messages was reassuring to users; displaying such similar clusters runs counter to the current tendency for search engines to strive for high recall by showing a diverse array of retrieved items rather than many similar ones–exploring how to resolve this tension is an interesting area for future work.”

In sum, the above findings and recommendations explain why platforms such as Rapportive, Seriously Rapid Source Review (SRSR) and CrisisTracker add so much value to the process of assessing the credibility of tweets in near real-time. For related research, see: Predicting the Credibility of Disaster Tweets Automatically and Automatically Ranking the Credibility of Tweets During Major Events.

Traditional vs. Crowdsourced Election Monitoring: Which Has More Impact?

Max Grömping makes a significant contribution to the theory and discourse of crowdsourced election monitoring in his excellent study: “Many Eyes of Any Kind? Comparing Traditional and Crowdsourced Monitoring and their Contribution to Democracy” (PDF). This 25-page study is definitely a must-read for anyone interested in this topic. That said, Max sets up a false argument when he writes: “It is believed that this new methodology almost magically improves the quality of elections […].” Perhaps tellingly, he does not reveal who exactly believes in this false magic. Nor does he cite who subscribes to the view that “[…] crowdsourced citizen reporting is expected to have significant added value for election observation—and by extension for democracy.”

My doctoral dissertation focused on the topic of crowdsourced election observation in countries under repressive rule. At no point in my research or during interviews with activists did I come across this kind of superficial mindset or opinion. In fact, my comparative analysis of crowdsourced election observation showed that the impact of these initiatives was at best minimal vis-a-vis electoral accountability—particularly in the Sudan. That said, my conclusions do align with Max’s principal findings: “the added value of crowdsourcing lies mainly in the strengthening of civil society via a widened public sphere and the accumulation of social capital with less clear effects on vertical and horizontal accountability.”

This is huge! Traditional monitoring campaigns don’t strengthen civil society or the public sphere. Traditional monitoring teams are typically composed of international observers and thus do not build social capital domestically. At times, traditional election monitoring programs may even lead to more violence, as this recent study revealed. But the point is not to polarize the debate. This is not an either/or argument but rather a both/and issue. Traditional and crowdsourced election observation efforts can absolutely complement each other precisely because they each have a different comparative advantage. Max concurs: “If the crowdsourced project is integrated with traditional monitoring from the very beginning and thus serves as an additional component within the established methodology of an Election Monitoring Organization, the effect on incentive structures of political parties and governments should be amplified. It would then include the best of both worlds: timeliness, visualization and wisdom of the crowd as well as a vetted methodology and legitimacy.”

Recall Jürgen Habermas and his treatise that “those who take on the tools of open expression become a public, and the presence of a synchronized public increasingly constrains undemocratic rulers while expanding the right of that public.” Why is this important? Because crowdsourced election observation projects can potentially bolster this public sphere and create local ownership. These efforts can also help synchronize shared awareness, an important catalyzing factor of social movements, according to Habermas. Furthermore, my colleague Phil Howard has convincingly demonstrated that a large active online civil society is a key causal factor vis-a-vis political transitions towards more democratic rule. This is key because the use of crowdsourcing and crowd-mapping technologies often requires some technical training, which can expand the online civil society that Phil describes and render that society more active (as occurred in Egypt during the 2010 Parliamentary Elections—see dissertation).

The problem? There is very little empirical research on crowdsourced election observation projects, let alone assessments of their impact. Then again, these efforts at crowdsourcing are only a few years old, and many doers in this space are still learning how to be more effective through trial and error. Incidentally, it is worth noting that there has also been very little empirical analysis on the impact of traditional monitoring efforts: “Further quantitative testing of the outlined mechanisms is definitely necessary to establish a convincing argument that election monitoring has positive effects on democracy.”

In the second half of his important study, Max does an excellent job articulating the advantages and disadvantages of crowdsourced election observation. For example, he observes that many crowdsourced initiatives appear to be spontaneous rather than planned. Therein lies part of the problem. As demonstrated in my dissertation, spontaneous crowdsourced election observation projects are highly unlikely to strengthen civil society, let alone build any kind of social capital. Furthermore, in order to solicit the maximum number of citizen-generated election reports, considerable upfront effort is needed on election awareness raising and education, in addition to partnership outreach and a highly effective media strategy.

All of this requires deliberate, calculated planning and preparation (key to an effective civil society), which explains why Egyptian activists were relatively more successful in their crowdsourced election observation efforts compared to their counterparts in the Sudan (see dissertation). This is why I’m particularly skeptical of Max’s language on the “spontaneous mechanism of protection against electoral fraud or other abuses.” That said, he does emphasize that “all this is of course contingent on citizens being informed about the project and also the project’s relevance in the eyes of the media.”

I don’t think that being informed is enough, however. An effective campaign not only seeks to inform but to catalyze behavior change, no small task. Still, Max is right to point out that a crowdsourced election observation project can “encourage citizens to actively engage with this information, to either dispute it, confirm it, or at least register its existence.” To this end, recall that political change is a two-step process, with the second (social) step being where political opinions are formed (Katz and Lazarsfeld 1955). “This is the step in which the Internet in general, and social media in particular, can make a difference” (Shirky 2010). In sum, Max argues that “the public sphere widens because this engagement, which takes place in the context of the local all over the country, is now taken to a wider audience by the means of mapping and real-time reporting.” And so, “even if crowdsourced reports are not acted upon, the very engagement of citizens in the endeavor to directly make their voices heard and hold their leaders accountable widens the public sphere considerably.”

Crowdsourcing efforts are fraught with important and very real challenges, as is already well known: the reliability of crowdsourced information, the risk of hate speech spreading via uncontrolled reports, limited evidence of impact, and concerns over the security and privacy of citizen reporters, among others. That said, it is important to note that this “field” is evolving and many in this space are actively looking for solutions to these challenges. During the 2010 Parliamentary Elections in Egypt, the U-Shahid project was able to verify over 90% of the crowdsourced reports. The “field” of information forensics is becoming more sophisticated, and variants of crowdsourcing such as bounded crowdsourcing and crowdseeding are not only being proposed but actually implemented.

The concern over unconfirmed reports going viral has little to do with crowdsourcing. Moreover, the vast majority of crowdsourced election observation initiatives I have studied moderate all content before publication. Concerns over security and privacy are issues not limited to crowdsourced election observation and speak to a broader challenge. There are already several key initiatives underway in the humanitarian and crisis mapping community to address these important challenges. And lest we forget, there are few empirical studies that demonstrate the impact of traditional monitoring efforts in the first place.

In conclusion, traditional monitors are sometimes barred from observing an election. In the past, there have been few to no alternatives in this predicament; today, crowdsourced efforts are sure to spring up. Furthermore, in the event that traditional monitors conclude that an election was stolen, there is little they can do to catalyze a local social movement to place pressure on the thieves. This is where crowdsourced election observation efforts could make an important contribution. To quote Max: “instead of being fearful of the ‘uncontrollable crowd’ and criticizing the drawbacks of crowdsourcing, […] governments would be well-advised to embrace new social media. Citizens […] will use new technologies and new channels for information-sharing anyway, whether endorsed by their governments or not. So, governments might as well engage with ICTs and crowdsourcing proactively.”

Big thanks to Max for this very valuable contribution to the discourse and to my colleague Tiago Peixoto for flagging this important study.

Disaster Response, Self-Organization and Resilience: Shocking Insights from the Haiti Humanitarian Assistance Evaluation

Tulane University and the State University of Haiti just released a rather damning evaluation of the humanitarian response to the 2010 earthquake that struck Haiti on January 12th. The comprehensive assessment, which takes a participatory approach and applies a novel resilience framework, finds that despite several billion dollars in “aid”, humanitarian assistance did not make a detectable contribution to the resilience of the Haitian population and in some cases increased certain communities’ vulnerability and even caused harm. Welcome to supply-side humanitarian assistance directed by external actors.

“All we need is information. Why can’t we get information?” A quote taken from one of many focus groups conducted by the evaluators. “There was little to no information exchange between the international community tasked with humanitarian response and the Haitian NGOs, civil society or affected persons / communities themselves.” Information is critical for effective humanitarian assistance, which should include two objectives: “preventing excess mortality and human suffering in the immediate, and in the longer term, improving the community’s ability to respond to potential future shocks.” This longer term objective thus focuses on resilience, which the evaluation team defines as follows:

“Resilience is the capacity of the affected community to self-organize, learn from and vigorously recover from adverse situations stronger than it was before.”

This link between resilience and capacity for self-organization is truly profound and incredibly important. To be sure, the evaluation reveals that “the humanitarian response frequently undermined the capacity of Haitian individuals and organizations.” This completely violates the Hippocratic Oath of Do No Harm. The evaluators thus “promote the attainment of self-sufficiency, rather than the ongoing dependency on standard humanitarian assistance.” Indeed, “focus groups indicated that solutions to help people help themselves were desired.”

I find it particularly telling that many aid organizations interviewed for this assessment were reluctant to assist the evaluators in fully capturing and analyzing resource flows, which are critical for impact evaluation. “The lack of transparency in program dispersal of resources was a major constraint in our research of effective program evaluation.” To this end, the evaluation team argue that “by strengthening Haitian institutions’ ability to monitor and evaluate, Haitians will more easily be able to track and monitor international efforts.”

I completely disagree with this remedy. The institutions are part of the problem, and besides, institution-building takes years if not decades. To assume there is even political will and the resources for such efforts is at best misguided. If resilience is about strengthening the capacity of affected communities to self-organize, then I would focus on just that, applying existing technologies and processes that both catalyze and facilitate demand-side, people-centered self-organization. My previous blog post on “Technology and Building Resilient Societies to Mitigate the Impact of Disasters” elaborates on this point.

In sum, “resilience is the critical link between disaster and development; monitoring it will ensure that relief efforts are supporting, and not eroding, household and community capabilities.” This explains why crowdsourcing and data mining efforts like those of Ushahidi, HealthMap and UN Global Pulse are important for disaster response, self-organization and resilience.