Peer Producing Human Rights

Molly Land at New York Law School has written an excellent paper on peer producing human rights, which will appear in the Alberta Law Review, 2009. This is one of the best pieces of research that I have come across on the topic. I highly recommend reading her article when published.

Molly considers Wikipedia, YouTube and Witness.org in her research but, somewhat surprisingly, does not reference Ushahidi. I therefore summarize her main points below and draw on the case study of Ushahidi, particularly Swift River, to compare and contrast her analysis with my own research and experience.

Introduction

Funding for human rights monitoring and advocacy is particularly limited, which is why “amateur involvement in human rights activities has the potential to have a significant impact on the field.” At the same time, Molly recognizes that peer producing human rights may “present as many problems as it solves.”

Human rights reporting is the most professionalized activity of human rights organizations. This professionalization exists “not because of an inherent desire to control the process, but rather as a practical response to the demands of reporting, namely, the need to ensure accuracy of the information contained in the report.” The question is whether peer-produced human rights reporting can achieve the same degree of accuracy without a comparable centralized hierarchy.

Accurate documentation of human rights abuses is very important for building up a reputation as a credible human rights organization. Accuracy is also important to counter challenges by repressive regimes that question the validity of certain human rights reports. Moreover, “inaccurate reporting risks injury not only to the organization’s credibility and influence but also to those on whose behalf the organization advocates.”

Control vs Participation

A successful model for peer producing human rights monitoring would represent an important leap forward in the human rights community. Such a model would enable us to process a lot more information in a timelier manner and would also “increase the extent to which ordinary individuals connect to human rights issues, thus fostering the ability of the movement to mobilize broad constituencies and influence public opinion in support of human rights.”

Increased participation is often associated with an increased risk of inaccuracy. In fact, “even the perception of unreliability can be enough to provide […] a basis for critiquing the information as invalid.” Clearly, ensuring the trustworthiness of information in any peer-reviewed project is a continuing challenge.

Wikipedia uses corrective editing as the primary mechanism to evaluate the accuracy of crowdsourced information. Molly argues that this may not work well in the human rights context because direct observation, interviews and interpretation are central to human rights research.

To this end, “if the researcher contributes this information to a collaboratively-edited report, other contributors will be unable to verify the statements because they do not have access to either the witness’s statement or the information that led the researcher to conclude it was reliable.” Even if they were able to verify statements, much of human rights reporting is interpretive, which means that even experienced human rights professionals disagree about interpretive conclusions.

Models for Peer Production

Molly presents three potential models to outline how human rights reporting and advocacy might be democratized. The first two models focus on secondary and primary information respectively, while the third proposes certification by local NGOs. Molly outlines the advantages and challenges that each model presents. Below is a summary with my critiques. I do not address the third model because, as Molly notes, it is not entirely participatory.

Model 1. This approach would limit peer-production to collecting, synthesizing and verifying secondary information. Examples include “portals or spin-offs of existing portals, such as Wikipedia,” which could “allow participants to write about human rights issues but require them to rely only on sources that are verifiable […].” Accuracy challenges could be handled in the same way that Wikipedia does; namely through a “combination of collaborative editing and policies; all versions of the page are saved and it is easy for editors who notice gaming or vandalism to revert to the earlier version.”

The two central limitations of this approach are that (1) the model would be limited to a subset of available information restricted to online or print media; and (2) even limiting the subset of information might be insufficient to ensure reliability. To this end, this model might be best used to complement, not substitute, existing fact-finding efforts.

Model 2. This approach would limit the peer-production of human rights reports to those with first-hand knowledge. While Molly doesn’t reference Ushahidi in her research, she does mention the possibility of using a website that would allow witnesses to report human rights abuses that they saw or experienced. Molly argues that this first-hand information on human rights violations could be particularly useful for human rights organizations that seek to “augment their capacity to collect primary information.”

This model still presents accuracy problems, however: “There would be no way to verify the information contributed and it would be easy for individuals to manipulate the system.” I don’t agree. The claim that “there would be no way to verify the information” is an exaggeration. There are multiple methods that could be employed to estimate the probability that contributed information is reliable, which is the motivation behind our Swift River project at Ushahidi, an effort to use crowdsourcing to filter human rights information.

Since Swift River deserves an entire blog post of its own, I won’t describe the project here. I’d simply note that the Ushahidi team recently spent two days brainstorming creative ways to verify crowdsourced information. Stay tuned for more on Swift River.

We can still address Molly’s concerns without reference to Ushahidi’s Swift River. She writes:

Individuals who wanted to spread false allegations about a particular government or group, or to falsely refute such allegations, might make multiple entries (which would therefore corroborate each other) regarding a specific incident. Once picked up by other sources, such allegations ‘may take on a life of their own.’ NGOs using such information may feel compelled to verify this information, thus undermining some of the advantages that might otherwise be provided by peer production.

Unlike Molly, I don’t see the challenge of crowdsourced human rights data as first and foremost a problem of accuracy, but rather one of volume. Accuracy, in many instances, is a function of how many data points exist in our dataset.

To be sure, more crowdsourced information can provide an ideal basis for triangulation and validation of peer-produced human rights reporting, particularly if we embrace multimedia in addition to text. In addition, more information allows us to use probability analysis to determine the potential reliability of incoming reports. This would not undermine the advantages of peer-production.
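To make that intuition concrete, here is a minimal sketch in Python of how corroboration across independent reports might be turned into a rough reliability score. Everything here is an assumption for illustration, the report fields, the distance and time thresholds, and the scoring curve; it is not Ushahidi or Swift River code.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

# Hypothetical report structure; field names are illustrative only.
@dataclass
class Report:
    source_id: str      # who submitted (e.g. a hashed phone number or account)
    lat: float
    lon: float
    timestamp: datetime
    has_media: bool     # photo or video attached

def corroboration_score(report: Report, others: List[Report],
                        km_radius: float = 5.0,
                        window: timedelta = timedelta(hours=6)) -> float:
    """Rough reliability estimate: the more independent sources report a
    similar event nearby and around the same time, the higher the score."""
    def near(a: Report, b: Report) -> bool:
        # Crude planar distance, ~111 km per degree; good enough for a sketch.
        km = (((a.lat - b.lat) * 111) ** 2 + ((a.lon - b.lon) * 111) ** 2) ** 0.5
        return km <= km_radius and abs(a.timestamp - b.timestamp) <= window

    # Count distinct sources, so one person filing many entries does not
    # corroborate themselves (the manipulation concern Molly raises).
    independent = {r.source_id for r in others
                   if r.source_id != report.source_id and near(report, r)}
    score = 1 - 0.5 ** len(independent)      # diminishing returns per new source
    if report.has_media:
        score = min(1.0, score + 0.1)        # multimedia nudges the score upward
    return round(score, 2)
```

The exact numbers matter far less than the principle: as the volume of reports grows, each new independent report makes the picture clearer rather than noisier.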

Of course, this method also faces some challenges since the success of triangulating crowdsourced human rights reports is dependent on volume. I’m not suggesting this is a perfect fix, but I do argue that this method will become increasingly tenable since we are only going to see more user-generated content, not less. For more on crowdsourcing and data validation, please see my previous posts here.

Molly is concerned that a website allowing peer-production based on primary information may “become nothing more than an opinion site.” However, a crowdsourcing platform like Ushahidi is not an efficient platform for interactive opinion sharing. Witnesses simply report what happened, when it took place and where. Unlike blogs, the platform does not provide a way for users to comment on individual reports.

Capacity Building

Molly does raise an excellent point vis-à-vis the second model, however. The challenges of accuracy and opinion competition might be resolved by “shifting the purpose for which the information is used from identifying violations to capacity building.” As we all know, “most policy makers and members of the political elite know the facts already; what they want to know is what they should do about them.”

To this end, “the purpose of reporting in the context of capacity building is not to establish what happened, but rather to collect information about particular problems and generate solutions. As a result, the information collected is more often in the form of opinion testimony from key informants rather than the kind of primary material that needs to be verified for accuracy.”

This means that the peer produced reporting does not “purport to represent a kind of verifiable ‘truth’ about the existence or non-existence of a particular set of facts,” so the issue of “accuracy is somewhat less acute.” Molly suggests that accuracy might be further improved by “requiring participants to register and identify themselves when they post information,” which would “help minimize the risk of manipulation of the system.” Moreover, this would allow participants to view each other’s contributions and enable a contributor to build a reputation for credible contributions.
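As a purely illustrative sketch of the registration-and-reputation idea (the class, the method names and the numbers below are my own assumptions, not any existing platform’s implementation), a contributor’s standing could simply drift toward or away from credibility as their reports are later corroborated or debunked:

```python
from collections import defaultdict

class ReputationLedger:
    """Illustrative contributor-reputation tracker for a registered-user system."""

    def __init__(self):
        # Everyone starts at 0.5, i.e. "unknown" rather than trusted or distrusted.
        self.scores = defaultdict(lambda: 0.5)

    def record_outcome(self, contributor_id: str, corroborated: bool,
                       learning_rate: float = 0.1) -> None:
        # Nudge the score toward 1.0 when a report is later corroborated,
        # and toward 0.0 when it is debunked.
        current = self.scores[contributor_id]
        target = 1.0 if corroborated else 0.0
        self.scores[contributor_id] = current + learning_rate * (target - current)

    def weight(self, contributor_id: str) -> float:
        """Use the contributor's track record to weight their next report."""
        return self.scores[contributor_id]
```

Combined with visible contribution histories, something along these lines would let a contributor build a reputation for credible reporting over time.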

However, Molly points out that these potential solutions don’t change the fact that only those with Internet access would be able to contribute human rights reports, which could “introduce significant bias considering that most victims and eyewitnesses of human rights violations are members of vulnerable populations with limited, if any, such access.” I agree with this general observation, but I’m surprised that Molly doesn’t reference the use of mobile phones (and other mobile technologies) as a way to collect testimony from individuals without access to the Internet or in inaccessible areas.

Finally, Molly is concerned that Model 2 by itself “lacks the deep participation that can help mobilize ordinary individuals to become involved in human rights advocacy.” This is increasingly problematic since “traditional ‘naming and shaming’ may, by itself, be increasingly less effective in its ability to achieve changes in state conduct regarding human rights.” So Molly rightly encourages the human rights community to “investigate ways to mobilize the public to become involved in human rights advocacy.”

In my opinion, peer produced advocacy faces the same challenges as traditional human rights advocacy. It is therefore important that the human rights community adopt a more tactical approach to human rights monitoring. At Ushahidi, for example, we’re working to add a “subscribe-to-alerts” feature, which will allow anyone to receive SMS alerts for specific locations.
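As a rough sketch of what such a subscription check might look like (the data model and the 10 km default radius are assumptions for illustration, not the actual Ushahidi feature):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AlertSubscription:
    phone_number: str   # where to send the SMS alert
    lat: float          # centre of the area the subscriber cares about
    lon: float
    radius_km: float = 10.0

def subscribers_to_notify(report_lat: float, report_lon: float,
                          subscriptions: List[AlertSubscription]) -> List[str]:
    """Return phone numbers subscribed to alerts near a newly mapped report."""
    notify = []
    for sub in subscriptions:
        # Same crude ~111 km-per-degree approximation as in the earlier sketch.
        km = (((report_lat - sub.lat) * 111) ** 2 +
              ((report_lon - sub.lon) * 111) ** 2) ** 0.5
        if km <= sub.radius_km:
            notify.append(sub.phone_number)
    return notify
```

The resulting list would then simply be handed to whatever SMS gateway a given deployment uses.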

P2P Human Rights

The point is to improve the situational awareness of those who find themselves at risk so they can get out of harm’s way and not become another human rights statistic. For more on tactical human rights, please see my previous blog post.

Human rights organizations that are engaged in intervening to prevent human rights violations would also benefit from subscribing to Ushahidi. More importantly, the average person on the street would have the option of intervening as well. I, for one, am optimistic about the possibility of P2P human rights protection.

Patrick Philippe Meier

9 responses to “Peer Producing Human Rights”

  1. Hey Patrick — Nice find. No chance of sharing Ms Land’s advance paper I guess? Makes it harder to craft a decent response without it.

  2. Thanks Patrick – forgot to check back …

  3. Pingback: Accurate Crowdsourcing for Human Rights « iRevolution

  4. Pingback: Searching for Models… Crowdsourcing and Human Rights « Dan Cull Weblog

  5. Great post. I am working on the application of social marketing and transmedia storytelling to conflict context. It touches similar points of developing a participatory culture through collective intelligence of what I call conflict consumers. Crowd sourcing and reporting are also vital links in this chain.

  6. I am newly appointed as Head of News at Amnesty and trying to rethink the way we deliver information (human rights news) to the supporter base and the general public. At the moment it’s very traditional press statements and a very one way flow of information and there is a huge emphasis on accuracy above speed and sensitivity about damaging the brand. I would be interested in ideas about how Ushahidi might work alongside our research, not to replace it but to complement it. Also in crowd sourcing but using an online community to filter out abuse/opinion etc. Also in ideas about aggregating human rights news and distributing it on mobiles.

  7. Hi Patrick – Thanks for this and all your blog posts. Really helpful in getting me up to speed on crowdmapping, etc.

    Can you comment on (or direct me to any info on) the issue of protecting the safety or anonymity of the people reporting the human rights violations in the types of systems mentioned above.

    It seems like verifying the identities and geo-coordinates of the reporting individuals is key to verifying their reports, and some verification is necessary to prevent the reporting system from being gamed by parties in a conflict. However what do you do once your system contains all this “sensitive” info that could endanger people’s lives if, for example, an armed group or repressive regime learns which individuals are “snitching” on them? If the data is anonymous, or anonymized after vetting, is it still useful as a body of evidence?

    I have read some of your other posts on verification/vetting, and I understand that massive, multi-sourced video or photo evidence is harder to fake and thus more verifiable. However many of the use cases I’m thinking of do not lend themselves to photo or video reporting but would depend on verbal/text/SMS reports by individuals.

    Thanks if you can offer any leads on this, as it is one issue I’m really trying to wrap my head around for possible applications of crowdmapping and SMS reporting in situations of human rights abuses, ceasefire monitoring, early warning, etc.
