Molly Land at New York Law School has written an excellent paper on peer producing human rights, which will appear in the Alberta Law Review, 2009. This is one of the best pieces of research that I have come across on the topic. I highly recommend reading her article when published.
Molly considers Wikipedia, YouTube and Witness.org in her research but, somewhat surprisingly, does not reference Ushahidi. I therefore summarize her main points below and draw on the case study of Ushahidi, particularly Swift River, to compare and contrast her analysis with my own research and experience.
Introduction
Funding for human rights monitoring and advocacy is particularly limited, which is why “amateur involvement in human rights activities has the potential to have a significant impact on the field.” At the same time, Molly recognizes that peer producing human rights may “present as many problems as it solves.”
Human rights reporting is the most professionalized activity of human rights organizations. This professionalization exists “not because of an inherent desire to control the process, but rather as a practical response to the demands of reporting, namely, the need to ensure accuracy of the information contained in the report.” The question is whether peer-produced human rights reporting can achieve the same degree of accuracy without a comparable centralized hierarchy.
Accurate documentation of human rights abuses is essential for building a reputation as a credible human rights organization. Accuracy is also important to counter challenges by repressive regimes that question the validity of certain human rights reports. Moreover, “inaccurate reporting risks injury not only to the organization’s credibility and influence but also to those on whose behalf the organization advocates.”
Control vs Participation
A successful model for peer producing human rights monitoring would represent an important leap forward for the human rights community. Such a model would enable us to process far more information in a more timely manner and would also “increase the extent to which ordinary individuals connect to human rights issues, thus fostering the ability of the movement to mobilize broad constituencies and influence public opinion in support of human rights.”
Increased participation is often associated with an increased risk of inaccuracy. In fact, “even the perception of unreliability can be enough to provide […] a basis for critiquing the information as invalid.” Clearly, ensuring the trustworthiness of information in any peer-produced project is a continuing challenge.
Wikipedia uses corrective editing as the primary mechanism to evaluate the accuracy of crowdsourced information. Molly argues that this may not work well in the human rights context because direct observation, interviews and interpretation are central to human rights research.
To this end, “if the researcher contributes this information to a collaboratively-edited report, other contributors will be unable to verify the statements because they do not have access to either the witness’s statement or the information that led the researcher to conclude it was reliable.” Even if they were able to verify statements, much of human rights reporting is interpretive, which means that even experienced human rights professionals disagree about interpretive conclusions.
Models for Peer Production
Molly presents three potential models to outline how human rights reporting and advocacy might be democratized. The first two models focus on secondary and primary information respectively, while the third proposes certification by local NGOs. Molly outlines the advantages and challenges that each model presents. Below is a summary with my critiques. I do not address the third model because, as Molly notes, it is not entirely participatory.
Model 1. This approach would limit peer production to collecting, synthesizing and verifying secondary information. Examples include “portals or spin-offs of existing portals, such as Wikipedia,” which could “allow participants to write about human rights issues but require them to rely only on sources that are verifiable […].” Accuracy challenges could be handled the same way Wikipedia handles them, namely through a “combination of collaborative editing and policies; all versions of the page are saved and it is easy for editors who notice gaming or vandalism to revert to the earlier version.”
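To make the revert mechanism concrete, here is a minimal sketch, in Python, of the version-history idea behind Wikipedia-style corrective editing. The class and method names are my own illustration, not Wikipedia’s actual implementation: every edit is appended to a saved history, and a revert is simply a new revision that restores an earlier text.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Revision:
    """One saved version of a collaboratively edited page."""
    editor: str
    text: str
    timestamp: datetime = field(default_factory=datetime.utcnow)

@dataclass
class Page:
    """A page that keeps every revision so vandalism can be undone."""
    title: str
    history: List[Revision] = field(default_factory=list)

    def edit(self, editor: str, text: str) -> None:
        self.history.append(Revision(editor, text))

    def current(self) -> str:
        return self.history[-1].text if self.history else ""

    def revert_to(self, index: int, editor: str) -> None:
        # A revert is itself recorded as a new revision, so the audit
        # trail of who changed what is never lost.
        self.edit(editor, self.history[index].text)
```

An editor who notices vandalism in the latest revision can call page.revert_to(0, "watchful_editor") to restore the original text while keeping the vandal’s edit on record.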
The two central limitations of this approach are that (1) the model would be limited to a subset of available information restricted to online or print media; and (2) even limiting the subset of information might be insufficient to ensure reliability. To this end, this model might be best used to complement, not substitute, existing fact-finding efforts.
Model 2. This approach would limit the peer production of human rights reports to those with first-hand knowledge. While Molly doesn’t reference Ushahidi in her research, she does mention the possibility of using a website that would allow witnesses to report human rights abuses they saw or experienced. Molly argues that this first-hand information on human rights violations could be particularly useful for human rights organizations that seek to “augment their capacity to collect primary information.”
This model still presents accuracy problems, however: “There would be no way to verify the information contributed and it would be easy for individuals to manipulate the system.” I don’t agree. The claim that “there would be no way to verify the information” is an exaggeration. There are multiple methods that could be employed to estimate the probability that contributed information is reliable. This is the motivation behind our Swift River project at Ushahidi, which seeks to use crowdsourcing to filter human rights information.
Since Swift River deserves an entire blog post to itself, I won’t describe the project here. I will simply mention that the Ushahidi team recently spent two days brainstorming creative ways to verify crowdsourced information. Stay tuned for more on Swift River.
We can still address Molly’s concerns without reference to Ushahidi’s Swift River. She writes:

“Individuals who wanted to spread false allegations about a particular government or group, or to falsely refute such allegations, might make multiple entries (which would therefore corroborate each other) regarding a specific incident. Once picked up by other sources, such allegations ‘may take on a life of their own.’ NGOs using such information may feel compelled to verify this information, thus undermining some of the advantages that might otherwise be provided by peer production.”
Unlike Molly, I don’t see the challenge of crowdsourced human rights data as first and foremost a problem of accuracy but rather one of volume. Accuracy, in many instances, is a function of how many data points exist in our dataset.
To be sure, more crowdsourced information can provide an ideal basis for triangulation and validation of peer-produced human rights reporting, particularly if we embrace multimedia in addition to text alone. In addition, more information allows us to use probability analysis to determine the likely reliability of incoming reports. This would not undermine the advantages of peer production.
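As a minimal sketch of what such triangulation might look like, consider scoring each report by how many independent sources corroborate it in space and time. This is only my illustration of the general idea, not Swift River’s actual design; the field names, thresholds and scoring formula below are all assumptions.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt
from typing import List

@dataclass
class Report:
    source_id: str  # who submitted it (hashed phone number, account, etc.)
    lat: float
    lon: float
    hours: float    # time of the incident, in hours since a fixed epoch

def km_apart(a: Report, b: Report) -> float:
    """Haversine distance between two report locations, in kilometers."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def corroboration_score(report: Report, others: List[Report],
                        max_km: float = 5.0, max_hours: float = 12.0) -> float:
    """Score a report by how many independent sources corroborate it.

    Counting distinct sources rather than raw entries blunts the attack
    Molly describes, where one actor files multiple mutually
    'corroborating' entries about the same incident.
    """
    sources = {o.source_id for o in others
               if o.source_id != report.source_id
               and km_apart(report, o) <= max_km
               and abs(o.hours - report.hours) <= max_hours}
    # Approach 1.0 asymptotically as independent corroboration accumulates.
    return 1.0 - 0.5 ** len(sources)
```

The thresholds (5 km, 12 hours) are placeholders; in practice they would depend on the type of incident being reported, and the score would be one input among several rather than a final verdict.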
Of course, this method also faces some challenges since the success of triangulating crowdsourced human rights reports is dependent on volume. I’m not suggesting this is a perfect fix, but I do argue that this method will become increasingly tenable since we are only going to see more user-generated content, not less. For more on crowdsourcing and data validation, please see my previous posts here.
Molly is concerned that a website allowing peer production based on primary information may “become nothing more than an opinion site.” However, a crowdsourcing platform like Ushahidi is not an efficient platform for interactive opinion sharing. Witnesses simply report what happened, when it took place and where. Unlike blogs, the platform does not provide a way for users to comment on individual reports.
Capacity Building
Molly does raise an excellent point vis-à-vis the second model, however. The challenges of accuracy and opinion competition might be resolved by “shifting the purpose for which the information is used from identifying violations to capacity building.” As we all know, “most policy makers and members of the political elite know the facts already; what they want to know is what they should do about them.”
To this end, “the purpose of reporting in the context of capacity building is not to establish what happened, but rather to collect information about particular problems and generate solutions. As a result, the information collected is more often in the form of opinion testimony from key informants rather than the kind of primary material that needs to be verified for accuracy.”
This means that peer-produced reporting does not “purport to represent a kind of verifiable ‘truth’ about the existence or non-existence of a particular set of facts,” so the issue of “accuracy is somewhat less acute.” Molly suggests that accuracy might be further improved by “requiring participants to register and identify themselves when they post information,” which would “help minimize the risk of manipulation of the system.” Moreover, this would allow participants to view each other’s contributions and enable a contributor to build a reputation for credible contributions.
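One simple way to operationalize such a reputation, sketched below under my own assumptions rather than anything Molly proposes, is to track how often a registered contributor’s reports are later confirmed or refuted and convert that record into a credibility weight.

```python
from dataclasses import dataclass

@dataclass
class Contributor:
    """A registered participant whose track record weights future reports."""
    name: str
    confirmed: int = 0  # reports later verified through other channels
    refuted: int = 0    # reports later shown to be false

    def credibility(self) -> float:
        # Laplace-smoothed ratio: new contributors start near 0.5 and
        # earn (or lose) trust as their reports are checked over time.
        return (self.confirmed + 1) / (self.confirmed + self.refuted + 2)
```

A newcomer scores 0.5, while a contributor with eight confirmed and no refuted reports scores 0.9; the smoothing prevents a single early report from producing an extreme score.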
However, Molly points out that these potential solutions don’t change the fact that only those with Internet access would be able to contribute human rights reports, which could “introduce significant bias considering that most victims and eyewitnesses of human rights violations are members of vulnerable populations with limited, if any, such access.” I agree with this general observation, but I’m surprised that Molly doesn’t reference the use of mobile phones (and other mobile technologies) as a way to collect testimony from individuals without access to the Internet or in inaccessible areas.
Finally, Molly is concerned that Model 2 by itself “lacks the deep participation that can help mobilize ordinary individuals to become involved in human rights advocacy.” This is increasingly problematic since “traditional ‘naming and shaming’ may, by itself, be increasingly less effective in its ability to achieve changes [in] state conduct regarding human rights.” So Molly rightly encourages the human rights community to “investigate ways to mobilize the public to become involved in human rights advocacy.”
In my opinion, peer-produced advocacy faces the same challenges as traditional human rights advocacy. It is therefore important that the human rights community adopt a more tactical approach to human rights monitoring. At Ushahidi, for example, we’re working to add a “subscribe-to-alerts” feature, which will allow anyone to receive SMS alerts for specific locations. A sketch of the underlying idea follows.
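Since the feature was still on the drawing board when this was written, the sketch below is only my illustration of the core logic, with hypothetical names throughout: match each incoming report against location-based subscriptions and fan out SMS alerts through whatever gateway is in use.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt
from typing import Callable, List

@dataclass
class Subscription:
    phone: str        # subscriber's SMS number
    lat: float        # center of the area they care about
    lon: float
    radius_km: float  # alert radius around that point

def km_between(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in kilometers (haversine formula)."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    h = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def dispatch_alerts(lat: float, lon: float, message: str,
                    subs: List[Subscription],
                    send_sms: Callable[[str, str], None]) -> int:
    """SMS every subscriber whose alert radius covers a new report."""
    sent = 0
    for sub in subs:
        if km_between(sub.lat, sub.lon, lat, lon) <= sub.radius_km:
            send_sms(sub.phone, message)
            sent += 1
    return sent
```

In deployment, send_sms would be a thin wrapper around the SMS gateway and subscriptions would live in a database rather than an in-memory list, but the matching logic would be the same.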
P2P Human Rights
The point is to improve the situational awareness of those who find themselves at risk so they can get out of harm’s way and not become another human rights statistic. For more on tactical human rights, please see my previous blog post.
Human rights organizations engaged in intervening to prevent human rights violations would also benefit from subscribing to these alerts. More importantly, the average person on the street would have the option of intervening as well. I, for one, am optimistic about the possibility of P2P human rights protection.
Patrick Philippe Meier