Philosophy professor Karen Frost-Arnold has just published a highly lucid analysis of the dangers that come with Internet accountability (PDF). While the anonymity provided by social media can facilitate the spread of lies, Karen rightly argues that eliminating anonymity can undermine online communities by stifling communication and entrenching ignorance, ultimately producing a larger volume of untrustworthy information. Her insights are instructive for those interested in information forensics and digital humanitarian action.
To make her case, Karen distinguishes between error-avoidance and truth-attainment. The former seeks to avoid false beliefs while the latter seeks to attain true beliefs. Take mainstream and social media, for example. Some argue that the “value of traditional media surpasses that of the blogosphere […] because the traditional media are superior at filtering out false claims” since professional journalists “reduce the number of errors that might otherwise be reported and believed.” Others counter this assertion: “People who confine themselves to a filtered medium may well avoid believing falsehoods (if the filters are working well), but inevitably they will also miss out on valuable knowledge,” including many true beliefs.
Karen argues that Internet anonymity is at odds with both error-avoiding purists and truth-seeking purists. For example, “some experimental evidence indicates that anonymity in computer-mediated discussion increases the quantity and novelty of ideas shared.” In addition, anonymity provides a measure of safety. This is particularly important for digital activists and others who are vulnerable and oppressed. Without this anonymity, important knowledge may not be shared. To this end, “Removal of anonymity could deprive the community of true beliefs spread by reports from socially threatened groups. Without online anonymity, activists, citizen journalists, and members of many socially stigmatized groups are much less likely to take the risk of sharing what they know with others.”
This leads to decreased participation, which in turn undermines the diversity of online communities and their ability to detect errors. To be sure, “anonymity can enhance error-detection by enabling increased transformative criticism to weed out error and bias.” In fact, “anonymity enables such groups to share criticisms of false beliefs. These criticisms can lead community members to reject or suspend judgment on false claims.” In other words, “Blogging and tweeting are not simply means of disseminating knowledge claims; they are also means of challenging, criticizing & uncovering errors in others’ knowledge claims.” As Karen rightly notes, “The error-uncovering efficacy of such criticism is enhanced by the anonymity that facilitates participation by diverse groups who would otherwise, for fear of sanction, not join the discussion. Removing anonymity risks silencing their valuable criticisms.” In sum, “anonymity facilitates error detection as well as truth attainment.”
Karen thus argues for Internet norms of civility instead of barring Internet anonymity. She also outlines the many costs of enforcing the use of real-world identities online. Detecting false identities is both time- and resource-intensive. I experienced this first-hand during the Libya Crisis Map operation. Investigating online identities diverts time and resources away from obtaining other valuable truths and detecting other important errors. Moreover, this type of investigative accountability “can have a dampening effect on internet speech as those who desire anonymity avoid making surprising claims that might raise the suspicions of potential investigators.” This curtails the sharing of valuable truths.
“To prevent the problem of disproportionate investigation of marginalized and minority users,” Karen writes that online communities “need mechanisms for checking the biases of potential investigators.” To this end, “if the question of whether some internet speech merits investigation is debated within a community, then as the diversity of that community increases, the likelihood increases that biased reasons for suspicion will be challenged.”
Karen also turns to recent research in behavioral and experimental economics, sociology and psychology for potential solutions. For example, “People appear less likely to lie when the lie only gives them a small benefit but does the recipient a great harm.” Making this possible harm more visible to would-be perpetrators may dissuade dishonest actions. Research also shows that “when people are asked to reflect on their own moral values or read a code of ethics before being tempted with an opportunity for profitable deception, they are less likely to be dishonest, even when there is no risk of dishonesty being detected.” This is precisely the rationale behind my piece on crowdsourcing honesty.
Great points, Patrick. I’d also suggest that the ability to stay anonymous presents very useful opportunities for using these social media data in applied research/social science. When people can submit their views, opinions, and experiences without revealing their identities, we (sometimes) hear their voices in a much more unvarnished way. This can be a powerful data source for understanding certain hard-to-reach (i.e., hard-to-survey) populations, especially those who have reason to self-censor their true feelings for fear of authority and/or retribution.
Thanks for reading, Tasha, and thanks for your comment, good point indeed.
There is a long history, going back to the 1960s, of anonymous group discussion used to let groups explore complex problems without the biases that arise when status differences and dominant personalities skew the conversation, even when those who dominate know little about the problem being discussed. There are many examples from the past few decades of online Delphi processes applied to complex problems.
One can find the best reference book on the method free online: The Delphi Method: Techniques and Applications, 1975, by Linstone and Turoff, at my website http://is.njit.edu/turoff
There is also a special issue of Technological Forecasting and Social Change in 2011 devoted to the Delphi method. Harold Linstone and I wrote a paper giving an update since the 1975 book, and I am willing to send it to anyone interested: turoff@njit.edu
Besides the process of anonymity (or pen names), the method is devoted to devising communication structures tailored to the nature of the problem and the nature of the group, and the book has many examples that have become very popular for things like planning studies in corporations and government organizations.
Many thanks for reading and sharing, Murray