The Starfish and the Spider: 8 Principles of Decentralization

“The Starfish and the Spider: The Unstoppable Power of Leaderless Organizations” by Ori Brafman and Rod Beckstrom is still one of my favorite books on organizational theory and complex systems.

The starfish represents decentralized “organizations” while the spider describes hierarchical command-and-control structures. In reviewing the book, the Executive Chairman of the World Economic Forum wrote that “[it has] not only stimulated my thinking, but as a result of the reading, I proposed ten action points for my own organization.”

The Starfish and the Spider is about “what happens when there’s no one in charge. It’s about what happens when there’s no hierarchy. You’d think there would be disorder, even chaos. But in many arenas, a lack of traditional leadership is giving rise to powerful groups that are turning industry and society upside down.” The book draws on a series of case studies that illustrate 8 Principles of Decentralization. I include these below with short examples.

1. When attacked, a decentralized organization tends to become even more open and decentralized:

Not only did the Apaches survive the Spanish attacks, but amazingly, the attacks served to make them even stronger. When the Spanish attacked them, the Apaches became even more decentralized and even more difficult to conquer (21).

2. It’s easy to mistake starfish for spiders:

When we first encounter a collection of file-swapping teenagers, or a native tribe in the Arizona desert, their power is easy to overlook. We need an entirely different set of tools in order to understand them (36).

3. An open system doesn’t have central intelligence; the intelligence is spread throughout the system:

It’s not that open systems necessarily make better systems. It’s just that they’re able to respond more quickly because each member has access to knowledge and the ability to make direct use of it (39).

4. Open systems can easily mutate:

The Apaches did not—and could not—plan ahead about how to deal with the European invaders, but once the Spanish showed up, Apache society easily mutated. They went from living in villages to being nomads. The decision didn’t have to be approved by headquarters (40).

5. The decentralized organization sneaks up on you:

For a century, the recording industry was owned by a handful of corporations, and then a bunch of hackers altered the face of the industry. We’ll see this pattern repeat itself across different sectors and in different industries (41).

6. As industries become decentralized, overall profits decrease:

The combined revenues of the remaining four [music industry giants] were 25 percent less than they had been in 2001. Where did the revenues go? Not to P2P players [Napster]. The revenue disappeared (50).

7. Put people into an open system and they’ll automatically want to contribute:

People take great care in making the articles objective, accurate, and easy to understand [on Wikipedia] (74).

8. When attacked, centralized organizations tend to become even more centralized:

As we saw in the case of the Apaches and the P2P players, when attacked decentralized organizations become even more decentralized (139).

Patrick Phillipe Meier

Where I Disagree with Morozov vs Shirky on Digital Activism

Prospect Magazine just published the final back-and-forth between Evgeny Morozov and Clay Shirky on digital activism. The debate followed Evgeny’s cover story in Prospect published last November, which I responded to (and disagreed with) at length here: Why Dictators Love the Web or: How I Learned to Stop Worrying and Say So What?! And here: Digital Activism and the Puffy Clouds of Anecdote Heaven.

I enjoyed reading the final exchange between Morozov and Shirky. Here’s where I agree and disagree with both authors.

Agree with Morozov

  • Growing Internet censorship in Iran is a logical reaction from a “rational-thinking” government concerned with possible revolution.
  • Only focusing on who controls communication networks may not be terribly helpful since Iran has other ways to control the internet. “One unfortunate consequence of limiting our analysis of internet control to censorship only is that it presents all authoritarian governments as technophobic and unable to capitalise on new technologies,” which is hardly the case.

Disagree with Morozov

  • Protests are not necessarily rare in repressive states. According to a study in 2006, “group protests in China have risen at a rate of at least 17% a year.” In 2005, there were an estimated 241 group protests per day. In Pakistan, local Field Monitors with Swisspeace coded 54 individual protests during 2007. Compare this with Reuters coverage of Pakistan, which only reported 7 protests that same year. Just because something is not in the news does not mean it is not happening.
  • The regime in Tehran may very well have the ability to turn off mobile phone coverage in public places where protests are organized, but remember those things called land line telephones? Iran has 24.8 million of those (2008 est.) and is ranked 12th (above South Korea and Canada) in number of land line phones (ref). And besides, we’ve clearly seen that mobile phones are increasingly used for more than just communication. The tragic video footage of Neda (along with hundreds of other pictures) was captured on mobile phones.
  • The fact that the Iranian regime has become more authoritarian following the post-election protests does not automatically imply the regime has become stronger. As I have written elsewhere, citing Brafman and Beckstrom’s The Starfish and the Spider: “when attacked, centralized organizations tend to become even more centralized.” A more centralized and paranoid regime, however, doesn’t mean a more powerful regime. Greater repression is a typical reaction by a threatened regime during a revolution and often before a change of power.
  • The increased repression can also backfire. As mentioned in this previous blog post, “Backfire occurs when an attack creates more support for or attention to what/who is attacked. Any injustice or norm violation can backfire on the perpetrator.” For further research on this issue, I recommend reading this piece on “Repression, Backfire and the Theory of Transformative Events,” by David Hess and Brian Martin.
  • One should also take into consideration the organizational topology of resistance movements. Brafman and Beckstrom argue that “when attacked, a decentralized organization tends to become even more decentralized.” This may make it more difficult for the regime to identify and crack down on the resistance movement in Iran.
  • I don’t think Burma serves as a valid comparison with Iran. In addition, the fact that there were no major democratic changes in Burma following the Saffron revolution in 2007 hardly means that the situation has been static since. I recently spoke with two colleagues who were in Burma a few months ago and was taken aback by some of the changes they observed.
  • Morozov asks what is to be gained if the ability to organize protests is matched or even overpowered by the ability to provoke, identify and arrest the protesters and possible future dissidents? A good question, but one that seems to assume a static and linear state of affairs. As I have argued elsewhere, tactical innovation, organizational learning and technological change mean that this is unlikely.

Agree with Shirky

  • Just like the Protestant Reformation was shaped by the printing press, the Iranian protests were and are being shaped by social media, rather than simply Twitter. Perhaps “the real revolution was the use of mobile phones, which allowed the original protesters to broadcast their actions to other citizens and to the wider world with remarkable speed and immediacy.”
  • I’m including the following paragraphs in full as they are relevant to my dissertation research. For me the key words here are “synchronized public.”

“The basic hypothesis is an updated version of that outlined by Jürgen Habermas in his 1962 publication, The Structural Transformation of the Public Sphere: an Inquiry into a Category of Bourgeois Society. A group of people, so Habermas’s theory goes, who take on the tools of open expression becomes a public, and the presence of a synchronised public increasingly constrains undemocratic rulers while expanding the rights of that public (the monarchies of Europe, in Habermas’s telling, become authoritarian governments within the contemporary scenario).

Put another way, even taking into account the increased availability of surveillance, the net value of social media has shifted the balance of power in the direction of Iran’s citizens. As Evgeny notes, however, that hypothesis might be wrong. Or, if it is right, the ways in which it is right might be minor, or rare, or take decades to unfold.”

  • Iran is unlikely to become a permanent Burma since “the kind of information shutdown required to keep all forms of public assembly from boiling over will be beyond the authorities in Iran.”

Disagree with Shirky

  • On the Habermas reference to the “synchronized public”, Shirky overlooks the fact that a centralized, command-and-control organization is likely to have the upper hand in synchronization. He also forgets that repressive regimes do not face the same collective action problem that resistance movements face (cf. information cascades). Granted, the “public” may be quicker in adapting to relatively rapid and small-albeit-important changes in the political environment, but this needs to be tested in more depth.
  • Shirky agrees with Evgeny regarding the possibility that Iran may move towards the Burmese model of steady control. Put this way, I also agree with the possibility that the sun may not rise tomorrow. None of us (Shirky, Morozov, or I) are Iran specialists, nor do we have any inside information on the internal politics of the country. So best not to rely on any of us for expert political commentary on Iranian politics.

Patrick Philippe Meier

Diane Coyle Responds to Criticisms of UN/Vodafone Report

Guest Blogger: Diane Coyle, lead author of UN/Vodafone Report

The tone of some comments about our report has surprised and disappointed me. It should go without saying that a report, as opposed to a catalog, could never include all the technologies and applications now available in this exciting field.

We selected examples that illustrated relevant aspects of the use of communications and information in the context of an emergency or conflict. The selection methodology was that they should be good illustrations of innovative uses, covering a reasonable spread of technologies, users and countries. In addition, we played to our own strengths in terms of the technologies and approaches we know well, meaning that we understood thoroughly the contribution they can make.

What’s more, this is not an academic report, which some of the criticisms ignored. It is meant to be accessible to a general audience, especially practitioners and policy makers. Why on earth would we include a literature review?

Some comments simply seemed to have missed the point, and no doubt I could have spelled some things out more clearly. For example, the point made in the document about Cyclone Nargis is precisely that even in contexts which hold no promise for humanitarian agencies to introduce new information-rich technologies, very simple low-tech information can still help build community resilience.

Having said that, I am really grateful for comments which pointed out a few factual errors and infelicities, and I hope we can correct them soon. I’m confident that the report makes an important contribution in highlighting the potential for the latest technologies in this field, and the obstacles to realization of that potential.

Responding to Feedback on UN Foundation/Vodafone Report

Update: I’m in conversation with the UN/Vodafone Foundation about adding a paragraph on case selection, a case study on Sahana and correcting the error on UNOSAT.

The UN/Vodafone Foundation recently published a new Report on New Technologies in Emergencies and Conflicts. This blog post responds to feedback on this report. Please note that I do so as second-author. My respected colleague Diane Coyle is the report’s lead author and editor so she may have other thoughts on the feedback. Naturally, I will only address the feedback that relates to my input.

  • Intended audience: The report was written for a general (non-technical/expert) audience as a way to showcase technology applications in the humanitarian space.
  • Case selection: The case studies were selected in consultation with the UN/Vodafone Foundation throughout the research period. These consultations included the authors of the report (Diane and myself) and three members of the UN/Vodafone Foundation who served as editors. Some of the case studies were requested by the Foundation. For the other case studies, we strove to highlight some of the most recent initiatives in consultation with the UN/Vodafone Foundation. Of the 19 case studies selected, 13 didn’t exist some 2 years ago. The others comprise major global initiatives, fit well together as examples for a general audience, or were requested case studies. Clearly, the limited space did not allow us to include everyone’s favorite project.
  • Length of report: We originally had some 80-or-so draft pages between us and had to reduce the content to about 60 pages. This meant having to decide what to keep and what to put aside. Some of the rewrite was also done to make the report less technical and more widely understandable. This helped to save on space.
  • UNOSAT and Sri Lanka: The reference does not intend to endorse or discount the interpretation of the imagery by the international community. The reference is based on in-person consultations with several well-placed experts. That said, I would agree that some rephrasing is in order and that this paragraph needs to be reworked.

On a personal note, I have found the tone of some criticisms rather disappointing and old school. There are several ways to give feedback: one is constructive, another is destructive. The former provides incentives to improve and continue an open collaborative conversation as a community. The latter defeats the incentive for growth and leads to a more self-centered community.

Some of the criticisms of this report have been destructive. Why is it that some of us can’t get our points across with more composure? Does bitterness make us feel more important? I expected a lot more from some of my (older, wiser) colleagues. But I haven’t always been good at providing constructive feedback either, so thanks to my “new fan club” I’ve got another New Year’s resolution for 2010: I will do my best to give constructive, supportive feedback.

Patrick Philippe Meier

Crisis Mapping Uganda: Combining Narratives and GIS to Study Genocide

This new peer-reviewed paper in The Professional Geographer is worth reading, especially if you’re new to crisis mapping. Authors Marguerite Madden and Amy Ross combine qualitative data of personal narratives with GIS technologies to “explore the potential for critical cartography in the study of mass atrocity.”

The authors use Northern Uganda, where millions have been affected by physical violence, as a case study. Their research yields the following conclusions:

  • Satellite images confirm the disruptive impact of forced relocation on economic activity, which resulted in greater levels of mortality than overt violence of LRA attacks.
  • Points, lines, and polygons delineated in Google Earth can be uploaded in ArcGIS to analyze the spatial patterns of huts and changes in IDP camps over time.
  • GIS data appear to have potential in documenting crimes that fall within the category of crimes against humanity.
  • Qualitative data may fail to demonstrate the extent and systematic nature of violence. GIS techniques may be able to provide the widespread and systematic criteria necessary for a conviction on crimes against humanity when individual life experiences are too difficult to document in sufficient numbers.
  • GIScience technologies appear to have less value in determining the crime of genocide. To reach a legal finding of genocide, the intent of the perpetrators must be established. GIS technologies alone, it seems, fail to provide solutions to this difficulty.

Marguerite and Amy cite the following research, which may be of interest to those looking for further reading in this area:

Kwan, M., and G. Ding. 2008. Geo-narrative: Extending geographic information systems for narrative
analysis in qualitative and mixed-method research. The Professional Geographer 60 (4): 443–65.

I wonder whether Marguerite and Amy have thought about exploring a collaboration with the EC’s Joint Research Center (JRC). The latter has developed automated change-detection methods for refugee/IDP camps, and if I remember correctly, they are looking for ways to validate their analysis using ground truthing.

Many thanks to my colleague Andrew Linke of Colorado University for sharing this paper with me.

Patrick Philippe Meier

Top 10 Posts on iRevolution in 2009

Here are the top 10 most popular posts on iRevolution in 2009:

  1. How To Communicate Securely in Repressive Environments
  2. A Brief History of Crisis Mapping
  3. Crisis Mapping Kenya’s Election Violence
  4. Video Introduction to Crisis Mapping
  5. Impact of ICTs on Repressive Regimes
  6. Proposing the Field of Crisis Mapping
  7. Mobile Banking for the Bottom of the Pyramid
  8. Digital Resistance: Between Digital Activism and Civil Resistance
  9. Moving Forward with Swift River
  10. Why Dictators Love the Web or: How I Learned to Stop Worrying and Say So What?!

Note the contrasting titles of posts #1 and #10. I actually wrote the former back in June during Iran’s post-election crackdown and the latter just a few weeks ago in response to reading a laundry list of “techtics” (technologies + tactics) that repressive regimes like Iran’s employ.

As David Sasaki recently noted, making lists of what is wrong is all fine and well, but we also need action items for “what needs to be done to make it right” so that “month by month, year by year, we’re slowly [able to be] checking those items off.” So consider the most popular post (communicating securely in repressive environments) a collection of action items, gathered from multiple sources, to address some of those laundry lists of what is wrong.

I was glad to see one of my posts on Ushahidi’s Swift River appear in the top 10, and surprised to see a post on mobile banking in position 7. I’ll be looking to blog more about mobile banking in 2010 especially as my Fletcher colleagues (and alumni) are becoming leaders in their own right in this exciting space ripe for iRevolutions. See this link for the international conference on Mobile Banking that we co-organized in Kenya earlier this year.

As for crisis mapping, I think 2009 will mark the launch of crisis mapping as a field in its own right. Thank you to all the donors who have helped to make this happen. There’s no doubt that 2010 will be a year of many more iRevolutions. I look forward to it!

Patrick Philippe Meier

Google’s New Earth Engine for Satellite Imagery Analysis: Applications to Humanitarian Crises

So that’s what they’ve been up to. Google is developing a new computational platform for global-scale analysis of satellite imagery to monitor deforestation. But this is just “the first of many Earth Engine applications that will help scientists, policymakers, and the general public to better monitor and understand the Earth’s ecosystems.”

How about the Earth’s social systems? Humanitarian crises? Armed conflicts? This has been one of the main drivers of the Program on Crisis Mapping and Early Warning (CM&EW) which I co-direct at the Harvard Humanitarian Initiative (HHI) with Dr. Jennifer Leaning. Indeed, we had a meeting with the Google Earth team earlier this year to discuss the development of a computational platform to analyze satellite imagery of humanitarian crises for the purposes of early detection and early response.

In particular, we were interested in determining whether certain spatial patterns could be identified and, if so, whether we could develop a taxonomy of different spatial patterns of humanitarian crises; something like a library of “crisis fingerprints.” As we noted to Google in writing following the conversations,

It is our view that the work of interpretation will be powerfully enhanced by the development of valid patterns relating to issues of importance in specific sets of circumstances that can be reproducibly recognized in satellite imagery. To be sure, the geo-spatial analysis of humanitarian crisis can serve as an important control mechanism for Google’s efforts in extending the functionality of Google Earth and Google’s analytical expertise.

This is something that a consortium of organizations including HHI can get engaged in. Population movement and settlement, shelter options and conditions, environmental threats, access to food and water, are discernible from various elements and resolution levels of satellite imagery. But much more could be apprehended from these images were patterns assembled and then tested against other information sources and empirical field assessments. For an excellent presentation on this, see my colleague Jennifer Leaning’s Keynote address at ICCM 2009:

The military uses of satellite imagery are far more developed than the humanitarian capacities because the interpretive link between what can be seen in the image and what is actually happening on the ground has been made, in great iterative detail, over a period of many years, encompassing a wide span of geographies and technological deployments. We need to develop a process to explore and validate what can be understood from satellite imagery about key humanitarian concerns by augmenting standard satellite analytics with time-specific and informed assessments of what was concurrently taking place in the location being photographed.

The potential for such applications has just begun to surface in humanitarian circles.  The Darfur Google initiative has demonstrated the force of vivid images of destruction tethered to actual locations of villages across the span of Darfur.  Little further detail is available from the actual images, however, and much of the associated information depicted by clicking on the image is static derived from other sources, somewhat laboriously acquired.  The full power of what might be gleaned simply from the satellite image remains to be explored.

Because systematic and empirical analysis of what a series of satellite images might reveal about humanitarian issues has not yet been undertaken, any effort to draw inferences from current images does not lead far.  The recent coverage of the war in Sri Lanka included satellite photos of the same contested terrain in the northeast, for two time frames, a month apart.  The attempt to determine what had transpired in that interim, relating to population movement, shelter de-construction and reconstruction, and land bombardment, was a matter of conjecture.

Bridging this gap from image to insight will not only be a matter of technological enhancement of satellite imaging. It will require interrogating the satellite images through the filter of questions and concerns that are relevant to humanitarian action and then infusing other kinds of information, gathered through a range of methods, to create visual metrics for understanding what the images project.

There is a lot of exciting work to be done in this space and I do hope that Google will seek to partner with humanitarian organizations and applied research institutes to develop an Earth Engine for Humanitarian Crises. While the technological and analytical breakthroughs are path breaking, let us remember that they can be even more breathtaking by applying them to save lives in humanitarian crises.

Patrick Philippe Meier

Is Journalism Just Failed Crowdsourcing?

This provocative question materialized during a recent conversation I had with a Professor of Political Science while in New York this week. Major news companies like CNN have started to crowdsource citizen generated news on the basis that “looking at the news from different angles gives us a deeper understanding of what’s going on.” CNN’s iReporter thus invites citizens to help shape the news “in order to paint a more complete picture of the news.”

This would imply that traditional journalism has provided a relatively incomplete picture of global events. So the question is, if crowdsourcing platforms had been available to journalists one hundred years ago, would they view these platforms as an exciting opportunity to get early leads on breaking stories? The common counter argument is: but crowdsourcing “opens the floodgates” of information and we simply can’t follow up on everything. Yes, but whoever said that every lead requires follow up?

Journalists are not always interested in following up on every lead that comes their way. They’ll select a few sources, interview them and then write up the story. What crowdsourcing citizen generated news does, however, is to provide them with many more leads to choose from. Isn’t this an ideal set up for a journalist? Instead of having to chase down leads across town, the leads come directly to them with names, phone numbers and email addresses.

Imagine that the field of journalism had started out using crowdsourcing platforms combined with investigative journalism. If these platforms were then outlawed for whatever reason, would investigative journalists be hindered in their ability to cover the news from different angles? Or would they still be able to paint an equally complete picture of the news?

Granted, one common criticism of citizen journalism is the lack of context it provides, especially on Twitter given the 140-character restriction. But surely 140 characters are plenty for the purposes of a potential lead. And if a mountain of Tweets started to point to the same lead story, then a professional journalist could take advantage of this information when deciding whether or not to follow up.
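The idea of surfacing only those leads that many Tweets converge on can be sketched in a few lines. This is purely illustrative: the function name, keyword list, and threshold are my own hypothetical choices, not anything CNN or Twitter actually uses.

```python
from collections import Counter

def promising_leads(tweets, keywords, threshold=3):
    """Count how many tweets mention each candidate story keyword.
    Keywords crossing the threshold may merit journalist follow-up."""
    counts = Counter()
    for text in tweets:
        for kw in keywords:
            if kw in text.lower():
                counts[kw] += 1
    return [kw for kw, n in counts.items() if n >= threshold]

tweets = [
    "Bridge collapse downtown, avoid the area",
    "just saw the bridge collapse near the market",
    "traffic jam everywhere after bridge collapse",
    "great coffee this morning",
]
print(promising_leads(tweets, ["bridge collapse", "coffee"]))
# ['bridge collapse']
```

The journalist never has to chase every mention; the “mountain of Tweets” itself nominates the story worth following up.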


I also find the criticism against Twitter interesting coming from traditional journalists. In the early 1900s, large newspapers started hiring war correspondents “who used the new telegraph and expanding railways to move news faster to their newspapers.” However, the cost of sending telegrams forced reporters to develop a “new concise or ‘tight’ style of writing which became the standard for journalism through the next century.”

Today, the cost of hiring professional journalists means that a newspaper like the Herald (at the time) is not going to send any modern Henry Stanley to find a certain Dr. Livingstone in Africa. And besides, if the Herald had had global crowdsourcing platforms back in the 1870s, it might instead have used Twitter to crowdsource the coordinates of Dr. Livingstone.

This may imply that traditional journalism was primarily shaped by the constraints of technology at the time. In a teleological sense, then, crowdsourcing may simply be the next phase in the future of journalism.

Patrick Philippe Meier

Crisis Information and The End of Crowdsourcing

When Wired journalist Jeff Howe coined the term crowdsourcing back in 2006, he did so in contradistinction to the term outsourcing and defined crowdsourcing as tapping the talent of the crowd. The tag line of his article was: “Remember outsourcing? Sending jobs to India and China is so 2003. The new pool of cheap labor: everyday people using their spare cycles to create content, solve problems, even do corporate R & D.”

If I had a tag line for this blog post it would be: “Remember crowdsourcing? Cheap labor to create content and solve problems using the Internet is so 2006. What’s new and cool today is the tapping of official and unofficial sources using new technologies to create and validate quality content.” I would call this allsourcing.

The word “crowdsourcing” is obviously a compound word that combines “crowd” and “sourcing”. But what exactly does “crowd” mean in this respect? And how has “sourcing” changed since Jeff introduced the term crowdsourcing over three-and-a-half years ago?

Let’s tackle the question of “sourcing” first. In his June 2006 article on crowdsourcing, Jeff provides case studies that all relate to a novel application of a website, and perhaps the most famous example of crowdsourcing is Wikipedia, another website. But we’ve just recently seen some interesting uses of mobile phones to crowdsource information. See Ushahidi or Nathan Eagle’s talk at ETech09, for example:

So the word “sourcing” here goes beyond the website-based e-business approach that Jeff originally wrote about in 2006. The mobile technology component here is key. A “crowd” is not still. A crowd moves, especially in crisis, which is my area of interest. So the term “allsourcing” not only implies collecting information from all sources but also the use of “all” technologies to collect said information in different media.

As for the word “crowd”, I recently noted in this Ushahidi blog post that we may need some qualifiers—namely bounded and unbounded crowdsourcing. In other words, the term “crowd” can mean a large group of people (unbounded crowdsourcing) or perhaps a specific group (bounded crowdsourcing). Unbounded crowdsourcing implies that the identity of individuals reporting the information is unknown whereas bounded crowdsourcing would describe a known group of individuals supplying information.

The term “allsourcing” represents a combination of bounded and unbounded crowdsourcing coupled with new “sourcing” technologies. An allsourcing approach would combine information supplied by known/official sources and unknown/unofficial sources using the Web, e-mail, SMS, Twitter, Flickr, YouTube, etc. I think the future of crowdsourcing is allsourcing because allsourcing combines the strengths of both bounded and unbounded approaches while reducing the constraints inherent to each individual approach.

Let me explain. One important advantage of unbounded crowdsourcing is the ability to collect information from unofficial sources. I consider this an advantage over bounded crowdsourcing since more information can be collected this way. The challenge, of course, is how to verify the validity of said information. Verifying information is by no means a new process, but unbounded crowdsourcing has the potential to generate a lot more information than bounded crowdsourcing since the former does not censor unofficial content. This presents a challenge.

At the same time, bounded crowdsourcing has the advantage of yielding reliable information since the reports are produced by known/official sources. However, bounded crowdsourcing is constrained to a relatively small number of individuals doing the reporting. Obviously, these individuals cannot be everywhere at the same time. But if we combined bounded and unbounded crowdsourcing, we would see an increase in (1) overall reporting, and (2) in the ability to validate reports from unknown sources.

The increased ability to validate information is due to the fact that official and unofficial sources can be triangulated when using an allsourcing approach. Given that official sources are considered trusted sources, any reports from unofficial sources that match official reports can be considered more reliable, along with their associated sources. And so the combined allsourcing approach in effect enables the identification of new reliable sources even if the identity of these sources remains unknown.
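The triangulation logic above can be sketched as follows. This is a minimal, hypothetical illustration of the idea, not any actual platform’s implementation: the report fields and the matching rule (same location and event category) are assumptions of mine, and a real system would match on richer criteria such as time windows and geographic proximity.

```python
from dataclasses import dataclass

@dataclass
class Report:
    source: str       # identifier of the reporting source
    official: bool    # True for known/official sources
    location: str     # coarse location tag
    event: str        # event category, e.g. "flood", "protest"

def triangulate(reports):
    """Return unofficial sources whose reports are corroborated by an
    official report describing the same event in the same location."""
    official_keys = {(r.location, r.event) for r in reports if r.official}
    trusted = set()
    for r in reports:
        if not r.official and (r.location, r.event) in official_keys:
            trusted.add(r.source)  # a new reliable, if still unnamed, source
    return trusted

reports = [
    Report("ministry", True, "district-5", "flood"),
    Report("anon-17", False, "district-5", "flood"),    # corroborated
    Report("anon-42", False, "district-9", "protest"),  # uncorroborated
]
print(triangulate(reports))
# {'anon-17'}
```

Once a source like “anon-17” has been corroborated a few times, its future reports can be weighted more heavily even though its identity remains unknown, which is exactly the payoff of combining bounded and unbounded approaches.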

Ushahidi is a good example of an allsourcing platform. Organizations can use Ushahidi to capture both official and unofficial sources using all kinds of new sourcing technologies. Allsourcing is definitely something new so there’s still much to learn. I have a hunch that there is huge potential. Jeff Howe titled his famous article in Wired “The Rise of Crowdsourcing.” Will a future edition of Wired include an article on “The Rise of Allsourcing”?

Patrick Philippe Meier

Three Common Misconceptions About Ushahidi

Cross posted on Ushahidi

Here are three interesting misconceptions about Ushahidi and crowdsourcing in general:

  1. Ushahidi takes the lead in deploying the Ushahidi platform
  2. Crowdsourced information is statistically representative
  3. Crowdsourced information cannot be validated

Let's start with the first. We do not take the lead in deploying Ushahidi platforms. In fact, we often learn about new deployments second-hand via Twitter. We are a non-profit tech company and our goal is to continue developing innovative crowdsourcing platforms that cater to the growing needs of our current and prospective partners. We provide technical and strategic support when asked, but otherwise you'll find us in the backseat, which is honestly where we prefer to be. Our comparative advantage is not in deployment. So the credit for Ushahidi deployments really goes to the numerous organizations that continue to implement the platform in new and innovative ways.

On this note, keep in mind that the first downloadable Ushahidi platform was made available just this May, and the second version just last week. So implementing organizations have been remarkable test pilots, experimenting and learning on the fly without recourse to any particular manual or documented best practices. Most election-related deployments, for example, were even launched before May, when platform stability was still an issue and the code was still being written. So our hats go off to all the organizations that have piloted Ushahidi and continue to do so. They are the true pioneers in this space.

Also keep in mind that these organizations rarely had more than a month or two of lead-time before scheduled elections, as in India. If all of us have learned anything from watching these deployments in 2009, it is this: the challenge is not one of technology but one of election awareness and voter education. So we're impressed that several organizations are already customizing the Ushahidi platform for elections that are more than 6-12 months away. These deployments will definitely be a first for Ushahidi and we look forward to learning all we can from implementing organizations.

The second misconception, “crowdsourced information is statistically representative,” often crops up in conversations around election monitoring. The problem is largely one of language. The field of election monitoring is hardly new. Established organizations have been involved in election monitoring for decades and have gained a wealth of knowledge and experience in this area. For these organizations, the term “election monitoring” has specific connotations, such as random sampling and statistical analysis, verification, validation and accredited election monitors.

When partners use Ushahidi for election monitoring, I think they mean something different. What they generally mean is citizen-powered election monitoring aided by crowdsourcing. Does this imply that crowdsourced information is statistically representative of all the events taking place across a given country? Of course not: I’ve never heard anyone suggest that crowdsourcing is equivalent to random sampling.

Citizen-powered election monitoring is about empowering citizens to take ownership over their elections and to have a voice. Indeed, elections do not start and stop at the polling booth. Should we prevent civil society groups from crowdsourcing crisis information on the basis that their reports may not be statistically representative? No. This is not our decision to make and the data is not even meant for us.

Another language-related problem has to do with the term "crowdsourcing." The word "crowd" here can mean literally anyone (unbounded crowdsourcing) or a specific group (bounded crowdsourcing), such as designated election monitors. If these official monitors use Ushahidi and are deliberately positioned across a country for random sampling purposes, then this is no different from standard, established approaches to election monitoring. Bounded crowdsourcing can be statistically representative.
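To illustrate the sampling point with a toy example (the station count and monitor count are hypothetical numbers of my own choosing), positioning a bounded crowd of accredited monitors at randomly sampled polling stations is just ordinary random sampling:

```python
import random

# Hypothetical setup: 1,000 polling stations, 50 accredited monitors.
polling_stations = [f"station_{i}" for i in range(1000)]

random.seed(42)  # reproducible draw for this illustration
sample = random.sample(polling_stations, 50)  # one monitor per sampled station

# Because the 50 stations are drawn at random without replacement,
# aggregate findings from the monitors generalize to all 1,000 stations.
print(len(sample), len(set(sample)))  # 50 distinct stations
```

This is the sense in which bounded crowdsourcing can be statistically representative: the "crowd" is chosen by a sampling design, not self-selected.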

The third misconception about Ushahidi has to do with the tradeoff between unbounded crowdsourcing and the validation of the resulting information. One of the main advantages of unbounded crowdsourcing is the ability to collect a lot of information from a variety of sources and media, official and unofficial alike, in near real time. Of course, this means that far more information can be reported at once, which can make validating that information a challenging process.

A common reaction to this challenge is to dismiss crowdsourcing altogether because unofficial sources may be unreliable or, at worst, deliberately misleading. Some organizations thus find it easier to write off all unofficial content because of these concerns. Ushahidi takes a different stance. We recognize that user-generated content is not about to disappear any time soon and that a lot of good can come out of such content, not least because official information can too easily become proprietary and guarded instead of shared.

So we’re not prepared to write off user-generated content because validating information happens to be challenging. Crowdsourcing crisis information is our business and so is (obviously) the validation of crowdsourced information. This is why Ushahidi is fully committed to developing Swift River. Swift is a free and open source platform that validates crowdsourced information in near real-time. Follow the Ushahidi blog for exciting updates!