Tag Archives: Organizations

Does the Humanitarian Industry Have a Future in The Digital Age?

I recently had the distinct honor of being on the opening plenary of the 2012 Skoll World Forum in Oxford. The panel, “Innovation in Times of Flux: Opportunities on the Heels of Crisis” was moderated by Judith Rodin, CEO of the Rockefeller Foundation. I’ve spent the past six years creating linkages between the humanitarian space and technology community, so the conversations we began during the panel prompted me to think more deeply about innovation in the humanitarian industry. Clearly, humanitarian crises have catalyzed a number of important innovations in recent years. At the same time, however, these crises extend the cracks that ultimately reveal the inadequacies of existing organizations, particularly those resistant to change; and “any organization that is not changing is a battlefield monument” (While 1992).

These cracks, or gaps, are increasingly filled by disaster-affected communities themselves thanks in part to the rapid commercialization of communication technology. The question is: will the multi-billion dollar humanitarian industry change rapidly enough to avoid being left in the dustbin of history?

Crises often reveal that “existing routines are inadequate or even counter-productive [since] response will necessarily operate beyond the boundary of planned and resourced capabilities” (Leonard and Howitt 2007). More formally, “the ‘symmetry-breaking’ effects of disasters undermine linearly designed and centralized administrative activities” (Corbacioglu 2006). This may explain why “increasing attention is now paid to the capacity of disaster-affected communities to ‘bounce back’ or to recover with little or no external assistance following a disaster” (Manyena 2006).

But disaster-affected populations have always self-organized in times of crisis. Indeed, first responders are by definition those very communities affected by disasters. So local communities—rather than humanitarian professionals—save the most lives following a disaster (Gilbert 1998). Many of the needs arising after a disaster can often be met and responded to locally. One doesn’t need 10 years of work experience with the UN in Darfur or a Masters degree to know basic first aid or to pull a neighbor out of the rubble, for example. In fact, estimates suggest that “no more than 10% of survival in emergencies can be attributed to external sources of relief aid” (Hilhorst 2004).

This figure may be higher today since disaster-affected communities now benefit from radically wider access to information and communication technologies (ICTs). After all, a “disaster is first of all seen as a crisis in communicating within a community—that is as a difficulty for someone to get informed and to inform other people” (Gilbert 1998). This communication challenge is far less acute today because disaster-affected communities are increasingly digital, and thus more and more the primary source of information communicated following a crisis. Of course, these communities were always sources of information but being a source in an analog world is fundamentally different from being a source of information in the digital age. The difference between “read-only” versus “read-write” comes to mind as an analogy. And so, while humanitarian organizations typically faced a vacuum of information following sudden onset disasters—limited situational awareness that could only be filled by humanitarians on the ground or via established news organizations—one of the major challenges today is the Big Data produced by disaster-affected communities themselves.

Indeed, vacuums are not empty and local communities are not invisible. One could say that disaster-affected communities are joining the quantified self (QS) movement given that they are increasingly quantifying themselves. If information is power, then the shift of information sourcing and sharing from the select few—the humanitarian professionals—to the masses must also engender a shift in power. Indeed, humanitarians rarely have access to exclusive information any longer. And even though affected populations are increasingly digital, some groups believe that humanitarian organizations have largely failed at communicating with disaster-affected communities. (Naturally, there are important and noteworthy exceptions).

So “Will Twitter Put the UN Out of Business?” (Reuters), or will humanitarian organizations cope with these radical changes by changing themselves and reshaping their role as institutions before it’s too late? Indeed, “a business that doesn’t communicate with its customers won’t stay in business very long—it’ll soon lose track of what its clients want, and clients won’t know what products or services are on offer,” whilst other actors fill the gaps (Reuters). “In the multi-billion dollar humanitarian aid industry, relief agencies are businesses and their beneficiaries are customers. Yet many agencies have muddled along for decades with scarcely a nod towards communicating with the folks they’re supposed to be serving” (Reuters).

The music and news industries were muddling along as well for decades. Today, however, they are facing tremendous pressures and are undergoing radical structural changes—none of them by choice. Of course, it would be different if affected communities were paying for humanitarian services but how much longer do humanitarian organizations have until they feel similar pressures?

Whether humanitarian organizations like it or not, disaster affected communities will increasingly communicate their needs publicly and many will expect a response from the humanitarian industry. This survey carried out by the American Red Cross two years ago already revealed that during a crisis the majority of the public expect a response to needs they communicate via social media. Moreover, they expect this response to materialize within an hour. Humanitarian organizations simply don’t have the capacity to deal with this surge in requests for help, nor are they organizationally structured to do so. But the fact of the matter is that humanitarian organizations have never been capable of dealing with this volume of requests in the first place. So “What Good is Crowdsourcing When Everyone Needs Help?” (Reuters). Perhaps “crowdsourcing” is finally revealing all the cracks in the system, which may not be a bad thing. Surely by now it is no longer a surprise that many people may be in need of help after a disaster, hence the importance of disaster risk reduction and preparedness.

Naturally, humanitarian organizations could very well choose to continue ignoring calls for help and decide that communicating with disaster affected communities is simply not tenable. In the analog world of the past, the humanitarian industry was protected by the fact that their “clients” did not have a voice because they could not speak out digitally. So the cracks didn’t show. Today, “many traditional humanitarian players see crowdsourcing as an unwelcome distraction at a time when they are already overwhelmed. They worry that the noise-to-signal ratio is just too high” (Reuters). I think there’s an important disconnect here worth emphasizing. Crowdsourced information is simply user-generated content. If humanitarians are to ignore user-generated content, then they can forget about two-way communications with disaster-affected communities and drop all the rhetoric. On the other hand, “if aid agencies are to invest time and resources in handling torrents of crowdsourced information in disaster zones, they should be confident it’s worth their while” (Reuters).

This last comment is … rather problematic for several reasons (how’s that for being diplomatic?). First of all, this kind of statement continues to propel the myth that we the West are the rescuers and aid does not start until we arrive (Barrs 2006). Unfortunately, we rarely arrive: how many “neglected crises” and so-called “forgotten emergencies” have we failed to intervene in? This kind of mindset may explain why humanitarian interventions often have the “propensity to follow a paternalistic mode that can lead to a skewing of activities towards supply rather than demand” and towards informing at the expense of listening (Manyena 2006).

Secondly, the assumption that crowdsourced data would be for the exclusive purpose of the humanitarian cavalry is somewhat arrogant and ignores the reality that local communities are by definition the first responders in a crisis. Disaster-affected communities (and Diasporas) are already collecting (and yes crowdsourcing) information to create their own crisis maps in times of need as a forthcoming report shows. And they’ll keep doing this whether or not humanitarian organizations approve or leverage that information. As my colleague Tim McNamara has noted “Crisis mapping is not simply a technological shift, it is also a process of rapid decentralization of power. With extremely low barriers to entry, many new entrants are appearing in the fields of emergency and disaster response. They are ignoring the traditional hierarchies, because the new entrants perceive that there is something that they can do which benefits others.”

Thirdly, humanitarian organizations are far more open to using free and open source software than they were just two years ago. So the resources required to monitor and map crowdsourced information need not break the bank. Indeed, the Syria Crisis Map uses a free and open source data-mining platform called HealthMap, which has been monitoring some 2,000 English-language sources on a daily basis for months. The technology powering the map itself, Ushahidi, is also free and open source. Moreover, the team behind the project is comprised of just a handful of volunteers doing this in their own free time (for almost an entire year now). And as a result of this initiative, I am collaborating with a colleague from UNDP to pilot HealthMap’s data mining feature for conflict monitoring and peacebuilding purposes.
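The core step of the kind of monitoring described above, scanning a large pool of news items for crisis-related keywords, can be sketched in a few lines. To be clear, this is not HealthMap's actual implementation: the keyword list and feed items below are illustrative stand-ins, and a real monitor would fetch and parse thousands of RSS/Atom feeds on a schedule rather than filter hard-coded strings.

```python
# Rough, hypothetical sketch of the keyword-filtering step in a
# data-mining platform like HealthMap. NOT the platform's real code:
# keywords and feed items are invented examples for illustration.

KEYWORDS = {"shelling", "displaced", "casualties", "protest"}

def matching_items(items, keywords=KEYWORDS):
    """Return the feed items that mention any monitored keyword."""
    return [item for item in items
            if any(k in item.lower() for k in keywords)]

feed = [
    "Reports of shelling in the northern district",
    "Local sports results announced",
    "Thousands displaced after overnight clashes",
]
print(matching_items(feed))
```

Of the three sample items, the first and third would pass the filter. Everything interesting in a production system (source discovery, deduplication, language handling, geocoding) happens around this step, but the step itself is cheap, which is the point being made about cost.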

Fourth, other than UN Global Pulse, humanitarian agencies are not investing time and resources to manage Big (Crisis) Data. Why? Because they have neither the time nor the know-how. To this end, they are starting to “outsource” and indeed “crowdsource” these tasks—just as private sector businesses have been doing for years in order to extend their reach. Anyone actually familiar with this space and developments since Haiti already knows this. The CrisisMappers Network, Standby Volunteer Task Force (SBTF), Humanitarian OpenStreetMap (HOT) and Crisis Commons (CC) are four volunteer/technical networks that have already collaborated actively with a number of humanitarian organizations since Haiti to provide the “surge capacity” requested by the latter; this includes UN OCHA in Libya and Colombia, UNHCR in Somalia and WHO in Libya, to name a few. In fact, these groups even have their own acronym: Volunteer & Technical Communities (V&TCs).

As the former head of OCHA’s Information Services Section (ISS) noted after the SBTF launched the Libya Crisis Map, “Your efforts at tackling a difficult problem have definitely reduced the information overload; sorting through the multitude of signals on the crisis is no easy task” (March 8, 2011). Furthermore, the crowdsourced social media information mapped on the Libya Crisis Map was integrated into official UN OCHA information products. I dare say activating the SBTF was worth OCHA’s while. And it cost the UN a grand total of $0 to benefit from this support.

Credit: Chris Bow

The rapid rise of V&TCs has catalyzed the launch of the Digital Humanitarian Network (DHN), formerly called the Humanitarian Standby Task Force (H-SBTF). Digital Humanitarians is a network-of-networks catalyzed by the UN and comprising some of the most active members of the volunteer & technical community. The purpose of the Digital Humanitarian platform (powered by Ning) is to provide a dedicated interface for traditional humanitarian organizations to outsource and crowdsource important information management tasks during and in-between crises. OCHA has also launched the Communities of Interest (COIs) platform to further leverage volunteer engagement in other areas of humanitarian response.

These are not isolated efforts. During the massive Russian fires of 2010, volunteers launched their own citizen-based disaster response agency that was seen by many as more visible and effective than the Kremlin’s response. In Egypt, volunteers used IntaFeen.com to crowdsource and coordinate their own humanitarian convoys to Libya, for example. LinkedIn has also taken innovative steps to enable the matching of volunteers with various needs, recently adding a “Volunteer and Causes” field to its member profile page, which is now available to 150 million LinkedIn users worldwide. Sparked.com is yet another group engaged in matching volunteers with needs. The company is the world’s first micro-volunteering network, sending challenges to registered volunteers that are targeted to their skill set and the causes that they are most passionate about.

It is not farfetched to envisage how these technologies could be repurposed or simply applied to facilitate and streamline volunteer management following a disaster. Indeed, researchers at the University of Queensland in Australia have already developed a new smart phone app to help mobilize and coordinate volunteer efforts during and following major disasters. The app not only provides information on preparedness but also gives real-time updates on volunteering opportunities by local area. For example, volunteers can register for a variety of tasks including community response to extreme weather events.

Meanwhile, the American Red Cross just launched a Digital Operations Center in partnership with Dell Labs, which allows them to leverage digital volunteers and Dell’s social media monitoring platforms to reduce the noise-to-signal ratio. This is a novel “social media-based operation devoted to humanitarian relief, demonstrating the growing importance of social media in emergency situations.” As part of this center, the Red Cross also “announced a Digital Volunteer program to help respond to questions from and provide information to the public during disasters.”

While important challenges do exist, there are many positive externalities to leveraging digital volunteers. As the deputy high commissioner of UNHCR noted about this UNHCR-volunteer project in Somalia, these types of projects create more citizen engagement and raise awareness of humanitarian organizations and projects. This in part explains why UNHCR wants more, not less, engagement with digital volunteers. Indeed, these volunteers also develop important skills that will be increasingly sought after by humanitarian organizations recruiting for junior full-time positions. Humanitarian organizations are likely to become smarter and more up to speed on humanitarian technologies and digital humanitarian skills as a result. This change should be embraced.

So given the rise of “self-quantified” disaster-affected communities and digitally empowered volunteer communities, is there a future for traditional humanitarian organizations? Of course, anyone who suggests otherwise is seriously misguided and out of touch with innovation in the humanitarian space. Twitter will not put the UN out of business. Humanitarian organizations will continue to play some very important roles, especially those relating to logistics and coordination. These organizations will continue outsourcing some roles but will also take on some new roles. The issue here is simply one of comparative advantage. Humanitarian organizations used to have a comparative advantage in some areas, but this has shifted for all the reasons described above. So outsourcing in some cases makes perfect sense.

Interestingly, organizations like UN OCHA are also changing some of their own internal information management processes as a result of their collaboration with volunteer networks like the SBTF, which they expect will lead to a number of efficiency gains. Furthermore, OCHA is behind the Digital Humanitarians initiative and has also been developing a check-in app for humanitarian professionals to use in disaster response—clear signs of innovation and change. Meanwhile, the UK’s Department for International Development (DfID) has just launched a $75+ million fund to leverage new technologies in support of humanitarian response; this includes mobile phones, satellite imagery, Twitter as well as other social media technologies, digital mapping and gaming technologies. Given that crisis mapping integrates these new technologies and has been at the cutting edge of innovation in the humanitarian space, I’ve invited DfID to participate in this year’s International Conference on Crisis Mapping (ICCM 2012).

In conclusion, and as argued two years ago, the humanitarian industry is shifting towards a more multi-polar system. The rise of new actors, from digitally empowered disaster-affected communities to digital volunteer networks, has been driven by the rapid commercialization of communication technology—particularly the mobile phone and social networking platforms. These trends are unlikely to change soon and crises will continue to spur innovations in this space. This does not mean that traditional humanitarian organizations are becoming obsolete. Their roles are simply changing and this change is proof that they are not battlefield monuments. Of course, only time will tell whether they change fast enough.

Seeking the Trustworthy Tweet: Can “Tweetsourcing” Ever Fit the Needs of Humanitarian Organizations?

Can microblogged data fit the information needs of humanitarian organizations? This is the question asked by a group of academics at Pennsylvania State University’s College of Information Sciences and Technology. Their study (PDF) is an important contribution to the discourse on humanitarian technology and crisis information. The applied research provides key insights based on a series of interviews with humanitarian professionals. While I largely agree with the majority of the arguments presented in this study, I do have questions regarding the framing of the problem and some of the assertions made.

The authors note that “despite the evidence of strong value to those experiencing the disaster and those seeking information concerning the disaster, there has been very little uptake of message data by large-scale, international humanitarian relief organizations.” This is because real-time message data is “deemed as unverifiable and untrustworthy, and it has not been incorporated into established mechanisms for organizational decision-making.” To this end, “committing to the mobilization of valuable and time sensitive relief supplies and personnel, based on what may turn out be illegitimate claims, has been perceived to be too great a risk.” Thus far, the authors argue, “no mechanisms have been fashioned for harvesting microblogged data from the public in a manner which facilitates organizational decisions.”

I don’t think this latter assertion is entirely true if one looks at the use of Twitter by the private sector. Take for example the services offered by Crimson Hexagon, which I blogged about 3 years ago. This successful start-up launched by Gary King out of Harvard University provides companies with real-time sentiment analysis of brand perceptions in the Twittersphere precisely to help inform their decision making. Another example is Storyful, which harvests data from authenticated Twitter users to provide highly curated, real-time information via microblogging. Given that the humanitarian community lags behind in the use and adoption of new technologies, it behooves us to look at those sectors that are ahead of the curve to better understand the opportunities that do exist.

Since the study principally focused on Twitter, I’m surprised that the authors did not reference the empirical study that came out last year on the behavior of Twitter users after the 8.8 magnitude earthquake in Chile. The study shows that about 95% of tweets related to confirmed reports validated that information. In contrast only 0.03% of tweets denied the validity of these true cases. Interestingly, the results also show that “the number of tweets that deny information becomes much larger when the information corresponds to a false rumor.” In fact, about 50% of tweets will deny the validity of false reports. This means it may very well be possible to detect rumors by using aggregate analysis on tweets.
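The aggregate heuristic implied by the Chile findings can be made concrete with a short sketch. It assumes tweets about a given claim have already been labeled as affirming or denying it (by a classifier or manual coding, which the sketch does not implement), and simply flags claims where denials dominate. The threshold is illustrative, chosen only to separate the two regimes the study reports.

```python
# Hedged sketch of rumor detection via aggregate tweet analysis.
# Assumes each tweet about a claim carries an "affirms"/"denies"
# label already; producing those labels is the hard part, not shown.

from collections import Counter

def deny_ratio(labels):
    """Fraction of tweets about a claim that deny it."""
    counts = Counter(labels)
    total = sum(counts.values())
    return counts["denies"] / total if total else 0.0

def likely_rumor(labels, threshold=0.3):
    # Confirmed reports in the Chile study drew ~0.03% denials,
    # false rumors ~50%, so a modest threshold separates them.
    return deny_ratio(labels) >= threshold

confirmed = ["affirms"] * 95 + ["denies"] * 1
rumor = ["affirms"] * 50 + ["denies"] * 50

print(likely_rumor(confirmed))  # False: denials are rare
print(likely_rumor(rumor))      # True: denials dominate
```

The design choice worth noting is that no single tweet is trusted: the signal lives in the ratio across many tweets, which is exactly why this approach sidesteps the "can you trust one tweet?" objection.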

On framing, I believe the focus on microblogging and Twitter in particular misses the bigger picture which ultimately is about the methodology of crowdsourcing rather than the technology. To be sure, the study by Penn State could just as well have been titled “Seeking the Trustworthy SMS.” I think this important research on microblogging would be stronger if this distinction were made and the resulting analysis tied more closely to the ongoing debate on crowdsourcing crisis information that began during the response to Haiti’s earthquake in 2010.

Also, as was noted during the Red Cross Summit in 2010, more than two-thirds of respondents to a survey noted that they would expect a response within an hour if they posted a need for help on a social media platform (and not just Twitter) during a crisis. So whether humanitarian organizations like it or not, crowdsourced social media information cannot be ignored.

The authors carried out a series of insightful interviews with about a dozen international humanitarian organizations to try and better understand the hesitation around the use of Twitter for humanitarian response. As noted earlier, however, it is not Twitter per se that is a concern but the underlying methodology of crowdsourcing.

As expected, interviewees noted that they prioritize the veracity of information over the speed of communication. “I don’t think speed is necessarily the number one tool that an emergency operator needs to use.” Another interviewee opined that “It might be hard to trust the data. I mean, I don’t think you can make major decisions based on a couple of tweets, on one or two tweets.” What’s interesting about this latter comment is that it implies that only one channel of information, Twitter, is to be used in decision-making, which is a false argument and one that nobody I know has ever made.

Either way, the trade-off between speed and accuracy is a well known one. As mentioned in this blog post from 2009, information is perishable and accuracy is often a luxury in the first few hours and days following a major disaster. As the authors of the study rightly note, “uncertainty is ‘always expected, if sometimes crippling’ (Benini, 1997) for NGOs involved in humanitarian relief.” Ultimately, the question posed by the authors of the Penn study can be boiled down to this: is some information better than no information if it cannot be immediately verified? In my opinion, yes. If you have some information, then at least you can investigate its veracity, which may lead to action. Even from a purely philosophical point of view, I believe the answer would still be yes.

Based on the interviews, the authors found that organizations engaged in immediate emergency response were less likely to make use of Twitter (or crowdsourced information) as a channel for information. As one interviewee put it, “Lives are on the line. Every moment counts. We have it down to a science. We know what information we need and we get in and get it…” In contrast, those organizations engaged in subsequent phases of disaster response were thought more likely to make use of crowdsourced data.

I’m not entirely convinced by this: “We know what information we need and we get in and get it…”. Yes, humanitarian organizations typically know, but whether they get it, and in time, is certainly not a given. Just look at the humanitarian responses to Haiti and Libya, for example. Organizations may very well be “unwilling to trade data assurance, veracity and authenticity for speed,” but sometimes this mindset will mean having absolutely no information. This is why OCHA asked the Standby Volunteer Task Force to provide them with a live crowdsourced social media map of Libya. In Haiti, while the UN is not thought to have used crowdsourced SMS data from Mission 4636, other responders like the Marine Corps did.

Still, according to one interviewee, “fast is good, but bad information fast can kill people. It’s got to be good, and maybe fast too.” This assumes that no information doesn’t kill people. And good information that arrives late can kill people too. As one of the interviewees admitted when using traditional methods, “it can be quite slow before all that [information] trickles through all the layers to get to us.” The authors of the study also noted that, “Many [interviewees] were frustrated with how slow the traditional methods of gathering post-disaster data had remained despite the growing ubiquity of smart phones and high quality connectivity and power worldwide.”

On a side note, I found the following comment during the interviews especially revealing: “When we do needs assessments, we drive around and we look with our eyes and we talk to people and we assess what’s on the ground and that’s how we make our evaluations.” One of the common criticisms leveled against the use of crowdsourced information is that it isn’t representative. But then again, driving around, checking things out and chatting with people is hardly going to yield a representative sample either.

One of the main findings from this research has to do with a problem in attitude on the part of humanitarian organizations. “Each of the interviewees stated that their organization did not have the organizational will to try out new technologies. Most expressed this as a lack of resources, support, leadership and interest to adopt new technologies.” As one interviewee noted, “We tried to get the president and CEO both to use Twitter. We failed abysmally, so they’re not– they almost never use it.” Interestingly, “most of the respondents admitted that many of their technological changes were motivated by the demands of their donors. At this point in time their donors have not demanded that these organizations make use of microblogged data. The subjects believed they would need to wait until this occurred for real change to begin.”

For me the lack of will has less to do with available resources and limited capacity and far more to do with a generational gap. When today’s young professionals in the humanitarian space work their way up to more executive positions, we’ll see a significant change in attitude within these organizations. I’m thinking in particular of the many dozens of core volunteers who played a pivotal role in the crisis mapping operations in Haiti, Chile, Pakistan, Russia and most recently Libya. And when attitude changes, resources can be reallocated and new priorities can be rationalized.

What’s interesting about these interviews is that despite all the concerns and criticisms of crowdsourced Twitter data, all interviewees still see microblogged data as a “vast trove of potentially useful information concerning a disaster zone.” One of the professionals interviewed said, “Yes! Yes! Because that would – again, it would tell us what resources are already in the ground, what resources are still needed, who has the right staff, what we could provide. I mean, it would just – it would give you so much more real-time data, so that as we’re putting our plans together we can react based on what is already known as opposed to getting there and discovering, oh, they don’t really need medical supplies. What they really need is construction supplies or whatever.”

Another professional stated that, “Twitter data could potentially be used the same way… for crisis mapping. When an emergency happens there are so many things going on in the ground, and an emergency response is simply prioritization, taking care of the most important things first and knowing what those are. The difficult thing is that things change so quickly. So being able to gather information quickly…. <with Twitter> There’s enormous power.”

The authors propose three possible future directions. The first is bounded microblogging, which I have long referred to as “bounded crowdsourcing.” It doesn’t make sense to focus on the technology instead of the methodology because at the heart of the issue are the methods for information collection. In “bounded crowdsourcing,” membership is “controlled to only those vetted by a particular organization or community.” This is the approach taken by Storyful, for example. One interviewee acknowledged that “Twitter might be useful right after a disaster, but only if the person doing the Tweeting was from <NGO name removed>, you know, our own people. I guess if our own people were sending us back Tweets about the situation it could help.”
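A minimal sketch shows how little machinery a "bounded" pipeline actually needs: reports are admitted only when their author appears on a vetted roster. All the names and report fields below are hypothetical examples, not drawn from any real deployment.

```python
# Minimal sketch of bounded crowdsourcing: only reports whose authors
# have been vetted by the organization enter the decision pipeline.
# This buys authentication at the cost of volume, the exact tradeoff
# discussed in the study. All names/fields are invented examples.

VETTED = {"@field_officer_1", "@local_partner_ngo", "@sbtf_volunteer_7"}

def bounded_filter(reports, roster=VETTED):
    """Keep only reports from vetted authors."""
    return [r for r in reports if r["author"] in roster]

incoming = [
    {"author": "@field_officer_1", "text": "Bridge on route 4 impassable"},
    {"author": "@unknown_user", "text": "Airport reportedly reopened"},
]
print(bounded_filter(incoming))  # only the vetted report survives
```

The one-line filter is the whole trust model, which is both the appeal and the limitation: vetting the roster, and growing it without an "automatic authentication system," remains the unsolved part the authors flag.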

Bounded crowdsourcing overcomes the challenge of authentication and verification but obviously with a tradeoff in the volume of data collected “if an additional means were not created to enable new members, through an automatic authentication system, to [join] the bounded microblogging community.” However, the authors feel that bounded crowdsourcing environments “undermine the value of the system” since “the power of the medium lies in the fact that people, out of their own volition, make localized observations and that organizations could harness that multitude of data. The bounded environment argument neutralizes that, so in effect, at that point, when you have a group of people vetted to join a trusted circle, the data does not scale, because that pool by necessity would be small.”

That said, I believe the authors are spot on when they write that “Bounded environments might be a way of introducing Twitter into the humanitarian centric organizational discourse, as a starting point, because these organizations, as seen from the evidence presented above, are not likely to initially embrace the medium. Bounded environments could hence demonstrate the potential for Twitter to move beyond the PR and Communications departments.”

The second possible future direction is to treat crowdsourced data as ambient, “contextual information rather than instrumental information (i.e., factual in nature).” This grassroots information could be considered as an “add-on to traditional, trusted institutional lines of information gathering.” As one interviewee noted, “Usually information exists. The question is the context doesn’t exist…. that’s really what I see as the biggest value [of crowdsourced information] and why would you use that in the future is creating the context…”.

The authors rightly suggest “that adding contextual information through microblogged data may alleviate some of the uncertainty during the time of disaster. Since the microblogged data would not be the single data source upon which decisions would be made, the standards for authentication and security could be less stringent. This solution would offer the organization rich contextual data, while reducing the need for absolute data authentication, reducing the need for the organization to structurally change, and reducing the need for significant resources.” This is exactly how I consider and treat crowdsourced data.

The third and final forward-looking solution is computational. The authors “believe better computational models will eventually deduce informational snippets with acceptable levels of trust.” They refer to Ushahidi’s SwiftRiver project as an example.

In sum, this study is an important contribution to the discourse. The challenges around using crowdsourced crisis information are well known. If I come across as optimistic, it is for two reasons. First, I do think a lot can be done to address the challenges. Second, I do believe that attitudes in the humanitarian sector will continue to change.

The Starfish and the Spider: 8 Principles of Decentralization

“The Starfish and the Spider: The Unstoppable Power of Leaderless Organizations” by Ori Brafman and Rod Beckstrom is still one of my favorite books on organizational theory and complex systems.

The starfish represents decentralized “organizations” while the spider describes hierarchical command-and-control structures. In reviewing the book, the Executive Chairman of the World Economic Forum wrote that “[it has] not only stimulated my thinking, but as a result of the reading, I proposed ten action points for my own organization.”

The Starfish and the Spider is about “what happens when there’s no one in charge. It’s about what happens when there’s no hierarchy. You’d think there would be disorder, even chaos. But in many arenas, a lack of traditional leadership is giving rise to powerful groups that are turning industry and society upside down.” The book draws on a series of case studies that illustrate 8 Principles of Decentralization. I include these below with short examples.

1. When attacked, a decentralized organization tends to become even more open and decentralized:

Not only did the Apaches survive the Spanish attacks, but amazingly, the attacks served to make them even stronger. When the Spanish attacked them, the Apaches became even more decentralized and even more difficult to conquer (21).

2. It’s easy to mistake starfish for spiders:

When we first encounter a collection of file-swapping teenagers, or a native tribe in the Arizona desert, their power is easy to overlook. We need an entirely different set of tools in order to understand them (36).

3. An open system doesn’t have central intelligence; the intelligence is spread throughout the system:

It’s not that open systems necessarily make better systems. It’s just that they’re able to respond more quickly because each member has access to knowledge and the ability to make direct use of it (39).

4. Open systems can easily mutate:

The Apaches did not—and could not—plan ahead about how to deal with the European invaders, but once the Spanish showed up, Apache society easily mutated. They went from living in villages to being nomads. The decision didn’t have to be approved by headquarters (40).

5. The decentralized organization sneaks up on you:

For a century, the recording industry was owned by a handful of corporations, and then a bunch of hackers altered the face of the industry. We’ll see this pattern repeat itself across different sectors and in different industries (41).

6. As industries become decentralized, overall profits decrease:

The combined revenues of the remaining four [music industry giants] were 25 percent less than they had been in 2001. Where did the revenues go? Not to P2P players [Napster]. The revenue disappeared (50).

7. Put people into an open system and they’ll automatically want to contribute:

People take great care in making the articles objective, accurate, and easy to understand [on Wikipedia] (74).

8. When attacked, centralized organizations tend to become even more centralized:

As we saw in the case of the Apaches and the P2P players, when attacked decentralized organizations become even more decentralized (139).

Patrick Phillipe Meier