On Technology and Learning, Or Why the Wright Brothers Did Not Create the 747

I am continually amazed by critics who outlaw learning curves, especially when it comes to new and innovative projects. These critics expect instantaneous perfection from the outset, even when new technologies are involved. So when a new project fails to meet their expectations (based on their own, often arbitrary measures of success), they publicly castigate the groups behind it for their “failures”.

What would the world look like if these critics were in charge? There would be little to no innovation, progress or breakthroughs. Isaac Newton would never have spoken the words “If I have seen further it is because I have stood on the shoulders of giants.” There would be no giants, no perspective, and no accumulation of knowledge.

Take the Wright brothers, Orville and Wilbur, who invented and built the world’s first successful airplane by developing controls that made fixed-wing powered flight possible. They too had their critics. Some in “the European aviation community had converted the press to an anti-Wright brothers stance. European newspapers, especially in France, were openly derisive, calling them bluffers.” The Wright brothers certainly failed on numerous occasions. But they kept tinkering and experimenting. It was their failures that eventually made them successful. As the Chinese proverb goes, “Failure is not falling down but refusing to get up.”

The Wright brothers did not create the 747 because they had to start from scratch. They first had to figure out the basic principles of flight. Today’s critics of new technology-for-social impact projects are basically blaming NGOs in the developing world for not creating 747s.

Finally, it is problematic that many critics of technology-for-social impact projects have little to no scholarly or professional background in monitoring and evaluation (M&E). These critics think that because they are technology or development experts they know how to evaluate projects—one of the most common mistakes made by self-styled evaluators as noted at the start of The Fletcher School’s introductory course on M&E.

The field of M&E is a science, not an amateur sport. M&E is an independent, specialized field of expertise in its own right, one that requires months (if not several semesters) of dedicated study and training.

Patrick Philippe Meier

Demystifying Crowdsourcing: An Introduction to Non-Probability Sampling

The use of crowdsourcing may be relatively new to the technology, business and humanitarian sectors, but when it comes to statistics, crowdsourcing is a well-known and established sampling method. Crowdsourcing is simply non-probability sampling, and the crowdsourcing of crisis information is just one application of it.

Let’s first review probability sampling, in which every unit in the population being sampled has a known probability (greater than zero) of being selected. This approach makes it possible to “produce unbiased estimates of population totals, by weighting sampled units according to their probability of selection.”

Non-probability sampling, on the other hand, describes an approach in which some units of the population have no chance of being selected or where the probability of selection cannot be accurately determined. An example is convenience sampling. The main drawback of non-probability sampling techniques is that “information about the relationship between sample and population is limited, making it difficult to extrapolate from the sample to the population.”
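To make the contrast concrete, here is a minimal Python sketch, using an entirely made-up population, of why known selection probabilities matter: a probability sample can be re-weighted into an unbiased estimate of the population total, while a convenience sample offers no such correction.

```python
import random

random.seed(42)

# Hypothetical population: 10,000 units with some value of interest.
population = [random.gauss(50, 10) for _ in range(10_000)]
true_total = sum(population)

# Probability sampling: every unit has a known, non-zero chance of
# selection, so weighting each sampled unit by 1/p (the Horvitz-Thompson
# estimator) yields an unbiased estimate of the population total.
p = 0.10
prob_sample = [y for y in population if random.random() < p]
ht_estimate = sum(y / p for y in prob_sample)

# Convenience sampling: only "easy to reach" units (here, arbitrarily,
# the high-value ones) can ever be selected, and the selection
# probabilities of the rest are unknown, so no unbiased weighting exists.
convenience = [y for y in population if y > 55][:500]
naive_estimate = len(population) * sum(convenience) / len(convenience)

print(f"true total:           {true_total:,.0f}")
print(f"weighted estimate:    {ht_estimate:,.0f}")    # close to the truth
print(f"convenience estimate: {naive_estimate:,.0f}")  # biased upward
```

The numbers and the “easy to reach” rule are of course invented; the point is only that the weighted estimate tracks the truth while the convenience estimate drifts.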

There are several advantages, however. First, non-probability sampling is a quick way to collect and analyze data in a range of settings with diverse populations. The approach is also a “cost-efficient means of greatly increasing the sample, thus enabling more frequent measurement.” In some cases, non-probability sampling may actually be the only approach available, a common constraint in much research, including many medical studies, not to mention Ushahidi Haiti. The method is also used in exploratory research, e.g., for hypothesis generation, especially when attempting to determine whether a problem exists or not.

The point is that non-probability sampling can save lives, many lives. Much of the data used for medical research is the product of convenience sampling. When you see your doctor, or when you’re hospitalized, that is not a representative sample. Should the medical field throw away all this data simply because it constitutes non-probability sampling? Of course not; that would be ludicrous.

The notion of bounded crowdsourcing, which I blogged about here, is also a known sampling technique called purposive sampling. This approach involves targeting experts or key informants. Snowball sampling is another type of non-probability sampling, one that may also be applied to the crowdsourcing of crisis information.

In snowball sampling, you begin by identifying someone who meets the criteria for inclusion in your study. You then ask them to recommend others who they may know who also meet the criteria. Although this method would hardly lead to representative samples, there are times when it may be the best method available. Snowball sampling is especially useful when you are trying to reach populations that are inaccessible or hard to find.
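As a toy illustration (all names and the referral network below are invented), the wave-by-wave logic of snowball sampling can be sketched in a few lines of Python:

```python
from collections import deque

def snowball_sample(seeds, referrals, meets_criteria, max_waves=3):
    """Grow a sample wave by wave: start from known eligible seeds,
    then follow each respondent's referrals, keeping only contacts
    who also meet the inclusion criteria."""
    sampled = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        person, wave = frontier.popleft()
        if wave >= max_waves:
            continue
        for contact in referrals.get(person, []):
            if contact not in sampled and meets_criteria(contact):
                sampled.add(contact)
                frontier.append((contact, wave + 1))
    return sampled

# Hypothetical referral network (who each respondent says they know).
referrals = {
    "ana": ["ben", "cruz"],
    "ben": ["dina"],
    "cruz": ["eli", "fay"],
    "dina": ["gus"],
}
eligible = {"ana", "ben", "cruz", "dina", "fay"}  # hard-to-reach group

sample = snowball_sample(["ana"], referrals, eligible.__contains__)
print(sorted(sample))  # referral chains reach people no list would find
```

Note how the sample’s composition depends entirely on who the first respondents happen to know, which is exactly why the method is convenient but not representative.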

Projects like Mission 4636 and Ushahidi-Haiti could take advantage of this approach by using two-way SMS communication to ask respondents to spread the word. Individuals who sent in text messages about persons trapped under the rubble could (later) be sent an SMS asking them to share the 4636 short code with people who may know of other trapped individuals. When the humanitarian response began to scale during the search and rescue operations, purposive sampling using UN personnel could also have been implemented.

In contrast to non-probability sampling techniques, probability sampling often requires considerable time and extensive resources. Furthermore, non-response effects can easily turn any probability design into non-probability sampling if the “characteristics of non-response are not well understood” since these modify each unit’s probability of being sampled.

This is not to suggest that one approach is better than the other since this depends entirely on the context and research question.

Patrick Philippe Meier

Think You Know What Ushahidi Is? Think Again

Ushahidi is the name of both the organization (Ushahidi Inc) and the platform. This understandably leads to some confusion. So let me elaborate on both.

Ushahidi the platform is a piece of software, not a methodology. The Ushahidi platform allows users to map information of interest to them. I like to think of it as democratizing map making in the style of neogeography. How users choose to collect the information they map is where methodology comes in. Users themselves select which methodology they want to use, such as representative sampling, crowdsourcing, etc. In other words, Ushahidi is not exclusively a platform for crowdsourcing. Nor is Ushahidi restricted to mapping crisis information. A wide range of events can be mapped using the platform. Non-events can also be mapped, such as football stadiums.

The platform versus methodology distinction is significant. Why? Because new users often don’t realize that they themselves need to think through which methodology to use to collect information. Furthermore, once they’ve chosen the methodology, they need to set up the appropriate tools to collect information using that methodology, and then actually collect the data.

For example, if a user wants to collect election data using representative sampling, they will need to ensure that they select a sample of polling stations that are likely to be representative of the overall population in terms of voting behavior. They will then need to decide whether they want to use SMS, email, phone calls, etc., to relay that information. Next, they’ll want to hire trusted monitors and train them on what and how to report. But none of this has anything to do with Ushahidi the platform.
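One common way to approximate such a sample is stratified random sampling: allocate the monitoring sample across regions in proportion to each region’s share of polling stations, then draw at random within each region. The sketch below uses invented station IDs and region sizes:

```python
import random

random.seed(7)

# Hypothetical frame of polling stations, keyed by region (the strata).
stations = {
    "north": [f"N-{i:03d}" for i in range(400)],
    "south": [f"S-{i:03d}" for i in range(250)],
    "urban": [f"U-{i:03d}" for i in range(350)],
}

def stratified_sample(strata, n_total):
    """Draw a simple random sample within each region, allocating the
    sample proportionally to each region's share of stations."""
    total = sum(len(units) for units in strata.values())
    sample = {}
    for region, units in strata.items():
        n_region = round(n_total * len(units) / total)
        sample[region] = random.sample(units, n_region)
    return sample

monitored = stratified_sample(stations, n_total=100)
for region, picks in monitored.items():
    print(region, len(picks))  # 40, 25, 35: proportional allocation
```

Proportional allocation is only a starting point; a real election-monitoring design would also stratify by urban/rural mix, past turnout and other correlates of voting behavior.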

Here’s an analogy: Microsoft Word won’t tell me what methodology to use if I want to write a paper on the future of technology. That is up to me, the author, to decide. If I don’t have any training in research methods and design, then I need to get up to speed independently. MS Word won’t provide me with insights on research methods. MS Word is just the platform. Coming back to Ushahidi, if an organization does not have adequate expertise, staff, capacity, time and resources to deploy Ushahidi, that is not the fault of the platform.

In many ways, the use of Ushahidi will only be as good as the organization or persons using the tool.

Ushahidi is only 10% of the solution (graphic by Chris Blow)

As my colleague Ory aptly cautioned: “Don’t get too jazzed up about Ushahidi. It is only 10% of the solution.” The other 90% is up to the organization using the platform. If they don’t have their act together, the Ushahidi platform won’t change that. If they do and successfully deploy the Ushahidi platform, then at least 90% of the credit goes to them.

Ushahidi the organization is a non-profit tech company. The group is not a humanitarian organization. We do not take the lead in deployments. In the case of Haiti, I launched the Ushahidi platform at The Fletcher School (where I am a PhD student) and where graduate students (not Ushahidi employees) created a “live” map of the disaster for several weeks. The Ushahidi tech team provided invaluable technical support around the clock during those weeks. It was thus a partnership led by The Fletcher Team.

We do not have a comparative advantage in deploying platforms and our core mission is to continue developing the Ushahidi platform. On occasion, we partner on select projects but do not take the lead on these projects. Why do we partner at all? Because we are required to diversify our business model as part of the grant we received from the Omidyar Network. And I think that’s a good idea.

Patrick Philippe Meier

Information Sharing During Crisis Management in Hierarchical vs. Network Teams

The month of May turned out to be ridiculously busy, so much so that I haven’t been able to blog. And when that happens, I know I’m doing too much. So my plan for June is to slow down, prioritize and do more of what I enjoy, e.g., blog.

In the meantime, the Journal of Contingencies and Crisis Management just published an interesting piece on “Information Sharing During Crisis Management in Hierarchical vs. Network Teams.” The topic and findings have implications for digital activism as well as crisis management.

Here’s the abstract:

This study examines the differences between hierarchical and network teams in emergency management. A controlled experimental environment was created in which we could study teams that differed in decision rights, availability of information, information sharing, and task division. Thirty-two teams of either two (network) or three (hierarchy) participants (N=80 in total) received messages about an incident in a tunnel with high-ranking politicians possibly being present. Based on experimentally induced knowledge, teams had to decide as quickly and as accurately as possible what the likely cause of the incident was: an attack by Al Qaeda, by anti-globalists, or an accident. The results showed that network teams were overall faster and more accurate in difficult scenarios than hierarchical teams. Network teams also shared more knowledge in the difficult scenarios, compared with the easier scenarios. The advantage of being able to share information that is inherent in network teams is thus contingent upon the type of situation encountered.

The authors define a hierarchical team as one in which members pass on information to a leader, but not to each other. In a network team, members can freely exchange information with each other. Here’s more on the conclusions derived by the study:

Our goal with the present study was to focus on a relatively simple comparison between a classic hierarchical structure and a network structure. The structures differed in terms of decision rights, availability of information, information sharing, and task division. Although previous research has not found unequivocal support in terms of speed or accuracy for one structure or the other, we expected our network structure to perform better and faster on the decision problems. We also expected the network teams to learn faster and exchange more specialist knowledge than the hierarchical teams.

Our hypotheses are partially supported. Network teams are indeed faster than hierarchical teams. Further analyses showed that network teams were, on average, as fast as the slowest working individual in the hierarchical teams. Analyses also showed that network teams very early on converged on a rapid mode of arriving at a decision, whereas hierarchical teams took more time. The extra time needed by hierarchical teams is therefore due to the time needed by the team leader to arrive at his or her decision.

We did not find an overall effect of team structure on the quality of team decision, contrary to our prediction. Interestingly, we did find that network teams were significantly better than hierarchical teams on the Al Qaeda scenarios (as compared with the anti-globalist scenarios). The Al Qaeda scenarios were the most difficult scenarios. Furthermore, scores on the Post-test showed that there was a larger transfer of knowledge on Al Qaeda from the specialist to the nonspecialist in the network condition as compared with the hierarchical condition. These results indicate that a high level of team member interaction leads to shared specialist knowledge, particularly in difficult scenarios. This in turn leads to more accurate decisions.

This study focused on the information assessment part of crisis management, not on the operative part. However, there may not be that much of a difference in terms of the actual teamwork involved. When team members have to carry out particular tasks, they may frequently also have to share specialist knowledge. Wilson, Salas, Priest, and Andrews (2007) have studied how teamwork breakdowns in the military may contribute to fratricide, the accidental shooting of one’s own troops rather than the enemy. This is obviously a very operative part of the military task. Teamwork breakdowns are subdivided into communication, coordination and cooperation, with information exchange, mutual performance monitoring, and mutual trust as representative teamwork behaviours for each category (Wilson et al., 2007).

We believe that it is precisely these behaviours that are fostered by network structures rather than hierarchical structures. Network structures allow teams to exchange information quickly, monitor each other’s performance, and build up mutual trust. This is just as important in the operative part of crisis management work as it is in the information assessment part.

In conclusion, then, network teams are faster than hierarchical teams, while at the same time maintaining the same level of accuracy in relatively simple environments. In relatively complex environments, on the other hand, network teams arrive at correct decisions more frequently than hierarchical teams. This may very likely be due to a better exchange of knowledge in network teams.

Patrick Philippe Meier

How to Run a Successful Crowdsourcing Project

My colleague Ankit Sharma at the London School of Economics (LSE) recently sent me his research paper entitled “Crowdsourcing Critical Success Factor Model” (PDF). It’s definitely worth a read. Ankit is interested in better understanding the “dynamic and innovative discipline of crowdsourcing by developing a critical success factor model for it.” He focuses specifically on mobile crowdsourcing and does a great job unpacking the term.

Ankit first reviews four crowdsourcing projects to inform the development of his critical success model: txtEagle, Ushahidi, Peer Water Exchange and mCollect. He then notes the crucial difference between outsourcing and crowdsourcing. The latter’s success is dependent on the scale of crowd participation. This means that incentives need to be tailored to recruit the most effective collaborators, while “the motive of the crowd needs to be aligned with the long term objective of the crowdsourcing initiative.” To this end, Ankit defines successful crowdsourcing in terms of participation.

Ensuring participation requires that the motives of the crowd be directly aligned with the long term objectives of the crowdsourcing initiative. “Additionally, to promote participation the users must use and accept the technology of crowdsourcing.” Ankit draws on Heeks and Nicholson (2004), Carmel (2003) and Farrell (2006) to develop the following model.

The five peripheral factors above “affect the motive alignment of the crowd which is the prime determinant of success of the crowdsourcing initiative. It is assumed to directly affect user participation. The success of the initiative is expected to bring in more participation. Hence, the relationship between motive alignment and crowdsourcing success is bidirectional in the model.”

  • Vision and Strategy: “The coherence of the initiative’s vision and strategy with the aspirations of the crowd ensures that the crowd is willing to participate in it.”
  • Human Capital: The skills and abilities that the crowd possesses are a determinant of successful crowdsourcing. The more skillful and able the crowd, “the less effort required by the crowd to make a meaningful contribution to the initiative.”
  • Infrastructure: “Crowdsourcing requires abundant, reliable and cheap telephone or mobile access for its communication needs in order to ensure participation of the crowd.”
  • Linkages and Trust: Crowdsourcing initiatives all involve a time or information cost for the crowd, which is why developing the trust factor is critical. Proper linkages can also “add a substantial trust aspect to the crowdsourcing initiative.”
  • External environment: “The macroeconomic environment comprising of the governance support, business environment, economic environment, living environment and risk profiles are important determinants of the success of the crowdsourcing initiative.”
  • Motive alignment: “Motive alignment of the crowd may be defined as the extent to which crowd is able to associate with long term objective of crowdsourcing initiative thereby encouraging its wider participation.” The table below explains how the peripheral factors affect the motive alignment of the crowd.

Ankit applies his matrix to the four case studies cited earlier. This yields the following summary:

Based on this analysis, Ankit argues that for crowdsourcing projects to succeed it is “critical that the crowd is viewed as a partner in the initiative. The needs, aspirations, motivations and incentives of the crowd to participate in the initiative must remain the most important consideration while developing the crowdsourcing initiative. The practitioners must understand the crowd motivation and align their goals according to it.” In an ideal scenario, Ankit notes that technology must be “optimally usable” without the need to provide training and assistance. Successful crowdsourcing initiatives also require an “aggressive marketing and public relations plan.”

The main question I look forward to discussing with Ankit is this: what level of crowd participation is sufficient for a crowdsourcing initiative to be deemed successful? Should this be a percentage, e.g., the percentage of a given population participating in the crowdsourcing project? Or should it be an absolute number? This is not an academic question. Who decides whether a crowdsourcing project is successful, and on what grounds?

Patrick Philippe Meier

The Future of News: Mobilizing the Masses to Write the First Draft of History

I’ve been meaning to write this blog post since February, when I presented Ushahidi to the BBC, CNN, the UK Guardian and Channel 4 in London. A session we just had at Foo Camp East made me realize it’s high time I wrote it.

The idea I pitched to CNN et al. was as follows: use a dedicated Ushahidi smart phone app that allows anyone to be a real-time iReporter. You download the app and allow CNN to know your location (you decide how precisely: within a 10-mile, 1-mile or 100-yard radius, for example). As the CNN newsroom gets wind of a new, potentially newsworthy incident, they just press the “red button”.

This red button sends out a message to all mobile iReporters within, say, 100 yards of where the incident took place asking them to take a quick picture if they happen to be just around the block. These users—turned volunteer CNN citizen journalists—can then be mobilized to report in near real-time. They could send in geo-referenced pictures annotated with comments and video footage of unfolding events.
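Under the hood, the “red button” is just a geo-fenced query: compute each opted-in reporter’s distance from the incident and alert those within the radius. A minimal sketch (names and coordinates invented; a real app would query a spatial index rather than loop over a dict):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6_371_000 * asin(sqrt(a))  # mean Earth radius in meters

def red_button(incident, reporters, radius_m=100):
    """Return the iReporters whose last known location falls within
    radius_m of the incident: the group that would be pinged."""
    lat, lon = incident
    return [name for name, (rlat, rlon) in reporters.items()
            if haversine_m(lat, lon, rlat, rlon) <= radius_m]

# Hypothetical reporters who opted in to location sharing.
reporters = {
    "amy":  (40.7130, -74.0060),  # ~20 m from the incident
    "raj":  (40.7136, -74.0057),  # ~90 m away
    "lena": (40.7200, -74.0060),  # ~800 m away: not alerted
}
print(red_button((40.7128, -74.0060), reporters))
```

The radius could be widened in steps if too few reporters respond to the first ping.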

I think this kind of app would appeal to many would-be citizen journalists. It costs nothing; you just download it to your smart phone, and if you ever find yourself in the right place at the right time, your breaking news picture may make it to CNN. There could be additional incentives, such as tracking the number of reports you’ve submitted to CNN following a request. In other words, you could turn this into a quasi-competition or game. Yearly awards could be given out.

Ideally, you’d have a dozen citizens respond to the red button call, scrambling to be the first on site to take the picture, or to be the one who takes the best picture of the incident. This would allow CNN to triangulate the incoming information, possibly creating a Photosynth product updated in real-time. Comments (i.e., the captions that come with the pictures) could also be cross-validated for reliability purposes. Here’s a 3D rendering of Venice using Photosynth:

One other use-case for this Ushahidi mobile app would be for users to submit pictures/reports without being solicited by CNN. In fact, more often than not, these mobile iReporters are likely to be the first to break the news of an incident to CNN—rather than the other way around. Once an iReporter does this and CNN receives the geo-located picture, they can press the red button to mobilize other would-be iReporters to the scene.

This is why I love citing the Ushahidi article that my New York Times colleague Anand wrote up earlier this year. The screen shot below from the Ushahidi-Haiti deployment literally illustrates the reasoning behind Anand’s question: “Will the triangulated crisis map be regarded as the new first draft of history?” The beauty of crowdsourcing is that it opens the floodgates of information. This means more and more witnesses can capture evidence of the same historical events unfolding. In other words, there is overlap and the triangulated map becomes possible.

The key word for me in Anand’s quote is “draft”. History is now a draft, not a finished product, but a work in progress, one that is now written (and corrected) by the crowd. Anand adds, “They say that history is written by the victors. But now, before the victors win, there is a fresh chance to scream out, with a text message that will not vanish.” I’d go even further and say that the crowd can now be the victors by being mobile iReporters.

Think of these smart phone apps as “seismographs” for crises. They would allow users to form a veritable real-time, real-space human sensor web, or, as Secretary Clinton describes it, “a new nervous system for the planet.”

Of course, there are liability issues with mobilizing the masses to write the first draft(s) of history. So disclaimers will be necessary. For example: do not try to cover a story if doing so places you in physical danger or causes psychological harm. You’d probably have to give up all rights to the picture/text you submit, but at least you’d have your name credited on CNN.

Patrick Philippe Meier

Crowdsourcing and the Veil of Ignorance: A Question of Morality?

Patrick Ball and I had a series of long email exchanges this past week on the much-talked-about issue of crowdsourcing versus representative sampling. It’s an old issue that keeps coming up. But there’s really no debate, in my opinion. Crowdsourced data is not necessarily representative. That really should not be breaking news.

Also, it is worth repeating that Ushahidi is a platform, not a methodology. So an election-monitoring organization like the National Democratic Institute (NDI) could certainly generate representative polling data using Ushahidi by applying random sampling methods, for example. I already blogged about this several months ago in a post titled “Three Common Misconceptions About Ushahidi.” So I’m not going to rehash this here. Instead, I’d like to take a more “philosophical” approach.

In “A Theory of Justice,” the philosopher John Rawls introduces the “veil of ignorance,” a thought experiment designed to determine the morality of a certain issue. The idea goes something like this: imagine that you have to decide on the morality of an issue before you are born, i.e., you stand behind a veil of ignorance since you don’t know where you will be born, into what race, into what kind of family, etc.

As Rawls himself puts it: “no one knows his place in society, his class position or social status; nor does he know his fortune in the distribution of natural assets and abilities, his intelligence and strength, and the like.”

For example, in the imaginary society, you might or might not be intelligent, rich, or born into a preferred class. Since you may occupy any position in the society once the veil is lifted, this theory encourages thinking about society from the perspective of all members. The veil of ignorance is part of the long tradition of thinking in terms of a social contract.

What does this have to do with crowdsourcing? If you were standing behind this metaphorical veil of ignorance, would you outlaw the crowdsourcing of crisis information on the basis that the data may not be representative? Or would you still want to receive SMS alerts based on crowdsourced information? The text messages sent to Ushahidi-Haiti by Haitians in life-and-death situations were not necessarily statistically representative, but they saved lives.

What would you choose?

Patrick Philippe Meier

My TEDx Talk: From Photosynth to ALLsynth

I just gave a TEDx talk and my presentation played off a recent blog post of mine entitled “Wag The Dog, or How Falsifying Crowdsourced Information can be a Pain.” I introduced some new ideas and angles to the topic so here is basically a blog post version of the presentation.

We all know that open crowdsourcing platforms are susceptible to information vandalism, i.e., false information deliberately used to mislead. For example, if an Ushahidi platform were used in Iran, the government there could start reporting events to Ushahidi that never happened; perhaps events suggesting that protesters attacked first and that riot police were just acting in self-defense. But I’m going to argue that falsifying crowdsourced information can actually be a pain. And I’m going to use the analogy of “Wag the Dog” to explain why. If you haven’t watched the movie, the story is about a White House administration that pretends a war has broken out in Albania to divert public opinion and, hopefully, boost the President’s ratings prior to re-election.

Here’s a 30 second highlight on how they created a fake war:

In a way, Wag the Dog already happened for real. Except the story was called “The War of the Worlds,” and it was played as a radio broadcast in 1938. “The War of the Worlds” is a drama about a Martian invasion of Earth. What was particularly fun about this radio broadcast was that the first two-thirds of the hour-long story was just a series of simulated news bulletins, and the story ran uninterrupted, i.e., without commercials. So many radio listeners in the US freaked out, thinking a real invasion was taking place!

The panic this caused even made it onto the front page of the New York Times! Clearly, pulling off a Wag-the-Dog in the 1930s was a piece of cake!

And that’s because the information ecosystem looked something like this in the 1930s: largely disconnected and broadcast only, i.e., one-to-many. Can anyone point out an important node that should be included in this ecosystem? That’s right, the newspaper. But the paper could not have been printed fast enough to counter the fears the radio broadcast was spreading in real time; unlike today, of course, thanks to online news.

Today’s information ecosystem obviously looks rather different: many-to-many, peer-to-peer, two-way, real-time information and communication technologies. Now, we might argue that this kind of ecosystem is easier for repressive regimes to game since the system is closely integrated and interoperable, which means information can propagate very quickly. Secretary Clinton recently called our information ecosystem the new nervous system of the planet. But then again, these diverse sources of user-generated content could also make it easier to triangulate and filter out false information.

For example, in the case of Iran, the high volume of pictures and videos posted on Flickr and YouTube made it rather difficult for the government to claim nothing was happening. Information blockades are likely to join the Berlin Walls of history. Today, you can get pictures of the same incident from three different camera phones, in addition to tweets and text messages, etc.

This is what Ushahidi is about, aggregating crisis information across different media and mapping that information in near real time to improve transparency, accountability and coordination.

Take the Ushahidi-Haiti map, for example. Crowdsourcing crisis information on Haiti allowed us to map several thousand incidents over just a few weeks, which actually saved lives on the ground. The incidents we mapped came from a myriad of sources: thousands of text messages directly from Haiti, hundreds of tweets, information from Facebook groups, online media, live Skype chats with the search and rescue teams in Port-au-Prince, listservs, radio, you name it. Volunteers at The Fletcher School mapped this information in near real-time for several weeks, and first responders used the map to save lives.

Check out this animation of the events unfolding from just a few hours after the quake.

What you see are events “overlapping” and clustering, i.e., on several occasions we received two or more text messages from different numbers reporting the same event, and then a tweet with similar information, for example. The crowdsourcing of crisis information allows us to triangulate and validate information thanks to reporting from a myriad of sources in near real-time. This would hardly have been possible in the 1930s, which is what prompted my colleague Anand at the New York Times to write an article on our work and ask,

They say that history is written by the winners; will future history be written by the crowd?
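The triangulation logic itself can be sketched simply: treat reports that fall close together in space and time as candidates for the same event, and mark an event as corroborated once it has two or more distinct sources. The reports below are invented stand-ins for SMS and Twitter traffic:

```python
from datetime import datetime, timedelta

# Invented reports: (source, lat, lon, timestamp).
reports = [
    ("sms-1",     18.5392, -72.3350, datetime(2010, 1, 13, 10, 2)),
    ("sms-2",     18.5394, -72.3348, datetime(2010, 1, 13, 10, 9)),
    ("twitter-1", 18.5391, -72.3352, datetime(2010, 1, 13, 10, 20)),
    ("sms-3",     18.5710, -72.2950, datetime(2010, 1, 13, 11, 0)),
]

def triangulate(reports, max_deg=0.001, max_gap=timedelta(minutes=30)):
    """Greedy clustering: reports close in space and time are grouped
    as one event; events with 2+ distinct sources are corroborated."""
    clusters = []
    for report in reports:
        src, lat, lon, t = report
        for cluster in clusters:
            _, lat0, lon0, t0 = cluster[0]
            if (abs(lat - lat0) < max_deg and abs(lon - lon0) < max_deg
                    and abs(t - t0) < max_gap):
                cluster.append(report)
                break
        else:  # no nearby cluster found: start a new candidate event
            clusters.append([report])
    return [c for c in clusters if len({r[0] for r in c}) >= 2]

corroborated = triangulate(reports)
print(len(corroborated))  # one event, confirmed by three distinct sources
```

The thresholds here are arbitrary; in practice a mapping team would tune them and still review clusters by hand, since two nearby reports can describe two different incidents.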

Ushahidi’s crowded map of Haiti reminded me of Photosynth, which takes hundreds of crowdsourced pictures and “stitches” them together to reproduce historical monuments. In 3D, no less!

Here’s a quick 20 second video demo:

So the question is, can Ushahidi become the “ALLsynth” by stitching together crowdsourced crisis information across many different types of media? Ushahidi platforms have been deployed hundreds of times across the world. Here are just four examples.

From mapping the Swine Flu outbreak to reporting on the war in Gaza, to citizen-powered election monitoring in India and disaster response in the Philippines. Would stitching together these hundreds of platforms amount to creating an ALLsynth? What would it take to game an ALLsynth?

As I mentioned in my Wag the Dog post, perhaps some of the following:

  • Dozens of pictures from as many different camera phones of an event that never happened.
  • Text messages using different wording to describe an event that never happened.
  • Tweets (not retweets!).
  • Fake blog posts, Facebook groups and Wikipedia entries.
  • Fake video footage. Heck, you’d probably want to hack the international media and plant a fake article on the New York Times home page.
  • If you really want to go all out, you’d want to get hundreds of (paid?) actors like in The Truman Show.
  • You’d likely want to cordon off an entire area of the city or city outskirts.
  • Then you’d want to choreograph a few fight scenes with these actors.
  • A few rehearsals would probably be in order too.
  • Oh and of course props, plus lots of ketchup if you want things to look like they went badly.

In other words, you’d probably want to move to Hollywood to fabricate all this… That said, there’s another way that repressive regimes could deal with an unwanted Ushahidi platform, like the one being used by Sudanese civil society groups to monitor the elections currently taking place. We found out yesterday from our Sudanese colleagues that the site was no longer accessible in the Sudan (see official press release here in PDF). Blocking and censoring websites is really easy for governments to do, and we expected that Sudan would be no different.

So our Sudanese colleagues have been working with their tech-savvy friends to circumvent the censorship and continue mapping election irregularities—this is my applied dissertation research in action; I just never thought that my own actions would influence the data. They set up a mirror site under a different domain name. This may become a cyber game of cat-and-mouse, and there are plenty of precedents: civil society finds a loophole, which is then blocked by the state, which prompts the search for another loophole, and so on. I expect that repressive regimes may eventually give up on blocking websites given the likely futility. Instead, they may try to game the platforms by falsifying crowdsourced information.

But as I have just argued, falsifying crowdsourced information can be a pain. So if repressive regimes start pouring money into their domestic film industries, particularly in blue screen technology, you’ll know why, and this is what you can expect to happen next:

Patrick Philippe Meier

Crisis Response and SMS Systems Management for NGOs and Governments

Guest blog post: Bart Stidham is an enterprise architect committed to bringing positive change to the world via better information systems architecture. He has served as CTO of four companies including one of the largest communications companies in the world, been a senior executive at Accenture, and served as CIO of the largest NGO funded by USAID. He is an independent consultant and can be found in Washington, DC when he is not traveling.

This blog post builds off of and supports Patrick Meier’s previous post on developing an SMS Code of Conduct for Humanitarian Response. Patrick raises many important issues, and with the success of Ushahidi-Haiti it is likely that we will see a vast increase in the use of similar SMS-based information management systems in the future. While the deployment of such systems, and of communications systems generally, tends to be orderly and well structured in normal circumstances, during crises that order may break down and these systems may negatively impact one another. For this reason I applaud Patrick’s effort to raise this issue; my hope is that the “normal order” imposed by governments and societies will help prevent communications systems from disrupting one another in disasters, emergencies, and crises.

I believe Patrick’s concerns are best discussed in the larger issue of frequency spectrum management. This is a huge issue and one that needs substantial education within the entire response space. It is a growing problem across each and every communications system not just in crises but also globally as we humans desire to communicate more in more ways and with more devices. There are limits to the amount of information that can be “pushed” through any communications system and those limits increasingly have to do with the laws of physics, not just the design of the systems.

The electromagnetic (EM) frequency spectrum is the basis of all wireless communication. We started using it in the late 1800s with the first radio transmissions. Long ago we exhausted the entire spectrum and are now trying to find ways to reuse parts of it more efficiently. However, it is critical that we protect this “public commons” on which so many of our communications systems depend.

Every communications system needs a “physical channel.” These vary widely, but they all share some common characteristics. One is the problem of “collisions,” which are bad because colliding information is not delivered successfully. As humans using the physical channel of sound and speech, we encounter this in normal conversation whenever we meet in groups: when two or more people talk loudly over each other, no one understands what either is saying.

There are multiple ways to deal with collisions and every communications system must manage collisions or the system collapses. One way is to have a token and you are only allowed to talk if you hold the token. This method of managing communications was used brilliantly by various Native American tribes when discussing heated issues such as war – if you are not in possession of the peace pipe (the token) you are not allowed to speak. This forces everyone to listen to what you are saying and to politely take turns speaking. It is passed back and forth and everyone gets a turn. There are several types of network architecture that use this exact method for avoiding collisions.

Another method is collision avoidance by assigning each speaker a window of time to speak in. This is roughly the approach used by GSM for instance. Yet another method is collision detection where you allow for a certain statistical overlap and all parties know that the last “conversation” collided with another and the information was lost. The system then corrects the problem. This is not as efficient but is easy to do and cheap to implement. This is what Ethernet uses.
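The collision-detection approach can be illustrated with a toy simulation of my own (a sketch, not any real Ethernet implementation): several senders share one channel in discrete time slots, and every sender involved in a collision retries after a random, exponentially growing backoff, which is roughly the strategy classic Ethernet used.

```python
import random

def simulate(senders=5, slots=200, seed=42):
    """Toy shared-channel model with collision detection and
    truncated exponential backoff.  Returns (delivered, collisions)."""
    rng = random.Random(seed)
    next_try = [0] * senders   # slot at which each sender transmits next
    attempts = [0] * senders   # consecutive collisions per sender
    delivered = collisions = 0
    for slot in range(slots):
        ready = [i for i in range(senders) if next_try[i] <= slot]
        if len(ready) == 1:
            i = ready[0]                 # channel clear: message goes through
            delivered += 1
            attempts[i] = 0
            next_try[i] = slot + rng.randint(1, 5)  # schedule next message
        elif len(ready) > 1:
            collisions += 1              # two+ talkers: everyone loses
            for i in ready:              # each backs off a random, growing amount
                attempts[i] += 1
                next_try[i] = slot + rng.randint(1, 2 ** min(attempts[i], 6))
    return delivered, collisions

delivered, collisions = simulate()
print(delivered, collisions)
```

The random backoff is the whole trick: colliding senders are unlikely to retry in the same slot twice, so the system recovers cheaply without any central coordinator, at the cost of some lost capacity.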

Finally as systems are deployed and interact with each other in a certain physical space they need to divide up the space. This can be done by frequency or cables or physical area or by time or all of the above.

In our discussion, SMS codes are best likened to frequencies (although the analogy is not exact). The advantage of long codes is that no two NGOs can ever end up with the same one, as this is handled by the carriers and their agreements. Internationally, no two carriers can ever issue the same phone number or long code. If all NGOs stuck with long codes, i.e., full phone numbers, we could avoid the problem Patrick is rightly concerned with.

NGOs and other organizations can mistakenly end up issuing the same short code within a geographic area, and we should all be concerned about this, exactly as Patrick is. This can happen because short codes are for humans, not for the system itself. If two carriers use different underlying cell phone technologies, they can both issue the same short code and neither will interfere technically with the other. Unfortunately, this could have disastrous consequences for the socialization of the short codes to the local or larger population if the codes cross technical or geographic lines. This problem is largely unique to SMS, a product of the plethora of technologies, carriers, and bands (frequencies) that can be deployed in a large physical area, and of the fact that short codes exist for human convenience.

Right now there are 14 frequency bands just within the GSM voice system (thankfully quad-band phones support all the widely used ones) and another 14 for data (furthermore, the data “bands” actually cover a huge range of frequencies). This is why it is possible, within one region, country or city, to have two “overlapping” short codes on two or more different carriers: each code will work only on the carrier that operates on that particular GSM frequency band. The system doesn’t care, but it can be confusing to us humans.
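A hypothetical registry sketch (country and carrier names invented for illustration) shows how the same short code can be validly claimed by two carriers in one country, which is exactly the kind of overlap that confuses the public once a code is advertised:

```python
from collections import defaultdict

# (country, short_code) -> set of carriers claiming that code.
# Each claim is technically valid on its own network; the conflict
# only exists from the human point of view.
registry = defaultdict(set)

def claim(country, code, carrier):
    registry[(country, code)].add(carrier)

def overlapping(country):
    """Short codes claimed by more than one carrier in a country."""
    return sorted(code for (c, code), carriers in registry.items()
                  if c == country and len(carriers) > 1)

claim("XA", "4040", "CarrierOne")
claim("XA", "4040", "CarrierTwo")   # same code, different network
claim("XA", "733", "CarrierOne")
print(overlapping("XA"))            # → ['4040']
```

A shared registry like this is essentially what coordination among NGOs, carriers and regulators would have to provide: a single place to detect that a publicized code is ambiguous before the public ever sees it.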

Another concern Patrick has raised is actually “subject matter frequency” overlap (or collision) and the confusion that can result for us simpleminded humans.

It makes no difference how many SMS codes are used as long as they are long codes, or as long as short codes remain private. The only time there is a problem is when two groups set up short codes that become public (meaning they are advertised in some way to the general public) as “the right number for X,” where X is the same subject matter area.

In order to speed response, many countries do NOT follow the US 911 system, which uses a single short number for all emergencies. For instance, Austria uses no fewer than nine “short code” voice numbers, each for a separate emergency type. That’s great if you live there, have them all memorized, and know that for extreme sports there is a number just for “alpine rescue” to get your friend off the ledge he crashed into with his paraglider. It speeds vital response and gets the right team dispatched in the least time. It does, however, require a massive amount of public education.

In the US the government decided to have a “one number fits all” system. This was in response to the fact that previously we had thousands of local numbers for each fire and police department, hospital and ambulance service. Without a local phone book it wasn’t possible to know who to call in an emergency. We designed the 911 system as a way to solve this problem and looped all three major responders into the one system. This was then deployed on a county by county basis across the US. There is no national 911 system. The system scales by dividing itself into small geographic sections.

SMS systems tend to cover larger areas because SMS carriers are geographically larger than the old local phone POPs (points of presence) that became the basis of the US 911 system. Another major design concern is the total carrying capacity of the carrier SMS system itself. SMS is NOT designed to be used for “one to many” messages. That was never part of the design, and the system can be knocked out if its overall limits are exceeded. At that point the SMS systems themselves collapse under the load, start failing, and can trigger a cascading failure of the entire carrier network in a region – this means that SMS can knock out voice. It appears that such a failure occurred in Haiti to one of the local carriers that implemented an SMS emergency broadcast system in conjunction with an NGO, so this is a real problem.

Getting back to Patrick’s identified concern – we should be worried when multiple SMS “subject channels” are socialized via the mass media and confuse the public. In Haiti that didn’t happen because there was only one channel, due largely to the work and efforts of the 4636 Haiti.Ushahidi community.

I believe this is also unlikely to happen in the future because, I hope, mass media outlets will simply refuse to say “use any of the following SMS codes for health, and these for x and y and z.” I don’t think they will do this haphazardly; confusing the public could endanger their operating license from the host country.

Furthermore, countries and cities are typically aware of this whole discussion and carefully control the distribution of short codes (though this varies widely from region to region). The country that issues the carrier its license to operate the infrastructure is the ultimate authority here and reserves the right to yank someone off the air or kick them out of the country for failure to follow the rules. Frequency spectrum must be managed for the greater public good or the classic “tragedy of the commons” will result. This is the concern Patrick has brought to light.
One cannot and should not assume that the rules, laws and policies we (individuals) are used to operating under in our home country apply elsewhere. The term “sovereign nation” means exactly that: they set their own laws concerning how things operate, including technology and communications systems. For instance, WiFi is NOT WiFi everywhere; a WiFi router sold in Japan is illegal to operate in the US.

Some well-meaning but largely uneducated NGOs deployed systems in Haiti that badly broke rules, laws and policies, and the Government of Haiti (and the US Government on its behalf) was very polite to them. These NGOs stepped all over local businesses and disrupted them. Had this happened in the US, the FCC would have issued huge fines – fines that would likely have driven them out of business, and for good reason: they were exploiting the “public commons” for their own advantage. Whether they meant to or not is irrelevant, just as ignorance of the law is no excuse.

In the past, most responders to such emergencies were large NGOs with trained communications teams that knew they must coordinate their use of various communications platforms with each other or everyone would suffer. This was easy and obvious because NGOs, governments and businesses made extensive use of UHF and VHF radios. Because these systems were voice-based, it was obvious when you had a problem and when someone was on your assigned frequency. Furthermore, you frequently had the opportunity to yell at them over that same channel.

In the era of digital communications we no longer have the ability to yell at anyone; in fact, both the legal, official user and the illegal user may be unaware that they are colliding and causing both systems to fail. This is a huge problem because neither party has any way to know they are interfering with the other, much less how or where to resolve the problem.

In conclusion, I applaud Patrick’s efforts, as he has raised an important issue that all NGOs that respond to emergencies (both in the US and abroad) must be aware of. Education is critical. Please tell your organization that they must contact and coordinate with the official frequency manager, typically the local government’s communications agency or ministry, prior to deploying any communications equipment. Failing to do so is typically illegal and can have grave consequences in emergencies, crises and disasters.

Failing Gracefully in Complex Systems: A Note on Resilience

Macbeth’s castle, Act 1, Scene VII. Macbeth and Lady Macbeth are plotting Duncan’s death.

Macbeth: If we should fail?

Lady Macbeth: Then we fail! But screw your courage to the sticking place, And we’ll not fail.

Complex dynamic systems tend to veer towards critical change. This is explained by the process of Self-Organized Criticality (SOC). Over time, non-equilibrium systems with extended degrees of freedom and a high level of nonlinearity become increasingly vulnerable to collapse. As the Santa Fe Institute (SFI) notes,

“The archetype of a self-organized critical system is a sand pile. Sand is slowly dropped onto a surface, forming a pile. As the pile grows, avalanches occur which carry sand from the top to the bottom of the pile.”
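The sandpile is simple enough to simulate. Here is a minimal sketch of the Bak-Tang-Wiesenfeld model the SFI quote describes (grid size and grain count are arbitrary choices of mine): any cell that accumulates four grains topples, sending one grain to each neighbour, and a single dropped grain can trigger anything from no topplings at all to a system-wide avalanche.

```python
import random

def sandpile(size=11, grains=2000, seed=1):
    """Drop grains one at a time on a grid; record the avalanche size
    (number of topplings) triggered by each grain.  Grains that fall
    off the edge leave the system, which keeps it near criticality."""
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    avalanche_sizes = []
    for _ in range(grains):
        grid[rng.randrange(size)][rng.randrange(size)] += 1
        topples = 0
        unstable = True
        while unstable:                      # keep sweeping until stable
            unstable = False
            for r in range(size):
                for c in range(size):
                    if grid[r][c] >= 4:      # cell topples
                        grid[r][c] -= 4
                        topples += 1
                        unstable = True
                        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            if 0 <= r + dr < size and 0 <= c + dc < size:
                                grid[r + dr][c + dc] += 1
        avalanche_sizes.append(topples)
    return avalanche_sizes

sizes = sandpile()
print(max(sizes), sum(s == 0 for s in sizes))
```

The striking property, and the reason the model is the archetype of SOC, is that identical perturbations (one grain) produce avalanches spanning many orders of magnitude, with no characteristic size.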

Scholars like Thomas Homer-Dixon argue that we are becoming increasingly prone to domino effects or cascading changes across systems, thus increasing the likelihood of total synchronous failure. “A long view of human history reveals not regular change but spasmodic, catastrophic disruptions followed by long periods of reinvention and development.”

That doesn’t mean we’re necessarily done for, however. As Homer-Dixon notes, we can “build resilience into all systems critical to our well-being. A resilient system can absorb large disturbances without changing its fundamental nature.”

“Resilience is an emergent property of a system–it’s not a result of any one of the system’s parts but of the synergy between all of its parts.  So as a rough and ready rule, boosting the ability of each part to take care of itself in a crisis boosts overall resilience.”

This is where Homer-Dixon’s notion of “failing gracefully” comes in: “somehow we have to find the middle ground between dangerous rigidity and catastrophic collapse.”

“In our organizations, social and political systems, and individual lives, we need to create the possibility for what computer programmers and disaster planners call ‘graceful’ failure. When a system fails gracefully, damage is limited, and options for recovery are preserved. Also, the part of the system that has been damaged recovers by drawing resources and information from undamaged parts.”
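One concrete pattern programmers use for the kind of graceful failure described above is the circuit breaker. A minimal sketch (the class, thresholds and names are illustrative, not from Homer-Dixon): after repeated failures the breaker “opens” and serves a degraded fallback, limiting damage and preserving resources, then probes the failing component again after a cooling-off period to check for recovery.

```python
import time

class CircuitBreaker:
    """After `max_failures` consecutive errors the breaker opens and
    calls fail fast with a fallback; after `reset_after` seconds one
    call is let through again to probe for recovery."""
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, fallback):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback()     # open: fail fast, limit the damage
            self.opened_at = None     # half-open: allow one probe call
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return fallback()         # degraded but still functioning
        self.failures = 0             # success closes the breaker
        return result

breaker = CircuitBreaker(max_failures=2)

def flaky():
    raise RuntimeError("upstream down")

print(breaker.call(flaky, lambda: "cached answer"))  # → cached answer
print(breaker.call(flaky, lambda: "cached answer"))  # → cached answer
print(breaker.opened_at is not None)                 # → True
```

The damaged part is isolated, the rest of the system keeps running on the fallback, and recovery options are preserved, which is precisely the definition of graceful failure quoted above.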

“Breakdown is probably something that human social systems must go through to adapt successfully to changing conditions over the long term. But if we want to have any control over our direction in breakdown’s aftermath, we must keep breakdown constrained. Reducing as much as we can the force of underlying tectonic stresses helps, as does making our societies more resilient. We have to do other things too, and advance planning for breakdown is undoubtedly the most important.”

Planning for breakdown is not defeatist or passive. Quite the contrary: it is wise and proactive. Our hubris all too often clouds our better judgment, and rarely do we—as the humanitarian/development community—seriously ask ourselves what we would do “if we should fail.” The answer “then we fail” is an option. But are we, like Macbeth, prepared to live with the consequences?

Patrick Philippe Meier