Category Archives: Social Media

Here Come the Crowd-Sorcerers: “No We Can’t, No We Won’t” says Muggle Master

Sigh indeed. Yawn, even.

The purpose of this series is not to make it about Paul and Patrick. That’s boring as heck. The idea behind the series was not simply to provoke and use humorous analogies but to dispel confusion about crowdsourcing and thereby provide a more informed understanding of this methodology. I fear this is getting completely lost.

Recall that it was a humanitarian colleague who came up with the label “Crowd Sorcerer”. It made me laugh so I figured we’d have a little fun by using the label Muggle in return. But that’s all it is, good fun. And of course many humanitarians see eye to eye with the Crowd Sorcerer approach, so apologies to those who felt they were wrongly placed in the Muggle category. We’ll use the Sorting Hat next time.

Henry and Erik from Ushahidi

This is not about a division between Crowd Sorcerers and Muggles. As a colleague recently noted, “the line lies somewhere else, between effective implementation of new tools and methodologies versus traditional ways of collecting crisis information.” There are plenty of humanitarians who see value in trying out new approaches. Of course, there are some who simply say “No We Can’t, No We Won’t.”

There’s no point going back and forth with Paul on every one of his issues because many of them actually have little to do with crowdsourcing and more to do with him being provoked. In this post, I’m going to stick to the debate about the ins and outs of crowdsourcing in humanitarian response.

On Verification

Muggle Master: And of course the way in which Patrick interprets those words bears little relation to what those words actually said, which is this: “Unless there are field personnel providing “ground truth” data, consumers will never have reliable information upon which to build decision support products.”

I disagree. Again, the traditional mindset here is that unless you have field personnel (your own people) in charge, then there is no way to get accurate information. This implies that the disaster affected populations are all liars, which is clearly untrue.

Verification is of course important; no one has said the contrary. Why would Ushahidi be dedicating time and resources to the Swift platform if the group didn’t think that verification was important?

The reality here is that verification is not always possible regardless of which methodology is employed. So it boils down to this: is having information that is not immediately verified better than having no information at all? If your answer is yes or “it depends”, then you’re probably a Crowd Sorcerer. If your answer is, “let’s try to test some innovative ways to make rapid verification possible,” then again, you likely are a Crowd Sorcerer/ette.

Incidentally, no one I know has advocated for the use of crowdsourced data at the expense of any other information. Crowd Sorcerers (and many humanitarians) are simply suggesting that it be considered one of multiple feeds. Also, as I’ve argued before, a combined approach of bounded and unbounded crowdsourcing is the way to go.

On Impact Evaluation

The Fletcher Team has commissioned an independent evaluation of the Ushahidi deployment in Haiti to go beyond the informal testimonies of success provided by first responders. This is a four-week evaluation led by Dr. Nancy Mock, a seasoned humanitarian and M&E expert with over 30 years of experience in the humanitarian and development field.

Nathan Morrow will be working directly with Nancy. Nathan is a geographer who has worked extensively on humanitarian and development information systems. He is a member of the European Evaluation Society and like Nancy a member of the American Evaluation Association. Nathan and Nancy will be aided by a public health student who has several years of experience in community development in Haiti and is a fluent Haitian Creole speaker.

The evaluation team has already gone through much of the data and been in touch with many of the first responders as well as other partners. Their job is to do as rigorous an evaluation as possible and do this fully transparently. Nancy plans to present her findings publicly at the 2010 Crisis Mappers Conference where we’ve dedicated a roundtable to reviewing these findings, as well as other reviews.

As for background, the ToR (available here) was drafted by graduate students specializing in M&E and reviewed closely by Professor Cheyanne Church, who teaches advanced graduate courses on M&E. She is considered a leading expert on the subject. The ToR was then shared on a number of listservs including ReliefWeb, the CrisisMappers Group and Pelican (a listserv for professional evaluators).

Nancy and Nathan are both experienced in the method known as utilization-focused evaluation (UFE), an approach chosen by The Fletcher Team to ensure that the evaluation is useful to all primary users as well as the humanitarian field. The UFE approach means that the ToR is a living document and being adapted as necessary by the evaluators to ensure that the information gathered is useful and actionable, not just interesting.

We don’t have anything to hide here, Muggles. This was a complete first in terms of live crisis mapping and mobile crowdsourcing. Unlike the humanitarian community, the volunteers weren’t prepared, trained or experienced in live crisis mapping and mobile crowdsourcing, nor in using crowdsourcing for near real-time translation, nor in managing hundreds of unpaid volunteers. The vast majority of them had no background in disaster response. Most couldn’t focus on this full time because of their under/graduate coursework and mid-term exams. They had no direct links or contacts with first responders prior to the deployment, and most responders didn’t know they existed or who they were. In sum, they had all the odds stacked against them.

If the evaluation shows that the deployment and the Fletcher Team’s efforts didn’t save lives or are unlikely to have saved any lives, rescued people, had no impact, etc., none of us will dispute this. Will we give up? Of course not; Crowd Sorcerers don’t give up. We’ll learn and do better next time.

One of the main reasons for this evaluation is not only to assess the impact of the deployment but also to produce a concrete list of lessons learned, so that what didn’t work then is more likely to work in the future. The point is to assess the impact just as much as it is to assess the potential added value of the approach for future deployments.

How can anyone innovate in a space riddled with a “No We Can’t, No We Won’t” mindset? Trial and error is not allowed; iterative learning and adaptation is as illegal as the dark arts. Some Muggles really need to read this post “On Technology and Learning, or Why the Wright Brothers Did Not Create the 747.” If die-hard Muggles had had their way, they would have forced the brothers to close up shop after their very first attempt because it “failed.”

Incidentally, the majority of development, humanitarian and aid projects are never evaluated in any rigorous or meaningful way, if they are evaluated at all. But that’s ok because these are double (Muggle) standards.

On Communicating with Local Communities

Concerns over security need not always be used as an excuse for not communicating with local communities. We need to find a way not to exclude potentially important informants. A little innovation and creative thinking wouldn’t hurt. Humanitarians working with Crowd Sorcerers could use SMS to crowdsource reports, triangulate as best as possible using manual means combined with Swift River, cross-reference with official information feeds and investigate reports that appear the most clustered and critical.

That way, if you see a significant number of text messages reporting the lack of water in an area of Port-au-Prince then at least this gives you an indication that something more serious may be happening in that location and you can cross-reference your other sources to check whether the issue has already been picked up. Again, it’s this clustering effect that can provide important insights on a given situation.
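The clustering idea described above can be sketched in a few lines of code. Everything here is hypothetical: the coordinates, messages, grid size and threshold are made-up illustrations, not data from any actual deployment.

```python
from collections import Counter
import math

# Hypothetical geotagged SMS reports: (latitude, longitude, message).
reports = [
    (18.5392, -72.3350, "no water in Carrefour Feuilles"),
    (18.5341, -72.3342, "dlo - we need water here"),
    (18.5388, -72.3361, "water shortage for 3 days"),
    (18.5944, -72.3074, "road blocked near airport"),
]

def grid_cell(lat, lon, cell_deg=0.01):
    """Snap a coordinate to a coarse grid cell (~1 km per side here)."""
    return (math.floor(lat / cell_deg), math.floor(lon / cell_deg))

# Count reports per cell; any cell with several independent reports is a
# candidate for cross-referencing against official information feeds.
counts = Counter(grid_cell(lat, lon) for lat, lon, _ in reports)
hotspots = [cell for cell, n in counts.items() if n >= 3]
print(hotspots)  # the single grid cell containing the three water reports
```

A real pipeline would also triangulate message content and timing; the point is simply that spatial clustering surfaces the areas worth investigating first.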

This would provide a mechanism to allow Haitians to report problems (or complaints for that matter) via SMS, phone, etc. Imogen Wall and other experienced humanitarians have long called for this to change. Hence the newly founded group Communicating with Disaster Affected Communities (CDAC).

Confusion to the End

Me: Despite what some Muggles may think, crowdsourcing is not actually magic. It’s just a methodology like any other, with advantages and disadvantages.

Muggle Master: That’s exactly what “Muggles” think.

Haha, well if that’s exactly what Muggles think, then this is yet more evidence of confusion in the land of Muggles. Crowdsourcing is just a methodology to collect information. There’s nothing new about non-probability sampling. Understanding the advantages and disadvantages of this methodology doesn’t require an advanced degree in statistical physics.

Muggle Master: Crowdsourcing should not form part of our disaster response plans because there are no guarantees that a crowd is going to show up. Crowdsourcing is no different from any other form of volunteer effort, and the reason why we have professional aid workers now is because, while volunteers are important, you can’t afford to make them the backbone of the operation. The technology is there and the support is welcome, but this is not the future of aid work.

This just reinforces what I’ve already observed: many in the humanitarian space are still confused about crowdsourcing. The crowd is always there. Haitians were always there. And crowdsourcing is not about volunteering. Again, crowdsourcing is just a methodology to collect information. When the UN does its rapid needs assessment, does the crowd all of a sudden vanish into thin air? Of course not.

As for volunteers, the folks at Fletcher and SIPA are joining forces to work together on deploying live crisis mapping projects in the future. They’re setting up their own protocols, operating procedures, etc. based on what they’ve learned over the past 6 months in order to replicate the “surge mapping capacity” they demonstrated in response to Haiti and Chile. (Swift River will also reduce the need for large numbers of volunteers.)

And pray tell who in the world has ever said that volunteers should be the backbone of a humanitarian operation? Please, do tell. That would be a nice magic trick.

Muggle Master: “The technology is there and the support is welcome, but this is not the future of aid work.”

The support is welcome? Great! But who said that crowdsourcing was the future of aid work? It’s just a methodology. How can one sole methodology be the future of aid work?

I’ll close with this observation. The email thread that started this Crowd-Sorcerer series ended with a second email written by the same group that wrote the first. That second email was far more constructive and conducive to building bridges. I’m excited by the prospects expressed in it and really appreciate the positive tone and the interest they expressed in working together. I definitely look forward to working with them and learning from them as we move forward in this space.

Patrick Philippe Meier

Here Come the Crowd-Sorcerers: How Technology is Disrupting the Humanitarian Space and How Easy It Is

I’ve recently been cc’d on an email thread in which a humanitarian group has started to “air out some latent issues and frustrations” vis-a-vis the use of crowdsourcing in emergencies. I applaud them for speaking up and credit them for coining the fantastic term “crowd-sorcerers” which is brilliant! The group is apparently preparing to publish a report concerning Humanitarian Information Management in Haiti. I really hope to appear in their chapter on “The Crowd-Sorcerers.”

I wonder which kind of sorcerer I am...

I too prefer candid conversations over diplomatic pillow talk. Let’s be honest: it’s not actually difficult to disrupt the humanitarian system. It’s hierarchical, overly bureaucratic, slow, often unaccountable and at times spectacularly corrupt. But I want to make sure my tone here is not misunderstood. I want to be constructive but playful and provocative at the same time, to “lighten things up” a bit. We take ourselves way too seriously, too often. That’s why I absolutely love the term Crowd-Sorcerer! Let’s use Muggles for our humanitarian friends.

I’ll first lay out some of the frustrations aired by the Muggles in their own words so I don’t misrepresent their concerns—some of which are obviously valid (but not necessarily new). I’ll be reviewing these concerns in a series of blog posts, so stay tuned for future episodes in the new Crowd-Sorcerer Series! Caution: in case it’s not yet obvious, I will be deliberately provocative and playful in this series.

Muggles: Unless there are field personnel providing “ground truth” data, consumers will never have reliable information upon which to build decision support products. Crowdsourcing may be a quick way to get a message out, but it is not good information unless there is on-the-ground verification going on.

Not sure how you’d interpret these words but what they say to me is this: unless information comes from official field personnel, i.e., Muggles, it’s absolutely useless and should be dumped in the trash. I personally find that somewhat… is colonial too provocative?

Crisis information that was crowdsourced using the distributed short code 4636 in Haiti helped save hundreds of lives according to the Marine Corps. The vast majority of this information could not be verified and yet both the Marine Corps and Coast Guard used this as one of their feeds while FEMA encouraged the crowd-sorcerers to continue mapping, calling the crisis map of Haiti the most comprehensive and up-to-date source of information available to Muggles.

There’s another extraordinary story here, and that’s the story of Mission 4636. Tens of thousands of incoming text messages from disaster affected communities in Haiti were translated from Haitian Kreyol to English in near real-time thanks to crowdsourcing. These text messages were translated by thousands of Haitian Kreyol-speaking volunteers from all around the world.

Map of volunteer locations

Without this crowdsourcing, the Marine Corps, Coast Guard, FEMA and others could not have used the information streaming in from 4636 as effectively as they did. And guess what? The original platform that was used to do this translation-by-crowdsourcing was built overnight by Brian Herbert, a 20-something tech developer at Ushahidi.

Where were the Muggles then? I’m sorry to put it in these terms but if we listened to (and waited for) Muggles all the time, then perhaps several hundred more people would have needlessly lost their lives in Haiti.

A forthcoming USIP report that reviews the deployment of the Ushahidi platform found that Haitian NGOs and local civil society groups were physically barred from entering LogBase—the humanitarian community’s compound near the airport in Port-au-Prince. One Haitian NGO rep who was interviewed said he felt like a foreigner in his own country when he wasn’t allowed to enter LogBase and attend meetings where he could share vital information on urgent needs.

Now tell me, how is trashing Haitian text messages any different from physically excluding Haitians from having a voice at LogBase? Because the so-called “unwashed masses” don’t have the “right” credentials as defined by the Muggles? Either way, they are excluded from having a stake in the hierarchical system that is supposed to help them.

Incidentally, a team of three accomplished experts in M&E (monitoring and evaluation) is currently carrying out a fully independent impact assessment of the Ushahidi deployment during the emergency period. They will be in Haiti for the field work and yes, one member of the team speaks fluent Kreyol. The PI from Tulane University has over 20 years of relevant experience. It would make absolutely no sense for Ushahidi to carry out this review.

Ushahidi has little to no expertise in M&E, and such a review would likely be viewed as biased if Ushahidi were authoring it. In fact, Ushahidi didn’t even commission the evaluation; The Fletcher Team did, and they should be applauded for doing so. By the way, as I have blogged here, it is misguided to assume that experts in, say, development, are by definition experts at evaluating development projects. M&E is a separate area of expertise and a profession in its own right. Anyone who has taken M&E 101 will know this from the first lecture.

We’re going to a commercial break now, but stay tuned for the next episode: “Here Come the Crowd-Sorcerers: Is it Possible to Teach an Old (Humanitarian) Dog New Tech’s?”

Patrick Philippe Meier

From Caveman to Sufi Sheikh: Some Thoughts on Cognitive Surplus and Technology Deficits

This is the first of two blog posts inspired by Clay Shirky’s new book “Cognitive Surplus: Creativity and Generosity in a Connected Age.” Clay disagrees with the notion that new communication tools craft new behaviors. I agree. “What if we’ve always wanted to produce [media] as well as consume, but no one offered us that opportunity?”

Technology has long limited our behaviors as a gregarious, mobile species, not created new ones. “Many of the unexpected uses of communication tools are surprising because our old beliefs about human nature were so lousy.” We thought that “sharing was inherently rather than accidentally limited to small, tight-knit groups.”

So when we come across a surprising new application of technology, “instead of asking Why is this new?” which produces a technology-centric answer, “we can [and should] ask Why is it a surprise?” The technology deficit (my own term) has long constrained our behaviors.

Let’s take our favorite Caveman from the Geico commercials, for example. The technology deficit during those days meant that our caveman was constrained to static cave paintings. But surely Caveman would have preferred the Web to share his group’s story (or buy cheap mammoth insurance) rather than a darkly-lit cave with limited access. In fact, Flickr would have been perfect for Caveman. Another constraint with caves is the limited space for comments. Caves represent a technology deficit that prevented preferred behavior.

Let’s take my friend Ma Al Aineen as another example. I met Sheikh Ma Al Aineen, the grandson of the Blue Sultan of the Sahara, on the Western Sahara border with Mauritania some 10 years ago. He loved joking about how the cell phone was the perfect technology for nomads. Did cell phones cause nomadic behavior amongst nomads? No, nomads have always been nomadic; the fixed land line phone restricted that behavior.

This leads me to the following point: bounded crowdsourcing (which I blogged about here) is an accident caused by technology deficits. Information wants to be open but it’s been bounded by technology and power trips. “Bounded crowdsourcing” is nothing new. Indeed, restricting information flows has been the “default setting” for thousands of years. So why use the new term “bounded crowdsourcing” then?

As Clay notes, “the privilege of establishing what value the default is set at is an act of power and influence.” The use of the adjective bounded is thus as much a normative statement as it is a descriptive one. Crowdsourcing is information collection unrestricted by technology and entrenched interests. It is the norm, the “original” default setting. Anything that deviates from this is the result of tech deficits and/or of power interests.

Patrick Philippe Meier

The Future of News: Mobilizing the Masses to Write the First Draft of History

I’ve been meaning to write this blog post since February when I presented Ushahidi to BBC, CNN, UK Guardian and Channel4 in London. A session we just had at Foo Camp East made me realize it’s high time I write this.

The idea I pitched to CNN et al. was as follows: use a dedicated Ushahidi smart phone app that allows anyone to be a real-time iReporter. You download the app and allow CNN to know your location (although you can decide whether that’s within a 10-mile, 1-mile or 100-yard radius). As the CNN newsroom gets wind of a new potentially news-worthy incident, they just press the “red button”.

This red button sends out a message to all mobile iReporters within, say, 100 yards of where the incident took place asking them to take a quick picture if they happen to be just around the block. These users—turned volunteer CNN citizen journalists—can then be mobilized to report in near real-time. They could send in geo-referenced pictures annotated with comments and video footage of unfolding events.
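As a rough sketch of how the “red button” geofence might work: the names, coordinates and radius below are hypothetical, and none of this is an actual CNN or Ushahidi API, just an illustration of filtering opted-in users by distance from an incident.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical opted-in iReporters: (user_id, last known lat, lon).
reporters = [
    ("alice", 40.7486, -73.9857),  # about a block from the incident
    ("bob", 40.7497, -73.9838),    # a couple of blocks away
    ("carol", 40.7829, -73.9654),  # across town
]

def red_button(incident_lat, incident_lon, radius_m=100):
    """Return the iReporters within radius_m of the incident."""
    return [
        uid for uid, lat, lon in reporters
        if haversine_m(incident_lat, incident_lon, lat, lon) <= radius_m
    ]

print(red_button(40.7484, -73.9857))  # ['alice']
```

Widening the radius (say, `radius_m=300`) would also page bob; in practice the newsroom would tune the radius to how many responders it wants scrambling for the shot.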

I think this kind of app would appeal to many would-be citizen journalists. It costs nothing: just download it to your smart phone, and if you ever find yourself at the right place at the right time, you know your breaking-news picture may make it to CNN. There could be additional incentives, like tracking the number of reports you’ve submitted to CNN following a request. In other words, you could turn this into a quasi-competition or game. Yearly awards could be given out.

Ideally, you’d have a dozen citizens respond to the red button call, scrambling to be the first on site to take the picture, or to be the one who takes the best picture of the incident. This would allow CNN to triangulate the incoming information, possibly creating a Photosynth product updated in real-time. Comments (i.e., captions that come with the pictures) could also be cross-validated for reliability purposes. Here’s a 3D rendering of Venice using Photosynth:

One other use-case for this Ushahidi mobile app would be for users to submit pictures/reports without being solicited by CNN. In fact, more often than not, these mobile iReporters are likely to be the first to break the news of an incident to CNN—rather than the other way around. Once an iReporter does this and CNN receives the geo-located picture, they can press the red button to mobilize other would-be iReporters to the scene.

This is why I love citing the Ushahidi article that my New York Times colleague Anand wrote up earlier this year. The screen shot below from the Ushahidi-Haiti deployment literally illustrates the reasoning behind Anand’s question: “Will the triangulated crisis map be regarded as the new first draft of history?” The beauty of crowdsourcing is that it opens the floodgates of information. This means more and more witnesses can capture evidence of the same historical events unfolding. In other words, there is overlap and the triangulated map becomes possible.

The key word for me in Anand’s quote is “draft”. History is now a draft, not a finished product, but a work in progress—and one that is now written (and corrected) by the crowd. Anand adds, “They say that history is written by the victors. But now, before the victors win, there is a fresh chance to scream out, with a text message that will not vanish.” I’d go even further and say that the crowd can now be the victors by being mobile iReporters.

Think of these smart phone apps as the “seismographs” for crises. This allows users to form a veritable real-time, real-space human sensor web—or as Secretary Clinton describes it, “a new nervous system for the planet.”

Of course there are liability issues with mobilizing the masses to write the first draft(s) of history. So disclaimers will be necessary. For example: do not try to cover a story if doing so places you in physical danger or risks psychological harm. You’d probably have to give up all rights to the picture/text you submit, but at least you’d have your name credited on CNN.

Patrick Philippe Meier

Wag the Dog, or How Falsifying Crowdsourced Data Can Be a Pain

I had the pleasure of finally meeting Robert Scoble in person at Where 2.0 last week. We had a great chat about validating crowdsourced information, which he caught on camera below. In the conversation, I used Wag the Dog as an analogy for Ushahidi‘s work on Swift River. I’d like to expand on this since open platforms are obviously susceptible to “information vandalism”, i.e., having false data deliberately entered.

The typical concern goes something like this: what if a repressive regime (or non-state group) feeds false information to an Ushahidi deployment? As I’ve noted on iRevolution before (here, here and here), Swift River collects information from sources across various media such as SMS, Twitter, Facebook Groups, Blogs, Online News, Flickr and YouTube. In other words, Swift River pulls in visual and text based information.

So where does Wag the Dog come in? Have a look at this scene from the movie if you haven’t watched it yet:

If an authoritarian state wanted to pretend that rioters had violently attacked military police and submit false information to this effect in an Ushahidi deployment, for example, then what would that effort entail? Really gaming the system would probably require the following recipe:

  1. Dozens of  pictures of different quality from different types of phones of fake rioters taken from different angles at different times.
  2. Dozens of text messages from different phones using similar language to describe the fake riots.
  3. Several dozens of Tweets to this same effect. Not just retweets.
  4. Several fake blog posts and Facebook groups.
  5. Several YouTube videos of fake footage.
  6. Hacking national and international media to plant fake reports in the mainstream media.
  7. Hundreds of (paid?) actors of different ages and genders to play the rioters, military police, shopkeepers, onlookers, etc.
  8. Dozens of “witnesses” who can take pictures, create video footage, etc.
  9. A cordoned off area in the city where said actors can stage the scene. Incidentally, choreographing a fight scene with hundreds of actors definitely takes time and requires rehearsals. A script would help.
  10. Props including flags, banners, guns, etc.
  11. Ketchup, lots of ketchup.
  12. Consistent weather. Say a repressive state decides to preemptively create this false information just in case it might become useful later in the year. If it was raining during the acting, it better be raining when the state wants to use that false data.

Any others you can think of? I’d love to expand the recipe. In any case, I think the above explains why I like using the analogy of Wag the Dog. If a repressive state wanted to fabricate an entire information ecosystem to game an Ushahidi install, they’d have to move to Hollywood. Is Swift River a silver bullet? No, but the platform will make it more of a pain for states to game Ushahidi.
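The intuition behind the recipe, that consistency across independent channels is what’s hard to fake, can be expressed as a toy corroboration score. To be clear, this is not Swift River’s actual algorithm; it’s just a made-up sketch of the idea that distinct media types, not raw message volume, should drive confidence.

```python
# Hypothetical incoming items about one story: (source_type, text).
# A real pipeline would also carry timestamps, locations and authorship.
items = [
    ("sms", "fire near the market"),
    ("twitter", "fire near the market downtown"),
    ("twitter", "RT fire near the market downtown"),  # retweets add no new channel
    ("news", "market fire reported"),
]

def corroboration_score(items):
    """Count the distinct media types reporting the same story.

    Faking volume on one channel is cheap; faking agreement across SMS,
    Twitter AND mainstream news in concert is the Wag the Dog scenario,
    so distinct source types score higher than repeated messages.
    """
    return len({source for source, _ in items})

print(corroboration_score(items))  # 3 distinct source types
```

Notice that the two tweets count once: a burst of retweets raises the score of a single channel, not the cross-channel corroboration that would be expensive to stage.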

Patrick Philippe Meier

Haiti and the Power of Crowdsourcing

It’s been two weeks since I called David Kobia to launch Ushahidi’s crisis mapping platform in Haiti. I could probably write 100 blog posts on the highs and lows of the past 14 days. Perhaps there will be more time next month to recount the first two weeks of the disaster response. For now, I wanted to share an astounding example of crowdsourcing that took place 10 days ago.

Boston, January 17, 8pm

Picture a snowy Boston evening and the following “Situation Room” a.k.a. my living room at Blakeley Hall, part of The Fletcher School.

My fellow PhD colleague Anna Schulz, who has rapidly become an expert in satellite imagery analysis and geolocation, receives an urgent request via Skype from InSTEDD‘s Eric Rasmussen pictured below with Nico di Tada. That tent is pitched right next to the runway of Port-au-Prince’s international airport, some 1,600 miles south of Boston.

The urgent request? GPS coordinates for 7 key locations across Port-au-Prince where many Haitians were known to be trapped under rubble. They needed to communicate this information to the Search and Rescue (SAR) teams before 0600. Anna immediately got to work.

Boston, January 17, 8.30pm

An hour later, Anna had found the GPS coordinates for all but one of the locations for the rescue operations.

Boston, January 17, 9.41pm

Boston, January 17, 10.26pm

Some time later, the same urgent request originally sent by Eric and Nico appears on the CrisisMappers Google Group:

Boston, January 17, ~11.00pm

At Anna’s request, I send out the following Tweet on Ushahidi.

Boston, January 18, ~1.00am

The following report is submitted to the Ushahidi-Haiti platform by someone from the Twittersphere:

Boston, January 18, 1.20am

My colleague Jaroslav in The Fletcher Situation room Skypes back to Anna:

Courtesy of high-resolution satellite imagery on Google Earth:

Boston, January 18, ~2.00am

It’s getting late but the ALL CAPS in this Tweet to Ushahidi catches my eye:

My colleague Jaroslav and I decide to try the number. Lo and behold, we get Marc on the phone after just one ring. With a mixture of English and French, we find out that he was indeed a former employee of Au Bon Prix, which happens to be a book store just off “Au Champs de Mars” near the Palace. We immediately Skype this information back to Eric and Nico at Port-au-Prince airport.

Boston, January 18, ~2.15am

Boston, January 18, ~4.30am

It’s still snowing in Boston. Time to get a few hours of sleep. We hand over operations to the Ushahidi Team in Africa.

Patrick Philippe Meier

Diane Coyle Responds to Criticisms of UN/Vodafone Report

Guest Blogger: Diane Coyle, lead author of UN/Vodafone Report

The tone of some comments about our report has surprised and disappointed me. It should go without saying that a report, as opposed to a catalog, could never include all the technologies and applications available in this exciting field now.

We selected examples that illustrated relevant aspects of the use of communications and information in the context of an emergency or conflict. The selection methodology was that they should be good illustrations of innovative uses, covering a reasonable spread of technologies, users and countries. In addition, we played to our own strengths in terms of the technologies and approaches we know well, meaning that we understood thoroughly the contribution they can make.

What’s more, this is not an academic report, which some of the criticisms ignored. It is meant to be accessible to a general audience, especially practitioners and policy makers. Why on earth would we include a literature review?

Some comments simply seemed to have missed the point, and no doubt I could have spelled some things out more clearly. For example, the point made in the document about Cyclone Nargis is precisely that even in contexts which hold no promise for humanitarian agencies to introduce new information-rich technologies, very simple low-tech information can still help build community resilience.

Having said that, I am really grateful for comments which pointed out a few factual errors and infelicities, and I hope we can correct them soon. I’m confident that the report makes an important contribution in highlighting the potential for the latest technologies in this field, and the obstacles to realization of that potential.

Responding to Feedback on UN Foundation/Vodafone Report

Update: I’m in conversation with the UN/Vodafone Foundation about adding a paragraph on case selection, a case study on Sahana and correcting the error on UNOSAT.

The UN/Vodafone Foundation recently published a new Report on New Technologies in Emergencies and Conflicts. This blog post responds to feedback on this report. Please note that I do so as second author. My respected colleague Diane Coyle is the report’s lead author and editor, so she may have other thoughts on the feedback. Naturally, I will only address the feedback that relates to my input.

  • Intended audience: The report was written for a general (non-technical/expert) audience as a way to showcase technology applications in the humanitarian space.
  • Case selection: The case studies were selected in consultation with the UN/Vodafone Foundation throughout the research period. These consultations included the authors of the report (Diane and myself) and three members of the UN/Vodafone Foundation who served as editors. Some of the case studies were requested by the Foundation; for the others, we strove to highlight some of the most recent initiatives. Of the 19 case studies selected, 13 did not exist two years ago. The others comprise major global initiatives, fit well together as examples for a general audience, or were requested case studies. Clearly, the limited space did not allow us to include everyone’s favorite project.
  • Length of report: We originally had some 80-or-so draft pages between us and had to reduce the content to about 60 pages. This meant having to decide what to keep and what to put aside. Some of the rewrite was also done to make the report less technical and more widely understandable. This helped to save on space.
  • UNOSAT and Sri Lanka: The reference does not intend to endorse or discount the interpretation of the imagery by the international community. The reference is based on in-person consultations with several well-placed experts. That said, I agree that some rephrasing is in order; this paragraph needs to be reworked.

On a personal note, I have found the tone of some criticisms rather disappointing and old school. There are several ways to give feedback: one is constructive, another is destructive. The former provides incentives to improve and continue an open collaborative conversation as a community. The latter defeats the incentive for growth and leads to a more self-centered community.

Some of the criticisms of this report have been destructive. Why is it that some of us can’t get our points across with more composure? Does bitterness make us feel more important? I expected a lot more from some of my (older, wiser) colleagues. But I haven’t always been good at providing constructive feedback either, so thanks to my “new fan club” I’ve got another New Year’s resolution for 2010: I will do my best to give constructive, supportive feedback.

Patrick Philippe Meier

Is Journalism Just Failed Crowdsourcing?

This provocative question materialized during a recent conversation I had with a Professor of Political Science while in New York this week. Major news companies like CNN have started to crowdsource citizen-generated news on the basis that “looking at the news from different angles gives us a deeper understanding of what’s going on.” CNN’s iReporter thus invites citizens to help shape the news “in order to paint a more complete picture of the news.”

This would imply that traditional journalism has provided a relatively incomplete picture of global events. So the question is: if crowdsourcing platforms had been available to journalists one hundred years ago, would they have viewed these platforms as an exciting opportunity to get early leads on breaking stories? The common counter-argument is: but crowdsourcing “opens the floodgates” of information and we simply can’t follow up on everything. Yes, but whoever said that every lead requires follow-up?

Journalists are not always interested in following up on every lead that comes their way. They’ll select a few sources, interview them and then write up the story. What crowdsourcing citizen-generated news does, however, is provide them with many more leads to choose from. Isn’t this an ideal setup for a journalist? Instead of having to chase down leads across town, the leads come directly to them with names, phone numbers and email addresses.

Imagine that the field of journalism had started out using crowdsourcing platforms combined with investigative journalism. If these platforms were then outlawed for whatever reason, would investigative journalists be hindered in their ability to cover the news from different angles? Or would they still be able to paint an equally complete picture of the news?

Granted, one common criticism of citizen journalism is the lack of context it provides, especially on Twitter given the 140-character restriction. But surely 140 characters are plenty for the purposes of a potential lead. And if a mountain of Tweets started to point to the same lead story, a professional journalist could take advantage of this information when deciding whether or not to follow up.


I also find the criticism against Twitter interesting coming from traditional journalists. In the early 1900s, large newspapers started hiring war correspondents “who used the new telegraph and expanding railways to move news faster to their newspapers.” However, the cost of sending telegrams forced reporters to develop a “new concise or ‘tight’ style of writing which became the standard for journalism through the next century.”

Today, the cost of hiring professional journalists means that a newspaper like the Herald is not going to send a modern Henry Stanley to find a certain Dr. Livingstone in Africa. And had the Herald had global crowdsourcing platforms back in the 1870s, it might instead have used Twitter to crowdsource the coordinates of Dr. Livingstone.

This may imply that traditional journalism was primarily shaped by the technological constraints of its time. In a teleological sense, then, crowdsourcing may simply be the next phase in the evolution of journalism.

Patrick Philippe Meier

New Media, Accuracy and Balance of Power in Crises

I just read Nik Gowing’s book entitled “Skyful of Lies and Black Swans: The New Tyranny of Shifting Information Power in Crises.” The term “Black Swan” refers to sudden-onset crises and echoes the title of Nassim Taleb’s excellent book on the topic. “Skyful of lies” were the words used by the Burmese junta to dismiss the deluge of digital evidence of the mass pro-democracy protests that took place in 2007.


Nik packs some very interesting content into this study, a lot of which is directly relevant to my dissertation research and consulting work. He describes the rise of new media as “having an asymmetric, negative impact on the traditional structures of power.”

Indeed, British Foreign Secretary David Miliband labeled this “shifting of power from state to citizen as the new ‘civilian surge.'” To be sure, “that ‘civilian surge’ of growing digital empowerment is forcing an enhanced level of accountability that […] is a ‘real change to democracy’.” As for authoritarian regimes, “the impact of new media technologies has been shown to be as potentially ‘subversive’ as for highly developed democratic states.”

However, Nik recognizes that “the implications for power and policy-makers is not well developed or appreciated.” He adds that “the implications of this new level of empowerment are profound but still, in many ways, unquantifiable.” Hence the purpose and focus of my dissertation.

Time Lines out of Sync

Nik notes that the time lines of media action and institutional reaction are increasingly out of sync. “The information pipelines facilitated by the new media can provide information and revelations within minutes. But the apparatus of government, the military or the corporate world remain conditioned to take hours.”

Take, for example, the tube and bus bombings in London in 2005. During the first three hours following the incidents, the official government line was that an accidental power surge had caused the catastrophe. Meanwhile, some 1,300 blog posts written within just 80 minutes of the terrorist attack pointed to explosive devices as the cause. “The content of the real-time reporting of 20,000 emails, 3,000 text messages, 1,000 digital images and 20 video clips was both dramatic and largely correct.”

New Media and Accuracy

I find the point about accuracy particularly interesting. According to Nik, the repeated warnings that new media and user-generated content (UGC) cannot be trusted “does not seem to apply in a major crisis.”

“Far from it. The accumulated evidence is that the asymmetric torrent of overwhelming ‘amateur’ inputs from the new generators of content produces largely accurate, if personalized, information in real time. It may be imperfect and incomplete as the crisis time line unfolds.

“There is also the risk of exaggeration or downright misleading ‘reporting’. But the impact is profound. Internal BBC research discovered that audiences are understanding if errors or exaggerations creep in by way of such information doer material, as long as they are sourced and later corrected.

“In addition, the concept of trust can ‘flex’ in a crisis. Trust does not diminish as long as the ongoing levels of doubt or lack of certainty are always made clear. It is about ‘doing your best in [a] world where speed and information are the keys’. But the research concluded that the BBC needed to do more work to analyze the implications of the UGC phenomenon for accuracy, speed, personalization, dialogue and trust. That challenge is the same for all traditional media organizations.”

Low Tech Power

Nik describes the onslaught of new media as the low tech empowerment of the media space. During the Burma protests of 2007, “the ad hoc community of risk-taking information doers became empowered. Those undisputed and widely corroborated images swiftly challenged the authority and claims of the regime.”


During the aftermath of Cyclone Nargis, both foreign journalists and aid agencies were forbidden from entering the country. But one producer and camera operator from a major news organization “managed to enter the country on tourist visas. Before being arrested and deported they evaded security checks and military intelligence to record vivid video that confirmed the terrible impact and human cost of the cyclone. Hiding in ditches they beamed it out of the country on a new tiny, portable Bgan satellite uplink carried in a hiker’s backpack.”

The Question

This is definitely an example of the “asymmetric, negative impact on the traditional structures of power” that Nik refers to in his introduction. The question is: how much of a threat does this asymmetry pose to repressive regimes? That is one of the fundamental questions I pose in my dissertation research.

Patrick Philippe Meier