Category Archives: Social Media

Mobile Technologies for Conflict Management

“Mobile Technologies for Conflict Management: Online Dispute Resolution, Governance, Participation” is the title of a new book edited by Marta Poblet. I recently met Marta in Vienna, Austria during the UN Expert Meeting on Crowdsource Mapping organized by UN SPIDER. I’m excited that her book has just launched. The book is divided into 3 sections: Disruptive Applications of Mobile Technologies; Towards a Mobile ODR; and Mobile Technologies: New Challenges for Governance, Privacy and Security.

The book includes chapters by several colleagues of mine like Mike Best on “Mobile Phones in Conflict Stressed Environments”, Ken Banks on “Appropriate Mobile Technologies,” Oscar Salazar and Jorge Soto on “How to Crowdsource Election Monitoring in 30 Days,” Jacob Korenblum and Bieta Andemariam on “How Souktel Uses SMS Technology to Empower and Aid in Conflict-Affected Communities,” and Emily Jacobi on “Burma: A Modern Anomaly.”

My colleagues Jessica Heinzelman, Rachel Brown and I also contributed one of the chapters. I include the introduction below.

I had long wanted to collaborate on a peer-reviewed chapter in which I could combine my earlier study of conflict resolution theory with my experience in conflict early warning and crisis mapping. See also this earlier blog post on “Crowdsourcing for Peace Mapping.” I’ve been a big fan of Will Ury’s approach ever since coming across his work while at Columbia University back in 2003. Little did I know then that I’d be co-authoring this book chapter with two new stellar colleagues. Rachel has taken much of this thinking and applied it to the real world in her phenomenal project called Sisi ni Amani, or “We Are Peace.” You can follow them on Twitter. Jessica now serves on their Advisory Board.

How to Verify Social Media Content: Some Tips and Tricks on Information Forensics

Update: I have authored a 20+ page paper on verifying social media content based on 5 case studies. Please see this blog post for a copy.

I get this question all the time: “How do you verify social media data?” This question drives many of the conversations on crowdsourcing and crisis mapping these days. It’s high time that we start compiling our tips and tricks into an online how-to guide so that we don’t have to start from square one every time the question comes up. We need to build and accumulate our shared knowledge in information forensics. So here is the Google Doc version of this blog post; please feel free to add your best practices and ask others to contribute. Feel free to also add links to other studies on verifying social media content.

If every source we monitored in the social media space was known and trusted, then the need for verification would not be as pronounced. In other words, it is the plethora and virtual anonymity of sources that makes us skeptical of the content they deliver. Verifying social media data is thus a two-step process: the authentication of the source as reliable and the triangulation of the content as valid. If we can authenticate the source and find it trustworthy, this may be sufficient to trust the content and mark it as verified, depending on context. If source authentication is difficult to ascertain, then we need to triangulate the content itself.

Let’s unpack these two processes—authentication and triangulation—and apply them to Twitter since the most pressing challenges regarding social media verification have to do with eyewitness, user-generated content. The first step is to try and determine whether the source is trustworthy. Here are some tips on how to do this:

  • Bio on Twitter: Does the source provide a name, picture, bio and any  links to their own blog, identity, professional occupation, etc., on their page? If there’s a name, does searching for this name on Google provide any further clues to the person’s identity? Perhaps a Facebook page, a professional email address, a LinkedIn profile?
  • Number of Tweets: Is this a new Twitter handle with only a few tweets? If so, this makes authentication more difficult. Arasmus notes that “the more recent, the less reliable and the more likely it is to be an account intended to spread disinformation.” In general, the longer the Twitter handle has been around and the more Tweets linked to this handle, the better. This gives a digital trace, a history of prior evidence that can be scrutinized for evidence of political bias, misinformation, etc. Arasmus specifies: “What are the tweets like? Does the person qualify his/her reports? Are they intelligible? Is the person given to exaggeration and inconsistencies?”
  • Number of followers: Does the source have a large following? If there are only a few, are any of the followers known and credible sources? Also, how many lists has this Twitter handle been added to?
  • Number following: How many Twitter users does the Twitter handle follow? Are these known and credible sources?
  • Retweets: What type of content does the Twitter handle retweet? Does the Twitter handle in question get retweeted by known and credible sources?
  • Location: Can the source’s geographic location be ascertained? If so, are they nearby the unfolding events? One proxy is to examine during which periods of the day/night the source tweets the most; this may indicate the person’s time zone.
  • Timing: Does the source appear to be tweeting in near real-time? Or are there considerable delays? Does anything appear unusual about the timing of the person’s tweets?
  • Social authentication: If you’re still unsure about the source’s reliability, use your own social network–Twitter, Facebook, LinkedIn–to find out if anyone in your network knows about the source’s reliability.
  • Media authentication: Is the source quoted by trusted media outlets, whether in the mainstream or social media space?
  • Engage the source: Tweet them back and ask them for further information. NPR’s Andy Carvin has employed this technique particularly well. For example, you can tweet back and ask for the source of the report and for any available pictures, videos, etc. Place the burden of proof on the source.
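To make the checklist above concrete, here is a toy scoring sketch in Python. Every field name, weight, and threshold is my own illustrative assumption (this is not a real verification tool or a Twitter API client); think of it as a triage aid for reasoning about the signals, never a verdict on a source.

```python
# Illustrative only: a toy credibility score combining the signals above.
# All field names, weights, and cutoffs are hypothetical assumptions.

from dataclasses import dataclass


@dataclass
class SourceProfile:
    has_bio: bool = False                    # name, picture, bio, links on the page
    has_external_links: bool = False         # blog, LinkedIn, Facebook, etc.
    account_age_days: int = 0                # older accounts leave a longer digital trace
    tweet_count: int = 0                     # more tweets = more history to scrutinize
    follower_count: int = 0
    followers_include_credible: bool = False # known, credible sources follow this handle
    retweeted_by_credible: bool = False      # known, credible sources retweet it
    location_consistent: bool = False        # tweet timing/time zone fits claimed location


def credibility_score(p: SourceProfile) -> float:
    """Return a rough 0-1 score; a triage aid, never a verdict."""
    score = 0.0
    if p.has_bio:                    score += 0.15
    if p.has_external_links:         score += 0.15
    if p.account_age_days > 180:     score += 0.15
    if p.tweet_count > 100:          score += 0.10
    if p.follower_count > 50:        score += 0.10
    if p.followers_include_credible: score += 0.15
    if p.retweeted_by_credible:      score += 0.10
    if p.location_consistent:        score += 0.10
    return round(score, 2)
```

A low score would simply mean "move on to triangulating the content," which is exactly the two-step logic described earlier in this post.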

These are some of the tips that come to mind for source authentication. For more thoughts on this process, see my previous blog post “Passing the I’m-Not-Gaddafi-Test: Authenticating Identity During Crisis Mapping Operations.” If you have some tips of your own not listed here, please do add them to the Google Doc—they don’t need to be limited to Twitter either.

Now, let’s say that we’ve gone through the list above and found the evidence inconclusive. We then move on to triangulating the content. Here are some tips on how to do this:

  • Triangulation: Are other sources on Twitter or elsewhere reporting on the event you are investigating? As Arasmus notes, “remain skeptical about the reports that you receive. Look for multiple reports from different unconnected sources.” The more independent witnesses you can get information from the better and the less critical the need for identity authentication.
  • Origins: If the user reporting an event is not necessarily the original source, can the original source be identified and authenticated? In particular, if the original source is found, does the time/date of the original report make sense given the situation?
  • Social authentication: Ask members of your own social network whether the tweet you are investigating is being reported by other sources. Ask them how unusual the event reporting is to get a sense of how likely it is to have happened in the first place. Andy Carvin’s followers, for example, “help him translate, triangulate, and track down key information. They enable remarkable acts of crowdsourced verification […] but he must always tell himself to check and challenge what he is told.”
  • Language: Andy Carvin notes that tweets that sound too official, using official language like “breaking news”, “urgent”, “confirmed” etc. need to be scrutinized. “When he sees these terms used, Carvin often replies and asks for additional details, for pictures and video. Or he will quote the tweet and add a simple one word question to the front of the message: Source?” The BBC’s UGC (user-generated content) Hub in London also verifies whether the vocabulary, slang, accents are correct for the location that a source might claim to be reporting from.
  • Pictures: If the Twitter handle shares photographic “evidence”, does the photo provide any clues about the location where it was taken based on buildings, signs, cars, etc., in the background? The BBC’s UGC Hub checks weaponry against those known for the given country and also looks for shadows to determine the possible time of day that a picture was taken. In addition, they examine weather reports to “confirm that the conditions shown fit with the claimed date and time.” These same tips can be applied to Tweets that share video footage.
  • Follow up: If you have contacts in the geographic area of interest, then you could ask them to follow up directly/in-person to confirm the validity of the report. Obviously this is not always possible, particularly in conflict zones. Still, there is increasing anecdotal evidence that this strategy is being used by various media organizations and human rights groups. One particularly striking example comes from Kyrgyzstan, where a Skype group with hundreds of users across the country was able to disprove and counter rumors at a breathtaking pace. For more details, see my blog post on “How to Use Technology to Counter Rumors During Crises: Anecdotes from Kyrgyzstan.”

These are just a handful of the tips and tricks that come to mind. The number of bullet points above clearly shows we are not completely powerless when verifying social media data. There are several strategies available. The main challenge, as the BBC points out, is that this type of information forensics “can take anything from seconds […] to hours, as we hunt for clues and confirmation.” See for example my earlier post on “The Crowdsourcing Detective: Crisis, Deception and Intrigue in the Twittersphere,” which highlights some challenges but also new opportunities.

One of Storyful‘s comparative strengths when it comes to real-time news curation is the growing list of authenticated users it follows. This represents more of a bounded (but certainly not static) approach. As noted in my previous blog post on “Seeking the Trustworthy Tweet,” following a bounded model presents some obvious advantages. This explains why the BBC recommends “maintaining lists of previously verified material [and sources] to act as a reference for colleagues covering the stories.” This strategy is also employed by the Verification Team of the Standby Volunteer Task Force (SBTF).

In sum, I still stand by my earlier blog post entitled “Wag the Dog: How Falsifying Crowdsourced Data can be a Pain.” I also continue to stand by my opinion that some data–even if not immediately verifiable—is better than no data. Also, it’s important to recognize that we have on some occasions seen social media prove to be self-correcting, as I blogged about here. Finally, we know that information is often perishable in times of crises. By this I mean that crisis data often has a “use-by date” after which it no longer matters whether said information is true or not. So speed is often vital. This is why semi-automated platforms like SwiftRiver that aim to filter and triangulate social media content can be helpful.

Passing the I’m-Not-Gaddafi Test: Authenticating Identity During Crisis Mapping Operations

I’ve found myself telling this story so often in response to various questions that it really should be a blog post. The story begins with the launch of the Libya Crisis Map a few months ago at the request of the UN. After the first 10 days of deploying the live map, the UN asked us to continue for another two weeks. When I write “us” here, I mean the Standby Volunteer Task Force (SBTF), which is designed for short-term rapid crisis mapping support, not long-term deployments. So we needed to recruit additional volunteers to continue mapping the Libya crisis. And this is where the I’m-not-Gaddafi test comes in.

To do our live crisis mapping work, SBTF volunteers generally need password access to whatever mapping platform we happen to be using. This has typically been the Ushahidi platform. Giving out passwords to several dozen volunteers in almost as many countries requires trust. Password access means one could start sabotaging the platform, e.g., deleting reports, creating fake ones, etc. So when we began recruiting 200+ new volunteers to sustain our crisis mapping efforts in Libya, we needed a way to vet these new recruits, particularly since we were dealing with a political conflict. So we set up an I’m-not-Gaddafi test by using this Google Form:

So we placed the burden of proof on our (very patient) volunteers. Here’s a quick summary of the key items we used in our “grading” to authenticate volunteers’ identity:

Email address: Professional or academic email addresses were preferred and received a more favorable “score”.

Twitter handle: The great thing about Twitter is you can read through weeks’ worth of someone’s Twitter stream. I personally used this feature several times to determine whether any political tweets revealed a pro-Gaddafi attitude.

Facebook page: Given that posing as someone else or a fictitious person on Facebook violates their terms of service, having the link to an applicant’s Facebook page was considered a plus.

LinkedIn profile: This was a particularly useful piece of evidence given that the majority of people on LinkedIn are professionals.

Personal/Professional blog or website: This was also a great way to authenticate an individual’s identity. We also encouraged applicants to share links to anything they had published which was available online.

For every application, we had two or more of us from the core team go through the responses. In order to sign off a new volunteer as vetted, two people had to write down “Yes” with their name. We would give priority to the most complete applications. I would say that 80% of the 200+ applications we received were able to be signed off on without requiring additional information. We did follow-ups via email for the remaining 20%, the majority of whom provided us with extra info that enabled us to validate their identity. One individual even sent us a copy of his official ID. There may have been a handful who didn’t reply to our requests for additional information.
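The two-person sign-off rule can be expressed in a couple of lines of Python. This is only a sketch of the rule as described above, with hypothetical function and field names; the actual process was manual, run over a shared Google Form rather than code.

```python
# Sketch of the SBTF two-person sign-off rule described above.
# Names are hypothetical; the real process was manual, via a Google Form.

def is_vetted(signoffs: list[str]) -> bool:
    """A volunteer counts as vetted once two *distinct* core-team
    members have written down "Yes" with their names."""
    return len(set(signoffs)) >= 2
```

Using a set means the same reviewer writing “Yes” twice does not count, which mirrors the intent of requiring two separate people.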

This entire vetting process appears to have worked, but it was extremely laborious and time-consuming. I personally spent hours and hours going through more than 100 applications. We definitely need to come up with a different system in the future. So I’ve been exploring some possible solutions—such as social authentication—with a number of groups and I hope to provide an update next month which will make all our lives a lot easier, not to mention give us more dedicated mapping time. There’s also the need to improve the Ushahidi platform to make it more like Wikipedia, i.e., where contributions can be tracked and logged. I think combining both approaches—identity authentication and tracking—may be the way to go.

Seeking the Trustworthy Tweet: Can “Tweetsourcing” Ever Fit the Needs of Humanitarian Organizations?

Can microblogged data fit the information needs of humanitarian organizations? This is the question asked by a group of academics at Pennsylvania State University’s College of Information Sciences and Technology. Their study (PDF) is an important contribution to the discourse on humanitarian technology and crisis information. The applied research provides key insights based on a series of interviews with humanitarian professionals. While I largely agree with the majority of the arguments presented in this study, I do have questions regarding the framing of the problem and some of the assertions made.

The authors note that “despite the evidence of strong value to those experiencing the disaster and those seeking information concerning the disaster, there has been very little uptake of message data by large-scale, international humanitarian relief organizations.” This is because real-time message data is “deemed as unverifiable and untrustworthy, and it has not been incorporated into established mechanisms for organizational decision-making.” To this end, “committing to the mobilization of valuable and time sensitive relief supplies and personnel, based on what may turn out be illegitimate claims, has been perceived to be too great a risk.” Thus far, the authors argue, “no mechanisms have been fashioned for harvesting microblogged data from the public in a manner, which facilitates organizational decisions.”

I don’t think this latter assertion is entirely true if one looks at the use of Twitter by the private sector. Take for example the services offered by Crimson Hexagon, which I blogged about 3 years ago. This successful start-up launched by Gary King out of Harvard University provides companies with real-time sentiment analysis of brand perceptions in the Twittersphere precisely to help inform their decision making. Another example is Storyful, which harvests data from authenticated Twitter users to provide highly curated, real-time information via microblogging. Given that the humanitarian community lags behind in the use and adoption of new technologies, it behooves us to look at those sectors that are ahead of the curve to better understand the opportunities that do exist.

Since the study principally focused on Twitter, I’m surprised that the authors did not reference the empirical study that came out last year on the behavior of Twitter users after the 8.8 magnitude earthquake in Chile. The study shows that about 95% of tweets related to confirmed reports validated that information. In contrast only 0.03% of tweets denied the validity of these true cases. Interestingly, the results also show that “the number of tweets that deny information becomes much larger when the information corresponds to a false rumor.” In fact, about 50% of tweets will deny the validity of false reports. This means it may very well be possible to detect rumors by using aggregate analysis on tweets.
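The aggregate pattern from the Chilean study (confirmed reports attract almost no denials, while roughly half the tweets about a false rumor deny it) can be sketched as a toy rumor flag. The 20% cutoff below is my own illustrative assumption, not a figure from the study.

```python
# Toy illustration of rumor detection via aggregate tweet analysis.
# Per the Chile study, ~0.03% of tweets deny confirmed reports, while
# ~50% deny false rumors. The 0.2 cutoff is an assumed threshold.

def looks_like_rumor(affirming: int, denying: int, threshold: float = 0.2) -> bool:
    """Flag a report when the share of denying tweets is unusually high."""
    total = affirming + denying
    if total == 0:
        return False  # no tweets, no signal either way
    return denying / total >= threshold
```

Confirmed reports sit far below any plausible threshold, while false rumors sit far above it, which is precisely why aggregate analysis holds promise here.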

On framing, I believe the focus on microblogging and Twitter in particular misses the bigger picture which ultimately is about the methodology of crowdsourcing rather than the technology. To be sure, the study by Penn State could just as well have been titled “Seeking the Trustworthy SMS.” I think this important research on microblogging would be stronger if this distinction were made and the resulting analysis tied more closely to the ongoing debate on crowdsourcing crisis information that began during the response to Haiti’s earthquake in 2010.

Also, as was noted during the Red Cross Summit in 2010, more than two-thirds of respondents to a survey noted that they would expect a response within an hour if they posted a need for help on a social media platform (and not just Twitter) during a crisis. So whether humanitarian organizations like it or not, crowdsourced social media information cannot be ignored.

The authors carried out a series of insightful interviews with about a dozen international humanitarian organizations to try and better understand the hesitation around the use of Twitter for humanitarian response. As noted earlier, however, it is not Twitter per se that is a concern but the underlying methodology of crowdsourcing.

As expected, interviewees noted that they prioritize the veracity of information over the speed of communication. “I don’t think speed is necessarily the number one tool that an emergency operator needs to use.” Another interviewee opined that “It might be hard to trust the data. I mean, I don’t think you can make major decisions based on a couple of tweets, on one or two tweets.” What’s interesting about this latter comment is that it implies that Twitter would be the only channel of information used in decision-making, a straw man that nobody I know has ever argued.

Either way, the trade-off between speed and accuracy is a well known one. As mentioned in this blog post from 2009, information is perishable and accuracy is often a luxury in the first few hours and days following a major disaster. As the authors of the study rightly note, “uncertainty is ‘always expected, if sometimes crippling’ (Benini, 1997) for NGOs involved in humanitarian relief.” Ultimately, the question posed by the authors of the Penn study can be boiled down to this: is some information better than no information if it cannot be immediately verified? In my opinion, yes. If you have some information, then at least you can investigate its veracity, which may lead to action. I also believe that from this philosophical point of view, the answer would still be yes.

Based on the interviews, the authors found that organizations engaged in immediate emergency response were less likely to make use of Twitter (or crowdsourced information) as a channel for information. As one interviewee put it, “Lives are on the line. Every moment counts. We have it down to a science. We know what information we need and we get in and get it…” In contrast, those organizations engaged in subsequent phases of disaster response were thought more likely to make use of crowdsourced data.

I’m not entirely convinced by this: “We know what information we need and we get in and get it…”. Yes, humanitarian organizations typically know, but whether they get it, and in time, is certainly not a given. Just look at the humanitarian responses to Haiti and Libya, for example. Organizations may very well be “unwilling to trade data assurance, veracity and authenticity for speed,” but sometimes this mindset will mean having absolutely no information. This is why OCHA asked the Standby Volunteer Taskforce to provide them with a live crowdsourced social media map of Libya. In Haiti, while the UN is not thought to have used crowdsourced SMS data from Mission 4636, other responders like the Marine Corps did.

Still, according to one interviewee, “fast is good, but bad information fast can kill people. It’s got to be good, and maybe fast too.” This assumes that no information doesn’t kill people. And good information that arrives late can also kill people. As one of the interviewees admitted when using traditional methods, “it can be quite slow before all that [information] trickles through all the layers to get to us.” The authors of the study also noted that, “Many [interviewees] were frustrated with how slow the traditional methods of gathering post-disaster data had remained despite the growing ubiquity of smart phones and high quality connectivity and power worldwide.”

On a side note, I found the following comment during the interviews especially revealing: “When we do needs assessments, we drive around and we look with our eyes and we talk to people and we assess what’s on the ground and that’s how we make our evaluations.” One of the common criticisms leveled against the use of crowdsourced information is that it isn’t representative. But then again, driving around, checking things out and chatting with people is hardly going to yield a representative sample either.

One of the main findings from this research has to do with a problem in attitude on the part of humanitarian organizations. “Each of the interviewees stated that their organization did not have the organizational will to try out new technologies. Most expressed this as a lack of resources, support, leadership and interest to adopt new technologies.” As one interviewee noted, “We tried to get the president and CEO both to use Twitter. We failed abysmally, so they’re not– they almost never use it.” Interestingly, “most of the respondents admitted that many of their technological changes were motivated by the demands of their donors. At this point in time their donors have not demanded that these organizations make use of microblogged data. The subjects believed they would need to wait until this occurred for real change to begin.”

For me the lack of will has less to do with available resources and limited capacity and far more to do with a generational gap. When today’s young professionals in the humanitarian space work their way up to more executive positions, we’ll see a significant change in attitude within these organizations. I’m thinking in particular of the many dozens of core volunteers who played a pivotal role in the crisis mapping operations in Haiti, Chile, Pakistan, Russia and most recently Libya. And when attitude changes, resources can be reallocated and new priorities can be rationalized.

What’s interesting about these interviews is that despite all the concerns and criticisms of crowdsourced Twitter data, all interviewees still see microblogged data as a “vast trove of potentially useful information concerning a disaster zone.” One of the professionals interviewed said, “Yes! Yes! Because that would – again, it would tell us what resources are already in the ground, what resources are still needed, who has the right staff, what we could provide. I mean, it would just – it would give you so much more real-time data, so that as we’re putting our plans together we can react based on what is already known as opposed to getting there and discovering, oh, they don’t really need medical supplies. What they really need is construction supplies or whatever.”

Another professional stated that, “Twitter data could potentially be used the same way… for crisis mapping. When an emergency happens there are so many things going on in the ground, and an emergency response is simply prioritization, taking care of the most important things first and knowing what those are. The difficult thing is that things change so quickly. So being able to gather information quickly…. <with Twitter> There’s enormous power.”

The authors propose three possible future directions. The first is bounded microblogging, which I have long referred to as “bounded crowdsourcing.” It doesn’t make sense to focus on the technology instead of the methodology because at the heart of the issue are the methods for information collection. In “bounded crowdsourcing,” membership is “controlled to only those vetted by a particular organization or community.” This is the approach taken by Storyful, for example. One interviewee acknowledged that “Twitter might be useful right after a disaster, but only if the person doing the Tweeting was from <NGO name removed>, you know, our own people. I guess if our own people were sending us back Tweets about the situation it could help.”

Bounded crowdsourcing overcomes the challenge of authentication and verification but obviously with a tradeoff in the volume of data collected “if an additional means were not created to enable new members through an automatic authentication system, to the bounded microblogging community.” However, the authors feel that bounded crowdsourcing environments “undermine the value of the system” since “the power of the medium lies in the fact that people, out of their own volition, make localized observations and that organizations could harness that multitude of data. The bounded environment argument neutralizes that, so in effect, at that point, when you have a group of people vetted to join a trusted circle, the data does not scale, because that pool by necessity would be small.”

That said, I believe the authors are spot on when they write that “Bounded environments might be a way of introducing Twitter into the humanitarian centric organizational discourse, as a starting point, because these organizations, as seen from the evidence presented above, are not likely to initially embrace the medium. Bounded environments could hence demonstrate the potential for Twitter to move beyond the PR and Communications departments.”

The second possible future direction is to treat crowdsourced data as ambient, “contextual information rather than instrumental information, (i.e., factual in nature).” This grassroots information could be considered as an “add-on to traditional, trusted institutional lines of information gathering.” As one interviewee noted, “Usually information exists. The question is the context doesn’t exist…. that’s really what I see as the biggest value [of crowdsourced information] and why would you use that in the future is creating the context…”.

The authors rightly suggest that “adding contextual information through microblogged data may alleviate some of the uncertainty during the time of disaster. Since the microblogged data would not be the single data source upon which decisions would be made, the standards for authentication and security could be less stringent. This solution would offer the organization rich contextual data, while reducing the need for absolute data authentication, reducing the need for the organization to structurally change, and reducing the need for significant resources.” This is exactly how I consider and treat crowdsourced data.

The third and final forward-looking solution is computational. The authors “believe better computational models will eventually deduce informational snippets with acceptable levels of trust.” They refer to Ushahidi’s SwiftRiver project as an example.

In sum, this study is an important contribution to the discourse. The challenges around using crowdsourced crisis information are well known. If I come across as optimistic, it is for two reasons. First, I do think a lot can be done to address the challenges. Second, I do believe that attitudes in the humanitarian sector will continue to change.

The Smart-Talk Trap in the Era of Social Media (and What to Do About It)

I just came across an excellent piece in the Harvard Business Review thanks to my colleague Larry Pixa. Published in 1999 by Stanford professors Pfeffer and Sutton, “The Smart-Talk Trap” (PDF) is even more pertinent in today’s new media world where user-generated content is ubiquitous.  The key to success is action but the authors warn that we are increasingly “rewarded for talking—and the longer, louder, and more confusingly, the better.” This dynamic, which substitutes talk for action, is responsible for what Pfeffer and Sutton call the knowing-doing gap. The purpose of this blog post is to assess this gap in the context of social media and to offer potential solutions.

“When confronted with a problem, people act as if discussing it, formulating decisions, and hashing out plans for action are the same as actually fixing it. It’s an understandable response, after all, talk, unlike action, carries little risk. But it can paralyze a company.” In particular, the authors found that a distinct kind of talk is “an especially insidious inhibitor of organizational action: ‘smart talk.'”

The elements of “smart talk” include “sounding confident, articulate and eloquent; having interesting information and ideas; and possessing a good vocabulary.” But there’s also a dark side to smart talk: “first, it focuses on the negative, and second, it is unnecessarily complicated or abstract (or both). In other words, people engage in smart talk to spout criticisms and complexities. Unfortunately, such talk has an uncanny way of stopping action in its tracks.” This realization is in part what drove me to write this blog post last year On Technology and Learning, Or Why the Wright Brothers Did Not Create the 747.

Pfeffer and Sutton find that “because management today revolves around meetings, teams and consensus building, the more a person says, the more valuable he or she appears.” Studies of contemporary organizations show that “people who talk frequently are more likely to be judged by others as influential and important—they’re considered leaders.” People compete in the “conversational marketplace” where quantity is often confused for quality. Some scholarly studies on leadership called this the “blabbermouth theory of leadership,” which states that “people who talk more often and longer—regardless of the quality of their comments—are more likely to emerge as leaders of new groups […].” And so, “people who want to get ahead in organizations [or communities of practice] learn that talking a lot helps them reach their goal more reliably than taking action or inspiring others to act does.”

The Stanford professors also note that “the fact that people get hired, promoted, and assigned to coveted jobs based on their ability to sound intelligent, and not necessarily on their ability to act that way, is well known in most organizations.” I would add getting funding to this list, by the way. It gets worse, though. The studies carried out by Pfeffer and Sutton revealed that “junior executives made a point—especially in meetings with their bosses present—of trashing the ideas of their peers. Every time someone dared to offer an idea, everyone around the table would leap in with reasons why it was nothing short of idiotic.” This particular dynamic is amplified several fold by the very public nature of social media. It can take a thick skin to offer new ideas in a blog post or via a Tweet when some users of social media actively use these same communication technologies to publicly criticize and humiliate others.

This evidence is not just anecdotal. Citing an earlier Harvard study called “Brilliant but Cruel,” the authors relay that “people who wrote negative book reviews were perceived by others as being less likable but more intelligent, competent and expert than people who wrote positive reviews of the same books.” In summary, “Only pessimism sounds profound. Optimism sounds superficial.” This has a direct connection to the knowing-doing gap. “If those with courage to propose something concrete have been devastated in the process, they’ll either leave or learn to be smart-talkers themselves. As a result, a company will end up being filled with clever put-down artists. It will end up paralyzed by the fear and silence those people spawn.” Incidentally, “people will try to sound smart not only by being critical but also by using trendy, pretentious, or overblown language.” This describes so well some tweets and blog post comments that I come across. Some people have actually managed to make a career out of this by leveraging social media to magnify their smart talk.

The authors note that using complex language and concepts can also make one sound smart. Indeed, “rare is the manager who stands before his or her peers to present a new strategy with a single slide and an idea that can be summarized in a sentence or two. Instead, managers congratulate themselves and one another when they come up with ideas that are so elaborate and convoluted they require two hours of multipart, multicolored slides and a liberal sprinkling of the latest buzzwords.” Now, the authors are “not claiming that complex language and concepts never add value to an organization.” They are simply suggesting that such language “brings a lot less value than most executives realize.” In the context of social media like Twitter, however, I have seen people simplify or summarize a concept to the extreme in order to trash someone else’s point of view.

Do not despair, however, as Pfeffer and Sutton have found five characteristics that describe organizations that have managed to avoid the smart-talk trap. All of these characteristics are applicable in the era of social media. In fact, social media can perhaps even amplify these characteristics when leveraged carefully. Here’s a summary of the five characteristics and some preliminary thoughts on their intersection with social media—please do share any ideas you may have:

1. “They have leaders who know and do the work.” “Leaders who do the work, rather than just talk about it, help prevent the knowing-doing gap from opening in the first place.” For me, this means avoiding blog and Twitter wars—you know, the endless back-and-forth with smart-talk experts. The only way to win, like in the movie War Games, is not to play. Otherwise, you’ll get pulled into an endless debate instead of doing the real work that actually matters. Doing the work also increases the chances that people who respect you will blog and tweet about your concrete actions.

2. “They have a bias for plain language and simple concepts. Companies that avoid the knowing-doing gap are often masters of the mundane. […] They consider ‘common sense’ a compliment rather than an insult.” Keep it simple. Blogging has actually helped me refine some of my ideas since blog posts are generally expected to be on the shorter side. Thus keeping a blog gives you an incentive to really think through the core elements of your new idea and prevents you from writing endlessly without a point. Also, bloggers generally want a wider readership, so avoiding jargon and industry terms is a good idea (when possible). In terms of branding and marketing, this means focusing on creating a simple message about your product, what you do and why that matters.

3. “They frame the questions by asking ‘how,’ not just ‘why’.” “In other words, the conversation focuses not on faults but on overcoming them.” I like this idea a lot and it can be imported into the social media space. Instead of shooting down someone else’s idea with a tweet reply “why in the world would you ever want to do that?!” invite them to explain how they’d implement their idea. The same tactic may help prevent blog wars.

4. “They have strong mechanisms for closing the loop.” “The companies in our study that bridged the gap […] had effective mechanisms in place to make sure decisions didn’t just end on paper but actually got implemented.” “Closing the loop—following up to make sure something actually happens after it has been decided on—isn’t a very complicated idea. But it is a potent means for preventing talk from being the only thing that occurs.” Social media leaves a digital trace, which may help you be more accountable to yourself and your organization. There are parallels with the Quantified Self movement here. If you blog and tweet about planning to launch a new and exciting project, then people may ask what happened if you don’t follow through with more blog posts on the topic. Another idea would be to use Basecamp and tie to-do’s and milestones (that are not confidential) to your twitter account, especially if your donor follows you online.

5. “They believe that experience is the best teacher ever.” “They make the process of doing into an opportunity to learn.” One company “had ‘No Whining’ patches sewn on everyone’s uniform and explained that complaining about something without trying to do anything about it was not acceptable.” I like this idea as well. Don’t be afraid to try something new and iterate. “Fail fast and fail forward,” some software developers like to say. Share your experience with others via blogs and tweets and invite them to propose solutions to address your challenges. Make the smart-talkers around you part of the solution when possible and publicly invite them to act constructively. If they don’t and continue with their slander, they’ll continue to lose credibility while you focus on concrete action, which is the key to success.

The Role of Facebook in Disaster Response

I recently met up with some Facebook colleagues to discuss the role that they and their platform might play in disaster response. So I thought I’d share some thoughts that came up during the conversation, seeing as I’ve been thinking about this topic with a number of other colleagues for a while. I’m also very interested to hear any ideas and suggestions that iRevolution readers may have on this.

There’s no doubt that Facebook can—and already does—play an important role in disaster response. In Haiti, a colleague used Facebook to recruit hundreds of Creole-speaking volunteers to translate tens of thousands of text messages into English as part of our Ushahidi-Haiti crisis mapping efforts. When an earthquake struck New Zealand earlier this year, thousands of students organized their response via a Facebook group and also used the platform’s check-in feature to alert others in their social network that they were alright.

But how else might Facebook be used? The Haiti example demonstrates that the ability to rapidly recruit large numbers of volunteers is really key. So Facebook could create a dedicated landing page when a crisis unfolds, much like Google does. This landing page could then be used to recruit thousands of new volunteers for live crisis mapping operations in support of humanitarian organizations (for example). The landing page could spotlight a number of major projects that new volunteers could join, such as the Standby Volunteer Task Force (SBTF) or perhaps highlight the deployment of an Ushahidi platform for a particular crisis.

The use of Facebook to recruit volunteers presents several advantages, the most important ones being identity and scale. When we recruited hundreds of new volunteers for the Libya Crisis Map in support of the UN’s humanitarian response, we had to vet and verify each and every single one of them twice to ensure they really were who they said they were. This took hours, which wouldn’t be the case using Facebook. If we could set up a way for Facebook users to sign into an Ushahidi platform directly from their Facebook account, this too would save many hours of tedious work—a nice idea that my colleague Jaroslav Valuch suggested. See Facebook Connect, for example.

Facebook also operates at a scale of more than half-a-billion people, which has major “Cognitive Surplus” potential. We could leverage Facebook’s ad services as well—a good point made by one Facebook colleague (and also Jon Gosier in an earlier conversation). That way, Facebook users would receive targeted ads on how they could volunteer based on their existing profiles.

So there’s huge potential, but like much else in the ICT-for-you-name-it space, you first have to focus on people, then process and then the technology. In other words, what we need to do first is establish a relationship with Facebook and decide on the messaging and the process by which volunteers on Facebook would join a volunteer network like the Standby Volunteer Task Force and help out on an Ushahidi map, for example.

Absorbing several hundred or thousands of new volunteers is no easy task, but as long as we have a simple and efficient micro-tasking system via Facebook, we should be able to absorb this surge. Perhaps our colleagues at Facebook could take the lead on that, i.e., create a simple interface allowing groups like the Task Force to farm out all kinds of micro-tasks, much like Crowdflower, which already embeds micro-tasks in Facebook. Indeed, we worked with Crowdflower during the floods in Pakistan to create this micro-tasking app for volunteers.
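To make the farming-out idea concrete, here is a minimal sketch in Python of what such a micro-tasking queue could look like. All names and tasks here are hypothetical; this is not Crowdflower’s actual API, just an illustration of the basic pattern of handing out small tasks to whoever asks next:

```python
from collections import deque

class MicroTaskQueue:
    """Hypothetical queue that farms out small tasks to volunteers."""

    def __init__(self):
        self.pending = deque()   # tasks waiting to be claimed
        self.completed = {}      # task -> volunteer who finished it

    def add_task(self, task):
        self.pending.append(task)

    def next_task(self):
        # Hand the oldest pending task to the requesting volunteer.
        return self.pending.popleft() if self.pending else None

    def complete(self, task, volunteer):
        self.completed[task] = volunteer

# Example: a coordinator queues tasks, a volunteer claims and finishes one.
queue = MicroTaskQueue()
queue.add_task("translate SMS #1042")
queue.add_task("geo-locate report #77")
task = queue.next_task()
queue.complete(task, "volunteer_a")
```

The point of the sketch is simply that the coordination logic is tiny; the hard part is the surge of people, not the software.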

As my colleague Jaroslav also noted, this Mechanical Turk approach would allow these organizations to evaluate the performance of their volunteers on particular tasks. I would add to this some gaming dynamics to provide incentives and rewards for volunteering, as I blogged about here. Having a public score board based on the number of tasks completed by each volunteer would be just one idea. One could add badges, stickers, banners, etc., to a volunteer’s Facebook profile page as they complete tasks. And yes, the next question would be: how do we create the Farmville of disaster response?
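The score board and badge idea above is easy to prototype. Here is a small Python sketch (thresholds, badge names and volunteer names are all made up for illustration) of awarding badges by task count and ranking volunteers:

```python
# Illustrative badge tiers: tasks completed -> badge earned.
BADGE_THRESHOLDS = [(100, "gold"), (50, "silver"), (10, "bronze")]

def badge_for(tasks_completed):
    """Return the highest badge earned for a given task count, or None."""
    for threshold, badge in BADGE_THRESHOLDS:
        if tasks_completed >= threshold:
            return badge
    return None

def leaderboard(scores):
    """Rank volunteers by tasks completed, highest first."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example public score board for three hypothetical volunteers.
scores = {"amina": 57, "ben": 5, "carla": 104}
ranked = leaderboard(scores)
```

A real deployment would of course need to guard against gaming the score (e.g., by weighting task difficulty or spot-checking quality), which is exactly where Jaroslav’s performance-evaluation point comes in.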

On the Ushahidi end, it would also be good to create a Facebook app for Ushahidi so that users could simply map from their own Facebook page rather than open up another browser to map critical information. As one Facebook colleague also noted, friends could then easily invite others to help map a crisis via Facebook. Indeed, this social effect could be the most powerful reason to develop an Ushahidi Facebook app. As you submit a report on a map, this could be shared as a status update, for example, inviting your friends to join the cause. This could help crisis mapping go viral across your own social network—an effect that was particularly important in launching the Ushahidi-Haiti project.

As a side note, there is an Ushahidi plugin for Facebook that allows content posted on a wall to be directly pushed to the Ushahidi backend for mapping. But perhaps our colleagues at Facebook could help us add more features to this existing plugin to make it even more useful, such as integrating Facebook Connect, as noted earlier.

In sum, there is some low-hanging fruit here, and a few weeks of collaboration with Facebook could yield quick wins. These quick wins could make a really significant impact even if they sound (and are) rather simple. For me, the most exciting of these is the development of a Facebook app for Ushahidi.

Harnessing Social Media Tools to Fight Corruption

I had the distinct pleasure of being interviewed for this report on Harnessing Social Media Tools to Fight Corruption (PDF). The study was prepared by Dana Bekri, Brynne Dunn, Isik Oguzertem, Yan Su and Shivani Upreti as part of a final project for their degree from the Department of International Development at the London School of Economics and Political Science (LSE). The report was prepared for Transparency International (TI).

As part of this project, the authors compiled a very useful database of projects that apply social tools to create greater transparency and accountability around corruption issues. The authors recommend that TI draw on this list of projects to catalyze an active network of civil society initiatives that challenge corruption. The report also includes an interesting section on Mobilizing Volunteers and considers the role of volunteer networks as important in the fight against corruption. The authors write that,

“As an essential expression of citizenship and democracy, the past 25 years have seen rapid growth in the practice of volunteering worldwide. One study reports approximately 20.8 million volunteers in 37 countries, contributing US$ 400 billion to the world economy. The increasing enthusiasm of individuals to serve a cause while improving their own skills complements key goals of civil society organisations to build a strong volunteer force.”

This of course relates directly to the Standby Volunteer Task Force (SBTF), so I’m always keen to learn more about lessons learned and best practices in catalyzing a thriving volunteer network.

Do let me know if you’d like to get in touch with the authors; I’d be happy to provide an introduction via email.

Discussing the Recommendations of the Disaster 2.0 Report

It’s been well over a month since the Disaster 2.0 Report was publicly launched, and while some conversations on the report have taken place on the Crisis Mappers Network list-serve and the Standby Task Force blog, much of this discussion has largely overlooked the report’s detailed recommendations. I had hoped by now that someone would have taken the lead on catalyzing a debate around these recommendations, but since that still hasn’t happened, I might as well start.

The report’s authors clearly state that, “the development of an interface between the Volunteer and Technical Communities (V&TCs) and formal humanitarian system is a design problem that must be left to the stakeholders.” In addition, they clarify that “the purpose of this document is not to set forth the final word on how to connect new information flows into the international humanitarian system; but to initiate a conversation about the design challenges involved with this endeavor.” This conversation has yet to happen.

While the humanitarian community has proposed some design ideas, V&TCs have not responded in any detail to these proposals—although to be fair, no deadline for feedback has been suggested either. In any case, the proposed designs are meant to create an interface between humanitarian organizations and V&TCs—the two main stakeholders discussed in the Disaster 2.0 Report. It would be unfortunate and probably defeat the purpose of the report if the final interface were operationalized before any V&TCs had the chance to explain what would work best for them in terms of interface. Indeed, without an open and pro-active conversation that includes both stakeholder groups, it is unlikely that the final interface design will gain buy-in from both groups, which would result in wasted funding.

So here’s an open and editable Google Doc that includes the report’s recommendations. I have already added some of my comments to the Google Doc and hope others will as well. On June 1st, I will publish a new blog post that will summarize all the feedback added to the Google Doc. I hope this summary will serve to move the conversations forward so we can co-develop an interface that will prove useful and effective to all those concerned.

Video: Changing the World, One Map at a Time

Hosted in the beautiful city of Berlin, Re:publica 2011 is Germany’s largest annual conference on blogs, new media and the digital society, drawing thousands of participants from across the world for three days of exciting conversations and presentations. The conference venue was truly a spectacular one, and while conference presentations are typically limited to 10-20 minutes, the organizers gave us an hour to share our stories. So I’m posting the video of my presentation below for anyone interested in learning more about new media, crowdsourcing, crisis mapping, live maps, crisis response, civil resistance, digital activism and check-ins. I draw on my experience with Ushahidi and the Standby Volunteer Task Force (SBTF) and share examples from Kenya, Haiti, Libya, Japan, the US and Egypt to illustrate how live maps can change the world. My slides are available on Slideshare here.

When the Network Bears Witness: From Photosynth to Allsynth?

I’ve blogged about Photosynth before and toyed around with the idea of an Allsynth platform, the convergence of multiple technologies and sensors for human rights monitoring and disaster response. The idea would be to “stitch” pictures and video footage together to reconstruct evidence in a multi-media, 3D-type format. What if we could do this live and in a networked way, though?

The thought popped into my head while at the Share Conference in Belgrade recently. The conference included a “Share by Night” track with concerts, live bands, etc., in the evenings. What caught my eye, one night, was not on stage but the dozens of smart phones being held up in the audience to capture the vibes, sounds, movements, etc., on stage.

I then thought about that for a moment and the new start up, Color.com, that a colleague of mine co-founded. “Color creates new, dynamic social networks for your iPhone or Android wherever you go. It is meant to be used with other people right next to you who have the Color app on their smartphone. This way, you can take photos and videos together that everyone keeps.”

I was thinking about the concert and all those bright screens. What if they were streaming live and networked? Meaning one could watch all the streaming videos from one website and pivot between different phones. Because the phones are geo-tagged, I’d be able to move closer to the stage, pivoting to phone users closer to the stage. Who needs a film crew anymore? The phones in the audience become your cameras, and you could even mix your own music video that way.
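The pivoting idea boils down to a simple geometric query: given the geo-tags of all live streams, pick the one closest to where the viewer wants to “stand.” A minimal Python sketch (stream IDs and coordinates are invented for illustration) using the standard haversine formula for distance between two lat/lon points:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def pivot_to(streams, target_lat, target_lon):
    """Pick the live stream whose phone is geo-tagged closest to the target."""
    return min(streams, key=lambda s: haversine_m(s["lat"], s["lon"],
                                                  target_lat, target_lon))

# Two hypothetical phones in the audience; the viewer pivots toward the stage.
streams = [
    {"id": "phone_a", "lat": 44.8200, "lon": 20.4650},
    {"id": "phone_b", "lat": 44.8176, "lon": 20.4571},  # right by the stage
]
stage = (44.8175, 20.4570)
closest = pivot_to(streams, *stage)
```

In practice, phone GPS accuracy (tens of meters outdoors, worse indoors) would limit how finely one could pivot, but for crowd-scale scenes like a concert or a square, it would likely be good enough.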

I’m at Where 2.0 today, and Blaise Agüera y Arcas from Photosynth shared his most recent work, ReadWriteWorld, which he just announced today. “Technically it’s an indexing, unification, and connection of the world’s geo-linked media. Informally, it’s the magic of:

  • Seeing your photos automatically connected to others;
  • Being able to simply create immersive experiences from your or your friends’ photos, videos, and panoramas;
  • “Fixing” the world, when the official imagery of your street is out of date;
  • Visually mapping your business, your favorite park, or your real estate for everyone to see;
  • Understanding the emergent information from the density and tagging of media.”
What if we applied this kind of networked, meshed technology to human rights monitoring or disaster response? Granted, most of the world does not own a smart phone. But that won’t always be the case. What if the hundreds of thousands of phones used in Tahrir Square were networked and streaming as if they were covering a concert?

I don’t know the answer to that question, but I find the idea interesting.