
The Future of Crisis Mapping is Finally Here

In 2010, I had the opportunity to participate in the very first Disaster Response Working Group meeting held at Facebook. The digital humanitarian response to the tragic Haiti earthquake months earlier was the main point of discussion. Digital Humanitarians at the time had crowdsourced social media monitoring and satellite imagery analysis to create a unique set of crisis maps used by a range of responders. Humanitarian organizations to this day point to the Haiti response as a pivotal milestone in the history of crisis mapping. Today marks an equally important milestone thanks to three humanitarian groups and Facebook.

Facebook just announced a new partnership with UNICEF, the International Federation of the Red Cross (IFRC), American Red Cross (ARC) and the World Food Program (WFP) to begin sharing actionable, real-time data that will fill critical data gaps in the first hours of a sudden-onset disaster. UNICEF, IFRC, ARC and WFP deserve considerable praise for partnering on such an innovative effort. As the IFRC’s World Disasters Report noted in 2005, having access to information during disasters is as important as having access to food, water and medicine. But unlike these other commodities, information has a far shorter shelf life. In other words, the value of information depreciates very quickly; information rots fast.

Disaster responders need information that is both reliable and timely. Both are typically scarce after disasters. Saving time can make all the difference. The faster responders get reliable information, the faster they can prioritize and mobilize relief efforts based on established needs. Information takes time to analyze, however, especially unstructured information. Digital Humanitarians encountered this Big Data challenge first hand during the Haiti Earthquake response, and after most disasters since then. Still, online data has the potential to fill crucial data gaps. This is especially true if this data is made available in a structured and responsible way by a company like Facebook; a platform that reaches nearly 2 billion people around the world. And by listening to what aid organizations need, Facebook is providing this information in a format that is actually usable and useful.

Listening to Humanitarian Needs

In early 2016, I began consulting with Facebook on their disaster mapping initiative. One of our first orders of business was to reach out to subject matter experts around the world. It is all too easy for companies in Silicon Valley to speculate about solutions that could be useful to humanitarian organizations. The problem with that approach is that said companies almost never consult seasoned humanitarian professionals in the process. Facebook took a different approach. They spent well over half a year meeting with and listening to humanitarian professionals across a number of different aid organizations. Then they co-developed the solution with experts from UNICEF, IFRC, ARC and WFP, and with me. This process ensured that they built solutions that are actually needed by the intended end users. Other Silicon Valley companies really ought to take the same approach when seeking to support social good efforts in a meaningful manner.

UNICEF, IFRC, ARC and WFP bring extensive expertise and global reach to this new partnership with Facebook. They have both the capacity and strong interest to fully leverage the new disaster maps being made available. And each of these humanitarian organizations has spent a considerable amount of time and energy collaborating with Facebook to iterate on the disaster maps. This type of commitment, partnership and leadership from the humanitarian sector is absolutely necessary to innovate and to scale innovation.

One of the areas in which Facebook exercised great care was in applying protection standards. This was another area in which I provided guidance, along with colleagues at the International Committee of the Red Cross (ICRC). We worked closely with Facebook to ensure that their efforts followed established protection protocols in the humanitarian sector. In September 2016, for example, three Facebookers and I participated in a full-day protection workshop organized by the ICRC. Facebook presented on the new mapping project – still in its very early stages – and actively solicited feedback from the ICRC and a dozen other humanitarian organizations that participated in the workshop. Facebook noted upfront that they didn’t have all the answers and welcomed as much input as humanitarian professionals could give. As it turns out, they were already well on their way to being fully in line with the ICRC’s own protection protocols.

Facebook also worked with its own internal privacy, security and legal teams to ensure that the datasets it produced were privacy-preserving and consistent with legal standards around the world. This process took a long time. Some insight from the “inside”: I began joking that this process makes the UN look fast. But the fact that Facebook was so careful and meticulous when it came to data privacy was certainly reassuring. To be sure, Facebook developed a rigorous review process to ensure that our applied research was carried out responsibly and ethically. This demonstrates that using data for high-impact, social good projects need not be at odds with privacy—we can achieve both. By using data aggregation and spatial smoothing, for example, we can reduce noise in the data and identify important trends while upholding Facebook’s data privacy standards.
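To make the aggregation and smoothing idea concrete, here is a minimal sketch: individual location pings are binned into coarse grid cells, cells with too few users are suppressed, and each cell’s count is averaged with its neighbors. The cell size, privacy threshold and averaging scheme below are my own illustrative assumptions, not Facebook’s actual parameters.

```python
from collections import defaultdict

CELL_DEG = 0.1   # illustrative: ~11 km grid cells
MIN_COUNT = 10   # illustrative: suppress cells with too few users

def to_cell(lat, lon):
    """Map a coordinate to a coarse grid cell."""
    return (round(lat / CELL_DEG), round(lon / CELL_DEG))

def aggregate(pings):
    """Count users per grid cell; drop cells below the privacy threshold."""
    counts = defaultdict(int)
    for lat, lon in pings:
        counts[to_cell(lat, lon)] += 1
    return {cell: n for cell, n in counts.items() if n >= MIN_COUNT}

def smooth(counts):
    """Average each reported cell with its 8 neighbors to reduce noise."""
    smoothed = {}
    for (i, j) in counts:
        neigh = [counts.get((i + di, j + dj), 0)
                 for di in (-1, 0, 1) for dj in (-1, 0, 1)]
        smoothed[(i, j)] = sum(neigh) / len(neigh)
    return smoothed
```

Note how the threshold alone already prevents sparsely populated cells from ever appearing in the output, which is where the privacy protection comes from in this sketch.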

Another important area of collaboration very early on focused specifically on data bias. The team at Facebook was careful to emphasize that their data was not a silver bullet – it is representative of people who use Facebook on mobile with Location Services enabled. To this end, one of the areas I worked on closely with Facebook was validation. For example, in an early iteration of the maps, I analyzed mainstream media news reports on the Fort McMurray Fires in Canada and matched them with specific patterns we had observed on Facebook’s maps. The results suggested that Facebook’s geospatial data was providing reliable insights about evacuation and safety on the ground, and doing so in real time, whereas the corresponding media reports were published many hours later.

Facebook Safety Check

Within 24 hours of activating Safety Check, we see that there are far fewer people than usual in the town of Fort McMurray. Areas that are color-coded red reflect much lower numbers of Facebook users there compared to the same time the week before. This makes sense since these locations are affected by the wildfires and have thus been evacuated.

We can use Facebook’s Safety Check data to create live disaster maps that quickly highlight where groups of users are checking in safe, and also where they are not checking in safe. This could provide a number of important proxies such as disaster damage, for example.
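As an illustration of how such a proxy might be computed, the sketch below aggregates hypothetical Safety Check responses into a per-area share of users checked in safe. The record format and the minimum-user threshold are my own assumptions for illustration, not the actual Safety Check data schema.

```python
from collections import defaultdict

def safety_check_summary(responses, min_users=20):
    """Per-area share of users checked in safe.

    responses: iterable of (admin_area, checked_in_safe) pairs.
    Areas with fewer than min_users responses are suppressed, both
    to keep the proxy meaningful and to protect individual privacy.
    """
    safe = defaultdict(int)
    total = defaultdict(int)
    for area, is_safe in responses:
        total[area] += 1
        safe[area] += int(is_safe)
    return {area: safe[area] / total[area]
            for area in total if total[area] >= min_users}
```

An area where the safe share stays low hours into a disaster could then be flagged for follow-up as a possible damage hotspot.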

Facebook Location Maps

We see that before the crisis began (left plot) people were located in the town in expected numbers, but quickly vacated it over the next 24-hour period (map turning red). Even just an hour and a half into the crisis we can tell that users are evacuating the town (the red color indicating low numbers of people present compared to baseline data). This signal becomes even clearer and more consistent as the crisis progresses.

Population here refers to the population of Facebook users. These aggregated maps can provide a proxy for population density and movement before, during and after humanitarian disasters.

In the above video, the blue line that stretches diagonally across the map is Highway 63, which was the primary evacuation route for many in Fort McMurray. The video shows where the general population of Facebook users is moving over time at half-hour intervals. Notice that the blue line becomes even denser between 1 and 3 A.M. local time. Reports from the mainstream media published that afternoon revealed that many drivers ended up having to “camp” along the highway overnight.

Take the map below of the Kaikoura Earthquake in New Zealand as another example. The disaster maps for the earthquake show the location and movement of people in Kaikoura following the disaster. One day after the earthquake, we notice that the population begins to evacuate the city. Using news articles, we can cross-validate that residents of Kaikoura were evacuated to Christchurch, 200 kilometers away. Several days later, we notice from the Facebook maps that individuals are starting to return to Kaikoura, presumably to repair and rebuild their community.
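The core computation behind these before/after comparisons – current user counts relative to a pre-crisis baseline such as the same time the week before – can be sketched as follows. The function and the epsilon guard against empty baseline cells are my own illustration, not Facebook’s implementation.

```python
def density_change(current, baseline, eps=1.0):
    """Ratio of current Facebook-user counts to baseline counts, per
    admin area. Values well below 1.0 suggest evacuation; values
    well above 1.0 suggest an influx (e.g., evacuees arriving)."""
    areas = set(current) | set(baseline)
    return {a: current.get(a, 0) / max(baseline.get(a, 0), eps)
            for a in areas}
```

For Fort McMurray, such a ratio would drop sharply within hours of the evacuation order, while a receiving city like Christchurch in the Kaikoura example would show the opposite movement.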

It’s still early days, and Facebook plans to work closely alongside their partners to better understand and report biases in the data. This is another reason why Facebook’s partnership with UNICEF, IFRC, ARC and WFP is so critical. These groups have the capacity to compare the disaster maps with other datasets, validate the maps with field surveys, and support Facebook in understanding how to address issues of representativeness. One approach they are exploring is to compare the disaster maps to the population density datasets that Facebook has already open-sourced. By making this comparison, we can clearly communicate any areas that are likely to be inadequately covered by the disaster data. They are also working with Facebook’s Connectivity Lab to develop bias-correcting solutions based on maps of cell phone connectivity. For more on social media, bias and crisis mapping, see Chapter 2 of Digital Humanitarians.

Moving Forward

Our humanitarian partners are keen to use Facebook’s new solution in their relief efforts. Thanks to Facebook’s data, we can create a series of unique maps that in turn provide unique insights and do so in real-time. These maps can be made available right away and updated at 15-minute intervals if need be. Let me repeat that: every 15 minutes. This is the first time in history that humanitarian organizations will have access to such high-frequency, privacy-preserving, structured data powered by some 1.86 billion online users.

There is no doubt that responders would’ve had far more situational awareness and far more quickly had these crisis maps existed in the wake of Haiti’s tragic earthquake in 2010. Since the maps aggregate Facebook data to administrative boundaries, humanitarian partners can also integrate this unique dataset into their own systems. During the first Facebook Disaster Working Group meeting back in 2010, we asked ourselves how Facebook might leverage its own data to create unique maps to help aid organizations reduce suffering and loss of life. Today, not only do we have an answer to this question, we also have the beginnings of an operational solution that humanitarians can use directly.

Facebook’s new disaster mapping solution is not a silver bullet, however; all my colleagues at Facebook recognize this full well, as do our humanitarian partners. These maps simply serve as new, unique and independent sources of real-time data and insights for humanitarian organizations. The number of Facebook users has essentially doubled since the Haiti Earthquake, nearing 2 billion users today. The more people around the planet connect and share on Facebook, the more insights responders gain on how best to carry out relief efforts during major disasters. This information is a public good that has the potential to save lives, and it’s crucial that insights derived from the data be made available to those who can put it to use. I sincerely hope that other Silicon Valley companies take note of these efforts and follow in Facebook’s footsteps.

As a next step, Facebook is looking to both international and local humanitarian partners to help improve, validate and measure the impact of these new disaster maps. As the Facebook team works to validate the maps with the humanitarian community, they also hope to make the maps available to aid organizations through a dedicated API and visualization tool. Interested organizations will be asked to follow a simple application process to gain access to the disaster maps.

Facebook disaster maps are really unique and we’ve only begun to scratch the surface vis-à-vis the different humanitarian efforts these maps can inform. For example, my team and I at WeRobotics were recently in the Dominican Republic (DR) where we ran a full-fledged disaster response exercise with the country’s Emergency Operations Center (EOC) and the World Food Program (WFP). The purpose of the simulation—which focused on searching for survivors and assessing disaster damage—was to develop and test coordination mechanisms to facilitate the rapid deployment of small drones or Unmanned Aerial Vehicles (UAVs). As the drone pilots began to program their drones to carry out the aerial surveys, I turned to my WFP colleague Gabriela and said:

“What if, during the next disaster, we used Facebook’s Safety Check Map to prioritize which areas the drones should search? What if we used Facebook’s Population Map to prioritize aerial surveys of areas that are being abandoned, possibly due to collapsed buildings or other types of infrastructure damage? Since the Facebook maps are available in near real-time, we could program the drone flights within minutes of a disaster. What do you think?”

Gaby looked back at the drones and said:

“Wow. This would change everything.”


What Was Novel About Social Media Use During Hurricane Sandy?

We saw the usual spikes in Twitter activity and the typical (reactive) launch of crowdsourced crisis maps. We also saw map mashups combining user-generated content with scientific weather data. Facebook was once again used to inform our social networks: “We are ok” became the most common status update on the site. In addition, thousands of pictures were shared on Instagram (600/minute), documenting both the impending danger and resulting impact of Hurricane Sandy. But was there anything really novel about the use of social media during this latest disaster?

I’m asking not because I claim to know the answer but because I’m genuinely interested and curious. One possible “novelty” that caught my eye was this FrankenFlow experiment to “algorithmically curate” pictures shared on social media. Perhaps another “novelty” was the embedding of webcams within a number of crisis maps, such as those below launched by #HurricaneHacker and Team Rubicon respectively.

Another “novelty” that struck me was how much focus there was on debunking false information being circulated during the hurricane—particularly images. The speed of this debunking was also striking. As regular iRevolution readers will know, “information forensics” is a major interest of mine.

This Tumblr post was one of the first to emerge in response to the fake pictures (30+) of the hurricane swirling around the social media whirlwind. Snopes.com also got in on the action with this post. Within hours, The Atlantic Wire followed with this piece entitled “Think Before You Retweet: How to Spot a Fake Storm Photo.” Shortly after, Alexis Madrigal from The Atlantic published this piece on “Sorting the Real Sandy Photos from the Fakes,” like the one below.

These rapid rumor-bashing efforts led BuzzFeed’s John Herrman to claim that Twitter acted as a truth machine: “Twitter’s capacity to spread false information is more than cancelled out by its savage self-correction.” This is not the first time that journalists or researchers have highlighted Twitter’s tendency for self-correction. This peer-reviewed, data-driven study of disaster tweets generated during the 2010 Chile Earthquake reports the same finding.

What other novelties did you come across? Are there other interesting, original and creative uses of social media that ought to be documented for future disaster response efforts? I’d love to hear from you via the comments section below. Thanks!

Behind the Scenes: The Digital Operations Center of the American Red Cross

The Digital Operations Center at the American Red Cross is an important and exciting development. I recently sat down with Wendy Harman to learn more about the initiative and to exchange some lessons learned in this new world of digital humanitarians. One common challenge in emergency response is scaling. The American Red Cross cannot be everywhere at the same time—and that includes being on social media. More than 4,000 tweets reference the Red Cross on an average day, a figure that skyrockets during disasters. And when crises strike, so does Big Data. The Digital Operations Center is one response to this scaling challenge.

Sponsored by Dell, the Center uses customized software produced by Radian6 to monitor and analyze social media in real-time. The Center itself seats three people who have access to six customized screens that relay relevant information drawn from various social media channels. The first screen below depicts some of the key topical areas that the Red Cross monitors, e.g., references to the American Red Cross, Storms in 2012, and Delivery Services.

Circle sizes in the first screen depict the volume of references related to that topic area. The color coding (red, green and beige) relates to sentiment analysis (beige being neutral). The dashboard with the “speed dials” right underneath the first screen provides more details on the sentiment analysis.

Let’s take a closer look at the circles from the first screen. The dots “orbiting” the central icon relate to the categories of key words that the Radian6 platform parses. You can click on these orbiting dots to “drill down” and view the individual key words that make up that specific category. This circles screen gets updated in near real-time and draws on data from Twitter, Facebook, YouTube, Flickr and blogs. (Note that the distance between the orbiting dots and the center does not represent anything).

An operations center would of course not be complete without a map, so the Red Cross uses two screens to visualize different data on two heat maps. The one below depicts references made on social media platforms vis-a-vis storms that have occurred during the past 3 days.

The screen below the map highlights the bios of 50 individual Twitter users who have made references to the storms. All this data gets generated from the “Engagement Console” pictured below. The purpose of this web-based tool, which looks a lot like TweetDeck, is to enable the Red Cross to customize the specific types of information they’re looking for, and to respond accordingly.

Let’s look at the Console more closely. In the Workflow section on the left, users decide what types of tags they’re looking for and can also filter by priority level. They can also specify the type of sentiment they’re looking for, e.g., negative feelings vis-a-vis a particular issue. In addition, they can take certain actions in response to each information item. For example, they can reply to a tweet, a Facebook status update, or a blog post; and they can do this directly from the Engagement Console. Based on the license that the Red Cross uses, up to 25 of their team members can access the Console and collaborate in real-time when processing the various tweets and Facebook updates.

The Console also allows users to create customized timelines, charts and word cloud graphics to better understand trends changing over time in the social media space. To fully leverage this social media monitoring platform, Wendy and team are also launching a digital volunteers program. The goal is for these volunteers to eventually become the prime users of the Radian6 platform and to filter the bulk of relevant information in the social media space. This would considerably lighten the load for existing staff. In other words, the volunteer program would help the American Red Cross scale in the social media world we live in.

Wendy plans to set up a dedicated 2-hour training for individuals who want to volunteer online in support of the Digital Operations Center. These trainings will be carried out via Webex and will also be available to existing Red Cross staff.


As I argued in this previous blog post, the launch of this Digital Operations Center is further evidence that the humanitarian space is ready for innovation and that some technology companies are starting to think about how their solutions might be applied for humanitarian purposes. Indeed, it was Dell that first approached the Red Cross with an expressed interest in contributing to the organization’s efforts in disaster response. The initiative also demonstrates that combining automated natural language processing solutions with a digital volunteer network seems to be a winning strategy, at least for now.

After listening to Wendy describe the various tools she and her colleagues use as part of the Operations Center, I began to wonder whether these types of tools will eventually become free and easy enough for one person to be her very own operations center. I suppose only time will tell. Until then, I look forward to following the Center’s progress and hope it inspires other emergency response organizations to adopt similar solutions.

Crowdsourcing Humanitarian Convoys in Libya

Many activists in Egypt donated food and medical supplies to support the Libyan revolution in early 2011. As a result, volunteers set up and coordinated humanitarian convoys from major Egyptian cities to Tripoli. But these convoys faced two major problems. First, volunteers needed to know where the convoys were in order to communicate this to Libyan revolutionaries so they could wait for the fleet at the border and escort them to Tripoli. Second, because these volunteers were headed into a war zone, their friends and family wanted to keep track of them to make sure they were safe. The solution? IntaFeen.com.

Inta feen? means “where are you?” in Arabic and IntaFeen.com is a mobile check-in service like Foursquare but localized for the Arab World. Convoy drivers used IntaFeen to check in at different stops along the way to Tripoli to provide regular updates on the situation. This is how volunteers back in Egypt who coordinated the convoy kept track of their progress and communicated updates in real-time to their Libyan counterparts. Volunteers who went along with the convoys also used IntaFeen, and their check-ins would also get posted on Twitter and Facebook, allowing families and friends in Egypt to track their whereabouts.

Al Amain Road is a highway between Alexandria and Tripoli. These tweets and check-in’s acted as a DIY fleet management system for volunteers and activists.

The use of IntaFeen combined with Facebook and Twitter also created an interesting side-effect in terms of social media marketing to promote activism. The sharing of these updates within and across various social networks galvanized more Egyptians to volunteer their time and resulted in more convoys.

I wonder whether these activists knew about another crowdsourced volunteer project taking place at exactly the same time in support of the UN’s humanitarian relief operations: Libya Crisis Map. Much of the content added to the map was sourced from social media. Could the #LibyaConvoy project have benefited from the real-time situational awareness provided by the Libya Crisis Map?

Will we see more convergence between volunteer-run crisis maps and volunteer-run humanitarian response in the near future?

Big thanks to Adel Youssef from IntaFeen.com who spoke about this fascinating project (and Ushahidi) at Where 2.0 this week. More information on #LibyaConvoy is available here. See also my earlier blog posts on the use of check-ins for activism and disaster response.

Trails of Trustworthiness in Real-Time Streams

Real-time information channels like Twitter, Facebook and Google have created cascades of information that are becoming increasingly challenging to navigate. “Smart-filters” alone are not the solution since they won’t necessarily help us determine the quality and trustworthiness of the information we receive. I’ve been studying this challenge ever since the idea behind SwiftRiver first emerged several years ago now.

I was thus thrilled to come across a short paper on “Trails of Trustworthiness in Real-Time Streams” which describes a start-up project that aims to provide users with a “system that can maintain trails of trustworthiness propagated through real-time information channels,” which will “enable its educated users to evaluate its provenance, its credibility and the independence of the multiple sources that may provide this information.” The authors, Panagiotis Metaxas and Eni Mustafaraj, kindly cite my paper on “Information Forensics” and also reference SwiftRiver in their conclusion.

The paper argues that studying the tactics that propagandists employ in real life can provide insights and even predict the tricks employed by Web spammers.

“To prove the strength of this relationship between propagandistic and spamming techniques, […] we show that one can, in fact, use anti-propagandistic techniques to discover Web spamming networks. In particular, we demonstrate that when starting from an initial untrustworthy site, backwards propagation of distrust (looking at the graph defined by links pointing to an untrustworthy site) is a successful approach to finding clusters of spamming, untrustworthy sites. This approach was inspired by the social behavior associated with distrust: in society, recognition of an untrustworthy entity (person, institution, idea, etc) is reason to question the trustworthiness of those who recommend it. Other entities that are found to strongly support untrustworthy entities become less trustworthy themselves. As in society, distrust is also propagated backwards on the Web graph.”
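A minimal sketch of this backwards propagation of distrust, walking the web graph along in-links from a known untrustworthy seed site. The toy graph representation and depth limit are my own illustration, not the authors’ implementation.

```python
from collections import deque

def propagate_distrust(in_links, seed, max_depth=2):
    """Breadth-first walk backwards along in-links from a distrusted
    seed site: sites that link to a distrusted site become distrusted
    themselves, up to max_depth hops away.

    in_links[site] -> list of sites that link TO `site`.
    """
    distrusted = {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        site, depth = frontier.popleft()
        if depth == max_depth:
            continue  # stop expanding beyond the depth limit
        for linker in in_links.get(site, []):
            if linker not in distrusted:
                distrusted.add(linker)
                frontier.append((linker, depth + 1))
    return distrusted
```

In practice one would weight edges and decay distrust with distance rather than treat it as binary, but the traversal direction – against the links – is the key idea the paper describes.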

The authors document that today’s Web spammers are using increasingly sophisticated tricks.

“In cases where there are high stakes, Web spammers’ influence may have important consequences for a whole country. For example, in the 2006 Congressional elections, activists using Google bombs orchestrated an effort to game search engines so that they present information in the search results that was unfavorable to 50 targeted candidates. While this was an operation conducted in the open, spammers prefer to work in secrecy so that their actions are not revealed. So, [we] revealed and documented the first Twitter bomb, which tried to influence the Massachusetts special elections, showing how an Iowa-based political group, hiding its affiliation and profile, was able to serve misinformation a day before the election to more than 60,000 Twitter users that were following the elections. Very recently we saw an increase in political cybersquatting, a phenomenon we reported in [28]. And even more recently, […] we discovered the existence of Pre-fabricated Twitter factories, an effort to provide collaborators pre-compiled tweets that will attack members of the Media while avoiding detection of automatic spam algorithms from Twitter.”

The theoretical foundations for a trustworthiness system:

“Our concept of trustworthiness comes from the epistemology of knowledge. When we believe that some piece of information is trustworthy (e.g., true, or mostly true), we do so for intrinsic and/or extrinsic reasons. Intrinsic reasons are those that we acknowledge because they agree with our own prior experience or belief. Extrinsic reasons are those that we accept because we trust the conveyor of the information. If we have limited information about the conveyor of information, we look for a combination of independent sources that may support the information we receive (e.g., we employ “triangulation” of the information paths). In the design of our system we aim to automatize as much as possible the process of determining the reasons that support the information we receive.”

“We define as trustworthy, information that is deemed reliable enough (i.e., with some probability) to justify action by the receiver in the future. In other words, trustworthiness is observable through actions.”

“The overall trustworthiness of the information we receive is determined by a linear combination of (a) the reputation RZ of the original sender Z, (b) the credibility we associate with the contents of the message itself C(m), and (c) characteristics of the path that the message used to reach us.”

“To compute the trustworthiness of each message from scratch is clearly a huge task. But the research that has been done so far justifies optimism in creating a semi-automatic, personalized tool that will help its users make sense of the information they receive. Clearly, no such system exists right now, but components of our system do exist in some of the popular [real-time information channels]. For a testing and evaluation of our system we plan to use primarily Twitter, but also real-time Google results and Facebook.”
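The linear combination quoted above can be sketched as a simple scoring function. The weights and the assumption that all inputs are normalized to [0, 1] are my own illustrative choices; the paper does not specify concrete values.

```python
def trustworthiness(sender_reputation, content_credibility, path_score,
                    weights=(0.4, 0.4, 0.2)):
    """Overall trustworthiness of a message m from sender Z:
    T(m) = a*R_Z + b*C(m) + c*path_score, with a + b + c = 1.
    All inputs are assumed normalized to [0, 1]."""
    a, b, c = weights
    return (a * sender_reputation
            + b * content_credibility
            + c * path_score)
```

A personalized system of the kind the authors propose would learn these weights per user rather than fix them globally.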

In order to provide trails of trustworthiness in real-time streams, the authors plan to address the following challenges:

•  “Establishment of new metrics that will help evaluate the trustworthiness of information people receive, especially from real-time sources, which may demand immediate attention and action. […] we show that coverage of a wider range of opinions, along with independence of results’ provenance, can enhance the quality of organic search results. We plan to extend this work in the area of real-time information so that it does not rely on post-processing procedures that evaluate quality, but on real-time algorithms that maintain a trail of trustworthiness for every piece of information the user receives.”

• “Monitor the evolving ways in which information reaches users, in particular citizens near election time.”

•  “Establish a personalizable model that captures the parameters involved in the determination of trustworthiness of information in real-time information channels, such as Twitter, extending the work of measuring quality in more static information channels, and by applying machine learning and data mining algorithms. To implement this task, we will design online algorithms that support the determination of quality via the maintenance of trails of trustworthiness that each piece of information carries with it, either explicitly or implicitly. Of particular importance, is that these algorithms should help maintain privacy for the user’s trusting network.”

• “Design algorithms that can detect attacks on [real-time information channels]. For example we can automatically detect bursts of activity related to a subject, source, or non-independent sources. We have already made progress in this area. Recently, we advised and provided data to a group of researchers at Indiana University to help them implement “truthy”, a site that monitors bursty activity on Twitter. We plan to advance, fine-tune and automate this process. In particular, we will develop algorithms that calculate the trust in an information trail based on a score that is affected by the influence and trustworthiness of the informants.”
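A simple burst detector in the spirit of the last bullet flags time bins whose message volume exceeds the trailing mean by several standard deviations. The window size and z-score threshold below are illustrative assumptions, not the authors’ algorithm.

```python
from statistics import mean, stdev

def detect_bursts(counts, window=6, z=3.0):
    """Return indices of time bins whose volume is a burst relative
    to the preceding `window` bins (trailing mean + z * stdev).

    counts: message volume per time bin, e.g. tweets per 10 minutes.
    """
    bursts = []
    for i in range(window, len(counts)):
        hist = counts[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and (counts[i] - mu) / sigma > z:
            bursts.append(i)
    return bursts
```

Real systems would also check whether the accounts driving a burst are independent of one another, since coordinated spam campaigns produce bursts from non-independent sources.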

In conclusion, the authors “mention that in a month from this writing, Ushahidi […] plans to release SwiftRiver, a platform that ‘enables the filtering and verification of real-time data from channels like Twitter, SMS, Email and RSS feeds’. Several of the features of Swift River seem similar to what we propose, though a major difference appears to be that our design is personalization at the individual user level.”

Indeed, having been involved in SwiftRiver research since early 2009 and currently testing the private beta, there are important similarities and some differences. But one such difference is not personalization. Indeed, Swift allows full personalization at the individual user level.

Another is that we’re hoping to go beyond just text-based information with Swift, i.e., we hope to pull in pictures and video footage (in addition to Tweets, RSS feeds, email, SMS, etc) in order to cross-validate information across media, which we expect will make the falsification of crowdsourced information more challenging, as I argue here. In any case, I very much hope that the system being developed by the authors will be free and open source so that integration might be possible.

A copy of the paper is available here (PDF). I hope to meet the authors at the Berkman Center’s “Truth in Digital Media Symposium” and highly recommend the wiki they’ve put together with additional resources. I’ve added the majority of my research on verification of crowdsourced information to that wiki, such as my 20-page study on “Information Forensics: Five Case Studies on How to Verify Crowdsourced Information from Social Media.”

Passing the I’m-Not-Gaddafi Test: Authenticating Identity During Crisis Mapping Operations

I’ve found myself telling this story so often in response to various questions that it really should be a blog post. The story begins with the launch of the Libya Crisis Map a few months ago at the request of the UN. After the first 10 days of deploying the live map, the UN asked us to continue for another two weeks. When I write “us” here, I mean the Standby Volunteer Task Force (SBTF), which is designed for short-term rapid crisis mapping support, not long-term deployments. So we needed to recruit additional volunteers to continue mapping the Libya crisis. And this is where the I’m-not-Gaddafi test comes in.

To do our live crisis mapping work, SBTF volunteers generally need password access to whatever mapping platform we happen to be using. This has typically been the Ushahidi platform. Giving out passwords to several dozen volunteers in almost as many countries requires trust. Password access means one could start sabotaging the platform, e.g., deleting reports, creating fake ones, etc. So when we began recruiting 200+ new volunteers to sustain our crisis mapping efforts in Libya, we needed a way to vet these new recruits, particularly since we were dealing with a political conflict. So we set up an I’m-not-Gaddafi test by using this Google Form:

So we placed the burden of proof on our (very patient) volunteers. Here’s a quick summary of the key items we used in our “grading” to authenticate volunteers’ identity:

Email address: Professional or academic email addresses were preferred and received a more favorable “score”.

Twitter handle: The great thing about Twitter is you can read through weeks’ worth of someone’s Twitter stream. I personally used this feature several times to determine whether any political tweets revealed a pro-Gaddafi attitude.

Facebook page: Given that posing as someone else or a fictitious person on Facebook violates their terms of service, having the link to an applicant’s Facebook page was considered a plus.

LinkedIn profile: This was a particularly useful piece of evidence given that the majority of people on LinkedIn are professionals.

Personal/Professional blog or website: This was also a great way to authenticate an individual’s identity. We also encouraged applicants to share links to anything they had published which was available online.

For every application, we had two or more of us from the core team go through the responses. In order to sign off on a new volunteer as vetted, two people had to write down “Yes” with their name. We gave priority to the most complete applications. I would say that 80% of the 200+ applications we received could be signed off on without requiring additional information. We followed up via email with the remaining 20%, the majority of whom provided us with extra info that enabled us to validate their identity. One individual even sent us a copy of his official ID. There may have been a handful who didn’t reply to our requests for additional information.
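For readers curious what this “grading” might look like if automated, here is a minimal sketch of the evidence scoring and two-reviewer sign-off described above. The field names, weights and threshold are hypothetical; our actual vetting was done entirely by hand.

```python
# Hypothetical sketch of the vetting logic described above. Field names
# and score weights are my own illustration, not the SBTF's actual rubric.

EVIDENCE_WEIGHTS = {
    "professional_email": 2,  # professional/academic addresses scored higher
    "twitter_handle": 1,
    "facebook_page": 1,
    "linkedin_profile": 2,
    "blog_or_website": 2,
}

def application_score(application):
    """Sum the weights of each piece of identity evidence provided."""
    return sum(weight for field, weight in EVIDENCE_WEIGHTS.items()
               if application.get(field))

def is_vetted(application, signoffs, threshold=4):
    """A volunteer is vetted once two distinct reviewers sign off
    and the application provides enough corroborating evidence."""
    yes_votes = {name for name, vote in signoffs if vote == "Yes"}
    return len(yes_votes) >= 2 and application_score(application) >= threshold
```

Even in this toy form, the two-reviewer requirement is the important part: no single person could wave an applicant through.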

This entire vetting process appears to have worked, but it was extremely laborious and time-consuming. I personally spent hours and hours going through more than 100 applications. We definitely need to come up with a different system in the future. So I’ve been exploring some possible solutions—such as social authentication—with a number of groups and I hope to provide an update next month which will make all our lives a lot easier, not to mention give us more dedicated mapping time. There’s also the need to improve the Ushahidi platform to make it more like Wikipedia, i.e., where contributions can be tracked and logged. I think combining both approaches—identity authentication and tracking—may be the way to go.

The Role of Facebook in Disaster Response

I recently met up with some Facebook colleagues to discuss the role that they and their platform might play in disaster response. So I thought I’d share some thoughts that came up during the conversation, seeing as I’ve been thinking about this topic with a number of other colleagues for a while. I’m also very interested to hear any ideas and suggestions that iRevolution readers may have on this.

There’s no doubt that Facebook can—and already does—play an important role in disaster response. In Haiti, a colleague used Facebook to recruit hundreds of Creole-speaking volunteers to translate tens of thousands of text messages into English as part of our Ushahidi-Haiti crisis mapping efforts. When an earthquake struck New Zealand earlier this year, thousands of students organized their response via a Facebook group and also used the platform’s check-in feature to alert others in their social network that they were alright.

But how else might Facebook be used? The Haiti example demonstrates that the ability to rapidly recruit large numbers of volunteers is really key. So Facebook could create a dedicated landing page when a crisis unfolds, much like Google does. This landing page could then be used to recruit thousands of new volunteers for live crisis mapping operations in support of humanitarian organizations (for example). The landing page could spotlight a number of major projects that new volunteers could join, such as the Standby Volunteer Task Force (SBTF) or perhaps highlight the deployment of an Ushahidi platform for a particular crisis.

The use of Facebook to recruit volunteers presents several advantages, the most important ones being identity and scale. When we recruited hundreds of new volunteers for the Libya Crisis Map in support of the UN’s humanitarian response, we had to vet and verify each and every single one of them twice to ensure they really were who they said they were. This took hours, which wouldn’t be the case using Facebook. If we could set up a way for Facebook users to sign into an Ushahidi platform directly from their Facebook account, this too would save many hours of tedious work—a nice idea that my colleague Jaroslav Valuch suggested. See Facebook Connect, for example.

Facebook also operates at a scale of more than half-a-billion people, which has major “Cognitive Surplus” potential. We could leverage Facebook’s ad services as well—a good point made by one Facebook colleague (and also Jon Gosier in an earlier conversation). That way, Facebook users would receive targeted ads on how they could volunteer based on their existing profiles.

So there’s huge potential, but like much else in the ICT-for-you-name-it space, you first have to focus on people, then process and then the technology. In other words, what we need to do first is establish a relationship with Facebook and decide on the messaging and the process by which volunteers on Facebook would join a volunteer network like the Standby Volunteer Task Force and help out on an Ushahidi map, for example.

Absorbing several hundred or thousands of new volunteers is no easy task, but as long as we have a simple and efficient micro-tasking system via Facebook, we should be able to absorb this surge. Perhaps our colleagues at Facebook could take the lead on that, i.e., create a simple interface allowing groups like the Task Force to farm out all kinds of micro-tasks, much like Crowdflower, which already embeds micro-tasks in Facebook. Indeed, we worked with Crowdflower during the floods in Pakistan to create this micro-tasking app for volunteers.

As my colleague Jaroslav also noted, this Mechanical Turk approach would allow these organizations to evaluate the performance of their volunteers on particular tasks. I would add to this some gaming dynamics to provide incentives and rewards for volunteering, as I blogged about here. Having a public score board based on the number of tasks completed by each volunteer would be just one idea. One could add badges, stickers, banners, etc., to a volunteer’s Facebook profile page as they complete tasks. And yes, the next question would be: how do we create the Farmville of disaster response?
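A minimal sketch of what such a score board might look like under the hood; the badge names and thresholds here are purely illustrative.

```python
# Hypothetical sketch of the gaming-dynamics idea: a public score board
# keyed on completed micro-tasks, with badges awarded at set thresholds.

from collections import Counter

# Checked from highest threshold down; names/thresholds are made up.
BADGES = [(100, "Crisis Mapper"), (25, "Responder"), (5, "Volunteer")]

class ScoreBoard:
    def __init__(self):
        self.completed = Counter()

    def record_task(self, volunteer):
        """Credit one completed micro-task to a volunteer."""
        self.completed[volunteer] += 1

    def badge(self, volunteer):
        """Return the highest badge earned so far, or None."""
        count = self.completed[volunteer]
        for threshold, name in BADGES:
            if count >= threshold:
                return name
        return None

    def leaderboard(self, top=10):
        """Volunteers ranked by completed tasks, most first."""
        return self.completed.most_common(top)
```

The point of surfacing the leaderboard and badges on a profile page is exactly the viral effect discussed here: completing tasks becomes socially visible.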

On the Ushahidi end, it would also be good to create a Facebook app for Ushahidi so that users could simply map from their own Facebook page rather than open another browser window to map critical information. As one Facebook colleague also noted, friends could then easily invite others to help map a crisis via Facebook. Indeed, this social effect could be the most powerful reason to develop an Ushahidi Facebook app. As you submit a report on a map, this could be shared as a status update, for example, inviting your friends to join the cause. This could help crisis mapping go viral across your own social network—an effect that was particularly important in launching the Ushahidi-Haiti project.

As a side note, there is an Ushahidi plugin for Facebook that allows content posted on a wall to be directly pushed to the Ushahidi backend for mapping. But perhaps our colleagues at Facebook could help us add more features to this existing plugin to make it even more useful, such as integrating Facebook Connect, as noted earlier.

In sum, there are some low hanging fruits and quick wins that a few weeks of collaboration with Facebook could yield. These quick wins could make a really significant impact even if they sound (and are) rather simple. For me, the most exciting of these is the development of a Facebook app for Ushahidi.