Crowdsourcing Critical Thinking to Verify Social Media During Crises

My colleagues and I at QCRI and the Masdar Institute will be launching Verily in the near future. The project has already received quite a bit of media coverage, particularly after the Boston Marathon bombings. So here’s an update. While major errors were made in the crowdsourced response to the bombings, social media can help to quickly find individuals and resources during a crisis. Moreover, time-critical crowdsourcing can also be used to verify unconfirmed reports circulating on social media.


The errors made following the bombings were the result of two main factors:

(1) the crowd is digitally illiterate
(2) the platforms used were not appropriate for the tasks at hand

The first factor has to do with education. Most of us are still in Kindergarten when it comes to the appropriate use of social media. We lack the digital and media literacy required for the responsible use of social media during crises. The good news, however, is that the major backlash from the mistakes made in Boston is already serving as an important lesson: many in the crowd are likely to think twice before retweeting certain content or making blind allegations on social media in the future. The second factor has to do with design. Tools like Reddit and 4chan, which are great for posting photos of cute cats, are not always the tools best designed for finding critical information during crises. The crowd is willing to help; this much has been proven. The crowd simply needs better tools to focus and rationalize the goodwill of its members.

Verily was inspired by the DARPA Red Balloon Challenge, which leveraged social media and social networks to find the locations of 10 red weather balloons planted across the continental USA (3 million square miles) in under 9 hours. Verily uses that same time-critical mobilization approach, based on a recursive incentive mechanism, to rapidly collect evidence around a particular claim during a disaster, such as “The bridge in downtown LA has been destroyed by the earthquake.” Users of Verily can share this verification challenge directly from the Verily website (e.g., via Twitter, Facebook and email), which posts a link back to the Verily claim page.
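
For readers curious about how a recursive incentive mechanism works in practice, here is a minimal, purely illustrative sketch of the reward split used by the winning MIT team in the Red Balloon Challenge: the finder of a balloon receives a fixed amount, the person who recruited the finder receives half of that, and so on up the referral chain. Verily’s own mechanism may differ; the code below is only meant to convey the intuition.

```python
def recursive_rewards(referral_chain, finder_reward=2000.0):
    """Illustrative reward split for a recursive incentive mechanism.

    referral_chain lists participants starting with the finder and moving
    up through whoever recruited them, e.g. ["finder", "recruiter", ...].
    Each person up the chain receives half of the amount below them.
    """
    rewards = {}
    amount = finder_reward
    for person in referral_chain:
        rewards[person] = amount
        amount /= 2  # each level up the chain earns half
    return rewards

# Example: Alice spots the balloon; Bob recruited Alice; Carol recruited Bob.
print(recursive_rewards(["Alice", "Bob", "Carol"]))
# {'Alice': 2000.0, 'Bob': 1000.0, 'Carol': 500.0}
```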

This time-critical mobilization and crowdsourcing element is the first main component of Verily. Because disasters are far more geographically bounded than the continental US, we believe that relevant evidence can be crowdsourced in a matter of minutes rather than hours. Indeed, while the degree of separation in the analog world is six, that number falls closer to four on social media, and we believe it falls even further in bounded geographical areas like urban centers. This means that the 20+ people living opposite that bridge in LA are only two or three hops from your social network and could be tapped via Verily to take pictures of the bridge from their windows, for example.


The second main component is crowdsourcing critical thinking, which is key to countering the spread of false rumors during crises. The interface for posting evidence on Verily is modeled along the lines of Pinterest, but with each piece of content (text, image, video), users are required to add a sentence or two explaining why they think or know that piece of evidence is authentic or not. Others can then comment on the evidence accordingly. This workflow prompts users to think critically rather than blindly share or retweet content on Twitter without much thought, context or explanation. Indeed, we hope that with Verily more people will share links back to Verily pages rather than to out-of-context and unsubstantiated images, videos and claims.

In other words, we want to redirect traffic to a repository of information that incentivizes critical thinking. This means Verily is also meant to be an educational tool; we’ll have simple mini-guides on information forensics available to users (drawn from the BBC’s User-Generated Content Hub, NPR’s Andy Carvin and others). While we’ll include Digg-style up/down votes on the perceived authenticity of evidence posted to Verily, this is not the main focus. Up/down votes are similar to retweets: they simply do not capture or explain whether the voter drew on any expertise or critical thinking.
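
As a rough illustration of the evidence-posting workflow described above (not Verily’s actual data model), each piece of evidence could be represented as a record that cannot be created without a justification:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    """Illustrative record for a piece of evidence attached to a claim.

    The justification field is mandatory, mirroring the requirement that
    posters explain why they believe the evidence is (in)authentic.
    """
    claim_id: str
    media_url: str
    justification: str                      # required explanation
    comments: List[str] = field(default_factory=list)

    def __post_init__(self):
        if not self.justification.strip():
            raise ValueError("Evidence must include a justification.")
```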

If you’re interested in supporting this project and/or sharing feedback, then please feel free to contact me at any time. For more background information on Verily, kindly see this post.


Jointly: Peer-to-Peer Disaster Recovery App

My colleague Samia Kallidis is launching a brilliant self-help app to facilitate community-based disaster recovery efforts. Samia is an MFA Candidate at the School of Visual Arts in New York. While her work on this peer-to-peer app began as part of her thesis, she has since been accepted to the NEA Studio Incubator Program to make her app a reality. NEA provides venture capital to help innovative entrepreneurs build transformational initiatives around the world. So huge congrats to Samia on this outstanding accomplishment. I was already hooked back in February when she presented her project at NYU and am even more excited now. Indeed, there are exciting synergies with the MatchApp project I’m working on with QCRI and MIT-CSAIL, which is why we’re happily exploring ways to collaborate & complement our respective initiatives.

Samia’s app is aptly called Jointly and carries the tagline “More Recovery, Less Red Tape.” In her February presentation, Samia made many compelling arguments for a self-help approach to disaster response based on the field research and interviews she conducted following Hurricane Sandy. She rightly noted that many of the needs that arise in the days, weeks and months following a disaster do not require the attention of expert disaster response professionals; in fact, these responders may not have the skills that best match the needs that frequently arise after a disaster (assuming said responders even stay the course). Samia also remarked that solving the little challenges and addressing the little needs that surface post-disaster can make the biggest difference. Hence Jointly. In her own words:

“Jointly is a decentralized mobile application that helps communities self-organize disaster relief without relying on bureaucratic organizations. By directly connecting disaster victims with volunteers, Jointly allows individuals to request help through services and donations, and to find skilled volunteers who are available to fulfill those needs. This minimizes waste of resources, reduces duplication of services, and significantly shortens recovery time for individuals and communities.”

Samia kindly shared the above video and screenshots of Jointly below.

[Screenshots of the Jointly app]

I’m thrilled to see Jointly move forward and am excited to be collaborating with Samia on the Jointly-MatchApp connection. We certainly share the same goal: to help people help themselves. Indeed, increasing this capacity for self-organization builds resilience. These connection technologies and apps enable more rapid and efficient self-help in times of need. This doesn’t mean that professional disaster response organizations are obsolete; quite the contrary, in fact. Organizations like the American Red Cross can feed relevant service delivery data to these apps so that affected communities also know where, when and how to access those services. In Jointly, official resources will be geo-tagged and updated live in the “Resources” section of the app.

You can contact Samia directly at: hello@jointly.us should you be interested in learning more or collaborating with her.


Web App Tracks Breaking News Using Wikipedia Edits

A colleague of mine at Google recently shared a new and very interesting web app that tracks breaking news events by monitoring Wikipedia edits in real time. The app, Wikipedia Live Monitor, alerts users to breaking news based on the frequency of edits to certain articles. Almost every significant news event has a Wikipedia page that gets updated in near real time and thus acts as a single, powerful cluster for tracking an evolving crisis.

[Screenshot: Wikipedia Live Monitor]

Social media, in contrast, is far more distributed, which makes it more difficult to track. In addition, social media is highly prone to false positives. These, however, are almost immediately corrected on Wikipedia thanks to dedicated editors. Wikipedia Live Monitor currently works across several dozen languages and also “cross-checks edits with social media updates on Twitter, Google Plus and Facebook to help users get a better sense of what is trending” (1).
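
To give a concrete sense of how this kind of edit-frequency monitoring can work, here is a minimal sketch (not the Live Monitor’s actual code) that polls the public MediaWiki recent-changes API and flags articles receiving an unusual number of edits in the sampled window; the threshold is an arbitrary illustration:

```python
import requests
from collections import Counter

API = "https://en.wikipedia.org/w/api.php"

def flag_bursty_articles(threshold=5):
    """Poll recent changes and flag articles with many edits in the sample.

    A crude proxy for 'breaking news': articles edited far more often
    than usual within a short time window.
    """
    params = {
        "action": "query",
        "list": "recentchanges",
        "rcnamespace": 0,          # main article namespace only
        "rcprop": "title|timestamp",
        "rclimit": 500,
        "format": "json",
    }
    changes = requests.get(API, params=params, timeout=10).json()
    counts = Counter(c["title"] for c in changes["query"]["recentchanges"])
    return [(title, n) for title, n in counts.most_common() if n >= threshold]

if __name__ == "__main__":
    for title, n in flag_bursty_articles():
        print(f"{n:3d} edits  {title}")
```

A production system would listen to the live edit stream and compare activity against each article’s normal edit rate rather than use a fixed threshold, but the burst-detection intuition is the same.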

I’m really excited to explore the use of this Live Monitor for crisis response and possible integration with some of the humanitarian technology platforms that my colleagues and I at QCRI are developing. For example, the Monitor could be used to supplement crisis information collected via social media using the Artificial Intelligence for Disaster Response (AIDR) platform. In addition, the Wikipedia Monitor could also be used to triangulate reports posted to our Verily platform, which leverages time-critical crowdsourcing techniques to verify user-generated content posted on social media during disasters.


Social Media for Emergency Management: Question of Supply and Demand

I’m always amazed by folks who dismiss the value of social media for emergency management based on the perception that said content is useless for disaster response. By that logic, libraries are also useless: the few books you’re actually looking for rarely represent more than 1% of all the books available in a major library. Does that mean libraries are useless? Of course not. Is social media useless for disaster response? Of course not. Even if only 0.01% of the 20+ million tweets posted during Hurricane Sandy were useful, and only half of those were accurate, this would still mean over 1,000 real-time and informative tweets, or some 15,000 words: the equivalent of a 25-page, single-spaced document composed exclusively of fully relevant, actionable & timely disaster information.
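
The back-of-the-envelope arithmetic behind that claim, with the words-per-tweet and words-per-page figures as rough assumptions of mine:

```python
tweets = 20_000_000            # tweets posted during Hurricane Sandy
useful = tweets * 0.0001       # assume only 0.01% are useful
accurate = useful * 0.5        # assume only half of those are accurate
words = accurate * 15          # assume ~15 words per tweet
pages = words / 600            # assume ~600 words per single-spaced page
print(accurate, words, pages)  # 1000.0 15000.0 25.0
```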


Empirical studies clearly prove that social media reports can be informative for disaster response. Numerous case studies have also described how social media has saved lives during crises. That said, if emergency responders do not actively or explicitly create demand for relevant and high quality social media content during crises, then why should supply follow? If the 911 emergency number (999 in the UK) were never advertised, then would anyone call? If 911 were simply a voicemail inbox with no instructions, would callers know what type of actionable information to relay after the beep?

While the majority of emergency management centers do not create the demand for crowdsourced crisis information, members of the public are increasingly demanding that said responders monitor social media for “emergency posts”. But most responders fear that opening up social media as a crisis communication channel with the public will result in an unmanageable flood of requests. The London Fire Brigade seems to think otherwise, however. So let’s carefully unpack the fear of information flooding.

First of all, New York City’s 911 operators receive over 10 million calls every year that are accidental, false or hoaxes. Does this mean we should abolish the 911 system? Of course not. Now, assuming that 10% of these calls take an operator 10 seconds to manage, this represents close to 3,000 hours or 115 days’ worth of “wasted work”. But this filtering is absolutely critical and requires human intervention. In contrast, “emergency posts” published on social media can be automatically filtered and triaged thanks to Big Data Analytics and Social Computing, which could save operators time. The Digital Operations Center at the American Red Cross is currently exploring this automated filtering approach. Moreover, just as it is illegal to report false emergency information to 911, there’s no reason why the same laws could not apply to social media when these communication channels are used for emergency purposes.
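
On that automated filtering point, and purely by way of illustration (this is not the Red Cross’s pipeline or QCRI’s actual software), a first pass could be as simple as a keyword filter that separates likely emergencies from everything else before a human sees them:

```python
URGENT_TERMS = {"trapped", "injured", "fire", "collapsed", "flooding", "need medical"}

def triage(posts):
    """Split social media posts into 'review now' and 'review later' piles.

    A real system would rely on trained classifiers rather than a keyword
    list, but the principle is the same: machines do the first pass so
    human operators only see the posts most likely to be emergencies.
    """
    urgent, other = [], []
    for post in posts:
        text = post.lower()
        (urgent if any(term in text for term in URGENT_TERMS) else other).append(post)
    return urgent, other

urgent, other = triage([
    "Family trapped on the roof at 5th and Main, water rising fast",
    "Thoughts and prayers for everyone affected tonight",
])
print(urgent)
```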

Second, if individuals prefer to share disaster-related information and/or needs via social media, they are less likely to call in as well. In other words, double reporting is unlikely to occur, and it could also be discouraged and/or penalized. The volume of emergency reports from “the crowd” thus need not increase substantially after all. Those who use the phone to report an emergency today may in the future opt for social media instead. The only significant change here is the ease of reporting for the person in need. Again, the question is one of supply and demand. Even if relevant emergency posts were to increase without a comparable fall in calls, this would simply reveal that the current voice-based system creates a barrier to reporting that discriminates against certain users in need.

Third, not all emergency calls/posts require an immediate response from a paid professional with 10+ years of experience. In other words, the various types of needs can be triaged and responded to accordingly. As part of their training or internships, new police cadets could be tasked with responding to less serious needs, leaving the more seasoned professionals to focus on the more difficult situations. While this approach certainly has some limitations in the context of 911, these limitations are far less pronounced for disaster response efforts, in which most needs are met locally by the affected communities themselves anyway. In fact, the Philippine government actively promotes the use of social media reporting and crisis hashtags to crowdsource disaster response.

In sum, if disaster responders and emergency management professionals are not content with the quality of crisis reporting found on social media, then they should do something about it by implementing the appropriate policies to create the demand for higher quality and more structured reporting. The first emergency telephone service was launched in London some 80 years ago in response to a devastating fire. At the time, the idea of using a phone to report emergencies was controversial. Today, the London Fire Brigade is paving the way forward by introducing Twitter as a reporting channel. This move may seem controversial to some today, but give it a few years and people will look back and ask what took us so long to adopt new social media channels for crisis reporting.


Data Science for Social Good and Humanitarian Action

My (new) colleagues at the University of Chicago recently launched an exciting new program called “Data Science for Social Good”. The program, which kicks off this summer, will bring together dozens of top-notch data scientists, computer scientists and social scientists to address major social challenges. Advisors for this initiative include Eric Schmidt (Google), Rayid Ghani (Obama 2012 campaign) and my very likable colleague Jake Porway (DataKind). Think of “Data Science for Social Good” as a “Code for America” but broader in scope and application. I’m excited to announce that QCRI is looking to collaborate with this important new program given the strong overlap with our Social Innovation Vision, Strategy and Projects.

My team and I at QCRI are hoping to mentor and engage fellows throughout the summer on key humanitarian and development projects we are working on in partnership with the United Nations, the Red Cross, the World Bank and others. This would give fellows the opportunity to work on “real world” challenges that directly match their expertise and interests. In addition, we (QCRI) are hoping to replicate this type of program in Qatar in January 2014.

Why January? This will give us enough time to design the new program based on the results of this summer’s experiment. More importantly, perhaps, it will be freezing in Chicago ; ) and wonderfully warm in Doha. Plus, January is an easier time for many students and professionals to take time off. The fellows program will likely run for three weeks (rather than three months) and will focus on applying data science to social good projects in the Arab World and beyond. Mentors will include top data scientists from QCRI and, we hope, the University of Chicago. We aim to create 10 fellowship positions for this Data Science for Social Good program. The call for applications will go out this summer, so stay tuned for an update.


How To Disconnect in a Hyper Connected World (Updated)

Disconnecting in a hyper-connected world is already challenging enough, if not near impossible. An exploding inbox does not help. No one wants to come back from quality time off only to find an inbox with 5,000+ unread messages. Moreover, the very thought of thousands of emails piling up is stressful and inevitably results in checking email. This in turn sucks you right back into work mode, which blows. I have a solution. Read on.


I will be away until [February 5, 2015]. During this time, I will disconnect and be offline. In order to truly enjoy complete peace of mind during these precious days, I shall set up an automated email reply with a link to this post. So if you’ve found your way here from said reply, here’s the full message and catch: I’m off-grid until [Feb 5]. This means that any emails that come in until then will be *automatically deleted*. I do this for peace of mind whilst on vacation, not because I don’t value what you have to say. So, if your message is important, then please kindly resend it after [Feb 5]. If your message is super urgent, then please leave a note in the comments section below with your name, phone number and the nature of your emergency. Rest assured that your message will remain private and will not be made public on this blog or anywhere else.

Many thanks for your kind understanding about the auto-deletion. I promise to return the favor. Let’s all help each other find easier ways to disconnect in our hyper-connected world. In the meantime, may I interest you in my new book?


Artificial Intelligence for Monitoring Elections (AIME)

Citizen-based, crowdsourced election observation initiatives are on the rise. Leading election monitoring organizations are also looking to leverage citizen-based reporting to complement their own professional election monitoring efforts. Meanwhile, the information revolution continues apace, with the number of new mobile phone subscriptions up by over 1 billion in the past 36 months alone. The volume of election-related reports generated by “the crowd” is thus expected to grow significantly in the coming years. But international, national and local election monitoring organizations are completely unprepared to deal with the rise of Big (Election) Data.


The purpose of this collaborative research project, AIME, is to develop a free and open source platform to automatically filter relevant election reports from the crowd. The platform will include pre-defined classifiers (e.g., security incidents, intimidation, vote-buying, ballot stuffing) for specific countries and will also allow end users to create their own classifiers on the fly. The project, launched by QCRI and several key partners, will specifically focus on unstructured user-generated content from SMS and Twitter. AIME partners include a major international election monitoring organization and several academic research centers. The AIME platform will use the technology being developed for QCRI’s AIDR project: Artificial Intelligence for Disaster Response.
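
As a rough sketch of what such classifiers might look like under the hood (this is illustrative, not the AIDR/AIME codebase), one could train a simple supervised text classifier on reports labeled by election monitors and then apply it to incoming SMS or tweets; the example reports and categories below are made up:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled reports; a real deployment would train on thousands
# of examples labeled by election monitors for each country.
reports = [
    "Men handing out cash to voters outside the polling station",
    "Voters being threatened by an armed group near station 12",
    "Saw extra ballots being stuffed into the box after closing",
    "Queue is long but voting is proceeding calmly",
]
labels = ["vote-buying", "intimidation", "ballot-stuffing", "no-incident"]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression(max_iter=1000))
classifier.fit(reports, labels)

print(classifier.predict(["Soldiers blocking voters at the school entrance"]))
```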


  • Acknowledgements: Fredrik Sjoberg kindly provided the Uchaguzi data, which he scraped from the public website at the time.
  • Qualification: Professor Michael Best has rightly noted that these preliminary results are overstated, given that the machine learning analysis was carried out on a corpus of pre-structured reports.

Self-Organized Crisis Response to #BostonMarathon Attack

I’m going to keep this blog post technical because the emotions from yesterday’s events are still too difficult to deal with. Within an hour of the bombs going off, I received several emails asking me to comment on the use of social media in Boston and how it differed from the digital humanitarian response efforts I am typically engaged in. So here are just a few notes, nothing too polished, but some initial reactions.

[Image: I Stand with Boston]

Once again, we saw the outpouring of operational support from the “Crowd” with over two thousand people in the Boston area volunteering to take people in if they needed help, and this within 60 minutes of the attack. This was coordinated via a Google Spreadsheet & Google Form. This is not the first time that these web-based solutions were used for disaster response. For example, Google Spreadsheets was used to coordinate grassroots response efforts during the major Philippine floods in 2012.

We’re not all affected the same way during a crisis, and those of us who are less affected almost always look for ways to help. Unlike in the era of television broadcasting, the crowd can now become an operational actor in disaster response. To be sure, paid disaster response professionals cannot be everywhere at the same time, but the crowd is always there. This is why I have long called for a “Match.com for disaster response” to match local needs with local resources. So while I received numerous pings on Twitter, Skype and email about launching a crisis map for Boston, I am skeptical that doing so would have added much value.

What was/is needed is real-time filtering of social media content and matching of local needs (information and material needs) with local resources. There are two complementary ways to do this: human computing (e.g., crowdsourcing, microtasking, etc) and machine computing (natural language processing, machine learning, etc), which is why my team and I at QCRI are working on developing these solutions.

Other observations from the response to yesterday’s tragedy:

  • Boston Police made active use of their Twitter account to inform and advise. They also asked other Twitter users to spread their request for everyone to leave the city center area. The police and other emergency services also actively crowdsourced photographs and video footage to begin their criminal investigations. There was such heavy multimedia social media activity in the area that one could no doubt develop a Photosynth rendering of the scene.
  • There were calls for residents to unlock their Wi-Fi networks so that people in the streets could access the Internet. This was especially important after the cellphone network was taken offline for security reasons. To be sure, access to information is just as important as access to water, food, shelter, etc., during a crisis.

I’d welcome any other observations from readers, e.g., on similarities and differences between the use of these technologies for domestic emergency management versus international humanitarian efforts. I would also be interested to hear thoughts on how the two could be integrated or, at the very least, learn from each other.


Introducing MicroMappers for Digital Disaster Response

The UN activated the Digital Humanitarian Network (DHN) on December 3, 2012 to carry out a rapid damage needs assessment in response to Typhoon Pablo in the Philippines. More specifically, the UN requested that Digital Humanitarians collect and geo-reference all tweets with links to pictures or video footage capturing Typhoon damage. To complete this mission, I reached out to my colleagues at CrowdCrafting. Together, we customized a microtasking app to filter, classify and geo-reference thousands of tweets. This type of rapid damage assessment request was the first of its kind, which means that setting up the appropriate workflows and technologies took a while, leaving less time for the tagging, verification and analysis of the multimedia content pointed to in the disaster tweets. Such is the nature of innovation; optimization takes place through iteration and learning.

Microtasking is key to the future of digital humanitarian response, which is precisely why I am launching MicroMappers in partnership with CrowdCrafting. MicroMappers, which combines the terms Micro-Tasking and Crisis-Mappers, is a collection of free and open source microtasking apps specifically customized and optimized for digital disaster response. The first series of apps focuses on rapid damage assessment activations and includes Translate, Locate and Assess. The Translate and Locate apps are self-explanatory. The Assess app enables digital volunteers to quickly tag disaster tweets that link to relevant multimedia capturing disaster damage. This app also invites volunteers to rate the level of damage in each image and video.

For example, say an earthquake strikes Mexico City. We upload disaster tweets with links to the Translate App. Volunteer translators only translate tweets with location information. These get automatically pushed to the Assess App, where digital volunteers tag tweets that point to relevant images/videos. They also rate the level of damage in each. (On a side note, my colleagues and I at QCRI are also developing a crawler that will automatically identify whether links posted on Twitter actually point to images/videos; see the sketch below.) Assessed tweets are then pushed in real time to the Locate App for geo-referencing. The resulting tweets are subsequently published to a live map where the underlying data can also be downloaded. Both the map and the data download feature can be password protected.
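
Such a crawler could, for instance, check each link’s Content-Type header to decide whether it points directly to an image or video. A minimal sketch under that assumption (photo- and video-sharing services that wrap media in HTML pages would need per-service handling):

```python
import requests

def points_to_media(url):
    """Return True if the URL serves an image or video file directly."""
    try:
        resp = requests.head(url, allow_redirects=True, timeout=5)
        content_type = resp.headers.get("Content-Type", "")
        return content_type.startswith(("image/", "video/"))
    except requests.RequestException:
        return False  # unreachable links are treated as non-media

print(points_to_media("https://upload.wikimedia.org/wikipedia/commons/3/3a/Cat03.jpg"))
```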

The plan is to have these apps online and live 24/7 in the event of an activation request. When a request does come in, volunteers with the Digital Humanitarian Network will simply go to MicroMappers.com (not yet live) to start using the apps right away. Members of the public will also be invited to support these efforts and work alongside digital humanitarian volunteers. In other words, the purpose of the MicroMappers apps is not only to facilitate and accelerate digital humanitarian efforts but also to radically democratize them by broadening the participation base. To be sure, no prior training is needed to microtask; simply being able to read and access the web makes you an invaluable member of the team.

We plan to have the MicroMappers apps completed in September (originally slated for May/June) for testing by members of the Digital Humanitarian Network. In the meantime, huge thanks to our awesome partners at CrowdCrafting for making all of this possible! If you’re a coder and interested in contributing to these efforts, please feel free to get in touch with me. We may be able to launch and test these apps earlier with your help. After all, disasters won’t wait until we’re ready, and we have several more disaster response apps in need of customization.


Data Protection Protocols for Crisis Mapping

The day after the CrisisMappers 2011 Conference in Geneva, my colleague Phoebe Wynn-Pope organized and facilitated the most important workshop I attended that year. She brought together a small group of seasoned crisis mappers and experts in protection standards. The workshop concluded with a pressing action item: update the International Committee of the Red Cross’s (ICRC) Professional Standards for Protection Work in order to provide digital humanitarians with expert guidance on protection standards for humanitarianism in the network age.

My colleague Anahi Ayala and I were invited to provide feedback on the new 20+ page chapter specifically dedicated to data management and new technologies. We added many, many comments and suggestions on the draft. The full report is available here (PDF). Today, thanks to ICRC, I am in Switzerland to give a Keynote on Next Generation Humanitarian Technology for the official launch of the report. The purpose of this blog post is to list the protection protocols that relate most directly to Crisis Mapping &  Digital Humanitarian Response; and to problematize some of these protocols. 

The Protocols

In the preface of the ICRC’s 2013 Edition of the Professional Standards for Protection Work, the report lists three reasons for the updated edition. The first has to do with new technologies:

In light of the rapidly proliferating initiatives to make new uses of information technology for protection purposes, such as satellite imagery, crisis mapping and publicizing abuses and violations through social media, the advisory group agreed to review the scope and language of the standards on managing sensitive information. The revised standards reflect the experiences and good practices of humanitarian and human rights organizations as well as of information & communication technology actors.

The new protection standards most relevant or applicable to digital humanitarians are listed below (indented text), together with commentary.

Protection actors must only collect information on abuses and violations when necessary for the design or implementation of protection activities. It may not be used for other purposes without additional consent.

A number of digital humanitarian networks, such as the Standby Volunteer Task Force (SBTF), only collect crisis information specifically requested by the “Activating Organization”, for example the UN Office for the Coordination of Humanitarian Affairs (OCHA). Volunteer networks like the SBTF are not “protection actors” but rather provide direct support to humanitarian organizations when the latter meet the SBTF’s activation criteria. In terms of what type of information the SBTF collects, again it is the Activating Organization that decides this, not the SBTF. For example, the Libya Crisis Map launched by the SBTF at the request of OCHA displayed categories of information that were decided by the UN team in Geneva.

Protection actors must collect and handle information containing personal details in accordance with the rules and principles of international law and other relevant regional or national laws on individual data protection.

These international, regional and national rules, principles and laws need to be made available to Digital Humanitarians in a concise, accessible and clear format. Such a resource is still missing.

Protection actors seeking information bear the responsibility to assess threats to the persons providing information, and to take necessary measures to avoid negative consequences for those from whom they are seeking information.

Protection actors setting up systematic information collection through the Internet or other media must analyse the different potential risks linked to the collection, sharing or public display of the information and adapt the way they collect, manage and publicly release the information accordingly.

Interestingly, when OCHA activated the SBTF in response to the Libya Crisis, it was the SBTF, not the UN, that took the initiative to formulate a Threat and Risks Mitigation Strategy that was subsequently approved by the UN. Furthermore, unlike other digital humanitarian networks, the Standby Task Force’s  “Prime Directive” is to not interact with the crisis-affected population. Why? Precisely to minimize the risk to those voluntarily sharing information on social media.

Protection actors must determine the scope, level of precision and depth of detail of the information collection process, in relation to the intended use of the information collected.

Again, this is determined by the protection actor activating a digital humanitarian network like the SBTF.

Protection actors should systematically review the information collected in order to confirm that it is reliable, accurate, and updated.

The SBTF has a dedicated Verification Team that strives to do this. The verification of crowdsourced, user-generated content posted on social media during crises is no small task. But the BBC’s User-Generated Content (UGC) Hub has been doing just this for eight years. Meanwhile, new strategies and technologies are under development to facilitate the rapid verification of such content. Also, the ICRC report notes that “Combining and cross-checking such [crowdsourced] information with other sources, including information collected directly from communities and individuals affected, is becoming standard good practice.”

Protection actors should be explicit as to the level of reliability and accuracy of information they use or share.

Networks like the SBTF make explicit whether a report published on a crisis map has been verified or not. If not, the report is clearly marked as “Unverified”. There are more nuanced ways to do this, however. I have recently given feedback on some exciting new research that is looking to quantify the probable veracity of user-generated content.

Protection actors must gather and subsequently process protection information in an objective and impartial manner, to avoid discrimination. They must identify and minimize bias that may affect information collection.

Objective, impartial, non-discriminatory and unbiased information is often more fantasy than reality, even with traditional data. Meeting these requirements in a conflict zone can be prohibitively expensive, overly time-consuming and/or downright dangerous. This explains why advanced statistical methods dedicated to correcting biases exist. These can be, and have been, applied to conflict and human rights data. They can also be applied to user-generated content on social media to the extent that the underlying demographic and census-based information is available.

To place this in context, Harvard University Professor Gary King reminded me that the vast majority of medical data is not representative either. Nor is the vast majority of crime data. Does that render these datasets void? Of course not. Please see this post on Demystifying Crowdsourcing: An Introduction to Non-Probability Sampling.
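
One standard family of the bias-correcting methods alluded to above is post-stratification: weight each report by how under- or over-represented its demographic group is in the sample relative to census figures. A minimal illustration with made-up numbers:

```python
# Share of each group in the population (e.g., from a census) versus in a
# sample of social media reports; all figures below are invented.
population = {"urban_young": 0.30, "urban_old": 0.20,
              "rural_young": 0.25, "rural_old": 0.25}
sample     = {"urban_young": 0.60, "urban_old": 0.20,
              "rural_young": 0.15, "rural_old": 0.05}

# Post-stratification weight for each group: population share / sample share.
weights = {g: population[g] / sample[g] for g in population}

# Example: share of reports mentioning damage, by group.
damage_rate = {"urban_young": 0.40, "urban_old": 0.35,
               "rural_young": 0.55, "rural_old": 0.60}

naive     = sum(damage_rate[g] * sample[g] for g in sample)
corrected = sum(damage_rate[g] * sample[g] * weights[g] for g in sample)
# naive estimate is roughly 0.42; bias-corrected estimate roughly 0.48
print(f"naive: {naive:.2f}, bias-corrected: {corrected:.2f}")
```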

Security safeguards appropriate to the sensitivity of the information must be in place prior to any collection of information, to ensure protection from loss or theft, unauthorized access, disclosure, copying, use or modification, in any format in which it is kept.

One of the popular mapping technologies used by digital humanitarian networks is the Ushahidi platform. When the SBTF learned in 2012 that security holes had still not been patched almost a year after being reported to Ushahidi Inc., the SBTF Core Team made an executive decision to avoid using Ushahidi technology whenever possible, given that the platform could be easily hacked. (Just last month, a colleague of mine who is not a techie but a UN practitioner was able to scrape Ushahidi’s entire Kenya election monitoring data from March 2013, which included some personally identifiable information.) The SBTF has thus been exploring workarounds and is looking to make greater use of GeoFeedia and Google’s new mapping technology, Stratomap, in future crisis mapping operations.

Protection actors must integrate the notion of informed consent when calling upon the general public, or members of a community, to spontaneously send them information through SMS, an open Internet platform, or any other means of communication, or when using information already available on the Internet.

This is perhaps the most problematic but important protection protocol as far as digital humanitarian work is concerned. While informed consent is absolutely of critical importance, the vast majority of crowdsourced content displayed on crisis maps is user-generated and voluntarily shared on social media. The very act of communicating with these individuals to request their consent not only runs the risk of endangering them but also violates the SBTF’s Prime Directive for the exact same reason. Moreover, interacting with crisis-affected communities may raise expectations of response that digital humanitarians are simply not in a position to guarantee. In situations of armed conflict and other situations of violence, conducting individual interviews can put people at risk not only because of the sensitive nature of the information collected, but because mere participation in the process can cause them to be stigmatized or targeted.

That said, the ICRC does recognize that, “When such consent cannot be realistically obtained, information allowing the identification of victims or witnesses, should only be relayed in the public domain if the expected protection outcome clearly outweighs the risks. In case of doubt, displaying only aggregated data, with no individual markers, is strongly recommended.”

Protection actors should, to the degree possible, keep victims or communities having transmitted information on abuses and violations informed of the action they have taken on their behalf – and of the ensuing results. Protection actors using information provided by individuals should remain alert to any negative repercussions on the individuals or communities concerned, owing to the actions they have taken, and take measures to mitigate these repercussions.

Part of this protocol is problematic for the same reason as the previous one: the very act of communicating with victims could place them in harm’s way. As for staying alert to any negative repercussions, I believe the more seasoned digital humanitarian networks already make this one of their top priorities.

When handling confidential and sensitive information on abuses and violations, protection actors should endeavor when appropriate and feasible, to share aggregated data on the trends they observed.

The purpose of the SBTF’s Analysis Team is precisely to serve this function.

Protection actors should establish formal procedures on the information handling process, from collection to exchange,  archiving or destruction.

Formal procedures to archive & destroy crowdsourced crisis information are largely lacking. Moving forward, the SBTF will defer this responsibility to the Activating Organization.

Conclusion

In conclusion, the ICRC notes that, “When it comes to protection, crowdsourcing can be an extremely efficient way to collect data on ongoing violence and abuses and/or their effects on individuals and communities. Made possible by the wide availability of Internet or SMS in countries affected by violence, crowdsourcing has rapidly gained traction.” To this end,

Although the need for caution is a central message [in the ICRC report], it should in no way be interpreted as a call to avoid sharing information. On the contrary, when the disclosing of protection information is thought to be of benefit to the individuals and communities concerned, it should be shared, as appropriate, with local, regional or national authorities, UN peacekeeping operations, other protection actors, and last but not least with service providers.

This is in line with the conclusions reached by OCHA’s landmark report, which notes that “Concern over the protection of information and data is not a sufficient reason to avoid using new communications technologies in emergencies, but it must be taken into account.” And so, “Whereas the first exercises were conducted without clear procedures to assess and to subsequently limit the risks faced by individuals who participated or who were named, the groups engaged in crisis mapping efforts over the years have become increasingly sensitive to the need to identify & manage these risks” (ICRC 2013).

It is worth recalling that the vast majority of the groups engaged in crisis mapping efforts, such as the SBTF, are first and foremost volunteers who not only continue to offer their time, skills and services for free, but also take it upon themselves to actively manage the risks involved in crisis mapping; risks that they, perhaps better than anyone else, understand and worry about most, because they are, after all, at the frontlines of these digital humanitarian efforts. And they do all this on a grand operational budget of $0 (as far as the SBTF goes). And yet, these volunteers continue to mobilize at the request of international humanitarian organizations and are always looking to learn, improve and do better. They continue to change the world one map at a time.

I have organized a CrisisMappers Webinar on April 17, 2013, featuring presentations and remarks by the lead authors of the new ICRC report. Please join the CrisisMappers list-serve for more information.


See also:

  • SMS Code of Conduct for Disaster Response (Link)
  • Humanitarian Accountability Handbook (PDF)