Category Archives: Humanitarian Technologies

GDACSmobile: Disaster Responders Turn to Bounded Crowdsourcing

GDACS, the Global Disaster Alert and Coordination System, sparked my interest in technology and disaster response when it was first launched back in 2004, which is why I’ve referred to GDACS in multiple blog posts since. This near real-time, multi-hazard monitoring platform is a joint initiative between the UN’s Office for the Coordination of Humanitarian Affairs (OCHA) and the European Commission (EC). GDACS serves to consolidate and improve the dissemination of crisis-related information, including rapid mathematical analyses of expected disaster impact. The resulting risk information is distributed via Web and automated email, fax and SMS alerts.

I recently had the pleasure of connecting with two new colleagues, Daniel Link and Adam Widera, who are researchers at the University of Muenster’s European Research Center for Information Systems (ERCIS). Daniel and Adam have been working on GDACSmobile, a smartphone app that was initially developed to extend the reach of the GDACS portal. The project originated as a student project supervised by Daniel and Adam, together with the Center’s Chair, Bernd Hellingrath, in cooperation with Tom de Groeve from the Joint Research Center (JRC) and Minu Kumar Limbu, who is now with UNICEF Kenya.

GDACSmobile is intended for use by disaster responders and the general public, allowing for a combined crowdsourcing and “bounded crowdsourcing” approach to data collection and curation. This bounded approach was a deliberate design feature of GDACSmobile from the outset. I coined the term “bounded crowdsourcing” four years ago (see this blog post from 2009). The approach uses “snowball sampling” to grow a crowd of trusted reporters for the collection of crisis information. For example, one invites 5 (or more) trusted local reporters to collect relevant information and subsequently asks each of these to invite 5 additional reporters whom they fully trust, and so on. I’m thrilled to see this term applied in practical applications such as GDACSmobile. For more on this approach, please see these blog posts.
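
To make the mechanics concrete, here is a minimal sketch of how a bounded crowd grows under snowball sampling. The names and invitation flow are hypothetical, not GDACSmobile’s actual implementation:

```python
# Sketch: growing a "bounded crowd" of trusted reporters via snowball sampling.
# Hypothetical model for illustration -- not GDACSmobile's actual mechanism.

def grow_bounded_crowd(seed_reporters, invites_per_reporter=5, rounds=3):
    """Each trusted reporter invites a fixed number of people they fully trust."""
    crowd = set(seed_reporters)
    frontier = list(seed_reporters)
    for _ in range(rounds):
        next_frontier = []
        for reporter in frontier:
            for i in range(invites_per_reporter):
                invitee = f"{reporter}/invitee{i}"  # stand-in for a real contact
                if invitee not in crowd:
                    crowd.add(invitee)
                    next_frontier.append(invitee)
        frontier = next_frontier
    return crowd

crowd = grow_bounded_crowd(["alice", "bob", "carla", "dmitri", "eva"])
print(len(crowd))  # 5 seeds grow to 780 trusted reporters after 3 rounds
```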

GDACSmobile, which runs on all major smartphone platforms, uses a deliberately minimalist approach to situation reporting and can be used to collect information (via text & image) while offline. The collected data is then automatically transmitted when a connection becomes available. Users can also view & filter data via map view and in list form. Daniel and Adam are considering the addition of an icon-based data-entry interface instead of text-based data entry, since the latter is more cumbersome & time-consuming.

Meanwhile, the server side of GDACSmobile facilitates administrative tasks such as the curation of data submitted by app users and shared on Twitter. Other social media platforms may be added in the future, such as Flickr, to retrieve relevant pictures from disaster-affected areas (similar to GeoFeedia). The server-side moderation feature is used to ensure high data quality standards. But the ERCIS researchers are also open to computational solutions, which is one reason GDACSmobile is not a ‘data island’ and why other systems for computational analysis, microtasking etc., can be used to process the same dataset. The server also “offers a variety of JSON services to allow ‘foreign’ systems to access the data. […] SQL queries can also be used with admin access to the server, and it would be very possible to export tables to spreadsheets […].” 
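
Since the server exposes JSON services for “foreign” systems, a third-party integration could look something like the sketch below. The endpoint URL and field names are my assumptions for illustration; the real API would need to be checked against the server’s documentation:

```python
# Sketch: a 'foreign' system pulling GDACSmobile reports via the JSON services
# mentioned above. The endpoint URL and field names are assumptions made for
# illustration only.
import json
import urllib.request

def fetch_reports(base_url, disaster_id):
    url = f"{base_url}/reports?disaster={disaster_id}"  # hypothetical endpoint
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# e.g. feed the reports into a microtasking or machine-learning pipeline
reports = fetch_reports("https://gdacsmobile.example.org/api", "EQ-2013-000123")
for report in reports:
    print(report.get("lat"), report.get("lon"), report.get("text"))
```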

I very much look forward to following GDACSmobile’s progress. Since Daniel and Adam have designed their app to be open and are themselves open to considering computational solutions, I have already begun to discuss with them our AIDR (Artificial Intelligence for Disaster Response) project at the Qatar Computing Research Institute (QCRI). I believe that making AIDR and GDACSmobile interoperable would make a whole lot of sense. Until then, if you’re going to this year’s International Conference on Information Systems for Crisis Response and Management (ISCRAM 2013) in May, be sure to participate in the workshop (PDF) that Daniel and Adam are running there. The side-event will present the state of the art and future trends in rapid assessment tools in order to stimulate a conversation on current solutions and developments in mobile technologies for post-disaster data analytics and situational awareness. My colleague Dr. Muhammad Imran from QCRI will also be there to present findings from our crisis computing research, so I highly recommend connecting with him.

GeoFeedia: Ready for Digital Disaster Response

GeoFeedia was not originally designed to support humanitarian operations. But last year’s blog post on the potential of GeoFeedia for crisis mapping caught the interest of CEO Phil Harris. So he kindly granted the Standby Volunteer Task Force (SBTF) free access to the platform. In return, we provided his team with feedback on what features (listed here) would make GeoFeedia more useful for digital disaster response. This was back in summer 2012. I recently learned that they’ve been quite busy since. Indeed, I had the distinct pleasure of sharing the stage with Phil and his team at this superb conference on social media for emergency management. After listening to their talk, I realized it was high time to publish an update on GeoFeedia, especially since we had used the tool just two months earlier in response to Typhoon Pablo, one of the worst disasters to hit the Philippines in the past 100 years.

The 1-minute video is well worth watching if you’re new to GeoFeedia. The platform enables hyper-local searches for information by location across multiple social media channels such as Twitter, YouTube, Flickr, Picasa & now Instagram. One of my favorite GeoFeedia features is the awesome geofeed (digital fence), which you can learn more about here. So what’s new besides Instagram? Well, the first suggestion I made last year was to provide users with the option of searching by both location and topic, rather than just location alone. And presto, this is now possible, which means that digital humanitarians today can zoom into a disaster-affected area and filter by social media type, date and hashtag. This makes the geofeed feature even more compelling for crisis response, especially since geofeeds can also be saved and shared.
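
To illustrate why location-first search matters, here is a toy version of the logic: select posts inside a geofence first, then narrow by hashtag. This is not GeoFeedia’s actual code (which is proprietary), just the general idea:

```python
# Toy illustration of location-first search: keep posts inside a geofence
# first, then narrow by hashtag. Not GeoFeedia's actual (proprietary) code.
posts = [
    {"lat": 14.59, "lon": 120.98, "tags": ["#pabloph"], "text": "flooded street"},
    {"lat": 14.61, "lon": 121.00, "tags": [], "text": "bridge down near us"},
    {"lat": 40.71, "lon": -74.00, "tags": ["#pabloph"], "text": "thoughts with PH"},
]

def in_geofence(post, south, west, north, east):
    return south <= post["lat"] <= north and west <= post["lon"] <= east

local = [p for p in posts if in_geofence(p, 14.0, 120.0, 15.0, 122.0)]
tagged = [p for p in local if "#pabloph" in p["tags"]]
print(len(local), len(tagged))  # 2 local posts, but only 1 carries the hashtag
```

Filtering by keyword first would have kept only the tagged post (and a distant, irrelevant one); filtering by location first keeps both local reports.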

The vast majority of social media monitoring tools out there filter first by keyword and hashtag; only later do they add location. As Phil points out, this means they can easily miss 70% of hyper-local social media reports. Most users and organizations, who pay hefty licensing fees to use these platforms, are typically unaware of this. The fact that GeoFeedia filters first by location is no accident. This recent study (PDF) of the 2012 London Olympics showed that social media users posted close to 170,000 geo-tagged items to Twitter, Instagram, Flickr, Picasa and YouTube during the games. But only 31% of these geo-tagged posts contained any Olympic-specific keywords and/or hashtags! So they decided to analyze another large event and again found the number of results drop by about 70% when not filtering first by location. Phil argues that people in a crisis situation obviously don’t wait for keywords or hashtags to form, so he expects this drop to apply to disasters as well. “Traditional keyword and hashtag search [must] thus be complemented with a geographical search in order to provide a full picture of social media content that is contextually relevant to an event.”

One of my other main recommendations to Phil & team last year had to do with analytics. There is a strong need for an “Analytics function that produces summary statistics and trends analysis for a geofeed of interest. This is where Geofeedia could better capture temporal dynamics by including charts, graphs and simple time-series analysis to depict how events have been unfolding over the past hour vs 12 hours, 24 hours, etc.” Well, sure enough, one of GeoFeedia’s major new features is a GeoAnalytics Dashboard: an interface that enables users to discover temporal trends and patterns in social media—and to do so by geofeed. This means a user can now draw a geofeed around a specific area of interest in a given disaster zone and search for pictures that capture major infrastructure damage on a specified date and that contain tags or descriptions with the words “#earthquake”, “damage,” “buildings,” etc. As Phil rightly points out, this provides a “huge time advantage during a crisis to give yet another filtered layer of intelligence; in effect, social media that is highly relevant and actionable ‘bubbling-up to the top’ of the pile.”

I truly am a huge fan of the GeoFeedia platform. Plus, Phil & team have been very responsive to our interest in using their tool for disaster response. So I’m excited to see which features they build out next. They’ve already got a “data portability” function that enables data export. Users can also publish content from GeoFeedia directly to their own social networks. Moreover, the filtered content produced by geofeeds can be shared with individuals who do not have a GeoFeedia account. In any event, I hope the team will take into account two items from my earlier wish list—namely Sentiment Analysis and GeoAlerts.

A Sentiment Analysis feature would capture the general mood and sentiment expressed hyper-locally within a defined geofeed in real time. An automated GeoAlerts feature would make the geofeed king. GeoAlerts would enable users to trigger specific actions based on different kinds of social media traffic within a given geofeed of interest. For example, I’d like to be notified if the number of pictures posted within my geofeed that are tagged with the words “#earthquake” and “damage” increases by more than 20% in any given hour. Similarly, one could set a geofeed’s GeoAlert for a 10% increase in the number of tweets with the words “cholera” and “diarrhea” (these need not be in English, by the way) in any given 10-minute period. Users would then receive GeoAlerts via automated emails, Tweets and/or SMS’s. This feature would in effect make GeoFeedia more of a mobile, “hands-free” platform, like Waze for example.
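
Here is a rough sketch of the trigger logic such a GeoAlerts feature might use. Since the feature does not exist yet, everything below is a mock-up of the proposal rather than anything GeoFeedia has implemented:

```python
# Mock-up of the proposed GeoAlert trigger: fire when activity in a geofeed
# grows by more than a set percentage between time windows. Illustrative only.

def geoalert_triggered(prev_count, curr_count, threshold_pct=20):
    """True if matching posts grew by more than threshold_pct over the window."""
    if prev_count == 0:
        return curr_count > 0  # any activity after silence counts as a spike
    growth = 100.0 * (curr_count - prev_count) / prev_count
    return growth > threshold_pct

# e.g. pictures tagged "#earthquake" and "damage" in my geofeed, hour over hour
if geoalert_triggered(prev_count=50, curr_count=65):
    print("GeoAlert: >20% hourly increase -- notify via email/Tweet/SMS")
```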

My first blog post on GeoFeedia was entitled “GeoFeedia: Next Generation Crisis Mapping Technology?” The answer today is a definite “Yes!” While the platform was not originally designed with disaster response in mind, the team has since been adding important features that make the tool increasingly useful for humanitarian applications. And GeoFeedia has plans for more exciting developments in 2013. Their commitment to innovation and strong continued interest in supporting digital disaster response is why I’m hoping to work more closely with them in the years to come. For example, our AIDR (Artificial Intelligence for Disaster Response) platform would add a strong Machine Learning component to GeoFeedia’s search function, in effect enabling the tool to go beyond simple keyword search.

A Research Framework for Next Generation Humanitarian Technology and Innovation

Humanitarian donors and organizations are increasingly championing innovation and the use of new technologies for humanitarian response. DfID, for example, is committed to using “innovative techniques and technologies more routinely in humanitarian response” (2011). In a more recent strategy paper, DfID confirmed that it would “continue to invest in new technologies” (2012). ALNAP’s important report on “The State of the Humanitarian System” documents the shift towards greater innovation, “with new funds and mechanisms designed to study and support innovation in humanitarian programming” (2012). A forthcoming landmark study by OCHA makes the strongest case yet for the use and early adoption of new technologies for humanitarian response (2013).

These strategic policy documents are game-changers and pivotal to ushering in the next wave of humanitarian technology and innovation. That said, the reports are limited by the very fact that the authors are humanitarian professionals and thus not necessarily familiar with the field of advanced computing. The purpose of this post is therefore to set out a more detailed research framework for next generation humanitarian technology and innovation—one with a strong focus on information systems for crisis response and management.

In 2010, I wrote this piece on “The Humanitarian-Technology Divide and What To Do About It.” This divide became increasingly clear to me when I co-founded and co-directed the Harvard Humanitarian Initiative’s (HHI) Program on Crisis Mapping & Early Warning (2007-2009). So I co-founded the annual International CrisisMappers Conference series in 2009 and have continued to co-organize this unique, cross-disciplinary forum on humanitarian technology. The CrisisMappers Network also plays an important role in bridging the humanitarian-technology divide. My decision to join Ushahidi as Director of Crisis Mapping (2009-2012) was a strategic move to continue bridging the divide—and to do so from the technology side this time.

The same is true of my move to the Qatar Computing Research Institute (QCRI) at the Qatar Foundation. My experience at Ushahidi made me realize that serious expertise in Data Science is required to tackle the major challenges appearing on the horizon of humanitarian technology. Indeed, the key words missing from the DfID, ALNAP and OCHA innovation reports include: Data Science, Big Data Analytics, Artificial Intelligence, Machine Learning, Machine Translation and Human Computing. This current divide between the humanitarian and data science spaces needs to be bridged, which is precisely why I joined the Qatar Computing Research Institute as Director of Innovation: to develop and prototype the next generation of humanitarian technologies by working directly with experts in Data Science and Advanced Computing.

My efforts to bridge these communities also explain why I am co-organizing this year’s Workshop on “Social Web for Disaster Management” at the 2013 World Wide Web conference (WWW13). The WWW event series is one of the most prestigious conferences in the field of Advanced Computing. I have found that experts in this field are very interested and highly motivated to work on humanitarian technology challenges and crisis computing problems. As one of them recently told me: “We simply don’t know what projects or questions to prioritize or work on. We want questions, preferably hard questions, please!”

Yet the humanitarian innovation and technology reports cited above overlook the field of advanced computing. Their policy recommendations vis-à-vis future information systems for crisis response and management are vague at best. And yet one of the major challenges that the humanitarian sector faces is the rise of Big (Crisis) Data. I have already discussed this here, here and here, for example. The humanitarian community is woefully unprepared to deal with this tidal wave of user-generated crisis information. There are already more mobile phone subscriptions than people in 100+ countries. And fully 50% of the world’s population in developing countries will be using the Internet within the next 20 months—the current figure is 24%. Meanwhile, close to 250 million people were affected by disasters in 2010 alone. Since then, the number of new mobile phone subscriptions has increased by well over one billion, which means that disaster-affected communities today are increasingly likely to be digital communities as well.

In the Philippines, a country highly prone to “natural” disasters, 92% of Filipinos who access the web use Facebook. In early 2012, Filipinos sent an average of 2 billion text messages every day. When disaster strikes, some of these messages will contain information critical for situational awareness & rapid needs assessment. The innovation reports by DfID, ALNAP and OCHA emphasize time and time again that listening to local communities is a humanitarian imperative. As DfID notes, “there is a strong need to systematically involve beneficiaries in the collection and use of data to inform decision making. Currently the people directly affected by crises do not routinely have a voice, which makes it difficult for their needs [to] be effectively addressed” (2012). But how exactly should we listen to millions of voices at once, let alone manage, verify and respond to these voices with potentially life-saving information? Over 20 million tweets were posted during Hurricane Sandy. In Japan, over half-a-million new users joined Twitter the day after the 2011 Earthquake. More than 177 million tweets about the disaster were posted that same day, i.e., 2,000 tweets per second on average.

Of course, the volume and velocity of crisis information will vary from country to country and disaster to disaster. But the majority of humanitarian organizations do not have the technologies in place to handle smaller tidal waves either. Take the case of the recent Typhoon in the Philippines, for example. OCHA activated the Digital Humanitarian Network (DHN) to carry out a rapid damage assessment by analyzing the 20,000 tweets posted during the first 48 hours of Typhoon Pablo. In fact, one of the main reasons digital volunteer networks like the DHN and the Standby Volunteer Task Force (SBTF) exist is to provide humanitarian organizations with this kind of skilled surge capacity. But analyzing 20,000 tweets in 12 hours (mostly manually) is one thing; analyzing 20 million requires more than a few hundred dedicated volunteers. What’s more, we do not have the luxury of months to carry out this analysis. Access to information is as important as access to food; and like food, information has a sell-by date.

We clearly need a research agenda to guide the development of next generation humanitarian technology. One such framework is proposed here. The Big (Crisis) Data challenge is composed of (at least) two major problems: (1) finding the needle in the haystack; (2) assessing the accuracy of that needle. In other words, identifying the signal in the noise and determining whether that signal is accurate. Both of these challenges are exacerbated by serious time constraints. There are (at least) two ways to manage the Big Data challenge in real or near real-time: Human Computing and Artificial Intelligence. We know about these solutions because they have already been developed and used by other sectors and disciplines for several years now. In other words, our information problems are hardly as unique as we might think. Hence the importance of bridging the humanitarian and data science communities.

In sum, the Big Crisis Data challenge can be addressed using Human Computing (HC) and/or Artificial Intelligence (AI). Human Computing includes crowdsourcing and microtasking; AI includes natural language processing and machine learning. A framework for next generation humanitarian technology and innovation must thus promote Research and Development (R&D) that applies these methodologies to humanitarian response. For example, Verily is a project that leverages HC for the verification of crowdsourced social media content generated during crises. In contrast, this here is an example of an AI approach to verification. The Standby Volunteer Task Force (SBTF) has used HC (microtasking) to analyze satellite imagery (Big Data) for humanitarian response. Another novel HC approach to managing Big Data is the use of gaming, something called Playsourcing. AI for Disaster Response (AIDR) is an example of AI applied to humanitarian response. In many ways, though, AIDR combines AI with Human Computing, as does MatchApp. Such hybrid solutions should also be promoted as part of the R&D framework on next generation humanitarian technology.
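
To give a flavor of the AI side of this framework, the sketch below trains a toy supervised classifier to separate relevant crisis tweets from noise. This is a generic illustration, not AIDR’s actual pipeline; in a hybrid setup the training labels would come from the Human Computing side (e.g., microtasked volunteers):

```python
# Toy supervised classifier separating relevant crisis tweets from noise.
# A generic sketch, not AIDR's actual pipeline; labels would come from
# microtasked volunteers in a hybrid HC+AI setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_tweets = [
    "bridge collapsed on highway 5, people trapped",   # relevant
    "we urgently need water and medicine in town X",   # relevant
    "watching a movie tonight, so bored",              # noise
    "great concert last night!!",                      # noise
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_tweets, train_labels)
print(model.predict(["trapped under rubble, need medicine"]))  # expected: [1]
```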

There is of course more to humanitarian technology than information management alone. Related is the topic of Data Visualization, for example. There are also exciting innovations and developments in the use of drones or Unmanned Aerial Vehicles (UAVs), meshed mobile communication networks, hyper-low-cost satellites, etc. I am particularly interested in each of these areas and will continue to blog about them. In the meantime, I very much welcome feedback on this post’s proposed research framework for humanitarian technology and innovation.

Humanitarian Technology and the Japan Earthquake (Updated)

My Internews colleagues have just released this important report on the role of communications in the 2011 Japan Earthquake. Independent reports like this one are absolutely key to building the much-needed evidence base of humanitarian technology. Internews should thus be applauded for investing in this important study. The purpose of my blog post is to highlight findings that I found most interesting and to fill some of the gaps in the report’s coverage.

I’ll start with the gaps since there are far fewer of these. While the report does reference the Sinsai Crisis Map, it overlooks a number of key points that were quickly identified in an email reply just 61 minutes after Internews posted the study on the CrisisMappers listserv. These points were made by my Fletcher colleague Jeffrey Reynolds, who spearheaded some of the digital response efforts from The Fletcher School in Boston:

“As one of the members who initiated crisis mapping effort in the aftermath of the Great East Japan Earthquake, I’d like to set the record straight on 4 points:

  • The crisis mapping effort started at the Fletcher School with students from Tufts, Harvard, MIT, and BU within a couple hours of the earthquake. We took initial feeds from the SAVE JAPAN! website and put them into the existing OpenStreetMap (OSM) for Japan. This point is not to take credit, but to underscore that small efforts, distant from a catastrophe, can generate momentum – especially when the infrastructure in area/country in question is compromised.
  • Anecdotally, crisis mappers in Boston who have since returned to Japan told me that at least 3 people were saved because of the map.
  • Although crisis mapping efforts may not have been well known by victims of the quake and tsunami, the embassy community in Tokyo leveraged the crisis map to identify their citizens in the Tohoku region. As the proliferation of crisis map-like platforms continues, e.g., Waze, victims in future crises will probably gravitate to social media faster than they did in Japan. Social media, specifically crisis mapping, has revolutionized the role of victim in disasters–from consumer of services, to consumer of relief AND supplier of information.
  • The crisis mapping community would be wise to work with Twitter and other suppliers of information to develop algorithms that minimise noise and duplication of information.

Thank you for telling this important story about the March 11 earthquake. May it lead to the reduction of suffering in current crises and those to come.” Someone else on CrisisMappers noted that “the first OSM mappers of satellite imagery from Japan were the mappers from Haiti who we trained after their own string of catastrophes.” I believe Jeffrey is spot on and would only add the following point: According to Hal Seki, the crisis map received over one million unique views in the weeks and months that followed the tsunami, the vast majority of them apparently from inside Japan. So let’s assume that 700,000 users accessed the crisis map but that only 1% of them found the map useful for their purposes. This still means that 7,000 unique users found the map informative and of consequence. Unless a random sample of these 7,000 users was surveyed, I find it rather myopic to claim so confidently that the map had no impact. Just because impact is difficult to measure doesn’t mean there was none to measure in the first place.

In any event, Internews’s reply to this feedback was exemplary and far more constructive than the brouhaha that occurred over the Disaster 2.0 Report. So I applaud the team for how positive, pro-active and engaged they have been with our feedback. Thank you very much.

In any event, the gaps should not distract from what is an excellent and important report on the use of technology in response to the Japan Earthquake. As my colleague Hal Seki (who spearheaded the Sinsai Crisis Map) noted on CrisisMappers, “the report was accurate and covered important on-going issues in Japan.” So I want to thank him again, along with his entire team (including Sora, pictured above, the youngest volunteer behind the crisis mapping efforts) and Jeffrey & team at Fletcher, for all their efforts during those difficult weeks and months following the devastating disaster.

Below are multiple short excerpts from the 56-page Internews report that I found most interesting. So if you don’t have time to read the entire report, then simply glance through the list below.

  • Average tweets-per-minute in Japan before earthquake = 3,000
  • Average tweets-per-minute in Japan after earthquake = 11,000
  • DMs per minute from Japan to the world before earthquake = 200
  • DMs per minute from Japan to the world after earthquake = 1,000
  • Twitter’s global network facilitated search & rescue missions for survivors stranded by the tsunami. Within 3 days the Government of Japan had also set up its first disaster-related Twitter account.
  • Safecast, a volunteer-led project to collect and share radiation measurements, was created within a week of the disaster and generated over 3.5 million readings by December 2012.
  • If there is no information after a disaster, people become even more stressed and anxious. Old media works best in emergencies.
  • Community radio, local newspapers, newsletters–in some instances, handwritten newsletters–and word of mouth played a key role in providing lifesaving information for communities. Radio was consistently ranked the most useful source of information by disaster-affected communities, from the day of the disaster right through until the end of the first week.
  • The second challenge involved humanitarian responders’ lack of awareness about the valuable information resources being generated by one very significant, albeit volunteer, community: the volunteer technical and crisis mapping communities.
  • The OpenStreetMap volunteer community, for instance, created a map of over 500,000 roads in disaster-affected areas while volunteers working with another crisis map, Sinsai.info, verified, categorised and mapped 12,000 tweets and emails from the affected regions for over three months. These platforms had the potential to close information gaps hampering the response and recovery operation, but it is unclear to what degree they were used by professional responders.
  • The “last mile” needs to be connected in even the most technologically advanced societies.
  • Still, due to the problems at the Fukushima nuclear plant and the scale of the devastation, there was still the issue of “mismatching” – where mainstream media coverage focused on the nuclear crisis and didn’t provide the information that people in evacuation centres needed most.
  • The JMA use a Short Message Service Cell Broadcast (SMS-CB) system to send mass alerts to mobile phone users in specific geographic locations. Earthquakes affect areas in different ways, so alerting phone users based on location enables region-specific alerts to be sent. The system does not need to know specific phone numbers so privacy is protected and the risk of counterfeit emergency alerts is reduced.
  • A smartphone application such as Yurekuru Call, meaning “Earthquake Coming”, can also be downloaded and it will send warnings before an earthquake, details of potential magnitude and arrival times depending on the location.
  • This started with a 14-year-old junior high school student who made a brave but risky decision to live stream NHK on Ustream using his iPhone camera [which is illegal]. This was done within 17 minutes of the earthquake happening on March 11.
  • So for most disaster-affected communities, local initiatives such as community radios, community (or hyper-local) newspapers and word of mouth provided information evacuees wanted the most, including information on the safety of friends and family and other essential information.
  • It is worth noting that it was not only professional reporters who committed themselves to providing information, but also community volunteers and other actors – and that is despite the fact that they too were often victims of the disaster.
  • And after the disaster, while the general level of public trust in media and in social media increased, radio gained the most trust from locals. It was also cited as being a more personable source of information – and it may even have been the most suitable after events as traumatic as these because distressing images couldn’t be seen.
  • Newspapers were also information lifelines in Ishinomaki, 90km from the epicentre of the earthquake. The local radio station was temporarily unable to broadcast due to a gasoline shortage so for a short period of time, the only information source in the city was a handwritten local newspaper, the Hibi Shimbun. This basic, low-cost, community initiative delivered essential information to people there.
  • Newsletters also proved to be a cost-efficient and effective way to inform communities living in evacuation centres, temporary shelters and in their homes.
  • Social networks such as Twitter, Mixi and Facebook provided a way for survivors to locate friends and family and let people know that they had survived.
  • Audio-visual content sharing platforms like YouTube and Ustream were used not only by established organisations and broadcasters, but also by survivors in the disaster-affected areas to share their experiences. There were also a number of volunteer initiatives, such as the crowdsourced disaster map, Sinsai.info, established to support the affected communities.
  • With approximately 35 million account holders in Japan, Twitter is the most popular social networking site in that country. This makes Japan the third-largest Twitter market in the world behind the USA and Brazil.
  • The most popular hash tags included: #anpi (for finding people) and #hinan (for evacuation centre information) as well as #jishin (earthquake information).
  • The Japanese site, Mixi, was cited as the most used social media in the affected Tohoku region and that should not be underestimated. In areas where there was limited network connectivity, Mixi users could easily check the last time fellow users had logged in by viewing their profile page; this was a way to confirm whether that user was safe. On March 16, 2011, Mixi released a new application that enabled users to view friends’ login history.
  • Geiger counter radiation readings were streamed by dozens, if not hundreds, of individuals based in the area.
  • Ustream also allowed live chats between viewers using their Twitter, Facebook and Instant Messenger accounts; this service was called “Social Stream”.
  • Local officials and NGOs commented that the content of the tweets or Facebook messages requesting assistance were often not relevant because many of the messages were based on secondary information or were simply being re-tweeted.
  • The JRC received some direct messages requesting help, but after checking the situation on the ground, it became clear that many of these messages were, for instance, re-tweets of aid requests or were no longer relevant, some being over a week old.
  • “Ultimately the opportunities (of social media) outweigh the risks. Social media is here to stay and non-engagement is simply not an option.”
  • The JRC also had direct experience of false information going viral; the organisation became the subject of a rumour falsely accusing it of deducting administration fees from cash donations. The rumour originated online and quickly spread across social networks, causing the JRC to invest in a nationwide advertising campaign confirming that 100 percent of the donations went to the affected people.
  • In February 2012 Facebook tested their Disaster Message Board, where users mark themselves and friends as “safe” after a major disaster. The service will only be activated after major emergencies.
  • Most page views [of Sinsai.info] came from the disaster-affected city of Sendai where internet penetration is higher than in surrounding rural areas. […] None of the survivors interviewed during field research in Miyagi and Iwate were aware of this crisis map.
  • The major mobile phone providers in Japan created emergency messaging services known as “disaster message boards” for people to type, or record messages, on their phones for relatives and friends to access. This involved two types of message boards. One was text based, where people could input a message on the provider’s website that would be stored online or automatically forwarded to pre-registered email addresses. The other was a voice recording that could be emailed to a recipient just like an answer phone message.
  • The various disaster message boards were used 14 million times after the earthquake and they significantly reduced congestion on the network – especially compared with the same number of people making direct calls.
  • Information & communication are a form of aid – although unfortunately, historically, the aid sector has not always recognised this. Getting information to people on the side of the digital divide, where there is no internet, may help them survive in times of crisis and help communities rebuild after immediate danger has passed.
  • Timely and accurate information for disaster-affected people as well as effective communication between local populations and those who provide aid also improve humanitarian responses to disasters. Using local media – such as community radio or print media – is one way to achieve this and it is an approach that should be embraced by humanitarian organisations.
  • With plans for a US$50 smartphone in the pipeline, the international humanitarian community needs to prepare for a transformation in the way that information flows in disaster zones.
  • This report’s clear message is that the more channels of communication available during a disaster the better. In times of emergency it is simply not possible to rely on only one, or even three or four kinds, of communication. Both low tech and high tech methods of communication have proven themselves equally important in a crisis.

Keynote: Next Generation Humanitarian Technology

I’m excited to be giving the Keynote address at the Social Media and Response Management Interface Event (SMARMIE 2013) in New York this morning. A big thank you to the principal driver behind this important event, Chuck Frank, for kindly inviting me to speak. This is my first major keynote since joining QCRI, so I’m thrilled to share what I’ve learned during this time and my vision for the future of humanitarian technology. But I’m even more excited by the selection of speakers and caliber of participants. I’m eager to learn about their latest projects, gain new insights and hopefully create pro-active partnerships moving forward.

You can follow this event via live stream and via @smarmieNYC & #smarmie. I plan to live tweet the event at @patrickmeier. My slides are available for download here (125MB). Each slide includes speaking notes, which may be of interest to folks who are unable to follow via live stream. Feel free to use my slides, but strictly for non-commercial purposes and only with direct attribution. I’ll be sure to post the video of my talk on iRevolution when it becomes available. In the meantime, these videos and publications may be of interest. Also, I’ve curated the table of contents below with 60+ links to every project and/or concept referred to in my keynote and slides (in chronological order) so participants and others can revisit these after the conference—and more importantly keep our conversations going via Twitter and the comments section of the blog posts. I plan to hire a Research Assistant in the near future to turn these (and other posts) into a series of up-to-date e-books in which I’ll cite and fully credit the most interesting and insightful comments posted on iRevolution.

Social Media Pulse of Planet

http://iRevolution.net/2013/02/02/pulse-of-the-planet
http://iRevolution.net/2013/02/06/the-world-at-night
http://iRevolution.net/2011/04/20/network-witness

Big Crisis Data and Added Value

http://iRevolution.net/2011/06/22/no-data-bad-data

http://iRevolution.net/2012/02/26/mobile-technologies-crisis-mapping-disaster-response

http://iRevolution.net/2012/12/17/debating-tweets-disaster

http://iRevolution.net/2012/07/18/disaster-tweets-for-situational-awareness

http://iRevolution.net/2013/01/11/disaster-resilience-2-0

Standby Task Force (SBTF)

http://blog.standbytaskforce.com

http://iRevolution.net/2010/09/26/crisis-mappers-task-force

Libya Crisis Map

http://blog.standbytaskforce.com/libya-crisis-map-report

http://irevolution.net/2011/03/04/crisis-mapping-libya

http://iRevolution.net/2011/03/08/volunteers-behind-libya-crisis-map

http://iRevolution.net/2011/06/12/im-not-gaddafi-test

Philippines Crisis Map

http://iRevolution.net/2012/12/05/digital-response-to-typhoon-philippines

http://iRevolution.net/2012/12/08/digital-response-typhoon-pablo

http://iRevolution.net/2012/12/06/digital-disaster-response-typhoon

http://iRevolution.net/2012/06/03/geofeedia-for-crisis-mapping

http://iRevolution.net/2013/02/26/crowdflower-for-disaster-response

Digital Humanitarians 

http://www.digitalhumanitarians.com

Human Computation

http://iRevolution.net/2013/01/20/digital-humanitarian-micro-tasking

Human Computation for Disaster Response (submitted for publication)

Syria Crisis Map

http://iRevolution.net/2012/03/25/crisis-mapping-syria

http://iRevolution.net/2012/11/27/usaid-crisis-map-syria

http://iRevolution.net/2012/07/30/collaborative-social-media-analysis

http://iRevolution.net/2012/05/29/state-of-the-art-digital-disease-detection

Hybrid Systems for Disaster Response

http://iRevolution.net/2012/10/21/crowdsourcing-and-advanced-computing

http://iRevolution.net/2012/07/30/twitter-for-humanitarian-cluster

http://iRevolution.net/2013/02/11/update-twitter-dashboard

Credibility of Social Media: Compare to What?

http://iRevolution.net/2013/01/08/disaster-tweets-versus-911-calls

http://iRevolution.net/2010/09/22/911-system

Human Computed Credibility

http://iRevolution.net/2012/07/26/truth-and-social-media

http://iRevolution.net/2011/11/29/information-forensics-five-case-studies

http://iRevolution.net/2010/06/30/crowdsourcing-detective

http://iRevolution.net/2012/11/20/verifying-source-credibility

http://iRevolution.net/2012/09/16/accelerating-verification

http://iRevolution.net/2010/09/19/veracity-of-tweets-during-a-major-crisis

http://iRevolution.net/2011/03/26/technology-to-counter-rumors

http://iRevolution.net/2012/03/10/truthiness-as-probability

http://iRevolution.net/2013/01/27/mythbuster-tweets

http://iRevolution.net/2012/10/31/hurricane-sandy

http://iRevolution.net/2012/07/16/crowdsourcing-for-human-rights-monitoring-challenges-and-opportunities-for-information-collection-verification

Verily: Crowdsourced Verification

http://iRevolution.net/2013/02/19/verily-crowdsourcing-evidence

http://iRevolution.net/2011/11/06/time-critical-crowdsourcing

http://iRevolution.net/2012/09/18/six-degrees-verification

http://iRevolution.net/2011/09/26/augmented-reality-crisis-mapping

AI Computed Credibility

http://iRevolution.net/2012/12/03/predicting-credibility

http://iRevolution.net/2012/12/10/ranking-credibility-of-tweets

Future of Humanitarian Tech

http://iRevolution.net/2012/04/17/red-cross-digital-ops

http://iRevolution.net/2012/11/15/live-global-twitter-map

http://iRevolution.net/2013/02/16/crisis-mapping-minority-report

http://iRevolution.net/2012/04/09/humanitarian-future

http://iRevolution.net/2011/08/22/khan-borneo-galaxies

http://iRevolution.net/2010/03/24/games-to-turksource

http://iRevolution.net/2010/07/08/cognitive-surplus

http://iRevolution.net/2010/08/14/crowd-is-always-there

http://iRevolution.net/2011/09/14/crowdsource-crisis-response

http://iRevolution.net/2012/07/04/match-com-for-economic-resilience

http://iRevolution.net/2013/02/27/matchapp-disaster-response-app

http://iRevolution.net/2013/01/07/what-waze-can-teach-us

Policy

http://iRevolution.net/2012/12/04/catch-22

http://iRevolution.net/2012/02/05/iom-data-protection

http://iRevolution.net/2013/01/23/perils-of-crisis-mapping

http://iRevolution.net/2013/02/25/launching-sms-code-of-conduct

http://iRevolution.net/2013/02/26/haiti-lies

http://iRevolution.net/2012/06/04/big-data-philanthropy-for-humanitarian-response

http://iRevolution.net/2012/07/25/become-a-data-donor

ps. Please let me know if you find any broken links so I can fix them, thank you!

MatchApp: Next Generation Disaster Response App?

Disaster response apps have multiplied in recent years. I’ve been reviewing the most promising ones and have found that many cater to professional responders and organizations. While empowering paid professionals is a must, there has been little focus on empowering the real first responders, i.e., the disaster-affected communities themselves. Meanwhile, there is always a dramatic mismatch between demand for responder services and supply, which is why crises are brutal audits for humanitarian organizations. Take this Red Cross survey, which found that 74% of people who post a need on social media during a disaster expect a response within an hour. But paid responders cannot be everywhere at the same time during a disaster. The response needs to be decentralized and crowdsourced.

In contrast to paid responders, the crowd is always there. And most lives saved following a disaster are saved by local volunteers and resources, not external aid or relief. This explains why FEMA Administrator Craig Fugate has called on the public to become a member of the team. Decentralization is probably the only way for emergency response organizations to improve their disaster audits. As many seasoned humanitarian colleagues of mine have noted over the years, the majority of needs that materialize during (and after) a disaster do not require the attention of paid disaster responders with an advanced degree in humanitarian relief and 10 years of experience in Haiti. We are not all affected in the same way when disaster strikes, and those less affected are often very motivated and capable of responding to the basic needs of those around them. After all, the real first responders are—and have always been—the local communities themselves, not the Search and Rescue teams that parachute in 36 hours later.

In other words, local self-organized action is a natural response to disasters. Facilitated by social capital, self-organized action can accelerate both response & recovery. A resilient community is therefore one with ample capacity for self-organization. To be sure, if a neighborhood can rapidly identify local needs and quickly match these with available resources, it will rebound more quickly than areas with less capacity for self-organized action. The process is a bit like building a large jigsaw puzzle, with some pieces standing for needs and others for resources. Unlike an actual jigsaw puzzle, however, there can be hundreds of thousands of pieces and very limited time to put them together correctly.

This explains why I’ve long been calling for a check-in & match.com smartphone app for local collective disaster response. The talk I gave (above) at Where 2.0 in 2011 highlights this further as do the blog posts below.

Check-In’s with a Purpose: Applications for Disaster Response
http://iRevolution.net/2011/02/16/checkins-for-disaster-response

Maps, Activism & Technology: Check-In’s with a Purpose
http://iRevolution.net/2011/02/05/check-ins-with-a-purpose

Why Geo-Fencing Will Revolutionize Crisis Mapping
http://iRevolution.net/2011/08/21/geo-fencing-crisis-mapping

How to Crowdsource Crisis Response
http://iRevolution.net/2011/09/14/crowdsource-crisis-response

The Crowd is Always There
http://iRevolution.net/2010/08/14/crowd-is-always-there

Why Crowdsourcing and Crowdfeeding may be the Answer
http://iRevolution.net/2010/12/29/crowdsourcing-crowdfeeding

Towards a Match.com for Economic Resilience
http://iRevolution.net/2012/07/04/match-com-for-economic-resilience

This “MatchApp” could rapidly match hyper-local needs with resources (material & informational) available locally or regionally. Check-ins (think Foursquare) can provide an invaluable function during disasters. We’re all familiar with the command “In case of emergency, break glass,” but what if: “In case of emergency, check in”? Checking in is space- and time-dependent. By checking in, I announce that I am at a given location at a specific time with a certain need (red button). This means that information relevant to my location, time, user profile (and even vital statistics) can be customized and automatically pushed to my MatchApp in real time. After tapping on red, MatchApp prompts the user to select what specific need s/he has. (Yes, the icons I’m using are from the MDGs and are just placeholders.) Note that the app we’re building is for Androids, not iPhones, so the below is for demonstration purposes only.
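
For illustration, a check-in of this kind might carry the following fields. These names are invented for the purpose of this sketch; the actual app was still being designed at the time of writing:

```python
# Sketch of what a MatchApp check-in could carry. Field names are invented
# for illustration; not the app's actual data model.
from dataclasses import dataclass, field
import time

@dataclass
class CheckIn:
    user_id: str
    lat: float
    lon: float
    kind: str        # "need" (red button) or "offer" (green button)
    category: str    # e.g. "water", "shelter", "medical"
    timestamp: float = field(default_factory=time.time)
    details: str = ""

need = CheckIn("u123", 14.59, 120.98, "need", "water", details="family of 5")
offer = CheckIn("u456", 14.60, 120.99, "offer", "water")
```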

But MatchApp will also enable users who are less (or not) affected by a disaster to check in and offer help (by tapping the green button). This is where the match-making algorithm comes into play. There are various (compatible) options in this respect. The first, and simplest, is to use a greedy algorithm, which selects the very first match available (and which may therefore not be the most optimal one in terms of location). A more sophisticated approach is to optimize for the best possible match (which is a non-trivial challenge in advanced computing). As I’m a big fan of Means of Exchange, which I have blogged about here, MatchApp would also enable the exchange of goods via bartering–a mobile eBay for mutual help during disasters.
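
Here is what the greedy baseline could look like in code. Again, this is purely illustrative; the data structures and field names are assumptions:

```python
# Illustrative greedy baseline: pair each need with the first available offer
# in the same category. Fast but not location-optimal, exactly as noted above;
# a production matcher would optimize globally over distance and time.

def greedy_match(needs, offers):
    matches, available = [], list(offers)
    for need in needs:
        for offer in available:
            if offer["category"] == need["category"]:
                matches.append((need, offer))
                available.remove(offer)
                break  # greedy: stop at the first compatible offer
    return matches

needs = [{"user": "u123", "category": "water"}]
offers = [{"user": "u456", "category": "shelter"},
          {"user": "u789", "category": "water"}]
for need, offer in greedy_match(needs, offers):
    print(f"match: {need['user']} <-> {offer['user']}")  # u123 <-> u789
```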

Once a match is made, the two individuals in question receive an automated alert notifying them of the match. By default, both users’ identities and exact locations are kept confidential while they initiate contact via the app’s instant messaging (IM) feature. Each user can decide to reveal their identity/location at any time. The IM feature thus enables users to confirm that the match is indeed correct and/or still current. It is then up to the user requesting help to share her or his location if they feel comfortable doing so. Once the match has been responded to, the user who received help is invited to rate the individual who offered help (and vice versa, just like the Uber app, depicted on the left below).

As a next generation disaster response app, MatchApp would include a number of additional data-entry features. For example, users could upload geo-tagged pictures and video footage (often useful for damage assessments). In terms of data consumption and user-interface design, MatchApp would be modeled along the lines of the Waze crowdsourcing app (depicted on the right above) and thus designed to work mostly “hands-free” thanks to a voice-based interface. (It would also automatically sync with Google Glass.)

In terms of verifying check-ins and content submitted via MatchApp, I’m a big fan of InformaCam and would thus integrate the latter’s metadata verification features into MatchApp: “the user’s current GPS coordinates, altitude, compass bearing, light meter readings, the signatures of neighboring devices, cell towers, and wifi networks; and serves to shed light on the exact circumstances and contexts under which the digital image was taken.” I’ve also long been interested in peer-to-peer meshed mobile communication solutions and would thus want to see an integration with the Splinternet app, perhaps. This would do away with the need for cell phone towers should these be damaged following a disaster. Finally, MatchApp would include an agile dispatch-and-coordination feature to allow “Super Users” to connect and coordinate multiple volunteers at a time in response to one or more needs.
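
Based on the quote above, the metadata bundle attached to each image might look roughly like this. The values are placeholders; on a real device they would come from the phone’s sensors, and the bundle would be cryptographically signed:

```python
# Rough shape of the verification metadata bundle described in the quote
# above. Values are placeholders for illustration only.
capture_metadata = {
    "gps": {"lat": 14.5906, "lon": 120.9810, "altitude_m": 12.0},
    "compass_bearing_deg": 245.0,
    "light_meter_lux": 830,
    "neighboring_devices": ["bt-device-a", "bt-device-b"],  # nearby signatures
    "cell_towers": [{"mcc": 515, "mnc": 2, "cell_id": 1234567}],
    "wifi_networks": ["cafe-manila", "brgy-hall-free-wifi"],
}
```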

In conclusion, privacy and security are a central issue for all smartphone apps that share the features described above. This explains why reviewing the security solutions implemented by multiple dating websites (especially those dating services with a strong mobile component, like the actual Match.com app) is paramount. In addition, reviewing the security measures taken by Couchsurfing, AirBnB and online classifieds such as Craigslist is a must. There is also an important role for policy to play here: users who submit false information to MatchApp could be held accountable and prosecuted. Finally, MatchApp would be free and open source, with a hyper-customizable, drag-and-drop front- and back-end.

Using CrowdFlower to Microtask Disaster Response

Cross-posted from CrowdFlower blog

A devastating earthquake struck Port-au-Prince on January 12, 2010. Two weeks later, on January 27th, CrowdFlower was used to translate text messages from Haitian Creole to English. Tens of thousands of messages were sent by affected Haitians over the course of several months. All of these were heroically translated by hundreds of dedicated Creole-speaking volunteers based in dozens of countries across the globe. While Ushahidi took the lead by developing the initial translation platform used just days after the earthquake, the translation efforts were eventually rerouted to CrowdFlower. Why? Three simple reasons:

  1. CrowdFlower is one of the leading and most robust microtasking platforms available;
  2. CrowdFlower’s leadership is highly committed to supporting digital humanitarian response efforts;
  3. Haitians in Haiti could now be paid for their translation work.

While the CrowdFlower project was launched 15 days after the earthquake, i.e., following the completion of search and rescue operations, every single digital humanitarian effort in Haiti was reactive. The key takeaway here was the proof of concept–namely that large-scale microtasking could play an important role in humanitarian information management. This was confirmed months later when devastating floods inundated much of Pakistan. CrowdFlower was once again used to translate incoming messages from the disaster-affected population. While still reactive, this second use of CrowdFlower demonstrated replicability.

The most recent and perhaps most powerful use of CrowdFlower for disaster response occurred right after Typhoon Pablo devastated the Philippines in early December 2012. The UN Office for the Coordination of Humanitarian Affairs (OCHA) activated the Digital Humanitarian Network (DHN) to rapidly deliver a detailed dataset of geo-tagged pictures and video footage (posted on Twitter) depicting the damage caused by the Typhoon. The UN needed this dataset within 12 hours, which required that 20,000 tweets be analyzed as quickly as possible. The Standby Volunteer Task Force (SBTF), a member of the Digital Humanitarian Network, immediately used CrowdFlower to identify all tweets with links to pictures & video footage. SBTF volunteers subsequently analyzed those pictures and videos for damage and geographic information using other means.
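
For readers curious about the mechanics, the sketch below shows one way the tweets could have been prepared as microtask units (one tweet per row, one question per unit) for upload to a platform like CrowdFlower. The pre-filter and column names are my assumptions, not the SBTF’s actual workflow:

```python
# One way the 20,000 tweets could be prepared as microtask units for upload
# to a microtasking platform. Illustrative sketch; column names and the
# URL pre-filter are assumptions, not the SBTF's actual workflow.
import csv
import re

URL_PATTERN = re.compile(r"https?://\S+")

def make_units(tweets, out_path="units.csv"):
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["tweet_id", "text", "question"])
        for tweet_id, text in tweets:
            if URL_PATTERN.search(text):  # only tweets with links need judging
                writer.writerow([tweet_id, text,
                                 "Does the link show typhoon damage?"])

make_units([("1", "roof gone, see http://t.co/abc"), ("2", "stay safe all")])
```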

This was the most rapid use of CrowdFlower following a disaster. In fact, this use of CrowdFlower was pioneering in many respects. It was the first time that a member of the Digital Humanitarian Network made use of CrowdFlower (and thus microtasking) for disaster response. It was also the first time that CrowdFlower’s existing workforce was used for disaster response. In addition, this was the first time that data processed by CrowdFlower contributed to an official crisis map produced by the UN for disaster response (see above).

These three use-cases, Haiti, Pakistan and the Philippines, clearly demonstrate the added value of microtasking (and hence CrowdFlower) for disaster response. If CrowdFlower had not been available in Haiti, the alternative would have been to pay a handful of professional translators. The total price could have come to some $10,000 for 50,000 text messages (at $0.20 per word). Thanks to CrowdFlower, Haitians in Haiti were given the chance to earn some of that money by translating the text messages themselves. Income-generation programs are absolutely critical to rapid recovery following major disasters. In Pakistan, the use of CrowdFlower enabled Pakistani students and the Diaspora to volunteer their time and thus accelerate the translation work for free. Following Typhoon Pablo, paid CrowdFlower workers from the Philippines, India and Australia categorized several thousand tweets in just a couple of hours while volunteers from the Standby Volunteer Task Force geo-tagged the results. Had CrowdFlower not been available then, it is highly unlikely that the mission would have succeeded given the very short turnaround required by the UN.

While impressive, the above use-cases were also reactive. We need to be a lot more pro-active, which is why I’m excited to be collaborating with CrowdFlower colleagues to customize a standby platform for use by the Digital Humanitarian Network. Having a platform ready to go within minutes is key. And while digital volunteers will be able to use this standby platform, I strongly believe that paid CrowdFlower workers also have a key role to play in the digital humanitarian ecosystem. Indeed, CrowdFlower’s large, multinational and multilingual global workforce is simply unparalleled and has the distinct advantage of being very well versed in the CrowdFlower platform.

In sum, it is high time that the digital humanitarian space move from crowdsourcing to microtasking. It has been three years since the tragic earthquake in Haiti, but we have yet to adopt microtasking more widely. CrowdFlower should thus play a key role in promoting and enabling this important shift. Their continued leadership in digital humanitarian response should also serve as a model for other private sector companies in the US and across the globe.

Launching: SMS Code of Conduct for Disaster Response

Shortly after the devastating Haiti Earthquake of January 12, 2010, I published this blog post on the urgent need for an SMS code of conduct for disaster response. Several months later, I co-authored this peer-reviewed study on the lessons learned from the unprecedented use of SMS following the Haiti Earthquake. This week, at the Mobile World Congress (MWC 2013) in Barcelona, GSMA’s Disaster Response Program organized two panels on mobile technology for disaster response and used the event to launch an official SMS Code of Conduct for Disaster Response (PDF). GSMA members comprise nearly 800 mobile operators based in more than 220 countries.

Thanks to Kyla Reid, Director for Disaster Response at GSMA, and to Souktel’s Jakob Korenblum, my calls for an SMS code of conduct were not ignored. The three of us spent a considerable amount of time in 2012 drafting and re-drafting a detailed set of principles to guide SMS use in disaster response. During this process, we benefited enormously from many experts on the mobile operator side and in the humanitarian community, many of whom are at MWC 2013 for the launch of the guidelines. It is important to note that there have been a number of parallel efforts from which our combined work has greatly benefited. The Code of Conduct we launched this week does not seek to duplicate these important efforts but rather serves to inform GSMA members about the growing importance of SMS use for disaster response. We hope this will help catalyze a closer relationship between the world’s leading mobile operators and the international humanitarian community.

Since the impetus for this week’s launch began in response to the Haiti Earthquake, I was invited to reflect on the crisis mapping efforts I spearheaded at the time. (My slides for the second panel organized by GSMA are available here. My more personal reflections on the 3rd anniversary of the earthquake are posted here.) For several weeks, digital volunteers updated the Ushahidi-Haiti Crisis Map (pictured above) with new information gathered from hundreds of different sources. One of these information channels was SMS. My colleague Josh Nesbit secured an SMS short code for Haiti thanks to a tweet he posted at 1:38pm on Jan 13th (top left in image below). Several days later, the short code (4636) was integrated with the Ushahidi-Haiti Map.

Screen Shot 2013-02-18 at 2.40.09 PM

We received about 10,000 text messages from the disaster-affected population during the Search and Rescue phase. But we only mapped about 10% of these because we prioritized the most urgent and actionable messages. While mapping these messages, however, we had to address a critical issue: data privacy and protection. There’s an important trade-off here: the more open the data, the more widely usable that information is likely to be for professional disaster responders, local communities and the Diaspora, but goodbye privacy.

Time was not a luxury we had; an entire week had already passed since the earthquake. We were at the tail end of the search and rescue phase, which meant that literally every hour counted for potential survivors still trapped under the rubble. So we immediately reached out to 2 trusted lawyers in Boston, one of them a highly reputable Law Professor at The Fletcher School of Law and Diplomacy who is also a specialist on Haiti. You can read the lawyers’ written email replies along with the day/time they were received on the right-hand side of the slide. Both lawyers opined that consent was implied vis-à-vis the publishing of personal identifying information. We shared this opinion with all team members and partners working with us. We then made a joint decision 24 hours later to move ahead and publish the full content of incoming messages. This decision was supported by an Advisory Board I put together, comprised of humanitarian colleagues from the Harvard Humanitarian Initiative, who agreed that the risks of making this info public were minimal vis-à-vis the principle of Do No Harm. Ushahidi, meanwhile, launched a micro-tasking platform to crowdsource the translation efforts and hosted this on 4636.Ushahidi.com [link no longer live], which volunteers from the Diaspora used to translate the text messages.

I was able to secure a small amount of funding in March 2010 to commission a fully independent evaluation of our combined efforts. The project was evaluated a year later by seasoned experts from Tulane University. The results were mixed. While the US Marine Corps publicly claimed to have saved hundreds of lives thanks to the map, it was very hard for the evaluators to corroborate this information during their short field visit to Port-au-Prince more than 12 months after the earthquake. Still, this evaluation remains the only professional, independent and rigorous assessment of Ushahidi and 4636 to date.

Screen Shot 2013-02-25 at 2.10.47 AM

The use of mobile technology for disaster response will continue to increase for years to come. Mobile operators and humanitarian organizations must therefore be pro-active in managing this increased demand by ensuring that the technology is used wisely. I, for one, never again want to spend 24+ precious hours debating whether or not urgent life-and-death text messages can be mapped because of uncertainties over data privacy and protection; during a Search and Rescue phase, 24 hours is almost certain to make the difference between life and death. More importantly, however, I am stunned that a bunch of volunteers with little experience in crisis response and no affiliation whatsoever to any established humanitarian organization were able to secure and use an official SMS short code within days of a major disaster. It is little surprise that we made mistakes. So a big thank you to Kyla and Jakob for their leadership and perseverance in drafting and launching GSMA’s official SMS Code of Conduct to make sure the same mistakes are not made again.

While the document we’ve compiled does not solve every conceivable challenge, we hope it is seen as a first step towards a more informed and responsible use of SMS for disaster response. Rest assured that these guidelines are by no means written in stone. If you have any feedback, please share it in the comments section below or privately via email. We are absolutely committed to making this a living document that can be updated.

To connect this effort with the work that my Crisis Computing Team and I are doing at QCRI: our contact at Digicel during the Haiti response had given us the option of sending out a mass SMS broadcast to their 2 million subscribers to get the word out about 4636. (We had thus far used local community radio stations.) But given that we were processing incoming text messages manually, there was no way we would have been able to handle the increased volume and velocity of incoming messages following the SMS blast. So my team and I are exploring the use of advanced computing solutions to automatically parse and triage large volumes of text messages posted during disasters. The project, which currently uses Twitter, is described here in more detail.
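To make the triage idea concrete, here is a minimal sketch of the general approach, not QCRI’s actual system: train a standard text classifier on a small set of hand-labeled messages and use its scores to rank incoming traffic. All message texts, labels and parameters below are invented for illustration.

    # Minimal sketch of automated SMS triage; NOT the actual QCRI system.
    # Texts, labels and parameters are invented for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical hand-labeled training messages: 1 = urgent/actionable, 0 = not.
    train_texts = [
        "people trapped under collapsed building near the market",
        "urgent: need water and medicine at the church shelter",
        "thank you for your help yesterday",
        "is the airport open for flights?",
    ]
    train_labels = [1, 1, 0, 0]

    # TF-IDF features plus logistic regression: a standard baseline for
    # text classification, here used to score message urgency.
    triage = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    triage.fit(train_texts, train_labels)

    # Score incoming messages so responders can review the most likely
    # urgent ones first instead of reading all 10,000 by hand.
    incoming = ["building collapsed, three people trapped", "how do I donate?"]
    for text, score in zip(incoming, triage.predict_proba(incoming)[:, 1]):
        print(f"{score:.2f}  {text}")

In practice one would of course need far more labeled data, and the ranking would feed a human review queue rather than replace it. But the division of labor is the point: machines do the first pass, humans make the final call.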


Verily: Crowdsourced Verification for Disaster Response

Social media is increasingly used for communicating during crises. This rise in Big (Crisis) Data means that finding the proverbial needle in the growing haystack of information is becoming a major challenge. Social media use during Hurricane Sandy produced a “haystack” of half-a-million Instagram photos and 20 million tweets. But which of these were actually relevant for disaster response and could they have been detected in near real-time? The purpose of QCRI’s experimental Twitter Dashboard for Disaster Response project is to answer this question. But what about the credibility of the needles in the info-stack?

10-Red-Balloons

To answer this question, our Crisis Computing Team at QCRI has partnered with the Social Computing & Artificial Intelligence Lab at the Masdar Institute of Science and Technology. This applied research project began with a series of conversations in mid-2012 about DARPA’s Red Balloon Challenge. This challenge, posed in 2009, offered $40K to the individual or team that could find the correct locations of 10 red weather balloons discreetly placed across the continental United States, an area covering well over 3 million square miles (8 million square kilometers). My friend Riley Crane at MIT spearheaded the team that won the challenge in 8 hours and 52 minutes by using social media.

Riley and I connected right after the Haiti Earthquake to start exploring how we might apply his team’s winning strategy to disaster response. But we were pulled in different directions due to PhD, post-doc and start-up obligations. Thankfully, however, Riley’s colleague Iyad Rahwan got in touch with me to continue these conversations when I joined QCRI. Iyad is now at the Masdar Institute. We’re collaborating with him and his students to apply collective intelligence insights from the balloon challenge to address the problem of false or misleading content shared on social media during disasters.
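As widely reported at the time, the MIT team’s winning strategy was a recursive incentive scheme: the finder of a balloon received $2,000, whoever recruited the finder received $1,000, that person’s recruiter $500, and so on up the referral chain. A toy sketch of the payout logic (the halving rule is from press accounts of the challenge; the code itself is purely illustrative):

    # Toy payout calculator for a recursive incentive scheme like the one
    # MIT's team used to win the Red Balloon Challenge (per press accounts,
    # the payout halves at each step up the referral chain).
    def recursive_payouts(referral_chain, finder_reward=2000.0):
        """referral_chain lists people from the balloon's finder upward to
        the first recruiter; each earns half of the previous payout."""
        payouts = {}
        reward = finder_reward
        for person in referral_chain:
            payouts[person] = reward
            reward /= 2
        return payouts

    # Alice found a balloon; Bob recruited Alice; Carol recruited Bob.
    print(recursive_payouts(["Alice", "Bob", "Carol"]))
    # {'Alice': 2000.0, 'Bob': 1000.0, 'Carol': 500.0}

The geometric halving means that even an arbitrarily long chain pays out less than $4,000 per balloon, which is how the scheme could stay within the $40K purse while still rewarding recruitment at every level.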

Screen Shot 2013-02-16 at 2.26.41 AM

If 10 balloons planted across 3 million square miles can be found in under 9 hours, then surely the answer to the question “Did Hurricane Sandy really flood this McDonald’s in Virginia?” can be found in under 9 minutes, given that Virginia is 98% smaller than the “haystack” of the continental US. Moreover, the location of the restaurant would already be known or easily findable. The picture below, which made the rounds on social media during the hurricane, is in reality part of an art exhibition produced in 2009. One remarkable aspect of the social media response to Hurricane Sandy was how quickly false information got debunked and exposed as false, not only by one good (digital) Samaritan but by several.

SandyFake

Having access to accurate information during a crisis leads to more targeted self-organized efforts at the grassroots level. Accurate information is also important for emergency response professionals. The verification efforts during Sandy were invaluable but disjointed, confined to a handful of individuals. What if thousands could be connected and mobilized to cross-reference and verify suspicious content shared on social media during a disaster?

Say an earthquake strikes Santiago, Chile and contradictory reports begin to circulate on social media suggesting that the bridge below may have been destroyed. Determining whether transportation infrastructure is still usable has important consequences for managing the logistics of a disaster response operation. So what if, instead of crowdsourcing the correct locations of balloons across an entire country, one could crowdsource the collection of evidence in just one city struck by a disaster, and determine within minutes whether said bridge had actually been destroyed?

santiagobridge
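To make the aggregation problem concrete, here is a bare-bones sketch of tallying crowdsourced yes/no evidence about a single claim. The weighting rule and threshold are invented for illustration and are not taken from Verily’s actual design:

    # Bare-bones aggregation of crowdsourced evidence about one claim
    # ("the bridge is destroyed"). Weights and threshold are invented.
    from dataclasses import dataclass

    @dataclass
    class Evidence:
        reporter: str
        supports_claim: bool  # True = evidence the bridge IS destroyed
        has_photo: bool       # photographic evidence counts for more

    def verdict(reports, threshold=3.0):
        """Weighted tally: photo or video evidence counts double."""
        score = 0.0
        for r in reports:
            weight = 2.0 if r.has_photo else 1.0
            score += weight if r.supports_claim else -weight
        if score >= threshold:
            return "likely destroyed"
        if score <= -threshold:
            return "likely intact"
        return "unresolved: gather more evidence"

    reports = [
        Evidence("ana", supports_claim=False, has_photo=True),
        Evidence("ben", supports_claim=False, has_photo=True),
        Evidence("cal", supports_claim=True, has_photo=False),
    ]
    print(verdict(reports))  # "likely intact": two photos outweigh one rumor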

To answer these questions, QCRI and Masdar have launched an experimental platform called Verily. We are applying best practices in time-critical crowdsourcing, coupled with gamification and reputation mechanisms, to leverage the good will of (hopefully) thousands of digital Samaritans during disasters. This is experimental research, which means it may very well not succeed as envisioned. But that is a luxury we have at QCRI: the freedom to innovate next-generation humanitarian technologies via targeted iteration and experimentation. For more on this project, our concept paper is available as a Google Doc here. We invite feedback and welcome collaborators.
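The concept paper spells out the actual mechanisms; as a toy illustration of the reputation side, assume reporters gain points when their evidence matches the eventual verdict and lose points when it does not:

    # Toy reputation update for a Verily-style system; the scoring values
    # are invented. Each report is a (reporter, said_claim_was_true) pair.
    def update_reputation(reputation, reports, claim_was_true, gain=10, penalty=5):
        """Reward reporters whose evidence matched the final verdict."""
        for reporter, said_true in reports:
            delta = gain if said_true == claim_was_true else -penalty
            reputation[reporter] = max(0, reputation.get(reporter, 0) + delta)
        return reputation

    # The bridge turned out to be intact, i.e. the claim "destroyed" was false.
    rep = update_reputation({}, [("ana", False), ("ben", False), ("cal", True)],
                            claim_was_true=False)
    print(rep)  # {'ana': 10, 'ben': 10, 'cal': 0}

Over many claims, scores of this kind would give a crude but useful signal of which digital Samaritans to trust first when minutes matter.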

In the meantime, we are exploring the possibility of integrating the InformaCam mobile application as part of Verily. InformaCam adds important metadata to images and videos taken by eyewitnesses. “The metadata includes information like the user’s current GPS coordinates, altitude, compass bearing, light meter readings, the signatures of neighboring devices, cell towers, and wifi networks; and serves to shed light on the exact circumstances and contexts under which the digital image was taken.” We are also talking to our partners at MIT’s Computer Science & Artificial Intelligence Lab in Boston about other mobile solutions that may facilitate the use of Verily.
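Based purely on the fields quoted above, the metadata attached to a single image might be represented along the following lines. This is a hypothetical rendering for illustration, not InformaCam’s actual schema:

    # Hypothetical rendering of the kind of metadata InformaCam attaches
    # to an image; field names are illustrative, not the actual schema.
    import json

    capture_metadata = {
        "gps": {"lat": -33.4489, "lon": -70.6693},  # user's current coordinates
        "altitude_m": 520,
        "compass_bearing_deg": 275,
        "light_meter_lux": 820,
        "neighboring_devices": ["bt:3C:2E:F9", "bt:A1:0B:44"],  # nearby device signatures
        "cell_towers": [{"mcc": 730, "mnc": 1, "cell_id": 102233}],
        "wifi_networks": ["cafe-centro", "metro-free-wifi"],
        "captured_at": "2013-02-16T02:26:41Z",
    }
    print(json.dumps(capture_metadata, indent=2))

Metadata of this kind would let a platform like Verily corroborate not just what a photo shows but where and when it was plausibly taken.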

Again, this is purely experimental and applied research at this point. We hope to have an update on our progress in the coming months.


See also:

  •  Crowdsourcing Critical Thinking to Verify Social Media During Crises [Link]
  •  Using Crowdsourcing to Counter Rumors on Social Media [Link]

Video: Minority Report Meets Crisis Mapping

This short video was inspired by the pioneering work of the Standby Volunteer Task Force (SBTF). A global network of 1,000+ digital humanitarians in 80+ countries, the SBTF is responsible for some of the most important live crisis mapping operations that have supported both humanitarian and human rights organizations over the past 2+ years. Today, the SBTF is a founding and active member of the Digital Humanitarian Network (DHN) and remains committed to rapid learning and innovation thanks to an outstanding team of volunteers (“Mapsters”) and their novel use of next-generation humanitarian technologies.

The video first aired on the National Geographic Television Channel in February 2013. A big thanks to the awesome folks from National Geographic and the outstanding Evolve Digital Cinema Team for envisioning the future of digital humanitarian technologies, a future that my Crisis Computing Team and I at QCRI are working to create.

An aside: I tried on several occasions to hack the script and say “We” rather than “I”, since crisis mapping is very rarely a solo effort, but the main sponsor insisted that the focus be on one individual. On the upside, one of the scenes in the commercial is of a Situation Room full of Mapsters, coupled with the narration: “Our team can map the pulse of the planet, from anywhere, getting aid to the right places.” Our team = SBTF! Which is why the $$ received for being in this commercial will go towards supporting Mapsters.
