Tag Archives: Response

Results: Analyzing 2 Million Disaster Tweets from Oklahoma Tornado

Thanks to the excellent work carried out by my colleagues Hemant Purohit and Professor Amit Sheth, we were able to collect 2.7 million tweets posted in the aftermath of the EF5 tornado that devastated Moore, Oklahoma. Hemant, who recently spent half a year with us at QCRI, kindly took the lead on some preliminary analysis of the disaster data. He sampled 2.1 million tweets posted during the first 48 hours for the analysis below.


About 7% of these tweets (~146,000 tweets) were related to donations of resources and services such as money, shelter, food, clothing, medical supplies and volunteer assistance. Many of the donations-related tweets were informative in nature, e.g.: “As President Obama said this morning, if you want to help the people of Moore, visit [link]”. Approximately 1.3% of the tweets (about 30,000 tweets) referred to the provision of financial assistance to the disaster-affected population. Just over 400 unique tweets sought non-monetary donations, such as “please help get the word out, we are accepting kid clothes to send to the lil angels in Oklahoma. Drop off […]”.

Exactly 152 unique tweets related to offers of help were posted within the first 48 hours of the Tornado. The vast majority of these asked how to get involved in helping others affected by the disaster. For example: “Anyone know how to get involved to help the tornado victims in Oklahoma??#tornado #oklahomacity” and “I want to donate to the Oklahoma cause shoes clothes even food if I can.” These two offers of help are actually automatically “matchable”, making the notion of a “Match.com” for disaster response a distinct possibility. Indeed, Hemant has been working with my team and me at QCRI to develop algorithms (classifiers) that not only identify relevant needs/offers from Twitter automatically but also suggest matches as a result.
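To make the matching idea concrete, here is a deliberately simple sketch: it pairs offer tweets with need tweets whenever they mention the same resource keyword. QCRI's actual classifiers are machine-learned; the keyword list and function names below are purely illustrative.

```python
# Toy needs/offers matcher: pair tweets that mention an overlapping
# resource keyword. The real system uses trained classifiers; this
# keyword-overlap version only illustrates the idea of auto-matching.

RESOURCES = {"money", "shelter", "food", "clothes", "clothing",
             "medical", "volunteer", "shoes"}

def extract_resources(tweet):
    """Return the set of known resource keywords mentioned in a tweet."""
    words = {w.strip(".,!?#").lower() for w in tweet.split()}
    return words & RESOURCES

def match(needs, offers):
    """Pair each need with every offer that mentions a shared resource."""
    pairs = []
    for need in needs:
        wanted = extract_resources(need)
        for offer in offers:
            if wanted & extract_resources(offer):
                pairs.append((need, offer))
    return pairs

needs = ["Anyone know how to get involved? Tornado victims need food and shelter"]
offers = ["I want to donate shoes clothes even food if I can."]
print(match(needs, offers))  # the two example tweets match on "food"
```

A production matcher would of course also need location and time filters so that an offer in one city is not paired with a need in another.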

Some readers may be surprised to learn that “only” several hundred unique tweets (out of 2+ million) were related to needs/offers. The first point to keep in mind is that social media complements rather than replaces traditional information sources. All of us working in this space fully recognize that we are looking for the equivalent of needles in a haystack. But these “needles” may contain real-time, life-saving information. Second, a significant number of disaster tweets are retweets. This is not a negative; Twitter is particularly useful for rapid information dissemination during crises. Third, while there were “only” 152 unique tweets offering help, this still represents over 130 Twitter users who were actively seeking ways to help pro bono within 48 hours of the disaster. Plus, they are automatically identifiable and directly contactable. So these volunteers could also be recruited as digital humanitarian volunteers for MicroMappers, for example. Fourth, the number of Twitter users continues to skyrocket. In 2011, Twitter had 100 million monthly active users. This figure doubled in 2012. Fifth, as I’ve explained here, if disaster responders want to increase the number of relevant disaster tweets, they need to create demand for them. Enlightened leadership and policy is necessary. This brings me to point six: we were “only” able to collect ~2 million tweets but suspect that as many as 10 million were posted during the first 48 hours. So humanitarian organizations along with their partners need access to the Twitter Firehose. Hence my lobbying for Big Data Philanthropy.

Finally, needs/offers are hardly the only type of useful information available on Twitter during crises, which is why we developed several automatic classifiers to extract data on: caution and advice, infrastructure damage, casualties and injuries, missing people and eyewitness accounts. In the near future, when our AIDR platform is ready, colleagues from the American Red Cross, FEMA, UN, etc., will be able to create their own classifiers on the fly to automatically collect information that is directly relevant to them and their relief operations. AIDR is spearheaded by QCRI colleague ChaTo and me.

For now though, we simply emailed relevant geo-tagged and time-stamped data on needs/offers to colleagues at the American Red Cross who had requested this information. We also shared data related to gas leaks with colleagues at FEMA and ESRI, as per their request. The entire process was particularly insightful for Hemant and me, so we plan to follow up with these responders to learn how we can best support them again until AIDR becomes operational. In the meantime, check out the Twitris+ platform developed by Amit, Hemant and team at Kno.e.sis.


See also: Analysis of Multimedia Shared on Twitter After Tornado [Link]

How Online Gamers Can Support Disaster Response


FACT: Over half-a-million pictures were shared on Instagram and more than 20 million tweets posted during Hurricane Sandy. The year before, over 100,000 tweets per minute were posted following the Japan Earthquake and Tsunami. Disaster-affected communities are now more likely than ever to be on social media, which dramatically multiplies the amount of user-generated crisis information posted during disasters. Welcome to Big Data—Big Crisis Data.

Humanitarian organizations and emergency management responders are completely unprepared to deal with this volume and velocity of crisis information. Why is this a problem? Because social media can save lives. Recent empirical studies have shown that an important percentage of social media reports include valuable, informative & actionable content for disaster response. Looking for those reports, however, is like searching for needles in a haystack. Finding the most urgent tweets in an information stack of over 20 million tweets (in real time) is indeed a major challenge.

FACT: More than half a billion people worldwide play computer and video games for at least an hour a day. This amounts to over 3.5 billion hours per week. In the US alone, gamers spend over 4 million hours per week online. The average young person will have spent 10,000 hours gaming by the age of 21. These numbers are rising daily. In early 2013, “World of Warcraft” reached 9.6 million subscribers worldwide, a population larger than Sweden. The online game “League of Legends” has over 12 million unique users every day, while more than 20 million users log on to Xbox Live every day.

What if these gamers had been invited to search through the information haystack of 20 million tweets posted during Hurricane Sandy? Let's assume gamers were asked to tag which tweets were urgent without ever leaving their games. This simple 20-second task would directly support disaster responders like the American Red Cross. The Digital Humanitarian Network (DHN), by contrast, would have needed more than 100 hours, close to 5 days, even assuming all their volunteers were working 24/7 with no breaks. The 4 million gamers playing WoW (excluding China) would only need about 90 seconds to do this. The 12 million gamers on League of Legends would have taken just 30 seconds.
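These back-of-envelope figures can be reproduced with simple arithmetic, assuming each tweet takes ~20 seconds to tag and the work is split evenly across everyone in the crowd (an even split gives roughly 100 seconds for 4 million WoW gamers, in the same ballpark as the figure above):

```python
# Back-of-envelope check of the tagging times quoted above, assuming
# 20 million tweets, ~20 seconds per tweet, one pass over the data,
# and an even split of the work across each crowd.

TWEETS = 20_000_000
SECONDS_PER_TWEET = 20

def tagging_time_seconds(crowd_size):
    """Seconds for a crowd to tag all tweets, split evenly."""
    return TWEETS * SECONDS_PER_TWEET / crowd_size

print(tagging_time_seconds(4_000_000))    # WoW gamers: ~100 seconds
print(tagging_time_seconds(12_000_000))   # League of Legends: ~33 seconds
print(tagging_time_seconds(1_000) / 3600) # 1,000 DHN volunteers: ~111 hours
```

The 1,000-volunteer crowd size for the DHN is my illustrative assumption; the point is simply that throughput scales linearly with crowd size.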

While some of the numbers proposed above may seem unrealistic, there is absolutely no denying that drawing on this vast untapped resource would significantly accelerate the processing of crisis information during major disasters. In other words, gamers worldwide can play a huge role in supporting disaster response operations. And they want to: gamers playing “World of Warcraft” raised close to $2 million in donations to support relief operations following the Japan Earthquake. They also raised another $2.3 million for victims of Superstorm Sandy. Gamers can easily donate their time as well. This is why my colleague Peter Mosur and I are launching the Internet Response League (IRL). Check out our dedicated website to learn more and join the cause.


Project Loon: Google Blimps for Disaster Response (Updated)

A blimp is a floating airship that does not have any internal supporting framework or keel. The airship is typically filled with helium and is controlled remotely using steerable fans. Project Loon is a Google initiative to launch a fleet of blimps to extend Internet/wifi access across Africa and Asia. Some believe that “these high-flying networks would spend their days floating over areas outside of major cities where Internet access is either scarce or simply nonexistent.” Small-scale prototypes are reportedly being piloted in South Africa “where a base station is broadcasting signals to wireless access boxes in high schools over several kilometres.” The US military has been using similar technology for years.


Google notes that the technology is “well-suited to provide low cost connectivity to rural communities with poor telecommunications infrastructure, and for expanding coverage of wireless broadband in densely populated urban areas.” Might Google Blimps also be used by Google’s Crisis Response Team in the future? Indeed, Google Blimps could be used to provide Internet access to disaster-affected communities. The blimps could also be used to capture very high-resolution aerial imagery for damage assessment purposes. Simply adding a digital camera to said blimps would do the trick. In fact, they could simply take the fourth-generation cameras used for Google Street View and mount them on the blimps to create Google Sky View. As always, however, these innovations are fraught with privacy and data protection issues. Also, the use of UAVs and balloons for disaster response has been discussed for years already.


How Crowdsourced Disaster Response in China Threatens the Government

In 2010, Russian volunteers used social media and a live crisis map to crowdsource their own disaster relief efforts as massive forest fires ravaged the country. These efforts were seen by many as both more effective and visible than the government’s response. In 2011, Egyptian volunteers used social media to crowdsource their own humanitarian convoy to provide relief to Libyans affected by the fighting. In 2012, Iranians used social media to crowdsource and coordinate grassroots disaster relief operations following a series of earthquakes in the north of the country. Just weeks earlier, volunteers in Beijing crowdsourced a crisis map of the massive flooding in the city. That map was immediately available and far more useful than the government’s crisis map. In early 2013, a magnitude 7 earthquake struck Southwest China, killing close to 200 and injuring more than 13,000. The response, which was also crowdsourced by volunteers using social media and mobile phones, actually posed a threat to the Chinese Government.


“Wang Xiaochang sprang into action minutes after a deadly earthquake jolted this lush region of Sichuan Province […]. Logging on to China’s most popular social media sites, he posted requests for people to join him in aiding the survivors. By that evening, he had fielded 480 calls” (1). While the government had declared the narrow mountain roads to the disaster-affected area blocked to unauthorized rescue vehicles, Wang hitchhiked his way through with more than a dozen other volunteers. “Their ability to coordinate — and, in some instances, outsmart a government intent on keeping them away — were enhanced by Sina Weibo, the Twitter-like microblog that did not exist in 2008 but now has more than 500 million users” (2). And so, “While the military cleared roads and repaired electrical lines, the volunteers carried food, water and tents to ruined villages and comforted survivors of the temblor […]” (3). Said Wang: “The government is in charge of the big picture stuff, but we’re doing the work they can’t do” (4).

In response to this same earthquake, another volunteer, Li Chengpeng, “turned to his seven million Weibo followers and quickly organized a team of volunteers. They traveled to the disaster zone on motorcycles, by pedicab and on foot so as not to clog roads, soliciting donations via microblog along the way. What he found was a government-directed relief effort sometimes hampered by bureaucracy and geographic isolation. Two days after the quake, Mr. Li’s team delivered 498 tents, 1,250 blankets and 100 tarps — all donated — to Wuxing, where government supplies had yet to arrive. The next day, they hiked to four other villages, handing out water, cooking oil and tents. Although he acknowledges the government’s importance during such disasters, Mr. Li contends that grass-roots activism is just as vital. ‘You can’t ask an NGO to blow up half a mountain to clear roads and you can’t ask an army platoon to ask a middle-aged woman whether she needs sanitary napkins,’ he wrote in a recent post” (5).


As I’ve blogged in the past (here and here, for example), using social media to crowdsource grassroots disaster response efforts serves to create social capital and strengthen collective action. This explains why the Chinese government (and others) faced a “groundswell of social activism” that it feared could “turn into government opposition” following the earthquake (6). So the Communist Party tried to turn the disaster into a “rallying cry for political solidarity. ‘The more difficult the circumstance, the more we should unite under the banner of the party,’ the state-run newspaper People’s Daily declared […], praising the leadership’s response to the earthquake” (7).

This did not quell the rise in online activism, however, which has “forced the government to adapt. Recently, People’s Daily announced that three volunteers had been picked to supervise the Red Cross spending in the earthquake zone and to publish their findings on Weibo.” Yet on the ground, the government is hewing to the old playbook. According to local residents, red propaganda banners began appearing on highway overpasses and on town fences even before water and food arrived. ‘Disasters have no heart, but people do,’ some read. Others proclaimed: ‘Learn from the heroes who came here to help the ones struck by disaster’ (8). Meanwhile, the Central Propaganda Department issued a directive to Chinese newspapers and websites “forbidding them to carry negative news, analysis or commentary about the earthquake” (9). Nevertheless, “Analysts say the legions of volunteers and aid workers that descended on Sichuan threatened the government’s carefully constructed narrative about the earthquake. Indeed, some Chinese suspect such fears were at least partly behind official efforts to discourage altruistic citizens from coming to the region” (10).

Aided by social media and mobile phones, grassroots disaster response efforts present a new and more poignant “Dictator’s Dilemma” for repressive regimes. The original Dictator’s Dilemma refers to an authoritarian government’s competing interests in expanding access to information communication technology while seeking to control its democratizing influence. In contrast, the “Dictator’s Disaster Lemma” refers to a repressive regime confronted with effective networked humanitarian response at the grassroots level, which improves collective action and activism in political contexts as well. Yet said regime cannot prevent people from helping each other during natural disasters, as doing so could backfire against the regime.


See also:

 •  How Civil Disobedience Improves Crowdsourced Disaster Response [Link]

Crowdsourcing Critical Thinking to Verify Social Media During Crises

My colleagues and I at QCRI and the Masdar Institute will be launching Verily in the near future. The project has already received quite a bit of media coverage—particularly after the Boston marathon bombings. So here’s an update. While major errors were made in the crowdsourced response to the bombings, social media can help quickly find individuals and resources during a crisis. Moreover, time-critical crowdsourcing can also be used to verify unconfirmed reports circulating on social media.


The errors made following the bombings were the result of two main factors:

(1) the crowd is digitally illiterate
(2) the platforms used were not appropriate for the tasks at hand

The first factor has to do with education. Most of us are still in kindergarten when it comes to the appropriate use of social media. We lack the digital or media literacy required for the responsible use of social media during crises. The good news, however, is that the major backlash from the mistakes made in Boston is already serving as an important lesson to many in the crowd, who are very likely to think twice about retweeting certain content or making blind allegations on social media in the future. The second factor has to do with design. Tools like Reddit and 4Chan that are useful for posting photos of cute cats are not always the tools best designed for finding critical information during crises. The crowd is willing to help, this much has been proven. The crowd simply needs better tools to focus and channel the goodwill of its members.

Verily was inspired by the DARPA Red Balloon Challenge, which leveraged social media & social networks to find the location of 10 red weather balloons planted across the continental USA (3 million square miles) in under 9 hours. So Verily uses that same time-critical mobilization approach—a recursive incentive mechanism—to rapidly collect evidence around a particular claim during a disaster, such as “The bridge in downtown LA has been destroyed by the earthquake”. Users of Verily can share this verification challenge directly from the Verily website (e.g., share via Twitter, FB, and email), which posts a link back to the Verily claim page.
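For readers unfamiliar with the Red Balloon Challenge, the recursive incentive used by the winning MIT team worked roughly like this: the finder of a balloon received a reward, the person who recruited the finder received half of that, their recruiter half again, and so on up the referral chain. A minimal sketch, with illustrative figures and names:

```python
# Sketch of a recursive incentive scheme in the style of the MIT
# Red Balloon Challenge entry: rewards halve at each step up the
# referral chain. All names and dollar amounts are illustrative.

def payouts(chain, finder_reward=2000.0):
    """chain lists people from the finder up through their recruiters.
    Returns each person's payout, halving at every step."""
    rewards = {}
    reward = finder_reward
    for person in chain:
        rewards[person] = reward
        reward /= 2
    return rewards

# Alice found the balloon; Bob invited Alice; Carol invited Bob.
print(payouts(["alice", "bob", "carol"]))
# → {'alice': 2000.0, 'bob': 1000.0, 'carol': 500.0}
```

The appeal of the scheme is that it pays people not just to look, but to recruit other lookers, which is exactly the behavior a time-critical mobilization needs.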

This time-critical mobilization & crowdsourcing element is the first main component of Verily. Because disasters are far more geographically bounded than the continental US, we believe that relevant evidence can be crowdsourced in a matter of minutes rather than hours. Indeed, while the degree of separation in the analog world is 6, that number falls closer to 4 on social media, and we believe it falls even further in bounded geographical areas like urban centers. This means that the 20+ people living opposite that bridge in LA are only 2 or 3 hops away in your social network and could be tapped via Verily to take pictures of the bridge from their window, for example.


The second main component is to crowdsource critical thinking, which is key to countering the spread of false rumors during crises. The interface to post evidence on Verily is modeled along the lines of Pinterest, but with each piece of content (text, image, video), users are required to add a sentence or two to explain why they think or know that piece of evidence is authentic or not. Others can comment on said evidence accordingly. This workflow prompts users to think critically rather than blindly share/RT content on Twitter without much thought, context or explanation. Indeed, we hope that with Verily more people will share links back to Verily pages rather than to out-of-context and unsubstantiated images, videos and claims.

In other words, we want to redirect traffic to a repository of information that incentivizes critical thinking. This means Verily is also intended as an educational tool; we’ll have simple mini-guides on information forensics available to users (drawn from the BBC’s UGC team, NPR’s Andy Carvin, etc.). While we’ll include Digg-style up/down votes on the perceived authenticity of evidence posted to Verily, this is not the main focus of Verily. Up/down votes are similar to retweets and simply do not capture or explain whether the voter drew on expertise or any critical thinking.

If you’re interested in supporting this project and/or sharing feedback, then please feel free to contact me at any time. For more background information on Verily, kindly see this post.


Web App Tracks Breaking News Using Wikipedia Edits

A colleague of mine at Google recently shared a new and very interesting Web App that tracks breaking news events by monitoring Wikipedia edits in real time. The App, Wikipedia Live Monitor, alerts users to breaking news based on the frequency of edits to certain articles. Almost every significant news event has a Wikipedia page that gets updated in near real-time and thus acts as a single, powerful cluster for tracking an evolving crisis.


Social media, in contrast, is far more distributed, which makes it more difficult to track. In addition, social media is highly prone to false positives. These, however, are almost immediately corrected on Wikipedia thanks to dedicated editors. Wikipedia Live Monitor currently works across several dozen languages and also “cross-checks edits with social media updates on Twitter, Google Plus and Facebook to help users get a better sense of what is trending” (1).
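The core idea is easy to sketch: an article counts as "breaking" when it receives an unusual burst of edits within a short window. The real app listens to Wikipedia's live recent-changes feed; the toy version below just scans a list of (title, timestamp) pairs, and the window and threshold values are illustrative assumptions:

```python
# Minimal sketch of edit-burst detection, the idea behind Wikipedia
# Live Monitor: flag an article when it receives at least `threshold`
# edits within any `window`-second span. Thresholds are illustrative.

from collections import defaultdict

def breaking_articles(edits, window=300, threshold=5):
    """edits: iterable of (article_title, unix_timestamp) pairs.
    Returns the set of titles whose edit activity spikes."""
    by_title = defaultdict(list)
    for title, ts in edits:
        by_title[title].append(ts)
    breaking = set()
    for title, times in by_title.items():
        times.sort()
        for i in range(len(times)):
            # count edits falling inside the window starting at times[i]
            j = i
            while j < len(times) and times[j] - times[i] <= window:
                j += 1
            if j - i >= threshold:
                breaking.add(title)
                break
    return breaking

edits = [("Moore tornado", t) for t in (0, 30, 60, 90, 120)] + [("Quiet page", 0)]
print(breaking_articles(edits))  # → {'Moore tornado'}
```

A production version would stream edits continuously and normalize against each article's baseline edit rate rather than use a fixed threshold.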

I’m really excited to explore the use of this Live Monitor for crisis response and possible integration with some of the humanitarian technology platforms that my colleagues and I at QCRI are developing. For example, the Monitor could be used to supplement crisis information collected via social media using the Artificial Intelligence for Disaster Response (AIDR) platform. In addition, the Wikipedia Monitor could also be used to triangulate reports posted to our Verily platform, which leverages time-critical crowdsourcing techniques to verify user-generated content posted on social media during disasters.


GDACSmobile: Disaster Responders Turn to Bounded Crowdsourcing

GDACS, the Global Disaster Alert and Coordination System, sparked my interest in technology and disaster response when it was first launched back in 2004, which is why I’ve referred to GDACS in multiple blog posts since. This near real-time, multi-hazard monitoring platform is a joint initiative between the UN’s Office for the Coordination of Humanitarian Affairs (OCHA) and the European Commission (EC). GDACS serves to consolidate and improve the dissemination of crisis-related information including rapid mathematical analyses of expected disaster impact. The resulting risk information is distributed via Web and automated email, fax and SMS alerts.


I recently had the pleasure of connecting with two new colleagues, Daniel Link and Adam Widera, who are researchers at the University of Muenster’s European Research Center for Information Systems (ERCIS). Daniel and Adam have been working on GDACSmobile, a smartphone app that was initially developed to extend the reach of the GDACS portal. This project originates from a student project supervised by Daniel and Adam, along with the Center’s Chair, Bernd Hellingrath, in cooperation with both Tom de Groeve from the Joint Research Center (JRC) and Minu Kumar Limbu, who is now with UNICEF Kenya.

GDACSmobile is intended for use by disaster responders and the general public, allowing for a combined crowdsourcing and “bounded crowdsourcing” approach to data collection and curation. This bounded approach was a deliberate design feature for GDACSmobile from the outset. I coined the term “bounded crowdsourcing” four years ago (see this blog post from 2009). The “bounded crowdsourcing” approach uses “snowball sampling” to grow a crowd of trusted reporters for the collection of crisis information. For example, one invites 5 (or more) trusted local reporters to collect relevant information and subsequently asks each of these to invite 5 additional reporters whom they fully trust; and so on, and so forth. I’m thrilled to see this term applied in practical applications such as GDACSmobile. For more on this approach, please see these blog posts.
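The snowball arithmetic is worth spelling out: starting from 5 trusted reporters, three waves of 5 invitations each already yields a crowd of 780. A quick sketch (the parameters and function name are my own, for illustration):

```python
# Snowball growth behind "bounded crowdsourcing": a small seed of
# trusted reporters, each of whom invites a fixed number of people
# they trust, repeated for a few rounds.

def crowd_size(seed=5, invites_per_person=5, rounds=3):
    """Total trusted reporters after `rounds` waves of invitations."""
    total, wave = seed, seed
    for _ in range(rounds):
        wave *= invites_per_person
        total += wave
    return total

print(crowd_size())          # 5 + 25 + 125 + 625 = 780 reporters
print(crowd_size(rounds=1))  # 5 + 25 = 30
```

The geometric growth is the point: the crowd stays "bounded" because every member is vouched for, yet it can still reach useful size within a few invitation rounds.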


GDACSmobile, which operates on all major mobile smartphones, uses a deliberately minimalist approach to situation reporting and can be used to collect information (via text & image) while offline. The collected data is then automatically transmitted when a connection becomes available. Users can also view & filter data via map view and in list form. Daniel and Adam are considering the addition of an icon-based data-entry interface instead of text-based data-entry, since the latter is more cumbersome & time-consuming.


Meanwhile, the server side of GDACSmobile facilitates administrative tasks such as the curation of data submitted by app users and shared on Twitter. Other social media platforms may be added in the future, such as Flickr, to retrieve relevant pictures from disaster-affected areas (similar to GeoFeedia). The server-side moderation feature is used to ensure high data quality standards. But the ERCIS researchers are also open to computational solutions, which is one reason GDACSmobile is not a ‘data island’ and why other systems for computational analysis, microtasking etc., can be used to process the same dataset. The server also “offers a variety of JSON services to allow ‘foreign’ systems to access the data. […] SQL queries can also be used with admin access to the server, and it would be very possible to export tables to spreadsheets […].” 

I very much look forward to following GDACSmobile’s progress. Since Daniel and Adam have designed their app to be open and are also themselves open to considering computational solutions, I have already begun to discuss with them our AIDR (Artificial Intelligence for Disaster Response) project at the Qatar Computing Research Institute (QCRI). I believe that making AIDR and GDACSmobile interoperable would make a whole lot of sense. Until then, if you’re going to this year’s International Conference on Information Systems for Crisis Response and Management (ISCRAM 2013) in May, then be sure to participate in the workshop (PDF) that Daniel and Adam are running there. The side-event will present the state of the art and future trends of rapid assessment tools to stimulate a conversation on current solutions and developments in mobile technologies for post-disaster data analytics and situational awareness. My colleague Dr. Muhammad Imran from QCRI will also be there to present findings from our crisis computing research, so I highly recommend connecting with him.


Zooniverse: The Answer to Big (Crisis) Data?

Both humanitarian and development organizations are completely unprepared to deal with the rise of “Big Crisis Data” & “Big Development Data.” But many still hope that Big Data is but an illusion. Not so, as I’ve already blogged here, here and here. This explains why I’m on a quest to tame the Big Data Beast. Enter Zooniverse. I’ve been a huge fan of Zooniverse for as long as I can remember, and certainly long before I first mentioned them in this post from two years ago. Zooniverse is a citizen science platform that evolved from GalaxyZoo in 2007. Today, Zooniverse “hosts more than a dozen projects which allow volunteers to participate in scientific research” (1). So, why do I have a major “techie crush” on Zooniverse?

Oh let me count the ways. Zooniverse interfaces are absolutely gorgeous, making them a real pleasure to spend time with; they really understand user-centered design and motivations. The fact that Zooniverse is conversant in multiple disciplines is incredibly attractive. Indeed, the platform has been used to produce rich scientific data across multiple fields such as astronomy, ecology and climate science. Furthermore, this citizen science beauty has a user-base of some 800,000 registered volunteers—with an average of 500 to 1,000 new volunteers joining every day! To place this into context, the Standby Volunteer Task Force (SBTF), a digital humanitarian group, has about 1,000 volunteers in total. The open source Zooniverse platform also scales like there’s no tomorrow, enabling hundreds of thousands to participate in a single deployment at any given time. In short, the software supporting these pioneering citizen science projects is well tested and rapidly customizable.

At the heart of the Zooniverse magic is microtasking. If you’re new to microtasking, which I often refer to as “smart crowdsourcing,” this blog post provides a quick introduction. In brief, microtasking takes a large task and breaks it down into smaller microtasks. Say you were a major (like really major) astronomy buff and wanted to tag a million galaxies based on whether they are spiral or elliptical galaxies. The good news? The kind folks at the Sloan Digital Sky Survey have already sent you a hard disk packed full of telescope images. The not-so-good news? A quick back-of-the-envelope calculation reveals it would take 3-5 years, working 24 hours/day and 7 days/week, to tag a million galaxies. Ugh!
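That back-of-the-envelope estimate checks out if we assume roughly two minutes per galaxy (my illustrative assumption, not a measured figure):

```python
# Back-of-the-envelope: how long would one person, working nonstop
# 24/7, take to tag a million galaxies? Assumes ~2 minutes per galaxy,
# which is an illustrative figure.

GALAXIES = 1_000_000
SECONDS_PER_GALAXY = 120

def years_to_tag(n_galaxies, seconds_each):
    """Years of continuous, round-the-clock tagging."""
    return n_galaxies * seconds_each / (3600 * 24 * 365)

print(years_to_tag(GALAXIES, SECONDS_PER_GALAXY))  # ≈ 3.8 years
```

At one minute per galaxy the answer is still nearly two years, which is exactly why the task needs a crowd.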


But you’re a smart cookie and decide to give this microtasking thing a go. So you upload the pictures to a microtasking website. You then get on Facebook, Twitter, etc., and invite (nay beg) your friends (and as many strangers as you can find on the suddenly-deserted digital streets) to help you tag a million galaxies. Naturally, you provide your friends, and the surprisingly large number of good digital Samaritans who’ve just shown up, with a quick 2-minute video intro on what spiral and elliptical galaxies look like. You explain that each participant will be asked to tag one galaxy image at a time, simply by clicking the “Spiral” or “Elliptical” button as needed. Inevitably, someone raises their hand to ask the obvious: “Why?! Why in the world would anyone want to tag a zillion galaxies?!”

Well, only because analyzing the resulting data could yield significant insights that may force a major rethink of cosmology and our place in the Universe. “Good enough for us,” they say. You breathe a sigh of relief and see them off, cruising towards deep space to boldly go where no one has gone before. But before you know it, they’re back on planet Earth. To your utter astonishment, you learn that they’re done with all the tagging! So you run over and check the data to see if they’re pulling your leg; but no, not only are 1 million galaxies tagged, but the tags are highly accurate as well. If you liked this little story, you’ll be glad to know that it happened in real life. GalaxyZoo, as the project was called, was the flash of brilliance that ultimately launched the entire Zooniverse series.

Screen Shot 2013-03-25 at 3.23.53 PM

No, the second Zooniverse project was not an attempt to pull an Oceans 11 in Las Vegas. One of the most attractive features of many microtasking platforms such as Zooniverse is quality control. Think of slot machines. The only way to win big is by having three matching figures, such as the three yellow bells in the picture above (right-hand side). Hit the jackpot and the coins will flow. Get two out of three matching figures (left-hand side), and some slot machines may toss you a few coins for your efforts. Microtasking uses the same approach. Only if three participants tag the same picture of a galaxy as being a spiral galaxy does that data point count. (Of course, you could decide to change the requirement from 3 volunteers to 5 or even 20 volunteers.) This important feature allows microtasking initiatives to ensure a high standard of data quality, which may explain why many Zooniverse projects have resulted in major scientific breakthroughs over the years.
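In code, this slot-machine style of quality control is just a consensus threshold. A minimal sketch (the function name and defaults are mine):

```python
# Consensus-based quality control: a label is accepted only when at
# least `required` volunteers independently give the same answer.

from collections import Counter

def consensus(tags, required=3):
    """tags: labels submitted by different volunteers for one item.
    Returns the agreed label, or None if no label has enough votes."""
    label, count = Counter(tags).most_common(1)[0]
    return label if count >= required else None

print(consensus(["spiral", "spiral", "spiral", "elliptical"]))  # 'spiral'
print(consensus(["spiral", "elliptical"]))                      # None
```

Raising `required` trades throughput for accuracy, which is exactly the knob the text describes when it suggests moving from 3 volunteers to 5 or 20.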

The Zooniverse team is currently running 15 projects, with several more in the works. One of the most recent Zooniverse deployments, Planet Four, received some 15,000 visitors within the first 60 seconds of being announced on BBC TV. Guess how many weeks it took for volunteers to tag over 2,000,000 satellite images of Mars? A total of 0.286 weeks, i.e., forty-eight hours! Since then, close to 70,000 volunteers have tagged and traced well over 6 million Martian “dunes.” For their Andromeda Project, digital volunteers classified over 7,500 star clusters per hour, even though there was no media or press announcement—just one newsletter sent to volunteers. Zooniverse deployments also involve tagging earth-based pictures (in contrast to telescope imagery). Take the Snapshot Serengeti deployment, which invited volunteers to classify animals using photographs taken by 225 motion-sensor cameras in Tanzania’s Serengeti National Park. Volunteers swarmed this project to the point that there are no longer any pictures left to tag! So Zooniverse is eagerly waiting for new images to be taken in Serengeti and sent over.

Screen Shot 2013-03-23 at 7.49.56 PM

One of my favorite Zooniverse features is Talk, an online discussion tool used for all projects to provide a real-time interface for volunteers and coordinators, which also facilitates the rapid discovery of important features. This also allows for socializing, which I’ve found to be particularly important with digital humanitarian deployments (such as these). One other major advantage of citizen science platforms like Zooniverse is that they are very easy to use and therefore do not require extensive prior training (think slot machines). Plus, participants get to learn about new fields of science in the process. So all in all, Zooniverse makes for a great date, which is why I recently reached out to the team behind this citizen science wizardry. Would they be interested in going out (on a limb) to explore some humanitarian (and development) use cases? “Why yes!” they said.

Microtasking platforms have already been used in disaster response, such as MapMill during Hurricane Sandy, Tomnod during the Somali Crisis and CrowdCrafting during Typhoon Pablo. So teaming up with Zooniverse makes a whole lot of sense. Their microtasking software is the most scalable I’ve come across yet; it is open source, and their 800,000-strong volunteer user-base is simply unparalleled. If Zooniverse volunteers can classify 2 million satellite images of Mars in 48 hours, then surely they can do the same for satellite images of disaster-affected areas on Earth. Volunteers responding to Sandy created some 80,000 assessments of infrastructure damage during the first 48 hours alone. At the Mars rate, it would have taken Zooniverse volunteers under two hours. Of course, the fact that the hurricane affected New York City and the East Coast meant that many US-based volunteers rallied to the cause, which may explain why it only took 20 minutes to tag the first batch of 400 pictures. What if the hurricane had hit a Caribbean island instead? Would the surge of volunteers have been as high? Might Zooniverse’s 800,000+ standby volunteers also be an asset in this respect?

Screen Shot 2013-03-23 at 7.42.22 PM

Clearly, there is huge potential here, and not only vis-à-vis humanitarian use-cases but development ones as well. This is precisely why I’ve already organized and coordinated a number of calls with Zooniverse and various humanitarian and development organizations. As I’ve been telling my colleagues at the United Nations, World Bank and Humanitarian OpenStreetMap, Zooniverse is the Ferrari of microtasking, so it would be such a big shame if we didn’t take it out for a spin… you know, just a quick test-drive through the rugged terrains of humanitarian response, disaster preparedness and international development.


Postscript: As some iRevolution readers may know, I am also collaborating with the outstanding team at CrowdCrafting, who have also developed a free & open-source microtasking platform for citizen science projects (also for disaster response here). I see Zooniverse and CrowdCrafting as highly synergistic and complementary. Because CrowdCrafting is still in its early stages, it fills a very important gap found at the long tail. In contrast, Zooniverse has already been around for half-a-decade and caters to very high volume and high profile citizen science projects. This explains why we’ll all be getting on a call in the very near future.

GeoFeedia: Ready for Digital Disaster Response

GeoFeedia was not originally designed to support humanitarian operations. But last year’s blog post on the potential of GeoFeedia for crisis mapping caught the interest of CEO Phil Harris. So he kindly granted the Standby Volunteer Task Force (SBTF) free access to the platform. In return, we provided his team with feedback on what features (listed here) would make GeoFeedia more useful for digital disaster response. This was back in summer 2012. I recently learned that they’ve been quite busy since. Indeed, I had the distinct pleasure of sharing the stage with Phil and his team at this superb conference on social media for emergency management. After listening to their talk, I realized it was high time to publish an update on GeoFeedia, especially since we had used the tool just two months earlier in response to Typhoon Pablo, one of the worst disasters to hit the Philippines in the past 100 years.

The 1-minute video is well worth watching if you’re new to GeoFeedia. The platform enables hyper-local searches for information by location across multiple social media channels such as Twitter, YouTube, Flickr, Picasa & now Instagram. One of my favorite GeoFeedia features is the awesome geofeed (digital fence), which you can learn more about here. So what’s new besides Instagram? Well, the first suggestion I made last year was to provide users with the option of searching by both location and topic, rather than just location alone. And presto, this is now possible, which means that digital humanitarians today can zoom into a disaster-affected area and filter by social media type, date and hashtag. This makes the geofeed feature even more compelling for crisis response, especially since geofeeds can also be saved and shared.

The vast majority of social media monitoring tools out there first filter by keyword and hashtag. Only later do they add location. As Phil points out, this means they easily miss 70% of hyper-local social media reports. Most users and organizations, who pay hefty licensing fees to use these platforms, are typically unaware of this. The fact that GeoFeedia first filters by location is not an accident. This recent study (PDF) of the 2012 London Olympics showed that social media users posted close to 170,000 geo-tagged posts to Twitter, Instagram, Flickr, Picasa and YouTube during the games. But only 31% of these geo-tagged posts contained any Olympic-specific keywords and/or hashtags! So they decided to analyze another large event and again found the number of results drop by about 70% when not first filtering by location. Phil argues that people in a crisis situation obviously don’t wait for keywords or hashtags to form, so he expects this drop to happen for disasters as well. “Traditional keyword and hashtag search must thus be complemented with a geographical search in order to provide a full picture of social media content that is contextually relevant to an event.”
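To see why filter order matters, consider a hypothetical sketch in Python. (The posts, bounding box, and function names are all made up for illustration; this is not GeoFeedia's actual pipeline.) A local eyewitness report with no hashtag is caught by a location-first pass but invisible to a keyword-first search:

```python
def in_bounds(post, bbox):
    """True if the post's coordinates fall inside bbox = (min_lat, min_lon, max_lat, max_lon)."""
    min_lat, min_lon, max_lat, max_lon = bbox
    return min_lat <= post["lat"] <= max_lat and min_lon <= post["lon"] <= max_lon

def geo_first(posts, bbox, keywords=None):
    """Filter by location first; keywords (if any) are applied only afterwards."""
    hits = [p for p in posts if in_bounds(p, bbox)]
    if keywords:
        hits = [p for p in hits if any(k in p["text"].lower() for k in keywords)]
    return hits

posts = [
    {"text": "Power lines down on Main St", "lat": 35.34, "lon": -97.49},    # local, no hashtag
    {"text": "#tornado damage near the school", "lat": 35.33, "lon": -97.48},  # local, hashtagged
    {"text": "#tornado thoughts and prayers", "lat": 40.71, "lon": -74.00},    # far away
]
moore_ok = (35.2, -97.6, 35.4, -97.4)  # rough bounding box around the affected area

print(len(geo_first(posts, moore_ok)))                # 2: both local reports
print(len(geo_first(posts, moore_ok, ["#tornado"])))  # 1: hashtag filter drops the eyewitness post
```

In this toy dataset, requiring the hashtag halves the number of local reports, which is the kind of drop the Olympics study describes.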

Screen Shot 2013-03-23 at 4.42.25 PM

One of my other main recommendations to Phil & team last year had to do with analytics. There is a strong need for an “Analytics function that produces summary statistics and trends analysis for a geofeed of interest. This is where Geofeedia could better capture temporal dynamics by including charts, graphs and simple time-series analysis to depict how events have been unfolding over the past hour vs 12 hours, 24 hours, etc.” Well sure enough, one of GeoFeedia’s major new features is a GeoAnalytics Dashboard: an interface that enables users to discover temporal trends and patterns in social media—and to do so by geofeed. This means a user can now draw a geofeed around a specific area of interest in a given disaster zone and search for pictures that capture major infrastructure damage on a specified date and that contain tags or descriptions with the words “#earthquake”, “damage,” “buildings,” etc. As Phil rightly points out, this provides a “huge time advantage during a crisis to give yet another filtered layer of intelligence; in effect, social media that is highly relevant and actionable ‘bubbling up to the top’ of the pile.”

Analytics Screen Shot - CES Data

I truly am a huge fan of the GeoFeedia platform. Plus, Phil & team have been very responsive to our interest in using their tool for disaster response. So I’m excited to see which features they build out next. They’ve already got a “data portability” functionality that enables data export. Users can also publish content from GeoFeedia directly to their own social networks. Moreover, the filtered content produced by geofeeds can also be shared with individuals who do not have a GeoFeedia account. In any event, I hope the team will take into account two items from my earlier wish list—namely Sentiment Analysis and GeoAlerts.

A Sentiment Analysis feature would capture the general mood and sentiment expressed hyper-locally within a defined geofeed in real-time. The automated GeoAlerts feature would make the geofeed king. A GeoAlerts functionality would enable users to trigger specific actions based on different kinds of social media traffic within a given geofeed of interest. For example, I’d like to be notified if the number of pictures posted within my geofeed that are tagged with the words “#earthquake” and “damage” increases by more than 20% in any given hour. Similarly, one could set a geofeed’s GeoAlert for a 10% increase in the number of tweets with the words “cholera” and “diarrhea” (these need not be in English, by the way) in any given 10-minute period. Users would then receive GeoAlerts via automated emails, tweets and/or SMSs. This feature would in effect make GeoFeedia more of a mobile and “hands-free” platform, like Waze for example.
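The trigger logic behind such a GeoAlert could be as simple as comparing counts across two time windows. Here is a minimal sketch in Python; to be clear, this is my own guess at how the rule might work, not a GeoFeedia feature:

```python
def geoalert_triggered(prev_count, curr_count, threshold_pct):
    """True if activity in the geofeed grew by more than threshold_pct
    between two consecutive windows (an hour, 10 minutes, etc.)."""
    if prev_count == 0:
        # Any activity from a completely silent baseline is worth flagging.
        return curr_count > 0
    growth = (curr_count - prev_count) / prev_count * 100
    return growth > threshold_pct

# 50 -> 65 "#earthquake damage" photos hour-over-hour is a 30% jump: alert fires.
print(geoalert_triggered(50, 65, threshold_pct=20))  # True
# 50 -> 55 is only a 10% increase: no alert.
print(geoalert_triggered(50, 55, threshold_pct=20))  # False
```

The same function would serve both examples above: a 20% threshold on hourly photo counts, or a 10% threshold on 10-minute tweet counts.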

My first blog post on GeoFeedia was entitled “GeoFeedia: Next Generation Crisis Mapping Technology?” The answer today is a definite “Yes!” While the platform was not originally designed with disaster response in mind, the team has since been adding important features that make the tool increasingly useful for humanitarian applications. And GeoFeedia has plans for more exciting developments in 2013. Their commitment to innovation and strong continued interest in supporting digital disaster response is why I’m hoping to work more closely with them in the years to come. For example, our AIDR (Artificial Intelligence for Disaster Response) platform would add a strong Machine Learning component to GeoFeedia’s search function, in effect enabling the tool to go beyond simple keyword search.


MatchApp: Next Generation Disaster Response App?

Disaster response apps have multiplied in recent years. I’ve been reviewing the most promising ones and have found that many cater to professional responders and organizations. While empowering paid professionals is a must, there has been little focus on empowering the real first responders, i.e., the disaster-affected communities themselves. Meanwhile, there is always a dramatic mismatch between demand for responder services and supply, which is why crises are brutal audits for humanitarian organizations. Take this Red Cross survey, which found that 74% of people who post a need on social media during a disaster expect a response within an hour. But paid responders cannot be everywhere at the same time during a disaster. The response needs to be decentralized and crowdsourced.

Screen Shot 2013-02-27 at 4.08.03 PM

In contrast to paid responders, the crowd is always there. And most lives saved following a disaster are saved thanks to local volunteers and resources, not external aid or relief. This explains why FEMA Administrator Craig Fugate has called on the public to become a member of the team. Decentralization is probably the only way for emergency response organizations to improve their disaster audits. As many seasoned humanitarian colleagues of mine have noted over the years, the majority of needs that materialize during (and after) a disaster do not require the attention of paid disaster responders with an advanced degree in humanitarian relief and 10 years of experience in Haiti. We are not all affected in the same way when disaster strikes, and those less affected are often very motivated and capable of responding to the basic needs of those around them. After all, the real first responders are—and have always been—the local communities themselves, not the Search and Rescue Teams that parachute in 36 hours later.

In other words, local self-organized action is a natural response to disasters. Facilitated by social capital, self-organized action can accelerate both response & recovery. A resilient community is therefore one with ample capacity for self-organization. To be sure, if a neighborhood can rapidly identify local needs and quickly match these with available resources, they’ll rebound more quickly than those areas with less capacity for self-organized action. The process is a bit like building a large jigsaw puzzle, with some pieces standing for needs and others for resources. Unlike an actual jigsaw puzzle, however, there can be hundreds of thousands of pieces and very limited time to put them together correctly.

This explains why I’ve long been calling for a check-in & match.com smartphone app for local collective disaster response. The talk I gave (above) at Where 2.0 in 2011 highlights this further as do the blog posts below.

Check-In’s with a Purpose: Applications for Disaster Response
http://iRevolution.net/2011/02/16/checkins-for-disaster-response

Maps, Activism & Technology: Check-In’s with a Purpose
http://iRevolution.net/2011/02/05/check-ins-with-a-purpose

Why Geo-Fencing Will Revolutionize Crisis Mapping
http://iRevolution.net/2011/08/21/geo-fencing-crisis-mapping

How to Crowdsource Crisis Response
http://iRevolution.net/2011/09/14/crowdsource-crisis-response

The Crowd is Always There
http://iRevolution.net/2010/08/14/crowd-is-always-there

Why Crowdsourcing and Crowdfeeding may be the Answer
http://iRevolution.net/2010/12/29/crowdsourcing-crowdfeeding

Towards a Match.com for Economic Resilience
http://iRevolution.net/2012/07/04/match-com-for-economic-resilience

This “MatchApp” could rapidly match hyper-local needs with resources (material & informational) available locally or regionally. Check-ins (think Foursquare) can provide an invaluable function during disasters. We’re all familiar with the command “In case of emergency break glass,” but what if: “In case of emergency, check in”? Checking in is space- and time-dependent. By checking in, I announce that I am at a given location at a specific time with a certain need (red button). This means that information relevant to my location, time, user-profile (and even vital statistics) can be customized and automatically pushed to my MatchApp in real-time. After tapping on red, MatchApp prompts the user to select what specific need s/he has. (Yes, the icons I’m using are from the MDGs and are just placeholders.) Note that the app we’re building is for Android, not iPhone, so the below is for demonstration purposes only.

Screen Shot 2013-02-27 at 3.32.29 PM

But MatchApp will also enable users who are less (or not) affected by a disaster to check in and offer help (by tapping the green button). This is where the match-making algorithm comes into play. There are various (compatible) options in this respect. The first, and simplest, is to use a greedy algorithm, which selects the very first match available (which may not be the most optimal one in terms of location). A more sophisticated approach is to optimize for the best possible match (which is a non-trivial challenge in advanced computing). As I’m a big fan of Means of Exchange, which I have blogged about here, MatchApp would also enable the exchange of goods via bartering–a mobile eBay for mutual help during disasters.
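To make the greedy-versus-optimal contrast concrete, here is a toy sketch in Python. (The data shapes, function names and haversine helper are illustrative assumptions, not MatchApp's actual implementation.) Note how the greedy pass claims the first compatible offer even when a nearer one exists further down the list:

```python
import math

def distance_km(a, b):
    """Approximate great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def greedy_match(needs, offers):
    """Pair each need with the first unclaimed offer of the same type.
    Simple and fast, but the chosen offer may not be the closest one."""
    matches, taken = [], set()
    for need in needs:
        for i, offer in enumerate(offers):
            if i not in taken and offer["type"] == need["type"]:
                taken.add(i)
                matches.append((need, offer, distance_km(need["loc"], offer["loc"])))
                break
    return matches

needs = [{"type": "water", "loc": (35.34, -97.49)}]
offers = [{"type": "water", "loc": (35.47, -97.52)},   # farther away, but listed first
          {"type": "water", "loc": (35.35, -97.48)}]   # much closer

# Greedy takes the first offer, roughly 14 km away, ignoring the ~1 km one.
need, offer, km = greedy_match(needs, offers)[0]
print(round(km, 1))
```

An optimizing matcher would instead minimize total distance across all pairs, which quickly becomes an assignment problem as the number of needs and offers grows.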

Screen Shot 2013-02-27 at 3.34.17 PM

Once a match is made, the two individuals in question receive an automated alert notifying them about the match. By default, both users’ identities and exact locations are kept confidential while they initiate contact via the app’s instant messaging (IM) feature. Each user can decide to reveal their identity/location at any time. The IM feature thus enables users to confirm that the match is indeed correct and/or still current. It is then up to the user requesting help to share her or his location if they feel comfortable doing so. Once the match has been responded to, the user who received help is invited to rate the individual who offered help (and vice versa, just like the Uber app, depicted on the left below).

Screen Shot 2013-02-27 at 3.49.04 PM

As a next generation disaster response app, MatchApp would include a number of additional data entry features. For example, users could upload geo-tagged pictures and video footage (often useful for damage assessments). In terms of data consumption and user-interface design, MatchApp would be modeled along the lines of the Waze crowdsourcing app (depicted on the right above) and thus designed to work mostly “hands-free” thanks to a voice-based interface. (It would also automatically sync up with Google Glass.)

In terms of verifying check-ins and content submitted via MatchApp, I’m a big fan of InformaCam and would thus integrate the latter’s metadata verification features into MatchApp: “the user’s current GPS coordinates, altitude, compass bearing, light meter readings, the signatures of neighboring devices, cell towers, and wifi networks; and serves to shed light on the exact circumstances and contexts under which the digital image was taken.” I’ve also long been interested in peer-to-peer meshed mobile communication solutions and would thus want to see an integration with the Splinternet app, perhaps. This would do away with the need for cell phone towers should these be damaged following a disaster. Finally, MatchApp would include an agile dispatch-and-coordination feature to allow “Super Users” to connect and coordinate multiple volunteers at one time in response to one or more needs.

In conclusion, privacy and security are central issues for all smartphone apps that share the features described above. This explains why reviewing the security solutions implemented by multiple dating websites (especially those dating services with a strong mobile component like the actual Match.com app) is paramount. In addition, reviewing security measures taken by Couchsurfing, AirBnB and online classified ads such as Craigslist is a must. There is also an important role for policy to play here: users who submit false information to MatchApp could be held accountable and prosecuted. Finally, MatchApp would be free and open source, with a hyper-customizable, drag-and-drop front- and back-end.
