Category Archives: Humanitarian Technologies

Analyzing Tweets on Malaysia Flight #MH370

My QCRI colleague Dr. Imran is using our AIDR platform (Artificial Intelligence for Disaster Response) to collect & analyze tweets related to Malaysia Flight 370, which went missing several days ago. He has collected well over 850,000 English-language tweets since March 11th, using the following keywords/hashtags: Malaysia Airlines flight, #MH370, #PrayForMH370 and #MalaysiaAirlines.

MH370 Prayers

Imran then used AIDR to create a number of “machine learning classifiers” to automatically classify all incoming tweets into categories that he is interested in:

  • Informative: tweets that relay breaking news, useful info, etc.

  • Praying: tweets that are related to prayers and faith

  • Personal: tweets that express personal opinions

The process is super simple. All he does is tag several dozen incoming tweets into their respective categories. This teaches AIDR what an “Informative” tweet should “look like”. Since our novel approach combines human intelligence with artificial intelligence, AIDR is typically far more accurate at capturing relevant tweets than Twitter’s keyword search.

And the more tweets that Imran tags, the more accurate AIDR gets. At present, AIDR can auto-classify ~500 tweets per second, or 30,000 tweets per minute. This is well above the highest velocity of crisis tweets recorded thus far—16,000 tweets/minute during Hurricane Sandy.
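AIDR's actual classifiers are more sophisticated, but the core idea of learning categories from a few dozen human-tagged tweets can be sketched with a tiny Naive Bayes text classifier. Everything below (the class, the category names, the example tweets) is illustrative and is not AIDR's actual API:

```python
from collections import Counter, defaultdict
import math

def tokenize(text):
    return text.lower().split()

class NaiveBayesClassifier:
    """Tiny multinomial Naive Bayes: learns from a handful of tagged tweets."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # category -> word frequencies
        self.cat_counts = Counter()              # tweets tagged per category
        self.vocab = set()

    def tag(self, tweet, category):
        """A human tags one tweet; the classifier updates its counts."""
        self.cat_counts[category] += 1
        for w in tokenize(tweet):
            self.word_counts[category][w] += 1
            self.vocab.add(w)

    def classify(self, tweet):
        """Pick the category with the highest (log) posterior probability."""
        total = sum(self.cat_counts.values())
        best, best_score = None, float("-inf")
        for cat in self.cat_counts:
            score = math.log(self.cat_counts[cat] / total)
            n_words = sum(self.word_counts[cat].values())
            for w in tokenize(tweet):
                # Laplace smoothing so unseen words don't zero out the score
                score += math.log((self.word_counts[cat][w] + 1) /
                                  (n_words + len(self.vocab)))
            if score > best_score:
                best, best_score = cat, score
        return best

clf = NaiveBayesClassifier()
clf.tag("breaking news search area expanded for flight", "Informative")
clf.tag("officials confirm new radar data on flight", "Informative")
clf.tag("praying for everyone on board", "Praying")
clf.tag("my prayers go out to the families", "Praying")

print(clf.classify("new search data released by officials"))  # Informative
```

The more tweets a human tags, the better the word counts approximate each category, which is exactly why accuracy improves with more tagging.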

The graph below depicts the number of tweets generated since the day we started the AIDR collection, i.e., March 11th.

Volume of Tweets per Day

This series of pie charts simply reflects the relative share of tweets per category over the past four days.

Tweets Trends

Below are some of the tweets that AIDR has automatically classified as being Informative (click to enlarge). The “Confidence” score simply reflects how confident AIDR is that it has correctly auto-classified a tweet. Note that Imran could also have crowdsourced the manual tagging—that is, he could have crowdsourced the process of teaching AIDR. To learn more about how AIDR works, please see this short overview and this research paper (PDF).

AIDR output

If you’re interested in testing AIDR (still very much under development) and/or would like the Tweet IDs for the 850,000+ tweets we’ve collected using AIDR, then feel free to contact me. In the meantime, we’ll start a classifier that auto-collects tweets related to hijacking, criminal causes, and so on. If you’d like us to create a classifier for a different topic, let us know—but we can’t make any promises since we’re working on an important project deadline. When we’re further along with the development of AIDR, anyone will be able to easily collect & download tweets and create & share their own classifiers for events related to humanitarian issues.

Bio

Acknowledgements: Many thanks to Imran for collecting and classifying the tweets. Imran also shared the graphs and tabular output that appear above.

Calling all UAV Pilots: Want to Support Humanitarian Efforts?

I’m launching a volunteer network to connect responsible civilian UAV pilots who are interested in safely and legally supporting humanitarian efforts when the need arises. I’ve been thinking through the concept for months now and have benefited from great feedback. The result is this draft strategy document; the keyword being draft. The concept is still being developed and there’s still room for improvement. So I very much welcome more constructive feedback.

Click here to join the list-serve for this initiative, which I’m referring to as the Humanitarian UAViators Network. Thank you for sharing this project far and wide—it will only work if we get a critical mass of UAV pilots from all around the world. Of course, launching such a network raises more questions than answers, but I welcome the challenge and believe members of UAViators will be well placed to address and manage these challenges.


Crowdsourcing the Search for Malaysia Flight 370 (Updated)

Early Results available here!

Update from Tomnod: The response has literally been overwhelming: our servers struggled to keep up all day.  We’ve been hacking hard to make some fixes and I think that the site is working now but I apologize if you have problems connecting: we’re getting up to 100,000 page views every minute! DigitalGlobe satellites are continuing to collect imagery as new reports about the possible crash sites come in so we’ll keep updating the site with new data.

Beijing-bound Flight 370 suddenly disappeared on March 8th without a trace. My colleagues at Tomnod have just deployed their satellite imagery crowdsourcing platform to support the ongoing Search & Rescue efforts. Using high-resolution satellite imagery from DigitalGlobe, Tomnod is inviting digital volunteers from around the world to search for any sign of debris from the missing Boeing 777.

MH370

The DigitalGlobe satellite imagery is dated March 9th and covers over 1,000 square miles. What the Tomnod platform does is slice that imagery into many small squares like the one below (click to enlarge). Volunteers then tag one image at a time. This process is known as microtasking (or crowd computing). For quality control purposes, each image is shown to more than one volunteer. This consensus-based approach allows Tomnod to triangulate the tagging.
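Tomnod has not published the details of its consensus algorithm, so the following is only a minimal sketch of the general idea: accept a tag for an image tile once enough volunteers have seen it and a clear majority agree. The vote counts and thresholds here are invented for illustration:

```python
from collections import Counter

def consensus_tag(volunteer_tags, min_votes=3, min_agreement=0.6):
    """
    Combine tags from several volunteers for one image tile.
    A tag is accepted only if enough volunteers saw the tile and a
    clear majority agreed -- a simple stand-in for Tomnod's
    consensus-based "triangulation" of the tagging.
    """
    if len(volunteer_tags) < min_votes:
        return None  # not enough eyes on this tile yet
    counts = Counter(volunteer_tags)
    tag, votes = counts.most_common(1)[0]
    if votes / len(volunteer_tags) >= min_agreement:
        return tag
    return None  # volunteers disagree: no consensus

# Five volunteers look at the same satellite image tile:
print(consensus_tag(["debris", "debris", "nothing", "debris", "debris"]))  # debris
print(consensus_tag(["debris", "nothing", "oil slick"]))                   # None
```

Tiles that fail to reach consensus can simply be shown to more volunteers, which is why showing each image to multiple people doubles as quality control.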

TomNod

I’ve long advocated for the use of microtasking to support humanitarian efforts. In 2010, I wrote about how volunteers used microtasking to crowdsource the 2007 search for Steve Fossett, who had disappeared while flying a small single-engine airplane in Nevada. In 2011, I spearheaded a partnership with the UN Refugee Agency (UNHCR) in Somalia and used the Tomnod platform to crowdsource the search for internally displaced populations in the drought-stricken Afgooye Corridor. More here. I later launched a collaboration with Amnesty International in Syria to crowdsource the search for evidence of major human rights violations—again with my colleagues from Tomnod. Recently, my team and I at QCRI have been developing MicroMappers to support humanitarian efforts. At the UN’s request, MicroMappers was launched following Typhoon Yolanda to accelerate their rapid damage assessment. I’ve also written on the use of crowd computing for Search & Rescue operations.

TomnodSomalia

I’m still keeping a tiny glimmer of hope that somehow Malaysia Flight 370 was able to land somewhere and that there are survivors. I can only imagine what families, loved ones and friends must be going through. I’m sure they are desperate for information, one way or another. So please consider spending a few minutes of your time to support these Search and Rescue efforts. Thank you.


Note: If you don’t see any satellite imagery on the Tomnod platform for Flight 370, this means the team is busy uploading new imagery. So please check in again in a couple hours.

See also: Analyzing Tweets on Malaysia Flight #MH370 [link]

Crisis Mapping without GPS Coordinates (Updated)

Update: Video introduction to What3Words:

I recently spoke with a UK start-up that is doing away with GPS coordinates even though their company focuses on geographic information and maps. The start-up, What3Words, has divided the globe into 57 trillion squares and given each of these 3-by-3 meter areas a unique three-word code. Goodbye long postal addresses and cryptic GPS coordinates. Hello planet.inches.most. The start-up also offers a service called OneWord, which allows you to customize a one-word name for any square. In addition, the company has expanded to other languages such as Spanish, Swedish and Russian. They’re now working on including Arabic, Chinese, Japanese and others by mid-January 2014. Meanwhile, their API lets anyone build new applications that tap their global map of 57 trillion squares.

Credit: What3Words

When I spoke with CEO Chris Sheldrick, he noted that their very first users were emergency response organizations. One group in Australia, for example, is using What3Words as part of their SMS emergency service. “This will let people identify their homes with just three words, ensuring that emergency vehicles can find them as quickly as possible.” Such an approach provides greater accuracy, which is vital in rural areas. “Our ambulances have a terrible time with street addresses, particularly in The Bush.” Moreover, many places in the world have no addresses at all. So What3Words may also be useful for certain ICT4D projects in addition to crisis mapping. The real key to this service is simplicity, i.e., communicating three words over the phone, via SMS/Twitter or email is far easier (and less error prone) than dictating a postal address or a complicated set of GPS coordinates.

Credit: What3Words

How else do you think this service could be used vis-à-vis disaster response?


Quantifying Information Flow During Emergencies

I was particularly pleased to see this study appear in the top-tier journal, Nature. (Thanks to my colleague Sarah Vieweg for flagging). Earlier studies have shown that “human communications are both temporally & spatially localized following the onset of emergencies, indicating that social propagation is a primary means to propagate situational awareness.” In this new study, the authors analyze crisis events using country-wide mobile phone data. To this end, they also analyze the communication patterns of mobile phone users outside the affected area. So the question driving this study is this: how do the communication patterns of non-affected mobile phone users differ from those affected? Why ask this question? Understanding the communication patterns of mobile phone users outside the affected areas sheds light on how situational awareness spreads during disasters.

Nature graphs

The graphs above (click to enlarge) simply depict the change in call volume for three crisis events and one non-emergency event for the two types of mobile phone users. The set of users directly affected by a crisis is labeled G0 while users they contact during the emergency are labeled G1. Note that G1 users are not affected by the crisis. Since the study seeks to assess how G1 users change their communication patterns following a crisis, one logical question is this: does the call volume of G1 users increase like that of G0 users? The graphs above reveal that G1 and G0 users have instantaneous and corresponding spikes for crisis events. This is not the case for the non-emergency event.

“As the activity spikes for G0 users for emergency events are both temporally and spatially localized, the communication of G1 users becomes the most important means of spreading situational awareness.” To quantify the reach of situational awareness, the authors study the communication patterns of G1 users after they receive a call or SMS from the affected set of G0 users. They find 3 types of communication patterns for G1 users, as depicted below (click to enlarge).

Nature graphs 2

Pattern 1: G1 users call back G0 users (orange edges). Pattern 2: G1 users call forward to G2 users (purple edges). Pattern 3: G1 users call other G1 users (green edges). Which of these three patterns is most pronounced during a crisis? Pattern 1, call backs, constitutes 25% of all G1 communication responses. Pattern 2, call forwards, constitutes 70% of communications. Pattern 3, calls between G1 users, represents only 5% of all communications. This means that the spikes in call volume shown in the above graphs are overwhelmingly driven by Patterns 1 and 2: call backs and call forwards.
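The three patterns are easy to operationalize: given the set of directly affected users (G0) and the set of users they contacted (G1), each outgoing call from a G1 user falls into exactly one bucket. A minimal sketch, with invented call records:

```python
def classify_g1_calls(g0, g1, calls):
    """
    Classify each outgoing call from a G1 user into the study's three
    patterns. 'calls' is a list of (caller, callee) pairs; g0 is the set
    of directly affected users, g1 the set of users they contacted.
    """
    patterns = {"callback": 0, "forward": 0, "g1_to_g1": 0}
    for caller, callee in calls:
        if caller not in g1:
            continue                      # only G1 behavior is of interest here
        if callee in g0:
            patterns["callback"] += 1     # Pattern 1: back to affected users
        elif callee in g1:
            patterns["g1_to_g1"] += 1     # Pattern 3: within G1
        else:
            patterns["forward"] += 1      # Pattern 2: on to new users (G2)
    return patterns

g0 = {"alice"}                # directly affected
g1 = {"bob", "carol"}         # contacted by G0 during the emergency
calls = [("bob", "alice"),    # callback
         ("bob", "dave"),     # forward to G2
         ("carol", "dave"),   # forward to G2
         ("carol", "bob")]    # G1 to G1
print(classify_g1_calls(g0, g1, calls))
# {'callback': 1, 'forward': 2, 'g1_to_g1': 1}
```

Run over a country-wide call-detail-record dataset, these three counters would reproduce the 25% / 70% / 5% breakdown the authors report.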

The graphs below (click to enlarge) show call volumes by communication patterns 1 and 2. In these graphs, Pattern 1 is the orange line and Pattern 2 the dashed purple line. In all three crisis events, Pattern 1 (call backs) has clear volume spikes. “That is, G1 users prefer to interact back with G0 users rather than contacting with new users (G2), a phenomenon that limits the spreading of information.” In effect, Pattern 1 is a measure of reciprocal communications and indeed social capital, “representing correspondence and coordination calls between social neighbors.” In contrast, Pattern 2 measures the “dissemination of situational awareness, corresponding to information cascades that penetrate the underlying social network.”

Nature graphs 3

The histogram below shows average levels of reciprocal communication for the four events under study. These results clearly show a spike in reciprocal behavior for the three crisis events compared to the baseline. The opposite is true for the non-emergency event.

Nature graphs 4

In sum, a crisis early warning system based on communication patterns should seek to monitor changes in the following two indicators: (1) Volume of Call Backs; and (2) Deviation of Call Backs from baseline. Given that access to mobile phone data is near-impossible for the vast majority of academics and humanitarian professionals, one question worth exploring is whether similar communication dynamics can be observed on social networks like Twitter and Facebook.
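As a rough illustration of indicator (2), one could flag hours in which call-back volume deviates sharply from its baseline using a simple z-score. The threshold and the numbers below are illustrative choices, not values from the study:

```python
import statistics

def callback_anomaly(baseline_counts, current_count, threshold=3.0):
    """
    Flag a possible emergency when the volume of call-backs deviates
    sharply from its baseline. Uses a simple z-score; the threshold
    and window size are illustrative, not taken from the Nature study.
    """
    mean = statistics.mean(baseline_counts)
    stdev = statistics.stdev(baseline_counts)
    z = (current_count - mean) / stdev
    return z > threshold, round(z, 2)

# Hourly call-back volumes on ordinary days, then a sudden spike:
baseline = [98, 105, 101, 97, 103, 99, 102, 100]
print(callback_anomaly(baseline, 104))   # (False, 1.26)  -- normal fluctuation
print(callback_anomaly(baseline, 160))   # (True, 22.24)  -- possible emergency
```

The same logic could in principle be pointed at reply volumes on Twitter or Facebook, which is exactly the open question raised above.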


Using Crowd Computing to Analyze UAV Imagery for Search & Rescue Operations

My brother recently pointed me to this BBC News article on the use of drones for Search & Rescue missions in England’s Lake District, one of my favorite areas of the UK. The picture below is one I took during my most recent visit. In my earlier blog post on the use of UAVs for Search & Rescue operations, I noted that UAV imagery & video footage could be quickly analyzed using a microtasking platform (like MicroMappers, which we used following Typhoon Yolanda). As it turns out, an enterprising team at the University of Central Lancashire has been using microtasking as part of their UAV Search & Rescue exercises in the Lake District.

Lake District

Every year, the Patterdale Mountain Rescue Team assists hundreds of injured and missing persons in the North of the Lake District. “The average search takes several hours and can require a large team of volunteers to set out in often poor weather conditions.” So the University of Central Lancashire teamed up with the Mountain Rescue Team to demonstrate that UAV technology coupled with crowdsourcing can reduce the time it takes to locate and rescue individuals.

The project, called the AeroSee Experiment, worked as follows. The Mountain Rescue service receives a simulated distress call. As they plan their Search & Rescue operation, the University team dispatches their UAV to begin the search. Using live video-streaming, the UAV automatically transmits pictures back to the team’s website where members of the public can tag pictures that members of the Mountain Rescue service should investigate further. These tagged pictures are then forwarded to “the Mountain Rescue Control Center for a final opinion and dispatch of search teams.” Click to enlarge the diagram below.

AeroSee

Members of the crowd would simply log on to the AeroSee website and begin tagging. Although the experiment is over, you can still do a Practice Run here. Below is a screenshot of the microtasking interface (click to enlarge). One picture at a time is displayed. If the picture displays potentially important clues, then the digital volunteer points to said area of the picture and types in why they believe the clue they’re pointing at might be important.

AeroSee MT2

The results were impressive. A total of 335 digital volunteers looked through 11,834 pictures and the “injured” walker (UAV image below) was found within 69 seconds of the picture being uploaded to the microtasking website. The project team subsequently posted this public leaderboard to acknowledge all volunteers who participated, listing their scores and levels of accuracy for feedback purposes.

Aero MT3

Upon further review of the data and results, the project team concluded that the experiment was a success and that digital Search & Rescue volunteers were able to “home in on the location of our missing person before the drones had even landed!” The texts added to the tagged images were also very descriptive, which helped the team “locate the casualty very quickly from the more tentative tags on other images.”

If the area being surveyed during a Search & Rescue operation is fairly limited, then using the crowd to process UAV images is quick and straightforward, especially if the crowd is relatively large. We have over 400 digital humanitarian volunteers signed up for MicroMappers (launched in November 2013) and hope to grow this to 1,000+ in 2014. But for much larger areas, like Kruger National Park, one would need far more volunteers. Kruger covers 7,523 square miles compared to the Lake District’s 885 square miles.

kruger-gate-sign

One answer to this need for more volunteers could be the good work that my colleagues over at Zooniverse are doing. Launched in February 2009, Zooniverse has a base of one million volunteers. Another solution is to use machine computing to prioritize the flight paths of UAVs in the first place, i.e., use advanced algorithms to considerably reduce the search area by ruling out areas where missing people or other objects of interest (like rhinos in Kruger) are highly unlikely to be, based on weather, terrain, season and other data.

This is the area that my colleague Tom Snitch works in. As he noted in this recent interview (PDF), “We want to plan a flight path for the drone so that the number of unprotected animals is as small as possible.” To do this, he and his team use “exquisite mathematics and complex algorithms” to learn how “animals, rangers and poachers move through space and time.” In the case of Search & Rescue, ruling out areas that are too steep or impossible for humans to climb or walk through could go a long way to reducing the search area, not to mention the search time.


See also:

  • Using UAVs for Search & Rescue [link]
  • MicroMappers: Microtasking for Disaster Response [link]
  • Results of MicroMappers Response to Typhoon Yolanda [link]
  • How UAVs are Making a Difference in Disaster Response [link]
  • Crowdsourcing Evaluation of Sandy Building Damage [link]

Rapid Disaster Damage Assessments: Reality Check

The Multi-Cluster/Sector Initial Rapid Assessment (MIRA) is the methodology used by UN agencies to assess and analyze humanitarian needs within two weeks of a sudden onset disaster. A detailed overview of the process, methodologies and tools behind MIRA is available here (PDF). These reports are particularly insightful when comparing them with the processes and methodologies used by digital humanitarians to carry out their rapid damage assessments (typically done within 48-72 hours of a disaster).

MIRA PH

Take the November 2013 MIRA report for Typhoon Haiyan in the Philippines. I am really impressed by how transparent the report is vis-à-vis the very real limitations behind the assessment. For example:

  • “The barangays [districts] surveyed do not constitute a representative sample of affected areas. Results are skewed towards more heavily impacted municipalities […].”
  • “Key informant interviews were predominantly held with barangay captains or secretaries and they may or may not have included other informants including health workers, teachers, civil and worker group representatives among others.”
  • “Barangay captains and local government staff often needed to make their best estimate on a number of questions and therefore there’s considerable risk of potential bias.”
  • “Given the number of organizations involved, assessment teams were not trained in how to administer the questionnaire and there may have been confusion on the use of terms or misrepresentation on the intent of the questions.”
  • “Only in a limited number of questions did the MIRA checklist contain before and after questions. Therefore to correctly interpret the information it would need to be cross-checked with available secondary data.”

In sum: the data collected was not representative; the process of selecting interviewees was biased, given that said selection was based on a convenience sample; interviewees had to estimate (guesstimate?) the answer to several questions, thus introducing additional bias into the data; since assessment teams were not trained to administer the questionnaire, this also introduces the problem of limited inter-coder reliability and thus limits the ability to compare survey results; and the data still needs to be validated against secondary data.

I do not share the above to criticize, only to relay what the real world of rapid assessments resembles when you look “under the hood”. What is striking is how similar the above challenges are to those that digital humanitarians have been facing when carrying out rapid damage assessments. And yet, I distinctly recall rather pointed criticisms leveled by professional humanitarians against groups using social media and crowdsourcing for humanitarian response back in 2010 & 2011. These criticisms dismissed social media reports as being unrepresentative, unreliable, fraught with selection bias, etc. (Some myopic criticisms continue to this day). I find it rather interesting that many of the shortcomings attributed to crowdsourcing social media reports are also true of traditional information collection methodologies like MIRA.

The fact is this: no data or methodology is perfect. The real world is messy, both off- and online. Being transparent about these limitations is important, especially for those who seek to combine both off- and online methodologies to create more robust and timely damage assessments.


Using UAVs for Search & Rescue

UAVs (or drones) are starting to be used for search & rescue operations, such as in the Philippines following Typhoon Yolanda a few months ago. They are also used to find missing people in the US, which may explain why members of the North Texas Drone User Group (NTDUG) are organizing the (first ever?) Search & Rescue challenge in a few days. The purpose of this challenge is to 1) encourage members to build better drones and 2) simulate a real world positive application of civilian drones.

Drones for SA

Nine teams have signed up to compete in Saturday’s challenge, which will be held in a wheat field near the Renaissance Fair in Waxahachie, Texas (satellite image below). The organizers have already sent these teams a simulated missing person’s report, which includes a mock photo, age, height, hair color, ethnicity, clothing and where/when this simulated lost person was last seen. Each drone must have a return-to-home function and failsafe as well as live video streaming.

Challenge location

When the challenge launches, each team will need to submit a flight plan to the contest’s organizers before being allowed to search for the missing items (at set times). An item is considered found when said item’s color or shape can be described and its location can be pointed to on a Google Map. These found objects then count as points. Points are also awarded for finding tracks made by humans or animals, for example. Points will be deducted for major crashes and for flying above the 375-foot altitude limit, and teams risk disqualification for flying over people.

While I can’t make it to Waxahachie this weekend to observe the challenge first-hand, I’m thrilled that the DC Drones group (which I belong to), is preparing to host its own drones search & rescue challenge this Spring. So I hope to be closely involved with this event in the coming months.

Wildlife challenge

Although search & rescue is typically thought of as searching for people, UAVs are also beginning to appear in conversations about anti-poaching operations. At the most recent DC Drones MeetUp, we heard a presentation on the first ever Wildlife Conservation UAV Challenge (wcUAVc). The team has partnered with Kruger National Park to support their anti-poaching efforts in the face of skyrocketing rhino poaching.

Rhino graph

The challenge is to “design low cost UAVs that can be deployed over the rugged terrain of Kruger, equipped with sensors able to detect and locate poachers, and communications able to relay accurate and timely intelligence to Park Rangers.” In addition, the UAVs will have to “collect RFID tag data throughout the sector; detect, classify, and track all humans; regularly report on the location of all rhinos and humans; and receive commands to divert from general surveillance to support poacher engagement anywhere in the sector. They also need to be able to safely operate in the same air space with manned helicopters, assisting special helicopter-borne rangers engage poachers.” All this for under $3,000.

Why RFID tag data? Because rangers and tourists in Kruger National Park all carry RFID tags so they can be easily located. If a UAV automatically detects a group of humans moving through the bush and does not find an RFID signature for them, the UAV will automatically conclude that they may be poachers. When I spoke with one of the team members following the presentation, he noted that they were also interested in having UAVs automatically detect whether humans are carrying weapons. This is no small challenge, which explains why the total cash prize is $65,000 plus an all-inclusive 10-day trip to Kruger National Park for the winning team.
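The RFID check described above amounts to a simple spatial join: for each UAV-detected human, look for a known RFID tag within some radius and flag anyone without one. A minimal sketch, with invented coordinates and threshold:

```python
import math

def flag_possible_poachers(detections, rfid_positions, max_dist_m=50.0):
    """
    Compare UAV-detected human positions (x, y in meters) against known
    RFID tag positions. Anyone with no tag within max_dist_m is flagged
    for follow-up by rangers. The coordinates and distance threshold
    here are illustrative, not from the wcUAVc specification.
    """
    flagged = []
    for person in detections:
        near_tag = any(math.dist(person, tag) <= max_dist_m
                       for tag in rfid_positions)
        if not near_tag:
            flagged.append(person)
    return flagged

rangers = [(100.0, 200.0), (500.0, 500.0)]        # carry RFID tags
detected = [(110.0, 190.0),                       # near a ranger's tag: ignored
            (900.0, 300.0)]                       # no tag nearby: flagged
print(flag_possible_poachers(detected, rangers))  # [(900.0, 300.0)]
```

In a real deployment the hard part is the detection itself, not this matching step, which is presumably why the challenge budget is so tight.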

I think it would be particularly powerful if the team could open up the raw footage for public analysis via microtasking, i.e., include a citizen science component to this challenge to engage and educate people from around the world about the plight of rhinos in South Africa. Participants would be asked to tag imagery that shows rhinos and humans, for example. In so doing, they’d learn more about the problem, thus becoming better educated and possibly more engaged. Perhaps something along the lines of what we do for digital humanitarian response, as described here.

Drone Innovation Award

In any event, I’m a big proponent of using UAVs for positive social impact, which is precisely why I’m honored to be an advisor for the (first ever?) Drones Social Innovation Award. The award was set up by my colleague Timothy Reuter (founder of the Drone User Group Network, DUGN). Timothy is also launching a startup, AirDroids, to further democratize the use of micro-copters. Unlike similar copters out there, these heavy-lift AirDroids are easier to use, cheaper and far more portable.

As more UAVs like AirDroids hit the market, we will undoubtedly see more and more aerial photo- and videography uploaded to sites like Flickr and YouTube. Like social media, I expect such user-generated imagery to become increasingly useful in humanitarian response operations. If users can simply slip their smartphones into their pocket UAV, they could provide valuable aerial footage for rapid disaster damage assessment purposes, for example. Why smartphones? Because people already use their smartphones to snap pictures during disasters. In addition, relatively cheap hardware add-ons can easily equip smartphones for LIDAR sensing and thermal imaging.

All this may eventually result in an overflow of potentially useful aerial imagery, which is where MicroMappers would come in. Digital volunteers could easily use MicroMappers to quickly tag UAV footage in support of humanitarian relief efforts. Of course, UAV footage from official sources will also continue to play a more important role in the future (as happened following Hurricane Sandy). But professional UAV teams are already outnumbered by DIY UAV users. They simply can’t be everywhere at the same time. But the crowd can. And in time, a bird’s eye view may become less important than a flock’s eye view, especially for search & rescue and rapid disaster assessments.


See also:

  • How UAVs are Making a Difference in Disaster Response [link]
  • UN World Food Program to Use UAVs [link]
  • Drones for Human Rights: Brilliant or Foolish? [link]
  • The Use of Drones for Nonviolent Civil Resistance [link]

Yes, I’m Writing a Book (on Digital Humanitarians)

I recently signed a book deal with Taylor & Francis Press. The book, which is tentatively titled “Digital Humanitarians: How Big Data is Changing the Face of Disaster Response,” is slated to be published next year. The book will chart the rise of digital humanitarian response from the Haiti Earthquake to 2015, highlighting critical lessons learned and best practices. To this end, the book will draw on real-world examples of digital humanitarians in action to explain how they use new technologies and crowdsourcing to make sense of “Big (Crisis) Data”. In sum, the book will describe how digital humanitarians & humanitarian technologies are together reshaping the humanitarian space and what this means for the future of disaster response. The purpose of this book is to inspire and inform the next generation of (digital) humanitarians while serving as a guide for established humanitarian organizations & emergency management professionals who wish to take advantage of this transformation in humanitarian response.

2025

The book will thus consolidate critical lessons learned in digital humanitarian response (such as the verification of social media during crises) so that members of the public along with professionals in both international humanitarian response and domestic emergency management can improve their own relief efforts in the face of “Big Data” and rapidly evolving technologies. The book will also be of interest to academics and students who wish to better understand methodological issues around the use of social media and user-generated content for disaster response; or how technology is transforming collective action and how “Big Data” is disrupting humanitarian institutions, for example. Finally, this book will also speak to those who want to make a difference; to those of you who may have little to no experience in humanitarian response but who still wish to help others affected during disasters—even if you happen to be thousands of miles away. You are the next wave of digital humanitarians and this book will explain how you can indeed make a difference.

The book will not be written in a technical or academic style. Instead, I’ll be using a more “storytelling” form of writing combined with a conversational tone. This approach is perfectly compatible with the clear documentation of critical lessons emerging from the rapidly evolving digital humanitarian space. Nor is this conversational style at odds with the need to explain the more technical insights being applied to develop next-generation humanitarian technologies. Quite the contrary: I’ll be using intuitive examples & metaphors to make even the most technical details not only understandable but entertaining.

While this journey is just beginning, I’d like to express my sincere thanks to my mentors for their invaluable feedback on my book proposal. I’d also like to express my deep gratitude to my point of contact at Taylor & Francis Press for championing this book from the get-go. Last but certainly not least, I’d like to sincerely thank the Rockefeller Foundation for providing me with a residency fellowship this Spring in order to accelerate my writing.

I’ll be sure to provide an update when the publication date has been set. In the meantime, many thanks for being an iRevolution reader!


The Best of iRevolution in 2013

iRevolution crossed the 1 million hits mark in 2013, so big thanks to iRevolution readers for spending time here during the past 12 months. This year also saw close to 150 new blog posts published on iRevolution. Here is a short selection of the Top 15 iRevolution posts of 2013:

How to Create Resilience Through Big Data
[Link]

Humanitarianism in the Network Age: Groundbreaking Study
[Link]

Opening Keynote Address at CrisisMappers 2013
[Link]

The Women of Crisis Mapping
[Link]

Data Protection Protocols for Crisis Mapping
[Link]

Launching: SMS Code of Conduct for Disaster Response
[Link]

MicroMappers: Microtasking for Disaster Response
[Link]

AIDR: Artificial Intelligence for Disaster Response
[Link]

Social Media, Disaster Response and the Streetlight Effect
[Link]

Why the Share Economy is Important for Disaster Response
[Link]

Automatically Identifying Fake Images on Twitter During Disasters
[Link]

Why Anonymity is Important for Truth & Trustworthiness Online
[Link]

How Crowdsourced Disaster Response Threatens Chinese Gov
[Link]

Seven Principles for Big Data and Resilience Projects
[Link]

#NoShare: A Personal Twist on Data Privacy
[Link]

I’ll be mostly offline until February 1st, 2014 to spend time with family & friends, and to get started on a new exciting & ambitious project. I’ll be making this project public in January via iRevolution, so stay tuned. In the meantime, wishing iRevolution readers a very Merry Happy Everything!

santahat