Tag Archives: earthquake

Why Robots Are Flying Over Zanzibar and the Source of the Nile

An expedition in 1858 revealed that Lake Victoria was the source of the Nile. We found ourselves on the shores of Africa’s majestic lake this October, a month after a 5.9 magnitude earthquake struck Tanzania’s Kagera Region. Hundreds were injured and dozens killed. This was the biggest tragedy in decades for the peaceful lakeside town of Bukoba. The Ministry of Home Affairs invited WeRobotics to support the recovery and reconstruction efforts by carrying out aerial surveys of the affected areas. 

The mission of WeRobotics is to build local capacity for the safe and effective use of appropriate robotics solutions. We do this by co-creating local robotics labs that we call Flying Labs. We use these Labs to transfer professional skills and relevant robotics solutions to outstanding local partners. Our explicit focus on capacity building explains why we took the opportunity whilst in Kagera to train two Tanzanian colleagues. Khadija and Yussuf joined us from the State University of Zanzibar (SUZA). They were both wonderful to work with and quick learners too. We look forward to working with them and other partners to co-create our Flying Labs in Tanzania. More on this in a future post.

Aerial Surveys of Kagera Region After The Earthquake

We surveyed multiple areas in the region based on the priorities of our local partners as well as reports provided by local villagers. We used the Cumulus One UAV from our technology partner DanOffice to carry out the flights. The Cumulus has a stated 2.5 hour flight time and 50 kilometer radio range. We’re using software from our partner Pix4D to process the 3,000+ very high resolution images captured during our 2 days around Bukoba.

Above, Khadija and Yussuf on the left with a local engineer and a local member of the community on the right, respectively. The video below shows how the Cumulus takes off and lands. The landing is automatic and simply involves the UAV stalling and gently gliding to the ground.

We engaged directly with local communities before our flights to explain our project and get their permission to fly. Learn more about our Code of Conduct.

Aerial mapping with fixed-wing UAVs can identify large-scale damage over large areas and serve as a good base map for reconstruction. Much of the damage, however, is limited to large cracks in walls, which cannot be seen in nadir (vertical) imagery. We therefore flew over some areas with a Parrot Bebop2 to capture oblique imagery and get closer to the damage. We then took dozens of geo-tagged images from ground level with our phones in order to ground-truth the aerial imagery.
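Pairing each ground-level photo with the aerial frame it ground-truths is mostly a matter of comparing geo-tags. The Python sketch below illustrates the idea with the haversine formula; the frame names and coordinates are made up for illustration, not taken from our survey data:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical geo-tags: one ground-level phone photo and two aerial frames.
ground_photo = (-1.3320, 31.8120)
aerial_frames = {"IMG_001": (-1.3305, 31.8098), "IMG_002": (-1.3318, 31.8124)}

# Pair the ground photo with the closest aerial frame for ground-truthing.
nearest = min(aerial_frames, key=lambda k: haversine_m(*ground_photo, *aerial_frames[k]))
print(nearest)  # the frame a few dozen metres away wins
```

In practice a GIS or photogrammetry package would do this matching against the orthomosaic itself, but the distance comparison is the core of it.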

We’re still processing the resulting imagery, so the results below are simply low-resolution previews of one of the three surveys we carried out.

Below are more pictures documenting our recent work in Kagera. You can follow all our trainings and projects live via our Twitter feed (@werobotics) and our Facebook page. Sincerest thanks to both Linx Global Intelligence and UR Group for making our work in Kagera possible. Linx provided the introduction to the Ministry of Home Affairs, while the UR Group provided invaluable support with logistics and permissions.

Yussuf programming the flight plan of the Cumulus

Khadija setting up the Cumulus for a full day of flying around the Bukoba area

Khadija wants to use aerial robots to map Zanzibar, which is where she’s from

Community engagement is absolutely imperative

Local community members inspecting the Parrot Bebop2

From the shores of Lake Victoria to the coastlines of Zanzibar

Together with the outstanding drone team from the State University of Zanzibar, we mapped Jozani Forest and part of the island’s eastern coastline. This allowed us to further field-test our long-range platform and to continue our local capacity building efforts following our surveys near the Ugandan border. Here’s a picture-based summary of our joint efforts.

Flying Labs Coordinator Yussuf sets up the Cumulus UAV for flight

Turns out selfie sticks are popular in Zanzibar and kids love robots.

Khairat from Team SUZA is operating the mobile air traffic control tower. Team SUZA uses senseFly eBees for other projects on the island.

Another successful takeoff, courtesy of Flying Labs Coordinator Yussuf.

We flew the Cumulus at a speed of 65 km/h and at an altitude of 265 m.

The Cumulus flew for 2 hours, making this our longest UAV flight in Zanzibar so far.

Khadija from Team SUZA explains to local villagers how and why she maps Zanzibar using flying robots.

Tide starts rushing back in. It’s important to take the moon into account when mapping coastlines, as the tide can change drastically during a single flight and thus affect the stitching process.

The content above is cross-posted from WeRobotics.

Aerial Robotics for Search & Rescue: State of the Art?

WeRobotics is co-creating a global network of labs to transfer robotics solutions to those who need them most. These “Flying Labs” take on different local flavors based on the needs and priorities of local partners. Santiago Flying Labs is one of the labs under consideration. Our local partners in Chile are interested in the application of robotics for disaster preparedness and Search & Rescue (SaR) operations. So what is the state of the art in rescue robotics?

One answer may lie several thousand miles away in Lushan, China, which experienced a 7.0 magnitude earthquake in 2013. The mountainous terrain made it nearly impossible for the Chinese International Search and Rescue Team (CISAR) to carry out a rapid search and post-seismic evaluation. So the State Key Laboratory of Robotics at the Shenyang Institute of Automation offered aerial support to CISAR. They used their aerial robot (UAV) to automatically and accurately detect collapsed buildings and guide the ground rescue. This saved the SaR teams considerable time. Here’s how.

A quicker SaR response leads to a higher survival rate. The survival rate is around 90% within the first 30 minutes but closer to 20% by day four. “In traditional search methods, ground rescuers are distributed to all possible places, which is time consuming and inefficient.” An aerial inspection of the disaster damage can help accelerate the ground search for survivors by prioritizing which areas to search first.

State Key Labs used a ServoHeli aerial robot to capture live video footage of the damaged region. And this is where it gets interesting. “Because the remains of a collapsed building often fall onto the ground in arbitrary directions, their shapes will exhibit random gradients without particular orientations. Thus, in successive aerial images, the random shape of a collapsed building will lead to particular motion features that can be used to discriminate collapsed from non-collapsed buildings.”

These distinct motion features can be quantified using a histogram of oriented gradient (HOG) as depicted here (click to enlarge):

As is clearly evident from the histograms, the “HOG variation of a normal building will be much larger than that of a collapsed one.” The team at State Key Labs had already employed this technique to train and test their automated feature-detection algorithm using aerial video footage from the 2010 Haiti Earthquake. Sample results of this are displayed below. Red rectangles denote where the algorithm was successful in identifying damage. Blue rectangles are false alarms while orange rectangles are missed detections.
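The intuition behind the HOG-variation measure can be shown with a toy example. The sketch below is a deliberate simplification of the paper's method (synthetic gradients rather than real aerial video frames): it bins gradient orientations into a histogram and compares the variance across bins. An intact structure's edges pile into a few dominant bins, while rubble spreads its gradients across all orientations:

```python
import math

def orientation_histogram(gx, gy, bins=8):
    """Bin gradient orientations (0-180 degrees) weighted by gradient magnitude."""
    hist = [0.0] * bins
    for x, y in zip(gx, gy):
        mag = math.hypot(x, y)
        ang = math.degrees(math.atan2(y, x)) % 180.0
        hist[min(int(ang / (180.0 / bins)), bins - 1)] += mag
    return hist

def hog_variance(hist):
    """Variance across bins: peaked (intact) >> flat (rubble)."""
    mean = sum(hist) / len(hist)
    return sum((h - mean) ** 2 for h in hist) / len(hist)

# Toy gradients: an intact roof edge yields one dominant orientation;
# rubble yields gradients scattered across all orientations.
intact = ([1.0] * 40, [0.0] * 40)                       # all horizontal gradients
rubble_angles = [i * 171.0 for i in range(40)]          # pseudo-random spread
rubble = ([math.cos(math.radians(a)) for a in rubble_angles],
          [math.sin(math.radians(a)) for a in rubble_angles])

v_intact = hog_variance(orientation_histogram(*intact))
v_rubble = hog_variance(orientation_histogram(*rubble))
print(v_intact > v_rubble)  # intact structures show larger HOG variation
```

The paper's actual pipeline tracks how these histograms evolve across successive video frames, which is what makes video more informative than static images for this task.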

While the team achieved increasingly accurate detection rates for Haiti, the initial results for Lushan were less robust because Lushan is far more rural than Port-au-Prince, which tripped up the algorithm. Eventually, however, the software achieved an accuracy rate of 83.4% without any missed collapses. The use of aerial robotics and automated feature-detection algorithms in Xinglong Village (9.5 sq. km) enabled CISAR to cut their search time in half. In sum, the team concluded that videos are more valuable for SaR operations than static images.

To learn more about this deployment, see the excellent write-up “Search and Rescue Rotary-Wing UAV and Its Application to the Lushan Ms 7.0 Earthquake” published in the Journal of Field Robotics. I wish all robotics deployments were this well documented. Another point I find particularly noteworthy is that this operation was conducted three years ago. In other words, real-time feature detection of disaster damage from live aerial video footage was already in operational use years ago.

What’s more, this paper published in 2002 (!) used computer vision to detect collapsed buildings with 90% accuracy in aerial footage of the 1995 Kobe earthquake captured by television crews in helicopters. Perhaps in the near future we’ll have automated feature-detection algorithms for disaster damage assessments running on live video footage from news channels and aerial robots. These could then be complemented by automated change-detection algorithms running on satellite imagery. In any event, the importance of applied research is clearly demonstrated by the Lushan deployments. This explains why WeRobotics always aims to involve local universities in Flying Labs.


Thanks to the ICARUS Team for pointing me to this deployment.

360° Aerial View of Taiwan Earthquake Damage

The latest news coming from Taiwan is just tragic and breaks my heart. More than 100 may still be trapped under the rubble. I’ve been blogging about the role that emerging technologies can play in humanitarian response since 2008 but it definitely doesn’t get any easier emotionally to witness these tragic events. I try to draw a psychological line when I can so as not to lose hope and fall apart, but I’m not always successful. Sometimes it is just easier psychologically to focus on the technology and nothing else. My thoughts go out to all the families affected.


My colleague Preston Ward from DronePan kindly connected me to a friend of his, Elton, in Taiwan. Elton is with Super720, a company that creates impressive Virtual Reality (VR) panoramas. They deployed within 4 hours of the earthquake and used aerial robots (UAVs) at 5 different locations to capture both 360° aerial panoramas and 4K video to document the earthquake damage.

Click on the image below to view the pano in full 360° (takes a while to load).

360° visual analysis may prove useful and complementary for inspecting disaster damage. If there were a way to geo-reference and orthorectify these panos, they could potentially play a larger role in the disaster assessment process which uses geospatial data and Geographic Information Systems (GIS).

The team in Taiwan also shared this aerial video footage of the damage:

As one of my upcoming blog posts will note, we should be able to run computer vision algorithms on aerial videos to automatically detect disaster damage. We should also be able to use Virtual Reality to carry out damage assessments. You can already use VR to inspect the earthquake damage in Taiwan here.

Humanitarian UAV Missions in Nepal: Early Observations (Updated)

Public request from the United Nations (UN) Office for the Coordination of Humanitarian Affairs (OCHA) posted on April 28, 2015:

“OCHA would prefer that all the UAV operators coordinate their efforts. With UAViators (Humanitarian UAV Network) in place, OCHA suggest that they all connect to UAViators and share their activities so everyone knows what is being worked on. Please make sure all UAV teams register at the RDC (Reception and Departure Center) at the airport.” 

Note: UAViators does not self-deploy but rather responds to requests from established humanitarian organizations.


There are at the very least 15 humanitarian UAV teams operating in Nepal. We know this since these teams voluntarily chose to liaise with the Humanitarian UAV Network (UAViators). In this respect, the current humanitarian UAV response is far better coordinated than the one I witnessed in the Philippines right after Typhoon Haiyan in 2013. In fact, there was little to no coordination at the time amongst the multiple civilian UAV teams; let alone between these teams and humanitarian organizations, or the Filipino government for that matter. This lack of coordination coupled with the fact that I could not find any existing “Code of Conduct” for the use of UAVs in humanitarian settings is actually what prompted me to launch UAViators just months after leaving the Philippines.

The past few days have made it clear that we still have a long way to go in the humanitarian UAV space. Below are some early observations (not criticisms, just early reflections). UAV technology is highly disruptive and is only now starting to have visible impact (both good and bad) in humanitarian contexts. We don’t have all the answers; neither the institutions nor the regulators are keeping up with the rapid pace of innovation. The challenges below cut across technical, organizational and regulatory lines and are only growing more complex. So I welcome your constructive input on how to improve these efforts moving forward.

  • Yes, we now have a Code of Conduct which was drafted by several humanitarian professionals, UAV pilots & experts and academics. However, this doesn’t mean that every civilian UAV pilot in Nepal has taken the time to read this document let alone knows that this document exists. As such, most UAV pilots may not even realize that they require legal permission from the government in order to operate or that they should carry some form of insurance. Even professional pilots may not think to inform the local police that they have formal authorization to operate; or know how to communicate with Air Traffic Control or with the military for flight permissions. UAViators can’t force anyone in Nepal to comply with national regulations or the Code. The Network can only encourage UAV pilots to follow best practices. The majority of the problems vis-a-vis the use of UAVs in Nepal would have been avoided had the majority of UAV users followed the Humanitarian UAV Code of Conduct.
  • Yes, more countries have instituted UAV regulations. Some of these tend to be highly restrictive, equating 700-gram micro-UAVs with 50-kilo UAVs. Some apply the same sets of laws for the use of UAVs for amateur movie productions as for the professional use of UAVs for Search & Rescue. In any event, there are no (clear) regulations in Nepal as per research and phone calls made by the Humanitarian UAV Network (see also the UAViators Laws/Travel Wiki). To this end, UAViators has provided contact info to Nepal’s Civil Aviation Authority and Chief of Police. Update: All humanitarian UAV Teams are now required to obtain permission from the Ministry of Home Affairs to operate UAVs in Nepal. Once permission is granted, individual flight plans must be approved by the Nepal Army (via UNDAC). More info here (see May 8 Update). It has taken almost two weeks to get the above process in place. Clearly, without strong backing or leadership from an established humanitarian group that is able and willing to mediate with the appropriate Ministries and Civil Aviation authorities, there is only so much that UAViators can do to support the above process.
  • Yes, we now have all 15 UAV teams on one single dedicated email thread. And yes, UAViators has been able to vet many teams while keeping amateur UAV pilots on standby if the latter have less than 50 hours of flight experience. Incidentally, requests for imagery can be made here. That said, what about all the other civilian UAV pilots operating independently? These other pilots, some of them reporters and disaster junkies, have already undermined the use of UAVs for humanitarian efforts. Indeed, it was reported that “The Nepali Government became very irritated with reporters collecting disaster adventure footage using drones.” This has prompted the government to ban UAV flights with the exception of flights carried out for humanitarian purposes. The latter still require permission from the Ministry of Home Affairs. The problem with so-called “drone journalists” is not simply a safety issue, which is obviously the number one priority of a humanitarian UAV mission. Fact is, there are far more requests for aerial imagery than can be met with just 10 UAV teams on site. So coordination and data sharing are key, even with drone journalists if the latter are prepared to be part of the solution by liaising with UAViators and following the Code of Conduct. Furthermore, local communities have already expressed anger at the fact that drone and humanitarian journalists “have visited the same sites with no plans to share data, make the imagery publicly available, or to make an effort to communicate to villages why the flights are important and how the information will be used to assist in relief efforts.”
  • Yes, we have workflows in place for the UAV teams to share their imagery, and some already have. Alas, limited Internet bandwidth is significantly slowing down the pace of data sharing. Some UAV teams have not (yet) expressed an interest in sharing their imagery. Some have not provided information about where they’re flying. Of course, they are incredibly busy. And besides, they are not required to share any data or information. The best UAViators can do is simply to outline the added value of sharing this imagery & their flight plans. And without strong public backing from established humanitarian groups, there is little else the Network can do. Update: several UAV teams are now only sharing imagery with local and national authorities. If the UN and others want this imagery, they need to go through Nepali authorities.
  • Yes, UAViators is indeed in touch with a number of humanitarian organizations who would like aerial imagery for specific areas, however these groups are unable (or not yet willing) to make these requests public or formal until they better understand the risks (legal, political and operational), the extent of the value-added (they want to see the imagery first), the experience and reliability of the UAV teams, etc. They are also wary of having UAV teams take requests for imagery as carte blanche to say they are operating on their behalf. At the same time, these humanitarian organizations do not have the resources (or time) to provide any coordination support between the Humanitarian UAV Network, appropriate government ministries and Nepal’s Civil Aviation Authority.
  • Yes, we have a dedicated UAViators site for Nepal updated multiple times a day. Unfortunately, most UAV Teams are having difficulty accessing this site from Nepal due to continuing Internet connectivity issues. This is also true of the dedicated UAViators Google Spreadsheet being used to facilitate the coordination of UAV operations. This online resource includes each team’s contact info, UAV assets, requests for aerial imagery, data needs, etc. We’re now sharing this information via basic text within the body of emails; but this also contributes to email overload. Incidentally, the UAVs being used by the 7 teams in Nepal are small platforms such as DJI’s Phantom and Inspire, Aeryon SkyRangers and senseFly eBees.
  • Yes, we have set up a UAV-Flights-Twitter map for Nepal (big thanks to colleagues at LinkedIn) to increase the transparency of where and when UAVs are being flown across the country. Alas, none of the UAV teams have made use of this solution yet even though most are tweeting from the field. This service allows UAV teams to send a simple tweet about their next UAV flight which then gets mapped automatically. If not used in Nepal, perhaps this service will be used in the future & combined with SMS/WhatsApp.
  • Yes, UAViators is connected with the Digital Humanitarian Network (DHN); specifically Humanitarian OpenStreetMap (HOT) and the Standby Task Force (SBTF), with the latter ready to deploy QCRI’s MicroMappers platform for the analysis of oblique imagery. Yet we’re still not sure how best to combine the results of nadir imagery and oblique imagery analysis to add value. Every point on a nadir (vertical) image has a GPS coordinate; but this is not true of obliques (photos taken at an angle). The GPS data for oblique photographs is simply the GPS coordinates for the position of the camera at the time the oblique image was taken. (Specialist gimbal mounted cameras can provide GPS info for objects in oblique photographs, but these are not in use in Nepal).
  • Yes, UAViators has access to a local physical office in Kathmandu. Thanks to the kind offer from Kathmandu Living Labs (KLL), UAV pilots can meet and co-work at KLL. However, even finding a time for all the UAV teams to meet at this office has proven impossible. And yet this is so crucial; there are good reasons why humanitarians have Cluster meetings.
  • Yes, 3D models (Point Clouds) of disaster areas can add insights to disaster damage assessments. That said, these are often huge files and thus particularly challenging to upload. And when these do get posted on-line, what is the best way to have them analyzed? GIS experts and other professionals tend to be completely swamped during disasters. But even if a team were available, what methods & software should they be using to assess and indeed quantify the level of damage in each 3D model? Can this assessment be crowdsourced? And how can the results of 3D analysis be added to other datasets and official humanitarian information products?
  • Yes, the majority of UAV teams that have chosen to liaise with the Humanitarian UAV Network are now in Nepal, yet it took a while for some teams to get on site and there were delays with their UAV assets getting into the country. This points to the need for building local capacity within Nepal and other disaster-prone countries so that local organizations can rapidly deploy UAVs and analyze the resulting imagery themselves after major disasters. This explains why my colleague Nama Budhathoki (at KLL) and I have been looking to set up Kathmandu Flying Labs (basically a Humanitarian UAV Innovation Lab) for literally a year now. In any event, thanks to LinkedIn for Good, we were able to identify some local UAV pilots and students right after the earthquake; some of whom have since been paired with the international UAV teams. Building the capacity of local teams is also important because of the local knowledge and local contacts (and potentially the legal permissions) that these teams will already have.
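On the nadir-versus-oblique point above: a rough ground position for the centre of an oblique photo can be estimated from the camera's geo-tag, altitude and pointing angles. The flat-earth Python sketch below is a first-order approximation only; all coordinates and angles are hypothetical, and it ignores terrain relief, which matters a great deal in Nepal:

```python
import math

def oblique_ground_point(lat, lon, alt_m, heading_deg, depression_deg):
    """
    Flat-earth estimate of the ground point at the centre of an oblique photo.
    depression_deg is the camera angle below the horizon (90 = nadir).
    Illustrative only: ignores terrain relief, lens geometry and Earth curvature.
    """
    ground_range = alt_m / math.tan(math.radians(depression_deg))
    north = ground_range * math.cos(math.radians(heading_deg))
    east = ground_range * math.sin(math.radians(heading_deg))
    dlat = north / 111_320.0                                 # metres per degree latitude
    dlon = east / (111_320.0 * math.cos(math.radians(lat)))  # shrinks with latitude
    return lat + dlat, lon + dlon

# Camera 100 m up, looking due north, 45 degrees below the horizon:
# the image centre lies roughly 100 m north of the camera position.
print(oblique_ground_point(27.7000, 85.3000, 100.0, 0.0, 45.0))
```

This is essentially what gimbal-aware mapping software does with far more rigor; the point is that the photo's embedded GPS tag alone is not the position of what the photo shows.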

So where do we go from here? Despite the above challenges, there is a lot more coordination and structure to the UAV response in Nepal than there was following Typhoon Haiyan in 2013. Then again, the challenges that come with UAV operations in disaster situations are only going to increase as more UAV teams deploy in future crises alongside members of the public, drone journalists, military UAVs, etc. At some point, hopefully sooner (before accidents and major mistakes happen) rather than later, an established humanitarian organization will take on the responsibility of mediating between UAV teams, UAViators, the government, civil aviation officials, military and other aid groups.

What we may need is something along the lines of what GSMA’s Disaster Response Program has done for Mobile Network Operators (MNOs) and the humanitarian community. GSMA has done a lot since the 2010 Haiti Earthquake to bridge MNOs and humanitarians, acting as convener, developing standard operating procedures, ethical guidelines, a global model agreement, etc. Another suggestion floated by a humanitarian colleague is the INSARAG Secretariat, which classifies and also categorizes Search and Rescue teams. Each team has to “sign onto agreed guidelines (behavior, coordination, markings, etc). So, when the first one arrives, they know to setup a reception space; they all know that there will be coordination meetings, etc.” Perhaps INSARAG could serve as a model for UAViators 2.0. Update: UNDAC is now serving as liaison for UAV flights, which will likely set a precedent for future humanitarian UAV missions.

Coordination is never easy. And leveraging a new, disruptive technology for disaster response is also a major challenge. I, for one, am ready and want to take on these new challenges, but I do need a willing and able partner in the humanitarian community to take them on with me and others. The added value of timely, very high-resolution aerial data is significant for disaster response, not to mention the use of UAVs for payload transportation and the provision of communication services. The World Humanitarian Summit (WHS) is coming up next year. Will we unveil a solution to the above challenges at this pivotal Summit, or will we continue dragging our feet and forgo the humanitarian innovation opportunities that are right in front of our eyes in Nepal?

In the meantime, I want to thank and acknowledge the following UAV Teams for liaising with the Humanitarian UAV Network: Team Rubicon, SkyCatch, Halo Drop, GlobalMedic, Medair, Deploy Media and Paul Borrud. Almost all teams have already been able to share aerial imagery. If other responders on the ground are able to support these efforts in any way, e.g., CISCO providing better Internet connectivity, or if you know of other UAV groups that are moving faster and able to provide guidance, for example, then please do get in touch.

A Force for Good: How Digital Jedis are Responding to the Nepal Earthquake (Updated)

Digital Humanitarians are responding in full force to the devastating earthquake that struck Nepal. Information sharing and coordination is taking place online via CrisisMappers and on multiple dedicated Skype chats. The Standby Task Force (SBTF), Humanitarian OpenStreetMap (HOT) and others from the Digital Humanitarian Network (DHN) have also deployed in response to the tragedy. This blog post provides a quick summary of some of these digital humanitarian efforts along with what’s coming in terms of new deployments.

Update: A list of Crisis Maps for Nepal is available below.

At the request of the UN Office for the Coordination of Humanitarian Affairs (OCHA), the SBTF is using QCRI’s MicroMappers platform to crowdsource the analysis of tweets and mainstream media (the latter via GDELT) to rapidly 1) assess disaster damage and needs; and 2) identify where humanitarian groups are deploying (3Ws). The MicroMappers CrisisMaps are already live and publicly available below (simply click on the maps to open the live version). Both Crisis Maps are being updated hourly (at times every 15 minutes). Note that MicroMappers also uses both crowdsourcing and Artificial Intelligence (AIDR).

Update: More than 1,200 Digital Jedis have used MicroMappers to sift through a staggering 35,000 images and 7,000 tweets! This has so far resulted in 300+ relevant pictures of disaster damage displayed on the Image Crisis Map and over 100 relevant disaster tweets on the Tweet Crisis Map.

Live CrisisMap of pictures from both Twitter and Mainstream Media showing disaster damage:

MM Nepal Earthquake ImageMap

Live CrisisMap of Urgent Needs, Damage and Response Efforts posted on Twitter:

MM Nepal Earthquake TweetMap

Note: the outstanding Kathmandu Living Labs (KLL) team has also launched an Ushahidi Crisis Map in collaboration with the Nepal Red Cross. We’ve already invited KLL to take all of the MicroMappers data and add it to their crisis map. Supporting local efforts is absolutely key.

WP_aerial_image_nepal

The Humanitarian UAV Network (UAViators) has also been activated to identify, mobilize and coordinate UAV assets & teams. Several professional UAV teams are already on their way to Kathmandu. The UAV pilots will be producing high resolution nadir imagery, oblique imagery and 3D point clouds. UAViators will be pushing this imagery to both HOT and MicroMappers for rapid crowdsourced analysis (just as was done with the aerial imagery from Vanuatu post Cyclone Pam, more on that here). A leading UAV manufacturer is also donating several UAVs to UAViators for use in Nepal. These UAVs will be sent to KLL to support their efforts. In the meantime, DigitalGlobe, Planet Labs and SkyBox are each sharing their satellite imagery with CrisisMappers, HOT and others in the Digital Humanitarian Network.

There are several other efforts going on, so the above is certainly not a complete list but simply reflects those digital humanitarian efforts that I am involved in or most familiar with. If you know of other major efforts, then please feel free to post them in the comments section. Thank you. More on the state of the art in digital humanitarian action in my new book, Digital Humanitarians.


List of Nepal Crisis Maps

Please add to the list below by posting new links in this Google Spreadsheet. Also, someone should really create a single map that pulls from each of the listed maps.

Code for Nepal Casualty Crisis Map:
http://bit.ly/1IpUi1f 

DigitalGlobe Crowdsourced Damage Assessment Map:
http://goo.gl/bGyHTC

Disaster OpenRouteService Map for Nepal:
http://www.openrouteservice.org/disaster-nepal

ESRI Damage Assessment Map:
http://arcg.is/1HVNNEm

Harvard WorldMap Tweets of Nepal:
http://worldmap.harvard.edu/maps/nepalquake 

Humanitarian OpenStreetMap Nepal:
http://www.openstreetmap.org/relation/184633

Kathmandu Living Labs Crowdsourced Crisis Map: http://www.kathmandulivinglabs.org/earthquake

MicroMappers Disaster Image Map of Damage:
http://maps.micromappers.org/2015/nepal/images/#close

MicroMappers Disaster Damage Tweet Map of Needs:
http://maps.micromappers.org/2015/nepal/tweets

NepalQuake Status Map:
http://www.nepalquake.org/status-map

UAViators Crisis Map of Damage from Aerial Pics/Vids:
http://uaviators.org/map (takes a while to load)

Visions SDSU Tweet Crisis Map of Nepal:
http://vision.sdsu.edu/ec2/geoviewer/nepal-kathmandu#

Using Flash Crowds to Automatically Detect Earthquakes & Impact Before Anyone Else

It is said that our planet has a new nervous system; a digital nervous system comprised of digital veins and intertwined sensors that capture the pulse of our planet in near real-time. Next generation humanitarian technologies seek to leverage this new nervous system to detect and diagnose the impact of disasters within minutes rather than hours. To this end, LastQuake may be one of the most impressive humanitarian technologies that I have recently come across. Spearheaded by the European-Mediterranean Seismological Center (EMSC), the technology combines “Flashsourcing” with social media monitoring to auto-detect earthquakes before they’re picked up by seismometers or anyone else.

Scientists typically draw on ground-motion prediction algorithms and data on building infrastructure to rapidly assess an earthquake’s potential impact. Alas, ground-motion predictions vary significantly, and infrastructure data are rarely available at sufficient resolution to accurately assess the impact of earthquakes. Moreover, a minimum of three seismometers are needed to calibrate a quake, and said seismic data take several minutes to generate. This explains why the EMSC uses human sensors to rapidly collect relevant data on earthquakes, as these reduce the uncertainties that come with traditional rapid impact assessment methodologies. The Center’s important work clearly demonstrates how the Internet coupled with social media is “creating new potential for rapid and massive public involvement by both active and passive means” vis-a-vis earthquake detection and impact assessments. Indeed, the EMSC can automatically detect new quakes within 80-90 seconds of their occurrence while simultaneously publishing tweets with preliminary information on said quakes, like this one:

Screen Shot 2014-10-23 at 5.44.27 PM

In reality, the first human sensors (increases in web traffic) can be detected within 15 seconds (!) of a quake. The EMSC’s system continues to automatically tweet relevant information (including documents, photos, videos, etc.) for the first 90 minutes after it first detects an earthquake, and it is also able to automatically create a customized and relevant hashtag for individual quakes.

Screen Shot 2014-10-23 at 5.51.05 PM

How do they do this? Well, the team draws on two real-time crowdsourcing methods that “indirectly collect information from eyewitnesses on earthquakes’ effects.” The first is TED, which stands for Twitter Earthquake Detection, a system developed by the US Geological Survey (USGS). TED filters tweets by keyword, location and time to “rapidly detect sharing events through increases in the number of tweets” related to an earthquake. The second method, called “flashsourcing,” was developed by the EMSC to analyze traffic patterns on its own website, “a popular rapid earthquake information website” that averages 1.5 to 2 million visits a month. Flashsourcing allows the Center to detect surges in web traffic that often occur after earthquakes—a detection method named Internet Earthquake Detection (IED). These traffic surges (“flash crowds”) are caused by “eyewitnesses converging on its website to find out the cause of their shaking experience” and can be detected by analyzing the IP locations of website visitors.
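To make flashsourcing a bit more concrete, here is a minimal sketch in Python of how a “flash crowd” might be flagged: bucket website hits into fixed time windows and compare the newest window against a recent baseline. The function name, data format and thresholds are all hypothetical; the EMSC’s production system is of course far more sophisticated (and analyzes visitor IP locations as well).

```python
from collections import Counter

def detect_flash_crowd(hits, window=60, baseline_windows=10, threshold=5.0):
    """Flag a traffic surge when the newest time window has far more
    visits than the recent baseline windows.

    `hits` is a list of (timestamp_seconds, ip) tuples, e.g. parsed
    from webserver logs. Returns True if the most recent window
    looks like a flash crowd. (Hypothetical sketch.)"""
    if not hits:
        return False
    latest = max(t for t, _ in hits)
    # Count visits per window; window 0 is the newest.
    buckets = Counter()
    for t, _ in hits:
        buckets[int((latest - t) // window)] += 1
    current = buckets.get(0, 0)
    baseline = [buckets.get(i, 0) for i in range(1, baseline_windows + 1)]
    mean = sum(baseline) / len(baseline)
    # Require the newest window to exceed the baseline several-fold.
    return current > threshold * max(mean, 1.0)
```

Feeding the function a quiet stream of log entries returns False; appending a sudden burst of visits in the newest window flips it to True.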

It is worth emphasizing that both TED and IED work independently from traditional seismic monitoring systems. Instead, they are “based on real-time statistical analysis of Internet-based information generated by the reaction of the public to the shaking.” As EMSC rightly notes in a forthcoming peer-reviewed scientific study, “Detections of felt earthquakes are typically within 2 minutes for both methods, i.e., considerably faster than seismographic detections in poorly instrumented regions of the world.” TED and IED are highly complementary methods since they are based on two entirely “different types of Internet use that might occur after an earthquake.” TED depends on the popularity of Twitter while IED’s effectiveness depends on how well known the EMSC website is in the area affected by an earthquake. LastQuake automatically publishes real-time information on earthquakes by automatically merging real-time data feeds from both TED and IED as well as non-crowdsourcing feeds.

infographie-CSEM-LastQuake2

Let’s look into the methodology that powers IED. Flashsourcing can be used to detect felt earthquakes and provide “rapid information (within 5 minutes) on the local effects of earthquakes. More precisely, it can automatically map the area where shaking was felt by plotting the geographical locations of statistically significant increases in traffic […].” In addition, flashsourcing can also “discriminate localities affected by alarming shaking levels […], and in some cases it can detect and map areas affected by severe damage or network disruption through the concomitant loss of Internet sessions originating from the impacted region.” As such, this “negative space” (where there are no signals) is itself an important signal for damage assessment, as I’ve argued before.

In the future, EMSC’s flashsourcing system may also be able to discriminate power cuts between indoor and outdoor Internet connections at the city level, since the system’s analysis of web traffic sessions will soon be based on web sockets rather than webserver log files. This automatic detection of power failures “is the first step towards a new system capable of detecting Internet interruptions or localized infrastructure damage.” Of course, flashsourcing alone does not “provide a full description of earthquake impact, but within a few minutes, independently of any seismic data, and, at little cost, it can exclude a number of possible damage scenarios, identify localities where no significant damage has occurred and others where damage cannot be excluded.”

Screen Shot 2014-10-23 at 5.59.20 PM

EMSC is complementing its flashsourcing methodology with a novel mobile app that enables smartphone users to quickly report felt earthquakes. Instead of requiring data entry and written surveys, users simply click on cartoon-style pictures that best describe the level of intensity they felt when the earthquake (or aftershocks) struck. In addition, EMSC analyzes and manually validates geo-located photos and videos of earthquake effects uploaded to its website (not from social media). The Center’s new app will also make it easier for users to post more pictures more quickly.

CSEM-tweets2

What about the typical criticisms (by now broken records) that social media is biased and unreliable (and thus useless)? What about the usual theatrics about the digital divide invalidating any kind of crowdsourcing effort, given that these will be heavily biased and hardly representative of the overall population? Despite these well-known shortcomings, and despite the fact that our inchoate digital networks are still evolving into a new nervous system for our planet, the existing nervous system—however imperfect and immature—still adds value. TED and LastQuake demonstrate this empirically beyond any shadow of a doubt. What’s more, the EMSC have found that crowdsourced, user-generated information is highly reliable: “there are very few examples of intentional misuses, errors […].”

My team and I at QCRI are honored to be collaborating with EMSC on integrating our AIDR platform to support their good work. AIDR enables users to automatically detect tweets of interest using machine learning (artificial intelligence), which is far more effective than searching for keywords. I recently spoke with Rémy Bossu, one of the masterminds behind the EMSC’s LastQuake project, about his team’s plans for AIDR:

“For us AIDR could be a way to detect indirect effects of earthquakes, and notably triggered landslides and fires. Landslides can be the main cause of earthquake losses, like during the 2001 Salvador earthquake. But they are very difficult to anticipate, depending among other parameters on the recent rainfalls. One can prepare a susceptibility map but whether there are or not landslides, where they have struck and their extent is something we cannot detect using geophysical methods. For us AIDR is a tool which could potentially make a difference on this issue of rapid detection of indirect earthquake effects for better situation awareness.”

In other words, as soon as the EMSC system detects an earthquake, the plan is for that detection to automatically launch an AIDR deployment to automatically identify tweets related to landslides. This integration is already completed and being piloted. In sum, EMSC is connecting an impressive ecosystem of smart, digital technologies powered by a variety of methodologies. This explains why their system is one of the most impressive & proven examples of next generation humanitarian technologies that I’ve come across in recent months.


Acknowledgements: Many thanks to Rémy Bossu for providing me with all the material and graphics I needed to write up this blog post.

See also:

  • Social Media: Pulse of the Planet? [link]
  • Taking Pulse of Boston Bombings [link]
  • The World at Night Through the Eyes of the Crowd [link]
  • The Geography of Twitter: Mapping the Global Heartbeat [link]

Humanitarian UAVs Fly in China After Earthquake (updated)

A 6.1 magnitude earthquake struck Ludian County in Yunnan, China earlier this month. Some 600 people lost their lives; over 2,400 were injured and another 200,000 were forced to relocate. In terms of infrastructure damage, about 30,000 buildings were damaged and more than 12,000 homes collapsed. To rapidly search for survivors and assess this damage, responders in China turned to DJI’s office in Hong Kong. DJI is one of the leading manufacturers of commercial UAVs in the world.

Rescuers search for survivors as they walk among debris of collapsed buildings after an earthquake hit Longtoushan township of Ludian county

DJI’s team of pilots worked directly with the China Association for Disaster and Emergency Response Medicine (CADERM). According to DJI, “This was the first time [the country] used [UAVs] in its relief efforts and as a result many of the cooperating agencies and bodies working on site have approached us for training / using UAS technology in the future […].” DJI flew two types of quadcopters, the DJI S900 and DJI Phantom 2 Vision+ pictured below (respectively):

DJI S900

Phantom 2

As mentioned here, the DJI Phantom 2 is the same model that the UN Office for the Coordination of Humanitarian Affairs (OCHA) is experimenting with:

Screen Shot 2014-06-24 at 2.22.05 PM

Given the dense rubble and vegetation in the disaster affected region of Ludian County in China, ground surveys were particularly challenging to carry out. So UAVs provided disaster responders with an unimpeded bird’s eye view of the damage, helping them prioritize their search and rescue efforts. DJI reports that the UAVs “were able to relay images back to rescue workers, who used them to determine which roads needed to be cleared first and which areas of the rubble to search for possible survivors. […].”

The video above shows some striking aerial footage of the disaster damage. This is not the first time that UAVs have been used for search and rescue or road-clearance operations. Transporting urgent supplies to disaster areas requires that roads be cleared as quickly as possible, which is why UAVs were used for this and other purposes after Typhoon Haiyan in the Philippines. In Ludian, “Aerial images captured by the team were [also] used by workers in the epicenter area […] where most of the traditional buildings in the area collapsed.”

DJI was not the only group to fly UAVs in response to the quake in Yunnan. The Chinese government itself deployed UAVs (days before DJI). As the Associated Press reported several weeks ago, “A novel part of the Yunnan response was the use of drones to map and monitor a quake-formed lake that threatened to flood areas downstream. China has rapidly developed drone use in recent years, and they helped save time and money while providing highly reliable data, said Xu Xiaokun, an engineer with the army reserves.”

Working with UAV manufacturers directly may prove to be the preferred route for humanitarian organizations requiring access to aerial imagery following major disasters. At the same time, having the capacity and skills in-house to rapidly deploy these UAVs affords several advantages over the partnership model. So combining in-house capacity with a partnership model may ultimately be the way to go but this will depend heavily on the individual mandates and needs of humanitarian organizations.


See Also:

  • Humanitarians in the Sky: Using UAVs for Disaster Response [link]
  • Live Crisis Map of UAV Videos for Disaster Response [link]
  • Humanitarian UAV Missions During Balkan Floods [link]
  • UAVs, Community Mapping & Disaster Risk Reduction in Haiti [link]
  • “TripAdvisor” for International UAV/Drone Travel [link]

Using AIDR to Collect and Analyze Tweets from Chile Earthquake

Wish you had a better way to make sense of Twitter during disasters than this?

Type in a keyword like #ChileEarthquake in Twitter’s search box above and you’ll see more tweets than you can possibly read in a day, let alone keep up with for more than a few minutes. Wish there were an easy, free and open-source solution? Well, you’ve come to the right place. My team and I at QCRI are developing the Artificial Intelligence for Disaster Response (AIDR) platform to do just this. Here’s how it works:

First you login to the AIDR platform using your own Twitter handle (click images below to enlarge):

AIDR login

You’ll then see your collection of tweets (if you already have any). In my case, you’ll see I have three. The first is a collection of English language tweets related to the Chile Earthquake. The second is a collection of Spanish tweets. The third is a collection of more than 3,000,000 tweets related to the missing Malaysia Airlines plane. A preliminary analysis of these tweets is available here.

AIDR collections

Let’s look more closely at my Chile Earthquake 2014 collection (see below, click to enlarge). I’ve collected about a quarter of a million tweets in the past 30 hours or so. The label “Downloaded tweets (since last re-start)” simply refers to the number of tweets I’ve collected since adding a new keyword or hashtag to my collection. I started the collection yesterday at 5:39am my time (yes, I’m an early bird). Under “Keywords” you’ll see all the hashtags and keywords I’ve used to search for tweets related to the earthquake in Chile. I’ve also specified the geographic region I want to collect tweets from. Don’t worry, you don’t actually have to enter geographic coordinates when you set up your own collection; you simply highlight (on a map) the area you’re interested in and AIDR does the rest.

AIDR - Chile Earthquake 2014

You’ll also note in the above screenshot that I’ve chosen to collect only tweets in English, but you can collect tweets in all languages if you’d like, or just a select few. Finally, the Collaborators section simply lists the colleagues I’ve added to my collection. This gives them the ability to add new keywords/hashtags and to download the tweets collected as shown below (click to enlarge). More specifically, collaborators can download the most recent 100,000 tweets (and also share the link with others). The 100K tweet limit is based on Twitter’s Terms of Service (ToS). If collaborators want all the tweets, Twitter’s ToS allows for sharing the TweetIDs for an unlimited number of tweets.
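As a rough illustration of the export rules just described, the following Python sketch writes either a full-tweet CSV capped at 100,000 rows or an uncapped CSV of TweetIDs. The function and field names are hypothetical, not AIDR’s actual code.

```python
import csv
import io

TWEET_EXPORT_LIMIT = 100_000  # cap on full-tweet exports (per Twitter's ToS)

def export_collection(tweets, full=False):
    """Render a collection as CSV text.

    With full=False, only TweetIDs are written, which the ToS allows
    in unlimited numbers. With full=True, tweet text is included but
    the export is capped at the most recent 100,000 tweets.
    `tweets` is a list of {"id": ..., "text": ...} dicts, newest
    first (hypothetical field names)."""
    out = io.StringIO()
    writer = csv.writer(out)
    if full:
        writer.writerow(["id", "text"])
        for tw in tweets[:TWEET_EXPORT_LIMIT]:
            writer.writerow([tw["id"], tw["text"]])
    else:
        writer.writerow(["id"])
        for tw in tweets:
            writer.writerow([tw["id"]])
    return out.getvalue()
```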

AIDR download CSV

So that’s the AIDR Collector. We also have the AIDR Classifier, which helps you make sense of the tweets you’re collecting (in real-time). That is, your collection of tweets doesn’t stop, it continues growing, and as it does, you can make sense of new tweets as they come in. With the Classifier, you simply teach AIDR to classify tweets into whatever topics you’re interested in, like “Infrastructure Damage”, for example. To get started with the AIDR Classifier, simply return to the “Details” tab of our Chile collection. You’ll note the “Go To Classifier” button on the far right:

AIDR go to Classifier

Clicking on that button allows you to create a Classifier, say on the topic of disaster damage in general. So you simply create a name for your Classifier, in this case “Disaster Damage” and then create Tags to capture more details with respect to damage-related tweets. For example, one Tag might be, say, “Damage to Transportation Infrastructure.” Another could be “Building Damage.” In any event, once you’ve created your Classifier and corresponding tags, you click Submit and find your way to this page (click to enlarge):

AIDR Classifier Link

You’ll notice the public link for volunteers. That’s basically the interface you’ll use to teach AIDR. If you want to teach AIDR by yourself, you can certainly do so. You also have the option of “crowdsourcing the teaching” of AIDR. Clicking on the link will take you to the page below.

AIDR to MicroMappers

So, I called my Classifier “Message Contents,” which is not particularly insightful; I should have labeled it something like “Humanitarian Information Needs.” But bear with me and let’s click on that Classifier. This will take you to the following Clicker on MicroMappers:

MicroMappers Clicker

Now this is not the most awe-inspiring interface you’ve ever seen (at least I hope not); the reason being that this is simply our very first version. We’ll be providing different “skins” like the official MicroMappers skin (below) as well as a skin that allows you to upload your own logo, for example. In the meantime, note that AIDR shows every tweet to at least three different volunteers. Only if all three volunteers agree on how to classify a given tweet does AIDR take it into consideration when learning. In other words, AIDR wants to ensure that humans are really sure about how to classify a tweet before it decides to learn from that lesson. Incidentally, the MicroMappers smartphone app for iPhone and Android will be available in the next few weeks. But I digress.
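The triple-agreement rule can be captured in a few lines of Python. This is a hypothetical sketch of the idea, not AIDR’s actual code:

```python
REQUIRED_VOTES = 3  # each tweet is shown to at least three volunteers

def consensus_label(votes):
    """Return the agreed tag for a tweet, or None if volunteers have
    not (yet) unanimously agreed. The classifier only learns from a
    tweet once three volunteers give it the same tag.

    `votes` is the list of tags submitted so far, e.g.
    ["damage", "damage", "damage"]. (Hypothetical sketch.)"""
    if len(votes) < REQUIRED_VOTES:
        return None          # still waiting for more volunteers
    if len(set(votes)) == 1:
        return votes[0]      # unanimous: safe to learn from
    return None              # disagreement: discard or re-queue
```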

Yolanda TweetClicker4

As you and/or your volunteers classify tweets based on the Tags you created, AIDR starts to learn—hence the AI (Artificial Intelligence) in AIDR. AIDR begins to recognize that all the tweets you classified as “Infrastructure Damage” are indeed similar. Once you’ve tagged enough tweets, AIDR will decide that it’s time to leave the nest and fly on its own. In other words, it will start to auto-classify incoming tweets in real-time. (At present, AIDR can auto-classify some 30,000 tweets per minute; compare this to the peak rate of 16,000 tweets per minute observed during Hurricane Sandy.)

Of course, AIDR’s first solo “flights” won’t always go smoothly. But not to worry: AIDR will let you know when it needs a little help. Every tweet that AIDR auto-tags comes with a confidence level. That is, AIDR will tell you: “I am 80% sure that I correctly classified this tweet.” If AIDR has trouble with a tweet, i.e., if its confidence level is 65% or below, it will send the tweet to you (and/or your volunteers) so it can learn from how you classify that particular tweet. In other words, the more tweets you classify, the more AIDR learns, and the higher AIDR’s confidence levels get. Fun, huh?
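This threshold-based hand-off between machine and human is the core of the design. A minimal sketch, using a hypothetical helper and the 65% floor mentioned above:

```python
CONFIDENCE_FLOOR = 0.65  # at or below this, ask a human

def route_prediction(tweet, tag, confidence):
    """Decide whether an auto-classified tweet is accepted or sent
    back to volunteers for labeling, mirroring the behaviour of
    asking for help when confidence is 65% or lower.

    Returns ("auto", tag) or ("human", None). (Hypothetical sketch.)"""
    if confidence > CONFIDENCE_FLOOR:
        return ("auto", tag)
    # Low confidence: queue the tweet so a volunteer's answer can be
    # fed back into training, raising future confidence levels.
    return ("human", None)
```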

To view the results of the machine tagging, simply click on the View/Download tab, as shown below (click to enlarge). The page shows you the latest tweets that have been auto-tagged along with the Tag label and the confidence score. (Yes, this too is the first version of that interface, we’ll make it more user-friendly in the future, not to worry). In any event, you can download the auto-tagged tweets in a CSV file and also share the download link with your colleagues for analysis and so on. At some point in the future, we hope to provide a simple data visualization output page so that you can easily see interesting data trends.

AIDR Results

So that’s basically all there is to it. If you want to learn more about how it all works, you might fancy reading this research paper (PDF). In the meantime, I’ll simply add that you can re-use your Classifiers. If (when?) another earthquake strikes Chile, you won’t have to start from scratch. You can auto-tag incoming tweets immediately with the Classifier you already have. Plus, you’ll be able to share your classifiers with your colleagues and partner organizations if you like. In other words, we’re envisaging an “App Store” of Classifiers based on different hazards and different countries. The more we re-use our Classifiers, the more accurate they will become. Everybody wins.

And voila, that is AIDR (at least our first version). If you’d like to test the platform and/or want the tweets from the Chile Earthquake, simply get in touch!


Note:

  • We’re adapting AIDR so that it can also classify text messages (SMS).
  • AIDR Classifiers are language specific. So if you speak Spanish, you can create a classifier to tag all Spanish language tweets/SMS that refer to disaster damage, for example. In other words, AIDR does not only speak English : )

Results of MicroMappers Response to Pakistan Earthquake (Updated)

Update: We’re developing & launching MicroFilters to improve MicroMappers.

About 47 hours ago, the UN Office for the Coordination of Humanitarian Affairs (OCHA) activated the Digital Humanitarian Network (DHN) in response to the Pakistan Earthquake. The activation request was for 48 hours, so the deployment will soon phase out. As already described here, the Standby Volunteer Task Force (SBTF) teamed up with QCRI to carry out an early test of MicroMappers, which was not set to launch until next month. This post shares some initial thoughts on how the test went along with preliminary results.

Pakistan Quake

During ~40 hours, 109 volunteers from the SBTF and the public tagged just over 30,000 tweets that were posted during the first 36 hours or so after the quake. We were able to automatically collect these tweets thanks to our partnership with GNIP and specifically filtered for said tweets using half-a-dozen hashtags. Given the large volume of tweets collected, we did not require that each tweet be tagged at least 3 times by individual volunteers to ensure data quality control. Out of these 30,000+ tweets, volunteers tagged a total of 177 tweets as noting needs or infrastructure damage. A review of these tweets by the SBTF concluded that none were actually informative or actionable.

Just over 350 pictures were tweeted in the aftermath of the earthquake. These were uploaded to the ImageClicker for tagging purposes. However, none of the pictures captured evidence of infrastructure damage. In fact, the vast majority were unrelated to the earthquake. This was also true of pictures published in news articles. Indeed, we used an automated algorithm to identify all tweets with links to news articles; this algorithm would then crawl those articles for images. We found that the vast majority of these automatically extracted pictures were related to politics rather than infrastructure damage.
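For readers curious what the first step of such a pipeline might look like, here is a toy Python sketch that pulls direct image links out of tweet text with a regular expression. The function name is hypothetical, and the actual pipeline (which crawled linked news articles) was considerably more involved.

```python
import re

# Matches direct links to common image formats inside tweet text.
IMAGE_URL = re.compile(r"https?://\S+\.(?:jpe?g|png|gif)\b", re.IGNORECASE)

def extract_image_links(tweets):
    """Collect unique image URLs mentioned in a batch of tweet texts
    so they can be queued for an ImageClicker. Order of first
    appearance is preserved. (Hypothetical sketch.)"""
    seen, urls = set(), []
    for text in tweets:
        for url in IMAGE_URL.findall(text):
            if url not in seen:
                seen.add(url)
                urls.append(url)
    return urls
```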

Pakistan Quake2

A few preliminary thoughts and reflections from this first test of MicroMappers. First, however, a big, huge, gigantic thanks to my awesome QCRI team: Ji Lucas, Imran Muhammad and Kiran Garimella; to my outstanding colleagues on the SBTF Core Team including but certainly not limited to Jus Mackinnon, Melissa Elliott, Anahi A. Iaccuci, Per Aarvik & Brendan O’Hanrahan (bios here); to the amazing SBTF volunteers and members of the general public who rallied to tag tweets and images—in particular our top 5 taggers: Christina KR, Leah H, Lubna A, Deborah B and Joyce M! Also bravo to volunteers in the Netherlands, UK, US and Germany for being the most active MicroMappers; and last but certainly not least, big, huge and gigantic thanks to Andrew Ilyas for developing the algorithms to automatically identify pictures and videos posted to Twitter.

So what did we learn over the past 48 hours? First, the disaster-affected region is a remote area of south-western Pakistan with a very light social media footprint, so there was practically no user-generated content directly relevant to needs and damage posted on Twitter during the first 36 hours. In other words, there were no needles to be found in the haystack of information. This is in stark contrast to our experience when we carried out a very similar operation following Typhoon Pablo in the Philippines. Obviously, if there’s little to no social media footprint in a disaster-affected area, then monitoring social media is of no use at all to anyone. Note, however, that MicroMappers could also be used to tag 30,000+ text messages (SMS). (Incidentally, since the earthquake struck around 12 noon local time, there were only about 18 hours of daylight during the 36-hour period for which we collected the tweets.)

Second, while the point of this exercise was not to test our pre-processing filters, it was clear that the single biggest problem was ultimately with the filtering. Our goal was to upload as many tweets as possible to the Clickers and stress-test the apps. So we only filtered tweets using a number of general hashtags such as #Pakistan. Furthermore, we did not filter out any retweets, which probably accounted for 2/3 of the data, nor did we filter by geography to ensure that we were only collecting and thus tagging tweets from users based in Pakistan. This was a major mistake on our end. We were so preoccupied with testing the actual Clickers that we simply did not pay attention to the pre-processing of tweets. This was equally true of the images uploaded to the ImageClicker.
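The fixes described above (dropping retweets, removing duplicates, and filtering by geography) amount to a simple pre-processing pass. A hypothetical Python sketch, with made-up field names:

```python
def preprocess(tweets, country_code=None):
    """Filter a raw tweet collection before uploading to a Clicker:
    drop retweets, drop duplicate texts, and (optionally) keep only
    tweets geo-tagged with the given country code.

    Each tweet is a dict like {"text": ..., "country": ...}
    (hypothetical field names; a sketch, not the real pipeline)."""
    seen = set()
    kept = []
    for tw in tweets:
        text = tw["text"].strip()
        if text.lower().startswith("rt @"):      # manual-style retweet
            continue
        if country_code and tw.get("country") != country_code:
            continue
        key = text.lower()
        if key in seen:                          # duplicate text
            continue
        seen.add(key)
        kept.append(tw)
    return kept
```

With this pass in place, volunteers would see each unique, in-country tweet at most the handful of times needed for verification rather than a stream dominated by retweets.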

Pakistan Quake 3

So where do we go from here? Well we have pages and pages worth of feedback to go through and integrate in the next version of the Clickers. For me, one of the top priorities is to optimize our pre-processing algorithms and ensure that the resulting output can be automatically uploaded to the Clickers. We have to refine our algorithms and make damned sure that we only upload unique tweets and images to our Clickers. At most, volunteers should not see the same tweet or image more than 3 times for verification purposes. We should also be more careful with our hashtag filtering and also consider filtering by geography. Incidentally, when our free & open source AIDR platform becomes operational in November, we’ll also have the ability to automatically identify tweets referring to needs, reports of damage, and much, much more.

In fact, AIDR was also tested for the very first time. SBTF volunteers tagged about 1,000 tweets, and just over 130 of the tags enabled us to create an accurate classifier that can automatically identify whether a tweet is relevant for disaster response efforts specifically in Pakistan (80% accuracy). Now, we didn’t apply this classifier on incoming tweets because AIDR uses streaming Twitter data, not static, archived data which is what we had (in the form of CSV files). In any event, we also made an effort to create classifiers for needs and infrastructure damage but did not get enough tags to make these accurate enough. Typically, we need a minimum of 20 or so tags (i.e., examples of actual tweets referring to needs or damage). The more tags, the more accurate the classifier.

The reason there were so few tags, however, is that there were very few informative tweets referring to needs or infrastructure damage during the first 36 hours. In any event, I believe this was the very first time that a machine learning classifier was crowdsourced for disaster response purposes. In the future, we may want to first crowdsource a machine learning classifier for disaster-relevant tweets and then upload the results to MicroMappers; this would reduce the number of unrelated tweets displayed on a TweetClicker.

As expected, we have also received a lot of feedback vis-a-vis user experience and the user interface of the Clickers. Speed is at the top of the list. That is, making sure that once I’ve clicked on a tweet/image, the next tweet/image automatically appears. At times, I had to wait more than 20 seconds for the next item to load. We also need to add more progress bars such as the number of tweets or images that remain to be tagged—a countdown display, basically. I could go on and on, frankly, but hopefully these early reflections are informative and useful to others developing next-generation humanitarian technologies. In sum, there is a lot of work to be done still. Onwards!


MicroMappers Launched for Pakistan Earthquake Response (Updated)

Update 1: MicroMappers is now public! Anyone can join to help the efforts!
Update 2: Results of MicroMappers Response to Pakistan Earthquake [Link]

MicroMappers was not due to launch until next month but my team and I at QCRI received a time-sensitive request by colleagues at the UN to carry out an early test of the platform given yesterday’s 7.7 magnitude earthquake, which killed well over 300 and injured hundreds more in south-western Pakistan.

pakistan_quake_2013

Shortly after this request, the UN Office for the Coordination of Humanitarian Affairs (OCHA) in Pakistan officially activated the Digital Humanitarian Network (DHN) to rapidly assess the damage and needs resulting from the earthquake. The award-winning Standby Volunteer Task Force (SBTF), a founding member of the DHN, teamed up with QCRI to use MicroMappers in response to the request by OCHA-Pakistan. This exercise, however, is purely for testing purposes. We made this clear to our UN partners, since the results may be far from optimal.

MicroMappers is simply a collection of microtasking apps (we call them Clickers) that we have customized for disaster response purposes. We just launched both the Tweet and Image Clickers to support the earthquake relief and may also launch the Tweet and Image GeoClickers as well in the next 24 hours. The TweetClicker is pictured below (click to enlarge).

MicroMappers_Pakistan1

Thanks to our partnership with GNIP, QCRI automatically collected over 35,000 tweets related to Pakistan and the Earthquake (we’re continuing to collect more in real-time). We’ve uploaded these tweets to the TweetClicker and are also filtering links to images for upload to the ImageClicker. Depending on how the initial testing goes, we may be able to invite help from the global digital village. Indeed, “crowdsourcing” is simply another way of saying “It takes a village…” In fact, that’s precisely why MicroMappers was developed, to enable anyone with an Internet connection to become a digital humanitarian volunteer. The Clicker for images is displayed below (click to enlarge).

MicroMappers_Pakistan2

Now, whether this very first test of the Clickers goes well remains to be seen. As mentioned, we weren’t planning to launch until next month. But we’ve already learned heaps from the past few hours alone. For example, while the Clickers are indeed ready and operational, our automatic pre-processing filters are not yet optimized for rapid response. The purpose of these filters is to automatically identify tweets that link to images and videos so that they can be uploaded to the Clickers directly. In addition, while our ImageClicker is operational, our VideoClicker is still under development—as is our TranslateClicker, both of which would have been useful in this response. I’m sure we’ll encounter other issues over the next 24-36 hours. We’re keeping track of these in a shared Google Spreadsheet so we can review them next week and make sure to integrate as much of the feedback as possible before the next disaster strikes.

Incidentally, we (QCRI) also teamed up with the SBTF to test the very first version of the Artificial Intelligence for Disaster Response (AIDR) platform for about six hours. As far as we know, this test represents the first time that machine learning classifiers for disaster response were created on the fly using crowdsourcing. We expect to launch AIDR publicly at the 2013 CrisisMappers conference this November (ICCM 2013). We’ll be sure to share what worked and didn’t work during this first AIDR pilot test. So stay tuned for future updates via iRevolution. In the meantime, a big, big thanks to the SBTF Team for rallying so quickly and for agreeing to test the platforms! If you’re interested in becoming a digital humanitarian volunteer, simply join us here.
