Category Archives: Humanitarian Technologies

Crowdsourcing and Humanitarian Action: Analysis of the Literature

Raphael Hörler of ETH Zurich has just completed his thesis on the role of crowdsourcing in humanitarian action. His valuable research offers one of the most up-to-date and comprehensive reviews of the principal players and humanitarian technologies in action today. In short, I highly recommend this important resource. Raphael’s full thesis is available here (PDF).



Computing Research Institutes as an Innovation Pathway for Humanitarian Technology

The World Humanitarian Summit (WHS) is an initiative by United Nations Secretary-General Ban Ki-moon to improve humanitarian action. The Summit, which is to be held in 2016, stands to be one of the most important humanitarian conferences in a decade. One key pillar of WHS is humanitarian innovation. “Transformation through Innovation” is the WHS Working Group dedicated to transforming humanitarian action by focusing explicitly on innovation. I have the pleasure of being a member of this working group where my contribution focuses on the role of new technologies, data science and advanced computing. As such, I’m working on an applied study to explore the role of computing research institutes as an innovation pathway for humanitarian technology. The purpose of this blog post is to invite feedback on the ideas presented below.


I first realized that the humanitarian community faced a “Big Data” challenge in 2010, just months after I had joined Ushahidi as Director of Crisis Mapping, and just months after co-founding CrisisMappers: The Humanitarian Technology Network. The devastating Haiti Earthquake resulted in a massive overflow of information generated via mainstream news, social media, text messages and satellite imagery. I launched and spearheaded the Haiti Crisis Map at the time and together with hundreds of digital volunteers from all around the world went head-to-head with Big Data. As noted in my forthcoming book, we realized there and then that crowdsourcing and mapping software alone were no match for Big (Crisis) Data.

Digital Humanitarians: The Book

This explains why I decided to join an advanced computing research institute, namely QCRI. It was clear to me after Haiti that humanitarian organizations had to partner directly with advanced computing experts to manage the new Big Data challenge in disaster response. So I “embedded” myself in an institute with leading experts in Big Data Analytics, Data Science and Social Computing. I believe that computing research institutes (CRIs) can & must play an important role in fostering innovation in next generation humanitarian technology by partnering with humanitarian organizations on research & development (R&D).

There is already some evidence to support this proposition. We (QCRI) teamed up with the UN Office for the Coordination of Humanitarian Affairs (OCHA) to create the Artificial Intelligence for Disaster Response platform, AIDR as well as MicroMappers. We are now extending AIDR to analyze text messages (SMS) in partnership with UNICEF. We are also spearheading efforts around the use and analysis of aerial imagery (captured via UAVs) for disaster response (see the Humanitarian UAV Network: UAViators). On the subject of UAVs, I believe that this new technology presents us (in the WHS Innovation team) with an ideal opportunity to analyze in “real time” how a new, disruptive technology gets adopted within the humanitarian system. In addition to UAVs, we catalyzed a partnership with Planet Labs and teamed up with Zooniverse to take satellite imagery analysis to the next level with large scale crowd computing. To this end, we are working with humanitarian organizations to enable them to make sense of Big Data generated via social media, SMS, aerial imagery & satellite imagery.

The incentives for humanitarian organizations to collaborate with CRIs are obvious, especially if the latter (like QCRI) commit to making the resulting prototypes freely accessible and open source. But why should CRIs collaborate with humanitarian organizations in the first place? Because the latter come with real-world challenges and unique research questions that many computer scientists are very interested in, for several reasons. First, carrying out scientific research on real-world problems is of interest to the vast majority of computer scientists I collaborate with, both within QCRI and beyond. These scientists want to apply their skills to make the world a better place. Second, the research questions that humanitarian organizations bring enable computer scientists to differentiate themselves in the publishing world. Third, the resulting research can help advance the field of computer science and advanced computing.

So why are we not seeing more collaboration between CRIs & humanitarian organizations? Because of a cognitive surplus mismatch. It takes a Director of Social Innovation (or a related full-time position) to serve as a translational leader between CRIs and humanitarian organizations. It takes someone (ideally a team) to match problem owners with problem solvers; to facilitate and manage the collaboration between these two very different types of expertise and organizations. In sum, CRIs can serve as an innovation pathway if the following three ingredients are in place: 1) a Translation Leader; 2) a Committed CRI; and 3) a Committed Humanitarian Organization. These are necessary but not sufficient conditions for success.

While research institutes have a comparative advantage in R&D, they are not the best place to scale humanitarian technology prototypes. In order to take these prototypes to the next level, make them sustainable and have them develop into enterprise-level software, they need to be taken up by for-profit companies. The majority of CRIs (QCRI included) actually do have a mandate to incubate start-up companies. As such, we plan to spin off some of the above platforms as independent companies in order to scale the technologies in a robust manner. Note that the software will remain free to use for humanitarian applications; other uses of the platform will require a paid license. Therein lies the end-to-end innovation path that computing research institutes can offer humanitarian organizations vis-a-vis next generation humanitarian technologies.

As noted above, part of my involvement with the WHS Innovation Team entails working on an applied study to document and replicate this innovation pathway. As such, I am looking for feedback on the above as well as on the research methodology described below.

I plan to interview Microsoft Research, IBM Research, Yahoo Research, QCRI and other institutes as part of this research. More specifically, the interview questions will include:

  • Have you already partnered with humanitarian organizations? Why/why not?
  • If you have partnered with humanitarian organizations, what was the outcome? What were the biggest challenges? Was the partnership successful? If so, why? If not, why not?
  • If you have not yet partnered with humanitarian organizations, why not? What factors would be conducive to such partnerships and what factors serve as hurdles?
  • What are your biggest concerns vis-a-vis working with humanitarian groups?
  • What funding models did you explore if any?

I also plan to interview humanitarian organizations to better understand the prospects for this potential innovation pathway. More specifically, I plan to interview ICRC, UNHCR, UNICEF and OCHA using the following questions:

  • Have you already partnered with computing research groups? Why/why not?
  • If you have partnered with computing research groups, what was the outcome? What were the biggest challenges? Was the partnership successful? If so, why? If not, why not?
  • If you have not yet partnered with computing research groups, why not? What factors would be conducive to such partnerships and what factors serve as hurdles?
  • What are your biggest concerns vis-a-vis working with computing research groups?
  • What funding models did you explore if any?

My plan is to carry out the above semi-structured interviews in February-March 2015 along with secondary research. My ultimate aim with this deliverable is to develop a model to facilitate greater collaboration between computing research institutes and humanitarian organizations. To this end, I welcome feedback on all of the above (feel free to email me and/or add comments below). Thank you.


See also:

  • Research Framework for Next Generation Humanitarian Technology and Innovation [link]
  • From Gunfire at Sea to Maps of War: Implications for Humanitarian Innovation [link]

Humanitarian UAV/Drones in Conflict Zones: Fears, Concerns and Opportunities

My research team and I at the Humanitarian UAV Network (UAViators) have compiled a list of fears and concerns expressed by humanitarians and others on the use of UAVs in humanitarian settings. To do this, we closely reviewed well over 50 different documents, reports, articles, etc., on humanitarian UAVs as part of this applied research project. The motivation behind this research is to better understand the different and overlapping concerns that humanitarian organizations have over the use of non-lethal drones in crises, particularly crises mired in violent conflict.

[Table: Concerns over humanitarian UAV use, tallied by category]

The results of this research are available in this open & editable spreadsheet and summarized in the table above. We identified a total of 9 different categories of concerns and tallied the unique instances in which these appear in the official humanitarian reports, articles, papers, studies, etc., that we reviewed. The top 3 concerns are: Military Association, Data Privacy and Consent. We very much welcome feedback, so feel free to get in touch via the comments section below and/or add additional content directly to the spreadsheet. This research will feed into an upcoming workshop that my colleague Kristin Sandvik (on the Advisory Board of UAViators) and I are looking to organize in the Spring of 2015. The workshop will address the most pressing issues around the use of civilian UAVs in conflict zones.

I tend to believe that UAV initiatives like the Syria Airlift Project (video above) can play a positive role in conflict settings. In fact, I wrote about this exact use of UAVs back in 2008 for PeaceWork Magazine (scroll down) and referred to previous (conventional) humanitarian airlift examples from the Berlin Airlift in the 1940s to the Biafran Airlift in the 1960s as a basis for remotely piloted aircraft systems. As such, I suggested that UAVs could be used in Burma at the time to transport relief supplies in response to the complex emergency. While fraught with risks, these risks can at times be managed when approached with care, integrity and professionalism, especially if a people-centered, community-based approach is taken, one that prioritizes both safety and direct empowerment.

While some humanitarians may be categorically against any and all uses of non-lethal UAVs in conflict zones regardless of the circumstances, the fact is that their opinions won’t prevent affected communities and others from using UAVs anyway. More and more individuals will have access to cheaper and cheaper UAVs in the months and years ahead. As a UN colleague noted with respect to the Syria Airlift Project, initiatives like these may well be a sign of things to come. This sentiment is also shared by my colleague Jules Frost at World Vision. See her recent piece entitled: “Eyes in the Sky are Inevitable: UAVs and Humanitarian Response.”


Acknowledgements: Many thanks to my research assistants Peter Mosur and Jus Mackinnon for taking the lead in this research.

See also:

  • On UAVs for Peacebuilding and Conflict Prevention [link]
  • The Use of Drones for Nonviolent Civil Resistance [link]
  • Drones for Human Rights: Brilliant or Foolish [link]

Using Flash Crowds to Automatically Detect Earthquakes & Impact Before Anyone Else

It is said that our planet has a new nervous system; a digital nervous system comprised of digital veins and intertwined sensors that capture the pulse of our planet in near real-time. Next generation humanitarian technologies seek to leverage this new nervous system to detect and diagnose the impact of disasters within minutes rather than hours. To this end, LastQuake may be one of the most impressive humanitarian technologies that I have recently come across. Spearheaded by the European-Mediterranean Seismological Center (EMSC), the technology combines “Flashsourcing” with social media monitoring to auto-detect earthquakes before they’re picked up by seismometers or anyone else.


Scientists typically draw on ground-motion prediction algorithms and data on building infrastructure to rapidly assess an earthquake’s potential impact. Alas, ground-motion predictions vary significantly and infrastructure data are rarely available at sufficient resolutions to accurately assess the impact of earthquakes. Moreover, a minimum of three seismometers are needed to calibrate a quake and said seismic data take several minutes to generate. This explains why the EMSC uses human sensors to rapidly collect relevant data on earthquakes, as these reduce the uncertainties that come with traditional rapid impact assessment methodologies. Indeed, the Center’s important work clearly demonstrates how the Internet coupled with social media are “creating new potential for rapid and massive public involvement by both active and passive means” vis-a-vis earthquake detection and impact assessments. In fact, the EMSC can automatically detect new quakes within 80-90 seconds of their occurrence while simultaneously publishing tweets with preliminary information on said quakes, like this one:

[Screenshot: EMSC’s automatically published earthquake tweet]

In reality, the first human sensors (increases in web traffic) can be detected within 15 seconds (!) of a quake. The EMSC’s system continues to automatically tweet relevant information (including documents, photos, videos, etc.) for the first 90 minutes after it first detects an earthquake and is also able to automatically create a customized and relevant hashtag for individual quakes.


How do they do this? Well, the team draws on two real-time crowdsourcing methods that “indirectly collect information from eyewitnesses on earthquakes’ effects.” The first is TED, which stands for Twitter Earthquake Detection–a system developed by the US Geological Survey (USGS). TED filters tweets by key word, location and time to “rapidly detect sharing events through increases in the number of tweets” related to an earthquake. The second method, called “flashsourcing,” was developed by the EMSC to analyze traffic patterns on its own website, “a popular rapid earthquake information website.” The site gets an average of 1.5 to 2 million visits a month. Flashsourcing allows the Center to detect surges in web traffic that often occur after earthquakes—a detection method named Internet Earthquake Detection (IED). These traffic surges (“flash crowds”) are caused by “eyewitnesses converging on its website to find out the cause of their shaking experience” and can be detected by analyzing the IP locations of website visitors.
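At its core, both methods come down to anomaly detection on a time series of counts. As a rough illustration only (this is not EMSC’s or USGS’s actual algorithm; the function, parameter values and simulated traffic below are all invented), a flash crowd can be flagged whenever traffic jumps well above a rolling baseline:

```python
from collections import deque

def detect_flash_crowd(hit_counts, window=60, threshold=5.0, min_baseline=1.0):
    """Flag time bins where web traffic surges far above the recent baseline.

    hit_counts: per-bin visit counts to the website.
    window: number of past bins used to estimate normal traffic levels.
    threshold: surge factor over the baseline that triggers a detection.
    Returns the indices of bins flagged as flash crowds.
    """
    baseline = deque(maxlen=window)
    detections = []
    for i, hits in enumerate(hit_counts):
        if len(baseline) == window:
            avg = sum(baseline) / window
            if hits > threshold * max(avg, min_baseline):
                detections.append(i)
        baseline.append(hits)
    return detections

# Simulated traffic: steady ~2 hits/bin, then a surge after a felt quake.
traffic = [2] * 100 + [40, 85, 120, 90]
print(detect_flash_crowd(traffic))  # the four surge bins are flagged
```

A real system would also geolocate the visitors’ IP addresses so that the surge can be mapped, not just detected.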

It is worth emphasizing that both TED and IED work independently from traditional seismic monitoring systems. Instead, they are “based on real-time statistical analysis of Internet-based information generated by the reaction of the public to the shaking.” As EMSC rightly notes in a forthcoming peer-reviewed scientific study, “Detections of felt earthquakes are typically within 2 minutes for both methods, i.e., considerably faster than seismographic detections in poorly instrumented regions of the world.” TED and IED are highly complementary methods since they are based on two entirely “different types of Internet use that might occur after an earthquake.” TED depends on the popularity of Twitter while IED’s effectiveness depends on how well known the EMSC website is in the area affected by an earthquake. LastQuake automatically publishes real-time information on earthquakes by automatically merging real-time data feeds from both TED and IED as well as non-crowdsourcing feeds.
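Merging the two independent feeds is conceptually simple: detections from either source that fall within a short time window are treated as the same event. The sketch below is a hypothetical illustration of that idea, not LastQuake’s actual merge logic; the function name and window value are invented:

```python
def merge_detections(ted, ied, window_s=120):
    """Merge two independent detection streams into distinct events.

    ted, ied: lists of (timestamp_seconds, source_name) detections.
    Detections within window_s of an event's first detection are merged.
    Returns a list of events with their first/last timestamps and sources.
    """
    events = []
    for t, src in sorted(ted + ied):
        if events and t - events[-1]["first"] <= window_s:
            events[-1]["sources"].add(src)   # corroborating detection
            events[-1]["last"] = t
        else:
            events.append({"first": t, "last": t, "sources": {src}})
    return events

merged = merge_detections([(100.0, "TED")], [(35.0, "IED"), (600.0, "IED")])
print(len(merged))                    # -> 2 distinct events
print(sorted(merged[0]["sources"]))   # -> ['IED', 'TED']
```

An event confirmed by both sources is naturally more trustworthy than one seen by only one method, which is the practical payoff of the two methods being independent.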


Let’s look into the methodology that powers IED. Flashsourcing can be used to detect felt earthquakes and provide “rapid information (within 5 minutes) on the local effects of earthquakes. More precisely, it can automatically map the area where shaking was felt by plotting the geographical locations of statistically significant increases in traffic […].” In addition, flashsourcing can also “discriminate localities affected by alarming shaking levels […], and in some cases it can detect and map areas affected by severe damage or network disruption through the concomitant loss of Internet sessions originating from the impacted region.” As such, this “negative space” (where there are no signals) is itself an important signal for damage assessment, as I’ve argued before.

In the future, EMSC’s flashsourcing system may also be able to discriminate power cuts between indoor and outdoor Internet connections at the city level, since the system’s analysis of web traffic sessions will soon be based on web sockets rather than webserver log files. This automatic detection of power failures “is the first step towards a new system capable of detecting Internet interruptions or localized infrastructure damage.” Of course, flashsourcing alone does not “provide a full description of earthquake impact, but within a few minutes, independently of any seismic data, and, at little cost, it can exclude a number of possible damage scenarios, identify localities where no significant damage has occurred and others where damage cannot be excluded.”


EMSC is complementing their flashsourcing methodology with a novel mobile app that quickly enables smartphone users to report felt earthquakes. Instead of requiring data entry or written surveys, users simply click on cartoon-like pictures that best describe the level of intensity they felt when the earthquake (or aftershocks) struck. In addition, EMSC analyzes and manually validates geo-located photos and videos of earthquake effects uploaded to their website (not from social media). The Center’s new app will also make it easier for users to post more pictures more quickly.


What about typical criticisms (by now broken records) that social media is biased and unreliable (and thus useless)? What about the usual theatrics about the digital divide invalidating any kind of crowdsourcing effort given that these will be heavily biased and hardly representative of the overall population? Despite these already well known shortcomings and despite the fact that our inchoate digital networks are still evolving into a new nervous system for our planet, the existing nervous system—however imperfect and immature—still adds value. TED and LastQuake demonstrate this empirically beyond any shadow of a doubt. What’s more, the EMSC have found that crowdsourced, user-generated information is highly reliable: “there are very few examples of intentional misuses, errors […].”

My team and I at QCRI are honored to be collaborating with EMSC on integrating our AIDR platform to support their good work. AIDR enables users to automatically detect tweets of interest using machine learning (artificial intelligence), which is far more effective than simply searching for keywords. I recently spoke with Rémy Bossu, one of the masterminds behind the EMSC’s LastQuake project, about his team’s plans for AIDR:

“For us AIDR could be a way to detect indirect effects of earthquakes, and notably triggered landslides and fires. Landslides can be the main cause of earthquake losses, like during the 2001 El Salvador earthquake. But they are very difficult to anticipate, depending among other parameters on recent rainfall. One can prepare a susceptibility map, but whether or not there are landslides, where they have struck and their extent is something we cannot detect using geophysical methods. For us AIDR is a tool which could potentially make a difference on this issue of rapid detection of indirect earthquake effects for better situational awareness.”

In other words, as soon as the EMSC system detects an earthquake, the plan is for that detection to automatically launch an AIDR deployment to automatically identify tweets related to landslides. This integration is already completed and being piloted. In sum, EMSC is connecting an impressive ecosystem of smart, digital technologies powered by a variety of methodologies. This explains why their system is one of the most impressive & proven examples of next generation humanitarian technologies that I’ve come across in recent months.


Acknowledgements: Many thanks to Rémy Bossu for providing me with all the material and graphics I needed to write up this blog post.

See also:

  • Social Media: Pulse of the Planet? [link]
  • Taking Pulse of Boston Bombings [link]
  • The World at Night Through the Eyes of the Crowd [link]
  • The Geography of Twitter: Mapping the Global Heartbeat [link]

Automatically Classifying Text Messages (SMS) for Disaster Response

Humanitarian organizations like the UN and Red Cross often face a deluge of social media data when disasters strike areas with a large digital footprint. This explains why my team and I have been working on AIDR (Artificial Intelligence for Disaster Response), a free and open source platform to automatically classify tweets in real-time. Given that the vast majority of the world’s population does not tweet, we’ve teamed up with UNICEF’s Innovation Team to extend our AIDR platform so users can also automatically classify streaming SMS.


After the Haiti Earthquake in 2010, the main mobile network operator there (Digicel) offered to send an SMS to each of its 1.4 million subscribers (at the time) to accelerate our disaster needs-assessment efforts. We politely declined since we didn’t have any automated (or even semi-automated) way of analyzing incoming text messages. With AIDR, however, we should (theoretically) be able to classify some 1.8 million SMS’s (and tweets) per hour. Enabling humanitarian organizations to make sense of “Big Data” generated by affected communities is obviously key for two-way communication with said communities during disasters, hence our work at QCRI on “Computing for Good”.
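For a sense of what supervised message classification involves, here is a minimal multinomial naive Bayes sketch in Python. It is purely illustrative: AIDR’s actual pipeline uses crowd-labeled training data and much richer feature extraction, and every message, label and class name below is invented:

```python
import math
from collections import Counter, defaultdict

class TinyTextClassifier:
    """Minimal multinomial naive Bayes for short messages (SMS/tweets)."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> training examples
        self.vocab = set()

    def train(self, text, label):
        words = text.lower().split()
        self.word_counts[label].update(words)
        self.label_counts[label] += 1
        self.vocab.update(words)

    def classify(self, text):
        words = text.lower().split()
        total = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label, count in self.label_counts.items():
            # log prior + log likelihoods with add-one (Laplace) smoothing
            score = math.log(count / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in words:
                score += math.log((self.word_counts[label][w] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

clf = TinyTextClassifier()
clf.train("urgent need clean water in tacloban", "request")
clf.train("we need food and water please help", "request")
clf.train("bridge to the coast has collapsed", "damage")
clf.train("school building collapsed after the quake", "damage")
print(clf.classify("village urgently needs water"))  # -> request
```

In practice the labels would come from digital volunteers tagging a stream of incoming messages, and the trained classifier would then label the remaining millions automatically.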

AIDR/SMS applications are certainly not limited to disaster response. In fact, we plan to pilot the AIDR/SMS platform for a public health project with our UNICEF partners in Zambia next month and with other partners in early 2015. While still experimental, I hope the platform will eventually be robust enough for use in response to major disasters; allowing humanitarian organizations to poll affected communities and to make sense of resulting needs in near real-time, for example. Millions of text messages could be automatically classified according to the Cluster System, for example, and the results communicated back to local communities via community radio stations, as described here.

These are still very early days, of course, but I’m typically an eternal optimist, so I hope that our research and pilots do show promising results. Either way, we’ll be sure to share the full outcome of said pilots publicly so that others can benefit from our work and findings. In the meantime, if your organization is interested in piloting and learning with us, then feel free to get in touch.


UN Experts Meeting on Humanitarian UAVs

Updated: The Experts Meeting Summary Report is now available here (PDF) and also here as an open, editable Google Doc for comments/questions.

The Humanitarian UAV Network (UAViators) and the United Nations Office for the Coordination of Humanitarian Affairs (OCHA) are co-organizing the first ever “Experts Meeting on Humanitarian UAVs” on November 6th at UN Headquarters in New York. This full-day strategy meeting, which is co-sponsored by the ICT for Peace Foundation (ICT4Peace) and QCRI, will bring together leading UAV experts (including several members of the UAV Network’s Advisory Board, such as DJI) with seasoned humanitarian professionals from OCHA, WFP, UNICEF, UNHCR, UNDAC, IOM, American Red Cross, European Commission and several other groups that are also starting to use civilian UAVs or have a strong interest in leveraging this technology.

The strategy session, which I’ll be running with my colleague Dan Gilman from OCHA (who authored this Policy Brief on Humanitarian UAVs), will provide an important opportunity for information sharing between UAV experts and humanitarian professionals with the explicit goal of catalyzing direct collaboration on the operational use of UAVs in humanitarian settings. UAV experts seek to better understand humanitarian information needs (e.g. UNDAC needs) while humanitarians seek to better understand the challenges and opportunities regarding the rapid deployment of UAVs. In sum, this workshop will bring together 30 experts from different disciplines to pave the way forward for the safe and effective use of humanitarian UAVs.


The Experts Meeting will include presentations from select participants such as Gene Robinson (leading expert in the use of UAVs for Search & Rescue), Kate Chapman (director of Humanitarian OpenStreetMap), Peter Spruyt (European Commission’s Joint Research Center), Jacob Petersen (Anthea Technologies), Charles Devaney (University of Hawaii), Adam Klaptocz (Drone Adventures & senseFly) and several others. Both Matternet and Google’s Project Wing have been formally invited to present on the latest in UAV payload transportation. (Representatives from the Small UAV Coalition have also been invited to attend).

In addition to the above, the strategy meeting will include dedicated sessions on Ethics, Legislation and Regulation facilitated by Brendan Schulman (leading UAV lawyer) and Kristin Sandvik (Norwegian Center for Humanitarian Studies). Other sessions are expected to focus on Community Engagement, Imagery Analysis as well as Training and Certification. The final session of the day will be dedicated to identifying potential joint pilot projects between UAV pro’s and humanitarian organizations as well as the Humanitarian UAV Network.


We will be writing up a summary of the Experts Meeting and making this report publicly available via the Humanitarian UAV Network website. In addition, we plan to post videos of select talks given during the strategy meeting along with accompanying slides. This first meeting at UN Headquarters serves as a springboard for two future strategy meetings scheduled for 2015. One of these will be a 3-day high-level & policy-focused international workshop on Humanitarian UAVs, which will be held at the Rockefeller Foundation’s Center in Bellagio, Italy (pictured below in a UAV/aerial image I took earlier this year). This workshop will be run by myself, Dan Gilman and Kristin Sandvik (both of whom are on the Advisory Board of the Humanitarian UAV Network).

[Aerial image: the Rockefeller Foundation’s Bellagio Center, Italy]

Kristin and I are also looking to co-organize another workshop in 2015 to focus specifically on the use of non-lethal UAVs in conflict zones. We are currently talking to prospective donors to make this happen. So stay tuned for more information on all three Humanitarian UAV meetings as one of our key goals at the Humanitarian UAV Network is to raise awareness about humanitarian UAVs by publicly disseminating results & findings from key policy discussions and UAV missions. In the meantime, big thanks to UN/OCHA, ICT4Peace and the Rockefeller Foundation for their crucial and most timely support.


See also:

  • Humanitarians in the Sky: Using UAVs for Disaster Response [link]
  • Low-Cost UAV Applications for Post-Disaster Damage Assessments: A Streamlined Workflow [link]
  • Humanitarian UAVs Fly in China After Earthquake [link]
  • Humanitarian UAV Missions During Balkan Floods [link]
  • Humanitarian UAVs in the Solomon Islands [link]
  • UAVs, Community Mapping & Disaster Risk Reduction in Haiti [link]

Low-Cost UAV Applications for Post-Disaster Assessments: A Streamlined Workflow

Colleagues Matthew Cua, Charles Devaney and others recently co-authored this excellent study on their latest use of low-cost UAVs/drones for post-disaster assessments, environmental development and infrastructure development. They describe the “streamlined workflow—flight planning and data acquisition, post-processing, data delivery and collaborative sharing,” that they created “to deliver acquired images and orthorectified maps to various stakeholders within [their] consortium” of partners in the Philippines. They conclude from direct hands-on experience that “the combination of aerial surveys, ground observations and collaborative sharing with domain experts results in richer information content and a more effective decision support system.”

[Figure: Streamlined aerial imaging workflow]

UAVs have become “an effective tool for targeted remote sensing operations in areas that are inaccessible to conventional manned aerial platforms due to logistic and human constraints.” As such, “The rapid development of unmanned aerial vehicle (UAV) technology has enabled greater use of UAVs as remote sensing platforms to complement satellite and manned aerial remote sensing systems.” The figure above (click to enlarge) depicts the aerial imaging workflow developed by the co-authors to generate and disseminate post-processed images. This workflow, the main components of which are “Flight Planning & Data Acquisition,” “Data Post-Processing” and “Data Delivery,” will “continuously be updated, with the goal of automating more activities in order to increase processing speed, reduce cost and minimize human error.”

[Figure: UAV flight plan over coastal Tacloban, generated with APM Mission Planner]

Flight Planning simply means developing a flight plan based on clearly defined data needs. The screenshot above (click to enlarge) is a “UAV flight plan of the coastal section of Tacloban city, Leyte generated using APM Mission Planner. The [flight] plan involved flying a small UAV 200 meters above ground level. The raster scan pattern indicated by the yellow line was designed to take images with 80% overlap & 75% side overlap. The waypoints indicating a change in direction of the UAV are shown as green markers.” The purpose of the overlap is to stitch and accurately geo-reference the images during post-processing. A video on how to program a UAV flight is available here. This video specifically focuses on post-disaster assessments in the Philippines.
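The overlap percentages translate directly into the shutter-trigger distance along each flight line and the spacing between lines. The sketch below illustrates the geometry with a simple pinhole-camera model; the focal length and sensor dimensions are hypothetical stand-ins for a small compact camera, not the study’s actual camera parameters:

```python
def footprint(altitude_m, focal_mm, sensor_w_mm, sensor_h_mm):
    """Ground footprint (width, height) of one image via the pinhole model."""
    return (altitude_m * sensor_w_mm / focal_mm,
            altitude_m * sensor_h_mm / focal_mm)

def spacing(altitude_m, focal_mm, sensor_w_mm, sensor_h_mm,
            forward_overlap=0.8, side_overlap=0.75):
    """Distance between photo centers along-track and between flight lines.

    Assumes the sensor's short side faces the flight direction.
    """
    w, h = footprint(altitude_m, focal_mm, sensor_w_mm, sensor_h_mm)
    along_track = h * (1 - forward_overlap)   # shutter-trigger distance
    line_spacing = w * (1 - side_overlap)     # raster-scan line separation
    return along_track, line_spacing

# Hypothetical compact camera (6 mm focal length, 7.6 x 5.7 mm sensor)
# flown at 200 m above ground with 80% forward and 75% side overlap.
along, lines = spacing(200, 6.0, 7.6, 5.7, 0.8, 0.75)
print(round(along, 1), round(lines, 1))
```

With these assumed numbers the UAV would trigger a photo every ~38 m along each line and space its flight lines ~63 m apart, which is the kind of calculation flight-planning software performs when generating a raster pattern.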

“Once in the field, the team verifies the flight plans before the UAV is flown by performing a pre-flight survey [which] may be done through ground observations of the area, use of local knowledge or short range aerial observations with a rotary UAV to identify launch/recovery sites and terrain characteristics. This may lead to adjustment in the flight plans. After the flight plans have been verified, the UAV is deployed for data acquisition.”

[Figure: Components of the custom fixed-wing UAV]

Matthew, Charles and team initially used a Micropilot MP-Vision UAV for data acquisition. “However, due to increased cost of maintenance and significant skill requirements of setting up the MP-Vision,” they developed their own custom UAV instead, which “uses semi-professional and hobby-grade components combined with open-source software” as depicted in the above figure (click to enlarge). “The UAV’s airframe is the Super SkySurfer fixed-wing EPO foam frame.” The team used the “ArduPilot Mega (APM) autopilot system consisting of an Arduino-based microprocessor board, airspeed sensor, pressure and temperature sensor, GPS module, triple-axis gyro and other sensors. The firmware for navigation and control is open-source.”

The custom UAV, which costs approximately $2,000, has “an endurance of about 30-50 minutes, depending on payload weight and wind conditions, and is able to survey an area of up to 4 square kilometers.” The custom platform was “easier to assemble, repair, maintain, modify & use. This allowed faster deployability of the UAV. In addition, since the autopilot firmware is open-source, with a large community of developers supporting it, it became easier to identify and address issues and obtain software updates.” That said, the custom UAV was “more prone to hardware and software errors, either due to assembly of parts, wiring of electronics or bugs in the software code.” Despite these drawbacks, “use of the custom UAV turned out to be more feasible and cost effective than use of a commercial-grade UAV.”

In terms of payloads (cameras), three different kinds were used: the Panasonic Lumix LX3, Canon S100 and GoPro Hero 3. Each camera comes with advantages and disadvantages for aerial mapping. The LX3 has better image quality, but the servo triggering its shutter would often fail. The S100 is GPS-enabled and does not require mechanical triggering. The Hero 3 was used specifically for video reconnaissance.

Screen Shot 2014-10-04 at 5.31.47 AM

“The workflow at [the Data-Processing] stage focuses on the creation of an orthomosaic—an orthorectified, georeferenced and stitched map derived from aerial images and GPS and IMU (inertial measurement unit values, particularly yaw, pitch and roll) information.” In other words, “orthorectification is the process of stretching the image to match the spatial accuracy of a map by considering location, elevation, and sensor information.”

Transforming aerial images into orthomosaics involves: (1) manually removing take-off/landing, blurry & oblique images; (2) applying contrast enhancement to images that are either over- or under-exposed using commercial image-editing software; (3) geo-referencing the resulting images; (4) creating an orthomosaic from the geo-tagged images. The geo-referencing step is not needed if the images are already geo-referenced (i.e., have GPS coordinates, like those taken with the Canon S100). “For non-georeferenced images, georeferencing is done by a custom Python script that generates a CSV file containing the mapping between images and GPS/IMU information. In this case, the images are not embedded with GPS coordinates.” The sample orthomosaic above uses 785 images taken during two UAV flights (click to enlarge).
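The paper does not reproduce the script itself, but a sketch of this kind of georeferencing step might look like the following; the field names and the nearest-timestamp matching are my own assumptions:

```python
import csv

# Hypothetical sketch of a georeferencing script: pair each image with the
# nearest autopilot log entry by capture time, then write a CSV mapping
# image -> GPS/IMU values (lat, lon, alt, yaw, pitch, roll).

def georeference(images, log_entries, out_path):
    """images: list of (filename, capture_time); log_entries: list of dicts
    with keys time, lat, lon, alt, yaw, pitch, roll."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image", "lat", "lon", "alt", "yaw", "pitch", "roll"])
        for name, t in images:
            # nearest log record in time to the moment of capture
            rec = min(log_entries, key=lambda e: abs(e["time"] - t))
            writer.writerow([name, rec["lat"], rec["lon"], rec["alt"],
                             rec["yaw"], rec["pitch"], rec["roll"]])

georeference(
    images=[("IMG_0001.jpg", 10.0), ("IMG_0002.jpg", 12.5)],
    log_entries=[
        {"time": 9.9,  "lat": 11.2430, "lon": 125.0040, "alt": 200,
         "yaw": 90.0, "pitch": 1.2, "roll": -0.5},
        {"time": 12.4, "lat": 11.2433, "lon": 125.0051, "alt": 201,
         "yaw": 91.5, "pitch": 0.8, "roll": 0.3},
    ],
    out_path="georef.csv")
```

A CSV in roughly this shape is exactly what photomapping software can consume in place of embedded EXIF coordinates, as described in the next paragraph.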

Matthew, Charles and team used the Pix4Dmapper photomapping software developed by Pix4D to render their orthomosaics. “The program can use either geotagged or non-geotagged images. For non-geotagged images, the software accepts other inputs such as the CSV file generated by the custom Python script to georeference each image and generate the photomosaic. Pix4D also outputs a report containing information about the output, such as total area covered and ground resolution. Quantum GIS, an open-source GIS software, was used for annotating and viewing the photomosaics, which can sometimes be too large to be viewed using common photo viewing software.”

Screen Shot 2014-10-03 at 11.28.20 AM

Data Delivery involves uploading the orthomosaics to a common, web-based platform that stakeholders can access. Orthomosaics “generally have large file sizes (e.g. around 300MB for a 2 sq. km. render),” so the team created a web-based geographic information system (GIS) to facilitate sharing of aerial maps. “The platform, named VEDA, allows viewing of rendered maps and adding metadata. The key advantage of using this platform is that the aerial imagery data is located in one place & can be accessed from any computer with a modern Internet browser. Before orthomosaics can be uploaded to the VEDA platform, they need to be converted into an appropriate format supported by the platform. The current format used is MBTiles developed by Mapbox. The MBTiles format specifies how to partition a map image into smaller image tiles for web access. Once uploaded, the orthomosaic map can then be annotated with additional information, such as markers for points of interest.” The screenshot above (click to enlarge) shows the layout of a rendered orthomosaic in VEDA.
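Under the hood, MBTiles is just a SQLite database with a metadata table and a tiles table keyed by zoom level, column and row (rows use the flipped TMS scheme). A minimal sketch of creating such a container, with placeholder bytes standing in for real PNG tiles:

```python
import sqlite3

# Minimal sketch of the MBTiles container format: one SQLite file holding a
# `metadata` table and a `tiles` table. The tile payload below is a
# placeholder rather than a real rendered image tile.

def create_mbtiles(path, name, tiles):
    """tiles: iterable of (zoom_level, tile_column, tile_row, image_bytes)."""
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE metadata (name TEXT, value TEXT)")
    db.execute("CREATE TABLE tiles (zoom_level INTEGER, tile_column INTEGER,"
               " tile_row INTEGER, tile_data BLOB)")
    db.executemany("INSERT INTO metadata VALUES (?, ?)",
                   [("name", name), ("format", "png"), ("type", "overlay")])
    db.executemany("INSERT INTO tiles VALUES (?, ?, ?, ?)", tiles)
    db.commit()
    return db

db = create_mbtiles(":memory:", "tacloban-orthomosaic",
                    [(15, 27137, 20522, b"\x89PNG...")])
count, = db.execute("SELECT COUNT(*) FROM tiles").fetchone()
```

Because the whole tileset lives in a single file, an orthomosaic packaged this way is easy to upload and serve, which is precisely why a web platform like VEDA would favor it over a single 300MB image.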

Matthew, Charles and team have applied the above workflow in various mission-critical UAV projects in the Philippines including damage assessment work after Typhoon Haiyan in 2013. This also included assessing the impact of the Typhoon on agriculture, which was an ongoing concern for local government during the recovery efforts. “The coconut industry, in particular, which plays a vital role in the Philippine economy, was severely impacted due to millions of coconut trees being damaged or flattened after the storm hit. In order to get an accurate assessment of the damage wrought by the typhoon, and to make a decision on the scale of recovery assistance from national government, aerial imagery coupled with a ground survey is a potentially promising approach.”

So the team received permission from the local government to “fly several missions over areas in Eastern Visayas that [were] devoted to coconut stands prior to Typhoon Haiyan.” (“The UAV field team operated mostly in rural areas and wilderness, which reduced the human risk factor in case of aircraft failure. Also, as a safety guideline, the UAV was not flown within 3 miles of an active airport.”) The partners in the Philippines are developing image processing techniques to distinguish “coconut trees from wild forest and vegetation for land use assessment and carbon source and sink estimates. One technique involved use of superpixel classification, wherein the image pixels are divided into homogeneous regions (i.e. collection of similar pixels) called superpixels which serve as the basic unit for classification.”
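To make the idea concrete, here is a deliberately simplified, dependency-free sketch of per-region classification: a regular grid of blocks stands in for the irregular superpixels a real pipeline would compute (e.g. with SLIC), and a green-channel threshold stands in for a trained classifier:

```python
# Simplified illustration of superpixel-style classification: partition an
# image into homogeneous regions, describe each region by its mean colour,
# and classify each region as vegetation ("coconut") or "other". Both the
# regular grid and the green-channel rule are stand-ins for illustration.

def classify_blocks(image, block=4, green_threshold=140):
    """image: 2-D list of (r, g, b) pixels; returns a dict mapping
    (block_row, block_col) -> 'coconut' or 'other'."""
    h, w = len(image), len(image[0])
    labels = {}
    for by in range(0, h, block):
        for bx in range(0, w, block):
            pixels = [image[y][x]
                      for y in range(by, min(by + block, h))
                      for x in range(bx, min(bx + block, w))]
            mean_g = sum(p[1] for p in pixels) / len(pixels)
            labels[(by // block, bx // block)] = (
                "coconut" if mean_g > green_threshold else "other")
    return labels

# Toy 8x8 image: left half green (canopy-like), right half grey (bare ground)
img = [[(30, 160, 40) if x < 4 else (140, 130, 120) for x in range(8)]
       for y in range(8)]
labels = classify_blocks(img)
```

The real benefit of classifying superpixels rather than raw pixels is that each region aggregates many noisy pixel measurements into one stable feature vector, which is what makes the approach attractive for distinguishing standing from fallen trees.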

Screen Shot 2014-10-03 at 11.29.07 AM

The image below shows the “results of the initial test run where areas containing coconut trees [above] have been segmented.”

Screen Shot 2014-10-03 at 11.29.23 AM

“Similar techniques could also be used for crop damage assessment after a disaster such as Typhoon Haiyan, where for example standing coconut trees could be distinguished from fallen ones in order to determine capacity to produce coconut-based products.” This is an area that my team and I at QCRI are exploring in partnership with Matthew, Charles and company. In particular, we’re interested in assessing whether crowdsourcing can be used to facilitate the development of machine learning classifiers for image feature detection. More on this here, here and on CNN here. In addition, since “aerial imagery augmented with ground observations would provide a richer source of information than either one could provide alone,” we are also exploring the integration of social media data with aerial imagery (as described here).

In conclusion, Matthew, Charles and team are looking to further develop the above framework by automating more processes, “such as image filtering and image contrast enhancement. Autonomous take-off & landing will be configured for the custom UAV in order to reduce the need for a skilled pilot. A catapult system will be created for the UAV to launch in areas with a small clearing and a parachute system will be added in order to reduce the risk of damage due to belly landings.” I very much look forward to following the team’s progress and to collaborating with them on imagery analysis for disaster response.

bio

See Also:

  • Official UN Policy Brief on Humanitarian UAVs [link]
  • Common Misconceptions About Humanitarian UAVs [link]
  • Humanitarians in the Sky: Using UAVs for Disaster Response [link]
  • Humanitarian UAVs Fly in China After Earthquake [link]
  • Humanitarian UAV Missions During Balkan Floods [link]
  • Humanitarian UAVs in the Solomon Islands [link]
  • UAVs, Community Mapping & Disaster Risk Reduction in Haiti [link]

May the Crowd Be With You

Three years ago, 167 digital volunteers and I combed through satellite imagery of Somalia to support the UN Refugee Agency (UNHCR) on this joint project. The purpose of this digital humanitarian effort was to identify how many Somalis had been displaced (easily 200,000) due to fighting and violence. Earlier this year, 239 passengers and crew went missing when Malaysia Flight 370 suddenly disappeared. In response, some 8 million digital volunteers mobilized as part of the digital search & rescue effort that followed.

May the Crowd be With You

So in the first case, 168 volunteers were looking for 200,000+ people displaced by violence and in the second case, some 8,000,000 volunteers were looking for 239 missing souls. Last year, in response to Typhoon Haiyan, digital volunteers spent 200 hours or so tagging social media content in support of the UN’s rapid disaster damage assessment efforts. According to responders at the time, some 11 million people in the Philippines were affected by the Typhoon. In contrast, well over 20,000 years of volunteer time went into the search for Flight 370’s missing passengers.
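For the curious, here is a quick sketch of the arithmetic behind that comparison, taking the reported figures at face value:

```python
# Back-of-envelope arithmetic for the skewed distribution of volunteer time,
# using the figures reported above.

HOURS_PER_YEAR = 24 * 365

haiyan_hours = 200                         # volunteer tagging time, Typhoon Haiyan
flight370_hours = 20_000 * HOURS_PER_YEAR  # "well over 20,000 years"

ratio = flight370_hours / haiyan_hours
per_volunteer = flight370_hours / 8_000_000  # ~8 million digital volunteers

print(f"Flight 370 received ~{ratio:,.0f} times more volunteer time")
print(f"which averages out to ~{per_volunteer:.0f} hours per volunteer")
```

In other words, the Flight 370 search drew close to a million times more volunteer attention than the Haiyan damage assessment, even though each individual volunteer contributed only a modest number of hours.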

What to do about this heavily skewed distribution of volunteer time? Can (or should) we do anything? Are we simply left with “May the Crowd be with You”? The massive (and as yet unparalleled) online response to Flight 370 won’t be a one-off. We’re entering an era of mass-sourcing where entire populations can be mobilized online. What happens when future mass-sourcing efforts ask digital volunteers to look for military vehicles and aircraft in satellite images taken of a mysterious, unnamed “enemy country” for unknown reasons? Think this is far-fetched? As noted in my forthcoming book, Digital Humanitarians, this online, crowdsourced military surveillance operation already took place (at least once).

As we continue heading towards this new era of mass-sourcing, those with the ability to mobilize entire populations online will indeed wield an impressive new form of power. And as millions of volunteers continue tagging and tracing various features, this volunteer-generated data combined with machine learning will be used to automate the future tagging and tracing needs of militaries and multi-billion dollar companies, thus obviating the need for large volumes of volunteers (especially handy should volunteers seek to boycott these digital operations).

At the same time, however, the rise of this artificial intelligence may level the playing field. But few players out there have ready access to high resolution satellite imagery and the actual technical expertise to turn volunteer-generated tags/traces into machine learning classifiers. To this end, perhaps one way forward is to try and “democratize” access to both satellite imagery and the technology needed to make sense of this “Big Data”. Easier said than done. But maybe less impossible than we may think. Perhaps new, disruptive initiatives like Planet Labs will help pave the way forward.

bio

Humanitarian UAVs in the Solomon Islands

The Solomon Islands experienced heavy rains and flash floods following Tropical Cyclone Ita earlier this year. Over 50,000 people were affected and dozens killed, according to ReliefWeb. Infrastructure damage was extensive; entire houses were washed away and thousands lost their food gardens.

solomonfloods

Disaster responders used a rotary-wing UAV (an “Oktocopter”) to assist with the damage assessment efforts. More specifically, the UAV was used to assess the extent of the flood damage in the most affected area along Mataniko River.

Solomons UAV

The UAV was also used to map an area proposed for resettlement. In addition, the UAV was flown over a dam to assess potential damage. These flights were pre-programmed and thus autonomous. (Here’s a quick video demo on how to program UAV flights for disaster response). The UAV was flown at 110 meters altitude in order to capture very high-resolution imagery. “The altitude of 110m also allowed for an operation below the traditional air space and ensured a continuous visibility of the UAV from the starting / landing position.”

Screen Shot 2014-09-25 at 4.04.49 AM

While responders faced several challenges with the UAV, they nevertheless stated that “The UAV was extremely useful for the required mapping” (PDF). Some of these challenges included the limited availability of batteries, which restricted the number of UAV flights. The wind also posed a challenge.

Solomons Analysis

Responders took more than 800 pictures (during one 17-minute flight) over the above area, which was proposed for resettlement. About 10% of these images were then stitched together to form the mosaic displayed above. The result below depicts flooded areas along Mataniko River. According to responders, “This image data can be utilized to demonstrate the danger of destruction to people who start to resettle in the Mataniko River Valley. These very high resolution images (~ 3 to 5 cm) show details such as destroyed cars, parts of houses, etc. which demonstrate the force of the high water.” In sum, “The maps together with the images of the river could be utilized to raise awareness not to settle again in these areas.”
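As a side note, that ~3 to 5 cm figure follows directly from the flying height and camera geometry via the ground sampling distance (GSD), i.e. the size of one pixel on the ground. A minimal sketch, with illustrative camera values since the report does not specify the camera used:

```python
# Sketch: ground sampling distance (GSD) from flying height and camera
# geometry. The camera values below are illustrative assumptions, not
# specifications from the Solomon Islands mission report.

def gsd_cm(altitude_m, focal_mm, sensor_width_mm, image_width_px):
    """Ground size of one pixel, in centimetres, for a nadir-pointing camera."""
    return (altitude_m * 100) * sensor_width_mm / (focal_mm * image_width_px)

# e.g. a small-format camera flown at the reported 110 m altitude
print(f"{gsd_cm(110, 15.0, 23.5, 6000):.1f} cm/pixel")
```

Flying lower or using a longer focal length shrinks the GSD, which is why the low operating altitude of a small UAV yields detail (destroyed cars, parts of houses) that satellites cannot match.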

Screen Shot 2014-09-25 at 4.14.33 AM

Images taken of the dam were used to create the Digital Terrain Model (DTM) below. This enables responders to determine areas where the dam is most likely to overflow due to damage or future floods.
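Given a DTM, flagging candidate overflow points amounts to a threshold query over the elevation grid. A minimal sketch, with an invented crest grid and flood level purely for illustration:

```python
# Sketch: using a DTM grid to flag likely overflow points along a dam crest.
# Cells whose elevation falls at or below a projected flood level are
# candidates for reinforcement. The grid and flood level are invented.

def overflow_cells(dtm, flood_level_m):
    """dtm: 2-D list of crest elevations in metres; returns (row, col)
    positions at or below the projected water level."""
    return [(r, c)
            for r, row in enumerate(dtm)
            for c, z in enumerate(row)
            if z <= flood_level_m]

crest = [[12.4, 12.1, 11.8],
         [12.3, 11.6, 11.9],
         [12.5, 12.2, 12.0]]
weak_points = overflow_cells(crest, flood_level_m=11.8)
```

Real DTM rasters are of course far denser and would be queried with GIS tooling, but the principle of locating the lowest crest cells relative to a projected water level is the same.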

Screen Shot 2014-09-25 at 4.15.40 AM

The result of this DTM analysis enables responders to target the placement of rubber mats fixed with sand bags around the dam’s most vulnerable points.

Solomons Dam

In conclusion, disaster responders write that the use of “UAVs for data acquisition can be highly recommended. The flexibility of an UAV can be of high benefit for mapping purposes, especially in cases where fast data acquisition is desired, e.g. natural hazards. An important advantage of a UAV as platform is that image data recording is performed at low height and not disturbed by cloud cover. In theory a fixed-wing UAV might be more efficient for rapid mapping. However, the DTM applications would not be possible in this resolution with a fixed wing UAV. Notably due to the flexibility for potential starting and landing areas and the handling of the topography characterized by steep valleys and obstacles such as power lines between mountain tops within the study area. Especially within the flooded areas a spatially sufficient start and land area for fixed wing UAVs would have been hard to identify.”

Bio

See Also:

  • Official UN Policy Brief on Humanitarian UAVs [link]
  • Common Misconceptions About Humanitarian UAVs [link]
  • Humanitarians in the Sky: Using UAVs for Disaster Response [link]
  • Crisis Map of UAV Videos for Disaster Response [link]
  • Humanitarian UAV Missions During Balkan Floods [link]
  • UAVs, Community Mapping & Disaster Risk Reduction in Haiti [link]

Piloting MicroMappers: How to Become a Digital Ranger in Namibia (Revised!)

Many thanks to all of you who have signed up to search and protect Namibia’s beautiful wildlife! (There’s still time to sign up here; you’ll receive an email on Friday, September 26th with the link to volunteer).

Our MicroMappers Wildlife Challenge will launch on Friday, September 26th and run through Sunday, September 28th. More specifically, we’ll begin the search for Namibia’s wildlife at 12noon Namibia time that Friday (which is 12noon Geneva, 11am London, 6am New York, 6pm Shanghai, 7pm Tokyo, 8pm Sydney). You can join the expedition at any time after this. Simply add yourself to this list-serve to participate. Anyone who can get online can be a digital ranger, no prior experience necessary. We’ll continue our digital search until sunset on Sunday evening.

Namibia Map 1

As noted here, rangers at Kuzikus Wildlife Reserve need our help to find wild animals in their reserve. This will help our ranger friends to better protect these beautiful animals from poachers and other threats. According to the rangers, “Rhino poaching continues to be a growing problem that threatens to extinguish some rhino species within a decade or two. Rhino monitoring is thus important for their protection. Using digital maps in combination with MicroMappers to trace aerial images of rhinos could greatly improve rhino monitoring efforts.”

NamibiaMap2
At 12noon Namibia time on Friday, September 26th, we’ll send an email to the above list-serve with the link to our MicroMappers Aerial Clicker, which we’ll use to crowdsource the search for Namibia’s wildlife. We’ll also publish a blog post on MicroMappers.org with the link. Here’s what the Clicker looks like (click to enlarge the Clicker):

MM Aerial Clicker Namibia

When we find animals, we’ll draw “digital shields” around them. Before we show you how to draw these shields and what types of animals we’ll be looking for, here are examples of helpful shields (versus unhelpful ones); note that we’ve had to change these instructions, so please review them carefully! 

MM Rihno Zoom

This looks like two animals! So let’s draw two shields.

MM Rhine New YES

The white outlines are the shields that we drew using the Aerial Clicker above. Notice that our shields include the shadows of the animals; this is important. If the animals are close to each other, the shields can overlap, but there can only be one shield per animal (one shield per rhino in this case : )

MM Rhino New NO

These shields are too close to the animals, please give them more room!

MM Rhino No
These shields are too big.

If you’ve found something that may be an animal but you’re not sure, then please draw a shield anyway just in case. Don’t worry if most pictures don’t have any animals. Knowing where the animals are not is just as important as knowing where they are!

MM Giraffe Zoom

This looks like a giraffe! So let’s draw a shield.

MM Giraffe No2

This shield does not include the giraffe’s shadow! So let’s try again.

MM Giraffe No

This shield is too large. Let’s try again!

MM Giraffe New YES

Now that’s perfect!

Here are some more pictures of animals that we’ll be looking for. As a digital ranger, you’ll simply need to draw shields around these animals; that’s all there is to it. The shields can overlap if need be, but remember: one shield per animal, include their shadows and give them some room to move around : )

MM Ostritch

Can you spot the ostriches? Click the picture above to enlarge. You’ll be able to zoom in with the Aerial Clicker during the Wildlife Challenge.

MM Oryx

Can you spot the five oryxes in the picture above? (Actually, there may be a sixth one; can you see it in the shadows?)

MM Impala

And the impalas on the left of the picture? Again, you’ll be able to zoom in with the Aerial Clicker.

So how exactly does this Aerial Clicker work? Here’s a short video that shows just how easy it is to draw a digital shield using the Clicker (note that we’ve had to change the instructions, so please review this video carefully!):

Thanks for reading and for watching! The results of this expedition will help rangers in Namibia make sure they have found all the animals, which is important for their wildlife protection efforts. We’ll have thousands of aerial photographs to search through next week, which means that our ranger friends in Namibia need as much help as possible! So this is where you come in: please spread the word and invite your friends, families and colleagues to search and protect Namibia’s beautiful wildlife.

MicroMappers is a joint project with the United Nations (OCHA), and the purpose of this pilot is also to test the Aerial Clicker for future humanitarian response efforts. More here. Any questions or suggestions? Feel free to email me at patrick@iRevolution.net or add them in the comments section below.

Thank you!