Low-Cost UAV Applications for Post-Disaster Assessments: A Streamlined Workflow

Colleagues Matthew Cua, Charles Devaney and others recently co-authored this excellent study on their latest use of low-cost UAVs/drones for post-disaster assessments, and for environmental and infrastructure development. They describe the “streamlined workflow—flight planning and data acquisition, post-processing, data delivery and collaborative sharing” that they created “to deliver acquired images and orthorectified maps to various stakeholders within [their] consortium” of partners in the Philippines. They conclude from direct hands-on experience that “the combination of aerial surveys, ground observations and collaborative sharing with domain experts results in richer information content and a more effective decision support system.”

[Figure: Aerial imaging workflow for acquiring, post-processing and delivering UAV imagery]

“The rapid development of unmanned aerial vehicle (UAV) technology has enabled greater use of UAVs as remote sensing platforms to complement satellite and manned aerial remote sensing systems.” Indeed, UAVs have become “an effective tool for targeted remote sensing operations in areas that are inaccessible to conventional manned aerial platforms due to logistic and human constraints.” The figure above (click to enlarge) depicts the aerial imaging workflow developed by the co-authors to generate and disseminate post-processed images. This workflow, the main components of which are “Flight Planning & Data Acquisition,” “Data Post-Processing” and “Data Delivery,” will “continuously be updated, with the goal of automating more activities in order to increase processing speed, reduce cost and minimize human error.”

[Figure: UAV flight plan of the coastal section of Tacloban City, generated with APM Mission Planner]

Flight Planning simply means developing a flight plan based on clearly defined data needs. The screenshot above (click to enlarge) is a “UAV flight plan of the coastal section of Tacloban city, Leyte generated using APM Mission Planner. The [flight] plan involved flying a small UAV 200 meters above ground level. The raster scan pattern indicated by the yellow line was designed to take images with 80% overlap & 75% side overlap. The waypoints indicating a change in direction of the UAV are shown as green markers.” The purpose of the overlap is to allow the images to be stitched and accurately geo-referenced during post-processing. A video on how to program UAV flights, focused specifically on post-disaster assessments in the Philippines, is available here.
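To make those overlap numbers concrete, here is a rough back-of-the-envelope spacing calculation in Python. The 200-meter altitude and the 80%/75% overlaps come from the flight plan quoted above; the camera parameters (roughly those of a Canon S100-class compact) and the camera orientation are my own assumptions, so treat the outputs as illustrative only.

    # Back-of-the-envelope flight-line spacing from overlap requirements.
    # Altitude and overlaps are taken from the flight plan above; the sensor
    # size, focal length and pixel count are assumed values for a compact
    # camera, and the camera's long axis is assumed to point across-track.

    def footprint(altitude_m, sensor_mm, focal_mm):
        """Ground distance covered by one image dimension at a given altitude."""
        return altitude_m * sensor_mm / focal_mm

    ALT = 200.0                        # flying height above ground level (m)
    SENSOR_W, SENSOR_H = 7.44, 5.58    # assumed sensor dimensions (mm)
    FOCAL = 5.2                        # assumed focal length at widest zoom (mm)
    IMG_W_PX = 4000                    # assumed image width (pixels)
    FWD_OVERLAP, SIDE_OVERLAP = 0.80, 0.75

    across = footprint(ALT, SENSOR_W, FOCAL)   # ground width of one frame (m)
    along = footprint(ALT, SENSOR_H, FOCAL)    # ground height of one frame (m)

    line_spacing = across * (1 - SIDE_OVERLAP)   # distance between raster lines
    photo_spacing = along * (1 - FWD_OVERLAP)    # distance between shutter triggers
    gsd_cm = (SENSOR_W / IMG_W_PX) / FOCAL * ALT * 100   # ground sample distance

    print(f"Footprint:            {across:.0f} m x {along:.0f} m")
    print(f"Flight line spacing:  {line_spacing:.0f} m")
    print(f"Trigger spacing:      {photo_spacing:.0f} m")
    print(f"Approx. GSD:          {gsd_cm:.1f} cm/pixel")

With these assumed parameters, the flight lines end up roughly 70 meters apart and the shutter fires about every 40 meters of forward travel, at a ground resolution of around 7 cm per pixel.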

“Once in the field, the team verifies the flight plans before the UAV is flown by performing a pre-flight survey [which] may be done through ground observations of the area, use of local knowledge or short range aerial observations with a rotary UAV to identify launch/recovery sites and terrain characteristics. This may lead to adjustment in the flight plans. After the flight plans have been verified, the UAV is deployed for data acquisition.”

[Figure: Components of the team’s custom-built UAV]

Matthew, Charles and team initially used a Micropilot MP-Vision UAV for data acquisition. “However, due to increased cost of maintenance and significant skill requirements of setting up the MP-Vision,” they developed their own custom UAV instead, which “uses semi-professional and hobby-grade components combined with open-source software” as depicted in the above figure (click to enlarge). “The UAV’s airframe is the Super SkySurfer fixed-wing EPO foam frame.” The team used the “ArduPilot Mega (APM) autopilot system consisting of an Arduino-based microprocessor board, airspeed sensor, pressure and temperature sensor, GPS module, triple-axis gyro and other sensors. The firmware for navigation and control is open-source.”

The custom UAV, which costs approximately $2,000, has “an endurance of about 30-50 minutes, depending on payload weight and wind conditions, and is able to survey an area of up to 4 square kilometers.” The custom platform was “easier to assemble, repair, maintain, modify & use. This allowed faster deployability of the UAV. In addition, since the autopilot firmware is open-source, with a large community of developers supporting it, it became easier to identify and address issues and obtain software updates.” That said, the custom UAV was “more prone to hardware and software errors, either due to assembly of parts, wiring of electronics or bugs in the software code.” Despite these drawbacks, “use of the custom UAV turned out to be more feasible and cost effective than use of a commercial-grade UAV.”

In terms of payloads (cameras), three different models were used: the Panasonic Lumix LX3, the Canon S100 and the GoPro Hero 3. Each comes with advantages and disadvantages for aerial mapping. The LX3 has better image quality, but the servo triggering its shutter would often fail. The S100 is GPS-enabled and does not require mechanical triggering. The Hero 3 was used specifically for video reconnaissance.

[Figure: Sample orthomosaic stitched from 785 images taken during two UAV flights]

“The workflow at [the Data-Processing] stage focuses on the creation of an orthomosaic—an orthorectified, georeferenced and stitched map derived from aerial images and GPS and IMU (inertial measurement unit values, particularly yaw, pitch and roll) information.” In other words, “orthorectification is the process of stretching the image to match the spatial accuracy of a map by considering location, elevation, and sensor information.”
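As a simplified illustration of what geo-referencing an aerial frame involves, the sketch below projects an image footprint onto flat ground from the camera’s position and heading. Real orthorectification also uses elevation data and the full pitch/roll attitude; the nadir-camera, flat-terrain assumption, the coordinates and the footprint size here are mine, chosen only to make the idea tangible.

    # Simplified view of geo-referencing: project the footprint of a nadir
    # photo onto flat ground from the aircraft position and heading. Real
    # orthorectification also corrects for terrain elevation and pitch/roll.
    import math

    def ground_corners(lat, lon, yaw_deg, footprint_w_m, footprint_h_m):
        """Approximate lat/lon of the four image corners of a nadir photo."""
        yaw = math.radians(yaw_deg)    # heading, here measured counter-clockwise
        corners = []
        for dx, dy in [(-0.5, -0.5), (0.5, -0.5), (0.5, 0.5), (-0.5, 0.5)]:
            # Corner offsets in metres in the camera frame, rotated by yaw.
            x, y = dx * footprint_w_m, dy * footprint_h_m
            east = x * math.cos(yaw) - y * math.sin(yaw)
            north = x * math.sin(yaw) + y * math.cos(yaw)
            # Convert metre offsets to degrees (small-angle approximation).
            corners.append((lat + north / 111_320,
                            lon + east / (111_320 * math.cos(math.radians(lat)))))
        return corners

    # Illustrative values: a frame at 200 m with a ~286 m x 215 m footprint.
    print(ground_corners(11.24, 125.0, yaw_deg=30,
                         footprint_w_m=286, footprint_h_m=215))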

Transforming aerial images into orthomosaics involves: (1) manually removing take-off/landing, blurry and oblique images; (2) applying contrast enhancement to images that are either over- or under-exposed, using commercial image-editing software; (3) geo-referencing the resulting images; and (4) creating an orthomosaic from the geo-tagged images. The geo-referencing step is not needed if the images are already geo-referenced (i.e., already contain GPS coordinates), like those taken with the Canon S100. “For non-georeferenced images, georeferencing is done by a custom Python script that generates a CSV file containing the mapping between images and GPS/IMU information. In this case, the images are not embedded with GPS coordinates.” The sample orthomosaic above uses 785 images taken during two UAV flights (click to enlarge).
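The team’s actual script isn’t published, but a minimal sketch of this kind of image-to-telemetry matching might look like the following. The log format, column names, file names and nearest-timestamp matching rule are all assumptions for illustration.

    # Sketch of matching each image to the nearest autopilot telemetry sample
    # and writing the image-to-GPS/IMU mapping as a CSV. The log format,
    # column names and matching rule are assumptions.
    import csv
    from bisect import bisect_left

    def load_telemetry(path):
        """Read log rows as (unix_time, lat, lon, alt, yaw, pitch, roll)."""
        rows = []
        with open(path, newline="") as f:
            for r in csv.DictReader(f):
                rows.append(tuple(float(r[k]) for k in
                                  ("time", "lat", "lon", "alt", "yaw", "pitch", "roll")))
        rows.sort()
        return rows

    def nearest_fix(telemetry, t):
        """Return the telemetry sample closest in time to capture time t."""
        times = [row[0] for row in telemetry]
        i = bisect_left(times, t)
        candidates = telemetry[max(i - 1, 0):i + 1]
        return min(candidates, key=lambda row: abs(row[0] - t))

    def write_mapping(captures, telemetry, out_path):
        """captures maps image filename -> capture time (e.g. parsed from EXIF)."""
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["image", "lat", "lon", "alt", "yaw", "pitch", "roll"])
            for name, t in sorted(captures.items()):
                _, lat, lon, alt, yaw, pitch, roll = nearest_fix(telemetry, t)
                writer.writerow([name, lat, lon, alt, yaw, pitch, roll])

    # Hypothetical usage:
    # telemetry = load_telemetry("apm_log.csv")
    # write_mapping({"IMG_0001.JPG": 1380772561.0}, telemetry, "geotags.csv")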

Matthew, Charles and team used the Pix4Dmapper photomapping software developed by Pix4D to render their orthomosaics. “The program can use either geotagged or non-geotagged images. For non-geotagged images, the software accepts other inputs such as the CSV file generated by the custom Python script to georeference each image and generate the photomosaic. Pix4D also outputs a report containing information about the output, such as total area covered and ground resolution. Quantum GIS, an open-source GIS software, was used for annotating and viewing the photomosaics, which can sometimes be too large to be viewed using common photo viewing software.”

[Figure: A rendered orthomosaic displayed in the VEDA platform]

Data Delivery involves uploading the orthomosaics to a common, web-based platform that stakeholders can access. Orthomosaics “generally have large file sizes (e.g., around 300 MB for a 2 sq. km. render),” so the team created a web-based geographic information system (GIS) to facilitate sharing of aerial maps. “The platform, named VEDA, allows viewing of rendered maps and adding metadata. The key advantage of using this platform is that the aerial imagery data is located in one place & can be accessed from any computer with a modern Internet browser. Before orthomosaics can be uploaded to the VEDA platform, they need to be converted into an appropriate format supported by the platform. The current format used is MBTiles, developed by Mapbox. The MBTiles format specifies how to partition a map image into smaller image tiles for web access. Once uploaded, the orthomosaic map can then be annotated with additional information, such as markers for points of interest.” The screenshot above (click to enlarge) shows the layout of a rendered orthomosaic in VEDA.
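The paper doesn’t say which converter the team used to produce the MBTiles (Mapbox tooling such as TileMill was common at the time). One possible route, sketched below with the GDAL command-line tools driven from Python, reprojects the orthomosaic to Web Mercator, writes it into an MBTiles container and builds overview tiles; the file names are placeholders.

    # One possible orthomosaic-to-MBTiles pipeline using the GDAL tools.
    import subprocess

    def orthomosaic_to_mbtiles(src_tif, dst_mbtiles, tmp_tif="ortho_3857.tif"):
        # 1. Reproject to Web Mercator, the projection MBTiles tile pyramids use.
        subprocess.run(["gdalwarp", "-overwrite", "-t_srs", "EPSG:3857",
                        src_tif, tmp_tif], check=True)
        # 2. Write the base zoom level into an MBTiles container.
        subprocess.run(["gdal_translate", "-of", "MBTILES", tmp_tif, dst_mbtiles],
                       check=True)
        # 3. Build overview (lower-zoom) tiles so the map loads quickly zoomed out.
        subprocess.run(["gdaladdo", "-r", "average", dst_mbtiles,
                        "2", "4", "8", "16"], check=True)

    # orthomosaic_to_mbtiles("tacloban_ortho.tif", "tacloban_ortho.mbtiles")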

Matthew, Charles and team have applied the above workflow in various mission-critical UAV projects in the Philippines, including damage assessment work after Typhoon Haiyan in 2013. This also included assessing the impact of the typhoon on agriculture, an ongoing concern for local government during the recovery efforts. “The coconut industry, in particular, which plays a vital role in the Philippine economy, was severely impacted due to millions of coconut trees being damaged or flattened after the storm hit. In order to get an accurate assessment of the damage wrought by the typhoon, and to make a decision on the scale of recovery assistance from national government, aerial imagery coupled with a ground survey is a potentially promising approach.”

So the team “received permission from local government to fly several missions over areas in Eastern Visayas that [were] devoted to coconut stands prior to Typhoon Haiyan.” (For safety, “The UAV field team operated mostly in rural areas and wilderness, which reduced the human risk factor in case of aircraft failure. Also, as a safety guideline, the UAV was not flown within 3 miles from an active airport.”) The partners in the Philippines are developing image processing techniques to distinguish “coconut trees from wild forest and vegetation for land use assessment and carbon source and sink estimates. One technique involved use of superpixel classification, wherein the image pixels are divided into homogeneous regions (i.e. collection of similar pixels) called superpixels which serve as the basic unit for classification.”
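As a hedged illustration of the superpixel idea (the paper does not say which algorithm or library the team used), the sketch below runs SLIC from scikit-image on an aerial tile and computes one very simple feature, the mean colour, per superpixel. The file names are placeholders.

    # Superpixel segmentation of an aerial tile with SLIC (scikit-image),
    # plus one simple per-superpixel feature: the mean colour.
    import numpy as np
    from skimage import io
    from skimage.segmentation import slic, mark_boundaries

    image = io.imread("ortho_tile.png")[:, :, :3]   # placeholder aerial tile

    # Group pixels into roughly 500 homogeneous regions (superpixels).
    segments = slic(image, n_segments=500, compactness=10, start_label=0)

    # One feature vector per superpixel: its mean RGB colour.
    features = np.array([image[segments == label].mean(axis=0)
                         for label in np.unique(segments)])
    print(features.shape)   # (number of superpixels, 3)

    # Save a visualisation of the superpixel boundaries.
    io.imsave("superpixels.png",
              (mark_boundaries(image, segments) * 255).astype(np.uint8))

These per-superpixel features would then be fed to a classifier that labels each region, for example, as coconut stand or other vegetation.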

[Figure: Aerial image containing coconut stands, used for the superpixel classification test]

The image below shows the “results of the initial test run where areas containing coconut trees [above] have been segmented.”

[Figure: Results of the initial test run, with areas containing coconut trees segmented]

“Similar techniques could also be used for crop damage assessment after a disaster such as Typhoon Haiyan, where for example standing coconut trees could be distinguished from fallen ones in order to determine capacity to produce coconut-based products.” This is an area that my team and I at QCRI are exploring in partnership with Matthew, Charles and company. In particular, we’re interested in assessing whether crowdsourcing can be used to facilitate the development of machine learning classifiers for image feature detection. More on this here and here, and on CNN here. In addition, since “aerial imagery augmented with ground observations would provide a richer source of information than either one could provide alone,” we are also exploring the integration of social media data with aerial imagery (as described here).
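To sketch how crowdsourced labels could feed such a classifier, the example below trains a random forest on per-superpixel features (for instance the mean colours from the earlier sketch). The feature and label files, the label scheme and the choice of model are all my own illustrative assumptions, not the approach used by the authors or by QCRI.

    # Training a classifier over per-superpixel features using crowdsourced
    # labels. Feature/label files, label scheme and model are illustrative.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    features = np.load("superpixel_features.npy")   # e.g. mean colours per superpixel
    labels = np.load("crowd_labels.npy")            # e.g. 0=other, 1=standing, 2=fallen

    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.25, random_state=0)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")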

In conclusion, Matthew, Charles and team are looking to further develop the above framework by automating more processes, “such as image filtering and image contrast enhancement. Autonomous take-off & landing will be configured for the custom UAV in order to reduce the need for a skilled pilot. A catapult system will be created for the UAV to launch in areas with a small clearing and a parachute system will be added in order to reduce the risk of damage due to belly landings.” I very much look forward to following the team’s progress and to collaborating with them on imagery analysis for disaster response.


See Also:

  • Official UN Policy Brief on Humanitarian UAVs [link]
  • Common Misconceptions About Humanitarian UAVs [link]
  • Humanitarians in the Sky: Using UAVs for Disaster Response [link]
  • Humanitarian UAVs Fly in China After Earthquake [link]
  • Humanitarian UAV Missions During Balkan Floods [link]
  • Humanitarian UAVs in the Solomon Islands [link]
  • UAVs, Community Mapping & Disaster Risk Reduction in Haiti [link]
