The tragic 6.4 magnitude earthquake struck southern Taiwan shortly before 4 in the morning on Saturday, February 6th. Later in the day, aerial robots were used to capture aerial videos and images of the disaster damage, like those below.
Within 10 hours of the earthquake, Dean Hosp at Taiwan’s National Cheng Kung University used screenshots of aerial videos posted to YouTube by various media outlets to create the 3D model below. In other words, Dean worked from “second-hand” data, which is why the model is low resolution; having first-hand access to the original imagery would enable a far higher-resolution 3D model. Says Dean: “If I can fly myself, results can produce more fine and faster.”
Click the images below to enlarge.
Update: About 48 hours after the earthquake, Dean and team used their own UAV to create the much higher-resolution version below, which they also annotated (click to enlarge).
Here’s the embedded 3D model:
These 3D models were processed using AgiSoft PhotoScan and then uploaded to Sketchfab on the same day the earthquake struck. I’ve blogged about Sketchfab in the past—see this first-ever 3D model of a refugee camp, for example. A few weeks ago, Sketchfab added a Virtual Reality feature to their platform, so I just tried this out on the above model.
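The screenshot-to-model workflow described above is, in essence, structure-from-motion photogrammetry: software like AgiSoft PhotoScan matches features across many overlapping views to reconstruct a 3D surface, which is why consecutive frames need substantial overlap (figures around 70–80% are commonly recommended). As a purely illustrative sketch of that overlap constraint (none of these numbers or function names come from Dean’s actual pipeline), one can estimate how many video frames to skip between photogrammetry inputs for a given flight:

```python
import math

def frame_step(altitude_m, fov_deg, speed_mps, fps, overlap=0.75):
    """Estimate how many video frames to skip between photogrammetry inputs.

    Assumes a nadir-pointing camera, so one frame's along-track ground
    footprint is 2 * altitude * tan(fov / 2). To keep `overlap` fraction
    shared between consecutive sampled frames, the drone may travel
    footprint * (1 - overlap) metres between them.
    """
    footprint_m = 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)
    spacing_m = footprint_m * (1 - overlap)       # allowed travel between frames
    seconds_between = spacing_m / speed_mps       # time to cover that distance
    return max(1, round(seconds_between * fps))   # convert to a frame step

# e.g. 100 m altitude, 60-degree field of view, 5 m/s, 30 fps video:
step = frame_step(100, 60, 5, 30)
```

The point of the sketch is simply that slower, lower flights (or faster video) let you thin the frames aggressively, while fast high-altitude passes need nearly every frame to preserve overlap.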
The model appears equally crisp when viewed in VR mode on a mobile device (using Google Cardboard in my case). Simply open this page on your mobile device to view the disaster damage in Virtual Reality.
This is a good first step vis-à-vis VR applications. As a second step, we need to develop 3D disaster ontologies to ensure that imagery analysts interpret 3D models in the same way. As a third step, we need to combine VR headsets with wearable technology that enables the end-user to annotate (or draw on) the 3D models directly within the same VR environment. This would make the damage assessment process more intuitive while also producing 3D training data for the purposes of machine learning—and thus automated feature detection.
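To make the second and third steps concrete: annotations drawn on a 3D model in VR would need both a shared damage vocabulary (the ontology) and a machine-readable form before they could serve as training data. A hypothetical sketch of what such a record might look like (the label set, field names, and structure here are my own assumptions, not an established standard):

```python
import json
from dataclasses import dataclass, asdict
from enum import Enum

class DamageLabel(str, Enum):
    # A toy damage vocabulary; a real disaster ontology would be agreed
    # upon by imagery analysts so labels are interpreted consistently.
    COLLAPSED = "collapsed"
    PARTIALLY_COLLAPSED = "partially_collapsed"
    INTACT = "intact"

@dataclass
class Annotation3D:
    label: DamageLabel
    vertices: list   # (x, y, z) points outlining the region on the mesh
    annotator: str   # who drew the annotation in the VR environment

def to_training_record(ann):
    """Serialize one VR annotation into a JSON record for model training."""
    record = asdict(ann)
    record["label"] = ann.label.value
    return json.dumps(record)

# e.g. an analyst outlines a collapsed structure on the mesh:
rec = to_training_record(
    Annotation3D(DamageLabel.COLLAPSED, [(0, 0, 0), (1, 0, 0), (1, 1, 0)], "analyst_1")
)
```

The design point is that the ontology constrains the labels while the geometry ties each label to a specific region of the reconstructed mesh—exactly the pairing a supervised feature-detection model would be trained on.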
I’m still actively looking for a VR platform that will enable this, so please do get in touch if you know of any group, company, research institute, etc., that would be interested in piloting the 3D analysis of disaster damage from the Taiwan or Nepal Earthquakes entirely within a VR solution. Thank you.
Click here to view 360 aerial visual panoramas of the disaster damage.
Many thanks to Sebastien Hodapp for pointing me to the Taiwan model.