Crowds—rather than sole individuals—are increasingly bearing witness to disasters large and small. Instagram users, for example, snapped 800,000 #Sandy pictures during the hurricane last year. One way to make sense of this vast volume and velocity of multimedia content—Big Data—during disasters is with PhotoSynth, as blogged here. Another, perhaps more sophisticated, approach would be to use CrowdOptic, which automatically zeros in on the specific location that eyewitnesses are looking at when using their smartphones to take pictures or record videos.
How does it work? CrowdOptic simply triangulates line-of-sight intersections using sensor metadata from pictures and videos taken with a smartphone. The basic approach is depicted in the figure below. The area where the lines of sight intersect is called a focal cluster, and CrowdOptic automatically identifies the location of these clusters.
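To make the triangulation idea concrete, here is a minimal sketch (in Python) of how two lines of sight might be intersected, assuming each photo's metadata carries a GPS fix and a compass bearing. CrowdOptic's actual algorithm is proprietary; the flat-plane projection, function names and sample coordinates below are simply my own illustration.

```python
import math

def to_local_xy(lat, lon, lat0, lon0):
    """Project lat/lon (degrees) to metres on a flat plane centred at (lat0, lon0).
    Good enough for the short distances involved in a focal cluster."""
    R = 6371000.0  # Earth radius in metres
    x = math.radians(lon - lon0) * R * math.cos(math.radians(lat0))
    y = math.radians(lat - lat0) * R
    return x, y

def bearing_to_vector(bearing_deg):
    """Compass bearing (degrees clockwise from north) -> unit (x, y) direction."""
    b = math.radians(bearing_deg)
    return math.sin(b), math.cos(b)

def line_of_sight_intersection(obs_a, obs_b):
    """Each observation is (lat, lon, compass_bearing_deg).
    Returns the (x, y) intersection of the two sight lines in local metres,
    or None if the bearings are (nearly) parallel."""
    lat0, lon0 = obs_a[0], obs_a[1]            # use the first camera as the local origin
    p1 = to_local_xy(obs_a[0], obs_a[1], lat0, lon0)
    p2 = to_local_xy(obs_b[0], obs_b[1], lat0, lon0)
    d1 = bearing_to_vector(obs_a[2])
    d2 = bearing_to_vector(obs_b[2])

    # Solve p1 + t*d1 == p2 + s*d2 for t using 2-D cross products.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None                            # lines of sight never cross
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return p1[0] + t * d1[0], p1[1] + t * d1[1]

# Two hypothetical eyewitnesses ~200 m apart, both pointing at the same scene.
print(line_of_sight_intersection((35.3300, -97.4800, 45.0),
                                 (35.3300, -97.4778, 315.0)))
```

With more than two eyewitnesses, the pairwise intersections could then be grouped (for example by averaging nearby intersection points) to estimate where the focal cluster actually sits.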
“Once a crowd’s point of focus is determined, any content generated by that point of focus is automatically authenticated, and a relative significance is assigned based on CrowdOptic’s focal data attributes […].” These include: (1) Number of Viewers; (2) Location of Focus; (3) Distance to Epicenter; (4) Cluster Timestamp, Duration; and (5) Cluster Creation, Dissipation Speed. CrowdOptic can also be used on live streams and archival images and videos. Once a cluster is identified, the best images/videos pointing to this cluster are automatically selected.
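For illustration only, the focal data attributes listed above could be captured in a simple data structure along the following lines; the field names, units and the toy significance weighting are my own guesses, not CrowdOptic's.

```python
from dataclasses import dataclass

@dataclass
class FocalCluster:
    """Container for the focal data attributes listed above (illustrative only)."""
    num_viewers: int                 # (1) Number of Viewers
    focus_lat: float                 # (2) Location of Focus
    focus_lon: float
    distance_to_epicenter_m: float   # (3) Distance to Epicenter
    created_at: float                # (4) Cluster Timestamp (unix seconds)
    duration_s: float                #     Duration
    dissipation_speed: float         # (5) Cluster Creation, Dissipation Speed

def relative_significance(c: FocalCluster) -> float:
    """Toy scoring: more viewers, a longer-lived cluster and proximity to the
    epicenter all raise a cluster's significance. The weights are made up."""
    proximity = 1.0 / (1.0 + c.distance_to_epicenter_m / 1000.0)
    return c.num_viewers * 0.5 + (c.duration_s / 60.0) * 0.3 + proximity * 10.0
```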
Clearly, all this could have important applications for disaster response and information forensics. My colleagues and I recently collected over 12,000 Instagram pictures and more than 5,000 YouTube videos posted to Twitter during the first 48 hours of the Oklahoma Tornado. These could be uploaded to CrowdOptic for cluster identification. Any focal cluster with several viewers would almost certainly be authentic, particularly if the time-stamps are similar. These clusters could then be tagged by digital humanitarian volunteers according to whether they depict evidence of disaster damage. Indeed, we could have tested CrowdOptic during the disaster response efforts we carried out for the United Nations following the devastating Philippines Typhoon. Perhaps CrowdOptic could facilitate rapid damage assessments in the future. Of course, the value of CrowdOptic ultimately depends on the volume of geotagged images and videos shared on social media and the Web.
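As a rough sketch of that authenticity heuristic, a focal cluster could be flagged as likely authentic when several distinct eyewitnesses converge on it within a narrow time window. The thresholds below are arbitrary placeholders, not anything CrowdOptic itself prescribes.

```python
from datetime import datetime, timedelta

def likely_authentic(cluster_items, min_viewers=3, max_time_spread=timedelta(minutes=10)):
    """cluster_items: list of dicts like {"viewer_id": ..., "timestamp": datetime}.
    A cluster counts as likely authentic when several distinct viewers point at
    it within a narrow time window (the heuristic argued above)."""
    viewers = {item["viewer_id"] for item in cluster_items}
    if len(viewers) < min_viewers:
        return False
    times = sorted(item["timestamp"] for item in cluster_items)
    return (times[-1] - times[0]) <= max_time_spread
```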
I once wrote a blog post entitled, “Wag the Dog, or How Falsifying Crowdsourced Data Can Be a Pain.” While a single image or video could certainly be falsified, faking several focal clusters of multimedia content with dozens of viewers each would probably require the organizational capacity of a small movie production or commercial shoot. So I’m in touch with the CrowdOptic team to explore the possibility of carrying out a proof of concept based on the multimedia data we’ve collected following the Oklahoma Tornadoes. Stay tuned!
We try to make the user take video in-app so it can't be faked. For now we let them load photos from their library, but not video. We had a user drop a pin on the map over Lake Thunderbird, OK, 20 minutes before the tornado hit Shawnee. Give it a try at StormPins.com (iPhone and Android). We also have a web-based first-responder platform called PinResponder that lets every agency in town have admin control of all data. Short video at http://www.youtube.com/watch?v=0kIn-DawOAQ
Regards,
Chris Weldon