
Could CrowdOptic Be Used For Disaster Response?

Crowds, rather than lone individuals, are increasingly bearing witness to disasters large and small. Instagram users, for example, snapped 800,000 #Sandy pictures during the hurricane last year. One way to make sense of this vast volume and velocity of multimedia content (Big Data) during disasters is with Photosynth, as blogged here. Another, perhaps more sophisticated, approach would be to use CrowdOptic, which automatically zeros in on the specific location that eyewitnesses are looking at when they use their smartphones to take pictures or record videos.

[Figure: Instagram pictures shared during Hurricane Sandy]

How does it work? CrowdOptic triangulates line-of-sight intersections using the sensor metadata from pictures and videos taken with a smartphone. The basic approach is depicted in the figure below. The area where the lines of sight intersect is called a focal cluster, and CrowdOptic automatically identifies the location of these clusters.

[Figure: Lines of sight from multiple smartphones intersecting at a focal cluster]
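
CrowdOptic's actual algorithm is proprietary, but the underlying geometry is easy to sketch. The toy Python example below (all coordinates, bearings, and helper names are invented for illustration) projects each photo's GPS position onto a local plane, turns its compass bearing into a line of sight, intersects the lines pairwise, and averages the intersection points to estimate where the focal cluster sits.

```python
import math
from itertools import combinations

# Each observation is (latitude, longitude, compass bearing in degrees).
# This is only an illustrative sketch of line-of-sight triangulation,
# not CrowdOptic's actual (proprietary) algorithm.

def to_local_xy(lat, lon, lat0, lon0):
    """Project lat/lon to metres on a flat plane centred at (lat0, lon0)."""
    R = 6371000.0  # Earth radius in metres
    x = math.radians(lon - lon0) * R * math.cos(math.radians(lat0))
    y = math.radians(lat - lat0) * R
    return x, y

def bearing_to_unit(bearing_deg):
    """Compass bearing (0 deg = north, clockwise) to an (x, y) unit vector."""
    rad = math.radians(bearing_deg)
    return math.sin(rad), math.cos(rad)

def ray_intersection(p1, d1, p2, d2):
    """Intersection of two rays p + t*d (t >= 0), or None if they diverge."""
    det = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(det) < 1e-9:          # parallel lines of sight
        return None
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (dx * d2[1] - dy * d2[0]) / det
    t2 = (dx * d1[1] - dy * d1[0]) / det
    if t1 < 0 or t2 < 0:         # intersection lies behind one of the cameras
        return None
    return p1[0] + t1 * d1[0], p1[1] + t1 * d1[1]

def focal_cluster(observations):
    """Average of all pairwise line-of-sight intersections = rough focal point."""
    lat0 = sum(o[0] for o in observations) / len(observations)
    lon0 = sum(o[1] for o in observations) / len(observations)
    points = []
    for a, b in combinations(observations, 2):
        pa = to_local_xy(a[0], a[1], lat0, lon0)
        pb = to_local_xy(b[0], b[1], lat0, lon0)
        hit = ray_intersection(pa, bearing_to_unit(a[2]), pb, bearing_to_unit(b[2]))
        if hit:
            points.append(hit)
    if not points:
        return None
    return (sum(p[0] for p in points) / len(points),
            sum(p[1] for p in points) / len(points))

# Three made-up eyewitnesses around the same block, all aiming at roughly one spot.
obs = [(35.3890, -97.4280, 0.0),
       (35.3905, -97.4260, 272.0),
       (35.3895, -97.4300, 57.0)]
print(focal_cluster(obs))  # (east, north) offset in metres from the group's centroid
```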

“Once a crowd’s point of focus is determined, any content generated by that point of focus is automatically authenticated, and a relative significance is assigned based on CrowdOptic’s focal data attributes […].” These include: (1) Number of Viewers; (2) Location of Focus; (3) Distance to Epicenter; (4) Cluster Timestamp and Duration; and (5) Cluster Creation and Dissipation Speed. CrowdOptic can also be used on live streams as well as on archival images and videos. Once a cluster is identified, the best images and videos pointing to that cluster are automatically selected.
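
To make the idea of “relative significance” a bit more concrete, here is a hypothetical scoring sketch. The attribute names mirror the list above, but the weights and formula are my own invention, not CrowdOptic's.

```python
from dataclasses import dataclass

# Hypothetical ranking of focal clusters; the scoring formula is illustrative only.

@dataclass
class FocalCluster:
    num_viewers: int                # (1) number of devices looking at the spot
    distance_to_epicenter_m: float  # (3) metres from, e.g., the reported disaster epicentre
    duration_s: float               # (4) how long the cluster persisted
    creation_speed: float           # (5) how quickly viewers converged (viewers per minute)

def significance(c: FocalCluster) -> float:
    """Toy score: more viewers, longer-lived, faster-forming clusters near the epicentre rank higher."""
    proximity = 1.0 / (1.0 + c.distance_to_epicenter_m / 1000.0)
    return c.num_viewers * (1.0 + c.duration_s / 60.0) * (1.0 + c.creation_speed) * proximity

clusters = [
    FocalCluster(num_viewers=12, distance_to_epicenter_m=400, duration_s=180, creation_speed=3.0),
    FocalCluster(num_viewers=3, distance_to_epicenter_m=5000, duration_s=40, creation_speed=0.5),
]
for c in sorted(clusters, key=significance, reverse=True):
    print(round(significance(c), 1), c)
```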

Clearly, all this could have important applications for disaster response and information forensics. My colleagues and I recently collected over 12,000 Instagram pictures and more than 5,000 YouTube videos posted to Twitter during the first 48 hours of the tornado in Oklahoma. These could be uploaded to CrowdOptic for cluster identification. Any focal cluster with several viewers would almost certainly be authentic, particularly if the time-stamps are similar. These clusters could then be tagged by digital humanitarian volunteers based on whether they depict evidence of disaster damage. Indeed, we could have tested CrowdOptic during the disaster response efforts we carried out for the United Nations following the devastating Philippines Typhoon. Perhaps CrowdOptic could facilitate rapid damage assessments in the future. Of course, the value of CrowdOptic ultimately depends on the volume of geotagged images and videos shared on social media and the Web.

I once wrote a blog post entitled “Wag the Dog, or How Falsifying Crowdsourced Data Can Be a Pain.” While a single image or video could certainly be falsified, trying to fake several focal clusters of multimedia content with dozens of viewers each would probably require the organizational capacity of a small movie production or commercial shoot. So I’m in touch with the CrowdOptic team to explore the possibility of carrying out a proof of concept based on the multimedia data we’ve collected following the Oklahoma tornadoes. Stay tuned!


When the Network Bears Witness: From Photosynth to Allsynth?

I’ve blogged about Photosynth before and toyed around with the idea of an Allsynth platform: the convergence of multiple technologies and sensors for human rights monitoring and disaster response. The idea would be to “stitch” pictures and video footage together to reconstruct evidence in a multimedia, 3D format. What if we could do this live and in a networked way, though?

The thought popped into my head while at the Share Conference in Belgrade recently. The conference included a “Share by Night” track with concerts, live bands, etc., in the evenings. What caught my eye one night was not what was happening on stage but the dozens of smartphones held up in the audience to capture the vibes, sounds and movements on stage.

That got me thinking about Color.com, the new startup that a colleague of mine co-founded: “Color creates new, dynamic social networks for your iPhone or Android wherever you go. It is meant to be used with other people right next to you who have the Color app on their smartphone. This way, you can take photos and videos together that everyone keeps.”

I kept thinking about the concert and all those bright screens. What if they were streaming live and networked? One could then watch all the streaming videos from a single website and pivot between different phones. Because the phones are geotagged, I’d be able to move closer to the stage by pivoting to phone users standing nearer to it. Who needs a film crew anymore? The phones in the audience become your cameras, and you could even mix your own music video that way.
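
As a rough illustration of what “pivoting” between geotagged streams might look like, here is a toy sketch. The stream URLs, coordinates, and the idea of simply ranking streams by their distance to the stage are all assumptions on my part, not an existing API.

```python
import math

# Hypothetical: pick the geotagged live stream closest to the stage;
# "pivoting closer" just means switching to the next stream in the ordered list.

def distance_m(lat1, lon1, lat2, lon2):
    """Rough flat-plane distance in metres, fine at concert scale."""
    r = 6371000.0
    dx = math.radians(lon2 - lon1) * r * math.cos(math.radians(lat1))
    dy = math.radians(lat2 - lat1) * r
    return math.hypot(dx, dy)

streams = [
    {"url": "rtmp://example.org/phone1", "lat": 44.8170, "lon": 20.4570},
    {"url": "rtmp://example.org/phone2", "lat": 44.8172, "lon": 20.4575},
    {"url": "rtmp://example.org/phone3", "lat": 44.8165, "lon": 20.4560},
]
stage = (44.8171, 20.4572)  # made-up stage location

# Order streams from nearest to farthest from the stage.
ordered = sorted(streams, key=lambda s: distance_m(s["lat"], s["lon"], *stage))
for s in ordered:
    print(round(distance_m(s["lat"], s["lon"], *stage)), "m ->", s["url"])
```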

I’m at Where 2.0 today, where Blaise Agüera y Arcas from Photosynth shared his most recent work, ReadWriteWorld, which he has just announced. “Technically, it’s an indexing, unification, and connection of the world’s geo-linked media. Informally, it’s the magic of:

  • Seeing your photos automatically connected to others;
  • Being able to simply create immersive experiences from your or your friends’ photos, videos, and panoramas;
  • “Fixing” the world, when the official imagery of your street is out of date;
  • Visually mapping your business, your favorite park, or your real estate for everyone to see;
  • Understanding the emergent information from the density and tagging of media.”

What if we applied this kind of networked, meshed technology to human rights monitoring or disaster response? Granted, most of the world does not own a smartphone, but that won’t always be the case. What if the hundreds of thousands of phones in Tahrir Square had been networked and streaming as if they were covering a concert?

I don’t know the answer to that question, but I find the idea interesting.