I’ve blogged about Photosynth before and toyed around with the idea of an Allsynth platform: the convergence of multiple technologies and sensors for human rights monitoring and disaster response. The idea would be to “stitch” pictures and video footage together to reconstruct evidence in a multimedia, 3D-type format. What if we could do this live and in a networked way, though?
The thought popped into my head at the Share Conference in Belgrade recently. The conference included a “Share by Night” track with concerts, live bands, etc., in the evenings. What caught my eye one night was not what was happening on stage but the dozens of smartphones held up in the audience to capture the vibes, sounds and movements on stage.
I then thought about that for a moment and about the new start-up, Color.com, which a colleague of mine co-founded. “Color creates new, dynamic social networks for your iPhone or Android wherever you go. It is meant to be used with other people right next to you who have the Color app on their smartphone. This way, you can take photos and videos together that everyone keeps.”
I was thinking about the concert and all those bright screens. What if they were streaming live and networked? Meaning one could watch all the streaming videos from one website and pivot between different phones. Because the phones are geo-tagged, I’d be able to “move” closer to the stage by pivoting to phone users standing nearer to it. Why would you need a film crew anymore? The phones in the audience become your cameras, and you could even mix your own music video that way.
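To make the pivoting idea slightly more concrete, here is a minimal sketch of how a viewer site might choose which audience stream to show, assuming each phone simply reports its GPS position alongside its stream URL. The registry, field names and sample coordinates below are purely illustrative:

```python
import math

# Hypothetical registry of audience phones streaming live,
# each reporting a geo-tag and a stream URL.
streams = [
    {"user": "phone_a", "lat": 44.8180, "lon": 20.4595, "url": "rtmp://example.org/a"},
    {"user": "phone_b", "lat": 44.8182, "lon": 20.4601, "url": "rtmp://example.org/b"},
    {"user": "phone_c", "lat": 44.8179, "lon": 20.4590, "url": "rtmp://example.org/c"},
]

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in metres (haversine)."""
    r = 6371000  # Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def pivot_towards(target_lat, target_lon, streams):
    """Return streams sorted by distance to a point of interest, e.g. the stage."""
    return sorted(
        streams,
        key=lambda s: distance_m(s["lat"], s["lon"], target_lat, target_lon),
    )

# "Move closer to the stage" by switching to the nearest geo-tagged stream.
stage_lat, stage_lon = 44.8181, 20.4597
nearest = pivot_towards(stage_lat, stage_lon, streams)[0]
print("Now watching:", nearest["user"], nearest["url"])
```

In a real deployment the registry would of course be fed by the phones themselves over the network; the point is simply that once the streams are geo-tagged, “pivoting” becomes a sorting problem.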
I’m at Where 2.0 today, where Blaise Agüera y Arcas from Photosynth shared his most recent work, ReadWriteWorld, which he just announced. “Technically it’s an indexing, unification, and connection of the world’s geo-linked media. Informally, it’s the magic of:
- Seeing your photos automatically connected to others;
- Being able to simply create immersive experiences from your or your friends’ photos, videos, and panoramas;
- “Fixing” the world, when the official imagery of your street is out of date;
- Visually mapping your business, your favorite park, or your real estate for everyone to see;
- Understanding the emergent information from the density and tagging of media.”
I don’t know the answer to that question, but I find the idea interesting.
This current obsession with indexing the living world reminds me a bit of those deep-sea fishermen who lay huge nets and scoop up whatever they get, including dolphins and other endangered sea life. I think we can be smarter about how, when and what we index, and give the user and the living world more control over their representation within these real-world indices.
I do see the potential benefit and impact of all of this, and your post resonated with me quite a bit. I just want to make sure we are not creating some sort of inverted panopticon. I have seriously considered creating a hat and t-shirt with a large QR code on it that links to the Terms of Use for all of the real-time metadata I am creating.
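As a minimal sketch, that QR code is only a couple of lines with an off-the-shelf library; the example below assumes the Python `qrcode` package and uses a placeholder URL standing in for the actual Terms of Use page:

```python
import qrcode  # pip install qrcode[pil]

# Placeholder URL; the real link would point at the Terms of Use
# for the wearer's real-time metadata.
terms_url = "https://example.org/my-metadata-terms-of-use"

img = qrcode.make(terms_url)          # returns a PIL image
img.save("terms_of_use_qrcode.png")   # print this on the hat or t-shirt
```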
On a related note, one of the best projects to come out of my NYU ITP 2800 class last fall was called “A Bigger Brother” (http://abiggerbrother.org/), created by Harlo Holmes. Harlo’s vision was that the collective power of our citizens’ mobile phones could outweigh the collective power of surveillance cameras and provide anyone with a live dashboard into the real-time world, specifically around breaking news, mass gatherings and social events. Her project sought to provide a geomap interface that aggregated the locations of mobile live streamers across multiple services (Ustream, Justin.tv, Qik, etc.). The Android app simply worked in the background gathering sensor data (à la the Color app) and then transmitted it in bundles along with information about the online location of the live stream. There is a good slide deck under the “about” section on the site, and the Android app is actually pretty functional.
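To give a rough sense of what that kind of background bundle might look like, here is a hypothetical sketch (the field names are illustrative, not A Bigger Brother’s actual format): sensor readings packaged together with the online location of the live stream, ready to be posted to a geomap aggregator.

```python
import json
import time

def build_bundle(lat, lon, heading, stream_service, stream_url):
    """Bundle sensor data with the online location of a live stream.
    Field names here are illustrative only."""
    return {
        "timestamp": int(time.time()),
        "location": {"lat": lat, "lon": lon},
        "heading_degrees": heading,
        "stream": {"service": stream_service, "url": stream_url},
    }

# Example bundle from a phone live-streaming a mass gathering.
bundle = build_bundle(
    lat=40.7308, lon=-73.9973, heading=270.0,
    stream_service="ustream", stream_url="http://ustream.tv/channel/example",
)

# A geomap dashboard could then plot bundles by location and service.
print(json.dumps(bundle, indent=2))
```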
She applied for a Knight News Challenge grant but unfortunately did not receive funding. However, it was ultimately to my benefit, as she is now working on the Guardian Project and our SecureSmartCam effort with Witness.
Hey Nathan, many thanks for your comments; I’m always grateful to get your feedback. I completely understand where you’re coming from, so thanks for the cautionary note. That NYU ITP class project looks really neat, thanks for sharing. If you’ve got any additional advice on how we can steer clear of “deep-sea fishing,” do let me know; I very much value your input.
I need to get out there and visit sometime. I am putting together a liberation technology menu that can be leveraged by any of the 5,000 secessionist groups out there. Regarding your idea, I love it, but rethink it down to simple cell phones sending SMS to a back-office server. Human brains are both cheap and infinitely flexible. We’re getting to the point where five human words are worth a thousand expensive pictures. You’ve got my email; we should talk about this some more. The World Brain Institute and Global Game should be co-located with the Stanford Liberation Technology gang, but I got blown off the last time I raised the idea. Looking for funders now, and thinking about keeping it in Virginia.
It’s good to see this idea being picked up again. I wrote about it years ago – Information visualisation through Microsoft Photosynth: Potential for human rights documentation? (http://ict4peace.wordpress.com/2008/07/31/information-visualisation-through-microsoft-photosynth-potential-for-human-rights-documentation/) – but crisis mapping then wasn’t what it is today.