Tag Archives: video

Using Sound and Artificial Intelligence to Detect Human Rights Violations

Video continues to be a powerful way to capture human rights abuses around the world. Videos posted to social media can be used to hold perpetrators of gross violations accountable. But video footage poses a “Big Data” challenge to human rights organizations. Two billion smartphone users means almost as many video cameras, and that translates into massive amounts of visual content of both suffering and wrongdoing in conflict zones. Reviewing these videos manually is a labor-intensive, time-consuming, expensive and often traumatic task. So my colleague Jay Aronson at CMU has been exploring how artificial intelligence, and in particular machine learning, might help solve this challenge.


As Jay and team rightly note in a recent publication (PDF), “the dissemination of conflict and human rights related video has vastly outpaced the ability of researchers to keep up with it – particularly when immediate political action or rapid humanitarian response is required.” The consequences of this are similar to what I’ve observed in humanitarian aid: “At some point (which will vary from organization to organization), time and resource limitations will necessitate an end to the collection, archiving, and analysis of user generated content unless the process can be automated.” In sum, information overload can “prevent human rights researchers from uncovering widely dispersed events taking place over long periods of time or large geographic areas that amount to systematic human rights violations.”


To take on this Big Data challenge, Jay and team have developed a new machine learning-based audio processing system that “enables both synchronization of multiple audio-rich videos of the same event, and discovery of specific sounds (such as wind, screaming, gunshots, airplane noise, music, and explosions) at the frame level within a video.” The system basically “creates a unique ‘soundprint’ for each video in a collection, synchronizes videos that are recorded at the same time and location based on the pattern of these signatures, and also enables these signatures to be used to locate specific sounds precisely within a video. The use of this tool for synchronization ultimately provides a multi-perspectival view of a specific event, enabling more efficient event reconstruction and analysis by investigators.”
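To give a rough sense of how audio-based synchronization can work, here is a minimal sketch (not Jay’s actual system) that turns each recording into an onset-strength envelope with the librosa library and then cross-correlates the two envelopes to estimate the time offset between them. File names are placeholders, and the audio is assumed to have been extracted from the videos beforehand.

```python
# Illustrative sketch only (not the team's actual system): estimate the time offset
# between two recordings of the same event by cross-correlating their
# onset-strength envelopes. Assumes the audio has been extracted to WAV files.
import numpy as np
import librosa

def onset_envelope(path, sr=22050, hop=512):
    """Load audio and compute a per-frame onset-strength 'soundprint'."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    return librosa.onset.onset_strength(y=y, sr=sr, hop_length=hop)

def estimate_offset_seconds(path_a, path_b, sr=22050, hop=512):
    """Return the lag (in seconds) that best aligns recording B with recording A."""
    env_a = onset_envelope(path_a, sr, hop)
    env_b = onset_envelope(path_b, sr, hop)
    # Full cross-correlation; the peak marks the most likely alignment.
    corr = np.correlate(env_a - env_a.mean(), env_b - env_b.mean(), mode="full")
    lag_frames = int(corr.argmax()) - (len(env_b) - 1)
    return lag_frames * hop / sr

# Example (placeholder file names):
# offset = estimate_offset_seconds("clip_a.wav", "clip_b.wav")
# print(f"Shift clip_b by {offset:.2f} seconds to line it up with clip_a")
```

A real system would of course be far more robust to noise and partial overlap; the point here is simply that a compact audio signature per video is enough to reason about relative timing.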

Synchronizing image features is far more complex than synchronizing sound. “When an object is occluded, poorly illuminated, or not visually distinct from the background, it cannot always be detected by computer vision systems. Further, while computer vision can provide investigators with confirmation that a particular video was shot from a particular location based on the similarity of the background physical environment, it is less adept at synchronizing multiple videos over time because it cannot recognize that a video might be capturing the same event from different angles or distances. In both cases, audio sensors function better so long as the relevant videos include reasonably good audio.”

Ukrainian human rights practitioners working with families of protesters killed during the 2013-2014 Euromaidan Protests recently approached Jay and company to analyze videos from those events. They wanted to “locate every video available in their collection of the moments before, during, and just after a specific set of killings. They wanted to extract information from these videos, including visual depictions of these killings, whether the protesters in question were an immediate and direct threat to the security forces, plus any other information that could be used to corroborate or refute other forms of evidence or testimony available for their cases.”

[Figure: example of video clips synchronized by the audio-processing system]

Their plan had originally been to manually synchronize more than 65 hours of video footage from 520 videos taken during the morning of February 20, 2014. But after working full-time over several months, they were only able to stitch together about 4 hours of the total video using visual and audio cues in the recordings. So Jay and team ran their system on the same collection and were able to automatically synchronize over 4 hours of the footage. The figure above shows an example of video clips synchronized by the system.

Users can also “select a segment within the video containing the event they are interested in (for example, a series of explosions in a plaza), and search in other videos for a similar segment that shows similar looking buildings or persons, or that contains a similar sounding noise. A user may for example select a shooting scene with a significant series of gunshots, and may search for segments with a similar sounding series of gunshots. This method increases the chances for finding video scenes of an event displaying different angles of the scene or parallel events.”
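As a purely illustrative sketch of this kind of query-by-example search (again, not the team’s actual method), one could slide the query segment’s MFCC features over a candidate recording and keep the window that sounds most similar. The snippet below assumes librosa and numpy; all file names are placeholders.

```python
# Illustrative sketch only: query-by-example search for a similar-sounding segment
# by sliding the query's MFCC features over a candidate recording.
import numpy as np
import librosa

def mfcc_sequence(path, sr=22050):
    y, _ = librosa.load(path, sr=sr, mono=True)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape: (13, n_frames)

def find_best_match(query_wav, candidate_wav, sr=22050, hop=512):
    """Return (start_time_seconds, distance) of the most query-like window."""
    q = mfcc_sequence(query_wav, sr)
    c = mfcc_sequence(candidate_wav, sr)
    n = q.shape[1]
    best_start, best_dist = None, np.inf
    for start in range(c.shape[1] - n + 1):
        window = c[:, start:start + n]
        dist = np.linalg.norm(q - window) / n   # mean per-frame distance
        if dist < best_dist:
            best_start, best_dist = start * hop / sr, dist
    return best_start, best_dist

# Example (placeholder file names):
# start, dist = find_best_match("gunshot_query.wav", "other_video_audio.wav")
```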

Jay and team are quick to emphasize that their system “does not eliminate human involvement in the process because machine learning systems provide probabilistic, not certain, results.” To be sure, “the synchronization of several videos is noisy and will likely include mistakes—this is precisely why human involvement in the process is crucial.”

I’ve been following Jay’s applied research for many years now and continue to be a fan of his approach, given the overlap with my own work on using machine learning to make sense of the Big Data generated during major natural disasters. I wholeheartedly agree with Jay when he reflected during a recent call that the use of advanced techniques alone is not the answer. Effective cross-disciplinary collaboration between computer scientists and human rights (or humanitarian) practitioners is really hard but absolutely essential. This explains why I wrote this practical handbook on how to create effective collaboration and successful projects between computer scientists and humanitarian organizations.

Video: Digital Humanitarians & Next Generation Humanitarian Technology

How do international humanitarian organizations make sense of the “Big Data” generated during major disasters? They turn to Digital Humanitarians who craft and leverage ingenious crowdsourcing solutions with trail-blazing insights from artificial intelligence to make sense of vast volumes of social media, satellite imagery and even UAV/aerial imagery. They also use these “Big Data” solutions to verify user-generated content and counter rumors during disasters. The talk below explains how Digital Humanitarians do this and how their next generation humanitarian technologies work.

Many thanks to TTI/Vanguard for having invited me to speak. Lots more on Digital Humanitarians in my new book of the same title.


Videos of my TEDx talks and the talks I’ve given at the White House, PopTech, Where 2.0, National Geographic, etc., are all available here.

Summary: Digital Disaster Response to Philippine Typhoon

Update: How the UN Used Social Media in Response to Typhoon Pablo

The United Nations Office for the Coordination of Humanitarian Affairs (OCHA) activated the Digital Humanitarian Network (DHN) on December 5th at 3pm Geneva time (9am New York). The activation request? To collect all relevant tweets about Typhoon Pablo posted on December 4th and 5th; identify pictures and videos of damage/flooding shared in those tweets; and geo-locate, time-stamp and categorize this content. The UN requested that this database be shared with them by 5am Geneva time the following day. As per DHN protocol, the activation request was reviewed within an hour. The UN was informed that the request had been granted and that the DHN was formally activated at 4pm Geneva time.


The DHN is composed of several members who form Solution Teams when the network is activated. The purpose of Digital Humanitarians is to support humanitarian organizations in their disaster response efforts around the world. Given the nature of the UN’s request, both the Standby Volunteer Task Force (SBTF) and Humanity Road (HR) joined the Solution Team. HR focused on analyzing all tweets posted December 4th while the SBTF worked on tweets posted December 5th. Over 20,000 tweets were analyzed. As HR will have a blog post describing their efforts shortly (please check here), I will focus on the SBTF.


The Task Force first used Geofeedia to identify all relevant pictures/videos that were already geo-tagged by users. About a dozen were identified in this manner. Meanwhile, the SBTF partnered with the Qatar Computing Research Institute’s (QCRI) Crisis Computing Team to collect all tweets posted on December 5th with the hashtags endorsed by the Philippine Government. QCRI ran algorithms on the dataset to remove (1) all retweets and (2) all tweets without links (URLs). Given the very short turn-around time requested by the UN, the SBTF & QCRI Teams elected to take a two-pronged approach in the hopes that one, at least, would be successful.
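For a concrete sense of what that pre-filtering step might look like, here is a minimal sketch (not QCRI’s actual code) that drops retweets and keeps only tweets whose JSON payload contains at least one link, using field names from the Twitter API of that era.

```python
# A minimal sketch (not QCRI's actual code) of the pre-filtering described above:
# drop retweets and keep only tweets that contain at least one link.
def prefilter(tweets):
    """tweets: iterable of dicts in the (v1.1-era) Twitter API JSON format."""
    kept = []
    for t in tweets:
        is_retweet = "retweeted_status" in t or t.get("text", "").startswith("RT @")
        urls = t.get("entities", {}).get("urls", [])
        if not is_retweet and urls:
            kept.append(t)
    return kept
```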

The first approach used CrowdFlower (CF), introduced here. Workers on CrowdFlower were asked to check each tweet’s URL and determine whether it linked to a picture or video. The purpose was to filter out URLs that linked to news articles. CF workers were also asked to assess whether the tweets (or pictures/videos) provided sufficient geographic information for them to be mapped. This methodology worked for about 2/3 of all the tweets in the database. A review of lessons learned and how to use CrowdFlower for disaster response will be posted in the future.
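Since microtask results come back as multiple worker judgments per tweet, they typically need to be aggregated before use. The sketch below shows one simple majority-vote approach; the field names are assumptions for illustration, not CrowdFlower’s actual export schema.

```python
# Hypothetical aggregation of crowdsourced judgments; the field names below are
# assumptions for illustration, not CrowdFlower's actual export schema.
from collections import defaultdict

def majority_filter(judgments):
    """judgments: list of dicts like
    {"tweet_id": "123", "links_to_media": True, "mappable": False}
    Returns tweet IDs that a majority marked as both media and mappable."""
    votes = defaultdict(lambda: {"media": 0, "mappable": 0, "total": 0})
    for j in judgments:
        v = votes[j["tweet_id"]]
        v["total"] += 1
        v["media"] += bool(j["links_to_media"])
        v["mappable"] += bool(j["mappable"])
    return [tid for tid, v in votes.items()
            if v["media"] * 2 > v["total"] and v["mappable"] * 2 > v["total"]]
```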


The second approach was made possible thanks to a partnership with PyBossa, a free, open-source crowdsourcing and micro-tasking platform. This effort is described here in more detail. While we are still reviewing the results of this approach, we expect that this tool will become the standard for future activations of the Digital Humanitarian Network. I will thus continue working closely with the PyBossa team to set up a standby PyBossa platform ready for use at a moment’s notice so that Digital Humanitarians can be fully prepared for the next activation.
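As a rough illustration of how tweets could be loaded into such a standby platform, the snippet below uses the pybossa-client package (pbclient) to create a project and push one task per tweet. The endpoint, API key, project details and task fields are placeholders, and the exact client calls may differ slightly from what is shown here.

```python
# Rough sketch of loading tweets into a PyBossa project, assuming the
# pybossa-client package (pbclient). Endpoint, API key, project details and
# task fields are placeholders; exact client signatures may differ.
import pbclient

pbclient.set("endpoint", "https://crowdcrafting.org")   # placeholder server
pbclient.set("api_key", "YOUR_API_KEY")                 # placeholder credential

project = pbclient.create_project(
    "Typhoon Pablo Media", "pablo_media",
    "Does this tweet link to a photo/video of damage, and can it be located?")

tweets = [  # stand-in data; in practice this would be the filtered tweet set
    {"text": "Flooding in Compostela Valley #PabloPH",
     "url": "http://example.com/photo.jpg"},
]
for t in tweets:
    pbclient.create_task(project.id, info={"text": t["text"], "url": t["url"]})
```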

Now for the results of the activation. Within 10 hours, over 20,000 tweets were analyzed using a mix of methodologies. By 4.30am Geneva time, the combined efforts of HR and the SBTF resulted in a database of 138 highly annotated tweets. The following meta-data was collected for each tweet (see the illustrative data-structure sketch after this list):

  • Media Type (Photo or Video)
  • Type of Damage (e.g., large-scale housing damage)
  • Analysis of Damage (e.g., 5 houses flooded, 1 damaged roof)
  • GPS coordinates (latitude/longitude)
  • Province
  • Region
  • Date
  • Link to Photo or Video
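
As promised above, here is one way the resulting records could be represented in code. The field names simply mirror the list, and the types are assumptions rather than the actual schema of the shared database.

```python
# One way to represent each annotated tweet (field names mirror the list above;
# this is illustrative, not the actual schema of the shared database).
from dataclasses import dataclass
from typing import Optional

@dataclass
class AnnotatedTweet:
    media_type: str             # "Photo" or "Video"
    damage_type: str            # e.g. "large-scale housing damage"
    damage_analysis: str        # e.g. "5 houses flooded, 1 damaged roof"
    latitude: Optional[float]   # GPS coordinates, where recoverable
    longitude: Optional[float]
    province: str
    region: str
    date: str                   # when the content was posted
    media_link: str             # URL of the photo or video
```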

The vast majority of curated tweets had latitude and longitude coordinates. One SBTF volunteer (“Mapster”) created this map below to plot the data collected. Another Mapster created a similar map, which is available here.

[Image: Typhoon Pablo crisis map plotting the tweeted photos and videos of damage]
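The post does not say which tools the Mapsters used, but as one possible way to plot records like these, the sketch below uses the folium library with a couple of stand-in rows; the coordinates and notes are illustrative only.

```python
# One possible way to plot such records (not necessarily what the Mapsters used);
# this sketch relies on the folium library and stand-in rows.
import folium

records = [  # illustrative data only, not actual rows from the database
    {"lat": 7.61, "lon": 126.56, "note": "Photo: flooded houses"},
    {"lat": 7.25, "lon": 126.18, "note": "Video: storm damage"},
]

m = folium.Map(location=[7.5, 126.3], zoom_start=8)   # roughly eastern Mindanao
for r in records:
    folium.Marker([r["lat"], r["lon"]], popup=r["note"]).add_to(m)
m.save("pablo_media_map.html")
```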

The completed database was shared with UN OCHA at 4.55am Geneva time. Our humanitarian colleagues are now in the process of analyzing the data collected and writing up a final report, which they will share with OCHA Philippines today by 5pm Geneva time.

Needless to say, we all learned a lot thanks to the deployment of the Digital Humanitarian Network in the Philippines. This was the first time we were activated to carry out a task of this type. We are now actively reviewing our combined efforts with the concerted aim of streamlining our workflows and methodologies to make this type of effort far easier and quicker to complete in the future. If you have suggestions and/or technologies that could facilitate this kind of digital humanitarian work, then please do get in touch, either by posting your ideas in the comments section below or by sending me an email.

Lastly, but definitely most importantly, a big HUGE thanks to everyone who volunteered their time to support the UN’s disaster response efforts in the Philippines at such short notice! We want to publicly recognize everyone who came to the rescue, so here’s a list of volunteers who contributed their time (more to be added!). Without you, there would be no database to share with the UN, no learning, no innovating and no demonstration that digital volunteers can and do make a difference. Thank you for caring. Thank you for daring.

Video: Changing the World, One Map at a Time

Hosted in the beautiful city of Berlin, Re:publica 2011 is Germany’s largest annual conference on blogs, new media and the digital society, drawing thousands of participants from across the world for three days of exciting conversations and presentations. The conference venue was truly spectacular, and while conference presentations are typically limited to 10-20 minutes, the organizers gave us an hour to share our stories. So I’m posting the video of my presentation below for anyone interested in learning more about new media, crowdsourcing, crisis mapping, live maps, crisis response, civil resistance, digital activism and check-ins. I draw on my experience with Ushahidi and the Standby Volunteer Task Force (SBTF) and share examples from Kenya, Haiti, Libya, Japan, the US and Egypt to illustrate how live maps can change the world. My slides are available on Slideshare here.