
How Can Digital Humanitarians Best Organize for Disaster Response?

I published a blog post with the same question in 2012. The question stemmed from earlier conversations I had at 10 Downing Street with colleague Duncan Watts from Microsoft Research. We subsequently embarked on a collaboration with the Standby Task Force (SBTF), a group I co-founded back in 2010. The SBTF was one of the early pioneers of digital humanitarian action. The purpose of this collaboration was to empirically explore the relationship between team size and productivity during crisis mapping efforts.

[Image: final crisis map of Typhoon Pablo disaster impact]

Duncan and his team at Microsoft simulated the SBTF’s crisis mapping efforts in response to Typhoon Pablo in 2012. At the time, the United Nations Office for the Coordination of Humanitarian Affairs (UN/OCHA) had activated the Digital Humanitarian Network (DHN) to create a crisis map of disaster impact (final version pictured above). OCHA requested the map within 24 hours. While we could have deployed the SBTF using the traditional crowdsourcing approach as before, we decided to try something different: microtasking. This was admittedly a gamble on our part.

We reached out to the team at PyBossa to ask them to customize their microtasking platform so that we could rapidly filter through both images and videos of disaster damage posted on Twitter. Note that we had never been in touch with the PyBossa team before this (hence the gamble), nor had we ever used their CrowdCrafting platform (which was still very new at the time). But thanks to PyBossa’s quick and positive response to our call for help, we were able to launch this microtasking app several hours after OCHA’s request.
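For readers curious about the mechanics, here is a minimal sketch of how tweeted images can be turned into PyBossa microtasks using the pybossa-client library. This is purely illustrative: the project name, task fields and credentials below are invented, and the actual 2012 deployment was wired up differently.

# Minimal sketch: pushing tweeted image links into a PyBossa project as
# microtasks. Illustrative only; the project name, fields and credentials
# are assumptions, not the actual 2012 deployment code.
import json
import pbclient  # pip install pybossa-client

# Point the client at a PyBossa server (CrowdCrafting hosts one).
pbclient.set('endpoint', 'https://crowdcrafting.org')
pbclient.set('api_key', 'YOUR-API-KEY')  # hypothetical credentials

# Hypothetical project, assumed to have been created beforehand.
project = pbclient.find_project(short_name='typhoon_pablo_damage')[0]

def media_urls(tweet):
    """Extract photo/video URLs from a tweet's media entities, if any."""
    return [m['media_url'] for m in tweet.get('entities', {}).get('media', [])]

# One tweet (as JSON) per line, e.g. from a Twitter collection pipeline.
with open('pablo_tweets.json') as f:
    for line in f:
        tweet = json.loads(line)
        for url in media_urls(tweet):
            # Each microtask shows one image and asks a single question.
            pbclient.create_task(project.id, info={
                'tweet_id': tweet['id_str'],
                'media_url': url,
                'question': 'Does this image show typhoon damage?',
            })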

Fast forward to the present research study. We gave Duncan and colleagues at Microsoft the same database of tweets for their simulation experiment. To conduct this experiment and replicate the critical features of crisis mapping, they created their own “CrowdMapper” platform pictured below.

[Screenshots: the CrowdMapper platform]

The CrowdMapper experiments suggest that the positive effects of coordination between digital humanitarian volunteers, i.e., teams, dominate the negative effects of social loafing, i.e., volunteers working independently from others. In social psychology, “social loafing is the phenomenon of people exerting less effort to achieve a goal when they work in a group than when they work alone” (1). In the CrowdMapper exercise, the teams performed comparably to the SBTF deployment following Typhoon Pablo. This suggests that such experiments can “help solve practical problems as well as advancing the science of collective intelligence.”

Our MicroMappers deployments have always included a live chat (IM) feature in the user interface precisely to support collaboration. Skype has also been used extensively during digital humanitarian efforts and Slack is now becoming more common as well. So while we’ve actively promoted community building and facilitated active collaboration over the past 6+ years of crisis mapping efforts, we now have empirical evidence that confirms we’re on the right track.

The full study by Duncan et al. is available here. As they note vis-a-vis areas for future research, we definitely need more studies on the division of labor in crisis mapping efforts. So I hope they or other colleagues will pursue this further.

Many thanks to the Microsoft Team and to the SBTF for collaborating on this applied research, one of the few such studies in the field of crisis mapping and digital humanitarian action.


The main point I would push back on vis-a-vis Duncan et al.’s study is the comparison of their simulated deployment with the SBTF’s real-world deployment. The reason it took the SBTF 12 hours to create the map was precisely because we didn’t take the usual crowdsourcing approach. As such, most of the 12 hours was spent on reaching out to PyBossa, customizing their microtasking app, testing said app and then finally deploying the platform. The Microsoft Team also had the dataset handed over to them, while we had to use a very early, untested version of the AIDR platform to collect and filter the tweets, which created a number of hiccups. So this too took time. Finally, it should be noted that OCHA’s activation came during the early evening (local time) and I for one pulled an all-nighter that night to ensure we had a map by sunrise.

A 10 Year Vision: Future Trends in Geospatial Information Management


The United Nations Committee of Experts on Global Geospatial Information Management (UN-GGIM) recently published their second edition of Future Trends in Geospatial Information Management. I blogged about the first edition here. Below are some of the excerpts I found interesting or noteworthy. The report itself is a 50-page document (PDF, 7.1 MB).

  • The integration of smart technologies and efficient governance models will increase and the mantra of ‘doing more for less’ is more relevant than ever before.
  • There is an increasing tendency to bring together data from multiple sources: official statistics, geospatial information, satellite data, big data and crowdsourced data among them.
  • New data sources and new data collection technologies must be carefully applied to avoid a bias that favors wealthier countries with established data infrastructures. The use of innovative tools might also favor those who have greater means to access technology, thus widening the gap between the ‘data poor’ and the ‘data rich’.
  • The paradigm of geospatial information is changing; no longer is it used just for mapping and visualization, but also for integrating with other data sources, data analytics, modeling and policy-making.
  • Our ability to create data is still, on the whole, ahead of our ability to solve complex problems by using the data. The need to address this problem will rely on the development of both Big Data technologies and techniques (that is, technologies that enable the analysis of vast quantities of information within usable and practical timeframes) and artificial intelligence (AI) or machine learning technologies that will enable the data to be processed more efficiently.
  • In the future we may expect society to make increasing use of autonomous machines and robots, thanks to a combination of an aging population, rapid technological advancement in unmanned autonomous systems and AI, and the pure volume of data being beyond a human’s ability to process it.
  • Developments in AI are beginning to transform the way machines interact with the world. Up to now machines have mainly carried out well-defined tasks such as robotic assembly, or data analysis using pre-defined criteria, but we are moving into an age where machine learning will allow machines to interact with their environment in more flexible and adaptive ways. This is a trend we expect to see major growth in over the next 5 to 10 years as the technologies, and understanding of the technologies, become more widely recognized.
  • Processes based on these principles, and the learning of geospatial concepts (locational accuracy, precision, proximity etc.), can be expected to improve the interpretation of aerial and satellite imagery, by improving the accuracy with which geospatial features can be identified.
  • Tools may run persistently on continuous streams of data, alerting interested parties to new discoveries and events. Another branch of AI that has long been of interest has been the expert system, in which the knowledge and experience of human experts is taught to a machine.
  • The principle of collecting data once only at the highest resolution needed, and generalizing ‘on the fly’ as required, can become reality.  Developments of augmented and virtual reality will allow humans to interact with data in new ways.
  • The future of data will not be the conflation of multiple data sources into a single new dataset, rather there will be a growth in the number of datasets that are connected and provide models to be used across the world.
  • Efforts should be devoted to integrating involuntary sensors (mobile phones, RFID sensors and so on), which aside from their primary purpose may produce information that was previously difficult to collect. This leads to more real-time information being generated.
  • Many developing nations have leapfrogged in areas such as mobile communications, but the lack of core processing power may inhibit some from taking advantage of the opportunities afforded by these technologies.
  • Data will increasingly be disaggregated from high levels down to small area geographies. This will increase the need to evaluate and adopt alternative statistical modeling techniques to ensure that statistics can be produced at the right geographic level, whilst still maintaining the quality to allow them to be reported against.
  • The information generated through the use of social media and everyday devices will further reveal patterns and enable the prediction of behaviour. This is not a new trend, but as the use of social media for providing real-time information and expanded functionality increases, it offers new opportunities for location-based services.
  • There seems to have been a breakthrough from 2D to 3D information, and this is becoming more prevalent. Software already exists to process this information, and to incorporate the time information to create 4D products and services. It is recognized that a growth area over the next five to ten years will be the use of 4D information in a wide variety of industries.
  • The temporal element is crucial to a number of applications such as emergency service response, simulations and analytics, and the tracking of moving objects. 4D is particularly relevant in the context of real-time information; this has been linked to virtual reality technologies.
  • Greater coverage, quality and resolution have been achieved by the availability of low-cost and affordable satellite systems and unmanned aerial vehicles (UAVs). This has not only increased the speed of collection and acquisition in remote areas, but also reduced the cost barriers to entry.
  • UAVs can provide real-time information to decision-makers on the ground, for example for disaster management. They are an invaluable tool when additional information is needed to improve vital decision-making capabilities, and such use of UAVs will increase.
  • The licensing of data in an increasingly online world is proving to be very challenging. There is a growth in organisations adopting simple machine-readable licences, but these have not resolved the issues relating to data. Emerging technologies such as web services and the growth of big data solutions drawn from multiple sources will continue to create challenges for the licensing of data.
  • A wider issue is the training and education of a broader community of developers and users of location-enabled content. At the same time there is a need for more automated approaches to ensuring the non-geospatial professional community gets the right data at the right time. Investment in formal training in the use of geospatial data and its implementation is still indispensable.
  • Both ‘open’ and ‘closed’ VGI data play an important and necessary part in the wider data ecosystem.

Increasing the Reliability of Aerial Imagery Analysis for Damage Assessments

In March 2015, I was invited by the World Bank to spearhead an ambitious humanitarian aerial robotics (UAV) mission to Vanuatu following Cyclone Pam, a devastating Category 5 Cyclone. This mission was coordinated with Heliwest and X-Craft, two outstanding UAV companies who were identified through the Humanitarian UAV Network (UAViators) Roster of Pilots. You can learn more about the mission and see pictures here. Lessons learned from this mission (and many others) are available here.


The World Bank and partners were unable to immediately analyze the aerial imagery we had collected because they faced a Big Data challenge. So I suggested the Bank activate the Digital Humanitarian Network (DHN) to request digital volunteer assistance. As a result, the Humanitarian OpenStreetMap Team (HOT) analyzed some of the orthorectified mosaics and MicroMappers focused on analyzing the oblique images (more on both here).

This in turn produced a number of challenges. To cite just one, the Bank needed digital humanitarians to identify which houses or buildings were completely destroyed, versus partially damaged, versus largely intact. But there was little guidance on how to determine what constituted fully destroyed versus partially damaged, or on what such structures in Vanuatu look like when damaged by a cyclone. As a result, data quality was not as high as it could have been. In my capacity as consultant for the World Bank’s UAVs for Resilience Program, I decided to do something about this lack of guidelines for imagery interpretation.

I turned to my colleagues at the Harvard Humanitarian Initiative (where I had previously co-founded and co-directed the HHI Program on Crisis Mapping) and invited them to develop a rigorous guide that could inform the consistent interpretation of aerial imagery of disaster damage in Vanuatu (and nearby Island States). Note that Vanuatu is number one on the World Bank’s Risk Index of most disaster-prone countries. The imagery analysis guide has just been published (PDF) by the Signal Program on Human Security and Technology at HHI.

Big thanks to the HHI team for having worked on this guide and to my Bank colleagues and other reviewers for their detailed feedback on earlier drafts. The guide is another important step towards improving data quality for satellite and aerial imagery analysis in the context of damage assessments. Better data quality is also important for the use of Artificial Intelligence (AI) and computer vision, as explained here. If a humanitarian UAV mission does happen in response to the recent disaster in Fiji, the guide may also be of assistance there, depending on how similar the building materials and architecture are. For now, many thanks to HHI for having produced this imagery guide.

When Bushmen Race Aerial Robots to Protect Wildlife


The San people are bushmen who span multiple countries in Southern Africa. “For millennia, they have lived as hunter-gatherers in the Kalahari desert relying on an intimate knowledge of the environment.” Thus begins Sonja’s story, written with Matthew Parkan. Friend and colleague Sonja Betschart is a fellow co-founder at WeRobotics. In 2015, she teamed up with partners in Namibia to support their wildlife conservation efforts. Sonja and team used aerial robots (UAVs) to count the number of wild animals in Kuzikus Wildlife Reserve. All the images in this blog post are credited to Drone Adventures.


If that name sounds familiar, it’s because of this digital crowdsourcing project, which I spearheaded with the same Wildlife Reserve back in 2014. The project, which was covered on CNN, used crowdsourcing to identify wild animals in very high-resolution aerial imagery. As Sonja notes, having updated counts for the number of animals in each species is instrumental to nature conservation efforts.

While the crowdsourcing project worked rather well to identify the presence of animals, distinguishing between species is more difficult for “the crowd”, even when crowdsourcing is combined with Artificial Intelligence. Local indigenous knowledge is unbeatable, or is it? They decided to put this indigenous technical knowledge to the test: find the baby rhino that had been spotted in the area earlier during the mission. Armed with the latest aerial robotics technology, Sonja and team knew full well that the robots would be stiff, even unfair, competition for the bushmen. After all, the aerial robots could travel much faster and much farther than the fastest bushman in Namibia.

Game on: Bushmen vs Robots. Ready, get set, go!


The drones weren’t even up in the air when the bushmen strolled back into camp to make themselves a cup of tea. Yes, they had already identified the location of the rhino. “This technologically humbling experience was a real eye opener,” writes Sonja, “a clear demonstration of how valuable traditional skills are despite new technologies. It was also one of the reasons why we started thinking about ways to involve the trackers in our work.”

They invited the bushmen to analyze the thousands of aerial images taken by the aerial robots to identify the various animal species visible in each image. After a crash course on how to use a computer (to navigate between images), the bushmen “provided us with consistent interpretations of the species and sometimes even of the tracks in the high grass. However, what impressed us most was that the San pointed out subtle differences in the shadows [of the animals] that would give away the animals’ identities […].”

[Aerial image: oryx, blesbok and zebra spotted by the San]

Take this aerial image, for example. See anything? “The San identified three different species on this image: oryx (red circles), blesbok (blue circle) and zebra (green circle).” In another image, “the bushmen were even able to give us the name of one of the Black Rhinos they spotted! For people who had never seen aerial imagery, their performance was truly remarkable.” Clearly, “with a bit of additional training, the San trackers could easily become expert image analysts.”

As Sonja rightly concluded after the mission, the project in Namibia was yet more evidence that combining indigenous knowledge with modern technology is both an elegant way of keeping this knowledge alive and an effective approach to address some of our technical and scientific limitations. As such, this is precisely the kind of project that Sonja and I want to see WeRobotics launch in 2016. It is fully in line with our mission to democratize the Fourth Industrial Revolution.


“The Namibia mission was part of SAVMAP, a joint initiative between the Laboratory of Geographic Information Systems (LASIG) of EPFL, Kuzikus Wildlife Reserve & Drone Adventures that aims at better integrating geospatial technologies with conservation and wildlife management in the Kalahari.”

Ranking Aerial Imagery for Disaster Damage Assessments

Analyzing satellite and aerial imagery to assess disaster damage is fraught with challenges. This is true for digital humanitarians and professional imagery analysts alike. Why? Because distinguishing between infrastructure that is fully destroyed and partially damaged can be particularly challenging. Professional imagery analysts with years of experience have readily admitted that trained analysts regularly interpret the same sets of images differently. Consistency in the interpretation of satellite and aerial imagery is clearly no easy task. My colleague Joel Kaiser from Medair recently suggested another approach.


Joel and I both serve on the “Core Team” of the Humanitarian UAV Network (UAViators). It is in this context that we’ve been exploring ways to render aerial imagery more actionable for rapid disaster damage assessments and tactical decision making. To overcome some of the challenges around the consistent analysis of aerial imagery, Joel suggested we take a rank-order approach. His proposal is quite simple: display two geo-tagged aerial images side by side with the following question: “Which of the two images shows more disaster damage?” Each combination of images could be shown to multiple individuals. Images that are voted as depicting more damage would “graduate” to the next display stage and in turn be compared to each other, and so on and so forth along with those images voted as showing less damage.

In short, a dedicated algorithm would intelligently select the right combination of images to display side by side. The number and type of votes could be tabulated to compute reliability and confidence scores for the rankings. Each image would have a unique damage score, which could potentially be used to identify thresholds for fully destroyed versus partially damaged versus largely intact infrastructure. Much of this could be done on MicroMappers or similar microtasking solutions. Such an approach would do away with the need for detailed imagery interpretation guides. As noted above, consistent analysis is difficult even when such guides are available. The rank-order approach could help quickly identify and map the most severely affected areas to prioritize tactical response efforts. Note that this approach could be used with both crowdsourced analysis and professional analysis. Note also that the GPS coordinates for each image would not be made publicly available for data privacy reasons.
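To make the mechanics concrete, here is a minimal sketch of how pairwise votes could be turned into per-image damage scores. It uses an Elo-style update purely for illustration (Bradley-Terry or TrueSkill models would work just as well); the image IDs, votes and K-factor are all invented:

# Minimal sketch of the rank-order idea: convert pairwise "which image
# shows more damage?" votes into per-image damage scores via an
# Elo-style update. The vote data and K-factor are illustrative.
from collections import defaultdict

K = 32  # update step size; higher values make scores move faster

def expected(r_a, r_b):
    """Probability that image A is voted 'more damaged' than image B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(ratings, winner, loser):
    """Apply one pairwise vote: 'winner' was judged more damaged."""
    e = expected(ratings[winner], ratings[loser])
    ratings[winner] += K * (1 - e)
    ratings[loser] -= K * (1 - e)

# Each tuple is one volunteer vote: (more_damaged_image, less_damaged_image).
votes = [('img_04', 'img_11'), ('img_04', 'img_07'),
         ('img_07', 'img_11'), ('img_11', 'img_02')]

ratings = defaultdict(lambda: 1000.0)  # every image starts at 1000
for winner, loser in votes:
    update(ratings, winner, loser)

# Highest score = most damage; thresholds on this score could separate
# destroyed from partially damaged from largely intact structures.
for img, score in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f'{img}: {score:.0f}')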

Is this strategy worth pursuing? What are we missing? Joel and I would be keen to get some feedback. So please feel free to use the comments section below to share your thoughts or to send an email here.

QED – Goodbye Doha, Hello Adventure!

Quod Erat Demonstrandum (QED) is Latin for “that which was to be demonstrated.” This abbreviation was traditionally used at the end of mathematical proofs to signal the completion of said proofs. I joined the Qatar Computing Research Institute (QCRI) well over 3 years ago with a very specific mission and mandate: to develop and deploy next generation humanitarian technologies. So I built the Institute’s Social Innovation Program from the ground up and recruited the majority of the full-time experts (scientists, engineers, research assistants, interns & project manager) who have become integral to the Program’s success. During these 3+ years, my team and I partnered directly with humanitarian and development organizations to empirically prove that methods from advanced computing can be used to make sense of Big (Crisis) Data. The time has thus come to add “QED” to the end of that proof and move on to new adventures. But first a reflection.

Over the past 3.5 years, my team and I at QCRI developed free and open source solutions powered by crowdsourcing and artificial intelligence to make sense of Tweets, text messages, pictures, videos, satellite and aerial imagery for a wide range of humanitarian and development projects. We co-developed and co-deployed these platforms (AIDR and MicroMappers) with the United Nations and the World Bank in response to major disasters such as Typhoons Haiyan and Ruby, Cyclone Pam, and the Nepal and Chile earthquakes. In addition, we carried out peer-reviewed, scientific research on these deployments to better understand how to meet the information needs of our humanitarian partners. We also tackled the information reliability question, experimenting with crowdsourcing (Verily) and machine learning (TweetCred) to assess the credibility of information generated during disasters. All of these initiatives were firsts in the humanitarian technology space.

We later developed AIDR-SMS to auto-classify text messages; a platform that UNICEF successfully tested in Zambia and which the World Food Program (WFP) and the International Federation of Red Cross and Red Crescent Societies (IFRC) now plan to pilot. AIDR was also used to monitor a recent election, and our partners are now looking to use AIDR again for upcoming election monitoring efforts. In terms of MicroMappers, we extended the platform (considerably) in order to crowdsource the analysis of oblique aerial imagery captured via small UAVs, which was another first in the humanitarian space. We also teamed up with excellent research partners to crowdsource the analysis of aerial video footage and to develop automated feature-detection algorithms for oblique imagery analysis based on crowdsourced results derived from MicroMappers. We developed these Big Data solutions to support damage assessment efforts, food security projects and even this wildlife protection initiative.

In addition to the above accomplishments, we launched the Internet Response League (IRL) to explore the possibility of leveraging massive multiplayer online games to process Big Crisis Data. Along similar lines, we developed the first ever spam filter to make sense of Big Crisis Data. Furthermore, we got directly engaged in the field of robotics by launching the Humanitarian UAV Network (UAViators), yet another first in the humanitarian space. In the process, we created the largest repository of aerial imagery and videos of disaster damage, which is ripe for cutting-edge computer vision research. We also spearheaded the World Bank’s UAV response to Category 5 Cyclone Pam in Vanuatu and also directed a unique disaster recovery UAV mission in Nepal after the devastating earthquakes. (I took time off from QCRI to carry out both of these missions and also took holiday time to support UN relief efforts in the Philippines following Typhoon Haiyan in 2013). Lastly, on the robotics front, we championed the development of international guidelines to inform the safe, ethical & responsible use of this new technology in both humanitarian and development settings. To be sure, innovation is not just about the technology but also about crafting appropriate processes to leverage this technology. Hence also the rationale behind the Humanitarian UAV Experts Meetings that we’ve held at the United Nations Secretariat, the Rockefeller Foundation and MIT.

All of the above pioneering and experimental projects have resulted in extensive media coverage, which has placed QCRI squarely on the radar of international humanitarian and development groups. This media coverage has included the New York Times, Washington Post, Wall Street Journal, CNN, BBC News, UK Guardian, The Economist, Forbes and Times Magazines, New Yorker, NPR, Wired, Mashable, TechCrunch, Fast Company, Nature, New Scientist, Scientific American and more. In addition, our good work and applied research has been featured in numerous international conference presentations and keynotes. In sum, I know of no other institute for advanced computing research that has contributed this much to the international humanitarian space in terms of thought-leadership, strategic partnerships, applied research and operational expertise through real-world co-deployments during and after major disasters.

There is, of course, a lot more to be done in the humanitarian technology space. But what we have accomplished over the past 3 years clearly demonstrates that techniques from advanced computing can indeed provide part of the solution to the pressing Big Data challenge that humanitarian & development organizations face. At the same time, as I wrote in the concluding chapter of my new book, Digital Humanitarians, solving the Big Data challenge alas does not imply that international aid organizations will actually make use of the resulting filtered data, or any other data for that matter, even if they ask for this data in the first place. So until humanitarian organizations truly shift towards both strategic and tactical evidence-based analysis & data-driven decision-making, this disconnect will surely continue unabated for many more years to come.

Reflecting on the past 3.5 years at QCRI, it is crystal clear to me that the most important lesson I (re)learned is that you can do anything if you have an outstanding, super-smart and highly dedicated team that continually goes way above and beyond the call of duty. It is one thing for me to have had the vision for AIDR, MicroMappers, IRL, UAViators, etc., but vision alone does not amount to much. Implementing said vision is what delivers results and learning. And I simply couldn’t have asked for a more talented & stellar team to translate these visions into reality over the past 3+ years. You each know who you are, partners included; it has truly been a privilege and honor working with you. I can’t wait to see what you do next at/with QCRI. Thank you for trusting me; thank you for sharing my vision; thank you for your sense of humor; and thank you for your dedication and loyalty to science and social innovation.

So what’s next for me? I’ll be lining up independent consulting work with several organizations (likely including QCRI). In short, I’ll be open for business. I’m also planning to work on a new project that I’m very excited about, so stay tuned for updates; I’ll be sure to blog about this new adventure when the time is right. For now, I’m busy wrapping up my work as Director of Social Innovation at QCRI and working with the best team there is. QED.

Using Computer Vision to Analyze Aerial Big Data from UAVs During Disasters

Recent scientific research has shown that aerial imagery captured during a single 20-minute UAV flight can take more than half a day to analyze. We flew several dozen flights during the World Bank’s humanitarian UAV mission in response to Cyclone Pam earlier this year. The imagery we captured would’ve taken a single expert analyst a minimum of 20 full-time workdays to make sense of. In other words, aerial imagery is already a Big Data problem. So my team and I are using human computing (crowdsourcing), machine computing (artificial intelligence) and computer vision to make sense of this new Big Data source.

For example, we recently teamed up with the University of Southampton and EPFL to analyze aerial imagery of the devastation caused by Cyclone Pam in Vanuatu. The purpose of this research is to generate timely answers. Aid groups want more than high-resolution aerial images of disaster-affected areas; they want answers: the number and location of damaged buildings, the number and location of displaced people, and which roads are still usable for the delivery of aid, for example. Simply handing over the imagery is not good enough. As demonstrated in my new book, Digital Humanitarians, both aid and development organizations are already overwhelmed by the vast volume and velocity of Big Data generated during and after disasters. Adding yet another source, Big Aerial Data, may be pointless since these organizations may simply not have the time or capacity to make sense of this new data, let alone integrate the results with their other datasets.

We therefore analyzed the crowdsourced results from the deployment of our MicroMappers platform following Cyclone Pam to determine whether those results could be used to train algorithms to automatically detect disaster damage in future disasters in Vanuatu. During this MicroMappers deployment, digital volunteers analyzed over 3,000 high-resolution oblique aerial images, tracing houses that were fully destroyed, partially damaged and largely intact. My colleague Ferda Ofli and I teamed up with Nicolas Rey (a graduate student from EPFL who interned with us over the summer) to explore whether these traces could be used to train our algorithms. The results below were written with Ferda and Nicolas. Our research is not just an academic exercise. Vanuatu is the most disaster-prone country in the world. What’s more, this year’s El Niño is expected to be one of the strongest in half a century.

[Histogram: number of buildings per image in the crowdsourced results]

According to the crowdsourced results, 1,145 of the high-resolution images did not contain any buildings. Above is a simple histogram depicting the number of buildings per image. The aerial images of Vanuatu are very heterogeneous, varying not only in the features they exhibit but also in the angle of view and the altitude at which the pictures were taken. While the vast majority of the images are oblique, some are almost nadir images, and some were taken very close to the ground or even before takeoff.


The heterogeneity of our dataset makes the automated analysis of this imagery a lot more difficult. Furthermore, buildings that are under construction, of which there are many in our dataset, represent a major difficulty because they look very similar to damaged buildings. Our first task thus focused on training our algorithms to determine whether or not a given aerial image shows some kind of building. This is an important task given that more than 30% of the images in our dataset do not contain buildings. If we can develop an accurate algorithm to automatically filter out these irrelevant images (like the “noise” below), this will allow us to focus the crowdsourced analysis on relevant images only.

[Image: example of an irrelevant “noise” image with no buildings]
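As an aside for technically minded readers, here is a rough sketch of what such a building/no-building filter can look like with today’s off-the-shelf tools. Our actual experiments used a different pipeline; the directory layout, model choice and hyperparameters below are assumptions for illustration only:

# Rough sketch of a building / no-building filter via transfer learning.
# Assumed layout: tiles/building/*.jpg and tiles/no_building/*.jpg
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder('tiles', transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained network and retrain the final layer.
model = models.resnet18(weights='IMAGENET1K_V1')
model.fc = nn.Linear(model.fc.in_features, 2)  # building vs. no building

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f'epoch {epoch}: last batch loss {loss.item():.3f}')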

While our results are purely preliminary, we are still pleased with our findings thus far. We’ve been able to train our algorithms to determine whether or not an aerial image includes a building with just over 90% accuracy at the tile level. More specifically, our algorithms were able to recognize and filter out 60% of the images that do not contain any buildings (recall rate), and only 10% of the images that contain buildings were mistakenly discarded (a precision rate of 90%). An example is shown below. There are still quite a number of major challenges, however, so we want to be sure not to over-promise anything at this stage. In terms of next steps, we would like to explore whether our computer vision algorithms can distinguish between destroyed and intact buildings.

[Screenshots: example results from the building-detection filter]
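To spell out what those two rates mean, the toy calculation below treats “contains no buildings” as the positive class that the filter tries to discard. The counts are invented to mirror the reported figures, not taken from our actual test set:

# Toy illustration of the two reported rates. "Positive" here means an
# image that contains no buildings, i.e. one the filter should discard.
def filter_rates(y_true, y_pred):
    # y_true[i] / y_pred[i]: True means "contains no buildings".
    tp = sum(t and p for t, p in zip(y_true, y_pred))        # empty, discarded
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))  # had buildings, discarded
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))  # empty, kept
    recall = tp / (tp + fn)     # share of empty images we managed to discard
    precision = tp / (tp + fp)  # share of discarded images that were truly empty
    return recall, precision

y_true = [True] * 15 + [False] * 35          # 15 empty images, 35 with buildings
y_pred = ([True] * 9 + [False] * 6           # the filter catches 9 of the 15
          + [True] * 1 + [False] * 34)       # and wrongly discards 1 building image
r, p = filter_rates(y_true, y_pred)
print(f'recall {r:.0%}, precision {p:.0%}')  # -> recall 60%, precision 90%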

The UAVs we were flying in Vanuatu required that we land them in order to access the collected imagery. Increasingly, newer UAVs offer the option of broadcasting the aerial images and videos back to base in real time. DJI’s new Phantom 3 UAV (pictured below), for example, allows you to broadcast your live aerial video feed directly to YouTube (assuming you have connectivity). There’s absolutely no doubt that this is where the UAV industry is headed: towards real-time data collection and analysis. For humanitarian applications, including search and rescue, having the data analysis carried out in real time is preferable.

[Image: DJI Phantom 3 UAV]

This explains why my team and I recently teamed up with Elliot Salisbury & Sarvapali Ramchurn from the University of Southampton to crowdsource the analysis of live aerial video footage of disaster zones and to combine this crowdsourcing with (hopefully) near real-time machine learning and automated feature detection. In other words, as digital volunteers are busy tagging disaster damage in video footage, we want our algorithms to learn from these volunteers in real-time. That is, we’d like the algorithms to learn what disaster damage looks like so they can automatically identify any remaining disaster damage in a given aerial video.

So we recently carried out a MicroMappers test deployment using aerial videos from the humanitarian UAV mission to Vanuatu. Close to 100 digital volunteers participated in this deployment. Their task? To click on any parts of the videos that show disaster damage. And whenever 80% or more of these volunteers clicked on the same areas, we would automatically highlight those areas to provide near real-time feedback to the UAV pilot and humanitarian teams.
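For the curious, the core of that aggregation logic can be sketched in a few lines. The version below assumes clicks arrive as (volunteer, x, y) events for a given video frame and that each frame is divided into a coarse grid; the grid size and threshold are illustrative, not the actual MicroMappers parameters:

# Sketch of the click-aggregation logic: bin volunteer clicks on a video
# frame into a coarse grid and flag cells clicked by at least 80% of the
# currently active volunteers. Grid size and threshold are assumptions.
from collections import defaultdict

GRID = 8          # split each frame into an 8x8 grid of cells
THRESHOLD = 0.8   # fraction of active volunteers needed to flag a cell

def flagged_cells(clicks, frame_w, frame_h):
    """clicks: list of (volunteer_id, x, y) for one frame / time window."""
    volunteers_per_cell = defaultdict(set)
    for vol, x, y in clicks:
        cell = (int(x * GRID / frame_w), int(y * GRID / frame_h))
        volunteers_per_cell[cell].add(vol)
    active = {vol for vol, _, _ in clicks}
    return [cell for cell, vols in volunteers_per_cell.items()
            if len(vols) / len(active) >= THRESHOLD]

# Example: 5 volunteers, 4 of whom click near the same damaged rooftop.
clicks = [('v1', 100, 80), ('v2', 110, 90), ('v3', 105, 85),
          ('v4', 98, 88), ('v5', 600, 400)]
print(flagged_cells(clicks, frame_w=640, frame_h=480))  # -> [(1, 1)]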

At one point during the simulations, we had some 30 digital volunteers clicking on aerial videos at the same time, resulting in an average of 12 clicks per second for more than 5 minutes. In fact, we collectively clicked on the videos a total of 49,706 times! This provided more than enough real-time data for MicroMappers to act as a human-intelligence sensor for disaster damage assessments. In terms of accuracy, we achieved about 87% accuracy with the collective clicks. Here’s what the simulations looked like to the UAV pilots as we were all clicking away:

[Video: the pilots’ view of the live clicking simulation]

Thanks to all this clicking, we can export only the most important and relevant parts of the video footage while the UAV is still flying. These snippets, such as this one and this one, can then be pushed to MicroMappers for additional verification. These animations are small and quick, and reduce a long aerial video down to just the most important footage. We’re now analyzing the areas that were tagged in order to determine whether we can use this data to train our algorithms accordingly. Again, this is far more than just an academic curiosity. If we can develop robust algorithms during the next few months, we’ll be ready to use them effectively during the next Typhoon season in the Pacific.
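One simple way to implement that export step, assuming each click also carries a video timestamp, is to bucket clicks into fixed time windows and cut out the busiest ones. The window length, threshold and filenames below are invented for illustration:

# Sketch: find the most-clicked stretches of an aerial video and emit
# ffmpeg commands that cut them out as short snippets.
from collections import Counter

WINDOW = 5        # seconds of video per bucket
MIN_CLICKS = 50   # clicks needed for a bucket to count as "important"

def busy_windows(click_times):
    """Return start times (in seconds) of windows with enough clicks."""
    buckets = Counter(int(t // WINDOW) for t in click_times)
    return sorted(b * WINDOW for b, n in buckets.items() if n >= MIN_CLICKS)

# Toy data: a burst of clicks around t=60s plus a few stray clicks.
click_times = [60 + i * 0.08 for i in range(80)] + [5.0, 130.0, 200.0]

for start in busy_windows(click_times):
    # Stream-copy each busy window out of the source video.
    print(f'ffmpeg -ss {start} -t {WINDOW} -i flight.mp4 -c copy '
          f'snippet_{start:05d}.mp4')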

In closing, big thanks to my team at QCRI for translating my vision of MicroMappers into reality and for trusting me well over a year ago when I said we needed to extend our work to aerial imagery. None of the above research would have been possible without MicroMappers. Big thanks as well to our excellent partners at EPFL and Southampton for sharing our vision and for their hard work on our joint projects. Last but certainly not least, sincerest thanks to the digital volunteers from SBTF and beyond for participating in these digital humanitarian deployments.