Using Sound and Artificial Intelligence to Detect Human Rights Violations

Video continues to be a powerful way to capture human rights abuses around the world. Videos posted to social media can be used to hold perpetrators of gross violations accountable. But video footage poses a “Big Data” challenge to human rights organizations. Two billion smartphone users means almost as many video cameras, which translates into massive amounts of visual evidence of suffering and wrongdoing in conflict zones. Reviewing these videos manually is a labor-intensive, time-consuming, expensive and often traumatic task. So my colleague Jay Aronson at CMU has been exploring how artificial intelligence, and machine learning in particular, might help address this challenge.

As Jay and team rightly note in a recent publication (PDF), “the dissemination of conflict and human rights related video has vastly outpaced the ability of researchers to keep up with it – particularly when immediate political action or rapid humanitarian response is required.” The consequences of this are similar to what I’ve observed in humanitarian aid: “At some point (which will vary from organization to organization), time and resource limitations will necessitate an end to the collection, archiving, and analysis of user generated content unless the process can be automated.” In sum, information overload can “prevent human rights researchers from uncovering widely dispersed events taking place over long periods of time or large geographic areas that amount to systematic human rights violations.”

To take on this Big Data challenge, Jay and team have developed a new machine learning-based audio processing system that “enables both synchronization of multiple audio-rich videos of the same event, and discovery of specific sounds (such as wind, screaming, gunshots, airplane noise, music, and explosions) at the frame level within a video.” The system basically “creates a unique ‘soundprint’ for each video in a collection, synchronizes videos that are recorded at the same time and location based on the pattern of these signatures, and also enables these signatures to be used to locate specific sounds precisely within a video. The use of this tool for synchronization ultimately provides a multi-perspectival view of a specific event, enabling more efficient event reconstruction and analysis by investigators.”
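
Their paper does not include code, but the core idea of aligning two recordings of the same event by their audio can be sketched in a few lines. The toy example below is purely illustrative and is not the authors’ system: it cross-correlates crude per-frame energy “soundprints” to estimate the time offset between two clips (the function names and the 50 ms frame size are my own assumptions).

```python
# Illustrative sketch only -- not the system described above.
import numpy as np
from scipy.signal import correlate

def energy_envelope(samples, sample_rate, frame_ms=50):
    """Short-time energy per frame: a crude per-frame audio signature."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)
    return np.sqrt((frames.astype(float) ** 2).mean(axis=1))

def estimate_offset_seconds(audio_a, audio_b, sample_rate, frame_ms=50):
    """Seconds by which the shared event occurs later in clip A than in clip B."""
    env_a = energy_envelope(audio_a, sample_rate, frame_ms)
    env_b = energy_envelope(audio_b, sample_rate, frame_ms)
    # Normalize so loudness differences between recording devices matter less.
    env_a = (env_a - env_a.mean()) / (env_a.std() + 1e-9)
    env_b = (env_b - env_b.mean()) / (env_b.std() + 1e-9)
    xcorr = correlate(env_a, env_b, mode="full")
    lag_frames = int(xcorr.argmax()) - (len(env_b) - 1)
    return lag_frames * frame_ms / 1000.0
```

A production system would of course rely on far richer learned acoustic features and be much more robust to noise, but the underlying alignment principle is the same.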

Synchronizing image features is far more complex than synchronizing sound. “When an object is occluded, poorly illuminated, or not visually distinct from the background, it cannot always be detected by computer vision systems. Further, while computer vision can provide investigators with confirmation that a particular video was shot from a particular location based on the similarity of the background physical environment, it is less adept at synchronizing multiple videos over time because it cannot recognize that a video might be capturing the same event from different angles or distances. In both cases, audio sensors function better so long as the relevant videos include reasonably good audio.”

Ukrainian human rights practitioners working with families of protestors killed during the 2013-2014 Euromaidan Protests recently approached Jay and company to analyze videos from those events. They wanted to “locate every video available in their collection of the moments before, during, and just after a specific set of killings. They wanted to extract information from these videos, including visual depictions of these killings, whether the protesters in question were an immediate and direct threat to the security forces, plus any other information that could be used to corroborate or refute other forms of evidence or testimony available for their cases.”

Their plan had originally been to manually synchronize more than 65 hours of video footage from 520 videos taken during the morning of February 20, 2014. But after working full-time over several months, they were only able to stitch together about 4 hours of the total video using visual and audio cues in the recordings. So Jay and team used their system to make sense of the footage. They were able to automatically synchronize over 4 hours of the footage. The figure above shows an example of video clips synchronized by the system.

Users can also “select a segment within the video containing the event they are interested in (for example, a series of explosions in a plaza), and search in other videos for a similar segment that shows similar looking buildings or persons, or that contains a similar sounding noise. A user may for example select a shooting scene with a significant series of gunshots, and may search for segments with a similar sounding series of gunshots. This method increases the chances for finding video scenes of an event displaying different angles of the scene or parallel events.”
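
Purely as an illustration of this kind of query-by-example search, the sketch below slides a selected query segment across another video’s audio track and returns the best-matching offset, using simple log-spectrogram frames as stand-in features (all names and parameters here are my own assumptions, not the authors’):

```python
# Illustrative sketch only -- a naive audio query-by-example search.
import numpy as np

def spectrogram_frames(samples, frame_len=1024, hop=512):
    """Log-magnitude FFT frames as crude per-frame acoustic features."""
    frames = []
    for start in range(0, len(samples) - frame_len, hop):
        window = samples[start:start + frame_len] * np.hanning(frame_len)
        frames.append(np.log1p(np.abs(np.fft.rfft(window))))
    return np.array(frames)

def find_similar_segment(query_audio, target_audio):
    """Return (frame offset, distance) of the target segment most similar to the query."""
    q = spectrogram_frames(query_audio)
    t = spectrogram_frames(target_audio)
    best_offset, best_dist = None, np.inf
    for offset in range(len(t) - len(q) + 1):
        dist = np.linalg.norm(t[offset:offset + len(q)] - q)
        if dist < best_dist:
            best_offset, best_dist = offset, dist
    return best_offset, best_dist
```

Real systems would rely on learned embeddings and indexing so that such a search can scale across hundreds of hours of footage.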

Jay and team are quick to emphasize that their system “does not eliminate human involvement in the process because machine learning systems provide probabilistic, not certain, results.” To be sure, “the synchronization of several videos is noisy and will likely include mistakes—this is precisely why human involvement in the process is crucial.”

I’ve been following Jay’s applied research for many years now and continue to be a fan of his approach given the overlap with my own work in the use of machine learning to make sense of the Big Data generated during major natural disasters. I wholeheartedly agree with Jay when he reflected during a recent call that the use of advanced techniques alone is not the answer. Effective cross-disciplinary collaboration between computer scientists and human rights (or humanitarian) practitioners is really hard but absolutely essential. This explains why I wrote this practical handbook on how to create effective collaboration and successful projects between computer scientists and humanitarian organizations.

Using Swimming Robots to Warn Villages of Himalayan Tsunamis

Cross-posted from National Geographic 

Climate change is having a devastating impact on the Himalaya. On the Ngozumpa glacier, one of the largest and longest in the region, hundreds of supraglacial lakes dot the glacier surface. One lake in particular is known for purging a large share of its volume every year: near the start of the monsoon this summer, it lost enough water in less than 48 hours to fill over 40 Olympic-sized swimming pools. To make matters worse, these glacial lakes act like cancers: they consume Himalayan glaciers from the inside out, making some of them melt twice as fast. As a result, villages down-valley from these glacial lakes are becoming increasingly prone to violent flash floods, which locals call Himalayan Tsunamis.

To provide early warnings of these flash floods requires that we collect a lot more geophysical and hydrologic information on these glacial lakes. So scientists like Ulyana (co-author) are racing to understand exactly how these glacial lakes form and grow, and how they’re connected to each other through seemingly secret subterranean channels. We need to know how deep and steep these lakes are, what the lake floors look like and of what materials they are composed (e.g., mud, rock, bare ice).

Ulyana, her colleagues and a small local team of Sherpa have recently started using autonomous swimming robots to automatically map lake floors and look for cracks that may trigger mountain tsunamis. Using robotics to do this is both faster and more accurate than having humans take the measurements. What’s more, robots are significantly safer. Indeed, even getting near these lakes (let alone in them!) is dangerous enough due to unpredictable collapses of ice, called calving, and large boulders rolling off surrounding ice cliffs and into the lakes below. Just imagine being on a small inflatable boat floating on ice-cold water when one of those icefalls happens.

We (Ulyana and Patrick) are actively looking to utilize diving robots as well—specifically the one in the video footage below. This OpenROV Trident robot will enable us to get to the bottom of these glacial lakes to identify deepening ‘hotspots’ before they’re visible from the lake’s surface or from the air. Our plan next year is to pool our efforts, bringing diving, swimming and flying robots to Nepal so we can train our partners—Sherpas and local engineers—on how to use these robotic solutions to essentially take the ‘pulse’ of the changing Himalaya. This way they’ll be able to educate as well as warn nearby villages before the next mountain floods hit.

We plan to integrate these efforts with WeRobotics (co-founded by co-author Patrick) and in particular with the local robotics lab that WeRobotics is already setting up in Kathmandu. This lab has a number of flying robots and trained Nepali engineers. To learn more about how these flying robots are being used in Nepal, check out the pictures here.

We’ll soon be adding diving robots to the robotic lab’s portfolio in Nepal thanks to WeRobotics’s partnership with OpenROV. What’s more, all WeRobotics labs have an expressed goal of spinning off local businesses that offer robotics as services. Thus, the robotics start-up that spins off from our lab in Nepal will offer a range of mapping services using both flying and diving robots. As such, we want to create local jobs that use robotics (jobs that local partners want!) so that our Nepali friends can make a career out of saving their beautiful mountains.

Please do get in touch if you’d like to get involved or support this work in other ways! Email us at ulyana@scienceinthewild.com and patrick@werobotics.org.

Could These Swimming Robots Help Local Communities?

Flying robots are all over the news these days. But UAVs/drones are hardly the only autonomous robotics solutions out there. Driving robots like the one below from Starship Technologies have already covered more than 5,000 miles in 39 cities across 12 countries, gently navigating around hundreds of thousands of people while delivering small packages. I’m already in touch with Starship and related companies to explore a range of humanitarian applications. Perhaps less well known, however, are the swimming robots that can be found floating and diving in oceans, lakes and rivers around the world.

These swimming robots are often referred to as maritime or marine robots, aquatic robots, remotely operated vehicles and autonomous surface water (or underwater) vehicles. I’m interested in swimming robots for the same reason I’m interested in flying and driving robots: they allow us to collect data, transport cargo and take samples in more efficient and productive ways. Flying Robots, for example, can be used to transport essential vaccines and medicines. They can also collect data by taking pictures to support precision agriculture and they can take air samples to test for pollution. The equivalent is true for swimming and diving robots.

So I’d like to introduce you to this cast of characters and will then elaborate on how they can and have been used to make a difference. Do please let me know if I’m missing any major ones—robots and use-cases.

OpenROV Trident
This tethered diving robot can reach depths of up to 100 meters with a maximum speed of 2 meters per second (7 km/hour). The Trident has a maximum run-time of 3 hours, weighs just under 3 kg and easily fits in a backpack. It comes with a 25 meter tether (although longer tethers are also available). The robot, which relays a live video feed back to the surface, can be programmed to swim in long straight lines (transects) over a given area to generate a continuous map of the seafloor. The OpenROV software is open source. More here.
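
To illustrate what such a pre-programmed transect survey looks like, here is a minimal sketch of a “lawnmower” waypoint pattern in local coordinates; it is a generic example, not OpenROV’s actual mission format:

```python
# Illustrative sketch: back-and-forth transect lines over a rectangular area.
def lawnmower_waypoints(width_m, height_m, spacing_m):
    """Waypoints covering a width_m x height_m rectangle with parallel transects."""
    waypoints = []
    y, going_right = 0.0, True
    while y <= height_m:
        x_start, x_end = (0.0, width_m) if going_right else (width_m, 0.0)
        waypoints.append((x_start, y))
        waypoints.append((x_end, y))
        y += spacing_m
        going_right = not going_right
    return waypoints

# Example: a 100 m x 60 m lake section surveyed with transects every 10 m.
print(lawnmower_waypoints(100, 60, 10))
```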

MidWest ROV
This remotely operated swimming robot has a maximum cruising speed of just under 5 km per hour and weighs 25 kg. The ROV is a meter long and has a run time of approximately 4 hours. The platform has full digital and audio recording capabilities, with a sonar scanner that can record a swath roughly 60 meters wide at depths of up to 180 meters. This sturdy robot has been swimming in one of the fastest-changing glacial lakes in the Himalayas to assess flood hazards. More here. See also MarineTech.

Hydromea Vertex AUV

This small swimming robot can cover an area of several square kilometers at depths of up to 300 meters, with a maximum speed of 1 meter per second (3.5 km/hour). The Vertex can automatically scan vertically, horizontally or at any other angle, and from multiple locations. The platform, which weighs only 7 kg and is 70 cm long, can be used to create 3D scans of the seafloor with up to 10 robots operating simultaneously in parallel, thanks to communication and localization technology that enables them to cooperate as a team. More here.

Liquid Robotics Wave Glider
The Wave Glider is an autonomous swimming robot powered by both wave and solar energy, enabling it to cruise at 5.5 km/hour. The surface component, which measures 3 meters in length, contains solar panels that power the platform and onboard sensors. The tether and underwater component enables the platform to use waves for thrust. This Glider operates individually or in fleets to deliver real-time data for up to a year with no fuel. The platform has already traveled well over one million kilometers and through a range of weather conditions including hurricanes and typhoons. More here.

SeaDrone
This tethered robot weighs 5 kg and can operate at depths of up to 100 meters with a maximum speed of 1.5 meters per second (3.5 km/hour). Tethers are available in lengths of 30 to 50 meters. The platform has a battery life of 3 hours and provides a live, high-definition video feed. The SeaDrone platform can easily be controlled from an iOS tablet. More here.

Clearpath Robotics Heron
This surface water swimming robot can cruise at a maximum speed of 1.7 meters per second (6 km/hour) for around 2 hours. The Heron, which weighs 28 kg, offers a payload bay for submerged sensors and a mounting system for those above water. The robot can carry a maximum payload of 10 kg. A single operator can control multiple Herons simultaneously. The platform, like others described below, is ideal for ecosystem assessments and bathymetry surveys (to map the topography of lakes and ocean floors). More here.

Saildrone
The Saildrone navigates to its destination using wind power alone, typically cruising at an average speed of 5.5 km/hour. The robot can then stay at a designated spot or perform survey patterns. Like other robots introduced here, the Saildrone can carry a range of sensors for data collection. The data is then transmitted back to shore via satellite. The Saildrone is also capable of carrying an additional 100 kg worth of payload. More here.

EMILY
EMILY, an acronym for Emergency Integrated Lifesaving Lanyard, is a robotic device used by lifeguards to rescue swimmers. It runs on battery power and is steered by remote control after being dropped into the water from shore, a boat, a pier or a helicopter. EMILY has a maximum cruising speed of 35 km per hour (much faster than a human lifeguard can swim) and functions as a flotation device for up to 4-6 people. The platform was used in Greece to assist in ocean rescues of refugees crossing the Aegean Sea from Turkey. More here. The same company has also created Searcher, an autonomous marine robot that I hope to learn more about soon.

Platypus
Platypus manufactures four different types of swimming robots, one of which is depicted above. Called the Serval, this platform has a maximum speed of 15 km/hour and a runtime of 4 hours. The Serval weighs 35 kg, can carry a payload of 70 kg, and can communicate over wireless, 3G or EDGE. Platypus also offers a base-station package that includes a wireless router and an antenna with a range of up to 2.5 km. The Felis, another Platypus robot, has a maximum speed of 30 km/hour and a maximum payload of 200 kg, and can operate for 12 hours. These platforms can be used for autonomous mapping. More here.

AquaBot
The aim of the AquaBot project is to develop an underwater tethered robot that automates the tasks of visual inspection of fish farm nets and mooring systems. There is little up-to-date information on this project so it is unclear how many prototypes and tests were carried out. Specs for this diving robot don’t seem to be readily available online. More here.

There are of course many more marine robots out there. Have a look at these other companies: Bluefin Robotics, Ocean Server, Riptide Autonomous Systems, Seabotix, Blue Robotics, YSI, AC-CESS and Juice Robotics, for example. The range of applications that maritime robots can be applied to is also growing. At WeRobotics, we’re actively exploring a wide range of use cases to determine if and where maritime robots might be able to add value to the work of our local partners in developing countries.

Take aquaculture (also known as aquafarming), for example. Aquaculture is the fastest growing sector of global food production. But many types of aquaculture remain labor intensive. In addition, a combination of “social and environmental pressures and biological necessities are creating opportunities for aquatic farms to locate in more exposed waters further offshore,” which increases both risks and costs, “particularly those associated with the logistics of human maintenance and intervention activities.” These and other factors make “this an excellent time to examine the possibilities for various forms of automation to improve the efficiency and cost-effectiveness of farming the oceans.”

Just like land-based agriculture, aquaculture can be devastated by major disasters. It also represents an important food security issue for local communities directly dependent on seafood for their livelihoods, so restoring aquafarms can be a vital element of disaster recovery. After the 2011 Japan Earthquake and Tsunami, for example, maritime robots were used to “remediate fishing nets and accelerate the restarting of the fishing economy.” As further noted in the book Disaster Robotics, the robots “cleared fishing beds from debris and pollution” by mapping the “remaining debris in the prime fishing and aquaculture areas, particularly looking for cars and boats leaking oil and gas and for debris that would snag and tear fishing nets.”

Ports and shipping channels in both Japan and Haiti were also reopened using marine robots following the major earthquakes in 2011 and 2010. The robots mapped the debris fields that could damage ships or prevent them from entering port. The clearance of this debris allowed “relief supplies to enter devastated areas and economic activities to resume.” To be sure, coastal damage caused by earthquakes and tsunamis can render ports inoperable. Marine robots can thus accelerate both response and recovery efforts by helping to reopen ports that serve as primary routes for relief supplies, as noted in the book Disaster Robotics.

In sum, marine robotics can be used for aquaculture, structural inspections, estimation of debris volume and type, victim recovery, forensics, environmental monitoring as well as search, reconnaissance and mapping. While marine robots remain relatively expensive, new types of low-cost solutions are starting to enter the market. As these become cheaper and more sophisticated in terms of their autonomous capabilities, I am hopeful that they will become increasingly useful and accessible to local communities around the world. Check out WeRobotics to learn about how appropriate robotics solutions can support local livelihoods.

Reverse Robotics: A Brief Thought Experiment

Imagine a world in which manually controlled technologies simply do not exist. The very thought of manual technologies is, in actual fact, hardly conceivable, let alone comprehensible. Instead, this seemingly alien world is seamlessly powered by intelligent and autonomous robotics systems. Let’s call this world Planet AI.

Planet AI’s versions of airplanes, cars, trains and ships are completely unmanned. That is, they are fully autonomous—a silent symphony of large and small robots waltzing around with no conductor in sight. One fateful night, a young PhD student awakens in a sweat, momentarily unable to breathe. The nightmare: all the swirling robots of Planet AI were no longer autonomous. Each of them had to be told exactly what to do by the Planet’s inhabitants. Madness.

She couldn’t go back to sleep. The thought of having to tell her robotics transport unit (RTU) in the morning how to get from her studio to the university gave her a panic attack. She would inevitably get lost or worse yet crash, maybe even hurt someone. She’d need weeks of practice to manually control her RTU. And even if she could somehow master manual steering, she wouldn’t be able to steer and work on her dissertation at the same time during the 36-minute drive. What’s more, that drive would easily become a 100-minute drive since there’s no way she would manually steer the RTU at 100 kilometers an hour—the standard autonomous speed of RTUs; more like 30km/h.

And what about the other eight billion inhabitants of Planet AI? The thought of having billions of manually controlled RTUs flying, driving and swimming through the massive metropolis of New AI was surely the ultimate horror story. Indeed, civilization would inevitably come to an end. Millions would die in horrific RTU collisions. Transportation would slow to a crawl before collapsing. And the many billions of hours spent working, resting or playing in automated RTUs every day would quickly evaporate into billions of hours of total stress and anxiety. The Planet’s global GDP would go into free fall. RTUs carrying essential cargo automatically from one side of the planet to the other would need to be steered manually. Where would the millions of workers required for such extensive manual labor come from? Who in their right mind would even want to take on such a dangerous and dull assignment? Who would provide the training and certification? And who in the world would be able to pay for all the salaries anyway?

At this point, the PhD student was on her feet. “Call RTU,” she instructed her personal AI assistant. An RTU swung by while she was putting on her shoes. Good, so far so good, she told herself. She got in slowly and carefully, studying the RTU’s behavior suspiciously. No, she thought to herself, nothing out of the ordinary here either. It was just a bad dream. The RTU’s soft purring power source put her at ease; she had always enjoyed the RTU’s calming sound. For the first time since she awoke from her horrible nightmare, she started to breathe more easily. She took an extra deep and long breath.

The RTU was already waltzing with ease at 100 km per hour through the metropolis, the speed barely noticeable from inside the cocoon. Forty-six, forty-seven, forty-eight; she was counting the other RTUs that were speeding right alongside hers, below and above as well. She arrived on campus in 35 minutes and 48 seconds—exactly the time it had taken the RTU during her 372 earlier rides. She breathed a deep sigh of relief and said “Home Please.” It was just past 3am and she definitely needed more sleep.

She thought of her fiancée on the way home. What would she think about her crazy nightmare given her work in the humanitarian space? Oh no. Her heart began to race again. Just imagine the impact that manually steered RTUs would have on humanitarian efforts. Talk about a total horror story. Life-saving aid, essential medicines, food, water, shelter; each of these would have to be transported manually to disaster-affected communities. The logistics would be near impossible to manage manually. Everything would grind to a halt and then collapse. Damage assessments would have to be carried out manually as well, by somehow steering hundreds of robotics data units (RDUs) to collect data on affected areas. Goodness, it would take days if not weeks to assess disaster damage. Those in need would be left stranded. “Call Fiancée,” she instructed, shivering at the thought of her fiancée having to carry out her important life-saving relief work entirely manually.


The point of this story and thought experiment? While some on Planet Earth may find the notion of autonomous robotics systems insane and worry about accidents, it is worth noting that a future world like Planet AI would feel exactly the same way with respect to our manually controlled technologies. Over 80% of airplane accidents are due to human pilot error and 90% of car accidents are the result of human driver error. Our PhD student on Planet AI would describe our use of manually controlled technologies as suicidal, not to mention a massive waste of precious human time.

An average person in the US spends 101 minutes per day driving (which adds up to more than 4 years over a lifetime). There are 214 million licensed car drivers in the US. This means that over 360 million hours of human time in the US alone is spent manually steering a car from point A to point B every day. It also results in more than 30,000 people killed per year. And again, that’s just for the US. There are over 1 billion manually controlled motor vehicles on Earth. Imagine what we could achieve with an additional billion hours every day if we had Planet AI’s autonomous systems to free up this massive cognitive surplus. And let’s not forget the devastating environmental impact of individually owned, manually controlled vehicles.
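
A quick back-of-the-envelope check of these figures (the assumption of roughly 60 years of driving over a lifetime is mine):

```python
# Rough sanity check of the driving-time figures above.
minutes_per_day = 101
licensed_drivers_us = 214_000_000

hours_per_day_us = licensed_drivers_us * minutes_per_day / 60
print(f"{hours_per_day_us / 1e6:.0f} million driver-hours per day")  # ~360 million

# "More than 4 years in a lifetime", assuming ~60 years of driving:
years_driving = minutes_per_day * 365 * 60 / (60 * 24 * 365)
print(f"{years_driving:.1f} years spent behind the wheel")           # ~4.2 years
```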

If you had the choice, would you prefer to live on Earth or on Planet AI if everything else were held equal?

Humanitarian Robotics: The $15 Billion Question?

The International Community spends around $25 Billion per year to provide life-saving assistance to people devastated by wars and natural disasters. According to the United Nations, this is $15 Billion short of what is urgently needed; that’s $15 Billion short every year. So how do we double the impact of humanitarian efforts and do so at half the cost?
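
For clarity, here is the simple arithmetic behind the roughly 40% funding gap discussed below:

```python
# The shortfall as a share of total estimated need.
spent_per_year = 25e9      # USD actually spent
shortfall_per_year = 15e9  # USD short, per the UN
total_need = spent_per_year + shortfall_per_year
print(f"Gap: {shortfall_per_year / total_need:.1%} of total need")  # 37.5%, roughly 40%
```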

Perhaps one way to deal with this stunning 40% gap in funding is to scale the positive impact of the aid industry. How? By radically increasing the efficiency (time-savings) and productivity (cost-savings) of humanitarian efforts. This is where Artificial Intelligence (AI) and Autonomous Robotics come in. The World Economic Forum refers to this powerful new combination as the 4th Industrial Revolution. Amazon, Facebook, Google and other Top 100 Fortune companies are powering this revolution with billions of dollars in R&D. So whether we like it or not, the robotics arms race will impact the humanitarian industry just like it is impacting other industries: through radical gains in efficiency & productivity.

Take Amazon, for example. The company uses some 30,000 Kiva robots in its warehouses across the globe (pictured below). These ground-based, terrestrial robotics solutions have already reduced Amazon’s operating expenses by no less than 20%. And each new warehouse that integrates these self-driving robots will save the company around $22 million in fulfillment expenses alone. According to Deutsche Bank, “Bringing the Kivas to the 100 or so distribution centers that still haven’t implemented the tech would save Amazon a further $2.5 billion.” As is well known, the company is also experimenting with aerial robotics (drones). A recent study by PwC (PDF) notes that “the labor costs and services that can be replaced by the use of these devices account for about $127 billion today, and that the main sectors that will be affected are infrastructure, agriculture, and transportation.” Meanwhile, Walmart and others are finally starting to enter the robotics arms race. Walmart itself is using ground-based robots to ship apparel and is actively exploring the use of aerial robotics to “photograph warehouse shelves as part of an effort to reduce the time it takes to catalogue inventory.”

What makes this new industrial revolution different from those that preceded it is the fundamental shift from manually controlled technologies—a world we’re all very familiar with—to a world powered by increasingly intelligent and autonomous systems—an entirely different kind of world. One might describe this as a shift towards extreme automation. And whether extreme automation powers aerial robotics, terrestrial robotics or maritime robots (pictured below) is beside the point. The disruption here is the one-way shift towards increasingly intelligent and autonomous systems.

Why does this fundamental shift matter to those of us working in humanitarian aid? For at least two reasons: the collection of humanitarian information and the transportation of humanitarian cargo. Whether we like it or not, the rise of increasingly autonomous systems will impact both the way we collect data and transport cargo by making these processes faster, safer and more cost-effective. Naturally, this won’t happen overnight: disruption is a process.

Humanitarian organizations cannot stop the 4th Industrial Revolution. But they can apply their humanitarian principles and ideals to inform how autonomous robotics are used in humanitarian contexts. Take the importance of localizing aid, for example, a priority that gained unanimous support at the recent World Humanitarian Summit. If we apply this priority to humanitarian robotics, the question becomes: how can access to appropriate robotics solutions be localized so that local partners can double the positive impact of their own humanitarian efforts? In other words, how do we democratize the 4th Industrial Revolution? Doing so may be an important step towards closing the $15 billion gap. It could render the humanitarian industry more efficient and productive while localizing aid and creating local jobs in new industries.

This is What Happens When You Send Flying Robots to Nepal

In September 2015, we were invited by our partner Kathmandu University to provide them and other key stakeholders with professional hands-on training to help them scale the positive impact of their humanitarian efforts following the devastating earthquakes. More specifically, our partners were looking to get trained on how to use aerial robotics solutions (drones) safely and effectively to support their disaster risk reduction and early recovery efforts. So we co-created Kathmandu Flying Labs to ensure the long-term sustainability of our capacity building efforts. Kathmandu Flying Labs is kindly hosted by our lead partner, Kathmandu University (KU). This is already well known. What is hardly known, however, is what happened after we left the country.

Our Flying Labs are local innovation labs used to transfer both relevant skills and appropriate robotics solutions sustainably to outstanding local partners who need these the most. The co-creation of these Flying Labs includes both joint training and applied projects customized to meet the specific needs & priorities of our local partners. In Nepal, we provided both KU and Kathmandu Living Labs (KLL) with the professional hands-on training they requested. What’s more, thanks to our Technology Partner DJI, we were able to transfer 10 DJI Phantoms (aerial robotics solutions) to our Nepali partners (6 to KU and 4 to KLL). In addition, thanks to another Technology Partner, Pix4D, we provided both KU and KLL with free licenses of the Pix4D software and relevant training so they could easily process and analyze the imagery they captured using their DJI platforms. Finally, we carried out joint aerial surveys of Panga, one of the towns hardest-hit by the 2015 Earthquake. Joint projects are an integral element of our capacity building efforts. These projects serve to reinforce the training and enable our local partners to create immediate added value using aerial robotics. This important phase of Kathmandu Flying Labs is already well documented.

What is less known, however, is what KU did with the technology and software after we left Nepal. Indeed, the results of this next phase of the Flying Labs process (during which we provide remote support as needed) have not been shared widely, until now. KU’s first order of business was to finish the joint project we had started with them in Panga. It turns out that our original aerial surveys there were incomplete, as denoted by the red circle below.

But because we had taken the time to train our partners and transfer both our skills and the robotics technologies, the outstanding team at KU’s School of Engineering returned to Panga to get the job done without needing any further assistance from us at WeRobotics. They filled the gap:

The KU team didn’t stop there. They carried out a detailed aerial survey of a nearby hospital to create the 3D model below (at the hospital’s request). They also created detailed 3D models of the university and a nearby temple that had been partially damaged by the 2015 earthquakes. Furthermore, they carried out additional disaster damage assessments in Manekharka and Sindhupalchowk, again entirely on their own.

Yesterday, KU kindly told us about their collaboration with the World Wildlife Fund (WWF). Together, they are conducting a study to determine the ecological flow of the Kaligandaki River, one of the largest rivers in Nepal. According to KU, the river’s ecosystem is particularly “complex as it includes aquatic invertebrates, flora, vertebrates, hydrology, geo-morphology, hydraulics, sociology-cultural and livelihood aspects.” The Associate Dean at KU’s School of Engineering wrote: “We are deploying both traditional and modern technology to get the information from ground including UAVs. In this case we are using the DJI Phantoms,” which “reduced largely our field investigation time. The results are interesting and promising.” I look forward to sharing these results in a future blog post.

Lastly, KU’s Engineering Department has integrated the use of the robotics platforms directly into their courses, enabling Geomatics Engineering students to use the robots as part of their end-of-semester projects. In sum, KU has done truly outstanding work following our capacity building efforts and deserves extensive praise. (Alas, it seems that KLL has made little to no use of the aerial technologies or the software since our training 10 months ago.)

Several months after the training in Nepal, we were approached by a British company that needed aerial surveys of specific areas for a project that the Nepal Government had contracted them to carry out. So they wanted to hire us for this project. We proposed instead that they hire our partners at Kathmandu Flying Labs, since the latter are more than capable of carrying out the surveys themselves. In other words, we actively drive business opportunities to Flying Labs partners. Helping to create local jobs and local businesses around robotics as a service is one of our key goals and the final phase of the Flying Labs framework.

So when we heard last week that USAID’s Global Development Lab was looking to hire a foreign company to carry out aerial surveys for a food security project in Nepal, we jumped on a call with USAID to let them know about the good work carried out by Kathmandu Flying Labs. We clearly communicated to our USAID colleagues that there are perfectly qualified Nepali pilots who can carry out the same aerial surveys. USAID’s Development Lab will be meeting with Kathmandu Flying Labs during their next visit in September.

On a related note, one of the participants who we trained in September was hired soon after by Build Change to support the organization’s shelter programs by producing Digital Surface Models (DSMs) from aerial images captured using DJI platforms. More recently, we heard from another student who emailed us with the following: “I had an opportunity to participate in the Humanitarian UAV Training mission in Nepal. It’s because of this training I was able learn how to fly drones and now I can conduct aerial Survey on my own with any hardware.  I would like to thank you and your team for the knowledge transfer sessions.”

This same student (who graduated from KU) added: “The workshop that your team did last time gave us the opportunity to learn how to fly and now we are handling some professional works along with major research. My question to you is ‘How can young graduates from developing countries like ours strengthen their capacity and keep up with their passion on working with technology like UAVs […]? The immediate concern for a graduate in Nepal is a simple job where he can make some money for him and prove to his family that he has done something in return for all the investments they have been doing upon him […]’.”

This is one of several reasons why our approach at WeRobotics is not limited to scaling the positive impact of local humanitarian, development, environmental and public health projects. Our demand-driven Flying Labs model goes the extra (aeronautical) mile to deliberately create local jobs and businesses. Our Flying Labs partners want to make money off the skills and technologies they gain from WeRobotics. They want to take advantage of the new career opportunities afforded by these new AI-powered robotics solutions. And they want their efforts to be sustainable.

In Nepal, we are now interviewing the KU graduate who posed the question above because we’re looking to hire an outstanding and passionate Coordinator for Kathmandu Flying Labs. Indeed, there is much work to be done as we are returning to Nepal in coming months for three reasons: 1) Our local partners have asked us to provide them with the technology and training they need to carry out large scale mapping efforts using long-distance fixed-wing platforms; 2) A new local partner needs to create very high-resolution topographical maps of large priority areas for disaster risk reduction and planning efforts, which requires the use of a fixed-wing platform; 3) We need to meet with KU’s Business Incubation Center to explore partnership opportunities since we are keen to help incubate local businesses that offer robotics as a service in Nepal.

How to Democratize Humanitarian Robotics

Our world is experiencing an unprecedented shift from manually controlled technologies to increasingly intelligent and autonomous systems powered by artificial intelligence (AI). I believe that this radical shift in both efficiency and productivity can have significant positive social impact when it is channeled responsibly, locally and sustainably.

This is why my team and I founded WeRobotics, the only organization fully dedicated to accelerating and scaling the positive impact of humanitarian, development and environmental projects through the appropriate use of AI-powered robotics solutions. I’m thrilled to announce that the prestigious Rockefeller Foundation shares our vision—indeed, the Foundation has just awarded WeRobotics a start-up grant to take Humanitarian Robotics to the next level. We’re excited to leverage the positive power of robotics to help build a more resilient world in line with Rockefeller’s important vision.

Aerial Robotics (drones/UAVs) represent the first wave of robotics to impact humanitarian sectors by disrupting traditional modes of data collection and cargo delivery. Both timely data and the capacity to act on this data are integral to aid, development and environmental projects. This is why we are co-creating and co-hosting a global network of “Flying Labs”: to transfer appropriate aerial robotics solutions and relevant skills to outstanding local partners in developing countries who need these the most.

Our local innovation labs also present unique opportunities for our Technology Partners—robotics companies and institutes. Indeed, our growing network of Flying Labs offers a multitude of geographical, environmental and social conditions for ethical social good projects and responsible field-testing: from high-altitude glaciers and remote archipelagos experiencing rapid climate change to dense urban environments in the tropics subject to intense flooding, and endangered ecosystems facing cascading environmental risks.

The Labs also provide our Technology Partners with direct access to local knowledge, talent and markets, and in turn provide local companies and entrepreneurs with facilitated access to novel robotics solutions. In the process, our local partners become experts in different aspects of robotics, enabling them to become service providers and drive new growth through local start-ups and companies. The Labs thus seek to offer robotics-as-a-service across multiple local sectors. As such, the Labs follow a demand-driven social entrepreneurship model designed to catalyze local businesses while nurturing learning and innovation.

Of course, there’s more to robotics than just aerial robotics. This is why we’re also exploring the use of AI-powered terrestrial and maritime robotics for data collection and cargo delivery. We’ll add these solutions to our portfolio as they become more accessible in the future. In the meantime, sincerest thanks to the Rockefeller Foundation for their trust and invaluable support. Big thanks also to our outstanding Board of Directors and to key colleagues for their essential feedback and guidance.