
Aerial Robotics for Payload Delivery in Developing Countries: Open Questions

Should developing countries seek to manufacture their own robotics solutions in order to establish payload delivery services? What business models make the most sense to sustain these services? Do decision-support tools already exist to determine which delivery routes are best served by aerial robots (drones) rather than traditional systems (such as motorbikes)? And what mechanisms should be in place to ensure that the impact of robotics solutions on local employment is one of net job creation rather than job loss?


These are some of the questions I’ve been thinking about and discussing with various colleagues over the past year vis-a-vis humanitarian applications. So let me take the first two questions and explore them further here. I plan to write a follow-up post in the near future to address the other two.

First, should developing countries take advantage of commercial solutions that already exist to build their robotics delivery infrastructure? Or should they instead seek to manufacture these robotics platforms locally? The way I see it, this does not have to be an either/or situation. Developing countries can both benefit from the robust robotics technologies that already exist and take steps to manufacture their own solutions over time.

This is not a hypothetical debate. I’ve spent the past few months going back and forth with a government official in a developing country about this very question. The official is not interested in leveraging existing commercial solutions from the West. As he rightly notes, there are many bright engineers in-country who are able and willing to build these robotics solutions locally.

Here’s the rub, however: this official has no idea just how much work, time and money is needed to develop robust, reliable and safe robotics solutions. In fact, many companies in both Europe and the US have themselves completely under-estimated just how technically challenging (and expensive) it is to develop reliable aerial robotics solutions to deliver payloads. This endeavor easily takes years and millions of dollars to have a shot at success. It is far from trivial.

The government official in question wants his country’s engineers to build these solutions locally in order to transport essential medicines and vaccines between health clinics and remote villages. Providing this service is relatively urgent because existing delivery mechanisms are slow, unreliable and at times dangerous. So this official will have to raise a substantial amount of funds to pay local engineers to build home-grown robotics solutions and iterate accordingly. This could take years (with absolutely no guarantee of success, mind you).

On the other hand, this same official could decide to welcome the use of existing commercial solutions as part of field-tests in-country. The funding for this would not have to come from the government and the platforms could be field-tested as early as this summer. Not only would this give local engineers the ability to learn from the tests and gain important engineering insights, but they could also be hired to actually operate the cargo delivery services over the long term, thus gaining the skills to maintain and fix the platforms. Learning by doing would give these engineers practical training that they could use to build their own home-grown solutions.

One could be even more provocative: Why invest so much time and effort in local manufacturing when in-country engineers and entrepreneurs could simply use commercial solutions that already exist to make money sooner rather than later by providing robotics as a service? We’ve seen, historically, the transition from manufacturing to service-based economies. There’s plenty of profit to be made from the latter with a lot less start-up time and capital required. And again, one strategy does not preclude the other, so why forgo both early training and business opportunities when these same opportunities could help develop and fund the local robotics industry?

Admittedly, I’m somewhat surprised by the official’s zero tolerance for the use of foreign commercial technology to improve his country’s public health services; after all, that same official is using computers, phones, cars, televisions, etc., that are certainly not made in-country. He does not have a background in robotics, so perhaps he assumes that building robust robotics solutions is relatively easy. Simply perusing the past two years of crowdfunded aerial robotics projects will clearly demonstrate that most have resulted in complete failure despite raising millions of dollars. That robotics graveyard keeps growing.

But I fully respect the government official’s position even if I disagree with it. In my most recent exchange with said official, I politely re-iterated that one strategy (local manufacturing) does not preclude the other (local business opportunities around robotics as a service using foreign commercial solutions). Surely the country in question can leverage foreign technology while also building a local manufacturing base to produce its own robotics solutions.


Second, on business models: which models can sustain aerial delivery services by making them profitable sooner rather than later? I was recently speaking with a good colleague of mine who works for a very well-respected humanitarian group about their plans to pilot the use of aerial robotics for the delivery of essential medicines. When I asked him about his organization’s business model for sustaining these delivery services, he said there was no model; his humanitarian organization would simply foot the bill.

Surely we can do better. Just think how absurd it would be for a humanitarian organization to pay for its own 50-kilometer paved road to transport essential medicines by truck and then decide not to recoup those major costs. You’ve paid for a perfectly good road that only gets used a few times a day by your organization, and 80% of the time there is no one else on it. Humanitarians who seek to embark on robotics delivery projects should really take the time to understand local demand for transportation services and use-cases, and explore strategies to recoup part of their investments in building the aerial robotics infrastructure.

Surely remote communities who are disconnected from health services are also disconnected from access to other commodities. Of course, these local villages may not benefit from high levels of income; but I’m not suggesting that we look for high margins of return. Point is, if you’ve already purchased an aerial robot (drone) and it spends 80% of its time on the ground, then talk about a missed opportunity. Take commercial aviation as an analogy. Airlines do not make money when their planes are parked at the gate. They make money when said planes fly from point A to point B. The more they fly, the more they transport, the more they profit. So pray tell what is the point of investing in aerial robots only to have them spend most of their lives on the ground? Why not “charter” these robots for other purposes when they’re not busy flying medicines?

The fixed costs are the biggest hurdle with respect to aerial robotics, not the variable costs. Autonomous flights themselves cost virtually nothing; only one or two people’s time to operate the robot and swap batteries & payloads. Just like their big sisters (manually piloted aircraft), aerial robots should be spending the bulk of their time in the sky. So humanitarian organizations really ought to be thinking sooner rather than later about how to recoup part of their fixed costs by offering to transport other high-demand goods. For example, by allowing local businesses to use existing robotics aircraft and routes to transport top-up cards or SIM cards for mobile phones. What is the weight of 500 top-up or SIM cards? Around 0.5kg, which is easily transportable via aerial robot. Better yet, identify perishable commodities with a short shelf-life and allow businesses to fly those via aerial robot.

The business model that I’m most interested in at the moment is a “Per Flight Savings” model. One reason to introduce robotics solutions is to save on costs, variable costs in particular. Let’s say that the variable cost of operating robotics solutions is 20% lower than the cost of traditional delivery mechanisms (per flight versus per drive, for example). You offer the client a 10% cost saving and pocket the other 10% as revenue. Over time, with sufficient flights (transactions) and growing demand, you break even and start to turn a profit. I realize this is a hugely simplistic description, but this need not be unnecessarily complicated either. The key will obviously be the level of demand for these transactions.
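
To make the arithmetic concrete, here is a minimal sketch of the Per Flight Savings logic in Python. Every number in it is a made-up assumption for illustration (trip costs, fixed costs, flight volumes), not field data:

```python
# Toy "Per Flight Savings" arithmetic. All numbers below are illustrative
# assumptions, not field data.

cost_per_traditional_run = 50.0                          # e.g. one motorbike trip ($)
cost_per_uav_flight = cost_per_traditional_run * 0.80    # assume flights are 20% cheaper
client_price = cost_per_traditional_run * 0.90           # pass a 10% saving to the client
margin_per_flight = client_price - cost_per_uav_flight   # keep the other 10% as revenue

fixed_costs = 250_000.0                                  # platform, permits, training, etc.
flights_to_break_even = fixed_costs / margin_per_flight

print(f"Margin per flight:       ${margin_per_flight:.2f}")
print(f"Flights to break even:   {flights_to_break_even:,.0f}")
print(f"Years at 30 flights/day: {flights_to_break_even / (30 * 365):.1f}")
```

Under these invented numbers the per-flight margin is tiny, so the model only closes with a high volume of transactions, which is exactly why the level of demand is the key variable.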

The way I see it, regardless of the business model, there will be a huge first-mover advantage in developing countries given the massive barriers to entry. Said barriers are primarily due to regulatory issues and air traffic management challenges. For example, once a robotics company manages to get regulatory approval and specific flight permissions for designated delivery routes to supply essential medicines, a second company that seeks to enter the market may face even greater barriers. Why? Because managing aerial robotics platforms from one company and segregating that airspace from manned aircraft can already be a challenge (not to mention a source of concern for Civil Aviation Authorities).

So adding new (and different types of) robots from a second company requires new communication protocols between the different robotics platforms operated by the two companies. In sum, the challenges become more complex more quickly as new competitors seek entry. And for an Aviation Authority that may already be wary of flying robots, the proposal of adding a second fleet from a different company in order to increase competition around aerial deliveries may take said Authority some time to digest. Of course, if these companies can each operate in completely different parts of a given country, then technically this is an easier challenge to manage (and less anxiety-provoking for authorities).

But these barriers are not only technical (and surmountable). They also include identifying those (few?) use-cases that clearly make the most business sense to recoup one’s investments sooner rather than later given the very high start-up fixed costs associated with developing robotics platforms. Identifying these business cases is typically not something that’s easily done remotely. A considerable amount of time and effort must be spent on-site to identify and meet possible stakeholders in order to brainstorm and discover key use-cases. And my sense is that aerial robots often need to be designed to meet a specific use-case. So even when new use-cases are identified, there may still be a need for Research and Development (R&D) to modify a given robotics platform so it can most efficiently cater to those use-cases.

There are other business models worth thinking through for related services, such as those around the provision of battery-charging services. The group Mobisol has installed solar home systems on the roofs of over 40,000 households in Rwanda and Tanzania to tackle the challenge of energy poverty. Mobisol claims to already cover much of Tanzania with solar panels that are no more than 5 kilometers apart. This could enable aerial robots (UAVs) to hop from recharging station to recharging station, an opportunity that Mobisol is already actively exploring. Practical challenges aside, this network of charging stations could lead to an interesting business model around the provision of aerial robotics services.

As the astute reader will have gathered, much of the above is simply a written transcript of me thinking out loud. So I’d very much welcome some intellectual company here along with constructive feedback. What am I missing? Is my logic sound? What else should I be taking into account?

Assessing Disaster Damage from 3D Point Clouds

Humanitarian and development organizations like the United Nations and the World Bank typically carry out disaster damage and needs assessments following major disasters. The ultimate goal of these assessments is to measure the impact of disasters on the society, economy and environment of the affected country or region. This includes assessing the damage caused to building infrastructure, for example. These assessment surveys are generally carried out in person—that is, on foot and/or by driving around an affected area. This is a very time-consuming process with highly variable results in terms of data quality. Can 3D models (point clouds) derived from very high-resolution aerial imagery captured by UAVs accelerate and improve the post-disaster damage assessment process? Yes, but a number of challenges related to methods, data & software need to be overcome first. Solving these challenges will require pro-active cross-disciplinary collaboration.

The following three-tiered scale is often used to classify infrastructure damage: “1) Completely destroyed buildings or those beyond repair; 2) Partially destroyed buildings with a possibility of repair; and 3) Unaffected buildings or those with only minor damage. By locating on a map all dwellings and buildings affected in accordance with the categories noted above, it is easy to visualize the areas hardest hit and thus requiring priority attention from authorities in producing more detailed studies and defining demolition and debris removal requirements” (UN Handbook). As one World Bank colleague confirmed in a recent email, “From the engineering standpoint, there are many definitions of the damage scales, but from years of working with structural engineers, I think the consensus is now to use a three-tier scale – destroyed, heavily damaged, and others (non-visible damage).”

That said, field-based surveys of disaster damage typically overlook damage caused to roofs since on-the-ground surveyors are bound by the laws of gravity. Hence the importance of satellite imagery. At the same time, however, “The primary problem is the vertical perspective of [satellite imagery, which] largely limits the building information to the roofs. This roof information is well suited for the identification of extreme damage states, that is completely destroyed structures or, to a lesser extent, undamaged buildings. However, damage is a complex 3-dimensional phenomenon,” which means that “important damage indicators expressed on building façades, such as cracks or inclined walls, are largely missed, preventing an effective assessment of intermediate damage states” (Fernandez Galarreta et al. 2014).


This explains why “Oblique imagery [captured from UAVs] has been identified as more useful, though the multi-angle imagery also adds a new dimension of complexity” as we experienced first-hand during the World Bank’s UAV response to Cyclone Pam in Vanuatu (Ibid, 2014). Obtaining photogrammetric data for oblique images is particularly challenging. That is, identifying GPS coordinates for a given house pictured in an oblique photograph is virtually impossible to do automatically with the vast majority of UAV cameras. (Only specialist cameras using gimbal mounted systems can reportedly infer photogrammetric data in oblique aerial imagery, but even then it is unclear how accurate this inferred GPS data is). In any event, oblique data also “lead to challenges resulting from the multi-perspective nature of the data, such as how to create single damage scores when multiple façades are imaged” (Ibid, 2014).

To this end, my colleague Jorge Fernandez Galarreta and I are exploring the use of 3D (point clouds) to assess disaster damage. Multiple software solutions like Pix4D and PhotoScan can already be used to construct detailed point clouds from high-resolution 2D aerial imagery (nadir and oblique). “These exceed standard LiDAR point clouds in terms of detail, especially at façades, and provide a rich geometric environment that favors the identification of more subtle damage features, such as inclined walls, that otherwise would not be visible, and that in combination with detailed façade and roof imagery have not been studied yet” (Ibid, 2014).

Unlike oblique images, point clouds give surveyors a full 3D view of an urban area, allowing them to “fly through” and inspect each building up close and from all angles. One need no longer be physically onsite, nor limited to simply one façade or a strictly field-based view to determine whether a given building is partially damaged. But what does partially damaged even mean when this kind of high resolution 3D data becomes available? Take this recent note from a Bank colleague with 15+ years of experience in disaster damage assessments: “In the [Bank’s] official Post-Disaster Needs Assessment, the classification used is to say that if a building is 40% damaged, it needs to be repaired. In my view this is too vague a description and not much help. When we say 40%, is it the volume of the building we are talking about or the structural components?”
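
As a practical aside, this fly-through inspection does not require specialist software. Here is a minimal sketch using the open-source Open3D library (my own suggestion, not a tool mentioned in this post); the file name is a hypothetical point cloud export from Pix4D or PhotoScan:

```python
# Minimal "fly-through" viewer sketch using the open-source Open3D library.
# The file name is a hypothetical .ply export from Pix4D, PhotoScan or similar.
import open3d as o3d

pcd = o3d.io.read_point_cloud("vanuatu_village.ply")  # hypothetical export
print(pcd)                                            # reports the number of points

# Opens an interactive window: rotate, zoom and move through the scene to
# inspect façades and roofs from any angle.
o3d.visualization.draw_geometries([pcd])
```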


In their recent study, Fernandez Galarreta et al. used point clouds to generate per-building damage scores based on a 5-tiered classification scale (D1-D5). They chose to compute these damage scores based on the following features: “cracks, holes, intersection of cracks with load-carrying elements and dislocated tiles.” They also selected non-damage related features: “façade, window, column and intact roof.” Their results suggest that the visual assessment of point clouds is very useful for identifying the following disaster damage features: total collapse, collapsed roof, rubble piles, inclined façades and more subtle damage signatures that are difficult to recognize in more traditional BDA [Building Damage Assessment] approaches. The authors were thus able to compute a per-building damage score, taking into account both “the overall structure of the building” and the “aggregated information collected from each of the façades and roofs of the building to provide an individual per-building damage score.”

Fernandez Galarreta et al. also explore the possibility of automating this damage assessment process based on point clouds. Their conclusion: “More research is needed to extract automatically damage features from point clouds, combine those with spectral and pattern indicators of damage, and to couple this with engineering understanding of the significance of connected or occluded damage indicators for the overall structural integrity of a building.” That said, the authors note that this approach would “still suffer from the subjectivity that characterizes expert-based image analysis.”

Hence my interest in using crowdsourcing to analyze point clouds for disaster damage. Naturally, crowdsourcing alone will not eliminate subjectivity. In fact, having more people analyze point clouds may yield all kinds of disparate results. This explains why a detailed and customized imagery interpretation guide is necessary; like this one, which was just released by my colleagues at the Harvard Humanitarian Initiative (HHI). This also explains why crowdsourcing platforms require quality-control mechanisms. One easy technique is triangulation: have ten different volunteers look at each point cloud and tag features in said cloud that show cracks, holes, intersection of cracks with load-carrying elements and dislocated tiles. Surely more eyes are better than two for tasks that require a good eye for detail.


Next, identify which features have the most tags—this is the triangulation process. For example, if one area of a point cloud is tagged as a “crack” by 8 or more volunteers, chances are there really is a crack there. One can then count the total number of distinct areas tagged as cracks by 8 or more volunteers across the point cloud to calculate the total number of cracks per façade. Do the same with the other metrics (holes, dislocated tiles, etc.), and you can compute a per-building damage score based on overall consensus derived from hundreds of crowdsourced tags. Note that “tags” can also be lines or polygons, meaning that individual cracks could be traced by volunteers, thus providing information on the approximate length/size of a crack. This variable could also be factored into the overall per-building damage score.
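
Here is a minimal sketch of what that triangulation step could look like in code. The input format, the clustering radius, the 8-volunteer threshold and the scoring weights are all illustrative assumptions on my part, not the logic of any existing platform:

```python
# Toy consensus ("triangulation") over crowdsourced point-cloud tags.
# Tags are assumed to arrive as (volunteer_id, feature_type, x, y, z) tuples.
from collections import defaultdict
from math import dist

AGREEMENT = 8    # a feature counts only if >= 8 distinct volunteers tagged it
RADIUS = 0.5     # metres: tags closer than this are treated as the same spot
WEIGHTS = {"crack": 1, "hole": 2, "dislocated_tile": 1}   # illustrative weights

def confirmed_features(tags):
    """Greedily cluster nearby tags of the same type and keep the clusters
    that at least AGREEMENT distinct volunteers agree on."""
    clusters = []   # each: {"type", "points", "volunteers"}
    for vol, ftype, x, y, z in tags:
        for c in clusters:
            if c["type"] == ftype and dist((x, y, z), c["points"][0]) <= RADIUS:
                c["points"].append((x, y, z))
                c["volunteers"].add(vol)
                break
        else:
            clusters.append({"type": ftype, "points": [(x, y, z)], "volunteers": {vol}})
    return [c for c in clusters if len(c["volunteers"]) >= AGREEMENT]

def damage_score(tags):
    """Toy per-building score: weighted count of confirmed features."""
    counts = defaultdict(int)
    for c in confirmed_features(tags):
        counts[c["type"]] += 1
    return sum(WEIGHTS.get(t, 1) * n for t, n in counts.items()), dict(counts)

# Tiny synthetic example: 10 volunteers tag roughly the same crack.
example = [(v, "crack", 2.0 + v * 0.01, 5.0, 1.2) for v in range(10)]
print(damage_score(example))   # -> (1, {'crack': 1}) under these assumptions
```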

In sum, crowdsourcing could potentially overcome some of the data quality issues that have long marked field-based damage assessment surveys. In addition, crowdsourcing could potentially speed up the data analysis since professional imagery and GIS analysts tend to be hugely busy in the aftermath of major disasters. Adding more data to their plate won’t help anyone. Crowdsourcing the analysis of 3D point clouds may thus be our best bet.

So why hasn’t this all been done yet? For several reasons. For one, creating very high-resolution point clouds requires more pictures and thus more UAV flights, which can be time-consuming. Second, processing aerial imagery to construct point clouds can also take some time. Third, handling, sharing and hosting point clouds can be challenging given how large those files quickly get. Fourth, no software platform currently exists to crowdsource the annotation of point clouds as described above (particularly when it comes to the automated quality-control mechanisms that are necessary to ensure data quality). Fifth, we need more robust imagery interpretation guides. Sixth, groups like the UN and the World Bank are still largely thinking in 2D rather than 3D. And those few who are considering 3D tend to approach this from a data visualization angle rather than using human and machine computing to analyze 3D data. Seventh, point cloud analysis for 3D feature detection is still a very new area of research. Many of the methodological questions have yet to be answered, which is why my team and I at QCRI are starting to explore this area from the perspective of computer vision and machine learning.

The holy grail? Combining crowdsourcing with machine learning for real-time feature detection of disaster damage in 3D point clouds rendered in real-time via airborne UAVs surveying a disaster site. So what is it going to take to get there? Well, first of all, UAVs are becoming more sophisticated; they’re flying faster and for longer and will increasingly be working in swarms. (In addition, many of the new micro-UAVs come with a “follow me” function, which could enable the easy and rapid collection of aerial imagery during field assessments). So the first challenge described above is temporary as are the second and third challenges since computer processing power is increasing, not decreasing, over time.

This leaves us with the software challenge and the imagery guides. I’m already collaborating with HHI on the latter. As for the former, I’ve spoken with a number of colleagues to explore possible software solutions to crowdsource the tagging of point clouds. One idea is simply to extend MicroMappers. Another is to add simple annotation features to PLAS.io and PointCloudViz since these platforms are already designed to visualize and interact with point clouds. A third option is to use a 3D model platform like SketchFab, which already enables annotations. (Many thanks to colleague Matthew Schroyer for pointing me to SketchFab last week). I’ve since had a long call with SketchFab and am excited by the prospects of using this platform for simple point cloud annotation.

In fact, Matthew already used SketchFab to annotate a 3D model of the Durbar Square neighborhood in downtown Kathmandu post-earthquake. He found an aerial video of the area, took multiple screenshots of this video, created a point cloud from these and then generated a 3D model, which he annotated within SketchFab. This model, pictured below, would have been much higher resolution if he had had the original footage or 2D images.

[Screenshots: annotated 3D model of Durbar Square, Kathmandu]

Here’s a short video with all the annotations in the 3D model:

And here’s the link to the “live” 3D model. And to drive home the point that this 3D model could be far higher resolution if the underlying imagery had been directly accessible to Matthew, check out this other SketchFab model below, which you can also access in full here.

[Screenshots of the higher-resolution SketchFab model]

The SketchFab team has kindly given me a SketchFab account that allows up to 50 annotations per 3D model. So I’ll be uploading a number of point clouds from Vanuatu (post Cyclone Pam) and Nepal (post earthquakes) to explore the usability of SketchFab for crowdsourced disaster damage assessments. In the meantime, one could simply tag-and-number all major features in a point cloud, create a Google Form, and ask digital volunteers to rate the level of damage near each numbered tag. Not a perfect solution, but one that works. Ultimately, we’d need users to annotate point clouds by tracing 3D polygons if we wanted an easier way to use the resulting data for automated machine learning purposes.

In any event, if readers do have any suggestions on other software platforms, methodologies, studies worth reading, etc., feel free to get in touch via the comments section below or by email, thank you. In the meantime, many thanks to colleagues Jorge, Matthew, Ferda & Ji (QCRI), Salvador (PointCloudViz), Howard (PLAS.io) and Corentin (SketchFab) for the time they’ve kindly spent brainstorming the above issues with me.

Crowdsourcing Point Clouds for Disaster Response

Point Clouds, or 3D models derived from high-resolution aerial imagery, are in fact nothing new. Several software platforms already exist to reconstruct a series of 2D aerial images into fully-fledged 3D fly-through models. Check out these very neat examples from my colleagues at Pix4D and SenseFly:

What do a castle, Jesus and a mountain have to do with humanitarian action? As noted in my previous blog post, there’s only so much disaster damage one can glean from nadir (that is, vertical) imagery and oblique imagery. Let’s suppose that the nadir image below was taken by an orbiting satellite or flying UAV right after an earthquake, for example. How can you possibly assess disaster damage from this one picture alone? Even if you had nadir imagery of these houses before the earthquake, your ability to assess structural damage would be limited.

[Nadir aerial image of houses]

This explains why we also captured oblique imagery for the World Bank’s UAV response to Cyclone Pam in Vanuatu (more here on that humanitarian mission). But even with oblique photographs, you’re stuck with one fixed perspective. Who knows what the houses below look like from the other side; your UAV may have captured this side only. And even if you had pictures from all possible angles, you’d literally have hundreds of pictures to leaf through and make sense of.

[Oblique aerial image of houses]

What’s that famous quote by Henry Ford again? “If I had asked people what they wanted, they would have said faster horses.” We don’t need faster UAVs, we simply need to turn what we already have into Point Clouds, which I’m indeed hoping to do with the aerial imagery from Vanuatu, by the way. The Point Cloud below was made from nothing more than individual 2D aerial images.

It isn’t perfect, but we don’t need perfection in disaster response, we need good enough. So when we as humanitarian UAV teams go into the next post-disaster deployment and ask humanitarians what they need, they may say “faster horses” because they’re not (yet) familiar with what’s really possible with the imagery processing solutions available today. That obviously doesn’t mean that we should ignore their information needs. It simply means we should seek to expand their imaginations vis-a-vis the art of the possible with UAVs and aerial imagery. Here is a 3D model of a village in Vanuatu constructed using 2D aerial imagery:

Now, the title of my blog post does lead with the word crowdsourcing. Why? For several reasons. First, it takes some decent computing power (and time) to create these Point Clouds. But if the underlying 2D imagery is made available to hundreds of Digital Humanitarians, we could use this distributed computing power to rapidly crowdsource the creation of 3D models. Second, each model can then be pushed to MicroMappers for crowdsourced analysis. Why? Because having a dozen eyes scrutinizing one Point Cloud is better than two. Note that for quality-control purposes, each Point Cloud would be shown to 5 different Digital Humanitarian volunteers; we already do this with MicroMappers for tweets, pictures, videos, satellite images and of course aerial images as well. Each digital volunteer would then trace areas in the Point Cloud where they spot damage. If the traces from the different volunteers match, then bingo, there’s likely damage at those x, y and z coordinates. Here’s the idea:
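
As a rough sketch of that trace-matching step, one could reduce each volunteer’s traced area to a set of coarse 3D grid cells and keep only the cells that enough volunteers agree on. The grid size and the agreement quorum below are illustrative assumptions, not MicroMappers internals:

```python
# Toy agreement check over damage traces from 5 volunteers.
from collections import Counter

VOXEL = 0.25   # metres per grid cell (illustrative)
QUORUM = 3     # at least 3 of the 5 volunteers must agree (illustrative)

def to_voxels(points):
    """Discretize one volunteer's traced (x, y, z) points into grid cells."""
    return {(round(x / VOXEL), round(y / VOXEL), round(z / VOXEL))
            for x, y, z in points}

def likely_damage(traces_per_volunteer):
    """Return the grid cells traced by at least QUORUM volunteers."""
    votes = Counter()
    for points in traces_per_volunteer:
        votes.update(to_voxels(points))   # one vote per volunteer per cell
    return {cell for cell, n in votes.items() if n >= QUORUM}

# Toy example: 4 of 5 volunteers trace roughly the same damaged wall section.
wall = [(2.0 + i * 0.1, 5.0, 1.0) for i in range(5)]
traces = [wall, wall, wall, wall, [(40.0, 2.0, 0.5)]]   # one volunteer disagrees
print(sorted(likely_damage(traces)))
```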

We could easily use iPads to turn the process into a Virtual Reality experience for digital volunteers. In other words, you’d be able to move around and above the actual Point Cloud by simply changing the position of your iPad accordingly. This technology already exists and has for several years now. Tracing features in the 3D models that appear to be damaged would be as simple as using your finger to outline the damage on your iPad.

What about the inevitable challenge of Big Data? What if thousands of Point Clouds are generated during a disaster? Sure, we could try to scale our crowdsourcing efforts by recruiting more Digital Humanitarian volunteers, but wouldn’t that just be asking for a “faster horse”? Just like we’ve already done with MicroMappers for tweets and text messages, we would seek to combine crowdsourcing and Artificial Intelligence to automatically detect features of interest in 3D models. This sounds to me like an excellent research project for a research institute engaged in advanced computing R&D.

I would love to see the results of this applied research integrated directly within MicroMappers. This would allow us to combine the results of social media analysis (e.g., tweets, Instagram pictures, YouTube videos) directly with the results of satellite imagery analysis as well as 2D and 3D aerial imagery analysis, all generated via MicroMappers.

Anyone interested in working on this?

Digital Activism, Epidemiology and Old Spice: Why Faster is Indeed Different

The following thoughts were inspired by one of Zeynep Tufekci’s recent posts entitled “Faster is Different” on her Technosociology blog. Zeynep argues “against the misconception that acceleration in the information cycle would simply mean the same things will happen as would have before, but merely at a more rapid pace. So, you can’t just say, hey, people communicated before, it was just slower. That is wrong. Faster is different.”

I think she’s spot on and the reason why goes to the heart of complex systems behavior and network science. “Combined with the reshaping of networks of connectivity from one/few-to-one/few (interpersonal) and one-to-many (broadcast) into many-to-many, we encounter qualitatively different dynamics,” writes Zeynep. In a very neat move, she draws upon “epidemiology and quarantine models to explain why resource-constrained actors, states, can deal with slower diffusion of protests using ‘whack-a-protest’ method whereas they can be overwhelmed by simultaneous and multi-channel uprisings which spread rapidly and ‘virally.’ (Think of it as a modified disease/contagion model).” She then uses the “unsuccessful Gafsa protests in 2008 in Tunisia and the successful Sidi Bouzid uprising in Tunisia in 2010 to illustrate the point.”

I love the use of epidemiology and quarantine models to demonstrate why faster is indeed different. One of the complex systems lectures we had when I was at the Santa Fe Institute (SFI) focused on explaining why epidemics are so unpredictable. It was a real treat to have Duncan Watts himself present his latest research on this question. Back in 1998, he and Steven Strogatz wrote a seminal paper presenting the mathematical theory of the small world phenomenon. One of Duncan’s principal areas of research has been information contagion, and in his presentation at SFI he explained that, amazingly, mathematical epidemiology currently has no way to answer how big a novel outbreak of an infectious disease will get.

I won’t go into the details of traditional mathematical epidemiology and the Standard (SIR) Model, but suffice it to say that the main factor thought to determine the spread of an epidemic was the “Basic Reproduction Number”, i.e., the average number of individuals newly infected by a single infected individual in a susceptible population. However, real epidemics that differ dramatically in size can have more or less the same Basic Reproduction Number.
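
To make the standard-model picture concrete, here is a minimal stochastic SIR sketch (my own toy example, not from Duncan’s talk). Repeated outbreaks with exactly the same Basic Reproduction Number either fizzle out after a handful of cases or grow into a large epidemic, which is the “bi-modal” behavior discussed next:

```python
# Toy discrete-time stochastic SIR in a well-mixed population.
import random

def outbreak_size(N=2_000, r0=1.5, gamma=0.2, seed=None):
    """Run one outbreak to extinction and return the total ever infected."""
    rng = random.Random(seed)
    beta = r0 * gamma                      # per-step transmission rate
    S, I, R = N - 1, 1, 0                  # start from a single infected person
    while I > 0:
        p_inf = 1 - (1 - beta / N) ** I    # chance a given susceptible is infected
        new_inf = sum(rng.random() < p_inf for _ in range(S))
        new_rec = sum(rng.random() < gamma for _ in range(I))
        S -= new_inf
        I += new_inf - new_rec
        R += new_rec
    return R

sizes = sorted(outbreak_size(seed=s) for s in range(100))
print("smallest 5 outbreaks:", sizes[:5])   # typically a handful of cases
print("largest 5 outbreaks: ", sizes[-5:])  # typically a large fraction of N
```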

Standard models also imply that outbreaks are “bi-modal”, but empirical research clearly shows that epidemics tend to be “multi-modal.” Real epidemics are also resurgent, with several peaks interspersed with lulls. The result is unpredictability: multi-modal size distributions imply that any given outbreak of the same disease can have dramatically different outcomes, while resurgence implies that even epidemics which seem to be burning out can regenerate themselves by invading new populations.

To this end, there has been rapid growth in “network epidemiology” over the past 20 years. Studies in network epidemiology suggest that the size of an epidemic depends on Mobility, the expected number of infected individuals “escaping” a local context, and Range, the typical distance traveled. Of course, the “Basic Reproduction Number” still matters, and has to be greater than 1 as a necessary condition for an epidemic in the first place. However, when this figure is greater than 1, the value itself tells us very little about size or duration. Epidemic size tends to depend instead on mobility and range, although the latter appears to be more influential. This suggests that simply restricting the range of travel of infected individuals may be an effective strategy.
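
And here is a toy metapopulation sketch of the mobility/range argument (my own construction, not taken from the studies cited above): patches sit on a line, the disease follows the same SIR rules within each patch, and infected individuals “escape” with some probability and travel up to a maximum range. Holding the Basic Reproduction Number fixed and varying only the travel range should, within a fixed time horizon, change how far and how large the epidemic spreads:

```python
# Toy metapopulation SIR: mobility = chance an infected individual "escapes"
# its patch per step; max_range = how far (in patches) it can travel.
import random

def epidemic_size(n_patches=60, patch_pop=200, r0=1.5, gamma=0.2,
                  mobility=0.05, max_range=1, steps=120, seed=7):
    rng = random.Random(seed)
    beta = r0 * gamma
    S = [patch_pop] * n_patches
    I = [0] * n_patches
    R = [0] * n_patches
    S[0], I[0] = patch_pop - 5, 5                 # seed the outbreak in patch 0

    for _ in range(steps):
        # within-patch transmission and recovery
        for p in range(n_patches):
            if I[p] == 0:
                continue
            n = S[p] + I[p] + R[p]
            p_inf = 1 - (1 - beta / n) ** I[p]
            new_inf = sum(rng.random() < p_inf for _ in range(S[p]))
            new_rec = sum(rng.random() < gamma for _ in range(I[p]))
            S[p] -= new_inf
            I[p] += new_inf - new_rec
            R[p] += new_rec
        # mobility: some infected individuals hop to a patch within max_range
        for p in range(n_patches):
            movers = sum(rng.random() < mobility for _ in range(I[p]))
            for _ in range(movers):
                dest = p + rng.choice([-1, 1]) * rng.randint(1, max_range)
                if 0 <= dest < n_patches and I[p] > 0:
                    I[p] -= 1
                    I[dest] += 1
    return sum(I) + sum(R)                        # total ever infected so far

for travel_range in (1, 5, 20):   # same R0 throughout; only the range changes
    print(f"max_range={travel_range:2d} -> total ever infected:",
          epidemic_size(max_range=travel_range))
```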

There are, however, some important differences in terms of the network models being compared here. The critical feature of biological disease, in contrast with information spread, is that individuals need to be co-located. But recall that during the recent Egyptian revolution the regime cut off access to the Internet and blocked cell phone use. How did people get their news? The good old-fashioned way, by getting out in the streets and speaking in person, i.e., by co-locating. Still, information can be contagious regardless of co-location. This is where Old Spice comes in vis-a-vis their hugely effective marketing campaign in 2010, where their popular ads on YouTube went viral and had a significant impact on sales of the deodorant, i.e., massive offline action. Clearly, information can lead to a contagion effect. This is the “information cascade” that Dan Drezner and others refer to in the context of digital activism in repressive environments.

“Under normal circumstances,” Zeynep writes, “autocratic regimes need to lock up only a few people at a time, as people cannot easily rise up all at once. Thus, governments can readily fight slow epidemics, which spread through word-of-mouth (one-to-one), by the selective use of force (a quarantine). No country, however, can jail a significant fraction of their population rising up; the only alternative is excessive violence. Thus, social media can destabilize the situation in unpopular autocracies: rather than relatively low-level and constant repression, regimes face the choice between crumbling in the face of simultaneous protests from many quarters and massive use of force.”
 
For me, the key lesson from mathematical epidemiology is that predicting when an epidemic will go “viral” and thus the size of this epidemic is particularly challenging. In the case of digital activism, the figures for Mobility and Range are even more accentuated than their analogous equivalents in biological systems. Given the ubiquity of information communication networks thanks to the proliferation of social media, Mobility has virtually no limit and nor does Range. That accounts for the speed of “infection” that may ultimately mean the reversal of an information cascade. This unpredictability is why, as Zeynep puts it, “faster is different.” This is also why regimes like Mubarak’s and Al-Assad’s try to quarantine information communication and why doing so completely is very difficult, perhaps impossible.
 
Obviously, offline action that leads to more purchases of Old Spice versus offline action that spurs mass protests in Tahrir Square are two very different scenarios. The former may only require weak ties while the latter, due to high-risk actions, may require strong ties. But there are many civil resistance tactics that can be considered micro-contributions and hence don’t involve relatively high risk to carry out. So communication can still change behavior, which may then catalyze high-risk action, especially if said communication comes from someone you know within your own social network. This is one of the keys to effective marketing and advertising strategies. You’re more likely to consider taking offline action if one of your friends or family members does, even if there are some risks involved. This is where the “infection” is most likely to take place. These infections can spur low-risk actions at first, which can synchronize “micro-motives” that lead to more risky “macro-behavior” and thus reversals in information cascades.