Tag Archives: social

Drones for Good: Technology, Social Movements and The State

Discussions surrounding use of drones, or UAVs, have typically “centered on their use by governments, often for the purpose of surveillance and warfare.” But as colleague Austin Choi-Fitzpatrick rightly notes in his new study, “[t]his focus on the state’s use obscures the opportunity for civil society actors, including social movements, to make use of these technologies.” Austin thus seeks to highlight civil society uses, “ranging from art to digital disruption.” The latter is what I am particularly interested in, given my previous writings on the use of non-lethal UAVs for civil resistance and for peacebuilding.


When I began writing my doctoral dissertation some 7 years ago, scholars and activists were particularly interested in measuring the impact of mobile phones on social movements and civil resistance. Today, civil society is also turning to UAVs as evidenced during the recent protests in Hong Kong, Turkey, Poland, Ukraine and Ferguson. “This innovation represents a technological shift in scale for citizen journalists, human rights advocates, and social movement actors,” writes Austin. “As such, it requires a sophisticated assessment of the ethical issues and policy terrain surrounding its use.”

The most disruptive aspect of today’s small, personal UAVs, “is the fundamental break between the camera and the street level. […] The most memorable photographs of violent conflict, social protest and natural disasters have almost all been taken by a person present on the ground. […] UAVs relocate the boundary between what is public and what is private, because camera-equipped UAVs move the line of sight from the street to the air. This simple shift effectively pushes public space from the sidewalk to the stairwell, courtyard, rooftop, and so forth.” As Austin rightly concludes, “‘Open air’ and ‘free space’ are no longer as ‘open’ or ‘free’ as they once were. They are instead now occupied or vulnerable to occupation.” The use of the words “occupied” and “occupation” here is indeed intentional. Austin also makes another crucial point: UAVs represent a type of innovation that is a “hallmark of asymmetrical warfare.”

One of my favorite books, Wasp, illustrates this asymmetry perfectly; as does the Syria Air Lift project. The latter seeks to fly swarms of UAVs to deliver aid to civilians caught in conflict zones. Little surprise, then, that the State is clamping down on civil society uses of UAVs. At times, they even shoot the UAVs down, as evidenced when “police in Istanbul shot down a camera-equipped UAV while it was monitoring large anti-government protests […].” Authorities would not be shooting down UAVs if they did not pose some form of (real or imagined) threat. And even when they pose no direct threat, UAVs are clearly annoying enough to react to (like a wasp or annoying mosquito). Annoyance is a key tactic in civil guerrilla warfare and civil resistance.


Austin goes on to propose a “broad framework to guide a range of non-state and non-commercial actor uses of drones.” The framework comprises the following six principles:

1. Subsidiarity: decision-making and problem solving should occur at the lowest and least sophisticated level possible. I take this to mean that decisions surrounding the use of drones should be taken at the local level (implying local ownership) and that drones should “only be used to address situations for which there is not a less sophisticated, invasive, or novel use.”

2. Physical and material security: self-explanatory – “care must be taken so that these devices do not collide with people or with one another.”

3. Do no harm: emphasizes a “rights-based approach as found in the development and humanitarian aid communities.” “The principle is one of proportionality, in which the question to be answered is, ‘Are the risks of using UAVs in a given humanitarian setting outweighed by the expected benefits?'”

4. Public interest: also self-explanatory but “especially sensitive to the importance of investigative journalism that holds to account the powerful and well-resourced, despite attempts by established interests to discredit these efforts.” Public interest should also include the interests of the local community.

5. Privacy: straightforward issue but not easily resolved: “creating a [privacy] framework that applies in all circumstances is nearly impossible in an era in which digital privacy appears to be a mirage […].”

6. Data protection: of paramount importance. Aerial footage of protests can be used by governments to “create a database of known activists.” As such, “[c]ontext specific protocols must ensure the security of data, thereby protecting against physical or digital theft or corruption.”

Are there other principles that should factor into the “Drones for Good” framework? If so, what are they? I’ll also be asking these questions in Dubai this week where I’m speaking at the Drones for Good Festival.

Social Media Generates Social Capital: Implications for City Resilience and Disaster Response

A new empirical and peer-reviewed study provides “the first evidence that online networks are able to produce social capital. In the case of bonding social capital, online ties are more effective in forming close networks than theory predicts.” Entitled, “Tweeting Alone? An Analysis of Bridging and Bonding Social Capital in Online Networks,” the study analyzes Twitter data generated during three large events: “the Occupy movement in 2011, the IF Campaign in 2013, and the Chilean Presidential Election of the same year.”


What is the relationship between social media and social capital formation? More specifically, how do connections established via social media—in this case Twitter—lead to the formation of two specific forms of social capital, bridging and bonding capital? Does the interplay between bridging and bonding capital online differ from what we see in face-to-face interactions?

“Bonding social capital exists in the strong ties occurring within, often homogeneous, groups—families, friendship circles, work teams, choirs, criminal gangs, and bowling clubs, for example. Bonding social capital acts as a social glue, building trust and norms within groups, but also potentially increasing intolerance and distrust of out-group members. Bridging social capital exists in the ties that link otherwise separate, often heterogeneous, groups—so for example, individuals with ties to other groups, messengers, or more generically the notion of brokers. Bridging social capital allows different groups to share and exchange information, resources, and help coordinate action across diverse interests.” The authors emphasize that “these are not either/or categories, but that in well-functioning societies the two types or dimensions develop together.”

The study uses social network analysis to measure bonding and bridging social capital. More specifically, they use two associated metrics as indicators of social capital: closure and brokerage. “Closure refers to the level of connectedness between particular groups of members within a broader network and encourages the formation of trust and collaboration. Brokerage refers to the existence of structural holes within a network that are ’bridged’ by a particular member of the network. Brokerage permits the transmission of information across the entire network. Social capital, then, is comprised of the combination of these two elements, which interact over time.”
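These two metrics map onto standard network measures. A rough sketch using networkx (on a toy graph of my own, not the study's data): average clustering serves as a stand-in for closure, while betweenness centrality flags the brokers who span structural holes.

```python
# Sketch: approximating the study's two social-capital indicators on a
# hypothetical interaction network, using networkx. Nodes could be Twitter
# accounts; edges could be retweets or mentions.
import networkx as nx

G = nx.Graph()
# Two tight-knit clusters joined by a single broker ("e") -- a toy example.
G.add_edges_from([
    ("a", "b"), ("b", "c"), ("a", "c"),   # cluster 1 (closed triad)
    ("d", "f"), ("f", "g"), ("d", "g"),   # cluster 2 (closed triad)
    ("c", "e"), ("e", "d"),               # "e" bridges the two clusters
])

# Closure: how densely connected each node's neighborhood is, on average.
closure = nx.average_clustering(G)

# Brokerage: betweenness centrality is high for nodes sitting on the
# shortest paths between otherwise-separate groups (structural holes).
brokerage = nx.betweenness_centrality(G)
top_broker = max(brokerage, key=brokerage.get)

print(f"average clustering (closure): {closure:.2f}")
print(f"top broker: {top_broker}")
```

On this toy graph the broker "e" scores highest on betweenness even though it has the fewest ties, which is exactly the closure/brokerage distinction the study draws.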

The authors thus analyze the “observed values for closure and brokerage over time and compare them with different simulations based on theoretical network models to show how they compare to what we would expect offline. From this, [they] provide an evaluation of the existence and formation of social capital in online networks.”
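That comparison can be sketched as follows, using a simple random-graph null model as a stand-in for the paper's theoretical models (the Zachary karate club graph substitutes here for an observed network):

```python
# Is observed closure higher than we'd expect from a random network of the
# same size and density? This uses G(n, m) random graphs as the null model;
# the study's actual theoretical models may differ.
import networkx as nx

observed = nx.karate_club_graph()   # stand-in for an observed network
n, m = observed.number_of_nodes(), observed.number_of_edges()

obs_closure = nx.average_clustering(observed)

# Average closure over many random graphs with identical node/edge counts.
trials = 200
null_closure = sum(
    nx.average_clustering(nx.gnm_random_graph(n, m, seed=i))
    for i in range(trials)
) / trials

print(f"observed closure: {obs_closure:.3f}, null expectation: {null_closure:.3f}")
```

For most real social networks the observed value comfortably exceeds the random-graph expectation, which is the pattern the study reports for its online networks.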

The results demonstrate that “online networks show evidence of social capital and these networks exhibit higher levels of closure than what would be expected based on theoretical models. However, the presence of organizations and professional brokers is key to the formation of bridging social capital. Similar to traditional (offline) conditions, bridging social capital in online networks does not exist organically and requires the purposive efforts of network members to connect across different groups. Finally, the data show interaction between closure and brokerage goes in the right direction, moving and growing together.”

These conclusions suggest that the same metrics—closure and brokerage—can be used to monitor “City Resilience” before, during and after major disasters. This is of particular interest to me since my team and I at QCRI are collaborating with the Rockefeller Foundation’s 100 Resilient Cities initiative to determine whether social media can indeed help monitor (proxy indicators of) resilience. Recent studies have shown that changes in employment, economic activity and mobility—each of which is a driver of resilience—can be gleaned from social media.

While more research is needed, the above findings are compelling enough for us to move forward with Rockefeller on our joint project. So we’ll be launching AIRS in early 2015. AIRS, which stands for “Artificial Intelligence for Resilient Societies,” is a free and open source platform specifically designed to enable Rockefeller’s partner cities to monitor proxy indicators of resilience on Twitter.
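AIRS's internals aren't described here, so purely as an illustration, one minimal proxy-indicator pipeline is to count daily tweets matching resilience-related keywords and track that time series. The keywords and data below are hypothetical.

```python
# Illustrative sketch only: count tweets per day that match (assumed)
# proxy keywords for employment, mobility and infrastructure.
from collections import Counter
from datetime import date

KEYWORDS = {"job", "hiring", "commute", "power outage"}  # hypothetical proxies

tweets = [
    (date(2015, 1, 5), "New job posting downtown"),
    (date(2015, 1, 5), "Traffic is terrible today"),
    (date(2015, 1, 6), "Power outage in our district"),
    (date(2015, 1, 6), "Hiring two engineers, apply now"),
]

daily = Counter()
for day, text in tweets:
    if any(kw in text.lower() for kw in KEYWORDS):
        daily[day] += 1

for day in sorted(daily):
    print(day, daily[day])
```

A real deployment would obviously need far more than keyword matching (classifiers, geo-filtering, normalization by overall tweet volume), but the shape of the output, a per-city indicator time series, is the same.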


See also:

  • Using Social Media to Predict Disaster Resilience [link]
  • Social Media = Social Capital = Disaster Resilience? [link]
  • Does Social Capital Drive Disaster Resilience? [link]
  • Digital Social Capital Matters for Resilience & Response [link]

And the UAV/Drone Social Innovation Award Goes To?

The winner of the Drone Social Innovation Award has just been announced! The $10,000 prize is awarded to the most socially beneficial, documented use of a UAV platform that costs less than $3,000. The purpose of this award is to “spur innovation, investment, and attention to the positive role this technology can play in our society.” Many thanks to my colleague Timothy Reuter for including me on the panel of judges for this novel Social Innovation Award, which was kindly sponsored by NEXA Capital Partners.

Here’s a quick look at our finalists!

Disaster Relief in the Philippines

Using low-cost UAVs to take high-resolution imagery of disaster-affected areas following Typhoon Haiyan in the Philippines. The team behind these efforts is also working with NGOs from around the world to enable them to use this simple technology for situational awareness in times of crisis. The team is also developing platforms to deliver critical items to locations that are difficult to access in post-disaster scenarios.

Taking Autism to the Sky

Teaching young children with autism how to build and fly their own UAVs. The team behind this initiative aims to scale their work and teach autistic kids both better social skills and “concrete skills in drone technology that could get them a job one day. It’s just one of the many proposed uses of drones in schools and in science and technology education.”

Crowd Estimation with UAVs

While this entry focuses specifically on the use of UAVs and algorithms to determine the size of social movements (e.g., rallies & protests), there may be application to estimating population flows in refugee and IDP settings. I have a blog post on this topic coming up, stay tuned!
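For a rough sense of how such an estimate works, here is a back-of-envelope version: count people in a few sampled grid cells of the aerial image, then extrapolate the mean density over the occupied area. In a real system the per-cell counts would come from detection algorithms; every number below is invented.

```python
# Back-of-envelope crowd estimate from aerial imagery: sample a few grid
# cells, compute mean density, scale to the occupied area.
def estimate_crowd(cell_counts, cell_area_m2, occupied_area_m2):
    """Mean density across sampled cells, extrapolated to the full area."""
    density = sum(cell_counts) / (len(cell_counts) * cell_area_m2)
    return density * occupied_area_m2

# e.g. four sampled 25 m^2 cells with 38-52 people each, over a 2,000 m^2 plaza
print(round(estimate_crowd([45, 52, 38, 49], 25.0, 2000.0)))
```

The same density-times-area logic would carry over to estimating population flows in refugee and IDP settings, with tents or shelters counted instead of heads.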

Drones for environmental conservation

Aerial photos and videos helped to direct millions in funding to acquire and preserve hundreds of acres of valuable habitat and strategic additions to public park space. “In a single glance, an aerial photo allows you [to] understand so much more about location than a view from the ground.”

Landmine detection

Bosnia-Herzegovina has one of the highest densities of land mines in the world. So this team from Barcelona is exploring how UAVs might accelerate the process of land mine detection. See this post to learn about another UAV land mine detection effort following the massive flooding in the region this summer.

Whale Research and Conservation

Using benign research tools like UAVs to prove you can study whales without killing them. This allows conservationists to study whales’ DNA along with their stress hormones without disturbing them or requiring the use of loud and expensive airplanes.


And the award goes to… (drum roll please)… not one but two entries (yes it really was a tie)!  Big congratulations to the teams behind the Land Mine Detection and Disaster Response projects! We really look forward to following your progress. Thank you for your important contributions to social innovation!

We are hoping to make this new “Drone Social Innovation Award” an annual competition (if the stars align again next year). So stay tuned. In the meantime, many thanks again to Timothy Reuter for spearheading this social innovation challenge, to my fellow judges, and most importantly to all participants for taking the time to share their remarkable projects!


See Also:

  • Crowdsourcing the Analysis of Aerial Imagery for Wildlife Protection  and Disaster Response [link]
  • Humanitarians in the Sky: Using UAVs for Disaster Response [link]
  • Official UN Policy Brief on Humanitarian UAVs [link]
  • Crisis Map of UAV Videos for Disaster Response [link]
  • Humanitarian UAV Missions During Balkan Floods [link]
  • UAVs, Community Mapping & Disaster Risk Reduction in Haiti [link]

Latest Findings on Disaster Resilience: From Burma to California via the Rockefeller Foundation

I’ve long been interested in disaster resilience particularly when considered through the lens of self-organization. To be sure, the capacity to self-organize is an important feature of resilient societies. So what facilitates self-organization? There are several factors, of course, but the two I’m most interested in are social capital and communication technologies. My interest in disaster resilience also explains why one of our Social Innovation Tracks at QCRI is specifically focused on resilience. So I’m always on the lookout for new research on resilience. The purpose of this blog post is to summarize the latest insights.


This new report (PDF) on Burma assesses the influence of social capital on disaster resilience. More specifically, the report focuses on the influence of bonding, bridging and linking social capital on disaster resilience in remote rural communities in the Ayerwaddy Region of Myanmar. Bonding capital refers to ties that are shared between individuals with common characteristics such as religion or ethnicity. Bridging capital relates to ties that connect individuals with those outside their immediate communities. These ties could be the result of shared geographical space, for example. Linking capital refers to vertical links between a community and individuals or groups outside said community. The relationship between a village and the government or a donor and recipients, for example.

As the report notes, “a balance of bonding, bridging and linking capitals is important for social and economic stability as well as resilience. It will also play a large role in a community’s ability to reduce their risk of disaster and cope with external shocks as they play a role in resource management, sustainable livelihoods and coping strategies.” In fact, “social capital can be a substitute for a lack of government intervention in disaster planning, early warning and recovery.” The study also notes that “rural communities tend to have stronger social capital due to their geographical distance from government and decision-making structures necessitating them being more self-sufficient.”

Results of the study reveal that villages in the region are “mutually supportive, have strong bonding capital and reasonably strong bridging capital […].” This mutual support “plays a part in reducing vulnerability to disasters in these communities.” Indeed, “the strong bonding capital found in the villages not only mobilizes communities to assist each other in recovering from disasters and building community coping mechanisms, but is also vital for disaster risk reduction and knowledge and information sharing.” However, the linking capital of villages is “limited and this is an issue when it comes to coping with larger scale problems such as disasters.”


Meanwhile, in San Francisco, a low-income neighborhood is building a culture of disaster preparedness founded on social capital. “No one had to die [during Hurricane Katrina]. No one had to even lose their home. It was all a cascading series of really bad decisions, bad planning, and corrupted social capital,” says Homsey, San Francisco’s director of neighborhood resiliency, who spearheads the city’s Neighborhood Empowerment Network (NEN). The Network takes a different approach to disaster preparedness—it is reflective, not prescriptive. The group also works to “strengthen the relationships between governments and the community, nonprofits and other agencies [linking capital]. They make sure those relationships are full of trust and reciprocity between those that want to help and those that need help.” In short, they act as a local Match.com for disaster preparedness and response.

Providence Baptist Church of San Francisco is unusual because unlike most other American churches, this one has a line item for disaster preparedness. Hodge, who administers the church, takes issue with the government’s disaster plan for San Francisco. “That plan is to evacuate the city. Our plan is to stay in the city. We aren’t going anywhere. We know that if we work together before a major catastrophe, we will be able to work together during a major catastrophe.” This explains why he’s teaming up with the Neighborhood Network (NEN) which will “activate immediately after an event. It will be entirely staffed and managed by the community, for the community. It will be a hyper-local, problem-solving platform where people can come with immediate issues they need collective support for,” such as “evacuations, medical care or water delivery.”


Their early work has focused on “making plans to protect the neighborhood’s most vulnerable residents: its seniors and the disabled.” Many of these residents have thus received “kits that include a sealable plastic bag to stock with prescription medication, cash, phone numbers for family and friends.” They also have door-hangers to help speed up search-and-rescue efforts (above pics).

Lastly, colleagues at the Rockefeller Foundation have just released their long-awaited City Resilience Framework after several months of extensive fieldwork, research and workshops in six cities: Cali, Colombia; Concepción, Chile; New Orleans, USA; Cape Town, South Africa; Surat, India; and Semarang, Indonesia. “The primary purpose of the fieldwork was to understand what contributes to resilience in cities, and how resilience is understood from the perspective of different city stakeholder groups in different contexts.” The results are depicted in the graphic below, which shows the 12 categories identified by Rockefeller and team (in yellow).

City Resilience Framework

These 12 categories are important because “one must be able to relate resilience to other properties that one has some means of ascertaining, through observation.” The four categories that I’m most interested in observing are:

Collective identity and mutual support: this is observed as active community engagement, strong social networks and social integration. Sub-indicators include community and civic participation, social relationships and networks, local identity and culture and integrated communities.

Empowered stakeholders: this is underpinned by education for all, and relies on access to up-to-date information and knowledge to enable people and organizations to take appropriate action. Sub-indicators include risk monitoring & alerts and communication between government & citizens.

Reliable communications and mobility: this is enabled by diverse and affordable multi-modal transport systems and information and communication technology (ICT) networks, and contingency planning. Sub-indicators include emergency communication services.

Effective leadership and management: this relates to government, business and civil society and is recognizable in trusted individuals, multi-stakeholder consultation, and evidence-based decision-making. Sub-indicators include emergency capacity and coordination.

How am I interested in observing these drivers of resilience? Via social media. Why? Because that source of information is 1) available in real-time; 2) enables two-way communication; and 3) remains largely unexplored vis-a-vis disaster resilience. Whether social media can serve as a reliable proxy for measuring city resilience is still very much an open research question.

As noted above, one of our Social Innovation research tracks at QCRI is on resilience. So we’re currently reviewing the list of 32 cities that the Rockefeller Foundation’s 100 Resilient Cities project is partnering with to identify which have a relatively large social media footprint. We’ll then select three cities and begin to explore whether collective identity and mutual support can be captured via the social media activity in each city. In other words, we’ll be applying data science & advanced computing—specifically computational social science—to explore whether digital data can shed light on city resilience. Ultimately, we hope our research will support the Rockefeller Foundation’s next phase in their 100 Resilient Cities project: the development of a Resilient City Index.


See also:

  • How to Create Resilience Through Big Data [link]
  • Seven Principles for Big Data & Resilience Projects [link]
  • On Technology and Building Resilient Societies [link]
  • Using Social Media to Predict Disaster Resilience [link]
  • Social Media = Social Capital = Disaster Resilience? [link]
  • Does Social Capital Drive Disaster Resilience? [link]
  • Failing Gracefully in Complex Systems: A Note on Resilience [link]
  • Big Data, Lord of the Rings and Disaster Resilience [link]

New Insights on How To Verify Social Media

The “field” of information forensics has seen some interesting developments in recent weeks. Take the Verification Handbook or Twitter Lie-Detector project, for example. The Social Sensor project is yet another new initiative. In this blog post, I seek to make sense of these new developments and to identify where this new field may be going. In so doing, I highlight key insights from each initiative. 


The co-editors of the Verification Handbook remind us that misinformation and rumors are hardly new during disasters. Chapter 1 opens with the following account from 1934:

“After an 8.1 magnitude earthquake struck northern India, it wasn’t long before word circulated that 4,000 buildings had collapsed in one city, causing ‘innumerable deaths.’ Other reports said a college’s main building, and that of the region’s High Court, had also collapsed.”

These turned out to be false rumors. The BBC’s User Generated Content (UGC) Hub would have been able to debunk these rumors. In their opinion, “The business of verifying and debunking content from the public relies far more on journalistic hunches than snazzy technology.” So they would have been right at home in the technology landscape of 1934. To be sure, they contend that “one does not need to be an IT expert or have special equipment to ask and answer the fundamental questions used to judge whether a scene is staged or not.” In any event, the BBC does not “verify something unless [they] speak to the person that created it, in most cases.” What about the other cases? How many of those cases are there? And how did they ultimately decide on whether the information was true or false even though they did not speak to the person that created it?

As this new study argues, big news organizations like the BBC aim to contact the original authors of user generated content (UGC) not only to try and “protect their editorial integrity but also because rights and payments for newsworthy footage are increasingly factors. By 2013, the volume of material and speed with which they were able to verify it [UGC] were becoming significant frustrations and, in most cases, smaller news organizations simply don’t have the manpower to carry out these checks” (Schifferes et al., 2014).


Chapter 3 of the Handbook notes that the BBC’s UGC Hub began operations in early 2005. At the time, “they were reliant on people sending content to one central email address. At that point, Facebook had just over 5 million users, rather than the more than one billion today. YouTube and Twitter hadn’t launched.” Today, more than 100 hours of content is uploaded to YouTube every minute; over 400 million tweets are sent each day and over 1 million pieces of content are posted to Facebook every 30 seconds. Now, as this third chapter rightly notes, “No technology can automatically verify a piece of UGC with 100 percent certainty. However, the human eye or traditional investigations aren’t enough either. It’s the combination of the two.” New York Times journalists concur: “There is a problem with scale… We need algorithms to take more onus off human beings, to pick and understand the best elements” (cited in Schifferes et al., 2014).

People often (mistakenly) see “verification as a simple yes/no action: Something has been verified or not. In practice, […] verification is a process” (Chapter 3). More specifically, this process is one of satisficing. As colleagues Leysia Palen et al. note in this study, “Information processing during mass emergency can only satisfice because […] the ‘complexity of the environment is immensely greater than the computational powers of the adaptive system.'” To this end, “It is an illusion to believe that anyone has perfectly accurate information in mass emergency and disaster situations to account for the whole event. If someone did, then the situation would not be a disaster or crisis.” This explains why Leysia et al. seek to shift the debate to one focused on the helpfulness of information rather than the problematic true/false dichotomy.


“In highly contextualized situations where time is of the essence, people need support to consider the content across multiple sources of information. In the online arena, this means assessing the credibility and content of information distributed across [the web]” (Leysia et al., 2011). This means that, “Technical support can go a long way to help collate and inject metadata that make explicit many of the inferences that the every day analyst must make to assess credibility and therefore helpfulness” (Leysia et al., 2011). In sum, the human versus computer debate vis-a-vis the verification of social media is somewhat pointless. The challenge moving forward resides in identifying the best ways to combine human cognition with machine computing. As Leysia et al. rightly note, “It is not the job of the […] tools to make decisions but rather to allow their users to reach a decision as quickly and confidently as possible.”

This may explain why Chapter 7 (which I authored) applies both human and advanced computing techniques to the verification challenge. Indeed, I explicitly advocate for a hybrid approach. In contrast, the Twitter Lie-Detector project known as Pheme apparently seeks to use machine learning alone to automatically verify online rumors as they spread on social networks. Overall, this is great news—the more groups that focus on this verification challenge, the better for those us engaged in digital humanitarian response. It remains to be seen, however, whether machine learning alone will make Pheme a success.


In the meantime, the EU’s Social Sensor project is developing new software tools to help journalists assess the reliability of social media content (Schifferes et al., 2014). A preliminary series of interviews revealed that journalists were most interested in Social Sensor software for:

1. Predicting or alerting breaking news

2. Verifying social media content–quickly identifying who has posted a tweet or video and establishing “truth or lie”

So the Social Sensor project is developing an “Alethiometer” (Alethia is Greek for ‘truth’) to “meter the credibility of information coming from any source by examining the three Cs—Contributors, Content and Context. These seek to measure three key dimensions of credibility: the reliability of contributors, the nature of the content, and the context in which the information is presented. This reflects the range of considerations that working journalists take into account when trying to verify social media content. Each of these will be measured by multiple metrics based on our research into the steps that journalists go through manually. The results of [these] steps can be weighed and combined [metadata] to provide a sense of credibility to guide journalists” (Schifferes et al., 2014).
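At its simplest, the "three Cs" idea amounts to scoring each dimension and combining the scores. A minimal sketch, with weights and sub-scores that are entirely illustrative rather than Social Sensor's actual metrics:

```python
# Toy "three Cs" credibility score: each dimension is scored in [0, 1]
# and combined via a weighted sum. Weights here are assumptions.
def credibility(contributor, content, context, weights=(0.5, 0.3, 0.2)):
    scores = (contributor, content, context)
    return sum(w * s for w, s in zip(weights, scores))

# e.g. a verified account (0.9) posting plausible content (0.7)
# whose location/time context checks out (0.8):
print(f"{credibility(0.9, 0.7, 0.8):.2f}")
```

The hard research problem is of course computing the three sub-scores in the first place; the combination step shown here is the easy part.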


On our end, my colleagues and I at QCRI are continuing to collaborate with several partners to experiment with advanced computing methods to address the social media verification challenge. As noted in Chapter 7, Verily, a platform that combines time-critical crowdsourcing and critical thinking, is still in the works. We’re also continuing our collaboration on a Twitter credibility plugin (more in Chapter 7). In addition, we are exploring whether we can microtask the computation of source credibility scores using MicroMappers.

Of course, the above will sound like “snazzy technologies” to seasoned journalists with no background or interest in advanced computing. But this doesn’t seem to stop them from complaining that “Twitter search is very hit and miss;” that what Twitter “produces is not comprehensive and the filters are not comprehensive enough” (BBC social media expert, cited in Schifferes et al., 2014). As one of my PhD dissertation advisors (Clay Shirky) noted a while back already, information overflow (Big Data) is due to “Filter Failure”. This is precisely why my colleagues and I are spending so much of our time developing better filters—filters powered by human and machine computing, such as AIDR. These types of filters can scale. BBC journalists on their own do not, unfortunately. But they can act on hunches and intuition based on years of hands-on professional experience.
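One common shape for such a hybrid filter, in the spirit of AIDR though the thresholds and routing logic below are my own assumptions, not AIDR's actual design: the classifier auto-accepts high-confidence items, auto-rejects low-confidence ones, and routes only the ambiguous middle band to human reviewers, which is what lets the human effort scale.

```python
# Sketch of a hybrid human/machine filter: machine handles the easy
# cases, humans handle the ambiguous middle band. Thresholds are assumed.
def route(item, score, accept=0.9, reject=0.2):
    if score >= accept:
        return ("machine-accept", item)
    if score <= reject:
        return ("machine-reject", item)
    return ("human-review", item)

items = [("bridge collapsed, photo attached", 0.95),
         ("check out my mixtape", 0.05),
         ("hearing reports of flooding downtown?", 0.60)]

for text, score in items:
    print(route(text, score)[0], "->", text)
```

Tightening or loosening the two thresholds trades off human workload against the risk of the machine mislabeling borderline content on its own.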

The “field” of digital information forensics has come a long way since I first wrote about how to verify social media content back in 2011. While I won’t touch on the Handbook’s many other chapters here, the entire report is an absolute must read for anyone interested and/or working in the verification space. At the very least, have a look at Chapter 9, which combines each chapter’s verification strategies in the form of a simple check-list. Also, Chapter 10 includes a list of tools to aid in the verification process.

In the meantime, I really hope that we end the pointless debate about human versus machine. This is not an either/or issue. As a colleague once noted, what we really need is a way to combine the power of algorithms and the wisdom of the crowd with the instincts of experts.


See also:

  • Predicting the Credibility of Disaster Tweets Automatically [link]
  • Auto-Ranking Credibility of Tweets During Major Events [link]
  • Auto-Identifying Fake Images on Twitter During Disasters [link]
  • Truth in the Age of Social Media: A Big Data Challenge [link]
  • Analyzing Fake Content on Twitter During Boston Bombings [link]
  • How to Verify Crowdsourced Information from Social Media [link]
  • Crowdsourcing Critical Thinking to Verify Social Media [link]

Why Digital Social Capital Matters for Disaster Resilience and Response

Recent empirical studies have clearly demonstrated the importance of offline social capital for disaster resilience and response. I’ve blogged about some of this analysis here and here. Social capital is typically described as those “features of social organizations, such as networks, norms, and trust, that facilitate action and cooperation for mutual benefit.” In other words, social capital increases a group’s capacity for collective action and thus self-organization, which is a key driver of disaster resilience. What if those social organizations were virtual and the networks digital? Would these online communities “generate digital social capital”? And would this digital social capital have any impact on offline social capital, collective action and resilience?

Social Capital

A data-driven study published recently, “Social Capital and Pro-Social Behavior Online and Offline” (PDF), presents some fascinating insights. The study, carried out by Constantin M. Bosancianu, Steve Powell and Esad Bratović, draws on their survey of 1,912 Internet users in Bosnia & Herzegovina, Croatia and Serbia. The authors specifically consider two types of social capital: bonding social capital and bridging social capital.

“Bridging social capital is described as inclusive, fostered in networks where membership is not restricted to a particular group defined by strict racial, class, linguistic or ethnic criteria.  Regular interactions inside these networks would gradually build norms of generalized trust and reciprocity at the individual level. These relationships […] are able to offer the individual access to new information but are not very adept in providing emotional support in times of need.”

“Bonding social capital, on the other hand, is exclusive, fostered in tight-knit networks of family members and close friends. Although the degree of information redundancy in these networks is likely high (as most members occupy the same social space), they provide […] the “sociological superglue” which gets members through tough emotional stages in their lives.”

The study’s findings reveal that online and offline social capital were correlated with each other. More specifically, online bridging social capital was closely correlated with offline bridging social capital, while online bonding social capital was closely correlated with offline bonding social capital. Perhaps of most interest with respect to disaster resilience, the authors discovered that “offline bridging social capital can benefit from online interactions.”
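The kind of correlation the authors report can be illustrated in a few lines of code. The respondent scores below are invented purely for illustration; the study’s actual survey data and scales are not reproduced here.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical bridging social capital scores for seven respondents:
online_bridging  = [3.2, 4.1, 2.8, 3.9, 4.5, 2.5, 3.7]
offline_bridging = [3.0, 4.3, 2.6, 3.5, 4.8, 2.9, 3.6]
print(round(pearson(online_bridging, offline_bridging), 2))
```

A coefficient near 1 indicates the close correlation the authors describe; a value near 0 would mean online and offline scores move independently.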

Using Twitter to Analyze Secular vs. Islamist Polarization in Egypt (Updated)

Large-scale events leave an unquestionable mark on social media. This was true of Hurricane Sandy, for example, and is also true of the widespread protests in Egypt this week. On Wednesday, the Egyptian Military responded to the large-scale demonstrations against President Morsi by removing him from power. Can Twitter provide early warning signals of growing political tension in Egypt and elsewhere? My QCRI colleagues Ingmar Weber & Kiran Garimella and Al-Jazeera colleague Alaa Batayneh have been closely monitoring (PDF) these upheavals via Twitter since January 2013. Specifically, they developed a Political Polarization Index that provides early warning signals for increased social tensions and violence. I will keep updating this post with new data, analysis and graphs over the next 24 hours.

morsi_protests

The QCRI team analyzed some 17 million Egyptian tweets posted by two types of Twitter users—Secularists and Islamists. These user lists were largely drawn from this previous research and only include users that provide geographical information in their Twitter profiles. For each of these 7,000+ “seed users”, QCRI researchers downloaded their most recent 3,200 tweets along with a set of 200 users who retweet their posts. Note that both figures are limits imposed by the Twitter API. Ingmar, Kiran and Alaa have also analyzed users with no location information, corresponding to 65 million tweets and 20,000+ unique users. Below are word clouds of terms used in Twitter profiles created by Islamists (left) and secularists (right).

Screen Shot 2013-07-06 at 2.58.25 PM
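The 3,200-tweet cap mentioned above comes from the Twitter REST API, which serves timelines in pages (at most 200 tweets per request) that a crawler walks backwards through using a max_id cursor. Below is a library-agnostic sketch of that loop, with `fetch_page` standing in for the real authenticated API call:

```python
def paginate_timeline(fetch_page, cap=3200):
    """Collect up to `cap` tweets by repeatedly calling fetch_page(max_id),
    which returns one page of tweets, newest first. Each pass asks for
    tweets strictly older than the last one seen."""
    tweets, max_id = [], None
    while len(tweets) < cap:
        page = fetch_page(max_id)
        if not page:
            break
        tweets.extend(page)
        max_id = page[-1]["id"] - 1
    return tweets[:cap]

# Demo with a stand-in for the real API call:
fake_store = [{"id": i} for i in range(500, 0, -1)]  # 500 tweets, newest first
def fake_fetch(max_id):
    page = [t for t in fake_store if max_id is None or t["id"] <= max_id]
    return page[:200]  # the API serves at most 200 tweets per request
print(len(paginate_timeline(fake_fetch)))  # 500
```

In practice a library like tweepy handles this cursoring for you, but the logic is the same: the API-imposed ceiling means only a user’s most recent ~3,200 tweets are ever retrievable this way.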

QCRI compared the hashtags used by Egyptian Islamists and secularists over a year to create an insightful Political Polarization Index. The methodology used to create this index is described in more detail in this post’s epilogue. The graph below displays the overall hashtag polarity over time along with the number of distinct hashtags used per time interval. As you’ll note, the graph includes the very latest data published today. Click on the graph to enlarge.

hashtag_polarity_over_time_egypt_7_july

The spike in political polarization towards the end of 2012 appears to coincide with “the political struggle over the constitution and a planned referendum on the topic.” The annotations in the graph refer to the following violent events:

A – Assailants with rocks and firebombs gather outside Ministry of Defense to call for an end to military rule.

B – Demonstrations break out after President Morsi grants himself increased power to protect the nation. Clashes take place between protestors and Muslim Brotherhood supporters.

C, D – Continuing protests after the November 22nd declaration.

E – Demonstrations in Tahrir Square, Port Said and all across the country.

F, G – Demonstrations in Tahrir Square.

H, I – Massive demonstrations in Tahrir Square and removal of President Morsi.

In sum, the graph confirms that hashtag polarization can serve as a barometer for social tensions and perhaps even provide early warnings of violence. “Quite strikingly, all outbreaks of violence happened during periods where the hashtag polarity was comparatively high.” This is also true for the events of the past week, as evidenced by QCRI’s political polarization dashboard below. Click on the figure to enlarge. Note that I used Chrome’s translate feature to convert hashtags from Arabic to English. The original screenshot in Arabic is available here (PNG).

Hashtag Analysis

Each bar above corresponds to a week of Twitter data analysis. The bars were initially green and yellow during the beginnings of Morsi’s presidency (scroll left on the dashboard for the earlier dates). The change to red (heightened political polarization) coincides with increased tensions around the constitutional crisis in late November and early December. See this timeline for more information. The “Trending Score” in the table above combines volume with recency: a high trending score means the hashtag is more relevant to the current week.
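A score that “combines volume with recency” typically weights recent activity more heavily than older activity. The exact formula behind the dashboard is not published; the exponential-decay version below is only one plausible sketch:

```python
from math import exp

def trending_score(weekly_counts, decay=0.5):
    """weekly_counts: a hashtag's usage counts, oldest week first.
    Each week's count is discounted exponentially by its age, so a
    hashtag surging this week outranks one whose identical total
    volume is spread over earlier weeks."""
    return sum(count * exp(-decay * age)
               for age, count in enumerate(reversed(weekly_counts)))

# A hashtag used 100 times this week beats one used 100 times two weeks ago:
print(trending_score([0, 0, 100]) > trending_score([100, 0, 0]))  # True
```

The `decay` parameter is an assumption: a larger value makes the score forget past weeks faster, pushing freshly surging hashtags to the top of the table.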

The two graphs below display political polarization over time. The first starts from January 1, 2013, while the second starts from June 1, 2013. Interestingly, February 14th sees a dramatic drop in polarization. We’re not sure whether this is a bug in the analysis or whether a significant event (Valentine’s Day?) can explain this very low level of political polarization. We see another major drop on May 10th. Any Egypt experts know why that might be?

graph1

The political polarization graph below reveals a steady increase from June 1st through to last week’s massive protests and removal of President Morsi.

graph2

To conclude, large-scale political events such as widespread political protests and a subsequent regime change in Egypt continue to leave a clear mark on social media activity. This pulse can be captured using a Political Polarization Index based on the hashtags used by Islamists and secularists on Twitter. Furthermore, this index appears to provide early warning signals of increasing tension. As my QCRI colleagues note, “there might be forecast potential and we plan to explore this further in the future.”

Acknowledgements: Many thanks to Ingmar and Kiran for their valuable input and feedback in the drafting of this blog post.

Methods (written by Ingmar): The political polarization index was computed as follows. The analysis starts by identifying a set of Twitter users who are likely to support either Islamists or secularists in Egypt. This is done by monitoring retweets posted by a set of seed users. For example, users who frequently retweet Muhammad Morsi and never retweet El Baradei would be considered Islamist supporters. (This same approach was used by Michael Conover and colleagues to study US politics.)
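The retweet-based labeling step can be sketched as follows. The seed account names and thresholds here are illustrative stand-ins, not the study’s actual parameters:

```python
def classify_supporter(retweet_counts, islamist_seeds, secular_seeds):
    """retweet_counts: {seed_screen_name: times this user retweeted them}.
    A user who retweets only one camp's seed accounts is labeled with
    that camp; users retweeting both camps (or neither) stay unlabeled."""
    isl = sum(retweet_counts.get(s, 0) for s in islamist_seeds)
    sec = sum(retweet_counts.get(s, 0) for s in secular_seeds)
    if isl > 0 and sec == 0:
        return "islamist"
    if sec > 0 and isl == 0:
        return "secularist"
    return None  # mixed or no signal -> excluded from the polarized set

islamists = {"MuhammadMorsi"}   # illustrative seed accounts
secularists = {"ElBaradei"}
print(classify_supporter({"MuhammadMorsi": 12}, islamists, secularists))
```

Requiring users to retweet one camp exclusively keeps the labeled sets clean at the cost of discarding ambiguous users, which is the usual trade-off in this kind of network-based labeling.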

Once politically engaged and polarized users are identified, their use of hashtags is monitored over time. A “neutral” hashtag such as #fb or #ff is typically used by both camps in Egypt in roughly equal proportions and would hence be assigned a 50-50 Islamist-secular leaning. But certain hashtags reveal much more pronounced polarization. For example, the hashtag #tamarrod is assigned a 0-100 Islamist-secular score. Tamarrod refers to the “Rebel” movement, the leading grassroots movement behind the protests that led to Morsi’s ousting.

Similarly, the hashtag #muslimsformorsi is assigned a 90-10 Islamist-secular score, which makes sense as it is clearly in support of Morsi. This numerical analysis is done on a weekly basis. Hashtags with a 50-50 score in a given week have zero “tension,” whereas hashtags with either 100-0 or 0-100 have maximal tension. The average tension value across all hashtags used in a given week is then plotted over time. Interestingly, this value, derived from hashtag usage in a language-agnostic manner, seems to coincide with outbreaks of violence on the ground, as shown in the bar chart above.
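Putting the pieces together, the weekly tension value can be sketched in a few lines. This is an illustrative reconstruction from the description above, not QCRI’s actual code, and the usage counts are made up:

```python
def weekly_hashtag_tension(usage):
    """usage: {hashtag: (islamist_count, secularist_count)} for one week.

    Each hashtag gets an Islamist-secular leaning (50-50 is neutral;
    100-0 or 0-100 is maximally polarized). Tension is the distance of
    that leaning from 50-50, scaled to [0, 1]; the weekly index is the
    average tension across all hashtags used that week."""
    tensions = []
    for tag, (isl, sec) in usage.items():
        total = isl + sec
        if total == 0:
            continue
        isl_share = 100.0 * isl / total          # e.g. #tamarrod -> 0
        tension = abs(isl_share - 50.0) / 50.0   # 0 = neutral, 1 = maximal
        tensions.append(tension)
    return sum(tensions) / len(tensions) if tensions else 0.0

week = {
    "#fb": (480, 520),              # near-neutral, used by both camps
    "#tamarrod": (0, 900),          # strongly secular (0-100 score)
    "#muslimsformorsi": (900, 100), # strongly Islamist (90-10 score)
}
print(round(weekly_hashtag_tension(week), 3))
```

Plotting this average week by week yields a curve of the kind shown in the polarity graph: spikes correspond to weeks when the two camps retreat to largely disjoint sets of hashtags.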

Data Science for Social Good: Not Cognitive Surplus but Cognitive Mismatch

I’ve spent the past 12 months working with top-notch data scientists at QCRI and elsewhere. The following may thus be biased: I think QCRI got it right. They strive to balance their commitment to positive social change with their primary mission of becoming a world-class institute for advanced computing research. The two are not mutually exclusive. What it takes is a dedicated position, like the one created for me at QCRI. It is high time that other research institutes, academic programs and international computing conferences create comparable focal points to catalyze data science for social good.

Microsoft Research, to name just one company, carries out very interesting research that could have tremendous social impact, but the bridge necessary to transfer much of that research from knowledge to operation to social impact is often not there. And when it is, it is usually by happenstance. So researchers continue to formulate research questions based on what they find interesting rather than identifying equally interesting questions that could have direct social impact if answered by data science. Hundreds of papers get presented at computing conferences every month, and yet few if any of the authors have linked up with organizations like the United Nations, World Bank, Habitat for Humanity etc., to identify and answer questions with social good potential. The same is true for hundreds of computing dissertations that get defended every year. Doctoral students do not realize that a minor reformulation of their research question could perhaps make a world of difference to a community-based organization in India dedicated to fighting corruption, for example.

Cognitive Mismatch

The challenge here is not one of untapped cognitive surplus (to borrow from Clay Shirky), but rather complete cognitive mismatch. As my QCRI colleague Ihab Ilyas puts it: there are “problem owners” on the one hand and “problem solvers” on the other. The former have problems that prevent them from catalyzing positive social change. The latter know how to solve comparable problems and do so every day. But the two are not talking to or even aware of each other. Creating and maintaining this two-way conversation requires more than one dedicated position (like mine at QCRI).

sweet spot

In short, I really want to have dedicated counterparts at Microsoft Research, IBM, SAP, LinkedIn, Bitly, GNIP, etc., as well as at leading universities and top-notch computing conferences and challenges; counterparts who have one foot in the world of data science and the other in the social sector; individuals with a demonstrated track record in bridging communities. There’s a community here waiting to be connected and needing to be formed. Again, carrying out cutting-edge computing R&D is in no way incompatible with generating positive social impact. Moreover, the latter provides an important return on investment in the form of data, reputation, publicity, connections and social capital. In sum, social good challenges need to be formulated into research questions that have scientific as well as social good value. There is definitely a sweet spot here, but it takes a dedicated community to bring problem owners and solvers together and hit that social good sweet spot.

Data Science for Social Good and Humanitarian Action

My (new) colleagues at the University of Chicago recently launched an exciting program called “Data Science for Social Good”. The program, which kicks off this summer, will bring together dozens of top-notch data scientists, computer scientists and social scientists to address major social challenges. Advisors for this initiative include Eric Schmidt (Google), Rayid Ghani (Obama Administration) and my very likable colleague Jake Porway (DataKind). Think of “Data Science for Social Good” as a “Code for America” but broader in scope and application. I’m excited to announce that QCRI is looking to collaborate with this important new program given the strong overlap with our Social Innovation Vision, Strategy and Projects.

My team and I at QCRI are hoping to mentor and engage fellows throughout the summer on key humanitarian and development projects we are working on in partnership with the United Nations, Red Cross, World Bank and others. This would provide fellows with the opportunity to engage in “real world” challenges that directly match their expertise and interests. We are also hoping to replicate this type of program in Qatar in January 2014.

Why January? This will give us enough time to design the new program based on the results of this summer’s experiment. More importantly, perhaps, it will be freezing in Chicago ; ) and wonderfully warm in Doha. Plus, January is an easier time for many students and professionals to take “time off”. The fellows program will likely be 3 weeks in duration (rather than 3 months) and will focus on applying data science to social good projects in the Arab World and beyond. Mentors will include top data scientists from QCRI and hopefully the University of Chicago. We hope to create 10 fellowship positions for this Data Science for Social Good program. The call for applications will go out this summer, so stay tuned for an update.

Digital Humanitarians and The Theory of Crowd Capital

An iRevolution reader very kindly pointed me to this excellent conceptual study: “The Theory of Crowd Capital”. The authors’ observations and insights resonate with me deeply given my experience in crowdsourcing digital humanitarian response. Over two years ago, I published this blog post in which I wrote that, “The value of Crisis Mapping may at times have less to do with the actual map and more with the conversations and new collaborative networks catalyzed by launching a Crisis Mapping project. Indeed, this in part explains why the Standby Volunteer Task Force (SBTF) exists in the first place.” I was not very familiar with the concept of social capital at the time, but that’s precisely what I was describing. I’ve since written extensively about the very important role that social capital plays in disaster resilience and digital humanitarian response. But I hadn’t taken the obvious next step: “Crowd Capital.”

Screen Shot 2013-03-30 at 4.34.09 PM

John Prpić and Prashant Shukla, the authors of “The Theory of Crowd Capital,” find inspiration in F. A. Hayek, who in 1945 wrote the seminal essay “The Use of Knowledge in Society.” In it, Hayek describes dispersed knowledge as:

“The knowledge of the circumstances of which we must make use never exists in concentrated or integrated form but solely as the dispersed bits of incomplete and frequently contradictory knowledge which all the separate individuals possess. […] Every individual has some advantage over all others because he possesses unique information of which beneficial use might be made, but of which use can be made only if the decisions depending on it are left to him or are made with his active cooperation.”

Crowd Capability, according to John and Prashant, is what enables an organization to tap this dispersed knowledge from individuals. More formally, they define Crowd Capability as an “organizational level capability that is defined by the structure, content, and process of an organization’s engagement with the dispersed knowledge of individuals—the Crowd.” From their perspective, “it is this engagement of dispersed knowledge through Crowd Capability efforts that endows organizations with data, information, and knowledge previously unavailable to them; and the internal processing of this, in turn, results in the generation of Crowd Capital within the organization.”

In other words, “when an organization defines the structure, content, and processes of its engagement with the dispersed knowledge of individuals, it has created a Crowd Capability, which in turn, serves to generate Crowd Capital.” And so, the authors contend, a Crowd Capable organization “puts in place the structure, content, and processes to access Hayek’s dispersed knowledge from individuals, each of whom has some informational advantage over the other, and thus forming a Crowd for the organization.” Note that a crowd can “exist inside of an organization, exist external to the organization, or a combination of the latter and the former.”

Screen Shot 2013-03-30 at 4.30.05 PM

The “Structure” component of Crowd Capability connotes “the geographical divisions and functional units within an organization, and the technological means that they employ to engage a Crowd population for the organization.” The structure component of Crowd Capability is always an Information-Systems-mediated phenomenon. The “Content” of Crowd Capability constitutes “the knowledge, information or data goals that the organization seeks from the population,” while the “Processes” of Crowd Capability are defined as “the internal procedures that the organization will use to organize, filter, and integrate the incoming knowledge, information, and/or data.” The authors observe that in each Crowd Capital case they’ve analyzed, “an organization creates the structure, content, and/or process to engage the knowledge of dispersed individuals through Information Systems.”

Like the other forms of capital, “Crowd Capital requires investments (for example in Crowd Capability), and potentially pays literal or figurative dividends, and hence, is endowed with typical ‘capital-like’ qualities.” But the authors are meticulous when they distinguish Crowd Capital from Intellectual Capital, Human Capital, Social Capital, Political Capital, etc. The main distinguishing factor is that Crowd Capability is strictly an Information-Systems-mediated phenomenon. “This is not to say that Crowd Capability could not be leveraged to create Social Capital for an organization. It likely could, however, Crowd Capability does not require Social Capital to function.”

That said, I would argue that Crowd Capability can function better thanks to Social Capital. Indeed, Social Capital can influence the “structure,” “content” and “processes” integral to Crowd Capability. And so, while the authors argue that “Crowd Capital can be accrued without such relationship and network concerns” typical of Social Capital, I would counter that the presence of Social Capital does not diminish Crowd Capability but, on the contrary, builds greater capability. Otherwise, Crowd Capability is little more than the cultivation of cognitive surplus in which crowd workers can never unite. The Matrix comes to mind. So this is where my experience in crowdsourcing digital humanitarian response makes me diverge from the authors’ conceptualization of “Crowd Capital.” Take the Blue Pill to stay in the disenfranchised version of Crowd Capital; or take the Red Pill if you want to build the social capital required to hack the system.

MatrixBluePillRedPill

To be sure, the authors of Crowd Capital Theory point to Google’s ReCaptcha system for book digitization to demonstrate that “Crowd Capability does not require a network of relationships for the accrual of Crowd Capital.” While I understand the return on investment to society, both in the form of less spam and more digitized books, this mediated information system is authoritarian. One does not have a choice but to comply, unless you’re a hacker, perhaps. This is why I share Jonathan Zittrain’s point in “The Future of the Internet and How to Stop It.” Zittrain promotes the notion of “generative technologies,” which he defines as having the ability “to produce unprompted, user-driven change.”

Krisztina Holly makes a related argument in her piece on crowdscaling. “Like crowdsourcing, crowdscaling taps into the energy of people around the world that want to contribute. But while crowdsourcing pulls in ideas and content from outside the organization, crowdscaling grows and scales its impact outward by empowering the success of others.” Crowdscaling is possible when Crowd Capability generates Crowd Capital by the crowd, for the crowd. In contrast, said crowd cannot hack or change a ReCaptcha requirement if they wish to proceed to the page they’re looking for. In The Matrix, Crowd Capital accrues most directly to The Matrix rather than to the human cocoons being farmed for their metrics. In the same vein, Crowd Capital generated by ReCaptcha accrues most directly to Google Inc. In short, ReCaptcha doesn’t even ask the question: “Blue Pill or Red Pill?” So is it only a matter of time until the users that generate the Crowd Capital unite and revolt, as seems to be the case with the lawsuit against CrowdFlower?

I realize that the authors may have intended to take the conversation on Crowd Capital in a different direction. But they do conclude with a number of interesting, open-ended questions that suggest various “flavors” of Crowd Capital are possible, and not just the dark one I’ve just described. I for one will absolutely make use of the term Crowd Capital, but will flavor it based on my experience with digital humanitarians, which suggests a different formula: Social Capital + Social Media + Crowdsourcing = Crowd Capital. In short, I choose the Red Pill.
