Tag Archives: Chile

Using AIDR to Collect and Analyze Tweets from Chile Earthquake

Wish you had a better way to make sense of Twitter during disasters than this?

Type in a keyword like #ChileEarthquake in Twitter’s search box above and you’ll see more tweets than you can possibly read in a day, let alone keep up with for more than a few minutes. Wish there were an easy, free and open source solution? Well you’ve come to the right place. My team and I at QCRI are developing the Artificial Intelligence for Disaster Response (AIDR) platform to do just this. Here’s how it works:

First you login to the AIDR platform using your own Twitter handle (click images below to enlarge):

AIDR login

You’ll then see your collection of tweets (if you already have any). In my case, you’ll see I have three. The first is a collection of English language tweets related to the Chile Earthquake. The second is a collection of Spanish tweets. The third is a collection of more than 3,000,000 tweets related to the missing Malaysia Airlines plane. A preliminary analysis of these tweets is available here.

AIDR collections

Let’s look more closely at my Chile Earthquake 2014 collection (see below, click to enlarge). I’ve collected about a quarter of a million tweets in the past 30 hours or so. The label “Downloaded tweets (since last re-start)” simply refers to the number of tweets I’ve collected since adding a new keyword or hashtag to my collection. I started the collection yesterday at 5:39am my time (yes, I’m an early bird). Under “Keywords” you’ll see all the hashtags and keywords I’ve used to search for tweets related to the earthquake in Chile. I’ve also specified the geographic region I want to collect tweets from. Don’t worry, you don’t actually have to enter geographic coordinates when you set up your own collection; you simply highlight the area you’re interested in on the map and AIDR does the rest.
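Conceptually, the keyword and geo filters amount to a simple matching rule: keep a tweet if it contains a tracked keyword/hashtag, or if its coordinates fall inside the box you drew on the map. Here’s a rough, purely illustrative sketch of that logic in Python (the field names and function are mine, not AIDR’s actual code):

```python
def matches_collection(tweet, keywords, bbox):
    """Keep a tweet if it matches any keyword or falls inside the bounding box.

    bbox = (min_lon, min_lat, max_lon, max_lat). Illustrative only.
    """
    text = tweet.get("text", "").lower()
    if any(kw.lower() in text for kw in keywords):
        return True
    coords = tweet.get("coordinates")  # (lon, lat), if the tweet is geo-tagged
    if coords:
        lon, lat = coords
        min_lon, min_lat, max_lon, max_lat = bbox
        return min_lon <= lon <= max_lon and min_lat <= lat <= max_lat
    return False

# A rough bounding box around Chile (illustrative coordinates)
chile_bbox = (-76.0, -45.0, -66.0, -17.0)
keywords = ["#ChileEarthquake", "terremoto"]

print(matches_collection({"text": "Fuerte terremoto en Iquique"}, keywords, chile_bbox))  # True
```

In practice the platform handles this matching for you; the sketch is just to show there’s no magic behind the map-highlighting step.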

AIDR - Chile Earthquake 2014

You’ll also note in the above screenshot that I’ve chosen to collect only tweets in English, but you can collect tweets in all languages if you’d like, or just a select few. Finally, the Collaborators section simply lists the colleagues I’ve added to my collection. This gives them the ability to add new keywords/hashtags and to download the tweets collected, as shown below (click to enlarge). More specifically, collaborators can download the most recent 100,000 tweets (and also share the link with others). The 100K tweet limit is based on Twitter’s Terms of Service (ToS). If collaborators want all the tweets, Twitter’s ToS allows for sharing the TweetIDs for an unlimited number of tweets.
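Anyone who receives a list of TweetIDs rather than the tweets themselves has to “rehydrate” them, i.e. fetch the full tweets again from Twitter’s API. The v1.1 statuses/lookup endpoint accepts at most 100 IDs per request, so the ID list has to be chunked. A hypothetical sketch of that batching logic (`fetch_batch` stands in for whatever API wrapper you use; it is not a real library call):

```python
def chunk_ids(tweet_ids, batch_size=100):
    """Yield successive batches of at most batch_size IDs
    (100 is the per-request limit of statuses/lookup in API v1.1)."""
    for i in range(0, len(tweet_ids), batch_size):
        yield tweet_ids[i:i + batch_size]

def rehydrate(tweet_ids, fetch_batch):
    """Fetch full tweets for a shared list of TweetIDs, batch by batch.

    fetch_batch is a placeholder for your own API call, e.g. a thin
    wrapper around statuses/lookup.
    """
    tweets = []
    for batch in chunk_ids(tweet_ids):
        tweets.extend(fetch_batch(batch))
    return tweets
```

Deleted or protected tweets simply won’t come back from the API, which is why a rehydrated collection is usually a bit smaller than the original.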

AIDR download CSV

So that’s the AIDR Collector. We also have the AIDR Classifier, which helps you make sense of the tweets you’re collecting (in real time). That is, your collection of tweets doesn’t stop; it continues growing, and as it does, you can make sense of new tweets as they come in. With the Classifier, you simply teach AIDR to classify tweets into whatever topics you’re interested in, like “Infrastructure Damage”, for example. To get started with the AIDR Classifier, simply return to the “Details” tab of our Chile collection. You’ll note the “Go To Classifier” button on the far right:

AIDR go to Classifier

Clicking on that button allows you to create a Classifier, say on the topic of disaster damage in general. So you simply create a name for your Classifier, in this case “Disaster Damage” and then create Tags to capture more details with respect to damage-related tweets. For example, one Tag might be, say, “Damage to Transportation Infrastructure.” Another could be “Building Damage.” In any event, once you’ve created your Classifier and corresponding tags, you click Submit and find your way to this page (click to enlarge):

AIDR Classifier Link

You’ll notice the public link for volunteers. That’s basically the interface you’ll use to teach AIDR. If you want to teach AIDR by yourself, you can certainly do so. You also have the option of “crowdsourcing the teaching” of AIDR. Clicking on the link will take you to the page below.

AIDR to MicroMappers

So, I called my Classifier “Message Contents”, which is not particularly insightful; I should have given it a more descriptive name like “Humanitarian Information Needs”. But bear with me and let’s click on that Classifier. This will take you to the following Clicker on MicroMappers:

MicroMappers Clicker

Now this is not the most awe-inspiring interface you’ve ever seen (at least I hope not); the reason being that this is simply our very first version. We’ll be providing different “skins”, like the official MicroMappers skin (below), as well as a skin that allows you to upload your own logo, for example. In the meantime, note that AIDR shows every tweet to at least three different volunteers. Only if all three volunteers agree on how to classify a given tweet does AIDR take it into consideration when learning. In other words, AIDR wants to ensure that humans are really sure about how to classify a tweet before it decides to learn from that lesson. Incidentally, the MicroMappers smartphone apps for iPhone and Android will be available in the next few weeks. But I digress.
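The three-volunteer agreement rule boils down to a few lines of logic. This is purely illustrative of the idea, not AIDR’s actual implementation:

```python
from collections import Counter

def training_label(volunteer_tags, required_votes=3):
    """Return the agreed-upon tag, or None if the volunteers disagree.

    A tweet only becomes a training example when all three volunteers
    chose the same tag (illustrative sketch of the rule described above).
    """
    counts = Counter(volunteer_tags)
    tag, votes = counts.most_common(1)[0]
    return tag if votes >= required_votes else None

print(training_label(["Building Damage", "Building Damage", "Building Damage"]))  # Building Damage
print(training_label(["Building Damage", "Building Damage", "Other"]))            # None
```

Requiring unanimity trades label quantity for label quality, which matters because every agreed-upon tweet directly shapes what the machine learns.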

Yolanda TweetClicker4

As you and/or your volunteers classify tweets based on the Tags you created, AIDR starts to learn—hence the AI (Artificial Intelligence) in AIDR. AIDR begins to recognize that all the tweets you classified as “Infrastructure Damage” are indeed similar. Once you’ve tagged enough tweets, AIDR will decide that it’s time to leave the nest and fly on its own. In other words, it will start to auto-classify incoming tweets in real time. (At present, AIDR can auto-classify some 30,000 tweets per minute; compare this to the peak rate of 16,000 tweets per minute observed during Hurricane Sandy.)

Of course, AIDR’s first solo “flights” won’t always go smoothly. But not to worry, AIDR will let you know when it needs a little help. Every tweet that AIDR auto-tags comes with a confidence level. That is, AIDR will let you know: “I am 80% sure that I correctly classified this tweet.” If AIDR has trouble with a tweet, i.e., if its confidence level is 65% or below, it will send the tweet to you (and/or your volunteers) so it can learn from how you classify that particular tweet. In other words, the more tweets you classify, the more AIDR learns, and the higher AIDR’s confidence levels get. Fun, huh?
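The routing described above comes down to a single threshold check. A hypothetical sketch (the 65% threshold comes from the text; the function and return values are illustrative):

```python
CONFIDENCE_THRESHOLD = 0.65  # tweets at or below this go back to humans

def route(tweet_text, tag, confidence):
    """Accept an auto-tag when confidence is high enough; otherwise
    queue the tweet for human volunteers so the classifier can learn."""
    if confidence > CONFIDENCE_THRESHOLD:
        return ("auto_tagged", tag)
    return ("needs_human_review", tag)

print(route("Bridge collapsed on Route 5", "Infrastructure Damage", 0.80))
print(route("Feeling shaky today...", "Infrastructure Damage", 0.55))
```

This human-in-the-loop design is what makes the system improve over time: the hardest tweets are exactly the ones humans end up labeling.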

To view the results of the machine tagging, simply click on the View/Download tab, as shown below (click to enlarge). The page shows you the latest tweets that have been auto-tagged along with the Tag label and the confidence score. (Yes, this too is the first version of that interface; we’ll make it more user-friendly in the future, not to worry.) In any event, you can download the auto-tagged tweets in a CSV file and also share the download link with your colleagues for analysis and so on. At some point in the future, we hope to provide a simple data visualization output page so that you can easily see interesting data trends.

AIDR Results

So that’s basically all there is to it. If you want to learn more about how it all works, you might fancy reading this research paper (PDF). In the meantime, I’ll simply add that you can re-use your Classifiers. If (when?) another earthquake strikes Chile, you won’t have to start from scratch. You can auto-tag incoming tweets immediately with the Classifier you already have. Plus, you’ll be able to share your classifiers with your colleagues and partner organizations if you like. In other words, we’re envisaging an “App Store” of Classifiers based on different hazards and different countries. The more we re-use our Classifiers, the more accurate they will become. Everybody wins.

And voila, that is AIDR (at least our first version). If you’d like to test the platform and/or want the tweets from the Chile Earthquake, simply get in touch!


Note:

  • We’re adapting AIDR so that it can also classify text messages (SMS).
  • AIDR Classifiers are language specific. So if you speak Spanish, you can create a classifier to tag all Spanish language tweets/SMS that refer to disaster damage, for example. In other words, AIDR does not only speak English : )

Predicting the Credibility of Disaster Tweets Automatically

“Predicting Information Credibility in Time-Sensitive Social Media” is one of this year’s most interesting and important studies on “information forensics”. The analysis, co-authored by my QCRI colleague ChaTo (Carlos Castillo), will be published in Internet Research and should be required reading for anyone interested in the role of social media in emergency management and humanitarian response. The authors study disaster tweets and find that there are measurable differences in the way they propagate. They show that “these differences are related to the news-worthiness and credibility of the information conveyed,” a finding that enabled them to develop an automatic and remarkably accurate way to identify credible information on Twitter.

The new study builds on this previous research, which analyzed the veracity of tweets during a major disaster. The research found “a correlation between how information propagates and the credibility that is given by the social network to it. Indeed, the reflection of real-time events on social media reveals propagation patterns that surprisingly has less variability the greater a news value is.” The graphs below depict this information propagation behavior during the 2010 Chile Earthquake.

The graphs depict the re-tweet activity during the first hours following the earthquake. Grey edges depict past retweets. Some of the re-tweet graphs reveal interesting patterns even within 30 minutes of the quake. “In some cases tweet propagation takes the form of a tree. This is the case of direct quoting of information. In other cases the propagation graph presents cycles, which indicates that the information is being commented and replied, as well as passed on.” When studying false rumor propagation, the analysis reveals that “false rumors tend to be questioned much more than confirmed truths […].”
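One simple way to operationalize the tree-versus-cycle distinction the authors describe is to treat the propagation graph as undirected and test whether it is a tree (connected, with exactly n - 1 edges). This is my own illustrative reconstruction, not the paper’s actual method:

```python
def is_tree(nodes, edges):
    """Return True if the undirected graph is a tree:
    connected and with exactly len(nodes) - 1 edges."""
    if len(edges) != len(nodes) - 1:
        return False
    # Build adjacency list and check connectivity with a DFS
    adj = {n: [] for n in nodes}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(adj[n])
    return len(seen) == len(nodes)

# A pure retweet chain forms a tree (direct quoting of information)
print(is_tree({"A", "B", "C"}, [("A", "B"), ("A", "C")]))                 # True
# Replies and commentary add extra edges, so the graph is no longer a tree
print(is_tree({"A", "B", "C"}, [("A", "B"), ("B", "C"), ("C", "A")]))     # False
```

On real retweet data you would build `nodes` and `edges` from user IDs and retweet/reply links, then classify each topic’s propagation graph this way.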

Building on these insights, the authors studied over 200,000 disaster tweets and identified 16 features that best separate credible and non-credible tweets. For example, users who spread credible tweets tend to have more followers. In addition, “credible tweets tend to include references to URLs which are included on the top-10,000 most visited domains on the Web. In general, credible tweets tend to include more URLs, and are longer than non credible tweets.” Furthermore, credible tweets also tend to express negative feelings whilst non-credible tweets concentrate more on positive sentiments. Finally, question and exclamation marks tend to be associated with non-credible tweets, as are tweets that use first and third person pronouns. All 16 features are listed below.

• Average number of tweets posted in the past by the authors of the tweets on the topic.
• Average number of followees of the authors posting these tweets.
• Fraction of tweets having a positive sentiment.
• Fraction of tweets having a negative sentiment.
• Fraction of tweets containing the most frequent URL.
• Fraction of tweets containing a URL.
• Fraction of URLs pointing to a domain among the top 10,000 most visited ones.
• Fraction of tweets containing a user mention.
• Average length of the tweets.
• Fraction of tweets containing a question mark.
• Fraction of tweets containing an exclamation mark.
• Fraction of tweets containing a question or an exclamation mark.
• Fraction of tweets containing “smiling” emoticons.
• Fraction of tweets containing a first-person pronoun.
• Fraction of tweets containing a third-person pronoun.
• Maximum depth of the propagation trees.
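Several of these features are straightforward to compute from a set of tweets on a topic. A minimal illustrative sketch of a handful of them (the field names are my assumptions, not the paper’s data schema):

```python
import re

def credibility_features(tweets):
    """Compute a few of the 16 credibility features over a topic's tweets.

    Each tweet is a dict with illustrative fields "text" and "followees".
    """
    n = len(tweets)
    texts = [t["text"] for t in tweets]
    return {
        "frac_with_url": sum(1 for s in texts if "http" in s) / n,
        "frac_with_question_mark": sum(1 for s in texts if "?" in s) / n,
        "frac_with_exclamation": sum(1 for s in texts if "!" in s) / n,
        "frac_with_mention": sum(1 for s in texts if re.search(r"@\w+", s)) / n,
        "avg_length": sum(len(s) for s in texts) / n,
        "avg_followees": sum(t["followees"] for t in tweets) / n,
    }

sample = [
    {"text": "Major damage reported http://example.com", "followees": 300},
    {"text": "Is this real?!", "followees": 50},
]
print(credibility_features(sample))
```

Real implementations would add the sentiment, pronoun, domain-ranking and propagation-depth features, which need external resources (a sentiment lexicon, a list of top domains, the retweet graph).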

Drawing on natural language processing (NLP) and machine learning (ML), the authors turned the insights above into an automatic classifier for finding credible English-language tweets. This classifier achieved an AUC of 86%. This measure, which ranges from 0 to 1, captures the classifier’s predictive quality. When applied to Spanish-language tweets, the classifier’s AUC was still relatively high at 82%, which demonstrates the robustness of the approach.
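AUC has a handy interpretation: it is the probability that a randomly chosen credible tweet is scored higher by the classifier than a randomly chosen non-credible one. A small self-contained computation of that statistic (ties counted as half):

```python
def auc(scores, labels):
    """Compute AUC as the fraction of (positive, negative) pairs where
    the positive example gets the higher score; ties count as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# A perfect ranking gives AUC = 1.0; random guessing hovers near 0.5,
# which is why 0.86 is a strong result.
print(auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # 1.0
```

(For large datasets you would use an O(n log n) rank-based formula or a library routine rather than this quadratic pairwise loop.)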

Interested in learning more about “information forensics”? See this link.

Project Cybersyn: Chile 2.0 in 1973

My colleague Lokman Tsui at the Berkman Center kindly added me to the Harvard-MIT-Yale Cyberscholars working group and I attended the second roundtable of the year yesterday. These roundtables typically comprise three sets of presentations followed by discussions.

Introducing Cybersyn

We were both stunned by what was possibly one of the coolest tech presentations we’ve been to at Berkman. Assistant Professor Eden Medina from Indiana University’s School of Informatics presented her absolutely fascinating research on Project Cybersyn. This project ties together cybernetics, political transitions, organizational theory, complex systems and the history of technology.

cybersyn_control_room

I had never heard of this project, but Eden’s talk made me want to cancel all my weekend plans and read her dissertation from MIT, which I’m literally downloading as I type this. If you’d like an abridged version, I’d recommend reading her peer-reviewed article which won the 2007 IEEE Life Member’s Prize in Electrical History: “Designing Freedom, Regulating a Nation: Socialist Cybernetics in Allende’s Chile” (PDF).

Project Cybersyn is an early computer network developed in Chile during the socialist presidency of Salvador Allende (1970–1973) to regulate the growing social property area and manage the transition of Chile’s economy from capitalism to socialism.

Under the guidance of British cybernetician Stafford Beer, often lauded as the ‘father of management cybernetics’, an interdisciplinary Chilean team designed cybernetic models of factories within the nationalized sector and created a network for the rapid transmission of economic data between the government and the factory floor. The article describes the construction of this unorthodox system, examines how its structure reflected the socialist ideology of the Allende government, and documents the contributions of this technology to the Allende administration.

The purpose of Cybersyn was to “network every firm in the expanding nationalized sector of the economy to a central computer in Santiago, enabling the government to grasp the status of production quickly and respond to economic crises in real time.”

Heartbeat of Cybersyn

Stafford is considered the “Father of Management Cybernetics”, and at the heart of his genius is the “Viable System Model” (VSM). Eden explains that “Cybersyn’s design cannot be understood without a basic grasp of this model, which played a pivotal role in merging the politics of the Allende government with the design of this technological system.”

VSM is a model of the organizational structure of any viable or autonomous system. A viable system is any system organised in such a way as to meet the demands of surviving in the changing environment. One of the prime features of systems that survive is that they are adaptable.

vsm

Beer believed that this five-tier, recursive model existed in all stable organizations—biological, mechanical and social.

VSM recursive

Synergistic Cybersyn

Based on this model, Stafford’s team sought ways to enable communications among factories, state enterprises, sector committees, the management of the country’s development agency and the central mainframe housed at the agency’s headquarters.

Eventually, they settled on an existing telex network previously used to track satellites. Unlike the heterogeneous networked computer systems in use today, telex networks mandate the use of specific terminals and can only transmit ASCII characters. However, like the Internet of today, this early network of telex machines was driven by the idea of creating a high-speed web of information exchange.

Eden writes that Project Cybersyn eventually consisted of four sub-projects: Cybernet, Cyberstride, Checo and Opsroom.

  • Cybernet: This component “expanded the existing telex network to include every firm in [the] nationalized sector, thereby helping to create a national network of communication throughout Chile’s three-thousand-mile-long territory. Cybersyn team members occasionally used the promise of free telex installation to cajole factory managers into lending their support to the project. Stafford Beer’s early reports describe the system as a tool for real-time economic control, but in actuality each firm could only transmit data once per day.”
  • Cyberstride: This component “encompassed the suite of computer programmes written to collect, process, and distribute data to and from each of the state enterprises. Members of the Cyberstride team created ‘quantitative flow charts of activities within each enterprise that would highlight all important activities’, including a parameter for ‘social unease’ […]. The software used statistical methods to detect production trends based on historical data, theoretically allowing [headquarters] to prevent problems before they began. If a particular variable fell outside of the range specified by Cyberstride, the system emitted a warning […]. Only the interventor from the affected enterprise would receive the algedonic warning initially and would have the freedom, within a given time frame, to deal with the problem as he saw fit. However, if the enterprise failed to correct the irregularity within this timeframe, members of the Cyberstride team alerted the next level of management […].”
  • CHECO: This stood for CHilean ECOnomy, a component of Cybersyn which “constituted an ambitious effort to model the Chilean economy and provide simulations of future economic behaviour. Appropriately, it was sometimes referred to as ‘Futuro’. The simulator would serve as the ‘government’s experimental laboratory’, an instrumental equivalent to Allende’s frequent likening of Chile to a ‘social laboratory’. […] The simulation programme used the DYNAMO compiler developed by MIT Professor Jay Forrester […]. The CHECO team initially used national statistics to test the accuracy of the simulation program. When these results failed, Beer and his fellow team members faulted the time differential in the generation of statistical inputs, an observation that re-emphasized the perceived necessity for real-time data.”
  • Opsroom: The fourth component “created a new environment for decision making, one modeled after a British WWII war room. It consisted of seven chairs arranged in an inward-facing circle flanked by a series of projection screens, each displaying the data collected from the nationalized enterprises. In the Opsroom, all industries were homogenized by a uniform system of iconic representation, meant to facilitate the maximum extraction of information by an individual with a minimal amount of scientific training. […] Although [the Opsroom] never became operational, it quickly captured the imagination of all who viewed it, including members of the military, and became the symbolic heart of the project.”
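Cyberstride’s warning-and-escalation logic, as Eden describes it, can be sketched as a small decision rule: a variable outside its specified range first triggers an algedonic warning to the local interventor only, and escalates to the next management level if the enterprise fails to correct it within the grace period. This is entirely illustrative; the function name, return values and time units are my assumptions:

```python
def check_variable(value, low, high, hours_out_of_range, grace_hours=24):
    """Sketch of Cyberstride's escalation rule for one monitored variable.

    In range -> no action. Out of range within the grace period -> warn
    only the local interventor. Still uncorrected after the grace
    period -> alert the next level of management.
    """
    if low <= value <= high:
        return "ok"
    if hours_out_of_range <= grace_hours:
        return "warn_interventor"        # algedonic warning, local only
    return "escalate_to_management"      # enterprise failed to correct in time

print(check_variable(95, low=40, high=80, hours_out_of_range=6))   # warn_interventor
print(check_variable(95, low=40, high=80, hours_out_of_range=30))  # escalate_to_management
```

What is striking is that this 1970s telex-based system already embodied the threshold-alert-and-escalate pattern that modern monitoring systems still use.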

Outcome

Cybersyn never really took off. Stafford had hoped to install “algedonic meters” or early warning public opinion meters in “a representative sample of Chilean homes that would allow Chilean citizens to transmit their pleasure or displeasure with televised political speeches to the government or television studio in real time.”

[Stafford] dubbed this undertaking ‘The People’s Project’ and ‘Project Cyberfolk’ because he believed the meters would enable the government to respond rapidly to public demands, rather than repress opposing views.

As Cybersyn expanded beyond the initial goals of economic regulation to political-structural transformation, Stafford grew concerned that Cybersyn could prove dangerous if the system wasn’t fully completed and only individual components of the project were adopted. He feared this could result in “an old system of government with some new tools … For if the invention is dismantled, and the tools used are not the tools we made, they could become instruments of oppression.” In fact, Stafford soon “received invitations from the repressive governments in Brazil and South Africa to build comparable systems.”

Back in Chile, the Cybernet component of Cybersyn “proved vital to the government during the opposition-led strike of October 1972 (Paro de Octubre).” The strike threatened the government’s survival, so high-ranking government officials used Cybernet to monitor “the two thousand telexes sent per day that covered activities from the northern to the southern ends of the country.” In fact, “the rapid flow of messages over the telex lines enabled the government to react quickly to the strike activity […].”

The project’s telex network was subsequently—albeit briefly—used for economic mapping:

[The] telex network permitted a new form of economic mapping that enabled the government to collapse the data sent from all over the country into a single report, written daily at [headquarters], and hand delivered to [the presidential palace]. The detailed charts and graphs filling its pages provided the government with an overview of national production, transportation, and points of crisis in an easily understood format, using data generated several days earlier. The introduction of this form of reporting represented a considerable advance over the previous six-month lag required to collect statistics on the Chilean economy […].

Ultimately, according to Stafford, Cybersyn did not succeed because it wasn’t accepted as a network of people as well as machines, a revolution in behavior as well as in instrumental capability. In 1973, Allende was overthrown by the military and the Cybersyn project all but vanished from Chilean memory.

Patrick Philippe Meier