Category Archives: Social Media

Crowdsourcing Crisis Response Following Philippine Floods

Widespread and heavy rains resulting from Typhoon Haikui have flooded the Philippine capital Manila. Over 800,000 people have been affected by the flooding and some 250,000 have been relocated to evacuation centers. Given the gravity of the situation, “some resourceful Filipinos put up an online spreadsheet where concerned citizens can list down places where help is most urgently needed” (1). Meanwhile, Google’s Crisis Response Team has launched this resource page, which includes links to news updates, emergency contact information, Person Finder and this shelter map.

Filipino volunteers are using an open (but not editable) Google Spreadsheet and this Google Form to crowdsource urgent reports on needs. The spreadsheet (please click the screenshot below to enlarge) includes the time of the incident, the location (physical address), a description of the alert (many include personal names and phone numbers) and the name of the person who reported it. Additional fields include the status of the alert, its urgency and whether action has been taken. The latter is also color-coded.
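For readers who want to replicate this kind of sheet programmatically, here is a minimal sketch of the record structure described above. The field names are my own shorthand for illustration, not the spreadsheet’s actual column headers:

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative sketch of one row in the crowdsourced spreadsheet.
# Field names are assumptions; the actual column headers may differ.
@dataclass
class FloodReport:
    time_of_incident: datetime
    location: str               # physical address
    description: str            # may include personal names and phone numbers
    reported_by: str
    status: str = "open"        # e.g., "open", "in progress", "resolved"
    urgency: str = "unknown"    # e.g., "low", "medium", "high"
    action_taken: bool = False  # color-coded in the actual sheet

report = FloodReport(
    time_of_incident=datetime(2012, 8, 7, 14, 30),
    location="Barangay Tumana, Marikina City",
    description="Family of five stranded on rooftop, needs rescue boat",
    reported_by="J. dela Cruz",
    urgency="high",
)
```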

“The spreadsheet can easily be referenced by any rescue group that can access the web, and is constantly updated by volunteers real-time” (2). This reminds me a lot of the Google Spreadsheets we used following the Haiti Earthquake of 2010. The Standby Volunteer Task Force (SBTF) continues to use Google Spreadsheets in similar ways, but for the purposes of media monitoring, and these are typically not made public. What is noteworthy about these important volunteer efforts in the Philippines is that the spreadsheet was made completely public in order to crowdsource the response.

As I’ve noted before, emergency management professionals cannot be everywhere at the same time, but the crowd is always there. The obvious tradeoff of using open data to crowdsource crisis response is privacy and data protection. Volunteers may therefore want to let those filling out the Google Form know that any information they provide will or may be made public. I would also recommend that they create an “About Us” or “Who We Are” link to cultivate a sense of trust in the initiative. Finally, crowdsourcing offers of help may facilitate the “matchmaking” of needs and available resources.

I would give the same advice to the volunteers who recently set up this Crowdmap of the floods. I would also suggest they set up their own Standby Volunteer Task Force (SBTF) in order to deploy again in the future. In the meantime, reports on flood levels can be submitted to the crisis map via webform, email and SMS.

Traditional vs. Crowdsourced Election Monitoring: Which Has More Impact?

Max Grömping makes a significant contribution to the theory and discourse of crowdsourced election monitoring in his excellent study: “Many Eyes of Any Kind? Comparing Traditional and Crowdsourced Monitoring and their Contribution to Democracy” (PDF). This 25-page study is definitely a must-read for anyone interested in this topic. That said, Max sets up a straw man when he writes: “It is believed that this new methodology almost magically improves the quality of elections […].” Perhaps tellingly, he does not reveal who exactly believes in this false magic. Nor does he cite who subscribes to the view that “[…] crowdsourced citizen reporting is expected to have significant added value for election observation—and by extension for democracy.”

My doctoral dissertation focused on crowdsourced election observation in countries under repressive rule. At no point in my research or during interviews with activists did I come across this kind of superficial mindset or opinion. In fact, my comparative analysis of crowdsourced election observation showed that the impact of these initiatives was at best minimal vis-a-vis electoral accountability, particularly in the Sudan. That said, my conclusions do align with Max’s principal findings: “the added value of crowdsourcing lies mainly in the strengthening of civil society via a widened public sphere and the accumulation of social capital with less clear effects on vertical and horizontal accountability.”

This is huge! Traditional monitoring campaigns don’t strengthen civil society or the public sphere. Traditional monitoring teams are typically composed of international observers and thus do not build social capital domestically. At times, traditional election monitoring programs may even lead to more violence, as this recent study revealed. But the point is not to polarize the debate. This is not an either/or argument but rather a both/and issue. Traditional and crowdsourced election observation efforts can absolutely complement each other precisely because each has a different comparative advantage. Max concurs: “If the crowdsourced project is integrated with traditional monitoring from the very beginning and thus serves as an additional component within the established methodology of an Election Monitoring Organization, the effect on incentive structures of political parties and governments should be amplified. It would then include the best of both worlds: timeliness, visualization and wisdom of the crowd as well as a vetted methodology and legitimacy.”

Recall Jürgen Habermas and his treatise that “those who take on the tools of open expression become a public, and the presence of a synchronized public increasingly constrains undemocratic rulers while expanding the right of that public.” Why is this important? Because crowdsourced election observation projects can potentially bolster this public sphere and create local ownership. These efforts can also help synchronize shared awareness, an important catalyzing factor of social movements, according to Habermas. Furthermore, my colleague Phil Howard has convincingly demonstrated that a large, active online civil society is a key causal factor vis-a-vis political transitions towards more democratic rule. This is key because the use of crowdsourcing and crowdmapping technologies often requires some technical training, which can expand the online civil society that Phil describes and render that society more active (as occurred in Egypt during the 2010 Parliamentary Elections; see my dissertation).

The problem? There is very little empirical research on crowdsourced election observation projects, let alone assessments of their impact. Then again, these crowdsourcing efforts are only a few years old, and many doers in this space are still learning how to be more effective through trial and error. Incidentally, it is worth noting that there has also been very little empirical analysis of the impact of traditional monitoring efforts: “Further quantitative testing of the outlined mechanisms is definitely necessary to establish a convincing argument that election monitoring has positive effects on democracy.”

In the second half of his important study, Max does an excellent job articulating the advantages and disadvantages of crowdsourced election observation. For example, he observes that many crowdsourced initiatives appear to be spontaneous rather than planned. Therein lies part of the problem. As demonstrated in my dissertation, spontaneous crowdsourced election observation projects are highly unlikely to strengthen civil society, let alone build any kind of social capital. Furthermore, soliciting a maximum number of citizen-generated election reports requires considerable upfront effort: election awareness-raising and education, partnership outreach, and a highly effective media strategy.

All of this requires deliberate, calculated planning and preparation (key to an effective civil society), which explains why Egyptian activists were relatively more successful in their crowdsourced election observation efforts compared to their counterparts in the Sudan (see dissertation). This is why I’m particularly skeptical of Max’s language on the “spontaneous mechanism of protection against electoral fraud or other abuses.” That said, he does emphasize that “all this is of course contingent on citizens being informed about the project and also the project’s relevance in the eyes of the media.”

I don’t think that being informed is enough, however. An effective campaign seeks not only to inform but to catalyze behavior change, no small task. Still, Max is right to point out that a crowdsourced election observation project can “encourage citizens to actively engage with this information, to either dispute it, confirm it, or at least register its existence.” To this end, recall that political change is a two-step process, with the second (social) step being where political opinions are formed (Katz and Lazarsfeld 1955). “This is the step in which the Internet in general, and social media in particular, can make a difference” (Shirky 2010). In sum, Max argues that “the public sphere widens because this engagement, which takes place in the context of the local all over the country, is now taken to a wider audience by the means of mapping and real-time reporting.” And so, “even if crowdsourced reports are not acted upon, the very engagement of citizens in the endeavor to directly make their voices heard and hold their leaders accountable widens the public sphere considerably.”

Crowdsourcing efforts are fraught with important and very real challenges, as is well known: the reliability of crowdsourced information, the risk of hate speech spreading via uncontrolled reports, limited evidence of impact, and concerns over the security and privacy of citizen reporters, among others. That said, it is important to note that this “field” is evolving and many in this space are actively looking for solutions to these challenges. During the 2010 Parliamentary Elections in Egypt, the U-Shahid project was able to verify over 90% of the crowdsourced reports. The “field” of information forensics is becoming more sophisticated, and variants of crowdsourcing such as bounded crowdsourcing and crowdseeding are not only being proposed but actually implemented.

The concern over unconfirmed reports going viral has little to do with crowdsourcing. Moreover, the vast majority of crowdsourced election observation initiatives I have studied moderate all content before publication. Concerns over security and privacy are not limited to crowdsourced election observation and speak to a broader challenge. There are already several key initiatives underway in the humanitarian and crisis mapping community to address these important challenges. And lest we forget, there are few empirical studies that demonstrate the impact of traditional monitoring efforts in the first place.

In conclusion, traditional monitors are sometimes barred from observing an election. In the past, there has been little recourse when this happens. Today, crowdsourced efforts are sure to spring up. Furthermore, in the event that traditional monitors conclude that an election was stolen, there’s little they can do to catalyze a local social movement to place pressure on the thieves. This is where crowdsourced election observation efforts could make an important contribution. To quote Max: “instead of being fearful of the ‘uncontrollable crowd’ and criticizing the drawbacks of crowdsourcing, […] governments would be well-advised to embrace new social media. Citizens […] will use new technologies and new channels for information-sharing anyway, whether endorsed by their governments or not. So, governments might as well engage with ICTs and crowdsourcing proactively.”

Big thanks to Max for this very valuable contribution to the discourse and to my colleague Tiago Peixoto for flagging this important study.

Launching a Library of Crisis Hashtags on Twitter

I recently posted the following question on the CrisisMappers listserv: “Does anyone know whether a list of crisis hashtags exists?”

There are several reasons why such a hashtag list would add value to the CrisisMappers community and beyond. First, an analysis of Twitter hashtags used during crises over the past few years could be quite insightful; interesting new patterns may be evolving. Second, the resulting analysis could be used as a guide to find (and create) new hashtags when future crises unfold. Third, a library of hashtags would make it easier to collect historical datasets of crisis information shared on Twitter for the purposes of analysis and social computing research. To be sure, without this data, developing more sophisticated machine learning platforms like the Twitter Dashboard for the Humanitarian Cluster System would be a serious challenge indeed.
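To illustrate the third point, here is a minimal sketch of how such a library could be used to filter an archived tweet dataset for crisis-related content. The file names and two-column layout are assumptions for illustration, not a real schema:

```python
import csv
import re

# Illustrative sketch: filter an archived tweet dataset against a crisis
# hashtag library (e.g., exported from the crowdsourced Google Spreadsheet).

def load_hashtags(path):
    """Read one hashtag per row, normalized to lowercase without '#'."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[0].strip().lstrip("#").lower() for row in csv.reader(f) if row}

HASHTAG_RE = re.compile(r"#(\w+)")

def matching_tweets(tweets_path, hashtags):
    """Yield (tweet_id, text) rows whose text contains a known crisis hashtag."""
    with open(tweets_path, newline="", encoding="utf-8") as f:
        for tweet_id, text in csv.reader(f):  # assumes two columns: id, text
            tags = {t.lower() for t in HASHTAG_RE.findall(text)}
            if tags & hashtags:
                yield tweet_id, text

if __name__ == "__main__":
    library = load_hashtags("crisis_hashtags.csv")  # hypothetical export
    for tweet_id, text in matching_tweets("tweets.csv", library):
        print(tweet_id, text)
```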

After posting my question on CrisisMappers and Twitter, it was clear that no such library existed. So my colleague Sara Farmer launched a Google Spreadsheet to crowdsource an initial list. Since I was working on a similar list, I’ve created a combined spreadsheet which is available and editable here. Please do add any other crisis hashtags you may know about so we can make this the most comprehensive and up-to-date resource available to everyone. Thank you!

Whilst doing this research, I came across two potentially interesting and helpful hashtag websites: Hashonomy.com and Hashtags.org.

Crowdsourcing a Crisis Map of the Beijing Floods: Volunteers vs Government

Flash floods in Beijing have killed over 70 people and forced the evacuation of more than 50,000 after destroying over 8,000 homes and causing $1.6 billion in damages. In total, some 1.5 million people have been affected by the floods after Beijing recorded the heaviest rainfall the city has seen in more than 60 years.

The heavy rains began on July 21. Within hours, users of the Guokr.com social network launched a campaign to create a live crisis map of the flood’s impact using Google Maps. According to TechPresident, “the result was not only more accurate than the government output—it was available almost a day earlier. According to People’s Daily Online, these crowd-sourced maps were widely circulated on Weibo [China’s version of Twitter] the Monday and Tuesday after the flooding.” The crowdsourced, citizen-generated flood map of Beijing is available here and looks like this:

One advantage of working with Google is that the crisis map can also be viewed via Google Earth. That said, the government does block a number of Google services in China, which puts the regime at a handicap during disasters.

This is an excellent example of crowdsourced crisis mapping. My one recommendation to Chinese volunteers would be to crowdsource solutions in addition to problems. In other words, map offers of help and turn the crisis map into a local self-help map, i.e., a Match.com for citizen-based humanitarian response. In short, use the map as a platform for self-organization and crowdsource the response by matching calls for help with corresponding offers of help. I would also recommend they create their own Standby Volunteer Task Force (SBTF) for crisis mapping to build social capital and repeat these efforts in future disasters.
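To make the matchmaking idea more concrete, here is a toy sketch that greedily pairs each call for help with the nearest unused offer in the same category. All data and field names are invented for illustration:

```python
import math

# Toy sketch of the "Match.com for disaster response" idea: pair each call
# for help with the closest same-category offer within a cutoff distance.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def match(needs, offers, max_km=10):
    """Greedily pair each need with the closest unused same-category offer."""
    unused = list(offers)
    pairs = []
    for need in needs:
        candidates = [o for o in unused if o["category"] == need["category"]]
        if not candidates:
            continue
        best = min(candidates, key=lambda o: haversine_km(
            need["lat"], need["lon"], o["lat"], o["lon"]))
        if haversine_km(need["lat"], need["lon"], best["lat"], best["lon"]) <= max_km:
            pairs.append((need, best))
            unused.remove(best)
    return pairs

needs = [{"category": "boat", "lat": 39.91, "lon": 116.40}]
offers = [{"category": "boat", "lat": 39.93, "lon": 116.42}]
print(match(needs, offers))
```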

Several days after Chinese volunteers first launched their crisis map, the Beijing Water Authority released its own map, which looks like a classic example of James Scott’s “Seeing Like a State.” The map is difficult to read, and it is unclear whether it is even dynamic or interactive, or live for that matter. It appears static and cryptic. One wonders whether these adjectives also describe the government’s response.

Meanwhile, there is growing anger over the state’s botched response to the floods. According to People’s Daily, “Chinese netizens have criticised the municipal authority for failing to update the city’s run-down drainage system or to pre-warn residents about the impending disaster.” In other cities, Guangdong Mobile (the local division of China Mobile) sent out 30 million SMS messages about the storm in cooperation with the provincial government. “Mobile users in Shenzhen, Zhongshan, Zhuhai, Jiangmen, and Yunfu received reminders to be careful from the telecom company because those five cities were forecast to be most affected by the storm.”

All disasters are political. They test the government’s capacity. The latter’s inability to respond swiftly and effectively has repercussions on citizens’ perception of governance and statehood. The more digital volunteers engage in crisis mapping, the more they highlight the local capacity and agency of ordinary citizens to create shared awareness and help themselves—with or without the state. In doing so, volunteers build social capital, which facilitates future collective action both on and offline. If government officials are not worried about their own failures in disaster management, they should be. This failure will continue to have political consequences, in China and elsewhere.

CrisisTracker: Collaborative Social Media Analysis For Disaster Response

I just had the pleasure of speaking with my new colleague Jakob Rogstadius from the Madeira Interactive Technologies Institute (Madeira-ITI). Jakob is working on CrisisTracker, a very interesting platform designed to facilitate collaborative social media analysis for disaster response. The rationale for CrisisTracker is the same one behind Ushahidi’s SwiftRiver project, and it could be hugely helpful for crisis mapping projects carried out by the Standby Volunteer Task Force (SBTF).

From the CrisisTracker website:

“During large-scale complex crises such as the Haiti earthquake, the Indian Ocean tsunami and the Arab Spring, social media has emerged as a source of timely and detailed reports regarding important events. However, individual disaster responders, government officials or citizens who wish to access this vast knowledge base are met with a torrent of information that quickly results in information overload. Without a way to organize and navigate the reports, important details are easily overlooked and it is challenging to use the data to get an overview of the situation as a whole.”

“We (Madeira University, University of Oulu and IBM Research) believe that volunteers around the world would be willing to assist hard-pressed decision makers with information management, if the tools were available. With this vision in mind, we have developed CrisisTracker.”

Like SwiftRiver, CrisisTracker combines automated clustering of content with crowdsourced curation of that content for further filtering. “Any user of the system can directly contribute tags that make it easier for other users to retrieve information and explore stories by similarity. In addition, users of the system can influence how tweets are grouped into stories.” Stories can be filtered by Report Category, Keywords, Named Entities, Time and Location. CrisisTracker also allows for simple geo-fencing to capture and list only those tweets that fall within a given map view.

Geolocation, Report Categories and Named Entities are all generated manually. The clustering of reports into stories is done automatically using keyword frequencies. So if keyword dictionaries exist for other languages, the platform could be used in these other languages as well. The result is a list of clustered Tweets displayed below the map, with the most popular cluster at the top.
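For readers curious about what keyword-frequency clustering might look like, here is a minimal sketch in that spirit. The tokenization, stopword list and similarity threshold are my own simplifications, not CrisisTracker’s actual algorithm:

```python
from collections import Counter
import math

# Minimal sketch of keyword-frequency story clustering: each tweet joins
# the most similar existing story, or starts a new one.

def tokens(text, stopwords=frozenset({"the", "a", "in", "of", "and"})):
    return Counter(w for w in text.lower().split() if w not in stopwords)

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def cluster(tweets, threshold=0.5):
    """Assign each tweet to the most similar story, or start a new story."""
    stories = []  # each story: {"centroid": Counter, "tweets": [...]}
    for tweet in tweets:
        vec = tokens(tweet)
        best, best_sim = None, 0.0
        for story in stories:
            sim = cosine(vec, story["centroid"])
            if sim > best_sim:
                best, best_sim = story, sim
        if best is not None and best_sim >= threshold:
            best["tweets"].append(tweet)
            best["centroid"].update(vec)
        else:
            stories.append({"centroid": vec, "tweets": [tweet]})
    return stories
```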

Clicking on an entry like the row in red above opens up a new page, like the one below. This page lists a group of tweets that all discuss the same specific event, in this case an explosion in Syria’s capital.

What is particularly helpful about this setup is the metadata displayed for each story or event: the number of people who tweeted about the story, the number of tweets about it, and the first day and time the story was shared on Twitter. In addition, the first tweet to report the story is listed, which is very helpful. The story list can be ranked according to “Size,” a figure that reflects the minimum of the number of original tweets and the number of Twitter users who shared those tweets. This is a particularly useful metric (and way to deal with spammers). Users also have the option of listing the first 50 tweets that referenced the story.
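One plausible reading of that “Size” metric is sketched below. Note that this is my interpretation of the description above, not the project’s documented formula:

```python
# Plausible reading of CrisisTracker's "Size": the smaller of (a) original
# tweets (retweets excluded) and (b) distinct users. A spammer posting many
# copies from one account inflates (a) but not (b), so min() keeps the
# score low. Interpretation mine, not the project's documented formula.

def story_size(tweets):
    originals = [t for t in tweets if not t.get("is_retweet")]
    users = {t["user"] for t in tweets}
    return min(len(originals), len(users))
```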

As you may be able to tell from the “Hide Story” and “Remove” buttons on the right-hand side of the display above, each clustered story, and indeed each tweet, can be hidden or removed if not relevant. This is where crowdsourced curation comes in. In addition, CrisisTracker enables users to geo-tag and categorize each tweet according to report type (e.g., Violence, Deaths, Request/Need, etc.), general keywords (e.g., #assad, #blasts, etc.) and named entities. Note that keywords can be removed and higher-quality tags can be added or crowdsourced by users as well (see below).

CrisisTracker also suggests related stories that may be of interest to the user based on the initial clustering and filtering (assisted manual clustering). In addition, the platform’s API means that the data can be exported in XML using a simple parser, so interoperability with platforms like Ushahidi’s would be possible. After our call, Jakob added a link on each story page in the system (a small XML icon below the related stories) to get the story in XML format. Any other system can now take this URL and parse the story into its own native format. Jakob is also looking to build a number of extensions to CrisisTracker, and a “Share with Ushahidi” button may be one such future extension. CrisisTracker is basically Jakob’s core PhD project, which is very cool, so he’ll be working on it for at least one more year.
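As an illustration, a consumer of that per-story XML export might look like the sketch below. The URL and element names are placeholders, since the post does not document the actual schema:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical consumer of CrisisTracker's per-story XML export. The URL
# pattern and element names are placeholders; the post only says each story
# page links to an XML version via a small icon.

def fetch_story(url):
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    return {
        "title": root.findtext("title"),
        "first_seen": root.findtext("first_seen"),
        "tweets": [t.text for t in root.iter("tweet")],
    }
```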

In sum, this could very well be the platform that many of us in the crisis mapping space have been waiting for. As I wrote in February 2012, turning the Twittersphere “into real-time shared awareness will require that our filtering and curation platforms become more automated and collaborative. I believe the key is thus to combine automated solutions with real-time collaborative crowdsourcing tools—that is, platforms that enable crowds to collaboratively filter and curate real-time information, in real-time. Right now, when we comb through Twitter, for example, we do so on our own, sitting behind our laptop, isolated from others who may be seeking to filter the exact same type of content. We need to develop free and open source platforms that allow for the distributed-but-networked, crowdsourced filtering and curation of information in order to democratize the sense-making of the firehose.”

Actually, I’ve been advocating for this approach since early 2009, so I’m really excited about Jakob’s project. We’ll be partnering with him and the Standby Volunteer Task Force (SBTF) in September 2012 to test the platform and provide him with expert feedback on how to further streamline the tool for collaborative social media analysis and crisis mapping. Jakob is also looking for domain experts to help with this study. I’ve invited Jakob to present CrisisTracker at the 2012 CrisisMappers Conference in Washington DC and very much hope he can join us to demo his tool in person. In the meantime, the video above provides an excellent overview of CrisisTracker, as does the project website. Finally, the project is open source and available on GitHub here.

Epilogue: The main problem with CrisisTracker is that it is still too manual; it does not include any machine learning or artificial intelligence features; and it has only been applied to Syria. This may explain why it has not gained traction in the humanitarian space so far.

Towards a Twitter Dashboard for the Humanitarian Cluster System

One of the principal Research and Development (R&D) projects I’m spearheading with colleagues at the Qatar Computing Research Institute (QCRI) has been getting a great response from several key contacts at the UN’s Office for the Coordination of Humanitarian Affairs (OCHA). In fact, their input has been instrumental in laying the foundations for our early R&D efforts. I therefore highlighted the initiative during my recent talk at the UN’s ECOSOC panel in New York, which was moderated by OCHA Under-Secretary General Valerie Amos. The response there was also very positive. So what’s the idea? To develop the foundations for a Twitter Dashboard for the Humanitarian Cluster System.

The purpose of the Twitter Dashboard for Humanitarian Clusters is to extract relevant information from Twitter and aggregate this information by cluster for analytical purposes. As the above graphic shows, clusters focus on core humanitarian issues including Protection, Shelter, Education, etc. Our plan is to go beyond standard keyword search and simple Natural Language Processing (NLP) approaches to more advanced Machine Learning (ML) techniques and social computing methods. We’ve spent the past month asking various contacts whether anyone has developed such a dashboard but thus far have not come across any pre-existing efforts. We’ve also spent this time getting input from key colleagues at OCHA to ensure that what we’re developing will be useful to them.

It is important to emphasize that the project is purely experimental for now. This is one of the big advantages of being part of an institute for advanced computing R&D; we get to experiment and carry out applied research on next-generation humanitarian technology solutions. We realize full well the many challenges and limitations of using Twitter as an information source, so I won’t repeat them here. The point is not to suggest that a would-be Twitter Dashboard should be used instead of existing information management platforms. As United Nations colleagues themselves have noted, such a dashboard would simply be another dial on their own dashboards, which may at times prove useful, especially when compared or integrated with other sources of information.

Furthermore, if we’re serious about communicating with disaster-affected communities, and the latter at times share crisis information on Twitter, then we may want to listen to what they are saying. This includes Diasporas as well. The point, quite simply, is to make full use of Twitter by at least extracting all relevant and meaningful information that contributes to situational awareness. The plan, therefore, is to have the Twitter Dashboard for Humanitarian Clusters aggregate information relevant to each specific cluster and then provide key analytics for this content in order to reveal potentially interesting trends and outliers within each cluster.
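As a thought experiment, the crudest possible baseline for such a dashboard would route tweets to clusters using keyword lexicons and count the matches, as sketched below. Our actual R&D aims to go well beyond this with ML, and the lexicons here are invented examples:

```python
from collections import Counter

# Crude keyword-lexicon baseline for routing tweets to humanitarian
# clusters and counting mentions per cluster. Lexicons are invented
# examples; the proposed dashboard would use ML classifiers instead.

CLUSTER_LEXICONS = {
    "Shelter": {"shelter", "tent", "homeless", "evacuation"},
    "Health": {"hospital", "injured", "medicine", "clinic"},
    "Protection": {"violence", "looting", "missing", "unsafe"},
}

def classify(tweet_text):
    words = set(tweet_text.lower().split())
    return [c for c, lexicon in CLUSTER_LEXICONS.items() if words & lexicon]

def aggregate(tweets):
    counts = Counter()
    for text in tweets:
        counts.update(classify(text))
    return counts

print(aggregate([
    "Families need shelter and tents near the stadium",
    "Hospital overwhelmed, medicine running out downtown",
]))
```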

Depending on how the R&D goes, we envision adding “credibility computing” to the Dashboard and expect to collaborate with our Arabic Language Technology Center to add Arabic tweets as well. Other languages could also be added in the future depending on initial results. Also, while we’re presently referring to this platform as a “Twitter” Dashboard, adding SMS, RSS feeds, etc., could be part of a subsequent phase. The focus would remain specifically on the Humanitarian Cluster system and the clusters’ underlying minimum essential indicators for decision-making.

The software and crisis ontologies we are developing as part of these R&D efforts will all be open source. Hopefully, we’ll have some initial results worth sharing by the time the International Conference of Crisis Mappers (ICCM 2012) rolls around in mid-October. In the meantime, we continue collaborating with OCHA and other colleagues and as always welcome any constructive feedback from iRevolution readers.

Introducing GeoXray for Crisis Mapping

My colleague Joel Myhre recently pointed me to Geosemble’s GeoXray platform, which “automatically filters content to your geographic area of interest and to your keywords of interest to provide you with timely, relevant information that enables you and your organization to make better decisions faster.” While I haven’t tested the platform, it seems similar to what Geofeedia offers.

Perhaps the main difference, beyond user interface and maybe ease of use, is that GeoXray pulls in both external public content (from Twitter, Facebook, blogs, news, PDFs, etc.) and internal sources such as private databases, documents, etc. The platform allows users to search content by keyword, location and time. GeoXray also works off the Google Earth Engine, which enables visualization from different angles. The tool can also pull in content from Wikimapia and allows users to tag mapped content according to perceived veracity. One of the strengths of the platform appears to be its automated geo-location feature. For more on GeoXray:
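For illustration only, the kind of keyword, location and time filtering described above might be sketched as follows. This is a generic example, not Geosemble’s implementation:

```python
from datetime import datetime

# Generic sketch of keyword + bounding-box + time-window filtering,
# the kind of query GeoXray is described as supporting.

def in_bbox(lat, lon, bbox):
    (south, west), (north, east) = bbox
    return south <= lat <= north and west <= lon <= east

def filter_items(items, keyword, bbox, start, end):
    keyword = keyword.lower()
    return [
        it for it in items
        if keyword in it["text"].lower()
        and in_bbox(it["lat"], it["lon"], bbox)
        and start <= it["time"] <= end
    ]

items = [{"text": "Flooding near the bridge", "lat": 34.05, "lon": -118.25,
          "time": datetime(2012, 7, 22, 9, 0)}]
la_bbox = ((33.7, -118.7), (34.3, -117.9))  # (south, west), (north, east)
print(filter_items(items, "flood", la_bbox,
                   datetime(2012, 7, 21), datetime(2012, 7, 23)))
```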

Become a (Social Media) Data Donor and Save a Life

I was recently in New York where I met up with my colleague Fernando Diaz from Microsoft Research. We were discussing the uses of social media in humanitarian crises and the various constraints of social media platforms like Twitter vis-a-vis their Terms of Service. And then this occurred to me: we have organ donation initiatives and organ donor cards that many of us carry around in our wallets. So why not become a “Data Donor” as well in the event of an emergency? After all, it has long been recognized that access to information during a crisis is as important as access to food, water, shelter and medical aid.

This would mean having a setting that gives others the right (for a limited time) during a crisis to use your public tweets or Facebook status updates for the express purpose of supporting emergency response operations, such as live crisis maps. Perhaps switching this setting on would also come with the provision that the user confirms that s/he will not knowingly spread false or misleading information as part of their data donation. Of course, the other option is to simply continue doing what many have been doing all along, i.e., keep using social media updates for humanitarian response regardless of whether or not this violates the various Terms of Service.
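A hypothetical sketch of how such a setting could be honored programmatically appears below. The profile field and time window are invented, since no platform currently exposes anything like this:

```python
from datetime import timedelta

# Sketch of the "data donor" idea: honor an opt-in flag and a limited
# time window before reusing someone's public updates on a crisis map.
# The "data_donor" profile field is hypothetical.

def usable_for_response(post, crisis_start, window_days=14):
    donor = post["author_profile"].get("data_donor", False)
    within_window = (crisis_start <= post["time"]
                     <= crisis_start + timedelta(days=window_days))
    return donor and within_window
```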

Enhanced Messaging for the Emergency Response Sector (EMERSE)

My colleague Andrea Tapia and her team at Penn State University have developed an interesting iPhone application designed to support humanitarian response. This application is part of their EMERSE project: Enhanced Messaging for the Emergency Response Sector. The other components of EMERSE include a Twitter crawler, automatic classification and machine learning.

The rationale for this important, applied research? “Social media used around crises involves self-organizing behavior that can produce accurate results, often in advance of official communications. This allows affected population to send tweets or text messages, and hence, make them heard. The ability to classify tweets and text messages automatically, together with the ability to deliver the relevant information to the appropriate personnel are essential for enabling the personnel to timely and efficiently work to address the most urgent needs, and to understand the emergency situation better” (Caragea et al., 2011).

The iPhone application developed by Penn State is designed to help humanitarian professionals collect information during a crisis. “In case of no service or Internet access, the application rolls over to local storage until access is available. However, the GPS still works via satellite and is able to geo-locate data being recorded.” The Twitter crawler component captures tweets referring to specific keywords “within a seven-day period as well as tweets that have been posted by specific users. Each API call returns at most 1000 tweets and auxiliary metadata […].” The machine translation component uses the Google Language API.

The more challenging aspect of EMERSE, however, is the automatic classification component. So the team made use of the Ushahidi Haiti data, which includes some 3,500 reports, about half of which came from text messages. Each of these reports was tagged according to a specific (but not mutually exclusive) category, e.g., Medical Emergency, Collapsed Structure, Shelter Needed, etc. The team at Penn State experimented with various techniques from Natural Language Processing (NLP) and Machine Learning (ML) to automatically classify the Ushahidi Haiti data according to these pre-existing categories. The results demonstrate that “Feature Extraction” significantly outperforms other methods, while Support Vector Machine (SVM) classifiers vary significantly depending on the category being coded. I wonder whether their approach is more or less effective than this one developed by the University of Colorado at Boulder.
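For readers who want a feel for the SVM baseline, here is a common setup using TF-IDF features with one binary classifier per category (since the Ushahidi categories are not mutually exclusive). The two training examples are invented, and the paper’s best-performing “Feature Extraction” method differs from this sketch:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Common baseline for the kind of classification EMERSE tackles: TF-IDF
# features plus a linear SVM, trained as one binary model per category
# because the Ushahidi labels are not mutually exclusive.

reports = [
    "people trapped under collapsed building near Carrefour",
    "family needs tents and tarps in Jacmel",
]
labels = [1, 0]  # 1 = "Collapsed Structure", 0 = not (invented examples)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(reports, labels)
print(model.predict(["reports of a collapsed school in Leogane"]))
```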

In any event, Penn State’s applied research was presented at the ISCRAM 2011 conference and the findings are written up in this paper (PDF): “Classifying Text Messages for the Haiti Earthquake.” The co-authors: Cornelia Caragea, Nathan McNeese, Anuj Jaiswal, Greg Traylor, Hyun-Woo Kim, Prasenjit Mitra, Dinghao Wu, Andrea H. Tapia, Lee Giles, Bernard J. Jansen, John Yen.

In conclusion, the team at Penn State argues that the EMERSE system offers four important benefits not provided by Ushahidi.

“First, EMERSE will automatically classify tweets and text messages into topic, whereas Ushahidi collects reports with broad category information provided by the reporter. Second, EMERSE will also automatically geo-locate tweets and text messages, whereas Ushahidi relies on the reporter to provide the geo-location information. Third, in EMERSE, tweets and text messages are aggregated by topic and region to better understand how the needs of Haiti differ by regions and how they change over time. The automatic aggregation also helps to verify reports. A large number of similar reports by different people are more likely to be true. Finally, EMERSE will provide tweet broadcast and GeoRSS subscription by topics or region, whereas Ushahidi only allows reports to be downloaded.”

In terms of future research, the team may explore other types of abstraction based on semantically related words and may also “design an emergency response ontology […].” So I recently got in touch with Andrea for an update, since their ISCRAM paper was published 14 months ago. I’ll be sure to share any updates that can be made public.

Crisis Tweets: Natural Language Processing to the Rescue?

My colleagues at the University of Colorado, Boulder, have been doing some very interesting applied research on automatically extracting “situational awareness” from tweets generated during crises. As is increasingly recognized by many in the humanitarian space, Twitter can at times be an important source of relevant information. The challenge is to make sense of a potentially massive number of crisis tweets in near real-time to turn this information into situational awareness.

Using Natural Language Processing (NLP) and Machine Learning (ML), my Colorado colleagues have developed a “suite of classifiers to differentiate tweets across several dimensions: subjectivity, personal or impersonal style, and linguistic register (formal or informal style).” They suggest that tweets contributing to situational awareness are likely to be “written in a style that is objective, impersonal, and formal; therefore, the identification of subjectivity, personal style and formal register could provide useful features for extracting tweets that contain tactical information.” To explore this hypothesis, they studied the following four crisis events: the North American Red River floods of 2009 and 2010, the 2009 Oklahoma grassfires, and the 2010 Haiti earthquake.
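To make these three dimensions concrete, here is a toy sketch that uses hand-picked word lists as rough proxies. The Colorado team’s classifiers are trained statistical models, not lookup lists like these:

```python
# Rough proxies for the three dimensions the Colorado team classified:
# subjectivity, personal vs. impersonal style, and formal vs. informal
# register. The word lists are invented; they only illustrate the idea
# of turning a tweet into linguistic features.

SUBJECTIVE = {"awful", "terrifying", "amazing", "hope", "scared"}
FIRST_PERSON = {"i", "me", "my", "we", "our"}
INFORMAL = {"lol", "omg", "gonna", "u"}

def linguistic_features(tweet):
    words = [w.strip(".,!?").lower() for w in tweet.split()]
    n = max(len(words), 1)
    return {
        "subjectivity": sum(w in SUBJECTIVE for w in words) / n,
        "personal": sum(w in FIRST_PERSON for w in words) / n,
        "informal": sum(w in INFORMAL for w in words) / n,
    }

print(linguistic_features("OMG I hope we are safe, the river is rising!!"))
```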

The findings of this study were presented at the Association for the Advancement of Artificial Intelligence. The team from Colorado demonstrated that their system, which automatically classifies tweets that contribute to situational awareness, works particularly well when analyzing “low-level linguistic features,” i.e., word frequencies and keyword search. Their analysis also showed that “linguistically-motivated features including subjectivity, personal/impersonal style, and register substantially improve system performance.” In sum, “these results suggest that identifying key features of user behavior can aid in predicting whether an individual tweet will contain tactical information. In demonstrating a link between situational awareness and other markable characteristics of Twitter communication, we not only enrich our classification model, we also enhance our perspective of the space of information disseminated during mass emergency.”

The paper, entitled “Natural Language Processing to the Rescue? Extracting ‘Situational Awareness’ Tweets During Mass Emergency,” details the findings above and is available here. The study was authored by Sudha Verma, Sarah Vieweg, William J. Corvey, Leysia Palen, James H. Martin, Martha Palmer, Aaron Schram and Kenneth M. Anderson.