Tag Archives: verification

How to Become a Digital Sherlock Holmes and Support Relief Efforts

Humanitarian organizations need both timely and accurate information when responding to disasters. Where is the most damage located? Who needs the most help? What other threats exist? Respectable news organizations also need timely and accurate information during crisis events to responsibly inform the public. Alas, both humanitarian & mainstream news organizations are often confronted with countless rumors and unconfirmed reports. Investigative journalists and others have thus developed a number of clever strategies to rapidly verify such reports—as detailed in the excellent Verification Handbook. There’s just one glitch: Journalists and humanitarians alike are increasingly overwhelmed by the “Big Data” generated during crises, particularly information posted on social media. They rarely have enough time or enough staff to verify the majority of unconfirmed reports. This is where Verily comes in, a new type of Detective Agency for a new type of detective: The Virtual Digital Detective.


The purpose of Verily is to rapidly crowdsource the verification of unconfirmed reports during major disasters. The way it works is simple. If a humanitarian or news organization has a verification request, they simply submit this request online at Verily. This request must be phrased in the form of a Yes-or-No question, such as: “Has the Brooklyn Bridge been destroyed by the Hurricane?”; “Is this Instagram picture really showing current flooding in Indonesia?”; “Is this new YouTube video of the Chile earthquake fake?”; “Is it true that the bush fires in South Australia are getting worse?” and so on.

Verily helps humanitarian & news organizations answer these questions by rapidly crowdsourcing the collection of relevant clues. Verification questions are communicated widely across the world via Verily’s own email list of Digital Detectives and also via social media. This new breed of Digital Detectives then scour the web for clues that can help answer the verification questions. Anyone can become a Digital Detective at Verily. Indeed, Verily provides a menu of mini-verification guides for new detectives. These guides were written by some of the best Digital Detectives on the planet: the authors of the Verification Handbook. Verily Detectives post the clues they find directly to Verily and briefly explain why these clues help answer the verification question. That’s all there is to it.
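To make the workflow concrete, here is a minimal sketch in Python of how a verification request and its crowdsourced clues might be modeled. The class and field names are hypothetical (Verily's actual implementation is not described in this post), but the structure mirrors the process above: one Yes-or-No question, many clues, each with a brief explanation.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Clue:
    """A piece of evidence posted by a Digital Detective (hypothetical model)."""
    detective: str            # username of the contributor
    evidence_url: str         # link to the photo, video, post, map, etc.
    explanation: str          # why this clue helps answer the question
    supports_yes: bool        # does the clue point toward "Yes" or toward "No"?
    posted_at: datetime = field(default_factory=datetime.utcnow)

@dataclass
class VerificationRequest:
    """A Yes-or-No question submitted by a humanitarian or news organization."""
    question: str             # e.g. "Has the Brooklyn Bridge been destroyed by the Hurricane?"
    requester: str            # submitting organization
    clues: List[Clue] = field(default_factory=list)

    def add_clue(self, clue: Clue) -> None:
        self.clues.append(clue)

    def snapshot(self) -> dict:
        """Tally the evidence collected so far (a snapshot, not a verdict)."""
        yes = sum(1 for c in self.clues if c.supports_yes)
        return {"question": self.question,
                "clues": len(self.clues),
                "supporting_yes": yes,
                "supporting_no": len(self.clues) - yes}
```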


If you’re familiar with Reddit, you may be thinking, “Hold on, doesn’t Reddit do this already?” In part yes, but Reddit is not necessarily designed to crowdsource critical thinking or to create skilled Digital Detectives. Recall this fiasco during the Boston Marathon Bombings, which fueled disastrous “witch hunts”. That disaster would not have happened on Verily because Verily is deliberately designed to focus on the process of careful detective work while giving new detectives the skills they need to avoid precisely the kind of disaster that happened on Reddit. This is in no way a criticism of Reddit! No single platform can be designed to solve every problem under the sun. Deliberate, intentional design is absolutely key.

In sum, our goal at Verily is to crowdsource Sherlock Holmes. Why do we think this will work? For several reasons. First, authors of the Verification Handbook have already demonstrated that individuals working alone can, and do, verify unconfirmed reports during crises. We believe that creating a community that can work together to verify rumors will be even more powerful given the Big Data challenge. Second, each one of us with a mobile phone is a human sensor, a potential digital witness. We believe that Verily can help crowdsource the search for eyewitnesses, or rather the search for digital content that these eyewitnesses post on the Web. Third, the Red Balloon Challenge was completed in a matter of hours. This Challenge focused on crowdsourcing the search for clues across an entire continent (3 million square miles). Disasters, in contrast, are far more narrow in terms of geographic coverage. In other words, the proverbial haystack is smaller and thus the needles easier to find. More on Verily here & here.

So there’s reason to be optimistic that Verily can succeed, given the above and recent real-world deployments. Of course, Verily is still very much in an early, experimental phase. But both humanitarian organizations and high-profile news organizations have expressed a strong interest in field-testing this new Digital Detective Agency. To find out more about Verily and to engage with experts in verification, please join us on Tuesday, March 3rd at 10:00am (New York time) for this Google Hangout with the Verily Team and our colleague Craig Silverman, the Co-Editor of the Verification Handbook. Click here for the Event Page and here to follow on YouTube. You can also join the conversation on Twitter and pose questions or comments using the hashtag #VerilyLive.

Live: Crowdsourced Verification Platform for Disaster Response

Earlier this year, Malaysian Airlines Flight 370 suddenly vanished, which set in motion the largest search and rescue operation in history—both on the ground and online. Colleagues at DigitalGlobe uploaded high resolution satellite imagery to the web and crowdsourced the digital search for signs of Flight 370. An astounding 8 million volunteers rallied online, searching through 775 million images spanning 1,000,000 square kilometers; all this in just 4 days. What if, in addition to mass crowd-searching, we could also mass crowd-verify information during humanitarian disasters? Rumors and unconfirmed reports tend to spread rather quickly on social media during major crises. But what if the crowd were also part of the solution? This is where our new Verily platform comes in.


Verily was inspired by the Red Balloon Challenge in which competing teams vied for a $40,000 prize by searching for ten weather balloons secretly placed across some 8,000,000 square kilometers (the continental United States). Talk about a needle-in-the-haystack problem. The winning team from MIT found all 10 balloons within 8 hours. How? They used social media to crowdsource the search. The team later noted that the balloons would’ve been found more quickly had competing teams not posted pictures of fake balloons on social media. Point being, all ten balloons were found astonishingly quickly even with the disinformation campaign.

Verily takes the exact same approach and methodology used by MIT to rapidly crowd-verify information during humanitarian disasters. Why is verification important? Because humanitarians have repeatedly noted that their inability to verify social media content is one of the main reasons why they aren’t making wider use of this medium. So, to test the viability of our proposed solution to this problem, we decided to pilot the Verily platform by running a Verification Challenge. The Verily Team includes researchers from the University of Southampton, the Masdar Institute and QCRI.

During the Challenge, verification questions of various difficulty were posted on Verily. Users were invited to collect and post evidence justifying their answers to the “Yes or No” verification questions. The photograph below, for example, was posted with the following question:


Unbeknownst to participants, the photograph was actually of a town in Sicily called Caltagirone. The question was answered correctly within 4 hours by a user who submitted another picture of the same street. The results of the new Verily experiment are promising. Answers to our questions were coming in so rapidly that we could barely keep up with posting new questions. Users drew on a variety of techniques to collect their evidence and answer the questions we posted.

Verily was designed with the goal of tapping into collective critical thinking; that is, with the goal of encouraging people to think about the question rather than use their gut feeling alone. In other words, the purpose of Verily is not simply to crowdsource the collection of evidence but also to crowdsource critical thinking. This explains why a user can’t simply submit a “Yes” or “No” to answer a verification question. Instead, they have to justify their answer by providing evidence either in the form of an image/video or as text. In addition, Verily does not make use of Like buttons or up/down votes to answer questions. While such tools are great for identifying and sharing content on sites like Reddit, they are not the right tools for verification, which requires searching for evidence rather than liking or retweeting.
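As a rough illustration of that design choice, the sketch below (a hypothetical function, not Verily's actual code) rejects any answer that arrives without evidence or a written justification; note that the returned record deliberately has no field for likes or votes.

```python
from typing import Optional

def submit_answer(answer: str, evidence_url: Optional[str], justification: str) -> dict:
    """Accept a verification answer only if it is backed by evidence or an explanation."""
    if answer not in ("Yes", "No"):
        raise ValueError("Answer must be 'Yes' or 'No'.")
    if not evidence_url and not justification.strip():
        raise ValueError("A bare Yes/No is rejected: attach evidence or explain your reasoning.")
    # Deliberately no 'likes' or 'votes' field: evidence, not popularity, answers the question.
    return {"answer": answer, "evidence": evidence_url, "justification": justification}
```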

Our Verification Challenge confirmed the feasibility of the Verily platform for time-critical, crowdsourced evidence collection and verification. The next step is to deploy Verily during an actual humanitarian disaster. To this end, we invite both news and humanitarian organizations to pilot the Verily platform with us during the next natural disaster. Simply contact me to submit a verification question. In the future, once Verily is fully developed, organizations will be able to post their questions directly.


See Also:

  • Verily: Crowdsourced Verification for Disaster Response [link]
  • Crowdsourcing Critical Thinking to Verify Social Media [link]
  • Six Degrees of Separation: Implications for Verifying Social Media [link]

New Insights on How To Verify Social Media

The “field” of information forensics has seen some interesting developments in recent weeks. Take the Verification Handbook or Twitter Lie-Detector project, for example. The Social Sensor project is yet another new initiative. In this blog post, I seek to make sense of these new developments and to identify where this new field may be going. In so doing, I highlight key insights from each initiative. 


The co-editors of the Verification Handbook remind us that misinformation and rumors are hardly new during disasters. Chapter 1 opens with the following account from 1934:

“After an 8.1 magnitude earthquake struck northern India, it wasn’t long before word circulated that 4,000 buildings had collapsed in one city, causing ‘innumerable deaths.’ Other reports said a college’s main building, and that of the region’s High Court, had also collapsed.”

These turned out to be false rumors. The BBC’s User Generated Content (UGC) Hub would have been able to debunk these rumors. In their opinion, “The business of verifying and debunking content from the public relies far more on journalistic hunches than snazzy technology.” So they would have been right at home in the technology landscape of 1934. To be sure, they contend that “one does not need to be an IT expert or have special equipment to ask and answer the fundamental questions used to judge whether a scene is staged or not.” In any event, the BBC does not “verify something unless [they] speak to the person that created it, in most cases.” What about the other cases? How many of those cases are there? And how did they ultimately decide on whether the information was true or false even though they did not speak to the person that created it?

As this new study argues, big news organizations like the BBC aim to contact the original authors of user generated content (UGC) not only to try and “protect their editorial integrity but also because rights and payments for newsworthy footage are increasingly factors. By 2013, the volume of material and speed with which they were able to verify it [UGC] were becoming significant frustrations and, in most cases, smaller news organizations simply don’t have the manpower to carry out these checks” (Schifferes et al., 2014).


Chapter 3 of the Handbook notes that the BBC’s UGC Hub began operations in early 2005. At the time, “they were reliant on people sending content to one central email address. At that point, Facebook had just over 5 million users, rather than the more than one billion today. YouTube and Twitter hadn’t launched.” Today, more than 100 hours of content is uploaded to YouTube every minute; over 400 million tweets are sent each day and over 1 million pieces of content are posted to Facebook every 30 seconds. Now, as this third chapter rightly notes, “No technology can automatically verify a piece of UGC with 100 percent certainty. However, the human eye or traditional investigations aren’t enough either. It’s the combination of the two.” New York Times journalists concur: “There is a problem with scale… We need algorithms to take more onus off human beings, to pick and understand the best elements” (cited in Schifferes et al., 2014).

People often (mistakenly) see “verification as a simple yes/no action: Something has been verified or not. In practice, […] verification is a process” (Chapter 3). More specifically, this process is one of satisficing. As colleagues Leysia Palen et al. note in this study, “Information processing during mass emergency can only satisfice because […] the ‘complexity of the environment is immensely greater than the computational powers of the adaptive system.'” To this end, “It is an illusion to believe that anyone has perfectly accurate information in mass emergency and disaster situations to account for the whole event. If someone did, then the situation would not be a disaster or crisis.” This explains why Leysia et al. seek to shift the debate to one focused on the helpfulness of information rather than the problematic true/false dichotomy.


“In highly contextualized situations where time is of the essence, people need support to consider the content across multiple sources of information. In the online arena, this means assessing the credibility and content of information distributed across [the web]” (Leysia et al., 2011). This means that, “Technical support can go a long way to help collate and inject metadata that make explicit many of the inferences that the every day analyst must make to assess credibility and therefore helpfulness” (Leysia et al., 2011). In sum, the human versus computer debate vis-a-vis the verification of social media is somewhat pointless. The challenge moving forward resides in identifying the best ways to combine human cognition with machine computing. As Leysia et al. rightly note, “It is not the job of the […] tools to make decisions but rather to allow their users to reach a decision as quickly and confidently as possible.”

This may explain why Chapter 7 (which I authored) applies both human and advanced computing techniques to the verification challenge. Indeed, I explicitly advocate for a hybrid approach. In contrast, the Twitter Lie-Detector project known as Pheme apparently seeks to use machine learning alone to automatically verify online rumors as they spread on social networks. Overall, this is great news—the more groups that focus on this verification challenge, the better for those of us engaged in digital humanitarian response. It remains to be seen, however, whether machine learning alone will make Pheme a success.


In the meantime, the EU’s Social Sensor project is developing new software tools to help journalists assess the reliability of social media content (Schifferes et al., 2014). A preliminary series of interviews revealed that journalists were most interested in Social Sensor software for:

1. Predicting or alerting breaking news

2. Verifying social media content–quickly identifying who has posted a tweet or video and establishing “truth or lie”

So the Social Sensor project is developing an “Alethiometer” (Alethia is Greek for ‘truth’) to “meter the credibility of information coming from any source by examining the three Cs—Contributors, Content and Context. These seek to measure three key dimensions of credibility: the reliability of contributors, the nature of the content, and the context in which the information is presented. This reflects the range of considerations that working journalists take into account when trying to verify social media content. Each of these will be measured by multiple metrics based on our research into the steps that journalists go through manually. The results of [these] steps can be weighed and combined [metadata] to provide a sense of credibility to guide journalists” (Schifferes et al., 2014).
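To illustrate the idea of weighing and combining metrics along the three Cs, here is a minimal sketch. The metric names and weights are invented for illustration; Schifferes et al. do not publish the Alethiometer's actual formula in the passage quoted above.

```python
def credibility_score(contributor: dict, content: dict, context: dict,
                      weights=(0.4, 0.4, 0.2)) -> float:
    """Combine per-dimension metric scores (each assumed to lie in [0, 1]) into one credibility cue."""
    def average(metrics: dict) -> float:
        return sum(metrics.values()) / len(metrics) if metrics else 0.0

    w_contributors, w_content, w_context = weights
    return (w_contributors * average(contributor) +
            w_content * average(content) +
            w_context * average(context))

# Illustrative call with made-up metric names and values:
score = credibility_score(
    contributor={"account_age": 0.9, "follower_ratio": 0.6, "past_accuracy": 0.8},
    content={"has_original_media": 1.0, "sensational_language": 0.4},
    context={"corroborating_reports": 0.7},
)
```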


On our end, my colleagues and I at QCRI are continuing to collaborate with several partners to experiment with advanced computing methods to address the social media verification challenge. As noted in Chapter 7, Verily, a platform that combines time-critical crowdsourcing and critical thinking, is still in the works. We’re also continuing our collaboration on a Twitter credibility plugin (more in Chapter 7). In addition, we are exploring whether we can microtask the computation of source credibility scores using MicroMappers.

Of course, the above will sound like “snazzy technologies” to seasoned journalists with no background or interest in advanced computing. But this doesn’t seem to stop them from complaining that “Twitter search is very hit and miss;” that what Twitter “produces is not comprehensive and the filters are not comprehensive enough” (BBC social media expert, cited in Schifferes et al., 2014). As one of my PhD dissertation advisors (Clay Shirky) noted a while back, information overflow (Big Data) is due to “Filter Failure”. This is precisely why my colleagues and I are spending so much of our time developing better filters—filters powered by human and machine computing, such as AIDR. These types of filters can scale. BBC journalists on their own do not, unfortunately. But they can act on hunches and intuition based on years of hands-on professional experience.
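As a caricature of what such a hybrid filter does, here is a sketch of the human-labels-train-machine loop: a small set of crowd-labeled tweets trains a text classifier, which then filters the stream at scale. AIDR's real architecture is considerably more elaborate; this only shows the core idea, using standard scikit-learn components.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def build_filter(labeled_tweets):
    """Train a simple text classifier from crowd-labeled examples, e.g. [("tweet text", "relevant"), ...]."""
    texts, labels = zip(*labeled_tweets)
    model = make_pipeline(TfidfVectorizer(), MultinomialNB())
    model.fit(texts, labels)
    return model

def filter_stream(model, tweets):
    """Machines filter at scale; humans keep supplying labels for new events and edge cases."""
    return [t for t in tweets if model.predict([t])[0] == "relevant"]
```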

The “field” of digital information forensics has come a long way since I first wrote about how to verify social media content back in 2011. While I won’t touch on the Handbook’s many other chapters here, the entire report is an absolute must-read for anyone interested and/or working in the verification space. At the very least, have a look at Chapter 9, which combines each chapter’s verification strategies in the form of a simple checklist. Also, Chapter 10 includes a list of tools to aid in the verification process.

In the meantime, I really hope that we end the pointless debate about human versus machine. This is not an either/or issue. As a colleague once noted, what we really need is a way to combine the power of algorithms and the wisdom of the crowd with the instincts of experts.


See also:

  • Predicting the Credibility of Disaster Tweets Automatically [link]
  • Auto-Ranking Credibility of Tweets During Major Events [link]
  • Auto-Identifying Fake Images on Twitter During Disasters [link]
  • Truth in the Age of Social Media: A Big Data Challenge [link]
  • Analyzing Fake Content on Twitter During Boston Bombings [link]
  • How to Verify Crowdsourced Information from Social Media [link]
  • Crowdsourcing Critical Thinking to Verify Social Media [link]

TEDx: Using Crowdsourcing to Verify Social Media for Disaster Response


My TEDx talk on Digital Humanitarians, presented at TEDxTraverseCity. The video above is automatically forwarded to the section on Big (false) Data and the use of time-critical crowdsourcing to verify social media reports shared during disasters. The talk describes the rationale behind the Verily platform that my team and I at QCRI are developing with our partners at the Masdar Institute of Science and Technology in Abu Dhabi. The purpose of Verily is to accelerate the process of verification by crowdsourcing evidence collection and critical thinking. See my colleague ChaTo’s excellent slide deck on Verily for more information.



Using Crowdsourcing to Counter the Spread of False Rumors on Social Media During Crises

My new colleague Professor Yasuaki Sakamoto at the Stevens Institute of Technology (SIT) has been carrying out intriguing research on the spread of rumors via social media, particularly on Twitter and during crises. In his latest research, “Toward a Social-Technological System that Inactivates False Rumors through the Critical Thinking of Crowds,” Yasu uses behavioral psychology to understand why exposure to public criticism changes rumor-spreading behavior on Twitter during disasters. This fascinating research builds very nicely on the excellent work carried out by my QCRI colleague ChaTo, who used this “criticism dynamic” to show that the credibility of tweets can be predicted (by topic) without analyzing their content. Yasu’s study also seeks to find the psychological basis for Twitter’s self-correcting behavior identified by ChaTo and also by John Herman, who described Twitter as a “Truth Machine” during Hurricane Sandy.


Twitter is still a relatively new platform, but the existence and spread of false rumors is certainly not. In fact, a very interesting study from 1950 found that “in the past 1,000 years the same types of rumors related to earthquakes appear again and again in different locations.” Early academic studies on the spread of rumors revealed that “psychological factors, such as accuracy, anxiety, and importance of rumors, affect rumor transmission.” One such study proposed that the spread of a rumor “will vary with the importance of the subject to the individuals concerned times the ambiguity of the evidence pertaining to the topic at issue.” Later studies added “anxiety as another key element in rumormongering,” since “the likelihood of sharing a rumor was related to how anxious the rumor made people feel.” At the same time, however, the literature also reveals that countermeasures do exist. Critical thinking, for example, decreases the spread of rumors. The literature defines critical thinking as “reasonable reflective thinking focused on deciding what to believe or do.”
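That early proposition is usually summarized as a simple multiplicative relationship, restated here in symbols:

```latex
R \;\propto\; i \times a
```

where R is the amount of rumor in circulation, i is the importance of the subject to the individuals concerned, and a is the ambiguity of the evidence. Because the relationship is multiplicative rather than additive, a rumor should die out when either factor approaches zero: a claim that nobody cares about, or one whose evidence is unambiguous, does not spread.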

“Given the growing use and participatory nature of social media, critical thinking is considered an important element of media literacy that individuals in a society should possess.” Indeed, while social media can “help people make sense of their situation during a disaster, social media can also become a rumor mill and create social problems.” As discussed above, psychological factors can influence rumor spreading, particularly when experiencing stress and mental pressure following a disaster. Recent studies have also corroborated this finding, confirming that “differences in people’s critical thinking ability […] contributed to the rumor behavior.” So Yasu and his team ask the following interesting question: can critical thinking be crowdsourced?


“Not everyone needs to be a critical thinker all the time,” writes Yasu et al. As long as some individuals are good critical thinkers in a specific domain, their timely criticisms can result in an emergent critical thinking social system that can mitigate the spread of false information. This goes to the heart of the self-correcting behavior often observed on social media and Twitter in particular. Yasu’s insight also provides a basis for a bounded crowdsourcing approach to disaster response. More on this here, here and here.

“Related to critical thinking, a number of studies have paid attention to the role of denial or rebuttal messages in impeding the transmission of rumor.” This is the more “visible” dynamic behind the self-correcting behavior observed on Twitter during disasters. So while some may spread false rumors, others often try to counter this spread by posting tweets criticizing rumor-tweets directly. The following questions thus naturally arise: “Are criticisms on Twitter effective in mitigating the spread of false rumors? Can exposure to criticisms minimize the spread of rumors?”

Yasu and his colleagues set out to test the following hypotheses: exposure to criticisms reduces people’s intent to spread rumors, and it does so by lowering the perceived accuracy, anxiety, and importance of those rumors. They tested these hypotheses on 87 Japanese undergraduate and graduate students by using 20 rumor-tweets related to the 2011 Japan Earthquake and 10 criticism-tweets that criticized the corresponding rumor-tweets. For example:

Rumor-tweet: “Air drop of supplies is not allowed in Japan! I thought it has already been done by the Self-Defense Forces. Without it, the isolated people will die! I’m trembling with anger. Please retweet!”

Criticism-tweet: “Air drop of supplies is not prohibited by the law. Please don’t spread rumor. Please see 4-(1)-4-.”

The researchers found that “exposing people to criticisms can reduce their intent to spread rumors that are associated with the criticisms, providing support for the system.” In fact, “Exposure to criticisms increased the proportion of people who stop the spread of rumor-tweets approximately 1.5 times [150%]. This result indicates that whether a receiver is exposed to rumor or criticism first makes a difference in her decision to spread the rumor. Another interpretation of the result is that, even if a receiver is exposed to a number of criticisms, she will benefit less from this exposure when she sees rumors first than when she sees criticisms before rumors.”


Findings also revealed three psychological factors that were related to the differences in the spread of rumor-tweets: one’s own perception of the tweet’s accuracy, the anxiety caused by the tweet, and the tweet’s perceived importance. The results also indicate that “exposure to criticisms reduces the perceived accuracy of the succeeding rumor-tweets, paralleling the findings by previous research that refutations or denials decrease the degree of belief in rumor.” In addition, the perceived accuracy of criticism-tweets by those exposed to rumors first was significantly higher than the criticism-first group. The results were similar vis-à-vis anxiety. “Seeing criticisms before rumors reduced anxiety associated with rumor-tweets relative to seeing rumors first. This result is also consistent with previous research findings that denial messages reduce anxiety about rumors. Participants in the criticism-first group also perceived rumor-tweets to be less important than those in the rumor-first group.” The same was true vis-à-vis the perceived importance of a tweet. That said, “When the rumor-tweets are perceived as more accurate, the intent to spread the rumor-tweets is stronger; when rumor-tweets cause more anxiety, the intent to spread the rumor-tweets is stronger; when the rumor-tweets are perceived as more important, the intent to spread the rumor-tweets is also stronger.”

So how do we use these findings to enhance the critical thinking of crowds and design crowdsourced verification platforms such as Verily? Ideally, such a platform would connect rumor-tweets with criticism-tweets directly. “By this design, information system itself can enhance the critical thinking of the crowds.” That said, the findings clearly show that sequencing matters—that is, being exposed to rumor-tweets first vs criticism-tweets first makes a big difference vis-à-vis rumor contagion. The purpose of a platform like Verily is to act as a repository for crowdsourced criticisms and rebuttals; that is, crowdsourced critical thinking. Thus, the majority of Verily users would first be exposed to questions about rumors, such as: “Has the Vincent Thomas Bridge in Los Angeles been destroyed by the Earthquake?” Users would then be exposed to the crowdsourced criticisms and rebuttals.
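A minimal sketch of that pairing-and-ordering idea follows. The data structures and function are hypothetical, not Verily's actual code, but they capture the two design points above: rumor-tweets are linked to their criticisms, and users see the criticisms first whenever any exist.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RumorThread:
    """A rumor-tweet linked to the criticism-tweets that rebut it (hypothetical model)."""
    rumor_tweet: str
    criticism_tweets: List[str]

def exposure_order(thread: RumorThread, criticism_first: bool = True) -> List[str]:
    """Order exposure so users see criticisms before the rumor whenever possible.

    Sequencing matters: criticism-first exposure reduced intent to spread,
    perceived accuracy, and anxiety in the study discussed above.
    """
    if criticism_first and thread.criticism_tweets:
        return thread.criticism_tweets + [thread.rumor_tweet]
    return [thread.rumor_tweet] + thread.criticism_tweets
```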

In conclusion, the spread of false rumors during disasters will never go away. “It is human nature to transmit rumors under uncertainty.” But social-technological platforms like Verily can provide a repository of critical thinking and educate users on critical thinking processes themselves. In this way, we may be able to enhance the critical thinking of crowds.



See also:

  • Wiki on Truthiness resources (Link)
  • How to Verify and Counter Rumors in Social Media (Link)
  • Social Media and Life Cycle of Rumors during Crises (Link)
  • How to Verify Crowdsourced Information from Social Media (Link)
  • Analyzing the Veracity of Tweets During a Crisis (Link)
  • Crowdsourcing for Human Rights: Challenges and Opportunities for Information Collection & Verification (Link)
  • The Crowdsourcing Detective: Crisis, Deception and Intrigue in the Twittersphere (Link)

Automatically Ranking the Credibility of Tweets During Major Events

In their study, “Credibility Ranking of Tweets during High Impact Events,” authors Aditi Gupta and Ponnurangam Kumaraguru “analyzed the credibility of information in tweets corresponding to fourteen high impact news events of 2011 around the globe.” According to their analysis, “30% of total tweets about an event contained situational information about the event while 14% was spam.” In addition, about 17% of total tweets contained credible situational awareness information.


The study analyzed over 35 million tweets posted by ~8 million users based on current trending topics. From this data, the authors identified 14 major events reflected in the tweets. These included the UK riots, Libya crisis, Virginia earthquake and Hurricane Irene, for example.

“Using regression analysis, we identified the important content and source based features, which can predict the credibility of information in a tweet. Prominent content based features were number of unique characters, swear words, pronouns, and emoticons in a tweet, and user based features like the number of followers and length of username. We adopted a supervised machine learning and relevance feedback approach using the above features, to rank tweets according to their credibility score. The performance of our ranking algorithm significantly enhanced when we applied re-ranking strategy. Results show that extraction of credible information from Twitter can be automated with high confidence.”
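To make the pipeline concrete, here is a rough sketch of the general approach: extract the kinds of content- and user-based features the quote mentions, train a supervised model on labeled tweets, and rank unseen tweets by predicted credibility. The feature extraction is deliberately crude and the model choice (logistic regression) is mine for illustration; this is not the authors' implementation.

```python
import re
from sklearn.linear_model import LogisticRegression

SWEAR_WORDS = {"damn", "hell"}                       # placeholder lexicon
PRONOUNS = {"i", "you", "he", "she", "we", "they"}

def tweet_features(tweet: dict) -> list:
    """Content- and user-based features of the kind described in the paper."""
    text = tweet["text"]
    words = text.lower().split()
    return [
        len(set(text)),                              # number of unique characters
        sum(w in SWEAR_WORDS for w in words),        # swear words
        sum(w in PRONOUNS for w in words),           # pronouns
        len(re.findall(r"[:;][-~]?[()DPp]", text)),  # emoticons (rough regex)
        tweet["user_followers"],                     # number of followers
        len(tweet["username"]),                      # length of username
    ]

def rank_by_credibility(labeled, unlabeled):
    """Train on (tweet, 0/1 credibility label) pairs, then rank unseen tweets by score."""
    X = [tweet_features(t) for t, _ in labeled]
    y = [label for _, label in labeled]
    model = LogisticRegression(max_iter=1000).fit(X, y)
    scores = model.predict_proba([tweet_features(t) for t in unlabeled])[:, 1]
    return sorted(zip(unlabeled, scores), key=lambda pair: pair[1], reverse=True)
```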

The paper is available here (PDF). For more applied research on “information forensics,” please see this link.

See also:

  • Analyzing Fake Content on Twitter During Boston Bombings [link]
  • Predicting the Credibility of Disaster Tweets Automatically [link]
  • Auto-Identifying Fake Images on Twitter During Disasters [link]
  • How to Verify Crowdsourced Information from Social Media [link]
  • Crowdsourcing Critical Thinking to Verify Social Media [link]

Rapidly Verifying the Credibility of Information Sources on Twitter

One of the advantages of working at QCRI is that I’m regularly exposed to peer-reviewed papers presented at top computing conferences. This is how I came across an initiative called “Seriously Rapid Source Review” or SRSR. As many iRevolution readers know, I’m very interested in information forensics as applied to crisis situations. So SRSR certainly caught my attention.

The team behind SRSR took a human centered design approach in order to integrate journalistic practices within the platform. There are four features worth noting in this respect. The first feature to note in the figure below is the automated filter function, which allows one to view tweets generated by “Ordinary People,” “Journalists/Bloggers,” “Organizations,” “Eyewitnesses” and “Uncategorized.” The second feature, Location, “shows a set of pie charts indicating the top three locations where the user’s Twitter contacts are located. This cue provides more location information and indicates whether the source has a ‘tie’ or other personal interest in the location of the event, an aspect of sourcing exposed through our preliminary interviews and suggested by related work.”


The third feature worth noting is the “Eyewitness” icon. The SRSR team developed the first ever automatic classifier to identify eyewitness reports shared on Twitter. My team and I at QCRI are developing a second one that focuses specifically on automatically classifying eyewitness reports during sudden-onset natural disasters. The fourth feature is “Entities,” which displays the top five entities that the user has mentioned in their tweet history. These include references to organizations, people and places, which can reveal important patterns about the twitter user in question.
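As a toy illustration of the kind of automated cues SRSR surfaces, the heuristics below bucket an account into the source categories and flag possible eyewitness language. These rules are invented and far cruder than SRSR's trained classifiers; they only show the shape of the problem.

```python
def classify_source(profile: dict) -> str:
    """Bucket a Twitter account into SRSR-style categories using crude keyword heuristics."""
    bio = profile.get("bio", "").lower()
    if profile.get("is_organization") or "official account" in bio:
        return "Organization"
    if any(k in bio for k in ("journalist", "reporter", "blogger", "editor", "correspondent")):
        return "Journalist/Blogger"
    return "Ordinary Person"

def looks_like_eyewitness(tweet_text: str) -> bool:
    """Rough first-person, on-the-ground cues; a real classifier would learn these from labeled data."""
    cues = ("i just saw", "i can see", "right outside my", "we just felt", "i heard an explosion")
    return any(cue in tweet_text.lower() for cue in cues)
```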

Journalists participating in this applied research found the “Location” feature particularly important when assessing the credibility of users on Twitter. They noted that “sources that had friends in the location of the event were more believable, indicating that showing friends’ locations can be an indicator of credibility.” One journalist shared the following: “I think if it’s someone without any friends in the region that they’re tweeting about then that’s not nearly as authoritative, whereas if I find somebody who has 50% of friends are in [the disaster area], I would immediately look at that.”

In addition, the automatic identification of “eyewitnesses” was deemed essential by journalists who participated in the SRSR study. This should not be surprising since “news organizations often use eyewitnesses to add credibility to reports by virtue of the correspondent’s on-site proximity to the event.” Indeed, “Witnessing and reporting on what the journalist had witnessed have long been seen as quintessential acts of journalism.” To this end, “social media provides a platform where once passive witnesses can become active and share their eyewitness testimony with the world, including with journalists who may choose to amplify their report.”

In sum, SRSR could be used to accelerate the verification of social media content, i.e., go beyond source verification alone. For more on SRSR, please see this computing paper (PDF), which was authored by Nicholas Diakopoulos, Munmun De Choudhury and Mor Naaman.

Six Degrees of Separation: Implications for Verifying Social Media

The Economist recently published this insightful article entitled “Six Degrees of Mobilisation: To what extent can social networking make it easier to find people and solve real-world problems?” The notion of six degrees of separation comes from Stanley Milgram’s experiment in the 1960s, which found that there were, on average, six degrees of separation between any two people in the US. Last year, Facebook found that users on the social network were separated by an average of 4.7 hops. The Economist thus asks the following, fascinating question:

“Can this be used to solve real-world problems, by taking advantage of the talents and connections of one’s friends, and their friends? That is the aim of a new field known as social mobilisation, which treats the population as a distributed knowledge resource which can be tapped using modern technology.”

The article refers to DARPA’s Red Balloon Challenge, which I already blogged about here: “Time-Critical Crowdsourcing for Social Mobilization and Crowd-Solving.”  The Economist also references DARPA’s TagChallenge. In both cases, the winning teams leveraged social media using crowdsourcing and clever incentive mechanisms. Can this approach also be used to verify social media content during a crisis?

This new study on disasters suggests that the “degrees of separation” between any two organizations in the field is 5. So if the location of red balloons and individuals can be crowdsourced surprisingly quickly, then can the evidence necessary to verify social media content during a disaster be collected as rapidly and reliably? If we are only separated by four to six degrees, then this would imply that it only takes that many hops to find someone connected to me (albeit indirectly) who could potentially confirm or disprove the authenticity of a particular piece of information. This approach was used very successfully in Kyrgyzstan a couple of years ago. Can we develop a platform to facilitate this process? And if so, what design features (e.g., gamification) are necessary to mobilize participants and make this tool a success?
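To see why a small degree of separation makes such a search tractable, here is a toy breadth-first search over a social graph: starting from the requester, it fans out hop by hop until it finds someone located in the affected area. The graph and the location predicate are placeholders; in practice they would come from a social platform's API and whatever location signals are available.

```python
from collections import deque

def find_nearby_contact(graph: dict, start: str, in_disaster_area, max_hops: int = 6):
    """Return (person, hops) for the first contact within max_hops who is in the affected area.

    `graph` maps each person to a list of friends; `in_disaster_area` is a predicate on a person.
    """
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        person, hops = queue.popleft()
        if hops > 0 and in_disaster_area(person):
            return person, hops
        if hops < max_hops:
            for friend in graph.get(person, []):
                if friend not in seen:
                    seen.add(friend)
                    queue.append((friend, hops + 1))
    return None  # nobody found within max_hops
```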

Accelerating the Verification of Social Media Content

Journalists have already been developing a multitude of tactics to verify user-generated content shared on social media. As noted here, the BBC has a dedicated User-Generated Content (UGC) Hub that is tasked with verifying social media information. The UK Guardian, Al-Jazeera, CNN and others are also developing competency in what I refer to as “information forensics”. It turns out there are many tactics that can be used to try and verify social media content; the catch is that applying most of them is highly time consuming.

So building a decision-tree that combines these tactics is the way to go. But doing digital detective work online is still a time-intensive effort. Numerous pieces of digital evidence need to be collected in order to triangulate and ascertain the veracity of just one given report. We therefore need tools that can accelerate the processing of a verification decision-tree. To be sure, information is the most perishable commodity in a crisis—for both journalists and humanitarian professionals. This means that after a certain period of time, it no longer matters whether a report has been verified or not, because the news cycle or crisis has unfolded further in the meantime.

This is why I’m a fan of tools like Rapportive. The point is to have the decision-tree not only serve as an instruction-set on what types of evidence to collect but to actually have a platform that collects that information. There are two general strategies that could be employed to accelerate and scale the verification process. One is to split the tasks listed in the decision-tree into individual micro-tasks that can be distributed and independently completed using crowdsourcing. A second strategy is to develop automated ways to collect the evidence.
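A bare-bones sketch of what a machine-readable version of such a decision-tree might look like follows, with each check flagged for automation or for crowdsourced micro-tasking. The checks listed are examples of common verification steps, not a canonical list, and the dispatch logic is purely illustrative.

```python
VERIFICATION_CHECKS = [
    # (question to answer, how it is best carried out)
    ("Does reverse image search return older copies of the photo?",  "automated"),
    ("Do the file's metadata timestamps match the claimed date?",    "automated"),
    ("Has this account posted from the claimed location before?",    "automated"),
    ("Do landmarks in the image match the claimed location?",        "human microtask"),
    ("Does the weather in the footage match records for that day?",  "human microtask"),
]

def dispatch(checks):
    """Split the decision-tree into automated jobs and crowdsourced micro-tasks."""
    automated = [q for q, how in checks if how == "automated"]
    microtasks = [q for q, how in checks if how == "human microtask"]
    return automated, microtasks

automated_jobs, crowd_tasks = dispatch(VERIFICATION_CHECKS)
```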

Of course, both strategies could also be combined. Indeed, some tasks are far better suited for automation while others can only be carried out by humans. In sum, the idea here is to considerably reduce the time it takes journalists and humanitarians to verify user-generated content posted on social media. I am also particularly interested in gamification approaches to solving major challenges, like the protein-folding game Foldit. So if you know of any projects seeking to solve the verification challenge described above in novel ways, I’d be very grateful for your input in the comments section below. Thank you!

Using Rapportive for Source and Information Verification

I’ve been using Rapportive for several weeks now and have found the tool rather useful for assessing the trustworthiness of a source. Rapportive is an extension for Gmail that allows you to automatically visualize an email sender’s complete profile information right inside your inbox.

So now, when receiving emails from strangers, I can immediately see their profile picture, short bio, twitter handle (including latest tweets), links to their Facebook page, Google+ account, LinkedIn profile, blog, SkypeID, recent pictures they’ve posted, etc. As explained in my blog posts on information forensics, this type of meta-data can be particularly useful when assessing the importance or credibility of a source. To be sure, having a source’s entire digital footprint on hand can be quite revealing (as marketers know full well). Moreover, this type of meta-data was exactly what the Standby Volunteer Task Force was manually looking for when they sought to verify the identity of volunteers during the Libya Crisis Map project with the UN last year.

Obviously, the use of Rapportive alone is not a silver bullet to fully determine the credibility of a source or the authenticity of a source’s identity. But it does add contextual information that can make a difference when seeking to better understand the reliability of an email. I’d be curious to know whether Rapportive will be available as a stand-alone platform in the future so it can be used outside of Gmail. A simple web-based search box that allows one to search by email address, twitter handle, etc., with the result being a structured profile of that individual’s entire digital footprint. Anyone know whether similar platforms already exist? They could serve as ideal plugins for platforms like CrisisTracker.
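The stand-alone lookup imagined above could return something as simple as a structured profile keyed by email address or handle. The sketch below is hypothetical (Rapportive exposed no such public API), but it shows what a structured profile of an individual's digital footprint might look like as a data structure, merged from whatever sources respond.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class DigitalFootprint:
    """Aggregated public profile information for one identifier (email address or handle)."""
    identifier: str
    name: Optional[str] = None
    bio: Optional[str] = None
    accounts: Dict[str, str] = field(default_factory=dict)   # e.g. {"twitter": "@handle"}
    recent_posts: List[str] = field(default_factory=list)

def lookup(identifier: str, sources) -> DigitalFootprint:
    """Query each source (a callable returning a partial profile dict) and merge the results."""
    footprint = DigitalFootprint(identifier=identifier)
    for source in sources:
        partial = source(identifier) or {}
        footprint.name = footprint.name or partial.get("name")
        footprint.bio = footprint.bio or partial.get("bio")
        footprint.accounts.update(partial.get("accounts", {}))
        footprint.recent_posts.extend(partial.get("recent_posts", []))
    return footprint
```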