Category Archives: Information Forensics

The Best of iRevolution in 2013

iRevolution crossed the 1 million hits mark in 2013, so big thanks to iRevolution readers for spending time here during the past 12 months. This year also saw close to 150 new blog posts published on iRevolution. Here is a short selection of the Top 15 iRevolution posts of 2013:

How to Create Resilience Through Big Data
[Link]

Humanitarianism in the Network Age: Groundbreaking Study
[Link]

Opening Keynote Address at CrisisMappers 2013
[Link]

The Women of Crisis Mapping
[Link]

Data Protection Protocols for Crisis Mapping
[Link]

Launching: SMS Code of Conduct for Disaster Response
[Link]

MicroMappers: Microtasking for Disaster Response
[Link]

AIDR: Artificial Intelligence for Disaster Response
[Link]

Social Media, Disaster Response and the Streetlight Effect
[Link]

Why the Share Economy is Important for Disaster Response
[Link]

Automatically Identifying Fake Images on Twitter During Disasters
[Link]

Why Anonymity is Important for Truth & Trustworthiness Online
[Link]

How Crowdsourced Disaster Response Threatens Chinese Gov
[Link]

Seven Principles for Big Data and Resilience Projects
[Link]

#NoShare: A Personal Twist on Data Privacy
[Link]

I’ll be mostly offline until February 1st, 2014 to spend time with family & friends, and to get started on a new exciting & ambitious project. I’ll be making this project public in January via iRevolution, so stay tuned. In the meantime, wishing iRevolution readers a very Merry Happy Everything!


#Westgate Tweets: A Detailed Study in Information Forensics

My team and I at QCRI have just completed a detailed analysis of the 13,200+ tweets posted from one hour before the attacks began until two hours into the attack. The purpose of this study, which will be launched at CrisisMappers 2013 in Nairobi tomorrow, is to make sense of the Big (Crisis) Data generated during the first hours of the siege. A summary of our results is displayed below. The full results of our analysis and a discussion of findings are available as a GoogleDoc and also as a PDF. The purpose of this public GoogleDoc is to solicit comments on our methodology so as to inform the next phase of our research. Indeed, our aim is to categorize and study the entire Westgate dataset (730,000+ tweets) in the coming months. In the meantime, sincere appreciation goes to my outstanding QCRI Research Assistants, Ms. Brittany Card and Ms. Justine MacKinnon, for their hard work on the coding and analysis of the 13,200+ tweets. Our study builds on this preliminary review.

The following 7 figures summarize the main findings of our study. These are discussed in more detail in the GoogleDoc/PDF.

Figure 1: Who Authored the Most Tweets?

Figure 2: Frequency of Tweets by Eyewitnesses Over Time

Figure 3: Who Were the Tweets Directed At?

Figure 4: What Content Did Tweets Contain?

Figure 5: What Terms Were Used to Reference the Attackers?

Figure 6: What Terms Were Used to Reference Attackers Over Time?

Figure 7: What Kind of Multimedia Content Was Shared?

Analyzing Fake Content on Twitter During Boston Marathon Bombings

As iRevolution readers already know, the application of Information Forensics to social media is one of my primary areas of interest. So I’m always on the lookout for new and related studies, such as this one (PDF), which was just published by colleagues of mine in India. The study by Aditi Gupta et al. analyzes fake content shared on Twitter during the Boston Marathon Bombings earlier this year.


Gupta et al. collected close to 8 million unique tweets posted by 3.7 million unique users between April 15th and 19th, 2013. The table below provides more details. The authors found that rumors and fake content comprised 29% of the content that went viral on Twitter, while 51% of the content constituted generic opinions and comments. The remaining 20% relayed true information. Interestingly, approximately 75% of fake tweets were posted via mobile devices, compared to 64% of true tweets.

[Table 1, Gupta et al.]

The authors also found that many users with high social reputation and verified accounts were responsible for spreading the bulk of the fake content posted to Twitter. Indeed, the study shows that fake content did not travel rapidly during the first hour after the bombing. Rumors and fake information only went viral after Twitter users with large numbers of followers started propagating the fake content. To this end, “determining whether some information is true or fake, based on only factors based on high number of followers and verified accounts is not possible in the initial hours.”

Gupta et al. also identified close to 32,000 new Twitter accounts created between April 15-19 that also posted at least one tweet about the bombings. About 20% (6,073 accounts) of these new accounts were subsequently suspended by Twitter. The authors found that 98.7% of these suspended accounts did not include the word Boston in their names and usernames. They also note that some of these deleted accounts were “quite influential” during the Boston tragedy. The figure below depicts the number of suspended Twitter accounts created in the hours and days following the blast.

[Figure 2, Gupta et al.]

The authors also carried out some basic social network analysis of the suspended Twitter accounts. First, they removed from the analysis all suspended accounts that did not interact with each other, which left just 69 accounts. Next, they analyzed the network topology of these 69 accounts, which produced four distinct graph structures: Single Link, Closed Community, Star Topology and Self-Loops. These are displayed in the figure below (click to enlarge).

[Figure 3, Gupta et al.]

The two most interesting graphs are the Closed Community and Star Topology graphs: the second and third graphs in the figure above.

Closed Community: Users that retweet and mention each other, forming a closed community as indicated by the high closeness centrality values produced by the social network analysis. “All these nodes have similar usernames too, all usernames have the same prefix and only numbers in the suffixes are different. This indicates that either these profiles were created by same or similar minded people for posting common propaganda posts.” Gupta et al. analyzed the content posted by these users and found that all were “tweeting the same propaganda and hate filled tweet.”
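To make this closed-community pattern concrete, here is a minimal sketch (not the authors' code; the account names, threshold values and interaction list are made up for illustration) that builds a directed retweet/mention graph with networkx, computes closeness centrality, and checks for the shared-username-prefix red flag:

```python
import networkx as nx
from os.path import commonprefix

# Hypothetical retweet/mention interactions among suspended accounts
interactions = [
    ("promo_01", "promo_02"), ("promo_02", "promo_03"),
    ("promo_03", "promo_01"), ("promo_01", "promo_03"),
]

G = nx.DiGraph(interactions)

# A closed community shows uniformly high closeness centrality:
# every account reaches every other account in a few hops
closeness = nx.closeness_centrality(G)

# Near-identical usernames (same prefix, only the numeric suffix
# differs) are a second red flag for coordinated account creation
prefix = commonprefix(sorted(G.nodes))
flagged = all(c > 0.5 for c in closeness.values()) and len(prefix) >= 5
```

Automating checks like these over the full interaction graph is the kind of step that could eventually flag coordinated propaganda accounts as they appear.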

Star Topology: Easily mistaken for the authentic “BostonMarathon” Twitter account, the fake account “BostonMarathons” created plenty of confusion. Many users propagated the fake content posted by the BostonMarathons account. As the authors note, “Impersonation or creating fake profiles is a crime that results in identity theft and is punishable by law in many countries.”

The automatic detection of these network structures on Twitter may enable us to detect and counter fake content in the future. In the meantime, my colleagues and I at QCRI are collaborating with Aditi Gupta et al. to develop a “Credibility Plugin” for Twitter based on this analysis and earlier peer-reviewed research carried out by my colleague ChaTo. Stay tuned for updates.


See also:

  • Boston Bombings: Analyzing First 1,000 Seconds on Twitter [link]
  • Taking the Pulse of the Boston Bombings on Twitter [link]
  • Predicting the Credibility of Disaster Tweets Automatically [link]
  • Auto-Ranking Credibility of Tweets During Major Events [link]
  • Auto-Identifying Fake Images on Twitter During Disasters [link]
  • How to Verify Crowdsourced Information from Social Media [link]
  • Crowdsourcing Critical Thinking to Verify Social Media [link]

World Disaster Report: Next Generation Humanitarian Technology

This year’s World Disaster Report was just released this morning. I had the honor of authoring Chapter 3 on “Strengthening Humanitarian Information: The Role of Technology.” The chapter focuses on the rise of “Digital Humanitarians” and explains how “Next Generation Humanitarian Technology” is used to manage Big (Crisis) Data. The chapter complements the groundbreaking report “Humanitarianism in the Network Age” published by UN OCHA earlier this year.

The key topics addressed in the chapter include:

  • Big (Crisis) Data
  • Self-Organized Disaster Response
  • Crowdsourcing & Bounded Crowdsourcing
  • Verifying Crowdsourced Information
  • Volunteer & Technical Communities
  • Digital Humanitarians
  • Libya Crisis Map
  • Typhoon Pablo Crisis Map
  • Syria Crisis Map
  • Microtasking for Disaster Response
  • MicroMappers
  • Machine Learning for Disaster Response
  • Artificial Intelligence for Disaster Response (AIDR)
  • American Red Cross Digital Operations Center
  • Data Protection and Security
  • Policymaking for Humanitarian Technology

I’m particularly interested in getting feedback on this chapter, so feel free to pose any comments or questions you may have in the comments section below.


See also:

  • What is Big (Crisis) Data? [link]
  • Humanitarianism in the Network Age [link]
  • Predicting Credibility of Disaster Tweets [link]
  • Crowdsourced Verification for Disaster Response [link]
  • MicroMappers: Microtasking for Disaster Response [link]
  • AIDR: Artificial Intelligence for Disaster Response [link]
  • Research Agenda for Next Generation Humanitarian Tech [link]

Forensics Analysis of #Westgate Tweets (Updated)

Update 1: Our original Twitter collection of Westgate-related tweets included the following hashtags: #Kenya, #Nairobi #WestgateAttack, #WestagateMall, #WestgatemallAttack, #Westgateshootout & #Westgate. While we overlooked #Westlands and Westlands, we have just fixed the oversight. This explains why the original results below differed from the iHub’s analysis which was based on tweets with the keywords Westgate and Westlands.

Update 2: The list below of first tweets to report the attack has been updated to include tweets referring to Westlands. These are denoted by an asterisk (*). 

I’m carrying out some preliminary “information forensics” research on the 740,000+ tweets posted during the Westgate attack. More specifically, I’m looking for any clues in the hours leading up to the attack that may reveal something out of the ordinary prior to the siege. Other questions I’m hoping to answer: Were any tweets posted during the crisis actionable? Did they add situational awareness? What kind of multimedia content was shared? Which tweets were posted by eyewitnesses? Were any tweets posted by the attackers or their supporters? If so, did these carry tactical information?


If you have additional suggestions on what else to search for, please feel free to post them in the comments section below; thank you very much. I’ll be working with QCRI research assistants over the next few weeks to dive deeper into the first 24 hours of the attack as reported on Twitter. This research would not be possible were it not for my colleagues at GNIP, who very kindly granted me access to their platform to download all the tweets. I’ve just reviewed the first hour of tweets (which proved to be highly emotional, as expected). Below are the very first tweets posted about the attack.

[12:38:20 local time]*
gun shots in westlands? wtf??

[12:41:49]*
Weird gunshot like sounds in westlands : (

[12:42:35]
Explosions and gunfight ongoing in #nairobi 

[12:42:38]
Something really bad goin on at #Westgate. Gunshots!!!! Everyone’s fled. 

[12:43:17] *
Somewhere behind Westlands? What’s up RT @[username]: Explosions and gunfight ongoing in #nairobi

[12:44:03]
Are these gunshots at #Westgate? Just heard shooting from the road behind sarit, sounded like it was coming from westgate 

[12:44:37]*
@[username] shoot out at westgate westlands mall. going on for the last 10 min

[12:44:38]
Heavily armed thugs have taken over #WestGate shopping mall. Al occupants and shoppers are on the floor. Few gunshots heard…more to follow 

[12:44:51]*
did anyone else in westlands hear that? #KOT #Nairobi 

[12:45:04]
Seems like explosions and small arms fire are coming from Westlands or Gigiri #nairobi 

[12:46:12]
Gun fight #westgate… @ntvkenya @KTNKenya @citizentvkenya any news… 

[12:46:44]*
Several explosions followed by 10 minutes of running gunfight in Nairobi westlands

[12:46:59]
Small arms fire is continuing to be exchanged intermittently. #nairobi

[12:46:59]
Something’s going on around #Westgate #UkayCentre area. Keep away if you can

[12:47:54]
Gunshots and explosions heard around #Westgate anybody nearby? #Westlands

[12:48:33]*
@KenyaRedCross explosions and gunshots heard near Westgate Mall in Westlands. Fierce shoot out..casualties probable

[12:48:36]
Shoot to kill order #westgate

See also:

  • We Are Kenya: Global Map of #Westgate Tweets [Link]
  • Did Terrorists Use Twitter to Increase Situational Awareness? [Link]
  • Analyzing Tweets Posted During Mumbai Terrorist Attacks [Link]
  • Web 2.0 Tracks Attacks on Mumbai [Link]


TEDx: Using Crowdsourcing to Verify Social Media for Disaster Response


My TEDx talk on Digital Humanitarians, presented at TEDxTraverseCity. The video above is set to start at the section on Big (false) Data and the use of time-critical crowdsourcing to verify social media reports shared during disasters. The talk describes the rationale behind the Verily platform that my team and I at QCRI are developing with our partners at the Masdar Institute of Science and Technology in Abu Dhabi. The purpose of Verily is to accelerate the process of verification by crowdsourcing evidence collection and critical thinking. See my colleague ChaTo’s excellent slide deck on Verily for more information.



Automatically Identifying Fake Images Shared on Twitter During Disasters

Artificial Intelligence (AI) can be used to automatically predict the credibility of tweets generated during disasters. AI can also be used to automatically rank the credibility of tweets posted during major events. Aditi Gupta et al. applied these same information forensics techniques to automatically identify fake images posted on Twitter during Hurricane Sandy. Using a decision tree classifier, the authors were able to predict which images were fake with an accuracy of 97%. Their analysis also revealed that retweets accounted for 86% of all tweets linking to fake images. In addition, their results showed that 90% of these retweets were posted by just 30 Twitter users.


The authors collected the URLs of fake images shared during the hurricane by drawing on the UK Guardian’s list and other sources. They compared these links with 622,860 tweets that contained links and the words “Sandy” and “hurricane,” posted between October 20th and November 1st, 2012. Just over 10,300 of these tweets and retweets contained links to URLs of fake images, while close to 5,800 tweets and retweets pointed to real images. Of the ~10,300 tweets linking to fake images, 84% (or 9,000) were retweets. Interestingly, these retweets spiked about 12 hours after the original tweets were posted. This spike was driven by just 30 Twitter users. Furthermore, the vast majority of retweets weren’t made by Twitter followers but rather by those following certain hashtags.

Gupta et al. also studied the profiles of users who tweeted or retweeted fake images (User Features), as well as the content of their tweets (Tweet Features), to determine whether these features (listed below) might be predictive of whether a tweet points to a fake image. Their decision tree classifier achieved an accuracy of over 90%, which is remarkable. But the authors note that this high accuracy score is due to “the similar nature of many tweets since a lot of tweets are retweets of other tweets in our dataset.” In any event, their analysis also reveals that Tweet-based Features (such as length of tweet, number of uppercase letters, etc.) were far more accurate in predicting whether or not a tweeted image was fake than User-based Features (such as number of friends, followers, etc.). One feature that was overlooked, however, is gender.
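To give a feel for how such tweet-based features feed a decision tree, here is a minimal sketch using scikit-learn. The tiny training set and the exact feature list are my own illustrative assumptions, not Gupta et al.’s data or feature set:

```python
from sklearn.tree import DecisionTreeClassifier

# Illustrative tweet-based features only: length, word count,
# number of uppercase characters, number of exclamation marks
def tweet_features(text):
    return [
        len(text),
        len(text.split()),
        sum(ch.isupper() for ch in text),
        text.count("!"),
    ]

# Tiny synthetic training set: 1 = links to a fake image, 0 = real
tweets = [
    ("OMG!!! SHARK swimming in the streets RT NOW!!!", 1),
    ("UNBELIEVABLE photo!!! RT RT RT", 1),
    ("Power outage reported in lower Manhattan, stay safe", 0),
    ("Evacuation center open at the high school gym", 0),
]
X = [tweet_features(text) for text, _ in tweets]
y = [label for _, label in tweets]

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
pred = clf.predict([tweet_features("CRAZY picture!!! MUST SEE RT")])
```

With real data one would of course hold out a test set; the point of the sketch is simply that shallow, content-level signals can be enough for a tree to split on.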


In conclusion, “content and property analysis of tweets can help us in identifying real image URLs being shared on Twitter with a high accuracy.” These results reinforce the evidence that machine computing and automated techniques can be used for information forensics as applied to images shared on social media. In terms of future work, the authors Aditi Gupta, Hemank Lamba, Ponnurangam Kumaraguru and Anupam Joshi plan to “conduct a larger study with more events for identification of fake images and news propagation.” They also hope to expand their study to include the detection of “rumors and other malicious content spread during real world events apart from images.” Lastly, they “would like to develop a browser plug-in that can detect fake images being shared on Twitter in real-time.” Their full paper is available here.

Needless to say, all of this is music to my ears. Such a plugin could be added to our Artificial Intelligence for Disaster Response (AIDR) platform, not to mention our Verily platform, which seeks to crowdsource the verification of social media reports (including images and videos) during disasters. What I also really value about the authors’ approach is how pragmatic they are with their findings. That is, by noting their interest in developing a browser plugin, they are applying their data science expertise for social good. As per my previous blog post, this focus on social impact is particularly rare. So we need more data scientists like Aditi Gupta et al. This is why I was already in touch with Aditi last year given her research on automatically ranking the credibility of tweets. I’ve just reached out to her again to explore ways to collaborate with her and her team.


Over 2 Million Tweets from Oklahoma Tornado Automatically Processed (Updated)

Update: We have now processed a total of 2 million tweets (up from 1 million).

My colleague Hemant Purohit at QCRI has been working with us on automatically extracting needs and offers of help posted on Twitter during disasters. When the 2-mile wide, Category 4 Tornado struck Moore, Oklahoma, he immediately began to collect relevant tweets about the Tornado’s impact and applied the algorithms he developed at QCRI to extract needs and offers of help.


As long-time readers of iRevolution will know, this is an approach I’ve been advocating for and blogging about for years, including the auto-matching of needs and offers. These algorithms (classifiers) will also be made available as part of our Artificial Intelligence for Disaster Response (AIDR) platform. In the meantime, we have contacted our colleagues at the American Red Cross’s Digital Operations Center (DigiOps) to offer the results of the processed data, i.e., 1,000+ tweets requesting & offering help. If you are an established organization engaged in relief efforts following the Tornado, please feel free to get in touch with us (patrick@iRevolution.net) so we can make the data available to you. 
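Hemant’s classifiers are trained machine-learning models; as a much simpler sketch of the underlying idea, needs and offers can be roughly separated with naive keyword rules. The patterns and example tweets below are my own illustration, not the actual QCRI classifiers:

```python
import re

# Naive keyword heuristics for triaging disaster tweets; a real
# system would use trained classifiers, not hand-written rules
NEED = re.compile(r"\b(need|needs|needed|require|looking for|out of)\b", re.I)
OFFER = re.compile(r"\b(offer|offering|donate|donating|can provide|giving away)\b", re.I)

def label_tweet(text):
    """Label a tweet as a need, an offer of help, or other."""
    if NEED.search(text) and not OFFER.search(text):
        return "need"
    if OFFER.search(text):
        return "offer"
    return "other"

tweets = [
    "We need water and blankets at the Moore shelter",
    "Donating 200 blankets, can drop off tonight",
    "Thoughts and prayers for Oklahoma",
]
labels = [label_tweet(t) for t in tweets]
```

Once tweets are labeled this way, auto-matching becomes a pairing problem: link each "need" to nearby "offers" mentioning the same resource.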


Crowdsourcing Critical Thinking to Verify Social Media During Crises

My colleagues and I at QCRI and the Masdar Institute will be launching Verily in the near future. The project has already received quite a bit of media coverage, particularly after the Boston Marathon bombings. So here’s an update. While major errors were made in the crowdsourced response to the bombings, social media can help to quickly find individuals and resources during a crisis. Moreover, time-critical crowdsourcing can also be used to verify unconfirmed reports circulating on social media.


The errors made following the bombings were the result of two main factors:

(1) the crowd is digitally illiterate
(2) the platforms used were not appropriate for the tasks at hand

The first factor has to do with education. Most of us are still in Kindergarten when it comes to the appropriate use of social media. We lack the digital or media literacy required for the responsible use of social media during crises. The good news, however, is that the major backlash from the mistakes made in Boston is already serving as an important lesson to many in the crowd, who are very likely to think twice about retweeting certain content or making blind allegations on social media in the future. The second factor has to do with design. Tools like Reddit and 4chan that are useful for posting photos of cute cats are not always the tools best designed for finding critical information during crises. The crowd is willing to help; this much has been proven. The crowd simply needs better tools to focus and channel the goodwill of its members.

Verily was inspired by the DARPA Red Balloon Challenge, which leveraged social media and social networks to find the locations of 10 red weather balloons planted across the continental USA (3 million square miles) in under 9 hours. So Verily uses that same time-critical mobilization approach, a recursive incentive mechanism, to rapidly collect evidence around a particular claim during a disaster, such as “The bridge in downtown LA has been destroyed by the earthquake.” Users of Verily can share this verification challenge directly from the Verily website (e.g., share via Twitter, Facebook, and email), which posts a link back to the Verily claim page.

This time-critical mobilization and crowdsourcing element is the first main component of Verily. Because disasters are far more geographically bounded than the continental US, we believe that relevant evidence can be crowdsourced in a matter of minutes rather than hours. Indeed, while the degree of separation in the analog world is 6, that number falls closer to 4 on social media, and we believe it falls even further in bounded geographical areas like urban centers. This means that the 20+ people living opposite that bridge in LA are only 2 or 3 hops away in your social network and could be tapped via Verily to take pictures of the bridge from their window, for example.


The second main component is to crowdsource critical thinking, which is key to countering the spread of false rumors during crises. The interface for posting evidence on Verily is modeled along the lines of Pinterest, but with each piece of content (text, image, video), users are required to add a sentence or two to explain why they think or know that piece of evidence is authentic or not. Others can comment on said evidence accordingly. This workflow prompts users to think critically rather than blindly share/RT content on Twitter without much thought, context or explanation. Indeed, we hope that with Verily more people will share links back to Verily pages rather than to out-of-context and unsubstantiated images, videos and claims.
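A hypothetical data model for this workflow (the class and field names are my own illustration, not Verily’s actual schema) makes the design choice explicit: a piece of evidence without a rationale is simply rejected, so the critical-thinking step cannot be skipped:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    author: str
    content_url: str
    rationale: str  # why the author believes the evidence is (in)authentic

    def __post_init__(self):
        # enforce the workflow: no rationale, no evidence
        if not self.rationale.strip():
            raise ValueError("evidence must include a rationale")

@dataclass
class Claim:
    text: str
    evidence: list = field(default_factory=list)

claim = Claim("The bridge in downtown LA has been destroyed by the earthquake")
claim.evidence.append(
    Evidence("user42", "http://example.com/photo.jpg",
             "Photo metadata shows today's date and a matching skyline")
)
```

Encoding the requirement in the data model, rather than in interface copy, is what turns a sharing platform into a critical-thinking platform.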

In other words, we want to redirect traffic to a repository of information that incentivizes critical thinking. This means Verily is also looking to be an educational tool; we’ll have simple mini-guides on information forensics available to users (drawn from the BBC’s UGC Hub, NPR’s Andy Carvin, etc.). While we’ll include up/down votes on the perceived authenticity of evidence posted to Verily, this is not the main focus of Verily. Up/down votes are similar to retweets and simply do not capture or explain whether the voter has voted based on her/his expertise or any critical thinking.

If you’re interested in supporting this project and/or sharing feedback, then please feel free to contact me at any time. For more background information on Verily, kindly see this post.


Using Crowdsourcing to Counter the Spread of False Rumors on Social Media During Crises

My new colleague Professor Yasuaki Sakamoto at the Stevens Institute of Technology (SIT) has been carrying out intriguing research on the spread of rumors via social media, particularly on Twitter and during crises. In his latest research, “Toward a Social-Technological System that Inactivates False Rumors through the Critical Thinking of Crowds,” Yasu uses behavioral psychology to understand why exposure to public criticism changes rumor-spreading behavior on Twitter during disasters. This fascinating research builds very nicely on the excellent work carried out by my QCRI colleague ChaTo, who used this “criticism dynamic” to show that the credibility of tweets can be predicted (by topic) without analyzing their content. Yasu’s study also seeks to find the psychological basis for Twitter’s self-correcting behavior identified by ChaTo and also by John Herrman, who described Twitter as a “Truth Machine” during Hurricane Sandy.


Twitter is still a relatively new platform, but the existence and spread of false rumors is certainly not. In fact, a very interesting study from 1950 found that “in the past 1,000 years the same types of rumors related to earthquakes appear again and again in different locations.” Early academic studies on the spread of rumors revealed that “psychological factors, such as accuracy, anxiety, and importance of rumors, affect rumor transmission.” One such study proposed that the spread of a rumor “will vary with the importance of the subject to the individuals concerned times the ambiguity of the evidence pertaining to the topic at issue.” Later studies added “anxiety as another key element in rumormongering,” since “the likelihood of sharing a rumor was related to how anxious the rumor made people feel.” At the same time, however, the literature also reveals that countermeasures do exist. Critical thinking, for example, decreases the spread of rumors. The literature defines critical thinking as “reasonable reflective thinking focused on deciding what to believe or do.”
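That importance-times-ambiguity proposal is usually written as R ≈ i × a. A trivial sketch (the 0-to-1 scales are my own illustrative convention) makes the multiplicative point concrete: if either the importance of the topic or the ambiguity of the evidence drops to zero, transmission stops, which is exactly why criticisms that reduce ambiguity are such an effective countermeasure:

```python
# "Basic law of rumor": rumor strength R varies with the importance i
# of the subject times the ambiguity a of the evidence
def rumor_strength(importance, ambiguity):
    return importance * ambiguity

# Fully unambiguous evidence kills even a highly important rumor
assert rumor_strength(1.0, 0.0) == 0.0

# Reducing ambiguity (e.g., via a criticism-tweet with a source)
# weakens the rumor even when its importance is unchanged
assert rumor_strength(0.9, 0.2) < rumor_strength(0.9, 0.8)
```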

“Given the growing use and participatory nature of social media, critical thinking is considered an important element of media literacy that individuals in a society should possess.” Indeed, while social media can “help people make sense of their situation during a disaster, social media can also become a rumor mill and create social problems.” As discussed above, psychological factors can influence rumor spreading, particularly when experiencing stress and mental pressure following a disaster. Recent studies have also corroborated this finding, confirming that “differences in people’s critical thinking ability […] contributed to the rumor behavior.” So Yasu and his team ask the following interesting question: can critical thinking be crowdsourced?


“Not everyone needs to be a critical thinker all the time,” write Yasu et al. As long as some individuals are good critical thinkers in a specific domain, their timely criticisms can result in an emergent critical thinking social system that can mitigate the spread of false information. This goes to the heart of the self-correcting behavior often observed on social media, and on Twitter in particular. Yasu’s insight also provides a basis for a bounded crowdsourcing approach to disaster response. More on this here, here and here.

“Related to critical thinking, a number of studies have paid attention to the role of denial or rebuttal messages in impeding the transmission of rumor.” This is the more “visible” dynamic behind the self-correcting behavior observed on Twitter during disasters. So while some may spread false rumors, others often try to counter this spread by posting tweets criticizing rumor-tweets directly. The following questions thus naturally arise: “Are criticisms on Twitter effective in mitigating the spread of false rumors? Can exposure to criticisms minimize the spread of rumors?”

Yasu and his colleagues set out to test the following hypotheses: exposure to criticisms reduces people’s intent to spread rumors; that is, exposure to criticisms lowers the perceived accuracy, anxiety, and importance of rumors. They tested these hypotheses on 87 Japanese undergraduate and graduate students by using 20 rumor-tweets related to the 2011 Japan Earthquake and 10 criticism-tweets that criticized the corresponding rumor-tweets. For example:

Rumor-tweet: “Air drop of supplies is not allowed in Japan! I though it has already been done by the Self- Defense Forces. Without it, the isolated people will die! I’m trembling with anger. Please retweet!”

Criticism-tweet: “Air drop of supplies is not prohibited by the law. Please don’t spread rumor. Please see 4-(1)-4-.”

The researchers found that “exposing people to criticisms can reduce their intent to spread rumors that are associated with the criticisms, providing support for the system.” In fact, “Exposure to criticisms increased the proportion of people who stop the spread of rumor-tweets approximately 1.5 times [150%]. This result indicates that whether a receiver is exposed to rumor or criticism first makes a difference in her decision to spread the rumor. Another interpretation of the result is that, even if a receiver is exposed to a number of criticisms, she will benefit less from this exposure when she sees rumors first than when she sees criticisms before rumors.”


Findings also revealed three psychological factors that were related to the differences in the spread of rumor-tweets: one’s own perception of the tweet’s accuracy, the anxiety caused by the tweet, and the tweet’s perceived importance. The results also indicate that “exposure to criticisms reduces the perceived accuracy of the succeeding rumor-tweets, paralleling the findings by previous research that refutations or denials decrease the degree of belief in rumor.” In addition, the perceived accuracy of criticism-tweets by those exposed to rumors first was significantly higher than in the criticism-first group. The results were similar vis-à-vis anxiety. “Seeing criticisms before rumors reduced anxiety associated with rumor-tweets relative to seeing rumors first. This result is also consistent with previous research findings that denial messages reduce anxiety about rumors. Participants in the criticism-first group also perceived rumor-tweets to be less important than those in the rumor-first group.” The same was true vis-à-vis the perceived importance of a tweet. That said, “When the rumor-tweets are perceived as more accurate, the intent to spread the rumor-tweets is stronger; when rumor-tweets cause more anxiety, the intent to spread the rumor-tweets is stronger; when the rumor-tweets are perceived as more important, the intent to spread the rumor-tweets is also stronger.”

So how do we use these findings to enhance the critical thinking of crowds and design crowdsourced verification platforms such as Verily? Ideally, such a platform would connect rumor-tweets with criticism-tweets directly. “By this design, information system itself can enhance the critical thinking of the crowds.” That said, the findings clearly show that sequencing matters; being exposed to rumor-tweets first vs. criticism-tweets first makes a big difference vis-à-vis rumor contagion. The purpose of a platform like Verily is to act as a repository for crowdsourced criticisms and rebuttals; that is, crowdsourced critical thinking. Thus, the majority of Verily users would first be exposed to questions about rumors, such as: “Has the Vincent Thomas Bridge in Los Angeles been destroyed by the Earthquake?” Users would then be exposed to the crowdsourced criticisms and rebuttals.

In conclusion, the spread of false rumors during disasters will never go away. “It is human nature to transmit rumors under uncertainty.” But social-technological platforms like Verily can provide a repository of critical thinking and educate users on critical thinking processes themselves. In this way, we may be able to enhance the critical thinking of crowds.



See also:

  • Wiki on Truthiness resources (Link)
  • How to Verify and Counter Rumors in Social Media (Link)
  • Social Media and Life Cycle of Rumors during Crises (Link)
  • How to Verify Crowdsourced Information from Social Media (Link)
  • Analyzing the Veracity of Tweets During a Crisis (Link)
  • Crowdsourcing for Human Rights: Challenges and Opportunities for Information Collection & Verification (Link)
  • The Crowdsourcing Detective: Crisis, Deception and Intrigue in the Twittersphere (Link)