We Need Biological Verification and Attribution Tools to Combat Disinformation Aimed at International Institutions

The Biological Weapons Convention’s Ninth Review Conference (RevCon) took place under a unique geopolitical storm, as the COVID-19 pandemic raged and the Russian invasion of Ukraine took center stage. Russia’s continued claims of United States-sponsored bioweapons research laboratories in Ukraine only added to the tension. Russia asserted that it had uncovered evidence of offensive biological weapons research underway in Ukrainian labs, supported by the United States, and that the invasion of Ukraine was justified by the threat it faced so close to its borders.

While this story has been repeated countless times to countless audiences – including an Article V consultative meeting and the United Nations Security Council (UNSC) as part of an Article VI complaint against the U.S. lodged by Russia – the fact remains that Russia’s assertion is untrue.

The biological laboratories present in Ukraine operate as part of the Department of Defense’s Biological Threat Reduction Program and are run by Ukrainian scientists charged with detecting and responding to emerging pathogens in the area. Articles V and VI of the Biological Weapons Convention (BWC) are intended to be invoked when a State Party hopes to resolve a problem of cooperation or report a breach of the Convention, respectively. The Article V formal consultative meeting resulted in no consensus being reached, and the Article VI UNSC meeting rejected Russia’s claims. The lack of consensus during the consultative meeting created a foothold for Russia to continue its campaign of disinformation, and the UNSC meeting only further reinforced it. The Article V and VI clauses are meant to provide States Parties some means of mediation and recourse in the case of a BWC violation. In this case, however, they were not invoked in good faith; rather, they were used as a springboard for a sweeping Russian disinformation campaign.

Abusing Behavioral Norms for Political Gain

While Russia’s initial steps of calling for an Article V consultative meeting and an Article VI Security Council investigation do not seem outwardly untoward, Russia’s behavior during and after the proceedings that dismissed its claims indicated a deeper purpose.

Example: Misdirection in Documentation Review

During the RevCon, the Russian delegation often brought up how the results of the UNSC investigation would be described in the final document during the article-by-article review, calling for versions of the document that included more slanted accounts of the events. They also continually cited the U.S.’ refusal to answer their questions, despite the answers being publicly available on the consultative meeting’s UNODA page. The Russians characterized their repeated mentions of the UNSC investigation findings as acts of defiant heroism, implying that the U.S. was trying to quash their valid concerns, but that Russia would continue to raise them until the world had gotten the answers it deserved. This narrative directly contradicts the facts of the Article V and VI proceedings: the UNSC saw no need to continue investigating Russia’s claims.

Example: Side Programming with Questionable Intent 

The Russian delegation also conducted a side event during the RevCon dedicated to the outcomes of the consultative meeting. The side event included a short ‘documentary’ of Russian evidence that the U.S.-Ukraine laboratories were conducting biological weapons research. This evidence included footage of pesticide-dispersal drones in a parking lot that were supposedly modified to hold bioweapons canisters, cardboard boxes with USAID stickers on them, and a list of pathogen samples supposedly present but destroyed prior to filming. When asked about next steps, the Russian delegation made thinly veiled threats to hold larger BWC negotiations hostage, stating that if the U.S. and its allies maintained their position and demonstrated no further interest in continuing dialogue, it would be difficult for the 9th RevCon to reach consensus.

Example: Misuse of ‘Point of Order’

Russia’s behavior at the 9th RevCon emphasizes the unwitting role international institutions can play as springboards for state-sponsored propaganda and disinformation. 

During opening statements, the Russian delegation continually called a point of order upon any mention of the Russian invasion of Ukraine. A point of order allows a delegation to respond to the speaker immediately, effectively interrupting their statement. During the Ukrainian delegation’s opening statement, the Russian delegation called four points of order, citing Ukraine’s “political statements” as disconnected from the BWC discussion. Russia’s use of the rules of procedure to bully other delegations continued – after they concluded a point of order during the NATO delegate’s statement, they called another one almost immediately, when the NATO delegate resumed her statement with the single word, “Russia.” This behavior continued throughout all three weeks of the RevCon.

Example: Single Vote Disruption Made in Bad Faith

All BWC decisions are adopted by consensus, meaning that every state party has to agree for a decision to be made. While this helps ensure the greatest inclusivity and equality between states parties, as well as promote implementation, it also means that one country can act as the ‘spoiler’ and disrupt even widely supported changes.

For example, in 2001, the United States pulled out of verification mechanism negotiations at the last minute, upending the entire effort. Russia’s behavior in 2022 was similarly disruptive, but undertaken with the goal of subversion. Its obstruction changed how other delegations reacted, as representatives seemed more reluctant to mention the Article V and VI proceedings. The structure of the United Nations as impartial and the BWC as consensus-based means that, by their very nature, they cannot combat their own misuse. Any progress to be had by the BWC relies on states operating in good faith, which is impossible when a country pursues a disinformation agenda.

Thus, the very nature of the UN and associated bodies attenuates their ability to respond to states’ misuse. Russia’s behavior at the 9th RevCon is part of a pattern that shows no signs of slowing down. 

We Need More Sophisticated Biological Verification and Attribution Tools

The actions described above demonstrate the door has been kicked fully open for regimes to use the UN and associated bodies as mouthpieces for state-sponsored propaganda. 

So, it is imperative that 1) more sophisticated biological verification and attribution tools be developed, and 2) the BWC implement a legally binding verification mechanism.

The development of better methods to verify whether biological research is for civil or military purposes will help to remove ambiguity around laboratory activities around the world. It will also make it harder for benign activities to be misidentified as offensive biological weapons activities.

Further, improved attribution methods will determine where biological weapons originate and will further remove ambiguity during a genuine biological attack.

The development of both these capabilities will strengthen an eventual legally binding verification mechanism. These two changes will also allow Article V consultative meetings and Article VI UNSC meetings to determine the presence of offensive bioweapons research more definitively, thus contributing rather substantively to the strengthening of the convention. As ambiguity around the results of these investigations decreases, so does the space for disinformation to take hold.

“Expert” Opinion More Likely to Damage Vaccine Trust Than Political Propaganda

Our research shows that the biggest threat to reliable information access for military and DoD service members is expert opinion. A high-level analysis of anti-COVID vaccine narratives on social media reveals that opinions from experts achieve higher relevancy and engagement than opinions from news pundits, even if the news source is considered reliable by its base. 

The Department of Defense administers 17 different vaccines for the prevention of infectious diseases among military personnel. These vaccines are distributed upon entering basic training and before deployment. By mid-July, 62% of active duty service members had added vaccination against COVID-19 to that array. As impressive as that 62% is, it is not full compliance, and so in late August, Defense Secretary Lloyd Austin ordered service leaders to “impose ambitious timelines for implementation” of a full-court press to vaccinate the entire armed forces.

Some Republican lawmakers, including military veterans, have opted to use policy to fight the military vaccine mandate. Representative Thomas Massie (R-KY) introduced a bill to prohibit federal funds from being used “to force” a member of the USAF to receive a COVID-19 vaccine, while Representative Mark Green (R-TN) referred repeatedly to a lack of “longitudinal data” on the developed COVID-19 vaccines as the impetus for a bill that would allow unvaccinated servicemembers to be honorably discharged. Green, who is also a board-certified physician, has previously claimed that vaccines cause autism.

Users on Twitter and other social media platforms are looking for advice for themselves or family members in the service about how to navigate the new requirements. Days after Secretary Austin’s memo, a video of a self-proclaimed “tenured Navy surgeon” went viral, in which she claims that the COVID-19 vaccine “killed more of our young active duty people than COVID did”, citing the open-source Vaccine Adverse Event Reporting System (VAERS) database, which accepts unverified reports from the public. That doctor, associated with the COVID conspiracy group America’s Frontline Doctors, is one of many influencers who have targeted service members in their battle against vaccination.

Dr. Lee Merritt presenting at an America’s Frontline Doctors event.


On social media, the doctor’s statement took off. A bot network amplified the narrative on Twitter, keeping it at the forefront of the vaccine conversation for weeks. The narrative also gained traction on web forums, where it was cited by soldiers looking for a credible reason to gain an exception to the vaccine mandate. Forums emerged with dozens of service members pointing to the doctor’s statements as fact and discussing ways to skirt the mandate using religious exemptions, phony immunization cards, or other schemes. The right-wing social media website Gab has created a webpage with religious exemption request templates for users to send to their employers, which are being shared throughout military social media circles.

In the analysis below, we take a close look at the 1,443 retweets of Dr. Merritt’s video. Assessing the accounts themselves, we see a large influx of accounts created in the last nine months taking part in the conversation, and we see statistically significant declines in the correlations between account behaviors, both of which suggest bot activity.
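For readers who want to reproduce this kind of check, below is a minimal sketch of the account-age portion of the analysis, assuming a list of account records with a created_at field as returned by the Twitter API; the field names and sample data are illustrative.

```python
# Sketch: measure the share of retweeting accounts created recently.
# Assumes account records with a "created_at" field (Twitter API style);
# names and sample data are illustrative.
from datetime import timezone

import pandas as pd

def recent_account_share(accounts, months=9):
    """Return the fraction of accounts created within the last `months` months."""
    created = pd.to_datetime(pd.DataFrame(accounts)["created_at"], utc=True)
    cutoff = pd.Timestamp.now(tz=timezone.utc) - pd.DateOffset(months=months)
    return (created >= cutoff).mean()

accounts = [
    {"screen_name": "user_a", "created_at": "2021-02-11T09:30:00Z"},
    {"screen_name": "user_b", "created_at": "2014-06-03T17:05:00Z"},
]
share = recent_account_share(accounts)
print(f"{share:.0%} of sampled accounts were created in the last 9 months")
```

A disproportionate share of very young accounts, relative to a baseline conversation, is one of the simplest bot-activity signals.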



From the 1,443 retweets, we find 1,319 unique accounts, of which user data was available for 1,314. Three commercial bot detector technologies agreed that 466 of these accounts – 35% – were bots, and two of the three detectors found 514 bots in common. Assessing the posting behavior and posted content of these bot accounts, we find disinformation content related to COVID-19, vaccines, vaccine mandates, gun control, immigration, and school shootings in America.
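The cross-detector agreement above amounts to a majority vote; a sketch of that tallying step, with hypothetical detector outputs, is below.

```python
# Sketch: tally agreement across three bot detectors. Detector names are
# hypothetical; each maps to the set of account IDs it flagged as bots.
def bot_votes(detector_results):
    """Return {account_id: number of detectors that flagged it}."""
    all_accounts = set.union(*detector_results.values())
    return {a: sum(a in flagged for flagged in detector_results.values())
            for a in all_accounts}

results = {
    "detector_a": {"acct1", "acct2", "acct3"},
    "detector_b": {"acct1", "acct3"},
    "detector_c": {"acct1", "acct2"},
}
votes = bot_votes(results)
print("Unanimous:", sorted(a for a, v in votes.items() if v == 3))     # all three agree
print("Two of three:", sorted(a for a, v in votes.items() if v >= 2))  # looser criterion
```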

Around the same time, Fox News host Tucker Carlson labeled the vaccine mandate a plot to single out “the sincere Christians in the ranks, the free thinkers, the men with high testosterone levels, and anyone else who doesn’t love Joe Biden, and make them leave immediately.” 

Pulling 15,000 random tweets using key terms focused on Tucker Carlson’s specific narrative targeting Department of Defense vaccine mandates, we find the Fox News commentator may have stretched anti-vax misinformation beyond levels even social media can accept. The conversation was segmented along several sub-narratives. The conversation among tweets responding to Tucker Carlson’s assertion that the DoD vaccine mandate was designed to remove high-testosterone men from the ranks can be seen below.



The first conversational segment, consisting of 4 overlapping topics, is focused on a reaction tweet from MSNBC, arguing that Fox News bears “responsibility for sowing doubt about the vaccine, validating the anti-vaxxers as the pandemic continues.” The response is overwhelmingly positive towards the MSNBC statement, and condemns Fox News and Tucker Carlson. 

The second conversational segment, consisting of three overlapping components, consolidates around tweets ridiculing Tucker Carlson for his “latest bonkers conspiracy theor[y]” along with discussion of workplace vaccine mandates.

Finally, the third segment of conversation, consisting of two related components, comprises tweets discussing Tucker Carlson’s previous claims related to COVID vaccines and expressing disbelief at the claimed association between the Pentagon and Satanism.

In assessing the online response to Tucker Carlson’s claims that the Department of Defense is mandating the COVID vaccine to “identify the sincere Christians in the ranks, the free thinkers, the men with high testosterone levels, and anybody else who doesn’t love Joe Biden and make them leave immediately”, we find that the Fox News personality inadvertently consolidated a reactionary conversation online. Instead of contributing to the spread and reach of COVID vaccine misinformation, Carlson’s narrative was met with incredulity, and a near unified condemnation of both Fox News and Carlson himself. Social media users rejected misinformation that the United States military is using vaccine mandates to target “high testosterone” members, and in propagating that narrative Tucker Carlson demonstrates that there is a limit to the believability of lies and conspiracy theories, even on the internet.

But Carlson is not the sole source of disinformation online. Credible or seemingly credible information sources (like expert opinion) are a much more serious threat to service members seeking information about the COVID vaccine than political efforts. Soldiers are doing their research, and it is imperative that DoD (and others) focus on getting more credible expert opinions into the marketplace of ideas so that those narratives can gain traction over these fantastical ones.

Big Tech CEOs questioned about fighting disinformation with AI by the House Energy and Commerce Committee

The amount of content posted on social media platforms is increasing at a dramatic rate, and so is the portion of that content that is false or misleading. For instance, users upload over 500 hours of video to YouTube every minute. While much of this content is innocuous, some content spreads harmful disinformation, and addressing the spread of false or misleading information has been a substantial challenge for social media companies. The spread of disinformation, as well as misinformation, on social media platforms was highlighted during a March 25 hearing in the House Energy and Commerce Committee. Members questioned the CEOs of Facebook, Twitter, and Google on their roles in stopping the spread of false information, much of which contributed to the worsening of the COVID-19 pandemic, as well as the January 6 insurrection at the Capitol.

Artificial intelligence as a solution for disinformation

False or misleading posts on social media spread quickly and can significantly affect people’s views. MIT Sloan researchers found that false information was 70% more likely to be retweeted on Twitter than facts; false information also reached its first 1,500 people six times faster. Furthermore, researchers at Rand discovered that a constant onslaught of false information can even skew people’s political opinions. Specifically, false information exacerbates the views of people in closed or insular social media circles because they receive only a partial picture of how other people feel about specific political issues.

Traditionally, social media companies have relied on human reviewers to find harmful posts. Facebook alone employs over 30,000 reviewers. According to a report published by New York University, Google employs around 10,000 content reviewers for YouTube and its subsidiaries, and Twitter employs around 1,500. However, the human review of content is time-consuming, and, in many instances, extremely traumatic for these reviewers. These Big Tech companies are now developing artificial intelligence (AI) algorithms to automate much of this work.

At Facebook, the algorithms rely on tens of millions of user-submitted reports about potentially harmful content. This dataset is then used to train the algorithms to identify which types of posts are actually harmful. The content is separated into seven different categories: nudity, graphic violence, terrorism, hate speech, fake accounts, spam, and suicide prevention. In the past few years, much of their effort was dedicated to identifying fake accounts that would likely be used for malignant purposes, such as election disinformation. Facebook is also using its AI algorithms to identify fraudulent news outlets publishing fake news and to help its reviewers remove spam accounts.
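We do not have visibility into Facebook’s production systems, but the general pattern described above, training a multi-class text classifier on user-reported examples, can be sketched with off-the-shelf tools. The labels below mirror the seven categories listed; the training data is placeholder.

```python
# Illustrative sketch only, not Facebook's actual system: a multi-class
# text classifier trained on (reported text, category) pairs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Full label set from the article; in practice the training data would be
# tens of millions of user reports spanning all seven categories.
CATEGORIES = ["nudity", "graphic_violence", "terrorism", "hate_speech",
              "fake_accounts", "spam", "suicide_prevention"]

train_texts = ["buy cheap followers now!!!", "message describing self-harm intent"]
train_labels = ["spam", "suicide_prevention"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_labels)
print(clf.predict(["limited offer, click here to win followers"]))  # likely 'spam' on this toy data
```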

Google has developed algorithms that skim all search results and rank them based on quality and relevance to a user’s search terms. When the algorithms identify articles promoting misinformation, those articles are ranked lower in the search results and are therefore more difficult to find. For YouTube, the company developed algorithms to screen new content and then demonetized any content related to COVID-19. Videos related to the pandemic are unable to earn any revenue from ads, ideally dissuading those attempting to profit from COVID-19 scams involving the posting of misleading content. YouTube has also redesigned its recommendation algorithms to show users authoritative sources of information about the COVID-19 pandemic and steer them away from disinformation or misinformation.

Twitter is also using AI to detect harmful tweets and remove them as quickly as possible. In 2019, the social media site reported that its algorithms removed 43% of the tweets found to violate its content policies. That same year, Twitter purchased a UK-based AI startup to help counter disinformation spreading on its platform. Its algorithms are designed to quickly identify content that can pose a direct risk to the health or well-being of others and prioritize that content for review by human moderators. These moderators can then evaluate the potentially problematic tweets to make a final determination as to whether the content is truly harmful.
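The triage pattern described here, scoring content by risk and surfacing the riskiest items to human moderators first, can be sketched with a simple priority queue; the risk scores below stand in for an assumed upstream model.

```python
# Sketch: a moderation triage queue. heapq is a min-heap, so the risk
# score is negated to pop the highest-risk tweet first. Scores are
# placeholders for an upstream model's output.
import heapq

review_queue = []

def enqueue_for_review(tweet_id, risk_score):
    heapq.heappush(review_queue, (-risk_score, tweet_id))

def next_for_review():
    neg_risk, tweet_id = heapq.heappop(review_queue)
    return tweet_id, -neg_risk

enqueue_for_review("tweet_1", 0.42)
enqueue_for_review("tweet_2", 0.91)  # direct risk to health or safety scores highest
print(next_for_review())             # ('tweet_2', 0.91)
```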

The limitations of using AI

While AI can be a useful tool in combating disinformation on social media, it can have significant drawbacks. One of the biggest problems is that AI algorithms have not achieved a high enough proficiency in understanding language and have difficulty determining what a specific post actually means. For example, AI systems like Apple’s Siri can follow simple commands or answer straightforward questions, but cannot hold conversations with a person. During the hearing, Twitter CEO Jack Dorsey and Facebook CEO Mark Zuckerberg discussed this point, describing how it is difficult for AI algorithms to parse social media posts denouncing harmful ideas from those that are endorsing them. Another problem with AI is that the decision-making processes for these algorithms can be highly opaque. In other words, the computers are unable to explain why or how they have made their decisions. Lastly, AI algorithms are only as smart as the data on which they are trained. Imperfect or biased data will then lead to ineffective algorithms and flawed decisions. These biases can come from many sources and can be difficult for AI scientists to identify.

More needs to be done

False and misleading posts on social media about the COVID-19 pandemic and the results of the 2020 presidential election have led to significant harm in the real world. In order to fully leverage AI to help mitigate the spread of disinformation and misinformation, much more research needs to be done. As we monitor Congressional activity focused on countering disinformation, we encourage the CSPI community to serve as a resource for federal officials on this topic.

Policy proposals about countering false information from FAS’ Day One Project

A National Strategy to Counter COVID-19 Misinformation – Amir Bagherpour and Ali Nouri

Creating a COVID-19 Commission on Public Health Misinformation – Blair Levin and Ellen Goodman

Combating Digital Disinformation: Resisting Foreign Influence Operations through Federal Policy – Dipayan Ghosh

Digital Citizenship: A National Imperative to Protect and Reinvigorate our Democracy – Joseph South and Ji Soo Song

Spanish-language vaccine news stories hosting malware disseminated via URL shorteners

Key Highlights

Malware hosted within popular news stories about COVID-19 vaccine trials

On September 18, 2020, FAS released a report identifying a network of malware files related to COVID-19 vaccine development on the Spanish-language Sputnik News domain mundo.sputniknews.com. The report uncovered 53 websites infected with malware whose links spread throughout Twitter after allegations of adverse reactions led to a pause in the Oxford-AstraZeneca (AZD1222) vaccine trial.

Whereas our first report collected 136,597 tweets and was limited to the AstraZeneca COVID-19 vaccine, this update presents a collection of 500,166 tweets from Nov. 18 to Dec. 1 containing the key terms “AstraZeneca”, “Sputnik V”, “Moderna”, and “Pfizer”. From that total, 88,555 tweets written in Spanish were analyzed for potential malware infections.

Our analysis determines that infections on the mundo.sputniknews.com domain are continuing. Eight separate files were discovered, with 52 unique scans detecting various malware — up from the 17 scans in the initial report (see Figure 1).

Figure 1: Russia’s Sputnik Mundo network with infection

Many of the published stories contain information about possible complications or lean toward skepticism about vaccine efficacy. The top translated story features the title “The detail that can complicate Moderna and Pfizer vaccines” (see Figure 2).

Figure 2: Top-visited page on mundo.sputniknews.com, translated

One possible explanation behind the use of malware is that perpetrators can identify and track an audience interested in the state of COVID-19 vaccines. From there, micro-targeting on the interested group could artificially tilt the conversation regarding certain vaccines favorably. Such a strategy works well with these sites, which are already promoting material questioning Western-based vaccines. 

Additionally, within the Spanish-language Twitter ecosystem, 7,074 shortened bit.ly links related to COVID-19 vaccines were discovered. The use of link shortening is a new discovery and a worrisome one. Not only does it enable additional messaging on Twitter by reducing URL characters, link shortening can also obscure the final destination of the URL. The native Spanish-language news network suffering malware infection is structurally different from the Sputnik Mundo infection. Unlike the Sputnik Mundo domain, the bit.ly links routing to Latin American news outlets do so indirectly, first connecting to an IP address that refers the traffic to the news story URL but also hosts malware. This process has the potential to spread malware indirectly when users click on bit.ly links embedded within tweets.
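A minimal sketch of the link-expansion step is below: follow a shortened link’s redirect chain to expose intermediate hops and the final destination. The bit.ly URL is a placeholder, and in practice this should be run from an isolated environment, since intermediate hosts may serve malware.

```python
# Sketch: expand a shortened URL and list every hop in the redirect chain.
# Uses HEAD requests to avoid downloading page bodies; run in a sandbox.
import requests

def redirect_chain(short_url, timeout=10):
    resp = requests.head(short_url, allow_redirects=True, timeout=timeout)
    return [r.url for r in resp.history] + [resp.url]

for hop in redirect_chain("https://bit.ly/EXAMPLE"):  # placeholder link
    print(hop)  # an intermediate hop may be the malware-hosting referrer IP
```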

Of the bit.ly links shared more than 25 times, our analysis randomly selected ten. Half were infected and half were clean links. Infected domains included: an Argentine news site (www.pagina12.com.ar), an eastern Venezuelan newspaper (www.correodelcaroni.com), a Chilean news outlet (https://www.latercera.com), a Peruvian news outlet (https://Elcomercio.com), and a Mexican news outlet (https://www.laoctava.com). 

The typology of malware within the infected network was diverse. Our results indicate 77 unique pieces of malware, including adware-based malware, malware that accesses Windows registry keys on both 32- and 64-bit PC systems, APK exploits, digital coin miners, worms, and others. Our analysis indicates that the malware is designed to monitor personal behavior on users’ devices. 

The malware network is robust but not highly interconnected (see Figure 3).

Figure 3: Network of five infected domains

Examination of the malware contained within this network revealed interesting attribution information. While much of the specific malware (e.g. MD5 hash: 1aa1bb71c250ed857c20406fff0f802c, found on the Chilean news outlet https://www.latercera.com) has neutral encoding standards, two language resources in the file are registered as “Chinese Traditional” (see Figure 4).

Figure 4: Malware attribution information

As manipulation of language resources in coding is common, the presence of Chinese Traditional characters flagged in the malware’s code suggests the originators of the malware may be trying to confuse malware-detection software. 

However, our analysis identified this malware’s IP address as located in Hungary, while its holding organization is in Amsterdam (see Figure 5). This IP address was also linked to the Undernet (https://www.undernet.org/), one of the largest Internet chat networks, with 17,444 users in 6,621 channels, and a known source of malware origination. Again, this is but one piece of malware on one Chilean news outlet pulled for closer inspection. Collectively, our findings demonstrate the networked production and distribution of malware in the COVID-19 vaccine conversation.

Figure 5: Malware IP hosted in Hungary

The malware network is large and presents a clear threat vector for the delivery of payloads via vaccine stories. Vaccine malware-disinformation has spread beyond Russia’s Sputnik Mundo network to a series of other domains in Argentina, Venezuela, Chile, Peru, and Mexico. This is particularly alarming considering that aggressive conspiracy theories advanced by the Kremlin in Latin America have already tilted the region’s governments towards the use of the Sputnik V vaccine. Indeed, Russia is supplying Mexico with 32 million doses of Sputnik V. Venezuela and Argentina are set to purchase 10 million and 25 million doses, respectively, while Peru is currently in negotiations to purchase Sputnik V.

With a malware-curated audience, it will become significantly easier to pair the supply of Sputnik V with targeted information to support its use and delegitimize Western vaccines. 

Considering that COVID-19 vaccine efforts are arguably the most important news topic on any given day, the spike in social media activity from the AstraZeneca COVID-19 clinical trial pause marked a key entry point for malware-disinformation. Since September, however, the network has largely permeated throughout Spanish-language Twitter — as did Sputnik V throughout Latin America. 

With the explosion of reporting on the pandemic and vaccines, it is difficult to know which sites are safe and which are dangerous. This risk is magnified for less-savvy Internet users, who may not even consider the possibility of malware. Unfortunately, it is difficult to say how many individuals were infected by the malware discovered. Even one person clicking the wrong link could have a disastrous effect, as the malware siphons sensitive information, from credit card numbers to confidential information appearing on a user’s screen.

Most worrisome is that the malware technique could create a library of users interested in vaccine stories who could be subsequently targeted. If used for micro-targeting, the library would become an effective audience for further vaccine misinformation.

Methodology

We performed a combination of social network analysis, anomalous behavior discovery, and malware detection. We scanned 88,555 topic-specific URLs through the open-source malware detection platform VirusTotal (www.virustotal.com).
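For reference, a sketch of the scanning step is below, using VirusTotal’s legacy v2 URL-report endpoint; the API key is a placeholder and free-tier rate limits apply.

```python
# Sketch: look up a URL's detection count on VirusTotal (legacy v2 API).
import requests

VT_REPORT = "https://www.virustotal.com/vtapi/v2/url/report"
API_KEY = "YOUR_VT_API_KEY"  # placeholder

def vt_url_report(url):
    params = {"apikey": API_KEY, "resource": url}
    report = requests.get(VT_REPORT, params=params, timeout=30).json()
    return report.get("positives", 0), report.get("total", 0)

positives, total = vt_url_report("https://mundo.sputniknews.com/example-story")
print(f"{positives}/{total} engines flagged this URL")
```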

For more about the FAS Disinformation Research Group and to see previous reports, visit the project page here.

A National Strategy to Counter COVID-19 Misinformation

Summary

The United States accounts for over 20% of global deaths related to COVID-19 despite only having 4% of the world’s population. This unacceptable reality is in part due to the tsunami of misinformation surrounding COVID-19 that has flooded our nation. Misinformation not only decreases current compliance with best practices for containing and mitigating the spread of COVID-19, but will also feed directly into resistance against future administration of a vaccine or additional public-health measures.

The next administration should establish an office at the Department of Health and Human Services dedicated to combating COVID-19 misinformation. This office should lead a coordinated effort that:

  1. Ensures that evidence-based findings are at the core of COVID-19 response strategies.
  2. Utilizes data science and behavioral analytics to detect and counter COVID-19 misinformation.
  3. Works with social-media companies to remove misinformation from online platforms.
  4. Partners with online influencers to promote credible information about COVID-19.
  5. Encourages two-way conversations between public-health officials and the general public.
  6. Ensures that public-health communications are supported by on-the-ground action.

Social Media Conversations in Support of Herd Immunity are Driven by Bots

Key highlights

Online debate surrounding herd immunity shows one-sided automation

For months, debates about so-called herd immunity have occurred internationally. The idea behind this herd immunity strategy is to shield vulnerable populations while allowing COVID-19 transmission to go uncontrolled among less vulnerable populations. This strategy has been dismissed by many scientists for several ethical and practical reasons, such as the large size of vulnerable populations in countries like the US and a lack of full understanding of the longer-term impacts of SARS-CoV-2 on lower-risk groups. However, support for the strategy within the conversation has increased, partially due to the embrace of the concept by a senior advisor to President Trump, Dr. Scott Atlas, who has called for the US to attempt the strategy. Support for the strategy has been outlined by a small group of public health scientists in the Great Barrington Declaration.

On the other side of the debate, prominent scientists like Dr. Fauci have called herd immunity through natural infection “dangerous” and “nonsense”. Infectious disease experts speculate this strategy could result in two million unnecessary deaths. Another group of scientists has recently published a correspondence in the Lancet, as well as an accompanying John Snow Memorandum, as an alternative to the Great Barrington Declaration.

About Topic Modeling

In our study of hundreds of thousands of tweets, we found a robust conversation related to the issues of herd immunity and the Great Barrington Declaration. To better understand the conversation, we applied a technique known as topic modeling. A topic model is a frequently used text-mining approach that applies statistical analysis of words and phrases across a particular conversation to discover patterns and common themes pertaining to the discussion. We first extracted public social data on text surrounding the Great Barrington conversation, searching for tweets with terms such as “Barrington”, “Barrington Declaration”, and “focused protection”. We then applied a topic modeling method known as Latent Dirichlet Allocation (LDA) to discover how the conversation clusters into distinct topics.
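A minimal sketch of this modeling step with scikit-learn follows; the three tweets shown are placeholders standing in for the collected corpus, and the number of topics is illustrative.

```python
# Sketch: LDA topic modeling over tweet text with scikit-learn.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

tweets = [  # placeholder corpus
    "end the lockdowns and sign the barrington declaration",
    "focused protection is just herd immunity rebranded",
    "masks work and herd immunity would cost countless lives",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(tweets)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_terms = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i + 1}: {', '.join(top_terms)}")
```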

Results

Through LDA topic modeling, we identified three distinct clusters surrounding the pro-herd immunity conversation and three distinct conversations surrounding opposition to herd immunity. The three clusters, or distinct topics, of conversation surrounding the pro-herd immunity conversation are: 1) ending lockdowns; 2) political opposition to epidemiological social controls; and 3) support for the Great Barrington Declaration and the scientists who signed the document. The sentiment surrounding the Great Barrington Declaration conversation is largely positive, highlighting the support for this herd immunity strategy within online conversation. However, the conversation demonstrates evidence of significant bot behavior, specifically in the timing of tweets and topic clustering with low content diversity. In fact, when conducting bot-detection analysis, we discovered that an estimated 45% of messages surrounding the Great Barrington Declaration are likely bot-driven. Meanwhile, the conversation opposing herd immunity has much greater diversity and is generally organic, not heavily bot-driven. Its topic clusters consisted of: 1) criticism of senior U.S. officials pushing herd immunity; 2) the high potential mortality required to attempt herd immunity; and 3) the ongoing mask debate and vaccine development timelines. This more organic conversation is highly negative towards the concept of herd immunity.

Figure 1: Key term clusters surrounding the pro-herd immunity conversation*
Figure 2: Key term clusters surrounding opposition to herd immunity**

Detection of Bot-Activity

We assessed the conversation for signs of automation and other artificial signatures (i.e. bot activity). Given the volume of the conversation and the limitations placed on commercial bot detection systems, we took a functional approach to this analysis and focused on identifying periods of the conversation in which a large volume of similar content was posted at the same time. Using one-second breaks, each tweet was plotted by timestamp. Next, anomaly detection was run against a subset of these data that captured the number of tweets per second, and anomalous times were identified as one-second periods that were statistical outliers with high levels of tweet activity. The red labeled dots in Figure 3 represent anomalous tweets within these identified periods. These tweets consist of retweeted messages and repeated hashtags, signifying inorganic activity. We subset tweets labeled as taking place in anomalous periods, whether they contained original or repeated content, labeled them as originating from either the Barrington or herd immunity conversation, and extracted the user data from each tweet. Finally, user data was analyzed for indications of account automation, or bot behavior. We find that an estimated 45% of tweets in the Great Barrington Declaration conversation, with content supporting the idea of herd immunity, were posted by bots. Meanwhile, 22% of the herd immunity conversation is bot-driven, less than half the level of the Great Barrington conversation and closer to the approximately 15% automation levels found on average in Twitter conversations.
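A simplified sketch of the burst-detection step is below: bin tweets into one-second windows and flag windows whose volume is a statistical outlier. The outlier rule (three standard deviations) and the timestamps are illustrative.

```python
# Sketch: flag one-second windows with anomalously high tweet volume.
import pandas as pd

def anomalous_seconds(timestamps, z=3.0):
    ts = pd.to_datetime(pd.Series(timestamps), utc=True)
    per_second = ts.dt.floor("s").value_counts().sort_index()
    threshold = per_second.mean() + z * per_second.std()
    return per_second[per_second > threshold]

# 50 seconds of one tweet each, then a 40-tweet burst in a single second.
baseline = [f"2020-10-15T14:00:{s:02d}Z" for s in range(50)]
burst = ["2020-10-15T14:01:00Z"] * 40
print(anomalous_seconds(baseline + burst))  # flags the burst second
```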

We find the conversation related to herd immunity in general to be driven by real accounts, maintaining both conversational diversity and an overall negative sentiment, while breakout analysis of the Great Barrington Declaration conversation shows nearly half artificial accounts, with high levels of retweets, low content diversity, and an overall positive sentiment related to the document. This suggests the conversation supporting herd immunity on Twitter is artificially driven, while real accounts engaged in the conversation maintain a negative view of the issue.

The 45% artificiality rate for the Great Barrington conversation is high compared to the 9-15% rate of bots broadly on Twitter. Because bot saturation is relative to a given conversation, it is difficult to set a universal statistical baseline for what level of artificial behavior is abnormal. Instead, comparing the percentage of artificial behavior between corpora is more meaningful.

Artificial promotion of the Great Barrington Declaration suggests that sponsors of the herd immunity strategy are seeking to foster non-expert support over the support of scientists, doctors, and public health officials. While the reasons must be explored further, it can be speculated that generating popular support for the strategy against the advice of experts is the fastest way to promote ending closures, partial and full lockdowns, and perhaps other public health measures such as the wearing of face masks, as many are growing impatient with restricted social activity and other containment strategies.

Methodology

Using open-source investigation techniques, we performed a combination of social network analysis, and anomalous behavior discovery regarding herd immunity. We analyzed a total of 180,578 tweets in the herd immunity conversation, using key terms like “herd immunity”, “Barrington”, and “focused protection.” Out of the 180,578 tweets we sampled, there were 41,870 unique tweets attributed to and shared by 83,765 users.

A breakdown of the tweets is below:

Tweets analyzed:

75,557 tweets with the search term “Barrington Declaration”

105,021 tweets with the search term “herd immunity”

Unique tweets:

14,507 tweets with the search term “Barrington”

41,870 tweets with the search term “herd immunity”

Unique users:

73,931 users with the search term “herd immunity”

9,834 users with the search term “Barrington”

For more about the FAS Disinformation Research Group and to see previous reports, visit the project page here.

____________________________

* Above are the terms characterizing clusters 1-3 of the pro-herd immunity conversation (left to right). The beta is an indicator of the topic-word density, measured by word frequency or repeating pattern of co-occurring terms. The higher the beta, the higher the frequency of words or phrases.

** Above are the terms characterizing clusters 1-3 on opposition to herd immunity (left to right). The beta is an indicator of the topic-word density, measured by word frequency or repeating pattern of co-occurring terms. The higher the beta, the higher the frequency of words or phrases.

*** The image above displays frequency of messaging across the herd immunity conversation. The red dots display higher unusual frequency of messaging based on either high frequency in tweeting the same message or high frequency of tweets emanating from the same account. The horizontal axis represents tweets over time.

Most Covid Related Disinformation on Social Media Likely Emanating from Known Influencers and Traditional Media Sources

Key Highlights


Top COVID Related Messages on Social by Reach and Engagement

We conducted an assessment of COVID-related messaging on Twitter in the past week (Oct 6th-12th), covering conversations regarding vaccines, masks, treatment, and public health responses. The results indicate that traditional media reporting by established news sources such as the New York Times, along with President Donald J. Trump, are most influential by audience reach and volume of engagement. In particular, Donald Trump’s false claim suggesting seasonal flu is comparable to COVID-19 in deaths and lethality had the highest reach of any tweet this week, at an audience of approximately 87M. The same tweet had the highest audience engagement, with 305K people responding to the message by sharing, liking, or commenting.

The most significant traditional media posting on Twitter was a New York Times story reporting a substantial shortage of the Regeneron experimental treatment that was administered to President Trump with doses available for only 50,000 patients. This posting had a substantial audience reach of approximately 47.5M; however, it had far less audience engagement compared to influencers with similar reach. This is due to influencer accounts dynamically interacting with their followers compared to the one-way messaging of traditional news sources. Nonetheless, traditional media remains highly influential in spreading both factual and misleading information, as the majority of social media posts share content derived from a linked source typically in the form of a news article.   

In our assessment this week, we also identified a prolific amplifier of disinformation: Tom Fitton, a far-right conspiracy theorist and climate change denier. In his tweet, Fitton denounces the use of masks and insinuates that hydroxychloroquine is being unjustly suppressed. Although the effectiveness of hydroxychloroquine is widely refuted, the narrative that it is an effective treatment persists. Fitton’s recent tweet, riddled with false claims, reached 1.19M people with an audience engagement of 7.5K.

Methodology

We applied social and traditional media analysis of Twitter across a seven-day period, covering over 94M tweets on the topics of vaccines, masks, public health response, and treatments pertaining to COVID-19.

For more about the FAS Disinformation Research Group and to see previous reports, visit the project page here.

Creating a COVID-19 Commission on Public Health Misinformation

Summary

To better prepare for future public-health emergencies, the next president should establish several high-level COVID Commissions—modeled on the 9/11 Commission—to examine our nation’s response to the 2020 pandemic. One Commission should focus on public health communication and messaging.

The next president should task this Commission with assessing the information about the pandemic: what was made publicly available, how the information affected our societal response, and what should be done to limit the impact of false and dangerously misleading information moving forward.

Vaccine news stories hosting malware disseminated across Spanish-language Twitter

Key Highlights

Malware hosted within popular news stories about COVID-19 vaccine trials

On September 8, 2020, Oxford University and AstraZeneca placed their COVID-19 vaccine (AZD1222) development on hold. During Phase 3 trials, a woman in the United Kingdom experienced an adverse neurological condition consistent with a rare spinal inflammatory disorder known as transverse myelitis. As is typical with large-scale vaccine trials, the woman’s condition triggered a pause in the trial, lasting until September 12, at which time the trial was officially resumed. We analyzed the Twitter conversation surrounding the AstraZeneca adverse reaction event and detected social media posts spreading malware and malicious software via links embedded within those posts. One possible goal of the perpetrators is to identify the audience most interested in the issue of vaccines in order to micro-target the group with future items of interest, possibly to artificially tilt the conversation for or against certain vaccines.

Figure 1: Daily Tweet numbers

As seen in Figure 1, our analysis includes 136,597 tweets from September 2nd – 12th. Tweets were collected from the Twitter developer’s API using keyword searches for “AstraZeneca”, “AZN”, “AZD1222”, and the stock symbol “$AZN”. Beginning on the 8th, coinciding with the adverse event report, there is a large increase in the volume of tweets, with over 80,000 tweets identified on September 9th.

Within all collected tweets, 15,820 unique URLs were discovered (approximately 11.5% of tweets contained a URL). Most Twitter users include URLs in their tweets with the intent to flag or redirect their audience to a related news story, as shown in Figure 2. This particular URL, for the popular news site Stat News, was shared in 1,265 tweets.

Figure 2: Typical URL sharing tweet
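A minimal sketch of the URL-extraction step follows, pulling expanded URLs from the entities field of tweets as returned by the Twitter v1.1 API; the tweet objects shown are trimmed placeholders.

```python
# Sketch: collect the unique expanded URLs shared in a set of tweets.
def extract_urls(tweets):
    urls = set()
    for tweet in tweets:
        for entity in tweet.get("entities", {}).get("urls", []):
            if entity.get("expanded_url"):
                urls.add(entity["expanded_url"])
    return urls

tweets = [  # trimmed placeholders for full Twitter API tweet objects
    {"entities": {"urls": [{"expanded_url": "https://www.statnews.com/example"}]}},
    {"entities": {"urls": []}},
]
unique_urls = extract_urls(tweets)
print(f"{len(unique_urls)} unique URLs across {len(tweets)} tweets")
```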

These tweets can also serve as vectors for the spread of malware. Malware can be loaded onto a webpage whose URL is shared with others. This is a technique that is of increasing concern in the spread of disinformation and propaganda. Through the open-source malware detection platform VirusTotal (www.virustotal.com), we detected 53 sites hosting malware within the AstraZeneca conversation. Four URLs in particular were returned as malicious from the domain of the Russian state-sponsored Spanish-language Sputnik News (mundo.sputniknews.com). We determined that this set of domains was hosting seven distinct malware packages. Seen below in Figure 3, these include executable files, Android phone-specific malware, Microsoft Office XML, and a zipped folder rated as malicious. Notably, Russia’s Sputnik news site rests at the center of the malicious network.

Figure 3: Malware sharing network

With mundo.sputniknews.com at the center of this network (1) are a series of website links (2), connected IP addresses by nation (3), and the group of both malicious and non-malicious files (4). Examination of the details for the highlighted zipped folder of malware revealed 21 unique detections within that single file. As seen in Figure 4, we detected a series of malware files.

The malware is designed to access nearly all Intel-based hardware running MS Windows. It accesses core system components, including the kernel32.dll, shell32.dll, and netmsg.dll files on a host machine, accesses multiple critical registry files, and creates multiple mutex files during this process.

Mutexes are best understood through an example: web browsers maintain a history log of sites visited. With multiple browser windows open, each browser process will try to update the history file, but only one can lock and update the file at a time. By registering the file with a mutex object, the different processes know to wait until other processes have finished updating the file before accessing it.
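The same pattern can be sketched in a few lines of Python, with a lock standing in for the mutex object and a shared list standing in for the history file.

```python
# Sketch: a mutex serializing writes to a shared "history" resource.
import threading

history_lock = threading.Lock()
history = []

def log_visit(site):
    with history_lock:        # acquire the mutex; other threads wait here
        history.append(site)  # only one thread updates the log at a time

threads = [threading.Thread(target=log_visit, args=(f"site{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(history)  # all four visits recorded, with no interleaved writes
```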

The analysis indicates the AstraZeneca conversation on Spanish-language Sputnik News is being used to spread malware specifically designed to monitor unwitting users’ behavior on their personal devices. This is particularly significant considering the outreach by Russian vaccine efforts to Latin American audiences and governments.

Figure 4: Content of Malware File

With COVID-19 vaccine efforts being among the hottest news topics on any given day, a halt to the vaccine process was all but guaranteed to create a huge spike in social media mentions. The spike in daily tweet numbers seen in Figure 1 shows how popular the topic is, but also how vulnerable the conversation is.

With multiple outlets reporting on any given news story, it is often difficult to parse which sites are safe and which are potential dangers. For less savvy internet users, this consideration might not even come into play, which leaves them vulnerable to malware and virus attacks for little more than clicking on the wrong news story.

While we do not know how many people were infected by the malware we found, we can say that the wrong person clicking the wrong link could have disastrous effects. One feature in particular could capture sensitive information on a user’s screen, from addresses to credit card numbers, identification, or even confidential information in sectors like banking or government. While we were able to catch the malware in advance, those without the same robust security would be at a much steeper risk.

This malware technique can also be used to identify users who are interested in vaccine stories in order to target them with future vaccine news. Micro-targeting allows for companies to define specific, rigid user profiles in order to create an audience for content and ads. If users are placed within one of these audiences, companies placing ads are able to send them content tailored for their interests. For instance, if someone is flagged as being interested in vaccine news, they might be a targeted recipient of an ad buy meant to highlight misleading, false, or sponsored news about the development of another vaccine. Since ads often appear organically in social media feeds, it is sometimes difficult to distinguish between an article your friend shared and an article a company paid to place in front of you. 

Methodology

We performed a combination of social network analysis, anomalous behavior discovery, and malware detection. We scanned 15,820 topic-specific URLs through the open-source malware detection platform VirusTotal (www.virustotal.com). The scans returned 53 sites hosting malware within the AstraZeneca conversation.

For more about the FAS Disinformation Research Group and to see previous reports, visit the project page here.

Global enthusiasm and American trepidation in Russian diplomatic vaccine efforts

Key Highlights

Key Trends

Increased language specific activity linked to Russian vaccine partnerships

Since the announcement of the Russian Sputnik V vaccine on August 11, social media has been awash with speculation, consternation, and detestation of the untested vaccine. The vaccine was immediately turned into a political talking point, with accounts like @ImpeachmentHour making the unfounded claim: “Trump says the US will buy 100,000,000 doses of an untested, unproven Russian Covid vaccine! Anybody see this coming?”

While Americans are skeptical of Sputnik V, with some incorrectly claiming that the U.S. government is involved in either its manufacturing or distribution, their skepticism was far outweighed by large spikes of activity that praised and amplified the vaccine internationally.

Spanish-language tweet volume increased from August 18 to 22, as both Mexico and Venezuela arranged partnerships with Russia to receive doses of Sputnik V as part of international Phase 3 trials. English (green) and Spanish (orange) tweet volumes by day are shown below.

Image 1: Volume of Russian vaccine conversation by language (Aug 17 – 26)

The increase in Spanish tweet volume over this same period, on average doubling from 2,000 to 4,000 tweets per day, is accompanied by indications of widespread automated account activity, including rapid propagation of specific content, large shifts in narrative structure, and a lack of content diversity. In repeated sampling of accounts tweeting between August 18 and 22, publicly available bot detection platforms estimate that 71.5% of accounts are bots, while estimated automated activity before and after this period falls to 28.6%. Moreover, structural changes take place in the conversation about COVID and vaccine development. Seen below, LDA topic modeling of Spanish-language tweets shows that content during and after the August 18-22 surge focuses on the Russian vaccine effort, while the prior conversation is less focused.

Image 2: Spanish-language Twitter content before official vaccine agreements (Aug 17 – 19)
Image 3: Spanish-language Twitter content after official vaccine agreements (Aug 19 – 25)

The increased language-specific activity is largely the result of retweets of pro-Russian, anti-American, and/or pro-Sputnik V content. Moreover, it is mostly repeated and reshared content. The diversity of content drops dramatically, and the vast majority of all retweets of this content exist only during the spikes in activity.

Before the vaccine collaboration announcements, there were no robust Spanish- and Turkish-language conversations surrounding the Russian vaccine. We believe that the spikes are largely astroturfed, but that the conversations in the following days could be a combination of organic content and a sustained effort to keep pro-Sputnik V content circulating.

But these efforts in Mexico and Turkey do not stand alone. We know that Russia will soon announce its vaccine efforts in India, as it has formally approached the Indian government, according to press reports. Spikes in English-language content in India are drastically less negative than English-language content from the United States. 

By creating and sharing large swaths of positive reactions to Sputnik V trial announcements, public opinion, especially from those readily persuadable, can be artificially shifted in favor of Sputnik V and Russia, and away from the American vaccine effort. What is most revealing is the assortment of countries that Russia has asked to be a part of their vaccine effort. The potential participation of Mexico, Turkey, Saudi Arabia, South Korea, and Israel, among others, puts the U.S. in a precarious diplomatic position. Though the coronavirus threat is global, America is baited to take note that its allies would readily volunteer to test a widely criticized vaccine. 

False claims about the Sputnik V vaccine

Pro-Sputnik V efforts have stalled in the U.S., where suspicion has taken over the narrative. Online, Americans believe that the Russian vaccine is poorly researched, and that belief lays the groundwork for similar claims against an American-approved vaccine as well. Influencers speculate that the Russian vaccine provides ample prologue for a vaccine Cold War. Beyond well-founded skepticism, there has been disinformation surrounding the vaccine, like an article making the erroneous claim that Russian President Vladimir Putin’s daughter died after taking the Sputnik V vaccine. The article was shared nearly 11,000 times on Facebook alone; the story has been debunked by Snopes.

Speculation that the US will rush a vaccine, combined with spurious claims that the American president has bought doses of Sputnik V vaccine, will have consequences for vaccine uptake in the US. A recent study in preprint says that 68% of Americans will take a COVID-19 vaccine if “proven safe and effective,” but from what we’ve seen in the reaction to the Sputnik V vaccine, safe and effective as deemed by government institutions might not be convincing enough. In the same study, only 55% of Russians said they would take a vaccine under the same conditions. 

Image 4: Potential acceptance of a COVID-19 vaccine by country

Methodology

We performed an analysis of 125,000 tweets containing the terms sputnikv OR sputnik OR RussianVaccine (-filter:verified OR filter:verified) between August 17th-25th, in the top ten languages and from the 16 countries that agreed to participate in Phase 3 trials, mass production, and/or receive priority in the delivery schedule of a Russian-developed vaccine. The analysis demonstrated spikes in activity in the respective languages when Russia announced the willingness of countries where those are primary languages to work with Russia now (rather than waiting for the United States or other nations to finish vaccine development). Spikes in tweets in these languages overlap with the timing of the announcements.

The top ten languages were: English, Spanish, Japanese, Turkish, German, French, Portuguese, Russian, Arabic, and Catalan. The 16 countries announced by the Russian Direct Investment Fund were: Belarus, the Philippines, Myanmar, UAE, Saudi Arabia, Israel, Jordan, Vietnam, Palestine, Kazakhstan, Azerbaijan, Venezuela, India, Brazil, Mexico, and Turkey.
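For readers reproducing the volume analysis, a minimal sketch of the per-language, per-day aggregation is below; the tweet records are placeholders.

```python
# Sketch: count tweets per language per day to surface announcement-driven
# spikes. Records are placeholders for tweets pulled from the Twitter API.
import pandas as pd

records = [
    {"lang": "es", "created_at": "2020-08-18T10:00:00Z"},
    {"lang": "es", "created_at": "2020-08-19T11:00:00Z"},
    {"lang": "en", "created_at": "2020-08-18T12:00:00Z"},
]

df = pd.DataFrame(records)
df["day"] = pd.to_datetime(df["created_at"], utc=True).dt.date
volume = df.groupby(["lang", "day"]).size().unstack(fill_value=0)
print(volume)  # one row per language, one column per day
```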


Samples of retweeters pushing this new pro-Russia/pro-vaccine content demonstrate a high percentage of bot activity, while the population of accounts in the conversation before the introduction of mass retweeting shows normal levels of authentic accounts. FAS conducted automated activity (bot) detection analysis by applying distributional analysis of account characteristics. Through distributional analysis, FAS detected account behaviors outside those of normal human-operated accounts, indicating an increased likelihood of automated bot activity.

This demonstrates that dramatic differences in topic and sentiment in COVID-related messaging appear in specific languages both prior to and immediately after the countries that speak those languages join the Russian vaccine development effort.

For more about the FAS Disinformation Research Group and to see previous reports, visit the project page here.

Weekly COVID-19 Disinformation Report for August 14: #Plandemic to #Scamdemic

Key Highlights

Key Trends

Virology Journal article linked to claim that Dr. Fauci knew HCQ is a cure.
A false claim states that in 2005, Dr. Fauci said that hydroxychloroquine is both “a cure and a vaccine” for coronavirus, referencing a study published in 2005. That article showed the effectiveness of chloroquine against classic SARS, SARS-CoV, in vitro (in cells grown in culture in a laboratory). The study was done on a different virus and did not assess the effectiveness of the drug in people.

A One News Now article published on April 27 made the initial claim. On July 28, tweets about this theory spiked, with 9,245 tweets mentioning “Fauci” and “2005” shared in a single day. Since then, an average of 4.2K tweets per day have mentioned “Fauci” and “2005”, in an apparent reference to the article. On August 2, an image of this claim appeared on Facebook with the caption “So where has this been hiding for the past 6 months and how did he manage to forget about it?”

Tweets sharing the One News Now article with the caption “Fauci knew about HCQ in 2005 — nobody needed to die” have been mentioned and reposted in 27K tweets, for a total of 131M impressions. Other attacks on Fauci include tweets of a 2017 video in which he warns of a future pandemic. The clip in question is in fact sound advice and a warning to be vigilant against infectious disease outbreaks, but it has been distorted by @DrDavidSamadi, a @newsmax contributor and former @FoxNews Medical A Team member, as well as others, to place blame on Fauci. Between July 28 and 30, this tweet had 51M impressions through 15K retweets of the original post.
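Impression counts like these are typically estimates rather than directly observed values. One common rough proxy sums the follower counts of every account that posted or shared the content; a minimal sketch with invented numbers:

```python
# Hypothetical follower counts for accounts that retweeted the post.
# Summing them approximates the potential audience ("impressions"),
# ignoring overlap between follower sets and repeat views.
retweeters_followers = [15_000, 2_300, 980_000, 410, 73_500]
approx_impressions = sum(retweeters_followers)
print(f"Approximate impressions: {approx_impressions:,}")
```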

Dr. Stella Immanuel and America’s Frontline Doctors

On July 28, Trump retweeted a video, which has since been removed from Twitter, showing America’s Frontline Doctors, a group claiming that masks and shutdowns are not required to end the pandemic. One of the key doctors featured in this video was Dr. Stella Immanuel, who claims that hydroxychloroquine is a “cure for covid.”

One of the highest-reaching tweets referring to “Fraudci” shared an excerpt of an interview with Dr. Stella Immanuel. The tweet stated: “She said, “I am a sniper for the kingdom of God?!” Nothing is going to stop this woman protected by Army Angels?. To think Dr. Fraudci has known since 05’ that this ladies cure WORKS?! I never thought The Deep State was this EVIL?. Trump 2020?” The full interview is here. In this interview, she also claims that demons exist.

#Scamdemic

Amidst other serious and frivolous claims against Fauci, Bill Gates, and other public health-related figures, a hashtag sprang up to serve as a repository for accusations of government mismanagement and Orwellian machinations. The #Scamdemic hashtag first peaked on July 19-20 with 13.5K mentions, coinciding with Fauci attending the Major League Baseball game in Washington, DC. His public appearance gave bad actors an opportunity once again to call COVID-19 a hoax. Accounts like @InformedNJNurse and @Nimmermaximum shared unsourced claims that nurses received positive test results on unused swabs, and that “if they stopped testing, the pandemic would be over.” Turning Point USA founder Charlie Kirk saw 16K retweets on a tweet saying that Fauci attending Opening Day “should outrage every American who’s been told they can’t have in-person sports this year.”

The hashtag peaked again on August 4, with 24K mentions, likely attributable to a tweet from PragerU personality Candace Owens.

Putin vaccine claims

On Tuesday, August 11, Russian President Vladimir Putin announced that Russia’s “Sputnik V” vaccine was registered and approved for distribution. The internet exploded with speculation, with 313K tweets mentioning “vaccine” together with “Russia” or “Putin.” The Russian vaccine is controversial because it has not gone through Phase III safety and efficacy trials. Twitter accounts, including the liberal @ImpeachmentHour, falsely claimed that the US would buy 100M doses of the Russian vaccine, and other accounts have repeated the false claim, with one user even making a similar claim nearly a week before the Russian announcement. Disparate accounts have made similar claims, and while no single tweet has reached virality, the idea is spreading.

Twitter Statistics

The term “Fraudci” reached a peak of 28M impressions on August 1 and continues to appear in tweets spreading disinformation, including references to the Virology Journal article and America’s Frontline Doctors.

Methodology

Using open-source investigation techniques, FAS evaluated the volume of social media interactions regarding coronavirus in Twitter posts and digital news articles published between July 27 and August 11. The analysis focused primarily on results in the United States, but also included data from other English-speaking countries, including Australia, Canada, and the United Kingdom. FAS analysts evaluated the volume of tweets that emerged over time, the reach of trending tweets, and overall public sentiment. The search terms used to identify the highest-reaching tweets were (“COVID” OR “virus” OR “coronavirus”).

Weekly COVID-19 Disinformation Report for July 20: Masks, microchips, Michigan, and misinformation

Key Highlights

Key Trends

Ongoing conspiracy theories regarding mask use

A video on Twitter claiming that the metal wires in store-bought face masks are 5G antennas received 1.3 million views and 14.7K retweets and comments. Another video followed the same pattern of conspiracy theories, denouncing mask use as harmful or as a systematic, “deep state” effort to control the public. Other videos with similar messaging have been spreading this week on Facebook via a community of anti-vaccination activists.

On July 9, Russ Diamond, a Pennsylvania state representative with over 1,300 followers on Twitter, tweeted combative messages at journalists and misleading posters questioning the safety of face masks for children.

Tweets that fit our methodology and included the keyword “Fauci” (Dr. Anthony Fauci) showed an overall 69% negative sentiment during the reporting period. These trends followed recent criticism of Fauci from the White House, including reports that the President had not received a briefing on the COVID-19 crisis from Dr. Fauci since early June, and an appearance on FOX News in which the President said Fauci had made “many mistakes.”

Mortality rates

During the reporting period, President Donald Trump made two false claims about the U.S. mortality rate. These tweets had the highest reach of all tweets found using our methodology. On July 6, he tweeted: “Why does the Lamestream Fake News Media REFUSE to say that China Virus deaths are down 39%, and that we now have the lowest Fatality (Mortality) Rate in the World. They just can’t stand that we are doing so well for our Country!”

And on July 7, he tweeted: “‘COVID-19 (China Virus) Death Rate PLUNGES From Peak In U.S.’ A Tenfold Decrease In Mortality. The Washington Times @WashTimes Valerie Richardson. We have the lowest Mortality Rate in the World. The Fake News should be reporting these most important facts, but they don’t!” On July 8, he retweeted his July 7 tweet.

Mortality rates have decreased as we have improved measures to shelter the elderly and care for the ill, but the US mortality rate is still extremely high: Johns Hopkins data show it is the 9th highest worldwide, and among the countries most impacted by the pandemic, the second highest of any nation.

Skepticism and systematic campaigns against a COVID-19 vaccine

As the development of a COVID-19 vaccine continues, a number of disinformation campaigns have emerged accusing the World Health Organization, Big Pharma, and billionaire Bill Gates of planning or creating the virus to profit from the development and distribution of a vaccine. These claims are particularly prominent within the anti-vaxxer community on Facebook and YouTube and have been amplified by claims from prominent figures on Twitter.

In recent months, environmental lawyer and activist Robert Kennedy Jr., a noted vaccine skeptic, has been quoted by conspiracy news sites as a prominent figure exposing “Big Pharma” and questioning the safety of vaccines. Kennedy has publicly accused Bill Gates of controlling the World Health Organization and has amplified theories that Gates and Dr. Anthony Fauci both aim to profit from a COVID-19 vaccine. Following conspiracy theories propagated by the Plandemic documentary earlier this summer, Bill Gates has become the target of many conspiracy theories, including one propagated by Kennedy in an op-ed suggesting that the World Health Organization is a marketing arm of Gates’ vaccine empire.

Our analysis has also revealed overall apprehension and fear among Americans regarding the prospective uptake of a COVID-19 vaccine. A recent poll indicated that most Americans say they will not take a vaccine when it is produced. This week, Kanye West referred to vaccines as “the mark of the beast” and claimed that a COVID-19 vaccine was an attempt to “put chips inside of us.” On July 4, Nation of Islam leader Louis Farrakhan gave a three-hour speech about the danger of vaccines, imploring Africans and African-Americans to be wary of “their medications.”

Editor-in-Chief of Uncover DC, Tracy Beanz (219.2K followers), shared a thread (3K retweets, 1.8K likes) playing to fears that if one were to refuse treatment or a vaccine, the state health department would have the authority to vaccinate them “by any means necessary.” Comments in response to the tweet used terms like “unconstitutional” and “illegal” and challenged the legality of mandatory vaccination.

Twitter Statistics

Of 6.71M tweets mentioning COVID-19 testing, vaccines, treatment, or mask use, overall sentiment was negative (56%). Top locations for tweets were the United States (5.6M) and the United Kingdom (574K).

Negative sentiment has fluctuated but has remained high since the start of the pandemic.
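The report does not name the sentiment model used. As one illustration of how a negative-sentiment share can be computed, the sketch below applies the off-the-shelf VADER lexicon; the vaderSentiment package, the -0.05 compound-score cutoff, and the sample tweets are assumptions for illustration:

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

# Hypothetical sample of tweet texts; a real run would process the full corpus.
tweets = [
    "Testing sites are overwhelmed and nobody is being told the truth.",
    "Great news: the new vaccine trial enrolled its first volunteers!",
    "Masks again?! This whole thing is a scam.",
]

analyzer = SentimentIntensityAnalyzer()
# VADER's usual convention: a compound score <= -0.05 counts as negative.
negative = [t for t in tweets if analyzer.polarity_scores(t)["compound"] <= -0.05]
print(f"Negative sentiment share: {len(negative) / len(tweets):.0%}")
```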

Methodology

Using open-source investigation techniques, FAS evaluated the volume of social media interactions regarding coronavirus in Twitter posts and digital news articles published between July 3 and 12. The analysis focused primarily on results in the United States, but also included data from other English-speaking countries, including Australia, Canada, and the United Kingdom. FAS analysts evaluated the volume of tweets that emerged over time, the reach of trending tweets, and overall public sentiment. The search terms used to identify the highest-reaching tweets were (“COVID” OR “virus” OR “coronavirus”) AND ((“testing” OR “cases” OR “test” OR “rate”) OR (“vaccine” OR “treatment”) OR (“mask” OR “masks”)).
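A minimal sketch of how this compound query could be re-applied locally to filter an already-collected corpus; this reimplements the boolean logic in Python, whereas the actual search ran against Twitter's own query syntax:

```python
import re

def matches(text: str) -> bool:
    """Apply the report's boolean search logic: (COVID OR virus OR
    coronavirus) AND (a testing, vaccine/treatment, or mask term)."""
    t = text.lower()

    def any_of(*terms: str) -> bool:
        # Whole-word match for each candidate term.
        return any(re.search(rf"\b{re.escape(w)}\b", t) for w in terms)

    return any_of("covid", "virus", "coronavirus") and (
        any_of("testing", "cases", "test", "rate")
        or any_of("vaccine", "treatment")
        or any_of("mask", "masks")
    )

print(matches("New COVID testing sites open today"))  # True
print(matches("The vaccine rollout plan"))            # False: no COVID/virus term
```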