We Need Biological Verification and Attribution Tools to Combat Disinformation Aimed at International Institutions

The Biological Weapons Convention’s Ninth Review Conference (RevCon) took place amid a unique geopolitical storm, as the COVID-19 pandemic raged and the Russian invasion of Ukraine took center stage. Russia’s continued claims of United States-sponsored bioweapons research laboratories in Ukraine only added to the tension. Russia asserted that it had uncovered evidence of offensive biological weapons research underway in Ukrainian labs, supported by the United States, and that it invaded Ukraine because of the threat these labs posed so close to its borders.

While this story has been repeated countless times to countless audiences – including at an Article V consultative meeting and before the United Nations Security Council (UNSC) as part of an Article VI complaint that Russia lodged against the U.S. – the fact remains that Russia’s assertion is untrue.

The biological laboratories present in Ukraine operate as part of the U.S. Department of Defense’s Biological Threat Reduction Program and are run by Ukrainian scientists charged with detecting and responding to emerging pathogens in the region. Articles V and VI of the Biological Weapons Convention (BWC) are intended to be invoked when a State Party seeks to resolve a problem of cooperation or to report a breach of the Convention, respectively. The Article V formal consultative meeting ended without consensus, and the Article VI UNSC meeting rejected Russia’s claims. The lack of consensus at the consultative meeting created a foothold for Russia to continue its disinformation campaign, and the UNSC meeting only reinforced it. Articles V and VI are meant to provide States Parties with some means of mediation and recourse in the case of a BWC violation. In this case, however, they were not invoked in good faith; rather, they were used as a springboard for a sweeping Russian disinformation campaign.

Abusing Behavioral Norms for Political Gain

While Russia’s initial steps of calling for an Article V consultative meeting and an Article VI Security Council investigation do not seem outwardly untoward, Russia’s behavior during and after the proceedings that dismissed its claims revealed a deeper purpose.

Example: Misdirection in Documentation Review

During the RevCon’s article-by-article review, the Russian delegation repeatedly raised how the results of the UNSC investigation would be described in the final document, calling for language that presented a more slanted account of events. They also continually cited the U.S.’ refusal to answer their questions, despite the answers being publicly available on the consultative meeting’s UNODA page. The Russians characterized their repeated mentions of the UNSC investigation findings as an act of defiant heroism, implying that the U.S. was trying to quash their valid concerns but that Russia would continue to raise them until the world had gotten the answers it deserved. This narrative directly contradicts the facts of the Article V and VI proceedings: the UNSC saw no need to continue investigating Russia’s claims.

Example: Side Programming with Questionable Intent 

The Russian delegation also conducted a side event at the RevCon dedicated to the outcomes of the consultative meeting. The side event included a short ‘documentary’ presenting Russian evidence that the U.S.-supported Ukrainian laboratories were conducting biological weapons research. This evidence included footage of pesticide-dispersal drones in a parking lot that had supposedly been modified to hold bioweapons canisters, cardboard boxes with USAID stickers on them, and a list of pathogen samples supposedly present but destroyed prior to filming. When asked about next steps, the Russian delegation made thinly veiled threats to hold the larger BWC negotiations hostage, stating that if the U.S. and its allies maintained their position and demonstrated no further interest in continuing dialogue, it would be difficult for the 9th RevCon to reach consensus.

Example: Misuse of ‘Point of Order’

During opening statements, the Russian delegation repeatedly called a point of order upon any mention of the Russian invasion of Ukraine. A point of order allows a delegation to respond to the speaker immediately, effectively interrupting their statement. During the Ukrainian delegation’s opening statement, the Russian delegation called four points of order, dismissing Ukraine’s “political statements” as disconnected from the BWC discussion. Russia’s use of the rules of procedure to bully other delegations continued: after concluding a point of order during the NATO delegate’s statement, they called another almost immediately once she resumed her statement with the single word, “Russia.” This behavior persisted throughout all three weeks of the RevCon.

Russia’s conduct at the 9th RevCon underscores the unwitting role international institutions can play as springboards for state-sponsored propaganda and disinformation.

Example: Single Vote Disruption Made in Bad Faith

All BWC decisions are adopted by consensus, meaning that all States Parties must agree for a decision to be made. While this helps ensure the greatest inclusivity and equality among States Parties, and promotes implementation, it also means that a single country can act as a ‘spoiler’ and block even widely supported changes.

For example, in 2001, the United States pulled out of verification mechanism negotiations at the last minute, upending the entire effort. Russia’s behavior in 2022 was similarly disruptive, but it was carried out with the goal of subversion. Russia’s obstruction changed how other delegations reacted; representatives seemed more reluctant to mention the Article V and VI proceedings. Because the United Nations is structured to be impartial and the BWC is consensus-based, neither can, by its very nature, combat its own misuse. Any progress in the BWC relies on states operating in good faith, which is impossible when a country is pursuing a disinformation agenda.

The very nature of the UN and its associated bodies thus attenuates their ability to respond to misuse by member states. Russia’s behavior at the 9th RevCon is part of a pattern that shows no signs of slowing down.

We Need More Sophisticated Biological Verification and Attribution Tools

The actions described above demonstrate that the door has been kicked fully open for regimes to use the UN and associated bodies as mouthpieces for state-sponsored propaganda.

It is therefore imperative that 1) more sophisticated biological verification and attribution tools be developed, and 2) the BWC implement a legally binding verification mechanism.

Better methods of verifying whether biological research serves civil or military purposes will help remove ambiguity around laboratory activities worldwide. They will also make it harder for benign activities to be misidentified as offensive biological weapons activities.

Improved attribution methods, in turn, will determine where biological weapons originate, removing further ambiguity in the event of a genuine biological attack.

The development of both capabilities will strengthen an eventual legally binding verification mechanism. These two changes will also allow Article V consultative meetings and Article VI UNSC meetings to determine the presence of offensive bioweapons research more definitively, contributing substantively to the strengthening of the Convention. As ambiguity around the results of such investigations decreases, so does the space for disinformation to take hold.

Big Tech CEOs questioned by the House Energy and Commerce Committee about fighting disinformation with AI

The amount of content posted on social media platforms is increasing at a dramatic rate, and so is the portion of that content that is false or misleading. For instance, users upload over 500 hours of video to YouTube every minute. While much of this content is innocuous, some of it spreads harmful disinformation, and addressing the spread of false or misleading information has been a substantial challenge for social media companies. The spread of disinformation, as well as misinformation, on social media platforms was highlighted during a March 25, 2021 hearing of the House Energy and Commerce Committee. Members questioned the CEOs of Facebook, Twitter, and Google on their roles in stopping the spread of false information, much of which contributed to the worsening of the COVID-19 pandemic as well as to the January 6 insurrection at the Capitol.

Artificial intelligence as a solution for disinformation

False or misleading posts on social media spread quickly and can significantly affect people’s views. MIT Sloan researchers found that false information was 70% more likely to be retweeted on Twitter than facts; false information also reached its first 1,500 people six times faster. Furthermore, researchers at RAND discovered that a constant onslaught of false information can even skew people’s political opinions. Specifically, false information exacerbates the views of people in closed or insular social media circles because they receive only a partial picture of how other people feel about specific political issues.

Traditionally, social media companies have relied on human reviewers to find harmful posts. Facebook alone employs over 30,000 reviewers. According to a report published by New York University, Google employs around 10,000 content reviewers for YouTube and its subsidiaries, and Twitter employs around 1,500. However, human review of content is time-consuming and, in many instances, extremely traumatic for the reviewers. These Big Tech companies are now developing artificial intelligence (AI) algorithms to automate much of this work.

At Facebook, the algorithms rely on tens of millions of user-submitted reports about potentially harmful content. This dataset is used to train the algorithms to identify which types of posts are actually harmful. The content is separated into seven categories: nudity, graphic violence, terrorism, hate speech, fake accounts, spam, and suicide prevention. In the past few years, much of this effort has been dedicated to identifying fake accounts likely to be used for malignant purposes, such as election disinformation. Facebook is also using its AI algorithms to identify fraudulent outlets publishing fake news and to help its reviewers remove spam accounts.
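
To make the training step concrete, the sketch below shows how a simple multi-class text classifier might be trained on reviewer-labeled reports. This is a minimal illustration under our own assumptions – the toy dataset, labels, and scikit-learn pipeline are invented here, not Facebook’s actual system, which operates at far larger scale and across images and video as well as text.

```python
# Minimal sketch of a report-trained content classifier.
# Illustrative only: the labels and examples are hypothetical, and
# production systems are far larger and multimodal.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical user-report dataset: (post text, reviewer-assigned label).
reports = [
    ("win a free phone, click this link now", "spam"),
    ("limited offer!!! send your bank details", "spam"),
    ("[slur-laden attack on an ethnic group]", "hate_speech"),
    ("those people don't deserve to live here", "hate_speech"),
    ("had a great weekend hiking with friends", "benign"),
    ("recipe thread: the best way to roast vegetables", "benign"),
]
texts, labels = zip(*reports)

# TF-IDF features + logistic regression: a simple multi-class baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(texts, labels)

# A new post gets a predicted category and a confidence score that can
# decide whether it is auto-actioned or routed to a human reviewer.
post = "click here to claim your free prize"
print(model.predict([post])[0])            # e.g. "spam"
print(model.predict_proba([post]).max())   # classifier confidence
```

In practice the hard engineering lies in everything this sketch omits: deduplicating millions of reports, handling many languages, and choosing the confidence threshold above which a post is actioned automatically rather than sent to a human reviewer.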

Google has developed algorithms that skim all search results and rank them based on quality and relevance to a user’s search terms. When the algorithms identify articles promoting misinformation, those articles are ranked lower in the search results and are therefore more difficult to find. For YouTube, the company developed algorithms to screen new content and demonetize anything related to COVID-19: videos about the pandemic cannot earn ad revenue, ideally dissuading those attempting to profit from posting misleading COVID-19 content. YouTube has also redesigned its recommendation algorithms to show users authoritative sources of information about the COVID-19 pandemic and to steer them away from disinformation and misinformation.
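
The demotion step can be captured in a few lines. The sketch below assumes each result carries a relevance score and a binary flag from an upstream misinformation classifier; the demotion factor is invented for illustration, since real search ranking combines hundreds of signals rather than two.

```python
# Minimal sketch of demoting low-quality results in a ranked list.
# The scoring formula and weights are hypothetical, not Google's.
def rank_results(results, demotion_factor=0.3):
    """Sort results by relevance, scaled down when a result has been
    flagged by a misinformation classifier."""
    def score(r):
        base = r["relevance"]           # relevance to the query, 0..1
        if r["flagged_misinfo"]:        # output of an upstream classifier
            base *= demotion_factor     # demote rather than remove
        return base
    return sorted(results, key=score, reverse=True)

results = [
    {"url": "https://example.org/a", "relevance": 0.9, "flagged_misinfo": True},
    {"url": "https://example.org/b", "relevance": 0.7, "flagged_misinfo": False},
]
for r in rank_results(results):
    print(r["url"])   # the flagged page now ranks below the unflagged one
```

Demoting rather than deleting preserves a result for users who search for it specifically, while keeping it off the first page where most clicks happen.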

Twitter is also using AI to detect harmful tweets and remove them as quickly as possible. In 2019, the company reported that its algorithms removed 43% of the tweets that violated its content policies. That same year, Twitter purchased a UK-based AI startup to help counter disinformation spreading on its platform. Its algorithms are designed to quickly identify content that poses a direct risk to the health or well-being of others and to prioritize that content for review by human moderators. These moderators then evaluate the potentially problematic tweets and make a final determination as to whether the content is truly harmful.
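
A minimal version of this triage pipeline might look like the following, where a stubbed-out risk model stands in for a trained classifier and a priority queue feeds the riskiest items to human moderators first. The risk terms and scoring are purely hypothetical.

```python
# Minimal sketch of risk-scored triage for human review.
# The risk model is a stub; in practice it would be a trained
# classifier like the one sketched earlier.
import heapq

def risk_score(tweet: str) -> float:
    """Stub: return a 0..1 risk estimate (hypothetical phrase list)."""
    danger_terms = ("miracle cure", "drink bleach", "fake ballots")
    return min(1.0, sum(term in tweet.lower() for term in danger_terms) / 2)

review_queue = []  # max-priority queue via negated scores

def enqueue(tweet: str):
    heapq.heappush(review_queue, (-risk_score(tweet), tweet))

for t in ["This miracle cure beats vaccines, drink bleach!",
          "Lovely sunset tonight",
          "They counted fake ballots, spread the word"]:
    enqueue(t)

# Human moderators pop the riskiest content first and make the final call.
while review_queue:
    neg_score, tweet = heapq.heappop(review_queue)
    print(f"{-neg_score:.2f}  {tweet}")
```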

The limitations of using AI

While AI can be a useful tool in combating disinformation on social media, it has significant drawbacks. One of the biggest problems is that AI algorithms have not achieved sufficient proficiency in understanding language and have difficulty determining what a specific post actually means. For example, AI systems like Apple’s Siri can follow simple commands or answer straightforward questions, but cannot hold a conversation. During the hearing, Twitter CEO Jack Dorsey and Facebook CEO Mark Zuckerberg discussed this point, describing how difficult it is for AI algorithms to distinguish social media posts denouncing harmful ideas from those endorsing them. Another problem is that the decision-making processes of these algorithms can be highly opaque; the systems cannot explain why or how they reached their decisions. Lastly, AI algorithms are only as good as the data on which they are trained. Imperfect or biased data leads to ineffective algorithms and flawed decisions, and these biases can come from many sources and be difficult for AI scientists to identify.
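
The denounce-versus-endorse problem is easy to demonstrate. In the toy example below, a keyword filter – a stand-in for any purely vocabulary-based model, with an invented blocklist – flags a debunking post just as readily as the harmful post it refutes, because both contain the same phrases.

```python
# Toy illustration of the denounce-vs-endorse problem: a keyword
# filter sees the same tokens in both posts and cannot infer stance
# from vocabulary alone. The blocklist is hypothetical.
BLOCKLIST = {"drink bleach", "cures covid"}

def keyword_flag(post: str) -> bool:
    text = post.lower()
    return any(phrase in text for phrase in BLOCKLIST)

endorsing = "Drink bleach, it cures COVID!"
denouncing = "PSA: do NOT drink bleach. Nothing about it cures COVID."

print(keyword_flag(endorsing))    # True: correctly flagged
print(keyword_flag(denouncing))   # True: false positive on a debunking post
```

Telling whether a post supports or opposes the claim it mentions requires models that understand context rather than just vocabulary, which is exactly the capability the CEOs described as still immature.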

More needs to be done

False and misleading posts on social media about the COVID-19 pandemic and the results of the 2020 presidential election have led to significant harm in the real world. In order to fully leverage AI to help mitigate the spread of disinformation and misinformation, much more research needs to be done. As we monitor Congressional activity focused on countering disinformation, we encourage the CSPI community to serve as a resource for federal officials on this topic.

Policy proposals about countering false information from FAS’ Day One Project

A National Strategy to Counter COVID-19 Misinformation – Amir Bagherpour and Ali Nouri

Creating a COVID-19 Commission on Public Health Misinformation – Blair Levin and Ellen Goodman

Combating Digital Disinformation: Resisting Foreign Influence Operations through Federal Policy – Dipayan Ghosh

Digital Citizenship: A National Imperative to Protect and Reinvigorate our Democracy – Joseph South and Ji Soo Song