Big Tech CEOs questioned by the House Energy and Commerce Committee about fighting disinformation with AI
The amount of content posted on social media platforms is increasing at a dramatic rate, and so is the portion of that content that is false or misleading. Users upload over 500 hours of video to YouTube every minute, for instance. While much of this content is innocuous, some of it spreads harmful disinformation, and curbing false or misleading information has been a substantial challenge for social media companies. The spread of disinformation and misinformation on these platforms was highlighted during a March 25 hearing of the House Energy and Commerce Committee, where members questioned the CEOs of Facebook, Twitter, and Google about their companies' roles in stopping the spread of false information that contributed to the worsening of the COVID-19 pandemic and to the January 6 insurrection at the Capitol.
Artificial intelligence as a solution for disinformation
False or misleading posts on social media spread quickly and can significantly affect people’s views. MIT Sloan researchers found that false information on Twitter was 70% more likely to be retweeted than facts and reached its first 1,500 people six times faster. Researchers at RAND have also found that a constant onslaught of false information can skew people’s political opinions: false information hardens the views of people in closed or insular social media circles because they receive only a partial picture of how others feel about specific political issues.
Traditionally, social media companies have relied on human reviewers to find harmful posts. Facebook alone employs over 30,000 reviewers. According to a report published by New York University, Google employs around 10,000 content reviewers for YouTube and its subsidiaries, and Twitter employs around 1,500. However, human review of content is time-consuming and, in many instances, extremely traumatic for the reviewers. These Big Tech companies are now developing artificial intelligence (AI) algorithms to automate much of this work.
At Facebook, the algorithms rely on tens of millions of user-submitted reports about potentially harmful content. This dataset is used to train the algorithms to identify which types of posts are actually harmful. The content is separated into seven categories: nudity, graphic violence, terrorism, hate speech, fake accounts, spam, and suicide prevention. In the past few years, much of this effort has been dedicated to identifying fake accounts likely to be used for malignant purposes, such as election disinformation. Facebook is also using its AI algorithms to identify fraudulent outlets publishing fabricated news and to help its reviewers remove spam accounts.
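To make the general approach concrete, here is a minimal sketch of a report-trained content classifier, assuming a small hand-labeled set of user-reported posts and using scikit-learn; the example posts, labels, and model choice are illustrative assumptions, not details Facebook has disclosed.

```python
# Illustrative sketch only: Facebook has not published its models or training data.
# Assumes a tiny hand-labeled set of user-reported posts and scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: user-reported posts paired with reviewer labels
# drawn from a few of the seven categories named above.
reports = [
    ("Buy followers now!!! Click this link", "spam"),
    ("Account impersonating a public official", "fake_account"),
    ("Post attacking people for their religion", "hate_speech"),
]
texts, labels = zip(*reports)

# TF-IDF features plus a linear classifier: a simple stand-in for the far
# larger models a platform would actually train on millions of reports.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# New posts can then be routed to the appropriate review queue by predicted category.
print(model.predict(["Earn $500 a day from home, just click here"]))
```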
Google has developed algorithms that scan search results and rank them based on quality and relevance to a user’s search terms. When the algorithms identify articles promoting misinformation, those articles are ranked lower in the results and are therefore harder to find. For YouTube, the company developed algorithms to screen new content and demonetized content related to COVID-19: videos about the pandemic cannot earn ad revenue, which is intended to dissuade those attempting to profit from COVID-19 scams by posting misleading content. YouTube has also redesigned its recommendation algorithms to steer users toward authoritative sources of information about the pandemic and away from disinformation and misinformation.
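Google's ranking systems are proprietary, but the basic idea of demoting flagged results can be sketched as follows; the misinfo_score field, the demotion weight, and the example URLs are all hypothetical.

```python
# Illustrative sketch: shows only the general idea of demoting results that an
# upstream classifier flags as likely misinformation, not Google's actual ranking.
from dataclasses import dataclass

@dataclass
class SearchResult:
    url: str
    relevance: float      # relevance to the query, 0..1
    misinfo_score: float  # upstream classifier's misinformation estimate, 0..1

def rank(results, demotion_weight=0.8):
    """Sort results by relevance, penalizing results flagged as likely misinformation."""
    def adjusted(r):
        return r.relevance * (1 - demotion_weight * r.misinfo_score)
    return sorted(results, key=adjusted, reverse=True)

results = [
    SearchResult("https://example.org/health-agency-guidance", 0.82, 0.02),
    SearchResult("https://example.net/miracle-cure-exposed", 0.90, 0.95),
]
for r in rank(results):
    print(r.url)  # the flagged "miracle cure" page drops below the authoritative one
```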
Twitter is also using AI to detect harmful tweets and remove them as quickly as possible. In 2019, the company reported that its algorithms removed 43% of the tweets that violated its content policies. That same year, Twitter acquired a UK-based AI startup to help counter disinformation spreading on its platform. Its algorithms are designed to quickly identify content that poses a direct risk to the health or well-being of others and to prioritize that content for review by human moderators, who then evaluate the flagged tweets and make a final determination as to whether the content is truly harmful.
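A minimal sketch of that triage pattern, surfacing the riskiest flagged tweets to human moderators first, might look like the following; the risk scores and function names are illustrative assumptions, not Twitter's actual system.

```python
# Illustrative sketch: a priority queue that hands the highest-risk flagged
# tweets to human reviewers first. Not Twitter's published triage logic.
import heapq

review_queue = []  # min-heap; risk is negated so the riskiest tweet pops first

def enqueue(tweet_id, risk_score):
    """risk_score: model's estimate (0..1) that the tweet threatens health or safety."""
    heapq.heappush(review_queue, (-risk_score, tweet_id))

def next_for_review():
    """Return the highest-risk tweet awaiting a human moderator's decision."""
    neg_risk, tweet_id = heapq.heappop(review_queue)
    return tweet_id, -neg_risk

enqueue("t1", 0.35)  # borderline spam
enqueue("t2", 0.97)  # fake COVID-19 cure
print(next_for_review())  # ('t2', 0.97) -- reviewed before the lower-risk tweet
```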
The limitations of using AI
While AI can be a useful tool in combating disinformation on social media, it has significant drawbacks. One of the biggest problems is that AI algorithms have not achieved a high enough proficiency in understanding language and have difficulty determining what a specific post actually means. For example, AI systems like Apple’s Siri can follow simple commands or answer straightforward questions, but they cannot hold a conversation with a person. During the hearing, Twitter CEO Jack Dorsey and Facebook CEO Mark Zuckerberg discussed this point, describing how difficult it is for AI algorithms to distinguish social media posts denouncing harmful ideas from those endorsing them. Another problem is that the decision-making processes of these algorithms can be highly opaque; in other words, the systems cannot explain why or how they reached their decisions. Lastly, AI algorithms are only as smart as the data on which they are trained. Imperfect or biased data leads to ineffective algorithms and flawed decisions, and these biases can come from many sources and be difficult for AI scientists to identify.
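A toy example of the language-understanding problem Dorsey and Zuckerberg described: a naive keyword-based filter assigns the same risk score to a post debunking a false claim as to one promoting it. The posts and keyword list below are invented for illustration.

```python
# Illustrative example of why naive text matching fails: both posts contain the
# same "risky" keywords, so a keyword filter cannot tell debunking from endorsing.
RISKY_TERMS = {"bleach", "cure", "covid"}

def keyword_risk(post):
    words = set(post.lower().replace(",", "").replace(".", "").split())
    return len(words & RISKY_TERMS) / len(RISKY_TERMS)

endorsing = "Drinking bleach is a cure for covid, try it."
denouncing = "Bleach is NOT a cure for covid. Please do not drink it."

print(keyword_risk(endorsing))   # 1.0
print(keyword_risk(denouncing))  # 1.0 -- identical score despite opposite intent
```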
More needs to be done
False and misleading posts on social media about the COVID-19 pandemic and the results of the 2020 presidential election have led to significant harm in the real world. In order to fully leverage AI to help mitigate the spread of disinformation and misinformation, much more research needs to be done. As we monitor Congressional activity focused on countering disinformation, we encourage the CSPI community to serve as a resource for federal officials on this topic.
Policy proposals about countering false information from FAS’ Day One Project
A National Strategy to Counter COVID-19 Misinformation – Amir Bagherpour and Ali Nouri
Creating a COVID-19 Commission on Public Health Misinformation – Blair Levin and Ellen Goodman
Combating Digital Disinformation: Resisting Foreign Influence Operations through Federal Policy – Dipayan Ghosh
Digital Citizenship: A National Imperative to Protect and Reinvigorate our Democracy – Joseph South and Ji Soo Song