Congressional Science Policy Initiative

Committee hearing resource | House Energy and Commerce Subcommittees on Consumer Protection and Commerce, and Communications and Technology

Send in your questions to help Congress scrutinize how Facebook, Twitter, and Google address misinformation and disinformation

Engage, and take action! If you have a question or idea you think lawmakers should raise with witnesses during this hearing, or you would like to be part of FAS’ community and contribute your expertise, please submit it via the form below, or scroll down to learn more about the issues.

The COVID-19 pandemic has brought the spread of misinformation on social media and search engine platforms into stark focus for Congress. Beyond the pandemic, misinformation and disinformation have exacerbated other national challenges, including the 2020 presidential election and vaccine uptake. Social media and internet platforms have attempted to curb the spread of misinformation, but with limited success.

On March 25th, the House Energy and Commerce Committee will hold a hearing to scrutinize how the CEOs of Facebook, Twitter, and Google are addressing misinformation and disinformation, and to discuss what actions the companies or Congress could take to stem the spread. The Committee wants to hear your thoughts on the spread of misinformation and disinformation, how to counter it, the impact of potential government or corporate policies or practices, and the negative consequences of misinformation and disinformation, among other issues relevant to Facebook, Twitter, or Google.

This website gives you an opportunity to tell Congress what issues should be discussed during this key hearing. You can submit questions that lawmakers should ask the witnesses (sample questions can be found below), personal stories about your experiences related to this issue, or your general thoughts on how Congress should address misinformation.

House Energy and Commerce Subcommittees on Consumer Protection and Commerce, and Communications and Technology hearing on misinformation and disinformation plaguing online platforms
Thursday, March 25, 2021 at 12:00 PM ET
Witnesses:
Mark Zuckerberg, Chairman and CEO, Facebook Inc.
Sundar Pichai, CEO, Alphabet Inc.
Jack Dorsey, Co-founder and CEO, Twitter Inc.

Evidence-based sample questions lawmakers could ask witnesses. Please share yours below.

More sample questions will be added as objective contributions are received from the expert community. Kindly submit your idea via the form below. Last updated Tuesday 3/23/2021.

Establishing a high-level commission on the national COVID-19 pandemic response

This pandemic has demonstrated that there is not enough data on the reach and impact of misinformation and disinformation in the United States. It is vital to address these blind spots before facing any future crises.

It has been proposed that the United States should establish a high-level commission, modeled after the 9/11 Commission, to evaluate the national response to the COVID-19 pandemic and develop a federal strategy for the future.

Would your companies be willing to partner with such a commission and provide data on the effects of misinformation and disinformation during the pandemic?

Public-private partnerships to fight misinformation

According to a study conducted by the Reuters Institute, the largest category of misleading or false claims about COVID-19 was misleading messages about public health policies or authorities. Social media companies have had difficulty combating this misinformation during the pandemic, with devastating results: misleading claims about public health policies have likely prevented the United States from quickly stopping the spread of the disease.

To mitigate the current threat of COVID-19 misinformation and prevent future misinformation crises, part of the solution could be for scientists to supply verified information to the public as quickly as possible.

Would Google, Twitter, and Facebook be open to establishing partnerships with federal agencies such as the Department of Health and Human Services to do this? In addition, will your companies commit to establishing transparent and robust reporting mechanisms to facilitate the removal of misinformation from your platforms in the future?

Partnering with social media influencers

Misinformation from politicians, celebrities, and other prominent figures accounted for about 20% of false claims about COVID-19, yet drew 69% of total social media engagement with COVID misinformation.

Are your companies planning to partner with social media influencers with large followings to support science and public health? If so, how do you plan to set up these partnerships?

Instilling digital citizenship to counter the spread of misinformation

Digital citizenship is the concept that Americans should receive enough education about technology to discern fact from fiction and leverage internet platforms for the public good.

Educators play a key role in instilling digital citizenship, yet “equity gaps in access to professional development for digital learning existed even prior to the pandemic.” Moreover, “during COVID-19, less than one-fifth of districts are investing their federal education relief dollars towards professional development, and as of September 2020, less than a third address educator training in their fall school reopening plans.”

From your perspective, how important is digital citizenship for Americans, and what should the federal government be doing to improve the quality and consistency of education around digital citizenship?

Follow-up: What role do your companies have in promoting digital citizenship? Please explain.

Big Tech's accountability for adhering to content moderation policies

There is limited accountability for technology firms in regard to moderating their own platforms. Because Big Tech is shielded from liability for decisions to leave content up or take it down, the platforms are not legally accountable for how they enforce their own content moderation policies.

How problematic is this in general, and specifically for countering online influence operations? Should steps be taken to increase the accountability of your companies in regard to moderating your platforms, and if so, what should be done? Please explain.

Internet platforms' business models and democracy

In your view, do your companies’ business models – which derive commercial value from the spread of virtually any content whatsoever – disincentivize you from protecting democratic political processes from nefarious actors? How or how not? Please describe any reforms that may be necessary.

Using data to answer fundamental questions about disinformation

With the help of over 100 experts, the 100 Questions Initiative has identified ten fundamental questions related to the spread of disinformation that could be answered by leveraging data from private-sector entities for societal benefit. These questions range from identifying the factors that contribute to increased exposure to disinformation and how disinformation is amplified, to whether consumption of disinformation can increase the likelihood of online hate speech, radicalization, and real-life violence.

Are your companies investing in research about the consumption and spread of disinformation that could help answer these questions? Please explain.

The harm of misinformation

There seems to be some debate about whether it is harmful that 2% of Americans believe the earth is flat, 14% believe in Bigfoot, or a third of Americans endorse some form of conspiracy theory (about JFK’s assassination, the existence of the Illuminati, etc.). With 5G towers being burned and people dying due to misconceptions about a virus, I would think current events make it plain that the proliferation of false beliefs is a problem.

So just to set the stage here, I want to ask: Can we agree that the spread of misinformation, disinformation, and conspiracy theories is in fact harmful to society? Please explain.

The infodemic & social media super-spreaders

Experts have referred to the flood of misinformation – particularly regarding global health events – as an infodemic, forcing them to spend valuable time and resources countering misconceptions when those resources could be devoted to the health and wellbeing of our citizens. During this pandemic we have also become familiar with the term “super-spreader.” If the infodemic has a super-spreader, the research suggests it is the online environment, which provides particularly fertile ground for the spread of misinformation.

Does America need to be quarantined from your platforms, or are you going to take strong, consistent, concrete steps to stop the infodemic? Please explain.

Investing in intervention

Research conducted by independent scientists – those outside Google, Facebook, Twitter, and the like – is increasingly uncovering ways to prevent or counter the spread of misinformation. For instance, even something as simple as giving people the opportunity to pause and rethink their decision to share a piece of online content could be beneficial, as illustrated in the sketch below. This is but one suggested intervention, and many studies clearly indicate that a “nothing works” attitude is incorrect.
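To make the idea concrete, here is a minimal sketch in Python of such a “pause and rethink” prompt. It is purely illustrative: every name in it is hypothetical, and it is not drawn from any platform’s actual implementation.

```python
# Hypothetical sketch of a "pause before sharing" friction prompt.
# Before a share goes through, the user sees the headline again and
# must explicitly confirm, creating a moment to reconsider.

def confirm_share(headline: str) -> bool:
    """Ask the user to pause and reconsider before re-sharing content."""
    print(f'You are about to share: "{headline}"')
    answer = input("Have you read the full article? Share anyway? (y/n): ")
    return answer.strip().lower() == "y"

if __name__ == "__main__":
    if confirm_share("Miracle cure stops virus overnight"):
        print("Shared.")
    else:
        print("Share cancelled.")
```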

To what extent is your company investing in this important research on building best practices for helping users stop the spread of misinformation on your platforms? For example, what percentage of your budgets goes toward funding outside investigations? Please explain.

Proactive measures

In April of last year, Consumer Reports ran an experiment in which it bought ads on Facebook advertising false cures for the coronavirus. Facebook approved the ads, which Consumer Reports voluntarily withdrew before they ran.

Other bad actors do not have the ethics of Consumer Reports, and evidence shows that lies and misinformation spread on the internet much faster than the truth. False posts can be seen by millions of viewers – as with the Plandemic video – before they are removed.

Are any of the strategies you are investigating or deploying proactive? For instance, rather than waiting for a false story to be shared thousands of times before perhaps removing it, are you implementing methods to prevent it from appearing on your platforms in the first place?

Vulnerable groups

A number of studies have revealed that the “baby boomer” generation is more susceptible to online misinformation, possibly due to lower levels of digital literacy (i.e., familiarity with how technology works). Intervention research shows that improving digital literacy – including among the elderly – reduces susceptibility to misinformation, as does improving media and scientific literacy.

Given that research has shown that certain populations are more susceptible to online misinformation, what additional steps or precautions have you put in place to protect these individuals from malicious content? Please explain.

Bad actors targeting conservative accounts with disinformation

Research suggests that conservatives are disproportionately targeted by disinformation. For instance, an analysis of tweets shared by 2,700 Russian troll accounts during the 2016 campaign revealed that their content was largely conservative. In another study, researchers tracked disinformation surrounding the 2014 crash of MH17 in Ukrainian airspace and found that exposure was almost 7 times higher among conservative Twitter users than among liberals.

How is this information being used to shape procedures or policies that protect citizens who are disproportionately targeted by disinformation campaigns? Please explain.

Misinformation most likely to be shared

Research shows that the type of misinformation most likely to spread on your platforms is highly emotional content, particularly negative information suggesting a looming threat to human life. This research existed before the pandemic. In fact, we had a trial run on virus-related misinformation during the Ebola outbreak. And in 2016, following evidence of election interference using social media platforms, many of your companies claimed to be working on ‘fixing’ the spread of misinformation.

What protections were put in place on your platforms in advance of the inevitable spread of misinformation surrounding the COVID-19 virus? And why have those measures, if any, apparently failed?

Providing a safe haven for extremism and hate

In the wake of the COVID-19 infodemic, research has documented an increase in Sinophobia on American platforms, most notably 4chan but also Twitter. Fake news sources have also been effective in stoking other forms of hate, particularly Islamophobia.

We also know that extremist groups continue to use mainstream platforms to recruit new members, spread hate-based propaganda, organize, and perpetrate attacks. However, social media companies have struggled to address this harmful messaging. Before the shooting in Kenosha, WI, for example, an event page on which a militia encouraged followers to bring guns to an upcoming protest was reported to Facebook over 400 times – 66% of all reports filed that day – yet Facebook neither flagged nor took down the militia’s posts.

What criteria are you using to gauge whether hate-filled, dangerous content should remain on your sites? Given that threats continue to “slip through the cracks,” how might current protocols be improved?

The additional impacts of misinformation

Aside from headline-grabbing tragedies, such as mass shootings and arson perpetrated by misinformed extremists, research highlighting the broader impacts of misinformation should not be overlooked. Emerging social science research has found that mere exposure to conspiracy theories reduces helping behavior, trust, and belief in science, while increasing susceptibility to subsequent misinformation. Additional research has shown that continued exposure to disinformation heightens fear, anxiety, stress, and racism.

Given that online environments facilitate exposure to this sort of clickbaity, sensationalistic “information,” to what extent do you feel responsible for prioritizing this problem more than you have so far?

Future efforts to combat misinformation on Facebook

Since the 2016 presidential election, social media companies have spent billions of dollars combating the spread of disinformation. However, late last year the German Marshall Fund found that news outlets that regularly publish falsehoods on Facebook drew three times as many likes, shares, and comments in 2020 as they did in 2016. That growth outpaced the engagement received on the site by reputable news outlets such as Reuters and the Associated Press.

Mr. Zuckerberg, given the worsening spread of disinformation on social media, particularly on Facebook, since 2016, what do you plan to do differently to mitigate the spread of false information going forward?

Debunking disinformation through consensus

Last month, Twitter launched a new feature called Birdwatch to combat misinformation. It relies on Twitter users themselves to flag and debunk misinformation; those notes are then rated by other users for their quality and the credibility of their sources. Individuals whose notes earn high ratings can build reputations as reliable contributors and gain more prominence on the site.
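For readers unfamiliar with how such community-rating systems work, here is a minimal sketch in Python. It illustrates the general mechanism only (it is not Twitter’s actual algorithm), and every name in it is hypothetical.

```python
# Toy model of a Birdwatch-style system: users write notes on posts,
# other users rate each note as helpful or not, and an author's
# reputation is the average helpfulness of their notes.
from collections import defaultdict

class NoteRatingSystem:
    def __init__(self):
        self.votes = defaultdict(list)  # note_id -> list of 1 (helpful) / 0 votes
        self.author_of = {}             # note_id -> author_id

    def add_note(self, note_id: str, author_id: str) -> None:
        self.author_of[note_id] = author_id

    def rate_note(self, note_id: str, helpful: bool) -> None:
        self.votes[note_id].append(1 if helpful else 0)

    def note_helpfulness(self, note_id: str) -> float:
        v = self.votes[note_id]
        return sum(v) / len(v) if v else 0.0

    def author_reputation(self, author_id: str) -> float:
        scores = [self.note_helpfulness(n)
                  for n, a in self.author_of.items() if a == author_id]
        return sum(scores) / len(scores) if scores else 0.0
```

The questions below probe the weak points of exactly this kind of scheme: coordinated raters could upvote one another’s misleading notes, which is why rater credibility, and not just raw note ratings, matters.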

Mr. Dorsey, many sources of disinformation can be very convincingly disguised as legitimate outlets. How can you ensure that the people debunking disinformation through Birdwatch are actually putting forth correct information? In addition, what will Twitter do to prevent bad actors from taking advantage of Birdwatch to further spread disinformation?

Fact-checking COVID-19 vaccine misinformation

In January, the Google News Initiative launched a $3 million fund to combat misinformation about COVID-19 vaccines. It plans to support journalistic efforts to fact-check misinformation about the vaccination process, with a focus on groups disproportionately exposed to misinformation.

Mr. Pichai, can you give us an update on this project and how Google plans to expand its efforts to address misinformation in the future? What can Congress do to reduce the impact of misinformation on the internet?

Your question could be here!

Nonpartisan analysis and research

Quick reads

Considering the Source: Varieties of COVID-19 Information – CRS In Focus brief

Regulating Big Tech: Legal Implications – CRS Legal Sidebar brief

Deepfakes – GAO brief

Deep dives

Misinformation and Content Moderation Issues for Congress – CRS report

Information Warfare: Issues for Congress – CRS report

Antitrust and “Big Tech” – CRS report

Supplemental resources

Press clips

Our Era’s Defining Battle: Facts vs. Misinformation – Politico piece

Black and Hispanic Communities Grapple with Vaccine Misinformation – The New York Times piece

How Facebook got addicted to spreading misinformation – MIT Technology Review piece

YouTube removed 30,000 videos with vaccine misinformation – The Hill piece

Ukraine says misinformation is scaring its people away from being vaccinated – The New York Times piece

The Fight Against Vaccine Misinformation – The New Yorker piece

The disinformation tactics used by China – BBC piece

Congressional correspondence

Letter from the leadership of the House Energy and Commerce Committee to Facebook CEO Mark Zuckerberg on the proliferation of misinformation about the COVID-19 pandemic

Letter from House Energy and Commerce Subcommittee on Consumer Protection and Commerce Chair Jan Schakowsky (D-IL) and colleagues to Facebook CEO Mark Zuckerberg on its content moderation practices

Letters from Consumer Protection and Commerce Subcommittee Chair Jan Schakowsky and colleagues to the CEOs of Facebook, Twitter, and Google on the companies’ efforts to combat misinformation regarding the COVID-19 vaccine

Bipartisan bills

Georgia Support Act, H.R.923

See Something, Say Something Online Act of 2021, S.27

Protecting Seniors from Emergency Scams Act, S.15

Science. Policy. Service. Progress.

The Congressional Science Policy Initiative (CSPI) is a nonpartisan effort to facilitate the engagement of scientists, engineers, technologists, and other experts with the US Legislative Branch to help produce evidence-based public policy.

If you have expertise in a data-driven discipline, join hundreds of specialists who are already taking action to provide critical information to Congress as part of the CSPI community.