![](https://fas.org/wp-content/uploads/2023/04/14d834_79f3e77d44de43c3be76b8ff15214c7a-mv2-scaled.jpg)
A National Cloud for Conducting Disinformation Research at Scale
Summary
Online disinformation continues to evolve and threaten national security, federal elections, public health, and other critical U.S. sectors. Yet the federal government lacks access to the data and computational power needed to study disinformation at scale. Those with the greatest capacity to study disinformation at scale are large technology companies (e.g., Google, Facebook, and Twitter), a concentration that biases much of the existing research and limits federal capacity to address disinformation.
To address this problem, we propose that the Department of Defense (DOD) fund a one-year pilot of a National Cloud for Disinformation Research (NCDR). The NCDR would securely house disinformation data and provide the computational power needed for the federal government and its partners to study disinformation. The NCDR should be managed by a governance team led by Federally Funded Research and Development Centers (FFRDCs) already serving the DOD. The FFRDC Governance Team will (i) manage which stakeholders can access the Cloud, (ii) coordinate sharing of data and computational resources among stakeholders, and (iii) motivate participation from diverse stakeholders (including industry; academia; federal, state, and local government; and non-governmental organizations).
A National Cloud for Disinformation Research will help the Biden-Harris administration fulfill its campaign promise to reposition the United States as a leader of the democratic world. The NCDR will benefit the federal government by providing access to data and computational resources needed to combat the threats and harms of disinformation. Our nation needs a National Cloud for Disinformation Research to foresee future disinformation attacks and safeguard our democracy in turbulent times.