
Federation of American Scientists Statement on the Preemption of State AI Regulation in the One Big Beautiful Bill Act
As the Senate prepares to vote on a provision in the One Big Beautiful Bill Act that would condition Broadband Equity, Access, and Deployment (BEAD) Program funding on states ceasing enforcement of their AI laws (SEC.0012, Support for Artificial Intelligence Under the Broadband Equity, Access, and Deployment Program), the Federation of American Scientists urges Congress to oppose this measure. This approach threatens to compromise public trust and responsible innovation at a moment of rapid technological change.
The Trump Administration has repeatedly emphasized that public trust is essential to fostering American innovation and global leadership in AI. That trust depends on clear, reasonable guardrails, especially as AI systems are increasingly deployed in high-stakes areas like education, health, employment, and public services. Moreover, frontier AI systems are advancing at a staggering pace: the capabilities, risks, and use cases of general-purpose models are expected to evolve dramatically over the next decade. In such a landscape, we need governance structures that are adaptive, multi-layered, and capable of responding in real time.
While a well-crafted federal framework may ultimately be the right path forward, preempting all state regulation in the absence of federal action would leave a dangerous vacuum, further undermining public confidence in these technologies. According to Pew Research, American concerns about AI are growing, and a majority of US adults and AI experts worry that governments will not go far enough to regulate AI.
State governments have long served as laboratories of democracy, testing policies, implementation strategies, and ways to adapt to local needs. Tying essential broadband infrastructure funding to the repeal of sensible, forward-looking laws would cut off states’ ability to meet the demands of AI evolution in the absence of federal guidance.
We urge lawmakers to protect both innovation and accountability by rejecting this provision. Conditioning BEAD funding on halting AI regulation sends the wrong message. AI progress does not need to come at the cost of responsible oversight.