
Federation of American Scientists Statement on the Preemption of State AI Regulation in the One Big Beautiful Bill Act
As the Senate prepares to vote on a provision of the One Big Beautiful Bill Act that would condition Broadband Equity, Access, and Deployment (BEAD) Program funding on states ceasing enforcement of their AI laws (SEC.0012, Support for Artificial Intelligence Under the Broadband Equity, Access, and Deployment Program), the Federation of American Scientists urges Congress to oppose this measure. This approach threatens to compromise public trust and responsible innovation at a moment of rapid technological change.
The Trump Administration has repeatedly emphasized that public trust is essential to fostering American innovation and global leadership in AI. That trust depends on clear, reasonable guardrails, especially as AI systems are increasingly deployed in high-stakes areas like education, health, employment, and public services. Moreover, frontier AI systems are advancing at a staggering pace: the capabilities, risks, and use cases of general-purpose models are predicted to evolve dramatically over the next decade. In such a landscape, we need governance structures that are adaptive, multi-layered, and capable of responding in real time.
While a well-crafted federal framework may ultimately be the right path forward, preempting all state regulation in the absence of federal action would leave a dangerous vacuum, further undermining public confidence in these technologies. According to Pew Research, American concerns about AI are growing, and a majority of US adults and AI experts worry that governments will not go far enough to regulate AI.
State governments have long served as laboratories of democracy, testing policies, implementation strategies, and ways to adapt to local needs. Tying essential broadband infrastructure funding to the repeal of sensible, forward-looking laws would cut off states’ ability to meet the demands of AI evolution in the absence of federal guidance.
We urge lawmakers to protect both innovation and accountability by rejecting this provision. Conditioning BEAD funding on halting AI regulation sends the wrong message. AI progress does not need to come at the cost of responsible oversight.