Good information sources, such as curated collections, must be available and well maintained if companies are to deliver the vision of AI for science promoted by their executives and marketing.
Nestled in the cuts and investments of interest to the S&T community is a more complex story of how the administration is approaching the practice of science diplomacy.
By structuring licensing-and-talent deals that replicate mergers while avoiding antitrust scrutiny, dominant technology firms are reshaping AI labor markets, venture financing, and the future of U.S. innovation.
It is in the interests of the United States to appropriately safeguard sensitive information while continuing to participate in new discoveries, preserving our competitive advantage.
Our analysis of federal AI governance across administrations shows that divergent compliance procedures and uneven institutional capacity challenge the government’s ability to deploy AI in ways that uphold public trust.
To secure the U.S. bio-infrastructure, maintain global leadership in biotechnology, and safeguard American citizens from emerging threats to their privacy, the federal government must modernize its approach to human genetic and biological data.
From use to testing to deployment, the scaffolding for responsible integration of AI into high-risk use cases simply does not exist.
The Federation of American Scientists supports Congress’ ongoing bipartisan efforts to strengthen U.S. leadership in outer space activities.
By preparing credible, bipartisan options now, before the bill becomes law, we can give the Administration a plan that is ready to implement rather than another study that gathers dust.
Even as companies and countries race to adopt AI, the U.S. lacks the capacity to fully characterize the behavior and risks of AI systems and ensure leadership across the AI stack. This gap has direct consequences for Commerce’s core missions.
As states take up AI regulation, they must prioritize transparency and develop technical capacity to ensure effective governance and earn public trust.
In the absence of guardrails and guidance, AI can deepen inequities, introduce bias, spread misinformation, and compromise data security for schools and students alike.