
Securing American AI Leadership: A Strategic Action Plan for Innovation, Adoption, and Trust
The Federation of American Scientists (FAS) submitted the following response to the Request for Information (RFI) issued by the Office of Science and Technology Policy (OSTP) in February 2025 regarding the development of an Artificial Intelligence (AI) Action Plan.
At a time when AI is poised to transform every sector of the economy, the Trump administration has a critical opportunity to solidify America’s leadership in this pivotal technology. Building on the foundations laid during the first Trump administration, bold and targeted policies can unleash innovation, unlocking AI’s vast potential to stimulate economic growth, revolutionize industries, and strengthen national security. However, innovation alone is insufficient; without public trust, AI adoption will stall. Ensuring AI systems are transparent, reliable, and aligned with American values will accelerate responsible adoption and solidify AI as a cornerstone of America’s economic and technological leadership.
To sustain America’s leadership in AI innovation, accelerate adoption across the economy, and guarantee that AI systems remain secure and trustworthy, we offer a set of actionable policy recommendations. Developed by FAS in partnership with prominent AI experts, industry leaders, and research institutions—including contributors to the recent FAS Day One 2025 Project and the 2024 AI Legislative Sprint—these proposals are structured around four strategic pillars: 1) unleashing AI innovation, 2) accelerating AI adoption, 3) ensuring secure and trustworthy AI, and 4) strengthening existing world-class U.S. government institutions and programs.
1) Unleashing AI Innovation. American AI leadership has been driven by bold private-sector investments and world-class academic research. However, critical high-impact areas remain underfunded. The federal government can catalyze investment and innovation by expanding access to essential data, investing strategically in overlooked areas of AI R&D, defining priority research challenges, promoting public-private partnerships, and attracting and retaining global talent.
2) Accelerating AI Adoption Across the Economy. The United States leads in AI breakthroughs, but these breakthroughs must translate into widespread adoption to maximize their economic and societal benefits. Accelerating adoption—a critical yet often overlooked driver of national competitiveness—requires addressing workforce readiness, expanding government capacity, and managing rising energy demands.
3) Ensuring Secure and Trustworthy AI. Ensuring AI systems are secure and trustworthy is essential not only for fostering public confidence and accelerating widespread adoption, but also for improving government efficiency and ensuring the responsible use of taxpayer resources when AI is deployed by public agencies. While the previous Trump administration recognized the necessity of public trust when promoting AI adoption, concerns persist about AI’s rapid evolution, unpredictable capabilities, and potential for misuse. Future AI accidents could further erode this trust, stalling AI progress. To address these risks and fully harness AI’s potential, the U.S. government must proactively monitor emerging threats, rigorously evaluate AI technologies, and encourage innovation that upholds fundamental American values such as privacy.
4) Strengthening Existing World-Class U.S. Government AI Institutions and Programs. Realizing the Trump Administration’s goals will require building on leading government AI capabilities. Key initiatives—including the NIST AI Safety Institute (AISI), the National AI Research Resource (NAIRR) Pilot, the AI Use Case Inventory, and the Department of Energy’s Office of Critical and Emerging Technologies (CET)—advance AI innovation, security, and transparency. The AISI evaluates AI models with broad industry support, while the NAIRR Pilot expands access to AI resources beyond Big Tech. Federal AI use case inventories enhance government transparency and industry engagement, building public trust. DOE’s CET drives AI-powered advancements in science and national security. Integrating these proven initiatives into the AI Action Plan will solidify America’s AI leadership.
By acting decisively, the administration can ensure American AI remains the gold standard, drive economic competitiveness, and accelerate science and innovation.
Overview of Policy Proposals
Policy Proposals to Unleash AI Innovation
- Recommendation 1: Promote innovation in trustworthy AI through a Public-Private National Initiative for AI Explainability.
- Recommendation 2: Direct the Department of Energy (DOE) to use AI to accelerate the discovery of new materials.
- Recommendation 3: Create AI-ready collaborative datasets to accelerate progress in the life sciences.
- Recommendation 4: Establish a NIST Foundation to amplify public-private collaboration, secure private investment, and accelerate innovation.
- Recommendation 5: Attract top global talent by creating a National Security AI Entrepreneur Visa for elite dual-use technology founders.
Policy Proposals to Accelerate AI Adoption Across the Economy
- Recommendation 1: Streamline procurement processes for government use of AI.
- Recommendation 2: Establish a Federal Center of Excellence to expand state and local government capacity for AI procurement and use.
- Recommendation 3: Pilot an AI Corps at HHS to drive government-wide AI adoption.
- Recommendation 4: Make America’s teacher workforce competitive for the AI era.
- Recommendation 5: Prepare U.S. energy infrastructure for AI growth through standardized measurement and forecasting.
Policy Proposals to Ensure Secure and Trustworthy AI
- Privacy:
- Recommendation 1: Secure third-party commercial data for AI through FedRAMP authorization.
- Recommendation 2: Catalyze federal data sharing through privacy-enhancing technologies.
- Recommendation 3: Establish data-sharing standards to support AI development in healthcare.
- Security, Safety, and Trustworthiness:
- Recommendation 1: Establish an early warning system for AI-powered threats to national security and public safety.
- Recommendation 2: Create a voluntary AI incident reporting hub to monitor security incidents from AI.
- Recommendation 3: Promote AI trustworthiness by providing a safe harbor for AI researchers.
- Recommendation 4: Build a national digital content authentication technologies research ecosystem.
- Recommendation 5: Strengthen national security by evaluating AI-driven biological threats.
Policy Proposals to Strengthen Existing World-Class U.S. Government AI Institutions and Programs that are Key to the Trump Administration’s AI Agenda
- Recommendation 1: Support the NIST AI Safety Institute as a key pillar of American AI excellence.
- Recommendation 2: Expand the National Artificial Intelligence Research Resource from pilot to full program.
- Recommendation 3: Enhance transparency, accountability, and industry engagement by preserving the AI use case inventory.
- Recommendation 4: Propel U.S. scientific and security AI leadership by supporting AI and computing at DOE.
Policy Proposals to Unleash AI Innovation
As artificial intelligence continues transforming industries and reshaping global competition, the United States must take bold, coordinated action to maintain its technological leadership. A multi-agency approach could include launching a National Initiative for AI Explainability, accelerating materials science discovery through AI-powered autonomous laboratories, creating AI-ready datasets for the life sciences, establishing a NIST Foundation to enhance public-private collaboration in AI research, and creating a National Security AI Entrepreneur Visa to attract and retain top global talent. Together, these initiatives would strengthen America’s AI ecosystem by addressing critical challenges in transparency, scientific research, standards development, and talent acquisition—while ensuring the U.S. remains at the forefront of responsible AI innovation.
Recommendation 1. Promote Innovation in Trustworthy AI through a Public-Private National Initiative for AI Explainability
Understanding the inner workings of AI systems is critical not only for reliability and risk mitigation in high-stakes areas such as defense, healthcare, and finance, but also for bolstering American technological leadership and maximizing government accountability and efficiency. However, despite promising progress in fields such as “mechanistic interpretability,” the study of explainability in AI systems is still nascent. A lack of explainability risks undermining trust and inhibiting AI adoption, particularly in safety-critical sectors.
To address the challenge of understanding and improving AI systems, we propose the launch of a Public-Private National Initiative for AI Explainability. Following in the footsteps of government-coordinated research projects like the Human Genome Project, this initiative would unite researchers, industry leaders, standards bodies, and government agencies to map the inner workings of advanced AI systems in a public-private partnership.
Federal precedent for such work already exists: DARPA’s 2017-2021 Explainable AI (XAI) program sought to create machine learning systems capable of explaining their decisions in a way humans could understand. While the program advanced techniques for explainable models and human-friendly translations of complex AI reasoning, the rapid development and scaling of AI technologies in the past five years demand a renewed, more ambitious effort.
The objectives of the initiative would include:
- Creating Open-Access Resources: Developing AI models, datasets, and tools accessible to researchers and practitioners, allowing a larger number of actors to contribute to progress.
- Developing Standardized Metrics and Benchmarks: Establishing clear standards to evaluate the explainability of AI systems in different circumstances, ensuring consistency and reliability across applications.
- Defining Common Tasks: Using shared metrics and open datasets to create “common tasks” in explainability—well-defined challenges that drive innovation and encourage widespread progress as the broader ecosystem competes to improve performance.
- Investigating User-Centric Explanation Needs: Conducting research to identify which types of AI explanations are most effective and meaningful, and which provide appropriate degrees of control, to users across various contexts and applications.
- Developing a Repository of Explainability Techniques: Researching and disseminating explainability methods applicable across various AI domains, including an analysis of the strengths and weaknesses of different approaches and where they can be properly applied.
Implementation Strategy:
To launch this effort, the President should issue an executive order to signal national commitment and assign leadership to key federal agencies, including:
- Office of Science and Technology Policy: Playing a coordinating role across the government.
- Defense Advanced Research Projects Agency (DARPA): Building upon its prior experience with the XAI program to spearhead research efforts.
- National Institute of Standards and Technology (NIST): Developing standards and benchmarks for AI explainability, building on previous work in this area.
- National Science Foundation (NSF): Funding academic research through its National AI Research Institutes.
- Department of Energy (DOE): Leveraging its computational resources and expertise in large-scale research projects.
- Other government agencies with relevant expertise: For example, the National Institutes of Health (NIH) could focus on explainability in AI applications within the healthcare sector.
The White House should leverage its convening power to unite leading AI companies, top academic institutions, and government agencies in formal collaborations. These partnerships could encompass co-funded research, shared datasets and computing resources, collaborative access to advanced AI models, and joint development of open-source tools. Establishing a structured public-private partnership will facilitate coordinated funding, align strategic priorities, and streamline resource sharing, ensuring that advancements in AI explainability directly support both national interests and economic competitiveness. To sustain this initiative, the administration should also secure consistent, multi-year federal funding through appropriations requests to Congress.
DARPA’s XAI program showed that AI explainability requires interdisciplinary collaboration to align technical development with human understanding. Building on these insights, this initiative should include experts from computer science, cognitive science, ethics, law, and domain-specific fields to ensure explanations are clear, useful, and actionable for decision-makers across critical sectors.
By implementing this National Initiative for AI Explainability, the Trump administration can significantly enhance public confidence in AI technologies, accelerate responsible adoption by both the public and private sectors, and solidify America’s global leadership in AI innovation. Critically, a modest investment of government resources in this initiative could unlock substantial private-sector investment, spurring innovation and driving economic growth. This strategic approach will also enhance government accountability, optimize the responsible use of taxpayer resources, and ensure that American industry continues to lead in AI development and deployment.
Recommendation 2. Direct the Department of Energy (DOE) to use AI to Accelerate the Discovery of New Materials (link to full memo >>>)
Innovations in AI and robotics could revolutionize materials science by automating experimental processes and dramatically accelerating the discovery of new materials. Currently, materials science research involves manually testing different combinations of elements to identify promising materials, which limits the pace of discovery. Using AI foundation models for physics and chemistry, scientists could simulate new materials, while robotic “self-driving labs” could run 24/7 to synthesize and evaluate them autonomously. This approach would enable continuous data generation, refining AI models in a feedback loop that speeds up research and lowers costs. Given its expertise in supercomputing, AI, and a vast network of national labs, the Department of Energy (DOE) could lead this transformative initiative, potentially unlocking advancements in critical materials, such as improved battery components, that could have immense economic and technological impacts.
Recommendation 3. Create AI-ready Collaborative Datasets to Accelerate Progress in the Life Sciences (link to full memo >>>)
Large, high-quality datasets could revolutionize life science research by powering AI models that unlock new discoveries in areas like drug development and diagnostics. Currently, researchers often work in silos with limited incentives to collaborate and share meticulously curated data, slowing progress. By launching a government-funded, end-to-end initiative—from identifying critical dataset needs to certifying automated collection methods and hosting robust open repositories—scientists could continuously generate and refine data, fueling AI models in a feedback loop that boosts accuracy and lowers costs. Even a relatively modest government investment could produce vital resources for researchers and startups to spark new industries. This model could also be extended to a range of other scientific fields to accelerate U.S. science and innovation.
Recommendation 4. Create a NIST Foundation to Support the Agency’s AI Mandate (link to full memo >>>)
To maintain America’s competitive edge in AI, NIST needs greater funding, specialized talent, and the flexibility to work effectively with private-sector partners. One solution is creating a “NIST Foundation,” modeled on the DOE’s Foundation for Energy Security and Innovation (FESI), which combines federal and private resources to expand capacity, streamline operations, and spur innovation. Legislation enabling such a foundation was introduced with bipartisan support in the 118th Congress, signaling broad consensus on its value. The Trump administration can direct NIST to study how a nonprofit foundation might boost its AI initiatives and broader mission—just as a similar report helped pave the way for FESI—giving Congress the evidence it needs to formally authorize a NIST Foundation. The administration can also support passage of authorizing legislation through Congress.
Recommendation 5. Attract Top Global Talent by Creating a National Security AI Entrepreneur Visa for Elite Dual-use Technology Founders (link to full memo >>>)
America’s leadership in AI has been driven by the contributions of immigrant entrepreneurs, with companies like NVIDIA, Anthropic, OpenAI, X, and HuggingFace—all of which have at least one immigrant co-founder—leading the charge. To maintain this competitive edge as global competition intensifies, the administration should champion a National Security Startup Visa specifically targeted at high-skilled founders of AI firms. These entrepreneurs are at the forefront of developing dual-use technologies critical for both America’s economic leadership and national security. Although the linked proposal above is targeted at legislative action, the administration can take immediate steps to advance this priority by publicly supporting legislation to establish such a visa, engaging with Congressional allies to underscore its strategic importance, and directing agencies like the Department of Homeland Security and the Department of Commerce to explore ways to streamline pathways for these innovators. This decisive action would send a clear signal that America remains the destination of choice for world-class talent, ensuring that the nation stays ahead in the race for AI dominance.
Policy Proposals to Accelerate AI Adoption Across the Economy
AI has transformative potential to boost economic growth and unlock new levels of prosperity for all. The Trump administration should take bold action to encourage greater adoption of AI technologies and AI expertise by leveraging government procurement, hiring, and standards-setting processes, alongside coordinated support for America’s teachers to prepare students to join the future AI workforce. In government, a coordinated set of federal initiatives is needed to modernize and streamline effective AI adoption in the public sector. These proposals include developing a national digital platform through GSA to streamline AI procurement processes, establishing a federal center of excellence to support state and local governments in AI implementation, and pursuing innovative hiring models to expand AI expertise at HHS. Additionally, NIST should develop voluntary standards for measuring AI energy and resource usage to inform infrastructure planning efforts. Finally, the President should announce a national teacher talent surge and set AI as a competitive priority in American education.
Recommendation 1. Streamline Procurement Processes for Government Use of AI (link to full memo >>>)
The federal government has a critical role in establishing standards for AI systems to enhance public services while ensuring they are implemented ethically and transparently. To streamline this effort and support federal agencies, the administration should direct the General Services Administration (GSA) to create a user-friendly, digital platform for AI procurement. This platform would simplify the acquisition process by providing agencies with clear, up-to-date guidelines, resources, and best practices, all tailored to align with existing procurement frameworks. The platform would empower agencies to make informed decisions that prioritize safety, fairness, and effective use of AI technologies, while demonstrating the administration’s commitment to modernizing government operations and ensuring America leads the way in adopting cutting-edge AI solutions.
Recommendation 2. Establish a Federal Center of Excellence to Expand State and Local Government Capacity for AI Procurement and Use (link to full memo >>>)
State and local governments often face challenges in effectively leveraging AI to enhance their efficiency and service capabilities. To support responsible AI adoption at the state, local, tribal, and territorial (SLTT) levels, the administration should establish a federal AI Center of Excellence. This center would provide hands-on guidance from experts in government, academia, and civil society, helping SLTT agencies navigate complex challenges such as limited technical expertise, budget constraints, privacy concerns, and evolving regulations. It would also translate existing federal AI standards—including Executive Order 13960 and the NIST Risk Management Framework—into practical, actionable advice. By developing in-house procurement and deployment expertise, SLTT governments could independently and confidently implement AI solutions, promoting innovation while ensuring responsible, effective, and efficient use of taxpayer resources.
Recommendation 3. Pilot an AI Corps at HHS to Drive Government-Wide AI Adoption (link to full memo >>>)
Federal agencies often struggle to leverage AI effectively, due to limited technical expertise and complex oversight requirements. Modeled after the Department of Homeland Security’s successful AI Corps, which has improved disaster response and cybersecurity, this pilot would embed AI and machine learning experts within the Department of Health and Human Services’ (HHS) 10 agencies, accelerating responsible AI implementation in healthcare, driving greater efficiency, and demonstrating a scalable model that could be replicated across other federal departments. HHS is uniquely suited for piloting an AI Corps because it oversees critical health infrastructure and massive, sensitive datasets—presenting significant opportunities for AI-driven improvements but also requiring careful management. If successful, this pilot could serve as a strategic blueprint to enhance AI adoption, improve government performance, and maximize the responsible use of taxpayer resources across the federal government.
Recommendation 4. Make America’s Teacher Workforce Competitive for the AI Era (link to full memo >>>)
With America facing a significant shortage of teachers and a growing need for AI and digital skills in the workforce, the Trump administration can rebuild America’s teaching profession by launching a coordinated strategy led by the Office of Science and Technology Policy (OSTP). This initiative should begin with a national teacher talent surge to expand annual teacher graduates by 100,000, addressing both the urgent workforce gap and the imperative to equip students for an AI-driven future. The plan includes a Challenge.gov competition to attract innovative recruitment and retention models, updating Department of Education scholarship programs (like the Graduate Assistance in Areas of National Need) to include AI, data science, and machine learning, convening colleges of education to modernize training, and directing agencies to prioritize AI-focused teacher development. By leveraging existing grants (e.g., Teacher Quality Partnerships, SEED, the STEM Corps, and Robert Noyce Scholarships), the administration can ensure a robust pipeline of educators ready to guide the next generation.
Recommendation 5. Prepare U.S. Energy Infrastructure for AI Growth Through Standardized Measurement and Forecasting
As AI adoption accelerates, America’s energy infrastructure faces a critical challenge: next-generation AI systems could place unprecedented demands on the power grid, yet the lack of standardized measurements and wide variations in forecasted demand leave utilities and policymakers unprepared. Without proactive planning, energy constraints could slow AI innovation and undermine U.S. competitiveness.
To address this, the Administration should direct the National Institute of Standards and Technology (NIST) and the Department of Energy (DOE) to develop a standardized framework for measuring and forecasting AI’s energy and resource demands. This framework should be paired with a voluntary reporting program for AI developers—potentially collected by the Energy Information Administration (EIA)—to provide a clearer picture of AI’s impact on energy consumption. The EIA should also be tasked with forecasting AI-driven energy demand, ensuring that utilities, public utility commissions, and state energy planners have the data needed to modernize the grid efficiently.
Greater transparency will enable both government and industry to anticipate energy needs, drive investment in grid modernization, and prevent AI-related power shortages that could hinder economic growth. The proactive integration of AI and energy planning will strengthen America’s leadership in AI innovation while safeguarding the reliability of its infrastructure. FAS is actively developing policy proposals with the science and technology community at the intersection of AI and energy. We plan to share additional recommendations on this topic in the coming months.
Policy Proposals to Ensure Secure and Trustworthy AI
Privacy
Protecting Americans’ privacy while harnessing the potential of AI requires decisive federal action that prioritizes both individual rights and technological advancement. Strengthening privacy protections while enabling responsible data sharing is crucial for ensuring that AI-driven innovations improve public services without compromising sensitive information. Key initiatives include establishing NIST-led guidelines for secure data sharing and maintaining data integrity, implementing a FedRAMP authorization framework for third-party data sources used by government agencies, and promoting the use of Privacy Enhancing Technologies (PETs). Additionally, the administration should create a “Responsible Data Sharing Corps” to provide agencies with expert guidance and build capacity in responsible data practices.
Recommendation 1. Secure Third Party Commercial Data for AI through FedRAMP Authorization (link to full memo >>>)
The U.S. government is a major customer of commercial data brokers and should require a pre-evaluation process before agencies acquire large datasets, ensuring privacy and security from the outset. Thoroughly vetting data brokers and verifying compliance standards can help avert national security risks posed by compromised or unregulated third-party vendors. To formalize these safeguards, OMB and FedRAMP should create an authorization framework for data brokers that provide commercially available information, especially those containing personally identifiable information. Building on its established role in securing cloud providers, FedRAMP is well positioned to guide these protocols, ensuring agencies work only with trusted vendors and strengthening overall data protection.
Recommendation 2. Catalyze Federal Data Sharing through Privacy Enhancing Technologies (link to full memo >>>)
To maintain America’s leadership in AI and digital innovation, the administration must ensure that government agencies can securely leverage data while protecting privacy and maintaining public trust. The federal government can lead by example through the adoption of Privacy Enhancing Technologies (PETs)—tools that enable data analysis while minimizing exposure of sensitive information. Agencies should be encouraged to adopt PETs with support from a Responsible Data Sharing Corps, while NIST develops a decision-making framework to guide their use. OMB should require agencies to apply this framework in data-sharing initiatives and report on PET adoption, with a PET Use Case Inventory and annual reports enhancing transparency. A federal fellowship program could also bring in experts from academia and industry to drive PET innovation. These measures would strengthen privacy, security, and public trust while positioning the U.S. as a global leader in responsible data use.
Recommendation 3. Establish Data-Sharing Standards to Support AI Development in Healthcare (link to full memo >>>)
The U.S. healthcare system generates vast amounts of data daily, yet fragmentation, privacy concerns, and lack of interoperability severely limit its use in AI development, hindering medical innovation. To address this, the AI Action Plan should direct NIST to lead an interagency coalition in developing standardized protocols for health data anonymization, secure sharing, and third-party access. By establishing clear technical and governance standards—similar to NIST’s Cryptographic and Biometric Standards Programs—this initiative would enable responsible research while ensuring compliance with privacy and security requirements. These standards would unlock AI-driven advancements in diagnostics, treatment planning, and health system efficiency. Other nations, including the U.K., Australia, and Finland, are already implementing centralized data-sharing frameworks; without federal leadership, the U.S. risks falling behind. By taking decisive action, the administration can position the U.S. as a global leader in medical AI, accelerating innovation while maintaining strong privacy protections.
Security, Safety, and Trustworthiness
AI holds immense promise for job growth, national security, and innovation, but accidents or misuse risk undermining public trust and slowing adoption—threatening U.S. leadership in this critical field. The following proposals use limited, targeted government action alongside private-sector collaboration to strengthen America’s AI capabilities while upholding public confidence and protecting our national interests.
Recommendation 1. Establish an Early Warning System for AI-Powered Threats to National Security and Public Safety (link to full memo >>>)
Emerging AI capabilities could pose severe threats to public safety and national security. AI companies are already evaluating their most advanced models to identify dual-use capabilities, such as the capacity to conduct offensive cyber operations, enable the development of biological or chemical weapons, and autonomously replicate and spread. These capabilities can arise unpredictably and undetected during development and after deployment. To prepare for these emerging risks, the federal government should establish a coordinated “early-warning system” for novel dual-use AI capabilities to gain awareness of emerging risks before models are deployed. A government agency could serve as a central information clearinghouse—an approach adapted from the original congressional proposal linked above. Advanced AI model developers could confidentially report newly discovered or assessed dual-use capabilities, and the White House could direct relevant government agencies to form specialized working groups that engage with private sector and other non-governmental partners to rapidly mitigate risks and leverage defensive applications. This initiative would ensure that the federal government and its stakeholders have maximum lead time to prepare for emerging AI-powered threats, positioning the U.S. as a leader in safe and responsible AI innovation.
Recommendation 2. Create a Voluntary AI Incident Reporting Hub to Monitor Security Incidents from AI (link to full memo >>>)
The federal government should establish a voluntary national Artificial Intelligence Incident Reporting Hub to better track, analyze, and address incidents from increasingly complex and capable AI systems that are deployed in the real world. Such an initiative could be modeled after successful incident reporting and information-sharing systems operated by the National Cybersecurity FFRDC, the Federal Aviation Administration, and the Food and Drug Administration. By providing comprehensive yet confidential data collection under the umbrella of an agency (e.g., NIST), this initiative would bolster public trust, facilitate the sharing of critical risk information, and enable prompt government action on emerging threats, from cybersecurity vulnerabilities to potential misuse of AI in sensitive areas like chemical, biological, radiological, or nuclear contexts. This proposal builds on bipartisan legislation introduced in the last Congress, as well as the memo linked above, which was originally targeted at Congressional action.
Recommendation 3. Promote AI Trustworthiness by Providing a Safe Harbor for AI Researchers (link to full memo >>>)
Independent AI research plays a key role in ensuring safe and reliable AI systems. In 2024, over 350 researchers signed an open letter calling for “a safe harbor for independent AI evaluation,” noting that generative AI companies offer no legal protections for independent safety researchers. This stands in contrast to the established voluntary protections companies offer for traditional software, and to Department of Justice (DOJ) guidance against prosecuting good-faith security research. The proposal linked above was targeted at Congressional action; however, the executive branch could adapt these ideas in several ways, for example by: 1) instructing the Office of Management and Budget (OMB) to issue guidance to all federal agencies requiring that contracting documents for generative AI systems include safe-harbor provisions for good-faith external research, consistent with longstanding federal policies that promote responsible vulnerability disclosure; and 2) coordinating with DOJ and relevant agencies to clarify that good-faith AI security and safety testing—such as red-teaming and adversarial evaluation—does not violate the Computer Fraud and Abuse Act (CFAA) or other laws when conducted according to established guidelines.
Recommendation 4. Build a National Digital Content Authentication Technologies Research Ecosystem (link to full memo >>>)
AI-generated synthetic content (such as fake videos, images, and audio) is increasingly used by malicious actors to defraud elderly Americans, spread child sexual abuse material, and impersonate political figures. To counter these threats, the United States must invest in developing technical solutions for reliable synthetic content detection. Through the National Institute of Standards and Technology (NIST), the Trump Administration can: 1) establish dedicated university-led national research centers, 2) develop a national synthetic content database, and 3) run and coordinate prize competitions to strengthen technical countermeasures. These initiatives will help build a robust research ecosystem to keep pace with the rapidly evolving synthetic content threat landscape, maintaining America’s role as a global leader in responsible and secure AI.
Recommendation 5. Strengthen National Security by Evaluating AI-Driven Biological Threats (link to full memo >>>)
Over the past two years, the rapid advance of AI in biology and large language models has highlighted an urgent need for a targeted U.S. Government program to assess and mitigate biosecurity risks. While AI-enabled tools hold immense promise for drug discovery, vaccine research, and other beneficial applications, their dual-use potential (e.g., identifying viral mutations that enhance vaccine evasion) makes them a national security priority. Building on the Department of Homeland Security’s (DHS) previous work on AI and CBRN threats, the Department of Energy (DOE), DHS, and other relevant agencies should now jointly launch a “Bio Capability Evaluations” program, backed by sustained funding, to develop specialized benchmarks and standards for evaluating dangerous biological capabilities in AI-based research tools. By forming public-private partnerships, creating a DOE “sandbox” for ongoing testing, and integrating results into intelligence assessments, such a program would enable more nuanced, evidence-based regulations and help the United States stay ahead of potential adversaries seeking to exploit AI’s biological capabilities.
Policy Proposals to Strengthen Existing World-Class U.S. Government AI Institutions and Programs that are Key to the Trump Administration’s AI Agenda
A robust institutional framework is essential for ensuring that the government fulfills its role in AI research, industry coordination, and ecosystem development. The previous Trump administration laid the groundwork for American AI leadership, and the institutions established since then can be leveraged to further assert U.S. dominance in this critical technological space.
Recommendation 1. Support the NIST AI Safety Institute as a Key Pillar of American AI Excellence
The NIST AI Safety Institute (AISI) has assembled a world-leading team to ensure that the U.S. leads in safe, reliable, and trustworthy AI development. As AI integrates into critical sectors like national security, healthcare, and finance, strong safety standards are essential. AISI develops rigorous benchmarks, tests model security, and collaborates with industry to set standards, mitigating risks from unreliable AI. Strengthening AISI protects U.S. consumers, businesses, and national security while boosting global trust in the U.S. AI ecosystem—enhancing international adoption of American AI models. AISI enjoys broad support: bipartisan legislation to codify it has advanced in Congress, with backing from organizations across industry and academia. The AI Action Plan should prioritize AISI as a pillar of AI policy.
Recommendation 2. Expand the National Artificial Intelligence Research Resource from Pilot to Full Program
For decades, academic researchers have driven AI breakthroughs, laying the foundation for the technologies that now shape global competition. However, as AI development becomes increasingly concentrated within large technology companies, the U.S. risks losing the ecosystem that made these advances possible. The National AI Research Resource (NAIRR) Pilot is a critical initiative to keep American AI innovation competitive and accessible. By providing researchers and educators across the country access to cutting-edge AI tools, datasets, and computing power, NAIRR ensures that innovation is not confined to a handful of dominant firms but widely distributed. To keep America at the forefront of AI, the Trump Administration should expand NAIRR into a full-fledged program. Allowing the program to lapse would erode America’s leadership in AI research, forcing top talent to seek resources elsewhere. To secure its future, the White House should support bipartisan legislation to fully authorize NAIRR and include it in the President’s Budget Request, ensuring sustained investment in this vital initiative.
Recommendation 3. Enhance Transparency, Accountability, and Industry Engagement by Preserving the AI Use Case Inventory (link to letter of support >>>)
The AI Use Case Inventory, established under President Trump’s Executive Order 13960 and later codified in section 7225 of the FY23 National Defense Authorization Act, plays a crucial role in fostering public trust and innovation in government AI use. Recent OMB guidance (M-24-10) has expanded its scope, refining AI classifications and standardizing AI definitions. The inventory enhances public trust and accountability by ensuring transparency in AI deployments, tracks AI successes and risks to improve government services, and supports AI vendors by providing visibility into public-sector AI needs, thereby driving industry innovation. As the federal government considers revisions to M-24-10 and its plan for AI adoption within federal agencies, OMB should uphold the 2024 guidance on federal agency AI Use Case Inventories and ensure agencies have the necessary resources to complete their inventories effectively.
Recommendation 4. Propel U.S. Scientific and Security AI Leadership by Supporting AI and Computing at DOE
The Department of Energy (DOE) hosts leading research and innovation centers, particularly under the Undersecretary for Science and Innovation. The Office of Critical and Emerging Technologies (CET), for example, plays a key role in coordinating AI initiatives, including the proposed Frontiers in Artificial Intelligence for Science, Security, and Technology (FASST) program. To fully harness AI’s potential, DOE should establish a dedicated AI and Computing Laboratory under the Undersecretary, ensuring a strategic, mission-driven approach to AI development. This initiative would accelerate scientific discovery, strengthen national security, and tackle energy challenges by leveraging DOE’s advanced computational infrastructure and expertise. To ensure success, it should be supported by a multi-year funding commitment and flexible operational authorities, modeled after ARPA-E, to streamline hiring, procurement, and industry-academic partnerships.
Conclusion
These recommendations offer a roadmap for securing America’s leadership in artificial intelligence while upholding the fundamental values of innovation, competitiveness, and trustworthiness. By investing in cutting-edge research, equipping government and educators with the tools to navigate the AI era, and ensuring safety, the new administration can position America as a global standard-bearer for trustworthy and effective AI development.