Bold Emerging Technologies and Artificial Intelligence Policy for 2025 and Beyond
The United States has long led the way in technological innovation, from the invention of the transistor to the creation of the internet, both of which have fundamentally reshaped the world economy. As the adage goes, “The best way to predict the future is to create it,” and that holds true today as we stand on the brink of a new technological revolution driven by artificial intelligence (AI). The global AI market is expected to surpass $1 trillion by 2030, offering immense opportunities in healthcare, education, energy, and beyond. In healthcare, AI is already enhancing diagnostics, while in education it is providing personalized learning that can help bridge achievement gaps. Digital mental health services, powered by AI, are set to expand access to timely and effective care. AI’s potential in energy could lead to breakthroughs in efficiency and cost savings, while its use in government could streamline services, cut waste, and boost taxpayer value. With the right regulations, focused on safety, privacy, and expanded access, the U.S. can not only safeguard its citizens but also strengthen its global leadership in ethical technology development. As competition for AI dominance heats up, this is the moment for America to secure its place at the forefront of the next great wave of innovation.
AI Safety
To ensure AI safety across essential areas of public and private life, the U.S. must establish robust tools, systems, and accountability measures that promote transparency and trust. A key step would be creating a Digital Media Authentication Technologies (DMAT) Center of Excellence to strengthen the nation’s capacity to authenticate digital media and detect AI-generated content. Building on this, “AI provenance click-throughs” could give users transparent information about AI models’ origins and data sources, bolstering public understanding and accountability. In education, protecting children and equipping educators with AI literacy are paramount as AI shapes learning environments. Establishing child safety protocols, AI fairness tests in educational technology, and targeted professional development for teachers will ensure responsible AI integration, creating equitable, supportive, and safe spaces in schools and digital health settings. Together, these initiatives can foster trust in AI systems and support responsible innovation in critical domains.
Coming soon: Digital Media Authentication Technologies Center of Excellence (DMAT CoE) by Di Cooke
Coming soon: Reducing Information Integrity Risks from Synthetic Text with Community Guidance on Provenance and Fuzzy Provenance by Marilyn Zhang
Coming soon: An Agenda for Ensuring Child Safety in the AI Era by Amina Fazlulla and Ariel Fox Johnson
Coming soon: Teacher Education in AI and Data Science by Maggie Beiting-Parrish and Stephanie Melville
Coming soon: Modernizing AI Fairness Analysis in Education Contexts by John Whitmer and Maggie Beiting-Parrish
A Safe Harbor for AI Researchers: Promoting Safety and Trustworthiness Through Good-Faith Research by Kevin Klyman, Sayash Kapoor, and Shayne Longpre
AI companies hinder safety research by deterring independent researchers from exposing flaws in their systems, which poses risks to U.S. national security. To address this issue, Congress should expand existing AI bug bounty programs to include safety research and establish a safe harbor for researchers studying generative AI platforms. These actions will empower independent researchers to stress-test AI systems, enhance transparency, and ensure the safety and trustworthiness of these technologies, particularly after deployment.
An Early Warning System for AI-Powered Threats to National Security and Public Safety by Jam Kraprayoon, Joe O’Brien, and Shaun Lee
In just a few years, state-of-the-art artificial intelligence (AI) models have gone from not reliably counting to 10 to writing software, generating photorealistic videos on demand, combining language and image processing to guide robots, and even advising heads of state in wartime. If responsibly developed and deployed, AI systems could benefit society enormously. However, emerging AI capabilities could also pose severe threats to public safety and national security. To better manage these risks, Congress should set up an early warning system for novel AI-enabled threats that gives defenders as much time as possible to respond to a given capability before information about it is disclosed or leaked to the public. This system should also be used to share information about defensive AI capabilities.
Privacy and Tech Equity
As technology advances, safeguarding privacy and promoting equity in digital environments is crucial to protecting individual rights and reducing societal disparities. Privacy measures are essential, including data handling standards for newborn genetic screening databases and standards for privacy-enhancing technologies (PETs) such as differential privacy. Expanding broadband access will close connectivity gaps, while stringent rules for location data and drone certification can prevent abuses, creating a more inclusive, privacy-respecting digital environment.
Establish Data Standards To Protect Newborn DNA Privacy by Developing Data Storage Standards for Newborn Screening Samples by Christina Del Greco
The incoming administration should encourage states to develop data handling standards for newborn screening data. Specifically, these standards should address how long data is stored and who can access it. This can be accomplished by directing the Department of Health and Human Services’ (HHS) Federal Advisory Committee on Heritable Disorders in Newborns and Children (ACHDNC) to provide recommendations that clearly communicate data use and privacy measures to state health departments. The incoming administration should also encourage the development of educational materials that explain these privacy considerations to parents, and create funding opportunities to incentivize both measures.
Coming soon: Increasing Responsible Data Sharing Capacity throughout Government by Rachel Cummings, Shlomi Hod, Palak Jain, Gabriel Kaptchuk, Tamalika Mukherjee, Priyanka Nanayakkara, Jayshree Sarathy, and Jeremy Seeman
Coming soon: Limiting Geolocation Accuracy as a Measure of National Security by Neeraj Chandra
Coming soon: Establishing a National Drone Certification Program: Safeguarding Civil Liberties in Law Enforcement by Evîn Cheikosman
Update COPPA 2.0 to Strengthen Children’s Online Voice Privacy in the AI Era by Satwik Dutta
Advancing AI technologies pose significant risks to children’s voice privacy, as companies increasingly exploit voice data for profit-driven applications. To address these challenges, Congress should strengthen COPPA by clarifying audio file usage and deletion guidelines, establishing standards for de-identification to protect privacy while fostering innovation, and expanding the definition of personal information to include AI-generated avatars. These updates will better safeguard children’s data and shift compliance responsibility to operators.
Improving Health Equity Through AI by Leigh McCormack
Clinical decision support (CDS) artificial intelligence (AI) refers to systems and tools that use AI to help healthcare professionals make more informed clinical decisions. Inequities in CDS AI pose a significant challenge to healthcare systems and individuals, potentially exacerbating health disparities and perpetuating an already inequitable healthcare system. However, efforts to establish equitable AI in healthcare are gaining momentum, with support from various governmental agencies and organizations. Policymakers have a critical opportunity to enact change through legislation, implementing standards in AI governance, auditing, and regulation. We need regulatory frameworks, investment in AI accessibility, incentives for data collection and collaboration, and requirements for the auditing and governance of AI used in CDS tools. By addressing these challenges and implementing proactive measures, policymakers can harness AI’s potential to enhance healthcare delivery and reduce disparities, ultimately promoting equitable access to quality care for everyone.
Addressing the Disproportionate Impacts of Student Online Activity Monitoring Software on Students with Disabilities by Nicole Fuller and Lindsay Kubatzky
Student activity monitoring software is widely used in K-12 schools and has been employed in response to student mental health needs. Education technology companies have developed algorithms using artificial intelligence (AI) that seek to detect risk of harm or self-harm by monitoring students’ online activities. While teachers, parents, and students largely report that the benefits of student activity monitoring outweigh the risks, there is still a need to address the ways student privacy might be compromised and to avoid perpetuating existing inequities, especially for students with disabilities. To address these issues, Congress and federal agencies should: improve data collection on the proliferation of student activity monitoring software; enhance parental notification and ensure access to a free appropriate public education (FAPE); invest in the U.S. Department of Education’s Office for Civil Rights; and support state and local education agencies with technical assistance.
Government Capacity and AI
Building government capacity for AI adoption is vital for harnessing AI’s potential while upholding public values and ethical considerations. Enhancing this capacity involves creating a federal advisory board to guide state and local AI procurement, tailoring grant programs to fund impactful AI projects, and updating acquisition guidelines to include ethical, security, and interoperability standards. These efforts will support government AI adoption that aligns with public interests and ethical standards.
Coming soon: Expanding State and Local Government Capacity for AI Procurement and Use by Anna Kawakami, Haiyi Zhu, and Kenneth Holstein
Coming soon: A National Guidance Platform for AI Acquisition: Streamlining the procurement process for more equitable, safe, and innovative government use of AI by Clara Langevin
Coming soon: Blank Checks for Black Boxes: Bring AI Governance to Grant Competitions by Dan Bateyko
Supporting States in Balanced Approaches to AI in K-12 Education by Tara Courchaine
Although the AI revolution is definitively underway across all sectors of U.S. society, questions remain about AI’s accuracy and accessibility, how its broad application can influence the way students are represented within datasets, and how educators use AI in K-12 classrooms. There is both a need and the capacity for policymakers to support and promote thoughtful, ethical integration of AI in education and to ensure that its use complements and enhances inclusive teaching and learning while also protecting student privacy and preventing bias and discrimination. Because no federal legislation currently exists that aligns with and accomplishes these goals, Congress should develop a bill that targets grant funds and technical assistance to states and districts so they can create policy that is backed by industry and designed by educators and community stakeholders.
A National Center for AI in Education by Byron Ernest
There are immense opportunities associated with artificial intelligence (AI), yet it is important to vet the tools, establish threat monitoring, and implement appropriate regulations to guide the integration of AI into an equitable education system. Congress should establish a National Center for AI in Education to build the capacity of education agencies to undertake evidence-based continuous improvement in AI in education. The Center would increase the body of rigorous research and proven solutions for AI use by teachers and students.
A NIST Foundation to Support the Agency’s AI Mandate by Aleksandra Srdanovic
The National Institute of Standards and Technology (NIST) faces several obstacles to advancing its mission on artificial intelligence (AI) at a time when the field is rapidly advancing and the consequences of falling short are far-reaching. To enable NIST to respond quickly and effectively, Congress should authorize the establishment of a NIST Foundation to unlock additional resources, expertise, flexible funding mechanisms, and innovation, while ensuring the foundation is stood up with strong ethics and oversight mechanisms.
Innovation and Competitiveness
Maintaining global competitiveness in AI and tech requires robust innovation ecosystems and forward-thinking policy. Strengthening U.S. innovation includes investments in automated labs at the Department of Energy to drive scientific discovery, implementing Digital Product Passports for product transparency and circularity, and deploying computational antitrust tools for quicker market analyses. This strategy promotes sustainable practices, scientific advancement, and competition in tech markets.
Accelerating Materials Science with AI and Robotics by Dean Ball
Innovations in AI and robotics could revolutionize materials science by automating experimental processes and drastically accelerating the discovery of new materials. Currently, materials science research involves manually testing different combinations of elements to identify promising materials, which limits the pace of discovery. Using AI foundation models for physics and chemistry, scientists could simulate new materials, while robotic “self-driving labs” could run 24/7 to synthesize and evaluate them autonomously. This approach would enable continuous data generation, refining AI models in a feedback loop that speeds up research and lowers costs. Given its expertise in supercomputing and AI and its vast network of national labs, the Department of Energy (DOE) could lead this transformative initiative, potentially unlocking advancements in critical materials, such as improved battery components, that could have immense economic and technological impacts.
Coming soon: Digital Product Passports: Transforming America’s Linear Economy to Combat Waste, Counterfeits, and Supply Chain Vulnerabilities by Megan Brewster
Coming soon: Modernizing Antitrust Agencies for the AI Era by Shivam Saran
Coming soon: Addressing the Mental Health Crisis with High-Quality Digital Mental Health Treatments by Steven Schueler
America’s Teachers Innovate: A National Talent Surge for Teaching in the AI Era by Zarek Drozda, Erin Mote, Pat Yongpradit, and Talia Milgrom-Elcott
Teaching our young children to be productive and engaged participants in our society and economy is, alongside national defense, the most essential job in our country. Yet the competitiveness and appeal of teaching in the United States have plummeted over the past decade. The new Administration should announce a national talent surge to identify and scale innovative teacher preparation models, recruit new talent into them, expand teacher leadership opportunities, and boost the profession’s prestige. “America’s Teachers Innovate” is an eight-part executive action plan to be coordinated by the White House Office of Science and Technology Policy (OSTP), with implementation support through GSA’s Challenge.Gov and accompanied by new competitive priorities in existing National Science Foundation (NSF), Department of Education (ED), Department of Labor (DOL), and Department of Defense Education Activity (DoDEA) programs.
GenAI in Education Research Accelerator (GenAiRA) by Anastasia Betts, Sunil Gunderia, Diana Hughes, and Erin Lenihan
This initiative would identify research priorities and propose pilot initiatives for the responsible implementation of generative AI in teaching and learning. It seeks to guide policymakers and educators in harnessing AI’s potential while mitigating risks, promoting innovation and equity in education systems.
National Security AI Entrepreneur Visa: Creating a New Pathway for Elite Dual-Use Technology Founders to Build in America by Joel Burke
NVIDIA, Anthropic, OpenAI, HuggingFace, and scores of other American startups helping cement America’s leadership in the race for artificial intelligence (AI) dominance all have one thing in common: they have at least one immigrant co-founder. America needs these entrepreneurs more than ever as competition with China for global leadership in key fields like AI heats up. Congress must act to support high-skilled entrepreneurs by creating a National Security Startup Visa specifically targeted at founders of AI firms whose technology is inherently dual-use and critical for America’s economic leadership and national security.
A National Training Program for AI-Ready Students by Zarek Drozda
Computing, data, and AI basics will be critical for every student, yet our education system does not have the capacity to impart them. A national mobilization for the education workforce would ensure U.S. leadership in the global AI talent race, address mounting challenges in teacher shortages and retention, and fill critical workforce preparedness gaps not addressed by the CHIPS and Science Act. In crafting future legislation on artificial intelligence (AI), Congress should introduce a Digital Frontier and AI Readiness Act of 2025 to create educator training sites in emerging technology to ensure our students can graduate AI-ready.
Establish a Teacher AI Literacy Development Program by Amanda Bickerstaff, Amanda Depriest, and Corey Layne Crouch
The rapid advancement of artificial intelligence (AI) technology necessitates a transformation in our educational systems to equip the future workforce with necessary AI skills, starting with our K-12 ecosystem. Congress should establish a dedicated program within the National Science Foundation (NSF) to provide ongoing AI literacy training specifically for K-12 teachers and pre-service teachers. The proposed program would ensure that all teachers have the necessary knowledge and skills to integrate AI into their teaching practices effectively.
Establish Data-Sharing Standards for the Development of AI Models in Healthcare by Daniel Wu
The National Institute of Standards and Technology (NIST) should lead an interagency coalition to produce standards that enable third-party research and development on healthcare data. These standards, governing data anonymization, sharing, and use, have the potential to dramatically expedite the development and adoption of medical AI technologies across the healthcare sector.
Conclusion
Technology and innovation have always played a central role in American leadership, fueling the nation’s growth from the Industrial Revolution to the computer age and now the AI era. As technology advances, there is a critical opportunity to harness its potential for the common good, but doing so requires careful regulation and investment. The U.S. can foster an environment where innovation thrives while still protecting consumers, ensuring privacy, and promoting equity. This balance has always been part of America’s legacy, a reminder that technological progress should go hand in hand with the protection of rights and democratic values. Federal technology and AI procurement will be a crucial piece of this puzzle, guiding public-sector adoption in ways that are transparent, fair, and accountable. With effective governance, AI can bring real benefits to healthcare, education, energy, and the economy, driving progress while promoting inclusive growth. In the end, the success of AI will depend on strong partnerships between government, industry, and civil society to ensure that technology works for everyone, reinforcing America’s leadership in tech while staying true to its foundational ideals.
About the Memos
This portfolio of memos presents a diverse array of perspectives and policy ideas, offering multiple approaches to navigating the challenges and opportunities presented by advancing technologies and AI. Together, they highlight the need for adaptable and nuanced approaches that can be tailored to our evolving technological landscape. We consider these ideas the starting point for new conversations on safety and innovation, and we invite dialogue about these or other ideas on the topics addressed. Note that each memo stands alone and is attributed to a specific contributor or team of contributors. Neither the individual memos nor the list as a whole necessarily reflects the views of the full cohort, and the collection does not constitute a consensus.