Analytical Literacy First: A Prerequisite for AI, Data, and Digital Fluency
As digital technologies reshape every aspect of society, students must be equipped not only with specialized literacies (such as digital literacy, data literacy, and AI literacy) but with a foundational skill set that allows them to think critically, reason logically, and solve problems effectively. Analytical literacy is the scaffolding upon which more specialized literacies are built. Students in the 21st century need strong critical thinking skills, including reasoning, questioning, and problem-solving, before they can meaningfully engage with more advanced domains like digital, data, or AI literacy. Without these skills, students may struggle to engage critically with the technologies shaping their lives. We urge education leaders at the federal, state, and institutional levels to prioritize the development of analytical literacy by incentivizing integration across disciplines, aligning standards, and investing in research and professional development.
Introduction
As society becomes increasingly shaped by digital technologies, data-driven decision-making, and artificial intelligence, the ability to think analytically is no longer optional; it is essential. While digital, data, and AI literacies focus on domain-specific skills, analytical literacy enables students to engage with those domains critically and ethically. Analytical literacy encompasses critical thinking, logical reasoning, and problem-solving, and it equips students to interpret complex information, evaluate claims, and make informed decisions. These skills are foundational not only for academic success but also for civic engagement and workforce readiness in the 21st century.
Despite its importance, analytical literacy remains unevenly emphasized in K–12 education. These disparities are often driven by systemic inequities in school funding, infrastructure, and access to qualified educators. According to NCES’s Education Across America report, rural schools and those in under-resourced communities frequently lack the professional development opportunities, instructional materials, and technology needed to support analytical skill-building. In contrast, urban and well-funded districts are more likely to offer inquiry-based curricula, interdisciplinary projects, and formative assessment tools that foster deep thinking. Additionally, while some schools integrate analytical thinking through inquiry-based learning, project-based instruction, or interdisciplinary STEM curricula, there is no consistent national framework guiding its development at this time. Instructional strategies vary widely by state or district, and standardized assessments often prioritize procedural fluency over deeper cognitive engagement like analytical reasoning.
Recent research underscores the urgency of this issue. A 2024 literature review from the Center for Assessment highlights analytical thinking as a core competency for future success, noting its role in supporting other 21st-century skills such as creativity, collaboration, and digital fluency. Similarly, a systematic review published in the International Journal of STEM Education emphasizes the need for early engagement with analytical and statistical thinking to prepare students for a data-rich society.
There is growing consensus among educators, researchers, and policy advocates that analytical literacy deserves a more central role in K–12 education. Organizations such as NWEA and Code.org have called for stronger integration of analytical and data literacy skills into curriculum and professional development efforts. However, without coordinated policy action, these efforts remain fragmented.
This memo builds on that emerging momentum. It argues that analytical literacy should be treated as a skill that underpins students’ ability to engage meaningfully with digital, data, and AI literacies. By elevating analytical literacy through standards, instruction, and investment, we can ensure that all students are prepared to participate, innovate, and thrive in a complex and rapidly changing world.
To understand why analytical literacy must be prioritized, we examine the current landscape of specialized literacies and the foundational skills they require.
Challenges and Opportunities
In today’s interconnected world, digital literacy, data literacy, and AI literacy are no longer optional; they are essential skill sets for civic participation, economic mobility, and ethical decision-making. These literacies enable students to navigate online environments, interpret complex datasets, and engage thoughtfully with emerging technologies.
- Digital literacy encompasses the ability to use technology effectively and critically, including evaluating online information, understanding digital safety, and engaging ethically in digital environments.
- Data literacy involves the capacity to understand, interpret, evaluate, and communicate data. This includes recognizing data sources, identifying patterns, and drawing informed conclusions.
- AI literacy entails understanding the basic concepts of artificial intelligence, its applications, ethical implications, and how to interact with AI systems responsibly.
Together, these literacies form a cognitive toolkit that empowers students to be not just consumers of information and technology, but thoughtful participants in civic and digital life.
While these literacies address specific domains, they all fundamentally rely on what we call analytical literacy. At its core, analytical literacy involves the ability to:
- Ask insightful questions. Identifying the core issues and seeking relevant information.
- Evaluate information critically. Assessing the credibility, bias, and relevance of sources.
- Identify patterns and relationships. Recognizing connections and trends in complex information.
- Reason logically. Constructing sound arguments and drawing valid inferences.
- Solve problems effectively. Applying analytical skills to find solutions and make informed decisions.
Yet, without structured development of these foundational skills, students risk becoming passive consumers of technology rather than active, informed participants. This presents an urgent opportunity: by centering analytical literacy in standards and assessment, instruction, and professional learning, we can create enduring pathways for students to participate, innovate, and thrive in an increasingly data-driven world.
Examples of implementation include:
- In Standards and Assessment. States should revise academic standards to include grade-level expectations for analytical reasoning across disciplines. For example, middle school science standards might require students to construct evidence-based arguments using data, while high school civics assessments could include open-ended questions that ask students to evaluate competing claims in news media.
- In Instruction. Teachers should embed analytical skill development into daily practice through inquiry-based learning, Socratic seminars, or interdisciplinary projects. A math teacher could guide students in analyzing real-world datasets to identify trends and make predictions, while an English teacher might use argument mapping to help students deconstruct persuasive texts.
- In Professional Learning. Districts should offer workshops that train educators to use formative assessment strategies that surface student reasoning such as think-alouds, peer critiques, or performance tasks. Coaching cycles should focus on how to scaffold questioning techniques that push students beyond recall toward deeper analysis.
By embedding these practices systemically, we move from episodic exposure to analytical thinking toward a coherent, equitable framework that prepares all students for the demands of the digital age.
Addressing these gaps requires coordinated action across multiple levels of the education system. The following plan outlines targeted strategies for federal, state, and institutional leaders.
Plan of Action
To strengthen analytical literacy in K–12 education, we recommend targeted efforts from three federal offices, supported by state agencies, educational organizations, and teacher preparation programs.
Recommendation 1. Federal Offices
Federal agencies have the capacity to set national priorities, fund innovation, and coordinate cross-sector efforts. Their leadership is essential to catalyzing systemic change. For example:
White House Office of Science and Technology Policy (OSTP)
OSTP now chairs the newly established White House Task Force on Artificial Intelligence Education, per the April 2025 Executive Order on Advancing AI Education. This task force is charged with coordinating federal efforts to promote AI literacy and proficiency across the K–12 continuum. We recommend that OSTP:
- Expand the scope of the Task Force to explicitly include analytical literacy as a foundational competency for AI readiness.
- Ensure that public-private partnerships and instructional resources developed under the order emphasize reasoned decision-making as a core component, not just technical fluency.
- Use the Presidential Artificial Intelligence Challenge as a platform to showcase interdisciplinary student work that demonstrates analytical thinking applied to real-world AI problems.
This alignment would ensure that analytical literacy is not treated as an adjacent concern, but as a central pillar of the federal AI education strategy.
Institute of Education Sciences (IES)
IES should coordinate closely with the Task Force to support the Executive Order’s goals through a National Analytical Literacy Research Agenda. This agenda could:
- Fund studies that explore how analytical thinking supports AI literacy across grade levels.
- Evaluate the effectiveness of instructional models that integrate analytical reasoning into AI and computer science curricula.
- Develop scalable tools and assessments that measure students’ analytical readiness for AI-related learning pathways.
IES could also serve as a technical advisor to the Task Force, ensuring that its initiatives are grounded in evidence-based practice.
Office of Elementary and Secondary Education (OESE)
In light of the Executive Order’s directive for educator training and curriculum innovation, OESE should:
- Prioritize analytical literacy integration in discretionary grant programs that support AI education.
- Develop guidance for states on embedding analytical competencies into AI-related standards and instructional frameworks.
- Collaborate with the Task Force to ensure that professional development efforts include training on how to teach analytical thinking—not just how to use AI tools.
National Science Foundation (NSF)
The National Science Foundation plays a pivotal role in advancing STEM education through research, innovation, and capacity-building. To support the goals of the Executive Order and strengthen analytical literacy as a foundation for AI readiness, we recommend that NSF:
- Establish a dedicated grant program focused on developing and scaling instructional models that integrate analytical literacy into STEM and AI education. This could include interdisciplinary curricula, project-based learning frameworks, and performance-based assessments that emphasize reasoning, problem-solving, and data interpretation.
- Fund research-practice partnerships that explore how analytical thinking develops across grade levels and how it supports students’ engagement with AI concepts. These partnerships could include school districts, universities, and professional organizations working collaboratively to design and evaluate scalable models.
- Support educator capacity-building initiatives, such as fellowships or professional learning networks, that equip teachers to foster analytical literacy in STEM classrooms. This aligns with NSF’s recent Dear Colleague Letters on expanding K–12 resources for AI education.
- Invest in technology-enhanced learning tools that provide real-time feedback on student reasoning and support formative assessment of analytical skills. These tools could be piloted in diverse school settings to ensure equity and scalability.
By positioning analytical literacy as a research and innovation priority, NSF can help ensure that K–12 students are not only technically proficient but cognitively prepared to engage with emerging technologies in thoughtful, ethical, and creative ways.
Note: Given the evolving organizational landscape within the U.S. Department of Education—including the elimination of offices such as the Office of Educational Technology—it is critical to identify stable federal anchors. The agencies named above have longstanding mandates tied to research, policy innovation, and K–12 support, making them well-positioned to advance this work.
Recommendation 2. State Education Policymakers
While federal agencies can provide vision, strategic direction, and funding, states hold the levers of implementation. Each state has the authority—and responsibility—to shape the standards, assessments, and professional development systems that reflect local priorities and student needs, and each plays a critical role in translating policy into classroom practice. To advance analytical literacy meaningfully, we recommend the following actions:
Elevate Analytical Literacy in Academic Standards
States should conduct curriculum audits to identify where analytical skills are currently embedded—and where gaps exist. This process should inform the revision of academic standards across disciplines, ensuring that analytical literacy is treated as a foundational competency, not an ancillary skill. California’s ELA/ELD Framework, for example, emphasizes inquiry, argumentation, and evidence-based reasoning across subjects—not just in English language arts. Similarly, the History–Social Science Framework promotes critical thinking and source evaluation as core civic skills.
States can build on these models by:
- Developing cross-disciplinary analytical literacy frameworks that guide integration from elementary through high school.
- Embedding analytical competencies into STEM, humanities, and career technical education standards.
- Aligning revisions with the goals of the Executive Order, which calls for foundational skill-building to support digital and AI literacy.
Invest in Professional Development and Instructional Capacity
States should fund and scale professional learning ecosystems that equip educators to teach analytical thinking explicitly. This includes:
- Training on inquiry-based learning, Socratic dialogue, and formative assessment strategies that surface student reasoning.
- Development of microcredential pathways for educators to demonstrate expertise in fostering analytical literacy across content areas.
- Support for instructional coaches and teacher leaders to model analytical practices and mentor peers.
California’s professional learning modules aligned to the Common Core State Standards and ELA/ELD frameworks offer a useful starting point for designing scalable, standards-aligned training.
Redesign Student Assessments to Capture Deeper Thinking
States should move beyond traditional standardized tests and invest in assessment systems that measure analytical reasoning authentically. States can catalyze this innovation by issuing targeted Requests for Proposals (RFPs) that invite districts, assessment developers, and research-practice partnerships to design and pilot new models of assessment aligned to analytical literacy. These RFPs should prioritize:
- Performance tasks that require students to analyze real-world problems and propose solutions.
- Portfolio assessments that document students’ growth in reasoning and problem-solving over time.
- Open-ended questions that ask students to evaluate claims, synthesize evidence, and construct logical arguments.
- Scalable models that can inform statewide systems over time.
By using the RFP process strategically, states can surface promising practices, support local innovation, and build a portfolio of assessment approaches that reflect the complexity of students’ analytical capabilities.
Recommendation 3. Professional Education Organizations
Beyond government, professional education organizations shape the field through resources, advocacy, and collaboration. They are key partners in scaling analytical literacy.
Professional education organizations play a vital role in shaping the landscape of K–12 education. These groups—ranging from subject-specific associations like the National Council of Teachers of English (NCTE) and the National Science Teaching Association (NSTA), to broader coalitions like ASCD and the National Education Association (NEA)—serve as hubs for professional learning, policy advocacy, resource development, and field-wide collaboration. They influence classroom practice, inform state and federal policy, and support educators through research-based guidance and community-building.
Because these organizations operate at the intersection of practice, policy, and research, they are uniquely positioned to champion analytical literacy as a foundational skill across disciplines. To advance this work, we recommend the following actions:
- Develop Flexible, Discipline-Specific Resources. Create adaptable instructional materials—such as lesson plans, assessment templates, and classroom protocols—that help educators integrate analytical thinking into diverse subject areas. For example, NCTE could develop resources that support argument mapping in English classrooms, while NSTA might offer tools for teaching evidence-based reasoning in science labs.
- Advocate for Analytical Literacy as a National Priority. Publish position papers, host public events, and build strategic partnerships that elevate analytical literacy as essential to digital and civic readiness. Organizations can align their advocacy with the federal directive for AI education, emphasizing the role of analytical thinking in preparing students for ethical and informed engagement with emerging technologies.
- Foster Cross-Sector Collaboration. Convene working groups, research-practice partnerships, and educator networks to share best practices and scale effective models. For example, AERA could facilitate studies on how analytical literacy develops across grade levels, while CoSN might explore how digital tools can support real-time feedback on student reasoning.
By leveraging their convening power, subject-matter expertise, and national reach, professional education organizations can accelerate the adoption of analytical literacy and ensure it is embedded meaningfully into the fabric of K–12 education.
Recommendation 4. Teacher Preparation Programs
To sustain long-term change, we must begin with those entering the profession. Teacher preparation programs are the foundation for instructional capacity and must evolve to meet this moment.
Teacher preparation programs (TPPs) are the gateway to the teaching profession. Housed in colleges, universities, and alternative certification pathways, these programs are responsible for equipping future educators with the knowledge, skills, and dispositions needed to support student learning. Their influence is profound: research consistently shows that well-prepared teachers are the most important in-school factor for student success.
Yet many TPPs face persistent challenges. Too often, graduates report feeling underprepared for the realities of diverse, data-rich classrooms. Coursework may emphasize theory over practice, and clinical experiences vary widely in quality. Critically, few programs offer explicit training in how to foster analytical literacy—despite its centrality to digital, data, and AI readiness. In response to national calls for foundational skill-building and educator capacity, TPPs must evolve to meet this moment.
While federal funding for teacher preparation has become more limited, states are stepping in through innovative models like teacher residencies, registered apprenticeships, and microcredentialing pathways. These initiatives are often supported by modified use of Title II funds, state general funds, and workforce development grants. To accelerate this momentum, federal programs like Teacher Quality Partnership (TQP) grants and Supporting Effective Educator Development (SEED) grants could be adapted to prioritize analytical literacy, while states can issue targeted RFPs to redesign coursework, practicum experiences, and capstone projects that center reasoning, problem-solving, and ethical decision-making. To ensure that new teachers are ready to cultivate analytical thinking in their students, we recommend the following actions:
- Integrate Analytical Pedagogy into Coursework and Practicum. Embed instructional strategies that center analytical literacy into pre-service coursework. This includes training in inquiry-based learning, argumentation, and data interpretation. Practicum experiences should reinforce these strategies through guided observation and practice in real classrooms.
- Ensure Faculty Model Analytical Thinking. Faculty must demonstrate analytical reasoning in their own teaching—whether through modeling how to deconstruct complex texts, facilitating structured debates, or using data to inform instructional decisions. This modeling helps pre-service teachers internalize analytical habits of mind.
- Strengthen Field Placements for Analytical Instruction. Partner with districts to place candidates in classrooms where analytical literacy is actively taught. Provide structured mentorship from veteran teachers who use questioning techniques, performance tasks, and formative assessments to surface student reasoning.
- Develop Capstone Projects Focused on Analytical Literacy. Require candidates to complete a culminating project that demonstrates their ability to design, implement, and assess instruction that builds students’ analytical skills. These projects could be aligned with state standards and local district priorities.
- Align Program Outcomes with Emerging Policy Priorities. Ensure that program goals reflect the competencies outlined in federal initiatives like the AI Education Executive Order. This includes preparing teachers to support foundational literacies that enable students to engage critically with digital and AI technologies.
Together, these actions form a coherent strategy for embedding analytical literacy across the K–12 continuum. But success depends on bold leadership and sustained commitment. By reimagining teacher preparation through the lens of analytical literacy, we can ensure that every new educator enters the classroom equipped to foster deep thinking, ethical reasoning, and problem-solving—skills that students need to thrive in a complex and rapidly changing world.
Conclusion
Analytical literacy is not a nice-to-have; it is a prerequisite for the specialized proficiencies students need in today’s complex world. By embedding critical thinking, logical reasoning, and problem-solving across the K–12 continuum, we empower students to meet challenges with curiosity and discernment. We urge policymakers, educators, and institutions to act boldly: establish analytical literacy as a cornerstone of 21st-century education and co-create a future where every student has the analytical tools essential for meaningful participation, innovative thinking, and long-term success in the digital age and beyond.
Making Healthcare AI Human-Centered through the Requirement of Clinician Input
Through partnership with the Doris Duke Foundation, FAS is advancing a vision for healthcare innovation that centers safety, equity, and effectiveness in artificial intelligence. Informed by the NYU Langone Health symposium on transforming health systems into learning health systems, FAS seeks to ensure that AI tools are developed, deployed, and evaluated in ways that reflect real-world clinical practice. FAS is leveraging its role in policy entrepreneurship to promote responsible innovation by engaging with key actors in government, research, and software development. These recommendations align with emerging efforts across health systems to integrate human-centered AI and evidence-based decision-making into digital transformation. By shaping AI grant requirements and post-market evaluation standards, these ideas aim to accelerate safe, equitable implementation while supporting ongoing learning and improvement.
The United States must ensure AI improves healthcare while safeguarding patient safety and clinical expertise. There are three priority needs:
- Embedding clinician involvement in the development and testing of AI tools
- Using representative data and promoting human-centered design
- Maintaining continuous oversight through post-market evaluation and outcomes-based contracting
This memo examines the challenges and opportunities of integrating AI tools into healthcare, emphasizing that human-centered design must ensure these technologies are tailored to real-world clinical environments. As AI adoption grows, embedding clinician feedback into federal grant requirements for healthcare AI development, together with the use of representative data, will promote safety, accuracy, and equity in these tools and keep them aligned with real-world needs. In addition, regular updates based on evolving clinical practices and patient populations must be part of the development lifecycle to maintain long-term reliability, and continuous post-market surveillance is necessary to ensure these tools remain both accurate and equitable. By taking these steps, healthcare systems can harness the full potential of AI while safeguarding patient safety and clinician expertise. Federal agencies such as the Office of the National Coordinator for Health Information Technology (ONC) and the Food and Drug Administration (FDA) can incentivize clinician involvement through outcomes-based contracting approaches that link funding to measurable improvements in patient care. This strategy ensures that grant recipients embed clinician expertise at key stages of development and testing, ultimately aligning incentives with real-world health outcomes.
Challenge and Opportunity
AI tools such as predictive triage classifiers and large language models (LLMs) have the potential to improve care delivery. However, integrating these tools effectively into daily clinical workflows without meaningful clinician involvement poses significant challenges. As just one example, AI tools used in chronic illness triage can help prioritize patients based on the severity of their condition, leading to more timely care delivery. However, without direct clinician involvement in validating, interpreting, and guiding AI recommendations, these tools can suffer from poor usability and limited real-world effectiveness. Even highly accurate tools become irrelevant if clinicians do not adopt and engage with them, reducing the positive impact they can have on patient outcomes.
Mysterious Inner Workings
The “black box” nature of AI has fueled skepticism among healthcare providers and undermined trust among patients. When AI systems lack clear and interpretable explanations, clinicians are more likely to avoid or distrust them, a response known as algorithm aversion: clinicians lose trust in a tool after seeing it make errors, making future use less likely even if the tool is usually accurate. Designing AI with human-centered principles, particularly offering clinicians a role in which they can validate, interpret, and guide AI recommendations, will help build trust and ensure decisions remain grounded in clinical expertise. A key approach to increasing trust and usability is institutionalizing clinician engagement in the early stages of the development process. By involving clinicians during the development and testing phases, AI developers can ensure the tools fit seamlessly into clinical workflows. This will also help mitigate concerns about a tool’s real-world effectiveness, as clinicians will be more likely to adopt tools they feel confident in. Without this collaborative approach, AI tools risk being sidelined or misused, preventing health systems from becoming genuinely adaptive and learning oriented.
Lack of Interoperability
A significant challenge in deploying AI tools across healthcare systems is interoperability. Most patients receive care across multiple providers and healthcare settings, so AI tools must integrate seamlessly with electronic health records (EHRs) and other clinical systems. Without this integration, tools can lose their clinical relevance and effectiveness and cannot be adopted at scale, and the resulting lack of connectivity leads to inefficiencies, duplicate testing, and other harmful errors. One way to address this is through outcomes-based contracting (OBC), discussed below.
Trust in AI and Skill Erosion
Beyond trust and usability, there are broader risks associated with sidelining clinicians during AI integration. The use of AI tools without clinician input also presents the risk of clinician deskilling, in which clinicians’ skills and decision-making abilities erode over time due to reliance on AI tools. This skill erosion leads to a decline in judgment in situations where AI may not be readily available or suitable. Recent evidence from the ACCEPT trial shows that endoscopists’ performance dropped in non-AI settings after months of AI-assisted procedures, a troubling phenomenon that we should aim to prevent. AI-induced skill erosion also raises ethical concerns, particularly in complex environments where over-reliance on AI could erode clinical judgment and autonomy. If clinicians become too dependent on automated outputs, their ability to make critical decisions may be compromised, potentially impacting patient safety.
Embedded Biases
In addition to the erosion of human skills, AI systems risk embedding biases if trained on unrepresentative data, leading to unfair or inaccurate outcomes across different patient groups. AI tools may also produce errors that appear plausible, such as generating nonexistent terms, which poses serious safety concerns, especially when clinicians do not catch those mistakes. A systematic review of AI tools found that only 22% of studies involved clinicians throughout the development phase; this lack of early clinician involvement has contributed to usability and integration issues across AI healthcare tools.
All of these issues underscore how critical clinician involvement is in the development of AI tools to ensure they are usable, effective, and safe. Clinician involvement should include defining relevant clinical tasks, evaluating interpretability of the system, validating performance across diverse patient groups, and setting standards for handoff between AI and clinician decision-making. Therefore, funding agencies should require AI developers to incorporate representative data and meaningful clinician involvement in order to mitigate these risks. Recognizing these challenges, it’s crucial to understand that implementing and maintaining AI requires continual human oversight and substantial infrastructure. Many health systems find this infrastructure too resource-intensive to properly sustain. Given the complexity of these challenges, without adequate governance, transparency, clinician training, and ethical safeguards, AI may hinder rather than help the transition to an enhanced learning health system.
Outcomes-Based Contracting (OBC)
To ensure that AI tools deliver real clinical value, the federal contracting process should reinforce clinician involvement through measurable incentives. Outcomes-based contracting (OBC), a model in which payments or grants are tied to demonstrated improvements in patient outcomes, can be a powerful tool here. This model is not only a financing mechanism but a lever to institutionalize clinician engagement. Tying funding to real-world clinical impact compels developers to design tools that clinicians will use and find value in, ultimately increasing usability, trust, and adoption. It rewards impact rather than merely building tools or producing novel methods.
Leveraging outcomes-based models could also institutionalize clinician engagement across the funding lifecycle, ensuring that developers demonstrate explicit plans for clinician participation, through staff integration or formal consultation, as a prerequisite for funding. Moreover, although AI tools may be safe and effective when first deployed, performance can change over time due to shifts in patient populations, changes in clinical practice, and software updates; this is known as model degradation. A crucial component of using these AI tools is therefore regular surveillance to ensure they remain accurate, equitable, and responsive to real-world use by clinicians and patients. At the same time, while clinician involvement is essential, it is important to acknowledge that including clinicians in every stage of AI tool development, testing, deployment, and evaluation may not be realistic given the significant time cost for clinicians, their competing clinical responsibilities, and their limited familiarity with AI technology. Despite these constraints, there are ways to engage clinicians effectively at key decision points during the development and testing process without requiring their presence at every stage.
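To make the surveillance point concrete, below is a minimal sketch of how a health system might watch for model degradation: tracking a rolling AUROC over recently adjudicated cases and alerting when performance falls a set tolerance below the validation baseline. The class name, window size, and tolerance are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch: flagging model degradation during post-deployment monitoring.
# Window size, baseline, and tolerance are hypothetical choices that a real
# surveillance program would set with clinicians and regulators.
from collections import deque
from sklearn.metrics import roc_auc_score

class DegradationMonitor:
    def __init__(self, baseline_auroc: float, window: int = 500, tolerance: float = 0.05):
        self.baseline = baseline_auroc      # AUROC measured at initial validation
        self.tolerance = tolerance          # allowed drop before alerting
        self.scores = deque(maxlen=window)  # most recent model risk scores
        self.labels = deque(maxlen=window)  # corresponding confirmed outcomes

    def record(self, risk_score: float, outcome: int) -> None:
        """Log one prediction once the true outcome has been adjudicated."""
        self.scores.append(risk_score)
        self.labels.append(outcome)

    def degraded(self) -> bool:
        """True if recent AUROC has fallen more than `tolerance` below baseline."""
        if len(set(self.labels)) < 2:       # need both outcome classes to score
            return False
        current = roc_auc_score(list(self.labels), list(self.scores))
        return (self.baseline - current) > self.tolerance

# Usage: feed adjudicated cases as they accrue; escalate on alert.
monitor = DegradationMonitor(baseline_auroc=0.88)
monitor.record(0.72, 1)
monitor.record(0.31, 0)
if monitor.degraded():
    print("Performance drop exceeds tolerance; trigger clinician review and retraining.")
```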
Urgency and Federal Momentum
Major challenges in integrating AI into clinical workflows, including poor usability, algorithm aversion, clinician skepticism, and the potential for embedded biases, highlight the need for thoughtful deployment of these tools. These challenges have taken on new urgency amid recent healthcare shifts, particularly the rapid acceleration of AI adoption after the COVID-19 pandemic, which drove breakthroughs in telemedicine, diagnostics, and pharmaceutical innovation that simply were not possible before. With the rapid pace of integration, however, comes the risk of unregulated deployment and embedded safety vulnerabilities. Federal momentum supports this growth, with directives emphasizing AI safety, transparency, and responsible deployment, including the authorization of over 1,200 AI-powered medical devices, primarily in radiology, cardiology, and pathology, areas that tend to be complex in nature. Yet without clinician involvement and the use of representative training data, the algorithms behind such devices may remain biased and fail to integrate smoothly into care delivery. This disconnect could delay adoption, reduce clinical impact, and increase the risk of patient harm. It is therefore imperative to set standards, embed clinician expertise in AI design, and ensure safe, effective deployment for care delivery.
Furthermore, this moment of federal momentum aligns with broader policy shifts. As highlighted by a recent CMS announcement, the White House and national health agencies are working with technology leaders to create a patient-centric healthcare ecosystem. This includes a push for interoperability, clinical collaboration, and outcomes-driven innovation, all of which bolster the case for clinician engagement being woven into the very fabric of AI development. AI can potentially improve patient outcomes dramatically, as well as increase cost-efficiency in healthcare. Yet, without structured safeguards, these tools may deepen existing health inequities. However, with proper input from clinicians, these tools can reduce diagnostic errors, improve accuracy in high-stakes cases such as cancer detection, and streamline workflows, ultimately saving lives and reducing unnecessary costs.
As AI systems become further embedded into clinical practice, they will help to shape standards of care, influencing clinical guidelines and decision-making pathways. Furthermore, interoperability is essential when using these tools because most patients receive care from multiple providers across systems. Therefore, AI tools must be designed to communicate and integrate data from various sources, including electronic health records (EHR), lab databases, imaging systems, and more. Enabling shared access can enhance the coordination of care and reduce redundant testing or conflicting diagnoses. To ensure this functionality, clinicians must help design AI tools that account for real-world care delivery across what is currently a fragmented system.
Reshaping Healthcare AI
These challenges and risks culminate in a moment of opportunity to reshape how AI supports healthcare delivery and to ensure its design is trustworthy and focused on outcomes. To realize this opportunity, clinicians must be embedded into the stages of AI tool development to improve safety, usability, and adoption in healthcare settings. While some developers do involve clinicians during development, the practice is not standard. Bridging this gap requires targeted action to ensure clinical expertise is consistently incorporated from the start. One way to achieve this is for federal agencies to require AI developers to integrate representative data and clinician feedback into their tools as a condition of funding eligibility. This approach would improve usability and enhance contextual relevance to diverse patient populations and practice environments. It would also address current shortcomings: evidence shows that some AI tools are poorly integrated into clinical workflows, which not only reduces their impact but also undermines broader adoption and clinician confidence in these systems.

Moreover, creating a clinician feedback loop for these systems will reduce the clerical burden many clinicians experience and allow them more dedicated time with their patients. Through human-centered design, we can use clinician expertise during the development and testing process to mitigate issues before they arise. This approach would build trust among clinicians and improve patient safety, which is paramount for reducing errors and misinterpreted diagnoses. With strong requirements and funding standards in place as safeguards, AI can transform health systems into adaptable learning environments that produce evidence and deliver equitable, higher-quality care. This is a pivotal opportunity to show how innovation can support human expertise and strengthen trust in healthcare.
AI has the potential to dramatically improve patient outcomes and healthcare cost-efficiency, particularly in high-stakes diagnostic and treatment decisions in areas like oncology, cardiology, and critical care, where errors carry large consequences. In these areas, AI can analyze imaging, lab, and genomic data to uncover patterns that may not be immediately apparent to clinicians. For example, AI tools have shown promise in improving diagnostic accuracy in cancer detection and in reducing the time clinicians spend on tasks like charting, allowing more face-to-face time with patients.

However, these tools must be designed with clinician input at key stages, especially for higher-risk conditions; otherwise they may be prone to errors or fail to integrate into clinical workflows. By embedding outcomes-based contracting (OBC) into federal funding and aligning financial incentives with clinical effectiveness, we encourage the development and use of AI tools that improve patient outcomes. This supports a broader shift toward value-based care, in which outcomes, not just outputs, define success.

The connection between OBC and clinician involvement is straightforward: when clinicians are involved in the design and testing of AI tools, these tools are more likely to be effective in real-world settings, thereby improving outcomes and justifying the financial incentives tied to OBC. In high-stakes settings, these tools should not function autonomously; clinician input is critical to validate AI outputs, especially where mortality or morbidity is high. In contrast, for lower-risk or routine care, such as common colds or minor dermatologic conditions, AI may serve as a time-saving tool that does not require the same depth of clinician oversight.
Plan of Action
These actionable recommendations aim to help federal agencies and health systems embed clinician involvement, representative data, and continuous oversight into the lifecycle of healthcare AI.
Recommendation 1. Federal Agencies Should Require Clinician Involvement in the Development and Testing of AI Tools Used in Clinical Settings.
Federal agencies should require clinician involvement in the development and testing of AI healthcare tools. This could be enforced through a combination of agency guidance and funding eligibility tied to specific clinician roles and checkpoints. Specifically, agencies like the Office of the National Coordinator for Health Information Technology (ONC) and the Food and Drug Administration (FDA) can issue guidance mandating clinician participation and can tie AI tool development funding to the inclusion of clinicians in the design and testing phases. Guidance can mandate clinician involvement at critical stages for: (1) defining clinical tasks and user interface requirements, (2) validating interpretability and performance for diverse populations, (3) piloting in real workflows, and (4) reviewing safety and bias metrics. This would ensure AI tools used in clinical settings are human-centered, effective, and safe.
Key stakeholders who may wish to be consulted in this process include offices within the Department of Health and Human Services (HHS), such as the Office of the National Coordinator for Health Information Technology (ONC), the Food and Drug Administration (FDA), and the Agency for Healthcare Research and Quality (AHRQ). ONC and FDA should issue guidance encouraging clinician engagement during premarket review, allowing experts to thoroughly review scientific data and real-world evidence to ensure that the tools are human-centered and able to improve the quality of care.
Recommendation 2. Incentivize Clinician Involvement Through Outcomes-Based Contracting
Federal agencies such as the Department of Health and Human Services (HHS), the Centers for Medicare and Medicaid Services (CMS), and the Agency for Healthcare Research and Quality (AHRQ) should incorporate outcomes-based contracting requirements into AI-related healthcare grant programs. Funding should be awarded to grantees who: (1) include clinicians on their AI design teams or advisory boards, (2) develop formal clinician feedback loops, and (3) demonstrate measurable outcomes such as improved diagnostic accuracy or workflow efficiency. These outcomes are essential measures of how clinician engagement improves the usability and clinical impact of AI tools.
Key stakeholders include HHS, CMS, ONC, AHRQ, as well as clinicians, AI developers, and potentially patient advocacy organizations. These requirements should prioritize funding for entities that demonstrate clear clinician involvement at key development and testing phases, with metrics tied to improvements in patient outcomes and clinician satisfaction. This model would align with CMS’s ongoing efforts to foster a patient-centered, data-driven healthcare ecosystem that uses tools designed with clinical needs in mind, as recently emphasized during the health tech ecosystem initiative meeting. Embedding outcomes-based contracting into the federal grant process will link funding to clinical effectiveness and incentivize developers to work alongside clinicians through the lifecycle of their AI tools.
Recommendation 3. Develop Standards for AI Interoperability
ONC should develop interoperability guidelines that enable AI systems to share information across platforms while simultaneously protecting patient privacy. As the challenge of healthcare data fragmentation has become evident, AI tools must seamlessly integrate with diverse electronic healthcare records (EHRs) and other clinical platforms to ensure their effectiveness.
An example of a successful interoperability framework is the Trusted Exchange Framework and Common Agreement (TEFCA), which aims to establish a nationwide infrastructure for the exchange of health information. A model such as this can enable seamless integration across different healthcare settings and EHR systems, ultimately promoting efficient and accurate patient care. This effort would involve consultation with clinicians, electronic health record vendors, patients, and AI developers. These guidelines will help ensure that AI tools can be used safely and effectively across clinical settings.
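As a concrete illustration of standards-based integration, the sketch below queries a FHIR server (the HL7 interoperability standard that underlies TEFCA-aligned exchange and most modern EHR APIs) for a patient's laboratory observations. The base URL and patient ID are placeholders; a production tool would also handle SMART on FHIR authorization, paging, and patient privacy constraints.

```python
# Minimal sketch: pulling lab results from a FHIR-conformant EHR endpoint.
# The base URL and patient ID are placeholders; real deployments authenticate
# (e.g., SMART on FHIR / OAuth 2.0) and must honor privacy constraints.
import requests

FHIR_BASE = "https://example-ehr.org/fhir"   # hypothetical FHIR server
PATIENT_ID = "12345"                         # hypothetical patient resource ID

def fetch_lab_observations(base: str, patient_id: str) -> list[dict]:
    """Return laboratory Observation resources for one patient."""
    resp = requests.get(
        f"{base}/Observation",
        params={"patient": patient_id, "category": "laboratory", "_count": 50},
        headers={"Accept": "application/fhir+json"},
        timeout=30,
    )
    resp.raise_for_status()
    bundle = resp.json()
    return [entry["resource"] for entry in bundle.get("entry", [])]

for obs in fetch_lab_observations(FHIR_BASE, PATIENT_ID):
    code = obs.get("code", {}).get("text", "unknown test")
    value = obs.get("valueQuantity", {})
    print(code, value.get("value"), value.get("unit"))
```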
Recommendation 4. Establish Post-Market Surveillance and Evaluation of Healthcare AI Tools to Enhance Performance and Reliability
Federal agencies such as FDA and AHRQ should establish frameworks that can be used for the continuous monitoring of AI tools in clinical settings. These frameworks for privacy-protected data collection should incorporate feedback loops that allow real-world data from clinicians and patients to inform ongoing updates and improvements to the systems. This ensures the effectiveness and accuracy of the tools over time. Special emphasis should be placed on bias audits that can detect disparities in the system’s performance across different patient groups. Bias audits will be key to identifying whether AI tools inadvertently present disadvantages to specific populations based on the data they were trained on. Agencies should require that these audits be conducted routinely as part of the post-market surveillance process. The surveillance data collected can be used for future development cycles where AI tools are updated or re-trained to address shortcomings.
Evaluation methods should track clinician satisfaction, error rates, diagnostic accuracy, and the reporting of failures. Incorporating the routine bias audits described above into this ongoing evaluation will ensure that these tools remain equitable and effective over time. Funding for this initiative could be provided through a zero-cost, fee-based structure or through federally appropriated grants. Key stakeholders in this process could include clinicians, AI developers, and patients, all of whom would share responsibility for oversight.
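To illustrate what a routine bias audit might compute, here is a minimal sketch that derives sensitivity and false-positive rates per patient subgroup from adjudicated surveillance data and flags subgroups whose sensitivity lags the best-performing group by more than a set margin. The data layout, subgroup variable, and 0.10 margin are assumptions for illustration.

```python
# Minimal sketch: a subgroup bias audit over post-market surveillance data.
# Assumes a table of adjudicated cases with a binary model prediction, the
# true outcome, and a subgroup label; the 0.10 disparity margin is hypothetical.
import pandas as pd

def audit_subgroups(df: pd.DataFrame, margin: float = 0.10) -> pd.DataFrame:
    """Per-subgroup sensitivity/FPR; flag sensitivity gaps beyond the margin."""
    rows = []
    for group, g in df.groupby("subgroup"):
        tp = ((g.pred == 1) & (g.outcome == 1)).sum()
        fn = ((g.pred == 0) & (g.outcome == 1)).sum()
        fp = ((g.pred == 1) & (g.outcome == 0)).sum()
        tn = ((g.pred == 0) & (g.outcome == 0)).sum()
        rows.append({
            "subgroup": group,
            "n": len(g),
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "false_positive_rate": fp / (fp + tn) if (fp + tn) else float("nan"),
        })
    report = pd.DataFrame(rows)
    # Flag any subgroup whose sensitivity trails the best group by > margin.
    report["flagged"] = (report.sensitivity.max() - report.sensitivity) > margin
    return report

# Usage with a toy surveillance extract:
cases = pd.DataFrame({
    "subgroup": ["A", "A", "B", "B", "B", "A"],
    "pred":     [1, 0, 1, 0, 0, 1],
    "outcome":  [1, 1, 1, 1, 0, 0],
})
print(audit_subgroups(cases))
```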
Conclusion
Integrating AI tools into healthcare has immense potential to improve patient outcomes, streamline clinical workflows, and reduce errors and bias. However, without clinician involvement in the development and testing of these tools, we risk continual system degradation and patient harm. Requiring that all AI systems used in healthcare be human-centered through clinician input will ensure these systems are effective, safe, and aligned with real-world clinical needs. This human-centered approach is critical not only for usability but also for building trust among clinicians and patients, fostering the adoption of AI tools, and ensuring they function properly in real-world clinical settings.
In addition, aligning funding with clinical outcomes through outcomes-based contracting adds a mechanism that enforces accountability and ensures lasting impact. When developers are rewarded for improving safety, usability, and equity through clinician involvement, AI tools can be translated into safer care. There is urgency to address these challenges given the rapid adoption of AI tools, which will require safeguards and ethical oversight. By embedding these recommendations into funding opportunities, we will move America toward trustworthy healthcare systems that enhance patient safety, preserve clinician expertise, and remain adaptive while maximizing AI’s potential to improve patient outcomes. Clinician engagement, both in the development process and through ongoing feedback loops, will be the foundation of this transformation. With the right structures in place, we can ensure AI becomes a trusted partner in healthcare and not a risk to it.
This memo was produced as part of Strengthening Pathways to Disease Prevention and Improved Health Outcomes.
In Honor of Patient Safety Day, Four Recommendations to Improve Healthcare Outcomes
Through partnership with the Doris Duke Foundation, FAS is working to ensure that rigorous, evidence-based ideas on the cutting edge of disease prevention and health outcomes are reaching decision makers in an effective and timely manner. To that end, we have been collaborating with the Strengthening Pathways effort, a series of national conversations held in spring 2025 to surface research questions, incentives, and overlooked opportunities for innovation with potential to prevent disease and improve outcomes of care in the United States. FAS is leveraging its skills in policy entrepreneurship, working with session organizers, to ensure that ideas surfaced in these symposia reach decision-makers to drive impact in active policy windows.
On this World Patient Safety Day 2025, we share a set of recommendations that align with the goal of zero preventable harm in healthcare articulated in the National Quality Strategy of the Centers for Medicare and Medicaid Services (CMS). Working with Patients for Patient Safety US, which co-led one of the Strengthening Pathways conversations this spring with the Johns Hopkins University Armstrong Institute for Patient Safety and Quality, the issue brief below outlines a bold, modernized approach that uses artificial intelligence technology to empower patients and drive change. FAS continues to explore the rapidly evolving AI and healthcare nexus.
Patient safety is an often-overlooked challenge in our healthcare systems. Whether safety events are caused by medical error, missed or delayed diagnoses, deviations from standards of care, or neglect, hundreds of billions of dollars and hundreds of thousands of lives are lost each year to patient safety lapses in healthcare settings. Yet most patient safety events are never captured, and clinicians lack the tools to learn from them and improve. Here we present four critical proposals for improving patient safety that are worthy of attention and action.
Challenge and Opportunity
Reducing patient death and harm from medical error surfaced as a U.S. public health priority at the turn of the century with the landmark National Academy of Sciences (NAS) report, To Err is Human: Building a Safer Health System (2000). Research shows that medical error is the third-largest cause of preventable death in the U.S. Analysis of Medicare claims data and electronic health records by the Department of Health and Human Services (HHS) Office of the Inspector General (OIG), in a series of reports from 2008 to 2025, consistently finds that 25-30% of Medicare recipients experience harm events across multiple healthcare settings, from hospitals to skilled nursing facilities to long-term care hospitals to rehab centers. Research on the broader population finds similar rates for adult patients in hospitals. The most recent study of preventable harm in ambulatory care found that 7% of patients experienced at least one adverse event, with wide variation (1.8% to 23.6%) from one clinical setting to another. Improving diagnostic safety has emerged as the largest opportunity for patient harm prevention: new research estimates that 795,000 patients in the U.S. annually experience death or harm due to missed, delayed, or ineffectively communicated diagnoses. The annual cost to the healthcare system of preventable harm and its healthcare cascades is conservatively estimated to exceed $200 billion, a cost ultimately borne by families and taxpayers.
In its National Quality Strategy, the Centers for Medicare and Medicaid Services (CMS) articulated an aspirational goal of zero preventable harm in healthcare. The National Action Alliance for Patient and Workforce Safety, now managed by the Agency for Healthcare Research and Quality (AHRQ), has a goal of a 50% reduction in preventable harm by 2026. These goals cannot be achieved without a bold, modernized approach that uses AI technology to empower patients and drive change. Under-reporting of negative outcomes and patient harms keeps clinicians and staff from identifying and implementing solutions to improve care. In its latest analysis (July 2025), the OIG finds that fewer than 5% of medical errors are ever reported to the systems designed to gather insights from them. Hospitals failed to capture half of the harm events identified via medical record review, and even among captured events, few led to investigation or safety improvements. Only 16% of events required to be reported externally to CMS or state entities were actually reported, meaning critical oversight systems are missing safety signals entirely.
Multiple research papers over the last 20 years find that patients will report things that providers do not. But there has been no simple, trusted way for patient observations to reach the right people at the right time in a way that supports learning and improvement. Patients could be especially effective in reporting missed or delayed diagnoses, which often manifest across the continuum of care rather than in one healthcare setting or a single patient visit. The advent of AI systems provides an unprecedented opportunity to address patient safety and improve patient outcomes if we can improve the data available on the frequency and nature of medical errors. Here we present four ideas for improving patient safety.
Recommendation 1. Create AI-Empowered Safety Event Reporting and Learning System With and For Patients
The Department of Health and Human Services (HHS) can, through CMS, AHRQ or another HHS agency, develop an AI-empowered National Patient Safety Learning and Reporting System that enables anyone, including patients and families, to directly report harm events or flag safety concerns for improvement, including in real or near real time. Doing so would make sure everyone in the system has the full picture — so healthcare providers can act quickly, learn faster, and protect more patients.
This system will:
- Develop a reporting portal to collect, triage, and analyze patient-reported data directly from beneficiaries to improve patient and diagnostic safety.
- Redesign and modernize Consumer Assessment of Healthcare Providers and Systems (CAHPS) surveys to include questions that capture beneficiaries’ experiences and outcomes related to patient and diagnostic safety events.
- Redefine the Beneficiary and Family Centered Care Quality Improvement Organizations (BFCC QIO) scope of work to integrate the QIOs into the National Patient Safety Learning and Reporting System.
The learning system will:
- Use advanced triage (including AI) to distinguish high-signal events and route credible reports directly to the care team and oversight bodies that can act on them (a simplified sketch of this step follows the list).
- Solicit timely feedback and insights in support of hospitals, clinics, and nursing homes to prevent recurrence, as well as feedback over time on patient outcomes that manifest later, e.g. as a result of missed or delayed diagnoses.
- Protect patients and providers by focusing on efficacy of solutions, not blame assignment.
- Feed anonymized, interoperable data into a national learning network that will spot systemic risks sooner and make aggregated data available for transparency and system learning.
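A simplified sketch of the triage step referenced above: each report is scored on severity terms, corroborating detail, and recency, and high-scoring reports are routed to the care team while the rest feed the aggregate learning network. The keywords, weights, and threshold are hypothetical stand-ins for a validated model with human review in the loop.

```python
# Minimal sketch: triaging patient-submitted safety reports by signal strength.
# Keywords, weights, and the routing threshold are hypothetical.
from dataclasses import dataclass

SEVERITY_TERMS = {"wrong medication": 3, "allergic": 3, "missed diagnosis": 2,
                  "delayed": 1, "fall": 2, "infection": 2}

@dataclass
class Report:
    text: str
    has_record_reference: bool   # cites a visit date, medication name, etc.
    days_since_event: int

def triage_score(report: Report) -> int:
    text = report.text.lower()
    score = sum(w for term, w in SEVERITY_TERMS.items() if term in text)
    if report.has_record_reference:
        score += 2               # corroborating detail raises credibility
    if report.days_since_event <= 7:
        score += 1               # recent events are actionable sooner
    return score

def route(report: Report, threshold: int = 4) -> str:
    if triage_score(report) >= threshold:
        return "care team + oversight bodies"
    return "aggregate learning queue"

report = Report("I was given the wrong medication after surgery on June 3.",
                has_record_reference=True, days_since_event=2)
print(route(report))   # -> "care team + oversight bodies"
```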
Recommendation 2. Create a Real-time ‘Patient Safety Dashboard’ using AI
HHS should build an AI-driven platform that integrates patient-reported safety data (including data from the new National Patient Safety Learning and Reporting System recommended above) with clinical data from electronic health records to create a real-time ‘patient safety dashboard’ for hospitals and clinics. This dashboard will empower providers to improve care in real time, and will:
- Help health care providers make accurate and timely diagnoses and avoid errors.
- Make patient reporting easy, effective, and actionable.
- Use AI to triage harm signals and detect systemic risk in real time.
- Build shared national infrastructure for healthcare reporting for all stakeholders.
- Align incentives to reward harm reduction and safety.
By harnessing the power of AI, providers will be able to respond faster, identify patients at risk more effectively, and prevent harm, thereby improving outcomes. This “central nervous system” for patient safety will be deployed nationally to help detect safety signals in real time, connect information across settings, and alert teams before harm occurs.
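One way to picture the dashboard’s core computation is a rolling comparison of each unit’s combined harm signals (patient reports plus EHR-derived triggers) against its own baseline. The sketch below is illustrative only; the units, counts, and the simple two-times-baseline alert rule are all invented.

```python
from collections import Counter

def weekly_signal_rate(events):
    """Count harm signals per unit; events are (unit, source) pairs."""
    return Counter(unit for unit, _ in events)

baseline = {"ICU": 2.0, "Med-Surg": 3.0}   # assumed average signals per week
this_week = weekly_signal_rate([
    ("ICU", "patient_report"), ("ICU", "ehr_trigger"),
    ("ICU", "patient_report"), ("ICU", "ehr_trigger"),
    ("ICU", "patient_report"), ("Med-Surg", "ehr_trigger"),
])

for unit, count in this_week.items():
    # Crude illustrative rule: alert when signals reach twice the baseline.
    if count >= 2 * baseline.get(unit, float("inf")):
        print(f"ALERT: {unit} at {count} signals vs. baseline {baseline[unit]}")
```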
Recommendation 3. Mine Billing Data for Deviations from Standards of Care
Standards of care are guidelines that define the processes, procedures, and treatments that patients should receive in various medical and professional contexts. Standards ensure that individuals receive appropriate and effective care based on established practices. Most standards of care are developed and promulgated by medical societies. Not all clinicians and clinical settings adhere to standards of care, and deviations can be appropriate depending on the case at hand. Nonetheless, standards of care exist for a reason, and deviations should be noted when medical errors result in negative outcomes for patients so that clinicians can learn from these outcomes and improve.
Some patient safety challenges are evident right in the billing data submitted to CMS and insurers. For example, deviations from standards of care can be detected by comparing clinical diagnosis codes and the associated billing codes against widely accepted standards of care. By using CMS billing data, the government could identify opportunities for driving the development, augmentation, and wider adoption of standards of care, showing variability in compliance across providers, reducing medical error, and improving outcomes.
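As a simplified sketch of what such screening could look like, the snippet below checks claims against a guideline lookup table. The diagnosis and procedure codes and the expected-care mappings are invented placeholders; real screening would draw on published guidelines and the full ICD-10 and CPT code sets, and a flag would mark a case for review rather than establish a deviation.

```python
# Hypothetical guideline table: diagnosis code -> procedures a standard of
# care expects to appear somewhere in the patient's claims.
EXPECTED_CARE = {
    "DX-DIABETES": {"PROC-A1C-TEST", "PROC-EYE-EXAM"},
    "DX-AFIB":     {"PROC-ANTICOAG-ASSESS"},
}

claims = [
    {"patient": "p1", "diagnoses": ["DX-DIABETES"], "procedures": ["PROC-A1C-TEST"]},
    {"patient": "p2", "diagnoses": ["DX-AFIB"],     "procedures": []},
]

for claim in claims:
    for dx in claim["diagnoses"]:
        missing = EXPECTED_CARE.get(dx, set()) - set(claim["procedures"])
        if missing:
            print(f"{claim['patient']}: {dx} missing expected care {sorted(missing)}; flag for review")
```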
Giving standard setters real data to adapt and develop new standards of care is a powerful tool for improving patient outcomes.
Recommendation 4. Create a Patient Safety AI Testbed
HHS can also establish a Patient Safety AI Testbed to evaluate how AI tools used in diagnosis, monitoring, and care coordination perform in real-world settings. This testbed will ensure that AI improves safety, not just efficiency — and can be co-led by patients, clinicians, and independent safety experts. This is an expansion of the testbeds in the HHS AI Strategic Plan.
The Patient Safety AI Testbed could include:
- Funding for independent AI test environments to monitor real-world safety and performance over time.
- Public reliability benchmarks and “AI safety labeling”.
- Required participation by AI vendors and provider systems.
Conclusion
There are several key steps that the government can take to address the major loss of health, dollars, and lives due to medical errors, while simultaneously bolstering treatment guidelines, driving the development of new transparent data, and holding the medical establishment accountable for improving care. Here we present four proposals. None of them is particularly expensive when weighed against the tremendous savings they will drive throughout our healthcare system. We hope the Administration’s commitment to patient safety is such that it will adopt them and drive a new era in which caregivers, healthcare systems, and insurance payers work together to improve patient safety and care standards.
This memo was produced as part of Strengthening Pathways to Disease Prevention and Improved Health Outcomes.
What’s Progress and What’s Not in the Trump Administration’s AI Action Plan
Artificial intelligence is already shaping how Americans work, learn, and receive vital services—and its influence is only accelerating. To steer this technology toward the public good, the United States needs a coherent, government-wide AI agenda that encourages innovation and trustworthiness, grounded in the best scientific evidence.
In February 2025, the Trump Administration sought public comment on its development of an AI Action Plan. The Federation of American Scientists saw this as an opportunity to contribute expert, nonpartisan guidance, combining insights from our policy team with ideas from the broader science and technology community, developed as part of our Day One Project. In our comments to the White House Office of Science and Technology Policy we recommended incorporating responsible policies to unleash AI innovation, accelerate AI adoption, ensure secure and trustworthy AI, and strengthen our existing world-class government institutions.
Last week, the Trump Administration released its AI Action Plan. The document contains many promising aspects related to AI research and development, interpretability and control, managing national security risks, and new models for accelerating scientific research. However, there are also concerning provisions, such as those inhibiting state regulations and removing mentions of diversity, equity, and inclusion and climate change from the NIST AI Risk Management Framework. These omissions weaken the United States’ ability to lead on some of the most pressing societal challenges associated with AI technologies.
Despite the AI Action Plan’s ambitious proposals, it will remain aspirational without funding, proper staffing, and clear timelines. The deep cuts to budgets and personnel across the government present an incongruous picture of the Administration’s priorities and policy agenda for emerging technologies, and place pressure on Congress to ensure this plan is properly supported.
Promising Advances & Opportunities
AI Interpretability
As an organization, we’ve developed and shared concrete ideas for advancing AI interpretability—the science of understanding how AI works under the hood. The Administration’s elevation of AI interpretability in the plan is a promising step. Improving interpretability is not only critical for technical progress but also essential to fostering public trust and confidence in AI systems.
We have provided a roadmap for the government to deliver on the promise of interpretable AI in both our AI Action Plan comments and a more detailed memo. In these documents we’ve advocated for advancing AI explainability through open-access resources, standardized benchmarks, common tasks, user-centered research, and a robust repository of techniques to ensure consistent, meaningful, and widely applicable progress across the field. We’ve also argued for the federal government to prioritize interpretable AI in procurement—especially for high-stakes applications—and to establish research and development agreements with AI companies and interpretability research organizations to red team critical systems and conduct targeted interpretability research.
AI Research and Development
Beyond interpretability, the AI Action Plan lays out an ambitious and far-reaching agenda for AI research and development, including robustness and control, advancing the science of AI, and building an AI evaluation ecosystem. We recognize that the Administration has incorporated forward-looking proposals that echo those from our Day One Project—such as building world-class scientific datasets and using AI to accelerate materials discovery. These policy proposals showcase our perspective that the federal government has a critical role to play in supporting groundbreaking scientific and technical research.
A Toolbox for AI Procurement
The Administration’s focus on strengthening the federal workforce’s capacity to use and manage AI is an essential step toward responsible deployment, cross-agency coordination, and reliability in government AI use. The proposed GSA-led AI procurement toolbox closely mirrors our recommendation for a resource to guide agencies through the AI acquisition process. Proper implementation of this policy could further support government efficiency and agility to respond to the needs of constituents.
Managing National Security Risks
The Administration also clearly recognizes the emerging national security risks posed by AI. While the exact nature of many of these risks remains uncertain, the plan contains prudent recommendations on key areas like biosecurity and cybersecurity, and highlights the important role that the Center for AI Standards and Innovation (CAISI) can play in responding to these risks. FAS has previously published policy ideas on how to prepare for emerging AI threats and create a system for reporting AI incidents, and has outlined how CAISI can play a greater role in advancing AI reliability and security. These proposals can help the government implement the recommendations advanced in the Action Plan.
Focused Research Organizations
The Administration’s support of Focused Research Organizations (FROs) is a promising step. FROs are organizations that address well-defined challenges that require scale and coordination but that are not immediately profitable, and are an exciting model for accelerating scientific progress. FAS first published on FROs in 2020, and has since released a range of proposals from experts that are well-suited to the FRO model. Since 2020, various FROs have gained over $100 million in philanthropic funding, but we believe that this is the first time that the U.S. government has explicitly embraced the FRO model.
Where the AI Action Plan Falls Short
Restricting State-Level Guardrails
The Administration’s AI Action Plan proposes to restrict federal AI funding to states when state AI rules “hinder the effectiveness” of that funding. While avoiding unnecessary red tape is sensible, this unclear standard could offer the administration wide latitude to block state rules at its discretion. FAS has recently opposed preemption of state-level AI regulation by Congress in the absence of federal action. Without national standards for AI, state rules provide an opportunity to develop best practices for responsible AI adoption.
Failing to Address Bias in AI Systems
We are also concerned by the recommended revision to the NIST AI Risk Management Framework (RMF) that would eliminate references to diversity, equity, and inclusion. AI bias is a proven, measurable phenomenon, as documented by a broad scientific consensus from leading researchers and practitioners across sectors. Failing to address such biases leaves the public vulnerable to the harms of discriminatory or unfair systems that can affect people in areas like healthcare, housing, hiring, and access to public services. This includes deeply consequential biases, such as those affecting rural communities. A lack of action to address AI bias will only inhibit beneficial adoption and further erode trust in the accuracy of algorithmic systems.
The AI Action Plan contains a direction for the federal government to only procure AI models from developers who “ensure that their systems are objective and free from top-down ideological bias,” which is implemented via an associated executive order. Building modern AI systems involves a huge range of choices, including which data to use for training, how to “fine tune” the model for particular use-cases, and the “system prompt” which guides model behavior. Each of these stages can affect model outputs in ways that are not well understood and can be difficult to control. There is no standard definition for what constitutes a model that is “free from top-down ideological bias”, and this vague standard could easily be misused or improperly implemented at the agency level with unintended consequences for the public. We encourage the administration to instead focus on increasing transparency and explainability of systems as a mechanism to prevent unintended bias in outputs.
Ignoring the Environmental Costs and Opportunities
The Administration’s direction to remove mention of climate change from the RMF overlooks the very real climate and environmental impacts associated with the growing resource demands of large-scale AI systems. Measuring and managing environmental impacts is an important component of AI infrastructure buildout, and removing this policy lever will also restrict AI adoption. This is also a missed opportunity to push forward the ways that AI can help tackle climate change and other environmental issues. In our recent AI and Energy Policy Sprint, we developed policy memos that highlighted the benefits AI could bring to our energy system and environment, as well as ways of responding to AI’s environmental and health impacts.
The Importance of Public Trust
The current lack of public trust in AI risks inhibiting innovation and adoption of AI systems, meaning new methods will not be discovered and new benefits won’t be felt. A failure to uphold high standards in the technology we deploy will also place our nation at a strategic disadvantage compared to our competitors. Recognizing this issue, both the first and second Trump administrations have emphasized public trust as a key theme in their AI policy documents. Many of the research directions outlined in the administration’s AI Action Plan promise to steer AI technology in a more trustworthy direction and deliver widespread benefits to the public. However, several measures simultaneously threaten to undermine important guardrails, while cuts to important government programs also work against the goals the administration has set for itself.
The Federation of American Scientists will continue to collaborate with the scientific community to place rigorous evidence-based policy at the heart of delivering AI that works for all Americans.
Use Artificial Intelligence to Analyze Government Grant Data to Reveal Science Frontiers and Opportunities
President Trump challenged the Director of the Office of Science and Technology Policy (OSTP), Michael Kratsios, to “ensure that scientific progress and technological innovation fuel economic growth and better the lives of all Americans”. Much of this progress and innovation arises from federal research grants. Federal research grant applications include detailed plans for cutting-edge scientific research. They describe the hypothesis, data collection, experiments, and methods that will ultimately produce discoveries, inventions, knowledge, data, patents, and advances. They collectively represent a blueprint for future innovations.
AI now makes it possible to use these resources to create extraordinary tools for refining how we award research dollars. Further, AI can provide unprecedented insight into future discoveries and needs, shaping both public and private investment into new research and speeding the application of federal research results.
We recommend that the Office of Science and Technology Policy (OSTP) oversee a multiagency development effort to fully subject grant applications to AI analysis to predict the future of science, enhance peer review, and encourage better research investment decisions by both the public and the private sector. The federal agencies involved should include all the member agencies of the National Science and Technology Council (NSTC).
Challenge and Opportunity
The federal government funds approximately 100,000 research awards each year across all areas of science. The sheer human effort required to analyze this volume of records remains a barrier, and thus, agencies have not mined applications for deep future insight. If agencies spent just 10 minutes of employee time on each funded award, it would take 16,667 hours in total—or more than eight years of full-time work—to simply review the projects funded in one year. For each funded award, there are usually 4–12 additional applications that were reviewed and rejected. Analyzing all these applications for trends is untenable. Fortunately, emerging AI can analyze these documents at scale. Furthermore, AI systems can work with confidential data and provide summaries that conform to standards that protect confidentiality and trade secrets. In the course of developing these public-facing data summaries, the same AI tools could be used to support a research funder’s review process.
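The arithmetic behind that estimate is straightforward; the snippet below reproduces it, assuming a standard 2,080-hour work year.

```python
awards_per_year = 100_000
minutes_per_award = 10
hours = awards_per_year * minutes_per_award / 60   # 16,666.7 hours
work_years = hours / 2_080                         # standard full-time year
print(f"{hours:,.0f} hours ≈ {work_years:.1f} person-years")  # 16,667 hours ≈ 8.0 person-years
```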
There is a long precedent for this approach. In 2009, the National Institutes of Health (NIH) debuted its Research, Condition, and Disease Categorization (RCDC) system, a program that automatically and reproducibly assigns NIH-funded projects to their appropriate spending categories. The automated RCDC system replaced a manual data call, which resulted in savings of approximately $30 million per year in staff time, and has been evolving ever since. To create the RCDC system, the NIH pioneered digital fingerprints of every scientific grant application using sophisticated text-mining software that assembled a list of terms and their frequencies found in the title, abstract, and specific aims of an application. Applications for which the fingerprints match the list of scientific terms used to describe a category are included in that category; once an application is funded, it is assigned to categorical spending reports.
NIH staff soon found it easy to construct new digital fingerprints for other entities, such as research products or even scientists, by scanning the title and abstract of a public document (such as a research paper) or by aggregating all terms found in the existing grant application fingerprints associated with a person.
NIH review staff can now match the digital fingerprints of peer reviewers to the fingerprints of the applications to be reviewed and ensure there is sufficient reviewer expertise. For NIH applicants, the RePORTER webpage provides the Matchmaker tool to create digital fingerprints of title, abstract, and specific aims sections, and match them to funded grant applications and the study sections in which they were reviewed. We advocate that all agencies work together to take the next logical step and use all the data at their disposal for deeper and broader analyses.
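To illustrate the underlying mechanism, the sketch below builds term-frequency fingerprints and matches a reviewer to an application by cosine similarity. This is only a toy illustration of the concept; NIH’s actual text-mining pipeline is substantially more sophisticated.

```python
import math
from collections import Counter

def fingerprint(text):
    """Toy digital fingerprint: a bag of lowercased term frequencies."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-frequency fingerprints."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

application = fingerprint("protein folding dynamics in cardiac tissue")
reviewers = {
    "reviewer_1": fingerprint("cardiac tissue imaging and protein dynamics"),
    "reviewer_2": fingerprint("soil microbiome carbon sequestration methods"),
}
best = max(reviewers, key=lambda r: cosine(application, reviewers[r]))
print(best)  # reviewer_1
```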
We offer five specific use cases below:
Use Case 1: Funder support. Federal staff could use AI analytics to identify areas of opportunity and support administrative pushes for funding.
When making a funding decision, agencies need to consider not only the absolute merit of an application but also how it complements the existing funded awards and agency goals. There are some common challenges in managing portfolios. One is that an underlying scientific question can be common to multiple problems that are addressed in different portfolios. For example, one protein may have a role in multiple organ systems. Staff are rarely aware of all the studies and methods related to that protein if their research portfolio is restricted to a single organ system or disease. Another challenge is to ensure proper distribution of investments across a research pipeline, so that science progresses efficiently. Tools that can rapidly and consistently contextualize applications across a variety of measures, including topic, methodology, agency priorities, etc., can identify underserved areas and support agencies in making final funding decisions. They can also help funders deliberately replicate some studies while reducing the risk of unintentional duplication.
Use Case 2: Reviewer support. Application reviewers could use AI analytics to understand how an application is similar to or different from currently funded federal research projects, providing reviewers with contextualization for the applications they are rating.
Reviewers are selected in part for their knowledge of the field, but when they compare applications with existing projects, they do so based on their subjective memory. AI tools can provide more objective, accurate, and consistent contextualization to ensure that the most promising ideas receive funding.
Use Case 3: Grant applicant support. Research funding applicants could be offered contextualization of their ideas among funded projects and failed applications in ways that protect the confidentiality of federal data.
NIH has already made admirable progress in this direction with their Matchmaker tool—one can enter many lines of text describing a proposal (such as an abstract), and the tool will provide lists of similar funded projects, with links to their abstracts. New AI tools can build on this model in two important ways. First, they can help provide summary text and visualization to guide the user to the most useful information. Second, they can broaden the contextual data being viewed. Currently, the results are only based on funded applications, making it impossible to tell if an idea is excluded from a funded portfolio because it is novel or because the agency consistently rejects it. Private sector attempts to analyze award information (e.g., Dimensions) are similarly limited by their inability to access full applications, including those that are not funded. AI tools could provide high-level summaries of failed or ‘in process’ grant applications that protect confidentiality but provide context about the likelihood of funding for an applicant’s project.
Use Case 4: Trend mapping. AI analyses could help everyone—scientists, biotech, pharma, investors—understand emerging funding trends in their innovation space in ways that protect the confidentiality of federal data.
The federal science agencies have made remarkable progress in making their funding decisions transparent, even to the point of offering lay summaries of funded awards. However, the sheer volume of individual awards makes summarizing these funding decisions a daunting task that will always be out of date by the time it is completed. Thoughtful application of AI could make practical, easy-to-digest summaries of U.S. federal grants in close to real time, and could help to identify areas of overlap, redundancy, and opportunity. By including projects that were unfunded, the public would get a sense of the direction in which federal funders are moving and where the government might be underinvested. This could herald a new era of transparency and effectiveness in science investment.
Use Case 5: Results prediction tools. Analytical AI tools could help everyone—scientists, biotech, pharma, investors—predict the topics and timing of future research results and neglected areas of science in ways that protect the confidentiality of federal data.
It is standard practice in pharmaceutical development to predict the timing of clinical trial results based on public information. This approach can work in other research areas, but it is labor-intensive. AI analytics could be applied at scale to specific scientific areas, such as predictions about the timing of results for materials being tested for solar cells or of new technologies in disease diagnosis. AI approaches are especially well suited to technologies that cross disciplines, such as applications of one health technology to multiple organ systems, or one material applied to multiple engineering applications. These models would be even richer if the negative cases—the unfunded research applications—were included in analyses in ways that protect the confidentiality of the failed application. Failed applications may signal where the science is struggling and where definitive results are less likely to appear, or where there are underinvested opportunities.
Plan of Action
Leadership
We recommend that OSTP oversee a multiagency development effort to achieve the overarching goal of fully subjecting grant applications to AI analysis to predict the future of science, enhance peer review, and encourage better research investment decisions by both the public and the private sector. The federal agencies involved should include all the member agencies of the NSTC. A broad array of stakeholders should be engaged because much of the AI expertise exists in the private sector, the data are owned and protected by the government, and the beneficiaries of the tools would be both public and private. We anticipate four stages to this effort.
Recommendation 1. Agency Development
Pilot: Each agency should develop pilots of one or more use cases to test and optimize training sets and output tools for each user group. We recommend this initial approach because each funding agency has different baseline capabilities to make application data available to AI tools and may also have different scientific considerations. Despite these differences, all federal science funding agencies have large archives of applications in digital formats, along with records of the publications and research data attributed to those awards.
These use cases are relatively new applications for AI and should be empirically tested before broad implementation. Trend mapping and predictive models can be built with a subset of historical data and validated with the remaining data. Decision support tools for funders, applicants, and reviewers need to be tested not only for their accuracy but also for their impact on users. Therefore, these decision support tools should be considered as a part of larger empirical efforts to improve the peer review process.
Solidify source data: Agencies may need to enhance their data systems to support the new functions for full implementation. OSTP would need to coordinate the development of data standards to ensure all agencies can combine data sets for related fields of research. Agencies may need to make changes to the structure and processing of applications, such as ensuring that sections to be used by the AI are machine-readable.
Recommendation 2. Prizes and Public–Private Partnerships
OSTP should coordinate the convening of private sector organizations to develop a clear vision for the profound implications of opening funded and failed research award applications to AI, including predicting the topics and timing of future research outputs. How will this technology support innovation and more effective investments?
Research agencies should collaborate with private sector partners to sponsor prizes for developing the most useful and accurate tools and user interfaces for each use case refined through agency development work. Prize submissions could use test data drawn from existing full-text applications and the research outputs arising from those applications. Top candidates would be subject to standard selection criteria.
Conclusion
Research applications are an untapped and tremendously valuable resource. They describe work plans and are clearly linked to specific research products, many of which, like research articles, are already rigorously indexed and machine-readable. These applications are data that can be used for optimizing research funding decisions and for developing insight into future innovations. With these data and emerging AI technologies, we will be able to understand the trajectory of our science with unprecedented breadth and insight, perhaps even approaching the accuracy with which human experts can foresee changes within a narrow area of study. However, maximizing the benefit of this information is not inevitable, because the source data is currently closed to AI innovation. It will take vision and resources to build effectively from these closed systems—our federal science agencies have both, and with some leadership, they can realize the full potential of these applications.
This memo was produced as part of the Federation of American Scientists and Good Science Project sprint. Find more ideas at Good Science Project x FAS.
Measuring and Standardizing AI’s Energy and Environmental Footprint to Accurately Assess Impacts
The rapid expansion of artificial intelligence (AI) is driving a surge in data center energy consumption, water use, carbon emissions, and electronic waste—yet these environmental impacts, and how they will change in the future, remain largely opaque. Without standardized metrics and reporting, policymakers and grid operators cannot accurately track or manage AI’s growing resource footprint. Currently, companies often use outdated or narrow measures (like Power Usage Effectiveness, PUE) and purchase renewable credits to obscure true emissions. Their true carbon footprint may be as much as 662% higher than the figures they report. A single hyperscale AI data center can guzzle hundreds of thousands of gallons of water per day and contribute to a “mountain” of e-waste, yet only about a quarter of data center operators even track what happens to retired hardware.
This policy memo proposes a set of congressional and federal executive actions to establish comprehensive, standardized metrics for AI energy and environmental impacts across model training, inference, and data center infrastructure. We recommend that Congress direct the Department of Energy (DOE) and the National Institute of Standards and Technology (NIST) to design, collect, monitor, and disseminate uniform and timely data on AI’s energy footprint, while designating the White House Office of Science and Technology Policy (OSTP) to lead a multi-agency council that coordinates implementation. Our plan of action outlines steps for developing metrics (led by DOE, NIST, and the Environmental Protection Agency [EPA]), implementing data reporting (with the Energy Information Administration [EIA], National Telecommunications and Information Administration [NTIA], and industry), and integrating these metrics into energy and grid planning (performed by DOE’s grid offices and the Federal Energy Regulatory Commission [FERC]). By standardizing how we measure AI’s footprint, the U.S. can be better prepared for the growth in power consumption while maintaining its leadership in artificial intelligence.
Challenge and Opportunity
Inconsistent metrics and opaque reporting make future AI power‑demand estimates extremely uncertain, leaving grid planners in the dark and climate targets on the line.
AI’s Opaque Footprint
Generative AI and large-scale cloud computing are driving an unprecedented increase in energy demand. AI systems require tremendous amounts of computing power both during training (the AI development period) and inference (when AI is used in real-world applications). The rapid rise of this new technology is already straining energy and environmental systems. Data centers consumed an estimated 415 terawatt-hours (TWh) of electricity in 2024 (roughly 1.5% of global power demand), and with AI adoption accelerating, the International Energy Agency (IEA) forecasts that data center energy use could more than double to 945 TWh by 2030. This is an added load comparable to powering an entire country the size of Sweden or even Germany. Projections of AI’s energy consumption vary widely, with some estimates suggesting even more rapid growth than the IEA forecasts. Estimates suggest that much of this growth will be concentrated in the United States.
The large divergence in estimates for AI-driven electricity demand stems from the different assumptions and methods used in each study. One study extrapolates from AI query volume (the number of requests users make for AI answers); another estimates energy demand from the projected supply of AI-related hardware. Some estimate the Compound Annual Growth Rate (CAGR) of data center capacity under different growth scenarios. Different authors make various assumptions about chip shipment growth, workload mix (training vs. inference), efficiency gains, and per-query energy. Amidst this fog of measurement confusion, energy suppliers are caught off guard by surges in demand from new compute infrastructure on top of existing demands from sources like electric vehicles and manufacturing. Electricity grid operators in the United States typically plan for gradual increases in power demand that can be met with incremental generation and transmission upgrades. But if the rapid build-out of AI data centers, on top of other growing power demands, pushes global demand up by hundreds of additional terawatt-hours annually, it will shatter the steady-growth assumption embedded in today’s models. Planners need far more granular, forward-looking forecasting methods to avoid driving up costs for ratepayers, last-minute scrambles to find power, and potential electricity reliability crises.
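To see why assumptions drive the divergence, consider two simplified bottom-up estimates. Every number below is an invented placeholder, not a figure from the cited studies; the point is only that plausible inputs yield very different totals.

```python
# Approach 1: demand side, extrapolating from query volume.
queries_per_day = 1e10        # assumed global AI queries per day
wh_per_query = 3.0            # assumed energy per query (watt-hours)
twh_demand = queries_per_day * wh_per_query * 365 / 1e12
print(f"Query-based estimate:    {twh_demand:5.1f} TWh/yr")   # ~11 TWh

# Approach 2: supply side, working from accelerator shipments.
accelerators = 5e6            # assumed installed AI accelerators
watts_each = 700              # assumed average draw per accelerator
utilization = 0.6             # assumed average utilization
overhead = 1.3                # assumed facility overhead (PUE-like factor)
twh_supply = accelerators * watts_each * utilization * overhead * 8760 / 1e12
print(f"Hardware-based estimate: {twh_supply:5.1f} TWh/yr")   # ~24 TWh
```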
This surge in power demand also threatens to undermine climate progress. Many new AI data centers require 100–1,000 megawatts (MW), equivalent to the demands of a medium-sized city, while grid operators face interconnection lead times of over two years for clean energy supplies. In response to these power bottlenecks, some regional utilities, unable to supply enough clean electricity, have even resorted to restarting retired coal plants to meet data center loads, undermining local climate goals and efficient operation. Google’s carbon emissions rose 48% over the past five years and Microsoft’s by 23.4% since 2020, largely due to cloud computing and AI.
In spite of the risks to the climate, carbon emissions data is often obscured: firms claim “carbon neutrality” via purchased clean power credits, while their actual local emissions go unreported. One analysis found Big Tech (Amazon, Meta) data centers may emit up to 662% more CO₂ than they publicly report. For example, Meta’s 2022 data center operations reported only 273 metric tons CO₂ (using market-based accounting with credits), but over 3.8 million metric tons CO₂ when calculated by actual grid mix according to one analysis, a discrepancy of roughly four orders of magnitude. Similarly, AI’s water impacts are largely hidden. Each interactive AI exchange (e.g., a short session with a language model) can indirectly consume half a liter of fresh water through data center cooling, contributing to millions of gallons used by AI servers—but companies rarely disclose water usage per AI workload. This lack of transparency masks the true environmental cost of AI, hinders accountability, and impedes smart policymaking.
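The mechanism behind such gaps is standard Scope 2 accounting: location-based figures use the physical grid mix, while market-based figures let purchased renewable energy certificates (RECs) net reported emissions toward zero. The sketch below uses invented numbers solely to show how the two methods diverge.

```python
annual_kwh = 1_000_000_000       # assumed data center consumption (kWh/yr)
grid_intensity = 0.4             # assumed local grid mix (kg CO2 per kWh)
rec_covered_kwh = 990_000_000    # assumed kWh matched by purchased RECs

location_based = annual_kwh * grid_intensity                     # physical grid
market_based = (annual_kwh - rec_covered_kwh) * grid_intensity   # after RECs

print(f"Location-based: {location_based / 1000:,.0f} t CO2")  # 400,000 t
print(f"Market-based:   {market_based / 1000:,.0f} t CO2")    # 4,000 t
```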
Outdated and Fragmented Metrics
Legacy measures like Power Usage Effectiveness (PUE) miss what is important for AI compute efficiency, such as water consumption, hardware manufacturing, and e-waste.
The metrics currently used to gauge data center efficiency are insufficient for AI-era workloads. Power Usage Effectiveness (PUE), the two-decades-old standard, gives only a coarse snapshot of facility efficiency under ideal conditions. PUE measures total power delivered to a data center versus how much of that power actually makes it to the IT equipment inside. The more power used for other purposes (e.g. cooling), the worse the PUE ratio will be. However, PUE does not measure how efficiently the IT equipment actually uses the power delivered to it. Think of a car that reports how much fuel reaches the engine but not the miles per gallon that engine achieves. A good PUE is the equivalent of saying that fuel isn’t leaking out on its way to the engine; it might tell you that a data center isn’t losing too much energy to cooling, but it won’t flag inefficient IT equipment. An AI training cluster with a “good” PUE (around 1.1) could still be wasteful if the hardware or software is poorly optimized.
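Concretely, PUE is total facility energy divided by IT equipment energy. The sketch below shows two hypothetical clusters with the same “good” PUE but very different useful work per kilowatt-hour, which is exactly the blind spot described above; all numbers are invented.

```python
def pue(total_kwh, it_kwh):
    """Power Usage Effectiveness: total facility energy over IT energy."""
    return total_kwh / it_kwh

facilities = {
    # name: (total kWh, IT kWh, AI work completed, e.g. training tokens)
    "optimized_cluster": (1_100_000, 1_000_000, 5.0e12),
    "wasteful_cluster":  (1_100_000, 1_000_000, 1.0e12),  # idle GPUs, bad code
}

for name, (total, it, work) in facilities.items():
    print(f"{name}: PUE={pue(total, it):.2f}, work per IT kWh={work / it:,.0f}")
# Both report PUE 1.10, yet one does five times the work per kWh.
```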
In the absence of updated standards, companies “report whatever they choose, however they choose” regarding AI’s environmental impact. Few report water usage or lifecycle emissions. Only 28% of operators track hardware beyond its use, and just 25% measure e-waste, resulting in tons of servers and AI chips quietly ending up in landfills. This data gap leads to misaligned incentives—for instance, firms might build ever-larger models and data centers, chasing AI capabilities, without optimizing for energy or material efficiency because there is no requirement or benchmark to do so.
Opportunities for Action
Standardizing metrics for AI’s energy and environmental footprint presents a win-win opportunity. By measuring and disclosing AI’s true impacts, we can manage them. With better data, policymakers can incentivize efficiency innovations (from chip design to cooling to software optimization) and target grid investments where AI load is rising. Industry will benefit too: transparency can highlight inefficiencies (e.g. low server utilization or high water-cooled heat that could be recycled) and spur cost-saving improvements. Importantly, several efforts are already pointing the way. In early 2024, bicameral lawmakers introduced the Artificial Intelligence Environmental Impacts Act, aiming to have the EPA study AI’s environmental footprint and develop measurement standards and a voluntary reporting system via NIST. Internationally, the European Union’s upcoming AI Act will require large AI systems to report energy use, resource consumption, and other life cycle impacts, and the ISO is preparing “sustainable AI” standards for energy, water, and materials accounting. The U.S. can build on this momentum. A recent U.S. Executive Order (Jan 2025) already directed DOE to draft reporting requirements for AI data centers covering their entire lifecycle—from material extraction and component manufacturing to operation and retirement—including metrics for embodied carbon (greenhouse-gas emissions that are “baked into” the physical hardware and facilities before a single watt is consumed to run a model), water usage, and waste heat. It also launched a DOE–EPA “Grand Challenge” to push the PUE ratio below 1.1 and minimize water usage in AI facilities. These signals show that there is willingness to address the problem. Now is the time to implement a comprehensive framework that standardizes how we measure AI’s environmental impact. If we seize this opportunity, we can ensure innovation in AI is driven by clean energy, a smarter grid, and less environmental and economic burden on communities.
Plan of Action
To address this challenge, Congress should authorize DOE and NIST to lead an interagency working group and a consortium of public, private and academic communities to enact a phased plan to develop, implement, and operationalize standardized metrics, in close partnership with industry.
Recommendation 1. Identify and Assign Agency Mandates
Creating and implementing this measurement framework requires concerted action by multiple federal agencies, each leveraging its mandate. The Department of Energy (DOE) should serve as the co-lead federal agency driving this initiative. Within DOE, the Office of Critical and Emerging Technologies (CET) can coordinate AI-related efforts across DOE programs, given its focus on AI and advanced tech integration. The National Institute of Standards and Technology (NIST) will also act as a co-lead for this initiative, leading the metrics development and standardization effort as described and convening experts and industry. The White House Office of Science and Technology Policy (OSTP) will act as the coordinating body for this multi-agency effort. OSTP, alongside the Council on Environmental Quality (CEQ), can ensure alignment with broader energy, environment, and technology policy. The Environmental Protection Agency (EPA) should take charge of environmental data collection and oversight. The Federal Energy Regulatory Commission (FERC) should play a supporting role by addressing grid and electricity market barriers. FERC should streamline interconnection processes for new data center loads, perhaps creating fast-track procedures for projects that commit to high efficiency and demand flexibility.
Congressional leadership and oversight will be key. The Senate Committee on Energy and Natural Resources and House Energy & Commerce Committee (which oversee energy infrastructure and data center energy issues) should champion legislation and hold hearings on AI’s energy demands. The House Science, Space, and Technology Committee and Senate Commerce, Science, & Transportation Committee (which oversee NIST and OSTP) should support R&D funding and standards efforts. Environmental committees (like Senate Environment and Public Works, House Natural Resources) should address water use and emissions. Ongoing committee oversight can ensure agencies stay on schedule and that recommendations turn into action (for example, requiring an EPA/DOE/NIST joint report to Congress within a set timeframe).
Congress should mandate a formal interagency task force or working group, co-led by the Department of Energy (DOE) and the National Institute of Standards and Technology (NIST), with the White House Office of Science and Technology Policy (OSTP) serving as the coordinating body and involving all relevant federal agencies. This body will meet regularly to track progress, resolve overlaps or gaps, and issue public updates. By clearly delineating responsibilities, the federal government can address the measurement problem holistically.
Recommendation 2. Develop a Comprehensive AI Energy Lifecycle Measurement Framework
A complete view of AI’s environmental footprint requires metrics that span the full lifecycle, including every layer from chip to datacenter, workload drivers, and knock‑on effects like water use and electricity prices.
Create new standardized metrics that capture AI’s energy and environmental footprint across its entire lifecycle—training, inference, data center operations (cooling/power), and hardware manufacturing/disposal. This framework should be developed through a multi-stakeholder process led by NIST in partnership with DOE and EPA, and in consultation with industry, academia, and state and local governments.
Key categories should include:
- Data Center Efficiency Metrics: how effectively do data centers use power?
- AI Hardware & Compute Metrics: e.g. Performance per Watt (PPW)—the throughput of AI computations per watt of power.
- Cooling and Water Metrics: How much energy and water are being used to cool these systems?
- Environmental Impact Metrics: What is the carbon intensity per AI task?
- Composite or Lifecycle Metrics: Beyond a single point in time, what are the lifetime characteristics of impact for these systems?
Designing standardized metrics
NIST, with its measurement science expertise, should coordinate the development of these metrics in an open process, building on efforts like NIST’s AI Standards Working Group—a standing body chartered under the Interagency Committee on Standards Policy which brings together technical stakeholders to map the current AI-standards landscape, spot gaps, and coordinate U.S. positions and research priorities. The goal is to publish a standardized metrics framework and guidelines that industry can begin adopting voluntarily within 12 months. Where possible, leverage existing standards (for example, those from the Green Grid consortium on PUE and Water Usage Effectiveness (WUE), or IEEE/ISO standards for energy management) and tailor them to AI’s unique demands. Crucially, these metrics must be uniformly defined to enable apples-to-apples comparisons and periodically updated as technology evolves.
Review, governance, and improvement of metrics
We recommend establishing a Metrics Review Committee (led by NIST with DOE/EPA and external experts) to refine the metrics as needed, host stakeholder workshops, and issue public updates. This continuous improvement process will keep the framework current with new AI model types, cooling tech, and hardware advances, ensuring relevance into the future. For example, when we move from the current model of chatbots responding to queries to agentic AI systems that plan, act, remember, and iterate autonomously, traditional “energy per query” metrics will no longer capture the full picture.
Recommendation 3. Operationalize Data Collection, Reporting, and Analysis, and Integrate the Results into Policy
Start with a six-month voluntary reporting program, and gradually move toward a mandatory reporting mechanism that feeds straight into EIA outlooks and FERC grid planning.
The task force should solicit inputs via a Request for Information (RFI) — similar to DOE’s recent RFI on AI infrastructure development — asking data center operators, AI chip manufacturers, cloud providers, utilities, and environmental groups to weigh in on feasible reporting requirements and data-sharing methods. Within 12 months of starting, this task force should complete (a) a draft AI energy lifecycle measurement framework (with standardized definitions for energy, water, carbon, and e-waste metrics across training and data center operations), and (b) an initial reporting template for technology companies, data centers, and utilities to pilot.
With standardized metrics in hand, we must shift the focus to implementation and data collection at scale. In the beginning, a voluntary AI energy reporting program can be launched by DOE and EPA (with NIST overseeing the standards). This program would provide guidance to AI developers (e.g. major model-training companies), cloud service providers, and data center operators to report their metrics on an annual or quarterly basis.
After a trial run of the voluntary program, Congress should enact legislation to create a mandatory reporting regime that borrows the best features of existing federal disclosure programs. One useful template is EPA’s Greenhouse Gas Reporting Program, which obliges any facility that emits more than 25,000 tons of CO₂ equivalent per year to file standardized, verifiable electronic reports. The same threshold logic could be adapted for data centers (e.g., those with more than 10 MW of IT load) and for AI developers that train models above a specified compute budget (a sketch of this trigger appears below).
A second model is DOE/EIA’s Form EIA-923 “Power Plant Operations Report,” whose structured monthly data flow straight into public statistics and planning models. An analogous “Form EIA-AI-01” could feed the Annual Energy Outlook and FERC reliability assessments without creating a new bureaucracy. EIA could also consider adding specific questions or categories to the Commercial Buildings Energy Consumption Survey and Form EIA-861 to identify energy use by data centers and large computing loads. This may involve coordinating with the Census Bureau to leverage industrial classification data (e.g., NAICS codes for data hosting facilities) so that baseline energy and water consumption of the “AI sector” is measured in national statistics.
NTIA, which often convenes multi-stakeholder processes on technology policy, can host industry roundtables to refine reporting processes and address any concerns (e.g., data confidentiality, trade secrets). NTIA can also help ensure that reporting requirements are not overly burdensome to smaller AI startups by working out streamlined methods (perhaps aggregated reporting via cloud providers, for instance). DOE’s Grid Deployment Office (GDO) and Office of Electricity (OE), with better data, should start integrating AI load growth into grid planning models and funding decisions. For example, GDO could prioritize transmission projects that will deliver clean power to regions with clusters of AI data centers, based on EIA data showing rapid load increases. FERC, for its part, can use the reported data to update its reliability and resource adequacy guidelines and possibly issue guidance for regional grid operators (RTOs/ISOs) to explicitly account for projected large computing loads in their plans.
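Below is a minimal sketch of the filing trigger described above, in the spirit of a hypothetical “Form EIA-AI-01” screen. The 25,000-ton CO₂e and 10 MW figures come from the text; the record fields and everything else are illustrative, not proposed regulatory language.

```python
from dataclasses import dataclass

@dataclass
class FacilityReport:
    facility_id: str
    it_load_mw: float          # installed IT load (MW)
    annual_co2e_tons: float    # location-based emissions (t CO2e/yr)

def must_file(r, co2e_threshold=25_000, load_threshold_mw=10):
    """Would this facility cross either reporting trigger?"""
    return r.annual_co2e_tons > co2e_threshold or r.it_load_mw > load_threshold_mw

print(must_file(FacilityReport("dc-east-1", it_load_mw=45, annual_co2e_tons=180_000)))  # True
print(must_file(FacilityReport("edge-pop-7", it_load_mw=0.8, annual_co2e_tons=900)))    # False
```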
This transparency will let policymakers, researchers, and consumers track improvements (e.g., is the energy per AI training run decreasing over time?) and identify leaders and laggards. It will also inform mid-course adjustments: if certain metrics prove too hard to collect or not meaningful, NIST can update the standards. The Census Bureau can contribute by testing the inclusion of questions on technology infrastructure in its 2027 Economic Census and annual surveys, ensuring that economic data on the tech sector includes environmental parameters (for example, collecting data center utility expenditures, which correlate with energy use). Overall, this would establish an operational reporting system and start feeding the data into both policy and market decisions.
Through these recommendations, responsible offices have clear roles: DOE spearheads efficiency measures in data center initiatives; the Office of Electricity (OE) and Grid Deployment Office (GDO) use the data to guide grid improvements; NIST creates and maintains the measurement standards; EPA oversees environmental data and impact mitigation; EIA institutionalizes energy data collection and dissemination; FERC adapts regulatory frameworks for reliability and resource adequacy; OSTP coordinates the interagency strategy and keeps the effort a priority; NTIA works with industry to smooth data exchange and involve them; and the Census Bureau integrates these metrics into broader economic data. See the table below. Meanwhile, non-governmental actors like utilities, AI companies, and data center operators must be not only data providers but partners. Utilities could use this data to plan investments and can share insights on demand response or energy sourcing; AI developers and data center firms will implement new metering and reporting practices internally, enabling them to compete on efficiency (similar to car companies competing on miles-per-gallon ratings). Together, these actions create a comprehensive approach: measuring AI’s footprint, managing its growth, and mitigating its environmental impacts through informed policy.
Conclusion
AI’s extraordinary capabilities should not come at the expense of our energy security or environmental sustainability. This memo outlines how we can effectively operationalize measuring AI’s environmental footprint by establishing standardized metrics and leveraging the strengths of multiple agencies to implement them. By doing so, we can address a critical governance gap: what isn’t measured cannot be effectively managed. Standard metrics and transparent reporting will enable AI’s growth while ensuring that data center expansion is met with commensurate increases in clean energy, grid upgrades, and efficiency gains.
The benefits of these actions are far-reaching. Policymakers will gain tools to balance AI innovation with energy and environmental goals: for example, the ability to require improvements if an AI service is energy-inefficient, or to fast-track permits for a new data center that meets top sustainability standards. Communities will be better protected: with data in hand, we can avoid scenarios where a cluster of AI facilities suddenly strains a region’s power or water resources without local officials knowing in advance. Instead, requirements for reporting and coordination can channel resources (like new transmission lines or water recycling systems) to those communities ahead of time. The AI industry itself will benefit by building trust and reducing the risk of backlash or heavy-handed regulation; a clear, federal metrics framework provides predictability and a level playing field (everyone measures the same way), and it showcases responsible stewardship of technology. Moreover, emphasizing energy efficiency and resource reuse can reduce operating costs for AI companies in the long run, a crucial advantage as energy prices and supply chain concerns grow.
This memo is part of our AI & Energy Policy Sprint, a policy project to shape U.S. policy at the critical intersection of AI and energy. Read more about the Policy Sprint and check out the other memos here.
Frequently Asked Questions
Why are AI-specific metrics needed when standards like PUE already exist?
While there are existing metrics like PUE for data centers, they don’t capture the full picture of AI’s impacts. Traditional metrics focus mainly on facility efficiency (power and cooling) and not on the computational intensity of AI workloads or their lifecycle impacts. AI operations involve unique factors—for example, training a large AI model can consume significant energy in a short time, and using that AI model continuously can draw power 24/7 across distributed locations. Current standards are outdated and inconsistent: one data center might report a low PUE but could be using water recklessly or running hardware inefficiently. AI-specific metrics are needed to measure things like energy per training run, water per cooling unit, or carbon per compute task, which no standard reporting currently requires. In short, general data center standards weren’t designed for the scale and intensity of modern AI. By developing AI-specific metrics, we ensure that the unique resource demands of AI are monitored and optimized, rather than lost in aggregate averages. This helps pinpoint where AI can be made more efficient (e.g., via better algorithms or chips)—an opportunity not visible under generic metrics.
Why does this effort require multiple agencies rather than a single lead?
AI’s environmental footprint is a cross-cutting issue, touching on energy infrastructure, environmental impact, technological standards, and economic data. No single agency has the full expertise or jurisdiction to cover all aspects. Each agency will have clearly defined roles (as outlined in the Plan of Action). For instance, NIST develops the methodology, DOE and EPA collect and use the data, EIA disseminates it, and FERC and Congress use it to adjust policies. This collaborative approach prevents blind spots. A single-agency approach would likely miss critical elements (for instance, a purely DOE-led effort might not address e-waste or standardized methods, which NIST and EPA can). The good news is that frameworks for interagency cooperation already exist, and this initiative aligns with broader administration priorities (clean energy, a reliable grid, responsible AI). Thus, while it involves multiple agencies, OSTP and the White House will ensure everyone stays synchronized. The result will be a comprehensive policy that each agency helps implement according to its strengths, rather than a piecemeal solution. See the roles below:
Roles and Responsibilities to Measure AI’s Environmental Impact
- Department of Energy (DOE): DOE should serve as the co-lead federal agency driving this initiative. Within DOE, the Office of Critical and Emerging Technologies (CET) can coordinate AI-related efforts across DOE programs, given its focus on AI and advanced tech integration. DOE’s Office of Energy Efficiency and Renewable Energy (EERE) can lead on promoting energy-efficient data center technologies and practices (e.g. through R&D programs and partnerships), while the Office of Electricity (OE) and Grid Deployment Office address grid integration challenges (ensuring AI data centers have access to reliable clean power). DOE should also collaborate with utilities and FERC to plan for AI-driven electricity demand growth and to encourage demand-response or off-peak operation strategies for energy-hungry AI clusters.
- National Institute of Standards and Technology (NIST): NIST will also act as a co-lead for this initiative leading the metrics development and standardization effort as described, convening experts and industry. NIST should revive or expand its AI Standards Coordination Working Group to focus on sustainability metrics, and ultimately publish technical standards or reference materials for measuring AI energy use, water use, and emissions. NIST is also suited to host stakeholder consortium on AI environmental impacts, working in tandem with EPA and DOE.
- White House, including the Office of Science and Technology Policy (OSTP): OSTP will act as the coordinating body for this multi-agency effort. OSTP, alongside the Council on Environmental Quality (CEQ), can ensure alignment with broader climate and tech policy (such as the U.S. Climate Strategy and AI initiatives). The Administration can also use the Federal Chief Sustainability Officer and OMB guidance to integrate AI energy metrics into federal sustainability requirements (for instance, updating OMB’s memos on data center optimization to include AI-specific measures).
- Environmental Protection Agency (EPA): EPA should take charge of environmental data collection and oversight. In the near term, EPA (with DOE) would conduct the comprehensive study of AI’s environmental impacts, examining AI systems’ lifecycle emissions, water and e-waste. EPA’s expertise in greenhouse gas (GHG) accounting will ensure metrics like carbon intensity are rigorously quantified (e.g. using location-based grid emissions factors rather than unreliable REC-based accounting).
- Federal Energy Regulatory Commission (FERC): FERC plays a supporting role by addressing grid and electricity market barriers. FERC should streamline interconnection processes for new data center loads, perhaps creating fast-track procedures for projects that commit to high efficiency and demand flexibility. FERC can also ensure that regional grid reliability assessments start accounting for projected AI/data center load growth using the data collected under this initiative.
- Congressional Committees: Congressional leadership and oversight will be key. The Senate Committee on Energy and Natural Resources and House Energy & Commerce Committee (which oversee energy infrastructure and data center energy issues) should champion legislation and hold hearings on AI’s energy demands. The House Science, Space, and Technology Committee and Senate Commerce, Science, & Transportation Committee (which oversee NIST and OSTP) should support R&D funding and standards efforts. Environmental committees (like Senate Environment and Public Works, House Natural Resources) should address water use and emissions. Ongoing committee oversight can ensure agencies stay on schedule and that recommendations turn into action (for example, requiring the EPA/DOE/NIST joint report to Congress in four years as the Act envisions, and then moving on any further legislative needs).
The plan requires high-level, standardized data that balances transparency with practicality. Companies running AI operations (like cloud providers or big AI model developers) would report metrics such as: total electricity consumed for AI computations (annually), average efficiency metrics (e.g. PUE, Carbon Usage Effectiveness (CUE), and WUE for their facilities), water usage for cooling, and e-waste generated (amount of hardware decommissioned and how it was handled). These data points are typically already collected internally for cost and sustainability tracking; the difference is that they would be reported in a consistent format and possibly to a central repository. For utilities, if involved, they might report aggregated data center load in their service territory or significant new interconnections for AI projects (much of this is already in utility planning documents). See below for examples.
Metrics to Illustrate the Types of Shared Information
- Data Center Efficiency Metrics: Power Usage Effectiveness (PUE) (refined for AI workloads), Data Center Infrastructure Efficiency (DCIE), the ratio of IT power to total facility power (the inverse of PUE), Energy Reuse Factor (ERF) to quantify how much waste heat is reused on-site, and Carbon Usage Effectiveness (CUE) to link energy use with carbon emissions (kg CO₂ per kWh of IT energy). These give a holistic view of facility efficiency and carbon intensity, beyond just power usage (see the sketch following this list).
- AI Hardware & Compute Metrics: Performance per Watt (PPW)—the throughput of AI computations (like FLOPS or inferences) per watt of power, which encourages energy-efficient model training and inference. Compute Utilization—ensuring expensive AI accelerators (GPUs/TPUs) are well-utilized rather than idling (tracking average utilization rates). Training energy per model—total kWh or emissions per training run (possibly normalized by model size or training-hours). Inference efficiency—energy per 1000 queries or per inference for deployed models. Idle power draw—measure and minimize the energy hardware draws when not actively in use.
- Cooling and Water Metrics: Cooling Energy Efficiency Ratio (EER)—the output cooling power per watt of energy input, to gauge cooling system efficiency. Water Usage Effectiveness (WUE)—liters of water used per kWh of IT compute, or simply total water used for cooling per year. These help quantify and benchmark the significant water and electricity overhead for thermal management in AI data centers.
- Environmental Impact Metrics: Carbon Intensity per AI Task—CO₂ emitted per training or per 1000 inferences, which could be aggregated to an organizational carbon footprint for AI operations. Greenhouse Gas emissions per kWh—linking energy use to actual emissions based on grid mix or backup generation. Also, e-waste metrics—such as total hardware weight decommissioned annually, or a recycling ratio. For instance, tracking the tons of servers/chips retired and the fraction recycled versus landfilled can illuminate the life cycle impact.
- Composite or Lifecycle Metrics: Develop ways to combine these factors to rate overall sustainability of AI systems. For example, an “AI Sustainability Score” could incorporate energy efficiency, renewables use, cooling efficiency, and end-of-life recycling. Another idea is an “AI Energy Star” rating for AI hardware or cloud services that meet certain efficiency and transparency criteria, modeled after Energy Star appliance ratings.
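To make these reporting categories concrete, the minimal Python sketch below computes the core facility metrics from annual sub-metered totals. All input figures are hypothetical, and the functions simply restate the ratios described above; actual reporting thresholds and formats would be set by the NIST framework, not by this sketch.

```python
# Facility-level metrics from the list above; all inputs are hypothetical
# annual figures that would come from sub-metered facility data.

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy (>= 1.0)."""
    return total_facility_kwh / it_kwh

def dcie(total_facility_kwh: float, it_kwh: float) -> float:
    """Data Center Infrastructure Efficiency: IT energy / total energy (inverse of PUE)."""
    return it_kwh / total_facility_kwh

def cue(total_co2_kg: float, it_kwh: float) -> float:
    """Carbon Usage Effectiveness: kg CO2 emitted per kWh of IT energy."""
    return total_co2_kg / it_kwh

def wue(water_liters: float, it_kwh: float) -> float:
    """Water Usage Effectiveness: liters of cooling water per kWh of IT energy."""
    return water_liters / it_kwh

def erf(reused_kwh: float, total_facility_kwh: float) -> float:
    """Energy Reuse Factor: share of facility energy reused (e.g., waste heat)."""
    return reused_kwh / total_facility_kwh

# Hypothetical annual figures for one facility:
total_kwh, it_kwh = 120_000_000, 100_000_000
print(f"PUE  = {pue(total_kwh, it_kwh):.2f}")              # 1.20
print(f"DCiE = {dcie(total_kwh, it_kwh):.2%}")             # 83.33%
print(f"CUE  = {cue(42_000_000, it_kwh):.2f} kg CO2/kWh")  # 0.42
print(f"WUE  = {wue(180_000_000, it_kwh):.2f} L/kWh")      # 1.80
print(f"ERF  = {erf(6_000_000, total_kwh):.2%}")           # 5.00%
```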
The intention is not to force disaggregation down to proprietary details (e.g., exactly how a specific algorithm uses energy) but rather to get macro-level indicators. Regarding trade secrets or sensitive information, the data collected (energy, water, emissions) is not about revealing competitive algorithms or data; it is about resource use. These figures are analogous to what many firms already publish in sustainability reports (power usage, carbon footprint), just reported more uniformly. There will be provisions to protect any sensitive facility-level data (e.g., EIA could aggregate or anonymize certain figures in public releases). The goal is transparency about environmental impact, not exposure of intellectual property.
Once collected, the data will become a powerful tool for evidence-based policymaking and oversight. At the strategic level, DOE and the White House can track whether the AI sector is becoming more efficient or not—for instance, seeing trends in energy-per-AI-training decreasing (good) or total water use skyrocketing (a flag for action).
- Energy planning: EIA will incorporate the numbers into its models, which guide national energy policy and investment. If data shows that AI is driving, say, an extra 5% electricity demand growth in certain regions, DOE’s Grid Deployment Office and FERC can respond by facilitating grid expansions or reliability measures in those areas.
- Climate policy: EPA can use reported emissions data to update greenhouse gas inventories and identify if AI/data centers are becoming a significant source—if so, that could shape future climate regulations or programs (ensuring this sector contributes to emissions reduction goals).
- Water resource management: If we see large water usage by AI in drought-prone areas, federal and state agencies can work on water recycling or alternative cooling initiatives.
- Research and incentives: DOE’s R&D programs (through ARPA-E or National Labs) can target the pain points revealed—e.g., if e-waste volumes are high, fund research into longer-lasting hardware or recycling tech; if certain metrics like Energy Reuse Factor are low, push demonstration projects for waste heat reuse.
This could inform everything from ESG investment decisions to local permitting. For example, a company planning a new data center might be asked by local authorities, “What’s your expected PUE and water usage? The national average for AI data centers is X—will you do better?” In essence, the data ensures the government and public can hold the AI industry accountable for progress (or regress) on sustainability. By integrating these data into models and policies, the government can anticipate and avert problems (like grid strain or high emissions) before they grow, and steer the sector toward solutions.
AI services and data centers are worldwide, so consistency in how we measure impacts is important. The U.S. effort will be informed by and contribute to international standards. Notably, the ISO (International Organization for Standardization) is already developing criteria for sustainable AI, including energy, raw materials, and water metrics across the AI lifecycle. NIST, which often represents the U.S. in global standards bodies, is involved and will ensure that our metrics framework aligns with ISO’s emerging standards. Similarly, the EU’s AI Act has requirements for reporting AI energy and resource use. By moving early on our own metrics, the U.S. can actually help shape what those international norms look like, rather than react to them. This initiative will encourage U.S. agencies to engage in forums like the Global Partnership on AI (GPAI) or bilateral tech dialogues to promote common sustainability reporting frameworks. In the end, aligning metrics internationally will create a more level playing field—ensuring that AI companies can’t simply shift operations to avoid transparency. If the U.S., EU, and others all require similar disclosures, it reinforces responsible practices everywhere.
Shining a light on energy and resource use can drive new innovation in efficiency. Initially, there may be modest costs—for example, installing better sub-meters in data centers or dedicating staff time to reporting. However, these costs are relatively small in context. Many leading companies already track these metrics internally for cost management and corporate sustainability goals. We are recommending formalizing and sharing that information. Over time, the data collected can reduce costs: companies will identify wasteful practices (maybe servers idling, or inefficient cooling during certain hours) and correct them, saving on electricity and water bills. There is also an economic opportunity in innovation: as efficiency becomes a competitive metric, we expect increased R&D into low-power AI algorithms, advanced cooling, and longer-life hardware. Those innovations can improve performance per dollar as well. Moreover, policy support can offset any burdens—for instance, the government can provide technical assistance or grants to smaller firms to help them improve energy monitoring. We should also note that unchecked resource usage carries its own risks to innovation: if AI’s growth starts causing blackouts or public backlash due to environmental damage, that would seriously hinder AI progress.
Speed Grid Connection Using ‘Smart AI Fast Lanes’ and Competitive Prizes
Innovation in artificial intelligence (AI) and computing capacity is essential for U.S. competitiveness and national security. However, AI data center electricity use is growing rapidly. Data centers already consume more than 4% of U.S. electricity annually and could rise to 6% to 12% of U.S. electricity by 2028. At the same time, electricity rates are rising for consumers across the country, with transmission and distribution infrastructure costs a major driver of these increases. For the first time in fifteen years, the U.S. is experiencing a meaningful increase in electricity demand. Data centers already consume more than 25% of electricity in Virginia, which leads the world in data center installations. Data center electricity load growth results in real economic and environmental impacts for local communities. It also represents a national policy test of how the U.S. responds to rising power demand from the electrification of homes, transportation, and manufacturing, important technology transitions for cutting carbon emissions and air pollution.
Federal and state governments need to ensure that the development of new AI and data center infrastructure does not increase costs for consumers, harm the environment, or exacerbate existing inequalities. “Smart AI Fast Lanes” is a policy and infrastructure investment framework that ensures the U.S. leads the world in AI while building an electricity system that is clean, affordable, reliable, and equitable. By leveraging innovation prizes that pay for performance, coupled with public-private partnerships, data center providers can work with the Department of Energy, the Foundation for Energy Security and Innovation (FESI), the Department of Commerce, National Labs, state energy offices, utilities, and the Department of Defense to drive innovation that increases energy security while lowering costs.
Challenge and Opportunity
Targeted policies can ensure that the development of new AI and data center infrastructure does not increase costs for consumers, harm the environment, or exacerbate existing energy burdens. Allowing new clean power sources co-located or contracted with AI computing facilities to connect to the grid quickly, and then managing any infrastructure costs associated with that new interconnection, would accelerate the addition of new clean generation for AI while lowering electricity costs for homes and businesses.
One of the biggest bottlenecks in many regions of the U.S. in adding much-needed capacity to the electricity grid is the so-called “interconnection queue”. Regions impose different requirements that power plants must complete (often, a number of studies on how a project affects grid infrastructure) before they are allowed to connect. Solar, wind, and battery projects represented 95% of the capacity waiting in interconnection queues in 2023. The operator of Texas’ power grid, the Electric Reliability Council of Texas (ERCOT), uses a “connect and manage” interconnection process that results in faster interconnections of new energy supplies than the rest of the country. Instead of requiring each power plant to complete lengthy studies of needed system-wide infrastructure investments before connecting to the grid, the “connect and manage” approach in Texas gets power plants online quicker than a “studies first” approach. Texas manages any risks that arise using the power markets and system-wide planning efforts. The results are clear: the median time from an interconnection request to commercial operations in Texas was four years, compared to five years in New York and more than six and a half years in California.
“Smart AI Fast Lanes” expands the spirit of the Texas “connect and manage” approach nationwide for data centers and clean energy, and adds to it investment and innovation prizes to speed up the process, ensure grid reliability, and lower costs.
Data center providers would work with the Department of Energy, the Foundation for Energy Security and Innovation (FESI), the Department of Commerce, National Laboratories, state energy offices, utilities, and the Department of Defense to speed up interconnection queues, spur innovation in efficiency, and re-invest in infrastructure, to increase energy security and lower costs.
Why FESI Should Lead ‘Smart AI Fast Lanes’
With FESI managing this effort, the process can move faster than the government acting alone. FESI is an independent, non-profit, agency-related foundation that was created by Congress in the CHIPS and Science Act of 2022 to help the Department of Energy achieve its mission and accelerate “the development and commercialization of critical energy technologies, foster public-private partnerships, and provide additional resources to partners and communities across the country supporting solutions-driven research and innovation that strengthens America’s energy and national security goals”. Congress has created many other agency-related foundations, such as the Foundation for NIH, the National Fish and Wildlife Foundation, and the National Park Foundation. These agency-related foundations have a demonstrated record of raising external funding to leverage federal resources and enabling efficient public-private partnerships. As a foundation supporting the mission of the Department of Energy, FESI has a unique opportunity to quickly respond to emergent priorities and create partnerships to help solve energy challenges.
As an independent organization, FESI can leverage the capabilities of the private sector, academia, philanthropies, and other organizations to enable collaboration with federal and state governments. FESI can also serve as an access point for additional external investment; shared-risk structures and clear rules of engagement make emerging energy technologies more attractive to institutional capital. For example, the National Fish and Wildlife Foundation awards grants that are matched with non-federal private, philanthropic, or local funding sources that multiply the impact of any federal investments. In addition, the National Fish and Wildlife Foundation has partnered with the Department of Defense and external funding sources to enhance coastal resilience near military installations. Both AI compute capabilities and energy resilience are of strategic importance to the Department of Defense, Department of Energy, and other agencies, and leveraging public-private partnerships is a key pathway to enhance capabilities and security. FESI leading a Smart AI Fast Lanes initiative could be a force multiplier to enable rapid deployment of clean AI compute capabilities that are good for communities, companies, and national security.
Use Prizes to Lessen Cost and Maximize Return
The Department of Energy has long used prize competitions to spur innovation and accelerate access to funding and resources. Prize competitions with focused objectives but unstructured pathways to success enable the private sector to compete and advance innovation without requiring substantial federal capacity and involvement. Federal prize programs pay for performance and results, while also providing a mechanism to crowd in additional philanthropic and private sector investment. In the Smart AI Fast Lane framework, FESI could use prizes to support energy innovation from AI data centers while working with the Department of Energy’s Office of Cybersecurity, Energy Security, and Emergency Response (CESER) to enable a repeatable and scalable public-private partnership program. These prizes would be structured so that there is a low administrative and operational effort required for FESI itself, with other groups such as American-Made, National Laboratories, or organizations like FAS helping to provide technical expertise to review and administer prize applications. This can ensure quality while enabling scalable growth.
Plan of Action
Here’s how “Smart AI Fast Lanes” would work. For any proposed data center investment of more than 250 MW, companies could apply to work with FESI. A successful application would leverage public, private, and philanthropic funds and technical assistance. Projects would be required to increase clean energy supplies, achieve world-leading data center energy efficiency, invest in transmission and distribution infrastructure, and/or deploy virtual power plants for grid flexibility.
Recommendation 1. Use a “Smart AI Fast Lane” Connection Fee to Quickly Connect to the Grid, Further Incentivized by a “Bring Your Own Power” Prize
New large AI data center loads choosing the “Smart AI Fast Lane” would pay a fee to connect to the grid without first completing lengthy pre-connection cost studies. Those payments would go into a fund, managed and overseen by FESI, that would be used to cover any infrastructure costs incurred by regional grids for the first three years after project completion. The fee could be a flat fee based on data center size, or structured as an auction, enabling the data centers bidding the highest in a region to be at the front of the line. This enables the market to incentivize the highest priority additions. Alternatively, large load projects could choose to do the studies first and remain in the regular – and likely slower – interconnection queue to avoid the fee.
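As an illustration only (the memo does not prescribe a mechanism design), the sketch below shows how an auction-structured fast lane might order applicants: within a region, the highest bids move to the front of the queue, and all fees accrue to the FESI-managed infrastructure fund. Project names, sizes, and bids are hypothetical.

```python
# Illustrative ordering of an auction-based "Smart AI Fast Lane" queue.
from dataclasses import dataclass

@dataclass
class FastLaneApplication:
    project: str
    region: str
    size_mw: float   # proposed load; must exceed 250 MW to apply
    bid_usd: float   # connection-fee bid

applications = [  # hypothetical applicants in one region
    FastLaneApplication("DC-Alpha", "PJM", 300, 40_000_000),
    FastLaneApplication("DC-Beta",  "PJM", 500, 65_000_000),
    FastLaneApplication("DC-Gamma", "PJM", 260, 25_000_000),
]

eligible = [a for a in applications if a.size_mw > 250]
queue = sorted(eligible, key=lambda a: a.bid_usd, reverse=True)
fund_usd = sum(a.bid_usd for a in queue)  # covers post-connection grid costs

for i, a in enumerate(queue, start=1):
    print(f"{i}. {a.project} ({a.size_mw:.0f} MW, bid ${a.bid_usd:,.0f})")
print(f"FESI fund balance: ${fund_usd:,.0f}")
```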
In addition, FESI could facilitate a “Bring Your Own Power” prize award, a combination of public, private, and philanthropic funds that data center developers can match, to contract for new, additional zero-emission electricity generated locally that covers twice the data center’s annual consumption. For data centers committing to this “Smart AI Fast Lane” process, both the data center and the clean energy supply would receive accelerated priority in the interconnection queue and technical assistance from National Laboratories. This leverages economies of scale for projects, lowers the cost of locally-generated clean electricity, and gets clean energy connected to the grid quicker. Prize resources would support a “connect and manage” interconnection approach by covering 75% of the costs of any required infrastructure for local clean power projects resulting from the project. FESI prize resources could further supplement these payments to upgrade electrical infrastructure in areas of national need for new electricity supplies to maintain electricity reliability. These include areas assessed by the North American Electric Reliability Corporation to have a high risk of an electricity shortfall in the coming years, such as the Upper Midwest or Gulf Coast, or areas with an elevated risk such as California, the Great Plains, Texas, the Mid-Atlantic, or the Northeast.
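The arithmetic of the commitment is straightforward; the worked example below applies the two ratios named above (twice annual consumption, 75% infrastructure cost share) to a single hypothetical project.

```python
# Worked example of the "Bring Your Own Power" commitment; all numbers
# are hypothetical.

annual_dc_consumption_gwh = 800  # hypothetical data center load
required_clean_energy_gwh = 2 * annual_dc_consumption_gwh    # 1,600 GWh/yr

infrastructure_cost_usd = 120_000_000  # hypothetical local grid upgrades
prize_share_usd = 0.75 * infrastructure_cost_usd             # $90M from prize
developer_share_usd = infrastructure_cost_usd - prize_share_usd  # $30M

print(f"Clean energy to contract: {required_clean_energy_gwh:,} GWh/yr")
print(f"Prize covers ${prize_share_usd:,.0f}; "
      f"developer covers ${developer_share_usd:,.0f}")
```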
Recommendation 2. Create an Efficiency Prize To Establish World-Leading Energy and Water Efficiency at AI Data Centers
Data centers have different design configurations that affect how much energy and water are needed to operate. Data centers use electricity for computing, but also for the cooling systems needed for computing equipment, and there are innovation opportunities to increase the efficiency of both. One historical measure of AI data center energy efficiency is Power Use Effectiveness (PUE), which is the total facility annual energy use divided by the computing equipment annual energy use, with values closer to 1.0 being more efficient. Similarly, Water Use Effectiveness (WUE) is measured as total annual water use divided by the computing equipment annual energy use, with values closer to zero being more efficient. We should continue to push for improvement in PUE and WUE, but these are incomplete metrics for driving deep innovation because they do not reflect how much computing power is provided and do not assess impacts on the broader energy system. While multiple different metrics for data center energy efficiency have been proposed over the past several years, what is important for innovation is to improve how much AI computing work we get for the amount of energy and water used. Just like efficiency in a car is measured in miles per gallon (MPG), we need to measure the “MPG” of how AI data centers perform work and then create incentives and competition for continuous improvements. There could be different metrics for different types of AI training and inference workloads, but a starting point could be tokens per kilowatt-hour of electricity used. A token is a word or portion of a word that AI foundation models use for analysis. Another way could be to measure the efficiency of computing performance, or FLOPS, per kilowatt-hour. The more analysis an AI model or data center can perform using the same amount of energy, the more energy efficient it is.
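A minimal sketch of the “AI MPG” idea, expressing useful AI work per kilowatt-hour; the workload and energy figures are hypothetical, and the two functions simply restate the tokens-per-kWh and FLOPS-per-kWh ratios proposed above.

```python
# "AI MPG": useful AI work per kilowatt-hour, with hypothetical workloads.

def tokens_per_kwh(tokens_processed: float, energy_kwh: float) -> float:
    """Inference efficiency: tokens processed per kWh of electricity."""
    return tokens_processed / energy_kwh

def flops_per_kwh(total_flops: float, energy_kwh: float) -> float:
    """Compute efficiency: floating-point operations per kWh."""
    return total_flops / energy_kwh

# Hypothetical month of inference at one facility:
tokens, kwh = 1.2e12, 3.0e6
print(f"{tokens_per_kwh(tokens, kwh):,.0f} tokens/kWh")    # 400,000 tokens/kWh

# Hypothetical training run:
flops, train_kwh = 2.5e24, 8.0e6
print(f"{flops_per_kwh(flops, train_kwh):.2e} FLOPS/kWh")  # 3.12e+17
```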
FESI could deploy sliding-scale innovation prizes based on data center size for new facilities that demonstrate leading-edge AI data center MPG. These could be based on efficiency targets for tokens per kilowatt-hour, FLOPS per kilowatt-hour, top-performing PUE, or other metrics of energy efficiency. Similar prizes could be provided for water use efficiency, within different classes of cooling technologies, for facilities that exceed best-in-class performance. These prizes could be modeled after the FFAR Egg-Tech Prize run by USDA’s agency-related foundation, a program that was easy to administer and has had great success. A secondary benefit of an efficiency innovation prize is continuous competition for improvement, and open information about best-in-class data center facilities.
Fig. 1. Power Use Effectiveness (PUE) and Water Use Effectiveness (WUE) values for data centers. Source: LBNL 2024
Recommendation 3. Create Prizes to Maximize Transmission Throughput and Upgrade Grid Infrastructure
FESI could award prizes for rapid deployment of reconductoring, new transmission, or grid enhancing technologies to increase the transmission capacity for any project in DOE’s Coordinated Interagency Authorizations and Permit Program. Similarly, FESI could award prizes for utilities to upgrade local distribution infrastructure beyond the direct needs of the project to reduce future electricity rate cases, which will keep electricity costs affordable for residential customers. The Department of Energy already has authority to finance up to $2.5 billion in the Transmission Facilitation Program, a revolving fund administered by the Grid Deployment Office (GDO) that helps support transmission infrastructure. These funds could be used for public-private partnerships in national interest electric transmission corridors where upgrades are necessary to accommodate an increase in electricity demand across more than one state or transmission planning region.
Recommendation 4. Develop Prizes That Reward Flexibility and End-Use Efficiency Investments
Flexibility in how and when data centers use electricity can meaningfully reduce the stress on the grid. FESI should award prizes to data centers that demonstrate best-in-class flexibility through smart controls and operational improvements. Prizes could also be awarded to utilities hosting data centers that reduce summer and winter peak loads in the local service territory. Prizes for utilities that meet home weatherization targets and deploy virtual power plants could help reduce costs and grid stress in local communities hosting AI data centers.
Conclusion
The U.S. is facing the risk of electricity demand outstripping supplies in many parts of the country, which would be severely detrimental to people’s lives, to the economy, to the environment, and to national security. “Smart AI Fast Lanes” is a policy and investment framework that can rapidly increase clean energy supply, infrastructure, and demand management capabilities.
It is imperative that the U.S. addresses the growing demand from AI and data centers, so that the U.S. remains on the cutting edge of innovation in this important sector. How the U.S. approaches and solves the challenge of new demand from AI is a broader test of how the country prepares its infrastructure for increased electrification of vehicles, buildings, and manufacturing, as well as how the country addresses both carbon pollution and the impacts of climate change. The “Smart AI Fast Lanes” framework and FESI-run prizes will enable U.S. competitiveness in AI, keep energy costs affordable, reduce pollution, and prepare the country for new opportunities.
This memo is part of our AI & Energy Policy Sprint, a policy project to shape U.S. policy at the critical intersection of AI and energy. Read more about the Policy Sprint and check out the other memos here.
A Holistic Framework for Measuring and Reporting AI’s Impacts to Build Public Trust and Advance AI
As AI becomes more capable and integrated throughout the United States economy, its growing demand for energy, water, land, and raw materials is driving significant economic and environmental costs, from increased air pollution to higher costs for ratepayers. A recent report projects that data centers could consume up to 12% of U.S. electricity by 2028, underscoring the urgent need to assess the tradeoffs of continued expansion. To craft effective, sustainable resource policies, we need clear standards for estimating data centers’ true energy needs and for measuring and reporting the specific AI applications driving their resource consumption. Local and state-level bills calling for more oversight of utility rates and impacts to ratepayers have received bipartisan support, and this proposal builds on that momentum.
In this memo, we draw on research proposing a holistic evaluation framework for characterizing AI’s environmental impacts, which establishes three categories of impacts arising from AI: (1) Computing-related impacts; (2) Immediate application impacts; and (3) System-level impacts. Concerns around AI’s computing-related impacts, e.g. energy and water use due to AI data centers and hardware manufacturing, have become widely known, with corresponding policy starting to be put into place. However, AI’s immediate application and system-level impacts, which arise from the specific use cases to which AI is applied and the broader socio-economic shifts resulting from its use, remain poorly understood, despite their greater potential for societal benefit or harm.
To ensure that policymakers have visibility into the full range of AI’s environmental impacts, we recommend that the National Institute of Standards and Technology (NIST) oversee creation of frameworks to measure that full range of impacts. Frameworks should rely on quantitative measurements of the computing- and application-related impacts of AI and qualitative data based on engagements with the stakeholders most affected by the construction of data centers. NIST should produce these frameworks based on convenings that include academic researchers, corporate governance personnel, developers, utility companies, vendors, and data center owners in addition to civil society organizations. Participatory workshops will yield new guidelines, tools, methods, protocols, and best practices to facilitate the evolution of industry standards for the measurement of the social costs of AI’s energy infrastructures.
Challenge and Opportunity
Resource consumption associated with AI infrastructures is expanding quickly, and this has negative impacts, including asthma from air pollution associated with diesel backup generators, noise pollution, light pollution, excessive water and land use, and financial impacts to ratepayers. A lack of transparency regarding these outcomes, and of public participation to minimize them, risks losing the public’s trust, which in turn will inhibit the beneficial uses of AI. Despite a huge amount of capital expenditure and massive forecasted growth in power consumption, there remains a lack of transparency and scientific consensus around the measurement of AI’s environmental impacts with respect to data centers and their related negative externalities.
A holistic evaluation framework for assessing AI’s broader impacts requires empirical evidence, both qualitative and quantitative, to influence future policy decisions and establish more responsible, strategic technology development. Focusing narrowly on carbon emissions or energy consumption arising from AI’s computing related impacts is not sufficient. Measuring AI’s application and system-level impacts will help policymakers consider multiple data streams, including electricity transmission, water systems and land use in tandem with downstream economic and health impacts.
Regulatory and technical attempts so far to develop scientific consensus and international standards around the measurement of AI’s environmental impacts have focused on documenting AI’s computing-related impacts, such as energy use, water consumption, and carbon emissions required to build and use AI. Measuring and mitigating AI’s computing-related impacts is necessary, and has received attention from policymakers (e.g. the introduction of the AI Environmental Impacts Act of 2024 in the U.S., provisions for environmental impacts of general-purpose AI in the EU AI Act, and data center sustainability targets in the German Energy Efficiency Act). However, research by Kaack et al (2022) highlights that impacts extend beyond computing. AI’s application impacts, which arise from the specific use cases for which AI is deployed (e.g. AI-enabled emissions from applying AI to oil and gas drilling), have much greater potential scope for positive or negative impacts than AI’s computing impacts alone, depending on how AI is used in practice. Finally, AI’s system-level impacts, which include even broader, cascading social and economic impacts associated with AI energy infrastructures, such as increased pressure on local utility infrastructure leading to increased costs to ratepayers, or health impacts to local communities due to increased air pollution, have the greatest potential for positive or negative impacts, while being the most challenging to measure and predict. See Figure 1 for an overview.
Figure 1, from Kaack et al. (2022). Effectively understanding and shaping AI’s impacts will require going beyond impacts arising from computing alone, and requires consideration and measurement of impacts arising from AI’s uses (e.g. in optimizing power systems or agriculture) and how AI’s deployment throughout the economy leads to broader systemic shifts, such as changes in consumer behavior.
Effective policy recommendations require more standardized measurement practices, a point raised by the Government Accountability Office’s recent report on AI’s human and environmental effects, which explicitly calls for increasing corporate transparency and innovation around technical methods for improved data collection and reporting. But data collection should also include multi-stakeholder engagement to ensure there are more holistic evaluation frameworks that meet the needs of specific localities, including state and local government officials, businesses, utilities, and ratepayers. Furthermore, while states and municipalities are introducing bills calling for more data transparency and responsibility, including in California, Indiana, Oregon, and Virginia, the lack of federal policy means that data center owners may move their operations to states that have fewer protections in place and similar levels of existing energy and data transmission infrastructure.
States are also grappling with the potential economic costs of data center expansion. Ohio’s Policy Matters found that tax breaks for data center owners are hurting tax revenue streams that should be used to fund public services. In Michigan, tax breaks for data centers are increasing the cost of water and power for the public while undermining the state’s climate goals. Some Georgia Republicans have stated that data center companies should “pay their way.” While there are arguments that data centers can provide useful infrastructure, connectivity, and even revenue for localities, a recent report shows that at least ten states each lost over $100 million a year in revenue to data centers because of tax breaks. The federal government can help create standards that allow stakeholders to balance the potential costs and benefits of data centers and related energy infrastructures. We now have an urgent need to increase transparency and accountability through multi-stakeholder engagement, maximizing economic benefits while reducing waste.
Despite the high economic and policy stakes, critical data needed to assess the full impacts—both costs and benefits—of AI and data center expansion remains fragmented, inconsistent, or entirely unavailable. For example, researchers have found that state-level subsidies for data center expansion may have negative impacts on state and local budgets, but this data has not been collected and analyzed across states because not all states publicly release data about data center subsidies. Other impacts, such as the use of agricultural land or public parks for transmission lines and data center siting, must be studied at a local and state level, and the various social repercussions require engagement with the communities who are likely to be affected. Similarly, estimates of the economic upsides of AI vary widely; e.g. the estimated increase in U.S. labor productivity due to AI adoption ranges from 0.9% to 15%, due in large part to a lack of relevant data on AI uses and their economic outcomes that could otherwise inform modeling assumptions.
Data centers are highly geographically clustered in the United States, more so than other industrial facilities such as steel plants, coal mines, factories, and power plants (Fig. 4.12, IEA World Energy Outlook 2024). This means that certain states and counties are experiencing disproportionate burdens associated with data center expansion. These burdens have led to calls for data center moratoriums or for the cessation of other energy development, including in states like Indiana. Improved measurement and transparency can help planners avoid overly burdensome concentrations of data center infrastructure, reducing local opposition.
With a rush to build new data center infrastructure, states and localities must also face another concern: overbuilding. For example, Microsoft recently put a hold on parts of its data center contract in Wisconsin and paused another in central Ohio, along with contracts in several other locations across the United States and internationally. These situations often stem from inaccurate demand forecasting, prompting utilities to undertake costly planning and infrastructure development that ultimately goes unused. With better measurement and transparency, policymakers will have more tools to prepare for future demands, avoiding the negative social and economic impacts of infrastructure projects that are started but never completed.
While there have been significant developments in measuring the direct, computing-related impacts of AI data centers, public participation is needed to fully capture many of their indirect impacts. Data centers can be constructed so they are more beneficial to communities while mitigating their negative impacts, e.g. by recycling data center heat, and they can also be constructed to be more flexible by not using grid power during peak times. However, this requires collaborative innovation and cross-sector translation, informed by relevant data.
Plan of Action
Recommendation 1. Develop a database of AI uses and framework for reporting AI’s immediate applications in order to understand the drivers of environmental impacts.
The first step towards informed decision-making around AI’s social and environmental impacts is understanding what AI applications are actually driving data center resource consumption. This will allow specific deployments of AI systems to be linked upstream to compute-related impacts arising from their resource intensity, and downstream to impacts arising from their application, enabling estimation of immediate application impacts.
The AI company Anthropic demonstrated a proof-of-concept categorizing queries to their Claude language model under the O*NET database of occupations. However, O*NET was developed to categorize job types and tasks with respect to human workers, which does not exactly align with current and potential uses of AI. To address this, we recommend that NIST work with relevant collaborators such as the U.S. Department of Labor (responsible for developing and maintaining the O*NET database) to develop a database of AI uses and applications, similar to and building off of O*NET, along with guidelines and infrastructure for reporting data center resource consumption corresponding to those uses. This data could then be used to understand particular AI tasks that are key drivers of resource consumption.
Any entity deploying a public-facing AI model (that is, one that can produce outputs and/or receive inputs from outside its local network) should be able to easily document and report its use case(s) within the NIST framework. A centralized database will allow for collation of relevant data across multiple stakeholders including government entities, private firms, and nonprofit organizations.
Gathering data of this nature may require the reporting entity to perform analyses of sensitive user data, such as categorizing individual user queries to an AI model. However, data is to be reported in aggregate percentages with respect to use categories without attribution to or listing of individual users or queries. This type of analysis and data reporting is well within the scope of existing, commonplace data analysis practices. As with existing AI products that rely on such analyses, reporting entities are responsible for performing that analysis in a way that appropriately safeguards user privacy and data protection in accordance with existing regulations and norms.
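A hypothetical sketch of what aggregate use-case reporting could look like under such a framework: queries are assigned to placeholder categories by a stub classifier, and only aggregate shares and attributed energy are reported. The category names, classifier logic, and energy figure are all illustrative, not part of any existing NIST taxonomy.

```python
# Aggregate use-case reporting sketch: no user-level data leaves the
# operator; only category-level shares and attributed energy are reported.
from collections import Counter

AI_USE_CATEGORIES = ["code_generation", "document_drafting",
                     "customer_support", "data_analysis", "other"]

def classify(query: str) -> str:
    """Stub classifier for illustration; a real deployment might use a
    privacy-preserving automated categorizer, as in the Anthropic
    proof-of-concept described above."""
    q = query.lower()
    if "python" in q or "function" in q:
        return "code_generation"
    if "draft" in q or "memo" in q:
        return "document_drafting"
    return "other"

queries = ["Write a Python function to parse logs",
           "Draft a memo on water policy",
           "What's the weather like?"]
counts = Counter(classify(q) for q in queries)
total = sum(counts.values())

total_energy_kwh = 50_000.0  # hypothetical metered energy for these workloads
for category, n in counts.items():
    share = n / total
    print(f"{category}: {share:.0%} of queries, "
          f"~{share * total_energy_kwh:,.0f} kWh attributed")
```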
Recommendation 2. NIST should create an independent consortium to develop a system-level evaluation framework for AI’s environmental impacts, while embedding robust public participation in every stage of the work.
Currently, the social costs of AI’s system-level impacts—the broader social and economic implications arising from AI’s development and deployment—are not being measured or reported in any systematic way. These impacts fall heaviest on the local communities that host the data centers powering AI: the financial burden on ratepayers who share utility infrastructure, the health effects of pollutants from backup generators, the water and land consumed by new facilities, and the wider economic costs or benefits of data-center siting. Without transparent metrics and genuine community input, policymakers cannot balance the benefits of AI innovation against its local and regional burdens. Building public trust through public participation is key when it comes to ensuring United States energy dominance and national security interests in AI innovation, themes emphasized in policy documents produced by the first and second Trump administrations.
To develop evaluation frameworks in a way that is both scientifically rigorous and broadly trusted, NIST should stand up an independent consortium via a Cooperative Research and Development Agreement (CRADA). A CRADA allows NIST to collaborate rapidly with non-federal partners while remaining outside the scope of the Federal Advisory Committee Act (FACA), and has been used, for example, to convene the NIST AI Safety Institute Consortium. Membership will include academic researchers, utility companies and grid operators, data-center owners and vendors, state, local, Tribal, and territorial officials, technologists, civil-society organizations, and frontline community groups.
To ensure robust public engagement, the consortium should consult closely with FERC’s Office of Public Participation (OPP)—drawing on OPP’s expertise in plain-language outreach and community listening sessions—and with other federal entities that have deep experience in community engagement on energy and environmental issues. Drawing on these partners’ methods, the consortium will convene participatory workshops and listening sessions in regions with high data-center concentration—Northern Virginia, Silicon Valley, Eastern Oregon, and the Dallas–Fort Worth metroplex—while also making use of online comment portals to gather nationwide feedback.
Guided by the insights from these engagements, the consortium will produce a comprehensive evaluation framework that captures metrics falling outside the scope of direct emissions alone. These system-level metrics could encompass (1) the number, type, and duration of jobs created; (2) the effects of tax subsidies on local economies and public services; (3) the placement of transmission lines and associated repercussions for housing, public parks, and agriculture; (4) the use of eminent domain for data-center construction; (5) water-use intensity and competing local demands; and (6) public-health impacts from air, light, and noise pollution. NIST will integrate these metrics into standardized benchmarks and guidance.
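As one illustration of how such metrics might be recorded, the sketch below defines a hypothetical record schema covering items (1) through (6); the field names and values are illustrative, not a proposed NIST standard.

```python
# Hypothetical per-site schema for the system-level metrics listed above.
from dataclasses import dataclass

@dataclass
class SystemLevelImpactRecord:
    site_id: str
    jobs_created: int            # (1) number of jobs
    jobs_duration_months: float  #     and their typical duration
    tax_subsidy_usd: float       # (2) subsidies granted to the project
    transmission_miles: float    # (3) new transmission routed for the site
    eminent_domain_cases: int    # (4) takings associated with construction
    water_use_megaliters: float  # (5) annual water consumption
    pm25_tons: float             # (6) pollution relevant to public health

record = SystemLevelImpactRecord(
    site_id="VA-001", jobs_created=150, jobs_duration_months=18,
    tax_subsidy_usd=45_000_000, transmission_miles=12.5,
    eminent_domain_cases=2, water_use_megaliters=380.0, pm25_tons=3.1)
print(record)
```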
Consortium members will attend public meetings, engage directly with community organizations, deliver accessible presentations, and create plain-language explainers so that non-experts can meaningfully influence the framework’s design and application. The group will also develop new guidelines, tools, methods, protocols, and best practices to facilitate industry uptake and to evolve measurement standards as technology and infrastructure grow.
We estimate a cost of approximately $5 million over two years to complete the work outlined in Recommendations 1 and 2, covering staff time, travel to at least twelve data-center or energy-infrastructure sites across the United States, participant honoraria, and research materials.
Recommendation 3. Mandate regular measurement and reporting on relevant metrics by data center operators.
Voluntary reporting, e.g. via corporate Environmental, Social, and Governance (ESG) reports, is the status quo, but it has so far been insufficient for gathering necessary data. For example, while the technology firm OpenAI, best known for its highly popular ChatGPT generative AI model, holds a significant share of the search market and likely a corresponding share of environmental and social impacts arising from the data centers powering its products, OpenAI chooses not to publish ESG reports or data in any other format regarding its energy consumption or greenhouse gas (GHG) emissions. In order to collect sufficient data at the appropriate level of detail, reporting must be mandated at the local, state, or federal level. At the state level, California’s Climate Corporate Data Accountability Act (SB 253, SB 219) requires that large companies operating within the state report their GHG emissions in accordance with the GHG Protocol, administered by the California Air Resources Board (CARB).
At the federal level, the EU’s Corporate Sustainability Reporting Directive (CSRD), which requires firms operating within the EU to report a wide variety of data related to environmental sustainability and social governance, could serve as a model for regulating companies operating within the U.S. The Environmental Protection Agency’s (EPA) GHG Reporting Program already requires emissions reporting by operators and suppliers associated with large GHG emissions sources, and the Energy Information Administration (EIA) collects detailed data on electricity generation and fuel consumption through forms 860 and 923. With respect to data centers specifically, the Department of Energy (DOE) could require that developers who are granted rights to build AI data center infrastructure on public lands perform the relevant measurement and reporting, and more broadly, reporting could be a requirement to qualify for any local, state, or federal funding or assistance provided to support buildout of U.S. AI infrastructure.
Recommendation 4. Incorporate measurements of social cost into AI energy and infrastructure forecasting and planning.
There is a huge range in estimates of future data center energy use, largely driven by uncertainty around the nature of demands from AI. This uncertainty stems in part from a lack of historical and current data on which AI use cases are most energy intensive and how those workloads are evolving over time. It also remains unclear to what extent challenges in bringing new resources online, such as hardware production limits or bottlenecks in permitting, will influence growth rates. These uncertainties are even more significant when it comes to the holistic impacts (i.e. those beyond direct energy consumption) described above, making it challenging to balance costs and benefits when planning for future demands from AI.
To address these issues, accurate forecasting of demand for energy, water, and other limited resources must incorporate data gathered through the holistic measurement frameworks described above. Further, the forecasting of broader system-level impacts must be incorporated into decision-making around investment in AI infrastructure. Forecasting needs to go beyond just energy use. Models should include projected energy and related infrastructure needs for transmission, the social cost of carbon in terms of pollution, the effects on ratepayers, and the energy demands of chip production.
We recommend that agencies already responsible for energy-demand forecasting—such as the Energy Information Administration at the Department of Energy—integrate, in line with the NIST frameworks developed above, data on the AI workloads driving data-center electricity use into their forecasting models. Agencies specializing in social impacts, such as the Department of Health and Human Services in the case of health impacts, should model social impacts and communicate those to EIA and DOE for planning purposes. In parallel, the Federal Energy Regulatory Commission (FERC) should update its new rule on long-term regional transmission planning, to explicitly include consideration of the social costs corresponding to energy supply, demand and infrastructure retirement/buildout across different scenarios.
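A toy illustration of the kind of scenario arithmetic involved: a five-year national demand projection that separates baseline load growth from a data-center component whose growth rate would be informed by reported AI workload data. All numbers are hypothetical, not EIA figures.

```python
# Toy five-year demand projection: baseline growth plus a data-center
# component under low/mid/high growth scenarios; all inputs hypothetical.

baseline_twh = 4_000.0   # hypothetical national consumption, year 0
baseline_growth = 0.01   # 1%/yr growth for non-data-center load
dc_share = 0.044         # data centers' starting share of total load
dc_growth_scenarios = {"low": 0.10, "mid": 0.20, "high": 0.30}

for name, g in dc_growth_scenarios.items():
    other = baseline_twh * (1 - dc_share) * (1 + baseline_growth) ** 5
    dc = baseline_twh * dc_share * (1 + g) ** 5
    total = other + dc
    print(f"{name:>4}: year-5 total {total:,.0f} TWh "
          f"(data centers {dc / total:.1%})")
```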
Recommendation 5. Transparently use federal, state, and local incentive programs to reward data-center projects that deliver concrete community benefits.
Incentive programs should be tied to holistic estimates of the costs and benefits collected under the frameworks above, not purely to promises. When considering using incentive programs, policymakers should ask questions such as: How many jobs are created by data centers, for how long do those jobs exist, and do they create jobs for local residents? What tax revenue for municipalities or states is created by data centers versus what subsidies are data center owners receiving? What are the social impacts of using agricultural land or public parks for data center construction or transmission lines? What are the impacts to air quality and other public health issues? Do data centers deliver benefits like load flexibility and sharing of waste heat?
Grid operators (Regional Transmission Organizations [RTOs] and Independent System Operators [ISOs]) can leverage interconnection queues to incentivize data center operators to justify that they have sufficiently considered the impacts to local communities when proposing a new site. FERC recently approved reforms to processing the interconnection request queue, allowing RTOs to implement a “first-ready, first-served” approach rather than a first-come, first-served approach, wherein proposed projects can be fast-tracked based on their readiness. A similar approach could be used by RTOs to fast-track proposals that include a clear plan for how they will benefit local communities (e.g. through load flexibility, heat reuse, and clean energy commitments), grounded in careful impact assessment.
There is the possibility of introducing state-level incentives in states with existing significant infrastructure. Such incentives could be determined in collaboration with the National Governors Association, which has been balancing AI-driven energy needs with state climate goals.
Conclusion
Data centers have an undeniable impact on energy infrastructures and the communities living close to them. This impact will continue to grow alongside AI infrastructure investment, which is expected to skyrocket. It is possible to shape a future where AI infrastructure can be developed sustainably, and in a way that responds to the needs of local communities. But more work is needed to collect the necessary data to inform government decision-making. We have described a framework for holistically evaluating the potential costs and benefits of AI data centers, and shaping AI infrastructure buildout based on those tradeoffs. This framework includes: establishing standards for measuring and reporting AI’s impacts, eliciting public participation from impacted communities, and putting gathered data into action to enable sustainable AI development.
This memo is part of our AI & Energy Policy Sprint, a policy project to shape U.S. policy at the critical intersection of AI and energy. Read more about the Policy Sprint and check out the other memos here.
Data centers are highly spatially concentrated largely due to reliance on existing energy and data transmission infrastructure; it is more cost-effective to continue building where infrastructure already exists, rather than starting fresh in a new region. As long as the cost of performing the proposed impact assessment and reporting in established regions is less than that of the additional overhead of moving to a new region, data center operators are likely to comply with regulations in order to stay in regions where the sector is established.
Spatial concentration of data centers also arises due to the need for data center workloads with high data transmission requirements, such as media streaming and online gaming, to have close physical proximity to users in order to reduce data transmission latency. In order for AI to be integrated into these realtime services, data center operators will continue to need presence in existing geographic regions, barring significant advances in data transmission efficiency and infrastructure.
Slowing AI infrastructure growth would be bad for national security and economic growth. So is infrastructure growth that harms the local communities in which it occurs.
Researchers from Good Jobs First have found that many states are in fact losing tax revenue to data center expansion: “At least 10 states already lose more than $100 million per year in tax revenue to data centers…” More data is needed to determine if data center construction projects coupled with tax incentives are economically advantageous investments on the parts of local and state governments.
The DOE is opening up federal lands in 16 locations to data center construction projects in the name of strengthening America’s energy dominance and ensuring America’s role in AI innovation. But national security concerns around data center expansion should also consider the impacts to communities who live close to data centers and related infrastructures.
Data centers themselves do not automatically ensure greater national security, especially because the critical minerals and hardware components of data centers depend on international trade and manufacturing. At present, the United States is not equipped to supply the critical minerals and other materials needed to produce data center hardware, including GPUs and other components.
Federal policy can ensure that states or counties do not become overburdened by data center growth and can help different regions benefit from the potential economic and social rewards of data center construction.
Developing federal standards around transparency helps individual states plan for data center construction, allowing for a high-level, comparative look at the energy demand associated with specific AI use cases. Federal intervention is also important because data centers in one state might have transmission lines running through a neighboring state, with resulting impacts that cross jurisdictions. There is a need for a national-level standard.
Producing credible cost-benefit estimates can be extremely challenging. For example, while municipalities often expect that data centers will bring economic benefits and that data center construction will yield more jobs in the area, subsidies and short-term construction jobs do not necessarily translate into economic gains.
To improve the ability of decision makers to do quality cost-benefit analysis, the independent consortium described in Recommendation 2 will examine both qualitative and quantitative data, including permitting histories, transmission plans, land use and eminent domain cases, subsidies, jobs numbers, and health or quality of life impacts in various sites over time. NIST will help develop standards in accordance with this data collection, which can then be used in future planning processes.
Further, there is customer interest in knowing that their AI is sourced from firms implementing sustainable and socially responsible practices. These efforts can be highlighted in marketing communications and reported as socially and environmentally responsible practice in ESG reports. This serves as an additional incentive for some data center operators to participate in voluntary reporting and to maintain operations in locations with increased regulation.
Advance AI with Cleaner Air and Healthier Outcomes
Artificial intelligence (AI) is transforming industries, driving innovation, and tackling some of the world’s most pressing challenges. Yet while AI has tremendous potential to advance public health, such as supporting epidemiological research and optimizing healthcare resource allocation, the public health burden of AI due to its contribution to air pollutant emissions has been under-examined. Energy-intensive data centers, often paired with diesel backup generators, are rapidly expanding and degrading air quality through emissions of air pollutants. These emissions exacerbate or cause various adverse health outcomes, from asthma to heart attacks and lung cancer, especially among young children and the elderly. Without sufficient clean and stable energy sources, the annual public health burden from data centers in the United States is projected to reach up to $20 billion by 2030, with households in some communities located near power plants supplying data centers, such as those in Mason County, WV, facing over 200 times greater burdens than others.
Federal, state, and local policymakers should act to accelerate the adoption of cleaner and more stable energy sources and to manage AI's expansion in a way that aligns innovation with human well-being, advancing the United States' leadership in AI while ensuring clean air and healthy communities.
Challenge and Opportunity
Forty-six percent of people in the United States breathe unhealthy levels of air pollution. Ambient air pollution, especially fine particulate matter (PM2.5), is linked to 200,000 deaths each year in the United States. Poor air quality remains the nation’s fifth highest mortality risk factor, resulting in a wide range of immediate and severe health issues that include respiratory diseases, cardiovascular conditions, and premature deaths.
Data centers consume vast amounts of electricity to power and cool the servers running AI models and other computing workloads. According to the Lawrence Berkeley National Laboratory, growing demand for AI is projected to increase data centers' share of the nation's total electricity consumption to as much as 12% by 2028, up from 4.4% in 2023. Without enough sustainable energy sources like nuclear power, the rapid growth of energy-intensive data centers is likely to exacerbate ambient air pollution and its associated public health impacts.
Data centers typically rely on diesel backup generators for uninterrupted operation during power outages. While the total operation time for routine maintenance of backup generators is limited, these generators can create short-term spikes in PM2.5, NOx, and SO2 that go beyond the baseline environmental and health impacts associated with data center electricity consumption. For example, diesel generators emit 200–600 times more NOx than natural gas-fired power plants per unit of electricity produced. Even brief exposure to high-level NOx can aggravate respiratory symptoms and hospitalizations. A recent report to the Governor and General Assembly of Virginia found that backup generators at data centers emitted approximately 7% of the total permitted pollution levels for these generators in 2023. Based on the Environmental Protection Agency’s COBRA modeling tool, the public health cost of these emissions in Virginia is estimated at approximately $200 million, with health impacts extending to neighboring states and reaching as far as Florida. In Memphis, Tennessee, a set of temporary gas turbines powering a large AI data center, which has not undergone a complete permitting process, is estimated to emit up to 2,000 tons of NOx annually. This has raised significant health concerns among local residents and could result in a total public health burden of $160 million annually. These public health concerns coincide with a paradigm shift that favors dirty energy and potentially delays sustainability goals.
In 2023 alone, air pollution attributed to data centers in the United States resulted in an estimated $5 billion in health-related damages, a figure projected to rise up to $20 billion annually by 2030. This projected cost reflects an estimated 1,300 premature deaths in the United States per year by the end of the decade. While communities near data centers and power plants bear the greatest burden, with some households facing over 200 times greater impacts than others, the health impacts of these facilities extend to communities across the nation. The widespread health impacts of data centers further compound the already uneven distribution of environmental costs and water resource stresses imposed by AI data centers across the country.
While essential for mitigating air pollution and public health risks, transitioning AI data centers to cleaner backup fuels and stable energy sources such as nuclear power presents significant implementation hurdles, including lengthy permitting processes. Clean backup generators that match the reliability of diesel remain limited in real-world applications, and multiple key issues must be addressed to fully transition to cleaner and more stable energy.
While it is clear that data centers pose public health risks, comprehensive evaluations of data center air pollution and its public health impacts, essential for grasping the full extent of the harm these facilities pose, are often absent from current practice. Washington State conducted a health risk assessment of diesel particulate pollution from multiple data centers in the Quincy area in 2020. However, most states lack similar evaluations for either existing or newly proposed data centers. To safeguard public health, it is essential to establish transparency frameworks, reporting standards, and compliance requirements for data centers, enabling the assessment of PM2.5, NOₓ, SO₂, and other harmful air pollutants, as well as their short- and long-term health impacts. These mechanisms would also equip state and local governments to make informed decisions about where to site AI data center facilities, balancing technological progress with the protection of community health nationwide.
Finally, limited public awareness, insufficient educational outreach, and a lack of comprehensive decision-making processes further obscure the health risks data centers pose. Without robust transparency and community engagement mechanisms, communities housing data center facilities are left with little influence or recourse over developments that may significantly affect their health and environment.
Plan of Action
The United States can build AI systems that not only drive innovation but also promote human well-being, delivering lasting health benefits for generations to come. Federal, state, and local policymakers should adopt a multi-pronged approach, outlined below, to ensure data center expansion proceeds with minimal air pollution and public health impacts.
Federal-level Action
Federal agencies play a crucial role in establishing national standards, coordinating cross-state efforts, and leveraging federal resources to model responsible public health stewardship.
Recommendation 1. Incorporate Public Health Benefits to Accelerate Clean and Stable Energy Adoption for AI Data Centers
Congress should direct relevant federal agencies, including the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and the Environmental Protection Agency (EPA), to integrate air pollution reduction and the associated public health benefits into efforts to streamline the permitting process for more sustainable energy sources, such as nuclear power, for AI data centers. Simultaneously, federal resources should be expanded to support research, development, and pilot deployment of alternative low-emission fuels for backup generators while ensuring high reliability.
- Public Health Benefit Quantification. Direct the EPA, in coordination with DOE and public health agencies, to develop standardized methods for estimating the public health benefits (e.g., avoided premature deaths, hospital visits, and economic burden) of using cleaner and more stable energy sources for AI data centers. Require lifecycle emissions modeling of energy sources and translate avoided emissions into quantitative health benefits using established tools such as the EPA's BenMAP (a minimal illustrative calculation follows this list). This should:
- Include modeling of air pollution exposure and health outcomes (e.g., using tools like EPA’s COBRA)
- Incorporate cumulative risks from regional electricity generation and local backup generator emissions
- Account for spatial disparities and vulnerable populations (e.g., children, the elderly, and disadvantaged communities)
- Evaluate both short-term (e.g., generator spikes) and long-term (e.g., chronic exposure) health impacts
- Preferential Permitting. Instruct the DOE to prioritize and streamline permitting for cleaner energy projects (e.g., small modular reactors, advanced geothermal) that demonstrate significant air pollution reduction and health benefits in supporting AI data center infrastructures. Develop a Clean AI Permitting Framework that allows project applicants to submit health benefit assessments as part of the permitting package to justify accelerated review timelines.
- Support for Cleaner Backup Systems. Expand DOE and EPA R&D programs to support pilot projects and commercialization pathways for alternative backup generator technologies, including hydrogen combustion systems and long-duration battery storage. Provide tax credits or grants for early adopters of non-diesel backup technologies in AI-related data center facilities.
- Federal Guidance & Training. Provide technical assistance to state and local agencies to implement the protocol, and fund capacity-building efforts in environmental health departments.
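To make the quantification step concrete, the sketch below shows the basic arithmetic of a benefit-per-ton estimate: avoided emissions multiplied by monetized damage values per ton. All figures are illustrative placeholders, not EPA-published values; a real analysis would draw region- and population-specific estimates from tools like BenMAP or COBRA.

```python
# Minimal sketch of a benefit-per-ton health benefit estimate.
# All numbers are illustrative placeholders, not EPA-published values.

# Hypothetical avoided emissions (tons/year) from switching a data
# center's backup fleet and grid supply to cleaner sources.
avoided_emissions_tons = {"PM2.5": 12.0, "NOx": 340.0, "SO2": 85.0}

# Hypothetical monetized damage estimates ($/ton), standing in for
# values a tool like EPA's BenMAP or COBRA would produce for a
# specific region and population.
damage_per_ton_usd = {"PM2.5": 750_000, "NOx": 9_000, "SO2": 55_000}

def monetized_health_benefit(avoided: dict, damages: dict) -> float:
    """Sum avoided emissions times per-ton damages across pollutants."""
    return sum(tons * damages[p] for p, tons in avoided.items())

total = monetized_health_benefit(avoided_emissions_tons, damage_per_ton_usd)
print(f"Estimated annual health benefit: ${total:,.0f}")
```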
Recommendation 2. Establish a Standardized Emissions Reporting Framework for AI Data Centers
Congress should direct the EPA, in coordination with the National Institute of Standards and Technology (NIST), to develop and implement a standardized reporting framework requiring data centers to publicly disclose their emissions of air pollutants, including PM₂.₅, NOₓ, SO₂, and other hazardous air pollutants associated with backup generators and electricity use.
- Multi-Stakeholder Working Group. Task EPA with convening a multi-stakeholder working group, including representatives from NIST, DOE, state regulators, industry, and public health experts, to define the scope, metrics, and methodologies for emissions reporting.
- Standardization. Develop a federal technical standard (an illustrative reporting record is sketched after this list) that specifies:
- Types of air pollutants that should be reported
- Frequency of reporting (e.g., quarterly or annually)
- Facility-specific disclosures (including generator use and power source profiles)
- Geographic resolution of emissions data
- Public access and data transparency protocols
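As a rough illustration of what a reporting record under such a standard might contain, the sketch below defines a single facility-quarter disclosure. The field names, units, and example values are hypothetical, not a schema EPA or NIST has adopted.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class DataCenterEmissionsReport:
    """Illustrative reporting record; fields are hypothetical,
    not the standard EPA/NIST would adopt."""
    facility_id: str
    reporting_period: str           # e.g., "2026-Q1"
    latitude: float
    longitude: float
    pm25_tons: float                # quarterly PM2.5 emissions
    nox_tons: float                 # quarterly NOx emissions
    so2_tons: float                 # quarterly SO2 emissions
    generator_runtime_hours: float  # backup generator operation
    grid_mwh: float                 # electricity drawn from the grid
    onsite_generation_mwh: float    # on-site (e.g., backup) generation

report = DataCenterEmissionsReport(
    facility_id="VA-LOUDOUN-0042", reporting_period="2026-Q1",
    latitude=39.04, longitude=-77.49, pm25_tons=0.8, nox_tons=14.2,
    so2_tons=0.1, generator_runtime_hours=36.5, grid_mwh=210_000,
    onsite_generation_mwh=1_200)
print(json.dumps(asdict(report), indent=2))
```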
State-level Action
Recommendation 1. State environmental and public health departments should conduct a health impact assessment (HIA) before and after data center construction to evaluate discrepancies between anticipated and actual health impacts for existing and planned data center operations. To maintain and build trust, HIA findings, methodologies, and limitations should be publicly available and accessible to non-technical audiences (including policymakers, local health departments, and community leaders representing impacted residents), thereby enhancing community-informed action and participation. Reports should focus on the disparate impact between rural and urban communities, with particular attention to overburdened communities that have under-resourced health infrastructure. In addition, states should coordinate HIAs and share findings to address cross-boundary pollution risks. This includes accounting for nearby communities across state lines: public health impacts do not stop at jurisdictional borders, and neither should the analysis.
Recommendation 2. State public health departments should establish a state-funded program that offers community education forums for affected residents to express their concerns about how data centers impact them. These programs should emphasize leading outreach, engaging communities, and contributing to qualitative analysis for HIAs. Health impact assessments should be used as a basis for informed community engagement.
Recommendation 3. States should incorporate air pollutant emissions related to data centers into their implementation of the National Ambient Air Quality Standards (NAAQS) and the development of State Implementation Plans (SIPs). This ensures that affected areas can meet standards and maintain their attainment statuses. To support this, states should evaluate the adequacy of existing regulatory monitors in capturing emissions related to data centers and determine whether additional monitoring infrastructure is required.
Local-level Action
Recommendation 1. Local governments should revise zoning regulations to include stricter and more explicit health-based protections to prevent data center clustering in already overburdened communities. Additionally, zoning ordinances should address colocation factors and evaluate potential cumulative health impacts. A prominent example is Fairfax County, Virginia, which updated its zoning ordinance in September 2024 to regulate the proximity of data centers to residential areas, require noise pollution studies prior to construction, and establish size thresholds. These updates were shaped through community engagement and input.
Recommendation 2. Local governments should appoint public health experts to zoning boards so that data center placement decisions reflect community health priorities.
Conclusion
While AI can revolutionize industries and improve lives, its energy-intensive nature is also degrading air quality through emissions of air pollutants. To mitigate AI’s growing air pollution and public health risks, a comprehensive assessment of AI’s health impact and transitioning AI data centers to cleaner backup fuels and stable energy sources, such as nuclear power, are essential. By adopting more informed and cleaner AI strategies at the federal and state levels, policymakers can mitigate these harms, promote healthier communities, and ensure AI’s expansion aligns with clean air priorities.
This memo is part of our AI & Energy Policy Sprint, a policy project to shape U.S. policy at the critical intersection of AI and energy. Read more about the Policy Sprint and check out the other memos here.
Federation of American Scientists Statement on the Preemption of State AI Regulation in the One Big Beautiful Bill Act
As the Senate prepares to vote on a provision in the One Big Beautiful Bill Act, which would condition Broadband Equity, Access, and Deployment (BEAD) Program funding on states ceasing enforcement of their AI laws (SEC.0012 Support for Artificial Intelligence Under the Broadband Equity, Access, and Deployment Program), the Federation of American Scientists urges Congress to oppose this measure. This approach threatens to compromise public trust and responsible innovation at a moment of rapid technological change.
The Trump Administration has repeatedly emphasized that public trust is essential to fostering American innovation and global leadership in AI. That trust depends on clear, reasonable guardrails, especially as AI systems are increasingly deployed in high-stakes areas like education, health, employment, and public services. Moreover, the advancement of frontier AI systems is staggering. The capabilities, risks, and use cases of general-purpose models are predicted to evolve dramatically over the next decade. In such a landscape, we require governance structures that are adaptive, multi-layered, and capable of responding in real-time.
While a well-crafted federal framework may ultimately be the right path forward, preempting all state regulation in the absence of federal action would leave a dangerous vacuum, further undermining public confidence in these technologies. According to Pew Research, American concerns about AI are growing, and a majority of US adults and AI experts worry that governments will not go far enough to regulate AI.
State governments have long served as laboratories of democracy, testing policies, implementation strategies, and ways to adapt to local needs. Tying essential broadband infrastructure funding to the repeal of sensible, forward-looking laws would cut off states’ ability to meet the demands of AI evolution in the absence of federal guidance.
We urge lawmakers to protect both innovation and accountability by rejecting this provision. Conditioning BEAD Funding on halting AI regulation sends the wrong message. AI progress does not need to come at the cost of responsible oversight.
Unlocking AI’s Grid Modernization Potential
Surging energy demand and increasingly frequent extreme weather events are bringing new challenges to the forefront of electric grid planning, permitting, operations, and resilience. These hurdles are pushing our already fragile grid to the limit, highlighting decades of underinvestment, stagnant growth, and the pressing need to modernize our system.
While these challenges aren’t new, they are newly urgent. The society-wide emergence of artificial intelligence (AI) is bringing many of these challenges into sharper focus, pushing the already increasing electricity demand to new heights and cementing the need for deployable, scalable, and impactful solutions. Fortunately, many transformational and mature AI tools provide near-term pathways for significant grid modernization.
This policy memo builds on foundational research from the US Department of Energy’s (DOE) AI for Energy (2024) report to present a new matrix that maps these unique AI applications onto an “impact-readiness” scale. Nearly half of the applications identified by DOE are high impact and ready to deploy today. An additional ~40% have high impact potential but require further investment and research to move up the readiness scale. Only 2 of 14 use cases analyzed here fall into the “low-impact / low-readiness” quadrant.
Unlike other emerging technologies, AI's potential in grid modernization is not simply an R&D story, but a deployment one. However, with limited resources, the federal government should invest in use cases that show high-impact potential and demonstrate feasible levels of deployment readiness. The recommendations in this memo target regulatory actions across the Federal Energy Regulatory Commission (FERC) and the Department of Energy (DOE), data modernization programs at the Federal Permitting Improvement Steering Council (FPISC), and funding opportunities and pilot projects at the DOE and the Federal Emergency Management Agency (FEMA).
Thoughtful policy coordination, targeted investments, and continued federal support will be needed to realize the potential of these applications and pave the way for further innovation.
Challenge and Opportunity
Surging Load Growth, Extreme Events, and a Fragmented Federal Response
Surging energy demand and more frequent extreme weather events are bringing new challenges to the forefront of grid planning and operations. Not only is electric load growing at rates not seen in decades, but extreme weather events and cybersecurity threats are becoming more common and costly. All the while, our grid is becoming more complex to operate as new sources of generation and grid management tools evolve. Underlying these complexities is the fragmented nature of our energy system: a patchwork of regional grids, localized standards, and often conflicting regulations.
The emergence of artificial intelligence (AI) has brought many of these challenges into sharper focus. However, the potential of AI to mitigate, sidestep, or solve these challenges is also vast. From more efficient permitting processes to more reliable grid operations, many unique AI use cases for grid modernization are ready to deploy today and have high-impact potential.
The federal government has a unique role to play in both meeting these challenges and catalyzing these opportunities by implementing AI solutions. However, the current federal landscape is fragmented, unaligned, and missing critical opportunities for impact. Nearly a dozen federal agencies and offices are engaged across the AI grid modernization ecosystem (see FAQ #2), with few coordinating in the absence of a defined federal strategy.
To prioritize effective and efficient deployment of resources, recommendations for increased investments (both in time and capital) should be based on a solid understanding of where the gaps and opportunities lie. Historically, program offices across DOE and other agencies have focused efforts on early-stage R&D and foundational science activities for emerging technology. For AI, however, the federal government is well-positioned to support further deployment of the technology into grid modernization efforts, rather than just traditional R&D activities.
AI Applications for Grid Modernization
AI’s potential in grid modernization is significant, expansive, and deployable. Across four distinct categories—grid planning, siting and permitting, operations and reliability, and resilience—AI can improve existing processes or enable entirely new ones. Indeed, the use of AI in the power sector is not a new phenomenon. Industry and government alike have long utilized machine learning (ML) models across a range of power sector applications, and the recent introduction of “foundation” models (such as large language models, or LLMs) has opened up a new suite of transformational use cases. While LLMs and other foundation models can be used in various use cases, AI’s potential to accelerate grid modernization will span both traditional and novel approaches, with many applications requiring custom-built models tailored to specific operational, regulatory, and data environments.
The following 14 use cases are drawn from DOE’s AI for Energy (2024) report and form the foundation of this memo’s analytical framework.
Grid Planning
- Capital Allocations and Planned Upgrades. Use AI to optimize utility investment decisions by forecasting asset risk, load growth, and grid needs to guide substation upgrades, reconductoring, or distributed energy resource (DER)-related capacity expansions.
- Improved Information on Grid Capacity. Use AI to generate more granular and dynamic hosting capacity, load forecast, and congestion data to guide DER siting, interconnection acceleration, and non-wires alternatives.
- Improved Transportation and Energy Planning Alignment. Use AI-enabled joint forecasting tools to align EV infrastructure rollout with utility grid planning by integrating traffic, land use, and load growth data.
- Interconnection Issues and Power Systems Models. Use AI-accelerated power flow models and queue screening tools to reduce delays and improve transparency in interconnection studies.
Siting and Permitting
- Zoning and Local Permitting Analysis. Use AI to analyze zoning ordinances, land use restrictions, and local permitting codes to identify siting barriers or opportunities earlier in the project development process.
- Federal Environmental Review Accelerations. Use AI tools to extract, organize, and summarize unstructured and disparate datasets to support more efficient and consistent reviews.
- AI Models to Assist Subject Matter Experts in Reviews. Use AI and document analysis tools to support expert reviewers by checking for completeness, inconsistencies, or precedent in technical applications and environmental documents.
Grid Operations and Reliability
- Load and Supply Matching. Use AI to improve short-term load forecasting and optimize generation dispatch, reducing imbalance costs and improving integration of variable resources.
- Predictive and Risk-Informed Maintenance. Use AI to predict asset degradation or failure and inform maintenance schedules based on equipment health, environmental stressors, and historical failure data.
- Operational Safety and Issues Reporting and Analysis. Apply AI to analyze safety incident logs, compliance records, and operator reports to identify patterns of human error, procedural risks, or training needs.
Grid Resilience
- Self-healing Infrastructure for Reliability and Resilience. Use AI to autonomously isolate faults, reconfigure power flows, and restore service in real time through intelligent switching and local control systems.
- Detection and Diagnosis of Anomalous Events. Use AI to identify and localize grid disturbances such as faults, voltage anomalies, or cyber intrusions using high-frequency telemetry and system behavior data.
- AI-enabled Situational Awareness and Actions for Resilience. Leverage AI to synthesize grid, weather, and asset data to support operator awareness and guide event response during extreme weather or grid stress events.
- Resilience with Distributed Energy Resources. Coordinate DERs during grid disruptions using AI for forecasting, dispatch, and microgrid formation, enabling system flexibility and backup power during emergencies.
However, not all applications are created equal. With limited resources, the federal government should prioritize use cases that show high-impact potential and demonstrate feasible levels of deployment readiness. Additional investments should also be allocated to high-impact / low-readiness use cases to help unlock and scale these applications.
Unlocking the potential of these use cases requires a better understanding of which ones meet specific readiness and impact benchmarks. The matrix below provides a framework for thinking through these questions.
Drawing on the use cases identified above, we've mapped AI's applications in grid modernization onto a "readiness-impact" chart based on six unique scoring scales (see appendix for full methodological and scoring breakdown).
Readiness Scale Questions
- Technical Readiness. Is the AI solution mature, validated, and performant?
- Financial Readiness. Is it cost-effective and fundable (via CapEx, OpEx, or rate recovery)?
- Regulatory Readiness. Can it be deployed under existing rules, with institutional buy-in?
Impact Scale Questions
- Value. Does this AI solution reduce costs, outages, emissions, or delays in a measurable way?
- Leverage. Does it enable or unlock broader grid modernization (e.g., DERs, grid enhancing technologies (GETs), and/or virtual power plant (VPP) integration)?
- Fit. Is AI the right or necessary tool to solve this compared to conventional tools (i.e., traditional transmission planning, interconnection study, and/or compliance software)?
Each AI application receives a score of 0-5 in each category; the scores are then averaged to determine its overall readiness and impact scores. To score each application, a detailed rubric was designed with scoring scales for each of the six categories above. Industry examples and experience, existing literature, and outside expert consultation were used to assign scores to each application.
When plotted on a coordinate plane, each application falls into one of four quadrants, helping us easily identify key insights about each use case.
- High-Impact / High-Readiness use cases → Deploy now
- High-Impact / Low-Readiness → Invest, unlock, and scale
- Low-Impact / High-Readiness → Optional pilots, but deprioritize federal effort
- Low-Impact / Low-Readiness → Monitor private sector action
Once plotted, we can then identify additional insights, such as where the clustering happens, what barriers are holding back the highest impact applications, and if there are recurring challenges (or opportunities) across the four categories of grid modernization efforts.
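A minimal sketch of this scoring logic appears below: each axis is the average of its three category scores, and a use case is classified by where it falls relative to a quadrant boundary. The boundary at 2.5 (the midpoint of the 0-5 scale) and the example scores are our illustrative assumptions; the appendix rubric governs the actual scoring.

```python
from statistics import mean

# Illustrative 0-5 scores for one hypothetical use case.
readiness = {"technical": 4, "financial": 3, "regulatory": 3}
impact = {"value": 5, "leverage": 4, "fit": 4}

def quadrant(readiness_scores: dict, impact_scores: dict,
             threshold: float = 2.5) -> tuple:
    """Average each axis, then classify into one of four quadrants."""
    r, i = mean(readiness_scores.values()), mean(impact_scores.values())
    label = {
        (True, True): "High-Impact / High-Readiness: deploy now",
        (True, False): "High-Impact / Low-Readiness: invest and unlock",
        (False, True): "Low-Impact / High-Readiness: optional pilots",
        (False, False): "Low-Impact / Low-Readiness: monitor",
    }[(i >= threshold, r >= threshold)]
    return round(r, 2), round(i, 2), label

print(quadrant(readiness, impact))
# -> (3.33, 4.33, 'High-Impact / High-Readiness: deploy now')
```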
Plan of Action
Grid Planning
Average Readiness Score: 2.3 | Average Impact Score: 3.8
- AI use cases in grid planning face the highest financial and regulatory hurdles of any category. Reducing these barriers can unlock high-impact potential.
- These tools are high-leverage use cases. Getting these deployed unlocks deeper grid modernization activities system-wide, such as grid-enhancing technology (GETs) integration.
- While many of these AI tools are technically mature, adoption is not yet mainstream.
Recommendation 1. The Federal Energy Regulatory Commission (FERC) should clarify the regulatory pathway for AI use cases in grid planning.
Regional Transmission Organizations (RTOs), utilities, and Public Utility Commissions (PUCs) require confidence that AI tools are approved and supported before they deploy them at scale. They also need financial clarity on viable pathways to rate-basing significant up-front costs. Building on Commissioner Rosner’s Letters Regarding Interconnection Automation, FERC should establish a FERC-DOE-RTO technical working group on “Next-Gen Planning Tools” that informs FERC-compliant AI-enabled planning, modeling, and reporting standards. Current regulations (and traditional planning approaches) leave uncertainty around the explainability, validation, and auditability of AI-driven tools.
Thus, the working group should identify where AI tools can be incorporated into planning processes without undermining existing reliability, transparency, or stakeholder-participation standards. The group should develop voluntary technical guidance on model validation standards, transparency requirements, and procedural integration to provide a clear pathway for compliant adoption across FERC-regulated jurisdictions.
Siting and Permitting
Average Readiness Score: 2.7 | Average Impact Score: 3.8
- Zoning and local permitting tools are promising, but adoption is fragmented across state, local, and regional jurisdictions.
- Federal permitting acceleration tools score high on technical readiness but face institutional distrust and a complicated regulatory environment.
- In general, tools in this category have high value but limited transferability beyond highly specific scenarios (low leverage). Even if unlocked at scale, they have narrower application potential than other tools analyzed in this memo.
Recommendation 2. The Federal Permitting Improvement Steering Council (FPISC) should establish a federal siting and permitting data modernization initiative.
AI tools can increase speed and consistency in siting and permitting processes by automating the review of complex datasets, but without structured data, standardized workflows, and agency buy-in, their adoption will remain fragmented and niche. Furthermore, most grid infrastructure data (including siting and permitting documentation) is confidential and protected, leading to industry skepticism about the ability of AI to maintain important security measures alongside transparent workflows. To address these concerns, FPISC should launch a coordinated initiative that creates structured templates for federal permitting documents, pilots AI integration at select agencies, and develops a public validation database that allows AI developers to test their models (with anonymized data) against real agency workflows. Having launched a $30 million effort in 2024 to improve IT systems across multiple agencies, FPISC is well-positioned to take those lessons learned and align deeper AI integration across the federal government's permitting processes. Coordination with the Council on Environmental Quality (CEQ), which was recently called on to develop a Permitting Technology Action Plan, is also encouraged. Additional Congressional appropriations to FPISC can unlock further innovation.
Operations and Reliability
Average Readiness Score: 3.6 | Average Impact Score: 3.6
- Overall, this category has the highest average readiness across technical, financial, and regulatory scales. These use cases are clear “ready-now” wins.
- They also have the highest fit component of impact, representing unique opportunities for AI tools to improve on existing systems and processes in ways that traditional tools cannot.
Recommendation 3. Launch an AI Deployment Challenge at DOE to scale high-readiness tools across the sector.
From the SunShot Initiative (2011) through the Energy Storage Grand Challenge (2020) to the Energy Earthshots (2021), DOE has a long history of catalyzing the deployment of new technology in the power sector. A dedicated grand challenge – funded with new Congressional appropriations at the Grid Deployment Office – could deploy matching grants or performance-based incentives to utilities, co-ops, and municipal providers to accelerate adoption of proven AI tools.
Grid Resilience
Average Readiness Score: 3.4 | Average Impact Score: 4.2
- As a category, resilience applications have the highest overall impact score, including a perfect value score across all four use cases. There is significant potential in deploying AI tools to solve these challenges.
- Alongside operations and reliability use cases, these tools also exhibit the highest technical readiness, demonstrating technical maturity alongside high value potential.
- Anomalous events detection is the highest-scoring use case across all 14 applications, on both readiness and impact scales. It’s already been deployed and is ready to scale.
Recommendation 4. DOE, the Federal Emergency Management Agency (FEMA), and FERC should create an AI for Resilience Program that funds and validates AI tools that support cross-jurisdictional grid resilience.
AI for resilience applications often require coordination across traditional system boundaries, from utilities to DERs, microgrids to emergency managers, as well as high levels of institutional trust. Federal coordination can catalyze system integration by funding demo projects, developing integration playbooks, and clarifying regulatory pathways for AI-automated resilience actions.
Congress should direct DOE and FEMA, in consultation with FERC, to establish a new program (or carve out existing grid resilience funds) to: (1) support demonstration projects where AI tools are already being deployed during real-world resilience events; (2) develop standardized playbooks for integrating AI into utility and emergency management operations; and (3) clarify regulatory pathways for actions like DER islanding, fault rerouting, and AI-assisted load restoration.
Conclusion
Managing surging electric load growth while improving the grid’s ability to weather more frequent and extreme events is a once-in-a-generation challenge. Fortunately, new technological innovations combined with a thoughtful approach from the federal government can actualize the potential of AI and unlock a new set of solutions, ready for this era.
Rather than technological limitations, many of the outstanding roadblocks identified here are institutional and operational, highlighting the need for better federal coordination and regulatory clarity. The readiness-impact framework detailed in this memo provides a new way to understand these challenges while laying the groundwork for a timely and topical plan of action.
By identifying which AI use cases are ready to scale today and which require targeted policy support, this framework can help federal agencies, regulators, and legislators prioritize high-impact actions. Strategic investments, regulatory clarity, and collaborative initiatives can accelerate the deployment of proven solutions while innovating and building trust in new ones. By pulling on the right policy levers, AI can improve grid planning, streamline permitting, enhance reliability, and make the grid more resilient, meeting this moment with both urgency and precision.
This memo is part of our AI & Energy Policy Sprint, a policy project to shape U.S. policy at the critical intersection of AI and energy. Read more about the Policy Sprint and check out the other memos here.
Scoring categories (readiness & impact) were selected based on the literature on challenges to AI deployment in the power sector. An LLM (OpenAI's GPT-4o model) was used to refine the 0-5 scoring scale after careful consideration of the multi-dimensional challenges across each category, based on the author's industry experience and additional consultation with outside technical experts. Where applicable, existing frameworks underpin the scales used in this memo: technology readiness levels for the 'technical readiness' category and adoption readiness levels for the 'financial' and 'regulatory' readiness categories. A rubric was then designed to guide scoring.
Each of the 14 AI applications was then scored against that rubric based on the author's analysis of existing literature, industry examples, and professional experience. Outside experts were consulted and provided additional feedback and insights throughout the process.
Below is a comprehensive, though not exhaustive, list of the key Executive Branch actors involved in AI-driven grid modernization efforts. A detailed overview of the various roles, authorities, and ongoing efforts can be found here.
Executive Office of the President (Office of Science and Technology Policy (OSTP), Council on Environmental Quality (CEQ)); Department of Commerce (National Institute of Standards and Technology (NIST)); Department of Defense (Energy, Installations, and Environment (EI&E), Defense Advanced Research Projects Agency (DARPA)); Department of Energy (Advanced Research Projects Agency-Energy (ARPA-E), Energy Efficiency and Renewable Energy (EERE), Grid Deployment Office (GDO), Office of Critical and Emerging Technologies (CET), Office of Cybersecurity, Energy Security, and Emergency Response (CESER), Office of Electricity (OE), National Laboratories); Department of Homeland Security (Cybersecurity and Infrastructure Security Agency (CISA)); Federal Energy Regulatory Commission (FERC); Federal Permitting Improvement Steering Council (FPISC); Federal Emergency Management Agency (FEMA); National Science Foundation (NSF)
A full database of how the federal government is using AI across agencies can be found at the 2024 Federal Agency AI Use Case Inventory. A few additional examples of private sector applications, or public-private partnerships are provided below.
Grid Planning
- EPRI’s Open Power AI Consortium
- Google’s Tapestry
- Octopus Energy’s Kraken
Siting and Permitting
Operations and Reliability
- Schneider Electric’s One Digital Grid Platform
- Camus Energy
- Amperon
Grid Resilience
Enhancing the US Power Grid by Using AI to Accelerate Permitting
The increased demand for power in the United States is driven by new technologies such as artificial intelligence, data analytics, and other computationally intensive activities that rely on ever-faster, power-hungry processors. The federal government's desire to reshore critical manufacturing industries and shift the economy from services to goods production will, if successful, drive energy demands even higher.
Many of the projects that would deliver the energy to meet rising demand are in the interconnection queue, waiting to be built. There is more power in the queue than on the grid today. The average wait time in the interconnection queue is five years and growing, primarily due to permitting timelines. In addition, many projects are cancelled due to the prohibitive cost of interconnection.
We have identified six opportunities where Artificial Intelligence (AI) has the potential to speed the permitting process.
- AI can be used to speed decision-making by regulators through rapidly analyzing environmental regulations and past decisions.
- AI can be used to identify generation sites that are more likely to receive permits.
- AI can be used to create a database of state and federal regulations that brings all requirements together in one place.
- AI can be used in conjunction with the database of state regulations to automate the application process and create visibility of permit status for stakeholders.
- AI can be used to automate and accelerate interconnection studies.
- AI can be used to develop a set of model regulations for local jurisdictions to adapt and adopt.
Challenge and Opportunity
There are currently over 11,000 power generation and consumption projects in the interconnection queue, waiting to connect to the United States power grid. As a result, on average, projects must wait five years for approval, up from three years in 2010.
Historically, a large percentage of projects in the queue, averaging approximately 70%, have been withdrawn due to a variety of factors, including economic viability and permitting challenges. About one-third of wind and solar applications submitted from 2019 to 2024 were cancelled, and about half of these applications faced delays of 6 months or more. For example, the Calico Solar Project in the California Mojave Desert, with a capacity of 850 megawatts, was cancelled due to lengthy multi-year permitting and re-approvals for design changes. Increasing queue wait time is likely to increase the number of projects cancelled and delay those that are viable.
The U.S. grid added 20.2 gigawatts of utility-scale generating capacity in the first half of 2024, a 21% increase over the first half of 2023. However, this is still less than is likely to be needed to meet increasing power demands in the U.S., and it does not account for the retirement of generation capacity, which totaled 5.1 gigawatts in the first half of 2024. New capacity must both replace aging energy infrastructure as it is taken offline and meet rising demand. Data centers alone are increasing power usage dramatically, from 1.9% of U.S. energy consumption in 2018 to 4.4% in 2023, with consumption expected to reach at least 6.7% by 2028.
If we want to achieve the Administration’s vision of restoring U.S. domestic manufacturing capacity, a great deal of generation capacity not currently forecast will also need to be added to the grid very rapidly, far faster than indicated by the current pace of interconnections. The primary challenge that slows most power from getting onto the grid is permitting. A secondary challenge that frequently causes projects to be delayed or cancelled is interconnection costs.
Projects frequently face significant permitting challenges. Projects not only need to obtain permits to operate the generation site but must also obtain permits to move power to the point where it connects to the existing grid. Geographically remote projects may require new transmission lines that cover many miles and cross multiple jurisdictions. Even projects relatively close to the existing grid may require multiple permits to connect to the grid.
In addition, poor site selection has resulted in the cancellation of several high-profile renewable installation projects. The Battle Born Solar Project, valued at $1 billion with an 850 megawatt capacity, was cancelled after community concern that the solar farm would impact tourism and archaeological sites on the Mormon Mesa in Nevada. Another project, a 150 megawatt solar facility proposed for Culpeper County, Virginia, was denied permits for interfering with the historic site of a Civil War battle. Similarly, a geothermal plant in Nevada had to be scaled back to less than a third of its original plan after it was found to be in the only known habitat of the endangered Dixie Valley toad. While community opposition to renewable energy installations is not always avoidable, often arising from complaints about construction impacts and from misinformation, better site selection could save developers time and money by avoiding locations that encroach on historical sites, local attractions, or endangered species' habitats.
Projects have also historically faced cost challenges as utilities and grid operators could charge the full cost of new operating capacity to each project, even when several pending projects could utilize the same new operating assets. On July 28, 2023, FERC issued a final rule with a compliance date of March 21, 2024, that requires transmission providers to consider all projects in the queue and determine how operating assets would be shared when calculating the cost of connecting a project to the grid. However, the process for calculating costs can be cumbersome when many projects are involved.
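To illustrate why shared-asset cost calculation matters, the toy sketch below splits a hypothetical shared upgrade's cost pro rata by each project's requested capacity. All figures are invented, and actual FERC-compliant allocation methods are considerably more involved.

```python
# Toy sketch of cluster-style cost allocation: the cost of a shared
# network upgrade is split pro rata by requested capacity, rather
# than billed in full to a single project.
shared_upgrade_cost = 120_000_000  # hypothetical upgrade cost ($)

projects_mw = {"Solar A": 200, "Wind B": 300, "Storage C": 100}
total_mw = sum(projects_mw.values())

for name, mw in projects_mw.items():
    share = shared_upgrade_cost * mw / total_mw
    print(f"{name}: {mw} MW -> ${share:,.0f}")

# Without sharing, any one of these projects could have been assigned
# the full $120M; pro rata allocation cuts Solar A's burden to $40M.
```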
On April 15, 2025, the Trump Administration issued a Presidential Memorandum titled "Updating Permitting Technology for the 21st Century." This memo directs executive departments and agencies to take full advantage of technology for environmental review and permitting processes and creates a permitting innovation center (PIC). While it is unclear how much authority the PIC will have, it demonstrates the Administration's focus in this area and may serve as a change agent in the future. There is an opportunity to use AI to improve both the speed and the cost of connecting new projects to the grid. Below are recommendations to capitalize on this opportunity.
Plan of Action
Recommendation 1. Funding for PNNL to expand the PolicyAI NEPA model to streamline environmental permitting processes beyond the federal level.
In 2023, Pacific Northwest National Laboratory (PNNL) was tasked by DOE with developing a PermitAI prototype to help regulators understand National Environmental Policy Act (NEPA) regulations and speed up project environmental reviews. PNNL data scientists created an AI-searchable database of federal environmental impact statements, composed primarily of information that was not readily available to regulators before. The database contains textual data extracted from documents across 2,917 different projects, stored as 3.6 million tokens from the GPT-2 tokenizer. Tokens are the units into which text is broken down for natural language processing AI models. The entire dataset is currently publicly available via HuggingFace. The database then powers generative-AI search, in which a large language model (LLM) quickly finds documents and summarizes relevant results. While development of this database is still preliminary and efficiency metrics have not yet been published, complaints from those involved in permitting about the complexity of the process and the lack of guidelines suggest this approach should be a model for tools that could be developed and provided to state and local regulators to assist with permitting reviews.
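As a rough sketch of how such a search layer might work, the example below embeds environmental-review passages and retrieves the most relevant ones for a regulator's question; a downstream LLM would then summarize the hits. The dataset identifier, field names, and embedding model are placeholders, not PNNL's actual release.

```python
# Sketch of generative-AI search over an EIS corpus, in the spirit of
# PermitAI. The dataset ID and "text" field are hypothetical stand-ins.
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, util

corpus = load_dataset("example-org/nepa-eis-corpus", split="train")
model = SentenceTransformer("all-MiniLM-L6-v2")

# Embed each document chunk once; in practice, cache these embeddings.
doc_texts = [row["text"] for row in corpus]
doc_embeddings = model.encode(doc_texts, convert_to_tensor=True)

query = "noise mitigation conditions applied to solar projects near wetlands"
query_embedding = model.encode(query, convert_to_tensor=True)

# Retrieve the most similar passages for LLM summarization downstream.
hits = util.semantic_search(query_embedding, doc_embeddings, top_k=5)[0]
for hit in hits:
    print(f"score={hit['score']:.3f}  {doc_texts[hit['corpus_id']][:120]}...")
```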
In 2021, PNNL created a similar process, without using AI, for NEPA permitting of small-to-medium-sized nuclear reactors, which simplified the process and reduced the environmental review time from three-to-six years to between six and twenty-four months. Using AI has the potential to shorten the process dramatically for renewables permitting. The National Renewable Energy Laboratory (NREL) has also studied using LLMs to expedite the processing of policy data from legal documents and found results that support expanding the use of LLMs for policy database analysis, particularly compared with the manual effort currently required.
State and local jurisdictions can use the "Updating Permitting Technology" Presidential Memorandum as guidance for aligning state and local permitting efforts. The PNNL database of federal NEPA materials, trained on past NEPA cases, would be provided by PNNL to state jurisdictions as a service, through a process similar to that used by EPA, so that state jurisdictions do not need to independently develop data collection solutions. Ideally, the initial data analysis model would be trained to be specific to each participating state and continually updated with new material to create a seamless regulatory experience.
Since PNNL has already built a NEPA model, and since this work is being expanded into a multi-lab effort that includes NREL, Argonne, and others, the House Energy and Water Development Committee could appropriate additional funding to the Office of Policy (OP) or the Office of Energy Efficiency and Renewable Energy (EERE) to enable the labs to expand the model and make it available to state and local regulatory agencies to integrate into their permitting processes. States could develop models specific to their ordinances on the backbone of PNNL's PermitAI. This effort could be expedited through engagement with the Environmental Council of the States (ECOS).
A shared database of NEPA information would reduce time spent reviewing backlogs of data from environmental review documents. State and local jurisdictions would more efficiently identify relevant information and precedent, and speed decision-making while reducing costs. An LLM tool also has the benefit of answering specific questions asked by the user. An example would be answering a question about issues that have arisen for similar projects in the same area.
Recommendation 2. Appropriate funding to expand AI site selection tools and support state and local pilots to improve permitting outcomes and reduce project cancellations.
AI could be used to identify sites that are suitable for energy generation, with different models eventually trained for utility-scale solar siting, onshore and offshore wind siting, and geothermal power plant siting. Key concerns affecting the permitting process include the loss of arable land, impacts on wildlife, and community responses, like opposition based on land use disagreements. Better site selection identifies these issues before they appear during the permitting process.
AI can access data from a range of sources, including satellite imagery from Google Earth, commercially available lidar studies, and local media screening, to identify locations with the fewest potential barriers or to identify and mitigate barriers for sites that have already been selected. Unlike Recommendation 1, which involves answering questions by pulling from large databases using LLMs, this would primarily utilize machine learning algorithms that process past and current data to identify patterns and predict outcomes, like energy generation potential. Examples of datasets these tools can use are the free, publicly available products created by the Innovative Data Energy Applications (IDEA) group in NREL's Strategic Energy Analysis Center (SEAC), including the national solar radiation database and the wind resource database. The national solar radiation database visualizes the amount of solar energy potential at a given time and predicts future availability of solar energy for a given location in the dataset, which covers the entirety of the United States.
The wind resource database is a collection of modeled wind resource estimates for locations within the United States. In addition, Argonne National Lab has developed the GEM tool to support NEPA reviews for transmission projects. A few start-ups have synthesized such datasets, adding information like terrain and slope, to create site-selection decision-making tools. AI analysis of local news and of landmarks important to local communities, to flag locations likely to oppose renewable installations, is particularly important, since community opposition often kills renewable generation projects that have made it into the permitting process.
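The sketch below illustrates the machine-learning side of this idea: a classifier trained to predict permitting success from site attributes. The features, data, and model are entirely synthetic stand-ins; a real tool would draw its features from the NREL databases, lidar studies, and land-use layers described above.

```python
# Toy sketch of a site-screening model: predict permit success from
# site features. All data here is synthetic and for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(3, 7, n),    # solar resource (kWh/m^2/day)
    rng.uniform(0, 25, n),   # slope (degrees)
    rng.uniform(0, 50, n),   # distance to transmission (km)
    rng.integers(0, 2, n),   # protected habitat nearby (0/1)
])
# Synthetic label: flat, sunny, grid-adjacent sites without habitat
# conflicts are more likely to be permitted.
p = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.15 * X[:, 1]
                      - 0.05 * X[:, 2] - 2.0 * X[:, 3] - 2.0)))
y = rng.binomial(1, p)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```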
The House Committee for Energy and Water Development could appropriate funds to DOE’s Grid Deployment Office which could collaborate with EERE, FECM (Fossil Energy and Carbon Management), NE (Nuclear Energy) and OE (Office of Electricity) to further expand the technology specific models as well as to expand Argonne’s GEM tool. GDO could also provide grant funding to state and local government permitting authorities to pilot AI-powered site selection tools created by start-ups or other organizations. Local jurisdictions, in turn, could encourage use by developers.
Better site selection would speed permitting processes and reduce the number of cancelled projects, as well as wasted time and money by developers.
Recommendation 3. Funding for DOE labs to develop an AI-based permitting database, starting with a state-level pilot, to streamline permit site identification and application for large-scale energy projects.
Use AI to identify all of the non-environmental federal, state, and local permits required for generation projects. A pilot project, focused on one generation type, such as solar, should be launched in a state that is positioned for central coordination. New York may be the best candidate, as the Office of Renewable Energy Siting and Electric Transmission has exclusive jurisdiction over on-shore renewable energy projects of at least 25 megawatts.
A second option could be Illinois, which has statewide standards for utility-scale solar and wind facilities where local governments cannot adopt more restrictive ordinances. This would require the development of a database of regulations and the ability to query that database to provide a detailed list of required permits for each project by jurisdiction, the relevant application process, and forms. The House Energy and Water Development Committee could direct funds to EERE to support PNNL, NREL, Argonne, and other DOE labs to develop this database. Ideally, this tool would be integrated with tools developed by local jurisdictions to automate their individual permitting process.
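To make the shape of such a database concrete, the sketch below stores permit requirements by jurisdiction and project type and queries them for a specific project. The table layout, permit names, and URLs are invented for illustration (ORES is the New York siting office named above).

```python
# Minimal sketch of a queryable permitting-requirements database.
# Schema, permit names, and URLs are hypothetical examples.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE permit_requirements (
    jurisdiction TEXT, project_type TEXT, min_mw REAL,
    permit_name TEXT, issuing_agency TEXT, form_url TEXT)""")
con.executemany(
    "INSERT INTO permit_requirements VALUES (?, ?, ?, ?, ?, ?)",
    [("New York", "solar", 25, "Major Renewable Energy Facility Siting Permit",
      "ORES", "https://example.gov/ores-form"),
     ("New York", "solar", 0, "Stormwater Construction Permit",
      "NYSDEC", "https://example.gov/swppp-form")])

# List every permit a 100 MW solar project in New York would need.
rows = con.execute(
    """SELECT permit_name, issuing_agency, form_url
       FROM permit_requirements
       WHERE jurisdiction = ? AND project_type = ? AND min_mw <= ?""",
    ("New York", "solar", 100)).fetchall()
for name, agency, url in rows:
    print(f"{name} ({agency}): {url}")
```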
State-level regulatory coordination would speed the approval of projects contained within a single state, as well as improve coordination between states.
Recommendation 4. Appropriate funds for DOE to develop a state-level AI permitting application to streamline renewable energy permit approvals and improve transparency.
Use AI as a tool to complete the permitting process. While it would be nearly impossible to create a national permitting tool, it would be realistic to create a tool that could be used to manage developers’ permitting processes at the state level.
NREL developed a permitting tool, with funding from the DOE Solar Energy Technologies Office (SETO), for residential rooftop solar permitting. The tool, SolarAPP+, automates plan review, permit approval, and project tracking. As of the end of 2023, it had saved more than 33,000 hours of permitting staff time across more than 32,800 projects. However, permitting for rooftop solar is less complex than permitting for utility-scale solar sites or wind farms, which require more environmental reviews, wildlife endangerment reviews, and community feedback. Combining the AI frameworks developed by PNNL (see Recommendation 1) with the development work completed by NREL could produce tools similar to SolarAPP+ for large-scale renewable installations, with similar results in projects approved and time saved. An application that may meet this need is currently under development at NREL.
The House Energy and Water Development Committee should appropriate funds for DOE to create an application through PNNL and NREL that would utilize the NREL SolarAPP+ framework that could be implemented by states to streamline the permitting application process. This would be especially helpful for complex projects that cross multiple jurisdictions. In addition, Congress, through appropriation by the House Energy and Water Development Committee to DOE’s Grid Deployment Office, could establish a grant program to support state and local level implementation of this permitting tool. This tool could include a dashboard to improve permitting transparency, one of the items required by the Presidential Memorandum on Updating Permitting Technology.
Developers are frequently unclear about what permits are required, especially for complex multi-jurisdiction projects. The AI tool would reduce the time a developer spends identifying permits and would support smaller developers who don't have permitting consultants or prior experience. An integrated electronic permitting solution would reduce the complexity of applying for and approving permits. With a state-wide system, state and local regulators would only need to add their location-specific requirements and forms to a state-maintained system. Finally, an integrated system with a dashboard could increase status visibility and help resolve issues more quickly. Together, these tools would allow developers to build realistic budgets and time frames, allocate resources, and prioritize the projects with the greatest chance of approval.
Recommendation 5. Direct FERC to require RTOs to evaluate and possibly implement AI tools to automate interconnection analysis processes.
Use AI tools to reduce the complexity of publishing and analyzing the mandated maps and assigning costs to projects. While FERC has mandated that grid operators consider all projects coming onto the grid when setting interconnection pricing, and that they weigh project readiness rather than time in queue, the requirements are complex to implement.
A number of private sector companies have begun developing tools to model interconnections. Pearl Street has used its model to reproduce a complex and lengthy interconnection cluster study in ten days, and PJM recently announced a collaboration with Google to develop an analysis capability. Given the private sector efforts in this space, the public interest would be best served by FERC requiring RTOs to evaluate and implement, if suitable, an automated tool to speed their analysis process.
Automating parts of interconnection studies would allow developers to quickly understand the real cost of a new generation project, allowing them to quickly evaluate feasibility. It would create more cost certainty for projects and would also help identify locations where planned projects have the potential to reduce interconnection costs, attracting still more projects to share new interconnections. Conversely, the capability would also quickly identify when new projects in an area would exceed expected grid capacity and increase the costs for all projects. Ultimately, the automation would lead to more capacity on the grid faster and at a lower cost as developers optimize their investments.
Recommendation 6. Provide funding to DOE to extend the use of NREL’s AI-compiled permitting data to develop model local regulations. The results could be used to promote standardization through national stakeholder groups.
As noted earlier, one of the biggest challenges in permitting is the complexity of the varying, and sometimes conflicting, local regulations that a project must comply with. Several years ago, NREL, in support of the DOE Office of Policy, spent 1,500 staff hours manually compiling what was believed to be a complete list of local energy permitting ordinances across the country. In 2024, NREL used an LLM to compile the same information in a fraction of the time, with a 90% success rate.
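A minimal sketch of the kind of extraction pipeline this implies appears below. The prompt, the JSON schema, and the call_llm() helper are hypothetical stand-ins rather than NREL’s actual method; the 90% success rate reported above is precisely why human spot-checking would remain part of any production workflow.

```python
# Hedged sketch of an LLM extraction pipeline for permitting ordinances.
# The prompt, JSON schema, and call_llm() helper are hypothetical
# stand-ins, not NREL's actual method.
import json

EXTRACTION_PROMPT = """From the ordinance text below, return JSON with:
- "technology": "solar", "wind", or "other"
- "setback_feet": required setback distance, or null if unspecified
- "max_height_feet": height limit, or null if unspecified
Ordinance text:
{ordinance_text}"""

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around whatever LLM API is available."""
    raise NotImplementedError("plug in an LLM client here")

def extract_ordinance(ordinance_text: str) -> dict:
    raw = call_llm(EXTRACTION_PROMPT.format(ordinance_text=ordinance_text))
    record = json.loads(raw)  # fails loudly on malformed model output
    if record["technology"] not in {"solar", "wind", "other"}:
        raise ValueError("out-of-schema value; route to human review")
    return record

# Running extract_ordinance() over thousands of scraped municipal codes,
# then spot-checking the output, replaces most of the 1,500 staff hours
# of manual compilation while keeping humans in the loop.
```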
The House Appropriations Subcommittee on Energy and Water Development should direct DOE to fund continued development of the NREL permitting database and to use an LLM to analyze that information and produce a set of model regulations that could be promoted to encourage standardization. Policymakers and external organizations could encourage adoption of those regulations through engagement with the National Governors Association, the National Association of Counties, the United States Conference of Mayors, and other relevant stakeholders.
Local jurisdictions often adopt regulations based on a limited understanding of best practices and appropriate standards. A set of model regulations would give those jurisdictions sound guidance and reduce complexity for developers.
Conclusion
As demand on the electrical grid grows, the need to bring new generation capacity online faster becomes increasingly urgent. Deployment of new capacity is slowed by challenges related to site selection, environmental reviews, permitting, and interconnection costs and wait times. While AI accounts for much of the increasing demand for energy in the United States, it can also be a powerful tool to help the nation meet that demand.
The six recommendations identified in this memo address all of these concerns by applying AI to speed the process of bringing new power to the grid. AI can be used to assist with site selection, analyze environmental regulations, help both regulators and the regulated community understand requirements, develop better regulations, streamline permitting processes, and reduce the time required for interconnection studies.
This memo is part of our AI & Energy Policy Sprint, a policy project to shape U.S. policy at the critical intersection of AI and energy. Read more about the Policy Sprint and check out the other memos here.
The combined generating capacity of the projects awaiting approval is about 1,900 gigawatts, excluding ERCOT and NYISO, which do not report this data. In comparison, the generating capacity of the U.S. grid as of Q4 2023 was 1,189 gigawatts. Even if the current high cancellation rate of 70% holds, the queue would still yield roughly a 50% increase in the power available on the grid, through a $600B investment in U.S. energy infrastructure.
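For transparency, the arithmetic behind the roughly 50% figure, using only the numbers cited above:

```python
# Worked arithmetic behind the ~50% figure.
queue_gw = 1_900          # capacity awaiting approval (excl. ERCOT, NYISO)
installed_gw = 1_189      # U.S. grid capacity as of Q4 2023
cancellation_rate = 0.70  # share of queued projects historically withdrawn

surviving_gw = queue_gw * (1 - cancellation_rate)  # 570 GW
increase = surviving_gw / installed_gw             # ~0.48
print(f"{surviving_gw:.0f} GW surviving -> {increase:.0%} increase")
```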
FERC’s five-year growth forecast through 2029 predicts 128 gigawatts of increased demand for power. In that context, the net addition of 15.1 gigawatts in the first half of 2024, if sustained, implies roughly 150 gigawatts of new power over five years and little excess capacity over the horizon. This projection assumes that the pace of additions does not decline, retirements do not increase, and the load forecast does not grow. All these estimates apply to a system where supply and demand are already so closely matched that FERC predicted supply shortages in several regions in the summer of 2024.
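A similar back-of-the-envelope check of the 150-gigawatt projection, assuming (as this memo does, not FERC) that the first-half 2024 pace of additions simply continues:

```python
# Back-of-the-envelope check of the ~150 GW projection, assuming the
# first-half-2024 pace of additions holds steady for five years.
h1_2024_additions_gw = 15.1
annual_pace_gw = h1_2024_additions_gw * 2      # ~30 GW per year
five_year_additions_gw = annual_pace_gw * 5    # ~151 GW by 2029
forecast_demand_gw = 128
headroom_gw = five_year_additions_gw - forecast_demand_gw
print(f"{headroom_gw:.0f} GW of headroom over five years")  # ~23 GW
```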
Construction delays and cost overruns can be an issue, but they are more frequently a factor in large projects such as nuclear plants and large oil and gas facilities, and are rarely a factor for wind and solar, whose components are factory-built and modular.
While the current administration has declared a National Energy Emergency to expedite approvals for energy projects, the order excludes wind, solar, and batteries, which make up 90% of the capacity presently in the interconnection queue and mirror the mix of capacity recently added to the grid. The expedited permitting processes the administration requires therefore apply to only 10% of the queue: 7% natural gas and 3% composed of nuclear, oil, coal, hydrogen, and pumped hydro. Since solar, wind, and batteries are unlikely to be granted similar permitting relief, and relying on as-yet unplanned fossil fuel projects to bring more energy to the grid is not realistic, other methods must be found to speed new power to the grid.