Emerging Technology
day one project

Measuring and Standardizing AI’s Energy and Environmental Footprint to Accurately Assess Impacts

06.27.25 | 15 min read | Text by Mitul Jhaveri & Vijaykumar Palat

The rapid expansion of artificial intelligence (AI) is driving a surge in data center energy consumption, water use, carbon emissions, and electronic waste—yet these environmental impacts, and how they will change in the future, remain largely opaque. Without standardized metrics and reporting, policymakers and grid operators cannot accurately track or manage AI’s growing resource footprint. Currently, companies often use outdated or narrow measures (like Power Usage Effectiveness, PUE) and purchase renewable credits to obscure true emissions. Their true carbon footprint may be as much as 662% higher than the figures they report. A single hyperscale AI data center can guzzle hundreds of thousands of gallons of water per day​ and contribute to a “mountain” of e-waste​, yet only about a quarter of data center operators even track what happens to retired hardware​.

This policy memo proposes a set of congressional and federal executive actions to establish comprehensive, standardized metrics for AI energy and environmental impacts across model training, inference, and data center infrastructure. We recommend that Congress direct the Department of Energy (DOE) and the National Institute of Standards and Technology (NIST) to design, collect, monitor, and disseminate uniform and timely data on AI’s energy footprint, while designating the White House Office of Science and Technology Policy (OSTP) to coordinate a multi-agency council that oversees implementation. Our plan of action outlines steps for developing metrics (led by DOE, NIST, and the Environmental Protection Agency [EPA]), implementing data reporting (with the Energy Information Administration [EIA], National Telecommunications and Information Administration [NTIA], and industry), and integrating these metrics into energy and grid planning (performed by DOE’s grid offices and the Federal Energy Regulatory Commission [FERC]). By standardizing how we measure AI’s footprint, the U.S. can be better prepared for the growth in power consumption while maintaining its leadership in artificial intelligence.

Challenge and Opportunity

Inconsistent metrics and opaque reporting make future AI power‑demand estimates extremely uncertain, leaving grid planners in the dark and climate targets on the line.

AI’s Opaque Footprint

Generative AI and large-scale cloud computing are driving an unprecedented increase in energy demand. AI systems require tremendous amounts of computing power both during training (the AI development period) and inference (when AI is used in real-world applications). The rapid rise of this new technology is already straining energy and environmental systems at an unprecedented scale. Data centers consumed an estimated 415 terawatt-hours (TWh) of electricity in 2024 (roughly 1.5% of global power demand), and with AI adoption accelerating, the International Energy Agency (IEA) forecasts that data center energy use could more than double to 945 TWh by 2030. This added load is comparable to powering an entire country the size of Sweden or even Germany. There is a range of projections of AI’s energy consumption, with some estimates suggesting even more rapid growth than the IEA. Estimates suggest that much of this growth will be concentrated in the United States.

The large divergence in estimates for AI-driven electricity demand stems from the different assumptions and methods used in each study. One study estimates demand from AI query volume (the number of requests users make for AI answers); another works from the estimated supply of AI-related hardware; others project a compound annual growth rate (CAGR) for data centers under different growth scenarios. Authors also make varying assumptions about chip shipment growth, workload mix (training vs. inference), efficiency gains, and per-query energy. Amid this fog of measurement confusion, energy suppliers are caught by surges in demand from new compute infrastructure on top of existing demands from sources like electric vehicles and manufacturing. Electricity grid operators in the United States typically plan for gradual increases in power demand that can be met with incremental generation and transmission upgrades. But if the rapid build-out of AI data centers, on top of other growing power demands, pushes global demand up by hundreds of additional terawatt-hours annually, it will shatter the steady-growth assumption embedded in today’s models. Planners need far more granular, forward-looking forecasting methods to avoid driving up costs for ratepayers, last-minute scrambles to find power, and potential electricity reliability crises.

This surge in power demand also threatens to undermine climate progress. Many new AI data centers require 100–1,000 megawatts (MW), equivalent to the demand of a medium-sized city, while grid operators face connection lead times of over two years for clean energy supplies. In response to these power bottlenecks, some regional utilities, unable to supply enough clean electricity, have even resorted to restarting retired coal plants to meet data center loads, undermining local climate goals and efficient operation. Google’s carbon emissions rose 48% over the past five years and Microsoft’s by 23.4% since 2020, largely due to cloud computing and AI.

In spite of the risks to the climate, carbon emissions data is often obscured: firms often claim “carbon neutrality” via purchased clean power credits, while their actual local emissions go unreported. One analysis found Big Tech (Amazon, Meta) data centers may emit up to 662% more CO₂ than they publicly report​. For example, Meta’s 2022 data center operations reported only 273 metric tons CO₂ (using market-based accounting with credits), but over 3.8 million metric tons CO₂ when calculated by actual grid mix according to one analysis—a more than 19,000-fold increase​. Similarly, AI’s water impacts are largely hidden. Each interactive AI query (e.g. a short session with a language model) can indirectly consume half a liter of fresh water through data center cooling​, contributing to millions of gallons used by AI servers—but companies rarely disclose water usage per AI workload. This lack of transparency masks the true environmental cost of AI, hinders accountability, and impedes smart policymaking.
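The gap between market-based and location-based accounting comes down to whether purchased clean-power credits are netted out before reporting. A minimal sketch of the arithmetic (only Meta’s reported 273-ton figure comes from the analysis cited above; the energy total, grid factor, and helper names are hypothetical):

```python
# Sketch of the two accounting methods. Only the 273-ton reported figure is
# from the cited analysis; the energy total and grid emissions factor below
# are hypothetical round numbers chosen for illustration.

def location_based_tons(energy_mwh: float, grid_factor_t_per_mwh: float) -> float:
    """Emissions implied by the actual grid mix serving the facility."""
    return energy_mwh * grid_factor_t_per_mwh

def market_based_tons(location_tons: float, credited_tons: float) -> float:
    """Emissions after netting out purchased clean-power credits (floored at zero)."""
    return max(0.0, location_tons - credited_tons)

location = location_based_tons(8_000_000, 0.48)       # ~3.84 million tons CO2
market = market_based_tons(location, location - 273)  # credits cover nearly all of it
# The same facility can report 273 tons while the grid actually absorbs millions.
```

The point of a standardized metric is to require the location-based figure alongside the market-based one, so the netting is visible rather than hidden.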

Outdated and Fragmented Metrics 

Legacy measures like Power Usage Effectiveness (PUE) miss what is important for AI compute efficiency, such as water consumption, hardware manufacturing, and e-waste.

The metrics currently used to gauge data center efficiency are insufficient for AI-era workloads. Power Usage Effectiveness (PUE), the two-decades-old standard, gives only a coarse snapshot of facility efficiency under ideal conditions. PUE measures total power delivered to a data center versus how much of that power actually reaches the IT equipment inside. The more power lost along the way (e.g. to cooling), the worse the PUE ratio. However, PUE does not measure how efficiently the IT equipment uses the power it receives. Think of a car that reports how much fuel reaches the engine but not the engine’s miles per gallon: a good PUE confirms that fuel isn’t leaking out of the line, but the engine itself may still be running inefficiently. Likewise, a data center can avoid losing energy to cooling overhead and still run wasteful hardware. An AI training cluster with a “good” PUE (around 1.1) could still be wasteful if the hardware or software is poorly optimized.
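To make the limitation concrete, here is a minimal Python sketch (all numbers hypothetical, and the helper names our own) showing two clusters with the same “good” PUE but very different useful output per watt:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power (1.0 is ideal)."""
    return total_facility_kw / it_equipment_kw

def performance_per_watt(throughput_flops: float, it_equipment_kw: float) -> float:
    """What PUE misses: useful AI throughput per watt of IT power."""
    return throughput_flops / (it_equipment_kw * 1_000)

# Two hypothetical clusters, each drawing 1,000 kW of IT power in a 1,100 kW facility:
facility_pue = pue(1_100, 1_000)                  # both report a "good" 1.1
well_tuned = performance_per_watt(5e14, 1_000)    # well-utilized hardware
poorly_tuned = performance_per_watt(1e14, 1_000)  # same power, 5x less useful work
```

Pairing a facility metric (PUE) with a compute metric (performance per watt) makes the wasteful cluster visible even though both facilities look identical to PUE alone.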

In the absence of updated standards, companies “report whatever they choose, however they choose” regarding AI’s environmental impact. Few report water usage or lifecycle emissions. Only 28% of operators track hardware beyond its use, and just 25% measure e-waste​, resulting in tons of servers and AI chips quietly ending up in landfills. This data gap leads to misaligned incentives—for instance, firms might build ever-larger models and data centers, chasing AI capabilities, without optimizing for energy or material efficiency because there is no requirement or benchmark to do so.

Opportunities for Action

Standardizing metrics for AI’s energy and environmental footprint presents a win-win opportunity. By measuring and disclosing AI’s true impacts, we can manage them. With better data, policymakers can incentivize efficiency innovations (from chip design to cooling to software optimization) and target grid investments where AI load is rising. Industry will benefit too: transparency can highlight inefficiencies (e.g. low server utilization, or waste heat from water cooling that could be recycled) and spur cost-saving improvements.

Importantly, several efforts are already pointing the way. In early 2024, bicameral lawmakers introduced the Artificial Intelligence Environmental Impacts Act, aiming to have the EPA study AI’s environmental footprint and develop measurement standards and a voluntary reporting system via NIST. Internationally, the European Union’s upcoming AI Act will require large AI systems to report energy use, resource consumption, and other life cycle impacts, and the ISO is preparing “sustainable AI” standards for energy, water, and materials accounting. The U.S. can build on this momentum.

A recent U.S. Executive Order (Jan 2025) already directed DOE to draft reporting requirements for AI data centers covering their entire lifecycle—from material extraction and component manufacturing to operation and retirement—including metrics for embodied carbon (greenhouse-gas emissions that are “baked into” the physical hardware and facilities before a single watt is consumed to run a model), water usage, and waste heat. It also launched a DOE–EPA “Grand Challenge” to push the PUE ratio below 1.1 and minimize water usage in AI facilities. These signals show that there is willingness to address the problem. Now is the time to implement a comprehensive framework that standardizes how we measure AI’s environmental impact.
If we seize this opportunity, we can ensure innovation in AI is driven by clean energy, a smarter grid, and less environmental and economic burden on communities.

Plan of Action

To address this challenge, Congress should authorize DOE and NIST to lead an interagency working group and a consortium of public, private, and academic communities to enact a phased plan to develop, implement, and operationalize standardized metrics, in close partnership with industry.

Recommendation 1. Identify and Assign Agency Mandates

Creating and implementing this measurement framework requires concerted action by multiple federal agencies, each leveraging its mandate. The Department of Energy (DOE) should serve as the co-lead federal agency driving this initiative. Within DOE, the Office of Critical and Emerging Technologies (CET) can coordinate AI-related efforts across DOE programs, given its focus on AI and advanced tech integration. The National Institute of Standards and Technology (NIST) will also act as a co-lead for this initiative, leading the metrics development and standardization effort as described and convening experts and industry. The White House Office of Science and Technology Policy (OSTP) will act as the coordinating body for this multi-agency effort. OSTP, alongside the Council on Environmental Quality (CEQ), can ensure alignment with broader energy, environment, and technology policy. The Environmental Protection Agency (EPA) should take charge of environmental data collection and oversight. The Federal Energy Regulatory Commission (FERC) should play a supporting role by addressing grid and electricity market barriers. FERC should streamline interconnection processes for new data center loads, perhaps creating fast-track procedures for projects that commit to high efficiency and demand flexibility.

Congressional leadership and oversight will be key. The Senate Committee on Energy and Natural Resources and House Energy & Commerce Committee (which oversee energy infrastructure and data center energy issues) should champion legislation and hold hearings on AI’s energy demands. The House Science, Space, and Technology Committee and Senate Commerce, Science, & Transportation Committee (which oversee NIST and OSTP) should support R&D funding and standards efforts. Environmental committees (like Senate Environment and Public Works, House Natural Resources) should address water use and emissions. Ongoing committee oversight can ensure agencies stay on schedule and that recommendations turn into action (for example, requiring an EPA/DOE/NIST joint report to Congress within a set timeframe).

Congress should mandate a formal interagency task force or working group, co-led by the Department of Energy (DOE) and the National Institute of Standards and Technology (NIST), with the White House Office of Science and Technology Policy (OSTP) serving as the coordinating body and involving all relevant federal agencies. This body will meet regularly to track progress, resolve overlaps or gaps, and issue public updates. By clearly delineating responsibilities, the federal government can address the measurement problem holistically.

Recommendation 2. Develop a Comprehensive AI Energy Lifecycle Measurement Framework

A complete view of AI’s environmental footprint requires metrics that span the full lifecycle, including every layer from chip to datacenter, workload drivers, and knock‑on effects like water use and electricity prices.

Create new standardized metrics that capture AI’s energy and environmental footprint across its entire lifecycle—training, inference, data center operations (cooling/power), and hardware manufacturing/disposal. This framework should be developed through a multi-stakeholder process led by NIST in partnership with DOE and EPA, and in consultation with industry, academia, and state and local governments.

Key categories should include:

  1. Data Center Efficiency Metrics: how effectively do data centers use power?
  2. AI Hardware & Compute Metrics: e.g. Performance per Watt (PPW)—the throughput of AI computations per watt of power.
  3. Cooling and Water Metrics: How much energy and water are being used to cool these systems?
  4. Environmental Impact Metrics: What is the carbon intensity per AI task?
  5. Composite or Lifecycle Metrics: Beyond a single point in time, what are the lifetime characteristics of impact for these systems?

Designing standardized metrics

NIST, with its measurement science expertise, should coordinate the development of these metrics in an open process, building on efforts like NIST’s AI Standards Working Group—a standing body chartered under the Interagency Committee on Standards Policy which brings together technical stakeholders to map the current AI-standards landscape, spot gaps, and coordinate U.S. positions and research priorities. The goal is to publish a standardized metrics framework and guidelines that industry can begin adopting voluntarily within 12 months. Where possible, leverage existing standards (for example, those from the Green Grid consortium on PUE and Water Usage Effectiveness (WUE), or IEEE/ISO standards for energy management) and tailor them to AI’s unique demands. Crucially, these metrics must be uniformly defined to enable apples-to-apples comparisons and periodically updated as technology evolves.

Review, Governance, and improving metrics

We recommend establishing a Metrics Review Committee (led by NIST with DOE/EPA and external experts) to refine the metrics as needed, host stakeholder workshops, and issue public updates. This continuous improvement process will keep the framework current with new AI model types, cooling technology, and hardware advances, ensuring relevance into the future. For example, when we move from the current model of chatbots responding to queries to agentic AI systems that plan, act, remember, and iterate autonomously, traditional “energy per query” metrics will no longer capture the full picture.

Recommendation 3. Operationalize Data Collection, Reporting, Analysis and Integrate it into Policy

Start with a six-month voluntary reporting program, then gradually move toward a mandatory reporting mechanism that feeds directly into EIA outlooks and FERC grid planning.

The task force should solicit input via a Request for Information (RFI), similar to DOE’s recent RFI on AI infrastructure development, asking data center operators, AI chip manufacturers, cloud providers, utilities, and environmental groups to weigh in on feasible reporting requirements and data-sharing methods. Within 12 months of starting, this task force should complete (a) a draft AI energy lifecycle measurement framework (with standardized definitions for energy, water, carbon, and e-waste metrics across training and data center operations), and (b) an initial reporting template for technology companies, data centers, and utilities to pilot.

With standardized metrics in hand, we must shift the focus to implementation and data collection at scale. In the beginning, a voluntary AI energy reporting program can be launched by DOE and EPA (with NIST overseeing the standards). This program would provide guidance to AI developers (e.g. major model-training companies), cloud service providers, and data center operators to report their metrics on an annual or quarterly basis.

After a trial run of the voluntary program, Congress should enact legislation to create a mandatory reporting regime that borrows the best features of existing federal disclosure programs. One useful template is EPA’s Greenhouse Gas Reporting Program, which obliges any facility that emits more than 25,000 tons of CO₂ equivalent per year to file standardized, verifiable electronic reports. The same threshold logic could be adapted for data centers (e.g., those with more than 10 MW of IT load) and for AI developers that train models above a specified compute budget. A second model is DOE/EIA’s Form EIA-923 “Power Plant Operations Report,” whose structured monthly data flow straight into public statistics and planning models. An analogous “Form EIA-AI-01” could feed the Annual Energy Outlook and FERC reliability assessments without creating a new bureaucracy.

EIA could also consider adding specific questions or categories to the Commercial Buildings Energy Consumption Survey and Form EIA-861 to identify energy use by data centers and large computing loads. This may involve coordinating with the Census Bureau to leverage industrial classification data (e.g., NAICS codes for data hosting facilities) so that baseline energy and water consumption of the “AI sector” is measured in national statistics.

NTIA, which often convenes multi-stakeholder processes on technology policy, can host industry roundtables to refine reporting processes and address any concerns (e.g. data confidentiality, trade secrets). NTIA can also help ensure that reporting requirements are not overly burdensome to smaller AI startups by working out streamlined methods (perhaps aggregated reporting via cloud providers). DOE’s Grid Deployment Office (GDO) and Office of Electricity (OE), with better data, should start integrating AI load growth into grid planning models and funding decisions.
For example, GDO could prioritize transmission projects that will deliver clean power to regions with clusters of AI data centers, based on EIA data showing rapid load increases. FERC, for its part, can use the reported data to update its reliability and resource adequacy guidelines and possibly issue guidance for regional grid operators (RTOs/ISOs) to explicitly account for projected large computing loads in their plans.
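The threshold logic described above is straightforward to encode. A hedged sketch (the 25,000-ton CO₂e and 10 MW cutoffs are the examples discussed above; the training-compute threshold and the function name are hypothetical placeholders):

```python
# Hedged sketch of a reporting trigger. The CO2e and MW cutoffs are the
# examples from the memo; the training-compute threshold and function
# name are hypothetical.

def must_report(annual_co2e_tons: float = 0.0,
                it_load_mw: float = 0.0,
                training_compute_flop: float = 0.0) -> bool:
    """True if any single threshold obliges the entity to file a standardized report."""
    return (annual_co2e_tons > 25_000         # GHGRP-style emitter threshold
            or it_load_mw > 10                # large data center threshold
            or training_compute_flop > 1e26)  # hypothetical training-compute budget
```

Keeping the triggers independent mirrors the GHGRP design: a small facility training a very large model would still report, as would a large facility running modest workloads.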

Table 1. Roles and Responsibilities to Measure AI’s Environmental Impact

Department of Energy (DOE): Co-lead
  • Office of Critical and Emerging Technologies (CET): coordinate AI-related efforts across DOE programs
  • Office of Energy Efficiency and Renewable Energy (EERE): lead on promoting energy-efficient data center technologies and practices (e.g. through R&D programs and partnerships)
  • Office of Electricity (OE) and Grid Deployment Office (GDO): address grid integration challenges (ensuring AI data centers have access to reliable clean power)
  • Collaborate with utilities and FERC to plan for AI-driven electricity demand growth and to encourage demand-response or off-peak operation strategies for energy-hungry AI clusters

National Institute of Standards and Technology (NIST): Co-lead for metrics and standards
  • Lead metrics development and standardization efforts
  • Convene experts and industry stakeholders
  • Revive/expand AI Standards Coordination Working Group for sustainability metrics
  • Publish technical standards for measuring AI energy use, water use, and emissions
  • Host stakeholder consortium on AI environmental impacts (with EPA and DOE)

White House Office of Science and Technology Policy (OSTP): Coordinating body
  • Coordinate multi-agency efforts
  • Work with Council on Environmental Quality (CEQ) to align with climate and tech policy
  • Integrate AI energy metrics into federal sustainability requirements via the Federal Chief Sustainability Officer and OMB guidance
  • Update OMB memos on data center optimization to include AI-specific measures

Environmental Protection Agency (EPA): Environmental oversight
  • Lead environmental data collection and oversight
  • Conduct comprehensive study of AI’s environmental impacts (with DOE)
  • Examine AI systems’ lifecycle emissions, water use, and e-waste
  • Apply greenhouse gas (GHG) accounting expertise
  • Quantify metrics like carbon intensity using location-based grid emissions factors

Federal Energy Regulatory Commission (FERC): Grid and market support
  • Address grid and electricity market barriers
  • Streamline interconnection processes for new data center loads
  • Create fast-track procedures for high-efficiency, demand-flexible projects
  • Ensure regional grid reliability assessments account for projected AI/data center load growth

Congressional Committees: Legislative oversight
  • Energy committees (Senate Committee on Energy and Natural Resources; House Energy & Commerce Committee): champion legislation and hold hearings on AI energy demands
  • Science/technology committees (House Science, Space, and Technology Committee; Senate Commerce, Science, & Transportation Committee): support R&D funding and standards efforts
  • Environmental committees (Senate Environment and Public Works; House Natural Resources): address water use and emissions
  • Oversight functions: ensure agencies stay on schedule; require EPA/DOE/NIST joint report to Congress; address further legislative needs

This transparency will let policymakers, researchers, and consumers track improvements (e.g., is the energy per AI training run decreasing over time?) and identify leaders and laggards. It will also inform mid-course adjustments: if certain metrics prove too hard to collect or not meaningful, NIST can update the standards. The Census Bureau can contribute by testing the inclusion of questions on technology infrastructure in its 2027 Economic Census and annual surveys, ensuring that economic data on the tech sector includes environmental parameters (for example, collecting data center utility expenditures, which correlate with energy use). Overall, this would establish an operational reporting system and start feeding the data into both policy and market decisions.

Through these recommendations, responsible offices have clear roles: DOE spearheads efficiency measures in data center initiatives; the Office of Electricity (OE) and Grid Deployment Office (GDO) use the data to guide grid improvements; NIST creates and maintains the measurement standards; EPA oversees environmental data and impact mitigation; EIA institutionalizes energy data collection and dissemination; FERC adapts regulatory frameworks for reliability and resource adequacy; OSTP coordinates the interagency strategy and keeps the effort a priority; NTIA works with industry to smooth data exchange and involve them; and the Census Bureau integrates these metrics into broader economic data (see Table 1).

Meanwhile, non-governmental actors like utilities, AI companies, and data center operators must be not only data providers but partners. Utilities could use this data to plan investments and share insights on demand response or energy sourcing; AI developers and data center firms will implement new metering and reporting practices internally, enabling them to compete on efficiency (similar to car companies competing on miles-per-gallon ratings). Together, these actions create a comprehensive approach: measuring AI’s footprint, managing its growth, and mitigating its environmental impacts through informed policy.

Table 2. Example Metrics to Illustrate the Types of Shared Information

Data Center Efficiency Metrics
  • Power Usage Effectiveness (PUE): ratio of total facility energy to IT equipment energy, refined for AI workloads. Measures overall data center energy efficiency for AI-specific operations.
  • Data Center Infrastructure Efficiency (DCIE): IT power versus total facility power (inverse of PUE). Alternative perspective on facility efficiency, focusing on the IT equipment’s share.
  • Energy Reuse Factor (ERF): quantifies how much waste heat is reused on-site. Measures ability to capture and utilize waste heat, reducing overall energy needs.
  • Carbon Usage Effectiveness (CUE): links energy use with carbon emissions (kg CO₂ per kWh). Provides a holistic view of facility carbon intensity beyond just power usage.

Environmental Metrics
  • Energy Intensity: energy consumed per unit of data volume processed (kWh/GB). Reveals energy cost per data unit; useful for tuning AI models.
  • Annual Water Consumption: total liters of water used annually at the data center level. Tracks overall water consumption, essential for annual planning and sustainability reporting.

AI Hardware & Compute Metrics
  • Performance per Watt (PPW): throughput of AI computations (FLOPS or inferences) per watt of power. Encourages energy-efficient model training and inference hardware.
  • Compute Utilization: average utilization rates of AI accelerators (GPUs/TPUs). Ensures expensive hardware is well-utilized rather than idling.
  • Training Energy per Model: total kWh or emissions per training run (normalized by model size/training-hours). Quantifies the energy cost of model development.
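Several of the facility-level metrics in Table 2 reduce to simple ratios over the same telemetry. A minimal Python sketch (every value below is hypothetical, chosen only to show how the ratios relate):

```python
# Hypothetical one-day telemetry for a single facility (all values illustrative):
facility_kwh = 26_400        # total energy entering the building
it_kwh = 24_000              # energy reaching the IT equipment
reused_kwh = 2_400           # waste heat captured and reused on-site
grid_kg_co2_per_kwh = 0.4    # grid emissions factor for the local mix

pue = facility_kwh / it_kwh                        # 1.1; lower is better
dcie = it_kwh / facility_kwh                       # inverse of PUE
erf = reused_kwh / facility_kwh                    # share of energy reused
cue = facility_kwh * grid_kg_co2_per_kwh / it_kwh  # kg CO2 per IT kWh
```

Because all four metrics share inputs, a standardized reporting template that collects the underlying telemetry once can derive every ratio consistently, rather than letting each operator compute them differently.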

Conclusion

AI’s extraordinary capabilities should not come at the expense of our energy security or environmental sustainability. This memo outlines how we can effectively operationalize measuring AI’s environmental footprint by establishing standardized metrics and leveraging the strengths of multiple agencies to implement them. By doing so, we can address a critical governance gap: what isn’t measured cannot be effectively managed. Standard metrics and transparent reporting will enable AI’s growth while ensuring that data center expansion is met with commensurate increases in clean energy, grid upgrades, and efficiency gains.

The benefits of these actions are far-reaching. Policymakers will gain tools to balance AI innovation with energy and environment goals: for example, the ability to require improvements if an AI service is energy-inefficient, or to fast-track permits for a new data center that meets top sustainability standards. Communities will be better protected: with data in hand, we can avoid scenarios where a cluster of AI facilities suddenly strains a region’s power or water resources without local officials knowing in advance. Instead, requirements for reporting and coordination can channel resources (like new transmission lines or water recycling systems) to those communities ahead of time. The AI industry itself will benefit by building trust and reducing the risk of backlash or heavy-handed regulation; a clear federal metrics framework provides predictability and a level playing field (everyone measures the same way), and it showcases responsible stewardship of technology. Moreover, emphasizing energy efficiency and resource reuse can reduce operating costs for AI companies in the long run, a crucial advantage as energy prices and supply chain concerns grow.

This memo is part of our AI & Energy Policy Sprint, a policy project to shape U.S. policy at the critical intersection of AI and energy. Read more about the Policy Sprint and check out the other memos here.

Frequently Asked Questions
Why do we need AI-specific environmental metrics? Don’t data centers already have efficiency standards?

While there are existing metrics like PUE for data centers, they don’t capture the full picture of AI’s impacts. Traditional metrics focus mainly on facility efficiency (power and cooling) and not on the computational intensity of AI workloads or the lifecycle impacts. AI operations involve unique factors—for example, training a large AI model can consume significant energy in a short time, and using that AI model continuously can draw power 24/7 across distributed locations. Current standards are outdated and inconsistent​: one data center might report a low PUE but could be using water recklessly or running hardware inefficiently. AI-specific metrics are needed to measure things like energy per training run, water per cooling unit, or carbon per compute task, which no standard reporting currently requires. In short, general data center standards weren’t designed for the scale and intensity of modern AI. By developing AI-specific metrics, we ensure that the unique resource demands of AI are monitored and optimized, rather than lost in aggregate averages. This helps pinpoint where AI can be made more efficient (e.g., via better algorithms or chips)—an opportunity not visible under generic metrics.

How will multiple agencies work together to implement these recommendations?

AI’s environmental footprint is a cross-cutting issue, touching on energy infrastructure, environmental impact, technological standards, and economic data. No single agency has the full expertise or jurisdiction to cover all aspects. Each agency will have clearly defined roles (as outlined in the Plan of Action). For instance, NIST develops the methodology, DOE/EPA collect and use the data, EIA disseminates it, and FERC/Congress use it to adjust policies. This collaborative approach prevents blind spots. A single-agency approach would likely miss critical elements (for instance, a purely DOE-led effort might not address e-waste or standardized methods, which NIST and EPA can). The good news is that frameworks for interagency cooperation already exist, and this initiative aligns with broader administration priorities (clean energy, reliable grid, responsible AI). Thus, while it involves multiple agencies, OSTP and the White House will ensure everyone stays synchronized. The result will be a comprehensive policy that each agency helps implement according to its strength, rather than a piecemeal solution. See below:


Roles and Responsibilities to Measure AI’s Environmental Impact



  • Department of Energy (DOE): DOE should serve as the co-lead federal agency driving this initiative. Within DOE, the Office of Critical and Emerging Technologies (CET) can coordinate AI-related efforts across DOE programs, given its focus on AI and advanced tech integration. DOE’s Office of Energy Efficiency and Renewable Energy (EERE) can lead on promoting energy-efficient data center technologies and practices (e.g. through R&D programs and partnerships), while the Office of Electricity (OE) and Grid Deployment Office address grid integration challenges (ensuring AI data centers have access to reliable clean power). DOE should also collaborate with utilities and FERC to plan for AI-driven electricity demand growth and to encourage demand-response or off-peak operation strategies for energy-hungry AI clusters.

  • National Institute of Standards and Technology (NIST): NIST will act as co-lead for this initiative, leading the metrics development and standardization effort described above and convening experts and industry. NIST should revive or expand its AI Standards Coordination Working Group to focus on sustainability metrics, and ultimately publish technical standards or reference materials for measuring AI energy use, water use, and emissions. NIST is also well suited to host a stakeholder consortium on AI environmental impacts, working in tandem with EPA and DOE.

  • White House, including the Office of Science and Technology Policy (OSTP): OSTP will act as the coordinating body for this multi-agency effort. OSTP, alongside the Council on Environmental Quality (CEQ), can ensure alignment with broader climate and tech policy (such as the U.S. Climate Strategy and AI initiatives). The Administration can also use the Federal Chief Sustainability Officer and OMB guidance to integrate AI energy metrics into federal sustainability requirements (for instance, updating OMB’s memos on data center optimization to include AI-specific measures).

  • Environmental Protection Agency (EPA): EPA should take charge of environmental data collection and oversight. In the near term, EPA (with DOE) would conduct the comprehensive study of AI’s environmental impacts, examining AI systems’ lifecycle emissions, water use, and e-waste. EPA’s expertise in greenhouse gas (GHG) accounting will ensure metrics like carbon intensity are rigorously quantified (e.g., using location-based grid emissions factors rather than unreliable REC-based accounting).
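To illustrate the gap between location-based and REC/market-based carbon accounting noted above, here is a simplified sketch. The function names and figures are illustrative assumptions, not an EPA method:

```python
def location_based_emissions_kg(energy_kwh: float, grid_factor_kg_per_kwh: float) -> float:
    """Emissions attributed using the actual regional grid mix where power is consumed."""
    return energy_kwh * grid_factor_kg_per_kwh

def market_based_emissions_kg(energy_kwh: float, grid_factor_kg_per_kwh: float,
                              rec_kwh: float) -> float:
    """Emissions after netting out kWh matched by purchased renewable energy
    certificates (RECs); the accounting style that can understate real impact."""
    covered_kwh = min(rec_kwh, energy_kwh)  # RECs cannot cover more than was consumed
    return (energy_kwh - covered_kwh) * grid_factor_kg_per_kwh

# A facility using 1 GWh on a 0.4 kg CO2/kWh grid, with RECs matching 90% of load:
location = location_based_emissions_kg(1_000_000, 0.4)       # 400,000 kg
market = market_based_emissions_kg(1_000_000, 0.4, 900_000)  # 40,000 kg
```

Even in this toy case the two methods differ by an order of magnitude, which is why the choice of accounting basis matters for any standardized metric.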

  • Federal Energy Regulatory Commission (FERC): FERC plays a supporting role by addressing grid and electricity market barriers. FERC should streamline interconnection processes for new data center loads, perhaps creating fast-track procedures for projects that commit to high efficiency and demand flexibility. FERC can also ensure that regional grid reliability assessments start accounting for projected AI/data center load growth using the reported data.

  • Congressional Committees: Congressional leadership and oversight will be key. The Senate Committee on Energy and Natural Resources and the House Energy & Commerce Committee (which oversee energy infrastructure and data center energy issues) should champion legislation and hold hearings on AI’s energy demands. The House Science, Space, and Technology Committee and the Senate Commerce, Science, & Transportation Committee (which oversee NIST and OSTP) should support R&D funding and standards efforts. Environmental committees (like Senate Environment and Public Works and House Natural Resources) should address water use and emissions. Ongoing committee oversight can ensure agencies stay on schedule and that recommendations turn into action (for example, requiring the EPA/DOE/NIST joint report to Congress in four years as the Act envisions, and then acting on any further legislative needs).

What exactly will companies and utilities have to report?

The plan requires high-level, standardized data that balances transparency with practicality. Companies running AI operations (like cloud providers or large AI model developers) would report metrics such as: total electricity consumed for AI computations (annually), average efficiency metrics (e.g., PUE, Carbon Usage Effectiveness [CUE], and Water Usage Effectiveness [WUE] for their facilities), water usage for cooling, and e-waste generated (the amount of hardware decommissioned and how it was handled). Most of these data points are already collected internally for cost and sustainability tracking; the difference is that they would be reported in a consistent format, possibly to a central repository. Utilities, if involved, might report aggregated data center load in their service territory or significant new interconnections for AI projects (much of this is already in utility planning documents). See below for examples.
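As a hypothetical illustration of the "consistent format" described above, the facility-level disclosure could be captured in a record like the following. The field names are our own invention, not a proposed federal schema:

```python
from dataclasses import dataclass

@dataclass
class FacilityReport:
    """Hypothetical annual disclosure record; field names are illustrative only."""
    facility_id: str
    year: int
    ai_electricity_kwh: float     # total electricity consumed for AI computations
    pue: float                    # average Power Usage Effectiveness
    cue_kg_co2_per_kwh: float     # Carbon Usage Effectiveness
    water_use_liters: float       # water used for cooling
    hardware_retired_kg: float    # e-waste generated (hardware decommissioned)
    hardware_recycled_kg: float   # portion of retired hardware recycled

    def recycling_ratio(self) -> float:
        # Fraction of decommissioned hardware that was recycled rather than landfilled
        if self.hardware_retired_kg == 0:
            return 0.0
        return self.hardware_recycled_kg / self.hardware_retired_kg
```

A uniform record like this is what would let EIA or a central repository aggregate and anonymize facility figures while still publishing sector-wide trends.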


Metrics to Illustrate the Types of Shared Information



  • Data Center Efficiency Metrics: Power Usage Effectiveness (PUE) (refined for AI workloads), Data Center Infrastructure Efficiency (DCIE), which measures IT versus total facility power (the inverse of PUE), Energy Reuse Factor (ERF) to quantify how much waste heat is reused on-site, and Carbon Usage Effectiveness (CUE) to link energy use with carbon emissions (kg CO₂ per kWh). These give a holistic view of facility efficiency and carbon intensity, beyond just power usage.

  • AI Hardware & Compute Metrics: Performance per Watt (PPW)—the throughput of AI computations (like FLOPS or inferences) per watt of power, which encourages energy-efficient model training and inference. Compute Utilization—whether expensive AI accelerators (GPUs/TPUs) are well utilized rather than idling (tracking average utilization rates). Training energy per model—total kWh or emissions per training run (possibly normalized by model size or training-hours). Inference efficiency—energy per 1,000 queries or per inference for deployed models. Idle power draw—the energy hardware draws when not actively in use, to be measured and minimized.

  • Cooling and Water Metrics: Cooling Energy Efficiency Ratio (EER)—the output cooling power per watt of energy input, to gauge cooling system efficiency. Water Usage Effectiveness (WUE)—liters of water used per kWh of IT compute, or simply total water used for cooling per year. These help quantify and benchmark the significant water and electricity overhead for thermal management in AI data centers.

  • Environmental Impact Metrics: Carbon Intensity per AI Task—CO₂ emitted per training run or per 1,000 inferences, which could be aggregated into an organizational carbon footprint for AI operations. Greenhouse Gas emissions per kWh—linking energy use to actual emissions based on grid mix or backup generation. Also, e-waste metrics—such as total hardware weight decommissioned annually, or a recycling ratio. For instance, tracking the tons of servers/chips retired and the fraction recycled versus landfilled can illuminate the life cycle impact.

  • Composite or Lifecycle Metrics: Ways of combining these factors to rate the overall sustainability of AI systems. For example, an “AI Sustainability Score” could incorporate energy efficiency, renewables use, cooling efficiency, and end-of-life recycling. Another idea is an “AI Energy Star” rating for AI hardware or cloud services that meet certain efficiency and transparency criteria, modeled after Energy Star appliance ratings.
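The facility-level metrics above are simple ratios of measured quantities, so a minimal sketch can make their relationships concrete. The numeric values below are illustrative, not real facility data:

```python
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    # Power Usage Effectiveness: total facility energy over IT energy (>= 1.0;
    # lower is better, 1.0 means zero cooling/power-distribution overhead)
    return total_facility_kwh / it_kwh

def dcie(total_facility_kwh: float, it_kwh: float) -> float:
    # Data Center Infrastructure Efficiency: IT share of total energy (inverse of PUE)
    return it_kwh / total_facility_kwh

def cue(total_co2_kg: float, it_kwh: float) -> float:
    # Carbon Usage Effectiveness: kg CO2 emitted per kWh of IT energy
    return total_co2_kg / it_kwh

def wue(water_liters: float, it_kwh: float) -> float:
    # Water Usage Effectiveness: liters of water per kWh of IT energy
    return water_liters / it_kwh

# Illustrative facility: 120,000 kWh total, 100,000 kWh of IT load,
# 40,000 kg CO2 emitted, 180,000 L of cooling water consumed.
print(pue(120_000, 100_000))   # 1.2
print(cue(40_000, 100_000))    # 0.4 kg CO2/kWh
print(wue(180_000, 100_000))   # 1.8 L/kWh
```

Because each metric shares the same IT-energy denominator, uniform sub-metering of IT load is the key measurement prerequisite for all of them.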

Won’t this be a burden or risk revealing trade secrets?

No. The intention is not to force disaggregation down to proprietary details (e.g., exactly how a specific algorithm uses energy) but rather to get macro-level indicators. Regarding trade secrets or sensitive information, the data collected (energy, water, emissions) does not reveal competitive algorithms or data; it concerns resource use. These figures are analogous to what many firms already publish in sustainability reports (power usage, carbon footprint), just reported more uniformly. There will be provisions to protect any sensitive facility-level data (e.g., EIA could aggregate or anonymize certain figures in public releases). The goal is transparency about environmental impact, not exposure of intellectual property.

How will these metrics and data actually be used by the government?

Once collected, the data will become a powerful tool for evidence-based policymaking and oversight. At the strategic level, DOE and the White House can track whether the AI sector is becoming more efficient or not—for instance, seeing trends in energy-per-AI-training decreasing (good) or total water use skyrocketing (a flag for action).

What are some examples?

  • Energy planning: EIA will incorporate the numbers into its models, which guide national energy policy and investment. If the data show that AI is driving, say, an extra 5% of electricity demand growth in certain regions, DOE’s Grid Deployment Office and FERC can respond by facilitating grid expansions or reliability measures in those areas.

  • Climate policy: EPA can use reported emissions data to update greenhouse gas inventories and identify whether AI/data centers are becoming a significant source; if so, that could shape future climate regulations or programs (ensuring this sector contributes to emissions reduction goals).

  • Water resource management: If large water usage by AI appears in drought-prone areas, federal and state agencies can work on water recycling or alternative cooling initiatives.

  • Research and incentives: DOE’s R&D programs (through ARPA-E or the National Labs) can target the pain points revealed—e.g., if e-waste volumes are high, fund research into longer-lasting hardware or recycling tech; if certain metrics like Energy Reuse Factor are low, push demonstration projects for waste heat reuse.

This could inform everything from ESG investment decisions to local permitting. For example, a company planning a new data center might be asked by local authorities, “What’s your expected PUE and water usage? The national average for AI data centers is X—will you do better?” In essence, the data ensures the government and public can hold the AI industry accountable for progress (or regress) on sustainability. By integrating these data into models and policies, the government can anticipate and avert problems (like grid strain or high emissions) before they grow, and steer the sector toward solutions.

The tech industry is global, so how will U.S. metrics align internationally?

AI services and data centers are worldwide, so consistency in how we measure impacts is important. The U.S. effort will be informed by and contribute to international standards. Notably, the ISO (International Organization for Standardization) is already developing criteria for sustainable AI, including energy, raw materials, and water metrics across the AI lifecycle. NIST, which often represents the U.S. in global standards bodies, is involved and will ensure that our metrics framework aligns with ISO’s emerging standards. Similarly, the EU’s AI Act includes requirements for reporting AI energy and resource use. By moving early on our own metrics, the U.S. can actually help shape what those international norms look like, rather than react to them. This initiative will encourage U.S. agencies to engage in forums like the Global Partnership on AI (GPAI) or bilateral tech dialogues to promote common sustainability reporting frameworks. In the end, aligning metrics internationally will create a more level playing field—ensuring that AI companies can’t simply shift operations to avoid transparency. If the U.S., EU, and others all require similar disclosures, it reinforces responsible practices everywhere.

What if these measures make AI development more expensive or slow down innovation?

Shining a light on energy and resource use can drive new innovation in efficiency. Initially, there may be modest costs—for example, installing better sub-meters in data centers or dedicating staff time to reporting. However, these costs are relatively small in context. Many leading companies already track these metrics internally for cost management and corporate sustainability goals. We are recommending formalizing and sharing that information. Over time, the data collected can reduce costs: companies will identify wasteful practices (maybe servers idling, or inefficient cooling during certain hours) and correct them, saving on electricity and water bills. There is also an economic opportunity in innovation: as efficiency becomes a competitive metric, we expect increased R&D into low-power AI algorithms, advanced cooling, and longer-life hardware. Those innovations can improve performance per dollar as well. Moreover, policy support can offset any burdens—for instance, the government can provide technical assistance or grants to smaller firms to help them improve energy monitoring. We should also note that unchecked resource usage carries its own risks to innovation: if AI’s growth starts causing blackouts or public backlash due to environmental damage, that would seriously hinder AI progress.

Table 3. Roles of Government and Non-Government Stakeholders

| Type | Agency / Office | Metric Development | Data Collection & Reporting | Analysis & Planning Integration | Policy & Oversight |
|---|---|---|---|---|---|
| Government | DOE – EERE | Lead role in energy efficiency metrics | Supports voluntary reporting systems | Integrates energy data into planning tools | Leads clean energy transitions |
| Government | DOE – OE | | | Uses data for grid forecasting | Coordinates grid reliability planning |
| Government | DOE – GDO | | | Integrates data into infrastructure planning | Prioritizes transmission buildout |
| Government | EPA | Co-leads lifecycle, emissions, water, and e-waste metrics | Leads environmental impact data collection | Tracks emissions, water use, and e-waste | Oversees regulation and Congressional briefings |
| Government | NIST | Lead on standardized metrics (PUE, WUE, etc.) | Provides protocols for reporting | Ensures data accuracy | Aligns with international standards |
| Government | EIA | Advises on metric use in national stats | Collects energy/water usage data | Publishes AI-specific trends | Maintains transparency and reporting |
| Government | FERC | | Collects grid data from ISOs/RTOs | Integrates demand into reliability planning | Issues grid and rate guidance |
| Government | OSTP | Coordinates interagency framework | Oversees implementation roadmap | Monitors alignment with national AI goals | Ensures cross-agency cohesion |
| Government | NTIA | Supports digital infrastructure metric design | Industry interface for data exchange | Highlights interconnection/data demand | Aligns broadband/data policy with AI metrics |
| Government | Census Bureau | Develops AI/data infrastructure codes | Adds metrics to Economic Census | Cross-validates with energy data | Incorporates AI sector into federal stats |
| Non-Government | AI Developers | Work with NIST to refine compute/task efficiency metrics | Report training/inference energy, water, and emissions | Share model-specific data for load estimation | Participate in voluntary federal programs, support transparency |
| Non-Government | Data Center Operators | Support infrastructure-level metric development (PUE, WUE) | Report operational metrics (PUE, WUE, emissions) | Share utilization/design data for planning | Engage in certifications, ESG benchmarks |
| Non-Government | Utility Companies & Grid Operators | | Provide energy delivery data, load forecasts, interconnection data | Inform regional reliability and grid expansion models | Align rates, plans with AI load growth |
| Non-Government | Infrastructure Developers | | Report energy/cooling projections and needs | Support planning/zoning/water coordination | Comply with environmental regulations |
| Non-Government | Industry Consortia & Auditors | Assist in standard-setting and benchmarks | Aggregate anonymized member data | Validate and synthesize trends for government use | Provide third-party verification, build trust |