Solving the Clean Energy Infrastructure Finance Rubik’s Cube
Building Blocks to Make Solutions Stick
Capital is not the constraint; alignment is. Catalyzing large-scale climate and energy infrastructure requires government to act as a systems integrator—synchronizing policy, de-risking commercialization, modernizing valuation, and coordinating markets so private capital can move with speed and confidence.
Implications for democratic governance
- Visible coordination builds credibility: Investors, communities, and companies need to see how policy pieces fit together. Fragmented, asynchronous policymaking erodes trust and slows deployment.
- Risk tolerance must be publicly legitimized: If democratic institutions punish every failed deal but ignore portfolio-level gains, agencies will default to paralysis. A mature democracy must distinguish responsible risk-taking from mismanagement.
- Transparency is market infrastructure: Open data, common modeling tools, and clearer capital pathways empower regulators, communities, and innovators to interrogate and improve investment decisions.
Capacity needs
- Systems-level policy choreography: Agencies capable of synchronizing rules, guidance, financing programs, and permitting reforms on coordinated timelines rather than rolling them out in isolation.
- Transaction-speed infrastructure: Staffing models, underwriting playbooks, and surge capacity that match private-sector deal timelines while maintaining integrity.
- Interstate coordination platforms: Formal mechanisms for states to harmonize standards, pool procurement, share data, and replicate successful pilots without restarting from scratch.
- Accessible technical/economic infrastructure: Publicly credible data sets, modeling tools, and valuation methodologies that lower barriers to entry and allow meaningful third-party scrutiny.
- Deal templates and archetypes: Clear, standardized financing pathways that signal how government capital will engage at different risk tiers and technology stages.
Jump to…
- Executive summary
- “Come together”: Defragmenting markets through regional coordination
- “That’s what friends are for”: Overcoming commercialization barriers through partnerships
- “Highway to the deployment zone”: Faster, risk-weighted transaction execution
- “Okay, now let’s get in formation”: Programmatic policy synchronization for fast market formation
- “C.R.E.A.M.”: More holistic valuation tools and methodologies
- “Take it to the bridge”: Rethinking the ‘missing middle’ problem
- “The Next Episode”
Executive Summary
Historic commitments. Huge demand. Massive cost reductions. Ready technologies. Yet, infrastructure deployment levels are underperforming their potential. What gives? The U.S. clean energy sector has achieved remarkable milestones: solar and wind have tripled since 2015, costs have fallen 90%, and annual clean energy investment now exceeds $280 billion. Yet deployment has arguably fallen short of what both markets and the climate moment demand. The culprit isn’t a single bottleneck: not permitting, not subsidies, not technology readiness alone. The real constraint is misalignment across the multiple interdependent factors that investors need to see in place before committing capital at scale.
Think of it like a Rubik’s Cube: solving one face means nothing if the other five stay scrambled. This paper identifies six strategic levers that, when pulled in concert, can unlock the conditions for large-scale capital deployment:
- Market defragmentation: breaking down the patchwork of ~3,000 utilities and state-by-state rules that trap promising solutions in regional silos
- Commercialization partnerships: deploying innovative public-private joint ventures to de-risk capital-intensive, first-of-a-kind energy infrastructure that traditional markets won’t move on alone
- Transaction execution speed: closing the yawning gap between private-sector deal timelines and the multi-year gauntlet of public financing processes
- Policy synchronization: coordinating the release of rules, funding programs, and guidance so investors see a complete picture, not a puzzle missing half its pieces
- Holistic valuation: building common information infrastructure and market structures that capture the full economic value of energy solutions, moving beyond narrow cost metrics that systematically undervalue transformative infrastructure
- Proactive investor engagement: later-stage investors jumping in sooner and addressing the hard questions early to help bridge the “missing middle” between venture and infrastructure finance
The good news: the capital exists, the technologies are ready, and infrastructure is a solvable problem. With over 1,000 GW of clean energy in development and electricity demand projected to grow up to 50% over the next decade, the infrastructure build-out represents one of the largest capital deployment opportunities in American history. And global demand for U.S. clean energy technology has never been higher. The barriers identified in this paper are structural and systemic, not fundamental; most of the solutions proposed are actionable in the near term, without waiting for perfect legislation or perfect markets.
The window is open, but not indefinitely. For policymakers and investors alike, the question is not whether to act, but whether to act with the clarity, coordination, and urgency the moment demands. The frameworks, partnerships, and policy tools outlined here offer a practical roadmap for unlocking decades of economic growth, cost-of-living relief, and energy security for communities across every region of the country and beyond. The energy transition is not a cost to be managed; properly coordinated, it is a generational economic opportunity.
America has experienced extraordinary momentum in the growth and transformation of the energy sector. Solar and wind generation has more than tripled since 2015. In 2024, 50 GW of solar power was added to the U.S. grid, which is not only a record but the most new capacity that any energy technology has added in a single year. Technology costs have plummeted: utility-scale solar and battery energy storage costs have each fallen 90% since 2010, making them among the lowest-cost forms of electricity in many places. Domestic manufacturing capacity has also surged, with hundreds of clean energy manufacturing facilities, many of which have already come online. These technologies and projects have been reinvigorating communities and creating jobs across the nation, and the benefits of these advances should continue as capital flows into this sector at an unprecedented rate.

Clean energy investment in the United States has more than tripled since 2018 to $280 billion annually, with multiples more in commitments, and private markets alone have raised nearly $3 trillion over the past decade. For more established technologies like utility-scale solar and onshore wind, financing has become standardized: established project finance structures, robust secondary markets, accurate energy production forecasts, and predictable returns that align with the needs of institutional capital. These asset classes now exhibit many of the hallmarks of market maturity: transparent pricing, deep liquidity, sophisticated risk assessment frameworks, and predictable transaction execution. This progress has been galvanized by unprecedented governmental support, including the Bipartisan Infrastructure Law of 2021 and the Inflation Reduction Act of 2022 (IRA), alongside bold state policies, aggressive corporate clean energy procurement, sustained advocacy, and relentless technological innovation.
Yet despite these achievements and the trillions of dollars in committed capital, the pace of deployment has arguably fallen short of what the market opportunity demands and what the climate crisis requires. Hundreds of IRA-supported energy and manufacturing projects have faced delays or cancellations: much of that is due to increased economic and logistical uncertainty (e.g., in the cost and availability of equipment, permitting timelines, and import and export regulations), and much is due to sharp reversals in federal funding priorities (e.g., tax incentive changes from the One Big Beautiful Bill Act (OBBBA) and direct project cancellations). Moreover, emerging solutions are still taking time to achieve true commercial liftoff. Despite billions in federal funding allocations, only a few carbon capture projects have meaningfully progressed, with others indefinitely delayed or cancelled. Growth of some demand-side energy solutions, like behind-the-meter solar and virtual power plants, has remained relatively regional, despite favorable economics. Advanced nuclear energy, though it remains a policy priority, has been challenged by long delivery timelines, and many project investors remain wary of the risk of cost overruns. Sustainable aviation fuel production capacity increased tenfold in 2024, but it still represents a very small fraction of jet fuel demand. Furthermore, transmission capacity has not grown nearly as quickly as needed and remains a key constraint to progress.
The situation – deployment deficiencies despite historic support – can be entirely solved with just one thing… and that is to stop acting as if there is just one thing. The relative underperformance described earlier is not attributable to any single constraint: contrary to what some have argued, for instance, it is not solely about removing permitting roadblocks or creating more subsidies. Rather than seeking silver bullets, real progress can be made by recognizing that multiple elements are involved and that misalignment between those elements has curbed the rate of progress.
Accelerating new energy project investment is somewhat like solving a Rubik’s Cube. The key to the puzzle lies in its interdependence: every twist of one face ripples across the others. You can solve one face perfectly, but if the other five remain scrambled, you haven’t solved anything in the grand scheme. Solving the cube requires coordinated progress across multiple dimensions simultaneously, in the right sequence; it rewards systems thinking and algorithms over siloed, uncoordinated actions. All six faces must align to win.
The same is true for new energy finance. When most project investors look at a sector, they approach it as a puzzle, looking for as much alignment across the full picture as possible before they are sufficiently comfortable to deploy capital. That alignment is the indicator that the risk-reward balance is in the right place to justify investment. Rather than defining the theory of progress around singular issues, policy and industry stakeholders need to create sufficient alignment across multiple puzzle pieces at the same time.
This paper offers a few perspectives on how to achieve the conditions for larger-scale capital deployment, drawing on both lessons learned and promising concepts from across the industry. Like the six faces of the Rubik’s cube, six priority strategy areas are articulated: market defragmentation, commercialization partnerships, transaction execution speed, policy synchronization, holistic valuation methodologies, and proactive investor engagement. Note that while some of the underlying strategies may take time and require deep structural realignment, most of the concepts discussed herein are actionable in the near term. A range of stakeholders, from policymakers to infrastructure investors to community and industry advocates, need to move in concert to solve the puzzle and unlock greater investment.
The opportunity before us is immense. With over a thousand gigawatts of clean energy in development and electricity demand projected to grow up to 50% over the next decade, the required infrastructure build-out represents one of the largest capital deployment opportunities in American history. Similarly, global demand for U.S. clean energy technologies has soared over the past few years, as many countries seek to diversify away from China or to access some of the more unique technologies the U.S. is developing. And solutions can’t come quickly enough in this era of fast-growing energy demand, spiking electricity bills, aging physical infrastructure, and burgeoning new industries, not to mention a plethora of old and new technology solutions and operational strategies poised to meet the moment. The question is not whether the capital exists (it does!), whether energy solutions are available (they are!), or whether there is a silver bullet (there isn’t!). It is whether we can align the six faces of our energy finance cube quickly and strategically enough to channel the right types of capital where it’s needed most, when it’s needed most. The energy transition presents a real opportunity to drive economic transformation that, properly coordinated, can unlock decades of strong economic growth, cost-of-living reductions, innovation, and prosperity across every region of the country and the planet.
Chapter 1. “Come together”: Defragmenting markets through regional coordination
For many companies across multiple sectors, the U.S. market can seem like the golden goose. Its large population, high incomes, diversified economy, and strong purchasing power typically mean large total addressable markets (TAM). While those drivers are real, the reality for many energy and climate solutions, especially early on, is that the large TAM is challenging to realize, because the market can be highly fragmented. In those sectors, the U.S. is less a single “market” per se and more a loose mosaic. There are about 3,000 utilities, ranging from large publicly traded corporations to rural cooperatives, operating in both regulated and deregulated markets. States and territories not only have different market drivers; they also have their own regulations and regulators, business processes, permitting requirements, and market rules. This complicates go-to-market strategies, as companies typically need large, locally focused commercial organizations to tap these markets, which can be expensive and time-consuming to build, especially for newer companies. It also often means a less efficient path to scalability, as each set of local customers and regulators needs to be brought up to speed and convinced of a solution’s fit (compared to having a few entities that speak for the entire country). Beyond the commercial elements, this dynamic introduces technical barriers to scalability, especially where deep integration and redesign are required to meet local requirements. Over the years, this has flummoxed both U.S. startups and experienced foreign investors who approached the U.S. market with high expectations, only to be confounded by these complexities.
Harmonize local requirements to avoid the piloting death spiral
To promote more rapid and widespread investment in, and deployment of, solutions that could benefit their communities, local (and national) governments need to work more closely together to harmonize market designs and project requirements. Oftentimes, a provider may implement a solution in one state, but when they go to another state, that utility might make them start from the beginning and prove themselves all over again; many innovators have likened these continuously repeated pilots to death by a thousand cuts. If a good solution is successfully deployed in one place, the barriers to deploying the same solution in another market need to be lower. This concept applies to permitting and design as well: the more that permitting processes and tariff structures (or modular elements within them) can be templatized, the more time and uncertainty are reduced. Uniformity also lowers development costs, because the solution does not have to be fully reengineered for each locale; the same logic supports standardizing equipment and project technical requirements across jurisdictions to minimize costly product redesigns and reengineering. Furthermore, state stakeholders seeking to deploy similar solutions should consider entering into reciprocal partnerships or MOUs, supporting collaboration that is both technical (e.g., between their utilities and independent engineering organizations) and policy-focused (e.g., between their policymakers and regulators). Under such an agreement, a solution evaluated and approved in one state can be given an expedited evaluation and approval process when brought forward in a partnering state.
Not just physically, but digitally
The dynamic described previously is not limited to hardware; it is present for many software solutions as well, particularly those that must integrate with local operators’ control systems. For example, a locality may be interested in deploying a virtual power plant (VPP), a relatively low-cost, software-based approach that aggregates distributed, controllable energy resources to provide large-scale energy services. A VPP deployment would have to connect to a utility’s and/or grid operator’s distributed energy resource management system (DERMS) to talk to devices, energy management systems (EMS), energy dispatch and trading systems for wholesale market participation, and customer information systems to track billing and energy usage, all while remaining cyber-compliant. Note, too, that several utilities have yet to fully roll out these foundational modern digital systems, and each utility and grid operator might have its own implementations (vendors, versions, configurations, rules) of them. Even outside of controls-oriented functions, the variety of data structures, naming conventions, and IT systems can make it difficult to access available market data (e.g., energy pricing), electricity tariff rate structures, and other highly important information. This is one reason many energy software solutions concentrate their operations in just a few markets: the costs and time associated with integrating with another market’s cadre of systems can be hard to justify and can thwart efforts to scale.
This is an area where states can work together (along with their respective utilities, grid operators, technology providers, and regulators) to agree upon more uniform ways to structure data, access market information, and securely interface with market and control systems. This could include partnering with groups that are developing common standards and protocols (such as RMI’s VP3 and LF Energy), building an implementation roadmap across those states and utilities (for instance, accelerating implementation of FERC Order 2222), and taking corresponding legislative actions to ensure investments are made in the enabling foundational digital systems.
Aggregated demand and collaborative procurement
In a similar vein, collaboration between state and national governments can level the playing field and expand markets. When it comes to infrastructure, states and countries may often endeavor to ensure that local manufacturing capacity and supply chains are set up within their territories – this can create long term economic growth opportunities, reduce equipment delivery risks, and improve the public’s return on their investment.
However, issues can arise when multiple states duplicate efforts in the same sector. Take offshore wind in the early 2020s. Multiple eastern U.S. states were supporting a new wave of projects, and many funding programs required that projects source materials and equipment from suppliers located in the sponsoring state. For a small, burgeoning industry, the effect was dilutive and slowed factory investments, as the scaling factors were harder to justify. After all, there are only so many blade, monopile, vessel, and cabling factories that can be supported at a given time, especially early in an industry’s development. In response, thirteen states and the federal government signed a memorandum of understanding in which they agreed to take more regionalized, collaborative approaches to procurement and supply chain development.
Relatedly, an area where significant improvements could be made is around the procurement of critical common equipment. To accommodate the load growth from new factories, data centers, and building and vehicle electrification efforts, there are many pieces of equipment that will be needed irrespective of what types of energy are associated: things like transformers, circuit breakers, switchgear, and so on. There are considerable production capacity shortages and long lead times on these, which raise costs and create execution risks for projects. Despite the robust market demand, manufacturers have been somewhat hesitant to invest in expanding production as they worry that the demand will not materialize, which would leave them with underutilized or even stranded assets.
State and local governments can respond to these challenges in multiple ways. For instance, they could pool together their demand and drive standardization of the equipment so that the equipment is more fungible and interchangeable, as has been previously highlighted by the U.S. National Infrastructure Advisory Committee’s report on protecting critical infrastructure. Also, states can create well-defined demand guarantees where they can provide assurances to manufacturers and consumers that necessary equipment will be there, as needed.
For example, in 2013, the Illinois Department of Transportation led a seven-state procurement initiative to jointly acquire a standardized set of efficient locomotives and railcars, with additional funding provided by the Federal Railroad Administration to support domestic manufacturing. This effort pulled forward new, more efficient railway vehicles into the market and lowered lifecycle costs. These concepts can apply to secondary markets as well, for example, providing residual value guarantees on heavy-duty electric trucking procurement to help mitigate risks on the initial purchase (e.g., traditional resale markets not emerging or asset residual values not being realized as projected).
Chapter 2. “That’s what friends are for”: Overcoming commercialization barriers through partnerships
The next facet of the cube pertains to early market formation and investment in technologies that are not yet fully commercialized, especially first-of-a-kind (FOAK) and early-of-a-kind (EOAK) infrastructure, and why capital formation has been easier for some types than others. Differences in the ability to demonstrate, commercialize, and scale new infrastructure do not purely depend on the ultimate value of the solution; they are often driven by how the characteristics of that infrastructure affect the pathway to value realization, particularly the solution’s inherent capital intensity and modularity.
For highly modularized solutions with lower capital requirements, the pathway can be much more straightforward. Take solar photovoltaics. Though module R&D and fabrication are far from trivial, technical demonstration and deployment are relatively simple: one can usually install and field-test new solar quickly and inexpensively. The advantage extends to scaling: once you reach megawatt scale, given the modularity of solar cells and their balance of plant (inverters, cables, trackers, etc.), you can form a reasonably clear picture of how even gigawatt-scale projects should fundamentally work. Other highly modular solutions like batteries and EV chargers have enjoyed similar advantages. Tesla, for instance, leveraged its own balance sheet and government funding to build out a network of standardized Superchargers to address potential range anxiety among its customers, taking advantage of charger modularity to build in waves. This modularity has made rapid demonstration and scaling easier, as the financial community can enter the market, investigate, learn, and expand with relatively low risk.
However, the commercialization process becomes significantly more challenging for more capital-intensive and complex technologies, such as carbon capture, nuclear, and e-fuels. For some of these solutions, early projects can require billions of dollars to construct and demonstrate, and smaller-scale systems might not provide representative technical proof points of how the larger system needs to operate. Larger sums of capital are therefore often needed for early deployments, and the financial risk is compounded because the long-term payoff is not guaranteed: FOAK and EOAK projects typically carry more uncertainty, and the learning rates of subsequent projects may be less obvious. Moreover, instead of the first-mover advantage often seen with new technologies, early project investors here may actually suffer a first-mover disadvantage: they bear the risk and cost of the earliest projects but do not accrue the benefits and learnings realized in projects executed down the line.
To help address these types of challenges often seen with capital-intensive, less modular FOAK and EOAK infrastructure projects, adopting a new suite of partnership structures can greatly help to accelerate market formation and improve investability.
Multi-project joint ventures
Catalyzing capital for this class of infrastructure may mean going significantly farther than providing a few incentives and mounting strong advocacy efforts. More complex and elaborate agreements, private and/or public, are often necessary to drive deployment, particularly in the form of deployment coalitions, consortiums, and joint ventures that support multiple projects. At the highest level, these can take several forms and can be originated by the private or public sector, as appropriate. For illustration, a few examples of commercial approaches to scaling new nuclear energy projects, roughly in increasing order of relative deployment impact:
- Demand-side vehicles/buyers clubs. This is where consumers get together and make commitments to purchase a solution. This can be used as a market signaling mechanism to stimulate demand and attract solutions. There are example programs run by governments, such as the First Movers Coalition, and others led by the private sector, such as the Advanced Clean Electricity Initiative (Google, Nucor, Microsoft). Large corporations and governments both have a long track record of entering into energy procurement contracts for solutions with attributes that they have deemed highly desirable: like low-carbon, rapidly deliverable, and/or resilient. Some of these entities have started to strike above-market offtake contracts and guarantees in order to bring those solutions to market faster: as have some hyperscalers with nuclear, Amazon with electric delivery vans, NYSERDA with offshore wind, or the US Air Force with enhanced geothermal.
- Supply-side vehicles. This is where a development consortium, public or private, is formed to deliver new solutions across multiple projects and geographies, immediately delivering sufficient scale to suppliers while also amortizing some of the deployment risk. As an example, the UK government formed Great British Nuclear (since renamed Great British Energy – Nuclear) as a publicly funded, but arm’s-length, national development company for small modular reactor projects. It has been charged with finding sites around the country, raising capital, selecting partners, and facilitating construction. As part of partner selection, it ran a competitive solicitation inviting technology vendors from around the world to showcase their offerings and compete to deploy across a range of sites. It ultimately selected Rolls-Royce as its preferred partner and is moving forward with project development, leveraging the power of sovereign backing to derisk the projects, navigate regulatory hurdles, attract capital, and internalize the societal benefits of the investment (job creation, manufacturing, etc.).
- Joint supply-demand venture. Here, customers and developers tightly collaborate to deploy new solutions. A version of this was created through the Amazon and X-energy partnership to deploy small modular reactors. Building off X-energy’s DOE Advanced Reactor Demonstration Program deployment grant and its partnership with Dow Chemical (serving as both industrial offtaker and infrastructure-delivery expert), Amazon not only committed to purchase the energy from 5 GW of SMRs across multiple sites to meet its needs, but also invested in the company itself, so it is more aligned and shares in X-energy’s future success. This could be particularly meaningful: it represents the closest thing to a project ‘orderbook’ for nuclear in the U.S. in recent years, and thereby provides an opportunity to standardize the product (particularly given the same customer and the same vendor), reduce future execution risk, and accelerate cost reductions through improved learning rates. Separately, but with some parallels, the U.S. DOE’s Gen III+ SMR program, created in 2024 and awarded in 2025, also prioritized applications that formed deployment consortia.
All of these structures offer significant advantages over pursuing projects individually. They send demand signals to supply chains to create manufacturing capacity and to labor groups to build a workforce; both generally need to see firm demand before undertaking the significant investments that are, in turn, typically needed to reach a solution’s cost and performance entitlement (otherwise creating a chicken-and-egg problem). They also create more concrete opportunities to drive project standardization, which not only allows a technology to achieve faster learning curves but also helps derisk and justify investment by providing a more tangible line of sight to the larger market. The point about manufacturers and workforce development groups applies equally to financiers, who often want to see a pipeline of repeatable opportunities before spinning up their underwriting teams.
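The learning-curve logic behind multi-project orderbooks can be made concrete with a small sketch based on Wright’s law, under which unit cost falls by a fixed fraction with each doubling of cumulative output. The $6B first-of-a-kind cost and 10% learning rate below are hypothetical illustration inputs, not figures from this paper:

```python
import math

def unit_cost(first_cost: float, n: int, learning_rate: float) -> float:
    """Wright's-law cost of the nth unit: cost falls by `learning_rate`
    (a fraction, e.g. 0.10) each time cumulative output doubles."""
    b = -math.log2(1.0 - learning_rate)  # experience exponent
    return first_cost * n ** (-b)

# Hypothetical inputs: a $6B FOAK plant with a 10% learning rate.
for n in (1, 2, 4, 8):
    print(f"unit {n}: ${unit_cost(6.0e9, n, 0.10) / 1e9:.2f}B")
# → unit 1: $6.00B, unit 2: $5.40B, unit 4: $4.86B, unit 8: $4.37B
```

Under these assumed inputs, the eighth unit comes in roughly 27% cheaper than the first, which is why a committed orderbook matters: the savings only materialize if the later, standardized units are actually built.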
Risk and reward sharing
Having an orderbook of the first several projects may not, by itself, create sufficient activation energy at the project level. Though it sends a good signal to supply chains and others, it does not necessarily address the first-mover disadvantage that may exist.
A differentiator in partnership approaches, including the ones described previously, is how to think about alignment and value creation. Traditionally, the way in which governmental entities approach financial partnerships is through mechanisms like subsidies, loan guarantees, offtake guarantees, backstops, and fast-tracked processes. These help to reduce financial exposure to stakeholders, but alone, they miss a key part of the story: the long-term upside that can be created by successfully deploying and opening up a market for these solutions.
Usually, for product owners and corporate investors, this upside is more naturally accounted for and balanced against the downside risk. Companies from software providers to aircraft manufacturers might sell their first products as loss-leaders, offering lower pricing to early adopters to reduce buyers’ risk, on the expectation that, if successful, they can recoup early expenses (and even failures) across the broader market pool. For large infrastructure projects, this is harder to achieve. When projects are highly capital-intensive, the financial exposure may be too great for the product company and/or equity investor to bear; that company or fund might be entirely wiped out by an individual project failure (for example, Westinghouse had to declare bankruptcy in 2017 when its two U.S. nuclear projects faced challenges). Project stakeholders and financiers may be asymmetrically exposed to the downside and therefore inclined to avoid investing in early projects. For a promising technology, you may often find several customers (e.g., electric utilities) lining up for the ‘third’ or ‘nth-of-a-kind’ project, which would likely be derisked and less expensive, while taking a passive wait-and-see approach to the first project. This produces a stalemate if the first project is hard to get off the ground.
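The wait-and-see stalemate can be seen in a back-of-the-envelope expected-value comparison. All of the numbers below are hypothetical, chosen only to illustrate the asymmetry between a first project and a derisked later one:

```python
def expected_value(p_success: float, upside: float, downside: float) -> float:
    """Two-outcome expected return, in multiples of capital invested."""
    return p_success * upside + (1.0 - p_success) * downside

# Hypothetical FOAK project: lower success odds, severe overrun losses.
foak = expected_value(p_success=0.6, upside=1.0, downside=-2.0)
# Hypothetical nth-of-a-kind: derisked and cheaper, with a smaller upside.
noak = expected_value(p_success=0.9, upside=0.8, downside=-0.5)
print(foak, noak)  # ≈ -0.2 vs ≈ 0.67: the wait-and-see position dominates
```

With these assumed figures, a purely passive investor is rational to wait, even though the nth project only exists if someone funds the first. Structures that cap the first mover’s downside (backstops, loss-sharing) or grant a share of later upside shift that first number positive, which is what economically aligned partnerships aim to do.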
This is where deployment partnerships with structures that more fully align economic incentives – sharing in both the downside risk and the upside of value creation – can be a powerful catalyst for action. Amazon's structure, for instance, creates this alignment: by being involved in multiple projects, Amazon benefits from improvements over time, and through its equity investment in X-energy itself, it should (depending on terms) continue to benefit down the line if the company is successful.
In addition to encouraging the formation of joint ventures and consortia as described earlier, states and/or national governments can work together to strategically invest in key solutions, run competitive tenders for prospective providers, and strike profit-sharing agreements and/or warrants (as opposed to pure equity) in situations where government investment played an outsized role in value creation.
A potential example of this is the recent $80 billion framework agreement between Brookfield and the US Government to deploy new large nuclear reactors. Notably, beyond packaging existing products and authorities (e.g., low-interest government loans for projects), the proposed deal stipulates a profit-sharing mechanism whereby the US Government would receive a share of future profits from reactor sales. Noting that this partnership is early-stage and important details have yet to be disclosed, such a mechanism could be appropriate here: the sector is hard to commercialize, carries strategic national and geopolitical value, and has effectively no competing domestic products.
State entrepreneurship
Public sector funders can also play a significant role in creating and incentivizing these kinds of deployment partnerships. Though such arrangements are more commonplace in countries with state-run industries, free-market economies have often found them more delicate to navigate. There may be legitimate concerns that governments' "picking winners" could create adverse incentives and, in some cases, undermine competitive markets. It may also confuse governments' role of maximizing public benefit with showing favoritism or extracting economic rents from corporations.
All that said, there are models for state entrepreneurship that can be very powerful here, balancing the need to pull forward solutions and capital, while protecting the public and maintaining market competition – particularly in markets that are pre-commercial, have few players, have outsized national strategic benefits, and otherwise would not develop on their own without heavy external intervention. These are cases where, though the market benefits are considerable, the activation energy may be too high to stimulate deployment without deep governmental intervention.
Furthermore, consider cases with first-mover disadvantage challenges but a strong set of prospective fast-followers. To avoid a Spiderman-meme-like situation where stakeholders (e.g., local utilities or individual states) point at each other to make the first move, downstream project investors could, for instance, co-invest (debt, equity, or backstops) in the first project, even on a minority basis. That would both mitigate risk on the first project and give them access to a cost-effective pathway to the technologies they want to build down the line. You do not traditionally see state and local entities investing in infrastructure projects in other jurisdictions, but doing so could be net beneficial as a faster and lower-cost way to derisk and execute their own projects.
In addition, public financing entities investing domestically (e.g., states, green banks, federal agencies) could, where appropriate, consider extending their authorities to borrow concepts from the US's international playbook. Organizations like the Development Finance Corporation (DFC) can make equity investments in strategic, high-value projects, particularly where normal capital markets would otherwise struggle to enter until the investment thesis is more clearly actionable. Such a process should have very clear scopes, firm guardrails, clear commercial competition plans, and compatibility with legal and market structures, so as to create the intended benefits without confusing or distorting markets.
In any scenario, there should be a corresponding plan for how the public profits would be used. For example, they could be used to raise capital for other governmental activities or be directly returned to the public in some way. Alternatively, it could be efficient to recycle the funding toward related activities and rebalancing of the government's 'venture capital' portfolio, as a strategic sovereign wealth fund would.
Chapter 3. “Highway to the deployment zone”: Faster, risk-weighted transaction execution
There's a common cliché in finance that time kills (or at least wounds) all deals. Increasing the speed of policy formation and deal execution is essential to unlocking growth and investment, especially for newer sectors. We will particularly focus on the public capital side of the equation, where there is a great mismatch between public and private investment decision timescales. In the private sector, deals are expected to be completed in months or even weeks. In the public sector, depending on the program, this can stretch by many months to years, with a high degree of uncertainty. There can be many idiosyncrasies associated with public funding – e.g., infrastructure projects using federal dollars may carry additional compliance requirements (such as environmental regulation or domestic manufacturing rules). Though there are many deep policy questions here, this piece will focus on ways to accelerate the process.
Staffing for success
While it's easy to say that the government should move faster, the reality is that individual government program officers are typically working at a rapid pace. This is especially true at the political level, where the motivation to make progress in a short amount of time tends to be very high. Particularly when developing new programs, they do a massive amount of work, mostly unseen by the public, with very few resources, often overstretching to meet deliverables.
The other side of this is that when new programs and initiatives are rolled out, there often isn't a similar level of flexibility in staffing levels and allocations. In fact, staffers at the federal, state, and city levels can get overwhelmed by the volume of direct work and information requests while following the relevant laws and statutes. A new capital program may be introduced, but the number of people available to implement it might not change rapidly. For instance, the Inflation Reduction Act of 2022 introduced and/or modified dozens of tax credit programs, and almost a hundred pieces of new guidance had to be issued before the market could act on them. A relatively small group of people, led by the Treasury Department's Office of Tax Policy, was charged with generating that official guidance (as required, to ensure consistency and fairness). In addition, a number of the programs had complex elements requiring deep technical expertise (e.g., tax law, energy markets, carbon accounting, energy technology) – skills in relatively short supply and high demand, both inside and outside government. The rate of progress was also slowed by ambiguity in the law itself, where key technical questions (e.g., accounting methodologies and criteria) needed additional time to be addressed during implementation rather than beforehand. The associated teams ran at breakneck pace to complete all those issuances in just two years. Yet many engaged market actors, excited to proceed with shovel-ready projects, experienced challenges as they waited for guidance, slowing initial progress.
To meet the rapid needs of an eager market, particularly when governments are trying to push comprehensive reforms, agency leaders and legislators need to consider ways to ensure that implementing organizations are sufficiently staffed and resourced. This should cover not only program staff (both existing and new), but also functional teams (e.g., legal, communications, stakeholder engagement), where bottlenecks often form because they support multiple programs. This can include surge-capacity resources (short- and/or long-term, internal or external) that bring on technical and subject-matter experts to enable fast and fair processing. Ideally, an implementation staffing needs assessment should be conducted as part of the policy-formation and legislative process so that appropriate resources can be allocated early and efficiently. To further improve resource allocation and implementation speed, legislators should consider expedient ways to drive greater clarity and specificity at the point of legislation, where applicable.
Iterative capital deployment programs
It is tempting, and important, to get things right on the first try, particularly when it comes to highly scrutinized government funding. That said, an approach that has delivered success is to release capital in phases. Instead of issuing all funding at once, the program office (especially for a large competitive program) might split the deployment into phases over time: the first phase is executed quickly, and subsequent phases are introduced later. While this may introduce some short-term friction, it not only gets capital and projects moving faster but, perhaps as importantly, gives both the funders and the market chances to build momentum, learn from one round, and improve toward the next. A good example was the DOE's Grid Resilience and Innovation Partnerships ("GRIP") program, a $10 billion program from the Bipartisan Infrastructure Law to enhance grid flexibility and improve the resilience of the power system against extreme weather. That $10 billion allocation was split into three phases issued over a few years. Over those three phases, the quality and ambition of the applications and funded programs increased significantly as all stakeholders were able to learn and adapt with each round.
Progress over perfection
For public programs, this is a major challenge driven by misalignments of risk tolerance. Many ambitious government funding programs face a strange pickle. On one hand, they have the duty, mandate, and power to drive innovation, do deals ahead of the commercial markets, and derisk promising solutions to the point where those solutions can scale on their own and deliver broad public benefits. On the other hand, the funds being used are raised from people's hard-earned money or a state or country's valuable natural resources, and neither should be handled frivolously. The fear of political fallout from the latter may drown out the benefits of delivering the projects; that is, in the eyes of an underwriter or program officer, the downside risk may often outweigh the upside creation, and doing no deal may feel safer than doing a 'bad' deal.
Take the case of two companies that received funding from the DOE's Loan Programs Office (LPO) in the early 2010s. One was the solar cell manufacturer Solyndra, whose idea was to decrease the cost of solar energy by using cylindrical, thin-film solar cells that could capture the sun's energy from multiple angles, compared to their conventional flat-panel counterparts, and thereby bring down levelized costs. Solyndra received $535 million in federal loan guarantees, then defaulted and went bankrupt when market conditions changed (it, like other promising new solar companies, was undercut by plummeting prices of silicon solar cells from China). The default filled the news cycles for months, sparked several congressional hearings and investigations, and left a profound imprint in the minds of many program funders. No government underwriter wants to be dragged to the Hill or see their name in the papers for this reason. On the other end of the spectrum, you have a then-little-known car company called Tesla, which received a $465 million loan to expand electric vehicle production. Tesla, as we know, went on to become one of the most transformational and successful automobile companies in recent history. And yet comparatively little fanfare has been made about the government's role in making that a success. Two loans, about the same size, issued around the same time, by the same organization. Not only did their actual outcomes differ, but the financial upside and the political fallout on the downside were diametrically opposed.
This contrast becomes even more stark when looking at the broader picture. Take again the example of the LPO (now the Office of Energy Dominance Financing (EDF)): it has historically had a loss rate of less than 3%, on par with most commercial and investment banks – entities that often invest in markets with more proven solutions and less uncertainty. Moreover, other governmental programs, like DARPA, NIH, and ARPA-E, also have strong investment track records. All that said, the perception of risk has led to deep conservatism and a fear of doing deals that might go sideways. This creates huge process drag for the entire organization and curbs the rate of progress, as underwriting processes become elongated and difficult to navigate. For many loan applicants, it can take several years to get through the loan process; anecdotally, some applicants have complained it was slower and harder than what they could wrestle from the commercial market.
Overall, this is a situation where the tolerance for, and understanding of, losses in public financing needs to be reconceptualized and appropriately balanced against the mission. Not all losses are bad; individual losses are not necessarily detrimental if outweighed by net gains. There are significant opportunity costs to not taking risks appropriately, and more can be accomplished without jeopardizing the public interest.
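To make the portfolio logic concrete, here is a minimal numerical sketch. Every figure in it is hypothetical (it is not actual LPO data); it simply shows how one painful default can coexist with a sub-3% loss rate and a positive net result across the book.

```python
# Hypothetical portfolio illustration: an individual loss can be outweighed by
# net gains across the book. All figures are illustrative, not actual LPO data.

loans = [
    # One large default (partial recovery) alongside performing loans ($M).
    {"principal": 465, "repaid": True,  "interest_rate": 0.04, "term_years": 10},
    {"principal": 535, "repaid": False, "recovery_rate": 0.50},
] + [
    {"principal": 300, "repaid": True, "interest_rate": 0.04, "term_years": 10}
    for _ in range(28)
]

def outcome(loan):
    """Net gain (+) or loss (-) on one loan, using simple interest for clarity."""
    if loan["repaid"]:
        return loan["principal"] * loan["interest_rate"] * loan["term_years"]
    return -loan["principal"] * (1 - loan["recovery_rate"])

total_lent = sum(l["principal"] for l in loans)
net_result = sum(outcome(l) for l in loans)
loss_rate = -sum(min(outcome(l), 0) for l in loans) / total_lent

print(f"Total lent: ${total_lent:,.0f}M | net result: ${net_result:+,.0f}M "
      f"| loss rate: {loss_rate:.1%}")
```

Judged deal-by-deal, the $535M default looks like a scandal; judged at the portfolio level, the book is healthily in the black – which is exactly the distinction the underwriting culture needs to internalize.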
Given that governments have historically demonstrated their ability to be good stewards of capital over long periods, the inherent risk that accompanies the pre-commercial asset classes they support, and the urgent need to make progress and unlock markets, more streamlined, faster underwriting processes that increase the speed of execution are critical and warranted. Furthermore, governmental funding organizations need more 'air cover' so that individual misses do not get over-politicized, but are understood to be reasonable elements of a process of progress. Accomplishing this process and cultural shift requires major work internally with staff, policymakers, and the broader public. This is an area where integrating concepts like state entrepreneurship and more balanced, portfolio-based risk-and-reward approaches, described previously, can also unlock new investment and risk management strategies, greater societal benefits, and increased comfort for staff and leaders to usher in that kind of transformation.
Creating longer and more durable windows of action
A more obvious reason to move quickly on policymaking is to deliver benefits faster to project and community stakeholders – which is, of course, the main objective of the policy in the first place. But beyond that, investors understand that the political windows of favorable conditions can be short. This is particularly acute for assets with long development cycles and/or high upfront costs: building new manufacturing facilities or developing interregional transmission lines, for example, can take years. Indeed, it was estimated that 60% of committed IRA-funded clean energy manufacturing projects were not originally slated to come online until 2025–2028.
Moreover, the IRA timeline created some interesting time crunches. Though the law was thoughtfully conceived with longer time horizons for tax credits, in practice the actionable investment window for that version ended up being incredibly short. The law was passed in August 2022. It then took time for programs to be formed and guidance to be released, as described earlier. In parallel, the investment community had to come up the learning curve on the new opportunities and build ecosystem collaborations (which were themselves still forming and reacting). Next thing you knew, as the election cycle ramped up and policy uncertainty increased, many investors started to park their capital and take a wait-and-see approach in early 2024 – as evidenced by strong increases in fund 'dry powder' (raised but uncommitted capital) alongside a sharp dropoff in actual capital deployment and assets under management at the same time.
Moving quickly is critical to give investors, communities, and other associated stakeholders as much time as possible to understand the landscape, develop deployment pathways, build new solutions, and ideally iterate, given the chance to take more shots.
That was a shorter-term perspective. In the spirit of leveraging speed to open the front end of the window, some thought should also be given, longer term, to how one extends the investability window. Investors typically do not decide to invest in a project purely on the project's merits; particularly when entering a new sector, the decision is also driven by the commercial prospect of potential follow-on deals. Short political windows and the associated 'stroke-of-pen' risks often raise major flags for the risk committees at financial institutions. As mentioned earlier, many IRA programs arguably had much less than two years of impact. Deeper policy stability is critical to ensure continued, long-term investment. That type of stability has, at least historically, been a hallmark of the US regulatory and commercial system and a positive differentiator in the race to attract capital and talent from across the globe. For sectors with high strategic value, high early capital requirements, and long investment cycles, policymakers should consider more mechanisms that provide longer-term policy guarantees, giving investors assurance that the window is long enough to justify their business cases.
Chapter 4. “Okay, now let’s get in formation”: programmatic policy synchronization for fast market formation
Catalyzing the deployment of new infrastructure usually requires a bevy of policy actions, because transforming a sector may require several changes in economics, behaviors, and processes. Especially when expansive new legislation and/or executive actions are involved, the government may need to deliver a host of new policy programs, including new rules (e.g., permitting reforms, categorical exemptions), funding allocations and programs, implementation guidance (e.g., for tax rules), and informational reports (e.g., National Lab technical studies, commercialization reports, etc.). These activities are highly valuable, as they tackle different aspects of the deployment challenge, and they take huge amounts of effort to do well. However, they often get rolled out and implemented through separate, independent processes. This can actually stall and frustrate deployment efforts, as most investors will want to see the major policy puzzle pieces locked in place before getting comfortable enough to deploy capital – for most risk organizations, 'stroke-of-pen' risks are red flags. Conversely, this hesitation can cause consternation for policymakers and advocates who feel they have done the heavy lifting in passing new legislation but don't see a corresponding flood of serious commitments immediately after.
Policy deliverable schedule alignment
One way to address this is to implement a visible, synchronized schedule showing all the related policy efforts and programs for an initiative. The interdependencies between those activities would be easier to identify, and relevant stakeholders could see when all the major puzzle pieces would be in place – and, in turn, align their investment and advocacy efforts accordingly.
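As a rough illustration of how such a schedule could be represented, the sketch below encodes hypothetical policy deliverables (all names are invented placeholders) as a dependency graph, making the ordering constraints explicit:

```python
# Sketch: a synchronized policy-deliverable schedule expressed as a dependency
# graph. Deliverable names are purely hypothetical placeholders.
from graphlib import TopologicalSorter

# deliverable -> prerequisites that must be completed first
deps = {
    "tax guidance":       {"legislation"},
    "grant solicitation": {"legislation", "staffing plan"},
    "first awards":       {"grant solicitation"},
    "market-ready":       {"tax guidance", "first awards"},
}

# A topological order is one valid release sequence; publishing it lets
# stakeholders see when the last puzzle piece will land.
order = list(TopologicalSorter(deps).static_order())
print(" -> ".join(order))
```

Publishing even a simple artifact like this would let stakeholders see which deliverable is the long pole before committing capital.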
An example of this comes from carbon capture: the federal tax credit for carbon dioxide sequestration (45Q) was first enacted in 2008, but the first set of tax guidance was not issued until 2020, as the IRS, Treasury, EPA, and other agencies had to build a suite of complex regulations around reporting, verification, stakeholder comments, and more. As a consequence, though the tax credit was in place (and though strong complementary financing capabilities from renewables tax equity and thermal power plant development already existed), little to no investment went into the sector, effectively 'wasting' many years of eligibility and frustrating many interested stakeholders.
By contrast, take advanced transmission technologies: despite being rapidly deployable and cost-effective solutions for increasing transmission and distribution system capacity and performance, they have historically been underutilized. To increase awareness and deployment, the federal government developed a suite of products, including the formation of the Federal-State Modern Grid Deployment Initiative, grant funding via the DOE's Grid Resilience and Innovation Partnerships (GRIP) program, loan funding from the LPO's Energy Infrastructure Reinvestment program, categorical exemptions in federal environmental permitting for upgrading existing transmission lines, a Pathways to Commercial Liftoff report on grid modernization, new national deployment goals, and technical reports and new assistance programs from the National Labs. These were all released within a couple of months of each other in 2024, so the market had a fuller picture to react to and could begin to make greater progress. Since then, dozens of states have passed new laws, and the number of projects being pursued and funded has been on the rise.
Capital source navigators
Relatedly, new legislation may create several new governmental funding programs or change the missions of existing ones. Many of these efforts and changes might go unnoticed or be disproportionately utilized. Take energy- and climate-tech startups seeking capital to grow or transform their businesses. Government capital tends to be attractive: it is often willing to embrace early technology risk (unlike most commercial capital), is often non-dilutive to the company's capital stack, and can give the company extra visibility. Most people in the energy sector will know of programs like the DOE's ARPA-E (Advanced Research Projects Agency-Energy) or the Loan Programs Office. Far fewer may know that funding may be available through 'non-energy' agencies like the US Department of Agriculture, the Small Business Administration (SBA), the General Services Administration (GSA), or the Department of Defense (DOD). These have increased the pool of available capital and provided a wider array of financing products, increasing the chances that the right kinds of capital are available to serve the spectrum of company needs.
Initiatives like the Climate Capital Guidebook, published in 2024, can be helpful to make these types of programs less opaque and easier to access, especially for startups and small businesses. At the state level, databases like DSIRE USA have been providing a beneficial service aggregating information on state incentive programs for years.
Making information on federal, state, and/or municipal funding programs highly accessible and searchable from a centralized, common location is key; otherwise, programs may get lost, buried in webpages that few know how to find. This process can be further enhanced with cross-cutting discovery tools. For example, AI-based agents could continuously and automatically map these programs and keep the information organized, and large language models could help stakeholders more readily identify and compare the programs of best fit (matching things like a user's capital needs against program 'ticket sizes', usage restrictions, and eligibility requirements).
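As a sketch of what such matching could look like under the hood, the example below filters a hypothetical program catalog against a user's stated capital need. All program names, ticket sizes, and eligibility rules here are invented for illustration:

```python
# Sketch of "capital source navigator" matching against a hypothetical program
# catalog. All names, ticket sizes, and eligibility rules are invented.
from dataclasses import dataclass

@dataclass
class Program:
    name: str
    min_ticket_m: float   # minimum award size, $M
    max_ticket_m: float   # maximum award size, $M
    stages: set           # eligible technology-maturity stages
    dilutive: bool        # does the program take equity?

PROGRAMS = [
    Program("Hypothetical R&D Grant", 0.5, 5, {"research", "prototype"}, False),
    Program("Hypothetical Demo Loan", 50, 500, {"demonstration", "deployment"}, False),
    Program("Hypothetical Co-Invest Fund", 10, 100, {"deployment"}, True),
]

def match_programs(need_m, stage, allow_dilutive=True):
    """Return programs whose ticket size and stage eligibility fit the request."""
    return [
        p.name for p in PROGRAMS
        if p.min_ticket_m <= need_m <= p.max_ticket_m
        and stage in p.stages
        and (allow_dilutive or not p.dilutive)
    ]

# A non-dilutive $75M deployment-stage need matches only the demo-loan program.
print(match_programs(75, "deployment", allow_dilutive=False))
```

In practice an LLM would sit in front of a structured catalog like this, translating a founder's plain-language request into the filter parameters.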
Zooming out from individual needs, this would also improve new solution developers' and investors' ability to understand more comprehensively the relationship between governmental capital programs and the role they play in energy solution commercialization and deployment. For instance, it would make it easier to chart which programs are available to technologies at different stages of maturity: from the National Science Foundation (NSF) for fundamental research, to the Advanced Research Projects Agency-Energy (ARPA-E) for more applied technology development and early manufacturability demonstration, to planning grants from various agencies and federal tax credits for infrastructure projects.
Similarly, funding programs could be mapped against project development phases. In areas like international project finance, this exercise would be valuable for demystifying which programs and institutions suit various phases of project development. In some cases, an international energy project developer using American technology might need to navigate a gauntlet of funding institutions: from the US Trade and Development Agency (USTDA), which provides grants for front-end engineering design (FEED) studies, to the Export-Import Bank (EXIM) for domestic manufacturing loans, to the Development Finance Corporation (DFC) for equity co-investment and political risk insurance – not to mention multilateral development banks like the World Bank and the International Finance Corporation (IFC), which themselves have an array of funding programs and instruments. Providing clearer, more cohesive representations of how this patchwork of funding sources can work in tandem and be packaged together can deliver outsized strategic competitiveness for American companies, helping level the playing field against competitors backed by governments that provide fully wrapped financing solutions.
Chapter 5. “C.R.E.A.M.”: More holistic valuation tools and methodologies
This facet addresses a challenge that is still too common in solution valuation: not valuing the companies themselves, but proving to customers and investors that the proposed energy solution is worth adopting. Regulation alone is usually not a salve for driving energy transition activities in free-market economies: while regulation may steer what should happen, costs and economics are often bigger drivers of how quickly the transition occurs. Borrowing a chemistry analogy, economics determines the activation energy and kinetics of transition policy. Solutions need to demonstrate their fit and attractiveness in often economically competitive and constrained environments. In addition, stakeholders with shared interests (not just federal, but also at the industry, state, and city levels) should invest in building common valuation infrastructure (e.g., resource characterization data, system models, and more) that lowers the barriers to deployment and investment. Doing so will also make it easier to appropriately size any associated financial programs (like subsidies and grants) to ensure there is sufficient catalysis to get multiple stakeholder groups moving and investing.
Understanding end-use unit economics
This means that solution providers, policymakers, and advocates need to develop very deep understandings of the commercial drivers and realities of the markets they are looking to serve. They need to put themselves in the shoes of their customers and related stakeholders. This is particularly important when trying to sell solutions into new and competitive markets or applications that are not otherwise required to change (e.g., by regulations on fuel use or emissions). This should seem obvious and should always have been a primary focus, yet it's a step that some innovators, policymakers, and advocates have not adequately prioritized.
Skipping that step is a recipe for failure, particularly in the infrastructure space. Saying a solution is good for the world is not sufficient to get traction; a need does not mean there's a market. A strong, detailed, and accurate understanding of customer unit economics is foundational to the success of any infrastructure. This should encompass rigorous estimation of how much a solution costs to produce and deliver (costs that tend to be underestimated in early stages, leaving stakeholders surprised later by overruns during implementation). It should likewise reflect an understanding of the customer's cost and value drivers, as these affect project revenue and adoption readiness. This question sometimes gets missed in the early stages, but in later stages – particularly when seeking significant capital to fund projects – it becomes highly pertinent, as investors take a much more critical view of the project's economic potential, both to the upside and the downside. As part of developing detailed assumptions, teams should build reasonable sensitivities and scenarios that illustrate how the financial performance of the project may vary with changing internal and macroeconomic conditions.
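As a minimal sketch of that sensitivity-and-scenario exercise, with purely illustrative figures, consider a per-unit margin evaluated under base, downside, and upside assumptions:

```python
# Minimal unit-economics sensitivity sketch; all figures are hypothetical.

def unit_margin(price, fuel_cost, o_and_m, capex_per_unit):
    """Net margin per unit of output (e.g., $/MWh), simple illustration."""
    return price - (fuel_cost + o_and_m + capex_per_unit)

scenarios = {
    # name: (price, fuel cost, O&M, amortized capex), all $/MWh, illustrative
    "base":     (70, 20, 10, 30),
    "downside": (55, 28, 12, 36),   # lower prices plus cost overruns
    "upside":   (80, 16,  9, 27),   # learning-curve and market gains
}

for name, inputs in scenarios.items():
    print(f"{name:9s} margin: {unit_margin(*inputs):+.0f} $/MWh")
```

Even a toy table like this surfaces how quickly a healthy base-case margin can invert under cost overruns and weaker prices – exactly the kind of result an investor's risk committee will probe.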
Diagnosing this early not only helps the solution providers to be better positioned for commercial success with their customers but also enables them to catch potential flaws early enough, make different design choices, and ensure the product’s value proposition is more robust and resilient. In turn, this helps reduce project risk and gives comfort to financial investors along the way.
Governments can help by driving easier price discovery and transparency, collaborating with project stakeholders (especially developers and customers) to compile and share relevant cost and value data in more public forums. Reports generated by government agencies (e.g., the series of Pathways to Commercial Liftoff reports by the US Department of Energy's Office of Technology Transitions), national laboratories, and private third-party analysts (e.g., BNEF, S&P, Lazard) have made strong contributions in attempting to fill those information gaps, and continuing to support such efforts would be valuable. Involving state and regional actors (e.g., groups of state economic development organizations) can make these efforts even more granular, and perhaps more local, which would drive even more actionability. Governments could also compel information disclosure through legislation (akin to efforts in healthcare and drug pricing transparency over the past few years) or require a greater level of disclosure as part of government-funded programs, especially in new industries.
Development of accessible, trustworthy technoeconomic evaluation analysis tools
Intending to perform the types of deep technoeconomic analyses described above is one thing; having the ability to do so is another. In many situations, analysts suffer from either data unavailability or asymmetric access. Take the electricity system, for example: very few stakeholders (usually utilities and grid operators) have deep access to information about how the system and its underlying assets are performing. Sometimes this is intentional, due to concerns around security and market manipulation. Sometimes, as often with customers requesting their own historical hourly usage data, the process is simply archaic and difficult.
A major downside of this situation is that third parties are often not in a position to interrogate resource plans, challenge priorities, or test and validate new ideas. Third-party analytics tools exist, but they sometimes lack the requisite data, fidelity, or trust to ensure that their results cannot be readily dismissed. That also makes it too easy for grid operators to wave off new ideas without adequately considering them. Inaccessible data and models can keep beneficial solutions off the menu of options or out of the market entirely.
This has been a common battle for an array of more 'disruptive' energy technologies, like distributed energy systems and advanced transmission technologies. But it also occurs with generation. One example is the prospective transition of the Brandon Shores coal plant in Maryland. The plant's owner, Talen Energy, filed to retire the facility because it was no longer economically viable to operate (following a trend among coal plants around the country). However, the grid operator, PJM, sought to force an extension of its operation for four years (by striking a reliability-must-run (RMR) contract) until new transmission capacity could be brought online, citing potential reliability concerns. State and congressional officials strongly opposed this plan, as extending operation via an RMR contract would increase costs to ratepayers and increase local pollution. A group of advocates and energy experts, led by the Sierra Club and GridLab, proposed replacing the plant with a mix of energy storage, reconductoring, and voltage supports, which they claimed would be not just cleaner but more cost-effective than what was proposed – even more so in the likely event that the transmission project is delayed. (A similar concept was deployed in New York City at the Ravenswood power plant.) PJM dismissed that suggestion without real allowance for iteration, arguably because the advocates did not use the right modeling methodology and were not a project sponsor. The decision also reflects the inability of PJM's energy storage market structure to effectively value storage's benefits as both an energy and a transmission asset.
Though an agreement was ultimately reached, many stakeholders view the settlement as suboptimal, not just because of its outcome (even more so as it does not help wider issues like high capacity-market prices), but because the advocates did not have the modeling tools in place to meaningfully evaluate the options and force a more substantive dialogue with the grid operator.
Rebalancing this situation is essential in order to allow additional key stakeholders like policymakers, regulators, project developers, solution providers, and other experts to interrogate the opportunity space and propose actionable new ideas. This should include creating common, accessible infrastructure for the data, models, and evaluation methodologies that an interested stakeholder would need to assess potential project options, which are otherwise difficult or prohibitively expensive to access. Government endorsement of these tools also greatly bolsters the credibility of the resulting analyses.
The Australian Energy Market Operator (AEMO), for example, did exactly this by implementing the world's first grid connection simulation tool. The tool is a digital twin of the country's electric grid that project developers use to rapidly evaluate their prospective solutions in an accurate, safe, and trustworthy environment.
Also consider two approaches to accelerating geothermal project development, where insufficient quantification of the resource can add significant development costs and project risks. One example is Project InnerSpace, a collaborative effort funded by philanthropy, the US federal government, and Google to provide a common, open set of surface and subsurface characterization data. Another is the Geothermal Development Company, a special purpose vehicle fully owned by the Kenyan government, which performs resource characterization and steam development itself and shares the information with prospective geothermal power producers; this has significantly lowered barriers to entry and made Kenya a global leader in geothermal power production. Both approaches help project investors be more targeted, capital-efficient, and prolific.
Similarly, Virginia recently passed a grid utilization law requiring its utilities to measure the utilization of their transmission and distribution systems, including establishing metrics and plans to improve them. If implemented well, and if the associated data is made available, it can drive more targeted investments, manage customer energy bills more cost-effectively, and allow new solutions like virtual power plants, distributed energy, and grid-enhancing technologies to be appropriately valued and play bigger roles in the energy solution mix.
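To make the idea of utilization metrics concrete, the sketch below computes a few simple indicators from hourly flow data on a single line. This is an illustration only: the metric names, the 50% headroom threshold, and the sample data are hypothetical, not taken from the Virginia law or any utility filing.

```python
# Illustrative sketch: simple utilization metrics for one transmission
# line from hourly flow data. All names, thresholds, and data are
# hypothetical placeholders.
from statistics import mean

def utilization_metrics(hourly_flows_mw, rating_mw):
    """Return average and peak utilization, plus the share of hours
    the line runs below 50% of its rating (a rough headroom proxy)."""
    utilizations = [flow / rating_mw for flow in hourly_flows_mw]
    return {
        "average_utilization": mean(utilizations),
        "peak_utilization": max(utilizations),
        "share_hours_below_50pct": sum(1 for u in utilizations if u < 0.5)
                                   / len(utilizations),
    }

# Hypothetical day of hourly flows (MW) on a line rated at 1,000 MW.
flows = [420, 390, 380, 400, 450, 520, 610, 700, 760, 800, 820, 830,
         810, 790, 780, 800, 850, 900, 870, 760, 650, 560, 480, 440]
metrics = utilization_metrics(flows, rating_mw=1000)
```

Even metrics this simple, published consistently across utilities, would let third parties spot underused corridors where grid-enhancing technologies or distributed resources could defer capital spending.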
Quantify, aggregate, and internalize external benefits and costs
Many solutions labeled as 'climate' solutions have a wide array of other benefits: lowering costs, boosting reliability, creating jobs, and improving health, to name a few. In some cases, reducing emissions might be a secondary or even tertiary benefit. This often results in the cost-benefit of a potential solution being understated and capital being underallocated. Alternatively, it can lead to greenwashing, where benefits are overstated relative to impact and capital is misallocated.
In some cases, like in electricity markets and industrials, decisions are made on a narrower set of financial criteria that ignores the broader value proposition: Which solution has the lowest upfront cost? What is the least costly way to meet power demand on an hourly basis? Does the solution pay for itself within three years? There have been some directional approaches that at least help with the first issue of benefit underquantification.
An example is FERC Order 1920, a rulemaking that covers new approaches to transmission planning and cost allocation. It called for planners' decision criteria to be expanded beyond cost and reliability to a consideration of seven benefits: avoided or deferred infrastructure costs, reduced power outages, reduced production costs, reduced energy losses, reduced congestion, mitigated extreme weather impact, and reduced peak capacity costs. As the order is implemented across the country, it should provide a considerably fairer and more holistic basis for assessing the potential benefits of transmission projects, and it will likely increase the viability of game-changing concepts (e.g., reconductoring, interstate/interregional transmission).
In other cases, like in some larger governmental grantmaking or policy efforts, a suite of benefits may be quantified but estimated and presented in silos, where the benefits appear orthogonal and nonadditive. The key to addressing that is developing valuation frameworks that do the difficult work of weighing the benefits together in a clear manner directly relatable to the investment thesis. Translating those benefits commensurately into a project's financial terms is critical to ensuring they get prioritized and realized. The Inflation Reduction Act had elements of this, at least conceptually, by applying bonus credits to low-carbon energy projects that paid fair wages, were sited in economically disadvantaged areas, or used domestically manufactured equipment. At the state level, Montana's transmission law established a new, elevated cost-recovery mechanism for transmission projects that use more efficient, high-performance conductors.
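The mechanics of weighing siloed benefits together can be sketched as a single discounted roll-up. The example below is purely illustrative: the benefit categories echo the seven in Order 1920, but every dollar figure, the 40-year horizon, and the 7% discount rate are hypothetical placeholders, not actual FERC methodology.

```python
# Illustrative sketch: rolling several benefit streams into one
# benefit-cost ratio. Categories mirror the seven Order 1920 benefits;
# all values, horizon, and discount rate are hypothetical.

def npv(annual_value, years, rate):
    """Net present value of a level annual benefit stream."""
    return sum(annual_value / (1 + rate) ** t for t in range(1, years + 1))

# Hypothetical annual benefits ($M/yr) for a transmission project.
annual_benefits = {
    "avoided_infrastructure": 12.0,
    "reduced_outages": 5.0,
    "reduced_production_costs": 20.0,
    "reduced_losses": 3.0,
    "reduced_congestion": 8.0,
    "extreme_weather_mitigation": 4.0,
    "reduced_peak_capacity_costs": 6.0,
}

capital_cost = 450.0  # $M, hypothetical upfront cost
total_benefit_npv = sum(
    npv(v, years=40, rate=0.07) for v in annual_benefits.values()
)
benefit_cost_ratio = total_benefit_npv / capital_cost
```

The point of the exercise is that a project judged only on, say, reduced production costs might fail a benefit-cost screen, while the same project clears it comfortably once all streams are discounted and summed on a common basis.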
Going further along the point of internalization, there are more structural issues where markets may not be designed to solve for the outcomes that stakeholders are seeking. In electricity, for example, power markets generally solve for meeting demand for the least cost in a short time period (following a narrowly-defined reliability scheme). This may not only ignore solutions that may save more money over longer periods of time, but it also does not explicitly solve for attributes like resilience, sustainability, or flexibility. That often means external out-of-market solutions are needed to create desired outcomes (e.g., reliability-must-run contracts, tax credits, renewable energy credits). While those have had a great impact on their specific goals, they are imperfect and may have unintended consequences, like distorting market behavior or disincentivizing cost-cutting innovations. Solving that on a greater scale and more fundamental basis in some segments may require greater reforms, like redesigning electricity market structures, revisiting the Energy Policy Act and Federal Power Act, and more.
Chapter 6. “Take it to the bridge”: Rethinking the ‘missing middle’ problem
For the last face of the cube, it is worth describing the role that a whole cadre of investors needs to play: not just venture and other early-stage capital, but particularly later-stage capital providers such as project financiers (equity and debt), institutional investors, pension funds, insurance funds, and even utility balance sheets. They have a massively important role and arguably need to be more proactively involved in ensuring the maturation of promising earlier-stage solutions. The 'missing middle' problem in energy, where a lack of transition and demonstration capital keeps promising venture-backed solutions from progressing to mainstream infrastructure, is well known. While continued innovation is needed to form new capital solutions to fill that gap, there is a lot that investors can do to shrink the gap and make the chasm easier to traverse.
Engaging earlier to pull companies to maturity
Though they control the majority of assets under management and can support bigger ticket sizes, late-stage investors' risk tolerances, by the nature of their investment mandates, tend to skew conservative. They concentrate their efforts on solutions with established track records and large addressable markets that provide greater certainty of execution and relatively consistent returns, and they typically have more than enough deal volume to justify that focus. Consequently, though these investors typically at least follow major new trends, they tend to be hesitant to enter newer markets; in fact, many are content to simply wait for the market to come to them before they engage. This introduces several challenges.
First, at the most basic level, it means that many lower-cost sources of capital may be hard to access for newer solutions, climate or otherwise, which makes it more challenging for them to compete on a level playing field early on. Second, and as importantly, it means that companies may miss critical opportunities to get sharper earlier. The attributes that investors value change dramatically over the life cycle.
For instance, many early-stage products are rated on things like their uniqueness, differentiation, disruptiveness, and total addressable market: the attributes that tend to attract venture capital and garner the most market visibility in the media. By contrast, at later stages, especially in project finance, uniqueness and differentiation might actually be seen as sources of risk, risks that get compounded if the solution is supplied by a new market entrant. Moreover, fungibility and supplier optionality may hold even greater weight: to mitigate the risk that things go wrong with a project's vendor, project investors take comfort in knowing there are substitutes that can be brought in as part of a contingency plan. In addition, though addressable market matters to both groups of investors, early-stage investment tends to focus on alignment with macroeconomic trends, while at later stages micro arguably supersedes macro, as diligence tends to be more deeply and narrowly focused on project-specific questions like contracts, pricing, and execution. Furthermore, late-stage underwriters may need to conduct deeper diligence and acquire more data to get comfortable with new technical attributes, features, and vendors, which can be a drag on their process as they spend more time and money doing so. For earlier-stage investors, by contrast, that deep focus on new features already tends to be an integral part of the diligence and value-creation process, and they get rewarded for it accordingly.
Overall, not understanding these differences can create shocks for new companies that have had great success with attracting capital early, but get stopped in their tracks when graduating to the next level of maturity. This has also often meant that prospective solutions providers miss the opportunity to sharpen their pencils, address more detailed questions, and have their key assumptions stress-tested. Even if they are not prepared to transact, later-stage investors, especially in infrastructure project finance, and their partners (such as independent engineering firms and insurers) should devote additional time to engage with promising technologies early on and bring them along. This is also in the investors’ interest as it allows them to get up the learning curve faster, be better positioned to take advantage of those opportunities when the markets come around, and ensure that the solutions that do make it to later stages are of higher quality and more likely to yield successful transactions.
Formulate deal templates and archetypes
The previous steer for early engagement comes with a conundrum. For many investors, it is hard to meaningfully engage until there is a complete deal on the table. By complete, I mean a fully-fleshed out representation of all core theses, puzzle pieces, assumptions, and more, mapped to a specific, actionable situation. This is the scaffolding onto which the financing packages are built and the basis by which most risk managers are trained to evaluate the financeability of a solution. This can be true for governmental and commercial financing programs; “bring us deals to look at” is a common refrain.
The conundrum comes because many of those details may not be fully known early on, so there may not be a fully formed deal to bring per se; in addition, there might not be a clear set of underwriting criteria that the company can aim for. Even offices like the DOE Loan Programs Office (LPO), which was proactive and engaged, struggled to get significant market traction for a few years, to both their and the market's frustration. Instead of both sides staring at each other like the Spider-Man meme, the impasse can be broken by creating deal templates and archetypes, which take a more hypothetical representation of assumptions, including reasonable scenarios, and frame what the financial structure and execution pathway would look like for each.
The LPO, for instance, did just that, creating several deal archetypes based on customer type and technology, with terms and execution timing aligned to the associated risk. For example, deals involving more established energy solutions (e.g., solar, storage, transmission) with investment-grade utilities providing corporate guarantees were allowed faster execution processes, commensurate with the comparatively low level of credit risk involved. Deals associated with a narrow focus, like the Advanced Technology Vehicle Manufacturing program, tended to have more well-defined execution processes. Deals with more default risk (e.g., newer technologies, non-investment-grade counterparties) might take more time to diligence and underwrite. This benefitted both the Office and the applicants by creating clear, agreeable expectations for each. Providing this clarity greatly increased deal volume and traction, as more clients brought more loan applications and had greater clarity and confidence in the transaction process they were entering into.
More broadly, this is an area where the companies, industry associations, and other advocates can play an active role, independently and in collaboration with governments. Formulating early pictures and archetypes for financial stakeholders and investors can significantly enhance feedback and capital formation.
Collaborate, Celebrate, and Replicate
Finally, energy investors, especially in less mature sectors, need to find ways to be more open, as appropriate, about their investments and investment strategy. For product companies, this usually comes more naturally and is necessary to market their products, but later-stage investor communications about individual deals tend to be more guarded and high-level. This is usually not because the information is unavailable. Sometimes openness is genuinely hard: investors may be protecting sensitive information, or protecting potential market share by not tipping other players off to the same strategy (particularly if they worked hard to open a new market). The richer information transfer usually happens privately, during deal execution (e.g., as part of due diligence) or during project- or fund-level capital raises.
In newer spaces, however, progress itself is often catalytic (a rising tide lifts all boats). The easiest way to persuade a risk committee to invest is to show precedents and comparables: an underwriter can stand up with greater confidence when they can show that someone else has done it before, and it can be even more validating when that 'someone else' is a competitor. Project investors should therefore endeavor to share more about how they got comfortable with the deals, markets, and technologies involved, as appropriate. This is not altruism. Especially in emerging sectors, rapidly expanding the market and creating a foundational flywheel can be commercially more beneficial for the firm than purely protecting market share. Getting more investors comfortable makes the pie bigger and encourages other investors to pursue their own projects. This then sends actionable signals to ecosystem stakeholders (e.g., supply chains) to invest and create production and delivery efficiencies. Those efficiencies improve unit economics, reduce risk, and increase returns down the line and potentially even on the early projects (e.g., by reducing operating expenses and replacement-parts scarcity). These scale efficiencies and flywheel effects should help investors generate more deal flow and revenue, building on the expertise and leadership position they have established. Such openness could even be made an expectation where significant public funds are allocated to a project, perhaps in exchange for preferable financing terms from the public funding institution.
Similar ideas apply to public-sector funding programs. Announcements about projects and ribbon cuttings, though important, are shorter-term, quick-hitting communications strategies that tend to be formulaic. Policymakers should instead take a page from commercial product marketing and view their deployment policy efforts as their products. As such, they should create consistent, thematic narratives within which individual initiatives, projects, legislation, and rules can all be framed. Even though they may be exhausted after delivering the policy itself, government officials and program officers should not undervalue the uplift phase. It is crucial to spend ample time and resources explaining and repeating the micro- and macro-significance of each product to investors and community stakeholders, especially in today's competitive information environment. Building greater public buy-in, both nationally and with communities, is crucial, especially for longer-term, transformational projects. Government funders should work hard to bring along additional local governments, nonprofits, and investors to collaborate, celebrate, and replicate the successes.
Akin to what was mentioned in the valuation section about common information infrastructure, working closely with investors, industry, and other stakeholders to collate and amplify key investment theses, lessons learned, and other insights will be key to building investor confidence and creating more of a flywheel effect for follow-on investments.
Conclusion. “The Next Episode”
Taking a step back, we have laid out many ideas and concepts in this paper: harmonization, collaboration, acceleration, synchronization, valuation, and amplification, to name a few. It may seem daunting to look at policymaking across so many vectors, particularly when many of those puzzle pieces have to align and move in sync in order to unlock significant and consistent investment. That said, the power of a strong and well-intentioned administrative state, at both national and local levels, lies in its very ability to wrap its arms around big challenges, partner with private industry, and leverage its resources to create high-value solutions with outsized benefits. This has been proven repeatedly in the US and globally, across time and multiple sectors: going to the moon, inventing life-saving medical treatments, building massive infrastructure, delivering nanoscale electronics. States and towns should roll up their sleeves, find creative ways to collaborate, develop foundational information tools, and remove unnecessary market barriers. Investors should take an even more active role, making their needs known to early-stage companies and policymakers, building consortia to pull new opportunities forward, and creating an actionable set of commercial opportunities they would find attractive. What's more, acting now to design and implement new, actionable administrative structures, especially at the state and local level, will not only create more high-value pathways for progress today; if well-coordinated, it can also lay a foundation for federal action in both the near and longer term. Though the challenges and the journey are complex, the opportunity before us is massive, the imperatives are clear, the transformations are tractable, and success is achievable. This can be done, so let's get busy!
Rebuilding Environmental Governance: Understanding the Foundations
Today we are facing persistent, complex, and accelerating environmental challenges that require adding new approaches to existing environmental governance frameworks. The scale of some of these challenges, such as climate change, requires rethinking our regulatory tools, while diffuse sources of pollutants present additional difficulties. At the same time, effective governance systems must accommodate the addition of new infrastructure, housing, and energy delivery to support communities. Our legal framework must be sufficiently stable to enable regulation, investment, and innovation to proceed without the discontinuities and gridlock of the past few decades.
In an increasingly divided atmosphere, it will take candid, multiperspective dialogue to identify paths toward such a framework. This discussion paper explores the baseline that we’re building on and some key dynamics to consider as we think about the durable systems, approaches, and capacity needed to achieve today’s multiple societal goals.
Building Blocks to Make Solutions Stick
Our environmental system was built for 1970s-era pollution control, but today it needs stable, integrated, multi-level governance that can make tradeoffs, share and use evidence, and deliver infrastructure while demonstrating that improved trust and participation are essential to future progress.
Implications for democratic governance
- Invest in strategic communications to build durable public understanding of environmental measures.
- Redesign public participation and engagement to transparently and rapidly weigh the inevitable trade-offs of a decision through open and informed consideration guided by clearly articulated principles.
Capacity needs
Modernize today’s system of cooperative federalism to address the lack of clear and intentional interconnections, adaptive feedback loops, and aligned objectives, by:
- Rebuilding and sharing the data backbone at the state/local level, including working to preserve and sustain current environmental data; over time, expanding interoperable data collection and making it genuinely usable to support consistent, evidence-based state and local action.
- Using state/local powers for climate progress, integrating zoning, land use, infrastructure, and public health authorities into a whole-of-government discussion to address climate.
- Strengthening public sector networks to share data, best practices, and innovations across jurisdictions.
- Investing in tech-enabled capacity for states and local governments (such as AI for decision support, satellite imagery, remote sensing, and digital modeling) to lower the cost of monitoring, risk assessment, permitting, and compliance.
The first half of the 20th century saw the emergence of our first national laws regulating public resources—the Federal Power Act in the 1930s, the precursor to the Clean Water Act in the 1940s, and the first version of the Clean Air Act in the 1950s. Then, in a concentrated decade of new laws and massive amendments to existing ones, the 1970s saw a focus on assessing, controlling, and reducing pollution, while setting ambitious goals for human and ecosystem health. These statutes generally were constructed around specific resources—airsheds, watersheds, public lands, and wildlife habitat—and articulated specific roles for federal agencies and other levels of government. State efforts were incorporated into a nationwide system of cooperative federalism, while many states undertook their own initiatives to address environmental problems.
For half a century these laws—enacted with overwhelming, bipartisan congressional support— produced a great deal of success, with conventional pollution decreasing across many resources and regions and some species and habitats recovering. But we have plateaued in terms of broad improvements, and meanwhile novel pollutants and more diffuse, global threats have emerged. Political shifts, legacy economic interests, and a changing information landscape have played an important role, as amply recounted elsewhere.
The bipartisan legislation of the 1970s arose from both idealism and necessity, during an Earth Day moment that embraced ecological thinking in response to tangible harms to humans and the environment. The laws enjoyed massive public support and got many things right. Some were aspirational and holistic, such as the Clean Water Act’s “zero-discharge” target or NEPA’s vision “to create and maintain conditions under which man and nature can exist in productive harmony, and fulfill the social, economic, and other requirements of present and future generations of Americans.” The latter Act established the Council on Environmental Quality to coordinate this policy across the entire federal government.
Other advances came piecemeal, focused on specific resources. The U.S. Environmental Protection Agency (EPA) was cobbled together by an executive plan to reorganize several existing agencies and offices, then granted authority in a series of media-specific statutes that began with the Clean Air Act, Clean Water Act, and Safe Drinking Water Act, and later the Toxic Substances Control Act and Federal Insecticide, Fungicide, and Rodenticide Act. The Resource Conservation and Recovery Act, Superfund, and Oil Pollution Act addressed hazardous substances affecting the nation’s health and ecosystems. Implementation of all these laws required the Agency to develop in-house scientific expertise and detailed regulations that fleshed out statutory standards and applied them to specific sectors—an approach upheld for decades by the Supreme Court.
These laws made unquestionable progress on conventional pollution and waste, the visible, toxic byproducts of industrial production and consumer culture that had spurred the environmental movement and drawn a generation of lawyers to the new profession. But with specialization came fragmentation of environmental law into a plethora of subtopics, and a managerial, permit-centric legal culture that risked losing sight of ecological goals. Nor were the benefits distributed equally by race or class, as demonstrated by pioneering studies in the field of environmental justice.
As the field matured, it slowed, with congressional interventions becoming less frequent and more technical. Some of the last major amendments to a bedrock environmental statute were the Clean Air Act Amendments of 1990, enacted by a bipartisan Congress and signed by President George H.W. Bush. (The other prominent example is the Frank R. Lautenberg Chemical Safety for the 21st Century Act (Lautenberg Chemical Safety Act), a major amendment to TSCA in 2016.) Absent updated legislation, EPA regulations became paramount, but these had to run a gauntlet of shifting policy priorities, complex rulemaking procedures, litigation, and a transformed and often skeptical Supreme Court.
Critiques of this system date back almost as far as the statutes themselves. One ELI study listed 34 major “rethinking” efforts emanating from academia, blue-ribbon commissions, and NGOs between 1985 and 2014, across the political spectrum and ranging from incremental reforms to radical reinvention. One highly touted initiative, led by sitting Vice President Al Gore, resulted in some modest administrative streamlining. Most remained paper exercises, appealing to good-government advocates but lacking political support.
The stakes grew higher with increasing awareness of climate change. In June 1988, NASA scientist James Hansen testified before the Senate that global warming had begun; front-page coverage and book-length treatments followed, sparking broad discussion of what was then a fully bipartisan issue. Vice President Bush campaigned on addressing it, and as President in 1992, he traveled to Rio de Janeiro to sign the U.N. Framework Convention on Climate Change. With successes like the 1987 Montreal Protocol on the ozone layer and EPA's 1990 Acid Rain Program doubtless in mind, the Senate ratified the Framework Convention 92-0.
But climate change implicates much larger portions of the U.S. economy—energy, transportation, agriculture—at individual as well as industrial scales. While NEPA embodied the 1960s slogan that “everything is connected,” the lesson of climate change is that many things emit greenhouse gases, and all things will be affected by global warming. The need for systemic change proved to be an uneasy fit with existing site-specific, media-specific environmental laws.
Growing awareness of climate change and the scale of action needed to address it also generated a backlash from entrenched economic interests. By the mid-2000s, the Bush/Cheney administration had reversed course on federal climate commitments. It contested and lost Massachusetts v. EPA, a landmark ruling in which a narrowly divided Supreme Court held that the Clean Air Act applies to greenhouse gas emissions that affect the climate.
The Administration’s argument was captured by Justice Antonin Scalia’s flippant remark in dissent that “everything airborne, from Frisbees to flatulence, [would] qualif[y] as an ‘air pollutant.’” In Scalia’s view, real pollution must be visible, earthbound, toxic, inhaled: not a matter of colorless molecules interacting in the stratosphere. Even in dissent, this view set the stage for subsequent legal battles, right up to the present effort to revoke EPA’s 2009 “endangerment finding,” which is now the underpinning of federal greenhouse gas regulation.
Climate change likewise laid bare the long-standing divide between environmental law, which historically regulated the power sector in terms of its fuel inputs and combustion byproducts, and energy and utility law, which focused more on transmission and distribution of the resulting power. (Both fields are further divided among federal, state, and local authorities, as discussed below.) Vehicle emissions similarly are regulated via both EPA tailpipe standards and National Highway Traffic Safety Administration mileage standards, with California authorized to adopt more stringent ones. When coordinated, this multi-headed structure produces steady advances, but in deregulatory moments it has become fertile ground for opportunism, retrenchment, and delay.
At the federal level, these questions have been exacerbated by massive shifts in administrative law, long the building block of environmental law and climate action, and in federal court rulings on the separation of powers, implicating the authority of federal agencies to issue and enforce rules. Successive administrations have run afoul of the current Supreme Court majority, whose “major questions doctrine” casts a shadow both on attempts to fit new problems into once-expansive environmental statutes, and on “whole of government” approaches that attempt to address climate change’s sources and impacts across the entire economy.
Tentative attempts by presidents to leverage executive power and emergency authority have been curtailed when invoked for regulatory purposes, even as those same powers run strong in deregulatory efforts and executive actions in the service of “energy dominance.” Whether the Supreme Court will articulate some principled limits, and whether those will be even-handedly applied to future administrations, remains to be seen. Meanwhile, the past year has seen a large-scale push to reduce environmental regulation, in parallel with abrupt reorganizations and steep reductions in the federal workforce and agency budgets. These actions were joined by sharp declines in environmental enforcement and U.S. withdrawal from environmental and climate-related international instruments and bodies.
In this uncertain atmosphere, attention has turned to new technologies and building the infrastructure necessary to enable growth in low- and zero-carbon energy. As clean energy alternatives have matured and become economically competitive, the climate imperative is pushing against long-standing environmental review and permitting procedures. That may well include NEPA, which is now attracting attention from all three branches of government and prompting a robust debate about whether, or how much, its procedures might be slowing energy deployment.
Environmental issues were federalized for a reason: to counter pollution that crosses state borders and to prevent a race to the bottom. But decades of implementation have seen the blunting of some tools, expansion of others, and identification of gaps. Moving forward requires reaffirming that the environment is inseparable from societal health and well-being, economic stability, and energy systems. Any serious response must orient governance toward decarbonization, while embedding accountability, equity, and justice from the outset rather than inconsistently and often inadequately after the fact. Doing all this without sacrificing hard-won environmental gains will not be easy.
To meet the worldwide crises of biodiversity loss, pollution overload, and climate change, any new governance structure must be rooted in an understanding of the existing baseline for environmental governance.
- Cross-Cutting Objectives: Effective governance paths must overcome the persistent false dichotomy between the environment and the economy, making clear that energy production, economic prosperity, ecosystem health, and societal well-being are inextricably linked. Improved trust and participation are essential to sustaining and accelerating progress across these interconnected goals.
- Democracy, Expertise, and Regulatory Certainty: Our legacy environmental laws have seen many successes, but their media- and site-specific tendencies are in tension with the scale of action needed to decarbonize our economy, conserve biodiversity, and control pollution. Eroded trust, accreted layers of process, and increasingly extreme political actions and reactions hamstring progress. At the same time, rapidly advancing scientific knowledge and technology have greatly expanded our ability to anticipate environmental challenges and understand and react to the impacts of our actions. Harnessing these tools effectively can help us improve and accelerate our decisionmaking processes.
- Building a Structure Fit for Purpose: Environmental law necessarily operates at multiple scales: global, national, tribal, regional, state, and local. Our system of cooperative federalism centers authority around the federal and state governments, backstopped by treaty obligations, interstate compacts, and traditional state and local authority over land use and public safety and welfare. A strong cooperative federalism framework can foster collaboration across subnational jurisdictions, including by leaning into data collection, analysis, and dissemination to support decisionmaking. In addition, understanding the effects and drivers of private sector environmental actions can help to identify ways to leverage those actions to augment and fill gaps in public governance.
Cross-Cutting Objectives
Inseparable: Environment, Energy, Economy, and Society
The past half-century has demonstrated the impossibility of severing the environment from the economy, energy production, and social well-being. The false dichotomy between environmental protection and economic development, which oversimplifies the two into a zero-sum competition, must likewise fade. The decades-old concept of sustainability (or the triple bottom line) has not yet made its way into many of our foundational laws and governance structures.
Ignoring the complex relationships among environment, energy, the economy, and society favors short-term decisions that externalize impacts. This underlies the longstanding debate over the accuracy and efficacy of cost-benefit analyses throughout their 40-plus-year federal history, including questions about scope and the handling of uncertainty. For any project or program, system designers who consider an integrated suite of factors beyond basic environmental parameters or economic indicators (from public health to workforce development, from supply chains to community well-being) have a greater chance of cross-sector success.
These governance challenges are also inseparable from shifts in how finance flows. Public and private financial tools—from subsidies and tax credits to loans, grants, and community-based financing—are increasingly shaping market behavior and determining whether policy objectives translate into real-world outcomes. Who controls these tools, how they are deployed, and when capital is made available all play a central role in driving or constraining environmental progress.
Bridging these gaps is, of course, easier said than done. But widening the aperture of considerations can connect decisionmaking to holistic industrial policies that account for a wider range of economic, social, and environmental factors. Accounting for this wider range isn’t just a nice-to-have, but essential to shared prosperity.
Foundational: Trust and Participation
A process, project, or program will move at the speed of trust—no faster and no slower. This refers to trust in institutions, in science, and in process.
Trust is earned through consistent transparency, clear accountability, and demonstrated responsiveness. For governance systems to function at the scale and pace required today, these principles must be embedded in decisionmaking in ways that are coherent and durable, rather than fragmented across a series of disparate steps and entities. Our traditional frameworks contain mechanisms to solicit and incorporate public input. But those mechanisms have limitations for all involved, both those trying to make their voice heard and those proposing the action and receiving input. (These range from when and how often participation occurs in the decisionmaking process to how the input is incorporated and decisions communicated.) Participation is foundational to our regulatory democracy and must occur early enough and in meaningful ways to improve decisions.
Effective participation also depends on clarity. People must be able to understand how decisions are made, what tradeoffs are being weighed, and where and how engagement can influence outcomes. But our frameworks still reflect reliance on elite and professional representation rather than widespread engagement. Trust—and the durability of outcomes—will increase when our processes have clearly articulated principles, transparently and rapidly weigh tradeoffs, and come to decisions through open and informed consideration.
The Concurrent Risk and Promise of Technology
Mechanization and industrialization created both unprecedented wealth and the pollutants that were the target of the 1970s wave of environmental laws. Emerging technologies likewise offer great promise, but also place familiar stresses—greenhouse gas emissions, water consumption, land use, waste—on the ecosystem and on human health and well-being. Our existing laws will need to respond and adapt to these problems as data centers and other novel demands reach greater scale, even as we evolve new ways of balancing those technologies’ potential against their up-front impacts and opportunity costs.
Technology also offers a potential path through the climate crisis, as solar and wind energy have become scalable and cost-competitive with traditional fossil fuels. Other clean technologies on the horizon, such as geothermal or fusion energy, retain bipartisan support and will require legal and regulatory guardrails if they mature and are integrated into the system. Battery storage and energy efficiency advances will help manage and reduce energy demand, and carbon removal and sequestration technologies may also play a role in curbing emissions. And at the outer limits of our knowledge, various geoengineering concepts are raising difficult questions about feasibility, decisionmaking procedures, unintended consequences, and accountability.
New technologies are also helping shape the implementation of environmental law in important ways. Existing tools such as satellite imaging, GPS location and geographic information systems, remote monitoring and sensing, and drones have fundamentally altered the way we view and record data from the physical world, in close to real time. Computer modeling and simulations have been a mainstay of climate science and policy, and other software innovations may improve environmental governance, including addressing long-standing issues of government transparency and public participation.
Effective messaging is essential to enhancing public understanding of interconnected issues and support for responses. It should be tailored to specific jurisdictions and informed by advances in research (e.g., behavioral science), learn from those thriving in today’s information ecosystem, and embrace strategies for reducing polarization.
How can we identify and address barriers to the development and equitable deployment of technologies that advance environmental protection, while limiting their negative impacts?
Democracy, Expertise, and Regulatory Certainty
In a healthy democracy, public policy is guided by evidence, and truth is the shared foundation for collective decisionmaking, whatever the chosen outcome. When facts and scientific expertise are dismissed or minimized in favor of ideology, however, it becomes harder for citizens to deliberate, solve problems, and hold leaders accountable. The diminution and marginalization of science contribute to the erosion of democracy itself.
In the United States, our ability to build necessary infrastructure and take action has been slowed by the long timelines and sometimes overlapping requirements of our regulatory processes. This is exacerbated by the increasingly extreme policy swings we have been experiencing between administrations. The result is the twin challenge of how to increase the pace of our processes without lessening their protections, while also making our decisions more stable and durable.
Aligning Regulatory Certainty and Timelines
Regulatory certainty is not the same thing as rigidity. When done correctly, it provides the backdrop against which communities can plan for the future and companies can make informed decisions about where and how to invest. Regulation anchored to clear and stable objectives leaves less room for pendulum swings.
Long horizons with clear milestones matter: think of a national clean electricity standard, or the emissions-based equivalent, set on a 15- to 20-year glidepath. Confidence in long-term decisions, however, stems from effective inclusion, holistic analysis, and transparent decisions. The perspectives of subject-matter experts (in-house and external), and of those who manage and care about the resources or land in question, should be considered essential and actively pursued by policymakers.
Program-level thinking can help inform decisions at the project level. The energy transition will be remembered for feats of engineering—the thousands of miles of transmission lines, the buildout of battery storage—but its success will be determined by whether our framework listens, incorporates needed expertise, and produces rules that last long enough for people to plan their lives.
Evidence-Based Decisionmaking
For decades, the principle that good decisions require a good evidence base has been axiomatic. Since 1945, the federal government has invested in science as both a discipline and an idea, supporting research conducted by public institutions and delivered as socially useful goods by the private sector.
Incorporating meaningful, often complex, evidence—including scientific data, traditional knowledge, and the needs, concerns, and priorities of potentially affected individuals—into decisionmaking is increasingly fraught. Climate change illustrates these challenges: despite decades of understanding by government officials and private sector decisionmakers about its causes and the need to act, economic and social interests have prevented effective policy and legislative response. Decisions are only as good as the information they are based on. Emissions reductions ultimately depend not just on technical knowledge, but on institutions and governments capable of acting on that knowledge independently, transparently, and free from corruption and clientelism.
In a study assessing the effectiveness of the federal government’s efforts to improve evidence-based decisionmaking, the U.S. Government Accountability Office found mixed progress in: (1) developing relevant and high-quality evidence; (2) employing it in decisionmaking; and (3) ensuring adequate capacity to undertake those activities. These are foundational problems.
Compounding our challenges in making legislative and policy decisions based on accurate and pertinent evidence is the siren song of AI. Artificial intelligence promises many tools, ranging in complexity and autonomy from performing clerical tasks to generating substantive recommendations. (As Paul Grimm et al. describe, AI clerical assistive systems automate certain administrative and procedural tasks, such as document classification and automatic transcription, while AI recommendation systems can contribute to judicial decisionmaking, for example by analyzing legal codes and case precedents.)
AI is already being used across jurisdictions and agencies for environmental regulation, including planning, reviewing proposals, drafting environmental reviews, public participation and engagement, monitoring compliance, and enforcement. Recent federal policy has fueled the AI flame, with a 2025 AI action plan and multiple Executive Orders that promise to expedite permitting processes.
Enormous governance questions around AI have yet to be resolved. Technologies built by people reflect the values and assumptions of those who built them, and their use shifts power in decisionmaking processes. If a judge were called upon to review a decision made by such a tool, how could she determine the finding was reasonable under existing standards of administrative law? Can machine-generated analysis satisfy NEPA’s “hard look” review? These types of governance concerns dog AI tools wherever they are deployed but become particularly critical when they have the potential to become the decisionmaker in our legal and regulatory system.
The importance of having rigorous systems for identifying and considering trusted information to ground collective and democratic decisionmaking cannot be overstated. Until recently, dozens of scientific advisory committees routinely advised federal agencies to help bridge information gaps. Staggering recent losses of federal research funding and government programs, and the scrubbing of essential data sets, mean any path forward will likely require significant investments of both financial and human capital. When we rebuild, priority should be placed on ensuring all participants in decisionmaking have access to the same evidence, supported by the same systems.
Frontloading Regulatory Decisionmaking
Even as we work to improve how evidence informs decisionmaking, we face growing risks, uncertainties, and tradeoffs. The challenge is not simply to generate more information, but to make better use of what we already know through regulatory systems that reflect the integrated nature of the problems we face—without mistaking uncertainty for an absence of evidence.
Many conflicts arise because decisions are fragmented across regulatory silos and institutions. Consider a proposed electrical transmission line crossing a wetland. Decisionmakers must balance the imperatives of the energy transition, the conservation of biodiversity, the protection of water resources, and local economic opportunities. Yet these factors may be evaluated at different times, at different scales, and by different agencies. As a result, environmental permitting decisions can be made in isolation, long after foundational choices about the project’s purpose and design have already been locked in.
By the time site-specific questions arise, such as whether a particular wetland falls within the narrowed jurisdiction of the Clean Water Act, many broader tradeoffs have already been foreclosed.
A holistic approach would entail prioritizing certain projects and establishing a system for weighing their impacts. For example, infrastructure decisions could happen at a systemic scale such as nationwide grid needs, providing context for decisions about individual projects and resources. Our decisionmaking processes need systems for weighing tradeoffs, and making them transparent, to enable systems-level planning and prioritization and effective engagement.
Hard decisions will have to be made regarding prioritized (and thus deprioritized) objectives. But frontloading data gathering, assessment, and decisionmaking on a national scale—through meaningful scenario planning, for example—could reduce the number of decisions made much further down the line in a project lifecycle and temper the uncertainty that can stem from permitting officials’ discretion.
We will be facing these types of tradeoffs with increasing frequency as needs mount to build infrastructure and housing, retreat from our coasts, manage and conserve species and ecosystems, and respond to and prepare for increasingly frequent and severe emergencies. In addition to an integrated approach for assessing impacts and making tradeoffs transparent, the system will need certain decisions to be made earlier in the decisionmaking processes and with a broader scope.
Acting (and Adapting) Amidst Uncertainty
Core tenets of administrative law structure decisionmaking with up-front analysis and assume that we have full—or at least sufficient—information about circumstances and potential impacts to support a decision. But this is not always the case. When there are substantial uncertainties about conditions or the possible impacts of an action or rulemaking, adaptive management can improve outcomes by taking an iterative, systematic approach.
The uncertainties brought on by changing conditions due to climate impacts and unknowns about the consequences of proposed actions may call for an adaptive approach. And there are other situations where establishing sufficient evidence before taking irreversible action is appropriate. For example, we currently have limited understanding of the potential local and global impacts of geoengineering proposals to release aerosols into the atmosphere to block the sun’s rays, and no governance mechanisms are in place to address them.
There are also situations where it is important that we not indefinitely postpone action out of a desire to have all the answers before acting, such as infrastructure for transitioning away from fossil fuel combustion. When appropriate, effective adaptive management plans include procedural and substantive safeguards: clear goals to set an agenda and provide transparency; an accurate assessment of baseline conditions against which to compare future monitoring data; defined thresholds at which management actions should be taken, to promote certainty and assist with judicial enforcement; and a clear link to response actions.
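To make these safeguards concrete, the structure of such a plan can be sketched in code: goals, a measured baseline, deviation thresholds, and predefined responses tied to each monitored indicator. This is purely illustrative; the indicator names, thresholds, and responses below are hypothetical, not drawn from any actual plan.

```python
from dataclasses import dataclass

@dataclass
class AdaptiveManagementPlan:
    """Illustrative sketch of adaptive management safeguards:
    goals, a baseline, and thresholds linked to response actions.
    All names and values are hypothetical."""
    goals: list                # stated objectives, for transparency
    baseline: dict             # indicator -> measured starting condition
    thresholds: dict           # indicator -> allowed deviation from baseline
    responses: dict            # indicator -> predefined response action

    def evaluate(self, monitoring_data: dict) -> list:
        """Compare monitoring data to the baseline and return the
        response actions triggered by any exceeded threshold."""
        triggered = []
        for indicator, observed in monitoring_data.items():
            deviation = abs(observed - self.baseline.get(indicator, observed))
            if deviation > self.thresholds.get(indicator, float("inf")):
                triggered.append(self.responses.get(indicator, "review required"))
        return triggered

# Hypothetical wetland-mitigation example
plan = AdaptiveManagementPlan(
    goals=["maintain wetland acreage", "protect water quality"],
    baseline={"wetland_acres": 120.0, "turbidity_ntu": 8.0},
    thresholds={"wetland_acres": 10.0, "turbidity_ntu": 5.0},
    responses={"wetland_acres": "restore acreage",
               "turbidity_ntu": "adjust runoff controls"},
)
# Acreage has fallen 15 acres (beyond the 10-acre threshold);
# turbidity has risen only 1 NTU (within its 5 NTU threshold).
print(plan.evaluate({"wetland_acres": 105.0, "turbidity_ntu": 9.0}))
```

The design point is that the response to each threshold exceedance is specified before monitoring begins, which is what gives an adaptive plan the certainty and enforceability the safeguards above call for.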
Learning as we go and making appropriate adjustments may be justified in some contexts, and even essential when we do not have the luxury of time and must move ahead without critical information. Adaptive management can increase an agency’s ability to make decisions and allow managers to experiment, learn, and adjust based on data. But adaptive management’s flexibility comes at the cost of more resources and less certainty, which may also invite controversy. The sweet spot for adaptive management may be when managing a dynamic system for which uncertainty and controllability are high and risk is low. While uncertainties are proliferating, situations that meet those conditions are not the norm.
It would be beneficial for our environmental governance systems to explicitly identify conditions under which adaptive management may and may not be used, and to provide clear accountability mechanisms. The approach must fit with the practical realities of the working environment. For example, even if uncertainty and controllability are high and risk is relatively low, tinkering with large-scale energy infrastructure is not practical. Adaptive management may not be suited to regulatory contexts (1) in which long-term stability of decisions is important; (2) where decisions simply can’t easily be adjusted once implemented; or (3) where it is essential that an agency retain firm authority to say “yes” or “no” and leave it at that. It is a valuable tool to be invoked when truly necessary.
The interconnectedness of today’s global environmental challenges is in tension with the accreted framework of media-specific, site-specific laws and siloed agencies. Adjustments that help to align objectives, processes, and structures could scale impact.
Our framework should reflect commitment to and investment in gathering and analyzing information, from intricate science to the concerns of impacted communities; and be designed to incorporate and respond to changing information, such as through judicial review or other checks.
In part because of impacts already set in motion, we must consider when we cannot wait for more information before taking action on environmental and climate challenges. By their nature, some of those actions can be adapted on an ongoing basis, while others cannot. Clear parameters for differentiating will help ensure clear timelines and appropriate, effective processes.
Building a Structure Fit for Purpose
The “triple planetary crisis,” a term coined by the UN Environment Programme, refers to the interlinked challenges of biodiversity loss, pollution overload, and climate change. These crises require large-scale mobilization and societal-level adjustments. This magnitude of action requires a multifaceted system that can support and move myriad levers in a coordinated and balanced manner. The year she received the Nobel Prize in Economics, Elinor Ostrom published a paper capturing both the tension and the necessity of this layered system, calling for a “polycentric approach” to addressing climate change.
The following discussion focuses largely on federal and state government action. In addition, Tribal Nations are vital sovereign authorities, partners, and voices in governance, including natural resource management, and their needs and knowledge are critical to effective, sustainable, just results. And as Ostrom recognized, private entities will also be instrumental in addressing climate change and other complex challenges; this includes not only corporations, as discussed below, but philanthropic organizations and a variety of other nongovernmental actors.
The Scale Challenge
Environmental regulation occurs at multiple levels: local ordinances, state laws and policies, interstate agreements, tribal laws, federal regulations, and international laws and norms. It also works at different resource scales, from managing a subspecies to protecting regional drinking water to setting nationwide air standards.
Jurisdictional nesting can provide comparative benefits at various levels for specific resources or pollutants. For example, working at the local level may allow tailoring to specific circumstances to maximize benefits and build trust, while working at the state level can aggregate the cumulative benefits of collective local action and permit testing of different approaches to federal implementation. Meanwhile, working at the federal and larger scale allows, among other things, the balancing of voices and the establishment of shared objectives, standards, or requirements.
However, tiered systems can also be subject to gaps in implementation, such as when there is no mechanism to trigger enforcement of an international mandate at a national level. They may inadvertently impede interoperability and shared learning (for example, through divergent data standards, tools, or systems) and slow action due to competing or otherwise unaligned priorities. In addition, jurisdictional boundaries rarely align with resource boundaries, whether a hydrogeographic basin, the extent of an air pollutant, or a natural hazard vulnerability zone. Further complexity is added by questions around preemption, as longstanding understandings of federal versus state authority under key statutes and regulatory structures shift.
Federal, tribal, state, and local governments must navigate these challenging dynamics as they work to effectively implement existing environmental laws and creatively address new environmental problems.
Cooperative Federalism
Federalism—whereby the federal government and states share power and responsibilities—is a central tenet of the U.S. governance system. A particular form, cooperative federalism, is embodied in most of the major U.S. environmental laws, including the Clean Air Act and the Clean Water Act. These laws establish a legal framework in which minimum standards are established at the federal level and individual states implement the programs. Today, over 90 percent of the delegable federal environmental programs are run by states. As a general matter, states are responsible for ensuring that federal standards are met but have the flexibility to impose standards that are more stringent than the federal standards.
In practice, the Congressional Research Service observes that the “precise relationship and balance of power between federal and state authorities in cooperative federalism systems is the subject of debate.” This debate has manifested in a variety of ways over the decades, including differences over the appropriate scope of federal oversight and levels of federal funding for state-delegated programs.
Environmental protection has advanced in many respects over time with cooperative federalism as its foundation, but few would argue there is no room for improvement. For example, a 2018 memorandum by the Environmental Council of the States (ECOS) captured a consensus among states that the “current relationship between U.S. EPA and state environmental agencies doesn’t consistently and effectively engage nor fully leverage the capacity and expertise of the implementing state environmental agencies or the U.S. EPA.”
In addition to the leeway that cooperative federalism provides to the states in implementing federal environmental laws, states are free to regulate or otherwise address environmental problems that are not covered by federal laws. As a result, states are often referred to as (in Justice Brandeis’ phrase) “laboratories of democracy” for testing innovative policies. Historically, states have served as testing grounds for environmental policies later adopted by the federal government. Given the current federal governance landscape, discussed below, what happens in the states may stay in the states (at least for quite some time)—making state laboratories one of the few promising options for advancing environmental protection.
Barriers to Optimal Functioning of Cooperative Federalism
In addition to the inherent systemic challenges outlined above with respect to multi-tiered jurisdiction and resource scale, there are broad societal barriers to maximizing the efficacy of cooperative federalism. The numerous overarching problems contributing to democratic dysfunction (e.g., channelized communication, primaries that yield extreme candidates who foster dramatic pendulum swings, lack of public trust) will continue to impede the optimal functioning of cooperative federalism for the foreseeable future.
The multitude of environmental governance-specific challenges identified earlier also significantly affect the functioning of cooperative federalism. These include, for example, long-standing congressional gridlock; new and emerging environmental harms that cannot be easily addressed within the existing, siloed framework; a Supreme Court changing its review of regulation; and regulatory pendulum swings that make consistency and stability difficult and hinder continuous improvement.
Several additional barriers arguably weaken the foundations of cooperative federalism. These include: ineffective federal oversight of state programs (possibly both too stringent and too lenient in some respects); insufficient collection and dissemination of data (e.g., on environmental conditions, performance, pollution impacts), as well as inconsistent tracking of key environmental indicators; lack of state-specific effective risk communication and messaging; limited state resources for filling federal regulatory gaps or experimenting with innovative ways of implementing federal and state regulations; and insufficient federal funding for state programs. Recent critiques also point to the need to build out state administrative law to improve the functioning of cooperative federalism.
Opportunities for Renewing Cooperative Federalism
Recent developments in federal programs are disrupting many aspects of the country’s environmental protection efforts. These developments include drastic regulatory rollbacks, multiplied industry influence, curtailed input from scientists and other experts, rollback of federal grant funds to states and local governments, and sweeping staffing cuts resulting in loss of critical expertise.
Cooperative federalism has been particularly undermined by federal funding cuts (e.g., withdrawal of federal grants, reductions in revolving loan funds) and cuts to the federal programs that collect and analyze environmental data. Moreover, federal interference with independent or “more stringent than” state initiatives is taking a toll (e.g., response to California’s electric vehicle requirements).
Given the barriers outlined above that make major statutory change infeasible, building an entirely new structure to replace cooperative federalism will be a nonstarter for the foreseeable future. However, ample opportunities exist to strengthen the existing structure in a manner that yields more effective and innovative approaches to environmental protection.
Front and center is building state and local governmental capacity to fill the gaps created by federal inaction and rollbacks as well as to lead on regulatory innovation. In so doing, states and local governments can serve as more effective laboratories of democracy and foster innovative federal action. And because states and local governments are on the frontlines of managing environmental and climate impacts such as floods and wildfires, as well as aging water infrastructure and other environment-related challenges, they are motivated to address the causes and effects of these harms, despite the intensely politicized nature of environmental issues such as climate change.
To be sure, renewing the existing structure is complicated by an uneven political landscape. For example, the level of political and popular support for environmental protection measures in the 26 states led by Republican governors differs from the levels of support in the 24 states led by Democratic governors, and the relative dominance of a particular party (e.g., trifectas or triplexes) is also a factor. These dynamics likewise influence environmental action by local governments when, for example, the potential for state preemption of local authority is a factor.
Nevertheless, the practical reality of increased extreme weather events, aging water infrastructure, and other environment-related challenges provides a strong incentive for all states and local governments to act. State and local efforts, however, are hindered by limited capacity in the form of staffing, funding, expertise, data, and other factors. For example, virtually all states could benefit in their decisionmaking from more robust data on local environmental conditions, and many states lack adequate funding, staff, and other resources.
Private Sector Synergies and Opportunities
Private environmental governance (PEG)—which can take a range of forms including collective standard-setting, certification and labeling systems, corporate carbon commitments, investor and lender initiatives, and supply chain requirements—is already making its mark across industries as diverse as electronics, forestry, apparel, and AI. For example, roughly 20 percent of the fish caught for human consumption worldwide and 15 percent of all temperate forests are subject to private certification standards. In addition, 80 percent of the largest companies in key sectors impose environmental supply chain contract requirements on their suppliers. And investors are increasingly taking environmental, social, and governance (ESG) factors into account, including risks related to climate change. A 2022 study estimated, for example, that assets invested in U.S. ESG products could double from 2021 to 2026 and reach $10.5 trillion.
As professors Vandenbergh, Light, and Salzman explain in their book Private Environmental Governance: “If you want to understand the future of environmental policy in the 21st century, you need to understand the actors, strategies, and challenges central to private environmental governance.”
Given the scope of PEG activities, it is not surprising that a range of regulatory regimes are implicated, including corporate governance, contract, antitrust, and consumer protection laws. In some cases, these legal regimes place constraints on the forms and scope of PEG initiatives. Many contend, however, that these constraints are inadequate, as reflected in recent efforts to severely curtail ESG initiatives.
Further, some scholars and advocates have criticized PEG from an entirely different perspective, citing concerns that PEG measures constitute greenwashing—that is, that they do not actually change corporate behavior and environmental conditions. Among other concerns is that PEG may undermine support for public governance measures in certain contexts.
Yet federal legislative gridlock, a dramatically swinging environmental regulatory pendulum, unregulated new technologies, and other factors point to the need for a better understanding of how PEG can be leveraged to advance environmental protection efforts—including the improved functioning of cooperative federalism.
How can we use innovative approaches for preserving existing data and collecting new data on environmental conditions, regulated entity performance, and pollution impacts to enhance interoperability of local, state, and federal systems, foster consistency among assessments of risk, and help align priorities and approaches?
Problems such as climate change require a whole-of-government approach and could benefit from leveraging adjacent state and local regulatory authorities in areas such as land use (e.g., zoning), infrastructure, and public health.
Bolstering state and local officials’ networks for sharing data, best practices, and regulatory innovations may help align priorities and produce further progress on cross-jurisdictional problems as well as new challenges such as permitting reforms.
For example: What are the effects of PEG (e.g., emissions reductions)? What are the drivers of PEG (e.g., brand reputation, shareholder actions, employees, and corporate customers)? Are there ways to reduce greenwashing and greenhushing? And how can we ensure that PEG complements public governance?
For example, AI and advanced monitoring technologies—if thoughtfully leveraged—could lessen the burden on state and local governments, particularly those that are under-resourced, in their efforts to assess climate risk, develop resilience plans, and monitor regulatory compliance.
Conclusion
The environmental gains of the last half-century demonstrate that governance choices matter. The United States built a system capable of addressing the urgent environmental crises of its time by combining scientific expertise, democratic accountability, and enforceable legal standards.
Today’s urgent challenges—climate change, biodiversity loss, and pervasive pollution—demand a similar alignment under far more complex conditions. The challenge is not merely to regulate more, faster, or differently, but to recommit to decisionmaking that is credible and durable: by restoring confidence that evidence matters, that participation is meaningful, that tradeoffs get confronted honestly, and that rules will persist long enough to justify investment and collective effort.
The path forward lies neither in abandoning the foundations of environmental law, nor in relying solely on technological or private solutions. It will be found by strengthening and adapting existing governance structures—integrating cross-cutting objectives across domains, clarifying roles across jurisdictions, and rebuilding the shared evidentiary base and institutional capacity needed to act amid uncertainty, rather than deferring action in pursuit of unattainable certainty. And it requires clear communication about today’s complex, dispersed challenges that enhances understanding and reduces polarization.
At its core, the triple planetary crisis is a democratic and governance challenge: how societies decide, together, to protect people and places while sharing costs and benefits fairly. Meeting that challenge will require systems capable of carrying both technical complexity and public trust, as well as a sustained commitment to invest in institutions that can decide, act, and endure.
Costs Come First in a Reset Climate Agenda
Building Blocks to Make Solutions Stick
Durable and legitimate climate action requires a government capable of clearly weighing, explaining, and managing cost tradeoffs for the widest array of audiences, which in turn requires strong technocratic competency.
Democratic governance needs
- Clear articulation of tradeoffs in policy design, including who pays, who benefits (and when), and why.
- Bigger, wider coalition-building for durable climate action, mobilizing dispersed beneficiaries and taking advantage of policy Overton windows that cut across partisan lines.
State capacity needs
- Intergovernmental delivery muscle and partnering capacity to enable state and local actors.
- Technocratic state capacity where the big wins live, like permitting and siting, interconnection and transmission, or power-market governance, plus implementation capacity to limit bottleneck-driven policy failures.
- Institutionalized, rigorous ex ante and periodic cost-benefit analysis to guide design and mid-course corrections.
Key Takeaways
- The costs of climate policy influence whether reforms benefit society, as well as their likelihood of passage and durability. Four ways to categorize climate policy costs are: negative-cost policies (pro-growth policies with climate co-benefits); low-cost policies (costs below domestic climate benefits); medium-cost policies (costs below global climate benefits); and high-cost policies (costs above global climate benefits). Cross-partisan alignment is most evident among pro-abundance progressives and pro-market conservatives.
- Negative- and low-cost policies align with domestic self-interest and comprise a growing share of the abatement curve. For example, market liberalization in permitting, siting, electricity regulation, and certain transportation applications lower energy costs and have profound emissions benefits. A prominent low-cost policy is emissions transparency. Negative- and low-cost policies hold the most potential for durable reforms and are often technocratic in nature.
- Chronic underconsideration of costs has induced an overselection of high-cost policies and underpursuit of low- and negative-cost policies. Legislative policies, such as subsidies and fuel mandates or bans, often receive no ex ante cost-benefit analysis before adoption. Interventions receiving cost-benefit analysis, especially regulation, tend to underestimate costs.
- Innovation policy – namely public support for research, development, and early-stage deployment – can align with domestic self-interest and address legitimate market deficiencies. By contrast, industrial policy for mature technology carries high costs, often erodes social welfare, and is not politically durable. Notably, public support for mature technologies in the Inflation Reduction Act was not durable, but support remained for nascent industry.
- We recommend that a reset climate agenda focus on abatement results over symbolic outcomes, prioritize state capacity for technocratic institutions, and emphasize cost considerations in policy formulation and maintenance. Negative-cost policies warrant prioritization, with an emphasis on mobilizing beneficiaries like consumer, non-incumbent supplier, and taxpayer groups to overcome the lobbying clout of entrenched interests. Robust benefit-cost analysis should precede any cost-additive policies and be periodically repeated to guide adjustments.
Introduction
Public policy involves tradeoffs. The primary tradeoff for climate change mitigation is economic cost. Secondary tradeoffs include commercial freedom, consumer choice, and the quality or reliability of goods and services. Political movements seeking to address a collective action problem, such as climate change, are prone to overlook the consequences of tradeoffs for other parties, like consumers and taxpayers. This paper posits that the cost tradeoffs of climate change mitigation have been underappreciated in the formation of public policy. This has resulted in an overselection of high-cost policies that are not politically durable and may erode social welfare. It has also resulted in overlooking low- or negative-cost policies that are durable and hold deep abatement potential. These policies can have broad political appeal because they align with the self-interest of the United States; however, they typically require dispersed beneficiaries to overcome the concentrated lobbying of entrenched interests.
A core, normative objective of public policy is to improve social welfare, which “encourages broadminded attentiveness to all positive and negative effects of policy choices”. Environmental economics evaluates the welfare effects of climate change mitigation policy as abatement benefits net of costs. The conventional technique for determining abatement benefits is the social cost of carbon (SCC). The barometer for whether climate policy benefits society is whether abatement benefits exceed costs. Accounting for full social welfare effects requires consideration of co-benefits as well, though these tend to be conventional air emissions with existing mitigation mechanisms covered under the Clean Air Act. Nevertheless, accounting for costs is essential to ensure climate policy benefits society.
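The climate-only welfare test described above reduces to a one-line comparison: a policy improves welfare when its abatement cost per ton falls below the SCC. A minimal sketch, with illustrative placeholder numbers (the policy's abatement volume and cost are invented; the SCC values echo the figures discussed later in this paper):

```python
def net_climate_benefit(abatement_tons: float, total_cost: float, scc: float) -> float:
    """Abatement benefit valued at the SCC, less total policy cost."""
    return abatement_tons * scc - total_cost

# Hypothetical policy: 1 million tons abated at a total cost of $50 million,
# i.e., an abatement cost of $50/ton.
print(net_climate_benefit(1_000_000, 50_000_000, scc=190))  # positive: passes a $190/ton global SCC test
print(net_climate_benefit(1_000_000, 50_000_000, scc=8))    # negative: fails an $8/ton domestic SCC test
```

The same policy can thus pass or fail the welfare test depending on whether benefits are valued domestically or globally, which is the crux of the taxonomy developed below.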
Abatement costs also have a discernable bearing on the likelihood and durability of policy reforms. Climate policies exhibit patterns of passage, mid-course adjustments, and political resilience across election cycles based on the constituency support levels linked to benefit-allocation and cost imposition. This paper develops four policy classifications as a function of their abatement benefit-cost profile, and uses this framework to examine the political economy, abatement effectiveness, and economic performance of select past and potential policy instruments.
Political Economy and Policy Taxonomy
The translation of climate policy concepts into legitimate policy options in the eyes of policymakers can be viewed through the Overton Window. That is, politicians tend to support policies when they do not unduly risk their electoral support. The Overton Window for climate policy is constantly shifting within and across political movements with the foremost factor being cost.
In a 2024 survey of voters, energy cost was the most valued characteristic of energy consumption (37%), followed by power availability (36%), climate effect (19%), U.S. energy security (6%), and something else (1%). Democrats valued energy cost and power availability slightly more than climate effect. Independents and Republicans valued energy cost and power availability far more than climate effect.
Progressives have long exhibited greater prioritization of climate change policy, but cost concerns are driving an overhaul of the progressive Overton Window on climate change. In California, which contains perhaps the most climate-concerned electorate in the U.S., progressives have begun a “climate retreat” to recalibrate policy as “[e]lected officials are warning that ambitious laws and mandates are driving up the state’s onerous cost of living”. Nationally, a new progressive thought leadership think tank is encouraging Democrats to downplay climate change for electoral benefit. Importantly, they find that 61% of battleground voters acknowledge that “climate change is at least a very serious problem,” but that “it is far less important than issues like affordability.”
Similarly, veteran progressive thought leaders, such as the Progressive Policy Institute, now stress that “energy costs come first” in a new approach to environmental justice. While emphasizing the continued importance of GHG emissions reductions, those policy leaders are making energy affordability the top priority, amid a broader Democratic messaging pivot from climate to the “cheap energy” agenda. The rise of cost-conscious progressives is particularly notable because the progressive electorate has expressed a higher willingness to pay to mitigate climate change than moderate and conservative electoral segments.
Economic tradeoffs, namely costs and expanded government control, have long been the conservative movement’s central concerns about climate policy. The conventional climate movement messaged on fear and the need for economic sacrifice, the antithesis of the conservative electoral mantra: economic opportunity. Yet a conservative climate Overton Window emerged with a series of state and federal policy reforms when climate change mitigation aligned with expanded economic opportunity. However, pro-climate conservative thought leaders remain opposed to high-cost policies, for example calling to phase out Inflation Reduction Act (IRA) subsidies for mature technologies.
Many leading conservative thought leaders continue to challenge the climate agenda writ large because of its association with high cost policies. For example, President Trump’s 2025 Climate Working Group report was expressly motivated by concerns over “access to reliable, affordable energy” while acknowledging that climate change is a real challenge. Similarly, a 2025 American Enterprise Institute report finds that the public is most interested in energy cost and reliability and unwilling to sacrifice much financially to address climate change. Meanwhile, climate-conscious conservative thought leaders like the Conservative Coalition for Climate Solutions and the R Street Institute continue to emphasize a market-driven, innovation-focused policy agenda that prioritizes American economic interests and drives a cleaner, more prosperous future. Altogether, it indicates a conservative Overton Window on negative and low-cost climate change mitigation.
While cost is driving the Overton Window within each political movement, it also buoys the potential for alignment across political movements. Political movements are not monoliths, but rather exhibit major subsets within each movement. The progressive movement has seen gains in popularity among its populist left flank, often identified as the “democratic socialist” wing, which contributes to ongoing debate about Democrats’ ideological direction. Climate policy initiated by this wing, however, is associated with high economic tradeoffs (e.g., degrowth) and has prompted a backlash within the progressive movement. By contrast, a subset of the progressive movement, sometimes labeled “abundance progressives,” has emerged to support a more pro-market, pro-development posture. This movement is especially responsive to energy cost concerns, and is an emerging substitute for the anti-development traditions of the progressive environmental movement. Overall, variances in the progressive movement are fairly straightforward to categorize linearly on the economic policy spectrum.
The Republican electorate views capitalism far more favorably than Democrats, but with modest decline in recent years. Republicans have trended away from consistently conservative positions associated with limited government, which historically emphasized the rule of law and a strict cost-benefit justification for government intervention in the market economy. They have migrated towards right-wing populism associated with the Make America Great Again (MAGA) movement. Right-wing populism is hard to operationalize for economic policy because it is not a standalone ideology, but a movement vaguely attached to conservative ideology. Generally, the “America First” orientation of MAGA implies positions based on the self-interest of the U.S., with the Trump administration prioritizing cost reductions in energy policy.
MAGA is further to the right of conventional conservatives on environmental regulation and general government reform. For example, conservatives have noted the contrast between conservative “limited, effective government” and the Department of Government Efficiency’s “gutted, ineffective government” reform approach. On the other hand, MAGA will occasionally back leftist policy instruments, such as coal subsidies, wind restrictions, executive orders to override state policies, and emergency authorities for fossil power plants. These are often justified to counteract the leftist policies passed by progressives (e.g., renewables subsidies, fossil restrictions, emergency authorities for renewables), resulting in dueling versions of industrial policy. In other words, ostensible overlap between MAGA and progressives on policy instrument choice actually reflects the use of similar tools for conflicting purposes (e.g., restrictive permitting or subsidies for opposing resources; i.e., picking different “winners and losers”). Nevertheless, the disciplinary agent for right-wing energy populism has been cost concerns, which have influenced the Trump administration to pursue more traditionally conservative energy policies like permitting reform and lowering electric transmission costs.
This political economy identifies the broadest cross-movement Overton Window between moderate or “abundance progressives” and traditional conservatives. Regardless, both broad movements exhibit cost sensitivity and growing prioritization of U.S. self-interest. Distinguishing the domestic SCC from the global SCC is essential to determine what policies are consistent with the self-interest of the U.S. versus the world as a whole. Traditionally, the U.S. government only considers domestic effects in cost-benefit analysis, yet the vast majority of domestic climate change abatement benefits accrue globally.
The first SCC, developed under the Obama administration, relied solely on a global SCC. Leading conservative scholars, including the former regulatory leads for President George W. Bush, criticized the use of the global SCC only to set federal regulations. They argued for a “domestic duty” to refocus regulatory analysis on domestic costs and benefits. Similarly, the first Trump administration used a domestic SCC. Although the second Trump administration moved to discard the SCC outright, this appears to be part of a regulatory containment strategy, not a reflection of the conservative movement’s dismissal of the negative effects of climate change. In other words, even if the SCC is not the explicit basis for policymaking, it is a useful heuristic for policymakers.
The proper value of the SCC is the subject of intense scholarly and political debate. It has fluctuated between $42/ton under President Obama, $1-$8/ton under President Trump, and $190/ton under the Biden administration (all values for 2020). The main methodological disagreement has been over whether to use a domestic or global SCC, with the Trump administration position guided by “domestic self-interest.” This suggests the original domestic and global SCC values may best approximate the parameters of the Overton Window, which motivates the following policy taxonomy characterizing climate abatement policies by cost relative to domestic and global SCC levels:
- Class I policy: negative abatement costs. Such policies are widely viewed as “no regrets” by scholars and political actors across the spectrum because they constitute sound economic policy that happens to carry climate co-benefits. The Overton Window is most robust for Class I policy. It typically takes the form of fixing government failure, such as permitting reform.
- Class II policy: positive abatement costs below the domestic SCC. These low-cost policies often fall within the Overton Window, because they advance U.S. self-interest (i.e., positive domestic net benefits). Class II policies have a small abatement cost range (e.g., up to $8/ton); one estimate places the domestic SCC at between one-fourth and one-fourteenth of the global SCC.
- Class III policy: abatement costs between the domestic SCC and global SCC. These medium-cost policies improve global social welfare, but are not in the self-interest of the U.S., excluding co-benefits. Most cost-additive policies that pass a global SCC test fall in this range, underscoring why climate change is an especially challenging strategic problem; those incurring abatement costs do not accrue most abatement benefits. Class III policies face inconsistent domestic support and often require international reciprocation to be in the self-interest of the U.S.
- Class IV policy: abatement costs exceeding the global SCC. These high-cost policies fail a climate-only cost-benefit test. In other words, Class IV policies erode social welfare, excluding co-benefits. Class IV policies may be effective at reducing emissions, but often leave society worse off. Class IV policies are challenging to pass and are hardest to sustain.
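The four classes above amount to bracketing a policy's abatement cost against the two SCC thresholds. A minimal sketch of that bracketing, using the $8/ton domestic and $190/ton global SCC figures cited earlier purely as illustrative defaults (the proper values remain contested, as noted above):

```python
def classify_policy(cost_per_ton: float,
                    domestic_scc: float = 8.0,
                    global_scc: float = 190.0) -> str:
    """Bracket an abatement cost ($/ton) against domestic and global SCC thresholds."""
    if cost_per_ton < 0:
        return "Class I"    # negative cost: "no regrets" policy
    if cost_per_ton <= domestic_scc:
        return "Class II"   # low cost: passes a domestic SCC test
    if cost_per_ton <= global_scc:
        return "Class III"  # medium cost: passes only a global SCC test
    return "Class IV"       # high cost: fails even a global SCC test

print(classify_policy(-5))    # e.g., a permitting reform that lowers costs outright
print(classify_policy(60))    # e.g., a mid-range RPS compliance cost estimate
print(classify_policy(800))   # e.g., a high-cost solar carve-out
```

The thresholds are the politically contested inputs; the classification itself is mechanical once they are chosen.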
Policy Applications
There are myriad policies across the abatement cost spectrum. This analysis applies to particularly popular domestic policies already pursued or readily considered. This includes policies targeting the environmental market failure via direct abatement (GHG regulation) and indirect abatement (public spending, clean technology mandates, and fuel bans). It also includes policies targeting non-climate market failure, yet hold deep climate co-benefits (innovation policy). The analysis also examines policies that correct government failure and have major climate co-benefits (permitting, siting, and electric regulation reform).
Fuel Mandates and Bans
For the last two decades, the most prevalent type of climate policy in the U.S. has been state-level fuel mandates and bans. Last decade, the environmental movement came to prefer policies that explicitly promote or remove fuels or technologies, not emissions. This is despite ample evidence in the economics literature that market-based policies are more effective and carry far lower abatement costs. Nevertheless, the most common domestic climate policy instrument this century has been state renewable portfolio standards (RPS). The literature notes several key findings from RPS:
- RPS has substantial but diminishing abatement efficacy. RPS compliance drove the bulk of initial renewables deployment, but declined to 35% of U.S. renewables capacity additions in 2023. This reflects the improved economics of renewable energy, which went from an infant industry in the 2000s to a mature technology and the preferred choice of voluntary markets by the 2020s. Renewables also exhibit declining marginal abatement as penetration levels grow. This underscores the environmental underperformance of policies promoting fuel, not emissions reductions.
- Binding RPS increases costs, with large state variances based on target stringency and carveouts. RPS compliance costs average 4% of retail electricity bills in RPS states and reach 11-12% of retail bills in states with solar carve-outs. Stringency is a key factor, as some RPS are not binding due to strong market forces, whereas binding RPS increases costs. Abatement cost estimates of RPS vary widely, with one prominent study placing compliance with RPS from 1990-2015 at $60-$200/ton. Within the Mid-Atlantic region alone, implied states’ RPS compliance costs in 2025 ranged from $11/tonne to $66/tonne, with solar carveout compliance clocking in at $70/tonne to $831/tonne. The future abatement cost of renewables integration is highly sensitive to RPS stringency and technology cost assumptions, with one estimate of implied abatement costs ranging from zero (nonbinding) to $63/tonne at a 90% requirement in 2050. This evidence qualifies RPS as a Class II to Class IV policy, depending on its design.
- States with stringent RPS face challenging compliance targets, prompting calls for reforms to mitigate cost. Compliance with interim targets has generally been strong but stringent RPS states are beginning to fall behind on their targets. For example, renewable energy credit (REC) costs are nearing alternative compliance payment levels. To reduce costs, popular reform ideas have included delaying compliance timelines, adopting a clean energy standard to capture broader resource eligibility, or making RECs emissions weighted.
- Modest RPS exists in some conservative states, but aggressive RPS policy has generally proven popular only in progressive states. As of late 2024, 15 states plus the District of Columbia had RPS targets of at least 50% of retail sales, and four have 100% RPS. Sixteen states have adopted a broader 100% clean electricity standard, though the broad definition of clean energy dilutes expected abatement performance in some states. Overall, renewable or clean portfolio standards do not appear to hold broad Overton Window alignment potential beyond modest applications.
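The implied abatement costs cited in these findings follow from a simple ratio: incremental compliance spending divided by tons of emissions displaced. A hedged sketch, with invented inputs (the REC price, purchase volume, and displaced emissions rate below are hypothetical, not figures from the studies cited):

```python
def implied_abatement_cost(compliance_cost_usd: float, tons_displaced: float) -> float:
    """Implied $/ton: total compliance cost over emissions avoided."""
    return compliance_cost_usd / tons_displaced

# Hypothetical state: 10 TWh (10,000,000 MWh) of REC purchases at $30/MWh,
# displacing grid power with an average emissions rate of 0.5 tons CO2/MWh.
rec_cost = 10_000_000 * 30    # $300M in compliance spending
tons = 10_000_000 * 0.5       # 5M tons of CO2 avoided
print(implied_abatement_cost(rec_cost, tons))
```

With these placeholder inputs the implied cost is $60/ton, squarely in the Class III range; the wide ranges reported above reflect how sensitive this ratio is to REC prices and the emissions rate of displaced generation.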
Micro-mandates have also sprung up, primarily in progressive states. These have often targeted the promotion of nascent or symbolic energy sources that the market would not otherwise provide, with the costs obscured from public view (e.g., rolled into non-bypassable electric customer charges). A good example is offshore wind requirements in the Northeast, which carries a high abatement cost (over $100/ton).
Fuel bans have become an increasingly popular climate policy in progressive states and municipalities. Beginning in 2016, a handful of progressive states began banning coal. However, this does not appear to have created much cost or abatement benefit, as evidenced by a lack of commercial interest in coal expansion in areas without such restrictions. In fact, neither federal nor state regulation was responsible for steep emissions declines from coal retirements. Coal retirements were mostly driven by market forces, especially breakthroughs in low-cost natural gas production and high efficiency power plants. Policy factors, like the Mercury and Air Toxics Rule, were secondary drivers of coal plant retirement.
Around 2020, California, New York, and most New England states began adopting partial natural gas bans or de facto bans on new gas infrastructure through highly restrictive permitting and siting practices. Unlike coal restrictions, these laws have markedly decreased commercial activity, namely gas pipeline and power plant development, and in some cases caused economically premature retirements. This has caused “pronounced economic costs and reliability risk.” Resulting pipeline constraints drive steep gas price premiums in these states, which translate into a core driver of elevated electricity prices.
Insufficient pipeline service in the Northeast is especially problematic, as demonstrated by a December 2022 winter storm event that nearly led to an unprecedented loss of the Con Edison gas system in New York City that would have taken weeks or months to restore. Further, preventing gas infrastructure development does not provide a clear abatement benefit, because more infrastructure is needed to meet peak conditions even if gas burn declines. A prominent study found a 130 gigawatt increase in gas generation capacity by 2050 was compatible with a 95% decarbonization scenario.
Progressive states and municipalities have also pursued natural gas consumption bans. This policy may carry exceptional cost, especially for existing buildings, with potentially well over $1 trillion in investment cost to replace gas with electric infrastructure. One estimate put the cost of natural gas bans at over $25,600 per New York City household. A Stanford study projected a 56% electric residential rate increase in California from a natural gas appliance ban. Generally, conservative thought leaders and elected officials have opposed natural gas bans for cost as well as non-pecuniary reasons, including security concerns and the erosion of consumer choice. This applies even for prominent members of the Conservative Climate Caucus. Altogether, gas bans are considered class IV policy with virtually no Overton Window alignment.
GHG Transparency
GHG regulation takes various forms. The least stringent is GHG transparency, which addresses an information deficiency and lowers transaction costs in voluntary markets. This begins with reporting and accounting requirements on emitters (Scope 1 emissions). Public policy can help resolve measurement and verification problems that have eroded confidence in voluntary carbon markets. GHG transparency policy can also standardize terminology and provide indirect emissions platforms. For example, making locational marginal emissions rates on power systems publicly available lets market participants identify the indirect power emissions of power consumption (Scope 2 emissions). Progressives have consistently favored GHG transparency policy, while conservatives have typically supported light-touch versions of it like the Growing Climate Solutions Act.
The second Trump administration recently pursued removal of basic GHG reporting requirements on ideological grounds, specifically repeal of the GHG Reporting Program (GHGRP). This appears to reflect an optical deregulatory agenda over an effective one. Conservative groups have warned of the downsides of GHGRP repeal. Pressure to course-correct may prove fruitful, given that the industry the Trump administration aims to assist – oil and natural gas – maintains that the U.S. Environmental Protection Agency (EPA) should retain the GHGRP. A recent analysis found that if states replace the GHGRP, new programs will be more expensive (Figure 2).
Many regulated industries and conservative groups instead support a low compliance cost GHG reporting regime with durability across future administrations. This applies not only to direct emissions reporting but also to indirect emissions reporting, since in the absence of federal policy industry faces a patchwork of compliance requirements across states and foreign governments. The same economic self-interest rationale justifies a role for limited government in emissions accounting, with an emphasis on the capital market appeal of showcasing the “carbon advantage” of the U.S. in emissions-intensive industries. An example is liquefied natural gas, whose export market is enhanced by showcasing its lifecycle emissions advantage over foreign gas and coal.
The abatement effectiveness of GHG transparency has grown appreciably in the 2020s, as voluntary industry initiatives have sharply increased. This policy set enables an efficient “greening of the invisible hand” with staying power, as corporate environmental sustainability efforts appear resilient regardless of political sentiment, unlike corporate social endeavors. In fact, the aggregate willingness to pay for voluntary abatement from producers, consumers, and investors suggests that well-informed domestic markets go a long way towards self-correcting the externality of GHGs (i.e., convergence of the private and social cost curves). The implied abatement cost of certain voluntary corporate commitments may even exceed the global SCC, especially commitments to nuclear, carbon capture, and other higher-cost generation financed by the largest sources of power demand growth. Well-functioning voluntary carbon markets could yield roughly one billion metric tons of domestic carbon dioxide abatement by 2030. Providing locational marginal emissions data can slash abatement costs from $19-$47/ton down to $8-$9/ton while doubling abatement levels from some power generation sources.
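The locational marginal emissions figures imply a large efficiency gain from better data alone. A back-of-envelope sketch, using the midpoints of the cited ranges as illustrative assumptions (not figures from the underlying study):

```python
# Illustrative arithmetic on the locational marginal emissions (LME) figures cited above.
# The midpoints of the quoted ranges are assumptions for illustration only.

cost_without_lme = (19 + 47) / 2   # $/ton, midpoint of the $19-$47 range
cost_with_lme = (8 + 9) / 2        # $/ton, midpoint of the $8-$9 range

baseline_abatement = 1.0                 # normalized tons abated without LME data
lme_abatement = 2 * baseline_abatement   # cited as roughly doubling abatement

spend_without = baseline_abatement * cost_without_lme   # 33.0 per normalized ton
spend_with = lme_abatement * cost_with_lme              # 17.0 for twice the abatement

# Under these assumptions, total spend falls roughly by half even as abatement doubles.
savings_share = 1 - spend_with / spend_without
print(spend_without, spend_with, round(savings_share, 2))  # → 33.0 17.0 0.48
```

The point of the sketch is that the cost figures compound: cheaper per-ton abatement applied to a larger abatement volume still yields lower total spend under these assumed midpoints.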
Overall, the efficient GHG transparency policy described above is a low-cost mitigation strategy consistent with a class II designation. Basic federal GHG transparency policy may even constitute class I policy, because it avoids the higher compliance cost alternative of a patchwork of state and international standards that would manifest in the absence of federal policy. However, stringent GHG transparency policy may constitute class III or IV policy. Prominent examples include a recent California climate disclosure law and a former Securities and Exchange Commission proposed rule to require emissions disclosure related to assets a firm does not own or control (Scope 3). Such efforts may obfuscate material information on climate-related risk and worsen private-sector led emission mitigation efforts.
Direct GHG Regulation
Classic environmental regulation takes a command-and-control approach. These instruments include emissions performance standards or technology-forcing mechanisms, typically for power plants or mobile sources. These policies vary widely in stringency and cost. Overall, the economics literature widely considers command-and-control an unnecessarily costly approach to reducing GHGs relative to market-based alternatives. It can also freeze innovation by discouraging adoption of new technologies.
Federal command-and-control GHG programs have not been particularly environmentally effective or cost-effective, nor have they demonstrated legal or political durability. The first power plant program was the Clean Power Plan, which was struck down in court; yet its emissions target was achieved a decade early thanks to favorable market forces and subnational climate policy. The most recent federal command-and-control approaches to GHG regulation were 2024 EPA rules for vehicles and power plants. A 2025 review of these and other federal climate regulations from the last two decades found:
- EPA’s cost estimates were “extraordinarily conservative,” with suspect methodology that was prone to error and inconsistent with economic theory;
- Assessed costs were $696 billion compared to regulators’ estimate of $171 billion, an increase in abatement cost from $122/tonne to $487/tonne; and
- EPA’s benefit assumptions were too optimistic.
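The headline figures above are internally consistent: dividing each total cost by its per-tonne cost implies roughly the same cumulative abatement under both the regulators' and the reviewers' numbers. A quick check, assuming the totals and unit costs refer to the same abatement base:

```python
# Consistency check on the 2025 review figures cited above.
# Assumes the total costs and $/tonne costs refer to the same cumulative abatement.

epa_total = 171e9       # regulators' estimated cost, $
epa_per_tonne = 122     # regulators' abatement cost, $/tonne
review_total = 696e9    # review's assessed cost, $
review_per_tonne = 487  # review's abatement cost, $/tonne

implied_epa = epa_total / epa_per_tonne          # implied tonnes abated
implied_review = review_total / review_per_tonne # implied tonnes abated

# Both imply roughly 1.4 billion tonnes of cumulative abatement, so the ~4x
# cost gap reflects disputed cost estimates, not different abatement assumptions.
print(round(implied_epa / 1e9, 2), round(implied_review / 1e9, 2))  # → 1.4 1.43
```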
The 2025 review study implies that past federal command-and-control regulation carried very high costs, well into the class IV range. Undercutting it has also been a top priority of conservatives. However, modest command-and-control policy with class II or III costs remains possible.
Some conservatives, noting EPA’s legal obligation to regulate GHGs and the cost of regulatory uncertainty from decades of EPA policy oscillations between administrations, have suggested modest requirements as a better way to replace high-cost rules, mitigating legal risk and providing industry a predictable, low-cost compliance pathway. For example, conservatives argued that replacing high-cost requirements for power plants to adopt carbon capture and storage (CCS) with low-cost requirements for heat rate improvements may lower compliance costs more than attempting to repeal the Biden-era CCS rule outright. Similarly, the oil and gas industry opposed stringent GHG regulations on power plants and mobile sources but often validated alternative low-cost compliance requirements.
The first Trump administration pursued modest repeal-and-replace GHG regulation. The second Trump administration has opted for outright repeal and for eliminating the endangerment finding via executive rulemaking. However, regulated industry and many conservative thought leaders believe this is a strategic blunder, given the low odds of legal success, resulting in the perpetuation of “regulatory ping-pong that has plagued Washington, D.C., for decades.” If the courts uphold Massachusetts v. EPA and the associated endangerment finding, this implies that modest command-and-control policy may have durable political alignment potential. Yet such policy does not hold much abatement potential. In the absence of a legal requirement to regulate GHGs, there is unlikely to be broad political alignment for even modest command-and-control policy. Conservatives tend to view it as a gateway to more costly policies that would probably not meaningfully affect global GHG trajectories.
The 2025 review study understates the full cost of U.S. climate regulations because it excludes regulation at the state and local levels. Although no comprehensive study of state climate regulation is known, state command-and-control regulations often raise major cost concerns as well. The cost and environmental performance of such state programs varies immensely, often owing to differences in the accuracy of the abatement technology costs that regulatory decisions are based upon (e.g., the failure of California’s zero-emission vehicle program compared to the success of its low-emission vehicle program). A recent example is California’s rail locomotive mandate, which was projected to impose tens of billions of dollars in costs before being withdrawn. State command-and-control regulation is commonplace in progressive states, but not beyond, implying meager Overton Window alignment.
A more economical version of GHG regulation is a system of marketable allowances, or cap-and-trade (C&T). Over three decades of experience with C&T programs reveals two things. First, C&T is environmentally effective and cost-effective relative to command-and-control policy. Second, C&T performance depends on its design quality and its interaction with other policies. Abatement costs depend on stringency and other design features, but C&T in a backstop role is generally close to the domestic SCC, rendering it class II policy. Robust C&T generally falls in the class III policy range. C&T is an example of abatement policy that can be cost-effective on a per unit basis, but given the breadth of its coverage, its total costs can be substantial. Recent developments in Pennsylvania indicate a possible preference for policies with higher per-unit abatement costs than C&T, which may reflect a political preference for policies with less cost transparency and lower aggregate costs.
Some environmental complaints about C&T are valid, such as emissions leakage, but most C&T effectiveness concerns reflect readily fixable design flaws. They are often the result of interference from other government interventions like fuel mandates, which relegate C&T to a backstop role and suppress allowance prices. Such state interventions triggered anti-competitive concerns in wholesale power markets overseen by the Federal Energy Regulatory Commission (FERC). This prompted conservative state electric regulators to call for a conference to validate mechanisms like C&T as a market-compatible alternative to high-cost interventions. Conservative expert testimony at that conference, invited by conservative FERC leadership, explained that interventions layered on top of C&T merely reallocate emissions reduction under a binding cap, which raises costs, creates no additional abatement, and undermines innovation. This implies that such states might increase abatement and lower aggregate costs by upgrading the role of C&T and downgrading the role of costlier interventions.
In the 2000s, bipartisan interest in federal C&T policy arose, but it failed and has not resurfaced. In its absence, states have supplanted federal policy with subnational C&T programs. However, the durability of C&T beyond progressive states is unclear. Moderate states have sometimes joined a regional C&T program under Democratic leadership, only to depart under Republican leadership. Conservative state groups typically challenge C&T adoption and seek repeal of C&T programs like the Regional Greenhouse Gas Initiative. This suggests that C&T sits at the fringe of, and typically outside, the Overton Window across political movements.
Permitting and Siting
Permitting policy can base decisions explicitly on GHG criteria, or decisions can be based on non-GHG factors that hold indirect GHG consequences. Generally, only progressive states and presidents have pursued the former. Federally, these include the Obama administration’s “coal study” and the Biden administration’s “pause” on liquefied natural gas (LNG). The LNG pause did not provide any apparent emissions benefit, yet carried substantial foregone economic opportunity and strategic value to U.S. allies. Pragmatic progressive thought leaders expressed concern with the pause, noting the creation of economic and security risks, and suggested lifting it in exchange for companies committing to strict, third-party verified methane emissions standards. Relatedly, some conservative thought leaders have supported policy that enables voluntary participation in certified programs that provide market clarity and confidence to harness private willingness to pay for lower-GHG products. This has been buttressed by support from an industry-led effort to advance a market for environmentally differentiated natural gas based on a standard, secure certification process.
Permitting constraints on clean technology supply chains can have perverse economic and emissions effects. A prime example is critical minerals, which are essential components of clean energy technologies. A net-zero emission energy transition, relative to current consumption, would increase U.S. annual mineral demand by 121% for copper, 504% for nickel, 2,007% for cobalt, and 13,267% for lithium. Unsubsidized market forces are poised to produce enough domestic copper and lithium supply to satisfy a large share of domestic demand, but producers face undue barriers to entry that restrict production far below its potential. To meet net-zero objectives, permitting reform allowing all currently proposed projects to enter the market would lower U.S. import reliance for copper from 74% to 41%, while dropping lithium import reliance from 100% to 51%.
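The import-reliance figures translate directly into domestic supply shares (domestic share = 1 − import reliance), which makes the scale of the shift easier to see. A small sketch of that arithmetic:

```python
# Domestic supply shares implied by the import-reliance figures cited above.
# domestic share = 1 - import reliance

minerals = {
    # mineral: (import reliance today, import reliance with permitting reform)
    "copper": (0.74, 0.41),
    "lithium": (1.00, 0.51),
}

for mineral, (before, after) in minerals.items():
    share_before = 1 - before
    share_after = 1 - after
    print(f"{mineral}: domestic share {share_before:.0%} -> {share_after:.0%}")
# → copper: domestic share 26% -> 59%
# → lithium: domestic share 0% -> 49%
```

In other words, under the cited projections, reform would roughly double the domestic copper share and take lithium from zero domestic supply to nearly half.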
Expanding domestic mining no doubt carries local environmental tradeoffs. However, the U.S. has some of the most stringent and comprehensive mining safeguards in the world. Thus, foregoing development domestically is likely to push mining toward foreign countries with inferior environmental, safety, and child labor protections. It is therefore critical that domestic permitting decisions account for the unintended effects of denying permits, not merely the direct consequences of approving a project.
Permitting and siting constraints on energy infrastructure also impose major costs and foregone abatement. These entry barriers largely exist as environmental safeguards, yet almost always inhibit projects with a superior emissions profile to the legacy resources they replace. In fact, 90% of planned and in-progress energy projects on the federal dashboard were clean energy related as of July 2023. In 2023, the ratio of clean energy to fossil projects requiring an environmental impact statement to comply with the National Environmental Policy Act (NEPA) was 2:1 for the Department of Energy and nearly 4:1 for the Bureau of Land Management. A 2025 study estimated that bringing permitting timelines down from 60 months to 24 months would reduce U.S. electric power emissions by 13%.
Permitting has proven to be a litmus test for the progressive environmental movement, which bifurcates between anti-development symbolists and pragmatic pro-abundance progressives. While a minority of mainstream environmental groups have become amenable to permitting reform, such as The Nature Conservancy and the Audubon Society, the core of progressive environmental groups has not. Instead, new progressive groups like Clean Tomorrow and the Institute for Progress filled the pro-abundance void alongside traditional market-friendly progressive groups like the Progressive Policy Institute. This progressive subset has helped influence moderate Democrats to support permitting reform collaboratively with conservatives.
Permitting reform has long been championed by conservatives for its economic benefits, with climate considerations typically a secondary-at-best rationale. Yet permitting reform has become a priority for the newer climate-minded conservative movement. However, permitting has also proven to be a differentiator between conservatives and right-wing populists. The latter engages in forms of government intervention that sometimes contradict conservative principles. For example, the Trump administration enacted an offshore wind energy pause that followed the same problematic blueprint as the Biden administration’s LNG pause. This elevates the importance of technology-neutral permitting reforms with an emphasis on permitting permanence safeguards.
In recent years, a coalition of Republicans, centrist Democrats, and clean energy and abundance advocates have pressed for reform to NEPA. A broad suite of federal permitting reforms with bipartisan appeal was identified in a 2024 report by the Bipartisan Policy Center. Bipartisan alignment led to the passage of the Fiscal Responsibility Act of 2023 into law and the Senate passage of the Energy Permitting Reform Act of 2024 (EPRA). Although a 2025 Supreme Court decision suggests executive actions alone may substantially reduce NEPA obstacles, plenty of NEPA and other federal statutory reforms remain of high value and hold considerable bipartisan potential.
The positions of leading progressive, conservative, and centrist thought leadership organizations highlight alignment on various federal permitting and siting reforms. These include statutory changes to NEPA, the Endangered Species Act, the Clean Water Act, the Clean Air Act and the National Historic Preservation Act. Substantive alignment includes reforms that reduce litigation risk (e.g., judicial review reform), limit executive power to stop project approvals and undermine permitting permanence, maintain technology neutrality, strengthen federal backstop siting authority for interstate infrastructure, codify the Seven County decision, and streamline agency practices while ensuring sufficient state capacity.
Despite considerable positive momentum at the federal level, the greatest permitting and siting barriers generally reside at the state and local levels and are trending sharply in a more restrictive direction. Wind and solar ordinances have grown by over 1,500% since the late 2000s. Oil and gas pipelines and power plants face mounting permitting and siting restrictions in progressive states, which raise costs but do not necessarily reduce emissions. In fact, the New England Independent System Operator said that a lack of natural gas infrastructure in the region has raised prices and pollution by forcing reliance on higher-cost resources like oil-fired power plants. The only major power generation resource with a less restrictive trend is nuclear, as six states recently modified or repealed nuclear moratoria to ease siting.
Motivation for opposing energy infrastructure permitting has included the well-known “not in my backyard” concerns, such as noise, construction disruptions, or land use conflicts. Interestingly, much opposition appears to stem from perception as much as from substantiated negative effects. Relatedly, permitting resistance rationales increasingly appear to result from ideological opposition to particular energy sources. Finally, much opposition and most litigation of energy projects comes from non-governmental organizations, not the landowners directly affected. Altogether, this underscores the importance of permitting and siting reform that improves the quality of information available to agencies and parties, ties decisionmaking to specific harms rather than speculative claims, limits standing to affected parties, and creates appeals processes for landowners to challenge obstructive local government laws and decisions. A key tension to overcome is that technology-agnostic legislation has been more likely to advance in states with one or more Republican chambers, yet environmental advocates resist “all-of-the-above” reforms.
Policies that reduce permitting and siting burdens are class I: they boost economic output and are increasingly key to emissions reductions. Permitting and siting policies that restrict fossil development are not particularly effective at reducing emissions and often add considerable cost, though costs vary widely depending on the nature of the policies and their implementation. Effective fossil restrictions can range from class II to class IV policy, while ineffective ones actually increase emissions. The political economy of permitting and siting must overcome the lobby of entrenched suppliers, who seek to maintain competitive moats. An ironic example was incumbent asset owners funding environmental groups to oppose transmission infrastructure in the Northeast that would import emissions-free hydropower.
Electric Regulation
The power industry is at the forefront of energy cost concerns and decarbonization objectives. In the early 2020s, electric rates have risen most in Democratic states. These concerns reoriented progressives towards cost containment, even at the expense of climate objectives. In the 2024 election, cost of living concerns propelled Republicans to widespread victories as President Trump vowed to halve electricity prices. A year later, voter concerns over rising electricity rates in Georgia, New Jersey, and Virginia boosted Democrats in gubernatorial and public service commission (PSC) elections.
At the same time, electricity is arguably the most important sector for climate abatement given its emissions share and the indirect effects of electrifying other sectors, namely transportation and manufacturing. Ample pathways exist to reduce electric costs and emissions simultaneously, primarily by fixing profound government failure embedded in legacy regulation. Electric industrial organization shapes economic and climate outcomes, with market liberalization an advantage for both.
Electric regulation falls into two basic formats. The first is cost-of-service (CoS) regulation, where the role of government is to substitute for the role of competition in overseeing a monopoly utility. The alternative is for regulation to facilitate competition by using the “visible hand” of market rules to enable the “invisible hand” to go to work.
CoS regulation historically applied to power generation, though about a third of states enacted restructuring to introduce competition into power generation and retail services, in response to rising rates and the recognition that these are not natural monopoly services. Nearly all transmission and distribution (T&D) historically and today remains under CoS regulation. Importantly, CoS regulation motivates a utility to expand the regulated rate base upon which it earns a state-approved return. Generally, the main sources of cost discipline problems in the power industry stem from its CoS regulation segments: transmission, distribution, and the portion of generation that remains on CoS rates.
Generally, restructured jurisdictions see greater innovation and downward pressure on the supply portion of customer bills. The economic performance of restructuring is highly sensitive to the quality of implementation, including the quality of wholesale energy price formation, capacity market design, and various elements of retail choice implementation. Restructured jurisdictions have also seen improved governance, whereas CoS utilities are prone to cronyism and corruption given the inherent incentives of their business model. Competitive wholesale and retail power markets hold cost and emissions advantages through several mechanisms:
- Markets accelerate capital stock turnover when it is economic. With the brief exception of nuclear retirements, new entry is dominated by zero emission resources or high efficiency gas plants that displace legacy plants with higher emissions rates. Markets usher in new entry and induce retirements in response to economic conditions. Markets outperformed in the coal-to-gas transition last decade and have outperformed this decade amid advances in wind, solar, and storage economics. Texas, the most thoroughly restructured state, leads the country in solar, wind, and energy storage additions while placing second in gas additions. A review of restructuring found that competition worked as intended, facilitating new, low-cost entry while “driving inefficient, high-cost generation out of the market.” A new paper evaluating generator-level data found that from 2010–2023, regulated units were 45% less likely to retire than unregulated units.
- Markets encourage power plant operating efficiencies. Competitive generators adopt technologies and practices that use fuel more efficiently and improve environmental performance. The introduction of competition caused nuclear generators to adopt innovative practices to reduce refueling outage times, boosting operating efficiency by 10%. One study found 9% higher operating efficiencies in the thermal power fleet in restructured states. By contrast, CoS utilities sometimes engage in uneconomic operations because they are financially indifferent to market signals, resulting in overoperation of the fossil fleet.
- Markets reflect customer preferences, including clean power. Footprints with retail choice have seen much higher popularity of voluntary clean power programs. Competition lowers the “green premium” and customer choice allocates it equitably. This is critical as the willingness to pay for clean power varies enormously across customers. Notably, most growing power customers are large companies with ambitious corporate emissions reductions targets, which explains their commercial interest in advancing consumer choice.
- Markets better integrate unconventional resources, namely storage, wind, solar, and demand flexibility. The central planning of monopoly utilities struggles to account for the profile of variable (e.g., wind and solar) and use-limited (e.g., storage) resources. Demand flexibility is valuable to integrate more variable supply sources. Wholesale and retail competition are the only structural pairings that have elicited substantial shifts in demand in response to price signals, because they align the incentives of retailers and end-users to reduce consumption during high price periods.
- Markets induce lower-cost environmental compliance and better environmental lobbying behavior. Restructuring reoriented the incentives to influence and comply with public policy. Notably, competitive enterprises pursue more innovative, lower-cost compliance pathways that tend to deepen abatement. Monopoly utilities have a track record of lobbying for higher cost environmental laws. For example, monopolies have a preference for command-and-control regulation that pads their rate base, and have opposed market-based policies like the 1990 Clean Air Act amendments.
Electric cost increases are multifaceted, prompting many misdiagnoses that blame markets for non-market problems. Utilities have begun pushing campaigns in restructured states to revert to CoS regulation, whereas the growing consumer segment – namely data centers and industrials – is organizing campaigns to expand consumer choice. Independent economic assessments warn against a return to CoS regulation, and instead encourage state regulators to implement restructuring better. This includes better market design, consumer exposure to wholesale prices, and effective coordination with transmission investment.
T&D costs, generally, are the core driver of electricity cost pressures nationwide. Over the last two decades, utility capital spending on distribution has increased 2.5 times while nearly tripling for transmission. This reflects profound flaws in CoS regulation of T&D, resulting in overinvestment in inefficient infrastructure and underinvestment in cost-effective infrastructure. This is projected to worsen, given the T&D expansion needed to meet grid reliability criteria as a result of aging infrastructure, turnover in the generation fleet, and load growth.
T&D expansion is also central to abatement. Even partial transmission reforms can reduce carbon dioxide emissions by hundreds of millions of tons per year. This explains why progressives have made reforms that expand transmission a top priority. To result in durable policy, this needs to be reconciled with the cost concerns of consumers and conservatives. Consumers and conservatives have a budding transmission agenda rooted in upgrading the existing system, removing barriers to voluntary transmission development, using sound economic practices for mandatorily planned transmission, streamlined permitting and siting, and improved governance. A particularly promising frontier is reforms that enhance the existing system, given the speed of their cost relief and their consistency with a Trump administration directive.
Recent federal regulatory actions have demonstrated bipartisan willingness to improve transmission policy and the related issue of interconnection, which has emerged as a major cost and emissions issue. In 2023, FERC passed Order 2023 on a bipartisan basis to reduce barriers to new power plants trying to interconnect to regional transmission systems. Subsequent reforms were motivated by a coalition of consumer groups and the center-right R Street Institute. In 2024, FERC passed Order 1920-A on a bipartisan basis to improve economic practices in regional transmission development. EPRA, a gamechanger for interregional transmission development, passed the Senate with bipartisan support in 2024.
Demand growth has sparked reliability concerns over tight supply margins and recently put upward pressure on wholesale market prices. However, states with the greatest price decreases typically had increasing demand from 2019 to 2024 (Figure 3). This shows the importance of infrastructure utilization on electric rate pressures, as many areas previously had supply slack. The past may not be prologue, however: emerging conditions show supply-constrained scenarios where marginal generation and T&D costs increase steeply to meet new load growth. The Energy Information Administration observes steady retail price increases and projects further rises exceeding inflation.
Figure 3 source: Wiser et al., 2025.
In an era of resurgent power demand growth, the states poised to keep rates and emissions down have wholesale competition, retail competition, efficient generator interconnection processes, economical T&D practices, and low permitting and siting barriers. The only state that reasonably accomplishes all of these is Texas, which is experiencing the most commercial interest among competitive suppliers and growing power consumers. Texas has experienced industry-leading clean energy investment and earned the distinction of Newsweek’s “greenest state” in 2024.
All aforementioned electric reforms are considered class I policy. Despite their cost-reduction appeal, power industry reforms have proven challenging for two reasons. First, reforms are highly technical in nature and face limited state capacity among legislative advisors and technocratic agencies, namely PSCs and FERC. For example, recent FERC and PSC activities reveal that these entities do not have the bandwidth or expertise to properly implement existing transmission policy, much less reform it. Second, reforms face strong resistance from incumbent utilities, who hold concentrated interests in the status quo, creating a strong lobbying incentive. By contrast, the beneficiaries of reform, especially consumers, are dispersed interests that do not organize as effectively as a lobbying force.
Although the Texas electricity experiment and associated federal power market reforms under President George W. Bush are a conservative legacy, most restructured states are progressive. This reflects significant historic bipartisan appeal. However, traditional conservatives have sometimes conflated pro-utility positions with the “pro-business” position, and it is unclear whether right-wing populist influences will catalyze pro-market reforms by challenging the status quo or retrench monopoly utility interests based on technocratic market skepticism (e.g., Project 2025). CoS utilities also commonly oppose cost-effective T&D reform, especially vertically-integrated utilities, consistent with their financial incentives to expand rate base and deter lower-cost imports from third parties. Nonetheless, the political economy of bipartisan electric regulatory reform remains promising, given voters’ prioritization of reducing electricity costs.
Public Spending
Government spending occurs through direct spending outlays or indirect spending through tax expenditures. Spending takes the form of industrial policy or innovation policy. The economics literature is historically critical of industrial policy, while positive literature on industrial policy usually conflates it with innovation policy. A distinguishing element is that innovation policy selects policy instruments suited to specific market failures, namely the positive externalities of knowledge spillovers and learning-by-doing. These generally apply to research and development (R&D) and early stage technologies, including those in demonstration stage and infant industries that have not achieved economies of scale.
Predictably, progressives have been consistent backers of robust innovation policy, while conservatives typically scrutinize such expenses closely. Although differences of opinion exist on optimal funding levels, conservatives and progressives have historically agreed on a role for the government in supporting R&D. There is also a history of good-governance agreement, such as a joint project between the Center for American Progress and the Heritage Foundation in 2013 on improving the performance of the national lab system. Improving the outcomes-based performance of Department of Energy programs may have broad appeal, including better performance metrics, stronger linkages to private sector needs, and program reevaluation to determine when government investment should phase out. Improvements to state capacity are paramount in this regard.
Conservatives are often critical of public spending on infant industry, where government failure can outweigh market failure. For example, policymakers often struggle to identify when to end industry support, while industry engages in rent-maintenance behavior even after it has achieved maturity. Historic evidence indicates that direct subsidies and tax exemptions for infant energy industry continue well after the targeted technologies mature. Conservative and progressive scholars have historically framed the debate over infant industry subsidies as a question of whether government failure outweighs market failure.
Since innovation policy targets non-climate market failures (e.g., knowledge spillovers) it may have a high static abatement cost. However, it is an inexpensive abatement policy when accounting for dynamic effects, because of induced innovation and learning-by-doing. Importantly, innovation policy holds massive climate benefits, because achieving abatement cost parity between clean and emitting resources is central to clean technology market adoption. Efficient R&D policy can be classified as class I policy, because the upfront cost of the policy is outweighed by long-term cost savings. Demonstration and infant industry support falls into class II-III range, depending on its implementation, and often exhibits substantial durability.
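The claim that innovation policy is cheap when accounting for dynamic effects rests on learning-by-doing: unit costs fall by a roughly fixed fraction with each doubling of cumulative deployment (Wright's law). A minimal sketch of that arithmetic, with purely illustrative numbers (not drawn from any cited analysis):

```python
import math

def unit_cost(initial_cost, cumulative, initial_cumulative, learning_rate=0.2):
    """Wright's-law cost curve: each doubling of cumulative deployment
    cuts unit cost by `learning_rate` (e.g., 0.2 = 20% per doubling).
    All inputs here are hypothetical, for illustration only."""
    doublings = math.log2(cumulative / initial_cumulative)
    return initial_cost * (1 - learning_rate) ** doublings

# Three doublings of deployment at a 20% learning rate roughly halve cost,
# which is why early support can look expensive statically yet cheap
# dynamically once induced cost declines are counted.
print(round(unit_cost(100.0, 8.0, 1.0, learning_rate=0.2), 1))  # 51.2
```

The learning rate itself is an empirical parameter that varies widely across technologies, which is one reason static and dynamic abatement cost estimates diverge so sharply.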
In recent years, climate-minded conservatives have shown stronger inclinations toward public spending for innovation policy. However, there is a stark difference between conservatives and right-wing populists on innovation policy. Conservatives note that the adverse consequences of the Department of Government Efficiency’s “gutted, ineffective government” approach to the Department of Energy are inconsistent with limited, effective government practice. The economic self-interest benefits of innovation policy may induce a course correction within MAGA, which has not deliberately targeted innovation policy so much as sacrificed it amid a rash government downsizing exercise.
In contrast to innovation policy, industrial policy aims to directly promote a given industry, typically using mature technology, with interventions untethered to any underlying market failure (e.g., negative emissions externality). This generally takes the form of public spending on mature industries. For decades, traditional conservatives and climate-minded conservative scholars have been critical of green industrial policy for carrying high costs with modest emissions reductions.
The most relevant case study in climate industrial policy versus innovation policy is the Inflation Reduction Act (IRA) of 2022. IRA represented the “largest federal response to climate change to date.” It consisted mostly of subsidies for mature technologies, especially wind, solar, and electric vehicles (EVs). It also contained subsidies for infant industry. IRA was passed exclusively by Democrats, with Republicans voicing concerns over its cost. Republicans then passed the One Big Beautiful Bill Act (OBBBA) in 2025, which phased out subsidies for mature technologies but generally retained those for infant industry. This underscores the political durability of innovation policy and the fragility of industrial policy.
A broader debrief on IRA and OBBBA reveals:
- Disregard for cost considerations preceded passage of the IRA. All known ex ante modeling of IRA’s abatement benefits before it passed ignored costs. This left Congress unequipped to weigh the merits and tradeoffs of the policy. A simplistic abatement cost technique in 2022 yielded a cost of $72/tonne for the renewable energy subsidies. A more sophisticated modeling exercise in 2023 projected an average abatement cost of $83/tonne. IRA could have been identified as a high abatement cost policy (class IV) before it passed. Before passage, R Street Institute analysis suggested meager additionality from subsidies and identified permitting and electric regulation flaws as the determining factors of energy emissions trajectories, yet Congress neglected those reforms.
- IRA abatement cost estimates escalated sharply after passage. The total abatement cost of IRA subsidies to taxpayers rose from $336/tonne in 2024 to $600/tonne in 2025. The initial 2022 IRA renewables subsidy cost estimate of $72/tonne rose to $142/tonne in 2024 and $208/tonne in 2025. The EV subsidy came in at $1,626/tonne. It is possible that this figure is understated, since the direction of the emissions effect of EV subsidies may depend on recipient qualifications, especially when accounting for the behavioral tendencies of EV adopters. The subsidies also undermined developer cost reduction in two ways: they 1) motivated development in the least efficient areas and 2) weakened incentives for cost-lowering innovation, which translates into long-term cost increases relative to an unsubsidized baseline.
- Government failure precluded most of the anticipated climate benefits of the IRA. IRA abatement was overstated in 2022, because models understated artificial constraints on the core abatement driver: wind and solar deployment. The Energy Information Administration’s renewables projections in 2025, which reflected IRA subsidies, were close to their no-IRA estimates from 2022. Risk, not cost, has consistently been the barrier to wind and solar. A Brookings Institution analysis found that artificial barriers to entry were the leading causes of wind and solar project cancellations from 2016-2023, whereas the least common cause was “lack of funding.” Renewables subsidies primarily constituted a wealth transfer from taxpayers to suppliers. One analysis suggested 80-90 percent of clean energy backed by the IRA would have occurred anyway. An S&P Global forecast projected OBBBA to cause a 15 percent decline in wind, solar, and battery storage capacity by 2035.
- Wind, solar, and EV tax credit phaseouts should lower costs and increase economic productivity, despite increasing electricity prices. Price and cost are related, but not the same thing. The phase-out of subsidies under OBBBA will put upward pressure on electricity prices. However, it will likely lower costs by restoring dynamic cost management incentives and removing distortions so that investment reflects economic fundamentals. Electricity subsidies shift cost burdens from power generators and ratepayers to taxpayers. Because taxpayer funding is expensive – tax collection imposes considerable deadweight loss on the economy – the net effect of taxpayer subsidies tends to shrink economic output. The Tax Foundation projected that IRA would reduce U.S. gross domestic product by 0.2 percent, while OBBBA would increase long-run GDP by 1.2 percent, though energy tax credits were only one factor in these analyses.
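The escalation of per-tonne estimates above is largely an additionality story: subsidy dollars divided by all subsidized abatement give a flattering nominal figure, while dividing by only the abatement that would not have happened otherwise gives the effective cost to taxpayers. A minimal sketch of that arithmetic, using hypothetical round numbers (not the figures from any cited model):

```python
def abatement_cost(subsidy_outlay_usd, tonnes_abated, additionality=1.0):
    """Return (nominal, effective) abatement costs in $/tonne.
    `additionality` is the share of subsidized abatement that would NOT
    have occurred without the subsidy. All example inputs below are
    hypothetical, for illustration only."""
    nominal = subsidy_outlay_usd / tonnes_abated
    effective = subsidy_outlay_usd / (tonnes_abated * additionality)
    return nominal, effective

# Example: $100B of subsidies credited with 1B tonnes of abatement.
# If only 15% of that abatement is additional (cf. the 80-90 percent
# "would have occurred anyway" estimate), the effective cost balloons.
nominal, effective = abatement_cost(100e9, 1e9, additionality=0.15)
print(f"nominal: ${nominal:.0f}/tonne")     # $100/tonne
print(f"effective: ${effective:.0f}/tonne")  # $667/tonne
```

This is why ex post estimates that incorporate low additionality can be several multiples of the ex ante headline numbers even when outlays and deployment come in as projected.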
The takeaway from IRA and OBBBA is that subsidies for mature technologies are high cost, likely to erode social welfare, and not politically durable. Efficient public spending for RD&D, however, enhances social welfare and falls in the Overton Window due to its value for economic self-interest. Late-stage infant industry is at the fringe of the Overton Window. It is the area where conservative and progressive scholars have historically had contrasting views on whether market failure outweighs government failure, yet political outcomes have largely supported infant industry.
Generally, the literature finds strong evidence of opportunity cost neglect in public policy, which “creates artificially high demand for public spending.” The IRA was a case-in-point. Meanwhile, the opportunity cost of public spending is rapidly rising given the dire fiscal trajectory of the United States. In 2025, moderate experts emphasized a pivot away from unsustainable and ineffective “Green New Deal thinking” for clean technology subsidies in favor of an innovation-driven strategy.
Takeaways
This analysis finds chronic neglect of cost considerations in ex ante policy analysis. Many medium- and high-cost policies have passed without any robust accounting of costs at all (e.g., IRA, fuel bans). Interventions that did receive cost-benefit analysis have tended to underestimate costs (e.g., regulation). These flaws contribute to public misconception and play into political economy dynamics that favor policies with hidden costs over those with transparent ones.
High-cost policies have typically only been enacted by progressive governments and have come under greater scrutiny as energy costs escalate. This calls their social welfare effects and durability into question. It has cast climate action in the public eye as requiring deep economic sacrifice.
Conservatives have been hesitant to engage on climate policy outright, largely due to perceptions of dire economic tradeoffs. Such concerns have instigated a conservative backlash to climate policy, including to policies that are compatible with U.S. economic interests. This has been exacerbated by right-wing populism, which often strays from limited-government conservatism in pursuit of cultural identity objectives. For example, in a 2024 piece promoting energy affordability, the Heritage Foundation correctly attributed cost increases to renewable energy mandates, but incorrectly presumed that a broad shift towards renewable energy and away from fossil fuels would always increase costs.
High abatement cost policies not only risk reducing aggregate social welfare, but they create distributional concerns. Policies that raise energy costs tend to be regressive. This has challenged the social justice narrative of progressives, prompting a rethink by progressive leaders to take a “cost-first approach to [the] clean energy transition.” Although subsidies are a common response to lower burdens on low-income households, the most popular green subsidies pursued have exacerbated distributional concerns. Specifically, renewables subsidies favored by progressives have been challenged by conservatives as “green corporate welfare.” Progressives have also faced criticism for EV tax credits for disproportionately benefiting wealthy households.
Encouragingly, negative- and low-cost policies comprise a rising share of the abatement curve. The Overton Window for pursuing such policies has grown remarkably for “abundance progressives” and conventional conservatives. However, populist subsets within both movements challenge the potential for political alignment. Enacting negative-cost policies also faces the collective action problem of dispersed beneficiaries versus a concentrated incumbent supplier lobby favoring the status quo. Mobilizing consumer and taxpayer groups is an underappreciated strategy to enact these policies.
This analysis is far from comprehensive. A notable omission from this paper is transportation policy, the largest GHG sector in the U.S. A scan of the transportation literature underscores major abatement potential for negative and low-cost policies, including reducing government barriers to efficient heavy-duty transportation like railways, shipping, and heavier trucking. Further, the electrification of transportation requires extensive fixes to government failure, such as liberalizing markets to enable competitive charging infrastructure, which lowers costs. The merits of innovation and GHG transparency policy, previously discussed, also appear to hold promise for transportation applications such as aviation fuel. The transportation sector has also been the target of GHG regulation, mostly in progressive states, which warrants close assessment of costs. For example, one study identified a vast abatement cost range for fuel standards ($60-$2,272/tonne).
A shortcoming of this analysis is that it characterizes costs only by their efficiency (i.e., $/tonne). Political decisions are highly sensitive to aggregate cost and its visibility to the public, which our taxonomy does not characterize. It is possible that efficient, transparent, higher-aggregate-cost policies (e.g., C&T) fare less favorably in some political settings than inefficient, opaque, and sometimes lower-aggregate-cost policies (e.g., RPS solar carveouts).
Despite the limitations of this analysis, the sample of policies evaluated is sufficient to support the thesis. That is, a retooled climate policy agenda that prioritizes cost considerations should elevate social welfare and achieve greater abatement by selecting more durable policies.
Conclusion
Abatement costs have a huge bearing on whether climate policies benefit society, their likelihood of passage, and whether they prove politically durable. Most abatement need not come from dedicated climate policy, per se, but rather from sound economic policy that carries deep climate co-benefits. Chronic disregard for cost considerations has led to an overselection of high-cost policies and underpursuit of low- and negative-cost policies. This has undermined policy durability and exacerbated political polarization over climate change abatement.
This paper finds extensive abatement opportunities within negative-cost policies. These largely constitute fixes to government failure and include permitting, siting, and power regulation reforms. This analysis also finds considerable low-cost policies that are compatible with U.S. economic self-interests. These policies primarily spur voluntary private sector abatement through efficient innovation policy and GHG transparency.
We offer three sets of recommendations moving forward for influencers of the climate policy agenda:
- Focus on results. Climate change abatement is a function of global GHG concentrations. Too much attention is devoted to symbolic objectives, like blocking fossil fuel infrastructure. This tends to undermine abatement goals and impose high costs.
- Emphasize cost considerations in policy agenda setting, formulation, and maintenance. Negative abatement cost policies should take top priority, with an emphasis on mobilizing beneficiaries. Robust cost-benefit analyses should precede all cost-additive policies and be reconducted periodically to guide policy adjustments.
- Prioritize quality state capacity. The net benefits of abatement policies are sensitive to government capacity and performance. Public management is in great jeopardy in an era of institutional decay. Negative-cost policies are often highly technocratic and require sufficient staffing expertise and accountable management at public institutions like DOE, FERC, PSCs, and permitting and siting agencies.
In an era when energy affordability takes precedence, a reset climate agenda should anchor itself in good policy basics: a sober-minded return to results-driven, net-benefits-prioritized policy. This should improve the durability of climate policy and ensure it enhances social welfare. Executing reforms well requires a recommitment to improving the quality of institutions as much as the policy itself.
Federal Climate Policy Is Being Gutted. What Does That Say About How Well It Was Working?
On the left is the Bankside Power Station in 1953. That vast relic of the fossil era once towered over London, oily smoke pouring from its towering chimney. These days, Bankside looks like the right:
The old power plant’s vast turbine hall is now at the heart of the airy Tate Modern Art Museum; sculptures rest where the boilers once churned.
Bankside’s evolution into the Tate illustrates that transformations, both literal and figurative, are possible for our energy and economic systems. Some degree of demolition – if paired with a plan – can open up space for something innovative and durable.
Today, the entire energy sector is undergoing a massive transformation. After years of flat energy demand served by aging fossil power plants, solar energy and battery storage are increasingly dominating energy additions to meet rising load. Global investment in clean energy will be twice as big as investment in fossil fuels this year. But in the United States, the energy sector is also undergoing substantial regulatory demolition, courtesy of a wave of executive and Congressional attacks and sweeping potential cuts to tax credits for clean energy.
What’s missing is a compelling plan for the future. The plan certainly shouldn’t be to cede leadership on modern energy technologies to China, as President Trump seems to be suggesting; that approach is geopolitically unwise and, frankly, economically idiotic. But neither should the plan be to just re-erect the systems that are being torn down. Those systems, in many ways, weren’t working. We need a new plan – a new paradigm – for the next era of climate and clean energy progress in the United States.
Asking Good Questions About Climate Policy Designs
How do we turn demolition into a superior remodel? First, we have to agree on what we’re trying to build. Let’s start with what should be three unobjectionable principles.
Principle 1. Climate change is a problem worth fixing – fast. Climate change is staggeringly expensive. Climate change also wrecks entire cities, takes lives, and generally makes people more miserable. Climate change, in short, is a problem we must fix. Ignoring and defunding climate science is not going to make it go away.
Principle 2. What we do should work. Tackling the climate crisis isn’t just about cleaning up smokestacks or sewer outflows; it’s about shifting a national economic system and physical infrastructure that has been rooted in fossil fuels for more than a century. Our responses must reflect this reality. To the extent possible, we will be much better served by developing fit-for-purpose solutions rather than just press-ganging old institutions, statutes, and technologies into climate service.
Principle 3. What we do should last. The half-life of many climate strategies in the United States has been woefully short. The Clean Power Plan, much touted by President Obama, never went into force. The Trump administration has now turned off California’s clean vehicle programs multiple times. Much of this hyperpolarized back-and-forth is driven by a combination of far-right opposition to regulation as a matter of principle and the fossil fuel industry pushing mass de-regulation for self-enrichment – a frustrating reality, but one that can only be altered by new strategies that are potent enough to displace vocal political constituencies and entrenched legacy corporate interests.
With these principles in mind, the path forward becomes clearer. We can agree that ambitious climate policy is necessary; protecting Americans from climate threats and destabilization (Principle 1) directly aligns with the founding Constitutional objectives of ensuring domestic tranquility, providing for the common defense, and promoting general welfare. We can also agree that the problem in front of us is figuring out which tools we need, not how to retain the tools we had, regardless of their demonstrated efficacy (Principle 2). And we can recognize that achieving progress in the long run requires solutions that are both politically and economically durable (Principle 3).
Below, we consider how these principles might guide our responses to this summer’s crop of regulatory reversals and proposed shifts in federal investment.
Honing Regulatory Approaches
The Trump Administration recently announced that it plans to dismantle the “endangerment finding” – the legal predicate for the Environmental Protection Agency (EPA) to regulate greenhouse gas emissions from power plants and transportation; meanwhile, the Senate revoked permission for California to enforce key car and truck emission standards. It has also proposed to roll back key power plant toxic and greenhouse gas standards. We agree with those who think that these actions are scientifically baseless and likely illegal, and therefore support efforts to counter them. But we should also reckon honestly with how the regulatory tools we are defending have played out so far.
Federal and state pollution rules have indisputably been a giant public-health victory. EPA standards under the Clean Air Act led directly to dramatic reductions in harmful particulate matter and other air pollutants, saving hundreds of thousands of lives and avoiding millions of cases of asthma and other respiratory diseases. Federal regulations similarly caused mercury pollution from coal-fired power plants to drop by 90% in just over a decade. Pending federal rollbacks of mercury rules thus warrant vocal opposition. In the transportation sector, tailpipe emissions standards for traditional combustion vehicles have been impressively effective. These and other rules have indeed delivered some climate benefits by forcing the fossil fuel industry to face pollution clean-up costs and driving development of clean technologies.
But if our primary goal is motivating a broad energy transition (i.e., what needs to happen per Principle 1), then we should think beyond pollution rules as our only tools – and allocate resources beyond immediate defensive fights. Why? The first reason is that, as we have previously written, these rules are poorly equipped to drive that transition. Federal and state environmental agencies can do many things well, but running national economic strategy and industrial policy primarily through pollution statutes is hardly the obvious choice (Principle 2).
Consider the power sector. The most promising path to decarbonizing the grid is not bolting pollution-control devices onto ancient smokestacks; it is speeding up the replacement of old coal and gas plants with renewables – easing unduly complex interconnection processes so that clean energy can be added to meet rising demand and the old plants can retire. That is an economic and grid policy puzzle at heart, not a pollution regulatory challenge. Most new power plants are renewable- or battery-powered anyway. Some new gas plants might be built in response to growing demand, but the gas turbine pipeline is backed up, limiting the scope of new fossil power, and cheaper clean power is coming online much more quickly wherever grid regulators have their act together. Regulations could certainly help accelerate this shift, but the evidence suggests they may be complementary, not primary, tools.
The upshot is that economics and subnational policies, not federal greenhouse gas regulation, have largely driven power plant decarbonization to date and therefore warrant our central focus. Indeed, states that have made adding renewable infrastructure easy, like Texas, have often been ahead of states, like California, where regulatory targets are stronger but infrastructure is harder to build. (It’s also worth noting that these same economics mean that the Trump Administration’s efforts to revert back to a wholly fossil fuel economy by repealing federal pollution standards will largely fail – again, wrong tool to substantially change energy trajectories.)
The second reason is that applying pollution rules to climate challenges has hardly been a lasting strategy (Principle 3). Despite nearly two decades of trying, no regulations for carbon emissions from existing power plants have ever been implemented. It turns out to be very hard, especially with the rise of conservative judiciaries, to write legal regulations for power plants under the Clean Air Act that both stand up in Court and actually yield substantial emissions reductions.
In transportation, pioneering electric vehicle (EV) standards from California – helped along by top-down economic leverage applied by the Obama administration – did indeed begin a significant shift and start winning market share for new electric car and truck companies; under the Biden administration, California doubled down with a new set of standards intended to ultimately phase out all sales of gas-powered cars while the EPA issued tailpipe emissions standards that put the industry on course to achieve at least 50% EV sales by 2030. But California’s EV standards have now been rolled back by the Trump administration and a GOP-controlled Congress multiple times; the same is true for the EPA rules. Lest we think that the Republican party is the sole obstacle to a climate-focused regulatory regime that lasts in the auto sector, it is worth noting that Democratic states led the way on rollbacks. Maryland, Massachusetts, Oregon, and Vermont all paused, delayed, or otherwise fuzzed up their plans to deploy some of their EV rules before Congress acted against California. The upshot is that environmental standards, on their own, cannot politically sustain an economic transition at this scale without significant complementary policies.
Now, we certainly shouldn’t abandon pollution rules – they deliver massive health and environmental benefits while forcing the market to more accurately account for the costs of polluting technologies. But environmental statutes built primarily to reduce smokestack and tailpipe emissions, important as they remain, are simply not designed to be the primary driver of wholesale economic and industrial change. Unsurprisingly, efforts to make them do that anyway have not gone particularly well – so much so that, today, greenhouse gas pollution standards for most economic sectors either do not exist or have run into implementation barriers. These observations should guide us to double down on the policies that improve the economics of clean energy and clean technology – from financial incentives to reforms that make it easier to build – while developing new regulatory frameworks that avoid the pitfalls of the existing Clean Air Act playbook. For example, we might learn from state regulations like clean electricity standards that have driven deployment and largely withstood political swings.
To mildly belabor the point – pollution standards form part of the scaffolding needed to make climate progress, but they don’t look like the load-bearing center of it.
Refocusing Industrial Policy
Our plan for the future demands fresh thinking on industrial policy as well as regulatory design. Years ago, Nobel laureate Dr. Elinor Ostrom pointed out that economic systems shift not as a result of centralized fiat, from the White House or elsewhere, but from a “polycentric” set of decisions rippling out from every level of government and firm. That proposition has been amply borne out in the clean energy space by waves of technology innovation, often anchored by state and local procurement, regional technology clusters, and pioneering financial institutions like green banks.
The Biden Administration responded to these emerging understandings with the CHIPS and Science Act, Bipartisan Infrastructure Law (BIL), and Inflation Reduction Act (IRA) – a package of legislation intended to shore up U.S. leadership in clean technology through investments that cut across sectors and geographies. These bills included many provisions and programs with top-down designs, but the package as a whole did engage with, and encourage, polycentric and deep change.
Here again, taking a serious look at how this package played out can help us understand what industrial policies are most likely to work (Principle 2) and to last (Principle 3) moving forward.
We might begin by asking which domestic clean-technology industries need long-term support and which do not in light of (i) the multi-layered and polycentric structure of our economy, and (ii) the state of play in individual economic sectors and firms at the subnational level. IRA revisions that appropriately phase down support for mature technologies in a given sector or region where deployment is sufficient to cut emissions at an adequate pace could be worth exploring in this light – but only if market-distorting supports for fossil-fuel incumbents are also removed. We appreciate thoughtful reform proposals that have been put forward by those on the left and right.
More directly: If the United States wants to phase down, say, clean power tax credits, such changes should properly be phased with removals of support for fossil power plants and interconnection barriers, shifting the entire energy market towards a fair competition to meet increasing load, as well as new durable regulatory structures that ensure a transition to a low-carbon economy at a sufficient pace. Subsidies and other incentives could appropriately be retained for technologies (e.g., advanced battery storage and nuclear) that are still in relatively early stages and/or for which there is a particularly compelling argument for strengthening U.S. leadership. One could similarly imagine a gradual shift away from EV tax credits – if other transportation system spending was also reallocated to properly balance support among highways, EV charging stations, transit, and other types of transportation infrastructure. In short, economic tools have tremendous power to drive climate progress, but must be paired with the systemic reforms needed to ensure that clean energy technologies have a fair pathway to achieving long-term economic durability.
Our analysis can also touch on geopolitical strategy. It is true that U.S. competitors are ahead in many clean technology fields; it is simultaneously true that the United States has a massive industrial and research base that can pivot ably with support. A pure on-shoring approach is likely to be unwise – and we have just seen courts enjoin the administration’s fiat tariff policy that sought that result. That’s a good opportunity to have a more thoughtful conversation (in which many are already engaging) on areas where tariffs, public subsidies, and other on-shoring planning can actually position our nation for long-term economic competition on clean technology. Opportunities that rise to the top include advanced manufacturing, such as for batteries, and critical industries, like the auto sector. There is also a surprising but potent national security imperative to center clean energy infrastructure in U.S. industrial policy, given the growing threat of foreign cyberattacks that are exploiting “seams” in fragile legacy energy systems.
Finally, our analysis suggests that states, which are primarily responsible for economic policy in their jurisdictions, have a role to play in this polycentric strategy that extends beyond simply replicating repealed federal regulations. States have a real opportunity in this moment to wed regulatory initiatives with creative whole-of-the-economy approaches that can actually deliver change and clean economic diversification, positioning them well to outlast this period of churn and prosper in a global clean energy transition.
A successful and “sticky” modern industrial policy must weave together all of the above considerations – it must be intentionally engineered to achieve economic and political durability through polycentric change, rather than relying solely or predominantly on large public subsidies.
Conclusion
The Trump Administration has moved with alarming speed to demolish programs, regulations, and institutions that were intended to make our communities and planet more livable. Such wholesale demolition is unwarranted, unwise, and should not proceed unchecked. At the same time, it is, as ever, crucial to plan for the future. There is broad agreement that achieving an effective, equitable, and ethical energy transition requires us to do something different. Yet there are few transpartisan efforts to boldly reimagine regulatory and economic paradigms. Of course, we are not naive: political gridlock, entrenched special interests, and institutional inertia are formidable obstacles to overcome. But there is still room, and need, to try – and effort bears better fruit when aimed at the right problems. We can begin by seriously debating which past approaches work, which need to be improved, and which ultimately need imaginative recasting to succeed in our ever-more-complex world. The answers may be unexpected. After all, who would have thought that the best future of the vast oil-fired power station south of the Thames with which we began this essay would, a few decades later, be a serene and silent hall full of light and reflection?
Building an Environmental Regulatory System that Delivers for America
The Clean Air Act. The Clean Water Act. The National Environmental Policy Act. These and most of our nation’s other foundational environmental laws were passed decades ago – and they have started to show their age. The Clean Air Act, for instance, was written to cut air pollution, not to drive the whole-of-economy response that the climate crisis now warrants. The Energy Policy and Conservation Act of 1975 was designed to make cars more efficient in a pre-electric vehicle era, and now puts the Department of Transportation in the awkward position of setting fuel economy standards when more and more cars don’t burn gas.
Trying to manage today’s problems with yesterday’s laws results in government by kludge. Legacy regulatory architecture has foundered under a patchwork of legislative amendments and administrative procedures designed to bridge the gap between past needs and present realities. Meanwhile, Congressional dysfunction has made purpose-built updates exceptionally difficult to land. The Inflation Reduction Act, for example, was mostly designed to move money rather than rethink foundational statutes or regulatory processes – because those rethinks couldn’t make it past the filibuster.
As the efficacy of environmental laws has waned, so has their durability. What was once a broadly shared goal – protecting Americans from environmental harm – is now a political football, with rules that whipsaw back and forth depending on who’s in charge.
Against this backdrop, the second Trump Administration launched the biggest environmental deregulatory campaign in history. That campaign, coupled with massive reductions in the federal civil service and a suite of landmark court decisions (including Loper Bright) about how federal agencies regulate, risks pushing U.S. regulatory architecture past the point of sensible and much-needed reform and into a state of complete disrepair.
Dismantling old systems has proven surprisingly easy. Building what comes next will be harder. And the work must begin now.
It is time to articulate a long-term vision for a government that can actually deliver in an ever-more complex society. The Federation of American Scientists (FAS) is meeting this moment by launching an ambitious new project to reimagine the U.S. environmental regulatory state, drawing ideas from across ideological lines.
The Beginning of a New Era
Fear of the risks of systemic change often prevents people from entertaining change in earnest. Think of the years of U.S. squabbles over how or whether to reform permitting and environmental review, while other countries simply raced ahead to build clean energy projects and establish dominance in the new world economy. Systemic stagnation, however, comes with its own consequences.
The Inflation Reduction Act (IRA) and the Infrastructure Investment and Jobs Act (IIJA) are a case in point when it comes to climate and the environment. Together, these two pieces of legislation represented the largest global investment in the promise of a healthier, more sustainable, and, yes, cheaper future. Unfortunately, as proponents of the “abundance” paradigm and others have observed, rollout was hampered by inefficient processes and outdated laws. Implementing the IRA and the IIJA via old systems, in short, was like trying to funnel an ocean through a garden hose – and as a result, most Americans experienced only a trickle of real-world impact.
Similar barriers are constraining state progress. For example, the way we govern and pay for electricity has not kept pace with a rapidly changing energy landscape – meaning that the United States risks ceding leadership on energy technologies critical to national security, economic competitiveness, and combating climate change.
But we are perhaps now entering a new era. The United States appears to be on the edge of a real political realignment, with transpartisan stakes around the core role of government in economic development that do not map neatly onto current coalitions. This realignment presents a crucial opportunity to catalyze a new era of climate, environmental, and democratic progress.
FAS will leverage this opportunity by providing a forum for debate and engagement on different facets of climate and environmental governance, a platform to amplify insights, and the capacity to drive forward solutions. Examples of topics ripe for exploration include:
- Balancing agility and accountability. As observed, regulatory approaches of the past have struggled to address the interconnected, quickly evolving nature of climate and environmental challenges. At the same time, mechanisms for ensuring accountability have been disrupted by an evolving legal landscape and an increasingly muscular executive. There is a need to imagine and test new systems designed to move quickly but responsibly on climate and environmental issues.
- Complementing traditional regulation through novel strategies. There is growing interest in using novel financial, contractual, and other strategies as a complement to regulation for driving climate and environmental progress. There is considerable room to go deeper in this space, identifying both the power of these strategies and their limits.
- Rethinking stakeholder engagement. The effectiveness of regulation depends on its ability to serve diverse stakeholder needs while advancing environmental goals. Public comment and other pipelines for engaging stakeholders and integrating external perspectives and expertise into regulations have been disrupted by technologies such as AI, while the relationship between regulated entities and their regulators has become increasingly adversarial. There is a need to examine synergies and tradeoffs between centering stakeholders and centering outcomes in regulatory processes, and to examine how stakeholder engagement could be improved to better ensure regulations that are informed, feasible, durable, and distributively fair.
In working through topics like these, FAS seeks to lay out a positive vision of regulatory reconstruction that is substantively superior to either haphazard destruction or incremental change. Our vision is nothing less than to usher in a new paradigm of climate and environmental governance: one that secures a livable world while reinforcing democratic stability, through systems that truly deliver for America.
We will center our focus on the federal government given its important role in climate and environmental issues. However, states and localities do a lot of the work of a federated government day-to-day. We recognize that federal cures are unlikely to fully alleviate the symptoms that Americans are experiencing every day, from decaying infrastructure to housing shortages. We are committed to ensuring that solutions are appropriately matched to the root cause of state capacity problems and that federal climate and environmental regulatory regimes are designed to support successful cooperation with local governments and implementation partners.
FAS is no stranger to ambitious endeavors like these. Since our founding in 1945, we have been committed to tackling the major science policy issues that reverberate through American life. This new FAS workstream will be embedded across our Climate and Environment, Clean Energy, and Government Capacity portfolios. We have already begun engaging and activating the diverse community of scholars, experts, and leaders laying the intellectual groundwork to develop compelling answers to urgent questions surrounding the climate regulatory state, against the backdrop of a broader state capacity movement. True to our nonpartisan commitment, we will build this work on a foundation of cross-ideological curiosity and engage the tension points in existing coalitions that strike us as most productive.
We invite you to join us in conversation and collaboration. If you want to get involved, contact Zoë Brouns (zbrouns@fas.org).