A Plan for Revitalizing the U.S. Auto Industry

It’s January 20, 2029. The new President, committed to revitalizing America’s once-dominant car and truck industry, requests a briefing on the current state of affairs. The news isn’t great. Global market share of the big American automakers has bottomed out at single digits, well below the nearly 25% it enjoyed in the 1990s. Factories that made cars for export are either closed or closing.

That’s because the rest of the world doesn’t really want American cars anymore. Continued tension and uncertainty in the Middle East have kept gas prices high and underscored the security and economic vulnerabilities of overreliance on oil. Outside the United States, consumers are snapping up Chinese-made EVs that go hundreds of miles, charge in minutes, and sell for less than $10,000. Industry is similarly pivoting to electric freight. On the home front, Americans are angry that U.S. policy is preventing them from buying the exciting cars hitting the streets in neighboring Canada and Mexico. Workers aren’t much happier, as trade disputes and tariff barriers cause the once-integrated North American auto industry to come apart at the seams. And the U.S. truck oligopoly still runs on diesel, adding cost to almost every consumer good.

The President sighs and asks their team for a plan. Fortunately, they have one.

The World is Jumping Ahead on Electric Vehicles, Whether the U.S. Likes It Or Not

The writing is on the wall: electric vehicles (EVs) are the future. Industry knows it. Workers know it. Investors know it. Politicians on the left and right know it. EVs are simply getting too good, too quickly, for even the most pro-fossil policy agenda to stop. That’s good news for our climate, and for consumers globally, who deserve access to the latest, greatest, and cheapest cars. But it’s bad news for Americans, who are going to suffer the long-term effects of short-sighted transportation policies that fail to acknowledge economic and technological reality.

Eventually, this reality is going to become impossible to ignore. EVs are already dominating in countries as diverse as China, Norway, Vietnam, and India, and the geographic spread is only accelerating. Sooner or later, the United States is going to need a plan to catch up. What is clear is that the plan can’t be a simple return to the outdated regulatory frameworks we used for decades to govern vehicle emissions and fuel economy. We need to think more broadly and more creatively.

Why Nixon-Era Statutes Won’t Catch Us Up On Their Own

Two statutes have historically governed vehicles. The 1970 Clean Air Act (CAA) structures the tailpipe emissions standards that the EPA and California enforce, while the 1975 Energy Policy and Conservation Act (EPCA) charges DOT with setting separate (albeit overlapping) fuel economy standards. These laws, as we’ve previously written, were helpful for making gas-powered cars cleaner and more efficient, but they weren’t designed to guide a wholesale transition from fossil to electric technology at the pace and scope needed.

There’s a case to be made for doing your best with old tools if those are the tools available. Indeed, that’s why the Biden administration’s vehicle regulatory regime was largely predicated on the CAA and EPCA. But these tools have become far less available in light of the second Trump administration’s actions. The Trump administration, in partnership with Congress, has dismantled the legal basis for regulating greenhouse gas emissions from vehicles under the CAA, gutted the fuel economy program that EPCA sets up, and rolled back California’s EV rules (which traditionally set the most stringent, future-forward vehicle standards). It also fired many federal agency staff who understood how federal vehicle regulations worked. Thus, from both a legal and a capacity standpoint, the CAA and EPCA have become significantly weaker policy levers. Compounding these issues is the fact that the Supreme Court (now and likely into the near future) is profoundly hostile to regulatory ambition, limiting the probable reach of new rules set under these statutes.

Even if one could magically snap Biden-era vehicle rules back into place, those rules would only cover one player in the auto market – automakers themselves. Telling automakers that they have to develop cleaner, more innovative vehicles is an important tactic, but it is not a comprehensive policy agenda. CAA and EPCA don’t cover worker pensions, factory refits, trade policy, or continental industrial strategy. They don’t deploy charging networks or build out the grid. They don’t help fossil-fuel-producing counties diversify their economies, don’t structure smart spending priorities for the transportation system, and don’t thoughtfully manage refinery closures and other giant infrastructure shifts. And as we’ve seen, CAA and EPCA don’t put real pressure on giant incumbent companies to change and to make affordable new products, rather than to lobby hard against rules they don’t like.

By 2029, many of these core economic and industrial policy areas will also be in shambles. And as we’ve learned from the partial repeal of the Inflation Reduction Act, a party-line reconciliation bill won’t be a silver-bullet fix. Reconciliation bills are, as the Trump administration has made clear, no basis for stable investments. Fiscal policy is useful. Unstable budget policy will not draw in the billions in investment we’ll need to catch America back up.

These aren’t ancillary problems. Failing to manage the EV transition as the giant economic shift it actually is makes the policy and politics of the transition brittle and subject to reversal. 

And Neither Will Ad Hoc Dealmaking

We need approaches that both compel and encourage giant incumbent companies to change and make affordable new products, rather than lobby hard against rules they don’t like. We need approaches that take seriously the lived experiences of people who can’t afford to finance a new car, who don’t have access to charging at home, or whose livelihoods depend on the fossil economy.

Yet the United States has never had a clear, cohesive, and comprehensive electrification and innovation strategy for the auto sector. In the absence of a clear national program, we’ve cut a series of tenuous deals to structure nearly twenty years of U.S. vehicle policy.

President Obama, during the 2009 auto bailout, was able to use fiscal recovery funds to steer the auto industry into accepting a first round of emissions rules from the feds and California. This “car deal” set the template for several more. The first Trump administration (with tacit support from automakers looking to slow EV investment) blew it up. California stepped into the breach with another set of deals, persuading companies to keep making EVs by offering regulatory relief and by threatening to stop buying government cars from holdouts. Team Biden then used Inflation Reduction Act funds to provide EV rebates and support EV manufacturing, as a spoonful of sugar to help a new round of federal and California emissions rules go down. And then the second Trump administration blew up those deals, again with industry support, as American companies pivoted back toward short-term profitable gas SUVs.

Same story with trucks. California made a deal with truckmakers to continue progress on clean trucks. Post-Trump 2, those same truckmakers sued California to blow up their own deal. They are now backing Trump in his effort to destroy the entire CAA greenhouse gas vehicle program.

These deals were crucial progress at the time – and it would have been rational for the industry to keep its word. But it didn’t – policymakers shouldn’t be fooled again, or lulled into thinking that even good deals are a long-term substitute for durable policy. Relying on ad hoc dealmaking is an absolutely wild way to run a major industrial sector.

No wonder Chinese companies, able to rely on consistent government policy, have surged ahead while American consumers struggle to navigate the increasingly byzantine and expensive world that is the American auto market. While American workers move their families to be near good-paying auto jobs only to see investment pulled back and factories shutter. And while auto companies, in the absence of a stable and dependable policy environment, almost inevitably engage in short-term profit chasing rather than longer-term innovation.

A Path Back to the Future for U.S. Auto Policy

What breaks this bad pattern? Green industrial policy, embedded in a bipartisan statute that can win a filibuster-proof majority in the Senate, and that blends competition and innovation.

Because we need both. The U.S. auto industry is highly concentrated, with only a few companies dominating domestic sales of both cars and trucks. These companies are used to calling the shots and setting prices, which is terrible for everyone but CEOs. American auto CEOs are deathly afraid of real competition – witness how Tesla’s new affordable electric semi is sending major ripples through the entire freight sector. But it’s been decades since the United States seriously enforced antitrust laws. Rather, politicians from both parties have raced to erect steep tariffs shielding the American auto sector from essentially all real global competition. There’s a role for any government to play in supporting its domestic industries. But that role isn’t propping up failing legacy companies indefinitely, at the public’s expense.

A well-designed statute could strengthen market oversight and competition while also providing American companies with reasonable support. Don’t like the Chinese-backed EVs that are undercutting your market? Start making and selling better, cleaner, and cheaper vehicles – and we’ll help. This is the logic that motivated Canadian Prime Minister Mark Carney and Mexican President Claudia Sheinbaum to welcome Chinese competition while bolstering their domestic industries. The U.S. can catch up with the same approach. For instance, government can readily provide loan funds and policy support for overseas companies that want to scale up in the United States, in partnership with domestic manufacturers. Such joint ventures are win-wins: the foreign company gets a new market and learns how to build for the U.S. market; the domestic company gains process knowledge and the IP it needs to modernize its products. That is exactly what Toyota and GM did a few decades ago, when the U.S. industry faced major competition from overseas autos (then, from Japan). The strategy can work again, for cars and trucks and batteries. Throw in better federal loan terms for union facilities, and you have a worker-friendly, industry-friendly, pro-competition, pro-consumer package. And this time the government can even explore taking an equity stake. Because why not re-shore profits, too?

Is comprehensive statutory reform ambitious? Certainly. But anything less consigns core national economic and industrial strategy to the vagaries of judicial interpretations of decades-old pollution statutes. Half-measures will, given the context above, be essentially useless. As with riding a bike, we can’t afford to pedal slowly. The fate of a vast national industry warrants real national ambition, rooted in a democratic vision for ordinary people. Democracy repair is climate repair is industrial reconstruction.

To deliver that vision when the policy window emerges – when the next President asks for it – we’ve got to start structuring it now. We suspect it will include some or all of the elements sketched in the sections that follow.

What Needs to Happen Now

The comprehensive, statutorily based plan we envision is only so much wishful thinking today. But we can actually create the conditions to realize this plan in the future. That’s because states have very substantial economic development authority, complemented by the substantial financial muscle of large state-based green banks and infrastructure banks.

What if the next governor of California provided state-backed loans to any company with pro-union policies seeking to expand EV or battery manufacturing in the state? The governor might particularly encourage joint ventures, perhaps looking to the Chinese firm BYD’s existing facility outside of Los Angeles (which doesn’t make cars and semis now, but could be expanded to do so). This isn’t wishful thinking. California more or less created Tesla with over $3.2 billion in subsidies. It can now create a host of new competitors, but this time it can also take equity to ensure profits flow back into a “transition fund” in the budget that supports economic growth in counties facing declining oil production. It can simultaneously pass state laws strengthening oversight of the U.S. auto industry, forcing transparent prices and pro-competition market structures.

If California leads, it’s unlikely to stand alone. Michigan might well jump in, using its existing Office of Future Mobility and Electrification to draw in billions in investment in existing facilities and pairing public investment with scaled-up private capital in powerful capital stacks. So might Texas, Illinois, Ohio, and Georgia – states that have made significant investments in EVs and batteries that they, and their constituents, are loath to lose. After a year or so, we might reasonably expect an MOU linking a growing coalition of states working to develop the capital instruments and policies needed to grow the industry.

With this policy certainty, investments start flowing at scale and a positive cycle begins. States continue to implement the panoply of policies in their control that otherwise support economic modernization, from clean air planning to freight system electrification rules and programs to policies that help vehicle batteries profitably support the grid by storing energy. They update building codes, deploy chargers, insist on durable and quality products using their consumer protection laws, and steer public money towards necessary infrastructure. Critically, they root all that work in expanded capacity to navigate the “messy middle” of the ongoing clean-technology transition, ensuring that neither workers nor communities will be left in the lurch.

At the same time, far-sighted public officials, backed up by civil society and foundation support, start bringing together the interests the new President will need for a major statutory play. They bring in counterparts from Canada and Mexico, recognizing that the continent has to either work together or lag together. Soon, the collaboration starts looking like an actual industrial strategy at the scale of East Asia’s behemoths.

In this scenario, when the next President calls for an auto modernization plan, Governors across the country back her. So do many major investors, donors, local officials, and federal electeds. After all, they’ve already seen the future in their states, and it is in their interest to scale up the national program. When the gavel falls in the first year of the 121st Congress, the conditions for change are in place, and twenty years of shaky rules and ad hoc deals give way to a new approach – a lasting one.

Four Innovations Driving Climate Progress in State Government

Subnational governments—cities, states, and counties—took some of the earliest steps in the United States to protect air quality, water quality, and public health. Over the last several decades, as the federal government has wavered in its commitment or failed to take a leadership role on climate change, states and cities have continued to act, both through bold pronouncements and aggressive targets and in quieter, more subtle ways that have eased the path for clean energy and other investments. The result is a diverse cohort of cities and states that have made great progress in advancing clean energy and other climate solutions and are well positioned to continue this leadership and innovation.

However, the federal government is taking aim at subnational authority and leadership, including threats to states’ actions deemed “overreach.” This threat from the federal government not only limits the ability of subnational governments to develop innovative tools to address climate challenges, but can hinder opportunities for economic development by limiting access to lower cost energy solutions, stifling innovation in emerging industries, and hampering global competitiveness. 

Subnational Governments as Innovators and Leaders

Subnational governments, in liberal and conservative states, have long been the drivers of clean energy and climate progress in the United States. 

Iowa adopted the first renewable portfolio standard in the United States in 1983, laying the foundation for the state to remain one of the top renewable energy producing states in the country. As governors, George W. Bush (Texas) and Christine Todd Whitman (New Jersey) took aggressive actions to reduce greenhouse gas emissions. Like Iowa’s, Texas’s early commitment has proven durable: the state continues to lead on renewable energy and energy storage deployment, due in large part to its work to accelerate deployment of renewable energy projects, including permit streamlining and faster interconnection for new projects. Renewable electricity generation reached a new high in the United States in 2024, and four of the top five producing states were “red” states.

U.S. cities and states have also taken a lead to affirm global commitments to achieve greenhouse gas emission reductions. In 2005, the U.S. Conference of Mayors launched its climate commitment when 141 mayors committed to meeting the emission targets included in the Kyoto Protocol. That same year, Governor Arnold Schwarzenegger issued an executive order in California, establishing economy-wide greenhouse gas emission reduction targets for 2010, 2020, and 2050. 

Subnational actions are far more than symbolic. As of today, 33 states have climate action plans, 24 states have economy-wide greenhouse gas emission reduction targets, and 36 states have renewable energy or clean electricity standards. These commitments spotlight state leadership and serve as a reminder of states’ authority, their commitment to environmental leadership, and their ability to serve as a backstop to federal inaction. They also send a signal to developers, investors, and other governments that they will find customers, markets, and willing partners – a signal that has driven investment in companies and created opportunities for workers and industries in these states.

Subnational leadership is equally important for reimagining the systems, governance, and institutions needed to make these targets, goals, and investments a reality. For example, cities and states have developed streamlined permitting and inspection processes for residential solar and energy storage installations (167 jurisdictions in 47 states), identified least conflict areas for large-scale renewable energy projects (California and Washington), and developed innovative finance structures and institutions to support clean energy investment and development (e.g., Green Banks in 29 states, the District of Columbia, and Puerto Rico). Scaling these innovations can accelerate the energy transition and boost economic development, workforce development, and local communities.

The Need for Subnational Leadership

Subnational leadership on decarbonization is a practical necessity. Modeling shows that meeting global emission reduction commitments requires action by subnational governments and businesses. In the United States, existing commitments by subnational governments and businesses could reduce national emissions 25% below 2005 levels by 2030. State-level action can be approximately cost-comparable to federal (top-down) action, and it tends to focus more on electrification of energy end uses, clean energy, and direct air capture, all solutions that fall under the authority of subnational governments.

Subnational governments hold primary authority over infrastructure development, siting and development of energy generation and transmission, energy efficiency in buildings, and land use decisions that determine development and conservation patterns. For example, scaling electrification to levels needed to achieve climate goals requires massive build out of clean energy resources and installation of heat pumps and other electric appliances at the household level. At the same time, state and local economies and communities are intimately connected to legacy industries that have shaped local fiscal structures, workforce, and cultures. Therefore, realizing the transformation necessary to decarbonize requires strategic action by state and local government.

Current regulatory structures are not optimized to navigate the challenges of decarbonization. Decarbonization is a systems challenge that depends on successful transformation of technical, social, and economic systems. Decarbonizing the energy system requires building new, clean energy and transportation systems while at the same time making the investments needed for an orderly transition away from carbon in legacy industries like oil and gas. This dual approach enables environmental progress while also protecting the workers and communities reliant on legacy industry. It requires aligning environmental goals and policy with economic development, industrial policy, and environmental justice and equity goals. Failing to take an integrated approach risks repeating patterns of earlier transitions that concentrated pollution and contributed to growing inequity and environmental justice problems. Subnational governments are well positioned to take this integrated approach.

Working at a state or local level means that policies can be tailored to meet local contexts (e.g., economic, political, or social) in a way that top-down federal policies cannot. This is especially important because “successful implementation” means more than reducing emissions. Successful implementation means meeting climate and environmental goals while promoting prosperity and equitable opportunities for all residents, businesses, and communities. Success depends on accelerating project implementation while also addressing affordability issues and promoting society-wide benefits. Successful implementation stories are needed across a diversity of places to demonstrate approaches that are easy for others to follow and will resonate with cities and states that face different political, economic, and social situations. State and local government provide the right scale for building these solutions.

Challenges to Subnational Leadership and Innovation

It must be noted, especially at this point in time, that while the United States has a strong tradition of subnational leadership, that leadership is not guaranteed. Subnational governments are experiencing many challenges that threaten to erode their leadership position, some inherent in the nature of the energy transition and others stemming from external factors.

A challenge of the energy transition is the need to build up new, clean energy systems while also carefully phasing out old, polluting systems. Governments need to accelerate investment in new clean energy, while continuing to invest in legacy industries. Failing to do both can introduce threats that can erode or even stall progress on decarbonization. California is experiencing this challenge now as the state navigates the impact of reduced demand for gasoline, a success of the state’s clean transportation policies, and what that means for the state’s oil and gas industry. The instability for oil and gas has resulted in the planned closure of two of the state’s refineries. While these closures will reduce pollution in host communities, they will also result in lost jobs and revenue and have the potential to increase the price of gasoline for California consumers. These dynamics have required the state to take actions to stabilize the oil and gas industry, while also accelerating its clean energy investments.

These challenges, inherent to the transition, are being exacerbated by external factors that threaten to erode subnational leadership and innovation, including federal efforts to curtail state authority and the withdrawal of federal funding that states, cities, and developers have depended on.

While daunting, these challenges make subnational innovation more important than ever.

Current Opportunities for Subnational Leadership and Innovation

Now, more than ever, subnational governments need to be places for regulatory ingenuity and action in the face of strong headwinds. The urgency of climate change requires immediate acceleration of the implementation of climate solutions—to reduce greenhouse gas emissions, protect public health and wellbeing, align economic development with environmental goals, and build resilience to changing climate and extreme events. Cities and states are best positioned to design policies to accelerate clean energy, innovation, and economic development because they can design approaches that work in different social, political, and economic contexts. 

Innovation lies in the “how”—how to scope the challenges and design solutions that recognize the complexity of the decarbonization challenge. Subnational governments have already demonstrated the power of regulatory innovation in several areas that show how rethinking regulatory and governance systems can accelerate progress:

1. Least conflict siting
2. Automated permitting
3. Community benefit tools
4. Innovative financing tools

These innovations by subnational governments show how government can accelerate the deployment of clean energy and other climate projects, implement projects that work in specific contexts, and deliver benefits to people and local economies. 

Innovation 1. Least Conflict Siting

Uncertainty, conflict, and complex permitting and siting processes are major impediments to project implementation. Developing new approaches to siting and permitting can reduce uncertainty and delays that can increase project costs and diminish developer confidence. State and local governments can develop and deploy several innovative tools that can make it easier and less expensive to implement climate solutions, while also increasing transparency and engagement.

Cities, states, and counties can improve the siting and permitting processes by removing barriers to implementation, engaging with diverse stakeholders, and reducing costs for developers, government, and residents and businesses. For siting, this can include thinking at a regional or multi-project scale to identify priority areas for development. Engaging stakeholders early can reduce objections and challenges later in the project lifecycle. Least-conflict siting processes provide one approach to innovating in the siting process.

Siting Innovation: Least Conflict Siting Process 

Least conflict siting is a data-driven, participatory siting process guided by stakeholder priorities. A least conflict siting process uses spatial data that reflect stakeholder priorities (e.g., prime agricultural lands, sensitive habitat, etc.) to identify areas for infrastructure siting that avoid areas of high conflict. Using a participatory, stakeholder-driven process can avoid conflict at later stages in the development process, reduce uncertainty for developers, and provide a more transparent process for local residents, businesses, and other stakeholders. By identifying least conflict lands, stakeholders can then focus on removing obstacles to development on those lands (e.g., access to transmission). The process is non-binding, so it can be adjusted as conditions or priorities change.
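To make the mechanics concrete, here is a minimal sketch of the weighted-overlay analysis at the heart of a least conflict siting process. Everything in it is illustrative: the layer names, weights, grid, and threshold are hypothetical stand-ins for the stakeholder-defined data and priorities a real process would negotiate.

```python
import numpy as np

# Minimal sketch of the weighted-overlay analysis behind least conflict
# siting. Layer names, weights, and the threshold are hypothetical.
rng = np.random.default_rng(0)
grid_shape = (100, 100)  # e.g., one cell per parcel or square km

# Each layer scores every cell from 0 (no conflict) to 1 (high conflict).
layers = {
    "prime_agriculture": rng.random(grid_shape),
    "sensitive_habitat": rng.random(grid_shape),
    "cultural_resources": rng.random(grid_shape),
}
weights = {
    "prime_agriculture": 0.4,
    "sensitive_habitat": 0.4,
    "cultural_resources": 0.2,
}

# Composite conflict score: weighted sum of the stakeholder layers.
conflict = sum(weights[name] * layer for name, layer in layers.items())

# Cells below an agreed threshold are candidate "least conflict" areas.
LEAST_CONFLICT_THRESHOLD = 0.3
least_conflict = conflict < LEAST_CONFLICT_THRESHOLD

print(f"{least_conflict.mean():.0%} of the study area is least-conflict")
```

In practice the layers come from GIS datasets, the weights emerge from stakeholder deliberation, and the output is advisory rather than binding.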

A least conflict siting process has been used in two regional contexts. The Center for Law, Energy, and the Environment and the Conservation Biology Institute piloted a least conflict siting approach for solar energy development in the western San Joaquin Valley in California in 2016. The project aimed to identify areas with least conflict for renewable energy development in a six-month process. A least conflict siting process has also been piloted for the Columbia Plateau in Washington. The project spanned eight months and concluded in 2023. 

Both processes used geospatial data made accessible to all stakeholders through a collaborative gateway. Following this pilot program, Washington State passed a law in 2023 to improve project siting and permitting that includes the least-conflict approach as a tool to be referenced for large-scale renewable projects.

While this approach was developed in the context of large-scale renewable development, a least conflict-type process can be applied to a range of project types, including minerals mining, carbon removal, or transmission projects. The least conflict approach surfaces priorities and concerns in each region, and the results of the process can be applied to different types of climate and energy projects, creating opportunities for efficiency. A least conflict siting process requires commitment from a local permitting authority (e.g., local government), project developers, and stakeholders, including environmental, business, economic development, and other groups. The process also requires some investment to establish a robust process, including spatial data showing energy, environmental, and other characteristics of the landscape; robust engagement; and strong facilitation.

Innovation 2. Automated Permitting

Delays in permitting increase project costs for developers and households. Developing transparent and simpler approaches to permitting can reduce delays, errors, and costs. Analysis of rooftop solar permitting in the United States shows that nearly 80% of a system’s cost is attributable to “soft costs.” These include design, project management, permitting, inspections, and interconnection. Reducing these soft costs through streamlined and/or automated processes can significantly reduce the costs of these projects. Streamlining can address many stages of the permitting process, including application, evaluation, and inspection – saving time and money for applicants, installers, and permitting agencies. For large-scale projects, mapping the permitting process to identify opportunities for creativity, flexibility, and efficiency can improve the permitting process. 

Permitting Innovation: Automated Permitting and Inspection

Cities and counties are the permitting authorities for many clean energy projects, including rooftop solar and storage systems. Permit Power is a U.S.-based non-profit organization focused on reducing the bureaucratic impediments to deploying residential solar and battery storage. The organization’s research finds that a typical residential solar installation in the United States is up to seven times more expensive than a similar installation in Australia or Germany. To address this disparity, Permit Power is working at the state and local level to advance permitting and interconnection reform to reduce barriers to residential solar and battery storage projects. 

Developers have identified permitting and inspection as major barriers to residential solar and storage project deployment—increasing both the cost and timeline for projects. Several states have adopted laws allowing or requiring cities and counties to implement automated or streamlined permitting processes, while others have opted in on their own. The state laws that encourage or require automated permitting for solar projects have taken different approaches, generally with a nod to maintaining flexibility. New Jersey’s law directed a state agency to develop an automated permitting platform, but also allows local jurisdictions to adopt an alternate platform with oversight from the state agency. Texas and Florida passed laws allowing the use of automated platforms but not requiring them. Maryland passed a law requiring local jurisdictions to use an automated permitting platform.

Solar Automated Permit Processing Plus (SolarAPP+) provides a free, widely applicable platform to streamline permitting for rooftop solar and solar-plus-storage projects. Available to jurisdictions at no cost, SolarAPP+ is designed to make the installation process easier for contractors and permitters, reducing needed staff resources, project timelines, and permitting delays. The National Renewable Energy Laboratory (NREL) developed SolarAPP+ in collaboration with industry and building inspectors. The platform streamlines permit approval for installations that meet specific requirements.
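The core of any such platform is a deterministic eligibility check: an application that satisfies every codified rule can be approved immediately, while anything else routes to manual review. The sketch below illustrates the general shape of that logic only; the field names, limits, and simplified busbar rule are hypothetical and do not reflect SolarAPP+’s actual compliance checks.

```python
from dataclasses import dataclass

@dataclass
class SolarPermitApplication:
    # Hypothetical fields; real platforms check many more code requirements.
    system_size_kw: float
    inverter_certified: bool   # e.g., listed to a recognized safety standard
    main_breaker_amps: int
    backfeed_amps: float

def eligible_for_instant_permit(app: SolarPermitApplication) -> tuple[bool, list[str]]:
    """Return (eligible, reasons) for a standardized rooftop system.

    Applications that fail any rule route to manual review rather than
    being rejected outright.
    """
    reasons = []
    if app.system_size_kw > 15:
        reasons.append("system exceeds size limit for streamlined review")
    if not app.inverter_certified:
        reasons.append("inverter not on certified equipment list")
    # Crude stand-in for a busbar/backfeed rule; not an actual NEC check.
    if app.backfeed_amps > 0.2 * app.main_breaker_amps:
        reasons.append("backfeed exceeds allowance for this service panel")
    return (not reasons, reasons)

ok, issues = eligible_for_instant_permit(
    SolarPermitApplication(7.2, True, 200, 32.0))
print("instant permit" if ok else f"manual review: {issues}")
```

Because the rules are explicit and machine-checkable, approval for compliant systems takes seconds rather than days, which is the source of the time savings documented below.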

SolarAPP+ was launched in 2021. By the end of 2023, 167 permitting jurisdictions had adopted or piloted SolarAPP+, and close to 600 additional jurisdictions had expressed interest in the application. Annual evaluations of the platform’s use show that permit issuance for code-compliant systems is nearly instantaneous and that SolarAPP+ projects complete the full permitting process faster than installations that use the traditional permitting process. The evaluations also document savings in staff time and reductions in project delays.

SolarAPP+ is now managed by an independent foundation and is available to all jurisdictions free of charge. 

Innovation 3. Making Projects Work for All By Using Community Benefit Tools

A risk of moving projects at a more rapid pace is the potential for harmful, unintended impacts on communities and the environment, including concentration of industrial activities, damage to habitat and natural systems, and reductions in local quality of life. At the same time, these projects can bring economic development, workforce, and associated benefits to host communities. Community benefits tools can reduce conflict and improve project delivery by minimizing harms and harnessing benefits from these investments.

Community benefit tools can provide a mechanism for ongoing accountability and transparency for host communities, businesses, residents, and other stakeholder groups. These tools include community benefits agreements, community and cooperative ownership structures, and community oversight structures. If carefully crafted, community benefits tools have the potential to deliver meaningful benefits to infrastructure host communities, provide opportunities for community oversight and shared governance of projects, and reduce friction between developers and communities. Including community organizations and stakeholders as planning and implementation partners is vital to the success of these tools. 

Subnational governments can require or create incentives for the development and use of community benefits tools for project deployment, and they have an important role to play in building community capacity to engage in developing and deploying these tools. They are also well positioned to create the guidance and accountability tools that set the conditions for more effective community benefits structures. However, the existence of a community benefits agreement or related tool is not sufficient in and of itself; these tools need to be developed and designed well to deliver benefits to communities.

Importantly, linking community benefits to siting and permitting innovations can provide durable assurances that a project will perform as promised and deliver benefits to its host community.

While requirements and incentive structures have promise, it is critical that tools are available to ensure that community benefits agreements are done well. These include guidelines for agreement development and technical and legal assistance for communities to ensure that agreements deliver real benefits. It is worth noting that developing project-by-project community benefits agreements could produce two unintended and undesirable outcomes: the added process could slow down or discourage wanted projects, and host communities could end up with piecemeal benefits that cannot deliver meaningful and transformative investments (e.g., comprehensive workforce development, integrated infrastructure investments, etc.). Scaled approaches (e.g., at a city or county scale) that deliver community benefits holistically across multiple projects in an area could increase efficiency for developers and support transformative investments in places hosting multiple projects or project elements.

Community Innovation: Models to Deliver Community Benefits 

Community benefits agreements associated with development projects can deliver meaningful benefits to host communities, including investments in infrastructure, workforce development, and other community investments. State or local governments can require community benefits frameworks for projects in a specific geography (e.g., a community benefit ordinance) or create incentives for the development of community benefits agreements or other structures to streamline project development (e.g., California Assembly Bill 205).

Detroit adopted a Community Benefit Ordinance in 2016. The ordinance requires that projects valued above $75 million or that receive significant subsidies from the city provide additional benefits to the community where a project is sited. When the ordinance is triggered, a Neighborhood Advisory Council from the project’s impact area is formed to work directly with the developer. The City of Detroit tracks progress on the commitments made through the agreements developed under the ordinance. Since its passage, eleven projects have finalized agreements under the ordinance and four more are in progress. Regular review of the ordinance’s performance has identified areas for improvement but monitoring shows that it has delivered measurable benefits in Detroit communities. 

In accordance with California Assembly Bill 205 (2022), the California Energy Commission (CEC) has developed an Opt-In Certification program for clean energy projects, including large-scale renewable energy (i.e., greater than 50 MW), energy storage, and some clean technology industrial facilities. Through the Opt-In Certification program, the CEC can issue a permit for a project, enabling it to forgo permitting by local land use authorities and most, but not all, state permits. To qualify for the Opt-In Program, a project must meet a set of requirements, including entering into at least one legally binding and enforceable agreement that benefits one or more community-based organizations in the project area (e.g., a community benefits agreement). For most qualifying projects, the Opt-In Program provides a faster timeline for environmental review (within 270 days of a project’s complete application, in most cases). However, use of the Opt-In Program has been limited to date.

Innovation 4. Develop Innovative Financing Tools

Funding and finance for climate actions is a major barrier to advancing action, especially as the federal government claws back and reduces federal funding for energy, environmental, emergency response, and other programs that states, cities, and project developers have depended on. Now more than ever, developers, cities, and states need to be innovative in how they access and deploy funding and finance tools to support project development. 

Subnational jurisdictions can initiate various revenue generation strategies to support project development. They can also access or establish funding (i.e., grant) and financing (i.e., loan) strategies to support public and private project development. Subnational revenue generation tools include bonding authority, taxing structures, credit programs, and pricing programs (e.g., congestion pricing in New York City). They can also establish and/or access financing institutions like green banks and public-private structures to finance project development.

Subnational governments need to deploy a suite of revenue generation, funding, and financing strategies to support implementation and unlock access to private capital and investment. This is especially true given current threats to municipal finance, including withdrawal of federal funds, increasing climate risks and disasters, and the fiscal dimensions of the energy transition that affect local revenue structures. An analysis of options to support implementation of San Francisco’s Climate Action Plan found that the City needed to access all tools available to it to achieve the levels of investment necessary to implement the plan. This included development of new tax and fee structures, use of financing districts, integration of climate actions into the City’s schedule of general obligation bonds, establishment of a green bank, and implementation of pricing policies, including congestion pricing. Since the analysis, San Francisco has begun acting on these options: with Proposition A in March 2024, the city integrated climate-related actions into a scheduled general obligation bond to support affordable housing.

Financing Innovation: Public Finance for Transmission Infrastructure

Thirty states either have or are considering development of a green bank. A green bank provides access to capital for clean energy and other sustainable projects by issuing loans to projects that might otherwise have difficulty accessing capital. Programs within California’s Infrastructure and Economic Development Bank (I-Bank), the State’s financing institution supporting public infrastructure and private development projects that benefit California’s economy and quality of life, serve as the state’s green bank. Recent legislation established a program within the I-Bank to support investment in transmission infrastructure.

Transmission infrastructure is needed in many regions to distribute clean, renewable energy from where it is generated to load centers. California anticipates a quadrupling of in-state renewable energy generation by 2045, which will include offshore wind generation off the northern and central coasts, large-scale solar generation in the Central Valley, and geothermal energy from the inland south regions of the State. Currently, new transmission is funded through investor-owned utilities that pass costs on to ratepayers or by private developers.

To provide an alternative model, California recently established a public financing mechanism for new transmission infrastructure, the California Transmission Accelerator Revolving Loan Fund Program, housed within the I-Bank. The Governor’s Office of Business and Economic Development will develop a financing and development strategy in coordination with the State’s energy agencies to guide the implementation of the Transmission Accelerator. The program is designed to reduce burdens on ratepayers by using State funds to support needed transmission infrastructure development.

Closing Thoughts

We are at a critical moment for climate progress—given both the urgency of climate change and political polarization in the United States. Yet this combination provides an opportunity for creative thinking and innovation at the subnational level. State and local governments have an opportunity, and perhaps an obligation, to reimagine regulatory, institutional, and governance structures and to design decarbonization strategies that work for state and local economies, communities, and the environment. If undertaken at scale and implemented quickly, these subnational actions can have a significant impact on carbon emission reductions.

Ways to help realize this innovation include scaling the four approaches described above, investing in state and local capacity to implement them, and sharing successful models across jurisdictions so that approaches proven in one place are easy for others to follow.

Outcome-Based Contracting Reorients Government IT Acquisition Around Public Value and Mission Results

The effectiveness of federal programs is increasingly determined by the technology that powers them. Yet decades of oversight and research have documented persistent challenges in large-scale IT modernization. The Government Accountability Office has repeatedly designated federal IT management as high risk, citing cost overruns, schedule delays, weak requirements management, and inadequate oversight. Bent Flyvbjerg’s research shows that large public-sector technology and infrastructure programs are especially prone to failure due to scope creep and cumulative risk. The Defense Innovation Board similarly concluded in Software Is Never Done that long development cycles and early requirement lock-in expose missions to unacceptable risk.

Across these analyses, the pattern is consistent: requirements are defined too early and too rigidly; performance is measured too late; incentives reward milestone completion rather than operational outcomes; and risk accumulates until deployment. These failures reflect several structural challenges—fragmented funding, leadership turnover, legacy system complexity, and acquisition models that delay validation and limit adaptation.

Traditional acquisition approaches assume stable requirements and predictable environments. Software-intensive systems do not behave this way. Requirements evolve, dependencies emerge during implementation, and technology ecosystems shift over the life of the contract. In this context, specification-driven models can increase risk by delaying feedback and limiting course correction.

This paper examines Outcome-Based Contracting (OBC) as a model for aligning acquisition with the realities of modern IT delivery. OBC reframes procurement around the staged achievement of measurable mission outcomes rather than the delivery of predefined technical artifacts. OBC ties funding, evaluation, and continuation decisions to mission outcomes and pairs naturally with iterative delivery practices that surface and reduce risk early.

Outcome-Based Contracting

Federal acquisition models have evolved over time in response to changing technologies and risks. Early approaches emphasized detailed specification and cost control, with contracts structured around defined requirements and reimbursement of inputs (e.g., cost-plus and fixed-price models). As systems grew more complex, performance-based contracting emerged to shift focus from activities to measurable outputs and service levels. However, in complex and dynamic environments, even performance-based models often remain tied to predefined deliverables and intermediate metrics, limiting their ability to adapt as conditions, requirements, and understanding evolve over time.

Outcome-based contracting (OBC) represents a further evolution. It structures the government–contractor relationship around shared accountability for mission results rather than delivery of predefined outputs. Its defining feature is not a pricing model, but the alignment of incentives, governance, and performance measurement around measurable mission outcomes.

As Allan Burman notes, OBC builds on performance-based contracting by shifting accountability from activities and milestones to mission outcomes. In practice, it establishes a structured process in which government and contractor jointly deliver measurable results, with contracts defining decision rights, evaluation mechanisms, and adaptive processes.

Key features include:

- Measurable mission outcomes, rather than predefined technical artifacts, as the anchor for accountability
- Clearly defined decision rights shared between government and contractor
- Evaluation mechanisms that tie funding, incentives, and continuation decisions to demonstrated results
- Adaptive processes that allow delivery to adjust as conditions and understanding evolve

A useful way to understand outcome-based contracting is as a managed performance relationship rather than a one-time procurement transaction. As research from the IBM Center for The Business of Government emphasizes, effective outcome-based models require clearly defined desired results, measurable indicators of success, and ongoing performance management processes that allow both parties to assess progress and adjust course. This includes establishing baseline performance, continuously monitoring results, and linking financial incentives, contract options, and governance decisions to demonstrated improvement. Critically, these models depend on sustained collaboration and transparency: agencies must be able to interpret performance data and engage in joint problem-solving with vendors, rather than relying solely on compliance reviews. In this sense, OBC is not simply a different way to write requirements—it is a different way to manage delivery, in which measurement, incentives, and decision-making are continuously aligned to achieving mission outcomes.

Applying Outcome-Based Contracting to IT Modernization

Applying OBC to IT modernization requires three shifts: defining measurable outcomes, structuring decision rights, and organizing contracts around incremental delivery.

Defining outcomes

Mission objectives must be translated into measurable operational indicators—such as transaction completion rates, time to resolution, system availability, or error reduction. These indicators must be precise enough for evaluation while reflecting real-world service performance.

Effective models distinguish between:

- Mission outcomes: the measurable results the program exists to achieve
- Supporting metrics: the operational indicators that track progress toward those outcomes

For example, a call center contract might set a mission outcome of reducing resolution time by 30 percent, supported by metrics such as speed of answer, first-contact resolution, and callback completion time.
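This structure can be made operational. The sketch below shows one way the call center example might be encoded: a mission outcome judged against an anchor metric, with supporting metrics tracked against baselines. All names, baselines, and targets are illustrative and not drawn from any actual contract.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    baseline: float
    current: float
    lower_is_better: bool = True

    def improvement(self) -> float:
        """Fractional improvement relative to baseline."""
        delta = (self.baseline - self.current) / self.baseline
        return delta if self.lower_is_better else -delta

@dataclass
class MissionOutcome:
    name: str
    target_improvement: float          # e.g., 0.30 = 30% reduction
    supporting_metrics: list[Metric] = field(default_factory=list)
    anchor_metric: str = ""            # metric the outcome is judged on

    def achieved(self) -> bool:
        anchor = next(m for m in self.supporting_metrics
                      if m.name == self.anchor_metric)
        return anchor.improvement() >= self.target_improvement

outcome = MissionOutcome(
    name="Reduce average time to resolution",
    target_improvement=0.30,
    supporting_metrics=[
        Metric("time_to_resolution_min", baseline=40.0, current=26.0),
        Metric("speed_of_answer_sec", baseline=600.0, current=420.0),
        Metric("first_contact_resolution_rate", baseline=0.55,
               current=0.68, lower_is_better=False),
    ],
    anchor_metric="time_to_resolution_min",
)
print(outcome.achieved())  # True: a 35% reduction beats the 30% target
```

The point of the separation is that supporting metrics inform joint problem-solving, while only the anchored mission outcome carries contractual weight.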

A central design question is how outcomes are embedded in the contract. Outcomes can function as binding accountability anchors, linked to evaluation, incentives, and option decisions, but not as rigid end-states. This approach is only effective when supported by governance structures that allow agencies to interpret performance and adjust delivery.

Critically, outcomes and the underlying problem definition must be treated as testable and subject to refinement. Initial problem framing is often incomplete in complex systems. Contracts and governance models should therefore include regular check-ins, using data, user research, and operational feedback to assess whether the problem is being solved as intended. Where necessary, agencies and vendors must be jointly empowered to restate or refine the problem to ensure continued alignment with mission needs.

Structuring decision rights

OBC requires clear decision-making authority over priorities and tradeoffs. In software delivery, this centers on a strong government Product Owner (PO) role. The PO is responsible for backlog prioritization, acceptance criteria, and aligning delivery with mission outcomes. The PO must be empowered to continuously adjust priorities based on user needs and performance data without requiring contract modifications. Contractors are accountable for delivering measurable progress, but they do not control mission priorities.

Governance must reflect both agency maturity and the nature of the initiative. More mature organizations can rely on PO-driven execution and adaptive metrics, using contract outcomes as high-level anchors. Even in less mature agencies, OBC principles can be applied in targeted ways—particularly in user-facing systems or components where outcomes can be clearly measured. In some cases, especially large enterprise system implementations, hybrid approaches may be required. These may combine clearly defined objectives and outcome metrics with more structured implementation phases for core platform rollout. The key is not strict adherence to a single methodology, but aligning decision rights, outcomes, and delivery approach with the realities of the system being implemented.

Structuring incremental delivery

Contracts must support incremental, evidence-based delivery. Large, multi-year programs defer risk discovery until late in the lifecycle. Iterative delivery reduces this risk by shortening feedback loops: capabilities are deployed incrementally, evaluated under real conditions, and adjusted early. Incremental delivery provides disciplined mechanisms for iteratively paying down risk.

OBC complements this model by tying funding and continuation decisions to demonstrated performance. Agile practices surface risk; OBC aligns accountability and resources to its mitigation.

This has direct implications for funding models. Effective OBC implementations require upfront decisions about how much funding is allocated to a product or service, with mechanisms to adjust that funding over time based on performance. Budgeting should support iterative scaling—expanding or contracting investment based on whether outcomes are being achieved. This, in turn, requires financial flexibility, such as capability-based budgeting, and the ability to reallocate funds or leverage working capital-like mechanisms.
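As a simple illustration of what outcome-gated funding could look like, the sketch below scales an increment’s funding up or down based on the share of outcome targets met in the prior increment. The thresholds and scaling factors are hypothetical; real gates would be negotiated between the parties and bounded by appropriations rules.

```python
def next_increment_funding(base_funding: float,
                           outcomes_achieved: int,
                           outcomes_total: int) -> float:
    """Illustrative funding gate for the next delivery increment.

    Thresholds and scaling factors are hypothetical stand-ins; actual
    gates would be contractually defined and fiscally constrained.
    """
    attainment = outcomes_achieved / outcomes_total
    if attainment >= 0.8:
        return base_funding * 1.25   # scale up: outcomes validated
    if attainment >= 0.5:
        return base_funding          # hold steady; adjust the backlog
    return base_funding * 0.5        # contract scope; revisit problem framing

# 5 of 6 outcome targets met -> expand investment in the next increment.
print(next_increment_funding(2_000_000, outcomes_achieved=5, outcomes_total=6))
```

The mechanism is less important than the principle it encodes: money follows demonstrated performance, increment by increment, rather than being committed years in advance.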

In practice, appropriations constraints can limit this flexibility. For example, agencies operating under single-year appropriations may struggle to dynamically adjust funding in response to performance signals. Addressing this requires coordination between acquisition, product, and financial management functions to ensure that funding structures align with the adaptive nature of outcome-based delivery.

Outcome-Based Contracting In Practice

Outcome-oriented approaches are not new, but they remain underutilized in IT acquisition. Existing models demonstrate the value of aligning funding to measurable performance.

Within government, the Department of the Navy’s World Class Alignment Metrics (WAM) evaluates IT investments based on outcomes such as resilience, customer satisfaction, and cost per user. Similarly, Department of Defense Performance-Based Logistics ties compensation to readiness outcomes, and NASA’s Commercial Crew program links payments to demonstrated capability. 

These examples share a core principle: funding follows validated performance rather than predefined inputs. Applied to IT modernization, this requires pairing mission outcomes with iterative delivery, clear decision rights, and sustained technical engagement. Without these elements, outcomes risk becoming abstract goals rather than operational tools.

Despite its advantages, outcome-based contracting is not the default in federal IT acquisition. In practice, existing incentives continue to favor specification-driven models: funding structures are rigid, oversight emphasizes compliance with predefined requirements, and procurement processes reward detailed up-front definition over adaptive execution. The following case illustrates how these dynamics shape real-world outcomes—and how leadership, governance, and delivery choices ultimately determine whether programs succeed or fail.

Case Study: SSA Call Center Modernization

The Social Security Administration (SSA) operates one of the largest public-facing service platforms in the federal government, serving approximately 70 million Americans through its national 800-number network and field offices. In 2017, SSA faced growing problems with its aging, complex telephone infrastructure and rising wait times for the tens of millions of Americans who rely on the agency’s national 800-number for assistance with benefits, Social Security numbers, and other services. To address these issues, SSA launched the Next Generation Telephony Project (NGTP), a large IT modernization effort intended to replace legacy telephone systems and unify call handling across the agency.

NGTP emerged from a traditional acquisition model: a detailed, waterfall-style specification, a large systems-integrator contract, and milestone-based progress tied to predefined technical requirements. In February 2020, SSA awarded an IDIQ contract to Verizon to design, implement, test, transition, operate, and maintain the new telephony platform, including procurement of hardware, software, and services. Implementation faced challenges from the beginning: Verizon’s win was contested, delaying the start of work. SSA’s team didn’t realize that the solution Verizon proposed, reinforced by SSA’s own contract requirements, was based on architectural components a generation behind leading contact center systems. And NGTP’s 10-year planning horizon meant any solution would likely be obsolete before full deployment.

Shortly after award, with the project still in early development, the COVID-19 pandemic forced SSA call center agents to work remotely — a capability the existing legacy system lacked. Verizon scrambled to assemble a custom stopgap solution, but it was plagued with issues. From May 2021 to December 2022, over 40 service disruptions caused dropped calls, long wait times, and outages. At times, more than half of calls went unanswered as the team capped incoming calls to maintain system stability.

Meanwhile, NGTP suffered further delays and technical hurdles. SSA executives were frustrated but assumed they were contractually stuck. The system finally launched in December 2023 for the 800-number only, delivering just part of the promised functionality. Even then, the system experienced ongoing performance issues, including increased wait times and disconnected or unanswered calls that hindered the agency’s ability to serve the public. On August 22, 2024, after only about 10 months of operation, SSA transitioned the 800-Number Network off the NGTP platform and moved to a different telephony solution. In all, NGTP cost SSA over $160 million and was abandoned within a year of deployment.

The failure was not attributable to a single cause. Interviews and oversight findings point instead to a combination of over-specification, missing mission outcomes, weak accountability mechanisms, long planning horizons, and an acquisition structure that made adaptation difficult. 

It is also important to recognize the scale and complexity of SSA’s operating environment. The agency’s service delivery depends on hundreds of interdependent systems, many of which encode decades of policy and operational logic. Modernization efforts must contend not only with outdated technology, but with deeply embedded business rules and integration dependencies that are not always fully visible at the outset. These conditions increase the difficulty of both specification and implementation, regardless of acquisition approach.

Specificity Did Not Produce Control

A central lesson of NGTP is that specificity in requirements does not necessarily translate into control over outcomes. The solicitation and technical requirements were extensive and highly prescriptive. They incorporated staff input but lacked sustained user-centered validation and focused heavily on defining technical components rather than the operational outcomes the system was intended to achieve. In several cases, the contract mandated architectural approaches that constrained flexibility and effectively locked the program into solutions already lagging prevailing commercial practice.

The NGTP contract required the development of significant custom telephony capabilities in a market where mature commercial Contact-Center-as-a-Service (CCaaS) platforms already existed. Custom software and hardware development inherently carries greater risk than configuring established commercial platforms: the first buyer bears the cost of defects, scaling problems, and design errors that mature products have already identified and resolved. As a result, the program assumed substantial technical risk without clear evidence that SSA’s mission required a bespoke system.

The decision to pursue a custom telephony architecture also introduced structural technical risks. The system was intended to function as a “single enterprise contact center” capable of routing calls across SSA’s national network. In practice, however, the implemented solution consisted of six separate contact centers operating as independent queues rather than a unified system. According to the SSA Office of Inspector General, this configuration prevented calls from being dynamically rerouted between queues, limited agents to answering calls from a single queue, and could disconnect calls when agents logged out of one queue even if capacity existed elsewhere in the system. These limitations increased wait times and created operational inefficiencies. Efforts to resolve the architectural mismatch led to the development of a custom routing “brain” intended to connect the six queues—effectively reinventing load-balancing technologies that have been widely used and commercially mature for decades. The need to retrofit this architecture required multiple contract modifications and created ongoing operational challenges. As one SSA leader later observed, “Some people on the project might have known that load balancers had been mature for 30 years, but managers weren’t listening to them.”
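
To make the architectural point concrete, the sketch below shows the kind of least-loaded routing that commodity load balancers have performed for decades, and that the six-queue configuration could not. It is an illustrative toy, not a description of NGTP’s actual software; the names and numbers are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Queue:
    name: str
    agents_available: int
    calls_waiting: int = 0

def route_call(queues: list[Queue]) -> Queue:
    """Send an incoming call to the least-loaded queue with free agents.

    Falls back to the shortest queue overall if no agents are free anywhere,
    rather than capping intake or disconnecting the caller.
    """
    open_queues = [q for q in queues if q.agents_available > 0]
    candidates = open_queues or queues
    target = min(candidates, key=lambda q: q.calls_waiting)
    target.calls_waiting += 1
    return target

# Six regional contact centers behaving as one logical enterprise queue.
centers = [Queue(f"center-{i}", agents_available=i % 3) for i in range(1, 7)]
print(route_call(centers).name)  # routes to a center that has capacity
```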

The contract’s prescriptive structure also undermined the flexibility typically associated with its contract vehicle. Although NGTP was structured as an IDIQ, the narrowly defined solution space meant that many necessary adjustments required formal work orders or contract modifications. In practice, the program combined the administrative rigidity of traditional contracting with the technical risk of custom system development.

The detailed specifications also locked the implementation into outdated architectural assumptions. For example, certain components were required to be compatible with an old, unspecified version of Internet Explorer, a browser Microsoft formally retired in 2022 in favor of Microsoft Edge. Rapidly evolving technology environments can render highly specific requirements obsolete before systems are delivered. At the same time, the extensive technical detail did not fully address practical operational considerations, such as ensuring that existing SSA call center staff could easily access and use the system in their day-to-day workflows.

Missing Mission Outcomes

The NGTP case also illustrates the limits of operator-focused metrics. SSA understandably focused on call volume and the ability of the system to handle surges in demand. Previous infrastructure could “top out” during predictable spikes, such as cost-of-living adjustment periods. Capacity therefore became a central concern.

But throughput alone is not the same as service performance. For beneficiaries, the meaningful outcomes include how long it takes to reach a representative, whether the issue is resolved on the first contact, how many interactions are required, and how long it takes to complete a request. Those mission outcomes were not adequately embedded in the contract’s performance framework.

Metrics such as average speed of answer did not fully capture the user experience, particularly when calls were dropped, when calls were initially handled by automated systems, or when callbacks were counted in ways that reduced reported wait times without reducing the time beneficiaries actually spent obtaining help.
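
The distinction is easy to see in code. The hypothetical sketch below contrasts an operator metric (average speed of answer, computed per call) with mission metrics (first-contact resolution and total time to resolution, computed per beneficiary case); the records are invented for illustration.

```python
from datetime import datetime
from statistics import mean

# Hypothetical contact records for one beneficiary ("case A"): a first call
# answered quickly but dropped unresolved, then a next-day callback.
calls = [
    {"case": "A", "placed": datetime(2024, 1, 2, 9, 0),
     "answered": datetime(2024, 1, 2, 9, 1), "resolved": None},
    {"case": "A", "placed": datetime(2024, 1, 3, 9, 0),
     "answered": datetime(2024, 1, 3, 9, 20),
     "resolved": datetime(2024, 1, 3, 9, 45)},
]

# Operator metric: average speed of answer treats each call in isolation.
asa_minutes = mean((c["answered"] - c["placed"]).total_seconds() / 60 for c in calls)

# Mission metrics follow the beneficiary's case across every contact.
case_calls = [c for c in calls if c["case"] == "A"]
first_contact_resolved = case_calls[0]["resolved"] is not None
time_to_resolution = case_calls[-1]["resolved"] - case_calls[0]["placed"]

print(f"avg speed of answer: {asa_minutes:.1f} min")          # 10.5 min
print(f"first-contact resolution: {first_contact_resolved}")  # False
print(f"total time to resolution: {time_to_resolution}")      # 1 day, 0:45
```

Run on these two records, the operator metric looks healthy while both mission metrics reveal a failed first contact and a next-day resolution.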

The deeper problem was architectural as well as contractual. SSA’s call center is best understood as a front-end interface to a much larger, deeply complex service delivery system involving eligibility determination, identity verification, claims processing, and payments. Yet the contract largely treated telephony modernization as a standalone technical problem rather than as part of an integrated operating model. This narrow framing also limited foresight into how the capability could evolve over time, whether by adopting emerging technologies or by integrating with other agency systems to support an omnichannel service model. Defined primarily within a technical infrastructure context, the effort optimized for telephony components rather than positioning customer service as a strategic, cross-agency capability.

Accountability Was Weak Where It Mattered Most

Federal acquisition frameworks already provide multiple mechanisms for vendor accountability, including service level agreements (SLAs), financial incentives and penalties, option periods tied to demonstrated progress, and formal performance reviews. In the private sector, large IT and service contracts routinely embed operational standards such as uptime guarantees, response-time thresholds, and incident-resolution timelines, backed by financial penalties for failing to meet them, to ensure that vendors remain accountable for system performance under real operating conditions. In the NGTP case, however, these mechanisms were not sufficiently embedded in the contract structure or tied to mission outcomes and enforceable operational standards.
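
As a hypothetical illustration of how such a standard can be mechanized, the sketch below computes a fee credit when observed uptime misses a guarantee. The guarantee level, credit rate, and fee are invented numbers, not terms from any actual SSA or Verizon contract.

```python
# Illustrative SLA terms (hypothetical numbers).
SLA_UPTIME = 0.999        # 99.9% monthly uptime guarantee
CREDIT_PER_TENTH = 0.02   # 2% of the monthly fee per 0.1 point of shortfall

def monthly_credit(observed_uptime: float, monthly_fee: float) -> float:
    """Fee credit owed to the government when uptime misses the guarantee."""
    if observed_uptime >= SLA_UPTIME:
        return 0.0
    shortfall_tenths = (SLA_UPTIME - observed_uptime) * 1000  # 0.001 = one tenth
    return min(monthly_fee, shortfall_tenths * CREDIT_PER_TENTH * monthly_fee)

# A month with repeated disruptions totaling roughly 7 hours of downtime:
uptime = 1 - 7 / (30 * 24)  # ~99.03%
print(f"credit owed: ${monthly_credit(uptime, 1_000_000):,.0f}")  # ~$174,000
```

The specific numbers matter less than the structure: a measurable standard, an automatic consequence, and a cap that keeps the remedy proportionate.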

The SSA Office of Inspector General found that the NGTP contract lacked sufficient performance-based quality standards and incentives to ensure accountability for resolving system-performance issues. The practical result was limited leverage for the government even when the system failed to meet technical and operational needs.

The most striking example came at termination. When SSA stopped work on the NGTP effort, the agency still paid the vendor the remainder of the full $125 million contract amount. Whatever the legal and operational considerations behind that decision, the message to the market was problematic: poor performance did not produce a proportionate financial consequence.

SSA’s Course Correction

SSA’s response illustrates an alternative approach. Rather than pursuing another large, fully specified replacement effort, the agency moved incrementally, using cloud-native technology and more flexible contract mechanisms. A proof-of-concept deployment of Amazon Connect at a Pennsylvania call center allowed SSA to test the platform in live operating conditions before scaling further.

This approach introduced several disciplines that had been missing from NGTP. It reduced dependence on bespoke infrastructure, created an opportunity to measure performance under real conditions, and allowed the agency to collect operational evidence before broader rollout. Critically, assumptions were tested incrementally rather than embedded upfront. The agency also adopted product operating model best practices, standing up a cross-functional product team with a product manager, technical lead, design lead, and an SME lead responsible for state-specific launches, training, and key metrics.

Early results suggested improvement. SSA’s Office of Inspector General reported that the agency’s telephone service handled substantially more callers in fiscal year 2025 and that reported average speed of answer improved. The subsequent administration leveraged the scalable platform to expand deployment across all field offices. At the same time, oversight and public reporting also highlighted the importance of careful metric design. Some reported gains did not fully reflect the total time beneficiaries waited for callbacks or to resolve their issues. That distinction is key: better performance frameworks depend not simply on more metrics, but on the right metrics.

Lessons for Outcome-Based Acquisition

The SSA case highlights several lessons:

Governance matters as much as contract structure. Strong product ownership and leadership are essential. Critical to the successful turnaround was having a cross-functional “product quad” of product management, engineering, design, and domain expertise. In the NGTP case, requirements were largely defined within an infrastructure-oriented telecommunications function, leading to a solution optimized for technical components rather than end-to-end service outcomes. This organizational starting point constrained problem framing and limited the program’s ability to align delivery with user needs and mission performance.

An outcome-based model would have defined mission metrics such as first-contact resolution and total time to complete transactions, incorporated discovery phases, and tied continuation decisions to demonstrated performance. It also would have set a precedent for early adoption of the monitoring tools leaders later used in the course correction, such as integrating real-time customer-experience telemetry into daily operations, which enabled continuous monitoring of user outcomes and rapid reprioritization of features as issues emerged.
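
One way to operationalize “continuation decisions tied to demonstrated performance” is a simple metrics gate evaluated before each option period. The sketch below is a hypothetical illustration; the metric names and thresholds are invented, not drawn from SSA’s contracts.

```python
# Hypothetical continuation gate: exercise the next option period only if
# mission metrics clear agreed thresholds. All values are illustrative.
GATES = {
    "first_contact_resolution": lambda v: v >= 0.70,   # share of cases
    "median_minutes_to_resolution": lambda v: v <= 30,
    "dropped_call_rate": lambda v: v <= 0.02,
}

def exercise_option(observed: dict[str, float]) -> bool:
    """Return True only if every mission metric meets its threshold."""
    return all(check(observed[name]) for name, check in GATES.items())

quarter = {
    "first_contact_resolution": 0.74,
    "median_minutes_to_resolution": 22,
    "dropped_call_rate": 0.015,
}
print(exercise_option(quarter))  # True -> continue funding the next period
```

A real contract would pair such gates with validated data sources and dispute procedures; the point is that continuation becomes a function of evidence rather than schedule.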

Finally, contract structure alone is not sufficient. Successful implementation depends on sustained leadership, technical judgment, and the institutional willingness to act on evidence. Several interviewees noted that meaningful progress accelerated only after leadership with prior agile and product delivery experience assumed responsibility for the effort. Acquisition structure can enable better outcomes, but it cannot substitute for leadership capable of making informed technical and operational decisions in complex environments.

Conclusion

Large-scale IT modernization is central to federal mission delivery. Traditional acquisition models remain effective in stable, well-defined environments but are poorly matched to software-intensive systems characterized by uncertainty, interdependence, and continuous change.

Outcome-based contracting provides a more effective framework for these conditions. It strengthens accountability by tying funding and continuation decisions to measurable performance, improves risk management through iterative delivery, and reorients acquisition toward public value. Rather than asking whether a contractor delivered what was specified, it asks whether the government achieved the mission results it needed.

Realizing this shift requires more than changes to contract structure. The authorities to pursue outcome-based approaches largely already exist, but incentives, funding constraints, and workforce capabilities continue to reinforce specification-driven models. Appropriations structures limit flexibility, oversight mechanisms emphasize compliance over performance, and many agencies lack the product management and data capabilities needed to define and act on outcome metrics. Addressing these constraints will require coordinated changes across budgeting, oversight, acquisition practice, and workforce development.

In the near term, IT modernization progress should be visible in concrete ways: contracts that tie option decisions and incentives to mission outcomes; programs operating with empowered Product Owners and real-time performance data; and evaluation frameworks that prioritize whether services are improving, not just whether requirements were met. Over time, this would mark a broader shift from managing compliance with plans to managing performance against outcomes.

For technology and IT modernization efforts, the success of outcome-based contracting depends on alignment with product operating model practices, technical expertise, and sustained leadership. The central proposition of OBC is not less discipline, but better discipline—organized around measurable outcomes, empirical evidence, and the continuous identification and reduction of technical and operational risk.

Building Human Infrastructure to Mitigate AI Fairness Harms in K-12 Education

The rapid introduction of tools powered by artificial intelligence (AI) in K-12 education offers promises of data-driven personalized learning, real-time feedback, and relief for educators’ overstretched workloads. However, increasing access to emerging technologies alone is insufficient for achieving this vision. Without sustained, high-quality professional learning (PL), AI risks deepening a “digital design divide”: a gap where educators lack the support necessary to transform learning experiences by leveraging technology responsibly and effectively.

This challenge is not new. It mirrors a long-standing phenomenon in K-12 education in which significant technology acquisitions occur without commensurate efforts to sustainably build educator capacity. To mitigate this risk, state legislatures and education agencies must prioritize investments in human infrastructure, especially teachers, moving beyond short-term tool training toward durable, high-quality professional learning systems.

Challenge and Opportunity 

While a majority of U.S. educators now use AI in their work, the support necessary to use these tools effectively and responsibly lags significantly. According to RAND, half of the nation’s school districts have not provided training on AI, and high-poverty districts are even less likely to have provided training than their low-poverty counterparts. The failure to provide this essential support, and the resulting disparity, poses a dual fairness risk for vulnerable student groups: they may be subjected to biased or harmful AI practices, and they are also more likely to miss out on innovative uses of AI, including deeply personalized learning responsive to their strengths, backgrounds, experiences, prior knowledge, and needs.

Furthermore, recent research identifies four systemic issues in the systems that currently govern professional learning for high-quality, technology-enabled instruction.

The real opportunity of AI lies not just in the tools, but in an educator workforce prepared to wield them. High-quality PL must thus move beyond short-term tool training to focus on areas necessary for equitable implementation, such as AI fairness and bias mitigation, ethical use of data, critical thinking, data foundations, and deep integration of AI-enabled tools into standards-aligned, high-quality instruction. When done right, this investment in human infrastructure ensures AI accelerates learning outcomes for all students, closing the “digital design divide.”

State legislatures and education agencies are pivotal actors who must address this issue through strategic policy levers. While individual districts manage much of the budget implementation and programmatic decisions, states set the conditions for local success by aligning funding streams and defining clear instructional visions. 

Plan of Action

Recommendation 1. Define and Promote Aligned Visions of AI-Enabled Instruction

Recommendation 2. Align Funding With Instructional Priorities

Recommendation 3. Leverage Compliance Structures for Continuous Improvement

Recommendation 4. Encourage Durable Professional Learning Models

Recommendation 5. Work Across Silos in State Leadership

Recommendation 6. Document, Highlight, and Scale What Works

State education agencies can adapt these recommendations based on their current capacity and context.

Conclusion

According to SETDA’s edtech trends survey, AI is currently the leading state edtech priority and top state initiative. However, with only a small group of states currently prioritizing existing funds for technology training, there is an immediate need to improve the systems governing professional learning. By investing in the “human infrastructure,” as exemplified by states like Wyoming and Massachusetts, state leaders can ensure that AI becomes a tool for accelerating outcomes for all students.

Sustaining Scientific Collections in the Age of AI

Scientific collections play an important but underappreciated role in the American science and technology ecosystem. They have accelerated science by providing repositories for samples, archived information, and data, increasing the efficiency of research by mitigating the need to repeat field work while serving as essential information libraries. Collections are particularly important for the study of biological specimens and pathogens, offering significant advantages for human health and food security, but they also extend to the physical and geological sciences, reducing the need for new observations. The decentralized nature of scientific collections–of which there are more than 10,000 globally and about 2,500 in the United States–increases the value of digital curation technologies, particularly those supported by machine learning and artificial intelligence.

AI companies have identified that one of the most exciting value propositions for their models is the potential acceleration of science by rapidly synthesizing information from disparate sources and gleaning new and undiscovered insights. Accomplishing this mission inherently requires access to public goods and resources, and (in particular) training information that is relevant for scientific discovery.  We believe there is a compelling case to be made for AI companies to contribute to the sustainability of that public good, whether it be through direct financial support or through the provision of in-kind tools that improve the management and sustainability of these uniquely important sources of training information.

A scientific collection is typically defined as an organized, curated repository of physical or digital specimens that (a) support scientific research, (b) are maintained according to systematic standards, and (c) enable reuse or reanalysis. Artificial intelligence models seeking to accelerate scientific discovery require stable, curated, and maintained sources from which to train and draw information, and they are already trained on scientific collections (in addition to other, lower-quality data sources). AlphaFold’s Nobel Prize-winning efforts, for instance, were supported by information derived from the Protein Data Bank, carefully collected and maintained over decades of scientific experiments.

While operating collections is relatively inexpensive (with notable exceptions, like biobanks), their position in markets is challenging as the products created by the “goods” in scientific collections tend to be made open and available for the broader scientific community. As a result, collections are often treated by their owners as a non-revenue-generating institutional expense. In order to foster the economic viability of collections while providing accurate outputs that are valuable to the scientific community, collections managers should update their terms of use and require that AI companies play the role of responsible stewards of scientific data and information, consistent with the role other companies play in supporting successful American research infrastructure.  

The Cooperative Stewardship Model for research infrastructure, which has served as a reliable template for balancing openness and utilization with commercial interests, offers a possible pathway for collections’ sustainment. Federal agencies should seek to promote high-quality processed information and primary sources wherever possible, including through grantmaking, contracts, and procurement. Given that the primary method by which the general public, and particularly students, engage with scientific information is already mediated by AI (through search engines or otherwise), the trustworthiness of AI platforms and responsible stewardship of collections as a primary data source is paramount.

Challenge and Opportunity

Viability of Collections

Scientific collections are one of the most critical elements of the research ecosystem, supporting everything from food security (the National Seed Storage Laboratory in Fort Collins, for example) to research on zoonotic pathogens (the Smithsonian’s mosquito collections). To adopt the language of the tech world, these are “full stack” research endeavors requiring specialized knowledge, capabilities, and environments to do successfully. Yet despite their relatively low costs, major scientific collections, like the herbarium at Duke University, have in recent years faced significant maintenance and upkeep challenges due to a lack of revenue sources. The challenges and viability of specific collections vary dramatically because collections represent a form of extremely decentralized infrastructure, for financial, cultural, and legal reasons. For instance, operating the Smithsonian National Museum of Natural History’s collections may be substantially different from operating privately-owned insect collections or the artifacts, samples, and libraries stewarded by Indigenous Peoples.

Recent federal policy actions, particularly the introduction of new constraints on indirect costs, threaten to further undermine the business model of collections and force the loss or closure of vital sources of scientific data and information. Collections, though relatively inexpensive compared with other scientific infrastructure, still require a significant number of programmatic activities to curate, maintain, and support, at a scale similar to other research activities. Examples of the types of costs associated with collections activities, which can vary widely based on the size of the collection and what is included, are provided below:

Cost categories include: personnel, training, and staff travel; facility space and modification; equipment acquisition and development; utilities; materials and consumables; shipping and receiving; IT, web, and communication services; maintenance and security contracts; and contracts for exhibit/material design and fabrication.

Collections activities that draw on these cost categories, each to a varying extent, include: accessioning; preserving and maintaining; documenting additions; providing access to users; data curation; and increasing public understanding through education and outreach.

The cooperative stewardship model for research and development infrastructure has provided the backbone for federal scientific collections for many decades. While most infrastructure is free to access for scientific purposes, for-profit enterprises are generally charged for access in order to sustain, maintain, and provide new capabilities relevant for industry. Examples include the National Institute of Standards and Technology’s repository of standard reference materials, widely used by industry, which includes everything from $1,200 peanut butter reference material to standard cigarettes for ignition resistance testing. Major U.S. scientific facilities generally operate on the basis of full cost recovery for services used by commercial providers, as opposed to relying on those providers as a primary source of revenue generation. For collections, adopting this model would help reduce reliance on indirect costs from grants, which have come under political pressure.

AI companies and their leaders have identified science as one of their most exciting areas for development and have suggested new licensing and revenue models for measurable AI outputs. To be sure, models have the potential to serve as tools for scientific discovery, providing quick access to massive volumes of information derived from scientific literature and data sources and allowing scientists to rapidly extract novel insights. It is reasonable to imagine a scenario where the primary interaction scientists have with collections in the future is mediated by training data fed into AI foundation models.

AI models must accurately reflect information from scientific collections

AI model firms, which rely on quality sources of training data, should have an express interest in seeing reliable scientific information sources maintained and managed in ways that improve model performance. Good information sources, like collections, must be available and maintained if companies are going to deliver the vision of AI for science expressed by their executives and marketing. Realizing that vision requires taking information derived from the physical world, digitizing it in a way that is comprehensible to a computer, and then feeding that data and relevant processed information into an AI model, where it may be used to help with the process of scientific discovery.

Some of these data sources are quite large and processing-intensive. For instance, the world’s largest radio telescope, the Square Kilometre Array in South Africa, is expected to generate approximately 700 petabytes of astronomical data every year, roughly seven times what Google Search generates over the same period. Models seeking to leverage these data sources for discovery will likely place additional and significant burdens on scientific data infrastructure.

Current AI models are not always the most reliable sources of scientific information, in part due to frequent “hallucinations.” These models do not always preserve scientific information with fidelity, even if trained on high-quality data. For instance, when asked to produce an image of an anatomically correct blue whale skeleton, OpenAI’s ChatGPT model will output images of a strangely toothed (as opposed to baleen) whale skeleton that incorrectly represents the whale’s tail as a single, large bony structure, among other errors. Search engines have similar issues, with regular Google searches in late 2025 producing a humpback whale image when searching for “blue whale,” citing a popular science website as opposed to a scientific collection or other reliable source.

Image generated by ChatGPT using the prompt “Can you draw me an anatomically correct image of a blue whale skeleton?” on April 2, 2026. Note the teeth-like structures on a baleen whale, the bizarre pectoral fin bones, and the hallucination of a bone representing the tail, among other errors. Also note the lack of a clear watermark identifying the image as AI-generated.

Google search result for “blue whale,” November 12, 2025. Note that the first image, drawn from ZME Science, is actually a photograph of a humpback whale.

The challenge of producing high-quality scientific outputs does not (and should not) rest entirely at the feet of model developers. While these data sources provide the most complete library of field samples available, they are far from perfect: they are often tagged with partial or non-standard information and, less frequently, with incorrect information (samples collected before 1990, for instance, generally are not tagged with latitude and longitude). This makes it more difficult to judge the quality of inputs and requires human curation, and occasionally additional field work, to improve the quality of the collection. It also presents an opportunity: AI models in science could, with appropriate validation, help identify and correct missing, incomplete, or inaccurate information.
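
A hypothetical sketch of what machine-assisted curation might look like at its simplest: a validator that flags the metadata gaps described above. The record schema and rules are invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpecimenRecord:
    specimen_id: str
    taxon: Optional[str]
    year_collected: Optional[int]
    latitude: Optional[float]
    longitude: Optional[float]

def curation_flags(rec: SpecimenRecord) -> list[str]:
    """Flag common metadata gaps, e.g. pre-1990 samples without coordinates."""
    flags = []
    if rec.taxon is None:
        flags.append("missing taxon")
    if rec.year_collected is None:
        flags.append("missing collection year")
    if rec.latitude is None or rec.longitude is None:
        msg = "missing coordinates"
        if rec.year_collected and rec.year_collected < 1990:
            msg += " (expected: pre-1990 sample)"
        flags.append(msg)
    elif not (-90 <= rec.latitude <= 90 and -180 <= rec.longitude <= 180):
        flags.append("coordinates out of range")
    return flags

print(curation_flags(
    SpecimenRecord("USNM-001", "Balaenoptera musculus", 1975, None, None)
))
```

Even this trivial rule set shows why curation is labor-intensive at the scale of millions of specimens, and where machine assistance could pay off.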

There is a general need for improved transparency with respect to training data and the information standards models use to evaluate the quality of data.

Scientific collections are uniquely valuable to foundation models given the high quality of the data and information sources, as distinct from information sourced from the broader internet. Unfortunately, it is difficult to assess whether companies that develop generalized models, like OpenAI or Anthropic, privilege scientific data sources over general information collected from the internet, which may include creative renderings, fantastical artist depictions, and outright falsehoods. Assessing this challenge is not straightforward. Companies frequently do not disclose training information or details about how their models process data and information–if they are even able to explain a model’s output.

Recommendation 1. Scientific collections should update their terms of use to require that data used to train for-profit models be licensed, consistent with existing principles for research and development infrastructure. 

Scientific collections, many of which are already struggling, now face an existential threat as a result of government budget cuts and federal policy actions (particularly those related to indirect costs). Modifications to collections’ business models are already inevitable, especially as industry and philanthropy rapidly reprioritize their current investments. Rather than attempting to build a new social contract between AI companies and the institutions that provide this valuable infrastructure, one that might threaten its general accessibility for both AI companies and the broader scientific community, it makes sense to rely on the proven public-private partnership governance models identified in the National Academies’ cooperative stewardship report.

Collections offer a unique and necessary service if artificial intelligence is to realize its full potential for scientific discovery. Rather than developing or adopting new and unfamiliar business models, collections should adopt the same cooperative stewardship principles that work for the rest of the science and technology ecosystem. There is inherent value in ensuring that collections serve as the primary basis for the information used to train AI models for scientific discovery, as opposed to less reputable sources from the broader internet (including scientific studies that resist replication).

Under these principles, companies that wish to use the products of research for proprietary and commercial purposes are charged for access to infrastructure. Organizations seeking to produce non-proprietary models for scientific purposes, by contrast, should be given the same access advantages conferred on other parts of the scientific community to ensure dissemination and broad utilization.
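
In practice, a collection’s terms of use could encode this distinction directly. The sketch below is a hypothetical illustration of such a gate; the categories and wording are invented, not drawn from any existing collection’s license.

```python
from dataclasses import dataclass

@dataclass
class Requester:
    name: str
    commercial: bool        # training a for-profit model?
    licensed: bool = False  # has executed a data license

def access_decision(r: Requester) -> str:
    """Hypothetical terms-of-use gate reflecting cooperative stewardship:
    open access for non-commercial scientific use, fee-based licensing
    for model training by profit-seeking entities."""
    if not r.commercial:
        return "open access (non-commercial scientific use)"
    return "licensed access" if r.licensed else "license required before bulk download"

print(access_decision(Requester("university-lab", commercial=False)))
print(access_decision(Requester("frontier-ai-co", commercial=True)))
```

Nothing here is a licensing regime by itself; it simply shows that the cooperative stewardship distinction is easy to encode once a collection’s terms of use define it.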

It is reasonable to assume that AI companies could respond to fee-for-access by simply declining to use data or information from collections. But efforts to derive the same information from papers or other secondary sources are likely to be incomplete and less useful for creating high-quality models. And it is reasonable to ask, as is the case for every other industry that utilizes R&D infrastructure, that companies which stand to profit from the products of collections pay for their sustainment.

Alternatives, including the provision of in-kind tools or resources to assist with the curation and management of scientific collections (and thereby increasing the efficiency of collections and improving the quality of the services they provide the broader community), may also be explored.

Recommendation 2. Science agencies should prioritize development of specialized tools to curate information and data housed in scientific collections and assist in the development of data and information standards. These tools should continue to rely on cooperative stewardship principles (i.e., allowing open access for non-commercial use while maintaining fee-based access for profit-seeking entities).

Given that today’s generalized LLMs, image generators, and search engines cannot be counted on to reliably output accurate scientific information, science agencies should continue to invest in specialized models designed and developed to address critical mission needs. This would likely require investment in models designed to curate, compare, and manage samples that are frequently tagged with non-standardized or lower-quality information.

Recommendation 3. When procuring solutions for science and technology purposes, the government should show preference for AI models that are trained on the data and information housed in scientific collections. This should include investment in R&D to help develop models that appropriately privilege high-quality training information.

Government procurement can serve as an important tool for increasing the quality of services. As government agencies invest in artificial intelligence for policy purposes, including for the development of technical tools that affect people’s rights and safety, the need for models to output quality information increases substantially.

Government agencies can, through the contracting process, require that providers be held to certain standards and, whenever possible, provide appropriately-licensed images drawn from primary sources. Such requirements could also substantially reduce models’ compute demands: a model could simply offer a reference image, video, or other information source rather than generating (and possibly hallucinating) a lower-quality output of its own.

Conclusion

The 2020 Economic Analyses of Federal Scientific Collections, produced by the Smithsonian and the National Science and Technology Council, notes that some authors conceive of collections as a “global public good” underpinning much of the science and technology enterprise. As scientists’ and the public’s interactions with collections, and with the knowledge contained within them, become primarily mediated by AI platforms, and as those platforms seek to profit from the content created from their use, the platforms’ role as stewards of these resources should correspondingly increase. Collections are ideal “senses” through which AI platforms can collect high-quality information about the world, enabling the most exciting implementations of AI to be realized. Supporting the infrastructure that enables that realization is crucial for the success of the technology.

Thank you to Dr. Oliver Stephenson for his contributions to the text.

A People-centered, Power-conscious Regulatory Democracy: Balancing Distributive Justice and Delivery Efficacy

Building Blocks to Make Solutions Stick

People-centered, power-conscious rulemaking, using deliberate stakeholder engagement strategies, produces faster and better results. 

Power disparities are among the biggest obstacles confronting meaningful action on climate change. Among our governing institutions, the federal administrative state is unique in its potential for overcoming these disparities, offering an effective mechanism for redistributing political power from corporate interests committed to maintaining the status quo to the general public, who are already bearing the costs of global climate disruption. The key to realizing this potential is “regulatory democracy.”

At present, though, the means for conducting public engagement in the administrative state generally fail to meaningfully engage the public; instead, they have the perverse effect of reinforcing power disparities, as the burgeoning Abundance movement has ably documented. What is needed is a better approach to regulatory democracy, one that is people-centered and power-conscious.

This paper sketches out what a people-centered, power-conscious approach to regulatory democracy in the rulemaking process would look like: a reform called “Public Participation Planning.” The reform consists of two major procedural components. First, it calls upon agencies to develop “public engagement strategy blueprints” as a mechanism for deliberately creating a tailored public engagement strategy for each rulemaking. The key innovation here is the recognition that different kinds of “expertise” (democratic vs. technocratic) are required at different stages of regulatory development, a concept referred to here as “sequential participation.” Second, it calls upon agencies to document the actual performance of that strategy in a pair of documents called the Initial and Final Public Participation Plan Statements. These statements would capture the impacts that public engagement has on the development of the rule. They would be included in the rulemaking documents along with the notice of proposed rulemaking and the final rule, respectively, where they can help inform judicial review. In theory, a president could implement a rigorous version of Public Participation Planning without new statutory authority, though the reform’s full potential could be enhanced by additional actions from Congress and the judiciary. Properly implemented, Public Participation Planning would both improve the quality of agency decision-making and permit more expeditious policy implementation by reducing the ability of powerful interests to use the rulemaking process to reinforce a bias toward the status quo.

Addressing the Climate Crisis Through Better Regulatory Democracy

One of the underappreciated features of the federal administrative state – at least within our contemporary context – is its potential capacity to prevent politically and economically destabilizing concentrations of power from taking hold by continually redistributing it to the general public. That the architects of the modern administrative state – turn-of-the-20th century policymakers, thinkers, and movement leaders alike – designed it with this particular goal in mind seems to have been lost to history, however.

The key to their radical vision of the administrative state was “regulatory democracy” – that is, the notion that administrative agencies would work cooperatively (and sometimes competitively) with the public to shape policy priorities, design, and implementation. At its best, regulatory democracy would take the form of a working relationship that was ongoing and durable. The dynamic it was meant to yield would be a much thicker form of engagement in our governing institutions than ordinary Americans would experience through the episodic opportunities of casting a ballot and the often-binary choices they would be presented with during those opportunities.

This vision should be particularly resonant today, though it may sound esoteric and peripheral at first blush. For the reformers of the early progressive era, creating a new venue for translating public power into policy change was essential for effectively meeting the then-emerging challenges that many Americans faced due to such societal changes as industrialization and urbanization. Today, we face cascading challenges – climate change, globalization, and rapidly evolving forms of computational technology such as Artificial Intelligence and quantum computing – which have similarly exposed the limits of our governing institutions. Then, as now, society was characterized by vast disparities of economic and political power that further threatened effective policy implementation. The administrative state’s comparatively decentralized and democratized design – relative to Congress – was meant to mitigate these effects.

More to the point, if we are to avert the worst consequences of the climate crisis, we will need to quickly revive these robust democratic traditions of the administrative state. After all, as the Green New Deal movement correctly taught us, power disparities are a root cause of this crisis; effectively decarbonizing our economy and investing in infrastructure that is hardened to withstand the unavoidable impacts of climate change will require confronting these same power disparities. Among our governing institutions, the administrative state is best equipped to meet this kind of challenge under these kinds of circumstances.

Achieving the full democratic potential of the administrative state will require some important reforms, however. As the primary law governing the operations of the administrative state, the Administrative Procedure Act (APA) establishes many of the mechanisms that agencies use for democratic engagement. For informal rulemaking, which has become a leading vehicle for administrative policymaking, the APA creates the notice-and-comment procedures. The benefit of decades of experience has revealed that these procedures not only fail to meaningfully engage large segments of the population; they also reinforce status quo inaction and the underlying power disparities that benefit from such inaction.

This white paper argues that what is needed instead is an approach to public engagement that distinguishes between the different kinds of stakeholders implicated by a given policy action and accounts for the underlying power disparities that define their relationships to the policy problem that the action is intended to solve. As discussed below, the passive, power-agnostic posture of notice and comment fails to accomplish either of these objectives. Among other things, a people-centered, power-conscious approach would require carefully cataloging the universe of relevant stakeholders, the barriers they face to meaningful engagement, and the kind of input those individuals would likely bring to inform a policy decision.

A people-centered, power-conscious regulatory democracy would also require deliberate attention to how best to obtain public input. It would demand that agencies have a ready toolbox of engagement tactics and solutions tailored to effectively obtain different kinds of input from different kinds of stakeholders. It would also require agencies to plot out in advance the different stages in their rule development process for deploying those tactics and solutions – a concept this white paper refers to as sequential participation.

This approach would depend for its success upon a sincere commitment to transparency and reciprocity with stakeholders. Agencies would need to communicate continually with stakeholders and be completely forthright in those communications, even when the news is not what stakeholders want to hear. They would also need to carefully document how public engagement was conducted throughout the rulemaking process and the role it played, if any, in the progression of decision-making at each stage of that process.

Lastly, and perhaps most controversially, this approach should draw on the agonistic model of democracy. Practically speaking, that means the goal of regulatory democracy should be to surface and channel productive disagreement, rather than embark on a quixotic search for consensus or near-consensus on controversial policy matters. As explained below, powerful interests have used consensus-based approaches to decision-making as a kind of veto-gate to defend their preference for the status quo. Reorienting our expectations for regulatory democracy in this manner will thus permit meaningful engagement without unduly sacrificing timely policy implementation – a concern that has achieved greater prominence due to the Abundance movement.

These lessons could, of course, just as readily apply to reforming state- and local-level administrative procedures as well. Indeed, subnational governments are already playing a pivotal role in addressing the climate crisis, particularly while steadfast Republican obstruction has left Congress incapacitated on this issue. The state rulemaking procedures or public utility commission proceedings that are responsible for implementing these policies could be strengthened through a people-centered, power-conscious regulatory democracy program. For simplicity, however, this white paper will focus on developing a version of this approach that applies to the federal rulemaking process.

While there may be several different methods for institutionalizing a people-centered, power-conscious regulatory democracy, this white paper proposes the use of what it calls Public Participation Planning. Under this reform, agencies would develop tailored plans called public engagement strategy blueprints. The purpose of these blueprints is to ensure meaningful engagement by relevant stakeholder groups – especially those representing communities that are structurally marginalized or that historically have been excluded from democratic processes. Critically, these blueprints would account for the entire rulemaking process with the aim of proactively engaging particular stakeholders at stages where their input is most likely to be relevant and useful. This proposal would also call on agencies to document their outreach and engagement actions and any impacts they had on the proposed and final rule in a special report called a Public Participation Planning Statement, which would be made part of the rulemaking record.

The practical advantage of Public Participation Planning is that it could be instituted by a president with existing legal authority. Still, the proposal also outlines how the other federal governing institutions – including Congress and the judiciary – can help ensure that the benefits of public participation plans achieve their full potential. One important task for the coordinate branches would be to address whether and to what extent existing administrative law doctrines, such as Vermont Yankee, present barriers to achieving the full potential of Public Participation Planning for advancing regulatory democracy. It is also worth emphasizing that parallel efforts to reinvigorate the most important democratic institution in our constitutional framework – Congress – will also be necessary to realize the full potential of regulatory democracy. After all, the administrative state can only implement the laws that Congress passes. It will thus be more effective for the administrative state to leverage regulatory democracy to tackle something like the climate crisis if Congress were to pass legislation explicitly directed at that issue.

If properly implemented, a comprehensive reform program to accomplish regulatory democracy that is people-centered and power-conscious could be essential for addressing complex policy changes such as the climate challenge. As Public Participation Planning demonstrates, this approach would both improve the quality of agency decision-making and permit expeditious policy implementation.

Background: Regulatory Democracy at a Crossroads

Regulatory democracy, represented not just by the APA’s notice-and-comment procedures but also by the National Environmental Policy Act’s analytical requirements and their state- and local-level analogs, has come under increasing criticism from several directions in recent decades. Concerns that the regulatory system is undermined by too much public participation stretch back to at least the Obama administration. At the behest of Cass Sunstein, then the Administrator of the Office of Information and Regulatory Affairs (OIRA), agencies during this era strongly embraced cost-benefit analysis and other technocratic decision-making tools as an apparent antidote to the irrationality and “mistakes” of ordinary lay people. Under this approach, public participation was to be viewed with extreme skepticism, if not outright hostility, and thus minimized as much as possible.

More recently, the Abundance movement has singled out for criticism many of the existing public participation requirements in administrative law, arguing that powerful entities often abuse such requirements to delay policies they oppose. Conservatives have also begun to reject public participation, with President Trump directing agencies to evade notice-and-comment procedures whenever possible during his second term. This move seems in keeping with his overall crusade to centralize administrative power in the White House and build something akin to an authoritarian administrative state.

At the same time, we have seen a growing movement among policymakers and advocates focused on expanding public participation opportunities. Notably, as part of his administration’s larger Modernizing Regulatory Review project, President Biden issued a memorandum providing agencies with guidance on how to strengthen public engagement in the rulemaking process – with a particular focus on marginalized communities. Among other things, this memo encouraged agencies to deploy various strategies for engaging members of these communities at the earliest stages of the rulemaking process. Separately, a group of progressive members of Congress have promoted a comprehensive regulatory reform bill called the EXPERTS Act. One of its provisions would create the Office of Public Advocate, which would be charged with supporting individuals and other underrepresented groups in the notice-and-comment process.

What these competing movements reveal is that almost no one is satisfied with the current approaches to regulatory democracy. This dissatisfaction, in turn, arises from both practical flaws and theoretical disagreements associated with these current approaches.

Practical Flaws of Regulatory Democracy

Due to poor design, the prevailing approaches to regulatory democracy generally fail to effectively engage most members of the public in the administrative state’s critical tasks of decision-making and implementation. Worse still, these design flaws can combine in ways that systematically exclude members of structurally marginalized communities, thereby reinforcing the very power disparities that are often at the root of the policy problems regulations are designed to address.

Many of these approaches follow a rigid, one-size-fits-all design that prevents their implementation from being adapted to the unique demands that arise in different policymaking contexts. This can be seen in the APA’s informal rulemaking context: the same basic notice-and-comment process applies to policies as varied as setting Medicaid reimbursement rates and regulating the use of non-compete clauses in employment contracts. Yet each of these policymaking contexts involves very different kinds of stakeholders whose relationships are characterized by different power structures. Effectively obtaining input from these different stakeholders is likely to require different, tailored engagement tactics.

To be sure, the procedural requirements setting out public participation mechanisms establish only a legal floor; a president generally has the authority under Article II of the Constitution to go above and beyond by adding new public participation strategies aimed at alleviating the flaws of the mandatory approaches. Indeed, as noted above, the Biden administration undertook some steps along these lines. In general, though, presidents are unlikely to undertake such steps without a clear strategy for doing so, given budget constraints and the growing criticism of public participation in the regulatory system noted above.

One important adjustment a future president could make to render regulatory democracy opportunities more inclusive for structurally marginalized communities is to introduce those opportunities earlier in the policy development process. Presently, most public engagement occurs relatively late in policy development, as the APA notice-and-comment process illustrates. By this point, many of the foundational decisions leading up to the regulatory proposal have been resolved, including problem definition and solution scoping (not to mention the decision to prioritize the rulemaking at all). The remaining issues left open for public input are the kind of esoteric or technical details that are typically well beyond the knowledge or expertise of ordinary people.

Put differently, the manner in which regulatory democracy is currently conducted fails to account for the “sequential logic” of the policymaking process, a logic that necessarily draws on different kinds of expertise at different steps. An agency’s authorizing legislation, of course, sets the key parameters for what regulatory actions an agency might undertake and how it might design those actions. From there, though, important decisions remain, such as which “public problems” are worthy of priority attention and how best to begin scoping policy solutions to meet them. These earliest stages of policy development call for a more democratic form of expertise that finds its source in stakeholders’ lived experience and situated knowledge. This kind of input might, for instance, spur the EPA to prioritize tackling pollution from industrialized agriculture, or determine how stringently the Consumer Financial Protection Bureau regulates the use of forced arbitration clauses in consumer contracts.

In contrast, the more conventional understanding of technocratic expertise tends to become more relevant at later stages, such as when decision-makers must refine policies to account for complex questions of applicable legal constraints, economic factors, the state of technology, or toxicological mechanisms. These issues might include what control equipment should be required to limit emissions of a toxic air pollutant, or the potential energy use impacts of strengthened appliance efficiency standards. Not incidentally, John Dewey, the social philosopher and early intellectual force behind the modern administrative state, memorably captured the ordinal nature of the policymaking process with his observation that “the man who wears the shoe knows best that it pinches and where it pinches, even if the expert shoemaker is the best judge of how the trouble is to be remedied.”

The upshot of this failure is that regulatory democracy as currently practiced privileges the kind of technocratic input that well-resourced interests committed to maintaining the status quo are uniquely well positioned to provide. Those who lack access to this expertise, including the resources and training to obtain it, are effectively prevented from meaningful participation.

Another important adjustment that agencies could make to improve public engagement is to conduct more affirmative outreach to specific stakeholders. Instead, current regulatory democracy approaches adhere to an “open door” model by which agencies invite input on equal terms from all interested stakeholders and then passively wait to receive whatever input is provided. The notice-and-comment procedures, of course, best illustrate this model, and it has been replicated in other regulatory forums as well, such as the “lobbying meetings” during OIRA’s centralized regulatory review process. At best, this model favors stakeholders with the resources to monitor and answer open door invitations for participation. In contrast, individuals and smaller community-based organizations are unlikely to consult the Federal Register on a daily basis or to have the technical capacity to parse a large rulemaking proposal to identify whether and how it implicates their unique interests.

In the worst cases, entrenched interests can abuse the open-door model by leveraging their vastly superior resources to submit excessively voluminous comments containing information of only marginal utility or relevance, a practice known as “packing the record.” As legal scholar Wendy Wagner has noted, it is not uncommon for industry interests to submit hundreds of pages of highly technical comments, resulting in rulemaking records that run more than 10,000 pages. These stakeholders engage in this kind of gamesmanship because they treat the notice-and-comment process more as a prelude to litigation over the final rule than as a good-faith attempt to improve the rule’s quality. Significantly, from a power perspective, this record-packing scheme only works if the party engaging in it can back it up with a credible threat of litigation, something individuals and community-based organizations are rarely able to do.

In effect, this gamesmanship has enabled powerful interests to install the courts as the primary locus of regulatory decision-making, on the apparent presumption that judges will afford a more sympathetic audience for their policy arguments – usually in favor of maintaining the status quo – than they would find at the agencies. This tactic has the additional benefit of contributing to regulatory ossification, as agencies seek to “bulletproof” their rules as much as possible to avoid adverse results on subsequent judicial review. The advent of artificial intelligence, which sharply lowers the cost of generating voluminous technical comments, suggests that this problem could grow even worse in the future.

Significantly, various studies on the practice of regulatory democracy have documented the vast quantitative and qualitative disparities in participation that exist between unaffiliated individuals and organized interest groups. Empirical research on the APA’s notice-and-comment process in particular seems to confirm that the design of those procedures has the effect of systematically excluding most members of the public, particularly those from structurally marginalized communities. These results suggest that the notice-and-comment process is failing at its purported task of assembling something approximating comprehensive policy-relevant information for agency decision-makers, systematically depriving them of critical forms of “expertise” that tend to be in the exclusive possession of the individuals and communities who live closest to the problems the policies are meant to solve. Decision-makers are left to fix shoes without knowing where exactly they pinch their wearers.

The results of these studies also suggest that these kinds of shortcomings in the notice-and-comment process do not merely limit effective engagement; they also exacerbate underlying power disparities. That is because the skewed perspective that agency decision-makers obtain through these procedures necessarily favors entrenched sites of political or economic power. In terms of substantive results, the public input obtained through the notice-and-comment process often translates into a strong status quo bias toward inaction – or, at best, toward actions that only minimally inconvenience empowered stakeholders. Thus, for example, the public comments for a rulemaking to address the climate crisis are likely to be dominated by fossil fuel interests – as opposed to those who are disproportionately harmed. To the extent that this input gives agencies a skewed picture of the harms that significant disruption to our climate systems will create, it risks militating against the kind of aggressive climate policies needed to avert those harms.

Lack of a Coherent Theoretical Basis for Regulatory Democracy

One of the major sticking points is that students of the administrative state have never reached anything like universal agreement on why regulatory democracy is important in the first place. The business community and conservatives have questioned the legitimacy of the modern administrative state within the U.S. tripartite constitutional framework at least since the advent of the Great Society programs. In response, scholars have invoked a variety of democratic theories to salvage the constitutional legitimacy of the administrative state. (Though, for the most ardent conservative critics, such as Philip Hamburger, no theory is likely to suffice.) In turn, the lack of a clear theoretical basis has hampered efforts to design effective public participation mechanisms – contributing to many of the practical flaws described above.

According to the pluralist model, the administrative state’s democratic features provide a forum in which competing interests can shape policy. As long as the forum provides a reasonably fair opportunity for engagement for all interested stakeholders, this theory assumes that the substantive results that emerge will roughly approximate the common good.

Another theory is the civic republican model. Unlike the pluralist model, which holds that the administrative state becomes imbued with democratic legitimacy through the balancing of competing interests, civic republicanism starts from the position that some idealized notion of the common good exists independently of the administrative state. Revealing this common good – which, presumably, all stakeholders would recognize as worthy of their consent – can only be achieved through a process of careful reason-giving and deliberation. The administrative state’s democratic legitimacy thus hinges on its ability to enable such a process to take place.

The influence of these competing schools of thought can be seen in institutional and legal reforms over the last several decades. For instance, the Regulatory Flexibility Act’s procedural requirements aimed at ensuring that regulators account for small business concerns reflect the pluralist model, while the embrace of cost-benefit analysis sounds more in the key of civic republicanism. Regardless of their theoretical grounding, both schools have tended to promote the addition of new procedural embellishments to the existing notice-and-comment framework that produce suboptimal results. Specifically, they slow down the rulemaking process without improving the quality of regulatory decision-making, and they have reinforced power inequality by giving entrenched interests new tools for blocking or weakening policies they oppose – that is, for maintaining a status quo bias toward inaction. These consequences seem to be artifacts of a characteristic that pluralism and civic republicanism share: an abiding belief that consensus can be achieved and should be the goal of regulatory decision-making.

A Third Theory of Regulatory Democracy: Agonism

Yet this shared belief has come under increasing criticism in recent years, leading scholars to entertain a third theory of regulatory democracy: agonism. This model begins from the premise that consensus is impossible to achieve in many policy contexts, particularly in times like the present, marked by polarized discord and seemingly incommensurable divisions over values and worldviews. If conflict is inevitable, agonism posits, then the appropriate function of our democratic institutions is to channel that conflict so that it is as productive as possible. Under this view, the legitimacy of policy outcomes comes not from how disputes are ultimately resolved – many stakeholders will never accept those resolutions as legitimate – but from affording dissatisfied stakeholders an ongoing, realistic opportunity to contest and displace those outcomes with ones that more closely align with their preferences. Administrative law already contains agonistic features. Adherents of this model envision other institutional and legal reforms that would infuse a more agonistic orientation into regulatory democracy. For instance, they would require more frequent use of retrospective review for regulations and greater use of adjudication in place of rulemaking for policymaking where possible.

One of the practical benefits of redesigning regulatory democracy mechanisms along agonistic lines is that it could help prevent some of the abusive gamesmanship in the notice-and-comment process described above. By demanding consensus, the prevailing theories of pluralism and civic republicanism create perverse incentives for entrenched interests to manipulate public participation mechanisms in order to prevent consensus from ever being achieved. For instance, such interests might seek to delay final action on a rule by packing the rulemaking record with the intent to manufacture uncertainty or continually raise new issues – rather than to productively inform a regulatory decision. In this way, public participation becomes a tool for translating power into paralysis. This concern is well worth taking seriously given that, as adherents of Abundance liberalism have noted, such paralysis can be exploited by would-be authoritarians to support an agenda of democratic backsliding.

Modernizing Regulatory Democracy Through Public Participation Planning

In light of these challenges, it is time to consider not just incremental changes, but a fundamental rethink of how public engagement is conducted within the administrative state. In contrast to current approaches, effective regulatory democracy must be both people-centered and power-conscious.

This paper will concentrate on applying this reform framework to the rulemaking process, though it could also be applied to other aspects of the federal administrative state that involve public participation, such as NEPA reviews and permitting. Likewise, it could be applied to state-level administrative analogs. While there might be several ways to build a people-centered, power-conscious rulemaking process, this paper outlines what it refers to as Public Participation Planning. This would involve agencies:

  1. developing and executing a strategy that is
  2. tailored to each of their planned regulatory actions in order to
  3. engage relevant stakeholders throughout each step of the rulemaking process
  4. with the explicit purpose of building a reasonably comprehensive account of the public’s views on the action for the rulemaking record.

Public Participation Planning earns its viability as a mechanism for achieving truly meaningful engagement through its embrace of four cross-cutting principles: proportionality, transparency, communication, and pragmatic learning. First, the rigor of a particular plan should be roughly proportional to the significance of the rule under development. Second, strenuous adherence to transparency is essential for achieving the agonistic goal of productive disagreement. In particular, agencies must always be completely forthright with all stakeholders about how decisions were reached and what evidence and arguments proved determinative. Third, and related to transparency, agencies should strive to maintain open lines of communication with stakeholders throughout the entire rulemaking process. This will enable agencies to serve as effective mediators of productive disagreement through the rule’s development and beyond.

Fourth, it requires agencies to commit to an ethic of pragmatic learning. Implementing Public Participation Planning is not as simple as plugging a few numbers into an equation and expecting an optimal result to emerge; it is impossible to predict ex ante what will work in any given situation. Instead, to make the most of Public Participation Planning, agencies will need to become adept at building and rebuilding the proverbial plane, even as they are flying it. Moreover, what has worked in the past will have to be continually reassessed in light of underlying power inequities. If history is any guide, powerful incumbents will eventually devise ways to use their resource advantage to corrupt even the best public participation mechanisms for engaging structurally marginalized communities.

As discussed in greater detail below, one of the obvious objections to Public Participation Planning is that its apparent emphasis on proceduralism will exhaust scarce agency resources and contribute to excessive delays in a rulemaking process that has already become too bogged down to permit effective and timely policy implementation. These four cross-cutting principles, however, are intended to alleviate those concerns – adherence to them will help strike the needed balance between public engagement and expeditious policymaking.

Putting Public Participation Planning into Action

Public Engagement Strategy Blueprints

One of the distinguishing features of Public Participation Planning is the requirement that agencies assemble, at the time a planned regulatory action is initiated, a public engagement strategy blueprint tailored to the unique circumstances of the rule. This step provides agencies with a mechanism to anticipate potential challenges and to think through solutions calculated to enable them to build a reasonably comprehensive record of stakeholders’ views on the rule.

It is worth emphasizing at the outset that developing a public engagement strategy blueprint need not be a resource-intensive and time-consuming task. As noted above, blueprints should be tailored to match the significance and controversy level of the rule. In addition, agencies will learn by doing, resulting in increased efficiencies over time. For instance, analyses used for past rules will often be readily usable for future rules addressing similar subjects. Another crucial source of efficiencies will be agencies’ use of existing institutional resources to perform these tasks. Most rulemaking agencies already have public engagement offices of various kinds, along with regional and local offices, that can and should be tapped. Public affairs offices can also be brought into the rule development process earlier, rather than announcing decisions – such as proposals and final rules – after the fact. Lastly, agencies may build new institutions that increase efficiencies for implementing public engagement strategy blueprints. For instance, agencies might create a standing process for conducting periodic town halls or other listening sessions. This relatively modest investment could in turn yield significant value for informing the development of public engagement strategy blueprints for future rulemakings covering a wide variety of issues.

More broadly, as explained in greater detail below, the implementation of all aspects of Public Participation Planning, including the public engagement strategy blueprints, should be thought of as an investment. There is no denying they will involve the dedication of resources and time – mostly at the front end of the rulemaking process. But, by directly addressing power disparities and by more constructively channeling irreconcilable disagreements over the competing values implicated by a given rulemaking, Public Participation Planning will mitigate the sources of delay that crop up later in the regulatory process, including, most notably, litigation and the ossification problem it creates. And to be perfectly frank, much of this work involves things agencies should be doing anyway; the failure to do them often helps to explain why many agency rules have fallen short of accomplishing their stated goals. In short, more “process” at the beginning will lead to less process and less delay overall.

Step 1: Stakeholder Mapping

The first step in assembling a public engagement strategy blueprint is to conduct a thorough and deliberate stakeholder mapping exercise. By the time an agency has completed this exercise, it should be in a position to identify the relevant range of stakeholders for a given rulemaking and the role or roles each might be expected to play in the rulemaking process. For each category of stakeholders, agencies should also perform a general capacity assessment. Specifically, they should seek to answer such questions as what kind of input or expertise members of a given stakeholder group are likely to bring, whether and how those individuals have participated in similar rulemaking processes in the past, and what barriers might prevent them from participating effectively in the current one.

Step 2: Examine Power Disparities and Structural Injustice

The second step is to assess the role, if any, that underlying power disparities or other forms of structural injustice (e.g., racism or patriarchy) play in contributing to the problem that the regulation is meant to solve. To be sure, agencies should be performing such assessments anyway, since a failure to do so could yield policy responses that fail outright or produce unintended perverse effects, including making the underlying problem worse. With this background in place, the agency should then give careful consideration to how stakeholders identified through the mapping exercise might help it better understand these underlying power disparities.

Step 3: Stakeholder Engagement to Inform Rulemaking

The third step is to use the lessons from the stakeholder mapping exercise and the power disparities assessment to construct a strategy for stakeholder engagement that informs the development of the rulemaking proposal before it is formally published. Most agencies already have an established “action development process” they follow for drafting proposals and building a supporting evidentiary record. Agencies can build off the procedural framework that process creates when designing this engagement strategy. They can ask which types of stakeholders are likely to have input that would help with the successful completion of each stage of the policy development process, and what specific engagement tactics would likely be most successful in obtaining that input at a reasonable cost in time and resources. For example, agencies might use more time-consuming and resource-intensive focus groups for particularly weighty matters such as scoping out alternative regulatory designs. In contrast, they might employ informal remote public hearings to quickly gather ideas for sourcing certain kinds of evidence related to the rulemaking. (A bonus of this process is that it might reveal ways in which an agency’s policy development process could be strengthened, by adding, removing, or combining steps, or by altering their order.)

In preparation for this step, agencies will likely also want to have created a general library or “menu” of engagement tactics, with a brief assessment of their strengths and weaknesses. This will enable agency staff to quickly pull tactics “off the shelf” and insert them into the individual public engagement strategy blueprint. Indeed, this is an example of how moving along the learning curve will help agencies to implement Public Participation Planning more quickly and at reduced cost.

The capacity assessment performed during the stakeholder mapping exercise will be especially important for successfully implementing this step of plan development. For instance, if that exercise revealed that an important group of stakeholders is unlikely to have reliable access to high-speed internet, then the agency should refrain from relying on something like a remote public hearing to obtain input from those stakeholders. This assessment will also help agencies to identify affirmative steps they can take to eliminate barriers to public participation. For example, agencies can provide translation services if a large number of crucial stakeholders do not speak English as a first language. At the same time, the capacity assessment might reveal that a particular stakeholder group exercises an unusually high degree of dominance over a particular issue. In such cases, the agency may find it appropriate to impose certain constraints on that group’s participation during the pre-proposal period. These might include limiting or barring ex parte contacts or placing reasonable page limits on documentary submissions. (Such actions will also have the advantage of expediting the rulemaking process by preventing well-resourced interests from abusing these contacts as a means of delay.)

Step 4: Identify Mechanisms to Include Marginalized Communities, Including Storytelling

The fourth step in building a public engagement strategy blueprint is to identify mechanisms for ensuring that stakeholders from structurally marginalized communities are able to participate in the notice-and-comment process as effectively as possible. As noted above, the notice-and-comment procedures often systematically exclude such individuals. While agencies cannot completely obviate this dynamic, they should still strive to sand off its worst effects – especially as these procedures are likely to remain part of the rulemaking process for the foreseeable future. Somewhat regrettably, the general effect of these auxiliary mechanisms would be to train ordinary individuals to behave more like sophisticated lobbyists than like their true, authentic selves. This means providing various kinds of educational resources and specialized training to individuals so that their input can fit the “technocratic” mold, much as the Federal Energy Regulatory Commission’s (FERC) Office of Public Participation (OPP) does now. It might also involve creating institutional mechanisms to serve as representatives or ombudsmen on behalf of unaffiliated individuals, though this would likely require significant additional resources and perhaps even legislative change to effectuate.

A more radical option would be to undertake institutional reforms that make notice-and-comment procedures more amenable to obtaining and utilizing non-technocratic forms of input, such as storytelling. This approach has the advantage of permitting individuals to share their more authentic expertise – including their situated knowledge and lived experiences – though such input may be of limited relevance at this later stage in the rulemaking process. The degree of institutional reforms required to fully realize this procedure – ranging from changes in agency hiring practices to modifications of administrative law doctrines to recognize these different kinds of “expertise” – makes it unlikely that this approach will bear fruit any time soon.

Step 5: A Plan for Public Engagement After Rulemaking

The fifth and final step in building a public engagement strategy blueprint is to create a plan for how the public might remain engaged after the rule is finalized – that is, to identify opportunities, if relevant and possible, for the public to participate in the rule’s implementation and ensure those are reflected in the rule’s final design. Examples of such engagement include the public’s role in monitoring compliance, measuring the rule’s impacts through citizen science activities, and holding regulated entities accountable for violations of the rule’s requirements through citizen suits when legally available. The final rule may also seek to explicitly incorporate opportunities for the public to participate in any future retrospective review actions for the rule, though Congress will need to ensure agencies receive sufficient budgetary resources to carry out such reviews. Similarly, many statutes authorize agencies to grant individual businesses different kinds of compliance relief, such as deadline extensions, variances, waivers, and exceptions. The final rule could provide the public with a meaningful role in considering and awarding these grants of relief.

As noted above, the rigor and detail of the blueprint should be roughly proportional to the rule’s economic and social consequences, as well as to the level of controversy it is anticipated to engender. As with other aspects of implementing Public Participation Planning, accomplishing this proportionality goal will improve with practical experience.

Resource constraints and political pressure for expeditious policy implementation are likely to create strong incentives for agencies to fall short of the desirable level of rigor and detail. Consequently, countervailing incentive structures will be necessary to offset that tendency. Perhaps that could be accomplished through well-designed judicial review standards, as noted below. Political leadership – including from the White House and agency appointees – could also signal the importance of careful implementation of Public Participation Planning. For instance, this could be institutionalized through agency strategic planning exercises or encouraged as part of performance review and promotion decisions for career staff. Of course, Congress can do its part by fully funding agency implementation. And over time, as agencies advance along the learning curve, they will achieve efficiencies that alleviate some of the incentives to do insufficiently rigorous Public Participation Planning.

Strategy Blueprint Implementation, Tracking, and Public Participation Plan Statements

As indicated above, each public engagement strategy blueprint that an agency develops should focus on creating meaningful participation opportunities for members of structurally marginalized communities early in the pre-proposal process, since that is when their input is likely to be of greatest relevance and utility for agency decision-makers. Rather than being a mere “check-the-box” exercise, the execution of these early participation mechanisms (informal hearings, focus groups, etc.) should have a discernible impact on the structure and substance of the proposal. Thus, as agencies turn to implementing these mechanisms, they should carefully track whether and to what extent the public engagement strategy blueprint is accomplishing what they expected it would.

This, of course, is not to say that the agency should use these engagement activities to build evidence for decisions that were already made by other means – much as occurs with cost-benefit analysis now. Rather, it means that agencies should base their monitoring on other, more objective benchmarks. One question agencies should ask is whether the quantity of participants matches predicted expectations. (Again, agencies will likely struggle at first to make these kinds of predictions with much accuracy – there will be a learning curve. But, as noted above, a crucial ethic of Public Participation Planning is a commitment to learning by doing.) Similarly, agencies should find that the input they are receiving through these early engagement mechanisms is providing answers to the questions they need to answer to develop the proposal – whatever those answers happen to be. Another good indicator that the early engagement mechanisms are working well is that they are uncovering important “unknown unknowns” – things that the agency did not realize it did not know when it launched the rulemaking.

If, on the other hand, agencies find that the early engagement mechanisms are not working as expected – that they are not helping to build a reasonably complete record of public input on the important policy-relevant questions undergirding the proposed rule – then they should adjust the engagement strategy. This goal does not mean agencies should strive for something akin to a comprehensive accounting of all relevant views from the impacted public. Instead, the goal should be to obtain a reasonably representative level of input from each of the major stakeholders included in the agency’s initial mapping exercise. What constitutes a reasonable level of input will necessarily be a subjective determination, and one that agencies will improve on as they learn through implementation of the Public Participation Planning scheme over time. In making this determination, though, agencies will want to be especially attentive to whether they have adequately engaged members of stakeholder groups they initially identified as structurally marginalized or as facing particularly high barriers to participation. When in doubt, an agency may wish to attempt other forms of engagement for these groups. As in other forms of research, input that begins to sound repetitive is a good indication that a reasonable stopping point has been reached.

Drawing on lessons learned from actual practice, agencies may want to consider employing different forms of affirmative outreach to targeted stakeholder groups, undertaking alternative engagement tactics, or finding other creative ways to minimize barriers that might be preventing effective engagement. For example, if an important stakeholder category is young families, then the agency may consider securing resources to provide childcare during in-person hearings. To be sure, agencies may encounter legal constraints that prevent them from instituting strategies like this. One suspects that these constraints are not as significant as feared, however, and that agency counsel have been overly cautious in interpreting them. Nevertheless, clarifying legal authority from Congress on these matters would be welcome.

As it carries out the specific components of its public engagement strategy blueprint, the agency should begin assembling a comprehensive Initial Public Participation Plan Statement, which documents its outreach and engagement activities, carefully summarizes the input received through each component, and briefly explains what impact, if any, that input had on the agency’s proposal. Consistent with the principles of transparency and communication noted above, it is particularly important that the agency use this document to identify instances when a stakeholder’s input did not influence a particular outcome and to explain why that was the case.

Explaining the Democratic Basis for a Rule

The agency should include the completed Initial Public Participation Plan Statement in the rulemaking docket when the rule proposal is formally published so that it is available to members of the public as they develop their comments. In this way, the Initial Public Participation Plan Statement will function similarly to an Initial Regulatory Impact Analysis (i.e., the initial cost-benefit analysis), only it explains the “democratic” basis for the rule instead of its “economic” basis. Ideally, as the Initial Public Participation Plan Statement becomes more institutionalized, it can even replace the Initial Regulatory Impact Analysis as the most prominent supporting document for a proposal. This would conserve agency resources and symbolize that democracy has replaced technocracy as the key driver of regulatory decision-making.

After the proposal is published, agencies should likewise carefully monitor the implementation of any components of the public engagement strategy blueprint that support public participation while the comment period is open. Again, they should make appropriate adjustments whenever they discover that these mechanisms are not producing the expected or helpful results. During this period, agencies should continue documenting their progress by updating the Initial Public Participation Plan Statement.

In conjunction with releasing the final rule, agencies should then include in the rulemaking docket a Final Public Participation Plan Statement. (Again, this final statement would be the democratic analog to the Final Regulatory Impact Analysis.) This document should describe the public engagement strategy blueprint as originally created, any changes made during the rulemaking process, what input was received through the agency’s engagement mechanisms, and what impact that input had on the proposed and final rules, if any. Again, agencies should be forthright in identifying the input that did not affect the rule and briefly explaining why.

Lastly, after the final rule has been published, agencies should dedicate resources and time to reflecting on lessons learned from implementing the public engagement strategy blueprint. They should be prepared to incorporate these lessons into the design and implementation of future blueprints, leading to Public Participation Planning that is more effective, less expensive, and quicker. Agencies will also need to be prepared to track the implementation of any implementation-phase public participation mechanisms incorporated into the final rule’s design. As noted above, these mechanisms might relate to compliance monitoring and enforcement, retrospective review, and grants of compliance relief.

Advantages of Public Participation Planning

Public Participation Planning stands in stark contrast to the largely one-size-fits-all approach to public engagement – basic notice-and-comment procedures with occasional public hearings – that characterizes the current rulemaking process. As noted above, essentially no deliberation goes into the creation of that engagement strategy; it is effectively reflexive. Nor does it recognize, much less attempt to address, the realistic concerns that important categories of stakeholders may not be accounted for or that such incomplete input risks aggravating the very power disparities and social inequities that gave rise to the problem the rule is meant to address in the first place.

In addition, successful implementation of Public Participation Planning will promote better regulatory democracy in the following ways. First, it will provide agencies with a mechanism for systematically identifying all the relevant stakeholders for a given policy, particularly members of communities who might otherwise be systematically excluded from such decision-making processes by structural or other barriers. Second, it will ensure that input is elicited from these stakeholders consistent with the sequential logic of the rulemaking process, providing agency decision-makers with the information they need when it is most useful.

Third, it will empower agencies to tailor their outreach and engagement strategies to the unique policymaking context implicated by the rule under development. Fourth, it will enable agencies to use public engagement to surface and account for any underlying power disparities that contribute to the policy problems a rule is meant to address, leading to more effective and durable policies. Fifth, it will highlight productive disagreement among stakeholders rather than engage in a quixotic pursuit of consensus – that is, it seeks to move regulatory democracy in a more agonistic direction. This is essential to recalibrate public engagement so that it is more attentive to power disparities and to avoid being a source of excessive delay in the policy development process.

How Other Federal Institutions Can Support the Successful Implementation of Public Participation Planning

The White House

With the advent of presidential administration under Reagan, the White House has played an increasingly active role in coordinating and steering the actions of the administrative state. The White House would thus be well positioned to support the effective implementation of Public Participation Planning. Indeed, as noted above, the Biden administration took some important initial steps toward strengthening public participation in the rulemaking process as part of its broader Modernizing Regulatory Review initiative.

A logical place to start would be for staff at the White House Office of Management and Budget (OMB) to produce a comprehensive list of public outreach and engagement tactics for agencies to use to inform their own public engagement strategy blueprints. They could create this list by surveying the relevant academic literature, reviewing agencies’ past experiments with innovative approaches, and even looking at examples offered by peer democratic states abroad.

To support ongoing agency learning, OMB could also convene a standing working group composed of representatives from the public engagement offices at the various agencies. This working group could provide a forum in which these offices regularly share their best practices and lessons learned. Just as significantly, by signaling that public engagement is a priority of administration leadership, the working group would also by its mere presence help to reinforce a broader ethic and commitment to democratic inclusiveness across the administrative state.

Inviting OMB support in the implementation of Public Participation Planning is not without risk, given its historic role of interfering with and unduly politicizing the rulemaking process. It would certainly be preferable if Congress created a new standalone office outside of the White House that is explicitly charged with these tasks, as suggested below. But short of that, OMB is institutionally best positioned to play this role – provided that it does so in a strictly auxiliary fashion, leaving individual agencies the ultimate discretion on how to implement Public Participation Planning. In addition, carrying out such an auxiliary role would be a far better use of OMB’s resources than its current practice of superintending agency decisionmaking through the centralized regulatory review process.

The implementation of Public Participation Planning would also benefit greatly from staff with different kinds of skillsets and life experiences. For instance, staff with backgrounds in social work or community organizing, or with specialized training in sociology, might be particularly valuable. The Office of Personnel Management (OPM), the federal government’s main human resources agency, could be instrumental in helping agencies identify and hire such individuals. OPM could also help make necessary revisions to hiring standards and practices to make it easier and quicker to bring them on board.

Congress

Agencies have adequate legal authority to undertake Public Participation Planning. Still, Congress can ensure that even future administrations that might be hostile to the goals of regulatory democracy will implement this reform, even if reluctantly, by codifying this procedure into law through an amendment to the APA.

Similarly, implementation of Public Participation Planning likely would not require a significant commitment of agency resources – especially if agencies are able to redirect resources to it from other rulemaking requirements, such as cost-benefit analysis or the myriad energy-related analyses that agencies must conduct pursuant to various executive orders. Ideally, Public Participation Planning will also reduce the incidence of legal challenges against final rules, saving on direct litigation costs. With these reduced litigation risks, agencies may also find that they are no longer compelled to “bulletproof” their rules through elaborate rulemaking records and gargantuan preambles to their final rules. The resulting streamlining effect could yield significant cost savings for agencies over the long run.

Nevertheless, Congress should still commit adequate appropriations for agencies to launch this reform, especially while they are still overcoming the incremental additional costs required to move through the early stages of the learning curve. With increased experience and specialization, agencies will likely be able to implement Public Participation Planning in an increasingly cost-effective manner.

Congress can take other steps to affirmatively support Public Participation Planning. For instance, it can authorize and fully fund a new institution that affirmatively supports public participation in the notice-and-comment process. The EXPERTS Act, a comprehensive progressive regulatory reform bill now pending in Congress, offers one potential model. Specifically, it would create an Office of the Public Advocate charged with this responsibility.

In addition, Congress can tap the Administrative Conference of the United States (ACUS) – which is effectively the federal government’s in-house “think tank” on administrative law – to study existing administrative law doctrines that might present a barrier to effective implementation of Public Participation Planning (to the extent that the doctrines arise from statutory, as opposed to constitutional, law). Such doctrines might include Vermont Yankee’s bar on judicially created administrative procedures (which the codification of Public Participation Planning, recommended above, would obviously address), Loper Bright (which gives the judiciary, instead of agencies, the primary responsibility for interpreting agencies’ statutory authority), and the “logical outgrowth” test (which constrains how significantly a final rule’s substance can deviate from what is contained in the proposal). ACUS could develop recommendations for how agencies and reviewing courts can avoid running afoul of these doctrines or propose legislative fixes for Congress to adopt.

Lastly, it goes without saying that Public Participation Planning would benefit immeasurably from a functional Congress. According to a common conservative myth, an empowered Congress is necessary to restrain the administrative state. Just the opposite is true. By recommitting to doing the people’s business and passing public interest legislation again, Congress would provide agencies with fresh opportunities to put Public Participation Planning into action, armed with up-to-date legal authority to tackle the new and emergent problems that pose the greatest threat of harm to structurally marginalized populations.

The Judiciary

The most obvious way that the judiciary could support the implementation of Public Participation Planning is by devising new judicial review doctrines that reward rulemakings with exceptional democratic pedigrees with enhanced levels of deference. This, of course, would require conservative judges to apply an even hand to all regulations challenged on judicial review before them. That means they would have to deploy enhanced deference consistently and in the service of promoting regulatory democracy, rather than wield it as a weapon to justify striking down rules they oppose on policy grounds. Under present circumstances, one might be forgiven for doubting this will take place. Yet, since we cannot avoid this institution either, we must still do the best we can with our politicized judiciary until such time as we are able to accomplish significant judicial reform – a topic beyond the scope of this paper.

Such enhanced deference might have a role to play in assessing the statutory authority for the agency’s rule, even under the new Loper Bright review framework. For instance, reviewing courts might modify the application of Skidmore to apply a special degree of “respect” for agency interpretations that rest on public input received during the rulemaking process.

More likely, though, the influential weight of public input would be greatest during the review of agency policy decisions under the arbitrary-and-capricious standard. In theory, courts already employ a “super deference” for agencies’ determinations based upon science and other forms of technocratic expertise, though courts rarely follow this approach in practice. Courts could easily create (and actually follow) an analogous super-deference doctrine for agency determinations based on “democratic expertise.” The inclusion of the Final Public Participation Plan Statements in the rulemaking record, as outlined above, would provide the essential informational foundation for the application of such a doctrine. In developing this doctrine, courts would have to account for applicable doctrinal constraints, including, most notably, Vermont Yankee.

Short of that, though, much of the analysis involved in assessing Final Public Participation Plan Statements would fit comfortably within the “hard look review” that courts already perform as part of the arbitrary-and-capricious standard of review. In other words, implementation of Public Participation Planning would not create any insurmountable barriers for courts conducting judicial review pursuant to the APA.

Courts have long recognized that the APA’s arbitrary-and-capricious judicial review standard implies a general duty for agencies to build a sufficiently complete rulemaking record to enable such review. Contained within this broader duty is a more specific responsibility to have procedures or mechanisms in place for ensuring that the information before the agency meets some minimal level of quality. While this responsibility might traditionally be thought of as applying to more technocratic inputs, there is no reason why it should not apply equally to the unique on-the-ground expertise of individuals and community-based organizations. Similarly, the general duty also implies a more specific requirement that agencies ensure that the scope of the information available to them is sufficiently broad to permit the evidence-based, reasoned decision-making that arbitrary-and-capricious review requires. Again, this concern should apply equally to all forms of expertise, not just those regarded as technical or scientific in nature.

Importantly, this same judicial review standard would also guard against attempts by any president who is hostile to regulatory democracy to implement Public Participation Planning with insufficient rigor. Just as the Trump administration is now seeking to bypass the notice-and-comment process altogether, a future administration might reduce this process to a mere check-the-box exercise or conduct woefully inadequate outreach and consideration of input. The Final Public Participation Plan Statements would afford a reviewing court with a basis for applying the arbitrary-and-capricious standard to the agency’s public engagement efforts and, ultimately, to remand an agency rulemaking to correct this aspect of the record where any flaws or gaps are identified.

The notion of courts policing Public Participation Planning raises a separate concern that this aspect of arbitrary-and-capricious review could be abused by conservative activist judges who are opposed to climate policy or other aspects of the progressive policy agenda. The Supreme Court’s recent extreme application of the arbitrary-and-capricious standard in Ohio v. EPA confirms that this is not an idle concern. Still, it seems clear that activist judges will find plenty of opportunities for abusing arbitrary-and-capricious review even in the absence of Public Participation Planning. On balance, then, the benefits of this reform would still seem to outweigh these risks.

Public Participation Planning as Part of a Broader Agenda to Increase Administrative Effectiveness

The past year has seen the Abundance Liberal movement spark a robust debate within the broader liberal community over the appropriate role of legal procedure in our governing institutions. Under the circumstances, then, it may seem like an unusually inopportune time to champion something like Public Participation Planning – an essentially proceduralist reform. As explained below, though, this reform strives to take seriously Abundance’s critiques and is consciously predicated on the recognized imperative to strike an appropriate balance between, on the one hand, public engagement and, on the other, effective, responsive administrative action that delivers concrete results.

The Abundance Liberal movement, as best captured in the recent book by its most prominent advocates, Ezra Klein and Derek Thompson, argues that the Left’s reflexive embrace of proceduralism and litigation is an antiquated relic of a bygone era and is already becoming a political liability. That is because many of the progressive movement’s policy priorities – from addressing the climate crisis to promoting affordable housing – require quick policy implementation, a goal that is ultimately defeated by excessive proceduralism. Instead, the book argues, the Left should dispense with most procedures as a mechanism for legitimizing government action and instead let the popular results of those actions (e.g., affordable housing, cheap clean energy) serve that legitimizing function after the fact.

Significantly, Klein and Thompson’s book singles out public participation in the policymaking process as emblematic of the broader problem they are trying to solve. Indeed, their specific critiques of notice-and-comment procedures largely track those that have motivated the Public Participation Planning proposal, as detailed above. In particular, they correctly identify these procedures as excluding structurally marginalized communities and reinforcing broader power disparities in our society.

Where the Public Participation Planning proposal departs from Abundance adherents such as Klein and Thompson is its core claim that the problem is not procedure per se, but power disparities. More specifically, it posits that the dysfunctional procedures that cause excessive and unnecessary delays in policy implementation are better understood as a symptom of the deeper problem of power disparities in our society. After all, even if we were to remove all existing procedural requirements – that is, to take Abundance to its logical extreme – it is by no means clear that we would see more expeditious policy implementation, particularly where those policies go against the preferences of entrenched elites. Such interests would simply find other, non-procedural mechanisms for blocking policies they opposed.

Given that we are unlikely to eliminate the structural sources of power disparities in our society any time soon, it is worth exploring more practicable near-term mechanisms for alleviating the worst consequences of those power disparities. What Abundance potentially misses with its black-and-white diagnosis of the procedure problem is that procedure actually holds a lot of promise for accomplishing this goal. After all, if procedures can aggravate power disparities, as Abundance Liberals would freely stipulate, then it also follows that well-designed procedures can do the opposite as well. Legal scholar Nicholas Bagley, who has provided part of the intellectual foundation for Abundance, highlighted this intrinsic feature of procedure when he wrote: “government action — whether it involves dispensing public benefits or regulating private conduct — allocates resources, risk, and power within the United States.”

To put Bagley’s point differently, procedure can never be neutral in its effects on underlying power dynamics; it will tend to cut in favor of one set of stakeholders or another. Taking this reality seriously means that policymakers have a lot of tools at their disposal to shape and reshape those power dynamics in more productive ways through carefully designed procedures. More to the point, Abundance overlooks the tantalizing possibility that, by effectively redistributing power within the administrative state, well-designed procedures can actually expedite policy implementation – or at least add value, such as improving the quality of decision-making, without causing new or undue delay. This is precisely the project that Public Participation Planning sets out to accomplish.

As described above, Public Participation Planning illustrates how procedures might be designed with the goal of affirmatively redistributing power from entrenched interests committed to maintaining a suboptimal, unjust status quo to members of structurally marginalized communities that are both underrepresented in our political processes and disproportionately burdened by the harms that arise from status quo economic, social, and political arrangements. Specifically, it seeks to carve out new spaces in the rulemaking process for bringing in the unique expertise of historically underrepresented populations at precisely the points in that process when their expertise is most germane. At the same time, it contemplates placing reasonable constraints on the participation opportunities available to entrenched powers such that their input is channeled to maximize its utility for agency decision-making. This would also have the additional benefit of preventing these interests from abusing their resource advantages to overwhelm agency decision-makers with extraneous information for the purpose of causing unnecessary delay.

Public Participation Planning’s commitment to transparency and ongoing communication with stakeholders plays an important contributing role in this power-shifting dynamic. For structurally marginalized stakeholders, it will be essential to know concretely how their input is materially shaping the policy and factual determinations that undergird agency decisions. Otherwise, it will be entirely rational for members of these communities to assume that their participation is little more than a check-the-box exercise, rather than a genuine effort at promoting regulatory democracy. Understandably, members of such communities will still harbor some degree of skepticism despite agencies’ good-faith commitment to transparency and communication. Trust will take time to build, but it can be built. Somewhat counterintuitively, one way that agencies can build it is by always honestly explaining to these stakeholders when their input did not substantively influence a particular decision – that is, by delivering the bad news as well as the good.

Equally as important, Public Participation Planning demonstrates how these kinds of investments in well-designed procedures can actually pay off in terms of more expeditious policy implementation. In other words, it helps to refute the commonly held belief – one clearly embraced by the Abundance Liberal movement – that public engagement and expeditious policy action are fundamentally at odds. Instead, these tradeoffs can effectively be mitigated when such public engagement procedures are designed to correct power disparities that are implicated by a given rulemaking.

The key to accomplishing this seeming procedural alchemy is by addressing the primary driver of rulemaking delay: litigation over final rules. (To be sure, Public Participation Planning would also endorse Abundance’s more general call to clear out unnecessary procedural requirements that have accreted over the years, such as cost-benefit analysis, the Regulatory Flexibility Act, and the Unfunded Mandates Reform Act, to name a few. Doing so would be entirely consistent with its analytical framework. These procedures tend to aggravate power disparities – often by design – and thus contribute to delays in the rulemaking process.) Litigation is time-consuming in its own right, often taking several years to reach a definitive conclusion. During this time, agencies are increasingly subject to court orders barring them from implementing the rule’s provisions.

In addition, as noted above, the near certainty of legal challenges creates strong incentives for agencies to “bulletproof” their rules to reduce the chances they will be struck down on judicial review. While some degree of deliberative and evidence-based rigor underlying rules is desirable, of course, the perverse result of this incentive structure is that agencies go far beyond what the law reasonably requires for substantiating rules, an increasingly resource-intensive undertaking that significantly delays the completion of new actions.

As noted above, many of Public Participation Planning’s distinguishing features are designed to reduce the incidence of litigation. Particularly significant in this regard is its emphasis on early engagement. Not only does engagement provide agencies with better and more timely input; it also has the additional benefit of promoting greater buy-in from stakeholders, which in turn defuses litigation risk down the road. In other words, frontloading public engagement seems to shorten the length of the rulemaking process even if agencies are technically conducting a greater number of procedural steps overall.

Several empirical studies of the implementation of the National Environmental Policy Act’s (NEPA) analytical requirements appear to confirm this effect, finding that NEPA processes that involve early public engagement are significantly shorter than those that do not. Reduced litigation and, by extension, reduced incentives for bulletproofing NEPA analyses seem to explain this discrepancy. (Not incidentally, NEPA-related delays are also a frequent target of criticism by the Abundance movement.) It is reasonable to expect that this dynamic would translate to the functionally similar rulemaking context as well.

Also important, Public Participation Planning seeks to institutionalize a more agonistic orientation into the regulatory process. As explained above, agonism seeks to avoid the pursuit of near-universal consensus over policy outcomes, which is typically impossible to achieve in practice anyway, and instead create conditions for productive disagreement. Similar to early engagement, one of the desired effects of administrative agonism is to reduce litigation risk and the perverse effects such risks create. Instead, final regulations would be treated as more contingent and subject to realistic mechanisms for ongoing revision and refinement or to various forms of implementation flexibility. In this way, agonism enabled by Public Participation Planning would attempt to lower the stakes on final rules, such that stakeholder focus could be gradually shifted from post-finalization litigation to implementation where it can be put to more productive use.

To be sure, achieving the full agonistic potential of Public Participation Planning would require other legislative changes. For instance, Congress could amend agencies' authorizing statutes to give them greater authority to deploy back-end implementation adjustments and flexibilities, such as waivers, exemptions, and compliance deadline extensions (all subject to vigorous public participation mechanisms, as noted above). For agencies that already enjoy these kinds of authorities, implementing this general approach could serve as a valuable proof of concept, helping to catalyze such legislative action in the near future.

Another option would be for Congress to build more rigorous schedules for reviewing and updating regulations into statutory design, such as those that exist for appliance energy efficiency standards. To ensure that these reviews are carried out expeditiously, Congress could also experiment with different kinds of "hammer provisions" that would kick in automatically – setting default regulations or standards, for instance – if an agency is unable to work with relevant stakeholders to adopt the update according to the statutory schedule. Ultimately, the goal of these reforms would be to rebuild a rulemaking process in which stakeholder contestation is increasingly shifted to the implementation phase, where dynamism and flexibility can flourish.

Even better, the judiciary could further reinforce this litigation-dampening effect of Public Participation Planning by adopting a strong deference doctrine based upon a rule’s democratic pedigree, as noted above. Such doctrines have the potential to substantially alter the calculus for stakeholders who are contemplating a challenge against the rule. If a rule’s legal and policy bases are strongly supported by public input such that it is likely to earn some measure of enhanced judicial deference, stakeholders may refrain from undertaking the expense of bringing a legal challenge, given the reduced likelihood that the challenge would succeed.

Even with all this careful attention to the design and implementation of Public Participation Planning, ongoing vigilance from policymakers will still be required to ensure that its performance delivers on its promises of better-informed decision-making and more expeditious policymaking. Regrettably, while effective use of this tool can help to alleviate the consequences of power disparities, the underlying disparities themselves will remain in place absent other, more radical interventions. The practical upshot is that over time entrenched interests may learn to "capture" parts of the Public Participation Planning program and deploy them to advance their narrow policy preferences at the expense of the broader public. (Both the APA notice-and-comment process and NEPA's analytical requirements appear to illustrate this general dynamic.) This is why Public Participation Planning demands that agencies continually reevaluate and update their public engagement and outreach actions. Ideally, though, the general Public Participation Planning framework will remain resistant to such capture risks, even if more specific tactics and strategies for instituting that framework do not.

Conclusion

An effective response to the climate crisis faces numerous obstacles. One of these is a regulatory system that inadvertently reinforces underlying power disparities that help to maintain status quo conditions on how we obtain and use energy. A better approach to the practice of regulatory democracy – one that takes seriously and affirmatively addresses power disparities – will be essential for overcoming this obstacle.

This paper proposes a comprehensive reform to how agencies engage the public during the rulemaking process called Public Participation Planning. The distinguishing feature of this reform program is that it would require agencies to develop tailored public engagement strategy blueprints for each of their planned rulemakings. The purpose of these blueprints is to enable agencies to draw on a variety of engagement tactics that are calculated to build a reasonably comprehensive record of the stakeholders’ views on the rule. Importantly, the blueprint would also enable agencies to determine when best to engage different stakeholders throughout the various stages of the rulemaking process – a concept known as sequential participation. This reform would also require agencies to assemble Initial and Final Public Participation Planning Statements, which document the implementation of the blueprints and the ultimate impact that the resulting public input had on the substance of the rule. These Statements would be made part of the rulemaking record where they could be considered by judges, if necessary, during judicial review of the rule.

Public Participation Planning offers two major advantages. First, and most directly, by improving the quality of public input, it will lead to better decision-making, and thus better policy outcomes. Second, its implementation is likely to expedite rulemakings in many instances. Public Participation Planning would help to accomplish this outcome by using procedures to alleviate power disparities among relevant stakeholders. Properly understood, such power disparities appear to be a root cause of many past rulemaking delays. Taking these two advantages together, Public Participation Planning would offer a crucial piece in the larger puzzle of addressing the climate crisis effectively and with the urgency needed to avoid its worst consequences.

CELS Playbook: Clean Electricity for Local and State Governments

Elected leaders across the country are staring down interlocking crises. Families and businesses are struggling to pay skyrocketing utility bills. Large new demands are straining the grid and outpacing the buildout of new power plants. And the public's faith in government has hit new lows. We need a new playbook to solve these problems and make the government responsive to people's needs.

What’s going wrong?

Utility bills are rising rapidly for households and businesses due to an administrative state ill-equipped to protect customers from costs and risks. The cost of power supply is increasing due to growing demand, long timelines to build new cheap clean energy, and volatile natural gas prices. Utilities are spending more money on the transmission and distribution grid for both maintenance and recovery from wildfires and other disasters. Today’s regulatory construct allows utilities to drive spending decisions and pass on all these costs to customers, and regulators are under-resourced and unwilling to find alternative solutions. 

Meanwhile, we are not building clean energy or upgrading the grid fast enough to meet demand growth and address climate change. And this problem will get worse as power-hungry data centers connect to the grid and electrification of buildings, vehicles, and factories adds further electricity demand.

The old climate policy playbook is not equipped for this moment. While it has driven significant deployment of low-cost clean energy, it was not designed to address non-financial obstacles to building projects and upgrading the grid, to fully mobilize the suite of finance tools needed for the energy transition, or to demonstrate that government can make people's lives better, now and over the long term.

Where do we go from here? 

Policymakers and advocates need an expanded playbook. One that addresses the full set of barriers impeding financing and construction of clean energy and grid modernization projects. One that targets the root causes of high energy costs. One that reworks the administrative state to make government work for the people. 

FAS, with the help of partner organizations spanning ideology and function, launched the Center for Regulatory Ingenuity to build a vision for a government that is agile and responsive and delivers affordable energy, abundant housing, and safe transportation for all Americans.

As part of this work, we have developed an updated set of policies and actions for state and local leaders to meet this moment. We started by identifying the barriers to deployment and the flaws in the old playbook, published in our report Barriers to Building. Now we are developing the “plays” in a new playbook—tangible actions that state and local leaders can take now to make near-term progress and pilot new solutions. These plays will live on this landing page, which we will continue to update with additional actions.

This playbook is not a laundry list of policies but rather a cohesive strategy to achieve two goals: (1) deploy the clean energy and grid upgrades necessary to make energy affordable and combat climate change and (2) create governments that tangibly improve people's lives.1




Main Character Energy: Make Regulators Main Characters in Planning and Ratemaking

Utilities and their regulators are responsible for major decisions about what infrastructure we build and how much people pay for energy. Utilities—which can be owned by investors, the public (e.g., municipal utilities), or members (i.e., electric cooperatives)—conduct detailed analysis and provide proposals on planning and ratemaking to their regulators. The set of solutions below focuses on investor-owned utilities, which are incentivized to prioritize projects that maximize returns for their shareholders. As a result, they underutilize solutions that could save customers money but do not earn companies a profit, like rooftop solar or technology- or maintenance-based upgrades to existing transmission lines.

In a well-functioning system, regulators—whether Public Utility Commissions (PUCs) or locally elected officials—would rigorously interrogate utility analyses and direct the utilities to shape or revise their proposals to maximize benefits to the public at the lowest cost. This lens is needed to ensure that utilities spend money wisely in the public interest and to prevent unnecessary bill increases from overspending on the wrong solutions. Active regulators are also needed to bring long-term considerations into planning and to ensure that strategies with long-term benefits are actually evaluated. However, regulators are often not well-equipped or politically willing to conduct detailed analysis and push back on utility proposals. Other intervenors, like consumer advocates and environmental organizations, are outspent by the utilities, which in most states can recover the costs of their analysis and interventions through customer bills.

The result is a reactive, short-term-focused administrative state that leaves the public frustrated. Regular people are frustrated with skyrocketing bills, clean energy companies are frustrated with slow processes and broken incentives, and both are frustrated with the government's ability to solve big problems. The administrative state itself—the officials and staff who make up regulatory bodies and state and local governments—is also frustrated with its perpetually reactive role and its limited say in outcomes.

We should not accept the status-quo regulatory process as a given. As representatives of the public, regulators should have both the ability and motivation to actively drive toward an abundance of cheap clean energy, affordable bills, and a modernized reliable grid. Achieving this vision requires the right personnel, clear direction and support from governors, and adequate analytical capacity.

Solutions

I. Direct Regulators to Use All Tools to Lower Energy Bills and Deploy Clean Energy (Governors and Legislatures)

When regulators take a backseat and let utilities drive, they narrow the toolkit of resources that can help meet demand and as a result leave savings on the table. Regulators with a mandate to prioritize affordability and clean energy buildout can reduce bills by both better scrutinizing utility plans and taking a more active role in enabling clean energy deployment. This includes finding creative tools to get more out of the grid through distributed energy resources, alternative transmission technologies, and flexible sources of demand like electric vehicle charging and factories with electric appliances. 

Pathways to implementation
Governors

Governors can direct PUCs to audit utility investments to find opportunities for savings, re-evaluate utility business models and incentive structures, consider distributed energy resources and alternative transmission technologies in planning, and consider climate impacts in planning and ratemaking decisions.

Legislatures

Legislatures can set statutory requirements for PUCs to consider these opportunities and expand their mandates to include clean energy goals and highest net benefit criteria. 

Legislators can require PUCs to find savings across the gas and electric systems, including by using beneficial electrification to reduce costs. 

Examples
Maine

In 2021, the Maine legislature directed the PUC to consider emissions reduction targets and equity impacts in regulatory decisions.

Oregon

Oregon Governor Brown directed the state's PUC to integrate the state's climate pollution reduction goals and promote equity by prioritizing vulnerable populations and affected communities.

New Jersey

New Jersey Governor Sherrill directed the PUC to review utility business models and assess whether they are aligned with cost reductions for customers.

Minnesota

In 2024, Minnesota S.F.4942 mandated that the PUC establish standards for sharing utility costs for system upgrades, ensuring fair cost-sharing and advancing state renewable and carbon-free energy goals, along with provisions for energy conservation programs for low-income households.

II. Even the Playing Field by Providing More Resources to the PUC and Consumer Intervenors and Increasing Data Transparency (Governors, Legislatures, and PUCs)

Electricity rates are determined by proceedings called rate cases, in which utilities submit proposals and justifications to regulators, other intervenors (such as consumer advocates, environmental organizations, and state and local elected officials) submit testimony, and the regulators hold hearings and make a decision. Most rate cases end in settlement agreements between the utilities and other intervenors, facilitated by the PUC. Utilities drive this process—they file initial proposals and have more information about their system than other participants. Well-resourced PUCs and public interest intervenors are important to interrogate utility proposals and ensure that settlement agreements are a good deal for regular people. 

In order to take on more responsibility in grid planning and utility oversight, PUCs need additional staff and analytical capacity. For example, a legislative commission in Texas found that the PUC needs more staff and resources to independently analyze utility sector data and provide sufficient oversight to ensure reliability. Funding for staff and analysis has a great return on investment—state leaders can save customers money and get better outcomes for a relatively small price. 

Moreover, consumer advocate intervenors are typically underfunded compared to utilities and cannot match the resources utilities devote to developing and defending their proposals. This disparity in funding places consumer advocates in a position of exclusively reacting to utility requests, rather than having the bandwidth to interrogate existing system inequities or to develop innovative solutions to ratepayer needs. Utilities also determine the pacing of their rate case applications, which can put consumer advocates even more on their heels. For example, in a 2019 rate case in Colorado, Xcel Energy brought 21 witnesses, while only a few consumer advocate intervenors testified.

Utilities in most states can recover the full costs of analysis and intervention legal fees from customers, giving the utilities significant resources to drive the process. States can prohibit this practice, directly reducing bills for customers and reducing utility influence over the process. 

In addition to resources for PUCs and intervenors, data transparency can help even the playing field. Utility data are often difficult to access, embedded in filings that often run thousands of pages, and not standardized. This lack of data transparency makes it difficult for PUCs and consumer advocates to track utility spending and effectively intervene in rate cases. 

Pathways to Implementation
Legislatures

Legislatures can provide additional funding for PUCs to hire more staff and conduct independent analysis.

Legislatures and PUCs can prohibit utilities from recovering the costs of political activities from customers and limit the amount of legal fees that are recoverable from customers.

Legislatures can establish mechanisms to ensure that low‑income, consumer, and environmental justice advocates can participate meaningfully in PUC proceedings. Several U.S. states have implemented intervenor compensation programs or similar initiatives that reimburse reasonable costs for nonprofit organizations and community groups engaged in utility regulatory processes.

Public Utility Commissions

PUCs can charge utilities to fund independent analysis of utility proposals on behalf of customers.

PUCs can assess their processes with an eye toward reducing participation barriers for non-traditional docket participants, such as groups representing low-income or environmental justice communities.

PUCs can standardize reporting on utility costs and increase data transparency both during and in between rate cases.

Governors

Governors can direct agencies to conduct analysis to inform PUC proceedings and hire technical talent to engage with the PUC. Legislators can authorize and fund state agencies to conduct independent, proactive analysis to inform PUC proceedings, with opportunities for public input on the analysis.

Examples
California

In 2025, California passed AB 1167, which ended the use of ratepayer funds for political lobbying and strengthened enforcement against investor-owned utilities (IOUs) that illegally use ratepayer funds.

Colorado

A 2023 Colorado law prohibited utilities from charging customers for lobbying expenses, political spending, trade association dues, and other similar activities.

Illinois

In Illinois, a 2021 law expanded the Consumer Intervenor Compensation Fund to compensate consumer interest intervenors in planning and rate cases.

Oregon

The Oregon PUC provides both Intervenor Funding and a dedicated Justice Funding program, supporting groups representing environmental justice communities and low‑income customers, with clearly defined funding caps for eligible participants.

Build Administrative Capacity to Plan for an Affordable & Reliable Grid

Today, the U.S. bulk transmission system faces significant constraints that limit where new clean energy projects can be built and threaten overall grid reliability. Many regions with abundant clean energy resources simply do not have enough high-voltage transmission capacity to deliver that power to population centers. As a result, developers are increasingly unable to move generation projects forward even when siting, permitting, financing, and interconnection queue positions are in place. Without new transmission capacity, interconnection backlogs grow, power costs increase, and states are forced to rely on older fossil resources simply because they are already in place.

Transmission buildout is thwarted by barriers such as long planning timelines of 7 to 15 years, route identification, environmental review, litigation, supply chain constraints, and fragmented and inadequate planning processes. 

While the permitting reforms described elsewhere in the playbook would help, we won’t build the transmission system that we need without improved planning. Building transmission lines requires utilities, developers, customers, and grid operators to work together to determine where a transmission line is needed and appropriately allocate costs across different stakeholders. Without a strong administrative state that can facilitate the process and collect and share all the required information (such as congestion on current lines, hotspots of demand growth, areas with high potential for cheap clean energy, etc.), this process often fails and very rarely results in optimal expansion of the transmission system. Today, states and grid operators lack administrative capacity to conduct this planning process, which is hamstringing our ability to expand the grid. 

States can build the capacity to improve planning in order to spur development of transmission lines with the greatest benefit for the public.

Solutions

I. Include Advanced Transmission Technologies in Planning (Legislatures, Governors, and Public Utility Commissions)

Advanced Transmission Technologies (ATTs) can be used to increase grid capacity on current rights-of-way, alleviating congestion and allowing for more efficient energy transfer without building new infrastructure. Enabling utilities to increase the efficiency and cost-effectiveness of their existing infrastructure is especially important as load growth accelerates across the country and pushes up retail electricity bills. For example, installing high-performance conductors increases the amount of electricity that can be transferred over an existing transmission line. By one estimate, reconductoring with these technologies could double transmission capacity on the current grid. Dynamic line ratings allow lines to carry more electricity when weather conditions are favorable, rather than defaulting to conservative limits on line capacity. Each type of ATT has its own advantages and benefits.
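To see why weather matters so much, consider the simplified steady-state heat balance that underlies thermal line ratings: a conductor can carry current until resistive heating plus solar gain equals the cooling provided by the surrounding air. The sketch below is illustrative only, written in Python with hypothetical conductor and weather values (not utility-grade engineering of the kind standards like IEEE Std 738 require), but it shows how a cool breeze can support far more current than the conservative assumptions behind a static rating.

```python
import math

def ampacity(q_convective, q_radiative, q_solar, r_ac):
    """Simplified steady-state thermal rating: the maximum current (A) at
    which resistive heating plus solar gain balances the weather's cooling.
    Heat terms in W/m, AC resistance in ohm/m. Values are hypothetical."""
    return math.sqrt((q_convective + q_radiative - q_solar) / r_ac)

R_AC = 7.3e-5  # ohm/m at max conductor temperature (assumed conductor)

# Static rating assumes worst-case weather: hot, still air, full sun.
static = ampacity(q_convective=25.0, q_radiative=12.0, q_solar=14.0, r_ac=R_AC)

# Dynamic rating uses measured conditions: a cool breeze triples convective cooling.
dynamic = ampacity(q_convective=75.0, q_radiative=15.0, q_solar=6.0, r_ac=R_AC)

print(f"static rating:  ~{static:.0f} A")
print(f"dynamic rating: ~{dynamic:.0f} A ({dynamic / static:.0%} of static)")
```

Under these assumed conditions, the same line safely carries nearly twice the current of its static rating, which is the headroom dynamic line ratings aim to unlock.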

Pathways to Implementation
Public Utility Commissions

PUCs can dictate standards, enforce rules, conduct studies, and establish new policies that require and incentivize utilities to evaluate and deploy ATTs.

Legislatures

Legislatures can require utilities to include evaluation of ATTs in planning processes, conduct studies on ATT potential and deployment opportunities, and analyze ATTs as potential enhancements to new transmission infrastructure.

Governors

Governors can petition the PUC to open ATT rulemakings via executive order, integrate ATTs into funding criteria for grid or resilience projects, direct economic development agencies to study the economic impacts of ATTs, and convene ATT task forces to set direction and collaborate with educational institutions on workforce training programs focused on ATT installation, operation, and maintenance.

Examples
Utah

Utah's SB 191 requires utilities to conduct an alternatives analysis for ATTs in IRPs and allows the Commission to approve cost recovery for ATTs when deployment is determined to be cost-effective.

Ohio

Ohio's HB 15 requires utilities to summarize ATT evaluations in power siting board certificates and to furnish annual five-year reports on ATT deployment opportunities, including congestion mitigation studies. It also requires the PUC to evaluate the potential of ATT deployment, with stakeholder consultation via two public workshops.

Maryland

Governor Wes Moore’s December 2025 Executive Order Building an Affordable and Reliable Energy Future creates a Transmission Modernization Working Group that makes ATT policy recommendations to the Maryland Energy Administration, which in turn makes formal petitions to the PUC.

Montana

Montana House Bill 729, adopted in 2023, enables the state PUC to set cost-effectiveness criteria allowing utilities to deploy advanced transmission conductor technologies and recover the cost from ratepayers, similar to investments in new energy generation.

II. Create a New Transmission Planning Authority (Governors and Legislatures)

Lack of coordination between transmission and generation planning creates inefficiencies and prevents smart clean energy development. In deregulated markets—and in some vertically integrated states—transmission and generation planning processes occur largely in isolation without systematic processes to align long-term clean energy expansion with major grid upgrades. While the federal government has authority to set the rules for planning regional and interregional transmission lines, state leaders have tools at their disposal to expand transmission buildout and improve planning.

Pathways to Implementation
Legislatures

Legislatures can create transmission planning authorities explicitly authorized to identify transmission corridors that can expand low-cost clean energy generation, lead on the permitting and siting of transmission lines, secure project finance, negotiate and collaborate with other states on interstate transmission plans, provide advice on transmission priorities and planning needs for the state, and enter into public or private partnerships to help with project development. These authorities must be empowered and resourced to collect all the necessary information (e.g., congestion on the existing system, load forecasts, sites of cheap clean energy, etc.) and to attract top talent with expertise in utility planning, project development, and financing.

Governors

Governors can create a transmission advisory or coordinating committee and reorganize state agencies, boards, and commissions to serve the purpose of a transmission authority or to create one.

Examples
New Mexico

New Mexico passed the Renewable Energy Transmission Authority (RETA) Act in 2007, creating RETA and authorizing it to “plan, license, finance, develop and acquire high-voltage transmission lines and storage projects to help diversify New Mexico’s economy through the development of renewable energy resources.”

Colorado

In Colorado, SB21-072 created the Colorado Electric Transmission Authority (CETA) to plan and develop transmission lines to increase reliability and deploy more clean energy. CETA has very similar powers to New Mexico’s authority.

III. Require Integrated Transmission and Generation Planning (Governors, Legislatures, and Public Utility Commissions)

Coordinated planning is essential to ensure that transmission is expanded in the right places and that new clean energy investments can flow to areas with sufficient transmission capacity. Around 35 states require their utilities to develop Integrated Resource Plans (IRPs), which act as a roadmap for how the utility will meet future forecasted electricity demand over a specific time period. Although transmission is just as critical an input to energy supply as generation, it is usually not included in these plans. The result is piecemeal grid planning, as transmission providers and developers focus on smaller lines that meet near-term needs and are profitable within their own footprint. This shortcoming is a product of both process—regulators and state agencies have not been mandated to link transmission and generation planning—and capacity, where the administrative state lacks the right staff and resources to conduct integrated planning.

Integrating these processes can improve coordination between load and generator interconnection, provide a more holistic understanding and roadmap of current and future grid reliability and supply chain needs, help avoid duplicative investments, keep upgrade costs reasonable, and lower the likelihood of stranded or undersized assets. This integrated planning is especially important in places with projected load growth, whether from data center buildout or electrification of buildings, heavy-duty transportation, or factories.

Pathways to Implementation
Governors

Governors can direct relevant agencies to work with grid operators, PUCs, and utilities to encourage integrated planning.

Legislatures

Legislatures in vertically integrated states can require utilities to conduct IRPs where they don’t already do so and further require generation and transmission planning to be integrated.

Public Utility Commissions

PUCs can require utilities to link transmission and generation planning.

Examples
Nevada

Enacted in 2021, Nevada S.B.448 requires an electric utility to amend its most recently filed resource plan to include a plan for certain high-voltage transmission infrastructure construction projects that will be placed into service before 2029.

California

In 2022, a Memorandum of Understanding (MOU) between the California Independent System Operator (California ISO), the California Public Utilities Commission (CPUC), and the California Energy Commission (CEC) ensured that the planning and implementation of new transmission and other resources were linked, synchronized, and transparent.

IV. Ensure Effective Implementation of FERC Order 1920 (Governors, Legislators, and Public Utility Commissions)

Recent federal actions, such as FERC Order 1920, have the potential to be a useful tool for states if implemented correctly and efficiently. FERC Order 1920 requires long-term, forward-looking, multi-value regional planning. It was designed to improve transparency in local transmission planning, including by conducting local stakeholder meetings. Under the order, transmission providers must produce long-term (at least 20-year) regional transmission plans at least every five years; these plans must utilize seven specific categories of forward-looking factors, select projects based on different economic and reliability benefits, and consider the use of grid-enhancing technologies.

Pathways to Implementation
Governors

Governors can take a more active role with PUCs to guide their involvement in regional transmission planning processes established under FERC Order No. 1920.

Legislatures

State legislators can hold hearings with PUCs on how utilities, regional transmission planners, and state officials plan to participate and support regional planning and put the order into action.

Example
Mid-Atlantic

In the mid-Atlantic, 69 legislators from 10 states called on PJM to implement FERC Order 1920 without delay due to the benefits of reliable, affordable and clean electricity it will bring to their constituents. 

Wield Creative Finance Tools to Drive Investment and Reduce Capital Costs

Rollbacks of federal financial support have threatened the viability of many clean energy projects. State and local leaders can help keep projects alive and build new ones with creative financing tools. In some cases, this means taking a more active role in coordinating across public and private sector actors; in others, it means building entirely new administrative capacities to perform more ambitious financial transactions or act as a public developer.

In addition, the grid is facing new challenges that require massive investments. For example, recovery from and preparation for wildfires is inflating energy bills in the West. Gulf states are facing similar costs from hurricanes. States need creative finance tools to ensure that these costs do not continue to raise bills for regular people and small businesses.

Beyond merely acting as a source of capital, governments of every shape and size can participate at every stage of the project development and planning lifecycle to bring down the total cost of projects. Such participation includes lowering financing costs, securing stable or catalytic financing, and complementing other functions the state is undertaking. Local governments can engage in public development functions, including through creative finance tools and engagement with community choice aggregators, rural electric cooperatives, and energy service companies.

Solutions

I. Empower development entities with the legal authority and staffing to pursue high priority projects (Governors, Legislatures, Local Leaders)

State leaders can help ensure that infrastructure authorities, city and county development corporations, or energy departments of a given jurisdiction have the relevant borrowing authority, ownership and operation powers, and partnership capabilities to support project development.

To be successful, state financing entities or public developers need clarity and certainty on how projects they support can participate in electricity market operations, including whether projects can participate in utility procurement processes or interact with grid operator interconnection processes. State financing also must be coordinated with other grid planning processes. 

Given the overlapping interests state and local economic development agencies may hold, this process will demand adequate staffing resources and may require significant stakeholder engagement with private sector actors, government officials, and others.

Pathways to Implementation
Legislators

Legislators can write or amend enabling authorities to explicitly provide state and local entities with the financing, bonding, ownership, and partnership authorities necessary to support, finance, own, and/or operate projects. These authorities should include co-financing and co-development options to blend public and private support. Legislators can also make sure that these authorities are flexible and broad so that public developers can compete with private developers.

Legislators can allow use of public financing tools to support certain projects. In particular, legislators can expand the bonding authority available to state agencies for use on clean energy projects.

Legislators can establish state and/or utility procurement targets for clean energy, storage, and grid projects and provide direction and clarity for state financing entities to service these procurements.

Governors

Governors can use their authority over appointments and interagency coordination to align disparate entities around specific tangible objectives.

Governors can draw on recent public-private partnerships in the offshore wind industry to structure offtake, procurement, and other commercial activities with utilities and developers across a range of clean energy projects. State entities can seed virtual power plants, solar, wind, energy storage, and other clean power projects in ways that mutually derisk projects for public and private developers alike.

Governors and legislators

Governors and legislators can provide expedited permitting and siting processes for publicly sponsored projects.

Local leaders

City and county officials can form project-specific entities or special purpose authorities to make projects financeable.

Examples
New Mexico

In New Mexico, the Renewable Energy Transmission Authority (RETA) was established in 2007 and was granted statutory power to exercise eminent domain to acquire property or rights of way for eligible renewable energy projects. This authority has been critical in overcoming fragmented land acquisition barriers.

Connecticut

The Connecticut Green Bank's Solar Marketplace Assistance Program (Solar MAP) serves as a public developer to finance and build solar projects for K-12 schools, allowing the state to own the assets and sell power back to districts at a discount. While the Green Bank has acted as a public developer in some form since 2014, projects from Solar MAP are projected to deliver tens of millions of dollars in savings, all without incurring any upfront costs for districts.

Colorado

In Colorado, SB 21-072 in 2021 created the Colorado Electric Transmission Authority as a special-purpose development authority granted the power to issue bonds and acquire corridors.

II. Use pooled loan funds like state bond banks to lower borrowing costs and build project pipelines (Governors, Legislatures, and Local Leaders)

Pooled borrowing authorities offer transaction efficiency and credit strength to cities, counties, or small utilities that would otherwise each pay the fixed costs of a standalone bond issuance, by aggregating relatively modest projects into standardized pools. Pooling reduces issuance and underwriting costs and can often enhance credit, resulting in lower borrowing costs.
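The basic arithmetic is worth making explicit. Below is a minimal sketch in Python using purely hypothetical cost figures (actual issuance costs vary widely by market and deal size, and the credit-enhancement benefit is not modeled), showing how pooling amortizes fixed issuance costs:

```python
# Illustrative only: hypothetical figures showing how a pooled issuance
# spreads the fixed costs of a bond sale across many small borrowers.
FIXED_ISSUANCE_COST = 200_000   # underwriting, legal, rating fees ($, assumed)
PROJECT_SIZE = 5_000_000        # typical small municipal project ($, assumed)
N_PROJECTS = 10                 # projects aggregated into one pooled issuance

standalone_share = FIXED_ISSUANCE_COST / PROJECT_SIZE
pooled_share = FIXED_ISSUANCE_COST / (PROJECT_SIZE * N_PROJECTS)

print(f"standalone issuance cost: {standalone_share:.1%} of principal")  # 4.0%
print(f"pooled issuance cost:     {pooled_share:.1%} of principal")      # 0.4%
```

Under these assumptions, a small borrower's fixed issuance overhead falls by an order of magnitude, before counting any interest-rate savings from the pool's stronger credit.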

Bond banks are valuable in practice because they are repeatable financing infrastructure that can be improved and expanded over time. Governors' offices, county executives, and mayors can direct agencies to build a steady pipeline of eligible projects (using Requests for Information or direct engagement) and then work with relevant financing authorities to standardize project intake, selection, and reporting, making the whole process more repeatable.

Once local governments experience lower borrowing costs and faster execution through a standardized conduit, the model becomes politically sticky and easier to scale, especially when paired with complementary tools like revolving funds or credit enhancement that can serve smaller borrowers and accelerate project turnover.

Pathways to Implementation
Legislators

Legislators in states that lack a bond bank can establish one capable of pooling local loans, issuing bonds, and relending the proceeds. They can further work to standardize project solicitation, underwriting, and closing cycles to ensure the institution creates a regular cadence.

Legislators in states that have a bond bank can expand eligible project types (to include clean energy projects and resilience priorities like building retrofits, microgrids, etc.) and create standard project templates.

Governors

Governors can work with state agencies to centralize the origination of bonds for a public developer in their state’s bond bank or otherwise help public developers and other financing agencies exercise their bonding authority.

Governors can make regular use of bond banking authority a priority by directing agencies to run a standing intake process, and appoint or empower relevant state personnel to highlight pooled lending as an innovative solution.

Local Leaders

City and county officials can create a rolling inventory of eligible projects, bundle them into multi-jurisdiction project aggregators and engage with existing bond banks on technical assistance for project scoping and diligence.

Examples
Vermont

Vermont's Bond Bank issues bonds backed by repayments of its loans to individual municipalities, school districts, and other local entities, and maintains a dedicated Municipal Climate Recovery Fund. Through what is known as an "intercept mechanism," the bank can cover non-payment by municipal or district borrowers using state funds otherwise allocated to those borrowers.

Virginia

Virginia Resource Authority’s Resilient Virginia Revolving Fund was established in 2022. Jointly administered with the state’s Department of Conservation and Recreation, the pooled borrowing platform provides financial assistance for flood-mitigation projects across the state.

III. Require energy utilities to supplement portions of their debt or equity with public bonds (Governors, Legislatures, and Local Leaders)

A unique characteristic of public development is that strategic capital deployment has the potential to derisk private investment. Mandating that utilities replace a portion of their high-cost equity with state-backed public debt or revenue bonds optimizes the project's capital stack, reducing the average cost of capital and the total financing costs for capital-intensive grid infrastructure. Investor-owned utilities typically finance large infrastructure projects through a mix of debt and equity, with regulators guaranteeing a return on equity (ROE) to attract private investors. Because this ROE is significantly higher than the interest rates on public debt, requiring public bonds to supplement the capital stack can dramatically reduce the long-term costs that are ultimately passed on to ratepayers.

This mechanism leverages the state's superior credit rating and tax-exempt status to fund the most expensive portions of development while leaving the utility to focus on its core competencies of construction and grid operation. Establishing a public financing facility in this way allows the public sector to act as a sponsor investor for projects of high public interest, such as interregional transmission lines. By providing lower-cost debt, states can ensure that critical energy targets are met without placing an undue financial burden on households. This approach creates a more stable investment environment and allocates risks more effectively across public and private stakeholders.
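The savings claim here is simple weighted-average-cost-of-capital arithmetic. The sketch below, in Python with hypothetical rates and capital shares (actual ROEs, debt costs, and allowed capital structures vary by state and proceeding, and tax treatment is ignored for simplicity), illustrates the mechanism:

```python
# Illustrative only: hypothetical weighted-average-cost-of-capital (WACC)
# arithmetic showing how substituting lower-cost public debt for utility
# equity reduces the financing costs ultimately passed on to ratepayers.
def wacc(shares_and_rates):
    """Weighted average cost of capital from (share, rate) pairs."""
    return sum(share * rate for share, rate in shares_and_rates)

ROE, CORP_DEBT, PUBLIC_DEBT = 0.10, 0.055, 0.04  # assumed annual rates

# Conventional utility financing: roughly half equity, half corporate debt.
baseline = wacc([(0.50, ROE), (0.50, CORP_DEBT)])

# Public financing facility displaces 20 points of equity with public debt.
blended = wacc([(0.30, ROE), (0.50, CORP_DEBT), (0.20, PUBLIC_DEBT)])

print(f"baseline WACC: {baseline:.2%}")  # 7.75%
print(f"blended WACC:  {blended:.2%}")   # 6.55%
print(f"annual financing savings on a $1B project: ${(baseline - blended) * 1e9:,.0f}")
```

Even this rough sketch shows why the approach matters for ratepayers: on a $1 billion project, shaving roughly a percentage point off the cost of capital saves on the order of $12 million per year, every year the asset is in rate base.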

Pathways to Implementation
Legislators

Legislators can mandate that investor-owned utilities make use of state-backed revenue bonds or other forms of public debt to finance high-priority capital investments such as grid resilience or interregional transmission.

Legislators can authorize state infrastructure banks or other financing authorities to act as sponsor investors and displace high-cost equity in a project's capital stack.

Governors

Governors can establish dedicated clean energy project finance working groups to examine the full scope of infrastructure financing tools needed to derisk capital investment in transmission, generation, distribution and other electricity assets.

Public Utility Commissions

Regulators and state energy offices can lower the costs passed along to ratepayers by integrating public financing facilities directly into RFP processes, allowing bidders to access lower-cost capital.

Local Leaders

City and county officials can pass local resolutions advocating for a specific local utility project to be financed via public bond rather than traditional utility equity to ensure the lowest possible rate impact for their residents. A similar strategy can be pursued via written submission or intervention within PUC docket proceedings.

City and county officials can collaborate with state energy offices to identify projects that are ideal candidates for public debt supplements.

Examples
California

California’s 2025 law SB 254 establishes a state public financing facility (the Transmission Infrastructure Accelerator) to replace high-cost utility equity with lower-cost public debt for new transmission projects, directly reducing the ratepayer impact of CAISO’s multi-decade development plans. The law requires utilities to finance billions of dollars of grid hardening investments using bonds instead of utility equity financing, reducing costs for customers and preventing the utility from excessively profiting off of this set of expenditures.

Maine

Maine’s Clean Energy Financing Study recommends operationalizing state revenue bond authority and establishing a working group on large clean energy project finance to optimize the capital stack for clean energy and transmission projects.

IV. Develop greater public understanding about the development levers available to public or quasi-public entities (Governors, Legislatures, and Local Leaders)

Some financial functions like loan issuance, co-financing, and non-dilutive debt financing may be well known to state energy offices, green banks, and certain infrastructure authorities. But in general, public financing is hampered by a lack of clarity, information, and standardization of different agencies’ authorities.

States can maximize the impact of public resources by establishing clear financing authorities and responsibilities, providing state authorities with broad powers to flexibly support projects, ensuring that public finance prioritizes the right investments, and providing clear direction on how publicly sponsored projects support utility procurement or grid operator processes. In addition, standardizing state and local financing entities drives down costs by making processes more repeatable and can pave the way for more effective federal support in the future. By surfacing all the capabilities public entities currently have and may wish to develop in the future, policymakers and advocates can align on objectives to strengthen the public developer toolkit and bring clean energy projects closer to fruition.

Pathways for Implementation
Governors

Governors can inventory borrowing, contracting, and financing authorities and provide clear guidance on roles and responsibilities between agencies.

Governors and legislatures can require reporting on key performance metrics like deal volume, borrower participation, and time-to-close to help encourage institutionalization.

Governors and legislators

Governors and legislators can publish analysis and information on where to focus energy project development and create special zones for the installation, procurement, manufacturing, or operation of energy projects of various kinds. These industrial zones could provide access to a variety of benefits: expedited permitting, siting, and interconnection; dedicated public finance facilities; funds for resiliency and operation; and various other coordination benefits from interested state agencies.

Legislatures

Legislatures can provide agencies with clear financing authorities, direction on what types of projects to support, and a broad set of tools to flexibly support projects.

Local Leaders

City and county officials can examine whether relevant state laws require an additional ordinance or resolution before local tools can be used, then activate those tools and specify rules to create repeatable administrative playbooks.

Examples
Texas

Houston, Texas had to pass an authorizing city ordinance to activate a state program known as Property Assessed Clean Energy (PACE). The program allows commercial and multifamily property owners to finance energy efficiency, renewable energy, and water conservation improvements and has invested over $540 million statewide since its inception in 2016.

Maryland

Montgomery County, Maryland created a green bank in 2016. In 2022, the county passed a law directing 10% of the county's fuel tax revenue to the Montgomery County Green Bank each year. The green bank completed a new depot for electric buses in 2022, co-located with a 6.5 MW microgrid that can run independently of the local utility.

Colorado

Colorado has an energy performance contracting (EPC) program that lends against a project's anticipated cost savings to finance building retrofits.

Protecting America’s S&T Ecosystem

Over the past eighty years, the United States has maintained both economic and military primacy thanks to our technological superiority. Other countries have sought to replicate this advantage, some of which (particularly the People's Republic of China, Russia, Iran, and North Korea) have an interest in replacing the United States as the primary global power, including through coercive and clandestine measures.

Current efforts are centered on National Security Presidential Memorandum 33 (NSPM-33). The memorandum was designed to deal with a very specific problem: state-motivated concealment of ties with an adversarial military in university activities sponsored by the U.S. government. Obviously, concealment and fraud cannot and should not be rewarded in the U.S. system.

Over time, the link between concealment and espionage has proven difficult to establish in court, the research community writ large has (consequently) remained resistant to change, and the PRC has amplified the impact of U.S. research security measures and prosecutorial failures to drive a wedge between the government and members of the academic and Asian American communities.

It is prudent to recalibrate U.S. research security efforts to 1) maintain America’s ability to participate in global discovery by aligning research security efforts with associated risk of specific research activities and 2) create a clearer system for the identification of national security information (NSI) so as to enable protective measures, like the Espionage Act, to do their job.  The status quo, which relies on the notion that information must remain in the open and be widely shared while somehow remaining out of reach of our adversaries, must be rationalized.

It is in the interests of the United States to appropriately protect information that needs to be protected while maintaining our participation in new discoveries to maintain our competitive advantage.  Current efforts, which are focused on university faculty and partnerships, should be rebalanced to focus on risky technologies and ensuring that the source of most patented or sensitive technologies – the private sector – is adequately protected.  Our current efforts are costly, excluding talented researchers with global connections from participating in the science and technology ecosystem and cutting the United States out of global discovery.

Challenge and Opportunity 

There are a number of key challenges that a more effective research security regime will need to address:

NSPM-33 protects the government’s money from people, but it does not protect research or ideas from foreign appropriation

Ideas are the fundamental currency of technology competition.  When a researcher or team of researchers has an idea, they frequently turn to their government for the financial backing necessary to nurture it.  Under the NSPM-33 common forms requirement, the government's evaluation of the proposal for funding includes an analysis of any conflicts of interest or commitment that individuals on the research team might have with an adversarial foreign government.

The research agency is expected to decline the funding opportunity if an applicant for funding has an ongoing or recent relationship with institutions that are backed by adversarial governments or military organizations (among other potential conflicts).  In some circumstances, particularly when an individual or team intentionally misrepresents their relationships with adversarial governments, the federal agency may seek additional administrative remedies, including suspension and debarment.  Similar processes are in place in government agencies that operate cooperative user facilities, primarily our national laboratories. Lying is bad, and it goes without saying that lying to conceal a relationship with an adversarial government is also bad.

Unfortunately, the story doesn’t end there. Because the idea is still wholly in the possession of the individual who applied for a federal grant, the individual or team is now free (and even incentivized) to seek alternative sources of funding. The fact that the government declined to support their work based on research security concerns as opposed to merit could validate the researcher’s belief that they have a meritorious idea worth pursuing elsewhere.  A rare unscrupulous researcher could use that knowledge to determine the idea may be of interest to foreign militaries and actively look to adversarial governments for support.  In effect, the government’s review of a person’s institutional affiliations, followed by a decision to exclude them from U.S. government funding, and subsequent visa revocations or revocations of permanent residency increases the probability of hostile foreign acquisition of talent and technology. 

This conclusion is not based on speculation.

We’ve Been Here Before

History shows that strategic catastrophes follow when the government shuts people out of the American R&D ecosystem. In the 1950s, the United States detained and deported Qian Xuesen, co-founder of our Jet Propulsion Laboratory, based on fear that he supported the Chinese Communist Party.  He was tapped to build the PRC's ballistic missile and space programs, creating the strategic situation we find ourselves in today.  Qian was so angry at the United States government that he later refused to take part in activities celebrating the normalization of relations.  The United States made a similar error with Erdal Arıkan, the father of 5G wireless technology's polar codes, albeit not out of concern for his affiliations.  A lack of U.S. government support for long-term theoretical mathematics research led Arıkan to return from his positions in the U.S. to Turkey.  Some years later, he was approached by Huawei, which quickly assimilated Arıkan's work and became the world's leading supplier of wireless technology.  Subsequent government efforts to mitigate Huawei's competitive advantage undoubtedly cost the U.S. taxpayer more than if we had provided adequate, tailored funding for Arıkan's career (and those of many other researchers) in an environment where his work was more likely to be acquired by U.S. technology companies, with less risk to national security and the security of our international partners.

I have been briefed on other examples.

Funding Cuts Compound the Talent Crisis & Encourage Exodus

The Executive Branch's massive funding cuts for research make the current situation even more problematic, creating compounding incentives for talent to leave the country.  Other countries have become better at attracting and retaining talent in recent years, eroding the longstanding U.S. competitive advantage in talent attraction and retention.  This has been particularly notable in fields like artificial intelligence, where highly productive individuals have made significant contributions to the development of AI models like DeepSeek.  The British-based talent intelligence firm Zeki Data points to the strengthening of markets for talent in Europe, the Persian Gulf, India, and China as additional reasons for a decline in U.S. AI talent attraction.  Recent surveys suggest that as much as seventy-five percent of the scientific community is looking for opportunities overseas.  Notable recent departures from the United States due to green card denials, federal funding challenges, and federal layoffs include ChatGPT developer Kai Chen (moved to Canada), spaceflight safety expert Jonathan McDowell (moved to the UK), and carbon capture expert Yi Shouliang (moved to China).  Neuroscientist Ardem Patapoutian was offered 20 years of funding in China in "any city, any university" after being cut off from NIH funding as part of the Trump Administration's recent cuts, earning himself a mention during Marcia McNutt's State of Science address at the National Academies.

Result: Innovation Increasing Elsewhere, Excluding Americans

When we assume that genius stems primarily from the U.S. system, and in particular from government funding, we are surprised when innovation happens somewhere else.  The government's principal method for controlling ideas and innovation operates only after a legal relationship between the researcher, their home institution, and the U.S. government is established.  For advanced technology development, the overwhelming majority of which is funded by the private sector, we are principally reliant on export controls (which have a dismal history of successfully limiting knowledge and advanced technology transfer).

Science agencies aren’t adequately equipped to mitigate harm (and some NSI probably isn’t appropriately identified or protected)

For decades, National Security Decision Directive 189 (NSDD-189) was the primary governing document for research security measures. When it was released at the height of U.S.-Soviet tensions during the Reagan Administration, the academic community focused primarily on one line: "It is the policy of this Administration that, to the maximum extent possible, the products of fundamental research remain unrestricted." NSDD-189 was meant to cover only a subset of research – fundamental research – defined in the document as "basic and applied research in science and engineering, the results of which ordinarily are published and shared broadly within the scientific community, as distinguished from proprietary research and from industrial development, design, production, and product utilization, the results of which ordinarily are restricted for proprietary or national security reasons."  On balance, the overwhelming majority of U.S. research performance and expenditure is not covered by NSDD-189; it is instead performed by industry and the defense sector (covered later).

Understanding When Controls Must Be Used

Relatively little attention has been paid to NSDD-189’s second policy idea, which emphasizes that “where the national security requires control, the mechanism for control of information generated during federally-funded fundamental research in science, technology and engineering at colleges, universities and laboratories is classification.”  Proper identification of potentially classifiable information and NSI is essential for the Justice Department and Intelligence Community, who wish to protect such research under national security statutes.  The policy also acknowledges there may be alternative applicable statutes, like export control regimes, which have generally remained consistent with NSDD-189 (more on that later).  

The classification system remains the most straightforward way to give our security community the tools they need to protect America’s critical national assets.  This mirrors the findings of the 2019 JASON report on Fundamental Research Security, which argued that “Making the case, for classification reasons, that a new technology might be of national security value is far simpler than assessing its potential economic impact, even if economic security is equated in some way with national security.”

To fully appreciate what this means, one needs to understand when and why information should be classified. Classification is governed by Executive Order 13526 (E.O. 13526), which sets out the processes by which agencies apply national security classification protections.  The lowest level of classification, termed “confidential,” covers information “the unauthorized disclosure of which reasonably could be expected to cause damage to the national security that the original classification authority is able to identify or describe.” The bar for identifying and labeling “confidential” classified information is relatively low, and yet it is one that many science agencies lack the authority to implement.

Whether information should be treated as classified or unclassified is the responsibility of an Original Classification Authority (OCA).  These authorities are a short list of senior government officials authorized by the President to review information and determine whether or not it is classified.  Agencies without representatives on the list are limited to labeling information derived from other classified sources.  To determine whether new information must be classified, individuals from other agencies must depend on determinations made by individuals granted OCA or delegated classification authorities.

Classification Constraints for NSF, NIH, and Others

The trouble begins when we realize that extramural funding agencies like the National Science Foundation (NSF) and National Institutes of Health (NIH) have no officials possessing OCA at the Top Secret (or even Secret) level, making officials in those agencies entirely dependent on members of the security community and the White House to make such determinations. A broad lack of access to classified facilities also limits the ability of program managers to consult relevant intelligence sources.  Many NSF officials, for instance, need to travel offsite to engage with classified materials.

As a result, I have been told that classification reviews of new and novel information produced by these agencies almost never happen.  If a researcher or program officer believes that information produced in the course of research may damage national security and reports that concern to their funder in accordance with the terms of their grant (as is the case for NSF), the research agency lacks the authority to evaluate the information and defend its determination. Put another way, the research agency also lacks the authority to determine when edge-case information it produces should remain unclassified.

E.O. 13526 anticipates this and allows for the referral of information to original classification authorities at other agencies.  Again, the need to refer elsewhere means that such reviews almost never happen, except in those agencies that already possess the ability to identify and classify information (such as the Department of Defense or Department of Energy). Things get more absurd: one of the co-chairs of the Research Security Subcommittee of the National Science and Technology Council wasn’t able to read relevant research security-related intelligence products due to their reduced level of access.  This had the effect of limiting the materials the subcommittee could discuss.

Science Agencies Placed in a Bind

When an officer from a science agency or some other official claims that the research they support could harm U.S. national security if improperly disclosed, the question “why wasn’t it classified?” must immediately follow.  A lack of original or derived classification authority means that the individual in question is not empowered to make the initial assessment of whether the information needs to be protected through the classification system.  This effectively places them in conflict with the substance of E.O. 13526 and leaves them unable to fully implement NSDD-189.

Defenders of the status quo would correctly point out that there are few examples of harm caused by the unauthorized disclosure of early-stage research. That is true, especially for agencies like NSF and NIH. But the inability to make a determination, paired with the more recent Congressionally directed emphasis on later-stage research (under organizations like the NSF Technology, Innovation and Partnerships Directorate, ARPA-H, and other longer-standing programs involving gain-of-function research), increases the likelihood that later-stage research with national security implications will not be appropriately identified and protected.  The expansion of research portfolios into applications must be accompanied by policies and a readiness to accept the resulting changes in responsibility.

The definition of basic and fundamental research has lost meaning over time and must be reestablished

During my time as Assistant Director for Research Security in the Office of Science and Technology Policy, security agencies would routinely point out that a fair number of basic research projects have readily describable national security implications. I would agree with them and then take this one step further – a great deal of research that science agencies categorize as basic research fails to satisfy the statutory definition of basic research established for the Department of Defense.  That definition is as follows:

Basic research is a systematic study directed toward greater knowledge or understanding of the fundamental aspects of phenomena and of observable facts without specific applications towards processes or products in mind.  It includes all scientific study and experimentation directed toward increasing fundamental knowledge and understanding in those fields of the physical, engineering, environmental, and life sciences related to long-term national security needs. It is farsighted high payoff research that provides the basis for technological progress.

Similarly, the Federal Acquisition Regulations define the term as follows:

Basic research means that research directed toward increasing knowledge in science. The primary aim of basic research is a fuller knowledge or understanding of the subject under study, rather than any practical application of that knowledge.

The Export Administration Regulations (EAR) create two definitions to help establish what is considered fundamental research:

Fundamental research means research in science, engineering, or mathematics, the results of which ordinarily are published and shared broadly within the research community, and for which the researchers have not accepted restrictions for proprietary or national security reasons.

It is not considered fundamental research when there are restrictions placed on the outcome of the research or restrictions on methods used during the research. Proprietary research, industrial development, design, production, and product utilization the results of which are restricted and government funded research that specifically restricts the outcome for national security reasons are not considered fundamental research.

A fair number of defense research activities (including research supporting computing or hypersonics, as well as almost all “use-inspired research”) do not meet the statutory definition of basic research, but are often categorized as such despite the fact that the name of the research or research field describes the applications or processes in mind.  In the decades since NSDD-189, agencies have expanded on this definition to include use-inspired research, muddying the waters.

For a final level-set, classified information and controlled unclassified information (CUI) cannot be defined as fundamental research in the eyes of the government, as the act of controlling the information places the work outside the EAR’s fundamental research definition.  This should be fairly straightforward given the fundamental incompatibility between the two definitions.

Applied Research Can Also Suffocate in a Closed System

Individuals who defend applied research activities being handled primarily in the open often point out that the overwhelming majority of these activities need openness in order to maintain technological competitiveness, pointing to the success and strength of open source technology platforms.  I would agree and go one step further: the vast majority of applied research would be worse off in a restricted environment, and there are many good reasons why NSDD-189 included applied research in its definition of fundamental research.  I would also point out that a strength of the EAR’s definition is its acknowledgement that as soon as one feels the need to protect information from misappropriation, that information cannot and should not be managed in the open.  It is incoherent to say that information needs to be in the open but also remain inaccessible to the Chinese government – the two notions are mutually exclusive.  To quote the JASON report on Fundamental Research Security, “The fundamental research exemption is based on the idea that the general nature of the knowledge produced in fundamental research cannot be controlled.”

Gold Standard Science and Industrial Diffusion Require Openness

While it is true that basic and fundamental research provide the basis for information that could eventually support the development of sensitive and national security-relevant technologies, expanding the “grey space” of risky information to include almost all basic research (as the Department of Energy and other departments are apparently now doing, according to some university officials) risks information paralysis.  After all, literature review is an important first step in the scientific method; shielding information from external eyes prevents research from being scrutinized in ways that generate new questions and ideas, and undermines the entire system of scientific rigor.  And while basic research may be used to develop security-relevant technologies, such technologies cannot necessarily be derived from individual discoveries alone.  Getting information into the hands of U.S. industrial players becomes much more difficult once it is restricted.  This is especially true for startups, small businesses, and smaller universities that cannot afford to sponsor vetting all of their employees at the level of “public trust” or higher in order to gain access to protected U.S. government-sponsored discoveries, with real consequences for American innovation, competition, and participation.

All of this is more difficult in a world where artificial intelligence can aggregate and synthesize information more rapidly than a team of graduate research assistants. Such risks must be accounted for. Still, it makes more sense to establish a strong governance framework around AI use cases rather than reordering how we govern the rest of society after every technological breakthrough.  

Need to Limit the Grey Space

It is true that a great deal of research exists in a grey space where it needs to happen in the open in order to advance despite the fact that it may cause harm.  Decisions to allow research that could cause harm to remain in the open should reflect the careful consideration of government program managers paired with unambiguous instructions to award recipients.  It is within the capacity of the government (and not within the natural capabilities of our universities, which have an incentive to seek the least restrictive framework possible) to determine whether information might present a threat to national security and to require that defensive measures be implemented accordingly.

Muddying the waters around which research needs to be protected and which does not has real-world consequences for U.S. universities and non-governmental research organizations.  When faced with ambiguity about whether they should be engaged in a particular research activity, university administrators frequently take the most cautious approach possible to protect themselves from potential liability and reputational damage.  Talent, on the other hand, retains the ability to do its work elsewhere if an environment becomes too restrictive.

Securitization has significant costs (including to American leadership)

Later in his life, Richard Feynman recounted a story about the impact of compartmentalization on the Manhattan Project.  Uranium-235 separation happened at Oak Ridge National Laboratory, while the research underpinning that production took place at Los Alamos. Because the researchers at Oak Ridge lacked the fundamental understanding of nuclear physics necessary to troubleshoot their work, Uranium-235 production ran into numerous headwinds until Feynman and others managed to convince the government of the necessity of sharing knowledge with the Oak Ridge team.  Once the knowledge was shared, production got back on track.  Failure to share the information risked a major accident at Oak Ridge, not to mention an end to the Manhattan Project.  In today’s environment, where incentives are structured such that critical decisionmakers frequently sit on research security-related decisions rather than subject themselves to internal or Congressional scrutiny, I am less confident that someone like Feynman would achieve a similar outcome.

Measuring Direct and Indirect Costs

It is relatively easy to measure the financial costs associated with research security measures. It is much more difficult to evaluate their effectiveness. The fact that there have been few publicized significant examples of national security harm resulting from the sharing of government-sponsored basic research makes having a serious discussion about the efficacy of our existing research security measures outside of classified environments almost impossible.

Costs include increased administrative burden, support staff, defensive cyber systems, facility access controls, and more.  These costs cannot be easily accounted for in an individual grant’s direct costs, so we can safely assume that decreases in support for indirect costs under the current administration will result in the cost of CHIPS and Science-mandated research security programs placing additional pressure on other scientific activities. Research security compliance costs are overwhelmingly treated by the government as mandatory expenditures, while (confoundingly) the cost of operating a world-class laboratory is not (except in instances where a grant is explicitly for the development and maintenance of advanced scientific infrastructure).

Costs to American Competitiveness

While we can measure the economic impact of fewer students and scholars in the United States, it is more difficult to measure the cost of the United States being absent from international opportunities.  Proposed legislation like the SAFE Research Act, while intended to prevent U.S. government funding from supporting research involving adversarial nations, could effectively force U.S. researchers to abandon high-potential discoveries produced by large and diverse international research consortia if researchers from adversarial countries are part of the team.  This would give the PRC veto power over the involvement of U.S. government-supported individuals in international research consortia – an alarming consequence, especially when one considers the Australian Strategic Policy Institute’s recent finding that Chinese research institutions dominate 57 out of 64 critical technology fields and the fact that some international projects require the work of thousands of scientists from dozens of countries. In fields like space research and development, the PRC’s domestic talent base is sufficient to maintain competitiveness while it actively tests capabilities the United States is years away from deploying (and might beat the United States back to the moon).

The absence of U.S. researchers from such consortia does not mean that the research does not happen – it merely means that the consortium needs to find other partners who are able to do the work.  Because the infrastructure supporting such discoveries is frequently located only in Europe and Asia (thanks in part to decades of U.S. underinvestment in research and development infrastructure), American researchers face additional incentives to travel and work abroad, including in adversarial countries, in order to maintain access to unique capabilities, advance their careers, and test new technologies.

This problem is particularly acute in fusion energy research and development, where U.S. companies maintain a technological advantage but lack the domestic extreme-radiation-environment testing infrastructure needed to develop components critical for long-term operation (the inner walls of the reactor, or blanket).

A protect-oriented research security framework makes sense if the United States is safely in the lead and competitors are decades away from catching up.  But in a more competitive environment, when other countries become more productive and competitive, decisions that force the United States to abandon collaborative efforts are much more consequential. The relative size of the U.S. population compared with that of our competitors also limits the ability of the United States to unilaterally define global rules governing the conduct of research.

Taken together with my previous points about the need to re-establish the definition of basic and fundamental research as a bright-line test and to give science agencies greater ability to control information, I’m placing an awful lot of responsibility on the backs of federal research managers.  Having spent most of my professional life in the civil service as a clearance-holder, I would argue that managing such weighty decisions is exactly what federal employees are paid to do.  In the same talk mentioned earlier, Feynman described his admiration for the speed with which members of the security establishment could make decisions upon which the fate of the nation rests.  If we abandon the flexibility of officers to give informed answers to our research apparatus promptly, we do so at our own peril.

We know dangerously little about what is happening in Chinese laboratories, and fear is getting in the way of dealing with our knowledge gaps and urgent security challenges

As a former CIA colleague recently briefed Congressional staff, it is very difficult for trained intelligence officers from either the United States or the PRC to successfully infiltrate the academic environment, given the level of academic training and the fluency in domain-specific technical terms and practice required.  When the United States occupies a position of preeminence, and the primary sources of technical knowledge originate in American laboratories, it makes sense to limit partnerships with individuals and organizations who have competing interests.  Unfortunately, the United States can no longer count on being in the lead in many scientific domains.  Our ability to avoid technological surprise depends, in part, on knowing what’s going on in Chinese labs.

The research security push of the past decade makes this much more difficult.  The growing absence of routine technical interactions with Chinese counterparts means that our scientists and engineers have limited insight into the PRC’s work.  Research security requirements emphasize restricting interactions with the PRC’s Seven Sons of National Defense, which also happen to be the PRC’s top performers in science and technology discovery.  Without knowledge of what is happening in these institutions, or the ability of our scientific community to ground-truth intelligence assessments (which are frequently compiled by non-experts), we increase the probability that we will not have the immediate knowledge necessary to replicate such discoveries or, worse, will fall victim to technological surprise.

Degraded Capacity Creates Larger Competitiveness Risks

The challenge is compounded when we consider the degradation of our own technical capacity.  In fields like nuclear fusion and radio astronomy, the United States simply does not have the laboratory capabilities necessary for our scientists to remain at the cutting edge. In 2023, the Fusion Energy Science Advisory Committee found that “rapid progress toward commercial fusion power will likely rely in part on research at existing or near-term international facilities that provide capabilities presently unavailable in the U.S., such as long-pulse magnetic confinement” – an area where the PRC has invested billions of dollars in recent years.  Decoupling and fear of information transfer have also limited our ability to communicate with the PRC in areas like spaceflight safety, where we generate dozens of conjunction warnings every day involving potential near-misses with Chinese satellites.  The consequences of a collision cascade resulting from non-communication or inefficient communication with PRC satellite operators include the loss of access to particular orbits, significant damage to U.S. and allied commercial and security assets, and potentially unlimited liabilities for the United States under the Outer Space Treaty.  Such a disaster would have dire consequences for our economy and space-dependent national security establishment.

This creates a Catch-22 for American scientists and engineers who want to see the United States, and their fields, advance.  On the one hand, testing new technologies on foreign platforms creates substantial risk of foreign appropriation, similar to the risks experienced by American companies operating in China since normalization.  On the other, the lack of access to comparable domestic or allied capabilities stalls the ability of U.S. commercial interests to keep pace with foreign competitors.

Increased Departures, Stunted Discovery Process 

The U.S. government, likewise, cannot count on individuals who have devoted their lives to a particular field of inquiry to wait five to ten years for the government to invest in these capabilities (an overly optimistic projection for the budget process alone). History shows us that brain drain (or loss) in such situations is inevitable.  Absent new and significant U.S. government investment, America’s ability to remain competitive will increasingly depend in some part on our access to PRC-derived knowledge sources.  A research security regime that unnecessarily restricts U.S. collaborations based on institutional affiliation or connections, rather than on domains where the risk of loss exceeds potential benefits, is more likely than not to limit our insight into new knowledge early in the discovery process and throttle U.S. technological development in ways that will make it more difficult to compete in the future.

Research integrity has become securitized, confusing the ways in which we approach domestic challenges

Research integrity and research security are interlinked, and it is absolutely true that when contracts developed by a foreign government encourage researchers to lie on federal grant applications, that is a major cause for concern and demands a national-level response.  But no matter how egregious such behavior is, and no matter how proximate violations of the False Claims Act are to violations of the Espionage Act, they are different parts of the criminal code.  If the government wishes to bring a case before the courts where an individual has both lied to the government on official documents and provided NSI to a foreign government, it is within the power of a prosecutor to do so and to seek appropriate penalties.

On the domestic side, organizations such as Retraction Watch have justifiably brought attention to research misconduct over the past ten years.  There are significant examples of senior researchers engaging in alleged violations of research integrity, including a Nobel laureate with 13 retractions.  Many of these cases involve the knowing manipulation or falsification of data.  I would argue that such cases do more to damage the integrity of the U.S. research ecosystem than individuals omitting information about their affiliations in grant applications, primarily because they inject falsehoods into the scientific corpus, exacerbate the “replication crisis” in biology, psychology, and behavioral economics, and poison the scientific establishment’s credibility.

In several recent domestic misconduct cases, the researchers in question were able to return to their laboratories, continue their day jobs, and even found unicorn technology startups.  On the other hand, undisclosed affiliations, patents, or grant support have resulted in the termination of 119 scientists (almost half of those investigated) by the National Institutes of Health as part of its foreign interference efforts.  Some researchers have been denied access to funding despite no credible link to foreign talent programs or other similarly concerning affiliations.  A reasonable observer would be left to wonder why an undisclosed patent or grant is a greater violation than the intentional falsification of research results.  To the observer the answer may seem obvious – in one instance, we have a nexus with a foreign government; in the other, we don’t.

As former Assistant Attorney General Matt Olsen noted when the Department of Justice drew down the China Initiative in 2022, “by grouping cases under the China Initiative rubric, we helped give rise to a harmful perception that the department applies a lower standard to investigate and prosecute criminal conduct related to that country or that we in some way view people with racial, ethnic or familial ties to China differently.”  The same must be true of administrative actions related to academic misconduct, especially as countersuits undermine the government’s earlier claims.  

Proper NSI Identification Removes Ambiguity, Releases Burden

This is why it is important that national security-sensitive research be clearly identified as NSI through the classification system.  Once information has been appropriately classified, who gets to handle the information becomes just as important as how the information is handled.  Individuals who handle classified information are expected to report their contacts, affiliations with foreign governments, and all other information necessary to maintain the public trust as a matter of national security.  Violations related to improper handling may be appropriately elevated under the Espionage Act and successfully prosecuted in court.  Our current attempts to substitute administrative penalties for judicial action create a culture of fear and suspicion in our university establishment, rhyming with some of the darkest periods in the history of our nation’s research and development enterprise.  That culture of fear, compounded by a lack of clarity about what needs to be protected, dramatically increases the probability that universities will improperly exclude individuals from certain racial or ethnic backgrounds from activities that have little relevance to national security.

Plan of Action 

Recommendation 1.  Congress should, with the support of the Administration, reinvest in American research and innovation, including in foreign talent attraction.

The United States is no longer in a position where American technological superiority is assured; the fact that the PRC is leading in publications in many critical and emerging technology fields, and in the deployment of advanced technologies in certain domains, should be cause for immediate concern.  Likewise, anticipated declines in foreign talent enrollment and shrinking PhD programs at major U.S. universities in response to federal policy actions should be cause for significant concern. In 2024, the Defense Department reported that “unfunded research, development, test, and evaluation (RDT&E) infrastructure requirements were shown to have grown significantly since annual reporting began in 2018, putting the military at risk of losing its technological superiority.”

Funding also provides the government with the leverage needed to protect American interests.  The Department of Defense’s use of contracts to protect U.S. security interests is already common practice, most famously used recently to ensure that SpaceX’s Starlink would not cut service in Ukraine. By contrast, the government currently cannot classify, restrict, or otherwise control information produced outside the federal research ecosystem. Even if an idea may make it back to a foreign government or adversarial military, it is far better for the United States to have participated in the development of that idea (and have the chance to exploit relevant intellectual property) than to surrender the innovation, in total, to a foreign power, robbing us of the chance to compete. Sometimes, in order to control the dissemination of an idea, the government must invest in it.  That means the government should actively seek to invest in projects in order to participate in discovery and gain access to its benefits, even when our partners might be less than ideal or when there is risk that the information could also be used by a hostile foreign power.

The government can and should create terms in its funding mechanisms that enable it to mitigate risk where necessary and give it graduated negotiating flexibility in circumstances that may require enhanced measures (like encouraging researchers to sever ties with adversarial entities like the PRC).  These measures should not be mandatory, as they are under the SAFE Research Act, which effectively gives the PRC the power to veto U.S. participation in large multinational consortia if individual PRC researchers become affiliated with an effort, thereby reducing the ability of the United States to set terms for increasingly multilateral scientific activities.

Recommendation 2. The Office of Management and Budget (OMB) should require rigorous and independent cost-benefit analysis of existing research security efforts before approving new research security-related regulations and requirements as part of the Paperwork Reduction Act review process.  Congress should exercise the same care before imposing such requirements on academic institutions, given the significant costs associated with research security programs.

The use of grants to impose research security requirements that alter an organization’s behavior at the institutional level is, in effect, de facto regulatory action with significant economic impact (as defined in Executive Order 12866).  Because attesting that an institution has a research security program meeting the requirements of a granting agency requires significant economic expenditure across many academic institutions, new research security requirements should be subject to the same cost-benefit analysis as other significant regulatory actions.  Paired with declining support for indirect costs and limited public evidence that disclosures of federally-supported basic research have harmed U.S. national security interests, there is a financial imperative to ensure that research security programs are aligned with actual instances of observed harm to national security and the unintended or illicit transfer of national security information (NSI).

Consistent with recommendations from the recent National Academies report on simplifying research regulations and policies, existing research security efforts should be recalibrated to mitigate the risk of loss rather than impose blanket bans on interactions based on institutional affiliation.  Efforts that reduce American participation in global discovery around critical technologies directly cut against our country’s technological competitiveness and should be retired and rescinded, as the most likely outcome of such measures is not the protection of information, but the inability of the United States to develop and deploy new and novel capabilities.

What could the costs for expanded research security programs look like, especially given that some agencies are allegedly treating basic research activities as CUI?  As noted in the JASON report on Safeguarding the Research Enterprise:

“The supporting apparatus for access controls would impose significant cost on the conduct of research and reduce research funding efficiency. JASON received from NSF cost estimates for what the University of Oklahoma has spent to support such work, for example. A warehouse-type building for CUI experiments was estimated to have cost $2M, and a new office building with access control adequate for classified work cost $7M. Building construction costs are only about 10–20% of their life-cycle ownership costs, translating to roughly $1–2M per year for both buildings. Required security and compliance staff add cost of four full-time equivalent personnel, equating to another $1M per year. Thus, a medium to large ($1–3M/year) research program might incur security costs around $1–3M per year above the baseline research cost, roughly doubling the cost of carrying out that research. This would constitute a serious loss of research efficiency. Slowing research by half could easily allow countries like the People’s Republic of China (PRC) to pull ahead in strategic fundamental research areas.”
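To make the quoted arithmetic concrete, the back-of-envelope sketch below recomputes the report’s annualized figures using only the numbers in the quote. The midpoint construction share and the building service life are assumptions for illustration, not values stated in the report.

```python
# Back-of-envelope annualized security overhead, using only figures quoted
# from the JASON report; variable values marked "assumed" are illustrative.

construction_cost = 2e6 + 7e6   # CUI warehouse ($2M) + access-controlled office ($7M)
construction_share = 0.15       # construction is ~10-20% of life-cycle cost; midpoint assumed
lifecycle_cost = construction_cost / construction_share  # ~$60M over the buildings' lives
building_years = 40             # assumed service life (not stated in the report)
annual_facility_cost = lifecycle_cost / building_years   # ~$1.5M/yr, within JASON's $1-2M/yr

security_staff_cost = 1e6       # four FTE security/compliance staff, per the quote

annual_security_overhead = annual_facility_cost + security_staff_cost
research_budget = 2e6           # midpoint of a $1-3M/yr research program

print(f"Security overhead: ${annual_security_overhead/1e6:.1f}M/yr "
      f"on a ${research_budget/1e6:.1f}M/yr program "
      f"({annual_security_overhead/research_budget:.0%} of baseline cost)")
```

Under these assumptions the overhead comes to roughly $2.5M per year on a $2M per year program, consistent with the report’s conclusion that access controls would roughly double the cost of carrying out the research.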

Strangling our research enterprise is an unacceptable outcome and must be avoided.  Likewise, expanding research security reviews within agencies would likely require a significant expansion of agency personnel available to conduct rigorous national security assessments and would cut against the availability of funds to do actual science.  Such a tax on research funding adds to the cost of American competitiveness, reduces the number of grants made to institutions, and further excludes institutions in EPSCoR jurisdictions, HBCUs and other MSIs, emergent research institutions, and nonprofit laboratories that already face significant resource constraints.

In light of preliminary assessments of the cost to the research enterprise, the Government Accountability Office (GAO, which sits within the legislative branch) may also wish to assess the effectiveness of research security programs relative to their cost of implementation, and in particular their impact on programs that do not have a clear NSI nexus.

The Administration and Congress must ask whether they are prepared to significantly increase funding for research programs in order to absorb such significant cost inflation from mandatory research security program requirements.  The situation will only become more dire if the government caps indirect costs at 15 percent, as several science agencies recently attempted to do, which would place research security costs in even greater conflict with the fundamental resources necessary for an institution to conduct research.  Such security measures are meaningless if the research institution lacks the staffing, facilities, or administrative support necessary to conduct cutting-edge research.  Care should be taken to ensure that federal policy actions attempting to rein in administrative costs do not cut against our ability to operate world-class research institutions.
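A minimal sketch of that conflict, for a hypothetical grant: only the 15 percent cap comes from the text, while the negotiated indirect rate and the security compliance line item are assumptions chosen for illustration.

```python
# Illustrative squeeze from a 15% indirect-cost cap. Only the 15% cap comes
# from the text; the negotiated rate and security cost are assumptions.

direct_costs = 1_000_000        # hypothetical grant's direct costs
negotiated_rate = 0.55          # assumed pre-cap negotiated indirect rate
capped_rate = 0.15              # proposed cap on indirect cost recovery

recovery_before = direct_costs * negotiated_rate  # $550,000
recovery_after = direct_costs * capped_rate       # $150,000

mandatory_security = 120_000    # assumed mandatory research security compliance cost

remaining = recovery_after - mandatory_security
print(f"Indirect recovery falls from ${recovery_before:,.0f} to ${recovery_after:,.0f}; "
      f"after ${mandatory_security:,.0f} of mandatory security compliance, "
      f"only ${remaining:,.0f} remains for facilities and administration.")
```

Under these assumed figures, a mandatory security program would consume most of the capped indirect recovery, leaving little for the laboratories and administrative support the research itself depends on.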

Recommendation 3. Congress should figure out how best to handle research security concerns that originate from research supported by non-governmental entities, and empower industry consortia to take measures to secure their sensitive intellectual property.  Incentivizing participation through contracts, cost deferrals, or reduction of administrative burden for protective measures can help.

NSDD-189 warns that “as the emerging government-university-industry partnership in research activities continues to grow, a more significant problem may well develop.”  Contemporaries involved in the drafting of NSDD-189 have told me that the growing role of the private sector in funding university research was of concern to Reagan’s OSTP: as industry became a more prominent funder, the government would have limited controls to prevent the transfer of industry-derived information to foreign powers.  Given the increased presence of commercial research laboratories, and more recently focused research organizations, the emphasis of research security efforts on federal funding for universities is inconsistent with the balance of U.S. research performance.

The majority of research considered proprietary or sensitive (as well as the overwhelming majority of all U.S. research) is produced or supported by American companies.  As a matter of policy, most universities will not accept controls on the research they produce, appropriately intending for it to enter the public domain where it can be more rapidly developed and used for practical benefit.  Academia’s role as a producer and broad distributor of knowledge is fundamental to its role in society and must be protected.  Rather than suggesting that most university research is sensitive, when its primary purpose is to be shared, Congress should be willing to enact enforcement mechanisms that strengthen the ability of the government to protect research relevant to defense and industry.

On this point there are no easy answers.  The danger, of course, is that such measures will inherently restrict the competitiveness of American businesses and their ability to participate in international markets.  Strengthening CFIUS can hinder the ability of American companies to secure sources of investment and invite foreign retaliation.  Expanding our use of export controls will create incentives for foreign governments to diversify away from American suppliers and limit the ability of our companies to shape global supply chains.  Restricting foreign countries’ access to U.S. technology will instead create new dependencies on foreign technology sources, creating a new system of incentives that limits the ability of the United States to set global standards and forces countries to make concessions to the PRC in order to maintain access to its resources and technology.  Our efforts to limit the PRC’s access to semiconductors have already convinced the European Union of the need to become less dependent on the United States in other technology areas.  The difficulty the United States has had in managing the spread of technology from Huawei, ByteDance, and BYD should be instructive, as should growing challenges to U.S. influence in multilateral fora.

The most straightforward way to control information produced outside the government is to get knowledge producers on contract and create a system of incentives for companies to sign up.  This serves two purposes.  First, it allows the government to create a financial incentive for companies to participate in enhanced security measures.  Second, it creates a legal relationship through which companies can be required to implement enhanced security measures without significant sacrifices to their bottom line, for example by deferring the costs associated with security clearances and the technology measures needed to provide enhanced security and IP protection.  Legal relationships are more effective than education campaigns, because some producers of high-value technology do not seek government protection given the significant financial and administrative costs.  Still, some firms may refuse to take government grants or contracts to preserve their organizational flexibility.  This is a feature of American capitalism, not a bug.

Managing these tradeoffs is fundamental to addressing our research security challenges.  If we want to get serious about research security, our greatest efforts need to be directed toward research that is higher in the value chain and carries national security consequences.  Most of that research doesn’t happen in universities.  Policymakers must be prepared to accept the inherent tradeoffs that come with decisions to place restrictions on the American research enterprise.

Recommendation 4. The Executive Branch should grant Original Classification Authority to the heads of extramural granting agencies and more strictly apply the federally-recognized definitions of basic and fundamental research.

As federal extramural funding agencies move toward more technology-relevant solutions and later-stage technology development and deployment, the need for agencies to defend decisions to keep research in the open environment will become much more acute.  While the risks of overclassification are real, they are no greater than in fields like diplomatic engagement or global development (both of which, by definition, require continuous engagement with foreign partners, including those who wish to challenge American interests). The Department of Defense’s and the Intelligence Advanced Research Projects Activity’s (IARPA’s) history of prudent and limited classification determinations should provide some relief.

This will not be cheap, and agencies will need appropriate resourcing in terms of personnel and capital to manage the workload, along with specialized facilities (Sensitive Compartmented Information Facilities, or SCIFs, and secure work areas) to handle classified information.

More strictly applying the federal definition of basic research to research that is “without specific applications towards processes or products in mind” should help agencies separate activities that have clear industrial, commercial, or defense-related applications from research that is truly foundational.  The task of establishing a “bright line” between foundational research and applications with national security potential is likely to be significant, but it is likely to be far less costly than passing that cost onto thousands of laboratories around the country to make their own risk management calculations based on less complete information.  NSF has created the SECURE Center to mitigate this, but the solicitation for the Center notes that it does not conduct investigations, hold or manage classified information, or assume liability for the consequences of its products.

Agencies will also need to contend with the fact that many universities and other non-governmental research organizations do not accept funding that comes with classification, CUI, or other burdensome requirements. In such instances, agencies will need to weigh the value of the research in question against the risk of foreign appropriation.  Where the risk of foreign appropriation is significant, the value of the research is high, and the consequences of the United States not participating in discovery are anticipated to be significant, agencies should be willing to provide supplemental funding to enable the research activity to take place and to devote an appropriate level of oversight to ensure that U.S. involvement is appropriately managed and consistent with overarching federal interests.

The government might also wish to implement alternative funding measures when there is an interest in participating in discovery while excluding the participation of non-aligned governments.  Contracts or collaborative agreements are probably more appropriate than grants in such circumstances, allowing the government to more expressly dictate terms for collaboration.  The government should be willing to use financial leverage to do this, including through funding for large international research consortia, to counter the efforts of adversarial governments while maintaining the ability of the United States to participate in overseas discovery processes.

Yes, this will result in a culture change in some of our premier research-supporting agencies. I would argue that culture change became necessary immediately after the creation of the NSF TIP Directorate, ARPA-H, and similar organizations.  If we expect these agencies to engage more directly in critical and emerging technology development, especially around technologies which are relevant to great power competition, then increased scrutiny of these and similar programs is necessary to protect our critical national assets.

Recommendation 5. The Executive Branch should move all research integrity efforts under the rubric of Gold Standard Science along with other issues related to academic misconduct.

This is not a lengthy recommendation: our efforts to preserve and protect the integrity of the research enterprise should be managed under a single umbrella where penalties for unethical conduct can be calibrated to the severity of the misconduct.  This is essential for maintaining buy-in within the research community around both research security and research integrity measures.  Invoking Justice Holmes, if we take the view of an unscrupulous researcher (whom we shall find does not care two straws for ethical conduct in the sciences but does want to know what will result in a loss of tenure), then it is reasonable for them to assume that the government’s ethical framework for science is mediated primarily by our relationship with Chinese research institutions rather than by Gold Standard Science. It is in the interest of the science and technology ecosystem to correct that notion at the earliest possible opportunity (and presumably to also clarify when the government will charge them with espionage).

Conclusion

It is highly probable that this administration and Congress will seek to implement additional measures related to research security in the near term.  Current and proposed frameworks rest on the incorrect assumptions that U.S. supremacy in science and technology is persistent and that new ideas are a product of government innovation.  As the PRC deploys new capabilities that have not been demonstrated by U.S. government or commercial actors, the posture of the United States toward our research security apparatus must change to match the strategic moment.  Measures that isolate the United States from discovery should be retired in favor of new measures that selectively identify and protect critical knowledge vital to maintaining national security.  For the sake of our security and future technological leadership, we must recognize that innovation comes from the work of teams of individuals who are motivated to change the world, and accept that America is less secure when that change happens somewhere else.

Solving the Clean Energy Infrastructure Finance Rubik’s Cube

Building Blocks to Make Solutions Stick

Capital is not the constraint, alignment is: Catalyzing large-scale climate and energy infrastructure requires government to act as a systems integrator – synchronizing policy, de-risking commercialization, modernizing valuation, and coordinating markets so private capital can move with speed and confidence.

Deal templates and archetypes: Clear, standardized financing pathways that signal how government capital will engage at different risk tiers and technology stages. 


Executive Summary

Historic commitments. Huge demand. Massive cost reductions. Ready technologies. Yet, infrastructure deployment levels are underperforming their potential. What gives?  The U.S. clean energy sector has achieved remarkable milestones: solar and wind have tripled since 2015, costs have fallen 90%, and annual clean energy investment now exceeds $280 billion. Yet deployment has arguably fallen short of what both markets and the climate moment demand. The culprit isn’t a single bottleneck: not permitting, not subsidies, not technology readiness alone. The real constraint is misalignment across the multiple interdependent factors that investors need to see in place before committing capital at scale.

Think of it like a Rubik’s Cube: solving one face means nothing if the other five stay scrambled. This paper identifies six strategic levers that, when pulled in concert, can unlock the conditions for large-scale capital deployment: market defragmentation, commercialization partnerships, transaction execution speed, policy synchronization, holistic valuation methodologies, and proactive investor engagement.

The good news: the capital exists, the technologies are ready, and infrastructure is a solvable problem. With over 1,000 GW of clean energy in development and electricity demand projected to grow up to 50% over the next decade, the infrastructure build-out represents one of the largest capital deployment opportunities in American history. And global demand for U.S. clean energy technology has never been higher. The barriers identified in this paper are structural and systemic, not fundamental; most of the solutions proposed are actionable in the near term, without waiting for perfect legislation or perfect markets.

The window is open, but not indefinitely. For policymakers and investors alike, the question is not whether to act, but whether to act with the clarity, coordination, and urgency the moment demands. The frameworks, partnerships, and policy tools outlined here offer a practical roadmap for unlocking decades of economic growth, cost-of-living relief, and energy security for communities across every region of the country and beyond. The energy transition is not a cost to be managed; properly coordinated, it is a generational economic opportunity.

America has experienced extraordinary momentum in the growth and transformation of the energy sector. Solar and wind generation has more than tripled since 2015. In 2024, 50 GW of solar power was added to the U.S. grid – not only a record, but the most new capacity any energy technology has added in a single year. Technology costs have plummeted: utility-scale solar and battery energy storage have each fallen 90% since 2010, making them among the lowest-cost forms of electricity in many places. Domestic manufacturing capacity has also surged, with hundreds of clean energy manufacturing facilities, many of which have already come online. These technologies and projects have been reinvigorating communities and creating jobs across the nation, and the benefits should continue as capital flows into the sector at an unprecedented rate.

Clean energy investment in the United States has more than tripled since 2018 to $280 billion annually, with multiples more in commitments, and private markets alone have raised nearly $3 trillion over the past decade. For more established technologies like utility-scale solar and onshore wind, financing has become standardized, with established project finance structures, robust secondary markets, accurate energy production forecasts, and predictable returns that align with the needs of institutional capital. These asset classes now exhibit many of the hallmarks of market maturity: transparent pricing, deep liquidity, sophisticated risk assessment frameworks, and predictable transaction execution. This progress has been galvanized by unprecedented governmental support, including the Bipartisan Infrastructure Law of 2021 and the Inflation Reduction Act of 2022 (IRA), alongside bold state policies, aggressive corporate clean energy procurement, sustained advocacy, and relentless technological innovation.

Yet despite these achievements and the trillions of dollars in committed capital, the pace of deployment has arguably fallen short of what the market opportunity demands and what the climate crisis requires. Hundreds of IRA-supported energy and manufacturing projects have faced delays or cancellations: much of that is due to increased economic and logistical uncertainty (e.g., in the cost and availability of equipment, permitting timelines, and import and export regulations); much is also due to sharp reversals in federal funding priorities (e.g., tax incentive changes from the One Big Beautiful Bill Act (OBBBA) and direct project cancellations). Moreover, emerging solutions are still taking time to achieve true commercial liftoff. Despite billions in federal funding allocations, only a few carbon capture projects have meaningfully progressed, with others indefinitely delayed or cancelled. Growth of some demand-side energy solutions, like behind-the-meter solar and virtual power plants, has remained relatively regional, despite favorable economics. Advanced nuclear energy, though it remains a policy priority, has been challenged by long delivery timelines, and many project investors remain wary of the risk of cost overruns. Sustainable aviation fuel production capacity increased tenfold in 2024, but it remains a very small fraction of jet fuel demand. Furthermore, transmission capacity has not grown nearly as quickly as needed and remains a key constraint on progress.

The situation – deployment deficiencies despite historic support – can be entirely solved with just one thing… and that is… to stop acting as if there is just one thing. The relative underperformance described above is not attributable to any singular constraint: contrary to what some have argued, for instance, it is not solely about removing permitting roadblocks or creating more subsidies. Rather than seeking silver bullets, stakeholders can make real progress by recognizing that multiple elements are involved and that misalignment between those elements has curbed the rate of progress.

Accelerating new energy project investment is somewhat like solving a Rubik’s Cube. The key to the puzzle lies in its interdependence: every twist of one face ripples across the others. You can solve one face perfectly, but if the other five remain scrambled, you haven’t solved anything in the grand scheme. Solving it requires coordinated progress across multiple dimensions simultaneously, in the right sequence. The cube rewards systems thinking and algorithms over siloed, uncoordinated actions. All six of its faces must align to win.

The same is true for new energy finance. When most project investors look at a sector, they approach it as a puzzle and look for as much alignment of the full picture as possible before becoming sufficiently comfortable to deploy capital. That alignment is the indicator that the risk-reward balance is in the right place to justify investment. Rather than defining the theory of progress around singular issues, policy and industry stakeholders need to create sufficient alignment of multiple puzzle pieces at the same time.

This paper offers a few perspectives on how to achieve the conditions for larger-scale capital deployment, drawing on both lessons learned and promising concepts from across the industry. Like the six faces of the Rubik’s cube, six priority strategy areas are articulated: market defragmentation, commercialization partnerships, transaction execution speed, policy synchronization, holistic valuation methodologies, and proactive investor engagement. Note that while some of the underlying strategies may take time and require deep structural realignment, most of the concepts discussed herein are actionable in the near term. A range of stakeholders, from policymakers to infrastructure investors to community and industry advocates, need to move in concert to solve the puzzle and unlock greater investment. 

The opportunity before us is immense. With over a thousand gigawatts of clean energy in development and electricity demand projected to grow up to 50% over the next decade, the infrastructure build-out required represents one of the largest capital deployment opportunities in American history. Similarly, global demand for U.S. clean energy technologies has soared over the past few years, as many countries seek to diversify away from China or to access some of the more unique technologies the U.S. is developing. And solutions can’t come quickly enough in this era of fast-growing energy demand, spiking electricity bills, aging physical infrastructure, and burgeoning new industries, not to mention a plethora of old and new technology solutions and operational strategies poised to meet the moment. The question is not whether the capital exists (it does!), whether energy solutions are available (they are!), or whether there is a silver-bullet salve (there isn’t!). It’s whether we can align the six faces of our energy finance cube quickly and strategically enough to channel the right types of capital where they’re needed most, when they’re needed most. The energy transition presents a real opportunity to drive economic transformation that, when properly coordinated, can unlock decades of strong economic growth, cost-of-living reductions, innovation, and prosperity across every region of the country and the planet.
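For a sense of scale, a one-line sketch of the annual growth rate implied by that demand projection (the 50%-over-a-decade figure is from the text; the rest is arithmetic):

```python
# Compound annual growth rate implied by 50% electricity demand growth over 10 years.
growth_total, years = 0.50, 10
cagr = (1 + growth_total) ** (1 / years) - 1
print(f"Implied demand growth: {cagr:.1%} per year")  # about 4.1%/yr, sustained for a decade
```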


Chapter 1. “Come together”: Defragmenting markets through regional coordination

For many companies across multiple sectors, the U.S. market can seem like the golden goose. Its big population, high income, diversified economy, and strong purchasing power typically mean large total addressable markets (TAM). While those drivers can be true, the reality for many energy and climate solutions, especially early on, is that the large TAM is challenging to realize, as the market can be highly fragmented. In those sectors, the U.S. is less of a “market” per se and more of a loose mosaic. There are about 3,000 utilities, ranging from large publicly traded corporations to rural cooperatives, operating in regulated and deregulated markets. States and territories not only have different market drivers, but they also have their own regulations and regulators, business processes, permitting requirements, and market rules. This complicates go-to-market strategy, as you typically need large, locally focused commercial organizations to tap these markets, which can be expensive and time-consuming to build, especially for newer companies. It also often means a less efficient path to scalability, as each set of local customers and regulators needs to be brought up to speed and convinced of a solution’s fit (compared to having a few entities that speak for the entire country). In addition to the commercial elements, this dynamic introduces technical barriers to scalability, especially where deep integration and redesign are required to meet local requirements. Over the years, this has flummoxed both U.S. startups and experienced foreign investors, who have approached the U.S. market with high expectations, only to be confounded by these complexities.

Harmonize local requirements to avoid the piloting death spiral

To the extent possible, to promote more rapid and widespread investment in and deployment of solutions that could benefit their communities, local (and national) governments need to work more closely together to harmonize market designs and project requirements. Oftentimes, a solution provider implements a solution in one state, but when they go to another state, the utility there might make them start from the beginning and prove themselves all over again – many innovators have likened these continuously repeated pilots to death by a thousand cuts. If a good solution is successfully deployed in one place, the barriers to deploying the same solution in another market need to be lower. This concept applies to permitting and design as well. The more that permitting processes and tariff structures, or modular elements within them, can be templatized, the more time and uncertainty are reduced. Additionally, uniformity lowers development costs because the solution doesn’t have to be fully reengineered for each locale. This could also extend to standardizing equipment and project technical requirements across jurisdictions, to minimize costly product redesigns and reengineering. Furthermore, state stakeholders seeking to deploy similar solutions should consider entering into reciprocal partnerships or MOUs, supporting collaboration that’s both technical (e.g., between their utilities and independent engineering organizations) and policy-focused (e.g., between their policymakers and regulators). Under such an agreement, once a solution is evaluated and approved in one state, it can be given an expedited evaluation and approval process when brought forward in a partnering state.

Not just physically, but digitally

The dynamic described previously is not limited to hardware. It is present for many software solutions as well, particularly those that have to integrate with local operators’ control systems. For example, a locality may be interested in deploying a virtual power plant (VPP), a relatively low-cost, software-based approach to aggregating distributed and controllable energy resources in order to provide large-scale energy services. A VPP deployment would have to connect to a utility’s and/or grid operator’s distributed energy resource management system (DERMS) to talk to devices, energy management systems (EMS), energy dispatch and trading systems for wholesale market participation, and customer information systems to track billing and energy usage, all while complying with cybersecurity requirements – note too that several utilities have yet to fully roll out these foundational modern digital systems. On top of that, each utility and grid operator might have its own implementation (vendors, versions, configurations, rules) of each of these systems. Even outside of controls-oriented functions, the variety of data structures, naming conventions, and IT systems can make it difficult to access available market data (e.g., energy pricing), electricity tariff rate structures, and other highly important information. This is one reason many energy software solutions concentrate their operations in just a few markets: the costs and time associated with integrating with another market’s cadre of systems can be hard to justify and can thwart efforts to scale.

This is an area where states can work together (along with their respective utilities, grid operators, technology providers, and regulators) to agree upon more uniform ways to structure data, access market information, and securely interface with market and control systems. This could include partnering with groups that are developing common standards and protocols (such as RMI’s VP3 and LF Energy), building an implementation roadmap across those states and utilities (accelerating implementation of FERC Order 2222), and taking corresponding legislative actions to ensure investments are made to build out the enabling foundational digital systems.
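
To make the integration burden concrete, here is a minimal sketch of the kind of common interface such standards efforts aim to provide: one normalized adapter that a VPP operator could code against once, rather than once per utility stack. All class, method, and utility names here are hypothetical illustrations, not drawn from any actual standard.

```python
# Hypothetical sketch: a normalized adapter API over each utility's own
# DERMS/EMS stack. Names are illustrative, not from any real standard.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class DispatchSignal:
    resource_id: str
    setpoint_kw: float       # positive = inject, negative = absorb
    start_epoch_s: int
    duration_s: int

class GridInterfaceAdapter(ABC):
    """What a VPP operator would code against once per standard,
    instead of once per utility's vendor, version, and configuration."""

    @abstractmethod
    def get_tariff_rates(self, meter_id: str) -> dict: ...

    @abstractmethod
    def get_hourly_usage(self, meter_id: str) -> list[float]: ...

    @abstractmethod
    def send_dispatch(self, signal: DispatchSignal) -> bool: ...

class UtilityXAdapter(GridInterfaceAdapter):
    """One concrete adapter wrapping a fictional Utility X's systems.
    Bodies are stubs; real ones would call vendor APIs."""
    def get_tariff_rates(self, meter_id: str) -> dict:
        return {"energy_usd_per_kwh": 0.12, "demand_usd_per_kw": 9.50}
    def get_hourly_usage(self, meter_id: str) -> list[float]:
        return [1.2] * 24            # stub: flat 1.2 kWh per hour
    def send_dispatch(self, signal: DispatchSignal) -> bool:
        return True                  # pretend the control handshake succeeded
```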

Aggregated demand and collaborative procurement

In a similar vein, collaboration between state and national governments can level the playing field and expand markets. When it comes to infrastructure, states and countries often endeavor to ensure that local manufacturing capacity and supply chains are set up within their territories – this can create long-term economic growth opportunities, reduce equipment delivery risks, and improve the public’s return on their investment.

However, issues can arise when multiple states duplicate efforts in the same sector. Take offshore wind in the early 2020s. Multiple eastern U.S. states were supporting a new wave of projects, and many funding programs required those projects to source materials and equipment from suppliers located in the funding state. The effect of this on a small, burgeoning industry was dilutive, slowing down factory investments because the scaling factors were harder to justify. After all, there are only so many blade, monopile, vessel, and cabling factories that can be supported at a given time, especially early in an industry’s development. In response, thirteen states and the federal government signed a memorandum of understanding, agreeing to take more regionalized, collaborative approaches to procurement and supply chain development.

Relatedly, an area where significant improvements could be made is around the procurement of critical common equipment. To accommodate the load growth from new factories, data centers, and building and vehicle electrification efforts, there are many pieces of equipment that will be needed irrespective of what types of energy are associated: things like transformers, circuit breakers, switchgear, and so on. There are considerable production capacity shortages and long lead times on these, which raise costs and create execution risks for projects. Despite the robust market demand, manufacturers have been somewhat hesitant to invest in expanding production as they worry that the demand will not materialize, which would leave them with underutilized or even stranded assets. 

State and local governments can respond to these challenges in multiple ways. For instance, they could pool their demand and drive standardization of the equipment so that it is more fungible and interchangeable, as previously highlighted in the U.S. National Infrastructure Advisory Council’s report on protecting critical infrastructure. States can also create well-defined demand guarantees, providing assurances to manufacturers and consumers that necessary equipment will be there when needed.

For example, in 2013, the Illinois Department of Transportation led a seven-state procurement initiative to jointly acquire a standardized set of efficient locomotives and railcars, with additional funding provided by the Federal Railroad Administration to support domestic manufacturing. This effort pulled new, more efficient railway vehicles forward into the market and lowered lifecycle costs. These concepts can apply to secondary markets as well – for example, providing residual value guarantees on heavy-duty electric trucking procurement, to help mitigate risks on the initial purchase (e.g., traditional resale markets not emerging or asset residual values not being realized as projected).


Chapter 2. “That’s what friends are for”: Overcoming commercialization barriers through partnerships

The next facet of the cube pertains to early market formation and investment into technologies that are not yet fully commercialized, especially first-of-a-kind (FOAK) and early-of-kind (EOAK) infrastructure, and why capital formation has been easier for some types versus others. Differences in the ability to demonstrate, commercialize, and scale new infrastructure do not purely depend on the ultimate value of the solution; they are often driven by how the characteristics of that infrastructure affect the pathway to value realization, particularly the inherent capital-intensity and modularity of the solution. 

For highly modularized solutions with lower capital requirements, the pathway can be much more straightforward. Take solar photovoltaics. Though module R&D and fabrication are far from trivial, technical demonstration and deployment are relatively simple. One can usually install and field-test new solar quickly and inexpensively. The advantage extends when scaling the solution to bigger projects: once you reach megawatt scale, given the modularity of solar cells and their balance of plant (inverters, cables, trackers, etc.), you can obtain a reasonably clear picture of how even gigawatt-scale projects should fundamentally work. Other highly modular solutions like batteries and EV chargers have enjoyed similar advantages. Tesla, for instance, in order to address potential range anxiety issues for its customers, leveraged its own balance sheet and government funding to build out a network of standardized superchargers, taking advantage of charger modularity to build in waves. This characteristic has enabled rapid demonstration and scaling of those solutions, as the financial community can enter the market, investigate, learn, and expand with relatively low risk.

However, the commercialization process becomes significantly more challenging with more capital-intensive and complex technologies, such as carbon capture, nuclear, and e-fuels. For some of these solutions, the early projects can require billions of dollars to construct and demonstrate, and smaller-scale systems might not provide representative technical proof points of how the larger system needs to operate. Larger sums of capital are therefore needed for early deployments. The financial risk is compounded because the long-term payoff is not guaranteed: FOAK and EOAK projects typically carry more uncertainty, and the learning rates of subsequent projects may be less obvious. Furthermore, instead of the typical first-mover advantage you often see with new technologies, early project investors here might actually suffer from a first-mover disadvantage, where they bear the risk and cost of participating in the earlier projects but don’t accrue the benefits and learnings that are seen in projects executed down the line.
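
To see why this matters, consider the standard learning-curve (Wright’s law) arithmetic, sketched below with assumed numbers rather than data from any particular technology: the first-of-a-kind cost and the learning rate are both hypothetical placeholders.

```python
# Illustrative learning-curve arithmetic (standard Wright's-law form,
# with assumed numbers): why late movers can pay far less than FOAK investors.
import math

def unit_cost(first_unit_cost: float, n: int, learning_rate: float) -> float:
    """Cost of the n-th unit, if cost falls by `learning_rate` with
    every doubling of cumulative deployments (Wright's law)."""
    b = math.log2(1 - learning_rate)
    return first_unit_cost * n ** b

foak = 6.0e9            # assumed first-of-a-kind project cost, $
lr = 0.15               # assumed 15% cost reduction per doubling
for n in (1, 2, 4, 8):
    print(f"Project {n}: ${unit_cost(foak, n, lr)/1e9:.2f}B")
# Project 8 comes in roughly 40% below project 1: benefits that accrue to
# later investors unless early-project structures share the upside.
```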

To address the challenges typical of capital-intensive, less modular FOAK and EOAK infrastructure projects, a new suite of partnership structures can help accelerate market formation and improve investability.

Multi-project joint ventures

Catalyzing capital for this class of infrastructure may mean going significantly farther than providing a few incentives and mounting strong advocacy efforts. More complex and elaborate agreements, private and/or public, are often necessary to drive deployment, particularly in the form of deployment coalitions, consortia, and joint ventures that support multiple projects. At the highest level, these can take several forms and can be originated by the private or public sector, as appropriate. For illustration, consider commercial approaches to scaling new nuclear energy projects, roughly in increasing order of relative deployment impact:

All of them offer significant advantages over pursuing projects on an individual basis. They provide demand signals to supply chains to create manufacturing capacity and to labor groups to create a workforce. Both of these stakeholder groups generally need to see firm demand signals before they will undertake significant investments, which are in turn typically needed to reach a solution’s cost and performance entitlement (otherwise creating a chicken-and-egg problem). They also create more concrete opportunities to drive project standardization; this not only allows a technology to achieve faster learning curves, but also helps derisk and justify the investment by providing a more tangible line of sight to the large market. The point about manufacturers and workforce development groups applies equally to financiers, who often want to see a pipeline of repeatable opportunities before spinning up their underwriting teams.

Risk and reward sharing

Having an orderbook of the first several projects may not, by itself, create enough activation energy at the project level. Though it sends a good signal to supply chains and others, it does not necessarily address the first-mover disadvantage issue that may exist.

A differentiator in partnership approaches, including the ones described previously, is how they think about alignment and value creation. Traditionally, governmental entities approach financial partnerships through mechanisms like subsidies, loan guarantees, offtake guarantees, backstops, and fast-tracked processes. These help reduce stakeholders’ financial exposure, but alone, they miss a key part of the story: the long-term upside that can be created by successfully deploying and opening up a market for these solutions.

Usually, for product owners and corporate investors, this upside is more naturally accounted for and balanced against the downside risk. For instance, companies from software providers to aircraft manufacturers might sell their first products as loss-leaders, providing lower pricing to early adopters to reduce the risk to the buyer – justified by the expectation that, if successful, they can recoup early expenses (and even failures) across the broader market pool. For large infrastructure projects, this is more challenging to achieve. When projects are highly capital-intensive, the financial exposures may be too great for the product company and/or equity investor to bear – that company or fund might be entirely wiped out by an individual project failure (for example, Westinghouse had to declare bankruptcy in 2017 when its two U.S. nuclear projects faced challenges). Project stakeholders and financiers might be asymmetrically exposed to the downside risk and, therefore, inclined to avoid investing in early projects. For a promising technology, you may find several customers (e.g., electric utilities) lining up for the ‘third’ or ‘nth-of-a-kind’ project, which would likely be derisked and less expensive, while taking a passive wait-and-see approach to the first project – which produces a stalemate if the first project is hard to get off the ground.

This is where deployment partnerships with structures that more fully align economic incentives and share in both the downside risk and the upside of value creation can be a powerful catalyst for action. Amazon’s arrangement with X-energy, for instance, creates this alignment: Amazon is involved in multiple projects, so it ultimately benefits from improvements over time, and through its equity investment in X-energy itself it should (depending on terms) continue to benefit down the line if the company is successful.

In addition to encouraging the formation of joint ventures and consortia as described earlier, states and/or national governments can work together to strategically invest in key solutions, run competitive tenders for prospective providers, and strike profit-sharing agreements and/or take warrants (as opposed to pure equity) in situations where government investment played an outsized role in value creation.

A potential example of this is the recent $80 billion framework agreement between Brookfield and the US Government to deploy new large nuclear reactors. Notably, beyond packaging existing products and authorities (e.g., low-interest government loans for projects), the proposed deal stipulates a profit-sharing mechanism under which the US Government would receive a share of future profits from reactor sales. Noting that this partnership is early-stage and important details have yet to be disclosed, such a mechanism could be appropriate here: a hard-to-commercialize sector, with strategic national and geopolitical value, and effectively no competing domestic products.

State entrepreneurship

Public sector funders can also play a significant role in creating and incentivizing these kinds of deployment partnerships. Though such arrangements are more commonplace in countries with state-run industries, countries with free market economies have often found them more delicate to navigate. There may be legitimate concerns about how governments’ “picking winners” may create adverse incentives and undermine competitive markets in some cases. It may also blur the government’s role: maximizing public benefit, versus showing favoritism or extracting economic rents from corporations.

All that said, there are models for state entrepreneurship that can be very powerful here, balancing the need to pull forward solutions and capital with protecting the public and maintaining market competition – particularly in markets that are pre-commercial, have few players, carry outsized national strategic benefits, and otherwise would not develop on their own. These are cases where, though the market benefits are considerable, the activation energy is too high to stimulate deployment without deep governmental intervention.

Furthermore, consider situations with first-mover disadvantage challenges but a strong set of prospective fast-followers. To avoid a Spiderman-meme-like situation where stakeholders (e.g., local utilities or individual states) are pointing at each other to make the first move, downstream project investors could, for instance, co-invest (debt, equity, or backstops) in the first project, even on a minority basis, which would both mitigate risk on the first project and enable them to access a cost-effective pathway to the technologies they want to build down the line. State and local entities do not traditionally invest in infrastructure projects in other jurisdictions, but doing so could be net beneficial as a faster and lower-cost way to derisk and execute their own projects.

In addition, public financing entities investing domestically (e.g., states, green banks, federal agencies) could, where appropriate, consider extending their authorities to borrow some concepts from the US’s international playbook. Organizations like the Development Finance Corporation (DFC) can make equity investments in strategic, high-value projects, particularly where the normal capital markets would otherwise struggle to enter until the investment thesis is more clearly actionable. Such a process would need very clear scopes, firm guardrails, clear commercial competition plans, and compatibility with legal and market structures, to create the intended benefits without confusing or distorting markets.

In any scenario, there should be a corresponding plan for how the public profits would be used. For example, they could fund other governmental activities or be directly returned to the public in some way. Or it could be efficient to recycle the funding into related activities and rebalancing of the governmental ‘venture capital’ portfolio, as a strategic sovereign wealth fund would.


Chapter 3. “Highway to the deployment zone”: Faster, risk-weighted transaction execution

There’s a common cliché from finance that time kills (or at least wounds) all deals. Increasing the speed of policy formation and deal execution is essential to unlocking growth and investment, especially for newer sectors. We will focus here on the public capital side of the equation, where there is a great mismatch between public and private investment decision timescales. In the private sector, deals are expected to be completed on the order of months or even weeks. Public processes can add many months to years, with a high degree of uncertainty, depending on the program. There can be many idiosyncrasies associated with public funding – e.g., infrastructure projects with federal dollars may carry additional compliance requirements (for environmental regulation or domestic manufacturing, say). Though there are many deep policy questions here, this piece will focus on ways to accelerate the process.

Staffing for success

While it’s easy to say that the government should move faster, the reality is that individual government program officers typically work at a rapid pace. This is especially true at the political level, where the motivation to make progress in a short amount of time tends to be very high. Particularly when it comes to developing new programs, they do a massive amount of work, mostly unseen by the public, with very few resources, often overstretching to meet deliverables.

The other side of this is that when new programs and initiatives are rolled out, there often isn’t a similar level of flexibility in staffing levels and allocations. In fact, staffers at the federal, state, and city levels can get overwhelmed by the volume of direct work and information requests while following the relevant laws and statutes. A new capital program may get introduced, but the number of people implementing that program might not change rapidly. For instance, the Inflation Reduction Act of 2022 introduced and/or influenced dozens of tax credit programs, and implementing agencies accordingly had to issue almost a hundred pieces of new guidance so the market could act on them. A relatively small group of people, led by the Treasury Department’s Office of Tax Policy, was charged with generating that official guidance (as required, to ensure consistency and fairness). In addition, a number of the programs had complex elements that required deep technical expertise (e.g., tax law, energy markets, carbon accounting, energy technology) to complete the work well – skills that are in relatively short supply and high demand, both inside and outside government. The rate of progress was also slowed by some ambiguity in the law itself, where key technical questions (e.g., accounting methodologies and criteria) needed additional time to be addressed during implementation instead of beforehand. The associated teams ran at breakneck pace to complete all those issuances in just two years. Yet many engaged market actors who were excited to proceed with shovel-ready projects experienced challenges as they waited for guidance, slowing initial progress.

To meet the rapid needs of an eager market, particularly when governments are trying to push comprehensive reforms, agency leaders and legislators need to ensure that the implementing organizations are sufficiently staffed and resourced. This should cover not only program staff (both existing and new), but also functional teams (e.g., legal, communications, stakeholder engagement), where bottlenecks often form because they support multiple programs. This can include surge-capacity resources (short- and/or long-term, internal or external), such as bringing on technical and subject-matter experts to enable fast and fair processing. An implementation staffing needs assessment should also ideally be conducted as part of the policy-formation and legislative process so that appropriate resources can be allocated early and efficiently. To further ensure efficient resource allocation and implementation speed, legislators should consider expedient ways to drive greater clarity and specificity at the point of legislation, where applicable.

Iterative capital deployment programs 

It is tempting to try to get things totally right on the first try, particularly when it comes to government funding, which is highly scrutinized. That said, an approach which has delivered success is to release capital in phases. Here, instead of issuing all funding at once, the program office (especially for a large competitive program) might split the deployment into phases over time: the first phase is executed quickly, and subsequent phases are introduced later. While this may introduce some short-term friction, it not only gets capital and projects moving faster; perhaps as importantly, it gives both funders and the market chances to build momentum, learn from one round, and improve in the next. A good example of this was the DOE’s Grid Resilience and Innovation Partnerships (GRIP) program, a $10 billion program from the Bipartisan Infrastructure Law to enhance grid flexibility and improve the resilience of the power system against extreme weather. That allocation was split into three phases issued over a few years. Over those three phases, the quality and ambition of the applications and the programs funded increased significantly, as all stakeholders were able to learn and adapt with each round.

Progress over perfection

For public programs, this is a major challenge driven by misaligned risk tolerances. Many ambitious government funding programs face a strange pickle. On one hand, they have the duty, mandate, and power to drive innovation, do deals ahead of the commercial markets, and derisk promising solutions to the point where they can scale on their own and deliver the broad public benefits associated with them. On the other hand, the funds being used are raised from people’s hard-earned money or from a state or country’s valuable natural resources, and neither should be handled frivolously. Fear of the political fallout from a loss may drown out the benefits of delivering the projects; that is, in the eyes of an underwriter or program officer, the downside risk may often outweigh the upside creation, and doing no deal may feel safer than doing a ‘bad’ deal.

Take the case of two companies that received funding from the DOE’s Loan Programs Office (LPO) in the early 2010s. One is the solar cell manufacturer Solyndra, whose idea was to use cylindrical, thin-film solar cells that could capture the sun’s energy from multiple angles, compared to their conventional flat-panel counterparts, and thereby bring down levelized costs. Solyndra received $535 million in federal loan guarantees, which it defaulted on when market conditions changed and it went bankrupt (Solyndra, like other promising new solar companies, was undercut by plummeting prices of silicon solar cells from China). The default filled the news cycles for months, sparked several congressional hearings and investigations, and left a profound imprint in the minds of many program funders. No government underwriter wants to be dragged to the Hill or see their name in the papers for this reason. On the other end of the spectrum, you have a then-little-known car company called Tesla, which received a $465 million loan to expand electric vehicle production. Tesla, as we know, went on to become one of the most transformational and successful automobile companies in recent history. And yet comparatively little fanfare has been made about the government’s role in making that a success. Two loans, about the same size, issued around the same time, by the same organization. Not only did their actual outcomes differ, but the financial outcomes to the upside and the political fallout to the downside were diametrically opposed.

This contrast becomes even more stark when looking at the broader picture. Take again the example of the LPO (now the Office of Energy Dominance Financing (EDF)). It has historically had a loss rate of less than 3%, on par with most commercial and investment banks – entities which often invest in markets with more proven solutions and less uncertainty. Moreover, other governmental programs, like DARPA, NIH, and ARPA-E, also have strong investment track records. All that said, the perception of risk has led to a deep conservatism and a fear of doing deals that might go sideways. This creates huge process drag for the entire organization and curbs the rate of progress, as underwriting processes become elongated and difficult to navigate. For many loan applicants, it can take several years to get through the loan process; anecdotally, some applicants have complained it was slower and harder than what they could wrestle from the commercial market.

Overall, this is a situation where the tolerance for and understanding of losses in public financing needs to be reconceptualized and appropriately balanced, given the mission. Not all losses are bad. Individual losses are not necessarily detrimental if outweighed by net gains. There are significant opportunity costs to not taking risks appropriately. More can be accomplished without jeopardizing the public interest.
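
A back-of-the-envelope portfolio view illustrates the point. The sketch below uses entirely hypothetical loan figures; only the sub-3% loss rate is borrowed from the LPO discussion above, and the spread and tenor are assumptions for illustration.

```python
# Hypothetical illustration: a public loan portfolio can absorb individual
# losses and still come out ahead. All figures are assumptions, not LPO data.

loans = 40                 # number of loans in the portfolio (assumed)
avg_principal = 0.5e9      # $0.5B average loan size (assumed)
default_rate = 0.03        # ~3% of principal lost, in line with the loss
                           # rate cited above
annual_spread = 0.015      # 1.5% net interest margin (assumed)
avg_tenor_years = 10       # average loan life (assumed)

portfolio = loans * avg_principal
interest_income = portfolio * annual_spread * avg_tenor_years
expected_losses = portfolio * default_rate
net = interest_income - expected_losses

print(f"Portfolio size:    ${portfolio/1e9:.1f}B")
print(f"Interest income:   ${interest_income/1e9:.1f}B")
print(f"Expected losses:   ${expected_losses/1e9:.1f}B")
print(f"Net to taxpayers:  ${net/1e9:+.1f}B")
# Even with some loans failing outright, the portfolio nets out positive,
# before counting any non-financial public benefits the projects deliver.
```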

Given that governments have historically demonstrated their ability to be good stewards of capital over long periods of time, the inherent risk that accompanies the pre-commercial asset classes they support, and the urgent need to make progress and unlock markets, more streamlined, faster underwriting processes that increase the speed of execution are critical and warranted. Furthermore, governmental funding organizations need more ‘air cover’ so that individual misses do not get over-politicized but are understood as reasonable elements of a process for progress. Accomplishing this process and cultural shift requires major work internally with staff, policymakers, and the broader public. This is an aspect where integrating concepts like state entrepreneurship and the more balanced, portfolio-based risk and reward approaches described previously can also unlock new investment and risk management strategies, greater societal benefits, and increased comfort for staff and leaders to usher in that kind of transformation.

Creating longer and more durable windows of action

A more obvious reason to move quickly on policymaking is to deliver benefits faster to project and community stakeholders – which is, of course, the main objective of the policy in the first place. But beyond that, investors understand that the political windows of favorable conditions can be short. This is particularly acute for assets with long development cycles and/or high upfront costs: building new manufacturing facilities or developing interregional transmission lines can take years. Indeed, it was estimated that 60% of committed IRA-funded clean energy manufacturing projects were not slated to come online until between 2025 and 2028.

Moreover, the IRA timeline created some interesting time crunches. Though the law was thoughtfully conceived with some longer time horizons for tax credits, in practice the actionable investment window for that version ended up being incredibly short. The law was passed in August 2022. It then took time for programs to be formed and guidance to be released, as described earlier. In parallel, the investment community had to come up the learning curve on the new opportunities and build ecosystem collaborations (which were themselves still reacting and forming). Next thing you knew, as the election season ramped up and policy uncertainty increased, many investors started to park their capital and take a wait-and-see approach in early 2024, as evidenced by strong increases in fund ‘dry powder’ (raised but uncommitted capital) alongside a sharp dropoff in actual capital deployment and assets under management at that same time.

Moving quickly is critical to give investors, communities, and other associated stakeholders as much time as possible to understand the landscape, develop deployment pathways, build new solutions, and ideally iterate, given the chance to take more shots. 

That was a shorter-term perspective. In the spirit of leveraging speed to open the front end of the window, longer-term thought should also be given to how to extend the investability window. Investors typically do not decide to invest in a project purely on the project’s merits; particularly when entering a new sector, the decision is also driven by the commercial prospect of potential follow-on deals. Short political windows and the associated ‘stroke-of-pen’ risks often raise major flags for the risk committees at financial institutions. As mentioned earlier, many IRA programs arguably had much less than two years of impact. Deeper policy stability is critical to ensure continued, long-term investment. That type of stability has, at least historically, been a hallmark of the US regulatory and commercial system and a positive differentiator in the race to attract capital and talent from across the globe. For sectors with high strategic value, high early capital requirements, and long investment cycles, policymakers should consider mechanisms to provide longer-term policy guarantees, giving investors assurance that the window is long enough to justify their business cases.


Chapter 4. “Okay, now let’s get in formation”: programmatic policy synchronization for fast market formation

Catalyzing the deployment of new infrastructure usually requires a bevy of policy actions. This matters because transforming a sector may require several changes in economics, behaviors, and processes. Especially when responding to expansive new legislation and/or executive actions, the government may be required to deliver a host of new policy programs, including new rules (e.g., permitting reforms, categorical exclusions), funding allocations and programs, implementation guidance (e.g., for tax rules), and informational reports (e.g., National Lab technical studies, commercialization reports, etc.). These activities are highly valuable, as they tackle different aspects of the deployment challenge, and they take huge amounts of effort to be effective. However, they often get rolled out and implemented through separate and independent processes. This can actually stall and frustrate deployment efforts, as most investors will want to see the major policy puzzle pieces locked in place before getting comfortable enough to deploy capital – for most risk organizations, ‘stroke-of-pen’ risks are red flags. Conversely, this hesitation can cause consternation for policymakers and advocates, who may feel that they have done the heavy lifting in passing new legislation but don’t see a corresponding flood of serious commitments immediately after.

Policy deliverable schedule alignment

One way to address this is to implement a visible, synchronized schedule showing all the related policy efforts and programs for an initiative. Interdependencies between those activities would be easier to identify, and relevant stakeholders could see when all the major puzzle pieces would be in place and, in turn, align their investment and advocacy efforts accordingly.

An example of this comes from carbon capture: the federal tax credit for carbon dioxide sequestration (45Q) was first enacted in 2008; however, the first set of tax guidance was not issued until 2020, as the IRS, Treasury, EPA, and other agencies had to build a suite of complex regulations around reporting, verification, stakeholder comments, and more. As a consequence, though the tax credit was in place (and though strong complementary financing capabilities from renewables tax equity and thermal power plant development already existed), little to no investment went into the sector, effectively ‘wasting’ many years of eligibility and frustrating many interested stakeholders.

By contrast, take advanced transmission technologies: despite being rapidly deployable and cost-effective solutions for increasing transmission and distribution system capacity and performance, they have historically been underutilized. To increase awareness and deployment, the federal government developed a suite of products, including the formation of the Federal-State Modern Grid Deployment Initiative, grant funding via the DOE’s Grid Resilience and Innovation Partnerships (GRIP) program, loan funding from the LPO’s Energy Infrastructure Reinvestment program, categorical exclusions in federal environmental permitting for upgrading existing transmission lines, a Pathways to Commercial Liftoff report on grid modernization, new national deployment goals, technical reports and new assistance programs from the National Labs, and more. These were all released within a couple of months of each other in 2024, giving the market a fuller picture to react to and make greater progress against. Since then, dozens of states have passed new laws, and the number of projects being pursued and funded has been on the rise.

Capital source navigators

Relatedly, new legislation may create several new governmental funding programs or change the missions of existing ones. Many of these efforts and changes might go unnoticed or be disproportionately utilized. Take energy- and climate-tech startups, which may be seeking capital to grow or transform their businesses. Government capital tends to be attractive: it is often willing to embrace early technology risk (unlike most commercial capital), is often non-dilutive to the company’s capital stack, and can give the company extra visibility. Most people in the energy sector will know of programs like DOE’s Advanced Research Projects Agency-Energy (ARPA-E) or the Loan Programs Office. Far fewer may know that funding may be available through ‘non-energy’ agencies like the US Department of Agriculture, the Small Business Administration (SBA), the General Services Administration (GSA), or the Department of Defense (DOD). These programs have increased the pool of capital available and provided a wider array of financing products, increasing the chances that the right kinds of capital are available to serve the spectrum of company needs.

Initiatives like the Climate Capital Guidebook, published in 2024, can be helpful to make these types of programs less opaque and easier to access, especially for startups and small businesses. At the state level, databases like DSIRE USA have been providing a beneficial service aggregating information on state incentive programs for years. 

Making information on federal, state, and/or municipal funding programs highly accessible and searchable from a centralized, common location is key; otherwise, they may get lost, buried in webpages that few know how to find. This process can be further enhanced with cross-cutting discovery tools. For example, AI-based agents could continuously and automatically map these programs and keep the information organized, and large language models could help stakeholders more readily identify and compare the programs of best fit (matching things like a user’s capital needs against program ‘ticket sizes’, usage restrictions, and eligibility requirements).
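
As a toy illustration of the matching step, the sketch below screens hypothetical programs against a project’s capital need, stage, and preferred instrument; in practice, an LLM-based pipeline would first extract these structured fields from unstructured program documentation. All program names and fields here are invented for illustration.

```python
# Illustrative sketch only: a toy matcher screening funding programs
# against a project's needs. Program names and fields are hypothetical.
from dataclasses import dataclass

@dataclass
class Program:
    name: str
    min_ticket: float          # smallest award, $
    max_ticket: float          # largest award, $
    stages: set[str]           # e.g., {"pilot", "demonstration", "commercial"}
    instruments: set[str]      # e.g., {"grant", "loan", "equity"}

PROGRAMS = [
    Program("Hypothetical Demo Grant", 1e6, 25e6,
            {"pilot", "demonstration"}, {"grant"}),
    Program("Hypothetical Infra Loan", 100e6, 2e9,
            {"commercial"}, {"loan"}),
    Program("Hypothetical Co-Invest Fund", 10e6, 250e6,
            {"demonstration", "commercial"}, {"equity"}),
]

def match(capital_need: float, stage: str, instrument: str) -> list[Program]:
    """Return programs whose ticket size, stage, and instrument all fit."""
    return [p for p in PROGRAMS
            if p.min_ticket <= capital_need <= p.max_ticket
            and stage in p.stages
            and instrument in p.instruments]

# A developer seeking a $150M demonstration-stage equity co-investment:
for p in match(150e6, "demonstration", "equity"):
    print(p.name)   # prints: Hypothetical Co-Invest Fund
```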

Zooming out from individual needs, this would also augment new solution developers’ and investors’ ability to understand more comprehensively the relationship between governmental capital programs and the role they play in energy solution commercialization and deployment. For instance, it would make it easier to chart what programs are available to technologies at different stages of maturity: from the National Science Foundation (NSF) for fundamental research and ARPA-E for more applied technology development and early manufacturability demonstration, to planning grants from various agencies and federal tax credits for infrastructure projects.

Similarly, funding programs could be mapped against project development phases. In areas like international project finance, this exercise would be valuable to demystify which programs and institutions are suitable for which phases of project development. In some cases, an international energy project developer using American technology might need to navigate a gauntlet of different funding institutions: from the US Trade and Development Agency (USTDA), which provides grants for front-end engineering design (FEED) studies, to the Export-Import (EXIM) Bank for domestic manufacturing loans, to the Development Finance Corporation (DFC) for equity co-investment and political risk insurance. Not to mention multilateral development banks like the World Bank and the International Finance Corporation (IFC), which themselves have an array of funding programs and instruments. Here, providing clearer, more cohesive representations of how a patchwork of funding sources can work in tandem and be packaged together can deliver outsized strategic competitiveness for American companies, helping level the playing field against firms backed by governments that can provide fully wrapped financing solutions.


Chapter 5. “C.R.E.A.M.”: More holistic valuation tools and methodologies

This facet addresses a challenge that is still too common in solution valuation: not valuing the companies themselves, but proving to customers and investors that the proposed energy solution is worth adopting. This matters especially because regulation alone is usually not a salve for driving energy transition activities in free market economies. While regulation may steer what should happen, costs and economics are often bigger drivers of how quickly that transition occurs. Borrowing a chemistry analogy, economics determines the activation energy and kinetics of transition policy. Solutions need to demonstrate their fit and attractiveness in often economically competitive and constrained environments. In addition, stakeholders with shared interests (e.g., not just federal, but also at the industry and state & city levels) should invest in building common valuation infrastructure (e.g., resource characterization data, system models, and more) that lowers the barriers to deployment and investment. Doing so will also make it easier to appropriately size any associated financial programs (like subsidies and grants), ensuring there is sufficient catalysis to get multiple stakeholder groups moving and investing.

Understanding end-use unit economics 

This means that solution providers, policymakers, and advocates need to develop a very deep understanding of the commercial drivers and realities of the markets they are looking to serve. They need to put themselves in the shoes of their customers and related stakeholders. This is particularly important when trying to sell solutions into new or competing markets or applications that are not otherwise required to change (e.g., by regulations on fuel use or emissions). This should seem obvious, and it should always have been a primary focus, yet it’s a step that some innovators, policymakers, and advocates have not always adequately prioritized.

Skipping it is a recipe for failure, particularly in the infrastructure space. Saying a solution is good for the world is not sufficient to get traction; a need does not mean there’s a market. A strong, detailed, and accurate understanding of customer unit economics is foundational to the success of any infrastructure. This should encompass rigorous estimation of how much a solution costs to produce and deliver (which tends to be underestimated in early stages, leaving stakeholders surprised later by cost overruns upon implementation). It should likewise reflect an understanding of the customer’s cost and value drivers, as these affect project revenue and adoption readiness. This question sometimes gets missed in the early stages, but in later stages, particularly when seeking significant capital to fund projects, it becomes highly pertinent, as investors take a much more critical view of the economic potential of the project, both to the upside and the downside. As part of developing detailed assumptions, teams should build reasonable sensitivities and scenarios that illustrate how the financial performance of the project may vary with changing internal and macroeconomic conditions.
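
As a minimal sketch of the kind of sensitivity analysis described above, the following uses placeholder figures for a hypothetical generation project; real models would be far richer (financing structure, degradation, price curves), and every number here is an assumption.

```python
# Minimal sensitivity-analysis sketch. All project figures are hypothetical.

def project_npv(capex, annual_output, price, opex, years=20, rate=0.08):
    """NPV of a simple energy project: upfront capex, flat annual cash flow."""
    cash_flow = annual_output * price - opex
    pv = sum(cash_flow / (1 + rate) ** t for t in range(1, years + 1))
    return pv - capex

# Assumed base case: $120M capex, 400,000 MWh/yr at $45/MWh, $4M/yr opex.
base = dict(capex=120e6, annual_output=400_000, price=45.0, opex=4e6)

print(f"Base case NPV: ${project_npv(**base)/1e6:,.1f}M")

# Stress three key drivers +/-20%, holding the others at base values.
for key in ("capex", "price", "opex"):
    for mult in (0.8, 1.2):
        case = dict(base, **{key: base[key] * mult})
        print(f"{key} x{mult:.1f}: NPV ${project_npv(**case)/1e6:,.1f}M")
# A 20% price drop flips this hypothetical project's NPV negative, which is
# exactly the kind of fragility investors want surfaced early.
```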

Diagnosing this early not only helps the solution providers to be better positioned for commercial success with their customers but also enables them to catch potential flaws early enough, make different design choices, and ensure the product’s value proposition is more robust and resilient. In turn, this helps reduce project risk and gives comfort to financial investors along the way. 

Governments can help by driving easier price discovery and transparency, collaborating with project stakeholders (especially developers and customers) to compile and share relevant cost and value data in more public forums. Reports generated by government agencies (e.g., the series of Pathways to Commercial Liftoff reports by the US Department of Energy’s Office of Technology Transitions), national laboratories, and private third-party analysts (e.g., BNEF, S&P, Lazard) have made strong contributions toward filling those information gaps. Continuing to support and drive efforts like those would be valuable. State and regional actors (e.g., groups of state economic development organizations) can also help make the data more granular, and perhaps more local, which would drive even more actionability. Governments could also compel information disclosure through legislation (akin to efforts in healthcare and drug pricing transparency over the past few years) or require a greater level of disclosure as part of some government-funded programs, especially in new industries.

Development of accessible, trustworthy technoeconomic analysis tools

Intending to perform the types of deep technoeconomic analyses described above is one thing; having the ability to do so is another. In many situations, you suffer from either data unavailability or asymmetric access. Take the electricity system, for example: very few stakeholders (usually utilities and grid operators) have deep access to information about how the system and its underlying assets are performing. Sometimes this is intentional, due to potential concerns around security and market manipulation. Sometimes, as often happens when customers request their own historical hourly usage data, the process is simply archaic and difficult.

That said, a major downside of this situation is that third parties are often not in a position to interrogate resource plans, challenge priorities, or test and validate new ideas. Third-party analytics tools exist, but they sometimes lack the requisite data, fidelity, or trust needed to keep their results from being readily dismissed. That also makes it too easy for grid operators to brush aside new ideas without adequately considering them. Without accessible data and models, beneficial solutions can be excluded from the menu of options or kept from reaching the market.

This has been a pretty common battle for an array of more ‘disruptive’ energy technologies, like distributed energy systems and advanced transmission technologies. But it also occurs with generation. One example is the prospective transition of the Brandon Shores coal plant in Maryland. The plant’s owner, Talen Energy, filed to retire the facility because it was no longer economically viable to operate (following the trend of many coal plants around the country). However, the grid operator, PJM, sought to force an extension of its operation for four years (via a reliability-must-run (RMR) contract) until new transmission capacity could be brought in, citing potential reliability concerns. State and congressional officials strongly opposed this plan, as extending operation via an RMR contract would increase costs to ratepayers and increase local pollution. A group of advocates and energy experts, led by the Sierra Club and GridLab, proposed replacing the plant with a mix of energy storage, reconductoring, and voltage supports, which they claimed would be not just cleaner but more cost-effective than what was proposed – even more so in the likely event that the transmission project is delayed. A similar concept was deployed in New York City at the Ravenswood power plant. However, that suggestion was dismissed by PJM, without real allowance for iteration, arguably because the advocates had not used the preferred modeling methodology and were not a project sponsor. The decision is also reflective of the inability of PJM’s market structure to effectively value energy storage’s benefits as both an energy and a transmission asset. Though an agreement was ultimately reached, many stakeholders view the settlement as suboptimal, not just because of its outcome (even more so as it does not help wider issues like high capacity market prices), but because the advocates did not have the modeling tools in place to meaningfully evaluate the options and force a more substantive dialogue with the grid operator.

Rebalancing this situation is essential to allow additional key stakeholders, like policymakers, regulators, project developers, solution providers, and other experts, to interrogate the opportunity space and propose actionable new ideas. This should include creating common, accessible infrastructure for the data, models, and evaluation methodologies that an interested stakeholder would need to assess potential project options – which are otherwise difficult or prohibitively expensive to access. Government endorsement of these tools also greatly assists the credibility of a prospective solution.

The Australian Energy Market Operator (AEMO), for example, did exactly this by implementing the world’s first connection simulation tool: a digital twin of the country’s electric grid that project developers are using to rapidly evaluate their prospective solutions in an accurate, safe, and trustworthy environment.

Also consider two approaches to accelerating geothermal project development, where insufficient quantification of the resource can add significant development costs and project risks. One example is Project InnerSpace, a collaborative effort funded by philanthropy, the US federal government, and Google to provide a common, open set of surface and subsurface characterization data. Another is the Geothermal Development Company, a special purpose vehicle fully owned by the Kenyan government, which performs the resource characterization and steam development itself and shares the information with prospective geothermal power producers; this has significantly lowered barriers to entry and made Kenya a global leader in geothermal power production. Both approaches are helping project investors be more targeted, capital-efficient, and prolific.

Similarly, Virginia recently passed a grid utilization law requiring its utilities to measure the utilization of their transmission and distribution systems, including establishing metrics as well as plans to improve against those metrics. If implemented well, and if the associated data is made available, it can drive more targeted investments, manage customer energy bills more cost-effectively, and allow new solutions like virtual power plants, distributed energy, and grid-enhancing technologies to be appropriately valued and play bigger roles in the energy solution mix.
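
For illustration only (not the methodology prescribed by the Virginia law, which has not been detailed here), a utilization metric of the sort described can be computed directly from hourly loading data; the line rating and load profile below are invented.

```python
# Illustrative sketch: simple utilization metrics for a transmission or
# distribution asset, computed from hourly loading data. Figures invented.

def utilization_metrics(hourly_load_mw: list[float], rating_mw: float) -> dict:
    """Average and peak utilization, plus hours above 90% of rating."""
    n = len(hourly_load_mw)
    avg = sum(hourly_load_mw) / n / rating_mw
    peak = max(hourly_load_mw) / rating_mw
    congested_hours = sum(1 for mw in hourly_load_mw if mw > 0.9 * rating_mw)
    return {"avg_utilization": avg, "peak_utilization": peak,
            "hours_above_90pct": congested_hours}

# A line rated at 500 MW that mostly runs light but peaks hard:
load = [200.0] * 8000 + [480.0] * 760   # 8,760 hourly observations
print(utilization_metrics(load, rating_mw=500.0))
# Low average but high peak utilization is the signature where solutions
# like VPPs or grid-enhancing technologies can defer traditional upgrades.
```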

Quantify, aggregate, and internalize external benefits and costs

Many solutions labeled “climate” often have a wide array of other benefits – lowering costs, boosting reliability, creating jobs, improving health, to name a few. In some cases, reducing emissions might be a secondary or even tertiary benefit. This often results in the cost-benefit of a potential solution being understated and capital being underallocated. Alternatively, it can lead to greenwashing, where the benefits are overstated relative to the impact and capital is misallocated.

In some cases, as in electricity markets and industrials, decisions are made on a narrow set of financial criteria that ignore the broader value proposition – e.g., which solution has the lowest upfront cost? What is the least costly way to meet power demand on an hourly basis? Does the solution pay itself back in three years? There have been some directional approaches that at least help with the first issue, benefit underquantification.

An example is FERC Order 1920, a rulemaking that covers new approaches to transmission planning and cost allocation. It called for decision criteria to be expanded from just cost and reliability to a consideration of seven benefits: avoided or deferred infrastructure costs, reduced power outages, reduced production costs, reduced energy losses, reduced congestion, mitigated extreme weather impact, and reduced peak capacity costs. As the order gets implemented across the country, it should provide a considerably fairer and more holistic basis for assessing the potential benefits of transmission projects and will likely increase the viability of game-changing concepts (e.g., reconductoring, interstate/interregional transmission).

In other cases, as in some larger governmental grantmaking or policy efforts, a suite of benefits may be quantified but estimated and presented in siloes, where the benefits appear orthogonal and nonadditive. The key to addressing that is developing valuation frameworks that do the difficult task of weighing the benefits together, in a clear manner that’s directly relatable to the investment thesis. Applying ways to translate those benefits commensurately into the project’s financial terms is critical to ensuring they get prioritized and realized. The IRA had elements of this, at least conceptually, for instance by applying bonus credits to low-carbon energy projects that paid fair wages, were sited in economically disadvantaged areas, or used domestically manufactured equipment. At the local level, there are state laws like Montana’s transmission law, which established a new, elevated cost-recovery mechanism for transmission projects that use more efficient, high-performance conductors.
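
A minimal sketch of what “weighing the benefits together” can look like in financial terms, loosely patterned on the Order 1920 benefit categories above; every dollar figure, the discount rate, and the project cost are assumptions for illustration.

```python
# Sketch: stacking benefit streams into one present-value figure rather
# than evaluating them in siloes. All dollar values are hypothetical.

DISCOUNT_RATE = 0.07
YEARS = 30

benefit_streams = {                       # assumed $/year for each category
    "avoided_infrastructure": 12e6,
    "reduced_outages": 5e6,
    "reduced_production_costs": 9e6,
    "reduced_energy_losses": 2e6,
    "reduced_congestion": 7e6,
    "extreme_weather_mitigation": 3e6,
    "reduced_peak_capacity_costs": 6e6,
}

def pv(annual: float, rate: float = DISCOUNT_RATE, years: int = YEARS) -> float:
    """Present value of a flat annual benefit stream."""
    return sum(annual / (1 + rate) ** t for t in range(1, years + 1))

total_pv = sum(pv(v) for v in benefit_streams.values())
project_cost = 350e6                      # assumed upfront cost

print(f"Total PV of benefits: ${total_pv/1e6:,.0f}M")
print(f"Benefit-cost ratio:   {total_pv/project_cost:.2f}")
# Evaluated one stream at a time, no single benefit justifies the $350M
# project; stacked into one figure, the case clears the bar.
```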

Going further on the point of internalization, there are more structural issues where markets may not be designed to solve for the outcomes stakeholders are seeking. In electricity, for example, power markets generally solve for meeting demand at the least cost over a short time period (following a narrowly defined reliability scheme). This may not only ignore solutions that could save more money over longer periods of time, but it also does not explicitly solve for attributes like resilience, sustainability, or flexibility. That often means external, out-of-market solutions are needed to create the desired outcomes (e.g., reliability-must-run contracts, tax credits, renewable energy credits). While those have had great impact on their specific goals, they are imperfect and may have unintended consequences, like distorting market behavior or disincentivizing cost-cutting innovations. Solving this at greater scale and on a more fundamental basis may in some segments require greater reforms, like redesigning electricity market structures, revisiting the Energy Policy Act and the Federal Power Act, and more.


Chapter 6. “Take it to the bridge”: Rethinking the ‘missing middle’ problem

For the last face of the cube, we need to describe the role that a whole cadre of investors must play – not just venture and early-stage investors, but particularly later-stage capital providers such as project financiers (equity and debt), institutional investors, pension funds, insurance funds, and even utility balance sheets. They have a massively important role and arguably need to be more proactively involved in ensuring the maturation of promising earlier-stage solutions. The ‘missing middle’ problem in energy – where a lack of transition and demonstration capital prevents promising venture-backed solutions from progressing to mainstream infrastructure – is well known. While continued innovation is needed to form new capital solutions to fill that gap, there is a lot that investors can do to shrink the gap and make that chasm easier to traverse.

Engaging earlier to pull companies to maturity

Though they control the majority of assets under management and can support bigger ticket sizes, late-stage investors’ risk tolerances tend to skew conservative by the nature of their investment mandates. They tend to concentrate their efforts on solutions with established track records and large addressable markets that provide greater certainty of execution and relatively consistent returns. And they typically have more than enough deal volume to justify that focus. Consequently, though these investors typically at least follow major new trends, they tend to be hesitant to enter newer markets; in fact, many are content to simply ‘wait’ for the market to come to them before they engage. This introduces several challenges.

First, at the most basic level, it means that many lower-cost sources of capital may be hard to access for newer solutions, climate-related or otherwise, which makes it more challenging for them to compete on a level playing field early on. Second, and just as importantly, it means that companies may miss critical opportunities to get sharper earlier. The attributes that investors value change dramatically over the life cycle.

For instance, many early-stage products are rated on attributes like uniqueness, differentiation, disruptiveness, and total addressable market – the attributes that tend to attract venture capital and garner the most visibility in the media. By contrast, at later stages, especially in project finance, uniqueness and differentiation might actually be seen as sources of risk – risks that get compounded if the solution is supplied by a new market entrant. Moreover, fungibility and supplier optionality may hold even greater weight: to mitigate the risk of things going wrong with a project’s vendor, project investors are often comforted by knowing there are substitutes that can be brought in as part of a contingency plan. In addition, though addressable market matters to both groups of investors, early-stage investment tends to focus on alignment with macroeconomic trends, while at later stages micro arguably supersedes macro, as diligence becomes more deeply and narrowly focused on project-specific questions like contracts, pricing, and execution. Furthermore, late-stage underwriters may need to conduct deeper diligence and acquire more data to get comfortable with new technical attributes, features, and vendors, which can drag on their process and cost them additional time and money. For earlier-stage investors, by contrast, that deep focus on new features is already an integral part of the diligence and value-creation process, and they get rewarded for it accordingly.

Overall, not understanding these differences can create shocks for new companies that had great success attracting capital early but get stopped in their tracks when graduating to the next level of maturity. It has also often meant that prospective solution providers miss the opportunity to sharpen their pencils, address more detailed questions, and have their key assumptions stress-tested. Even if they are not prepared to transact, later-stage investors – especially in infrastructure project finance – and their partners (such as independent engineering firms and insurers) should devote additional time to engaging with promising technologies early on and bringing them along. This is also in the investors’ interest: it allows them to get up the learning curve faster, positions them to take advantage of opportunities when the markets come around, and ensures that the solutions that do make it to later stages are of higher quality and more likely to yield successful transactions.

Formulate deal templates and archetypes

The previous steer toward early engagement comes with a conundrum. For many investors, it is hard to meaningfully engage until there is a complete deal on the table. By complete, I mean a fully fleshed-out representation of all core theses, puzzle pieces, assumptions, and more, mapped to a specific, actionable situation. This is the scaffolding onto which financing packages are built and the basis on which most risk managers are trained to evaluate the financeability of a solution. This can be true for governmental and commercial financing programs alike; “bring us deals to look at” is a common refrain.

The conundrum arises because many of those details may not be fully known early on, so there may not be a fully formed deal to bring, and there might not be a clear set of underwriting criteria for a company to aim at. Even offices like the DOE Loan Programs Office (LPO), which were proactive and engaged, struggled to get significant market traction for a few years, to both their own and the market’s frustration. Instead of both sides staring at each other like the Spider-Man meme, the impasse can be broken by creating deal templates and archetypes, which take a more hypothetical representation of assumptions, including reasonable scenarios, and frame what the financial structure and execution pathway would look like for each.

The LPO, for instance, did just that, creating several deal archetypes based on customer type and technology, with terms and execution timing aligned to the associated risk. Deals involving more established energy solutions (e.g., solar, storage, transmission) with investment-grade utilities providing corporate guarantees were given faster execution processes, commensurate with the comparatively low level of credit risk involved. Deals under narrowly focused programs, like the Advanced Technology Vehicles Manufacturing program, tended to have more well-defined execution processes. Deals carrying more default risk (e.g., newer technologies, non-investment-grade counterparties) might take more time to diligence and underwrite. This benefitted both the Office and the applicants by creating clear, mutually understood expectations. Providing this clarity greatly increased deal volume and traction, as more clients brought more loan applications and had greater confidence in the transaction process they were entering.
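
Schematically, such an archetype mapping might look like the sketch below. The structure echoes the LPO examples above, but the specific tiers, fields, and timelines are invented for illustration and are not the LPO’s actual criteria.

```python
# Hypothetical deal-archetype table: tiers and timelines invented for
# illustration, not drawn from the LPO's actual underwriting criteria.

deal_archetypes = {
    "mature_tech_utility_guarantee": {
        "examples": ["solar", "storage", "transmission"],
        "counterparty": "investment-grade utility with corporate guarantee",
        "diligence": "streamlined",
        "indicative_months": 6,
    },
    "defined_program_track": {
        "examples": ["vehicle manufacturing retooling"],
        "counterparty": "established manufacturer",
        "diligence": "well-defined",
        "indicative_months": 9,
    },
    "novel_tech_or_weaker_credit": {
        "examples": ["first-of-a-kind industrial project"],
        "counterparty": "non-investment-grade sponsor",
        "diligence": "extended underwriting",
        "indicative_months": 18,
    },
}

def expectations(archetype: str) -> str:
    """Give an applicant a clear, upfront picture of process and timing."""
    d = deal_archetypes[archetype]
    return f"{d['diligence']} diligence, roughly {d['indicative_months']} months"

print(expectations("mature_tech_utility_guarantee"))  # streamlined diligence, roughly 6 months
```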

More broadly, this is an area where companies, industry associations, and other advocates can play an active role, independently and in collaboration with governments. Formulating early pictures and archetypes for financial stakeholders and investors can significantly enhance feedback and capital formation.

Collaborate, Celebrate, and Replicate 

Finally, energy investors, especially in less mature sectors, need to find ways to be more open, as appropriate, about their investments and investment strategies. For product companies this usually comes a bit more naturally, since it is necessary to market their products, but later-stage investor communications about individual deals tend to be more guarded and high-level. This is usually not because the information is unavailable. Rather, investors may be protecting sensitive information, or protecting potential market share by not tipping other players off to the same strategy (particularly if they worked hard to open a new market). The richer information transfer usually happens privately during deal execution (e.g., as part of due diligence) or during project- or fund-level capital raises.

In newer spaces, however, progress itself is often catalytic (a rising tide floats all boats). The easiest way to persuade a risk committee to invest is to show precedents and comparables. An underwriter can stand up with greater confidence when they can show that someone else has done it before – and it can be even more validating when that ‘someone else’ is a competitor. Project investors should endeavor to share more about how they got comfortable with the deals, markets, and technologies involved, as appropriate. This is not out of altruism. Especially in emerging sectors, rapidly expanding the market and creating a foundational flywheel can be commercially more beneficial to the firm than purely protecting market share. Getting more investors comfortable makes the pie bigger and encourages other investors to pursue their own projects. That, in turn, sends actionable signals to ecosystem stakeholders (e.g., supply chains) to invest and create production and delivery efficiencies. Those efficiencies improve unit economics, reduce risk, and increase returns down the line and potentially even on the early projects (e.g., by reducing operating expenses and replacement-part scarcity). These scale efficiencies and the resulting flywheel should help investors generate more deal flow and revenue, building on the expertise and leadership position they have established. The advantages can be extended where significant public funds were allocated to a project, perhaps in exchange for preferable financing terms from the public funding institution.

Similar ideas apply to public-sector funding programs. Project announcements and ribbon cuttings, though important, are shorter-term, quick-hitting communications that tend to be formulaic. Policymakers should instead take a page from commercial product marketing and view their deployment policy efforts as products. As such, they should create consistent, thematic narratives within which individual initiatives, projects, legislation, and rules can all be framed. Even though they may be exhausted after delivering the policy itself, government officials and program officers should not undervalue this uplift phase. It is crucial to plan to spend ample time and resources explaining and repeating the micro- and macro-level significance of each product to investors and community stakeholders, especially in today’s competitive information environment. Building greater public buy-in, both nationally and within communities, is crucial, especially for longer-term, transformational projects. Government funders should work hard to bring additional local governments, nonprofits, and investors along to collaborate, celebrate, and replicate the successes.

Akin to what was mentioned in the valuation section about common information infrastructure, working closely with investors, industry, and other stakeholders to collate and amplify key investment theses, lessons learned, and other insights will be key to building investor confidence and creating more of a flywheel effect for follow-on investments.


Conclusion. “The Next Episode”

Taking a step back, we have laid out many ideas and concepts in this paper: harmonization, collaboration, acceleration, synchronization, valuation, and amplification, to name a few. It may seem daunting to look at policymaking across so many vectors, particularly when many of those puzzle pieces have to align and move in sync to unlock significant and consistent investment. That said, the power of a strong and well-intentioned administrative state, at both the national and local levels, lies in its very ability to wrap its arms around big challenges, partner with private industry, and leverage its resources to create high-value solutions with outsized benefits. This has been proven repeatedly, in the US and globally, across time and across sectors: going to the moon, inventing life-saving medical treatments, building massive infrastructure, delivering nanoscale electronics. States and towns should roll up their sleeves, find creative ways to collaborate, develop foundational information tools, and remove unnecessary market barriers. Investors should take an even more active role: making their needs known to early-stage companies and policymakers, building consortia to pull new opportunities forward, and creating an actionable set of commercial opportunities they would find attractive. What’s more, acting now to design and implement new, actionable administrative structures, especially at the state and local level, will not only create more high-value pathways for progress today; if well coordinated, it can also lay a foundation for federal action in both the near and long term. Though the challenges and the journey are complex, the opportunity before us is massive, the imperatives are clear, the transformations are tractable, and success is achievable. This can be done, so let’s get busy!

Your DNA, Your Data: Preventing Genetic Discrimination in the Growing Bioeconomy

The U.S. bioeconomy is a growing economic sector driving technological innovation and global competitiveness. A significant portion of this innovation, especially in biotechnologies that improve health, like drug therapeutics and precision medicines, relies on the collection of genetic and non-genetic biological information through varied channels, including academic studies, direct-to-consumer testing services, and pharmaceutical companies. While this can lead to improvements in U.S. public health and biotechnology, there is growing fear among scientists and legal experts that this information is insufficiently protected against exploitation by foreign actors seeking to supplant U.S. leadership in biotechnology, as well as against domestic actors who might use this data to target or discriminate against certain subsets of the population.

Legislation and policies outlining the storage and use of human-derived biological data in federally funded research, such as the NIH Genomic Data Sharing Policy, lessen the risks surrounding this data but are insufficient given advancements in biotechnology and the multifaceted collection, use, and sale of this information by private industry and law enforcement. Meeting the moment and protecting the American people will require: 1) expanded legislative protections for biological data; 2) biological data use protocols developed for federal agencies; and 3) standardized development, storage, and use of biological data. Pursuing these policy enhancements will safeguard fundamental rights and secure national infrastructure as we enter a new era of biological understanding and innovation.

Challenge and Opportunity

The U.S. bioeconomy is an increasingly important component of GDP due to the growing role of biotechnology in economic sectors including defense, agriculture, energy, and manufacturing, with the total market size of biotechnology expected to reach $2–4 trillion by 2030–2040. An important driver of this growth is the increased role of biotechnology in drug discovery, therapeutics development, and precision medicine.

This innovation, which includes novel treatments for cystic fibrosis, has necessitated the collection of massive datasets of biological data, including genetic, molecular, and biometric information. The federal government supports this by directly creating and managing, or funding through grants, large-scale biobanks drawing on individuals from varied geographic, demographic, and health backgrounds, primarily for use in biomedical research. Pharmaceutical, technology, and biotechnology companies have additionally collected millions of primarily genetic samples from members of the public; for example, more than 26 million people have taken direct-to-consumer genetic tests through companies such as 23andMe and Ancestry.

However, as human-derived biological datasets grow, they become strategic targets. There is increasing concern that this information is insufficiently protected against exploitation by foreign actors seeking to supplant U.S. biotechnological leadership, as well as against malicious domestic actors who may misuse it to perpetrate genetic and biological data discrimination. While this may initially seem like a concern limited to employment or insurance, genetic discrimination opens the door to unchecked surveillance of, and intrusion into, the private lives of Americans. Cases of genetic discrimination have already been identified in education, such as that of Colman Chadam, a middle-schooler forced to leave his school because of genetic markers associated with cystic fibrosis. Additional civil liberties concerns arise around the non-consensual misuse of biological data in law enforcement investigations. Even as measures have been taken to secure biological datasets and minimize the number of people who might misuse this information, both public and private data collections face public scrutiny over the risk that anonymized biological data can be reidentified and over collectors’ ability to prevent data leaks.

The preeminent federal law guiding the use of this data in non-research settings is the Genetic Information Nondiscrimination Act (GINA). GINA, in combination with the Health Insurance Portability and Accountability Act (HIPAA) and other legislation, outlines the general use of genetic and biological information within the U.S. However, these laws leave regulatory gaps that allow the civil rights violations described above to arise. HIPAA only protects biological information that qualifies as “protected health information” held by covered entities, such as healthcare providers and their business associates. GINA only prohibits the use of genetic data to discriminate in employment and health insurance coverage, and it only applies to employers with 15 or more employees. Moreover, no laws protect against discrimination based on non-genetic biological information, such as the data collected by private companies and personal health trackers. Protections beyond GINA at the state level are inconsistent and lacking, especially given the highly personal and largely unchanging nature of this information. And while guidance exists for the storage, sharing, and oversight of biological data at institutions that receive federal funding, no comparable technical standard exists for commercial and direct-to-consumer companies.

This regulatory vacuum allows deeply personal information about an individual’s disease history, familial relationships, and potential traits to be used to cause harm. This opens the door to dangerous infringements on personal safety and human rights, threatening the stability of the growing bioeconomy.

Plan of Action

To secure the U.S. bio-infrastructure, maintain global leadership in biotechnology, and safeguard American citizens from emerging threats to their privacy, the federal government must modernize its approach to human genetic and biological data. The current regulatory patchwork leaves the bioeconomy vulnerable to foreign exploitation and American citizens open to unchecked surveillance. The following recommendations establish a necessary framework to build trust in U.S. innovation while protecting individual liberty.

Recommendation 1. Modernize Genetic Privacy Laws to Close Security Gaps

Congress should advance legislation that comprehensively expands GINA to cover all forms of biological data – including but not limited to genetic, protein, microbiome, and biometric data – in order to close the loopholes present in the original law.

To ensure that Americans’ biological information remains their private property and not a tool of overreach, this legislation should expand nondiscrimination protections beyond health insurance and employment. It could be modeled on CalGINA, a 2011 California law that adds “genetic information” to existing protected classes, such as race, sex, and age. New federal legislation would expand on this model by codifying “biological information” alongside “genetic information” as a protected class within existing civil rights law. Additionally, this legislation should include direct language from CalGINA preventing business establishments, health facilities, housing providers, and state-funded programs from demanding genetic tests or penalizing Americans for their biological makeup. It can also draw on language from the EU’s General Data Protection Regulation (GDPR), which classifies genetic and biometric data under a special “sensitive data” category; GDPR language would help define the scope of genetic and biological data as well as the protections individuals possess with regard to this information.

By setting this new federal baseline, Congress will harmonize the currently fragmented regulatory landscape, clarify compliance for businesses that seek this information, and assure the American public that their biological information cannot be weaponized against them.

Recommendation 2. Establish Guidelines For The Federal Use Of Biological Data

To prevent unwarranted surveillance and privacy erosion, the president should issue a memorandum tasking the National Institute of Standards and Technology (NIST) and the Office of Science and Technology Policy (OSTP) with developing a “Federal Human-Derived Biological Data Use Standard”. 

To ensure the standard accounts for the full spectrum of federal use cases, NIST and OSTP, in coordination with the Office of Management and Budget (OMB) and the National Security Council, must conduct an interagency review of all current and potential federal uses of biological data. The Standard should specifically adopt a privacy-centric model, similar to that established by the 2019 Interim Department of Justice policy on genetic genealogy. Once developed, federal agencies must make the Standard available to state and local partners to serve as a model for non-federal policy. Additionally, OSTP should publish a public-facing framework that clarifies federal use cases. This framework must include clear definitions of biological data types, transparent access standards, a list of actions explicitly prohibited by the new protocol, and clear accountability mechanisms.

This standard will define strict policies for permissible federal use of biological data to streamline disparate protocols and prevent the over-exposure of citizen data by the federal government. It will additionally serve as a model to ensure consistent protection for Americans across all levels of government.

Recommendation 3. Implement Technical Standards For Biological Data Security And Innovation

The president should direct OMB to issue a Biological Data Protection Directive. This directive must mandate that federal agencies standardize the technical infrastructure regarding how human-derived biological data is collected, stored, and shared. 

Specifically, the Directive should:

– Standardize formats and metadata for human-derived biological data so that datasets are interoperable across agencies and federally funded research;
– Mandate baseline security protocols governing how this data is collected, stored, and shared; and
– Require that datasets be structured to be AI-ready, supporting secure, privacy-preserving analysis.

To implement these activities, the president should request that Congress appropriate $50–80 million over three years for staffing, training, and technical infrastructure. Standardizing this infrastructure will close security gaps that currently allow foreign adversaries to target American biological data, while driving market-wide adoption of secure protocols and reducing friction for U.S. businesses.

Conclusion

Biological data is becoming as central to modern society as a traditional digital footprint and carries similar far-reaching risks if misused. Without proactive federal action, the expanding role of biological data will continue to enable new forms of discrimination, privacy violations, and civil-rights harms, while leaving critical national assets vulnerable to exploitation by foreign competitors and unchecked domestic surveillance.

If successfully and fully implemented, these new policies would protect individual rights and secure the bioeconomy, establishing the United States as a leader in responsible biotechnology innovation. The first recommendation would provide clear and enforceable civil protections for all Americans, ensuring that individuals, businesses, and institutions cannot misuse biological information regardless of how it was obtained. This would prevent cases like that of Colman Chadam from recurring. The second recommendation would support more effective and accountable law enforcement by establishing rigorous, updated guidelines that limit federal overreach and ultimately reduce privacy risks while improving public trust. Finally, the third recommendation would strengthen federally funded and private biomedical research by developing standards that make biological data interoperable, AI-ready, and secure.

Together, these recommendations will provide clarity to both state and private actors on the appropriate development, storage, and use of biological information. This approach ensures that U.S. values define the global bioeconomy, creating lasting protections for the use of this information in critical facets of society.

Frequently Asked Questions (FAQ)

How will “biological information” be defined to prevent loopholes that allow discrimination based on inferred traits (e.g., ancestry, disability risk, or behavioral genetics)?

The proposed legislation will define “biological information” broadly to encompass all data derived from biological samples or measurements that can reveal health, behavioral, or ancestry-related traits. This includes molecular, biometric, and physiological data that can infer or predict personal or familial traits and diseases.

How will this expanded protection interact with existing civil rights laws and state-level equal protection statutes?

The proposal complements, rather than replaces, existing civil rights frameworks. By adding “biological information” to the list of protected classes, the law provides a clear and enforceable basis for addressing discrimination that current statutes do not explicitly cover.

Who will enforce these expanded protections, and what recourse will individuals have if they experience discrimination based on biological data?

Enforcement will rely on existing federal civil rights and consumer protection infrastructure. The Equal Employment Opportunity Commission will be empowered to investigate biological-data–based discrimination in employment, while the Department of Justice’s Civil Rights Division can address systemic violations across public institutions. The Federal Trade Commission will continue to regulate unfair or deceptive data practices by private companies.

How will this policy affect innovation in biotechnology?

Many small businesses and startups are already taking scattered approaches to protecting their data. This policy would remove burdens and accelerate biotechnological innovation by providing clear standards for the use of biological data for those entering the field and by reducing the delays involved in navigating a scattered regulatory landscape.

What are the national security implications?

Due to advancements in biotechnology, malicious domestic actors may use biological data to target, blackmail, or exploit segments of the American public who have voluntarily provided this information. This policy would minimize those risks by securing personal information and providing clear consequences for misuse.

How does this policy support U.S. global leadership?

By implementing this policy, the U.S. will be the first country to enact comprehensive policy on the security and use of personal biological datasets by federal and private actors. This policy will thus serve as a model for other nations recognizing both the dangers posed by this type of information and the necessity of protecting it.

Why Credit Access Makes or Breaks Clean Tech Adoption and What Policy Makers Can Do About It

Building Blocks to Make Solutions Stick

For clean energy to reach everyone, government can’t just regulate behavior. It has to actively shape credit markets in partnership with the private sector.

Access to affordable credit is a necessary condition for an equitable energy transition and an inclusive economy. Markets naturally concentrate capital where risk is low and returns are predictable, leaving low-income communities, rural areas, and smaller projects behind. Well-designed federal policy can change that dynamic by shaping markets—reducing risk, creating incentives, and unlocking private capital so clean technologies reach everyone, everywhere. This paper explores how policy-enabled finance must be part of the toolkit if we are going to drive widespread adoption of clean technologies, and can be summarized as follows: 

The critical role of policy-enabled finance to drive widespread economic opportunity  

Access to affordable credit is not just a financial tool – it is a cornerstone of economic opportunity. It enables families to buy homes, entrepreneurs to launch businesses, and communities to invest in technologies that reduce costs and improve quality of life. Yet, across the United States, access to credit remains deeply uneven. Nearly one in five Americans, along with entire regions – particularly rural and Tribal communities – are excluded from the financial mainstream, limiting their ability to thrive.

Private-sector financial institutions—banks, private equity firms, and other lenders—are designed to maximize profit. They concentrate on markets where risk is predictable, transaction costs are low, and deals are easy to close. This business model leaves behind borrowers and communities that fall outside these parameters. Without intervention, capital flows toward the familiar and away from the places that need it most.

Public policy can change this dynamic. By creating incentives or mitigating risk, policy can make lending to, or investing in, underserved markets viable and attractive. These interventions are not distortions – they are strategic investments that unlock economic potential where the market alone cannot, generating economic value and vitality for the direct recipients while yielding positive externalities and public benefit for local communities. And, importantly, these policy interventions act as a critical complement to regulation. Increasing access to credit is often the carrot that can be paired with, or precede, a regulatory stick, so that people are not merely pushed toward a particular behavior but are incentivized and enabled to adopt it.

For decades, policy-enabled finance has delivered measurable impact through multiple programs and agencies designed to support local financial institutions – regulated and unregulated, depository and non-depository – that are built to drive economic mobility and local growth. These policies and programs have taken multiple forms, but can generally be grouped into three categories:

– Direct investment and funding programs that channel public capital to and through financial institutions;
– Tax credits and incentives that steer private investment toward policy priorities; and
– Guarantees that mitigate risk for lenders and investors.

These tools enjoy broad recognition and bipartisan support because they work. They increase access, availability, and affordability of credit—fueling job creation, housing stability, and economic resilience. Policy-enabled finance is not charity; it is a proven strategy for broad and inclusive economic growth and a key tool for the policy-maker toolkit to support capital investment, project development, and adoption of beneficial technologies in a market-driven context that can increase the effectiveness of a regulatory agenda. 

Most importantly, policy-enabled finance has led to major improvements in wealth-building and quality of life for millions of Americans. The 30-year mortgage was created by the Federal Housing Administration in the 1930s as a response to the Great Depression. Before this intervention, only the very wealthy could afford to buy a home, given the high down payment requirements and short-term loans. Since this policy change, thousands of financial institutions have offered long-term mortgages to millions of Americans, who have bought homes that provide safety and security for their families, strong communities, and an opportunity to build wealth through appreciating assets. Broad homeownership is a public good, but until the government created the right policy and regulatory framework for the markets, it was out of reach for the majority of Americans.
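
A quick amortization sketch shows why this mattered. The figures below are hypothetical, but the structure is the historical one: pre-FHA loans were typically short, interest-only notes with a large down payment and the full principal due as a balloon, while the 30-year fixed mortgage converts the same purchase into a predictable monthly payment.

```python
# Hypothetical figures illustrating why long-term amortization changed
# who could buy a home.

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Level payment on a fully amortizing loan: P*r(1+r)^n / ((1+r)^n - 1)."""
    r, n = annual_rate / 12, years * 12
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

price, rate = 300_000, 0.065

# Pre-FHA-style terms: ~50% down, 5-year interest-only note, balloon at maturity.
down_old = price * 0.50
interest_only = (price - down_old) * rate / 12

# FHA-style terms: 20% down, 30-year fully amortizing loan.
down_new = price * 0.20
amortizing = monthly_payment(price - down_new, rate, 30)

print(f"Old terms: ${down_old:,.0f} down, ${interest_only:,.0f}/mo, "
      f"${price - down_old:,.0f} balloon due in 5 years")
print(f"New terms: ${down_new:,.0f} down, ${amortizing:,.0f}/mo, fully repaid in 30 years")
```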

Similarly, the Small Business Administration’s loan guarantee programs, launched in the 1950s, have supported financial institutions, including banks and non-bank lenders, in extending credit to small businesses that would otherwise be difficult to serve affordably. These programs have collectively helped millions of small businesses access the credit they need to grow, create wealth for themselves and their families, provide critical goods and services in their communities, and create a diverse and vibrant local tax base.

The financial markets, without these types of interventions, are not structured to prioritize access and affordability. Well-designed policy and complementary regulatory interventions have been proven to drive different behaviors in the capital markets that yield real benefits for American families and businesses.  

The role of access to credit in driving an equitable energy transition 

The public and private sectors have spent decades and billions of dollars investing in the development of clean technologies that reduce greenhouse gas emissions, create economic benefits, and deliver a better customer experience. Now that these technologies exist, the challenge is to deploy them for everyone, everywhere. 

The barrier to widespread deployment is that most clean technologies require an upfront investment to yield long-term benefits and savings (i.e., an initial capital expense that reduces ongoing operational expenses) – technologies like solar and battery storage, electric vehicles, and electric HVAC and appliances. That means people and companies with cash or access to credit are adopting these better technologies while those without are being left behind. This is yielding an even greater divide – delivering economic savings, health benefits, and better technologies to those who can afford them, while leaving dirty, volatile, and increasingly expensive energy sources for the lowest-income communities.
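
A simple financing sketch makes the divide tangible: the same technology, with the same savings, can be cash-flow positive or deeply negative depending on the credit terms a household can access. All figures below are hypothetical.

```python
# Same hypothetical technology, three borrowers with different credit access.

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    r, n = annual_rate / 12, years * 12
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

upfront_cost = 10_000   # e.g., an electric heat pump, $ (hypothetical)
monthly_savings = 120   # reduced energy operating expense, $/month

for label, apr in [("subsidized green loan", 0.05),
                   ("conventional bank loan", 0.09),
                   ("credit card", 0.18)]:
    pmt = monthly_payment(upfront_cost, apr, 10)
    net = monthly_savings - pmt
    sign = "+" if net >= 0 else "-"
    print(f"{label} ({apr:.0%} APR): ${pmt:,.0f}/mo payment, net {sign}${abs(net):,.0f}/mo")

# Output: the purchase is cash-flow positive at 5% APR, roughly break-even
# at 9%, and deeply negative at 18%: credit access decides who adopts.
```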

Many of the federal policy interventions to support deployment of these new technologies have, to date, come through tax credits. These policies have been politically popular, but uptake is often limited, particularly in rural and lower-income communities, because (a) they are complex, (b) they often require working with individuals or businesses with large tax liabilities, and (c) they typically come with high transaction costs, making smaller, more distributed projects harder to pencil out. The energy transition is a huge wave of change, but it is made up of many small component parts – individual buildings, machines, vehicles, grids – so if our policies fail to enable small projects to get done, we will fail to transition quickly and equitably.

To deploy everywhere, households and businesses need credit to offset capital expenses. To expand access to credit, we need supportive clean energy policies that work within and alongside local financial services ecosystems – just like we’ve seen with housing and small businesses. 

Regulation is insufficient to drive widespread adoption 

Pursuing a carbon-free economy is a massive undertaking and, understandably, much of the state and federal government’s toolkit has focused on regulation of people and businesses to drive behavior change – policies like fuel economy standards, pollution restrictions, renewable energy standards, and electrification mandates. This is an important piece of the puzzle – but insufficient to drive broad (and willing) adoption. 

Take, for example, the goal of electrifying heavy-duty trucks in and around port communities. States like California have attempted to set a date by which all new trucks on the registry must be zero-emission vehicles. Predictably, this mandate was met with substantial pushback from truck drivers, small operators, and industry associations who struggled to see a path to compliance without a major increase in cost.

It wasn’t until the regulation was paired with direct incentives for truck purchases and an attractive and feasible financing package for vehicle acquisition and charging infrastructure that the industry actors started to come around. This has helped change behavior of both buyers and incumbent sellers in the market. 

Policy-enabled finance creates tools – often used in conjunction with other policy mechanisms – that can more effectively meet people where they are with affordable, appropriate, and tailored solutions, and that can demonstrate a feasible path to adoption, helping buyers and sellers in these markets adapt accordingly.

The Greenhouse Gas Reduction Fund as an innovative policy-enabled finance program 

The Greenhouse Gas Reduction Fund (GGRF) is more than an emissions initiative – it is a strategic investment in economic equity and market innovation that took lessons in program design from many sectors and programs of the past. Designed with three core objectives, the program aims to:

– Reduce emissions of greenhouse gases and other air pollutants;
– Deliver benefits to American communities, particularly low-income and disadvantaged communities; and
– Mobilize financing and private capital to stimulate additional deployment of clean technologies.

GGRF programs, including the National Clean Investment Fund, the Clean Communities Investment Accelerator, and Solar for All, were built to complement other Inflation Reduction Act (IRA) programs by occupying a critical middle ground between grant programs and tax credits. Grant programs provide direct, one-time support for projects and programs that are not financeable (i.e., not generating revenue). Tax credits are put into the market to incentivize private investment by anyone interested in taking advantage of them, but they are not typically targeted to any specific project or population.

GGRF bridges these approaches. It channels capital – in the form of loans and investments – into markets where funding does not naturally flow, ensuring that clean energy and climate solutions reach every community. It often does so in a way that extends the benefits of tax credits and incentive programs to a broader set of projects and communities where the incentive alone is insufficient to drive adoption. GGRF focuses on increasing access to credit and investment in places that traditional finance overlooks by reducing risk and creating scalable financing structures, empowering local lenders, community organizations, and national financing hubs to deploy resources where they are needed most. And because the program makes loans and investments, it recycles capital continuously – akin to a revolving loan fund – so the work of filling gaps in market adoption can continue for decades.
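
A stylized sketch of that revolving dynamic appears below. The parameters (fund size, loan tenor, loss rate, leverage ratio) are invented for illustration rather than drawn from GGRF program data, but they show how a one-time capitalization can support cumulative lending well beyond the original appropriation.

```python
# Stylized revolving-fund dynamic: a one-time public capitalization is lent
# out, repaid, and redeployed. All parameters are invented for illustration.

seed_capital = 100.0     # $M, one-time capitalization
loss_rate = 0.03         # share of each lending cohort not repaid
private_leverage = 2.0   # private dollars mobilized per public dollar lent

fund, cumulative_public = seed_capital, 0.0
for cycle in range(4):         # four ~5-year lending cycles, about two decades
    cumulative_public += fund  # deploy the whole fund each cycle
    fund *= 1 - loss_rate      # repayments return, less losses

total_mobilized = cumulative_public * (1 + private_leverage)
print(f"Cumulative public lending over ~20 years: ${cumulative_public:,.0f}M")  # ~$382M
print(f"Total mobilized with {private_leverage:.0f}x private leverage: ${total_mobilized:,.0f}M")  # ~$1,147M
```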

GGRF’s design was built on a strong foundation of successful direct investment programs for local lenders, such as CDFI Fund awards and USDA programs. What makes it unique is its scale—tens of billions of dollars—and its centralized approach, leveraging national financing hubs to drive systemic change with and through new and existing local financial capillaries (i.e., credit unions, community banks, green banks, and loan funds). This program was not built to drive incremental progress; it is a market-shaping intervention designed to accelerate the clean energy transition while promoting widespread economic growth.

Unfortunately, the program was stopped in its tracks when the Trump administration illegally froze funds already disbursed to awardees, leading to multiple lawsuits to restore funding. Without this disruption, awardees and their partners across the country would be driving direct economic benefits for families and communities across all 50 states. In the first six months of the program, awardees had pipelines of projects and investments that were projected to create over 49,000 jobs, drive $866 million in local economic benefits, save families and businesses $2.7 billion in energy costs, and leverage nearly $17 billion in private capital. The intention and mechanics of the program were working – and working fast – to deliver direct economic, health and environmental benefits for millions of Americans.  

Moving at the speed of trust: Bringing the public and private sectors together for effective implementation 

For a program like the Greenhouse Gas Reduction Fund to succeed, both the private and public sectors need clarity, confidence and accountability. But most importantly, they need a baseline of trust between the parties to support ongoing creative problem solving to implement a new, scaled program with exciting promise and a limited blueprint. 

For the private sector, certainty is paramount. Investors and lenders (and importantly, their lawyers) require clear definitions, consistent requirements, and transparency about the availability of funds, requirements of use, and the ability to forward commit capital to projects and businesses. They need mechanisms to leverage public dollars with private capital and assurances that counterparties will be shielded from political, compliance, and policy risk. Flexibility is equally critical, allowing actors to adapt to rapid market shifts and technological innovations without being constrained by rigid program structures. Understanding these requirements – and the needs of the financial market actors involved – is outside the comfort zone of most government agencies and employees and requires significant experience and capacity building to strengthen this muscle. Nimble thinking is not often associated with government agencies, but in policy-driven financial services, it is paramount. 

At the same time, the public sector has its own requirements which require patience and understanding from the private sector. Policymakers and the EPA, the implementing agency of the GGRF, must ensure that funds are used properly and that Congressional and public oversight is robust. This means designing programs that comply with all laws and regulations while advancing policy priorities. It requires mechanisms for accountability—certifications, reporting, and transparency in how funds flow – along with safeguards against undue influence from purely profit-motivated private actors. Balancing these needs is not optional when managing taxpayer funds; it is the foundation for building trust and ensuring that the program delivers on its promise of reducing emissions, benefiting communities, and transforming markets. 

Implementation requires striking a balance between the needs of the private and public actors; this was difficult and time-consuming for both the federal employees and for us as private recipients. There was pressure to deploy quickly to demonstrate impact and the value of the program, but it took a long time to get contracts signed and funds into the market because of the many requirements of the public and private parties involved. We speak different languages, are solving for different constraints, and work in drastically different environments – all of which led to complexity and delays.

Internal EPA requirements and federal crosscutters (i.e., federal requirements from other related laws that applied to this program) increased time to market and transaction costs. Many of these requirements came with high-level policy objectives without the ability to get to a level of detail required for capital deployment. 

For example, two of the major policy crosscutters were the Davis-Bacon and Related Acts (DBRA) requirements around labor and workforce, and the Build America, Buy America (BABA) requirements for equipment manufacturing and component parts. While the agency and private awardees were aligned at a high level on the policy intention – good-paying jobs and domestically manufactured goods – flowing these requirements down to borrowers and projects required significantly more detail and nuance than was available to the agency, adding weeks and months to implementation and frustration among private counterparties.

Clear expectations up front on how to manage the trade-offs – policy priorities versus capital deployment – could have provided a high-level framework for implementation. Instead, implementation became a one-by-one review of use cases to determine feasibility and applicability, which added complexity and friction to the process without driving outsized results.

More requirements and complexity led to slower, more costly deployment, which meant fewer communities would benefit from the program’s goals of cutting emissions, creating jobs, and cutting household and business costs. 

Another key feature of the program for the National Clean Investment Fund and Clean Communities Investment Accelerator was the federal government’s ability to leverage a Financial Agent to administer the funds. This arrangement was developed between the EPA and Treasury, building on the Treasury Department’s long-standing practice of contracting with external banks to provide financial services that are hard for the government to provide directly. It was particularly important for the National Clean Investment Fund because disbursing funds into awardee accounts enabled the awardees to meet a core statutory requirement to leverage funds with private capital. Without this function, the cash would not have been available on awardees’ balance sheets and would have been difficult to leverage with private investment.

Lastly, the program’s reporting requirements were complex, making it hard to provide clarity on what data collection was required for early transactions. Again, both parties recognized the importance of transparent data collection and dissemination, but implementing that intent in practice was time-consuming. A simple, standardized framework to start from – one that could evolve over time – would have reduced uncertainty and supported faster deployment.

Altogether, the cross-sector translation – finding common ground between two disparate worlds – added many months to the process of getting the program to market, which, in the current political climate, was time not spent on the important work of educating a broad set of stakeholders on the program’s promise, potential, and purpose. Much of this complexity could have been reduced by developing a baseline of trust between the parties through the application and award process, complemented by a shared goal of improving program implementation over time.

Strange bedfellows create weak alliances 

In addition to the programmatic elements of translation, the actors involved in implementing direct investment strategies tend to be unknown entities to government agencies and Congress. Even though many of the implementing organizations – the “awardees” – have been around for decades doing similar work, there were weak ties with Congress, federal agencies, and other related stakeholders. Similarly, there was a lack of understanding of the role that nonprofit and community-based financial organizations play in addressing market gaps. This mutual lack of understanding and engagement leaves room for misunderstanding, distrust or generalizations that can hinder the ability to make collective progress. 

Within the agency, this was a new type of program for the EPA, so the requirements and design process took many months before anything was shared publicly; the Notice of Funding Opportunity was released nearly a year after the legislation was signed.

The unique form and function of the program, and limited direct engagement with lawmakers and other stakeholders about it, left a vacuum of information, which led to skepticism and confusion. Because the funds were provided to awardees as grants, many interpreted this as just another grant program – a large federal spending package that would lead to “handouts” – instead of what it was: the federal government seeding a sustainable fund with “equity” that would be lent out, returned, and reinvested in perpetuity. For example, here is the Wall Street Journal editorial page – and, later, the EPA press release – conflating investments with “handouts”:

Imagine if Republicans gave the Trump Administration tens of billions of dollars to dole out to right-wing groups to sprinkle around to favored businesses. That’s what Democrats did in the Inflation Reduction Act (IRA). The Trump team’s effort to break up this spending racket has led to a court brawl, which could be educational.

The fact that this policy structure and the private sector entities charged with implementing it were relative strangers led to confusion and delay during a period that could have been spent on outreach, engagement, and education. Without that broad base of support, the program unnecessarily became a political punching bag.

To mitigate this risk going forward, there needs to be greater investment in relationship building, education, stakeholder engagement and capacity building within and among the implementing partners across all relevant government actors and their private sector counterparts, especially after award selections are made. This connective tissue would go a long way in creating a baseline of common understanding of the policy objectives, program design, and implementation partners involved so all parties are aligned on strategic intent and path forward. 

Making policy-enabled finance programs work in the future 

If we agree that policy-enabled finance is essential to drive the energy transition and deliver broad benefits, the next step is asking the right questions about how to design these interventions for success, drawing lessons from the GGRF and other related programs.

First, what mechanisms should we use, and what are the trade-offs of each? Federally supported direct investment programs, such as managed funds, can deploy capital quickly and target underserved markets, but they require strong governance, thoughtful program design, and radical transparency; otherwise they are susceptible to the “slush fund” narrative and related risks (i.e., conflicts of interest and political favors).

Tax credits and incentives have proven effective in attracting private investment, yet they often favor actors with existing tax liability and can leave smaller players behind. Guarantees reduce risk for lenders and unlock private capital, but they demand careful structuring to avoid moral hazard and can struggle to reach communities that are truly under-resourced. 
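
To see the guarantee mechanic in numbers, the sketch below works through a lender’s expected-loss arithmetic with and without a partial guarantee, in the spirit of SBA 7(a)-style coverage. All inputs are hypothetical.

```python
# Hypothetical expected-loss math for a partial loan guarantee.

loan = 500_000
prob_default = 0.08        # lender's estimate for an unfamiliar borrower segment
loss_given_default = 0.60  # share of the loan lost if the borrower defaults
guarantee_coverage = 0.75  # share of any loss absorbed by the guarantor

expected_loss = loan * prob_default * loss_given_default
expected_loss_guaranteed = expected_loss * (1 - guarantee_coverage)

print(f"Expected loss, no guarantee:  ${expected_loss:,.0f}")             # $24,000
print(f"Expected loss, 75% coverage:  ${expected_loss_guaranteed:,.0f}")  # $6,000

# The moral-hazard caution in the text: if coverage is too generous, the
# lender's incentive to underwrite carefully shrinks along with its risk.
```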

Despite the many pitfalls of direct investment programs, they address a challenge that has plagued many of the more distributed policies: centralization and market making. Often in an attempt to let a thousand flowers bloom, policymakers underestimate the need for centralized or regional infrastructure to help with asset aggregation, data collection, product standardization, and scaled capital access. This yields local infrastructure that is sub-scale, inefficient, and unable to access the capital markets for private leverage – too small to truly shape markets.

While the GGRF’s future is uncertain given pending litigation, its purpose and role as a set of centralized financial institutions within the broader community-based financial ecosystem is critical – and needs to be more broadly understood as policymakers set future priorities. 

Second, should government manage funds and programs internally or partner with external experts? Internal management within an agency offers control and accountability, but it can strain agency capacity and impede the agency’s ability to be an active market participant. It is also difficult to attract the right talent on the government’s pay scale, leading to recruiting struggles and high turnover. This model has been attempted through programs like the Department of Energy’s Loan Programs Office (LPO), but even that market-based program has been slow to execute, delaying critical infrastructure and technology investments by months, if not years.

On the other hand, external management brings specialized expertise and market agility, yet it raises questions about oversight and influence. No matter who the private party is, there is skepticism around the use of funds, their personal or professional gain, and their intentions with taxpayer money. In our deeply politicized world, this puts a target on the leaders of these organizations that may limit who is willing to play this role. 

Quasi-public Structures

Despite the challenges, on balance it seems that internal agency management or a quasi-public structure is the most feasible path. Internal management pushes the boundaries of public agency function but goes a long way toward building trust and accountability. Quasi-public structures seem to be a good compromise where feasible. Other countries have figured out how to manage these programs within a government or quasi-government agency – Australia’s Clean Energy Finance Corporation is one example, and the Depression-era U.S. Reconstruction Finance Corporation offers a historical domestic precedent. We can too.

At the federal level, credit programs should be managed by agencies with the skills and capacity to hold an investment function, like the Department of Energy or the Treasury Department, leveraging lessons learned from programs like DOE’s LPO and EPA’s GGRF to structure new entities. Or – as many state and local green banks have done – create quasi-public entities that have public-sector governance and appropriations but otherwise operate independently as financial institutions with their own balance sheets, bonding authority, and staffing structures.

Lastly, if public-private partnerships are preferred, who should the government work with to implement policies meant to expand access to capital and credit? Nonprofit financial institutions often prioritize mission and community impact and are willing to arrange complex financings that require a higher-touch approach, but they often lack scale and access to institutional capital. For-profit firms bring scale and expertise but can find it hard to manage a government program with a mindset or culture that differs from their typical profit-maximization frameworks.

Depository institutions such as banks offer stability and regulatory oversight, whereas non-depositories can innovate more freely to reach the hardest to serve communities. Regulated entities provide robust and trusted infrastructure and controls, but unregulated actors may move faster and can be more creative in supporting traditionally under-resourced opportunities. Specialty firms bring deep sector or asset-class knowledge, while generalists offer broad reach and experience in managing across asset classes. 

To identify the optimal path, it is helpful to look to existing programs for lessons. The U.S. Treasury’s Emergency Capital Investment Program (ECIP) demonstrates how direct investment into regulated depository institutions can mobilize significant capital for underserved communities through an existing financial ecosystem. The Loan Programs Office shows what internal management can achieve for large-scale projects. Tax credit programs like the New Markets Tax Credit (NMTC) and the Investment Tax Credit (ITC)/Production Tax Credit (PTC) illustrate how incentives can transform markets, while guarantee programs such as the Treasury’s Community Development Financial Institutions (CDFI) Fund Bond Guarantee Program and the SBA’s 7(a) and 504 guarantees highlight the power of risk mitigation in activating and standardizing products to support secondary market access. These precedents offer valuable insights as we design future policies to accelerate a broadly beneficial energy transition.

Educating policymakers to build trust in the community finance ecosystem

Regardless of path forward, one thing remains critical – building better relationships between policymakers and the community finance industry, including community banks, credit unions, loan funds, and green banks. These are the boots-on-the-ground organizations that share a mission with many policymakers to expand economic opportunity and broaden access to capital and credit. And they are often the organizations navigating multiple public products and programs to bring affordable, quality financial services to communities. 

The challenge is that most advocacy and educational work for these organizations has been siloed: there are separate groups representing credit unions big and small, housing lenders, loan funds, green banks, and community banks. This disaggregation has diluted policymakers’ ability to view the ecosystem as a whole and determine how best to leverage it for public good. This is not to say that the individual groups lack a role to play for their members; each has different needs and requirements and deserves representation. But the broader industry would benefit from collaboration across these organizations to create a mechanism for outreach, advocacy, and education around policy-enabled finance overall. This would bring a strong and powerful group of actors together for a higher collective purpose and, ideally, create a large and diverse constituency with common goals. 

State and local governments stepping up  

In the near term, the absence of federal support for clean technology deployment through policy-enabled finance creates an enormous opportunity for state and local governments to step up and push forward. Hundreds of local financial institutions had been preparing to deliver GGRF funds to and through local projects and businesses to drive broader adoption of clean technologies. These organizations retain the skillsets, capacity, and pipeline to finance these projects, but they need access to flexible and affordable capital to do so. 

State funding efforts could mirror the program and product design of the GGRF to get deals done locally, working with one or more of the constellation of financial institutions that prepared to deploy federal funds. The fact that the GGRF’s programs were cut short does not mean the infrastructure and lessons generated should go to waste: if public institutions are willing to commit capital, many financial institutions across the country stand ready to put it to good use. 

Conclusion 

If our shared goal is an equitable, rapid energy transition, policy must do more than regulate; it must enable finance and focus on deployment, on getting good projects built. The Greenhouse Gas Reduction Fund showed both the promise and the pitfalls of large-scale, policy-enabled finance: when designed and governed well, these tools can unlock private capital, deliver measurable local benefits, and sustain long-term market transformation. When implementation gaps and weak relationships persist, even well-intentioned programs become politically vulnerable and ripe for attack. To make these programs successful within our current political context, future efforts should prioritize clear governance, cross-sector capacity, and sustained stakeholder engagement so public dollars can catalyze private investment that reaches every community. 

Policy-enabled finance snapshot (illustrative, not exhaustive)

| Program | Direct | Tax incentive | Guarantee | Federal agency | Implementing entity(ies) |
| --- | :---: | :---: | :---: | --- | --- |
| CDFI Fund Financial Assistance Program | X | | | Treasury | Certified nonprofit loan funds |
| Emergency Capital Investment Program | X | | | Treasury | Community banks and credit unions |
| Opportunity Zones | | X | | Treasury | Private funds and other financial intermediaries |
| Low-Income Housing Tax Credit (LIHTC) Program | | X | | IRS, state housing finance agencies | Private housing developers, lenders, and syndicators |
| New Markets Tax Credit (NMTC) Program | | X | | Treasury | Private Community Development Entities (CDEs) |
| Investment and Production Tax Credit (ITC/PTC) Programs | | X | | Treasury | Private developers, investors, and syndicators |
| USDA Business and Industry Loan Guarantee | | | X | USDA | Banks, credit unions, and farm credit lenders |
| USDA Single Family Loan Guarantee | | | X | USDA | Private mortgage originators |
| SBA Loan Guarantees (7(a), 504, etc.) | | | X | SBA | Bank and non-bank private business lenders |
| DOE Loan Programs Office Guarantee Program | | | X | Department of Energy | DOE direct to companies, alongside private lenders and investors |
| CDFI Fund Bond Guarantee Program | | | X | Treasury | Certified Community Development Financial Institutions (CDFIs) |
| Greenhouse Gas Reduction Fund National Clean Investment Fund and Clean Communities Investment Accelerator | X | | | EPA | National nonprofit specialty finance organizations in partnership with local lenders (community banks, credit unions, green banks, and loan funds) |

Rebuilding Environmental Governance: Understanding the Foundations

Today we are facing persistent, complex, and accelerating environmental challenges that require adding new approaches to existing environmental governance frameworks. The scale of some of these challenges, such as climate change, requires rethinking our regulatory tools, while diffuse sources of pollutants present additional difficulties. At the same time, effective governance systems must accommodate the addition of new infrastructure, housing, and energy delivery to support communities. Our legal framework must be sufficiently stable to enable regulation, investment, and innovation to proceed without the discontinuities and gridlock of the past few decades. 

In an increasingly divided atmosphere, it will take candid, multiperspective dialogue to identify paths toward such a framework. This discussion paper explores the baseline that we’re building on and some key dynamics to consider as we think about the durable systems, approaches, and capacity needed to achieve today’s multiple societal goals.


Building Blocks to Make Solutions Stick

Our environmental system was built for 1970s-era pollution control; today it needs stable, integrated, multi-level governance that can make tradeoffs, share and use evidence, and deliver infrastructure, while recognizing that improved trust and participation are essential to future progress. 

Key dimensions explored below include the implications for democratic governance, capacity needs, and how to modernize today’s system of cooperative federalism to address the lack of clear and intentional interconnections, adaptive feedback loops, and aligned objectives.

The first half of the 20th century saw the emergence of our first national laws regulating public resources—the Federal Power Act in the 1930s, the precursor to the Clean Water Act in the 1940s, and the first version of the Clean Air Act in the 1950s. Then, in a concentrated decade of new laws and massive amendments to existing ones, the 1970s saw a focus on assessing, controlling, and reducing pollution, while setting ambitious goals for human and ecosystem health. These statutes generally were constructed around specific resources—airsheds, watersheds, public lands, and wildlife habitat—and articulated specific roles for federal agencies and other levels of government. State efforts were incorporated into a nationwide system of cooperative federalism, while many states undertook their own initiatives to address environmental problems.

For half a century these laws—enacted with overwhelming, bipartisan congressional support—produced a great deal of success, with conventional pollution decreasing across many resources and regions and some species and habitats recovering. But we have plateaued in terms of broad improvements, and meanwhile novel pollutants and more diffuse, global threats have emerged. Political shifts, legacy economic interests, and a changing information landscape have played an important role, as amply recounted elsewhere. 

The bipartisan legislation of the 1970s arose from both idealism and necessity, during an Earth Day moment that embraced ecological thinking in response to tangible harms to humans and the environment. The laws enjoyed massive public support and got many things right. Some were aspirational and holistic, such as the Clean Water Act’s “zero-discharge” target or NEPA’s vision “to create and maintain conditions under which man and nature can exist in productive harmony, and fulfill the social, economic, and other requirements of present and future generations of Americans.” The latter Act established the Council on Environmental Quality to coordinate this policy across the entire federal government.

Other advances came piecemeal, focused on specific resources. The U.S. Environmental Protection Agency (EPA) was cobbled together by an executive plan to reorganize several existing agencies and offices, then granted authority in a series of media-specific statutes that began with the Clean Air Act, Clean Water Act, and Safe Drinking Water Act, and later the Toxic Substances Control Act and Federal Insecticide, Fungicide, and Rodenticide Act. The Resource Conservation and Recovery Act, Superfund, and Oil Pollution Act addressed hazardous substances affecting the nation’s health and ecosystems. Implementation of all these laws required the Agency to develop in-house scientific expertise and detailed regulations that fleshed out statutory standards and applied them to specific sectors—an approach upheld for decades by the Supreme Court.

These laws made unquestionable progress on conventional pollution and waste, the visible, toxic byproducts of industrial production and consumer culture that had spurred the environmental movement and drawn a generation of lawyers to the new profession. But with specialization came fragmentation of environmental law into a plethora of subtopics, and a managerial, permit-centric legal culture that risked losing sight of ecological goals. Nor were the benefits distributed equally by race or class, as demonstrated by pioneering studies in the field of environmental justice.

As the field matured, it slowed, with congressional interventions becoming less frequent and more technical. Some of the last major amendments to a bedrock environmental statute were the Clean Air Act Amendments of 1990, enacted by a bipartisan Congress and signed by President George H.W. Bush. (The other prominent example is the Frank R. Lautenberg Chemical Safety for the 21st Century Act (Lautenberg Chemical Safety Act), a major amendment to TSCA in 2016.) Absent updated legislation, EPA regulations became paramount, but these had to run a gauntlet of shifting policy priorities, complex rulemaking procedures, litigation, and a transformed and often skeptical Supreme Court. 

Critiques of this system date back almost as far as the statutes themselves. One ELI study listed 34 major “rethinking” efforts emanating from academia, blue-ribbon commissions, and NGOs between 1985 and 2014, across the political spectrum and ranging from incremental reforms to radical reinvention. One highly touted initiative, led by sitting Vice President Al Gore, resulted in some modest administrative streamlining. Most remained paper exercises, appealing to good-government advocates but lacking political support.

The stakes grew higher with increasing awareness of climate change. In June 1988, NASA scientist James Hansen testified before Congress that global warming had begun; extensive press coverage and book-length treatments followed, sparking broad discussion of what was then a fully bipartisan issue. Vice President Bush campaigned on addressing it, and as President in 1992, he traveled to Rio de Janeiro to sign the U.N. Framework Convention on Climate Change. With successes like the 1987 Montreal Protocol on the ozone layer or EPA’s 1990 Acid Rain Program doubtless in mind, the Senate ratified the Framework Convention 92-0.

But climate change implicates much larger portions of the U.S. economy—energy, transportation, agriculture—at individual as well as industrial scales. While NEPA embodied the 1960s slogan that “everything is connected,” the lesson of climate change is that many things emit greenhouse gases, and all things will be affected by global warming. The need for systemic change proved to be an uneasy fit with existing site-specific, media-specific environmental laws.

Growing awareness of climate change and the scale of action needed to address it also generated a backlash from entrenched economic interests. By the mid-2000s, the Bush/Cheney administration had reversed course on federal climate commitments. It contested and lost Massachusetts v. EPA, a landmark ruling in which a narrowly divided Supreme Court held that the Clean Air Act applies to greenhouse gas emissions that affect the climate. 

The Administration’s argument was captured by Justice Antonin Scalia’s flippant remark in dissent that “everything airborne, from Frisbees to flatulence, [would] qualif[y] as an ‘air pollutant.’” In Scalia’s view, real pollution had to be visible, earthbound, toxic, and inhaled, not a matter of colorless molecules interacting in the stratosphere. Even in dissent, this view set the stage for subsequent legal battles, right up to the present effort to revoke EPA’s 2009 “endangerment finding,” which now underpins federal greenhouse gas regulation. 

Climate change likewise laid bare the long-standing divide between environmental law, which historically regulated the power sector in terms of its fuel inputs and combustion byproducts, and energy and utility law, which focused more on transmission and distribution of the resulting power. (Both fields are further divided among federal, state, and local authorities, as discussed below.) Vehicle emissions similarly are regulated via both EPA tailpipe standards and National Highway Traffic Safety Administration mileage standards, with California authorized to adopt more stringent ones. When coordinated, this multi-headed structure produces steady advances, but in deregulatory moments it has become fertile ground for opportunism, retrenchment, and delay. 

At the federal level, these questions have been exacerbated by massive shifts in administrative law, long the building block of environmental law and climate action, and in federal court rulings on the separation of powers, implicating the authority of federal agencies to issue and enforce rules. Successive administrations have run afoul of the current Supreme Court majority, whose “major questions doctrine” casts a shadow both on attempts to fit new problems into once-expansive environmental statutes, and on “whole of government” approaches that attempt to address climate change’s sources and impacts across the entire economy. 

Tentative attempts by presidents to leverage executive power and emergency authority have been curtailed when invoked for regulatory purposes, but are running strong in deregulatory efforts and executive actions in the service of “energy dominance.” Whether the Supreme Court will articulate some principled limits, and whether those will be even-handedly applied to future administrations, remains to be seen. Meanwhile, the past year has seen a large-scale push to reduce environmental regulation, in parallel with abrupt reorganizations and steep reductions in the federal workforce and agency budgets. These actions were joined by sharp declines in environmental enforcement and U.S. withdrawal from environmental and climate-related international instruments and bodies.

In this uncertain atmosphere, attention has turned to new technologies and building the necessary infrastructure to effect growth in low- and zero-carbon energy. As clean energy alternatives have matured and become economically competitive, the climate imperative is pushing against long-standing environmental review and permitting procedures. That may well include NEPA, which is now attracting attention from all three branches of government and a robust debate about whether, or how much, its procedures might be slowing energy deployment. 

Environmental issues were federalized for a reason: to counter pollution that crosses state borders and to prevent a race to the bottom. But decades of implementation have seen the blunting of some tools, expansion of others, and identification of gaps. Moving forward requires reaffirming that the environment is inseparable from societal health and well-being, economic stability, and energy systems. Any serious response must orient governance toward decarbonization, while embedding accountability, equity, and justice from the outset rather than inconsistently and often inadequately after the fact. Doing all this without sacrificing hard-won environmental gains will not be easy.

To meet the worldwide crises of biodiversity loss, pollution overload, and climate change, any new structure must be rooted in an understanding of the existing baseline for environmental governance. 

Cross-Cutting Objectives

Inseparable: Environment, Energy, Economy, and Society

The past half-century has demonstrated the impossibility of severing the environment from the economy, energy production, and social well-being. The false dichotomy between environmental protection and economic development, the oversimplified idea that the two are locked in zero-sum competition, must also fade. Yet the decades-old concept of sustainability (or the triple bottom line) has not yet made its way into many of our foundational laws and governance structures.

Ignoring the complex relationships among environment, energy, the economy, and society favors short-term decisions that externalize impacts. This dynamic underlies the longstanding debate over the accuracy and efficacy of cost-benefit analyses throughout their 40-plus-year federal history, including questions about their scope and how they handle uncertainty. For any project or program, system designers who consider an integrated suite of factors beyond basic environmental parameters or economic indicators (from public health to workforce development, from the supply chain to community well-being) have a greater chance of cross-sector success. 

These governance challenges are also inseparable from shifts in how finance flows. Public and private financial tools—from subsidies and tax credits to loans, grants, and community-based financing—are increasingly shaping market behavior and determining whether policy objectives translate into real-world outcomes. Who controls these tools, how they are deployed, and when capital is made available all play a central role in driving or constraining environmental progress.

Bridging these gaps is, of course, easier said than done. But widening the aperture of considerations can connect decisionmaking to holistic industrial policies that account for a wider range of economic, social, and environmental factors. Accounting for this wider range is not just a nice-to-have; it is essential to shared prosperity. 

Foundational: Trust and Participation 

A process, project, or program will move at the speed of trust—no faster and no slower. This refers to trust in institutions, in science, and in process. 

Trust is earned through consistent transparency, clear accountability, and demonstrated responsiveness. For governance systems to function at the scale and pace required today, these principles must be embedded in decisionmaking in ways that are coherent and durable, rather than fragmented across a series of disparate steps and entities. Our traditional frameworks contain mechanisms to solicit and incorporate public input. But those mechanisms have limitations for all involved, both those trying to make their voice heard and those proposing the action and receiving input. (These range from when and how often participation occurs in the decisionmaking process to how the input is incorporated and decisions communicated.) Participation is foundational to our regulatory democracy and must occur early enough and in meaningful ways to improve decisions.

Effective participation also depends on clarity. People must be able to understand how decisions are made, what tradeoffs are being weighed, and where and how engagement can influence outcomes. But our frameworks still reflect reliance on elite and professional representation rather than widespread engagement. Trust—and the durability of outcomes—will increase when our processes have clearly articulated principles, transparently and rapidly weigh tradeoffs, and come to decisions through open and informed consideration. 

The Concurrent Risk and Promise of Technology 

Mechanization and industrialization created both unprecedented wealth and the pollutants that were the target of the 1970s wave of environmental laws. Emerging technologies likewise offer great promise, but also place familiar stresses—greenhouse gas emissions, water consumption, land use, waste—on the ecosystem and on human health and well-being. Our existing laws will need to respond and adapt to these problems as data centers and other novel demands reach greater scale, even as we evolve new ways of balancing those technologies’ potential against their up-front impacts and opportunity costs. 

Technology also offers a potential path through the climate crisis, as solar and wind energy have become scalable and cost-competitive with traditional fossil fuels. Other clean technologies on the horizon, such as geothermal or fusion energy, retain bipartisan support and will require legal and regulatory guardrails if they mature and are integrated into the system. Battery storage and energy efficiency advances will help manage and reduce energy demand, and carbon removal and sequestration technologies may also play a role in curbing emissions. And at the outer limits of our knowledge, various geoengineering concepts are raising difficult questions about feasibility, decisionmaking procedures, unintended consequences, and accountability. 

New technologies are also helping shape the implementation of environmental law in important ways. Existing tools such as satellite imaging, GPS location and geographic information systems, remote monitoring and sensing, and drones have fundamentally altered the way we view and record data from the physical world, in close to real time. Computer modeling and simulations have been a mainstay of climate science and policy, and other software innovations may improve environmental governance, including addressing long-standing issues of government transparency and public participation.

Sample Topics for Multi-Perspective Discussions
Communicating environmental challenges, conditions, and risks

Effective messaging is essential to enhancing public understanding of interconnected issues and support for responses. It should be tailored to specific jurisdictions, be informed by advances in research (e.g., behavioral science), learn from those thriving in today’s information ecosystem, and embrace strategies for reducing polarization.

Advancing the beneficial use of technologies while establishing reasonable guardrails

How can we identify and address barriers to the development and equitable deployment of technologies that advance environmental protection while limiting their negative impacts?

Democracy, Expertise, and Regulatory Certainty

In a healthy democracy, public policy is guided by evidence, and truth is the shared foundation for collective decisionmaking, whatever the chosen outcome. When facts and scientific expertise are dismissed or minimized in favor of ideology, however, it becomes harder for citizens to deliberate, solve problems, and hold leaders accountable. The diminution and marginalization of science contribute to the erosion of democracy itself.

In the United States, our ability to build necessary infrastructure and take action has been slowed by the long timelines and sometimes overlapping requirements of our regulatory processes. This is exacerbated by the increasingly extreme policy swings we have been experiencing between administrations. The result is the twin challenge of how to increase the pace of our processes without lessening their protections, while also making our decisions more stable and durable.

Aligning Regulatory Certainty and Timelines 

Regulatory certainty is not the same thing as rigidity. When done correctly, it can be the backdrop against which communities are able to plan for the future and companies can make informed decisions about where and how to invest. Regulation that is sufficiently clear on stable objectives does not have as much space in which to swing. 

Long horizons with clear milestones matter: think of a national clean electricity standard, or the emissions-based equivalent, set on a 15- to 20-year glidepath. Confidence in long-term decisions, however, stems from effective inclusion, holistic analysis, and transparent decisions. The perspectives of subject-matter experts (in-house and external), and of those who manage and care about the resources or land in question, should be considered essential and actively pursued by policymakers. 

Program-level thinking can help inform decisions at the project level. The energy transition will be remembered for feats of engineering—the thousands of miles of transmission lines, the buildout of battery storage—but its success will be determined by whether our framework listens, incorporates needed expertise, and produces rules that last long enough for people to plan their lives.

Evidence-Based Decisionmaking

For decades, the principle that good decisions require a good evidence base has been axiomatic. Since 1945, the federal government has invested in science as both a discipline and an idea, supporting research conducted by public institutions and translated into socially useful goods by the private sector.

Incorporating meaningful, often complex, evidence—including scientific data, traditional knowledge, and the needs, concerns, and priorities of potentially affected individuals—into decisionmaking is increasingly fraught. Climate change illustrates these challenges: despite decades of understanding by government officials and private sector decisionmakers about its causes and the need to act, economic and social interests have prevented an effective policy and legislative response. Decisions are only as good as the information they are based on. Emissions reductions ultimately depend not just on technical knowledge, but on institutions and governments capable of acting on that knowledge independently, transparently, and free from corruption and clientelism.

In a study assessing the effectiveness of the federal government’s efforts to improve evidence-based decisionmaking, the U.S. Government Accountability Office found mixed progress in: (1) developing relevant and high-quality evidence; (2) employing it in decisionmaking; and (3) ensuring adequate capacity to undertake those activities. These are foundational problems.

Compounding our challenges in making legislative and policy decisions based on accurate and pertinent evidence is the siren song of AI. Artificial intelligence promises many tools, ranging in complexity and autonomy from performing clerical tasks to generating substantive recommendations. (AI clerical assistive systems automate certain administrative and procedural tasks, such as document classification and automatic transcription, while AI recommendation systems can contribute to judicial decisionmaking, for example, by analyzing legal codes and case precedents. See Paul Grimm et al.)

AI is already being used across jurisdictions and agencies for environmental regulation, including planning, reviewing proposals, drafting environmental reviews, supporting public participation and engagement, monitoring compliance, and enforcement. Recent federal policy has fueled the AI flame, with a 2025 AI action plan and multiple Executive Orders that aim to expedite permitting processes.

Enormous governance questions around AI have yet to be resolved. Technologies built by people reflect the values and assumptions of those who built them, and their use shifts power in decisionmaking processes. If a judge were called upon to review a decision made by such a tool, how could she determine the finding was reasonable under existing standards of administrative law? Can machine-generated analysis satisfy NEPA’s “hard look” review? These governance concerns dog AI tools wherever they are deployed, but they become particularly critical when the tools have the potential to become the decisionmaker in our legal and regulatory system.

The importance of having rigorous systems for identifying and considering trusted information to ground collective and democratic decisionmaking cannot be overstated. Until recently, dozens of scientific advisory committees routinely advised federal agencies to help bridge information gaps. Staggering recent losses of federal research funding and government programs, along with the scrubbing of essential data sets, mean any path forward will likely require significant investments of both financial and human capital. When we rebuild, priority should be placed on ensuring all participants in decisionmaking have access to the same evidence, supported by the same systems. 

Frontloading Regulatory Decisionmaking 

Even as we work to improve how evidence informs decisionmaking, we face growing risks, uncertainties, and tradeoffs. The challenge is not simply to generate more information, but to make better use of what we already know through regulatory systems that reflect the integrated nature of the problems we face—without mistaking uncertainty for an absence of evidence.

Many conflicts arise because decisions are fragmented across regulatory silos and institutions.  Consider a proposed electrical transmission line crossing a wetland. Decisionmakers must balance the imperatives of the energy transition, the conservation of biodiversity, the protection of water resources, and local economic opportunities. Yet these factors may be evaluated at different times, at different scales, and by different agencies. As a result, environmental permitting decisions can be made in isolation, long after foundational choices about the project’s purpose and design have already been locked in.

By the time site-specific questions arise, such as whether a particular wetland falls within the narrowed jurisdiction of the Clean Water Act, many broader tradeoffs have already been foreclosed. 

A holistic approach would entail identifying the priority of certain projects and a system for weighing their impacts. For example, infrastructure decisions could happen at a systemic scale such as nationwide grid needs, providing context for decisions about individual projects and resources. Our decisionmaking processes need systems for weighing tradeoffs, and making them transparent, to enable systems-level planning and prioritization and effective engagement. 

Hard decisions will have to be made regarding prioritized (and thus deprioritized) objectives. But frontloading data gathering, assessment, and decisionmaking on a national scale—through meaningful scenario planning, for example—could reduce the number of decisions made much further down the line in a project lifecycle and temper the uncertainty that can stem from permitting officials’ discretion. 

We will be facing these types of tradeoffs with increasing frequency as needs mount to build infrastructure and housing, retreat from our coasts, manage and conserve species and ecosystems, and respond to and prepare for increasingly frequent and severe emergencies. In addition to an integrated approach for assessing impacts and making tradeoffs transparent, the system will need certain decisions to be made earlier in the decisionmaking processes and with a broader scope. 

Acting (and Adapting) Amidst Uncertainty 

Core tenets of administrative law structure decisionmaking around up-front analysis and assume that we have full—or at least sufficient—information about circumstances and potential impacts to support a decision. But this is not always the case. When there are substantial uncertainties about conditions or the possible impacts of an action or rulemaking, adaptive management can improve outcomes by taking an iterative, systematic approach. 

The uncertainties brought on by changing conditions due to climate impacts, and unknowns about the consequences of proposed actions, may call for an adaptive approach. There are other situations where establishing sufficient evidence before taking irreversible action is appropriate. For example, we currently have limited understanding of the potential local and global impacts of geoengineering proposals to release aerosols into the atmosphere to block the sun’s rays, and no governance mechanisms are in place to address them. 

There are also situations where it is important not to postpone action indefinitely out of a desire to have all the answers before acting, such as building infrastructure for transitioning away from fossil fuel combustion. When appropriate, effective adaptive management plans include procedural and substantive safeguards: clear goals to set an agenda and provide transparency; an accurate assessment of baseline conditions against which to compare future monitoring data; an outline of the thresholds at which management actions should be taken, to promote certainty and assist with judicial enforcement; and a clear link between monitoring results and response actions.
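
To make those safeguards concrete, here is a minimal sketch, in Python, of how a plan’s elements (goal, baseline, thresholds, and linked responses) might be encoded. It is purely illustrative: the class names, the indicator, and the trigger values are invented for the example and do not correspond to any actual program, statute, or dataset.

```python
from dataclasses import dataclass, field

@dataclass
class Threshold:
    """A pre-committed trigger linking a monitored indicator to a response."""
    indicator: str        # hypothetical monitored metric, e.g., "wetland_acres"
    trigger_below: float  # level below which the response is required
    response_action: str  # management response committed to in advance

@dataclass
class AdaptiveManagementPlan:
    """Sketch of the safeguards described above: a clear goal, a measured
    baseline, explicit thresholds, and responses linked to monitoring."""
    goal: str               # agenda-setting objective, stated for transparency
    baseline: dict          # measured pre-action conditions to compare against
    thresholds: list = field(default_factory=list)

    def evaluate(self, monitoring_data: dict) -> list:
        """Compare new monitoring data to the thresholds; return any
        response actions that the data trigger."""
        triggered = []
        for t in self.thresholds:
            observed = monitoring_data.get(t.indicator)
            if observed is not None and observed < t.trigger_below:
                triggered.append(t.response_action)
        return triggered

# Hypothetical usage: a wetland-mitigation plan with one monitored indicator.
plan = AdaptiveManagementPlan(
    goal="Maintain wetland function at the mitigation site",
    baseline={"wetland_acres": 120.0},
    thresholds=[Threshold("wetland_acres", 110.0,
                          "pause construction and replant buffer vegetation")],
)
print(plan.evaluate({"wetland_acres": 104.5}))
# -> ['pause construction and replant buffer vegetation']
```

The point of the sketch is the structure, not the code: when goals, baselines, thresholds, and responses are written down in advance, agencies, stakeholders, and reviewing courts all have something concrete against which to measure later adjustments.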

Learning as we go and making appropriate adjustments may be justified in some contexts, and even essential when we do not have the luxury of time and must move ahead without critical information. Adaptive management can increase an agency’s ability to make decisions and allow managers to experiment, learn, and adjust based on data. But adaptive management’s flexibility comes at the cost of more resources and less certainty, which may also invite controversy. The sweet spot for adaptive management may be when managing a dynamic system for which uncertainty and controllability are high and risk is low. While uncertainties are proliferating, situations that meet those conditions are not the norm. 

It would be beneficial for our environmental governance systems to explicitly identify conditions under which adaptive management may and may not be used, and to provide clear accountability mechanisms. The approach must fit with the practical realities of the working environment. For example, even if uncertainty and controllability are high and risk is relatively low, tinkering with large-scale energy infrastructure is not practical. Adaptive management may not be suited to regulatory contexts (1) in which long-term stability of decisions is important; (2) where decisions simply can’t easily be adjusted once implemented; or (3) where it is essential that an agency retain firm authority to say “yes” or “no” and leave it at that.  It is a valuable tool to be invoked when truly necessary.

Sample Topics for Multi-Perspective Discussions
Realigning to reflect today’s challenges

The interconnectedness of today’s global environmental challenges is in tension with the accreted framework of media-specific, site-specific laws and siloed agencies. Adjustments that help to align objectives, processes, and structures could scale impact. 

Evidence-based decisionmaking is foundational to U.S. governance and essential to progress towards today’s environmental imperatives

Our framework should reflect commitment to and investment in gathering and analyzing information, from intricate science to the concerns of impacted communities; and be designed to incorporate and respond to changing information, such as through judicial review or other checks. 

Designing effective certainty

In part because of impacts already set in motion, we must consider when we cannot wait for more information before taking action on environmental and climate challenges. By their nature, some of those actions can be adapted on an ongoing basis, while others cannot. Clear parameters for differentiating will help ensure clear timelines and appropriate, effective processes.

Building a Structure Fit for Purpose

The “triple planetary crisis,” a term coined by the UN Environment Programme, refers to the combined challenges of biodiversity loss, pollution overload, and climate change. These crises require large-scale mobilization and society-level adjustments. Action of this magnitude requires a multifaceted system that can support and move myriad levers in a coordinated and balanced manner. The year she received the Nobel Prize in Economics, Elinor Ostrom published a paper capturing both the tension and the necessity of this layered system, calling for a “polycentric approach” to addressing climate change.

The following discussion focuses largely on federal and state government action. In addition, Tribal Nations are vital sovereign authorities, partners, and voices in governance, including natural resource management, and their needs and knowledge are critical to effective, sustainable, just results. And as Ostrom recognized, private entities will also be instrumental in addressing climate change and other complex challenges; this includes not only corporations, as discussed below, but philanthropic organizations and a variety of other nongovernmental actors.

The Scale Challenge 

Environmental regulation occurs at multiple levels: local ordinances, state laws and policies, interstate agreements, tribal laws, federal regulations, and international laws and norms. It also works at different resource scales, from managing a subspecies to protecting regional drinking water to setting nationwide air standards.

Jurisdictional nesting can provide comparative benefits at various levels for specific resources or pollutants. For example, working at the local level allows tailoring to specific circumstances to maximize benefits and build trust, while working at the state level can capture the cumulative benefits of collective local action and allow the testing of different approaches to federal implementation. Working at the federal and larger scale allows, among other things, the balancing of voices and the establishment of shared objectives, standards, or requirements. 

However, tiered systems can also be subject to gaps in implementation, such as when there is no mechanism to trigger enforcement of an international mandate at the national level. They may inadvertently impede interoperability and shared learning, for example through differing data standards, tools, or systems, and slow action due to competing or otherwise unaligned priorities. In addition, jurisdictional boundaries rarely align with resource definitions, whether a hydrogeographic basin, the extent of an air pollutant, or a natural-hazard vulnerability zone. Questions around preemption add further complexity, as longstanding understandings of federal versus state authority under key statutes and regulatory structures shift. 

Federal, tribal, state, and local governments must navigate these challenging dynamics as they work to effectively implement existing environmental laws and creatively address new environmental problems. 

Cooperative Federalism

Federalism—whereby the federal government and states share power and responsibilities—is a central tenet of the U.S. governance system. A particular form, cooperative federalism, is embodied in most of the major U.S. environmental laws, including the Clean Air Act and the Clean Water Act. These laws establish a legal framework in which minimum standards are established at the federal level and individual states implement the programs. Today, over 90 percent of the delegable federal environmental programs are run by states. As a general matter, states are responsible for ensuring that federal standards are met but have the flexibility to impose standards that are more stringent than the federal standards. 

In practice, the Congressional Research Service observes that the “precise relationship and balance of power between federal and state authorities in cooperative federalism systems is the subject of debate.” This debate has manifested in a variety of ways over the decades, including differences over the appropriate scope of federal oversight and levels of federal funding for state-delegated programs. 

Environmental protection has advanced in many respects over time with cooperative federalism as its foundation, but few would argue there is no room for improvement. For example, a 2018 memorandum by the Environmental Council of the States (ECOS) captured a consensus among states that the “current relationship between U.S. EPA and state environmental agencies doesn’t consistently and effectively engage nor fully leverage the capacity and expertise of the implementing state environmental agencies or the U.S. EPA.”

In addition to the leeway that cooperative federalism provides to the states in implementing federal environmental laws, states are free to regulate or otherwise address environmental problems that are not covered by federal laws. As a result, states are often referred to as (in Justice Brandeis’ phrase) “laboratories of democracy” for testing innovative policies. Historically, states have served as testing grounds for environmental policies later adopted by the federal government. Given the current federal governance landscape, discussed below, what happens in the states may stay in the states (at least for quite some time)—making state laboratories one of the few promising options for advancing environmental protection. 

Barriers to Optimal Functioning of Cooperative Federalism 

In addition to the inherent systemic challenges outlined above with respect to multi-tiered jurisdiction and resource scale, there are broad societal barriers to maximizing the efficacy of cooperative federalism. The numerous overarching problems contributing to democratic dysfunction (e.g., channelized communication, primaries that yield extreme candidates who foster dramatic pendulum swings, lack of public trust) will contribute to impeding the optimal functioning of cooperative federalism for the foreseeable future. 

The multitude of environmental governance-specific challenges identified earlier also significantly affect the functioning of cooperative federalism. These include, for example, long-standing congressional gridlock; new and emerging environmental harms that cannot be easily addressed within the existing, siloed framework; a Supreme Court changing its review of regulation; and regulatory pendulum swings that make consistency and stability difficult and hinder continuous improvement.

Several additional barriers arguably weaken the foundations of cooperative federalism. These include: ineffective federal oversight of state programs (possibly both too stringent and too lenient in some respects); insufficient collection and dissemination of data (e.g., on environmental conditions, performance, and pollution impacts), along with inconsistent tracking of key environmental indicators; a lack of effective state-specific risk communication and messaging; limited state resources for filling federal regulatory gaps or experimenting with innovative ways of implementing federal and state regulations; and insufficient federal funding for state programs. Recent critiques also point to the need to build out state administrative law to improve the functioning of cooperative federalism.

Opportunities for Renewing Cooperative Federalism

Recent developments in federal programs are disrupting many aspects of the country’s environmental protection efforts. These developments include drastic regulatory rollbacks, multiplied industry influence, curtailed input from scientists and other experts, rollback of federal grant funds to states and local governments, and sweeping staffing cuts resulting in loss of critical expertise. 

Cooperative federalism has been particularly undermined by federal funding cuts (e.g., withdrawal of federal grants, reductions in revolving loan funds) and cuts to the federal programs that collect and analyze environmental data. Moreover, federal interference with independent or “more stringent than” state initiatives is taking a toll (e.g., response to California’s electric vehicle requirements).

Given the barriers outlined above that make major statutory change infeasible, building an entirely new structure to replace cooperative federalism will be a nonstarter for the foreseeable future. However, ample opportunities exist to strengthen the existing structure in a manner that yields more effective and innovative approaches to environmental protection. 

Front and center is building state and local governmental capacity to fill the gaps created by federal inaction and rollbacks, as well as to lead on regulatory innovation. In so doing, states and local governments can serve as more effective laboratories of democracy and foster innovative federal action. And because states and local governments are on the frontlines of managing environmental and climate impacts such as floods and wildfires, as well as aging water infrastructure and other environment-related challenges, they are motivated to address the causes and effects of these harms, despite the intensely politicized nature of environmental issues such as climate change. 

To be sure, renewing the existing structure is complicated by an uneven political landscape. For example, the level of political and popular support for environmental protection measures in the 26 states led by Republican governors differs from the levels of support in the 24 states led by Democratic governors, and the relative dominance of a particular party (e.g., trifectas or triplexes) is also a factor. These dynamics likewise influence environmental action by local governments when, for example, the potential for state preemption of local authority is a factor. 

Nevertheless, the practical reality of increased extreme weather events, aging water infrastructure, and other environment-related challenges provides a strong incentive for all states and local governments to act. State and local efforts, however, are hindered by limited capacity in the form of staffing, funding, expertise, data, and other factors. For example, virtually all states could benefit in their decisionmaking from more robust data on local environmental conditions, and many states lack adequate funding, staff, and other resources.

Private Sector Synergies and Opportunities

Private environmental governance (PEG)—which can take a range of forms, including collective standard-setting, certification and labeling systems, corporate carbon commitments, investor and lender initiatives, and supply chain requirements—is already making its mark across industries as diverse as electronics, forestry, apparel, and AI. For example, roughly 20 percent of the fish caught for human consumption worldwide and 15 percent of all temperate forests are subject to private certification standards. In addition, 80 percent of the largest companies in key sectors impose environmental supply chain contract requirements on their suppliers. And investors are increasingly taking environmental, social, and governance (ESG) factors into account, including risks related to climate change. A 2022 study estimated, for example, that assets invested in U.S. ESG products could double from 2021 to 2026, reaching $10.5 trillion. 

As professors Vandenbergh, Light, and Salzman explain in their book Private Environmental Governance: “If you want to understand the future of environmental policy in the 21st century, you need to understand the actors, strategies, and challenges central to private environmental governance.” 

Given the scope of PEG activities, it is not surprising that a range of regulatory regimes are implicated, including corporate governance, contract, antitrust, and consumer protection laws. In some cases, these legal regimes place constraints on the forms and scope of PEG initiatives. Many contend, however, that these constraints are inadequate, as reflected in recent efforts to severely curtail ESG initiatives. 

Further, some scholars and advocates have criticized PEG from an entirely different perspective, citing concerns that PEG measures constitute greenwashing—that is, that they do not actually change corporate behavior or environmental conditions. Another concern is that PEG may undermine support for public governance measures in certain contexts. 

Yet federal legislative gridlock, a dramatically swinging environmental regulatory pendulum, unregulated new technologies, and other factors point to needing a better understanding of how PEG can be leveraged to advance environmental protection efforts—including the improved functioning of cooperative federalism.

Sample Topics for Multi-Perspective Discussions
Building a robust and widely disseminated information base

How can we use innovative approaches for preserving existing data and collecting new data on environmental conditions, regulated entity performance, and pollution impacts to enhance interoperability of local, state, and federal systems, foster consistency among assessments of risk, and help align priorities and approaches?

Leveraging traditional state and local powers

Problems such as climate change require a whole-of-government approach and could benefit from leveraging adjacent state and local regulatory authorities in areas such as land use (e.g., zoning), infrastructure, and public health.

Enhancing connectivity within jurisdictional nesting and fostering networks of state-level and local-level regulators to align priorities

Bolstering state and local officials’ networks for sharing data, best practices, and regulatory innovations may help align priorities and produce further progress on cross-jurisdictional problems as well as new challenges such as permitting reforms.

Examining how PEG can be leveraged to advance environmental protection

For example: What are the effects of PEG (e.g., emissions reductions)? What are its drivers (e.g., brand reputation, shareholder actions, employees, and corporate customers)? Are there ways to reduce greenwashing and greenhushing? And how can we ensure that PEG complements public governance?

Leveraging new technologies for capacity-building

For example, AI and advanced monitoring technologies—if thoughtfully leveraged—could lessen the burden on state and local governments, particularly those that are under-resourced, in their efforts to assess climate risk, develop resilience plans, and monitor regulatory compliance.

Conclusion

The environmental gains of the last half-century demonstrate that governance choices matter. The United States built a system capable of addressing the urgent environmental crises of its time by combining scientific expertise, democratic accountability, and enforceable legal standards. 

Today’s urgent challenges—climate change, biodiversity loss, and pervasive pollution—demand a similar alignment under far more complex conditions. The challenge is not merely to regulate more, faster, or differently, but to recommit to decisionmaking that is credible and durable: by restoring confidence that evidence matters, that participation is meaningful, that tradeoffs get confronted honestly, and that rules will persist long enough to justify investment and collective effort.

The path forward lies neither in abandoning the foundations of environmental law, nor in relying solely on technological or private solutions. It will be found by strengthening and adapting existing governance structures—integrating cross-cutting objectives across domains, clarifying roles across jurisdictions, and rebuilding the shared evidentiary base and institutional capacity needed to act amid uncertainty, rather than deferring action in pursuit of unattainable certainty. And it requires clear communication about today’s complex, dispersed challenges that enhances understanding and reduces polarization. 

At its core, the triple planetary crisis is a democratic and governance challenge: how societies decide, together, to protect people and places while sharing costs and benefits fairly. Meeting that challenge will require systems capable of carrying both technical complexity and public trust, as well as a sustained commitment to invest in institutions that can decide, act, and endure.