Protecting America’s S&T Ecosystem

Over the past eighty years, the United States has maintained both economic and military primacy thanks to our technological superiority. Other countries have sought to replicate this advantage, some of whom (particularly the People’s Republic of China, Russia, Iran, and North Korea) have an interest in replacing the United States as the primary global power, including through coercive and clandestine measures. 

Current efforts are centered around National Security Presidential Memorandum 33 (NSPM-33). The memorandum was designed to deal with concerns related to a very specific problem: state-motivated concealment of ties with an adversarial military in activities at universities that were sponsored by the U.S. government.  Obviously, concealment and fraud cannot and should not be rewarded in the U.S. system.  

Over time, the bridge between concealment and espionage have proven difficult to establish in court, the research community writ large has (consequently) remained resistant to change, and the PRC has amplified the impact of U.S. research security measures and prosecutorial failures to drive a wedge between the government and members of the academic and Asian American communities.

It is prudent to recalibrate U.S. research security efforts to 1) maintain America’s ability to participate in global discovery by aligning research security efforts with associated risk of specific research activities and 2) create a clearer system for the identification of national security information (NSI) so as to enable protective measures, like the Espionage Act, to do their job.  The status quo, which relies on the notion that information must remain in the open and be widely shared while somehow remaining out of reach of our adversaries, must be rationalized.

It is in the interests of the United States to appropriately protect information that needs to be protected while maintaining our participation in new discoveries to maintain our competitive advantage.  Current efforts, which are focused on university faculty and partnerships, should be rebalanced to focus on risky technologies and ensuring that the source of most patented or sensitive technologies – the private sector – is adequately protected.  Our current efforts are costly, excluding talented researchers with global connections from participating in the science and technology ecosystem and cutting the United States out of global discovery.

Challenge and Opportunity 

There are a number of key challenges that a more effective research security regime will need to address:

NSPM-33 protects the government’s money from people, but it does not protect research or ideas from foreign appropriation

Ideas are the fundamental currency of technology competition.  When a researcher or team of researchers have an idea, they frequently turn to their government for the financial backing necessary to nurture it.  Under the NSPM-33 common forms requirement, the government’s evaluation of the proposal for funding includes an analysis of any conflicts of interest or commitment that individuals on the research team might have with an adversarial foreign government.  

The research agency is expected to decline the funding opportunity if an applicant for funding has an ongoing or recent relationship with institutions that are backed by adversarial governments or military organizations (among other potential conflicts).  In some circumstances, particularly when an individual or team intentionally misrepresents their relationships with adversarial governments, the federal agency may seek additional administrative remedies, including suspension and debarment.  Similar processes are in place in government agencies that operate cooperative user facilities, primarily our national laboratories. Lying is bad, and it goes without saying that lying to conceal a relationship with an adversarial government is also bad.

Unfortunately, the story doesn’t end there. Because the idea is still wholly in the possession of the individual who applied for a federal grant, the individual or team is now free (and even incentivized) to seek alternative sources of funding. The fact that the government declined to support their work based on research security concerns as opposed to merit could validate the researcher’s belief that they have a meritorious idea worth pursuing elsewhere.  A rare unscrupulous researcher could use that knowledge to determine the idea may be of interest to foreign militaries and actively look to adversarial governments for support.  In effect, the government’s review of a person’s institutional affiliations, followed by a decision to exclude them from U.S. government funding, and subsequent visa revocations or revocations of permanent residency increases the probability of hostile foreign acquisition of talent and technology. 

This conclusion is not based on speculation.

We’ve Been Here Before

History shows that strategic catastrophes follow when the government shuts people out of the American R&D ecosystem. In the 1950s, the United States detained and deported Qian Xuesen, co-founder of our Jet Propulsion Laboratory, based on fear that he supported the Chinese Communist Party.  He was tapped to build the PRC’s ballistic missile and space programs, creating the strategic situation we find ourselves in today.  Xuesen was so angry at the United States government that he later refused to take part in activities celebrating the normalization of relations.  The United States made a similar error with Erdal Arıkan, the father of 5G wireless technology’s polar codes, albeit not out of concern for his affiliations.  A lack of U.S. government support for long-term theoretical mathematics research led Arıkan to return from his positions in the U.S. to Turkey.  Some years later, he was approached by Huawei. Huawei representatives quickly assimilated Arıkan’s work and became the world’s leading supplier of wireless technology.  Subsequent government efforts to try to mitigate Huawei’s competitive advantage undoubtedly cost the U.S. taxpayer more than if we provided adequate or tailored funding for Arıkan’s career (and those of many other researchers) in an environment where it was more likely to be acquired by U.S. technology companies with less risk to the national security and the security of our international partners.  

I have been briefed on other examples.

Funding Cuts Compound the Talent Crisis & Encourage Exodus

The Executive Branch’s massive funding cuts for research make the current situation even more problematic, creating compounding incentives encouraging talent to leave the country.  Other countries have become better at attracting and retaining talent in recent years, weighing against the longstanding U.S. competitive advantage in talent attraction and retention.  This has been particularly notable in fields like artificial intelligence, where particularly productive individuals have made significant contributions to the development of AI models like DeepSeek.  The British-based talent intelligence firm Zeki Data points to the strengthening of markets for talent in Europe, the Persian Gulf, India, and China as additional reasons for a decline in U.S. AI talent attraction.  Recent surveys suggest that as much as seventy five percent of the scientific community is looking for opportunities overseas.  Notable recent departures from the United States due to green card denials, federal funding challenges, and federal layoffs include ChatGPT developer Kai Chen (moved to Canada), spaceflight safety expert Jonathan McDowell (moved to the UK), and carbon capture expert Yi Shouliang (moved to China).  Neuroscientist Ardem Patapoutian was offered 20 years of funding in China in “any city, any university” after being cut off from NIH funding as part of the Trump Administration’s recent cuts, earning himself a mention during Marcia McNutt’s State of Science address at the National Academies.

Result: Innovation Increasing Elsewhere, Excluding Americans

When we assume that genius stems primarily from the U.S. system, and in particular from government funding, we are surprised when innovation happens somewhere else.  The government’s principal method for controlling ideas and innovation only happens after a legal relationship between the researcher, their home institution, and the U.S. government is established.  For advanced technology development, the overwhelming majority of which is funded by the private sector, we are principally reliant on export controls (which have a dismal history of successfully limiting knowledge and advanced technology transfer).

Science agencies aren’t adequately equipped to mitigate harm (and some NSI probably isn’t appropriately identified or protected)

For decades, National Security Decision Directive 189 (NSDD-189) was the primary governing document for research security measures. When it was released at the height of U.S.-Soviet tensions during the Reagan Administration, the academic community focused primarily on one line–”It is the policy of this Administration that, to the maximum extent possible, the products of fundamental research remain unrestricted.” NSDD-189 was meant to cover only a subset of research – fundamental research – defined in the document as “basic and applied research in science and engineering, the results of which ordinarily are published and shared broadly within the scientific community, as distinguished from proprietary research and from industrial development, design, production, and product utilization, the results of which ordinarily are restricted for proprietary or national security reasons.”  On balance, the overwhelming majority of U.S. research performance and expenditure is not covered by NSDD-189, but rather developed by industry and the defense sector (which are covered later).

Understanding When Controls Must Be Used

Relatively little attention has been paid to NSDD-189’s second policy idea, which emphasizes that “where the national security requires control, the mechanism for control of information generated during federally-funded fundamental research in science, technology and engineering at colleges, universities and laboratories is classification.”  Proper identification of potentially classifiable information and NSI is essential for the Justice Department and Intelligence Community, who wish to protect such research under national security statutes.  The policy also acknowledges there may be alternative applicable statutes, like export control regimes, which have generally remained consistent with NSDD-189 (more on that later).  

The classification system remains the most straightforward way to give our security community the tools they need to protect America’s critical national assets.  This mirrors the findings of the 2019 JASON report on Fundamental Research Security, which argued that “Making the case, for classification reasons, that a new technology might be of national security value is far simpler than assessing its potential economic impact, even if economic security is equated in some way with national security.”

To fully appreciate what this means, one needs to understand when and why information should be classified. Classification is governed by Executive Order 13526 (E.O. 13526), which establishes the processes by which agencies establish national security classification protections.  The lowest level of classification, which is termed “confidential,” is defined as information that “the unauthorized disclosure of which reasonably could be expected to cause damage to the national security that the original classification authority is able to identify or describe.” The bar for identifying and labeling “confidential” classified information is relatively low, and yet it is one that many science agencies lack the authority to implement. 

Whether information should be treated as classified or unclassified is the responsibility of an Original Classification Authority (OCA).  These authorities are a short list of senior government officials authorized by the President to review information and determine whether or not it is classified.  Agencies that do not have representatives on the list are limited to being able to label information that is derived from other classified sources.  To determine whether new information must be classified, individuals from other agencies must depend on determinations made by individuals granted OCA or delegated classification authorities.

Classification Constraints for NSF, NIH, others

The trouble begins when we realize that extramural funding agencies like the National Science Foundation (NSF) and National Institutes of Health (NIH) have no officials possessing OCA at the Top Secret (or even Secret) level, making officials in those agencies entirely dependent on members of the security community and White House to make such determinations. A broad lack of access to classified facilities limits the access that program managers have to relevant intelligence sources.  Many NSF officials, for instance, need to travel offsite to engage with classified materials.  

As a result, I have been told that classification reviews of new and novel information produced by these agencies almost never happens.  If a researcher or program officer believes that the information that is produced in the course of research may damage national security and reports their concern to their funder in accordance with the terms of their grant (as is the case for NSF), the research agency lacks the authority to evaluate the information and defend their determination. Put another way, the research agency also lacks the authority to determine when edge-case information it produces should remain unclassified. 

E.O. 13526 anticipates this and allows for the referral of information to original classification authorities at other agencies.  Again, the need to refer elsewhere means that such reviews almost never happen, except for in those agencies which already possess the ability to identify and classify information (such as the Department of Defense or Department of Energy). Things get more absurd—one of the co-chairs in the Research Security Subcommittee of the National Science and Technology Council wasn’t able to read relevant research security-related intelligence products due to their reduced level of access.  This had the effect of limiting the materials that could be discussed by the research security subcommittee.

Science Agencies Placed in a Bind

When an officer from a science agency or some other official makes the claim that the research they support could harm U.S. national security if it is improperly disclosed, the question “why wasn’t it classified” must immediately follow.  A lack of original or derived classification authority means that the individual in question is not empowered to make the initial assessment of whether or not the information needs to be protected through the classification system.  This effectively places them in conflict with the substance of E.O.13526 and unable to fully implement NSDD-189.

Defenders of the status quo would correctly point out the few examples of harm caused by the unauthorized disclosure of early stage research. While this is true, especially for agencies like the NSF or NIH, the inability to make a determination when paired with more recent Congressionally-directed emphasis on later-stage research (under organizations like the NSF Technology and Innovation Partnerships Directorate, ARPA-H, and other longer-standing programs involving gain of function research) increases the likelihood that later-stage research that could have national security implications will not be appropriately identified and protected.  The expansion of research portfolios into applications must be accompanied by policies and a readiness to accept the resulting changes in responsibility.

The definition of basic and fundamental research has lost meaning over time and must be reestablished

During my time as Assistant Director for Research Security in the Office of Science and Technology Policy, security agencies would routinely point out that a fair number of basic research projects have fairly easily describable national security implications. I would agree with them and then take this one step further–a great deal of research that is categorized as basic research by science agencies fails to satisfy the statutory definition of basic research established for the Department of Defense.  That definition is as follows:

Basic research is a systematic study directed toward greater knowledge or understanding of the fundamental aspects of phenomena and of observable facts without specific applications towards processes or products in mind.  It includes all scientific study and experimentation directed toward increasing fundamental knowledge and understanding in those fields of the physical, engineering, environmental, and life sciences related to long-term national security needs. It is farsighted high payoff research that provides the basis for technological progress.

Similarly, the Federal Acquisition Regulations define the term as follows:

Basic research means that research directed toward increasing knowledge in science. The primary aim of basic research is a fuller knowledge or understanding of the subject under study, rather than any practical application of that knowledge.

The Export Administration Regulations (EAR) create two definitions to help establish what is considered fundamental research:

Fundamental research means research in science, engineering, or mathematics, the results of which ordinarily are published and shared broadly within the research community, and for which the researchers have not accepted restrictions for proprietary or national security reasons.

It is not considered fundamental research when there are restrictions placed on the outcome of the research or restrictions on methods used during the research. Proprietary research, industrial development, design, production, and product utilization the results of which are restricted and government funded research that specifically restricts the outcome for national security reasons are not considered fundamental research.

A fair number of defense research activities (including research supporting computing or hypersonics, as well as almost all “use-inspired research”) does not meet the statutory definition of basic research, but is often categorized as such despite the fact that the name of research or research field describes the applications or processes in mind.  In the decades after NSDD-189, agencies have expanded on this definition in order to become inclusive of use-inspired research, muddying the waters.  

For a final level-set, classified information and controlled unclassified information (CUI), cannot be defined as fundamental research in the eyes of the government as the act of controlling the information places the work outside the EAR’s fundamental research definition.  This should be fairly straightforward given the fundamental incompatibility between the two definitions.

Applied Research Can Also Suffocate in a Closed System

Individuals who defend applied research activities being primarily handled in the open often point out that the overwhelming majority of these activities need to happen in the open in order to maintain technological competitiveness, pointing to the success and strength of open source technology platforms.  I would agree and go one step farther: the vast majority of applied research would be worse off in a restricted environment, and there are many good reasons for why NSDD-189 included applied research in its definition of fundamental research.  I also would point out that the strength of the EAR’s definition is also the acknowledgement that as soon as one feels the need to protect information from misappropriation, that information cannot and should not be managed in the open.  It is incoherent to say that information needs to be in the open but also remain inaccessible by the Chinese government–the two notions are obviously mutually exclusive.  To quote the JASON report on Fundamental Research Security, “The fundamental research exemption is based on the idea that the general nature of the knowledge produced in fundamental research cannot be controlled.”

Gold Standard Science and Industrial Diffusion Require Openness

While it is true that basic and fundamental research provide the basis for information that could eventually be used to support the development of sensitive and national security relevant technologies, expanding the “grey space” of risky information to include almost all basic research (as the Department of Energy and other departments are apparently now doing, as reported by some university officials) risks information paralysis.  After all, literature review is an important first step in the scientific method, and shielding information from external eyes prevents research from being scrutinized in a way that results in the development of new questions or ideas, not to mention the entire system of scientific rigor.  While it may be true that basic research may be used to develop security relevant technologies, it is not necessarily possible to entirely derive security relevant technologies from individual discoveries.  Getting information into the hands of U.S. industrial players becomes much more difficult once it is restricted.  This is especially for startups, small businesses, and smaller universities that cannot afford to sponsor vetting all of their employees at the level of “public trust”or higher in order to gain access to protected U.S. government-sponsored discoveries, creating consequences for American innovation, competition, and participation.

All of this is more difficult in a world where artificial intelligence can aggregate and synthesize information more rapidly than a team of graduate research assistants. Such risks must be accounted for. Still, it makes more sense to establish a strong governance framework around AI use cases rather than reordering how we govern the rest of society after every technological breakthrough.  

Need to Limit the Grey Space

It is true that a great deal of research exists in a grey space where it needs to happen in the open in order to advance despite the fact that it may cause harm.  Decisions to allow research that could cause harm to remain in the open should reflect the careful consideration of government program managers paired with unambiguous instructions to award recipients.  It is in the capacity of the government (and not within the natural capabilities of our universities who have an incentive to seek the least restrictive framework possible) to determine and assess whether information might present a threat to national security and require that defensive measures be implemented, accordingly.  

Muddying the waters around which research needs to be protected and which research does not has real world consequences for U.S. universities and non-governmental research organizations.  When faced with ambiguity about whether or not they should be engaged in a particular research activity, university administrators frequently take the most cautious approach possible to protect themselves from potential liability and reputational damage.  The talent, on the other hand, retains their ability to choose to do their work elsewhere if an environment becomes too restrictive. 

Securitization has significant costs (including to American leadership)

Later in his life, Richard Feynman recounted a story about the impact of compartmentalization on the Manhattan Project.  Uranium-235 separation happened at Oak Ridge National Laboratory while the research underpinning that production took place at Los Alamos. Because the researchers at Oak Ridge didn’t have the fundamental understanding of nuclear physics necessary to troubleshoot their work, Uranium-235 production ran into numerous headwinds until Feynman and others managed to convince the government of the necessity of sharing knowledge with the Oak Ridge team.  Once the knowledge was shared, production went back on track.  Failure to share the information with other scientists risked a major accident at Oak Ridge, not to mention an end to the Manhattan Project.  In today’s environment, where incentives are structured in such a way that critical decisionmakers frequently sit on research security-related decisions rather than subjecting themselves to internal or Congressional scrutiny, I am less confident that someone like Feynman would reach a similar outcome.

Measuring Direct and Indirect Costs

It is relatively easy to measure the financial costs associated with research security measures. It is much more difficult to evaluate their effectiveness. The fact that there have been few publicized significant examples of national security harm resulting from the sharing of government-sponsored basic research makes having a serious discussion about the efficacy of our existing research security measures outside of classified environments almost impossible.

Costs include increased administrative burden, support staff, defensive cyber systems, facility access controls, and more.  These costs cannot be easily accounted for in an individual grant’s direct costs, so we can safely assume that decreases in support for indirect costs under the current administration will result in the cost of CHIPS and Science-mandated research security programs placing additional pressure on other scientific activities. Research security compliance costs are overwhelmingly treated by the government as mandatory expenditures, while (confoundingly) the cost of operating a world-class laboratory is not (except for in instances where a grant is explicitly for the development and maintenance of advanced scientific infrastructure).

Costs to American Competitiveness

While we can also measure the economic impact of fewer students and scholars in the United States, it is more difficult to measure the cost of the United States being absent from international opportunities.  Proposed legislation like the SAFE Research Act, while intended to prevent U.S. government funding from supporting research involving adversarial nations, effectively could result in U.S. researchers being forced to abandon high-potential discoveries produced by large and diverse international research consortia if researchers from adversarial countries are part of the team.  This would give the PRC veto power over the involvement of the U.S. government-supported individuals in international research consortia–an alarming consequence, especially when one considers the Australian Science Policy Institute’s recent finding that Chinese research institutions dominate 57 out of 64 critical technology fields and the fact that some international projects require the work of thousands of scientists from dozens of countries. In fields like space research and development, the PRC’s domestic talent base is sufficient to maintain competitiveness while they actively test capabilities where the United States is years away from deployment (and might beat the United States back to the moon).

The absence of U.S. researchers in such consortia does not mean that the research does not happen–it merely means that the consortium needs to find other partners who are able to do the work.  The fact that the infrastructure supporting such discoveries is frequently located exclusively in Europe and Asia (thanks in part to decades of U.S. underinvestment in research and development infrastructure) means that there is additional incentive for American researchers to travel and work abroad in order to maintain access to unique capabilities located in adversarial countries to advance their careers and test new technologies.  

This problem is particularly acute in fusion energy research and development, where U.S. companies maintain a technological advantage but lack the domestic extreme radiation environment testing infrastructure necessary to develop critical components necessary for long term operation (the inner walls of the reactor, or blanket).

A protect-oriented research security framework makes sense if the United States is safely in the lead and competitors are decades away from catching up.  But in a more competitive environment, when other countries become more productive and competitive, decisions that force the United States to abandon collaborative efforts are much more consequential. Given the relative size of the U.S. population with that of our competitors, this mitigates the ability of the United States to unilaterally define global rules governing the conduct of research.

Taken together with my previous point about the need to re-establish the definition of basic and fundamental research as a bright line test and giving science agencies greater ability to control information, I’m placing an awful lot of responsibility on the backs of federal research managers.  Having spent most of my professional life in the civil service as a clearance-holder, I would argue that managing such weighty decisions is exactly what federal employees are paid to do.  In the same talk mentioned earlier, Feynman described the admiration he had for the speed with which members of the security establishment could make decisions upon which the fate of the nation rests.  If we abandon the flexibility of officers to give informed answers to our research apparatus promptly, we do so at our own peril.

We know dangerously little about what is happening in Chinese laboratories, and fear is getting in the way of dealing with our knowledge gaps and urgent security challenges

As a former CIA colleague recently briefed Congressional staff, it is very difficult for trained intelligence officers from both the United States and the PRC to successfully infiltrate the academic environment due to the level of academic training required and fluency in domain-specific technical terms and practice.  When the United States occupies a position of preeminence, and the primary sources of technical knowledge originate in American laboratories, then it makes sense to limit partnerships with individuals and organizations who have competing interests.  Unfortunately, the United States cannot count on itself being in the lead in many scientific domains any longer.  Our ability to avoid technological surprise depends, in part, on knowing what’s going on in Chinese labs.

The research security push over the past decade makes this much more difficult.  The growing absence of routine technical interactions with Chinese counterparts means that our scientists and engineers have limited insight into the PRC’s work.  The emphasis of research security requirements aims to restrict interactions with the PRC’s Seven Sons of National Defense, which also happen to be the PRC’s top performers in science and technology discovery.  Without knowledge of what is happening in these institutions or the ability of our scientific community to ground-truth intelligence assessments (which are frequently compiled by non-experts), we increase the probability that we will not have the immediate knowledge necessary to replicate such discoveries or worse, fall victim to technological surprise.

Degraded Capacity Creates Larger Competitiveness Risks

The challenge is compounded when we consider the degradation of our own technical capacity.  In fields like nuclear fusion and radio astronomy, the United States simply does not have the laboratory capabilities that are necessary for our scientists to remain at the cutting edge. In 2023, the Fusion Energy Science Advisory Committee found that “rapid progress toward commercial fusion power will likely rely in part on research at existing or near-term international facilities that provide capabilities presently unavailable in the U.S., such as long-pulse magnetic confinement” – an area where the PRC has invested billions of dollars in recent years.  Decoupling and fear of information transfer also has limited our ability to communicate with the PRC in areas like spaceflight safety, where we generate dozens of conjunction warnings every day involving potential near-misses with Chinese satellites.  The consequences of non-communication or inefficient communication with PRC satellite operators resulting from a collision cascade includes the loss of access to particular orbits, significant damage to U.S. and allied commercial and security assets, and potentially unlimited liabilities for the United States under the Outer Space Treaty.  Such a disaster would have dire consequences for our economy and space-dependent national security establishment.

This creates a Catch-22 for American scientists and engineers who want to see the United States advance in the field or see their fields advance.  On the one hand, testing new technologies on foreign platforms creates substantial risk of foreign appropriation, similar to the risks experienced by American companies operating in China since normalization.  On the other, the lack of access to similar domestic or allied capabilities freezes the ability of U.S. commercial interests to keep pace with foreign competitors. 

Increased Departures, Stunted Discovery Process 

The U.S. government, likewise, cannot count on individuals who have devoted their lives to a particular field of inquiry to wait for the government to invest in these capabilities in the next five to ten years (an overly optimistic projection for the budget process, alone). History shows us that brain drain (or loss) in such situations is inevitable.  America’s ability to remain competitive will increasingly depend in some part on our access to PRC-derived knowledge sources absent new and significant U.S. government investment.  A research security regime that unnecessarily restricts U.S. collaborations based on institutional affiliation or connections, rather than domains where the risk of loss exceeds potential benefits, is more likely than not to limit our insight into new knowledge early in the discovery process, and throttle U.S. technological development in a way that will make it more difficult to compete in the future.

Research integrity has become securitized, confusing the ways in which we approach domestic challenges

Research integrity and research security are interlinked, and it is absolutely true that when contracts developed by a foreign government encourage researchers to lie on federal grant applications that is a major cause for concern and demands a national-level response.  But no matter how egregious such behavior is, and no matter how proximate violations of the False Claims Act are to violations of the Espionage Act, they are different parts of the criminal code.  If the government wishes to bring a case before the courts where an individual has both lied to the government on official documents and provided NSI to a foreign government, it is in the power of a prosecutor to do so and seek appropriate penalties.

On the domestic side, organizations such as Retraction Watch have justifiably brought attention to research misconduct over the past ten years.  There are significant examples of senior researchers, including 13 retractions from a Nobel laureate, of having engaged in alleged violations of research integrity.  Many of these cases involve the knowing manipulation or falsification of data.  I would argue that such cases do more to damage the integrity of the U.S. research ecosystem than individuals omitting information about their affiliations in grant applications, primarily because the publication of falsehoods into the scientific corpus, exacerbating the “replication crisis” in biology, psychology, and behavioral economics, and poisoning the scientific establishment’s credibility.

In several recent domestic misconduct cases, the researchers in question were able to return to their laboratories, continue their day jobs, and even found unicorn technology startups.  On the other hand, undisclosed affiliations, patents, or grant support have resulted in the termination of 119 scientists (almost half) investigated by the National Institutes of Health as part of their foreign interference efforts.  Some researchers have been denied access to funding despite no credible link to foreign talent programs or other similarly concerning affiliations.  A reasonable observer would be left to wonder why an undisclosed patent or grant support is a greater violation than the intentional falsification of the results of their research.  To the observer the answer may seem obvious–in one instance, we have a nexus with a foreign government; in the other instance, we don’t.  

As former Assistant Attorney General Matt Olsen noted when the Department of Justice drew down the China Initiative in 2022, “by grouping cases under the China Initiative rubric, we helped give rise to a harmful perception that the department applies a lower standard to investigate and prosecute criminal conduct related to that country or that we in some way view people with racial, ethnic or familial ties to China differently.”  The same must be true of administrative actions related to academic misconduct, especially as countersuits undermine the government’s earlier claims.  

Proper NSI Identification Removes Ambiguity, Releases Burden

This is why it is important that national security-sensitive research be clearly identified as NSI through the classification system.  Once information has been appropriately classified, then who gets to handle the information becomes just as important as how the information is handled.  Individuals who handle classified information are expected to report their contacts, affiliations with foreign governments, and all other information that is necessary to maintain the public trust as a matter of national security.  Violations related to improper handling may be appropriately elevated under the Espionage Act and successfully prosecuted in court.  Our current attempts to substitute judicial action with administrative penalties creates a culture of fear and suspicion in our university establishment, rhyming with some of the darkest periods in the history of our nation’s research and development enterprise.  The culture of fear, compounded with a lack of clarity about what needs to be protected, dramatically increases the probability that universities will improperly exclude individuals from certain racial or ethnic backgrounds on activities that have little relevance to national security.

Plan of Action 

Recommendation 1.  Congress should, with the support of the Administration, reinvest in American research and innovation, including in foreign talent attraction.

The United States is no longer in a position where American technological superiority is assured; the fact that the PRC is leading in publications in many critical and emerging technology fields, and in the deployment of advanced technologies in certain domains, should be of immediate cause for concern.  Likewise, anticipated declines in foreign talent enrollment in U.S. universities and shrinking PhD programs at major U.S. universities in response to federal policy actions should be of significant cause for concern. In 2024, the Defense Department reported that “unfunded research, development, test, and evaluation (RDT&E) infrastructure requirements were shown to have grown significantly since annual reporting began in 2018, putting the military at risk of losing its technological superiority.”

Funding also provides the government with the leverage needed to protect American interests.  The Department of Defense’s use of contracts to protect U.S. security interests is already common practice, most famously recently used to ensure that SpaceX’s Starlink would not cut service in Ukraine. Similarly, the government cannot classify, restrict, or otherwise currently control information that is produced outside the federal research ecosystem. Even if it is possible that an idea may make it back to a foreign government or adversarial military, it is far better for the United States to have participated in the development of that idea (and have the chance to exploit relevant intellectual property) than it is to surrender the innovation, in total, to a foreign power, robbing us of the chance to compete. Sometimes, in order to control the dissemination of an idea, the government must invest in it, and that means that the government should actively seek to invest in projects in order to participate in the discovery and gain access to its benefits, even when our partners might be less than ideal or when there is risk that the information could also be used by a hostile foreign power.

The government can and should create terms in its funding mechanisms that enable it to mitigate risk where necessary, as well as giving it graduated negotiating flexibility in circumstances where we may require enhanced measures (like encouraging researchers to sever ties with adversarial entities like the PRC).  These should not be mandatory, as is the case with the SAFE Research Act, which effectively gives the PRC the power to veto U.S. participation in large multinational consortia if individual PRC researchers become affiliated with an effort, thereby reducing the ability of the United States to set terms for increasingly multilateral scientific activities.

Recommendation 2. The Office of Management and Budget (OMB) should require rigorous and independent cost-benefit analysis of existing research security efforts before approving new research security-related regulations and requirements as part of the Paperwork Reduction Act review process.  Congress should exercise similar care before imposing similar requirements on academic institutions given the significant costs associated with research security programs.

The use of grants to impose research security requirements that alter an organization’s behavior at the institutional level are, in effect, de-facto regulatory action with significant economic impact (as defined in Executive Order 12866).  As attesting that an institution has a research security program that meets the requirements of a granting agency requires significant economic expenditure across many academic institutions, new research security requirements should be subject to the same cost-benefit analysis as other significant regulatory actions.  Paired with declining support for indirect costs and limited public evidence that disclosures of federally-supported basic research has harmed U.S. national security interests, there is a financial imperative to ensure that research security programs are aligned with actual instances of observed harm to national security and the unintended or illicit transfer of national security information (NSI).

Consistent with recommendations from the recent National Academies report on simplifying research regulations and policies, existing research security efforts should be recalibrated to mitigate risk of loss as opposed to blanket bans on interactions as a result of institutional affiliation.  Efforts that result in reduced American participation in global discovery around critical technologies directly cut against our country’s technological competitiveness and should be retired and rescinded, as the most likely outcome of such measures is not the protection of information, but rather the inability of the United States to develop and deploy new and novel capabilities.

What could the costs for expanded research security programs look like, especially given that some agencies are allegedly treating basic research activities as CUI?  As noted in the JASON report on Safeguarding the Research Enterprise:

“The supporting apparatus for access controls would impose significant cost on the conduct of research and reduce research funding efficiency. JASON received from NSF cost estimates for what the University of Oklahoma has spent to support such work, for example. A warehouse-type building for CUI experiments was estimated to have cost $2M, and a new office building with access control adequate for classified work cost $7M. Building construction costs are only about 10–20% of their life-cycle ownership costs, translating to roughly $1–2M per year for both buildings. Required security and compliance staff add cost of four full-time equivalent personnel, equating to another $1M per year. Thus, a medium to large ($1–3M/year) research program might incur security costs around $1–3M per year above the baseline research cost, roughly doubling the cost of carrying out that research. This would constitute a serious loss of research efficiency. Slowing research by half could easily allow countries like the People’s Republic of China (PRC) to pull ahead in strategic fundamental research areas.”

Strangling our research enterprise is an unacceptable outcome and must be avoided.  Likewise, expanding research security reviews within agencies would likely require significant expansion of agency personnel available to conduct rigorous national security assessments and cut against the availability of funds to do actual science.  Such a tax on research funding is an additional cost to American competitiveness, reduces the number of grants made to institutions, and serves to further exclude institutions in EPSCoR jurisdictions, HBCUs, and other MSIs, emergent research institutions, and nonprofit laboratories that already face significant resource constraints.  

In light of preliminary assessments of cost to the research enterprise, the Government Accountability Office (GAO, which is within the legislative branch) may wish to also assess the effectiveness of research security programs compared with their cost of implementation, and in particular the impact on programs that do not have a clear NSI nexus.

The Administration and Congress must ask if they are prepared to significantly increase funding for research programs in order to deal with such significant cost inflation due to mandatory research security program requirements.  The situation will only become more dire if the government caps indirect costs at 15 percent, as several science agencies recently attempted to impose, which would place research security costs in even greater conflict with the fundamental resources necessary for an institution to conduct research.  Such security measures are meaningless if the research institution lacks the staffing, facilities, or administrative support necessary to conduct cutting-edge research.  Care should be taken to ensure that federal policy actions attempting to reign in administrative costs do not cut against our ability to operate world-class research institutions.

Recommendation 3. Congress should figure out how best to handle research security concerns that originate from research supported by non-governmental entities, and empower industry consortia to take measures to secure their sensitive intellectual property.  Incentivizing participation through contracts, cost deferrals, or reduction of administrative burden for protective measures can help.

NSDD-189 warns that “as the emerging government-university-industry partnership in research activities continues to grow, a more significant problem may well develop.”  Contemporaries involved in the drafting of NSDD-189 have told me that the growing role of the private sector in funding university research was of concern to Reagan’s OSTP, and that as industry became a more prominent funder, then the government would have limited controls to prevent the transfer of industry-derived information to foreign powers.  Given the increased presence of commercial research laboratories, and more recently focused research organizations, the emphasis of research security efforts on federal funding for universities is inconsistent with the balance of U.S. research performance. 

The majority of research considered proprietary or sensitive (as well as the overwhelming majority of all U.S. research) is produced or supported by American companies.  As a matter of policy, most universities will not accept controls on the research which they produce, appropriately intending for it to enter into the public domain where it can be more rapidly developed and used for practical benefit.  Academia’s role as a producer and broad distributor of knowledge is fundamental to its role in society and must be protected.  Rather than suggesting that most university research is sensitive, when its primary purpose is to be shared, Congress should be willing to enact enforcement mechanisms that strengthen the ability of the government to protect research relevant to defense and industry.

On this point there are no easy answers.  The danger, of course, is that such measures will inherently restrict the competitiveness of American businesses and their ability to participate in international markets.  Strengthening CFIUS can hinder the ability of American companies to secure sources of investment and invite foreign retaliation.  Expanding our use of export controls will create incentives for foreign governments to diversify away from American suppliers and limit the ability of our companies to shape global supply chains.  Restricting foreign countries’ access to U.S. technology will instead create new dependencies on foreign technology sources, creating a new system of incentives that limit the ability of the United States to set global standards, and force countries to make concessions to the PRC in order to maintain access to their resources and technology.  Our efforts to limit the PRC’s access to semiconductors has already convinced the European Union of the need to become less dependent on the United States in other technology areas.  The difficulty the United States has had in managing the spread of technology from Huawei, Bytedance, and BYD should be instructive, as should growing challenges to U.S. influence in multilateral fora.

The most straightforward way to control information produced outside the government is by getting knowledge producers on contract and creating a system of incentives for companies to do so.  Doing so has two purposes.  First, it allows the government to create a financial incentive for companies to participate in enhanced security measures.  Second, it creates a legal relationship where companies can be required to implement enhanced security measures without significant sacrifices to their bottom line.  This could be through offering to defer costs associated with security clearances and technology measures needed to provide enhanced security and IP protection.  Legal relationships are more effective than education campaigns, given that some producers of high value technology don’t seek government protection given the significant financial and administrative costs.  Still, some firms may refuse to take government grants or contracts to preserve their organizational flexibility.  This is a feature of American capitalism, not a bug.

Managing these tradeoffs is fundamental to addressing our research security challenges.  If we want to get serious about addressing research security, our greatest efforts need to be directed to research that is higher in the value chain with national security consequences.  Most of that research doesn’t happen in universities.  Policymakers must be prepared to accept the inherent tradeoffs that come with decisions to place restrictions on the American research enterprise.

Recommendation 4. The Executive Branch should grant Original Classification Authority to the heads of extramural granting agencies and more strictly apply the federally-recognized definitions of basic and fundamental research.

As federal extramural funding agencies move toward more technology-relevant solutions and later-stage technology development and deployment, the ability of agencies to defend decisions to keep research in the open environment will become much more acute.  While the risks of overclassification are real, the risks are not greater than in fields like diplomatic engagement or global development (both by definition require continuous engagement with foreign partners, especially with those who wish to challenge American interests). The Department of Defense and Intelligence Advanced Research Projects Activity (IARPA) experience and history of prudent and limited classification determinations should provide some relief.  

This will not be cheap, and agencies will need appropriate resourcing in terms of personnel and capital to manage the workload and specialized technology facilities (Sensitive Compartmented Information Facilities, or SCIFs, and secure work areas) to manage classified information.  

More strictly applying the federal definition of basic research to research that is “without specific applications or processes in mind” should help agencies separate activities that have clear industrial, commercial, or defense-related applications from research that is truly foundational.  The task of establishing a “bright line” between applications with national security potential is likely to be significant, but it is likely to be far less costly than passing that cost onto thousands of laboratories around the country to make their own risk management calculations based on less complete information.  NSF has created the SECURE Center to mitigate this, but again, the solicitation for the Center notes that the Center does not conduct investigations, hold or manage classified information, or assume liability for the consequences of its products.

Agencies will also need to deal with the fact that many universities and other non-governmental research organizations frequently do not accept funding that comes with classification, CUI, or other burdensome requirements. In such instances agencies will need to weigh the value of the research in question and balance that with the risk of foreign appropriation.  In instances where the risk of foreign appropriation is significant, the value of the research is high, and the consequences of the United States not participating in discovery are anticipated to be significant, agencies should be willing to provide supplemental funding to enable the research activity to take place and to devote an appropriate level of oversight to ensure that U.S. involvement in the activity is appropriately managed and consistent with overarching federal interests.

The government might also wish to implement alternative funding measures when there is an interest in participating in discovery while excluding the participation of non-aligned governments.  Using the contract or collaborative agreement systems are probably more appropriate than grants in such circumstances, allowing the government to more expressly dictate terms for collaboration.  The government should be willing to use financial leverage to do this, including through funding for large international research consortia, to counter the efforts of adversarial governments while maintaining the ability of the United States to participate in overseas discovery processes.

Yes, this will result in a culture change in some of our premier research-supporting agencies. I would argue that culture change became necessary immediately after the creation of the NSF TIP Directorate, ARPA-H, and similar organizations.  If we expect these agencies to engage more directly in critical and emerging technology development, especially around technologies which are relevant to great power competition, then increased scrutiny of these and similar programs is necessary to protect our critical national assets.

Recommendation 5. The Executive Branch should move all research integrity efforts under the rubric of Gold Standard Science along with other issues related to academic misconduct.

This is not a lengthy recommendation; the integrity of our efforts to preserve and protect the integrity of the research enterprise should be managed under a single umbrella where penalties for unethical conduct can be calibrated to the severity of the misconduct.  This is essential for maintaining buy-in within the research community around both research security and research integrity measures.  Invoking Justice Holmes, if we take the view of an unscrupulous researcher (whom we shall find does not care two straws for ethical conduct in the sciences but does want to know what will result in a loss of tenure) then it is reasonable for them to assume that the government’s ethical framework for science is mediated primarily by our relationship with Chinese research institutions as opposed to Gold Standard Science. It is in the interest of the science and technology ecosystem to correct that notion at the earliest possible opportunity (and presumably to also clarify when the government will charge them with espionage).

Conclusion

It is highly probable that this administration and Congress will seek to implement additional measures related to research security in the near term.  Current and proposed frameworks incorrectly assume persistent U.S. supremacy in science and technology and that new ideas are a product of government innovation.  As the PRC deploys new capabilities that have not been demonstrated by U.S. government or commercial actors, the posture of the United States toward our research security apparatus must change to match the strategic moment.  Measures that isolate the United States from discovery should be retired in favor of new measures that selectively identify and protect critical knowledge vital to maintaining national security.  For the sake of our security and future technological leadership, we must recognize that innovation comes from the work of teams of individuals who are motivated to change the world, and accept that America is less secure when that change happens somewhere else.

Solving the Energy & Climate Infrastructure Finance Rubik’s Cube

Building Blocks to Make Solutions Stick

Capital is not the constraint, alignment is: Catalyzing large-scale climate and energy infrastructure requires government to act as a systems integrator—synchronizing policy, de-risking commercialization, modernizing valuation, and coordinating markets so private capital can move with speed and confidence.

Implications for democratic governance

Capacity needs

Deal templates and archetypes: Clear, standardized financing pathways that signal how government capital will engage at different risk tiers and technology stages. 


Jump to…


Executive Summary

Historic commitments. Huge demand. Massive cost reductions. Ready technologies. Yet, infrastructure deployment levels are underperforming their potential. What gives?  The U.S. clean energy sector has achieved remarkable milestones: solar and wind have tripled since 2015, costs have fallen 90%, and annual clean energy investment now exceeds $280 billion. Yet deployment has arguably fallen short of what both markets and the climate moment demand. The culprit isn’t a single bottleneck: not permitting, not subsidies, not technology readiness alone. The real constraint is misalignment across the multiple interdependent factors that investors need to see in place before committing capital at scale.

Think of it like a Rubik’s Cube: solving one face means nothing if the other five stay scrambled. This paper identifies six strategic levers that, when pulled in concert, can unlock the conditions for large-scale capital deployment:

The good news: the capital exists, the technologies are ready, and infrastructure is a solvable problem. With over 1,000 GW of clean energy in development and electricity demand projected to grow up to 50% over the next decade, the infrastructure build-out represents one of the largest capital deployment opportunities in American history. And global demand for U.S. clean energy technology has never been higher. The barriers identified in this paper are structural and systemic, not fundamental; most of the solutions proposed are actionable in the near term, without waiting for perfect legislation or perfect markets.

The window is open, but not indefinitely. For policymakers and investors alike, the question is not whether to act, but whether to act with the clarity, coordination, and urgency the moment demands. The frameworks, partnerships, and policy tools outlined here offer a practical roadmap for unlocking decades of economic growth, cost-of-living relief, and energy security for communities across every region of the country and beyond. The energy transition is not a cost to be managed; properly coordinated, it is a generational economic opportunity.

America has experienced extraordinary momentum in the growth and transformation of the energy sector. Solar and wind generation has more than tripled since 2015. In 2024, 50 GW of solar power was added to the U.S. grid, which is not only a record, but is the most new capacity that any energy technology has added in a year. Technology costs have plummeted dramatically: utility-scale solar and battery energy storage have each fallen 90% since 2010. Those declines have made them among the lowest cost forms of electricity in many places. Domestic manufacturing capacity has also surged with hundreds of clean energy manufacturing facilities, many of which have already come online. These technologies and projects have been reinvigorating communities and creating jobs across the nation, and we should see the benefits from these advances continue as capital is flowing into this sector at an unprecedented rate. Clean energy investment in the United States more than tripled since 2018 to $280 billion annually, with multiples more in commitments, and private markets alone have raised nearly $3 trillion over the past decade. For more established technologies like utility-scale solar and onshore wind, financing has become standardized. This includes established project finance structures, robust secondary markets, accurate energy production forecasts, and predictable returns that align with the needs of institutional capital. These asset classes now exhibit many of the hallmarks of market maturity: transparent pricing, deep liquidity, sophisticated risk assessment frameworks, and predictable transaction execution. This progress has been galvanized by unprecedented governmental support, including the Bipartisan Infrastructure Law of 2021 and the Inflation Reduction Act of 2022 (IRA), alongside bold state policies, aggressive corporate clean energy procurement, sustained advocacy, and relentless technological innovation. 

Yet despite these achievements and the trillions of dollars in committed capital, the pace of deployment has arguably fallen short of what the market opportunity demands and what the climate crisis requires. Hundreds of IRA-supported energy and manufacturing projects have faced delays or cancellations: much of that is due to increased economic and logistical uncertainty (e.g. in the cost and availability of equipment, permitting timelines, import and export regulations); much of that too is due to sharp reversals in federal funding priorities (e.g. tax incentive changes from the One Big Beautiful Bill Act (OBBBA), direct project cancellations). Moreover, emerging solutions are still taking time to achieve true commercial liftoff. Despite billions in federal funding allocations, only a few carbon capture projects have meaningfully progressed, with others indefinitely delayed or cancelled. Growth of some demand-side energy solutions, like behind-the-meter solar and virtual power plants, has remained relatively regional, despite favorable economics. Advanced nuclear energy, though it remains a policy priority, has been challenged with long delivery timelines, and many project investors remain wary of the risk of cost overruns. Sustainable aviation fuel production capacity increased by ten times in 2024, but it is still a very small fraction of jet fuel demand. Furthermore, transmission capacity has not grown nearly as quickly as needed and remains a key constraint to progress. 

The situation – deployment deficiencies despite historic support – can be entirely solved with just one thing… and that is… to stop acting as if there is just one thing. The relative underperformance described earlier is not attributable to any singular constraint: contrary to what some have argued, for instance, it’s not solely about removing permitting roadblocks nor creating more subsidies. Rather than seeking silver bullets, real progress can be made by recognizing that there are multiple elements involved and misalignment between those elements have curbed the rate of progress.

Accelerating new energy project investment is somewhat like solving a Rubik’s Cube. The key to solving the puzzle lies in its interdependence: every twist of one face ripples across the others. You can solve one face perfectly, but if the other five remain scrambled, you haven’t solved anything in the grand scheme. Progress requires coordinated progress across multiple dimensions simultaneously in the right sequence. The cube rewards systems-thinking and algorithms over siloed, non-coordinated actions. All six of its faces must align to win.

The same is true for new energy finance. When most project investors look at a sector, they approach it as a puzzle and look for as much alignment of the full picture as possible before being sufficiently comfortable to deploy capital. That’s the indicator for the risk-reward balance being in the right place to justify investment. Rather than defining the theory of progress by singular issues, policy and industry stakeholders need to create sufficient alignment of multiple puzzle pieces at the same time. 

This paper offers a few perspectives on how to achieve the conditions for larger-scale capital deployment, drawing on both lessons learned and promising concepts from across the industry. Like the six faces of the Rubik’s cube, six priority strategy areas are articulated: market defragmentation, commercialization partnerships, transaction execution speed, policy synchronization, holistic valuation methodologies, and proactive investor engagement. Note that while some of the underlying strategies may take time and require deep structural realignment, most of the concepts discussed herein are actionable in the near term. A range of stakeholders, from policymakers to infrastructure investors to community and industry advocates, need to move in concert to solve the puzzle and unlock greater investment. 

The opportunity before us is immense. With over a thousand gigawatts of clean energy in development and electricity demand projected to grow up to 50% over the next decade, the infrastructure build-out required represents one of the largest capital deployment opportunities in American history. Similarly, global demand for U.S. clean energy technologies to be a bigger part of the mix has soared over the past few years, as many countries are seeking to diversify away from China or access some of the more unique technologies that the U.S. is developing. And solutions can’t come quickly enough, in this era of fast-growing energy demand, spiking electricity bills, aging physical infrastructure, and burgeoning new industries, not to mention a plethora of old and new technology solutions and operational strategies poised to meet the moment. The question is not whether the capital exists (it does!), whether energy solutions are available (they are!), nor whether there is a silver bullet salve (there isn’t!). It’s whether we can align the six faces of our energy finance cube quickly enough and strategically to channel the right types of capital where it’s needed most, when it’s needed most. The energy transition presents a real opportunity to drive economic transformation that, when properly coordinated, can unlock decades of strong economic growth, cost-of-living reductions, innovation, and prosperity across every region of the country and the planet.


Chapter 1. “Come together”: Defragmenting markets through regional coordination

For many companies across multiple sectors, the U.S. market can seem like the golden goose. Its big population, high income, diversified economy, and strong purchasing power typically mean large total addressable markets (TAM). While those drivers can be true, the reality for many energy and climate solutions, especially early on, is that the large TAM is challenging to realize, as the market can be highly fragmented. In those sectors, the U.S. is less of a “market” per se and more of a loose mosaic. There are about 3000 utilities, ranging from large publicly traded corporations to rural cooperatives, in regulated and regulated markets. States and territories not only have different market drivers, but they also have their own regulations and  regulators, business processes, permitting requirements, and market rules. This also complicates the go-to-market as you typically need large and locally-focused commercial organizations to tap the markets, which can be expensive and time-consuming to build, especially for newer companies. It also often means a less efficient path to scalability, as each set of local customers and  regulators need to be brought up to speed and convinced about the fit of a solution (compared to having a few entities that speak for the entire country). In addition to the commercial elements, this dynamic introduces technical barriers to scalability, especially where deep integration and redesign are required to meet local requirements. Over the years, this has often flummoxed both U.S. startups and experienced foreign investors, who have approached the U.S. market with high expectations, only to be confounded by these complexities.

Harmonize local requirements to avoid the piloting death spiral

To the extent possible, to promote more rapid and widespread investment and deployment of solutions that could benefit their communities, local (and national) governments need to work more closely together to harmonize market designs and project requirements.  Oftentimes, a solution provider may implement a solution in one state, but when they go to another state, that utility might make them start from the beginning and prove themselves all over again – many innovators have likened these continuously repeated pilots to death by a thousand cuts. If a good solution is successfully deployed in one place, the barriers associated with deploying the same solution in another market need to be lower. This concept applies to permitting and design as well. The more that permitting  processes and tariff structures, or modular elements within, can be templatized, time and uncertainty are reduced. Additionally, uniformity lowers development costs because the solution doesn’t have to be fully reengineered for each locale. This could also correspond to standardizing equipment and project technical requirements between them, to minimize costly product redesigns and reengineering. Furthermore, state stakeholders seeking to deploy similar solutions should consider entering into reciprocal partnerships or MOUs, supporting collaboration that’s both technical (e.g., between their utilities and independent engineering organizations) and policy-focused (e.g., between their policymakers and regulators). As such, when a solution under that type of agreement is evaluated and approved in one state, when that solution is brought forward in a partnering state, the solution can be given an expedited evaluation and approval process. 

Not just physically, but digitally

The dynamic described previously is not limited to hardware. It is present for many software solutions as well, particularly those that have to integrate with local operators’ control systems. For example, a locality may be interested in deploying a virtual power plant (VPP), which is a relatively low-cost, software-based approach to aggregate distributed and controllable energy resources in order to provide large-scale energy services. A VPP deployment would have to connect to a utility’s and/or grid operator’s distributed energy management system (DERMS) to talk to devices, energy management systems (EMS), energy dispatch and trading system for wholesale market participation, customer information systems to track billing and energy usage, and also be cybercompliant – note too that several utilities have yet to fully roll out these foundational modern digital systems. Not only that, each utility and grid operator might have their own implementations (vendors, versions, configurations, rules) of these systems. Even outside of controls-oriented functions, the variety of data structures, naming conventions, and IT systems can make it difficult to access available market data (e.g., energy pricing), electricity tariff rate structures, and other highly important information. This is a reason why you tend to see that many energy software solutions have their operations concentrated in just a few markets, as the costs and time associated with integrating with another market’s cadre of systems can be hard to justify and thwart efforts to scale. 

This is an area where states can work together (along with their respective utilities, grid operators, technology providers, and regulators) to agree upon more uniform ways to structure data, access market information, and securely interface with market and control systems. This could include partnering with groups that are developing common standards and protocols (such as RMI VP3, LF Energy), building an implementation roadmap across those states and utilities (accelerating implementation of FERC Order 2022), and taking corresponding legislative actions to ensure investments are made to build out the enabling foundational digital systems.   

Aggregated demand and collaborative procurement

In a similar vein, collaboration between state and national governments can level the playing field and expand markets. When it comes to infrastructure, states and countries may often endeavor to ensure that local manufacturing capacity and supply chains are set up within their territories – this can create long term economic growth opportunities, reduce equipment delivery risks, and improve the public’s return on their investment. 

However, issues can arise when multiple states are trying to duplicate efforts in the same sector. Take offshore wind in the early 2020s. Multiple eastern U.S. states were not only supporting a new wave of projects, many funding programs had requirements that the projects needed to source materials and equipment from suppliers located in the state.. The effect of this for a small, burgeoning industry was dilutive and slowed down factory investments as the scaling factors were harder to justify. After all, there are only so many blade, monopile, vessel, and cabling factories that can be supported at a given time, especially early on in the industry’s development. In response, thirteen states and the federal government signed a memorandum of understanding, where they agreed to take more regionalized, collaborative approaches to procurement and supply chain development. 

Relatedly, an area where significant improvements could be made is around the procurement of critical common equipment. To accommodate the load growth from new factories, data centers, and building and vehicle electrification efforts, there are many pieces of equipment that will be needed irrespective of what types of energy are associated: things like transformers, circuit breakers, switchgear, and so on. There are considerable production capacity shortages and long lead times on these, which raise costs and create execution risks for projects. Despite the robust market demand, manufacturers have been somewhat hesitant to invest in expanding production as they worry that the demand will not materialize, which would leave them with underutilized or even stranded assets. 

State and local governments can respond to these challenges in multiple ways. For instance, they could pool together their demand and drive standardization of the equipment so that the equipment is more fungible and interchangeable, as has been previously highlighted by the U.S. National Infrastructure Advisory Committee’s report on protecting critical infrastructure. Also, states  can create well-defined demand guarantees where they can provide assurances to manufacturers and consumers that necessary  equipment will be there, as needed.  

For example, in 2013, the Illinois’ Department of Transportation led a seven-state procurement initiative to jointly acquire a standardized set of efficient locomotives and railcars, with additional funding provided by the Federal Railway Authority to support domestic manufacturing. This effort pulled forward new, more efficient railway vehicles into the market and lowered lifecycle costs. These concepts can additionally apply to secondary markets as well, for example, providing residual value guarantees on heavy-duty electric trucking procurement, to help mitigate risks on the initial purchase (e.g., traditional resale markets not emerging or asset residual values not being realized as projected).


Chapter 2. “That’s what friends are for”: Overcoming commercialization barriers through partnerships

The next facet of the cube pertains to early market formation and investment into technologies that are not yet fully commercialized, especially first-of-a-kind (FOAK) and early-of-kind (EOAK) infrastructure, and why capital formation has been easier for some types versus others. Differences in the ability to demonstrate, commercialize, and scale new infrastructure do not purely depend on the ultimate value of the solution; they are often driven by how the characteristics of that infrastructure affect the pathway to value realization, particularly the inherent capital-intensity and modularity of the solution. 

For highly modularized solutions with lower capital requirements, the pathway can be much more straightforward. Take solar photovoltaics. Though module R&D and fabrication are far from trivial, technical demonstration and deployment are relatively simple. One can usually install and field-test new solar quickly and inexpensively. The advantage extends when scaling the solution to bigger projects: once you reach megawatt scale, given the modularity of solar cells and their balance of plant (inverters, cables, trackers, etc), you can obtain a reasonably clear picture of how even gigawatt-scale projects should fundamentally work. Other highly-modular solutions like batteries and EV chargers have enjoyed similar advantages. Tesla, for instance, in order to address potential range anxiety issues for its customers, leveraged its own balance sheet and government funding to build out a network of standardized superchargers, taking advantage of charger modularity to build in waves. This characteristic has made it easier for rapid demonstration and scaling of those solutions, as the financial community can enter the market, investigate, learn, and expand with relatively low risk.   

However, the commercialization process becomes significantly more challenging with more capital- intensive and complex technologies, such as carbon capture, nuclear, e-fuels, etc. For some of these types of solutions, the early projects can require billions of dollars to construct and demonstrate.  For these types of infrastructure, smaller-scale systems might not provide comparatively representative technical proof points of how the larger system needs to operate. Furthermore, larger sums of capital are often needed for early deployments. What’s more, the financial risk can be compounded as the long-term payoff is not guaranteed, as FOAK & EOAK projects typically have more uncertainty, and the learning rates of subsequent projects may also be less obvious. Furthermore, instead of a typical first-mover advantage that you often see with new technologies, early project investors here might actually suffer from a first-mover disadvantage, where they have the risk and cost of participating in the earlier projects, but don’t accrue the benefits and learnings that are seen in projects executed down the line. 

To help address these types of challenges often seen with capital-intensive, less modular FOAK and EOAK infrastructure projects, adopting a new suite of partnership structures can greatly help to accelerate market formation and improve investability.

Multi-project joint ventures

Catalyzing capital for this class of infrastructure may mean going significantly farther than providing a few incentives and having strong advocacy efforts. More complex and elaborate agreements, private and/or public, are often necessary to drive deployment. Particularly in the form of deployment coalitions, consortiums, and joint ventures that support multiple projects. At the highest level, this can exist in several forms and can be originated by the private or public sector, as appropriate. For illustration, a few examples of commercial approaches to scale new nuclear energy projects, roughly in increasing order of relative deployment impact:

All of them offer significant advantages over pursuing projects on an individual basis. They provide demand signals to supply chains to create manufacturing capacity and to labor groups to create a workforce. Generally, both of these stakeholders may need to see firm demand signals before they will undertake significant investments, which are in turn typically needed to reach a solution’s cost and performance entitlement (otherwise creating a chicken-and-egg problem). They also create more concrete opportunities to drive project standardization; this not only allows a technology to achieve faster learning curves, but it also helps to derisk and justify the investment by providing a more tangible line of sight to the large market. The point for manufacturers and workforce development groups is equally applicable for financiers, who often want to see a pipeline of repeatable opportunities before spinning up their underwriting teams. 

Risk and reward sharing

Having an orderbook of the first several projects, per se, may not be sufficient to create sufficient activation energy at the project level. Though it sends a good signal for supply chains and others, it does not necessarily address the first-mover disadvantage issue that may exist. 

A differentiator in partnership approaches, including the ones described previously, is how to think about alignment and value creation. Traditionally, the way in which governmental entities approach financial partnerships is through mechanisms like subsidies, loan guarantees, offtake guarantees, backstops, and fast-tracked processes. These help to reduce financial exposure to stakeholders, but alone, they miss a key part of the story: the long-term upside that can be created by successfully deploying and opening up a market for these solutions. 

Usually, for the product owners and corporate investors, this is more naturally accounted for and balanced against the downside risk. For instance, companies, from software providers to aircraft manufacturers, might sell their first products as loss-leaders, providing lower pricing to early adopters to reduce the risk to the buyer, which they justify knowing that they should be able to rapidly recoup the costs of early expenses (and even failures) over the broader market pool, if they are successful. For large infrastructure projects, this is more challenging to achieve. When projects are highly capital-intensive, the financial exposures may be too great for the product company and/or equity investor to bear – that company or fund might be entirely wiped out by an individual project failure (for example, Westinghouse had to declare bankruptcy in 2017 when their two U.S. nuclear projects faced challenges). Project stakeholders and financiers might be asymmetrically exposed to the downside risk and, therefore, be inclined to avoid investing in early projects. For a promising technology, you may often find several customers (e.g., electric utilities) lining up for the ‘third’ or ‘nth-of-a-kind’ project, which would likely be derisked and less expensive, while at the same time taking a passive wait-and-see approach with the first project – which produce a stalemate if the first project is hard to get off the ground. 

This is where deployment partnerships with structures that more fully align economic incentives and share in both the downside risk and upside of value creation can be a powerful catalyst for action. Amazon’s structure, for instance, creates this too by being involved in multiple projects where they would ultimately benefit from improvements over time and also through their equity investment into X-energy itself, so they should (depending on terms) continue to benefit down the line if the company is successful. 

In addition to encouraging the formation of joint ventures and consortia as described earlier, states and/or national governments can work together to strategically invest in key solutions, run competitive tenders to prospective providers, and strike profits-sharing agreements and/or warrants (as opposed to pure equity) in situations where government investment played an outsized role in value creation. 

A potential example of this is the recent $80 billion framework agreement between Brookfield and the US Government to deploy new large nuclear reactors. Notably, beyond packaging existing products and authorities (e.g., low-interest government loans for projects), this proposed deal additionally stipulates a proposed profits sharing mechanism where the US Government would receive a share of future profits from reactor sales. Noting that this partnership is early-stage and important details have yet to be disclosed, this mechanism could be appropriate here as you have a hard-to-commercialize sector, with strategic national and geopolitical value, with effectively no domestic competitive products. 

State entrepreneurship

Public sector funders can also play a significant role in creating and incentivizing these kinds of deployment partnerships. Though more commonplace in countries with state-run industries, countries with free market economies have often found it more delicate to navigate. There may be legitimate concerns about how governments’ “picking winners” may create adverse incentives and undermine competitive markets, in some cases. Plus, it may confuse governments’ role of trying to maximize public benefit, versus showing favoritism or extracting economic rents from corporations.  

All that said, there are models for state entrepreneurship that can be very powerful here, balancing the need to pull forward solutions and capital, while protecting the public and maintaining market competition – particularly in markets that are pre-commercial, have few players, have outsized national strategic benefits, and otherwise would not develop on their own without heavy external intervention. These are cases where, though the market benefits are considerable, the activation energy may be too high to stimulate deployment without deep governmental intervention. 

Furthermore, consider where you have first-mover disadvantage challenges but a strong set of prospective fast-followers. To avoid the risk of a Spiderman-meme-like situation where stakeholders (e.g. local utilities or individual states) are pointing at each other to make the first move, downstream project investors could, for instance, co-invest (debt, equity, or backstops) for the first project, even on a minority basis, which would both mitigate risk on the first project and also enable them to access a cost-effective pathway to the technologies they desire to build down the line. You do not traditionally see state and local entities investing in infrastructure projects in other jurisdictions, but doing so could be net beneficial as a faster and lower-cost way to derisk and execute their own projects. 

In addition, where appropriate, public financing entities investing domestically (e.g., states, green banks, federal agencies) could, where appropriate, consider extending their authorities to borrow some concepts from the US’s international playbook. Organizations like the Development Finance Corporation (DFC) can make equity investments in strategic, high-value projects, particularly where the normal capital markets would otherwise struggle to enter until the investment thesis is more clearly actionable. Such a process would have very clear scopes, firm guardrails, clear commercial competition plans, and be compatible with legal and market structures to create the intended benefits without confusing or distorting markets. 

In any scenario, there should be a corresponding plan for how the public profits would be used. For example, it could be used to raise capital for other governmental activities or directly returned to the public in some way. Or it could be efficient to recycle the funding towards related activities and balancing of the governmental ‘venture capital’ portfolio, as would a strategic sovereign wealth fund.


Chapter 3. “Highway to the deployment zone”: Faster, risk-weighted transaction execution

There’s a common cliché from finance that time kills or wounds all deals. Increasing the speed of policy formation and deal execution is essential to unlocking growth and investment, especially for newer sectors. We will particularly focus on the public capital side of the equation. Here, there is a great mismatch between public and private investment decision time scales.  In the private sector, deals are expected to be completed on the order of months or even weeks. Often in the public sector, this can add many months to years, with a high degree of uncertainty, depending on the program. There can be many idiosyncrasies associated with public funding – e.g. infrastructure projects with federal dollars may have additional compliance requirements (e.g. for environmental regulation or for domestic manufacturing). Though there are many deep policy questions here, for this discussion, this piece will focus on ways to accelerate the process.  

Staffing for success

While it’s easy to say that the government should move faster, the reality for many is that the individual government program officers are typically working at a rapid pace. This is especially true at the political level where the motivation to make progress in a short amount of time tends to be very high. Particularly, when it comes to developing new programs, they do a massive amount of work, mostly unseen by the public, and with very few resources and are often overstretching to meet deliverables. 

The other side of this is that when new programs and initiatives are rolled out, there often isn’t a similar level of flexibility in staffing levels and allocations. In fact, staffers at the federal, state, and city level can get overwhelmed by the volume of direct work and information requests, following the relevant laws and statutes. A new capital program may get introduced, but the number of people to implement that program might not change rapidly. For instance, the Inflation Reduction Act of 2022 introduced and/or influenced dozens of tax credit programs, and accordingly had to issue almost a hundred pieces of new guidance, so the market could act on them. A relatively small group of people led by the Treasury Department’s Office of Tax Policy were charged with generating that official guidance (as required, to ensure consistency and fairness). In addition, a number of the programs therein had complex elements which required deep technical expertise (e.g. tax law, energy markets, carbon accounting, energy technology) to complete their work well – skills that are in relatively short supply and in high demand, both inside and outside the government. The rate of progress was also slowed by some ambiguity in the law itself, where key technical questions (e.g. accounting methodologies and criteria) accordingly needed additional time to be addressed during implementation instead of beforehand. The associated teams ran at breakneck pace to complete all those issuances in just two years. Yet, many engaged market actors who were excited to proceed with shovel-ready projects experienced challenges, as they waited for guidance, thereby slowing initial progress. 

To meet the rapid needs of an eager market and particularly in times when the governments are trying to push comprehensive reforms, agency leaders and legislators need to consider ways to make sure that the implementing organizations are sufficiently well-staffed and resourced. This should not only consider program staff (both existing and new), but also functional teams (e.g. legal, communications, stakeholder engagement) where bottlenecks can often form as they support multiple programs. This can include having surge capacity resources (either short and/or long-term, internal or external) bringing on technical and subject matter experts to help enable fast and fair processing. Also, an implementation staffing needs assessment should ideally be conducted as part of the policy-formation & legislative process so that the appropriate resources can be allocated early and efficiently. To further ensure efficiency of resource allocation and implementation speed, legislators should consider expedient ways to drive greater clarity and specificity at the point of legislation, where applicable. 

Iterative capital deployment programs 

It is tempting and important to get things totally right on the first try, particularly when it comes to government funding, which is highly scrutinized. That said, an approach which has delivered success is to release capital in phases. Here, instead of issuing all funding at one time, the program office (especially for a large competitive program) might split the deployment into phases over time. The first phase is executed quickly and the subsequent phases are introduced later. While this may introduce some short-term friction, it not only gets capital and projects moving faster, perhaps as importantly, it gives both the funders and market chances to build momentum, learn from one round, and improve towards the next ones. A good example of this was the DOE’s Grid Resilience and Innovation Partnerships (“GRIP”) program. This was a $10 billion program from the Bipartisan Infrastructure Law to enhance grid flexibility and improve the resilience of the power system against extreme weather.  That $10 billion allocation was split into three phases to be issued over a few years. Over those three phases, the quality and ambition of the applications and programs funded increased significantly as all stakeholders were able to learn and adapt with each round. 

Progress over perfection

For public programs, this is a major challenge driven by misalignments of risk tolerance. Many ambitious government funding programs are faced with a strange pickle. On one end, they have the duty, mandate, and power to drive innovation, do deals ahead of the commercial markets, and derisk promising solutions to a point where they can scale on their own and deliver the broad public benefits associated with that solution. On the other hand, the funds being used are raised from people’s hard-earned money or a state or country’s valuable natural resources, and neither should be handled frivolously. The fear of the political fallout from the latter may drown out the benefits of delivering the projects; that is, in the eyes of an underwriter or program officer, the downside risk may often outweigh the upside creation; doing no deal may feel safer than doing a ‘bad’ deal. 

Take the case of two companies that received funding from the DOE’s Loan Programs Office (LPO) in the early 2010s. One is the solar cell manufacturer Solyndra, whose idea was to decrease the cost of solar energy by using cylindrical, thin-film solar cells that could capture the sun’s energy from multiple angles, compared to their conventional flat-panel counterparts, and thereby bring down their levelized costs. Solyndra received $535 million in federal loan guarantees, which it defaulted on and went bankrupt when market conditions changed (they, and other promising new solar companies, were undercut by plummeting prices of silicon solar cells from China). The default filled the news cycles for months, sparked several congressional hearings and investigations, and also left a profound imprint in the minds of many program funders. No government underwriter wants to be dragged to the Hill or see their name in the papers for this reason. On the other end of the spectrum, you have a little-known car company called Tesla, which received a $465 million loan to expand electric vehicle production. Tesla, as we know, went on to become one of the most transformational and successful automobile companies in recent history. And yet, comparatively little fanfare has been made about the government’s role in making that a success. Two loans, about the same size, issued around the same time, by the same organization. Not only did their actual outcomes differ, but the financial outcomes to the upside and the political fallout to the downside were diametrically-opposed.

This contrast becomes even more stark when looking at the broader picture. Again, taking the example of the LPO (now Office of Energy Dominance Financing (EDF)). It has historically had a loss rate of less than 3%, which is on par with most commercial and investment banks – entities which often invest in markets with more proven solutions and less uncertainty. Moreover, other governmental programs, like DARPA, NIH, and ARPA-E, also have strong investment track records. All that said, the perception of risk has led to an overly deep risk conservatism and a sense of fear of doing deals that might go sideways. This creates huge process drag for the entire organization and curbs the rate of progress that can be made, as underwriting processes can become elongated and difficult to navigate. Furthermore, for many loan applicants, it can take several years to get through the loan process; anecdotally, some applicants have complained it was slower and harder to access than what they could wrestle from the commercial market. 

Overall, this is a situation where the tolerance for and understanding of losses for public financing needs to be reconceptualized and appropriately balanced, given the mission. Not all losses are bad. Individual losses are not necessarily detrimental if outweighed by net gains. There are significant opportunity costs of not taking risks appropriately. More can be accomplished without jeopardizing public interest. 

Given the governments’ having historically demonstrated their ability to be good stewards of capital across long periods of time, the inherent risk that accompanies the pre-commercial asset classes they support, and the urgent need to make progress and unlock markets, this is a situation where more streamlined, faster underwriting processes that increase the speed of execution are critical and warranted. Furthermore, governmental funding organizations need to be given more ‘air-cover’ so that individual misses do not get over-politicized, but are understood to be reasonable elements of a process for progress. Note that accomplishing this process and cultural shift requires major work internally with staff, policymakers, and the broader public. This is an aspect where integrating concepts like state entrepreneurism and more balanced portfolio-based risk and reward approaches, described previously, can also unlock new investment and risk management strategies, greater societal benefits, and increased comfort to staff and leaders to usher in that kind of transformation.  

Creating longer and more durable windows of action

A more obvious reason to move quickly on policymaking is to deliver benefits faster to project and community stakeholders – which is, of course, the main objective of the policy in the first place. But beyond that, investors understand that the political time windows covering favorable conditions could be short. This is particularly acute for assets with long development cycles and/or high upfront costs: e.g., building new manufacturing facilities or developing interregional transmission lines can take years. Indeed, it was estimated that 60% of committed IRA-funded clean energy manufacturing projects were originally not slated to come online until between 2025 and 2028. 

Moreover, the IRA timeline created some interesting time crunches. Though the law was thoughtfully conceived with some longer time horizons for tax credits, in practice, the actionable investment window for that version ended up being incredibly short. The law was passed in August 2022. It then took time for programs to be formed and guidance to be released, as described earlier. In parallel, the investment community had to learn, come up the learning curve on the new opportunities, and build ecosystem collaborators (which themselves are also reacting and forming). Next thing you knew, as the election window started to ramp and policy uncertainty increased, many investors started to park their capital and take a wait-and-see approach in early 2024, as evidenced by strong increases in fund ‘dry powder’ (raised but uncommitted capital) but a sharp dropoff in actual capital deployment and assets under management at that same time. 

Moving quickly is critical to give investors, communities, and other associated stakeholders as much time as possible to understand the landscape, develop deployment pathways, build new solutions, and ideally iterate, given the chance to take more shots. 

That was a shorter-term perspective. In the spirit of leveraging speed to open the front end of the window, longer term, some thought should be made to how one extends the investability window. Investors typically do not decide to invest in a project purely due to the project’s merits. Particularly when entering a new sector, it’s also driven by the commercial prospect of potential follow-on deals. Short political windows and the associated ‘stroke-of-pen’ risks often raise major flags for the risk committees at financial institutions. As was mentioned earlier, many IRA programs arguably had much less than two years of impact. Deeper policy stability is critical to ensure continued, long-term investment. That type of stability has, at least historically, been a hallmark of the US regulatory & commercial system and a positive differentiator in the race to attract capital and talent from across the globe. For sectors with high strategic value, high early capital requirements, and long investment cycles, policymakers should consider more mechanisms to provide longer-term policy guarantees to give investors assurance that they have long enough windows to justify their business cases.


Chapter 4. “Okay, now let’s get in formation”: programmatic policy synchronization for fast market formation

Catalyzing the deployment of new infrastructure is usually enabled by a bevy of policy actions. This can be important as transforming a sector may require several changes in economics, behaviors, and processes. Especially when resulting from expansive new legislation and/or executive actions, the government may be required to deliver a host of new policy programs, including new rules (e.g., permitting reforms, categorical exemptions), funding allocations and programs, implementation guidance (e.g., for tax rules), informational reports (e.g., National Lab technical studies, commercialization reports, etc). These types of activities are highly valuable as they tackle different aspects of the deployment challenge and take huge amounts of effort to be effective. However, they often get rolled out and implemented on separate and independent processes. This can actually stall and frustrate deployment efforts as most investors will want to see the major policy puzzle pieces locked in place before getting comfortable enough to deploy capital – for most risk organizations, ‘stroke-of-pen’ risks are often viewed as red flags. Conversely, this hesitation can cause consternation for policymakers and advocates who may feel that they have done the heavy lifting in passing new legislation, but don’t see a corresponding flood of serious commitments immediately after. 

Policy deliverable schedule alignment

One way to address this is to implement a visible and synchronized schedule, showing all the related policy efforts and programs for an initiative. The interdependencies between those activities would be easier to identify and allow relevant stakeholders to see when all the major puzzle pieces would be in place, and in turn, also align their investment and advocacy efforts accordingly. 

An example of this comes from carbon capture: the federal tax credit for carbon dioxide sequestration (45Q) was first issued in 2008; however, the first set of tax guidance was not issued until 2020, as the IRS, Treasury, EPA, and other agencies had to build a suite of complex regulations around reporting, verification, stakeholder comments, and more. A consequence is that, though the tax credit was in place (and though there were strong complementary financing capabilities from renewables tax equity and thermal power plant development already in place), little to no investment went into this sector, effectively ‘wasting’ many years of eligibility and frustrating many interested stakeholders. 

By contrast, take advanced transmission technologies: despite being rapidly-deployable and cost-effective solutions to increase transmission and distribution system capacity and performance, they have been historically underutilized. To increase awareness and deployment, the federal government developed a suite of products including the formation of the Federal-State Modern Grid Deployment Initiative, grant funding via the DOE’s Grid Resilience Innovation Partnerships program, loan funding from the LPO’s Energy Infrastructure Reinvestment program, categorical exemptions in federal environmental permitting on upgrading existing transmission lines, a Pathways to Commercial Liftoff report on grid modernization, new national deployment goals, technical reports and new assistance programs by the National Labs, and more. These were all released within a couple of months of each other in 2024, so the market had a fuller picture to which it could react and begin to make greater progress. Since then, dozens of states have passed new laws, and the number of projects being pursued and funded has also been on the rise.

Capital source navigators

Relatedly, new legislation may result in several new governmental funding programs or changes in missions for existing funding programs. Many of these efforts and changes might go unnoticed or disproportionately utilized. Take the energy- and climate-tech startups, who may be seeking capital to grow or transform their businesses. Government capital tends to be attractive as it is often willing to embrace early technology risk (unlike most commercial capital), is often non-dilutive to the company’s capital stack, and can give the company extra visibility. Most people in the energy sector will know of programs like DOE’s ARPA-E (Advanced Research Program Agency for Energy) or the Loan Programs Office. Far fewer may know that there may be funding available through ‘non-energy’ agencies like the US Department of Agriculture, Small Business Administration (SBA), the Governmental Services Administration (GSA), or the Department of Defense (DOD). These have increased the pool of capital available and provided a wider array of financing products that increase the chances that the right kinds of capital are available to serve the spectrum of company needs.

Initiatives like the Climate Capital Guidebook, published in 2024, can be helpful to make these types of programs less opaque and easier to access, especially for startups and small businesses. At the state level, databases like DSIRE USA have been providing a beneficial service aggregating information on state incentive programs for years. 

Making information on federal, state, and/or municipal funding programs highly accessible and searchable from a centralized, common location is key. Or else they may get lost, buried in webpages that few know where to access. This process can be further enhanced using cross-cutting discovery technological tools. For example, AI-based agents could be used to continuously and automatically map these programs and keep information organized. Also, large language models can be deployed to allow stakeholders to more readily identify and compare programs of best fit (matching things like user capital needs versus program ‘ticket sizes’, usage restrictions, and eligibility requirements). 

Zooming out from individual needs, this would also augment new solution developers’ and investors’ ability to more comprehensively understand the relationship between governmental capital programs and the role they play in energy solution commercialization and deployment. For instance, related to technology readiness, it would make it easier to chart what programs are available to technologies at different stages of maturity. From the National Science Foundation (NSF) for fundamental research, or the Advanced Research Program Agency-Energy (ARPA-E) for more applied technology development and early manufacturability demonstration, to planning grants from various agencies, and federal tax credits for infrastructure projects. 

Similarly, funding programs could be mapped against project development phases. In areas like international project finance, this exercise would be valuable to demystify which programs and institutions are suitable for various phases of project development. In some cases, an international energy project developer using American technology might need to navigate a gauntlet of different funding institutions: from the US Trade and Development Association (USTDA) provide grants for front-end engineering development (FEED) studies, the Export-Import (EXIM) bank for domestic manufacturing loans, to the Development Finance Corporation (DFC) for equity co-investment and political risk insurance. Not to mention multilateral development banks like the World Bank and International Finance Corporation (IFC), which themselves have an array of funding programs and instruments. Here, providing clearer, more cohesive representations of how a patchwork of funding sources can work in tandem and be packaged together can have outsized strategic competitiveness for American companies. This would thereby help level the playing field for companies competing against companies backed by governments that can provide fully wrapped financing solutions.


Chapter 5. “C.R.E.A.M.”: More holistic valuation tools and methodologies

This facet addresses the challenge, which is still too common in solution valuation. Not of valuing the companies themselves, but of proving to customers and investors that the proposed energy solution is worth adopting. Especially because regulation alone is usually not a salve for driving energy transition activities in free market economies. While regulation may steer what should happen, costs and economics are often bigger drivers of how quickly that transition occurs. Borrowing a chemistry analogy, economics determines the activation energy and kinetics of transition policy. Solutions need to demonstrate their fit and attractiveness in often economically competitive and constrained environments. In addition, stakeholders with shared interests (e.g., not just federal, but also at the industry and state & city levels) should invest to build a common valuation infrastructure (e.g., resource characterization data, system models, and more) that helps lower the barriers to deployment and investment. Doing so will also make it easier to appropriately size any associated financial programs (like subsidies and grants), to ensure that there is sufficient catalysis to get multiple stakeholder groups moving and investing. 

Understanding end-use unit economics 

This means that solution providers, policymakers, and advocates need to develop very deep understandings of the commercial drivers and realities of the markets they are looking to serve. They need to put themselves in the shoes of their customers and related stakeholders. This is particularly important when trying to sell solutions into new and competing markets or applications that may not otherwise be required to change (e.g. regulations on fuel use or emissions).  This should seem obvious and has always needed to have been a primary focus, yet it’s a step that some innovators, policymakers, and advocates have not always adequately prioritized.  

This is a recipe for failure, particularly in the infrastructure space. Saying it’s good for the world is not sufficient to get traction; a need does not mean there’s a market. Getting a strong, detailed, and accurate understanding of customer unit economics is foundational to the success of any infrastructure. By their nature, this should encompass more rigorous estimations of how much a solution costs to produce and deliver (which often tends to be underestimated in early stages, and leave stakeholders to be surprised later on by cost overruns upon implementation). And it should likewise reflect an understanding of the customer’s cost and value drivers, as this affects project revenue and adoption readiness. This is a question that sometimes gets missed in the early stages, but in later stages, particularly when seeking significant capital to fund projects, it becomes highly pertinent as investors tend to take a much more critical viewpoint of the economic potential of the project, both to the upside and downside. As part of the process of developing detailed assumptions, they should develop reasonable sensitivities and scenarios that illustrate how the financial performance of the project may vary due to changing internal and macroeconomic conditions. 

Diagnosing this early not only helps the solution providers to be better positioned for commercial success with their customers but also enables them to catch potential flaws early enough, make different design choices, and ensure the product’s value proposition is more robust and resilient. In turn, this helps reduce project risk and gives comfort to financial investors along the way. 

Governments can help by driving easier price discovery and transparency, collaborating with project stakeholders (especially developers and customers) to compile and share relevant cost and value data in more public forums. Reports generated by government agencies (e.g., the series of Pathways to Commercial Liftoff reports by the US Department of Energy’s Office of Technology Transition/Commercialization), national laboratories, and private third-party analysts (e.g., BNEF, S&P, Lazard) have made strong contributions in attempting to fill those information gaps. Continuing to support and drive efforts like those would be valuable. Having state and regional actors (e.g., groups of state economic development organizations) can also help to make them even more granular, and perhaps more local, which would drive even more actionability. They could also compel information disclosure through legislation (akin to efforts in healthcare and drug pricing transparency over the past few years) or require a greater level of disclosure as a part of some government-funded programs, especially with new industries. 

Development of accessible, trustworthy technoeconomic evaluation analysis tools

Intending to perform the types of deep technoeconomic analyses described in the previous paragraph is one thing. But having the ability to do that is another. In many situations, you either suffer from data unavailability or asymmetric access. Taking the electricity system, for example, there are very few stakeholders (usually utilities and grid operators) with deep access to information about how the system and its underlying assets are performing.  Sometimes, this is intentional due to potential concerns around security and market manipulation.  Sometimes, like often in the case of customers requesting their own historical hourly usage data, the process might be archaic and difficult. 

That said, a major downside of this situation is that third parties are often not in a position to interrogate resource plans, challenge priorities, or test and validate new ideas. There are third-party analytics tools, but they sometimes don’t have the requisite data, fidelity, or trust to ensure that their results cannot be readily dismissed. That also makes it too easy for grid operators to dismiss new ideas without adequately considering them. Not having accessible data and models can result in excluding beneficial solutions from being part of the menu of options or from making it to market. 

This has been a pretty common battle with an array of more ‘disruptive’ energy technologies, like distributed energy systems and advanced transmission technologies. But it also occurs with generation. One example is the prospective transition of the Brandon Shores coal plant in Maryland. The plant’s owner, Talen Energy, filed to retire the facility because it was no longer economically viable to operate (following a trend with many coal plants around the country). However, the grid operator, PJM, sought to force an extension of its operation (via striking a reliability must run (RMR) contract) for four years until new transmission capacity can be brought in, citing potential reliability concerns. State and congressional officials strongly opposed this plan as extending the operation via an RMR contract would increase costs to ratepayers and increase local pollution. A group of advocates and energy experts, led by the Sierra Club and GridLab, proposed replacing the plant with a mix of energy storage, reconductoring, and voltage supports, which they claimed would be not just cleaner, but more cost-effective than what was proposed – even more so in the likely event that the transmission project is delayed. Note too that a similar concept was deployed in New York City at the Ravenswood power plant. However, that suggestion was dismissed by PJM, without real allowance for iteration, arguably for not using the right modeling methodology, and for not being a project sponsor. Aside, the decision is also reflective of PJM’s energy storage market structure’s inability to effectively value energy storage’s benefits as both energy and transmission assets. Though an agreement was ultimately reached, many stakeholders view this settlement as suboptimal, not just because of its outcome (even more so as it does not help wider issues like high-capacity market prices), but because the advocates did not have the modeling tools in place to meaningfully evaluate the options and force a more substantive dialogue with the grid operator. 

Rebalancing this situation is essential in order to allow additional key stakeholders like policymakers, regulators, project developers, solution providers, and other experts to interrogate the opportunity space and propose actionable new ideas. This should include creating common, accessible infrastructure for data, models, and evaluation methodologies that an interested stakeholder would need to know to assess potential project options – which are otherwise difficult or prohibitively expensive to access. Note too that government endorsement of these tools greatly assists the credibility of the prospective solution. 

The Australia Energy Market Operator (AEMO), for example, did exactly this by implementing the world’s first connection simulation tool. This tool is a digital twin of the country’s electric grid that project developers are using to rapidly evaluate their prospective solutions in an accurate, safe, and trustworthy environment. 

Also consider two approaches to accelerate geothermal project development, where insufficient quantification of the resource can add significant development costs and project risks. One example is Project InnerSpace: this is a collaborative effort funded by philanthropy, the US federal government, and Google to provide a common, open set of surface and subsurface characterization data. Another example is the Geothermal Development Company: this is a special purpose vehicle fully-owned by the Kenyan government, which performs the resource characterization and steam development themselves, and shares the information with prospective geothermal power producers, which in turn has significantly lowered the barriers to entry and made Kenya a global leader in geothermal power production. Both approaches are being used to help project investors to be more targeted, capital-efficient, and prolific. 

Similarly, Virginia recently passed a grid utilization law requiring their local utilities to measure the utilization of their transmission and distribution systems, including establishing metrics as well as plans to improve those metrics. If implemented well and the associated data is made available, having that data can drive more targeted investments, more cost-effectively manage customer energy bills, and allow new solutions like virtual power plants, distributed energy, and grid-enhancing technologies to be appropriately valued and play bigger roles in the energy solution mix.  

Quantify, aggregate, and internalize external benefits and costs

Many solutions labeled for “climate” often have a wide array of other benefits – lowering costs, boosting reliability, creating jobs, improving health, to name a few. In some cases, reducing emissions might be a secondary or even tertiary benefit. This often results in the cost-benefit of a potential solution being understated and capital being underallocated. Alternatively, it can lead to greenwashing, where the benefits are overstated relative to the impact and capital being misallocated. 

In some cases, like in electricity markets and industrials, decisions are made on a narrower set of financial criteria, ignoring the broader value proposition – e.g., which solution has the lowest upfront cost? What is the least costly way to meet power demand on hourly basis or does the solution pay itself back in three years? There have been some directional approaches that at least help with the first issue of benefit underquantification. 

An example is with FERC Order 1920, a rulemaking that covers new approaches to transmission planning and cost allocation. It called on state decision criteria to be expanded from just cost and reliability to a consideration of seven benefits: avoided or deferred infrastructure costs, reduced power outages, reduced production costs, reduced energy losses, reduced congestion, mitigated extreme weather impact, and reduced peak capacity costs. As it gets implemented across the country, that should provide a considerably more fair and holistic basis to assess the potential benefits of transmission projects and will likely increase the viability of game-changing concepts (e.g., reconductoring, interstate/interregional transmission). 

In other cases, like in some larger governmental grantmaking or policy efforts, a suite of benefits may be quantified but are estimated and presented in siloes, where the benefits appear to be orthogonal and nonadditive. So, the key to addressing that is developing valuation frameworks that do the difficult task of weighing the benefits together in a clear manner that’s directly relatable to the investment thesis. Applying ways to translate those benefits commensurately into the project’s financial terms is critical to ensure they get prioritized and realized. The IRA had elements of this at least conceptually, for instance, by applying bonus credits to low-carbon energy projects that paid fair wages, were put in economically disadvantaged areas, or used domestically-manufactured equipment. On a local level, there are state laws like Montana’s transmission law that established a new, elevated cost-recovery mechanism for transmission projects that used more efficient, high-performance conductors. 

Going further along the point of internalization, there are more structural issues where markets may not be designed to solve for the outcomes that stakeholders are seeking.  In electricity, for example, power markets generally solve for meeting demand for the least cost in a short time period (following a narrowly-defined reliability scheme). This may not only ignore solutions that may save more money over longer periods of time, but it also does not explicitly solve for attributes like resilience, sustainability, or flexibility. That often means external out-of-market solutions are needed to create desired outcomes (e.g., reliability-must-run contracts, tax credits, renewable energy credits). While those have had a great impact on their specific goals, they are imperfect and may have unintended consequences, like distorting market behavior or disincentivizing cost-cutting innovations. Solving that on a greater scale and more fundamental basis in some segments may require greater reforms, like redesigning electricity market structures, revisiting the Energy Policy Act and Federal Power Act, and more.


Chapter 6. “Take it to the bridge”: Rethinking the ‘missing middle’ problem

For the last face of the cube, it is incumbent to describe the role that a whole cadre of investors needs to play. Not just venture and early stage, but particularly later stage capital providers such as project financiers (equity and debt), institutional investors, pension funds, insurance funds, and even utility balance sheets. They have a massively important role and arguably need to be more proactively involved with ensuring the maturation of promising earlier-stage solutions. The ‘missing middle’ problem in energy, where a lack of transition and demonstration capital thwarts promising venture-backed solutions from progressing to mainstream infrastructure, is well-known. While continued innovation is needed to form new capital solutions to fill that gap, there is a lot that investors can do to shrink the gap and make that chasm easier to traverse. 

Engaging earlier to pull companies to maturity

Though they control the greatest share of the majority of assets under management and can support bigger ticket sizes, by the nature of their investment mandates, late-stage investors’ risk tolerances tend to skew more conservative. They tend to concentrate their efforts on solutions with established track records and large addressable markets that provide greater certainty of execution and relatively consistent returns. And they typically have more than enough deal volume to justify their focus in those markets. Consequently, though these investors typically at least follow major new trends, they tend to be hesitant to enter newer markets; in fact, many are content to just ‘wait’ for the market to come to them before they engage. This introduces several challenges. 

First, at the most basic level, it means that many lower-cost sources of capital may be hard to access for newer solutions, whether climate and otherwise, which makes it more challenging for them to compete on a level playing field early on. Next, as importantly, it means that companies may miss critical opportunities to get sharper earlier. The attributes that are valued by investors change dramatically over the life cycle. 

For instance, many early-stage products are rated by things like their uniqueness, differentiation, disruptiveness, and total addressable market. Those attributes that tend to attract venture capital and garner the most market visibility in the media. By contrast, at later stages, especially for project uniqueness and differentiation, might actually be seen as sources of risk: risks that get compounded if the solution is supplied by a new market entrant. Moreover, fungibility and supplier optionality may hold even greater weight. To mitigate the risk of a situation where things could go wrong with a project’s vendor, project investors are often comforted knowing there are substitutes that can be brought in as part of a contingency plan. In addition, though addressable market matters to both groups of investors, early-stage investment tends to focus on alignment with macroeconomic trends. While for later stages, micro arguably supersedes macros, as diligence tends to be more deeply and narrowly-focused on project-specific questions, like contracts, pricing, and execution. Furthermore, late-stage underwriters may need to conduct deeper diligence and acquire more data to get comfortable with the new technical attributes, features, and vendors, which can be a drag on their process as they have to spend more time and money to go through that process. Whereas for earlier stage investors, that deep focus on the new features already tends to be an integral part of the diligence and value creation process, and they get rewarded for that accordingly. 

Overall, not understanding these differences can create shocks for new companies that have had great success with attracting capital early, but get stopped in their tracks when graduating to the next level of maturity. This has also often meant that prospective solutions providers miss the opportunity to sharpen their pencils, address more detailed questions, and have their key assumptions stress-tested. Even if they are not prepared to transact, later-stage investors, especially in infrastructure project finance, should devote additional time to engage with promising technologies early on and bring them along. This is also in the investors’ interest as it allows them to get up the learning curve faster, be better positioned to take advantage of those opportunities when the markets come around, and ensure that the solutions that do make it to later stages are of higher quality and more likely to yield successful transactions. 

Formulate deal templates and archetypes

The previous steer for early engagement comes with a conundrum. For many investors, it is hard to meaningfully engage until there is a complete deal on the table. By complete, I mean a fully-fleshed out representation of all core theses, puzzle pieces, assumptions, and more, mapped to a specific, actionable situation. This is the scaffolding onto which the financing packages are built and the basis by which most risk managers are trained to evaluate the financeability of a solution. This can be true for governmental and commercial financing programs; “bring us deals to look at” is a common refrain.  

The conundrum comes because many of those details may not be fully known early on, so there may not be a fully-formed deal to bring per se, and in addition, there might not be a clear set of underwriting criteria that the company can aim for. Even for offices like the DOE LPO, which was proactive and engaged, struggled to get significant market traction for a few years, both to their and the market’s frustration. Instead of both sides staring at each other like the Spider-man meme, the impasse can be broken by creating deal templates and archetypes, which can take a more hypothetical representation of assumptions, including reasonable scenarios, and frame what the financial structure and execution pathway would look like for that. 

The LPO, for instance, did just that, creating several deal archetypes based on customer type and technology, with terms and execution timing aligned based on the associated risk. For instance, deals involving more established energy solutions (e.g., solar, storage, transmission) with investment-grade utilities providing corporate guarantees were allowed to have faster execution processes than some other deals, commensurate with the comparatively low level of credit risk involved. Next, deals associated with a narrow focus, like the Advanced Technology Vehicle Manufacturing program, tended to have more well-defined execution processes. For others that may have more default risk (e.g., newer technologies, non-investment grade counterparties), those deals might take more time to diligence and underwrite. This benefitted both the Office and the applicants, as this created clear, agreeable expectations for each.  Providing this clarity greatly increased deal volume and traction, as more clients brought more loan applications and had greater clarity and confidence in the transaction process they were entering into. 

More broadly, this is an area where the companies, industry associations, and other advocates can play an active role, independently and in collaboration with governments. Formulating early pictures and archetypes for financial stakeholders and investors can significantly enhance feedback and capital formation. 

Collaborate, Celebrate, and Replicate 

Finally, energy investors, especially in less mature sectors, need to find ways to be more open, as appropriate, about their investments and investment strategy. Though for product companies, this usually comes a bit more naturally and is necessary to market their products, later-stage investor communications for individual deals often tend to be more guarded and high-level. This is usually not because the information is unavailable. Sometimes this can be hard as they may be attempting to protect sensitive information, or the move may be to protect potential market share and not lead other players on to the same strategy (particularly if they worked hard to open a new market). The richer information transfer usually happens privately during deal execution (e.g., as part of due diligence) or during project- or fund-level capital raises. 

In newer spaces, however, progress itself is often catalytic (rising tides floating all boats). The easiest way to persuade a risk committee to invest is to show precedents and comparables. An underwriter can stand up with greater confidence when they can show that someone else has done it before.  It can be even more validating when that ‘someone else’ is a competitor. Project investors should endeavor to share more about how they got comfortable with the deals, market, and/or technologies involved, as appropriate. This is not out of altruism. Especially in emerging sectors, rapidly expanding the market and creating a foundational flywheel can be commercially more beneficial for the firm versus purely protecting market share. Getting more investors comfortable makes the pie bigger and encourages other investors to pursue their own projects. This then sends actionable signals to ecosystem stakeholders (e.g. supply chains) to invest and create production and delivery efficiencies. The efficiencies improve unit economics, reduce risk, and increase returns both down the line and potentially even on the early projects (e.g. operating expense and replacement parts scarcity reduction). These scale efficiencies and flywheel creation should help investors generate more deal flow and revenue, building on the expertise they have built and leadership position they have established. The advantages can be extended where significant public funds were allocated to the project, perhaps in exchange for preferable financing terms from the public funding institution. 

Similar ideas apply to public sector funding programs. Making announcements about projects and ribbon cuttings, though important, are  shorter-term and quick-hitting communications strategies that tend to be more formulaic. Instead, policymakers should take a page from commercial product marketing. Policymakers should view their deployment policy efforts as their products. As such, they should create consistent, thematic narratives to which individual initiatives, projects, legislation, and rules, can all be framed. Even though they may be exhausted after delivering the policy itself, government officials and program officers should not undervalue the uplift phase. It is crucial to plan to spend ample time and resources to explain and repeat the micro- and macro significance of each product to investors and community stakeholders, especially in today’s competitive information environment. Building greater public buy-in both nationally and with communities, is crucial, especially to longer-term, transformational projects. Government funders should work hard to bring along additional local governments, nonprofits, and investors along to collaborate, celebrate, and replicate the successes. 

Akin to what was mentioned in the valuation section about common information infrastructure, working closely with investors, industry, and other stakeholders to collate and amplify key investment theses, lessons learned, and others will be key to building investor confidence and creating more of a flywheel effect for follow-on investments. 


Conclusion. “The Next Episode”

Taking a step back, we have laid out many ideas and concepts in this paper: harmonization, collaboration, acceleration, synchronization, valuation, and amplification, to name a few. It may seem daunting to look at policymaking across so many vectors, particularly in a manner where many of those puzzle pieces have to align and move in sync in order to unlock significant and consistent investment. That said, the power of a strong and well-intentioned administrative state, at both national and local levels, lies in its very ability to wrap its arms around big challenges, partner with private industry, and leverage its resources to create high-value solutions with outsized benefits. This has repeatedly been proven in the US and globally across time and multiple sectors: whether it’s going to the moon or inventing life-saving medical treatments; building massive infrastructure, or delivering nanoscale electronics. States and towns should roll up their sleeves, find creative ways to collaborate, develop foundational information tools, and remove unnecessary market barriers. Investors should take an even more active role, making their needs known to early-stage companies and policymakers, building consortia to pull new opportunities forward, and creating an actionable set of commercial opportunities that they would find attractive. What’s more, acting now to design and implement new, actionable administrative structures, especially at the state and local level, will not only create more high-value pathways for progress now, but if it is well-coordinated, it can also lay a foundation for federal actions that can be taken, nearer and longer term. Though the challenges and journey are complex, the opportunity before us is massive, the imperatives are clear, the transformations are tractable, and success is achievable. This can be done, so let’s get busy!

Your DNA, Your Data: Preventing Genetic Discrimination in the Growing Bioeconomy

The U.S. bioeconomy is a growing economic sector driving technological innovation and global competitiveness. A significant portion of this innovation, especially in biotechnologies that improve health, like drug therapeutics and precision medicines, relies on the collection of genetic and non-genetic biological information through varied methods, including academic studies, direct-to-consumer testing services, and pharmaceutical companies. While this can lead to improvements in U.S. public health and biotechnology, there has been a growing fear among scientists and legal experts that this information is insufficiently protected against exploitation by foreign actors seeking to supplant U.S. leadership in biotechnology, as well as against domestic actors who might use this data to target or discriminate against certain subsets of the population 

Legislation and policy outlining the storage and use of human-derived biological data by federally-funded research, such as the NIH Genomic Data Sharing Policy, lessen the risks surrounding this data but are insufficient due to advancements in biotechnology and the multifaceted collection, use, and selling of this information by private industry and law enforcement. Meeting the moment and protecting the American people will require: 1) expanded legislative protections for biological data; 2) biological data use protocols developed for federal agencies; and 3) standardized development, storage, and use of biological data. Pursuing these policy enhancements will safeguard fundamental rights and secure national infrastructure as we enter a new era of biological understanding and innovation.

Challenge and Opportunity

The U.S. bioeconomy is an increasingly important facet of the GDP due to the growing role of biotechnology in economic sectors, including defense, agriculture, energy, and manufacturing, with the total market size of biotechnology expected to reach $2-4 trillion in 2030-40. An important driver of this growth is the increased role of biotechnology in drug discovery, therapeutics development, and precision medicine

This innovation, which includes novel treatments for cystic fibrosis, has necessitated the collection of massive datasets of biological data, including genetic, molecular, and biometric information. The federal government supports this through the direct creation and management or funding through grants of large-scale biobanks of individuals from varied geographic, demographic, and health backgrounds to be primarily used in biomedical research. Various pharmaceutical, technology, and biotechnology companies have additionally collected millions of primarily genetic samples from members of the public; for example, more than 26 million people have taken direct-to-consumer genetic tests through companies, such as 23andMe and Ancestry. 

However, as more human-derived biological datasets grow, they become strategic targets. There is an increasing concern that this information is insufficiently protected against exploitation by foreign actors seeking to supplant U.S. biotechnological leadership, as well as against malicious domestic actors who may misuse biological data to perpetrate genetic discrimination and biological data discrimination. While this may initially seem like a concern limited to employment or insurance, genetic discrimination has potential negative ramifications for the unchecked surveillance and intrusion into the private lives of Americans. Cases of genetic discrimination have already been identified in education, such as the case of Colman Chadam, a middle-schooler forced to leave his school because of a genetic susceptibility to cystic fibrosis. Additional civil liberties concerns arise around the non-consensual misuse of biological data in law enforcement investigations. Even while measures have been taken to secure biological datasets to minimize the number of people who might misuse this information, both public and private data collections are under scrutiny from sectors of the public about the risk of anonymized biological data being reidentified and the ability of these collectors to prevent data leaks.

The preeminent federal law guiding the use of this data in non-research settings is the Genetic Information Nondiscrimination Act (GINA). GINA, in combination with the Health Insurance Portability and Accountability Act (HIPAA) and other legislation, outlines the general use of genetic and biological information within the U.S. However, these laws leave regulatory gaps that enable the previously mentioned civil rights violations to rise. HIPAA only provides protections to biological information in the context of “protected health information” for covered entities, such as healthcare providers and their business associates. And GINA only prohibits the use of genetic data to discriminate in employment and health insurance coverage, and only restricts organizations with greater than 15 employees. Moreover, there are no laws that protect against discrimination surrounding the use of non-genetic biological information, such as those collected by private companies and personal health trackers. Protections beyond GINA at the state level are inconsistent and lacking, especially given the highly personal and largely unchanging nature of this information. While guidance exists for the storage, sharing, and oversight of biological data for institutions that receive federal funding, there is a lack of this same technical standard for commercial and direct-to-consumer companies

This regulatory vacuum allows deeply personal information about an individual’s disease history, familial relationships, and potential traits to be used to cause harm. This opens the door to dangerous infringements on personal safety and human rights, threatening the stability of the growing bioeconomy.

Plan of Action

To secure the U.S. bio-infrastructure, maintain global leadership in biotechnology, and safeguard American citizens from emerging threats to their privacy, the federal government must modernize its approach to human genetic and biological data. The current regulatory patchwork leaves the bioeconomy vulnerable to foreign exploitation and American citizens open to unchecked surveillance. The following recommendations establish a necessary framework to build trust in U.S. innovation while protecting individual liberty.

Recommendation 1. Modernize Genetic Privacy Laws to Close Security Gaps

Congress should advance legislation that comprehensively expands GINA to include all forms of biological data, including but not limited to: genetic, protein, microbiome, and biometric data, in order to close the loopholes present in the original law.

To ensure that an American’s biological information remains their private property and not a tool of overreach, this legislation should expand nondiscrimination protections beyond health insurance and employment. This legislation could be modeled on CalGINA, a 2011 California law that adds “genetic information” to existing protected classes, such as race, sex, and age. New federal legislation would expand on this model by codifying “biological information” as a protected class with “genetic information” within existing civil rights law. Additionally, this legislation should include direct language from CalGINA that prevents business establishments, health facilities, housing providers, and state-funded programs from demanding genetic tests, and penalizing Americans for their biological makeup. This legislation can also use language from the EU’s General Data Protection Regulation (GDPR) that classifies genetic and biometric data under a special “sensitive data” category. GDPR language would assist in classifying the scope of genetic and biological data as well as the protections individuals possess in regards to this information.

By setting this new federal baseline, Congress will harmonize the current fragmented regulatory landscape, clarifying compliance for businesses that may seek this information, and will assure the American public that their biological information cannot be weaponized against them.

Recommendation 2. Establish Guidelines For The Federal Use Of Biological Data

To prevent unwarranted surveillance and privacy erosion, the president should issue a memorandum tasking the National Institute of Standards and Technology (NIST) and the Office of Science and Technology Policy (OSTP) with developing a “Federal Human-Derived Biological Data Use Standard”. 

To ensure the standard accounts for the full spectrum of federal use cases, NIST and OSTP, in coordination with the Office of Management and Budget (OMB) and the National Security Council, must conduct an interagency review of all current and potential federal uses of biological data. The Standard should specifically adopt a privacy-centric model, similar to that established by the 2019 Interim Department of Justice policy on genetic genealogy. Once developed, federal agencies must make the Standard available to state and local partners to serve as a model for non-federal policy. Additionally, OSTP should publish a public-facing framework that clarifies federal use cases. This framework must include clear definitions of biological data types, transparent access standards, a list of actions explicitly prohibited by the new protocol, and clear accountability mechanisms.

This standard will define strict policies for permissible federal use of biological data to streamline disparate protocols and prevent the over-exposure of citizen data by the federal government. It will additionally serve as a model to ensure consistent protection for Americans across all levels of government.

Recommendation 3. Implement Technical Standards For Biological Data Security And Innovation

The president should direct OMB to issue a Biological Data Protection Directive. This directive must mandate that federal agencies standardize the technical infrastructure regarding how human-derived biological data is collected, stored, and shared. 

Specifically, the Directive should:

To implement these activities, the president should request Congress to appropriate $50-80 million over three years for staffing, training, and technical infrastructure. Standardizing this infrastructure will close security gaps that currently allow foreign adversaries to target American biological data while driving market-wide adoption of secure protocols and reducing friction for U.S. businesses.

Conclusion

Biological data is becoming as central to modern society as a traditional digital footprint and carries similar far-reaching risks if misused. Without proactive federal action, the expanding role of biological data will continue to enable new forms of discrimination, privacy violations, and civil-rights harms, while leaving critical national assets vulnerable to exploitation by foreign competitors and unchecked domestic surveillance.

If successfully and fully implemented, these new policies would protect individual rights and secure the bioeconomy, establishing the United States as a leader in responsible biotechnology  innovation. The first recommendation would provide clear and enforceable civil protections for all Americans, ensuring that individuals, businesses, and institutions cannot misuse biological information regardless of how it was obtained. This would prevent cases like that of Colman Chadam from recurring. The second recommendation would support more effective and accountable law enforcement by establishing rigorous, updated guidelines that limit federal overreach, and ultimately reduce privacy risks while improving public trust. Finally, the third recommendation would strengthen federally funded and private biomedical research by developing standards that make biological data interoperable, AI-ready, and secure.

The combination of all the recommendations will provide clarity to both state and private actors on appropriate development, storage, and use of biological information. This approach ensures that U.S. values define the global bioeconomy, creating lasting protections for the use of this information in critical facets of society.

Frequently Asked Questions (FAQ)
How will “biological information” be defined to prevent loopholes that allow discrimination based on inferred traits (e.g., ancestry, disability risk, or behavioral genetics)?

The proposed legislation will define “biological information” broadly to encompass all data derived from biological samples or measurements that can reveal health, behavioral, or ancestry-related traits. This includes molecular, biometric, and physiological data that can infer or predict personal or familial traits and diseases.

How will this expanded protection interact with existing civil rights laws and state-level equal protection statutes?

The proposal complements, rather than replaces, existing civil rights frameworks. By adding “biological information” to the list of protected classes, the law provides a clear and enforceable basis for addressing discrimination that current statutes do not explicitly cover.

Who will enforce these expanded protections, and what recourse will individuals have if they experience discrimination based on biological data?

Enforcement will rely on existing federal civil rights and consumer protection infrastructure. The Equal Employment Opportunity Commission will be empowered to investigate biological-data–based discrimination in employment, while the Department of Justice’s Civil Rights Division can address systemic violations across public institutions. The Federal Trade Commission will continue to regulate unfair or deceptive data practices by private companies.

How will this policy affect innovation in biotechnology?

Many small businesses and startups are already taking scattered approaches to protecting their data. This policy would remove burdens and accelerate biotechnological innovation by providing clear standards for the use of biological data for those entering the field and lowering the delays necessary to understand a scattered regulatory landscape.

What are the national security implications?

Due to advancements in biotechnology, malicious domestic actors may use biological data to target, blackmail, or exploit different segments of the American public who have voluntarily provided this information. This policy would minimize those risks by securing personal information and provide clear ramifications for misuses.

How does this policy support U.S. global leadership?

By implementing this policy, the U.S. will be the first country to make comprehensive policy on the security and use of personal biological datasets by federal and private actors. This policy will thus serve as a model for other nations realizing the dangers and necessity of protecting this type of information.

Why Credit Access Makes or Breaks Clean Tech Adoption and What Policy Makers Can Do About It

Building Blocks to Make Solutions Stick

For clean energy to reach everyone, government can’t just regulate behavior. It has to actively shape credit markets in partnership with the private sector.

Implications for democratic governance

Capacity needs


Access to affordable credit is a necessary condition for an equitable energy transition and an inclusive economy. Markets naturally concentrate capital where risk is low and returns are predictable, leaving low-income communities, rural areas, and smaller projects behind. Well-designed federal policy can change that dynamic by shaping markets—reducing risk, creating incentives, and unlocking private capital so clean technologies reach everyone, everywhere. This paper explores how policy-enabled finance must be part of the toolkit if we are going to drive widespread adoption of clean technologies, and can be summarized as follows: 

The critical role of policy-enabled finance to drive widespread economic opportunity  

Access to affordable credit is not just a financial tool—it is a cornerstone of economic opportunity. It enables families to buy homes, entrepreneurs to launch businesses, and communities to invest in technologies that reduce costs and improve quality of life. Yet, across the United States, access to credit remains deeply uneven. Nearly one in five Americans and entire regions – particularly rural and Tribal communities – are excluded from the financial mainstream, limiting their ability to thrive.

Private-sector financial institutions—banks, private equity firms, and other lenders—are designed to maximize profit. They concentrate on markets where risk is predictable, transaction costs are low, and deals are easy to close. This business model leaves behind borrowers and communities that fall outside these parameters. Without intervention, capital flows toward the familiar and away from the places that need it most.

Public policy can change this dynamic. By creating incentives or mitigating risk, policy can make lending to or investing in underserved markets viable and attractive. These interventions are not distortions — they are strategic investments that unlock economic potential where the market alone cannot, generating economic value and vitality for the direct recipients while yielding positive externalities and public benefit for local communities. And, importantly, these policy interventions act as a critical complement to regulation. Increasing access to credit is often the carrot that can be paired with, or precede, a regulatory stick so that people are not only led to a particular economic intervention, but they are also incentivized and enabled.  

For decades, policy-enabled finance has delivered measurable impact through multiple programs and agencies designed to support local financial institutions – regulated and unregulated, depository and non-depository – that are built to drive economic mobility and local growth. These policies and programs have taken multiple forms, but can generally be put in three categories: 

These tools enjoy broad recognition and bipartisan support because they work. They increase access, availability, and affordability of credit—fueling job creation, housing stability, and economic resilience. Policy-enabled finance is not charity; it is a proven strategy for broad and inclusive economic growth and a key tool for the policy-maker toolkit to support capital investment, project development, and adoption of beneficial technologies in a market-driven context that can increase the effectiveness of a regulatory agenda. 

Most importantly, policy-enabled finance has led to major improvements in wealth-building and quality of life for millions of Americans. The 30-year mortgage was created by the Federal Housing Administration in the 1930s as a response to the Great Depression. Before this intervention, only the very wealthy could afford to buy a home given the high downpayment requirements and short-term loans. Since this policy change, thousands of financial institutions have offered long-term mortgages to millions of Americans who have bought homes that provide safety and security for their families, strong communities, and an opportunity to build wealth through appreciating assets. Broad home ownership is a public good, but until the government created the right policy and regulatory framework for the markets, it was out of reach for the majority of Americans. 

Similarly, the Small Business Administration’s loan guarantee programs started in the 1950s supported financial institutions, including banks and non-bank lenders, in extending credit to small businesses that would otherwise be difficult to serve with affordable credit. These programs have collectively helped millions of small businesses access the credit they need to grow their businesses, create wealth for themselves and their families, provide critical goods and services in their communities, and create a diverse and vibrant local tax base. 

The financial markets, without these types of interventions, are not structured to prioritize access and affordability. Well-designed policy and complementary regulatory interventions have been proven to drive different behaviors in the capital markets that yield real benefits for American families and businesses.  

The role of access to credit in driving an equitable energy transition 

The public and private sectors have spent decades and billions of dollars investing in the development of clean technologies that reduce greenhouse gas emissions, create economic benefits, and deliver a better customer experience. Now that these technologies exist, the challenge is to deploy them for everyone, everywhere. 

The barrier to widespread deployment is that most clean technologies require an upfront investment to yield long-term benefits and savings (i.e., an initial capital expense to reduce ongoing operational expenses) – technologies like solar and battery storage, electric vehicles, electric HVAC and appliances, etc. – which means that people and companies with cash or access to credit are adopting these better technologies while those without access to cash or credit are being left behind. This is yielding an even greater divide – creating economic savings, health benefits, and better technologies for those who can afford them, while leaving dirty, volatile, and increasingly expensive energy sources for the lowest-income communities. 

Many of the federal policy interventions to support deployment of these new technologies to date have been through tax credits. These policies have been very popular, but are not often widely adopted, particularly in rural and lower-income communities, because, (a) they are complex, (b) they often require working with individuals or businesses with large tax liabilities, and (c) they typically come with high transaction costs, making smaller, more distributed projects harder to make work. The energy transition is a huge wave of change, but it is made up of many small component parts – individual buildings, machines, vehicles, grids – so if our policies fail to enable small projects to get done, we will fail to transition quickly and equitably.

To deploy everywhere, households and businesses need credit to offset capital expenses. To expand access to credit, we need supportive clean energy policies that work within and alongside local financial services ecosystems – just like we’ve seen with housing and small businesses. 

Regulation is insufficient to drive widespread adoption 

Pursuing a carbon-free economy is a massive undertaking and, understandably, much of the state and federal government’s toolkit has focused on regulation of people and businesses to drive behavior change – policies like fuel economy standards, pollution restrictions, renewable energy standards, and electrification mandates. This is an important piece of the puzzle – but insufficient to drive broad (and willing) adoption. 

Take, for example, the goal of electrifying heavy-duty trucks in and around port communities. States like California have attempted to set a date at which all new trucks on the registry must be zero-emissions vehicles. Predictably, this mandate was met with a lot of pushback from truck drivers, small operators, and industry associations who struggled to see a path to complying with this regulation without a major increase in cost. 

It wasn’t until the regulation was paired with direct incentives for truck purchases and an attractive and feasible financing package for vehicle acquisition and charging infrastructure that the industry actors started to come around. This has helped change behavior of both buyers and incumbent sellers in the market. 

Policy-enabled finance creates tools – often used in conjunction with other policy mechanisms – that can more effectively meet people where they are with affordable, appropriate, and tailored solutions and can help demonstrate a feasible path to adoption that can help buyers and sellers in these markets adapt accordingly. 

The Greenhouse Gas Reduction Fund as an innovative policy-enabled finance program 

The Greenhouse Gas Reduction Fund (GGRF) is more than an emissions initiative—it is a strategic investment in economic equity and market innovation that took lessons in program design from many sectors and programs of the past. Designed with three core objectives, the program aims to:

GGRF programs, including the National Clean Investment Fund, the Clean Communities Investment Accelerator, and Solar for All, were built to complement other Inflation Reduction Act (IRA) programs by occupying a critical middle ground between grant programs and tax credits. Grant programs provide direct, one-time support for projects and programs that are not financeable (i.e., not generating revenue). Tax credits are put into the market to incentivize private investment for anyone interested in taking advantage but are not typically targeted to any specific project or population. 

GGRF bridges these approaches. It channels capital into markets where funding does not naturally flow in the form of loans and investments, ensuring that clean energy and climate solutions reach every community—but does so in a way that often extends the benefits of the tax credits and incentive programs so that they reach a broader set of projects and communities where the incentive is insufficient to drive adoption. GGRF focuses on increasing access to credit and investment in places that traditional finance overlooks by reducing risk and creating scalable financing structures, empowering local lenders, community organizations, and national financing hubs to deploy resources where they are needed most. Also, because the program makes loans and investments, it recycles capital continuously – akin to a revolving loan fund – so that the work filling gaps in market adoption can continue for decades. 

GGRF’s design was built on a strong foundation of successful direct investment programs for local lenders, such as CDFI Fund awards and USDA programs. What makes it unique is its scale—tens of billions of dollars—and its centralized approach, leveraging national financing hubs to drive systemic change with and through new and existing local financial capillaries (i.e., credit unions, community banks, green banks, and loan funds). This program was not built to drive incremental progress; it is a market-shaping intervention designed to accelerate the clean energy transition while promoting widespread economic growth.

Unfortunately, the program was stopped in its tracks when the Trump administration illegally froze funds already disbursed to awardees, leading to multiple lawsuits to restore funding. Without this disruption, awardees and their partners across the country would be driving direct economic benefits for families and communities across all 50 states. In the first six months of the program, awardees had pipelines of projects and investments that were projected to create over 49,000 jobs, drive $866 million in local economic benefits, save families and businesses $2.7 billion in energy costs, and leverage nearly $17 billion in private capital. The intention and mechanics of the program were working – and working fast – to deliver direct economic, health and environmental benefits for millions of Americans.  

Moving at the speed of trust: Bringing the public and private sectors together for effective implementation 

For a program like the Greenhouse Gas Reduction Fund to succeed, both the private and public sectors need clarity, confidence and accountability. But most importantly, they need a baseline of trust between the parties to support ongoing creative problem solving to implement a new, scaled program with exciting promise and a limited blueprint. 

For the private sector, certainty is paramount. Investors and lenders (and importantly, their lawyers) require clear definitions, consistent requirements, and transparency about the availability of funds, requirements of use, and the ability to forward commit capital to projects and businesses. They need mechanisms to leverage public dollars with private capital and assurances that counterparties will be shielded from political, compliance, and policy risk. Flexibility is equally critical, allowing actors to adapt to rapid market shifts and technological innovations without being constrained by rigid program structures. Understanding these requirements – and the needs of the financial market actors involved – is outside the comfort zone of most government agencies and employees and requires significant experience and capacity building to strengthen this muscle. Nimble thinking is not often associated with government agencies, but in policy-driven financial services, it is paramount. 

At the same time, the public sector has its own requirements which require patience and understanding from the private sector. Policymakers and the EPA, the implementing agency of the GGRF, must ensure that funds are used properly and that Congressional and public oversight is robust. This means designing programs that comply with all laws and regulations while advancing policy priorities. It requires mechanisms for accountability—certifications, reporting, and transparency in how funds flow – along with safeguards against undue influence from purely profit-motivated private actors. Balancing these needs is not optional when managing taxpayer funds; it is the foundation for building trust and ensuring that the program delivers on its promise of reducing emissions, benefiting communities, and transforming markets. 

Implementation requires striking the balance between the needs of the private and public actors; this was difficult and time consuming for both the federal employees and for us as private recipients. There was pressure to deploy quickly to demonstrate impact and the value of the program, but it took a long time to get contracts signed and funds in the market because of the many requirements of the public and private parties involved. We speak different languages, are solving for different constraints, and work in drastically different environments – all which led to complexity and delays. 

Internal EPA requirements and federal crosscutters (i.e., federal requirements from other related laws that applied to this program) increased time to market and transaction costs. Many of these requirements came with high-level policy objectives without the ability to get to a level of detail required for capital deployment. 

For example, two of the major policy crosscutters were the Davis Bacon and Related Acts (DBRA) requirements around labor and workforce, and the Build America Buy America (BABA) requirements for equipment manufacturing and component parts. While the agency and private awardees were aligned at a high level on policy intention – good-paying jobs and domestically-manufactured goods – down streaming these requirements to borrowers and projects required significantly more detail and nuance than was available to the agency, adding weeks and months onto implementation and frustration among private counterparties. 

Clear expectations up front on how to manage the trade-offs – policy priorities versus capital deployment – could have helped create a high-level framework for implementation, which was a one-by-one review of use cases to determine feasibility and applicability. This added complexity and friction to the process without driving outsized results. 

More requirements and complexity led to slower, more costly deployment, which meant fewer communities would benefit from the program’s goals of cutting emissions, creating jobs, and cutting household and business costs. 

Another key feature of the program for the National Clean Investment Fund and Clean Communities Investment Accelerator was the ability for the federal government to leverage a Financial Agent to administer the funds. This arrangement was developed between the EPA and Treasury, leveraging a long-standing practice of the Treasury Department of contracting with external banks to provide financial services that were hard for the government to provide directly. This was particularly important for the National Clean Investment Fund program because the disbursement of funds into awardee accounts enabled the awardees to meet a core statutory requirement to leverage funds with private capital. Without this function, the cash would not be available on the balance sheet of the awardees and would be difficult to leverage with private investment. 

Lastly, the reporting requirements for the program were complex, making it hard to provide clarity on what data collection was required for early transactions. Again, both parties recognized the importance of transparent data collection and dissemination but implementing that intent in practice was time consuming. A simple, standardized framework to get started that could evolve over time would have helped reduce uncertainty and supported faster deployment. 

Altogether, the cross-sector translation – finding common ground between two disparate worlds – added many months onto the process of getting the program to the market which, in the current political climate, was time not spent doing the important work to educate a broad set of stakeholders on the program’s promise, potential, and purpose. A lot of this complexity could have been reduced by developing a baseline of trust between the parties through the application and award process, complemented by a common goal to improve program implementation over time. 

Strange bedfellows create weak alliances 

In addition to the programmatic elements of translation, the actors involved in implementing direct investment strategies tend to be unknown entities to government agencies and Congress. Even though many of the implementing organizations – the “awardees” – have been around for decades doing similar work, there were weak ties with Congress, federal agencies, and other related stakeholders. Similarly, there was a lack of understanding of the role that nonprofit and community-based financial organizations play in addressing market gaps. This mutual lack of understanding and engagement leaves room for misunderstanding, distrust or generalizations that can hinder the ability to make collective progress. 

Within the agency, this was a new program type for the EPA, so requirements and design process took many months before anything was shared publicly. The Notice of Funding Opportunity was released nearly a year after the legislation was signed. 

The unique form and function of the program and limited direct engagement with lawmakers and other stakeholders about the program left a vacuum of information, which led to skepticism and confusion. Because the funds were provided to awardees as grants, many interpreted this as just another grant program – a large federal spending package that would lead to “handouts” – instead of what it was, the federal government seeding a sustainable fund with “equity” that would be lent out, returned, and reinvested in perpetuity. For example, here is the Wall Street Journal editorial page,and later, the EPA press release conflating investments with “handouts”: 

Imagine if Republicans gave the Trump Administration tens of billions of dollars to dole out to right-wing groups to sprinkle around to favored businesses. That’s what Democrats did in the Inflation Reduction Act (IRA). The Trump team’s effort to break up this spending racket has led to a court brawl, which could be educational.

The fact that this policy structure and the private sector entities charged with implementing it were relative strangers led to confusion and delay during a period that could have been spent on outreach, engagement, and education. Without that broad base of support, the program unnecessarily became a political punching bag.

To mitigate this risk going forward, there needs to be greater investment in relationship building, education, stakeholder engagement and capacity building within and among the implementing partners across all relevant government actors and their private sector counterparts, especially after award selections are made. This connective tissue would go a long way in creating a baseline of common understanding of the policy objectives, program design, and implementation partners involved so all parties are aligned on strategic intent and path forward. 

Making policy-enabled finance programs work in the future 

If we agree that policy-enabled finance is essential to drive the energy transition and deliver broad benefits, the next step is asking the right questions about how to design these interventions for success, drawing lessons from the GGRF and other related programs.

First, what mechanisms should we use, and what are the trade-offs for each? Federally supported direct investment programs, such as managed funds, can deploy capital quickly and target underserved markets, but they require strong governance, thoughtful program design, and radical transparency, otherwise they are susceptible to the “slush fund” narrative or similar risks (i.e. conflicts of interest and political favors). 

Tax credits and incentives have proven effective in attracting private investment, yet they often favor actors with existing tax liability and can leave smaller players behind. Guarantees reduce risk for lenders and unlock private capital, but they demand careful structuring to avoid moral hazard and can struggle to reach communities that are truly under-resourced. 

Despite the many pitfalls of direct investment programs, they address a challenge that has plagued many of the more distributed policies: centralization and market making. Often in an attempt to let a thousand flowers bloom, policymakers underestimate the need for centralized or regional infrastructure to help with asset aggregation, data collection, product standardization, and scaled capital access. This yields local infrastructure that is sub-scale, inefficient, and unable to access the capital markets for private leverage – too small to truly shape markets.

While the GGRF’s future is uncertain given pending litigation, its purpose and role as a set of centralized financial institutions within the broader community-based financial ecosystem is critical – and needs to be more broadly understood as policymakers set future priorities. 

Second, should government manage funds and programs internally or partner with external experts? Internal management within an agency offers control and accountability but can strain agency capacity and impede the ability to be an active market participant. It is also difficult to attract the right talent within the government’s pay scale, leading to an inability to recruit and high turnover. This model has been attempted through programs like the Department of Energy’s Loan Programs Office (LPO), but even that market-based program has been slower to execute, delaying critical infrastructure and technology investments by months, if not years.   

On the other hand, external management brings specialized expertise and market agility, yet it raises questions about oversight and influence. No matter who the private party is, there is skepticism around the use of funds, their personal or professional gain, and their intentions with taxpayer money. In our deeply politicized world, this puts a target on the leaders of these organizations that may limit who is willing to play this role. 

Quasi-public Structures

Despite the challenges, on balance it seems that internal agency management or a quasi-public structure is the most feasible path. Internal management pushes the boundaries of public agency function but goes a long way to build trust and accountability. Quasi-public structures seem to be a good compromise when feasible. Other countries have figured out how to manage these programs within a government or quasi-government agency (see the Clean Energy Finance Corporation and Reconstruction Finance Corporation, both in Australia). We can too. 

At the federal level, credit programs should be managed by agencies with the skills and capacities to hold an investment function, like the Department of Energy or the Treasury Department, and leverage lessons learned from programs like DOE’s LPO and EPA’s GGRF to structure new entities. Or – like many of the state and local green banks have done – create quasi-public entities that have public sector governance and appropriations but otherwise operate independently as financial institutions with their own balance sheets, bonding authority, and staffing structure. 

Lastly, if public-private partnerships are preferred, who should the government work with to implement policies meant to expand access to capital and credit? Nonprofit financial institutions often prioritize mission, community impact and are willing to arrange complex financings that require a higher touch approach but often lack scale and institutional capital access. For-profit firms bring scale and expertise but often find it hard to manage a government program with a mindset or culture that differs from their typical profit-maximization frameworks. 

Depository institutions such as banks offer stability and regulatory oversight, whereas non-depositories can innovate more freely to reach the hardest to serve communities. Regulated entities provide robust and trusted infrastructure and controls, but unregulated actors may move faster and can be more creative in supporting traditionally under-resourced opportunities. Specialty firms bring deep sector or asset-class knowledge, while generalists offer broad reach and experience in managing across asset classes. 

To identify the optimal path, it is helpful to look to existing programs for lessons. The U.S. Treasury’s Emergency Capital Investment Program (ECIP) demonstrates how direct investment into regulated depository institutions can mobilize significant capital for underserved communities through an existing financial ecosystem. The Loan Programs Office shows what internal management can achieve for large-scale projects. Tax credit programs like the New Markets Tax Credit (NMTC) and Investment Tax Credit (ITC)/Production Tax Credit (PTC) illustrate how incentives can transform markets, while guarantee programs such as the U.S. Department of the Treasury’s Community Development Financial Institutions Fund (CDFI) Fund Bond Guarantee and SBA 7(a) and 504 guarantees highlight the power of risk mitigation in activating and standardizing products to support secondary market access. These precedents offer valuable insights as we design future policies to accelerate a broadly beneficial energy transition.

Educating policymakers to build trust in the community finance ecosystem

Regardless of path forward, one thing remains critical – building better relationships between policymakers and the community finance industry, including community banks, credit unions, loan funds, and green banks. These are the boots-on-the-ground organizations that share a mission with many policymakers to expand economic opportunity and broaden access to capital and credit. And they are often the organizations navigating multiple public products and programs to bring affordable, quality financial services to communities. 

The challenge is that most advocacy and educational work for these organizations has been siloed – there are groups representing credit unions big and small, those representing housing lenders, loan funds, green banks, and community banks. The disaggregation of these efforts has diluted the potential for policymakers to look at this ecosystem as a whole to determine how best to leverage it for public good. This is not to say that each of these individual groups does not have a role to play for their members – they all have different needs and requirements and deserve representation. But the broader industry would benefit from collaboration across these organizations to create a mechanism for these institutions to help with outreach, advocacy and education around policy-enabled finance overall. This would bring a strong and powerful group of actors together for a higher collective purpose and, ideally, create a large and diverse constituency with common goals. 

State and local governments stepping up  

In the near-term, the absence of federal support for clean technology deployment through policy-enabled finance creates an enormous opportunity for state and local governments to step up and push forward. Hundreds of local financial institutions were doing work to prepare for the delivery of GGRF funds to and through local projects and businesses to drive broader adoption of clean technologies. These organizations continue to have the skillsets, capacity, and pipeline to finance these projects – but need access to flexible and affordable capital to do so. 

State funding efforts could mirror the program and product design of the GGRF to get deals done locally, working with one or more of the constellation of financial institutions preparing to deploy federal funds. Just because the GGRF’s programs were cut short, it doesn’t mean that the infrastructure and learnings generated should go to waste – if there are public institutions willing to commit capital, there should be many financial institutions across the country ready to put it to good use. 

Conclusion 

If our shared goal is an equitable, rapid energy transition, policy must do more than regulate — it must enable finance and focus on deployment, or getting great projects done. The Greenhouse Gas Reduction Fund showed both the promise and the pitfalls of large-scale, policy-enabled finance: when designed and governed well, these tools can unlock private capital, deliver measurable local benefits, and sustain long-term market transformation. When implementation gaps and weak relationships persist, even well-intentioned programs become politically vulnerable and ripe for attack. To make these programs successful within our current political context, future efforts should prioritize clear governance, cross-sector capacity, and sustained stakeholder engagement so public dollars can catalyze private investment that reaches every community. 

Policy-enabled finance snapshot (illustrative, not exhaustive)

ProgramDirectTax IncentiveGuaranteeFederal agencyImplementing entit(ies)
CDFI Fund Financial Assistance ProgramXTreasuryCertified nonprofit loan funds
Emergency Capital Investment ProgramXTreasuryCommunity banks and credit unions
Opportunity ZonesXTreasuryPrivate funds and other financial intermediaries
Low-income Housing Tax Credit (LIHTC) ProgramXIRS, State Housing Finance AgenciesPrivate housing developers, lenders and syndicators
New Markets Tax Credit (NMTC) ProgramXTreasuryPrivate Community Development Entities (CDEs)
Investment and Production Tax Credit (ITC/PTC) ProgramXTreasuryPrivate developers, investors, and syndicators
USDA Business and Industry Loan GuaranteeXUSDABanks, credit unions, and farm credit lenders
USDA Single Family Loan GuaranteeXUSDAPrivate mortgage originators
SBA Loan Guarantees (7a, 504, etc.)XSBABank and non-bank private business lenders
DOE Loan Programs Office Guarantee ProgramXDepartment of EnergyDOE direct to companies, alongside private lenders and investors
CDFI Fund Bond Guarantee ProgramXTreasuryCertified Community Development Financial Institutions (CDFIs)
Greenhouse Gas Reduction Fund National Clean Investment Fund and Clean Communities Investment AcceleratorXEPANational nonprofit specialty finance organizations in partnership with local lenders (community banks, credit unions, green banks, and loan funds)

Rebuilding Environmental Governance: Understanding the Foundations

Today we are facing persistent, complex, and accelerating environmental challenges that require adding new approaches to existing environmental governance frameworks. The scale of some of them, such as climate change, require rethinking our regulatory tools, while diffuse sources of pollutants present additional difficulties. At the same time, effective governance systems must accommodate the addition of new infrastructure, housing, and energy delivery to support communities. Our legal framework must be sufficiently stable to enable regulation, investment, and innovation to proceed without the discontinuities and gridlock of the past few decades. 

In an increasingly divided atmosphere, it will take candid, multiperspective dialogue to identify paths toward such a framework. This discussion paper explores the baseline that we’re building on and some key dynamics to consider as we think about the durable systems, approaches, and capacity needed to achieve today’s multiple societal goals.


Building Blocks to Make Solutions Stick

Our environmental system was built for 1970s-era pollution control, but today it needs stable, integrated, multi-level governance that can make tradeoffs, share and use evidence, and deliver infrastructure while demonstrating that improved trust and participation are essential to future progress. 

Implications for democratic governance

Capacity needs

Modernize today’s system of cooperative federalism to address the lack of clear and intentional interconnections, adaptive feedback loops, and aligned objective, by:


The early 20th Century saw the emergence of our first national laws regulating public resources— the Federal Power Act in the 1930s, the precursor to the Clean Water Act in the 1940s, and the first version of the Clean Air Act in the 1950s. Then, in a concentrated decade of new laws and massive amendments to existing ones, the 1970s saw a focus on assessing, controlling, and reducing pollution, while setting ambitious goals for human and ecosystem health. These statutes generally were constructed around specific resources—airsheds, watersheds, public lands, and wildlife habitat—and articulated specific roles for federal agencies and other levels of government. State efforts were incorporated into a nationwide system of cooperative federalism, while many states undertook their own initiatives to address environmental problems.

For half a century these laws—enacted with overwhelming, bipartisan congressional support— produced a great deal of success, with conventional pollution decreasing across many resources and regions and some species and habitats recovering. But we have plateaued in terms of broad improvements, and meanwhile novel pollutants and more diffuse, global threats have emerged. Political shifts, legacy economic interests, and a changing information landscape have played an important role, as amply recounted elsewhere. 

The bipartisan legislation of the 1970s arose from both idealism and necessity, during an Earth Day moment that embraced ecological thinking in response to tangible harms to humans and the environment. The laws enjoyed massive public support and got many things right. Some were aspirational and holistic, such as the Clean Water Act’s “zero-discharge” target or NEPA’s vision “to create and maintain conditions under which man and nature can exist in productive harmony, and fulfill the social, economic, and other requirements of present and future generations of Americans.” The latter Act established the Council on Environmental Quality to coordinate this policy across the entire federal government.

Other advances came piecemeal, focused on specific resources. The U.S. Environmental Protection Agency (EPA) was cobbled together by an executive plan to reorganize several existing agencies and offices, then granted authority in a series of media-specific statutes that began with the Clean Air Act, Clean Water Act, and Safe Drinking Water Act, and later the Toxic Substances Control Act and Federal Insecticide, Fungicide, and Rodenticide Act. The Resource Conservation and Recovery Act, Superfund, and Oil Pollution Act addressed hazardous substances affecting the nation’s health and ecosystems. Implementation of all these laws required the Agency to develop in-house scientific expertise and detailed regulations that fleshed out statutory standards and applied them to specific sectors—an approach upheld for decades by the Supreme Court.

These laws made unquestionable progress on conventional pollution and waste, the visible, toxic byproducts of industrial production and consumer culture that had spurred the environmental movement and drawn a generation of lawyers to the new profession. But with specialization came fragmentation of environmental law into a plethora of subtopics, and a managerial, permit-centric legal culture that risked losing sight of ecological goals. Nor were the benefits distributed equally by race or class, as demonstrated by pioneering studies in the field of environmental justice.

As the field matured, it slowed, with congressional interventions becoming less frequent and more technical. Some of the last major amendments to a bedrock environmental statute were the Clean Air Act Amendments of 1990, enacted by a bipartisan Congress and signed by President George H.W. Bush. (The other prominent example is the Frank R. Lautenberg Chemical Safety for the 21st Century Act (Lautenberg Chemical Safety Act), a major amendment to TSCA in 2016.) Absent updated legislation, EPA regulations became paramount, but these had to run a gauntlet of shifting policy priorities, complex rulemaking procedures, litigation, and a transformed and often skeptical Supreme Court. 

Critiques of this system date back almost as far as the statutes themselves. One ELI study listed 34 major “rethinking” efforts emanating from academia, blue-ribbon commissions, and NGOs between 1985 and 2014, across the political spectrum and ranging from incremental reforms to radical reinvention. One highly touted initiative, led by sitting Vice President Al Gore, resulted in some modest administrative streamlining. Most remained paper exercises, appealing to good-government advocates but lacking political support.

The stakes grew higher with increasing awareness of climate change. In June 1988, NASA and book-length treatments followed, sparking broad discussion of what was then a fully bipartisan issue. Vice President Bush campaigned on addressing it, and as President in 1992, he traveled to Rio de Janeiro to sign the U.N. Framework Convention on Climate Change. With successes like the 1987 Montreal Protocol on the ozone layer or EPA’s 1990 Acid Rain Program doubtless in mind, the Senate ratified the Framework Convention 92-0.

But climate change implicates much larger portions of the U.S. economy—energy, transportation, agriculture—at individual as well as industrial scales. While NEPA embodied the 1960s slogan that “everything is connected,” the lesson of climate change is that many things emit greenhouse gases, and all things will be affected by global warming. The need for systemic change proved to be an uneasy fit with existing site-specific, media-specific environmental laws.

Growing awareness of climate change and the scale of action needed to address it also generated a backlash from entrenched economic interests. By the mid-2000s, the Bush/Cheney administration had reversed course on federal climate commitments. It contested and lost Massachusetts v. EPA, a landmark ruling in which a narrowly divided Supreme Court held that the Clean Air Act applies to greenhouse gas emissions that affect the climate. 

The Administration’s argument was captured by Justice Antonin Scalia’s flippant remark in dissent that “everything airborne, from Frisbees to flatulence, [would] qualif[y] as an ‘air pollutant.’” In Scalia’s opinion, real pollution must be visible, earthbound, toxic, inhaled, not a matter of colorless molecules interacting in the stratosphere. Even in dissent, this view set the stage for subsequent legal battles, right up to the present effort to revoke EPA’s 2009 “endangerment finding” that is now the underpinning of federal greenhouse gas regulation. 

Climate change likewise laid bare the long-standing divide between environmental law, which historically regulated the power sector in terms of its fuel inputs and combustion byproducts, and energy and utility law, which focused more on transmission and distribution of the resulting power. (Both fields are further divided among federal, state, and local authorities, as discussed below.) Vehicle emissions similarly are regulated via both EPA tailpipe standards and National Highway Transportation and Safety Administration mileage standards, with California authorized to propose more stringent ones. When coordinated, this multi-headed structure produces steady advances, but in deregulatory moments it has become fertile ground for opportunism, retrenchment, and delay. 

At the federal level, these questions have been exacerbated by massive shifts in administrative law, long the building block of environmental law and climate action, and in federal court rulings on the separation of powers, implicating the authority of federal agencies to issue and enforce rules. Successive administrations have run afoul of the current Supreme Court majority, whose “major questions doctrine” casts a shadow both on attempts to fit new problems into once-expansive environmental statutes, and on “whole of government” approaches that attempt to address climate change’s sources and impacts across the entire economy. 

Tentative attempts by presidents to leverage executive power and emergency authority have been curtailed when invoked for regulatory purposes, but are running strong in deregulatory efforts and executive actions in the service of “energy dominance.” Whether the Supreme Court will articulate some principled limits, and whether those will be even-handedly applied to future administrations, remains to be seen. Meanwhile, the past year has seen a large-scale push to reduce environmental regulation, in parallel with abrupt reorganizations and steep reductions in the federal workforce and agency budgets. These actions were joined by sharp declines in environmental enforcement and U.S. withdrawal from environmental and climate-related international instruments and bodies.

In this uncertain atmosphere, attention has turned to new technologies and building the necessary infrastructure to effect growth in low- and zero-carbon energy. As clean energy alternatives have matured and become economically competitive, the climate imperative is pushing against long-standing environmental review and permitting procedures. That may well include NEPA, which is now attracting attention from all three branches of government and a robust debate about whether, or how much, its procedures might be slowing energy deployment. 

Environmental issues were federalized for a reason: to counter pollution that crosses state borders and to prevent a race to the bottom. But decades of implementation have seen the blunting of some tools, expansion of others, and identification of gaps. Moving forward requires reaffirming that the environment is inseparable from societal health and well-being, economic stability, and energy systems. Any serious response must orient governance toward decarbonization, while embedding accountability, equity, and justice from the outset rather than inconsistently and often inadequately after the fact. Doing all this without sacrificing hard-won environmental gains will not be easy.

To meet the challenge of the worldwide crises of biodiversity loss, pollution overload, and climate change, creation of any new structure must be rooted in understanding the existing baseline for environmental governance. 

Cross-Cutting Objectives

Inseparable: Environment, Energy, Economy, and Society

The past half-century has demonstrated the impossibility of severing the environment from the economy, energy production, and social well-being. We must ensure the false dichotomy between environmental protection and economic development, characterized by an oversimplified idea that the two are in a zero-sum competition, also fades. The decades-old concept of sustainability (or triple bottom line) has not yet made its way into many of our foundational laws and governance structures.

Ignoring the complex relationships among environment, energy, the economy, and society favors short-term decisions that externalize impacts. This underlies the longstanding debate over the accuracy and efficacy of cost-benefit analyses, throughout their 40-plus year federal history, including questions about scope and how they handle uncertainty. For any project or program, system designers that consider an integrated suite of factors that move beyond basic environmental parameters or economic indicators (from public health to workforce development, from the supply chain to community well-being) have a greater chance of cross-sector success. 

These governance challenges are also inseparable from shifts in how finance flows. Public and private financial tools—from subsidies and tax credits to loans, grants, and community-based financing—are increasingly shaping market behavior and determining whether policy objectives translate into real-world outcomes. Who controls these tools, how they are deployed, and when capital is made available all play a central role in driving or constraining environmental progress.

Bridging these gaps is, of course, easier said than done. But widening the aperture of considerations can connect decisionmaking to holistic industrial policies that account for a wider range of economic, social, and environmental factors. Accounting for this wider range isn’t just a nice-to-have, but essential to shared prosperity. 

Foundational: Trust and Participation 

A process, project, or program will move at the speed of trust—no faster and no slower. This refers to trust in institutions, in science, and in process. 

Trust is earned through consistent transparency, clear accountability, and demonstrated responsiveness. For governance systems to function at the scale and pace required today, these principles must be embedded in decisionmaking in ways that are coherent and durable, rather than fragmented across a series of disparate steps and entities. Our traditional frameworks contain mechanisms to solicit and incorporate public input. But those mechanisms have limitations for all involved, both those trying to make their voice heard and those proposing the action and receiving input. (These range from when and how often participation occurs in the decisionmaking process to how the input is incorporated and decisions communicated.) Participation is foundational to our regulatory democracy and must occur early enough and in meaningful ways to improve decisions.

Effective participation also depends on clarity. People must be able to understand how decisions are made, what tradeoffs are being weighed, and where and how engagement can influence outcomes. But our frameworks still reflect reliance on elite and professional representation rather than widespread engagement. Trust—and the durability of outcomes—will increase when our processes have clearly articulated principles, transparently and rapidly weigh tradeoffs, and come to decisions through open and informed consideration. 

The Concurrent Risk and Promise of Technology 

Mechanization and industrialization created both unprecedented wealth and the pollutants that were the target of the 1970s wave of environmental laws. Emerging technologies likewise offer great promise, but also place familiar stresses—greenhouse gas emissions, water consumption, land use, waste—on the ecosystem and on human health and well-being. Our existing laws will need to respond and adapt to these problems as data centers and other novel demands reach greater scale, even as we evolve new ways of balancing those technologies’ potential against their up-front impacts and opportunity costs. 

Technology also offers a potential path through the climate crisis, as solar and wind energy have become scalable and cost-competitive with traditional fossil fuels. Other clean technologies on the horizon, such as geothermal or fusion energy, retain bipartisan support and will require legal and regulatory guardrails if they mature and are integrated into the system. Battery storage and energy efficiency advances will help manage and reduce energy demand, and carbon removal and sequestration technologies may also play a role in curbing emissions. And at the outer limits of our knowledge, various geoengineering concepts are raising difficult questions about feasibility, decisionmaking procedures, unintended consequences, and accountability. 

New technologies are also helping shape the implementation of environmental law in important ways. Existing tools such as satellite imaging, GPS location and geographic information systems, remote monitoring and sensing, and drones have fundamentally altered the way we view and record data from the physical world, in close to real time. Computer modeling and simulations have been a mainstay of climate science and policy, and other software innovations may improve environmental governance, including addressing long-standing issues of government transparency and public participation.

Sample Topics for Multi-Perspective Discussions
Communicating environmental challenges, conditions, and risks

 Effective messaging is essential to enhancing public understanding of interconnected issues and support for responses. It should be tailored to specific jurisdictions and informed by advances in research (e.g., behavioral science), learn from those thriving in today’s information ecosystem, and embrace strategies for reducing polarization.

Advancing the beneficial use of technologies while establishing reasonable guardrails

How can we identify and address barriers to the development and equitable deployment of technologies that advance environmental protection while limiting their negative impacts.

Democracy, Expertise, and Regulatory Certainty

In a healthy democracy, public policy is guided by evidence, and truth is the shared foundation for collective decisionmaking, whatever the chosen outcome. When facts and scientific expertise are dismissed or minimized in favor of ideology, however, it becomes harder for citizens to deliberate, solve problems, and hold leaders accountable. The diminution and marginalization of science contribute to the erosion of democracy itself.

In the United States, our ability to build necessary infrastructure and take action has been slowed by the long timelines and sometimes overlapping requirements of our regulatory processes. This is exacerbated by the increasingly extreme policy swings we have been experiencing between administrations. The result is the twin challenge of how to increase the pace of our processes without lessening their protections, while also making our decisions more stable and durable.

Aligning Regulatory Certainty and Timelines 

Regulatory certainty is not the same thing as rigidity. When done correctly, it can be the backdrop against which communities are able to plan for the future and companies can make informed decisions about where and how to invest. Regulation that is sufficiently clear on stable objectives does not have as much space in which to swing. 

Long horizons with clear milestones matter: think of a national clean electricity standard, or the emissions-based equivalent, set on a 15- to 20-year glidepath. Confidence in long-term decisions, however, stems from effective inclusion, holistic analysis, and transparent decisions. The perspectives of subject-matter experts (in-house and external), and of those who manage and care about the resources or land in question, should be considered essential and actively pursued by policymakers. 

Program-level thinking can help inform decisions at the project level. The energy transition will be remembered for feats of engineering—the thousands of miles of transmission lines, the buildout of battery storage—but its success will be determined by whether our framework listens, incorporates needed expertise, and produces rules that last long enough for people to plan their lives.

Evidence-Based Decisionmaking

For decades, the principle that good decisions require a good evidence base has been axiomatic. Dating back to 1945, the federal government has invested in science as a discipline and an idea, with government supporting the research to be conducted by public institutions and delivered as socially useful goods by the private sector.

Incorporating meaningful, often complex, evidence—including scientific data, traditional knowledge, and the needs, concerns, and priorities of potentially affected individuals—into decisionmaking is increasingly fraught. Climate change illustrates these challenges: despite decades of understanding by government officials and private sector decisionmakers about its causes and the need to act, economic and social interests have prevented effective policy and legislative response. Decisions are as good as the information they are based on. Emissions reductions ultimately depend not just on technical knowledge, but on institutions and governments capable of acting on that knowledge independently, transparently, and free from corruption and clientelism.

In a study assessing the effectiveness of the federal government’s efforts to improve evidence-based decisionmaking, the U.S. Government Accountability Office found mixed progress in: (1) developing relevant and high-quality evidence; (2) employing it in decisionmaking; and (3) ensuring adequate capacity to undertake those activities. These are foundational problems.

Compounding our challenges in making legislative and policy decisions based on accurate and pertinent evidence is the siren song of AI. Artificial intelligence promises many tools, ranging in complexity and autonomy from providing clerical tasks to generating substantive recommendations. (AI Clerical Assistive Systems automate certain administrative and procedural tasks, such as document classification and automatic transcription, and AI Recommendation Systems can contribute to judicial decision-making, for example, by analyzing legal codes and case precedents. Paul Grimm et al.)

 AI is already being used across jurisdictions and agencies for environmental regulation, including planning, reviewing proposals, drafting environmental reviews, public participation and engagement, monitoring compliance, and enforcement. Recent federal policy has fueled the AI flame, with a 2025 AI action plan and multiple Executive Orders that offer the power to expedite permitting processes.

Enormous governance questions around AI have yet to be resolved. Technologies built by people reflect the values and assumptions of those who built them, and their use shifts power in decisionmaking processes. If a judge were called upon to review a decision made by such a tool, how could she determine the finding was reasonable under existing standards of administrative law? Can machine-generated analysis satisfy NEPA’s “hard look” review? These types of governance concerns dog AI tools wherever they are deployed but become particularly critical when they have the potential to become the decisionmaker in our legal and regulatory system.

The importance of having rigorous systems for identifying and considering trusted information to ground collective and democratic decisionmaking cannot be overstated. Until recently, dozens of scientific advisory committees routinely advised federal agencies to help bridge information gaps. Staggering recent losses of federal research funding and government programs and scrubbing of essential data sets means any path forward will likely require significant investments of both financial and human capital. When we rebuild, priority should be placed on ensuring all participants in decisionmaking have access to the same evidence, supported by the same systems. 

Frontloading Regulatory Decisionmaking 

Even as we work to improve how evidence informs decisionmaking, we face growing risks, uncertainties, and tradeoffs. The challenge is not simply to generate more information, but to make better use of what we already know through regulatory systems that reflect the integrated nature of the problems we face—without mistaking uncertainty for an absence of evidence.

Many conflicts arise because decisions are fragmented across regulatory silos and institutions.  Consider a proposed electrical transmission line crossing a wetland. Decisionmakers must balance the imperatives of the energy transition, the conservation of biodiversity, the protection of water resources, and local economic opportunities. Yet these factors may be evaluated at different times, at different scales, and by different agencies. As a result, environmental permitting decisions can be made in isolation, long after foundational choices about the project’s purpose and design have already been locked in.

By the time site-specific questions arise, such as whether a particular wetland falls within the narrowed jurisdiction of the Clean Water Act, many broader tradeoffs have already been foreclosed. 

A holistic approach would entail identifying the priority of certain projects and a system for weighing their impacts. For example, infrastructure decisions could happen at a systemic scale such as nationwide grid needs, providing context for decisions about individual projects and resources. Our decisionmaking processes need systems for weighing tradeoffs, and making them transparent, to enable systems-level planning and prioritization and effective engagement. 

Hard decisions will have to be made regarding prioritized (and thus deprioritized) objectives. But frontloading data gathering, assessment, and decisionmaking on a national scale—through meaningful scenario planning, for example—could reduce the number of decisions made much further down the line in a project lifecycle and temper the uncertainty that can stem from permitting officials’ discretion. 

We will be facing these types of tradeoffs with increasing frequency as needs mount to build infrastructure and housing, retreat from our coasts, manage and conserve species and ecosystems, and respond to and prepare for increasingly frequent and severe emergencies. In addition to an integrated approach for assessing impacts and making tradeoffs transparent, the system will need certain decisions to be made earlier in the decisionmaking processes and with a broader scope. 

Acting (and Adapting) Amidst Uncertainty 

Core tenets of administrative law structure decisionmaking with up front analysis and assume that we have full—or at least sufficient—information about circumstances and potential impacts to support a decision. But this is not always the case. When there are substantial uncertainties about conditions or the possible impacts of an action or rulemaking, adaptive management can improve outcomes by taking an iterative, systematic approach. 

The uncertainties brought on by changing conditions due to climate impacts and unknowns about the consequences of proposed actions may call for an adaptive approach. And there are other situations where establishing sufficient evidence before taking irreversible action is appropriate. For example, we currently have limited understanding of the potential local and global impacts of geoengineering proposals to release aerosols into the atmosphere to block the sun’s rays, nor are there governing mechanisms in place to address them. 

There are also situations where it is important to ensure that we do not indefinitely postpone action due to a desire to have all the answers before acting, such as infrastructure for transitioning away from fossil fuel combustion. When appropriate, effective adaptive management plans include procedural and substantive safeguards such as clear goals to set an agenda and provide transparency, an accurate assessment of baseline conditions to compare future monitoring data against, an outline of the thresholds at which management actions should be taken to promote certainty and assist with judicial enforcement, and is linked to response action.

Learning as we go and making appropriate adjustments may be justified in some contexts, and even essential when we do not have the luxury of time and must move ahead without critical information. Adaptive management can increase an agency’s ability to make decisions and allow managers to experiment, learn, and adjust based on data. But adaptive management’s flexibility comes at the cost of more resources and less certainty, which may also invite controversy. The sweet spot for adaptive management may be when managing a dynamic system for which uncertainty and controllability are high and risk is low. While uncertainties are proliferating, situations that meet those conditions are not the norm. 

It would be beneficial for our environmental governance systems to explicitly identify conditions under which adaptive management may and may not be used, and to provide clear accountability mechanisms. The approach must fit with the practical realities of the working environment. For example, even if uncertainty and controllability are high and risk is relatively low, tinkering with large-scale energy infrastructure is not practical. Adaptive management may not be suited to regulatory contexts (1) in which long-term stability of decisions is important; (2) where decisions simply can’t easily be adjusted once implemented; or (3) where it is essential that an agency retain firm authority to say “yes” or “no” and leave it at that.  It is a valuable tool to be invoked when truly necessary.

Sample Topics for Multi-Perspective Discussions
Realigning to reflect today’s challenges

The interconnectedness of today’s global environmental challenges is in tension with the accreted framework of media-specific, site-specific laws and siloed agencies. Adjustments that help to align objectives, processes, and structures could scale impact. 

Evidence-based decisionmaking is foundational to U.S. governance and essential to progress towards today’s environmental imperatives

Our framework should reflect commitment to and investment in gathering and analyzing information, from intricate science to the concerns of impacted communities; and be designed to incorporate and respond to changing information, such as through judicial review or other checks. 

Designing effective certainty

In part because of impacts already set in motion, we must consider when we cannot wait for more information before taking action on environmental and climate challenges. By their nature, some of those actions can be adapted on an ongoing basis, while others cannot. Clear parameters for differentiating will help ensure clear timelines and appropriate, effective processes.

Building a Structure Fit for Purpose

The triple planetary crises, a term coined by the UN Environment Programme, refers to the challenges of biodiversity loss, pollution overload, and climate change. They require large-scale mobilization and societal level adjustments. This magnitude of action requires a multifaceted system that can support and move myriad levers in a coordinated and balanced manner. The year she received the Nobel Prize in Economics, Elinor Ostrom published a paper capturing the tension but also necessity of this layered system, calling for a “polycentric approach” to addressing climate change.

The following discussion focuses largely on federal and state government action. In addition, Tribal Nations are vital sovereign authorities, partners, and voices in governance, including natural resource management, and their needs and knowledge are critical to effective, sustainable, just results. And as Ostrom recognized, private entities will also be instrumental in addressing climate change and other complex challenges; this includes not only corporations, as discussed below, but philanthropic organizations and a variety of other nongovernmental actors.

The Scale Challenge 

Environmental regulation occurs at multiple levels: local ordinances, state laws and policies, interstate agreements, tribal laws, federal regulations, and international laws and norms. It also works at different resource scales, from managing a subspecies to protecting regional drinking water to setting nationwide air standards.

Jurisdictional nesting can provide comparative benefits at various levels for specific resources or pollutants. For example, working at the local level may allow for tailoring to specific circumstances to maximize benefits and the building of trust, while working at the state level can allow for the cumulative benefits of collective local action while also allowing for the testing of different approaches to federal implementation. Meanwhile, working at the federal and larger scale allows, among other things, the balancing of voices, and the establishment of shared objectives, standards, or requirements. 

However, tiered systems can also be subject to gaps in implementation, such as when there is no mechanism to trigger enforcement of an international mandate at a national level. This may inadvertently impede interoperability and shared learning, such as by using different data standards, tools, or systems, and slow action due to competing or otherwise unaligned priorities. In addition, rarely do jurisdictional boundaries align with resource definitions, whether it be a hydrogeographic basin, extent of an air pollutant, or natural hazard vulnerability zone. Further complexity is added by questions around preemption, with changes occurring in longstanding understandings of federal versus state authorities under key statutes and regulatory structures. 

Federal, tribal, state, and local governments must navigate these challenging dynamics as they work to effectively implement existing environmental laws and creatively address new environmental problems. 

Cooperative Federalism

Federalism—whereby the federal government and states share power and responsibilities—is a central tenet of the U.S. governance system. A particular form, cooperative federalism, is embodied in most of the major U.S. environmental laws, including the Clean Air Act and the Clean Water Act. These laws establish a legal framework in which minimum standards are established at the federal level and individual states implement the programs. Today, over 90 percent of the delegable federal environmental programs are run by states. As a general matter, states are responsible for ensuring that federal standards are met but have the flexibility to impose standards that are more stringent than the federal standards. 

In practice, the Congressional Research Service observes that the “precise relationship and balance of power between federal and state authorities in cooperative federalism systems is the subject of debate.” This debate has manifested in a variety of ways over the decades, including differences over the appropriate scope of federal oversight and levels of federal funding for state-delegated programs. 

Environmental protection has advanced in many respects over time with cooperative federalism as its foundation, but few would argue there is no room for improvement. For example, a 2018 memorandum by the Environmental Council of the States (ECOS) captured a consensus among states that the “current relationship between U.S. EPA and state environmental agencies doesn’t consistently and effectively engage nor fully leverage the capacity and expertise of the implementing state environmental agencies or the U.S. EPA.”

In addition to the leeway that cooperative federalism provides to the states in implementing federal environmental laws, states are free to regulate or otherwise address environmental problems that are not covered by federal laws. As a result, states are often referred to as (in Justice Brandeis’ phrase) “laboratories of democracy” for testing innovative policies. Historically, states have served as testing grounds for environmental policies later adopted by the federal government. Given the current federal governance landscape, discussed below, what happens in the states may stay in the states (at least for quite some time)—making state laboratories one of the few promising options for advancing environmental protection. 

Barriers to Optimal Functioning of Cooperative Federalism 

In addition to the inherent systemic challenges outlined above with respect to multi-tiered jurisdiction and resource scale, there are broad societal barriers to maximizing the efficacy of cooperative federalism. The numerous overarching problems contributing to democratic dysfunction (e.g., channelized communication, primaries that yield extreme candidates who foster dramatic pendulum swings, lack of public trust) will contribute to impeding the optimal functioning of cooperative federalism for the foreseeable future. 

The multitude of environmental governance-specific challenges identified earlier also significantly affect the functioning of cooperative federalism. These include, for example, long-standing congressional gridlock; new and emerging environmental harms that cannot be easily addressed within the existing, siloed framework; a Supreme Court changing its review of regulation; and regulatory pendulum swings that make consistency and stability difficult and hinder continuous improvement.

In addition, several additional barriers arguably weaken the foundations of cooperative federalism. These include: ineffective federal oversight of state programs (possibly both too stringent and too lenient in some respects); insufficient collection and dissemination of data (e.g., on environmental conditions, performance, pollution impacts), as well as inconsistent tracking of key environmental indicators; lack of state-specific effective risk communication and messaging; limited state resources for filling federal regulatory gaps or experimenting with innovative ways of implementing federal and state regulations; and insufficient federal funding for state programs. Recent critiques also point to the need to build out state administrative law to improve the functioning of cooperative federalism.

Opportunities for Renewing Cooperative Federalism

Recent developments in federal programs are disrupting many aspects of the country’s environmental protection efforts. These developments include drastic regulatory rollbacks, multiplied industry influence, curtailed input from scientists and other experts, rollback of federal grant funds to states and local governments, and sweeping staffing cuts resulting in loss of critical expertise. 

Cooperative federalism has been particularly undermined by federal funding cuts (e.g., withdrawal of federal grants, reductions in revolving loan funds) and cuts to the federal programs that collect and analyze environmental data. Moreover, federal interference with independent or “more stringent than” state initiatives is taking a toll (e.g., response to California’s electric vehicle requirements ).

Given the barriers outlined above that make major statutory change infeasible, building an entirely new structure to replace cooperative federalism will be a nonstarter for the foreseeable future. However, ample opportunities exist to strengthen the existing structure in a manner that yields more effective and innovative approaches to environmental protection. 

Front and center is building state and local governmental capacity to fill the gaps created by federal inaction and rollbacks as well as to lead on regulatory innovation. In so doing, states and local governments can serve as more effective laboratories of democracy and foster innovative federal action. And because states and local governments are on the frontlines of managing environmental and climate impacts such as floods and wildfires, as well as aging water infrastructure and other environment-related challenges, they are motivated to address the cause and effects of these harms, despite the intensely politicized nature of environmental issues such as climate change. 

To be sure, renewing the existing structure is complicated by an uneven political landscape. For example, the level of political and popular support for environmental protection measures in the 26 states led by Republican governors differs from the levels of support in the 24 states led by Democratic governors, and the relative dominance of a particular party (e.g., trifectas or triplexes) is also a factor. These dynamics likewise influence environmental action by local governments when, for example, the potential for state preemption of local authority is a factor. 

Nevertheless, the practical reality of increased extreme weather events, aging water infrastructure, and other environment-related challenges provides a strong incentive for all states and local governments to act. State and local efforts, however, are hindered by limited capacity in the form of staffing, funding, expertise, data, and other factors. For example, virtually all states could benefit in their decisionmaking from more robust data on local environmental conditions, and many states lack adequate funding, staff, and other resources.

Private Sector Synergies and Opportunities

Private environmental governance (PEG)—which can take a range of forms including collective standard-setting, certification and labeling systems, corporate carbon commitments, investor and lender initiatives, and supply chain requirements—is already making its mark across industries as diverse as electronics, forestry, apparel, and AI. For example, roughly 20 percent of the fish caught for human consumption worldwide and 15 percent of all temperate forests are subject to private certification standards. In addition, 80 percent of the largest companies in key sectors impose environmental supply chain contract requirements on their suppliers. And investors are increasingly taking environmental, social, and governance (ESG) into account, including risks related to climate change. A 2022 study estimated, for example, that assets invested in U.S. ESG products could double from 2021 to 2026 and reach $10.5 trillion. 

As professors Vandenbergh, Light, and Salzman explain in their book Private Environmental Governance: “If you want to understand the future of environmental policy in the 21st century, you need to understand the actors, strategies, and challenges central to private environmental governance.” 

Given the scope of PEG activities, it is not surprising that a range of regulatory regimes are implicated, including corporate governance, contract, antitrust, and consumer protection laws. In some cases, these legal regimes place constraints on the forms and scope of PEG initiatives. Many contend, however, that these constraints are inadequate, as reflected in recent efforts to severely curtail ESG initiatives. 

Further, some scholars and advocates have criticized PEG from an entirely different perspective, citing concerns that PEG measures constitute greenwashing—that is, that they do not actually change corporate behavior and environmental conditions. Among other concerns is that PEG may undermine support for public governance measures in certain contexts. 

Yet federal legislative gridlock, a dramatically swinging environmental regulatory pendulum, unregulated new technologies, and other factors point to needing a better understanding of how PEG can be leveraged to advance environmental protection efforts—including the improved functioning of cooperative federalism.

Sample Topics for Multi-Perspective Discussions
Building a robust and widely disseminated information base

How can we use innovative approaches for preserving existing data and collecting new data on environmental conditions, regulated entity performance, and pollution impacts to enhance interoperability of local, state, and federal systems, foster consistency among assessments of risk, and help align priorities and approaches?

Leveraging traditional state and local powers

Problems such as climate change require a whole of government approach to address and could benefit from leveraging adjacent state and local regulatory authorities in areas such as land use (e.g., zoning), infrastructure, and public health.

Enhancing connectivity within jurisdictional nesting and fostering networks of state-level and local-level regulators to align priorities

Bolstering state and local officials’ networks for sharing data, best practices, and regulatory innovations may help align priorities and produce further progress on cross-jurisdictional problems as well as new challenges such as permitting reforms.

Examining how PEG can be leveraged to advance environmental protection

For example, asking—what are the effects of PEG (e.g., emissions reductions); what are the drivers of PEG (e.g., brand reputation, shareholder actions, employees, and corporate customers); are there ways to reduce greenwashing and greenhushing; and how can we ensure that PEG complements public governance.

Leveraging new technologies for capacity-building

For example, AI and advanced monitoring technologies—if thoughtfully leveraged—could lessen the burden on state and local governments, particularly those that are under-resourced, in their efforts to assess climate risk, develop resilience plans, and monitor regulatory compliance.

Conclusion

The environmental gains of the last half-century demonstrate that governance choices matter. The United States built a system capable of addressing the urgent environmental crises of its time by combining scientific expertise, democratic accountability, and enforceable legal standards. 

Today’s urgent challenges—climate change, biodiversity loss, and pervasive pollution—demand a similar alignment under far more complex conditions. The challenge is not merely to regulate more, faster, or differently, but to recommit to decisionmaking that is credible and durable: by restoring confidence that evidence matters, that participation is meaningful, that tradeoffs get confronted honestly, and that rules will persist long enough to justify investment and collective effort.

The path forward lies neither in abandoning the foundations of environmental law, nor in relying solely on technological or private solutions. It will be found by strengthening and adapting existing governance structures—integrating cross-cutting objectives across domains, clarifying roles across jurisdictions, and rebuilding the shared evidentiary base and institutional capacity needed to act amid uncertainty, rather than deferring action in pursuit of unattainable certainty. And it requires clear communication about today’s complex, dispersed challenges that enhances understanding and reduces polarization. 

At its core, the triple planetary crisis is a democratic and governance challenge: how societies decide, together, to protect people and places while sharing costs and benefits fairly. Meeting that challenge will require systems capable of carrying both technical complexity and public trust, as well as a sustained commitment to invest in institutions that can decide, act, and endure. 

Costs Come First in a Reset Climate Agenda

Building Blocks to Make Solutions Stick

Durable and legitimate climate action requires a government capable of clearly weighting, explaining, and managing cost tradeoffs to the widest away of audiences, which in turn requires strong technocratic competency. 

Democratic governance needs

State Capacity needs


Key Takeaways

Introduction

Public policy involves tradeoffs. The primary tradeoff for climate change mitigation is economic cost. Secondary tradeoffs include commercial freedom, consumer choice, and the quality or reliability of goods and services. Political movements seeking to address a collective action problem, such as climate change, are prone to overlook the consequences of tradeoffs on other parties, like consumers and taxpayers. This paper posits that the cost tradeoffs of climate change mitigation have been underappreciated in the formation of public policy. This has resulted in an overselection of high cost policies that are not politically durable and may erode social welfare. It also results in overlooking low or negative-cost policies that are durable and hold deep abatement potential. These policies can have broad political appeal because they align with the self-interest of the United States, however they typically require dispersed beneficiaries to overcome the concentrated lobby of entrenched interests. 

A core, normative objective of public policy is to improve social welfare, which “encourages broadminded attentiveness to all positive and negative effects of policy choices”. Environmental economics determines the welfare effects of climate change mitigation policy by the net of its abatement benefits less the costs. The conventional technique to determine abatement benefits is the social cost of carbon (SCC). The barometer for whether climate policy benefits society is to determine whether abatement benefits exceed costs. Accounting for full social welfare effects requires consideration of co-benefits as well, granted these tend to be conventional air emissions with existing mitigation mechanisms covered under the Clean Air Act. Nevertheless, accounting for costs is essential to ensure climate policy benefits society. 

Abatement costs also have a discernable bearing on the likelihood and durability of policy reforms. Climate policies exhibit patterns of passage, mid-course adjustments, and political resilience across election cycles based on the constituency support levels linked to benefit-allocation and cost imposition. This paper develops four policy classifications as a function of their abatement benefit-cost profile, and uses this framework to examine the political economy, abatement effectiveness, and economic performance of select past and potential policy instruments. 

Political Economy and Policy Taxonomy 

The translation of climate policy concepts into legitimate policy options in the eyes of policymakers can be viewed through the Overton Window. That is, politicians tend to support policies when they do not unduly risk their electoral support. The Overton Window for climate policy is constantly shifting within and across political movements with the foremost factor being cost. 

In a 2024 survey of voters, the most valued characteristics of energy consumption were 37% for energy cost, 36% for power availability, 19% for climate effect, 6% for U.S. energy security effect, and 1% for something else. Democrats slightly valued energy cost and power availability more than climate effects. Independents and Republicans heavily valued energy cost and power availability more than climate effect. 

Figure 1. Voters’ Energy Values

Progressives have long exhibited greater prioritization of climate change policy, but cost concerns are driving an overhaul of the progressive Overton Window on climate change. In California, which contains perhaps the most climate-concerned electorate in the U.S., progressives have begun a “climate retreat” to recalibrate policy as “[e]lected officials are warning that ambitious laws and mandates are driving up the state’s onerous cost of living”. Nationally, a new progressive thought leadership think tank is encouraging Democrats to downplay climate change for electoral benefit. Importantly, they find that 61% of battleground voters acknowledge that “climate change is at least a very serious problem,” but that “it is far less important than issues like affordability.” 

Similarly, veteran progressive thought leaders, such as the Progressive Policy Institute, now stress that “energy costs come first” in a new approach to environmental justice. While emphasising the continued importance of GHG emissions reductions, those policy leaders are making energy affordability the top priority, amid a broader Democratic messaging pivot from climate to the “cheap energy” agenda. The rise of cost-conscious progressives is particularly notable because the progressive electorate has expressed a higher willingness to pay to mitigate climate change than moderate and conservative electoral segments. 

Economic tradeoffs, namely costs and more government control, has long been the central concern on climate policy for the conservative movement. The conventional climate movement messaged on fear and the need for economic sacrifice, which is the antithesis of the conservative electoral mantra: economic opportunity. Yet the conservative climate Overton Window emerged with a series of state and federal policy reforms when climate change mitigation aligned with expanded economic opportunity. However, pro-climate conservative thought leaders remain opposed to high cost policies, such as calling to phase out Inflation Reduction Act (IRA) subsidies for mature technologies. 

Many leading conservative thought leaders continue to challenge the climate agenda writ large because of its association with high cost policies. For example, President Trump’s 2025 Climate Working Group report was expressly motivated by concerns over “access to reliable, affordable energy” while acknowledging that climate change is a real challenge. Similarly, a 2025 American Enterprise Institute report finds that the public is most interested in energy cost and reliability and unwilling to sacrifice much financially to address climate change. Meanwhile, climate-conscious conservative thought leaders like the Conservative Coalition for Climate Solutions and the R Street Institute continue to emphasize a market-driven, innovation-focused policy agenda that prioritizes American economic interests and drives a cleaner, more prosperous future. Altogether, it indicates a conservative Overton Window on negative and low-cost climate change mitigation. 

While cost is driving the Overton Window within each political movement, it also buoys the potential for alignment across political movements. Political movements are not monoliths, but rather exhibit major subsets within each movement. The progressive movement has seen gains in popularity among its populist left flank, often identified as the “democratic socialist” wing, which contributes to ongoing debate about Democrats’ ideological direction. Climate policy initiated by this wing, however, is associated with high economic tradeoffs (e.g., degrowth) and has prompted a backlash within the progressive movement. By contrast, a subset of the progressive movement, sometimes labelled “abundance progressives,” has emerged to support a more pro-market, pro-development posture. This movement is especially responsive to energy cost concerns, and is an emerging substitute for the anti-development traditions of the progressive environmental movement. Overall, variances in the progressive movement are fairly straightforward to categorize linearly on the economic policy spectrum. 

The Republican electorate views capitalism far more favorably than Democrats, but with modest decline in recent years. Republicans have trended away from consistently conservative positions associated with limited government, which historically emphasized the rule of law and a strict cost-benefit justification for government intervention in the market economy. They have migrated towards right-wing populism associated with the Make America Great Again (MAGA) movement. Right-wing populism is hard to operationalize for economic policy because it is not a standalone ideology, but a movement vaguely attached to conservative ideology. Generally, the “America First” orientation of MAGA implies positions based on the self-interest of the U.S., with the Trump administration prioritizing cost reductions in energy policy. 

MAGA is further to the right of conventional conservatives on environmental regulation and general government reform. For example, conservatives have noted the contrast between conservative “limited, effective government” and the Department of Government Efficiency’s “gutted, ineffective government” reform approach. On the other hand, MAGA will occasionally back leftist policy instruments, such as coal subsidies, wind restrictions, executive orders to override state policies, and emergency authorities for fossil power plants. These are often justified to counteract the leftist policies passed by progressives (e.g., renewables subsidies, fossil restrictions, emergency authorities for renewables), resulting in dueling versions of industrial policy. In other words, ostensible overlap between MAGA and progressives on policy instrument choice actually reflects the use of similar tools used for conflicting purposes (e.g., restrictive permitting or subsidies for opposing resources; i.e. picking different “winners and losers”). Nevertheless, the disciplinary agent for right-wing energy populism has been cost concerns, which have influenced the Trump administration to pursue more traditionally conservative energy policies like permitting reform and lowering electric transmission costs. 

This political economy identifies the broadest cross-movement Overton Window between moderate or “abundance progressives” and traditional conservatives. Regardless, both broad movements exhibit cost sensitivity and growing prioritization of U.S. self–interest. Distinguishing the domestic SCC from global SCC is essential to determine what policies are consistent with the self-interest of the U.S. versus the world as a whole. Traditionally, the U.S. government only considers domestic effects in cost-benefit analysis, yet the vast majority of domestic climate change abatement benefits accrue globally. 

The first SCC, developed under the Obama administration, relied solely on a global SCC. Leading conservative scholars, including the former regulatory leads for President George W. Bush, criticized the use of the global SCC only to set federal regulations. They argued for a “domestic duty” to refocus regulatory analysis on domestic costs and benefits. Similarly, the first Trump administration used a domestic SCC. Although the second Trump administration moved to discard the SCC outright, this appears to be part of a regulatory containment strategy, not a reflection of the conservative movement’s dismissal of the negative effects of climate change. In other words, even if the SCC is not the explicit basis for policymaking, it is a useful heuristic for policymakers.

The proper value of the SCC is the subject of intense scholarly and political debate. It has fluctuated between $42/ton under President Obama, $1-$8/ton under President Trump, and $190/ton under the Biden administration (all values for 2020). The main methodological disagreement has been over whether to use a domestic or global SCC, with the Trump administration position guided by “domestic self-interest.” This suggests the original domestic and global SCC values may approximate the Overton Window parameters the best. This underscores the following policy taxonomy that characterizes climate abatement policies by cost relative to domestic and global SCC levels:

Policy Applications

There are myriad policies across the abatement cost spectrum. This analysis applies to particularly popular domestic policies already pursued or readily considered. This includes policies targeting the environmental market failure via direct abatement (GHG regulation) and indirect abatement (public spending, clean technology mandates, and fuel bans). It also includes policies targeting non-climate market failure, yet hold deep climate co-benefits (innovation policy). The analysis also examines policies that correct government failure and have major climate co-benefits (permitting, siting, and electric regulation reform). 

Fuel Mandates and Bans

For the last two decades, the most prevalent climate policy type in the U.S. has been state level fuel mandates and bans. Last decade, the environmental movement came to prefer policies that explicitly promote or remove fuels or technologies, not emissions. This is despite ample evidence in the economics literature that market-based policies are more effective and carry far lower abatement costs. Nevertheless, the most common domestic climate policy instrument this century has been state renewable portfolio standards (RPS). The literature notes several key findings from RPS:

Micro-mandates have also sprung up, primarily in progressive states. These have often targeted the promotion of nascent or symbolic energy sources that the market would not otherwise provide, with the costs obscured from public view (e.g., rolled into non-bypassable electric customer charges). A good example is offshore wind requirements in the Northeast, which carries a high abatement cost (over $100/ton). 

Fuel bans have become increasingly popular climate policy in progressive states and municipalities. Beginning in 2016, a handful of progressive states began banning coal. However, this does not appear to have created much cost or abatement benefit, as evidenced by a lack of commercial interest in coal expansion in areas without such restrictions. In fact, neither federal nor state regulation was responsible for steep emissions declines from coal retirements. Coal retirements were mostly driven by market forces, especially breakthroughs in low-cost natural gas production and high efficiency power plants. Policy factors, like the Mercury and Air Toxics Rule, were secondary drivers of coal plant retirement. 

Around 2020, California, New York, and most New England states began adopting partial natural gas bans or de facto bans on new gas infrastructure through highly restrictive permitting and siting practices. Unlike coal restrictions, these laws have markedly decreased commercial activity, namely gas pipeline and power plant development, and in some cases caused economically premature retirements. This has caused “pronounced economic costs and reliability risk.” Resulting pipeline constraints drive steep gas price premiums in these states, which translate into a core driver of elevated electricity prices

Insufficient pipeline service in the Northeast is especially problematic, as demonstrated by a December 2022 winter storm event that nearly led to an unprecedented loss of the Con Edison gas system in New York City that would have taken weeks or months to restore. Further, preventing gas infrastructure development does not provide a clear abatement benefit, because more infrastructure is needed to meet peak conditions even if gas burn declines. A prominent study found a 130 gigawatt increase in gas generation capacity by 2050 was compatible with a 95% decarbonization scenario. 

Progressive states and municipalities have also pursued natural gas consumption bans. This policy may carry exceptional cost, especially for existing buildings, with potentially well over $1 trillion in investment cost to replace gas with electric infrastructure. One estimate put the cost of natural gas bans at over $25,600 per New York City household. A Stanford study projected a 56% electric residential rate increase in California from a natural gas appliance ban. Generally, conservative thought leaders and elected officials have opposed natural gas bans for cost as well as non-pecuniary reasons, including security concerns and the erosion of consumer choice. This applies even for prominent members of the Conservative Climate Caucus. Altogether, gas bans are considered class IV policy with virtually no Overton Window alignment. 

GHG Transparency 

GHG regulation takes various forms. The least stringent is GHG transparency, which addresses an information deficiency and lowers transaction costs in voluntary markets. This begins with reporting and accounting requirements on emitters (Scope 1 emissions). Public policy can help resolve measurement and verification problems that have eroded confidence in voluntary carbon markets. GHG transparency policy can also standardize terminology and provide indirect emissions platforms. For example, making locational marginal emissions rates on power systems publicly available lets market participants identify the indirect power emissions of power consumption (Scope 2 emissions). Progressives have consistently favored GHG transparency policy, while conservatives have typically supported light-touch versions of it like the Growing Climate Solutions Act

The second Trump administration recently pursued removal of basic GHG reporting requirements on ideological grounds, specifically repeal of the GHG Reporting Program (GHGRP). This appears to reflect an optical deregulatory agenda over an effective one. Conservative groups have warned of the downsides of GHGRP repeal. Pressure to course correct may prove fruitful, given that the industry the Trump administration aims to assist – oil and natural gas – maintain that the U.S. Environmental Protection Agency (EPA) should retain the GHGRP. A recent analysis found that if states replace the GHGRP, new programs will be more expensive (Figure 2). 

Figure 2. Cost Comparison of Federal and California Reporting Programs

Many regulated industry and conservative groups instead support a low compliance cost GHG reporting regime with durability across future administrations. This not only applies to direct emissions reporting but indirect emissions reporting, as in the absence of federal policy industry faces a patchwork of compliance requirements across states and foreign governments. The same economic self-interest rationale justifies a role for limited government in emissions accounting, with an emphasis on the capital market appeal of showcasing the “carbon advantage” of the U.S. in emissions-intensive industries. An example is liquified natural gas, whose export market is enhanced by showcasing its lifecycle emissions advantage over foreign gas and coal. 

The abatement effectiveness of GHG transparency has grown appreciably in the 2020s, as voluntary industry initiatives have sharply increased. This policy set enables an efficient “greening of the invisible hand” with staying power, as corporate environmental sustainability efforts appear resilient regardless of political sentiment, unlike corporate social endeavors. In fact, the aggregate willingness to pay for voluntary abatement from producers, consumers, and investors suggests that well-informed domestic markets go a long way towards self-correcting the externality of GHGs (e.g., convergence of the private and social cost curves). Certain voluntary corporate behaviors may even exceed the global SCC, especially commitments to nuclear, carbon capture, and other higher cost abatement generation financed by the largest sources of power demand growth. Well-functioning voluntary carbon markets could yield roughly one billion metric tons of domestic carbon dioxide abatement by 2030. Providing locational marginal emissions data can slash abatement costs from $19-$47/ton down to $8-$9/ton while doubling abatement levels from some power generation sources. 

Overall, efficient GHG transparency policy described above is a low-cost mitigation strategy consistent with class II designation. Basic, federal GHG transparency policy may even constitute class I policy, because it avoids the higher compliance cost alternative of a patchwork of state and international standards that would manifest in the absence of federal policy. However, stringent GHG transparency policy may constitute class III or IV policy. Prominent examples include a recent California climate disclosure law and a former Securities and Exchange Commission proposed rule to require emissions disclosure related to assets a firm does not own or control (Scope 3). Such efforts may obfuscate material information on climate-related risk and worsen private-sector led emission mitigation efforts.

Direct GHG Regulation 

Classic environmental regulation takes the form of a command-and-control approach. These instruments include applying emissions performance standards or technology-forcing mechanisms, typically for power plants or mobile sources. These policies vary widely in stringency and cost. Overall, command-and-control is widely considered in the economics literature to be an unnecessarily costly approach to reducing GHGs relative to market-based alternatives. It can also result in freezing innovation, by discouraging adoption of new technologies. 

Federal command-and-control GHG programs have not been particularly environmentally effective, cost-effective, or demonstrated legal or political durability. The first power plant program was the Clean Power Plan, which was struck down in court, and yet its emissions target was achieved a decade early from favorable market forces and subnational climate policy. The most recent federal command-and-control approaches for GHG regulation were 2024 EPA rules for vehicles and power plants. A 2025 review of these and other federal climate regulations over the last two decades of federal climate regulations found:

The 2025 review study implies that past federal command-and-control had very high cost – well into class IV range. It has also been a top priority of conservatives to undercut. However, it is possible for modest command-and-control policy with class II or III costs. 

Some conservatives, noting EPA’s legal obligation to regulate GHGs and the cost of regulatory uncertainty from decades of EPA policy oscillations between administrations, suggested modest requirements as a better option to replace high cost rules in order to mitigate legal risk and provide industry a predictable, low-cost compliance pathway. For example, conservatives argued that replacing high cost requirements for power plants to adopt carbon capture and storage (CCS) with low cost requirements for heat rate improvements may lower compliance costs more than attempting to repeal the Biden era rule for CCS outright. Similarly, the oil and gas industry opposed stringent GHG regulations on power plants and mobile sources, but often validated alternative low cost compliance requirements. 

The first Trump administration pursued modest replace-and-repeal GHG regulation. The second Trump administration has opted for repeal policies and to eliminate the endangerment finding via executive rulemaking. However, regulated industry and many conservative thought leaders believe this is a strategic blunder, given the low odds of legal success, resulting in the perpetuation of “regulatory ping-pong that has plagued Washington, D.C., for decades.” If the courts uphold Massachusetts v. EPA and the associated endangerment finding, this implies that modest command-and-control policy may have durable political alignment potential. Yet this does not hold much abatement potential. In the absence of a legal requirement to regulate GHGs, there is unlikely to be broad political alignment for even modest command-and-control policy. Conservatives tend to view this as a gateway to more costly policies that will probably not meaningfully affect global GHG trajectories. 

The 2025 review study understates the full cost of U.S. climate regulations because they exclude state and local levels. Although no comprehensive study of state climate regulation is known, command-and-control state regulations often raise major cost concerns as well. The cost and environmental performance of such state programs varies immensely, often owing to differences in the accuracy of abatement technology costs that regulatory decisions are based upon (e.g., the failure of California’s zero-emission vehicle program compared to success with its low-emission vehicle program). A recent example is California’s rail locomotive mandate, which projected to impose tens of billions of dollars in costs before being withdrawn. State command-and-control regulation is commonplace in progressive states, but not beyond, implying meager Overton Window alignment. 

A more economical version of GHG regulation is a system of marketable allowances, or cap-and-trade (C&T). Over three decades of experience with C&T programs reveals two things. First, C&T is environmentally effective and economically cost effective relative to command-and-control policy. Second, C&T performance depends on its design quality and interaction with other policies. Abatement costs depend on stringency and other design features, but C&T in a backstop role is generally close to the domestic SCC, rendering it class II policy. Robust C&T generally falls in the class III policy range. C&T is an example of abatement policy that can be cost-effective on a per unit basis, but given the breadth of its coverage its total costs can be substantial. Recent developments in Pennsylvania indicate a possible preference for policies with higher per-unit abatement costs than C&T, which may reflect a political preference for policies with less cost transparency and lower aggregate costs. 

Some environmental C&T complaints are valid, such as emissions leakage, but C&T effectiveness concerns are generally readily fixable design flaws. C&T effectiveness complaints are often the result of interference from other government interventions like fuel mandates, relegating C&T to a backstop role and suppressing allowance prices. Such state interventions triggered anti-competitive concerns in wholesale power markets overseen by the Federal Energy Regulatory Commission (FERC). This prompted conservative state electric regulators to call for a conference to validate mechanisms like C&T as a market-compatible alternative to high cost interventions. Conservative expert testimony at that conference, invited by conservative FERC leadership, explained that interventions layered on top of C&T merely reallocate emissions reduction under a binding cap, which raises costs, creates no additional abatement, and undermines innovation. This implies that such states might increase abatement and lower aggregate costs by upgrading the role of C&T and downgrading the role of costlier interventions. 

In the 2000s, bipartisan interest in federal C&T policy arose, but it failed and has not resurfaced. In its absence, states have supplanted federal policy with subnational C&T programs. However, the durability of C&T beyond progressive states is unclear. Moderate states have sometimes joined a regional C&T program under Democratic leadership, but sometimes departed them under Republican leadership. Conservative state groups typically challenge C&T adoption and seek repeal of C&T programs like the Regional Greenhouse Gas Initiative. This suggests that C&T is at the fringe, but typically outside, an Overton Window across political movements. 

Permitting and Siting 

Permitting policy can base decisions explicitly on GHG criteria, or they can be based on non-GHG factors but hold indirect GHG consequences. Generally, only progressive states and presidents have pursued the former. Federally, these include the Obama administration’s “coal study” and Biden administration’s “pause” on liquified natural gas (LNG). The LNG pause did not provide any apparent emissions benefit, yet carried substantial foregone economic opportunity and strategic value to U.S. allies. Pragmatic progressive thought leaders expressed concern with the pause, noting the creation of economic and security risks, and suggested lifting the pause in exchange for companies to commit to strict, third-party verified methane emissions standards. Relatedly, some conservative thought leaders have supported policy that enables voluntary participation in certified programs that provide market clarity and confidence to harness private willingness to pay for lower GHG products. This has been buttressed by support from an industry-led effort to advance a market for environmentally differentiated natural gas based on a standard, secure certification process. 

Permitting constraints on clean technology supply chains can have perverse economic and emissions effects. A prime example is critical minerals, which are essential components to clean energy technologies. A net-zero emission energy transition, relative to current consumption, would increase U.S. annual mineral demand by 121% for copper, 504% for nickel, 2,007% for cobalt, and 13,267% for lithium. Market forces, unsubsidized, are poised to produce a sufficient amount of domestic copper and lithium supply to satiate a large share of domestic demand, but face undue barriers to entry that restrict production far below its potential. To meet net-zero objectives, permitting reform allowing all currently proposed projects to enter the market would lower U.S. import reliance for copper from 74% to 41%, while dropping lithium import reliance from 100% to 51%. 

Expanding domestic mining no doubt carries local environmental tradeoffs. However, the U.S. has some of the most stringent and comprehensive mining safeguards in the world. Thus, foregoing development domestically is likely to push mining toward foreign countries with inferior environmental, safety, and child labor protections. It is therefore critical that domestic permitting decisions account for the unintended effects of denying permits, not merely the direct consequences of approving a project. 

Permitting and siting constraints on energy infrastructure also impose major costs and foregone abatement. These entry barriers largely exist as environmental safeguards, yet almost always inhibit projects with a superior emissions profile to the legacy resources they replace. In fact, 90% of planned and in progress energy projects on the federal dashboard were clean energy related as of July 2023. In 2023, the ratio of clean energy to fossil projects requiring an environmental impact statement to comply with the National Environmental Policy Act (NEPA) was 2:1 for the Department of Energy and nearly 4:1 for the Bureau of Land Management. A 2025 study estimated that bringing down permitting timelines from 60 months to 24 months would reduce 13% of U.S. electric power emissions. 

Permitting has proven to be a litmus test for the progressive environmental movement, as the movement bifurcates between anti-development symbolists and pragmatic pro-abundance progressives. While a minority of mainstream environmental groups have become amenable to permitting reform, such as The Nature Conservancy and Audubon Society, the core of progressive environmental groups have not. Instead, new progressive groups like Clean Tomorrow and the Institute for Progress filled the pro-abundance void alongside traditional market-friendly progressive groups like the Progressive Policy Institute. This progressive subset has helped influence moderate Democrats to support permitting reform in a collaborative way with conservatives. 

Permitting reform has long been championed by conservatives for its economic benefits, with climate considerations typically a secondary-at-best rationale. Yet permitting reform has become a priority for the newer climate-minded conservative movement. However, permitting has also proven to be a differentiator between conservatives and right-wing populists. The latter engages in forms of government intervention that sometimes contradict conservative principles. For example, the Trump administration enacted an offshore wind energy pause that followed the same problematic blueprint as the Biden administration’s LNG pause. This elevates the importance of technology-neutral permitting reforms with an emphasis on permitting permanence safeguards. 

In recent years, a coalition of Republicans, centrist Democrats, and clean energy and abundance advocates have pressed for reform to NEPA. A broad suite of federal permitting reforms with bipartisan appeal was identified in a 2024 report by the Bipartisan Policy Center. Bipartisan alignment led to the passage of the Fiscal Responsibility Act of 2023 into law and the Senate passage of the Energy Permitting Reform Act of 2024 (EPRA). Although a 2025 Supreme Court decision suggests executive actions alone may substantially reduce NEPA obstacles, plenty of NEPA and other federal statutory reforms remain of high value and hold considerable bipartisan potential

The positions of leading progressive, conservative, and centrist thought leadership organizations highlight alignment on various federal permitting and siting reforms. These include statutory changes to NEPA, the Endangered Species Act, the Clean Water Act, the Clean Air Act and the National Historic Preservation Act. Substantive alignment includes reforms that reduce litigation risk (e.g., judicial review reform), limit executive power to stop project approvals and undermine permitting permanence, maintain technology neutrality, strengthen federal backstop siting authority for interstate infrastructure, codify the Seven County decision, and streamline agency practices while ensuring sufficient state capacity. 

Despite considerable positive momentum at the federal level, the greatest permitting and siting barriers generally reside at the state and local levels and trending sharply in a more restrictive direction. Wind and solar ordinances have grown by over 1,500% since the late 2000s. Oil and gas pipelines and power plants face mounting permitting and siting restrictions in progressive states, which not only raise costs but do not necessarily reduce emissions. In fact, the New England Independent System Operator said that a lack of natural gas infrastructure in the region has raised prices and pollution by forcing reliance on higher-cost resources like oil-fired power plants. The only major power generation resource with a less restrictive trend is nuclear, as six states recently modified or repealed nuclear moratoria to ease siting. 

Motivation for opposing energy infrastructure permitting has included the well-known “not in my backyard” concerns, such as noise, construction disruptions, or land use conflicts. Interestingly, much opposition appears to come from perception, as much as substantiated negative effects. Relatedly, permitting resistance rationales increasingly appear to result from ideological opposition to particular energy sources. Finally, much opposition and most litigation of energy projects comes from non-governmental organizations, not the land owners directly affected. Altogether, this underscores the importance of permitting and siting reform that improves the quality of information to agencies and parties, ties decisionmaking to specific harms not speculative claims, limits standing to affected parties, and creates appeals processes for landowners to challenge obstructive local government laws and decisions. A key tension to overcome is that technology-agnostic legislation has been more likely to advance in states with one or more Republican chamber, yet environmental advocates resist “all-of-the-above” reforms.

Policies that reduce permitting and siting burdens are class I: they boost economic output and are increasingly key to emissions reductions. Permitting and siting policies that are restrictive on fossil development are not particularly effective at reducing emissions and often add considerable cost, granted costs vary widely depending on the nature of the policies and implementation. Effective fossil restrictions can range from class II to class IV policy, while ineffective ones actually increase emissions. The political economy of permitting and siting must overcome the lobby of entrenched suppliers, who seek to maintain competitive moats. An ironic example was incumbent asset owners funding environmental groups to oppose transmission infrastructure in the Northeast that would import emissions-free hydropower. 

Electric Regulation

The power industry is at the forefront of energy cost concerns and decarbonization objectives. In the early 2020s, electric rates have risen most in Democratic states. These concerns reoriented progressives towards cost containment, even at the expense of climate objectives. In the 2024 election, cost of living concerns propelled Republicans to widespread victories as President Trump vowed to halve electricity prices. A year later, voter concerns over rising electricity rates in Georgia, New Jersey, and Virginia boosted Democrats in gubernatorial and public service commission (PSC) elections. 

At the same time, electricity is arguably the most important sector for climate abatement given its emissions share and the indirect effects of electrifying other sectors, namely transportation and manufacturing. Ample pathways exist to reduce electric costs and emissions simultaneously, primarily by fixing profound government failure embedded in legacy regulation. Electric industrial organization shapes economic and climate outcomes, with market liberalization an advantage for both. 

Electric regulation falls into two basic formats. The first is cost-of-service (CoS) regulation, where the role of government is to substitute for the role of competition in overseeing a monopoly utility. The alternative is for regulation to facilitate competition by using the “visible hand” of market rules to enable the “invisible hand” to go to work. 

CoS regulation historically applied to power generation, though about a third of states enacted restructuring to introduce competition into power generation and retail services, in response to rising rates and the recognition that these are not natural monopoly services. Nearly all transmission and distribution (T&D) historically and today remains under CoS regulation. Importantly, CoS regulation motivates a utility to expand the regulated rate base upon which it earns a state-approved return. Generally, the main sources of cost discipline problems in the power industry stem from its CoS regulation segments: transmission, distribution, and the portion of generation that remains on CoS rates. 

Generally, restructured jurisdictions see greater innovation and downward pressure on the supply portion of customer bills. The economic performance of restructuring is highly sensitive to the quality of implementation. This includes the quality of wholesale energy price formation and capacity market design. It also includes various elements of retail choice implementation. They have also seen improved governance, whereas CoS utilities are prone to cronyism and corruption given the inherent incentives of their business model. Competitive wholesale and retail power markets hold cost and emissions advantages through several mechanisms:

Electric cost increases are multifaceted, prompting many misdiagnoses that blame markets for non-market problems. Utilities have begun pushing campaigns in restructured states to revert back to CoS regulation, whereas the growing consumer segment – namely data centers and industrials – are organizing campaigns to expand consumer choice. Independent economic assessments warn against a return to CoS regulation, and instead encourage state regulators to implement restructuring better. This includes better market design, consumer exposure to wholesale prices, and effective coordination with transmission investment. 

T&D costs, generally, are the core driver of electricity cost pressures nationwide. Over the last two decades, utility capital spending on distribution has increased 2.5 times while nearly tripling for transmission. This reflects profound flaws in CoS regulation of T&D, resulting in overinvestment in inefficient infrastructure and underinvestment in cost-effective infrastructure. This projects to worsen, given T&D expansion needed to meet grid reliability criteria as a result of aging infrastructure, turnover in the generation fleet, and load growth. 

T&D expansion is also central to abatement. Even partial transmission reforms can reduce carbon dioxide emissions by hundreds of million of tons per year. This explains why progressives have made reforms that expand transmission a top priority. This needs to be reconciled with the cost concerns of consumers and conservatives to result in durable policy. Consumers and conservatives have a budding transmission agenda rooted in upgrading the existing system, removing barriers to voluntary transmission development, using sound economic practices for mandatorily planned transmission, streamlined permitting and siting, and improved governance. A particularly promising frontier is reforms to enhance the existing system, given the expedience of their cost relief and consistency with a Trump administration directive

Recent federal regulatory actions have demonstrated bipartisan willingness to improve transmission policy and the related issue of interconnection, which has emerged as a major cost and emissions issue. In 2023, FERC passed Order 2023 on a bipartisan basis to reduce barriers to new power plants trying to interconnect to regional transmission systems. Subsequent reforms were motivated by a coalition of consumer groups and the center-right R Street Institute. In 2024, FERC passed Order 1920-A on a bipartisan basis to improve economic practices in regional transmission development. EPRA, a gamechanger for interregional transmission development, passed the Senate with bipartisan support in 2024. 

Demand growth has sparked reliability concerns over tight supply margins and recently put upward pressure on wholesale market prices. However, states with the greatest price decreases typically had increasing demand from 2019 to 2024 (Figure 3). This shows the importance of infrastructure utilization on electric rate pressures, as many areas had supply slack previously. The past may not be prologue. Emerging conditions show supply-constrained scenarios where marginal generation and T&D costs increase steeply to meet new load increase. The Energy Information Administration observes steady retail price increases and projects further rises to exceed inflation. 

Figure 3. Relationship Between Load Growth and Changes in Retail Electricity Prices (2019-2024)

Source: Wiser et al., 2025.

In an era of resurgent power demand growth, the states poised to keep rates and emissions down have wholesale competition, retail competition, efficient generator interconnection processes, economical T&D practices, and low permitting and siting barriers. The only state that reasonably accomplishes all of these is Texas, which is experiencing the most commercial interest among competitive suppliers and growing power consumers. Texas has experienced industry-leading clean energy investment and earned the distinction of Newsweek’s “greenest state” in 2024. 

All aforementioned electric reforms are considered class I policy. Despite cost-reduction appeal, power industry reforms have proven challenging for two reasons. First, reforms are highly technical in nature and face limited state capacity among legislative advisors and technocratic agencies, namely PSCs and FERC. For example, recent FERC and PSC activities reveal that these entities do not have the bandwidth or expertise to properly implement existing transmission policy, much less reform it. Secondly, reforms face strong resistance from incumbent utilities who hold concentrated interests in the status quo, creating a strong lobbying incentive. By contrast, the beneficiaries of reform, especially consumers, are dispersed interests that do not organize as effectively as a lobbying force. 

Although the Texas electricity experiment and associated federal power market reforms under President George W. Bush is a conservative legacy, most restructured states are progressive. This reflects significant bipartisan historic appeal. However, traditional conservatives have sometimes conflated pro-utility positions as the “pro-business” position, while it is unclear whether right-wing populist influences will catalyze pro-market reforms by challenging the status quo or retrench monopoly utility interests based on technocratic market skepticism (e.g., Project 2025). CoS utilities also commonly oppose cost-effective T&D reform, especially vertically-integrated utilities, which is consistent with their financial incentives to expand rate base and deter lower-cost imports from third parties. Nonetheless, the political economy of bipartisan electric regulatory reform remains promising, given voters’ prioritization of reducing electricity costs. 

Public Spending 

Government spending occurs through direct spending outlays or indirect spending through tax expenditures. Spending takes the form of industrial policy or innovation policy. The economics literature is historically critical of industrial policy, while positive literature on industrial policy usually conflates it with innovation policy. A distinguishing element is that innovation policy selects policy instruments suited to specific market failures, namely the positive externalities of knowledge spillovers and learning-by-doing. These generally apply to research and development (R&D) and early stage technologies, including those in demonstration stage and infant industries that have not achieved economies of scale. 

Predictably, progressives have been consistent backers of robust innovation policy, while conservatives typically scrutinize such expenses closely. Although differences of opinion exist on optimal funding levels, historically conservatives and progressives have agreed on a role for the government in supporting R&D. There is also a history of good governance agreement, such as a joint project between the Center for American Progress and the Heritage Foundation in 2013 on improving the performance of the national lab system. Improving outcomes-based Department of Energy program performance may have broad appeal, including better performance metrics, stronger linkages to private sector needs, and program reevaluation to determine government investment phase-out. Improvements to state capacity are paramount in this regard. 

Conservatives are often critical of public spending on infant industry, where government failure can outweigh market failure. For example, policymakers often struggle to identify when to end industry support, while industry engages in rent-maintenance behavior even after it has achieved maturity. Historic evidence indicates that direct subsidies and tax exemptions for infant energy industry continue well after the targeted technologies mature. Conservative and progressive scholars have historically framed the merits over subsidies for infant industry as a debate over government versus market failure. 

Since innovation policy targets non-climate market failures (e.g., knowledge spillovers) it may have a high static abatement cost. However, it is an inexpensive abatement policy when accounting for dynamic effects, because of induced innovation and learning-by-doing. Importantly, innovation policy holds massive climate benefits, because achieving abatement cost parity between clean and emitting resources is central to clean technology market adoption. Efficient R&D policy can be classified as class I policy, because the upfront cost of the policy is outweighed by long-term cost savings. Demonstration and infant industry support falls into class II-III range, depending on its implementation, and often exhibits substantial durability. 

In recent years, climate-minded conservatives have shown stronger inclinations of public spending for innovation policy. However, there is a stark difference between conservatives and right-wing populism on innovation policy. Conservatives note that the adverse consequences of Department of Government Efficiency’s “gutted, ineffective government” approach to the Department of Energy is inconsistent with limited, effective government practice. The economic self-interest benefits of innovation policy may induce a course-correction with MAGA, which has not deliberately targeted innovation policy insomuch as sacrificing it amid a rash government downsizing exercise. 

In contrast to innovation policy, industrial policy aims to directly promote a given industry, typically using mature technology, with interventions untethered to any underlying market failure (e.g., negative emissions externality). This generally takes the form of public spending on mature industries. For decades, traditional conservatives and climate-minded conservative scholars have been critical of green industrial policy for carrying high costs with modest emissions reductions. 

The most relevant case study in climate industrial policy versus innovation policy is the Inflation Reduction Act (IRA) of 2022. IRA represented the “largest federal response to climate change to date.” It consisted mostly of subsidies for mature technologies, especially wind, solar, and electric vehicles (EVs). It also contained subsidies for infant industry. IRA was passed exclusively by Democrats, with Republicans voicing concerns over its cost. Republicans then passed the One Big Beautiful Big Act (OBBBA) in 2025, which phased-out subsidies for mature technologies, but generally retained those for infant industry. This underscores the political durability of innovation policy and the fragility of industrial policy. 

A broader debrief on IRA and OBBBA reveals:

The takeaway from IRA and OBBBA is that subsidies for mature technologies are high cost, likely to erode social welfare, and not politically durable. Efficient public spending for RD&D, however, enhances social welfare and falls in the Overton Window due to its value for economic self-interest. Late-stage infant industry is at the fringe of the Overton Window. It is the area where conservative and progressive scholars have historically had contrasting views on whether market failure outweighs government failure, yet political outcomes have largely supported infant industry. 

Generally, the literature finds strong evidence of opportunity cost neglect in public policy, which “creates artificially high demand for public spending.” The IRA was a case-in-point. Meanwhile, the opportunity cost of public spending is rapidly rising given the dire fiscal trajectory of the United States. In 2025, moderate experts emphasized a pivot away from unsustainable and ineffective “Green New Deal thinking” for clean technology subsidies in favor of an innovation-driven strategy. 

Takeaways 

This analysis finds chronic flaws of cost considerations in ex ante policy analysis. Many medium and high-cost policies have passed without any robust accounting of costs at all (e.g., IRA, fuel bans). Interventions with cost-benefit analysis have had a tendency to underestimate costs (e.g., regulation). These flaws contribute to public misconception and play into political economy dynamics that tend to incent policies with hidden costs over those with transparent ones. 

High-cost policies have typically only been enacted by progressive governments and have come under greater scrutiny as energy costs escalate. This calls their social welfare effects and durability into question. It has cast climate action in the public eye as requiring deep economic sacrifice. 

Conservatives have been hesitant to engage on climate policy outright, largely over dire economic tradeoff perceptions. Such concerns have instigated a conservative backlash to climate policy, including to policies that are compatible with U.S. economic interests. This has been exacerbated by right-wing populism, which often strays from limited government conservatism in pursuit of cultural identity objectives. For example, in a 2024 piece promoting energy affordability, the Heritage Foundation correctly attributed cost increases to renewable energy mandates, but incorrectly presumed that a broad shift towards renewable energy and away from fossil fuels would always increase costs. 

High abatement cost policies not only risk reducing aggregate social welfare, but they create distributional concerns. Policies that raise energy costs tend to be regressive. This has challenged the social justice narrative of progressives, prompting a rethink by progressive leaders to take a “cost-first approach to [the] clean energy transition.” Although subsidies are a common response to lower burdens on low-income households, the most popular green subsidies pursued have exacerbated distributional concerns. Specifically, renewables subsidies favored by progressives have been challenged by conservatives as “green corporate welfare.” Progressives have also faced criticism for EV tax credits for disproportionately benefiting wealthy households. 

Encouragingly, negative- and low-cost policies comprise a rising share of the abatement curve. The Overton Window for pursuing such policies has grown remarkably for “abundance progressives” and conventional conservatives. However, populist subsets within both movements challenge the potential for political alignment. Enacting negative-cost policies also faces the collection active problem of dispersed beneficiaries versus a concentrated incumbent supplier lobby favoring the status quo. Mobilizing consumer and taxpayer groups is an underappreciated strategy to enact these policies. 

Abatement cost categoryOverton Window StrengthSocial Welfare EffectPolicy Examples
Class I. negativeStrongVery positiveLiberalized permitting and siting; Liberalized power markets; Streamlined generator interconnection; Economical transmission expansion; Efficient R&D policy
Class II. lowSubstantialPositiveEfficient GHG transparency; Efficient demonstration policy; Modest RPS; Backstop cap-and-trade; Modest command-and-control regulation
Class III. mediumInconsistentGlobally positive, often domestically negativeModerate RPS; Robust cap-and-trade; Moderate command-and-control regulation; Infant industry support
Class IV. highPoorOften negativeStringent RPS; Stringent command-and-control regulation; Onerous GHG transparency; Mature technology subsidies

This analysis is far from comprehensive. A notable omission from this paper is transportation policy, the largest GHG sector in the U.S. A scan of the transportation literature underscores major abatement potential for negative and low-cost policies, including reducing government barriers to efficient heavy-duty transportation like railways, shipping, and heavier trucking. Further, the electrification of transportation requires extensive fixes to government failure, such as liberalizing markets to enable competitive charging infrastructure, which lowers costs. The merits of innovation and GHG transparency policy, previously discussed, also appear to hold promise for transportation applications such as aviation fuel. The transportation sector has also been the target of GHG regulation, mostly in progressive states, which warrants close assessment of costs. For example, one study identified a vast abatement cost range for fuel standards ($60-$2,272/tonne). 

A shortcoming of this analysis is that it only characterizes costs by their efficiency (i.e., $/ton). Political decisions are highly sensitive to aggregate cost and its visibility to the public, which our taxonomy does not characterize. It is possible that efficient, transparent, and higher aggregate cost policies (e.g., C&T) fare less favorably in some political settings than inefficient, opaque, and sometimes lower aggregate cost policies (e.g., RPS solar carveouts). 

Despite the limitations of this analysis, the sample of policies evaluated is sufficient to support the thesis. That is, a retooled climate policy agenda that prioritizes cost considerations should elevate social welfare and achieve greater abatement by selecting more durable policies. 

Conclusion 

Abatement costs have huge bearing on whether climate policies benefit society, their likelihood of passage, and whether they prove politically durable. Most abatement need not come from dedicated climate policy, per se, but rather sound economic policy that carries deep climate co-benefits. Chronic disregard for cost considerations has led to an overselection of high-cost policies and underpursuit of low- and negative-cost policies. This has undermined policy durability and exacerbated political polarization over climate change abatement. 

This paper finds extensive abatement opportunities within negative-cost policies. These largely constitute fixes to government failure and include permitting, siting, and power regulation reforms. This analysis also finds considerable low-cost policies that are compatible with U.S. economic self-interests. These policies primarily spur voluntary private sector abatement through efficient innovation policy and GHG transparency. 

We offer three sets of recommendations moving forward for influencers of the climate policy agenda:

  1. Focus on results. Climate change abatement is a function of global GHG concentrations. Too much attention pursues symbolic objectives, like preventing fossil fuel infrastructure. This tends to undermine abatement goals and impose high costs.
  2. Emphasize cost considerations in policy agenda setting, formulation, and maintenance. Negative abatement cost policies should take top priority, with an emphasis on mobilizing beneficiaries. Robust cost-benefit analyses should precede all cost-additive policies and be reconducted periodically to guide policy adjustments.
  3. Prioritize quality state capacity. The net benefits of abatement policies are sensitive to government capacity and performance. Public management is in great jeopardy in an era of institutional decay. Negative-cost policies are often highly technocratic and require sufficient staffing expertise and accountable management at public institutions like DOE, FERC, PSCs, and permitting and siting agencies. 

In an era of energy affordability precedence, a reset climate agenda should anchor itself in good policy basics. That is, a sober-minded return to results-driven, net-benefits prioritized policy. This should improve the durability of climate policy and ensure it enhances social welfare. Executing reforms well requires a recommitment to improving the quality of institutions as much as the policy itself. 

Bureaucracy as Social Hope: An Argument for Renewing the Administrative State

I. Why Isn’t Government Working?

The “administrative state” is an unlovely bureaucratic term for a bureaucracy that has grown increasingly unloved: the network of government agencies that implements and enforces laws. In the United States, critiques of the administrative state abound. The nativist right pushes back against a purportedly dangerously powerful “deep state” while the left sees a meek state beholden to big corporations and incumbent interests. Libertarians bemoan bureaucratic inefficiency and hubris, while the newer “Abundance” movement describes a state choking on its own procedures. Though different narrators are telling different stories, they are arriving at the same moral that the core mechanics of the world’s greatest democracy just don’t work. From there, it is not too big a jump towards casting a nihilistic eye on democracy itself, and towards reckless deconstruction.

Erosion of faith in government is manifesting acutely in the climate movement. The Inflation Reduction Act (IRA) was by far the largest climate investment the world has ever seen. Biden-era regulations were intended to further spur rapid decarbonization of the world’s largest economy. And yet. If we had a dollar for every word written about the administrative state’s failure to effectively implement the IRA, we’d be shaving truffles on our eggs. Meanwhile, the current administration’s regulatory rollbacks are the latest play in what seems to be a never-ending game of political football around federal climate policy. If the administrative state can’t effectively address challenges it deems an “existential threat”, one might ask, what good is it?

Our answer: the American administrative state, since its modern creation out of the New Deal and the post-WWII order, has proven that it can do great things. Vast bureaucracies now successfully care for the elderly, the sick, the poor. Many communicable diseases are close to elimination. The administrative state, by directing tremendous amounts of public and private effort, built the power grid, the internet, the interstates. Nor are our glory days behind us: The American administrative state played the primary role in ending the Covid pandemic, saving millions of lives.

Even when it comes to climate change, the record simply isn’t one of failure. American bureaucratic regulation, including from the Environmental Protection Agency (EPA) and from the states, and from air pollution standards for cars to carbon trading systems for entire economies, combined with significant incentive investments, has brought us technological transformation. Renewable energy is the dominant source of new energy globally. Electric cars now comprise 20% of sales globally and will replace internal combustion by mid-century. Whole industries are decarbonizing and emissions will shortly be beginning to fall. For all the many dubiously legal rollbacks of the second Trump administration, the United States continues to decarbonize.

And so, we argue, it’s hardly time to abandon the administrative state. But it is time to reinvent it. Our core supposition is that the sense of malaise and stasis characterizing current views of the bureaucracy has a substantial amount to do with mismatches between tools that produced current successes and the next set of tools that will be required to sustain and grow them. In the same way that nations might have a first or a second Republic, with constitutional reforms intervening, it is likely time for the next American administrative state.

Again, grounding in climate illustrates the point. Significant administrative pushes have commercialized the technologies needed to address the climate crisis and substantially pushed them into use. The Inflation Reduction Act supercharged this process in the United States, while China – which has sought to dominate clean energy supply chains via its own administrative state and invested accordingly – did so globally. As we enter 2026, there is no real doubt that many clean technologies are available, profitable, and better than fossil technologies. Every nation, including those that do not substantially produce these clean technologies, benefits from their adoption

But we are now running into a “mid-transition” moment, in which rival technologies, energy systems, and the economic and political systems on which they depend, are in collision. Consider electric vehicles (EVs). It is one thing to call EVs into being by imposing traditional “supply-side” regulations on manufacturers. It is quite another, as gasoline demand begins to sharply decline, to manage knock-on consequences for the entirety of the fossil economy, from refineries to pipelines to gas stations – much less the local and state budgets and jobs that the fossil economy underpins. Though regulatory strategies can be designed to address these economy-wide consequences, we won’t get there by running the same plays harder and faster. We’ve got to seriously interrogate where the most significant bottlenecks are, who is equipped to address them, and what tools they have or will need to deploy.

Now add two further wrinkles. 

First, procedural tangles that were created for all the right reasons, but that now hamper problem solving. In the environmental space, laws and processes were put in place decades ago to carefully scrutinize the impacts of potentially polluting infrastructure and factories. These measures have, in many instances, succeeded in preventing harm and protecting communities. But they are also indisputably making it harder to rapidly, massively scale up green technologies. This “Greens’ Dilemma” playing itself out in debates over the national environmental regulatory regime nationally is, in fact, a specific manifestation of broader dynamics. Incumbent systems, and those invested in them, do not particularly like to change. Indeed, the American administrative state generally was designed to move deliberately and deliberatively, including multiple veto points to avoid capture by industry or any particular interests. A worthy goal, but distinct from moving with speed towards the public good. When system inertia makes it too easy to grind the gears, the result, unsurprisingly, is painfully slow progress on building new public infrastructure and harnessing new innovations. If we zoom back in on the environmental space with these broader dynamics in mind, the particular obstacle inhibiting climate progress emerges with startling clarity: a system that was designed to produce cleaner technologies within the fossil economy is simply not set up to replace the fossil economy.

Second, the fact that capacity of the government to navigate these challenging dynamics has been sapped. There are multiple drivers of eroding government capacity. At the state and local level, years of corrosive narrative attacks translated into unwise revenue restrictions that in turn made forward-looking capacity investments all but impossible. At the federal level, a variety of policies and misaligned incentives have led to stasis and overreliance on contractors as opposed to internal expertise. At all levels, well-intentioned good-government and environmental reforms have imposed layers of analytic requirements that, while initially successful, ultimately contributed to “kludgeocracy”, while a highly litigious American society has, unsurprisingly, produced a highly risk-averse American government. Make no mistake: U.S. government at all levels has, and has always had, countless dedicated and talented civil servants who find ways to accomplish great things. But generally, this government is riddled with systems and structures that make it ever-more difficult for even the most effective individual to quickly and creatively deliver, especially when armed with aging legal and regulatory tools. 

The upshot? We need not lose faith in the administrative state itself; we would do better to view it as having functioned with its hands tied tighter and tighter. But we are now starting, particularly in the climate and energy space, to hit real limits.

These aren’t issues we can resolve with one-off budget bills or Band-Aid workarounds. The vision, and the fixes, will have to run much deeper. The second Trump administration’s massive federal shake-ups, if nothing else, have opened the field for reconstruction. There is an opening – and, we believe, transpartisan appetite – for a bold, positive vision of a government that is attuned and responsive to the needs of American people and communities, that people can trust to deliver things like cheap, reliable energy; affordable, abundant housing; and fast, safe transportation even as it adeptly manages complex, higher-order challenges like climate change.

To launch its new Center for Regulatory Ingenuity, the Federation of American Scientists (FAS) engaged an ideologically diverse cohort of experts on government capacity and climate to describe how we might realize that vision. This cohort was asked to consider how to advance a paradigm of “regulatory ingenuity” – that is, creativity and cleverness in service of societal objectives alongside basic democratic values – in one or both of the following ways:

  1. Ingenuity in regulatory design. Looking across the entire regulatory lifecycle – from underlying statutory construction, to rule development, to implementation and (ideally) iterative improvement – to seriously examine how existing regulatory systems in the United States can be improved, and identify where fresh thinking is needed.
  2. Ingenuity in regulatory application. Considering how regulations can be coupled with other tools (e.g., innovative market designs, financial instruments, contracting mechanisms, etc.) to achieve societal goals quickly, equitably, and durably.

“Bureaucracy as Social Hope: An Argument for Renewing the Administrative State” is a collection of essays capturing the cohort’s insights. Essay authors envision new alignments of regulatory and financial power, new tools to enable multiple levels of government to move fast, to address distributional impacts, to channel capital at scale, to finally build infrastructure, and to, most fundamentally, break free from stasis. They are, eminently, not cynics. Though clear-eyed about the failings they seek to remedy, they understand that these failings are largely the shadows cast by past success. 

While these essays are grounded in climate policy, they address cross-cutting themes. They use climate as a lens to evaluate where government is and isn’t working. Indeed, the authors’ commentary with respect to government performance on climate challenges is easily extrapolated to other domains.

In writing, the authors revive an older American tradition of a vital administrative state in service of an equally vital and egalitarian democracy. Our nation used to regularly reorganize its government, and the Congress used to legislate regularly on hard problems. The recent reality of agencies working within aging statutes and confined by outdated structures was not the dominant face of government during the creative ferment of the New Deal or the Great Society or, indeed, of the Reconstruction itself. It is, in fact, deeply odd that we still largely live with the same administrative agencies and processes that we had in the 1970s.

So what should – what could – a modernized administrative state look like? The authors together imagine: 

A government that can deliver. It doesn’t need to take a generation to build a railroad, a power grid, or new housing. We can trade a veto-ocracy for the older progressive tradition of governance that rapidly responds to public needs – and secures us the service and infrastructure we need.

A government that can make decisions. The rules of the economy need to stop changing with every election and every major lawsuit. Re-empowering Congress to make big choices, and administrative agencies to deliver without constant swerves, will allow us to stop re-reading the manual and actually play the game.

A government for a modern economy. The future should be innovative and egalitarian. Realizing this future requires the de-risking and direction-setting powers of government to invite bold bets and spur investment, and the distributive powers of government to ensure that benefits are appropriately shared.

A government that listens and responds. We can replace the prevailing procedural labyrinth with a government that asks focused questions on the key issues, acknowledges and addresses real disagreements, and then moves forward thoughtfully yet confidently. That would involve, in part, staffing government fully and organizing it well – reversing decades of attacks on public servants and putting people to work on the right problems.

A government that works on all levels. Federal, state, and local governments each have unique levers and comparative strengths when it comes to serving our communities and society. A modern administrative state should recognize these, and emphasize frameworks that enable them to work well together.

Americans have spent too long living within a slowly failing version of last century’s government. The resulting civic frustration has largely fueled further attacks on government, spiraling us downwards. But an upwards spiral is possible too, in which structural reforms yield a government better equipped to chip away at tough problems in ways that improve daily life and rebuild civic satisfaction.  Because while the “administrative state” as a term is about as wonky as you can get, a renewed administrative state in practice is just common sense.

II. New Approaches for Climate and Democracy

As you will discover as you read, the authors do not all agree on every particular; our goal in inviting this collection was good-faith debate, not artificial consensus. Yet a survey of the collection’s component essays reveals common themes.

For instance, the authors generally agree that economic and industrial policy will be central to the next chapter of climate action. Incumbents still heavily invested in mature fossil-linked technologies and supply chains, as well as non-transparent pricing and other barriers to market entry, badly constrain the transition to competitive clean technologies in many sectors. And where promising technologies are still earlier-stage (e.g., as is the case for nuclear, geothermal, or green hydrogen), there are compelling arguments for government involvement to help establish U.S. dominance. Pollution regulators do not typically, though, control fiscal and monetary tools that can (i) correct market distortions, (ii) manage the very considerable distributive impacts of a shift away from fossil fuels that profoundly impacts industries and jobs across regions, and (iii) support a comprehensive strategy for incubating high-potential domestic industries. Nor are these regulators, with little ability to affect trade policy, well positioned to act within the complex geopolitical context of a partial energy transition. To put it frankly, it doesn’t make a lot of sense to run a massive societal transition with substantial global implications through the EPA. But in the absence of purpose-built institutions and statutes, that’s pretty much what we’ve been doing – with politically and legally unstable results.

This problem is compounded by the fact that the Supreme Court’s skepticism of sweeping regulatory mandates based on old statutes has left the administrative state with ever fewer tools to respond to economic transition needs. Regulations are regularly reversed, and the ongoing duel between litigators and executive branch agencies increasingly looks like an unproductive stalemate. The authors generally chart a path towards a reinvigorated role for Congress to settle disputes, for agencies to act more inventively, and for disputes to move away from the courts and back into democratic processes. 

The authors further point out that regulatory efforts alone are not sufficient to drive the infrastructure shifts needed to make those efforts last, or to buffer their up-front costs. Big infrastructure projects – including vastly growing the clean power grid, electrifying freight, expanding and upgrading transit systems, building new housing, and dismantling legacy, non-economic fuel systems – are central to regulatory success and stability, as well as to addressing an ongoing cost-of-living crisis and boosting national economic competitiveness. Infrastructure, the authors emphasize, isn’t an afterthought – it’s a core enabler of regulatory policy. Unfortunately, the now decades-long trench warfare over climate and other regulations has been accompanied by attacks on the state itself, stripping away administrative and delivery capacity along with the ability of many subnational governments to collect sufficient revenue to fund even basic services, let alone flagship infrastructure projects. The authors vehemently agree that there is much room to trim bureaucratic bloat, streamline process, and sensibly reorganize agencies. At the same time, they observe that a government that is smaller doesn’t always work better; not infrequently, the opposite is true. The authors therefore favor approaches that fit government agencies with the staffing, structures, and revenue they need to deliver on outcomes. Sometimes, those approaches are tweaks. Other times, they’re radical reforms.

II.A Towards a Shared Affirmative Vision

So how do we tackle these challenges – how do we start the upwards spiral in which effective delivery reinforces faith in democratic governance that in turn unlocks more delivery capacity? The authors develop a shared affirmative vision, one that broadly looks like this:

The collective vision is one in which the administrative state starts moving again, returning to the ethic of ongoing systematic revision that once characterized it. Rather than relying on the best ideas and institutions of a half-century ago, we would work towards structures more aligned with current needs – and do so in a way that reaffirms the creativity and vigor that has long powered America’s economy.

II.B Laying Out The Pieces

Each of the essays in this collection lays out particular pieces of the shared vision. Broadly: the collection starts by proposing fundamentally different ways to think about environmental and administrative law, seeing its task as delivering a clean economy at scale, rather than simply cutting pollution, and doing so with stable rules derived in democratically legitimate and procedurally stable ways. It then explores how these legal and regulatory structures could help guide the far larger private sector into configuration with public goals, removing barriers to competition that have insulated stubborn fossil incumbents and creating opportunities to move capital at scale into communities in ways that build a fairer and cleaner economy. From there, wrestling with the dislocations that nonetheless will accompany these changes, the collection describes ways to link participatory democracy with economic change, sharpening the focus of the regulatory state and its engagement with the public. The collection concludes by bringing these issues home, describing how state and local governments can deliver today – and presenting a “policy primer” of innovative ideas that can start moving from ambition to action this year. Below, we discuss each of these pieces in turn.

Jordan Diamond and co-authors at the Environmental Law Institute lays the foundation for this collection with a careful look at what environmental law can do, what it can’t, and how we might rebuild its powerful tools for modern challenges. They argue that the pollution statutes of the Nixon era, crucial though they are to addressing environmental contamination from fossil fuels, are at best limited tools for a whole-of-economy shift away from fossil fuels entirely. Viewing that new challenge as fundamentally one about driving economic innovation and infrastructure growth, they chart out areas ripe for legal development. At the same time, they explain why the next round of environmental progress is more likely to be led by infrastructure and economic agencies than pollution regulators – emphasizing that while pollution regulation will remain critical, we should stop asking pollution regulators to drive a national economic transition with aging environmental statutes alone. Their vision is of treating the energy transition like the economic problem it is, with tools to match. They would expand state capacity, bringing to bear a much wider set of agencies and approaches, and therefore also expand what we think of as “environmental law” to respond to the modern era.

Still working within legal reforms, Kirti Datla takes a close look at the profound challenges modern administrative law poses to the regulatory state. The Supreme Court’s new doctrines, she writes, are making it very difficult for environmental agencies, and regulators generally, to address new problems (and often even old problems) through existing statutes. And they suggest that the Court will impose its deregulatory views on even new statutes. These ever-changing rules strain government capacity, make it difficult for subnational governments and investors to plan a path forward, and prevent progress on policy goals. After acknowledging the need for new regulatory approaches, judicial system reforms, and new statutes, Datla focuses on how Congress can and should engage in the constitutional politics of asserting its role within our federal system, both to constrain the Court and to build its own capacity to address pressing problems like climate.

These two foundational essays, then, help us see the challenge before us. They explain why a kludged-together administrative state running off old statutes and aging structures keeps sputtering to a halt – and start to focus us on an expanded field of play, well beyond re-litigating the environmental policy disputes that have seesawed between the Obama, Biden, and Trump administrations. It is not that the regulatory state is inevitably a “hollow hope” for the shared challenges of climate, democracy, and fair economic growth – but that it has been asked to tackle enormous challenges without a shared theory of action or structures to match. Shifting the economy from its incumbent fossil foundations to a new electrified base, while managing the many linked distributive impacts of that shift under growing climate pressure, simply requires more than pollution regulations or one-time tax policy. If politics is the “slow boring of hard boards,” it helps to have the right tools to drill deep.

But, as Devin Hartman and Neel Brown posit, new tools need not – for durability’s sake, must not – be expensive tools. Nor will another round of mandates succeed without thinking seriously about how to address accompanying costs. Hartman and Brown argue that traditionally conservative lenses that look skeptically at giant fiscal policies and regulatory mandates do, in fact, bring to bear a canny understanding of the interests of incumbent economic system actors. The authors point out that the stuttering progress of the transition to clean technologies comes from the ways in which fossil fuels are deeply intertwined with the interests of powerful economic incumbents, and of existing government. And, having traced the root of the challenge, they conclude that opening these incumbents up to competitive disruption through appropriate reforms will be a potent strategy. For instance, Hartman and Brown contend that the repeal of the IRA may appropriately shift focus of subsidies from mature energy technologies (including clean technologies like solar as well as most fossil technologies) towards earlier-stage technologies (e.g., geothermal). From permitting reform to addressing market problems that deny Americans access to affordable EVs, Hartman and Brown set out a creative array of solutions that, with government backing, can push forward a modern economy at low, or even negative, cost.

Sometimes aligning with these arguments, sometimes complicating them, and always making them concrete, Beth Bafford describes how a focused set of government investments can further shift the economy onto new foundations by using public capital to leverage far greater private investments in the fundamental infrastructure American needs. She outlines how to wed together Hartman and Brown’s pro-competitive policies with the expanded and stable regulatory mission state described by Diamond and Datla. Regulators have often operated on a model in which government grants help underwrite regulatory mandates. Bafford instead starts to outline a structure in which government investments – including simple and accessible loan products – instead help shift the economy towards profitable and self-reinforcing clean new industries. Her model is one in which capital access builds entire businesses that can electrify and modernize core sectors of the economy, from the freight sector to the power grid. Regulations can and should still set the direction of travel in this model – but its engine is broadly shared profitability. Rather than forcing innovation into new channels with politically-exposed regulatory mandates, agencies in Bafford’s model would help convene and channel the economy towards new system states entirely, with regulations conceived as tools operating in concert with economic investments and planning to help crowd in capital to communities across the country.

Nicole Steele explores the role of capital in renewing the administrative state from a different lens. Steele observes that mission-aligned financial institutions (including values-based banks, green banks, CDFIs, and other purpose-driven funds) are increasingly functioning as essential partners in the administrative state’s delivery capacity. Sitting at the intersection of public policy and private markets, these institutions translate legislative and regulatory goals into bankable, scalable projects by absorbing early risk, standardizing structures, and aggregating demand. In practice, this has included mission-aligned banks working alongside state and local governments to deploy catalytic capital – whether as first-loss reserves, flexible operating support, balance-sheet backstops, or credit enhancement – in support of simple, repeatable lending platforms (such as residential and commercial PACE financing) that allow households, small businesses, and local governments to access clean energy, resilience, and efficiency upgrades without relying on bespoke grants or one-off subsidies.

By deploying catalytic capital, Steele continues, these intermediaries unlock funding that would not otherwise reach underserved markets or emerging project types. Critically, investment into mission-driven institutions does not substitute for private capital; it enables it. Strengthening the balance sheets and operating capacity of green banks and CDFIs allows them to originate, warehouse, and scale lending products that meet market standards, crowding in institutional capital while maintaining public purpose. In a period of federal uncertainty and shifting incentive regimes, expanding the availability of catalytic capital will require a diversified approach: drawing on state and local public balance sheets, philanthropy and quasi-philanthropic capital, and mission-aligned institutional investors willing to deploy flexible funds through intermediaries rather than relying on centralized federal programs alone.

Nana Ayensu builds on Bafford and Steele’s insights. As Ayensu points out, we have a transformational economic opportunity to deploy modern, clean energy infrastructure at scale. 

Federal and subnational governments have a real chance to catalyze significant capital deployment of mature and emerging clean energy technologies that are primed for growth – both directly and via investment into infrastructure. Widespread social benefits are available if governments use their authorities to assemble the puzzle pieces needed to create more actionable investment environments. Ayensu describes the state’s ability to do so: it can synchronize intra- and intergovernmental policy execution, build high-value foundational infrastructure to provide project stakeholders with the information they need, develop deeper risk and reward sharing partnerships with the private sector, and create the market forces that close align with economic & societal benefits. Making this type of consistent, efficient multi-pronged effort will be critical to garner the scale of investment needed to expand and update critical energy infrastructure systems and deliver lasting value to communities and industries across the nation.

Ali Zaidi makes the case for bringing this ingenuity to the arena of critical minerals and materials, what he calls “the atomic foundation for reindustrialization and any shot at lasting prosperity and security.” Zaidi draws moral inspiration from America’s post-oil shock response, a crisis moment that authored a broad policy playbook with a spine for experimentation. New laws and regulatory authorities, institutions and infrastructure, and moonshot moves on research…that moment, he writes, gave life to policy to solve a problem. It was “policy with helmets and pads”: playing offense, not defense. Zaidi urges bringing that same positioning to minerals and materials security policy today. In his conception, that policy should entail three pillars – production, partnership, and a drive for increasing productivity – that together support the shared goal of strengthening American competitiveness.

The third pillar is where Zaidi spends the most time. The oil shock of the 1970s propelled domestic standards designed to achieve greater fuel economy and appliance efficiency. Such standards have been weighed down over time by clunky test procedures, multi-year rulemakings, and heavy hand of government auditors. Zaidi proposes a framework for materials productivity that adopts the same solutions-oriented spirit of the 1970s energy policy environment, but is characterized by standards that bind instead of burden. To unlock minerals and materials security, Zaidi writes, “we should replace red tape with rubber bands, just enough structure to allow us to slingshot forward new production, processing, and partnerships — and increased productivity.” Zaidi details a framework that is digital, dynamic, and data-driven: where enforcement is algorithmic, not bureaucratic; and the work is easily federated and easily staffed. This light, flexible scaffolding will accelerate capital formation and technological innovation. 

Indeed, Jennifer DeCesaro, Jennifer Pahlka and Hannah Safford add, we’d do well to apply a similar mindset to planning: a standard feature, and all-too-common bug, of climate policy. Environmental statutes are rife with planning mandates, from Clean Air Act implementation plans to natural hazard mitigation plans required by the Stafford Act to all things NEPA. Look beyond pure statute and become quickly overwhelmed: climate-related plans are mandated by public utilities commissions, developed by task forces, produced as a precondition for grant eligibility, and on and on. Though plans are easy to ask for, they’re often expensive and time-consuming to develop; moreover, lack of coordination among overlapping plans can lead to duplication or even contradictions. DeCesaro, Pahlka, and Safford therefore ask a simple question: “What are all these plans getting us?” They argue that climate policy too often falls into the trap of “planning primacy”, where planning becomes the end goal instead of an intermediate step towards progress. Put another way, it’s rarely the case that a plan is developed and its directions are then followed to the letter. Rather, the process of thinking through scenarios, understanding constraints, building mental models, and developing relationships with other plan stakeholders is what really matters. DeCesaro, Pahlka, and Safford draw from both the climate space and other domains to illustrate how treating plans as compasses, not maps, can improve efficiency and outcomes. Because to quote Eisenhower: “In preparing for battle I have always found that plans are useless, but planning is indispensable.”

Shifting incumbent systems requires not just low-cost solutions, access to capital, and competent, efficient regulatory capacity. It also requires ways to reconcile or resolve competing interests. Our current regulatory system has gotten bogged down with ineffective procedural approaches to dispute resolution, yielding a litigation-driven collection of process fouls and veto points that no one really likes. Our next set of authors observes that improving this system requires more than a simplistic call for deregulation. Moreover, they argue, the solution can’t be to ignore stakeholder input altogether – that runs the risk of policies that are poorly informed, technically unfeasible, and brittle given lack of buy-in by the businesses, communities, and people they serve. Rather, our authors propose a range of reforms to help administrative bodies effectively collect input from stakeholders, weigh hard trade-offs and disputes, and move forward fairly, but expeditiously: thereby using democratically legitimate decisionmaking to strengthen industrial policy.

The first of these authors is James Goodwin, who argues for an “agonistic” view of the regulatory state in which regulators must actively surface and invite input on genuine disputes. Goodwin proposes replacing today’s box-checking engagement exercises and voluminous stacks of public comments with a focused participation process. In this process, administrators would at each state of a project or regulation, identify the core disputes and disagreements that need resolving, and draw in input specifically on these issues. By targeting engagement – and avoiding consensus – in this way, administrators would be able to efficiently advance dialogues with the public that are both quicker and inherently more resistant to status quo bias.

Loren DeJonge Schulman and Shaibya Dalal pick up on this theme. They argue that treating public engagement as a strategic asset, not a box-checking exercise, leads to smarter, more durable policies that reflect real community needs and build trust in government. Participation is not a distraction from governing – it is how government governs well. They argue that the failure of many engagement processes is not that agencies invite too much input, but that they do so too late, too perfunctorily, and in ways that exclude the communities most affected by public decisions. When participation is treated as compliance rather than governance, it fuels distrust, invites procedural obstruction, and produces policies that are fragile and contested. By reflecting the full range of transactional public participation and relational community engagement options, and by applying clear principles (purposeful design, mutual respect, transparency, accessibility, and iteration) agencies can use engagement to surface lived experience, anticipate conflict, improve policy design, and strengthen the legitimacy and durability of their actions. Done well, participation becomes a form of ingenuity that reduces conflict, eases implementation, and reinforces democratic accountability.

Of course, inviting public participation only works when people are interested in participating. Angela Barranco and Kristi Kimball argue that the American climate movement faces a critical public engagement crisis that threatens to undermine decades of progress on clean energy adoption – and explore how advocates can speak to the public to build interest and support for the shifts that government seeks to deliver and legitimize. Despite nearly 70% of Americans expressing concern about climate change, Barranco and Kimball contend that current advocacy strategies fail to tee up paths for politically durable dispute resolution (and eventual support) because those strategies are unduly rooted in fear-based messaging and technical data. Barranco and Kimball make the case for a shift towards a public conversation that approaches Americans as consumers (who must adopt new technologies and cannot be persuaded through regulatory mandates alone) making lifestyle choices rather than political constituents to be mobilized. Drawing on proven strategies from consumer marketing, behavioral psychology, and community-based social marketing research, Barranco and Kimball observe tremendous opportunities for (i) reframing climate engagement around consumer choice, and (ii) leveraging the unprecedented infrastructure investments necessitated by extreme weather impacts to build lasting climate coalitions while simultaneously strengthening democratic institutions and community trust. 

Ultimately, these changes and debates occur not in the abstract, and not just in Washington, DC. State and local governments are the theaters in which economic and democratic change play out, mediating federal policy and global geopolitical shifts in the lives of real people. Thus both the climate crisis and the economic transition are inherently “polycentric”. Subnational governments have therefore always been at the core of climate and regulatory policy. It is these governments that are most able to set democratically responsive visions for clean economic growth, climate resilience, and infrastructural change that will concretely change lives. If our future is to be shaped more by ordinary people than by technocrats, it is these governments that must have the capacity and creativity to act.

Louise Bedsworth provides a prospectus for local action. As she argues, a rebuilt regulatory state has to position state and local governments for creative action and response. These governments, she writes, are more than subsidiary partners, and more than replacements for federal regulators during deregulatory periods (important though those roles can be). State and local governments are innovators and leaders in their own right. The task is not just to provide ancillary community benefits from federal grants, or to mandate particular state plans, but for state and local democracies to be engines of national and even global change. By expanding their own capacity, aligning capital and economic plans to build regional prosperity and resilience, and engaging in and leveraging networks across geographies, nationally and globally, subnational governments can reshape climate action and the regulatory state.  

Indeed, because of the enormous creativity of subnational governments, and the huge opportunities created by the private sector, in response to past regulatory guidance and government investments, we do not need to wait for a new federal administration to start putting solutions into place. We have already identified a broad network of ideas and actors that can start building these ideas in reality, this year – in a policy primer for that foundational work. The primer, crowd-sourced from leaders across the field, highlights a starting list of policies well within the reach of subnational actors, and focusing strongly on economic and industrial policy interventions that can durably advance clean economic systems while managing real trade-offs with savvy deployment of government capacity. It is a practical point of engagement, allowing for the ideas articulated in these papers to be tested now, not after further electoral cycles.  

III. Conclusion

We do not need more stories of American decline. Critics on the left, center and right have already told us that our government doesn’t work. Americans feel underserved, underrepresented, and ripped off. But Americans also know how to do better. We are always rebuilding our democracy; it is time to do it again.

Collectively, our authors have sketched out the beginnings of an administrative state for this era – grounded in the pressing challenge of climate change and its increasingly evident impacts on American lives. This state would enable governments across scales, and stakeholders across sectors, to realize the vision of a nation where:

This sort of “mission state” – a government that sets a clear vision and brings together public and private sectors to execute it – is actually an old American tradition. What else were the New Deal, the Apollo Program, Operation Warp Speed, and the creation of the internet than missions of this sort? Indeed, when it comes to newer challenges like climate change, we have started, a bit haphazardly, to reach for a mission again. The Inflation Reduction Act’s billions in investments, and the Biden administration’s complementary regulations, were an attempt to bring together the public and private sectors around the vision of a clean and prosperous economy, with good-paying jobs and dominance in the technologies increasingly certain to underpin the 21st-century global order. Yet because of obstacles identified above, that mission was…while not entirely a failure, hardly a resounding success.

But the mission remains necessary. America must not remain mired halfway between the old economy and the new, exposed to climate shocks, with a government unable to satisfyingly respond. Clean technologies are advanced enough that retrenchment and retreat to fossil is a doomed strategy; similarly, we’ve seen that taking a chainsaw to government leaves our whole nation bleeding.

The only logical approach is to tap into the creative, determined spirit that is the essence of American identity. Think of the millions of Americans who, in the midst of the Great Depression, spread out to every part of this country to rebuild it. We still live among the lovely parks, trails, and civic architecture called into being by the Civilian Conservation Corps; our power grid was brought to us by rural electrification, the Federal Power Act, and the Tennessee Valley Authority. We know what it looks like when Americans believe in government and the government is worthy of that belief. 

It looks, to start, like a conversation. As CRI launches, in partnership with a broad network of partners and contributors, we invite debate, dissent, and experimentation. One of our goals is to bring together people and perspectives that are often in tension to identify where there are some threads of common sentiment – and how we can productively move forward despite the tension that remains. We will be gathering thinkers, exchanging ideas, and mapping out pilot projects with growing momentum across the months and years to come, working not just to theorize around solutions but to bring them to life. To adapt the truism about trees: the best time to renew our administrative state was ten years ago. The second-best time is today.

From Ambition to Action: A Policy Primer

How public leaders can boost climate progress, restore trust in government, and make lives better…starting today.

People across the nation are clamoring for solutions that make their lives better. And they’re frustrated by the responses they’re getting. Confronting massive inequality, Americans watch leaders finger-point on the price of eggs; yearning for security and stability, Americans watch politics lurch between radically different agendas. No wonder, then, that public trust in the U.S. government has been in the basement for decades. Americans are facing both everyday challenges and a deep, growing sense of discontent. But they’ve lost faith in government to resolve either.

That sense of stuckness doesn’t need to last. But change means focusing on outcomes, eliminating bottlenecks, and prioritizing delivery. It means embracing tools and talent that better connect big ideas to real-world results. It means resisting the temptation to chase buzzwords – from “abundance” to “dominance” to “affordability” – and focusing on the method over the message.

One place to start is with the shift to clean technologies, a place where there is powerful momentum. One in five cars globally are already electric, while heat pumps have outsold gas furnaces in the United States for four consecutive years. The vast bulk of new energy generation is renewable: globally, clean energy investment is now double the amount spent on all fossil fuels combined.

While the transition to clean technologies is unstoppably underway, it is also in its messy middle. Rival technologies and energy systems (and the economic and political systems on which they depend) are now colliding. Many counties and cities depend heavily on fossil fuel revenues; meanwhile, job quality and union density in the renewable energy industry leaves much to be desired. And core parts of our infrastructure – from the power grid to gas stations – are complex and expensive to convert to serve renewable and clean industries, even if those industries will ultimately boost affordability.

Put simply, remaining globally competitive on critical clean technologies requires far more than pointing out that individual electric cars and rooftop solar panels might produce consumer savings. But we also can’t afford to cede the space. Internationally, clean energy spending is booming. China’s clean energy industry by itself would be the world’s eighth largest economy if it were a country, and Europe’s investments have almost doubled over the last decade. Even if current estimates hold, fossil fuel demand will peak mid-century. If the U.S. continues to hold fast to existing policies until then, we’ll be 30 years behind the rest of the world’s energy economy, and it will be impossible to catch up. The bottom line? Good climate policy is good economic policy, and vice versa.

Good climate policy is also good politics. Climate-induced disasters are increasing by the day, and are impacting both safety and affordability. Americans generally see climate and energy policy as important as immigration. Most Americans, on both sides of the political aisle, support environmental regulations and clean energy development. Many say electricity costs are just as stressful as grocery bills, and they worry about higher insurance rates and local market problems. And they’re tired of entrenched corporate interests calling the shots.

What’s needed are creative, clever strategies that boost climate progress while delivering everyday benefits. The Federation of American Scientists (FAS), as part of our new Center for Regulatory Ingenuity (CRI), developed this primer to put a bunch of those strategies in one place. Our goal is for this primer to serve as a resource for public-sector leaders at the federal, state, and local levels who believe that government can do great things for our communities and our planet.

The strategies herein are open-sourced from a diverse network of contributors and collaborators, and are shovel-ready. Many of these strategies are already being deployed across the country. They’re designed to make energy, housing, and transportation better this year.

Indeed, we hope that readers see the actionability of these solutions not just as a benefit, but as an imperative. Americans aren’t looking for the magic message or the magic moment. They’re looking to government for leadership. Every day that government is paralyzed by gridlock, indecisiveness, or fear of failure is another day that it fails to realize the potential of the good that it can achieve, and that public trust in government further erodes. That’s a downwards spiral that we’ve got to stop.

Finally, we emphasize that this primer is a starting place. We’re at the precipice of a new era for climate and energy policy in the United States, and the strategies that will form the backbone of this new era – by adeptly fitting together government capacity, private innovation, and democratic decision-making – are just starting to come into view. As they do, CRI and its partners are committed to working hand-in-glove with bold doers and thinkers, sharpening our collective focus, and realizing the vision of a more responsive government, more optimistic society, and more resilient nation.


Getting to Work: Opportunities in Energy, Transportation, and Housing

Solving problems requires framing them accurately. As observed above, the truth is that clean technologies are increasingly dominant, and that the United States is rapidly falling behind. A response predicated on propping up the 20th-century fossil economy is doomed to fail. So too, we’ve learned, is a response that relies on the U.S. federal government to muscle the clean-technology transition forward single-handedly.

Fortunately, because so many clean technologies are now commercial, the opportunity for leadership on multiple levels, and multiple fronts, has never been more available – or more crucial. For example, simple economics will do much to propel wind, solar, and battery technologies if needed supporting infrastructure is in place and clean technologies are given the chance to compete on fair terms. Policymakers can worry less about expending political capital on expensive public subsidies for clean power, and focus instead on transpartisan policies enabling broad market access, streamlined interconnection processes, and swift power grid build-out. In the transportation sector, policies that ensure transparent vehicle pricing or increase market competition for legacy car companies may matter more than traditional regulatory standards.

This new reality also makes thoughtful economic, industrial, and social policy indispensable. The advent of new technology often comes with the promise of broad societal benefits, but making good on that promise is hardly a guarantee (witness the emergent effects of AI). It’s incumbent on government to ensure that the clean-technology transition reduces inequality and improves quality of life at scale, and that the transition doesn’t abandon workers in fossil-dependent regions and industries to the vagaries of the market. And it’s government, working across multiple scales, that can assess regional comparative advantages and figure out where the United States can still compete – as well as where it must innovate and diversify.

Government leaders, in short, have the unique ability to see all the way from the kitchen table to the commanding heights of the global economy, and to mediate between them.

We illustrate below the types of approaches that entrepreneurial policymakers can adopt to secure U.S. leadership on critical clean technologies, in ways that benefit all Americans. We focus on energy, transportation, and housing, which are collectively the largest sources of climate pollution and key elements of household and regional economies nationwide. The list below is not exhaustive, or comprehensive, but exemplary – a demonstration that there are real opportunities for change.

Unleashing Modern Energy

There’s massive untapped potential for clean energy in the United States. To realize it, we’ve got to make room for new energy to move.

This isn’t primarily a project of continued renewable energy subsidies: there’s good evidence that renewable energy can compete on a level playing field when it’s given the chance. Rather, the project is one of clearing away barriers to financing and building projects, fixing broken market incentives that favor existing players over new entrants and distort energy pricing, and accelerating construction of major grid infrastructure. 

This project looks a lot like the successful national push towards rural electrification that the United States led a century ago: a serious effort that aligns private and public investments to rethink how and where we deliver energy. In executing this effort, we must grapple with the full set of barriers to building – not just cost and permitting, but also thorny local siting processes, misaligned incentives for electric utilities, and lengthy wait times to connect projects to the grid. 

Today, of course, we’ve also got to reckon with the growing threats of cyberattacks and extreme weather to energy infrastructure, as well as the unprecedented, unpredictable energy demands of hyperscalers. Such challenges can only be managed by a mix of climate stabilization policies, economic risk-sharing strategies, and investments in infrastructure modernization. That’s not a cheap or easy proposition, but it is one with major lasting benefits.

At the consumer level, building more clean energy can help stabilize residential electricity prices (though many other factors also contribute to electricity prices and price volatility). More broadly, clean energy could unlock billions of dollars in potential efficiencies, such as by reducing costs associated with redundant natural gas transmission infrastructure. Expanding clean energy, especially distributed energy resources and virtual power plants, can also upgrade outdated grid infrastructure and secure it against cyber threats. But getting to these benefits requires government leadership.

Energy ingenuity could look like:

Making Transportation Cleaner and Cheaper

People just want to get to where they’re going safely, efficiently, and affordably. Yet despite record levels of federal transportation spending, traffic, emissions, and pedestrian deaths keep rising. And as the Cato Institute observes, “U.S. policy contributes to an inefficient and costly transportation system that reduces workers’ time and incomes.”

We can do better. This starts by recognizing that in much of the United States, cars are both essential and increasingly unaffordable. There’s opportunity for a suite of policies that break market strangleholds while expanding consumer choice, moving us away from involuntary dependence on expensive cars and towards a future with transit that people actually want to ride – as well as affordable yet excellent, and often zero-emission, personal transportation. Core federal clean transportation programs have supported $4.6 billion in domestic investments and created at least 14,000 jobs in manufacturing, demonstrating the large-scale benefits of such programs and the economic case for continued federal support. Because the tools involved are nearly all within the authorities of state and local governments, and independent of ongoing federal regulatory disputes, they also can go into effect quickly.

On the vehicle side, this agenda includes governmental efforts to address legacy company market power. Incentives and protections for domestic manufacturing are sensible so long as they boost local economies, support American workers, and drive American innovation – but they’ve got to be coupled with policies ensuring price transparency and other oversight mechanisms, to ensure that benefits flow to consumers rather than pad company profits. Unlocking a more affordable, competitive, zero-emission vehicle (ZEV) market – with more options for buyers at lower prices – is also a key political foundation to the next round of vehicle regulatory mandates, by creating a larger constituency for further progress.

On the system side, states and cities can significantly build up regional budgets with savvy transportation investments. The data are clear that transit and walkability investments bring more valuable housing into cities and connect people with jobs, raising economic activity and raising property values. Investments in electric-vehicle charging similarly boost local business revenue and spurs economic vitality. Communities thrive when their members have transportation options (that all work well), instead of being steered towards legacy vehicle technology and wrestling with creaky 20th-century infrastructure.

On the vehicle side, transportation ingenuity could look like:

On the system side, transportation ingenuity could look like:

Building Affordable, Abundant Housing

Housing shouldn’t be a luxury: it’s a prerequisite for a stable, healthy life. Yet Americans – facing prohibitively high (and increasing) rental costs as well as unrealistic down payments and pathways to ownership – are struggling to meet this basic need. And with extreme weather on the rise, renters and owners alike are facing concerns about physical safety and skyrocketing insurance as well as price hurdles. The emissions that the housing sector produces only worsen these problems.

Delivering more affordable, resilient, and climate-friendly housing means making it easier to build housing of all shapes and sizes; tailoring solutions to rural communities, urban communities, and different geographies generally; and striking a better balance between development for housing and development for other purposes. These strategies need to be paired with deep investments in government capacity to facilitate permitting and approval of new housing construction, as well as to facilitate more complex projects – like retrofits, infill development, and office-to-residential conversion – at scale. Also critical is reimagining community and stakeholder engagement on housing questions, aiming to maintain trust, democratic process, and local buy-in without overvaluing the perspectives of existing homeowners, developers, or any other particular constituency. at the expense of the rest of the community.

Housing ingenuity could look like:


Making Solutions Stick: The Cross-Cutting Benefits of Government Capacity, Pro-Democracy Design, and Innovative Financing

Each of the policy solutions above offers a way to boost climate progress while delivering everyday benefits across energy, transportation, and/or housing. But how do we make those solutions stick? With trust in government at historic lows, public-sector leaders must quickly follow ambition with action, investing in both ideas and the building blocks that turn ideas into reality. Below, we outline how public leaders can use three of these core building blocks – government capacity, financing, and pro-democracy design – to get on the scoreboard early…and stay there for the long term.

Government Capacity

Government capacity refers to the ability of government to get things done, whether through efficient processes, effective talent, or fit-for-purpose tools. Americans are frustrated by the slow pace of government, but they don’t want the functions that keep them safe and supported dismantled: they want them improved. Accomplishing this requires more than new programs or new funding streams or new inventions. It requires leaders to seriously (and systematically – not via a “wrecking ball” approach) consider which government functions are working, which need to be overhauled, and which should be retired.

Rebuilding government capacity is inseparable from strengthening democracy itself. Both of these goals are wholly intertwined with climate progress. When government acts competently, transparently, and in partnership across levels, it restores public faith that collective action is possible and worthwhile. When it can’t, even well-designed policies stall under the weight of fragmented authority, procedural burden, risk aversion, and institutional inertia. Treating government capacity as a core investment is therefore much more than administrative housekeeping. It’s a prerequisite for durable climate progress.

To boost government capacity, public leaders can:

Finance

Capital is a powerful tool for policymakers and others working in the public interest to shape the forward course of the economy in a fair and effective way. Very often, the capital needed to achieve major societal goals comes from a blend of sources; this is certainly true with respect to climate action and facilitating the transition to clean technologies.

States, cities, banks, community-driven financial institutions (CDFIs), impact investors, and philanthropies have long worked in partnership with the federal government on clean-technology projects – and are stepping up in a new way now that federal support for such projects has been scaled back. These entities are developing bond-backed financing, joint procurement schemes, and revolving loan funds – not just to fill gaps, but to reimagine what the clean technology economy can look like.

In the near term, opportunities for subnational investments are ripe because the now partially paused boom in potential firms and projects generated by recent U.S. industrial policy has generated a rich set of already underwritten, due-diligenced projects for re-investment. In the longer term, the success of redesigned regulatory approaches will almost certainly depend on creating profitable firms that can carry forward the clean-technology transition. Public sector leaders can assume an entrepreneurial role in ensuring these new entities, to the degree they benefit from public support, advance the public interest: connecting economic growth to shared prosperity.

To be sure, subnational actors generally cannot fund at the scale of the federal government. But they can have a truly catalytic impact on financing availability and capital flows nevertheless. 

To boost finance, public leaders can:

Public Participation

Public participation in climate action is often treated as a procedural requirement to be satisfied late in the process, rather than as a core function of governing well. The result is familiar: performative town halls, notice-and-comment processes that invite frustration rather than insight, and transparency tools that are easily weaponized by organized interests. This dynamic erodes trust, slows projects, and fuels the perception that government is both unresponsive and incapable. Yet participation, when designed well and tailored to the moment, is not an obstacle to effective governance:  it is how government discovers what will work, where friction will arise, and how to build solutions that communities will defend rather than resist. Treating participation as a functional component of state capacity means seeing it as an input to smarter design, faster implementation, and more durable outcomes.

Upgrading how government listens and engages is vital to upgrading how government delivers. When residents see clearly how their input shapes decisions, participation builds legitimacy and reduces the incentives for obstruction and litigation later in the process. When agencies invest in the infrastructure, tools, roles, and expectations that make participation meaningful, they create a feedback loop that improves policy design and strengthens democratic trust at the same time. And when climate leaders meet the public where they are in terms of how they experience and make consumer choices in the the climate transition, we can strengthen the connective tissue between government action and public trust.The recommendations below are aimed at helping public leaders move beyond compliance-driven engagement toward participation models that are relational, deliberative, and integrated into the machinery of experience and delivery. This approach ensures that climate solutions are not only technically sound, but socially resilient and democratically grounded. These take time, but we encourage recognition that they enable enormous time, risk and failure saved. 

To boost public participation, public leaders can:


About The Primer

Ambition to Action was authored by Angela Barranco, Zoë Brouns, Megan Husted, Kristi Kimball, Arjun Krishnaswami, Hannah Safford, Loren Schulman, Craig Segall, and Addy Smith.

Many individuals contributed ideas and input to this primer. The authors are grateful to the following individuals and organizations for their time, expertise, and constructive feedback: Patrick Bigger, Laurel Blatchford, Heather Clark, Ted Fertik, Danielle Gagne, Kate Gordon, Betony Jones, Nuin-Tara Key, Alex McDonough, Sara Meyers, Shara Mohtadi, Saharnaz Mirzazad, Beth Osborne, Alexis Pelosi, Sam Ricketts, Bridget Sanderson, Lotte Schlegel, Igor Tregub, Louise White, and Clinton Britt. The content of this primer does not necessarily reflect the views of individuals or organizations acknowledged. Any errors are the sole fault of the authors.

A National AI Laboratory to Support the Administration’s AI Agenda at the Department of Commerce

The United States faces intensifying international competition in Artificial Intelligence (AI). The Trump administration’s AI Action Plan places the Department of Commerce at the center of its agenda to strengthen international standards-setting, protect intellectual property, enforce export controls, and ensure the reliability of advanced AI systems. Yet no existing federal institution combines the flexibility, scale, and technical depth needed to fully support these functions.

To deliver on this agenda, Commerce should expand their AI capability by sponsoring a new Federally Funded Research and Development Center (FFRDC), the National AI Laboratory (NAIL). NAIL would:

  1. Advance the science of AI,
  2. Ensure that the United States leads in international AI standards and promotes the trusted adoption of U.S. AI products abroad, 
  3. Identify and mitigate AI security risks, 
  4. Protect U.S. technologies through effective export controls. 

While the National Institute of Standards and Technology’s (NIST’s) Center for AI Standards and Innovation (CAISI) within Commerce provides a base of expertise to advance these goals, a dedicated FFRDC offers Commerce the scale, flexibility, and talent recruitment necessary to deliver on this broader commercial and strategic agenda. Together with complementary efforts to strengthen CAISI and expand public-private partnerships, NAIL would serve as the backbone of a more capable AI ecosystem within Commerce. By aligning with Commerce’s broader mission, NAIL will give the Administration a powerful tool to advance exports, protect American leadership, and counter foreign competition.

Challenge

AI’s breakneck pace is having a real-world impact. The Trump administration has made clear that widespread adoption of AI, backed by strong export promotion and international standards leadership, is essential for maintaining America’s position as the world’s technology leader. The Department of Commerce sits at the center of this agenda: advancing AI trade, developing international standards, advancing the science of AI, promoting exports, and ensuring effective export controls on critical technology.

Even as companies and countries race to adopt AI, the U.S. lacks the capacity to fully characterize the behavior and risks of AI systems and ensure leadership across the AI stack. This gap has direct consequences for Commerce’s core missions. First, advances in the science of AI are necessary to ensure that AI systems are sufficiently robust and well understood to be widely adopted at home and abroad. Second, without trusted methods for evaluating AI, the U.S. cannot credibly lead the development of international standards, an area where allies are seeking American leadership and where adversaries are pushing their own approaches. Third, this deep understanding of AI models is needed to identify and mitigate security concerns present in both foreign and domestic models. Fourth, deep technical expertise within the federal government is required to properly create and enforce export controls, ensuring that sensitive AI technologies and underlying hardware are not misused abroad. A deep bench of subject matter experts in AI models and infrastructure is increasingly critical to these efforts.

As AI systems become more capable, the lack of predictable and understandable behavior risks further eroding public trust in AI and inhibiting beneficial AI adoption. Jailbreaking attacks, in which carefully crafted prompts get around Large Language Model (LLM) guardrails, can produce unexpected behavior of models. For example, jailbreaking can prime LLMs for use in cyberattacks, which can cause significant economic harms, or cause them to leak personal information, or produce toxic content, causing legal liability and reputational harm to companies using these models. As companies deploy custom models built on top of LLMs they need to know that medical assistants will not produce harmful recommendations, or that agentic AI systems will not misspend personal funds.  Addressing these concerns is an extremely challenging technical problem that requires more effective and consistent methods of evaluating and predicting model performance. 

The ability to effectively characterize these models is central to the Trump administration’s AI Action Plan, which highlights widespread adoption of AI as a major policy priority, while also recognizing that the government has a key role to play in managing emerging national security threats. The AI Action Plan gives Commerce a central role in addressing these concerns; nearly two fifths of the plan’s recommendations involve Commerce. Commerce’s responsibilities include:

For a full list of AI Action Plan recommendations involving Commerce, see Appendix A. 

While Commerce has an impressive track record in AI, including through its work at the National Institute of Standards and Technology and CAISI, it will face immense institutional challenges in delivering on the ambitions of the AI Action Plan, which require broad and deep expertise. Like other U.S. government entities, Commerce operates under federal hiring rules that make it difficult to quickly recruit and retain top technical talent. The government also struggles to match AI industry pay scales. For example, fresh PhDs joining AI companies frequently receive total compensation that is twice the cap set for the overwhelming majority of government workers, and senior researchers earn five times this cap or more. In some cases, top researchers may also hold equity in private companies, further complicating their employment by the government. Without a new institutional mechanism designed to attract and deploy world-class expertise, Commerce will struggle to execute on the ambitious goals of the AI Action Plan.

Opportunity

To deliver on the scope of the AI Action Plan, the Department of Commerce needs a dedicated institution with the resources, flexibility, and talent pipeline that existing structures cannot provide. A Federally Funded Research and Development Center (FFRDC) offers this capacity. Unlike traditional government offices, an FFRDC can recruit competitively from the same pools as industry, while remaining mission-driven and independent of commercial interests.

At its core, a new FFRDC, the National AI Laboratory (NAIL), would provide the technical expertise Commerce needs to carry out its central responsibilities. Specifically, NAIL would:

  1. Advance the science of AI, including the measurement and evaluation of AI models.
  2. Develop the methods and benchmarks that underpin international standards and ensure U.S. companies remain the trusted source for global AI solutions.
  3. Identify and mitigate AI security risks, ensuring U.S. technologies are not exploited by adversaries.
  4. Provide the technical expertise needed to support export promotion, export controls, and international trade negotiations.

NAIL would equip Commerce with the authoritative science and engineering base it needs to advance America’s commercial and strategic AI leadership.

FFRDCs are unique in combining the flexibility of private organizations with the mission focus of federal agencies. Their long-term partnership with a sponsoring agency ensures alignment with government priorities, while their independent status allows them to provide objective analysis and rapid technical response. This hybrid structure is particularly well-suited to the fast-moving and security-relevant domain of frontier AI. More background information on FFRDCs can be found in Appendix C. 

The current talent landscape underscores the value of the FFRDC model. While industry salaries are high, many senior researchers are constrained by proprietary agendas and limited opportunities to pursue foundational, publishable work. To obtain greater freedom in their research, many top industry researchers have been seeking positions at universities, despite drastically lower salaries. An FFRDC focused on frontier model understanding, interpretability, and security offers a rare combination: freedom to pursue scientifically important problems, the ability to publish, and a mission anchored in national competitiveness and public service. This environment can attract researchers who would not join the civil service but are motivated by high-impact scientific and policy goals.

FFRDCs have repeatedly demonstrated their ability to deliver large-scale technical capability for federal sponsors. For example, NASA’s Jet Propulsion Laboratory has successfully built and landed multiple rovers on Mars, among many other achievements. The Departments of Energy and Defense have led much of the U.S.’ efforts in science and technology assisted by more than two dozen FFRDCs. Their track record shows that FFRDCs are uniquely suited to problems where neither academia nor industry is structured to meet federal needs—exactly the situation Commerce now faces in AI. Commerce currently supports one FFRDC, the fourth smallest. As advanced AI technology grows even more central to Commerce’s mission, it makes sense to add to this capacity.

Plan of Action

Recommendation 1. Establish an FFRDC to support the AI Mission at Commerce.  

Commerce should establish a new FFRDC within two years with a mission to begin important research and timely evaluations. Establishing a new FFRDC requires the sponsoring organization (Commerce in this case) to satisfy the criteria laid out in the Federal Acquisition Regulations (48 CFR 35.017-2) for creating a new FFRDC. Key requirements involve demonstrating needs that are not met by existing sources and that Commerce has sufficient expertise to evaluate the FFRDC. It will require consistent government support through appropriations, and Commerce must identify an appropriate organization to manage it. The rapid pace of AI development makes it an urgent priority to move forward as soon as possible. Recent FFRDCs have taken about 18 months to establish after initial announcement, a significant length of time in the AI field. Further details related to establishing an FFRDC can be found in Appendix D. 

Recommendation 2. NAIL should focus on topics that will advance the Administration’s AI Agenda, including recommendations given to Commerce in the AI Action Plan. 

These topics should include:

The proposed FFRDC should pursue activities that range from longer term, fundamental research to rapid response to new developments. Much of the knowledge needed to fulfill Commerce’s mandate lies at the heart of the most significant research questions in AI. This requires deep research, which is also important in attracting top tier talent. On a shorter time scale, it will be important for the FFRDC to provide regular evaluations of models as they progress, including the evaluation of security concerns in foreign models. NAIL can speed up these time critical security evaluations. It will also need to use these evaluations to help create and update procurement guidelines for federal agencies and assess the state of international AI competition. Finally, the FFRDC should be a source of expertise that can support Commerce in a wide range of topics such as export control and development of a workforce trained to appropriately take advantage of AI tools.

The FFRDC will also need to work closely with industry to develop standards for the evaluation of models, and support efforts to create international standards. For example, it may seek to facilitate an industry consensus on the evaluation of new models for security concerns. NIST is well known for similar efforts in many technical areas. Finally, the FFRDC should provide a capacity for rapid response to significant AI developments, including possible urgent security concerns.

Recommendation 3. Provide a sufficient budget to cover the necessary scale of work.

There are different possible scales at which NAIL might be created. It is important to note that creating industry scale models from scratch can cost tens or hundreds of millions of dollars. However, the task of evaluating models may be undertaken without this expense by experimenting on models that have already been trained. Much of the published work on model evaluation takes this course. Such evaluations and experiments still require access to significant computational resources, requiring millions of dollars a year in compute, depending on the size of the effort. The FFRDC’s research might also include experiments in which smaller models are built from scratch at a much smaller expense than what is required to train industry sized models.

We consider two alternatives as to the size and budget of the proposed FFRDC:

The figure in Appendix B lists all current FFRDCs and their annual budget in 2023. 

The budget of the FFRDC would need to cover several different costs:  

Recommendation 4. Make NAIL the Backbone of a Broader AI Ecosystem at Commerce.

While an FFRDC offers a unique combination of technical depth and recruiting flexibility, other institutional approaches could also expand Commerce’s AI expertise. One option is to expand the Center for AI Standards and Innovation (CAISI) within NIST, leveraging its standards and measurement mission, though it remains bound by federal hiring and funding rules that slow recruitment and limit pay competitiveness.

A separate proposal envisions a NIST Foundation—a congressionally authorized nonprofit akin to the CDC Foundation or the newly created Foundation for Energy Security and Innovation (FESI)—to mobilize philanthropic and private funding, convene stakeholders, and run fellowships supporting NIST’s mission. Such a foundation could strengthen public-private engagement but would not provide the sustained, large-scale technical capacity needed for Commerce’s AI responsibilities. 

Taken together, these models could form a complementary ecosystem: an expanded CAISI to coordinate standards and technical policy within government as well as providing oversight over the FFRDC; a NIST Foundation to channel flexible funding and external partnerships; and an FFRDC to serve as the enduring research and engineering backbone capable of executing large-scale technical work.

Conclusion

The Trump administration has set ambitious goals for advancing U.S. leadership in artificial intelligence, with the Department of Commerce at the center of this effort. Ensuring America’s continued leadership in AI requires technical expertise that existing institutions cannot provide at scale.

NAIL, a new Federally Funded Research and Development Center (FFRDC) offers Commerce the capacity to:

By sponsoring this FFRDC, Commerce can secure the talent, flexibility, and independence needed to deliver on the Administration’s commercial AI agenda. While CAISI provides the technical anchor within NIST, the FFRDC will enable Commerce to act at the necessary scale—ensuring the U.S. leads the world in AI innovation, standards, and exports.


Appendix A. References to the Department of Commerce in America’s AI Action Plan

Appendix B. FFRDC Budgets

Appendix C. Further Background on FFRDCs

FFRDCs in Practice: Successes and Pitfalls

FFRDCs have been supporting US government institutions since World War II. Overviews can be found here and here. In this appendix we briefly describe the functioning of FFRDCs and lessons that can be drawn for the current proposal. 

In a paper by the Institute for Defense Analyses (IDA) a panel of experts “expressed their belief that high-quality technical expertise and a trusting relationship between laboratory leaders and their sponsor agencies were important to the success of FFRDC laboratories” and felt that “The most effective customers and sponsors set only ‘the what’ (research objectives to be met) and allow the laboratories to determine ‘the how’ (specific research projects and procedures).”  Frequent personnel exchange programs between the FFRDC and its sponsor are also suggested. 

This and the experience of successful FFRDCs suggests that the proposed FFRDC be closely linked to relevant ongoing efforts in NIST, especially CAISI, with frequent exchanges of information and even personnel. At the same time, the proposed FFRDC should have the freedom to explore very challenging research questions that lie at the heart of its mission. 

As an example of the relationship between agencies and associated FFRDCs, the Jet Propulsion Laboratory supports many of NASA’s priorities, addressing long-term goals such as understanding how life emerged on earth, along with more immediate goals such as catalyzing economic growth and contributing to national security. Caltech manages operations of JPL. In general, NASA sets strategic goals, and JPL aligns its long-term quests with these goals. NASA may solicit proposals and JPL may compete to lead or participate in appropriate missions. JPL may also propose missions to NASA. As an example, in 2011 the National Academies recommended that NASA begin a mission to return samples from Mars. NASA decided to launch a new Mars rover mission. NASA then tasked JPL to build and manage operations of Perseverance, to accomplish this mission. 

On a less positive note, after concerns about the Department of Energy’s (DOE) management of FFRDCs, DOE shifted from a “transactional model to a systems-based approach” offering greater oversight, but also leading to concerns of loss of flexibility and micromanagement. Concerns have also previously been raised about the level of transparency and assessment of alternatives when agencies renew FFRDC contracts, as well as mission creep of existing FFRDCs 

Existing FFRDCs Relevant to AI Work

One of the most important criteria for establishing a new FFRDC is to demonstrate that this will fill a need that cannot be filled by existing entities. Many current FFRDCs are conducting work on AI, but this work does not adequately address the needs of Commerce, especially in light of the requirements of the AI Action Plan. For example, the Software Engineering Institute (SEI) run by CMU has deep expertise in the development of AI systems, along with software development and acquisition. However, their mission is to  “execute applied research to drive systemic transition of new capabilities for the DoD.”  Its AI work focuses on defense related capabilities, and not on the comprehensive evaluation of frontier models needed by NIST. 

NIST does support the National Cybersecurity FFRDC (NCF) operated by MITRE. This unit focuses on security needs, not on general model evaluation (although it will be important to clearly delineate the scopes of a new Commerce FFRDC and the NCF). Other FFRDCs, such as Los Alamos or Lawrence Berkeley have significant AI efforts aimed at using AI to enhance scientific discovery. Industry AI labs address some of the questions central to the proposed FFRDC, but it is important that the government have access to deep technical expertise that is able to act in the public interest.

Establishing a New FFRDC

A precedent on the establishment of FFRDCs comes from the Department of Homeland Security (DHS). Under Section 305 of the Homeland Security Act of 2002, DHS was authorized to establish one or more FFRDCs to provide independent technical analysis and systems engineering for critical homeland security missions. In April 2004, DHS created its first FFRDC, the Homeland Security Institute. Four years later, on April 3, 2008, it issued a notice of intent to establish a successor organization, the Homeland Security Systems Engineering and Development Institute (HSSEDI), and in 2009 selected the MITRE Corporation to operate it. HSSEDI—along with DHS’s other FFRDC, the Homeland Security Operational Analysis Center—is overseen by the Department’s FFRDC Program Management Office. This case illustrates both a procedural pathway (statutory authorization, public notice, operator selection) and the typical timeline for standing up such an entity: roughly 12–18 months from notice of intent to full operation. Similarly, the National Cybersecurity FFRDC had its first notice of intent filed April 22, 2013, with the final contract to operate the FFRDC awarded to MITRE on September 24, 2014, about 17 months later. 

Appendix D. Requirements for Establishing an FFRDC

Establishing a new FFRDC requires the sponsoring organization (Commerce in this case) to satisfy the criteria laid out in the Federal Acquisition Regulations (48 CFR 35.017-2) for creating a new FFRDC.

These include:

The establishment of an FFRDC must follow the notification process laid out in 48 CFR 5.205(b). The sponsoring agency must transmit at least three notices over a 90-day period to the GPE (Governmentwide point of entry) and the Federal Register, indicating the agency’s intention to sponsor an FFRDC, and its scope and nature, requesting comments. This plan must be reviewed by the Office of Federal Procurement Policy (OFPP) within the White House Office of Management and Budget (OMB). 

A sponsoring agreement (described in 48 CFR 35.017-1) must be generated by Commerce for the new FFRDC. This agreement is required by regulations (48 CFR 35.017-1(e)) to last for no more than five years, but may be renewed. It outlines conditions for awarding contracts and methods of ensuring independence and integrity of the FFRDC. FFRDCs initiate work at the request of federal entities, which would then be approved by appropriate units within DOC. The proposed FFRDC should align its mission closely with Commerce and NIST, obtaining contracts from these sponsoring agencies that will determine its priorities. The FFRDC would hire top tier researchers who can both execute this research and provide bottom-up identification of important new research topics.

The FAIR in Education Act: Federal coordination to support responsible AI deployment

Artificial Intelligence (AI) has the potential to enhance education systems by personalizing student learning, providing real-time feedback, and streamlining administrative tasks to optimize teachers’ time and focus on instruction. AI, like other classroom technologies, can expand access to educational resources and when used properly, support student engagement. However, no long-term studies on the impacts of generative AI on student learning outcomes and the cognitive abilities of early learners exist and issues around algorithm transparency and data security persist. To meet these challenges, we propose a Framework for AI Responsibility (FAIR) in Education Act, a Governor’s Conference, and the establishment of a national center to support AI deployment in K-12.

Challenge and Opportunity

No Guardrails or Guidance 

Successful integration of classroom technologies relies on the availability and stability of infrastructure as well as the readiness of the end users. The United States AI Action Plan and Advancing Artificial Intelligence Education for American Youth Executive Order aim to promote streamlined pathways for AI adoption. However, neither provide any practical implementation guidance for the responsible deployment of AI in educational settings nor allocate funds to support the local infrastructure necessary. Current actions also fail to address longstanding concerns regarding data privacy, the establishment of guardrails to mitigate algorithm bias, and efforts to reduce the digital divide which are increasingly more important upon interactions with minors. 

AI competency is becoming a necessary skill for the future, much like knowing how to use a search engine effectively to navigate online information. Students who understand how AI works, its limitations, and its potential biases will be better equipped to navigate the technology driven world we live in. Establishing guardrails and guidance is not meant to restrict student access to AI, but aims to ensure students can use these tools safely and responsibly. Proper guardrails, transparency, and guidance allow students to leverage AI as a learning aid while minimizing risks to privacy, fairness, and well being. 

In the absence of guardrails and guidance, AI can increase inequities, introduce bias, spread misinformation, and risk data security. These negative impacts are often exacerbated in communities that are marginalized or economically disadvantaged. Simply put, the current posture towards AI puts the cart before the horse. The United States needs a better understanding of the impact of AI on student learning and clear guardrails before introducing it large-scale.  

More Data Needed 

Reeling from the impacts of the COVID-19 pandemic on student learning, such as learning loss and widening achievement gaps, the most recent National Assessment of Educational Progress asserts a clear decline in K-12 science, reading, and mathematics proficiencies compared to 2019. The results of the study will be used to inform educational reforms, however, educators and policymakers should be cautious in framing AI as the cure-all for America’s educational challenges. The promises of similar tech-driven advances foreshadow a likely failed result if the policy does not adapt accordingly .   

Currently, there are no federal guidelines that govern AI usage in the classroom and there are no longitudinal studies on AI’s impact on student learning and cognitive development.  Short-term studies have demonstrated that AI can have a positive effect on student learning, however, results are highly variable and context specific. In addition, there are significant risks, such as student overreliance on the technology, especially generative AI chatbots. Early learners are particularly at risk for negative impacts and it is unknown how AI use impacts deeper learning and information retention and synthesis. Studies indicate that  technology use among  school-aged children can negatively affect  attention spans, self-control, cognitive development, and problem-solving skills. Moreover, AI chatbots may pose psychological impacts or “empathy gaps” in children that are not well understood. Only recently has the Federal Trade Commission launched an inquiry into the impact of AI chatbots on children. We need more data on the long-term impacts of AI in the classroom in order to develop coherent policies that support educators and learners. 

These shortcomings do not imply that AI cannot have a place in the classroom. Instead it demonstrates that a comprehensive understanding of AI’s impact is necessary before its use is scaled up. Furthermore, algorithm transparency is paramount for minimizing bias, ensuring student psychological safety, and promoting data security. Organizations like TeachAI, acknowledge some of these risks and provide resources for schools and universities developing AI policy, however, there is still much to learn. 

Federal Support and Coordination are Paramount

Uncertainty around the future of federal support for education and education research is also a key challenge. The Department of Education (DoEd) is currently responsible for addressing national educational issues by setting federal policy, supporting equal access to education, protecting civil rights, collecting educational data, and analyzing trends. The DoEd also works to hold institutions and States accountable for educational outcomes.  The current administration, however, has a stated goal of abolishing the DoEd and sending those powers to States. While States should be empowered to support policy development and implementation, federal coordination and oversight is vital for protecting civil rights and understanding long-term national education trends. 

If the DoEd is abolished, it is uncertain what government agency would assume responsibility for the development, monitoring, and evaluation of educational standards at the precipice of the AI age. If States are tasked with this responsibility, it will require sustained financial federal support. Proposed cuts to the National Science Foundation (NSF) STEM Education Directorate and other STEM education federal funders would limit the ability for education researchers to effectively assess the impact of AI on the educational and psychological development of students or develop tools for the effective use of AI. 

Implementation Requires Community Involvement

While current federal initiatives are in place promoting the role of AI in education, their ultimate success depends on meaningful training experiences for educators and strong collaboration with State and local stakeholders. Federal frameworks, such as the April 2025 Advancing Artificial Intelligence Education for American Youth Executive Order (E.O. 14277), addresses the critical need to provide America’s youth with opportunities to cultivate AI competency, but it does not express the major value of having States and local districts leading the implementation effort to ensure that AI integration meets community needs, supports student achievement, and strengthens workforce development.

State and local communities could potentially draw on federal resources under this E.O. (if available) and work collaboratively with education-focused professional societies, such as the National Science Teachers Association (NSTA) and the Computer Science Teachers Association (CSTA) to help develop community-created standards, define clear metrics, and continuously evaluate what works within their specific contexts. Initiatives such as NSF’s EducateAI and the National AI Research Resource (NAIRR) offer curriculum models, research infrastructure, and other resources that can complement any locally developed approaches. These federal programs can also support collaborative networks among educators, researchers, and industry partners to share best practices and insights. However, realizing the full potential of these federal programs first requires providing teachers with professional development and training to use AI tools effectively and confidently in the classroom, because even the most advanced resources are only as impactful as the educators who apply and understand them.   

Recommendations

Framework for AI Responsibility (FAIR) in Education Act 

Congress should propose legislation on the responsible use of AI in education. This comprehensive act, known as the Framework for AI Responsibility in Education Act or the FAIR in Education Act, would support a large-scale study on the impact of AI on education, provide funding for education research, support State leadership in AI in education, require greater algorithm transparency for algorithms influencing minors, and provide infrastructure for ongoing monitoring and assessment of a community-centered implementation of AI technologies in the classroom. This legislation should address both K-12 use and higher education. 

First, the FAIR in Education Act should instruct the National Academies of Science, Engineering and Mathematics (NASEM) to conduct a study and report on the impact of AI in K-12 schools, higher education, and informal learning settings such as libraries and museums. This landscape study should address student learning, the impact on cognitive abilities, psychological impacts, the ethical use of AI, and provide recommendations for how the federal, state, and local governments can support AI literacy and teacher education. 

Next, the use of AI in the classroom raises several academic integrity and scientific integrity issues, including plagiarism, authorship and credit, accuracy, reliability of AI outputs, reproducibility, and data bias. The FAIR in Education Act should instruct the Committee on STEM Education (CoSTEM), a subcommittee of the National Science and Technology Council under Office of Science and Technology Policy (OSTP), to within 270 days of passage of the act provide guidance to assist educational institutions in thoughtfully updating their own definitions of academic integrity in light of AI and other technologies used in educational settings. This guidance would help institutions uphold ethical standards while enabling the responsible use of AI in learning and assessment. 

The Act should also require transparency in how AI algorithms used in education are trained, what data was used, and how the guardrails were tested. Educators should be aware of the design decisions and development processes that engineers made for the algorithms and how those decisions might affect the use of AI as a tool to enhance student learning. Such transparency will enable educators to guide students effectively in using AI as a learning tool, particularly supporting equitable outcomes among disadvantaged communities. 

The Act will direct federal funds to support the requisite infrastructure and security needed to safely use AI. There are examples from previous administrations of funding opportunities and convenings through the Federal Communications Commission (FCC) to support school district cybersecurity and the infrastructure required to support AI and high speed internet use. Additionally, the Act would support streamlined implementation of the Broadband Equity, Access, and Deployment Program to address high speed internet access across the country.

The responsible use of AI requires not only federal engagement, but State engagement as well. The FAIR in Education Act will require Federal, State, and local coordination on AI use in the classroom and facilitate continued monitoring and evaluation. The Act will also increase funding for teacher professional development, with emphasis on development and training for STEM fields. We envision these goals will be accomplished through the funding and development of a “Supporting Pedagogy and AI Readiness in K-12” (SPARK) Center, which will be informed by an inaugural country-wide Governor’s conference.

Governor’s Conference – State-Led Design of the SPARK Center

    The creation of the SPARK Center should be conducted in cooperation with state and local officials, as well as parents, educators, and students. The education system in the United States is heavily dependent on state and local government to provide leadership in the implementation of new initiatives or educational practices, and thus it is essential that they are involved in the decision making. To begin incorporating these essential voices, we recommend hosting a“Governor’s Conference” with a primary focus on AI in education, and specifically the community driven design of the SPARK Center. The National Governor’s Association (NGA) Center for Best Practices has a program area focused on K-12 education and previously led a Governor’s convening on a K-12 education agenda in 2023. NGA can utilize these existing networks to drive a new focus on the use of AI in education, and preparation and design of the SPARK Center.

    As of September 2025, thirty States have issued guidance on AI in Education. At the conference, Governors can share the successes and challenges of their current AI policies as they relate to education, engage in real-time conversations with teachers, students, and parents, and inspire policy action in States which may not yet have infrastructure in place to support the responsible deployment of AI in their own education systems. Attendees should include all state Governors (or their proxies, such as Secretaries of Education or people in similar positions), representatives from the American Federation of Teachers, the National Education Association, the Association of American Educators, the Superintendents Associations, possible NGOs such as the leadership from CSTA and NSTA, administrators of TeachAI, and relevant NSF funded researchers and academics conducting pedagogical studies on AI impacts on education and childhood development. In addition to representatives from state Governor offices, educators from local school districts must be an essential part of this process to garner buy-in and receive guidance from the final users. 

    The event organizer should consider the best way to integrate parent and student feedback into the outcomes of the conference, such as dedicating one day of the conference specifically to receive their feedback through Track 1.5 roundtables, or stakeholder prepared presentations. The goal of the conference is to create an opportunity for state governments to learn where there are insurmountable challenges in the deployment of AI in education for States to address independently, and where students could benefit from federal standardization of the U.S. approach. The outcome of the conference should lead to a deployable roadmap and fulsome design of the SPARK Center, including the accumulation of educational training resources for teachers and teachers associations. It could also lead to the percolation of new initiatives for the federal government, such as drafted federal guidelines for AI in K-12 education, a new country-wide grand challenge, or an increase in funding or resources provided to the States. It could also lead to the design of a new research and potential pilot projects conducted by the NGA’s Center for Best Practices. These are solely illustrative examples, and will ultimately be determined by the involved participants.

    A community-created approach, paired with federal resources, enables a two-way exchange in which federal guidance informs local practice, while lessons learned from schools will feed back into federal research, policy, and frameworks. This partnership will ensure AI is integrated responsibly, equitably, and effectively across the education system in America.

    Supporting Pedagogy and AI Readiness in K-12 (SPARK) Center 

    For AI to truly benefit classrooms, communities must create, establish, and embrace standards to help guide responsible AI use and effectiveness. These efforts, such as CSTA’s AI Learning Priorities,  will be bolstered through the establishment of the SPARK Center per the FAIR in Education Act. 

    To maximize AI benefits and minimize risks, AI use in the classroom must be guided by community-created standards. Education stakeholders including students, teachers, and families need to be involved with defining how AI is used in the classroom to ensure it aligns with local values, protects student data, and supports student-centered, teacher-facilitated learning. State and local leadership, creating essential policies for these standards, is critical in order to adapt practices to local contexts and to monitor effective classroom use. What works in one district or school may not work elsewhere; standards must be flexible and informed by the community stakeholders because a one-size-fits all approach will not work in every school across America.

    Effective AI use requires ongoing monitoring and evaluation. At a local level, schools should track learning outcomes, student experiences, teacher workload, and overall engagement and productivity with the technology. Feedback from students, teachers, and education stakeholders should be a part of every assessment monitoring and evaluation cycle to help improve AI adoption in the classroom. Implementing routine monitoring and evaluation cycles will enable schools to adjust AI practices, identify unintended consequences, and ensure AI is supporting the learning objectives established in the curriculum instead of creating new challenges in the classroom. 

    This work overall can be burdensome across teachers and school districts. If a community realizes that the deployment of AI in their educational infrastructure is not reaching anticipated goals, or potentially even causing unintended negative consequences across students, there are few places for educators to turn for answers. The SPARK Center will be designed to be a federally managed resource which manages the monitoring and evaluation capacities across the country, and compiles best practices for educators to pull from based on their analyses. Other functions of the center will be determined through a community-driven approach, and informed by a Governor’s Conference convened at the federal level.  

    Conclusion: Connecting Federal Support to Advance Community-Created Approaches

    AI has enormous potential to enhance teaching and learning but only if its adoption is guided by communities, led locally, and continuously monitored. By combining student-centered, teacher-facilitated classroom practices with State and local guidance and federal support, schools can ensure AI empowers both educators and students while safeguarding equity, ethics, and critical thinking. Federal support should strengthen these community centered approaches, providing resources and guidance without replacing local decision making. 

    The views contained in this memo reflect the personal views of the authors.

    Ending Rural Teacher Shortages: What Federal, State and Local Government Can Do

    Rural communities face unique barriers to providing every student with a well-rounded, excellent education. Chief among them are staffing shortages: rural communities often struggle to recruit and retain qualified teachers. Recent shifts to the federal policy landscape threaten to worsen this challenge. This memo recommends action steps for federal, state and district policymakers to end rural teacher shortages. 

    Challenge and Opportunity

    When I left my job as an elementary STEM teacher in rural North Carolina, I gave each of my students an envelope, pre-labeled with my family’s address, and told them to write me a letter with their good news. A year later, an envelope arrived from a student who wrote to tell me that he missed science class; he hadn’t had a science teacher all year. My heart sank, remembering his enthusiasm and interest in science, and knowing that a year without science class put him off track for more advanced courses later, courses he would need if he wanted to pursue a STEM major in college. 

    This is hardly a unique story. The Organisation for Economic Co-operation and Development (OECD) recently made headlines warning of an increasing teacher shortage crisis across the world. In the U.S., teacher shortages are a well-documented problem in certain subject areas and locations. In rural communities like the one where I taught, educator shortages are longstanding and to many, feel intractable. 

    What do we know about rural teacher shortages? 

    Rural schools serving low-income students and those serving mostly students of color have the highest rates of teacher turnover nationally–markedly higher than schools serving similar groups of students in urban and suburban areas. 

    A 2020 study of California school districts found that rural districts posted an additional twelve teacher vacancies for every 100 teachers compared to their urban counterparts. These rural California districts also struggled more to fill vacancies with qualified staff, hiring twice as many emergency certified educators. 

    And while this pattern may not be consistent across all rural communities, rural schools appear to struggle more with the impact of shortages. In the 2023-2024 school year, a national sample of rural school administrators actually reported lower rates of teacher vacancies than non-rural schools: 69% of rural schools said they were fully staffed compared to 56% of all public schools reporting. But rural schools in this same survey who experienced vacancies were more likely to report that they impacted the day-to-day experience of students and teachers.

    Rural schools struggle to recruit educators, with fewer applicants and fewer qualified candidates, and fewer teacher preparation programs nearby from which to recruit teacher candidates. Teacher preferences may work against rural schools’ efforts to recruit from outside the community: national research shows that teachers are more likely to teach within fifteen miles of their hometown, and by virtue of smaller local populations, administrators have a smaller pool of candidates to draw from who fit that profile. Instead, rural schools often find themselves working against the grain of teacher preferences, recruiting from outside of rural communities. 

    Recruiting from outside the community presents its own share of challenges, and for these and other reasons, rural schools also struggle to retain teachers. New research studying rural teacher mobility between 1987 and 2018 found that rural teacher shortages across the country were driven much more by turnover than by other causes that are often responsible for open positions (such as retirement, or growing student enrollment). Teachers were over twice as likely to move out of rural schools and to urban or suburban schools as they were to move from urban or suburban schools to rural schools.

    Non-rural schools may be able to offer some benefits and resources that rural schools cannot, but compensation may not be the main reason educators are leaving rural schools. While thirty-four percent of teachers who left rural schools did cite salary and benefits as their reason for leaving, the most significant reported causes of rural teacher turnover had to do with school culture and working conditions, particularly issues with school leadership.

    Plan of Action

    In the face of these challenges, rural schools have tremendous assets to draw on in building, hiring and retaining a strong teaching workforce. For local community members in small rural labor economies, teaching can be an attractive job, particularly to community members who don’t want to leave to access economic opportunity. Rural schools that have cultivated positive, close-knit relationships to their school communities can also be attractive to teachers looking for a supportive environment, and many rural schools offer the chance to live in a small, interconnected community with access to nature and affordable cost-of-living.

    But rural schools can’t do it alone. In order to leverage these assets and end teacher shortages, local, state and federal leaders play a critical role. What can leaders at each level of government do to end teacher shortages? We recommend action at the district and school, state, and federal levels.

    Recommendation 1. District and School-Level Actions to Attract and Retain Teaching Talent 

    Identify your school community’s strongest assets: what attracts teachers to teaching in your community? Use these as a starting point to inform your recruitment strategy.

    Gather data to find the root causes of teacher recruitment and retention issues in your community, and design your teacher recruitment and retention strategy based on these root causes. If your state does not offer a shared teacher exit survey, districts can use their own exit surveys to gather data on teachers’ reasons for leaving, and use that data to narrow in on solutions. Alaska’s Lower Kuskokwim School District, for example, has historically struggled to recruit and retain new teachers, and wanted to know why educators were leaving. As part of a Regional Education Laboratory (or REL)-supported project, the district used exit survey data to identify substandard educator housing (which is provided by the district to educators at a subsidized rate) as a key barrier to working conditions, and has since partnered with a local vocational education program to build additional housing for educators. 

    A critical step in this process is gathering and monitoring data and pivoting when solutions are not having their intended impact. For example, many rural districts have turned to four-day school weeks in the hope of solving a host of challenges, including teacher shortages, budget shortages and long student commute times. But early evidence suggests that four-day school weeks are not having the intended impact on teacher recruitment and retention, and in fact, may result in additional turnover. Armed with this evidence, districts can adjust course. 

    Put current students’ and local community members on a path to become educators and school staff. While recruiting from outside the community may still be necessary in the short and medium term, preparing the next generation of local communities for jobs that allow them to stay in the community provides a benefit to both current and future students. Grow-Your-Own programs and high school pipeline programs into teaching jobs are a powerful potential tool. As part of regular reporting, publish data on program outcomes. 

    Share teachers (and services) across districts. For the hardest to staff roles and roles where student enrollment is too low to support a full-time teacher in a certain subject area, rural districts can work together in cross-district consortia to share access to courses–sometimes virtually, sometimes in person. Some districts also use this shared services model to provide professional learning to educators.

    Recommendation 2. State Actions to Support Rural Teacher Recruitment and Retention

    Target solutions based on demonstrated staffing shortages. Too often, states fund one-size-fits-all solutions to teacher shortages that direct limited resources too broadly, often to roles that schools don’t actually struggle to fill, or to schools that don’t have any shortage of qualified applicants. Prioritizing the highest-need areas is especially critical when working with limited resources: with a limited amount of money, a state can do more to solve teacher shortages by targeting incentives to the teacher roles where they are most needed. Both Alaska and Colorado, for example,provide incentives to teacher preparation candidates to teach in rural schools.

    Fund educator pipeline programs targeted to rural communities with demonstrated shortages. States have made significant recent investments in Registered Apprenticeship, Grow Your Own, post-baccalaureate and high school pipeline programs to recruit and train new teachers. States can prioritize rural districts with demonstrated shortages to pilot and expand these programs. Ensure timely evaluation and publication of outcomes for these programs.

    Fund rural schools fairly. Rural districts have lower enrollment, face higher overall costs to deliver student services, can’t reduce costs through economies of scale, and have fewer local resources in the form of local tax dollars and ability to levy local bonds. Rural districts rely more on state and federal funds for this reason, and state education funding formulas are critical to ensure rural schools have enough money to provide critical services. To ensure local schools can fund competitive salaries and support recruitment and retention initiatives, states should evaluate whether or not their current funding formulas are sufficient to meet rural schools’ needs. 

    States nationwide have taken this on, with Utah recently revising its school funding formula to provide rural schools up to 1.5 times the per-pupil funding rate of non-rural schools. Both Wisconsin and Massachusetts provide schools with supplemental aid specifically for rural schools; Wisconsin’s program has made a demonstrated impact on rural students’ college enrollment and completion

    Some states are committing new state money directly to educator salaries, working to close the gap between rural and non-rural districts. In 2023, Arkansas funded a statewide raise of the state’s minimum teacher salary from $36,000 to $50,000, and provided all K-12 public educators with a raise of at least $2,000. Research from the first year of implementation found that it had substantially increased funding for both rural and urban schools. Rural schools, which had provided average starting pay of $2,400 less than urban districts, cut that gap to $48 in the initiative’s first year. 

    Give districts the flexibility to share staff and resources. Increasingly, rural school districts are working across districts to share limited staff and resources. Forming local consortia, districts may give students the opportunity to enroll in advanced or specialized coursework across districts. States can ensure that state policy reduces barriers to this approach; Texas, for example, passed state legislation to remove barriers to this approach and support growth through a new Rural Pathway Excellence Partnership Program, which currently serves ten consortia made up of thirty rural districts. Massachusetts’ Rural School Aid Program specifically prioritizes district spending to “increase regional collaboration, consolidation, or other strategies to improve long-term operational efficiency and effectiveness.”

    Provide access to virtual courses. When rural districts cannot hire enough teachers or muster enough students to provide specialized or advanced courses, states can also work creatively to provide access to these courses statewide. Montana’s legislature created the Montana Digital Academy, which has provided statewide access to virtual courses since 2009. The classes, taught by certified Montana educators, ensure that students anywhere in the state (which boasts the most one-room schoolhouses of any state), can take Advanced Placement, dual enrollment and specialized courses like Indigenous Languages or Artificial Intelligence.

    Gather and publish the data to better understand shortage patterns. States should give themselves, districts and the public the ability to understand shortage patterns at a detailed level, including by rurality. States should collect data that allows leaders to understand, at a minimum, how rural schools are experiencing shortages:

    Gather data on teachers’ reasons for leaving through statewide teacher working conditions surveys and exit surveys for departing teachers. Systematize this data by requiring collection at the state level through a single survey, deliver data back to district and schools, and provide facilitated opportunities to analyze data and act on feedback. Publish disaggregated data by rurality to understand the unique issues facing rural schools. Tennessee’s statewide teacher working conditions survey, for example, provides detailed statewide data on teachers’ and administrators perspectives on working conditions year over year; the survey’s research partner published analysis of results for rural schools.

    Recommendation 3. Federal Actions to Support Rural School Funding and Success

    Maintain access to federal education funds that rural schools rely on to support teachers. Federal funds are a critical source of funding for rural schools, who rely on them for a host of core functions, including many that directly support teachers: paying salaries, providing supportive professional learning, and funding innovative approaches to recruit new teachers. As the Trump administration has impounded allocated funds, released promised formula funds late, proposed cutting funds for future budget years, and abruptly begun moving funding programs to other agencies that lack the capacity or expertise to run them, rural schools have been left to plan for the worst. This has created an atmosphere of chaos and uncertainty, leaving rural schools struggling to plan ahead for the months and years ahead. (For more on how cuts to these programs impact rural schools, see the table, “Using federal education funds to end rural teacher shortages.”)

    Increase access to discretionary grant funding by including rural schools in the Secretary of Education’s Supplemental Priorities. Rural schools often struggle to apply for and effectively compete for discretionary federal grants that could be used to support teacher recruitment and retention. With a Supplemental Priority, the Secretary could ensure rural schools are prioritized in future grant competitions. 

    Release guidance on how federal funds can be blended and braided to end teacher shortages. The Department of Education has historically provided a wide range of federal funds that can be used in concert to fund teacher recruitment and retention strategies; it is critical to maintain access to these funds. If, in the future, the Department’s role in funding and providing technical assistance to states is restored, the agency could work to ensure that more schools are making strategic investments to meet their goals around the teacher workforce. The Department could provide guidance to states and districts highlighting how schools have successfully brought these funding streams, along with state, local and philanthropic dollars, together to end teacher shortages. For more on current funding sources that states and districts can use to solve teacher shortages, and how cuts to these programs will impact rural schools, see the table, “Using federal education funds to end rural teacher shortages.” 

    Build a real-time national teacher labor market data system. Currently, very little detailed, timely data exists to understand the national landscape of teacher hiring and persistent vacancies. The Department of Education should spearhead a collaboration between the National Center for Education Statistics (NCES) and the Department of Labor’s Bureau of Labor Statistics (BLS) to provide better national teacher labor market data. States and local communities would be able to use this data to support secondary research to understand where rural communities are having success in lowering teacher vacancies and where others are struggling. Research suggests that the prevalence of rural teacher shortages may vary by state, and the field would benefit from understanding why. 

    Build the evidence base for teacher recruitment and retention practices, and fund rural-specific research. Much of the research on effective practices for attracting and retaining teachers does not specifically test the effectiveness or implementation challenges of a specific intervention in rural contexts. The federal government has an important role to play in funding action-oriented research to solve these urgent problems. At a minimum, it is critical that Congress continue to invest in programs like the Department’s Education Innovation Research grants (which include a specific priority for rural research).


    Using federal education funds to end rural teacher shortages 

    A range of federal education funds can be used to combat rural teacher shortages, including, but not limited to:

    For rural school serving high populations of Native students, the following funds can also be used:

    Access to the funds listed here have been threatened by the Trump administration, through revoking current awards (such as Teacher Quality Partnership Grants), proposed cuts to future spending, and proposed consolidation of funding streams into block grants to states at drastically lower funding levels (such as REAP and Title II, Part A). At the same time, the administration has begun to transfer administration of many of the programs above to other agencies, which are ill-equipped to quickly stand up complex programs that send billions to states and districts nationwide. In the wake of these disruptions and potential cuts, rural schools will have little support available from the federal government to solve critical teacher shortages, and will likely face worsening challenges in an increasingly strapped budget environment. 


    Conclusion 

    The impact of teacher shortages impacts hundreds of thousands of young people like my former student each day–students who may go a whole year without a certified teacher, or graduate high school without ever having access to the advanced classes that unlock their future aspirations. Rural students of color and those living in high-poverty rural areas bear the brunt of this long-standing problem. 

    States, districts and the federal government each have a critical and distinct role to play in supporting rural schools. And while rural schools are used to being scrappy and doing more with less, without state and federal support, districts will be hard-pressed to close teacher workforce gaps on their own. 

    A Digital Public Infrastructure Act Should Be America’s Next Public Works Project

    The U.S. once led the world in building railroads, highways, and the internet. Today, America lags in building the digital infrastructure foundation that underpins identity, payments, and data. Public Digital systems should be as essential to daily life as roads and bridges, yet America’s digital foundation is fractured and incomplete.

    Digital public infrastructure (DPI) refers to a set of core and foundational digital systems like identity, payments, and data exchange that makes it easier for people, businesses, and governments to securely connect, transact, and access services. 

    DPI consists of interoperable, open, and secure digital systems that enable identity verification, digital payments, and data exchange across sectors. Its foundational pillars are Digital Identity, Digital Payments, and Data Exchange, which together provide the building blocks for inclusive digital governance and service delivery. DPI acts as the digital backbone of an economy, allowing citizens, governments, and businesses to interact seamlessly and securely.

    America’s current digital landscape is a patchwork of systems across states, agencies and private companies, and misses an interoperability layer. This means fragmented identity verification, uneven instant payment networks, and siloed data exchange rules and mechanisms. This fragmentation not only frustrates citizens but also costs taxpayers billions, leads to inefficiency and fraud. This memo makes the case that the United States needs sweeping legislation– a Digital Public Infrastructure Act— to ensure that the nation develops a coherent, secure, and interoperable foundation for digital governance. 

    Challenges and Opportunities 

    Around the world, governments are investing in digital public infrastructure to deliver trusted, inclusive, and efficient digital services. In contrast, the United States faces a fragmented ecosystem of systems and standards. This section examines each pillar of digital public infrastructure, digital identity, digital payments, and data exchange, highlighting leading international models and what institutional and policy challenges the U.S. must address to achieve a similarly integrated approach.

    Fragmented and non-interoperable Digital Identities

    Digital Identity. The U.S. has no universal digital identification system. Proving who you are online often relies on a jumble of methods like scanning driver’s licenses, giving your social security number, or one-off logins. Unlike many countries with national e-ID schemes, the U.S. relies on the REAL ID law which sets higher standards for physical driver’s licenses, but it provides no digital ID or consent mechanism for online use. Just under half of U.S. states have rolled out some form of mobile driver’s license (mDL) or digital ID, and each implementation is largely unique.

    Federal agencies have tried to streamline login with services like Login.gov, yet many agencies still contract separate solutions (Experian, ID.me, LexisNexis, Okta, etc.), leading to duplication. The Government Accountability Office recently found that two dozen major agencies use a mix of at least five different identity-proofing providers. The result is an identity verification landscape that is inconsistent and costly, both for users and the government.

    Fragmented Digital Payment Infrastructure

    Digital Payments. The United States still lags in offering universal, real-time payments accessible to all. The payments landscape is highly fragmented, with multiple systems operated by both public and private entities, each governed by distinct rule sets. The Automated Clearing House (ACH) network is the batch-based system that processes routine bank-to-bank transfers such as salaries, bill payments, and account debits or credits. It is co-run by the Federal Reserve (FedACH) and The Clearing House (EPN) under Nacha rules and settles with delay. The Real-Time Payments (RTP) network is an instant 24/7 credit-push system that moves money within seconds through a prefunded joint account at the Federal Reserve Bank of New York. It was launched by The Clearing House in 2017 and is governed by its private bank owners.

    In 2023, the Federal Reserve launched FedNow, the first publicly operated real-time payment rail in the United States, offering instant settlement through banks’ Federal Reserve master accounts. Card networks such as Visa, Mastercard, Amex, and Discover continue to operate proprietary systems, while peer-to-peer platforms like Zelle, Venmo, and CashApp run closed-loop schemes that often rely on RTP for back-end settlement. Because these systems differ in ownership, governance, settlement models, and liability frameworks, they remain largely non-interoperable. A payment sent through RTP cannot be received on FedNow, and card or wallet systems do not seamlessly connect to ACH or instant payment rails. 

    FedNow operates as a real-time gross settlement (RTGS) infrastructure, enabling participating banks and credit unions to send and receive instant payments around the clock. Its design is infrastructure-centric: the Federal Reserve provides the back-end rail, while banks must opt in, build their own consumer interfaces, and set transaction fees and rules. The system does not define standardized public APIs, merchant QR systems, or interoperable consumer applications. These layers are left to the market. Its policy intent centers on efficiency and resilience in interbank payments rather than universal inclusion or open access.

    Examples of Complete Public Payment Ecosystems

    By contrast, India’s Unified Payments Interface (UPI) and Brazil’s Pix were designed as full digital public infrastructures that combine settlement, switching, and retail layers within a single public framework. Both are centrally governed, with UPI managed by the National Payments Corporation of India under Reserve Bank of India oversight and Pix managed by the Central Bank of Brazil. They enforce mandatory interoperability across all banks, wallets, and payment apps through open API standards. Their architecture integrates digital identity, authentication, and consent layers, allowing individuals and merchants to transact instantly at zero or near-zero cost.

    While FedNow provides the plumbing for real-time settlement among banks, UPI and Pix function as complete public payment ecosystems built on open standards, public governance, and inclusion by design. Real-time payment systems in India (UPI), Brazil (Pix), and the United Kingdom (Faster Payments) now process far higher transaction volumes than their U.S. counterparts, reflecting how deeply these infrastructures have become embedded in daily economic activity.

    Credit: fxcintel.com

    This fragmented payment ecosystem became painfully apparent during COVID-19: some people waited weeks or months for stimulus and unemployment checks, while fraudsters exploited the delays. Only in 2025 did the Treasury Department finally announce it will stop issuing paper checks for most federal payments, to reduce delays, fraud, and theft.

    Clearly, the U.S. needs a more cohesive approach to instant, secure payments, from Government-to-Person (G2P) benefits to Person-to-Government (P2G) tax payments and everyday Person-to-Person (P2P) transactions.

    Data Exchange. Americans routinely encounter data silos and repetitive paperwork when interacting with different sectors and agencies. Each domain follows its own regulatory and technical standards. Health records are governed by the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and the Trusted Exchange Framework and Common Agreement (TEFCA) established under the 21st Century Cures Act of 2016. Financial data are protected by the Gramm–Leach–Bliley Act of 1999 (GLBA) and will soon fall under the Consumer Financial Protection Bureau’s proposed Personal Financial Data Rights Rule (Section 1033, Dodd–Frank Act). Tax and education data are separately governed by the Internal Revenue Code and the Family Educational Rights and Privacy Act of 1974 (FERPA).

    There is no unified, citizen-centric protocol for individuals to consentingly share their data across sectors. For example, verifying income for a mortgage, student loan, or benefits application might require three separate data pulls from the IRS or employer, each with its own process. In healthcare, TEFCA is creating a nationwide data-sharing framework but remains voluntary and limited to medical providers. In finance, Europe’s PSD2 Open Banking Directive (2018) forced banks to open consumer data via APIs, while the United States is only beginning similar steps through the CFPB’s data portability rulemaking. Overall, data-sharing rules remain sector-specific rather than citizen-centric, making it difficult to “connect the dots” across domains.

    Data Protection. The United States follows a fragmented, sectoral approach to data protection rather than a single, unified framework. Health information is covered by HIPAA (1996), financial data by GLBA (1999), student records by FERPA (1974), and children’s online data by the Children’s Online Privacy Protection Act (COPPA, 1998).

    States have layered on their own privacy laws, most notably the California Consumer Privacy Act (CCPA, 2018) and the California Privacy Rights Act (CPRA, 2020). At the federal level, the Federal Trade Commission (FTC) fills gaps using its authority to regulate “unfair or deceptive practices” under Section 5 of the FTC Act (15 U.S.C. §45). However, there remains no nationwide baseline for consent, portability, or deletion rights that applies uniformly across all sectors.

    An illustration from 2021 by the New York Times shows the picture very well.

    Credit: Dana Davis

    Recent efforts in Congress, including the proposed American Data Privacy and Protection Act (ADPPA, 2022) and the American Privacy Rights Act (APRA, 2024), sought to create a comprehensive federal framework for data privacy and user rights. APRA built on ADPPA’s foundations by refining provisions related to state preemption, enforcement, and individual rights, proposing national standards for access, correction, deletion, and portability, and stronger obligations for large data holders and brokers. It also envisioned expanded enforcement powers for the FTC and state attorneys general, along with a limited private right of action.

    Despite initial bipartisan attention, APRA has not secured sustained bipartisan support and remains stalled in Congress. The bill was jointly introduced in 2024 by the Republican Chair of the House Energy and Commerce Committee and the Democratic Chair of the Senate Commerce Committee, reflecting early cross-party interest. However, Democratic support weakened after language addressing civil-rights protections and algorithmic discrimination was removed, prompting several members to withdraw backing (Wired, 2024). As a result, the legislation has not advanced beyond committee referral, leaving the United States reliant on a patchwork of sector-specific and state-based privacy laws.

    The outcome is a system where Americans face both fragmented data exchange and fragmented data protection, undermining trust in digital public services and complicating any transition toward a citizen-centric digital infrastructure.

    The High Cost of Fragmentation

    This patchwork system isn’t just inconvenient; it also bleeds billions of dollars. When agencies can’t reliably identify people, deliver payments quickly, or cross-check data, waste and fraud increase. Here are just a few examples:

    Improper Payments. In FY2023 the federal government reported an estimated $236 billion in improper payments. That astronomical sum (almost a quarter-trillion dollars) stemmed from issues like payments to deceased or ineligible individuals and clerical errors. In fact, over 74% of the improper payments were overpayments The largest drivers included Medicare/Medicaid billing mistakes and identity-verification failures in pandemic relief programs. For example, the Pandemic Unemployment Assistance program alone saw an increase of $44 billion in erroneous payments, as identity thieves and imposter claims slipped through weak verification checks. While not all improper payments can be eliminated, a significant portion, GAO notes, can be eliminated. The biggest share of improper payments results from documentation and eligibility verification weaknesses, not intentional fraud. All errors could be reduced with better digital identity and data sharing systems.

    Identity Theft and Fraud. American consumers are suffering a wave of identity-related fraud. In 2023, the Federal Trade Commission received over 1 million reports of identity theft such as credit cards opened in another person’s  name or fraudsters hijacking unemployment benefits. Identity theft now accounts for about 17% of all consumer fraud reports. The surge during the pandemic (when government aid became a target) showed how criminals exploit weak ID verification. State unemployment systems, for instance, paid out a significant sum to fraudsters who used stolen identities. Strengthening the digital ID infrastructure in U.S. could curb these losses by catching imposters before payments go out.

    Administrative Overhead. Fragmentation forces each agency and company to reinvent the wheel, at great expense. Consider identity proofing: federal agencies spent over $240 million from 2020–2023 on contracts for login and ID verification solutions, much of it to third-party vendors, despite overlapping functionality. States and private institutions likewise pour resources into redundant systems for onboarding and verifying users. Processing paper documents and manual checks adds further costs and an indirect cost of time and frustration for citizens. A GAO report noted that agencies have widely varying systems and that a coordinated digital identity approach could improve security and save money. In short, the lack of shared public digital infrastructure means higher costs and slower service across the board.

    Plan of Action 

    What would a Digital Public Infrastructure Act do?

    It’s clear that the status quo isn’t working. The U.S. needs a Digital Public Infrastructure (DPI) Act, a comprehensive federal law that would build the rails and rules for secure, efficient digital interactions nationwide. Just as past Congresses invested in highways and the internet itself, Congress today should invest in core digital systems to serve as public goods. A DPI Act could establish three pillars in particular:

    Federated, Privacy-preserving Digital Identity

    A secure digital ID that Americans can use (voluntarily) to prove who they are online, without creating a centralized “Big Brother” database. This would be a federated system, meaning you could choose from multiple trusted identity providers. For example, you could share your identification with your state DMV, the U.S. Postal Service, or a certified private entity, all adhering to common standards. The federated system must follow the latest NIST digital identity guidelines for security and privacy (e.g. NIST SP 800-63) to ensure high Identity Assurance Levels.

    Crucially, it should be privacy-preserving by design: using techniques like encrypted credentials and pairwise pseudonymous identifiers so that each service you log into only sees a unique code, not your entire identity profile. A federated approach would leverage existing ID infrastructures (state IDs, passports, social security records) without replacing them. Instead, it links and elevates them to a digital plane.

    Under a DPI Act, an American citizen might verify their identity once through a trusted provider and then use that digital credential to access any federal or state service, open a bank account, or consent to a background check, with one login. This approach can dramatically reduce fraud (no more 5 different logins for 5 agencies) while protecting civil liberties by avoiding any single centralized ID database. The Act could establish a national trust framework (operating under agreed standards and audits) so that a digital ID issued in, say, Colorado is trusted by a bank in New York or a federal portal, just as state driver’s licenses are mutually recognized today. Done right, a digital ID saves time and protects privacy: imagine applying for benefits or a loan online by simply confirming a verified ID attribute (e.g. “I am Alice, over 18 and a U.S. citizen”) rather than scanning and emailing your driver’s license to unknown clerks.

    Universal, Real-time Payments (G2P, P2G, P2P)

    The DPI Act should ensure that instant payment capability becomes as ubiquitous as email. This likely means leveraging FedNow, the Federal Reserve’s new instant payment rail, and expanding its use. For Government-to-Person (G2P) payments, Congress could mandate that federal disbursements (tax refunds, Social Security, veterans’ benefits, emergency relief, etc.) use a real-time option by default, with an ACH or card fallback only if a recipient opts out.

    No citizen should wait days or weeks for funds that could be sent in seconds. The same goes for Person-to-Government (P2G) payments: taxes, fees, and fines should be payable instantly online, with immediate confirmation. This reduces float and uncertainty for both citizens and agencies. Finally, Person-to-Person (P2P): while the government doesn’t run private payment apps, a robust public instant payments infrastructure can connect banks of all sizes, enabling truly universal P2P transfers. This way, someone at Bank A can instantly pay someone at Credit Union B without needing both to join the same private app.

    FedNow, as a public utility, is an important player, but the Act could incentivize or require banks to join so no institution is left behind. The result would be a seamless national payments system where money moves as fast as email, enabling things like on-demand wage payments, rapid disaster aid, and easier commerce.

    Cross-sector, Consent-based Data Exchange

    The third pillar is perhaps the most forward-looking: creating standard protocols for data sharing that put individuals in control. Imagine a secure digital pipeline that lets you, the citizen, pull or push your personal data from one place to another with a click – for instance, authorizing the IRS to share your income info directly with a state college financial aid office, or allowing your bank to verify your identity by querying a DMV record (with your consent) instead of asking you to upload photos or scans.

    A DPI Act can establish an open-data exchange framework inspired by efforts like open banking and TEFCA, but broader. This framework would include technical standards (APIs, encryption, logging of data requests) and legal rules (what consents are needed, liability for misuse, etc.) to enable “tell us once” convenience for the public. 

    Importantly, it must be consent-based: your data doesn’t move unless you approve and authorize it.It can let you carry digital attestations i.e. driver’s license, vaccination, veteran status, etc. on an e-wallet and share just the necessary bits with whoever needs to know. Some building blocks already exist: the federal Office of the National Coordinator for Health IT (ONC) is working on health data interoperability through TEFCA (so hospitals can query each other’s records), and the Consumer Financial Protection Bureau has begun rulemaking to give bank customers the right to share their financial data with third-party apps.

    A DPI Act could unify these efforts under one umbrella, extend them to other domains, and fill in the gaps (for instance, enabling portable eligibility, if you qualify for one program, easily prove it for another). It could establish a governance entity or standards board to oversee the trust frameworks needed. Crucially, this must be accompanied by strong privacy and security measures like audit trails, encryption, and an emphasis that individuals can see and control who accesses their data. An example of this is how the EU wallet provides a dashboard for users to review and revoke data sharing.

    The Digital Public Infrastructure Act would not necessarily build each piece from scratch but set national standards and provide funding to knit them together. It could, for example, direct NIST and a multi-agency task force to implement a federated ID by a certain date (building on Login.gov’s lessons), require the Treasury and Federal Reserve to ensure every American has a route to instant payments across platforms (leveraging FedNow), and authorize pilot programs for cross-sector data exchange in key areas like social services.

    Precedent for such an approach already exists in bipartisan efforts:

    Navigating Roadblocks: Federalism, Privacy, and Tech Contractors

    Enacting a U.S. Digital Public Infrastructure Act will face several real challenges. It’s important to acknowledge these roadblocks and consider strategies to overcome them:

    Federalism and Decentralized Authority

    Unlike many countries where a central government can launch a national ID or payments platform by decree, the U.S. must coordinate federal, state, and local authorities. Identity in the U.S. is traditionally a state domain (driver’s licenses, birth certificates), while federal agencies also issue identifiers (Social Security numbers, passports). A DPI solution must respect these layers. States may fear a federal takeover of their DMV role, and agencies might guard their IT turf. Solution: design the system as a federation of trust. The Act could explicitly empower states by providing grants for states to upgrade to digital driver’s licenses (the Improving Digital Identity Act proposed in 2022 did exactly this, offering grants for state DMV mobile IDs). It could also create a governance council with state CIOs and federal officials to jointly set standards.

    Civil Liberties and Privacy Concerns

    Any mention of a “digital ID” in America raises eyebrows about Big Brother. Civil liberties advocates will rightly question how to prevent government overreach or mass surveillance. The Act should incorporate privacy by design provisions e.g., require minimal data collection, mandate independent audits for security, and give users legal rights over their data. One promising approach is using decentralized identity technologies, where your personal data (like credentials) stay mostly on your device under your control, and only verification proofs are shared. Also, the law can explicitly forbid certain uses, for instance, prohibit law enforcement from fishing through the digital ID system without a warrant, or forbid using the digital ID for profiling citizens. Including groups like the ACLU and EFF in the drafting process could help address concerns early. It’s worth noting that privacy and security can actually be enhanced by a good digital ID: today, Americans hand over copious personal details to random companies for ID checks (e.g. scan of your driver’s license to rent an apartment, which might sit in a landlord’s email forever). A federated ID could reduce exposure by only transmitting a yes/no verification or a single attribute, rather than a photocopy of your entire ID. Conveying that narrative, that this can protect people from identity theft and data breaches, will be key to overcoming knee-jerk opposition. Still, robust safeguards and perhaps a pilot phase to prove the concept will be needed to convince skeptics that a U.S. digital identity won’t become a surveillance tool.

    Incumbent Resistance (Big tech and Contractors)

    There are vested interests in the current disjointed system. Large federal IT contractors and identity verification vendors profit from selling agencies one-off solutions; big tech companies dominate payments and data silos in the status quo. A unified public infrastructure could be seen as competition or a threat to some business models. For example, if a free government-backed digital ID becomes widely accepted, companies like credit bureaus (which sell ID verification services) or ID.me might lose market share. If open-data sharing is mandated, banks that monetize data might push back. The solution is to engage industry so they can find new opportunities within the ecosystem. Many banks, for instance, actually support digital ID because it would cut fraud costs for them. The banking industry has been calling for better ID verification to fight account takeover and synthetic identities. In fact, a coalition of financial institutions endorsed the earlier Improving Digital Identity legislation.

    Fintechs will favor Digital Public Infrastructure (DPI) because it transforms customer acquisition from a slow, expensive manual process into an instant, low-cost digital utility. By plugging into standardized government layers for identity (e-KYC) and data sharing (Account Aggregators), fintechs can instantly verify and underwrite users who lack traditional credit histories. This allows them to scale rapidly and profitably serve millions of previously “unbanked” customers by making lending decisions based on real-time data rather than rigid credit scores.. The Act can create a public-private task force (as earlier bills proposed) to hash out implementation. For government contractors, the reality is that building DPI will still require significant IT work, just more standardized. Contractors who adapt can win contracts to build the new infrastructure.

    Political Will and Public Perception

    DPI can be a bipartisan win if framed correctly.

    For conservatives and fiscal hawks: emphasize the anti-fraud, waste-cutting angle. Stopping improper payments (recall that $236B figure!) and preventing identity theft aligns with the goal of efficient government. The Act  essentially plugs leaky buckets, something everyone can get behind.

    For liberals and tech-progressives: emphasize equity and empowerment. How digital infrastructure can help the unbanked access financial services, ensure eligible people aren’t left out of benefits, and give individuals control of their own data (a pro-consumer, anti-monopoly stance). Indeed, digital public goods are often framed as a way to ensure big tech doesn’t exclusively control our digital lives.

    The key will be avoiding hot button mis-framings: this is not a surveillance program, not a national social credit system, etc. It’s an upgrade to basic government digital infrastructure. One strategy is to start with pilot programs and voluntary adoption to build trust. For example, the Act could fund a pilot in a few states to link a state’s digital driver’s license with federal Login.gov accounts, showing a working federated ID in action. Or pilot using FedNow for a chunk of tax refunds in one region. Early successes will create momentum and help refine the approach. Champions in the Congress will need to communicate that this is infrastructure in the truest sense: just as U.S. needed electrification and interstate highways, it now needs the digital equivalent to keep America competitive and secure.

    Conclusion

    A Digital Public Infrastructure Act represents more than a technical upgrade; it is an investment in America’s institutional capacity. The challenges the U.S. faces today like identity theft, improper payments, slow benefit delivery, and fragmented data governance are the predictable consequences of an outdated public digital foundation that has never been treated as national infrastructure. Just as the interstate highway system knit together the physical economy, and just as the early internet created the backbone for the digital economy, the United States now needs a unified, secure, and interoperable set of digital rails to support the next era of public service delivery and economic growth.

    Unlike centralized systems elsewhere in the world, the American version of DPI would be federated, privacy-preserving, and deeply respectful of federalism. States would remain primary issuers of identity credentials. Private innovators would continue to build consumer-facing services. Federal agencies would govern standards rather than run monolithic platforms. This hybrid model plays to America’s institutional strengths such as distributed authority, competitive innovation, and strong civil liberties protections.

    Congress must enact a Digital Public Infrastructure Act, a recognition that the government’s most fundamental responsibility in the digital era is to provide a solid, trustworthy foundation upon which people, businesses, and communities can build. America has done this before when it built the railroads, electrified the nation, and invested in the early internet. The next great public works project must be digital.