Buzzwords like ‘Abundance’ and ‘Affordability’ are out. Learning policy lessons from the global community is in.

Something is wrong with American policymaking. There are obvious issues: hyperpolarization, deep public distrust of government, and outdated institutions make it difficult to implement durable laws. Pundits and think tanks try to overcome those issues by developing new framings, like ‘Abundance’ and ‘affordability’, that too often lack specific policy ideas and instead put style before substance. 

Rather than get caught up in the buzzword flavor of the month, the policymaking ecosystem should study what’s actually working. Many other countries have figured out how to develop cohesive policy agendas that deliver on their promises and build trust with constituents, resulting in improved outcomes in education, healthcare, housing, transportation, and energy—things that America still struggles with. 

We can learn valuable lessons from those governments about how to build more durable, more responsive, and more effective policy. The models discussed below offer a starting point: examples of how prioritizing implementation, outcomes-first design, and long-term, inclusive planning can result in better governance across countries with very different political systems.

What’s not working? 

The policy tools we currently have at our disposal are not working. Faced with a dysfunctional Congress, policymakers rarely pass new laws and instead stretch old ones to fit purposes they weren’t designed for. When well-designed policies are passed, agencies often lack the workforce, funding, and organizational infrastructure to actually implement those ideas. This failure to deliver further hurts an already declining level of public trust in institutions, but it also means that Americans lose out on basic needs. Homeownership feels unattainable for growing portions of the country. An outdated grid and rising energy prices strain communities (while an ongoing war in Iran worsens those issues and further drains federal funding). Oversized, high-emissions cars create health and safety hazards while accessing healthcare to treat those hazards can bankrupt a family.

The federal policy ecosystem’s responses have been underwhelming, despite the urgency. Consistent policy confusion, poor organization, and hyperpolarization—exacerbated by the Trump administration’s destruction of agency infrastructure and workforce—all contribute to the struggle for durable and meaningful change. The ecosystem lacks a unifying policy objective that can act as a foundation for policymakers, a set of guiding strategic principles to return to when designing and implementing policy. 

Instead, those in the Beltway look for new ways to package broad solutions. Movements like “Abundance,” or slogans like “affordability” and “dominance,” might be catchy, easily marketable, and play to a big audience (or the right political network), but they lack technical substance and specificity. Abundance has been applied to everything from large-scale clean energy supply to more effective prisons, and we still don’t have a roadmap for how to actually achieve energy affordability. Even “social justice” and “diversity, equity, and inclusion,” concepts with real academic foundations and a deep history of implementation across a range of socioeconomic settings, were co-opted after the murder of George Floyd in 2020 and applied to a whole universe of policies that didn’t always reflect the original goals of those movements, which in turn undermined the actual meaning of the words.

That approach might work to win elections, bump up polling numbers, or increase influence in the policy world, but it doesn’t actually get tangible results. Ultimately, Americans care less about Abundance than they do about outcomes: affordable houses; sustainable wages; reliable energy; quality education and childcare. So how do we get the policymaking apparatus to focus on choosing the present before picking out the wrapping paper?

How do we get it right?

To start, we can look to the rest of the world. Other governments have been successfully putting substance ahead of style—and delivering on their promises—for decades. America’s insular attitude towards domestic policymaking is supported by a culture of American exceptionalism and a view of ourselves as the ideal democratic state (a standard measured partly by metrics we ourselves invented).

That view is both incomplete and inaccurate, leaving out imperialistic tendencies, hundreds of years of oppressive policies, and the bargaining strength of being the world’s sole superpower. America is outpaced on a number of critical fronts by other countries. Building rail infrastructure costs 50% more and takes longer in the U.S. than in Europe or Canada. Americans spend more per person on healthcare than residents of other developed countries despite faring worse on certain outcomes, including higher maternal mortality rates and lower life expectancy. Poverty rates are some of the highest among OECD countries, with more workers earning “low pay” than in any other OECD country.

That view is also limiting. It encourages policymakers to continue the ‘style over substance’ feedback loop, investing in ideas that are culturally aligned with that perspective instead of in new, ambitious ones. Those new and ambitious policy ideas don’t have to be novel – they could come from places that are succeeding where we’re falling behind. 

Many other countries have figured out how to put substance first. The examples below start with a more internally cohesive theory of domestic policy or central guiding principles, like strong government capacity, outcome-focused policy design, and an emphasis on social wellbeing, and build the messaging platform later. They focus on reflecting the actual wants and needs of constituents rather than projecting how they think the public feels about government.

Nordic countries

Several Nordic countries, including Sweden, Finland, and Norway, illustrate one model: a welfare state with social democratic tendencies, robust social safety nets, and high levels of trust and public investment in social goods. These countries start with basic principles—that government should provide a reasonable standard of living for all citizens—and the policy substance follows from there. 

Their systems of governance are built on a tripartite policymaking structure that allows for meaningful, long-term engagement between government, industry, and labor. America might not have the infrastructure (or the desire) to implement a tripartite system, but it points to deeper values that underpin their policymaking. The Nordic model values public participation—not just on one-off projects, but throughout the process. It’s not direct democracy, but co-creation by bodies that represent the organized interests of major economic players. Public participation that’s meaningful, consistent, and long-term creates buy-in from those interests and durable policy. It’s also something that the United States consistently grapples with.

Nordic governance also supports policy design that’s targeted at specific outcomes, but integrates considerations from multiple sectors. Sweden has spent decades investing in clean energy technologies and deploying clean electricity—but has also implemented cross-cutting policies that target other areas of the transition. Several are aimed at reducing energy poverty, including subsidies, energy-inclusive rent, builder incentives, and efficiency standards. These policies are outcome-based, but are coordinated across multiple ministries rather than being siloed within one. The result is an “energy” policy that supports a clean transition but cuts across social services, labor, housing, and energy. The United States has tried this approach before with bills like the Inflation Reduction Act, but issues with implementation and government capacity limited the success of the bill. 

Another example is Finland’s ‘housing first’ initiative. It’s firmly rooted in a tangible outcome—securing housing for everyone, shored up with social service support and community integration. It’s been hugely successful, reducing long-term homelessness by 68% since 2008. Finland’s program is deeply integrated across national, regional, and local governments and civil society organizations, providing proof of concept for community navigator mechanisms that allow community expertise to steer federal dollars.

These policies deliver on their promises: housing, energy access, poverty reduction. Combining public participation with real delivery supports a continuous positive feedback loop of high trust, which creates an easy argument for more investment in the government that implements these policies. That’s necessary, because the reason this model delivers so well is that it relies on a public sector that’s well-funded by high tax rates and redistributive economic policies (which in turn are backed up by the economic powers of the tripartite system). Americans may balk at high taxes, but that’s partially because they don’t see the impact in their daily lives. They don’t trust the government to do the right thing with their money. Breaking into that low-trust cycle is difficult, but we have to start somewhere. 

China and Singapore

Singapore and China showcase another model. Although lacking in political freedoms and public participation, both countries offer examples of how to build transportation, energy, and housing infrastructure fast and well. At the core of this building is an emphasis on governance and implementation, long-term planning, and public investment in human capital. 

Singapore is consistently held up as an example of good governance in both policy design and implementation. It has fully integrated scenario-planning and foresight tools into its policymaking processes, allowing the government to be more proactive in tackling barriers and achieving desired outcomes. This type of long-term planning is only possible with detailed policy agendas and sustained commitment to outcomes. It also requires investment in and retention of a talented civil service, which additionally supports cross-government functionality, program longevity and durability, and smooth implementation of policy.

The state’s successful delivery on social outcomes like education (students comfortably outperform the OECD average), healthcare (high life expectancy and low maternal mortality at lower-than-average prices) and economic development (doubling GDP per person over the last 20 years) helps reinforce trust in the ruling party, further strengthening its ability to continue to have outsized agency in policymaking. Some of these elements are harder to implement in the United States, given the inherent instability of changing administrations, but they underscore the need for agreed-upon foundational principles regardless of who’s in power.

China employs similar strategies. Both China and Singapore have well-developed industrial policies – something the U.S. has lacked for several decades. China has spent years intentionally subsidizing specific industries, like transportation, clean energy, and technology, with comprehensive public spending strategies and long but detailed implementation timelines. It invested in both human and physical infrastructure, and now boasts the largest industrial workforce in the world, one trained to continuously innovate. These investments have paid off: China leads the world in solar panel and electric vehicle manufacturing, has rapidly expanded its transportation networks, and has built so much housing that it helped contribute to a real estate crisis. This targeted, long-term engineering of economic development in both countries underscores the power of policy durability, strong governance, and administrative discipline in public sector delivery.

Similar to the Nordic model, Chinese and Singaporean success with delivering on outcomes is the result of high levels of trust. Their models also work, however, because those governments enjoy a degree of agency that only exists in the absence of liberal democracy. But the underlying principle—that government needs some amount of empowerment to make decisions—is not incompatible with U.S. aspirations. Many of the ‘lessons learned’ reports on the successes and failures of the Bipartisan Infrastructure Law and the Inflation Reduction Act lament slow decision-making that was drawn out by consensus-based processes and multiple layers of overlapping approvals across agencies. Adopting principles of agency and empowered decision-making could speed up countless government processes, improving delivery.

None of these models is perfect. Rapid industrialization in China has led to massive pollution issues, and Singapore struggles with an over-reliance on foreign labor and income inequality. Both countries have serious democratic and human rights challenges. In Sweden and Norway, consistent problems with anti-migrant sentiment sow discord and threaten policy successes. Americans should be looking beyond the surface of these policies. We don’t need to copy the designs verbatim, but rather figure out which principles we want to borrow to form the foundation of our own policy agenda.

What those principles should be is an open question, but not an impossible one. Americans value social goods, and they trust their government when they see the impact of their investments, but they also want choice. How do we identify those principles, translate them into real policy designs, and then implement them sustainably? How do we scale up existing trust and rebuild trust that’s broken? How can we create an administrative state that actually delivers on its promises to constituents?

Building a more positive policy vision

There’s no silver bullet, making the revolving door of movements like Abundance even more frustrating. Those wrappings without substance, promising catch-all solutions, take up oxygen that could be better spent taking a step back, trying to figure out what kind of country we want to live in, and learning from those who are making it happen. 

The good news is that there is quite a bit of agreement among the public when it comes to that vision. Like many other communities around the world, we want our lives to be better. We want safe and healthy communities, a stable financial system, freedom of choice, and systems that deliver on the promises they make. Other countries have succeeded in achieving some of those outcomes. It’s worth looking around to see what we could learn.

Clearing the Roadblocks to Transportation Innovation

Breakthrough technologies are emerging rapidly throughout U.S. transportation systems, from AI-enabled traffic management pilots by state DOTs to the continued expansion of automated vehicle (AV) testing and deployment. Yet the institutions responsible for researching, testing, and deploying these innovations were largely designed for a different era, with funding and governance structures organized around distinct transportation modes, limiting their ability to integrate cross-cutting, system-level technologies.

Over the past few years, the Federation of American Scientists (FAS) has engaged hundreds of local governments, researchers, industry leaders, and transportation experts to better understand where the most pressing transportation R&D gaps lie. These insights informed FAS’ recent recommendations to the Department of Transportation (DOT) as it shapes its Research and Development (R&D) Strategic Plan.

Across hundreds of conversations, one message kept coming up: the biggest barriers to transportation innovation are not purely technical – they are structural. Innovation is happening across the ecosystem, but it is often fragmented, slow to deploy, and poorly coordinated across jurisdictions. Addressing structural barriers will require a more coordinated national approach to transportation innovation, including fully funding the Advanced Research Projects Agency-Infrastructure (ARPA-I), strengthening DOT’s role as a national convener, and investing in regional research networks that can bridge the gap between research and real-world deployment.

Infrastructure & Innovation Are Moving at Different Speeds

Our national transportation infrastructure was designed for station wagons, not the innovations of today and those yet to come. Roadways, signals, and transit systems were built around relatively predictable patterns of vehicle ownership and travel behavior. However, today, cities are navigating shared mobility, micromobility devices, automated vehicles, and digitally connected transportation systems operating simultaneously. 

At the same time, many promising technologies already exist that could improve transportation systems. Advanced sensing tools, AI-enabled traffic management systems, and connected infrastructure platforms have the potential to improve safety, reduce congestion, and enhance system resilience. Advanced construction methods and materials are being developed to the point where efforts like ARPA-I’s eXceptional Bridges through Innovative Design and Groundbreaking Engineering (X-BRIDGE) program can realistically set out to answer the question: how can we deliver bridges at half the cost, in half the time, and with twice the lifespan?

The challenge? Deployment. Traditional procurement and financing models are often designed for long-term infrastructure projects rather than rapidly evolving digital technologies. Even when solutions are available, local governments may struggle to evaluate, pilot, and scale them. These challenges are particularly pronounced for smaller jurisdictions with limited technical capacity.

Emerging Technologies Raise New Research Questions

Innovation moves quickly; research and validation do not.

Think of the automated vehicles piloted in a major metropolitan area near you (we’ve seen them drop off some folks at FAS HQ). Demonstrating their safety relative to human drivers requires robust evaluation methods and standardized testing frameworks. Researchers must also better understand how automated systems will integrate with transit networks, emergency response operations, and existing road users.

Meanwhile, data is becoming the backbone of modern transportation systems. Connected vehicles, smart infrastructure, and real-time mobility services all depend on the ability to collect and share large volumes of data. Sounds great, but transportation data ecosystems remain fragmented across jurisdictions and operators, which limits interoperability and coordination.

Those we’ve spoken to have emphasized the importance of ensuring that transportation innovation benefits communities of all sizes. Many emerging technologies are first piloted in large metropolitan areas, leaving smaller cities and rural communities with fewer opportunities to participate in early deployments. Accessibility considerations, including ensuring new mobility systems work for people with disabilities, must also remain central to transportation innovation efforts.

Together, these challenges highlight the need for a more coordinated approach to transportation research, development, and deployment.

ARPA-I: Building a Stronger Transportation Innovation Ecosystem

Addressing these challenges will require a more integrated national transportation R&D strategy – one that combines breakthrough research, regional experimentation, and strong federal coordination.

How can we do it? Congress should fully fund and support ARPA-I. ARPA-I was designed to support high-risk, high-reward research capable of addressing systemic infrastructure challenges. Its milestone-driven model allows researchers to test ambitious ideas quickly and refine them through rapid iteration. This approach is particularly well suited to emerging areas such as digital twins for infrastructure systems, AI-enabled safety technologies, and advanced construction methods.

At the same time, DOT must continue strengthening its role as a national convener for transportation innovation. Federal leadership ensures that lessons learned from pilot programs are shared across jurisdictions, that data standards remain interoperable, and that research investments align with real-world operational needs.

Finally, investing in regional transportation research networks can help bridge the gap between research and deployment. Regional Centers of Excellence that connect universities, public agencies, industry partners, and nonprofit organizations can provide environments for collaborative experimentation, workforce development, and technology transfer. These networks would ensure that smaller jurisdictions have opportunities to participate in innovation efforts.

Turning Research Insights into Action

The insights gathered from local governments, researchers, and industry leaders make one thing clear: the U.S. does not lack ideas for improving its transportation system. What it needs is a research ecosystem capable of turning those ideas into deployed solutions.

Fully funding ARPA-I, strengthening DOT’s innovation capacity, and investing in regional research networks would create a coordinated pipeline for transportation innovation. Congress can make this possible. Sustained appropriations for ARPA-I will ensure the agency can pursue high-risk, high-reward research programs that address systemic infrastructure challenges. Lawmakers can also support transportation innovation by directing resources toward regional research partnerships, Centers of Excellence, and workforce development initiatives that help state and local governments manage emerging technologies.

Congress should also consider policies that modernize procurement and financing pathways for emerging transportation technologies, support interoperable data standards across jurisdictions, and provide targeted technical assistance to state and local agencies implementing advanced infrastructure systems. These steps would bridge the gap between research and deployment, particularly for smaller jurisdictions that often lack the resources to evaluate and implement new technologies.

Taken together, these actions would allow the U.S. to accelerate transportation breakthroughs while ensuring that innovations reach communities across the country. Building the transportation systems of the future will require more than new technologies; it will require building the institutions, partnerships, and policy frameworks needed to bring those technologies to life.

What Happens When Unicorns Exist, But Don’t Exit: How the Reverse-Acquihire Trend Threatens the Future of Innovation

When competition looms, incumbents often tighten their grip on a market by snapping up rivals and rapidly shelving breakthrough technologies that could otherwise accelerate the next wave of innovation. We now find ourselves at another such technological inflection point, where rapid innovation in AI poses a deep challenge to dominant firms.

To sidestep this challenge, large tech firms have adapted the acquihire—traditionally the acquisition of a company primarily for its talent rather than its products—into a structure that evades the regulatory scrutiny that would ordinarily accompany a merger of two companies. The result is the reverse-acquihire: unlike traditional acquihires, these deals avoid antitrust scrutiny by replicating the benefits of an acquisition through alternative structures that stop short of an actual purchase. These deals allow dominant firms to both license a booming startup’s intellectual property and poach its top talent, often hollowing out the remaining company in the process.

The trend first emerged in 2024 and has since become a go-to strategy for incumbents wanting to maintain a competitive advantage, with growing implications for the long-term health of Silicon Valley’s innovation ecosystem. Over the last two years, most major American tech firms, including Google, Amazon, and Microsoft, have turned to this workaround strategy.

Consolidation Across the AI Stack Harms Innovation

This consolidation is especially consequential because it spans the full AI stack—from cloud infrastructure and advanced chips to AI foundation models and the commercial software platforms that dominant firms already control and use to deploy AI at scale. As control over these layers becomes concentrated, costs rise and access becomes restricted, creating structural barriers for new entrants. Because each layer of the stack depends on the others, if even one piece of the AI innovation ecosystem gets locked up, it can stall the entire product innovation cycle. 

In traditionally structured acquisitions, regulators would assess whether a transaction could limit access to critical inputs or otherwise constrain future innovation across the ecosystem in an anticompetitive manner. When these deals go unexamined by regulators, the result is less diverse technological solutions available on the market. Such limitations hinder scientific advancements in AI, reduce safety, and slow the responsible integration of the technology into broader economic and societal contexts.

These deals also pose a concern for other companies that rely on core AI infrastructure services in their own product development. Meta’s 2025 reverse acquihire of Scale AI, a company that provides data annotation services necessary for developing AI models, led competitors such as Google and OpenAI to end their data acquisition contracts with Scale over concerns that their proprietary information would be provided to Meta, a direct competitor. Before the deal was signed, Google had been Scale AI’s largest client. This deal perfectly illustrates how reverse acquihires can destabilize critical industry relationships and cause harm across the entire ecosystem.

Reverse Acquihires Concentrate Gains at the Top and Weaken Incentives for Tech Workers

The rapid emergence of the reverse-acquihire also risks breaking the unspoken social contract that powers Silicon Valley startups: top researchers and tech workers join startups for the chance at meaningful financial upside, autonomy, and accelerated growth, not just a salary. When startups opt for reverse acquihire deals, those key benefits become unevenly distributed. 

Traditionally, joining a startup came with the expectation that all employees would share in the upside of a successful exit or acquisition. With a reverse-acquihire, however, investors and select staff benefit first, and remaining employees are usually left behind financially. This stands in stark contrast to the industry norm and undermines the financial incentive structure that draws talent to early-stage companies in the first place. Instead, knowledge and talent are funneled into the walled gardens of dominant firms, deepening inequality within the industry.

There is also a tangible labor market harm to tech workers. As these deals become more common, prospective hires must now assess not just the strength of a startup’s mission, but whether its founders will pursue a traditional, team-inclusive exit—or quietly cash out early. This added uncertainty deters risk-taking and creates a new class of winners and losers within the startup ecosystem, divided by who benefits from these lucrative deals and who gets left behind. The most sought-after AI researchers, scientists, and tech workers often choose startups for the chance at hyperprofitable exits. Remove that incentive, and they’ll opt for lower-risk roles at already dominant firms, thus reducing breakthrough product developments for us all. 

This trend also impacts potential future founders in the ecosystem. Early hires who successfully exit a startup commonly use those earnings to bootstrap their own venture in the future. This dynamic famously played out across tech in the 2000s with the “PayPal mafia” diffusing both talent and capital in a manner that led to many successful tech companies being formed. Reverse acquihires could weaken this feedback loop.

Reverse Acquihires Could Break the VC Model

In addition to constraining future innovations and reducing the potential financial upside for startup employees, reverse-acquihire deals also threaten to destabilize the broader venture capital ecosystem. Typically, VC-backed companies favor acquisitions over IPOs, as they offer a faster, more predictable financial exit for investors, largely insulated from market volatility. Acquisitions also provide greater economic certainty and immediate liquidity, which can be especially appealing to venture capital firms. In contrast, IPOs are lengthy, complex processes that are highly sensitive to market conditions and external factors that can significantly impact valuations and, ultimately, equity holder and investor payouts. 

If the reverse-acquihire trend continues, VC funds may increasingly lose the opportunity to invest in and fully realize financial returns from the next generation of unicorn AI companies. Without the outsized returns generated by mega-successful exits, the traditional venture capital model becomes difficult to sustain. As a result, the institutionalization of reverse acquihires across the tech industry could have a chilling effect on the flow of capital into smaller companies and startups that are creating the next big breakthrough. This trend is particularly harmful for AI innovation because the field is highly resource-intensive. Cutting-edge research conducted outside of dominant firms requires significant capital for datasets, cloud infrastructure, and specialized hardware, resources that startups typically access through venture capital.

Regulatory Scrutiny Is Rising, but the Risks Extend Beyond AI

With the Federal Trade Commission’s recent announcement of an investigation into these practices, questions about the evasion of antitrust laws will likely receive greater scrutiny. A member of the Commission has even publicly commented on the threats these deals pose to innovation. What remains largely absent, however, is a broader conversation about how the widespread use of these deals could reshape the full innovation ecosystem itself. While reverse-acquihire arrangements have so far been concentrated among AI-adjacent startups, their implications extend far beyond this sector. Similar structures could easily be deployed in other emerging fields, including biotechnology, where a reverse acquihire might prevent life-saving medical innovations from ever reaching the market. 

When dominant firms consolidate technological talent and ideas solely for internal advantage, they do more than just preserve their competitive edge in a market. They also slow the pace of progress across the entire field. If reverse acquihires become the default path for absorbing promising startups, the dynamic competition that has long defined the American technology sector risks being replaced by a cycle of defensive consolidation that suppresses innovation to the detriment of our country as a whole.

2026 Is the Year of the Female Farmer. We Spoke to Five Who Are Also Technologists.

According to the 2022 Census of Agriculture taken by the U.S. Department of Agriculture’s (USDA) National Agricultural Statistics Service (NASS), the United States has 1.2 million female producers, or farmers, which accounts for 36% of the 3.4 million producers nationwide. The producers hail from all over the country, but the state with the most female producers is Texas, a state FAS Impact Fellow Jodie McVane knows well.

Jodie, a Texas resident, has served as the Smart Agriculture and Forestry Impact Fellow at the USDA’s Natural Resources Conservation Service (NRCS) since 2024. Within NRCS, Jodie evaluates additions and modifications to the list of existing Smart Agriculture and Forestry practices which includes summarizing and presenting recommendations to NRCS and Farm Production and Conservation (FPAC) mission area leadership. 

“People who aren’t in agriculture ask me what work we do at the Ecological Science Division of the Natural Resources Conservation Service. I explain that we are taking traditional agricultural practices and developing and implementing new technologies to assess and treat soil, water, air, plant, animal, and energy resource concerns. So I was really excited to be a FAS Impact Fellow doing a tour of service at the USDA. It has allowed me to expand my knowledge of emerging science as it applies to farming, and has given me an opportunity to work with others passionate about American farmers.”

Conversations with Female Farmers

Jodie’s work utilizing the latest technology and evaluating best practices doesn’t happen in isolation. Part of Jodie’s day job as an Impact Fellow consists of building relationships with other agriculturists and farmers across the country. So she was thrilled to facilitate a conversation with female farmers Hannah Breckbill (Decorah, IA), Jess D’Souza (Mt. Horeb, WI), Corrie Scott (Benson, IL), and Lauren Reedy (Ben Lomond, CA) about what brought them to farming and what drives their passion for the field today. 

Although some farmers, like Hannah and Lauren, nurtured an interest in agriculture from early childhood by playing farmer or spotting (and identifying) plants and flowers during their daily soybean walks, other farmers like Corrie Scott (Lauren’s sister) and Jess D’Souza didn’t find their passion for farming until they were adults. Jess didn’t even start thinking about farming until her early twenties. She told Jodie, “I started reading some books that had me questioning where my food comes from. I started farming in my own backyard. Then I started thinking about how exciting it would be to feed other people!” 

Meanwhile, Corrie, who grew up with Lauren on their family’s homestead, never thought she would go into farming. “I didn’t actually enjoy the farm stuff when we were little. I wanted to go out and see the world, and not stay close to home.” Corrie did in fact leave home, and it wasn’t until she spent time in Hawaii after college and started noticing how heavily the state relied on imported food that she wanted to be more intentional about learning where her food was coming from.

Building Community is Crucial to Success

The idea that one should be connected to how and where their food is grown is common among the group. All agreed that one of the vital ways to learn – and teach – was by building community, and as Jodie noted “women are good at that.” Corrie and Lauren actually found community by first realizing that although they don’t look like stereotypical farmers, they’re farmers all the same. “We’ve gotten more connected to other farmers,” Corrie said, “We don’t grow it all, we don’t want to grow it all. We’ve really been able to build a network of fellow farmers that start to connect in ways that candidly I wasn’t fully aware of.” 

Hannah’s story is similar. “As a first generation farmer…I didn’t even imagine I could access land ownership, but then my community made it happen.” The land was up for auction, and Hannah’s neighbors were concerned about potential future landowners exploiting the land or taking it out of agricultural use. They approached the previous owner as a group, asked her to name a price, and bought the land to prevent that from happening. State and local policies build on these community-driven efforts by setting aside funding for farmland access through programs like the Farmland Protection Policy Act (1981) and other conservation programs, grants, and cooperative ownership models. These investments help reduce barriers for first-generation and historically excluded farmers while keeping land in sustainable, community-centered use.

Although Hannah found a community she can rely on – they are currently all pooling their money to buy additional farmland together – she still understands the strength in her unique identity. “When I talk about being a woman farmer, or a queer farmer, I am thinking a lot about how those identities inform the farming that I do. I farm in an intentionally sustainable and diverse way and that is, by nature, really different from the farming that is around me. Being a queer person helps me think up different ways of doing things. Being a woman means that I’m excluded from a lot of systems and mainstream ways of thinking about farming. That gives me a lot of freedom to do something different and do something in alignment with my values.” 

Jess works a state away but she agrees. “There is a robust agricultural community around me, and it has changed over the years to become more diverse in the people who manage farms. There is a change in what it has been and what it is becoming. This area has done well on addressing development and has set limits to protect agricultural lands. People are understanding the benefits of diverse farms.”

Public policy and investment in agricultural innovation, like USDA’s new Regenerative Pilot Program, reinforce this shift. These efforts not only help protect farmland but also strengthen local economies, improve food access, and build more resilient communities. As technologies evolve, many are incorporated into farming practices.

Planting the Next Generation of Farmers

All four farmers agree that they have benefitted from their communities, but they also take seriously the role of giving back and driving innovation for the future of those communities. One area of particular interest is community health. “That is a big heart issue,” Lauren says. “We are in the beginnings of our involvement with a food as medicine project that is starting in the state of Illinois with a few major players like OSF Healthcare.” This is near and dear to both Lauren and Corrie, as they’ve watched family members develop neurological degeneration, which the women attribute to previous farmland chemical use. In addition to diversifying farm products and the responsible use of chemicals, there is renewed interest in smaller-scale, regenerative agricultural practices.

Another sentiment Jodie, Jess, Hannah, Corrie, and Lauren agree on? Agriculture is its own culture. “There is a big social aspect to agriculture, and we should never forget that,” said Jodie. Lauren immediately agreed saying, “Christa Barfield, the CEO of FarmerJawn in Philadelphia, is a friend of mine and her tagline is, ‘Agriculture is the culture.’ I think that circles back to everything we’ve said already. Agriculture is the culture because it is our food culture. It is our health culture. It is our social culture. Everything comes back to the soil.”

With 2026 being the International Year of the Woman Farmer (IYWF) and International Women’s Month being celebrated throughout March, it felt more than appropriate to highlight the experiences of women farmers in America.

Gil on the Hill: You can’t spell funding without “fun”

Me to the U.S. Government:


This presidential administration has promised energy dominance, AI leadership, and a secure nation. What do all these things have in common? They cost money! And a functioning government apparatus to deliver. The first fiscal year of the Trump administration still hobbles on as the Department of Homeland Security flails about without funding and specially funded Immigration and Customs Enforcement (ICE) agents get drafted in to run airport security. We’ve discussed the political potency of airport delays in shutdown scenarios before, and they’ll play their major part yet again as they reach the biggest delays in history.

It’s a busy time and you have things to do. Here are three things worth tracking in science policy as Fiscal Year 2026 (FY26) wraps and we head into FY27.

DHS shutdown deal – too little too late? 

Politico: “‘It looks like everybody is going to stare at each other for a little while,’ Senate Majority Leader John Thune said Wednesday, before nodding at lawmakers’ best hope for getting a deal — their overwhelming desire to leave town.”

A Republican deal, blessed by the White House, to fund DHS, limit ICE enforcement, and include some parts of Trump’s desired voting law changes has hit major roadblocks that leave little optimism for a DHS shutdown conclusion by week’s end. If the Senate cannot pass something by Friday, the House will go home for the recess and the DHS shutdown will linger on at least another week.  

Congress is racing against the spring travel surge and the inevitable viral images of massive Transportation Security Administration (TSA) lines operated by ICE agents. Visuals like these speak thousands of words a floor speech cannot. 

Complicating matters is the growing possibility of another GOP-led reconciliation bill, also being mulled by President Trump, as an alternative path for delivering on various items like the Iran war, DHS funding, and Trump’s voting rules changes, though it’s unclear whether that would survive Senate rules scrutiny.

The White House published “A National Policy Framework for AI”

White House: “The Administration recognizes that some Americans feel uncertain about how this transformative technology will affect issues they care about, like their children’s wellbeing or their monthly electricity bill. These issues, along with other emerging AI policy considerations, require strong Federal leadership to ensure the public’s trust in how AI is developed and used in their daily lives.”

This framework addresses six key objectives regarding families, communities, intellectual property, censorship, innovation, and workforce. It’s the latest push against the “patchwork” of AI regulation as the Administration continues to demand a moratorium on state AI laws and acquiescence to a national set of rules. FAS has concerns about such an approach. 

This all comes against the backdrop of a great deal of legislative activity concerning data centers, and as specific topics come into sharper focus, like a serious bipartisan look at AI use in courts and scrutiny of data privacy and supply chain security.

Science Funding – let it flow

FAS: “The Federation of American Scientists urges the U.S. government to release holds on Congressionally-appropriated funding for scientific research, education, and critical activities at the earliest possible time.”

The Presidential Budget Request for Fiscal Year 2027 should arrive in the coming days, and with it a barrage of budget hearings, oversight, and negotiation over topline priorities. We’re still checking the receipts from FY26, and lots of discrepancies remain to be sorted out about how (and if) that money was properly spent as legally prescribed by Congress. 

The first big flashpoint of the FY27 Presidential Budget arrives with OMB Director Russ Vought testifying on Capitol Hill on April 15th. Military funding, homeland security, and economic affordability will demand attention, but we cannot lose sight of how the slow-rolling of scientific funding is harming economic competitiveness, national security, and global scientific leadership. Core government functions, like grantmaking for research, are being directly and indirectly disrupted. It’s hard to see this Administration delivering on the promises of its agenda without the scientific progress to power us through unprecedented challenges presented by AI, energy, and an evolving national security landscape. My smarter colleagues put it more eloquently here.

Ta-Ta for now! 

There is so much more to discuss, but we’ve only so much word count. Keep up the convo at GRuiz@FAS.org and let us know what you think. 

We’ll be closely tracking science funding budget hearings and FY26 spending accountability, the growing urgency around AI and emerging tech legislation, and the actions of the Trump administration as it helms the greatest scientific enterprise in history in the form of the U.S. government.

Science & Technology Funding Uncertainty Impacts Regular People, Too

The Federation of American Scientists urges the U.S. government to release holds on Congressionally-appropriated funding for scientific research, education, and critical activities at the earliest possible time. This includes removing new or additional administrative processes that create additional layers of review and approval for federal funding opportunities, consistent with the Administration’s stated commitment to reduce administrative overhead in science and technology.

Funding disruptions and delays can create additional uncertainty for scientific programs, forcing individuals to change plans, cancel work, or seek other opportunities. According to a recent survey published by STAT:

When the economic dynamics of any industry change, those with the talent and ability to change direction are often the first to do so. We are already seeing increasing competition for international funding opportunities from American scientists. The prestigious European Research Council, whose grants require researchers to establish their teams in Europe, has seen a fourfold increase in applications from Americans for the program’s Advanced Grants. Last year, a survey by Nature suggested that 75% of American researchers were considering moving overseas.

Changes in the viability of proposals as a result of funding delays and lengthy approval processes also waste the government’s time and money, forcing program officers to reevaluate proposals, often under new direction, when projects being considered for funding are no longer viable. 

What seem like esoteric considerations that impact only a small fraction of the population are, in fact, a much larger concern: historically, funding for science and technology has significant ripple effects that boost the American economy and offer real solutions that improve people’s daily lives. If National Institutes of Health expenditure is limited to that provided in the Fiscal Year 2026 President’s Budget Request, the projected economic impact would cost the U.S. economy approximately $46 billion and over 200,000 jobs (not including NIH jobs already terminated). Cuts to research-performing agencies can lead to losses of capacity, reduced ability to develop and deploy critical and emerging technologies, and diminished capability to maintain datasets that are essential for the functioning of the American economy, as noted in FAS’s ongoing “Dearly Departed Datasets” project.

“Recapturing the urgency that propelled us so far in the last century”, as the President’s letter to OSTP Director Michael Kratsios directs, requires approaching our changing competitive landscape, including federal support for science and technology, with the same level of urgency.

At FAS, we believe that collaboration produces the strongest policy solutions in pursuit of a government that delivers real results for the American people. This includes engaging directly with key stakeholders across the S&T ecosystem who are deeply connected to, impacted by, or implicated in federal policymaking decisions around S&T funding and infrastructure. We also engage directly with members of the public who may have ideas or input about the impact of changes to the structure and level of the federal S&T ecosystem, but may not be in a position to directly influence it. 

If you are interested in this topic, want to offer your perspective, or have ideas for policy solutions to this challenge, we invite you to connect with our team by responding to our open call.

Who Governs Government AI? The Challenge of Federal Implementation

Public Trust and the Stakes of Federal AI Regulation

Americans are skeptical that their government can regulate artificial intelligence. A Pew Research Center study from October 2025 found that while large majorities in countries like India (89%), Indonesia (74%), and Israel (72%) trust their governments to regulate AI effectively, only 44% of Americans say the same, and a greater number, 47%, express distrust. Globally, more people trust the European Union (53%) to regulate AI than the United States (37%). Americans will only realize the benefits of AI if they have confidence that these systems are used safely, fairly, and in ways that improve their lives. 

Trust is not a soft concern: it is the foundation for the adoption, legitimacy, and long-term success of any technology. When people doubt that AI systems are governed responsibly, they are less likely to accept their use in sensitive domains like healthcare, education, public benefits, or national security. Public skepticism can slow innovation, undermine compliance, and deepen polarization around emerging technologies. Encouragingly, this is not a partisan issue. Republicans and Democrats alike have emphasized that trustworthy AI use is a prerequisite for public adoption and lasting legitimacy. If the U.S. is going all-in on AI, then building and maintaining that trust is not simply a communications challenge; it is a governance imperative.

The federal government plays a starring role in meeting that imperative—not only as a regulator, but also as a model user of AI. It deploys some of the most consequential and high-risk AI systems, including those that shape access to benefits, guide law enforcement priorities, manage immigration processes, and support national security decisions. The federal approach to deploying these systems does more than affect service delivery or cost savings; it sets expectations for industry standards, academic research, and public perception of the technology. In effect, the federal government serves as a societal-level proving ground for AI governance. Because it uses AI in high-risk contexts, it must demonstrate that these systems can be governed effectively through transparency, oversight, accountability, and meaningful safeguards. Failure to do so would not only diminish confidence in AI as an economic and societal asset, but weaken the already tenuous trust the public has in government as a manager of risk and opportunity.

Two use cases illustrate this point. One existing high-potential but high-risk application is the Department of Veterans Affairs’ (VA) REACH VET program, which uses predictive models to identify veterans at elevated suicide risk so clinicians can proactively reach out. Because it draws on health records and includes explicit race coding, there are concerns about opaque modeling choices and the possibility of inequitable or incorrect flags. The stakes are high. If veterans feel that an algorithm is driving interventions without clear transparency, clinical guardrails, and accountability, or if it misses potential intervention needs, trust can erode, not only in REACH VET but in the VA’s broader use of AI and its mental health screening and treatment programs.

Planned uses of AI in the current administration are also concerning. CMS’s planned Medicare WISeR Model would test whether “enhanced technologies,” including AI, can “expedite the prior authorization processes for select items and services that have been identified as particularly vulnerable to fraud, waste, and abuse, or inappropriate use.” In practice, this could result in automated systems delaying or denying coverage for medically necessary prescriptions or treatments if a model incorrectly flags them as suspicious. The trust risk is immediate: prior authorization already feels like a barrier to care, and adding AI without appropriate guardrails or adjudication can make delays or denials seem more automated, less explainable, and more complicated to challenge, especially for older or medically complex beneficiaries. If people perceive AI as prioritizing cost control over care, it will quickly undermine confidence in Medicare and in government AI more broadly.

These two use cases show how setting parameters around federal AI governance is not an abstract compliance exercise; it directly shapes whether people experience AI as a helpful tool or as an unaccountable gatekeeper in some of the most sensitive and consequential interactions they have with the government. Federal guidance on incorporating elements like risk assessments, inventory documentation, and recourse processes into agency deployment plays an outsized role in fostering trust in government use of AI. 

Attempting to meet this challenge, both the Biden and Trump administrations have issued major federal guidance on how agencies should govern their use of AI. In 2024, the Biden administration’s Office of Management and Budget released OMB Memorandum M-24-10: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence as part of their role in establishing how federal agencies operate and implement government-wide regulations. This memorandum set forth a government-wide framework for the responsible use of AI, including requirements for risk assessments, transparency, safeguards for high-impact systems, and clear waiver processes. However, we previously found that the growing body of AI-specific guidance, layered on top of existing procurement rules such as the Federal Acquisition Regulation (FAR), can be difficult for agencies and vendors to navigate, particularly when determining at what stage in the acquisition process risk and impact assessments should occur.

Last year, the Trump Administration’s OMB superseded OMB M-24-10 with new guidance: M-25-21: Accelerating Federal Use of AI through Innovation, Governance, and Public Trust. This memo includes elements similar to the Biden administration guidance but, because of its more flexible, agency-driven model, also makes consistent implementation more challenging. The shift toward greater agency discretion could be explained by the Administration’s emphasis on accelerating AI adoption and reducing centralized compliance requirements that could slow experimentation or deployment. Agencies now shoulder greater responsibility for building their own governance and compliance structures, a task that depends heavily on available resources and technical capacity. Well-funded agencies may be positioned to meet these expectations, while smaller or resource-constrained agencies, including those whose tools have the greatest impact on low-income or marginalized communities, may struggle to develop and implement the same safeguards. The result is a growing risk of fragmented governance across the federal landscape, with uneven protections for the people most affected by AI systems.

With this context in mind, it’s worth examining how each administration has approached the challenge of governing high-risk AI, and what these differences mean for agency accountability and public trust.

From “Rights- and Safety-Impacting” to “High-Impact”: A Change in Orientation

AI Risk Thresholds

OMB Guidance M-24-10, issued under the Biden administration, established a government-wide framework for identifying and managing artificial intelligence systems that pose elevated risks to rights or safety. The memo introduced two formal designations: “rights-impacting AI” and “safety-impacting AI.” Rights-impacting systems are those whose outputs serve as a principal basis for decisions or actions with legally significant effects on individuals’ civil rights, liberties, privacy, or equitable access to services such as housing, education, credit, or employment. Safety-impacting systems are those whose decisions or actions have the potential to significantly affect human life or well-being, the environment, critical infrastructure, or national and strategic assets.

Under the Trump administration, OMB M-25-21 replaced the dual “rights-impacting” and “safety-impacting” categories with a single unified definition of “high-impact AI.” This term covers any AI system whose “output serves as a principal basis for a decision or action that has legal, material, binding, or similarly significant effects on individuals or entities.” Examples still include systems affecting civil rights, access to government programs or resources, health and safety, critical infrastructure, or other vital assets. While the framework remains centered on AI systems that serve as a principal basis for consequential decisions, the new memo consolidates the prior rights- and safety-based categories into a single, more generalized standard.

This shift is not merely semantic. The way OMB defines high-risk or high-impact AI determines which federal agencies must apply heightened safeguards, conduct impact assessments, and implement specific oversight and accountability measures. It also signals to contractors, state and local governments, and private-sector partners the types of AI use that warrant the most stringent governance practices. As discussed below, consolidating the categories may affect the scope, clarity, and structure of minimum risk-mitigation requirements across agencies.

Minimum Risk Management Practices 

Reaching a designated risk threshold, whether categorized as “rights- or safety-impacting” under the Biden administration or “high-impact” under the Trump Administration, does not bar an AI system from being used in government. Instead, both administrations require agencies to meet a set of minimum risk management practices before deploying such systems. These requirements, summarized in the table below, establish the baseline safeguards for high-risk AI use.

Table 1. Comparison of minimum risk management practices for Biden and Trump Administration AI Use

| Governance Area | Biden Administration (OMB M-24-10) | Trump Administration (OMB M-25-21) | What Changed |
|---|---|---|---|
| AI Impact Assessment | Required an AI impact assessment that documents, at a minimum, the intended use of the AI system, the potential risks of using that system, and the quality and appropriateness of relevant data. | Requires an AI impact assessment that includes the intended purpose for the AI and its expected benefit; the quality and appropriateness of the relevant data and model capability; the potential impacts of using AI (supported by documentation on potential impacts on the privacy, civil rights, and civil liberties of the public); reassessment scheduling and procedures; related cost analysis; results of review by an independent reviewer within the agency; and risk acceptance (signature from an individual accepting the risk). | Assessment remains central, but shifts from a precautionary, rights-forward framing to a benefit-and-risk tradeoff model with explicit risk acceptance. |
| Predeployment Testing & Validation | Required AI system testing, e.g., ensuring that benefits are real and that risks can be effectively mitigated. | Requires pre-deployment testing as a minimum risk management practice. | Both require pre-deployment testing. |
| Independent Review | Required independent evaluation by the agency Chief AI Officer (CAIO) or an advisory board. | Requires review by an independent reviewer within the agency who was not involved in the development of the AI system; the review must be documented in the impact assessment. | Retains independent review, but widens it to any internal reviewer. |
| Ongoing Monitoring & Reassessment | Required continuous monitoring, regular risk re-evaluation, and mitigation of emerging risks over time. | Requires defined reassessment schedules and procedures but leaves frequency and depth to agency discretion. | Moves from continuous monitoring to periodic reassessment, giving agencies more flexibility. |
| Human Training & Oversight | Required training and assessment of personnel and additional human oversight for decisions affecting rights or safety. | Requires training and assessment of personnel and additional human oversight for high-impact use cases. | Oversight requirements remain. |
| Public Transparency | Required public notice in plain language for AI systems. | Encourages consultation and feedback from end users and the public. | Replaces a specific public notice requirement in M-24-10 with discretionary engagement language in M-25-21. |
| Equity & Civil Rights Protections | Established a specific set of minimum-risk practices for rights-impacting AI; for example, the memo explicitly required agencies to identify and mitigate impacts on equity and fairness, monitor for AI-enabled discrimination, notify affected individuals, and maintain opt-out options. | Because M-25-21 does not identify rights-impacting AI, it lacks the same proactive requirements as Biden-era guidance. Currently, the Administration requires documentation of potential impacts on privacy, civil rights, and civil liberties, and offers remedies or appeals for negatively affected individuals. | Moves from proactive discrimination mitigation and opt-outs to post-hoc remedies and appeals. |
| Remedy & Redress | Required human consideration, notification, remedies, and opt-out options for rights-impacting AI decisions. | Requires consistent remedies or appeals for negatively affected individuals. | Narrows remedies from broad human review and opt-out rights to appeals mechanisms. |

While there are consistent practices across both guidance documents, including AI impact assessments, ongoing monitoring and evaluation, and workforce training, a few elements are noticeably absent from the Trump administration’s M-25-21. For example, the new guidance has no opt-out considerations, a looser procedure for remedies for high-impact systems, and less detail on what ongoing risk monitoring should look like. Independent review under the Biden administration formalized the role of the Chief AI Officer (CAIO) or another agency advisory board, while the Trump administration allows more flexibility in who can review high-impact use cases. 

The Trump administration also differs in including a new element: pilot projects. These pilot AI programs are exempt from full risk-management requirements if they are limited in scale and duration, approved and centrally tracked by the agency’s Chief AI Officer, allow participants to opt in or out with proper notice when possible, and still apply risk-management practices wherever practicable.

Waivers 

If, for whatever reason, agencies decide not to follow the aforementioned minimum practices, both guidance documents offer waivers that give the agency’s CAIO authority to supersede a minimum risk practice. These waivers are centrally tracked and reported to OMB.

Whereas the Biden administration treated these waivers as a procedural element, M-25-21 shifts their tone and purpose. Under this system, an agency’s CAIO, in coordination with relevant officials, can grant a waiver from one or more of the minimum practices whenever strict compliance would impede mission-critical operations or increase overall risk. The memo explicitly allows waivers when compliance might “create an unacceptable impediment” to agency objectives, a broader, more permissive standard than under Biden.

By introducing a flexible pilot program model and more permissive, vaguer language around risk management practices, the framework places substantial discretion in the hands of agencies and their CAIOs. In practice, agencies will exercise this discretion unevenly because they vary widely in governance maturity, technical capacity, and oversight infrastructure, an issue discussed in more detail below. These disparities are compounded by differences in how CAIO roles are structured across agencies: some CAIOs are career officials with dedicated staff and technical expertise, while others serve in an acting or dual-hatted capacity, combining AI oversight with unrelated portfolios and limited institutional support. The absence of uniform qualification requirements or minimum resource standards further increases the likelihood that implementation will diverge significantly across agencies.

Agency Snapshots: A Disjointed Compliance Landscape

Federal AI governance operates at two distinct levels: (1) centralized policy direction issued by OMB, and (2) agency-level compliance processes that operationalize those policies. While policy sets uniform expectations, compliance is implemented through agency-specific procedures shaped by capacity, mission, and internal governance maturity. The interaction between these layers determines whether federal AI governance appears coherent or fragmented.

Under Trump’s OMB Memorandum M-25-21, every federal agency is required to publish both an AI Strategy and an AI Compliance Plan outlining how it will govern its high-impact AI systems and manage its waiver processes. The majority of these plans were published in September and October 2025. The following agencies provide a useful snapshot of how different parts of the government are approaching compliance with this guidance.

Table 2. High Impact AI Processes in Agency Compliance Plans

Department of Homeland Security (DHS)
- Considerations: DHS is one of the most mission-critical and high-risk users of AI in the federal government. Its systems touch national security, border management, transportation safety, and law enforcement, areas that exemplify "high-impact" AI.
- Existing waivers: Undisclosed.
- Waiver process: Waivers require coordination between the DHS Chief AI Officer and relevant officials, supported by a written, system- and context-specific risk assessment. All waivers are tracked in the DHS AI Use Case Inventory, reported to OMB, and re-evaluated annually.
- Considerations for high-impact AI: DHS has its own framework for determining high-risk systems.

General Services Administration (GSA)
- Considerations: GSA manages much of the government's shared digital infrastructure and procurement systems, meaning its approach to AI governance can set precedents for other agencies. In August 2025, GSA launched USAi.gov, a platform to facilitate the adoption of general-purpose AI throughout the federal government, which has come under public scrutiny because it could lead to hasty adoption without proper oversight.
- Existing waivers: Undisclosed.
- Waiver process: GSA's waiver process includes submitting a request to both its CAIO and its EDGE Board, which is chaired by the Deputy Administrator and co-chaired by the Chief Data Officer (CDO)/CAIO; the board reports to the GSA Administrator and includes senior leadership from across the agency.
- Considerations for high-impact AI: GSA has a dedicated AI Safety team that reviews potential high-impact use cases and determines how to ensure compliance.

Department of Labor (DOL)
- Considerations: DOL's programs involve employment, benefits, worker protections, and other areas where "rights-impacting" AI concerns are high, especially around fairness, bias, and automated decision-making. During the Biden administration, DOL published guidance on how to avoid AI-related hiring discrimination; that guidance has since been removed from government websites.
- Existing waivers: DOL's compliance plan states that it does not anticipate any waivers.
- Waiver process: DOL does not have a set process outside of its Impact Assessment Framework (see below).
- Considerations for high-impact AI: DOL has introduced an AI Use Case Impact Assessment Framework, complete with an Impact Assessment Form, which documents potential risks and assigns a risk category. The actual Impact Assessment does not appear to be public.

Court Services and Offender Supervision Agency (CSOSA)
- Considerations: This is a highly specialized, resource-constrained justice-related agency. Its work sits squarely within an area of intense public scrutiny, especially given ongoing debates about the use of algorithms in the criminal justice system and their role in bail, sentencing, and risk assessment decisions.
- Existing waivers: CSOSA's compliance plan states that it does not anticipate any waivers.
- Waiver process: According to its compliance plan, CSOSA is developing its AI Policy to issue, revoke, deny, certify, and track waivers for minimum risk management practices.
- Considerations for high-impact AI: CSOSA has an AI Governance Body that is still developing its procedures.

It is appropriate for agencies to develop risk evaluation approaches that reflect their distinct missions and deployment contexts. Sector-specific risks vary enormously: the harms posed by clinical decision-support tools differ from those associated with benefits administration, law enforcement, or worker protection. Agencies need the flexibility to evaluate risks within their own operational contexts.

However, differences in the content of sectoral risks and differences in the processes agencies use to manage those risks are not the same thing. Allowing agencies wide latitude in interpreting minimum risk management practices and in designing their waiver procedures creates the possibility of procedural divergence, not just divergence in substantive sector-specific requirements. This is where inconsistency becomes a governance problem, not just a technical one.

Agencies have long struggled to apply their own policies consistently across programs and time. A 2023 study of Biden-era AI governance practices found that fewer than 40 percent of mandated actions under key federal AI authorities were verifiably implemented, and that nearly half of federal agencies failed to publish required AI use-case inventories despite demonstrable use of machine-learning systems. Although the Trump administration may grant more discretion in agency AI governance, the record suggests that the ability to apply guidance consistently is a structural issue that spans administrations. Without a baseline of procedural consistency, OMB may struggle in its mission to oversee these compliance plans.

The Importance of State Capacity

When each agency is left to design its own compliance architecture, implementation will also inevitably diverge according to capacity rather than mission need. This will produce a fragmented governance landscape that closely resembles the "patchwork" often cited as a concern in broader AI regulatory debates. Some agencies have already demonstrated the ability to produce relatively robust internal guidance because they possess deeper technical benches, established governance bodies, and more mature risk assessment processes. As shown in Table 2, for example, DHS has established centralized AI governance structures, published detailed AI inventories and use-case documentation, and built out internal review mechanisms to assess high-risk systems. Similarly, DOL has developed agency-wide AI plans and formal oversight processes that integrate risk assessment, transparency, and workforce training components. But smaller, under-resourced agencies, such as the Court Services and Offender Supervision Agency (CSOSA) referenced in Table 2, may struggle even to stand up the foundational processes needed to comply with M-25-21.

At the core of this capacity gap is a workforce challenge. Effective AI governance depends not only on the right guidance but also on sufficient and well-deployed talent. This includes AI talent (staff with expertise in machine learning, data science, and model evaluation) and AI-enabling talent (product managers, procurement specialists, privacy and civil liberties experts, domain specialists, and program managers who can integrate understanding of technical systems into real-world decisions and operations). AI governance bodies, risk assessment frameworks, and waiver adjudication processes cannot function without personnel who understand the technology and the agency's mission context, and who can manage and adapt agency learning and implementation systems over time. A single brilliant CAIO is a smart first step, but long-term effectiveness relies on the agency's ability to enable a "flywheel" of adaptation, growing AI and AI-enabling capacity over time.

The Biden administration's AI Talent Surge focused explicitly on bringing AI and AI-enabling talent into the federal government, and it brought at least 200 experts into public service while advising agencies on structure and capacity-building. While M-25-21 prompts agencies to develop and retain AI and AI-enabling talent, it's unclear how that squares with the fact that 317,000 federal workers left the government in 2025. Because many of the Biden-era AI hires were still within their probationary periods, and therefore vulnerable to layoffs, and because some entire digital teams, such as GSA's 18F and DHS's own AI Corps, were slashed, it is now difficult to determine where federal AI talent resides or how much of that capacity remains in government.

Recent Trump administration moves have recognized some of this gap, but their emphasis on early-career hiring over institutional adaptation is limiting. Late last year, the Office of Personnel Management issued a "Building the AI Workforce of the Future" guidance document emphasizing the launch of TechForce (which hires early-career technologists for limited two-year terms), the Project Management and Data Science Fellows programs, and other early-career-oriented programs.

Conclusion

The divergence between M-24-10 and M-25-21, coupled with the uneven compliance plans that have followed, reveals a federal AI governance landscape marked by structural fragmentation, one that carries real implications for public trust. Agencies with robust technical resources are positioned to comply with these requirements if they choose to, while others will struggle to keep pace. Compounding this disparity, the dissolution of digital teams and the loss of probationary AI hires have obscured the government's understanding of its AI workforce, weakening its capacity to implement trusted and transparent governance.

Ultimately, M-25-21’s compliance plans will not fulfill their intended purpose unless agencies receive the funding, staffing, and political support required to carry them out. A compliance plan is only as strong as the people and resources behind it. Robust, transparent governance is impossible without investments in the civil service capacity needed to implement it, and without such trust-building capacity, agencies risk forgoing the responsible adoption of AI systems that could improve public services and operational effectiveness.

Igniting Innovation: Progress and a Path Forward for Wildfire Policy

Communities nationwide are experiencing longer wildfire seasons and more intense, destructive wildfires. Hotter and drier weather, decades of fire over-suppression leading to the buildup of flammable materials, and increasing development in and around fire-prone areas have transformed wildfire—once a natural and sustainable part of American landscapes—into a major threat. From California to New Jersey, wildfires are taking a toll—costing the United States up to $424 billion annually and displacing tens of thousands of people.

One year after catastrophic wildfires blazed through southern California, the Environmental and Energy Study Institute (EESI) and the Federation of American Scientists (FAS) held a Congressional briefing on emerging solutions to tackle the wildfire crisis and federal policy strategies for getting these solutions into the field. The briefing was followed by a reception co-hosted by FAS and Megafire Action.

Jessica and other panelists

The briefing featured four expert panelists who brought decades of experience building wildfire resilience from space, from sky, from the fireline, and from the law office.

The briefing came at a pivotal moment for U.S. wildfire policy. Since FAS started working on wildfire four years ago, we've helped to cultivate a growing coalition of partners who have elevated wildfire as an urgent concern in Congress, the White House, governors' offices, and boardrooms. All of these stakeholders recognize that the wildfire crisis is solvable. FAS is proud to have been an early champion of the Fix Our Forests Act (S.1462), whose provisions would be a critical step in improving how we use science, data, and technology to build wildfire resilience. We've also seen private-sector and philanthropic investment supercharge innovation. And we've seen stakeholders, from insurance companies to utilities, recognizing the urgent need to act.

In a lively Q&A, panelists answered audience questions about how to act: how to incentivize home hardening, how the federal government can lead wildfire resilience, budget barriers to risk reduction, and more. 

What we discussed

FAS understands that how we govern science, data, and technology will play a huge role in determining whether we achieve wildfire resilience. We know a future of coexisting safely with beneficial fire is possible if we act with urgency, fidelity to science, and a collaborative spirit. FAS is pushing energetically towards this future and we look forward to continuing to work closely with Congress and with partners to that end.

A pre-mortem on OPM’s HR 2.0 initiative: Imagining failure in order to support success

[Editor’s note: full examination here (pdf)]

Large-scale IT modernization projects fail with remarkable regularity. They fail in private companies with strong profit incentives and unified leadership. They fail in state and local governments with narrower missions and simpler constraints. And they fail — often spectacularly — in the federal government. Entire multibillion‑dollar industries exist precisely because implementing large, complex software, including Enterprise Resource Planning (ERP) systems, is hard: technically complex, organizationally disruptive, politically fraught, and culturally destabilizing.

OPM’s new HR 2.0 initiative is therefore entering hostile terrain by default. The initiative aspires to rationalize, consolidate, and modernize a sprawling thicket of federal human resources systems that has grown organically over half a century. It seeks to replace dozens of agency‑specific solutions, hundreds of interfaces, and innumerable manual workarounds with a standardized, interoperable, enterprise‑wide platform capable of supporting modern workforce management.

Those of us who have followed federal HR modernization for years desperately want this effort to succeed. The current HR IT landscape is costly, brittle, opaque, insecure, and increasingly misaligned with how the federal government needs to recruit, manage, pay, and deploy its workforce. As OPM has documented and independent research shows, the federal government likely wastes billions of dollars maintaining hundreds of systems that slow agencies down, force them to duplicate effort, and obfuscate rather than clarify the data required to make business and workforce decisions. Some of these systems are decades old and have been assessed as a high risk to government operations if they should fail. Modernization is no longer optional. It is a prerequisite for addressing mission delivery, workforce planning, and public trust.

But optimism is not a plan, and aspiration is not execution. In our experience, the greatest danger to large federal IT programs is not a lack of good intentions, but rather a failure to fully internalize how hard it is to succeed and avoid the missteps of the past. In that spirit, this paper adopts an intentionally uncomfortable posture: It is a pre‑mortem. Rather than waiting until a future GAO report, Inspector General audit, or congressional hearing explains why this effort underperformed, we imagine that possible failure mode now.

We assume — purely for analytical purposes — that OPM’s HR 2.0 initiative did not achieve its intended outcomes. From that hypothetical vantage, we ask:

  1. What were the most likely failure modes that doomed the effort?
  2. What could OPM, OMB, Congress, and agencies have done earlier to materially reduce those risks?
  3. What questions should OMB and OPM leadership be asking today to avoid that outcome?

OPM, agencies, and OMB have already invested substantial time and energy in planning this effort. This paper is intended to complement — not undermine — that work by surfacing structural vulnerabilities early, when they can still be addressed. This, in turn, can help guide implementation teams’ focus today under the presumption that success, with care and forethought, is possible despite all the barriers.

HR 2.0 is a good idea, but it has risks

At its core, OPM's initiative is a good one and addresses an often-neglected part of the federal business enterprise that has long needed attention from senior leadership. It is also perhaps the most ambitious attempt ever made to solve this problem once and for all. In fact, OPM has made a series of choices related to how it has structured the program — decisions that demonstrate the administration's seriousness and commitment — and we mostly agree with the impulse and reasoning behind each of them.

However, we also know how hard this is going to be, both because of our own experience working on this topic inside the federal government, and because the government has failed at this exact exercise before. In fact, it has already failed at this project this decade.

Learning from DoD’s failure

In March of 2025, Secretary of Defense Pete Hegseth released a memo and then a video highlighting an effort to cut wasteful spending and putting several programs on hold. The first program on his list was the Defense Civilian Human Resources Management System, or DCHRMS (pronounced dee-charms in classic defense bureaucracy style). 

The program had been “intended to streamline a significant portion of the Department’s legacy Human Resources (HR) information technology stack – an important mission we still need to achieve – but further investment in the DCHRMS project would be throwing more good taxpayer money after bad.” In his telling, the program was “780 percent over budget. We’re not doing that anymore.” It was over — the DoD had tried and spectacularly failed to move to a single HR system for just its own department. This high-profile bust is exactly what we mean when we say this type of HR IT modernization is hard and fails all the time.

The project originally started in 2018 as a $36 million, one-year proof of concept and then morphed into a years-long effort to consolidate at least six separate DoD systems based on Oracle’s E-Business Suite software onto a single, DoD-wide Oracle Cloud HCM platform. The project moved from proof of concept into full execution without a formal acquisition or rigorous planning, leaving the systems integrator that managed the legacy systems also in charge of implementing the new system. The department tried mightily to standardize business processes across DoD services. But people familiar with the project say that middle managers and subject-matter experts across the department added requirements that led to scope creep as the project wore on. As the project timeline began slipping, Oracle introduced new technologies and features that led to further slippage to incorporate them into the program baseline. By the time the program was cancelled, it was not clear what DoD’s measures of success were. That the integrator responsible for deploying the new system was simultaneously profiting from operating the legacy systems also presented an obvious conflict of interest. 

The DCHRMS saga illustrates several pitfalls associated with large-scale enterprise IT modernization programs. The failure to maintain a rigorous convergence baseline and guard against scope creep is one. That seems to have been compounded by a business model and accountability structure that were not well thought through or did not adhere to best practices. And ultimately, by the time it became clear that the program was unable to deliver concrete, measurable outcomes in a reasonable and well-defined timeframe, the state of technology had evolved, rendering the program's initial targets irrelevant and forcing the program to rebaseline.

These reasons for failure are not unique to DCHRMS, nor are they unforeseeable. In fact, they are some of the most common failure modes that doom complicated, multi-stakeholder technology implementations in complex organizations. Not even the DoD's generally deferential-to-leadership and can-do culture could overcome them.

Predicting failure modes and mitigating the risks

For OMB and OPM to avoid this fate for HR 2.0, they need to consider the possibility of failure and confront the risks of their approach head-on. DCHRMS was a good idea, too, but good ideas only get you out of the gate, not over the finish line.

Based on our experience, we imagine what the failure modes might be; suggest mitigations; and, crucially, articulate the questions leaders should be asking today to try to avoid failure in the future.

Failure mode 1: The single-award strategy backfires, or Industry doth protest too much

Scenario: In early 2026, GSA awarded the government-wide contract to implement HR 2.0 to a single vendor after a competitive evaluation, but the project quickly went the way of JEDI. Within weeks, two unsuccessful offerors — gigantic tech companies with deep pockets and nothing to lose — filed protests with GAO, arguing that the evaluation criteria unfairly favored the awardee's architecture and that OPM had failed to adequately consider total cost of ownership. GAO sustained one protest on narrow technical grounds, requiring a reevaluation. That process took months, during which a third vendor protested, alleging the revised criteria were designed to reverse-engineer the original outcome. By the time the litigation resolved in late 2027, OPM had lost its original program leadership, the vendor's proposed technical team had largely dispersed to other projects, and three agencies that had been preparing for early implementation had redirected their modernization budgets elsewhere.

The single-award approach isn’t inherently flawed, but it demands unusual discipline in execution and presents significant risks. OPM and GSA must assume protests are coming and prepare accordingly, both legally and programmatically. Their goal should be twofold: make protests less likely to succeed on their merits, and structure the program so that even a sustained protest doesn’t collapse momentum entirely. Here’s how:

Key questions for OPM and OMB leadership to ask: What is our realistic timeline and budget for protest and litigation? And have we structured the program so that a significant delay won’t collapse momentum entirely?

Failure mode 2: An OPM-led, OPM-managed effort becomes a bottleneck, or Herding cats is too hard

Scenario: By mid-2027, the program had a governance problem that no one wanted to name. OPM had established an impressive array of boards, councils, and working groups, but decisions that should have taken days were taking months. Agency requests for configuration changes sat in queues. Escalation paths were unclear. When disputes reached senior leadership, they often got sent back for “more analysis.” Agencies, meanwhile, learned that the fastest path to resolution was to route around OPM entirely: calling OMB, complaining to appropriators, or simply delaying participation until someone else went first.

Centralizing authority at OPM makes sense in theory: It’s the government’s HR agency, and fragmented leadership doomed earlier efforts. But centralization only works if OPM has the capacity to actually lead, and if governance structures enable decisions rather than defer them when agencies push back — and they will push back. This requires deliberate investment in both institutional capability and stakeholder engagement:

Key questions for OPM and OMB leadership to ask: Does OPM have — or can it rapidly build — the programmatic capacity to manage a government-wide implementation? Or will it need to partner more deeply with other organizations to fill critical gaps?

Failure mode 3: Contracting directly with OEMs goes awry, or Integrators were integral after all

Scenario: The idea was novel: contract directly with the software company, make it accountable for delivery, and relegate the big integrators to supporting roles. However, what no one fully appreciated was that the OEM had never run a federal program at this scale. Its government practice was built around licensing, not implementation. When agencies reached out directly, OEM staff struggled to handle their dual role as client navigator and enforcer of standards. Meanwhile, the integrator subcontractors had little incentive to go beyond their narrowly defined task orders; they had learned from experience that exceeding scope meant absorbing risk. By 2028, the program had developed a peculiar dysfunction: The OEM nominally owned delivery but lacked the expertise to drive it, while the integrators who had the expertise lacked the authority or incentive to deploy it. Problems that should have been resolved at the working level instead became triangular disputes among OPM, the OEM, and whichever integrator happened to be nearby when something broke.

Contracting directly with the OEM aligns authority with product knowledge, a real advantage when implementation challenges stem from product limitations. But OEMs are product companies, not delivery organizations. Making this model work requires treating the OEM relationship as a partnership to be developed, not a vendor to be managed, and designing governance structures that compensate for predictable gaps. Here’s how:

Key questions for OPM and OMB leadership to ask: Has the OEM ever successfully delivered a program of comparable scale and complexity? And if not, what governance structures will compensate for that inexperience?

Failure mode 4: Configuration management becomes unmanageable, or The Christmas tree collapses under its own weight

Scenario: No one could point to the moment the baseline stopped being a baseline. It happened gradually, one exception at a time. An agency with a unique pay authority needed a configuration variant; that was legitimate. Another agency's union agreement required a different leave-tracking workflow; that was unavoidable. A third agency wanted to preserve a legacy-report format that its budget office depended on; that was easier to accommodate than to fight. By 2028, the "standard" system had 17 major configuration branches, 42 approved extensions, and an uncounted number of agency-specific workflows that had been implemented as "temporary" accommodations. The vendor's upgrade cycle, originally planned for quarterly releases, slipped to annual. Even then, each upgrade required months of regression testing across configuration variants to ensure that the push of new commercial code didn't break these customizations. The government had succeeded in replacing dozens of legacy systems with a single modern platform. Unfortunately, it had also recreated the fragmentation that modernization was supposed to eliminate.

Configuration pressure is inevitable. Federal HR is governed by multiple statutory regimes, and agencies will always have legitimate reasons for divergence. Some amount of tailoring is unavoidable, but the major goal OPM should consider is how it might govern the solution so that exceptions remain exceptions rather than becoming the new normal. This requires treating configuration management as a strategic discipline, not an administrative afterthought. Here's how:

Key questions for OPM and OMB leadership to ask: Who has the authority to say "no" to an agency's configuration request? And will those with that authority get backup when politically powerful agencies push back?

Failure mode 5: Funding is insufficient, unreliable, or unsustainable, or The passed hat drops

Scenario: The funding model followed a familiar federal pattern: Agencies would pay for their participation, OPM would recover costs through its revolving fund, and the program would be self-sustaining once it reached scale. What the model hadn't accounted for was the messy reality of federal budgeting. Three agencies requested implementation funding in their FY 2027 submissions; two were denied by their appropriations subcommittees, which saw HR modernization as discretionary relative to more pressing mission needs. A fourth agency had funds but couldn't obligate them in time because its interagency agreement (IAA) with OPM was still being negotiated. By 2028, the program's wave schedule had been revised four times, each revision eroding vendor confidence that the government was serious. The OEM, facing uncertain volume, quietly raised its per-agency pricing to hedge against lower-than-expected adoption. Agencies that had been on the fence used the chaos as justification to wait. OPM found itself in the worst of all positions: accountable for a government-wide program but dependent on agencies it couldn't compel and appropriators it couldn't control.

In the federal government, budgets are political documents as much as they are management ones. The way money flows determines who has authority, who bears risk, and who ultimately decides what gets built. A distributed funding model may be administratively orthodox, but it diffuses accountability in ways that are toxic to enterprise modernization. OPM and OMB should treat the funding architecture as a strategic design decision, not an inherited constraint. Here’s how:

Key questions for OPM and OMB leadership to ask: Can this program realistically achieve its objectives through distributed agency funding? Or does success require a level of centralized financial authority that OPM does not currently have, at least at the implementation phase?

Failure mode 6: Agencies are not ready when their turn comes, or Agencies miss their marks

Scenario: OPM and the OEM did their parts. The contract was awarded, governance was established, and the wave schedule was published 18 months in advance. What no one had fully reckoned with was the state of agency readiness. The first wave included four agencies, chosen for their manageable size and expressed enthusiasm. Two were genuinely prepared: Their data was clean, processes were documented, and change management was underway. The other two had overestimated their readiness. One discovered during configuration that its position data existed in three different systems that had never been reconciled; cleaning it would take nine months. The other had documented its "as-is" processes, but those documents described how the agency thought things worked rather than how they actually worked, a gap that surfaced only when end users began testing. OPM faced an uncomfortable choice: delay the wave, which would ripple across the entire schedule; lower quality standards, which would embed problems into the baseline; or push forward and absorb the pain.

Agency readiness isn't just an agency problem; it is also a program problem. OPM can execute flawlessly on procurement, governance, and vendor management and still fail if agencies aren't prepared when their turn comes. That means readiness requirements need to be specific, measurable, and consequential. Agencies have incentives to obscure their lack of readiness until it's too late if they don't think OPM is serious or don't understand what is being asked of them. OPM needs a clear escalation path if agencies miss their marks. Here's how:

Key questions for OPM and OMB leadership to ask: How will OPM distinguish between agencies that are genuinely ready and those that merely believe they are? And what happens when an agency in the latter category is scheduled for an early wave?

Failure mode 7: Executive sponsorship wanes over time, or Government takes its eye off the ball

Scenario: For the first two years of the term, the program had everything it needed: White House attention, OMB backing, an OPM Director with the right skills who made modernization a personal priority, and agency heads who understood they were expected to participate. Then, as happens in nearly every term, political appointees began to turn over. New appointees came in after the midterms with different priorities. The career staff who understood the program’s history remained, but their authority to make decisions — and their air cover when those decisions were contested — evaporated. Agency executives who had reluctantly committed to early waves found that their objections now received a more sympathetic hearing. By 2028, the program still existed: contracts were in place, some agencies had implemented, governance bodies still met. But the urgency was gone. Wave schedules slipped. The program had become one of many initiatives rather than the initiative. It would eventually deliver something — but not the enterprise transformation that had been promised.

Executive attention is a wasting asset. It cannot be sustained indefinitely through personal commitment alone: Eventually, leaders move on, priorities shift, and attention migrates to newer challenges. The only way to protect a multi-year, multi-administration program is to convert early momentum into durable structures that don’t depend on any single leader’s continued engagement, and embed support for this program in the career staff who will need to sustain it across agencies far into the future.

Key question for OPM and OMB leadership to ask: What specific structures, commitments, and artifacts can be put in place in the next 18 months that would make it difficult for a future administration to abandon or significantly scale back this initiative?

OPM needs to manage the risk without paralyzing the program

All of these failure modes are, in our view, plausible, but they are not inevitable. Precisely because they are foreseeable, they are easier to plan around.

The good news is that the risks facing this initiative are not primarily technical. Whomever OPM selects as the vendor will likely be able to deliver some kind of working product. Rather, the risks are mostly governance risks, capacity risks, and incentive-alignment risks. The bad news is that these risks are harder to mitigate, and addressing them requires more than better requirements or more detailed project plans. It requires a conscious effort to design institutions, funding flows, and oversight mechanisms that help the program succeed rather than simply document its shortcomings.

With this in mind, there are some things that OPM and OMB can do to get a better hold on these risks. In particular, there are programmatic opportunities to rethink the use of independent verification and validation (IV&V) and the roles of other actors in the federal ecosystem, such as Congress, GAO, and OMB, which serve as overseers, authorizers, and advisers throughout a transformation. There are also clear lessons from private sector product management that can help reduce the risk of the catastrophic meltdowns that large-scale waterfall implementations invite.

Traditional IV&V models often emphasize exhaustive risk identification, which may be appropriate for discrete, bounded systems. However, for a multi-year, enterprise-scale transformation operating in a high-risk environment, a more useful IV&V strategy would be selective, staged, and decision-oriented. Rather than attempting to monitor everything at once, IV&V should focus on a small number of high-leverage risk domains aligned with the failure modes identified in this paper, such as configuration governance and convergence discipline, funding adequacy and sustainability, agency readiness and sequencing decisions, and executive sponsorship and institutionalization. Within these domains, IV&V should aim not merely to assess compliance, but to inform real decisions: whether to pause, resequence, simplify, or escalate. Stage-gating the implementation based on these factors (rather than just cost, schedule, and performance) can help OPM and OMB course correct when they need to rather than barrel ahead until it is too late.

In conjunction with this, OPM should lean into its relationship with stakeholders such as Congress and GAO. Agencies and program managers often avoid interacting with these bodies because such interactions seem to invite scrutiny and criticism. But a program of this size and ambition will not avoid scrutiny along the way. Engaging these powerful actors earnestly up front offers OPM its best chance to enlist them as allies and secure longer-term sponsorship for this important effort.

Finally, OPM should consider adopting a product operating model for HR 2.0 rather than managing it as a traditional, time-boxed “waterfall” IT project. As our colleagues have previously argued, the product operating model directly counteracts several of the failure modes identified in this paper. Replacing rigid milestone-based delivery with iterative development cycles reduces the risk of configuration complexity spiraling out of control, because problems surface early and can be corrected before they calcify into permanent accommodations. Embedding dedicated technical product managers within the program and empowering them to resolve ambiguity, manage scope, and make tradeoff decisions addresses the governance bottleneck risk by ensuring that day-to-day decisions don’t require constant escalation to senior leadership. Continuous, outcome-based funding aligned to a product model mitigates the funding fragility by shifting the budgetary conversation from one-time project appropriations to sustained investment in a living service. And because the product model emphasizes organizational alignment with outcomes rather than obstacles, it helps insulate the program against the loss of executive sponsorship: durable team structures, institutionalized feedback loops, and transparent progress metrics create continuity that persists even as political leadership turns over. 

In short, the product operating model is an institutional design that would reduce the probability of several of the most dangerous failure scenarios HR 2.0 faces, and in doing so, increase the probability of historic success.

A Final Observation

Federal HR IT modernization is ambitious because it must be. The federal government is one of the largest single employers in the world and it runs on badly outdated and outclassed HR software. The status quo is unsustainable. Fragmentation, duplication, and opacity carry their own costs and risks. The choice, then, is not between risk and safety. It is between managed risk and unmanaged risk. The failure modes outlined in this paper are not predictions — they don’t have to come true — but they are warnings. Each represents a point at which deliberate choices can either compound fragility or build resilience.

The success of this initiative will depend less on technical execution than on leaders willing to confront these choices honestly, early, and repeatedly. That, more than any single procurement or platform decision, will determine whether HR 2.0 becomes a foundation for reform — or another cautionary tale about a federal IT meltdown.

For a more detailed examination of these ideas, please download the full report (pdf) here.

Appendix: A brief history of HR IT modernization and consolidation in the federal government

Early agency‑built HR systems

Federal agencies, like their private sector counterparts, began building enterprise HR and payroll systems in the 1970s. These systems were typically bespoke, homegrown solutions designed to meet the specific needs of individual agencies. They were written in what were then state-of-the-art programming languages such as COBOL and Natural, languages that are now considered archaic even though they continue to underpin mission‑critical systems in the banking industry and across government.

At the time, this approach made sense. Commercial HR software barely existed, and the federal government was already one of the largest employers in the world. Computing helped agencies manage complex, routine tasks like payroll, and the resulting systems were highly customized to each agency’s needs. There was little expectation that systems would interoperate across agencies, as the internet did not yet exist in its modern form. Each organization optimized for its own statutory authorities, workforce composition, and operational needs.

Over time, however, these systems accreted complexity. New laws, pay plans, labor agreements, and reporting requirements were layered on top of old code. Documentation decayed, and the original developers retired, leaving little record of what they had built. Institutional knowledge became increasingly fragile. What remained were systems that worked — until they didn’t — and that were extraordinarily difficult to modify, integrate, or retire.

The commercial ERP wave

In the 1990s, commercial ERP systems, led by vendors such as SAP and PeopleSoft, rose to prominence in the private sector. Initially focused on manufacturing and finance, these platforms gradually expanded to include HR, payroll, and talent management functionality for almost all large enterprises.

By the late 1990s, federal agencies began adopting commercial HR systems, overwhelmingly selecting PeopleSoft. These implementations promised modernization, vendor support, and alignment with private‑sector best practices. In practice, agencies often customized these systems extensively to replicate legacy processes and accommodate federal‑ and agency-specific requirements inherent in the custom solutions they replaced. While modernization occurred, standardization largely did not.

Payroll consolidation: A rare success

By the early 2000s, the federal government operated more than 20 mostly bespoke payroll systems, each of which did the same basic thing: calculate payroll and send instructions to the Department of the Treasury to process. This level of duplication was expensive and untenable, leading the Bush administration to adopt payroll consolidation as a pillar of its newly minted “e‑Government” agenda and the newly established HR Line of Business.

This effort is notable for both its sponsorship and its execution. The initiative was driven directly by OMB Director Mitch Daniels, with strong leadership from OPM Director Kay Coles James. OPM conducted a formal internal competition among federal payroll providers, resulting in the designation of four agencies — the General Services Administration, the Defense Finance and Accounting Service (DFAS), the Department of Agriculture’s National Finance Center, and the Department of the Interior’s National (now Interior) Business Center — as payroll shared service providers, responsible for processing not only their own agency’s payroll but also that of several customer agencies. The Department of Agriculture, for example, processes payroll for the Departments of Homeland Security and Justice, while DFAS processes payroll for the Department of Veterans Affairs and the Department of Energy, among other arrangements.

Despite early skepticism and schedule slippage, payroll consolidation succeeded for the most part. By 2006–2007, most civilian agencies had migrated payroll operations to one of these providers. OPM later estimated that the effort produced roughly $1 billion in savings and cost avoidance, with continued benefits accruing over time, including better standardization and control over the data supply chain from agency systems to OPM.

Crucially, this payroll consolidation was not explicitly authorized by statute or executive order. It succeeded because senior leaders treated it as a management imperative, and they enforced compliance and sustained attention long enough to overcome institutional resistance.

The long plateau: 2007–2024

After payroll consolidation, OMB sought to extend the shared services model to broader HR functionality. Beginning in 2007, OMB issued a series of memoranda requiring agencies to migrate to approved HR shared service centers when modernizing. This policy trajectory culminated in OMB Memorandum M‑19‑16, which established Quality Service Management Offices for HR, financial management, grants management, and cybersecurity.

Despite these directives, progress was uneven. Some agencies modernized successfully; many did not. Fragmentation persisted. A defining feature of this period was the absence of sustained, senior‑level executive sponsorship comparable to that seen during payroll consolidation. HR IT modernization became a perennial priority — but rarely the top priority.

Gil on the Hill February 2026 – Appropriations: Signed, Sealed… Will It Be Delivered?

Implementation Season

January saw us watching whether the government would fund science. February has been about how that funding will be distributed, regulated, and contested.

Appropriations are (mostly) done. The shutdown clock has (mostly) stopped ticking. Congress, federal agencies, and the states are quietly settling into regular business for the year.

Let’s see what’s been going on.

Appropriations: Signed, Sealed… Will It Be Delivered? 

As we’ve been tracking together, on January 23, the President signed into law the Commerce, Justice, Science; Energy and Water Development; and Interior and Environment Appropriations Act (H.R. 6938), locking in funding for core science accounts at the National Science Foundation (NSF), the Department of Energy (DOE), the National Aeronautics and Space Administration (NASA), and related agencies.

One week later, the Senate passed a bipartisan spending package funding most remaining agencies, including Defense, Health and Human Services (HHS), and Education, through September, while providing only a two-week stopgap for the Department of Homeland Security (DHS). That deadline has since come and gone, leaving DHS with lapsed funding and, technically, a partial government shutdown. With political ire as high as ever around Immigration and Customs Enforcement (ICE) and high-profile incidents involving the deaths of U.S. citizens, it’s safe to assume we will not reach agreement on that front any time soon.

So, we avoided the shutdown cliff (mostly), but careful scrutiny about how (and how much) federal funding will be spent is the real point of contention now. Congress gives the money and the Administration is supposed to spend it. This has always been a given, but right now anyone with an interest in how federal funding is spent should be paying attention to the receipts coming in from the Administration. 

For the science enterprise, a key takeaway is that topline federal funding stability comes bundled with increased reporting, compliance, and political scrutiny. Those measures still need to be tested outside the confines of bill text, in the real world where Office of Management and Budget Director Russ Vought operates.

Congress Gets Busy

We saw Congressional science committees active with oversight as well as moving legislation. 

NIH Modernization

The Senate HELP Committee held a full committee hearing titled “Modernizing the National Institutes of Health: Faster Discoveries, More Cures.” NIH Director Dr. Jay Bhattacharya testified amid broader institutional transition, including the prospect of delinking NIH research facility support from research projects in an effort to change the geographic and institutional concentration of NIH funding.

At the same time:

Research Security Pressure

The House Science, Space and Technology (SST) Committee has been active:

Research security and foreign research collaboration remain central congressional pressure points and continue to see legislative activity. 

AI Policy Highlights

Bipartisan Bright Spots for Science Bills

Three bipartisan House SST Committee bills passed the House under suspension:

These are pragmatic, application-driven bills that harness science and evidence-based policy to address difficult challenges. We love to celebrate bipartisan collaboration on science over here at FAS. 

Exec Branch Watch

Tariffs! 

President Trump’s new 10% global tariffs are kicking in as the Supreme Court’s ruling invalidates his most sweeping duties. The president threatened to raise the levy to 15% on certain countries “where appropriate.”

EPA and the GHG Endangerment Finding

The Environmental Protection Agency announced what it described as the largest deregulatory action in U.S. history, eliminating the 2009 Greenhouse Gas Endangerment Finding. Legal challenges are already on the way. The same day, FAS launched the Center for Regulatory Ingenuity, which aims to address systemic regulatory challenges such as this.

NSF Workforce RFI

The National Science Foundation issued an RFI: “Investing in U.S. Workforce Training and Innovation to Advance the President’s Trade Agreements and Ensure America’s Energy Dominance.”

NSF: Quantum and Agriculture

NSF launched a $100M National Quantum and Nanotechnology Research Infrastructure program. It also announced the first awards under AI-ENGAGE, which applies AI systems to modernize agriculture.

OPM Civil Service Rule

The Office of Personnel Management finalized a rule creating Schedule Policy/Career, a new category for certain career federal positions deemed “policy-influencing.” Read the FAS analysis.

Space, Satellites, and AI Infrastructure

The Federal Communications Commission accepted for filing SpaceX’s application for orbital data centers. Concerns remain in the astronomy community over the impacts of satellite proliferation.

Ta Ta for Now

“Structural reform” is the theme reverberating throughout Congress, the White House, and the S&T ecosystem right now. The relationship between the executive branch and Congress is being tested in unprecedented ways as we all witness the “impoundment” fight play out in real time. Federal agencies are ramping up activities that question long-held assumptions about how science is conducted in America. Science policymakers are thinking big about the future of science and opportunities for good-faith reforms.

If it’s all successful and carefully thought out, it could be a welcome and overdue evolution that stands to benefit the public significantly. However, it’s easy to be skeptical. And even if it is successful, will the systems we’re building be durable enough to survive the next political turn?

Onward.

Everything You Need to Know (and Ask!) About OPM’s New Schedule Policy/Career Role: Oversight Resource for OPM’s Schedule Policy/Career Rule

In February 2026, the Office of Personnel Management finalized a rule creating Schedule Policy/Career, a new category for certain career federal positions deemed “policy-influencing.”

When the rule was initially proposed, FAS raised concerns that removing civil servant employment protections could place unnecessary and undesirable political pressure on highly specialized scientific and technical career professionals serving in government. While we appreciate the Administration’s revisions (such as those that clarify competitive service status), important questions remain about how the rule will be implemented in practice, and how it may affect agency operations, workforce motivation, and mission delivery. This is a complex change to a long-standing system, with significant implications for thousands of current and future public servants and great potential for unintended consequences. Congress has both a responsibility and an opportunity to understand the rule’s intent, implementation, and impacts as it works constructively to shape a better federal workforce system that meets the needs of the country.

This resource is designed to help Congressional members and staff (and other oversight bodies) with cross-cutting and agency oversight roles understand what implementation could look like, where discretion lives in implementation, what changes or risks may emerge over time, and what questions may be most useful to ask in oversight activities such as hearings, briefings, letters, commissioned reports, and GAO audits. Potential areas to watch and requests are aimed at specific implementation periods, as part of ongoing engagement with individual agencies, or as part of a more holistic review, with the goal of supporting practical, evidence-based oversight as agencies put the rule into effect.

Background

Under the rule, Schedule P/C positions remain career, merit-based roles, but employees in Schedule P/C roles:

Importantly, career staff who had competitive status can transfer to a non-Schedule P/C role and regain competitive service protections. Staff who are hired into Schedule P/C roles under the merit system can likewise gain competitive status after 2 years and acquire competitive service protections if they move out of Schedule P/C.

This rule gives agencies significantly more authority over certain career policy roles. Whether that authority improves accountability or creates new risks depends almost entirely on how agencies interpret and apply it.

If you’re interested in….

What the rule actually changes (and what it doesn’t)

Understand

Ask agencies (now)

Watch

Why it matters: Early confusion or inconsistency may lead to uneven or overbroad designation of roles, uneven treatment across agencies, or morale challenges due to confusion about goals.

What is policy influencing (and what isn’t)

Understand: Agencies are supposed to identify roles based on whether the duties of the position meet the statutory test for being policy influencing – the role, not the person. Agencies are told to consider roles that:

Agencies should not be considering: 

Ask (after agencies have made determinations): 

Watch: 

Why it matters: Good oversight here is about definitions and consistency.

How positions get put on the schedule

Understand: Agencies identify the roles, OPM vets the justification, and the President makes the final decision to place the positions into Schedule Policy/Career.

Ask (after agencies have made designations)

Ask (on a rolling basis, or in a GAO review 1 year after implementation)

Watch

Why it matters: Much of the practical discretion in this rule rests in how agencies conduct and document this step. Understanding this process is key to meaningful oversight.

What the loss of Chapter 43 & 75 protections really means

Understand: This removes performance improvement plans (PIPs), MSPB appeal rights, and the statutory due process (notice and response) protections that govern removals.

Ask

Watch

Why it matters: The health of the civil service depends on disciplined, fair, and consistent implementation of workforce policies. 

What replaces MSPB and OSC review and whistleblower safeguards

Understand: Schedule P/C employees cannot appeal placement or removal through MSPB or file complaints with the OSC. Instead, the rule requires agencies to create and enforce internal protections against Prohibited Personnel Practices (PPPs), including whistleblower reprisal.

Ask (when agencies have made designations)

Watch

Why it matters: Under the traditional civil service system, MSPB provided an independent judge, a formal record, public decisions, and a visible check on agency action. OSC safeguarded the merit system by protecting federal employees and applicants from prohibited personnel practices and provided a secure channel for federal employees to blow the whistle by disclosing wrongdoing. Under Schedule P/C, legitimacy depends on whether agencies build credible, transparent, and trusted internal safeguards. Visible safeguards are essential for preventing misuse of at-will authority; protecting whistleblowers and dissenters acting in good faith; maintaining workforce trust in policy offices; and ensuring accountability does not become perceived politicization. Agencies need to have strong systems in place before problems arise.

Hiring and merit rules

Understand: Hiring for Schedule P/C roles must still follow merit procedures. New hires in Schedule P/C can gain competitive status in 2 years.

Ask (on a rolling basis)

Watch

Why it matters: Perceptions of politicization may arise here.

Workforce and mission impacts

Understand: These roles will sit in a wide range of functions across agencies. Early concerns about Schedule P/C highlighted risks to sensitive, scientific, technical, or high-demand roles where continuity and ability to “speak truth to power” are valued. 

Ask (on a rolling basis)

Watch

Why it matters: Accountability gains should not come at the expense of mission capacity.

Does this address the performance problem it’s meant to solve?

Understand: OPM justifies the rule using MSPB and FEVS data showing managers struggle to remove poor performers; however, the rule does not introduce a more mature performance management standard. 

Ask (on a rolling basis, or through GAO review)

Watch

Why it matters: Congress should know if the remedy matches the diagnosis.

Data Congress should request via GAO for ongoing tracking and comparison

Request from agencies:

Why it matters: Early transparency prevents speculation and enables evidence-based oversight.

Biosecurity Modernization and Innovation Act of 2026 is a Major Step for U.S. Biosecurity

There are moments where biosecurity reform moves from thought to action. This week is one of them. 

On Wednesday, Senators Tom Cotton (R-AR) and Amy Klobuchar (D-MN) introduced the Biosecurity Modernization and Innovation Act of 2026. Inspired in part by the National Security Commission on Emerging Biotechnology’s recommendation for foundational biosecurity and biosafety oversight reform, this bipartisan effort works to fix a basic problem: the United States still lacks clear, accountable oversight for biological risks.

This bill takes a first practical step. It gives the White House 90 days to assess the state of biosecurity oversight by clarifying roles, measuring effectiveness, listening to practitioners, and identifying gaps in resources and capability. Asking these basic accountability questions is essential to the growth of a strong and secure biotechnology landscape.

This assessment would feed directly into implementation, with executive actions where possible, legislative action where necessary, and structural reforms that could consolidate oversight mechanisms into a central oversight hub on biorisk matters. 

Clarity is not just bureaucratic housekeeping. It is the critical foundation for national security, international competitiveness, and public trust in biotechnology. We have known for decades that the system needs modernization. This bill finally begins some of this critical work.

Ninety (90) days is ambitious for running an interagency process at this scale, but the urgency is warranted.

For half a century, we’ve patched problems as they arose, building a culture of compliance, not curiosity. After the early-2000s anthrax attacks and a series of controversial experiments, for example, the government created and tightened rules around select agents and established policies for “dual use” and pandemic research. Yet those policies were rarely evaluated or consistently implemented, and in some cases the measures would not have prevented the very incidents that prompted their creation. While part of the gap is technical, much of it comes from a paradigm that positions biosecurity and biosafety as a hindrance to innovation, not its enabler.

This misalignment matters. The pace of advance in the last 50 years is dwarfed by the leaps in the last five. Biotechnology today is more diffuse and less expensive than at any point in history. It is digital, global, and increasingly powered by AI. Tools that once required specialized labs may now run from a laptop or be outsourced across borders.

This is the moment to move from fragmented compliance to modern governance: a system that is proactive, coordinated, and accountable. Senators Tom Cotton and Amy Klobuchar recognize that reality. Their bill creates the space to step back, clarify roles, and design biosecurity oversight on purpose – not by accident or after the next crisis.

By preparing credible, bipartisan options now, before the bill becomes law, we can give the Administration a plan that is ready to implement rather than another study that gathers dust. 

At the Federation of American Scientists, this reform agenda builds on work already underway across the biotechnology landscape – from exploring and advancing practical governance approaches for AI-enabled and data-driven biology, strengthening domestic biomanufacturing and scale-up policy, and identifying gaps and coordination challenges in federal oversight, to translating technical expertise into actionable options for policymakers.

FAS is expanding its role as a convener and catalyst over the coming months through additional gatherings, publications, and structured dialogues with government, industry, academia, and civil society leaders to help shape the present and future of biorisk policy.

This next phase will focus on foundational, cross-cutting reform. Many of the ideas on the table today are incremental; they target individual risks or technologies in isolation. However, the challenges we face are systemic. We need institutions and oversight tools that evolve alongside the science, and align innovation, economic growth, and security rather than treating them as tradeoffs. That’s the focus of our work. If you would like to engage, please reach out to any of us via email or on LinkedIn. We are actively working with practitioners, policymakers, and researchers to surface practical insights, align incentives, and ensure that oversight frameworks are both grounded in real-world practice and widely supported.