Getting Federal Hiring Right from the Start

Validating the Need and Planning for Success in the Federal Hiring Process

Most federal agencies consider the start of the hiring process to be the development of the job posting. In reality, the federal hiring process begins well before the job is posted and the official clock starts. Many decisions must be made before an agency can begin hiring. These decisions have a number of dependencies and require collaboration and alignment among leadership, program leaders, budget professionals, hiring managers, and human resources (HR) staff. What happens in these early steps determines not only the speed of the hiring process but often whether it succeeds or fails.

In our previous blog post, we outlined the steps in the federal hiring process and identified bottlenecks impacting the staffing of roles to support permitting activities (e.g., environmental reviews). This post dives into the first phase of the process: planning and validation of the hiring need. This phase includes four steps:  

  1. Allocate Budget for Program Staffing and Workload
  2. Validate Hiring Need Against Workforce, Staffing, and Recruiting Plans
  3. Request Personnel Action to Fill the Job
  4. Launch Recruiting Efforts for the Position

Clear communication and quality collaboration between key actors shape the outcomes of the hiring process. Finance staff allocate the resources and manage the budget. HR workforce planners and staffing specialists identify the types of positions needed across the agency. Program owners and hiring managers define the roles needed to achieve their mission and goals. These stakeholders must work together throughout this phase of the process.

Even with collaboration, challenges can arise at every step of this phase.

Adding to these challenges, the stakeholders engaging in this early phase bring preconceptions based on their past experience. If this phase has previously been delayed, confusing, or difficult, these negative expectations may present a barrier to building effective collaboration within the group.

Breaking Down the Steps

For each step in the Planning and Validation phase, we provide a description, explain what can go wrong, share what can go right, and offer examples from our research where applicable. This work is based on extensive interviews with hiring managers, program leaders, staffing specialists, workforce planners, and budget professionals, as well as on-the-job experience.

Step I. Allocate Budget for Program Staffing and Workload

In this first step, the agency receives budget authorization or program direction funding through the Office of Management and Budget (OMB), derived from new authorizing legislation, annual appropriations, or a continuing resolution. Once the funds are available from the Treasury Department, agency budget professionals allocate the resources to the particular programs inside the agency. They provide instructions regarding how the money is to be used (e.g., staffing, contracting, and other actions to support program execution). For example, the Bipartisan Infrastructure Law (BIL) provided funding for grants to build cell towers and connections to expand internet access to underserved communities. This included a percentage of funds for administration and program staffing.

In an ideal world, program leaders could select the best mix of investments in staffing, contracting, equipment, and services to implement their programs efficiently and effectively. They work toward this in budget requests, but in the real world, some of these decisions are constrained by the specifics of the authorizing legislation, OMB’s interpretation, and the agency’s language in the program direction. 

What Can Go Wrong

What Can Go Right

Step II. Validate Hiring Need Against Workforce, Staffing, and Recruiting Plans

After receiving their budget allocation, program leaders validate their hiring need by matching budget resources with workload needs. A robust workforce plan is invaluable here, as it allows leaders to identify gaps between the current workforce, staffing, and recruiting plans and future workload requirements. Workforce plans that align with budget requests and anticipate future needs enable HR specialists and hiring managers to quickly validate the hiring need and move to request the personnel action.

What Can Go Wrong

What Can Go Right

Step III. Request Personnel Action to Fill the Job and Launch Recruiting Efforts for the Position

Note: Requesting a personnel action to fill the job is a relatively straightforward step, so we have combined it with launching the recruiting process for simplicity.

In most agencies, the hiring manager or program leader fills out an SF-52 form to request the hiring action for a specific position. This includes defining the position title, occupation, grade level, type of position, agency, location, pay plan, and other pertinent information. To do this, they verify that the funding is available and that they have the budget authority to proceed.

Though some recruiting can occur before this step and continue after it, this is the chance to begin recruiting in earnest. This can involve activating agency HR staff, engaging contract recruiting resources if they are available, preparing and launching agency social media announcements, and notifying recruitment networks (e.g., universities, professional organizations, alumni groups, stakeholders, and communities of practice) of the job opening.

What Can Go Wrong

What Can Go Right

Conclusion

Following the "What Can Go Right" practices in this beginning phase can reduce the risk of challenges emerging later in the hiring process. Delays in decision making around budget allocation and program staffing, lingering ambiguity about the positions programs need, and delayed recruiting activities can cut off access to the candidate pools needed for the roles. This ultimately increases the risk of failure and may require a restart of the hiring process.

The best practices outlined here (e.g., anticipating budget decisions, adapting workforce plans, and expanding recruiting) set the stage for a successful hiring process. They require collaboration between HR leaders, recruiters and staffing specialists, budget and program professionals, workforce planners, and hiring managers to make sure they are taking action to increase the odds of hiring a successful employee.

The actions that OPM, the Chief Human Capital Officers Council (CHCO), their agencies, and others are taking as a result of the recent Hiring Experience Memo support many of the practices highlighted in "What Can Go Right" for each step of the process. Civil servants should pay attention to OPM's upcoming webinars, guidance, and other events that aim to support them in implementing these practices.

As noted in our first blog on the hiring process for permitting talent, close engagement between key actors is critical to making the right decisions about workforce configuration and workload management. Starting right in this first phase increases the chances of success throughout the hiring process.

Democratizing Hiring: A Public Jobs Board for A Fairer, More Transparent Political Appointee Hiring Process

Current hiring processes for political appointees are opaque and problematic; job openings are essentially closed off except to those in the right networks. To democratize hiring, the next administration should develop a public jobs board for non-Senate-confirmed political appointments, which includes a list of open roles and job descriptions. By serving as a one-stop shop for those interested in serving in an administration, an open jobs board would bring more skilled candidates into the administration, diversify the appointee workforce, expedite the hiring process, and improve government transparency.

Challenge and Opportunity

Hiring for federal political appointee positions is a broken process. Even though political appointees steer some of the federal government’s most essential functions, the way these individuals are hired lacks the rigor and transparency expected in most other fields.

Political appointment hiring processes are opaque, favoring privileged candidates already in policy networks. There is currently no standardized hiring mechanism for filling political appointee roles, even though new administrations must fill thousands of lower-level appointee positions. Openings are often shared only through word-of-mouth or internal networks, meaning that many strong candidates with relevant domain expertise may never be aware of available opportunities to work in an administration. Though the Plum Book (an annually updated list of political appointees) exists, it does not list vacancies, meaning outside candidates must still have insider information on who is hiring.

These closed hiring processes are deeply problematic because they lead to a non-diverse pool of applicants. For example, current networking-based processes benefit graduates of elite universities, and similar networking-based employment processes such as employee referral programs tend to benefit White men more than any other demographic group. We have experienced this opaque process firsthand at the Aspen Tech Policy Hub; though we have trained hundreds of science and technology fellows who are interested in serving as appointees, we are unaware of any that obtained political appointment roles by means other than networking.

Appointee positions often do not include formal job descriptions, making it difficult for outside candidates to identify roles that are a good fit. Most political appointee jobs do not include a written, formalized job description, a standard best practice across every other sector. A lack of job descriptions makes it almost impossible for outside candidates using the Plum Book to understand what a position entails or whether it would be a good fit. Candidates who are being recruited typically learn more about position responsibilities through direct conversations with hiring managers, which again favors candidates who have direct connections to the hiring team.

Hiring processes are inefficient for hiring staff. The current approach is not only problematic for candidates; it is also inefficient for hiring staff. Through the current process, the Presidential Personnel Office (PPO) or other hiring staff must sift through tens of thousands of resumes submitted through online resume banks (e.g., the Biden administration's "Join Us" form) that are not tailored to specific jobs. They may also end up directly reaching out to candidates who are not actually interested in specific positions, or who lack required specialized skills.

Given these challenges, there is significant opportunity to reform the political appointment hiring process to benefit both applicants and hiring officials.

Plan of Action

The next administration’s Presidential Personnel Office (PPO) should pilot a public jobs board for Schedule C and non-career Senior Executive Service political appointment positions and expand the job board to all non-Senate-confirmed appointments if the pilot is successful. This public jobs board should eventually provide a list of currently open vacancies, a brief description for each currently open vacancy that includes a job description and job requirements, and a process for applying to that position.

A more transparent and open jobs board with job descriptions would have multiple benefits.

Additionally, an open jobs board will allow administration officials to collect key data on applicant background and use these data to improve recruitment going forward. For example, an open application process would allow administration officials to collect aggregate data on education credentials, demographics, and work experience, and modify processes to improve diversity as needed. Having an updated, open list of positions will also allow PPO to refer strong candidates to other open roles that may be a fit, as current processes make it difficult for administration officials or hiring managers to know what other open positions exist.

Implementing this jobs board will require two phases: (1) an initial phase where the transition team and PPO modify their current "Join Us" form to list 50-100 key initial hires the administration will need to make; and (2) a secondary phase where it builds a more comprehensive jobs board, launched in late 2025, that includes all open roles going forward.

Phase 1. By early 2025, the transition team (or General Services Administration, in its transition support capacity) should identify 50-100 key Schedule C or non-career Senior Executive Service hires they think the PPO will need to fill early in the administration, and launch a revised resume bank to collect applicants for these positions. The transition team should prioritize roles that a) are urgent needs for the new administration, b) require specialized skills not commonly found among campaign and transition staff (for instance, technical or scientific knowledge), and c) have no clear candidate already identified. The transition team should then revise the current administration's "Join Us" form to include this list of 50-100 soon-to-be vacant job roles, provide a 2-3 sentence description of each job's responsibilities, and allow outside candidates to explicitly note interest in these positions. This should be a relatively light lift, given the current "Join Us" form is fairly easy to build.

Phase 2. Early in the administration, PPO should build a larger, more comprehensive jobs board that should aim to go live in late 2025 and include all open Schedule C and non-career Senior Executive Service (SES) positions. Upon launch, this jobs board should include open jobs for which no candidate has been identified, and any new Schedule C and non-career SES appointments that open going forward. As described in further detail in the FAQ section, every job listed should include a brief description of the position responsibilities and qualifications, and additional questions on political affiliation and demographics.

During this second phase, the PPO and the Office of Personnel Management (OPM) should identify and track key metrics to determine whether it should be expanded to cover all non-Senate confirmed appointments. For example, PPO and OPM could compare the diversity of applicants, diversity of hires, number of qualified candidates who applied for a position, time-to-hire, and number of vacant positions pre- and post-implementation of the jobs board. 
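To make the evaluation concrete, here is a minimal sketch of such a pre/post metrics comparison. All metric values below are invented for illustration; the field names simply mirror the metrics named above.

```python
from dataclasses import dataclass

# Hypothetical metrics; every value below is invented for illustration.
@dataclass
class HiringMetrics:
    median_days_to_hire: float
    qualified_applicants_per_role: float
    vacant_positions: int

def compare(pre: HiringMetrics, post: HiringMetrics) -> dict:
    """Change in each metric after the jobs board launch (negative is better
    for days-to-hire and vacancies; positive is better for applicants)."""
    return {
        "days_to_hire_delta": post.median_days_to_hire - pre.median_days_to_hire,
        "qualified_applicants_delta": (post.qualified_applicants_per_role
                                       - pre.qualified_applicants_per_role),
        "vacancies_delta": post.vacant_positions - pre.vacant_positions,
    }

pre = HiringMetrics(median_days_to_hire=120, qualified_applicants_per_role=3,
                    vacant_positions=400)
post = HiringMetrics(median_days_to_hire=90, qualified_applicants_per_role=8,
                     vacant_positions=250)
print(compare(pre, post))
```

Tracking even a handful of metrics in this structured way would let PPO and OPM make the expansion decision on evidence rather than anecdote.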

If the jobs board improves key metrics, PPO and OPM should expand the jobs board to all non-Senate confirmed appointments. This would include non-Senate confirmed Senior Executive Service appointee positions.

Conclusion

An open jobs board for political appointee positions is necessary for building a stronger and more diverse appointee workforce and for improving government transparency. An open jobs board will strengthen and diversify the appointee workforce, require hiring managers to explicitly write down job responsibilities and qualifications, reduce hiring time, and ultimately result in more successful hires.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for, whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Frequently Asked Questions
Why can’t candidates just use the Plum Book to find relevant job opportunities?
Outside applicants seeking appointee positions in an administration are frequently advised to read the Plum Book, an annually updated list of political appointments in an administration. However, the Plum Book does not state what positions are currently recruiting, which means that to be effective, a job seeker will need insider information on who is currently hiring.
Why should PPO be responsible for implementing this jobs board?
The Presidential Personnel Office (PPO), in partnership with the U.S. Office of Personnel Management (OPM), should ultimately run and implement the jobs board. As the main entity responsible for recruiting and vetting appointees for a new administration, PPO is in a good position to manage this board. The PPO should also work closely with OPM, as OPM is currently responsible for implementing and updating the electronic Plum Book, per 5 U.S.C. 3330f (the Periodically Listing Updates to Management [PLUM] Act of 2022), and therefore has relevant connections to all agencies with political appointees.
How should PPO manage a jobs platform if they are overwhelmed by the number of applications?

An open jobs board will attract many applicants, perhaps more than the PPO's currently small team can handle. If the PPO is overwhelmed by the number of job applicants, it can either directly forward resumes to hiring managers, thereby reducing the burden on PPO itself, or engage a vetted third party to sort through submitted resumes and provide a smaller, more focused list of applicants for PPO to consider.


PPO can also include questions to enable candidates to be sorted by political experience and political alignment, so as (for instance) to favor those who worked on the president’s campaign.

How will this job board increase efficiency if hiring managers have to develop job descriptions?
Though hiring managers will have to write job descriptions, they will ultimately save time in this process by finding more qualified candidates for specific positions, and by reducing time-to-hire. Some political appointee positions can remain unfilled for months, and an open jobs board would reduce the time-to-hire for those more difficult-to-fill positions. This process will also result in better hires, and ultimately more time savings, since hiring managers will need to have the discipline to think through key qualifications and responsibilities before making a hire.
Are there examples of other governments that have implemented open jobs board processes for appointee positions?
Yes, mainly at the state and local level. The Governor’s Office of Maryland, for example, recruited for political appointee positions like Special Assistant and Chief Innovation Officer positions via open job postings. The incoming administration could work with staff organizing these hiring processes at the state/local level to learn about how they are able to manage these processes efficiently.
What would be the cost of this recommendation?

Both phases of our recommendation would be a relatively light lift, and most costs would come from staff time. Phase 1 costs will solely include staff time; we suspect it will take ⅓ to ½ of an FTE’s time over 3 months to source the 50-100 high-priority jobs, write the job descriptions, and incorporate them into the existing “Join Us” form.


Phase 2 costs will include staff time and cost of deploying and maintaining the platform. We suspect it will take 4-5 months to build and test the platform, and to source the job descriptions. The cost of maintaining the Phase 2 platform will ultimately depend on the platform chosen. Ideally, this jobs board would be hosted on an easy-to-use platform like Google, Lever, or Greenhouse that can securely hold applicant data. If that proves too difficult, it could also be built on top of the existing USAJobs site.

Are there any existing resources the transition teams or PPO can use to build this jobs platform?

PPO may be able to use existing government resources to help fund this effort. The PPO may be able to draw on personnel from the General Services Administration, in its transition support capacity, to assist with sourcing and writing job descriptions. PPO can also work with in-house technology teams at the U.S. Digital Service to build the platform, especially given their considerable expertise in reforming hiring for federal technology positions.

How will the PPO preserve the confidentiality of job functions?
We understand that some political appointee positions have confidential job responsibilities that cannot be disclosed on a fully public jobs board. Even for confidential roles, hiring managers should be able to write simple, one-paragraph job descriptions that provide a high-level overview of a role without disclosing confidential information.
What information should be contained in a job entry?
Every job listed on the jobs board should include the position name, a brief (at least one paragraph) description, and a list of qualifications. Applicants should be able to submit their resumes and cover letters for positions they are interested in. The jobs board should also include additional questions asking candidates for evidence of their political affiliation and previous campaign work, as this will allow hiring teams to specifically identify candidates who share the values of the administration for which they will be working, and demographic information to assess whether jobs are reaching a diverse group of applicants.

Building a Comprehensive NEPA Database to Facilitate Innovation

The Inflation Reduction Act and the Infrastructure Investment and Jobs Act are set to drive $300 billion in energy infrastructure investment by 2030. Without permitting reform, lengthy review processes threaten to make these federal investments one-third less effective at reducing greenhouse gas emissions. That’s why Congress has been grappling with reforming the National Environmental Policy Act (NEPA) for almost two years. Yet, despite the urgency to reform the law, there is a striking lack of available data on how NEPA actually works. Under these conditions, evidence-based policy making is simply impossible. With access to the right data and with thoughtful teaming, the next administration has a golden opportunity to create a roadmap for permitting software that maximizes the impact of federal investments.

Challenge and Opportunity

NEPA is a cornerstone of U.S. environmental law, requiring nearly all federally funded projects—like bridges, wildfire risk-reduction treatments, and wind farms—to undergo an environmental review. Despite its widespread impact, NEPA’s costs and benefits remain poorly understood. Although academics and the Council on Environmental Quality (CEQ) have conducted piecemeal studies using limited data, even the most basic data points, like the average duration of a NEPA analysis, remain elusive. Even the Government Accountability Office (GAO), when tasked with evaluating NEPA’s effectiveness in 2014, was unable to determine how many NEPA reviews are conducted annually, resulting in a report aptly titled “National Environmental Policy Act: Little Information Exists on NEPA Analyses.”

The lack of comprehensive data is not due to a lack of effort or awareness. In 2021, researchers at the University of Arizona launched NEPAccess, an AI-driven program aimed at aggregating publicly available NEPA data. While successful at scraping what data was accessible, the program could not create a comprehensive database because many NEPA documents, namely Environmental Assessments (EAs) and Categorical Exclusions (CEs), are either not publicly available or too hard to access. The Pacific Northwest National Laboratory (PNNL) also built a language model to analyze NEPA documents but confined its analysis to the least common yet most complex category of environmental reviews, Environmental Impact Statements (EISs).

Fortunately, much of the data needed to populate a more comprehensive NEPA database does exist. Unfortunately, it is stored in a complex network of incompatible software systems, limiting both public access and interagency collaboration. Each agency responsible for conducting NEPA reviews operates its own NEPA software. Even the most advanced systems, the Forest Service's SOPA and the Bureau of Land Management's ePlanning, do not automatically publish performance data.

Analyzing NEPA outcomes isn’t just an academic exercise; it’s an essential foundation for reform. Efforts to improve NEPA software have garnered bipartisan support from Congress. CEQ recently published a roadmap outlining important next steps to this end. In the report, CEQ explains that organized data would not only help guide development of better software but also foster broad efficiency in the NEPA process. In fact, CEQ even outlines the project components that would be most helpful to track (including unique ID numbers, level of review, document type, and project type).

Put simply, meshing this complex web of existing software systems into a tracking database would be nearly impossible (not to mention expensive). Luckily, advances in large language models, like the ones used by NEPAccess and PNNL, offer a simpler and more effective solution. With properly formatted files of all NEPA documents in one place, a small team of software engineers could harness PolicyAI’s existing program to build a comprehensive analysis dashboard.

Plan of Action

The greatest obstacles to building an AI-powered tracking dashboard are accessing the NEPA documents themselves and organizing their contents to enable meaningful analysis. Although the administration could address the availability of these documents by compelling agencies to release them, inconsistencies in how they are written and stored would still pose a challenge. That means building a tracking dashboard will require open, ongoing collaboration between technologists and agencies.

Conclusion

The stakes are high. With billions of dollars in federal climate and infrastructure investments on the line, a sluggish and opaque permitting process threatens to undermine national efforts to cut emissions. By embracing cutting-edge technology and prioritizing transparency, the next administration can not only reshape our understanding of the NEPA process but bolster its efficiency too.


Frequently Asked Questions
Why is it important to have more data about Environmental Assessments and Categorical Exclusions?

It’s estimated that only 1% of NEPA analyses are Environmental Impact Statements (EISs), 5% are Environmental Assessments (EAs), and 94% are Categorical Exclusions (CEs). While EISs cover the most complex and contentious projects, using only analysis of EISs to understand the NEPA process paints an extremely narrow picture of the current system. In fact, focusing solely on EISs provides an incomplete and potentially misleading understanding of the true scope and effectiveness of NEPA reviews.


The vast majority of projects undergo either an EA or are afforded a CE, making these categories far more representative of the typical environmental review process under NEPA. EAs and CEs often address smaller projects, like routine infrastructure improvements, which are critical to the nation’s broader environmental and economic goals. Ignoring these reviews means disregarding a significant portion of federal environmental decision-making; as a result, policymakers, agency staff, and the public are left with an incomplete view of NEPA’s efficiency and impact.

Using Home Energy Rebates to Support Market Transformation

Without market-shaping interventions, federal and state subsidies for energy-efficient products like heat pumps often lead to higher prices, leaving the overall market worse off when rebates end. This is a key challenge that must be addressed as the Department of Energy (DOE) and states implement the Inflation Reduction Act’s Home Electrification and Appliance Rebates (HEAR) program. 

DOE should prioritize the development of evidence-based market-transformation strategies that states can implement with their HEAR funding. The DOE should use its existing allocation of administrative funds to create a central capability to (1) develop market-shaping toolkits and an evidence base on how state programs can improve value for money and achieve market transformation and (2) provide market-shaping program implementation assistance to states.

There are proven market-transformation strategies that can reduce costs and save consumers billions of dollars. DOE can look to the global public health sector for an example of what market-shaping interventions could do for heat pumps and other energy-efficient technologies. In that arena, the Clinton Health Access Initiative (CHAI) has shown how public funding can support market-based transformation, leading to sustainably lower drug and vaccine prices, new types of “all-inclusive” contracts, and improved product quality. Agreements negotiated by CHAI and the Bill and Melinda Gates Foundation have generated over $4 billion in savings for publicly financed health systems and improved healthcare for hundreds of millions of people. 

Similar impact can be achieved in the market for heat pumps if DOE and states can supply information to empower consumers to purchase the most cost-effective products, offer higher rebates for those cost-effective products, and seek supplier discounts for heat pumps eligible for rebates. 

Challenge and Opportunity 

HEAR received $4.5 billion in appropriations from the Inflation Reduction Act and provides consumers with rebates to purchase and install high-efficiency electric appliances. Heat pumps, the primary eligible appliance, present a huge opportunity for lowering overall greenhouse gas emissions from heating and cooling, which makes up over 10% of global emissions. In the continental United States, studies have shown that heat pumps can reduce carbon emissions up to 93% compared to gas furnaces across their lifetime.

However, direct-to-consumer rebate programs have been shown to enable suppliers to increase prices unless the subsidies are used to reward innovation and reduce cost. If subsidies are disbursed under a program design that is not aligned with a market-transformation strategy, the result will be a short-term boost in demand followed by a fall-off in consumer interest as prices increase and the rebates are no longer available. This is a problem because program funding for heat pump rebates will support only ~500,000 projects over the life of the program, yet more than 50 million households will need to convert to heat pumps in order to decarbonize the sector.

HEAR aims to address this through Market Transformation Plans, which states are required to submit to DOE within a year of receiving their awards. States will then need to obtain DOE approval before implementing them. We see several challenges with the current implementation of HEAR.

Despite these challenges, DOE has a clear opportunity to increase the impact of HEAR rebates by providing program design support to states for market-transformation goals. To ensure a competitive market and better value for money, state programs need guidance on how to overcome barriers created by information asymmetry: HVAC contractors have a much better understanding of the technical and cost/benefit aspects of heat pumps than consumers do. Consumers cannot work with contractors to select a heat pump solution that represents the best value for money if they do not understand the technical performance of products and how operating costs are affected by the Seasonal Energy Efficiency Ratio (SEER), coefficient of performance, and utility rates. If consumers are not well informed, market outcomes will not be efficient. Currently, consumers do not have easy access to critical information such as the tradeoff between a higher SEER rating's upfront cost and savings on monthly utility bills.

Overcoming information asymmetry will also help lower soft costs, which is critical to lowering the overall cost of heat pumps. Based on studies conducted by New York State, the Solar Energy Industries Association, and DOE, soft costs run over 60% of project costs in some cases and have increased over the past 10 years.

There is still time to act, as thus far only a few states have received approval to begin issuing rebates and state market-transformation plans are still in the early stages of development.

Plan of Action 

Recommendation 1. Establish a central market transformation team to provide resources and technical assistance to states.

To limit cost and complexity at the state level for designing and staffing market-transformation initiatives, the DOE should set up central resources and capabilities. This could either be done by a dedicated team within the Office of State and Community Energy Programs or through a national lab. Funding would come from the 3% of program funds that DOE is allowed to use for administration and technical assistance. 

This team would:

Data collection, analysis, and consistent reporting are at the heart of what this central team could provide states. The DOE data and tools requirements guide already asks states to provide information on the invoice, equipment and materials, and installation costs for each rebate transaction. It is critical that the DOE and state programs coordinate on how to collect and structure this data in order to benefit consumers across all state programs.

A central team could provide resources and technical assistance to State Energy Offices (SEOs) on how to implement market-shaping strategies in a phased approach.

Phase 1. Create greater price transparency and set benchmarks for pricing on the most common products supported by rebates.

The central market-transformation team should provide technical support to states on how to develop benchmarking data on prices available to consumers for the most common product offerings. Consumers should be able to evaluate pricing for heat pumps like they do for major purchases such as cars, travel, or higher education. State programs could facilitate these comparisons by having rebate-eligible contractors and suppliers provide illustrative bids for a set of 5–10 common heat pump installation scenarios, for example, installing a ductless mini-split in a three-bedroom home.

States should also require contractors to provide hourly rates for different types of labor, since installation costs are often ~70% of total project costs. Contractors should only be designated as recommended or preferred service providers (with access to HEAR rebates) if they are willing to share cost data.
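As an illustration of how such bid data could become published benchmarks, the sketch below (scenario names and dollar figures are invented) aggregates contractor bids into a median and interquartile range per installation scenario:

```python
# Hypothetical sketch of turning contractor bids into published benchmarks;
# scenario names and dollar figures are invented for illustration.
from statistics import quantiles

bids = {  # scenario -> contractor bids in dollars
    "ductless mini-split, 3BR home": [11500, 12800, 13900, 15200, 17400],
    "ducted heat pump, 2000 sq ft": [16800, 18200, 19500, 21000, 24500],
}

for scenario, prices in bids.items():
    p25, p50, p75 = quantiles(prices, n=4)  # quartile cut points
    print(f"{scenario}: median ${p50:,.0f}, "
          f"interquartile range ${p25:,.0f}-${p75:,.0f}")
```

Publishing the median and interquartile range, rather than individual bids, gives consumers a usable yardstick without exposing any single contractor's pricing.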

In addition, the central market-transformation team could facilitate information-sharing and data aggregation across states to limit confusion and duplication of data. This will increase price transparency and limit the work required at the state level to find price information and integrate with product technical performance data.

Phase 2. Encourage price and service-level competition among suppliers by providing consumers with information on how to judge value for money.

A second way to improve market outcomes is to promote competition. Price transparency supports this goal, but to achieve market transformation, programs need to go further and help consumers understand which products, given their specific circumstances, offer the best value for money.

In the case of a heat pump installation, this means accounting for fuel source, energy prices, house condition, and other factors that drive the overall value-for-money equation when pursuing improved energy efficiency. Again, information asymmetry is at play. Many energy-efficiency consultants and HVAC contractors offer advice on these topics but have an inherent bias toward promoting their own products and services. There are no easily available public sources of reliable benchmark price/performance data for ducted and ductless heat pumps for homes ranging from 1,500 to 2,700 square feet, a range that covers 75% of single-family homes in the United States.

In contrast, the commercial building sector benefits from very detailed cost information published on virtually every type of building material and specialty trade procedure. Data from sources such as RSMeans provides pricing and unit cost information for ductwork, electrical wiring, and mean hourly wage rates for HVAC technicians by region. Builders of newly constructed single-family homes use similar systems to estimate and manage the costs of every aspect of the new construction process. But a homeowner seeking to retrofit a heat pump into an existing structure has none of this information. Since virtually all rebates are likely to be retrofit installations, states and the DOE have a unique interest in making this market more competitive by developing and publishing cost/performance benchmarking data. 

State programs have considerable leverage that can be used to obtain the information needed from suppliers and installers. The central market-transformation team should use that information to create a tool that provides states and consumers with estimates of potential bill savings from installation of heat pumps in different regions and under different utility rates. This information would be very valuable to low- and middle-income (LMI) households, who are to receive most of the funding under HEAR.

Phase 3. Use the rebate program to lower costs and promote best-value products by negotiating product and service-level agreements with suppliers and contractors and awarding a higher level of rebate to installations that represent best value for money.

By subsidizing and consolidating demand, SEOs will have significant bargaining power to achieve fair prices for consumers.

First, by leveraging relationships with public and private sector stakeholders, SEOs can negotiate agreements with best-value contractors, offering guaranteed minimum volumes in return for discounted pricing and/or longer warranty periods for participating consumers. This is especially important for LMI households, which have limited home improvement budgets and experience disproportionately higher energy burdens, factors that help explain the limited uptake of heat pumps among LMI households to date. In return, contractors gain access to a guaranteed number of additional projects that can offset the seasonal nature of the business.

Second, as states design the formulas used to distribute rebates, they should be encouraged to create systems that allocate a higher proportion of rebates to projects quoted at or below benchmark costs, and a smaller proportion (or none at all) to projects quoted above the benchmark. This will incentivize contractors to offer better value for money, as most projects will not proceed unless they receive a substantial rebate. States should also adopt a process similar to that of New York and Wisconsin in creating a list of approved contractors that adhere to "reasonable price" thresholds.
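One way to express such a benchmark-keyed formula, with purely hypothetical thresholds and amounts, is a simple step schedule:

```python
# Hypothetical rebate schedule keyed to a benchmark price; the 15%
# band and 50% reduction are invented to illustrate the incentive.

def rebate_amount(quote: float, benchmark: float, base_rebate: float) -> float:
    """Full rebate at or below the benchmark, half the rebate up to
    15% above it, and no rebate beyond that."""
    if quote <= benchmark:
        return base_rebate
    if quote <= benchmark * 1.15:
        return base_rebate * 0.5
    return 0.0

# With a $14,000 benchmark and an $8,000 base rebate:
print(rebate_amount(13500, 14000, 8000))  # quoted below the benchmark
print(rebate_amount(15500, 14000, 8000))  # within 15% above it
print(rebate_amount(17000, 14000, 8000))  # well above the benchmark
```

The exact band widths and reduction percentages would be a state policy choice; the point is that the schedule makes the price incentive explicit to contractors before they bid.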

Recommendation 2. For future energy rebate programs, Congress and DOE can make market transformation more central to program design. 

In future clean energy legislation, Congress should direct DOE to include the principles recommended above into the design of energy rebate programs, whether implemented by DOE or states. Ideally, that would come with either greater funding for administration and technical assistance or dedicated funding for market-transformation activities in addition to the rebate program funding. 

For future rebate programs, DOE could take market transformation a step further by establishing benchmarking data for “fair and reasonable” prices from the beginning and requiring that, as part of their applications, states must have service-level agreements in place to ensure that only contractors that are at or below ceiling prices are awarded rebates. Establishing this at the federal level will ensure consistency and adoption at the state level.

Conclusion

The DOE should prioritize funding evidence-based market-transformation strategies to increase the return on investment of rebate programs. Drawing on lessons from U.S.-funded global public health programs, DOE can apply a similar approach to the markets for energy-efficient appliances supported under the HEAR program. Market shaping can tip the balance toward more cost-effective, better-value products and prevent rebates from driving up prices. Successful market shaping will lead to sustained uptake of energy-efficient appliances by households across the country.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Frequently Asked Questions
Why are prices driven up by subsidies?

There is compelling evidence that federal and state subsidies for energy-efficient products can lead to price inflation, particularly in the clean energy space. The federal government has offered tax credits in the residential solar space for many years. While there has been a 64% reduction in the ex-factory photovoltaic module price for residential panels, the total residential installed cost per kWh has increased. The soft costs, including installation, have increased over the same period and are now ~65% or more of total project costs.


In 2021, the National Bureau of Economic Research linked consumer subsidies with firms charging higher prices in the case of Chinese cell phones. The researchers found that introducing competition for eligibility, through techniques such as commitments to price ceilings, mitigated price increases and in some cases even reduced prices, creating more consumer surplus. This research, together with the price increases observed after solar tax credits, shows the risks of government subsidies without market-shaping interventions and their likely detrimental long-term impacts.

In which contexts has market-shaping/transformation work succeeded in the global health sector?

CHAI has negotiated over 140 agreements for health commodities supplied to low- and middle-income countries (LMICs) with over 50 different companies. These market-shaping agreements have generated $4 billion in savings for health systems and touched millions of lives.


For example, CHAI collaborated with Duke University and Bristol Myers Squibb to combat hepatitis C, which affects 71 million people, 80% of whom live in LMICs, mostly in Southeast Asia and Africa [see footnote]. The approval in 2013 of two new antiviral drugs transformed treatment in high-income countries, but the drugs were not marketed or affordable in LMICs. Through its partnerships and programming, CHAI was able to achieve initial pricing of $500 per treatment course for LMICs. Prices fell over the next six years to under $60 per treatment course, while the cost in the West remained over $50,000 per treatment course. This was accomplished through ceiling-price agreements and access programs with guaranteed-volume considerations.


CHAI has also worked closely with the Bill and Melinda Gates Foundation to develop the novel market-shaping intervention called a volume guarantee (VG), where a drug or diagnostic test supplier agrees to a price discount in exchange for guaranteed volume (which will be backstopped by the guarantor if not achieved). Together, they negotiated a six-year fixed price VG with Bayer and Merck for contraceptive implants that reduced the price by 53% for 40 million units, making family planning more accessible for millions and generating $500 million in procurement savings.


Footnote: Hanafiah et al., "Global epidemiology of hepatitis C virus infection: New estimates of age-specific antibody to HCV seroprevalence," J Hepatol (2013), 57(4):1333–1342; Gower E, Estes C, Blach S, et al., "Global epidemiology and genotype distribution of the hepatitis C virus infection," J Hepatol (2014), 61(1 Suppl):S45–S57; World Health Organization, Global Hepatitis Report 2017 (work conducted by the London School of Hygiene and Tropical Medicine).

How are states implementing HEAR?

Many states are still in the early stages of setting up the program and have not yet released their implementation plans. However, New York and Wisconsin indicate on their websites which contractors are eligible to receive rebates through approved contractor networks. Once a household applies to the program, it is put in touch with a contractor from the approved state network, which it must use in order to access the rebate. Those contractors are approved based on completion of training and other basic requirements, such as affirming that pricing will be "fair and reasonable." Currently, there is no detail about specific price thresholds that suppliers must meet (as an indication of value for money) to qualify.

How can states get benchmark data given the variation between homes for heat pump installation?

DOE’s Data and Tools Requirements document lays out the guidelines for states to receive federal funding for rebates. This includes transaction-level data that must be reported to the DOE monthly, including the specs of the home, the installation costs, and the equipment costs. Given that states already have to collect this data from contractors for reporting, this proposal recommends that SEOs streamline data collection and standardize it across all participating states, and then publish summary data so consumers can get an accurate sense of the range of prices.


There will be natural variation between homes, but by collecting a sufficient sample size and overlaying efficiency metrics like the Seasonal Energy Efficiency Ratio (SEER), the Heating Seasonal Performance Factor (HSPF), and the coefficient of performance, states will be able to gauge value for money. Rewiring America and other nonprofits have software that can quickly make these calculations to help consumers understand the return on investment of higher-efficiency (and higher-cost) heat pumps given their location and current heating/cooling costs.
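The underlying arithmetic for such return-on-investment tools is straightforward. Below is a rough sketch (every input is hypothetical) of simple payback for a higher-efficiency unit, using the Heating Seasonal Performance Factor (HSPF), the BTU of heat delivered per watt-hour of electricity:

```python
# Rough payback sketch for a higher-efficiency (higher-cost) heat pump;
# every input below is hypothetical.

def simple_payback_years(premium: float, load_btu: float,
                         hspf_base: float, hspf_better: float,
                         rate_per_kwh: float) -> float:
    """HSPF = BTU of heat delivered per watt-hour, so annual heating
    energy in kWh = load_btu / (hspf * 1000); payback is the price
    premium divided by the annual utility-bill savings."""
    kwh_base = load_btu / (hspf_base * 1000)
    kwh_better = load_btu / (hspf_better * 1000)
    annual_savings = (kwh_base - kwh_better) * rate_per_kwh
    return premium / annual_savings

# $2,500 price premium, 60 million BTU annual heating load,
# HSPF 8.5 vs. 10.5, electricity at $0.15/kWh.
years = simple_payback_years(2500, 60_000_000, 8.5, 10.5, 0.15)
print(f"Simple payback: {years:.1f} years")
```

A real tool would also fold in rebates, fuel-switching savings, and local utility rates, but even this sketch shows why location and energy prices dominate the value-for-money calculation.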

What impact would price transparency and benchmark data have?

In global public health markets, CHAI has promoted price transparency for drugs and diagnostic tests by publishing market surveys that include product technical specifications and links to product performance studies. We show the actual prices paid for similar products in different countries and by different procurement agencies. All of this information has helped public health programs migrate to best-in-class products and improve value for money. States could do the same to empower consumers to choose best-in-class, best-value products and contractors.

Driving Product Model Development with the Technology Modernization Fund

The Technology Modernization Fund (TMF) currently funds multiyear technology projects to help agencies improve their service delivery. However, many agencies abdicate responsibility for project outcomes to vendors, lacking the internal leadership and project development teams necessary to apply a product model approach focused on user needs, starting small, learning what works, and making adjustments as needed. 

To promote better outcomes, TMF could make three key changes to help agencies shift from simply purchasing static software to acquiring ongoing capabilities that can meet their long-term mission needs: (1) provide education and training to help agencies adopt the product model; (2) evaluate investments based on their use of effective product management and development practices; and (3) fund the staff necessary to deliver true modernization capacity. 

Challenge and Opportunity

Technology modernization is a continual process of addressing unmet needs, not a one-time effort with a defined start and end. Too often, when agencies attempt to modernize, they purchase “static” software, treating it like any other commodity, such as computers or cars. But software is fundamentally different. It must continuously evolve to keep up with changing policies, security demands, and customer needs. 

Presently, agencies tend to rely on available procurement, contracting, and project management staff to lead technology projects. However, it is not enough to focus on the art of getting things done (project management); it is also critically important to understand the art of deciding what to do (product management). A product manager is empowered to make real-time decisions on priorities and features, including deciding what not to do, to ensure the final product effectively meets user needs. Without this role, development teams typically march through a vast, undifferentiated, unprioritized list of requirements, which is how information technology (IT) projects result in unwieldy failures. 

By contrast, the product model fosters a continuous cycle of improvement, essential for effective technology modernization. It empowers a small initial team with the right skills to conduct discovery sprints, engage users from the outset and throughout the process, and continuously develop, improve, and deliver value. This approach is ultimately more cost effective, results in continuously updated and effective software, and better meets user needs.

However, transitioning to the product model is challenging. Agencies need more than just infrastructure and tools to support seamless deployment and continuous software updates – they also need the right people and training. A lean team of product managers, user researchers, and service designers who will shape the effort from the outset can have an enormous impact on reducing costs and improving the effectiveness of eventual vendor contracts. Program and agency leaders, who truly understand the policy and operational context, may also require training to serve effectively as “product owners.” In this role, they work closely with experienced product managers to craft and bring to life a compelling product vision. 

These internal capacity investments are not expensive relative to the cost of traditional IT projects in government, but they are currently hard to make. Placing greater emphasis on building internal product management capacity will enable the government to more effectively tackle the root causes that lead to legacy systems becoming problematic in the first place. By developing this capacity, agencies can avoid future costly and ineffective “modernization” efforts.

Plan of Action

The General Services Administration’s Technology Modernization Fund plays a crucial role in helping government agencies transition from outdated legacy systems to modern, secure, and efficient technologies, strengthening the government’s ability to serve the public. However, changes to TMF’s strategy, policy, and practice could incentivize the broader adoption of product model approaches and make its investments more impactful.

The TMF should shift from investments in high-cost, static technologies that will not evolve to meet future needs towards supporting the development of product model capabilities within agencies. This requires a combination of skilled personnel, technology, and user-centered approaches. Success should be measured not just by direct savings in technology but by broader efficiencies, such as improvements in operational effectiveness, reductions in administrative burdens, and enhanced service delivery to users.

While successful investments may result in lower costs, the primary goal should be to deliver greater value by helping agencies better fulfill their missions. Ultimately, these changes will strengthen agency resilience, enabling them to adapt, scale, and respond more effectively to new challenges and conditions.

Recommendation 1. The Technology Modernization Board, responsible for evaluating proposals, should: 

  1. Assess future investments based on the applicant’s demonstrated competencies and capacities in product ownership and management, as well as their commitment to developing these capabilities. This includes assessing proposed staffing models to ensure the right teams are in place.
  2. Expand assessment criteria for active and completed projects beyond cost savings, to include measurements of improved mission delivery, operational efficiencies, resilience, and adaptability. 

Recommendation 2. The TMF Program Management Office, responsible for stewarding investments from start to finish, should: 

  1. Educate and train agencies applying for funds on how to adopt and sustain the product model. 
  2. Work with the General Services Administration’s 18F to incorporate TMF project successes and lessons learned into a continuously updated product model playbook for government agencies that includes guidance on the key roles and responsibilities needed to successfully own and manage products in government.
  3. Collaborate with the Office of Personnel Management (OPM) to ensure that agencies have efficient and expedited pathways for acquiring the necessary talent, utilizing appropriate assessments to identify and onboard skilled individuals. 

Recommendation 3. Congress should: 

  1. Encourage agencies to set up their own working capital funds under the authorities outlined in the TMF legislation. 
  2. Explore the barriers to product model funding in the current budgeting and appropriations processes for the federal government as a whole, and develop proposals for fitting those processes to purpose. 
  3. Direct OPM to reduce procedural barriers that hinder swift and effective hiring. 

Conclusion 

The TMF should leverage its mandate to shift agencies towards a capabilities-first mindset. Changing how the program educates, funds, and assesses agencies will build internal capacity and deliver continuous improvement. This approach will lead to better outcomes, both in the near and long terms, by empowering agencies to adapt and evolve their capabilities to meet future challenges effectively.


Frequently Asked Questions
What is the Technology Modernization Fund and what does it do?

Congress established TMF in 2018 “to improve information technology, and to enhance cybersecurity across the federal government” through multiyear technology projects. Since then, more than $1 billion has been invested through the fund across dozens of federal agencies in four priority areas.

Why is the TMF uniquely positioned to lead product management adoption across the federal government?

The TMF represents an innovative funding model that offers agencies resource flexibility outside the traditional budget cycle for priority IT projects. The TMF team can leverage agency demand for its support to shape not only which projects agencies pursue but how they do them. Through the ongoing demonstration of successful product-driven projects, the TMF can drive momentum toward making the product model approach standard practice within agencies.

How to Build Effective Digital Permitting Products in Government

The success of historic federal investments in climate resilience, clean energy, and new infrastructure hinges on the government’s ability to efficiently permit, site, and build projects. Many of these projects are subject to the National Environmental Policy Act (NEPA), which dictates the procedures agencies must use to anticipate environmental, social, and economic impacts of potential actions. 

Agencies use digital tools throughout the permitting process for a variety of tasks including permit data collection and application development, analysis, surveys, impact assessments, public comment processing, and post-permit monitoring. However, many of the technology tools presently used in NEPA processes are fragmented, opaque, and lack user-friendly features. Investments in permitting technology (such as software, decision support tools, data standards, and automation) could reduce the long timelines that plague environmental review. In fact, the Council on Environmental Quality (CEQ)’s recent report to Congress highlights the “tremendous potential” for technology to improve the efficiency and responsiveness of environmental review.

The Permitting Council, a federal agency focused on improving the “transparency, predictability, and outcomes” of federal permitting processes, recently invested $30 million in technology projects at various agencies to “strengthen the efficiency and predictability of environmental review.” Agencies are also investing in their own technology tools aimed at improving various parts of the environmental review process. As just one example, the Department of Energy’s Coordinated Interagency Transmission Authorizations and Permits (CITAP) Program recently released a new web portal designed to create more seamless communication between agencies and applicants. 

Yet permitting innovation is still moving at a slow pace, and not all agencies have dedicated funding to develop needed permitting technology tools. We recently wrote a case study about the Department of Transportation's Freight Logistics Optimization Works (FLOW) project to illustrate how agency staff can make progress in developing technology without large upfront investments of funding or staff time. FLOW is a public-private partnership that supports transportation industry users in anticipating and responding to supply chain disruptions. Andrew Petritin, whom we interviewed for our case study, was a member of the team that co-created this digital product with users. 

In a prior case study, Jennifer Pahlka and Allie Harris identified strategies that contributed to DOT FLOW’s success in building a great digital product in government. Here, we expand on a subset of these strategies and how they can be applied to build great digital products in the permitting sector. We also point to several permitting technology efforts that have benefited from independently applying similar strategies to demonstrate how agencies with permitting responsibilities can approach building digital products. These case studies and insights serve as inspiration for how agencies can make positive change even when substantive barriers exist. 

Make data function as a compass, not a grade.

Here is an illustrative example of how data can be used as a compass to inform decisions and provide situational awareness to customers. 

The National Telecommunications and Information Administration (NTIA) recently launched a Permitting Mapping Tool to support grantees and others in deploying infrastructure by identifying permitting requirements and potential environmental impacts. This is a tool that both industry and the public can use to see the permitting requirements for a geographic location. The data gathered and shared through this tool is not intended to assess performance; rather, it is used to provide an understanding of the landscape to support decision making. 

NTIA staff recognized the potential value of the Federal Communications Commission's (FCC) existing map of broadband serviceable locations to users in the permitting process and worked to combine it with other available information to support decision making. According to NTIA staff, NTIA's in-house data analysts began prototyping mapping tools to see how they could better support their customers using the FCC's information about broadband serviceable locations. They first overlaid federal land management agency boundaries, showing other agencies where deployments will be required on federal lands in remote and unserved areas where those agencies might not have many staff to process permits. The team then pulled in hundreds of publicly available data sources to illustrate where deployments will occur on state and Tribal lands and in or near protected environmental resources, including wetlands, floodplains, and critical habitats, before releasing the application on NTIA's website with an instructional video. Notably, NTIA staff were able to make substantial progress before receiving Permitting Council funds to support grant applicants in using the environmental screening to improve the efficiency of categorical exclusion processing. 

Build trust. Trust allows you to co-create with your users. Understand your users' needs, don't solicit advice.

Recent recipients of Permitting Council grants for technology development have the opportunity to define their customers and work with them from day one to understand their needs. Rather than assuming their customers' pain points, grant recipients can gather input from customers and build the new technology to meet their needs. Recipients can learn from FLOW's example by building trust early through direct collaboration. Strategies agencies can use to engage customers include defining user personas for their technology; facilitating user interviews to understand needs; visiting field offices to meet customers and learn how technology integrates into their work processes and environment; observing existing technologies to assess opportunities for improvement; and rapidly prototyping, testing, and iterating solutions with user feedback. 

In the longer term, the Permitting Council and other funding entities can drive the adoption of a user-centered approach to technology development through future grant requirements. By incorporating user research, user testing, and agile methodologies into their requests for proposals, the Permitting Council and others can set clear expectations for user involvement throughout the technology development process. 

In comparison to DOT FLOW, where the customers are largely external to the federal government, the customers and stakeholders for permitting technology include internal federal employees with responsibilities for preparing, reviewing, and publishing NEPA documentation. But even if your end-users are within your organization (or even on your same team!), the principles of building trust, co-creating, and understanding user needs still apply. 

Fight trade-off denial. 

When approaching the complex process of permitting and developing technological tools to support customers, it is critical for teams to focus on a specific problem and prioritize user needs to develop a minimum viable product (MVP). A great example of this is the Department of Energy's (DOE) Coordinated Interagency Transmission Authorizations and Permits (CITAP) Program.

DOE collaborated with a development team at the National Renewable Energy Laboratory to create a new portal for interstate transmission applications requiring environmental review and compliance. The team applied a "user-centered, agile approach" to develop and deploy the new tool by the effective date of the new CITAP regulations. The tool streamlines communication by allowing project proponents to track permit status, submit documentation, and communicate with DOE through the platform. Through iterative development, DOE plans to expand the system to additional users, such as cooperating agencies, and to enable those agencies to receive applicant-submitted materials. Deprioritizing these functions in the initial release required trade-offs and a prioritization of user needs, but it enabled the team to meet its deadline and provide near-term value to the public. 

Prioritizing functionality and activities for improvements in permitting can be challenging, but it is critical that agencies make decisions on where to focus and start small with any technology development. Having more accessible data can help inform these trade-off decisions by providing an assessment of problem criticality and impact. 

Don’t just reduce burden – provide value. 

Our partners at EPIC recently wrote about the opportunity to operationalize rules and regulations in permitting technology. They discussed how AI could be applied to: (1) help answer permitting questions using a database of rulings, guidelines, and past projects; (2) verify compliance of permits and analyses with up-to-date legal requirements; and (3) automatically apply legal updates impacting permitting procedures to analyses. These examples illustrate how improving permitting technology can not only reduce burdens on the permitting workforce, but simultaneously provide value by offering decision support tools.

Fund products, not projects. 

The federal government often uses the project funding model for developing and modernizing technology. This approach provides different levels of funding based on a specific waterfall process step (e.g., requirements gathering, development, and operations and maintenance). While straightforward, this model provides little flexibility for iteration and little support for modernization and maintenance. Jen Pahlka, former U.S. Deputy Chief Technology Officer, recommends the government move towards a product funding model that acknowledges that software development never ends; rather, there is ongoing work to improve technology over time. This requires steady levels of funding and has implications for talent.

Permitting teams should consider these different models when developing new technology and tools. Whether procuring technology or developing it in-house, teams should think about how they can support long-term technology development and hire employees with the knowledge, skills, and abilities to effectively manage the technology. Where relevant, agencies should seek to fund products. While product funding models may seem onerous at first, they are likely to have lower costs and enable teams to respond more effectively to user needs over time. 

Several existing resources support product development in government. The 18F unit, part of the General Services Administration (GSA)’s Technology Transformation Services (TTS), helps federal agencies build, share, and buy technology products. 18F offers a number of tools to support agencies with product management. GSA’s IT Modernization Centers of Excellence can support agency staff in using a product-focused approach. The Centers focused on Contact Center, Customer Experience, and Data and Analytics may be most relevant for agencies building permitting technology. Finally, the U.S. Digital Service (USDS) “collaborates with public servants throughout the government”; their staff can assist with product, strategy, and operations as well as procurement and user experience. Agencies can also look to the private sector and NGOs for compelling examples of product development. 

Looking forward

Agency staff can deploy tactics like those outlined above to quickly improve permitting technology using existing authorities and resources. But these tactics should complement, not substitute for, a longer-term systemic strategy for improving the U.S. permitting ecosystem. Center-of-government entities and individual agencies need to think holistically about shared user needs across processes and technologies. As CEQ stated in its report, where there are shared needs, there should be shared services. Government leadership must equip successful small-scale projects with the resources and guidance needed to scale effectively. 

Additionally, the government needs to invest in developing effective permitting technology, hiring technical talent (product managers, developers, user researchers, data scientists) to support these efforts.

As the government continues to modernize to meet emerging challenges, it will need to adopt best practices from industry and compete for the talent to bring their visions to life. Sustained investment in interagency collaboration, talent, and training can shift the status quo from pockets of innovation (such as DOT FLOW and other examples highlighted here) to an innovation ecosystem guided by a robust, shared product strategy.

Many Chutes and Few Ladders in the Federal Hiring Process

How hard can it be to hire into the federal government? Unfortunately, for many, it can be very challenging. A recent conversation with a hiring manager at a federal regulatory agency shed light on some of the difficulties experienced in the hiring process.

A Hiring Experience

Towards the end of 2023, this hiring manager – let’s call her Alex – needed to hire someone to join her team and support environmental review efforts (e.g., reviewing the impact of building a road near a wetland). It was a position she had hired for previously, and she had a strong understanding of the skills and knowledge that a candidate would need to be successful in the role. 

Luckily, she did not need to create a new job description, classify the position, or create a new assessment. Instead, she was able to use the previous job description, job analysis, and assessment, only making small tweaks. This meant that she just needed to work with the HR Specialist (personnel who provide human resource management services within their agency) to finalize the Job Opportunity Announcement (JOA). 

This was happening in December, and given the holidays, she decided to wait on posting the JOA until the new year. They posted the announcement in early January and closed the application a week later. Alex publicized the opening through her network on LinkedIn and through other LinkedIn pages.

Anxious to bring a new teammate on board, Alex was quite frustrated to not receive a certified list of candidates from the HR Specialist until four months later. And when she began her review of the candidates, she was surprised to find only one applicant with the experience and skills she was looking for in the role. Alex reached out to the candidate, but learned that they had already accepted a different role.

Feeling disheartened, Alex contacted the HR Specialist to ask for a second list of candidates, explaining that the other applicants on the initial list were not a match for the role. Alex waited until June to receive the second list, now six months past the posting date, but she was excited to see several qualified candidates for the role. 

Following their evaluation process, Alex made an offer to a candidate from the list. With the tentative offer accepted, they started the background check, which took about two months. The candidate finally started in September, nine months after posting the position.

Now, what happened? Why did it take nine months to fill this position, especially when the job announcement only required small changes?

Mapping the Hiring Process

In our recent blog post, we shared how difficult it is to hire into the federal government and cited a number of different challenges (e.g., outdated job descriptions, reclassifying roles, defining an assessment strategy, etc.) hindering the government from building talent capacity. We decided to map out the federal government’s competitive hiring process to illustrate how the hiring process typically works and where pain points often emerge. Through research (e.g., OPM’s Hiring Process Analysis Tool), expert feedback, and practitioner discussions (e.g., interviews with hiring managers, HR specialists, and leaders involved in permitting activities), we outlined the main steps of the hiring process from workforce planning through candidate selection and onboarding. And we found the process to look similar to a game of Chutes and Ladders. 

As you’ll see, the hiring process is divided into four major phases: (1) aligning the workforce plan and validating the hiring need, (2) developing and posting a job opportunity announcement, (3) assessing the candidates, and (4) selecting a candidate and making an offer. Distributed throughout this process, we identified nine primary pain points that drive the majority of delays experienced by civil servants.

In the first phase, the major challenges are receiving the funding to begin the hiring process and realigning the workforce plan to account for the new role, especially when there is an unanticipated talent surge. In the case of environmental permitting, the Inflation Reduction Act (IRA) and Bipartisan Infrastructure Law (BIL) provided significant funding to support talent acquisition, but agencies had not planned for the talent surge. These new talent needs did not align with either their existing workforce plans or their capacity to recruit, source, assess, and bring new staff onboard. 

Budget availability has also caused a number of delays. The new legislation provides only short-term funding for talent or, in other cases, is unclear about how the funds can be used for staffing. As a result, agencies have hesitated to hire. They are left weighing the tradeoffs of hiring full-time employees with uncertain future funding or hiring for term positions (i.e., roles with a limited duration). Analyzing retention and retirement rates has helped some agencies navigate this decision, but the desire to avoid future layoffs combined with a risk-averse culture has made the process difficult. Some have decided to hire for term positions, but have struggled to recruit talent interested in a short-term role. Ultimately, this short-term funding does not help address long-term talent capacity gaps.

In the second phase of the process, the pain points center around developing and preparing the final job opportunity announcement (JOA). This can be delayed if there is not a position description that accurately captures the role, there is not a strong assessment strategy, or the HR Specialist and Hiring Manager disagree on the language to be used in the announcement. 

With permitting-related positions, many agencies have been looking to hire for interdisciplinary positions that require a range of expertise. OPM, the Permitting Council, and agencies have worked to create interdisciplinary position descriptions and announcements across technical disciplines. Developing the job descriptions, confirming the job duties, and formulating an assessment strategy takes more time, ultimately resulting in a longer time to hire. 

Even for positions that are more regularly used across agencies (e.g., Environmental Protection Specialist), descriptions may be available and up to date, but there may not be an assessment for a particular grade. For example, OPM and the Permitting Council collaborated to create a pooled-hiring, cross-government announcement for a multi-grade Environmental Protection Specialist (EPS). This allowed one JOA to produce a list of candidates that many agencies could use for hiring. Yet the assessment remained somewhat of a bottleneck because standard assessments were not available for each grade (e.g., GS-5 through GS-14) in the JOA, which required more time for assessment development. This is not unique; for many positions, standard assessments do not exist for each grade.

In the third phase, the primary challenge is a lack of qualified candidates. Hiring managers receive a list of candidates (i.e., certificate list) who should meet the requirements of the position, but that is not always the case. This can result from a number of issues ranging from the use of self-assessments and HR Specialists lacking the expertise to screen resumes to insufficient recruiting efforts. 

In discussions with civil servants looking to hire for permitting-related positions, we have heard these challenges. Some agencies have struggled to make time for recruiting efforts given their limited capacity, resulting in a limited applicant pool. Alex’s story provides another example. Alex and her HR Specialist selected a self-assessment strategy, where applicants report their level of experience and skills on a number of questions related to the role. Both self-inflation and humility can distort these scores, resulting in qualified candidates not making it through the process. In reviewing the first certificate list, Alex explained being surprised to see individuals with resumes unrelated to the role. This likely resulted from inaccurate self-assessment scores combined with the HR Specialist’s lack of expertise to effectively screen resumes for the position. Receiving a certificate list with unqualified candidates can significantly delay the process, and in Alex’s case, it resulted in another two-month delay.

In the last phase of the process, delays often result from candidates declining their offer and the time required for background checks. Candidate declinations can be very demotivating for a Hiring Manager who is excited to bring on the candidate they selected, and they are a particular challenge for permitting-related positions. This is often due to constraints in negotiating salaries and relocation requirements, especially when candidates are asked to move to an area with a high cost of living. With today’s high interest rates, some candidates are simply unable to move given the federal government’s stagnant pay structure. 

Improving Alex’s Experience

Thinking back to Alex, her experience highlights some areas where the process went astray, particularly with the assessment and HR Specialist screening. These issues can be addressed through skills-based hiring and better assessment tools such as Subject Matter Expert Qualification Assessment (SME-QA) (i.e., a process that incorporates subject matter expert resume reviews into the screening process). However, an often-overlooked challenge, not highlighted in the process map, is the relationship between the Hiring Manager and HR Specialist. 

The breakdown in communication between HR Specialists and Hiring Managers is not uncommon. Building a strong relationship and shared ownership across the hiring process is key to success. In Alex’s case, she was discouraged from reaching out to the HR Specialist with questions because of the HR team’s limited capacity; the team was centralized across the organization and responsible for servicing many offices. This left Alex frustrated. The process felt like a black box, leaving her with no insight as she waited for her certificate list to eventually arrive. A kickoff meeting with the HR Specialist to align on a timeline, establish roles and responsibilities, and form a line of communication for sharing updates could have opened the black box, fostering a collaborative relationship to identify and mitigate issues as they arose throughout the process.

Summary

When we take a step back and look at this hiring process, it can feel daunting. The average time to hire one candidate is 101 days. In comparison, the private sector takes less than half the time. While it may not be possible for this current process to meet the private sector’s timeline, there are things that can be done to streamline today’s process. In our next series of blog posts, we will dive into each phase in more detail and highlight short-term solutions for hiring managers, HR specialists, program managers, and budget personnel to bypass these chutes — and focus on the ladders.

Scaling Proven IT Modernization Strategies Across the Federal Government

Ten years after the creation of the U.S. Digital Service (USDS) and 18F (an organization within the General Services Administration that helps other government agencies build, buy, and share technology products), the federal government still struggles to buy, build, and operate technology in a speedy, modern, scalable way. Cybersecurity remains a continuous challenge – in part due to a lack of modernization of legacy technology systems. As data fuels the next transformative modernization phase, the federal government has an opportunity to leverage modern practices to leap forward in scaling IT modernization.

While there have been success stories, like IRS’s direct file tool and electronic passport renewal, most government technology and delivery practices remain antiquated and the replacement process remains too slow. Many obstacles to modernization have been removed in theory, yet in practice Chief Information Officers (CIOs) still struggle to exercise their authority to achieve meaningful results. Additionally, procurement and hiring processes, as well as insufficient modernization budgets, remain barriers.

The DoD failed to modernize its 25-year-old Defense Travel System (DTS) after spending $374 million, while the IRS relies on hundreds of outdated systems, including a key taxpayer data processing system built in the 1960s, with full replacement not expected until 2030. The GAO identified 10 critical systems across various agencies, ranging from 8 to 51 years old, that provide essential services like emergency management, health care, and defense. These systems cost $337 million annually to operate and maintain, and many use outdated code and unsupported hardware, posing major security and reliability risks. Despite the establishment of the Technology Modernization Fund (TMF) with a $1.23 billion appropriation, most TMF funds have been expended on a small number of programs, many of which did not solve legacy modernization problems. Meanwhile, the urgency of modernizing antiquated legacy systems to prevent service breakdowns continues to increase.

This memo proposes a new effort to rapidly scale proven IT modernization strategies across the federal government. The result will be a federal government with the structure and culture in place to buy, build, and deliver technology that meets the needs of Americans today and into the future. 

Challenge and Opportunity 

Government administrations typically arrive with a significant policy agenda and a limited management agenda. The management agenda often receives minimal focus until the policy agenda is firmly underway. As a result, the management agenda is rarely well implemented, if it is implemented at all. There are signs of progress in this area: the Biden-Harris Administration published its management agenda in the first year of the Administration, while the Trump Administration did not publish its management agenda until its second year. However, even when the management agenda is published earlier, alignment, accountability, and senior White House and departmental leadership focus on the management agenda are far weaker than for the policy agenda.

Even when a President’s Management Agenda (PMA) has been published and alignment is achieved amongst all the stakeholders within the Executive Office of the President (EOP), the PMA is simply not a priority for Departmental/Agency leadership, and there is little focus on the PMA among Secretaries/Administrators. Each Department/Agency is responsible for a policy agenda and, unless IT or other management agenda items are core to the delivery of the policy agenda, such as at the VA, departmental political leadership pays little attention to the PMA or related activities such as IT and procurement.

An administration’s failure to implement a management agenda and improve government operations jeopardizes the success of that administration’s policy agenda, as poor government technology inhibits successful implementation of many policies. This has been clear during the Biden-Harris administration as departments have struggled to rapidly deliver IT systems to support loan, grant, and tax programs, sometimes delaying or slowing the implementation of those programs. 

The federal government as a whole spends about 80% of its IT budget on maintenance of outdated systems—a percentage that is increasing, not declining. Successful innovations in federal technology and service delivery have not scaled, leaving pockets of success throughout the government that are constantly at risk of disappearing with changes in staff or leadership. 

The Obama administration created USDS and 18F/Technology Transformation Services (TTS) to begin addressing the federal government’s technology problems through improved adoption of modern Digital Services. The Trump administration created the Office of American Innovation (OAI) to further advance government technology management. As adoption of AI accelerates, it becomes even more imperative for the federal government to close the technology gap between where we are and where we need to be to provide the government services that the American people deserve. 

The Biden administration has adapted IT modernization efforts to address the pivot to AI innovations by having groups like USDS, 18F/TTS, and DoD Software Factories increasingly focus on data adoption and AI. With the Executive Order on AI and the Consortium Dedicated to AI Safety, the Biden-Harris administration is establishing guidelines to adopt and properly govern data and AI. These are all positive highlights for IT modernization – but there is a need for these efforts to deliver real productivity. Expectations of citizens continue to increase. Services that take months should take weeks, weeks should take days, and days should take hours. This level of improvement can’t be reached across the majority of government services until modernization occurs at scale. While multiple laws designed to enhance CIO authorities and accelerate digital transformation have been passed in recent years, departmental CIOs still do not have the tools to drive change, especially in large, federated departments where CIOs do not have substantial budget authority.

As the evolution of digital transformation for the government pivots to data, modernized Agencies/Departments can leap forward, while others remain stuck with antiquated systems, unable to derive value from data yet. For more digitally mature Agencies/Departments, the pivot to data-driven decisions, automation, and AI offers the best chance for a leap in productivity and quality gains. AI will fuel the next opportunity to leap forward by shifting focus from the process of delivering digital services (as they become norms) to the data-based insights they ingest and create. For the Agencies/Departments “left behind,” the value of data-driven decisions, automation, and AI could drive rapid transformation and new tools to deliver legacy system modernization.

The Department of Energy’s “Scaling IT Modernization Playbook” offers key approaches to scale IT modernization by prioritizing mission outcomes, driving data adoption, coordinating at scale across government, and valuing speed and agility because “we underrate speed as value.” Government operations have become too complacent with slow processes and modernization; we are increasingly outpaced by faster-developing innovations. Essentially, Moore’s Law (Gordon Moore’s observation that the number of transistors in an integrated circuit doubles every 18 months while cost increases minimally, a pattern since applied more generally to a variety of advanced technologies) is outpacing successful policy implementation.
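The compounding dynamic Moore’s Law describes is easy to underestimate. A minimal back-of-the-envelope sketch makes the point (the 18-month doubling period comes from the paragraph above; the 10% annual gain for a typical process improvement is an illustrative assumption, not a measured figure):

```python
# Compounding (Moore's Law-style) growth vs. a fixed-rate improvement.
# The 18-month doubling period follows the text above; the 10% annual
# gain for process improvement is an illustrative assumption.

def moores_law_factor(years: float, doubling_months: float = 18.0) -> float:
    """Growth factor after `years` when capability doubles every `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

def linear_factor(years: float, annual_gain: float = 0.10) -> float:
    """Growth factor for a process that improves a fixed percentage per year."""
    return 1 + annual_gain * years

for years in (3, 6, 9):
    print(f"After {years} years: technology x{moores_law_factor(years):.0f}, "
          f"process x{linear_factor(years):.1f}")
```

Under these assumptions, technology quadruples in three years while the fixed-rate process improves by less than a third; after nine years the gap is 64x versus 1.9x, which is the sense in which Moore’s Law outpaces policy implementation.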

As a result, the government and the public continue to struggle with dysfunctional legacy systems that make government services difficult to use under normal circumstances and can be crippling in a crisis. The solution to these problems is to boldly and rapidly scale emerging modernization efforts across the federal government enterprise – embracing the leap forward offered by data- and AI-fueled transformation. 

Some departments have delivered notably successful modern systems, such as DHS’s Global Entry site and the State Department’s online passport renewal service. While these solutions are clearly less complex than the IRS’s tax processing system, which the IRS has struggled to modernize, they demonstrate that the government can deliver modern digital services under the right conditions. 

Failed policy implementation due to failed technology implementation and modernization will continue until management and leadership practices associated with modern delivery are rapidly adopted at scale across government, and until modernization efforts and programs are retained between administrations. 

Plan of Action 

Recommendation 1. Prioritize Policy Delivery through the Office of Management and Budget (OMB) and the General Services Administration (GSA) 

First, the Administration should elevate the position of Federal CIO to be a peer to the Deputy Directors at the OMB and move the Federal CIO outside of OMB, while remaining within the Executive Office of the President, to ensure that the Federal CIO and, therefore, the IT and cybersecurity priorities and needs of the departments and agencies have a true seat at the table. The Federal CIO represents positions that are as important as, but different from, those of the OMB Deputy Directors and the National Security Advisor and, therefore, should be a peer to those individuals, just as CIOs are within departments and agencies, where they are required to report to the Secretary or Administrator. Second, elevate the role of the GSA Administrator to a Cabinet-level position, and formally recognize GSA as the federal government’s “Operations & Implementation” agency. These actions will effectively make the GSA Administrator the federal government’s Chief Operating Officer (COO). Policy, financial oversight, and governance will remain the purview of the OMB. Operations & Implementation will become the responsibility of the GSA, aligning existing GSA authorities of TTS, quality & proven shared services, acquisitions, and asset management with a renewed focus on mission-centric government-service delivery. The GSA Administrator will collaborate with the President’s Management Council (PMC), OMB, and agency-level CIOs to coordinate policy delivery strategy with delivery responsibility, thereby integrating existing modernization and transformation efforts from the GSA Project Management Office (PMO) towards a common mission with prioritization on rapid transformation. 

For the government to improve government services, it needs high-level leaders charged with prioritizing operations and implementation—as a COO does for a commercial organization. Elevating the Federal CIO to an OMB Deputy Director and the GSA Administrator to a Cabinet-level position tasked with overseeing “Operations & Implementation” would ensure that management and implementation best practices go hand in hand with policy development, dramatically reducing the delivery failures that put even strong policy agendas at risk.

Recommendation 2. Guide Government Leaders with the Rapid Agency Transformation Playbook 

Building on the success of the Digital Services Playbook, and influenced by the DOE’s “Scaling IT Modernization Playbook,” the Federal CIO should develop a set of “plays” for rapidly scaling technology and service delivery improvements across an entire agency. The Rapid Agency Transformation Playbook will act both as a guide to advise agency leaders in scaling best practices and as a standard against which modernization efforts can be assessed. The government-wide “plays” will be based on practices that have proven successful in the private and public sectors, and will address concepts such as fostering innovation, rapid transformation, data adoption, modernizing or sunsetting legacy systems, and continually improving work processes infused with AI. Where the Digital Services Playbook has helped successfully innovate practices in pockets of government, the Rapid Agency Transformation Playbook will help scale those successful practices across government as a whole. 

A Rapid Agency Transformation Playbook will provide a living document to guide leadership and management, helping align policy implementation with policy content. The Playbook will also clearly lay out expected practices for Federal employees and contractors who collaborate on policy delivery. 

Recommendation 3. Fuel Rapid Transformation by Creating Rapid Transformation Funds

Congress should create Rapid Transformation Funds (RTF) under the control of each Cabinet-level CIO, as well as the most senior IT leader in smaller departments and independent agencies. These funds would be placed in a Working Capital Fund (WCF) controlled by the Cabinet-level CIO or the most senior IT leader in smaller departments and independent agencies. These funds must be established through legislation. For those departments that do not currently have a working capital fund under the control of the CIO, the legislation should create that fund, rather than depending on each department or agency to make a legislative request for an IT WCF. 

This structure will give the CIO of each Department/Agency direct control of the funds. All RTFs must be under the control of the most senior IT leader in each organization, and the authority to manage these funds must not be delegatable. The TMF puts the funds under the control of GSA’s Office of the Federal Chief Information Officer (OFCIO) and a board that has to juggle priorities among GSA OCIO and the individual Departments and Agencies. Direct control will streamline decision making and fund disbursement. It will also create a carrot to complement existing Federal Information Technology Acquisition Reform Act (FITARA) authorities (i.e., the stick). In addition, Congress should evaluate how CIO authorities are measured under FITARA to ensure that CIOs have a true seat at the table.

The legislation will provide the CIO the authority to sweep both expiring and canceling funds into the new WCF. Each department/agency will receive seed funds equal to 10% of its budget. CIOs will have the discretion to distribute the funds for modernization projects throughout their department or agency and to determine the payback model(s) that best suit their organization, including the option to reduce or waive payback for projects, while the overarching model will be cost reimbursement.

The RTF will enhance the CIO’s ability to drive change within their own organization. While Congress has expanded CIO authorities through legislation three different times in recent years, no legislation has redirected funding to CIOs. Most Cabinet-level CIOs control a single-digit percentage of their Department’s IT budget. For example, the Department of Energy CIO directly controls about 5% of DOE’s IT spending. Direct control of a meaningfully sized pool of money that can be allocated to component IT teams enables Cabinet-level CIOs to drive critical priorities, including modernization and security. Without funding, CIO authorities amount to unfunded mandates. The RTF will allow CIOs to enhance their authority by directly funding new initiatives. A reevaluation of the metrics associated with CIO authorities would ensure that CIOs have a true seat at the table.

Recommendation 4. Ensure transformation speed through continuity by establishing a Transformation Advisory Board and department/agency management councils. 

First, OMB should establish a Transformation Advisory Board (TAB) within the Executive Office of the President (EOP), composed of senior, well-respected individuals appointed to serve fixed terms not tied to the presidential administration and sponsored by the Federal CIO. The TAB will be chartered to impact management and technology policy across the government and to recommend changes to governance that impedes rapid modernization and transformation of government. Modeled after the Defense Innovation Board, the TAB will focus on entrenching rapid modernization efforts across administrations and on supporting, protecting, and enhancing existing digital-transformation capabilities. Second, each department and agency should be directed to establish a management council composed of leaders of the department/agency’s administrative functions, including at least IT, finance, human resources, and acquisition, under the leadership of the deputy secretary/deputy administrator. In large departments this may require creating a new deputy secretary or undersecretary position to ensure meaningful focus on the priorities, rather than simply holding meaningless council meetings. This council will ensure that collaborative management attention is given to departmental/agency administration and that leadership beyond the CIO understands IT challenges and opportunities. 

A Transformation Advisory Board will ensure continuity across administrations and changes in agency leadership, preventing the loss of good practices and enabling successful transformative innovations to take root and grow without the breaks and gaps that accompany transitions between administrations. The management council will ensure that modernization is a priority of departmental/agency leadership beyond the CIO.

Ann Dunkin contributed to an earlier version of this memo.

This idea was originally published on November 13, 2020; we’ve re-published this updated version on October 22, 2024.

Frequently Asked Questions
We have given CIOs lots of authority and nothing has changed. Why should we do this now? What difference will it make?

While things have not changed as much as we would like, departments and agencies have made progress in modernizing their technology products and processes. Elevating the GSA Administrator to the cabinet level, adding a Transformation Advisory Board, elevating the Federal CIO, reevaluating how CIO authorities are measured, creating departmental/agency management councils, and providing modernization funds directly to CIOs through working capital funds will provide agencies and departments with the management attention, expertise, support, and resources needed to scale and sustain that progress over time. Additionally, CIOs—who are responsible for technology delivery—are often siloed rather than part of a broad, holistic approach to operations and implementation. Elevating the GSA Administrator and the Federal CIO, as well as establishing the TAB and departmental/agency management councils, will provide coordinated focus on the government’s need to modernize IT.

How will this help fix and modernize the federal government’s legacy systems?

Elevating the role of the Federal CIO and the GSA Administrator will provide more authority and attention for the President’s Management Agenda, thereby aligning policy content with policy implementation. Providing CIOs with a direct source of modernization funding will allow them to direct funds to the most critical projects throughout their organizations, as well as require adherence to standards and best practices. A new focus on successful policy delivery aided by experienced leaders will drive modernization of government systems that rely on dangerously outdated technology.

How do we ensure that scaling modernization is actually part of the President’s Management Agenda?

We believe that an administration that embraces the proposal outlined here will see scaling innovation as critical. Establishing a government COO and elevating the Federal CIO, along with an appointed board that crosses administrations, departmental management councils, better measurement of CIO authorities, and direct funding to CIOs, will dramatically increase the likelihood that improved technology and service delivery remain a priority for future administrations.

Is the federal government doing anything now that can be built upon to implement this proposal?

The federal government has many pockets of innovation that have proven modern methodologies can and do work in government. These pockets of innovation—including USDS, GSA TTS, 18F, the U.S. Air Force Software Factories, fellowships, the Air Force Works Program (AFWERX), Defense Advanced Research Projects Agency (DARPA), and others—are inspiring. It is time to build on these innovations, coordinate their efforts under a U.S. government COO and empowered Federal CIO, and scale solutions to modernize the government as a whole.

Is another cabinet-level agency necessary to solve this problem?

Yes. A cabinet-level chief operating officer with top-level executive authority over policy operations and implementation is needed to carry out policy agendas effectively. It is hard to imagine a high-performing organization without a COO and a focus on operations and implementation at the highest level of leadership.

A president has a great deal to think about. Why should modernizing government technology and service delivery be a priority?

The legacy of any administration is based on its ability to enact its policy agenda and to respond to national emergencies. Scaling modernization across the government is imperative if policy implementation and emergency response are important to the president.

Mobilizing Innovative Financial Mechanisms for Extreme Heat Adaptation Solutions in Developing Nations

Global heat deaths are projected to increase by 370% if direct action is not taken to limit the effects of climate change. The dire implications of rising global temperatures extend across a spectrum of risks, from health crises exacerbated by heat stress, malnutrition, and disease, to economic disparities that disproportionately affect vulnerable communities in the U.S. and in low- and middle-income countries. In light of these challenges, it is imperative to prioritize a coordinated effort at both national and international levels to enhance resilience to extreme heat. This effort must focus on developing and implementing comprehensive strategies to ensure the vulnerable developing countries facing the worst and disproportionate effects of climate change have the proper capacity for adaptation, as wealthier, developed nations mitigate their contributions to climate change. 

To address these challenges, the U.S. Agency for International Development (USAID) should mobilize finance through environmental impact bonds focused on scaling extreme heat adaptation solutions. USAID should build upon the success of the SERVIR joint initiative and expand it to include a partnership with NIHHIS to co-develop decision support tools for extreme heat. Additionally, the Bureau for Resilience, Environment, and Food Security (REFS) within USAID should take the lead in tracking and reporting on climate adaptation funding data. This effort will enhance transparency and ensure that adaptation and mitigation efforts are effectively prioritized. By addressing the urgent need for comprehensive adaptation strategies, we can mitigate the impacts of climate change, increase resilience through adaptation, and protect the most vulnerable communities from the increasing threats posed by extreme heat.

Challenge 

Over the past 13 months, temperatures have hit record highs, with much of the world having just experienced its warmest June on record. Berkeley Earth predicts a 95% chance that 2024 will rank as the warmest year in history. Extreme heat drives interconnected impacts across multiple risk areas, including public health, food insecurity, health care system costs, climate migration, and the growing transmission of life-threatening diseases.

Thus, as global temperatures continue to rise, resilience to extreme heat becomes a crucial element of climate change adaptation, necessitating a strategic federal response on both domestic and international scales.

Inequitable Economic and Health Impacts 

Despite contributing least to global greenhouse gas emissions, low- and middle-income countries experience economic losses from excess heat four times higher than their wealthier counterparts. The countries likely to suffer the most are those with the most humidity, i.e., tropical nations in the Global South. Two-thirds of global exposure to extreme heat occurs in urban areas in the Global South, where there are fewer resources to mitigate and adapt. 

The health impacts associated with increased global extreme heat events are severe, with projections of up to 250,000 additional deaths annually between 2030 and 2050 due to heat stress, alongside malnutrition, malaria, and diarrheal diseases. The direct cost to the health sector could reach $4 billion per year, with 80% of the cost being shouldered by Sub-Saharan Africa. On the whole, low- and middle-income countries (LMICs) in the Global South experience a higher portion of adverse health effects from increasing climate variability despite their minimal contributions to global greenhouse emissions, underscoring a clear global inequity challenge. 

This imbalance points to a crucial need for a focus on extreme heat in climate change adaptation efforts and the overall importance of international solidarity in bolstering adaptation capabilities in developing nations. It is more cost-effective to prepare localities for extreme heat now than to deal with the impacts later. However, most communities do not have comprehensive heat resilience strategies or effective early warning systems due to the lack of resources and the necessary data for risk assessment and management — reflected by the fact that only around 16% of global climate financing needs are being met, with far less still flowing to the Global South. Recent analysis from Climate Policy Initiative, an international climate policy research organization, shows that the global adaptation funding gap is widening, as developing countries are projected to require $212 billion per year for climate adaptation through 2030. The needs will only increase without direct policy action.  

Opportunity: The Role of USAID in Climate Adaptation and Resilience

As the primary federal agency responsible for helping partner countries adapt to and build resilience against climate change, USAID announced multiple commitments at COP28 to advance climate adaptation efforts in developing nations. In December 2023, following COP28, Special Presidential Envoy for Climate John Kerry and USAID Administrator Power announced that 31 companies and partners have responded to the President’s Emergency Plan for Adaptation and Resilience (PREPARE) Call to Action and committed $2.3 billion in additional adaptation finance. Per the State Department’s December 2023 Progress Report on President Biden’s Climate Finance Pledge, this funding level puts agencies on track to reach President Biden’s pledge of working with Congress to raise adaptation finance to $3 billion per year by 2024 as part of PREPARE.

USAID’s Bureau for Resilience, Environment, and Food Security (REFS) leads the implementation of PREPARE. USAID’s entire adaptation portfolio was designed to contribute to PREPARE and align with the Action Plan released in September 2022 by the Biden Administration. USAID has further committed to better integrating adaptation in its Climate Strategy for 2022 to 2030 and established a target to support 500 million people’s adaptation efforts.  

This strategy is complemented by USAID’s efforts to spearhead international action on extreme heat at the federal level, with the launch of its Global Sprint of Action on Extreme Heat in March 2024. This program started with the inaugural Global Heat Summit and ran through June 2024, calling on national and local governments, organizations, companies, universities, and youth leaders to take action to help prepare the world for extreme heat, alongside USAID Missions, IFRC and its 191-member National Societies. The executive branch was also advised to utilize the Guidance on Extreme Heat for Federal Agencies Operating Overseas and United States Government Implementing Partners.

On the whole, USAID’s approach to climate change adaptation aims to predict, prepare for, and mitigate the impacts of climate change in partner countries. The two main components of this approach are climate risk management and climate information services. Climate risk management involves a “light-touch, staff-led process” for assessing, addressing, and adaptively managing climate risks in non-emergency development funding. Climate information services translate data, statistical analyses, and quantitative outputs into information and knowledge to support decision-making processes; these services include early warning systems, which are designed to enable governments’ early and effective action. A primary example of a tool for USAID’s climate information services efforts is the SERVIR program, a joint development initiative in partnership with the National Aeronautics and Space Administration (NASA) to provide satellite meteorology information and science to partner countries.

Additionally, as the flagship finance initiative under PREPARE, the State Department and USAID, in collaboration with the U.S. Development Finance Corporation (DFC), have opened an Adaptation Finance Window under the Climate Finance for Development Accelerator (CFDA), which aims to de-risk the development and scaling of companies and investment vehicles that mobilize private finance for climate adaptation. 

Plan of Action

Recommendation 1: Mobilize private capital through results-based financing such as environmental impact bonds

Results-based financing (RBF) has long been a key component of USAID’s development aid strategy, offering innovative ways to mobilize finance by linking payments to specific outcomes. In recent years, Environmental Impact Bonds (EIBs) have emerged as a promising addition to the RBF toolkit and would serve as a valuable mechanism for USAID to mobilize and scale novel climate adaptation solutions. Thus, in alignment with the PREPARE plan, USAID should launch an EIB pilot focused on extreme heat through the Climate Finance for Development Accelerator (CFDA), a $250 million initiative designed to mobilize $2.5 billion in public and private climate investments by 2030. An EIB piloted through the CFDA can help unlock the public and private climate financing for extreme heat adaptation solutions that is sorely needed. 

With this EIB pilot, the private sector, governments, and philanthropic investors raise the upfront capital and repayment is contingent on the project’s success in meeting predefined goals. By distributing financial risk among stakeholders in the private sector, government, and philanthropy, EIBs encourage investment in pioneering projects that might struggle to attract traditional funding due to their novel or unproven nature. This approach can effectively mobilize the necessary resources to drive climate adaptation solutions. 

Environmental Impact Bond structure

Overview of EIB structure, including cash flow (purple and green arrows) and environmental benefits (black arrows). The EIB is designed by project developers and implemented by stakeholders and others to fund restoration activities that yield quantifiable environmental benefits. The beneficiaries convert these environmental benefits into financial benefits that influence the return on investment.

Adapted from Environmental Impact Bonds: a common framework and looking ahead

The USAID EIB pilot should focus on scaling projects that facilitate uptake and adoption of affordable and sustainable cooling systems such as solar-reflective roofing and other passive cooling strategies. In Southeast Asia alone, annual heat-related mortality is projected to increase by 295% by 2030. Lack of access to affordable and sustainable cooling mechanisms in the wake of record-shattering heat waves affects public health, food and supply chain, and local economies. An EIB that aims to fund and scale solar-reflective roofing (cool roofs) has the potential to generate high impact for the local population by lowering indoor temperature, reducing energy use for air conditioning, and mitigating the heat island effect in surrounding areas. Indonesia, which is home to 46.5 million people at high risk from a lack of access to cooling, has seen notable success in deploying cool roofs/solar-reflective roofing through the Million Cool Roof Challenge, an initiative of the Clean Cooling Collaborative. The country is now planning to scale production capacity of cool roofs and set up its first testing facility for solar-reflective materials to ensure quality and performance. Given Indonesia’s capacity and readiness, an EIB to scale cool roofs in Indonesia can be a force multiplier to see this cooling mechanism reach millions and spur new manufacturing and installation jobs for the local economy. 

To mainstream EIBs and other innovative financial instruments, it is essential to pilot and explore more EIB projects. Cool roofs are an ideal candidate for scaling through an EIB due to their proven effectiveness as a climate adaptation solution, their numerous co-benefits, and the relative ease with which their environmental impacts can be measured (such as indoor temperature reductions, energy savings, and heat island index improvements). Establishing an EIB can be complex and time-consuming, but the potential rewards make the effort worthwhile if executed effectively. Though not exhaustive, the following steps are crucial to setting up an environmental impact bond:

Analyze ecosystem readiness

Before launching an environmental impact bond, it is crucial to analyze what capacities already exist in a given country’s private and public sectors to implement an instrument like an EIB. Additionally, working with local civil society organizations is important to ensure climate adaptation projects and solutions are centered on the local community. 

Determine the financial arrangement, scope, and risk sharing structure 

Determine the financial structure of the bond, including the bond amount, interest rate, and maturity date. Establish a mechanism to manage the funds raised through the bond issuance.

Co-develop standardized, scientifically verified impact metrics and reporting mechanism 

Develop a robust system for measuring and reporting the environmental impact of projects. With key stakeholders and partner countries, define key performance indicators (KPIs) to track and report progress.
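The financial arrangement described in the steps above can be made concrete with a small sketch of outcome-contingent repayment, the core mechanic that distinguishes an EIB from a conventional bond. Everything in this sketch — the tier thresholds, coupon, and haircut — is hypothetical and for illustration only; real EIB terms are negotiated per deal.

```python
# Illustrative sketch of outcome-contingent repayment in an environmental
# impact bond (EIB). All names, tiers, and figures are hypothetical.

def eib_repayment(principal: float, base_rate: float,
                  kpi_achievement: float) -> float:
    """Total repayment owed to investors at maturity.

    principal       -- upfront capital raised from investors
    base_rate       -- performance coupon paid if KPI targets are fully met
    kpi_achievement -- fraction of the predefined KPI target achieved
                       (e.g., measured indoor temperature reduction)
    """
    if kpi_achievement >= 1.0:
        # Targets met or exceeded: principal plus the full coupon.
        return principal * (1 + base_rate)
    if kpi_achievement >= 0.5:
        # Partial success: the coupon scales with measured outcomes.
        return principal * (1 + base_rate * kpi_achievement)
    # Underperformance: investors absorb part of the downside risk.
    return principal * 0.9

# A fully successful cool-roof project repays principal plus coupon:
print(eib_repayment(1_000_000, 0.05, 1.2))  # 1050000.0
```

In practice, the KPI would be one of the scientifically verified metrics co-developed with partner countries, such as measured indoor temperature reduction from cool roofs, and the risk-sharing tiers would be negotiated among the private, government, and philanthropic investors.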

USAID has already begun to incubate and pilot innovative financing mechanisms in the global health space through development impact bonds. The Utkrisht Impact Bond, for example, is the world’s first maternal and newborn health impact bond, which aims to reach up to 600,000 pregnant women and newborns in Rajasthan, India. Expanding the use case of this financing mechanism in the climate adaptation sector can further leverage private capital to address critical environmental challenges, drive scalable solutions, and enhance the resilience of vulnerable communities to climate impacts.

Recommendation 2: USAID should expand the SERVIR joint initiative to include a partnership with NIHHIS and co-develop decision support tools such as an intersectional vulnerability map. 

Building on the momentum of Administrator Power’s recent announcement at COP28, USAID should expand the SERVIR joint initiative to include a partnership with NOAA, specifically with NIHHIS, the National Integrated Heat Health Information System. NIHHIS is an integrated information system supporting equitable heat resilience, which is an important area that SERVIR should begin to explore. Expanded partnerships could begin with a pilot to map regional extreme heat vulnerability in select Southeast Asian countries. This kind of tool can aid in informing local decision makers about the risks of extreme heat that have many cascading effects on food systems, health, and infrastructure.

Intersectional vulnerabilities related to extreme heat refer to the compounding impacts of various social, economic, and environmental factors on specific groups or individuals, including age, income/socioeconomic status, race/ethnicity, gender, and occupation. Understanding these intersecting vulnerabilities is crucial for developing effective strategies to address the disproportionate impacts of extreme heat. USAID should therefore partner with NIHHIS to develop an intersectional vulnerability map that can improve local decision-making and help tailor interventions and policies to where they are most needed. The intersection between extreme heat and health, for example, is under-analyzed, and work in this area will expand the evidence base. 

The pilot can be modeled after the SERVIR-Mekong program, which produced 21 decision support tools over the course of the program from 2014 to 2022. The SERVIR-Mekong program led to the training of more than 1,500 people, the mobilization of $500,000 of investment in climate resilience activities, and the adoption of policies to improve climate resilience in the region. In developing these tools, engaging and co-producing with local communities will be essential. 

Recommendation 3: USAID REFS and the State Department Office of Foreign Assistance should work together to develop a mechanism to consistently track and report climate funding flow. This also requires USAID and the State Department to develop clear guidelines on the U.S. approach to adaptation tracking and determination of adaptation components.

Enhancing analytical and data collection capabilities is vital for crafting effective and informed responses to the challenges posed by extreme heat. To this end, USAID REFS, along with the State Department Office of Foreign Assistance, should co-develop a mechanism to consistently track and report climate funding flows. Currently, neither USAID nor the State Department consistently reports funding data on direct and indirect climate adaptation foreign assistance. As the Department of State is required to report on its climate finance contributions annually to the Organisation for Economic Co-operation and Development (OECD) and biennially to the United Nations Framework Convention on Climate Change (UNFCCC), the two agencies should report on adaptation funding at similarly set, regular intervals and make this information accessible to the executive branch and the general public. A robust tracking mechanism can better inform and aid agency officials in prioritizing adaptation assistance and ensuring the U.S. fulfills its commitments and pledges to support global adaptation to climate change.

The State Department Office of Foreign Assistance (State F) is responsible for establishing standard program structures, definitions, and performance indicators, along with collecting and reporting allocation data on State and USAID programs. Within and beyond the framework of these definitions, however, there is no clear guidance on which foreign assistance projects qualify as climate projects, which qualify as development projects, and which qualify as both. Many adaptation projects are better understood on a continuum of adaptation and development activities. As such, this tracking mechanism should be standardized via a taxonomy of definitions for adaptation solutions. 

Therefore, State F should create standardized mechanisms for climate-related foreign assistance programs to differentiate and determine the interlinkages between adaptation and mitigation action from the outset in planning, finance, and implementation — and thereby enhance co-benefits. State F relies on the technical expertise of bureaus, such as REFS, and the technical offices within them, to evaluate whether or not operating units have appropriately attributed funding that supports key issues, including indirect climate adaptation. 

Further, announced at COP26, PREPARE is considered the largest U.S. commitment in history to support adaptation to climate change in developing nations. The Biden Administration has committed to using PREPARE to “respond to partner countries’ priorities, strengthen cooperation with other donors, integrate climate risk considerations into multilateral efforts, and strive to mobilize significant private sector capital for adaptation.”  Co-led by USAID and the U.S. Department of State (State Department), the implementation of PREPARE also involves the Treasury, NOAA, and the U.S. International Development Finance Corporation (DFC). Other U.S. agencies, such as USDA, DOE, HHS, DOI, Department of Homeland Security, EPA, FEMA, U.S. Forest Service, Millennium Challenge Corporation, NASA, and U.S. Trade and Development Agency, will respond to the adaptation priorities identified by countries in National Adaptation Plans (NAPs) and nationally determined contributions (NDCs), among others. 

As USAID’s REFS leads the implementation of PREPARE and hosts USAID’s Chief Climate Officer, this bureau should be responsible for ensuring that the agency effectively tracks and consistently reports climate funding data. The two REFS Centers that should lead the implementation of these efforts are the Center for Climate-Positive Development, which advises USAID leadership and supports the implementation of USAID’s Climate Strategy, and the Center for Resilience, which supports efforts to reduce recurrent crises, such as climate change-induced extreme weather events, through the promotion of risk management and resilience in USAID’s strategies and programming. 
By standardizing processes to prioritize and track the flow of adaptation funds, USAID will be able to more effectively determine its progress toward addressing global climate hazards like extreme heat, while enhancing its ability to deliver innovative finance and private capital mechanisms in alignment with PREPARE. Standardization will also enable both the public and private sectors to understand possible areas of investment and direct their flows to relevant projects.

Frequently Asked Questions
How does USAID describe, compare, and analyze its global climate adaptation efforts?

USAID uses the Standardized Program Structure and Definitions (SPSD) system — established by State F — to provide a common language to describe climate change adaptation and resilience programs and therefore enable the comparison and analysis of budget and performance data within a country, regionally, or globally. The SPSD system uses the following categories: (1) democracy, human rights, and governance; (2) economic growth; (3) education and social services; (4) health; (5) humanitarian assistance; (6) peace and security; and (7) program development and oversight. Since 2016, climate change has been in the economic growth category, and each climate change pillar has separate Program Areas and Elements. By providing this common language, the SPSD allows information for various types of programs to be aggregated within a country, regionally, or globally, enabling the comparison and analysis of budget and performance data.


Using the SPSD program areas and key issues, USAID categorizes and tracks its allocations related to climate adaptation as either directly or indirectly addressing climate adaptation. Funding that directly addresses climate adaptation is allocated under SPSD Program Area EG.11, “Climate Change—Adaptation,” for activities that enhance resilience and reduce the vulnerability of people, places, and livelihoods to climate change. Under this definition, adaptation programs may have the following elements: improving access to science and analysis for decision-making in climate-sensitive areas or sectors; establishing effective governance systems to address climate-related risks; and identifying and disseminating actions that increase resilience to climate change by decreasing exposure or sensitivity or by increasing adaptive capacity. Funding that indirectly addresses climate adaptation is not allocated to a specific SPSD program area; rather, it is allocated to another SPSD program area and also attributed to the key issue of “Adaptation Indirect,” which covers adaptation activities. The SPSD program area for these activities is not Climate Change—Adaptation, but components of these activities also have climate adaptation effects.


In addition to the SPSD, the State Department and USAID have also identified “key issues” to help describe how foreign assistance funds are used. Key issues are topics of special interest that are not specific to one operating unit or bureau and are not identified, or only partially identified, within the SPSD. As specified in the State Department’s foreign assistance guidance for key issues, “operating units with programs that enhance climate resilience, and/or reduce vulnerability to climate variability and change of people, places, and/or livelihoods are expected to attribute funding to the Adaptation Indirect key issue.”


Operating units use the SPSD and relevant key issues to categorize funding in their operational plans. State guidance requires that any USAID operating unit receiving foreign assistance funding must complete an operational plan each year. The purpose of the operational plan is to provide a comprehensive picture of how the operating unit will use this funding to achieve foreign assistance goals and to establish how the proposed funding plan and programming supports the operating unit, agency, and U.S. government policy priorities. According to the operational plan guidance, State F does an initial screening of these plans.

What is the role of multilateral development banks (MDBs)?

MDBs play a critical role in bridging the significant funding gap faced by vulnerable developing countries that bear a disproportionate burden of climate adaptation costs—estimated to reach up to 20 percent of GDP for small island nations exposed to tropical cyclones and rising seas. MDBs offer a range of financing options, including direct adaptation investments, green financing instruments, and support for fiscal adjustments to reallocate spending towards climate resilience. To be most sustainably impactful, adaptation support from MDBs should supplement existing aid with conditionality that matches the institutional capacities of recipient countries.

What is the role of other federal agencies on an international scale?

In January 2021, President Biden issued an Executive Order (EO 14008) calling upon federal agencies and others to help domestic and global communities adapt and build resilience to climate change. Shortly thereafter in September 2022, the White House announced the launch of the PREPARE Action Plan, which specifically lays out America’s contribution to the global effort to build resilience to the impacts of the climate crisis in developing countries. Nineteen U.S. departments and agencies are working together to implement the PREPARE Action Plan: State, USAID, Commerce/NOAA, Millennium Challenge Corporation (MCC), U.S. Trade and Development Agency (USTDA), U.S. Department of Agriculture (USDA), Treasury, DFC, Department of Defense (DOD) & U.S. Army Corps of Engineers (USACE), International Trade Administration (ITA), Peace Corps, Environmental Protection Agency (EPA), Department of Energy (DOE), Federal Emergency Management Agency (FEMA), Department of Transportation (DOT), Health and Human Services (HHS), NASA, Export–Import Bank of the United States (EX/IM), and Department of Interior (DOI).

What is the role of Congress in international climate finance for adaptation?

Congress oversees federal climate financial assistance to lower-income countries, especially through the following actions: (1) authorizing and appropriating for federal programs and multilateral fund contributions, (2) guiding federal agencies on authorized programs and appropriations, and (3) overseeing U.S. interests in the programs. Congressional committees of jurisdiction include the House Committees on Foreign Affairs, Financial Services, and Appropriations, as well as the Senate Committees on Foreign Relations and Appropriations, among others.

Scaling Effective Methods across Federal Agencies: Looking Back at the Expanded Use of Incentive Prizes between 2010-2020

Policy entrepreneurs inside and outside of government, as well as other stakeholders and advocates, are often interested in expanding the use of effective methods across many or all federal agencies, because how the government accomplishes its mission shapes the outcomes it can deliver for the public it serves. Adoption and use of promising new methods by federal agencies can be slowed by a number of factors that discourage risk-taking and experimentation and instead encourage compliance and standardization, too often as a false proxy for accountability. As a result, many agency-specific and government-wide authorities for promising methods go under-considered and under-utilized.

Policy entrepreneurs within center-of-government agencies (e.g., Executive Office of the President) are well-positioned to use a variety of policy levers and actions to encourage and accelerate federal agency adoption of promising and effective methods. Some interventions by center-of-government agencies are better suited to driving initial adoption, others to accelerating or maintaining momentum, and yet others to codifying and making adoption durable once widespread. Therefore, a policy entrepreneur interested in expanding adoption of a given method should first seek to understand the “adoption maturity” of that method and then undertake interventions appropriate for that stage of adoption. The arc of agency adoption of new methods can be long—measured in years and decades, not weeks and months. Policy entrepreneurs should be prepared to support adoption over similar timescales. In considering adoption maturity of a method of interest, policy entrepreneurs can also reference the ideas of Tom Kalil in a July 2024 Federation of American Scientists blog post on “Increasing the ‘Policy Readiness’ of Ideas,” which offers sample questions to ask about “the policy landscape surrounding a particular idea.”

As a case study for driving federal adoption of a new method, this paper looks back at actions that supported the widespread adoption of incentive prizes by most federal agencies over the course of fiscal years 2010 through 2020. Federal agency use of prizes increased from several incentive prize competitions being offered by a handful of agencies in the early 2000s to more than 2,000 prize competitions offered by over 100 federal agencies by the end of fiscal year 2022. These incentive prize competitions have helped federal agencies identify novel solutions and technologies, establish new industry benchmarks, pay only for results, and engage new talent and organizations. 

A summary framework below includes types of actions that can be taken by policy entrepreneurs within center-of-government agencies to support awareness, piloting, and ongoing use of new methods by federal agencies in the years ahead. (Federal agency program and project managers who seek to scale up innovative methods within their agencies are encouraged to reference related resources such as this article by Jenn Gustetic in the Winter 2018 Issues in Science and Technology: “Scaling Up Policy Innovations in the Federal Government: Lessons from the Trenches.”) 

Efforts to expand federal capacity through new and promising methods are worthwhile to ensure the federal government can use a full and robust toolbox of tactics to meet its varied goals and missions. 

OPPORTUNITIES AND CHALLENGES IN FEDERAL ADOPTION OF NEW METHODS

Opportunities for federal adoption and use of promising and effective methods

To address national priorities, solve tough challenges, or better meet federal missions to serve the public, a policy entrepreneur may aim to pilot, scale, and make lasting federal use of a specific method. 

A policy entrepreneur’s goals might include new ways for federal agencies to, for example:

To support these and other goals, an array of promising methods exists and has been demonstrated in other sectors (such as philanthropy, industry, and civil society), in state, local, Tribal, or territorial governments and communities, and in one or several federal agencies, with promise for beneficial impact if more federal agencies adopted these practices. Many methods are either specifically supported or generally allowable under existing government-wide or agency-specific authorities.

Center-of-government agencies include components of the Executive Office of the President (EOP) like the Office of Management and Budget (OMB) and the Office of Science and Technology Policy (OSTP), as well as the Office of Personnel Management (OPM) and the General Services Administration (GSA). These agencies direct, guide, convene, support, and influence the implementation of law, regulation, and the President’s policies across all Federal agencies, especially the executive departments. An August 2016 report by the Partnership for Public Service and the IBM Center for the Business of Government noted that “[t]he Office of Management and Budget and other ‘center of government’ agencies are often viewed as adding processes that inhibit positive change—however, they can also drive innovation forward across the government.”

A policy entrepreneur interested in expanding adoption of a given method through actions driven or coordinated by one or more center-of-government agencies should first seek to understand the “adoption maturity” of a given method of interest by assessing: (1) the extent that adoption of the method has already occurred across the federal interagency; (2) any real or perceived barriers to adoption and use; and (3) the robustness of existing policy frameworks and agency-specific and government-wide infrastructure and resources that support agency use of the method.

Challenges in federal adoption and use of new methods

Policy entrepreneurs are usually interested in expanding federal adoption of new methods for good reason: a focus on supporting and expanding beneficial outcomes. Effective leaders and managers across sectors understand the importance of matching appropriate and creative tactics with well-defined problems and opportunities. Ideally, leaders are picking which tactic or tool to use based on their expert understanding of the target problem or opportunity, not using a method solely because it is novel or because it is the way work has always been done in the past. Design of effective program strategies is supported by access to a robust and well-stocked toolbox of tactics. 

However, many currently authorized and allowable methods for achieving federal goals are generally underutilized in the implementation strategies and day-to-day tactics of federal agencies. Looking at the wide variety of existing authorities in law and the various flexibilities allowed for in regulation and guidance, one might expect agency tactics for common activities like acquisition or public comment to be varied, diverse, iterative, and even experimental in nature, where appropriate. In practice, however, agency methods are often remarkably homogeneous, repeated, and standardized.   

This underutilization of existing authorities and allowable flexibilities is due to factors such as:

Strategies for addressing challenges in federal adoption and use of new methods

Attention and action by center-of-government agencies are often needed to address the factors cited above that slow the adoption and use of new methods across federal agencies and to build momentum. The following strategies are further explored in the case study on federal use of incentive prizes that follows: 

Additional strategies can be deployed within federal agencies to address agency-level barriers and scale promising methods—see, for example, this article by Jenn Gustetic in the Winter 2018 Issues in Science and Technology: “Scaling Up Policy Innovations in the Federal Government: Lessons from the Trenches.” 

LOOKING BACK: A DECADE OF POLICY ACTIONS SUPPORTING EXPANDED FEDERAL USE OF INCENTIVE PRIZES

The use of incentive prizes is one method for open innovation that has been adopted broadly by most federal agencies, with extensive bipartisan support in Congress and with White House engagement across multiple administrations. In contrast to recognition prizes, such as the Nobel Prize or various presidential medals, which reward past accomplishments, incentive prizes specify a target, establish a judging process (ideally as objective as possible), and use a monetary prize purse and/or non-monetary incentives (such as media and online recognition, access to development and commercialization facilities, resources, or experts, or even qualification for certain regulatory flexibility) to induce new efforts by solvers competing for the prize. 

The use of incentive prizes by governments (and by high net worth individuals) to catalyze novel solutions certainly is not new. In 1795, Napoleon offered 12,000 francs to improve upon the prevailing food preservation methods of the time, with a goal of better feeding his army. Fifteen years later, confectioner Nicolas François Appert claimed the prize for his method involving heating, boiling, and sealing food in airtight glass jars — the same basic technology still used to can foods. Dava Sobel’s book Longitude details how the rulers of Spain, the Netherlands, and Britain all offered separate prizes, starting in 1567, for methods of determining longitude at sea; John Harrison was finally awarded Britain’s top longitude prize in 1773. In 1919, Raymond Orteig, a French-American hotelier, aviation enthusiast, and philanthropist, offered a $25,000 prize for the first person who could perform a nonstop flight between New York and Paris. The prize offer initially expired in 1924 without anyone claiming it. Given technological advances and a number of engaged pilots trying to win the prize, Orteig extended the deadline by five years. By 1926, nine teams had come forward to formally compete, and the prize went to a little-known aviator named Charles Lindbergh, who completed the flight in 1927 in a custom-built plane known as the “Spirit of St. Louis.”

The U.S. Government did not begin to adopt the use of incentive prizes until the early 21st century, following a 1999 National Academy of Engineering workshop about the use of prizes as an innovation tool. In the first decade of the 2000s, the Defense Advanced Research Projects Agency (DARPA), the National Aeronautics and Space Administration (NASA), and the Department of Energy conducted a small number of pilot prize competitions. These early agency-led prizes focused on autonomous vehicles, space exploration, and energy efficiency, demonstrating a range of benefits to federal agency missions. 

Federal use of incentive prizes did not accelerate until Congress, in the America COMPETES Reauthorization Act of 2010, granted all federal agencies the authority to conduct prize competitions (15 USC § 3719). With that new authority in place, and with the support of a variety of other policy actions, federal use of incentive prizes reached scale, with over 2,000 prize competitions offered on Challenge.gov by over 100 federal agencies between fiscal years 2010 and 2022.

There certainly remains extensive opportunity to improve the design, rigor, ambition, and effectiveness of federal prize competitions. That said, there are informative lessons to be drawn from how incentive prizes evolved in the United States from a method used primarily outside of government, with limited pilots among a handful of early-adopter federal agencies, to a method being tried by many civil servants across an active interagency community of practice and lauded by administration leaders, bipartisan members of Congress, and external stakeholders alike. 

A summary follows of the strategies and tactics used by policy entrepreneurs within the EOP—with support and engagement from Congress as well as program managers and legal staff across federal agencies—that led to increased adoption and use of incentive prizes in the federal government.


Summary of strategies and policy levers supporting expanded use of incentive prizes

In considering how best to expand awareness, adoption, and use among federal agencies of promising methods, policy entrepreneurs might consider utilizing some or all of the strategies and policy levers described below in the incentive prizes example. Those strategies and levers are summarized generally in the table that follows. Some of the listed levers can advance multiple strategies and goals. This framework is intended to be flexible and to spark brainstorming among policy entrepreneurs, as they build momentum in the use of particular innovation methods. 

Policy entrepreneurs are advised to consider and monitor the maturity level of federal awareness, adoption, and use, and to adjust their strategies and tactics accordingly. They are encouraged to return to earlier strategies and policy levers as needed, should adoption and momentum lag, should agency ambition in design and implementation of initiatives be insufficient, or should concerns regarding risk management be raised by agencies, Congress, or stakeholders. 

The framework below is organized by stage of federal adoption; each entry lists a strategy and the types of center-of-government policy levers that support it.
Stage: Early – No or few Federal agencies using method
Strategy: Understand federal opportunities to use method, and identify barriers and challenges

* Connect with early adopters across federal agencies to understand use of agency-specific authorities, identify pain points and lessons learned, and capture case studies (e.g., 2000-2009)

* Engage stakeholder community of contractors, experts, researchers, and philanthropy

* Look to and learn from use of method in other sectors (such as by philanthropy, industry, or academia) and document (or encourage third-party documentation of) that use and its known benefits and attributes (e.g., April 1999, July 2009)

* Encourage research, analysis, reports, and evidence-building by National Academies, academia, think tanks, and other stakeholders (e.g., April 1999, July 2009, June 2014)

* Discuss method with OMB Office of General Counsel and other relevant agency counsel

* Discuss method with relevant Congressional authorizing committee staff

* Host convenings that connect interested federal agency representatives with experts

* Support and connect nascent federal “community of interest”
Stage: Early – No or few Federal agencies using method
Strategy: Build interest among federal agencies

* Designate primary policy point of contact/dedicated staff member in the EOP (e.g., 2009-2017, 2017-2021)

* Designate a primary implementation point of contact/dedicated staff at GSA and/or OPM

* Identify leads in all or certain federal agencies

* Connect topic to other administration policy agendas and strategies

* Highlight early adopters within agencies in communications from center-of-government agencies to other federal agencies (and to external audiences)

* Offer congressional briefings and foster bipartisan collaboration (e.g., 2015)
Stage: Early – No or few Federal agencies using method
Strategy: Establish legal authorities and general administration policy

* Engage the OMB Office of General Counsel and OMB Legislative Review Division, as well as other relevant OMB offices and EOP policy councils

* Identify existing general authorities and regulations that could support federal agency use of method (e.g., March 2010)

* Establish general policy guidelines, including by leveraging Presidential authorities through executive orders or memoranda (e.g., January 2009)

* Issue OMB directives on specific follow-on agency actions or guidance to support agency implementation (“M-Memos” or similar) (e.g., December 2009, March 2010, August 2011, March 2012)

* Provide technical assistance to Congress regarding government-wide or agency-specific authority (or authorities) (e.g., June-July 2010, January 2011)

* Delegate existing authorities within agencies (e.g., October 2011)

* Encourage issuance of agency-specific guidance (e.g., October 2011, February 2014)

* Include direction to agencies as part of broader Administration policy agendas (e.g., September 2009, 2011-2016)
Stage: Early – No or few Federal agencies using method
Strategy: Remove barriers and “make it easier”

* Create a central government website with information for federal agency practitioners (such as toolkits, case studies, and trainings) and for the public (e.g., September 2010)

* Create dedicated GSA schedule of vendors (e.g., July 2011)

* Establish an interagency center of excellence (e.g., September 2011)

* Encourage use of interagency agreements on design or implementation of pilot initiatives (e.g., September 2011)

* Request agency budget submissions to OMB to support pilot use in President’s budget (e.g., December 2013)
Stage: Adoption well underway – Many federal agencies have begun to use method
Strategy: Connect practitioners

* Launch a federal “community of practice” with support from GSA for meetings, listserv, and collaborative projects (e.g., April 2010, 2016, June 2019)

* Host regular events, workshops, and conferences with federal agencies and, where appropriate and allowable, seek philanthropic or nonprofit co-hosts (e.g., April 2010, June 2012, April 2015, March 2018, May 2022)
Stage: Adoption well underway – Many federal agencies have begun to use method
Strategy: Strengthen agency infrastructure

* Foster leadership buy-in through briefings from White House/EOP to agency leadership, including members of the career senior executive service

* Encourage agencies to dedicate agency staff and invest in prize design support within agencies

* Encourage agencies to create contract vehicles as needed to support collaboration with vendors/experts

* Encourage agencies to develop intra-agency networks of practitioners and to provide external communications support and platforms for outreach

* Request agency budget submissions to OMB for investments in agency infrastructure and expansion of use, to include in the President's budget where needed (e.g., 2012-2013), and request agencies otherwise accommodate lower-dollar support (such as allocation of FTEs) where possible within their budget toplines
Stage: Adoption well underway – Many federal agencies have begun to use method
Strategy: Clarify existing policies and authorities

* Issue updated OMB, OSTP, or agency-specific policy guidance and memoranda as needed based on engagement with agencies and stakeholders (e.g., August 2011, March 2012)

* Provide technical assistance to Congress on any needed updates to government-wide or agency-specific authorities (e.g., January 2017)
Stage: Adoption prevalent – Most if not all federal agencies have adopted, with a need to maintain use and momentum over time
Strategy: Highlight progress and capture lessons learned

* Require regular reporting from agencies to EOP (OSTP, OMB, or similar) (e.g., April 2012, May 2022)

* Require and take full advantage of regular reports to Congress (e.g., April 2012, December 2013, May 2014, May 2015, August 2016, June 2019, May 2022, April 2024)

* Continue to capture and publish federal-use case studies in multiple formats online (e.g., June 2012)

* Undertake research, evaluation, and evidence-building

* Co-develop practitioner toolkit with federal agency experts (e.g., December 2016)

* Continue to feature promising examples on White House/EOP blogs and communication channels (e.g., October 2015, August 2020)

* Engage media and seek both general interest and targeted press coverage, including through external awards/honorifics (e.g., December 2013)
Stage: Adoption prevalent – Most if not all federal agencies have adopted, with a need to maintain use and momentum over time
Strategy: Prepare for presidential transitions and document opportunities for future administrations

* Integrate go-forward proposals and lessons learned into presidential transition planning and transition briefings (e.g., June 2016-January 2017)

* Brief external stakeholders and Congressional supporters on progress and future opportunities

* Connect use of method to other, broader policy objectives and national priorities (e.g., August 2020, May 2022, April 2024)

Phases and timeline of policy actions advancing the adoption of incentive prizes by federal agencies

  1. Growing number of incentive prizes offered outside government (early 2000s)

At the close of the 20th century, federal use of incentive prizes to induce activity toward targeted solutions was limited, though the federal government regularly utilized recognition prizes to reward past accomplishment. In October 2004, the $10 million Ansari XPRIZE—which was first announced in May 1996—was awarded by the XPRIZE Foundation for the successful flights of Spaceship One by Scaled Composites. Following the awarding of the Ansari XPRIZE and the extensive resulting news coverage, philanthropists and high net worth individuals began to offer prize purses to incentivize action on a wide variety of technology and social challenges. A variety of new online challenge platforms sprang up, and new vendors began offering consulting services for designing and hosting challenges, trends that lowered the cost of prize competition administration and broadened participation in prize competitions among thousands of diverse solvers around the world. This growth in the use of prizes by philanthropists and the private sector increased the interest of the federal government in trying out incentive prizes to help meet agency missions and solve national challenges. Actions during this period to support federal use of incentive prizes include:

  2. Obama-Biden Administration Seeks to Expand Federal Prizes Through Administrative Action (2009-2010)

From the start of the Obama-Biden Administration, OSTP and OMB took a series of policy steps to expand the use of incentive prizes across federal agencies and build federal capacity to support those open-innovation efforts. Bipartisan support in Congress for these actions soon led to new legislation to further advance agency adoption of incentive prizes. Actions during this period to support federal use of incentive prizes include:

  3. Implementing New Government-Wide Prizes Authority Provided by the America COMPETES Act (2011-2016)

During this period of expansion in the federal use of incentive prizes, supported by the new government-wide prize authority provided by Congress, the Obama-Biden Administration continued to emphasize its commitment to the model as a key method for accomplishing administration priorities, including priorities related to open government and evidence-based decision making. Actions during this period to support federal use of incentive prizes include:

  4. Maintaining Momentum in New Presidential Administrations

Support for federal use of incentive prizes continued beyond the Obama-Biden Administration’s foundational efforts. Leadership by federal agency prize leads was particularly important in sustaining this momentum from administration to administration. Actions during the Trump-Pence and Biden-Harris Administrations to support federal use of incentive prizes include:

Harnessed American ingenuity through increased use of incentive prizes. Since 2010, more than 80 Federal agencies have engaged 250,000 Americans through more than 700 challenges on Challenge.gov to address tough problems ranging from fighting Ebola, to decreasing the cost of solar energy, to blocking illegal robocalls. These competitions have made more than $220 million available to entrepreneurs and innovators and have led to the formation of over 275 startup companies with over $70 million in follow-on funding, creating over 1,000 new jobs.

In addition, in January 2017, the Obama-Biden Administration OSTP mentioned the use of incentive prizes in its public “exit memo” as a key “pay-for-performance” method in agency science and technology strategies that “can deliver better results at lower cost for the American people,” and also noted:

Harnessing the ingenuity of citizen solvers and citizen scientists. The Obama Administration has harnessed American ingenuity, driven local innovation, and engaged citizen solvers in communities across the Nation by increasing the use of open-innovation approaches including crowdsourcing, citizen science, and incentive prizes. Following guidance and legislation in 2010, over 700 incentive prize competitions have been featured on Challenge.gov from over 100 Federal agencies, with steady growth every year.

By the end of fiscal year 2022, federal agencies had hosted over 2,000 prize competitions on Challenge.gov since its launch in 2010. OSTP, GSA, and NASA CoECI had provided training to well over 2,000 federal practitioners during that same period. 

Number of Federal Prize Competitions by Authority FY14-FY22

Source: Office of Science and Technology Policy. Biennial Report on “IMPLEMENTATION OF FEDERAL PRIZE AND CITIZEN SCIENCE AUTHORITY: FISCAL YEARS 2021-22.” April 2024.

Federal Agency Practices to Support the Use of Prize Competitions

Source: Office of Science and Technology Policy. Biennial Report on “IMPLEMENTATION OF FEDERAL PRIZE AND CITIZEN SCIENCE AUTHORITY: FISCAL YEARS 2019-20.” March 2022. 

CONCLUSION

Over the span of a decade, incentive prizes had moved from a tool used primarily outside of the federal government to one used commonly across federal agencies, due to a concerted, multi-pronged effort led by policy entrepreneurs and incentive prize practitioners in the EOP and across federal agencies, with bipartisan congressional support, crossing several presidential administrations. And yet, the work to support the use of prizes by federal agencies is not complete: there remains extensive opportunity to further improve the design, rigor, ambition, and effectiveness of federal prize competitions; to move beyond “ideas challenges” to increase the use of incentive prizes to demonstrate technologies and solutions in testbeds and real-world deployment scenarios; to train additional federal personnel on the use of incentive prizes; to learn from the results of federal incentive prize competitions; and to apply this method to address pressing and emerging challenges facing the nation.

In applying these lessons to efforts to expand the use of other promising methods in federal agencies, policy entrepreneurs in center-of-government federal agencies should be strategic in the policy actions they take to encourage and scale method adoption, by first seeking to understand the adoption maturity of that method (as well as the relevant policy readiness) and then by undertaking interventions appropriate for that stage of adoption. With attention and action by policy entrepreneurs to address factors that discourage risk-taking, experimentation, and piloting of new methods by federal agencies, it will be possible for federal agencies to utilize a further-expanded strategic portfolio of methods to catalyze the development, demonstration, and deployment of technology and innovative solutions to meet agency missions, solve long-standing problems, and address grand challenges facing our nation. 


Don’t Fight Paper With Paper: How To Build a Great Digital Product With the Change in the Couch Cushions

Barriers abound. If there were a tagline for most people’s experience building tech systems in government, that would be a contender. At FAS, we constantly hear about barriers agencies face in building systems that can help speed permitting review, a challenge that’s more critical than ever as the country builds new infrastructure to move away from a carbon economy. But breaking down barriers isn’t just a challenge in the permitting arena. So today we’re bringing you an instructive and hopefully inspiring story from Andrew Petrisin, Deputy Assistant Secretary for Multimodal Freight at the U.S. Department of Transportation. We hope his success in building a new system to help manage the supply chain crisis provides the insight – and motivation – you need to overcome the barriers you face. 

To understand Andrew’s journey, we need to go back to the start of the pandemic. Shelter-in-place orders around the world disrupted global supply chains. Increased demand for many goods could not be met, creating a negative feedback loop that drove up costs and propelled inflation. In June of 2021, the Biden Administration announced it would establish a Supply Chain Disruption Task Force to address near-term supply and demand misalignments. Andrew joined the team alongside Port Envoy and former Deputy Secretary of Transportation John Porcari.

Porcari regularly convened all the supply chain stakeholders at the Port of LA to build situational awareness. That included the ports, terminal operators, railroads, ocean carriers, trucking associations, and labor. These meetings, held three times each week during the height of the crisis, allowed stakeholders to share data and talk through challenges from different perspectives. Before the supply-chain crisis, meetings with all of the key players – in what Petrisin calls a “wildly interdependent system” – were rare. Now, railroads and truckers had better awareness of dwell times at the port (i.e., how long a container sits at the terminal), and ocean carriers and ports had greater understanding of what might be causing delays inland.

Going with the FLOW

These meetings were helpful, but to better see around corners, the effort needed to evolve into something more sophisticated. “The irony is that the problem was staring us right in the face,” Andrew told us, “but at the time we really had limited options to proactively fix it.” The meetings were building new relationships and strengthening existing ones, but there was a clear need for more than what had, thus far, consisted mostly of exchanges of slide decks. This prompted Petrisin to start asking some new questions: “How could we provide more value? What would make the data more actionable for each of you?” And critically, “Who would each of you trust with the data needed to make something valuable for everyone?” This was the genesis of FLOW (Freight Logistics Optimization Works).

Looking back, it might be easy to see a path from static data in slide decks shared during big conference calls to a functional data system empowering all the actors to move more quickly. But that outcome was far from certain. To start, the DOT is rarely a direct service provider. There was little precedent for the agency taking on such a role. The stakeholders Andrew was dealing with saw the Department as either a regulator or a grantmaker, both roles with inherent power dynamics. Under normal circumstances, if the Department asked a company for data, the purpose was to evaluate them to inform either a regulatory or grantmaking decision. That makes handing over data to the Department something private companies do carefully, with great caution and often trepidation. In fact, one company told Andrew “we’ve never shared data with the federal government that didn’t come back to bite us.” Yet to provide the service Andrew was envisioning, the stakeholders would need to willingly share their data on an ongoing, near real-time basis. They would need to see DOT in a whole new light, and a whole new role. DOT would need to see itself in a new light as well. 

Oh, This is Different: Value to the Ecosystem

Companies had no obligation to give DOT this data, and until now, had no real reason to do so. In fact, other parts of government had asked for it before, and been turned down. But companies did share the data with Andrew’s team, at least enough of them to get started. Part of what Andrew thinks was different this time was that DOT wasn’t collecting this data primarily for its own use. “Oh, this is very different,” one of his colleagues said. “You are collecting data for other people to use.” The goal in this case was not a decision about funding or rules, but rather the creation of value back to the ecosystem of companies whose data Andrew’s system was ingesting. 

To create that value, Andrew could not rely on a process that presumed to know up front what would work. Instead, he, his team, and the companies would need to learn along the way. And they would need to learn and adjust together. Instead of passive customers, Andrew’s team needed active participants who would engage in tight ‘build-measure-learn’ cycles – partners who would immediately try out new functionality and not only provide candid, quick feedback, but also share how they were using the data provided to improve their operations. “I was very clear with the companies: I need you guys to be candid. I need to know if this is working for you or not; if it’s valuable for you or not. I don’t need advice, I need active participation,” Petrisin says.

This is an important point. Too often, leaders of tech projects misunderstand the principle of listening to users as soliciting advice and opinions from a range of stakeholders and trying to find an average or mid-point among them. What product managers should be trying to surface are needs, not opinions. Opinions get you what people think they want. “If I’d asked them what they wanted, they would have said faster horses,” Henry Ford is wrongly credited with saying. It’s the job of the digital team to uncover and prioritize needs, find ways to meet those needs, serve them back to the stakeholders, learn what works, adjust as necessary, and continue that cycle. The FLOW team did this again and again.

Building Trust through Partnership

That said, many of the features of FLOW exist because of ideas from the participating companies that the team realized would create value for a larger set of stakeholders. “People sometimes ask, ‘How’d you get the shippers to give you purchase order data?’ The truth is, it was their idea,” Petrisin says. But this brings us back to the importance of an iterative process that doesn’t presume to know what will work up front. If the FLOW team had asked shippers to give them purchase order data in a planning stage, the answer almost certainly would have been no, in part because the team hadn’t built the necessary trust yet, and in part because the shippers could not yet imagine how they would use this tool and how valuable it would be to them.

Co-creation with users relies on a foundation of trust and demonstrable value, which is built over time. It’s very hard to build that foundation through a traditional requirements-heavy, up-front planning process, which is assumed to be the norm in government. An iterative – and more nimble – process matters. One industry partner told Petrisin, “Usually the government comes in and tells us how to do our jobs, but that isn’t what you did. It was a partnership.”

One way that iterative, collaborative process manifested for the FLOW team was regular document review with the participating companies. “Each week we’d send them a written proposal on something like how we’re defining demand side data elements, for example,” Petrisin told us. “It would essentially say, ‘This is what we heard, this is what we think makes sense based on what we’re hearing from you. Do you agree?’ And people would review it and comment every week. Over time, you build that culture, you show progress, and you build trust.” 

Petrisin’s team knew you can’t cultivate this kind of rapid, collaborative learning environment at scale from day one. So he started small, with a group of companies that were representative of the larger ecosystem. “So we got five shippers, two ocean carriers, three ports, two terminals, two chassis companies, three third-party logistics firms, a trucking company, and a warehouse company,” he told us, trying to keep the total number under 20. “Because when you get above 20, it becomes hard to have a real conversation.” In these early stages, quality mattered more than quantity, and the quality of the learning was directly tied to the ability to be in constant, frank communication about what was working, what wasn’t, and where value was emerging.

Byrne’s Law states that you can get 85% of the features of most software for 10% of the price. You just need to choose the right priorities. This is true not only for features, but for data, too. The FLOW team could have specified that a system like this could only succeed if it had access to all of the relevant data, and very often thorough requirements-gathering processes reinforce this thinking. But even with only 20 companies participating early on, FLOW ensured those companies were representative of the industry. This enabled real insights from very early in the process. Completeness and thoroughness, so often prized in government software practices, are often neither practical nor desired.

Small Wins Yield Large Returns

Starting small can be hard for government tech programs. It’s uncomfortable internally because it feels counter to the principle that government serves everyone equally; external stakeholders can complain if a competitor or partner is in the program and they’re not. (Such complaints can be a good problem to have; it can mean that the ecosystem sees value in what’s being built.) But too often, technology projects are built with the intent of serving everyone from day one, only to find that they meet a large, pre-defined set of requirements but don’t actually serve the needs of real users, and adoption is weak. Petrisin didn’t enjoy having to explain to companies why they couldn’t be in the initial cohort, but he stuck to his guns. The discipline paid off. “Some of my favorite calls were to go back to those companies a few months later and say, ‘We’re ready! We’re ready to expand, and we can onboard you.’” He knew he was onboarding them to a better project because his team had made the hard choices they needed to make earlier. 

Starting small can ironically position products to grow fast, and when they do, strategies must change. Petrisin says his team really felt that. “I’ve gone from zero to one on a bunch of different things before this, but never really past the teens [in terms of team size], so to speak,” he says. “And now we’re approaching something like 100. So a lot of the last fiscal year for me was learning how to scale.” Learning how to scale a model of collaborative and shared governance was challenging. 

Petrisin had to maintain FLOW’s commitment to serving the needs of the broader public, while also being pragmatic about who DOT could serve with the resources at hand and continuing to build tight build-measure-learn cycles. Achieving consensus, or even directional agreement, during a live conversation with 20 stakeholders is one thing, but it’s much harder, and possibly counterproductive, with 60 or 100. So instead of changing the group of 20, which provided crucial feedback and served as a decision-making body, Andrew developed a second point of engagement: a bi-weekly meeting, open to everyone, where the FLOW team shares progress against the product roadmap. This provided transparency and another opportunity to build trust by communicating feature delivery.

Fighting Trade-off Denial

One thing that didn’t change as the project scaled up was the team’s commitment to realistic and transparent prioritization. “We have to be very honest with ourselves that we can’t do everything,” Petrisin tells us. “But we can figure out what we want to do and transparently communicate that to the industry. They [the industry members] run teams. They manage P&Ls [profit and loss statements]. They understand what it is to make trade-offs with a given budget.” There was a lot of concern about not serving all potential supply chain partners, but Petrisin fought that “trade-off denial.” At that point, his team could either serve a smaller group well or serve everyone poorly. Establishing the need for prioritization early allowed for an incremental and iterative approach to development.

What drove that prioritization is not the number of features, lines of code, or fidelity to a predetermined plan, but demonstrable value to the ecosystem, and to the public. “Importers are working to better forecast congestion, which improves their ability to smooth out their kind of warehouse deliveries. Ocean carriers are working to forecast their bookings using the purchase order data. The chassis providers have correlated the flow demand data to their chassis utilization.” These are all qualitative outcomes, highly valuable but ones that could not necessarily have been predicted. There are quantitative measures too. FLOW aims to reduce operational variance, smoothing out the spikes in the supply chain because the actors can better manage changing demands. That means a healthier economy, and it means Americans are more likely to have access to the goods they need and want. 

Major Successes Don’t Have to Start With Major Funding

What did the FLOW team have that put them in a position to succeed that other government software products don’t have? Given that FLOW was born out of the pandemic crisis, you might guess that it had a big budget, a big team, and a brand-name vendor. It had none of those. The initial funding was “what we could find in the couch cushions, so to speak,” says Andrew. More funding came as FLOW grew, but at that point there was already a working product to point to, and its real-world value was already established, both inside DOT and with industry. What did the procurement look like and how did they choose a vendor? They didn’t. So far, FLOW has been built entirely by developers led by DOT’s Bureau of Transportation Statistics, and the FLOW team continues to hire in-house. Having the team in-house and not having to negotiate change orders has made those build-measure-learn cycles a lot tighter. 

FLOW did have great executive support, all the way up to the office of Transportation Secretary Pete Buttigieg, who understood the critical need for better digital infrastructure for the global supply chain. It’s unfortunately not as common as it should be for leadership to be involved and back development the way DOT’s top brass showed up for Petrisin and his team. But the big difference was just the team’s approach. “The problem already existed, the data already existed, the data privacy authorities already existed, the people already existed,” he told us. “What we did was put those pieces together in a different way. We changed processes and culture, but otherwise, the tools were already there.”

FLOW into the Future

FLOW is still an early stage product. There’s a lot ahead for it, including new companies to onboard that bring more and more kinds of data, more features, and more insights that will allow for a more resilient supply chain in the US. When Andrew thinks about where FLOW is going, he thinks about his role in its sustainability. “My job is to get the people, processes, purpose, and culture in place. So I’ve spent a lot of time on making sure we have a really great team who are ready to continue to move this forward, who have the relationships with industry. It’s not just my vision. It’s our vision.” He also thinks about inevitability, or at least the perception of it. “Five years from now we should look back and think, why did we not do this before? Or why would we ever have not done it? It’s digital public infrastructure we need. This is a role government should play, and I hope in the future people think it’s crazy that anyone would have thought government can’t or shouldn’t do things like this.” 

Just this spring, when the Baltimore bridge collapsed, FLOW allowed stakeholders to monitor volume changes and better understand the impact of cargo rerouting to other ports. Following the collapse of the Francis Scott Key Bridge, Ports America Chesapeake, the container terminal operator at the Port of Baltimore, committed to joining FLOW for the benefits of supply chain resiliency to the Baltimore region. But FLOW’s success was not inevitable. Anyone who’s worked on government tech projects can rattle off a dozen ways a project like this could have failed. And outside of government, there’s a lot of skepticism, too. Petrisin remembers talking to the staff of one of the companies recently, who admitted that when they’d heard about the project, they just assumed it wouldn’t go anywhere and ignored it. He admits that’s a fair response. Another one, though, told him that he’d realized early on that the FLOW team wasn’t going to try to “press release” their way to success, but rather prove their value. The first company has since come back and told him, “Okay, now that everyone’s in it, we’re at a disadvantage to not be in it. When can we onboard?”

“You can’t fight paper with paper.” Ultimately, this sentiment sums up the approach Petrisin and his team took, and why FLOW has been such a success with such limited resources. It reminds us of the giant poster Mike Bracken, who founded the Government Digital Service in the UK, and inspired the creation of such offices as the US Digital Service, used to have on the wall behind his desk. In huge letters, it said “SHOW THE THING.” There’ll never be a shortage of demands for paperwork, for justification, for risk mitigation, for compliance in the game of building technology in government. These “government needs” (as Mike would call them) can eat up all your team has to give, leaving little room for meeting user needs. But momentum comes from tangible progress — working software that provides your stakeholders immediate value and that they can imagine getting more and more from over time. That’s what the FLOW team delivered, and continues to. They fought paper with value.

If Andrew and the FLOW team at DOT can do it, you can too. Where do you see successes like this in your work? What’s holding you back and what help do you need to overcome these barriers? Share on LinkedIn.


Lessons

Use a precipitating event to change practices. The supply chain crisis gave the team both the excuse to build this infrastructure that now seems indispensable, and the room to operate a little differently in how they built it. Now the struggle will be to sustain these different practices when the crisis is perceived to be over. 

Make data function as a compass, not a grade. Government typically uses data for after-the-fact evaluation, which can make internal and external actors wary of sharing it. When data instead informs one’s actions in closer to real time, its value to all parties becomes evident. As Jennifer says in her book Recoding America, make data a compass, not a grade.

Build trust. Don’t try to “press release your way to success.” Actively listen to your users and be responsive to their concerns. Use that insight to show your users real value and progress frequently, and they’ll give you more of their time and attention. 

Trust allows you to co-create with your users. Many of FLOW’s features came from the companies who use it, who offered up the relevant data to enable valuable functionality. 

But understand your users’ needs, don’t solicit advice. The FLOW roadmap was shaped by understanding what was working for its stakeholders and what wasn’t, by what features were actually used and how. The behavior and actions of the user community are better signals than people’s opinions.

Start small. A traditional requirements-heavy, up-front planning process asks you to know everything the final product will do when you start. The FLOW team started with the most basic needs, built that, observed how the companies used it, and added from there. This enabled them to learn along the way and build a product with greater value at lower cost.

Be prepared to change practices as you scale. How you handle stakeholders early on won’t work as you scale from a handful to hundreds. Adapt processes to the needs of the moment. 

Fight trade-off denial. There will always be stakeholders, internal and external, who want more than the team can provide. Failing to prioritize and make clear decisions about trade-offs benefits no one in the end.

Don’t just reduce burden – provide value. There’s been a huge focus on reduction of burden on outside actors (like the companies involved in FLOW) over the past years. While it’s important to respect their time, FLOW’s success shows that companies will willingly engage deeply, offering more and more of their time, when there’s a real benefit to them. Watch your ratio of burden to value. 

Measure success by use and value to users. Too many software projects define success as on-time and on-budget. Products like FLOW define success by the value they create for their users, as measured by quantitative measures of use and qualitative use cases. 

Fund products not projects. FLOW started with a small internal team trying to build an early prototype, not a lengthy requirements-building stage. It was funded with “what we could find in the couch cushions,” followed by modest dedicated allocations, and continues to grow modestly. This matches Jennifer’s model of product funding, as distinct from project funding.

Improving Government Capacity: Unleashing the capacity, creativity, energy, and determination of the public sector workforce

Peter Bonner is a Senior Fellow at FAS.

Katie: Peter, first, can you explain what government capacity means to you?

Peter: What government capacity means to me is ensuring that the people in the public sector, the federal government primarily, have the skills, the tools, the technologies, the relationships, and the talent they need to do their jobs and meet their agency missions.

Those agency missions are really quite profound. I think we lose sight of this: if you’re working at the EPA, your job is to protect human health and the environment. If you’re working at the Department of the Interior, it’s to conserve and protect our natural resources and cultural heritage for the benefit of the public. If you’re working for HHS, you’re enhancing the health and well-being of all Americans. If you’re working for the Department of Transportation, you’re ensuring a safe and efficient transportation system. And then there are the national security agencies, protecting us from our enemies, foreign and domestic. These missions are amazing. Building that capacity so that the people can do their jobs better and more effectively is a critical and noble undertaking. Government employees are stewards of what we hold in common as a people. To me, that’s what government capacity is about.

Mr. Bonner’s Experience and Ambitions at FAS

You’ve had a long career in government – but how is it that you’ve come to focus on this particular issue as something that could make a big difference?

I’ve spent a couple of decades building government capacity in different organizations and roles, most recently as a government executive and political appointee as an associate director at the Office of Personnel Management. Years ago I worked as a contractor with a number of different companies, building human capital and strategic consulting practices. In all of those roles, in one way or another, it’s been about building government capacity.

One of my first assignments when I worked as a contractor was on the Energy Star program, helping to bridge the gaps between the public sector interests – wanting to create greater energy efficiency and reduce energy usage to address climate change – and the private sector interests – making sure their products were competitive and using market forces to demonstrate the effectiveness of federal policy. This work promoted energy efficiency across energy production, computers, refrigerators, HVAC equipment, even commercial buildings and residential housing. Part of the capacity building piece was working with the federal staff and the federal clients who ran those programs, but also making sure they had the right collaboration skills to work effectively with the private sector around these programs and with other federal agencies. Agencies needed to work collaboratively not only with the private sector, but across agencies as well. Those collaboration skills – the skills to make sure they’re working jointly, inter-agency – don’t always come naturally, because people feel protective about their own agency, their own budgets, and their own missions. So that’s an example of building capacity.

Another project I was involved in early on was helping to develop a training program for inspectors of underground storage tanks. That’s pretty obscure, but underground storage tanks have been a real challenge in this nation, creating groundwater pollution. We developed an online course using simulations on how to detect leaks in underground storage tanks. The capacity building piece was getting the agencies and the tank inspectors at the state and local level to use this new learning technology to make their jobs easier and more effective.

Capacity building examples abound – helping OPM build human capital frameworks and improve operating processes, improving agency performance management systems, enhancing the skills of Air Force medical personnel to deal with battlefield injuries, and so on. I’ve been doing capacity building through HR transformation, learning, leadership development, strategy and facilitation, and human-centered design, and looking at how you develop HR and human capital systems that support that capacity building in the agencies. So across my career, those are the kinds of things that I’ve been involved in around government capacity.

What brought you to FAS, and what are you doing now?

I left my job as the associate director for HR Solutions at the Office of Personnel Management last May with the intent of finding ways to continue to contribute to the effective functioning of the federal government. This opportunity came about from a number of folks I’d worked with while at OPM and elsewhere.

FAS is in a unique position to change the game in federal capacity building through thought leadership, policy development, strategic placement of temporary talent, and initiatives to bring more science and technical professionals to lead federal programs. 

I’m really trying to help change the game in talent acquisition and talent management and how they contribute to government capacity. That ranges from upfront hiring in the HR arena through to onboarding and performance management and into program performance.

I think what I’m driven by at FAS is to really unleash the capacity, the creativity, the energy, the determination of the public sector workforce to be able to do their jobs as efficiently and effectively as they know how. I know so many people in the federal government who have great ideas on how to improve their programs sitting in the bottom left-hand drawer of their desk or on their computer desktop, ideas they can never get around to because of everything else that gets in the way.

There are ways to cut through the clutter to help make hiring and talent management effective. Just in hiring: creative recruiting and sourcing for science and technical talent, using hiring flexibilities and hiring authorities on hand, equipping HR staffing specialists and hiring managers with the tools they need, working across agencies on common positions, accelerating background checks are all ways to speed up the hiring process and improve hiring quality.

It’s the stuff that gets in the way that inhibits their ability to do these things. So that unleashing piece is the real reason I’m here. When it comes to talent management, the goal is to move the needle a little bit on the perception of public sector work and federal government work, because the negative perception of what it’s like to work in the federal government, and the distrust in the federal government, is just enormous. The barriers there are profound. But if we can move the needle on that just a little bit, and if we can change the candidate experience so that applying for a federal job, while it may be arduous, results in a positive experience for the candidate, the hiring manager, and the HR staffing specialist, that then becomes the seed bed for a positive employee experience in the federal job. That in turn becomes the seed bed for an effective customer experience, because the linkage between employee experience and customer experience is direct. So if we can shift the needle on those things just a little bit, we start to change the perception of what public sector work is like, and tap into the energy of what brought people to the public sector job in the first place, which by and large is the mission of the agency.

Using Emerging Technologies to Improve Government Capacity

How do you see emerging technologies assisting or helping that mission?

The emerging technologies in talent management are things that other sectors of the economy are working with and that the federal government is quickly catching up on. Everybody thinks the private sector has this all figured out. Well, not necessarily. Private sector organizations also struggle with HR systems that effectively map to the employee journey and that provide analytics that can guide HR decision-making along the way.

A bright spot for progress in government capacity is in recruiting and sourcing talent. The Army Corps of Engineers and the Department of Energy are using front-end recruiting software to attract people into their organizations – the Climate Corps, for example, or the Clean Energy Corps at the Department of Energy. They’re using those front-end recruiting systems to attract people and bring them in to submit their resumes and applications, which can, again, create that positive initial candidate experience, then take them through the rest of the process. There’s work being done in automating and developing more effective online assessments through USA Hire, for example, so that if you’re in a particular occupation, you can take an online test when you apply, and that test will qualify you for the certification list on that job.

Those are not emerging technologies, but they are being deployed effectively in government. The same goes for mobile platforms to quickly and easily communicate with applicants and candidates at different stages of the process. Those things are coming online, or are already online, in many of the agencies.

In addition to some experimentation with AI tools, I think one of the more profound pieces around technologies is what’s happening at the program level that is changing the nature of the jobs government workers do, which in turn impacts what kind of person an HR manager is looking for.

For example, while there are specific occupations focused on machine learning, AI, and data analytics, data literacy and acumen and using these tools are going to be part of everyone’s job in the future. So facility with those analytic tools and with the data visualization tools that are out there is going to have a profound impact on the jobs themselves. Then you back that up to, okay, what kind of person am I looking for here? I need somebody with that skill set coming in, or someone who can be easily up-skilled into it. That’s true for data literacy, data analytics, and some of the AI skill sets that are coming online. It’s not just the technologies within the talent management arena; it’s the technologies on the front lines and in the programs that determine what kind of person I’m looking for and that impact those jobs.

The Significance of Permitting Reform for Hiring

You recently put on a webinar for the Permitting Council. Do you mind explaining what that is and what the goal of the webinar was?

The Permitting Council was created under what’s called the FAST-41 legislation, which is designed to improve the capacity and the speed at which environmental permits are approved so that we can continue with federal projects. Permitting has become a real hot-button issue right now because the Inflation Reduction Act, the CHIPS and Science Act, and the Bipartisan Infrastructure Law created all of these projects in the field – some on federal lands, some on state and local lands, and some on tribal or private sector lands – that then create the need for an environmental permit of some kind in order to get approval to build.

So under the Bipartisan Infrastructure Law, we’re trying to create internet for all, for example, and particularly provide internet access in rural communities where they haven’t had it before, and to people who perhaps couldn’t afford it. That requires building cell towers and transmission lines on federal lands, which requires permits, which in turn require permitting staff or a set of permitting contractors to actually go in and do that work.

Permitting has been, from a talent perspective, under-resourced. Agencies have not had the capacity; they have not had the staff even to keep up with the permits necessitated by these new pieces of legislation. So getting the right people hired and in place – the productive environmental scientists, community planners, scientists of different types, marine biologists, landscape folks, the fish and wildlife people who can advise on how best to do those environmental impact statements or categorical exclusions under the National Environmental Policy Act – has been a challenge. Building that capacity in the agencies responsible for permitting is really a high leverage point for these pieces of legislation, because if I can’t build the cell tower, I don’t realize the positive results from the Bipartisan Infrastructure Law. And you can think of the range of things those pieces of legislation have fostered around the country, from clean water systems in underserved communities, to highways, to bridges, to roads, to airports.

Another example is offshore wind. So you need marine biologists to be able to help do the environmental impact statements around building the wind turbines offshore and examine the effect on the marine habitats. It’s those people that the Department of Interior, the Department of Energy, and Department of Commerce need to hire to come in and run those programs and do those permits effectively. That’s what the Permitting Council does.

One of the things that we worked on with OPM and the Permitting Council together was creating a webinar that got the hiring managers and the HR staffing specialists in the room at the same time to talk about the common bottlenecks they face in the hiring process. After doing outreach and research, we created journey maps and a set of personas to identify a couple of the most salient, common, and high-leverage challenges that they face.

Overcoming Hiring Bottlenecks for Permitting Talent, a webinar presented to government hiring managers, May 2024

We looked at the ecosystem within hiring: what gets in the way in recruiting and sourcing, all the way through to onboarding; position descriptions, and what you do if you don’t have an adequate position description up front when you’re trying to hire that environmental scientist; and the background check and suitability processes. What do you do when things get caught in that suitability process? If you can’t bring those folks on board in a timely fashion, you risk losing them.

We focused on a couple of the key challenges in that webinar, and we had, I don’t know, 60 or 70 people there – hiring managers and HR staffing specialists who took away a set of tools they can use to accelerate and improve the hiring process and get high quality hires on board quickly to assist with the permitting.

The Permitting Council has representatives from each of the agencies that do permitting and works with them on cross-agency activities. The council also has funding from some of these pieces of legislation to foster the permitting process, whether through new software or people and processes, to get permits done as quickly as possible. So that’s what the webinar was about. We’re talking about doing a second one to look at the more systemic and policy-related challenges in permitting hiring.

The Day One Project 2025

FAS has launched its Day One Project 2025, a massive call for nonpartisan, science-based policy ideas that a next presidential administration can utilize on “day one” – whether the new administration is Democrat or Republican. One of the areas we’ve chosen to focus on is Government Capacity. Will you be helping evaluate the ideas that are submitted?

I’ve had input into the Day One Project, particularly around the talent pieces in the government capacity initiative, and also around procurement and innovation in that area. I think it has the potential to help set the stage for talent reform more broadly, whether that's legislative, policy, or regulatory change, or addressing the risk-averse culture we have in the federal government. The impact of the Day One Project could be pretty profound if we get the right energy behind it. One of the things that I’ve known for a while, but that has become clear to me over the past five months working with FAS, is that there are black boxes in the talent management environment in the federal government. What I mean by that is that work goes into a specialized area of expertise, and nobody knows what happens in that specialized area until something pops out the other end.

How do you shed light on the inside of those black boxes so it's more transparent what happens? Take position descriptions, for instance. When agencies are trying to hire someone, sometimes the job needs to be reclassified because it has changed dramatically from the previous position description. I know a little about classification and what happens in the classification process, but to most hiring managers looking from the outside, that's a black box. They don't know what goes on within the classification process, or whether it's going to be worthwhile for them once they have the position description at the other end and are able to do an effective job announcement. Shedding light on that, I think, has the potential to increase transparency and trust between the hiring manager and the HR folks, between the program people and the HR people.

Imagine if we’re able to create that greater transparency: if we're able to tell candidates, when they come in and apply for a job, where they are in the hiring process and whether or not they made the cert list. If they are on the cert list, what's next in terms of their assessment and the process? If they've gone through the interview, where are we in the deliberations about offering them the job? Same thing in suitability. There are black boxes all the way across. Creating transparency and communication around them, I think, will go a long way toward moving the needle on the perception of what federal work is and what it's like to work in the system. It's a long answer to a question that I can summarize by saying: I think we are in a target-rich environment here. There's lots of opportunity to help change the game.