Ensuring Racial Equity in Federal Procurement and Use of Artificial Intelligence
In pursuit of lower costs and improved decision-making, federal agencies have begun to adopt artificial intelligence (AI) to assist in public administration. As AI occupies a growing role within the federal government, algorithmic design and evaluation will increasingly become a key site of policy decisions. Yet a 2020 report found that almost half (47%) of all federal agency use of AI was externally sourced, with a third procured from private companies. To ensure that agency use of AI tools is legal, effective, and equitable, the Biden-Harris Administration should establish a Federal Artificial Intelligence Program to govern the procurement of algorithmic technology. Additionally, the AI Program should establish a strict protocol for collecting the race data needed to identify and mitigate discrimination in these technologies.
Researchers who study and conduct algorithmic audits highlight the importance of race data for effective anti-discrimination interventions, the challenges of category misalignment between data sources, and the need for policy interventions to ensure accessible and high-quality data for audit purposes. However, inconsistencies in the collection and reporting of race data significantly limit the extent to which the government can identify and address racial discrimination in technical systems. Moreover, given significant flexibility in how their products are presented during the procurement process, technology companies can manipulate race categories in order to obscure discriminatory practices.
To ensure that the AI Program can evaluate any inequities at the point of procurement, the Office of Science and Technology Policy (OSTP) National Science and Technology Council Subcommittee on Equitable Data should establish guidelines and best practices for the collection and reporting of race data. In particular, the Subcommittee should produce a report that identifies the minimum level of data private companies should be required to collect and the format in which they should report such data during the procurement process. These guidelines will facilitate the enforcement of existing anti-discrimination laws and help the Biden-Harris Administration pursue its stated racial equity agenda. Furthermore, these guidelines can help to establish best practices for algorithm development and evaluation in the private sector. As technology plays an increasingly important role in public life and government administration, it is essential not only that government agencies can access race data for anti-discrimination enforcement but also that the race categories within this data are not determined on the basis of how favorable they are to the private companies responsible for their collection.
Challenge and Opportunity
Research suggests that governments often have little information about key design choices in the creation and implementation of the algorithmic technologies they procure. Often, these choices are not documented, or they are recorded by contractors but never provided to government clients during the procurement process. Existing regulation imposes specific requirements on the procurement of information technology, such as addressing security and privacy risks, but these requirements do not account for the specific risks of AI, such as its propensity to encode structural biases. Under the Federal Acquisition Regulation, agencies can only evaluate vendor proposals based on the criteria specified in the associated solicitation. Therefore, written guidance is needed to ensure that these criteria include sufficient information to assess the fairness of AI systems acquired through procurement.
The Office of Management and Budget (OMB) defines minimum standards for collecting race and ethnicity data in federal reporting. Racial and ethnic categories are separated into two questions with five minimum categories for race data (American Indian or Alaska Native, Asian, Black or African American, Native Hawaiian or Other Pacific Islander, and White) and one minimum category for ethnicity data (Hispanic or Latino). Despite these standards, guidelines for the use of racial categories vary across federal agencies and even across specific programs. For example, the Census Bureau classification scheme includes a “Some Other Race” option not used in other agencies’ data collection practices. Moreover, guidelines for collection and reporting of data are not always aligned. For example, the U.S. Department of Education recommends collecting race and ethnicity data separately without a “two or more races” category and allowing respondents to select all race categories that apply. However, during reporting, any individual who is ethnically Hispanic or Latino is reported as only Hispanic or Latino and not any other race. Meanwhile, any respondent who selected multiple race options is reported in a “two or more races” category rather than in any racial group with which they identified.
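To illustrate the kind of information loss these misaligned collection and reporting rules can produce, the sketch below applies reporting rules like those described above for the Department of Education to collected responses. This is a minimal, hypothetical example: the data structure, field names, and category strings are illustrative and not an official schema.

```python
# Minimal sketch (not an official schema): collapsing collected race and
# ethnicity responses into reported categories under rules like those
# described above. The Respondent structure and field names are hypothetical.

from dataclasses import dataclass

# OMB minimum race categories, listed for reference.
OMB_MINIMUM_RACES = {
    "American Indian or Alaska Native",
    "Asian",
    "Black or African American",
    "Native Hawaiian or Other Pacific Islander",
    "White",
}

@dataclass
class Respondent:
    hispanic_or_latino: bool  # collected as a separate ethnicity question
    races: set                # "select all that apply" race responses

def reported_category(r: Respondent) -> str:
    """Collapse collected responses into a single reported category."""
    if r.hispanic_or_latino:
        # Reported only as Hispanic or Latino, regardless of race responses.
        return "Hispanic or Latino"
    if len(r.races) > 1:
        # Multiple selections collapse into one combined category.
        return "Two or More Races"
    if len(r.races) == 1:
        return next(iter(r.races))
    return "Not reported"

# A respondent who selects Asian and White and is not Hispanic or Latino is
# reported only as "Two or More Races", so the specific selections are lost.
print(reported_category(Respondent(False, {"Asian", "White"})))
print(reported_category(Respondent(True, {"Black or African American"})))
```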
These inconsistencies are exacerbated in the private sector, where companies are not uniformly constrained by the same OMB standards but rather covered by piecemeal legislation. In the employment context, private companies are required to collect and report on demographic details of their workforce according to the OMB minimum standards. In the consumer lending setting, on the other hand, lenders are typically not allowed to collect data about protected classes such as race and gender. In cases where protected class data can be collected, these data are typically considered privileged information and cannot be accessed by the government. In the case of algorithmic technologies, companies are often able to discriminate on the basis of race without ever explicitly collecting race data by using features or sets of features that act as proxies for protected classes. Facebook’s advertising algorithms, for instance, can be used to target audiences by race and ethnicity without access to race data.
Federal leadership can help create consistency in reporting to ensure that the government has sufficient information to evaluate whether privately developed AI is functioning as intended and working equitably. By reducing information asymmetries between private companies and agencies during the procurement process, new standards will bring policymakers back into the algorithmic governance process. This will ensure that democratic and technocratic norms of agency rule-making are respected even as privately developed algorithms take on a growing role in public administration.
Additionally, by establishing a program to oversee the procurement of artificial intelligence, the federal government can ensure that agencies have access to the necessary technical expertise to evaluate complex algorithmic systems. This expertise is crucial not only during the procurement stage but also—given the adaptable nature of AI—for ongoing oversight of algorithmic technologies used within government.
Plan of Action
Recommendation 1. Establish a Federal Artificial Intelligence Program to oversee agency procurement of algorithmic technologies.
The Biden-Harris Administration should create a Federal AI Program to create standards for information disclosure and enable evaluation of AI during the procurement process. Following the two-part test outlined in the AI Bill of Rights, the proposed Federal AI Program would oversee the procurement of any “(1) automated systems that (2) have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services.”
The goals of this program will be to (1) establish and enforce quality standards for AI used in government, (2) enforce rigorous equity standards for AI used in government, (3) establish transparency practices that enable public participation and political accountability, and (4) provide guidelines for AI program development in the private sector.
Recommendation 2. Produce a report to establish what data are needed in order to evaluate the equity of algorithmic technologies during procurement.
To support the AI Program’s operations, the OSTP National Science and Technology Council Subcommittee on Equitable Data should produce a report to establish guidelines for the collection and reporting of race data that balances three goals: (1) high-quality data for enforcing existing anti-discrimination law, (2) consistency in race categories to reduce administrative burdens and curb possible manipulation, and (3) prioritizing the needs of groups most affected by discrimination. The report should include opportunities and recommendations for integrating its findings into policy. To ensure the recommendations and standards are instituted, the President should direct the General Services Administration (GSA) or OMB to issue guidance and request that agencies document how they will ensure new standards are integrated into future procurement vehicles. The report could also suggest opportunities to update or amend the Federal Acquisition Regulations.
High-Quality Data
The new guidelines should make efforts to ensure the reliability of race data furnished during the procurement process. In particular:
- Self-identification should be used whenever possible to ascertain race. As of 2021, Food and Nutrition Service guidance recommends against the use of visual identification, citing reliability concerns, respect for respondents’ dignity, and feedback from Child and Adult Care Food Program and Summer Food Service Program participants.
- The new guidelines should attempt to reduce missing data. People may be reluctant to share race information for many legitimate reasons, including uncertainty about how personal data will be used, fear of discrimination, and not identifying with predefined race categories. These concerns can severely impact data quality and should be addressed to the extent possible in the OMB guidelines. New York’s state health insurance marketplace saw a 20% increase in the response rate for race after making several changes to the way it collects data. These changes included explaining how the data would be used and not allowing respondents to leave the question blank, instead allowing them to select “choose not to answer” or “don’t know.” Similarly, the Census Bureau found that a single combined race and ethnicity question improved data quality and consistency by reducing the rate of “some other race,” missing, and invalid responses as compared with two separate questions (one for race and one for ethnicity).
- The new guidelines should follow best practices established through rigorous research and feedback from a variety of stakeholders. In June 2022, the OMB announced a formal review process to revise Statistical Policy Directive No. 15: Standards for Maintaining, Collecting, and Presenting Federal Data on Race and Ethnicity. While this review process is intended for the revision of federal data requirements, its findings can help inform best practices for collection and reporting requirements for nongovernmental data as well.
Consistency in Data Reporting
Whenever possible and contextually appropriate, the guidelines for data reporting should align with the OMB guidelines for federal data reporting to reduce administrative burdens. However, the report may find that evaluating privately developed AI requires data beyond what the OMB guidelines specify.
Prioritizing the Needs of Affected Groups
In its Toolkit for Centering Racial Equity Throughout Data Integration, the Actionable Intelligence for Social Policy group at the University of Pennsylvania identifies best practices for ensuring that data collection serves the groups most affected by discrimination. In particular, this toolkit emphasizes the need for strong privacy protections and stakeholder engagement. In its final report, the Subcommittee on Equitable Data should establish protocols for securing data and for carefully considered role-based access to it.
The final report should also engage community stakeholders in determining which data should be collected and establish a plan for ongoing review that engages with relevant stakeholders, prioritizing affected populations and racial equity advocacy groups. The report should evaluate the appropriate level of transparency in the AI procurement process, in particular, trade-offs between desired levels of transparency and privacy.
Conclusion
Under existing procurement law, the government cannot outsource “inherently governmental functions.” Yet key policy decisions are embedded within the design and implementation of algorithmic technology. Consequently, it is important that policymakers have the necessary resources and information throughout the acquisition and use of procured AI tools. A Federal Artificial Intelligence Program would provide expertise and authority within the federal government to assess these decisions during procurement and to monitor the use of AI in government. In particular, this would strengthen the Biden-Harris Administration’s ongoing efforts to advance racial equity. The proposed program can build on both long-standing and ongoing work within the federal government to develop best practices for data collection and reporting. These best practices will not only ensure that government use of algorithms is governed by strong equity and transparency standards but also provide a powerful avenue for shaping the development of AI in the private sector.
Scaling High-Impact Solutions with a Market-Shaping Mechanism for Global Health Supply Chains
Summary
Congress created the Development Finance Corporation (DFC) to finance private sector solutions to the most critical challenges facing the developing world. In parallel, the United States Agency for International Development (USAID) has committed to engaging the private sector and shifting more resources to local market providers to further the impact of U.S. foreign aid dollars.
USAID is on the verge of awarding its largest-ever suite of foreign aid contracts, totaling $17 billion over the next ten years and comprising nine awards as part of the “NextGen Global Health Supply Chain” (GHSC) contracts. These awards continue a line of global health supply chain contracts dating back to the 1960s that have grown exponentially in total value but have underperformed and have not meaningfully transitioned responsibility for deployment to low- and middle-income country (LMIC) governments and LMIC-based organizations.
Now is the time for USAID and the DFC to pilot new ways of working with the private sector that put countries on a path to high-impact, sustainable development that builds markets.
We propose that USAID set aside $300 million of the overall $17 billion package – or less than 2 percent of the overall value – to create a Supply Chain Commercialization Fund to demonstrate a new way of working with the private sector and administering U.S. foreign aid. USAID and the DFC can deploy the Commercialization Fund to:
- Create and finance instruments that pay for results against certain well-defined success metrics, such as on-time delivery;
- Provide blended financing to expand the footprint and capabilities of established LMIC-based healthcare and logistics service providers that may require additional working capital to grow their presence and/or expand operations; and
- If successful, invite other countries to participate in this model, with the potential for replication to other geographies and sectors where there are robust private sector markets, such as in agriculture, water, and power.
USAID and the DFC can pilot this new model in three countries where there are already thriving and well-established private markets, like Ghana, Kenya, and Nigeria.
Challenge and Opportunity
The world is facing an unprecedented concurrence of crises: pandemics, war, rising food insecurity, and a rapidly warming climate. Low- and middle-income countries (LMICs) are deeply affected, with many having lost decades’ worth of gains made toward the Sustainable Development Goals in only a few short years. We now face the dual tasks of regaining lost ground while ensuring those gains are more durable and lasting than before.
The Biden Administration recognizes this pivotal moment in its new U.S. Strategy Toward Sub-Saharan Africa. The Strategy acknowledges the continent’s growing importance to U.S. global priorities and lays out a 21st-century partnership to contribute to a strong and sustainable global economy, foster new technology and innovation, and ultimately support the long-envisioned transition from donor-driven to country-driven programs. This builds on past U.S. foreign aid initiatives led by administrations of both political parties, including Administrator Mark Green’s Journey to Self Reliance and Administrator Raj Shah’s USAID Forward initiatives. Rather than creating a new flagship program, the U.S. Strategy Toward Sub-Saharan Africa focuses on improved implementation and better integration of existing initiatives to supercharge results. Such aims were echoed repeatedly during the U.S.-Africa Leaders Summit in December 2022.
To realize a new vision for U.S.-Africa partnerships, the Biden Administration should more effectively fuse the work of USAID and the DFC. A key policy rationale for the DFC’s creation in 2018 was to counter China’s Belt and Road Initiative (BRI) and growing economic influence in frontier markets. By combining this investment arm with USAID’s programmatic work, Congress hoped to accelerate major development impact. However, numerous mismatches between USAID and DFC priorities have limited and sometimes actively undermined Congress’ goals. In the worst cases, USAID dollars have been used to pay international aid contractors to perform work in places where existing market providers could have done it. Rather than bolster markets, this can distort them.
This memo lays out a new approach to development rooted in better USAID-DFC collaboration, where the work of both agencies contributes to the commercialization of sectors ready to transition from aid-dependent models to commercial and trade-enabled ones. In these sectors, USAID should work to phase out its international aid contractor-led model and instead scale up the work of existing market participants, including by paying them for results. This set of recommendations also advances USAID priorities outlined in the Agency’s new Acquisition and Assistance Strategy and proposed implementation plan, as well as USAID’s policy framework, which each call for working more closely with the private sector and transitioning to more pay-for-performance models.
The global health supply chain is ideal for USAID and the DFC to test the concept of a commercialization fund because of the sector’s discrete metrics and robust existing logistics companies. Investing in cheaper, more efficient, evidence-driven solutions in a competitive marketplace can improve aid effectiveness and better serve target populations with the health goods they need, such as PPE, vaccines, and medications. This sector receives USAID’s largest contracts, with the Agency spending more than $1B each year on procurement and logistics to get the right health products to the right place, at the right time, and in the right condition across dozens of countries. In the logistics space, only about 25% of USAID’s expenditure supports directly distributing commodities to health facilities in target nations; the other 75% is spent on fly-in contractors who oversee that work. Despite this premium, on-time and in-full distribution rates often miss their targets, and stockouts are still common, according to USAID’s reports and audits.
A Commercialization Fund can directly address policy goals such as localization or private-sector engagement by building resilient health supply chains through a marketplace of providers that ensures patients and providers access the supplies they need on time. In addition to improving sustainability and results and cutting costs, a well-structured Commercialization Fund can improve global health donor coordination, crowd-in new investments from other funders and philanthropy that want to pay for outcomes, and hasten the transition from donor-led aid models to country-led ones.
Plan of Action
USAID should create the Global Health Supply Chain Commercialization Fund, a $300 million initiative to purchase commercial supply chain services directly from operators, based on performance or results. USAID should pilot using the Commercialization Fund to pay providers in three countries where there are already thriving and well-established private logistics markets, such as Kenya, Nigeria, and Ghana. In these countries, dozens of logistics and healthcare providers operate at scale, serving millions of people.
With an initial focus on health logistics, USAID should use $300 million from its yet-to-be-awarded suite of $17 billion NextGen Global Health Supply Chain contracts to provide initial funding for the Commercialization Fund. If successful, the Commercialization Fund will create an open playing field for competition and crowd-in high-impact technology, innovation, and more market-based actors in global health supply chains. This fund will build upon existing efforts across the Agency to identify, incubate, and catalyze innovations from the private sector.
To quickly stand up this Commercialization Fund and select vendors, Administrator Power should utilize her “impairment authority.” Though typically applied to emergencies, the “impairment authority” has been used previously during global health events like the COVID-19 pandemic and the Ebola response and could be used to achieve a specific policy priority such as localization and/or transforming the way USAID administers its global health supply chains. (See FAQ for more information regarding this authority).
The creation of this Fund, which can be fully budget-neutral, requires the following steps:
Step 1: USAID and DFC take administrative action to design and capitalize the $300 million, five-year, cross-cutting, and disease-agnostic Supply Chain Commercialization Fund. A joint aid effectiveness “tiger team” within USAID and the DFC should:
- Spearhead the design and implementation framework for the Fund and stipulate clear, standardized key performance indicators (KPIs) to indicate significant improvements in health supply chain performance in countries where the Commercialization Fund operates.
- Select three countries to adopt the Commercialization Fund, chosen in coordination with overseas USAID Missions and the DFC. Countries should be selected and prioritized based on factors such as analyses of health systems’ needs, the existence of local supply chain service providers, and countries’ desire to manage more of their own health supply chains. As a follow-on to the U.S.-Africa Leaders Summit, we recommend USAID and the DFC direct initial Commercialization Fund funds to support activities in Africa where there are already thriving and well-established private markets such as Ghana, Kenya, and Nigeria.
- Set pricing for each KPI and product in each Commercialization Fund country market. For example, pay-for-performance indicators could include the percentage of on-time deliveries (an illustrative configuration sketch follows this list). USAID and the DFC should set high expectations for performance, such as 95+ percent on-time delivery, especially in geographies where existing market providers can already deliver against similarly rigorous targets in other sectors. USAID bureaus and missions, partner country governments, and in-country private sector healthcare and logistics leaders, as well as supply chain and innovative financing experts, should be consulted during this process.
- Choose funding mechanisms that pay for results (see Step 2 for details).
- Provide blended financing to vendors that may need additional resources to scale their footprint and/or increase their capabilities.
- Select a third-party auditor(s) to audit the results upon which providers are paid.
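As referenced above, the sketch below shows one hypothetical way the tiger team could record standardized KPIs, targets, and per-country pricing as structured data so they are comparable across Commercialization Fund countries. Every indicator name, threshold, and dollar figure is a placeholder for illustration, not a recommended value.

```python
# Illustrative sketch only: one way standardized KPIs and per-country pricing
# could be recorded as structured data. All indicator names, thresholds, and
# dollar figures are hypothetical placeholders, not recommended values.

from dataclasses import dataclass

@dataclass(frozen=True)
class KPI:
    name: str
    target: float            # minimum acceptable performance (0.95 = 95%)
    unit_payment_usd: float   # payment per verified delivery meeting the KPI

COMMERCIALIZATION_FUND_KPIS = {
    "Ghana": [
        KPI("on_time_delivery_rate", target=0.95, unit_payment_usd=40.0),
        KPI("in_full_delivery_rate", target=0.98, unit_payment_usd=25.0),
    ],
    "Kenya": [
        KPI("on_time_delivery_rate", target=0.95, unit_payment_usd=35.0),
        KPI("in_full_delivery_rate", target=0.98, unit_payment_usd=20.0),
    ],
}

for country, kpis in COMMERCIALIZATION_FUND_KPIS.items():
    for kpi in kpis:
        print(f"{country}: {kpi.name} >= {kpi.target:.0%} "
              f"pays ${kpi.unit_payment_usd} per verified delivery")
```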
Step 2: USAID structures financial instruments to pay service providers against results delivered in selected Commercialization Fund countries
USAID should pay Commercialization Fund providers to deliver results, consistent with the KPIs set in Step 1 by the joint aid effectiveness “tiger team.” Pay-for-performance contracts can also provide incentives and/or price assurances for service providers to build infrastructure and expand to areas they don’t traditionally serve.
Structuring pay-for-performance tools will favor providers that can demonstrate their ability to deliver superior and/or more cost-effective results relative to status quo alternatives. Preference should be given to providers that are operational in the target country where there is existing market demand for their services, as evidenced by factors such as whether the host country government, national health insurance program, or consumers already pay for the providers’ services. USAID should work with the host country government(s) to select vendors to ensure strong country buy-in.
To maximize performance and competition, USAID should explicitly not use cost-reimbursable payment models that reimburse for effort and optimize for compliance and reporting. The red tape associated with these awards is so cumbersome that non-traditional USAID service providers cannot compete.
USAID should consider using the following pay-for-performance modalities:
- Fixed-price, milestone-based awards that trigger payment when a service provider meets certain milestones, such as for each delivery made with a 95% on-time rate and with little-to-no product spoilage or wastage. Using fixed-price grants and contracts in this way can effectively make them function as forward contracts that provide firms with advance price assurances that, as long as they continue to deliver against predetermined objectives, the U.S. Government will pay. Fixed-amount grants and contracts are easier for non-traditional USAID partners to apply for and manage than more commonly used “cost reimbursement” awards that reimburse vendors for time, materials, and effort and carry enormous compliance costs. Because pay-for-results awards only pay upon proof of milestones achieved, they also increase accountability for the U.S. taxpayer (a simplified payment-trigger sketch follows this list).
As USAID’s proposed acquisition and assistance implementation plan points out, “‘pay-for-result’ awards (such as firm fixed price contracts or fixed amount awards) can substantially reduce burdens on [contracting officers] and financial management staff as well as open doors for technically strong local partners unable to meet U.S. Government financial standards.”
- Innovation Incentive Awards (IIAs) that pay providers retroactively after they meet certain predetermined results criteria. This award authority, expanded by Congress in December 2022, enables USAID to pre-publish its willingness to pay up to $100,000 for certain well-defined, predetermined results, then pay retroactively once a service provider can demonstrate it met the intended objective.
Unlike a fixed-price award, which establishes a longer-term relationship between USAID and the selected vendor, USAID can use the IIA modality to provide vendors with one-time spot payments. However, USAID could still use this payment modality to move more money at scale, provided a vendor can successfully meet multiple objectives (e.g., USAID could make multiple $100,000 payments for multiple on-time deliveries).
- Other Transaction Authority (OTA) opportunities, which USAID could pursue without additional authorization. The Agency may also benefit from consulting with the White House, the Office of Management and Budget (OMB), the Office of Information and Regulatory Affairs (OIRA), and Congress to secure additional authorities or waivers to disburse Commercialization Fund resources using innovative pay-for-results tools, including OTA, which other federal agencies have used to invite greater private sector participation from nontraditional U.S. Government partners.
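As referenced above, the sketch below shows, under stated assumptions, how a fixed-price milestone trigger could be evaluated against verified delivery records. The record fields, thresholds, and payment amount are hypothetical placeholders; actual terms would be set in the award and verified by the third-party auditor before disbursement.

```python
# Minimal sketch, under stated assumptions, of the fixed-price milestone
# trigger described above: a fixed payment is released only when verified
# delivery results meet predetermined targets. Field names, thresholds, and
# the payment amount are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class DeliveryRecord:
    on_time: bool
    spoiled_units: int
    total_units: int

def milestone_payment(records,
                      fixed_payment_usd: float = 100_000.0,
                      on_time_target: float = 0.95,
                      max_spoilage_rate: float = 0.01) -> float:
    """Return the fixed payment if verified results meet targets, else 0."""
    if not records:
        return 0.0
    on_time_rate = sum(r.on_time for r in records) / len(records)
    total_units = sum(r.total_units for r in records)
    spoilage_rate = sum(r.spoiled_units for r in records) / max(total_units, 1)
    meets_targets = (on_time_rate >= on_time_target
                     and spoilage_rate <= max_spoilage_rate)
    return fixed_payment_usd if meets_targets else 0.0

# Example: 19 of 20 deliveries on time (95%) with no spoilage triggers the
# payment; results would be verified by the third-party auditor first.
records = [DeliveryRecord(on_time=(i != 0), spoiled_units=0, total_units=500)
           for i in range(20)]
print(milestone_payment(records))  # 100000.0
```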
Step 3: USAID and the DFC should provide selected countries with additional technical assistance resources and create intentional pathways for those countries to contribute to the design and management of program implementation.
To ensure these initiatives support countries’ needs, facilitate country ownership, and increase country voice, USAID should also consider establishing a supra-agency advisory board to support the success of the Commercialization Fund. Modeled after the DFC’s Africa Investment Advisor Program, this board would seat a panel of experts to continually advise both agencies on strategic priorities, key risks, and award structure. It could also draw on elements of the Millennium Challenge Corporation’s compact model to ensure participating countries have a hand in designing relevant aspects of the Commercialization Fund.
USAID should additionally provide participating Commercialization Fund countries with Technical Assistance resources to ensure that host country governments can eventually take on larger management responsibilities regarding the administration of Commercialization Fund pay-for-performance contracts.
Step 4: As needed, USAID and the DFC should collaborate to provide sustainable pathways for blended financing that allows existing market providers to access working capital to scale their footprint.
While the DFC and USAID have worked on blended finance deals in the past, the Biden Administration should explicitly direct the two agencies to work together to identify and scale the footprints and capabilities of logistics and healthcare providers in targeted Commercialization Fund countries.
Many of the existing healthcare and logistics providers that could manage a greater share of global health supply chains may need additional financing to expand their operations, increase working capital, or grow their capabilities, but they often face a chicken-and-egg problem when seeking that financing from institutions like the DFC.
Traditional banks and DFC investment officers often consider these companies risky investments because their health supply chain revenue is not assured: the U.S. Government is one of the largest healthcare payers in many LMICs, yet USAID (and other global health donors) have historically funded international aid contractors, not local firms or alternative service providers, to manage countries’ health supply chains. At the same time, USAID and other donors have been reluctant to rely more heavily on existing logistics service providers because many of these providers do not operate at the scale of larger international aid contractors.
To break this cycle, and to enable the DFC and other lenders to offer better financing terms to firms that need it to grow their capabilities or secure working capital, USAID could provide identified firms with more blended finance deals, including guaranteed eligibility to receive pay-for-performance revenue using the funding modalities described above. It could also provide unrestricted early-stage and/or phased funding to cover operational costs associated with working with the U.S. Government.
Increasing available credit to firms via the DFC and using a USAID pay-for-performance contract as collateral would also enhance firms’ overall ability to raise credit from other sources. This assurance, in turn, reduces the cost of capital for receiving firms, resulting in more significant, impactful investments from private capital in the construction of other supply chain infrastructure, including warehouses, IT systems, and shipping fleets.
Step 5: Pending success, USAID and the DFC should replicate the Commercialization Fund in additional countries. Congress should codify the Commercialization Fund into law and authorize larger-scale commercialization funds in additional geographies and sectors as part of the BUILD Act reauthorization in 2025.
While this initial Commercialization Fund will focus on building sustainable, high-performing global health supply chains in three LMICs, the same blueprint could be leveraged in other countries and in other sectors where there are robust private sectors, such as in food or power.
- Congress should require USAID and DFC to report overall Commercialization Fund performance every six months for a minimum of three years.
- If the Commercialization Fund proves successful after the first year, USAID and the DFC should proactively invite other countries to participate to expand this model to other geographies, where appropriate.
- If successful with healthcare supply chains, the Commercialization Fund should also be expanded to cover additional sectors and geographies and included in the BUILD Act 2025 reauthorization.
Conclusion
Continued reliance on traditional aid in commercial-ready sectors contributes to market failures, limits local agency, and minimizes the opportunity for sustainable impact.
As a team of researchers from the Carnegie Endowment’s Africa Program pointed out on the heels of the U.S.-Africa Leaders Summit, “A persistent humanitarian approach to Africa…creates pathologies of unhelpful dependency, insufficient focus on the drivers of inclusive growth, and perverse incentives for the continuation of the status quo by a small coterie of connected beneficiaries.” Those researchers identified 18 new initiatives announced at the Summit, supported with public money, in economic sectors that can facilitate trade, investment, entrepreneurship, and job creation, signaling an unprecedented readiness in this Administration to prioritize trade alongside aid.
The Commercialization Fund outlined in this memo — a market-shaping mechanism designed to correct market failures that conventional aid models can perpetuate — has the potential to become a model for accelerating the transition of other key economic sectors away from the status quo and toward innovation, investment, impact, and long-term sustainability.
The global health supply chain is an ideal sector for USAID and the DFC to test the concept of a Commercialization Fund:
First, virtually every industry relies on robust supply chains to get goods around the world. There are dozens of African logistics companies that deliver goods to last-mile communities every day, including hard-to-transport items that require cold-chain storage like perishable goods and vaccines. These firms can deliver health commodities faster, cheaper, and more sustainably than traditional aid implementers, especially to last-mile communities.
Second, health supply chain performance metrics are relatively straightforward and easy to define and measure. As a result, USAID can facilitate managed competition that pays multiple logistics providers against rigorous, predetermined pay-for-performance indicators. To provide additional accountability to the taxpayer, it could withhold payment for factors such as health commodity spoilage.
Third, global health receives the largest share of USAID’s overall budget, but a significant share of those resources pay for contractor overhead and profit margin, so there is considerable opportunity to re-allocate those resources to create a pay-for-performance Supply Chain Commercialization Fund. Only about 25 percent of USAID’s in-country logistics expenditures pay for the actual work of distributing commodities to health facilities in target nations; the other 75 percent pays for larger aid contractors’ overhead, management, and other costs. Despite this premium, on-time and in-full distribution rates often miss their targets, and stockouts are still a common occurrence, according to USAID’s reports and audits.
Investing in cheaper, more efficient, and effective operators in a competitive marketplace can improve aid effectiveness and better serve target populations with essential healthcare. A Commercialization Fund can directly address policy goals of “progress over programs” by building resilient health supply chains that, once and for all, ensure patients and providers get the supplies they need on time. Since local providers can typically provide services faster, cheaper, and more sustainably than international aid contractors, transitioning to models that pay for results with fees set to prevailing local rates can also advance USAID’s localization priorities and bolster markets rather than distort them.
The administrator could activate her unique “impairment authority” to fashion the scope of procurement competitions at will. The fundamental concept is that if full and open competition for a contract or set of contracts—the normal process followed to fulfill the U.S. Government’s requirements—would impair foreign assistance objectives, then the administrator can divide procurements falling under the relevant category to advance an objective like localization. This authority, which is codified in USAID’s core authorizing legislation (the Foreign Assistance Act of 1961, as amended), along with a formal U.S. Government regulation, was previously used to quickly procure during Iraq reconstruction, Afghanistan humanitarian needs, and the Ebola and COVID-19 responses. While “impairment authority” may be an untested pathway for global health supply chains, it does offer the administrator a viable pathway to launch the Fund and ensure high-impact operators are receiving USAID contracts while continuing to consult with Congress to codify the Fund’s activities long-term. The administrator’s extraordinary “impairment authority” comes from 636(a)(3) of the Foreign Assistance Act and AIDAR (the USAID-specific Supplement to the FAR) Section 706.302-70 “Impairment of foreign aid programs.” See especially 706.302-70(a)(3)(ii).
Many LMIC governments increasingly embrace technological solutions outside of traditional aid models because they know technology can lead to greater efficiencies, support job creation and economic development, and drive improved results for their populations. Sustaining a marketplace within a country or region also benefits both new entrants and existing firms in the sector. The impact of these companies’ services can also be scaled via pay-for-results models and domestic government spending, as the firms that deliver superior performance will rise to the top and continue to win business, while those that do not meet established metrics will not be contracted with again.
Supply chain and innovative financing experts who deeply understand the challenges plaguing global health supply chains should be consulted to design successful pay-for-results vehicles. These individuals should support the USAID/DFC tiger team in designing the implementation framework for the Commercialization Fund, defining KPIs, setting appropriate pricing, and selecting auditors. USAID Missions and local governments will be most familiar with the unique supply chain challenges within their jurisdictions and should work alongside supply chain experts to define the desired supply chain results for the Commercialization Funds in their countries.
Through the Commercialization Fund, USAID will contract any supply chain service provider that can meet exceptionally high performance targets set by the Agency. USAID will increase its volume of business with providers that consistently hit relevant targets over consecutive months. Operators will be paid based on their performance under these contracts, providing them with predictable and consistent cash flows to grow their businesses and reach system-wide scale and impact. Based on these anticipated cash flows, the DFC will be well positioned to make equity investments and to provide upfront and working capital financing.
As the highest-performing operators scale, they gain cost efficiencies that allow them to lower their pricing, just as with any technology adoption curve making services accessible to more customers. Over time, as clear pricing and operating standards are realized, USAID will transition from directly paying these operators for performance to supporting governments to remunerate them against transparent, auditable service contracts.
The Supply Chain Commercialization Fund will also facilitate an exchange of expertise, greater interagency learning, and long-term coordination. DFC will share with USAID how to commercialize sectors, transition them from aid to trade, and lay the groundwork for DFC deal flow, while USAID will help DFC evaluate smaller, riskier deals in sectors with fewer commercial entrants. Both institutions can use the Fund to align on clear measures of success through USAID’s contracting directly with supply chain service providers that get paid only if they hit exceptionally high performance targets and DFC’s increasing investment in companies based on their development effectiveness.
The risk of supply chain disruptions is low because the initial three countries proposed—Kenya, Ghana, and Nigeria—already have existing African-based logistics providers that provide essential health commodities to communities every day, including in last-mile and low-resourced settings. Many of these providers deliver products faster, cheaper, and more sustainably than international aid donor-funded distributors. The capacity-building fund mechanisms described above can also mitigate risks to ensure firms have the capital investment to scale their existing work to meet contract requirements.
USAID should hire third-party auditors to verify the impact and results of Fund investments. We anticipate that the Agency would draw on Commercialization Fund resources to pay for these services.
While $300 million represents less than 2 percent of the overall Global Health Supply Chain suite of awards, this commitment would send important, long-term market signals for firms in partner countries over a multi-year period. It would also provide sufficient capital to scale selected companies and demonstrate how a new supply chain funding model can work.
Algorithmic Transparency Requirements for Lending Platforms Using Automated Decision Systems
Now is the time to ensure lending models offered by private companies are fair and transparent. Access to affordable credit greatly affects quality of life and can shape housing choice. Over the past decade, algorithmic decision-making has increasingly impacted the lives of American consumers. But it is important to ensure all forms of algorithmic underwriting are open to review for fairness and transparency, as inequities may appear in either access to funding or credit terms. A recent report released by the U.S. Treasury Department speaks to the need for more oversight in the FinTech market.
Challenge and Opportunity
The financial services sector, a historically non-technical industry, has recently and widely adopted automated platforms. Financial technology (“FinTech”) companies offer financial products and services directly to consumers, either on their own or in partnership with banks and credit unions. These platforms use algorithms that are non-transparent but directly affect Americans’ ability to obtain affordable financing. Financial institutions (FIs) and mortgage brokers use predictive analytics and artificial intelligence to evaluate candidates for mortgage products, small business loans, and unsecured consumer products. Some lenders underwrite personal loans such as auto loans, personal unsecured loans, credit cards, and lines of credit with artificial intelligence. Although loans that are not government-securitized receive less scrutiny, access to credit for personal purposes affects the debt-to-income ratios and credit scores necessary to qualify for homeownership, as well as the global cash flow of a small business owner. Historic Home Mortgage Disclosure Act (HMDA) data and studies on small business lending demonstrate that disparate access to mortgages and small business loans occurs. This situation will not be improved by unaudited automated decision variables, which can create feedback loops that hold the potential to scale inequities.
Forms of discrimination appear in credit approval software and can hinder access to housing. Lorena Rodriguez writes extensively about the current effect of technology on lending laws regulated by the Fair Housing Act of 1968, pointing out that algorithms have incorporated alternative credit scoring models into their decision trees. These newly selected variables have no place in determining someone’s creditworthiness. Inputs include factors like social media activity, retail spending habits, bank account balances, and college of attendance.
Traditional credit scoring models, although cumbersome, are understandable to the typical consumer who takes the time to learn how to improve their credit score. Lending platforms, by contrast, can incorporate any data variable with no requirement to disclose the models that drive decisions. In other words, a consumer may never understand why their loan was approved or denied, because models are not disclosed. At the same time, it may be unclear which consumers are being solicited for financing opportunities, and lenders may target financially vulnerable consumers for profitable but predatory loans.
Transparency around lending decision models is more necessary now than ever. The COVID-19 pandemic created financial hardship for millions of Americans. The Federal Reserve Bank of New York recently reported all-time highs in American household debt. In a rising interest rate environment, affordable and fair credit access will become even more critical to help households stabilize. Although artificial intelligence has been in use for decades, the general public is only recently beginning to realize the ethical impacts of its uses on daily life. Researchers have noted algorithmic decision-making has bias baked in, which has the potential to exacerbate racial wealth gaps and resegregate communities by race and class. While various agencies—such as the Consumer Financial Protection Bureau (CFPB), Federal Trade Commission (FTC), Financial Crimes Enforcement Network, Securities and Exchange Commission, and state regulators—have some level of authority over FinTech companies, there are oversight gaps. Although FinTechs are subject to fair lending laws, not enough is known about disparate impact or treatment, and regulation of digital financial service providers is still evolving. Modernization of policy and regulation is necessary to keep up with the current digital environment, but new legislation can address gaps in the market that existing policies may not cover.
Plan of Action
Three principles should guide policy implementation around FinTech: (1) research, (2) enforcement, (3) incentives. These principles balance oversight and transparency while encouraging responsible innovation by community development financial institutions (CDFIs) and charitable lenders that may lead to greater access to affordable credit. Interagency cooperation and the development of a new oversight body is critical because FinTech introduces complexity due to technical, trade, and financial services overlap.
Recommendation 1: Research. The FTC should commission a comprehensive, independent research study to understand the scope and impact of disparate treatment in FinTech lending.
To ensure equity, the study should be jointly conducted by a minimum of six research universities, of which at least two must be Historically Black Colleges and Universities, and should be designed to understand the scope and impact of disparate treatment in FinTech lending. A $3.5 million appropriation will ensure a well-designed, multiyear study. A strong understanding of the landscape of FinTech and its potential for disparate impact is necessary. Many consumers are not adequately equipped to articulate their challenges, except through complaints to agencies such as the Office of the Comptroller of the Currency (OCC) and the CFPB. Even in these cases, the burden of responsibility is on the individual to be aware of channels of appeal. Anecdotal evidence suggests BIPOC borrowers and low-to-moderate income (LMI) consumers may be the target of predatory loans. For example, an LMI zip code may be targeted with FinTech ads, while product terms may carry a higher interest rate. Feedback loops in algorithms will continue to identify marginalized communities as higher risk. A consumer with fewer means who also receives an interest rate several times higher will remain financially vulnerable under these extractive conditions.
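To make the study’s scope concrete, one screening metric researchers might compute is the adverse impact ratio, often summarized by the “four-fifths rule” heuristic. The sketch below is illustrative only; the group labels and loan decisions are hypothetical, and a real study would also examine pricing, marketing, and model inputs rather than approval rates alone.

```python
# Illustrative sketch of one screening metric such a study might compute: the
# adverse impact ratio ("four-fifths rule" heuristic). Group labels and loan
# decisions below are hypothetical placeholders.

from collections import defaultdict

def approval_rates(records):
    """records: iterable of (group, approved) pairs, approved being a bool."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, is_approved in records:
        total[group] += 1
        approved[group] += int(is_approved)
    return {group: approved[group] / total[group] for group in total}

def adverse_impact_ratios(records, reference_group):
    """Ratio of each group's approval rate to the reference group's rate."""
    rates = approval_rates(records)
    reference_rate = rates[reference_group]
    # Ratios below 0.8 are commonly treated as a flag for closer review.
    return {group: rate / reference_rate
            for group, rate in rates.items() if group != reference_group}

sample = ([("Group A", True)] * 80 + [("Group A", False)] * 20
          + [("Group B", True)] * 55 + [("Group B", False)] * 45)
print(adverse_impact_ratios(sample, reference_group="Group A"))  # {'Group B': 0.6875}
```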
Recommendation 2: Enforcement. A suite of enforcement mechanisms should be implemented.
- FinTechs engaged in mortgage lending should be subject to Home Mortgage Disclosure Act (HMDA) reporting on lending activity and Community Reinvestment Act (CRA) examination. When a bank utilizes a FinTech, a vendor CRA assessment should be incorporated into the bank’s own examination process. Credit unions should also be required to produce FinTech vendor CRA exams during their examination process. CRA and HMDA requirements would encourage FinTechs to make sure they are lending broadly.
- Congress should codify FinTechs’ role as the “true lender” whenever a FinTech’s underwriting model is used by an FI partner, clarifying FinTechs’ responsibility to comply with all applicable state, local, and federal interest rate caps, fair lending laws, and related requirements, as well as their liability when they do not meet existing standards. Federal regulatory agency guidelines must also be updated to clarify the shared responsibility of a bank or credit union and its FinTech partner when a FinTech underwriting model violates UDAAP or fair lending guidelines.
- A previously proposed OCC FinTech charter should be adopted but made optional. When a FinTech chooses to adopt the OCC charter, the charter should give it the interstate privileges covered under the Riegle-Neal Interstate Banking and Branching Efficiency Act of 1994. This provision should also require FinTechs to fulfill state licensing requirements in each state in which they operate, eliminating their current ability to bypass licensing by partnering with regulated FIs.
- Companies engaged in any financing activity or providing automated lending software to regulated FIs must be required to disclose decision models to the FI’s examiner upon request. FinTech data disclosure must not be limited to federally secured loans such as small business or mortgage loans but should also include secured and unsecured loan products made to consumers, such as auto, personal, and small-dollar loans. When consumers obtain a predatory product in these categories, the loans can have a severe impact on borrowers’ debt-to-income/back-end ratios and credit scores, preventing them from qualifying for homeownership or causing them to receive less favorable terms.
Recommendation 3: Incentives. Develop an ethical FinTech certification that designates a FinTech as a responsible lender, modeled on the U.S. Treasury’s CDFI certification.
The certification can sit with the U.S. Treasury and should create incentives for FinTechs demonstrated to be responsible lenders in forms such as grant funding, procurement opportunities, or tax credits. To create this certification, FI regulatory agencies, with input from the FTC and National Telecommunications and Information Administration, should jointly develop an interagency menu of guidelines that dictate acceptable parameters for what criteria may be input into an automated decision model for consumer lending. Guidelines should also dictate what may not be used in a lending model (example: college of attendance). Exceptions to guidelines must be documented, reviewed, and approved by the oversight body after being determined to be a legitimate business necessity.
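As an illustration of how such guidelines could be operationalized, the sketch below checks a disclosed model’s input features against a hypothetical prohibited-inputs list. The feature names and prohibited entries are placeholders drawn from the examples discussed in this memo, not an actual regulatory list.

```python
# Minimal sketch of screening a lending model's disclosed features against an
# interagency "prohibited inputs" list. The prohibited entries and feature
# names are hypothetical examples, not an actual regulatory list.

PROHIBITED_INPUTS = {
    "college_of_attendance",
    "social_media_activity",
    "race",
    "ethnicity",
}

def screen_model_inputs(disclosed_features):
    """Return any prohibited features found in a model's disclosed inputs."""
    return sorted(set(disclosed_features) & PROHIBITED_INPUTS)

disclosed = {"income", "debt_to_income_ratio", "payment_history",
             "college_of_attendance"}
violations = screen_model_inputs(disclosed)
if violations:
    # A documented, approved exception would be required before use.
    print("Flag for review:", violations)
```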
Conclusion
Now is the time to provide policy guidance that will prevent disparate impact and harm to minority, BIPOC, and other traditionally marginalized communities as a result of biased, algorithmically informed lending practices.
Yes, but the CFPB’s general authority to do so is regularly challenged as a result of its independent structure. It is not clear if its authority extends to all forms of algorithmic harm, as its stated authority to regulate FinTech consumer lending is limited to mortgage and payday lending. UDAAP oversight is also less clear as it pertains to nonregulated lenders. Additionally, the CFPB has supervisory authority over institutions with more than $10 billion in assets, and many FinTechs operate below this threshold, leaving oversight gaps. Fair lending guidance for financial technology must be codified apart from the CFPB, although some oversight may continue to rest with the CFPB.
Precedent is currently being set for regulation of small business lending data through the CFPB’s enforcement of Section 1071 of the Dodd-Frank Act, which will require lenders to disclose small business lending data. Other government programs, such as the CDFI Fund, currently require transaction-level reporting for lending data attached to federal funding. Over time, private company vendors are likely to develop tools to support reporting requirements around lending. Data collection can also be incentivized through mechanisms like certifications or tax credits for responsible lenders that are willing to submit data.
The OCC has proposed a charter for FinTechs that would subject them to regulatory oversight (see policy recommendation). Other FI regulators have adopted various versions of FinTech oversight. Oversight for FinTech-insured depository partnerships should remain with a primary regulatory authority for the depository with support from overarching interagency guidance.
A new regulatory body with enforcement authority and congressional appropriations would be ideal, since FinTech is a unique form of lending that touches issues that impact consumer lending, regulation of private business, and data privacy and security.
This argument is often used by payday lenders that offer products with egregious, predatory interest rates. Not all forms of access to credit are responsible forms of credit. Unless a FinTech operates as a charitable lender, its goal is profit maximization—which does not align well with consumer protection. In fact, research indicates financial inclusion promises in FinTech fall short.
Many private lenders are regulated: payday lenders are regulated by the CFPB once they reach a certain threshold, and pawn shops and mortgage brokers are subject to state financial regulators. FinTechs, however, have the potential to cause harm at a different scale, because their automation and algorithmic evaluation techniques make lending practices highly scalable and can create reinforcing feedback loops of disparate impact.
Aligning Regional Economic Development Plans with Federal Priorities
Summary
Economic development planning shouldn’t be this hard. Our planning system in the United States is highly disjointed, both from the bottom up and from the top down, and this negatively impacts our ability to build functioning, aligned, and specialized innovation ecosystems. Today, there is no single document or directive that outlines America’s economic priorities from an R&D, commercial, or economic development perspective. In addition, the organizations that carry out our economic development planning rarely include deep analysis of innovation ecosystems and opportunities for cluster development in their plans.
The elements of a coherent innovation plan have started to appear in policy publications: for example, the 2022 National Security Strategy document outlines the need for a “modern industrial and innovation strategy,” and the biotechnology executive order, the CHIPS and Science Act, and the National Network for Critical Technology Assessment all send strong signals that a short list of industries, industrial capabilities, and strategic supply chains are critical to our country’s continued prosperity. However, while these signals might be strong, they are not yet clear and not yet strategically framed. The Office of Science and Technology Policy, in consultation with other federal agencies and non-governmental organizations, will bring together a national competitiveness plan from these disparate efforts in the coming months. Today, proponents of innovation would do well to think about the next step of this challenge: once a national competitiveness plan exists, how will it be implemented and who will lead the charge?
Across the country, a network of regional development organizations (RDOs) regularly create and maintain economic development plans (called comprehensive economic development strategies, or CEDS) on a regional basis. At the same time, the federal government’s emphasis on building innovation ecosystems and developing regional innovation clusters has unleashed billions of dollars in funding for cluster-aligned projects. One might assume that these efforts are highly aligned and that CEDS created and maintained by RDOs provide the analysis and foundation for cluster development efforts. In reality, cluster development efforts rarely begin with a CEDS for a few key reasons: (1) CEDS are not aligned with a clear national competitiveness strategy; (2) the RDOs that create CEDS often have limited capacity to assess innovation ecosystems and even more limited resources with which to improve their capacity or conduct their analysis; and (3) existing CEDS are often hard to find (even for community members in the RDO’s district).
Creating a better planning system will require clear, top-down guidance about competitiveness priorities, which is on its way. It will also require more sophisticated, focused, and better supported local economic planning. The U.S. Economic Development Administration (EDA) manages existing processes that allow for certification of RDOs and the regular production of CEDS. Additional guidelines and incentives should be structured into these programs in order to build our national capacity for strategic planning around shared competitiveness priorities and to ensure that regional planning processes incorporate a cohesive national framework. This will allow local cluster development efforts to best capitalize upon their respective comparative advantages, setting up communities for success as they develop plans to build stronger local economies, create better jobs, and promote sustainable growth.
Challenges
Challenge 1: Innovation ecosystem development is a form of economic development activity for which both funding and planning are highly fragmented.
Innovation is a key part of economic development and is driven by young, dynamic firms, leading to higher levels of job creation and productivity gains. As such, the federal government has consistently taken an active role in incentivizing startup creation and growth, especially in high-tech industries. Historically, public sector tools for supporting innovation ecosystem development and startup creation have included grants, grand challenges, prize competitions, tax incentives, and loan assistance, among other mechanisms. In recent years, investments have promoted the growth of high-tech and advanced industry clusters in geographic areas outside of traditional innovation hubs like Silicon Valley in California or Boston’s Route 128 in Massachusetts. For example:
- The EDA’s Build Back Better Regional Challenge (BBBRC) awarded $1 billion to encourage cities and regions to develop regional development plans centered around expansion of industry hubs. The 21 winners supported the development of several high-priority industries by strengthening their underlying production, talent, and capital access capabilities.
- The EDA also administers programs like the Good Jobs Challenge (GJC) and Build to Scale (B2S), which enhance innovation capacity by strengthening local workforces and providing access to catalytic capital. Recently, the GJC awarded $500 million to 32 cities, colleges, and workforce organizations to expand local industry talent hubs, and B2S provided $47 million across 51 grants to a range of universities and accelerators.
- The Department of Energy (DoE)’s Regional Clean Hydrogen Hubs will establish 6 to 10 hubs to produce, process, deliver, store, and use clean hydrogen across a range of industrial applications.
- The Department of Defense (DoD)’s Defense Manufacturing Community Support Program invests in workforce development, skills, R&D, and small business support to aid the defense innovation base in communities across the United States.
Many federal agencies provide funding for innovation ecosystem development activities. In 2018, the DoE spent $10 billion on R&D alone, and the passage of the Inflation Reduction Act (IRA) will add hundreds of billions of dollars in grant and loan support for commercialization of green technologies such as solar, wind, hydrogen, and carbon capture and storage. Beyond the DoE, the DoD's DARPA, the Department of Homeland Security (DHS) Science & Technology Directorate, the National Science Foundation (NSF), and a host of other government agencies distribute billions in innovation funding, recently buttressed by the American Rescue Plan, the IRA, the CHIPS and Science Act, and the Infrastructure Investment and Jobs Act. These are all supported by smaller, but critically important, matching investments at the state level.
In short, innovation ecosystem development is economic development, and the federal government understands that. It has a clear national interest in prioritizing the development of certain industries in order to generate positive spillovers, correct market failures, and preserve national competitiveness. State governments and regional bodies have an interest in promoting the economic well-being of specific communities. So why is reconciling these interests across the country so difficult? The answer lies more in game theory than in politics.
Challenge 2: Communities focus their ideas for developing innovation clusters in just a few industries and fail to give enough thought and analysis to their comparative advantage and their role in national competitiveness as they make these choices.
When a region decides to assess its potential to develop an innovation cluster, its leaders must first decide which cluster to develop, turning to the information that is most easily accessible to them. This generally includes lagging metrics describing the region's present-day economy (such as location quotient, industry-level employment, and skills concentration). In many regions of the country that do not already have a strong cluster, these metrics look very similar. It is also data that answers the wrong question. When seeking to build a cluster, regions should not just ask “What are our strengths today?” or “What industries have gotten the most press lately?” Instead, regions should ask “In which growing or emerging industries might our community have a comparative advantage?” Asking the wrong question leads regions to propose cluster efforts in the same few industries (e.g., biotechnology, advanced manufacturing, and semiconductors), rather than pursuing cluster development in an industry that is comparatively underserved yet still vitally important (e.g., green tech, water tech, or aerospace). Even a modest concentration of assets in an underserved industry might position a region as a leading hub, and 40% of American regions cannot all build identical hubs at the same time.
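To make the first of those metrics concrete, the sketch below (illustrative only, with made-up employment figures) computes a location quotient: the standard ratio of an industry's share of regional employment to its share of national employment. Values near 1 are unremarkable, which is one reason regions without an established cluster tend to look alike on this measure.

```python
# Minimal sketch of the employment location quotient (LQ), one of the lagging
# metrics named above. The figures passed in below are hypothetical.

def location_quotient(regional_industry_emp: float,
                      regional_total_emp: float,
                      national_industry_emp: float,
                      national_total_emp: float) -> float:
    """LQ = (regional industry share of employment) / (national industry share)."""
    regional_share = regional_industry_emp / regional_total_emp
    national_share = national_industry_emp / national_total_emp
    return regional_share / national_share

# Hypothetical region: 4,000 semiconductor jobs out of 500,000 total, versus
# 350,000 such jobs out of 150 million jobs nationally.
lq = location_quotient(4_000, 500_000, 350_000, 150_000_000)
print(f"Semiconductor LQ: {lq:.2f}")  # ~3.43, i.e., above-average concentration
```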
For example, too many National Science Foundation Engines proposals are centered around semiconductor and microelectronic clusters. Funding to create semiconductor hubs is limited to a small number of places: recently, Commerce Secretary Gina Raimondo announced that the Department of Commerce would spend the $50 billion of CHIPS Act funds to develop at least two semiconductor hubs. However, given the scale and cost of developing semiconductor manufacturing facilities, as well as the required workforce, infrastructure, and other public services investments, only two or three additional hubs will likely be developed. Consequently, the vast majority of the 23 regions and cities that have submitted Engines applications focused on semiconductors and microelectronics will waste time and money while incurring significant opportunity costs chasing clusters that they are ill-suited to build. More importantly, however, this distracts cities and regions from making longer-term plans that they can stick to, which is essential to the long-term investments in infrastructure, education and training, housing, and land permitting, among others, that are needed to promote innovation.
While we have systems and processes that could support a massive push to integrate competitiveness priorities into local development plans, both those systems and the organizations they fund have limited bandwidth and even more limited resources. Policymakers must also balance the need for increased attention to national competitiveness with the tradition of local control that is core to the American way. There is no appetite for central planning in the United States, but there is a desperate need for clear, shared priorities that allow each region to determine how its community can best serve a higher, patriotic purpose.
Challenge 3: The organizations responsible for economic planning (RDOs) need to improve their internal capacity to plan for innovation ecosystem development, which will require additional resources.
It is easy to imagine a world in which RDOs take the lead on improving the quality of CEDS planning, and efforts led by the National Association of Development Organizations (NADO) Research Foundation are in place to do just that. In addition to managing an EDA-funded community of practice for RDOs, which includes maintaining resources and conducting webinar trainings, the NADO Research Foundation independently maintains CEDS Central, an excellent repository of best practices. While these resources and their consistent use demonstrate communities' desire to plan well, they alone have not yet led to the widespread use of CEDS as a means of detailed cluster analysis and planning.
When looking at any one individual RDO, it is easy to see why. RDOs (also called Councils of Governments, Economic Development Districts, Regional Development Districts, or Regional Planning Organizations) have incredibly broad planning responsibilities and limited staffing and resources. The CEDS that they manage reflect these broad remits and often include elements related to broadband access, transportation, aging population services, housing, education, and employment and economic growth. As a result, there is very little capacity to create clear and detailed innovation plans. Of the organizations whose plans NADO listed as exemplary on their CEDS Central site, the average total staff size was 17 with an average of three employees dedicated to economic development. Moreover, no employees were dedicated to innovation ecosystems, entrepreneurship, or cluster development.
In order for RDOs to build their capacity for creating regional cluster development plans, they must train and hire staff with new capabilities. This is nearly impossible for these organizations to do on their own, given their current financial resources and the breadth of demands on their time. Changing that will require dedicated funding for new staff and training, as well as a clear directive to prioritize this work.
Challenge 4: Many regional, local, state, and nongovernmental stakeholders participate in de facto economic planning activities. However, these are not universally integrated with RDO efforts, and transparency is a key barrier.
The innovation funding picture is further complicated by a long tail of regional, local, and state players. There are over 520 RDOs in the United States. However, only 23 RDOs have published digital CEDS, indicating that these critical planning documents are not yet widely produced, or at least not widely shared, by local and regional stakeholders. In addition, there is no publicly accessible central repository of active CEDS. Providing such a resource could facilitate greater community alignment and better understanding of communities’ comparative advantage across the country.
In addition, a wide range of private and social sector bodies participate in innovation planning and regional development. Organizations such as the Boston-based MassVentures provide venture funding as well as technology transfer, mentorship, and Small Business Innovation Research (SBIR) support. Industry trade organizations, local chambers of commerce, and large nonprofits lobby for regulatory change, help their constituents navigate government resources, and encourage informal planning. Without meaningful transparency in the CEDS process, these groups will struggle to align their activities with regional plans.
Transparency alone will not fix a highly fragmented system, but it will give groups that are inclined to seek alignment the opportunity to do so. It also creates an opportunity for more federal programs (including EDA's own innovation programs) to require that applicants address alignment with their regional CEDS in greater detail during the application process, and to treat CEDS alignment as an evaluation criterion when applications are reviewed.
Opportunity
R&D is at the core of innovation, and the United States has excelled compared to its peers and competitors. Both the European Union and China have struggled to reach the benchmark level of 2% of gross domestic product (GDP) invested in R&D, giving the United States a huge edge in cutting-edge technologies such as biotechnology, clean tech, and software. However, the decline in public R&D spending, which was over 1% of GDP in the 1970s but is now down to ~0.7%, has significant repercussions for competitiveness in emerging technologies that require significant public investment to overcome developmental hurdles. For example, China was first to launch a quantum encryption satellite, and by 2030 China is projected to have 25% of semiconductor manufacturing capacity, compared to just 10% in the United States.
[Figure: R&D expenditures in US dollar (USD) billions at constant purchasing power parity (PPP) prices. Source: OECD R&D statistics, February 2023 (accessed 21 March 2023).]
To be clear, the United States retains a large quantitative and qualitative advantage in R&D and innovation, buttressed by a world-leading university system and large and growing private investment. Gross R&D expenditures are increasing in the United States. However, given its different incentives and time horizons, private sector R&D is largely geared toward commercialization rather than advances in basic science. Economists have long recognized the market failure in early-stage R&D: private firms do not capture the positive social spillovers of their investments, leading to underinvestment. The government is better positioned to account for the total social value of investing in innovation.
Increasing public R&D and ecosystem spending, which includes workforce development and infrastructure, is crucial to accelerating American innovation. The $500 million appropriated in the FY23 omnibus spending bill for EDA's Regional Technology and Innovation Hubs is a good start, but it is only a fraction of the amount authorized by the CHIPS and Science Act. Encouragingly, there is bipartisan agreement in favor of regional cluster building, most recently demonstrated by the December 2022 House Subcommittee on Research and Technology hearing on Building Regional Innovation Economies.
In addition to growing the cumulative effectiveness of national innovation spending, regionally based cluster development plans will distribute economic prosperity more equitably. In 2021, the United States invested nearly $350 billion in venture capital. However, nearly $250 billion went to just three states: California, New York, and Massachusetts. While these three states are home to some of the nation's largest, most productive, and best-educated cities, other regions also have budding clusters and compelling competitive advantages that deserve more financial and human capital. A well-structured innovation roadmap that starts with national priorities, incorporates local advantages, and encourages transparency will help public, private, and nonprofit stakeholders at the regional level develop long-term investment plans. In turn, this will enable a greater number of regions and individuals to reap the economic benefits of innovation, create good jobs, and increase standards of living.
Plan of Action
Recommendation 1: Direct, align, and coordinate innovation ecosystem development activities more clearly at the federal level.
Better coordination of innovation spending starts at the top. Regions, states, and cities would benefit from greater clarity in the direction of U.S. priorities regarding innovation. This is especially true for technologies and sectors that are critical to national competitiveness, require significant upfront R&D, and have large spillover benefits.
- Publish an innovation roadmap every four years at the national level. The White House and key federal departments should work together to publish a comprehensive innovation roadmap every four years, shortly after the beginning of a President's term. To enable the rapid development of the national roadmap, presidential campaigns should make place-based innovation policy a core part of their campaign and transition team structures. This document should be refreshed every year in conjunction with major events, such as the State of the Union and the publication of the National Security Strategy. At a minimum, the OSTP should include such a roadmap within its Quadrennial Science and Technology Review (Sec. 10613).
- Fund continuous assessment of critical technology areas and their needs. The NSF should continue its funding for efforts like the National Network for Critical Technology Assessment to ensure that our understanding of the specific needs within these industries remains current and technically relevant.
- The White House or Office of Management and Budget should regularly convene innovation ecosystem and cluster development program managers across the federal government. Ensuring that federal programs work together to facilitate local engagement with CEDS would serve an ongoing alignment function. It can also prevent local fragmentation caused by funding of competing or misaligned efforts across agencies.
- Key stakeholders should include the National Economic Council, Domestic Policy Council, and the National Security Council. The Departments of Commerce and Treasury and the Small Business Administration should also have major roles. In addition, other agencies should be involved in specific components of the innovation plan (such as the Departments of Labor and Education for workforce development).
Recommendation 2: Direct RDOs to include detailed innovation and cluster planning in the CEDS process. The EDA should update the CEDS Content Guidelines to require that plans address opportunities to build innovation ecosystems and develop local clusters, just as the guidelines already require plans to address economic resilience. These plans should include:
- A clear description of a selected area of cluster emphasis and a data-informed rationale for making that choice.
- An overall strategic direction.
- An asset capacity assessment relative to the selected cluster.
- A high-level operational plan that outlines major initiatives.
- A description of active coalition members and their roles.
- A framework for evaluating progress, including key metrics.
Recommendation 3: Give RDOs the resources needed to include detailed innovation and cluster planning in the CEDS process. Congress should authorize annual funds to support the placement of senior innovation and cluster leaders in RDOs as Regional Competitiveness Officers (RECOs). This program should be administered by the EDA or its designee and be modeled on the Economic Recovery Corps fellowships. This will build staff capacity to help coordinate the development of regional strategies that cut across state and city lines such that innovation planning becomes a regular facet of economic development policy.
- This funding should be prioritized in regions that applied to the Build Back Better Regional Challenge or the Good Jobs Challenge but did not win an award.
- RECOs should be responsible for leading community efforts to create plans for innovation ecosystem and cluster support and facilitate broad engagement in community efforts to seek federal grants to implement these plans.
The EDA should also provide innovation ecosystem and cluster research training through its existing community of practice for RDOs, helping them develop innovation strategies as a component of their CEDS process.
Recommendation 4: Facilitate local alignment through greater CEDS transparency, and require federally funded cluster development initiatives to ask applicants to demonstrate alignment with their regional CEDS as they apply.
- The EDA should coordinate with RDOs to create and maintain a publicly accessible national database of CEDS and their accompanying innovation strategies. This will facilitate greater transparency and coordination among regional players working to identify ideas already in the market and coordinate resources appropriately. The database can also serve a critical transparency function and answer currently unanswerable questions like “How many U.S. regions are working to build biotech or semiconductor clusters?” (a rough sketch of such a query appears after this list).
- Federal cluster development programs across all agencies should require applicants to articulate their alignment with their regional CEDS. Staff and external reviewers should be instructed to consider the degree of alignment as a key criterion for all regional cluster-based programs.
- Regions, states, and cities should ground their economic development in transparent and publicly available CEDS goals and strategies. Smaller entities and political subdivisions that offer planning grants, support grants, and other resources could integrate alignment with the CEDS into their assessment criteria.
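As an illustration of the query capability referenced in the first bullet above, even a minimal, structured repository would make cluster ambitions countable across regions. The sketch below is hypothetical: the records, RDO names, and field names are placeholders, not an actual EDA data model.

```python
# Hypothetical sketch of a query against a public CEDS repository, e.g.,
# "How many U.S. regions are working to build biotech or semiconductor clusters?"
from collections import Counter

ceds_records = [  # placeholder records; a real repository would hold one entry per published CEDS
    {"rdo": "Example Council of Governments A", "target_cluster": "biotechnology"},
    {"rdo": "Example Development District B", "target_cluster": "semiconductors"},
    {"rdo": "Example Planning Organization C", "target_cluster": "water tech"},
    {"rdo": "Example Development District D", "target_cluster": "semiconductors"},
]

counts = Counter(record["target_cluster"] for record in ceds_records)
for cluster, n in counts.most_common():
    print(f"{cluster}: {n} region(s)")
```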
Conclusion
The approach proposed here will facilitate the development of a coordinated national approach to innovation policy. Adopting this approach will help regions make better investments in their industry clusters, help private sector investors more productively channel funding into strategically vital areas, and accelerate the growth of high-quality jobs. Strengthening innovation planning will benefit all Americans by accelerating economic development, expanding local economic clusters, and generating middle-class employment that builds communities.
Meeting Agricultural Sustainability Goals by Increasing Federal Funding for Research on Genetically Engineered Organisms
Summary
Ensuring the sustainability and resiliency of American food systems is an urgent priority, especially in the face of challenges presented by climate change and international geopolitical conflicts. To address these issues, increased federal investment in new, sustainability-oriented agricultural technology is necessary in order to bring greater resource conservation and stress tolerance to American farms and fields. Ongoing advances in bioengineering research and development (R&D) offer a diverse suite of genetically engineered organisms, including crops, animals, and microbes. Given the paramount importance of a secure food supply for national well-being, federal actors should promote the development of genetically engineered organisms for agricultural applications.
Two crucial opportunities are imminent. First, directives in the Biden Administration’s bioeconomy executive order provide the U.S. Department of Agriculture (USDA) a channel through which to request funding for sustainability-oriented R&D in genetically engineered organisms. Second, renewal of the Farm Bill in 2023 provides a venue for congressional legislators to highlight genetic engineering as a funding focus area of existing research grant programs. Direct beneficiaries of the proposed federal funding will predominantly be nonprofit research organizations such as land grant universities; innovations resulting from the funded research will provide a public good that benefits producers and consumers alike.
Challenge and Opportunity
The resiliency of American agriculture faces undeniable challenges in the coming decades. The first is resource availability, which includes scarcities of fertile land due to soil degradation and of water due to overuse and drought. Resource availability is also vulnerable to acute shocks, as revealed by the impact of the COVID-19 pandemic and the Russia-Ukraine war on the supply of vital inputs such as fertilizer and gas. The second set of challenges comprises environmental stressors, many of which are exacerbated by climate change. Flooding can wipe out an entire harvest, while the spread of pathogens poses existential risks not only to individual livelihoods but also to the global markets for crops like citrus, cacao, and banana. Such losses would be devastating for both consumers and producers, especially those in the global south.
Ongoing advances in bioengineering R&D provide technological solutions in the form of a diverse suite of genetically engineered organisms. These have the potential to address many of the aforementioned challenges, including increasing yield and/or minimizing inputs and boosting resilience to drought, flood, and pathogens. Indeed, existing transgenic crops, such as virus-resistant papaya and flood-tolerant rice, demonstrate the ability of genetically engineered organisms to address agricultural challenges. They can also address other national priorities such as climate change and nutrition by enhancing carbon sequestration and improving the nutritional profile of food.
Recent breakthroughs in modifying and sequencing DNA have greatly increased the speed of developing new, commercializable bioengineered varieties, as well as the spectrum of traits and plants that can be engineered. This process has been particularly accelerated by CRISPR gene-editing technology; the European Sustainable Agriculture Through Genome Editing (EU-SAGE) database documents more than 500 instances of gene-edited crops developed in research laboratories to target traits for sustainable, climate-resilient agriculture. There is thus vast potential for genetically engineered organisms to contribute to sustainable agriculture.
More broadly, this moment can be leveraged to bring about a turning point in the public perception of genetically engineered organisms. Past generations of genetically engineered organisms have been met with significant public backlash, despite the pervasiveness of inter-organism gene transfer throughout the history of life on earth (see FAQ). Reasons for negative public perception are complex but include the association of genetically engineered organisms with industry profit, as well as an embrace of the precautionary principle to a degree that far exceeds its application to other products, such as pharmaceuticals and artificial intelligence. Furthermore, persistent misinformation and antagonistic activism have engendered entrenched consumer distrust. The prior industry focus on herbicide resistance traits also contributed to the misconception that the technology is only used to increase the use of harmful chemicals in the environment.
Now, however, a new generation of genetically engineered organisms features traits beyond herbicide resistance that address sustainability issues such as reduced spoilage. Breakthroughs in DNA sequencing, as well as other analytical tools, have increased our understanding of the properties of newly developed organisms. There is broad buy-in for agricultural sustainability goals across many stakeholder sectors, including individual producers, companies, consumers, and legislators on both sides of the aisle. There is thus great potential for genetically engineered organisms to be accepted by the public as a solution to a widely recognized problem. Dedicated federal funding will be vital in seeing that this potential is realized.
Plan of Action
Recommendation 1: Fund genetically engineered organisms pursuant to the Executive Order on the bioeconomy.
Despite the importance of agriculture for the nation's basic survival and the clear impact of agricultural innovation, USDA's R&D spending pales in comparison to that of other agencies and other expenditures. In 2022, for example, USDA's R&D budget was a mere 6% of the National Institutes of Health's R&D budget, and R&D comprised only 9.6% of USDA's overall discretionary budget. The Biden Administration's September 2022 executive order provides an opportunity to address this funding shortfall, especially for genetically engineered organisms.
The Executive Order on Advancing Biotechnology and Biomanufacturing Innovation for a Sustainable, Safe, and Secure American Bioeconomy explicitly embraces an increased role for biotechnology in agriculture. Among the policy objectives outlined is the call to “boost sustainable biomass production and create climate-smart incentives for American agricultural producers and forest landowners.”
Pursuant to this objective, the EO directs the USDA to submit a plan comprising programs and budget proposals to “support the resilience of the United States biomass supply chain [and] encourage climate-smart production” by September 2023. This plan provides the chance for the USDA to secure funding for agricultural R&D in a number of areas. Here, we recommend (1) USDA collaboration in Department of Energy (DoE) research programs amended under the CHIPS and Science Act and (2) funding for startup seed grants.
CHIPS and Science Act
The 2022 CHIPS and Science Act aims to accelerate American innovation in a number of technology focus areas, including engineering biology. To support this goal, the Act established a new National Engineering Biology Research and Development Initiative (Section 10402). As part of this initiative, the USDA was tasked with supporting “research and development in engineering biology through the Agricultural Research Service, the National Institute of Food and Agriculture programs and grants, and the Office of the Chief Scientist.” Many of the initiative’s priorities are sustainability-oriented and could benefit from genetic engineering contributions.
A highlight is the designation of an interagency committee to coordinate activities. To leverage and fulfill this mandate, we recommend that the USDA better coordinate with the DoE on bioengineering research. Specifically, the USDA should be involved in the decision-making process for awarding research grants relating to two DoE programs amended by the Act.
The first program is the Biological and Environmental Research Program, which includes carbon sequestration, gene editing, and bioenergy. (See the Appendix for a table summarizing examples of how genetic engineering can contribute sustainability-oriented technologies to these key focus areas.)
The second program is the Basic Energy Sciences Program, which has authorized funding for a Carbon Sequestration Research and Geologic Computational Science Initiative under the DoE. Carbon sequestration via agriculture is not explicitly mentioned in this section, but this initiative presents another opportunity for the USDA to collaborate with the DoE and secure funding for agricultural climate solutions. Congress should make appropriating funding for this program a priority.
Seed Grants
The USDA should pilot a seed grant program to accelerate technology transfer, a step that is often a bottleneck. The inherent risk of R&D and entrepreneurship in a cutting-edge field can pose a barrier to entry for academic researchers as well as small agricultural biotech companies. Funding lowers that barrier to entry, increasing the diversity of players in the field. This support can take the form of zero-equity seed grants. Similar to the National Science Foundation (NSF)'s seed grant program, which awards more than $200 million in R&D funding to about 400 startups, this would provide startups with funding without the risks attached to venture capital funding (such as being ousted from company leadership). The NSF's funding is spread across numerous disciplines, so a separate USDA initiative dedicated to supporting small agricultural biotech companies would be beneficial. These seed grants would meet a need unmet by USDA's existing small business grant programs, which are only awarded to established companies.
Together, the funding areas outlined above would greatly empower the USDA to execute the EO’s objective of promoting climate-smart American agriculture.
Recommendation 2: Allocate funding through the 2023 Farm Bill.
The Farm Bill, the primary tool by which the federal government sets agricultural policy, will be renewed in 2023. Several existing mandates for USDA research programs, administered through the National Institute of Food and Agriculture as competitive grants, have been allocated federal funding. Congressional legislators should introduce amendments to the mandates for these programs so that the language explicitly highlights R&D of genetically engineered organisms for sustainable agriculture applications. Such programs include the Agriculture and Food Research Initiative, a major competitive grant program, as well as the Specialty Crop Research Initiative and the Agricultural Genome to Phenome Initiative. Suggested legislative text for these amendments is provided in the Appendix. Promoting R&D of genetically engineered organisms via existing programs circumvents the difficulty of securing appropriations for new initiatives while also presenting genetically engineered organisms as a critically important category of agricultural innovation.
Additionally, Congress should appropriate funding for the Agriculture Advanced Research and Development Authority (AgARDA) at its full $50 million authorization. Similar to its counterparts in other agencies, such as ARPA-E and DARPA, AgARDA would enable “moonshot” R&D projects that are high-reward but high-risk or have a long timeline, such as genetically engineered organisms with genetically complex traits. This funding can be especially valuable for promoting the development of sustainability-oriented crop traits: though such traits are a clear public good, they may be less profitable and/or marketable than consumer-targeted traits such as sweetness or color, and as a result profit-driven companies may be dissuaded from investing in their development. The USDA recently published its implementation strategy for AgARDA. Congress must now fully fund AgARDA so that it can execute this strategy and fuel much-needed innovation in agricultural biotechnology.
Conclusion
Current federal funding for R&D on genetically engineered organisms does not reflect their substantial potential to ensure a sustainable, climate-smart future for American agriculture, with applications ranging from increasing resource-use efficiency in bioproduction to enhancing the resilience of food systems to environmental and manmade crises. Recent technological breakthroughs have opened many frontiers in engineering biology, but free market dynamics alone are not sufficient to guarantee that these breakthroughs are applied in the service of the public good in a timely manner. The USDA and Congress should therefore take advantage of upcoming opportunities to secure funding for genetic engineering research projects.
Appendix
Biological and Environmental Research Program Examples
| Research focus area added in CHIPS and Science Act | Example of genetic engineering contribution |
| --- | --- |
| Bioenergy and biofuel | Optimizing biomass composition of bioenergy crops |
| Non-food bioproducts | Lab-grown cotton; engineering plants and microbes to produce medicines |
| Carbon sequestration | Improving photosynthetic efficiency; enhancing carbon storage in plant roots |
| Plant and microbe interactions | Engineering microbes to counter plant pathogens; engineering microbes to make nutrients more accessible to plants |
| Bioremediation | Engineering plants and microbes to sequester and/or break down contaminants in soil and groundwater |
| Gene editing | Engineering plants for increased nutrient content, disease resistance, and storage performance |
| New characterization tools | Creating molecular reporters of plant response to abiotic and biotic environmental dynamics |
Farm Bill Amendments
Agriculture and Food Research Initiative
One focus area of the Agriculture and Food Research Initiative (AFRI) is Sustainable Agricultural Systems, with topics including “advanced technology,” which supports “cutting-edge research to help farmers produce higher quantities of safer and better quality food, fiber, and fuel to meet the needs of a growing population.” Furthermore, AFRI's Foundational and Applied Science Program supports grants in priority areas including plant health, bioenergy, natural resources, and environment. The 2023 Farm Bill could amend the Competitive, Special, and Facilities Research Grant Act (7 U.S.C. 3157) to highlight the potential of genetic engineering in the pursuit of AFRI's goals.
Example text:
Subsection (b)(2) of the Competitive, Special, and Facilities Research Grant Act (7 U.S.C. 3157(b)(2)) is amended—
(1) in subparagraph (A)—
(A) in clause (ii), by striking the semicolon at the end and inserting “including genetic engineering methods to make modifications (deletions and/or insertions of DNA) to plant genomes for improved food quality, improved yield under diverse growth conditions, and improved conservation of resource inputs such as water, nitrogen, and carbon;”;
(B) in clause (vi), by striking the “and”;
(C) in clause (vii), by striking the period at the end and inserting “; and”; and
(D) by adding at the end the following:
“(viii) plant-microbe interactions, including the identification and/or genetic engineering of microbes beneficial for plant health”
(2) in subparagraph (C), clause (iii), by inserting “production and” at the beginning;
(3) in subparagraph (D)—
(A) in clause (vii), by striking “and”;
(B) in clause (vii), by striking the period at the end and inserting “; and”; and
(C) by adding at the end the following:
“(ix) carbon sequestration”.
Agricultural Genome to Phenome Initiative
The goal of this initiative is to understand the function of plant genes, which is critical to crop genetic engineering for sustainability. The ability to efficiently insert and edit genes, as well as to precisely control gene expression (a core tenet of synthetic biology), would facilitate this goal.
Example text:
Section 1671(a) of the Food, Agriculture, Conservation, and Trade Act of 1990 (7 U.S.C. 5924(a)) is amended—
- In subparagraph (4), by inserting “and environmental” after “achieve advances in crops and animals that generate societal”; and
- In subparagraph (5), by inserting “genetic engineering, synthetic biology,” after “to combine fields such as genetics, genomics,”
Specialty Crop Research Initiative
Specialty crops can be a particularly fertile ground for research. There is a paucity of genetic engineering tools for specialty crops as compared to major crops (e.g. wheat, corn, etc.). At the same time, specialty crops such as fruit trees offer the opportunity to effect larger sustainability impacts: as perennials, they remain in the soil for many years, with particular implications for water conservation and carbon sequestration. Finally, economically important specialty crops such as oranges are under extreme disease threat, as identified by the Emergency Citrus Disease Research and Extension Program. Genetic engineering offers potential solutions that could be accelerated with funding.
Example text:
Section 412(b) of the Agricultural Research, Extension, and Education Reform Act of 1998 (7 U.S.C. 7632(b)) is amended—
- In paragraph (1), by inserting “transgenics, gene editing, synthetic biology” after “research in plant breeding, genetics,” and—
- In subparagraph (B), by inserting “and enhanced carbon sequestration capacity” after “size-controlling rootstock systems”; and
- In subparagraph (C), by striking the semi-colon at the end and inserting “, including water-use efficiency;”
Scientists usually use the term “genetic engineering” as a catch-all phrase for the myriad methods of changing an organism’s DNA outside of traditional breeding, but this is not necessarily reflected in usage by regulatory agencies. The USDA’s glossary, which is not regulatorily binding, defines “genetic engineering” as “manipulation of an organism’s genes by introducing, eliminating or rearranging specific genes using the methods of modern molecular biology, particularly those techniques referred to as recombinant DNA techniques.” Meanwhile, the USDA’s Animal and Plant Health Inspection Service (APHIS)’s 2020 SECURE rule defines “genetic engineering” as “techniques that use recombinant, synthesized, or amplified nucleic acids to modify or create a genome.” The USDA’s glossary defines “genetic modification” as “the production of heritable improvements in plants or animals for specific uses, via either genetic engineering or other more traditional methods”; however, the USDA National Organic Program has used “genetic engineering” and “genetic modification” interchangeably.
“Transgenic” organisms can be considered a subset of genetically engineered organisms and result from the insertion of genetic material from another organism using recombinant DNA techniques. “Gene editing” or “genome editing” refers to biotechnology techniques like CRISPR that make changes in a specific location in an organism’s DNA.
The term “bioengineered” does carry regulatory weight. The USDA-AMS's National Bioengineered Food Disclosure Standard (NBFDS), published in 2018 and effective as of 2019, defines “bioengineered” as “contains genetic material that has been modified through in vitro recombinant deoxyribonucleic acid (DNA) techniques; and for which the modification could not otherwise be obtained through conventional breeding or found in nature.” Most gene-edited crops currently in development, such as those where the introduced gene is known to occur in the species naturally, are exempt from regulation under both the AMS's NBFDS and APHIS's SECURE rule.
Though “genetic engineering” has only entered the popular lexicon in the last several decades, humans have modified the genomes of plants for millennia, in many different ways. Through genetic changes introduced via traditional breeding, teosinte became maize 10,000 years ago in Mesoamerica, and hybrid rice was developed in 20th-century China. Irradiation has been used to generate random mutations in crops for decades, and the resulting varieties have never been subject to any special regulation.
In fact, transfer of genes between organisms occurs all the time in nature. Bacteria often transfer DNA to other bacteria, and some bacteria can insert genes into plants. Indeed, one of the most common “genetic engineering” approaches used today, Agrobacterium-mediated gene insertion, was inspired by that natural phenomenon. Other methods of DNA delivery include biolistics (the “gene gun”) and viral vectors. Each method has many variations and varies greatly in its mode of action and capabilities. This is key for the future of plant engineering: there is a spectrum, not a binary division, of methods, and evaluations of engineered plants should focus on the end product.
Genetically engineered organisms are chiefly regulated by USDA-APHIS, the EPA, and the FDA as established by the 1986 Coordinated Framework for the Regulation of Biotechnology. They oversee experimental testing, approval, and commercial release. The Framework’s regulatory approach is grounded in the judgment that the potential risks associated with genetically engineered organisms can be evaluated the same way as those associated with traditionally bred organisms. This is in line with its focus on “the characteristics of the product and the environment into which it is being introduced, not the process by which the product is created.”
USDA-APHIS regulates the distribution of regulated organisms that are products of biotechnology to ensure that they do not pose a plant pest risk. Developers can petition for individual organisms, including transgenics, to be deregulated via Regulatory Status Review.
The EPA regulates the distribution, sale, use, and testing of all pesticidal substances produced in plants and microbes, regardless of method of production or mode of action. Products must be registered before distribution.
The FDA applies the same safety standards to foods derived from genetically engineered organisms as it does to all foods under the Federal Food, Drug, and Cosmetic Act. The agency provides a voluntary consultation process to help developers ensure that all safety and regulatory concerns, such as toxicity, allergenicity, and nutrient content, are resolved prior to marketing.
Mechanisms of action vary depending on the specific trait. Here, we explain the science behind two types of transgenic crops that have been widespread in the U.S. market for decades.
Bt crops: Three of the major crops grown in the United States have transgenic Bt varieties: cotton, corn, and soybean. Bt crops are genetically engineered such that their genome contains a gene from the bacterium Bacillus thuringiensis. This enables Bt crops to produce a protein, normally made only by the Bt bacterium, that is toxic to a few specific plant pests but harmless to humans, other mammals, birds, and beneficial insects. In fact, the bacterium itself is approved for use as an organic insecticide. However, organic applications of Bt insecticides are limited in efficacy: since the bacterium must be topically applied to the crop, the protein it produces is ineffective against insects that have penetrated the plant or are attacking the roots; in addition, the bacterium can die or be washed away by rain.
Engineering the crop itself to produce the insecticidal protein more reliably reduces crop loss due to pest damage, which also minimizes the need for other, often more broadly toxic systemic pesticides. Increased yield allows for more efficient use of existing agricultural land. In addition, decreased use of pesticides reduces the energy cost associated with their production and application while also preserving wildlife biodiversity. With regards to concerns surrounding insecticide resistance, the EPA requires farmers who employ Bt, both as a transgenic crop and as an organic spray, to also plant a refuge field of non-Bt crops, which prevents pests from developing resistance to the Bt protein.
The only substantive difference between Bt crops and non-Bt crops is that the former produce an insecticidal protein already permitted by USDA organic regulations.
Ringspot-resistant rainbow papaya: The transgenic rainbow papaya is another example of the benefits of genetic engineering in agriculture. Papaya plantations were ravaged by the papaya ringspot virus in the 1990s, forcing many farmers to abandon their lands and careers. In response, scientists developed the rainbow papaya, which contains a gene from the virus itself that allows it to express a protein that counters viral infection. This transgenic papaya was determined to be equivalent in nutrition and all other respects to the original papaya. The rainbow papaya, with its single gene insertion, is widely credited with saving Hawaii's papaya industry, which in 2013 accounted for nearly 25% of Hawaii's food exports. Transgenic papaya now makes up about 80% of Hawaiian papaya acreage. The remainder comprises non-GMO varieties, which would likely have gone locally extinct had transgenic papayas not halted the spread of the virus. The rainbow papaya's success demonstrates that transgenic crops can preserve both the genetic diversity of American crops and their yields without synthetic pesticide sprays, both of which are stated goals of the USDA Organic Program. However, the National Organic Program's regulations currently forbid organic farmers from growing the virus-resistant transgenic papaya.
With the advent of CRISPR gene-editing technology, which allows scientists to make precise, targeted changes in an organism’s DNA, new genetically engineered crops are being developed at an unprecedented pace. These new varieties will encompass a wider variety of qualities than previously seen in the field of crop biotechnology. Many varieties are directly aimed at shoring up agricultural resilience in the face of climate change, with traits including tolerance to heat, cold, and drought. At the same time, the cost of sequencing an organism’s DNA continues to decrease. This makes it easier to confirm the insertion of multiple transgenes into a plant, as would be necessary to engineer crops to produce a natural herbicide. Such a crop, similar to Bt crops but targeting weeds instead of insects, would reduce reliance on synthetic herbicides while enabling no-till practices that promote soil health. Furthermore, cheap DNA sequencing facilitates access to information about the genomes of many wild relatives of modern crops. Scientists can then use genetic engineering to make wild relatives more productive or introduce wild traits like drought resilience into domesticated varieties. This would increase the genetic diversity of crops available to farmers and help avoid issues inherent to monocultures, most notably the uncontrollable spread of plant diseases.
At present, most crops engineered with CRISPR technology do not contain genes from a different organism (i.e., not transgenic), and thus do not have to face the additional regulatory hurdles that transgenics like Bt crops did. However, crops developed via CRISPR are still excluded from organic farming.
Examples of genetically engineered organisms developed for agricultural goals include:
- Improving sustainability and land conservation: potatoes that are slower to spoil, wheat with enhanced carbon sequestration capacity
- Increasing food quality and nutrition: vegetables with elevated micronutrient content
- Increasing and protecting agricultural yields: higher-yield fish, flood-tolerant rice
- Protecting against plant and animal pests and diseases: blight-resistant chestnut, HLB-resistant citrus
- Cultivating alternative food sources: bacteria for animal-free production of protein
The pool of producers of genetically engineered crops is increasingly diverse. In fact, of the 37 new crops evaluated by APHIS’s Biotechnology Regulatory Service under the updated guidelines since 2021, only three were produced by large (>300 employees) for-profit corporations. Many were produced by startups and/or not-for-profit research institutions. USDA NIFA research grants predominantly fund land-grant universities; other awardees include private nonprofit organizations, private universities, and, in select cases (such as small business grants), private for-profit companies.
Historically, the concept of GMOs has been associated with giant multinational corporations, the so-called Big Ag. The most prevalent GMOs in the last several decades have indeed been produced by industry giants such as Dow, Bayer, and Monsanto. This association has fueled the negative public perception of GMOs in several ways, including:
- Some companies, such as Dow, were responsible for producing the notorious chemical Agent Orange, used to devastating effect in the Vietnam War. While this is an unfortunate shadow on the company, it is unrelated to the properties of genetically engineered crops.
- Companies have been accused of financially disadvantaging farmers by upholding patents on GMO seeds, which prevents farmers from saving seeds from one year’s crop to plant the next season. Companies have indeed enforced seed patents (which generally last about 20 years), but it is important to note that (1) seed-saving has not been standard practice on many American farms for many decades, since the advent of (nonbioengineered) hybrid crops, from which saved seeds will produce an inferior crop, and (2) bioengineered seeds are not the only seeds that can be and are patented.
How to Replicate the Success of Operation Warp Speed
Operation Warp Speed (OWS) was a public-private partnership that produced COVID-19 vaccines in the unprecedented timeline of less than one year. This unique success among typical government research and development (R&D) programs is attributed to OWS’s strong public-private partnerships, effective coordination, and command leadership structure. Policy entrepreneurs, leaders of federal agencies, and issue advocates will benefit from understanding what policy interventions were used and how they can be replicated. Those looking to replicate this success should evaluate the stakeholder landscape and state of the fundamental science before designing a portfolio of policy mechanisms.
Challenge and Opportunity
Development of a vaccine to protect against COVID-19 began when China first shared the virus's genetic sequence in January 2020. In May 2020, the Trump Administration announced OWS to dramatically accelerate vaccine development and distribution. Through the concerted efforts of federal agencies and private entities, a vaccine was ready for the public in January 2021, beating the previous record for vaccine development by about three years. OWS released over 63 million doses within one year, and to date more than 613 million doses have been administered in the United States. By many accounts, OWS was the most effective government-led R&D effort in a generation.
Policy entrepreneurs, leaders of federal agencies, and issue advocates are interested in replicating similarly rapid R&D to solve problems such as climate change and domestic manufacturing. But not all challenges are suited for the OWS treatment. Replicating its success requires an understanding of the unique factors that made OWS possible, which are addressed in Recommendation 1. With this understanding, the mechanisms described in Recommendation 2 can be valuable interventions when used in a portfolio or individually.
Plan of Action
Recommendation 1. Assess whether (1) the majority of existing stakeholders agree on an urgent and specific goal and (2) the fundamental research is already established.
Criterion 1. The majority of stakeholders, including relevant portions of the public, federal leaders, and private partners, agree on an urgent and specific goal.
The OWS approach is most appropriate for major national challenges that are self-evidently important and urgent. Experts in different aspects of the problem space, including agency leaders, should assess the problem to set ambitious and time-bound goals. For example, OWS was conceptualized in April 2020 and announced in May 2020, with the specific goal of delivering 300 million vaccine doses, with initial doses available by January 2021.
Leaders should begin by assessing the stakeholder landscape, including relevant portions of the public, other federal leaders, and private partners. This assessment must include adoption forecasts that consider the political, regulatory, and behavioral contexts. Community engagement—at this stage and throughout the process—should inform goal-setting and program strategy. Achieving ambitious goals will require commitment from multiple federal agencies and the presidential administration. At this stage, understanding the private sector is helpful, but these stakeholders can be motivated further with mechanisms discussed later. Throughout the program, leaders must communicate the timeline and standards for success with expert communities and the public.
Example Challenge: Building Capability for Domestic Rare Earth Element Extraction and Processing
Rare earth elements (REEs) have unique properties that make them valuable across many sectors, including consumer electronics manufacturing, renewable and nonrenewable energy generation, and scientific research. The United States relies heavily on China for the extraction and processing of REEs; the U.S. Geological Survey reports that 78% of U.S. REE imports came from China between 2017 and 2020. Disruption to this supply chain, particularly through export controls enacted by China as foreign policy, would significantly disrupt the production of consumer electronics and energy generation equipment critical to the U.S. economy. Export controls on REEs would create an urgent national problem, making this challenge suitable for an OWS-like effort to build capacity for domestic extraction and processing.
Criterion 2. Fundamental research is already established, and the goal requires R&D to advance it for a specific use case at scale.
Efforts modeled after OWS should focus on advancing or scaling established fundamental research into a product. For example, two of the four vaccine platforms selected for development in OWS were mRNA and replication-defective live vector platforms, which had been extensively studied despite never having been used in FDA-licensed vaccines. The research was advanced enough to give leaders confidence to bet on these platforms as candidates for a COVID-19 vaccine. To mitigate risk, two more-established platforms were also selected.
Technology readiness levels (TRLs) are maturity level assessments of technologies for government acquisition. This framework can be used to assess whether a candidate technology should be scaled with an OWS-like approach. A TRL of at least five means the technology was successfully demonstrated in a laboratory environment as part of an integrated or partially integrated system. In evaluating and selecting candidate technologies, risk is unavoidable, but decisions should be made based on existing science, data, and demonstrated capabilities.
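As a rough illustration of how the TRL framework could serve as a screening filter during program design, the sketch below applies the TRL-of-at-least-five threshold described above. The candidate technologies, their assigned TRL values, and the field names are hypothetical placeholders, not an official assessment.

```python
# Hypothetical sketch: screening candidate technologies against a minimum TRL
# before committing to an OWS-style scale-up effort.
from dataclasses import dataclass

MIN_TRL_FOR_SCALING = 5  # per the text: demonstrated in a laboratory environment
                         # as part of an integrated or partially integrated system

@dataclass
class Candidate:
    name: str
    trl: int  # technology readiness level, 1-9

def screen(candidates: list[Candidate]) -> list[Candidate]:
    """Keep only candidates mature enough for an accelerated scale-up program."""
    return [c for c in candidates if c.trl >= MIN_TRL_FOR_SCALING]

# Illustrative portfolio with made-up TRL assignments.
portfolio = [
    Candidate("membrane distillation", trl=6),
    Candidate("advanced membrane cleaning", trl=5),
    Candidate("early-stage lab concept", trl=3),
]
for c in screen(portfolio):
    print(f"advance to program design: {c.name} (TRL {c.trl})")
```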
Example Challenge: Scaling Desalination to Meet Changing Water Demand
Increases in efficiency and conservation efforts have largely kept the U.S.’s total water use flat since the 1980s, but drought and climate variability are challenging our water systems. Desalination, a well-understood process to turn seawater into freshwater, could help address our changing water supply. However, all current desalination technologies applied in the U.S. are energy intensive and may negatively impact coastal ecosystems. Advanced desalination technologies—such as membrane distillation, advanced pretreatment, and advanced membrane cleaning, all of which are at technology readiness levels of 5–6—would reduce the total carbon footprint of a desalination plant. An OWS for desalination could increase the footprint of efficient and low-carbon desalination plants by speeding up development and commercialization of advanced technologies.
Recommendation 2: Design a program with mechanisms most needed to achieve the goal: (1) establish a leadership team across federal agencies, (2) coordinate federal agencies and the private sector, (3) activate latent private-sector capacities for labor and manufacturing, (4) shape markets with demand-pull mechanisms, and (5) reduce risk with diversity and redundancy.
Design a program using a combination of the mechanisms below, informed by the stakeholder and technology assessment. The organization of R&D, manufacturing, and deployment should follow an agile methodology in which more risk than normal is accepted. The program framework should include criteria for success at the end of each sprint. During OWS, vaccine candidates were advanced to the next stage based on the preclinical or early-stage clinical trial data on efficacy; the potential to meet large-scale clinical trial benchmarks; and criteria for efficient manufacturing.
Mechanism 1: Establish a leadership team across federal agencies
Establish an integrated command structure co-led by a chief scientific or technical advisor and a chief operating officer, a small oversight board, and leadership from federal agencies. The team should commit to operate as a single cohesive unit despite individual affiliations. Since many agencies have limited experience in collaborating on program operations, a chief operating officer with private-sector experience can help coordinate and manage agency biases. Ideally, the team should have decision-making authority and report directly to the president. Leaders should thoughtfully delegate tasks, give appropriate credit for success, hold themselves and others accountable, and empower others to act.
The OWS team was led by personnel from the Department of Health and Human Services (HHS), the Department of Defense (DOD), and the vaccine industry. It included several HHS offices at different stages: the Centers for Disease Control and Prevention (CDC), the Food and Drug Administration (FDA), the National Institutes of Health (NIH), and the Biomedical Advanced Research and Development Authority (BARDA). This structure combined expertise in science and manufacturing with the power and resources of the DOD. The team assigned clear roles to agencies and offices to establish a chain of command.
Example Challenge: Managing Wildland Fire with Uncrewed Aerial Systems (UAS)
Wildland fire is a natural and normal ecological process, but the changing climate and our policy responses are causing more frequent, intense, and destructive fires. Reducing harm requires real-time monitoring of fires with better detection technology and modernized equipment such as UAS. Wildfire management is a complex policy and regulatory landscape with functions spanning multiple federal, state, and local entities. Several interagency coordination bodies exist, including the National Wildfire Coordinating Group, the Wildland Fire Leadership Council, and the Wildland Fire Mitigation and Management Commission, but these efforts largely rely on consensus-based coordination models. The status quo and historical biases among agencies have created silos of effort and prevented technology from scaling to the level required. An OWS for wildland fire UAS would establish a public-private partnership led by experienced leaders from federal agencies, state and local agencies, and the private sector to advance this technology. The team would motivate commitment to the challenge across government, academia, nonprofits, and the private sector to deliver technology that meets ambitious goals. Appropriate teams across agencies would be empowered to refocus their efforts for the duration of the challenge.
Mechanism 2: Coordinate federal agencies and the private sector
Coordinate agencies and the private sector on R&D, manufacturing, and distribution, and assign responsibilities based on core capabilities rather than political or financial considerations. Identify efficiency improvements by mapping processes across the program. This may include accelerating regulatory approval by facilitating communication between the private sector and regulators or by speeding up agency operations. Certain regulations may be suspended entirely if the risks are considered acceptable relative to the urgency of the goal. Coordinators should identify processes that can occur in parallel rather than sequentially. Leaders can work with industry so that operations proceed under the minimum conditions needed to ensure worker and product safety.
The OWS team worked with the FDA to compress traditional approval timelines by running certain steps of the clinical trial process simultaneously. This allowed manufacturers to begin industrial-scale vaccine production before efficacy and safety were fully demonstrated. The team continuously sent data to the FDA, which completed regulatory procedures while in active communication with vaccine companies. Direct lines of communication permitted parallel work streams that significantly reduced the normal vaccine approval timeline.
Example Challenge: Public Transportation and Interstate Rail
Much of the infrastructure across the United States needs expensive repairs, but the U.S. has some of the highest infrastructure construction costs relative to GDP and some of the longest construction times. A major contributor to cost and delay is the approval process, which requires extensive documentation such as an environmental impact statement to comply with the National Environmental Policy Act. An OWS-like coordinating body could identify key pieces of national infrastructure eligible for support, particularly near-end-of-lifespan infrastructure or major transportation arteries. Regulatory burden for selected projects could be reduced by coordinating regulatory approval in close collaboration with the Department of Transportation, the Environmental Protection Agency, and state agencies. The program would need to identify, and set a precedent for, the difference between regulations that can be expedited and key regulations, such as structural reviews, that could serve as bottlenecks.
Mechanism 3: Activate latent private-sector capacities for labor and manufacturing
Activate private-sector capabilities for production, supply chain management, deployment infrastructure, and workforce. Minimize physical infrastructure requirements, establish contracts with companies that have existing infrastructure, and fund construction to expand facilities where necessary. Coordinate with the Department of State to expedite visa approval for foreign talent, and borrow personnel from other agencies to fill key roles temporarily. Train staff quickly with boot camps or accelerators. Efforts to build morale and ensure commitment are critical, as staff may need to work holidays or perform beyond normal expectations. Map supply chains, identify critical components, and coordinate supply. Critical supply chain nodes should be managed by a technical expert in close partnership with suppliers. Use the Defense Production Act sparingly to require providers to prioritize contracts for the procurement, import, and delivery of equipment and supplies. Map the distribution chain from the manufacturer to the endpoint, actively coordinate each step, and anticipate points of failure.
During OWS, the Army Corps of Engineers oversaw construction projects to expand vaccine manufacturing capacity. Expedited visa approval brought in key technicians and engineers for installing, testing, and certifying equipment. Sixteen DOD staff also served in temporary quality-control positions at manufacturing sites. The program established partnerships between manufacturers and the government to address supply chain challenges. Experts from BARDA worked with the private sector to create a list of critical supplies. With this supply chain mapping, the DOD placed prioritized ratings on 18 contracts using the Defense Production Act. OWS also coordinated with DOD and U.S. Customs to expedite supply import. OWS leveraged existing clinics at pharmacies across the country and shipped vaccines in packages that included all supplies needed for administration, including masks, syringes, bandages, and paper record cards.
Example Challenge: EV Charging Network
Electric vehicles (EVs) are becoming increasingly popular due to high gas prices and lower EV prices, stimulated by tax credits for both automakers and consumers in the Inflation Reduction Act. Replacing internal combustion engine vehicles with EVs is aligned with our current climate commitments and reduces overall carbon emissions, even when the vehicles are charged with energy from nonrenewable sources. Studies suggest that current public charging infrastructure has too few functional chargers to meet the demand of EVs currently on the road. Reliable and available public chargers are needed to increase public confidence in EVs as practical replacements for gas vehicles. Leveraging latent private-sector capacity could include expanding the operations of existing charger manufacturers, coordinating the deployment and installation of charging stations and requisite infrastructure, and building a skilled workforce to repair and maintain this new infrastructure. In February 2023, the Biden Administration announced actions to expand charger availability through partnerships with over 15 companies.
Mechanism 4: Shape markets with demand-pull mechanisms
Use contracts and demand-pull mechanisms to create demand and minimize risks for private partners. Other Transaction Authority can also be used to procure capabilities quickly by bypassing elements of the Federal Acquisition Regulation. The types of demand-pull mechanisms available to agencies are:
- Volume guarantees: Commits the buyer (i.e., a federal agency) to purchase a minimum quantity of an existing product at a set price from multiple vendors.
- Advance purchase agreements: Establishes a contract between a single buyer and a single supplier in which the buyer provides advance funding for resources to manufacture a product or provide a service.
- Advance market commitments: Engages multiple suppliers or producers to produce a product or service by providing advance funds.
- Prize competitions: Solicits the development of creative solutions for a particular, well-defined problem from a wide range of actors, including individuals, companies, academic teams, and more, and rewards them with a cash prize.
- Challenge-based acquisitions: Solicits creative solutions for a well-defined problem and rewards success by purchasing the solution.
- Milestone payments: Provides a series of payments contingent on achieving defined objectives throughout the contract timeline.
HHS used demand-pull mechanisms to develop the vaccine candidates during OWS. This included funding large-scale manufacturing and committing to purchase successful vaccines. HHS made up to $483 million in support available for Phase 1 trials of Moderna’s mRNA candidate vaccine. This agreement was increased by $472 million for late-stage clinical development and Phase 3 clinical trials. Several months later, HHS committed up to $1.5 billion for Moderna’s large-scale manufacturing and delivery efforts. Ultimately the U.S. government owned the resulting 100 million doses of vaccines and reserved the option to acquire more. Similar agreements were created with other manufacturers, leading to three vaccine candidates receiving FDA emergency use authorization.
Example Challenge: Space Debris
Low-earth orbit includes dead satellites and other debris that pose risks for existing and future space infrastructure. Increased interest in commercialization of low-earth orbit will exacerbate a debris count that is already considered unstable. Since national space policy generally requires some degree of engagement with commercial providers, the U.S. would need to include industry in this effort. The costs of active space debris removal, satellite decommissioning and recycling, and other cleanup activities are largely unknown, which dissuades novel business ventures. Nevertheless, large debris objects that pose the greatest collision risks need to be prioritized for decommissioning. Demand-pull mechanisms could be used to create a market for sustained space debris mitigation, such as an advance market commitment for the removal of large debris items. Commitments for removal could be paired with a study across the DOD and NASA to identify large, high-priority items for removal. Another mechanism to consider is fixed milestone payments, which NASA has used in past partnerships with commercial partners, most notably SpaceX, to develop commercial orbital transportation systems.
Mechanism 5: Reduce risk with diversity and redundancy
Engage multiple private partners on the same goal to enable competition and minimize the risk of overall program failure. Since resources are not infinite, the program should incorporate evidence-based decision-making with strict criteria and a rubric, which also ensure fair competition and avoid creating a single national champion.
During OWS, four vaccine platform technologies were considered for development: mRNA, replication-defective live-vector, recombinant-subunit-adjuvanted protein, and attenuated replicating live-vector. The first two had never been used in FDA-licensed vaccines but showed promise, while the latter two were established in FDA-licensed vaccines. Following a risk assessment, six vaccine candidates using three of the four platforms were advanced. Redundancy was incorporated in two dimensions: three different vaccine platforms and two separate candidates per platform. The manufacturing strategy also included redundancy, as several companies were awarded contracts to produce needles and syringes. Diversifying sources for common vaccination supplies reduced the overall risk of failure at each node in the supply chain.
Example Challenge: Alternative Battery Technology
Building infrastructure to capture energy from renewable sources requires long-term energy storage to manage the variability of renewable energy generation. Lithium-ion batteries, commonly used in consumer electronics and electric vehicles, are a potential candidate, as research and development has driven significant cost declines since the technology's introduction in the 1990s. However, performance declines when storing energy over long periods, and the extraction of the critical minerals these batteries require is still relatively expensive and harmful to the environment. The limitations of lithium-ion batteries could be addressed by investing in several promising alternative battery technologies that use cheaper materials such as sodium, sulfur, and iron. This portfolio approach will enable competition and increase the chance that at least one option is successful.
Conclusion
Operation Warp Speed was a historic accomplishment on the level of the Manhattan Project and the Apollo program, but its unique approach is not appropriate for every challenge. The methods and mechanisms are best suited for challenges in which stakeholders agree on an urgent and specific goal, and the goal requires scaling a technology with established fundamental research. Nonetheless, the individual mechanisms of OWS can effectively address smaller challenges. Those looking to replicate the success of OWS should thoroughly evaluate the stakeholder and technology landscape to determine which mechanisms are required or feasible.
Acknowledgments
This memo was developed from notes on presentations, panel discussions, and breakout conversations at the Operation Warp Speed 2.0 Conference, hosted on November 17, 2022, by the Federation of American Scientists, 1Day Sooner, and the Institute for Progress to recount the success of OWS and consider future applications of the mechanisms. The attendees included leadership from the original OWS team, agency leaders, Congressional staffers, researchers, and vaccine industry leaders. Thank you to Michael A. Fisher, FAS senior fellow, who contributed significantly to the development of this memo through January 2023. Thank you to the following FAS staff for additional contributions: Dan Correa, chief executive officer; Jamie Graybeal, director, Defense Budgeting Project (through September 2022); Sruthi Katakam, Scoville Peace Fellow; Vijay Iyer, program associate, science policy; Kai Etheridge, intern (through August 2022).
The OWS approach is unlikely to succeed for challenges that are too broad or too politically polarizing. Curing cancer is one example: while a cure is incredibly urgent and the goal is unifying, there are too many variations of cancer, each presenting unique research and development challenges. Climate change is another example: particular climate challenges may be too politically polarizing to motivate the commitment required.
No topic is immune to politicization, but some issues have existing political biases that will hinder application of the mechanisms. Challenges with bipartisan agreement and public support should be prioritized, but politicization can be managed with a comprehensive understanding of the stakeholder landscape.
The pandemic created an emergency environment that likely motivated behavior change at agencies, but OWS demonstrated that better agency coordination is possible.
In addition to using processes like stakeholder mapping, the leadership team must include experts across the problem space who are deeply familiar with key stakeholder groups and existing power dynamics. The problem space includes impacted portions of the public; federal agencies and offices; the administration; state, local, Tribal, and territorial governments; and private partners.
OWS socialized the vaccination effort through HHS’s Office of Intergovernmental and External Affairs, which established communication with hospitals, healthcare providers, nursing homes, community health centers, health insurance companies, and more. HHS also worked with state, local, Tribal, and territorial partners, as well as organizations representing minority populations, to address health disparities and ensure equity in vaccination efforts. Despite this, OWS leaders expressed that better communication with expert communities was needed, as the public was confused by contradictory statements from experts who were unaware of the program details.
Future efforts should create channels for bottom-up communication from state, local, Tribal, and territorial governments to federal partners. Encouraging feedback through community engagement can help inform distribution strategies and ensure adoption of the solution. Formalized data-sharing protocols may also help gain buy-in and confidence from relevant expert communities.
Possibly, but it would require more coordination and alignment between the countries involved. This could include applying the mechanisms within existing international institutions to achieve existing goals. The mechanisms could apply with revisions, such as coordination among national delegations and nongovernmental organizations, activating nongovernmental capacity, and creating geopolitical incentives for adoption.
The team included HHS Secretary Alex Azar; Secretary of Defense Mark Esper; Dr. Moncef Slaoui, former head of vaccines at GlaxoSmithKline; and General Gustave F. Perna, former commanding general of U.S. Army Materiel Command. This core team combined scientific and technical expertise with military and logistical backgrounds. Dr. Slaoui’s familiarity with the pharmaceutical industry and the vaccine development process allowed OWS to develop realistic goals and benchmarks for its work. This connection was also critical in forging robust public-private partnerships with the vaccine companies.
It depends on the challenge. Determining which mechanism to use for a particular project requires a deep understanding of the relevant R&D, manufacturing, and supply chain landscapes in order to diagnose market gaps. For example, if manufacturing process technologies are needed, prize competitions or challenge-based acquisitions may be most effective. If manufacturing volume must increase, volume guarantees or advance purchase agreements may be more appropriate. Advance market commitments or milestone payments can motivate industry to increase efficiency. OWS used a combination of volume guarantees and advance market commitments to fund the development of vaccine candidates and secure supply.
Creating Equitable Outcomes from Government Services through Radical Participation
Government policies, products, and services are created without the true and full design participation and expertise of the people who will use them: the public, including citizens, refugees, and immigrants. As a result, the government often replicates private sector anti-patterns, using or producing oppressive, disempowering, and colonial policies through products and services that embody bias, limit access, create real harm, and discriminate against underutilized communities on the basis of various identities, in violation of the President's Executive Order on Equity. Examples include life-altering police use of racially and sexually biased facial recognition products, racial discrimination in access to life-saving Medicaid services and SNAP benefits, and racist child welfare service systems.
The Biden-Harris Administration should issue an executive order to embed Radical Participatory Design (RPD) into the design and development of all government policies, products, and services, and to require all federally-funded research to use Radical Participatory Research (RPR). Using an RPD and RPR approach makes the Executive Order on Racial Equity, Executive Order on Transforming the Customer Experience, and the Executive Order on DEIA more likely to succeed. Using RPD and RPR as the implementation strategy is an opportunity to create equitable social outcomes by embodying equity on the policy, product and service design side (Executive Order on Racial Equity), to improve the public’s customer experience of the government (Executive Order on Transforming the Customer Experience, President’s Management Agenda Priority 2), and to lead to a new and more just, equitable, diverse, accessible, and inclusive (JEDAI) future of work for the federal government (Executive Order on DEIA).
Challenge and Opportunity
The technology industry is disproportionately white and male. Compared to private industry overall, white people, men, and Asian people are overrepresented, while Latinx people, Black people, and women are underrepresented. Only 26% of technology positions in the U.S. are held by women, though women represent 57% of the U.S. workforce. Even worse, women of color hold 4% of technology positions even though they are 16% of the population. Similarly, Black Americans are 14% of the population but hold 7% of tech jobs. Latinx Americans hold only 8% of tech jobs while comprising 19% of the population. Representation decreases even further in technology leadership roles. In FY2020, the federal government spent $392.1 billion on contracted services, including services to build products. Latinx people, African Americans, Native Americans, and women are underrepresented in the contractor community.
The lack of diversity in designers and developers of the policies, products, and services we use leads to harmful effects like algorithmic bias, automatic bathroom water and soap dispensers that do not recognize darker skin, and racial bias in facial recognition (mis)identification of Black and Brown people.
With a greater expectation of equity from government services, the public experiences greater disappointment when government policies, services, and products are biased, discriminatory, or harmful. Examples include inequitable public school funding services, race and poverty bias in child welfare systems, and discriminatory algorithmic hiring systems used in government.
The federal government has tried to improve the experience of its products and services through methodologies like Human-centered Design (HCD). In HCD, the design process is centered on the community that will use the design, typically beginning with research interviews or observations. Beyond the research interactions with community members, designers are supposed to carry empathy for the community all the way through the design, development, and launch process. Unfortunately, given the aforementioned negative outcomes of government products and services for various communities, empathy is often absent. What empathy is generated does not persist long enough to influence the design process. Ultimately, individual appeals to empathy are inadequate for generating systems-level change. Scientific studies show that white people, who make up the majority of technologists and policy-makers, have a reduced capacity for empathy for people of other and similar backgrounds. As a result, the push for equity in government services, products, and policies remains, leading to President Biden's Executive Order on Advancing Racial Equity and Support for Underserved Communities and, subsequently, the Executive Order on Further Advancing Racial Equity and Support for Underserved Communities.
The federal government lacks processes that embed empathy throughout the lifecycle of policy, product, and service design and that reflect the needs of community groups. Instead of trying to build empathy in designers who have no experiential knowledge, we can create empathetic processes and organizations by embedding lived experience on the team.
Radical Participatory Design (RPD) is an approach to design in which the community members for whom one is designing are full-fledged members of the research, design, and development team. In traditional participatory design, designers engage the community at certain times and otherwise work, plan, analyze, or prepare alone before and after those engagements. In RPD, the community members are always there because they are on the team; there are no meetings, phone calls, or planning sessions without them.
RPD has a few important characteristics. First, the community members are always present and leading the process. Second, the community members outnumber the professional designers, researchers, or developers. Third, the community members own the artifacts, outcomes, and narratives around the outcomes of the design process. Fourth, community members are compensated equitably as they are doing the same work as professional designers. Fifth, RPD teams are composed of a qualitatively representative sample (including all the different categories and types of people) of the community.
Embedding RPD in the government connects the government to a larger movement toward participatory democracy. Examples include the Philadelphia Participatory Design Lab, the Participatory City Making Lab, the Center for Lived Experience, the Urban Institute's participatory Resident Researchers, and Health and Human Services' "Methods and Emerging Strategies to Engage People with Lived Experience." Numerous case studies show the power of participatory design to reduce harm and improve design outcomes. RPD can maximize this power by infusing equity, as people with lived experience choose, check, and direct the process.
As the adoption of RPD increases across the federal government, the prevalence and incidence of harm, bias, trauma, and discrimination in government products and services will decrease, aiding the implementation of the executive orders on Advancing Racial Equity and Support for Underserved Communities and Further Advancing Racial Equity and Support for Underserved Communities, and helping ensure that AI products and services comply with the OSTP AI Bill of Rights. Additionally, RPR aligns with OSTP's actions to advance open and equitable research. Second, the reduction of harm, discrimination, and trauma improves the customer experience (CX) of government services, aiding the implementation of the Executive Order on Transforming the Customer Experience, the President's Management Agenda Priority 2, and the CAP goal on Customer Experience. An improved CX will increase community adoption of, use of, and engagement with potentially helpful and life-supporting government services that underutilized people need. RPD highlights the important connection between equity and CX and creates a way to link the two executive orders: you cannot claim excellent CX when the CX is inequitable and entire underutilized segments of the public have a harmful experience.
Third, instead of seeking the intersection of business needs and user needs as in the private sector, RPD will move the country closer to its democratic ideals by equitably aligning the needs of the people with the needs of the government of the people, by the people, and for the people. There are various examples where the government acts like a separate entity completely unaligned with the will of a majority of the public (gun control, abortion). Project by project, RPD helps align the needs of the people and the needs of the government of the people when representative democracy does not function properly.
Fourth, all community members, from all walks of life, brought into government to do participatory research and design will gain or refine skills they can then use to stay in government policy, product, and service design or to get a job outside of government. The workforce outcomes of RPD further diversify policy, product, and service designers and researchers both inside and outside the federal government, aligning with the Executive Order on DEIA in the Federal Workforce.
Plan of Action
The use of RPD and RPR in government is the future of participatory government and a step towards truly embodying a government of the people. RPD must work at the policy level as well, as policy directs the creation of services, products, and research. Equitable product and service design cannot overcome inequitable and discriminatory policy. The following recommended actions are initial steps to embody participatory government in three areas: policy design, the design and development of products and services, and funded research. Because all three areas occur across the federal government, executive action from the White House will facilitate the adoption of RPD.
Policy Design
An executive order from the president should direct agencies to use RPD when designing agency policy. The order should establish a new Radical Participatory Policy Design Lab (RPPDL) for each agency with the following characteristics:
- Embodies a qualitatively representative sample of the public target audience impacted by the agency
- Includes a qualitatively representative sample of agency employees who are also impacted by agency policy
- Designs policy through this radical participatory design team
- Sets budget policy through participatory budgeting (Grand Rapids, NYC, Durham, and HUD examples)
- Assesses agency programs that affect the public through participatory appraisals and participatory evaluations
- Rotates community policy designers into and out of the lab on six-month renewable terms
- Compensates community policy designers equitably for their time
- Offers community policy designers jobs to stay in government based on their policy experience, or, through the office that houses the RPPDL, assists them in finding roles outside of government based on their experience and desires
- Establishes an RPD authorization to allow government policy employees to compensate community members outside of a grant, cooperative agreement, or contract (GSA TTS example)
The executive order should also create a Chief Experience Officer (CXO) for the U.S. as a White House role. The Office of the CXO (OCXO) would coordinate CX work across the government in accordance with the Executive Order on Transforming the CX, the PMA Priority 2, the CX CAP goal, and the OMB Circular A-11 280. The executive order would focus the OCXO on coordinating, approving, and advising on RPD work across the federal government, including the following initiatives:
- Improve the public experience of high-impact, trans-agency journeys by managing the Office of Management and Budget (OMB) life experience projects
- Facilitate a CXO council of all federal agency CXOs.
- Advise various agency CXOs and other officials on embedding RPD into their policy, service, and product design and development work.
- Work with agencies to recruit and create a list of civil society organizations who are willing to help recruit community members for RPD and RPR projects.
- Recruit RPD public team members and coordinate the use of RPD in the creation of White House policy.
- Coordinate with the director of OMB and the Equitable Data Working Group to create
- an equity measure of the social outcomes of the government’s products, services, and policies,
- a public CX measurement of the entire federal government.
- Serve as a member of the White House Steering Committee on Equity established by the Executive Order on Further Advancing Equity.
- Serve as a member of the Equitable Data Working Group established by the Executive Order on Advancing Racial Equity.
- Strategically direct the work of the OCXO in order to improve the equity and CX metrics.
- Embed equity measures in CX measurement and data reporting template required by the OMB Circular A-11 280. CX success requires healthy, equitable CX across various subgroups, including underutilized communities, connecting the Executive Order on Transforming the CX to the Executive Order on Advancing Racial Equity.
- Update the OMB Circular A-11 280’s CX Capacity Assessment tool and the Action Plan template to include equity as a central component.
- Evaluate and assess the utilization of RPD in policy, product, and service design by agencies across the government.
Due to the distributed nature of the work, the funding for the various RPPDLs and the OCXO should come from money the director of OMB has identified and added to the budget the President submits to Congress, according to Section 6 of the Executive Order on Advancing Racial Equity. Agencies should also utilize funds allocated to the Agency Equity Teams required by the Executive Order on Further Advancing Racial Equity.
Product and Service Design
The executive order should mandate that all research, design, and delivery of agency products and services for the public be done through RPR and RPD. RPD should be used both for in-house work and for work contracted through grants, contracts, or cooperative agreements.
On in-house projects, funding for the RPD team should come from the project budget. For grants, contracts, and cooperative agreements, funding for the RPD team should come from the acquisition budget. As a result, the labor costs will increase since there are more designers on the project. The non-labor component of the project budget will be less. A slightly lower non-labor project budget is worth the outcome of improved equity. Agency offices can compensate for this by requesting a slightly higher project budget for in-house or contracted design and development services.
In support of the Executive Order on Transforming the CX, the PMA Priority 2, and the CX CAP goal, OMB should amend the OMB Circular A-11 280 to direct High Impact Service Providers (HISPs) to utilize RPD in their service work.
- HISPs must embed RPD in their product and service research, design, development, and delivery.
- HISPs must include an equity component in their CX Capacity Assessment and CX Action Plan in line with guidance from the CXO of the U.S.
- Following applicable laws, HISPs should let customers volunteer demographic information during customer experience data collection in order to assess the CX of various subgroups.
- Agency annual plans should include both CX and equity indicator goals.
- Equity assessment data and CX data for various subgroups and underutilized communities must be reported in the OMB-mandated data dashboard.
- Each agency should create an RPD authorization to allow government employees and in-house design teams to compensate community members outside of a grant, cooperative agreement, or contract (GSA TTS example).
OSTP should add RPD and RPD case studies as example practices in OSTP’s AI Bill of Rights. RPD should be listed as a practice that can affect and reinforce all five principles.
Funded Research
The executive order should also mandate that all government-funded, use-inspired research that is about communities or intended to be used by people or communities be done through RPR. To determine whether a particular research project is use-inspired, the government funding agency should ask the following questions prior to soliciting researchers:
- For technology research, is the technology readiness level (TRL) 2 or higher?
- Is the research about people or communities?
- Is the research intended to be used by people or communities?
- Is the research intended to create, design, or guide something that will be used by people and communities?
If the answer to any of the questions is yes, the funding agency should require the funded researchers to use an RPR approach.
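As a minimal illustration of this screening rule (using hypothetical field names and an invented example project, not any agency's actual intake form), the yes-to-any-question test could be expressed as follows:

```python
# Hypothetical sketch of the use-inspired screening test described above.
# Field names and the example project are illustrative assumptions only.

def requires_rpr(project: dict) -> bool:
    """Return True if any screening question is answered 'yes', triggering the RPR requirement."""
    questions = [
        project.get("is_technology_research", False) and project.get("trl", 0) >= 2,
        project.get("about_people_or_communities", False),
        project.get("intended_for_use_by_people_or_communities", False),
        project.get("guides_something_used_by_people_or_communities", False),
    ]
    return any(questions)

# Example: a community-facing technology project at TRL 4 would require RPR.
example_project = {
    "is_technology_research": True,
    "trl": 4,
    "about_people_or_communities": False,
    "intended_for_use_by_people_or_communities": True,
    "guides_something_used_by_people_or_communities": False,
}
print(requires_rpr(example_project))  # True
```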
Funding for the RPR team should come from the research grant or other research funding. Researchers can use the RPR requirement to estimate how much funding to request in the proposal.
OSTP should add RPR and the executive order to their list of actions to advance open and equitable research. RPR should be listed as a key initiative of the Year of Open Science.
Conclusion
In order to address inequity, the public’s lived experience should lead the design and development process of government products and services. Because many of those products and services are created to comply with government policy, we also need lived experience to guide the design of government policy. Embedding Radical Participatory Design in government-funded research as well as policy, products, and services reduces harm, creates equity, and improves the public customer experience. Additionally, RPD connects and embeds equity in CX, moves us toward our democratic ideals, and creatively addresses the future of work by diversifying our policy, product, and service design workforce.
Because we do not physically hold digital products, the line between a software product and a software service is thin. Usually, a product is an offering or part of an offering that involves one interaction or touchpoint with a customer. In contrast, a service involves multiple touchpoints both online and offline, or multiple interactions both digital and non-digital.
For example, Google Calendar can be considered a digital product. A product designer for Google Calendar might work on designing its software interface, colors, options, and flows. However, a library is a service. As a library user, you might look for a book on the library website. If you can’t find it, you might call the library. The librarian might ask you to come in. You go in and work with the librarian to find the book. After realizing it is not there, the librarian might then use a software tool to request a new book purchase. Thus, the library service involved multiple touchpoints, both online and offline: a website, a phone line, an in-person service in the physical library, and an online book procurement tool.
Most of the federal government’s offerings are services. Examples like Medicare, Social Security, and veterans benefits involve digital products, in-person services in a physical building, paper forms, phone lines, email services, etc. A service designer designs the service and the mechanics behind the service in order to improve both the customer experience and the employee experience across all touchpoints, offline and online, across all interactions, digital and non-digital.
Participatory design (PD) has many interpretations. Sometimes PD simply means interviewing research participants: because the interviewees are "participants," the work is considered participatory. Sometimes PD means a specific activity or method that is participatory. Sometimes practitioners use PD to mean a way of doing an activity. For example, we can do a design studio session with just designers, or we can invite some community members to take part in a 90-minute session. PD can also be used to indicate a methodology: a collection of methods or activities, or a guiding philosophy and set of principles that help practitioners choose a particular method or activity at a particular point in a process.
In all the above ways of interpreting PD, there are times when the community is present and times when they are not. Moreover, the community members are never leading the process.
"Radical" comes from the Latin word "radix," meaning root. RPD means design in which the community participates "to the root": fully, completely, from beginning to end. There is no planning and there are no meetings or phone calls where the community is not present, because the community is the team.
Peer review is similar to an Institutional Review Board (IRB). A participatory version of this could be called a Community Review Board (CRB). The difficulty is that a CRB can only reject a research plan; a CRB does not create the proposed research plans. Because a CRB does not ensure that great research plans are created and proposed, it can only reduce harm. It cannot create good.
Equality means treating people the same. Equity means treating people differently to achieve equal outcomes. CRBs achieve equality only in approval power, by equally including community members in the approval process. CRBs fail to achieve equity in the social outcomes of products and services because community members are missing from the research plan creation process, the research plan implementation process, and the development process of policy, products, and services, where inequity can enter. To achieve equal outcomes (equity), community members' lived experiential knowledge is needed throughout the entire process, and especially in deciding what to propose to a CRB.
Still, a CRB can be a preliminary step before RPR. Unfortunately, IRBs are only required for U.S. government-funded research with human subjects. In practice, this requirement is not interpreted to apply to the approval of design research for policy, products, and services, even though such research usually includes human subjects. Applying participatory CRBs to approve all research, including design research for policy, products, and services, can be an initial step or a pilot.
A good analogy is that of cooking. It is quite helpful for everyone to know how to cook, and most of us cook in some capacity. Yet there are people who attend culinary school and become chefs or cooks. Has the fact that individual people can and do cook eliminated the need for chefs? No. Chefs and cooks are useful in various situations: eating at a restaurant, catering an event, creating cookbooks, giving lessons, and so on.
The main idea is that chefs have mainstream institutional knowledge learned from books and universities or cooking schools. But that is not the only type of knowledge. There is also lived, experiential knowledge, as well as community, embodied, relational, energetic, intuitive, aesthetic, and spiritual knowledge. It is common to meet amazing chefs who have never been to culinary school but simply learned to cook through the lived experience of experimentation and having to cook every day for X people. Some learned to cook through relational and community knowledge passed down in their culture through parents, mothers, and aunties. Sometimes, famous chefs will go and learn the knowledge of a particular culture from people who never attended culinary school. The chefs will appropriate that knowledge and then create a cookbook to sell, marketing a fusion cuisine infused with the culture whose culinary knowledge they appropriated.
Similarly, everyone designs. It is not enough to be tech-savvy or an innovation and design expert. The most important knowledge to have is the lived experiential, community, relational, and embodied knowledge of the people for whom we are designing. When lived experience leads, the outcomes are amazing. Putting lived experience alongside professional designers can be powerful as well. Professional designers are still needed, as their knowledge can help improve the design process. Professionals simply cannot lead the process, cannot lead alone, and cannot be the only knowledge base, because inequity then enters the system more easily.
To realize the ambitions of this policy proposal, full-time teams will be needed. The RPPDLs designing policy require full-time roles due to the amount and various levels of policy to design. For products and services, however, some RPD teams may be part-time. For example, improving an existing product or service may be one of many projects a government team is conducting; if the team is only working on the project 50% of the time, it may only require a group of part-time community members. On the other hand, designing and developing a greenfield product or service that does not yet exist may require full-time work from RPD team members. Full-time projects will need full-time community members. For part-time projects, community members can work on multiple projects to reach full-time capacity.
Team members can receive non-monetary compensation like a gift card, wellness services, or child care. However, it is best practice to allow the community member to choose. Most will choose monetary compensation like grants, stipends, or cash payments.
Ultimately they should be paid at a level equal to that of the mainstream institutional experts (designers and developers) who are being paid to do the same work alongside the community members. Remember to compensate them for travel and child care when needed.
RPD is an opportunity for the government to lead the way. The private sector can make money without equitably serving everyone, so it has little incentive to do so. Nonprofits do not carry the level of influence the federal government carries. The federal government has more money to engage in this work than state or local governments. The federal government has a mandate to be equitable in its products and services and their delivery, and if this work goes well, the government can pass a law mandating that organizations in the private and nonprofit sectors undertake the same transformation. The government has a long history of using policy and services to discriminate against various underutilized groups, so the federal government should be the first to use RPD to move toward equity. Ultimately, the federal government has a huge influence on the lives of citizens, immigrant residents, and refugees, and the opportunity to move us toward equity is great.
Embedding RPD in government products and services should also be done at the state and local level. Each level will require different memos due to the different mechanics, budgets, dynamics, and policies. The hope is that RPD work at the federal government can help spark RPD work at various state, local, and county governments.
Possible first steps include:
- Mandate that all use-inspired research, including design research for policy, products, and services, be reviewed by a Community Review Board (CRB) for approval. If the research is not approved, the research, design, and development cannot move forward.
- Mandate only that all government-funded, use-inspired research be conducted using RPR. Focusing on research funding alone shifts the payment of RPR community teams to the grant recipients only.
- Mandate that all government-funded, use-inspired research use RPR and that all contracted research, design, development, and delivery of government products and services use RPD. Focusing on research funding and contracted product and service work shifts the payment of RPR and RPD community team members to the grant recipients, vendors, and contract partners.
- Choose a pilot agency, like NIH, to start.
- Start with all HISPs instead of all federal government agencies, using RPD and RPR as the implementation strategy for implementing only the Executive Order on Transforming the Customer Experience, which focuses on the HISPs.
- Start with a high-profile set of projects, such as the OMB life experience projects, and later advance to an entire pilot agency.
- Focus on embedding equity measures in CX. After equity is embedded in CX, start by choosing a pilot agency, benchmarking equity and CX, piloting RPD, and measuring the change attributable to RPD. This allows time to build more evidence.
There are many existing case studies of participatory design.
- Decolonizing Participatory Design: Memory Making in Namibia
- Toward a more just library: Participatory design with Native American students
- Crossing Methodological Borders: Decolonizing Community-Based Participatory Research
- Different eyes/open eyes
- A Case Study Measuring the Impact of a Participatory Design Intervention on System Complexity and Cycle Time in an Assemble-to-Order System
There are also case studies of participatory design in the public sector.
In modern product and service development, products and services never transition into a pure operations and maintenance phase. They are continually being researched, designed, and developed due to continuous changes in human expectations, migration patterns, technology, human preferences, globalization, and more. If community members were left out of research, design, and development work after a service or product launches, the service or product would no longer be designed and developed using an RPD approach. As long as the service or product is active and in service, radical participation in its continuous research, design, and development is needed.
Protecting Civil Rights Organizations and Activists: A Policy Addressing the Government’s Use of Surveillance Tools
In the summer of 2020, some 15 to 26 million people across the country participated in protests against the tragic killings of Black people by law enforcement officers, making it the largest movement in U.S. history. In response, local and state government officials and federal agencies deployed surveillance tools on protesters in an unprecedented way. The Department of Homeland Security used aerial surveillance on protesters across 15 cities, and several law enforcement agencies engaged in social media monitoring of activists. But there is still a lot the public does not know, such as what other surveillance tactics were used during the protests, where this data is being stored, and for what future purposes it may be used.
Government agencies have for decades secretly used surveillance tactics on individual activists, such as during the 1950s when the FBI surveilled human rights activists and civil rights organizations. These tactics have had a detrimental effect on political movements, causing people to forgo protesting and activism out of fear of such surveillance. The First Amendment protects freedom of speech and the right to assemble, but allowing government entities to engage in covert surveillance tactics strips people of these rights.
It also damages people’s Fourth Amendment rights. Instead of agencies relying on the court system to get warrants and subpoenas to view an individual’s online activity, today some agencies are entering into partnerships with private companies to obtain this information directly. This means government agencies no longer have to meet the bare minimum of having probable cause before digging into an individual’s private data.
This proposal offers a set of actions that federal agencies and Congress should implement to preserve the public’s constitutional rights.
- Federal agencies should disclose what technologies they are using, how they are using them, and the effect on civil rights. The Department of Justice should use this information to investigate agencies and ensure their practices aren't violating the public's civil rights.
- The Office of Science and Technology Policy and the Department of Justice should work with the Office of the Attorney General to revise Attorney General Guidelines for the FBI.
- Congress should pass the Fourth Amendment Is Not For Sale Act.
- Congress should amend the Stored Communications Act of 1986 to compel companies to ensure user data isn’t sold to third parties who will then sell user data to government entities.
- Congress should pass border search exception legislation.
Challenge and Opportunity
Government entities were surveilling activists and civil rights organizations long before the 2020 protests. Between 1956 and 1971, the FBI engaged in surveillance tactics to disrupt, discredit, and destroy many civil rights organizations, such as the Black Panther Party, the American Indian Movement, and the Communist Party. Some of these tactics included illegal wiretaps, infiltration, misinformation campaigns, and bugs. This program was known as COINTELPRO, and the FBI's goal was to destroy organizations and activists whose political agendas it viewed as radical and as challenging "the existing social order." While the FBI didn't completely achieve this goal, its efforts did have detrimental effects on activist communities: members were imprisoned or killed for their activist work, and membership in organizations like the Black Panther Party declined significantly until the party eventually dissolved in 1982.
After COINTELPRO was revealed to the public, reforms were put in place to curtail the FBI's surveillance tactics against civil rights organizations, but those reforms were rolled back after the September 11 attacks. Since 9/11, it has been revealed, mostly through FOIA requests, that the FBI has surveilled the Muslim community, Occupy Wall Street, Standing Rock protesters, people protesting the murder of Freddie Gray, Black Lives Matter protests, and more. Today, the FBI has more technological tools at its disposal that make mass surveillance and data collection on activist communities incredibly easy.
In 2020, people across the country used social media sites like Facebook to increase engagement and turnout in local Black Lives Matter protests. The FBI's Joint Terrorism Task Forces responded by visiting people's homes and workplaces to question them about their organizing, causing people to feel alarmed and terrified. U.S. Customs and Border Protection (CBP) also got involved, deploying a drone over Minneapolis to provide live video to local law enforcement. The Acting Commissioner of CBP also tweeted that CBP was working with law enforcement agencies across the nation during the 2020 Black Lives Matter protests. CBP involvement in civil rights protests is incredibly concerning given the agency's ability to circumvent the Fourth Amendment and conduct warrantless searches under the border search exception. (Federal regulations and federal law give CBP the authority to conduct warrantless searches and seizures within 100 miles of the U.S. border, where approximately two-thirds of the U.S. population resides.)
The longer government agencies are allowed to surveil people who are simply organizing for progressive policies, the more people will be terrified to voice their opinions about the state of affairs in the United States. This surveillance has had detrimental effects on people's First and Fourth Amendment rights and will continue to erode them as technology improves and government entities gain access to more advanced tools. Now is the time for government agencies and Congress to act to prevent further abuse of the public's rights to protest and assemble. A country that uses these tools to watch its residents will ultimately become a country with little to no civic engagement and the complete silencing of marginalized communities.
While there is substantial opportunity to address mass surveillance and protect people's constitutional rights, government officials have refused to address government surveillance for decades, despite public protest. In the few instances where government officials put up roadblocks to stop surveillance tactics, those roadblocks were later removed or reformed to allow the previous surveillance to continue. The lack of political will among members of Congress to address these issues has been a huge challenge for civil rights organizations and individuals fighting for change.
Plan of Action
Regulations need to be put in place to restrict federal agency use of surveillance tools on the public.
Recommendation 1. Federal agencies must disclose the technologies they are using to surveil individuals and organizations, as well as the frequency with which they use them. Agencies should publish this information on their websites and produce a more comprehensive report for the Department of Justice (DOJ) to review.
Every six months, Google releases the number of requests it receives from government agencies asking for user information. Google informs the public on the number of accounts that were affected by those requests and whether the request was a subpoena, search warrant, or other court order. The FBI also discloses the number of DNA samples it has collected from individuals in each U.S. state and territory and how many of those DNA samples aided in investigations.
Likewise, government agencies should be required to disclose the names of the technologies they are purchasing to surveil people in the United States as well as the number of times they use this technology within the year. Government entities should no longer be able to hide which technologies their departments are using to watch the public. People should be informed on the depth of the government’s use of these tools so they have a chance to voice their objections and concerns.
Federal agencies also need to publish a more comprehensive report for the DOJ to review. This report should include what technologies were used and where, what categories of organizations they were used against, the racial demographics of the people who were surveilled, and possible threats to civil rights. The DOJ will use this information to investigate whether agencies are violating the First or Fourth Amendment in using these technologies against the public.
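To make the expected contents of such a report concrete, below is a minimal sketch of what one structured disclosure record might look like. The field names are hypothetical illustrations only; the actual reporting format would be defined by the DOJ.

```python
# A minimal sketch of one structured disclosure record, using hypothetical
# field names; the actual reporting format would be defined by the DOJ.
from dataclasses import dataclass, field

@dataclass
class SurveillanceToolDisclosure:
    agency: str                                   # reporting agency
    technology_name: str                          # name of the surveillance tool
    vendor: str                                   # company the tool was purchased from
    uses_this_year: int                           # number of deployments in the reporting year
    locations: list[str] = field(default_factory=list)          # where the tool was used
    target_categories: list[str] = field(default_factory=list)  # e.g., "advocacy organization"
    racial_demographics: dict[str, int] = field(default_factory=dict)  # aggregate counts only
    civil_rights_concerns: str = ""               # narrative assessment of First/Fourth Amendment risks
```

Keeping demographic information aggregated, as sketched above, is one way a report could inform DOJ review without disclosing the identities of individuals.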
Agencies may object to releasing this information because of the possibility of it interfering with investigations. However, Google does not release the names of individuals whose user information was requested, and government agencies likewise would not be expected to release information about specific individuals. Because agencies would not be required to disclose individual-level details to the public, the requirement would not affect their investigations. This disclosure requirement is aimed at revealing what tools government agencies are using and giving the DOJ the opportunity to investigate whether those tools violate constitutional rights.
Recommendation 2. Attorney General Guidelines should be revised in collaboration with the White House Office of Science and Technology Policy (OSTP) and civil rights organizations that specialize in technology issues.
The FBI has used advanced technology to watch activists and protests with little to no government oversight or input from civil rights organizations. When conducting an investigation or assessment of an individual or organization, FBI agents follow the Attorney General Guidelines, which dictate how investigations should be conducted. Unfortunately, these guidelines do little to protect the public’s civil rights—and in fact contain a few provisions that are quite problematic:
- The FBI is able to conduct assessments, which don’t require a factual basis but instead require only an authorized purpose, such as obtaining information on an organization or person believed to be involved in activities threatening national security or suspected of being the target of an attack.
- Physical surveillance can be used during an assessment for a limited time, but that period has been redacted in the guide, so it’s not clear how long agents can engage in this practice.
- FBI employees can conduct internet searches of “publicly available information” for an authorized purpose without having a lead, tip, referral, or complaint. FBI employees can also use online services to obtain publicly available information before the employee even decides to open an assessment or formal investigation. FBI employees are not required to seek supervisor approval beforehand.
These provisions are problematic for a few reasons. FBI employees should not be able to conduct assessments on individuals without a factual basis. Giving employees the power to pick and choose whom they want to assess creates an opening for inherent bias. Instead, all assessments and investigations should have some factual basis behind them and receive approval from a supervisor. Likewise, FBI agents should not conduct physical surveillance or internet searches without probable cause. Allowing these kinds of practices exposes the entire public to invasions of privacy.
These policies should be reviewed and revised to ensure that activists and organizations won’t be subject to surveillance due to internal bias. President Biden should issue an executive order directing OSTP to collaborate with the Office of the Attorney General on the guidelines. OSTP should have a task force dedicated to researching government surveillance and its impact on marginalized groups to inform this collaboration.
External organizations focused on technology and civil rights should also be brought in to review the final guidelines and voice any concerns. Civil rights organizations are more attuned to the effects that government surveillance has on their communities and to the mechanisms best suited to preserving privacy rights.
Congress also should take steps to protect the public’s civil rights by passing the Fourth Amendment Is Not for Sale Act, revising the Stored Communications Act, and passing legislation revoking the border search exception.
Recommendation 3. Congress should close the loophole that allows government agencies to circumvent the Fourth Amendment and purchase data from private companies by passing the Fourth Amendment Is Not for Sale Act.
In 2008, it was revealed that AT&T had entered into a voluntary partnership with the National Security Agency (NSA) from 2001 to 2008. AT&T built a room in its headquarters that was dedicated to providing the NSA with a massive quantity of internet traffic, including emails and web searches.
Today, AT&T has eight facilities that intercept internet traffic across the world and provide it to the NSA, allowing the agency to view people’s emails, phone calls, and online conversations. And the NSA isn’t the only federal agency partnering with private companies to spy on Americans. It was revealed in 2020 that the FBI has agreements with Dataminr, a company that monitors people’s social media accounts, and Venntel, Inc., a company that purchases bulk location data and maps the movements of millions of people in the United States. These agreements were signed and modified after Black Lives Matter protests were held across the country.
Allowing government agencies to enter into agreements with private companies to surveil people gives them the ability to bypass the Fourth Amendment and spy on individuals with no restriction. Federal agencies no longer need to rely on the courts when seeking people’s private communications and thoughts; they can now purchase sensitive information like a person’s location data and social media activity from a private company. Congress should end this practice and ban federal agencies from purchasing people’s private data from third parties by passing the Fourth Amendment Is Not For Sale Act. If this bill were passed, government agents could no longer purchase location data from a data broker to figure out who was in a certain area during a protest, or partner with a company to obtain people’s social media postings, without going through the legal process.
Recommendation 4. Congress should amend the Stored Communications Act of 1986 (SCA) to compel electronic communication service companies to prove they are in compliance with the act.
The SCA prohibits companies that provide an electronic communication service from “knowingly” sharing their stored user data with the government. While data brokers are more than likely excluded from this provision, companies that provide direct services to the public such as Facebook, Twitter, and Snapchat are not. Because of this law, direct service companies aren’t partnering with government agencies to sell user information, but they are selling user data to third parties like data brokers.
There should be a responsibility placed on electronic communication service companies to ensure that the companies they sell user information to won’t sell data to government entities. Congress should amend the SCA to include a provision requiring companies to annually disclose who they sold user data to and whether they verified with the third party that the data will not be eventually sold to a government entity. Verification should require at minimum a conversation with the third party about the SCA provision and a signed agreement that the third party will not sell any user information to the government. The DOJ will be tasked with reviewing these disclosures for compliance.
Recommendation 5. Congress should pass legislation revoking the border search exception. As stated earlier, this exception allows federal agents to conduct warrantless searches and seizures within 100 miles of the U.S. border. It also allows federal agents to search and seize digital devices at the border without any level of suspicion that the traveler has committed a crime. CBP agents have pressured travelers to unlock their devices so agents can examine their contents, and have downloaded the contents of devices and stored the data in a central database for up to 15 years.
While other law enforcement agencies must abide by the Fourth Amendment, federal agents have been able to bypass it and conduct warrantless searches and seizures without restriction. If federal agents are allowed to continue operating without the restrictions of the Fourth Amendment, it’s possible we will see more instances of local law enforcement agencies calling on CBP to conduct surveillance operations on the general public during protests. This is an unconscionable amount of power to give to agencies, and it can lead, and has led, to serious abuse of the public’s privacy rights. Congress must roll back this authority and require all law enforcement agencies—local, state, and federal—to have probable cause at a minimum before engaging in searches and seizures.
Conclusion
For too long, government agencies have been able to surveil individuals and civil rights organizations with little to no oversight. With the advancement of technology, their surveillance capabilities have grown tremendously, leading to near 24/7 surveillance. Regulations must be put in place to restrict the use of surveillance technologies by federal agencies, and Congress must pass legislation to protect the public’s constitutional rights.
The FBI operates under the jurisdiction of the DOJ and reports to the Attorney General. The Attorney General has been granted the authority under U.S. Codes and Executive Order 12333 to issue guidelines for the FBI to follow when they conduct domestic investigations. These are the Attorney General Guidelines.
This bill was introduced by Senators Ron Wyden, Rand Paul, and 18 others in 2021 to protect the public from having government entities purchase their personal information, such as location data, from private companies rather than going through the court system. Instead, the government would be required to obtain a court order before obtaining an individual’s personal information from a data broker. This is a huge step toward protecting people’s private information and stopping mass government surveillance.
Modernizing Enforcement of the Civil Rights Act to Mitigate Algorithmic Harm in Determining Federal Benefits
The Department of Justice should modernize the enforcement of Title VI of the Civil Rights Act to guide effective corrective action for algorithmic systems that produce discriminatory outcomes with regard to federal benefits. To do so, the Department of Justice should clarify the definition of “algorithmic discrimination” in the context of federal benefits, establish systems to identify which federally funded public benefits offices use machine-learning algorithms, and secure the necessary human resources to properly address algorithmic discrimination. This crucial action would build on a demonstrable, growing interest in regulating algorithms, expressed over the past year through policy actions in both the White House and Congress, that has yet to produce a concrete enforcement mechanism for acting on instances of demonstrated algorithmic harm.
Challenge and Opportunity
Algorithmic systems are inescapable in modern life. They have become core elements of everyday activities, like surfing the web, driving to work, and applying for a job. It is virtually impossible to go through life without encountering an algorithmic system multiple times per day.
As machine-learning technologies have become more pervasive, they have also become gatekeepers for crucial resources, like accessing credit, receiving healthcare, securing housing, and getting a mortgage. Both local and federal governments have embraced algorithmic decision-making to determine which constituents are able to access key services, often with little transparency, if any, for those who are subject to such decision-making.
When it comes to federal benefits, imperfections in these systems scale significantly. For example, the deployment of flawed algorithmic tools led to the wrongful termination of Medicaid for 19% of beneficiaries in Arkansas, the wrongful termination of Social Security income for thousands in New York, wrongful termination of $78 million worth of Medicaid and Supplemental Nutrition Assistance Program benefits in Indiana, and erroneous unemployment fraud charges for 40,000 people in Michigan. These errors are particularly harmful to low-income Americans for whom access to credit, housing, job opportunities, and healthcare are especially important.
Over the past year, momentum for regulating algorithmic systems has grown, resulting in several key policy actions. In February 2022, Senators Ron Wyden and Cory Booker and Representative Yvette Clarke introduced the Algorithmic Accountability Act. Endorsed by AI experts, this bill would have required deployers of algorithmic systems to conduct and publicly share impact assessments of their systems. In October 2022, the White House released its Blueprint for an AI Bill of Rights. Although not legally enforceable, this robust rights-based framework for algorithmic systems was developed with a broad coalition of support through an intensive, yearlong public consultation process with community members, private sector representatives, tech workers, and policymakers. Also in October 2022, the AI Training Act was passed into law. The legislation requires the development of a training curriculum covering core concepts in artificial intelligence for federal employees in a limited range of roles, primarily those involved in procurement. Finally, January 2023 saw the introduction of the NIST AI Risk Management Framework to guide how organizations and individuals design, develop, deploy, or use artificial intelligence to manage risk and promote responsible use.
Collectively, these actions demonstrate clear interest in preventing harm caused by algorithmic systems, but none of them provide clear enforcement mechanisms for federal agencies to pursue corrective action in the wake of demonstrated algorithmic harm.
However, Title VI of the Civil Rights Act offers a viable and legally enforceable mechanism to aid anti-discrimination efforts in the algorithmic age. At its core, Title VI bans the use of federal funding to support programs (including state and local governments, educational institutions, and private companies) that discriminate on the basis of race, color, or national origin. Modernizing the enforcement of Title VI, specifically in the context of federal benefits, offers a clear opportunity for developing and refining a modern enforcement approach to civil rights law that can respond appropriately and effectively to algorithmic discrimination.
Plan of Action
Fundamentally, this plan of action seeks to:
- Clarify how “algorithmic bias” is defined, specifically in the context of federal benefits.
- Identify where and when public benefits systems use machine-learning algorithms.
- Equip federal agencies with authority and skill sets to address algorithmic discrimination.
Clarify the Framework for Algorithmic Bias in Federal Benefits
Recommendation 1. Fund the Department of Justice (DOJ) to develop a new working group focused specifically on civil rights concerns around artificial intelligence.
The DOJ has already requested funding for and justified the existence of this unit in its FY2023 Performance Budget. In that budget, the DOJ requested $4.45 million to support 24 staff.
Clear precedents for this type of cross-sectional working group already exist within the Department of Justice (e.g., the Indian Working Group and LGBTQI+ Working Group). Both of these groups contain members of the 11 sections of the Civil Rights Division to ensure a comprehensive strategy for protecting the civil rights of Indigenous peoples and the LGBTQ+ community, respectively. The pervasiveness of algorithmic systems in modern life suggests a similarly broad scope is appropriate for this issue.
Recommendation 2. Direct the working group to develop a framework that defines algorithmic discrimination and appropriate corrective action specifically in the context of public benefits.
A clear framework or rubric for assessing when algorithmic discrimination has occurred is a prerequisite for appropriate corrective action. Despite having a specific technical definition, the term “algorithmic bias” can vary widely in its interpretation depending on the specific context in which an automated decision is being made. Even if algorithmic bias does exist, researchers and legal scholars have made the case that biased algorithms may be preferable to biased human decision-makers on the basis of consistency and the relative ease of behavior change. Consequently, the DOJ should develop a context-specific framework for determining when algorithmic bias leads to harmful discriminatory outcomes in federal benefits systems, starting with major federal systems like Social Security and Medicare/Medicaid.
As an example, the Brookings Institution has produced a helpful report that illustrates what it means to define algorithmic bias in a specific context. Cross-walking this blueprint with existing Title VI procedures can yield guidelines for how the Department of Justice can notify relevant offices of algorithmic discrimination and steer corrective action.
Identify Federal Benefits Systems that Use Algorithmic Tools
Recommendation 3. Establish a federal register or database for offices that administer federally funded public benefits to document when they use machine-learning algorithms.
This system should specifically detail the developer of the algorithmic system and the office using said system. If possible, descriptions of relevant training data should be included as well, especially if these data are federal property. Consider working with the Office of Federal Contract Compliance Programs to secure this information from current and future government contractors within the federal benefits domain.
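As an illustration of how lightweight such a register could be, the sketch below shows one possible table structure. The column names and fields are hypothetical assumptions; the actual schema would be defined by the administering agency.

```python
# A minimal sketch of a register table for algorithm use in federally funded
# benefits offices; table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect("algorithm_register.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS algorithm_register (
    id INTEGER PRIMARY KEY,
    benefits_office TEXT NOT NULL,           -- office administering the benefit
    program TEXT NOT NULL,                   -- e.g., Medicaid, SNAP, unemployment insurance
    system_name TEXT NOT NULL,               -- name of the algorithmic system
    developer TEXT NOT NULL,                 -- vendor or in-house team that built it
    training_data_description TEXT,          -- provenance of training data, if known
    training_data_federal_property INTEGER,  -- 1 if the training data are federal property
    date_deployed TEXT
)
""")
conn.commit()
conn.close()
```

Even a simple structure like this would record the two essentials named above, the developer and the deploying office, while leaving room to document training data when that information is available.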
In terms of cost, previous budget requests for databases of this type have ranged from $2 million to $5 million.
Recommendation 4. Provide public access to the federal register.
Making the federal register public would provide baseline transparency regarding the federal funding of algorithmic systems. This would facilitate external investigative efforts to identify possible instances of algorithmic discrimination in public benefits, which would complement internal efforts by directing limited federal staff bandwidth toward cases that have already been identified. The public-facing portion of this registry should be structured to respect appropriate privacy and trade secrecy restrictions.
Recommendation 5. Link the public-facing register to a public-facing form for submitting claims of algorithmic discrimination in the context of federal benefits.
This step would help channel public feedback regarding claims of algorithmic discrimination with a sufficiently high threshold to minimize frivolous claims. A well-designed system will ask for evidence and data to justify any claim of algorithmic discrimination, allowing federal employees to prioritize which claims to pursue.
Equip Agencies with Necessary Resources for Addressing Algorithmic Discrimination
Recommendation 6. Authorize funding for technical hires in enforcement arms of federal regulatory agencies, including but not limited to the Department of Justice.
Effective enforcement of anti-discrimination statutes today requires technical fluency in machine-learning techniques. In addition to the DOJ’s Civil Rights Division (see Recommendation 1), consider directing funds to hire or train technical experts within the enforcement arms of other federal agencies with explicit anti-discrimination enforcement authority, including the Federal Trade Commission, Federal Communications Commission, and Department of Education.
Recommendation 7. Pass the Stopping Unlawful Negative Machine Impacts through National Evaluation Act.
This act was introduced with bipartisan support in the Senate at the very end of the 2021–2022 legislative session by Senator Rob Portman. The short bill seeks to clarify that civil rights legislation applies to artificial intelligence systems and that decisions made by these systems are subject to claims of discrimination under that legislation, including the Civil Rights Act, the Americans with Disabilities Act, and the Age Discrimination Act of 1975, among others. Passing the bill is a simple but effective way to signal to federal regulatory agencies (and those they regulate) that artificial intelligence systems must comply with civil rights law and to affirm the federal government’s authority to ensure they do so.
Conclusion
On his first day in office, President Biden signed an executive order to address the entrenched denial of equal opportunities for underserved communities in the United States. Ensuring that federal benefits are not systematically denied via algorithmic discrimination to low-income Americans and Americans of color is crucial to successfully meeting the goals of that order and the rising chorus of voices who want meaningful regulation for algorithmic systems. The authority for such regulation in the context of federal benefits already exists. To ensure that authority can be effectively enforced in the modern age, the federal government needs to clearly define algorithmic discrimination in the context of federal benefits, identify where federal funding is supporting algorithmic determination of federal benefits, and recruit the necessary talent to verify instances of algorithmic discrimination.
An algorithm is a structured set of steps for doing something. In the context of this memo, an algorithm usually means computer code that is written to do something in a structured, repeatable way, such as determining if someone is eligible for Medicare, identifying someone’s face using a facial recognition tool, or matching someone’s demographic profile to a certain kind of advertisement.
Machine-learning techniques are a specific set of algorithms that train a computer to do different tasks by taking in a massive amount of data and looking for patterns. Artificial intelligence generally refers to technical systems that have been trained to perform tasks with minimal human oversight. Machine learning and artificial intelligence are similar and often used as interchangeable terms.
We can identify algorithmic bias by comparing the expected outputs of an algorithm to its actual outputs. For example, if we find that an algorithm uses race as a decisive factor in determining whether someone is eligible for federal benefits that should be race-neutral, that would be an example of algorithmic bias. In practice, these assessments often take the form of statistical tests run over multiple outputs of the same algorithmic system.
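As a concrete illustration, the sketch below shows one such statistical check over a hypothetical set of eligibility decisions, each labeled with the applicant's race and the approval outcome. It uses a chi-square test of independence, one of many tests an auditor might apply; the dataset fields and any decision threshold are assumptions for illustration.

```python
# A minimal sketch of a disparity check over the outputs of an eligibility
# algorithm; input fields and thresholds are hypothetical.
from collections import defaultdict
from scipy.stats import chi2_contingency

def approval_disparity(decisions):
    """decisions: iterable of dicts like {"race": "group_a", "approved": True}."""
    counts = defaultdict(lambda: [0, 0])             # group -> [approved, denied]
    for d in decisions:
        counts[d["race"]][0 if d["approved"] else 1] += 1
    groups = sorted(counts)
    table = [counts[g] for g in groups]              # contingency table: one row per group
    chi2, p_value, dof, _ = chi2_contingency(table)  # tests independence of race and outcome
    rates = {g: counts[g][0] / sum(counts[g]) for g in groups}
    return rates, p_value                            # per-group approval rates and p-value

# Example use: flag the system for human review if approval rates diverge
# across groups and p_value falls below a chosen significance threshold.
```

A check like this only surfaces a statistical disparity; determining whether that disparity constitutes harmful discrimination still requires the contextual judgment described in the next answer.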
Although many algorithms are biased, not all biases are equally harmful. This is due to the highly contextual nature in which an algorithm is used. For example, a false positive in a criminal-sentencing algorithm arguably causes more harm than a false positive in a federal benefits determination. Algorithmic bias is not inherently a bad thing and, in some cases, can actually advance equity and inclusion efforts depending on the specific contexts (consider a hiring algorithm for higher-level management that weights non-male gender or non-white race more heavily for selection).
Using a Digital Justice Framework To Improve Disaster Preparation and Response
Social justice, environmental justice, and climate justice are all digital justice. Digital injustice arises from the fact that 21 million Americans are not connected to the internet, and seven percent of Americans do not use it, even if they have access to it. This lack of connectivity can lead to the loss of life, disrupted communities, and frayed social cohesion during natural disasters, as people are unable to access life-saving information and preventive tools found online.
Digital injustice primarily affects poor rural communities and African American, Indigenous, and other communities of color. These communities are also overexposed to climate risk, economic fragility, and negative public health outcomes. Digital access is a pathway out of this overexposure. It is a crucial aspect of the digital justice conversation, alongside racial equity and climate resilience.
Addressing this issue requires a long-term commitment to reimagining frameworks, but we can start by helping communities and policymakers understand the problem. Congress and the Biden-Harris Administration should embrace and support the creation of a Digital Justice Policy Framework that includes:
- training and access to information for divested communities
- within-government climate and digital literacy efforts
- a public climate and digital literacy campaign
Challenges and Opportunities
The internet has become a crucial tool in preparing for and recovering from ecological emergencies, building wealth, and promoting community connections. However, the digital divide has created barriers to accessing these resources for millions of people, particularly low-income individuals and people of color. The lack of access to the internet and technology during emergencies deepens existing vulnerabilities and creates preventable losses of life, displacement, and disrupted lives.

Digital divestment, disasters, and poverty overlap in dangerous ways that reveal “inequities and deepen existing vulnerability… In the United States, roughly 21% of children live in poverty and without consistent access to food. Cascading onto poverty and vulnerability to large-scale events like pandemics and other disasters is the lack of access to the Internet and the education and opportunity that comes with it.”
A recent report about digital divestment in rural communities shows that access to internet infrastructure, devices, and information is critical to economic development. Yet rural communities are more likely to have no device in the home—26.4% versus 20% of the broader United States. Access to broadband is even lower, as most rural counties have just one or no provider. Geography often challenges access to public services.
To tackle this issue, we must reimagine the use of data to ensure that all communities have access to information that reduces vulnerability and strengthens resilience. One pathway to reimagining data in a meaningful way is laid out in a National Academies of Science consensus study report, “Communities need information that they can effectively use in making decisions and investments that reduce the vulnerability and strengthen the resilience of their residents, economy, and environment. Assembling and using that information requires three things. First, data, while often abundantly available to communities, can be challenging for local communities and users to navigate, access, understand, and evaluate relative to local needs and questions. Second, climate data needs to be vetted and translated into information that is useful at a local level. Finally, information that communities receive from other sources needs to reflect the challenges and opportunities of those communities to not just be useful but also used.” Once communities are effectively connected and skilled up, they can use the information to make effective decisions.
The Government Accountability Office (GAO) looked into the intersection of information and justice, releasing a study on fragmented and overlapping broadband plans and funding. It recommended a national strategy, including recommendations for education, workforce training, and evidence-based policymaking, to help scale these efforts across communities and focus agency efforts on communities in need.
Communities can be empowered to take a data-driven journey from lack of access to resources to using innovative concepts like regenerative finance to build resiliency. With the right help, divested communities can co-create sustainable solutions and work toward digital justice. The federal government should leverage initiatives like the Justice 40 initiative, aimed at undoing past injustices and divestment, to create opportunities for communities to gain access to the tools they need and understand how to use them.
Plan of Action
Executive branch agencies and Congress should initiate a series of actions to establish a digital justice framework. The first step is to provide education and training for divested communities as a pathway to participate in digital and green economies.
- Funding from recent legislation and agency earmarks should be leveraged to initiate education and training targeted at addressing historical inequities in the localization, quality, and information provided by digital infrastructure:
- The Infrastructure Investment and Jobs Act (IIJA) allocates $65 billion to expand the availability of broadband Internet access. The bulk of that funding is dedicated to access and infrastructure. Under the National Telecommunications and Information Administration’s (NTIA) Broadband Equity, Access, and Deployment (BEAD) Program, there is both funding and broad program language that allows for upskilling and training. Community leaders and organizations need support to advocate for funding at the state and local levels.
- The Environmental Protection Agency’s (EPA) environmental education fund, which traditionally has $2 million to $3.5 million in grant support to communities, is being shaped right now. Its offerings and parameters can be leveraged and extended without significant structural change. The fund’s parameters should include elements of the framework, including digital justice concepts like climate, digital, and other kinds of literacy programs in the notices of funding opportunities. This would enable community organizations that are already doing outreach and education to include more offerings in their portfolios.
To further advance a digital justice framework, agencies receiving funding from IIJA and other recent legislative actions should look to embed education initiatives within technical assistance requests for proposals and funding announcements. Communities often lack access to and support in how to identify and use public resources and information related to digital and climate change challenges. One way to overcome this challenge is to include education initiatives as key components of technical assistance programs. In its role of ensuring the execution of budget proposals and legislation, the Office of Management and Budget (OMB) can issue guidance or memoranda directing agencies to include education elements in notices of funding, requests for proposals, and other public resources related to IIJA, IRA, and Justice 40.
One example can be found in the Building Resilient Infrastructure and Communities (BRIC) program. In addition to helping communities navigate the federal funding landscape, OMB could require that new rounds of the program include climate or resilience education and digital literacy. The BRIC program can also increase its technical assistance offerings from 20% of applicants to 40%, for example. This would empower recipients to navigate the fuller landscape of using science to develop solutions and then successfully navigate the funding process.
Another program that is being designed at the time of this writing is the Environmental and Climate Justice Grant Program, which contains $3 billion in funding from the IRA. There is a unique opportunity to draft requests for information, collaboration, or proposals to include ideas for education and access programs to democratize critical information by teaching communities how to access and use it.
An accompanying public education campaign can make these ideas sustainable. Agencies should engage with the Ad Council on a public education campaign about digital justice or digital citizenship, social mobility, and climate resilience. As an example, in 2022 FEMA funded, with the Ad Council, a disaster-preparedness initiative directed at Black Americans that discussed protecting people and property from disasters across multiple topics and media. The campaign was successful because the information was accessible and demonstrated its value.
Climate literacy and digital citizenship training are as necessary for those designing programs as they are for communities. The federal agencies that disburse this funding should be tasked with creating programs to offer climate literacy and digital citizenship training for their workforce. Program leaders and policy staff should also be briefed and trained in understanding and detecting biases in data collection, aggregation, and use. Federal program officers can be stymied by the lack of baseline standards for federal workforce training and curricula development. For example, FEMA has a goal to create a “climate literate” workforce and to “embed equity” into all of its work—yet there is no evidence-based definition or standard upon which to build training that will yield consistent outcomes. Similar challenges surface in discussions about digital literacy and understanding how to leverage data for results. Within the EPA, the challenge is helping the workforce understand how to manage the data it generates, use it to inform programs, and provide it to communities in meaningful ways. Those charged with delivering justice-driven programs must be provided with the necessary education and tools to do so.
FEMA, like the EPA and other agencies, will need help from Congress. Congress should do more to support scientific research and development for the purpose of upskilling the federal workforce. Where necessary, Congress must allocate funding, or adjust current funding mechanisms, to provide necessary resources. There is $369 billion for “Energy Security and Climate Change” in the Inflation Reduction Act of 2022 that broadly covers the aforementioned ideas. Adjusting language to reference programs that address education and access to information would make it clear that agencies can use some of that funding. In the House, this could take the form of a suspension bill or addition as technical correction language in a report. In the Senate, these additions could be added as amendments during “vote-o-rama.”
For legislative changes involving the workforce or communities, it is possible to justify language changes by looking at the legal intent of complementary initiatives in the Biden-Harris Administration. In addition to IIJA provisions, policy writers can use parts of the Inflation Reduction Act and the Justice 40 initiative, as well as the climate change and environmental justice executive orders, to justify changes that will provide agencies with direction and resources. Because this project is at the intersection of climate and digital justice, the jurisdictional alignments would mainly be with the United States Department of Commerce, the National Telecommunications and Information Administration, the United States Department of Agriculture, the EPA, and FEMA.
Recommendations for federal agencies:
- Make public literacy about digital and climate justice a national priority. (This includes government agency personnel as well as residents and citizens.)
- Train agency program officers charged with administering programs on the impacts and solutions for digital justice.
- To empower rural and BIPOC communities to access programs consistently, require plain language drafts or section-by-section explainers for scientific and financial information related to digital justice.
- Create and require a set of “accessible research” guidelines for research institutions that receive federal funding to ensure their work is usable in communities.
Recommendations for Congress:
- Provide research dollars to help agencies develop evidence-based benchmarks for climate, data, and digital literacy programs.
- Set aside federal workforce development funds to build government-wide capacity in these areas.
- Make technical assistance for small municipalities and small community-based organizations a required part of any new digital justice-related statutes and funding mechanisms.
Conclusion
Digital justice is about a deeper understanding of the generational challenges we must confront in the next few years: the digital divide, climate risk, racial injustice, and rural poverty. Each of these connects back to our increasingly digital world and efforts to make sure all communities can access its benefits. A new policy framework for digital justice should be our ultimate goal. However, there are present opportunities to leverage existing programs and policy concepts to create tangible outcomes for communities now. Those include digital and climate literacy training, public education, and better education of government program leaders as well as providing communities and organizations with more transparent access to capital and information.
Digital divestment refers to the intentional exclusion of certain communities and groups from the social, intellectual, and economic benefits of the internet, as well as technologies that leverage the internet.
Climate resilience is about successfully coping with and managing the impacts of climate change while preventing those impacts from growing worse. This does not mean only thinking about severe weather. It also includes the economic shocks and public health emergencies that come with climate change. During the COVID-19 pandemic, women died at disproportionate rates, and in one Maryland city survivors’ social mobility decreased by 1%. However, the introduction of community Wi-Fi began to change these outcomes.
Communities (municipalities, states) that are left out of access to internet infrastructure not only miss out on educational, economic, and social mobility opportunities; they also miss out on critical information about severe weather and climate change. Scientists and researchers depend on an internet connection to conduct research to target solutions. No high-quality internet means no access to information about cascading risk.
While the IIJA broadband infrastructure funding is a once-in-a-generation effort, the reality is that in many rural areas broadband is neither cost-effective nor feasible due to geography or other contexts.
By opening funding to different kinds of internet infrastructures (community Wi-Fi, satellite, fixed access), communities can increase their risk awareness and make their own solutions.
The federal government is already creating executive orders and legislation in this space. What is needed is a more cohesive plan. In some cases that may entail partnering with the private sector or finding creative ways to partner with communities.
The first step is briefing and socializing this policy work, because looking at equity, tech, and climate change from this perspective is still new and unfamiliar to many.
Enabling Faster Funding Timelines in the National Institutes of Health
Summary
The National Institutes of Health (NIH) funds some of the world’s most innovative biomedical research, but rising administrative burden and extended wait times—even in crisis—have shown that its funding system is in desperate need of modernization. Examples of promising alternative models exist: in the last two years, private “fast science funding” initiatives such as Fast Grants and Impetus Grants have delivered breakthroughs in coronavirus pandemic response and aging research on timelines of days to one month, significantly faster than the NIH’s yearly funding cycles. In response to the COVID-19 pandemic, the NIH implemented a temporary fast funding program called RADx, indicating a willingness to adopt such practices during acute crises. Research on other critical health challenges like aging, the opioid epidemic, and pandemic preparedness deserves similar urgency. We therefore believe it is critical that the NIH formalize and expand its institutional capacity for rapid funding of high-potential research.
Using the learnings of these fast funding programs, this memo proposes actions that the NIH could take to accelerate research outcomes and reduce administrative burden. Specifically, the NIH director should consider pursuing one of the following approaches to integrate faster funding mechanisms into its extramural research programs:
- Reform the existing R21 grant mechanism to bring it more in line with its own goals of funding high-reward, rapid-turnaround research; or
- Direct NIH institutes and centers to independently develop and deploy new research programs with faster funding timelines.
Future efforts by the NIH and other federal policymakers to respond to crises like the COVID-19 pandemic would also benefit from a clearer understanding of the impact of the decision-making process and actions taken by the NIH during the earliest weeks of the pandemic. To that end, we also recommend that Congress initiate a report from the Government Accountability Office to illuminate the outcomes and learnings of fast governmental programs during COVID-19, such as RADx.
Challenge and Opportunity
The urgency of the COVID-19 pandemic created adaptations not only in how we structure our daily lives but in how we develop therapeutics and fund science. Starting in 2020, the public saw a rapid emergence of nongovernmental programs like Fast Grants, Impetus Grants, and Reproductive Grants to fund both big clinical trials and proof-of-concept scientific studies within timelines that were previously thought to be impossible. Within the government, the NIH launched RADx, a program for the rapid development of coronavirus diagnostics with significantly accelerated approval timelines. Though the sudden onset of the pandemic was unique, we believe that an array of other biomedical crises deserve the same sense of urgency and innovation. It is therefore vital that the new NIH director permanently integrate fast funding programs like RADx into the NIH in order to better respond to these crises and accelerate research progress for the future.
To demonstrate why, we must remember that the coronavirus is far from an outlier—in the last 20 years, humanity has gone through several major pandemics, notably swine flu, SARS-CoV-1, and Ebola. Based on the long-observed history of infectious diseases, the risk of a pandemic with an impact similar to that of COVID-19 is about two percent in any given year. An extension of naturally occurring pandemics is the ongoing epidemic of opioid use and addiction. The rapidly changing landscape of opioid use—with overdose rates growing rapidly and synthetic opioid formulations becoming more common—makes slow, incremental grantmaking ill-suited for the task. The counterfactual impact of providing some awards via faster funding mechanisms in these cases is self-evident: having tests, trials, and interventions earlier saves lives and money without requiring additional resources.
Beyond acute crises, there are strong longer-term public health motivations for achieving faster funding of science. In about 10 years, the United States will have more seniors (people aged 65+) than children. This will place substantial stress on the U.S. healthcare system, especially given that two-thirds of seniors suffer from more than one chronic disease. New disease treatments may help, but it often takes years to translate the results of basic research into approved drugs. The idiosyncrasies of drug discovery and clinical trials make them difficult to accelerate at scale, but we can reliably accelerate drug timelines on the front end by reducing the time researchers spend in writing and reviewing grants—potentially easing the long-term stress on U.S. healthcare.
The existing science funding system developed over time with the best intentions, but for a variety of reasons—partly because the supply of federal dollars has not kept up with demand—administrative requirements have become a major challenge for many researchers. According to surveys, working scientists now spend 44% of their research time on administrative activities and compliance, with roughly half of that time spent on pre-award activities. Over 60% of scientists say administrative burden compromises research productivity, and many fear it discourages students from pursuing science careers. In addition, the wait for funding can be extensive: one of the major NIH grants, the R01, takes more than three months to write and around 8–20 months to receive (see FAQ). Even proof-of-concept ideas face onerous review processes and take at least a year to fund. This can bottleneck potentially transformative ideas, as when Katalin Karikó famously struggled to get funding for her breakthrough mRNA vaccine work in its early stages. These issues have been of interest to science policymakers for more than two decades, but with little to show for it.
Though several nongovernmental organizations have attempted to address this need, the model of private citizens continuously fundraising to enable fast science is neither sustainable nor substantial enough compared to the impact of the NIH. We believe that a coordinated governmental effort is needed to revitalize American research productivity and ensure a prompt response to national—and international—health challenges like naturally occurring pandemics and imminent demographic pressure from age-related diseases. The new NIH director has an opportunity to take bold action by making faster funding programs a priority under their leadership and a keystone of their legacy.
The government’s own track record with such programs gives grounds for optimism. In addition to the aforementioned RADx program at NIH, the National Science Foundation (NSF) runs the Early-Concept Grants for Exploratory Research (EAGER) and Rapid Response Research (RAPID) programs, which can have response times in a matter of weeks. Going back further in history, during World War II, the National Defense Research Committee maintained a one-week review process.
Faster grant review processes can be either integrated into existing grant programs or rolled out by institutes in temporary grant initiatives responding to pressing needs, as the RADx program was. For example, when faced with data falsification around the beta amyloid hypothesis, the National Institute on Aging (NIA) could leverage fast grant review infrastructure to quickly fund replication studies of key papers without waiting for the next funding cycle. In the case of threats to human health from toxins, the National Institute of Environmental Health Sciences (NIEHS) could rapidly fund studies on risk assessment and prevention, issuing evidence-based public recommendations without delay. Finally, empowering the National Institute of Allergy and Infectious Diseases (NIAID) to quickly fund science would prepare us for the many pandemics yet to come.
Plan of Action
The NIH is a decentralized organization, with institutes and centers (ICs) that each have their own mission and focus areas. While the NIH Office of the Director sets general policies and guidelines for research grants, individual ICs have the authority to create their own grant programs and define their goals and scope. The Center for Scientific Review (CSR) is responsible for the peer review process used to review grants across the NIH and recently published new guidelines to simplify the review criteria. Given this organizational structure, we propose that the NIH Office of the Director, particularly the Office of Extramural Research, assess opportunities for both NIH-wide and institute-specific fast funding mechanisms and direct the CSR, institutes, and centers to produce proposed plans for fast funding mechanisms within one year. The Director’s Office should consider the following approaches.
Approach 1. Develop an expedited peer review process for the existing R21 grant mechanism to bring it more in line with the NIH’s own goals of funding high-reward, rapid-turnaround research.
The R21 program is designed to support high-risk, high-reward, rapid-turnaround, proof-of-concept research. However, it has been historically less popular among applicants compared to the NIH’s traditional research mechanism, the R01. This is in part due to the fact that its application and review process is known to be only slightly less burdensome than the R01, despite providing less than half of the financial and temporal support. Therefore, reforming the application and peer review process for the R21 program to make it a fast grant–style award would both bring it more in line with its own goals and potentially make it more attractive to applicants.
All ICs follow identical yearly cycles for major grant programs like the R21, and the CSR centrally manages the peer review process for these grant applications. Thus, changes to the R21 grant review process must be spearheaded by the NIH director and coordinated in a centralized manner with all parties involved in the review process: the CSR, program directors and managers at the ICs, and the advisory councils at the ICs.
The track record of federal and private fast funding initiatives demonstrates that faster funding timelines can be feasible and successful (see FAQ). Among the key learnings and observations of public efforts that the NIH could implement are:
- Pilot monthly or bimonthly study section and advisory council meetings for R21 grant review. CSR has switched to conducting the majority of its meetings virtually since the COVID-19 pandemic and has found that in-person and virtual meetings are of equal quality. CSR should take advantage of the convenience of virtual meetings by piloting shorter, virtual monthly or bimonthly study section meetings to review R21 grants outside of the three regular meetings held each year. By meeting more frequently but for shorter amounts of time, the individual time commitment for each meeting is reduced, which may incentivize more researchers to participate in study sections and prevent reviewer fatigue from the traditional one- to two-day meetings. To match this change, the advisory councils of ICs that review R21 grant applications should also pilot monthly virtual meetings, timed to occur immediately after the corresponding peer review meetings. Together, these changes could reduce the R21 grant review timeline from a minimum of nine months down to just two or three months.
- Explore new approaches for reviewer participation. One obstacle to faster funding timelines is the recruitment of reviewers without a conflict of interest. Previously, the travel and financial burden of in-person study sections kept the standing body of reviewers small; this makes it difficult to find and gather a quorum of knowledgeable and unconflicted experts. With online study sections, the CSR could engage a larger committee of reviewers at lower cost. This would allow them to identify and address conflicts of interest dynamically and to select a small and varying subset of reviewers to meet each month. Scientists may also be more inclined to participate as potential reviewers, knowing that they may not be called upon for every round of reviews.
- Emphasize the potential value of success over risk. Reviewers should be explicitly instructed not to lower their scores for the Approach criterion (or the new Rigor and Feasibility criterion proposed by CSR) solely due to a lack of extensive prior literature or over differences in the applicant’s past area of expertise. (Reviewer suggestions could still be used to help inform the direction of the proposed work in these cases.) Instead, the Significance and Innovation criteria (or the new Importance of Research criterion) should be weighed much more heavily than other criteria in the overall score. The rationale for these changes is evident: novel areas will naturally have less extensive prior literature, while learnings from one area of research can cross-pollinate innovation in an entirely different area of research. Acceptance of high-risk, high-reward proposals could be further facilitated by piloting the “golden ticket” model, in which reviewers are provided the right to unilaterally fund one application per year that they believe holds the most innovation potential.
- Reduce the length of applications. The length of proposals for both Fast Grants and Impetus Grants did not exceed two pages, which, according to reviewers, was more than enough to make well-reasoned judgment calls. The NIH should reduce the page limit from six to three pages for the R21 grant program. This will reduce the administrative burden and save time for both applicants and peer reviewers.
Pending the success of these changes, the NIH should consider applying similar changes to other major research grant programs.
Approach 2. Direct NIH institutes and centers to independently develop and deploy programs with faster funding timelines using Other Transaction Authority (OTA).
Compared to reforming an existing mechanism, the creation of institute-specific fast funding programs would allow for context-specific implementation and cross-institute comparison. This could be accomplished using OTA—the same authority used by the NIH to implement COVID-19 response programs. Since 2020, all ICs at the NIH have had this authority and may implement programs using OTA with approval from the director of NIH, though many have yet to make use of it.
As discussed previously, the NIA, NIDA, and NIAID would be prime candidates for the roll-out of faster funding. In particular, these new programs could focus on responding to time-sensitive research needs within each institute or center’s area of focus—such as health crises or replication of linchpin findings—that would provide large public benefits. To maintain this focus, these programs could restrict investigator-initiated applications and only issue funding opportunity announcements for areas of pressing need.
To enable faster peer review of applications, ICs should establish one or more new study sections within their Scientific Review Branch dedicated to rapid review, similar to how the RADx program had its own dedicated review committees. Reviewers who join these study sections would commit to short meetings on a monthly or bimonthly basis rather than meeting three times a year for one to two days as traditional study sections do. Additionally, as recommended above, these new programs should have a three-page limit on applications to reduce the administrative burden on both applicants and reviewers.
In this framework, we propose that the ICs be encouraged to direct at least one percent of their budgets to establishing new research programs with faster funding processes. We believe that even one percent of an annual budget is sufficient to launch an initial fast grants program within an institute. For example, the National Institute on Aging had an operating budget of $4 billion in the 2022 fiscal year; one percent of that budget would constitute $40 million for faster funding initiatives, which is on the order of the initial budgets of Impetus Grants and Fast Grants ($25 million and $50 million, respectively).
NIH ICs should develop success criteria in advance of launching new fast funding programs. If the success criteria are met, they should gradually increase the budget and expand the scope of the program by allowing for investigator-initiated applications, making it a real alternative to R01 grants. A precedent for this type of grant program growth is the Maximizing Investigators’ Research Award (MIRA) (R35) grant program within the National Institute of General Medical Sciences (NIGMS), which set the goal of funding 60% of all R01 equivalent grants through MIRA by 2025. In the spirit of fast grants, we recommend setting a deadline on how long each institute can take to establish a fast grants program to ensure that the process does not extend for too many years.
Additional recommendation. Congress should initiate a Government Accountability Office report to illuminate the outcomes and learnings of governmental fast funding programs during COVID-19, such as RADx.
While a number of published papers cite RADx funding, the program’s overall impact and efficiency have not yet been assessed. The NIH’s rapid funding response during the pandemic is not yet well understood, but it likely played an important role in accelerating diagnostics such as those funded through RADx. Documenting the outcomes and learnings of these interventions would greatly benefit future emergency fast funding programs.
Conclusion
The NIH should become a reliable agent for quickly mobilizing funding to address emergencies and for accelerating solutions to longer-term pressing issues. At present, no funding mechanism within the NIH or its institutes and centers enables such a rapid response. However, both private and governmental initiatives show that fast funding programs are not only possible but can be extremely successful. Given this, we propose the creation of permanent fast grants programs within the NIH and its institutes, based on learnings from past initiatives.
The changes proposed here are part of a larger effort from the scientific community to modernize and accelerate research funding across the U.S. government. In the current climate of rapidly advancing technology and increasing global challenges, it is more important than ever for U.S. agencies to stay at the forefront of science and innovation. A fast funding mechanism would enable the NIH to be more agile and responsive to the needs of the scientific community and would greatly benefit the public through the advancement of human health and safety.
The NIH released a number of Notices of Special Interest to allow emergency revisions to existing grants (e.g., PA-20-135 and PA-18-591) and to provide a quicker path for commercialization of life-saving COVID-19 technologies (NOT-EB-20-008). Unfortunately, repurposing existing grants reportedly took several months, significantly delaying impactful research.
The current scientific review process at the NIH involves multiple stakeholders. There are two stages of review, the first conducted by a Scientific Review Group consisting primarily of nonfederal scientists. Center for Scientific Review committees typically meet three times a year for one or two days, so initial review begins only about four months after proposal submission. Special Emphasis Panel meetings, which are not recurring, take even longer due to panel recruitment and scheduling. The Institute and Center National Advisory Councils or Boards are responsible for the second stage of review, which usually happens after revisions and appeals, bringing the total timeline to approximately a year.
Because of the difficulty of empirically studying the drivers of scientific impact, there has been little research evaluating peer review’s effects on scientific quality. A 2007 Cochrane systematic review found no studies directly assessing the effect of review on scientific quality, and a 2018 RAND review of the literature found a similar lack of empirical evidence. A few more recent studies have found modest associations between NIH peer review scores and research impact, suggesting that peer review may indeed identify innovative projects. However, such a relationship still falls short of demonstrating that the current model of grant review reliably leads to better funding outcomes than alternative models. Additionally, some studies have shown that the current model produces variable and conservative assessments. Taken together, we think that experimentation with models of peer review that are less burdensome for applicants and reviewers is warranted.
Intuitively, it seems that having longer grant applications and longer review processes ensures that both researchers and reviewers expend great effort to address pitfalls and failure modes before research starts. However, systematic reviews of the literature have found that reducing the length and complexity of applications has minimal effects on funding decisions, suggesting that the quality of resulting science is unlikely to be affected.
Historical examples also suggest that the quality of an endeavor is largely uncorrelated with its planning time. It took Moderna 45 days from the publication of the COVID-19 genome to submit the mRNA-1273 vaccine to the NIH for use in its Phase 1 clinical study. Such examples exist within government too: during World War II, the National Defense Research Committee set a record by reviewing and authorizing grants within one week, work that contributed to the DUKW, Project Pigeon, the proximity fuze, and radar.
Recent fast grant initiatives have produced high-quality outcomes. With its short applications and next-day response times, Fast Grants enabled:
- detection of new concerning COVID-19 variants before other sources of funding became available.
- work that showed saliva-based COVID-19 tests can work just as well as those using nasopharyngeal swabs.
- drug-repurposing clinical trials, one of which identified a generic drug reducing hospitalization from COVID-19 by ~40%.
- research into “Long COVID”, which is now being followed up with a clinical trial on the ability of COVID-19 vaccines to improve symptoms.
Impetus Grants focused on projects with longer timelines but still led to a number of important preprints less than a year after application:
- Aging Fly Cell Atlas
- Modular, programmable RNA sensing using ADAR editing in living cells
- Mechanisms of natural rejuvenation in a test tube
- Optogenetic rejuvenation of mitochondrial membrane potential to extend C. elegans lifespan
- Evidence that conserved essential genes are enriched for pro-longevity factors
- Trials on neuroprotective effects of Canagliflozin
With the heavy toll that resource-intensive approaches to peer review take on the speed and innovative potential of science—and the early signs that fast grants lead to important and high-quality work—we feel that the evidentiary burden should be placed on current onerous methods rather than the proposed streamlined approaches. Without strong reason to believe that the status quo produces vastly improved science, we feel there is no reason to add years of grant writing and wait times to the process.
The adoption of faster funding mechanisms would indeed be valuable across a range of federal funding agencies. Here, we focus on the NIH because its budget for extramural research (over $30 billion per year) represents the single largest source of science funding in the United States. Additionally, the NIH’s umbrella of health and medical science includes many domains that would be well served by faster research timelines for proof-of-concept studies, including pandemics, aging, opioid addiction, mental health, and cancer.
Project BOoST: A Biomanufacturing Test Facility Network for Bioprocess Optimization, Scaling, and Training
Summary
The U.S. bioeconomy commands millions of liters of bioproduction capacity, but only a tiny fraction of this capacity supports process optimization. Companies of all sizes face intense time and cost pressures that limit their ability to commit resources to these important efforts. Consequently, the biomanufacturing industry is often forced to juggle sensitive, brittle production processes that don’t scale easily and are prone to disruption. As recent failures at prominent companies demonstrate, this increases risk for the entire bioeconomy, and especially for the development of new companies and products.
To remedy this, the Department of Commerce should first allocate $80 million to seed a bioproduction R&D facility network that provides process optimization capability to the greater bioeconomy, followed by a $30 million process optimization challenge wherein participating facilities compete at workflow optimization, scaling, and transfer. Part one of the proposal requires rapid development, with the initial R&D facility network of four sites starting bioprocessing operations within 12 months of award. In addition to training workers for the greater bioeconomy, the facility network’s services would be available on a contract basis to any company at any stage of maturity. This work could include optimization for yield, scaling, process resilience, and/or technology transfer—all critical needs across the sector. After federal government startup funding, the network would transition toward financial independence, increasingly running on revenue from process optimization work, workforce training, and state/local support.
Part two of the plan lays out a biomanufacturing “Grand Challenge” in which participating network facilities compete to optimize a standardized biomanufacturing process. Prioritizing process resilience, security, and transferability in addition to yield, this effort would help set a new industry standard for what process optimization really means in addition to demonstrating what can be accomplished by the network facilities. With this demonstration of value, demand for facility services in other geographic locations would increase, spurring the growth of the facility network across the country.
By the end of the program, the U.S. biomanufacturing sector would see a number of benefits, including easier process innovation, a larger and better trained workforce, shortened product time to market, and reduced production risks.
Challenge & Opportunity
Biological products are, by definition, made by means of complex biological processes carried out by sensitive—some might even say fickle—microorganisms and cell lines. Determining the right steps and conditions to induce a microbe into producing a given product at a worthwhile yield is an arduous task. And once produced, the product needs to be extensively processed to make it pure, stable, and safe enough for shipping and use. Working out this entire production workflow takes a great deal of time, energy, and expertise, and the complexity of production workflows increases alongside the complexity of biological products. Many products fail at this point in development, keeping beneficial products out of the hands of end users and cutting off constructive contributions—revenue, jobs—to the larger bioeconomy.
Once a bioproduction process is worked out at an R&D level, it must be scaled up to a much larger commercial level—another major point of failure for academic and commercial programs. Scaling up requires different equipment with its own controls and idiosyncrasies, each generating additional, sometimes unpredictable, complexities that must be corrected for or managed. The biomanufacturing industry has been asking for help with process scaling for years, and recent national initiatives, such as the National Institute for Innovation in Manufacturing Biopharmaceuticals (NIIMBL) and the BioIndustrial Manufacturing and Design Ecosystem (BioMADE), have sought to address this strategic need.
Each step on this road to the end market represents a chance for failure, and the risks are so high that the road is littered with failed companies that had a promising product that just couldn’t be made reliably or a brittle production process that blew up when performed at scale. The overarching competitive commercial environment doesn’t help, as new companies must rush from concept to production, often cutting corners along the way. Meanwhile, mature biomanufacturing companies often nurse small profit margins and must aggressively guard existing revenue streams, leaving little or no spare capacity to innovate and improve processes. All of these factors result in production workflows that are hastily constructed, poorly optimized, prone to scaling difficulties, and vulnerable to failure at multiple points. When—not if—process failures occur, the entire bioeconomy suffers, often in catastrophic ways. In the last several years alone, such failures have been witnessed at Emergent BioSolutions, Dr. Reddy’s, and Abbott, with any number of downstream effects. Society, too, misses out when more sustainable, environmentally friendly production methods are overlooked in favor of older, less efficient but more familiar ones.
There is an urgent need for a national network of biomanufacturing facilities dedicated to process optimization and scaling—critical efforts that are too often overlooked or hastily glossed over, to the subsequent detriment of the entire bioeconomy. Available to any company at any stage of maturity, this facility network would operate on a contract basis to optimize biological production processes for stability, resilience, and technology transfer. The facilities would also assist with yield optimization and would incorporate specialized equipment designed to aid in scale-up risk analysis. Once established with government funding, the facility network would stand on its own, running on contract fees for process optimization and scale-up efforts. As demand for services grows, the facility network model could spread geographically to support other markets.
This is a highly opportune time for such a program. The COVID-19 pandemic has highlighted the essential importance of biomanufacturing capabilities—extending to the geopolitical level—as well as the fragility of many supply chains and processes. In response, the CHIPS and Science Act and the Executive Order on Advancing Biotechnology and Biomanufacturing, among others, have provided directives to shore up U.S. biomanufacturing capacity and resilience. Project BOoST seeks to meet those directives while also building a workforce to support broader participation in a strong national bioeconomy.
Plan of Action
Project BOoST encompasses a $110 million ask spread out over four years and two overlapping phases: a first phase that quickly stands up a four-facility network to perform biomanufacturing process optimization, and a second phase that establishes a biomanufacturing “Grand Challenge” wherein facilities compete in the optimization of a standardized bioproduction process.
Phase I: Establishing the facility network
The Department of Commerce should allocate $80 million over three years to establish the initial facility network at four sites in different regions of the country. The program would be structured as a competitive contract, with a preference for contract bidders who:
- Bring along industry and academic partners, as evidenced by MOUs or letters of support
- Have industry process optimization projects queued and ready to commence
- Integrate strong workforce development initiatives into their proposals
- Include an entrepreneurship support plan to help startups advance through manufacturing readiness levels
- Leverage state and local matching funds and have a plan to grow state and local cost-share over time
- Prioritize process resilience (including supply chain) as much as yield optimization
- Prioritize data and process security, both in terms of intellectual property protection and cyber protection
- Propose robust means to facilitate process transfer, including developing data standards around process monitoring/measurement and the transfer process itself
- Have active means of promoting economic, environmental, social, and other forms of equity
Possible funding pathways include one of the bio-related Manufacturing Innovation Institutes (MIIs), such as NIIMBL, BioMADE, or BioFabUSA. At a minimum, partnerships would be established with these MIIs to disseminate helpful information gained from the facility network. The National Institute of Standards and Technology (NIST) could also be helpful in establishing data standards for technology transfer. The Bioeconomy Information Sharing and Analysis Center (BIO-ISAC) would be another important partner organization, helping to inform the facilities’ efforts to increase both cyber resilience of workflows and industry information sharing.
Funds would be earmarked for initial startup expenditures, including lease/purchase of appropriate buildings, equipment purchases, and initial salaries/benefits of operating personnel, trainers, and program support. Funding milestones would be configured to encourage rapid movement, including:
- 6-month milestone for start of workforce training programs
- 12-month milestone for start of bioprocess operations
- 12-month milestone for training program graduates obtaining industry jobs
Since no actual product made in these facilities would be directed toward regulated use (e.g., food, medical), there would likely be reduced need to build and operate the facilities at full Current Good Manufacturing Practice (CGMP) specification, allowing for significant time and cost savings. Of course, the ultimate intent is for optimized and scaled production processes to migrate back to regulated use where relevant, but process optimization need not be done in the same environment. Regardless, the facilities would be instrumented so as to facilitate bidirectional technology transfer. With detailed telemetry of processes and data traffic collected in a standardized manner from the network’s sites, organizations would have a much easier time bringing optimized, scaled processes from these facilities out to commercial production. This would result in faster parameter optimization, improved scale-up, increased workflow resilience, better product assurance, and streamlined tech transfer—all of which are major impediments and risks to growth of the U.S. bioeconomy.
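To make the idea of standardized, transfer-ready process telemetry more concrete, the sketch below shows one hypothetical shape such a record could take. The schema, field names, and units are illustrative assumptions rather than an existing standard; any actual format would be developed with NIST and industry partners as described above.

```python
# Hypothetical sketch of a standardized bioprocess telemetry record.
# All field names and units are illustrative assumptions, not an existing standard.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class BioprocessTelemetry:
    site_id: str              # which network facility produced the record
    run_id: str               # unique identifier for the production run
    timestamp: str            # ISO 8601, UTC
    scale_liters: float       # working volume, to support scale-up comparisons
    temperature_c: float
    ph: float
    dissolved_oxygen_pct: float
    feed_rate_ml_per_hr: float
    notes: str = ""

# A record serialized this way could be shared across network sites (and eventually
# with a receiving commercial facility) without bespoke reformatting at each step.
record = BioprocessTelemetry(
    site_id="boost-site-2",
    run_id="run-0041",
    timestamp=datetime.now(timezone.utc).isoformat(),
    scale_liters=50.0,
    temperature_c=37.0,
    ph=7.1,
    dissolved_oxygen_pct=40.0,
    feed_rate_ml_per_hr=120.0,
)
print(json.dumps(asdict(record), indent=2))
```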
Process optimization and scaling work would be accomplished on a contract basis with industry clients, with robust intellectual property protections in place to guard trade secrets. At the same time, anonymized information and techniques gathered from optimization and scaling efforts would be automatically shared with other sites in the network, enabling network-wide learning and more rapid method development. These lessons learned would also be organized and published to the relevant industry organizations, allowing these efforts to lift all boats across the bioeconomy. In this way, even a facility that failed to achieve sufficient economic self-sustainability would still make significant contributions to the industry knowledge base.
Focused on execution speed, each facility would be a public-private consortium, bringing together regional companies, universities, state and local governments, and others to create a locus of education, technology development, and job creation. In addition to hewing to provisions within the CHIPS and Science Act, this facility network would also match the “biomanufacturing infrastructure hubs” recommendation from the President’s Council of Advisors on Science and Technology.
Using the Regional Technology and Innovation Hubs model laid out in the CHIPS and Science Act, the facilities would be located near to, but not within, leading biotechnology centers, with an eye to benefiting small and rural communities where possible. All the aforementioned stakeholders would have a say in site location, with location criteria including:
- Level of partnership with state and local governments
- Degree of involvement of local educational institutions
- Proximity to biomanufacturing industry
- Positive impact on economic/environmental/social equity of small and/or rural communities
- Availability of trainable workforce
Although some MIIs include innovation acceleration and/or improved production availability within their charters, to date no production capacity has been built specifically to address the critical issues of process optimization and scaling. Project BOoST would complement the ongoing work of the bio-focused MIIs. And since the aforementioned risks to the bioeconomy represent a strategic threat today, this execution plan is intentionally designed to move rapidly. Locating network facilities outside of costly metropolitan areas and not needing full CGMP certification means that an individual facility could be spun up in months as opposed to years and at much lower cost. These facilities would quickly be able to offer their benefits to industry, local economies, and workers looking to train into a growing job sector.
Phase II: Scale-up challenge
Approximately 30 months from program start, facilities that meet the aforementioned funding milestones and demonstrate continuous movement toward financial self-sustainability (as measured by a shift from federal to state, local, and industry support) would be eligible to participate in an additional $30 million, 18-month scale-up challenge. Each participating facility would receive the same reference production workflow and compete at workflow optimization, scaling, and transfer.
In contrast to previous Grand Challenges, which typically have a unifying theme (e.g., cancer, clean energy) but relatively open goals and means, Project BOoST would be hyperfocused to ensure a high degree of applicability and benefit to the biomanufacturing industry. The starting reference production workflow would be provided at lab scale, with specifications of materials, processing steps, and instrument settings. From this starting point, participating facilities would be asked to characterize and optimize the starting workflow to produce maximal yield across a broad range of conditions; scale the workflow to a 1,000L batch level, again maximizing yield; and transfer the workflows at both scales to a competing facility both for verification purposes and for proof of transferability.
In addition, all competing workflows would be subject to red-teaming by an independent group of biomanufacturing and cybersecurity experts. This examination would serve as an important assessment of workflow resilience in the setting of equipment failure, supply chain issues, cyberattack, and other scenarios.
The winning facilities—represented by their workflows—would be determined by a combination of factors (an illustrative scoring sketch follows the list below):
- Maximum yield
- Process transferability
- Resistance to external tampering
- Resilience in the setting of equipment/process/supply chain failure
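As a purely illustrative sketch of how these four factors might be combined into a single challenge score, the snippet below assigns hypothetical weights and assumes each criterion has been normalized to a 0-to-1 scale by the judging panel. The weights and scale are assumptions for illustration, not part of the proposal.

```python
# Illustrative-only scoring sketch for the Phase II challenge.
# Weights and the 0-1 normalization are assumptions, not proposed values.

CRITERIA_WEIGHTS = {
    "yield": 0.4,               # normalized yield achieved at lab and 1,000L scale
    "transferability": 0.2,     # verified performance after transfer to a competing site
    "tamper_resistance": 0.2,   # red-team assessment of resistance to external tampering
    "failure_resilience": 0.2,  # behavior under equipment/process/supply chain failure
}

def challenge_score(scores: dict) -> float:
    """Combine per-criterion scores (each in [0, 1]) into a weighted total."""
    return sum(CRITERIA_WEIGHTS[criterion] * scores[criterion] for criterion in CRITERIA_WEIGHTS)

# Example: a facility with top yield but middling resilience scores 0.80 overall.
print(challenge_score({
    "yield": 1.0,
    "transferability": 0.8,
    "tamper_resistance": 0.7,
    "failure_resilience": 0.5,
}))
```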
The end result would be the practical demonstration and independent verification of the successful optimization, scale-up, and transfer of a production process—a major opportunity for learning and knowledge sharing across the entire industry.
Conclusion
Scientific innovation and advanced automation in biomanufacturing represent a potent double-edged sword. While they have allowed for incredible advances in biomanufacturing capability and capacity—to the benefit of all—they have also created complexities and dependencies that together constitute a strategic risk to the bioeconomy. This risk is not hypothetical: process failures have already produced national headlines, company collapses, and congressional investigations. We propose to act now to create a biomanufacturing facility network dedicated to making production workflows more robust, resilient, and scalable, with a plan strongly biased toward rapid execution. Bringing together commercial entities, educational institutions, and multiple levels of government, Project BOoST will quickly create jobs, provide workforce development opportunities, and strengthen the bioeconomy as a whole.
The table below compares Project BOoST with related federal biomanufacturing efforts: the bio-focused MIIs, the Centers for Innovation in Advanced Development and Manufacturing (CIADMs), and biomanufacturing efforts under the DoD/NDAA.

| | Project BOoST | MIIs | CIADMs | DoD/NDAA |
| --- | --- | --- | --- | --- |
| Time frame to start of facility operations | Estimated 12 months from funding | Unknown; no new ground broken as of yet | Already operational, although only one surviving | Unknown; plan to meet goals of act due 6/2023 |
| Geographic location | Targeting small and rural communities | Unknown | Mix: urban and less urban | Unknown |
| Scope | Process optimization, resilience, and scaling, including scale-up risk assessment | DOD MII: TRL acceleration in nonmedical products; DOC MII: accelerating biopharmaceutical innovation | Maintenance of critical product stockpiles; reserve production capacity | Research into new methods, capacity building, scaling |
| Financial model | Initial government funding with transition to self-sufficiency | Government funding plus partner contributions | Persistent government funding | Unknown |
Supply chain resilience would be a constant evaluation criterion throughout the program. A more resilient workflow in this context might include onshoring and/or dual sourcing of critical reagent supplies, establishing on-site reserves of single-point-of-failure equipment, maintaining backups of important digital resources (e.g., software, firmware, ladder logic), and explicitly rehearsing failure recovery procedures.
While the specifics would be left up to the contract bidders, we recommend workforce training programs ranging from short, focused trainings for people already in the biomanufacturing industry to longer certificate programs that give trainees the basic suite of skills needed to work in a skilled biomanufacturing role.
Industry would address these challenges on its own if it could. On a fundamental level, the nature of the U.S. economic system makes biomanufacturing an intensely competitive industry. Industry organizations, whether large or small, must be primarily concerned with some combination of developing new products and producing existing ones. They are unable to devote resources to more strategic efforts like resilience, data standards, and process assurance, because energy and dollars spent there mean less to put toward new product development or increased production capacity. (Contrast this with a country like China, where the government can more easily direct industry efforts.) Revolutionary change and progress in U.S. biomanufacturing require the public sector to step up and solve some of these holistic, longer-term challenges.