Increasing Access to Capital by Expanding SBA’s Secondary Market Capacity

Summary

Entrepreneurship and innovation are crucial for sustained community development, as new ventures create new jobs and wealth. For entrepreneurs starting and growing their companies, access to capital is a significant barrier. Communities nationwide have responded by initiating programs, policies, and practices to help entrepreneurs creatively leverage philanthropic dollars, government grants and loans, and private capital. But these individually promising solutions collectively amount to a national patchwork of support. Those who seek to scale promising ideas face a funding continuum that is filled with gaps, replete with high transaction costs, and highly variable depending on each entrepreneur’s circumstances.

To help entrepreneurs better and more reliably access capital no matter where in the country they are, the Small Business Administration (SBA) should work with the other Interagency Community Investment Committee (ICIC) agencies to expand the SBA’s secondary market capacity. The SBA’s secondary market allows lenders to sell the guaranteed portion of a loan backed by the SBA. This provides additional liquidity to lenders, which in turn expands the availability of commercial credit for small businesses. However, there is no large standardized secondary market for debt serviced by other federal agencies, so the benefits of a secondary market are limited to only a portion of federal lending programs that support entrepreneurship. Expanding SBA’s secondary market authority would increase access to large pools of private capital for a larger proportion of entrepreneurs and innovative small businesses. 

As a first step towards this goal, one or several agencies should enter into a pilot partnership with SBA to use SBA’s existing administrative authority and infrastructure to enable private lenders to sell other forms of federally securitized loans. Once proven, the secondary market could be expanded further and permanently established as a government-sponsored enterprise (GSE). This GSE would provide accessible capital for entrepreneurs and small businesses in much the same way that the GSEs Fannie Mae and Freddie Mac provide accessible mortgage capital for prospective homeowners.

With the 118th Congress considering the reauthorization of the SBA for the first time in 22 years, there is an opportunity to modernize the agency. Piloting an expansion of the SBA’s secondary market capacity is a crucial piece of that modernization and would increase access to capital for entrepreneurs.

Challenge and Opportunity

Access to capital changes the economic trajectory of individuals and communities. Approved small business loan applicants, for instance, report average income increases of more than 10% five years after loan approval. Unfortunately, capital for budding entrepreneurs is scarce and inequitably allocated. Some 83% of budding entrepreneurs never access adequate capital to start or grow their business. Success rates are even lower for demographic minorities. And when entrepreneurs can’t access capital to start their business, the communities around them suffer, as evidenced by the fact that two out of every three new jobs over the past 25 years have been generated by small businesses.

The vast majority of new businesses in the United States are funded by personal or family savings, business loans from banks or financial institutions, or personal credit cards. Venture capital is used by only 0.5% of entrepreneurs because most entrepreneurs’ businesses are not candidates for it. Public and mission-driven lending efforts are valiant but can’t come close to matching the scale of this untapped potential. Outside of the COVID-19 emergency response, SBA lending programs receive $1–2 billion in annual appropriations. The Urban Institute found that between 2011 and 2017, Chicago alone received $4 billion of mission-driven lending that predominantly went toward communities of color and high-poverty communities. But during the same time period, Chicago also received over $67 billion of market investment—most of which flowed to white and affluent neighborhoods.

Communities across the country have sought to bridge this gap with innovative ideas to increase access to private capital, often by leveraging federal funding or federal programmatic infrastructure. For example: 

These example programs are successful, replicable, and already supported by some of the agencies in the ICIC. These programs use traditional, well-understood financial mechanisms to provide capital to entrepreneurs: credit lines, insurance, shared-equity agreements, tax credits, and low-interest debt. The biggest obstacle to scaling these types of programs is financial: they must first raise money to support their core financial mechanism(s), and their dependence on ad hoc fundraising almost inevitably yields uneven results.

There is a clear rationale for federal intervention to improve capital access for entrepreneurship-support programs. Successful investment in marginalized communities serves the public interest by generating positive externalities such as increases in jobs, wealth, and ownership. Government can multiply these externalities by reducing risk for investors and lowering the cost of capital for entrepreneurs, first by expanding the SBA’s secondary market authority and ultimately by creating a GSE that provides permanence, increased accountability, and greater flexibility in capital access. With SBA reauthorization on the legislative docket, this is a prime opportunity to address the core challenge of capital access for entrepreneurs.

Plan of Action 

The federal government should create standardized, straightforward mechanisms for entrepreneurs and small businesses across the country to tap into vast pools of private capital at scale. A first step is launching an administrative pilot that extends the SBA’s current secondary market capacity to interested agencies in the ICIC. An initial pilot partner could be the Department of the Treasury, which could use the pilot to recapitalize its Community Development Financial Institutions (CDFI) Fund. If the pilot proves successful, the secondary market could be expanded further and permanently established as a government-sponsored enterprise.

Recommendation 1. Establish an administrative pilot.

The SBA’s secondary market can already serve debt and debt-like instruments for small businesses and community development. The SBA currently underwrites, guarantees, securitizes, and sells pools of 7(a) and 504 loans, unsecured SBA loans in Development Company Participation Certificates, and Small Business Investment Company Debentures. Much as Federal Housing Administration and Veterans Affairs home loans offer guaranteed debt to homeowners, these programs offer guaranteed debt for entrepreneurs. However, no large, standardized secondary market for this debt extends across agencies.

An interagency memorandum of understanding between interested ICIC agencies could quickly open up the SBA’s secondary market infrastructure to other forms of small business debt. This would allow the ICIC to explore, with limited risk, the extent to which an expanded secondary market for federally securitized debt products enables entrepreneurs and small businesses to more easily access low-cost capital. Examples of other forms of small business lending provided by ICIC agencies include Department of Agriculture Rural Business Development Grants, Department of Housing and Urban Development Community Development Block Grants, and the Treasury Small Business Lending Fund, among others.

An ideal initial pilot partner among ICIC agencies would be the Treasury, which could pilot a secondary market approach to recapitalizing its CDFI Fund. This fund allocates capital via debenture to CDFIs so they can make personal, mortgage, and commercial loans in low-income and underserved communities. The fund is recapitalized on an annual basis through the federal budget process. A partnership with SBA to create a secondary market for the CDFI Fund would effectively double the federal support available for CDFIs that leverage that fund.

It is important to note that while SBA can create pilot intergovernmental agreements to extend its secondary market infrastructure, broader or permanent extension of secondary market authority may require congressional approval.

Recommendation 2. Create a government-sponsored enterprise (GSE).

Upon successful completion of the administrative pilot, the ICIC should explore creating a GSE that decreases the cost of capital for entrepreneurs and small businesses and expands capital access for underserved communities. This separate entity would be more independent than an expanded secondary market built on SBA’s existing infrastructure. Creating a GSE would provide more flexibility and allow the entity to function more independently and with greater authority, while subjecting it to more rigorous reporting and oversight requirements to ensure accountability.

After the 2008 housing-market crash and subsequent recession, the concept of a GSE was criticized and reforms were proposed. There is no doubt that GSEs made mistakes in the housing market, but they also helped standardize and grow the mortgage market that now serves 65% of American households. The federal government will need to implement thoughtful, innovative governance structures to realize the benefits that a GSE could offer entrepreneurs and small businesses while avoiding repeating the mistakes that the mortgage-focused GSEs Fannie Mae and Freddie Mac made. 

One potential ownership structure is the Perpetual Purpose Trust (PPT). PPTs work by separating the ownership right of governance from the ownership right of financial return and giving them to different parties. The best-known example of a PPT to date is likely the one established by Yvon Chouinard to take over his family’s ownership interest in Patagonia. In a PPT, trustees—organized in a Trust Steward Committee (TSC)—are bound by a fiduciary duty to maintain focus on the stated purpose of the trust. None of the interests within the TSC are entitled to financial return; rather, the rights to financial return are held in a separate entity (the Corporate Trustee) that does not possess governance rights. This structure, which is backed by a Trust Enforcer, ensures that the TSC cannot force the company to do something that is good for profits but bad for purpose. 

Emulating this basic structure for a capital-focused GSE could circumvent the moral hazard that plagued the mortgage-focused GSEs. The roles of TSC, Trust Enforcer, and Corporate Trustee in a federal context could be filled as follows:

Conclusion 

The ICIC agencies support many creative solutions that blend private and public dollars to increase entrepreneurship and community development. Yet the federal government stops short of providing the most important benefit: standardization and scale. The ICIC agencies should therefore create an entity that unlocks standardization and scale for the programs they help create, with the overall goals of:

A first step towards accomplishing these goals is to establish an administrative pilot, by which interested ICIC agencies would use the SBA’s existing authority and infrastructure to create a secondary market for their securitized debt instruments. 

If the pilot proves successful, the next step is to expand the secondary market and establish it for the long term through a GSE modeled on those that have effectively supported the mortgage industry—but with a creative structure that proactively addresses the GSE weaknesses unveiled by the 2008 housing-market crash. The result would be a stable, permanent institution that enables all communities to realize the benefits of robust entrepreneurship by ensuring that budding entrepreneurs and small-business owners across the country can easily tap into the capital they need to get started.

Frequently Asked Questions
Is there any precedent for a program like this?

Precedents for this type of federal intervention can be found in the mortgage industry. Homeownership is a major driver of wealth creation. The federal government supports homeownership through mortgage guarantees by federal agencies like the Federal Housing Administration and the Department of Veterans Affairs. In addition, the federal government increases liquidity in the mortgage industry by enabling insured mortgages and market-rate mortgages to be securitized, sold, and purchased on secondary markets through government-sponsored enterprises (GSEs) like Fannie Mae and Freddie Mac, or wholly owned agencies like Ginnie Mae. These structures have created a reliable stream of capital to originate loans for homeownership and have lowered the cost of borrowing.


The mortgage GSEs are engaging in innovation to increase access to housing credit. Fannie Mae, for example, is taking a number of steps to extend credit and homeownership to historically disadvantaged communities, including by using documented rental payments to help individuals build their credit scores and using special-purpose credit programs to develop new solutions for down payment assistance, underwriting, and credit enhancement. These changes will have an outsize effect on the mortgage industry because of the central role a GSE like Fannie Mae plays in connecting private markets to potential homeowners.


COVID-19 relief efforts provide an application of this model specific to small businesses. The California Rebuilding Fund (CARF) was a private credit fund for small businesses capitalized with a mixture of state, federal, philanthropic, and private investment. The CARF used government debt guarantees to push down the cost of capital for the Community Development Financial Institutions best positioned to originate loans for and serve the small businesses most negatively impacted by COVID-19.


The CARF proved that a coherent and routinized process for accessing private capital can lower interest rates, expand credit for small businesses, and create operational efficiencies for entrepreneurial support organizations. For instance, there is a single application site that matches potential borrowers to potential lenders. The keys to the CARF’s success were its guarantee from the state of California and the fact that it provided a relatively uniform offering to different investors along a spectrum of return profiles.

What are some specific reforms that the GSE should incorporate?

To begin, the new entity should securitize or purchase securities composed only of government-guaranteed loans. Even during the worst of the housing crash, government-guaranteed mortgage-backed securities were more stable than non-agency securities. Beginning with guaranteed loans allows this new entity to provide explicit guarantees to guarantee-sensitive investors. However, a gradual push into new mechanisms, innovative underwriting, and perhaps non-agency debt should be a goal.


The guarantee of the loans should be explicit but should sit only behind the borrower’s equity and the agency guarantee.


Any privileges extended to the new entity, such as exemption from securities registration or from state and local taxation, that result in a measurable decrease in the cost of lending should be passed on to the final borrower as much as possible.


Assuming that the regulatory body, acting as a fiduciary of the trust, can implement policies that take into account demographics like race, ethnicity, and country of origin, the GSE should use special purpose credit programs to address racial inequalities in access to capital.


The authorizing statute for the SBA secondary market requires the lender to remain obligated to the SBA if it securitizes and sells the underlying loan on a secondary market. To enforce that obligation, the SBA requires the lender to keep a percentage of the loan on its books for servicing. This is an operational hurdle to securitizing loans. Either there needs to be a more robust market to justify the operational expense, or there should be another manner by which the lender remains obligated to the SBA.


The SBA recently announced a change in the interest rates that lenders can charge for 7(a) loans. While it is understandable that the SBA does not want the guarantee to run up the profit margin for lenders, the tradeoff is that some entrepreneurs will go without capital because lenders cannot justify the risk at the formulated interest rate. The governing regulation, 13 CFR § 120.213, merely requires that interest rates be reasonable. This should give the SBA room to experiment with how it can deliver low-cost capital to borrowers. For example, if the usury cap were removed for some loans, could the SBA require the excess yield to be used to push down the cost of borrowing for other loans?

What is the ICIC?

The Interagency Community Investment Committee (ICIC) focuses on the operations and execution of federal programs that facilitate the flow of capital and the provision of financial resources into historically underserved communities, including communities of color, rural communities, and Tribal nations. The ICIC is composed of representatives from the Department of the Treasury, the Small Business Administration, the Department of Commerce, the Department of Transportation, the Department of Housing and Urban Development, and the Department of Agriculture.

Building a National Network of Composite Pipes to Reduce Greenhouse Gas Emissions

Summary

65,000 miles of pipeline: that’s the distance that may be necessary to achieve economy-wide net-zero emissions by 2050, according to a Princeton University study. The United States is on the verge of constructing a vast network of pipelines to transport hydrogen and carbon dioxide, incentivized by the Infrastructure Investment and Jobs Act and the Inflation Reduction Act. Yet the lifecycle emissions generated by a typical steel pipeline are 27.35 kg carbon dioxide eq per ft.1 That means 65,000 miles of steel pipeline infrastructure alone would generate nearly 9.4 million metric tons of carbon dioxide eq, equivalent to the annual emissions of over 2 million passenger cars.

Pipelines made from composite materials offer one pathway to lowering emissions. Composite pipe is composed of multiple layers of different materials—typically a thermoplastic polymer as the primary structural layer with reinforcing materials such as fibers or particulate fillers to increase strength and stiffness. Some types have lifecycle emissions that are roughly one-quarter of those of a typical steel pipeline. Depending on the application, composite pipelines can also be safer and less expensive. However, the process under the Pipeline and Hazardous Materials Safety Administration (PHMSA) to issue permits for composite pipe takes longer than for steel, and for hydrogen and supercritical carbon dioxide, the industry lacks regulatory standards altogether. Reauthorization of the Protecting Our Infrastructure of Pipelines and Enhancing Safety (PIPES) Act offers an excellent opportunity to review the policies concerning new, less emissive pipeline technologies.

Challenge and Opportunity

Challenge

The United States is on the verge of a clean energy construction boom, expanding far beyond wind and solar energy to include infrastructure that utilizes hydrogen and carbon capture. The pump has been primed with $21 billion for demonstration projects or “hubs” in the Infrastructure Investment and Jobs Act and reinforced with another $7 billion for demonstration projects and at least $369 billion in tax credits in the Inflation Reduction Act. Congress recognized that pipelines are a critical component and provided $2.1 billion in loans and grants under the Carbon Dioxide Transportation Infrastructure Finance and Innovation Act (CIFIA).

The United States is crisscrossed by pipelines. Approximately 3.3 million miles of predominantly steel pipelines convey trillions of cubic feet of natural gas and hundreds of billions of tons of liquid petroleum products each year. By comparison, only about 5,000 miles are used to transport carbon dioxide and only 1,600 miles are dedicated to hydrogen. Research suggests the existing pipeline network is nowhere near what is needed. According to Net Zero America, approximately 65,000 miles of pipeline will be needed to transport captured carbon dioxide to achieve economy-wide net-zero emissions in the United States by 2050. The study also identifies a need for several thousand miles of pipelines to transport hydrogen within each region.

Making pipes out of steel is a carbon-intensive process, and steel manufacturing in general accounts for seven to nine percent of global greenhouse gas emissions. There are ongoing efforts to lower emissions generated from steel (i.e., “green steel”) by being more energy efficient, capturing and storing emitted carbon dioxide, recycling scrap steel combined with renewable energy, and using low-emissions hydrogen. However, cost is a significant challenge with many of these mitigation strategies. The estimated cost of transitioning global steel assets to net-zero compatible technologies by 2050 is $200 billion, in addition to a baseline average of $31 billion annually to simply meet growing demand.

Opportunity

Given the vast network of pipelines required to achieve a net-zero future, expanding the use of composite pipe provides a significant opportunity for the United States to lower carbon emissions. Composite materials are highly resistant to corrosion, weigh less, are more flexible, and have improved flow capacity. This means that pipelines made from composite materials have a longer service life and require less maintenance than steel pipelines. Composite pipe can also be four times faster to install, requires one-third the labor to install, and has significantly lower operating costs.2 The use of composite pipe is expected to continue to grow as technological advancements make these materials more reliable and cost-effective.

Use of composite pipe is also expanding as industry seeks to improve its sustainability. We performed a lifecycle analysis on thermoplastic pipe, which is made by a process called extrusion that involves melting a thermoplastic material, such as high-density polyethylene or polyvinyl chloride, and then forcing it through a die to create a continuous tube. The tube can then be cut to the desired length, and fittings can be attached to the ends to create a complete pipeline. We found that the lifecycle emissions from thermoplastic pipe were 6.83 kg carbon dioxide eq/ft, approximately 75% lower than for an equivalent length of steel pipe, which has lifecycle emissions of 27.35 kg carbon dioxide eq/ft.
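
To make the scale of the opportunity concrete, the short calculation below combines the per-foot figures reported above with the 65,000-mile network estimate cited in the summary. It is a back-of-the-envelope sketch based only on the numbers in this memo, not an output of the FastLCA analysis described in the FAQ.

```python
# Back-of-the-envelope comparison using the per-foot lifecycle emissions cited
# in this memo: 27.35 kg CO2-eq/ft for steel and 6.83 kg CO2-eq/ft for
# thermoplastic pipe, scaled to the 65,000-mile Net Zero America estimate.

STEEL_KG_PER_FT = 27.35           # kg CO2-eq per foot, steel pipe
THERMOPLASTIC_KG_PER_FT = 6.83    # kg CO2-eq per foot, thermoplastic pipe
FEET_PER_MILE = 5280
NETWORK_MILES = 65_000

reduction = 1 - THERMOPLASTIC_KG_PER_FT / STEEL_KG_PER_FT
print(f"Per-foot reduction: {reduction:.0%}")                                        # ~75%

network_ft = NETWORK_MILES * FEET_PER_MILE
steel_mt = STEEL_KG_PER_FT * network_ft / 1000                                       # metric tons
thermo_mt = THERMOPLASTIC_KG_PER_FT * network_ft / 1000
print(f"Steel network:         {steel_mt / 1e6:.1f} million metric tons CO2-eq")     # ~9.4
print(f"Thermoplastic network: {thermo_mt / 1e6:.1f} million metric tons CO2-eq")    # ~2.3
print(f"Potential avoided:     {(steel_mt - thermo_mt) / 1e6:.1f} million metric tons CO2-eq")
```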

These estimates do not include potential differences in leaks. Specifically, composite pipe has a continuous structure that allows for the production of longer pipe sections, resulting in fewer joints and welds. In contrast, metallic pipes are often manufactured in shorter sections due to limitations in the manufacturing process. This means that more joints and welds are required to connect the sections together, which can increase the risk of leaks or other issues. Further, approximately half of the steel pipelines in the United States are over 50 years old, increasing the potential for leaks and maintenance cost.3 Another advantage of composite pipe is that it can be pulled through steel pipelines, thereby repurposing aging steel pipelines to transport different materials while also reducing the need for new rights of way and associated permits. 

Despite the advantages of using composite materials, standards have not yet been developed to allow for safe permitting of composite pipe to transport supercritical carbon dioxide4 and hydrogen. At the federal level, pipeline safety is administered by the Department of Transportation’s Pipeline and Hazardous Materials Safety Administration (PHMSA).5 To ensure safe transportation of energy and other hazardous materials, PHMSA establishes national policy, sets and enforces standards, educates, and conducts research to prevent incidents. There are regulatory standards for transporting supercritical carbon dioxide in steel pipe.6 However, there are no standards for composite pipe to transport hydrogen, or to transport carbon dioxide in a supercritical, gaseous, or subcritical liquid state.

Repurposing existing infrastructure is critical because the siting of pipelines, regardless of type, is often challenging. Whereas natural gas pipelines and some oil pipelines can invoke eminent domain provisions under federal law such as the Natural Gas Act or Interstate Commerce Act, no such federal authorities exist for hydrogen and carbon dioxide pipelines. In some states, specific statutes address eminent domain for carbon dioxide pipelines. These laws typically establish the procedures for initiating eminent domain proceedings, determining the amount of compensation to be paid to property owners, and resolving disputes related to eminent domain. However, current efforts are under way in states such as Iowa to restrict use of state authorities to grant eminent domain to pending carbon dioxide pipelines. The challenges with eminent domain underscore the opportunity provided by technologies that allow for the repurposing of existing pipeline to transport carbon dioxide and hydrogen.

Plan of Action

How can we build a vast network of carbon dioxide and hydrogen pipelines while also using lower emissive materials? 

Recommendation 1. Develop safety standards to transport hydrogen and supercritical carbon dioxide using composite pipe. 

PHMSA, industry, and interested stakeholders should work together to develop safety standards to transport hydrogen and supercritical carbon dioxide using composite pipe. Without standards, there is no pathway to permit the use of composite pipe. This collaboration could occur within the context of PHMSA’s recent announcement that it will update its standards for transporting carbon dioxide, which is being done in response to an incident in 2020 in Satartia, MS.

Ideally, the permits could be issued using PHMSA’s normal process rather than as special permits (e.g., 49 CFR § 195.8). Developing standards takes several years, so it is critical to launch the standard-setting process now so that composite pipe can be used in Department of Energy-funded hydrogen hubs and carbon capture demonstration projects.

Europe is ahead of the United States in this regard, as the classification company DNV is currently undertaking a joint industry project to review the cost and risk of using thermoplastic pipe to transport hydrogen. This work will inform regulators in the European Union, who are currently revising standards for hydrogen infrastructure. The European Clean Hydrogen Alliance recently adopted a “Roadmap on Hydrogen Standardization” that expressly recommends setting standards for non-metallic pipes. To the extent practicable, it would benefit export markets for U.S. products if the standards were similar.  

Recommendation 2. Streamline the permitting process to retrofit steel pipelines. 

Congress should streamline the retrofitting of steel pipes by enacting a legislative categorical exclusion under the National Environmental Policy Act (NEPA). NEPA requires federal agencies to evaluate actions that may have a significant effect on the environment. Categorical exclusions (CEs) are categories of actions that have been determined to have no significant environmental impact and therefore do not require an environmental assessment (EA) or an environmental impact statement (EIS) before they can proceed. CEs can be processed within a few days, thereby expediting the review of eligible actions.

The CE process allows federal agencies to avoid the time and expense of preparing an EA or EIS for actions that are unlikely to have significant environmental effects. CEs are often established through agency rulemaking but can also be created by Congress as a “legislative CE.” Examples include minor construction activities, routine maintenance and repair activities, land transfers, and research and data collection. However, even if an action falls within a CE category, the agency must still conduct a review to ensure that there are no extraordinary circumstances that would warrant further analysis.

Given the urgency to deploy clean technology infrastructure, Congress should authorize federal agencies to apply a categorical exclusion where steel pipe is retrofitted using composite pipe. In such situations, the project is using an existing pipeline right-of-way, and there should be few, if any, additional environmental impacts. Should there be any extraordinary circumstances, such as substantial changes in the risk of environmental effects, federal agencies would be able to evaluate the project under an EA or EIS. A CE does not obviate the review of safety standards and other applicable, substantive laws, but simply right-sizes the procedural analysis under NEPA.

Recommendation 3. Explore opportunities to improve the policy framework for composite pipe during reauthorization of the PIPES Act. 

Both of the aforementioned ideas should be considered as Congress initiates its reauthorization of the Protecting Our Infrastructure of Pipelines and Enhancing Safety (PIPES) Act of 2020. Among other improvements to pipeline safety, the PIPES Act reauthorized PHMSA through FY2023. As Congress begins work on its next reauthorization bill for PHMSA, it is the perfect time to review the state of the industry, including the potential for composite pipe to accelerate the energy transition.

Recommendation 4. Consider the embedded emissions of construction materials when funding demonstration projects. 

The Office of Clean Energy Demonstrations should consider the embedded emissions of construction materials when evaluating projects for funding. Applicants that have a plan to consider embedded emissions of construction materials could receive additional weight in the selection process. 

Recommendation 5. Support research and development of composite materials. 

Composite materials offer advantages in many other applications, not just pipelines. The Office of Energy Efficiency and Renewable Energy (EERE) should support research to further enhance the properties of composite pipe while improving lifecycle emissions. In addition to ongoing efforts to lower the emissions intensity of steel and concrete, EERE should support innovation in alternative, composite materials for pipelines and other applications.

Conclusion

Recent legislation will spark construction of the next generation of clean energy infrastructure, and the funding also creates an opportunity to deploy construction materials with lower lifecycle emissions of greenhouse gases. This is important, because constructing vast networks of pipelines using emissions-intensive processes undercuts the goals of the legislation. However, the regulatory code remains an impediment because it fails to provide a pathway for using composite materials. PHMSA and industry should commence discussions to create the requisite safety standards, and Congress should work with both industry and regulators to streamline the NEPA process for retrofitting steel pipelines. As America commences construction of hydrogen and carbon capture, utilization, and storage networks, reauthorization of the PIPES Act provides an excellent opportunity to significantly lower emissions.

Frequently Asked Questions
How did you calculate a lifecycle analysis (LCA) for composite pipe?

We compared two types of pipes: 4” API 5L X42 metallic pipe vs. 4” Baker Hughes non-metallic next generation thermoplastic flexible pipe. The analysis was conducted using FastLCA, a proprietary web application developed by Baker Hughes and certified by an independent reviewer to quantify carbon emissions from our products and services. The emission factors for the various materials and processes are based on the ecoinvent 3.5 database for global averages.


  • The data for flexible pipe production is from the 2020 production year and represents transport, machine, and energy usage at Baker Hughes’ manufacturing plant in Houston, TX.
  • All raw material and energy inputs for flex pipes are taken directly from engineering and plant manufacturing data, as verified by engineering and manufacturing personnel, and represent actual usage to manufacture the flexible pipes.
  • All of the data for metallic pipe production is from API 5L X42 schedule 80 pipe specifications and represent transport from Alabama and energy usage for production from global averages.
  • All raw material and energy inputs for hot rolling steel are computed from ecoinvent 3.5 database emission factors. All relevant production steps and processes are modeled.
  • All secondary processes are from the ecoinvent 3 database (version 3.5 compiled as of November 2018) as applied in SimaPro 9.0.0.30.
  • Results are calculated using IPCC 2013 GWP 100a (IPCC AR5).
What are the safety risks of transporting hydrogen and carbon dioxide using composite pipe?

Similar to steel pipe, transporting hydrogen and carbon dioxide using composite pipe poses certain safety risks that must be carefully managed and mitigated:


  • Hydrogen gas can diffuse into the composite material and cause embrittlement, which can lead to cracking and failure of the pipe.
  • The composite material used in the pipe must be compatible with hydrogen and carbon dioxide. Incompatibility can cause degradation of the pipe due to permeation, leading to leaks or ruptures.
  • Both hydrogen and carbon dioxide are typically transported at high pressure, which can increase the risk of pipe failure due to stress or fatigue.
  • Carbon dioxide can be corrosive to certain metals, which can lead to corrosion of the pipe and eventual failure.
  • Hydrogen is highly flammable and can ignite in the presence of an ignition source, such as a spark or heat.

To mitigate these safety risks, appropriate testing, inspection, and maintenance procedures must be put in place. Additionally, proper handling and transportation protocols should be followed, including strict adherence to pressure and temperature limits and precautions to prevent ignition sources. Finally, emergency response plans should be developed and implemented to address any incidents that may occur during transportation.

What are the existing relevant standards that need to be updated?

API Specification 15S, Spoolable Reinforced Plastic Line Pipe, covers the use of flexible composite pipe in onshore applications. The standard does not address transport of carbon dioxide and has not been incorporated into PHMSA’s regulations.


API Specification 17J, Specification for Unbonded Flexible Pipe, covers the use of flexible composite pipe in offshore applications. Similar to 15S, it does not address transport of carbon dioxide and has not been incorporated into PHMSA’s regulations.

Do the same recommendations apply to high-density polyethylene (HDPE) pipe?

HDPE pipe, commonly used in applications such as water supply, drainage systems, gas pipelines, and industrial processes, has similar advantages to composite pipe in terms of flexibility, ease of installation, and low maintenance requirements. It can be assembled to create seamless joints, reducing the risk of leaks. It can also be used to retrofit steel pipes as a liner per API SPEC 15LE.


HDPE pipe has been approved by PHMSA to transport natural gas under 49 CFR Part 192. However, its typical operating pressures (e.g., 100 psi) are significantly lower than those of composite pipe. As with composite pipe, there are no standards for the transport of hydrogen and carbon dioxide, though HDPE pipe’s lower pressure limits make it less suited for use in carbon capture and storage.

Addressing Online Harassment and Abuse through a Collaborative Digital Hub

Summary

Efforts to monitor and combat online harassment have fallen short due to a lack of cooperation and information-sharing across stakeholders, disproportionately hurting women, people of color, and LGBTQ+ individuals. We propose that the White House Task Force to Address Online Harassment and Abuse convene government actors, civil society organizations, and industry representatives to create an Anti-Online Harassment (AOH) Hub to improve and standardize responses to online harassment and to provide evidence-based recommendations to the Task Force. This Hub will include a data-collection mechanism for research and analysis while also connecting survivors with social media companies, law enforcement, legal support, and other necessary resources. This approach will open pathways for survivors to better access the support and recourse they need and also create standardized record-keeping mechanisms that can provide evidence for and enable long-term policy change. 

Challenge and Opportunity 

The online world is rife with hate and harassment, disproportionately hurting women, people of color, and LGBTQ+ individuals. A research study by Pew indicated that 47% of women were harassed online for their gender compared to 18% of men, while 54% of Black or Hispanic internet users faced race-based harassment online compared to 17% of White users. Seven in 10 LGBTQ+ adults have experienced online harassment, and 51% faced even more severe forms of abuse. Meanwhile, existing measures to combat online harassment continue to fall short, leaving victims with limited means for recourse or protection. 

Numerous factors contribute to these shortcomings. Social media companies are opaque, and when survivors turn to platforms for assistance, they are often met with automated responses and few means to appeal or even contact a human representative who could provide more personalized assistance. Many survivors of harassment face threats that escalate from online to real life, leading them to seek help from law enforcement. While most states have laws against cyberbullying, law enforcement agencies are often ill-trained and ill-equipped to navigate the complex web of laws involved and the available processes through which they could provide assistance. And while there are nongovernmental organizations and companies that develop tools and provide services for survivors of online harassment, the onus continues to lie primarily on the survivor to reach out and navigate what is often both an overwhelming and a traumatic landscape of needs. Although resources exist, finding the correct organizations and reaching out can be difficult and time-consuming. Most often, the burden remains on the victims to manage and monitor their own online presence and safety.

On a larger, systemic scale, the lack of available data to quantitatively analyze the scope and extent of online harassment hinders the ability of researchers and interested stakeholders to develop effective, long-term solutions and to hold social media companies accountable. The lack of large-scale, cross-sector, and cross-platform data further hinders efforts to map out the exact scale of the issue and to provide evidence-based arguments for changes in policy. Because the landscape of online abuse is constantly evolving, the lexicons and phrases used in attacks also change, and up-to-date information about them must be maintained.

Forming the AOH Hub will improve the collection and monitoring of online harassment while preserving victims’ privacy; this data can also be used to develop future interventions and regulations. In addition, the Hub will streamline the process of receiving aid for those targeted by online harassment.

Plan of Action

Aim of proposal

The White House Task Force to Address Online Harassment and Abuse should form an Anti-Online Harassment Hub to monitor and combat online harassment. This Hub will center around a database that collects and indexes incidents of online harassment and abuse from technology companies’ self-reporting, through connections civil society groups have with survivors of harassment, and from reporting conducted by the general public and by targets of online abuse. Civil society actors that have conducted past work in providing resources and monitoring harassment incidents, ranging from academics to researchers to nonprofits, will run the AOH Hub in consortium as a steering committee. There are two aims for the creation of this hub. 

First, the AOH Hub can promote collaboration within and across sectors, forging bonds among government, the technology sector, civil society, and the general public. This collaboration enables the centralization of connections and resources and brings together diverse resources and expertise to address a multifaceted problem. 

Second, the Hub will include a data collection mechanism that can be used to create a record for policy and other structural reform. At present, the lack of data limits the ability of external actors to evaluate whether social media companies have worked adequately to combat harmful behavior on their platforms. An external data collection mechanism enables further accountability and can build the record for Congress and the Federal Trade Commission to take action where social media companies fall short. The allocated federal funding will be used to (1) facilitate the initial convening of experts across government departments and nonprofit organizations; (2) provide support for the engineering structure required to launch the Hub and database; (3) support the steering committee of civil society actors that will maintain this service; and (4) create training units for law enforcement officials on supporting survivors of online harassment. 

Recommendation 1. Create a committee for governmental departments.

Survivors of online harassment struggle to find recourse, failed by legal technicalities in patchworks of laws across states and untrained law enforcement. The root of the problem is an outdated understanding of the implications and scale of online harassment and a lack of coordination across branches of government on who should handle online harassment and how to properly address such occurrences. A crucial first step is to examine and address these existing gaps. The Task Force should form a long-term committee of members across governmental departments whose work pertains to online harassment. This would include one person from each of the following organizations, nominated by senior staff:

This committee will be responsible for outlining shortcomings in the existing system and detailing the kind of information needed to fill those gaps. Then, the committee will outline a framework clearly establishing the recourse options available to harassment victims and the kinds of data collection required to prove a case of harassment. The framework should be completed within the first 6 months after the committee has been convened. After that, the committee will convene twice a year to determine how well the framework is working and, in the long term, implement reforms and updates to current laws and processes to increase the success rates of victims seeking assistance from governmental agencies.

Recommendation 2: Establish a committee for civil society organizations.

The Task Force shall also convene civil society organizations to help form the AOH Hub steering committee and gather a centralized set of resources. Victims will be able to access a centralized hotline and information page, and Hub personnel will then triage reports and direct victims to resources most helpful for their particular situation. This should reduce the burden on those who are targets of harassment campaigns to find the appropriate organizations that can help address their issues by matching incidents to appropriate resources. 

To create the AOH Hub, members of the Task Force can map out civil society stakeholders in the space and solicit applications to achieve comprehensive and equitable representation across sectors. Relevant organizations include organizations/actors working on (but not limited to):

The Task Force will convene an initial meeting, during which core members will be selected to create an advisory board, act as a liaison across members, and conduct hiring for the personnel needed to redirect victims to needed services. Other secondary members will take part in collaboratively mapping out and sharing available resources, in order to understand where efforts overlap and complement each other. These resources will be consolidated, reviewed, and published as a public database of resources within a year of the group’s formation. 

For secondary members, their primary obligation will be to connect with victims who have been recommended to their services. Core members, meanwhile, will meet quarterly to evaluate gaps in services and assistance provided and examine what more needs to be done to continue growing the robustness of services and aid provided. 

Recommendation 3: Convene committee for industry.

After its formation, the AOH steering committee will be responsible for conducting outreach with industry partners to identify a designated team from each company best equipped to address issues pertaining to online abuse. After the first year of formation, the industry committee will provide operational reporting on existing measures within each company to address online harassment and examine gaps in existing approaches. Committee dialogue should also aim to create standardized responses to harassment incidents across industry actors and understandings of how to best uphold community guidelines and terms of service. This reporting will also create a framework for standardized best practices for data collection, in terms of the information collected on flagged cases of online harassment.

On a day-to-day basis, industry teams will serve as available resources for the Hub, and cases can be redirected to these teams to provide person-to-person support for harassment cases that require a personalized level of assistance. This committee will aim to increase transparency regarding the reporting process and improve equity in responses to online harassment.

Recommendation 4: Gather committees to provide long-term recommendations for policy change.

On a yearly basis, representatives across the three committees will convene and share insights on existing measures and takeaways. These recommendations will be given to the Task Force and other relevant stakeholders, as well as be accessible to the general public. Three years after the formation of these committees, the groups will publish a report centralizing feedback and takeaways from all committees and providing recommendations for improvement moving forward.

Recommendation 5: Create a data-collection mechanism and standard reporting procedures.

The database will be run and maintained by the steering committee with support from the U.S. Digital Service, with funding from the Task Force for its initial development. The data collection mechanism will be informed by the frameworks provided by the committees that compose the Hub to create a trauma-informed and victim-centered framework surrounding the collection, protection, and use of the contained data. The database will be periodically reviewed by the steering committee to ensure that the nature and scope of data collection is necessary and respects the privacy of those whose data it contains. Stakeholders can use this data to analyze and provide evidence of the scale and cross-cutting nature of online harassment and abuse. The database would be populated using a standardized reporting form containing (1) details of the incident; (2) basic demographic data of the victim; (3) platform/means through which the incident occurred; (4) whether it is part of a larger organized campaign; (5) current status of the incident (e.g., whether a message was taken down, an account was suspended, the report is still ongoing); (6) categorization within existing proposed taxonomies indicating the type of abuse. This standardization of data collection would allow advocates to build cases regarding structured campaigns of abuse with well-documented evidence, and the database will archive and collect data across incidents to ensure accountability even if the originals are lost or removed.
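
As an illustration of how the standardized reporting form might translate into a database record, the sketch below models the six fields as a simple data structure. The field names, categories, and types here are hypothetical assumptions for discussion only; the actual schema would be defined by the steering committee under its trauma-informed, privacy-reviewed framework.

```python
# Illustrative sketch of a standardized incident record for the AOH Hub database.
# Field names and categories are hypothetical; the real schema would be set by
# the steering committee and its periodic privacy review.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class IncidentStatus(Enum):      # field (5): current status of the incident
    CONTENT_REMOVED = "content_removed"
    ACCOUNT_SUSPENDED = "account_suspended"
    REPORT_ONGOING = "report_ongoing"
    NO_ACTION = "no_action"

class AbuseType(Enum):           # field (6): category within a proposed taxonomy
    THREAT = "threat"
    DOXING = "doxing"
    HATE_SPEECH = "hate_speech"
    SEXUAL_HARASSMENT = "sexual_harassment"
    OTHER = "other"

@dataclass
class HarassmentIncidentReport:
    incident_details: str                        # (1) details of the incident
    victim_demographics: dict[str, str]          # (2) basic, minimal demographic data
    platform: str                                # (3) platform/means of the incident
    part_of_organized_campaign: bool             # (4) part of a larger organized campaign?
    status: IncidentStatus                       # (5) current status of the incident
    abuse_types: list[AbuseType] = field(default_factory=list)  # (6) taxonomy tags
    date_reported: date = field(default_factory=date.today)
    reporter_is_target: bool = True              # filed by the target or a bystander
    publicly_visible: bool = False               # target-controlled visibility choice

# Example record (hypothetical):
example = HarassmentIncidentReport(
    incident_details="Coordinated threatening replies over a 48-hour period",
    victim_demographics={"gender": "woman"},
    platform="microblogging platform",
    part_of_organized_campaign=True,
    status=IncidentStatus.REPORT_ONGOING,
    abuse_types=[AbuseType.THREAT],
)
```

Keeping field (2) to a minimal set of structured attributes is consistent with the privacy-minimization approach described in the FAQ below.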

The reporting form will be available online through the AOH Hub. Anyone with evidence of online harassment will be able to contribute to the database, including but not limited to victims of abuse, bystanders, researchers, civil society organizations, and platforms. To protect the privacy and safety of targets of harassment, this data will not be publicly available. Access will be limited to: (1) members of the Hub and its committees; (2) affiliates of the aforementioned members; (3) researchers and other stakeholders, after submitting an application stating reasons to access the data, plans for data use, and plans for maintaining data privacy and security. Published reports using data from this database will be non-identifiable (for example, statistics will be published only in aggregate) and will not be linkable back to individuals without their express consent.

This database is intended to provide data to inform the committees in and partners of the Hub of the existing landscape of technology-facilitated abuse and violence. The large-scale, cross-domain, and cross-platform nature of the data collected will allow for better understanding and analysis of trends that may not be clear when analyzing specific incidents, and provide evidence regarding disproportionate harms to particular communities (such as women, people of color, LGBTQ+ individuals). Resources permitting, the Hub could also survey those who have been impacted by online abuse and harassment to better understand the needs of victims and survivors. This data aims to provide evidence for and help inform the recommendations made from the committees to the Task Force for policy change and further interventions.

Recommendation 6: Improve law enforcement support.

Law enforcement is often ill-equipped to handle issues of technology-facilitated abuse and violence. To address this, Congress should allocate funding for the Hub to create training materials for law enforcement nationwide. The developed materials will be added to training manuals and modules nationwide, to ensure that 911 operators and officers are aware of how to handle cases of online harassment and how state and federal law can apply to a range of scenarios. As part of the training, operators will also be notified to add records of 911 calls regarding online harassment to the Hub database, with the survivor’s consent. 

Conclusion

As technology-facilitated violence and abuse proliferates, we call for funding to create a steering committee in which experts and stakeholders from civil society, academia, industry, and government can collaborate on monitoring and regulating online harassment across sectors and incidents. The resulting Anti-Online Harassment Hub would maintain a data-collection mechanism accessible to researchers to better understand online harassment as well as provide accountability for social media platforms to address the issue. Finally, the Hub would provide accessible resources for targets of harassment in a fashion that would reduce the burden on these individuals. Implementing these measures would create a safer online space where survivors are able to easily access the support they need and establish a basis for evidence-based, longer-term policy change.

Frequently Asked Questions
Why does online harassment matter?
Consequences of a vitriolic online space are severe. With #Gamergate, a notable case of online harassment, a group of online users, critical of progressivism in video game culture, targeted women in the industry with doxing, rape threats, and death threats. Brianna Wu, one of the campaign’s targets, had to contact the police and flee her home. She was diagnosed with post-traumatic stress disorder as a result of the harassment she endured. There are many other such cases that have resulted in dire emotional and even physical consequences.
How do platforms currently handle online harassment?

Platform policies on hate and harassment differ in the redress and resolution they offer. Twitter’s proactive removal of racist abuse toward members of the England football team after the UEFA Euro 2020 Finals shows that it is technically feasible for abusive content to be proactively detected and removed by the platforms themselves. However, this appears to only be for high-profile situations or for well-known individuals. For the general public, the burden of dealing with abuse usually falls to the targets to report messages themselves, even as they are in the midst of receiving targeted harassment and threats. Indeed, the current processes for reporting incidents of harassment are often opaque and confusing. Once a report is made, targets of harassment have very little control over the resolution of the report or the speed at which it is addressed. Platforms also have different policies on whether and how a user is notified after a moderation decision is made. A lot of these notifications are also conducted through automated systems with no way to appeal, leaving users with limited means for recourse.

What has the U.S. government done in response to online harassment?

Recent years have seen an increase in efforts to combat online harassment. Most notably, in June 2022, Vice President Kamala Harris launched a new White House Task Force to Address Online Harassment and Abuse, co-chaired by the Gender Policy Council and the National Security Council. The Task Force aims to develop policy solutions to enhance accountability of perpetrators of online harm while expanding data collection efforts and increasing access to survivor-centered services. In March 2022, the Biden-Harris Administration also launched the Global Partnership for Action on Gender-Based Online Harassment and Abuse, alongside Australia, Denmark, South Korea, Sweden, and the United Kingdom. The partnership works to advance shared principles and attitudes toward online harassment, improve prevention and response measures to gender-based online harassment, and expand data and access on gender-based online harassment.

What actions have civil society and academia taken to combat online harassment?

Efforts focus on technical interventions, such as tools that increase individuals’ digital safety, automatically blur out slurs, or allow trusted individuals to moderate abusive messages directed towards victims’ accounts. There are also many guides that walk individuals through how to better manage their online presence or what to do in response to being targeted. Other organizations provide support for those who are victims and provide next steps, help with reporting, and information on better security practices. However, due to resource constraints, organizations may only be able to support specific types of targets, such as journalists, victims of intimate partner violence, or targets of gendered disinformation. This increases the burden on victims to find support for their specific needs. Academic institutions and researchers have also been developing tools and interventions that measure and address online abuse or improve content moderation. While there are increasing collaborations between academics and civil society, there are still gaps that prevent such interventions from being deployed to their full efficacy.

How do we ensure the privacy and security of data stored regarding harassment incidents?

While complete privacy and security are extremely difficult to ensure in a technical sense, we envision a database design that preserves data privacy while maintaining its usability. First, the fields of information required for filing an incident report would minimize the amount of personally identifiable information collected. As some data can be crowdsourced from the public and external observers, this part of the dataset would consist of existing public data. Nonpublicly available data would be entered only by individuals who are sharing incidents that target them (e.g., direct messages), and those individuals would be allowed to choose whether the data is visible in the database or only reflected in summary statistics. Furthermore, the data collection methods and the database structure will be periodically reviewed by the steering committee of civil society organizations, which will make recommendations for improvement as needed.

What is the scope of data collecting and reporting for the hub?

Data collection and reporting can be conducted internationally, as we recognize that limiting data collection to the U.S. would undermine our goals of intersectionality. However, the hotline will likely have more comprehensive support for U.S.-based issues. In the long run, support efforts can also be expanded internationally as a collaborative effort across multiple national governments.

Accelerating Biomanufacturing and Producing Cost-Effective Amino Acids through a Grand Challenge

Summary 

A number of biomanufactured products require amino acids and growth factors as inputs, but these small molecules and proteins can be very expensive, driving up the costs of biomanufacturing, slowing the expansion of the U.S. bioeconomy, and limiting the use of novel biomedical and synthetically produced agricultural products. Manufacturing costs can be substantially limiting: officials from the National Institutes of Health and the Bill & Melinda Gates Foundation point to the manufacturing costs of antibody drugs as a major bottleneck in developing and distributing treatments for a variety of extant and emerging infectious diseases. To help bring down the costs of these biomanufacturing inputs, the Biden-Harris Administration should allocate federal funding for a Grand Challenge to research and develop reduced-cost manufacturing processes and demonstrate the scalability of these solutions. 

Amino acids are essential but costly inputs for large-scale bioproduction. To reduce these costs, federal funding should be used to incentivize the development of scalable production methods that cut production costs to half of current levels. Specifically, the U.S. Department of Agriculture (USDA) and ARPA-H should jointly commit to an initial funding amount of $15 million for 10 research projects in the first year, with a total of $75 million over five years, in Grand Challenge funding for researchers or companies who can develop a scalable process for producing food-grade or pharmaceutical-grade amino acids or growth factors at a fraction of current costs. ARPA-H should also make funding available for test-bed facilities that researchers can use to demonstrate the scalability of their cost-saving production methods. 

Scaling up the use of animal cell culture for biosynthetic production will only be economically effective if the costs of amino acids and growth factors are reduced. Reducing the cost of bioproduction of medical and pharmaceutical products like vaccines and antimicrobial peptides, or of animal tissue products like meat or cartilage, would improve the availability and affordability of these products, make innovation and new product development easier and more cost effective, and increase our ability to economically manufacture bioproducts in the United States, reducing our dependence on foreign supply chains. 

To better understand the use of amino acids and growth factors in the production of biologics and animal cell-based products, and to accurately forecast supply and demand to ensure a reliable and available supply chain for medical products, the Department of Defense (DoD) and USDA should jointly commission an economic analysis of synthetic manufacturing pathway costs for common bioproducts, including assessments of the comparative costs of production for major international competitors. 

Challenge and Opportunity

Amino acids are necessary inputs when synthesizing protein and peptide products, including pharmaceutical and healthcare products (e.g., antibodies, insulin) and agricultural products (e.g., synthetic plant and animal proteins for food, collagen, gelatin, insecticidal proteins), but they are very expensive. Amino acids as inputs to cell culture cost approximately $3 to $50 per kg, and growth factors cost $50,000 per gram, meaning that their costs can be half or more of the total production cost. 
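To illustrate the scale of that claim, the following back-of-the-envelope arithmetic uses the per-unit prices cited above together with purely hypothetical per-batch quantities; the batch requirements and overhead figure are assumptions for exposition only, not sourced estimates.

```python
# Purely illustrative arithmetic, not sourced data: hypothetical per-batch
# quantities are assumed only to show why inputs priced at roughly $3-$50/kg
# (amino acids) and ~$50,000/g (growth factors) can account for half or more
# of total production cost.
amino_acid_kg_per_batch = 10          # hypothetical requirement for one production batch
amino_acid_cost_per_kg = 25.0         # midpoint of the ~$3-$50/kg range cited above

growth_factor_g_per_batch = 0.05      # hypothetical requirement (50 mg)
growth_factor_cost_per_g = 50_000.0   # figure cited above

other_costs_per_batch = 2_500.0       # hypothetical labor, energy, equipment, etc.

input_cost = (amino_acid_kg_per_batch * amino_acid_cost_per_kg
              + growth_factor_g_per_batch * growth_factor_cost_per_g)
total_cost = input_cost + other_costs_per_batch

print(f"Amino acids + growth factors: ${input_cost:,.0f}")
print(f"Share of total production cost: {input_cost / total_cost:.0%}")
# With these assumptions, inputs are ~$2,750 of a ~$5,250 batch, i.e. just over 50%.
```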

Biomanufacturing depends on the availability of reagents, small molecules, and bioproducts that are used as raw inputs to the manufacturing process. The production of synthetic bioproducts is limited by the cost and availability of certain reagents, including amino acids and small signaling proteins like hormones and growth factors. These production inputs are used in cell culture to increase yields and production efficiency in the biosynthesis of products such as monoclonal antibodies, synthetic meat, clotting factors, and interferon (proteins that inhibit tumor growth and support immune system function). While some bioproducts can be produced synthetically in plant cells or bacterial cells, others benefit from production steps in animal cells. One example is glycosylation, a protein-modification process that helps proteins fold into stable structures, which is much simpler in animal cells than in bacteria or in cell-free systems. The viruses used in vaccine development are also usually grown in animal cells, though some recombinant vaccines can be made in yeast or insect cells. There are benefits and drawbacks to the use of plant, fungal, bacterial, insect, or animal cells in recombinant bioproduction; animal cells are generally more versatile because they closely mimic human processes and require less engineering than non-animal cells. All cells, whether animal, plant, or bacterial, require amino acids and various growth factors to survive and function efficiently. While growth factors may eventually no longer be required, amino acids always will be. Of the strictly necessary additives, amino acids are the most costly on a per-kilogram basis; of the supporting additives, growth factors are the most costly. 

Growth factors are proteins or steroids that act as signaling molecules regulating cells’ internal processes, while amino acids are the building blocks of proteins, necessary both for cell function and for producing new proteins within a cell. Cultured cells require supplementation with both: most cells cannot produce their own growth factors, and they cannot synthesize amino acids fast enough (or at all, in the case of essential amino acids) to meet production demands. Biosynthetic production in animal cells frequently uses growth factors (e.g., TGF, IGF) to increase yield and production speed, signaling cells to work faster and make more of a particular compound.

Pharmaceuticals

Although pharmaceutical products are expensive, relatively small demand volumes prevent market forces from exerting sufficient cost pressure to spur innovation in their production. The biosynthetic production of pharmaceuticals involves engineering cells to produce large quantities of a molecule, such as a protein or peptide, which can then be isolated, purified, and used in medicine. Peptide therapeutics is a $39 billion global market that includes peptides sold as end products and others used as inputs to the synthesis of other biological compounds. Protein and peptide product precursors, including amino acids and growth factors, represent a substantial cost of production, which is a barrier to low-cost, high-volume biomanufacturing.

For example, the production of antimicrobial peptides, used as therapeutics against antibiotic-resistant bacteria and viruses, is strongly constrained by the cost of chemical inputs. One input alone, guanidine, accounts for more than 25% of the approximately $41,000 per gram production cost of antimicrobial peptides. Reducing the cost of these inputs will have substantial downstream effects on the economics of production. Antimicrobial peptides are currently very expensive to produce, limiting their development as alternatives to antibiotics, despite a growing need for new antibiotics. The U.S. National Action Plan for Combating Antibiotic-Resistant Bacteria (CARB) outlines a coordinated strategy to accelerate the development of new antibiotics and slow the spread of antibiotic resistance. Reducing the cost to produce antimicrobial peptides would support these goals. 

The high costs of synthetic production limit the growth of the market for synthetic products. This creates a local equilibrium that is suboptimal for the development of the synthetic biology industry and creates barriers to market entry for synthetic products that could, at scale, address environmental and bioavailability concerns associated with natural sources. The federal government has already indicated an interest in supporting the development of a robust and innovative U.S.-based biomanufacturing sector, with the passage of the CHIPS and Science Act and Executive Order 14081 on Advancing Biotechnology and Biomanufacturing Innovation for a Sustainable, Safe, and Secure American Bioeconomy. Reducing the cost of basic inputs to the biomanufacturing of a range of products directly advances this goal of making U.S. biomanufacturing more sustainable. There are other examples of federal investment to reduce the cost of manufacturing inputs, from USDA support for new methods of producing fertilizer, to Food and Drug Administration investment to improve pharmaceutical manufacturing and establish manufacturing R&D centers at universities, to USDA National Institute of Food and Agriculture (NIFA) support for the development of bioplastics and bio-based construction materials. Federal R&D support increases subsequent private research funding and increases the number of new products that recipients develop, a positive measure of innovation. 

The effort to reduce biomanufacturing costs is larger than any one company; therefore, it requires a coordinated effort across industry, academia, and government to develop and implement the best solution. The ability to cost-effectively manufacture precursors will directly and indirectly advance all aspects of biomanufacturing. Academia and industry are poised and ready to improve the efficiency and cost of bioproduction but require federal government coordination and support to achieve this essential milestone and to support the development of the newly emerging industry of large-scale synthetic bioproducts.  

Synthetic meat

Developing cost-effective protein and peptide synthesis would remove a substantial barrier to the expansion of synthetic medical and agricultural products, which would address current supply bottlenecks (e.g., blood proteins, antibody drugs) and mounting environmental and political challenges to natural sourcing (e.g., beef, soy protein). Over the past decade, breakthroughs have made it possible to synthetically produce biological products like biofuels and the antimalarial drug artemisinin, yet these products have failed to reach cost-competitiveness with naturally sourced competitors despite the environmental and supply-chain-related benefits of a synthetic version. The Department of Energy (DoE) and others continue to invest in biofuel and bioproduct development, and additional research innovation may soon bring these products to a cost-competitive threshold. For bioproducts that depend on amino acids and growth factors as inputs, that threshold may be very close. Proof-of-concept research on growth factor and amino acid production, as well as techno-economic assessments of synthetic meat products, points to precursor amino acids and proteins as substantial barriers to the cost competitiveness of bioproduction that are close to being overcome through technological development. Yet potential innovators lack support to invest in the development of potentially globally beneficial technologies with uncertain returns.

Reducing the costs of these inputs for the peptide drug and pharmaceutical market could also bring down the costs of synthetic meat, thereby increasing a substantial additional market for low-cost amino acids and growth factors while alleviating the environmental burdens of a growing demand for meat. Israel has demonstrated that there is strong demand for such products and has substantially invested in its synthetic meat sector, which in turn has augmented its overall bioeconomy. 

Bringing the cost of synthetic meat down from current estimates of $250 per kg to the high end of wholesale meat prices, about $10 per kg, is infeasible without reducing the cost of growth factors and amino acids as production inputs. Reaching that price point would make substitution feasible, reducing the water and land usage of meat production by 70% to 95%. Synthetic meat would also alleviate many of the ethical and environmental objections to animal agriculture, reduce food waste, and increase the amount of plant products available for human consumption (currently 77% of agricultural land is used for livestock, meat, and dairy production, and 45% of the world’s crop calories are eaten by livestock).

Bioeconomy initiatives and opportunity

Maintaining U.S. competitiveness and leadership in biomanufacturing and the bioeconomy is a priority for the Biden-Harris Administration, which has led to a national bioeconomy strategy that aims to coordinate federal investment in R&D for biomanufacturing, improve and expand domestic biomanufacturing capacity, and expand market opportunities for biobased products. Reducing the cost and expanding the supply of amino acids and growth factors supports these three objectives by making bioproducts derived from animal cells cheaper and more efficient to produce. 

Several directives within President Biden’s National Biotechnology and Biomanufacturing Initiative could apply to the goal of producing cost-effective amino acids and growth factors, but a particular stipulation for the Department of Health and Human Services stands out. The 2022 Executive Order 14081 on Advancing Biotechnology and Biomanufacturing Innovation for a Sustainable, Safe, and Secure American Bioeconomy includes a directive for the Department of Health and Human Services (HHS) to invest $40 million to “expand the role of biomanufacturing for active pharmaceutical ingredients (APIs), antibiotics, and the key starting materials needed to produce essential medications and respond to pandemics.” Protein and peptide product precursors are key starting materials for medical and pharmaceutical products, justifying HHS support for this research challenge. 

Congress has also signaled its intent to advance U.S. biotech and biomanufacturing. The CHIPS and Science Act authorizes funding for projects that could scale up the U.S. bioeconomy. Title IV of the Act, on bioeconomy research and development, authorizes financial support for research, test beds for scaling up technologies, and tools to accelerate research. This support could take the form of grants, multi-agency collaborative funding, and Small Business Innovation Research (SBIR) or Small Business Technology Transfer (STTR) funding. 

Biomanufacturing is important for national security and stability, yet much research and development is needed to realize its potential. The funding opportunities described above should be leveraged to support foundational, cross-cutting capabilities, such as the production of essential precursor molecules, that are needed to achieve affordable, accessible biomanufactured products. 

Plan of Action

To provide the catalyst for innovation that will drive down the price of components, federal funding should be made available to organizations developing cost-effective biosynthetic production pathways. Initial funding would be most helpful in the form of research grants as part of a Grand Challenge competition. University researchers have made some proof-of-concept progress in developing cost-effective methods of amino acid synthesis, but the investment required to demonstrate that these methods succeed at scale is currently not provided by the market. The main market for synthetic biomanufacturing inputs like amino acids is pharmaceutical products, which can pass on high production costs to the consumer and are not sufficiently incentivized to drive down the costs of inputs. 

Recommendation 1. Provide Grand Challenge funding for reduced-cost scalable production methods for amino acids and growth factors.

The USDA (through the USDA-NIFA Agriculture and Food Research Initiative [AFRI] or through AgARDA if it is funded) and ARPA-H should jointly commit to $15 million for 10 projects in the first year, with a total of $75 million over five years, in Grand Challenge1 funding for researchers or companies who can develop a scalable process for producing food-grade or pharmaceutical-grade amino acids or growth factors at a fraction of current costs (e.g., $100,000 per kg for growth factors and $1.50 per kg for amino acids), with escalating prizes for greater cost reductions. Applicants could also qualify by developing scalably produced bioengineered growth factors with increased efficacy and efficiency. Grand Challenges offer funding to incentivize productive competition among researchers to achieve specific goals; they may also offer prizes for achieving interim steps toward a larger goal.

ARPA-H and USDA are well-positioned to spur innovation in cost-effective precursor production. Decreasing the costs of producing amino acids and growth factors would enable the transformative development of biologics and animal-cell-based products like synthetic meat, which aligns well with ARPA-H’s goal of supporting the development of breakthrough medical and biological products and technologies. ARPA-H aims to use its $6.5 billion in funding from the FY22 federal budget to invest in three-to-five-year projects that will support breakthrough technologies that are not yet economically compelling or sufficiently feasible for companies to invest internally in their development. An example technology cited by the ARPA-H concept paper is “new manufacturing processes to create patient-specific T-cells to search and destroy malignant cells, decreasing costs from $100,000s to $1000s to make these therapies widely available.” Analogously, new manufacturing processes for animal cell culture inputs will make biosynthetic products more cost-effective and widely available, but the potential market is still speculative, making investment risky.

AgARDA was designed to complement AFRI in its model for soliciting research proposals, and the ability to jointly support projects like a Grand Challenge to scale up amino acid and growth factor production is a reason to fund AgARDA at its authorized level. Because producing cell-based meat at cost parity with animal meat would be an agricultural achievement, lowering the cost of necessary inputs to cell-based meat production could fall within AgARDA’s scope. 

Recommendation 2. Reward Grand Challenge winners who demonstrate scalability and provide BioPreferred program purchasing preference. 

Researchers developing novel low-cost and high-efficiency production methods for amino acids and growth factors will also need access to facilities and manufacturing test beds to ensure that their solutions can scale up to industrial levels of production. To support this, ARPA-H should make funding available to Grand Challenge winners to demonstrate scaling their solutions to hundreds of kilograms per year. This is aligned with the test-bed development mandated by the CHIPS and Science Act. This funding should include $15 million to establish five test-bed facilities (a similar facility at the University of Delaware was funded at $3 million) and an additional $3 million to provide vouchers of between $10,000 and $300,000 for use at test-bed facilities. (These amounts are similar to the vouchers provided by the California Energy Commission for its clean energy test-bed program.) 

To support the establishment of a market for the novel production processes, USDA should add to its BioPreferred program a requirement that federal procurement give preference to winners of the Grand Challenge when purchasing amino acids or growth factors for the production of biologics and animal cell-derived products. The BioPreferred program requires that federal purchases favor bio-based products (e.g., biodegradable cutlery rather than plastic cutlery) where the bio-based product meets the requirements for the purchaser’s use of that product. This type of purchasing commitment would be especially valuable for Grand Challenge winners who identify novel production methods—such as molecular “farming” in plants or cell-free protein synthesis—whose startup costs make it difficult to bootstrap incremental growth in production. Requiring that federal purchasing give preference to Grand Challenge winners ensures a certain volume of demand for new suppliers to establish themselves without increasing costs for purchasers. 

Stakeholder support for this Grand Challenge would include research universities; the alternative protein, peptide products, and synthetic protein industries; nonprofits supporting reduced peptide drug prices (such as the American Diabetes Association or the Boulder Peptide Foundation) and a reduction in animal agriculture (such as New Harvest or the Good Food Institute); and U.S. biomanufacturing supporters, including DoE and DoD. Companies and researchers working on novel methods for scalable amino acid and growth factor production will also support additional funding for technology-agnostic solutions (solutions that focus on characteristics of the end product rather than the method—such as precision fermentation, plant engineering, or cell-free synthesis—used to obtain the product). 

As another incentive, ARPA-H should solicit additional philanthropic and private funding for Grand Challenge winners, which could take the form of additional prize money or advance purchase commitment for a specified volume of amino acids or growth factors at a given threshold price, providing further incentive for bringing costs below the level specified by the Challenge. 

Recommendation 3. To project future demand, DoD should commission an economic analysis of synthetic manufacturing pathway costs for common bioproducts, and include assessments of comparative costs in major international competitors (e.g., China, the European Union, the United Kingdom, Singapore, South Korea, Japan). 

This analysis could be funded in part via BioMADE’s project calls for technology and innovation research. BioMADE received $87 million in DoD funding in 2020 for a seven-year period, plus an additional $450 million announced in 2023. Cost sharing for this project could come from the NSF Directorate for Technology, Innovation, and Partnerships or from the DoE’s Office of Science’s Biological and Environmental Research Program, which has supported techno-economic analyses of similar technologies, such as biofuels. 

EO 14081 also includes DoD as a major contributor to building the bioeconomy. The DoD’s Tri-Service Biotechnology for a Resilient Supply Chain program will invest $270 million over five years to speed the application of research to product manufacturing. Decreasing the costs of amino acids and growth factors as inputs to manufacturing biologics could be part of this new program, depending on the forthcoming details of its implementation. Advancing cost-effective biomanufacturing will transform the defense capabilities needed to maintain U.S. competitiveness, secure critical supply chains, and enhance the resiliency of our troops, spanning defense needs that include medicines, alternative foods, fuels, commodity and specialty chemicals, sensors, materials, and more. China recently declared a focus on synthetic animal protein production in its January 2022 Five Year Plan for Agriculture. Our trade relationship with China, which includes many agricultural products, may shift if China is able to successfully produce these products synthetically.

Conclusion

To support the development of an expansive and nimble biomanufacturing economy within the United States, federal agencies should ensure that the necessary inputs for creating biomanufactured products are as abundant and cost-effective as possible. Just as the cost to produce an almond is greatly dependent on the cost of water, the cost to manufacture a biological product in a cell-based manufacturing system depends on the cost of the inputs used to feed that system. Biomanufactured products that require amino acids and growth factors as inputs range from the medically necessary, like clotting factors and monoclonal antibodies, to the potentially monumental and industry-changing, like cell-based meat and dairy products. Federal actions to increase the feasibility and cost-effectiveness of manufacturing these products in the United States will beneficially affect the bioeconomy and biotechnology industry, the pharmaceutical and biomedical industries, and potentially the food and agriculture industries as well.

Frequently Asked Questions
What are other potential funding sources?

Partnerships for Innovation. This National Science Foundation program funds translational research to accelerate technology development, which could apply to research aimed at scaling up the production of amino acids and growth factors and developing innovative, low-cost methods of production, purification, and processing.

How were grant funding amounts derived?

Similar grant funding through NINDS (CREATE Bio) and NIST (NIIMBL) for biomanufacturing initiatives devoted $10 million to $16 million in funding for 12-14 projects. The USDA recently awarded $10 million over five years to Tufts University to develop a National Institute for Cellular Agriculture, as part of a $146 million investment in 15 research projects announced in 2021 and distributed by the USDA-NIFA Agriculture and Food Research Initiative’s Sustainable Agricultural Systems (AFRI-SAS) program. AFRI-SAS supports workforce training and standardization of methods used in the production of cell-based meat, while Tufts’s broader research goals include evaluating the economics of production. Decreasing the cost of synthetic meat is key to developing a sustainable cellular agriculture program, and USDA could direct a portion of its AFRI-SAS funding to providing support for this initiative.

Would decreasing costs of amino acids and growth factors spur innovation?

Yes. The costs of current production methods for biological products, such as monoclonal antibody drugs, are high enough that developing monoclonal antibodies for infectious diseases that primarily affect poor regions of the world is considered infeasible. Decreasing the costs of manufacturing these drugs by decreasing the costs of their inputs would make it economically possible to develop antibody drugs for diseases like malaria and Zika, and biomedical innovation for other infectious diseases could follow. Similarly, decreasing the costs of amino acid and growth factor inputs would allow synthetic meat companies greater flexibility in the types of products and manufacturing processes they are able to use, increasing their ability to innovate.

Why aren’t companies pursuing this work with market incentives? Why should the U.S. government fund this work?

In fact, a few non-U.S. companies are pursuing the production of synthetic growth factors as well as bioengineered platforms for lower-cost growth factor production. Israeli company BioBetter, Icelandic company ORF Genetics, UK-based CellRX, and Canadian company Future Fields are all working to decrease growth factor costs, while Japanese company Ajinomoto and Chinese companies such as Meihua Bio and Fosun Pharma are developing processes to decrease amino acid costs. Many of these companies receive subsidies or are funded by national venture funding dedicated to synthetic biology and the alternative protein sector. Thus, U.S. federal funding of lower-cost amino acid and growth factor production would support the continued competitiveness of the national bioeconomy and demonstrate support for domestically manufactured bioengineered products. 

How would decreasing amino acid and growth factor costs result in job growth or biomanufacturing growth?

Reducing the supply chain costs of manufacturing allows companies to increase manufacturing volumes, produce a wider range of products, and sell into more price-sensitive markets, all of which could result in job growth and the expansion of the biomanufacturing sector. Solar panels and photovoltaic cells offer an analogous example: substantial decreases in production costs have been coupled with job growth, and jobs in photovoltaics are seeing the largest increases amid overall growth in renewable energy employment.

Could the technologies that decrease cost of amino acid and growth factor production be used in other industries?

The techniques required to lower costs and scale production of amino acids and growth factors should translate to the production of other types of small molecules and proteins, and may even pave the way for more efficient and lower-cost production methods in chemical engineering, which shares some methods with bioengineering and biological manufacturing. For example, chemical engineering can involve the production of organic molecules and processing and filtration steps that are also used in the production of amino acids and growth factors.

How would increased synthetic meat production and consumption affect the livestock industry?

Increased synthetic meat production will help address growing demand for meat and protein-rich foods, demand the livestock industry currently struggles to meet while also competing for land, water, agricultural products, and skilled labor. As an example, the recent U.S. egg shortage demonstrated that the livestock industry is susceptible to external production shocks caused by disease and unexpected environmental effects. Many large-scale meat companies, including giants like Cargill and Tyson Foods, see themselves as in the business of supplying protein rather than the business of slaughtering animals, and have invested in plant-based-meat companies to broaden their portfolios. Expanding into synthetic meat is another way for animal agriculture to continue to serve meat to customers while incorporating new technological methods of production. If synthetic meat adoption expands rapidly enough to reduce the need for animal husbandry, farmers and ranchers will likely respond by shifting the types of products they produce, whether by growing more vegetables and plant crops or by raising animals for other industries.

Visa Interview Waivers after COVID

Summary

The COVID-19 pandemic severely impaired State Department (DOS) processing capacity by interrupting operations at U.S. consulates and foreign posts, slashing revenue for consular services through the resultant collapse in collected fees, and straining preexisting staffing challenges. To respond to diminished capacity, the State Department used its authority to waive in-person interviews to efficiently process visas with the resources it had available, while protecting national security. Even after COVID-19 ends as an official public health emergency, its effects on visa processing capacity will linger. In 2022, 48 percent of nonimmigrant visas were issued with an interview waiver, which was a vital component in rejuvenating global talent mobility. Current visa interview waiver policies should remain in place until U.S. visa processing fully rebounds and should become a permanent feature of the State Department’s ongoing efforts to develop country-by-country consular policies that mitigate risk and avoid backlogs. 

Expanded use of interview waivers because of diminished processing capacity

Congress authorized interview waivers to allow the State Department to focus its scarce resources on potential threats. The State Department originally had complete discretion about who must make a “personal appearance” and who may be waived under the Immigration and Nationality Act of 1952.1 Before the September 11th attacks, personal appearance waivers were relatively common,2 but post-9/11 policy guidance, initially codified in regulation in 2003, restricted the use of waivers to certain circumstances.3 Congress codified these restrictions in the Intelligence Reform and Terrorism Prevention Act of 2004, which added an in-person interview requirement for all applicants between 14 and 79 except under particular circumstances.4 Namely, DOS can offer waivers for the in-person interview requirement to applicants renewing visas (who have already had interviews) and for designated low-risk applicants.5 

The waiver authorities that the 2004 law left to the State Department contain three components. First, individual consular officers may waive in-person interviews in certain cases when the applicant “presents no national security concerns requiring an interview.” Second, the Secretary of State may waive interviews when it is in the national interest. Third, the Deputy Assistant Secretary for Visa Services has the authority to waive interviews when it is “necessary as a result of unusual or emergent circumstances.”6 

In the wake of COVID-19, the State Department has strategically used these waivers to address growing backlogs. After a temporary suspension of visa processing at the beginning of the pandemic, DOS resumed limited visa processing in July 2020. However, limited capacity led to significant backlogs and wait times. A number of factors have contributed to lengthy backlogs: 

  1. Interrupted operations at consulates and embassies: Many consular offices shut down temporarily or scaled back their services during peak pandemic times due to lockdown measures and health risks. This led to delays in application processes that spiraled into massive backlogs when normal functionality resumed.
  2. Diminished revenue: As a fee-funded operation, consular services lost the revenue associated with normal operations. Cuts to staff and resources left the agency with higher caseloads per officer. 
  3. Limited resources before the pandemic: Even before COVID-19, U.S. consulates and embassies had inadequate resources to efficiently handle significant processing demands. This problem was exacerbated by pandemic-related disruptions.
  4. Increased application volumes: Global travel resumed as vaccines became widely available. Family reunification after extended time apart was a primary contributor to rising visa application volumes.  

The Department of State’s current policies focus on low-risk applicants, namely individuals who: have previously traveled to the United States; have biometrics on file for full screening and vetting; and either are the beneficiary of an approved petition from DHS confirming their eligibility for a visa classification or have already received a Certificate of Eligibility for a visa classification from an institution designated by DOS. 

On March 26, 2020, Secretary of State Pompeo announced that DOS would expand the availability of waivers to certain H-2 applicants, marking the first expansion of interview waivers in response to reduced processing capacity. In August 2020, Pompeo announced that applicants seeking a visa in the same category they previously held would be allowed to get an interview waiver if their visa had expired within the previous 24 months. Before this, the expiration window for an interview waiver was only 12 months. In December 2020, just two days before this policy was set to expire, DOS extended it through the end of March 2021. In March 2021, the expiration window was doubled again, from 24 months to 48 months, and the policy was extended through December 31, 2021. In September 2021, DOS also approved waivers through the remainder of 2021 for applicants for F, M, and academic J visas from Visa Waiver Program countries who were previously issued a visa.

In December 2021, DOS extended its then-existing policies (with some minor modifications) through December 2022. It also expanded its interview waiver policies by making first-time applicants for H-1, H-3, H-4, L, O, P, and Q visas — all classifications requiring petition adjudication by DHS — eligible for waivers if they are nationals of countries participating in the Visa Waiver Program and have previously traveled to the United States through the Electronic System for Travel Authorization (ESTA). Applicants for H-1, H-3, H-4, L, O, P, and Q visas are also eligible for waivers if they have previously been issued any type of visa (meaning their biometric data is on file with DOS), have never been refused a visa (unless the refusal was overcome or waived), and have no apparent or potential ineligibility. Applicants who have been issued a valid Certificate of Eligibility for classification as an F-1 student or an exchange visitor on an academic J-1 program may also be issued a visa without an interview. Moreover, the policy that individuals renewing a visa in the same category as one that expired in the preceding 48 months may be issued a visa without an interview was announced as a standing policy of the State Department and added to the department’s Foreign Affairs Manual for consular officers. In December 2022, DOS announced another extension of these policies, which are set to expire at the end of 2023. 
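For readers tracking how these overlapping criteria interact, the sketch below restates the December 2021 eligibility rules described above as a simple decision function. It is an illustrative simplification: the field names are hypothetical, and actual adjudication remains discretionary and subject to country-specific and case-by-case factors.

```python
# Illustrative restatement of the December 2021 interview waiver criteria
# described above. Field names are hypothetical; this is not an official rule.
from dataclasses import dataclass

PETITION_BASED = {"H-1", "H-3", "H-4", "L", "O", "P", "Q"}

@dataclass
class Applicant:
    visa_category: str
    vwp_national: bool                 # national of a Visa Waiver Program country
    prior_esta_travel: bool            # previously traveled to the U.S. via ESTA
    previously_issued_any_visa: bool   # biometrics on file with DOS
    ever_refused_unresolved: bool      # a refusal that was not overcome or waived
    apparent_ineligibility: bool
    valid_certificate_of_eligibility: bool   # e.g., for F-1 or academic J-1
    renewing_same_category: bool
    months_since_expiration: int | None = None

def interview_waiver_eligible(a: Applicant) -> bool:
    # First-time petition-based applicants: VWP nationals with prior ESTA travel.
    if a.visa_category in PETITION_BASED and a.vwp_national and a.prior_esta_travel:
        return True
    # Petition-based applicants previously issued any visa, never refused
    # (or refusal overcome/waived), and no apparent or potential ineligibility.
    if (a.visa_category in PETITION_BASED and a.previously_issued_any_visa
            and not a.ever_refused_unresolved and not a.apparent_ineligibility):
        return True
    # F-1 students and academic J-1 exchange visitors with a valid Certificate of Eligibility.
    if a.visa_category in {"F-1", "J-1"} and a.valid_certificate_of_eligibility:
        return True
    # Renewals in the same category within 48 months of the prior visa's expiration.
    if (a.renewing_same_category and a.months_since_expiration is not None
            and a.months_since_expiration <= 48):
        return True
    return False
```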

In April 2023, President Biden signed a resolution ending the national emergency initiated by the pandemic. The public health emergency expired on May 11, 2023. 

As policymakers consider the future of interview waivers beyond the official COVID emergency, they should note that the new waiver policies were a response to a profound reduction in processing capacity rather than a direct public health measure. Even with the expanded use of waivers, backlogs are still significant: the average wait time is estimated to be about 100 days, well above pre-pandemic waits. Even though the public health emergency has ended, current interview waiver policies should be retained as long as processing delays persist. 

Interview Waivers Have Been Highly Effective

Interview waivers have positively contributed to effective visa processing. Recent data show a decline in global wait times for various applicant types, including students, exchange visitors, temporary workers requiring DHS petition approval, and B-1/B-2 visitors.7 Moreover, interview waivers have had a minimal impact on overstay rates. 

It should be noted that waivers are not granted at the expense of national security or public safety. Robust screening and vetting protocols persist even when interviews are waived. Preserving the waiver mechanism can help strike a balance between robust screening and vetting measures and the procedural workflows that are vital for efficiently managing backlog cases. Waived applicants typically have low-risk profiles or have previously been granted visas after comprehensive background checks, and they are subjected to the same screening and vetting checks and reviews as interviewed applicants based on the biometrics already on file. Allowing State’s consular posts to receive visa applications without an interview, without mandating that posts do so for all available categories, lets consular officials take country-specific conditions into account.  

As the State Department recently noted: “These interview waiver authorities have reduced visa appointment wait times at many embassies and consulates by freeing up in-person interview appointments for other applicants who require an interview. Nearly half of the almost seven million nonimmigrant visas the Department issued in Fiscal Year 2022 were adjudicated without an in-person interview. We are successfully lowering visa wait times worldwide, following closures during the pandemic, and making every effort to further reduce those wait times as quickly as possible, including for first-time tourist visa applicants. Embassies and consulates may still require an in-person interview on a case-by-case basis and dependent upon local conditions.”8

Given that about half of all nonimmigrant visas were issued last year without an interview, discontinuing interview waivers following the end of the public health emergency would create undue strain on an already understaffed consular workforce and hamper global mobility just as academic, industrial, and government travel is returning to pre-pandemic levels. The workload previously handled through waivers, covering almost half of successful visa applications, would instead add pressure to a system that is poorly equipped to serve the growing post-pandemic demand.

Interview waivers do not jeopardize security

As the State Department explained in 2015, “interview waiver options do not represent a reduced scrutiny of applicants; rather, they are intended to enhance the security of the visa process by allowing State to focus more of its resources on potential threats.” 

First, the expanded use of interview waivers as a result of the pandemic applies only to low-risk applicants. The waivers are subject to important guardrails to safeguard security. They are not available to any applicant who: has previously been denied a visa; is listed in the Consular Lookout and Support System (CLASS); requires a Security Advisory Opinion or State Department clearance; is applying from a country of which they are not a national or resident; or is applying from a country designated a state sponsor of terrorism. Furthermore, applicants cannot be members of any group that poses a security threat, has historically had an above-average rate of visa denials, or poses a substantial risk of visa fraud. 

Second, applicants eligible for interview waivers remain subject to the background checks and all screening and vetting required for all nonimmigrants, including name checks and biometric screening. 

Third, the waivers are discretionary. Consular officers always have the option to interview an applicant if they doubt their credibility or have any other questions about their eligibility following standard screening procedures.

Interview waivers maximize the security afforded by DOS for a given level of processing capacity by allowing the department to deploy its resources where they are most needed.

Recommendations

Current interview waivers should be extended until at least 80% of nonimmigrant visa applicants in categories requiring USCIS petition approval or sponsor-issued Certificates of Eligibility can schedule an interview within three weeks. Existing waivers should not be lifted unless this benchmark for visa processing can be maintained. In 2012, the president established this benchmark as a target for DOS with regard to business and tourist visas. By 2015, the Department had successfully brought wait times down with the help of numerous policy changes, including the use of interview waivers. This benchmark provides a reasonable criterion for defining the unusual or emergent circumstances related to visa processing that justify waivers. 

Consular management controls should include required annual reporting by consular posts to the State Department’s Bureau of Consular Affairs on the use of interview waivers. Consular posts typically conduct a handful of validation studies each year for Visa Services leadership in Consular Affairs. Each consular post should be tasked with reporting: whether the post utilized interview waiver authorities and the reasoning for when the authorities were or were not employed; what efficiencies or hurdles were encountered; and how the targeted use of interview waivers at the individual post can mitigate risks by allowing consular officials to focus attention on country-specific conditions.

Congress should authorize expanded interview waivers beyond the emergent circumstances of reduced processing capacity and task DOS with piloting other policies that would institutionalize efficient visa processing. Waivers are justified under current authority by the unusual circumstance of reduced processing capacity but may be helpful even when processing capacity has rebounded. Congress can and should make clear that it intends the national interest authorities left to the Secretary of State to be used to keep processing times down. Congress can also help DOS pilot remote interviews for the lowest-risk applicants who remain ineligible for interview waivers or who are in countries where interview waivers are not an appropriate response to country conditions. Combining interview waivers with remote interviewing authority would allow the State Department to better choose how to deploy its resources while maintaining thorough screening and vetting through biometrics. Institutionalizing more certain and predictable timing on visa applications would help ensure the United States remains attractive to the international talent that is key to keeping the country competitive.

Conclusion

Despite reported improvements in pandemic conditions, visa backlogs continue to pose significant challenges at U.S. diplomatic missions around the world. The State Department should be allowed to broadly and flexibly use consular resources to collect and review screening and vetting results and complete all processing requirements without scheduling interviews. This will allow the Department to offer more timely options for qualified individuals seeking entry into the country. Maintaining interview waivers after the official expiration of the COVID-19 health crisis allows experts to focus on essential cases requiring more in-depth scrutiny, thus bolstering the security of our immigration system.

Lifting COVID-related restrictions does not mean that all embassies or consulates will immediately be able to manage pre-pandemic levels of visa applications. Adjusting staffing and infrastructure takes time, especially given the constraints of limited operational capacity. In these cases, visa interview waivers can help alleviate undue stress on embassy operations while providing flexibility to consular officers.

Creating a Fair Work Ombudsman to Bolster Protections for Gig Workers

Summary

To increase protections for fair work, the U.S. Department of Labor (DOL) should create an Office of the Ombudsman for Fair Work. Gig workers are a category of non-employee contract workers who engage in on-demand work, often through online platforms, and they have long been vulnerable in the U.S. economy. A large portion of gig workers are people of color, and the nature of their temporary and largely unregulated work can leave them vulnerable to economic instability and workplace abuse. Currently, there is no federal mechanism to protect gig workers, and state-level initiatives have not offered sufficiently thorough policy redress. Establishing an Office of the Ombudsman would provide the Department of Labor with a central entity to investigate worker complaints against gig employers, collect data and evidence about the current gig economy, and provide education to gig workers about their rights. There is strong precedent for this policy solution, since agencies across the federal government have successfully implemented independent ombudsmen that support vulnerable constituents. To ensure its legal and long-lasting status, the Secretary of Labor should establish this Office through an act of internal agency reorganization.

Challenge and Opportunity

The proportion of the U.S. workforce engaging in gig work has risen steadily in the past few decades, from 10.1% in 2005 to 15.8% in 2015 to roughly 20% in 2018. Since the COVID-19 pandemic began, this trend has only accelerated, and a record number of Americans have now joined the gig economy and rely on its income. In a 2021 Pew Research study, over 16% of Americans reported having made money through online platform work alone, such as on apps like Uber and DoorDash, which is merely a subset of gig work. Gig workers are also more likely to be Black or Latino than the overall workforce.

Though millions of Americans rely on gig work, it does not provide critical employee benefits, such as minimum wage guarantees, parental leave, healthcare, overtime, unemployment insurance, or recourse for injuries incurred during work. According to an NPR survey, in 2018 more than half of contract workers received zero benefits through work. Further, the National Labor Relations Act, which protects employees’ rights to unionize and collectively bargain without retaliation, does not protect gig workers. This lack of benefits, rights, and voice leaves millions of workers more vulnerable than full-time employees to predatory employers, financial instability, and health crises, particularly during emergencies such as the COVID-19 pandemic.

Additionally, in 2022, inflation reached its highest level in decades, and though the price of necessities has spiked, wages have not increased correspondingly. Extreme inflation hurts lower-income workers without savings the most and is especially dangerous to gig workers, some of whom make less than the federal minimum hourly wage and whose income and work are subject to constant flux.

State-level measures have so far failed to create protections for all gig workers. In 2019, California passed AB5, which took effect in 2020 and legally reclassified many gig workers as employees instead of independent contractors, thus entitling them to more benefits and protections. But further bills and Proposition 22 reverted several groups of gig workers, including online platform gig workers like Uber and DoorDash drivers, to being independent contractors. Ongoing litigation related to Proposition 22 leaves the future status of online platform gig workers in California unclear. In 2022, Washington State passed ESHB 2076, guaranteeing online platform workers—but not all gig workers—the benefits of full-time employees. 

This sparse patchwork of state-level measures, which only supports subgroups of gig workers, could trigger a “race to the bottom” in which employers of gig workers relocate to less strict states. Additionally, inconsistencies between state laws make it harder for gig workers to understand their rights and gain redress for grievances, harder for businesses to determine with certainty their duties and liabilities, and harder for states to enforce penalties when an employer is headquartered in one state and the gig worker lives in another. The status quo is also difficult for businesses that strive to be better employers because it creates downward pressure on the entire landscape of labor market competition. Ultimately, only federal policy action can fully address these inconsistencies and broadly increase protections and benefits for all gig workers. 

The federal ombudsman’s office outlined in this proposal can serve as a resource for gig workers to understand the scope of their current rights, provide a voice to amplify their grievances and harms, and collect data and evidence to inform policy proposals. It is the first step toward a sustainable and comprehensive national solution that expands the rights of gig workers.

Specifically, clarifying what rights, benefits, and means of recourse gig workers do and do not have would help gig workers better plan for healthcare and other emergent needs. It would also allow better tracking of trends in the labor market and systemic detection of employee misclassification. Hearing gig workers’ complaints in a centralized office can help the Department of Labor more expeditiously address gig workers’ concerns in situations where they do have legal recourse and can otherwise help the Department better understand the needs of and harms experienced by all workers. Collecting broad-ranging data on gig workers in particular could help inform federal policy change on their rights and protections. Currently, most datasets are survey-based and often leave out people who were not working a gig job at the time the survey was conducted but who typically do. More broadly, because of its informal and dynamic nature, the gig economy is difficult to accurately count and characterize, and an entity specifically charged with coordinating and understanding this growing sector of the market is key.

Lastly, employees who are not gig workers are sometimes misclassified as such and thus lose out on benefits and protections they are legally entitled to. Having a centralized ombudsman office dedicated to gig work could expedite support of gig workers seeking to correct their classification status, which the Wage and Hour Division already generally deals with, as well as help the Department of Labor and other agencies collect data to clarify the scope of the problem.

Plan of Action

The Department of Labor should establish an Office of the Ombudsman for Fair Work. This office should be independent of Department of Labor agencies and officials, and it should report directly to the Secretary of Labor. The Office would operate at the federal level, with jurisdiction across all states.

The Secretary of Labor should establish the Office in an act of internal agency reorganization. By establishing the Office such that its powers do not contradict the Department of Labor’s statutory limitations, the Secretary can ensure the Office’s status as legal and long-lasting, due to the discretionary power of the Department to interpret its statutes.

The role of the Office of the Ombudsman for Fair Work would be threefold: to serve as a centralized point of contact for hearing complaints from gig workers; to act as a central resource and conduct outreach to gig workers about their rights and protections; and to collect data such as demographic, wage, and benefit trends on the labor practices of the gig economy. Together, these responsibilities ensure that this Office consolidates and augments the actions of the Department of Labor as they pertain to workers in the gig economy, regardless of their classification status.

The functions of the ombudsman should be as follows:

  1. Establish a clear and centralized mechanism for hearing, collating, and investigating complaints from workers in the gig economy, such as through a helpline or mobile app.
  2. Establish and administer an independent, neutral, and confidential process to receive, investigate, resolve, and provide redress for cases in which employers misrepresent to individuals that they are engaged as independent contractors when they are actually engaged as employees.
  3. Commence court proceedings to enforce fair work practices and entitlements, as they pertain to workers in the gig economy, in conjunction with other offices in the DOL.
  4. Represent employees or contractors who are or may become a party to proceedings in court over unfair contracting practices, including but not limited to misclassification as independent contractors. The office would refer matters to interagency partners within the Department of Labor and across other organizations engaged in these proceedings, augmenting existing work where possible.
  5. Provide education, assistance, and advice to employees, employers, and organizations, including best practice guides to workplace relations or workplace practices and information about rights and protections for workers in the gig economy.
  6. Conduct outreach in multiple languages to gig economy workers informing them of their rights and protections and of the Office’s role to hear and address their complaints and entitlements.
  7. Serve as the central data collection and publication office for all gig-work-related data. The Office will publish a yearly report detailing demographic, wage, and benefit trends faced by gig workers. Data could be collected through outreach to gig workers or their employers, or through a new data-sharing agreement with the Internal Revenue Service (IRS). This data report would also summarize anonymized trends based on the complaints collected (as per function 1), including aggregate statistics on wage theft, reports of harassment or discrimination, and misclassification; a sketch of this kind of aggregation follows this list. These trends would also be broken down by demographic group to proactively identify salient inequities. The office may also provide separate data on platform workers, which may be easier to collect and collate, since platform workers are a particular subject of focus in current state legislation and litigation.
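As referenced in function 7, the following is a minimal sketch of the kind of anonymized aggregation the yearly report could publish. The complaint records, field names, and categories are hypothetical and shown only to illustrate how individual complaints could be summarized without exposing individual-level details.

```python
# A minimal sketch of anonymized complaint aggregation (function 7 above).
# Field names and categories are hypothetical; real data handling would follow
# the Office's privacy and confidentiality rules.
from collections import Counter
from typing import Iterable

def complaint_summary(complaints: Iterable[dict]) -> dict:
    """Aggregate complaint records into counts by issue type and by
    demographic group, without retaining any individual-level detail."""
    by_issue: Counter = Counter()
    by_issue_and_group: Counter = Counter()
    for c in complaints:
        issue = c.get("issue_type", "unspecified")         # e.g., "wage_theft", "misclassification"
        group = c.get("demographic_group", "undisclosed")  # self-reported, optional
        by_issue[issue] += 1
        by_issue_and_group[(issue, group)] += 1
    return {
        "total_complaints": sum(by_issue.values()),
        "by_issue": dict(by_issue),
        "by_issue_and_demographic": {f"{i} / {g}": n for (i, g), n in by_issue_and_group.items()},
    }

# Example with hypothetical records:
# complaint_summary([{"issue_type": "wage_theft", "demographic_group": "Latino"}])
```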

Establishing an Office of the Ombudsman for Fair Work within the Department of Labor will require funding for compensation of the ombudsman and staff, other operational costs, and litigation expenses. To keep pace with rapid ongoing changes in gig economy platforms, a small portion of the Office’s budget should be set aside to support the appointment of a chief innovation officer charged with examining how technology can strengthen the Office’s operations. Examples of tasks for this role include investigating and strengthening complaint-sorting infrastructure, utilizing artificial intelligence to evaluate contracts for misclassification, and streamlining request-for-proposal processes.

Due to the continued growth of the gig economy and the precarious status of gig workers at the onset of an economic recession, this Office should be established as soon as possible. Establishing the Office, appointing the ombudsman, and initiating operations will take up to a year and will require budgeting within the DOL.

There are many precedents for ombudsmen in the federal government, including the Office of the Ombudsman for the Energy Employees Occupational Illness Compensation Program within the Department of Labor. Additionally, the IRS established the Office of the Taxpayer Advocate, and the Department of Homeland Security has both a Citizenship and Immigration Services Ombudsman and an Immigration Detention Ombudsman. These offices have helped educate constituents about their rights, resolved issues that individuals have had with those federal agencies, and served as independent oversight bodies. The Australian Government has a Fair Work Ombudsman that provides resources to differentiate between independent contractors and employees and investigates employers who may be engaging in sham contracting or other illegal practices. Following these examples, the Office of the Ombudsman for Fair Work should work within the Department of Labor to educate, assist, and provide redress for workers engaged in the gig economy.

Conclusion

How to protect gig workers is a long-standing open question for labor policy and is likely to require more attention as post-pandemic conditions affect labor trends. The federal government needs a solution to the vulnerability and instability experienced by gig workers, and that solution needs to operate independently of legislation that may take longer to gain consensus. Establishing the office of an ombudsman is the first step toward increased federal oversight of gig work. The ombudsman will use data, reporting, and individual worker cases to build a clearer picture of how to provide redress for workers harmed by gig work and to give policymakers greater visibility into the status and concerns of gig workers. It will additionally serve as a single point of entry for gig workers and businesses to learn about their rights and for gig workers to lodge complaints. If made a reality, this office will be an influential first step in changing the entire policy ecosystem regarding gig work.

Frequently Asked Questions
Why would this be an effective way to handle the vulnerabilities gig workers face?

There is a current definitional debate about whether gig workers and platform workers are employees or contractors. Until this issue of misclassification can be resolved, there will likely not be a comprehensive state or federal policy governing gig work. However, the office of an ombudsman would be able to serve as the central point within the Department of Labor to handle gig worker issues, and it would be the entity tasked with collecting and publishing data about this class of laborers. This would help elevate the problems gig workers face as well as paint a picture of the extent of the issue for future legislation.

How long would the ombudsman’s tenure be?

Each ombudsman will be appointed for a six-year period, to ensure insulation from partisan politics.

Why should this be a federal and not state-level issue?

States often do not have adequate solutions to handle the discrepancies between employees and contractors. There is also the “race to the bottom” issue, where if protections are increased in one state, gig employers will simply relocate to states where the policies are less stringent. Further, there is the issue of gig companies being headquartered in one state while employees work in another. It makes sense for the Department of Labor to house a central, federal mechanism to handle gig work.

The tasks of ombudsmen are often broad in scope. How will the office of the Ombudsman for Fair Work ensure protections for gig workers?

The key challenge right now is for the federal government to collect data and solve issues regarding protections for gig work. The office of the ombudsman’s broadly defined mandate is actually an advantage in this still-developing conversation about gig work.

What are key timeline limitations for this proposal?

Establishing a new Department of Labor office is no small feat. It requires a clear definition of the ombudsman's goals and permitted activities, which in turn requires buy-in from key DOL officials. The office would also have to recruit, hire, and train staff. These tasks may slow the proposal's launch. Since DOL plans its budget several years in advance, this proposal would likely be targeted for the 2026 cycle.

Establishing an AI Center of Excellence to Address Maternal Health Disparities

Summary

Maternal mortality is a crisis in the United States. Yet more than 60% of maternal deaths are preventable with the right evidence-based interventions. Data is a powerful tool for uncovering best care practices. While healthcare data, including maternal health data, has been generated at a massive scale through the widespread adoption and use of Electronic Health Records (EHR), much of this data remains unstandardized and unanalyzed. Further, while many federal datasets related to maternal health are openly available through initiatives set forth in the Open Government National Action Plan, there is no central coordinating body charged with analyzing this breadth of data. Advancing data harmonization, research, and analysis is a foundational element of the Biden Administration's Blueprint for Addressing the Maternal Health Crisis. As a data-driven technology, artificial intelligence (AI) has great potential to support maternal health research efforts. Promising applications of AI include using electronic health data to predict whether expectant mothers are at risk of complications during delivery. However, further research is needed to understand how to implement this technology effectively in a way that promotes transparency, safety, and equity. The Biden-Harris Administration should establish an AI Center of Excellence to bring together data sources and then analyze, diagnose, and address maternal health disparities, all while demonstrating trustworthy and responsible AI principles.

Challenge and Opportunity

Maternal deaths in the United States currently average around 700 per year, and severe maternal morbidity-related conditions impact upward of 60,000 women annually. Stark racial and ethnic disparities persist in pregnancy outcomes, including maternal morbidity and mortality. According to the Centers for Disease Control and Prevention (CDC), "Black women are three times more likely to die from a pregnancy-related cause than White women." Research is ongoing to identify the root causes, which include socioeconomic factors such as insurance status, access to healthcare services, and risks associated with social determinants of health. For example, maternity care deserts exist in counties throughout the country where maternal health services are substantially limited or unavailable, impacting an estimated 2.2 million women of child-bearing age.

Many federal, public, and private datasets exist to understand the conditions that impact pregnant people, the quality of the care they receive, and ultimate care outcomes. For example, the CDC collects abundant data on maternal health, including the Pregnancy Mortality Surveillance System (PMSS) and the National Vital Statistics System (NVSS). Many of these datasets, however, have yet to be analyzed at scale or linked to other federal or privately held data sources in a comprehensive way. More broadly, an estimated 30% of the data generated globally is produced by the healthcare industry. AI is uniquely designed for data management, including cataloging, classification, and data integration. AI will play a pivotal role in the federal government’s ability to process an unprecedented volume of data to generate evidence-based recommendations to improve maternal health outcomes. 

Applications of AI have rapidly proliferated throughout the healthcare sector due to their potential to reduce healthcare expenditures and improve patient outcomes (Figure 1). Several applications of this technology exist across the maternal health continuum and are shown in the figure below. For example, evidence suggests that AI can help clinicians identify more than 70% of at-risk moms during the first trimester by analyzing patient data and identifying patterns associated with poor health outcomes. Based on its findings, AI can provide recommendations for which patients will most likely be at risk for pregnancy challenges before they occur. Research has also demonstrated the use of AI in fetal health monitoring.

Figure 1: Areas Where Artificial Intelligence and Machine Learning Is Used for Women’s Reproductive Health
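To make the prediction step above concrete, the sketch below shows one common approach: a logistic regression trained on tabular patient features to flag higher-risk pregnancies. It is a minimal illustration on synthetic data with assumed feature names; it is not the model referenced in the evidence cited above and would require clinical validation before any real-world use.

  # Minimal sketch of risk prediction from tabular EHR-style features (synthetic data).
  # Feature names are illustrative assumptions, not a validated clinical model.
  import numpy as np
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import train_test_split
  from sklearn.metrics import roc_auc_score

  rng = np.random.default_rng(0)
  n = 5000
  X = np.column_stack([
      rng.normal(29, 6, n),     # maternal age
      rng.normal(120, 15, n),   # systolic blood pressure
      rng.integers(0, 2, n),    # prior preeclampsia (0/1)
      rng.integers(0, 2, n),    # limited prenatal care access (0/1)
  ])
  # Synthetic outcome: higher risk with hypertension and prior preeclampsia.
  logit = -6 + 0.03 * X[:, 1] + 1.2 * X[:, 2] + 0.8 * X[:, 3]
  y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

  X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
  model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
  print("held-out AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))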

Yet for all of AI's potential, there is a significant dearth of consumer and medical provider understanding of how these algorithms work. Policy analysts argue that "algorithmic discrimination" and feedback loops in algorithms, which may exacerbate algorithmic bias, are potential risks of using AI in healthcare outside the confines of an ethical framework. In response, certain federal entities such as the Department of Defense, the Office of the Director of National Intelligence, the National Institute of Standards and Technology, and the U.S. Department of Health and Human Services have published and adopted guidelines for implementing data privacy practices and building public trust in AI. Further, past Day One authors have proposed the establishment of testbeds for government-procured AI models that provide services to U.S. citizens. This is critical for enhancing the safety and reliability of AI systems while reducing the risk of perpetuating existing structural inequities.

It is vital to demonstrate safe, trustworthy uses of AI and measure the efficacy of these best practices through applications of AI to real-world societal challenges. For example, one potential use case of AI for maternal health is a social determinants of health (SDoH) extractor, which applies AI to clinical notes to more effectively identify SDoH information and analyze its potential role in health inequities. A center dedicated to ethically developing AI for maternal health would allow for the development of evidence-based guidelines for broader AI implementation across healthcare systems throughout the country. Lessons learned from this effort will contribute to the knowledge base around ethical AI and enable development of AI solutions for health disparities more broadly.

Plan of Action

To meet the calls for advancing data collection, standardization, transparency, research, and analysis to address the maternal health crisis, the Biden-Harris Administration should establish an AI Center of Excellence for Maternal Health. The Center will bring together data sources and then analyze, diagnose, and address maternal health disparities, all while demonstrating trustworthy and responsible AI principles. The Center should be created within the Department of Health and Human Services (HHS) and work closely with relevant offices throughout HHS and beyond, including the HHS Office of the Chief Artificial Intelligence Officer (OCAIO), the National Institutes of Health (NIH) IMPROVE initiative, the CDC, the Veterans Health Administration (VHA), and the National Institute of Standards and Technology (NIST). The Center should offer competitive salaries to recruit the best and brightest talent in AI, human-centered design, biostatistics, and human-computer interaction.

The first priority should be to work with all agencies tasked by the White House Blueprint for Addressing the Maternal Health Crisis to collect and evaluate data. This includes privately held EHR data made available through the Qualified Health Information Network (QHIN) and federal data from the CDC, the Centers for Medicare & Medicaid Services (CMS), the Office of Personnel Management (OPM), the Health Resources and Services Administration (HRSA), NIH, the United States Department of Agriculture (USDA), the Department of Housing and Urban Development (HUD), the Veterans Health Administration, and the Environmental Protection Agency (EPA), all of which hold datasets relevant to maternal health at different stages of the reproductive health journey shown in Figure 1. The Center should serve as a data clearing and cleaning shop, preparing these datasets using best practices for data management, preparation, and labeling.
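As a concrete illustration of the data "clearing and cleaning" role, the sketch below harmonizes two hypothetical county-level extracts onto shared column names and a common geographic key before linkage. The file names, columns, and use of county FIPS codes as the join key are assumptions for illustration, not the actual schemas of the agency datasets listed above.

  # Illustrative sketch: harmonize two hypothetical county-level extracts
  # onto shared column names and a common FIPS geographic key.
  import pandas as pd

  cdc = pd.read_csv("cdc_extract.csv")    # hypothetical export
  hrsa = pd.read_csv("hrsa_extract.csv")  # hypothetical export

  # Map source-specific column names onto a shared schema.
  cdc = cdc.rename(columns={"cnty_fips": "fips", "mmr_per_100k": "maternal_mortality_rate"})
  hrsa = hrsa.rename(columns={"FIPS_CODE": "fips", "ob_providers": "obstetric_providers"})

  # Normalize the join key and merge into one analysis-ready panel.
  for df in (cdc, hrsa):
      df["fips"] = df["fips"].astype(str).str.zfill(5)

  linked = cdc.merge(hrsa, on="fips", how="inner", validate="one_to_one")
  linked.to_csv("harmonized_county_panel.csv", index=False)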

The second priority should be to evaluate existing datasets to establish high-priority, high-impact applications of AI-enabled research for improving clinical care guidelines and tools for maternal healthcare providers. These AI demonstrations should be aligned with the White House Blueprint and focused on implementing best practices for AI development, such as the AI Risk Management Framework developed by NIST. The following examples demonstrate how AI might help address maternal health disparities, based on priority areas informed by clinicians in the field:

  1. AI implementation should be explored for analysis of electronic health records from the VHA and QHIN to predict patients who have a higher risk of pregnancy and/or delivery complications. 
  2. Drawing on the robust data collection and patient surveillance capabilities of the VHA and HRSA, AI should be explored for the deployment of digital tools to help monitor patients during pregnancy to ensure adequate and consistent use of prenatal care.  
  3. Using VHA data and QHIN data, AI should be explored in supporting patient monitoring in instances of patient referrals and/or transfers to hospitals that are appropriately equipped to serve high-risk patients, following guidelines provided by the American College of Obstetricians and Gynecologists.
  4. Data on housing from HUD, rural development from the USDA, environmental health from the EPA, and social determinants of health research from the CDC should be connected to risk factors for maternal mortality in the academic literature to create an AI-powered risk algorithm.
  5. AI should be explored for evaluating how payment models operated by CMS and OPM could support novel strategies to enhance maternal health outcomes and reduce maternal deaths.

The final priority should be direct translation of the findings from AI to federal policymaking around reducing maternal health disparities as well as ethical development of AI tools. Research findings for both aspects of this interdisciplinary initiative should be framed using Living Evidence models that help ensure that research-derived evidence and guidance remain current.

The Center should be able to meet the following objectives within the first year after creation to further the case for future federal funding and creation of more AI Centers of Excellence for healthcare:

  1. Conduct a study on the use cases uncovered for AI to help address maternal health disparities explored through the various demonstration projects.
  2. Publish a report of study findings, which should be submitted to Congress with recommendations to help inform funding priorities for subsequent research activities.
  3. Make study findings available to the public to help build public trust in AI.

Successful piloting of the Center could be supported by passage of a bill equivalent to S.893 in the current Congress. In March 2021, S.893, the Tech to Save Moms Act, was introduced in the Senate to fund research by the National Academies of Sciences, Engineering, and Medicine on the role of AI in maternal care delivery and its impact on bias in maternal health. Passage of an equivalent bill into law would be a critical first step in supporting this work, enabling the National Academies to conduct research in parallel with HHS, generate more findings, and broaden potential impact.

Conclusion

The United States has the highest maternal mortality rate among developed countries. Yet more than 60% of pregnancy-related deaths are preventable, highlighting a critical opportunity to uncover the factors impeding more equitable health outcomes for the nation as a whole. Legislative support for research to understand AI's role in addressing maternal health disparities will affirm the nation's commitment to ensuring that we are prepared to thrive in a 21st century influenced and shaped by next-generation technologies such as artificial intelligence.

Transforming On-Demand Medical Oxygen Infrastructure to Improve Access and Mortality Rates

Summary

Despite the World Health Organization's (WHO) designation of medical oxygen as an essential medicine in 2017, oxygen is still not consistently available in all care settings. Shortages of medical oxygen, which is essential for surgery and for treating pneumonia, trauma, and other conditions that cause hypoxia in vulnerable populations, existed prior to the COVID-19 pandemic and persist today. By one estimate, pre-pandemic, only 20% of patients in low- and middle-income countries (LMICs) who needed medical oxygen received it. The pandemic tremendously increased the need for oxygen, further compounding access issues as oxygen became an indispensable treatment. During the peak of the pandemic, dozens of countries faced severe oxygen shortages due to patient surges impacting an already fragile infrastructure.

The core driver of this challenge is not a lack of funding or international attention but rather the lack of infrastructure that would allow countries to buy oxygen itself, not just equipment. Despite organizations such as Unitaid, the Bill & Melinda Gates Foundation, the Clinton Health Access Initiative, UNICEF, WHO, and the U.S. Agency for International Development (USAID) prioritizing funding and provision of medical oxygen, many countries still face critical shortages. Few LMICs, Brazil among them, are truly oxygen self-sufficient. The broken and inequitable global oxygen delivery infrastructure inadvertently excludes low-income and rural communities from the design phase. Furthermore, the current delivery infrastructure is composed of many individual funders and private and public stakeholders who do not work in a coordinated fashion because there is no global governing body to establish global policy, standards, and oversight; identify waste and redundancy; and ensure paths to self-sufficiency. As a result, LMICs are at the mercy of other nations and entities who may withhold oxygen during a crisis or fail to adequately distribute supply. It is time for aid organizations and governments to become more efficient and effective at solving this systemic problem by establishing global governance and by investing in and enabling LMICs to become self-sufficient through national infrastructure for oxygen generation, distribution, and delivery.

We propose transforming current interventions by centering the concept known as Oxygen as a Utility (OaaU), which fundamentally reimagines a country's infrastructure for medical oxygen as a public utility supported by private investment and stable prices to create a functional, equitable market for a necessary public health good. With the White House COVID-19 Response Team shutting down in the coming months, USAID's Bureau for Global Health has a unique opportunity to take a global leadership role in spearheading the development of an accessible, affordable oxygen marketplace. USAID should convene a global public-private partnership and governing coalition called the Universal Oxygen Coalition (UOC), pilot the OaaU model in at least two target LMIC regions (Tanzania and Uttar Pradesh, India), and launch a Medical Oxygen Grand Challenge to enable necessary technological and infrastructure innovation.

Challenge and Opportunity

There is no medical substitute for oxygen, which is used to treat a wide range of conditions: acute respiratory distress syndromes, pneumonia and pneumothorax in newborns, noncommunicable diseases such as asthma and heart failure, and COVID-19. Pneumonia alone is the world's biggest infectious killer of adults and children, claiming the lives of 2.5 million people, including 740,180 children, in 2019. The COVID-19 pandemic compounded demand for oxygen and exposed its scarcity, increasing death tolls in countries around the world.

For every COVID-19 patient who needs oxygen, there are at least five other patients who also need it, including the 7.2 million children with pneumonia who enter LMIC hospitals each year (Ehsanur et al., 2021). Where oxygen is available, distribution networks are often poorly balanced, with high-density areas overstocked while rural areas and tertiary care settings go underserved. Only 10% of hospitals in LMICs have access to pulse oximetry and oxygen therapy, and those better-resourced hospitals tend to be in larger cities closer to existing oxygen delivery providers.

This widespread lack of access to medical oxygen in LMICs threatens health outcomes and well-being, particularly for rural and low-income populations. The primary obstacle to equitable oxygen access is lack of the necessary digital infrastructure in-country. Digital infrastructure provides insights that enable health system managers and policymakers to effectively establish policy, manage the supply of oxygen to meet needs, and coordinate work across a complex supply chain composed of various independent providers. Until replicable and affordable digital infrastructure is established, LMICs will not have the necessary resources to manage a national oxygen delivery system, forecast demand, plan for adequate oxygen production and procurement, safeguard fair distribution, and ensure sustainable consumption.

Oxygen can be delivered in a number of forms—via concentrators, cylinders, plants, or liquid—and the global marketplace encompasses many manufacturers and distributors selling in multiple nations. Most oxygen providers are for-profit organizations, which are not commercially incentivized to collaborate to achieve equal oxygen access, despite good intentions. Many of these same manufacturers also sell medical devices to regulate or deliver oxygen to patients, yet maintaining the equipment across a distributed network remains a challenge. These devices are complex and costly, and there are often few trained experts in-country to repair broken devices. Instead of recycling or repairing devices, healthcare providers are often forced to discard broken equipment and purchase new ones, contributing to greater landfill waste and compounding health concerns for those who live nearby.

Common contributing causes for fragmented oxygen delivery systems in LMICs include:

  1. No national digital infrastructure to connect, track, and monitor medical oxygen supply and utilization, akin to the way an electrical utility forecasts demand and ensures reliable service delivery.
  2. No centralized way to monitor manufacturers, distributors, and the various delivery providers to ensure coordination and compliance with local policy.
  3. In many cases, no established local policy for oxygen and healthcare regulation or no means to enforce local policy.
  4. Lack of purchasing options for healthcare providers, who are often forced to buy whichever oxygen devices are available rather than the type of source oxygen that best fits their needs (e.g., concentrator or liquid), due to cumbersome tender systems and lack of coordination across markets.
  5. Lack of trained experts to maintain and repair devices, including limited national standardized certification programs, resulting in the premature disposal of costly medical devices and contributing to waste issues. Further, lack of maintenance fuels a vicious cycle in which LMICs require recurring funding to buy oxygen devices, increasing reliance on third parties to sustain oxygen needs rather than building domestic demand and marketplaces.

Medical oxygen investment is a unique opportunity to achieve global health outcomes and localization policy objectives. USAID invested $50 million to expand medical oxygen access through its global COVID-19 response for LMIC partners, but this investment only scratches the surface of what is needed to achieve self-sufficiency. In response to oxygen shortages during the peaks of the pandemic, the WHO, UNICEF, the World Bank, and other donors shipped hundreds of thousands of oxygen concentrators to help LMICs deal with the rise in oxygen needs. This influx of resources addressed the interim need but did not solve the persisting healthcare system and underlying oxygen infrastructure problems. In 2021, the World Bank made emergency loans available to LMICs to help them shore up production and infrastructure capabilities, but not enough countries applied for these loans, as the barriers are complex, difficult to diagnose without proper data and digital infrastructure to reveal supply chain gaps, and hard to solve with a single cash loan.

Despite heavy attention to the issue of oxygen access in LMICs, current spending does not go far enough to set up sustainable oxygen systems in LMICs. Major access and equity gaps still persist. In short, providing funding alone without a cohesive, integrated industrial strategy cannot solve the root problem of medical oxygen inequality. 

USAID recently announced an expanded commitment in Africa and Asia to expand medical oxygen access, including market-shaping activities and partnerships. Since the pandemic began, USAID has directed $112 million in funding for medical oxygen to 50 countries and is the largest donor to The Global Fund, which has provided the largest international sums of money (more than $600 million) to increase medical oxygen access in over 80 countries. In response to the pandemic’s impacts on LMICs, the ACT-Accelerator (ACT-A) Oxygen Emergency Taskforce, co-chaired by Unitaid and the Wellcome Trust, has provided $700 million worth of oxygen supplies to over 75 countries and catalyzed large oxygen suppliers and NGO leaders to support LMICs and national healthcare ministries. This task force has brought together industry, philanthropy, NGO, and academic leaders. While USAID is not a direct partner, The Global Fund is a primary donor to the task force.

Without a sea change in policy, however, LMICs will continue to lack the support required to fully diagnose national oxygen delivery system bottlenecks and barriers, establish national regulation policies, deploy digital infrastructure, change procurement approaches, enable necessary governance changes, and train in-country experts to ensure a sustained, equitable oxygen supply chain. To help LMICs become self-sufficient, we need to shift away from a piecemeal approach (donating money and oxygen supplies) to a holistic approach that includes access to a group of experts, funding for oxygen digital infrastructure systems, aid to develop national policy and governance mechanisms, and support for establishing specialty training and certification programs so that LMICs can self-manage their own medical oxygen supply chains. Such a development policy initiative relies on the Oxygen as a Utility framework, which focuses on creating a functional, equitable market for medical oxygen as a necessary public good. When achieved successfully, OaaU facilitates one fair rate for end-to-end distribution within a country, like other public utilities such as water and electricity.

A fully realized OaaU model within a national economy would integrate and streamline most aspects of oxygen delivery, from production to distribution of both the oxygen and the devices that dispense it, to training of staff on when to administer oxygen, how to use equipment, and how to maintain it. This proposed new model coordinates industry partners, funders, and country leaders to focus on end-to-end medical oxygen delivery as an affordable, accessible utility rather than an in-kind development good. OaaU centers predictability, affordability, and efficiency for each stakeholder involved in creating sustainable LMIC medical oxygen supply chains. At its core, OaaU is about increasing both access and reliability by providing all types of oxygen at negotiated, market-wide, affordable, and predictable prices through industry partners and local players. This new business model would be sustained by subscription and pay-per-use fees that service the investment made by private-sector providers, each negotiated by Ministries of Health so that they are empowered to manage their own country's oxygen needs. The model would incorporate each stakeholder in an LMIC's healthcare system and facilitate an open, market-based negotiation to achieve affordable, self-sufficient medical oxygen supply chains.

Initial investment is needed to create permanent oxygen infrastructure in each LMIC and to digitally transform the tender system from an equipment, service, or in-kind aid model to an oxygen-as-a-utility purchasing model. An industry business model transformation of this scale will require a multistakeholder effort, including in-country coordination, because the current oxygen delivery infrastructure is composed of many individual funders and private and public stakeholders who do not work in a coordinated fashion. At this critical juncture for medical oxygen provision, USAID's convening power, donor support, and expertise should be leveraged to better direct this spending and create innovative opportunities. The Universal Oxygen Coalition would establish global policy, standards, and oversight; identify waste and redundancy; and ensure viable paths to oxygen self-sufficiency in LMICs. The UOC will act similarly to electric cooperatives, which aggregate supplies to meet electricity demand, ensuring every patient has access to oxygen, on demand, at the point of care, no matter where in the world they live.

Plan of Action

To steward and catalyze OaaU, USAID should leverage its global platform to convene funders, suppliers, manufacturers, distributors, health systems, financial partners, philanthropies, and NGOs and launch a call to action to mobilize resources and bring attention to medical oxygen inequality. USAID's Bureau for Global Health, along with its Private Sector Engagement Points of Contact and the State Department's Office of Global Partnerships, should spearhead the UOC. Using USAID's Private Sector Engagement Strategy and EDGE Fund as a model, USAID can serve as a connector, catalyzer, and lead implementer in reforming the global medical oxygen marketplace. The Bureau for Global Health should organize the initial summit, calls to action, and burgeoning UOC coalition because of its expertise and connections in the field. We anticipate that the UOC would require staff time and resources, which could be funded by a combination of private and philanthropic funding from UOC members in addition to some USAID resources.

To achieve the UOC vision, multiple sources of funding could be leveraged in addition to congressional appropriations. In 2022, State Department and USAID funding for global health programs through the Global Health Programs (GHP) account, which represents the bulk of global health assistance, totaled $9.8 billion, an increase of $634 million above the FY21 enacted level. In combination with USAID's leading investments in The Global Fund, USAID could deploy existing authorities and funding from Development Innovation Ventures (DIV), leverage Grand Challenge models like Saving Lives at Birth to create innovation incentive awards already authorized by Congress, or draw on the newly announced EDGE Fund focused on flexible public-private partnerships to direct resources toward achieving equitable oxygen access for all. These transformative investments would also serve established USAID policy priorities like localization. The UOC would work with USAID and the Every Breath Counts Initiative to reimagine this persistent problem by bringing essential players, including health systems, oxygen suppliers, manufacturers and distributors, and financial partners, into a unified holistic approach to ensure reliable oxygen provision and sustainable infrastructure support.

Recommendation 1. USAID's Bureau for Global Health should convene the Universal Oxygen Coalition Summit to issue an OaaU co-financing call to action and establish a global governing body.

The Bureau for Global Health should organize the summit, convene the UOC coalition, and issue calls to action to fund country pilots of OaaU. The UOC coalition should bring together LMIC governments; local, regional, and global private-sector medical oxygen providers; local service and maintenance companies; equipment manufacturers and distributors; health systems; private and development finance; philanthropy organizations; the global health NGO community; Ministries of Health; and in-country faith-based organizations.

Once fully established, the UOC would invite industry coalition members to join to ensure equal and fair representation across the medical oxygen delivery care continuum. Potential industry members include Air Liquide, Linde, Philips, CHART, Praxair, Gulf Cryo, Air Products, International Futures, AFROX, SAROS, and GCE. Public and multilateral institutions should include the World Bank, World Health Organization, UNICEF, USAID country missions and leaders from the Bureau for Global Health, and selected country Ministries of Health. Funders such as Rockefeller Foundation, Unitaid, Bill & Melinda Gates Foundation, Clinton Health Access Initiative, and Wellcome Trust, as well as leading social enterprises and experts in the oxygen field such as Hewatele and PATH, should also be included.

UOC members would engage and interact with USAID through its Private Sector Engagement Points of Contact, which are within each regional and technical bureau. USAID should designate at least two points of contact from a regional and technical bureau, respectively, to lead engagement with UOC members and country-level partners. While dedicated funds to support the UOC and its management would be required in the long term either from Congress or private finance, USAID may be able to deploy staff from existing budgets to support the initial stand-up process of the coalition.

Progress and commitments already exist to launch the UOC: Rockefeller Philanthropy Advisors plans to provide fiscal sponsorship as well as strategy and planning support for the formation of the global coalition, with PATH providing additional strategic and technical functions for partners. Through its fiscal sponsor, the UOC would act as the global governing body: establishing global policy, standards, oversight controls, and funding coordination; identifying waste and redundancy; setting priorities; and acting as advisor and intermediary when needed to ensure that LMIC paths to self-sufficiency are available. The UOC would oversee and manage country selection, fundraising, and coordination with local Ministries of Health, funders, and private-sector providers.

Other responsibilities of the UOC may include: 

The first UOC Summit will issue a call to action to make new, significant commitments from development banks, philanthropies, and aid agencies to co-finance OaaU pilot programs, build buy-in within target LMICs, and engage in market-shaping activities and infrastructure investments in the medical oxygen supply chain. The Summit could occur on the sidelines of the Global COVID-19 Summit or the United Nations General Assembly. Summit activities and outcomes should include:

Recommendation 2. The UOC should establish country prioritization based on need and readiness and direct raised funds toward pilot programs.

USAID should co-finance an OaaU pilot model through investments in domestic supply chain streamlining and leverage matched funds from development banks, private investors, and philanthropies. This fund should be used to invest in the development of a holistic oxygen ecosystem, starting in Tanzania and in Uttar Pradesh, India, so that these regions are prepared to deliver a reliable oxygen supply, catalyzing broad demand, business activity, and economic development.

The objective is to deliver a replicable global reference model for streamlining the supply chain and logistics, eventually leading to equitable oxygen access that meets healthcare needs, can be rolled out in other LMICs, and improves lives for underserved populations. The above sites are prioritized based on their readiness and need as determined by the 2020 PATH Market Research Study supported by the Bill & Melinda Gates Foundation. We estimate that $495 million for the pilots in both regions would provide oxygen for 270 million people, which equates to less than $2 per person. The UOC should:

This effort will result in a sustainable oxygen grid in LMICs that produces revenue via a subscription and pay-per-use model, reducing the need for annual procurement investment by aid organizations and donors. To create the conditions for OaaU, the UOC will need to make a one-time investment in infrastructure that can provide the volume of oxygen a country needs to become oxygen self-sufficient. This investment should be backed by the World Bank via volume usage guarantees, similar to those used for electricity in each country. The result will shift the paradigm from buying equipment to buying oxygen.

Recommendation 3. The UOC and partner agencies should launch the Oxygen Access Grand Challenge to invest in innovations to reduce costs, improve maintenance, and enhance supply chain competition in target countries.

We envision the creation of a replicable solution for a self-sustaining infrastructure that can then serve as a global reference model for how best to streamline the oxygen supply chain through improved infrastructure, digital transformation, and logistics coordination. Open innovation would be well-suited to priming this potential market for digital and infrastructure tools that do not yet exist. UOC should aim to catalyze a more inclusive, dynamic, and sustainable oxygen ecosystem of public- and private-sector stakeholders.

The Grand Challenge platform could leverage philanthropic and private-sector resources and investment. However, we also recommend that USAID deploy some capital ($20 million over four years) for a prize purse focused on outcomes-based technologies that could be deployed in LMICs and on new ideas from a diverse global pool of applicants. We recommend the Challenge focus on the creation of digital public goods that will form the digital "command and control" backbone of an in-country OaaU. This would allow a country's government and healthcare system to know the status of the oxygen supply across the national grid, see which clinic used how much oxygen in real time, and bill accordingly. Such tools do not yet exist at affordable, accessible levels in LMICs. However, USAID and its UOC partners should scope and validate the Challenge's core criteria and problems, as they may differ depending on the target countries selected.
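To illustrate what such a "command and control" backbone might record, the sketch below defines a minimal data model for clinic-level oxygen telemetry and computes usage-based billing from metered readings. The record fields, tariff, and billing rule are assumptions chosen only to show the idea, not an existing standard or product.

  # Illustrative sketch: minimal data model for clinic-level oxygen telemetry
  # and usage-based billing. Field names and the tariff are assumptions only.
  from dataclasses import dataclass
  from collections import defaultdict

  @dataclass
  class OxygenReading:
      clinic_id: str
      timestamp: str                # ISO 8601
      liters_dispensed: float
      cylinder_pressure_bar: float  # useful for maintenance alerts

  TARIFF_USD_PER_LITER = 0.002  # assumed negotiated utility rate

  def monthly_bill(readings: list[OxygenReading]) -> dict[str, float]:
      usage = defaultdict(float)
      for r in readings:
          usage[r.clinic_id] += r.liters_dispensed
      return {clinic: round(liters * TARIFF_USD_PER_LITER, 2)
              for clinic, liters in usage.items()}

  readings = [
      OxygenReading("clinic-001", "2024-03-01T08:00:00Z", 12000, 140.0),
      OxygenReading("clinic-002", "2024-03-01T09:30:00Z", 4500, 95.0),
      OxygenReading("clinic-001", "2024-03-02T08:00:00Z", 9800, 120.0),
  ]
  print(monthly_bill(readings))  # {'clinic-001': 43.6, 'clinic-002': 9.0}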

Activities to support the Challenge should include:

Conclusion

USAID can play a catalytic role in spearheading the creation and sustainment of medical oxygen systems through a public utility model. Investing in new digital tools for aggregating supply and demand and for real-time command and control can radically improve on-demand access to medical oxygen in LMICs, unlocking better health outcomes and improving health system performance. By piloting the OaaU model, USAID can prove the sustainability and scalability of a solution that can serve as a global reference model for streamlining the medical oxygen supply chain and logistics. USAID and its partners can begin to create sustained change and truly equitable oxygen access. Through enhancing existing public-private partnerships, USAID can also cement a resilient medical oxygen system better prepared for the next pandemic and better equipped to deliver improved health outcomes.

References

  1. UNICEF. Pneumonia in Children Statistics. UNICEF DATA.
  2. Usher, A. D. (2021). Medical oxygen crisis: a belated COVID-19 response. The Lancet, World Report.
  3. Lam, F., Stegmuller, A., Chou, V. B., & Graham, H. R. (2021). Oxygen systems strengthening as an intervention to prevent childhood deaths due to pneumonia in low-resource settings: systematic review, meta-analysis and cost-effectiveness. BMJ Global Health.
  4. Usher, A. D. (2021). Medical oxygen crisis: a belated COVID-19 response. The Lancet Global Health.
  5. Nair, H., Simoes, E. A., Rudan, I., Gessner, B. D., Azziz-Baumgartner, E., Zhang, J. S., et al. (2013). Global and regional burden of hospital admissions for severe acute lower respiratory infections in young children in 2010: a systematic analysis. Lancet, 381, 1380–90. 10.1016/S0140-6736(12)61901-1
  6. Liu, L., Johnson, H. L., Cousens, S., Perin, J., Scott, S., Lawn, J. E., et al. (2012). Global, regional, and national causes of child mortality: an updated systematic analysis for 2010 with time trends since 2000. Lancet, 379, 2151–61. 10.1016/S0140-6736(12)60560-1
  7. UNEP. (2018). Africa Waste Management Outlook.
  8. Duke, T., Graham, S. M., Cherian, N. N., Ginsburg, A. S., English, M., Howie, S., Peel, D., Enarson, P. M., Wilson, I. H., & Were, W., for the Union Oxygen Systems Working Group. (2010). Oxygen is an essential medicine: a call for international action.
  9. Unitaid press release. (2021). COVID-19 emergency impacting more than half a million people in low- and middle-income countries every day, as demand surges.

Frequently Asked Questions
How does the Oxygen as a Utility (OaaU) model increase oxygen access?

The OaaU approach integrates and streamlines most aspects of oxygen delivery, just as integrated power grids grew into public utilities through government investment and public-private partnerships built on the technology needed to manage them. With an OaaU approach, investments would be made in designing and building a digital oxygen grid, interoperable connectivity across markets, staff training, demand forecasting, and development of a long-term sustainability plan. Through this model, an increased number of oxygen suppliers would compete through auctions designed to drive down cost. Governments would receive a lower fixed price in exchange for a firm commitment to purchase a pre-established amount of oxygen, services, and equipment over a long time horizon. Financial partners would guarantee the value of these commitments to reduce the risk that countries default on their payments, encouraging the increased competition that turns the wheels of this new mechanism. Providing a higher-quality, lower-cost means of obtaining medical oxygen would be a relief for LMICs. Additionally, we anticipate that governments would play a greater role in regulation and oversight, providing price stability, affordability, and adequate supply, much as electricity markets are regulated.
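The pricing mechanic described above can be sketched as a simple reverse auction combined with a take-or-pay volume commitment. The sketch below uses invented bids and volumes purely to show how a winning bid translates into a predictable annual payment; it is not a proposed auction design.

  # Illustrative sketch: reverse auction for an annual oxygen supply commitment.
  # Suppliers bid a price per liter; the government commits to a fixed volume
  # (take-or-pay), so the winning bid sets a predictable annual payment.
  # All figures are invented for illustration.

  bids_usd_per_liter = {"SupplierA": 0.0031, "SupplierB": 0.0026, "SupplierC": 0.0029}
  committed_liters_per_year = 500_000_000  # pre-established national volume

  winner, price = min(bids_usd_per_liter.items(), key=lambda kv: kv[1])
  annual_payment = price * committed_liters_per_year

  print(winner, f"${annual_payment:,.0f} per year at ${price}/liter")
  # SupplierB wins; the fixed commitment is 0.0026 * 500,000,000 = $1,300,000 per year.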

What are the barriers to solving oxygen infrastructure issues?

First, oxygen is a complex product that can be generated by concentrators, cylinders, plants, and in liquid oxygen form. For a country to become oxygen self-sufficient, it needs all types of oxygen, and each country has its own unique combination of needs based on healthcare systems, population needs, and existing physical infrastructure. If a country has an excellent transportation system, then delivery of oxygen is the better choice. But if a country has a more rural population and no major highways, then delivery is not a feasible solution.


The oxygen market is competitive and consists of many manufacturers, each of which brings added variation to the way oxygen is delivered. While WHO and UNICEF published minimum technical specifications and guidance for oxygen therapy devices in 2019, there remains variation in how these devices are delivered and in the type of data produced in the process. Additionally, oxygen delivery requires an entire system to ensure it safely reaches patients. In most cases, these systems are decentralized and independently run, which further contributes to service and performance variation. Because of these layers of complexity, access to oxygen involves multiple challenges in availability, quality, affordability, management, supply, human resources capacity, and safety. National oversight through a digital oxygen utility infrastructure requiring the coordination and participation of the various oxygen delivery stakeholders would address oxygen access issues and enable country self-sufficiency.

Why should agencies, development banks, and other donors invest in OaaU?

Given that oxygen provides a return of US$50 per disability-adjusted life year, medical oxygen investment is a meaningful opportunity for development banks, foreign assistance agencies, and impact investors. The OaaU business model transformation will be a major step toward oxygen availability in the form of oxygen on demand in LMICs. Reliable, affordable medical oxygen can strengthen healthcare infrastructure and improve health outcomes. Recent estimates indicate that every year about 120–156 million cases of acute lower respiratory infections occur globally in children under five, with approximately 1.4 million resulting in death. More than 95% of these deaths occur in low- and middle-income countries (Nair, 2013; Liu, 2012).

How is OaaU different from the status quo?

Unlike prior approaches, OaaU is a business model transformation from partial solutions to integrated solutions spanning all types of oxygen, just as the electricity sector transformed into an integrated grid drawing on all types of electricity supply. From there, medical facilities will buy oxygen, not equipment, just as consumers buy electricity, not a power plant.

Leveraging Pharmacoeconomics and Advance Market Commitments to Reduce Healthcare Expenditures

Summary

By establishing a self-sustaining fund to incentivize pharmaceutical companies to develop new and improved treatment protocols using low-cost, off-patent, and unmonopolizable therapies, U.S. government payers and health insurers could realize billions of dollars in cost savings in a financially "de-risked" manner while improving quality of care: a true win-win opportunity.

Currently, pharmaceutical companies generally do not develop medical therapies unless they can enforce a monopoly price using patents. As a result, thousands of low-cost therapies, such as repurposed generic drugs, nutraceuticals, plant medicines, medical diets, lifestyle interventions, and dose de-escalation protocols, lack private financial incentives for development. Clinically validating the safety and efficacy of these affordable treatments would help many patients while saving billions of dollars. Meanwhile, the largest pharmaceutical companies are earning trillions of dollars in revenue for new patented drugs that often provide limited or no added benefit to patients, while imposing significant financial burdens on patients and taxpayers.

We can correct these misaligned incentives with new payment models such as interventional pharmacoeconomic (IVPE) randomized controlled trials (RCTs), which compare the efficacy of low-cost therapies against expensive patented drugs and generate cost savings for healthcare systems even if the trials fail. Further, outcomes-based financing mechanisms known as Advance Market Commitments (AMCs) or Pay-For-Success (PFS) contracts can incentivize the successful development of new low-cost therapies, entirely funded by payer cost savings from reduced reliance on monopoly-priced drugs.

We propose that the National Institutes of Health (NIH) National Center for Advancing Translational Sciences (NCATS) work together with payers such as the Centers for Medicare & Medicaid Services (CMS) and the United States Department of Veterans Affairs (VA) to transfer a fraction of their cost savings from IVPE RCTs and AMCs into a self-sustaining "prize" fund for development of low-cost therapies under the 2010 America COMPETES Reauthorization Act.

With these new payment models, it is possible to create a scalable and sustainable business for a sponsor to develop affordable therapies while improving patient outcomes and saving significant costs. For example, every 10,000 patients treated under an IVPE RCT + AMC contract comparing off-patent ketamine (which may be as or more effective for treatment-resistant depression) to patented esketamine would save payers at least $1.8 billion over 10 years, until the expiry of the esketamine patent, part of which can be paid back into the fund. The IVPE + AMC contract can also provide revenues of at least $250 million to a sponsor of the clinical trials for development, FDA approval, and post-approval (Phase IV) pharmacovigilance studies for ketamine (see Appendix).
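A rough back-of-the-envelope check of the headline savings figure, treating the $1.8 billion as spread evenly across patients and years (a simplifying assumption on our part), implies a per-patient price gap on the order of:

\[
\frac{\$1.8\ \text{billion}}{10{,}000\ \text{patients} \times 10\ \text{years}} = \$18{,}000\ \text{per patient per year}
\]

Any share of that gap returned to the fund is what would finance subsequent trials.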

Challenge and Opportunity

Patients urgently need more effective treatments that are readily accessible and affordable. For many Americans, treatments are prohibitively scarce or expensive, with an estimated 42% of newly diagnosed cancer patients losing all of their assets within two years. As national health expenditures continue to rise and drug R&D productivity continues to stagnate, it is crucial to rethink public-private alignment by implementing improved incentive designs that support better health and economic outcomes.

Generic medicines have substantial potential for addressing healthcare costs and improving R&D productivity: low-cost generic drugs saved the U.S. healthcare system $1.67 trillion in the last decade. Thousands of FDA-approved generic drugs, as well as 50,000+ nutraceuticals, plant medicines, diets, lifestyle interventions, and dose de-escalation regimens (collectively known as unmonopolizable therapies), could be studied to treat diseases significantly more cheaply and quickly than developing new patented drugs. It costs an estimated $1 billion-plus and takes more than 10–15 years to get a newly patented drug to market, whereas it is significantly cheaper and faster to find new uses for existing drugs and other unmonopolizable therapies that have known safety profiles and mechanisms of action from their use over many years.

However, it is often not economically viable for pharmaceutical companies to pay for clinical trials assessing unmonopolizable therapies, which can be prescribed off-label at relatively low cost. Generic-drug companies are protected by "skinny labeling" legislation, which makes it difficult or impossible to enforce patents for new uses of generic drugs. While pharmaceutical companies can reformulate generic drugs and re-patent them to charge a monopoly price, this is typically only commercially viable if the reformulation is better than the original generic. And where a reformulation involves a combination of generic drugs, compounding pharmacies can supply the combination using the original low-cost generics.

Due to this market failure, there is a lack of funding for large and robust clinical trials of such low-cost therapies; the chance of the original formulation of a drug obtaining FDA approval for a new use approaches zero once it goes generic. The same problem applies to funding large clinical trials for nutraceuticals, medical diets, lifestyle interventions, and novel dosing regimens, where it is almost impossible to stop doctors and patients from accessing these therapies. Patients who desperately need more treatment options are unable to realize the benefits that existing off-patent, unmonopolizable, or low-cost therapies might offer, and there may be significant harm to the public due to such gaps in patent incentives.

Increased direct grant funding for clinical trials can also create suboptimal outcomes. During the height of the COVID-19 pandemic in 2020, the Pepcid AC (famotidine) COVID-19 study raised red flags weeks after a $21 million grant was awarded to study the drug's effects as a potential therapy, with concerns raised over study integrity, outcome measures, and even administrative protocols. Ironically, such grant funding can lead to even more risk and wasted taxpayer funds; for example, roughly $150 million of federal funding has gone to studying the dietary supplement curcumin in more than 120 clinical trials, with no tangible evidence that it is an effective treatment for any medical condition. Further, conservative estimates suggest that publicly funded clinical trials for repurposing phospholipidosis-inducing cationic amphiphilic drugs (CADs) to treat COVID-19, including hydroxychloroquine, may have cost over $6 billion. Other than large, pragmatic trials such as the United Kingdom's RECOVERY trial, which discovered that dexamethasone significantly reduced mortality in COVID patients on respiratory support, and the TOGETHER trial, funding smaller, low-powered clinical trials did not lead to the development of significantly beneficial COVID therapies for patients. A risk-transferring market mechanism to fund large clinical trials would have been more efficient, or at least would not have exposed taxpayers to the risk of failed clinical trials. IVPE RCTs are paid for out of cost savings, and AMC contracts only pay out when pre-specified requirements are met, so taxpayers and health insurers do not pay for any failures.

To quickly and affordably improve the lives of millions of patients, we propose that Congress appropriate funds for, and the Biden-Harris Administration direct, the NIH, CMS, and VA, with the support of the NCATS drug repurposing program, to establish a self-sustaining fund that uses IVPE RCTs and AMCs to finance clinical trials for affordable therapies that generate cost savings (the "IVPE + AMC Fund"), using their federal authorization to establish market rewards or "prizes" under the 2010 America COMPETES Reauthorization Act. Private health insurers are fragmented and have limited incentives to reduce the costs of the $4 trillion-per-year U.S. healthcare industry in order to justify the high premiums charged to their U.S. customers, while providers earn more revenue by charging higher fees. However, there is some movement away from fee-for-service and toward PFS contracts and value-based pricing (VBP). There is also a significant risk of lawsuits (including personal liability for decisionmakers) under ERISA legislation and the Consolidated Appropriations Act of 2021, which impose a fiduciary duty on self-insured employers to reduce healthcare spending. Taxpayer-funded payers such as CMS, the VA, and the United States Army, as well as large self-insured employers, can use IVPE RCTs and AMCs to incentivize development of low-cost therapies as a fiduciary and financial risk management mechanism. Their net cost savings would far exceed the cost of administering the IVPE + AMC Fund.

In particular, IVPE RCTs compare a low-cost repurposed generic drug against an expensive patented drug normally funded by a payer, testing for equivalence or superiority. For example, there was a recent proposal to establish a self-sustaining IVPE fund to determine the optimal minimum dose of expensive oncology drugs, saving costs while reducing side effects. Because patients randomized to the low-cost arm do not receive the expensive drug during the trial, the avoided drug costs can far exceed the cost of running the RCT, which means the trial pays for itself in cost savings even if it fails. And if the RCT shows the low-cost treatment provides at least the same standard of care as the patented drug, this can save payers billions of dollars until the patent expires. If some of these cost savings are transferred back into an IVPE + AMC Fund, this will create a scalable business model for developing new low-cost therapies.
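One simple way to see the self-financing claim is to write out the payer's net position during the trial. The notation below is ours, not drawn from the cited proposal:

\[
\text{Net savings} \approx N_{\text{low}} \times \left(p_{\text{expensive}} - p_{\text{low}}\right) \times T \;-\; C_{\text{trial}}
\]

where \(N_{\text{low}}\) is the number of patients randomized to the low-cost arm, \(p_{\text{expensive}}\) and \(p_{\text{low}}\) are per-patient therapy costs per unit time, \(T\) is the trial duration, and \(C_{\text{trial}}\) is the cost of running the study. When the price gap is large, this quantity is positive even if the trial ultimately fails to show equivalence.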

The self-sustaining nature of the IVPE + AMC Fund could be demonstrated, for example, by providing market rewards of up to $100 million (which can be pooled between federal agencies) to reimburse a sponsor for recruiting patients into IVPE trials comparing low-cost and expensive therapies. The reimbursement amount should ideally be less than the cost of the expensive therapy, and the resulting guaranteed payer cost savings from substituting the low-cost therapy for the expensive therapy could be paid back to increase the size of the fund. The IVPE clinical trial would require ethics approval and would provide valuable data to "de-risk" whether the low-cost therapy works, without requiring additional taxpayer support. AMCs would then act as a "pull" mechanism to reward sponsors of large Phase III RCTs that result in FDA approval with a higher reimbursement price for an otherwise low-cost therapy that substitutes for the expensive therapy. For example, a sponsor can partner with a generic drug manufacturer to guarantee supply of a repurposed generic in return for sharing revenue under an AMC. The sponsor can then submit bioequivalence and efficacy RCT data under the 505(b)(2) pathway to obtain a new "label" for the new use, which also provides a 3-year period of data exclusivity. Alternatively, a sponsor can leverage the revenue from the AMC to develop a nominal reformulation (e.g., a new route of administration or dose) to disincentivize generic substitution and obtain a 5-year period of data exclusivity under the 505(b)(1) New Drug Application pathway. The sponsor can earn additional revenue from the sale of the repurposed generic or RCT data to other healthcare systems, with the payers backing the IVPE + AMC Fund receiving a lower price or royalties. Payer cost savings from substituting an expensive therapy with the lower-cost therapy can be paid back into the IVPE + AMC Fund to ensure long-term sustainability.
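A toy cash-flow model can illustrate how repayments might keep such a fund solvent across successive trial cycles. Every parameter below (seed size, trial cost, success rate, savings, payback share) is an assumption chosen only to show the mechanics, not a projection:

  # Toy cash-flow sketch of a self-sustaining IVPE + AMC fund.
  # All parameters are illustrative assumptions, not projections.
  initial_fund = 100_000_000          # seed "prize" pool, USD
  trial_cost = 20_000_000             # reimbursement per IVPE trial
  success_rate = 0.4                  # share of trials showing equivalence or superiority
  savings_per_success = 250_000_000   # payer savings per success until patent expiry
  payback_share = 0.25                # fraction of savings returned to the fund

  fund = initial_fund
  for year in range(1, 11):
      trials = int(fund // trial_cost)       # fund as many trials as the balance allows
      fund -= trials * trial_cost
      successes = trials * success_rate      # expected value, for simplicity
      fund += successes * savings_per_success * payback_share
      print(f"year {year}: funded {trials} trials, balance ${fund:,.0f}")

Under these assumptions, each funded trial returns more in expected repayments ($25 million) than it costs ($20 million), so the balance grows; with less favorable assumptions the fund would instead need periodic replenishment.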

AMC contracts are already used in various other contexts to address market failures. For example, Social Impact Bonds (SIBs), a type of AMC contract, have been used to help fund projects preventing homelessness and prisoner recidivism, with over $700 million in SIBs raised to date. Operation Warp Speed used AMCs to incentivize vaccine development through FDA approval and lab-to-patient stages. Methodologies similar to IVPE RCTs have also been used to prove that low-dose bevacizumab (Avastin) could be used to treat age-related macular degeneration instead of patented ranibizumab (Lucentis), which was estimated to save Medicare Part B $18 billion over 10 years.

Therefore, an IVPE + AMC Fund can generate billions of dollars in revenue from payers, drawn from the cost savings created by developing new low-cost therapies that reduce reliance on expensive ones. It can also address misaligned incentives under the patent system by encouraging pharmaceutical companies to (1) "de-link" profits from maximizing sales of a monopoly-priced drug, (2) ensure that patented drugs offer value for money rather than pursuing "evergreening" strategies, such as patenting slight modifications of generic drugs to extend the period of monopoly pricing, and (3) pursue the most effective therapies rather than the most patentable. Where no expensive comparator treatment is available, AMCs can still be used to incentivize development of low-cost therapies for unmet medical needs, and the IVPE methodology can compare the low-cost therapy against usual care. 

A Market Failure Caused by a Tragedy of the Commons

Thousands of potentially safe and effective off-patent and low-cost therapies are currently ignored due to misaligned incentives under the patent system. Core to this tragedy is that the clinical trial data validating the safety and efficacy of treatment protocols is what is valuable to healthcare payers and patients, not whether the drug's active ingredient is new. In essence, treatment protocols involving new uses for off-patent and unmonopolizable therapies are nonrivalrous and "highly non-excludable" public goods. Co-author Savva Kerdemelidis's 2014 master's thesis concluded that because the patent system provides inadequate incentives for the pharmaceutical industry to develop such "unmonopolizable therapies," alternative "prize-like" incentives are needed. Such incentives allow payers to put a price on the clinical trial data validating the efficacy of these new and more affordable treatment protocols.

Scaling the Development of Low-Cost Therapies with IVPE RCTs and AMCs

IVPE RCTs and AMCs (see definitions in the FAQ section) recognize that the value of a therapeutic intervention lies not in the cost of its active ingredient but in the clinical trial data showing it is safe and effective in a particular patient population. To accelerate the development of unmonopolizable therapies through the clinical and regulatory pipeline, we propose that payers—specifically government agencies such as CMS and the VA, as well as health insurers responsible for pharmaceutical reimbursement (ideally a consortium of payers)—support IVPE RCT pilots to de-risk early-stage clinical research and the use of AMC contracts through the IVPE + AMC Fund. The AMC will incentivize a sponsor to fund the Phase III studies needed to obtain FDA approval and to conduct post-approval (Phase IV) pharmacovigilance studies. The fund will be self-sustaining if a percentage of the savings from the availability of low-cost therapies is paid back into it.  

The total amount of outcome payments under an AMC drawn from the IVPE + AMC Fund can be calculated with reference to the level of clinical impact: Quality-Adjusted Life-Years (QALYs) generated, Disability-Adjusted Life-Years (DALYs) averted, or even future cost savings from substituting a low-cost therapy for an expensive one or from reduced hospitalization costs. For example, the number of QALYs resulting from an approved unmonopolizable therapy can be estimated in advance by a committee of pharmacoeconomic experts, using a process similar to the United Kingdom National Health Service's subscription-style payment (SSP) model for incentivizing the development of new antibiotics. Under an SSP, a fixed amount is paid annually according to the total QALY value of the new therapy, as assessed by an elected, independent Medical Evaluation Committee. Similarly, Louisiana's Medicaid program implemented a "Netflix-subscription" model to guarantee supply of low-cost generic drugs to treat hepatitis C by agreeing to a fixed annual payment in advance. Pay-for-performance and value-based payment (VBP) contracts are similar to AMC contracts and are often negotiated with payers to deliver healthcare services more cost-effectively, with rewards contingent on certain conditions being met. Accordingly, using AMC contracts to reward successful RCTs is not a novel mechanism and should not impose a significant administrative burden on federal agencies; it may require changing only a few sentences in an existing VBP contract to refer to a repurposed generic drug or low-cost therapy.  
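
As a rough sketch of how such outcome payments might be sized, the function below caps the AMC payment at the lower of a health-value estimate (QALYs gained times a willingness-to-pay threshold) and a share of projected payer savings; the threshold, savings share, and example figures are illustrative assumptions of ours, not parameters of the NHS SSP model or any existing program.

```python
# Illustrative sizing of an AMC outcome payment. The per-QALY threshold and the
# savings-share fraction below are hypothetical policy parameters, not figures
# from the NHS SSP model or any CMS program.

def size_amc_payment(
    qalys_gained_per_year: float,     # estimated QALYs gained across the treated population
    value_per_qaly: float,            # willingness-to-pay threshold, e.g. $50,000 per QALY
    projected_annual_savings: float,  # payer savings from substituting the expensive therapy
    savings_share: float = 0.25,      # fraction of savings returned to the sponsor
    years: int = 10,                  # remaining years until the comparator's patent expires
) -> float:
    """Cap the outcome payment at the lower of the health-value estimate
    and the shared portion of projected cost savings."""
    value_based_cap = qalys_gained_per_year * value_per_qaly * years
    savings_based_cap = projected_annual_savings * savings_share * years
    return min(value_based_cap, savings_based_cap)

# Hypothetical example: 2,000 QALYs/year valued at $50,000 each, $187.5M/year in
# projected savings, 25% shared with the sponsor over 10 years.
print(size_amc_payment(2_000, 50_000.0, 187_500_000.0))  # caps at min($1.0B, $468.75M)
```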

The following example describes how the IVPE + AMC model works:

  1. A payer (e.g., CMS) agrees to an IVPE RCT with a sponsor comparing the equivalence or superiority of a repurposed generic drug or low-cost therapy to a patented drug (or expensive standard of care). The payer can reimburse the low-cost therapy at a higher per-patient price sufficient to cover the costs of the IVPE RCT. As long as this higher reimbursement is still below the price of the patented drug or expensive treatment, the payer is guaranteed cost savings even if the RCT fails. If the IVPE RCT is successful, the payer agrees to an AMC worth, say, at least $100 million to purchase a minimum quantity of the repurposed generic drug in advance or to reimburse the sponsor at the higher price, subject to the sponsor obtaining FDA approval and taking responsibility for post-approval pharmacovigilance. Notably, because the IVPE RCT is funded from immediate cost savings due to the price difference between the low-cost therapy and the patented drug, both parties are financially de-risked. (See the Appendix for an example of the IVPE + AMC model using generic ketamine vs. patented esketamine to treat depression.) 
  2. The sponsoring pharmaceutical company raises $50 million to conduct the clinical trials needed for FDA approval, on the basis of the agreement to transfer cost savings under the IVPE RCT and the $100 million AMC, subject to FDA approval and post-approval (Phase IV) pharmacovigilance studies showing continued safety and efficacy. 
  3. If the IVPE RCT is successful and the generic drug or low-cost therapy is shown to be equivalent to or better than the expensive patented drug, the AMC is triggered: the sponsor is guaranteed minimum sales of $100 million and also holds a "branded" generic or low-cost therapy with a new label and three years of data exclusivity via the 505(b)(2) pathway. A new drug application (NDA) with five years of data exclusivity is possible if the generic or low-cost therapy contains a new active ingredient. The sponsor can also obtain a method-of-use patent on the optimal treatment protocol, which provides some commercial benefit and leverage to negotiate AMCs with other payers. A payer's cost savings from updating its reimbursement guidelines to substitute the low-cost generic drug for the expensive patented drug will exceed the $100 million in outcome payments under the AMC, and a portion can be used to top up the IVPE + AMC Fund to make it self-sustaining. If the clinical trials fail, the sponsor loses its investment, unless payers agree to transfer part of their cost savings for the duration of the IVPE RCTs to reimburse the sponsor. This is a win-win arrangement for funding new RCTs with very limited commercial risk compared to traditional drug development. The main task is finding repurposed generic drugs or low-cost interventions that could reduce reliance on expensive patented drugs while providing at least the same safety and efficacy. There is plenty of low-hanging fruit that is already medically "de-risked" (see the generic drug repurposing use cases section below). 

Once the repurposed generic or low-cost therapy receives FDA approval and market authorization, the sponsor can market the approved therapy to prescribing physicians, who in turn can prescribe the treatment protocol to their patients, who benefit from improved health. Moreover, this payment model, which generates revenue from RCT data that produces cost savings, helps redress the conflict of interest and information asymmetry between government and private healthcare payers on one side and pharmaceutical companies on the other, since companies are now incentivized to develop the most effective therapies for the lowest cost. 

Financial and Health Impact of the IVPE + AMC Fund

According to the Office of Management and Budget and the Office of Science and Technology Policy, prize competitions benefit the federal government by allowing federal agencies to:

  1. Pay only for success
  2. Establish ambitious goals and shift technological and other risks to prize participants 
  3. Increase the number and diversity of individuals, organizations, and teams tackling a problem, including those who have not previously received federal funding
  4. Increase cost effectiveness, stimulate private-sector investment, and maximize the return on taxpayer dollars
  5. Motivate and inspire the public to tackle scientific, technical, and societal problems

There are additional reasons why implementing a self-sustaining IVPE + AMC Fund as a prize-like "pull" incentive to reward the development of low-cost therapies is more efficient and scalable than providing grant funding or "push" incentives (although the approaches can be complementary, and push incentives can be superior when likely outcomes are known to the grantor). First, IVPE RCTs de-risk sponsors when payer cost savings are shared, by ensuring that payer reimbursement of the low-cost therapy is sufficient to cover the costs of the RCT. Under an AMC contract, there is a transfer of risk from payers to the market: payers are not willing to take on the risk and expense of large RCTs, the responsibility of marketing to patients and doctors, or the management of adverse events and product recalls, while the market is comfortable taking on this risk and expense as long as investors can obtain a standard rate of return (e.g., 10-20% per year). Second, payers and government agencies are often not as well qualified or equipped to run large clinical trials as the pharmaceutical industry, which has access to the most experienced staff and the latest technological advances, including artificial intelligence. Third, grant programs have high administration costs for both grantors and grantees, and grantees are not strongly incentivized to deliver successful outcomes; by comparison, markets are incentivized to fail fast and to efficiently allocate capital to those best able to deliver results at the lowest cost. Lastly, repurposed off-patent and unmonopolizable therapies could outcompete patented drugs by providing improved health outcomes at a lower cost to payers. Pharmaceutical companies may also prefer IVPE RCTs and AMC contracts to developing novel molecules, due to decreased risk, costs, and time to market.  

The IVPE + AMC Fund creates a clinical trial data marketplace that incentivizes the funding of large-scale clinical trials of unmonopolizable therapies such as low-cost generic drugs that can result in billions, if not trillions, of dollars in healthcare savings for health insurers and governments and, moreover, provide better treatment options and outcomes for patients. Those cost savings can then be reinvested in additional IVPE + AMC Funds to incentivize further development of treatment protocols. Accordingly, the IVPE + AMC model not only incentivizes investment in unmonopolizable therapies, it can also be used to generate a sustainable and scalable business model for additional investment into low-cost therapies that also help improve access to healthcare in the Global South. 

Many Generic Drug Repurposing or Low-Cost Therapy Candidates Exist

Use Case 1: Metastatic Cancer

Hundreds of non-cancer generic drugs have already been tested by researchers and physicians in preclinical and clinical studies for cancer, some up to Phase II trials, and show promise. For example, repurposing the off-patent NSAID ketorolac as a preventive treatment producing a 10% reduction in breast cancer recurrence would cost about $5 million annually (100,000 cases at $50 per case for ketorolac and its administration), while the savings could exceed $1 billion annually (10,000 avoided cases of metastatic disease at approximately $100,000 per patient). Even these savings would be dwarfed by the cost savings available under an IVPE fund comparing low doses of expensive patented cancer drugs, such as nivolumab, abiraterone, trastuzumab, ibrutinib, paclitaxel, and pembrolizumab, against their standard doses, which can also mean fewer side effects for patients. The last of these, pembrolizumab (Keytruda), is the top-selling blockbuster drug, with annual sales in excess of $15 billion; dosing Keytruda by weight could reduce its use by 25% in approved indications such as lung cancers.
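
For readers who want to check the orders of magnitude, the short sketch below simply re-computes the ketorolac figures quoted above (100,000 treated cases at $50 each, roughly 10,000 avoided metastatic cases at about $100,000 each); these are the illustrative estimates from this paragraph, not trial results.

```python
# Re-computation of the ketorolac prevention example from the figures stated above.
cases_treated_per_year = 100_000     # breast cancer cases receiving preventive ketorolac
cost_per_case = 50                   # $ for ketorolac and its administration
recurrences_avoided = 10_000         # avoided metastatic cases (the stated ~10% reduction)
cost_per_metastatic_case = 100_000   # $ to treat one case of metastatic disease

program_cost = cases_treated_per_year * cost_per_case                     # $5,000,000 per year
avoided_treatment_cost = recurrences_avoided * cost_per_metastatic_case   # $1,000,000,000 per year
net_annual_savings = avoided_treatment_cost - program_cost                # $995,000,000 per year
print(program_cost, avoided_treatment_cost, net_annual_savings)
```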

Use Case 2: Major Depressive Disorder & Treatment-Resistant Depression

Depression is the leading cause of disability in the United States for people between the ages of 15 and 44. The estimated 12-month prevalence of medication-treated major depressive disorder (MDD) in the United States is 8.9 million adults, of whom 2.8 million have treatment-resistant depression (TRD). A growing body of evidence has shown that infusions of generic ketamine can be a viable and affordable therapy for both forms of depression, which together cost the United States over $320 billion in 2018. Generic ketamine has been used as a general anesthetic since the Vietnam War and costs less than $2 per dose; however, it is not authorized to treat any form of depression. The patented and FDA-approved form, esketamine (s-ketamine), is priced at about $850 per dose and is approved for TRD and for MDD with suicidal ideation. 

To date, many clinical trials show that generic ketamine is more effective than esketamine. Moreover, a 2020 study indicated that esketamine is unlikely to be cost-effective for the management of treatment-resistant depression in the United States unless its price falls by more than 40%. And recently, the UK's National Institute for Health and Care Excellence (NICE) declined to recommend esketamine for reimbursement. With millions of American adults living with treatment-resistant depression, successfully repurposing generic ketamine through the self-sustaining IVPE + AMC Fund could save many lives through improved access and standard of care—and save healthcare payers hundreds of millions of dollars in monopoly prices.

Plan of Action

The IVPE + AMC Fund can be established through the America COMPETES Reauthorization Act of 2010 (P.L. 111-358), which encourages prize competitions by authorizing the head of any federal agency to carry out a competition that has the potential to stimulate innovation and advance the agency's mission. In 2016, the 21st Century Cures Act (P.L. 114-255) directed the director of the NIH to support prize competitions that would realize significant advancements in biomedical science or improve health outcomes, especially as they relate to human diseases or conditions. IVPE + AMC contracts can act as a prize-like incentive that addresses market failures while lowering treatment costs for federal agencies (e.g., CMS or the VA) by incentivizing the discovery and validation of evidence that low-cost interventions, such as repurposed generic drugs, may be equivalent to or more effective than expensive interventions such as patented drugs. 

In short, we propose that the IVPE + AMC Fund be established and operate as follows:

The IVPE + AMC Fund is established through the America COMPETES Reauthorization Act of 2010 (P.L. 111-358) to support payer backing of IVPE RCTs as a self-funding mechanism and to authorize outcome payments of up to $100 million under an AMC for successful clinical trials of a repurposed generic drug, nutraceutical, and/or other low-cost unmonopolizable therapy to treat a specific indication with high unmet medical need and cost burden (e.g., cancer, treatment-resistant depression, glioblastoma, Crohn's disease, Alzheimer's disease).

The IVPE + AMC Fund is furthermore supported by Section 2002 of The 21st Century Cures Act (Division A of P.L. 114-255), which requires the director of the NIH, under authorities in 15 U.S.C. §3719, to support prize competitions for one or both of the following goals:

  1. Identify and fund areas of biomedical science that could realize significant advancements through a prize competition; and 
  2. Improve health outcomes, particularly with respect to human diseases and conditions that are serious and represent a significant disease burden in the United States. The prize competition may also target human diseases and conditions for which public and private investment in research is disproportionately small relative to federal government expenditures for prevention and treatment activities and those diseases and conditions with potential for a significant return on investment via reduction in federal expenditures.

The director of the NIH elects a Medical Evaluation Board, which oversees and manages the prize purse held by the IVPE + AMC Fund as follows:

  1. Determining the minimum reimbursement price paid to sponsors for patients receiving a low-cost therapy under an IVPE RCT, set below the price of the expensive patented drug or intervention it substitutes for; and then determining the minimum purchase order or outcome payments under an AMC contract relative to the total QALY/DALY improvement or cost savings, subject to large Phase III RCTs resulting in FDA approval of the low-cost therapy, and further subject to ongoing safety and efficacy shown in Phase IV pharmacovigilance studies. 
  2. Collecting information on the effect of the IVPE + AMC Fund on advancing biomedical science or improving health outcomes, and on the effect of the resulting innovations on federal expenditures.

Initially, we propose that Congress fund the NIH with $2 million to establish a pilot IVPE + AMC Fund program in partnership with the NIH Office of Acquisition Management and Planning and NCATS. The focus of this pilot would be to create a menu of "de-risked" low-cost therapies suitable for reimbursement under an IVPE + AMC Fund, along with feasibility studies showing projected cost savings for payers such as CMS and the VA and patient access benefits, based on existing NCATS translational efforts. Candidates include repurposed generic drugs and dose de-escalation interventions compared against expensive patented interventions under the IVPE model. For example, for the treatment of age-related macular degeneration, generic drugs such as bevacizumab (Avastin) could save Medicare Part B $18 billion over 10 years compared with ranibizumab (Lucentis). Other candidates include prescribing generic fluvoxamine to treat COVID-19 at roughly $6,000 per QALY instead of molnupiravir at $55,000 per QALY, substituting sirolimus for nab-sirolimus to treat locally advanced unresectable or metastatic malignant perivascular epithelioid cell tumors (PEComa), or substituting sirolimus for everolimus in various cancers. Significant cost savings from IVPE RCT de-escalation studies comparing a lower dose of an expensive cancer drug to the standard treatment can fund the development of new unmonopolizable therapies; examples include low-dose nivolumab for head and neck cancer, low-dose abiraterone, and lower doses of trastuzumab, ibrutinib, paclitaxel, and pembrolizumab for various other cancers, as noted above. The key to this pilot would be a sufficiently evidence-based evaluation process to generate a menu of low-cost IVPE use cases. At scale, the IVPE + AMC Fund would ideally cover a wide expanse of market failures, but we recommend starting with the low-hanging fruit of repurposing generic drugs and dose de-escalation studies for specialist oncology drugs before expanding to other types of unmonopolizable therapies, including medical diets such as the ketogenic diet and other non-pharmaceutical and lifestyle interventions that can reduce reliance on expensive therapies.

Conclusion

An IVPE + AMC Fund established under the America COMPETES Act can provide a more flexible, self-sustaining, and cost-effective payment model for developing affordable and effective medical therapies, as opposed to the pharmaceutical industry's traditional model of charging a monopoly price for new patented drugs. Establishing IVPE RCTs that compare low-cost treatments with expensive treatments generates immediate cost savings for payers from reduced reliance on monopoly-priced drugs, as well as future cost savings if clinical guidelines are updated to recommend the low-cost treatment. AMC contracts incentivize sponsors to pursue FDA approval and help correct misaligned incentives under the patent system by ensuring rewards are de-linked from maximizing the sales of a single monopoly-priced drug. 

If our proposed self-sustaining IVPE + AMC Fund is implemented, it will create new incentives to leverage the biotech innovations of the last 40 years (genetic engineering, personalized medicine informed by blood tests and low-cost DNA sequencing, artificial intelligence, decentralized clinical trials, and telemedicine) to optimize the efficient delivery of healthcare. The pharmaceutical industry is not to blame if it can rely only on the patent system to obtain a return on investment for funding medical innovation. New outcomes-based payment models are needed to develop more affordable and effective treatments that can pull the practice of medicine into the 21st century and address significant health inequities. 

Appendix

IVPE RCT + AMC Financial Model Example

This IVPE RCT + AMC financial model uses generic ketamine and patented esketamine as an example of how to leverage immediate and future cost savings by comparing a low-cost intervention to an expensive intervention to incentivize funding of RCTs for unmonopolizable therapies.

Current esketamine costs for treatment-resistant depression patients for a payer (e.g., CMS)

# of treatment-resistant depression patients in a year: 10,000
x average doses per patient in a year: 25
x price per esketamine dose: $850
Annual treatment costs to payer: $212,500,000
Year of esketamine patent expiration: 2035
Total treatment costs for payer until esketamine patent expiration: $2,762,500,000

Costs of the IVPE RCT (over 1 year)

# of treatment-resistant depression patients in the RCT: 5,000 (esketamine arm), 5,000 (ketamine arm)
x average doses per patient in a year: 25 in each arm
x price per dose: $850 (esketamine), $2 (ketamine)
Total treatment costs to payer: $106,250,000 (esketamine arm), $250,000 (ketamine arm)

Savings from conducting the IVPE RCT (over 1 year)

Total savings from conducting the IVPE RCT (over 1 year): $106,000,000
Less: costs paid to the sponsor for engaging a contract research organization to conduct the RCT: $100,000,000
Savings to payer from simply conducting the IVPE RCT: $6,000,000

(1) Payer cost savings can be transferred to the sponsor under the IVPE RCT contract to help fund R&D to optimize the treatment protocol and obtain FDA approval.

If the IVPE RCT is successful and ketamine obtains FDA approval for treatment-resistant depression, the AMC is triggered and total future savings to payers are as follows (assuming ketamine is proven to have near-equivalent efficacy and is adopted):

# of treatment-resistant depression patients in the U.S. in a year: 10,000
x average doses per patient in a year: 25
x price per dose under the AMC: $100
Total annual treatment costs for payer: $25,000,000
Annual future savings for payer: $187,500,000
x remaining years of esketamine patent (assuming FDA approval in 2025): 10
Total future savings for payer: $1,875,000,000

(2) The $100 price per dose for FDA-approved ketamine represents the price under the AMC to purchase the sponsor's branded ketamine.

Total 10-year revenue for sponsor for branded ketamine under the AMC: $250,000,000
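
The sketch below re-computes the figures in this appendix directly from its stated assumptions (10,000 patients, 25 doses per year, $850 vs. $2 vs. $100 per dose, a $100 million payment to the sponsor to run the RCT, and 10 remaining years of patent life after an assumed 2025 approval); it introduces no new data and is included only so the arithmetic can be traced.

```python
# Re-computation of the appendix financial model from its stated assumptions.

PATIENTS_PER_YEAR = 10_000      # U.S. treatment-resistant depression patients treated per year
DOSES_PER_YEAR = 25             # average doses per patient per year
PRICE_ESKETAMINE = 850          # $ per esketamine dose
PRICE_KETAMINE_GENERIC = 2      # $ per generic ketamine dose (trial supply)
PRICE_KETAMINE_AMC = 100        # $ per dose of the sponsor's branded ketamine under the AMC
RCT_SPONSOR_COST = 100_000_000  # $ paid to the sponsor / CRO to run the IVPE RCT
YEARS_REMAINING = 10            # years of esketamine patent life after assumed 2025 approval

# Status quo: every patient receives esketamine.
baseline_annual_cost = PATIENTS_PER_YEAR * DOSES_PER_YEAR * PRICE_ESKETAMINE          # $212.5M

# During the RCT: half of patients receive esketamine, half receive generic ketamine.
rct_cost = (PATIENTS_PER_YEAR // 2) * DOSES_PER_YEAR * (PRICE_ESKETAMINE + PRICE_KETAMINE_GENERIC)
in_trial_savings = baseline_annual_cost - rct_cost                                     # $106M
net_savings_during_trial = in_trial_savings - RCT_SPONSOR_COST                         # $6M

# After FDA approval: all patients receive branded ketamine at the AMC price.
post_approval_annual_cost = PATIENTS_PER_YEAR * DOSES_PER_YEAR * PRICE_KETAMINE_AMC    # $25M
annual_future_savings = baseline_annual_cost - post_approval_annual_cost               # $187.5M
total_future_savings = annual_future_savings * YEARS_REMAINING                         # $1.875B
sponsor_amc_revenue = post_approval_annual_cost * YEARS_REMAINING                      # $250M

print(baseline_annual_cost, net_savings_during_trial, total_future_savings, sponsor_amc_revenue)
```
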
Frequently Asked Questions
What is an advanced market commitment (AMC)?

Advanced market commitments are a type of pay-for-success contract that guarantees a viable market for a product once it is successfully developed. Harvard economist Michael Kremer was the first to propose AMCs to stimulate private sector investment in innovations undersupplied by the market. In 2005, global foundations supported the creation of a detailed proposal by the Center for Global Development that described how an AMC might be structured. In 2009, the first AMC was launched, with $1.5 billion in funding for vaccines for diseases primarily affecting people living in poverty. Three vaccines have since been developed and more than 150 million children immunized, saving an estimated 700,000 lives.

Would your proposal involve price negotiations with payers that are specific to an indication?

Drug pricing can stay uniform if differential pricing is not permitted. Outcome payments under an AMC can be in the nature of a fixed annual “Netflix” subscription-style payment for the RCT data showing that the repurposed generic is safe and effective for the indication, and can be “de-linked” from sales of a drug. Alternatively, if differential pricing is permitted under applicable regulations and policy, then the repurposed generic drug could be priced higher in the new indication under an AMC.

What are some examples of successfully repurposed generic drugs?

Some of the most prominent and widely used repurposed drugs include the following (not exhaustive): 

Drug name | Original indication | New indication
Aspirin | Analgesia | Colorectal cancer
Gemcitabine | Antiviral | Cancer
Raloxifene | Osteoporosis | Breast cancer
Sildenafil | Angina | Erectile dysfunction

Could doctors just prescribe generic drugs off-label to treat disease?

Off-label drug use is when drugs are prescribed for a condition, a type of patient, or a dosage not officially approved by the FDA; such use accounts for roughly 20% of all prescriptions. Off-label drug use is generally not backed by the level of testing and data required for FDA approval, so patients do not have the guidance and warnings that come with FDA-approved labels. Doctors and patients therefore do not always have enough information about the effects and dangers of off-label use to make informed decisions, which can leave patients unknowingly at risk of dangerous, unexpected side effects. This underscores the need for large Phase III RCTs to prove that drugs prescribed off-label are safe and effective for a new indication and can be granted a new label for that indication. In addition, health insurers often do not reimburse off-label use, which means patients are forced to pay out of pocket.

How will your AMC model work if a healthcare payer does not want to pay for cost savings or value generated by a repurposed generic drug, as opposed to the lowest market price for the active ingredient?

Large health insurers, particularly in the United States, often own hospitals and are not incentivized to reduce healthcare costs. This model will become unsustainable due to an aging baby boomer population and insurance premiums increasing faster than wages. To avoid this situation, payers have started to implement more outcomes-based contracts such as value-based pricing and bundled payments to incentivize innovation that reduces healthcare costs. Similarly, payers can agree to support IVPE RCTs to clinically validate a low-cost off-patent intervention by comparing it to an expensive patented intervention or paying an amount representing future cost savings or QALY gains for repurposing a generic drug under an AMC contract. Currently, payers do not put a price on the clinical trial data or treatment protocol information about which generic drug works in a new disease and the optimal dose, but only pay the marginal cost of the generic drug as a chemical. This is like only paying an electrician for the cost of a new $1 part, rather than for the knowledge of the specific part needed to fix an electrical fault, which is the valuable information that takes years of experience and would save your business thousands of dollars or more.

Why does repurposing generic drugs and nutraceuticals result in lower costs than making a novel patented drug?

Repurposed generic drugs and unmonopolizable therapies such as nutraceuticals are already de-risked: generic drugs have years of efficacy and safety data from Phase I safety trials and post-marketing Phase IV studies, while nutraceuticals are generally recognized as safe (GRAS) compounds. IVPE RCTs de-risk early clinical trials, and an AMC would incentivize scale-up of the supply of the repurposed generic drug and encourage sponsors to educate doctors and patients to ensure more rapid uptake of the innovation.

Would some of the proposed schemes require the monitoring of uptake and/or patient outcomes, which imposes an administrative burden for payers?

Under an AMC, there can be a fixed annual payment or minimum sales commitment calculated with reference to the cost savings from substituting for expensive patented drugs and/or the QALYs gained, similar to the subscription-style payment model for antibiotics in the NHS. Under the AMC, the sponsor would also benefit from additional sales of the "branded" generic drug, so it would be incentivized to monitor the drug's use and conduct standard Phase IV pharmacovigilance (and can also be liable for adverse events and recalls). Moreover, payer cost savings from the IVPE + AMC Fund program would far exceed the cost of any monitoring or administrative burden.

What is the commercial incentive for a payer to back an AMC and a sponsor to fund RCTs for repurposing generic drugs and nutraceuticals if other payers can free ride on the knowledge by prescribing the generic drug off-label? Where is the business case?

Other than benefiting from immediate cost savings due to reduced reliance on expensive patented drugs, payers backing an AMC can negotiate a favorable price and guaranteed supply of the "branded" generic drug from the sponsor, whereas other payers would be forced to use an off-label version and expose doctors and patients to increased risk of liability. Sponsors would benefit from outcome payments under the AMC and additional sales of the "branded" generic. They can also leverage data exclusivity and traditional patent rights, such as a method-of-use patent on the optimal treatment protocol and reformulations, to negotiate similar AMCs with other payers and to reduce the risk of off-label generic competition. The more payers back the IVPE + AMC model, the more the cost savings can be shared and the smaller the free-riding problem becomes. RCT data on optimal treatment protocols, informed by genetic testing and other diagnostics, can also be commercialized as a clinical decision support tool and trade secret. It is the intention of the authors to support the establishment of Public Good Pharma, a biotech company and clinical trial data marketplace owned by the charity Crowd Funded Cures, to carry out this business model.

Creating Auditing Tools for AI Equity

Summary

The unregulated use of algorithmic decision-making systems (ADS)—systems that crunch large amounts of personal data and derive relationships between data points—has negatively affected millions of Americans. These systems impact equitable access to education, housing, employment, and healthcare, with life-altering effects. For example, commercial algorithms used to guide health decisions for approximately 200 million people in the United States each year were found to systematically discriminate against Black patients, reducing by more than half the number of Black patients identified as needing extra care.

One way to combat algorithmic harm is by conducting system audits, yet there are currently no standards for auditing AI systems at the scale necessary to ensure that they operate legally, safely, and in the public interest. According to one research study examining the ecosystem of AI audits, only one percent of AI auditors believe that current regulation is sufficient. 

To address this problem, the National Institute of Standards and Technology (NIST) should invest in the development of comprehensive AI auditing tools, and federal agencies with the charge of protecting civil rights and liberties should collaborate with NIST to develop these tools and push for comprehensive system audits. 

These auditing tools would help the enforcement arms of these federal agencies save time and money while fulfilling their statutory duties. Additionally, there is a pressing need to develop these tools now, with Executive Order 13985 instructing agencies to “focus their civil rights authorities and offices on emerging threats, such as algorithmic discrimination in automated technology.”

Challenge and Opportunity

The use of AI systems across all aspects of life has become commonplace as a way to improve decision-making and automate routine tasks. However, their unchecked use can perpetuate historical inequities, such as discrimination and bias, while also potentially violating American civil rights.

Algorithmic decision-making systems are often used in prioritization, classification, association, and filtering tasks in a way that is heavily automated. ADS become a threat when people uncritically rely on the outputs of a system, use them as a replacement for human decision-making, or use systems with no knowledge of how they were developed. These systems, while extremely useful and cost-saving in many circumstances, must be created in a way that is equitable and secure. 

Ensuring the legal and safe use of ADS begins with recognizing the challenges that the federal government faces. On the one hand, the government wants to avoid devoting excessive resources to managing these systems; with new AI system releases happening every day, it is becoming unreasonable to oversee every system closely. On the other hand, we cannot blindly trust all developers and users to make appropriate choices with ADS.

This is where tools for the AI development lifecycle come into play, offering a third alternative between constant monitoring and blind trust. By implementing auditing tools and signing practices, AI developers will be able to demonstrate compliance with preexisting, well-defined standards while enhancing the security and equity of their systems. 

Due to the extensive scope and diverse applications of AI systems, it would be difficult for the government to create a centralized body to oversee all systems or to demand that each agency develop solutions on its own. Instead, some responsibility should be shifted to AI developers and users, as they possess the specialized knowledge and motivation to maintain properly functioning systems. This allows the enforcement arms of federal agencies tasked with protecting the public to focus on what they do best: safeguarding citizens' civil rights and liberties.

Plan of Action

To ensure security and verification throughout the AI development lifecycle, a suite of auditing tools is necessary. These tools should help enable the outcomes we care about: fairness, equity, and legality. The results of these audits should be reported, for example in an immutable ledger that is accessible only to authorized developers and enforcement bodies, or through a verifiable code-signing mechanism. We leave the specifics of reporting and documentation to the stakeholders involved, as each agency may have different reporting structures and needs. Other options, such as manual audits or audits conducted without the use of tools, may not provide the same level of efficiency, scalability, transparency, accuracy, or security.

The federal government’s role is to provide the necessary tools and processes for self-regulatory practices. Heavy-handed regulations or excessive government oversight are not well-received in the tech industry, which argues that they tend to stifle innovation and competition. AI developers also have concerns about safeguarding their proprietary information and users’ personal data, particularly in light of data protection laws.

Auditing tools provide a solution to this challenge by enabling AI developers to share and report information in a transparent manner while still protecting sensitive information. This allows for a balance between transparency and privacy, providing the necessary trust for a self-regulating ecosystem.

Solution Technical Requirements

Figure: A general machine learning lifecycle, with examples of what system developers at each stage (companies, teams, or individuals) would be responsible for signing off on regarding the use of the security and equity tools in the lifecycle.

The equity tool and process, funded and developed by government agencies such as NIST, would consist of (1) AI auditing tools for security and fairness (which could be based on or incorporate open source tools such as AI Fairness 360 and the Adversarial Robustness Toolbox) and (2) a standardized process and guidance for integrating these checks (which could be based on or incorporate guidance such as the U.S. Government Accountability Office's Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities).1 
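
As a hedged illustration of what one packaged fairness check might look like, the sketch below uses the open-source AI Fairness 360 toolkit mentioned above to compute two common group-fairness metrics on a toy set of model decisions; the column names, the toy data, and the four-fifths-rule threshold are our own illustrative choices, not part of any NIST or agency guidance.

```python
# Illustrative fairness check on model outputs using IBM's open-source AI Fairness 360.
# The data, column names, and the 0.8 "four-fifths rule" threshold are toy choices for
# demonstration only; agency-specific metrics and thresholds would come from NIST guidance
# and the relevant enforcement body.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy table of model decisions: 1 = favorable outcome (e.g., application approved).
df = pd.DataFrame({
    "race":     [0, 0, 0, 0, 1, 1, 1, 1],   # 0 = unprivileged group, 1 = privileged group
    "decision": [0, 0, 1, 0, 1, 1, 0, 1],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["decision"],
    protected_attribute_names=["race"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"race": 0}],
    privileged_groups=[{"race": 1}],
)

report = {
    "disparate_impact": metric.disparate_impact(),  # ratio of favorable-outcome rates
    "statistical_parity_difference": metric.statistical_parity_difference(),
}
report["passes_four_fifths_rule"] = report["disparate_impact"] >= 0.8

print(report)  # this record is what a developer could sign and retain for auditors
```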

Dioptra, a recent effort between NIST and the National Cybersecurity Center of Excellence (NCCoE) to build machine learning testbeds for security and robustness, is an excellent example of the type of lifecycle management application that would ideally be developed. Failure to protect civil rights and ensure equitable outcomes must be treated as seriously as security flaws, as both impact our national security and quality of life. 

Equity considerations should be applied across the entire lifecycle; training data is not the only possible source of problems. Inappropriate data handling, model selection, algorithm design, and deployment also contribute to unjust outcomes. This is why tools combined with specific guidance are essential. 

As some scholars note, “There is currently no available general and comparative guidance on which tool is useful or appropriate for which purpose or audience. This limits the accessibility and usability of the toolkits and results in a risk that a practitioner would select a sub-optimal or inappropriate tool for their use case, or simply use the first one found without being conscious of the approach they are selecting over others.”

Companies utilizing the various packaged tools on their ADS could sign off on the results using code signing. This would create a record that these organizations ran these audits along their development lifecycle and received satisfactory outcomes. 

We envision a suite of auditing tools, each applying to a specific agency and enforcement task. Precedents for this type of technology already exist. Much as security became a part of the software development lifecycle with guidance developed by NIST, equity and fairness should be integrated into the AI lifecycle as well. NIST could spearhead a government-wide initiative on AI auditing tools, leading the guidance, distribution, and maintenance of such tools. NIST is an appropriate choice considering its history of evaluating technology and providing guidance around the development and use of specific AI applications, such as the NIST-led Face Recognition Vendor Test (FRVT).

Areas of Impact & Agencies / Departments Involved


Security & Justice
The U.S. Department of Justice, Civil Rights Division, Special Litigation Section; the Department of Homeland Security; U.S. Customs and Border Protection; the U.S. Marshals Service

Public & Social Sector
The U.S. Department of Housing and Urban Development’s Office of Fair Housing and Equal Opportunity

Education
The U.S. Department of Education

Environment
The U.S. Department of Agriculture, Office of the Assistant Secretary for Civil Rights; the Federal Energy Regulatory Commission; the Environmental Protection Agency

Crisis Response
Federal Emergency Management Agency 

Health & Hunger
The U.S. Department of Health and Human Services, Office for Civil Rights; the Centers for Disease Control and Prevention; the Food and Drug Administration

Economic
The Equal Employment Opportunity Commission, The U.S. Department of Labor, Office of Federal Contract Compliance Programs

Infrastructure
The U.S. Department of Transportation, Office of Civil Rights; the Federal Aviation Administration; the Federal Highway Administration

Information Verification & Validation
The Federal Trade Commission; the Federal Communications Commission; the Securities and Exchange Commission

Many of these tools are open source and free to the public. A first step could be combining these tools with agency-specific standards and plain language explanations of their implementation process.

Benefits

These tools would provide several benefits to federal agencies and developers alike. First, they allow organizations to protect their data and proprietary information while performing audits. Any audits, whether on the data, model, or overall outcomes, would be run and reported by the developers themselves. Developers of these systems are the best choice for this task since ADS applications vary widely, and the particular audits needed depend on the application. 

Second, while many developers may opt to use these tools voluntarily, standardizing and mandating their use would allow any system thought to be in violation of the law to be easily assessed. In this way, the federal government will be able to manage standards more efficiently and effectively.

Third, although this tool would be designed for the AI lifecycle that results in ADS, it can also be applied to traditional auditing processes. Metrics and evaluation criteria will need to be developed based on existing legal standards and evaluation processes; once these metrics are distilled for incorporation into a specific tool, this tool can be applied to non-ADS data as well, such as outcomes or final metrics from traditional audits.

Fourth, we believe that a strong signal from the government that equity considerations in ADS are important and easily enforceable will impact AI applications more broadly, normalizing these considerations.   

Example of Opportunity

An agency that might use this tool is the Department of Housing and Urban Development (HUD), whose purpose is to ensure that housing providers do not discriminate based on race, color, religion, national origin, sex, familial status, or disability.

To enforce these standards, HUD, which is responsible for 21,000 audits a year, investigates and audits housing providers to assess compliance with the Fair Housing Act, the Equal Credit Opportunity Act, and other related regulations. During these audits, HUD may review a provider’s policies, procedures, and records, as well as conduct on-site inspections and tests to determine compliance. 

Using an AI auditing tool could streamline and enhance HUD’s auditing processes. In cases where ADS were used and suspected of harm, HUD could ask for verification that an auditing process was completed and specific metrics were met, or require that such a process be undergone and reported to them. 

Noncompliance with legal standards of nondiscrimination would apply to ADS developers as well, and we envision the enforcement arms of protection agencies would apply the same penalties in these situations as they would in non-ADS cases.

R&D

To make this approach feasible, NIST will require funding and policy support to implement this plan. The recent CHIPS and Science Act has provisions to support NIST’s role in developing “trustworthy artificial intelligence and data science,” including the testbeds mentioned above. Research and development can be partially contracted out to universities and other national laboratories or through partnerships/contracts with private companies and organizations.

The first iterations will need to be developed in partnership with an agency interested in integrating an auditing tool into its processes. The specific tools and guidance developed by NIST must be applicable to each agency’s use case. 

The auditing process would include auditing data, models, and other information vital to understanding a system’s impact and use, informed by existing regulations/guidelines. If a system is found to be noncompliant, the enforcement agency has the authority to impose penalties or require changes to be made to the system.

Pilot program

NIST should develop a pilot program to test the feasibility of AI auditing. It should be conducted on a smaller group of systems to test the effectiveness of the AI auditing tools and guidance and to identify any potential issues or areas for improvement. NIST should use the results of the pilot program to inform the development of standards and guidelines for AI auditing moving forward.

Collaborative efforts

Achieving a self-regulating ecosystem requires collaboration. The federal government should work with industry experts and stakeholders to develop the necessary tools and practices for self-regulation.

A multistakeholder team drawn from NIST, federal agency issue experts, and ADS developers should be established during the development and testing of the tools. Collaborative efforts will help delineate responsibilities, with AI creators and users responsible for implementing and maintaining compliance with the standards and guidelines, and agency enforcement arms responsible for ensuring continued compliance.

Regular monitoring and updates

The enforcement agencies will continuously monitor and update the standards and guidelines to keep them up to date with the latest advancements and to ensure that AI systems continue to meet the legal and ethical standards set forth by the government.

Transparency and record-keeping

Code-signing technology can be used to provide transparency and record-keeping for ADS. This can be used to store information on the auditing outcomes of the ADS, making reporting easy and verifiable and providing a level of accountability to users of these systems.

Conclusion

Creating auditing tools for ADS presents a significant opportunity to enhance equity, transparency, accountability, and compliance with legal and ethical standards. The federal government can play a crucial role in this effort by investing in the research and development of tools, developing guidelines, gathering stakeholders, and enforcing compliance. By taking these steps, the government can help ensure that ADS are developed and used in a manner that is safe, fair, and equitable.

WHAT IS AN ALGORITHMIC DECISION-MAKING SYSTEM
An algorithmic decision-making system (ADS) is software that uses algorithms to make decisions or take actions based on data inputs, sometimes without human intervention. ADS are used in a wide range of applications, from customer service chatbots to screening job applications to medical diagnosis systems. ADS are designed to analyze data and make decisions or predictions based on that data, which can help automate routine or repetitive tasks, improve efficiency, and reduce errors. However, ADS can also raise ethical and legal concerns, particularly when it comes to bias and privacy.
WHAT IS AN ALGORITHMIC AUDIT
An algorithmic audit is a process that examines automated decision-making systems and algorithms to ensure that they are fair, transparent, and accountable. Algorithmic audits are typically conducted by independent third-party auditors or specialized teams within organizations. These audits examine various aspects of the algorithm, such as the data inputs, the decision-making process, and the outcomes produced, to identify any biases or errors. The goal is to ensure that the system operates in a manner consistent with ethical and legal standards and to identify opportunities to improve the system’s accuracy and fairness.
WHAT IS CODE SIGNING, AND WHY IS IT INVOLVED?
Code signing is the process of digitally signing software and code to verify the integrity and authenticity of the code. It involves adding a digital signature to the code, which is a unique cryptographic hash that is generated using a private key held by the code signer. The signature is then embedded into the code along with other metadata.

Code signing is used to establish trust in code that is distributed over the internet or other networks. By digitally signing the code, the code signer is vouching for its identity and taking responsibility for its contents. When users download code that has been signed, their computer or device can verify that the code has not been tampered with and that it comes from a trusted source.

Code signing can be extended to all parts of the AI lifecycle as a means of verifying the authenticity, integrity, and function of a particular piece of code or a larger process. After each step in the auditing process, code signing enables developers to leave a well-documented trail for enforcement bodies/auditors to follow if a system were suspected of unfair discrimination or unsafe operation.
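
A minimal sketch of what signing an audit record could look like in practice is shown below, assuming the widely used Python cryptography package for an Ed25519 signature; the record fields, system name, and workflow are hypothetical, and real deployments would need managed keys, certificates, and an agreed record format.

```python
# Minimal sketch of signing an audit record so enforcement bodies can later verify
# that it has not been altered. Uses SHA-256 hashing plus an Ed25519 signature from
# the third-party "cryptography" package; key management, certificates, and the exact
# record format are deliberately out of scope here.
import json, hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical audit record produced at one stage of the AI lifecycle.
audit_record = {
    "system": "loan-screening-model-v3",   # illustrative system name
    "lifecycle_stage": "pre-deployment",
    "disparate_impact": 0.91,
    "tool": "AI Fairness 360",
}

payload = json.dumps(audit_record, sort_keys=True).encode("utf-8")
digest = hashlib.sha256(payload).hexdigest()   # fingerprint stored alongside the record

private_key = Ed25519PrivateKey.generate()     # in practice, the developer's managed signing key
signature = private_key.sign(payload)

# Verification step an auditor or agency could perform with the developer's public key.
public_key = private_key.public_key()
public_key.verify(signature, payload)          # raises InvalidSignature if the record was tampered with

print("sha256:", digest)
print("signature verified for audit record")
```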

Code signing is not essential for this project’s success, and we believe that the specifics of the auditing process, including documentation, are best left to individual agencies and their needs. However, code signing could be a useful piece of any tools developed.
WHAT IS AN AI AUDITOR
An AI auditor is a professional who evaluates and ensures the fairness, transparency, and accountability of AI systems. AI auditors often have experience in risk management, IT or cybersecurity auditing, or engineering, and use frameworks such as the IIA's AI Framework, the COSO ERM Framework, or the U.S. GAO's Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities. Much like other IT auditors, they review and audit the development, deployment, and operation of systems to ensure that they align with business objectives and legal standards. More than in other fields, AI auditors have also been pushed to consider sociotechnical issues. This includes analyzing the underlying algorithms and data used to develop the AI system, assessing its impact on various stakeholders, and recommending improvements to ensure that it is being used effectively.
WHY SHOULD THE FEDERAL GOVERNMENT BE THE ENTITY TO ACT RATHER THAN THE PRIVATE SECTOR OR STATE/LOCAL GOVERNMENT?
The federal government is uniquely positioned to take the lead on this issue because of its responsibility to protect civil rights and ensure compliance with federal laws and regulations. The federal government can provide the necessary resources, expertise, and implementation guidance to ensure that AI systems are audited in a fair, equitable, and transparent manner.
WHO IS LIKELY TO PUSH BACK ON THIS PROPOSAL AND HOW CAN THAT HURDLE BE OVERCOME?
Industry stakeholders may be resistant to these changes. They should be engaged in the development of tools and guidelines so their concerns can be addressed and effort should be made to clearly communicate the benefits of increased accountability and transparency for both the industry and the public. Collaboration and transparency are key to overcoming potential hurdles, as is making any tools produced user-friendly and accessible.

Additionally, there may be pushback on the tool design. It is important to remember that currently, engineers often use fairness tools at the end of a development process, as a last box to check, instead of as an integrated part of the AI development lifecycle. These concerns can be addressed by emphasizing the comprehensive approach taken and by developing the necessary guidance to accompany these tools—which does not currently exist.
WHAT ARE SOME OTHER EXAMPLES OF HOW AI HAS HARMED SOCIETY
Example #1: Healthcare

New York regulators are calling on UnitedHealth Group to either stop using or prove there is no problem with a company-made algorithm that researchers say exhibited significant racial bias. This algorithm, which UnitedHealth Group sells to hospitals for assessing the health risks of patients, assigned similar risk scores to white patients and Black patients even though the Black patients were considerably sicker.

In this case, researchers found that changing just one parameter could generate “an 84% reduction in bias.” If we had specific information on the parameters going into the model and how they are weighted, we would have a record-keeping system to see how certain interventions affected the output of this model.

Bias in AI systems used in healthcare could potentially violate the Constitution’s Equal Protection Clause, which prohibits discrimination on the basis of race. If the algorithm is found to have a disproportionately negative impact on a certain racial group, this could be considered discrimination. It could also potentially violate the Due Process Clause, which protects against arbitrary or unfair treatment by the government or a government actor. If an algorithm used by hospitals, which are often funded by the government or regulated by government agencies, is found to exhibit significant racial bias, this could be considered unfair or arbitrary treatment.

Example #2: Policing

A UN panel on the Elimination of Racial Discrimination has raised concern over the increasing use of technologies like facial recognition in law enforcement and immigration, warning that it can exacerbate racism and xenophobia and potentially lead to human rights violations. The panel noted that while AI can enhance performance in some areas, it can also have the opposite effect as it reduces trust and cooperation from communities exposed to discriminatory law enforcement. Furthermore, the panel highlights the risk that these technologies could draw on biased data, creating a “vicious cycle” of overpolicing in certain areas and more arrests. It recommends more transparency in the design and implementation of algorithms used in profiling and the implementation of independent mechanisms for handling complaints.

A case study on the Chicago Police Department's Strategic Subject List (SSL) discusses an algorithm-driven technology used by the department to identify individuals at high risk of being involved in gun violence and to inform its policing strategies. However, a RAND Corporation study of an early version of the SSL found that it was not successful in reducing gun violence or the likelihood of victimization, and that inclusion on the SSL had a direct effect only on arrests. The study also raised significant privacy and civil rights concerns. Additionally, findings reveal that more than one-third of individuals on the SSL have never been arrested or been a victim of a crime, yet approximately 70% of that cohort received a high-risk score. Furthermore, 56% of Black men under the age of 30 in Chicago have a risk score on the SSL. This demographic has also been disproportionately affected by the CPD's past discriminatory practices, including the torture of Black men between 1972 and 1994, unlawful stops and frisks performed disproportionately on Black residents, a pattern or practice of unconstitutional use of force, poor data collection, and systemic deficiencies in training, supervision, and accountability systems, with conduct disproportionately affecting Black and Latino residents.

Predictive policing, which uses data and algorithms to try to predict where crimes are likely to occur, has been criticized for reproducing and reinforcing biases in the criminal justice system. This can lead to discriminatory practices and violations of the Fourth Amendment’s prohibition on unreasonable searches and seizures, as well as the Fourteenth Amendment’s guarantee of equal protection under the law. Additionally, bias in policing more generally can also violate these constitutional provisions, as well as potentially violating the Fourth Amendment’s prohibition on excessive force.

Example #3: Recruiting

ADS in recruiting crunch large amounts of personal data and, given some objective, derive relationships between data points. The aim is to use systems capable of processing more data than a human ever could to uncover hidden relationships and trends that will then provide insights for people making all types of difficult decisions.

Hiring managers across different industries use ADS every day to aid in the decision-making process. In fact, a 2020 study reported that 55% of human resources leaders in the United States use predictive algorithms across their business practices, including hiring decisions.

For example, employers use ADS to screen and assess candidates during the recruitment process and to identify best-fit candidates based on publicly available information. Some systems even analyze facial expressions during interviews to assess personalities. These systems promise organizations a faster, more efficient hiring process. ADS do theoretically have the potential to create a fairer, qualification-based hiring process that removes the effects of human bias. However, they also possess just as much potential to codify new and existing prejudice across the job application and hiring process.

The use of ADS in recruiting could violate several federal antidiscrimination laws, including Title VII of the Civil Rights Act of 1964 and the Americans with Disabilities Act, which prohibit workplace discrimination on the basis of race, gender, and disability, among other protected characteristics. These systems could also potentially violate job applicants’ privacy and due process rights. If the systems are found to be discriminatory or otherwise unlawful, employers that use them could face legal action.

What open-source tools could be leveraged for this project?
Aequitas, Accenture Algorithmic Fairness, Alibi Explain, AllenNLP, BlackBox Auditing, DebiasWE, DiCE, ErrorAnalysis, EthicalML xAI, Facebook DynaBoard, Fairlearn, FairSight, FairTest, FairVis, FoolBox, Google Explainable AI, Google KnowYourData, Google ML Fairness Gym, Google PAIR Facets, Google PAIR Language Interpretability Tool, Google PAIR Saliency, Google PAIR What-If Tool, IBM Adversarial Robustness Toolbox, IBM AI Fairness 360, IBM AI Explainability 360, Lime, MLI, ODI Data Ethics Canvas, Parity, PET Repository, PwC Responsible AI Toolkit, Pymetrics audit-AI, RAN-debias, REVISE, Saidot, SciKit Fairness, Skater, Spatial Equity Data Tool, TCAV, UnBias Fairness Toolkit
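As a hedged illustration of how one of the tools listed above might be applied, the sketch below uses Fairlearn to compare selection rates across demographic groups for a hypothetical screener’s decisions. The group labels, ground truth, and predictions are synthetic assumptions standing in for real audit inputs.

```python
# Illustrative audit sketch using Fairlearn (listed above). The predictions and
# group labels are synthetic stand-ins for a real ADS's outputs and applicant data.
import numpy as np
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference

rng = np.random.default_rng(1)
n = 1_000
group = rng.choice(["group_a", "group_b"], size=n)  # protected attribute (synthetic)
y_true = rng.integers(0, 2, n)                      # "qualified" ground truth (synthetic)
# Hypothetical screener that advances group_a applicants more often.
y_pred = np.where(group == "group_a",
                  rng.random(n) < 0.45,
                  rng.random(n) < 0.25).astype(int)

frame = MetricFrame(metrics=selection_rate,
                    y_true=y_true, y_pred=y_pred,
                    sensitive_features=group)
print("Selection rate by group:\n", frame.by_group)
print("Demographic parity difference:",
      demographic_parity_difference(y_true, y_pred, sensitive_features=group))
```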

Supporting Historically Disadvantaged Workers through a National Bargaining in Good Faith Fund

Summary

Black, Indigenous, and other people of color (BIPOC) are underrepresented in labor unions. Further, people working in the gig economy, tech supply chain, and other automation-adjacent roles face a huge barrier to unionizing their workplaces. These roles, which are among the fastest-growing segments of the U.S. economy, are overwhelmingly filled by BIPOC workers. In the absence of safety nets for these workers, the racial wealth gap will continue to grow. The Biden-Harris Administration can promote racial equity and support low-wage BIPOC workers’ unionization efforts by creating a National Bargaining in Good Faith Fund.

As a whole, unions lift workers to a better standard of living, but historically they have failed to protect workers of color. The emergence of labor unions in the early 20th century was propelled by the passage of the National Labor Relations Act (NLRA), also known as the Wagner Act of 1935. The NLRA was a beacon of light for many working Americans, affording them the benefits of union membership, such as higher wages, job security, and better working conditions, that allowed many to transition into the middle class. Yet the law’s protections were not applied to all working people equally. Labor unions in the 20th century were often segregated, and BIPOC workers were frequently excluded from the benefits of unionization. For example, the Wagner Act excluded domestic and agricultural workers and permitted labor unions to discriminate against workers of color in other industries, such as manufacturing.

Today, in the aftermath of the COVID-19 pandemic and amid a renewed interest in a racial reckoning in the United States, BIPOC workers—notably young and women BIPOC workers—are leading efforts to organize their workplaces. In addition to demanding wage equity and fair treatment, they are also fighting for health and safety on the job. Unionized workers earn on average 11.2% more in wages than their nonunionized peers. Unionized Black workers earn 13.7% more and unionized Hispanic workers 20.1% more than their nonunionized peers. But every step of the way, tech giants and multinational corporations are opposing workers’ efforts and their legal right to organize, making organizing a risky undertaking.

A National Bargaining in Good Faith Fund would provide immediate and direct financial assistance to workers who have been retaliated against for attempting to unionize, especially those from historically disadvantaged groups in the United States. This fund offers a simple and effective solution to alleviate financial hardships, allowing affected workers to use the funds for pressing needs such as rent, food, or job training. It is crucial that we advance racial equity, and this fund is one step toward achieving that goal by providing temporary financial support to workers during their time of need. Policymakers should support this initiative as it offers direct payments to workers who have faced illegal retaliation, providing a lifeline for historically disadvantaged workers and promoting greater economic justice in our society.

Challenges and Opportunities

The United States faces several converging challenges. First is our rapidly evolving economy, in which technological advances and automation threaten to displace millions of already vulnerable low-wage workers. The COVID-19 pandemic accelerated automation, which is a long-term strategy for the tech companies that underpin the gig economy. According to a report by an independent research group, self-driving taxis are likely to dominate the ride-hailing market by 2030, potentially displacing 8 million human drivers in the United States alone.

Second, we have a generation of workers who have not reaped the benefits associated with good-paying union jobs due to decades of anti-union activities. As of 2022, union membership has dropped from more than 30% of wage and salary workers in the private sector in the 1950s to just 6.3%. The declining percentage of workers represented by unions is associated with widespread and deep economic inequality, stagnant wages, and a shrinking middle class. Lower union membership rates have contributed to the widening of the pay gap for women and workers of color.

Third, historically disadvantaged groups are overrepresented in nonunionized, low-wage, app-based, and automation-adjacent work. This is due in large part to systemic racism. These structures adversely affect BIPOC workers’ ability to obtain quality education and training, create and pass on generational wealth, or follow through on the steps required to obtain union representation.

Workers face tremendous opposition to unionization efforts from companies that spend hundreds of millions of dollars and use retaliatory actions, disinformation, and other intimidating tactics to stop them from organizing a union. For example, in New York, Black organizer Chris Smalls led the first successful union drive in a U.S. Amazon facility after the company fired him for his activities and made him a target of a smear campaign against the union drive. Smalls’s story is just one illustration of how BIPOC workers are in the middle of the collision between automation and anti-unionization efforts. 

The recent surge of support for workers’ rights is a promising development, but BIPOC workers face challenges that extend beyond anti-union tactics. Employer retaliation is also a concern. Workers targeted for retaliation suffer from reduced hours or even job loss. For instance, a survey conducted at the beginning of the COVID-19 pandemic revealed that one in eight workers perceived possible retaliatory actions by their employers against colleagues who raised health and safety concerns. Furthermore, Black workers were more than twice as likely as white workers to experience such possible retaliation. This sobering statistic is a stark reminder of the added layers of discrimination and economic insecurity that BIPOC workers have to navigate when advocating for better working conditions and wages. 

The time to enact strong policy supporting historically disadvantaged workers is now. Advancing racial equity and racial justice is a focus of the Biden-Harris Administration, and the political and social will is evident. The Administration’s day one Executive Order on Advancing Racial Equity and Support for Underserved Communities Through the Federal Government seeks to develop policies designed to advance equity for all, including people of color and others who have been historically underinvested in, marginalized, and adversely affected by persistent poverty and inequality. Additionally, the establishment of the White House Task Force on Worker Organizing and Empowerment is a significant development. Led by Vice President Kamala Harris and Secretary of Labor Marty Walsh, the Task Force aims to empower workers to organize and negotiate with their employers through federal government policies, programs, and practices.

A key focus for the Task Force is to increase worker power in underserved communities by examining and addressing the challenges faced by workers in jurisdictions with restrictive labor laws, marginalized workers, and workers in certain industries. The Task Force is well-timed, given the increased support for workers’ rights demonstrated through the record-high number of petitions filed with the National Labor Relations Board and the rise in strikes over the past two years. The Task Force’s approach to empowering workers and supporting their ability to organize and negotiate through federal government policies and programs offers a promising opportunity to address the unique challenges faced by BIPOC workers in unionization efforts.

The National Bargaining in Good Faith Fund is a critical initiative that can help level the playing field by providing financial assistance to workers facing opposition from employers who refuse to engage in good-faith bargaining, thereby expanding access to unions for Black, Indigenous, and other people of color. In addition, the proposed initiative would reinforce Equal Employment Opportunity Commission (EEOC) and National Labor Relations Board (NLRB) policies regarding employer discrimination and retaliation. The Bargaining in Good Faith Fund will provide direct payments to workers whose employers have retaliated against them for engaging in union organizing activities. The initiative also includes monitoring cases where a violation has occurred against workers involved in union organization and connecting their bargaining unit with relevant resources to support their efforts. With the backing of the Task Force, the fund could make a significant difference in the lives of workers facing barriers to organizing.

Plan of Action

While the adoption of a policy like the Bargaining in Good Faith Fund is unprecedented at the federal level, we draw inspiration from successful state-level initiatives aimed at improving worker well-being. Two notable examples are initiatives enacted in California and New York, where state lawmakers provided temporary monetary assistance to workers affected by the COVID-19 pandemic. Taking a cue from these successful programs, we can develop federal policies that better support workers, especially those belonging to historically disadvantaged groups.

The successful implementation of worker-led, union-organized, and community-led strike assistance funds, as well as similar initiatives for low-wage, app-based, and automation-adjacent workers, indicates that the Bargaining in Good Faith Fund has strong potential for success. For example, the Coworker Solidarity Fund provides legal, financial, and strategic support for worker-activists organizing to improve their companies and invests in ecosystems that increase worker power and improve economic livelihoods and social conditions across the U.S. South.

New York state lawmakers have also set a precedent with their transformative Excluded Workers Fund, which provided direct financial support to workers left out of pandemic relief programs. The $2.1 billion Excluded Workers Fund, passed by the New York state legislature and governor in April 2021, was the first large-scale program of its kind in the country. By examining and building on these successes, we can develop federal policies that better support workers across the country.

A national program requires multiple funding methods, and several mechanisms have been identified to establish the National Bargaining in Good Faith Fund. First, existing policy needs to be strengthened so that companies violating labor laws face financial consequences. A labor law violation tax, set as a percentage of an offending company’s profits or revenue, would be directed to the Bargaining in Good Faith Fund. Penalties could also be imposed on companies that engage in retaliatory behavior, with the resulting funds likewise directed to the Bargaining in Good Faith Fund. New legislation from Congress would be needed to authorize these financial consequences and to enforce existing federal policy.

Second, as natural allies in the fight to safeguard workers’ rights, labor unions should allocate a portion of their dues toward the fund. By pooling their resources, a portion of union dues could be directed to the federal fund.

Third, a portion of the fees paid into the federal unemployment insurance program should be redirected to the Bargaining in Good Faith Fund.

Fourth, existing funding for worker protections, currently siloed in agencies, should be reallocated to support the Bargaining in Good Faith Fund more effectively. To qualify for the fund, workers receiving food assistance and/or Temporary Assistance for Needy Families benefits should be automatically eligible once the NLRB and the EEOC recognize the instance of retaliation. Workers who are not eligible could apply directly to the Fund through a state-appointed agency. This targeted approach aims to support those who face significant barriers to accessing resources and protections that safeguard their rights and well-being due to historical labor exploitation and discrimination.

Several federal agencies could collaborate to oversee the Bargaining in Good Faith Fund, including the Department of Labor, the EEOC, the Department of Justice, and the NLRB. These agencies have the authority to safeguard workers’ welfare, enforce federal laws prohibiting employment discrimination, prosecute corporations that engage in criminal retaliation, and enforce workers’ rights to engage in concerted activities for protection, such as organizing a union.

Conclusion

The federal government has had a policy of supporting worker organizing and collective bargaining since the passage of the National Labor Relations Act in 1935. However, the federal government has not fully implemented that policy over the past 86 years, with negative consequences for BIPOC workers, who face systemic racism in the unionization process and on the job. Meanwhile, rapid technological advances have automated tasks and reshaped the labor market in ways that disproportionately affect workers of color. Consequently, the United States is likely to see an increase in wealth inequality over the next two decades.

The Biden-Harris Administration can act now to promote racial equity by establishing a National Bargaining in Good Faith Fund to support historically disadvantaged workers in unionization efforts. Because this is a pressing issue, a feasible short-term solution is to initiate a pilot program over the next 18 months. It is imperative to establish a policy that acknowledges and addresses the historical disadvantage experienced by these workers and supports their efforts to attain economic equity.

How would the Fund identify, prove eligible, and verify the identity of workers who would have access to the Fund?
Any worker currently receiving food assistance and/or Temporary Assistance for Needy Families benefits would automatically become eligible once the instance of retaliation is recognized by NLRB and EEOC. If the worker is not enrolled or currently eligible, they may apply directly to the program.
Why is the focus only on providing direct cash payments?
Demonstrating eligibility for direct payments would depend on policy criteria. Evidence of discrimination could be required through documentation or a claims process where individuals provide testimony. The process could involve a combination of both methods, requiring both documentation and a claims process administered by a state agency.
Are there any examples of federal policies that provide direct payments to specific groups of people?
There are currently no federal policies that provide direct payments to individuals who have been disproportionately impacted by historical injustices, such as discrimination in housing, education, and employment. However, in recent years some local and state governments have implemented or proposed similar policies.

For example, in 2019, the city of Evanston, Illinois, established a fund to provide reparations to Black residents who can demonstrate that they or their ancestors have been affected by discrimination in housing, education, and employment. The fund is financed by a three percent tax on the sale of recreational marijuana and is intended to provide financial assistance for housing, education, and other needs.

Another example is the proposed H.R. 40 bill in the U.S. Congress that aims to establish a commission to study and develop proposals for reparations for African Americans who are descendants of slaves and who have been affected by slavery, discrimination, and exclusion from opportunities. The bill aims to study the impacts of slavery and discrimination and develop proposals for reparations that would address the lingering effects of these injustices, including the denial of education, housing, and other benefits.
Racial equity seems like a lightning rod in today’s political climate. Given that, are there any examples of federal policy concerning racial equity that have been challenged in court?
There have been several federal policies concerning racial equity that have been challenged in court throughout American history. Here are a few notable examples:

The Civil Rights Act of 1964, which banned discrimination on the basis of race, color, religion, sex, or national origin, was challenged in court but upheld by the Supreme Court in 1964.
The Voting Rights Act of 1965, which aimed to eliminate barriers to voting for minorities, has been challenged in court several times over the years; the Supreme Court upheld key provisions in 1966 but struck down the law’s coverage formula in 2013.
The Fair Housing Act of 1968, which banned discrimination in housing, was challenged in court and upheld by the Supreme Court in 1968.
Affirmative action policies, which aim to increase the representation of minorities in education and the workforce, have been challenged in court multiple times over the years, with the Supreme Court upholding the use of race as a factor in college admissions in 2016.

Despite court challenges, policymakers must persist in bringing forward solutions that address racial equity. Many federal policies aimed at promoting racial equity have been challenged in court over the years, on constitutional and other grounds, and many have ultimately been upheld.

Ensuring Racial Equity in Federal Procurement and Use of Artificial Intelligence

Summary

In pursuit of lower costs and improved decision-making, federal agencies have begun to adopt artificial intelligence (AI) to assist in government decision-making and public administration. As AI occupies a growing role within the federal government, algorithmic design and evaluation will increasingly become a key site of policy decisions. Yet a 2020 report found that almost half (47%) of all federal agency use of AI was externally sourced, with a third procured from private companies. In order to ensure that agency use of AI tools is legal, effective, and equitable, the Biden-Harris Administration should establish a Federal Artificial Intelligence Program to govern the procurement of algorithmic technology. Additionally, the AI Program should establish a strict data collection protocol around the collection of race data needed to identify and mitigate discrimination in these technologies.

Researchers who study and conduct algorithmic audits highlight the importance of race data for effective anti-discrimination interventions, the challenges of category misalignment between data sources, and the need for policy interventions to ensure accessible and high-quality data for audit purposes. However, inconsistencies in the collection and reporting of race data significantly limit the extent to which the government can identify and address racial discrimination in technical systems. Moreover, given significant flexibility in how their products are presented during the procurement process, technology companies can manipulate race categories in order to obscure discriminatory practices. 

To ensure that the AI Program can evaluate any inequities at the point of procurement, the Office of Science and Technology Policy (OSTP) National Science and Technology Council Subcommittee on Equitable Data should establish guidelines and best practices for the collection and reporting of race data. In particular, the Subcommittee should produce a report that identifies the minimum level of data private companies should be required to collect and in what format they should report such data during the procurement process. These guidelines will facilitate the enforcement of existing anti-discrimination laws and help the Biden-Harris Administration pursue their stated racial equity agenda. Furthermore, these guidelines can help to establish best practices for algorithm development and evaluation in the private sector. As technology plays an increasingly important role in public life and government administration, it is essential not only that government agencies are able to access race data for the purposes of anti-discrimination enforcement—but also that the race categories within this data are not determined on the basis of how favorable they are to the private companies responsible for their collection.

Challenge and Opportunity

Research suggests that governments often have little information about key design choices in the creation and implementation of the algorithmic technologies they procure. Often, these choices are not documented, or they are recorded by contractors but never provided to government clients during the procurement process. Existing regulation imposes specific requirements on the procurement of information technology, for example with respect to security and privacy risks, but these requirements do not account for the specific risks of AI, such as its propensity to encode structural biases. Under the Federal Acquisition Regulation, agencies can only evaluate vendor proposals based on the criteria specified in the associated solicitation. Therefore, written guidance is needed to ensure that these criteria include sufficient information to assess the fairness of AI systems acquired during procurement.

The Office of Management and Budget (OMB) defines minimum standards for collecting race and ethnicity data in federal reporting. Racial and ethnic categories are separated into two questions with five minimum categories for race data (American Indian or Alaska Native, Asian, Black or African American, Native Hawaiian or Other Pacific Islander, and White) and one minimum category for ethnicity data (Hispanic or Latino). Despite these standards, guidelines for the use of racial categories vary across federal agencies and even across specific programs. For example, the Census Bureau classification scheme includes a “Some Other Race” option not used in other agencies’ data collection practices. Moreover, guidelines for collection and reporting of data are not always aligned. For example, the U.S. Department of Education recommends collecting race and ethnicity data separately without a “two or more races” category and allowing respondents to select all race categories that apply. However, during reporting, any individual who is ethnically Hispanic or Latino is reported as only Hispanic or Latino and not any other race. Meanwhile, any respondent who selected multiple race options is reported in a “two or more races” category rather than in any racial group with which they identified.
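To make the reporting rule just described concrete, the following is a minimal sketch, assuming the Department of Education-style collapse exactly as characterized above; the category strings, function name, and handling of missing responses are illustrative rather than an official specification.

```python
# Sketch of the reporting collapse described above: ethnicity takes precedence,
# then multiple race selections collapse to "Two or more races". Labels and the
# function name are illustrative assumptions, not an official standard.
def collapse_for_reporting(hispanic_or_latino: bool, races: list[str]) -> str:
    if hispanic_or_latino:
        return "Hispanic or Latino"   # reported regardless of race selections
    if len(races) > 1:
        return "Two or more races"    # individual selections are not reported
    if len(races) == 1:
        return races[0]
    return "Unknown"                  # missing response; agency handling varies

# A respondent who self-identified as both Asian and White, not Hispanic or Latino:
print(collapse_for_reporting(False, ["Asian", "White"]))            # Two or more races
print(collapse_for_reporting(True, ["Black or African American"]))  # Hispanic or Latino
```

The sketch shows how information volunteered by respondents, such as multiple race selections or race reported alongside Hispanic or Latino ethnicity, is discarded at the reporting stage, which is precisely the kind of category misalignment that complicates cross-agency comparison.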

These inconsistencies are exacerbated in the private sector, where companies are not uniformly constrained by the same OMB standards but rather covered by piecemeal legislation. In the employment context, private companies are required to collect and report on demographic details of their workforce according to the OMB minimum standards. In the consumer lending setting, on the other hand, lenders are typically not allowed to collect data about protected classes such as race and gender. In cases where protected class data can be collected, these data are typically considered privileged information and cannot be accessed by the government. In the case of algorithmic technologies, companies are often able to discriminate on the basis of race without ever explicitly collecting race data by using features or sets of features that act as proxies for protected classes. Facebook’s advertising algorithms, for instance, can be used to target race and ethnicity without access to race data. 
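The proxy problem can be illustrated with a small, entirely synthetic sketch: a model that never receives race as an input, but does receive a feature strongly correlated with race (here a stylized neighborhood flag), still produces sharply different approval rates by group. The correlation strength, features, and outcome rule are assumptions chosen only for illustration.

```python
# Hypothetical sketch of proxy discrimination: the model never sees race,
# but a correlated feature ("neighborhood") lets it reproduce a racial disparity.
# All data are synthetic and the correlation strength is an assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5_000
race = rng.choice(["group_a", "group_b"], size=n)
# Residential segregation makes neighborhood a strong proxy for race.
neighborhood = np.where(race == "group_a",
                        rng.random(n) < 0.8,
                        rng.random(n) < 0.2).astype(int)
income = rng.normal(50, 10, n)

# Historical outcomes that disadvantaged one neighborhood (and hence one group).
approved = ((income / 100) + 0.4 * neighborhood + rng.normal(0, 0.1, n)) > 0.7

X = np.column_stack([income, neighborhood])  # race is never included as a feature
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

for g in ["group_a", "group_b"]:
    print(g, "approval rate:", round(pred[race == g].mean(), 3))
```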

Federal leadership can help create consistency in reporting to ensure that the government has sufficient information to evaluate whether privately developed AI is functioning as intended and working equitably. By reducing information asymmetries between private companies and agencies during the procurement process, new standards will bring policymakers back into the algorithmic governance process. This will ensure that democratic and technocratic norms of agency rule-making are respected even as privately developed algorithms take on a growing role in public administration.

Additionally, by establishing a program to oversee the procurement of artificial intelligence, the federal government can ensure that agencies have access to the necessary technical expertise to evaluate complex algorithmic systems. This expertise is crucial not only during the procurement stage but also—given the adaptable nature of AI—for ongoing oversight of algorithmic technologies used within government. 

Plan of Action

Recommendation 1. Establish a Federal Artificial Intelligence Program to oversee agency procurement of algorithmic technologies. 

The Biden-Harris Administration should create a Federal AI Program to create standards for information disclosure and enable evaluation of AI during the procurement process. Following the two-part test outlined in the AI Bill of Rights, the proposed Federal AI Program would oversee the procurement of any “(1) automated systems that (2) have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services.”

The goals of this program will be to (1) establish and enforce quality standards for AI used in government, (2) enforce rigorous equity standards for AI used in government, (3) establish transparency practices that enable public participation and political accountability, and (4) provide guidelines for AI program development in the private sector.

Recommendation 2. Produce a report to establish what data are needed in order to evaluate the equity of algorithmic technologies during procurement.

To support the AI Program’s operations, the OSTP National Science and Technology Council Subcommittee on Equitable Data should produce a report to establish guidelines for the collection and reporting of race data that balance three goals: (1) high-quality data for enforcing existing anti-discrimination law, (2) consistency in race categories to reduce administrative burdens and curb possible manipulation, and (3) prioritizing the needs of groups most affected by discrimination. The report should include opportunities and recommendations for integrating its findings into policy. To ensure the recommendations and standards are instituted, the President should direct the General Services Administration (GSA) or OMB to issue guidance and request that agencies document how they will ensure new standards are integrated into future procurement vehicles. The report could also suggest opportunities to update or amend the Federal Acquisition Regulation.

High-Quality Data

The new guidelines should make efforts to ensure the reliability of race data furnished during the procurement process. In particular:

  1. Self-identification should be used whenever possible to ascertain race. As of 2021, Food and Nutrition Service guidance recommends against the use of visual identification, citing reliability concerns, respect for respondents’ dignity, and feedback from Child and Adult Care Food Program and Summer Food Service Program participants.
  2. The new guidelines should attempt to reduce missing data. People may be reluctant to share race information for many legitimate reasons, including uncertainty about how personal data will be used, fear of discrimination, and not identifying with predefined race categories. These concerns can severely impact data quality and should be addressed to the extent possible in the OMB guidelines. New York’s state health insurance marketplace saw a 20% increase in the response rate for race by making several changes to the way it collects data, including explaining how the data would be used and, rather than allowing respondents to leave the question blank, letting them select “choose not to answer” or “don’t know.” Similarly, the Census Bureau found that a single combined race and ethnicity question improved data quality and consistency by reducing the rate of “some other race,” missing, and invalid responses as compared with two separate questions (one for race and one for ethnicity).
  3. The new guidelines should follow best practices established through rigorous research and feedback from a variety of stakeholders. In June 2022, the OMB announced a formal review process to revise Statistical Policy Directive No. 15: Standards for Maintaining, Collecting, and Presenting Federal Data on Race and Ethnicity. While this review process is intended for the revision of federal data requirements, its findings can help inform best practices for collection and reporting requirements for nongovernmental data as well.

Consistency in Data Reporting 

Whenever possible and contextually appropriate, the guidelines for data reporting should align with the OMB guidelines for federal data reporting to reduce administrative burdens. However, the report may find that additional data, beyond what the OMB guidelines require, are needed to evaluate privately developed AI.

Prioritizing the Needs of Affected Groups

In its Toolkit for Centering Racial Equity Throughout Data Integration, the Actionable Intelligence for Social Policy group at the University of Pennsylvania identifies best practices for ensuring that data collection serves the groups most affected by discrimination. In particular, the toolkit emphasizes the need for strong privacy protections and stakeholder engagement. In its final report, the Subcommittee on Equitable Data should establish protocols for securing data and for carefully considered, role-based access to it.

The final report should also engage community stakeholders in determining which data should be collected and establish a plan for ongoing review that engages with relevant stakeholders, prioritizing affected populations and racial equity advocacy groups. The report should evaluate the appropriate level of transparency in the AI procurement process, in particular, trade-offs between desired levels of transparency and privacy.

Conclusion

Under existing procurement law, the government cannot outsource “inherently governmental functions.” Yet key policy decisions are embedded within the design and implementation of algorithmic technology. Consequently, it is important that policymakers have the necessary resources and information throughout the acquisition and use of procured AI tools. A Federal Artificial Intelligence Program would provide expertise and authority within the federal government to assess these decisions during procurement and to monitor the use of AI in government. In particular, this would strengthen the Biden-Harris Administration’s ongoing efforts to advance racial equity. The proposed program can build on both long-standing and ongoing work within the federal government to develop best practices for data collection and reporting. These best practices will not only ensure that the public use of algorithms is governed by strong equity and transparency standards in the public sector but also provide a powerful avenue for shaping the development of AI in the private sector.