Supporting Device Reprocessing to Reduce Waste in Health Care

The U.S. healthcare system produces 5 million tons of waste annually, or approximately 29 pounds per hospital bed daily. Roughly 80 percent of the healthcare industry’s carbon footprint comes from the production, transportation, use, and disposal of single-use devices (SUDs), which are pervasive in hospitals. Notably, 95 percent of the environmental impact of single-use medical products results from their production.

While the Food and Drug Administration (FDA) oversees new devices being brought to market, it is up to the manufacturer to determine whether a device will be marketed as single-use or reusable. Manufacturers have a financial incentive to market devices as “single-use” or “disposable,” since marketing a device as reusable requires expensive cleaning validations.

To decrease healthcare waste and environmental impact, the FDA should take the lead in identifying devices that can be safely reprocessed and in incentivizing manufacturers to validate reprocessing of their devices. This will require the FDA to strengthen its management of single-use and reusable device labeling. Further, the Veterans Health Administration, the nation’s largest healthcare system, should reverse its prohibition on reprocessed SUDs and become a national leader in the reprocessing of medical devices.

Challenge and Opportunity

While healthcare institutions are embracing decarbonization and waste reduction plans, they cannot do so effectively without addressing the enormous impact of single-use devices. The majority of the research literature concludes that SUDs have a greater environmental impact than reusable products.

FDA regulations governing SUD reprocessing make it extremely challenging for hospitals to reprocess low-risk SUDs, which is inconsistent with the FDA’s “least burdensome provisions.” The FDA requires hospitals or commercial SUD reprocessing facilities to act as the device’s manufacturer, meaning they must meet the FDA’s requirements for medical device manufacturers and take on the associated liabilities. Hospitals are not keen to take on the liability of a manufacturer, yet commercial reprocessors offer relatively few of the lower-risk devices that could be reprocessed.

As a result, hospitals and clinics are no longer willing to sterilize SUDs through methods like autoclaving, despite documentation showing that sterilization is safe and a long precedent of similar devices being safely sterilized and reused for many years without adverse events. Many devices, including pessaries for pelvic organ prolapse and titanium phacoemulsification tips for cataract surgery, can be safely reprocessed for clinical use. These products, given their risk profile, need not be subject to the FDA’s full medical device manufacturer requirements.

Further, manufacturers are incentivized to bring SUDs to market more quickly than devices that may be reprocessed. Devices are often marketed as single-use solely because the original manufacturer chose not to conduct expensive cleaning and sterilization validations, not because such validations cannot be done. FDA regulations governing SUDs should be better tailored to each device so that clinicians on the front lines can provide appropriate and environmentally sustainable health care.

Reprocessed devices cost 25 to 40% less than original devices. The use of reprocessed SUDs thus reduces hospital costs significantly, saving about $465 million in 2023. Per the Association of Medical Device Reprocessors (AMDR), if the reprocessing practices of the top 10% of hospitals were matched across all hospitals that use reprocessed devices, U.S. hospitals could have saved an additional $2.28 billion that same year. Enabling and encouraging the use of reprocessed SUDs can therefore yield significant cost reductions without compromising patient care.

Plan of Action

The FDA has regulated SUD reprocessing since 2000, so it is imperative that the agency take the lead in creating a clear, streamlined process for clearing or approving reusable devices and ensuring the safety and efficacy of reprocessed devices. These recommendations would permit healthcare systems to reprocess and reuse medical devices without fear of noncompliance findings from the Joint Commission or the Centers for Medicare and Medicaid Services, both of which rely on FDA regulations. Further, the nation’s largest healthcare system, the Veterans Health Administration, should become a leader in medical device reprocessing and showcase the standard of practice for sustainable health care.

  1. FDA should publish a list of SUDs that have a proven track record of safe reprocessing to empower hospitals to reduce waste, costs, and environmental impact without compromising patient safety. The FDA should change the labels of single-use devices to multi-use when reuse by hospitals is possible and validated via clinical studies, as the “single-use” label has promoted the mistaken belief that SUDs cannot be safely reprocessed. Per the FDA, the single-use label simply means a given device has not undergone the original equipment manufacturer (OEM) validation tests necessary to label a device “reusable.” The label does not mean the device cannot be cleared for reprocessing.
  2. To help governments and healthcare systems prioritize the environmental and cost benefits of reusable devices over SUDs, the FDA should incentivize applications for reusable or commercially reprocessable devices, such as by expediting review. The FDA can also incentivize use of reprocessed devices through payments to hospitals for meeting reprocessing benchmarks.
  3. The FDA should not subject low-risk devices that can be safely reprocessed for clinical use to full device manufacturer requirements. The FDA should further support healthcare procurement staff by creating an accessible database of devices cleared for reprocessing and alerting healthcare systems about regulated reprocessing options. In doing so, the FDA can reduce the burden on hospitals reprocessing low-risk SUDs and encourage healthcare systems to sterilize SUDs through methods like autoclaving.
  4. As the only major health system in the U.S. to prohibit the use of reprocessed SUDs, the Veterans Health Administration should reverse its prohibition as soon as possible. The prohibition likely persists because of outdated risk determinations, and it comes at major cost to the environment and to Americans. Reversing it would be consistent with the FDA’s conclusions that reprocessed SUDs are safe and effective.
  5. FDA should recommend that manufacturers publicly report the materials used in the composition of devices so that end users can more easily compare products and determine their environmental impact. As explained by AMDR, some OEM practices discourage or fully prevent the use of reprocessed devices; the FDA should vigorously track and impede these practices. Requiring public reporting of device composition will not only help healthcare buyers make more informed decisions but also promote a more circular economy that supports sustainability efforts.

Conclusion

To decrease costs, waste, and environmental impact, the healthcare sector urgently needs to increase its use of reusable devices. One of the largest barriers is FDA regulation that imposes needlessly stringent requirements on hospitals, hindering the adoption of less wasteful, less costly reprocessed devices.

The FDA’s critical role in medical device labeling, and in clearing or approving more devices as reusable, has downstream implications and influences many other regulatory and oversight bodies, including the Centers for Medicare & Medicaid Services (CMS), the Association for the Advancement of Medical Instrumentation (AAMI), the Joint Commission, hospitals, health care offices, and health care providers. It is essential for the FDA to step up and take the lead in revising the device reprocessing pipeline.

Antitrust in the AI Era: Strengthening Enforcement Against Emerging Anticompetitive Behavior

The advent of artificial intelligence (AI) has revolutionized business practices, enabling companies to process vast amounts of data and automate complex tasks in ways previously unimaginable. However, while AI has gained much praise for its capabilities, it has also raised various antitrust concerns. Among the most pressing is the potential for AI to be used in an anticompetitive manner. This includes algorithms that facilitate price-fixing, predatory pricing, and discriminatory pricing (harming the consumer market), as well as those that enable the manipulation of wages and worker mobility (harming the labor market). Perhaps more troubling is the fact that the overwhelming majority of the AI landscape is controlled by just a few market players. These tech giants, some of the world’s most powerful corporations, have established a near-monopoly over the development and deployment of AI. Their dominance over necessary infrastructure and resources makes it increasingly challenging for smaller firms to compete.

While the antitrust enforcement agencies—the FTC and DOJ—have recently begun to investigate these issues, they are likely only scratching the surface. The covert and complex nature of AI makes it difficult to detect when it is being used in an anticompetitive manner. To ensure that business practices remain competitive in the era of AI, the enforcement agencies must be adequately equipped with the appropriate strategies and resources. The best way to achieve this is to (1) require the disclosure of AI technologies during the merger-review process and (2) reinforce the enforcement agencies’ technical strategy in assessing and mitigating anticompetitive AI practices.

Challenge & Opportunity

Since the late 1970s, antitrust enforcement has been in decline, in part due to a more relaxed antitrust approach put forth by the Chicago school of economics. Both the budgets and the number of full-time employees at the enforcement agencies have steadily decreased, while the volume of permitted mergers and acquisitions has risen (see Figure 1). This resource gap has limited the ability of the agencies to effectively oversee and regulate anticompetitive practices.

Figure 1. Merger Enforcement vs. Total Filings

Changing attitudes surrounding big business, as well as recent shifts in leadership at the enforcement agencies—most notably President Biden’s appointment of Lina Khan to FTC Chair—have signaled a more aggressive approach to antitrust law. But even with this renewed focus, the agencies are still not operating at their full potential. 

This landscape provides a significant opportunity to make some much-needed changes. Two areas for improvement stand out. First, agencies can make use of the merger review process to aid in the detection of anticompetitive AI practices. In particular, the agencies should be on the lookout for algorithms that facilitate price-fixing, where competitors use AI to monitor and adjust prices automatically, covertly allowing for tacit collusion; predatory pricing algorithms, which enable firms to undercut competitors only to raise prices later once dominance is achieved; and dynamic pricing algorithms, which allow firms to discriminate against different consumer groups, resulting in price disparities that may distort market competition. On the labor side, agencies should screen for wage-fixing algorithms and other data-driven hiring practices that may suppress wages and limit job mobility. Requiring companies to disclose the use of such AI technologies during merger assessments would allow regulators to examine and identify problematic practices early on. This is especially useful for flagging companies with a history of anticompetitive behavior or those involved in large transactions, where the use of AI could have the strongest anticompetitive effects.

Second, agencies can use AI to combat AI. Research has demonstrated that AI can be more effective at detecting anticompetitive behavior than traditional methods. Leveraging such technology could transform enforcement capabilities by allowing agencies to cover more ground despite limited resources. While some increase in funding would be needed, AI offers a cost-effective way to enhance the detection of anticompetitive practices without requiring massive budget increases.
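
To make this concrete, the sketch below illustrates one of the simplest forms such a screen could take: flagging competitor pairs whose day-to-day price changes move in near-lockstep, a basic statistical signal that can warrant closer review. It is a minimal, hypothetical example run on synthetic data; the firm names, threshold, and correlation-based test are illustrative assumptions rather than an agency methodology, and parallel pricing alone does not establish collusion.

```python
import numpy as np
import pandas as pd

def flag_parallel_pricing(prices: pd.DataFrame, threshold: float = 0.95):
    """Flag firm pairs whose daily price changes are highly correlated.

    `prices` holds one column per firm and one row per day. A high correlation
    of price changes is only a screening signal for human review, not proof of
    algorithmic price-fixing; common demand shocks can also produce it.
    """
    changes = prices.pct_change().dropna()  # day-over-day relative price changes
    corr = changes.corr()                   # pairwise correlation matrix
    firms = list(corr.columns)
    flags = []
    for i, a in enumerate(firms):
        for b in firms[i + 1:]:
            if corr.loc[a, b] >= threshold:
                flags.append((a, b, round(float(corr.loc[a, b]), 3)))
    return flags

if __name__ == "__main__":
    # Synthetic data: firm_a and firm_b track a shared price path almost exactly,
    # while firm_c prices independently.
    rng = np.random.default_rng(0)
    days = 250
    shared = 100 + np.cumsum(rng.normal(0, 1, days))
    prices = pd.DataFrame({
        "firm_a": shared + rng.normal(0, 0.1, days),
        "firm_b": shared + rng.normal(0, 0.1, days),
        "firm_c": 100 + np.cumsum(rng.normal(0, 1, days)),
    })
    print(flag_parallel_pricing(prices))  # expected to flag the firm_a/firm_b pair
```

A real screening tool would layer in cost data, demand controls, and structural tests, but even simple screens of this kind can help agencies prioritize scarce investigative resources.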

The success of these recommendations hinges on the enforcement agencies employing technologists who have a deep understanding of AI. Their knowledge of algorithm functionality, the latest insights in AI, and the interplay between big data and anticompetitive behavior is instrumental. A detailed discussion of the need for AI expertise is covered in the following section.

Plan Of Action

Recommendation 1. Require Disclosure of AI Technologies During Merger-Review.

Currently, there is no formal requirement in the merger review process that mandates the reporting of AI technologies. This lack of transparency allows companies to withhold critical information that may help agencies determine potential anticompetitive effects. To effectively safeguard competition, it is essential that the FTC and DOJ have full visibility into businesses’ technologies, particularly those that may impact market dynamics. While the agencies can request information on certain technologies later in the review process, typically during the second request phase, a formalized reporting requirement would provide a more proactive approach. Such an approach would be beneficial for several reasons. First, it would enable the agencies to identify anticompetitive technologies they might otherwise have overlooked. Second, an early assessment would allow the agencies to detect and mitigate risk upfront, rather than having to address it post-merger or further along in the merger review process, when remedies may be more difficult to enforce. This is particularly relevant for the deep integrations that often occur between digital products post-merger. For instance, the merger of Instagram and Facebook complicated the FTC’s subsequent efforts to challenge Meta. As Dmitry Borodaenko, a former Facebook engineer, explained:

“Instagram is no longer viable outside of Facebook’s infrastructure. Over the course of six years, they integrated deeply… Undoing this would not be a simple task—it would take years, not just the click of a button.”

Lastly, given the rapidly evolving nature of AI, this requirement would help the agencies identify trends and better determine which technologies are harmful to competition, under what circumstances, and in which industries. Insights gained from one sector could inform investigations in other sectors, where similar technologies are being deployed. For example, the DOJ recently filed suit against RealPage, a property management software company, for allegedly using price-fixing algorithms to coordinate rent increases among competing landlords. The case is the first of its kind, as there had not been any previous lawsuit addressing price-fixing in the rental market. With this insight, however, if the agencies detect similar algorithms during the merger review process, they would be better equipped to intervene and prevent such practices.

There are several ways the government could implement this recommendation. To start, the FTC and DOJ should issue interpretive guidelines specifying that anticompetitive effects stemming from AI technologies are within the purview of the Hart-Scott-Rodino (HSR) Act and that, accordingly, such technologies should be disclosed in the pre-merger notification process. In particular, the agencies should instruct companies to report detailed descriptions of all AI technologies in use, how they might change post-merger, and their potential impact on competition metrics (e.g., price, market share). This would serve as a key step in signaling to companies that AI considerations are integral to merger review. Building on this, Congress could pass legislation mandating AI disclosures, thereby formalizing the requirement. Ultimately, in a future round of HSR revisions, the agencies could incorporate this mandate as a binding rule within the pre-merger framework. To avoid unnecessary burden on businesses, reporting should only be required when AI plays a significant role in the company’s operations or is expected to post-merger. What constitutes a ‘significant role’ should be left to the discretion of the agencies but could include AI systems central to core functions such as pricing, customer targeting, wage-setting, or automation of critical processes.

Recommendation 2. Reinforce the FTC and DOJ’s Technical Strategy in Assessing and Mitigating Anticompetitive AI Practices.

Strengthening the agencies’ ability to address AI requires two actions: integrating computational antitrust strategies and increasing technical expertise. A wave of recent research has highlighted AI as a powerful tool for detecting anticompetitive behavior. For instance, scholars at the Stanford Computational Antitrust Project have demonstrated that methods such as machine learning, natural language processing, and network analysis can assist with tasks ranging from uncovering collusion between firms to delineating digital markets. While the DOJ has already partnered with the Project, the FTC could benefit from pursuing a similar collaboration. More broadly, the agencies should deepen their technical expertise by expanding workshops and training with AI academic leaders. Doing so would not only provide them with access to the most sophisticated techniques in the field but would also help bridge the gap between academic research and real-world implementation. Examples may include the use of machine learning algorithms to identify price-fixing and wage-fixing; sentiment analysis, topic modeling, and other natural language processing tools to detect intent to collude in firm communications; or reverse-engineering algorithms to predict the outcomes of AI-driven market manipulation.
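
As a simplified illustration of the natural language processing idea, the sketch below shows a keyword-based first-pass screen that surfaces firm communications containing collusion-adjacent language for human and legal review. It is a hypothetical, deliberately crude stand-in for the sentiment-analysis and topic-modeling tools described above; the phrase list and sample messages are invented for illustration, and matching a phrase establishes nothing about intent on its own.

```python
import re
from dataclasses import dataclass

# Illustrative phrases only; a real screen would be developed with economists,
# attorneys, and domain experts and validated against documents from past cases.
SUSPECT_PHRASES = [
    "hold the line on pricing",
    "match their price",
    "don't undercut",
    "align our rates",
    "keep wages where they are",
]

@dataclass
class Flag:
    doc_id: str
    phrase: str
    snippet: str

def screen_communications(docs: dict, window: int = 40):
    """Return documents containing collusion-adjacent phrases, with context snippets.

    This is a coarse prioritization filter, not a determination that any
    communication reflects an agreement or an antitrust violation.
    """
    flags = []
    for doc_id, text in docs.items():
        lowered = text.lower()
        for phrase in SUSPECT_PHRASES:
            for match in re.finditer(re.escape(phrase), lowered):
                start = max(match.start() - window, 0)
                end = min(match.end() + window, len(text))
                flags.append(Flag(doc_id, phrase, text[start:end]))
    return flags

if __name__ == "__main__":
    sample = {
        "email_001": "Per our call, let's hold the line on pricing through Q3.",
        "email_002": "Quarterly forecast attached; no changes to the model.",
    }
    for f in screen_communications(sample):
        print(f.doc_id, "|", f.phrase, "|", f.snippet)
```

In practice, agencies would pair such filters with more robust statistical and machine learning models, with flagged documents serving only as leads for trained investigators.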

Leveraging such computational strategies would enable regulators to analyze complex market data more effectively, enhancing the efficiency and precision of antitrust investigations. Given AI’s immense power, only a small but highly skilled team is needed to make significant progress. For instance, the UK’s Competition and Markets Authority (CMA) recently stood up a Data, Technology and Analytics unit, which applies machine learning to various antitrust investigations. To build similar capacity, the DOJ and FTC should hire more ML/AI experts, data scientists, and technologists, who could serve several key functions. First, they could conduct research on the most effective methods for detecting collusion and anticompetitive behavior in both digital and non-digital markets. Second, based on that research, they could guide the implementation of selected AI solutions in investigations and policy development. Third, they could perform assessments of AI technologies, evaluating the potential risks and benefits of AI applications in specific markets and companies. These assessments would be particularly useful during merger review, as discussed in Recommendation 1. Finally, they could help establish guidelines for transparency and accountability, ensuring the responsible and ethical use of AI both within the agencies and across the markets they regulate.

To formalize this recommendation, the President should submit a budget proposal to Congress requesting increased funding for the FTC and DOJ to (1) hire technology/AI experts and (2) provide necessary training for other selected employees on AI algorithms and datasets. The FTC may separately consider using its 6(b) subpoena powers to conduct a comprehensive study of the AI industry or of the use of AI practices more generally (e.g., to set prices or wages). Finally, the agencies should foster collaboration with each other (e.g., by establishing a Joint DOJ-FTC Computational Task Force), as well as with academia and the private sector, to ensure that enforcement strategies remain at the cutting edge of AI advancements.

Conclusion

The nation is in the midst of an AI revolution, and with it comes new avenues for anticompetitive behavior. As it stands, the antitrust enforcement agencies lack the necessary tools to adequately address this growing threat.

However, this environment also presents a pivotal opportunity for modernization. By requiring the disclosure of AI technologies during the merger review process, and by reinforcing the technical strategy at the FTC and DOJ, the antitrust agencies can strengthen their ability to detect and prevent anticompetitive practices. Leveraging the expertise of technologists in enforcement efforts can enhance the agencies’ capacity to monitor competition in markets and allow them to identify patterns linking certain technologies to antitrust violations.

Given the rapid pace of AI advancement, a proactive approach is far preferable to a reactive one. Detecting antitrust violations early allows agencies to save both time and resources. To protect consumers, workers, and the economy more broadly, it is imperative that the FTC and DOJ adapt their enforcement strategies to meet the complexities of the AI era.

Clearing the Path for New Uses for Generic Drugs

The labeling-only 505(b)(2) NDA pathway for non-manufacturers to seek FDA approval

Repurposing generic drugs as new treatments for life-threatening diseases is an exciting yet largely overlooked opportunity due to a lack of market-driven incentives. The low profit margins for generic drugs mean that pharmaceutical companies rarely invest in research, regulatory efforts, and marketing for new uses. Nonprofit organizations and other non-commercial non-manufacturers are increasing efforts to repurpose widely available generic drugs and rapidly expand affordable treatment options for patients. However, these non-manufacturers find it difficult to obtain regulatory approval in the U.S. They face significant challenges in using the existing approval pathways, specifically in: 1) providing the FDA with required chemistry, manufacturing, and controls (CMC) data, 2) providing the FDA with product samples, and 3) conducting post-marketing surveillance. Without a straightforward path for approval and updating drug labeling, non-manufacturers have relied on off-label use of repurposed drugs to drive uptake. This practice results in outdated labeling for generics and hinders widespread clinical adoption, limiting patient access to these potentially life-saving treatments. 

To encourage greater adoption of generic drugs in clinical practice – that is, to encourage the repurposing of these drugs – the FDA should implement a dedicated regulatory pathway for non-manufacturers to seek approval of new indications for repurposed generic drugs. A potential solution is an extension of the existing 505(b)(2) new drug application (NDA) approval pathway. This extension, the “labeling-only” 505(b)(2) NDA, would be a dedicated pathway for non-manufacturers to seek FDA approval of new indications for well-established small molecule drugs when multiple generic products are already available. The labeling-only 505(b)(2) pathway would be applicable for repurposing drugs for any disease. Creating a regulatory pathway for non-manufacturers would unlock access to innovative therapies and enable the public to benefit from the enormous potential of low-cost generic drugs.

Challenge and Opportunity

The opportunity for generic drug repurposing

On-patent, branded drugs are often unaffordable for Americans. Due to the high cost of care, 42% of patients in the U.S. exhaust their life savings within two years of a cancer diagnosis. Generic drug repurposing – the process of finding new uses for FDA-approved generic drugs – is a major opportunity to quickly create low-cost and accessible treatment options for many diseases. In oncology, hundreds of generic drugs approved for non-cancer uses have been tested as cancer treatments in published preclinical and clinical studies. 

The untapped potential for generic drug repurposing in cancer and other diseases is not being realized because of the lack of market incentives. Pharmaceutical companies are primarily focused on de novo drug development to create new molecular and chemical entities. Typically, pharmaceutical companies will invest in repurposing only when the drugs are protected by patents or statutory market exclusivities, or when modification to the drugs can create new patent protection or exclusivities (e.g., through new formulations, dosage forms, or routes of administration). Once patents and exclusivities expire, the introduction of generic drugs creates competition in the marketplace. Generics can be up to 80-85% less expensive than their branded counterparts, driving down overall drug prices. The steep decline in prices means that pharmaceutical companies have little motivation to invest in research and marketing for new uses of off-patent drugs, and this loss of interest often starts in the years preceding generic entry.

In theory, pharmaceutical companies could repurpose generics without changing the drugs and apply for method-of-use patents, which should provide exclusivity for new indications and the potential for higher pricing. However, due to substitution of generic drugs at the pharmacy level, method-of-use patents are of little to no practical value when there are already therapeutically equivalent products on the market. Pharmacists can dispense a generic version instead of the patent-protected drug product, even if the substituted generic does not have the specific indication in its labeling. Currently, nearly all U.S. states permit substitution by the pharmacy, and over a third have regulations that require generic substitution when available.

Nonprofits like Reboot Rx and other non-commercial non-manufacturers are therefore stepping in to advance the use of repurposed generic drugs across many diseases. Non-manufacturers, which do not manufacture or distribute drugs, aim to ensure there is substantial evidence for new indications of generic drugs and then advocate for their clinical use. Regulatory approval would accelerate adoption. However, even with substantial evidence to support regulatory review, non-manufacturers find it difficult or impossible to seek approval for new indications of generic drugs. There is no straightforward pathway to do so within the current U.S. regulatory framework without offering a specific, manufactured version of the drug. This challenge is not unique to the U.S.; recent efforts in the European Union (EU) have sought to address the regulatory gap. In the 2023 EU reform of pharmaceutical legislation, Article 48 is currently under review by the European Parliament as a potential solution to allow nonprofit entities to spearhead submissions for the approval of new indications for authorized medicinal products with the European Medicines Agency. To maximize the patient impact of generic drugs in America, non-manufacturers should be able to drive updates to FDA drug labeling, enabling widespread clinical adoption of repurposed drugs in a formal, predictable, and systematic manner.

The importance of FDA approval

Drugs that are FDA-approved can be prescribed for any indication not listed on the product labeling, often referred to as “off-label use”. Since non-manufacturers face significant challenges pursuing regulatory approval for new indications, they often must rely on advocating for off-label use of repurposed drugs.

While off-label use is widely accepted and helpful for specific circumstances, there are significant advantages to having FDA approval of new drug indications included in labeling. FDA drug labeling is intended to contain up-to-date information about drug products and ensures that the necessary conditions of use (including dosing, warnings, and precautions) are communicated for the specific indications. It is the primary authoritative source for making informed treatment decisions and is heavily valued by the medical community. Approval may increase the likelihood of uptake by clinical guidelines, pathways systems, and healthcare payers. Indications with FDA approval may generate greater awareness of the treatment options, leading to a broader and more rapid impact on clinical practice. 

Clinical practice guidelines are often the leading authority for prescribers and patients regarding off-label use. In oncology, for example, the National Comprehensive Cancer Network (NCCN) Guidelines are widely used and include many off-label uses. However, guidelines do not exist for every disease and medical specialty, which can make it more difficult to gain acceptance for off-label uses. The Centers for Medicare and Medicaid Services (CMS) policy routinely covers off-label drug uses if they are listed in certain compendia. The NCCN Compendium, which is based on the NCCN Guidelines, is the only accepted compendium that is disease-specific.

Off-label use requires more effort from individual prescribers and patients to independently evaluate new drug data, thereby slowing uptake of the treatments. This can be especially difficult for community-based physicians, who need to remain up-to-date on new treatment options across many diseases. Off-label prescribing can also introduce medico-legal risks, such as malpractice. These burdens and risks limit off-label prescribing, even when there is supportive evidence for the new uses.

As new uses for generic drugs are discovered, it is crucial to update the labeling to ensure alignment with current clinical practice. Outdated generic drug labeling means that prescribers and patients may not have access to all the necessary information to understand the full risk-benefit profile. Americans deserve to have access to all effective treatment options – especially low-cost and widely available generic drugs that could help mitigate the financial toxicity and health inequities faced by many patients. For the public benefit, the FDA should support approaches that remove regulatory barriers for non-manufacturers and modernize drug labeling.

Existing pathways for manufacturers to obtain FDA approval 

The current FDA approval system is based on the idea that sponsors have discrete physical drug products. Traditionally, sponsors seeking FDA approval are pharmaceutical companies or drug manufacturers that intend to produce (or contract for production), distribute, and sell the finished drug product. For the purposes of FDA regulation, “drug” refers to a substance intended for use in the treatment or prevention of disease; “drug product” is the final dosage form that contains a drug substance and inactive ingredients made and sold by a specific manufacturer. One drug can be present on the market in multiple drug products. In the current regulatory framework, drug products are approved through one of three pathways: a full 505(b)(1) NDA, a 505(b)(2) NDA that relies in part on data not generated by the applicant, or an abbreviated new drug application (ANDA) that demonstrates equivalence to a reference listed drug (RLD).

Manufacturers can add new indications to their approved labeling without modifying the drug product through existing pathways. With supportive clinical evidence for the new indication, an NDA holder can file a supplemental NDA (sNDA), while an ANDA holder may submit a 505(b)(2) NDA for the new indication. As previously discussed, the drug product will likely be subject to pharmacy-level substitution with any available therapeutically equivalent generic. The marketing exclusivities that sponsors may receive from the FDA do not protect against this substitution. Therefore, these pathways are rarely, if ever, used by pharmaceutical companies when there are already multiple generic manufacturers of the product.

Challenges for non-manufacturers in using existing pathways

Since manufacturers are not incentivized to seek regulatory approval for new indications, labeling changes are more likely to happen if driven by non-manufacturers. Yet non-manufacturers face significant challenges in utilizing the existing regulatory pathways. Sponsors must submit the following information for all NDAs for the FDA’s review: 1) clinical and nonclinical data on the safety and effectiveness of the drug for the proposed indication; 2) the proposed labeling; and 3) chemistry, manufacturing, and controls (CMC) data describing the methods of manufacturing and the controls to maintain the drug product’s quality. 

To submit and maintain NDAs, non-manufacturer sponsors would need to address the following challenges: 

  1. Providing the FDA with required CMC data. NDA sponsors must provide CMC data for FDA review. Non-manufacturers would not produce physical drug products, and therefore they would not have information on the manufacturing process. 
  2. Providing the FDA with product samples. If requested, NDA sponsors must have the drug products and other samples (e.g., drug substances or reference standards) available to support the FDA review process and must make available for inspection the facilities where the drug substances and drug products are manufactured. Non-manufacturers would not have physical drug products to provide as samples, the capabilities to produce them, or access to the facilities where they are made. 
  3. Conducting post-marketing surveillance. Post-marketing responsibilities to maintain an NDA include conducting annual safety reporting and maintaining a toll-free number for the public to call with questions or concerns. Non-manufacturers, such as small nonprofits, may not have the bandwidth or resources to meet these requirements.

Within the current statutory framework, a non-manufacturer could sponsor a 505(b)(2) NDA to obtain approval of a new indication by partnering with a current manufacturer of the drug – either an NDA or ANDA holder. The manufacturer would help meet the technical requirements of the 505(b)(2) application that the non-manufacturer could not fulfill independently. Through this partnership, the non-manufacturer would acquire the CMC data and physical drug product samples from the manufacturer and rely on the manufacturer’s facilities to fulfill FDA inspection and quality requirements.

Once approved, the 505(b)(2) NDA would create a new drug product with indication-specific labeling, even though the product would be identical to an existing product under a previous NDA or ANDA. The 505(b)(2) NDA would then be tied to the specific manufacturer due to the use of their CMC data, and that manufacturer would be responsible for producing and distributing the drug product for the new indication.

As a practical matter, this pathway is rarely attainable. Manufacturers of marketed drug products, particularly generic drug manufacturers, lack the incentives needed to partner with non-manufacturers. Manufacturers may not want to provide their CMC data or samples because it may prompt FDA inspection of their facilities, require an update to their CMC information, or open the door to product liability risks. The existing incentive structure strongly discourages generic drug manufacturers from expending any additional resources on researching new uses or making any changes to their product labeling that would deviate from the original RLD product. 

Plan of Action 

To modernize drug labeling and enhance clinical adoption of generic drugs, the FDA should implement a dedicated regulatory pathway for non-manufacturers to seek approval of new indications for repurposed generic drugs. Ultimately, such a pathway would enable drug repurposing and be a crucial step toward equitable healthcare access for Americans. We propose a potential solution – a “labeling-only” 505(b)(2) NDA – as an extension of the existing 505(b)(2) approval pathway. 

Overview of the proposed labeling-only 505(b)(2) NDA pathway 

The labeling-only 505(b)(2) NDA would enable non-manufacturers to reference CMC information from previous FDA determinations and, when necessary, provide the FDA with samples of commercially available drug products. Through this approach, the new indication would not be tied to a specific drug product made by one manufacturer. There is no inherent necessity for a new indication of a generic drug to be exclusively linked to a single manufacturer or drug product when the FDA has already approved multiple therapeutically equivalent generic drugs. Any of these interchangeable drug products would be considered equally safe and effective for the new indication, and patients could receive any of these drug products due to pharmacy-level substitution. 

We describe non-manufacturer repurposing sponsors as entities that intend to submit or reference clinical data through a labeling-only 505(b)(2) NDA. This pathway is designed to expand the FDA-approved labeling of generic drugs for new indications, including those that may already be considered the standard of care. Non-manufacturers do not have the means to independently produce or distribute drug products. Instead, they intend to show that there is substantial evidence to support the new use through FDA approval, and then advocate for the indication in clinical practice. This evidence may be based on their research or research performed by other entities, including clinical trials and real-world data analyses.

The labeling-only 505(b)(2) NDA pathway helps address the three major challenges non-manufacturers face in pursuing regulatory approval. Through this pathway, non-manufacturers would be able to: 

  1. Reference the FDA’s previous determinations on CMC data. Currently, a 505(b)(2) NDA can reference the FDA’s previous determinations of safety and effectiveness for an approved drug product. For eligible generic drugs, the labeling-only 505(b)(2) NDA would build on this practice by allowing non-manufacturer sponsors to reference the FDA’s previous determinations on any NDA or ANDA that the manufacturing process and CMC data are adequate to meet regulatory standards.
  2. Provide the FDA with product samples using commercially available drug product samples. Currently, it is up to the discretion of the FDA whether or not to request samples in the review of an application. With the labeling-only 505(b)(2) NDA, non-manufacturers would provide the FDA with samples of commercially available products from generic manufacturers. Given that the FDA would have already evaluated the products and their bioequivalence to the RLD during the previous reviews, it is not expected that the FDA would need to re-examine the product at the level of requesting samples, except potentially to examine the packaging and physical presentation of the product for compatibility with the new indication and conditions of use. The facilities where the drugs are made would remain available for inspection, under the same terms and conditions as the existing, approved marketing applications. 
  3. Manage post-marketing responsibilities. Since most post-marketing surveillance and adverse event reporting are drug product-specific, these obligations would continue to be the responsibility of the manufacturer of the physical drug product dispensed. With the labeling-only 505(b)(2) NDA, the non-manufacturer would not have product-specific obligations because they are not putting a new product into the marketplace. However, we anticipate the non-manufacturer would be responsible for the repurposed indication on their labeling, including but not limited to post-marketing surveillance as well as indication-specific adverse event reporting and reasonable follow-up.

Under the labeling-only 505(b)(2) NDA, the non-manufacturer sponsor would not introduce a new physical drug product into the market. The new labeling created by the approval would not be expressly associated with any one specific product. The non-manufacturer’s labeling would refer to the drug by its established generic name. In that way, the approval and labeling could apply to all equivalent versions of the drug product, which patients could receive from their pharmacy in the same way generic drugs are typically dispensed. That is, with the benefit of pharmacy-level substitution, patients could receive any available, therapeutically equivalent drug product from any current manufacturer.

Eligibility criteria

We envision the users of this pathway to be non-manufacturers that conduct drug repurposing research for the public benefit, including organizations like nonprofits and patient advocacy groups. The FDA should implement and enforce additional eligibility guardrails to ensure that sponsors operate in good faith and genuinely cannot meet traditional NDA requirements. This process may include pre-submission meetings and reviews. The labeling-only 505(b)(2) NDA should be held to the FDA’s standard level of rigor and scrutiny of safety and effectiveness for the proposed indication during the review process.

The labeling-only 505(b)(2) would only be suitable for well-established, commercially available small molecule generic drugs, which can be identified as: 

  1. Drugs with a U.S. Pharmacopeia and National Formulary (USP-NF) monograph. The USP-NF monograph system ensures the uniformity of available products on the market by setting a consensus minimum standard of identity, strength, quality, and purity among all marketed versions of a drug. It is expressly recognized in the Federal Food, Drug, and Cosmetic Act (FDCA). The USP-NF strives to have substance and product monographs for all FDA-approved drugs. USP-NF monographs for generics are commonly available because the drugs have been on the market for a long time and are typically produced by multiple manufacturers. Drug products in the U.S. market must conform to the standards in the USP-NF, when available, to avoid possible charges of adulteration and misbranding.

By statute and regulation, the FDA already allows for NDAs and ANDAs to reference the USP-NF to satisfy some CMC requirements, such as for specifications of the drug substance. As an illustration of the acceptance of the USP-NF, clinical trial protocols requiring the use of background therapy or supportive care, as well as trials testing medical devices requiring the use of a drug product, often will specify that any available version of the drug product meeting USP-NF standards can be used. We propose that products without USP-NF monographs, including certain newer drugs and drugs with especially complicated manufacturing processes that are not conducive to standardization, would not be eligible for the labeling-only 505(b)(2) pathway.

  2. Drugs with multiple A-rated, therapeutically equivalent products in the FDA Orange Book. The FDA does not regulate which specific products are dispensed or substituted for a given drug prescription. The listing of therapeutic equivalents in the Orange Book facilitates the seamless replacement of drug products from different manufacturers in clinical practice. Therapeutically equivalent drug products: i) have demonstrated bioequivalence to the RLD; ii) have the same strength, dosage form, and route of administration as the RLD; and iii) are labeled for the same conditions of use as the RLD. Therapeutic equivalents that meet these criteria are designated “A-rated” in the Orange Book. A-rated drug products are substitutable for any other version of that A-rated drug product, including the RLD itself.

Implementation 

The labeling-only 505(b)(2) NDA pathway could be implemented through an FDA guidance document interpreting the current statute and regulations or through legislation that clarifies the FDA’s existing authority. Guidance documents contain the FDA’s interpretation of its policy on regulatory issues. The FDA’s Center for Drug Evaluation and Research (CDER) could issue new guidance interpreting the existing statute, thereby officially allowing previous FDA determinations of acceptable CMC data to be referenced for eligible generic drugs and adjusting drug sample requirements. Alternatively, the labeling-only 505(b)(2) could be enacted by Congress through a statutory change incorporated into the FDCA, for example alongside the reauthorization of the Prescription Drug User Fee Act (PDUFA) in 2027, or through other congressional acts as appropriate. FDA guidance would be a faster route to adoption, while statutory authorization would offer additional safeguards for the continuance of the pathway over the long term.

The labeling-only 505(b)(2) NDA pathway could be funded through user fees, which PDUFA establishes to cover the cost of filing and maintaining NDAs. However, many nonprofit sponsors would not be able to afford the same user fees as for-profit pharmaceutical manufacturers. Relevant statutes would likely need to be updated to create a different fee schema for non-manufacturers using the labeling-only 505(b)(2) pathway. In a similar spirit of reducing barriers to maintaining up-to-date labeling, the 2017 PDUFA update waived the fee for submitting an sNDA, which is how an existing RLD holder would update its labeling with new indication information.

Conclusion

Patients need new and affordable treatment options for diseases that have a devastating societal impact, and repurposing generic drugs can help address this need. Nonprofits and other non-manufacturers are driving these efforts forward due to a lack of interest from pharmaceutical companies. As momentum gains for generic drug repurposing, the U.S. regulatory system needs a pathway for non-manufacturers to seek FDA approval of new indications for existing generic drugs. Our proposed labeling-only 505(b)(2) NDA would eliminate undue administrative burden, enabling non-manufacturers to pursue FDA approval of new indications. It would allow the FDA to provide the public with the most up-to-date drug labeling, improving the ability of patients and physicians to make informed treatment decisions. This dedicated pathway would increase the availability of effective treatment options while reducing costs for the American healthcare system.

Frequently Asked Questions
Could the FDA make labeling changes for repurposed generic drugs without the process being driven by non-manufacturer sponsors?

The FDA does not have sufficient bandwidth or resources to meet the opportunity presented by repurposed generic drugs. To be the primary driver of labeling changes for repurposed generic drugs, the FDA would need to identify repurposing opportunities and also thoroughly compile and evaluate the safety and effectiveness data for the new indications. Project Renewal in the FDA’s Oncology Center of Excellence is working with RLD holders to update the labeling of certain older oncology drugs where outdated labeling does not reflect current clinical use. The initial focus of Project Renewal is limited, and newly repurposed treatments are not included within its scope. For newly repurposed treatments, the FDA could evaluate the drugs and post reports on their safety and effectiveness to the Federal Register that could then be referenced by manufacturers. However, this approach would require a significant commitment of FDA resources. Introducing a motivated, third-party non-manufacturer as the primary driver of labeling changes instead allows these organizations to contribute expertise and resources, enabling faster data evaluation for more drugs.

If a repurposed indication for a generic drug was approved through the labeling-only 505(b)(2) NDA pathway, would current manufacturers be required to update their labeling?

The pre-existing NDA sponsor could update their labeling to add the new indication through an sNDA that references the labeling-only 505(b)(2) NDA. ANDA holders would then be legally required to match their labeling to that of the RLD. The FDA should determine whether all current manufacturers would be required to update their labeling following approval of the new indication, and if so, the appropriate process.

Could drug repurposing and expanding the market for generics cause an increase in drug prices?

Generic drugs play a vital role in the U.S. healthcare system by decreasing drug spending and increasing the accessibility of essential medicines. Generics account for 91% of prescriptions filled in the U.S. In some unlikely circumstances, expanding the market for generics with new indications could lead to short-term price increases above the inflation rate for off-patent branded and generic drugs. For example, if a new use for a generic drug substantially increases demand for the drug, there is a short-term risk that prices for the drug or potential substitutes could rise until manufacturers build more capacity to increase supply. To mitigate this risk, generic manufacturers could be notified about potential increases in demand so they can plan for increased production.

Would off-label uses of drugs still be covered by healthcare payers if there is a pathway to seek approval for those uses?

Generally, healthcare payers are not required to cover or reimburse for off-label uses of drugs. Unless a drug undergoes utilization management, payers cover most generic drugs for off-label uses because coverage is agnostic of indication. Clinical practice guidelines are highly influential in the widespread adoption of off-label treatments into the standard of care and are often referenced by payers making reimbursement decisions. In oncology, many off-label treatments are included in guidelines; only 62% of treatments in the NCCN Guidelines are aligned with FDA-approved indications. For example, more than half of the NCCN recommendations for metastatic breast cancer are off-label treatments. Due to the breadth of off-label use, we anticipate payers would continue to cover repurposed generic drugs used off-label, even if there is a pathway available for non-manufacturers to pursue FDA approval.

Would a labeling-only 505(b)(2) NDA sponsor receive any marketing exclusivities for the new indications?

We do not envision any form of exclusivity being granted for indications pursued via a labeling-only 505(b)(2) NDA. Given that the non-manufacturer sponsor would rely on existing products produced by multiple generic manufacturers, there is no new product to grant exclusivity. Even if some form of exclusivity were given to the non-manufacturer, it would be insufficient to guarantee the use of any particular drug product over another due to pharmacy-level substitution.

An Innovation Agenda for Addiction: Breakthrough Medicines That Scale

The federal government should expand the FDA’s priority review voucher (PRV) program and provide market exclusivity advantages to encourage the development of medications for addiction.

Taken together, substance use disorders (alcohol, cigarettes, and other drugs) cause more deaths in the U.S. every year than cancer or heart disease and cause devastating downstream social harms. Despite this, only 3% of eligible patients receive substance use disorder (SUD) medication, a result of the low uptake and efficacy of existing medications and a lack of options for patients addicted to stimulants. This, in turn, reflects a near-total absence of pharmaceutical research and development activity. To make a population-level impact in reducing harms from opioids, methamphetamine, cocaine, alcohol, and cigarettes, we must address the broken market dynamics of addiction medicine.

The PRV program should be expanded to cover opioid use disorder, alcohol use disorder, stimulant use disorder, and smoking. In addition, drugs that are approved for these SUD indications should have extended exclusivity and sponsors that develop these medications should receive vouchers to extend exclusivity for other medications.

Challenge and Opportunity 

Addiction policy efforts on both the left and the right have struggled. Despite substantial progress in reducing smoking, 29 million Americans still smoke cigarettes and feel unable to quit, and 480,000 Americans die each year from smoking. While overdose deaths from opioids, cocaine, and methamphetamine have fallen slightly from their peak in 2022, they remain near record highs, three times higher than 20 years ago. Alcohol deaths per capita have doubled since 1999.

Roughly 60% of all crimes and 65% of violent crimes are related to drugs or alcohol, and the opioid crisis alone costs the United States $1.5 trillion a year. Progress in reducing addiction is held back because so few people with a substance use disorder take medication. This low uptake has multiple causes: in opioid use disorder, uptake is persistently low despite recent relaxations of prescription rules, with patients reporting a variety of reasons for refusal; treatments for alcohol use disorder have modest effects; and there are no approved treatments for stimulant use disorder. Only three percent of people with a substance use disorder take SUD medications, as shown in Figure 1 below. In brief, only 2% of those with alcohol use disorder, 13% of those with opioid use disorder, 2% of smokers, and approximately 0% of illicit stimulant users are receiving medication, giving a weighted average of about 3%.
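
For clarity on how the 3% figure aggregates these disorder-specific rates, it is a population-weighted average, a minimal sketch of which follows; the memo does not list the population weights, so the assumption here is simply that each rate is weighted by the number of people with that disorder:

$$\text{overall medication rate} = \frac{\sum_i p_i\, n_i}{\sum_i n_i}$$

where $p_i$ is the share of people with disorder $i$ who receive medication and $n_i$ is the number of people with that disorder. Because the largest groups (smokers and people with alcohol use disorder) have the lowest rates, around 2%, they pull the overall figure down toward 3% even though the opioid use disorder rate is 13%.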

There has been rapid innovation in the illicit market as synthetic opioids and expanded meth production have lowered price and increased strength and availability. Meanwhile, there has been virtually no innovation in medicines to prevent and treat addiction. The last significant FDA approval for opioid use disorder was buprenorphine in 2002; progress since then has been minimal, limited to new formulations or dosing of old medications. For alcohol use disorder, the most recent was acamprosate in 2004 (and it is rarely prescribed due to limited efficacy and three-times-a-day dosing).

None of the ten largest pharmaceutical companies have active addiction medicine programs or drug candidates, and the pharmaceutical industry as a whole has only pursued minimal drug development. According to the trade association BIO, “Venture investment into companies with novel addiction drug programs over the last 10 years is estimated at $130M, 270 times less than oncology.”

There are promising addiction drug candidates being studied by academics, but without industry support they will never become medicines. If pharmaceutical companies spent just 10% of what they spend on obesity therapies, we might quickly make progress.

For example, GLP-1 medicines like Ozempic and Mounjaro have shown strong anti-addictive effects across substances. Randomized trials and real-world patient health record studies show dramatic drops in consumption of drugs and alcohol among patients taking a GLP-1. Many addiction scientists now consider these compounds the biggest breakthrough in decades. However, Novo Nordisk and Eli Lilly, which own the drugs currently on the market, do not plan to run phase 3 addiction trials on them due to fear of adverse events in substance use disorder populations. The result is that a huge medical opportunity is stuck in limbo indefinitely. Fortunately, Lilly has recently signaled that it will run trials on related compounds, but those compounds remain years from approval.

Conversations with industry leaders make clear that large pharmas avoid SUD indications for several reasons. First, the upside appears limited, since current SUD medications have modest sales. Second, as with other psychiatric disorders, drug development is challenging given the range and complexity of neurological targets and the logistical difficulty of recruiting people with substance use disorder as trial participants. Finally, companies face downside reputational and regulatory risk if participants, who face high baseline rates of death from overdose regardless, were to die in trials. In the case of Ozempic and Mounjaro, sponsors face an obstacle some have termed the “problem of new uses”: clinical trials of an already lucrative drug for a new indication carry downside risk if new side effects or adverse events are reported.

[Figure 1. Image from Charting the fourth wave, based on CDC data]

Plan of Action

Market Shaping Interventions

Recommendation 1. Expand the FDA priority review voucher (PRV) program to include addiction medicine indications.

The FDA priority review voucher (PRV) program incentivizes development of drugs for rare pediatric and infectious diseases by rewarding companies that get such drugs approved with a transferable voucher that accelerates FDA review. These vouchers currently sell for an average of $100M. The PRV program doesn’t cost the government any money, but it makes drug development in the designated categories much more lucrative. The program has proven very successful, leading to a surge in approvals of medications in the covered categories.

As a neglected market with urgent unmet medical and public health needs, and one whose treatments promise to benefit the broader public by reducing the negative externalities of addiction, addiction medicine is a natural fit for the PRV program. Expanding the program to cover it could transform the field’s broken market dynamics. The advantage of the PRV program is that it does not require substantial new congressional appropriations, though it would require Congress to give the FDA authority to expand the program, as it has done previously to add other disease areas.

Recommendation 2. Extend exclusivity for addiction medicines and incentivize pursuit of new indications

Market exclusivity is a primary driver of pharmaceutical industry revenue. Extending exclusivity would have a very large effect on industry behavior and is needed to create sufficient incentives. The duration of exclusivity for alcohol use disorder, opioid use disorder, stimulant use disorder, and smoking cessation indications should be extended, along with other incentives.

For precedent, there are already a number of FDA programs that extend medication exclusivity, including ‘orphan drug exclusivity’ and the qualified infectious disease product (QIDP) program. Like the markets for rare-disease drugs and antibiotics, the market for addiction medicines requires incentives to function effectively. In addition, given the negative externalities of addiction, successful treatments have public benefits beyond their direct medical impact and deserve additional public incentives.

Recommendation 3. Modernize FDA Standards of Efficacy for Substance Use Disorder Trials

A significant barrier to pharmaceutical innovation in SUDs is the outdated or unpredictable efficacy standards the FDA sometimes sets for clinical trials. Efficacy expectations for substance use disorder indications are often rooted in abstinence-only and other binary outcome measures that the scientific and medical community has moved beyond when evaluating substance use and its harms.

This article in the American Journal of Drug and Alcohol Abuse demonstrates that binary outcome measures like ‘no heavy drinking days’ (NHDD) can underestimate the efficacy of treatments. This recent report from NIAAA on alcohol trial endpoints recommends a shift away from abstinence-based endpoints and toward more meaningful consumption-based endpoints. The FDA should adopt this approach for all SUD treatments, not just alcohol use disorder.

There are some indications that the FDA has begun modernizing its approach. This recent paper from NIH and FDA on smoking cessation therapies provides updated guidance that moves in the right direction.

More broadly, the FDA should work to adopt endpoints and standards of efficacy that mirror those in other disease areas. This shift is best achieved through new guidance or statements issued by the FDA, which would give pharmaceutical companies positive assurance that they have achievable paths to approval. Predictability throughout the medication development life cycle is essential for companies considering investment.

Congress should include statements in upcoming appropriations and authorizations that state:

  1. The FDA should adopt non-binary standards of efficacy for addiction treatments that are aligned with the standards for other common disorders. Within 12 months, the FDA shall report on the standards employed for substance use disorder relative to other prevalent chronic conditions, describe steps to eliminate disparities in evidentiary standards, and issue new guidance on the subject.
  2. The FDA should publish clear guidance on endpoints across SUDs to support planning among pharmaceutical companies considering work in this field.

Conclusion

Sustained focus and investment in diabetes and heart disease treatments have enabled medical breakthroughs. Addiction medicine, by contrast, has been largely stagnant for decades. Stimulating private-sector interest in addiction medicine through regulatory and exclusivity incentives, as well as modernized efficacy standards, is essential for disrupting the status quo. Breakthroughs in addiction medicine could save hundreds of thousands of lives in the US and provide long-term relief for one of our most intractable social problems. Given the negative externalities of addiction, this would also have enormous benefits for society at large, reducing crime and intergenerational trauma and saving money on social services and law enforcement.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Frequently Asked Questions
Why doesn’t the private sector target SUD? Why is government incentive necessary?

Per author conversations with industry leaders, private sector interest in SUD medication development is limited for the following reasons:



  • The upside of pursuing SUD indications appears limited, since current SUD medications, which are generally targeted for specific substances, have modest sales.

  • Even with preliminary evidence that GLP-1 drugs may be efficacious for some SUD indications (e.g., alcohol, opiates, and tobacco), companies are reluctant to pursue label expansion for SUD. As described previously, with already lucrative drugs, companies face a downside risk (termed the “problem of new uses”) from running large clinical trials and possibly uncovering new side effects or incurring random adverse events, which could harm their reputation and existing markets.

  • In the specific case of SUD, this downside risk might be especially large, since people with substance use disorder have high baseline rates of overdose and death.


Moreover, there is an argument that a treatment for SUD is a public good to the degree that it ameliorates the negative externalities of addiction, which strengthens the case for public-sector incentives for SUD treatment. The end result is that medical treatments for SUD are stuck in an indefinite limbo, with private-sector interest, as documented previously, remaining very low.

Why are we optimistic about SUD medications?

The current lack of effective and widely used SUD medications is disheartening, but it has occurred in the context of private-sector disinterest and scant funding. Even modest successes in SUD treatment have the potential to kickstart an innovation loop, akin to the rush of biotech companies hastening to enter the obesity treatment field. Prior to the success of the GLP-1 drugs, obesity treatment had been moribund and was viewed pessimistically in light of drugs that had limited efficacy or had been withdrawn for side effects like suicidality or cardiovascular issues.


An SUD success like GLP-1 for obesity has the potential to kindle a similar rush of interest; the challenge is initiating that cascade. Given the very low levels of investment in SUD treatments, there is likely low-hanging fruit that, given sufficient funding, could be trialed and deployed.

What are the innovations in the illicit drug market?

There has been rapid innovation in the field of addiction, but it’s been happening on the wrong side: addiction-inducing technologies are becoming more powerful, while SUD treatments have largely stagnated. This innovation is most evident in synthetic opioids and methamphetamine.


Compared to heroin, fentanyl is about 25x stronger (on a per-weight basis) and hence much easier to smuggle. As the Commission on Combating Synthetic Opioid Trafficking put it:


Single-digit metric tonnage of pure fentanyl is not a large amount and could easily fit into a shipping container or a truck trailer, which seriously challenges interdiction…Perhaps as much as 5 MT [metric tons] of pure fentanyl would be needed to satisfy the entire annual U.S. consumption for illegally supplied opioids.


Moreover, as a recent Scientific American article documented, innovations in fentanyl production, including the use of safer precursors and methods that don’t require sophisticated equipment, mean that fentanyl production is now decentralized, and resistant to attempts by law enforcement to shut it down.


As fentanyl has come to dominate the opioid supply over the past 10 years, overdose deaths have risen dramatically. New synthetic opioids and non-opioids like xylazine are also becoming common.


At the same time, due to advances in production techniques in Mexico, methamphetamine production has skyrocketed in recent decades while purity has improved. Worst of all, unlike heroin, fentanyl is easily combined with meth and cocaine in pills and powder.


The DEA has highlighted the presence of “super labs” in Mexico capable of producing hundreds of pounds of meth per batch.


Together, these three innovations (fentanyl, cheap meth, and new combinations) have led to a 400% increase in overdose deaths in the past 20 years. Without equally powerful innovations to reduce addiction rates, we will never make long-term and sustainable progress.

Micro-ARPAs: Enhancing Scientific Innovation Through Small Grant Programs

The National Science Foundation (NSF) has long supported innovative scientific research through grant programs. Among these, the EAGER (Early-concept Grants for Exploratory Research) and RAPID (Rapid Response Research) grants are crucial in fostering early-stage questions and ideas. This memo proposes expanding and improving these programs by addressing their current limitations and leveraging the successful aspects of their predecessor program, the Small Grants for Exploratory Research (SGER) program, and other innovative funding models like the Defense Advanced Research Projects Agency (DARPA).

Current Challenges and Opportunities

The landscape of scientific funding has always been a balancing act between supporting established research and nurturing new ideas. Over the years, the NSF has played a pivotal role in maintaining this balance through various grant programs. One way the agency supports new ideas is through small, fast grants. The SGER program, active from 1990 to 2006, provided nearly 5,000 grants, with an average size of about $54,000. This program laid the groundwork for the current EAGER and RAPID grants, which took SGER’s place and were designed to support exploratory and urgent research, respectively. Using the historical data, researchers analyzed the effectiveness of the SGER program and found it wildly effective, with “transformative research results tied to more than 10% of projects.” The paper also found that the program was underutilized by NSF program officers, leaving open questions about why such an effective and relatively inexpensive mechanism was being overlooked.

Did the NSF learn anything from the paper? Probably not enough, according to the data.

In 2013, the year the SGER paper was published, roughly 2% of total NSF grant funding went towards EAGER and RAPID grants (which translated to more than 4% of the total NSF-funded projects that year). Except for a spike in RAPID grants in 2020 in response to the COVID-19 pandemic, there has been a steady decline in the volume, amount, and percentage of EAGER and RAPID grants over the ensuing decade. Over the past few years, EAGER and RAPID have barely exceeded 1% of the award budget. Despite the proven effectiveness of these funding mechanisms and their relative affordability, small, fast grantmaking has shrunk rather than grown over the past decade.
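To make those percentages concrete, here is a minimal back-of-the-envelope sketch; the NSF award-budget figure is an assumed order of magnitude for illustration, not a number from this memo.

```python
# Rough dollar translation of the 1-2% EAGER/RAPID share cited above.
# The award-budget figure is an assumed order of magnitude, not a memo figure.
nsf_award_budget = 8.5e9          # assumed annual NSF award budget, in dollars
eager_rapid_share = (0.01, 0.02)  # roughly 1-2% of awards, per the memo

low, high = (share * nsf_award_budget for share in eager_rapid_share)
print(f"Implied EAGER/RAPID spending: ${low/1e6:.0f}M-${high/1e6:.0f}M per year")
# ~$85M-$170M per year, in line with the later statement that half of typical
# EAGER/RAPID funding is roughly $50-100 million per year.
```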

There is a pressing need to support more high-risk, high-reward research through more flexible and efficient funding mechanisms. Increasing the small, fast grant capacity of the national research programs is an obvious place to improve, given the results of the SGER study and the fact that small grants are easier on the budget.

The current EAGER and RAPID grant programs, while effective, face administrative and cultural challenges that limit their scalability and impact. The reasons for their underuse remain poorly understood, but anecdotal insights from NSF program officers offer clues. The most plausible explanation is also the simplest: It’s difficult to prioritize small grants while juggling larger ones that carry higher stakes and greater visibility. While deeper, formal studies could further pinpoint the barriers, the lack of such research should not hinder the pursuit of bold, alternative strategies—especially when small grant programs offer a rare blend of impact and affordability.

Drawing inspiration from the ARPA model, which empowers program managers with funding discretion and contracting authority, there is an opportunity to revolutionize how small grants are administered. The ARPA approach, characterized by high degrees of autonomy and focus on high-risk, high-reward projects, has already inspired successful initiatives beyond its initial form in the Department of Defense (DARPA), like ARPA-E for energy and ARPA-H for health. A similar “Micro-ARPA” approach — in which dedicated, empowered personnel manage these funds — could be transformative for ensuring that small grant programs within NSF reach their full potential. 

Plan of Action

To enhance the volume, impact, and efficiency of small, fast grant programs, we propose the following:

  1. Establish a Micro-ARPA program with dedicated funding for small, flexible grants: The NSF should allocate 50% of the typical yearly funding for EAGER/RAPID grants — roughly $50–100 million per year — to a separate dedicated fund. This fund would use the existing EAGER/RAPID mechanisms for disbursing awards but be implemented through a programmatically distinct Micro-ARPA model that empowers dedicated project managers with more discretion and reduces the inherent tension between use of these streamlined mechanisms and traditional applications.
    1. By allocating approximately 50% of the current spend to this fund and using the existing EAGER/RAPID mechanisms within it, this fund would be unlikely to pull resources from other programs. It would instead set a floor for the use of these flexible frameworks while continuing to allow for their use in the traditional program-level manner when desired.
  2. Establish a Micro-ARPA program manager (PM) role: As compared to the current model, in which the allocation of EAGER/RAPID grants is a small subset of broader NSF program director responsibilities, Micro-ARPA PMs (who could be lovingly nicknamed “Micro-Managers”) should be hired or assigned within each directorate to manage the dedicated Micro-ARPA budgets. Allocating these small, fast grants should be their only job in the directorate, though it can and should be a part-time position per the needs of the directorate.
    1. Given the diversity of awards and domains that this officer may consider, they should be empowered to seek the advice of program-specific staff within their directorate as well as external reviewers when they see fit, but should not be required to make funding decisions in alignment with programmatic feedback. 
    2. Applications to the Micro-ARPA PM role should be competitive and open to scientists and researchers at all career levels. Based on our experience managing these programs at the Experiment Foundation, there is every reason to suspect that early-career researchers, community-based researchers, or other innovators from nontraditional backgrounds could be as good as or better than experienced program officers. Given the relatively low cost of the program, the NSF should open this role to a wide variety of participants to learn and study the outcomes.
  3. Evaluate: The agency should work with academic partners to design and implement clear metrics—similar to those used in the paper that evaluated the SGER program—to assess the programs’ decision-making and impacts. Findings should be regularly compiled and circulated to PMs to facilitate rapid learning and improvement. Based on evaluation of this program, and comparison to the existing approach to allocating EAGER/RAPID grants, relative funding quantities between the two can be reallocated to maximize scientific and social impact. 

Benefits

The proposed enhancements to the small grant programs will yield several key benefits:

  1. Increased innovation: By funding more early-stage, high-risk projects, we can accelerate scientific breakthroughs and technological advancements, addressing global challenges more effectively.
  2. Support for early-career scientists: Expanded grant opportunities will empower more early-career researchers to pursue innovative ideas, fostering a new generation of scientific leaders.
  3. Experience opportunity for program managers: Running Micro-ARPAs will provide an opportunity for new and emerging program manager talent to train and develop their skills with relatively smaller amounts of money.
  4. Platform for metascience research: The high volume of new Micro-ARPA PMs will create an opportunity to study the effective characteristics of program managers and translate them into insights for larger ARPA programs.
  5. Administrative efficiency: A streamlined, decentralized approach will reduce the administrative burden on both applicants and program officers, making the grant process more agile and responsive. Speedier grants could also help the NSF achieve its stated dwell-time goal of responding to 75% of proposals within six months, which it has failed to do consistently in recent years.

Conclusion

Small, fast grant programs are vital to supporting transformative research. By adopting a more flexible, decentralized model, we can significantly enhance their impact. The proposed changes will foster a more dynamic and innovative scientific ecosystem, ultimately driving progress and addressing urgent global challenges.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Frequently Asked Questions
Do small grants really matter?

Absolutely. The research supports it, but the stories bring it to life. Ask any scientist about the first grant they received for their own work, and you’ll often hear about a small, pivotal award that changed everything. These grants may not make headlines, but they ignite careers, foster innovation, and open doors to discovery.

Can this be done by reallocating existing budget and under existing authority?

Almost certainly within the existing budget. As for authority, it’s theoretically possible but politically fraught. NSF program officers already have the discretion to use RAPID and EAGER grants as they see fit, so in principle, a program officer could be directed to use only those mechanisms. That mandate would essentially transform their role into a Micro-ARPA program manager. The real challenge lies in the culture and practice of grant-making. There’s a reason that DARPA operates independently from the rest of the military branches’ research and development infrastructure.

Why would dedicated staffing and a Micro-ARPA program structure overcome administrative challenges?

In a word: focus. Program officers juggle large, complex grants that demand significant time and resources. Small grants, though impactful, can get lost in the shuffle. By dedicating staff to exclusively manage these smaller, fast grants, we create the conditions to test an important hypothesis: that administrative burden and competing priorities, not lack of interest, are the primary barriers to scaling small grant programs. It’s about clearing the runway so these grants can truly take off.

Why not just set goals for greater usage of EAGER and RAPID?

Encouraging greater use of EAGER and RAPID is a good start, but it’s not enough. We need to think bigger, trying alternative structures and dedicated programs that push the boundaries of what’s possible. Incremental change can help, but bold experiments are what transform systems.

Driving Equitable Healthcare Innovations through an AI for Medicaid (AIM) Initiative

Artificial intelligence (AI) has transformative potential in the public health space – in an era when millions of Americans have limited access to high-quality healthcare services, AI-based tools and applications can enable remote diagnostics, drive efficiencies in implementation of public health interventions, and support clinical decision-making in low-resource settings. However, innovation driven primarily by the private sector today may be exacerbating existing disparities by training models on homogeneous datasets and building tools that primarily benefit high socioeconomic status (SES) populations.

To address this gap, the Center for Medicare and Medicaid Innovation (CMMI) should create an AI for Medicaid (AIM) Initiative to distribute competitive grants to state Medicaid programs (in partnership with the private sector) for pilot AI solutions that lower costs and improve care delivery for rural and low-income populations covered by Medicaid. 

Challenge & Opportunity

In 2022, the United States spent $4.5 trillion on healthcare, accounting for 17.3% of total GDP. Despite spending far more on healthcare per capita than other high-income countries, the United States has significantly worse outcomes, including lower life expectancy, higher death rates due to avoidable causes, and less access to healthcare services. Further, the 80 million low-income Americans reliant on state-administered Medicaid programs often have below-average health outcomes and the least access to healthcare services.

AI has the potential to transform the healthcare system – but innovation solely driven by the private sector results in the exacerbation of the previously described inequities. Algorithms in general are often trained on datasets that do not represent the underlying population – in many cases, these training biases result in tools and models that perform poorly for racial minorities, people living with comorbidities, and people of low SES. For example, until January 2023, the model used to prioritize patients for kidney transplants systematically ranked Black patients lower than White patients – the race component was identified and removed due to advocacy efforts within the medical community. AI models, while significantly more powerful than traditional predictive algorithms, are also more difficult to understand and engineer, increasing the likelihood that such biases will be perpetuated.

Additionally, startups innovating in the digital health space today are not incentivized to develop solutions for marginalized populations. For example, in FY 2022, the top 10 startups focused on Medicaid received only $1.5B in private funding, while their Medicare Advantage (MA)-focused counterparts received over $20B. Medicaid’s lower margins are not attractive to investors, so digital health development targets populations that are already well-insured and have higher degrees of access to care.

The Federal Government is uniquely positioned to bridge the incentive gap between developers of AI-based tools in the private sector and American communities who would benefit most from said tools. Accordingly, the Center for Medicare and Medicaid Innovation (CMMI) should launch the AI for Medicaid (AIM) Initiative to incentivize and pilot novel AI healthcare tools and solutions targeting Medicaid recipients. Precedents in other countries demonstrate early success in state incentives unlocking health AI innovations – in 2023, the United Kingdom’s National Health Service (NHS) partnered with Deep Medical to pilot AI software that streamlines services by predicting and mitigating missed appointment risk. The successful pilot is now being adopted more broadly and is projected to save the NHS over $30M annually in the coming years. 

The AIM Initiative, guided by the structure of the former Medicaid Innovation Accelerator Program (IAP), President Biden’s executive order on integrating equity into AI development, and HHS’ Equity Plan (2022), will encourage the private sector to partner with State Medicaid programs on solutions that benefit rural and low-income Americans covered by Medicaid and drive efficiencies in the overall healthcare system. 

Plan of Action

CMMI will launch and operate the AIM Initiative within the Department of Health and Human Services (HHS). $20M of HHS’ annual budget request will be allocated towards the program. State Medicaid programs, in partnership with the private sector, will be invited to submit proposals for competitive grants. In addition to funding, CMMI will leverage the former structure of the Medicaid IAP program to provide state Medicaid agencies with technical assistance throughout their participation in the AIM Initiative. The programs ultimately selected for pilot funding will be monitored and evaluated for broader implementation in the future. 

Sample Detailed Timeline

Risks and Limitations

Conclusion

The AI for Medicaid Initiative is an important step in ensuring the promise of artificial intelligence in healthcare extends to all Americans. The initiative will enable the piloting of a range of solutions at a relatively low cost, engage with stakeholders across the public and private sectors, and position the United States as a leader in healthcare AI technologies. Leveraging state incentives to address a critical market failure in the digital health space can additionally unlock significant efficiencies within the Medicaid program and the broader healthcare system. The rural and low-income Americans reliant on Medicaid have too often been an afterthought in access to healthcare services and technologies – the AIM Initiative provides an opportunity to address this health equity gap.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Strategies to Accelerate and Expand Access to the U.S. Innovation Economy

In 2020, we outlined a vision for how the incoming presidential administration could strengthen the nation’s innovation ecosystem, encouraging the development and commercialization of science and technology (S&T) based ventures. This vision entailed closing critical gaps from lab to market, with an emphasis on building a broadly inclusive pipeline of entrepreneurial talent while simultaneously providing key support in venture development. 

During the intervening years, we have seen extraordinary progress, in good part due to ambitious legislation. Today, we propose innovative ways that the federal government can successfully build on this progress and make the most of new programs. With targeted policy interventions, we can efficiently and effectively support the U.S. innovation economy through the translation of breakthrough scientific research from the lab to the market. The action steps we propose are predicated on three core principles: inclusion, relevance, and sustainability. Accelerating our innovation economy and expanding access to it can make our nation more globally competitive, increase economic development, address climate change, and improve health outcomes. A strong innovation economy benefits everyone. 

Challenge

Our Day One 2020 memo began by pitching the importance of innovation and entrepreneurship: “Advances in scientific and technological innovations—and, critically, the ability to efficiently transform breakthroughs into scalable businesses—have contributed enormously to American economic leadership over the past century.” Now, it is widely recognized that innovation and entrepreneurship are key to both global economic leadership and addressing the challenges of changing climate. The question is no longer whether we must innovate but rather how effectively we can stimulate and expand a national innovation economy. 

Since 2020, the global and U.S. economies have gone through massive change and uncertainty. The Global Innovation Index (GII) 2023 described the challenges of monitoring global innovation trends amid the uncertainty brought on by a sluggish economic recovery from the COVID-19 pandemic, elevated interest rates, and geopolitical tensions. Innovation indicators like scientific publications, research and development (R&D), venture capital (VC) investments, and the number of patents rose to historic levels, but the value of VC investment declined by close to 40%. As a counterweight to this extensive uncertainty, the GII 2023 described the future of S&T innovation and progress as “the promise of Digital Age and Deep Science innovation waves and technological progress.”

In the face of the pressures of global competitiveness, societal needs, and climate change, the clear way forward is to continue to innovate based on scientific and technical advancements. Meeting the challenges of our moment in history requires a comprehensive and multifaceted effort led by the federal government with many public and private partners.

Grow global competitiveness

Around the world, countries are realizing that investing in innovation is the most efficient way to transform their economies. In 2022, the U.S. had the largest R&D budget internationally, with spending growing by 5.6%, but China’s investment in R&D grew by 9.8%. For the U.S. to remain a global economic leader, we must continue to invest in innovation infrastructure, including the basic research and science, technology, engineering, and math (STEM) education that underpins our leadership, while we grow our investments in translational innovation. This includes reframing how existing resources are used as well as allocating new spending. It will require a systems change orientation and long-term commitments. 

Increase economic development

Supporting and growing an innovation economy is one of our best tools for economic development. From place-based innovation programs to investment in emerging research institutions (ERIs) and Minority-Serving Institutions (MSIs) to training S&T innovators to become entrepreneurs in I-Corps™, these initiatives stimulate local economies, create high-quality jobs, and reinvigorate regions of the country left behind for too long. 

Address climate change

In 2023, for the first time, global warming exceeded 1.5°C for an entire year. It is likely that all 12 months of 2024 will also exceed 1.5°C above pre-industrial temperatures. Nationally and internationally, we are experiencing the effects of climate change; climate mitigation, adaptation, and resilience solutions are urgently needed and will bring outsized economic and social impact.

Improve U.S. health outcomes

The COVID-19 pandemic was devastating, particularly impacting underserved and underrepresented populations, but it spurred unprecedented medical innovation and commercialization of new diagnostics, vaccines, and treatments. We must build on this momentum by applying what we’ve learned about rapid innovation to continue to improve U.S. health outcomes and to ensure that our nation’s health care needs across regions and demographics are addressed. 

Make innovation more inclusive

Representational disparities persist across racial/ethnic and gender lines in both access to and participation in innovation and entrepreneurship. This is a massive loss for our innovation economy. The business case for broader inclusion and diversity is growing even stronger, with compelling data tracking the relationship between leadership diversity and company performance. Inclusive innovation is more effective innovation: a multitude of perspectives and lived experiences are required to fully understand complex problems and create truly useful solutions. To reap the full benefits of innovation and entrepreneurship, we must increase access and pathways for all. 

Opportunity

With the new presidential administration in 2025, the federal government has a renewed opportunity to prioritize policies that will generate and activate a wave of powerful, inclusive innovation and entrepreneurship. Implementing such policies and funding the initiatives that result is crucial if we as a nation are to successfully address urgent problems such as the climate crisis and escalating health disparities. 

Our proposed action steps are predicated on three core principles: inclusion, relevance, and sustainability. 

Inclusion

One of this nation’s greatest and most unique strengths is our heterogeneity. We must leverage our diversity to meet the complexity of the substantial social and economic challenges that we face today. The multiplicity of our people, communities, identities, geographies, and lived experiences gives the U.S. an edge in the global innovation economy: When we bring all of these perspectives to the table, we better understand the challenges that we face, and we are better equipped to innovate to meet them. If we are to harness the fullness of our nation’s capacity for imagination, ingenuity, and creative problem-solving, entrepreneurship pathways must be inclusive, equitable, and accessible to all. Moreover, all innovators must learn to embrace complexity, think expansively and critically, and welcome perspectives beyond their own frame of reference. Collaboration and mutually beneficial partnerships are at the heart of inclusive innovation. 

Relevance

Innovators and entrepreneurs have the greatest likelihood of success—and the greatest potential for impact—when their work is purpose-driven, nimble, responsive to consumer needs, and adaptable to different applications and settings.  Research suggests that “breakthrough innovation” occurs when different actors bring complementary and independent skills to co-create interesting solutions to existing problems. Place-based innovation is one strategy to make certain that technology development is grounded in regional concerns and aspirations, leading to better outcomes for all concerned. 

Sustainability 

Multiple layers of sustainability should be integrated into the innovation and entrepreneurship landscape. First and most salient is supporting the development of innovative technologies that respond to the climate crisis and bolster national resilience. Second is encouraging innovators to incorporate sustainable materials and processes in all stages of research and development so that products benefit the planet and risks to the environment are mitigated through the manufacturing process, whether or not climate change is the focus of the technology. Third, it is vital to prioritize helping ventures develop sustainable business models that will result in long-term viability in the marketplace. Fourth, working with innovators to incorporate the potential impact of climate change into their business planning and projections ensures they are equipped to adapt to changing needs. All of these layers contribute to sustaining America’s social well-being and economic prosperity, ensuring that technological breakthroughs are accessible to all.

Proposed Action

Recommendation 1. Supply and prepare talent.

Continuing to grow the nation’s pipeline of S&T innovators and entrepreneurs is essential. Specifically, creating accessible entrepreneurial pathways in STEM will ensure equitable participation. Incentivizing individuals to become innovators-entrepreneurs, especially those from underrepresented groups, will strengthen national competitiveness by leveraging new, untapped potential across innovation ecosystems.

Expand the I-Corps model

By bringing together experienced industry mentors, commercial experts, research talent, and promising technologies, I-Corps teaches scientific innovators how to evaluate whether their innovation can be commercialized and how to take the first practical steps of bringing their product to market. Ten new I-Corps Hubs, launched in 2022, have expanded the network of engaged universities and collaborators, an important step toward growing an inclusive innovation ecosystem across the U.S. 

Interest in I-Corps far outpaces current capacity, and increasing access will create more expansive pathways for underrepresented entrepreneurs. New federal initiatives to support place-based innovation and to grow investment at ERIs and MSIs will be more successful if they also include lab-to-market training programs such as I-Corps. Federal entities should institute policies and programs that increase awareness about and access to sequenced venture support opportunities for S&T innovators. These opportunities should include intentional “de-risking” strategies through training, advising, and mentoring.

Specifically, we recommend expanding I-Corps capacity so that all interested participants can be accommodated. We should also strive to increase access to I-Corps so that programs reach diverse students and researchers. This is essential given the U.S. culture of entrepreneurship that remains insufficiently inclusive of women, people of color, and those from low-income backgrounds, as well as international students and researchers, who often face barriers such as visa issues or a lack of institutional support needed to remain in the U.S. to develop their innovations. Finally, we should expand the scope of what I-Corps offers, so that programs provide follow-on support, funding, and access to mentor and investor networks even beyond the conclusion of initial entrepreneurial training. 

I-Corps has already expanded beyond the National Science Foundation (NSF): I-Corps at the National Institutes of Health (NIH) empowers biomedical entrepreneurs, and Energy I-Corps, established by the Department of Energy (DOE), accelerates the deployment of energy technologies. We see the opportunity to grow I-Corps further by building on this existing infrastructure and creating cohorts funded by additional science agencies so that more basic research is translated into commercially viable businesses.

Close opportunity gaps by supporting emerging research institutions (ERIs) and Minority-Serving Institutions (MSIs)

ERIs and MSIs provide pathways to S&T innovation and entrepreneurship, especially for individuals from underrepresented groups. In particular, a VentureWell-commissioned report identified that “MSIs are centers of research that address the unique challenges and opportunities faced by BIPOC communities. The research that takes place at MSIs offers solutions that benefit a broad and diverse audience; it contributes to a deeper understanding of societal issues and drives innovation that addresses these issues.”

The recent codification of ERIs in the 2022 CHIPS and Science Act pulls this category into focus. Defining this group, which comprises thousands of higher education institutions,  was the first step in addressing the inequitable distribution of federal research funding. That imbalance has perpetuated regional disparities and impacted students from underrepresented groups, low-income students, and rural students in particular. Further investment in ERIs will result in more STEM-trained students, who can become innovators and entrepreneurs with training and engagement. Additional support that could be provided to ERIs includes increased research funding, access to capital/investment, capacity building (faculty development, student support services), industry partnerships, access to networks, data collection/benchmarking, and implementing effective translation policies, incentives, and curricula. 

Supporting these institutions—many of which are located in underserved rural or urban communities that experience underinvestment—provides an anchor for sustained talent development and economic growth. 

Recommendation 2. Support place-based innovation.

Place-based innovation not only spurs innovation but also builds resilience in vulnerable communities, enhancing both U.S. economic and national security. Communities that are underserved and underinvested in present vulnerabilities that hostile actors outside of the U.S. can exploit. Place-based innovation builds resilience: innovation creates high-quality jobs and brings energy and hope to communities that have been left behind, leveraging the unique strengths, ecosystems, assets, and needs of specific regions to drive economic growth and address local challenges.  

Evaluate and learn from transformative new investments

There have been historic levels of government investment in place-based innovation, funding the NSF’s Regional Innovation Engines awards and two U.S. Department of Commerce Economic Development Administration (EDA) programs: the Build Back Better Regional Challenge and Regional Technology and Innovation Hubs awards. The next steps are to refine, improve, and evaluate these initiatives as we move forward. 

Unify the evaluation framework, paired with local solutions

Currently, evaluating the effectiveness and outcomes of place-based initiatives is challenging, as benchmarks and metrics can vary by region. We propose a unified framework paired with solutions locally identified by and tailored to the specific needs of the regional innovation ecosystem. A functioning ecosystem cannot be simply overlaid upon a community but must be built by and for that community. The success of these initiatives requires active evaluation and incorporation of these learnings into effective solutions, as well as deep strategic collaboration at the local level, with support and time built into processes.   

Recommendation 3. Increase access to financing and capital.

Funding is the lifeblood of innovation. S&T innovation requires more investment and more time to bring to market than other types of ventures, and early-stage investments in S&T startups are often perceived as risky by those who seek a financial return. Bringing large quantities of early-stage S&T innovations to the point in the commercialization process where substantial private capital takes an interest requires nondilutive and patient government support. The return on investment that the federal government seeks is measured in companies successfully launched, jobs created, and useful technologies brought to market.

Disparities in access to capital for companies owned by women and underrepresented minority founders are well documented. The federal government has an interest in funding innovators and entrepreneurs from many backgrounds: they bring deep and varied knowledge and a multitude of perspectives to their innovations and to their ventures. This results in improved solutions and better products at lower prices for consumers. Increasing access to financing and capital is essential to our national economic well-being and to our efforts to build climate resilience.

Expand SBIR/STTR access and commercial impact

The SBIR and STTR programs spur innovation, bolster U.S. economic competitiveness, and strengthen the small business sector, but barriers persist. In a recent third-party assessment of the SBIR/STTR program at NIH, the second-largest administrator of SBIR/STTR funds, the committee found that outreach from the SBIR/STTR programs to underserved groups is not coordinated and that there has been little improvement in the share of applications from, or awards to, these groups in the past 20 years. Further, NIH follows the same processes used for awarding R01 research grants, using the same review criteria and typically the same reviewers, omitting important commercialization considerations.

To expand access and increase the commercialization potential of the SBIR/STTR program, funding agencies should foster partnerships with a broader group of organizations, conduct targeted outreach to potential applicants, offer additional application assistance, work with partners to develop mentorship and entrepreneur training programs, and increase the percentage of private-sector reviewers with entrepreneurial experience. Successful examples of SBIR/STTR support programs include the NSF Beat-The-Odds Boot Camp, Michigan’s Emerging Technologies Fund, and the SBIR/STTR Innovation Summit.

Provide entrepreneurship education and training

Initiatives like NSF Engines, Tech Hubs, Build-Back-Better Regional Challenge, the Minority Business Development Agency (MBDA) Capital Challenge, and the Small Business Administration (SBA) Growth Accelerator Fund expansion will all achieve more substantial results with supplemental training for participants in how to develop and launch a technology-based business. As an example of the potential impact, more than 2,500 teams have participated in I-Corps since the program’s inception in 2012. More than half of these teams, nearly 1,400, have launched startups that have cumulatively raised $3.16 billion in subsequent funding, creating over 11,000 jobs. Now is an opportune moment to widely apply similarly effective approaches. 

Launch a local investment education initiative

Angel investors typically provide the first private funding available to S&T innovators and entrepreneurs. These very early-stage funders give innovators access to needed capital, networks, and advice to get their ventures off the ground. We recommend that the federal government expand the definition of an accredited investor and incentivize regionally focused initiatives to educate policymakers and other regional stakeholders about best practices to foster more diverse and inclusive angel investment networks. With the right approach and support, there is the potential to engage thousands more high-net-worth individuals in early-stage investing, contributing their expertise and networks as well as their wealth.

Encourage investment in climate solutions

Extreme climate-change-attributed weather events such as floods, hurricanes, drought, wildfire, and heat waves cost the global economy an average of $143 billion annually. S&T innovations have the potential to help address the impacts of climate change at every level:

Given the global scope of the problem and the shared resources of affected communities, the federal government can be a leader in prioritizing, collaborating, and investing in solutions to direct and encourage S&T innovation for climate solutions. There is no question that climate adaptation technologies will be needed, but we must ensure that these solutions are technologies that create economic opportunity in the U.S. We encourage the expansion and regular appropriations of funding for successful climate programs across federal agencies, including the DOE Office of Technology Transitions’ Energy Program for Innovation Clusters, the National Oceanic and Atmospheric Administration’s (NOAA) Ocean-Based Climate Resilience Accelerators program, and the U.S. Department of Agriculture’s Climate Hubs.

Recommendation 4. Shift to a systems change orientation.

To truly stimulate a national innovation economy, we need long-term commitments in policy, practice, and regulations. Leadership and coordination from the executive branch of the federal government are essential to continue the positive actions already begun by the Biden-Harris Administration.  

These initiatives include: 

Policy

Signature initiatives like the CHIPS and Science Act, Infrastructure Investment and Jobs Act, and the National Quantum Initiative Act are already threatened by looming appropriations shortfalls. We need to fully fund existing legislation, with a focus on innovative and translational R&D. According to a report by PricewaterhouseCoopers, if the U.S. increased federal R&D spending to 1% of GDP by 2030, the nation could support 3.4 million jobs and add $301 billion in labor income, $478 billion in economic value, and $81 billion in tax revenue. Beyond funding, we propose supporting innovative policies to bolster U.S. innovation capacity at the local and national levels. This includes providing R&D tax credits to spur research collaboration between industry and universities and labs, providing federal matching funds for state and regional technology transfer and commercialization efforts, and revising the tax code to support innovation by research-intensive, pre-revenue companies.

Practice

The University and Small Business Patent Procedures Act of 1980, commonly known as the Bayh-Dole Act, allows recipients of federal research funding to retain rights to inventions conceived or developed with that funding. The academic tech transfer system created by the Bayh-Dole Act (codified as amended at 35 U.S.C. §§ 200-212) generated nearly $1.3 trillion in economic output, supported over 4.2 million jobs, and launched over 11,000 startups. We should preserve the Bayh-Dole Act as a means to promote commercialization and prohibit the consideration of specific factors, such as price, in march-in determinations.

In addition to the continual practice and implementation of successful laws such as Bayh-Dole, we must repurpose resources to support innovation and the high-value jobs that result from S&T innovation. We believe the new administration should allocate a share of federal funding to promote technology transfer and commercialization and better incentivize commercialization activities at federal labs and research institutes. This could include new programs such as mentoring programs for researcher entrepreneurs and student entrepreneurship training programs. Incentives include evaluating the economic impact of lab-developed technology by measuring commercialization outcomes in the annual Performance Evaluation and Management Plans of federal labs, establishing stronger university entrepreneurship reporting requirements to track and reward universities that create new businesses and startups, and incentivizing universities to focus more on commercialization activities as part of faculty promotion and tenure.

Regulations

A common cause of lab-to-market failure is the inability to secure regulatory approval, particularly for novel technologies in nascent industries. Regulation can limit potentially innovative paths, increase innovation costs, and create a compliance burden on businesses that stifles innovation. Regulation can also spur innovation by enabling the management of risk. In 1976, the Cambridge (Massachusetts) City Council became the first jurisdiction to regulate recombinant DNA, issuing the first genetic engineering license and paving the way for the first biotech companies. Now Boston/Cambridge is the world’s largest biotech hub: home to over 1,000 biotech companies, 21% of all VC biotech investments, and 15% of the U.S. drug development pipeline.

To advance innovation, we propose two specific regulatory actions:

Conclusion

To maintain its global leadership role, the United States must invest in the individuals, institutions, and ecosystems critical to a thriving, inclusive innovation economy. This includes mobilizing access, inclusion, and talent through novel entrepreneurship training programs; investing, incentivizing, and building the capacity of our research institutions; and enabling innovation pathways by increasing access to capital, networks, and resources.

Fortunately, there are several important pieces of legislation recommitting the United States to bold S&T goals, although many of the necessary resources have yet to be committed to those efforts. As a society, we benefit when federally supported innovation efforts tackle big problems that are beyond the scope of single ventures, notably the many challenges arising from climate change. A stronger, more inclusive innovation economy benefits the users of S&T-based innovations, individual innovators, and the nation as a whole.

When we intentionally create pathways to innovation and entrepreneurship for underrepresented individuals, we build on our strengths. In the United States, our strength has always been our people, who bring problem-solving abilities from a multitude of perspectives and settings. We must unleash their entrepreneurial power and become, even more, a country of innovators.

Earlier memo contributors Heath Naquin and Shaheen Mamawala (2020) were not involved with this 2024 memo.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Policy Experiment Stations to Accelerate State and Local Government Innovation

The federal government transfers approximately $1.1 trillion every year to state and local governments. Yet most states and localities are not evaluating whether the programs deploying these funds are increasing community well-being. Similarly, achieving important national goals like increasing clean energy production and transmission often requires not only congressional but also state and local policy reform. Yet many states and localities are not implementing the evidence-based policy reforms necessary to achieve these goals.

State and local government innovation is a problem not only of politics but also of capacity. State and local governments generally lack the technical capacity to conduct rigorous evaluations of the efficacy of their programs, search for reliable evidence about programs evaluated in other contexts, and implement the evidence-based programs with the highest chances of improving outcomes in their jurisdictions. This lack of capacity severely constrains the ability of state and local governments to use federal funds effectively and to adopt more effective ways of delivering important public goods and services. To date, efforts to increase the use of evaluation evidence in federal agencies (including the passage of the Evidence Act) have not meaningfully supported the production and use of evidence by state and local governments.

Despite an emerging awareness of the importance of state and local government innovation capacity, there is a shortage of plausible strategies to build that capacity. In the words of journalist Ezra Klein, we spend “too much time and energy imagining the policies that a capable government could execute and not nearly enough time imagining how to make a government capable of executing them.”

Yet an emerging body of research is revealing that an effective strategy to build government innovation capacity is to partner government agencies with local universities on scientifically rigorous evaluations of the efficacy of their programs, curated syntheses of reliable evaluation evidence from other contexts, and implementation of evidence-based programs with the best chances of success. Leveraging these findings, along with recent evidence of the striking efficacy of the national network of university-based “Agriculture Experiment Stations” established by the Hatch Act of 1887, we propose a national network of university-based “Policy Experiment Stations” or policy innovation labs in each state, supported by continuing federal and state appropriations and tasked with accelerating state and local government innovation.  

Challenge

Advocates of abundance have identified “failed public policy” as an increasingly significant barrier to economic growth and community flourishing. Of particular concern are state and local policies and programs, including those powered by federal funds, that do not effectively deliver critically important public goods and services like health, education, safety, clean air and water, and growth-oriented infrastructure.

Part of the challenge is that state and local governments lack capacity to conduct rigorous evaluations of the efficacy of their policies and programs. For example, the American Rescue Plan, the largest one-time federal investment in state and local governments in the last century, provided $350 billion in State and Local Fiscal Recovery Funds to state, territorial, local, and Tribal governments to accelerate post-pandemic economic recovery. Yet very few of those investments are being evaluated for efficacy. In a recent survey of state policymakers, 59% of those surveyed cited “lack of time for rigorous evaluations” as a key obstacle to innovation. State and local governments also typically lack the time, resources, and technical capacity to canvass evaluation evidence from other settings and assess whether a program proven to improve outcomes elsewhere might also improve outcomes locally. Finally, state and local governments often don’t adopt more effective programs even when they have rigorous evidence that these programs are more effective than the status quo, because implementing new programs disrupts existing workflows. 

If state and local policymakers don’t know what works and what doesn’t, and/or aren’t able to overcome even relatively minor implementation challenges when they do know what works, they won’t be able to spend federal dollars more effectively, or more generally to deliver critical public goods and services.

Opportunity

A growing body of research on government innovation is documenting factors that reliably increase the likelihood that governments will implement evidence-based policy reform. First, government decision makers are more likely to adopt evidence-based policy reforms when they are grounded in local evidence and/or recommended by local researchers. Boston-based researchers sharing a Boston-based study showing that relaxing density restrictions reduces rents and house prices will do less to convince San Francisco decision makers than either a San Francisco-based study, or San Francisco-based researchers endorsing the evidence from Boston. Proximity matters for government innovation.

Second, government decision makers are more likely to adopt evidence-based policy reforms when they are engaged as partners in the research projects that produce the evidence of efficacy, helping to define the set of feasible policy alternatives and design new policy interventions. Research partnerships matter for government innovation.

Third, evidence-based policies are significantly more likely to be adopted when the policy innovation is part of an existing implementation infrastructure, or when agencies receive dedicated implementation support. This means that moving beyond incremental policy reforms will require that state and local governments receive more technical support in overcoming implementation challenges. Implementation matters for government innovation. 

We know that the implementation of evidence-based policy reform produces returns for communities that have been estimated to be on the order of 17:1. Our partners in government have voiced their direct experience of these returns. In Puerto Rico, for example, decision makers in the Department of Education have attributed the success of evidence-based efforts to help students learn to the “constant communication and effective collaboration” with researchers who possessed a “strong understanding of the culture and social behavior of the government and people of Puerto Rico.” Carrie S. Cihak, the evidence and impact officer for King County, Washington, likewise observes, 

“It is critical to understand whether the programs we’re implementing are actually making a difference in the communities we serve. Throughout my career in King County, I’ve worked with  County teams and researchers on evaluations across multiple policy areas, including transportation access, housing stability, and climate change. Working in close partnership with researchers has guided our policymaking related to individual projects, identified the next set of questions for continual learning, and has enabled us to better apply existing knowledge from other contexts to our own. In this work, it is essential to have researchers who are committed to valuing local knowledge and experience–including that of the community and government staff–as a central part of their research, and who are committed to supporting us in getting better outcomes for our communities.” 

The emerging body of evidence on the determinants of government innovation can help us define a plan of action that galvanizes the state and local government innovation necessary to accelerate regional economic growth and community flourishing. 

Plan of Action 

An evidence-based plan to increase state and local government innovation needs to facilitate and sustain durable partnerships between state and local governments and neighboring universities to produce scientifically rigorous policy evaluations, adapt evaluation evidence from other contexts, and develop effective implementation strategies. Over a century ago, the Hatch Act of 1887 created a remarkably effective and durable R&D infrastructure aimed at agricultural innovation, establishing university-based Agricultural Experiment Stations (AES) in each state tasked with developing, testing, and translating innovations designed to increase agricultural productivity. 

Locating university-based AES in every state ensured the production and implementation of locally-relevant evidence by researchers working in partnership with local stakeholders. Federal oversight of the state AES by an Office of Experiment Stations in the US Department of Agriculture ensured that work was conducted with scientific rigor and that local evidence was shared across sites. Finally, providing stable annual federal appropriations for the AES, with required matching state appropriations, ensured the durability and financial sustainability of the R&D infrastructure. This infrastructure worked: agricultural productivity near the experiment stations increased by 6% after the stations were established.

Congress should develop new legislation to create and fund a network of state-based “Policy Experiment Stations.”

The 119th Congress, which convenes on January 3, 2025, can adapt the core elements of the proven-effective network of state-based Agricultural Experiment Stations to accelerate state and local government innovation. Mimicking the structure of 7 USC 14, federal grants to states would support university-based “Policy Experiment Stations” or policy innovation labs in each state, tasked with partnering with state and local governments on (1) scientifically rigorous evaluations of the efficacy of state and local policies and programs; (2) translations of evaluation evidence from other settings; and (3) overcoming implementation challenges.

As in 7 USC 14, grants to support state policy innovation labs would be overseen by a federal office charged with ensuring that work was conducted with scientific rigor and that local evidence was shared across sites. We see two potential paths for this oversight function, paths that in turn would influence legislative strategy.

Pathway 1: This oversight function could be located in the Office of Evaluation Sciences (OES) in the General Services Administration (GSA). In this case, the congressional committees overseeing GSA, namely the House Committee on Oversight and Accountability and the Senate Committee on Homeland Security and Governmental Affairs, would craft legislation providing for an appropriation to GSA to support a new OES grants program for university-based policy innovation labs in each state. The advantage of this structure is that OES is a highly respected locus of program and policy evaluation expertise.

Pathway 2: Oversight could instead be located in the Directorate of Technology, Innovation, and Partnerships in the National Science Foundation (NSF TIP). In this case, the House Committee on Science, Space, and Technology and the Senate Committee on Commerce, Science, and Transportation would craft legislation providing for a new grants program within NSF TIP to support university-based policy innovation labs in each state. The advantage of this structure is that NSF is a highly respected grant-making agency. 

Either of these paths is feasible with bipartisan political will. Alternatively, there are unilateral steps that could be taken by the incoming administration to advance state and local government innovation. For example, the Office of Management and Budget (OMB) recently released updated Uniform Grants Guidance clarifying that federal grants may be used to support recipients’ evaluation costs, including “conducting evaluations, sharing evaluation results, and other personnel or materials costs related to the effective building and use of evidence and evaluation for program design, administration, or improvement.” The Uniform Grants Guidance also requires federal agencies to assess the performance of grant recipients, and further allows federal agencies to require that recipients use federal grant funds to conduct program evaluations. The incoming administration could further update the Uniform Grants Guidance to direct federal agencies to require that state and local government grant recipients set aside grant funds for impact evaluations of the efficacy of any programs supported by federal funds, and further clarify the allowability of subgrants to universities to support these impact evaluations.

Conclusion

Establishing a national network of university-based “Policy Experiment Stations” or policy innovation labs in each state, supported by continuing federal and state appropriations, is an evidence-based plan to facilitate abundance-oriented state and local government innovation. We already have impressive examples of what these policy labs might be able to accomplish. At MIT’s Abdul Latif Jameel Poverty Action Lab North America, the University of Chicago’s Crime Lab and Education Lab, the University of California’s California Policy Lab, and Harvard University’s The People Lab, to name just a few, leading researchers partner with state and local governments on scientifically rigorous evaluations of the efficacy of public policies and programs, the translation of evidence from other settings, and overcoming implementation challenges, leading in several cases to evidence-based policy reform. Yet effective as these initiatives are, they are largely supported by philanthropic funds, an infeasible strategy for national scaling.

In recent years we’ve made massive investments in communities through federal grants to state and local governments. We’ve also initiated ambitious efforts at growth-oriented regulatory reform which require not only federal but also state and local action. Now it’s time to invest in building state and local capacity to deploy federal investments effectively and to galvanize regional economic growth. Emerging research findings about the determinants of government innovation, and about the efficacy of the R&D infrastructure for agricultural innovation established over a century ago, give us an evidence-based roadmap for state and local government innovation.


Promoting American Resilience Through a Strategic Investment Fund

Critical minerals, robotics, advanced energy systems, quantum computing, biotechnology, shipbuilding, and space are some of the resources and technologies that will define the economic and security climate of the 21st century. However, the United States is at risk of losing its edge in these technologies of the future. For instance, China processes the vast majority of the world’s critical minerals, manufactures most of its batteries, and has successfully launched a quantum communications satellite. The implications are enormous: the U.S. relies on its qualitative technological edge to fuel productivity growth, improve living standards, and maintain the existing global order. Indeed, the Inflation Reduction Act (IRA) and CHIPS Act were largely reactive moves to shore up atrophied manufacturing capabilities in the American battery and semiconductor industries, requiring hundreds of billions in outlays to catch up. In an ideal world, critical industries would be sufficiently funded well in advance to avoid economically costly catch-up spending.

However, many of these technologies are characterized by long timelines, significant capital expenditures, and low and uncertain profit margins, presenting major challenges for private-sector investors who are required by their limited partners (capital providers such as pension funds, university endowments, and insurance companies) to underwrite to a certain risk-adjusted return threshold. This stands in contrast to technologies like artificial intelligence and pharmaceuticals: While both are also characterized by large upfront investments and lengthy research and development timelines, the financial payoffs are far clearer, incentivizing the private sector to play a leading role in commercialization. For technologies in economically and geopolitically vital industries such as lithium processing and chips, this issue is most acute in the “valley of death,” when companies require scale-up capital for an early commercialization effort: the capital required is too large for traditional venture capital, yet too risky for traditional project finance.

The United States needs a strategic investment fund (SIF) to shepherd promising technologies in nationally vital sectors through the valley of death. An American SIF is not intended to provide subsidies, pick political winners or losers, or subvert the role of private capital markets. On the contrary, its role would be to “crowd in” capital by managing risk in ways that no private or philanthropic entity can. In doing so, an SIF would ensure that the U.S. maintains an edge in critical technologies, promoting economic dynamism and national security in an agile, cost-efficient manner. 

Challenges

The Need for Private Investment 

A handful of resources and technologies, some of which have yet to be fully characterized, have the potential to play an outsized role in the future economy. Most of these key technologies have meaningful national security implications.

Since ChatGPT’s release in November 2022, artificial intelligence (AI) has experienced a commercial renaissance that has captured the public’s imagination and huge sums of venture dollars, as evidenced by OpenAI’s October 2024 $6.5 billion round at a $150 billion pre-money valuation. However, AI is not the only critical resource or technology that will power the future economy, and many of those critical resources and technologies may struggle to attract the same level of private investment. Consider the following:

Few other sectors receive the level of consistent venture attention that software technology, most recently AI, has received over the last 18 months. However, this does not make them unbackable or unimportant; on the contrary, technologies that increase mineral recovery yields or make drone engines cheaper should receive sufficient support to get to scale. While private-sector capital markets have supported the development of many important industries, they are not perfect and may miss important opportunities due to information asymmetries and externalities.

Overcoming the Valley of Death

Many strategically important technologies are characterized by high upfront costs and low or uncertain margins, which tends to dissuade investment by private-sector organizations at key inflection points, namely, the “valley of death.”

By their nature, innovative technologies are complex and highly uncertain. However, some factors make future economic value—and therefore financeability—more difficult to ascertain than others. For example, innovative battery technologies that enable long-term storage of energy generated from renewables would greatly improve the economics of utility-scale solar and wind projects. However, this requires production at scale in the face of potential competition from low-cost incumbents. In addition, there is the element of scientific risk itself, as well as the question of customer adoption and integration. There are many good reasons why technologies and companies that seem feasible, economical, and societally valuable do not succeed.

These dynamics result in lopsided investment allocations. In the early stages of innovation, venture capital is available to fund startups with the promise of outsized return driven partially by technological hype and partially by the opportunity to take large equity stakes in young companies. At the other end of the barbell, private equity and infrastructure capital are available to mature companies seeking an acquisition or project financing based on predictable cash flows and known technologies. 

However, gaps appear in the middle as capital requirements increase (often by an order of magnitude) to support the transition to early commercialization. This phenomenon is called the “valley of death” as companies struggle to raise the capital they need to get to scale given the uncertainties they face.

Figure 1. The “valley of death” describes the mismatch between existing financial structures and capital requirements in the crucial early commercialization phase. (Source: Maryland Energy Innovation Accelerator)

Shortcomings of Federal Subsidies

While the federal government has provided loans and subsidies in the past, its programs remain highly reactive and require large amounts of funding.

Aside from asking investors to take on greater risk and lower returns, there are several tools in place to ameliorate the valley of death. The IRA is one such example: It appropriated some $370 billion for climate-related spending with a range of instruments, including tax subsidies for renewable energy production, low-cost loans through organizations such as the Department of Energy’s Loan Program Office (LPO), and discretionary grants.

However, there are major issues with this approach. First, funding is fragmented across many funding calls that tend to be slow, opaque, and costly. Indeed, it is difficult to keep track of available resources, funding announcements, and key requirements—just try searching for a comprehensive, easy-to-understand list of opportunities.

More importantly, these funding mechanisms are simply expensive. The U.S. does not have the financial capacity to support an IRA or CHIPS Act for every industry, nor should it go down that route. While one could argue that these bills reflect the true cost of achieving the stated policy aims of energy transition or securing the semiconductor supply chain, it is also the case that both knowledge capabilities (engineering expertise) and capital capabilities (manufacturing facilities) underpin these technologies. Allowing these networks to atrophy created greater costs down the road, which could have been prevented by targeted investments at the right points of development.

The Future Is Dynamic

The future is not perfectly knowable, and new technological needs may arise that change priorities or solve previous problems. Therefore, agility and constant re-evaluation are essential.

Technological progress is not static. Take the concept of peak oil: For decades, many of the world’s leading geologists and energy forecasters believed that the world would quickly run out of oil reserves as the easiest-to-extract resources were depleted. In reality, technological advances in chemistry, surveying, and drilling enabled hydraulic fracturing (fracking) and horizontal drilling, creating access to “unconventional reserves” that substantially increased fossil fuel supply.

Figure 2. In 1956, M.K. Hubbert proposed the “peak oil” theory, projecting that global oil production would peak around the turn of the millennium.

Fracking greatly expanded fossil fuel production in the U.S., increasing resource supply, securing greater energy independence, and facilitating the transition from coal to natural gas, whose expansion has proved to be a helpful bridge towards renewable energy generation. This transition would not have been possible without a series of technological innovations—and highly motivated entrepreneurs—that arose to meet the challenge of energy costs.

To meet the challenges of tomorrow, policymakers need tools that provide them with flexible and targeted options as well as sufficient scale to make an impact on technologies that might need to get through the valley of death. However, they need to remain sufficiently agile so as not to distort well-functioning market forces. This balance is challenging to achieve and requires an organizational structure, authorizations, and funding mechanisms that are sufficiently nimble to adapt to changing technologies and markets.

Opportunity

Given these challenges, it seems unlikely that solutions that rely solely on the private sector will bridge the commercialization gap in a number of capital-intensive strategic industries. On the other hand, existing public-sector tools, such as grants and subsidies, are too costly to implement at scale for every possible externality and are generally too retrospective in nature rather than forward-looking. The government can be an impactful player in bridging the innovation gap, but it needs to do so cost-efficiently.

An SIF is a promising potential solution to the challenges posed above. By its nature, an SIF would have a public mission focused on helping strategic technologies cross the valley of death by using targeted interventions and creative financing structures that crowd in private investors. This would enable the government to more sustainably fund innovation, maintain a light touch on private companies, and support key industries and technologies that will define the future global economic and security outlook.

Plan of Action

Recommendation 1. Shepherd technologies through the valley of death. 

While the SIF’s investment managers are expected to make the best possible returns, this is secondary to the overarching public policy goal of ensuring that strategically and economically vital technologies have an opportunity to get to commercial scale.

The SIF is meant to crowd in capital such that we achieve broader societal gains—and eventually, market-rate returns—enabled by technologies that would not have survived without timely and well-structured funding. This creates tension between two competing goals: The SIF needs to act as if it intends to make returns, or else there is the potential for moral hazard and complacency. However, it also has to be willing to not make market-rate returns, or even lose some of its principal, in the service of broader market and ecosystem development. 

Thus, it needs to be made explicitly clear from the beginning that an SIF has the intent of achieving market rate returns by catalyzing strategic industries but is not mandated to do so. One way to do this is to adopt a 501(c)(3) structure that has a loose affiliation to a department or agency, similar to that of In-Q-Tel. Excess returns could either be recycled to the fund or distributed to taxpayers.

The SIF should adapt the practices, structures, and procedures of established private-sector funds. It should have a standing investment committee made up of senior stakeholders across various agencies and departments (expanded upon below). Its day-to-day operations should be conducted by professionals who provide a range of experiences, including investing, engineering and technology, and public policy across a spectrum of issue areas. 

In addition, the SIF should develop clear underwriting criteria and outputs for each investment. These include, but are not limited to, identifying the broader market and investment thesis, projecting product penetration, and developing potential return scenarios based on different permutations of outcomes. More critically, each investment needs to create a compelling case for why the private sector cannot fund commercialization on its own and why public catalytic funding is essential.

Recommendation 2. The SIF should have a permanent authorization to support innovation under the Department of Commerce. 

The SIF should be affiliated with the Department of Commerce but work closely with other departments and agencies, including the Department of Energy, Department of Treasury, Department of Defense, Department of Health and Human Services, National Science Foundation, and National Economic Council.

Strategic technologies do not fall neatly into one sector and cut across many customers. Siloing funding in different departments misses the opportunity to capture funding synergies and, more importantly, develop priorities that are built through information sharing and consensus. Enter the Department of Commerce. In addition to administering the National Institute of Standards and Technology, it has a strong history of working across agencies, such as with the CHIPS Act.

Similar arguments can also be made for the Treasury, and it may even be possible to have Treasury and Commerce work together to manage an SIF. They would be responsible for bringing in subject matter experts (for example, from the Department of Energy or National Science Foundation) to provide specific inputs and arguments for why specific technologies need government-based commercialization funding and at what point such funding is appropriate, acting as an honest broker to allocate strategic capital.

To be clear: The SIF is not intended to supersede any existing funding programs (e.g., the Department of Energy’s Loan Program Office or the National Institutes of Health’s ARPA-H) that provide fit-for-purpose funding to specific sectors. Rather, an SIF is intended to fill in the gaps and coordinate with existing programs while providing more creative financing structures than are typically available from government programs.

Recommendation 3. Create a clear innovation roadmap.

Every two years, the SIF should develop or update a roadmap of strategically important industries, working closely with private, nonprofit, and academic experts to define key technological and capability gaps that merit public sector investment. 

The SIF’s leaders should be empowered to make decisions on areas to prioritize but have the ability to change and adapt as the economic environment evolves. Although there is a long list of industries that an SIF could potentially support, resources are not infinite. However, a critical mass of investment is required to ensure adequate resourcing. One acute challenge is that this is not perfectly known in advance and changes depending on the technology and sector. However, this is precisely what the strategic investment roadmap is supposed to solve for: It should provide an even-handed assessment of the likely capital requirements and where the SIF is best suited to provide funding compared to other agencies or the private sector.

Moreover, given the ever-changing nature of technology, the SIF should frequently reassess its understanding of key use cases and their broader economic and strategic importance. Thus, after its initial development, the roadmap should be updated every two years to ensure that its takeaways and priorities remain relevant. This is no different than documents such as the National Security Strategy, which are updated every two to four years; in fact, the SIF’s planning documents should flow seamlessly into the National Security Strategy. 

To provide a sufficiently broad set of perspectives, the government should draw on the expertise and insights of outside experts in developing its plan. Existing bodies, such as the President’s Council of Advisors on Science and Technology and the National Quantum Initiative, provide some of the consultative expertise required. However, the SIF should also stand up subject-matter-specific advisory bodies where needs arise (for example, on critical minerals and mining) and work internally to set specific investment areas and priorities.

Recommendation 4. Limit the SIF to financing.

The government should not be an outsized player in capital markets. As such, the SIF should receive no governance rights (e.g., voting or board seats) in the companies that it invests in.

Although the SIF aims to catalyze technological and ecosystem development, it should be careful not to dictate the future of specific companies. Thus, the SIF should avoid information rights beyond financial reporting. Typical board decks and stockholder updates include updates on customers, technologies, personnel matters, and other highly confidential and specific pieces of information that, if made public through government channels, would play a highly distortionary role in markets. Given that the SIF is primarily focused on supporting innovation through a particularly tricky stage to navigate, the SIF should receive the least amount of information possible to avoid disrupting markets.

Recommendation 5. Focus on providing first-loss capital.

First-loss capital should be the primary mechanism by which the SIF supports new technologies, providing greater incentives for private-sector funders to support early commercialization while providing a means for taxpayers to directly participate in the economic upside of SIF-supported technologies.

Consider the following stylized example to demonstrate a key issue in the valley of death. A promising clean technology company, such as a carbon-free cement or long-duration energy storage firm, is raising $100mm of capital for facility expansion and first commercial deployment. To date, the company has likely raised $30 – $50mm of venture capital to enable tech development, pilot the product, and grow the team’s engineering, R&D, and sales departments.

However, this company faces a fundraising dilemma. Its funding requirements are now too big for all but the largest venture capital firms, who may or may not want to invest in projects and companies like these. On the other hand, this hypothetical company is not mature enough for private equity buyouts, nor is it a good candidate for typical project-based debt, which typically requires several commercial proof points in order to provide sufficient risk reduction for an investor whose upside is relatively limited. Hence, the “valley of death.”

First-loss capital is an elegant solution to this issue: A prospective funder could commit on equal (pro rata) terms with other investors, except that this first-loss funder is willing to use its investment to make other investors whole (or at least partially offset losses) in the event that the project or company does not succeed. In this example, a first-loss funder would commit $33.5 million of equity funding (roughly one-third of the company’s capital requirement). If the company succeeds, the first-loss funder makes the same returns as the other investors. However, if the company fails, the first-loss funder’s $33.5 million would be used to pay back the other investors (the other $66.5 million that was committed). This creates a floor on losses for the non-first-loss investors: Rather than being at risk of losing 100% of their principal, they are at risk of losing roughly 50% of it.

The creation of a first-loss layer has a meaningful impact on the risk-reward profile for non-first-loss investors, who now have a floor on recoveries (in the case above, at least half their investment). By expanding the acceptable potential loss ratio, growth equity capital (or another appropriate instrument, such as project finance) can fill the rest, thereby crowding in capital. 
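To make the arithmetic concrete, the minimal sketch below works through the stylized example above. It treats the first-loss commitment as a backstop paid out to the other investors if the company fails, as the example describes; the dollar figures are the illustrative numbers used above, and the function name and structure are ours, not a prescribed SIF design.

```python
# Minimal sketch of the stylized first-loss arithmetic above. Figures are the
# illustrative numbers from the example, not a prescribed SIF structure; the
# first-loss commitment is treated as a backstop paid to the other investors
# if the company fails, as the example describes.

def senior_investor_downside(total_raise_mm: float,
                             first_loss_mm: float,
                             residual_value_mm: float = 0.0) -> tuple[float, float]:
    """Return (loss in $mm, loss as % of principal) for the non-first-loss
    investors, assuming residual value plus the first-loss backstop is paid
    to them before the first-loss funder recovers anything."""
    senior_capital = total_raise_mm - first_loss_mm
    recovered = min(senior_capital, residual_value_mm + first_loss_mm)
    loss = senior_capital - recovered
    return loss, 100.0 * loss / senior_capital

# $100mm raise, $33.5mm first-loss commitment, total write-off:
loss_mm, loss_pct = senior_investor_downside(100.0, 33.5)
print(f"Other investors lose ${loss_mm:.1f}mm ({loss_pct:.0f}% of principal)")
# -> roughly $33mm, about half of their $66.5mm, rather than 100%.
```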

From a risk-adjusted returns standpoint, this is not a free lunch for the government or taxpayers. Rather, it is intended to be a capital-efficient way of supporting the private-sector ecosystem in developing strategically and economically vital technologies. In other words, it leverages the power of the private sector to solve externalities while providing just enough support to get them to the starting line in the first place.

Conclusion

Many of tomorrow’s strategically important technologies face critical funding challenges in the valley of death. Due to their capital intensity and uncertain outcomes, existing financing tools are largely falling short in the critical early commercialization phases. However, a nimble, properly funded SIF could bridge key gaps while allowing the private sector to do most of the heavy lifting. The SIF would require buy-in from many stakeholders and well-defined sources of funding, but these can be solved with the right mandates, structures, and pay-fors. Indeed, the stakes are too high, and the consequences too dire, to not get strategic innovation right in the 21st century.


Frequently Asked Questions
If first-loss capital is such a powerful mechanism, why doesn’t the market already provide it?

Put simply, there needs to be an entity that is actually willing and able to absorb lower returns, or even lose some of its principal, in the service of building an ecosystem. Even if the “median” outcome is a market-rate return of capital, the risk-adjusted returns are in effect far lower because the probability of a zero outcome for first-loss providers is substantially nonzero. Moreover, it is not clear exactly what the right probability estimate should be; therefore, it requires a leap of faith that no economically self-interested private market actor would be willing to take. While some quasi-social-sector organizations can play this role (for example, Bill Gates’s Breakthrough Energy Ventures for climate tech), their capacity is finite, and there is no guarantee that such a vehicle will appear for every sector of interest. Therefore, a publicly funded SIF is an integral part of bridging the valley of death.

Does the SIF always have to use first-loss structures?

No, the SIF would not always have to use first-loss structures. However, it is the most differentiated structure that is available to the U.S. government; otherwise, a private-sector player is likely able—and better positioned—to provide funding.

What type of financial instruments (e.g., debt, equity) would the SIF use?

The SIF should be able to use the full range of instruments, including project finance, corporate debt, convertible loans, and equity capital, and all combinations thereof. The instrument of choice should be up to the judgment of the applicant and SIF investment team. The choice of instrument is separate from the first-loss structure: Regardless of the instrument used, the SIF’s investment would be used to buffer other investors against potential losses.

What is the target rate of return?

The target return rate should be commensurate with that of the instrument used. For example, mezzanine debt should target 13–18% IRR, while equity investments should aim for 20–25% IRR. However, because of the increased risk of capital loss to the SIF given its first loss position, the effective blended return should be expected to be lower. 


The SIF should be prepared to lose capital on each individual investment and, as a blended portfolio, to have negative returns. While it should underwrite such that it will achieve market-rate returns if successful in crowding in other capital that improves the commercial prospects of technologies and companies in the valley of death, the SIF has a public goal of ecosystem development for strategic domains. Therefore, lower-than-market-rate returns, and even some principal degradation, are acceptable but should be avoided as much as possible through the prudence of the investment committee.
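As a rough illustration of why the blended figure sits below the instrument-level targets, the sketch below computes an expected return under purely hypothetical scenario probabilities, recoveries, and holding period; none of these numbers come from the memo.

```python
# Back-of-the-envelope expected-return sketch. The scenario probabilities,
# recoveries, and five-year hold are hypothetical assumptions for
# illustration only; they are not calibrated estimates.

HOLD_YEARS = 5
scenarios = [
    # (probability, multiple of invested capital returned to the SIF)
    (0.40, 1.20 ** HOLD_YEARS),  # success: ~20% IRR compounded over the hold
    (0.35, 1.00),                # muddle-through: capital returned, no gain
    (0.25, 0.20),                # failure: first-loss position mostly wiped out
]

expected_multiple = sum(p * m for p, m in scenarios)
approx_blended_irr = expected_multiple ** (1 / HOLD_YEARS) - 1

print(f"Expected multiple on invested capital: {expected_multiple:.2f}x")
print(f"Approximate blended annual return: {approx_blended_irr:.1%}")
# Even with a 20-25% IRR target in the success case, the blended expectation
# here lands in the single digits, well below the instrument-level targets.
```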

What protections will the SIF be offered if it does not get board seats or representation?

By and large, the necessary public protections are granted through CFIUS, which requires regulatory approval for exports of, and foreign ownership stakes with voting rights above 25% in, critical technologies. The SIF can also enact controls around information rights (e.g., customer lists, revenue, product roadmaps) such that it has a veto over which parties can receive such information. However, given its catalytic mission, the SIF does not need board seats or representation and should focus on ensuring that critical technologies and assets are properly protected.

What is the overall investment committee composition, and what are the processes for approval?

In most private investment firms, the investment committee is made up of the most senior individuals in the fund. These individuals can cross asset classes, sectors of expertise, and even functional backgrounds. Collectively, the investment committee represents a wide breadth of expertise and experiences that, when brought together, enable intellectual honesty and the application of collective wisdom and judgment to the opportunity at hand.


Similarly, the SIF’s investment committee could include the head of the fund and representatives from various departments and agencies in alignment with its strategic priorities. The exact size of the investment committee should be defined by these priorities, but approval should be driven by consensus, and unanimity (or near unanimity) should be expected for investments that are approved.


Given the fluid nature of investment opportunities, the committee should be called upon whenever needed to evaluate a potential opportunity. However, given the generally long process times for investments discussed above (6–12 months), the investment committee will typically have been briefed multiple times before a formal decision is made.

What check size is envisioned? Is there a strict cutoff for the amount of funding that can be disbursed?

Check sizes can be flexible to the needs of the investment opportunity. However, as an initial guiding principle, first-loss capital should likely make up 20–35% of the capital invested so as to require private-sector investors to have meaningful skin in the game. Depending on the size of the fundraise, this could imply investments of $25 million to $100 million.


Target funding amounts should be set over multiyear timeframes, but the annual appropriations process implies that there will likely be a set cap in any given year. In order to meet the needs of the market, there should be mechanisms that enable emergency draws, up to a cap (e.g., 10% of the annual target funding amount, which will need to be “paid for” by reducing future outlays). 
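The sizing arithmetic can be sketched directly. In the snippet below, the 20–35% first-loss share and the 10% emergency-draw cap come from the answers above, while the round sizes and the annual target funding amount are hypothetical placeholders.

```python
# Illustrative sizing sketch. The 20-35% first-loss share and the 10%
# emergency-draw cap come from the guidance above; the round sizes and the
# $1bn annual target are hypothetical placeholders.

FIRST_LOSS_SHARE = (0.20, 0.35)   # SIF share of a given raise
EMERGENCY_DRAW_CAP = 0.10         # fraction of the annual target funding amount

def check_size_range_mm(round_size_mm: float) -> tuple[float, float]:
    """Low/high SIF check size (in $mm) for a given fundraise."""
    low, high = FIRST_LOSS_SHARE
    return round_size_mm * low, round_size_mm * high

for round_size in (125, 285):     # hypothetical raises, in $mm
    low, high = check_size_range_mm(round_size)
    print(f"${round_size}mm raise -> SIF check of ${low:.0f}-{high:.0f}mm")

annual_target_mm = 1_000          # hypothetical annual target funding amount
print(f"Emergency draw cap: ${annual_target_mm * EMERGENCY_DRAW_CAP:.0f}mm per year")
```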

How will the SIF be funded?

An economically efficient way to fund a government program in support of a positive externality is a Pigouvian tax on negative externalities (such as carbon). However, carbon taxes are as politically unappealing as they are economically sensible and would need to be packaged with other policy goals that could build support for such legislation. Notwithstanding the questionable economic wisdom of tariffs in general, some 56% of voters support a 10% tax on all imports and 60% tariffs on China. Rather than deploying tariffs harmfully, policymakers could use them more productively. One such proposal is a carbon import tariff that taxes imports based on the carbon emitted in the production and transportation of goods into the U.S.


The U.S. would not be a first mover: in fact, the European Union has already implemented a similar mechanism called the Carbon Border Adjustment Mechanism (CBAM), which is focused on heavy industry, including cement, iron and steel, aluminum, fertilizers, electricity, and hydrogen, with chemicals and polymers potentially to be included after 2026. At full rollout in 2030, the CBAM is expected to generate roughly €10–15 billion of tax revenue. Tax receipts of a similar size could be used to fund an SIF or, if Congress authorizes an upfront amount, could be used to nullify the incremental deficit over time.

If the carbon innovation fee is implemented, how would levies be assessed and exemptions provided?

The EU’s CBAM phased in its reporting requirements over several years. Through July 2024, companies were allowed to use default amounts per unit of production without an explanation as to why actual data was not used. Until January 1, 2026, companies can make estimates for up to 20% of goods; thereafter, the CBAM requires reporting of actual quantities and embedded greenhouse gas emissions.


The U.S. could use a similar phase-in, although given the challenges of carbon reporting, it could allow companies to use the lower of actual, verified emissions or per-unit estimates. Under a carbon innovation fee regime, exporters and countries could apply for exemptions on a case-by-case basis to the Department of Commerce, which could approve them in line with other goals (e.g., economic development in a region).

Besides a carbon innovation fee, what are other ways to fund the SIF?

The SIF could also be funded by repurposing other funding streams and elevating their strategic importance. Potential candidates include the Small Business Innovation Research (SBIR) program and the State Small Business Credit Initiative (SSBCI), which could play a bigger role if moved under the SIF umbrella. For example, the SBIR program, whose latest reporting data is as of FY2019, awarded $3.3 billion in funding that year and $54.6 billion over its lifespan. Moreover, the SSBCI, a $10 billion fund that already provides loan guarantees and other instruments similar to those described above, could be used to support technologies that fall within the purview of the SIF.

What funding options are available if reallocation of existing funds isn’t an option?

Congress could also assess reallocating dollars towards an SIF from spending reforms that are likely inevitable given the country’s fiscal position. In 2023, the Congressional Budget Office (CBO) published a report highlighting potential solutions for reducing the budget deficit. Some potential solutions, like establishing caps on Medicaid federal spending, while fiscally promising, seem unlikely to pass in the near future. However, others are more palatable, especially those that eliminate loopholes or ask higher-income individuals to pay their fair share.


For instance, increasing the amount subject to Social Security taxes above the $250,000 threshold has the potential to raise up to $1.2 trillion over 10 years; while this can be calibrated, an SIF would take only a small fraction of the taxes raised. In addition, the CBO found that federal matching funds for Medicaid frequently ended up getting back to healthcare providers in the form of higher reimbursement rates; eliminating what are effectively kickbacks could reduce the deficit by up to $525 billion over 10 years.

Promoting Fairness in Medical Innovation

There is a crisis within healthcare technology research and development, wherein certain groups, due to their age, gender, or race and ethnicity, are under-researched in preclinical studies, under-represented in clinical trials, misunderstood by clinical practitioners, and harmed by biased medical technology. These issues in turn contribute to costly disparities in healthcare outcomes, leading to losses of $93 billion a year in excess medical-care costs, $42 billion a year in lost productivity, and $175 billion a year due to premature deaths. With the rise of artificial intelligence (AI) in healthcare, there is a risk of encoding and recreating existing biases at scale.

The next Administration and Congress must act to address bias in medical technology at the development, testing and regulation, and market-deployment and evaluation phases. This will require coordinated effort across multiple agencies. In the development phase, science funding agencies should enforce mandatory subgroup analysis for diverse populations, expand funding for under-resourced research areas, and deploy targeted market-shaping mechanisms to incentivize fair technology. In the testing and regulation phase, the FDA should raise the threshold for evaluation of medical technologies and algorithms and expand data-auditing processes. In the market-deployment and evaluation phases, infrastructure should be developed to perform impact assessments of deployed technologies and government procurement should incentivize technologies that improve health outcomes.

Challenge and Opportunity

Bias is regrettably endemic in medical innovation. Drugs are incorrectly dosed to people assigned female at birth due to historical exclusion of women from clinical trials. Medical algorithms make healthcare decisions based on biased health data, clinically disputed race-based corrections, and/or model choices that exacerbate healthcare disparities. Much medical equipment is not accessible, thus violating the Americans with Disabilities Act. And drugs, devices, and algorithms are not designed with the lifespan in mind, impacting both children and the elderly. Biased studies, technology, and equipment inevitably produce disparate outcomes in U.S. healthcare.

The problem of bias in medical innovation manifests in multiple ways: cutting across technological sectors in clinical trials, pervading the commercialization pipeline, and impeding equitable access to critical healthcare advances.

Bias in medical innovation starts with clinical research and trials

The 1993 National Institutes of Health (NIH) Revitalization Act required federally funded clinical studies to (i) include women and racial minorities as participants, and (ii) break down results by sex and race or ethnicity. As of 2019, the NIH also requires inclusion of participants across the lifespan, including children and older adults. Yet a 2019 study found that only 13.4% of NIH-funded trials performed the mandatory subgroup analysis, and challenges in meeting diversity targets continue into 2024. Moreover, a growing share of studies are industry-funded and thus not subject to the Revitalization Act’s mandates for subgroup analysis; these studies frequently fail to report differences in outcomes by patient population as a result. New requirements for Diversity Action Plans (DAPs), mandated under the 2023 Food and Drug Omnibus Reform Act, will ensure drug and device sponsors think about enrollment of diverse populations in clinical trials. Yet the FDA can still approve drugs and devices that are not in compliance with their proposed DAPs, raising concerns about weak enforcement.

The resulting disparities in clinical-trial representation are stark: African Americans represent 12% of the U.S. population but only 5% of clinical-trial participants, Hispanics make up 16% of the population but only 1% of clinical trial participants, and sex distribution in some trials is 67% male. Finally, many medical technologies approved prior to 1993 have not been reassessed for potential bias. One outcome of such inequitable representation is evident in drug dosing protocols: sex-aware prescribing guidelines exist for only a third of all drugs.

Bias in medical innovation is further perpetuated by weak regulation

Algorithms

Regulation of medical algorithms varies based on end application, as defined in the 21st Century Cures Act. Only algorithms that (i) acquire and analyze medical data and (ii) could have adverse outcomes are subject to FDA regulation. Thus, clinical decision-support software (CDS) is not regulated even though these technologies inform important clinical decisions in 90% of U.S. hospitals. The FDA has taken steps to clarify which CDS must be considered a medical device, although these actions have been heavily criticized by industry. Finally, the lack of regulatory frameworks for generative AI tools is leading to proliferation without oversight.

Even when a medical algorithm is regulated, regulation may occur through relatively permissive de novo pathways and 510(k) pathways. A de novo pathway is used for novel devices determined to be low to moderate risk, and thus subject to a lower burden of proof with respect to safety and equity. A 510(k) pathway can be used to approve a medical device exhibiting “substantial equivalence” to a previously approved device, i.e., it has the same intended use and/or same technological features. Different technical features can be approved so long as there are no questions raised around safety and effectiveness.

Medical algorithms approved through de novo pathways can be used as predicates for approval of devices through 510(k) pathways. Moreover, a device approved through a 510(k) pathway can remain on the market even if its predicate device was recalled. Widespread use of 510(k) approval pathways has generated a “collapsing building” phenomenon, wherein many technologies currently in use are based on failed predecessors. Indeed, 97% of devices recalled between 2008 and 2017 were approved via 510(k) clearance.

While DAP implementation will likely improve these numbers, of the 692 AI/ML-enabled medical devices, only 3.6% reported race or ethnicity, 18.4% reported age, and only 0.9% included any socioeconomic information. Further, less than half conducted detailed analyses of algorithmic performance, and only 9% included information on post-market studies, raising the risk of algorithmic bias following approval and broad commercialization.

Even more alarming is evidence showing that machine learning can further entrench medical inequities. Because machine learning medical algorithms are powered by data from past medical decision-making, which is rife with human error, these algorithms can perpetuate racial, gender, and economic bias. Even algorithms demonstrated to be ‘unbiased’ at the time of approval can evolve in biased ways over time, with little to no oversight from the FDA. As technological innovation progresses, especially generative AI tools, an intentional focus on this problem will be required.

Medical devices

Currently, the Medical Device User Fee Act requires the FDA to consider the least burdensome appropriate means for manufacturers to demonstrate the effectiveness of a medical device or to demonstrate a device’s substantial equivalence. This requirement was reinforced by the 21st Century Cures Act, which also designated a category for “breakthrough devices” subject to far less-stringent data requirements. Such legislation shifts the burden of clinical data collection to physicians and researchers, who might discover bias years after FDA approval. This legislation also makes it difficult to require assessments on the differential impacts of technology.

Like medical algorithms, many medical devices are approved through 510(k) exemptions or de novo pathways. The FDA has taken steps since 2018 to increase requirements for 510(k) approval and ensure that Class III (high-risk) medical devices are subject to rigorous pre-market approval, but problems posed by equivalence and limited diversity requirements remain. 

Finally, while DAPs will be required for many devices seeking FDA approval, the recommended number of patients in device testing is shockingly low. For example, currently only 10 people are required in a study of a new pulse oximeter’s efficacy, and only 2 of those people need to be “darkly pigmented.” This requirement (i) does not have the statistical power necessary to detect differences between demographic groups, and (ii) does not represent the composition of the U.S. population. The standard is currently under revision after immense external pressure. FDA-wide, there are no recommended guidelines for addressing human differences in device design, such as pigmentation, body size, age, and pre-existing conditions.
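The statistical-power point can be made concrete with a quick calculation. The sketch below assumes a hypothetical “large” standardized difference between subgroups (0.8 standard deviations) and the 2-versus-8 split described above; it is an illustration of the power problem, not a method drawn from the oximeter standard itself.

```python
# Rough power calculation for the 2-vs-8 subgroup split described above.
# The assumed effect size (a "large" 0.8 SD difference between subgroups)
# is a hypothetical illustration, not a figure from the oximeter standard.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
power = analysis.power(
    effect_size=0.8,  # assumed large difference in measurement bias
    nobs1=2,          # darkly pigmented participants
    ratio=4.0,        # the remaining 8 participants
    alpha=0.05,
)
print(f"Power to detect a large subgroup difference: {power:.0%}")
# Roughly 10-15%: such a study would usually miss even a large difference.
# Detecting one with the conventional 80% power would require on the order
# of 25-30 participants per subgroup.
```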

Pharmaceuticals

The 1993 Revitalization Act strictly governs clinical trials for pharmaceuticals but makes no recommendations for adequate sex or genetic diversity in preclinical research. The results are that a disproportionately high number of male animals are used in research and that only 5% of cell lines used for pharmaceutical research are of African descent. Programs like All of Us, an effort to build diverse health databases through data collection, are promising steps towards improving equity and representation in pharmaceutical research and development (R&D). But stronger enforcement is needed to ensure that preclinical data (which informs function in clinical trials) reflects the diversity of our nation. 

Bias in medical innovation is not tracked after regulatory approval

FDA-regulated medical technologies appear trustworthy to clinicians because approval signals safety and effectiveness. So when errors or biases occur (if they are even noticed), the practitioner may blame the patient’s lifestyle rather than the technology used for assessment, which in turn leads to worse clinical outcomes.

Bias in pulse oximetry is a perfect case study of a well-trusted technology leading to significant patient harm. During the COVID-19 pandemic, many clinicians and patients were using oximeter technology for the first time and were not trained to spot factors, like melanin in the skin, that cause inaccurate measurements and impact patient care. Problems were largely not attributed to the device, which led to underreporting of adverse events to the FDA, already a shortcoming given the voluntary nature of adverse-event reporting.

Even when problems are ultimately identified, the federal government is slow to respond. The pulse oximeter’s limitations in monitoring oxygenation levels across diverse skin tones were identified as early as the 1990s. More than three decades later, despite repeated follow-up studies indicating bias, no manufacturer has incorporated skin-tone-adjusted calibration algorithms into pulse oximeters. It took the large Sjoding study, and the media coverage it garnered around delayed care and unnecessary deaths, for the FDA to issue a safety communication and begin reviewing the regulation.

Other areas of HHS are stepping up to address issues of bias in deployed technologies. A new rule from the HHS Office for Civil Rights (OCR) on Section 1557 of the Affordable Care Act requires covered providers and institutions (i.e., any receiving federal funding) to identify their use of patient care decision support tools that directly measure race, color, national origin, sex, age, or disability, and to make reasonable efforts to mitigate the risk of discrimination from their use of these tools. Implementation will depend on OCR’s enforcement, yet the rule provides another route to address bias in algorithmic tools.

Differential access to medical innovation is a form of bias

Americans face wildly different levels of access to new medical innovations. Because many new innovations carry high price points, these drugs, devices, and algorithms are out of reach for many patients, smaller healthcare institutions, and federally funded healthcare service providers, including the Veterans Health Administration, federally qualified health centers, and the Indian Health Service. Emerging care-delivery strategies may not be covered by Medicare and Medicaid, meaning that patients insured through CMS cannot access the most cutting-edge treatments. Finally, the shift to digital health, spurred by COVID-19, has compromised access to healthcare in rural communities without reliable broadband access.

The Advanced Research Projects Agency for Health (ARPA-H) has committed to having all of its programs and projects consider equity in their design. Fulfilling that commitment will require action to ensure that medical technologies are developed fairly, tested with rigor, deployed safely, and made affordable and accessible to everyone.

Plan of Action

The next Administration should launch "Healthcare Innovation for All Americans" (HIAA), a whole-of-government initiative to improve health outcomes by ensuring Americans have access to bias-free medical technologies. Through a comprehensive approach that addresses bias in all medical technology sectors, at all stages of the commercialization pipeline, and in all geographies, the initiative will strive to ensure the medical-innovation ecosystem works for all. HIAA should be a joint mandate of the Department of Health and Human Services (HHS) and the Office of Science and Technology Policy (OSTP) to work with federal agencies on priorities of equity, non-discrimination under Section 1557 of the Affordable Care Act, and increased access to medical innovation, and initiative leadership should sit at both HHS and OSTP.

This initiative will require the involvement of multiple federal agencies, as summarized below. Additional detail is provided in the subsequent sections describing how the federal government can mitigate bias in the development phase; the testing, regulation, and approval phases; and the market deployment and evaluation phases.

Three guiding principles should underlie the initiative:

  1. Equity and non-discrimination should drive action. Actions should seek to improve the health of those who have been historically excluded from medical research and development. We should design standards that repair past exclusion and prevent future exclusion. 
  2. Coordination and cooperation are necessary. The executive and legislative branches must collaborate to address the full scope of the problem of bias in medical technology, from federal processes to new regulations. Legislative leadership should task the Government Accountability Office (GAO) to engage in ongoing assessment of progress towards the goal of achieving bias-free and fair medical innovation.
  3. Transparent, evidence-based decision making is paramount. There is abundant peer-reviewed literature that examines bias in drugs, devices, and algorithms used in healthcare settings — this literature should form the basis of a non-discrimination approach to medical innovation. Gaps in the evidence should be addressed through targeted research funding. Moreover, as algorithms become ubiquitous in medicine, every effort should be made to ensure that these algorithms are trained on data representative of those experiencing a given healthcare condition.
Agency roles and responsibilities:

  • Advanced Research Projects Agency for Health (ARPA-H): ARPA-H has committed to tackling health equity in biomedical research, and to aligning each project it undertakes with that goal. As such, ARPA-H should lead the charge in developing processes for equity in medical technology — from idea conceptualization to large-scale rollout — and serve as a model for other federally funded healthcare programs.
  • National Institutes of Health (NIH): NIH should fund research that addresses health-data gaps, investigates algorithmic and data bias, and assesses bias embedded in medical technical tools. Simultaneously, NIH should create standards for diversity in samples and/or datasets for preclinical research. Finally, NIH must strongly enforce the 1993 NIH Revitalization Act's diversity provisions.
  • National Science Foundation (NSF): NSF should collaborate with NIH on cross-agency programs that fund R&D specific to mitigating bias in technologies like AI.
  • Food and Drug Administration (FDA): FDA should take a more active role in uncovering bias in medical innovation, given its role as a regulatory checkpoint for all new medical technologies. This should include more rigorous evaluation protocols as well as better tracking of emergent bias in medical technologies post-approval.
  • Assistant Secretary for Technology Policy (ASTP): ASTP publishes standards for effective use of healthcare information technology that ensure quality care delivery. Its standards-setting should offer solutions for compliance with Section 1557 for novel AI/ML algorithms.
  • Centers for Medicare & Medicaid Services (CMS): CMS oversees the coordination of coverage, coding, and payment processes for new technologies and procedures. Thus, CMS should focus on ensuring that all new technologies developed through federal funding, like those that will be built by ARPA-H and its industry partners, are covered by Medicare and Medicaid. In addition, CMS and its accrediting partners can require compliance with federal regulatory standards, which should be extended to assess medical technologies. Finally, CMS should assess how flawed medical technologies inform decisions about care provision and update its guidelines accordingly.
  • Federal Trade Commission (FTC): FTC should protect America's medical technology consumers by auditing high-risk medical innovations, such as decision-making algorithms.
  • Agency for Healthcare Research and Quality (AHRQ): AHRQ, a component of HHS, should identify areas where technology bias is leading to disparate healthcare outcomes and report its findings to Congress, the White House, and agency leaders for immediate action.
  • Centers for Disease Control and Prevention (CDC): CDC's expertise in health-data collection should be mobilized to identify research and development gaps.
  • Department of Commerce (DOC): Given its role in enforcing U.S. trade laws and regulations, DOC can do much to incentivize equity in medical device design and delivery. The National Institute of Standards and Technology (NIST) should play a key role in crafting standards for identifying and managing bias across key medical-technology sectors.
  • Department of Education (ED): ED should work with medical schools to develop and implement learning standards and curricula on bias in medical technology.
  • Department of Defense (DOD): DOD has formalized relationships with FDA to expedite medical products useful to American military personnel. As a DOD priority is to expand diversity and inclusion in the armed forces, these medical products should be assessed for bias that limits safety and efficacy.
  • Health Resources and Services Administration (HRSA): HRSA should coordinate with federally qualified health centers on digital health technologies, taking advantage of the broadband expansion outlined in the Bipartisan Infrastructure Law.
  • Department of Veterans Affairs (VA) and the Veterans Health Administration (VHA): The VA should work with ARPA-H and its industry partners to establish cost-effective rollout of new innovations to VA-run hospitals. VA should also use its procurement power to require diversity in the clinical trials of the drugs, devices, and algorithms it procures. VA could also use prize challenges to spur innovation.
  • Government Accountability Office (GAO): GAO should prepare a comprehensive roadmap for addressing bias endemic to the cycle of medical technology development, testing, and deployment, with a focus on mitigating bias in "black box" algorithms used in medical technology.
  • Office of Management and Budget (OMB): OMB should work with HIAA leadership to design a budget for HIAA implementation, including for R&D funding, personnel for programmatic expansion, data collectives, education, and regulatory enforcement.
  • Office of Science and Technology Policy (OSTP): OSTP should develop processes and standards for ensuring that individual rights are not violated by biased medical technologies. This work can build on the AI Bill of Rights Initiative.

Addressing bias at the development phase

The following actions should be taken to address bias in medical technology at the innovation phase:

Addressing bias at the testing, regulation, and approval phases

The following actions should be taken to address bias in medical innovation at the testing, regulation, and approval phases:

Addressing bias at the market deployment and evaluation phases 

A comprehensive road map is needed

The GAO should conduct a comprehensive investigation of “black box” medical technologies utilizing algorithms that are not transparent to end users, medical providers, and patients. The investigation should inform a national strategic plan for equity and non-discrimination in medical innovation that relies heavily on algorithmic decision-making. The plan should include identification of noteworthy medical technologies leading to differential healthcare outcomes, creation of enforceable regulatory standards, development of new sources of research funding to address knowledge gaps, development of enforcement mechanisms for bias reporting, and ongoing assessment of equity goals.

Timeline for action

Realizing HIAA will require mobilization of federal funding, introduction of regulation and legislation, and coordination of stakeholders from federal agencies, industry, healthcare providers, and researchers around a common goal of mitigating bias in medical technology. Such an initiative will be a multi-year undertaking and will require funding for R&D, expanded data capacity, assessment of enforcement impacts, educational materials, and the personnel to staff all of the above.

Near-term steps that can be taken to launch HIAA include issuing a public request for information, gathering stakeholders, engaging the public and relevant communities in conversation, and preparing a report outlining the roadmap to accomplishing the policies outlined in this memo.

Conclusion

Medical innovation is central to the delivery of high-quality healthcare in the United States. Ensuring equitable healthcare for all Americans requires ensuring that medical innovation is equitable across all sectors, phases, and geographies. Through a bold and comprehensive initiative, the next Administration can ensure that our nation continues leading the world in medical innovation while crafting a future where healthcare delivery works for all.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for, whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication, several government websites have been taken offline. We apologize for any broken links to once-accessible public data.

Frequently Asked Questions
How will the success of HIAA be evaluated?

HIAA will be successful when medical policies, projects, and technologies yield equitable health care access, treatment, and outcomes. Indicators of success would include:



  • Representation in preclinical and clinical research equivalent to the incidence of a studied condition in the general population.

  • Research on a disease condition funded equally per affected patient.

  • Existence of data for all populations facing a given disease condition.

  • Medical algorithms that have equal efficacy across subgroup populations.

  • Technologies that work as well when deployed to the market as they do in testing.

  • Healthcare technologies made available and affordable to all care facilities.

Why does this memo propose an expansive multi-agency effort instead of just targeting the FDA?

Regulation alone cannot close the disparity gap. There are notable gaps in preclinical and clinical research data for women, people of color, and other historically underrepresented groups that need to be filled. There are also historical biases encoded in AI/ML decision-making algorithms that need to be studied and rectified. In addition, the FDA’s role is to serve as a safety check on new technologies — the agency has limited oversight over technologies once they are on the market due to the voluntary nature of adverse-event reporting mechanisms. This means that agencies like the FTC and CMS need to be mobilized to audit high-risk technologies once they reach the market. Eliminating bias in medical technology is only possible through coordination and cooperation of federal agencies with each other as well as with partners in the medical device industry, the pharmaceutical industry, academic research, and medical care delivery.

What challenges might the Administration encounter from industry in launching this initiative?

A significant focus of the medical device and pharmaceutical industries is reducing the time to market for new medical devices and drugs. Imposing additional requirements for subgroup analysis and equitable use as part of the approval process could work against this objective. On the other hand, ensuring equitable use during the development and approval stages of commercialization will ultimately be less costly than dealing with a future recall or a loss of Medicare or Medicaid eligibility if discriminatory outcomes are discovered.

Is there bipartisan support to secure the funding for this initiative?

Healthcare disparities exist in every state in America and cost the U.S. economy billions of dollars a year in lost growth. Some of the most vulnerable people live in rural areas, where they are less likely to receive high-quality care because the costs of new medical technologies are too high for rural hospitals and for the federally qualified health centers that serve one in five rural residents. Furthermore, a biased device in continued use creates adverse healthcare outcomes that cost taxpayers money, and a technology functioning poorly due to bias can be expensive to replace. Ensuring that technology works as expected is an economic imperative: it leads to more effective healthcare and thus healthier people.

Scaling Effective Methods across Federal Agencies: Looking Back at the Expanded Use of Incentive Prizes between 2010 and 2020

Policy entrepreneurs inside and outside of government, as well as other stakeholders and advocates, are often interested in expanding the use of effective methods across many or all federal agencies, because how the government accomplishes its mission is integral to the outcomes it is able to produce for the public it serves. Adoption and use of promising new methods by federal agencies can be slowed by a number of factors that discourage risk-taking and experimentation and instead encourage compliance and standardization, too often as a false proxy for accountability. As a result, many agency-specific and government-wide authorities for promising methods go under-considered and underutilized.

Policy entrepreneurs within center-of-government agencies (e.g., the Executive Office of the President) are well-positioned to use a variety of policy levers and actions to encourage and accelerate federal agency adoption of promising and effective methods. Some interventions by center-of-government agencies are better suited to driving initial adoption, others to accelerating or maintaining momentum, and yet others to codifying and making adoption durable once widespread. Therefore, a policy entrepreneur interested in expanding adoption of a given method should first seek to understand the “adoption maturity” of that method and then undertake interventions appropriate for that stage of adoption. The arc of agency adoption of new methods can be long—measured in years and decades, not weeks and months. Policy entrepreneurs should be prepared to support adoption over similar timescales. In considering the adoption maturity of a method of interest, policy entrepreneurs can also reference the ideas of Tom Kalil in a July 2024 Federation of American Scientists blog post, “Increasing the ‘Policy Readiness’ of Ideas,” which offers sample questions to ask about “the policy landscape surrounding a particular idea.”

As a case study for driving federal adoption of a new method, this paper looks back at actions that supported the widespread adoption of incentive prizes by most federal agencies over the course of fiscal years 2010 through 2020. Federal agency use of prizes increased from several incentive prize competitions being offered by a handful of agencies in the early 2000s to more than 2,000 prize competitions offered by over 100 federal agencies by the end of fiscal year 2022. These incentive prize competitions have helped federal agencies identify novel solutions and technologies, establish new industry benchmarks, pay only for results, and engage new talent and organizations. 

A summary framework below includes types of actions that can be taken by policy entrepreneurs within center-of-government agencies to support awareness, piloting, and ongoing use of new methods by federal agencies in the years ahead. (Federal agency program and project managers who seek to scale up innovative methods within their agencies are encouraged to reference related resources such as this article by Jenn Gustetic in the Winter 2018 Issues in Science and Technology: “Scaling Up Policy Innovations in the Federal Government: Lessons from the Trenches.”) 

Efforts to expand federal capacity through new and promising methods are worthwhile to ensure the federal government can use a full and robust toolbox of tactics to meet its varied goals and missions. 

OPPORTUNITIES AND CHALLENGES IN FEDERAL ADOPTION OF NEW METHODS

Opportunities for federal adoption and use of promising and effective methods

To address national priorities, solve tough challenges, or better meet federal missions to serve the public, a policy entrepreneur may aim to pilot, scale, and make lasting federal use of a specific method. 

A policy entrepreneur’s goals might include new ways for federal agencies to, for example:

To support these and other goals, an array of promising methods exists and has been demonstrated in other sectors (philanthropy, industry, and civil society), in state, local, Tribal, or territorial governments and communities, and in one or several federal agencies—with promise for beneficial impact if more federal agencies adopted these practices. Many methods are either specifically supported or generally allowable under existing government-wide or agency-specific authorities.

Center-of-government agencies include components of the Executive Office of the President (EOP) like the Office of Management and Budget (OMB) and the Office of Science and Technology Policy (OSTP), as well as the Office of Personnel Management (OPM) and the General Services Administration (GSA). These agencies direct, guide, convene, support, and influence the implementation of law, regulation, and the President’s policies across all federal agencies, especially the executive departments. An August 2016 report by the Partnership for Public Service and the IBM Center for the Business of Government noted that “The Office of Management and Budget and other ‘center of government’ agencies are often viewed as adding processes that inhibit positive change—however, they can also drive innovation forward across the government.”

A policy entrepreneur interested in expanding adoption of a given method through actions driven or coordinated by one or more center-of-government agencies should first seek to understand the “adoption maturity” of a given method of interest by assessing: (1) the extent that adoption of the method has already occurred across the federal interagency; (2) any real or perceived barriers to adoption and use; and (3) the robustness of existing policy frameworks and agency-specific and government-wide infrastructure and resources that support agency use of the method.

Challenges in federal adoption and use of new methods

Policy entrepreneurs are usually interested in expanding federal adoption of new methods for good reason: a focus on achieving and scaling beneficial outcomes. Effective leaders and managers across sectors understand the importance of matching appropriate and creative tactics with well-defined problems and opportunities. Ideally, leaders are picking which tactic or tool to use based on their expert understanding of the target problem or opportunity, not using a method solely because it is novel or because it is the way work has always been done in the past. Design of effective program strategies is supported by access to a robust and well-stocked toolbox of tactics.

However, many currently authorized and allowable methods for achieving federal goals are generally underutilized in the implementation strategies and day-to-day tactics of federal agencies. Looking at the wide variety of existing authorities in law and the various flexibilities allowed for in regulation and guidance, one might expect agency tactics for common activities like acquisition or public comment to be varied, diverse, iterative, and even experimental in nature, where appropriate. In practice, however, agency methods are often remarkably homogeneous, repeated, and standardized.   

This underutilization of existing authorities and allowable flexibilities is due to factors such as:

Strategies for addressing challenges in federal adoption and use of new methods

Attention and action by center-of-government agencies are often needed to address the factors cited above that slow the adoption and use of new methods across federal agencies and to build momentum. The following strategies are further explored in the case study on federal use of incentive prizes that follows:

Additional strategies can be deployed within federal agencies to address agency-level barriers and scale promising methods—see, for example, this article by Jenn Gustetic in the Winter 2018 Issues in Science and Technology: “Scaling Up Policy Innovations in the Federal Government: Lessons from the Trenches.” 

LOOKING BACK: A DECADE OF POLICY ACTIONS SUPPORTING EXPANDED FEDERAL USE OF INCENTIVE PRIZES

The use of incentive prizes is one method for open innovation that has been adopted broadly by most federal agencies, with extensive bipartisan support in Congress and with White House engagement across multiple administrations. In contrast to recognition prizes, such as the Nobel Prize or various presidential medals, which reward past accomplishments, incentive prizes specify a target, establish a judging process (ideally as objective as possible), and use a monetary prize purse and/or non-monetary incentives (such as media and online recognition, access to development and commercialization facilities, resources, or experts, or even qualification for certain regulatory flexibility) to induce new efforts by solvers competing for the prize. 

The use of incentive prizes by governments (and by high-net-worth individuals) to catalyze novel solutions is certainly not new. In 1795, the French government offered a 12,000-franc prize to improve upon the prevailing food preservation methods of the time, with the goal of better feeding Napoleon’s army. Fifteen years later, confectioner Nicolas François Appert claimed the prize for his method of heating, boiling, and sealing food in airtight glass jars — the same basic technology still used to can foods. Dava Sobel’s book Longitude details how the rulers of Spain, the Netherlands, and Britain all offered separate prizes, starting in 1567, for methods of determining longitude at sea; John Harrison was finally awarded Britain’s top longitude prize in 1773. In 1919, Raymond Orteig, a French-American hotelier, aviation enthusiast, and philanthropist, offered a $25,000 prize for the first person to perform a nonstop flight between New York and Paris. The prize initially expired in 1924 without anyone claiming it. Given technological advances and the number of pilots engaged in trying to win, Orteig extended the deadline by five years. By 1926, nine teams had come forward to formally compete, and in 1927 the prize went to a little-known aviator named Charles Lindbergh, who made the flight in a custom-built plane known as the “Spirit of St. Louis.”

The U.S. Government did not begin to adopt the use of incentive prizes until the early 21st century, following a 1999 National Academy of Engineering workshop about the use of prizes as an innovation tool. In the first decade of the 2000s, the Defense Advanced Research Projects Agency (DARPA), the National Aeronautics and Space Administration (NASA), and the Department of Energy conducted a small number of pilot prize competitions. These early agency-led prizes focused on autonomous vehicles, space exploration, and energy efficiency, demonstrating a range of benefits to federal agency missions. 

Federal use of incentive prizes did not accelerate until, in the America COMPETES Reauthorization Act of 2010, Congress granted all federal agencies the authority to conduct prize competitions (15 USC § 3719). With that new authority in place, and with the support of a variety of other policy actions, federal use of incentive prizes reached scale, with over 2,000 prize competitions offered on Challenge.gov by over 100 federal agencies between fiscal years 2010 and 2022.

There certainly remains extensive opportunity to improve the design, rigor, ambition, and effectiveness of federal prize competitions. That said, there are informative lessons to be drawn from how incentive prizes evolved in the United States from a method used primarily outside of government, with limited pilots among a handful of early-adopter federal agencies, to a method being tried by many civil servants across an active interagency community of practice and lauded by administration leaders, bipartisan members of Congress, and external stakeholders alike. 

A summary follows of the strategies and tactics used by policy entrepreneurs within the EOP—with support and engagement from Congress as well as program managers and legal staff across federal agencies—that led to increased adoption and use of incentive prizes in the federal government.


Summary of strategies and policy levers supporting expanded use of incentive prizes

In considering how best to expand awareness, adoption, and use among federal agencies of promising methods, policy entrepreneurs might consider utilizing some or all of the strategies and policy levers described below in the incentive prizes example. Those strategies and levers are summarized generally in the framework that follows. Some of the listed levers can advance multiple strategies and goals. This framework is intended to be flexible and to spark brainstorming among policy entrepreneurs as they build momentum in the use of particular innovation methods.

Policy entrepreneurs are advised to consider and monitor the maturity level of federal awareness, adoption, and use, and to adjust their strategies and tactics accordingly. They are encouraged to return to earlier strategies and policy levers as needed, should adoption and momentum lag, should agency ambition in design and implementation of initiatives be insufficient, or should concerns regarding risk management be raised by agencies, Congress, or stakeholders. 

Summary framework of strategies and policy levers by stage of federal adoption:
Stage of federal adoption: Early – No or few Federal agencies using method
Strategy: Understand federal opportunities to use method, and identify barriers and challenges
Types of center-of-government policy levers:
* Connect with early adopters across federal agencies to understand use of agency-specific authorities, identify pain points and lessons learned, and capture case studies (e.g., 2000-2009)

* Engage stakeholder community of contractors, experts, researchers, and philanthropy

* Look to and learn from use of method in other sectors (such as by philanthropy, industry, or academia) and document (or encourage third-party documentation of) that use and its known benefits and attributes (e.g., April 1999, July 2009)

* Encourage research, analysis, reports, and evidence-building by National Academies, academia, think tanks, and other stakeholders (e.g., April 1999, July 2009, June 2014)

* Discuss method with OMB Office of General Counsel and other relevant agency counsel

* Discuss method with relevant Congressional authorizing committee staff

* Host convenings that connect interested federal agency representatives with experts

* Support and connect nascent federal “community of interest”
Stage of federal adoption: Early – No or few Federal agencies using method
Strategy: Build interest among federal agencies
Types of center-of-government policy levers:
* Designate primary policy point of contact/dedicated staff member in the EOP (e.g., 2009-2017, 2017-2021)

* Designate a primary implementation point of contact/dedicated staff at GSA and/or OPM

* Identify leads in all or certain federal agencies

* Connect topic to other administration policy agendas and strategies

* Highlight early adopters within agencies in communications from center-of-government agencies to other federal agencies (and to external audiences)

* Offer congressional briefings and foster bipartisan collaboration (e.g., 2015)
Stage of federal adoption: Early – No or few Federal agencies using method
Strategy: Establish legal authorities and general administration policy
Types of center-of-government policy levers:
* Engage the OMB Office of General Counsel and OMB Legislative Review Division, as well as other relevant OMB offices and EOP policy councils

* Identify existing general authorities and regulations that could support federal agency use of method (e.g., March 2010)

* Establish general policy guidelines, including by leveraging Presidential authorities through executive orders or memoranda (e.g., January 2009)

* Issue OMB directives on specific follow-on agency actions or guidance to support agency implementation (“M-Memos” or similar) (e.g., December 2009, March 2010, August 2011, March 2012)

* Provide technical assistance to Congress regarding government-wide or agency-specific authority (or authorities) (e.g., June-July 2010, January 2011)

* Delegate existing authorities within agencies (e.g., October 2011)

* Encourage issuance of agency-specific guidance (e.g., October 2011, February 2014)

* Include direction to agencies as part of broader Administration policy agendas (e.g., September 2009, 2011-2016)
Stage of federal adoption: Early – No or few Federal agencies using method
Strategy: Remove barriers and “make it easier”
Types of center-of-government policy levers:
* Create a central government website with information for federal agency practitioners (such as toolkits, case studies, and trainings) and for the public (e.g., September 2010)

* Create dedicated GSA schedule of vendors (e.g., July 2011)

* Establish an interagency center of excellence (e.g., September 2011)

* Encourage use of interagency agreements on design or implementation of pilot initiatives (e.g., September 2011)

* Request agency budget submissions to OMB to support pilot use in President’s budget (e.g., December 2013)
Stage of federal adoption: Adoption well underway – Many federal agencies have begun to use method
Strategy: Connect practitioners
Types of center-of-government policy levers:
* Launch a federal “community of practice” with support from GSA for meetings, listserv, and collaborative projects (e.g., April 2010, 2016, June 2019)

* Host regular events, workshops, and conferences with federal agencies and, where appropriate and allowable, seek philanthropic or nonprofit co-hosts (e.g., April 2010, June 2012, April 2015, March 2018, May 2022)
Stage of federal adoption: Adoption well underway – Many federal agencies have begun to use method
Strategy: Strengthen agency infrastructure
Types of center-of-government policy levers:
* Foster leadership buy-in through briefings from White House/EOP to agency leadership, including members of the career senior executive service

* Encourage agencies to dedicate agency staff and invest in prize design support within agencies

* Encourage agencies to create contract vehicles as needed to support collaboration with vendors/ experts

* Encourage agencies to develop intra-agency networks of practitioners and to provide external communications support and platforms for outreach

* Request agency budget submissions to OMB for investments in agency infrastructure and expansion of use, to include in the President's budget where needed (e.g., 2012-2013), and request agencies otherwise accommodate lower-dollar support (such as allocation of FTEs) where possible within their budget toplines
Stage of federal adoption: Adoption well underway – Many federal agencies have begun to use method
Strategy: Clarify existing policies and authorities
Types of center-of-government policy levers:
* Issue updated OMB, OSTP, or agency-specific policy guidance and memoranda as needed based on engagement with agencies and stakeholders (e.g., August 2011, March 2012)

* Provide technical assistance to Congress on any needed updates to government-wide or agency-specific authorities (e.g., January 2017)
Stage of federal adoption: Adoption prevalent – Most if not all federal agencies have adopted, with a need to maintain use and momentum over time
Strategy: Highlight progress and capture lessons learned
Types of center-of-government policy levers:
* Require regular reporting from agencies to EOP (OSTP, OMB, or similar) (e.g., April 2012, May 2022)

* Require and take full advantage of regular reports to Congress (e.g., April 2012, December 2013, May 2014, May 2015, August 2016, June 2019, May 2022, April 2024)

* Continue to capture and publish federal-use case studies in multiple formats online (e.g., June 2012)

* Undertake research, evaluation, and evidence-building

* Co-develop practitioner toolkit with federal agency experts (e.g., December 2016)

* Continue to feature promising examples on White House/EOP blogs and communication channels (e.g., October 2015, August 2020)

* Engage media and seek both general interest and targeted press coverage, including through external awards/honorifics (e.g., December 2013)
Stage of federal adoption: Adoption prevalent – Most if not all federal agencies have adopted, with a need to maintain use and momentum over time
Strategy: Prepare for presidential transitions and document opportunities for future administrations
Types of center-of-government policy levers:
* Integrate go-forward proposals and lessons learned into presidential transition planning and transition briefings (e.g., June 2016-January 2017)

* Brief external stakeholders and Congressional supporters on progress and future opportunities

* Connect use of method to other, broader policy objectives and national priorities (e.g., August 2020, May 2022, April 2024)

Phases and timeline of policy actions advancing the adoption of incentive prizes by federal agencies

  1. Growing number of incentive prizes offered outside government (early 2000s)

At the close of the 20th century, federal use of incentive prizes to induce activity toward targeted solutions was limited, though the federal government regularly utilized recognition prizes to reward past accomplishment. In October 2004, the $10 million Ansari XPRIZE—which was first announced in May 1996—was awarded by the XPRIZE Foundation for the successful flights of SpaceShipOne by Scaled Composites. Following the awarding of the Ansari XPRIZE and the extensive resulting news coverage, philanthropists and high-net-worth individuals began to offer prize purses to incentivize action on a wide variety of technology and social challenges. A variety of new online challenge platforms sprang up, and new vendors began offering consulting services for designing and hosting challenges, trends that lowered the cost of prize competition administration and broadened participation in prize competitions among thousands of diverse solvers around the world. This growth in the use of prizes by philanthropists and the private sector increased the interest of the federal government in trying out incentive prizes to help meet agency missions and solve national challenges. Actions during this period to support federal use of incentive prizes include:

  2. Obama-Biden Administration Seeks to Expand Federal Prizes Through Administrative Action (2009-2010)

From the start of the Obama-Biden Administration, OSTP and OMB took a series of policy steps to expand the use of incentive prizes across federal agencies and build federal capacity to support those open-innovation efforts. Bipartisan support in Congress for these actions soon led to new legislation to further advance agency adoption of incentive prizes. Actions during this period to support federal use of incentive prizes include:

  3. Implementing the New Government-Wide Prize Authority Provided by the America COMPETES Reauthorization Act (2011-2016)

During this period of expansion in the federal use of incentive prizes, supported by the new government-wide prize authority provided by Congress, the Obama-Biden Administration continued to emphasize its commitment to the model as a key method for accomplishing administration priorities, including those related to open government and evidence-based decision making. Actions during this period to support federal use of incentive prizes include:

  4. Maintaining Momentum in New Presidential Administrations

Support for federal use of incentive prizes continued beyond the Obama-Biden Administration’s foundational efforts. Leadership by federal agency prize leads was particularly important to sustaining this momentum from administration to administration. Actions during the Trump-Pence and Biden-Harris Administrations to support federal use of incentive prizes include:

Harnessed American ingenuity through increased use of incentive prizes. Since 2010, more than 80 Federal agencies have engaged 250,000 Americans through more than 700 challenges on Challenge.gov to address tough problems ranging from fighting Ebola, to decreasing the cost of solar energy, to blocking illegal robocalls. These competitions have made more than $220 million available to entrepreneurs and innovators and have led to the formation of over 275 startup companies with over $70 million in follow-on funding, creating over 1,000 new jobs.

In addition, in January 2017, the Obama-Biden Administration’s OSTP mentioned the use of incentive prizes in its public “exit memo” as a key “pay-for-performance” method in agency science and technology strategies that “can deliver better results at lower cost for the American people,” and also noted:

Harnessing the ingenuity of citizen solvers and citizen scientists. The Obama Administration has harnessed American ingenuity, driven local innovation, and engaged citizen solvers in communities across the Nation by increasing the use of open-innovation approaches including crowdsourcing, citizen science, and incentive prizes. Following guidance and legislation in 2010, over 700 incentive prize competitions have been featured on Challenge.gov from over 100 Federal agencies, with steady growth every year.

By the end of fiscal year 2022, federal agencies had hosted over 2,000 prize competitions on Challenge.gov since its launch in 2010. OSTP, GSA, and NASA’s Center of Excellence for Collaborative Innovation (CoECI) had provided training to well over 2,000 federal practitioners during that same period.

Number of Federal Prize Competitions by Authority FY14-FY22

Source: Office of Science and Technology Policy. Biennial Report on “IMPLEMENTATION OF FEDERAL PRIZE AND CITIZEN SCIENCE AUTHORITY: FISCAL YEARS 2021-22.” April 2024.

Federal Agency Practices to Support the Use of Prize Competitions

Source: Office of Science and Technology Policy. Biennial Report on “IMPLEMENTATION OF FEDERAL PRIZE AND CITIZEN SCIENCE AUTHORITY: FISCAL YEARS 2019-20.” March 2022. 

CONCLUSION

Over the span of a decade, incentive prizes moved from a tool used primarily outside of the federal government to one used commonly across federal agencies, thanks to a concerted, multi-pronged effort led by policy entrepreneurs and incentive prize practitioners in the EOP and across federal agencies, with bipartisan congressional support, spanning several presidential administrations. And yet the work to support the use of prizes by federal agencies is not complete–there remains extensive opportunity to further improve the design, rigor, ambition, and effectiveness of federal prize competitions; to move beyond “ideas challenges” and use incentive prizes to demonstrate technologies and solutions in testbeds and real-world deployment scenarios; to train additional federal personnel on the use of incentive prizes; to learn from the results of federal incentive prize competitions; and to apply this method to pressing and emerging challenges facing the nation.

In applying these lessons to efforts to expand the use of other promising methods in federal agencies, policy entrepreneurs in center-of-government federal agencies should be strategic in the policy actions they take to encourage and scale method adoption, by first seeking to understand the adoption maturity of that method (as well as the relevant policy readiness) and then by undertaking interventions appropriate for that stage of adoption. With attention and action by policy entrepreneurs to address factors that discourage risk-taking, experimentation, and piloting of new methods by federal agencies, it will be possible for federal agencies to utilize a further-expanded strategic portfolio of methods to catalyze the development, demonstration, and deployment of technology and innovative solutions to meet agency missions, solve long-standing problems, and address grand challenges facing our nation. 


Automating Scientific Discovery: A Research Agenda for Advancing Self-Driving Labs

Despite significant advances in scientific tools and methods, the traditional, labor-intensive model of scientific research in materials discovery has seen little innovation. The reliance on highly skilled but underpaid graduate students as labor to run experiments hinders the labor productivity of our scientific ecosystem. An emerging technology platform known as Self-Driving Labs (SDLs), which use commoditized robotics and artificial intelligence for automated experimentation, presents a potential solution to these challenges.

SDLs are not just theoretical constructs but have already been implemented at small scales in a few labs. An ARPA-E-funded Grand Challenge could drive funding, innovation, and development of SDLs, accelerating their integration into the scientific process. A Focused Research Organization (FRO) can also help create more modular and open-source components for SDLs and can be funded by philanthropies or the Department of Energy’s (DOE) new foundation. With additional funding, DOE national labs can also establish user facilities for scientists across the country to gain more experience working with autonomous scientific discovery platforms. In an era of strategic competition, funding emerging technology platforms like SDLs is all the more important to help the United States maintain its lead in materials innovation.

Challenge and Opportunity

New scientific ideas are critical for technological progress. These ideas often form the seed insight for creating new technologies: lighter cars that are more energy efficient, stronger submarines to support national security, and more efficient clean energy like solar panels and offshore wind. While the past several centuries have seen incredible progress in scientific understanding, the fundamental labor structure of how we do science has not changed. Our microscopes have become far more sophisticated, yet the actual synthesis and testing of new materials are still done laboriously in university laboratories by highly knowledgeable graduate students. The lack of innovation in how we use scientific labor may account for the stagnation of research labor productivity, a primary cause of concerns about the slowing of scientific progress. Indeed, analysis of the scientific literature suggests that scientific papers are becoming less disruptive over time and that new ideas are getting harder to find. The slowing rate of new scientific ideas, particularly in the discovery of new materials or advances in materials efficiency, poses a substantial risk, potentially costing billions of dollars in economic value and jeopardizing global competitiveness. However, incredible advances in artificial intelligence (AI) coupled with the rise of cheap but robust robot arms are leading to a promising new paradigm of materials discovery and innovation: Self-Driving Labs. An SDL is a platform where material synthesis and characterization are done by robots, with AI models intelligently selecting new material designs to test based on previous experimental results. These platforms enable researchers to rapidly explore and optimize designs within otherwise unfeasibly large search spaces.

Today, most material science labs are organized around a faculty member or principal investigator (PI), who manages a team of graduate students. Each graduate student designs experiments and hypotheses in collaboration with the PI and then executes the experiment, synthesizing the material and characterizing its properties. Unfortunately, that last step is often the most laborious and time-consuming. This sequential method of material discovery, in which highly knowledgeable graduate students spend large portions of their time doing manual wet lab work, rate-limits the number of experiments and potential discoveries by a given lab group. SDLs can significantly improve the labor productivity of our scientific enterprise, freeing highly skilled graduate students from menial experimental labor to craft new theories or distill novel insights from autonomously collected data. Additionally, they yield more reproducible outcomes because experiments are run by code-driven motors rather than by humans, who may forget to include certain experimental details or have natural variations between procedures.

Self-Driving Labs are not a pipe dream. The biotech industry has spent decades developing advanced high-throughput synthesis and automation. For instance, while in the 1970s statins (one of the most successful cholesterol-lowering drug families) were discovered in part by a researcher manually testing 3,800 cultures over a year, today companies like AstraZeneca invest millions of dollars in automation and high-throughput research equipment (see Figure 1). While drug and material discovery share some characteristics (e.g., combinatorially large search spaces and high impact of discovery), materials R&D has historically seen fewer capital investments in automation, primarily because it sits further upstream from where private investments anticipate predictable returns. There are, however, a few notable examples of SDLs being developed today. For instance, researchers at Boston University used a robot arm to test 3D-printed designs for uniaxial compression energy absorption, an important mechanical property for designing stronger structures in civil engineering and aerospace. A Bayesian optimizer was then used to iterate over 25,000 designs in a search space with trillions of possible candidates, which led to an optimized structure with the highest recorded mechanical energy absorption to date. Researchers at North Carolina State University used a microfluidic platform to autonomously synthesize more than 100 quantum dots, discovering formulations that outperformed the previous state of the art in that material family.
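
To make the closed-loop pattern concrete, the sketch below shows one common way such an experiment-selection loop can be structured: a surrogate model (here a Gaussian process) is fit to the results measured so far, an acquisition function such as expected improvement scores a pool of candidate designs, and the top-scoring candidate is sent to the robotic synthesis-and-characterization step. This is a generic, simplified illustration using scikit-learn, SciPy, and NumPy, not the actual code used by the groups cited above; the `run_experiment` function is a hypothetical placeholder for whatever synthesis and measurement an SDL performs.

```python
# Minimal sketch of a self-driving-lab optimization loop:
# fit a surrogate model to past results, score candidate designs with an
# acquisition function, and "run" the most promising design next.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def run_experiment(x):
    """Placeholder for robotic synthesis + characterization.
    In a real SDL this would command hardware and return a measured
    property (e.g., energy absorption); here it is a toy function."""
    return float(-np.sum((x - 0.3) ** 2) + 0.01 * rng.normal())

# Candidate design space: here, 3 continuous composition/process parameters.
candidates = rng.uniform(0, 1, size=(5000, 3))

# Seed the loop with a few initial, randomly chosen experiments.
X = candidates[:5].copy()
y = np.array([run_experiment(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for step in range(20):                      # experiment budget
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)

    # Expected improvement over the best result observed so far.
    best = y.max()
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best) / sigma
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

    x_next = candidates[np.argmax(ei)]      # design the robot runs next
    y_next = run_experiment(x_next)

    X = np.vstack([X, x_next])
    y = np.append(y, y_next)

print("Best design found:", X[np.argmax(y)], "value:", y.max())
```

In a production SDL, the candidate pool, surrogate model, and acquisition function would be tailored to the material family, and the loop would run unattended on the lab's hardware.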

These first-of-a-kind SDLs have shown exciting initial results demonstrating their ability to discover new material designs in a haystack of thousands to trillions of possible designs, which would be too large for any human researcher to grasp. However, SDLs are still an emerging technology platform. To scale them up and realize their full potential, the federal government will need to make significant and coordinated research investments that derisk this materials innovation platform and demonstrate the return on capital before the private sector is willing to invest.

Other nations are beginning to recognize the importance of a structured approach to funding SDLs: University of Toronto’s Alan Aspuru-Guzik, a former Harvard professor who left the United States in 2018, has created an Acceleration Consortium to deploy these SDLs and recently received $200 million in research funding, Canada’s largest-ever research grant. In an era of strategic competition and climate challenges, maintaining U.S. competitiveness in materials innovation is more important than ever. Building a strong research program to fund, build, and deploy SDLs in research labs should be a part of the U.S. innovation portfolio.

Plan of Action

While several labs in the United States are working on SDLs, they have all received small, ad-hoc grants that are not coordinated in any way. A federal funding program dedicated to self-driving labs does not currently exist. As a result, SDL efforts are constrained to the most tractable material systems (e.g., microfluidics), and the lack of patient capital hinders labs’ ability to scale these systems and realize their true potential. A coordinated U.S. research program for Self-Driving Labs should:

Initiate an ARPA-E SDL Grand Challenge: Drawing inspiration from DARPA’s previous grand challenges that have catalyzed advancements in self-driving vehicles, ARPA-E should establish a Grand Challenge to catalyze state-of-the-art advancements in SDLs for scientific research. This challenge would involve an open call for teams to submit proposals for SDL projects, with a transparent set of performance metrics and benchmarks. Successful applicants would then receive funding to develop SDLs that demonstrate breakthroughs in automated scientific research. A projected budget for this initiative is $30 million, divided among six selected teams, each receiving $5 million over a four-year period to build and validate their SDL concepts. While ARPA-E is best positioned in terms of authority and funding flexibility, other institutions like the National Science Foundation (NSF) or DARPA itself could also fund similar programs.

Establish a Focused Research Organization to open-source SDL components: This FRO would be responsible for developing modular, open-source hardware and software specifically designed for SDL applications. Creating common standards for both the hardware and software needed for SDLs will make such technology more accessible and encourage wider adoption. The FRO would also conduct research on how automation via SDLs is likely to reshape labor roles within scientific research and provide best practices on how to incorporate SDLs into scientific workflows. A proposed operational timeframe for this organization is five years, with an estimated budget of $18 million over that time period. The organization would work on prototyping SDL-specific hardware solutions and make them available on an open-source basis to foster wider community participation and iterative improvement. An FRO could be spun out of the DOE’s new Foundation for Energy Security and Innovation (FESI), which would continue to establish the DOE’s role as an innovative science funder and be an exciting opportunity for FESI to work with nontraditional technical organizations. Using FESI would not require any new authorities and could leverage philanthropic funding, rather than requiring congressional appropriations.

Provide dedicated funding for the DOE national labs to build self-driving lab user facilities, so the United States can build institutional expertise in SDL operations and allow other U.S. scientists to familiarize themselves with these platforms. This funding can be specifically set aside by the DOE Office of Science or provided through line-item appropriations from Congress. Existing prototype SDLs that have emerged in the past several years, like the Argonne National Lab Rapid Prototyping Lab or Berkeley Lab’s A-Lab, lack sustained DOE funding but could be scaled up and supported with only $50 million in total funding over the next five years. SDLs are also one of the primary applications identified by the national labs in the “AI for Science, Energy, and Security” report, demonstrating the labs’ willingness to build out this infrastructure and underscoring the strategic importance of SDLs recognized by the scientific research community.

Frequently Asked Questions
What factors determine whether an SDL is appropriate for materials innovation?

As with any new laboratory technique, SDLs are not necessarily an appropriate tool for everything. Given that their main benefit lies in automation and the ability to rapidly iterate through designs experimentally, SDLs are likely best suited for:



  • Material families with combinatorially large design spaces that lack clear design theories or numerical models (e.g., metal-organic frameworks, perovskites)

  • Experiments where synthesis and characterization are either relatively quick or cheap and are amenable to automated handling (e.g., UV-vis spectroscopy is a relatively simple, in-situ characterization technique)

  • Scientific fields where numerical models are not accurate enough to use for training surrogate models or where there is a lack of experimental data repositories (e.g., the challenges of using density functional theory in material science as a reliable surrogate model)


While these heuristics are suggested as guidelines, it will take a full-fledged program with actual results to determine what systems are most amenable to SDL disruption.

What aren’t SDLs?

When it comes to exciting new technologies, there can be incentives to misuse terms. Self-Driving Labs can be precisely defined as the automation of both material synthesis and characterization that includes some degree of intelligent, automated decision-making in-the-loop. Based on this definition, here are common classes of experiments that are not SDLs:



  • High-throughput synthesis, where synthesis automation allows for the rapid synthesis of many different material formulations in parallel (lacks characterization and AI-in-the-loop)

  • Using AI as a surrogate trained over numerical models, which is based on software-only results. Using an AI surrogate model to make material predictions and then synthesizing an optimal material is also not an SDL, though certainly still quite the accomplishment for AI in science (lacks discovery of synthesis procedures and requires numerical models or prior existing data, neither of which are always readily available in the material sciences).

Will SDLs “automate” away scientists? How will they change the labor structure of science?

SDLs, like every other technology that we have adopted over the years, eliminate routine tasks that scientists must currently spend their time on. They will allow scientists to spend more time understanding scientific data, validating theories, and developing models for further experiments. They can automate routine tasks but not the job of being a scientist.


However, because SDLs require more firmware and software, they may favor larger facilities that can retain long-term technicians and engineers who maintain and customize SDL platforms for various applications. An FRO could help address this asymmetry by developing open-source and modular software that smaller labs can adopt more easily upfront.