In Honor of Patient Safety Day, Four Recommendations to Improve Healthcare Outcomes
Through partnership with the Doris Duke Foundation, FAS is working to ensure that rigorous, evidence-based ideas on the cutting edge of disease prevention and health outcomes are reaching decision makers in an effective and timely manner. To that end, we have been collaborating with the Strengthening Pathways effort, a series of national conversations held in spring 2025 to surface research questions, incentives, and overlooked opportunities for innovation with potential to prevent disease and improve outcomes of care in the United States. FAS is leveraging its skills in policy entrepreneurship, working with session organizers, to ensure that ideas surfaced in these symposia reach decision-makers to drive impact in active policy windows.
On this World Patient Safety Day 2025, we share a set of recommendations that align with the Centers for Medicare and Medicaid Services (CMS) National Quality Strategy goal of zero preventable harm in healthcare. Working with Patients for Patient Safety US, which co-led one of the Strengthening Pathways conversations this spring with the Johns Hopkins University Armstrong Institute for Patient Safety and Quality, we developed the issue brief below, which outlines a bold, modernized approach that uses Artificial Intelligence technology to empower patients and drive change. FAS continues to explore the rapidly evolving AI and healthcare nexus.
Patient safety is an often-overlooked challenge in our healthcare systems. Whether safety events are caused by medical error, missed or delayed diagnoses, deviations from standards of care, or neglect, hundreds of billions of dollars and hundreds of thousands of lives are lost each year to patient safety lapses in our healthcare settings. Yet most patient safety events are never captured by the systems designed to record them, and clinicians lack the tools to learn from them and improve. Here we present four critical proposals for improving patient safety that are worthy of attention and action.
Challenge and Opportunity
Reducing patient death and harm from medical error surfaced as a U.S. public health priority at the turn of the century with the landmark Institute of Medicine report, To Err Is Human: Building a Safer Health System (2000). Research shows that medical error is the third-largest cause of preventable death in the U.S. Analysis of Medicare claims data and electronic health records by the Department of Health and Human Services (HHS) Office of Inspector General (OIG), in a series of reports from 2008 to 2025, consistently finds that 25-30% of Medicare recipients experience harm events across multiple healthcare settings, from hospitals to skilled nursing facilities to long-term care hospitals to rehabilitation centers. Research on the broader population finds similar rates for adult patients in hospitals. The most recent study on preventable harm in ambulatory care found that 7% of patients experienced at least one adverse event, with rates varying widely from 1.8% to 23.6% across clinical settings. Improving diagnostic safety has emerged as the largest opportunity for preventing patient harm. New research estimates that 795,000 patients in the U.S. annually experience death or harm due to missed, delayed, or ineffectively communicated diagnoses. The annual cost to the health care system of preventable harm and its downstream care cascades is conservatively estimated to exceed $200 billion. This cost is ultimately borne by families and taxpayers.
In its National Quality Strategy, the Centers for Medicare and Medicaid Services (CMS) articulated an aspirational goal of zero preventable harm in healthcare. The National Action Alliance for Patient and Workforce Safety, now managed by the Agency for Healthcare Research and Quality (AHRQ), has a goal of a 50% reduction in preventable harm by 2026. These goals cannot be achieved without a bold, modernized approach that uses AI technology to empower patients and drive change. Under-reporting of negative outcomes and patient harms keeps clinicians and staff from identifying and implementing solutions to improve care. In its latest analysis (July 2025), the OIG finds that fewer than 5% of medical errors are ever reported to the systems designed to gather insights from them. Hospitals failed to capture half of harm events identified via medical record review, and even among captured events, few led to investigation or safety improvements. Only 16% of events required to be reported externally to CMS or state entities were actually reported, meaning critical oversight systems are missing safety signals entirely.
Multiple research papers over the last 20 years find that patients report safety events that providers do not. But there has been no simple, trusted way for patient observations to reach the right people at the right time in a way that supports learning and improvement. Patients could be especially effective in reporting missed or delayed diagnoses, which often manifest across the continuum of care rather than in a single healthcare setting or patient visit. The advent of AI systems provides an unprecedented opportunity to address patient safety and improve patient outcomes if we can improve the data available on the frequency and nature of medical errors. Here we present four ideas for improving patient safety.
Recommendation 1. Create AI-Empowered Safety Event Reporting and Learning System With and For Patients
The Department of Health and Human Services (HHS) can, through CMS, AHRQ or another HHS agency, develop an AI-empowered National Patient Safety Learning and Reporting System that enables anyone, including patients and families, to directly report harm events or flag safety concerns for improvement, including in real or near real time. Doing so would make sure everyone in the system has the full picture — so healthcare providers can act quickly, learn faster, and protect more patients.
This system will:
- Develop a reporting portal to collect, triage, and analyze patient-reported data directly from beneficiaries to improve patient and diagnostic safety.
- Redesign and modernize Consumer Assessment of Healthcare Providers and Systems (CAHPS) surveys to include questions that capture beneficiaries’ experiences and outcomes related to patient and diagnostic safety events.
- Redefine the Beneficiary and Family Centered Care Quality Improvement Organizations (BFCC QIO) scope of work to integrate the QIOs into the National Patient Safety Learning and Reporting System.
The learning system will:
- Use advanced triage (including AI) to distinguish high-signal events and route credible reports directly to the care team and oversight bodies that can act on them (a minimal illustrative sketch follows this list).
- Solicit timely feedback and insights that help hospitals, clinics, and nursing homes prevent recurrence, as well as longer-term feedback on patient outcomes that manifest later, e.g., as a result of missed or delayed diagnoses.
- Protect patients and providers by focusing on efficacy of solutions, not blame assignment.
- Feed anonymized, interoperable data into a national learning network that will spot systemic risks sooner and make aggregated data available for transparency and system learning.
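To make the triage-and-routing bullet above concrete, here is a minimal, purely illustrative sketch of how patient-submitted reports might be scored and routed. It is not a specification of the proposed system: the report fields, keyword list, and destination names are assumptions, and a production system would rely on validated AI/NLP models, clinical review, and protected reporting channels rather than keyword matching.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical high-signal terms; a real system would use a trained classifier,
# not keyword matching. These names and thresholds are illustrative assumptions.
HIGH_SIGNAL_TERMS = {"wrong medication", "missed diagnosis", "fall", "sepsis", "infection"}

@dataclass
class PatientReport:
    report_id: str
    facility_id: str
    narrative: str
    submitted_at: datetime
    harm_occurred: bool      # reporter indicates harm has already occurred
    care_ongoing: bool       # reporter is still receiving care for the issue

@dataclass
class RoutingDecision:
    priority: str                               # "urgent", "routine", or "aggregate-only"
    destinations: list = field(default_factory=list)

def triage(report: PatientReport) -> RoutingDecision:
    """Score a patient-submitted report and decide where to route it."""
    text = report.narrative.lower()
    signal = sum(term in text for term in HIGH_SIGNAL_TERMS)

    if report.harm_occurred and report.care_ongoing:
        # Credible, time-sensitive events go straight to the care team and oversight bodies.
        return RoutingDecision("urgent", ["care_team", "facility_safety_office", "oversight_body"])
    if report.harm_occurred or signal >= 1:
        return RoutingDecision("routine", ["facility_safety_office"])
    # Low-signal observations still feed the anonymized national learning network.
    return RoutingDecision("aggregate-only", ["national_learning_network"])

if __name__ == "__main__":
    example = PatientReport(
        report_id="R-001",
        facility_id="HOSP-42",
        narrative="My father was given the wrong medication after surgery.",
        submitted_at=datetime.now(),
        harm_occurred=True,
        care_ongoing=True,
    )
    decision = triage(example)
    print(decision.priority, decision.destinations)
```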
Recommendation 2. Create a Real-time ‘Patient Safety Dashboard’ using AI
HHS should build an AI-driven platform that integrates patient-reported safety data — including data from the new National Patient Safety Learning and Reporting System, recommended above — with clinical data from electronic health records to create a real-time ‘patient safety dashboard’ for hospitals and clinics. This dashboard will empower providers to improve care in real time, and will:
- Assist healthcare providers in making accurate and timely diagnoses and avoiding errors.
- Make patient reporting easy, effective, and actionable.
- Use AI to triage harm signals and detect systemic risk in real time.
- Build shared national infrastructure for healthcare reporting for all stakeholders.
- Align incentives to reward harm reduction and safety.
By harnessing the power of AI, providers will be able to respond faster, identify patients at risk more effectively, and prevent harm, thereby improving outcomes. This “central nervous system” for patient safety will be deployed nationally to help detect safety signals in real time, connect information across settings, and alert teams before harm occurs.
Recommendation 3. Mine Billing Data for Deviations from Standards of Care
Standards of care are guidelines that define the processes, procedures, and treatments that patients should receive in various medical and professional contexts. Standards ensure that individuals receive appropriate and effective care based on established practices. Most standards of care are developed and promulgated by medical societies. Not all clinicians and clinical settings adhere to standards of care, and some deviations are appropriate given the specifics of an individual case. Nonetheless, standards of care exist for a reason, and deviations should be noted when medical errors result in negative outcomes for patients so that clinicians can learn from these outcomes and improve.
Some patient safety challenges are evident right in the billing data submitted to CMS and insurers. For example, deviations from standards of care can be detected by comparing clinical diagnosis codes and billing codes against widely accepted standards of care. Using CMS billing data, the government could show variability in compliance with standards of care, identify opportunities to develop, augment, and drive wider adoption of those standards, and thereby reduce medical error and improve outcomes. A minimal sketch of what such a claims analysis might look like follows.
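As a purely illustrative sketch of this kind of claims analysis, the snippet below flags patient episodes where a diagnosis code appears without any of the procedures a standard of care would expect to see billed. The diagnosis-to-procedure mapping is an assumption made for illustration only; a real analysis would derive these mappings from medical-society guidelines, apply clinically validated code sets and look-back windows, and involve clinician review before drawing conclusions.

```python
from collections import defaultdict

# Illustrative mapping from diagnosis codes to procedure codes that a standard of
# care would typically expect within an episode. Hypothetical for this sketch.
EXPECTED_PROCEDURES = {
    "E11.9": {"83036"},            # type 2 diabetes -> HbA1c test
    "I48.91": {"93000", "85610"},  # atrial fibrillation -> ECG or INR monitoring
}

def flag_deviations(claims):
    """Group claims by patient and diagnosis, then flag episodes where none of
    the guideline-expected procedures appear anywhere in the billing record."""
    billed = defaultdict(set)  # (patient_id, diagnosis_code) -> billed procedure codes
    for claim in claims:
        key = (claim["patient_id"], claim["diagnosis_code"])
        billed[key].update(claim["procedure_codes"])

    deviations = []
    for (patient_id, dx), procedures in billed.items():
        expected = EXPECTED_PROCEDURES.get(dx)
        if expected and not (expected & procedures):
            deviations.append({"patient_id": patient_id, "diagnosis": dx,
                               "missing_any_of": sorted(expected)})
    return deviations

if __name__ == "__main__":
    sample_claims = [
        {"patient_id": "P1", "diagnosis_code": "E11.9", "procedure_codes": ["99213"]},
        {"patient_id": "P2", "diagnosis_code": "E11.9", "procedure_codes": ["99213", "83036"]},
    ]
    for row in flag_deviations(sample_claims):
        print(row)  # P1 is flagged; P2 is not
```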
Giving standard setters real data to adapt and develop new standards of care is a powerful tool for improving patient outcomes.
Recommendation 4. Create a Patient Safety AI Testbed
HHS can also establish a Patient Safety AI Testbed to evaluate how AI tools used in diagnosis, monitoring, and care coordination perform in real-world settings. This testbed will ensure that AI improves safety, not just efficiency — and can be co-led by patients, clinicians, and independent safety experts. This is an expansion of the testbeds in the HHS AI Strategic Plan.
The Patient Safety Testbed could include:
- Funding for independent AI test environments to monitor real-world safety and performance over time.
- Public reliability benchmarks and “AI safety labeling”.
- Required participation by AI vendors and provider systems.
Conclusion
There are several key steps that the government can take to address the major loss of health, dollars, and lives due to medical errors, while simultaneously bolstering treatment guidelines, driving the development of new transparent data, and holding the medical establishment accountable for improving care. Here we present four proposals. None of them are particularly expensive when weighed against the tremendous savings they would drive throughout our healthcare system. We hope that the Administration’s commitment to patient safety is such that it will adopt them and drive a new era in which caregivers, healthcare systems, and insurance payers work together to improve patient safety and care standards.
ASTRA: An American Space Transformation Regulatory Act
From helping farmers maximize crop yields to creating new and exotic pathways for manufacturing, the space economy has the potential to triple over the next decade. Unlocking abundance in the space economy will require lowering the barriers for new space actors, aligning with international partners, and supporting traditional measures of risk assessment (like insurance) to facilitate space investment.
Unlike countries with newer space programs that can benefit from older programs’ experience, exploration, and accidents, the United States has organically developed a patchwork regime to manage human and non-human space flight. While this approach serves and supports the interests of government agencies and their mission-specific requirements, it hinders the deployment of new and novel technologies and gives other countries motive to deploy extraterritorial regulatory regimes, further complicating the outlook for new space actors. There is an urgent need to rationalize this regime, to create a clear and logical pathway for deploying new technologies, and to facilitate responsible activities in orbit so that space resources need not be governed by scarcity.
As the impacts of human space activities become more clear, there is also a growing need to address the sustainability of human space operations and their capacity to constrain a more abundant human future. While the recent space commercialization executive order attempts to rationalize some of this work, it also preserves some of the regulatory disharmony that exists in the current system while taking actions that are likely to create additional conflicts with impacted communities. The United States should retake the lead, following the examples set by New Zealand, the European Union, and other emerging space actors, in providing a comprehensive space regulatory framework that ensures the safe, sustainable, and responsible growth of the space industry.
Challenge and Opportunity
The Outer Space Treaty creates a set of core responsibilities that must be followed by any country wishing to operate a space program, including (but not limited to) international responsibility for national activities, authorization and supervision of space activities carried out by non-governmental entities, and liability for damage caused to other countries. In the United States, individual government agencies have adopted responsibilities over individual elements of human activity in space, including (but again, not limited to) the Federal Aviation Administration (FAA) over launch and reentry, the Department of Commerce (DOC) over remote sensing, the Federal Communications Commission (FCC) and DOC over spectrum management, and the State Department and DOC over space-related export controls. The FCC has also asserted its regulatory authority over space into other domains, in particular the risk of in-space collision and space debris. If a company wishes to launch a satellite with remote sensing capabilities, it must participate in every single one of these regulatory permitting processes.
Staffing and statutory authority create significant challenges for American space regulators at a time when other countries are getting their respective regulatory houses in order. The offices that manage these programs are relatively small: the Commercial Remote Sensing Regulatory Affairs (CRSRA) division of the Office of Space Commerce (OSC) is currently staffed by two full-time government employees, while the FAA has only five staff handling space flight authorizations. CRSRA was briefly hamstrung earlier this year when its director was released (and then immediately rehired) as part of the Trump Administration’s firing of probationary employees at the National Oceanic and Atmospheric Administration, likely collateral damage in the Administration’s attempts to target programs that interact with climate change.
The lack of personnel capacity creates particular challenges for the FAA, which has struggled to keep pace with launch approvals under its Part 450 launch authorization process amid a record-breaking number of mishap investigations, novel space applications, and application revisions in 2023. Last year, FAA officials testified that the increase in SpaceX launches alone has led to hundreds of hours of monthly overtime, constituting over 80% of staff overtime paid for by the American taxpayer. Other companies have described how accidents from SpaceX-related launches create shifting goalposts for them, pushing FAA officials to avoid confirming receipt of necessary launch documents so as not to start Part 450’s 180-day review clock. The shifting goalposts and prior approvals also mean that certain launch vehicles are subject to different requirements, creating incentives for companies to focus their efforts on non-commercial and defense-related missions.
Without updates to the law, the statutory justification for increasingly important regulatory responsibilities is also unsound, particularly for those that pertain to orbital debris. After the Supreme Court’s Loper Bright ruling, it is unlikely that the theory of law underpinning the FCC’s regulation of space debris could withstand court challenges. This creates a particularly dangerous situation given the long-term impact that the breakup of even small objects can have on the orbital environment, and is likely the reason that space companies have yet to openly challenge the FCC’s assertion of regulatory authority in this space. This also challenges the insurance industry, which has suffered significant financial losses over the past few years, prompting certain companies to pull out of the market entirely.
As human space activities increase, the demands created by the Outer Space Treaty’s requirement for supervision and liability are likely to see corresponding increases. Some countries hosting astronomical observatories that are significantly impaired by light, radio, and other electromagnetic pollution from commercial spacecraft have enacted laws relating to satellite brightness and interference. The number of high-profile debris strikes on property, like the metal component from the International Space Station that crashed into a Florida family’s occupied home, will also increase as rocket second stages and larger satellites return to Earth. The Mexican government is exploring options to sue SpaceX over environmental contamination and debris near its Starbase, Texas launch site.
The unique properties of outer space and other planetary surfaces demand considerations different from those we take for granted on Earth. On Earth, we count on nature’s capacity to “heal itself,” regrow, and recover from destructive resource extraction. By contrast, planetary surfaces with little to no atmosphere or wind, such as the Moon, will preserve footprints and individual tire tracks for decades to thousands of years, altering their geological features. Flecks of paint and bacteria from rovers can create artificial signatures in spectroscopy and biology, contaminating science in undocumented ways that are likely to compromise astrobiology and geology for generations to come. We risk rendering an advanced human civilization unable to unlock discoveries that depend on pristine science or to explore the existence of extraterrestrial life.
Significant safety concerns resulting from increased human space activities could create additional regulatory molasses if unaddressed. An increasing and under-characterized population of debris raises the risk to multi-million dollar instruments and to continued operations in the event of a collision cascade. Current studies, both conservative and optimistic, point to the fact that we are already in the regime of “unstable” debris growth in orbit, complicating the mass deployment of large constellations.
Unfortunately, current international law creates challenges for the mass removal of orbital debris. Article 8 of the Outer Space Treaty establishes that ownership of objects in space does not change by virtue of being in space. This makes the seizure of other countries’ objects illegal, and it is not difficult to imagine weaponizing satellite removal capabilities (as seen in the James Bond film “You Only Live Twice”). If it is not financially advantageous to mitigate space debris, or if export control concerns prevent countries from allowing debris removal, then the most likely long-term result is unchecked debris growth, followed by increasingly draconian regulatory requirements. None of this is good for industry.
Absent a streamlined platform for sharing satellite location and telemetry data, which could be decimated by federal cuts to the Traffic Coordination System for Space (TraCSS), the cost and responsibility of collision and debris avoidance will encourage many commercial space operators to fly blind. The underutilized space insurance industry, already reeling from massive losses in recent years, would face another source of pressure. If the barriers to satellite servicing and recovery remain high, it is probable that the only market for such capabilities will be military missions, inherently inhibiting the ability of these systems to tend to the orbital environment.
While abundance speaks to increasing available resources, chemistry and the law of conservation of matter remind us that our atmosphere and the oxygen we breathe are finite, fragile, and potentially highly reactive to elements commonly found in spacecraft. We are only starting to understand the impact of spacecraft reentry on the upper atmosphere, though there is already significant cause for concern. Nitrogen oxides (NOx), which are produced by spacecraft propulsion and reentry, are known to deplete ozone. Aluminum, one of the most common elements in spacecraft, bonds easily with ozone. Black carbon from launches increases stratospheric temperature, changing circulation patterns. When large rockets explode, the aftermath can create enormous impacts for aviation and rain debris on beaches and critical areas. To top it all off, the reliance of space companies on the defense sector for financing means that many of these assets and constellations are inherently tied to defense activities, increasing the probability that they will be actively targeted or compromised as a result of foreign policy actions, or fast-tracked through regulatory streamlining that circumvents the public comment periods in which valid safety concerns could be raised.
We are quickly approaching a day when the United States government may no longer be the primary regulator of our own industry. The European Union in May 2025 introduced its own Space Act with extraterritorial requirements for companies wishing to participate in the European market. Many of these provisions are well-considered and justified, though the uncertainty and extra layer of compliance that they create for American companies is likely to increase the cost of business further. The EU has created a process for recognizing equivalent regimes in other countries. Under current rules, and especially under the Administration’s new commercial space executive order, the United States regulatory regime is unlikely to be judged as “equivalent.” Given the concerns from EU member states and companies alike about the actions of U.S. space companies, it is more likely than not that the EU will seek to rein in the U.S. space industry in ways that could limit our ability to remain internationally competitive.
Plan of Action
Recommendation 1. Congress should devote resources to studying threats to the abundance of space, such as the impacts of human space exploration, damage to the ozone layer, and inadvertent geoengineering as a result of orbital reentry and fuel deposition.
This should include the impact of satellite interference on space situational awareness capabilities and space weather forecasting, which are critical to stabilizing the space economy.
While the regulatory environment for space should be rationalized to unleash the potential of the space economy, research is also needed to better understand the impacts of space activities and exploration given that we are already beginning to feel the impacts of space activities terrestrially. Having an abundant space economy is meaningless if the continual reentry of satellites destroys the ozone layer and renders the planet uninhabitable. Congress should continue to fund research on the upper atmosphere and protect research done by the NOAA Chemical Sciences Laboratory to understand the upper atmospheric impacts from human space activities.
The astronomy community has also voiced significant concerns about the impact of satellites on their observations. Satellites show up as bright streaks in astronomical images and can cause significant disruption to radio telescopes and weather forecasting sensors alike. This impact is felt not only by ground-based telescopes and sensors, but also by those in orbit like the Hubble Space Telescope. This could have consequences for tracking other phenomena, including (but not limited to) space debris, space weather, cislunar space domain awareness, and planetary defense. Further, light pollution inhibits our ability to discover new physics through astronomical observations: the Hubble tension, the neutrino mass problem, quantum gravity, and the matter-antimatter imbalance all suggest that there are major discoveries waiting for us on the horizon. Failure to preserve the sky could inadvertently restrain our ability to unleash a technological revolution akin to the one that produced Einstein’s theory of relativity and the nuclear age.
There is still much more work to be done to understand these topics and to develop workable solutions that can be adopted by new space actors. The bipartisan Dark and Quiet Skies Act, introduced in 2024, narrowly addresses but one of these needs; sustained support for NASA, NOAA, and NSF science is also necessary given the technology required for stratospheric measurements and advanced metrology.
Recommendation 2. Congress should create an independent Space Promotion and Regulatory Agency.
Ideally, Congress should create a new and independent space promotion and regulatory agency whose activities would include both the promotion of civil and commercial space activities and the authorization and supervision of all U.S.-based commercial space organizations. This body, whose activities should be oriented to fulfilling U.S. obligations under the Outer Space Treaty, should be explicitly empowered to engage in space traffic coordination or management, to manage liability for U.S. space organizations, and to rationalize all existing permitting processes under one organization. Staff from existing agencies (typically 2–25 people per office) should be relocated from existing departments and agencies to this new body to provide for continuity of operations and institutional knowledge.
Congress should seek to maintain a credible firewall between the promotion and regulatory elements of the organization. The promotion element could be responsible for providing assistance to companies, including loans and grants for technology and product development modeled on the DOE Loan Programs Office, as well as general advocacy. The regulatory element should be responsible for licensing domestic space activities, operating the Traffic Coordination System for Space (TraCSS), and any other supervision activities that may become necessary. This would be distinct from the existing Office of Space Commerce function in that the organization would be independent, have the authority to regulate space commerce, and ideally have the resources to fulfill the advocacy and promotion elements of the mission.
In an ideal world, the Office of Space Commerce (OSC) would be able to fulfill this mission with an expanded mandate, regulatory authority, and actual resources to promote commercial space development. In practice, the office was isolated within the National Oceanic and Atmospheric Administration under the Biden Administration and has run into similar bottlenecks with the Secretary of Commerce in the second Trump Administration. Independent authority and resourcing would not only give the director greater plenary authority, but also allow them to better balance the views of interagency partners, hopefully shedding some of the baggage that comes from broader relationships between government departments with broad mandates.
This recommendation explicitly does not suggest eliminating the FAA’s or OSC’s functions, but rather merging the two, preserving current staff and institutional knowledge, and allowing them to work in the same (and independent) organization to make it easier to share knowledge and information. Creating a new regulatory agency on top of the FAA or OSC is not recommended; the purpose is to streamline. Preference should be given to assigning each function to a single actor rather than creating a new and duplicative function on top of the existing structures in Commerce and the FAA.
Given the significant terrestrial impact of spectrum issues related to space, delegating those functions to the FCC and NTIA probably still makes sense, so long as orbital debris and other space regulatory functions are consolidated into a new body that is clearly given such regulatory authority by Congress.
Recommendation 3. Congress should consider requiring that insurance be purchased for all space activities to address the Outer Space Treaty’s liability requirements.
Insurance ensures that nascent areas of growth are minimally disruptive to other interests, e.g., damage to critical infrastructure such as GPS satellites sprayed with debris shrapnel, or harm to the general public when skyscraper-sized pressurized fuel tanks explode on the ground. Insurance is broadly recognized for its ability to help create the type of market stability that is necessary for large capital investments and to promote long-term infrastructure improvements.
The participation of insurance markets is also more likely to encourage venture capital and financial industry participation in commercial space activities, moving the market from dependency on government funding toward self-sustaining commercial enterprise. Despite this, out of 13,000 active satellites, only about 300 are insured. The satellite insurance industry’s losses have been staggering over the last two years, making the pricing of risk difficult for new space actors and investors alike. Correct pricing of risk is essential for investors to be able to make informed decisions about which companies or enterprises to invest in.
Current insurance covers $500 million in damages to third parties; any costs beyond this are drawn from the reservoir of the American taxpayer (unless damages exceed a government ceiling of about $3.1 billion). The current incentive structure favors the deployment of cheap, mass-produced satellites over more sophisticated vehicles that drive technological leadership and progress. The failure or loss of control of such assets can create a permanent hazard to the orbital environment and increase the risk of a collision cascade over the lifetime of the object. Increasing the number of covered satellites should help price overall market risk more correctly, making space investments more accessible and attractive for companies looking to deploy commercial space stations; in-space servicing, assembly, and manufacturing satellites; and other similarly sophisticated investments. These types of technologies are more likely to contribute to abundance in the broader market, as opposed to a temporary, mass-produced investment that does only one thing and ends in a loss of everyone’s long-term access to specific orbits.
The Outer Space Treaty’s liability provisions make a healthy, risk-based insurance market particularly important. If a country or company invests in a small satellite swarm, and some percentage of that swarm goes defunct and produces a collision cascade and/or damages assets on the ground, then U.S. entities (including the government) could be on the hook for potentially unlimited liabilities in a global multi-trillion dollar space economy. It is almost certain that the United States government has not adequately accounted for such an event and that this risk is not currently priced into the market.
A thriving insurance market can also help facilitate other forms of investment by giving investors more confidence and greater tolerance for the other risks involved. It would also serve as an important signal to international partners that the United States is willing to act responsibly in the orbital environment and has the capacity to create the financial incentive schemes needed to honor its commitments. By requiring insurance, Congress can use the prescriptive power of law to ensure transparency for both investors and the general public.
Recommendation 4. The United States should create an inventory of abandoned objects and establish rules governing the abandonment of objects to enable commercial orbital salvage operations.
Given that Article 8 of the Outer Space Treaty could serve as an impediment to orbital debris removal, countries could establish rules or lists of objects that have reached end of life and are now effectively abandoned. The Treaty does not necessarily prevent State Parties from creating rules governing the authorization and supervision of objects, including transfer of ownership at the end of a mission. An inventory of abandoned objects that are “OK for recovery” could help manage concerns related to export controls, intellectual property, or other issues associated with one country recovering another country’s objects. Likewise, countries could also explore the creation of salvage rights or rules to incentivize orbital debris removal missions.
Recommendation 5. The State Department should seek equivalency for the United States under the EU Space Act as soon as possible, and should engage the EU in productive discussions to limit the probability of regulatory divergence, which could more than double the regulatory burden placed on U.S. companies.
With the introduction of the EU Space Act, the primary regulator for U.S. space companies with an international presence is likely to be the European Union. The U.S. Department of State should continue to pursue constructive engagement with the European Commission, Parliament, and Council to limit the risk of regulatory divergence and to ensure that the United States provides adequate safeguards to quickly achieve equivalency, obviating the need for U.S. space companies to worry about compliance with more than one country’s framework. This would ultimately result in lower regulatory burden for the United States, particularly if measures are taken to consolidate the existing U.S. space regulatory environment as described in Recommendation 2.
The failure of the U.S. to get its own house in order is likely to motivate other countries to take similar measures, increasing compliance costs for American companies while foreign operators may only need to rely on their domestic frameworks. Without equivalency, U.S. operators are likely to have to deal with multiple competing regulatory regimes, especially given the history of countries outside the EU adopting EU regulatory frameworks in order to secure market access (the Brussels Effect).
There is a foreign policy need for the U.S. and EU to get on the same page (and fast). Given that companies from the United States are more likely to seek access to European markets than those in the PRC, an asymmetric space policy environment opens a new sphere for contentious policy negotiations between the U.S. and EU. Transatlantic alignment is likely to produce greater leverage in negotiations with the PRC while creating a more stable market where U.S. and European industry can both thrive. Similarly, an antagonistic relationship is more likely to push the European Union toward greater strategic autonomy. Fear of dependence on U.S. companies is already creating new barriers for the United States in other areas, and space has been specifically called out as a key area of concern.
Further, space actors are less familiar with the extent to which trade negotiations can result in asymmetric concessions that could disadvantage one industry to gain benefits in another. To put it bluntly, it is unlikely that President Trump will go to bat for SpaceX (especially given his current relationship with its owner) if it means giving up opportunities to sell American farm exports. One need only look at the recent semiconductor export controls decision, allegedly done to facilitate a bilateral meeting between the two presidents in Beijing.
Conclusion
Unlocking the abundance of the space economy, and doing so responsibly, will require the development of a stable and trustworthy regulatory environment, the repair of frameworks that currently enable monopolistic behavior, and the correct pricing of risk in order to facilitate sustainable investment in the outer space environment. Abundance in one realm at the expense of all others (like when a new spacecraft pauses all air traffic in the Caribbean after exploding) is no longer “abundance.” If the United States does not act soon, the deployment of more modern and agile regulatory frameworks by other countries is likely to accelerate the growth of their advantages in orbit.
If space is there, and if we are going to climb it, then regulatory reform must be a challenge that we are willing to accept, something that we are unwilling to postpone, for a competition that we intend to win.
Clean Water: Protecting New York State Private Wells from PFAS
This memo responds to a policy need at the state level that arises from a lack of relevant federal data. The Environmental Protection Agency (EPA) has a learning agenda question that asks, “To what extent does EPA have ready access to data to measure drinking water compliance reliably and accurately?” This memo helps fill that gap because EPA does not measure drinking water compliance for private wells.
Per- and polyfluoroalkyl substances (PFAS) are widely distributed in the environment, in many cases contaminating private water wells. Given their links to numerous serious health consequences, initiatives to mitigate PFAS exposure among New York State (NYS) residents reliant on private wells were included among the priorities outlined in the annual State of the State address and have been proposed in state legislation. We therefore performed a scenario analysis exploring the impacts and costs of a statewide program testing private wells for PFAS and reimbursing the installation of point-of-entry treatment (POET) filtration systems where exceedances occur.
Challenge and Opportunity
Why care about PFAS?
Per- and polyfluoroalkyl substances (PFAS), a class of chemicals containing millions of individual compounds, are of grave concern due to their association with numerous serious health consequences. A 2022 consensus study report by the National Academies of Sciences, Engineering, and Medicine categorized various PFAS-related health outcomes based on critical appraisal of existing evidence from prior studies; this committee of experts concluded that there is high confidence of an association between PFAS exposure and (1) decreased antibody response (a key aspect of immune function, including response to vaccines), (2) dyslipidemia (abnormal fat levels in one’s blood), (3) decreased fetal and infant growth, and (4) kidney cancer, and moderate confidence of an association between PFAS exposure and (1) breast cancer, (2) liver enzyme alterations, (3) pregnancy-induced high blood pressure, (4) thyroid disease, and (5) ulcerative colitis (an autoimmune inflammatory bowel disease).
Extensive industrial use has rendered these contaminants virtually ubiquitous in both the environment and humans, with greater than 95% of the U.S. general population having detectable PFAS in their blood. PFAS take years to be eliminated from the human body once exposure has occurred, earning their nickname as “forever chemicals.”
Why focus on private drinking water?
Drinking water is a common source of exposure.
Drinking water is a primary pathway of human exposure. Combining both public and private systems, it is estimated that approximately 45% of U.S. drinking water sources contain at least one PFAS. Rates specific to private water supplies have varied depending on location and thresholds used. Sampling in Wisconsin revealed that 71% of private wells contained at least one PFAS and 4% contained levels of perfluorooctanoic acid (PFOA) or perfluorooctanesulfonic acid (PFOS), two common PFAS compounds, exceeding Environmental Protection Agency (EPA)’s Maximum Contaminant Levels (MCLs) of 4 ng/L. Sampling in New Hampshire, meanwhile, found that 39% of private wells exceeded the state’s Ambient Groundwater Quality Standards (AGQS), which were established in 2019 and range from 11-18 ng/L depending on the specific PFAS compound. Notably, while the EPA MCLs represent legally enforceable levels accounting for the feasibility of remediation, the agency has also released health-based, non-enforceable Maximum Contaminant Level Goals (MCLGs) of zero for PFOA and PFOS.
PFAS in private water are unregulated and expensive to remediate.
In New York State (NYS), nearly one million households rely on private wells for drinking water; despite this, there are currently no standardized well testing procedures, and effective well water treatment is unaffordable for many New Yorkers. As of April 2024, the EPA has established federal MCLs for several specific PFAS compounds and mixtures of compounds, and its National Primary Drinking Water Regulations (NPDWR) require public water systems to begin monitoring and publicly reporting levels of these PFAS by 2027; if monitoring reveals exceedances of the MCLs, public water systems must also implement solutions to reduce PFAS by 2029. In contrast, there are no standardized testing procedures or enforceable limits for PFAS in private water. Additionally, testing and remediating private wells both carry costs that are unaffordable for many well owners: PFAS testing runs in the hundreds of dollars, and the installation and maintenance of effective filtration systems can cost several thousand dollars.

How are states responding to the problem of PFAS in private drinking water?
Several states, including Colorado, New Hampshire, and North Carolina, have already initiated programs offering well testing and financial assistance for filters to protect against PFAS.
- After piloting its PFAS Testing and Assistance (TAP) program in one county in 2024, Colorado will expand it to three additional counties in 2025. The program covers the expenses of testing and a $79 nano pitcher (point-of-use) filter. Residents are eligible if PFOA and/or PFOS in their wells exceeds the EPA MCLs of 4 ng/L; filters are free if household income is ≤80% of the area median income and are offered at a 30% discount if this income criterion is not met.
- The New Hampshire (NH) PFAS Removal Rebate Program for Private Wells offers greater flexibility and higher cost coverage than Colorado PFAS TAP, with reimbursements of up to $5,000 offered for either point-of-entry or point-of-use treatment system installation and up to $10,000 offered for connection to a public water system. Though other residents may also participate in the program and receive delayed reimbursement, households earning ≤80% of the area median family income are offered the additional assistance of payment directly to a treatment installer or contractor (prior to installation) so as to relieve the applicant of fronting the cost. Eligibility is based on testing showing exceedances of the EPA MCLs of 4 ng/L for PFOA or PFOS or 10 ng/L for PFHxS, PFNA, or HFPO-DA (trademarked as “GenX”).
- The North Carolina PFAS Treatment System Assistance Program offers flexibility similar to New Hampshire in terms of the types of water treatment reimbursed, including multiple point-of-entry and point-of-use filter options as well as connection to public water systems. It is additionally notable for its tiered funding system, with reimbursement amounts ranging from $375 to $10,000 based on both the household’s income and the type of water treatment chosen. The tiered system categorizes program participants based on whether their household income is (1) <200%, (2) 200-400%, or (3) >400% the Federal Poverty Level (FPL). Also similar to New Hampshire, payments may be made directly to contractors prior to installation for the lowest income bracket, who qualify for full installation costs; others are reimbursed after the fact. This program uses the aforementioned EPA MCLs for PFOA, PFOS, PFHxS, PFNA, or HFPO-DA (“GenX”) and also recognizes the additional EPA MCL of a hazard index of 1.0 for mixtures containing two or more of PFHxS, PFNA, HFPO-DA, or PFBS.
An opportunity exists to protect New Yorkers.
Launching a program in New York similar to those initiated in Colorado, New Hampshire, and North Carolina was among the priority initiatives described by New York Governor Kathy Hochul in the annual State of the State she delivered in January 2025. In particular, Hochul’s plans to improve water infrastructure included “a pilot program providing financial assistance for private well owners to replace or treat contaminated wells.” This was announced along with a $500 million additional investment beyond New York’s existing $5.5 billion dedicated to water infrastructure, which will also be used to “reduce water bills, combat flooding, restore waterways, and replace lead service lines to protect vulnerable populations, particularly children in underserved communities.” In early 2025, the New York Legislature introduced Senate Bill S3972, which intended to establish an installation grant program and a maintenance rebate program for PFAS removal treatment. Bipartisan interest in protecting the public from PFAS-contaminated drinking water is further evidenced by a hearing focused on the topic held by the NYS Assembly in November 2024.
Though these efforts would likely initially be confined to a smaller pilot program with limited geographic scope, such a pilot program would aim to inform a broader, statewide intervention. Challenges to planning an intervention of this scope include uncertainty surrounding both the total funding which would be allotted to such a program and its total costs. These costs will depend on factors such as the eligibility criteria employed by the state, the proportion of well owners who opt into sampling, and the proportion of tested wells found to have PFAS exceedances (which will further vary based on whether the state adopts the EPA MCLs or the NYS Department of Health MCLs, which are 10 ng/L for PFOA and PFOS). We address the uncertainty associated with these numerous possibilities by estimating the numbers of wells serviced and associated costs under various combinations of 10 potential eligibility criteria, 5 possible rates (5, 25, 50, 75, and 100%) of PFAS testing among eligible wells, and 5 possible rates (5, 25, 50, 75, and 100%) of PFAS levels above the MCL and subsequent POET installation among wells tested. A sketch of this calculation follows.
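The sketch below makes the scenario arithmetic concrete. The eligible-well counts are drawn from the analysis; the per-well unit costs are assumptions made for illustration rather than figures stated in this memo, although they were chosen so that, at the 75% testing and 25% exceedance/installation rates used for the headline estimates, the calculation approximately reproduces the published totals.

```python
# Assumed per-well unit costs (illustrative; not stated directly in the memo).
TEST_COST_PER_WELL = 300      # one PFAS test, in dollars
POET_COST_PER_WELL = 4_920    # one POET installation rebate, in dollars

# Eligible-well counts for two of the ten eligibility criteria (see Table 1).
ELIGIBLE_WELLS = {
    "all private wells": 901_441,
    "DAC tract and/or income < $150,000": 725_923,
}

TESTING_RATES = [0.05, 0.25, 0.50, 0.75, 1.00]      # share of eligible wells tested
EXCEEDANCE_RATES = [0.05, 0.25, 0.50, 0.75, 1.00]   # share of tested wells with PFAS > MCL that install a POET

def program_cost(wells: int, testing_rate: float, exceedance_rate: float) -> float:
    """Total cost = testing for all sampled wells + POET rebates for exceedances."""
    tested = wells * testing_rate
    installed = tested * exceedance_rate
    return tested * TEST_COST_PER_WELL + installed * POET_COST_PER_WELL

if __name__ == "__main__":
    for label, wells in ELIGIBLE_WELLS.items():
        for t in TESTING_RATES:
            for e in EXCEEDANCE_RATES:
                print(f"{label}: testing {t:.0%}, exceedance {e:.0%} -> ${program_cost(wells, t, e):,.0f}")
```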
Scenario Analysis
Key findings
- Over 900,000 residences across NYS are supplied by private drinking wells (Figure 1).
- The three most costly scenarios were offering testing and installation rebates for (Table 1):
- Every private well owner (901,441 wells; $1,034,403,547)
- Every well located within a census tract designated as disadvantaged (based on NYS Disadvantaged Community (DAC) criteria) AND/OR belonging to a household with annual income <$150,000 (725,923 wells; $832,996,643)
- Every well belonging to a household with annual income <$150,000 (705,959 wells; $810,087,953)
- The three least costly scenarios were offering testing and installation rebates for (Table 1):
- Every well located within a census tract in which at least 51% of households earn below 80% of the area median income (22,835 wells; $26,191,688)
- Every well belonging to a household earning <100% of the Federal Poverty Level (92,661 wells; $106,328,398)
- Every well located within a census tract designated as disadvantaged (based on NYS Disadvantaged Community (DAC) criteria) (93,840 wells; $107,681,400)
- Of six income-based eligibility criteria, household income <$150,000 included the greatest number of wells, whereas location within a census tract in which at least 51% of households earn below 80% of the area median income (a definition of low-to-moderate income used for programs coordinated by the U.S. Department of Housing and Urban Development) included the fewest wells. This amounts to a cost difference of $783,896,265 between these two eligibility scenarios.
- The six income-based criteria varied dramatically in how many of the NYS wells located within disadvantaged or small communities they would include (Table 2):
- For disadvantaged communities, this ranged from 12% (household income <100% federal poverty level) to 79% (income <$150,000) of all wells within disadvantaged communities being eligible.
- For small communities, this ranged from 2% (census tracts in which at least 51% of households earn below 80% area median income) to 83% (income <$150,000) of all wells within small communities being eligible.
Plan of Action
New York State is already considering a PFAS remediation program (e.g., Senate Bill S3972). The 2025 draft of the bill directed the New York Department of Environmental Conservation to establish an installation grant program and a maintenance rebate program for PFAS removal treatment, and established general eligibility criteria and per-household funding amounts. To our knowledge, S3972 did not pass in 2025, but its program provides a strong foundation for potential future action. Our suggestions below resolve some gaps in S3972, including additional detail that could be followed by the implementing agency and overall cost estimates that could be used by the Legislature when considering overall financial impacts.
Recommendation 1. Remediate all disadvantaged wells statewide
We recommend including every well located within a census tract designated as disadvantaged (based on NYS Disadvantaged Community (DAC) criteria) and/or belonging to a household with annual income <$150,000, as this eligibility criterion protects the widest range of vulnerable New Yorkers. Using this criterion, we estimate a total program cost of approximately $833 million, or $167 million per year if the program were implemented over a 5-year period. Even accounting for the other projects which the state will be undertaking at the same time, this annual cost falls well within the additional $500 million that the 2025 State of the State reports will be added in 2025 to the existing $5.5 billion state investment in water infrastructure.
Recommendation 2. Target disadvantaged census tracts and household incomes
Wells in DAC census tracts account for a variety of disadvantages. Including the NYS DAC criteria helps to account for the heterogeneity of challenges experienced by New Yorkers by weighing statistically meaningful thresholds for 45 different indicators across several domains. These include factors relevant to the risk of PFAS exposure, such as land use for industrial purposes and proximity to active landfills.
Wells in low-income households account for cross-sectoral disadvantage. The DAC criteria alone are imperfect:
- Major criticisms include its underrepresentation of rural communities (only 13% of rural census tracts, compared to 26% of suburban and 48% of urban tracts, have been DAC-designated) and failure to account for some key stressors relevant to rural communities (e.g., distance to food stores and in-migration/gentrification).
- Another important note is that wells within DAC communities account for only 10% of all wells within NYS (Table 2). While wells within DAC-designated communities are important to consider, including only DAC wells in an intervention would therefore be very limiting.
- Whereas DAC designation is a binary consideration for an entire census tract, place-based criteria such as this are limited in that any real community comprises a spectrum of socioeconomic status and (dis)advantage.
The inclusion of income-based criteria is useful in that financial strain is a universal indicator of resource constraint which can help to identify those most in need across every community. Further, including income-based criteria can widen the program’s eligibility to reach a much greater proportion of well owners (Table 2). Finally, in contrast to the DAC criteria’s binary nature, income thresholds can be adjusted to include more or fewer wells depending on final budget availability.
- Of the income thresholds evaluated, income <$150,000 is recommended due to its inclusion not only of the greatest number of well owners overall, but also the greatest percentages of wells within disadvantaged and small communities (Table 2). These two considerations are both used by the EPA in awarding grants to states for water infrastructure improvement projects.
- As an alternative to selecting one single income threshold, the state may also consider maximizing cost effectiveness by adopting a tiered rebate system similar to that used by the North Carolina PFAS Treatment System Assistance Program.
Recommendation 3. Alternatives to POETs might be more cost-effective and accessible
A final recommendation is for the state to maximize the breadth of its well remediation program by also offering reimbursements for point-of-use treatment (POUT) systems and for connecting to public water systems, not just for POET installations. While POETs are effective in PFAS removal, they require invasive changes to household plumbing and prohibitively expensive ongoing maintenance, two factors which may give well owners pause even if they are eligible for an initial installation rebate. Colorado’s PFAS TAP program models a less invasive and extremely cost-effective POUT alternative to POETs. We estimate that if NYS were to provide the same POUT filters as Colorado, the total cost of the program (using the recommended eligibility criteria of location within a DAC-designated census tract and/or belonging to a household with annual income <$150,000) would be $163 million, or $33 million per year across 5 years. This amounts to a total decrease in cost of nearly $670 million if POUTs were to be provided in place of POETs. Connection to public water systems, on the other hand, though a significant initial investment, provides an opportunity to streamline drinking water monitoring and remediation moving forward and eliminates the need for ongoing and costly individual interventions and maintenance.
Conclusion
Well testing and rebate programs provide an opportunity to take preventative action against the serious health threats associated with PFAS exposure through private drinking water. Individuals reliant on PFAS-contaminated private wells for drinking water are likely to ingest the chemicals on a daily basis. There is therefore no time to waste in taking action to break this chain of exposure. New York State policymakers are already engaged in developing this policy solution; our recommendations can help both those making the policy and those tasked with implementing it to best serve New Yorkers. Our analysis shows that a program to mitigate PFAS in private drinking water is well within scope of current action and that fair implementation of such a program can help those who need it most and do so in a cost-effective manner.
While the Safe Drinking Water Act regulates the United States’ public drinking water supplies, there is currently no federal regulation of private wells. Most states also lack regulation of private wells. Introducing new legislation to change this would require significant time and political will. Such political will is unlikely given resource limitations, concerns around well owners’ privacy, and the EPA’s current emphasis on deregulation.
Decreasing blood serum PFAS levels is likely to decrease negative health impacts. Exposure via drinking water is particularly associated with elevated serum PFAS levels, while appropriate water filtration has demonstrated efficacy in reducing serum PFAS levels.
We estimated total costs assuming that 75% of eligible wells are tested for PFAS and that of these tested wells, 25% are both found to have PFAS exceedances and proceed to have filter systems installed. This PFAS exceedance/POET installation rate was selected because it falls between the rates of exceedances observed when private well sampling was conducted in Wisconsin and New Hampshire in recent years.
For states which do not have their own tools for identifying disadvantaged communities, the Social Vulnerability Index developed by the Centers for Disease Control and Prevention (CDC) and Agency for Toxic Substances and Disease Registry (ATSDR) may provide an alternative option to help identify those most in need.
Turning the Heat Up On Disaster Policy: Involving HUD to Protect the Public
This memo addresses HUD’s learning agenda question, “How do the impacts, costs, and resulting needs of slow-onset disasters compare with those of declared disasters, and what are implications for slow-onset disaster declarations, recovery aid programs, and HUD allocation formulas?” We examine this question using heat events as our slow-onset disaster and hurricanes as the declared disaster.
Heat events, a classic example of a slow-onset disaster, result in significant damages that can exceed those caused by more commonly declared disasters like hurricanes, largely because of the high loss of life from heat. The U.S. Department of Housing and Urban Development (HUD) can play an important role in heat disasters because most heat-related deaths occur in the home or among those without homes; the housing sector is therefore a primary lever for public health and safety during extreme heat events. To enhance HUD’s ability to protect the public from extreme heat, we suggest enhancing interagency data collection and sharing to facilitate the federal disaster declarations needed for HUD engagement, working heat mitigation into HUD’s programs, and modifying allocation formulas, especially if a heat disaster is declared.
Challenge and Opportunity
Slow-Onset Disasters Never Declared As Disasters
Slow-onset disasters are defined as events that gradually develop over extended periods of time. Slow-onset events like drought and extreme heat can evolve over weeks, months, or even years. By contrast, sudden-onset disasters like hurricanes occur within a short and defined timeframe. This classification is used by international bodies such as the United Nations Office for Disaster Risk Reduction (UNDRR) and the International Federation of Red Cross and Red Crescent Societies (IFRC).
HUD’s main disaster programs typically require a federal disaster declaration, making HUD action reliant on action by the Federal Emergency Management Agency (FEMA) under the Stafford Act. However, to our knowledge, no slow-onset disaster has ever received a federal disaster declaration, and this category is not specifically addressed through federal policy.
We focus on heat disasters, a classic slow-onset disaster that has received considerable recent attention. No heat event has been declared a federal disaster, despite several requests. Notable examples include the 1980 Missouri heat and drought events, the 1995 Chicago heat wave, which caused an estimated 700 direct fatalities, and the 2022 California heat dome and concurrent wildfires. For each request, FEMA determined that the events lacked sufficient “severity and magnitude” to qualify for federal assistance. FEMA maintains the precedent that declared disasters must be discrete, time-bound events rather than prolonged or seasonal atmospheric conditions.
“How do the impacts, costs, and resulting needs of slow-onset disasters compare with those of declared disasters?”
Heat causes impacts in the same categories as traditional disasters, including mortality, agriculture, and infrastructure, but the impacts can be harder to measure due to the slow-onset nature. For example, heat-related illness and mortality as recorded in medical records are widely known to be significant underestimates of the true health impacts. The same is likely true across categories.
Sample Impacts
We analyze heat impacts within categories commonly considered by federal agencies (human mortality, agricultural impacts, infrastructure impacts, and costs) and compare them to counterparts for hurricanes, a classic sudden-onset disaster. Multi-sectoral reports of heat impacts have also been compiled by other entities, including SwissRe and The Atlantic Council Climate Resilience Center.
We identified 3,478 deaths with a cause of “cataclysmic storms” (e.g., hurricanes; International Classification of Diseases code X37) and 14,461 deaths with a cause of heat (X30) between 1999 and 2020 using data from the Centers for Disease Control and Prevention (CDC). It is important to note that the CDC database only includes death certificates that list heat as a cause of death, which is widely recognized as a significant undercount. Despite these limitations, however, CDC data remain the most comprehensive national dataset for monitoring mortality trends.
HUD can play an important role in reducing heat mortality. In the 2021 Pacific Northwest Heat Dome, most deaths occurred indoors (reportedly 98% in British Columbia), and many occurred in homes without adequate cooling. In the far hotter climate of Maricopa County, Arizona, in 2024, 49% of all heat deaths were among people experiencing homelessness and 23% occurred in the home. Across the U.S., HUD programs could therefore be a critical lever in protecting public health and safety by providing housing and ensuring heat-safe housing.
Agricultural Labor
Farmworkers are particularly vulnerable to extreme heat, and housing can be part of the solution to protect them. According to the Environmental Protection Agency (EPA), between 1992 and 2022, 986 workers across industry sectors died from exposure to heat, with agricultural workers disproportionately affected. According to the Environmental Defense Fund, farmworkers in California are about 20 times more likely to die from heat-related stress than the general population, and the average U.S. agricultural worker is exposed to 21 working days in the summer growing season that are unsafe due to heat. A study found that the number of unsafe working days due to extreme heat will double by midcentury, increasing occupational health risks and reducing labor productivity in critical sectors. Adequate cooling in the home could help protect outdoor workers by facilitating cooling periods during nonwork hours, another way in which HUD could have a positive impact on heat.
Infrastructure and Vulnerability
Rising temperatures significantly increase energy demand, particularly due to the widespread reliance on air conditioning. This surge in demand increases the risk of power outages during heat events, exacerbating public health risks from potential grid failure. In urban areas, the built environment adds heat, while in rural areas residents are at greater risk due to a lack of infrastructure. These dynamics contribute to increased cooling costs and worsened air quality, compounding health vulnerabilities in low-income and urban populations. All of these impacts are areas where HUD could improve the situation by facilitating and encouraging energy-efficient homes and cooling infrastructure.
Costs
In all categories we examined, estimates of U.S.-wide costs due to extreme heat rivaled or exceeded costs of hurricanes. The estimated economic impact of mortality (scaled by the value of a statistical life, VSL = $11.6 million) caused by extreme heat reached $168 billion, significantly exceeding the $40.3 billion in VSL losses from hurricanes during the same period. Infrastructure costs further reflect this imbalance. Extreme heat resulted in an estimated $100 billion in productivity loss in 2024 alone, with over 60% of U.S. counties currently experiencing reduced economic output due to heat-related labor stress. Meanwhile, Hurricanes Helene and Milton together generated $113 billion in damage during the 2024 Atlantic hurricane season.
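The mortality comparison above is a direct scaling of recorded deaths by the value of a statistical life; the short sketch below reproduces it with the figures cited in this memo (14,461 heat deaths and 3,478 hurricane deaths, 1999-2020, at VSL = $11.6 million).

```python
# Reproduces the VSL scaling used above: economic impact of mortality =
# recorded deaths (CDC, 1999-2020) multiplied by the value of a statistical
# life (VSL = $11.6 million). Figures are the ones cited in this memo.

VSL = 11.6e6  # dollars per statistical life

heat_deaths = 14_461       # ICD-10 X30, 1999-2020
hurricane_deaths = 3_478   # ICD-10 X37 ("cataclysmic storm"), 1999-2020

heat_cost = heat_deaths * VSL            # ~ $168 billion
hurricane_cost = hurricane_deaths * VSL  # ~ $40.3 billion

print(f"Heat mortality cost:      ${heat_cost / 1e9:.1f}B")
print(f"Hurricane mortality cost: ${hurricane_cost / 1e9:.1f}B")
```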
Crop damage reveals the disproportionate toll of heat and drought, with 2024 seeing $11 billion in heat/drought impacts compared to $6.8 billion from hurricanes. The dairy industry experiences a substantial recurring burden from extreme heat, with annual losses of $1.5 billion attributed to heat-induced declines in production, reproduction, and livestock fatalities. Broader economic impacts from heat-related droughts are severe, including $14.5 billion in combined damages from the 2023 Southern and Midwestern drought and heatwave, and $22.1 billion from the 2022 Central and Eastern heat events. Comparatively, Hurricane Helene and Hurricane Milton produced $78.7 billion and $34.3 billion in damages, respectively. Extreme heat and drought exert long-term, widespread, and escalating economic pressures across public health, agriculture, energy, and infrastructure sectors. A reassessment of federal disaster frameworks is necessary to appropriately prioritize and allocate funds for heat-related resilience and response efforts.
Resulting Needs
Public Health and Medical Care: Immediate care and resources for heat stroke and exhaustion, dehydration, and respiratory issues are key to prevent deaths from heat exposure. Vulnerable populations including children, elderly, and unhoused are particularly at risk. There is an increased need for emergency medical services and access to cooling centers to prevent the exacerbation of heat stress and to prevent fatalities.
Cooling and Shelter: Communities require access to public cooling centers and to air conditioning. A clean water supply is also essential to maintain health.
Infrastructure and Repair: Increased use of air conditioning raises energy consumption, which can lead to power outages. Updated infrastructure is essential to handle demand and prevent blackouts. Buildings need to incorporate heat-resistant materials to reduce urban heat island effects.
Emergency Response Capacity: Emergency management systems need to be strengthened in order to issue early warnings, produce evacuation plans, and mobilize cooling centers and medical services. Reliable communication systems that provide real-time updates with heat index and health impacts will be key to improve community preparedness.
Financial Support and Insurance Coverage: Agricultural, construction, and service workers are especially vulnerable to heat events. As temperatures rise, these workers may lose income, and compensation mechanisms must be in place.
Social Support and Community Services: There is an increasing need for targeted services for the elderly, unhoused, and low-income communities. Outreach programs, delivery of cooling resources, and shelter options must be communicated and functional in order to reduce mortality. Resilience across these sectors will improve as data definitions and methods are standardized and as funding allocated specifically for heat increases.
“What are implications for slow-onset disaster declarations, recovery aid programs, and HUD allocation formulas?”
Slow-onset disaster declarations
No heat event (nor, to our knowledge, any other slow-onset disaster) has been declared a disaster under the Stafford Act, the primary legal authority for the federal government to provide disaster assistance. The statute defines a “major disaster” as “any natural catastrophe… which in the determination of the President causes damage of sufficient severity and magnitude to warrant major disaster assistance to supplement the efforts and available resources of States, local governments, and disaster relief organizations in alleviating the damage, loss, hardship, or suffering caused thereby.” Though advocacy organizations have claimed that the lack of a heat disaster declaration stems from the Stafford Act’s omission of heat, FEMA’s position is that amendment is unnecessary and that a heat disaster could be declared if state and local needs exceed their capacity during a heat event. This claim is credible, as the COVID-19 pandemic was declared a disaster without explicit mention in the Stafford Act.
Though FEMA’s official position has been openness to supporting an extreme-heat disaster declaration, the fact remains that none has been declared. There is opportunity to improve processes to enable future heat declarations, especially as heat waves affect more people more severely for longer periods. The Congressional Research Service suggests that much of the difficulty stems from FEMA regulations’ focus on assessing uninsured losses, which makes it less likely that FEMA will recommend that the President declare a disaster. Heat events can be hard to pin down to defined time periods and locations, and much of the damage is to health and other impacts that are slow to quantify. Real-time monitoring systems that quantify multi-sectoral damage could therefore be deployed to provide the information needed. Such systems have been designed for extreme heat, and similar systems are being tested for wildfire smoke; these could rapidly be put into use.
The U.S. Department of Housing and Urban Development (HUD) plays a critical role in long-term disaster recovery, primarily by providing housing assistance and funding for community development initiatives (see table above). However, HUD’s ability to deploy emergency support is contingent upon disaster declaration under the Stafford Act and/or FEMA activation. This restriction limits HUD’s capacity to implement timely interventions, such as retrofitting public housing with cooling systems or providing emergency housing relief during extreme heat events.
Without formal recognition of a heat event as a disaster, HUD remains constrained in its ability to deliver rapid and targeted support to vulnerable populations facing escalating risks from extreme temperatures. Without declared heat disasters, the options for HUD engagement hinge on either modifying program requirements or supporting the policy and practice needed to enable heat disaster declarations.
HUD Allocation Formulas
Congress provides funding through supplemental appropriations to HUD following major disasters, and HUD determines how best to distribute funding based on disaster impact data. The calculations are typically based on Individual and Public Assistance data from FEMA, verified loss data from the Small Business Administration (SBA), claims from insurance programs such as the National Flood Insurance Program (NFIP), and housing and demographic data from the U.S. Census Bureau and the American Community Survey. CDBG-DR and CDBG-MIT typically require that at least 70% and 50% of funds, respectively, benefit low- and moderate-income (LMI) communities. Funding is limited to areas where there has been a presidentially declared disaster.
For example, the Disaster Relief Supplemental Appropriations Act, 2025 (approved on 12/21/2024) appropriated $12.039 billion for CDBG-Disaster Recovery funds (CDBG-DR) for disasters “that occurred in 2023 or 2024.” HUD focused its funding on areas with the most serious and concentrated unmet housing needs from within areas that experienced a declared disaster within the time frame. Data used to determine the severity of unmet housing needs included FEMA and SBA inspections of damaged homes; these data were used in a HUD formula.
Opportunities exist to adjust allocation formulas to be more responsive to extreme heat, especially if CDBG is activated for a heat disaster. For example, HUD is directed to use the funds “in the most impacted and distressed areas,” which it could interpret to include housing stock that is unlikely to protect occupants from heat.
Gaps
Extreme heat presents multifaceted challenges across public health, infrastructure, and agriculture, necessitating a coordinated and comprehensive federal response. The underlying gap is the lack of any precedent for declaring an extreme-heat disaster; without such a declaration, numerous disaster-related programs in HUD, FEMA, and other federal agencies cannot be activated. Furthermore, likely because of this underlying gap, disaster-related programs have not focused on protecting public health and safety from extreme heat despite its large and growing impact.
Plan of Action
Recommendation 1. Improve data collection and sharing to enable disaster declarations.
Because a lack of real-time, quantitative data of the type most commonly used in disaster declarations (i.e., uninsured losses and mortality) is likely a key hindrance to heat-disaster declarations, processes should be put in place to rapidly collect and share these data.
Health impacts could be tracked most easily by the CDC, using the existing National Syndromic Surveillance Program and an expansion of the existing influenza-burden methodology, and by the National Highway Traffic Safety Administration’s Emergency Medical Services Activation Surveillance Dashboard. To get real-time estimates of mortality, simple tools can be built that estimate mortality based on prior heat waves; such tools are already being tested for wildfire smoke mortality. Tools like this use weather data as inputs and mortality estimates as outputs, so many agencies could implement them; NOAA, CDC, FEMA, and EPA are all potential hosts. Additional systems need to be developed to track other impacts in real time, including agricultural losses, productivity losses, and infrastructure damage.
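As a rough illustration of how such a tool could work, the sketch below estimates heat-attributable deaths from daily maximum temperatures using a simple exposure-response assumption. The threshold and per-degree excess-risk values are hypothetical placeholders that an operational system would calibrate against prior heat waves for each region.

```python
# A minimal sketch of the kind of real-time mortality estimator described
# above: weather data in, estimated heat-attributable deaths out. The
# exposure-response parameters (threshold temperature, excess risk per degree)
# are hypothetical placeholders, not fitted values.

def estimated_heat_deaths(daily_max_temps_c: list[float],
                          baseline_daily_deaths: float,
                          threshold_c: float = 32.0,
                          excess_risk_per_degree: float = 0.03) -> float:
    """Estimate heat-attributable deaths over an event from daily max temps."""
    excess = 0.0
    for t in daily_max_temps_c:
        degrees_over = max(0.0, t - threshold_c)
        # Fraction of baseline deaths attributable to heat on this day.
        excess += baseline_daily_deaths * excess_risk_per_degree * degrees_over
    return excess

# Example: a five-day heat wave over a region with ~200 expected deaths/day.
print(estimated_heat_deaths([38, 40, 41, 39, 36], baseline_daily_deaths=200))
```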
To facilitate data sharing that might be necessary to develop some of the above tools, we envision a standardized national heat disaster framework modeled after the NIH Data Management and Sharing (DMS) policy. By establishing consistent definitions and data collection methods across health, infrastructure, and socioeconomic sectors, this approach would create a foundation for reliable, cross-sectoral coordination and evidence-based interventions. Open and timely access to data would empower decision-makers at all levels of government, while ethical protections—such as informed consent, data anonymization, and compliance with HIPAA and GDPR—would safeguard individual privacy. Prioritizing community engagement ensures that data collection reflects lived experiences and disparities, ultimately driving equitable, climate-resilient policies to reduce the disproportionate burden of heat disasters.
While HUD or any other agency could lead the collaboration, much of the National Integrated Heat Health Information System (NIHHIS) partnership (HUD is a participant) is already set up to support data-sharing and new tools. NIHHIS is a partner network between many federal agencies and therefore has already started the difficult work of cross-agency collaboration. Existing partnerships and tools can be leveraged to rapidly provide needed information and collaboration, especially to develop real-time quantification of heat-event impacts that would facilitate declaration of heat disasters. Shifting agency priorities have reduced NIHHIS partnerships recently; these should be strengthened, potentially through Congressional action.
Recommendation 2. Incorporate heat mitigation throughout HUD programs
Because housing can play such an important role in heat health (e.g., almost all mortality from the 2021 Heat Dome in British Columbia occurred in the home; most of Maricopa County’s heat mortality is either among the unhoused or in the home), HUD’s extensive programs are in a strong position to protect health and life safety during extreme heat. Spurring resident protection could include gentle behavioral nudges to grant recipients, such as publishing guidance on regionally tailored heat protections for both new construction and retrofits. Because using CDBG funds for extreme heat is uncommon, HUD should publish guidance on how to align heat-related projects with CDBG requirements or how to incorporate heat-related mitigation into projects that have a different focus. In particular, it would be important to provide guidance on how extreme heat related activities meet National Objectives, as required by authorizing legislation.
HUD could also take a more active role, such as incentivizing or requiring heat-ready housing across its other programs, or even setting aside specific amounts of funds for this hazard. The active provision of funding would be facilitated by heat disaster declarations, so until such a declaration occurs, the facilitation guides suggested above are likely the best course of action.
HUD also has a role outside of disaster-related programs. For example, current HUD policy requires residents in Public Housing Agency (PHA) managed buildings to request funding relief to avoid surcharges from heavy use of air conditioning during heat waves; policy could be changed so that HUD proactively initiates that relief. In 2024, Principal Deputy Assistant Secretary Richard Monocchio sent a note encouraging broad thinking to support residents through extreme heat, and such encouragement can be supported with agency action. While this surcharge might seem minor, the ability to run air conditioning is key to protecting health: many indoor heat deaths, from Arizona to British Columbia, occurred in homes that had air conditioning that was not in use.
Recommendation 3. HUD Allocation Formula: Inclusion of Vulnerability Variables
When HUD is able to launch programs focused on extreme heat, likely only following an officially declared heat disaster, HUD allocation formulas should take into account heat-specific variables. These could include areas where heat mortality was highest or, to enhance mitigation impact, areas with higher concentrations of at-risk individuals (older adults, children, individuals with chronic illness, pregnant people, low-income households, communities of color, individuals experiencing houselessness, and outdoor workers) and at-risk infrastructure (older buildings, mobile homes, heat islands). By integrating heat-related vulnerability indicators into allocation formulas, HUD would make the biggest impact on the heat hazard.
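One illustrative way to operationalize this, offered only as a sketch and not as an existing HUD formula, is a weighted vulnerability score per area; the indicator names and weights below are assumptions for demonstration.

```python
# Illustrative sketch only: one way allocation formulas could fold in the
# heat-specific vulnerability indicators listed above. The variable names and
# weights are hypothetical, not an existing HUD formula.

def heat_allocation_score(heat_mortality_rate: float,
                          at_risk_population_share: float,
                          at_risk_housing_share: float,
                          weights: tuple[float, float, float] = (0.5, 0.3, 0.2)) -> float:
    """Combine normalized (0-1) heat-vulnerability indicators into one score."""
    w_mort, w_pop, w_housing = weights
    return (w_mort * heat_mortality_rate
            + w_pop * at_risk_population_share
            + w_housing * at_risk_housing_share)

# Allocation shares could then be proportional to each area's score:
areas = {"A": heat_allocation_score(0.9, 0.6, 0.7),
         "B": heat_allocation_score(0.4, 0.8, 0.3)}
total = sum(areas.values())
shares = {name: score / total for name, score in areas.items()}
print(shares)
```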
Conclusion
Extreme heat is one of the most damaging and economically disruptive threats in the United States, yet it remains insufficiently recognized in federal disaster frameworks. HUD is positioned to make an outsized impact on heat because housing is a key factor in heat mortality. However, strong intervention across HUD and other agencies is held back by the lack of federal disaster declarations for heat. HUD can work together with its partner agencies to address this and other gaps, and thereby protect public health and safety.
Establish a Network of Centers of Excellence in Human Nutrition (CEHN) to Overcome the Data Drought in Nutrition Science Research
NIH needs to invest in both the infrastructure and funding to undertake rigorous nutrition clinical trials, so that we can rapidly improve food and make progress on obesity and nutrition-related chronic disease.
The notion that ‘what we eat impacts our health’ has risen to national prominence with the rise of the “Make America Healthy Again” movement, placing nutrition at the center of American politics. This high degree of interest and enthusiasm stands in stark contrast to the limited high-quality data available to inform many key nutrition questions, a result of limited and waning investment in controlled, experimental research on diet’s impact on health and disease. With heightened public interest and increasing societal costs due to diet-related diseases (>$250 billion), it is imperative to re-envision a nutrition research ecosystem capable of rapidly producing robust evidence relevant to policymakers, regulators, and the food industry. This begins with establishing a network of clinical research centers capable of undertaking controlled human nutrition intervention studies at scale. Such a network, combined with an enhanced commitment to nutrition research funding, will revolutionize nutrition science and our understanding of how food impacts obesity and metabolic health.
The proposed clinical trial network would be endowed with high-capacity metabolic wards and kitchens, with the ability to oversee the long-term stay of large numbers of participants. The network would be capable of deeply phenotyping body composition, metabolic status, clinical risk factors, and the molecular mediators of diet. Such a clinical trial network would publish numerous rigorous clinical trials per year, testing the leading hypotheses in the literature that currently lack meaningful clinical trials. It would produce evidence of direct relevance to policymakers and the food industry, informing how best to make long-overdue progress on reforming the food system and reducing the burden of diet-related diseases like obesity and Type 2 diabetes.
Challenge and Opportunity
While commonly viewed in the modern era as a medical science, nutrition research historically began as an agricultural science. Early researchers sought to define the food composition of common agricultural products and ensure the food system supplied adequate levels of nutrients at affordable prices to meet nutrient requirements. This history firmly established the field of nutrition in universities embedded in the agricultural extension network and funded in large part by the United States Department of Agriculture (USDA) and relevant food industries. It took decades, until the late 1980s and early 1990s, for nutrition’s impact on chronic diseases like obesity and cardiometabolic disease to be taken seriously and viewed through a more medicalized lens. Despite this updated view of nutrition as ostensibly a serious medical science, the study of food has arguably never received attention and resources commensurate with both its importance and the challenges of its rigorous study.
Our understanding of obesity and related chronic diseases has increased dramatically over the past 30 years. Despite improved understanding, many nutrition questions remain. For example, what is the impact of food processing and additives on health? What is the role of factors such as genetics and the microbiome (“Precision Nutrition”) in determining the effect of diet? These and more have emerged as key questions facing the field, policymakers, and industry. Unfortunately, during this same period the capacity to undertake controlled nutrition interventions, the strongest form of evidence for establishing causal relationships, has atrophied substantially.
Early research examining nutrition and its relationship to chronic diseases (e.g., type of fat and blood cholesterol responses) benefited from the availability of general clinical research centers (GCRCs). GCRCs were largely extramurally funded clinical research infrastructure that provided the medical and laboratory services, as well as metabolic kitchens and staff funding, needed to conduct controlled dietary interventions. This model produced evidence that continues to serve as the backbone of existing nutritional recommendations. In the mid-2000s, the GCRC infrastructure was largely defunded and replaced with the Clinical and Translational Science Awards (CTSAs). The CTSA funding model is significantly less generous and provides limited if any funds for key infrastructure such as metabolic kitchens, nursing and laboratory services, and registered dietitian staff, all essential for undertaking controlled nutrition research. The model shifts the burden of cost from the NIH to the funder, a price tag that the pharmaceutical and medical device industries can bear but that is simply not met by the food and supplement industry and is beyond the limited research budgets of academic or government researchers. Without public investment, there is simply no way for nutrition science to keep up with other fields of biomedicine, exacerbating a perception that the American medical system ignores preventive measures like nutrition and ensuring that nutrition research is rated as ‘low quality’ in systematic reviews of the evidence.
The results attributable to this funding model are strikingly evident, and were readily predicted in two high-profile commentaries mourning the loss of the GCRC model. When the field systematically reviews the data, the evidence from controlled feeding trials on chronic disease risk factors was largely published between the 1980s and 2000s. More modern data are overwhelmingly observational in nature or rely on dietary interventions that educate individuals to consume a specific diet rather than providing food; both types of evidence significantly reduce confidence in results and introduce biases that downgrade the certainty of evidence. The limited ability to generate high-quality controlled feeding trial data was most evident in the last edition of the Dietary Guidelines Advisory Committee’s report, which conducted a systematic review of ultraprocessed foods (UPFs) and obesity. This review identified only a single, small experimental study, a two-week clinical trial in 20 adults, with the rest of the literature being observational in nature and graded as too limited to draw firm conclusions about UPFs and obesity risk. This state of the literature is the expected reality for all forthcoming questions in the field of nutrition until the field receives a massive infusion of resources. Without funding for infrastructure and research, the situation will worsen, as both the facilities and the investigators trained in this work continue to wither, and academic tenure-track lines are filled instead by areas currently prioritized by funders (e.g., basic science, global health). How can we expect ‘high certainty’ conclusions in the evidence to inform dietary guidelines when we simply don’t fund research capable of producing such evidence? While the GCRCs were far from perfect, the impact of their defunding on nutrition science over the past two decades is apparent from the quality of evidence on emerging topics and even a cursory look at the faculty of legacy academic nutrition departments. Legislators and policymakers should be alarmed at what the trajectory of the field over the last two decades means for public health.
As we deal with crisis levels of obesity and nutrition-related chronic diseases, we must face the realities of our failures to fund nutrition science seriously over the last two decades, and the data drought a lack of funding has caused. It is a critical failure of the biomedical research infrastructure in the United States that controlled nutrition interventions have fallen by the wayside while rates of diet-related chronic diseases have only worsened. It is essential for the health of our nation and our economy to reinvest in nutrition research to a degree never-before-seen in American history, and produce a state-of-the-art network of clinical trial centers capable of elucidating how food impacts health.
Several key challenges must be overcome to produce a coordinated clinical research center network capable of generating evidence that transforms our understanding of diet’s impact on health:
Challenge 1. Few Existing Research Centers Have the Interdisciplinary Expertise Needed to Tackle Pressing Food and Nutrition Challenges
Both food and health are deeply interdisciplinary, requiring the right mix of expertise across plant and animal agriculture, food science, human nutrition, and various fields of medicine to adequately tackle the pressing nutrition-related challenges facing society. However, the current ‘nutrition’ research landscape of the United States reflects its natural, uncoordinated evolution across diverse agricultural colleges and medical centers.
Any proposed clinical research network needs to bridge these divides and bring together, into a coordinated network, the broad expertise needed to conduct rigorous experimental human nutrition studies. Conquering this divide will require funding to intentionally build out research centers with the appropriate mix of researchers, staff, infrastructure, and equipment necessary to tackle key questions in large cohorts of study participants consuming controlled diets for extended time periods.
Challenge 2. The Study of Food and Nutrition is Intrinsically Challenging
Despite receiving less investment than pharmaceuticals and medical devices, rigorous nutrition science is often more costly to conduct because of its unique methodological burdens. The typical gold-standard pharmaceutical design of a placebo-controlled, randomized, double-blind trial is impossible for most nutrition research questions. Many interventions cannot be blinded. Placebos do not exist for foods, necessitating comparisons between active interventions, of which there are many viable options. Foods are complex interventions, serving as vehicles for many bioactive compounds, which makes isolating causal factors challenging within a single study. Researchers must often make zero-sum decisions that balance internal versus external validity, trading off between rigorous inference and ‘real-world’ application.
Challenge 3. The Study of Food and Nutrition Is Practically Challenging
Historically, controlled trials, including those conducted in GCRC facilities, have been restricted to shorter-term interventions (e.g., 1-4 weeks in duration). These short-term trials are the subject of valid critique, both for failing to capture long-term adaptations to diet and for relying on surrogate endpoints, of which there are few with significant prognostic capacity. Observing differences in actual disease endpoints in response to food interventions is ideal, but investment in such studies has historically been limited. Attempts at a definitive ‘National Diet Heart Study’ to address diet-cardiovascular disease hypotheses in the 1960s were ultimately not funded beyond pilot trials. These challenges have long been used to justify underinvestment in experimental nutrition research and have exacerbated the field’s reliance on observational data. While the challenges are real, investment and innovation are needed to tackle them rather than continue avoiding them.
The challenges of investing in a nutrition clinical research center network pale in comparison to the benefits of its successful implementation. We need only look at the current state of the scientific literature on how to modify diets and food composition to prevent obesity to understand the harms of not investing. The opportunities from doing so are many:
Benefit 1. Build Back Trust in the Scientific Process Surrounding Dietary Recommendations
The deep distrust of the scientific process and in the dietary recommendations from the federal government should be impetus alone for investing heavily in rigorous nutrition research. It is essential for the public to see that the government is taking nutrition seriously. Data alone will not fix public trust but investing in nutrition research, engaging citizens in the process, and ensuring transparency in the conduct of studies and their results will begin to make a dent in the deep distrust that underlies the MAHA movement and that of many food activists over the past several decades.
Benefit 2. Produce Policy- and Formulator-relevant Data
The atrophying of the clinical research network, limited funding, and the historical separation of expertise in nutrition have led to a situation where we know little about how food influences disease risk beyond oft-cited links between sodium and blood pressure and between saturated fat and LDL cholesterol. It should be evident from these two long-standing recommendations, which have survived many politicized criticisms, that controlled human intervention research is the critical foundation of rigorous policy.
In two decades, we need to be able to look back and say the same things about the discoveries ultimately made from studying the next generation of topics around food and food processing. Such findings will be critical not only for policymakers but also for the food industry, which has shown a willingness to reformulate products but often lacks the policy guidance and level of evidence needed to do so in an informed manner, leaving its actions to chase trends rather than science.
Benefit 3. Enhanced Discovery in Emerging Health Research Topics, such as the Microbiome
The potential to rigorously control and manipulate diets to understand their impact on health and disease holds great promise to improve not only public policy and dietary guidance but also our fundamental understanding of human physiology, the gut microbiome, diet-by-gene interactions, and the impact of environmental chemicals. The previous GCRC network expired prior to numerous technical revolutions in nucleotide (DNA, RNA) sequencing, mass spectrometry, and cell biology, leaving nutrition decades behind other areas of medicine.
Benefit 4. Improved Public Health and Reduced Societal Costs
Ultimately, the funding of a clinical research center network that supports the production of rigorous data on links between diet and disease will address the substantial degree of morbidity and mortality caused by obesity and related chronic conditions. This research can be applied to reduce health risks, improve patient outcomes, and lessen the costly burden of an unhealthy nation.
Plan of Action
Congress must pass legislation that mandates the revival and revolution of experimental human nutrition research through the appropriation of funds to establish a network of Centers of Excellence in Human Nutrition (CEHN) research across the country.
Recommendation 1. Congressional Mandate to Review Historical Human Nutrition Research Funding in America and the GCRCs, and:
- Examine the current landscape of nutrition research happening across the United States;
- Investigate the deficiencies in current funding models and clinical research infrastructure;
- Identify key existing infrastructure and infrastructure needs;
- Establish a 10-year timeline to coordinate and fund high-priority nutrition research, appropriating at least $5,000,000,000 per year to the CEHN (representing 200% of the current NIH investment in nutrition research, spanning basic to clinical and epidemiological studies).
Congress should seek NIH, USDA, university, industry, and public input to inform the establishment of the CEHN and the initial rounds of projects it ultimately funds. Within six months, Congress should have a clear roadmap for the CEHN, including participating centers and researchers, feasibility analyses and cost estimates, and three initial approved proposals. At least one proposal should advance existing intramurally funded work on processed foods that has identified several characteristics, including energy density and palatability, as potential drivers of energy intake.
Recommendation 2. Congress Should Mandate that CEHN establish an Operating Charter that Details a Governing Council for the Network Composed of Multi-sector Stakeholders.
The council established by this charter will oversee the network’s management and coordination. Specifically, it will:
- Identify key nutrition science stakeholders, including those with perceived contrasting viewpoints, to guide trial design and prioritization efforts;
- Engage non-nutrition science trialists and form partnerships with contract research organizations (CROs) with experience and demonstrated success managing large pharmaceutical trials to ensure innovative and rigorous trial designs are undertaken and CEHN operations are rigorous;
- Identify methods for public involvement in the proposal and funding of studies; and issue quarterly reports to inform the public of progress on studies operating within the CEHN, providing new results and ways for the public to actively participate in this research.
CEHN should be initiated by Congress. It should also explore novel funding mechanisms that pool resources from the NIH institutes that have historically supported nutrition research (such as the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), the National Heart, Lung, and Blood Institute (NHLBI), the National Institute on Aging (NIA), and the National Institute of Child Health and Human Development (NICHD)), as well as the USDA, the Department of Defense, agricultural commodity boards, and the food industry, to ensure cost-sharing across relevant sectors and a robust, sustainable CEHN. CEHN will ultimately study topics that may produce results perceived as adversarial to the food industry, and its independence should be protected. However, it is clear that America is not going back to a world where the majority of food is produced in the home in resource-scarce settings, as was the case when rates of overweight and obesity were low. Thus, engagement with industry actors across the food system will be critical, including food product formulators and manufacturers, restaurants, and food delivery companies. Funds should be appropriated to facilitate collaboration between the CEHN and these industries to study effective reformulations and modifications that improve the health of the population.
Conclusion
The current health, social, and economic burdens of obesity and nutrition-related diseases are indefensible and necessitate a multi-sector, coordinated approach to reenvisioning our food environment. Such a reenvisioning needs to be based on rigorous science that describes causal links between food and health and delivers innovative solutions to food- and nutrition-related problems. Investment in an intentional, coordinated, and well-funded research network capable of conducting rigorous, long-term nutrition intervention trials is long overdue and holds substantial capacity to revolutionize nutritional guidance, food formulation, and policy. It is imperative that visionary political leaders overhaul our nutrition research landscape and invest in a network of Centers of Excellence in Human Nutrition that can meet the demand for rigorous nutrition evidence, build back trust in public health, and dramatically mitigate the impacts of nutrition- and obesity-related chronic disease.
This memo was produced as part of the Federation of American Scientists and Good Science Project sprint. Find more ideas at Good Science Project x FAS.
Terminal Patients Need Better Access to Drugs and Clinical Trial Information
Editor’s note: This policy memo was written by Jake Seliger and his wife Bess Stillman. Jake passed away before meeting his child, Athena, born on October 31, 2024. Except where indicated, the first-person voice is that of Jake. This memo advocates for user-centric technology modernization and data interoperability. More crucially, Jake urges expanded patient rights to opt in to experimental drug trials and FDA rule revisions that enable terminal patients to take more risks on their own behalf, for the benefit of others.
The FDA is supposed to ensure that treatments are safe and effective, but terminal cancer patients like me are already effectively dead. If I don’t receive advanced treatment quickly, I will die. My hope, in the time I have remaining, is to promote policies that will speed up access to treatments and clinical trials for cancer patients throughout the U.S.
There are about two million cancer diagnoses and 600,000 deaths annually in the United States. Cancer treatments are improved over time via the clinical trial system: thousands of clinical trials are conducted each year (many funded by the government via the National Institutes of Health, or NIH, and many funded by pharmaceutical companies hoping to get FDA approval for their products).
But the clinical trial system is needlessly slow, and as discussed below, is nearly impossible for any layperson to access without skilled consulting. As a result, clinical trials are far less useful than they could be.
The FDA is currently “protecting” me from being harmed or killed by novel, promising, advanced cancer treatments that could save or extend my life, so that I can die from cancer instead. Like most patients, I would prefer a much faster system in which the FDA conditionally approves promising, early-phase advanced cancer treatments, even if those treatments haven’t yet been proven fully effective. Drugmakers will be better incentivized to invest in novel cancer treatments if they can begin receiving payment for those treatments sooner. The true risks to terminal patients like me are low—I’m already dying—and the benefits to both existing terminal patients and future patients of all kinds are substantial.
I would also prefer a clinical trial system that was easy for patients to navigate, rather than next-to-impossible. Easier access for patients could radically lower the cost and time for clinical trials by making recruitment far cheaper and more widespread, rather than including only about 6% of patients. In turn, speeding up the clinical-trial process means that future cancer patients will be helped by more quickly approving novel treatments. About half of pharmaceutical R&D spending goes not to basic research, but to the clinical trial process. If we can cut the costs of clinical trials by streamlining the process to improve access to terminal patients, more treatments will make it to patients, and will be faster in doing so.
Cancer treatment is a non-partisan issue. To my knowledge, both left and right agree that prematurely dying from cancer is bad. Excess safety-ism and excessive caution from the FDA cost lives, including, in the near future, my own. Three concrete actions would improve clinical research, particularly for terminal cancer patients like me, but also for many other patients:
Clinical trials should be easier and cheaper. The chief obstacles to this are recruitment and retention.
Congress and NIH should modernize the website ClinicalTrials.gov and vastly expand what drug companies and research sites are required to report there, and the time in which they must report it. Requiring timely updates that include comprehensive eligibility criteria, availability for new participants, and accurate site contact information would mean that patients and doctors have much more complete information about what trials are available and for whom.
The process of determining patient eligibility and enrolling in a trial should be easier. Due to narrow eligibility criteria, studies struggle to enroll an adequate number of local patients, which severely delays trial progression. A patient who wishes to participate in a trial must “establish care” with the hospital system hosting the trial before they are even initially screened for eligibility or told if a slot is available. Because of telemedicine practice restrictions across state lines, patients who aren’t already cared for at that site (patients who are ill and for whom unnecessary travel is a huge burden) must use their limited energy to travel to a site just to find out whether they can proceed to requesting a trial slot and starting further eligibility testing. Then, if approved for the study, they must be able to uproot their lives to move to, or spend extensive periods of time at, the study location.
Improved access to remote care for clinical trials would solve both these problems. First, by allowing the practice of telemedicine across state lines for visits directly related to screening and enrollment into clinical trials. Second, by incentivizing decentralization—meaning a participant in the study can receive the experimental drug and most monitoring, labs and imaging at a local hospital or infusion clinic—by accepting data from sites that can follow a standardized study protocol.
We should require the FDA to allow companies with prospective treatments for fatal diseases to bring those treatments to market after initial safety studies, with minimal delays and with a lessened burden for demonstrating benefit.
Background
[At the time of writing this] I’m a 40-year-old man whose wife is five months pregnant, and the treatment that may keep me alive long enough to meet my child is being kept from me because of current FDA policies. That drug may be in a clinical trial I cannot access. Or, it may be blocked from coming to market by requirements for additional testing to further prove efficacy that has already been demonstrated.
Instead of giving me a chance to take calculated risks on a new therapy that might allow me to live and to be with my family, current FDA regulations are choosing for me: deciding that my certain death from cancer is somehow less harmful to me than taking a calculated, informed risk that might save or prolong my life. Who is asking the patients being protected what they would rather be protected from? The FDA errs too much on the side of extreme caution around drug safety and efficacy, and that choice leads to preventable deaths.
One group of scholars attempted to model how many lives are lost versus gained from a more or less conservative FDA. They find that “from the patient’s perspective, the approval criteria in [FDA program accelerators] may still seem far too conservative.” Their analysis is consistent with the FDA being too stringent and slow in approving use of drugs for fatal diseases like cancer: “Our findings suggest that conventional standards of statistical significance for approving drugs may be overly conservative for the most deadly diseases.”
Drugmakers also find it difficult to navigate what exactly the FDA wants: “their deliberations are largely opaque—even to industry insiders—and the exact role and weight of patient preferences are unclear.” This exacerbates the difficulty drugmakers face in seeking to get treatments to patients faster. Inaction in the form of delaying patient access to drugs is killing people. Inaction is choosing death for cancer patients.
I’m an example of this; I was diagnosed with squamous cell carcinoma (SCC) of the tongue in Oct. 2022. I had no risk factors, like smoking or alcoholism, that put me at risk for SCC. The original tumor was removed in October 2022, and I then had radiation that was supposed to cure me. It didn’t, and the cancer reappeared in April 2023. At that point, I would’ve been a great candidate for a drug like MCLA-158, which has been stuck in clinical trials, despite “breakthrough therapy designation” by the FDA and impressive data, for more than five years. This, despite the fact that MCLA-158 is much easier to tolerate than chemotherapy and arrests cancer in about 70% of patients. Current standard of care chemotherapy and immunotherapy has a positive response rate of only 20-30%.
Had MCLA-158 been approved, I might have received it in April 2023, and still have my tongue. Instead, in May 2023, my entire tongue was surgically removed in an attempt to save my life, forever altering my ability to speak and eat and live a normal life. That surgery removed the cancer, but two months later it recurred again in July 2023. While clinical-trial drugs are keeping me alive right now, I’m dying in part because promising treatments like MCLA-158 are stuck in clinical trials, and I couldn’t get them in a timely fashion, despite early data showing their efficacy. Merus, the maker of MCLA-158, is planning a phase 3 trial for MCLA-158, despite its initial successes. This is crazy: head and neck cancer patients need MCLA-158 now.

I’m only writing this brief because I was one of the few patients who was indeed, at long last, able to access MCLA-158 in a clinical trial, which incredibly halted my rapidly expanding, aggressive tumors. Without it, I’d have been dead nine months ago. Without it, many other patients already are. Imagine that you, or your spouse, or parent, or child, finds him or herself in a situation like mine. Do you want the FDA to keep testing a drug that’s already been shown to be effective and allow patients who could benefit to suffer and die? Or do you want your loved one to get the drug, and avoid surgical mutilation and death? I know what I’d choose.
Multiply this situation across hundreds of thousands of people per year and you’ll understand the frustration of the dying cancer patients like me.
As noted above, about 600,000 people die annually from cancer—and yet cancer drugs routinely take a decade or more to move from lab to approval. The process is slow: “By the time a drug gets into phase III, the work required to bring it to that point may have consumed half a decade, or longer, and tens if not hundreds of millions of dollars.” If you have a fatal disease today, a treatment that is five to ten years away won’t help. Too few people participate in clinical trials partly because participation is so difficult; one study finds that “across the entire U.S. system, the estimated participation rate to cancer treatment trials was 6.3%.” Given how many people die, the prospect of life is attractive.
There is another option in between waiting decades for a promising drug to come to market and opening the market to untested drugs: Allowing terminal patients to consent to the risk of novel, earlier-phase treatments, instead of defaulting to near-certain death, would potentially benefit those patients as well as generate larger volumes of important data regarding drug safety and efficacy, thus improving the speed of drug approval for future patients. Requiring basic safety data is reasonable, but requiring complete phase 2 and 3 data for terminal cancer patients is unreasonably slow, and results in deaths that could be prevented through faster approval.
Again, imagine you, your spouse, parent, or child, is in a situation like mine: they’ve exhausted current standard-of-care cancer treatments and are consequently facing certain death. New treatments that could extend or even save their life may exist, or be on the verge of existing, but are held up by the FDA’s requirements that treatments be redundantly proven to be safe and highly effective. Do you want your family member to risk unproven but potentially effective treatments, or do you want your family member to die?
I’d make the same choice. The FDA stands in the way.
Equally important, we need to shorten the clinical trial process: As Alex Telford notes, “Clinical trials are expensive because they are complex, bureaucratic, and reliant on highly skilled labour. Trials now cost as much as $100,000 per patient to run, and sometimes up to $300,000 or even $500,000 per patient.” And as noted above, about half of pharmaceutical R&D spending goes not to basic research, but to the clinical trial process.
Cut the costs of clinical trials, and more treatments will make it to patients, faster. And while it’s not reasonable to treat humans like animal models, a lot of us who have fatal diagnoses have very little to lose and consequently want to try drugs that may help us, and people in the future with similar diseases. Most importantly, we understand the risks of potentially dying from a drug that might help us and generate important data, versus waiting to certainly die from cancer in a way that will not benefit anybody. We are capable of, and willing to give, informed consent. We can do better and move faster than we are now. In the grand scheme of things, “When it comes to clinical trials, we should aim to make them both cheaper and faster. There is as of yet no substitute for human subjects, especially for the complex diseases that are the biggest killers of our time. The best model of a human is (still) a human.” Inaction will lead to the continued deaths of hundreds of thousands of people annually.
Trials need patients, but the process of searching for a trial in which to enroll is archaic. We found ClinicalTrials.gov nearly impossible to navigate. Despite the stakes, from the patient’s perspective, the clinical trial process is impressively broken, obtuse and confusing, and one that we gather no one likes: patients don’t, their families don’t, hospitals and oncologists who run the clinical trials don’t, drug companies must not, and the people who die while waiting to get into a trial probably don’t.
I [Bess] knew that a clinical trial was Jake’s only chance. But how would we find one? I’d initially hoped that a head and neck oncologist would recommend a specific trial, preferably one that they could refer us to. But most doctors didn’t know of trial options outside their institution, or, frequently, within it, unless they were directly involved. Most recommended large research institutions with a good reputation for hard cases, assuming they’d have more studies and that one might be a match.
How were they, or we, supposed to find out what trials actually existed?
The only large-scale search option is ClinicalTrials.gov. But many oncologists I spoke with don’t engage with ClinicalTrials.gov, because the information is out-of-date, difficult to navigate, and inaccurate. It can’t be relied on. For example, I shared a summary of Jake’s relevant medical information (with his permission) in a group of physicians who had offered to help us with the clinical trial search. Ten physicians shared their top-five search results. Ominously, none listed the same trials.
How is it that ten doctors can put in the same basic, relevant clinical data into an engine meant to list and search for all existing clinical trials, only for no two to surface the same study? The problem is simple: There’s a lack of keyword standardization.
Instead of a drop-down menu or click-boxes with diagnoses to choose from, the first search “filter” on ClinicalTrials.gov is a text box that says “Condition/Disease.” If I search “Head and Neck Cancer,” I get ___________ results. If I search “Tongue Cancer,” I get _________ results. Although tongue cancer is a subset of head and neck cancer, I don’t see the studies listed as “Head and Neck Cancer” unless I type in both, or unless the person who created the ClinicalTrials.gov post for the study chose to type out multiple variations on a diagnosis. Nothing says they have to. If I search for both, I will still miss studies filed as “HNSCC,” “All Solid Tumors,” or “Squamous Cell Carcinoma of the Tongue.”
The good news is that online retailers solved this problem for us years ago. It's easier to find a dress to my exact specifications out of thousands on H&M.com than it is to find a clinical trial. I can open a search bar, click “dress,” select my material from another click box (which allows me to select from the same options the people listing the garments chose from), then click the boxes for my desired color, dry clean or machine wash, fabric, finish, closure, and any number of other pre-selected categories before adding additional search keywords if I choose. I find a handful of options, all relevant to my desires, within a matter of minutes. H&M provides a list of standardized keywords describing what they are offering, and I can filter from there. This way, H&M and I are speaking the same language. And a dress isn't life or death. For much more on my difficulties with ClinicalTrials.gov, see here.
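To make the retail analogy concrete, here is a minimal sketch, in Python, of what standardized condition keywords plus faceted filtering could look like. The synonym table, trial records, and function names are hypothetical illustrations for this memo, not real ClinicalTrials.gov data or an existing API.

```python
# Minimal sketch: a controlled vocabulary maps free-text condition labels to one
# canonical code, the way a retailer maps "crimson" and "scarlet" to the single
# facet value "red". All terms and trial records below are made up.
CONDITION_SYNONYMS = {
    "head and neck cancer": "HEAD_NECK_CANCER",
    "hnscc": "HEAD_NECK_CANCER",
    "squamous cell carcinoma of the tongue": "HEAD_NECK_CANCER",
    "tongue cancer": "HEAD_NECK_CANCER",
}

TRIALS = [
    {"id": "NCT-A", "condition": "HNSCC", "phase": "2", "recruiting": True},
    {"id": "NCT-B", "condition": "Tongue Cancer", "phase": "1", "recruiting": True},
    {"id": "NCT-C", "condition": "Melanoma", "phase": "3", "recruiting": False},
]

def normalize(term: str) -> str | None:
    """Map any listed synonym to its canonical condition code."""
    return CONDITION_SYNONYMS.get(term.strip().lower())

def search(term: str, *, recruiting_only: bool = True) -> list[dict]:
    """Return every trial whose condition normalizes to the same code as the query."""
    code = normalize(term)
    if code is None:
        return []  # unknown term: nothing to match against
    return [
        t for t in TRIALS
        if normalize(t["condition"]) == code
        and (t["recruiting"] or not recruiting_only)
    ]

# With a shared vocabulary, "Tongue Cancer" and "HNSCC" return the same results.
assert search("Tongue Cancer") == search("HNSCC")
```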
Further slowing a patient's ability to find a relevant clinical trial is a lack of comprehensive, searchable eligibility criteria. Every study has eligibility criteria, and eligibility criteria—like keywords—aren't standardized on ClinicalTrials.gov. Nor is it required that an exhaustive explanation of eligibility criteria be provided, which may lead to a patient wasting precious weeks attempting to establish care and enroll in a trial, only to discover there were unpublished eligibility criteria they didn't meet. Instead, the information page for each study outlines inclusion and exclusion criteria using whatever language whoever is typing feels like using. Many have overlapping inclusion and exclusion criteria, but there can be long lists of additional criteria for each arm of a study, and it's up to the patient or their doctor to read through them line by line (if they're even listed) to see whether prior medications, current medications, certain genomic sequencing findings, number of lines of therapy, and so on make the trial relevant.
In the end, we hired a consultant (Eileen), who leveraged her full-time work, which involves helping pharmaceutical companies determine which novel compounds might be worth their R&D investment, to help patients find potential clinical trials. She helped us narrow the thousands of studies that turned up in initial queries down to the top five candidate trials.
- NCT04815720: Pepinemab in Combination With Pembrolizumab in Recurrent or Metastatic Squamous Cell Carcinoma of the Head and Neck (KEYNOTE-B84)
- NCT03526835: A Study of Bispecific Antibody MCLA-158 in Patients With Advanced Solid Tumors.
- NCT05743270: Study of RP3 in Combination With Nivolumab and Other Therapy in Patients With Locoregionally Advanced or Recurrent SCCHN.
- NCT03485209: Efficacy and Safety Study of Tisotumab Vedotin for Patients With Solid Tumors (innovaTV 207)
- NCT05094336: AMG 193, Methylthioadenosine (MTA) Cooperative Protein Arginine Methyltransferase 5 (PRMT5) Inhibitor, Alone and in Combination With Docetaxel in Advanced Methylthioadenosine Phosphorylase (MTAP)-Null Solid Tumors (MTAP).
Based on the names alone, you can see why it would be difficult, if not impossible, for someone without some expertise in oncology to evaluate trials. Even with Eileen's expertise, two of the trials were stricken from our list when we discovered unlisted eligibility criteria that excluded Jake. When exceptionally motivated patients, the oncologists who care for them, and even consultants selling highly specialized assistance can't reliably navigate a system that claims to be desperate to enroll patients in trials, there is a fundamental problem with the system. But this is a mismatch we can solve, to everyone's benefit.
Plan of Action
We propose three major actions. Congress should:
Recommendation 1. Direct the National Library of Medicine (NLM) at the National Institutes of Health (NIH) to modernize ClinicalTrials.gov so that patients and doctors have complete and relevant information about available trials, and require companies to provide more regular updates on all the details of their trials;
Recommendation 2. Allow the practice of telemedicine across state lines for visits related to clinical trials.
Recommendation 3. Require the FDA to allow companies with prospective treatments for fatal diseases to bring those treatments to market after initial safety studies.
Modernizing ClinicalTrials.gov will empower patients, oncologists, and others to better understand what trials are available, where they are available, and their up-to-date eligibility criteria, using standardized search categories to make them more easily discoverable. Allowing telemedicine across state lines for clinical trial care will significantly improve enrollment and retention. Bringing treatments to market after initial safety studies will speed the clinical trial process and get treatments to more patients, sooner. In cancer, delays cause death.
To get more specific:
The FDA already has a number of accelerated approval options. Instead of the usual “right to try” legislation, we propose creating a second, provisional market for terminal patients that allows partial approval of a drug while it is still undergoing trials, making it available to patients who are trial-ineligible (or unable to easily access a trial) and for whom standard of care doesn't provide a meaningful chance at remission. This partial approval would ideally allow physicians to prescribe the drug to this subset of patients as they see fit: whether as monotherapy or in personalized combination therapies tailored to a patient's needs, side-effect profile, and goals. This wouldn't just expand access for patients who are otherwise out of luck; it would also generate important data about real-world reaction and response.
As an incentive, the FDA could require pharmaceutical companies to provide drug access to terminal patients as a condition of continuing forward in the trial process on the road to a New Drug Application. While this would be federally forced compassion, it would differ from “compassionate use.” Currently, to access a study drug via compassionate use, a physician has to petition both the drug company and the FDA on the patient's behalf, a process that comes with onerous rules and requirements. Most drug companies with compassionate use programs won't offer a drug until there is already a large amount of compelling phase 2 data demonstrating efficacy, the patient must have “failed” standard of care and other available treatments, and the drug must (usually) be given as a monotherapy. Even if the drugmaker says yes, the FDA can still say no. Compassionate use is an option available to very small numbers of patients, in limited instances, and with bureaucratic barriers to overcome. Not terribly compassionate, in my opinion.
The benefits of this “terminal patient” market can and should go both ways, much as lovers should benefit from each other instead of trying to create win-lose situations. Providing the drug to patients would come with reciprocal benefits to the pharmaceutical companies. Any physician prescribing the drug to patients should be required to report data on how the drug was used, in what combination, and to what effect. This would create a large pool of observational, real-world data gathered from patients who aren't ideal candidates for trials but who better represent the large subset of patients who've exhausted multiple lines of therapy yet aren't ready for the end. Promising combinations and unexpected effects might be identified from these observational data sets, and possibly used to design future trials.
Dying patients would get drugs faster, and live longer, healthier lives, if we had better access both to information about clinical trials and to treatments for our diseases. The current system errs too far on the side of proving effectiveness and gives too little weight to speed itself, and to the number of people who die while waiting for new treatments. Patients like me, who have fatal diagnoses and have already failed “standard of care” therapies, routinely die while waiting for new or improved treatments. Moreover, patients like me have little to lose: because cancer is going to kill us anyway, many of us would rather roll the dice on an unproven treatment, or a treatment with early, incomplete data showing its potential to help, than wait to be killed by cancer. Right now, however, the FDA does not consider how many patients will die while waiting for new treatments. Instead, the FDA requires that drugmakers prove both the safety and efficacy of treatments before allowing any approval whatsoever.
As for improving ClinicalTrials.gov, we have several ideas:
First, the NIH should hire programmers with UX experience and should be empowered to pay market rates to software developers who have designed websites for, say, Amazon or Shein. ClinicalTrials.gov was designed to be a study registry, but it needs an overhaul to be more useful to actual patients and doctors.
Second, UX is far from the only problem; the data itself is a problem. For example, the NIH could require a patient-friendly summary of each trial; it could standardize the names of drugs and conditions so that patients and doctors see consistent and complete search results; and it could use a consistent, machine-readable format for inclusion and exclusion criteria.
We are aware of one patient group that, in trying to build an interface to ClinicalTrials.gov, found a remarkable degree of inconsistency and chaos: “We have analyzed the inclusion and exclusion criteria for all the cancer-related trials from clinicaltrials.gov that are recruiting for interventional studies (approximately 8,500 trials). Among these trials, there are over 1,000 ways to indicate the patient must not be pregnant during a trial, and another 1,000 ways to indicate that the patient must use adequate birth control.” ClinicalTrials.gov should use a Domain Specific Language that standardizes all of these terms and conditions, so that patients and doctors can find relevant trials with 100x less effort.
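As an illustration of what such a machine-readable format might look like, here is a minimal sketch under the assumption of a shared vocabulary of patient attributes. The field names, operators, and criteria values are invented for this example and do not reflect any existing ClinicalTrials.gov schema.

```python
# Minimal sketch: structured, machine-readable eligibility criteria instead of
# free text. Field names and codes are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Criterion:
    field: str        # a standardized attribute, e.g. "prior_lines_of_therapy"
    op: str           # one of: "==", "<=", ">=", "in", "not_in"
    value: object
    include: bool     # True = inclusion criterion, False = exclusion criterion

# One trial's criteria, expressed once in a shared vocabulary rather than
# "whatever language whoever is typing feels like using."
CRITERIA = [
    Criterion("diagnosis_code", "in", {"HEAD_NECK_CANCER"}, include=True),
    Criterion("prior_lines_of_therapy", "<=", 2, include=True),
    Criterion("ecog_status", "<=", 1, include=True),
    Criterion("pregnant", "==", True, include=False),
]

def matches(criterion: Criterion, patient: dict) -> bool:
    """Check a single criterion against a patient's standardized attributes."""
    actual = patient.get(criterion.field)
    if criterion.op == "==":
        met = actual == criterion.value
    elif criterion.op == "<=":
        met = actual is not None and actual <= criterion.value
    elif criterion.op == ">=":
        met = actual is not None and actual >= criterion.value
    elif criterion.op == "in":
        met = actual in criterion.value
    else:  # "not_in"
        met = actual not in criterion.value
    # Inclusion criteria must be met; exclusion criteria must not be.
    return met if criterion.include else not met

def eligible(patient: dict) -> bool:
    return all(matches(c, patient) for c in CRITERIA)

patient = {"diagnosis_code": "HEAD_NECK_CANCER", "prior_lines_of_therapy": 1,
           "ecog_status": 1, "pregnant": False}
print(eligible(patient))  # True under these illustrative criteria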
Third, the long-run goal should be a real-time, searchable database that matches patients to studies using EMR data and allows doctors to see what spots are available and where. Pharmaceutical and biotech companies should be required to contribute up-to-date information on all extant clinical trials on a regular basis (e.g., monthly).
Conclusion
Indeed, we need a national clinical trial database that EMRs can connect to, in the style of Epic's Care Everywhere. A patient would sign a release to share their information, just as Care Everywhere allows hospital A to download a patient's records from hospital B once the patient signs a release. Trials with open slots could mark themselves as “recruiting” and update in real time. A doctor could press a button and identify potential open studies. A patient available for a trial could flag their EMR profile as actively searching, allowing clinical trial sites to browse patients in their region who are looking for a study. Access to that patient's EMR would let those sites scan for eligibility.
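Here is a minimal sketch of that matching flow, assuming hypothetical registry fields and a consent flag; this is not Epic's Care Everywhere interface or any existing registry API.

```python
# Minimal sketch: a consented patient profile from an EMR checked against trials
# that have flagged themselves as recruiting with open slots. All names, fields,
# and records are hypothetical.
TRIAL_REGISTRY = [
    {"id": "NCT-X", "recruiting": True, "open_slots": 3,
     "sites": {"MD", "VA"}, "condition": "HEAD_NECK_CANCER"},
    {"id": "NCT-Y", "recruiting": False, "open_slots": 0,
     "sites": {"CA"}, "condition": "HEAD_NECK_CANCER"},
]

def candidate_trials(patient: dict) -> list[str]:
    """Return IDs of recruiting trials with open slots near a consented patient."""
    if not patient.get("consented_to_share_emr"):
        return []  # no signed release, no matching
    return [
        t["id"] for t in TRIAL_REGISTRY
        if t["recruiting"]
        and t["open_slots"] > 0
        and t["condition"] == patient["condition_code"]
        and patient["state"] in t["sites"]
    ]

patient = {"consented_to_share_emr": True,
           "condition_code": "HEAD_NECK_CANCER", "state": "MD"}
print(candidate_trials(patient))  # ['NCT-X']
```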
This would be easy to do. OKCupid can do it. The tech exists. Finding a clinical trial already feels a bit like online dating, except if you get the wrong match, you die.
This memo was produced as part of the Federation of American Scientists and Good Science Project sprint. Find more ideas at Good Science Project x FAS.
The FDA has created a variety of programs that have promising-sounding names: “Currently, four programs—the fast track, breakthrough therapy, accelerated approval, and priority review designations—provide faster reviews and/or use surrogate endpoints to judge efficacy. However, published descriptions […] do not indicate any differences in the statistical thresholds used in these programs versus the standard approval process, nor do they mention adapting these thresholds to the severity of the disease.” The problem is that these programs do not appear to do much to actually accelerate getting drugs to patients. MCLA-158 is an example of the problem: the drug has been shown to be safe and effective, and yet Merus thinks it needs a Phase 3 trial to get it past the FDA and to patients.
Bringing Transparency to Federal R&D Infrastructure Costs
There is an urgent need to manage the escalating costs of federal R&D infrastructure and the increasing risk that failing facilities pose to the scientific missions of the federal research enterprise. Many of the laboratories and research support facilities operating under the federal research umbrella are near or beyond their life expectancy, creating significant safety hazards for federal workers and local communities. Unfortunately, the nature of the federal budget process forces agencies into a position where the actual cost of operations is not transparent in agency budget requests to OMB before becoming further obscured to appropriators, leading to potential appropriations disasters (including an approximately 60% cut to National Institute of Standards and Technology (NIST) facilities in 2024 after the agency's challenges became newsworthy). Providing both Congress and OMB with a complete accounting of the actual costs of agency facilities may break the gamification of budget requests and help the government prioritize infrastructure investments.
Challenge and Opportunity
Recent reports by the National Research Council and the National Science and Technology Council, including the congressionally-mandated Quadrennial Science and Technology Review, have highlighted the dire state of federal facilities. Maintenance backlogs have ballooned in recent years, forcing some agencies to shut down research activities in strategic R&D domains, including Antarctic research and standards development. At NIST, facility outages caused by failing steam pipes, electrical systems, and black mold have reduced research productivity by 10-40 percent. NASA and NIST have both reported that their maintenance backlogs now exceed $3 billion. The Department of Defense forecasts that bringing its buildings up to modern standards would cost approximately $7 billion, “putting the military at risk of losing its technological superiority.” The shutdown of many Antarctic science operations and the collapse of the Arecibo Observatory have been placed in stark contrast with the People's Republic of China opening rival and more capable facilities in both research domains. In the late 2010s, Senate staffers were often forced to call national laboratories directly to ask what it would actually cost for the country to fully fund a particular large science activity.
This memo does not suggest that the government should continue to fund old or outdated facilities; merely that there is a significant opportunity for appropriators to understand the actual cost of our legacy research and development ecosystem, initially ramped up during the Cold War. Agencies should be able to provide a straight answer to Congress about what it would cost to operate their inventory of facilities. Likewise, Congress should be able to decide which facilities should be kept open, where placing a facility on life support is acceptable, and which facilities should be shut down. The cost of maintaining facilities should also be transparent to the Office of Management and Budget so examiners can help the President make prudent decisions about the direction of the federal budget.
The National Science and Technology Council’s mandated research and development infrastructure report to Congress is a poor delivery vehicle. As coauthors of the 2024 research infrastructure report, we can attest to the pressure within the White House to provide a positive narrative about the current state of play, as well as to OMB’s reluctance to suggest, outside the budget process, that additional funding is needed to maintain our inventory of facilities. It would be much easier for agencies, which already have a sense of what it costs to maintain their operations, to provide that information directly to appropriators (as opposed to a sanitized White House report to an authorizing committee that may or may not have jurisdiction over all the agencies covered in the report), assuming that there is even an Assistant Director for Research Infrastructure serving in OSTP to complete the America COMPETES mandate. Current government employees suggest that the Trump Administration intends to discontinue the Research and Development Infrastructure Subcommittee.
Agencies may be concerned that providing such cost transparency to Congress could result in greater micromanagement over which facilities receive which investments. Given the relevance of these facilities to their localities (including both economic benefits and environmental and safety concerns) and the role that legacy facilities can play in training new generations of scientists, this is a matter that deserves public debate. In our experience, appropriations staff weigh a wider range of factors relevant to investment decisions. Further, accountability for macro-level budget decisions should ultimately fall on the decisionmakers who choose whether or not to prioritize investments in both our scientific leadership and the health and safety of the federal workforce and nearby communities. Facilities managers, who are forced to make agonizing choices in extremely resource-constrained environments, currently bear most of that burden.
Plan of Action
Recommendation 1: Appropriations committees should require from agencies annual reports on the actual cost of completed facilities modernization, operations, and maintenance, including utility distribution systems.
Transparency is the only way that Congress and OMB can get a grip on the actual cost of running our legacy research infrastructure. This should be done through annual reports to the relevant appropriators on the actual cost of facilities operations and maintenance. Other costs that should be accounted for include obligations to international facilities (such as ITER) and facilities and collections that are paid for by grants (such as scientific collections that support the bioeconomy). Transparent accounting of facilities costs against what an administration chooses to prioritize in the annual President’s Budget Request may help foster meaningful dialogue between agencies, examiners, and appropriations staff.
The reports from agencies should describe the work done in each building and the impact of disruption. Using NIST as an example, the Radiation Physics Building (still without the funding to complete its renovation) is crucial to national security and the medical community. If it were to go down (or away), every medical device in the United States that uses radiation would be decertified within 6 months, creating a significant single point of failure that cannot be quickly mitigated. Identifying such functions may also reveal duplicate efforts across agencies.
The costs of utility systems should be included because of the broad impacts that supporting infrastructure failures can have on facility operations. At NIST’s headquarters campus in Maryland, the entire underground utility distribution system is beyond its designed lifespan and suffering nonstop issues. The Central Utility Plant (CUP), which creates steam and chilled water for the campus, is in a similar state. The CUP’s steam distribution system will reach complete end of life (per forensic testing of failed pipes and components) in less than a decade, and potentially as soon as 2030. If work doesn’t start within the next year (by early 2026), the system is likely to go down. This would result in a complete loss of heat and temperature control on the campus, which is particularly concerning given the sensitivity of modern experiments and calibrations to changes in heat and humidity. Less than a decade ago, NASA was forced to delay the launch of a satellite after NIST’s steam system was down for a few weeks and calibrations required for the satellite couldn’t be completed.
Given the varying business models for infrastructure around the Federal government, standardization of accounting and costs may be too great a lift–particularly for agencies that own and operate their own facilities (government owned, government operated, or GOGOs) compared with federally funded research and development centers (FFRDCs) operated by companies and universities (government owned, contractor operated, or GOCOs).
These reports should privilege modernization efforts, which, according to former federal facilities managers, should account for 80-90 percent of facility revitalization while also delivering new capabilities that help our national labs maintain (and often re-establish) their world-leading status. Such reporting would also serve as a potential facilities inventory, giving appropriators the ability to de-conflict investments as necessary.
It would be far easier for agencies to simply provide an itemized list of each of their facilities, the current maintenance backlog, and projected costs for the next fiscal year to both Congress and OMB at the time of annual budget submission to OMB. This should include the total cost of operating facilities, projected maintenance costs, and any costs needed to bring a federal facility up to relevant safety and environmental codes (many are not up to code). In order to foster public trust, these reports should include an assessment of systems that are particularly at risk of failure, the risk to the agency’s operations, and their impact on surrounding communities, federal workers, and organizations that use those laboratories. Fatalities and incidents that affect local communities, particularly at laboratories intended to improve public safety, are not an acceptable cost of doing business. These reports should be made public (except for those details necessary to preserve classified activities).
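For illustration only, a minimal sketch of what one machine-readable line item in such a report might contain; the field names and figures are assumptions for this memo, not an existing OMB or agency reporting format.

```python
# Minimal sketch of an itemized facility report record. All fields and values
# are illustrative assumptions, not a mandated reporting schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class FacilityReport:
    facility: str
    annual_operating_cost: float     # dollars, current fiscal year
    maintenance_backlog: float       # deferred maintenance, dollars
    projected_next_fy_cost: float    # dollars
    code_compliance_cost: float      # cost to meet safety/environmental codes
    at_risk_systems: list[str]       # systems judged likely to fail
    mission_impact_if_lost: str      # plain-language description for appropriators

report = FacilityReport(
    facility="Example Utility Plant",
    annual_operating_cost=12_000_000,
    maintenance_backlog=45_000_000,
    projected_next_fy_cost=14_500_000,
    code_compliance_cost=6_000_000,
    at_risk_systems=["steam distribution", "chilled water"],
    mission_impact_if_lost="Loss of temperature control for calibration labs",
)

# The same machine-readable record could go to both OMB and Congress.
print(json.dumps(asdict(report), indent=2))
```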
Recommendation 2: Congress should revisit the idea of a special building fund from the General Services Administration (GSA) from which agencies can draw loans for revitalization.
During the first Trump Administration, Congress considered the establishment of a special building fund at the GSA from which agencies could draw loans at very low interest (covering the staff time of GSA officials managing the program). This would give agencies the ability to address urgent or emergency needs that arise outside the regular appropriations cycle. This approach has already been validated by the Government Accountability Office for certain facilities, which found that “Access to full, upfront funding for large federal capital projects—whether acquisition, construction, or renovation—could save time and money.” Major international scientific organizations that operate large facilities, including CERN (the European Organization for Nuclear Research), have a similar ability to take loans to pay for repairs, maintenance, or budget shortfalls, which helps them maintain financial stability and reduce the risk of escalating costs as a result of deferred maintenance.
Up-front funding for major projects enabled by access to GSA loans can also reduce expenditures in the long run. In the current budget environment, it is not uncommon for the cost of major investments to double due to inflation and doing the projects piecemeal. In 2010, NIST proposed a renovation of its facilities in Boulder with an expected cost of $76 million. The project, which is still not completed today, is now estimated to cost more than $450 million due to a phased approach unsupported by appropriations. Productivity losses as a result of delayed construction (or a need to wait for appropriations) may have compounding effects on industry that may depend on access to certain capabilities and harm American competitiveness, as described in the previous recommendation.
Conclusion
As the 2024 RDI Report points out, “Being a science superpower carries the burden of supporting and maintaining the advanced underlying infrastructure that supports the research and development enterprise.” Without a transparent accounting of costs, it is impossible for Congress to make prudent decisions about the future of that enterprise. Requiring agencies to provide complete information to both Congress and OMB at the beginning of each year’s budget process likely provides the best chance of addressing this challenge.
A Certification System for Third Party Climate Models to Support Local Planning and Flood Resilience
As the impacts of climate change worsen and become salient to more communities across the country, state and local planners need access to robust and replicable predictive models in order to effectively plan for emergencies like extreme flooding. However, planning agencies often lack the resources to build these models themselves. And models developed by federal agencies are often built on outdated data and are limited in their interoperability. Many planners have therefore begun turning to private-sector providers of models they say offer higher quality and more up-to-date information. But access to these models can be prohibitively expensive, and many remain “black boxes” as these providers rarely open up their methods and underlying data.
The federal government can support more proactive, efficient, and cost-effective resiliency planning by certifying predictive models to validate and publicly indicate their quality. Additionally, Congress and the new Presidential Administration should protect funding at agencies like FEMA, NOAA, and the National Institute of Standards and Technology (NIST), which have faced budget shortfalls in recent years or are currently facing staffing reductions and proposed budget cuts, to support the collection and sharing of high-quality and up-to-date information. A certification system and clearinghouse would enable state and local governments to more easily discern the quality and robustness of a growing number of available climate models. Ultimately, such measures could increase cost-efficiencies and empower local communities by supporting more proactive planning and the mitigation of environmental disasters that are becoming more frequent and intense.
Challenge and Opportunity
The United States experienced an unprecedented hurricane season in 2024. Even as hurricanes continued to affect states like Texas, Louisiana, and Florida, the effects of hurricanes and other climate-fueled storms also expanded to new geographies—including inland and northern regions like Asheville, North Carolina and Burlington, Vermont. Our nation’s emergency response systems can no longer keep up—the Federal Emergency Management Agency (FEMA) spent nearly half the agency’s disaster relief fund within the first two weeks of the 2025 fiscal year. More must be done to support proactive planning and resilience measures at state and local levels. Robust climate and flooding models are critical to planners’ abilities to predict the possible impacts of storms, hurricanes, and flooding, and to inform infrastructure updates, funding prioritization, and communication strategies.
Developing useful climate models requires large volumes of data and considerable computational resources, as well as time and data science expertise, making it difficult for already-strapped state and local planning agencies to build their own. Many global climate models have proven to be highly accurate, but planners must often integrate more granular data for these to be useful at local levels. And while federal agencies like FEMA, the National Oceanic and Atmospheric Administration (NOAA), and the Army Corps of Engineers make their flooding and sea level rise models publicly available, these models have limited predictive capacity, and the datasets they are built on are often outdated or contain large gaps. For example, priority datasets, such as FEMA’s Flood Insurance Rate Maps (FIRMs) and floodplain maps, are notoriously out of date or do not integrate accurate information on local drainage systems, preventing meaningful and broad public use. A lack of coordination across government agencies at various levels, low data interoperability, and variations in data formats and standards also further prevent the productive integration of climate and flooding data into planning agencies’ models, even when data are available. Furthermore, recent White House directives to downsize agencies, freeze funding, and in some cases directly remove information from federal websites, have made some public climate datasets, including FEMA’s, inaccessible and put many more at risk.
A growing private-sector market has begun to produce highly granular flooding models, but these are often cost-prohibitive for state and local entities to access. In addition, these models tend to be black boxes; their underlying methods are rarely publicly available, and thus are difficult or impossible to rigorously evaluate or reproduce. A 2023 article in the Arizona State Law Journal found widely varying levels of uncertainty involved in these models’ predictions and their application of different climate scenarios. And a report from the President’s Council of Advisors on Science and Technology also questioned the quality of these private industry models, and called on NOAA and FEMA to develop guidelines for measuring their accuracy.
To address these issues, public resources should be invested in enabling broader access to robust and replicable climate and flooding models, through establishment of a certification system and clearinghouse for models not developed by government agencies. Several realities make implementing this idea urgent. First, research predicts that even with aggressive and coordinated action, the impacts of hurricanes and other storms are likely to worsen (especially for already disadvantaged communities), as will the costs associated with their clean up. A 2024 U.S. Chamber of Commerce report estimates that “every $1 spent on climate resilience and preparedness saves communities $13 in damages, cleanup costs, and economic impact,” potentially adding up to billions of dollars in savings across the country. Second, flooding data and models may need to be updated to accommodate not only new scientific information, but also updates to built infrastructure as states and municipalities continue to invest in infrastructure upgrades. Finally, government agencies at all levels, as well as private sector entities, are already responding to more frequent or intensified flooding events. These agencies, as well as researchers and community organizations, already hold a wealth of data and knowledge that, if effectively integrated into robust and accessible models, could help vulnerable communities plan for and mitigate the worst impacts of flooding.
Plan of Action
Congress should direct the National Institute of Standards and Technology (NIST) to establish a certification system or stamp of approval for predictive climate and weather models, starting with flood models. Additionally, Congress should support NIST’s capacity to build and maintain such a system, as well as that of other agencies whose data are regularly integrated into climate models, including FEMA, NOAA, the Environmental Protection Agency (EPA), the National Aeronautics and Space Administration (NASA), the U.S. Geological Survey (USGS), and the Army Corps of Engineers. Congressional representatives can do this by imposing moratoria on Reductions in Force and opposing budget cuts imposed through the Department of Government Efficiency and the budget reconciliation process.
Following the publication of the Office of Science and Technology Policy’s Memorandum on “Ensuring Free, Immediate, and Equitable Access to Federally Funded Research,” agencies that fund or conduct research are now required to update their open access policies by the end of 2025 to make all federally funded publications and data publicly accessible. While this may help open up agency models, it cannot compel private organizations to make their models open or less expensive to access. However, federal agencies can develop guidance, standards, and a certification system to make it easier for state and local agencies and organizations to navigate what’s been called the “Wild West of climate modeling.”
A robust certification system would require both an understanding of the technical capabilities of climate models, as well as the modeling and data needs of resilience planners and floodplain managers. Within NIST, the Special Programs Office or Information Technology Laboratory could work with non-governmental organizations that already convene these stakeholders to gather input on what a certification system should consider and communicate. For example, the Association of State Floodplain Managers, American Flood Coalition, American Society of Adaptation Professionals, and American Geophysical Union are all well-positioned to reach researchers and planners across a range of geographies and capacities. Additionally, NIST could publish requests for information to source input more widely. Alternatively, NOAA’s National Weather Service or Office of Oceanic and Atmospheric Research could perform similar functions. However, this would require concerted effort on Congress’s part to protect and finance the agency and its relevant offices. In the face of impending budget cuts, it would benefit NIST to consult with relevant NOAA offices and programs on the design, scope, and rollout of such a system.
Gathered input could be translated into a set of minimum requirements and nice-to-have features of models, indicating, for example, proven accuracy or robustness, levels of transparency in the underlying data or source code, how up-to-date underlying data are, ease of use, or interoperability. The implementing agency could also look to other certification models such as the Leadership in Energy and Environmental Design (LEED) rating system, which communicates a range of performance indicators for building design. Alternatively, because some of the aforementioned features would be challenging to assess in the short term, a stamp of approval system would communicate that a model has met some minimum standard.
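As a rough illustration, a certification record and a pass/fail check might look something like the sketch below; the criteria names, thresholds, and example values are assumptions for this memo, not a proposed NIST standard.

```python
# Minimal sketch of a machine-readable certification record for a flood model
# and a simple "stamp of approval" check. All criteria and thresholds are
# illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class ModelCertification:
    model_name: str
    provider: str
    data_vintage_year: int           # how current the underlying data are
    source_code_available: bool      # transparency of methods
    validation_score: float          # e.g., skill against observed floods, 0-1
    interoperable_formats: list[str]

MINIMUM_REQUIREMENTS = {
    "data_vintage_year": 2020,       # underlying data no older than this
    "validation_score": 0.7,
}

def meets_minimum(cert: ModelCertification) -> bool:
    """Pass/fail check of a model record against the minimum requirements."""
    return (cert.data_vintage_year >= MINIMUM_REQUIREMENTS["data_vintage_year"]
            and cert.validation_score >= MINIMUM_REQUIREMENTS["validation_score"])

example = ModelCertification(
    model_name="Example Coastal Flood Model", provider="Example Vendor",
    data_vintage_year=2023, source_code_available=False,
    validation_score=0.82, interoperable_formats=["GeoTIFF", "NetCDF"],
)
print(meets_minimum(example))  # True under these illustrative thresholds
```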
Importantly, the design and maintenance of this system would be best led by a federal agency like NIST, rather than third-party actors, because NIST would be better positioned to coordinate efficiently with other agencies that collect and supply climate and other relevant data such as FEMA, USGS, EPA, and the Army Corps of Engineers. Moreover, there are likely to be cost efficiencies associated with integrating such a system into an existing agency program rather than establishing a new third-party organization whose long-term sustainability is not guaranteed. The fact that this system’s purpose would be to mediate trustworthy information and support the prevention of damage and harm to communities represented by the federal government also necessitates a higher level of accountability and oversight than a third-party organization could offer.
NIST could additionally build and host a clearinghouse or database of replicable models and results, as well as relevant contact information to make it easy for users to find reliable models and communicate with their developers. Ideally information would be presented for technical experts and professionals, as well as non-specialists. Several federal agencies currently host clearinghouses for models, evidence, and interventions, including the Environmental Protection Agency, Department of Labor, and Department of Health and Human Services, among many others. NIST could look to these to inform the goals, design, and structure of a climate model clearinghouse.
Conclusion
Establishing an objective and widely recognized certification standard for climate and weather models would support actors both within and outside of government to use a growing wealth of flooding and climate data for a variety of purposes. For example, state and local agencies could more accurately predict and plan for extreme flooding events more quickly and efficiently, and prioritize infrastructure projects and spending. And if successful, this idea could be adapted for other climate-related emergencies such as wildfire and extreme drought. Ultimately, public resources and data would be put to use to foster safer and more resilient communities across the country, and potentially save billions of dollars in damages, clean up efforts, and other economic impacts.
This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for, whoever takes office in 2025 and beyond.
PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.
A National Institute for High-Reward Research
The policy discourse about high-risk, high-reward research has been too narrow. When that term is used, people are usually talking about DARPA-style moonshot initiatives with extremely ambitious goals. Given the overly conservative nature of most scientific funding, there’s a fair appetite (and deservedly so) for creating new agencies like ARPA-H, and other governmental and private analogues.
The “moonshot” definition, however, omits other types of high-risk, high-reward research that are just as important for the government to fund—perhaps even more so, because they are harder for anyone else to support or even to recognize in the first place.
Far too many scientific breakthroughs and even Nobel-winning discoveries had trouble getting funded at the outset. The main reason at the time was that the researcher’s idea seemed irrelevant or fanciful. For example, CRISPR was originally thought to be nothing more than a curiosity about bacterial defense mechanisms.
Perhaps ironically, the highest rewards in science often come from the unlikeliest places. Some of our “high reward” funding should therefore be focused on projects, fields, ideas, theories, etc. that are thought to be irrelevant, including ideas that have gotten turned down elsewhere because they are unlikely to “work.” The “risk” here isn’t necessarily technical risk, but the risk of being ignored.
Traditional funders are unlikely to create funding lines specifically for research that they themselves thought was irrelevant. Thus, we need a new agency that specializes in uncovering funding opportunities that were overlooked elsewhere. Judging from the history of scientific breakthroughs, the benefits could be quite substantial.
Challenge and Opportunity
There are far too many cases where brilliant scientists had trouble getting their ideas funded or even faced significant opposition at the time. For just a few examples (there are many others):
- The team that discovered how to manufacture human insulin applied for an NIH grant for an early stage of their work. The rejection notice said that the project looked “extremely complex and time-consuming,” and “appears as an academic exercise.”
- Katalin Karikó’s early work on mRNA was a key contributor to multiple Covid vaccines, and ultimately won the Nobel Prize. But she repeatedly got demoted at the University of Pennsylvania because she couldn’t get NIH funding.
- Carol Greider’s work on telomerase was rejected by an NIH panel on literally the same day that she won the Nobel Prize, on the grounds that she didn’t have enough preliminary data about telomerase.
- Francisco Mojica (who identified CRISPR while studying archaebacteria in the 1990s) has said, “When we didn’t have any idea about the role these systems played, we applied for financial support from the Spanish government for our research. The government received complaints about the application and, subsequently, I was unable to get any financial support for many years.”
- Peyton Rous’s early 20th century studies on transplanting tumors between purebred chickens were ridiculed at the time, but his work won the Nobel Prize over 50 years later, and provided the basis for other breakthroughs involving reverse transcription, retroviruses, oncogenes, and more.
One could fill an entire book with nothing but these kinds of stories.
Why do so many brilliant scientists struggle to get funding and support for their groundbreaking ideas? In many cases, it’s not because of any reason that a typical “high risk, high reward” research program would address. Instead, it’s because their research can be seen as irrelevant, too far removed from any practical application, or too contrary to whatever is currently trendy.
To make matters worse, the temptation for government funders is to opt for large-scale initiatives with a lofty goal like “curing cancer” or some goal that is equally ambitious but also equally unlikely to be accomplished by a top-down mandate. For example, the U.S. government announced a National Plan to Address Alzheimer’s Disease in 2012, and the original webpage promised to “prevent and effectively treat Alzheimer’s by 2025.” Billions have been spent over the past decade on this objective, but U.S. scientists are nowhere near preventing or treating Alzheimer’s yet. (Around October 2024, the webpage was updated and now aims to “address Alzheimer’s and related dementias through 2035.”)
The challenge is whether quirky, creative, seemingly irrelevant, contrarian science—which is where some of the most significant scientific breakthroughs originated—can survive in a world that is increasingly managed by large bureaucracies whose procedures don’t really have a place for that type of science, and by politicians eager to proclaim that they have launched an ambitious goal-driven initiative.
The answer that I propose: Create an agency whose sole raison d’etre is to fund scientific research that other agencies won’t fund—not for reasons of basic competence, of course, but because the research wasn’t fashionable or relevant.
The benefits of such an approach wouldn’t be seen immediately. The whole point is to allocate money to a broad portfolio of scientific projects, some of which would fail miserably but some of which would have the potential to create the kind of breakthroughs that, by definition, are unpredictable in advance. This plan would therefore require a modicum of patience on the part of policymakers. But over the longer term, it would likely lead to a number of unforeseeable breakthroughs that would make the rest of the program worth it.
Plan of Action
The federal government needs to establish a new National Institute for High-Reward Research (NIHRR) as a stand-alone agency, not tied to the National Institutes of Health or the National Science Foundation. The NIHRR would be empowered to fund the potentially high-reward research that goes overlooked elsewhere. More specifically, the aim would be to cast a wide net for:
- Researchers (akin to Katalin Karikó or Francisco Mojica) who are perfectly well-qualified but have trouble getting funding elsewhere;
- Research projects or larger initiatives that are seen as contrary to whatever is trendy in a given field;
- Research projects or larger initiatives that are seen as irrelevant (e.g., bacterial or animal research that is seen as unrelated to human health).
NIHRR should be funded at, say, $100 million per year as a starting point ($1 billion would be better). This is an admittedly ambitious proposal. It would mean either increasing federal scientific and R&D expenditure by that amount or reassigning existing funding (which would be politically unpopular). But it is a worthy objective and, indeed, should be seen as a starting point.
Significant stakeholders with an interest in a new NIHRR would obviously include universities and scholars who currently struggle for scientific funding. In a way, that stacks the deck against the idea, because the most politically powerful institutions and individuals might oppose anything that tampers with the status quo of how research funding is allocated. Nonetheless, there may be a number of high-status individuals (e.g., current Nobel winners) who would be willing to support this idea as something that would have aided their earlier work.
A new fund like this would also provide fertile ground for metascience experiments and other types of studies. Consider the striking fact that, as yet, there is virtually no rigorous empirical evidence on the relative strengths and weaknesses of top-down, strategically driven scientific funding versus funding that is more open to seemingly irrelevant, curiosity-driven research. With a new program for the latter, we could start to compare the results of that funding with those of equally situated researchers funded through the regular pathways.
Moreover, a common metascience proposal in recent years is to use a limited lottery to distribute funding, on the grounds that some funding is fairly random anyway and we might as well make it official. One possibility would be for part of the new program to be disbursed by lottery amongst researchers who met a minimum bar of quality and respectability and who had received a high enough score on “scientific novelty.” One could imagine developing an algorithm to make an initial assessment as well. Then we could compare the results of lottery-based funding versus decisions made by program officers versus algorithmic recommendations.
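A minimal sketch of how such a comparison could be set up, assuming hypothetical proposal scores and a simple quality bar; the arm names and thresholds are illustrative only.

```python
# Minimal sketch: proposals that clear a quality bar are randomly assigned to
# one of three funding arms so their outcomes can later be compared.
import random

proposals = [
    {"id": f"P{i}", "quality_ok": True, "novelty_score": random.uniform(0, 1)}
    for i in range(1, 101)
]

ARMS = ["lottery", "program_officer", "algorithm"]

def assign_arms(proposals: list[dict], novelty_bar: float = 0.6,
                seed: int = 0) -> dict[str, list[str]]:
    """Randomly split qualifying proposals across the three evaluation arms."""
    rng = random.Random(seed)
    eligible = [p for p in proposals
                if p["quality_ok"] and p["novelty_score"] >= novelty_bar]
    rng.shuffle(eligible)
    assignment = {arm: [] for arm in ARMS}
    for i, p in enumerate(eligible):
        assignment[ARMS[i % len(ARMS)]].append(p["id"])
    return assignment

arms = assign_arms(proposals)
print({arm: len(ids) for arm, ids in arms.items()})
```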
Conclusion
A new line of funding like the National Institute for High-Reward Research (NIHRR) could drive innovation and exploration by funding the potentially high-reward research that goes overlooked elsewhere. This would elevate worthy projects with unknown outcomes so that unfashionable or unpopular ideas can be explored. Funding these projects would have the added benefit of offering many opportunities to build in metascience studies from the outset, which is easier than retrofitting projects later.
This memo was produced as part of the Federation of American Scientists and Good Science Project sprint. Find more ideas at Good Science Project x FAS.
Absolutely, but that is also true for the current top-down approach of announcing lofty initiatives to “cure Alzheimer’s” and the like. Beyond that, the whole point of a true “high-risk, high-reward” research program should be to fund a large number of ideas that don’t pan out. If most research projects succeed, then it wasn’t a “high-risk” program after all.
Again, that would be a sign of potential success. Many of history’s greatest breakthroughs were mocked for those exact reasons at the time. And yes, some of the research will indeed be irrelevant or silly. That’s part of the bargain here. You can’t minimize both Type I and Type II errors (that is, false positives and false negatives) at the same time. If we want to open the door to more research that would have previously been rejected on overly stringent grounds, then we also open the door to research that would have been correctly rejected on those grounds. That’s the price of being open to unpredictable breakthroughs.
How to evaluate success is a sticking point here, as it is for most of science. The traditional metrics (citations, patents, etc.) would likely be misleading, at least in the short-term. Indeed, as discussed above, there are cases where enormous breakthroughs took a few decades to be fully appreciated.
One simple metric in the shorter term would be something like this: “How often do researchers send in progress reports saying that they have been tackling a difficult question, and that they haven’t yet found the answer?” Instead of constantly promising and delivering success (which is often achieved by studying marginal questions and/or exaggerating results), scientists should be incentivized to honestly report on their failures and struggles.
Digital Product Passports: Transforming America’s Linear Economy to Combat Waste, Counterfeits, and Supply Chain Vulnerabilities
The U.S. economy is being held back by outdated, linear supply chains that waste valuable materials, expose businesses to counterfeits, and limit consumer choice. American companies lose billions each year to fraudulent goods—everything from fake pharmaceuticals to faulty electronics—while consumers are left in the dark about what they’re buying. At the same time, global disruptions like the COVID-19 pandemic revealed just how fragile and opaque our supply chains really are, especially in critical industries. Without greater transparency and accountability, the U.S. economy will remain vulnerable to these risks, stifling growth and innovation while perpetuating inequities and environmental harm.
A shift toward more circular, transparent systems would not only reduce waste and increase efficiency, but also unlock new business models, strengthen supply chain resilience, and give consumers better, more reliable information about the products they choose. Digital Product Passports (DPP) – standardized digital records that contain key information about a product’s origin, materials, lifecycle, and authenticity – are a key tool that will help the United States achieve these goals.
The administration should establish a comprehensive Digital Product Passport Initiative that creates the legal, technical, and organizational frameworks for businesses to implement decentralized digital passports for their products while ensuring consumer ownership rights, supply chain integrity, and international interoperability. This plan should consider which entities provide up-front investment until the benefits of a digital product passport (DPP) are realized.
Challenge and Opportunity
The United States faces an urgent sustainability challenge driven by its linear economic model, which prioritizes resource extraction, production, and disposal over reuse and recycling. This approach has led to severe environmental degradation, excessive waste generation, and unsustainable resource consumption, with marginalized communities—often communities of color and low-income areas—bearing the brunt of the damage. From toxic pollution to hazardous waste dumps, these populations are disproportionately affected, exacerbating environmental injustice. If this trajectory continues, the U.S. will not only fall short of its climate commitments but also deepen existing economic inequities. To achieve a sustainable future, the nation must transition to a more circular economy, where resources are responsibly managed, reused, and kept in circulation, rather than being discarded after a single use.
At the same time, the U.S. is contending with widespread counterfeiting and fragile supply chains that threaten both economic security and public health. Counterfeit goods, from unsafe pharmaceuticals to faulty electronics, flood the market, endangering lives and undermining consumer confidence, while costing the economy billions in lost revenue. Furthermore, the COVID-19 pandemic exposed deep weaknesses in global supply chains, particularly in critical sectors like healthcare and technology, leading to shortages that disproportionately affected vulnerable populations. These opaque and fragmented supply chains allow counterfeit goods to flourish and make it difficult to track and verify the authenticity of products, leaving businesses and consumers at risk.
Achieving true sustainability in the United States requires a shift to item circularity, where products and materials are kept in use for as long as possible through repair, reuse, and recycling. This model not only minimizes waste but also reduces the demand for virgin resources, alleviating the environmental pressures created by the current linear economy. Item circularity helps to close the loop, ensuring that products at the end of their life cycles re-enter the economy rather than ending up in landfills. It also promotes responsible production and consumption by making it easier to track and manage the flow of materials, extending the lifespan of products, and minimizing environmental harm. By embracing circularity, industries can cut down on resource extraction, reduce greenhouse gas emissions, and mitigate the disproportionate impact of pollution on marginalized communities.
One of the most powerful tools to facilitate this transition is the digital product passport (DPP). A DPP is a digital record that provides detailed information about a product’s entire life cycle, including its origin, materials, production process, and end-of-life options like recycling or refurbishment. With this information easily accessible, consumers, businesses, and regulators can make informed decisions about the use, maintenance, and eventual disposal of products. DPPs enable seamless tracking of products through supply chains, making it easier to repair, refurbish, or recycle items. This ensures that valuable materials are recovered and reused, contributing to a circular economy. Additionally, DPPs empower consumers by offering transparency into the sustainability and authenticity of products, encouraging responsible purchasing, and fostering trust in both the products and the companies behind them.
In addition to promoting circularity, digital product passports (DPPs) are a powerful solution for combating counterfeits and ensuring supply chain integrity. In 2016, counterfeit and pirated products represented an estimated $509 billion, or 3.3%, of world trade. By assigning each product a unique digital identifier, a DPP enables transparent and verifiable tracking of goods at every stage of the supply chain, from raw materials to final sale. This transparency makes it far more difficult for counterfeit products to infiltrate the market, as every legitimate product can be traced back to its original manufacturer with a clear, tamper-proof digital record. In industries where counterfeiting poses serious safety and financial risks—such as pharmaceuticals, electronics, and luxury goods—DPPs provide a critical layer of protection, ensuring consumers receive authentic products and helping companies safeguard their brands from fraud.
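To make the idea of a tamper-proof record concrete, here is a minimal sketch of a passport record protected by a content hash. The field names are assumptions for this memo rather than a finalized DPP standard, and a production system would anchor such hashes in a verification registry or distributed ledger rather than relying on a single local check.

```python
# Minimal sketch of a digital product passport record with a content hash for
# tamper evidence. Field names are illustrative assumptions only.
import hashlib
import json

def passport_record(product_id: str, manufacturer: str, origin: str,
                    materials: list[str], lifecycle_events: list[str]) -> dict:
    body = {
        "product_id": product_id,              # unique digital identifier
        "manufacturer": manufacturer,
        "origin": origin,
        "materials": materials,
        "lifecycle_events": lifecycle_events,  # e.g., manufacture, sale, repair
    }
    # Any later edit to the body changes this digest, making tampering visible
    # when checked against the manufacturer's published value.
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return {"body": body, "sha256": digest}

def verify(record: dict) -> bool:
    """Recompute the digest and compare it to the stored value."""
    expected = hashlib.sha256(
        json.dumps(record["body"], sort_keys=True).encode()
    ).hexdigest()
    return expected == record["sha256"]

dpp = passport_record("SN-0001", "Example Co.", "USA",
                      ["recycled aluminum"], ["manufactured 2025-01-15"])
print(verify(dpp))  # True until any field in the body is altered
```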
Moreover, DPPs offer real-time insights into supply chain operations, identifying vulnerabilities or disruptions more quickly. This allows businesses to respond to issues such as production delays, supplier failures, or the introduction of fraudulent goods before they cause widespread damage. With greater visibility into where products are sourced, produced, and transported, companies can better manage their supply chains, ensuring that products meet regulatory standards and maintaining the integrity of goods as they move through the system. This level of traceability strengthens trust between businesses, consumers, and regulators, ultimately creating more resilient and secure supply chains.
Beyond sustainability and counterfeiting, digital product passports (DPPs) offer transformative potential in four additional key areas:
- First, they enhance compliance and regulatory oversight by providing clear, accessible records of a product’s materials, production methods, and supply chain journey, helping industries meet environmental, labor, and safety standards.
- Second, DPPs strengthen supply chain risk mitigation and resilience by improving real-time visibility and accountability, allowing businesses to detect and address disruptions or vulnerabilities faster.
- Third, they empower informed consumer choices and consumer protection by offering transparency into a product’s origin, sustainability, and authenticity, enabling people to make ethical, safe purchasing decisions.
- Finally, DPPs fuel data-driven innovation and new business models by generating insights that can inform better product design, maintenance strategies, and circular economy opportunities, such as take-back programs or leasing services. In these ways, DPPs act as a versatile tool that not only addresses immediate challenges but also positions industries for long-term, sustainable growth.
Plan of Action
The administration should establish a comprehensive Digital Product Passport Initiative that creates the legal, technical, and organizational frameworks for businesses to implement decentralized digital passports for their products while ensuring consumer ownership rights, supply chain integrity, and international interoperability. This plan should consider which entities provide up-front investment until the benefits of DPP are realized.
Recommendation 1. Legal Framework Development (Lead: White House Office of Science and Technology Policy)
The foundation of any successful federal initiative must be a clear legal framework that establishes authority, defines roles, and ensures enforceability. The Office of Science and Technology Policy is uniquely positioned to lead this effort given its cross-cutting mandate to coordinate science and technology policy across federal agencies and its direct line to the Executive Office of the President.
- Draft executive order establishing federal DPP program authority
- Coordinate with Department of Commerce (DOC) and Environmental Protection Agency (EPA) to identify rulemaking authority, engaging Congress as needed
- Define enforcement mechanisms and penalties
- Coordinate with DOC to define legal requirements for DPP data portability
- Establish liability framework for DPP data accuracy
- Create legal framework for consumer DPP ownership rights, engaging Congress as needed
- Identify the role of Congress where needed; for example, in defining rulemaking authorities, DPP data portability requirements, and consumer DPP ownership rights
- Timeline: First 9 months
Recommendation 2. Product Category Definition & Standards Development (Lead: DOC/NIST)
The success of the DPP initiative depends on clear, technically sound standards that define which products require passports and what information they must contain. This effort must consider which industries and products will benefit most from DPPs, since goods of varying value will see different returns on the investment DPPs require. NIST, as the nation’s lead standards body with deep expertise in digital systems and measurement science, is the natural choice to lead this critical definitional work.
- Establish an interagency working group led by NIST to define priority product categories
- Develop technical standards for DPP data structure and interoperability
- Timeline: First 6 months
Recommendation 3. Consumer Rights & Privacy Framework (Lead: FTC Bureau of Consumer Protection)
A decentralized DPP system must protect consumer privacy while ensuring consumers maintain control over the digital passports of products they own. The FTC’s Bureau of Consumer Protection, with its statutory authority to protect consumer interests and experience in digital privacy issues, is best equipped to develop and enforce these critical consumer protections.
- Define consumer DPP ownership rights and transfer mechanisms
- Establish privacy standards for DPP data
- Develop consumer access and control protocols
- Create standards for consumer authorization of third-party DPP access
- Define requirements for consumer notification of DPP changes
- Timeline: 12-18 months
Recommendation 4. DPP Architecture & Verification Framework (Lead: GSA Technology Transformation Services)
A decentralized DPP system requires robust technical architecture that enables secure data storage, seamless transfers, and reliable verification across multiple private databases. GSA’s Technology Transformation Services, with its proven capability in building and maintaining federal digital infrastructure and its experience in implementing emerging technologies across government, is well-equipped to design and oversee this complex technical ecosystem.
- Define methodology for storing and verifying DPPs
- Develop API standards for industry integration
- Ensure cybersecurity protocols meet NIST standards
- Implement blockchain or distributed ledger technology for traceability
- Develop standards for DPP transfer between product clouds
- Create protocols for consumer DPP ownership transfer
- Establish verification registry for authorized product clouds
- Define minimum security requirements for private DPP databases
- Timeline: 12-18 months
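As one way to picture the storage and verification methodology called for in Recommendation 4, the sketch below checks a presented passport against the fingerprint held by an authorized product cloud, reusing the hypothetical `DigitalProductPassport` record from the earlier sketch. The `ProductCloudRegistry` class and `verify_passport` function are assumptions for illustration; a production system might instead anchor fingerprints in a distributed ledger and expose verification through the API standards this recommendation calls for.

```python
# Illustrative verification flow (assumptions only): an authorized product
# cloud stores the fingerprint recorded at manufacture; a verifier recomputes
# the fingerprint of the passport it is shown and compares the two.
from typing import Dict, Optional

class ProductCloudRegistry:
    """Stand-in for an authorized product cloud; maps DPP IDs to fingerprints."""
    def __init__(self) -> None:
        self._fingerprints: Dict[str, str] = {}

    def register(self, dpp_id: str, fingerprint: str) -> None:
        self._fingerprints[dpp_id] = fingerprint

    def lookup(self, dpp_id: str) -> Optional[str]:
        return self._fingerprints.get(dpp_id)

def verify_passport(passport, registry: ProductCloudRegistry) -> bool:
    """True only if the passport is registered and unaltered."""
    expected = registry.lookup(passport.dpp_id)
    return expected is not None and expected == passport.fingerprint()

# Usage with the passport sketched earlier:
registry = ProductCloudRegistry()
registry.register(passport.dpp_id, passport.fingerprint())
assert verify_passport(passport, registry)       # authentic, unmodified record
passport.manufacturer = "Counterfeit Labs"        # tamper with the record
assert not verify_passport(passport, registry)    # verification now fails
```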
Recommendation 5. Industry Engagement & Compliance Program (Lead: DOC Office of Business Liaison)
Successful implementation of DPPs requires active participation and buy-in from the private sector, as businesses will be responsible for creating and maintaining their product clouds. The DOC Office of Business Liaison, with its established relationships across industries and experience in facilitating public-private partnerships, is ideally suited to lead this engagement and ensure that implementation guidelines meet both government requirements and business needs.
- Create industry advisory board with representatives from key sectors
- Develop compliance guidelines and technical assistance programs
- Establish pilot programs with volunteer companies
- Partner with trade associations for outreach and education
- Develop guidelines for product cloud certification
- Create standards for DPP ownership transfer during resale
- Establish protocols for managing orphaned DPPs
- Timeline: Ongoing from months 3-24
Recommendation 6. Supply Chain Verification System (Lead: Customs and Border Protection)
Digital Product Passports must integrate seamlessly with existing import/export processes to effectively combat counterfeiting and ensure supply chain integrity. Customs and Border Protection, with its existing authority over imports and expertise in supply chain security, is uniquely positioned to incorporate DPP verification into its existing systems and risk assessment frameworks.
- Integrate DPP verification into existing Customs and Border Protection systems
- Develop automated scanning and verification protocols
- Create risk assessment frameworks for import screening
- Timeline: 18-24 months
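To illustrate how DPP verification could feed into import risk screening, the toy rule below, building on the hypothetical registry and `verify_passport` function sketched under Recommendation 4, flags a shipment when its claimed passport fails verification or its declared attributes do not match the passport on record. The fields and flag labels are invented for illustration and have no connection to actual CBP systems.

```python
# Toy import-screening rule (illustrative assumptions only; not a CBP system).
from dataclasses import dataclass

@dataclass
class ShipmentDeclaration:
    dpp_id: str
    declared_manufacturer: str
    declared_model: str

def screen_shipment(declaration: ShipmentDeclaration, passport, registry) -> str:
    """Return a coarse risk flag based on DPP verification and attribute match."""
    if passport is None or not verify_passport(passport, registry):
        return "HOLD: passport missing or failed verification"
    if (declaration.declared_manufacturer != passport.manufacturer
            or declaration.declared_model != passport.product_model):
        return "REVIEW: declaration does not match passport"
    return "CLEAR"
```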
Recommendation 7. Sustainability Metrics Integration (Lead: EPA Office of Pollution Prevention)
For DPPs to meaningfully advance sustainability goals, they must capture standardized, verifiable environmental impact data throughout product lifecycles. The EPA’s Office of Pollution Prevention brings decades of expertise in environmental assessment and verification protocols, making it the ideal leader for developing and overseeing these critical sustainability metrics.
- Define required environmental impact data points
- Develop lifecycle assessment standards
- Create verification protocols for environmental claims
- Timeline: 12-18 months
Recommendation 8. International Coordination (Lead: State Department Bureau of Economic and Business Affairs)
The global nature of supply chains requires that U.S. DPPs be compatible with similar initiatives worldwide, particularly the EU’s DPP system. The State Department’s Bureau of Economic and Business Affairs, with its diplomatic expertise and experience in international trade negotiations, is best positioned to ensure U.S. DPP standards align with global frameworks while protecting U.S. interests.
- Survey and coordinate with similar efforts around the world (e.g., European Commission’s Digital Product Passport initiative)
- Engage with WTO to ensure compliance with trade rules
- Develop framework for international data sharing
- Develop protocols for cross-border DPP transfers
- Establish international product cloud interoperability standards
- Timeline: Ongoing from months 6-24
Recommendation 9. Small Business Support Program (Lead: Small Business Administration)
The technical and financial demands of implementing DPPs could disproportionately burden small businesses, potentially creating market barriers. The Small Business Administration, with its mandate to support small business success and experience in providing technical assistance and grants, is the natural choice to lead efforts ensuring small businesses can effectively participate in the DPP system.
- Create technical assistance programs
- Provide implementation grants
- Develop simplified compliance pathways
- Timeline: Launch by month 18
Conclusion
Digital Product Passports represent a transformative opportunity to address two critical challenges facing the United States: the unsustainable waste of our linear economy and the vulnerability of our supply chains to counterfeiting and disruption. Through a comprehensive nine-step implementation plan led by key federal agencies, the administration can establish the frameworks necessary for businesses to create and maintain digital passports for their products while ensuring consumer rights and international compatibility. This initiative will not only advance environmental justice and sustainability goals by enabling product circularity, but will also strengthen supply chain integrity and security, positioning the United States as a leader in the digital transformation of global commerce.
Improve healthcare data capture at the source to build a learning health system
Studies estimate that only one in 10 recommendations made by major professional societies is supported by high-quality evidence. Medical care that is not evidence-based can result in unnecessary care that burdens public finances, harms patients, and damages trust in the medical profession. Clearly, we must do a better job of figuring out the right treatments, for the right patients, at the right time. To meet this challenge, it is essential to improve our ability to capture reusable data at the point of care, data that can then be used to improve care, discover new treatments, and make healthcare more efficient. To achieve this vision, we will need to shift financial incentives to reward data generation, change how we deliver care using AI, and continue improving the technological standards powering healthcare.
The Challenge and Opportunity of health data
Many have hailed health data collected during everyday healthcare interactions as the solution to some of these challenges. Congress directed the U.S. Food and Drug Administration (FDA) to increase the use of real-world data (RWD) for making decisions about medical products. However, FDA’s own records show that in the most recent year for which data are available, only two out of over one hundred new drugs and biologics approved by FDA were approved based primarily on real-world data.
A major problem is that our current model of healthcare doesn’t allow us to generate reusable data at the point of care. This is all the more frustrating because providers already face a high documentation burden, and patients report answering the same questions repeatedly, from both providers and questionnaires.
To expand: while large amounts of data are generated at the point of care, these data lack the quality, standardization, and interoperability needed to enable downstream functions such as clinical trials, quality improvement, and other ways of generating knowledge about how to improve outcomes.
By better harnessing the power of data, including results of care, we could finally build a learning healthcare system where outcomes drive continuous improvement and where healthcare value leads the way. There are, however, countless barriers to such a transition. To achieve this vision, we need to develop new strategies for the capture of high-quality data in clinical environments, while reducing the burden of data entry on patients and providers.
Efforts to achieve this vision follow a few basic principles:
- Data should be entered only once – by the person or entity most qualified to do so – and be used many times.
- Data capture should be efficient, so as to minimize the burden on those entering the data, allowing them to focus their time on doing what actually matters, like providing patient care.
- Data generated at the point of care needs to be accessible for appropriate secondary uses (quality improvement, trials, registries), while respecting patient autonomy and obtaining informed consent where required. Data should not be stuck in any one system but should flow freely between systems, enabling linkages across different data sources.
- Data need to be used to provide real value to patients and physicians. This is achieved by developing data visualizations, automated data summaries, and decision support (e.g., care recommendations, trial matching) that allow data users to spend less time searching for data and more time on analysis, problem solving, and patient care – and that help them see the value in entering data in the first place.
Barriers to capturing high-quality data at the point of care:
- Incentives: Providers and health systems are paid for performing procedures or logging diagnoses. As a result, documentation is optimized for maximizing reimbursement, but not for maximizing the quality, completeness, and accuracy of data generated at the point of care.
- Workflows: Influenced by the prevailing incentives, clinical workflows are not currently optimized to enable data capture at the point of care. Patients are often asked the same questions at multiple stages, and providers document the care provided as part of free-text notes, which are frequently required for billing but can make it challenging to find information.
- Technology: Shaped by incentives and workflows, technology has evolved to capture information in formats that frequently lack standardization and interoperability.
Plan of Action
Recommendation 1. Incentivize generation of reusable data at the point of care
Financial incentives are needed to drive the development of workflows and technology to capture high-quality data at the point of care. There are several payment programs already in existence that could provide a template for how these incentives could be structured.
For example, the Centers for Medicare and Medicaid Services (CMS) recently announced the Enhancing Oncology Model (EOM), a voluntary model for oncology providers caring for patients with common cancer types. As part of the EOM, providers are required to report certain data fields to CMS, including staging information and hormone receptor status for certain cancer types. These data fields are essential for clinical care, research, quality improvement, and ongoing care observation involving cancer patients. Yet, at present, these data are rarely recorded in a way that makes it easy to exchange and reuse this information. To reduce the burden of reporting this data, CMS has collaborated with the HHS Assistant Secretary for Technology Policy (ASTP) to develop and implement technological tools that can facilitate automated reporting of these data fields.
CMS also has a long-standing program that requires participation in evidence generation as a prerequisite for coverage, known as coverage with evidence development (CED). For example, hospitals that would like to provide Transcatheter Aortic Valve Replacement (TAVR) are required to participate in a registry that records data on these procedures.
To incentivize evidence generation as part of routine care, CMS should refine these programs and expand their use. This would involve strengthening collaborations across the federal government to develop technological tools for data capture, and increasing the number of payment models that require generation of data at the point of care. Ideally, these models should evolve to reward (1) high-quality chart preparation (assembly of structured data), (2) establishment of diagnoses and development of a care plan, and (3) tracking of outcomes. These payment policies are powerful tools because they incentivize the generation of reusable infrastructure that can be deployed for many purposes.
Recommendation 2. Improve workflows to capture evidence at the point of care
With the right payment models, providers can be incentivized to capture reusable data at the point of care. However, providers already report being crushed by the burden of documentation, and patients frequently fill out multiple questionnaires asking for the same information. To usher in the era of the learning health system (a system that continuously collects data to improve service delivery) without increasing the burden on providers and patients, we need to redesign how care is provided. Specifically, we must focus on approaches that integrate the generation of reusable data into the provision of routine clinical care.
While the advent of AI is an opportunity to do just that, current uses of AI have mainly focused on drafting documentation in free-text formats, essentially replacing human scribes. Instead, we need to figure out how to use AI to improve the usability of the resulting data. While it is not feasible to capture all data in a structured format for all patients, a core set of data is needed to provide high-quality and safe care. At a minimum, those elements should be captured in structured form as part of a basic core data set spanning disease types and health maintenance scenarios.
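One way to picture what a structured core data set could look like alongside AI-drafted documentation: define a minimal schema for the elements every encounter should capture, and treat the AI step as a function that must return data conforming to that schema rather than free text. The schema below is a rough sketch with invented field names, not a proposed standard, and the extraction function is left as a stub standing in for a validated model.

```python
# Illustrative sketch: a minimal structured "core data set" for an encounter,
# with an AI extraction step represented as a stub. Field names are invented
# for illustration and are not a proposed federal or USCDI standard.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CoreEncounterData:
    encounter_date: str                      # ISO 8601 date
    primary_diagnosis_code: Optional[str]    # e.g., an ICD-10-CM code
    medications_started: List[str] = field(default_factory=list)
    follow_up_plan: Optional[str] = None
    patient_reported_outcome_score: Optional[int] = None  # e.g., 0-10 scale

def extract_core_data(free_text_note: str) -> CoreEncounterData:
    """Stub for an AI step that reads a drafted note and returns structured,
    schema-conforming data for clinician review before it is saved."""
    # In practice this would call a validated extraction model; here we only
    # show the contract: free text in, structured core data out.
    raise NotImplementedError("placeholder for a validated extraction model")
```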
In order to accomplish this, NIH and the Advanced Research Projects Agency for Health (ARPA-H) should fund learning laboratories that develop, pilot, and implement new approaches for data capture at the point of care. These centers would leverage advances in human-centered design and artificial intelligence (AI) to revolutionize care delivery models for different types of care settings, ranging from outpatient to acute care and intensive care settings. Ideally, these centers would be linked to existing federally funded research sites that could implement the new care and discovery processes in ongoing clinical investigations.
The federal government already spends billions of dollars on grants for clinical research – why not use some of that funding to make clinical research more efficient and improve the experience of patients and physicians in the process?
Recommendation 3. Enable technology systems to improve data standardization and interoperability
Capturing high-quality data at the point of care is of limited utility if the data remains stuck within individual electronic health record (EHR) installations. Closed systems hinder innovation and prevent us from making the most of the amazing trove of health data.
We must create a vibrant ecosystem where health data can travel seamlessly between different systems while maintaining patient safety and privacy. This will enable an ecosystem of health data applications to flourish. HHS has recently made progress by agreeing to a unified approach to health data exchange, but several gaps remain. To address them, we must:
- Increase standardization of data elements: The federal government requires certain data elements to be standardized for electronic export from the EHR. However, this list of data elements, called the United States Core Data for Interoperability (USCDI), currently does not include enough elements for many uses of health data. HHS could rapidly expand the USCDI by working with federal partners and professional societies to determine which data elements are critical for national priorities, like vaccine safety and use, or protection from emerging pathogens.
- Enable writeback into the EHR: While current interoperability efforts have concentrated on the ability to export EHR data, developing a vibrant ecosystem of health data applications that are available to patients, physicians, and other data users requires the capability to write data back into the EHR (see the sketch after this list). This would enable a competitive ecosystem of applications that use health data generated in the EHR, much like the app stores on our phones.
- Create widespread interoperability of data for multiple purposes: HHS has made great progress towards allowing health data to be exchanged between any two entities in our healthcare system, thanks to the Trusted Exchange Framework and Common Agreement (TEFCA). TEFCA could allow any two healthcare sites to exchange data, but unfortunately, participation remains spotty and TEFCA currently does not allow data exchange solely for research. HHS should work to close these gaps by allowing TEFCA to be used for research, and incentivizing participation in TEFCA, for example by making joining TEFCA a condition of participation in Medicare.
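As a rough picture of the write-back pattern mentioned above, the sketch below posts a structured observation to an EHR’s FHIR endpoint. The base URL and access token are placeholders, and whether a given EHR accepts such writes depends on its implementation and authorization policies, so this is a hedged illustration of the pattern rather than a ready-to-use integration.

```python
# Hedged sketch of writing structured data back into an EHR via a FHIR API.
# The endpoint URL, access token, and patient reference are placeholders; real
# systems require authorization (e.g., SMART on FHIR) and may restrict writes.
import json
import urllib.request

FHIR_BASE = "https://ehr.example.org/fhir"   # placeholder base URL
ACCESS_TOKEN = "REPLACE_WITH_REAL_TOKEN"     # placeholder credential

observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{"system": "http://loinc.org", "code": "29463-7",
                    "display": "Body weight"}]
    },
    "subject": {"reference": "Patient/example"},
    "valueQuantity": {"value": 72.5, "unit": "kg",
                      "system": "http://unitsofmeasure.org", "code": "kg"},
}

request = urllib.request.Request(
    url=f"{FHIR_BASE}/Observation",
    data=json.dumps(observation).encode("utf-8"),
    headers={
        "Content-Type": "application/fhir+json",
        "Authorization": f"Bearer {ACCESS_TOKEN}",
    },
    method="POST",  # the FHIR "create" interaction
)
# Sending is commented out because the endpoint above is a placeholder:
# with urllib.request.urlopen(request) as response:
#     print(response.status, response.read().decode("utf-8"))
```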
Conclusion
The treasure trove of health data generated during routine care has given us a huge opportunity to generate knowledge and improve health outcomes. These data should serve as a shared resource for clinical trials, registries, decision support, and outcome tracking to improve the quality of care. This is necessary for society to advance towards personalized medicine, where treatments are tailored to biology and patient preference. However, to make the most of these data, we must improve how we capture and exchange these data at the point of care.
Essential to this goal is evolving our current payment systems from rewarding documentation of complexity or time spent to rewarding the generation of data that supports learning and improvement. HHS should use its payment authorities to encourage data generation at the point of care and promote the tools that enable health data to flow seamlessly between systems, building on the success stories of existing programs like coverage with evidence development. To allow capture of these data without making the lives of providers and patients even more difficult, federal funding bodies need to invest in developing technologies and workflows that leverage AI to create usable data at the point of care. Finally, HHS must continue improving the standards that allow health data to travel seamlessly between systems. This is essential for creating a vibrant ecosystem of applications that leverage the benefits of AI to improve care.
This memo was produced as part of the Federation of American Scientists and Good Science Project sprint. Find more ideas at Good Science Project x FAS
Reduce Administrative Research Burden with ORCID and DOI Persistent Digital Identifiers
There exists a low-effort, low-cost way to reduce administrative burden for our scientists and make it easier for everyone – scientists, funders, legislators, and the public – to document the incredible productivity of federal science agencies. If adopted throughout government research, these tools would maximize interoperability across reporting systems, reduce administrative burden and costs, and increase the accountability of our scientific community. The solution: persistent digital identifiers – Digital Object Identifiers (DOIs) for research awards and Open Researcher and Contributor IDs (ORCIDs) for key personnel. ORCIDs are already used by most federal science agencies. We propose that federal science agencies also adopt DOIs for research awards, an industry-wide standard. A practical and detailed implementation guide for this already exists.
The Opportunity
Tracking the impact and outputs of federal research awards is labor-intensive and expensive. Federally funded scientists spend over 900,000 hours a year writing interim progress reports alone. Despite that tremendous effort, our ability to analyze the productivity of federal research awards is limited. These reports only capture research products created while the award is active, but many exciting papers and data sets are not published until after the award is over, making it hard for the funder to associate them with a particular award or agency initiative. Further, these data are often not structured in ways that support easy analysis or collaboration. When it comes time for the funding agency to examine the impact of an award, a call for applications, or even an entire division, staff rely on a highly manual process that is time-intensive and expensive. Thus, such evaluations are often not done. Deep analysis of federal spending is next to impossible, and simple questions regarding which type of award is better suited for one scientific problem over another, or whether one administrative funding unit is more impactful than a peer organization with the same spending level, are rarely investigated by federal research agencies. These questions are difficult to answer without a simple way to tie award spending to specific research outputs such as papers, patents, and datasets.
To simplify tracking of research outputs, the Office of Science and Technology Policy (OSTP) directed federal research agencies to “assign unique digital persistent identifiers to all scientific research and development awards and intramural research protocols […] through their digital persistent identifiers.” This directive builds on 2018 work from the Trump White House to reduce the burden on researchers, as well as on National Security Strategy guidance. It is a great step forward, but it has yet to be fully implemented, and it allows implementation to take different paths. Agencies are now taking a fragmented, agency-specific approach, which will undermine the full potential of the directive by making it difficult to track impact using the same metrics across federal agencies.
Without a unified federal standard, science publishers, awards management systems, and other disseminators of federal research output will continue to treat award identifiers as unstructured text buried within a long document, or as URLs tucked into acknowledgement sections or other random fields of a research product. These ad hoc methods make it difficult to link research outputs to their federal funding. They leave scientists and universities who must meet requirements for multiple funding agencies either relying on complex software translations of different agency nomenclatures and award identifiers or, more realistically, continuing to track and report productivity by hand. It remains too confusing and expensive to provide the level of oversight our federal research enterprise deserves.
There is an existing industry standard for associating digital persistent identifiers with awards that has been adopted by the Department of Energy and other funders such as the ALS Association, the American Heart Association, and the Wellcome Trust. It is a low-effort, low-cost way to reduce administrative burden for our scientists and make it easier for everyone – scientists, federal agencies, legislators, and the public – to document the incredible productivity of federal science expenditures.
Adopting this standard means funders can automate the reporting of most award products (e.g., scientific papers, datasets), reducing administrative burden and allowing research products to be reliably tracked even after the award ends. Funders could maintain their taxonomy linking award DOIs to specific calls for proposals, study sections, divisions, and other internal structures, allowing them to analyze research products far more easily. Further, funders would be able to answer fundamental questions about their programs that are usually too labor-intensive even to ask, such as: did a particular call for applications result in papers that answered the underlying question laid out in that call? How long should awards for a specific type of research problem last to yield the greatest scientific productivity? In light of rapid advances in artificial intelligence (AI) and other analytic tools, making the linkages between research funding and products standardized and easy to analyze opens possibilities for an even more productive and accountable federal research enterprise going forward. In short, assigning DOIs to awards fulfills the requirements of the 2022 directive to maximize interoperability with other funder reporting systems, delivers on the promise of the 2018 NSTC report to reduce burden, and opens new possibilities for a more accountable and effective federal research enterprise.
Plan of Action
The overall goal is to increase accountability and transparency for federal research funding agencies and dramatically reduce the administrative burden on scientists and staff. Adopting a uniform approach allows for rapid evaluation and improvements across the research enterprise. It also enables the creation of comparable data on agency performance. We propose that federal science agencies adopt the same industry-wide standard – the DOI – for awards. A practical and detailed implementation guide already exists.
These steps support the existing directive and National Security Strategy guidance issued by OSTP and build on 2018 work from the NSTC:
Recommendation 1. An interagency committee led by OSTP should coordinate and harmonize implementation to:
- Develop implementation timelines and budgets for each agency that are consistent with existing industry standards;
- Consult with other stakeholders such as scientific publishers and awardee institutions, but consider that guidance and industry standards already exist, so there is no need for lengthy consultation.
Recommendation 2. Agencies should fully adopt the industry standard persistent identifier infrastructure for research funding—DOIs—for awards. Specifically, funders should:
- Ensure they are listed in the Research Organization Registry at the administrative level (e.g., agency, division) that suits their reporting and analytic needs.
- Require and collect ORCIDs, a digital identifier for researchers widely used by academia and scientific publishers, for the key personnel of an award.
- Issue DOIs for individual awards, and link those awards to the appropriate organizational units and research funding initiatives in the metadata to facilitate evaluation.
Recommendation 3. Agencies should require the Principal Investigator (PI) to cite the award DOI in research products (e.g., scientific papers, datasets). This requirement could be included in the terms and conditions of each award. Using DOIs to automate much of progress reporting, as described below, provides a natural incentive for investigators to comply.
Recommendation 4. Agencies should use persistent identifiers from the ORCID and award DOI systems to identify research products associated with an award, reducing PI burden. Awardees would still be required to certify that a product arose directly from their federal research award. After the award and its reporting obligations end, the agency can continue to use these systems to link products to awards based on information provided by the product creators to the product distributors (e.g., authors citing an award DOI when publishing a paper), but without the direct certification of the awardee. This compromise provides the public and the funder with better information about an award’s output, but does not automatically hold the awardee liable if the product conflicts with a federal policy.
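To make this linkage concrete, the sketch below shows, using invented metadata, how an award DOI and a PI’s ORCID iD could tie a published paper back to its award so that progress reporting reduces to a metadata filter. The identifiers and field names are placeholders; real records would follow the schemas of the relevant registries (e.g., Crossref or DataCite) and the agency’s own award systems.

```python
# Illustrative linkage between an award DOI, a PI's ORCID iD, and a research
# product. Identifiers and field names are placeholders, not real records.
award = {
    "award_doi": "10.99999/example-award-0001",      # placeholder DOI
    "funder": "Example Federal Agency / Division of Widgets",
    "call_for_proposals": "EXA-25-001",
    "principal_investigator_orcid": "0000-0002-1825-0097",  # ORCID's example iD
}

paper = {
    "paper_doi": "10.99999/example-paper-4321",      # placeholder DOI
    "author_orcids": ["0000-0002-1825-0097"],
    "funding_acknowledged": ["10.99999/example-award-0001"],  # cites the award DOI
}

def products_for_award(award_doi: str, products: list) -> list:
    """With award DOIs cited in product metadata, reporting reduces to a filter
    instead of a hand-compiled progress report."""
    return [p for p in products if award_doi in p.get("funding_acknowledged", [])]

print(products_for_award(award["award_doi"], [paper]))
```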
Recommendation 5. Agencies should adopt award DOIs and incorporate them into their efforts to describe agency productivity, creating more efficient and consistent practices for reporting research progress across all federal research funding agencies. Products attributable to an award should be searchable by individual award and by larger collections of awards, such as administrative Centers or calls for applications. As an example of this transparency, PubMed, with its publicly available indexing of the biomedical literature, supports the National Institutes of Health (NIH) RePORTER and could serve as a model for other fields as persistent identifiers for awards and research products become more available.
Recommendation 6. Congress should issue appropriations reporting language to ensure that implementation costs are covered for each agency and that the agencies are adopting a universal standard. Given that the DOI for awards infrastructure works even for small non-profit funders, the greatest costs will be in adapting legacy federal systems, not in utilizing the industry standard itself.
Challenges
We envision the main opposition coming from the agencies themselves, as they have many demands on their time and might take shortcuts to implementation that meet the letter of the requirement but do not deliver the full benefits of an industry standard. This short-sighted position would deny the public needed transparency on research award performance and forgo massive time and cost savings for agencies and researchers.
A partial implementation of this burden-reducing workflow already exists. Data feeds from ORCID and PubMed populate federal tools such as My Bibliography, and in turn support the biosketch generator in SciENcv or an agency’s Research Performance Progress Report. These systems are feasible because they build on PubMed’s excellent metadata and curation. But PubMed does not index all scientific fields.
Adopting DOIs for awards means that persistent identifiers will provide a higher level of service across all federal research areas. DOIs work for scientific areas not supported by PubMed. And even for the sophisticated existing systems drawing from PubMed, user effort could be reduced and accuracy increased if awards were assigned DOIs. Systems such as NIH RePORTER and PubMed currently have to pull data from citation of award numbers in the acknowledgment sections of research papers, which is more difficult to do.
Conclusion
OSTP and the science agencies have put forth a sound directive to make American science funding even more accountable and impactful, and they are on the cusp of implementation. It is part of a long-standing effort to reduce burden and make the federal research enterprise more accountable and effective. Federal research funding agencies are susceptible to falling into bureaucratic fragmentation and inertia by adopting competing approaches that meet the minimum requirements set forth by OSTP, but offer minimal benefit. If these agencies instead adopt the industry standard that is being used by many other funders around the world, there will be a marked reduction in the burden on awardees and federal agencies, and it will facilitate greater transparency, accountability, and innovation in science funding. Adopting the standard is the obvious choice and well within America’s grasp, but avoiding bureaucratic fragmentation is not simple. It takes leadership from each agency, the White House, and Congress.
This memo was produced as part of the Federation of American Scientists and Good Science Project sprint. Find more ideas at Good Science Project x FAS