Unblock Mass Timber by Incentivizing Up-to-date Building Codes
Mass timber can help solve the housing shortage, yet the building material is not widely adopted because old building codes treat it like traditional lumber. The 2021 International Building Code (IBC) addressed this issue, significantly updating mass timber allowances such as increasing height limits. But mass timber use is still broadly limited because state and local building codes usually don’t update automatically. The U.S. Department of Agriculture (USDA) could speed the adoption of mass timber through grants that incentivize state and local governments to adopt the latest IBC codes.
Mass timber can help with housing abundance and the climate transition.
Compared to concrete and steel, mass timber buildings are faster to build (and therefore often cheaper), just as safe in fires, and responsible for fewer CO2 emissions. Single- and multi-family housing using mass timber components could help close the 7.3 million-home shortage of affordable housing.
Broader adoption could meaningfully increase productivity and thereby reduce construction costs. Constructing the superstructure for a 25-story mass timber building in Milwaukee, completed in 2022, took about half as long as comparable concrete construction. Developers have reported cost savings of up to 35% from reduced construction time and labor costs. Mass timber isn’t only for small projects: Walmart is building a new 2.4-million-square-foot office campus from mass timber.
Most states are on older building codes that inhibit use of mass timber.
Use of mass timber is growing. But building codes, often slow to catch up with the latest research, have limited its impact so far. Only in 2021 did amendments to the IBC enable the construction of mass timber buildings taller than six stories. Building taller increases the cost savings from building faster.
State and local government adoption of building codes lags further. By 2023, only 20 states had adopted IBC 2021. Eventually builders might lobby governments to catch up, but for now there’s little reason for many builders to consider mass timber when it’s so restricted.
USDA could incentivize the adoption of the latest IBC.
There should be a federal grantmaking program that implicitly requires the latest IBC codes to participate, incentivizing state and local government adoption.
The USDA could house this program due to policy interest in both the timber industry (Forest Service, FS) and housing (Rural Development, RD).
In fact, USDA is already making grants for mass timber housing, just not at a scale that directly incentivizes code changes. Since 2015, the Wood Innovations Grant Program has invested more than $93 million in projects that support the wood products economy, including multifamily buildings. USDA also recently partnered with the Softwood Lumber Board to competitively award more than $4 million to 11 mass timber projects. Most of these buildings are in states or cities that have adopted IBC 2021. For example, one winner is a 12-story multifamily building in Denver, which would be impossible without IBC 2021.
To unlock the adoption of innovative mass timber construction, Congress should take the following steps:
- Appropriate additional discretionary budget to USDA with direction to invest in mass timber innovation. For example, increasing the Wood Innovations Grant Program by 10 times the FY 2023 amount would be ~$430 million, less than 1.5% of USDA’s discretionary budget.
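The budget math in that recommendation can be sanity-checked with a quick sketch. The ~$43 million FY 2023 grant total and the tenfold scale come from this memo; the ~$29 billion USDA discretionary total is an assumed figure chosen to be consistent with the "less than 1.5%" claim, not an official number:

```python
# Back-of-the-envelope check of the proposed appropriation.
fy2023_wig_grants = 43e6     # Wood Innovations Grant Program, FY 2023 (~$43M, from the text)
scale_factor = 10            # proposed tenfold increase (from the text)
usda_discretionary = 29e9    # assumed USDA discretionary budget (~$29B, illustrative)

proposal = fy2023_wig_grants * scale_factor
share = proposal / usda_discretionary

print(f"Proposed program size: ${proposal / 1e6:.0f} million")
print(f"Share of USDA discretionary budget: {share:.1%}")
```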
USDA should then take the following steps:
- Allocate those funds to the FS Wood Innovations Grant Program or a similar program within RD.
- Prioritize grants to multi- and single-family housing projects.
- Write a funding priority that effectively requires the latest IBC mass timber amendments. For example, prioritizing building designs over eight stories would restrict awards to locations where IBC 2021 (or comparable amendments) is in effect.
Such a funding opportunity would incentivize state and local governments to adopt the mass timber amendments.
It’s uncertain how much funding would create a strong incentive. But even if most projects were awarded in states already using IBC 2021, there may still be positive downstream impacts from meaningful investment in the industry. While there are far fewer mass timber projects relative to total construction, there are far more than a grant program of this scale could directly support, so there shouldn’t be a shortage of projects. The goal is not to directly build millions of homes but to bring state building codes up-to-date. Updating building codes is necessary but not sufficient for construction at scale.
A simple mechanism to unlock the potential of mass timber.
A federal USDA grant program incentivizing adoption of the latest IBC amendments related to mass timber requires no new funding mechanisms and no new legislation. The structure is already available with the FS Wood Innovations Grant Program as a clear example. That program had ~$43 million in grants for FY 2023; perhaps an order of magnitude more funding would move more states to the updated IBC. This program would not drive mass timber adoption at scale on its own, but updating building codes is a necessary first step. Because mass timber is faster to build with and results in fewer emissions, it is a crucial building material that could contribute to both housing abundance and the climate transition.
This idea of merit originated from our Housing Ideas Challenge, in partnership with Learning Collider, National Zoning Atlas, and Cornell’s Legal Constructs Lab. Find additional ideas to address the housing shortage here.
Incentivizing Developers To Reuse Low Income Housing Tax Credits
The Low-Income Housing Tax Credit (LIHTC) program has been the backbone of new affordable housing construction nationwide for the last 37 years. Developers who receive LIHTC financing are paid twice: they collect a developer fee, and they own the building. They can raise rents to market rate after affordability periods expire. States are unable to leverage any capital gain in the project to develop more housing in the future because those gains have disappeared into the developer’s pockets.
Existing LIHTC incentives for nonprofits do not ensure that profits are recycled to build more housing, because many nonprofits have other, nonhousing missions. For example, the proceeds of the 2007 sale of one large, nonprofit-owned housing project built in the 1960s in Hawaii were donated to schools and hospitals. Those funds were generated from housing subsidies and could have created hundreds of new affordable homes, but they left the housing sector permanently.
Incentivizing organizations to use their profits to build more housing will enable LIHTC to create much more housing in the long term. My proposal would amend 26 U.S. Code §42(m)(1)(B)(ii) to ensure each state’s Qualified Allocation Plan gives preference to applicants that are required to use the profits from their development to construct more below-market housing. States and local governments will also receive preference, as they are mission-driven institutions with no incentive to raise rents to market in the future.
This proposal is based on the Vienna, Austria, housing model. Vienna spends no new taxpayer dollars on housing construction, yet houses 60 percent of its population—all who want it—in well-designed, mixed-income social housing. To produce new social housing, Vienna extends low-interest loans to Limited Profit Housing Associations (LPHAs), corporations that make profits but are required to use them to develop more housing in the future. LPHAs charge tenants an approximately $56,000 buy-in upfront, plus rent. Together, these revenue streams cover the cost of servicing the low-interest loans, enabling each building to be revenue positive, especially after the loan is repaid, and thus allowing the LPHA to build more housing in the future, creating a virtuous, self-sustaining cycle of housing creation.
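The LPHA financing cycle described above can be sketched numerically. Only the ~$56,000 tenant buy-in comes from the text; the per-unit development cost, loan rate, loan term, and rent below are hypothetical round numbers chosen purely for illustration:

```python
# Illustrative model of the Vienna LPHA revolving-finance cycle. Only the
# ~$56,000 buy-in is from the text; every other number is hypothetical.
def annual_debt_service(principal, rate, years):
    """Level annual payment on an amortizing loan (standard annuity formula)."""
    return principal * rate / (1 - (1 + rate) ** -years)

unit_cost = 300_000          # assumed development cost per unit
buy_in = 56_000              # tenant's upfront contribution (from the text)
loan = unit_cost - buy_in    # balance financed by the low-interest public loan
payment = annual_debt_service(loan, rate=0.02, years=35)  # assumed loan terms

annual_rent = 12 * 900           # assumed below-market monthly rent of $900
surplus = annual_rent - payment  # revenue available to finance new construction

print(f"Annual debt service per unit: ${payment:,.0f}")
print(f"Annual surplus per unit:      ${surplus:,.0f}")
```

Under these assumed terms the surplus is positive, and it grows to the full rent stream once the loan is repaid, so each completed building helps finance the next one. That is the self-sustaining cycle the proposal aims to replicate.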
In lieu of creating a separate, regulated category of business association, the LIHTC program can prioritize entities required to use their profits to construct more housing, such as through restrictions in their organizational documents.
Recommendation
Congress should
- Amend 26 U.S. Code §42(m)(1)(B)(ii) to include in its preferences “(iv) entities obligated to use the profits from their development to construct more below-market housing”; and
- Amend 26 U.S. Code §42(m)(1)(C) to include in its selection criteria “(xi) projects that are state- or county-owned, in which the state or county is an equity partner, or in which ownership is conveyed to the state or county at a definite time.”
Because states and counties have no incentive to raise rents to market or to pocket the profits from selling such housing, they would also be better recipients of taxpayer financing than for-profit developers.
Political resistance to this concept has come from two main sources. First, state housing finance agencies (HFAs) that administer LIHTC are reluctant to change processes that have been in place for decades. LIHTC currently allows states wide latitude in how to select developers, and HFAs will resist federal restrictions on that flexibility. Second, current LIHTC developers are reluctant to give up any compensation source, even those many years in the future. These arguments have become less persuasive as LIHTC applications have become much more competitive in recent years. If applicants are unwilling to build LIHTC projects without ownership, they will simply forgo those points in the application, and the current system will continue. But if there are applicants willing to use the new structure, and we have heard anecdotally that in Hawaii they would be numerous, they will prove the counterarguments wrong.
Persuading Congress to adopt these changes may be challenging. Indeed, private developers successfully lobbied Congress to eliminate support for nonprofit and limited profit cooperatives as early as the Housing Act of 1937. Despite many criticisms over the years, LIHTC is one of the few affordable housing programs with bipartisan support, because it both rewards private sector developers and produces housing for low-income households. Yet despite the billions that Congress appropriates year after year, America’s housing shortage has gotten worse and worse. If LIHTC funds created projects that recycled their profits into building more housing, LIHTC would create a virtuous cycle to build more and more housing, moving the needle without additional expenditure of taxpayer funds.
A potential source of support would be the mission-driven nonprofit organizations that would be the beneficiaries of this policy change. As part of their LIHTC applications, they would be very willing to create entities legally required to recycle their profits. They would also likely partner with existing LIHTC developers, who could be paid a fee, to deliver the projects. Existing developers would still be able to profit from producing LIHTC housing, even though they would forgo ownership of the building.
This idea of merit originated from our Housing Ideas Challenge, in partnership with Learning Collider, National Zoning Atlas, and Cornell’s Legal Constructs Lab. Find additional ideas to address the housing shortage here.
Expand the Fair Housing Initiatives Program to Enforce Federal and State Housing Supply and Affordability Laws
The U.S. Department of Housing and Urban Development’s (HUD) forthcoming Affirmatively Furthering Fair Housing (AFFH) final rule and a recent wave of state housing affordability legislation create mechanisms to substantially increase the nation’s affordable housing supply. However, because local noncompliance with these laws poses a crucial obstacle, successful implementation will require robust enforcement. To ensure local governments’ full compliance with AFFH and state housing affordability legislation, Congress and HUD should expand the Fair Housing Initiatives Program (FHIP) to fund external enforcement organizations.
FHIP is a model for enforcement of complementary federal and state housing laws. A “necessary component” of fair housing enforcement, FHIP funds local nonprofit organizations to investigate and raise legal complaints of discrimination in their communities under the Fair Housing Act. FHIP grantees play a “vital role” in Fair Housing Act enforcement because FHIP-initiated civil actions and complaints to enforcement agencies are more likely to be properly filed and successfully resolved.
FHIP should be expanded to enforce a new generation of federal and state laws that promote housing supply and affordability but are at risk of insufficient enforcement. AFFH will require localities to implement equity plans that reduce residential segregation and increase access to affordable housing in high opportunity areas. HUD is empowered to withhold substantial streams of federal funding from noncompliant jurisdictions. Nonetheless, HUD poorly enforced AFFH’s previous iterations, leading to calls for “external, relatively independent” enforcement mechanisms. At the state level, a raft of recent legislation overrides local exclusionary zoning and streamlines local housing development permitting processes. However, many localities have demonstrated fierce resistance to these state laws with efforts designed to avoid compliance, including constitutional challenges, declarations of entire towns as “mountain lion sanctuar[ies],” and proposals to give up public infrastructure. Understaffed state agencies may be strained to strictly enforce such laws against hundreds of statewide localities.
As independent community institutions with extensive legal expertise in the housing field, FHIP grantees are well situated to tailor innovative enforcement of the emerging housing supply and affordability regime to their communities. Grantees could build on their administrative expertise by filing complaints to HUD under §5.170 of the proposed AFFH rule. In addition, grantees could initiate civil actions under state laws and the federal False Claims and Fair Housing Acts against jurisdictions that shirk their housing obligations. Grantees might also use their expertise to educate local policymakers and stakeholders on their responsibilities under emerging laws.
Congress should:
- Amend the Fair Housing Initiatives Program’s authorizing statute (42 U.S.C. 3616a) to permit the allocation of grants to fair housing enforcement organizations for enforcement of relevant state and local housing supply and affordability laws, as determined by HUD.
HUD should:
- Amend the Fair Housing Initiatives Program regulations (24 C.F.R. 125) to add a new Affirmatively Furthering Fair Housing Enforcement Initiative (AFFH-EI). The initiative would fund fair housing enforcement organizations to enforce localities’ obligations under AFFH.
- Promulgate administrative guidance and administer technical assistance to AFFH-EI grantees to specify how funds should be used. Grantees should investigate local compliance with AFFH, raise administrative complaints of noncompliance, and pursue affirmative litigation against noncompliant jurisdictions under the Fair Housing Act, False Claims Act, and other relevant federal and state statutes.
If successfully implemented, an expanded FHIP would support the full enforcement of the forthcoming AFFH final rule and recent state housing supply and affordability legislation by bringing administrative complaints to HUD and state agencies, initiating civil enforcement actions, and educating local stakeholders. Indeed, these FHIP grantees would hold local governments accountable to their duties to equitably plan for, and remove legal barriers to, the development of affordable housing in high opportunity areas for all.
This idea of merit originated from our Housing Ideas Challenge, in partnership with Learning Collider, National Zoning Atlas, and Cornell’s Legal Constructs Lab. Find additional ideas to address the housing shortage here.
Incorporate open science standards into the identification of evidence-based social programs
Evidence-based policy uses peer-reviewed research to identify programs that effectively address important societal issues. For example, several agencies in the federal government run clearinghouses that review and assess the quality of peer-reviewed research to identify programs with evidence of effectiveness. However, the replication crisis in the social and behavioral sciences raises concerns that research publications may contain an alarming rate of false positives (rather than true effects), in part due to selective reporting of positive results. The use of open and rigorous practices — like study registration and availability of replication code and data — can ensure that studies provide valid information to decision-makers, but these characteristics are not currently collected or incorporated into assessments of research evidence.
To rectify this issue, federal clearinghouses should incorporate open science practices into their standards and procedures used to identify evidence-based social programs eligible for federal funding.
Details
The federal government is increasingly prioritizing the curation and use of research evidence in making policy and supporting social programs. In this effort, federal evidence clearinghouses—influential repositories of evidence on the effectiveness of programs—are widely relied upon to assess whether policies and programs across various policy sectors are truly “evidence-based.” As one example, the Every Student Succeeds Act (ESSA) directs states, districts, and schools to implement programs with research evidence of effectiveness when using federal funds for K-12 public education; the What Works Clearinghouse—an initiative of the U.S. Department of Education—identifies programs that meet the evidence-based funding requirements of the ESSA. Similar mechanisms exist in the Departments of Health and Human Services (the Prevention Services Clearinghouse and the Pathways to Work Evidence Clearinghouse), Justice (CrimeSolutions), and Labor (the Clearinghouse for Labor and Evaluation Research). Consequently, clearinghouse ratings have the potential to influence the allocation of billions of dollars appropriated by the federal government for social programs.
Clearinghouses generally follow explicit standards and procedures to assess whether published studies used rigorous methods and reported positive results on outcomes of interest. Yet this approach rests on assumptions that peer-reviewed research is credible enough to inform important decisions about resource allocation and is reported accurately enough for clearinghouses to distinguish which reported results represent true effects likely to replicate at scale. Unfortunately, published research often contains results that are wrong, exaggerated, or not replicable. The social and behavioral sciences are experiencing a replication crisis, revealed by numerous large-scale collaborative efforts that had difficulty replicating novel findings from published peer-reviewed research. This issue is partly attributed to closed scientific workflows, which hinder reviewers’ and evaluators’ attempts to detect issues that negatively impact the validity of reported research findings—such as undisclosed multiple hypothesis testing and the selective reporting of results.
Research transparency and openness can mitigate the risk of informing policy decisions on false positives. Open science practices like prospectively sharing protocols and analysis plans, or releasing code and data required to replicate key results, would allow independent third parties such as journals and clearinghouses to fully assess the credibility and replicability of research evidence. Such openness in the design, execution, and analysis of studies on program effectiveness is paramount to increasing public trust in the translation of peer-reviewed research into evidence-based policy.
Currently, standards and procedures to measure and encourage open workflows—and facilitate detection of detrimental practices in the research evidence—are not implemented by either clearinghouses or the peer-reviewed journals publishing the research on program effectiveness that clearinghouses review. When these practices are left unchecked, incomplete, misleading, or invalid research evidence may threaten the ability of evidence-based policy to live up to its promise of producing population-level impacts on important societal issues.
Recommendations
Policymakers should enable clearinghouses to incorporate open science into their standards and procedures used to identify evidence-based social programs eligible for federal funding, and increase the funds appropriated to clearinghouse budgets to allow them to take on this extra work. There are several barriers to clearinghouses incorporating open science into their standards and procedures. To address these barriers and facilitate implementation, we recommend that:
- Dedicated funding should be appropriated by Congress and allocated by federal agencies to clearinghouse budgets so they can better incorporate the assessment of open science practices into research evaluation.
- Funding should facilitate the hiring of additional personnel dedicated to collecting data on whether open science practices were used, and if so, whether they were used well enough to assess the comprehensiveness of reporting (e.g., checking published results against prospective protocols) and the reproducibility of results (e.g., rerunning analyses using study data and code).
- The Office of Management and Budget should establish a formal mechanism for federal agencies that run clearinghouses to collaborate on shared standards and procedures for reviewing open science practices in program evaluations. For example, an interagency working group can develop and implement updated standards of evidence that include assessment of open science practices, in alignment with the Transparency and Openness Promotion (TOP) Guidelines for Clearinghouses.
- Once funding, standards, and procedures are in place, federal agencies sponsoring clearinghouses should create a roadmap for eventual requirements on open science practices in studies on program effectiveness.
- Other open science initiatives targeting researchers, research funders, and journals are increasing the prevalence of open science practices in newly published research. As open science practices become more common, agencies can introduce requirements on open science practices for evidence-based social programs, similar to research transparency requirements implemented by the Department of Health and Human Services for the marketing and reimbursement of medical interventions.
- For example, evidence-based funding mechanisms often have several tiers of evidence to distinguish the level of certainty that a study produced true results. Agencies with tiered-evidence funding mechanisms can begin by requiring open science practices in the highest tier, with the long-term goal of requiring a program meeting any tier to be based on open evidence.
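The reproducibility checks envisioned in these recommendations can be illustrated with a minimal sketch: rerun a study's analysis from its released open data and compare the recomputed estimate to the published value. All names and values below are hypothetical:

```python
# Hypothetical sketch of a computational reproducibility check: recompute a
# study's effect estimate from open data and compare it to the reported value.
import statistics

def recompute_effect(rows):
    """Recompute a simple treatment-vs-control mean difference from open data."""
    treated = [r["outcome"] for r in rows if r["group"] == "treatment"]
    control = [r["outcome"] for r in rows if r["group"] == "control"]
    return statistics.mean(treated) - statistics.mean(control)

# Toy stand-in for a study's released dataset (illustrative values only).
open_data = [
    {"group": "treatment", "outcome": 1.1},
    {"group": "treatment", "outcome": 0.9},
    {"group": "control", "outcome": 0.6},
    {"group": "control", "outcome": 0.6},
]

REPORTED_EFFECT = 0.40   # effect size claimed in the publication (illustrative)
TOLERANCE = 0.01         # allowable numerical discrepancy

effect = recompute_effect(open_data)
print(f"Recomputed effect: {effect:.2f}; matches report: {abs(effect - REPORTED_EFFECT) <= TOLERANCE}")
```

In practice a clearinghouse reviewer would run the study's own released code against its released data rather than a reimplementation, but the pass/fail comparison against the published estimate works the same way.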
Conclusion
The momentum from the White House’s 2022 Year of Evidence for Action and 2023 Year of Open Science provides an unmatched opportunity for connecting federal efforts to bolster the infrastructure for evidence-based decision-making with federal efforts to advance open research. Evidence of program effectiveness would be even more trustworthy if favorable results were found in multiple studies that were registered prospectively, reported comprehensively, and computationally reproducible using open data and code. With policymaker support, incorporating these open science practices in clearinghouse standards for identifying evidence-based social programs is an impactful way to connect these federal initiatives that can increase the trustworthiness of evidence used for policymaking.
Develop a Digital Technology Fund to secure and sustain open source software
Open source software (OSS) is a key part of essential digital infrastructure. Recent estimates indicate that 95% of all software relies upon open source, with about 75% of the code being directly open source. Additionally, as our science and technology ecosystem becomes more networked, computational, and interdisciplinary, open source software will increasingly be the foundation on which our discoveries and innovations rest.
However, there remain important security and sustainability issues with open source software, as evidenced by recent incidents such as the Log4j vulnerability that affected millions of systems worldwide.
To better address security and sustainability of open source software, the United States should establish a Digital Technology Fund through multi-stakeholder participation.
Details
Open source software — software whose source code is publicly available and can be modified, distributed, and reused by anyone — has become ubiquitous. OSS offers myriad benefits, including fostering collaboration, reducing costs, increasing efficiency, and enhancing interoperability. It also plays a key role in U.S. government priorities: federal agencies increasingly create and procure open source software by default, an acknowledgement of its technical benefits as well as its value to the public interest, national security, and global competitiveness.
Open source software’s centrality in the technology produced and consumed by the federal government, the university sector, and the private sector highlights the pressing need for these actors to coordinate on ensuring its sustainability and security. In addition to fostering more robust software development practices, raising capacity, and developing educational programs, there is an urgent need to invest in individuals who create and maintain critical open source software components, often without financial support.
The German Sovereign Tech Fund — launched in 2021 to support the development and maintenance of open digital infrastructure — recently announced such support for the maintainers of Log4j, thereby bolstering its prospects for timely, secure production and sustainability. Importantly, Log4j is just one of numerous projects that require similar support. Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly has affirmed the importance of OSS while flagging its security vulnerabilities as a national security concern. Easterly rightly called for moving the responsibility and support for critical OSS components away from individuals and to the organizations that benefit from those individuals’ efforts.
Recommendations
To address these challenges, the United States should establish a Digital Technology Fund to provide direct and indirect support to OSS projects and communities that are essential for the public interest, national security, and global competitiveness. The Digital Technology Fund would be funded by a coalition of federal, private, academic, and philanthropic stakeholders and would be administered by an independent nonprofit organization.
To better understand the risks and opportunities:
- The Office of the National Cyber Director should publish a synopsis of the feedback to its recent RFI regarding OSS security; it should then commission a comparative analysis of this synopsis and the German Sovereign Tech Fund to identify the gaps and needs within the U.S. context.
To encourage multi-stakeholder participation and support:
- The White House should task the Open-Source Software Security Initiative (OS3I) working group with developing a strategy, draft legislation, and funding proposal for the Digital Technology Fund. The fund should be established as a public-private partnership with a focus on the security and sustainability of OSS; it could be designed to augment the existing Open Technology Fund, which supports internet freedom and digital rights. The strategy should include approaches for encouraging contribution from the private sector, universities, and philanthropy, along with the federal government, to the fund’s resources and organization.
To launch the Digital Tech Fund:
- Congress should appropriate funding in alignment with the proposal developed by the OS3I working group. Legislation could provide relevant agencies — many of which have identified secure OSS as a priority — with initial implementation and oversight responsibility for the fund, after which point a permanent board could be selected.
The realized and potential impact of open source software is transformative in terms of next-generation infrastructure, innovation, workforce development, and artificial intelligence safety. The Digital Tech Fund can play an essential and powerful role in raising our collective capacity to address important security and sustainability challenges by acknowledging and supporting the pioneering individuals who are advancing open source software.
Advance open science through robust data privacy measures
In an era of accelerating advancements in data collection and analysis, realizing the full potential of open science hinges on balancing data accessibility and privacy. As we move towards a more open scientific environment, the volume of sensitive data being shared is swiftly increasing. While open science presents an opportunity to fast-track scientific discovery, it also poses a risk to privacy if not managed correctly.
Building on existing data and privacy efforts, the White House and federal science agencies should collaborate to develop and implement clear standards for research data privacy across the data management and sharing life cycle.
Details
Federal agencies’ open data initiatives are a milestone in the move towards open science. They have the potential to foster greater collaboration, transparency, and innovation in the U.S. scientific ecosystem and lead to a new era of discovery. However, a shift towards open data also poses challenges for privacy, as sharing research data openly can expose personal or sensitive information when done without the appropriate care, methods, and tools. Addressing this challenge requires new policies and technologies that allow for open data sharing while also protecting individual privacy.
The U.S. government has shown a strong commitment to addressing data privacy challenges in various scientific and technological contexts. This commitment is underpinned by laws and regulations such as the Health Insurance Portability and Accountability Act and the regulations for human subjects research (e.g., Code of Federal Regulations Title 45, Part 46). These regulations provide a legal framework for protecting sensitive and identifiable information, which is crucial in the context of open science.
The White House Office of Science and Technology Policy (OSTP) has spearheaded the “National Strategy to Advance Privacy-Preserving Data Sharing and Analytics,” aiming to further the development of these technologies to maximize their benefits equitably, promote trust, and mitigate risks. The National Institutes of Health (NIH) operate an internal Privacy Program, responsible for protecting sensitive and identifiable information within NIH work. The National Science Foundation (NSF) complements these efforts with a multidisciplinary approach through programs like the Secure and Trustworthy Cyberspace program, aiming to develop new ways to design, build, and operate cyber systems, protect existing infrastructure, and motivate and educate individuals about cybersecurity.
Given the unique challenges within the open science context and the wide reach of open data initiatives across the scientific ecosystem, there remains a need for further development of clear policies and frameworks that protect privacy while also facilitating the efficient sharing of scientific data. Coordinated efforts across the federal government could ensure these policies are adaptable, comprehensive, and aligned with the rapidly evolving landscape of scientific research and data technologies.
Recommendations
To clarify standards and best practices for research data privacy:
- The National Institute of Standards and Technology (NIST) should build on its existing Research Data Framework to develop a new framework that is specific to research data privacy and addresses the unique needs of open science communities and practices. This would provide researchers with a clear roadmap for implementing privacy-preserving data sharing in their work.
- This framework should incorporate the principles of Privacy by Design, ensuring that privacy is an integral part of the research life cycle, rather than an afterthought.
- The framework should be regularly updated to stay current with the changes in state, federal, and international data privacy laws, as well as new privacy-preserving methodologies. This will ensure that it remains relevant and effective in the evolving data privacy landscape.
To ensure best practices are used in federally funded research:
- Funding agencies like the NIH and NSF should work with NIST to develop and implement training for Data Management and Sharing Plan applicants and reviewers. This training would equip both parties with knowledge of best practices in privacy-preserving data sharing in open science, thereby ensuring that data privacy measures are effectively integrated into research workflows.
- Agencies should additionally establish programs to foster privacy education, as recommended in the OSTP national strategy.
- Training on open data privacy could additionally be incorporated into agencies’ existing Responsible Conduct of Research requirements.
To catalyze continued improvements in data privacy technologies:
- Science funding agencies should increase funding for domain-specific research and development of privacy-preserving methods for research data sharing. Such initiatives would spur innovation in fields like cryptography and secure computation, leading to the development of new technologies that can broaden the scope of open and secure data sharing.
- To further stimulate innovation, these agencies could also host privacy and security innovation competitions, encouraging researchers and developers to create and implement cutting-edge solutions.
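To make the kind of privacy-preserving method at stake here concrete, the sketch below implements a differentially private count, one of the simplest techniques such R&D programs would build on. It is a minimal illustration only: the dataset and epsilon values are hypothetical, and real deployments would rely on vetted libraries rather than hand-rolled noise.

```python
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise as the difference of two exponential draws."""
    rate = 1.0 / scale
    return random.expovariate(rate) - random.expovariate(rate)

def dp_count(records, predicate, epsilon=1.0):
    """Differentially private count of records matching a predicate.

    A counting query has sensitivity 1 (adding or removing one person's
    record changes the count by at most 1), so Laplace noise with scale
    1/epsilon yields epsilon-differential privacy for the released count.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical example: release the number of study participants aged 40 or over
ages = [34, 41, 29, 55, 62, 38, 47]
released = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
```

Smaller epsilon values add more noise and thus stronger privacy; the released count can be shared openly because it no longer reveals whether any individual's record was in the dataset.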
To facilitate inter-agency coordination:
- OSTP should launch a National Science and Technology Council subcommittee on research data privacy within the Committee on Science. This subcommittee should work closely with the Office of Management and Budget, leveraging its expertise in overseeing federal information resources and implementing data management policies. This collaboration would ensure a coordinated and consistent approach to addressing data privacy issues in open science across different federal agencies.
Incorporate open source hardware into Patent and Trademark Office search locations for prior art
Increasingly, scientific innovations reside outside the realm of papers and patents. This is particularly true for open source hardware — hardware designs made freely and publicly available for study, modification, distribution, production, and sale. The shift toward open source aligns well with the White House’s 2023 Year of Open Science and can advance the accessibility and impact of federally funded hardware. Yet as the U.S. government expands its support for open science and open source, it will be increasingly vital that our intellectual property (IP) system is designed to properly identify and protect open innovations. Without consideration of open source hardware in prior art and attribution, these public goods are at risk of being patented over and having their accessibility lost.
Organizations like the Open Source Hardware Association (OSHWA) — a standards body for open hardware — provide verified databases of open source innovations. Over the past six years, for example, OSHWA’s certification program has grown to over 2,600 certifications, and the organization has offered educational seminars and training. Despite the availability of such resources, open source certifications and resources have yet to be effectively incorporated into the IP system.
We recommend that the United States Patent and Trademark Office (USPTO) incorporate open source hardware certification databases into the library of resources to search for prior art, and create guidelines and training to build agency capacity for evaluating open source prior art.
Details
Innovative and important hardware products are increasingly being developed as open source, particularly in the sciences, as academic and government research moves toward greater transparency. This trend holds great promise for science and technology, as more people from more backgrounds are able to replicate, improve, and share hardware. A prime example is the 3D printing industry. Once foundational patents in 3D printing were released, there was an explosion of invention in the field that led to desktop and consumer 3D printers, open source filaments, and even 3D printing in space.
For these benefits to be more broadly realized across science and technology, open source hardware must be acknowledged in a way that ensures scientists will have their contributions found and respected by the IP system’s prior art process. Scientists building open source hardware are rightfully concerned their inventions will be patented over by someone else. Recently, a legal battle ensued after open hardware was wrongly patented over. While the patent was eventually overturned, doing so took time and money and revealed important holes in the United States’ prior art system. As another example, the Electronic Frontier Foundation found more than 30 pieces of prior art anticipating the ArrivalStar patent.
Erroneous patents can harm the validity of open source and limit the creation and use of new open source tools, especially in the case of hardware, which relies on prior art as its main protection. The USPTO — the administrator of intellectual property protection and a key actor in the U.S. science and technology enterprise — has an opportunity to ensure that open source tools are reliably identified and considered. Standardized and robust incorporation of open source innovations into the U.S. IP ecosystem would make science more reproducible and ensure that open science stays open, for the benefits of rapid improvement, testing, citizen science, and general education.
Recommendations
We recommend that the USPTO incorporate open source hardware into prior art searches and take steps to develop education and training to support the protection of open innovation in the patenting process.
- USPTO should add OSHWA’s certification — an established, standards-compliant open source hardware certification program — to its non-patent search library.
- USPTO should put out a request for information (RFI) seeking input on (a) optimal approaches for incorporating open source innovations into searches for prior art, and (b) existing databases, standards, or certification programs that can/should be added to the agency’s non-patent search library.
- Based on the results of the RFI, USPTO’s Scientific and Technical Information Center should create guidelines and educational training programs to build examiners’ knowledge and capacity for evaluating open source prior art.
- USPTO should create clear public guidelines for the submission of new databases into the agency’s prior art library, and the requirements for their consideration and inclusion.
Incorporation of open hardware into prior art searches will signify the importance and consideration of open source within the IP system. These actions have the potential to improve the efficiency of prior art identification, advance open source hardware by assuring institutional actors that open innovations will be reliably identified and protected, and ensure open science stays open.
Improve research through better data management and sharing plans
The United States government spends billions of dollars every year to support the best scientific research in the world. The novel and multidisciplinary data produced by these investments have historically remained unavailable to the broader scientific community and the public. This limits researchers’ ability to synthesize knowledge, make new discoveries, and ensure the credibility of research. But recent guidance from the Office of Science and Technology Policy (OSTP) represents a major step forward for making scientific data more available, transparent, and reusable.
Federal agencies should take coordinated action to ensure that data sharing policies created in response to the 2022 Nelson memo incentivize high-quality data management and sharing plans (DMSPs), include robust enforcement mechanisms, and implement best practices in supporting a more innovative and credible research culture.
Details
The 2022 OSTP memorandum “Ensuring Free, Immediate, and Equitable Access to Federally Funded Research” (the Nelson memo) represents a significant step toward opening up not only the findings of science but its materials and processes as well. By including data and related research outputs as items that should be publicly accessible, defining “scientific data” to include “material… of sufficient quality to validate and replicate research findings” (emphasis added), and specifying that agency plans should cover “scientific data that are not associated with peer-reviewed scholarly publications,” this guidance has the potential to greatly improve the transparency, equity, rigor, and reusability of scientific research.
Yet while the 2022 Nelson memo provides a crucial foundation for open, transparent, and reusable scientific data, preliminary review of agency responses reveals considerable variation in how access to data and research outputs will be handled. Agencies differ in how rigorously their policies will be reviewed and enforced, and in how specifically they define data as the materials needed to “validate and replicate” research findings. Finally, agencies could and should go further by enabling the accessibility, discoverability, and citation of researchers’ data sharing plans themselves, supporting a research ecosystem that builds cumulative scientific evidence.
Recommendations
To better incentivize quality and reusability in data sharing, agencies should:
- Make DMSPs publicly available in an easy-to-use interface on their websites where individual grants are listed to increase accountability for stated plans and discoverability of research outputs.
- Additionally, give DMSPs persistent, unique identifiers (e.g., digital object identifiers, or DOIs) so that they can be cited, read, and used.
- Make DMSPs subject to peer review as part of the same process that other aspects of a proposed research project’s intellectual merit are evaluated. This will directly incentivize high standards of planned data sharing practices and enable the diffusion of best practices across the research community.
To better ensure compliance and comprehensive availability, agencies should:
- Coordinate across agencies to create a consistent mechanism for DMSP enforcement to reduce applicant uncertainty about agencies’ expectations and processes.
- Approaches to enforcement should include evaluation of past adherence to DMSPs in future grant applications and should ensure that early career researchers and researchers from lower-resourced institutions are not penalized for a lack of a data-sharing record.
- Assert that “data” includes all digital materials needed for external researchers to replicate and validate findings.
- Work with domain-specific stakeholders to develop guidance for the specific components that should be included as research outputs (e.g., data, codebooks, metadata, protocols, analytic code, preregistrations).
Updates to the Center for Open Science’s efforts to track, curate, and recommend best practices in implementing the Nelson memo will be disseminated through publication and through posting on our website at https://www.cos.io/policy-reform.
Support scientific software infrastructure by requiring SBOMs for federally funded research
Federally funded research relies heavily on software. Despite considerable evidence demonstrating software’s crucial role in research, there is no systematic process for researchers to acknowledge its use, and those building software lack recognition for their work. While researchers want to give appropriate acknowledgment for the software they use, many are unsure how to do so effectively. With greater knowledge of what software is used in research underlying publications, federal research funding agencies and researchers themselves will better be able to make efficient funding decisions, enhance the sustainability of software infrastructure, identify vital yet often overlooked digital infrastructure, and inform workforce development.
All agencies that fund research should require that resulting publications include a Software Bill of Materials (SBOM) listing the software used in the research.
Details
Software is a cornerstone of modern research. Evidence from numerous surveys consistently shows that a majority of researchers rely heavily on software. Without it, their work would likely come to a standstill. However, there is a striking contrast between the crucial role that software plays in modern research and our knowledge of what software is used, as well as the level of recognition it receives. To bridge this gap, we propose policies to properly acknowledge and support the essential software that powers research across disciplines.
Software citation is one way to address these issues, but citation alone is insufficient as a mechanism to generate software infrastructure insights. In recent years, there has been a push for the recognition of software as a crucial component of scholarly publications, leading to the creation of guidelines and specialized journals for software citation. However, software remains under-cited due to several challenges, including friction with journals’ reference list standards, confusion regarding which or when software should be cited, and opacity of the roles and dependencies among cited software. Therefore, we need a new approach to this problem.
A Software Bill of Materials (SBOM) is a list of the software components that were used in an effort, such as building application software. Executive Order 14028 requires that all federal agencies obtain SBOMs when they purchase software. For this reason, many high-quality open-source SBOM tools already exist and can be straightforwardly used to generate descriptions of software used in research.
SBOM tools can identify and list the stack of software underlying each publication, even when the code itself is not openly shared. If we were able to combine software manifests from many publications together, we would have the insights needed to better advance research. SBOM data can help federal agencies find the right mechanism (funding, in-kind contribution of time) to sustain software critical to their missions. Better knowledge about patterns of software use in research can facilitate better coordination among developers and reduce friction in their development roadmaps. Understanding the software used in research will also promote public trust in government-funded research through improved reproducibility.
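To illustrate the kind of aggregation described above, the sketch below combines component lists from two hypothetical SBOMs in the CycloneDX JSON format (one widely used SBOM standard) and counts how many publications depend on each piece of software. The component lists here are illustrative, not drawn from real publications.

```python
import json
from collections import Counter

# Two hypothetical CycloneDX-format SBOMs, as might accompany publications.
sbom_a = json.loads('''{"bomFormat": "CycloneDX", "components": [
    {"type": "library", "name": "numpy", "version": "1.26.4"},
    {"type": "library", "name": "scipy", "version": "1.11.3"}]}''')
sbom_b = json.loads('''{"bomFormat": "CycloneDX", "components": [
    {"type": "library", "name": "numpy", "version": "1.24.0"},
    {"type": "library", "name": "astropy", "version": "5.3"}]}''')

def dependency_counts(sboms):
    """Count how many publications' SBOMs declare each software component."""
    counts = Counter()
    for sbom in sboms:
        # Count each component once per publication, even if listed twice.
        counts.update({c["name"] for c in sbom.get("components", [])})
    return counts

counts = dependency_counts([sbom_a, sbom_b])
```

Ranking these counts across thousands of publications is the kind of analysis that would let a funding agency see which software underpins the most federally funded research.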
Recommendation
We recommend the adoption of Software Bills of Materials (SBOMs) — which are already used by federal agencies for security reasons — to understand the software infrastructure underlying scientific research. Given their mandatory use by software suppliers to the federal government, SBOMs are ideal for highlighting software dependencies and potential security vulnerabilities. The same tools and practices can be used to generate SBOMs for publications. We therefore recommend that all agencies that fund research require resulting publications to include an SBOM listing the software used in the research. Additionally, for research that has already been published with supplementary code materials, SBOMs should be generated retrospectively. This will not only address the issue of software infrastructure sustainability but also enhance the verification of research by clearly documenting the specific software versions used and directing limited funds to the software maintenance that most needs it.
- The Office of Science and Technology Policy (OSTP) should coordinate with agencies to undertake feasibility studies of this policy, building confidence that it would work as intended.
- Coordination should include funding agencies, federal actors currently applying SBOMs in software procurement, organizations developing SBOM tools and standards, and scientific stakeholders.
- Based on the results of the study, OSTP should direct funding agencies to design and implement policies requiring that publications resulting from federal funding include an openly accessible, machine-readable SBOM for the software used in the research.
- OSTP and the Office of Management and Budget should additionally use the Multi-Agency Research and Development Budget Priorities to encourage agencies’ collection, integration, and analysis of SBOM data to inform funding and workforce priorities and to catalyze additional agency resource allocations for software infrastructure assessment in follow-on budget processes.
Create an Office of Co-Production at the National Institutes of Health
The National Institutes of Health (NIH) spent $49 billion in fiscal year 2023 on research and development, a significant annual investment in medical treatment discovery and development. Despite NIH’s research investments producing paradigm-shifting therapies, such as CAR-T cancer treatments, CRISPR-enabled gene therapy for sickle cell, and the mRNA vaccine for COVID-19, the agency and medical scientists more broadly are grappling with declining trust. This further compounds decades-long mistrust in medical research by marginalized populations, whom researchers struggle to recruit as participants in medical research. If things do not improve, a lack of representation may lead to lack of access to effective medical interventions, worsen health disparities, and cost hundreds of billions of dollars.
A new paradigm for research is needed to ensure meaningful public engagement and rebuild trust. Co-production — in which researchers, patients, and practitioners work together as collaborators — offers a framework for embedding collaboration and trust into the biomedical enterprise.
The National Institutes of Health should form an Office of Co-Production in the Office of the Director, Division of Program Coordination, Planning, and Strategic Initiatives.
Details
In accordance with Executive Order 13985 and ongoing public access initiatives, science funding and R&D agencies have been seeking ways to embed equity, accessibility, and public participation into their processes. The NIH has been increasingly working to advance publicly engaged and led research, illustrated by trainings and workshops around patient-engaged research, funding resources for community partnerships like RADx Underserved Populations, community-led research programs like Community Partnerships to Advance Science for Society (ComPASS), and support from the new NIH director.
To ensure that public engagement efforts are sustainable, it is critical to invest in lasting infrastructure capable of building and maintaining these ties. Indeed, in their Recommendation on Open Science, the United Nations Educational, Scientific, and Cultural Organization outlined infrastructure that must be built for scientific funding to include those beyond STEMM practitioners in research decision-making. One key approach involves explicitly supporting the co-production of research, a process by which “researchers, practitioners and the public work together, sharing power and responsibility from the start to the end of the project, including the generation of knowledge.”
Co-production provides a framework with which the NIH can advance patient involvement in research, health equity, uptake and promotion of new technologies, diverse participation in clinical trials, scientific literacy, and public health. Doing so effectively would require new models for including and empowering patient voices in the agency’s work.
Recommendations
The NIH should create an Office of Co-Production within the Office of the Director, Division of Program Coordination, Planning, and Strategic Initiatives (DPCPSI). The office would institutionalize best practices for co-producing research, train NIH and NIH-funded researchers in co-production principles, build patient-engaged research infrastructure, and fund pilot projects to build the research field.
The NIH Office of Co-Production, co-led by patient advocates (PA) and NIH personnel, should be established with the following key programs:
- A Resources and Training Program that trains patient advocates and researchers, both separately and together, so they can understand and work with one another as collaborators. This work would include helping researchers develop an understanding of the communities affected by the diseases they are investigating, relationship-building strategies, and ways to address power differentials, and helping patient advocates gain an understanding of research processes, including disease pathogenesis, different mechanisms of action and targets for research, and clinical research processes such as regulatory requirements and ethical considerations. PAs could also be trained to qualify to serve on Data and Safety Monitoring Boards (DSMBs).
- A Patient Advocate Advisors Management Program that would manage the placement of PAs into community advisory bodies, into advisory roles to NIH institutes’ major initiatives, onto ethical advisory bodies, onto DSMBs, onto peer review committees and study sections, and onto key long-range planning bodies, including those determining research prioritization.
- A Co-Production Principles and Practice Program led by a senior team of PAs and advisors that coordinates, organizes, and facilitates cross-disease understanding and solidarity and establishes basic principles for patient advocate engagement, grant requirements, and ongoing assessment of the quality of co-production and relational infrastructure. This program will focus on key principles such as:
- Sharing of power – the research is jointly owned and people work together to achieve a joint understanding
- Including all perspectives and skills – make sure the research team includes all those who can make a contribution
- Respecting and valuing the knowledge of all those working together on the research – everyone is of equal importance
- Reciprocity – everybody benefits from working together
- Building and maintaining relationships – an emphasis on relationships is key to sharing power. There needs to be joint understanding, consensus, and clarity over roles and responsibilities. It is also important to value people and unlock their potential.
- A Communications, Planning, and Policy Program that works with the NIH director and institute directors to advocate for mutual goals to advance the public engagement mission of the NIH and its institutes.
- A Grantmaking Program that can pilot the expansion and scaling of NIH-sponsored Co-Production Cores and support the involvement of patient advocates in NIH-funded research across the country through equitable participation and standard compensation policies.
Creating an Office of Co-Production would achieve the following goals:
- It would address the growing gulf between the public, who ultimately fund biomedical research with their tax dollars, and researchers by directly and meaningfully engaging patient advocates in biomedical and clinical science. Co-production builds relationships and trust as it requires that relationships are valued and nurtured and that efforts are made to redress power differentials.
- By working early and often with patient populations around treatments, co-production helps medical scientists better anticipate and address risk early in the research process.
- It would institutionalize a known model for collaborative research that efficiently uses research dollars. During the HIV/AIDS crisis, rapid advances in biomedical and clinical research were made possible by patient advocate involvement in trial design, recruitment, and analysis.
- The office would create a replicable model of institutional support for co-production that can be scaled across the federal R&D agencies. The NIH should regularly report on the progress made by the Office of Co-Production to encourage replication in other agencies that can benefit from increased public participation.
Make government-funded hardware open source by default
While scientific publications and data are increasingly made publicly accessible, designs and documentation for scientific hardware — another key output of federal funding and driver of innovation — remain largely closed from view. This status quo can lead to redundancy, slowed innovation, and increased costs. Existing standards and certifications for open source hardware provide a framework for bringing the openness of scientific tools in line with that of other research outputs. Doing so would encourage the collective development of research hardware, reduce wasteful parallel creation of basic tools, and simplify the process of reproducing research. The resulting open hardware would be available to the public, researchers, and federal agencies, accelerating the pace of innovation and ensuring that each community receives the full benefit of federally funded research.
Federal grantmakers should establish a default expectation that hardware developed as part of federally supported research be released as open hardware. To retain current incentives for translation and commercialization, grantmakers should design exceptions to this policy for researchers who intend to patent their hardware.
Details
Federal funding plays an important role in setting norms around open access to research. The White House Office of Science and Technology Policy (OSTP)’s recent Memorandum Ensuring Free, Immediate, and Equitable Access to Federally Funded Research makes it clear that open access is a cornerstone of a scientific culture that values collaboration and data sharing. OSTP’s recent report on open access publishing further declares that “[b]road and expeditious sharing of federally funded research is fundamental for accelerating discovery on critical science and policy questions.”
These efforts have been instrumental in providing the public with access to scientific papers and data — two of the foundational outputs of federally funded research. Yet hardware, another key input and output of science and innovation, remains largely hidden from view. To continue the move towards an accessible, collaborative, and efficient scientific enterprise, public access policies should be expanded to include hardware. Specifically, making federally funded hardware open source by default would have a number of specific and immediate benefits:
Reduce Wasteful Reinvention. Researchers are often forced to develop testing and operational hardware that supports their research. In many cases, unbeknownst to those researchers, this hardware has already been developed as part of other projects by other researchers in other labs. However, since that original hardware was not openly documented and licensed, subsequent researchers are not able to learn from and build upon this previous work. The lack of open documentation and licensing is also a barrier to more intentional, collaborative development of standardized testing equipment for research.
Increase Access to Information. As the OSTP memo makes clear, open access to federally funded research allows all Americans to benefit from our collective investment. This broad and expeditious sharing strengthens our ability to be a critical leader and partner on issues of open science around the world. Immediate sharing of research results and data is key to ensuring that benefit. Explicit guidance on sharing the hardware developed as part of that research is the next logical step towards those goals.
Alternative Paths to Recognition. Evaluating a researcher’s impact often includes an assessment of the number of patents they can claim. This is in large part because patents are easy to quantify. However, this focus on patents creates a perverse incentive for researchers to erect barriers to follow-on study even if they have no intention of using patents to commercialize their research. Encouraging researchers to open source the hardware developed as part of their research creates an alternative path to evaluate their impact, especially as those pieces of open source hardware are adopted and improved by others. Uptake of researchers’ open hardware could be included in assessments on par with any patented work. This path recognizes the contribution to a collective research enterprise.
Verifiability. Open access to data and research is an important step towards allowing third parties to verify research conclusions. However, these tools can be limited if the hardware used to generate the data and produce the research is not itself open. Open sourcing hardware simplifies the process of repeating studies under comparable conditions, allowing for third-party validation of important conclusions.
Recommendations
Federal grantmaking agencies should establish a default presumption that recipients of research funds make hardware developed with those funds available on open terms. This policy would apply to hardware built as part of the research process, as well as hardware that is part of the final output. Grantees should be able to opt out of this requirement with regard to hardware that is expected to be patented; such an exception would provide an alternative path for researchers to share their work without undermining existing patent-based development pathways.
To establish this policy, OSTP should conduct a study and produce a report on the current state of federally funded scientific hardware and opportunities for open source hardware policy.
- As part of the study, OSTP should coordinate and convene stakeholders to discuss and align on policy implementation details — including relevant researchers, funding agencies, U.S. Patent and Trademark Office officials, and leaders from university tech transfer offices.
- The report should provide a detailed and widely applicable definition of open source hardware, drawing on definitions established in the community — in particular, the definition maintained by the Open Source Hardware Association, which has been in use for over a decade and is based on the widely recognized definition of open source software maintained by the Open Source Initiative.
- It should also lay out a broadly acceptable policy approach for encouraging open source by default, and provide guidance to agencies on implementation. The policy framework should include recommendations for:
- Minimally burdensome grant application and progress report components that capture relevant information about hardware and ensure planning and compliance for making outputs open source
- A clear and well-defined opportunity for researchers to opt out of this mandate when they intend to patent their hardware
The Office of Management and Budget (OMB) should issue a memorandum establishing a policy on open source hardware in federal research funding. The memorandum should include:
- The rationale for encouraging open source hardware by default in federally funded scientific research, drawing on the motivation of public access policies for publications and data
- A finalized definition of open source hardware to be used by agencies in policy implementation
- Direction to agencies to incorporate the Open Source Scientific Hardware Policy into their grantmaking, in alignment with the OSTP report and recommendations
Conclusion
The U.S. government and taxpayers are already paying to develop hardware created as part of research grants. Because there is currently no obligation to make that hardware openly available, the federal government and taxpayers are likely paying to develop identical hardware over and over again.
Grantees have already proven that existing open publication and open data obligations promote research and innovation without unduly restricting important research activities. Expanding these obligations to include the hardware developed under these grants is the natural next step.
Promoting reproducible research to maximize the benefits of government investments in science
Scientific research is the foundation of progress, creating innovations like new treatments for melanoma and providing behavioral insights to guide policy in responding to events like the COVID-19 pandemic. This potential for real-world impact is best realized when research is rigorous, credible, and subject to external confirmation. However, evidence suggests that, too often, research findings are not reproducible or trustworthy, preventing policymakers, practitioners, researchers, and the public from fully capitalizing on the promise of science to improve social outcomes in domains like health and education.
To build on existing federal efforts supporting scientific rigor and integrity, funding agencies should study and pilot new programs to incentivize researchers’ engagement in credibility-enhancing practices that are presently undervalued in the scientific enterprise.
Details
Federal science agencies have a long-standing commitment to ensuring the rigor and reproducibility of scientific research for the purposes of accelerating discovery and innovation, informing evidence-based policymaking and decision-making, and fostering public trust in science. In the past 10 years alone, policymakers have commissioned three National Academies reports, a Government Accountability Office (GAO) study, and a National Science and Technology Council (NSTC) report exploring these and related issues. Unfortunately, flawed, untrustworthy, and potentially fraudulent studies continue to affect the scientific enterprise.
The U.S. government and the scientific community have increasingly recognized that open science practices — like sharing research code and data, preregistering study protocols, and supporting independent replication efforts — hold great promise for ensuring the rigor and replicability of scientific research. Many U.S. science agencies have accordingly launched efforts to encourage these practices in recent decades. Perhaps the most well-known example is the creation of clinicaltrials.gov and the requirements that publicly and privately funded trials be preregistered (in 2000 and 2007, respectively), leading, in some cases, to fewer trials reporting positive results.
More recent federal actions have focused on facilitating sharing of research data and materials and supporting open science-related education. These efforts seek to build on areas of consensus given the diversity of the scientific ecosystem and the resulting difficulty of setting appropriate and generalizable standards for methodological rigor. However, further steps are warranted. Many key practices that could enhance the government’s efforts to increase the rigor and reproducibility of scientific practice — such as the preregistration of confirmatory studies and replication of influential or decision-relevant findings — remain far too rare. A key challenge is the weak incentive to engage in these practices. Researchers perceive them as costly or undervalued given the professional rewards created by the current funding and promotion system, which encourages exploratory searches for new “discoveries” that frequently fail to replicate. Absent structural change to these incentives, uptake is likely to remain limited.
Recommendations
To fully capitalize on the government’s investments in education and infrastructure for open science, we recommend that federal funding agencies launch pilot initiatives to incentivize and reward researchers’ pursuit of transparent, rigorous, and public good-oriented practices. Such efforts could enhance the quality and impact of federally funded research at relatively low cost, encourage alignment of priorities and incentive structures with other scientific actors, and help science and scientists better deliver on the promise of research to benefit society. Specifically, NIH and NSF should:
Establish discipline-specific offices to launch initiatives around rigor and reproducibility
- Use the National Institute of Neurological Disorders and Stroke’s Office of Research Quality (ORQ) as a model; similar ORQs would encourage uptake of under-incentivized practices through both internal initiatives and external funding programs.
- To ensure that programs are tailored to fit the priorities of a single disciplinary context, offices should be established within individual NIH institutes and within individual NSF directorates.
Incorporate assessments of transparent and credible research methods into their learning agendas
- Include questions to better understand existing practices, such as “How frequently and effectively do [agency]-funded researchers across disciplines engage in open science practices — e.g., preregistration, publication of null results, and external replication — and how do these practices relate to future funding and research outcomes?”
- Include questions to inform new policies and initiatives, such as “What steps could [agency] take to incentivize broader uptake of open science practices, and which ones — e.g., funding programs, application questions, standards, and evaluation models — are most effective?”
- To answer these questions, solicit feedback from applicants, reviewers, and program officers, and partner with external “science of science management” researchers to design rigorous prospective and retrospective studies; use the information obtained to develop new processes to incentivize and reward open science practices in funded research.
Expand support for third-party replications
- Allocate a consistent proportion of funds to support independent replications of key findings through non-grant mechanisms — e.g., prizes, cooperative agreements, and contracts. The high value placed on scientific novelty discourages such studies, even though they could inform decisions about treatment, policy, regulatory approval, or future scientific inquiry. A combination of agency prioritization and public requests for information should be used to identify topics for which additional supportive or contradictory evidence would provide significant societal and/or scientific benefit.
- The NSF, in partnership with an independent third-party organization like the Institute for Replication, should run a pilot study to assess the utility of commissioning targeted and/or randomized replication studies for advancing research rigor and informing future funding.