Evidence-based policy uses peer-reviewed research to identify programs that effectively address important societal issues. For example, several agencies in the federal government run clearinghouses that review and assess the quality of peer-reviewed research to identify programs with evidence of effectiveness. However, the replication crisis in the social and behavioral sciences raises concerns that research publications may contain an alarming rate of false positives (rather than true effects), in part due to selective reporting of positive results. The use of open and rigorous practices — like study registration and availability of replication code and data — can ensure that studies provide valid information to decision-makers, but these characteristics are not currently collected or incorporated into assessments of research evidence.
To rectify this issue, federal clearinghouses should incorporate open science practices into their standards and procedures used to identify evidence-based social programs eligible for federal funding.
The federal government is increasingly prioritizing the curation and use of research evidence in making policy and supporting social programs. In this effort, federal evidence clearinghouses—influential repositories of evidence on the effectiveness of programs—are widely relied upon to assess whether policies and programs across various policy sectors are truly “evidence-based.” As one example, the Every Student Succeeds Act (ESSA) directs states, districts, and schools to implement programs with research evidence of effectiveness when using federal funds for K-12 public education; the What Works Clearinghouse—an initiative of the U.S. Department of Education—identifies programs that meet the evidence-based funding requirements of the ESSA. Similar mechanisms exist in the Departments of Health and Human Services (the Prevention Services Clearinghouse and the Pathways to Work Evidence Clearinghouse), Justice (CrimeSolutions), and Labor (the Clearinghouse for Labor and Evaluation Research). Consequently, clearinghouse ratings have the potential to influence the allocation of billions of dollars appropriated by the federal government for social programs.
Clearinghouses generally follow explicit standards and procedures to assess whether published studies used rigorous methods and reported positive results on outcomes of interest. Yet this approach rests on assumptions that peer-reviewed research is credible enough to inform important decisions about resource allocation and is reported accurately enough for clearinghouses to distinguish which reported results represent true effects likely to replicate at scale. Unfortunately, published research often contains results that are wrong, exaggerated, or not replicable. The social and behavioral sciences are experiencing a replication crisis as a result of numerous large-scale collaborative efforts that had difficulty replicating novel findings in published peer-reviewed research. This issue is partly attributed to closed scientific workflows, which hinder reviewers’ and evaluators’ attempts to detect issues that negatively impact the validity of reported research findings—such as undisclosed multiple hypothesis testing and the selective reporting of results.
Research transparency and openness can mitigate the risk of informing policy decisions on false positives. Open science practices like prospectively sharing protocols and analysis plans, or releasing code and data required to replicate key results, would allow independent third parties such as journals and clearinghouses to fully assess the credibility and replicability of research evidence. Such openness in the design, execution, and analysis of studies on program effectiveness is paramount to increasing public trust in the translation of peer-reviewed research into evidence-based policy.
Currently, standards and procedures to measure and encourage open workflows—and facilitate detection of detrimental practices in the research evidence—are not implemented by either clearinghouses or the peer-reviewed journals publishing the research on program effectiveness that clearinghouses review. When these practices are left unchecked, incomplete, misleading, or invalid research evidence may threaten the ability of evidence-based policy to live up to its promise of producing population-level impacts on important societal issues.
Policymakers should enable clearinghouses to incorporate open science into their standards and procedures used to identify evidence-based social programs eligible for federal funding, and increase the funds appropriated to clearinghouse budgets to allow them to take on this extra work. There are several barriers to clearinghouses incorporating open science into their standards and procedures. To address these barriers and facilitate implementation, we recommend that:
- Dedicated funding should be appropriated by Congress and allocated by federal agencies to clearinghouse budgets so they can better incorporate the assessment of open science practices into research evaluation.
- Funding should facilitate the hiring of additional personnel dedicated to collecting data on whether open science practices were used—and if so, whether they were used well enough to assess the comprehensiveness of reporting (e.g., checking published results against prospective protocols) and the reproducibility of results (e.g., rerunning analyses using study data and code).
- The Office of Management and Budget should establish a formal mechanism for federal agencies that run clearinghouses to collaborate on shared standards and procedures for reviewing open science practices in program evaluations. For example, an interagency working group can develop and implement updated standards of evidence that include assessment of open science practices, in alignment with the Transparency and Openness Promotion (TOP) Guidelines for Clearinghouses.
- Once funding, standards, and procedures are in place, federal agencies sponsoring clearinghouses should create a roadmap for eventual requirements on open science practices in studies on program effectiveness.
- Other open science initiatives targeting researchers, research funders, and journals are increasing the prevalence of open science practices in newly published research. As open science practices become more common, agencies can introduce requirements on open science practices for evidence-based social programs, similar to research transparency requirements implemented by the Department of Health and Human Services for the marketing and reimbursement of medical interventions.
- For example, evidence-based funding mechanisms often have several tiers of evidence to distinguish the level of certainty that a study produced true results. Agencies with tiered-evidence funding mechanisms can begin by requiring open science practices in the highest tier, with the long-term goal of requiring a program meeting any tier to be based on open evidence.
The momentum from the White House’s 2022 Year of Evidence for Action and 2023 Year of Open Science provides an unmatched opportunity to connect federal efforts to bolster the infrastructure for evidence-based decision-making with federal efforts to advance open research. Evidence of program effectiveness would be even more trustworthy if favorable results were found in multiple studies that were registered prospectively, reported comprehensively, and computationally reproducible using open data and code. With policymaker support, incorporating these open science practices into clearinghouse standards for identifying evidence-based social programs is an impactful way to unite these federal initiatives and increase the trustworthiness of evidence used for policymaking.
Open source software (OSS) is a key part of essential digital infrastructure. Recent estimates indicate that 95% of all software relies upon open source, with about 75% of the code being directly open source. Additionally, as our science and technology ecosystem becomes more networked, computational, and interdisciplinary, open source software will increasingly be the foundation on which our discoveries and innovations rest.
However, there remain important security and sustainability issues with open source software, as evidenced by recent incidents such as the Log4j vulnerability that affected millions of systems worldwide.
To better address security and sustainability of open source software, the United States should establish a Digital Technology Fund through multi-stakeholder participation.
Open source software — software whose source code is publicly available and can be modified, distributed, and reused by anyone — has become ubiquitous. OSS offers myriad benefits, including fostering collaboration, reducing costs, increasing efficiency, and enhancing interoperability. It also plays a key role in U.S. government priorities: federal agencies increasingly create and procure open source software by default, an acknowledgement of its technical benefits as well as its value to the public interest, national security, and global competitiveness.
Open source software’s centrality in the technology produced and consumed by the federal government, the university sector, and the private sector highlights the pressing need for these actors to coordinate on ensuring its sustainability and security. In addition to fostering more robust software development practices, raising capacity, and developing educational programs, there is an urgent need to invest in individuals who create and maintain critical open source software components, often without financial support.
The German Sovereign Tech Fund — launched in 2021 to support the development and maintenance of open digital infrastructure — recently announced such support for the maintainers of Log4j, thereby bolstering its prospects for timely, secure production and sustainability. Importantly, Log4j is just one of numerous critical OSS components requiring similar support. Cybersecurity and Infrastructure Security Agency (CISA) director Jen Easterly has affirmed the importance of OSS while identifying its security vulnerabilities as a national security concern. Easterly rightly called for moving the responsibility and support for critical OSS components away from unpaid individual maintainers to the organizations that benefit from their efforts.
To address these challenges, the United States should establish a Digital Technology Fund to provide direct and indirect support to OSS projects and communities that are essential for the public interest, national security, and global competitiveness. The Digital Technology Fund would be funded by a coalition of federal, private, academic, and philanthropic stakeholders and would be administered by an independent nonprofit organization.
To better understand the risks and opportunities:
- The Office of the National Cyber Director should publish a synopsis of the feedback to its recent RFI regarding OSS security; it should then commission a comparative analysis of this synopsis and the German Sovereign Tech Fund to identify gaps and needs within the U.S. context.
To encourage multi-stakeholder participation and support:
- The White House should task the Open-Source Software Security Initiative (OS3I) working group with developing a strategy, draft legislation, and funding proposal for the Digital Technology Fund. The fund should be established as a public-private partnership with a focus on the security and sustainability of OSS; it could be designed to augment the existing Open Technology Fund, which supports internet freedom and digital rights. The strategy should include approaches for encouraging contribution from the private sector, universities, and philanthropy, along with the federal government, to the fund’s resources and organization.
To launch the Digital Technology Fund:
- Congress should appropriate funding in alignment with the proposal developed by the OS3I working group. Legislation could provide relevant agencies — many of which have identified secure OSS as a priority — with initial implementation and oversight responsibility for the fund, after which point a permanent board could be selected.
The realized and potential impact of open source software is transformative in terms of next-generation infrastructure, innovation, workforce development, and artificial intelligence safety. The Digital Technology Fund can play an essential and powerful role in raising our collective capacity to address important security and sustainability challenges by acknowledging and supporting the pioneering individuals who are advancing open source software.
In an era of accelerating advancements in data collection and analysis, realizing the full potential of open science hinges on balancing data accessibility and privacy. As we move towards a more open scientific environment, the volume of sensitive data being shared is swiftly increasing. While open science presents an opportunity to fast-track scientific discovery, it also poses a risk to privacy if not managed correctly.
Building on existing data and privacy efforts, the White House and federal science agencies should collaborate to develop and implement clear standards for research data privacy across the data management and sharing life cycle.
Federal agencies’ open data initiatives are a milestone in the move towards open science. They have the potential to foster greater collaboration, transparency, and innovation in the U.S. scientific ecosystem and lead to a new era of discovery. However, a shift towards open data also poses challenges for privacy, as sharing research data openly can expose personal or sensitive information when done without the appropriate care, methods, and tools. Addressing this challenge requires new policies and technologies that allow for open data sharing while also protecting individual privacy.
The U.S. government has shown a strong commitment to addressing data privacy challenges in various scientific and technological contexts. This commitment is underpinned by laws and regulations such as the Health Insurance Portability and Accountability Act and the regulations for human subjects research (e.g., Code of Federal Regulations Title 45, Part 46). These regulations provide a legal framework for protecting sensitive and identifiable information, which is crucial in the context of open science.
The White House Office of Science and Technology Policy (OSTP) has spearheaded the “National Strategy to Advance Privacy-Preserving Data Sharing and Analytics,” aiming to further the development of these technologies to maximize their benefits equitably, promote trust, and mitigate risks. The National Institutes of Health (NIH) operate an internal Privacy Program, responsible for protecting sensitive and identifiable information within NIH work. The National Science Foundation (NSF) complements these efforts with a multidisciplinary approach through programs like the Secure and Trustworthy Cyberspace program, aiming to develop new ways to design, build, and operate cyber systems, protect existing infrastructure, and motivate and educate individuals about cybersecurity.
Given the unique challenges within the open science context and the wide reach of open data initiatives across the scientific ecosystem, there remains a need for further development of clear policies and frameworks that protect privacy while also facilitating the efficient sharing of scientific data. Coordinated efforts across the federal government could ensure these policies are adaptable, comprehensive, and aligned with the rapidly evolving landscape of scientific research and data technologies.
To clarify standards and best practices for research data privacy:
- The National Institute of Standards and Technology (NIST) should build on its existing Research Data Framework to develop a new framework that is specific to research data privacy and addresses the unique needs of open science communities and practices. This would provide researchers with a clear roadmap for implementing privacy-preserving data sharing in their work.
- This framework should incorporate the principles of Privacy by Design, ensuring that privacy is an integral part of the research life cycle, rather than an afterthought.
- The framework should be regularly updated to stay current with the changes in state, federal, and international data privacy laws, as well as new privacy-preserving methodologies. This will ensure that it remains relevant and effective in the evolving data privacy landscape.
To ensure best practices are used in federally funded research:
- Funding agencies like the NIH and NSF should work with NIST to develop and implement training for Data Management and Sharing Plan applicants and reviewers. This training would equip both parties with knowledge of best practices in privacy-preserving data sharing in open science, thereby ensuring that data privacy measures are effectively integrated into research workflows.
- Agencies should additionally establish programs to foster privacy education, as recommended in the OSTP national strategy.
- Training on open data privacy could additionally be incorporated into agencies’ existing Responsible Conduct of Research requirements.
To catalyze continued improvements in data privacy technologies:
- Science funding agencies should increase funding for domain-specific research and development of privacy-preserving methods for research data sharing. Such initiatives would spur innovation in fields like cryptography and secure computation, leading to the development of new technologies that can broaden the scope of open and secure data sharing.
- To further stimulate innovation, these agencies could also host privacy/security innovation competitions, encouraging researchers and developers to create and implement cutting-edge solutions.
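One concrete family of privacy-preserving methods covered by the OSTP national strategy is differential privacy. The sketch below is an illustrative simplification, not a production-grade implementation: it applies the classic Laplace mechanism, which releases an aggregate statistic with calibrated random noise so that no single participant’s record can be reliably inferred from the published value.

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Draw Laplace(0, scale) noise via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count: int, epsilon: float, rng: random.Random) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so the noise scale is 1 / epsilon.
    """
    sensitivity = 1.0
    return true_count + laplace_noise(sensitivity / epsilon, rng)

# Hypothetical example: release the number of study participants
# with some attribute, instead of the exact (re-identifiable) count.
rng = random.Random(0)  # fixed seed so the demo is reproducible
released = private_count(true_count=100, epsilon=1.0, rng=rng)
```

Smaller epsilon values add more noise and give stronger privacy at the cost of accuracy — exactly the accessibility-versus-privacy trade-off that framework guidance would help researchers navigate.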
To facilitate inter-agency coordination:
- OSTP should launch a National Science and Technology Council subcommittee on research data privacy within the Committee on Science. This subcommittee should work closely with the Office of Management and Budget, leveraging its expertise in overseeing federal information resources and implementing data management policies. This collaboration would ensure a coordinated and consistent approach to addressing data privacy issues in open science across different federal agencies.
Increasingly, scientific innovations reside outside the realm of papers and patents. This is particularly true for open source hardware — hardware designs made freely and publicly available for study, modification, distribution, production, and sale. The shift toward open source aligns well with the White House’s 2023 Year of Open Science and can advance the accessibility and impact of federally funded hardware. Yet as the U.S. government expands its support for open science and open source, it will be increasingly vital that our intellectual property (IP) system is designed to properly identify and protect open innovations. Without consideration of open source hardware in prior art and attribution, these public goods are at risk of being patented over and having their accessibility lost.
Organizations like the Open Source Hardware Association (OSHWA) — a standards body for open hardware — provide verified databases of open source innovations. Over the past six years, for example, OSHWA’s certification program has grown to over 2600 certifications, and the organization has offered educational seminars and training. Despite the availability of such resources, open source certifications and resources have yet to be effectively incorporated into the IP system.
We recommend that the United States Patent and Trademark Office (USPTO) incorporate open source hardware certification databases into the library of resources to search for prior art, and create guidelines and training to build agency capacity for evaluating open source prior art.
Innovative and important hardware products are increasingly being developed as open source, particularly in the sciences, as academic and government research moves toward greater transparency. This trend holds great promise for science and technology, as more people from more backgrounds are able to replicate, improve, and share hardware. A prime example is the 3D printing industry. Once foundational patents in 3D printing were released, there was an explosion of invention in the field that led to desktop and consumer 3D printers, open source filaments, and even 3D printing in space.
For these benefits to be more broadly realized across science and technology, open source hardware must be acknowledged in a way that ensures scientists’ contributions are found and respected by the IP system’s prior art process. Scientists building open source hardware are rightfully concerned that their inventions will be patented over by someone else. Recently, a legal battle ensued after open hardware was wrongly patented over. While the patent was eventually overturned, the process took time and money, and revealed important holes in the United States’ prior art system. As another example, the Electronic Frontier Foundation found more than 30 pieces of prior art that called the validity of the ArrivalStar patent into question.
Erroneous patents can harm the validity of open source and limit the creation and use of new open source tools, especially in the case of hardware, which relies on prior art as its main protection. The USPTO — the administrator of intellectual property protection and a key actor in the U.S. science and technology enterprise — has an opportunity to ensure that open source tools are reliably identified and considered. Standardized and robust incorporation of open source innovations into the U.S. IP ecosystem would make science more reproducible and ensure that open science stays open, for the benefits of rapid improvement, testing, citizen science, and general education.
We recommend that the USPTO incorporate open source hardware into prior art searches and take steps to develop education and training to support the protection of open innovation in the patenting process.
- USPTO should add OSHWA’s certification – a known, compliant open source hardware certification program – to its non-patent search library.
- USPTO should put out a request for information (RFI) seeking input on (a) optimal approaches for incorporating open source innovations into searches for prior art, and (b) existing databases, standards, or certification programs that can/should be added to the agency’s non-patent search library.
- Based on the results of the RFI, USPTO’s Scientific and Technical Information Center should create guidelines and educational training programs to build examiners’ knowledge and capacity for evaluating open source prior art.
- USPTO should create clear public guidelines for the submission of new databases into the agency’s prior art library, and the requirements for their consideration and inclusion.
Incorporating open hardware into prior art searches will signal the importance of open source within the IP system. These actions have the potential to improve the efficiency of prior art identification, advance open source hardware by assuring institutional actors that open innovations will be reliably identified and protected, and ensure open science stays open.
The United States government spends billions of dollars every year to support the best scientific research in the world. The novel and multidisciplinary data produced by these investments have historically remained unavailable to the broader scientific community and the public. This limits researchers’ ability to synthesize knowledge, make new discoveries, and ensure the credibility of research. But recent guidance from the Office of Science and Technology Policy (OSTP) represents a major step forward for making scientific data more available, transparent, and reusable.
Federal agencies should take coordinated action to ensure that data sharing policies created in response to the 2022 Nelson memo incentivize high-quality data management and sharing plans (DMSPs), include robust enforcement mechanisms, and implement best practices in supporting a more innovative and credible research culture.
The 2022 OSTP memorandum “Ensuring Free, Immediate, and Equitable Access to Federally Funded Research” (the Nelson memo) represents a significant step toward opening up not only the findings of science but its materials and processes as well. By including data and related research outputs as items that should be publicly accessible, defining “scientific data” to include “material… of sufficient quality to validate and replicate research findings” (emphasis added), and specifying that agency plans should cover “scientific data that are not associated with peer-reviewed scholarly publications,” this guidance has the potential to greatly improve the transparency, equity, rigor, and reusability of scientific research.
Yet while the 2022 Nelson memo provides a crucial foundation for open, transparent, and reusable scientific data, a preliminary review of agency responses reveals considerable variation in how access to data and research outputs will be handled. Agencies vary in how rigorously policies will be reviewed and enforced, and in how specifically they define data as the materials needed to “validate and replicate” research findings. Finally, agencies could and should go further by enabling the accessibility, discoverability, and citation of researchers’ data sharing plans themselves, fully supporting a research ecosystem that builds cumulative scientific evidence.
To better incentivize quality and reusability in data sharing, agencies should:
- Make DMSPs publicly available in an easy-to-use interface on their websites where individual grants are listed to increase accountability for stated plans and discoverability of research outputs.
- Additionally, give DMSPs persistent, unique identifiers (e.g., digital object identifiers, or DOIs) so that they can be cited, read, and used.
- Make DMSPs subject to peer review as part of the same process that other aspects of a proposed research project’s intellectual merit are evaluated. This will directly incentivize high standards of planned data sharing practices and enable the diffusion of best practices across the research community.
To better ensure compliance and comprehensive availability, agencies should:
- Coordinate across agencies to create a consistent mechanism for DMSP enforcement to reduce applicant uncertainty about agencies’ expectations and processes.
- Approaches to enforcement should include evaluation of past adherence to DMSPs in future grant applications and should ensure that early career researchers and researchers from lower-resourced institutions are not penalized for a lack of a data-sharing record.
- Assert that “data” includes all digital materials needed for external researchers to replicate and validate findings.
- Work with domain-specific stakeholders to develop guidance for the specific components that should be included as research outputs (e.g., data, codebooks, metadata, protocols, analytic code, preregistrations).
Updates to the Center for Open Science’s efforts to track, curate, and recommend best practices in implementing the Nelson memo will be disseminated through publication and through posting on our website at https://www.cos.io/policy-reform.
Federally funded research relies heavily on software. Despite considerable evidence demonstrating software’s crucial role in research, there is no systematic process for researchers to acknowledge its use, and those building software lack recognition for their work. While researchers want to give appropriate acknowledgment for the software they use, many are unsure how to do so effectively. With greater knowledge of what software is used in the research underlying publications, federal research funding agencies and researchers themselves will be better able to make efficient funding decisions, enhance the sustainability of software infrastructure, identify vital yet often overlooked digital infrastructure, and inform workforce development.
All agencies that fund research should require that resulting publications include a Software Bill of Materials (SBOM) listing the software used in the research.
Software is a cornerstone in research. Evidence from numerous surveys consistently shows that a majority of researchers rely heavily on software. Without it, their work would likely come to a standstill. However, there is a striking contrast between the crucial role that software plays in modern research and our knowledge of what software is used, as well as the level of recognition it receives. To bridge this gap, we propose policies to properly acknowledge and support the essential software that powers research across disciplines.
Software citation is one way to address these issues, but citation alone is insufficient as a mechanism to generate software infrastructure insights. In recent years, there has been a push for the recognition of software as a crucial component of scholarly publications, leading to the creation of guidelines and specialized journals for software citation. However, software remains under-cited due to several challenges, including friction with journals’ reference list standards, confusion regarding which or when software should be cited, and opacity of the roles and dependencies among cited software. Therefore, we need a new approach to this problem.
A Software Bill of Materials (SBOM) is a list of the software components that were used in an effort, such as building application software. Executive Order 14028 requires that all federal agencies obtain SBOMs when they purchase software. For this reason, many high-quality open-source SBOM tools already exist and can be straightforwardly used to generate descriptions of software used in research.
SBOM tools can identify and list the stack of software underlying each publication, even when the code itself is not openly shared. If we were able to combine software manifests from many publications together, we would have the insights needed to better advance research. SBOM data can help federal agencies find the right mechanism (funding, in-kind contribution of time) to sustain software critical to their missions. Better knowledge about patterns of software use in research can facilitate better coordination among developers and reduce friction in their development roadmaps. Understanding the software used in research will also promote public trust in government-funded research through improved reproducibility.
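Because SBOMs are machine-readable component inventories, the underlying idea can be prototyped in a few lines. The sketch below is a simplified illustration, not a standards-compliant CycloneDX or SPDX document: it uses only Python’s standard library to enumerate the packages installed in the current environment — the kind of component list an SBOM for a research codebase would record.

```python
import importlib.metadata
import json

def build_manifest() -> dict:
    """Build a minimal SBOM-like inventory of the current Python
    environment. Illustrative only: real SBOMs follow the CycloneDX
    or SPDX specifications and record far more detail (licenses,
    hashes, dependency relationships)."""
    components = []
    for dist in importlib.metadata.distributions():
        name = dist.metadata.get("Name")
        if name:  # skip entries with missing metadata
            components.append({"name": name, "version": dist.version})
    # Deduplicate and sort for a stable, diff-friendly manifest
    unique = {(c["name"], c["version"]) for c in components}
    return {"components": sorted(
        [{"name": n, "version": v} for n, v in unique],
        key=lambda c: c["name"].lower(),
    )}

if __name__ == "__main__":
    print(json.dumps(build_manifest(), indent=2))
```

Dedicated SBOM generators go further — resolving transitive dependencies and emitting standard formats — but even this sketch shows how little friction per-publication manifests would add to a computational workflow.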
We recommend the adoption of Software Bills of Materials (SBOMs) — which are already used by federal agencies for security reasons — to understand the software infrastructure underlying scientific research. Given their mandatory use by software suppliers to the federal government, SBOMs are ideal for highlighting software dependencies and potential security vulnerabilities. The same tools and practices can be used to generate SBOMs for publications. We therefore recommend that all agencies that fund research require resulting publications to include an SBOM listing the software used in the research. Additionally, for research that has already been published with supplementary code materials, SBOMs should be generated retrospectively. This will not only address the issue of software infrastructure sustainability but also enhance the verification of research by clearly documenting the specific software versions used and directing limited funds to the software maintenance that most needs them.
- The Office of Science and Technology Policy (OSTP) should coordinate with agencies to undertake feasibility studies of this policy, building confidence that it would work as intended.
- Coordination should include funding agencies, federal actors currently applying SBOMs in software procurement, organizations developing SBOM tools and standards, and scientific stakeholders.
- Based on the results of the study, OSTP should direct funding agencies to design and implement policies requiring that publications resulting from federal funding include an openly accessible, machine-readable SBOM for the software used in the research.
- OSTP and the Office of Management and Budget should additionally use the Multi-Agency Research and Development Budget Priorities to encourage agencies’ collection, integration, and analysis of SBOM data to inform funding and workforce priorities and to catalyze additional agency resource allocations for software infrastructure assessment in follow-on budget processes.
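To illustrate the kind of analysis agencies could run over collected SBOM data, the sketch below counts how often each component appears across a directory of CycloneDX-style SBOM JSON files, one per publication. The directory layout and field names are assumptions for illustration, not an existing agency pipeline.

```python
import json
from collections import Counter
from pathlib import Path

def rank_dependencies(sbom_dir):
    """Rank software components by how many publication SBOMs depend on them.

    Assumes one CycloneDX-style JSON file per publication in sbom_dir
    (an illustrative convention). Components that appear in many SBOMs
    are candidates for sustained maintenance funding.
    """
    counts = Counter()
    for path in Path(sbom_dir).glob("*.json"):
        sbom = json.loads(path.read_text())
        for component in sbom.get("components", []):
            counts[component.get("name", "unknown")] += 1
    # Most widely depended-upon components first
    return counts.most_common()
```

A ranking like this is the simplest version of the integration step described above: it turns per-publication manifests into an agency-level view of which software is mission-critical.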
The National Institutes of Health (NIH) spent $49 billion in fiscal year 2023 on research and development, a significant annual investment in medical treatment discovery and development. Despite NIH’s research investments producing paradigm-shifting therapies, such as CAR-T cancer treatments, CRISPR-enabled gene therapy for sickle cell, and the mRNA vaccine for COVID-19, the agency and medical scientists more broadly are grappling with declining trust. This further compounds decades-long mistrust in medical research by marginalized populations, whom researchers struggle to recruit as participants in medical research. If things do not improve, a lack of representation may limit access to effective medical interventions, worsen health disparities, and cost hundreds of billions of dollars.
A new paradigm for research is needed to ensure meaningful public engagement and rebuild trust. Co-production — in which researchers, patients, and practitioners work together as collaborators — offers a framework for embedding collaboration and trust into the biomedical enterprise.
The National Institutes of Health should form an Office of Co-Production in the Office of the Director, Division of Program Coordination, Planning, and Strategic Initiatives.
In accordance with Executive Order 13985 and ongoing public access initiatives, science funding and R&D agencies have been seeking ways to embed equity, accessibility, and public participation into their processes. The NIH has been increasingly working to advance publicly engaged and led research, illustrated by trainings and workshops around patient-engaged research, funding resources for community partnerships like RADx Underserved Populations, community-led research programs like Community Partnerships to Advance Science for Society (ComPASS), and support from the new NIH director.
To ensure that public engagement efforts are sustainable, it is critical to invest in lasting infrastructure capable of building and maintaining these ties. Indeed, in their Recommendation on Open Science, the United Nations Educational, Scientific, and Cultural Organization outlined infrastructure that must be built for scientific funding to include those beyond STEMM practitioners in research decision-making. One key approach involves explicitly supporting the co-production of research, a process by which “researchers, practitioners and the public work together, sharing power and responsibility from the start to the end of the project, including the generation of knowledge.”
Co-production provides a framework with which the NIH can advance patient involvement in research, health equity, uptake and promotion of new technologies, diverse participation in clinical trials, scientific literacy, and public health. Doing so effectively would require new models for including and empowering patient voices in the agency’s work.
The NIH should create an Office of Co-Production within the Office of the Director, Division of Program Coordination, Planning, and Strategic Initiatives (DPCPSI). The Office of Co-Production would institutionalize best practices for co-producing research, train NIH and NIH-funded researchers in co-production principles, build patient-engaged research infrastructure, and fund pilot projects to grow the research field.
The NIH Office of Co-Production, co-led by patient advocates (PAs) and NIH personnel, should be established with the following key programs:
- A Resources and Training Program that trains patient advocates and researchers, both separately and together, so they can understand one another and work together as collaborators. For researchers, this would include building understanding of the communities affected by the diseases they investigate, relationship-building strategies, and ways to address power differentials. For patient advocates, it would include building understanding of research processes, including disease pathogenesis, different mechanisms of action and targets for research, and clinical research processes such as regulatory requirements and ethical considerations. PAs could also be trained to qualify to serve on Data and Safety Monitoring Boards (DSMBs).
- A Patient Advocate Advisors Management Program that would manage the placement of PAs into community advisory bodies, into advisory roles to NIH institutes’ major initiatives, onto ethical advisory bodies, onto DSMBs, onto peer review committees and study sections, and onto key long-range planning bodies, including those determining research prioritization.
- A Co-Production Principles and Practice Program led by a senior team of PAs and advisors that coordinates, organizes, and facilitates cross-disease understanding and solidarity and establishes basic principles for patient advocate engagement, grant requirements, and ongoing assessment of the quality of co-production and relational infrastructure. This program will focus on key principles such as:
- Sharing of power – the research is jointly owned and people work together to achieve a joint understanding
- Including all perspectives and skills – make sure the research team includes all those who can make a contribution
- Respecting and valuing the knowledge of all those working together on the research – everyone is of equal importance
- Reciprocity – everybody benefits from working together
- Building and maintaining relationships – an emphasis on relationships is key to sharing power. There needs to be joint understanding, consensus, and clarity over roles and responsibilities. It is also important to value people and unlock their potential.
- A Communications, Planning, and Policy Program that works with the NIH director and institute directors to advocate for mutual goals to advance the public engagement mission of the NIH and its institutes.
- A Grantmaking Program that can pilot the expansion and scaling of NIH-sponsored Co-Production Cores and support the involvement of patient advocates in NIH-funded research across the country through equitable participation and standard compensation policies.
Creating an Office of Co-Production would achieve the following goals:
- It would address the growing gulf between the public, who ultimately fund biomedical research with their tax dollars, and researchers by directly and meaningfully engaging patient advocates in biomedical and clinical science. Co-production builds relationships and trust as it requires that relationships are valued and nurtured and that efforts are made to redress power differentials.
- By working early and often with patient populations around treatments, co-production helps medical scientists better anticipate and address risk early in the research process.
- It would institutionalize a known model for collaborative research that efficiently uses research dollars. During the HIV/AIDS crisis, rapid advances in biomedical and clinical research were made possible by patient advocate involvement in trial design, recruitment, and analysis.
- The NIH Office of Co-Production would create a replicable model of institutional support for co-production that can be scaled across the federal R&D agencies. The NIH should regularly report on the office’s progress to encourage replication in other agencies that can benefit from increased public participation.
While scientific publications and data are increasingly made publicly accessible, designs and documentation for scientific hardware — another key output of federal funding and driver of innovation — remain largely closed from view. This status quo can lead to redundancy, slowed innovation, and increased costs. Existing standards and certifications for open source hardware provide a framework for bringing the openness of scientific tools in line with that of other research outputs. Doing so would encourage the collective development of research hardware, reduce wasteful parallel creation of basic tools, and simplify the process of reproducing research. The resulting open hardware would be available to the public, researchers, and federal agencies, accelerating the pace of innovation and ensuring that each community receives the full benefit of federally funded research.
Federal grantmakers should establish a default expectation that hardware developed as part of federally supported research be released as open hardware. To retain current incentives for translation and commercialization, grantmakers should design exceptions to this policy for researchers who intend to patent their hardware.
Federal funding plays an important role in setting norms around open access to research. The White House Office of Science and Technology Policy (OSTP)’s recent Memorandum Ensuring Free, Immediate, and Equitable Access to Federally Funded Research makes it clear that open access is a cornerstone of a scientific culture that values collaboration and data sharing. OSTP’s recent report on open access publishing further declares that “[b]road and expeditious sharing of federally funded research is fundamental for accelerating discovery on critical science and policy questions.”
These efforts have been instrumental in providing the public with access to scientific papers and data — two of the foundational outputs of federally funded research. Yet hardware, another key input and output of science and innovation, remains largely hidden from view. To continue the move towards an accessible, collaborative, and efficient scientific enterprise, public access policies should be expanded to include hardware. Specifically, making federally funded hardware open source by default would have a number of specific and immediate benefits:
Reduce Wasteful Reinvention. Researchers are often forced to develop testing and operational hardware that supports their research. In many cases, unbeknownst to those researchers, this hardware has already been developed as part of other projects by other researchers in other labs. However, since that original hardware was not openly documented and licensed, subsequent researchers are not able to learn from and build upon this previous work. The lack of open documentation and licensing is also a barrier to more intentional, collaborative development of standardized testing equipment for research.
Increase Access to Information. As the OSTP memo makes clear, open access to federally funded research allows all Americans to benefit from our collective investment. This broad and expeditious sharing strengthens our ability to be a critical leader and partner on issues of open science around the world. Immediate sharing of research results and data is key to ensuring that benefit. Explicit guidance on sharing the hardware developed as part of that research is the next logical step towards those goals.
Alternative Paths to Recognition. Evaluating a researcher’s impact often includes an assessment of the number of patents they can claim, in large part because patents are easy to quantify. However, this focus on patents creates a perverse incentive for researchers to erect barriers to follow-on study even if they have no intention of using patents to commercialize their research. Encouraging researchers to open source the hardware developed as part of their research creates an alternative path to evaluate their impact, especially as those pieces of open source hardware are adopted and improved by others. Uptake of researchers’ open hardware could be included in assessments on par with any patented work. This path recognizes the contribution to a collective research enterprise.
Verifiability. Open access to data and research are important steps towards allowing third parties to verify research conclusions. However, these tools can be limited if the hardware used to generate the data and produce the research are not themselves open. Open sourcing hardware simplifies the process of repeating studies under comparable conditions, allowing for third-party validation of important conclusions.
Federal grantmaking agencies should establish a default presumption that recipients of research funds make hardware developed with those funds available on open terms. This policy would apply to hardware built as part of the research process, as well as hardware that is part of the final output. Grantees should be able to opt out of this requirement with regard to hardware that is expected to be patented; such an exception would make openness the default without undermining existing patent-based development pathways.
To establish this policy, OSTP should conduct a study and produce a report on the current state of federally funded scientific hardware and opportunities for open source hardware policy.
- As part of the study, OSTP should coordinate and convene stakeholders to discuss and align on policy implementation details — including relevant researchers, funding agencies, U.S. Patent and Trademark Office officials, and leaders from university tech transfer offices.
- The report should provide a detailed and widely applicable definition of open source hardware, drawing on definitions established in the community — in particular, the definition maintained by the Open Source Hardware Association, which has been in use for over a decade and is based on the widely recognized definition of open source software maintained by the Open Source Initiative.
- It should also lay out a broadly acceptable policy approach for encouraging open source by default, and provide guidance to agencies on implementation. The policy framework should include recommendations for:
- Minimally burdensome components of the grant application and progress report with which to capture relevant information regarding hardware and to ensure planning and compliance for making outputs open source
- A clear and well-defined opportunity for researchers to opt out of this mandate when they intend to patent their hardware
The Office of Management and Budget (OMB) should issue a memorandum establishing a policy on open source hardware in federal research funding. The memorandum should include:
- The rationale for encouraging open source hardware by default in federally funded scientific research, drawing on the motivation of public access policies for publications and data
- A finalized definition of open source hardware to be used by agencies in policy implementation
- The incorporation of OMB’s Open Source Scientific Hardware Policy, in alignment with the OSTP report and recommendations
The U.S. government and taxpayers are already paying to develop hardware created as part of research grants. In fact, because there is not currently an obligation to make that hardware openly available, the federal government and taxpayers are likely paying to develop identical hardware over and over again.
Grantees have already proven that existing open publication and open data obligations promote research and innovation without unduly restricting important research activities. Expanding these obligations to include the hardware developed under these grants is the natural next step.
Scientific research is the foundation of progress, creating innovations like new treatments for melanoma and providing behavioral insights to guide policy in responding to events like the COVID-19 pandemic. This potential for real-world impact is best realized when research is rigorous, credible, and subject to external confirmation. However, evidence suggests that, too often, research findings are not reproducible or trustworthy, preventing policymakers, practitioners, researchers, and the public from fully capitalizing on the promise of science to improve social outcomes in domains like health and education.
To build on existing federal efforts supporting scientific rigor and integrity, funding agencies should study and pilot new programs to incentivize researchers’ engagement in credibility-enhancing practices that are presently undervalued in the scientific enterprise.
Federal science agencies have a long-standing commitment to ensuring the rigor and reproducibility of scientific research for the purposes of accelerating discovery and innovation, informing evidence-based policymaking and decision-making, and fostering public trust in science. In the past 10 years alone, policymakers have commissioned three National Academies reports, a Government Accountability Office (GAO) study, and a National Science and Technology Council (NSTC) report exploring these and related issues. Unfortunately, flawed, untrustworthy, and potentially fraudulent studies continue to affect the scientific enterprise.
The U.S. government and the scientific community have increasingly recognized that open science practices — like sharing research code and data, preregistering study protocols, and supporting independent replication efforts — hold great promise for ensuring the rigor and replicability of scientific research. Many U.S. science agencies have accordingly launched efforts to encourage these practices in recent decades. Perhaps the most well-known example is the creation of clinicaltrials.gov and the requirements that publicly and privately funded trials be preregistered (in 2000 and 2007, respectively), leading, in some cases, to fewer trials reporting positive results.
More recent federal actions have focused on facilitating sharing of research data and materials and supporting open science-related education. These efforts seek to build on areas of consensus given the diversity of the scientific ecosystem and the resulting difficulty of setting appropriate and generalizable standards for methodological rigor. However, further steps are warranted. Many key practices that could enhance the government’s efforts to increase the rigor and reproducibility of scientific practice — such as the preregistration of confirmatory studies and replication of influential or decision-relevant findings — remain far too rare. A key challenge is the weak incentive to engage in these practices. Researchers perceive them as costly or undervalued given the professional rewards created by the current funding and promotion system, which encourages exploratory searches for new “discoveries” that frequently fail to replicate. Absent structural change to these incentives, uptake is likely to remain limited.
To fully capitalize on the government’s investments in education and infrastructure for open science, we recommend that federal funding agencies launch pilot initiatives to incentivize and reward researchers’ pursuit of transparent, rigorous, and public good-oriented practices. Such efforts could enhance the quality and impact of federally funded research at relatively low cost, encourage alignment of priorities and incentive structures with other scientific actors, and help science and scientists better deliver on the promise of research to benefit society. Specifically, NIH and NSF should:
Establish discipline-specific offices to launch initiatives around rigor and reproducibility
- Use the National Institute of Neurological Disorders and Stroke’s Office of Research Quality (ORQ) as a model; similar ORQs would encourage uptake of under-incentivized practices through both internal initiatives and external funding programs.
- To ensure that programs are tailored to fit the priorities of a single disciplinary context, offices should be established within individual NIH institutes and within individual NSF directorates.
Incorporate assessments of transparent and credible research methods into their learning agendas
- Include questions to better understand existing practices, such as “How frequently and effectively do [agency]-funded researchers across disciplines engage in open science practices — e.g., preregistration, publication of null results, and external replication — and how do these practices relate to future funding and research outcomes?”
- Include questions to inform new policies and initiatives, such as “What steps could [agency] take to incentivize broader uptake of open science practices, and which ones — e.g., funding programs, application questions, standards, and evaluation models — are most effective?”
- To answer these questions, solicit feedback from applicants, reviewers, and program officers, and partner with external “science of science management” researchers to design rigorous prospective and retrospective studies; use the information obtained to develop new processes to incentivize and reward open science practices in funded research.
Expand support for third-party replications
- Allocate a consistent proportion of funds to support independent replications of key findings through the use of non-grant mechanisms — e.g., prizes, cooperative agreements, and contracts. The high value placed on scientific novelty discourages such studies, which could provide valuable information about treatment, policy, regulatory approval, or future scientific inquiry. A combination of agency prioritization and public requests for information should be used to identify topics for which additional supportive or contradictory evidence would provide significant societal and/or scientific benefit.
- The NSF, in partnership with an independent third-party organization like the Institute for Replication, should run a pilot study to assess the utility of commissioning targeted and/or randomized replication studies for advancing research rigor and informing future funding.
When creating, using, and buying tools for agency science, federal agencies rely almost entirely on proprietary instruments. This is a missed opportunity: open source hardware — machines, devices, and other physical things whose designs have been released to the public so that anyone can make, modify, distribute, and use them — offers significant benefits to federal agencies, to the creators and users of scientific tools, and to the scientific ecosystem.
In scientific work in the service of agency missions, the federal government should use and contribute to open source hardware.
Open source has transformative potential for science and for government. Open source tools are generally lower cost, promote reuse and customization, and can avoid dependency on a particular vendor for products. Open source engenders transparency and authenticity and builds public trust in science. Open source tools and approaches build communities of technologists, designers, and users, and they enable co-design and public engagement with scientific tools. Because of these myriad benefits, the U.S. government has made significant strides in using open source software for digital solutions. For example, 18F, an office within the General Services Administration (GSA) that acts as a digital services consultancy for agency partners, defaults to open source for software created in-house with agency staff as well as in contracts it negotiates.
Open science hardware, as defined by the Gathering for Open Science Hardware, is any physical tool used for scientific investigations that can be obtained, assembled, used, studied, modified, shared, and sold by anyone. It includes standard lab equipment as well as auxiliary materials, such as sensors, biological reagents, and analog and digital electronic components. Beyond a set of scientific tools, open science hardware is an alternative approach to the scientific community’s reliance on expensive and proprietary equipment, tools, and supplies. Open science hardware is growing quickly in academia, with new networks, journals, publications, and events crossing institutions and disciplines. There is a strong case for open science hardware in the service of the United Nations’ Sustainable Development Goals, as a collaborative solution to challenges in environmental monitoring, and to increase the impact of research through technology transfer. Although limited so far, some federal agencies support open science hardware, such as an open source Build-It-Yourself Rover; the development of infrastructure, including NIH 3D, a platform for sharing 3D printing files and documentation; and programs such as the National Science Foundation’s Pathways to Enable Open-Source Ecosystems.
If federal agencies regularly used and contributed to open science hardware for agency science, it would have a transformative effect on the scientific ecosystem.
Federal agency procurement practices are complex, time-intensive, and difficult to navigate. Like other small businesses and organizations, the developers and users of open science hardware often lack the capacity and specialized staff needed to compete for federal procurement opportunities. Recent innovations demonstrate how the federal government can change how it buys and uses equipment and supplies. Agency Innovation Labs at the Department of Defense, Department of Homeland Security, National Oceanic and Atmospheric Administration (NOAA), National Aeronautics and Space Administration, National Institute of Standards and Technology, and the Census Bureau have developed innovative procurement strategies to allow for more flexible and responsive government purchasing and provide in-house expertise to procurement officers on using these models in agency contexts. These teams provide much-needed infrastructure for continuing to expand the understanding and use of creative, mission-oriented procurement approaches, which can also support open science hardware for agency missions.
Agencies such as the Environmental Protection Agency (EPA), NOAA, and the Department of Agriculture (USDA) are well positioned to both benefit greatly from and make essential contributions to the open source ecosystem. These agencies have already demonstrated interest in open source tools; for example, the NOAA Technology Partnerships Office has supported the commercialization of open science hardware that is included in the NOAA Technology Marketplace, including an open source ocean temperature and depth logger and a sea temperature sensor designed by NOAA researchers and partners. These agencies have significant need for scientific instrumentation for agency work, and they often develop and use custom solutions for agency science. Each of these agencies has a demonstrated commitment to broadening public participation in science, which open science hardware supports. For example, EPA’s Air Sensor Loan Programs bring air sensor technology to the public for monitoring and education. Moreover, these agencies’ missions invite public engagement; a commitment to open source instrumentation and tools would build a shared infrastructure for progress in the public good.
We recommend that the GSA take the following steps to build capacity for the use of open science hardware across government:
- Create an Interagency Community of Practice for federal staff working on open source–related topics.
- Direct the Technology Transformation Services to create boilerplate language for procurement of open source hardware that is compliant with the Federal Acquisition Regulation and the America COMPETES Reauthorization Act of 2010.
- Conduct training on open source and open science hardware for procurement professionals across government.
We also recommend that EPA, NOAA, and USDA take the following steps to build capacity for agency use of open science hardware:
- Task agency representatives to identify agency scientific instrumentation needs that are most amenable to open source solutions. For example, the EPA Office of Research and Development could use and contribute to open source air quality sensors for research on spatial and temporal variation in air quality, and the USDA could use and contribute to an open source soil testing kit.
- Task agency challenge and prize coordinators with working intra-agency on a challenge or prize competition to create an open source option for one of the identified scientific instruments or sensors above that meets agency quality requirements.
- Support agency staff in using open source approaches when creating and using scientific instrumentation. Include open source scientific instrumentation in internal communication products, highlight staff efforts to create and use open science hardware, and provide training to agency staff on its development and use.
- Integrate open source hardware into Procurement Innovation Labs or agency procurement offices. This may include training for acquisition professionals on the use of open science hardware so that they can understand the benefits and better support agency staff use. This can include options for using open source designs and how to understand and use open source licenses.
Defaulting to open science hardware for agency science will produce an open library of replicable, customizable tools for science and yield a much higher return on investment. Beyond that, prioritizing open science hardware in agency science would allow all kinds of institutions, organizations, communities, and individuals to contribute to agency science goals in a way that builds upon each of their efforts.
Grant writing is a significant part of a scientist’s work. While time-consuming, this process generates a wealth of innovative ideas and in-depth knowledge. However, much of this valuable intellectual output — particularly from the roughly 70% of unfunded proposals — remains unseen and underutilized. The default secrecy of scientific proposals is based on many valid concerns, yet it represents a significant loss of potential progress and a deviation from government priorities around openness and transparency in science policy. Facilitating public accessibility of grant proposals could transform them into a rich resource for collaboration, learning, and scientific discovery, thereby significantly enhancing the overall impact and efficiency of scientific research efforts.
We recommend that funding agencies implement a process by which researchers can opt to make their grant proposals publicly available. This would enhance transparency in research, encourage collaboration, and optimize the public-good impacts of the federal funding process.
Scientists spend a great deal of time, energy, and effort writing applications for grant funding. Writing grants has been estimated to take roughly 15% of a researcher’s working hours and involves putting together an extensive assessment of the state of knowledge, identifying key gaps in understanding that the researcher is well-positioned to fill, and producing a detailed roadmap for how they plan to fill that knowledge gap over a span of (typically) two to five years. At major federal funding agencies like the National Institutes of Health (NIH) and National Science Foundation (NSF), the success rate for research grant applications tends to fall in the range of 20%–30%.
The upfront labor required of scientists to pursue funding, and the low success rates of applications, have led some to estimate that ~10% of scientists’ working hours are “wasted.” Other scholars argue that the act of grant writing is itself a valuable and generative process that produces spillover benefits by incentivizing research effort and informing future scholarship. Under either viewpoint, one approach to reducing the “waste” and dramatically increasing the benefits of grant writing is to encourage proposals — both funded and unfunded — to be released as public goods, thus unlocking the knowledge, frontier ideas, and roadmaps for future research that are currently hidden from view.
The idea of making grant proposals public is a sensitive one. Indeed, there are valid reasons for keeping proposals confidential, particularly when they contain intellectual property or proprietary information, or when they are in the early stages of development. However, these reasons do not apply to all proposals, and many potential concerns apply only for a short time frame. Therefore, neither full disclosure nor full secrecy is optimal; a more flexible approach that encourages researchers to choose when and how to share their proposals could yield significant benefits with minimal risks.
The potential benefits to the scientific community and science funders include:
- Encouraging collaboration by making promising unfunded ideas and shared interests discoverable by disparate researchers
- Supporting early-career scientists by giving them access to a rich collection of successful and unsuccessful proposals from which to learn
- Facilitating cutting-edge science-of-science research to unlock policy-relevant knowledge about research programs and scientific grantmaking
- Allowing for philanthropic crowd-in by creating a transparent and searchable marketplace of grant proposals that can attract additional or alternative funding
- Promoting efficiency in the research planning and budgeting process by increasing transparency
- Giving scientists, science funders, and the public a view into the whole of early-stage scientific thought, above and beyond the outputs of completed projects
Federal funding agencies should develop a process to allow and encourage researchers to share their grant proposals publicly, within existing infrastructures for grant reporting (e.g., NIH RePORTER). Sharing should be minimally burdensome and incorporated into existing application frameworks. The process should be flexible, allowing researchers to opt in or out — and to specify other characteristics like embargoes — to ensure applicants’ privacy and intellectual property concerns are mitigated.
The White House Office of Management and Budget (OMB) should develop a framework for publicly sharing grant proposals.
- OMB’s Evidence Team — in partnership with federal funding agencies (e.g., NIH, NSF, NASA, DOE) — should review statutory and regulatory frameworks to determine whether there are legal obstacles to sharing proposal content for extramural grant applications with applicant permission.
- OMB should then issue a memo clarifying the manner in which agencies can make proposals public and directing agencies to develop plans to allow and encourage the public availability of scientific grant proposals, in alignment with the Foundations for Evidence-Based Policymaking Act and the “Memorandum on Restoring Trust in Government Through Scientific Integrity and Evidence-Based Policymaking.”
The NSF should run a focused pilot program to assess opportunities and obstacles for proposal sharing across disciplines.
- NSF’s Division of Institution and Award Support (DIAS) should work with at least three directorates to launch a pilot study assessing applicants’ perspectives on proposal sharing, their perceived risks and concerns, and disciplinary differences in applicants’ views.
- The National Science Board (NSB) should produce a report outlining the findings of the pilot study and the implications for optimal approaches to facilitating public access to grant proposals.
Based on the NSB’s report, OSTP and OMB should work with federal funding agencies to refine and implement a proposal-sharing process across agencies.
- OSTP should work with funding agencies to develop a unified application section where researchers can indicate their release preferences. The group should agree on a set of shared parameters to align the request across agencies. For example, the guidelines should establish:
- A set of embargo options, such that applicants can choose to make their proposal available after, for example, 2, 5, or 10 years
- Whether the sharing of proposals can be made conditional on acceptance/rejection
- When in the application process applicants should be asked to opt in or out, and an approach for allowing applicants to revise their decision following submission
- OMB should include public access for grant proposals as a budget priority, emphasizing its potential benefits for bolstering innovation, efficiency, and government data availability. It should also provide guidance and technical assistance to agencies on how to implement the open grants process and require agencies to provide evidence of their plans to do so.
Federal agencies are striving to expand the role of the public, including members of marginalized communities, in developing regulatory policy. At the same time, agencies are considering how to mobilize data of increasing size and complexity to ensure that policies are equitable and evidence-based. However, community engagement has rarely been extended to the process of examining and interpreting data. This is a missed opportunity: community members can offer critical context to quantitative data, ground-truth data analyses, and suggest ways of looking at data that could inform policy responses to pressing problems in their lives. Realizing this opportunity requires a structure for public participation in which community members can expect both support from agency staff in accessing and understanding data and genuine openness to new perspectives on quantitative analysis.
To deepen community involvement in developing evidence-based policy, federal agencies should form Data Collaboratives in which staff and members of the public engage in mutual learning about available datasets and their affordances for clarifying policy problems.
Executive Order 14094 and the Office of Management and Budget’s subsequent guidance memo direct federal agencies to broaden public participation and community engagement in the federal regulatory process. Among the aims of this policy are to establish two-way communications and promote trust between government agencies and the public, particularly members of historically underserved communities. Under the Executive Order, the federal government also seeks to involve communities earlier in the policy process. This new attention to community engagement can seem disconnected from the federal government’s long-standing commitment to evidence-based policy and efforts to ensure that data available to agencies support equity in policy-making; assessing data and evidence is usually considered a job for people with highly specialized, quantitative skills. However, lack of transparency about the collection and uses of data can undermine public trust in government decision-making. Further, communities may have vital knowledge that credentialed experts don’t, knowledge that could help put data in context and make analyses more relevant to problems on the ground.
For the federal government to achieve its goals of broadened participation and equitable data use, it must create opportunities for members of the public and underserved communities to help shape how data are used to inform public policy. Data Collaboratives would provide such an opportunity. Data Collaboratives would consist of agency staff and individuals affected by the agency’s policies. Each member of a Data Collaborative would be regarded as someone with valuable knowledge and insight; staff members’ role would not be to explain or educate but to learn alongside community participants. To foster mutual learning, Data Collaboratives would meet regularly and frequently (e.g., every other week) for a year or more.
Each Data Collaborative would focus on a policy problem that an agency wishes to address. The Environmental Protection Agency might, for example, form a Data Collaborative on pollution prevention in the oil and gas sector. Depending on the policy problem, staff from multiple agencies may be involved alongside community participants. The Data Collaborative’s goal would be to surface the datasets potentially relevant to the policy problem, understand how they could inform the problem, and identify their limitations. Data Collaboratives would not make formal recommendations or seek consensus; however, ongoing deliberations about the datasets and their affordances can be expected to create a more robust foundation for the use of data in policy development and the development of additional data resources.
The Office of Management and Budget should:
- Establish a government-wide Data Collaboratives program in consultation with the Chief Data Officers Council.
- Work with leadership at federal agencies to identify policy problems that would benefit from consideration by a Data Collaborative. It is expected that deputy administrators, heads of equity and diversity offices, and chief data officers would be among those consulted.
- Hire a full-time director of Data Collaboratives to lead such tasks as coordinating with public participants, facilitating meetings, and ensuring that relevant data resources are available to all collaborative members.
- Ensure agencies’ ability to provide the material support necessary to secure the participation of underrepresented community members in Data Collaboratives, such as stipends, childcare, and transportation.
- Support agencies in highlighting the activities and accomplishments of Data Collaboratives through social media, press releases, open houses, and other means.
Data Collaboratives would move public participation and community engagement upstream in the policy process by creating opportunities for community members to contribute their lived experience to the assessment of data and the framing of policy problems. This would in turn foster two-way communication and trusting relationships between government and the public. Data Collaboratives would also help ensure that data and their uses in federal government are equitable, by inviting a broader range of perspectives on how data analysis can promote equity and where relevant data are missing. Finally, Data Collaboratives would be one vehicle for enabling individuals to participate in science, technology, engineering, math, and medicine activities throughout their lives, increasing the quality of American science and the competitiveness of American industry.