Make publishing more efficient and equitable by supporting a “publish, then review” model

Preprinting – a process in which researchers upload manuscripts to online servers before completing formal peer review – has proven to be a valuable tool for disseminating preliminary scientific findings. This model has the potential to speed up the process of discovery, enhance rigor through broad discussion, support equitable access to publishing, and promote transparency of the peer review process. Yet the model’s use and expansion are limited by a lack of explicit recognition within funding agency assessment practices.

The federal government should take action to support preprinting, preprint review, and “no-pay” publishing models in order to make scholarly publishing of federal outputs more rapid, rigorous, and cost-efficient.

Details

In 2022, the Office of Science and Technology Policy (OSTP) memo “Ensuring Free, Immediate, and Equitable Access to Federally Funded Research,” written by Dr. Alondra Nelson, directed federal funding agencies to make the results of taxpayer-supported research immediately accessible to readers at no cost. This important development extended John P. Holdren’s 2013 memo, “Increasing Access to the Results of Federally Funded Scientific Research,” by covering all federal agencies and removing 12-month embargoes on free access, and it mirrored developments such as the open access provisions of Horizon 2020 in Europe.

One of the key provisions of the Nelson memo is that federal agencies should “allow researchers to include reasonable publication costs … as allowable expenses in all research budgets,” signaling support for the article processing charge (APC) model. Because APCs shift publication costs onto authors, this provision creates barriers to equitable publishing for researchers with limited access to funds. Furthermore, leaving the definition of “reasonable costs” open to interpretation creates the risk that an increasing proportion of federal research funds will be siphoned off by publishing fees. In 2022, OSTP estimated that American taxpayers are already paying $390 to $798 million annually to publish federally funded research.

Without further intervention, these costs are likely to rise: publishers have historically responded to increasing demand for open access publishing by shifting from a subscription model to one in which authors pay to publish via APCs. Average APCs, for example, increased by 50 percent from 2010 to 2019.

The “no pay” model

In May 2023, the European Union’s Council of Ministers called for a “no pay” academic publishing model, in which costs are paid directly by institutions and funders to ensure equitable access to read and publish scholarship. There are several routes to achieving the no-pay model, including transitioning journals to ‘Diamond’ Open Access models, in which neither authors nor readers are charged.

In contrast to models that rely on transforming journal publishing, an alternative approach builds on the burgeoning preprint system. Preprints are manuscripts posted online by authors to a repository, without charge to authors or readers. Over the past decade, their use across the scientific enterprise has grown dramatically, offering scientists unique flexibility and speed and encouraging dynamic conversation. More recently, preprints have been paired with a new system of preprint peer review. In this model, organizations like Peer Community In, Review Commons, and RR\ID organize expert review of preprints from the community. These reviews are posted publicly, independent of any specific publisher’s or journal’s process.

Despite the growing popularity of this approach, its uptake is limited by a lack of support and incorporation into science funding and evaluation models. Federal action to encourage the “publish, then review” model offers several benefits:

  1. Research is available sooner, and society benefits more rapidly from new scientific findings. With preprints, researchers share their work with the community months or years ahead of journal publication, allowing others to build off their advances. 
  2. Peer review is more efficient and rigorous because the content of the review reports (though not necessarily the identity of the reviewers) is open. Readers are able to understand the level of scrutiny that went into the review process. Furthermore, an open review process enables anyone in the community to join the conversation and bring in perspectives and expertise that are currently excluded. The review process is less wasteful since reviews are not discarded with journal rejection, making better use of researchers’ time.
  3. Taxpayer research dollars are used more effectively. Disentangling transparent fees for dissemination and peer reviews from a publishing market driven largely by prestige would result in lower publishing costs, enabling additional funds to be used for research.

Recommendations

To support preprint-based publishing and equitable access to research:

Congress should

OSTP should

Science funding agencies should

To learn more about the importance of opening science and to read the rest of the published memos, visit the Open Science Policy sprint landing page.

Establish grant supplements for open science infrastructure security

Open science infrastructure (OSI), such as platforms for sharing research products or conducting analyses, is vulnerable to security threats and misappropriation. Because these systems are designed to be inclusive and accessible, they often require few credentials of their users. However, this quality also puts OSI at risk of attack and misuse. Seeking to provide quality tools to their users, OSI builders dedicate their often scant funding resources to addressing these security issues, sometimes delaying other important software work.

To support these teams and allow for timely resolution to security problems, science funders should offer security-focused grant supplements to funded OSI projects.

Details

Existing federal policy and funding programs recognize the importance of security to scholarly infrastructure like OSI. For example, in October 2023, President Biden issued an Executive Order to manage the risks of artificial intelligence (AI) and ensure these technologies are safe, secure, and trustworthy. Also, under the Secure and Trustworthy Cyberspace program, the National Science Foundation (NSF) provides grants to ensure the security of cyberinfrastructure and asks scholars who collect data to plan for its secure storage and sharing. Furthermore, agencies like NSF and the National Institutes of Health (NIH) already offer supplements for existing grants. What is still needed is rapid disbursement of funds to address unanticipated security concerns across scientific domains.

Risks like secure shell (SSH) attacks, data poisoning, and the proliferation of mis/disinformation on OSI threaten the utility, sustainability, and reputation of OSI. These concerns are urgent. New access to powerful generative AI tools, for instance, makes it easy to create disinformation that can convincingly mimic the rigorous science shared via OSI. In fact, increased open access to science can accelerate the proliferation of AI-generated scholarly disinformation by improving the accuracy of the models that generate it.

OSI is commonly funded by grants that afford little support for the maintenance work that could stop misappropriation and security threats. Without financial resources and an explicit commitment to a funder, it is difficult for software teams to prioritize these efforts. To ensure uptake of OSI and its continued utility, these teams must have greater access to financial resources and relevant talent to address these security concerns and norm violations.

Recommendations

Security concerns may be unanticipated and urgent, not aligning with calls for research proposals. To provide timely support for OSI facing security risks, executive action should be taken through the federal agencies that fund science infrastructure (NSF, NIH, NASA, DOE, DOD, NOAA). These agencies should offer research supplements to address OSI misappropriation and security threats. Supplement requests would be subject to internal review by funding agencies but not to peer review, allowing teams to bypass the lengthier review process required for a full grant proposal. Research supplements, unlike full grant proposals, would allow researchers to nimbly respond to novel security concerns that arise after they receive their initial funding. Additionally, researchers who are less familiar with security issues but who provide OSI may not anticipate all relevant threats when the project is conceived and initial funding is distributed (managers of from-scratch science gateways are one possible example). Supplying funds through supplements when the need arises can protect sensitive data and infrastructure.

These research supplements can be made available to principal investigators and co-principal investigators with active awards. Supplements may be used to support additional or existing personnel, allowing OSI builders to bring new expertise to their teams as necessary. To ensure that funds can address unanticipated security issues in OSI from a variety of scholarly domains, supplement recipients need not be funded under an existing program that explicitly supports open science infrastructure (e.g., NSF’s POSE program).

To minimize the administrative burden of review, applications for supplements should be kept short (e.g., no more than five pages, excluding budget) and should include the following:

An annual appropriation of $3 million across federal science funders would support 40 supplemental awards of $75,000 each for OSI projects. While the budget needed to address each security issue will vary, this estimate demonstrates the reach that these supplements could have.
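For reference, the sizing arithmetic behind this estimate is simply:

$$\$75{,}000 \text{ per award} \times 40 \text{ awards} = \$3{,}000{,}000 \text{ per year}$$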

Research software like OSI often struggles to find funding for maintenance. These much-needed supplemental funds would ensure that OSI developers can speedily prioritize important security-related work without doing so at the expense of other planned software work. Without this funding, we risk compromising the reputation of open science, consuming precious development resources allocated to other tasks, and negatively affecting OSI users’ experience. Grant supplements to address OSI security threats and misappropriation would help ensure the sustainability of OSI going forward.

To learn more about the importance of opening science and to read the rest of the published memos, visit the Open Science Policy sprint landing page.

Expand capacity and coordination to better integrate community data into environmental governance

Frontline communities bear the brunt of harms created by climate change and environmental pollution, but they also increasingly generate their own data, providing critical social and environmental context often not present in research or agency-collected data. However, community data collectors face many obstacles to integrating this data into federal systems: they must navigate complex local and federal policies within dense legal landscapes, and even when there is interest or demonstrated need, agencies and researchers may lack the capacity to find or integrate this data responsibly.

Federal research and regulatory agencies, as well as the White House, are increasingly supporting community-led environmental justice initiatives, presenting an opportunity to better integrate local and contextualized information into more effective and responsive environmental policy.

The Environmental Protection Agency (EPA) should better integrate community data into environmental research and governance by building internal capacity for recognizing and applying such data, facilitating connections between data communities, and addressing misalignments with data standards.

Details

Community science and monitoring are often overlooked yet vital facets of open science. Community science collaborations and their resulting data have led to historic environmental justice victories that underscore the importance of contextualized community-generated data in environmental problem-solving and evidence-informed policy-making. 

Momentum around integrating community-generated environmental data has been building at the federal level for the past decade. In 2016, the report “A Vision for Citizen Science at EPA,” produced by the National Advisory Council for Environmental Policy and Technology (NACEPT), thoroughly diagnosed the need for a clear framework for moving community-generated environmental data and information into governance processes. Since then, EPA has developed additional participatory science resources, including a participatory science vision, policy guidelines, and equipment loan programs. More recently, in 2022, the EPA created an Equity Action Plan in alignment with its 2022–2026 Strategic Plan and established an Office of Environmental Justice and External Civil Rights (OEJECR). And in 2023, as part of the cross-agency Year of Open Science, the National Aeronautics and Space Administration (NASA)’s Transform to Open Science (TOPS) program listed “broadening participation by historically excluded communities” as a requisite part of its strategic objectives.

It is evident that the EPA and research funding agencies like NASA have a strategic and mission-driven interest in collaborating with communities bearing the brunt of environmental and climate injustice to unlock the potential of their data. It is also clear that current methods aren’t working. Communities that collect and use environmental data still must navigate disjointed reporting policies and data standards and face a dearth of resources on how to share data with relevant stakeholders within the federal government. There is a critical lack of capacity and coordination directed at cross-agency integration of community data and the infrastructure that could enable the use of this data in regulatory and policy-making processes. 

Recommendations

To build government capacity to integrate community-generated data into environmental governance, the EPA should:

To facilitate connections between communities generating data, the EPA should:

To address misaligned data standards, the EPA, in partnership with USDS and the OMB, should:

Community-generated data provides contextualized environmental information essential for evidence-based policy-making and regulation, which in turn reduces wasteful spending by supporting the design of effective programs. Moreover, healthcare costs will be reduced for the general public if better evidence is used to address pollution, and climate adaptation costs could be reduced if more localized and granular data are used to address pressing environmental and climate issues now rather than in the future.

Our recommendations call for the addition of at least 10 full-time employees for each regional EPA office. The additional positions proposed could fill existing vacancies in newly established offices like the OEJECR. Additional budgetary allocations can also be made to the EPA’s Environmental Information Exchange Network (EN) to support technical infrastructure alterations and grant-making.

While there is substantial momentum and attention on community environmental data, our proposed capacity stimulus can make existing EPA processes more effective at achieving their mission and support rebuilding trust in agencies that are meant to serve the public.

To learn more about the importance of opening science and to read the rest of the published memos, visit the Open Science Policy sprint landing page.

Truly Open Science Needs Knowledge Synthesis

This article was written as part of the Future of Open Science Policy project, a partnership between the Federation of American Scientists, the Center for Open Science, and the Wilson Center. This project aims to crowdsource innovative policy proposals that chart a course for the next decade of federal open science. To read the other articles in the series, and to submit a policy idea of your own, please visit the project page.

Ten years on from the Office of Science and Technology Policy’s 2013 public access memo, federally funded scientific papers and data are more available than ever before. Yet as we look toward the future of open science — and open science policy — it is crucial to recognize that truly open science requires that scientists, stakeholders, and the public be able to access not only the products of research but also the knowledge and insights embedded within those products. Given the ever-increasing quantity and complexity of scientific output, this calls for a new focus on synthesis and communication.

Beyond Open Access

Providing the public with access to cutting-edge scientific research is a vital goal of both open science and U.S. policy, and it has empowered people around the world to better understand the issues that are most important to their health and flourishing. Yet in many cases, the availability of scientific papers themselves is insufficient, or even counterproductive, for ensuring understanding and usability of state-of-the-art knowledge.

To take one example, the possibility that psychedelics will prove to be effective treatments for mental health disorders has garnered perhaps the most public attention of any psychiatric research area in recent decades. Individual papers have attracted extensive media coverage, and their availability to practitioners and the public is critical. But because of the field’s rapidly growing knowledge base and the unclear implications of individual studies, many scientists have called for the public to withhold judgment until more is known. Given this topic’s importance to public and medical stakeholders, and the potential for pervasive coverage to lead to unregulated self-treatment, there is a clear need for expert-driven, clearly communicated, and up-to-the-moment knowledge synthesis.

The idea of advancing the reach and impact of scientific knowledge through aggregation of findings is not new. The ad hoc production of scientific syntheses by practicing researchers dates back at least a few centuries. In the last few decades, organizations such as the famed Cochrane Collaboration have provided models for standardized, rigorous synthesis within the health and medical sciences, and institutions across various other fields have followed.

A Changing Evidence Landscape

Despite widespread awareness of the value of rigorous, open, and up-to-date evidence synthesis, existing structures are increasingly struggling to keep up with shifting scientific processes. Classic approaches to discovering and summarizing research findings on a given topic (i.e., systematic review and meta-analysis) often take over a year to produce and rapidly go out of date once published. When a field is fast-moving, a lack of up-to-date evidence aggregation leads to less efficient science and hinders evidence-based decision making. Additionally, the nature of scientific outputs themselves is rapidly changing — with innovative approaches for publication, improved standards for credibility, and evolving academic incentive structures. These changes require a nimble synthesis regime.

New models for evidence aggregation and communication show promise in strengthening the ecosystem. The TRUST Initiative, for example, demonstrated the potential to embed measures of transparency and credibility into policy-relevant research synthesis, and the Living Evidence model provides a new framework for shifting synthesis away from a static – and often redundant – exercise toward a collaborative and ongoing process embedded within diverse partnerships.

The Need for Government Efforts

These developments signal a clear need for robust resources and capacity for evidence synthesis, yet the ecosystem faces barriers to its sustainability. Indeed, Cochrane, arguably the world leader in trusted medical reviews, recently lost roughly $5 million in funding from the UK’s National Institute for Health and Care Research, and a forthcoming shift towards open access reviews has complicated their financial picture. In the US, a collection of federal evidence clearinghouses must work hard to secure and maintain sufficient political support and resources for their vital work. In general, fast-moving technologies, slow-moving statutory constraints, and a precarious funding landscape mean that important knowledge remains too often scattered across individual studies and outdated reviews. 

Much work can and should be done within the academy, industry, and non-governmental institutions. Yet federal actors hold great power – and great responsibility – to advance the cause of trustworthy and up-to-date synthesis and communication of scientific knowledge. Existing efforts show great promise and span extramural funding (e.g., the NSF’s Opportunities for Promoting Understanding through Synthesis [OPUS] program and the inter-agency Prototype Open Knowledge Network program), organizing and contracting expert-led syntheses (e.g., the Office of Disease Prevention’s U.S. Preventive Services Task Force [USPSTF] and the cross-agency evidence clearinghouses), and efforts to generate, synthesize, and apply evidence within government (e.g., agency learning agendas and evaluation plans).

Call to Action

The Year of Open Science provides an important window to both strengthen existing efforts to promote open knowledge and launch ambitious new ones. To meet this moment, we need a broader set of voices contributing ideas on this aspect of open science and countless others. That is why we recently launched an Open Science Policy Sprint, in partnership with the Center for Open Science and the Wilson Center. If you have ideas for federal actions that can help the US meet and exceed its open science goals, we encourage you to submit your proposals here.

Opening Up Scientific Enterprise to Public Participation

This article was written as part of the Future of Open Science Policy project, a partnership between the Federation of American Scientists, the Center for Open Science, and the Wilson Center. This project aims to crowdsource innovative policy proposals that chart a course for the next decade of federal open science. To read the other articles in the series, and to submit a policy idea of your own, please visit the project page.

For decades, communities have had little access to scientific information despite paying for it with their tax dollars. The August 2022 Office of Science and Technology Policy (OSTP) memorandum thus catalyzed transformative change by requiring all federally funded research to be made publicly available by the end of 2025. Implementation of the memo has been supported by OSTP’s “Year of Open Science,” which is coordinating actions across the federal government to advance open access research. Access, though, is only the first step toward building a more responsive, equitable research ecosystem. A more recent memorandum from the Office of Management and Budget (OMB) and OSTP outlining research and development (R&D) policy priorities for fiscal year (FY) 2025 called on federal agencies to address long-standing inequities by broadening public participation in R&D. This is a critical demand signal for solutions that ensure that federally funded research delivers for the American people.

Public engagement researchers have long been documenting the importance of partnerships with key local stakeholders — such as local government and community-based organizations — in realizing the full breadth of participation with a given community. The lived experience of community members can be an invaluable asset to the scientific process, informing and even shaping research questions, data collection, and interpretation of results. Public participation can also benefit the scientific enterprise by realizing active translation and implementation of research findings, helping to return essential public benefits from the $170 billion invested in R&D each year.

The current reality is that many local governments and community-based organizations do not have the opportunities, incentives, or capacity to engage effectively in federally funded scientific research. For example, Headwaters Economics found that a significant proportion of communities in the United States do not have the staffing, resources, or expertise to apply for, receive, and manage federal funding. Additionally, community-based organizations (CBOs) — the groups that are most connected to people facing problems that science could be activated to solve, such as health inequities and environmental injustices — face similar capacity barriers, especially around compliance with federal grants regulations and reporting obligations. Few research funds exist to facilitate the building and maintenance of strong relationships with CBOs and communities, or to provide capacity-building financing to ensure their full participation. Thus, relationships between communities and academia, companies, and the federal government often consume those communities’ time and resources without much return on their investment.

Great participatory science exists, if we know where to look

Place-based investments in regional innovation and R&D unlocked by the CHIPS and Science Act (e.g., the Economic Development Administration’s (EDA) Tech Hubs and the National Science Foundation’s (NSF) Regional Innovation Engines and Convergence Accelerator) are starting to provide transformative opportunities to build local research capacity in an equitable manner. What they will need are the incentives, standards, requirements, and programmatic ideas to institutionalize equitable research partnerships.

Models of partnership focused on equitable relationships have been established among community organizations, academic institutions, and/or the federal government to generate evidence and innovations that advance community needs.

An example of an academic-community partnership is the Healthy Flint Research Coordinating Center (HFRCC). The HFRCC evaluates and must approve all research conducted in Flint, Michigan. It works with researchers to shape proposed studies so they align better with community concerns and context and ensures that benefits flow directly back to the community. Health equity is assessed holistically, considering the economic, environmental, behavioral, and physical health of residents. Finally, all work done in Flint is made open access through this organization. From these efforts we learn that communities can play a vital role in defining the problems to solve and ensuring that research is done with equity in mind.

An example of a federal agency-community partnership is the Environmental Protection Agency’s (EPA) Participatory Science Initiative. Through citizen science processes, the EPA has enabled data collection in under-monitored areas to identify climate-related and environmental issues that require both technical and policy solutions. The EPA helps facilitate these citizen-science initiatives by providing resources on choosing the best air monitoring equipment and on how to visualize field data. These initiatives specifically empower low-income and minority communities, who face greater environmental hazards but often lack the power and agency to voice their concerns.

Finally, communities themselves can be the generators of research projects, initially without a partner organization. In response to the lack of innovation in diabetes care management, patients with Type 1 diabetes founded OpenAPS. This open source effort spurred the creation of an overnight, closed-loop artificial pancreas system to reduce disease burden and save lives. Through decentralized deployment to over 2,700 individuals, the community has amassed 63 million hours of real-world “closed-loop” data, and prospective trials and randomized controlled trials (RCTs) show fewer highs and less severe lows, i.e., greater quality of life. This innovation is now ripe for federal investment and partnership to reach critical scale.

Scaling participatory science requires infrastructure

Participatory science and innovation is still an emerging field. Yet effective models for building participation infrastructure within scientific research enterprises have emerged over the past 20 years to strengthen the community engagement capacity of research institutions. Participatory research infrastructure (PRI) could take the following forms:

  1. Offices that develop tools for interfacing with communities, like citizens’ juries, online platforms, deliberative forums, and future-thinking workshops.
  2. Ongoing technology assessment projects to holistically evaluate innovation and research along dimensions of equity, trust, access, etc.
  3. Infrastructure (physical and digital) for research, design experimentation, and open innovation led by community members.
  4. Organized stakeholder networks for co-creation and community-driven citizen science.
  5. Funding resources to build CBO capacity to meaningfully engage (examples include NIH’s RADx-UP program and NSF’s Civic Innovation Challenge).
  6. Governance structures with community members in decision-making roles and requirements that CBOs help to shape the direction of the research proposals.
  7. Peer-review committees staffed by members of the public, as demonstrated recently by NSF’s Regional Innovation Engines.
  8. Coalitions that utilize research as an input for collective action and making policy and governance decisions to advance communities’ goals.

Call to action

The responsibility of federally funded scientific research is to serve the public good. And yet, because so few interventions have been scaled, participatory science will remain a “nice to have” rather than an imperative for the scientific enterprise. To bring participatory science into the mainstream, we will need creative policy solutions that establish incentive mechanisms, standards, funding streams, training ecosystems, assessment mechanisms, and organizational capacity for participatory science. To meet this moment, we need a broader set of voices contributing ideas on this aspect of open science and countless others. That is why we recently launched an Open Science Policy Sprint, in partnership with the Center for Open Science and the Wilson Center. If you have ideas for federal actions that can help the U.S. meet and exceed its open science goals, we encourage you to submit your proposals here.

How Unmet Desire Surveys Can Advance Learning Agendas and Strengthen Evidence-Based Policymaking

Summary

The 2018 Foundations for Evidence-Based Policymaking Act (Evidence Act) promotes a culture of evidence within federal agencies. A central part of that culture entails new collaboration between decision-makers and those with diverse forms of expertise inside and outside of the federal government. Federal chief evaluation officers lead these efforts, yet they face challenges in getting buy-in from agency staff and obtaining sufficient resources. One tool to overcome these challenges is an “unmet desire survey,” which prompts agency staff to reflect on how the success of their programs relates to what is happening in other agencies and outside government, and to consider what information about these other programs and organizations would help their work be more effective. The unmet desire survey is an important data-gathering mechanism, and it also encourages evaluation officers to engage in matchmaking between agency staff and people who have the information they desire. Using existing authorities and resources, agencies can pilot unmet desire surveys as a concrete mechanism for advancing federal learning agendas in a way that builds buy-in by directly meeting the needs of agency staff.

Challenge and Opportunity

A core mission of the Evidence Act is to foster a culture of evidence-based decision-making within federal agencies. Since the problems agencies tackle are multidimensional, with the success of one government program often depending on the performance of others, new collaborative relationships between decision-makers in the federal government and those in other agencies and in organizations outside the federal government are essential to realizing the Evidence Act’s vision. Indeed, Office of Management and Budget (OMB) implementation guidance stresses that learning agendas are “an opportunity to align efforts and promote interagency collaboration in areas of joint focus or shared populations or goals” (OMB M-19-23), and that a culture of evidence “cannot happen solely at the top or in isolated analytical offices, but rather must be embedded throughout each agency…and adopted by the hardworking civil servants who serve on behalf of the American people” (OMB M-21-27). 

Chief evaluation officers at federal agencies are the main point people for fostering cultures of evidence. Yet they and their evaluation staff face many challenges, including securing buy-in from agency staff, understanding needs of program and operational offices that extend beyond those offices’ organizational boundaries, and working with limited resources. Indeed, OMB guidance acknowledges that many agency staff may view learning agendas as just another compliance exercise.

This memo proposes a flexible tool that evaluation officers can use to generate buy-in among agency staff and leadership while also promoting collaboration as emphasized in OMB guidance and in the Evidence Act. The tool, which has already proven valuable in local government and in the nonprofit sector, is called an “unmet desire survey.” The survey measures unmet desires for collaboration by prompting staff to consider the following: 

Unmet desire surveys elicit critical insights about needs for connection and are highly flexible. For instance, in the first question posed above, evaluation officers can choose to ask staff about new information that would be helpful for any program or only about information relevant to programs that are top priorities for their agency. In other words, unmet desire surveys need not add one more thing to the plate; rather, they can be used to accelerate collaboration directly tied to current learning priorities. 

Unmet desire surveys also legitimize informal collaborative relationships. Too often, calls for new collaboration in the policy sphere immediately segue into overly structured meetings that fail to uncover promising areas for joint learning and problem-solving. Meetings across government agencies are often scripted presentations about each organization’s activities, providing little insight on ways they could partner to achieve better results. Policy discussions with outside research experts tend to focus on formal evaluations and long-term research projects that don’t surface opportunities to accelerate learning in the near term. In contrast, unmet desire surveys explicitly legitimize the idea that diverse thinkers may want to connect only for informal knowledge exchange rather than formal events or partnerships. Indeed, even single conversations can greatly impact decision-makers, and, of course, so can more intensive relationships.

While online platforms for spurring new collaborative relationships have been previously proposed, they have not achieved uptake at scale among federal policymakers. One reason for this is that the problem that needs to be solved is both factual and relational. In other words, the issue isn’t simply that strangers do not know each other—it’s also that strangers do not always know how to talk to one another. People care about how others relate to them and whether they can successfully relate to others. Uncertainty about relationality routinely stops people from interacting with others they do not know. This is why unmet desire surveys also include questions that directly measure hesitations about interacting with people from other agencies and organizations. 

After the surveys are administered, evaluation staff can use survey data to engage in matchmaking: brokering connections among people with similar goals but diverse expertise and helping overcome uncertainty about relationality so that new cross-agency and cross-sector collaborative relationships can take root. In sum, by deliberately inquiring about connections with others who have diverse forms of relevant expertise—and then making those connections anew—evaluation staff can generate greater enthusiasm and ownership among people who may not consider evaluation and evidence-building as part of their core responsibilities.

Plan of Action

Using existing authorities and resources, federal evaluation officers can take three steps to position unmet desire surveys as a standard component of the government’s evidence toolbox. 

Step 1. Design and implement pilot unmet desire surveys. 

Chief evaluation officers are well positioned to pilot unmet desire surveys within their agencies. While individual evaluation officers can work independently to design unmet desire surveys, it may be more fruitful to work together, via the Evaluation Officer Council, to design a baseline survey template. Chief evaluation officers could then work with their teams to adapt the baseline template to their agencies, including identifying which agency staff to prioritize as well as the best way to phrase particular questions (e.g., regarding the types of connections that employees want in order to improve the effectiveness of their work or the types of hesitancies to ask about). Given that the question content is highly flexible, unmet desire surveys can directly accelerate learning agendas and build buy-in at the same time. Thus, they can yield tangible, concrete benefits with very little upfront cost.

Step 2. Meet unmet desires by matchmaking. 

After the pilot surveys are administered, chief evaluation officers should act on their results by matchmaking. There are several ways to do this without new appropriations. One is for evaluation teams within agencies to engage in informal, low-lift matchmaking—wherein those who implement the survey also act as initial matchmakers—as an early proof of concept. A second option is to bring on short-term matchmakers through flexible hiring mechanisms (e.g., through the Intergovernmental Personnel Act). Documenting successes and lessons learned then sets the stage for using agency-specific discretionary funds to hire one or more in-house matchmakers as longer-term or staff appointments.

Step 3. Collect information on successes and lessons learned from the pilot.

Unmet desire surveys can be tricky to field because they entail asking employees about topics they may not be used to thinking about. It often takes some trial and error to figure out the best ways to ask about employees’ substantive goals and their hesitations about interacting with people they do not know. Piloting unmet desire surveys and follow-on matchmaking can not only demonstrate value (e.g., the impact of new collaborative relationships fostered through these combined efforts) to justify further investment but also suggest how evaluation leads might best structure future unmet desire surveys and subsequent matchmaking.

Conclusion

An unmet desire survey is an adaptable tool that can reveal fruitful pathways for connection and collaboration. Indeed, unmet desire surveys leverage the science of collaboration by ensuring that efforts to broker connections among strangers consider both substantive goals and uncertainty about relationality. Chief evaluation officers can pilot unmet desire surveys using existing authorities and resources, and then use the information gathered to identify opportunities for productive matchmaking. Ultimately, positioning the survey as a standard component of the government’s evidence toolbox has great potential to support agency staff in advancing federal learning agendas and building a robust culture of evidence across the U.S. government.

Frequently Asked Questions
Who should unmet desire surveys be administered to?

The best place to start—especially when resources are limited—is with potential evidence champions. These are people who already have an idea of what information would help them improve the impact of the programs they run and which people would be helpful to collaborate with. These potential evidence champions may not self-identify as such; rather, they may see themselves as falling into other categories, such as customer-experience experts, bureaucracy hackers, process innovators, or policy entrepreneurs. Regardless of terminology, the unmet desire survey provides people who are already motivated to collaborate and connect with a clear opportunity to articulate their needs. Evaluation staff can then respond by matchmaking to stimulate new and productive relationships for those people.

Who should conduct an unmet desire survey?

The administrator should be someone with whom agency staff feel comfortable discussing their needs (e.g., a member of an agency evaluation team) and who is able to effectively facilitate matchmaking—perhaps because of their network or their reputation within the agency. The latter criterion helps ensure that staff expect useful follow-up, which in turn motivates completion of the survey and participation in follow-on activities; it also generates enthusiasm for engaging in new collaborative relationships (as well as creating broader buy-in for the learning agenda). In some cases, it may make the most sense to have multiple people from an evaluation team surveying different agency staff or co-sponsoring the survey with agency innovation offices. Explicit support from agency leadership for the survey and follow-on activities is also crucial for achieving staff buy-in.

What questions should be asked in an unmet desire survey?

The bulleted list in the body of the memo illustrates the types of questions that an unmet desire survey might ask. Yet survey content is meant to be tailored and agency-specific. For instance, the first suggested question about information that would help increase program effectiveness can be left entirely open-ended or be focused on programs related to learning-agenda priorities. Similarly, the second suggested question may invite responses related to either informal or formal collaboration, or instead may only ask about knowledge exchange (a relatively lower commitment that may be more palatable to agency leadership). The third and fourth questions should refer to specific types of hesitancy that survey administrators believe are most likely (e.g., ask about a few hesitancies that seem most likely to arise, such as lack of explicit permission, concerns about saying something inappropriate, or concerns about lack of trustworthy information). The final question about why these collaborations don’t exist can similarly be left broad or include a few examples to help spark ideas.

Who should conduct matchmaking in response to an unmet desire survey?

Again, the answer will be agency-specific. In many organizations, matchmaking happens informally. Formalizing this duty as a part of one or more people’s official responsibilities sends a signal about how much this work is valued. Exactly who those people are will depend on the agency’s structure, as well as on whether there are already people in a given agency who see matchmaking as part of their job.

When is the right time to field an unmet desire survey?

While unmet desire surveys can be done anytime and on a continuous basis, it is best to field them when there is identified staff capacity for follow-on matchmaking and employee willingness to build collaborative relationships.

Public Value Evidence for Public Value Outcomes: Integrating Public Values into Federal Policymaking

Summary

The federal government––through efforts like the White House Year of Evidence for Action––has made a laudable push to ensure that policy decisions are grounded in empirical evidence. While these efforts acknowledge the importance of social, cultural and Indigenous knowledges, they do not draw adequate attention to the challenges of generating, operationalizing, and integrating such evidence in routine policy and decision making. In particular, these endeavors are generally poor at incorporating the living and lived experiences, knowledge, and values of the public. This evidence—which we call evidence about public values—provides important insights for decision making and contributes to better policy or program designs and outcomes. 

The federal government should broaden institutional capacity to collect and integrate evidence on public values into policy and decision making. Specifically, we propose that the White House Office of Management and Budget (OMB) and the White House Office of Science and Technology Policy (OSTP): 

  1. Provide a directive on the importance of public value evidence.
  2. Develop an implementation roadmap for integrating public value evidence into federal operations (e.g., describing best practices for integrating it into federal decision making and developing skill-building opportunities for federal employees).

Challenge and Opportunity

Evidence about public values informs and improves policies and programs

Evidence about public values is, to put it most simply, information about what people prioritize, care about, or think about with respect to a particular issue, which may differ from the ideas prioritized by experts. It includes data collected through focus groups, deliberations, citizen review panels, community-based research, and public opinion surveys. Some of these methods rely on one-way flows of information (e.g., surveys), while others prioritize mutual exchange of information among policy makers and participating publics (e.g., deliberations).

Agencies facing complex policymaking challenges can utilize evidence about public values––along with expert- and evaluation-based evidence––to ensure decisions truly serve the broader public good. If collected as part of the policy-making process, evidence about public values can inform policy goals and programs in real time, including when program goals are taking shape or as programs are deployed. 

Evidence about public values within the federal government: three challenges to integration

To fully understand and use public values in policymaking, the U.S. government must first broadly address three challenges.

First, the federal government does not sufficiently value evidence about public values when it researches and designs policy solutions. Federal employees often lack any directive or guidance from leadership that collecting evidence about public values is valuable or important to evidence-based decision making. Efforts like the White House Year of Evidence for Action seek to better integrate evidence into policy making. Yet––for many contexts and topics––scientific or evaluation-based evidence is just one type of evidence. The public’s wisdom, hopes, and perspectives play an important mediating role in determining and achieving desired public outcomes. The following examples illustrate ways public value evidence can support federal decision making:

  1. An effort to implement climate intervention technologies (e.g., solar geoengineering) might be well-grounded in evidence from the scientific community. However, that same strategy may not consider the diverse values Americans hold about (i) how such research might be governed, (ii) who ought to develop those technologies, and (iii) whether or not they should be used at all. Public values are imperative for such complex, socio-technical decisions if we are to make good on the Year of Evidence’s dual commitment to scientific integrity (including expanded concepts of expertise and evidence) and equity (better understanding of “what works, for whom, and under what circumstances”). 
  2. Debates over evidence about the impacts of rising sea levels on national park infrastructure and protected features have historically been tense. To acknowledge the social-environmental complexity in play, park leadership has strived to include both expert assessments and engagement with publics on their own risk tolerance for various mitigation measures. This has helped officials prioritize limited resources as they consider tough decisions about what and how to continue to preserve various park features and artifacts.

Second, the federal government lacks effective mechanisms for collecting evidence about public values. Presently, public comment periods favor credentialed participants—advocacy groups, consultants, business groups, etc.—who possess established avenues for sharing their opinions and positions with policy makers. As a result, these credentialed participants shape policy, while other experiences, voices, and inputs go unheard. While the general public can contribute to government programs through platforms like Challenge.gov, credentialed participants still tend to dominate these processes. Effective mechanisms for incorporating public values into decision making or research are generally confined to university, local government, and community settings. These methods include participatory budgeting, methods from usable or co-produced science, and participatory technology assessment. Some of these methods have been developed and applied to complex science and technology policy issues in particular, including climate change and various emerging technologies, but their use in federal agencies is far more limited. Even when an agency seeks to collect public values, it may be impeded by regulatory hurdles such as the Paperwork Reduction Act (PRA), which can limit the collection of public values, ideas, or other input due to potentially long approval timelines and the perceived data collection burden on the public. Cumulatively, these factors prevent agencies from accurately gauging––and being adaptive to––public responses.

Third, federal agencies face challenges integrating evidence about public values into policy making. These challenges can be rooted in the regulatory hurdles described above, difficulties integrating with existing processes, and unfamiliarity with the benefits of collecting evidence about public values. Fortunately, studies have found specific attributes present among policymakers and agencies that allowed for the implementation and use of mechanisms for capturing public values. These attributes included: 

  1. Leadership who prioritized public involvement and helped address administrative uncertainties.
  2. An agency culture responsive to broader public needs, concerns, and wants.
  3. Agency staff familiar with mechanisms to capture public values and integrate them into the policy- and decision-making process. Such staff can help address translation issues, navigate regulatory hurdles, and better communicate the benefits of collecting public values with regard to agency needs. Unfortunately, many agencies do not have such staff, and there are no existing roadmaps or professional development programs to help build this capacity across agencies.

Aligning public values with current government policies promotes scientific integrity and equity

The White House Year of Evidence for Action presents an opportunity to address the primary challenges––namely, a lack of clear direction, collection protocols, and evidence integration strategies––currently impeding the widespread use of public value evidence in the federal government. Our proposal below is well aligned with the Year of Evidence’s central commitments, including:

Furthermore, this proposal aligns with the goals of the Year of Evidence for Action to “share leading practices to generate and use research-backed knowledge to advance better, more equitable outcomes for all America…” and to “…develop new strategies and structures to promote consistent evidence-based decision-making inside the Federal Government.” 

Plan of Action

To integrate public values into federal policy making, the White House Office of Management and Budget (OMB) and the White House Office of Science and Technology Policy (OSTP) should: 

  1. Develop a high-level directive for agencies about the importance of collecting public values as a form of evidence to inform policy making.
  2. Oversee the development of a roadmap for the integration of evidence about public values across government, including pathways for training federal employees. 

Recommendation 1. OMB and OSTP should issue a high-level directive providing clear direction and strong backing for agencies to collect and integrate evidence on public values into their evidence-based decision-making procedures. 

Given the potential utility of integrating public value evidence into science and technology policy, as well as OSTP’s involvement in efforts to promote evidence-based policy, OSTP is a natural partner in crafting this directive alongside OMB. This directive should clearly connect public value evidence to the current policy environment. As described above, efforts like the Foundations for Evidence-Based Policymaking Act (Evidence Act) and the White House Year of Evidence for Action provide a strong rationale for the collection and integration of evidence about public values. Longer-standing policies––including the Crowdsourcing and Citizen Science Act––provide further context and guidance on the importance of collecting input from broad publics.

Recommendation 2. As part of the directive, or as a follow up to it, OMB and OSTP should oversee the development of a roadmap for integrating evidence about public values across government. 

The roadmap should be developed in consultation with various federal stakeholders, such as members of the Evaluation Officer Council, representatives from the Equitable Data Working Group, customer experience strategists, and relevant conceptual and methods experts from within and outside the government.

A comprehensive roadmap would include the following components:

Conclusion

Collecting evidence about the living and lived experiences, knowledge, and aspirations of the public can help inform policies and programs across government. While methods for collecting evidence about public values have proven effective, they have not been integrated into evidence-based policy efforts within the federal government. The integration of evidence about public values into policy making can promote the provision of broader public goods, elevate the perspectives of historically marginalized communities, and reveal policy or program directions different from those prioritized by experts. The proposed directive and roadmap––while only a first step––would help ensure the federal government considers, respects, and responds to our diverse nation’s values.

Frequently Asked Questions
Which agencies or areas of government could use public value evidence?

Federal agencies can use public value evidence where additional information about what the public thinks, prioritizes, and cares about could improve programs and policies. For example, policy decisions characterized by high uncertainty, potential value disputes, and high stakes could benefit from a broader review of considerations by diverse members of the public to ensure that novel options and unintended consequences are considered in the decision making process. In the context of science and technology related decision making, these situations were called “post-normal science” by Silvio Funtowicz and Jerome Ravetz. They called for an extension of who counts as a subject matter expert in the face of such challenges, citing the potential for technical analyses to overlook important societal values and considerations.

Why should OSTP be engaged in furthering the use of public value evidence?

Many issues where science and technology meet societal needs and policy considerations warrant broad public value input. These issues include emerging technologies with societal implications and existing S&T challenges that have far reaching impacts on society (e.g., climate change). Further, OSTP is already involved in Evidence for Action initiatives and can assist in bringing in external expertise on methods and approaches.

Why do we need this sort of evidence when public values are represented by elected officials?

While guidance from elected officials is an important mechanism for representing public values, evidence collected about public values through other means can be tailored to specific policy making contexts and can explore issue-specific challenges and opportunities. 

Are there any examples of public value evidence being used in the government?

There are likely more examples of public value evidence being identified and integrated across government than we can enumerate here. The roadmap-building process should involve identifying them and developing common language to describe diverse public value evidence efforts across government. For specific known examples, see footnotes 1 and 2.

Is evidence about public values different from evidence collected about evaluations?

Evidence about public values may include evidence collected through program and policy evaluations, but it encompasses broader types of evidence as well. The evaluation of policies and programs generally focuses on assessing effectiveness or efficiency. Evidence about public values would be used to address broader questions about the aims or goals of a program or policy.

Unlocking Federal Grant Data To Inform Evidence-Based Science Funding

Summary

Federal science-funding agencies spend tens of billions of dollars each year on extramural research. There is growing concern that this funding may be inefficiently awarded (e.g., by under-allocating grants to early-career researchers or to high-risk, high-reward projects). But because there is a dearth of empirical evidence on best practices for funding research, much of this concern is anecdotal or speculative at best.

The National Institutes of Health (NIH) and the National Science Foundation (NSF), as the two largest funders of basic science in the United States, should therefore develop a platform to provide researchers with structured access to historical federal data on grant review, scoring, and funding. This action would build on momentum from both the legislative and executive branches surrounding evidence-based policymaking, as well as on ample support from the research community. And though grantmaking data are often sensitive, there are numerous successful models from other sectors for sharing sensitive data responsibly. Applying these models to grantmaking data would strengthen the incorporation of evidence into grantmaking policy while also guiding future research (such as larger-scale randomized controlled trials) on efficient science funding.

Challenge and Opportunity

The NIH and NSF together disburse tens of billions of dollars each year in the form of competitive research grants. At a high level, the funding process typically works like this: researchers submit detailed proposals for scientific studies, often to particular program areas or topics that have designated funding. Then, expert panels assembled by the funding agency read and score the proposals. These scores are used to decide which proposals will or will not receive funding. (The FAQ provides more details on how the NIH and NSF review competitive research grants.) 

A growing number of scholars have advocated for reforming this process to address perceived inefficiencies and biases. Citing evidence that the NIH has become increasingly incremental in its funding decisions, for instance, commentators have called on federal funding agencies to explicitly fund riskier science. These calls grew louder following the success of mRNA vaccines against COVID-19, a technology that struggled for years to receive federal funding due to its high-risk profile.

Others are concerned that the average NIH grant-winner has become too old, especially in light of research suggesting that some scientists do their best work before turning 40. Still others lament the "crippling demands" that grant applications exert on scientists' time and argue that a better approach could be to replace or supplement conventional peer-review evaluations with lottery-based mechanisms.

These hypotheses are all reasonable and thought-provoking. Yet there is surprisingly little empirical evidence to support them. If we want to effectively reimagine—or even just tweak—the way the United States funds science, we need better data on how well various funding policies work.

Academics and policymakers interested in the science of science have rightly called for increased experimentation with grantmaking policies in order to build this evidence base. But, realistically, such experiments would need to be conducted hand-in-hand with the institutions that fund and support science, investigating how changes in policies and practices shape outcomes. While such experimentation is slowly becoming a reality, the knowledge gap about how best to support science should be filled sooner rather than later.

Fortunately, we need not wait that long for new insights. The NIH and NSF have a powerful resource at their disposal: decades of historical data on grant proposals, scores, funding status, and eventual research outcomes. These data hold immense value for those investigating the comparative benefits of various science-funding strategies. Indeed, these data have already supported excellent and policy-relevant research. Examples include Ginther et al. (2011), which studies how race and ethnicity affect the probability of receiving an NIH award, and Myers (2020), which studies whether scientists are willing to change the direction of their research in response to increased resources. And there is potential for more. While randomized controlled trials (RCTs) remain the gold standard for causal inference, economists have for decades been developing methods for drawing causal conclusions from observational data. Applying these methods to federal grantmaking data could quickly and cheaply yield evidence-based recommendations for optimizing federal science funding.
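To illustrate the kind of quasi-experimental analysis such data would enable, the sketch below applies a simple regression discontinuity design around a hypothetical funding cutoff ("payline"), comparing applications scoring just above and just below the cutoff on a downstream outcome. The file name, column names, cutoff value, and outcome measure are all assumptions for illustration, not actual NIH or NSF data fields.

```python
# Illustrative sketch only: a simple regression discontinuity (RD) analysis
# around a hypothetical funding payline. Column names, the cutoff, and the
# outcome measure are assumptions for demonstration, not real data fields.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical proposal-level records a secure repository might expose:
# one row per application, with its peer-review score, and a downstream
# outcome (e.g., publications within five years of the decision).
df = pd.read_csv("grant_applications.csv")  # hypothetical file

CUTOFF = 50  # assumed payline: applications scoring at or above 50 are funded

# Center the running variable on the cutoff and keep a narrow bandwidth,
# so we compare applications just above and just below the payline.
df["score_centered"] = df["review_score"] - CUTOFF
window = df[df["score_centered"].abs() <= 5].copy()
window["funded"] = (window["review_score"] >= CUTOFF).astype(int)

# Local linear regression with separate slopes on each side of the cutoff.
# The coefficient on `funded` estimates the effect of receiving a grant on
# the outcome for applications near the payline.
model = smf.ols(
    "publications_5yr ~ funded + score_centered + funded:score_centered",
    data=window,
).fit(cov_type="HC1")
print(model.summary())
```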

Opening up federal grantmaking data by providing a structured and streamlined access protocol would increase the supply of valuable studies such as those cited above. It would also build on growing governmental interest in evidence-based policymaking. Since its first week in office, the Biden-Harris administration has emphasized the importance of ensuring that “policy and program decisions are informed by the best-available facts, data and research-backed information.” Landmark guidance issued in August 2022 by the White House Office of Science and Technology Policy directs agencies to ensure that federally funded research—and underlying research data—are freely available to the public (i.e., not paywalled) at the time of publication.

On the legislative side, the 2018 Foundations for Evidence-Based Policymaking Act (popularly known as the Evidence Act) calls on federal agencies to develop a "systematic plan for identifying and addressing policy questions" relevant to their missions. The Evidence Act specifies that the general public and researchers should be included in developing these plans. It also calls on agencies to "engage the public in using public data assets [and] providing the public with the opportunity to request specific data assets to be prioritized for disclosure." The recently proposed Secure Research Data Network Act calls for building exactly the type of infrastructure that would be necessary to share federal grantmaking data in a secure and structured way.

Plan of Action

There is clearly appetite to expand access to and use of federally held evidence assets. Below, we recommend three actions for unlocking the insights contained in NIH- and NSF-held grantmaking data—and applying those insights to improve how federal agencies fund science.

Recommendation 1. Review legal and regulatory frameworks applicable to federally held grantmaking data.

The White House Office of Management and Budget (OMB)’s Evidence Team, working with the NIH’s Office of Data Science Strategy and the NSF’s Evaluation and Assessment Capability, should review existing statutory and regulatory frameworks to see whether there are any legal obstacles to sharing federal grantmaking data. If the review team finds that the NIH and NSF face significant legal constraints when it comes to sharing these data, then the White House should work with Congress to amend prevailing law. Otherwise, OMB—in a possible joint capacity with the White House Office of Science and Technology Policy (OSTP)—should issue a memo clarifying that agencies are generally permitted to share federal grantmaking data in a secure, structured way, and stating any categorical exceptions.

Recommendation 2. Build the infrastructure to provide external stakeholders with secure, structured access to federally held grantmaking data for research. 

Federal grantmaking data are inherently sensitive, containing information that could jeopardize personal privacy or compromise the integrity of review processes. But even sensitive data can be responsibly shared. The NIH has previously shared historical grantmaking data with some researchers, but the next step is for the NIH and NSF to develop a system that enables broader and easier researcher access. Other federal agencies have developed strategies for handling highly sensitive data in a systematic fashion, which can provide helpful precedent and lessons. Examples include:

  1. The U.S. Census Bureau's Longitudinal Employer-Household Dynamics (LEHD) data. These data link individual workers to their respective firms and provide information on salary, job characteristics, and worker and firm location. Approved researchers have relied on these data to better understand labor-market trends.
  2. The Department of Transportation (DOT)’s Secure Data Commons. The Secure Data Commons allows third-party firms (such as Uber, Lyft, and Waze) to provide individual-level mobility data on trips taken. Approved researchers have used these data to understand mobility patterns in cities.

In both cases, the data in question are available to external researchers contingent on agency approval of a research request that clearly explains the purpose of a proposed study, why the requested data are needed, and how those data will be managed. Federal agencies managing access to sensitive data have also implemented additional security and privacy-preserving measures, such as:

Building on these precedents, the NIH and NSF should (ideally jointly) develop secure repositories to house grantmaking data. This action aligns closely with recommendations from the U.S. Commission on Evidence-Based Policymaking, as well as with the above-referenced Secure Research Data Network Act (SRDNA). Both the Commission recommendations and the SRDNA advocate for secure ways to share data between agencies. Creating one or more repositories for federal grantmaking data would be an action that is simultaneously narrower and broader in scope (narrower in terms of the types of data included, broader in terms of the parties eligible for access). As such, this action could be considered either a precursor to or an expansion of the SRDNA, and could be logically pursued alongside SRDNA passage.

Once a secure repository is created, the NIH and NSF should (again, ideally jointly) develop protocols for researchers seeking access. These protocols should clearly specify who is eligible to submit a data-access request, the types of requests that are likely to be granted, and technical capabilities that the requester will need in order to access and use the data. Data requests should be evaluated by a small committee at the NIH and/or NSF (depending on the precise data being requested). In reviewing the requests, the committee should consider questions such as:

  1. How important and policy-relevant is the question that the researcher is seeking to answer? If policymakers knew the answer, what would they do with that information? Would it inform policy in a meaningful way? 
  2. How well can the researcher answer the question using the data they are requesting? Can they establish a clear causal relationship? Would we be comfortable relying on their conclusions to inform policy?

Finally, the NIH and NSF should consider including right-to-review clauses in agreements governing sharing of grantmaking data. Such clauses are typical when using personally identifiable data, as they give the data provider (here, the NIH and NSF) the chance to ensure that all data presented in the final research product have been properly aggregated and that no individuals are identifiable. The Census Bureau's Disclosure Review Board can provide helpful guidance for the NIH and NSF to follow on this front.

Recommendation 3. Encourage researchers to utilize these newly available data, and draw on the resulting research to inform possible improvements to grant funding.

The NIH and NSF frequently face questions and trade-offs when deciding if and how to change existing grantmaking processes. Examples include:

Typically, these agencies have very little academic or empirical evidence to draw on for answers. A large part of the problem has been the lack of access to data that researchers need to conduct relevant studies. Expanding access, per Recommendations 1 and 2 above, is a necessary but not sufficient part of the solution. Agencies must also invest in attracting researchers to use the data in a socially useful way.

Broadly advertising the new data will be critical. Announcing a new request for proposals (RFP) through the NIH and/or the NSF for projects explicitly using the data could also help. These RFPs could guide researchers toward the highest-impact and most policy-relevant questions, such as those above. The NSF’s “Science of Science: Discovery, Communication and Impact” program would be a natural fit to take the lead on encouraging researchers to use these data.

The goal is to create funding opportunities and programs that give academics clarity on the key issues and questions on which federal grantmaking agencies need guidance; in turn, the evidence academics build should help inform grantmaking policy.

Conclusion

Basic science is a critical input into innovation, which in turn fuels economic growth, health, prosperity, and national security. The NIH and NSF were founded with these critical missions in mind. To fully realize their missions, the NIH and NSF must understand how to maximize scientific return on federal research spending. And to help, researchers need to be able to analyze federal grantmaking data. Thoughtfully expanding access to this key evidence resource is a straightforward, low-cost way to grow the efficiency—and hence impact—of our federally backed national scientific enterprise.

Frequently Asked Questions
How does the NIH currently select research proposals for funding?

For an excellent discussion of this question, see Li (2017). Briefly, the NIH is organized around 27 Institutes and Centers (ICs), which typically correspond to disease areas or body systems. Each IC has an annual budget set by Congress. Research proposals are first evaluated by around 180 different "study sections," which are committees organized by scientific area or method. After being evaluated by the study sections, proposals are returned to their respective ICs. The highest-scoring proposals in each IC are funded, up to budget limits.
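A stylized sketch of that selection logic may help make the process concrete. It is not the NIH's actual implementation, and the names, scores, and budgets are made up: proposals are simply ranked by review score within each IC and funded until the IC's budget is exhausted.

```python
# Stylized sketch of the selection logic described above (not the NIH's
# actual implementation): within each Institute or Center, proposals are
# ranked by peer-review score and funded until the IC's budget runs out.
from dataclasses import dataclass

@dataclass
class Proposal:
    ic: str            # Institute or Center the proposal was submitted to
    score: float       # peer-review score (higher is better in this sketch)
    requested: float   # requested budget in dollars

def select_awards(proposals: list[Proposal],
                  ic_budgets: dict[str, float]) -> list[Proposal]:
    """Fund the best-scoring proposals within each IC's budget."""
    funded = []
    remaining = dict(ic_budgets)
    # Consider proposals IC by IC, best score first.
    for p in sorted(proposals, key=lambda p: (p.ic, -p.score)):
        if p.requested <= remaining.get(p.ic, 0.0):
            funded.append(p)
            remaining[p.ic] -= p.requested
    return funded

# Example usage with made-up numbers:
budgets = {"NCI": 2_000_000, "NIMH": 1_000_000}
apps = [
    Proposal("NCI", 40, 700_000),
    Proposal("NCI", 25, 800_000),
    Proposal("NCI", 15, 900_000),
    Proposal("NIMH", 30, 600_000),
]
print([f"{p.ic}:{p.score}" for p in select_awards(apps, budgets)])
```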

How does the NSF currently select research proposals for funding?

Research proposals are typically submitted in response to announced funding opportunities, which are organized around different programs (topics). Each proposal is sent by the Program Officer to at least three independent reviewers who do not work at the NSF. These reviewers judge the proposal on its Intellectual Merit and Broader Impacts. The Program Officer then uses the independent reviews to make a funding recommendation to the Division Director, who makes the final award/decline decision. More details can be found on the NSF’s webpage.

What data on grant funding at the NIH and NSF are currently (publicly) available?

The NIH and NSF both provide data on approved proposals. These data can be found on the NIH's RePORTER site and the NSF's Award Search site. However, these data do not provide any information on rejected applications, nor do they provide the underlying scores of approved proposals.
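As a concrete illustration of what is already public, the sketch below queries the NIH RePORTER API for funded projects. The endpoint and payload fields reflect our understanding of the RePORTER v2 API and should be checked against its current documentation; note that rejected applications and underlying review scores are not available through it.

```python
# Minimal sketch of pulling publicly available data on *funded* NIH awards
# from the RePORTER API (v2). The endpoint and field names reflect our
# understanding of that API and should be verified against its current
# documentation. Rejected applications and review scores are not exposed.
import requests

URL = "https://api.reporter.nih.gov/v2/projects/search"

payload = {
    "criteria": {"fiscal_years": [2022]},  # filter: awards from FY2022
    "offset": 0,
    "limit": 25,
}

resp = requests.post(URL, json=payload, timeout=30)
resp.raise_for_status()

for project in resp.json().get("results", []):
    print(project.get("project_num"), project.get("project_title"))
```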

Open Access to Federally-funded Research Data

Summary

The majority of scientific research data in the United States is not shared, meaning that our nation has vast untapped potential to fuel scientific advances. The Biden-Harris Administration can dramatically accelerate scientific progress by (i) requiring scientists who receive federal funding to share their research data and (ii) directing federal research agencies to coordinate to build an International Research Data Commons that allows research data to be easily discovered and shared.