Open scientific grant proposals to advance innovation, collaboration, and evidence-based policy

Grant writing is a significant part of a scientist’s work. While time-consuming, this process generates a wealth of innovative ideas and in-depth knowledge. However, much of this valuable intellectual output — particularly from the roughly 70% of proposals that go unfunded — remains unseen and underutilized. The default secrecy of scientific proposals rests on many valid concerns, yet it represents a significant loss of potential progress and a deviation from government priorities around openness and transparency in science policy. Making grant proposals publicly accessible could transform them into a rich resource for collaboration, learning, and scientific discovery, significantly enhancing the overall impact and efficiency of scientific research efforts.

We recommend that funding agencies implement a process by which researchers can opt to make their grant proposals publicly available. This would enhance transparency in research, encourage collaboration, and optimize the public-good impacts of the federal funding process.

Details

Scientists spend a great deal of time, energy, and effort writing applications for grant funding. Writing grants has been estimated to take roughly 15% of a researcher’s working hours and involves putting together an extensive assessment of the state of knowledge, identifying key gaps in understanding that the researcher is well-positioned to fill, and producing a detailed roadmap for how they plan to fill that knowledge gap over a span of (typically) two to five years. At major federal funding agencies like the National Institutes of Health (NIH) and National Science Foundation (NSF), the success rate for research grant applications tends to fall in the range of 20–30%.

The upfront labor required of scientists to pursue funding, and the low success rates of applications, have led some to estimate that ~10% of scientists’ working hours are “wasted.” Other scholars argue that the act of grant writing is itself a valuable and generative process that produces spillover benefits by incentivizing research effort and informing future scholarship. Under either viewpoint, one approach to reducing the “waste” and dramatically increasing the benefits of grant writing is to encourage proposals — both funded and unfunded — to be released as public goods, thus unlocking the knowledge, frontier ideas, and roadmaps for future research that are currently hidden from view.

The idea of grant proposals being made public is a sensitive one. Indeed, there are valid reasons for keeping proposals confidential, particularly when they contain intellectual property or proprietary information, or when they are in the early stages of development. However, these reasons do not apply to all proposals, and many potential concerns only apply for a short time frame. Therefore, neither full disclosure nor full secrecy is optimal; a more flexible approach that encourages researchers to choose when and how to share their proposals could yield significant benefits with minimal risks.

The potential benefits to the scientific community and science funders include unlocking the knowledge, frontier ideas, and research roadmaps currently hidden in unfunded proposals; enabling new collaborations; and improving learning across the research enterprise.

Recommendations 

Federal funding agencies should develop a process to allow and encourage researchers to share their grant proposals publicly, within existing infrastructures for grant reporting (e.g., NIH RePORTER). Sharing should be minimally burdensome and incorporated into existing application frameworks. The process should be flexible, allowing researchers to opt in or out — and to specify other characteristics like embargoes — to ensure applicants’ privacy and intellectual property concerns are mitigated. 

The White House Office of Management and Budget (OMB) should develop a framework for publicly sharing grant proposals.

The NSF should run a focused pilot program to assess opportunities and obstacles for proposal sharing across disciplines.

Based on the NSB’s report, OSTP and OMB should work with federal funding agencies to refine and implement a proposal-sharing process across agencies.

To learn more about the importance of opening science and to read the rest of the published memos, visit the Open Science Policy sprint landing page.

How an Obscure Law Shapes the Way the Public Engages with the Food and Drug Administration

Every day, the executive branch of the federal government makes transformative policy changes. When federal agencies need expert input, they turn to external experts and interested citizens through a series of public engagement mechanisms, from public meetings to public comment. Of these, only one mechanism, established under the Federal Advisory Committee Act (FACA), allows the executive branch to actively source consensus-based public advice and external experts to directly advise policymakers. And it’s a law many Americans have never heard of.

FACA enables agencies to create advisory committees

Enacted in 1972, FACA governs expert and public engagement with executive branch decision making. FACA articulates rules for the establishment, operation, and termination of advisory committees (ACs): groups of experts that federal agencies establish, manage, and use to provide external advice on key policy questions. At any given moment, there are roughly 1,000 active ACs across the federal government making crucial recommendations to agency leaders.

At the Food and Drug Administration (FDA), FACA is essential to the workings of the agency’s regulatory engine and public health mission. The FDA uses its ACs to provide independent advice on medical products (drugs and devices), providing a unique window for experts and the public to comment on cutting-edge medical products in the approvals pipeline. ACs capture the headlines through their “yes” or “no” votes on product approval, raising spirits or breaking hearts. Industry takes notice: medical product sponsors spend months preparing for these meetings, supported by a boutique industry geared to help them “ace” their AC meetings.

ACs need to be reformed to build public trust in the FDA

While ACs are a crucial transparency measure for an agency like the FDA that is currently grappling with declining public trust, the system has repeatedly come under fire. Recent controversies include the FDA’s public overruling of AC recommendations against approval for hydrocodone, an opioid pain reliever, and aducanumab, an Alzheimer’s treatment. After the aducanumab approval, several high-profile resignations exacerbated the trust issues. What’s more, the FDA’s use of ACs is in decline, with the percentage of new drugs reviewed by ACs decreasing almost tenfold from 2010 to 2021. These trends are in direct conflict with current whole-of-government efforts to modernize regulatory review and expand meaningful participation in the regulatory decision-making process. Advancing racial equity, opening up the scientific enterprise, and broadening public engagement in regulatory decisions will require transformative policy solutions for the FDA.

To re-envision how the FDA and other federal agencies engage external scientific experts and the public to address critical challenges facing public health, FAS is diving deep into how FACA is put into action at the FDA. Over the next year, FAS will be engaging AC members on their experiences in service, understanding key evidence needs at the agency that a reformed AC system could better meet, and scoping necessary process, regulatory, and statutory changes to the AC system. This will build upon our previous efforts: FAS has participated in and provided public comment to many AC meetings and documented how ACs are slow to respond to emerging questions of regulatory concern in our ongoing work to address bias in medical innovation. FAS has also documented strategies to improve science advice for the executive branch, including FACA reform. We invite you to follow this work and join us in calling for reforms that strengthen trust in the FDA Advisory Committee system.

Calls for systematic reform are coming from leadership across the FDA, yet consensus does not yet exist on what those reforms should look like. There is no shortage of ideas for what a new system could look like: eliminating voting requirements at meetings (a proposal already receiving Congressional scrutiny), broadening membership, including to members with conflicts of interest, and lengthening review timelines for sponsor materials before meetings. Non-profit leaders and academic researchers have also started coming together to make recommendations that address the FDA’s influence over Advisory Committee discussions and ongoing issues with agency leadership overruling AC votes. There could also be clearer requirements for the FDA to respond to AC recommendations and to set public timelines for agency action. Twenty-five Attorneys General recently called on the FDA to release updates on its actions on pulse oximetry one year after the AC meeting.

More broadly, the FDA can learn from other agencies with explicit policies guiding their public engagement, such as the Meaningful Involvement Policy at the Environmental Protection Agency. These FDA-specific recommendations build upon long-standing calls to reform FACA to reduce the administrative barriers that make it challenging to solicit expert advice when needed or lead some agencies to forgo processes that could invoke FACA altogether.

To improve patient care, it is essential to create a nimble, participatory, and transparent process that ensures regulated products will benefit the health of all Americans. AC reform will be essential to building the FDA’s capacity to address increasingly complex regulatory science challenges, from artificial intelligence, to real-world data, to emerging platform technologies, to health inequity, while also improving the federal government’s ability to more rapidly generate consensus-based science advice. FAS is excited to play its part in strengthening evidence-based policy through policy entrepreneurship: engaging stakeholders, developing roadmaps, and advocating for change.

How Unmet Desire Surveys Can Advance Learning Agendas and Strengthen Evidence-Based Policymaking

Summary

The 2018 Foundations for Evidence-Based Policymaking Act (Evidence Act) promotes a culture of evidence within federal agencies. A central part of that culture entails new collaboration between decision-makers and those with diverse forms of expertise inside and outside of the federal government. Federal chief evaluation officers lead these efforts, yet they face challenges getting buy-in from agency staff and obtaining sufficient resources. One tool to overcome these challenges is an “unmet desire survey,” which prompts agency staff to reflect on how the success of their programs relates to what is happening in other agencies and outside government, as well as to consider what information about these other programs and organizations would help their work be more effective. The unmet desire survey is an important data-gathering mechanism that also encourages evaluation officers to engage in matchmaking between agency staff and people who have the information they desire. Using existing authorities and resources, agencies can pilot unmet desire surveys as a concrete mechanism for advancing federal learning agendas in a way that builds buy-in by directly meeting the needs of agency staff.

Challenge and Opportunity

A core mission of the Evidence Act is to foster a culture of evidence-based decision-making within federal agencies. Since the problems agencies tackle are multidimensional, with the success of one government program often depending on the performance of others, new collaborative relationships between decision-makers in the federal government and those in other agencies and in organizations outside the federal government are essential to realizing the Evidence Act’s vision. Indeed, Office of Management and Budget (OMB) implementation guidance stresses that learning agendas are “an opportunity to align efforts and promote interagency collaboration in areas of joint focus or shared populations or goals” (OMB M-19-23), and that a culture of evidence “cannot happen solely at the top or in isolated analytical offices, but rather must be embedded throughout each agency…and adopted by the hardworking civil servants who serve on behalf of the American people” (OMB M-21-27). 

Chief evaluation officers at federal agencies are the main point people for fostering cultures of evidence. Yet they and their evaluation staff face many challenges, including getting buy-in from agency staff, understanding needs of program and operational offices that extend beyond those offices’ organizational boundaries, and coping with limited resources. Indeed, OMB guidance acknowledges that many agency staff may view learning agendas as just another compliance exercise.

This memo proposes a flexible tool that evaluation officers can use to generate buy-in among agency staff and leadership while also promoting collaboration as emphasized in OMB guidance and in the Evidence Act. The tool, which has already proven valuable in local government and in the nonprofit sector, is called an “unmet desire survey.” The survey measures unmet desires for collaboration by prompting staff to consider the following: 
  - What new information, if any, would help you increase the effectiveness of the programs you work on?
  - What types of connections with people in other agencies, or in organizations outside the federal government, would help your work be more effective (e.g., informal knowledge exchange or formal collaboration)?
  - What hesitations, if any, do you have about reaching out to people you do not know in other agencies?
  - What hesitations, if any, do you have about reaching out to people in organizations outside the federal government?
  - Why do these collaborative relationships not already exist?

Unmet desire surveys elicit critical insights about needs for connection and are highly flexible. For instance, in the first question posed above, evaluation officers can choose to ask staff about new information that would be helpful for any program or only about information relevant to programs that are top priorities for their agency. In other words, unmet desire surveys need not add one more thing to the plate; rather, they can be used to accelerate collaboration directly tied to current learning priorities. 

Unmet desire surveys also legitimize informal collaborative relationships. Too often, calls for new collaboration in the policy sphere immediately segue into overly structured meetings that fail to uncover promising areas for joint learning and problem-solving. Meetings across government agencies are often scripted presentations about each organization’s activities, providing little insight on ways they could partner to achieve better results. Policy discussions with outside research experts tend to focus on formal evaluations and long-term research projects that don’t surface opportunities to accelerate learning in the near term. In contrast, unmet desire surveys explicitly legitimize the idea that diverse thinkers may want to connect only for informal knowledge exchange rather than formal events or partnerships. Indeed, even single conversations can greatly impact decision-makers, and, of course, so can more intensive relationships.

While online platforms for spurring new collaborative relationships have been previously proposed, they have not achieved uptake at scale among federal policymakers. One reason for this is that the problem that needs to be solved is both factual and relational. In other words, the issue isn’t simply that strangers do not know each other—it’s also that strangers do not always know how to talk to one another. People care about how others relate to them and whether they can successfully relate to others. Uncertainty about relationality routinely stops people from interacting with others they do not know. This is why unmet desire surveys also include questions that directly measure hesitations about interacting with people from other agencies and organizations. 

After the surveys are administered, evaluation staff can use survey data to engage in matchmaking: brokering connections among people with similar goals but diverse expertise and helping overcome uncertainty about relationality so that new cross-agency and cross-sector collaborative relationships can take root. In sum, by deliberately inquiring about connections with others who have diverse forms of relevant expertise—and then making those connections anew—evaluation staff can generate greater enthusiasm and ownership among people who may not consider evaluation and evidence-building as part of their core responsibilities.

Plan of Action

Using existing authorities and resources, federal evaluation officers can take three steps to position unmet desire surveys as a standard component of the government’s evidence toolbox. 

Step 1. Design and implement pilot unmet desire surveys. 

Chief evaluation officers are well positioned to pilot unmet desire surveys within their agencies. While individual evaluation officers can work independently to design unmet desire surveys, it may be more fruitful to work together, via the Evaluation Officer Council, to design a baseline survey template. Chief evaluation officers could then work with their teams to adapt the baseline template to their agencies, including identifying which agency staff to prioritize as well as the best way to phrase particular questions (e.g., regarding the types of connections that employees want in order to improve the effectiveness of their work or the types of hesitancies to ask about). Given that the question content is highly flexible, unmet desire surveys can directly accelerate learning agendas and build buy-in at the same time. Thus, they can yield tangible, concrete benefits with very little upfront cost.

Step 2. Meet unmet desires by matchmaking. 

After the pilot surveys are administered, chief evaluation officers should act on their results by matchmaking. There are several ways to do this without new appropriations. One is for evaluation teams within agencies to engage in informal, low-lift matchmaking—wherein those who implement the survey also act as initial matchmakers—as an early proof of concept. A second option is to bring on short-term matchmakers through flexible hiring mechanisms (e.g., the Intergovernmental Personnel Act). Documenting successes and lessons learned then sets the stage for using agency-specific discretionary funds to hire one or more in-house matchmakers in longer-term or staff appointments.

Step 3. Collect information on successes and lessons learned from the pilot.

Unmet desire surveys can be tricky to field because they entail asking employees about topics they may not be used to thinking about. It often takes some trial and error to figure out the best ways to ask about employees’ substantive goals and their hesitations about interacting with people they do not know. Piloting unmet desire surveys and follow-on matchmaking can not only demonstrate value (e.g., the impact of new collaborative relationships fostered through these combined efforts) to justify further investment but also suggest how evaluation leads might best structure future unmet desire surveys and subsequent matchmaking.

Conclusion

An unmet desire survey is an adaptable tool that can reveal fruitful pathways for connection and collaboration. Indeed, unmet desire surveys leverage the science of collaboration by ensuring that efforts to broker connections among strangers consider both substantive goals and uncertainty about relationality. Chief evaluation officers can pilot unmet desire surveys using existing authorities and resources, and then use the information gathered to identify opportunities for productive matchmaking. Ultimately, positioning the survey as a standard component of the government’s evidence toolbox has great potential to support agency staff in advancing federal learning agendas and building a robust culture of evidence across the U.S. government.

Frequently Asked Questions
Who should unmet desire surveys be administered to?

The best place to start—especially when resources are limited—is with potential evidence champions. These are people who already have an idea of what information would help them improve the impact of the programs they run and which people would be helpful to collaborate with. These potential evidence champions may not self-identify as such; rather, they may see themselves as falling into other categories, such as customer-experience experts, bureaucracy hackers, process innovators, or policy entrepreneurs. Regardless of terminology, the unmet desire survey provides people who are already motivated to collaborate and connect with a clear opportunity to articulate their needs. Evaluation staff can then respond by matchmaking to stimulate new and productive relationships for those people.

Who should conduct an unmet desire survey?

The administrator should be someone with whom agency staff feel comfortable discussing their needs (e.g., a member of an agency evaluation team) and who is able to effectively facilitate matchmaking—perhaps because of their network or their reputation within the agency. The latter criterion helps ensure that staff expect useful follow-up, which in turn motivates completion of the survey and participation in follow-on activities; it also generates enthusiasm for engaging in new collaborative relationships (as well as broader buy-in for the learning agenda). In some cases, it may make the most sense to have multiple people from an evaluation team survey different agency staff, or to co-sponsor the survey with agency innovation offices. Explicit support from agency leadership for the survey and follow-on activities is also crucial for achieving staff buy-in.

What questions should be asked in an unmet desire survey?

The bulleted list in the body of the memo illustrates the types of questions that an unmet desire survey might ask. Yet survey content is meant to be tailored and agency-specific. For instance, the first suggested question about information that would help increase program effectiveness can be left entirely open-ended or be focused on programs related to learning-agenda priorities. Similarly, the second suggested question may invite responses related to either informal or formal collaboration, or instead may only ask about knowledge exchange (a relatively lower commitment that may be more palatable to agency leadership). The third and fourth questions should refer to specific types of hesitancy that survey administrators believe are most likely (e.g., ask about a few hesitancies that seem most likely to arise, such as lack of explicit permission, concerns about saying something inappropriate, or concerns about lack of trustworthy information). The final question about why these collaborations don’t exist can similarly be left broad or include a few examples to help spark ideas.

Who should conduct matchmaking in response to an unmet desire survey?

Again, the answer will be agency-specific. In many organizations, matchmaking happens informally. Formalizing this duty as a part of one or more people’s official responsibilities sends a signal about how much this work is valued. Exactly who those people are will depend on the agency’s structure, as well as on whether there are already people in a given agency who see matchmaking as part of their job.

When is the right time to field an unmet desire survey?

While unmet desire surveys can be done anytime and on a continuous basis, it is best to field them when there is identified staff capacity for follow-on matchmaking and employee willingness to build collaborative relationships.

Public Value Evidence for Public Value Outcomes: Integrating Public Values into Federal Policymaking

Summary

The federal government––through efforts like the White House Year of Evidence for Action––has made a laudable push to ensure that policy decisions are grounded in empirical evidence. While these efforts acknowledge the importance of social, cultural, and Indigenous knowledges, they do not draw adequate attention to the challenges of generating, operationalizing, and integrating such evidence in routine policy and decision making. In particular, these endeavors are generally poor at incorporating the living and lived experiences, knowledge, and values of the public. This evidence—which we call evidence about public values—provides important insights for decision making and contributes to better policy or program designs and outcomes.

The federal government should broaden institutional capacity to collect and integrate evidence on public values into policy and decision making. Specifically, we propose that the White House Office of Management and Budget (OMB) and the White House Office of Science and Technology Policy (OSTP): 

  1. Provide a directive on the importance of public value evidence.
  2. Develop an implementation roadmap for integrating public value evidence into federal operations (e.g., describing best practices for integrating it into federal decision making and developing skill-building opportunities for federal employees).

Challenge and Opportunity

Evidence about public values informs and improves policies and programs

Evidence about public values is, to put it most simply, information about what people prioritize, care about, or think about with respect to a particular issue, which may differ from the ideas prioritized by experts. It includes data collected through focus groups, deliberations, citizen review panels, community-based research, and public opinion surveys. Some of these methods rely on one-way flows of information (e.g., surveys), while others prioritize mutual exchange of information among policymakers and participating publics (e.g., deliberations).

Agencies facing complex policymaking challenges can utilize evidence about public values––along with expert- and evaluation-based evidence––to ensure decisions truly serve the broader public good. If collected as part of the policy-making process, evidence about public values can inform policy goals and programs in real time, including when program goals are taking shape or as programs are deployed. 

Evidence about public values within the federal government: three challenges to integration

To fully understand and use public values in policymaking, the U.S. government must first broadly address three challenges.

First, the federal government does not sufficiently value evidence about public values when it researches and designs policy solutions. Federal employees often lack any directive or guidance from leadership indicating that collecting evidence about public values is valuable or important to evidence-based decision making. Efforts like the White House Year of Evidence for Action seek to better integrate evidence into policymaking. Yet––for many contexts and topics––scientific or evaluation-based evidence is just one type of evidence. The public’s wisdom, hopes, and perspectives play an important mediating role in determining and achieving desired public outcomes. The following examples illustrate ways public value evidence can support federal decision making:

  1. An effort to implement climate intervention technologies (e.g., solar geoengineering) might be well-grounded in evidence from the scientific community. However, that same strategy may not consider the diverse values Americans hold about (i) how such research might be governed, (ii) who ought to develop those technologies, and (iii) whether or not they should be used at all. Public values are imperative for such complex, socio-technical decisions if we are to make good on the Year of Evidence’s dual commitment to scientific integrity (including expanded concepts of expertise and evidence) and equity (better understanding of “what works, for whom, and under what circumstances”). 
  2. Decision making about the impacts of rising sea levels on national park infrastructure and protected features has historically been contentious. To acknowledge the social-environmental complexity in play, park leadership has strived to include both expert assessments and engagement with publics on their own risk tolerance for various mitigation measures. This has helped officials prioritize limited resources as they consider tough decisions about what and how to continue preserving various park features and artifacts. 

Second, the federal government lacks effective mechanisms for collecting evidence about public values. Presently, public comment periods favor credentialed participants—advocacy groups, consultants, business groups, etc.—who possess established avenues for sharing their opinions and positions with policymakers. As a result, these credentialed participants shape policy while other experiences, voices, and inputs go unheard. While the general public can contribute to government programs through platforms like Challenge.gov, credentialed participants still tend to dominate these processes. Effective mechanisms for bringing public values into decision making or research are generally confined to university, local government, and community settings. These methods include participatory budgeting, methods from usable or co-produced science, and participatory technology assessment. Some of these methods have been developed and applied to complex science and technology policy issues in particular, including climate change and various emerging technologies. Their use in federal agencies is far more limited. Even when an agency might seek to collect public values, it may be impeded by regulatory hurdles such as the Paperwork Reduction Act (PRA), which can limit the collection of public values, ideas, or other input due to potentially long approval timelines and the perceived data-collection burden on the public. Cumulatively, these factors prevent agencies from accurately gauging––and being adaptive to––public responses. 

Third, federal agencies face challenges integrating evidence about public values into policy making. These challenges can be rooted in the regulatory hurdles described above, difficulties integrating with existing processes, and unfamiliarity with the benefits of collecting evidence about public values. Fortunately, studies have found specific attributes present among policymakers and agencies that allowed for the implementation and use of mechanisms for capturing public values. These attributes included: 

  1. Leadership who prioritized public involvement and helped address administrative uncertainties.
  2. An agency culture responsive to broader public needs, concerns, and wants.
  3. Agency staff familiar with mechanisms to capture public values and integrate them into the policy- and decision-making process. Such staff can help address translation issues, navigate regulatory hurdles, and better communicate the benefits of collecting public values with regard to agency needs. Unfortunately, many agencies do not have such staff, and there are no existing roadmaps or professional development programs to help build this capacity across agencies. 

Aligning public values with current government policies promotes scientific integrity and equity

The White House Year of Evidence for Action presents an opportunity to address the primary challenges––namely, a lack of clear direction, collection protocols, and evidence integration strategies––currently impeding the widespread use of public value evidence in the federal government. Our proposal below is well aligned with the Year of Evidence’s central commitments. 

Furthermore, this proposal aligns with the goals of the Year of Evidence for Action to “share leading practices to generate and use research-backed knowledge to advance better, more equitable outcomes for all America…” and to “…develop new strategies and structures to promote consistent evidence-based decision-making inside the Federal Government.” 

Plan of Action

To integrate public values into federal policy making, the White House Office of Management and Budget (OMB) and the White House Office of Science and Technology Policy (OSTP) should: 

  1. Develop a high-level directive for agencies about the importance of collecting public values as a form of evidence to inform policy making.
  2. Oversee the development of a roadmap for the integration of evidence about public values across government, including pathways for training federal employees. 

Recommendation 1. OMB and OSTP should issue a high-level directive providing clear direction and strong backing for agencies to collect and integrate evidence on public values into their evidence-based decision-making procedures. 

Given the potential utility of integrating public value evidence into science and technology policy, as well as OSTP’s involvement in efforts to promote evidence-based policy, OSTP makes a natural partner in crafting this directive alongside OMB. This directive should clearly connect public value evidence to the current policy environment. As described above, efforts like the Foundations for Evidence-Based Policymaking Act (Evidence Act) and the White House Year of Evidence for Action provide a strong rationale for the collection and integration of evidence about public values. Longer-standing policies––including the Crowdsourcing and Citizen Science Act––provide further context and guidance for the importance of collecting input from broad publics.

Recommendation 2. As part of the directive, or as a follow up to it, OMB and OSTP should oversee the development of a roadmap for integrating evidence about public values across government. 

The roadmap should be developed in consultation with various federal stakeholders, such as members of the Evaluation Officer Council, representatives from the Equitable Data Working Group, customer experience strategists, and relevant conceptual and methods experts from within and outside the government.

A comprehensive roadmap would include the following components:

Conclusion

Collecting evidence about the living and lived experiences, knowledge, and aspirations of the public can help inform policies and programs across government. While methods for collecting evidence about public values have proven effective, they have not been integrated into evidence-based policy efforts within the federal government. The integration of evidence about public values into policy making can promote the provision of broader public goods, elevate the perspectives of historically marginalized communities, and reveal policy or program directions different from those prioritized by experts. The proposed directive and roadmap––while only a first step––would help ensure the federal government considers, respects, and responds to our diverse nation’s values.

Frequently Asked Questions
Which agencies or areas of government could use public value evidence?

Federal agencies can use public value evidence where additional information about what the public thinks, prioritizes, and cares about could improve programs and policies. For example, policy decisions characterized by high uncertainty, potential value disputes, and high stakes could benefit from a broader review of considerations by diverse members of the public to ensure that novel options and unintended consequences are considered in the decision-making process. In the context of science- and technology-related decision making, Silvio Funtowicz and Jerome Ravetz termed such situations “post-normal science.” They called for an extension of who counts as a subject matter expert in the face of such challenges, citing the potential for technical analyses to overlook important societal values and considerations.

Why should OSTP be engaged in furthering the use of public value evidence?

Many issues where science and technology meet societal needs and policy considerations warrant broad public value input. These issues include emerging technologies with societal implications and existing S&T challenges that have far reaching impacts on society (e.g., climate change). Further, OSTP is already involved in Evidence for Action initiatives and can assist in bringing in external expertise on methods and approaches.

Why do we need this sort of evidence when public values are represented by elected officials?

While guidance from elected officials is an important mechanism for representing public values, evidence collected about public values through other means can be tailored to specific policy making contexts and can explore issue-specific challenges and opportunities. 

Are there any examples of public value evidence being used in the government?

There are likely more examples of public value evidence being identified and integrated across government than we can enumerate here. The roadmap-building process should involve identifying those examples and finding common language to describe diverse public value evidence efforts across government. For specific known examples, see footnotes 1 and 2.

Is evidence about public values different from evidence collected about evaluations?

Evidence about public values might include evidence collected through program and policy evaluations, but it encompasses broader types of evidence. The evaluation of policies and programs generally focuses on assessing effectiveness or efficiency. Evidence about public values would be used to address broader questions about the aims or goals of a program or policy.

Strengthening Policy by Bringing Evidence to Life

Summary

In a 2021 memorandum, President Biden instructed all federal executive departments and agencies to “make evidence-based decisions guided by the best available science and data.” This policy is sound in theory but increasingly difficult to implement in practice. With millions of new scientific papers published every year, parsing and acting on research insights presents a formidable challenge.

A solution, and one that has proven successful in helping clinicians effectively treat COVID-19, is to take a “living” approach to evidence synthesis. Conventional systematic reviews, meta-analyses, and associated guidelines and standards are published as static products and updated infrequently (e.g., every four to five years), if at all. This approach is inefficient and produces evidence products that quickly go out of date. It also leads to research waste and poorly allocated research funding.

By contrast, emerging “Living Evidence” models treat knowledge synthesis as an ongoing endeavor. By combining (i) established, scientific methods of summarizing science with (ii) continuous workflows and technology-based solutions for information discovery and processing, Living Evidence approaches yield systematic reviews, and other evidence and guidance products, that are always current.

The recent launch of the White House Year of Evidence for Action provides a pivotal opportunity to harness the Living Evidence model to accelerate research translation and advance evidence-based policymaking. The federal government should consider a two-part strategy to embrace and promote Living Evidence. The first part of this strategy positions the U.S. government to lead by example by embedding Living Evidence within federal agencies. The second part focuses on supporting external actors in launching and maintaining Living Evidence resources for the public good.

Challenge and Opportunity

We live in a time of veritable “scientific overload”. The number of scientific papers in the world has surged exponentially over the past several decades (Figure 1), and millions of new scientific papers are published every year. Making sense of this deluge of documents presents a formidable challenge. For any given topic, experts have to (i) scour the scientific literature for studies on that topic, (ii) separate out low-quality (or even fraudulent) research, (iii) weigh and reconcile contradictory findings from different studies, and (iv) synthesize study results into a product that can usefully inform both societal decision-making and future scientific inquiry.

This process has evolved over several decades into a scientific method known as “systematic review” or “meta-analysis”. Systematic reviews and meta-analyses are detailed and credible, but often take over a year to produce and rapidly go out of date once published. Experts often compensate by drawing attention to the latest research in blog posts, op-eds, “narrative” reviews, informal memos, and the like. But while such “quick and dirty” scanning of the literature is timely, it lacks scientific rigor. Hence those relying on “the best available science” to make informed decisions must choose between summaries of science that are reliable or current…but not both.

The lack of trustworthy and up-to-date summaries of science constrains efforts, including efforts championed by the White House, to promote evidence-informed policymaking. It also leads to research waste when scientists conduct research that is duplicative and unnecessary, and degrades the efficiency of the scientific ecosystem when funders support research that does not address true knowledge gaps.

Figure 1

Total number of scientific papers published over time, according to the Microsoft Academic Graph (MAG) dataset. (Source: Herrmannova and Knoth, 2016)

The emerging Living Evidence paradigm solves these problems by treating knowledge synthesis as an ongoing rather than static endeavor. By combining (i) established, scientific methods of summarizing science with (ii) continuous workflows and technology-based solutions for information discovery and processing, Living Evidence approaches yield systematic reviews that are always up to date with the latest research. An opinion piece published in The New York Times called this approach “a quiet revolution to surface the best-available research and make it accessible for all.”

To take a Living Evidence approach, multidisciplinary teams of subject-matter experts and methods experts (e.g., information specialists and data scientists) first develop an evidence resource—such as a systematic review—using standard approaches. But the teams then commit to regular updates of the evidence resource at a frequency that makes sense for their end users (e.g., once a month). Using technologies such as natural-language processing and machine learning, the teams continually monitor online databases to identify new research. Any new research is rapidly incorporated into the evidence resource using established methods for high-quality evidence synthesis. Figure 2 illustrates how Living Evidence builds on and improves traditional approaches for evidence-informed development of guidelines, standards, and other policy instruments.

Figure 2

Illustration of how a Living Evidence approach to development of evidence-informed policies (such as clinical guidelines) is more current and reliable than traditional approaches. (Source: Author-developed graphic)
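The update cycle described above can be sketched in code. The snippet below is an illustrative simplification, not any project's actual tooling: the keyword screen is a hypothetical stand-in for the natural-language-processing and machine-learning classifiers that real Living Evidence teams use, and the `Record` and `LivingReview` names are invented for this example.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Record:
    """One research paper surfaced by monitoring an online database."""
    title: str
    published: date

@dataclass
class LivingReview:
    topic_terms: set[str]
    included: list[Record] = field(default_factory=list)
    last_updated: date = date.min

    def screen(self, record: Record) -> bool:
        # Stand-in for the ML/NLP classifier: flag a record whose
        # title mentions any topic term. Real projects use trained
        # classifiers plus human screening.
        words = {w.strip(".,").lower() for w in record.title.split()}
        return bool(self.topic_terms & words)

    def update(self, feed: list[Record], today: date) -> list[Record]:
        # One "living" cycle: consider only records that appeared
        # since the last update, screen them, and incorporate hits
        # into the evidence resource.
        new = [r for r in feed if r.published > self.last_updated]
        hits = [r for r in new if self.screen(r)]
        self.included.extend(hits)
        self.last_updated = today
        return hits
```

Run monthly (or at whatever cadence suits end users), each `update` call incorporates only the increment of new research, which is what keeps the workload flat relative to redoing a full review.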

Living Evidence products are more trusted by stakeholders, enjoy greater engagement (up to a 300% increase in access/use, based on internal data from the Australian Stroke Foundation), and support improved translation of research into practice and policy. Living Evidence holds particular value for domains in which research evidence is emerging rapidly, current evidence is uncertain, and new research might change policy or practice. For example, Nature has credited Living Evidence with “help[ing] chart a route out” of the worst stages of the COVID-19 pandemic. The World Health Organization (WHO) has since committed to using the Living Evidence approach as the organization’s “main platform” for knowledge synthesis and guideline development across all health issues. 

Yet Living Evidence approaches remain underutilized in most domains. Many scientists are unaware of Living Evidence approaches, and the minority who are familiar often lack the tools and incentives to carry out Living Evidence projects directly. The result is an “evidence to action” pipeline far leakier than it needs to be, even as entities like government agencies need credible and up-to-date evidence to efficiently and effectively translate knowledge into impact.

It is time to change the status quo. The 2019 Foundations for Evidence-Based Policymaking Act (“Evidence Act”) advances “a vision for a nation that relies on evidence and data to make decisions at all levels of government.” The Biden Administration’s “Year of Evidence” push has generated significant momentum around evidence-informed policymaking. Demonstrated successes of Living Evidence approaches with respect to COVID-19 have sparked interest in these approaches specifically. The time is ripe for the federal government to position Living Evidence as the “gold standard” of evidence products—and the United States as a leader in knowledge discovery and synthesis.

Plan of Action

The federal government should consider a two-part strategy to embrace and promote Living Evidence. The first part of this strategy positions the U.S. government to lead by example by embedding Living Evidence within federal agencies. The second part focuses on supporting external actors in launching and maintaining Living Evidence resources for the public good. 

Part 1. Embedding Living Evidence within federal agencies

Federal science agencies are well positioned to carry out Living Evidence approaches directly. Living Evidence requires “a sustained commitment for the period that the review remains living.” Federal agencies can support the continuous workflows and multidisciplinary project teams needed for excellent Living Evidence products.

In addition, Living Evidence projects can be very powerful mechanisms for building effective, multi-stakeholder partnerships that last—a key objective for a federal government seeking to bolster the U.S. scientific enterprise. A recent example is Wellcome Trust’s decision to fund suites of living systematic reviews in mental health as a foundational investment in its new mental-health strategy, recognizing this as an important opportunity to build a global research community around a shared knowledge source. 

Greater interagency coordination and external collaboration will facilitate implementation of Living Evidence across government. As such, President Biden should issue an Executive Order establishing a Living Evidence Interagency Policy Committee (LEIPC) modeled on the effective Interagency Arctic Research Policy Committee (IARPC). The LEIPC would be chartered as an Interagency Working Group of the National Science and Technology Council (NSTC) Committee on Science and Technology Enterprise, and chaired by the Director of the White House Office of Science and Technology Policy (OSTP; or their delegate). Membership would comprise representatives from federal science agencies, including agencies that currently create and maintain evidence clearinghouses, other agencies deeply invested in evidence-informed decision making, and non-governmental experts with deep experience in the practice of Living Evidence and/or associated capabilities (e.g., information science, machine learning).

Supporting federal implementation of Living Evidence

Widely accepted guidance for living systematic reviews (LSRs), one type of Living Evidence product, has been published. The LEIPC—working closely with OSTP, the White House Office of Management and Budget (OMB), and the federal Evaluation Officer Council (EOC)—should adapt this guidance for the U.S. federal context, resulting in an informational resource for federal agencies seeking to launch or fund Living Evidence projects. The guidance should also be used to update systematic-review processes used by federal agencies and organizations contributing to national evidence clearinghouses.2

Once the federally tailored guidance has been developed, the White House should direct federal agencies to consider and pursue opportunities to embed Living Evidence within their programs and operations. The policy directive could take the form of a Presidential Memorandum, a joint management memo from the heads of OSTP and OMB, or similar. This directive would (i) emphasize the national benefits that Living Evidence could deliver, and (ii) provide agencies with high-level justification for using discretionary funding on Living Evidence projects and for making decisions based on Living Evidence insights.

Identifying priority areas and opportunities for federally managed Living Evidence projects

The LEIPC—again working closely with OSTP, OMB, and the EOC—should survey the federal government for opportunities to deploy Living Evidence internally. Box 1 provides examples of opportunities that the LEIPC could consider.

The product of this exercise should be a report that describes each of the opportunities identified, and recommends priority projects to pursue. In developing its priority list, the LEIPC should account for both the likely impact of a potential Living Evidence project as well as the near-term feasibility of that project. While the report could outline visions for ambitious Living Evidence undertakings that would require a significant time investment to realize fully (e.g., transitioning the entire National Climate Assessment into a frequently updated “living” mode), it should also scope projects that could be completed within two years and serve as pilots/proofs of concept. Lessons learned from the pilots could ultimately inform a national strategy for incorporating Living Evidence into federal government more systematically. Successful pilots could continue and grow beyond the end of the two-year period, as appropriate.

Fostering greater collaboration between government and external stakeholders

The LEIPC should create an online “LEIPC Collaborations” platform that connects researchers, practitioners, and other stakeholders both inside and outside government. The platform would emulate IARPC Collaborations, which has built out a community of more than 3,000 members and dozens of communities of practice dedicated to the holistic advancement of Arctic science. As one stakeholder has explained:

LEIPC Collaborations could deliver the same participatory opportunities and benefits for members of the evidence community, facilitating holistic advancement of Living Evidence.

Part 2. Make it easier for scientists and researchers to develop LSRs

Many government efforts could be supported by internal Living Evidence initiatives, but not every valuable Living Evidence effort should be conducted by government. Many useful Living Evidence programs will require deep domain knowledge and specialized skills that teams of scientists and researchers working outside of government are best positioned to deliver.

But experts interested in pursuing Living Evidence efforts face two major difficulties. The first is securing funding. Very little research funding is awarded for the sole purpose of conducting systematic reviews and other types of evidence syntheses. The funding that is available is typically not commensurate with the resource and personnel needs of a high-quality synthesis. Living Evidence demands efficient knowledge discovery and the involvement of multidisciplinary teams possessing overlapping skill sets. Yet federal research grants are often structured in a way that precludes principal investigators from hiring research software engineers or from founding co-led research groups.

The second is aligning with incentives. Systematic reviews and other types of evidence syntheses are often not recognized as “true” research outputs by funding agencies or university tenure committees—i.e., they are often not given the same weight in research metrics, despite (i) utilizing well-established scientific methodologies involving detailed protocols and advanced data and statistical techniques, and (ii) resulting in new knowledge. The result is that talented experts are discouraged from investing their time on projects that can contribute significant new insights and could dramatically improve the efficiency and impact of our nation’s research enterprise.

To begin addressing these problems, the two biggest STEM-funding agencies—NIH and NSF—should consider the following actions:

  1. Perform a landscape analysis of federal funding for evidence synthesis. Rigorously documenting the funding opportunities available (or lack thereof) for researchers wishing to pursue evidence synthesis will help NIH and NSF determine where to focus potential new opportunities. The landscape analysis should consider currently available funding opportunities for systematic, scoping, and rapid reviews, and could also include surveys and focus groups to assess the appetite in the research community for pursuing additional evidence-synthesis activities if supported.
  2. Establish new grant opportunities designed to support Living Evidence projects. The goal of these grant opportunities would be to deliver definitive and always up-to-date summaries of research evidence and associated data in specified topics. The opportunities could align with particular research focuses (for instance, a living systematic review on tissue-electronic interfacing could facilitate progress on bionic limb development under NSF’s current “Enhancing Opportunities for Persons with Disabilities” Convergence Accelerator track). The opportunities could also be topic-agnostic, but require applicants to justify a proposed project by demonstrating that (i) the research evidence is emerging rapidly, (ii) current evidence is uncertain, and (iii) new research might materially change policy or practice.
  3. Increase support for career research staff in academia. Although contributors to Living Evidence projects can cycle in and out (analogous to turnover in large research collaboratives), such projects benefit from longevity in a portion of the team. With this core team in place, Living Evidence projects are excellent avenues for grad students to build core research skills, including in research study design. 
  4. Leverage prestigious existing grant programs and awards to incentivize work on Living Evidence. For instance, NSF could encourage early-career faculty to propose LSRs in applications for CAREER grants.
  5. Recognize evidence syntheses as research outputs. In all assessments of scientific track record (particularly research-funding schemes), systematic reviews and other types of rigorous evidence synthesis should be recognized as research outputs equivalent to “primary” research. 

The grant opportunities should also:

Conclusion

Policymaking can only be meaningfully informed by evidence if underpinning systems for evidence synthesis are robust. The Biden administration’s Year of Evidence for Action provides a pivotal opportunity to pursue concrete actions that strengthen use of science for the betterment of the American people. Federal investment in Living Evidence is one such action. 

Living Evidence has emerged as a powerful mechanism for translating scientific discoveries into policy and practice. The Living Evidence approach is being rapidly embraced by international actors, and the United States has an opportunity to position itself as a leader. A federal initiative on Living Evidence will contribute additional energy and momentum to the Year of Evidence for Action, ensure that our nation does not fall behind on evidence-informed policymaking, and arm federal agencies with the most current and best-available scientific evidence as they pursue their statutory missions.

Frequently Asked Questions
Which sectors and scientific fields can use Living Evidence?
The Living Evidence model can be applied to any sector or scientific field. While the Living Evidence model has so far been most widely applied to the health sector, Living Evidence initiatives are also underway in other fields, such as education and climate sciences. Living Evidence is domain-agnostic: it is simply an approach that builds on existing, rigorous evidence-synthesis methods with a novel workflow of frequent and rapid updating.
What is needed to run a successful Living Evidence project?
It does not take long for teams to develop sufficient experience and expertise to apply the Living Evidence model. The key to a successful Living Evidence project is a team that possesses experience in conventional evidence synthesis, strong project-management skills, an orientation towards innovation and experimentation, and investment in building stakeholder engagement.
How much does Living Evidence cost?
As with evidence synthesis in general, cost depends on topic scope and the complexity of the evidence being appraised. Budgeting for Living Evidence projects should distinguish the higher cost of conducting an initial “baseline” systematic review from the lower cost of maintaining the project thereafter. Teams initiating a Living Evidence project for the first time should also budget for the inevitable experimentation and training required.
Do Living Evidence initiatives require recurrent funding?
No. Living Evidence initiatives are analogous to other significant scientific programs that may extend over many years, but receive funding in discrete, time-bound project periods with clear deliverables and the opportunity to apply for continuation funding. 


Living Evidence projects do require funding for enough time to complete the initial “baseline” systematic review (typically 3–12 months, depending on scope and complexity), transition to maintenance (“living”) mode, and continue in living mode for sufficient time (usually about 6–12 months) for all involved to become familiar with maintaining and using the living resource. Hence Living Evidence projects work best when fully funded for a minimum of two years.
If there is support for funding beyond this minimum period, there are operational advantages to instantiating the follow-on funding before the previous funding period concludes. If follow-on funding is not immediately available, Living Evidence resources can simply revert to a conventional static form unless and until follow-on funding becomes available.

Is Living Evidence sustainable?
Living Evidence is rapidly gaining momentum as organizations conclude that the conventional model of evidence synthesis is no longer sustainable because the volume of research that must be reviewed and synthesized for each update has grown beyond the capacity of typical project teams. Organizations that transition their evidence resources into “living” mode typically find the dynamic synthesis model to be more consistent, more feasible, easier to manage, and easier to plan for and resource. If the conventional model of intermittent synthesis is like climbing a series of mountains, the Living Evidence approach is like hiking up to and then walking across a plateau.
How can organizations that are already struggling to develop and update conventional evidence resources take on a Living Evidence project?
New initiatives usually need specific resourcing; Living Evidence is no different. The best approach is to identify a champion within the organization who has an innovation orientation and sufficient authority to effect change. The champion plays a key role in building organizational buy-in, particularly from senior leaders, key influencers within the main evidence program, and major partners, stakeholders, and end users. Ultimately, the champion (or their surrogate) should be empowered and resourced to establish 1–3 Living Evidence pilots running alongside the organization’s existing evidence activities. Risk can be reduced by starting small and building a “minimum viable product” Living Evidence resource (i.e., by finding a topic area that is relatively modest in scope, of importance to stakeholders, and characterized by evidence uncertainty as well as relatively rapid movement in the relevant research field). Funding should be structured to enable experimentation and iteration, and then move quickly to scale up, increasing the scope of evidence moving into living mode as organizational and stakeholder experience and support builds.
Living Evidence sounds neverending. Wouldn’t that lead to burnout in the project team?
One of the advantages of the Living Evidence model is that the project team can gradually evolve over time (members can join and leave as their interests and circumstances change). This is analogous to the evolution of an ongoing scientific network or research collaborative. In contrast, the spikes in workload required for intermittent updates of conventional evidence products often lead to burnout and loss of institutional memory. Furthermore, teams working on Living Evidence are often motivated by participation in an innovative approach to evidence and pride in contributing to a definitive, high-quality, and highly impactful scientific initiative.
How is Living Evidence disseminated?

While dissemination of conventional evidence products involves sharing several dozen key messages in a once-in-several-years communications push, dissemination of Living Evidence amounts to a regular cycle of “what’s new” updates (typically one to two key insights). Living Evidence dissemination feeds become known and trusted by end users, inspiring confidence that end users can “keep up” with the implications of new research. Publication of Living Evidence can take many forms. Typically, the core evidence resource is housed on an organizational website that can be updated easily and frequently, sometimes with the ability for users to access previous versions of the resource. Living Evidence may also be published as articles in academic journals, either as intermittent overviews of the evidence resource with links back to the main Living Evidence summaries or (more ambitiously) as a series of frequently updated, logically linked versions of an article. Multiple academic journals are innovating to better support “living” publications.

If Living Evidence products are continually updated, doesn’t that confuse end users with constantly changing conclusions?
Living Evidence requires continual monitoring for new research, as well as frequent and rapid incorporation of new research into existing evidence products. The volume of research identified and incorporated can vary from dozens of studies each month to a few each year, depending on the topic scope and research activity.


Even across broad topics in fast-moving research fields, though, the overall findings and conclusions of Living Evidence products change infrequently, since the threshold for changing a conclusion drawn from a whole body of evidence is high. The largest Living Evidence projects in existence yield only about one to two new major findings or recommendations per update. Furthermore, any good evidence-synthesis product will contextualize conclusions and recommendations with associated levels of confidence.

What are the implications of Living Evidence for stakeholder engagement?
Living Evidence projects, due to their persistent nature, are great opportunities for building partnerships with stakeholders. Stakeholders tend to be energized and engaged in an innovative project that gives them, their staff, and their constituencies a tractable mechanism by which to engage with the “current state of science”. In addition, the ongoing nature of a Living Evidence project means that project partnerships are always active. Stakeholders are continually engaged in meaningful, collaborative discussions and activities around the current evidence. Finally, this ongoing, always-active nature of Living Evidence projects creates “accumulative” partnerships that gradually broaden and deepen over time.
What are the equity implications of taking a Living Evidence approach?
Living Evidence resources make the latest science available to all. Conventionally, the lack of high-quality summaries of science has meant the latest science is discovered and adopted by those closest to centers of excellence and expertise. Rapid incorporation of the latest science into Living Evidence resources—as well as the wide promotion and dissemination of that science—means that the immediate benefits of science can be shared much more broadly, contributing to equity of access to science and its benefits.
What are the implications of Living Evidence for knowledge translation?
The activities that use research outputs and evidence resources (such as Living Evidence) to change practice and policy are often referred to as “knowledge translation”. These activities are substantial and often multifaceted interventions that identify and address the complex structural, organizational, and cultural barriers that impede knowledge use. 


Living Evidence has the potential to accelerate knowledge translation: not because of any changes to the knowledge-translation enterprise, but because Living Evidence identifies earlier the high-certainty evidence that underpins knowledge-translation activities.

Living Evidence may also enhance knowledge translation in two ways. First, Living Evidence is a better evidence product and has been shown to increase trust, engagement, and intention to use among stakeholders. Second, as mentioned above, Living Evidence creates opportunities for deep and effective partnerships. Together, these advantages could position Living Evidence to yield a more effective “enabling environment” for knowledge translation.

Does Living Evidence require use of technologies like machine learning?
Technologies such as natural language processing, machine learning, and citizen science (crowdsourcing), as well as efforts to build common data structures (and create Findable, Accessible, Interoperable, and Reusable (FAIR) data), are advancing alongside Living Evidence. These technologies are often described as “enablers” of Living Evidence. While such technologies are commonly used and developed in Living Evidence projects, they are not essential. Nevertheless, over the longer term, such technologies will likely be indispensable for creating sustainable systems that make sense of science.

Playbook For Opening Federal Government Data — How Executive & Legislative Leadership Can Help

Summary

Enabling government data to be freely shared and accessed can expedite research and innovation in high-value disciplines, create opportunities for economic development, increase citizen participation in government, and inform decision-making in both public and private sectors. Each day government data remains inaccessible, the public, researchers, and policymakers lose an opportunity to leverage data as a strategic asset to improve social outcomes.

Though federal agencies and policymakers alike support the idea of safely opening their data both to other agencies and to the research community, a substantial fraction of the United States (U.S.) federal government’s safely shareable data is not being shared.

This playbook, compiled based on interviews with current and former government officials, identifies the challenges federal agencies face in 2021 as they work to comply with open data statutes and guidance. More importantly, it offers actionable recommendations for Executive and Congressional leadership to enable federal agencies to prioritize open data.

Paramount among these solutions is the need for the Biden Administration to assign open government data as a 2021 Cross-Agency Priority (CAP) Goal in the President’s Management Agenda (PMA). This goal should revitalize the 2018 CAP Goal: Leveraging Data as a Strategic Asset to improve upon the 2020 U.S. Federal Data Strategy (FDS) and emphasize that open data is a priority for the U.S. Government. The U.S. Chief Technology Officer (CTO) should direct a Deputy CTO to focus solely on fulfilling this 2021 CAP Goal. This Deputy CTO should be a joint appointment with the Office of Management and Budget (OMB).

Absent elevating open data as a top priority in the President’s Agenda, the U.S. risks falling behind internationally. Many nations have surged ahead building smart, prosperous, AI-driven societies while the U.S. has failed to unlock its untapped data. If the Biden Administration wants the U.S. to prevail as an international superpower and a global beacon of democracy, it must revitalize its waning open data efforts.

Embedding Evidence and Evaluation in Economic Recovery Legislation

Summary

The COVID-19 pandemic has had devastating impacts on communities across the country. Tens of millions of people have lost jobs, and millions of schoolchildren have fallen behind. To help people recover from the effects of the pandemic, the next administration should invest in proven solutions by working with Congress to embed evaluation and evidence-building into economic stimulus legislation, strengthening the foundation for an equitable and efficient recovery.

The new administration and Congress should ensure that any forthcoming economic stimulus legislation include provisions requiring commitments to build new evidence and utilize existing evidence. Specifically, the administration should establish a task force coordinated by the National Economic Council to:

  1. Work with agencies and Congress to set aside a portion of recovery resources (up to 1%) for evaluation and evidence-building, based in part on agency learning agendas created in response to the Evidence Act.
  2. Create a National Economic Mobility Innovation Fund at the U.S. Department of the Treasury.
  3. Empower the Office of Evaluation Services (OES) within the General Services Administration (GSA) to help agencies develop evaluation and evidence-building capacity.
  4. Create Excellence in What Works in Economic Mobility Awards.

These strategies, which we are collectively calling a “Stimulus Evaluation Act,” should be integrated into current and future economic recovery efforts.

Advancing Economic, Health, and Racial Equity by Increasing the Use of Evidence and Data

Summary

As the United States continues to grapple with unprecedented economic, health, and social justice crises that have had a devastating and disproportionate effect on the very communities that have long struggled most, the next administration must act quickly to ensure equitable recovery. Improving economic mobility and increasing equity in communities furthest from opportunity is more urgent than ever.

The next administration must work with Congress to quickly enact a new round of recovery or stimulus legislation. State and local governments, school systems, and small businesses continue to struggle to respond to COVID-19 and the economic and learning losses that have accompanied the resulting closures. But federal resources are not unlimited and there is little time to spare — communities need positive results quickly. It is imperative, furthermore, that the administration ensure that the dollars it distributes are used effectively and equitably. The best way to do so is to use existing evidence and data — about what works, for whom, and under what circumstances — to drive recovery investments.

Fortunately, the federal government has access to unprecedented evidence and data tools that can increase the speed and effectiveness of these urgent recovery and equity-building efforts. And where evidence or data do not exist, this unique moment affords an opportunity to build evidence about what does work to help communities recover and rebuild.

Thus, one of the first priorities of the next administration’s Office of Management and Budget (OMB) should be helping agencies develop their capacity to use existing evidence and data and to build evidence where it is lacking in order to advance economic mobility across the country. OMB should also support federal agency efforts to assist state and local governments to build and use local evidence that can accelerate economic growth and help communities recover from the current crises.

Specifically, OMB should issue guidance directing federal agencies to: 1) define and prioritize evidence of effectiveness in their grant programs to help identify what works, for whom, and under what circumstances to advance economic mobility post-COVID; 2) set aside 1% of discretionary funding for evidence building, including evaluations, technical assistance and capacity building; 3) support state and local governments in using recovery funding to build their own data, evidence-building and evaluation capacity to help their communities rebuild; and 4) require that findings from 2021 evidence-building activities be incorporated into strategic plans due in 2022.