What’s Next for Federal Evidence-Based Policymaking
For decades, the federal government has steadily built infrastructure to better incorporate data and evidence into its decision-making: from establishing the first federally funded research and development centers (FFRDCs) after World War II, to the Elementary and Secondary Education Act of 1965 establishing the first requirements for federal program evaluation, to the Government Performance and Results Act of 1993 leading to early steps in meaningfully measuring progress toward important agency goals. Decision-makers have long relied on the more than 1,000 federal advisory committees that provide expert advice and insights. Most recently, the Foundations for Evidence-Based Policymaking Act of 2018 sought to catalyze movement toward a government-wide culture of “open-by-default” data and rigorous evidence use. As a result, federal agencies began to empower champions to lead evidence and data work, clearly outline where they needed new evidence and how they intended to build it, and assess their capacity for evidence-building activities.
In recent months, much of this decades-long progress has been erased. Contracts for evaluations of government programs have been canceled, FFRDCs have been forced to lay off staff, and federal advisory committees have been disbanded. Roles within the federal government focused on data, evidence, and evaluation have been eliminated as part of sweeping layoffs. Data that enables us to understand what’s working (and what isn’t!) in key areas – like federal workforce performance, natural disaster recovery, and youth behavior – is disappearing.
Over the years to come, many of these systems will need to be rebuilt or reimagined to ensure that the federal government can meet its Congressionally mandated goals. While we had made strides toward a more evidence-driven government, core challenges remain: data and evidence are not always easy to access and use; programs funded with taxpayer dollars are too rarely evaluated rigorously; and even when we do have a base of evidence about “what works”, it feeds into decision-making less often than it should.
Key Areas for Reform
As we look toward this next chapter of rebuilding, we have an opportunity to create something better than what existed before. Earlier this year, FAS hosted a convening that brought together leaders from academia, government, and evidence-focused nonprofits to think through how we might design the future data and evidence ecosystem. From the conversation at that convening and beyond, we’ve identified a few key areas that deserve particular focus:
More iterative, responsive evidence generation
A primary challenge to evidence-based policymaking is that evidence generation and policy development often run on different timelines. To make evidence most useful to policymakers, we need approaches that are responsive to their needs. While multi-million-dollar, multi-year program evaluations still have a place in giving us the best information about effectiveness, we should be honest that most decision-makers do not have time to engage with the 100+ page reports these projects typically use to share their findings.
Instead, agencies should build evidence generation into program design from the start. They should award program evaluation contracts that give them access not just to post-hoc conclusions, but to real-time performance data, so they can make course corrections as needed and build a culture of continuous improvement.
Creating better feedback loops
While many of the major pieces of legislation described above push the government to collect more data and evidence, it is not clear how – or even whether – decision-makers act on these inputs.
For example, the Department of Education administers the D.C. Opportunity Scholarship, a program that provides scholarships to allow D.C. students to attend private schools. A Congressionally mandated, large-scale randomized evaluation found that the program does not lead to increased academic achievement for students who are offered a scholarship. The program still offers value to families who participate – it increased high school graduation rates and parent satisfaction – so the study’s results don’t necessarily mean the program should be discontinued. But the findings do indicate room for improvement; for example, the Department could consider imposing more stringent standards that require that these scholarships be used only at schools with a demonstrated record of boosting academic achievement. However, there is no evidence that the program has meaningfully changed since the results of that evaluation were published.
Much more can be done to ensure that when we know what works, the government acts on it. Some approaches are known to support this – for example, tiered-evidence federal grantmaking, which directs larger amounts of funding toward practices backed by rigorous evidence – though many of these programs have been eliminated or have seen dramatic budget cuts over the past few months.
We are starting to take steps in the right direction: in recently released guidance, OMB now requires agencies to publish yearly plans that share not only what evidence they plan to collect, but also how they will use that evidence in decision-making. However, this requirement applies only to a handful of “priority questions” tied to each agency’s strategic goals or administration priorities; we need additional steps to ensure that when the government seeks out evidence, it actually acts on that evidence in budget decisions and program design.
Leveraging emerging technologies
In the next few years, we are likely to see technology emerge that makes the evidence-to-decision process faster and more efficient. AI might speed up existing processes, such as the development of living evidence reviews. Generative AI-based chatbots might help policymakers more quickly find and access summaries of relevant, rigorous research. However, we should approach such tools with caution. A recent decision to allow staff at the Department of Health and Human Services to use ChatGPT introduces the risk that front-line public health staffers will rely on answers based on hallucinated studies. To counter this possibility, we must ensure emerging technology remains human-centered – for example, by including clinician feedback in healthcare AI development. As these technologies continue to grow in use and scale, we should be proactive about designing tools for government that give decision-makers sorely needed access to high-quality evidence. For example, a team from Harvard’s People Lab is designing PolicyBot, an AI-powered tool built on reliable sources to help policymakers identify evidence-backed interventions.
The Path Forward
Despite recent setbacks, the federal government still has the building blocks for data-driven, evidence-based decision-making. As we work to rebuild the evidence infrastructure that has been dismantled, we have a unique opportunity to construct something more robust and responsive than what existed before – reimagining how evidence flows through government decision-making and leveraging new and emerging technologies.
The federal government should commit to embedding evidence generation directly into program DNA, creating systems where data collection and policy adjustment happen in real time rather than in separate, disconnected cycles. This requires not just technological innovation, but cultural transformation – shifting from a mindset that views evaluation as an external judgment to one that sees continuous learning as core to effective governance. Decision-makers at every level must be equipped not only with access to high-quality evidence, but with the tools, training, and institutional incentives to act on what they learn.
In an era of declining public trust and competing demands on limited resources, the federal government’s ability to demonstrate that taxpayer dollars are being used effectively depends on its commitment to evidence-based decision-making. In the years to come, we can build a government that doesn’t just collect data and build evidence, but truly learns from it. In doing so, this newly reimagined government will come to better serve the American people it exists to support.