Increasing Responsible Data Sharing Capacity throughout Government

Deriving insights from data is essential for effective governance. However, collecting and sharing data—if not managed properly—can pose privacy risks for individuals. Current scientific understanding shows that so-called “anonymization” methods that have been widely used in the past are inadequate for protecting privacy in the era of big data and artificial intelligence. The evolving field of Privacy-Enhancing Technologies (PETs), including differential privacy and secure multiparty computation, offers a way forward for sharing data safely and responsibly.

The administration should prioritize the use of PETs by integrating them into data-sharing processes and strengthening the executive branch’s capacity to deploy PET solutions.

Challenge and Opportunity

A key function of modern government is the collection and dissemination of data. This role of government is enshrined in Article 1, Section 2 of the U.S. Constitution in the form of the decennial census—and has only increased with recent initiatives to modernize the federal statistical system and expand evidence-based policymaking. The number of public datasets has also grown; there are now over 300,000 datasets on data.gov, covering everything from border crossings to healthcare. The release of these datasets not only accomplishes important transparency goals, but also represents an important step toward making American society fairer, as data are a key ingredient in identifying policies that benefit the public.

Unfortunately, the collection and dissemination of data comes with significant privacy risks. Even with access only to aggregated information, motivated attackers can extract information specific to individual data subjects and cause concrete harm. A famous illustration of this risk occurred in 1997, when Latanya Sweeney identified the medical record of then-Governor of Massachusetts William Weld in a public, "anonymized" dataset. Since then, the power of data re-identification techniques—and the incentives for third parties to learn sensitive information about individuals—have only increased, compounding this risk. As a democratic, civil-rights-respecting nation, it is irresponsible for our government agencies to continue to collect and disseminate datasets without careful consideration of the privacy implications of data sharing.

While there may appear to be an irreconcilable tension between facilitating data-driven insight and protecting the privacy of individuals' data, an emerging scientific consensus shows that Privacy-Enhancing Technologies (PETs) offer a path forward. PETs are a collection of techniques that enable data to be used while tightly controlling the risk incurred by individual data subjects. One particular PET, differential privacy (DP), was recently used by the U.S. Census Bureau in its disclosure avoidance system for the 2020 decennial census in order to meet its dual mandates of data release and confidentiality. Other PETs, including variations of secure multiparty computation, have been used experimentally by other agencies—for example, to link long-term income data to college records and to understand mental health outcomes for individuals who have earned doctorates. The National Institute of Standards and Technology (NIST) has produced frameworks and reports on data and information privacy, including PETs topics such as DP (see Q&A section). However, these reports still lack a comprehensive, actionable framework for how organizations should consider, use, and deploy PETs.

As artificial intelligence becomes more prevalent inside and outside government and relies on increasingly large datasets, the need for responsible data sharing is growing more urgent. The federal government is uniquely positioned to foster responsible innovation and set a strong example by promoting the use of PETs. The use of DP in the 2020 decennial census was an extraordinary example of the government's capacity to lead global innovation in responsible data sharing practices. While the promise of continuing this trend is immense, expanding the use of PETs within government poses twin challenges: (1) sharing data within government raises unique issues—both technical and legal—that are only starting to be fully understood, and (2) expertise on using PETs within government is limited. In this proposal, we outline a concrete plan to overcome these challenges and unlock the potential of PETs within government.

Plan of Action

Using PETs when sharing data should be a key priority for the executive branch. The new administration should encourage agencies to consider the use of PETs when sharing data and build a United States DOGE Service (USDS) “Responsible Data Sharing Corps” of professionals who can provide in-house guidance around responsible data sharing.

We believe that enabling data sharing with PETs requires (1) gradual, iterative refinement of norms and (2) increased capacity in government. With these in mind, we propose the following recommendations for the executive branch.

Strategy Component 1. Build consideration of PETs into the process of data sharing

Recommendation 1. NIST should produce a decision-making framework for organizations to rely on when evaluating the use of PETs.

NIST should provide a step-by-step decision-making framework for determining the appropriate use of PETs within organizations, including whether PETs should be used and, if so, which PET and how it should be deployed. This guidance should be at the same level of granularity as the NIST Risk Management Framework for Cybersecurity. NIST should consult with a range of stakeholders from the broad data sharing ecosystem to create this framework. This includes data curators (i.e., organizations that collect and share data, within and outside the government); data users (i.e., organizations that consume, use, and rely on shared data, including government agencies, special interest groups, and researchers); data subjects; experts across fields such as information studies, computer science, and statistics; and decision makers within public and private organizations who have prior experience using PETs for data sharing. The framework may build on NIST's existing related publications and other guides for policymakers considering the use of specific PETs, and should provide actionable guidance on factors to consider when using PETs. The output of this process should be not only a decision, but also a report documenting the execution of the decision-making framework (which will be instrumental for Recommendation 3).

Recommendation 2. The Office of Management and Budget (OMB) should require government agencies interested in data sharing to use the NIST decision-making framework developed in Recommendation 1 to determine the appropriateness of PETs for protecting their data pipelines.

The risks to data subjects associated with data releases can be significantly mitigated with the use of PETs, such as differential privacy. Along with considering other mechanisms of disclosure control (e.g., tiered access, limiting data availability), agencies should investigate the feasibility and tradeoffs around using PETs to protect data subjects while sharing data for policymaking and public use. To that end, OMB should require government agencies to use the decision-making framework produced by NIST (in Recommendation 1) for each instance of data sharing. We emphasize that this decision-making process may lead to a decision not to use PETs, as appropriate. Agencies should compile the produced reports such that they can be accessed by OMB as part of Recommendation 3.

Recommendation 3. OMB should produce a PET Use Case Inventory and annual reports that provide insights on the use of PETs in government data-sharing contexts.

To promote transparency and shared learning, agencies should share the reports produced as part of their PET deployments and associated decision-making processes with OMB. Using these reports, OMB should (1) publish a federal government PET Use Case Inventory (similar to the recently established Federal AI Use Case Inventory) and (2) synthesize these findings into an annual report. These findings should provide high-level insights into the decisions that are being made across agencies regarding responsible data sharing, and highlight the barriers to adoption of PETs within various government data pipelines. These reports can then be used to update the decision-making frameworks we propose that NIST should produce (Recommendation 1) and inspire further technical innovation in academia and the private sector.

Strategy Component 2. Build capacity around responsible data sharing expertise 

Increasing in-depth decision-making around responsible data sharing—including the use of PETs—will require specialized expertise. While there are some government agencies with teams well-trained in these topics (e.g., the Census Bureau and its team of DP experts), expertise across government is still lacking. Hence, we propose a capacity-building initiative that increases the number of experts in responsible data sharing across government.

Recommendation 4. Announce the creation of a “Responsible Data Sharing Corps.”

We propose that the USDS create a “Responsible Data Sharing Corps” (RDSC). This team will be composed of experts in responsible data sharing practices and PETs. RDSC experts can be deployed into other government agencies as needed to support decision-making about data sharing. They may also be available for as-needed consultations with agencies to answer questions or provide guidance around PETs or other relevant areas of expertise.

Recommendation 5. Build opportunities for continuing education and training for RDSC members.

Given the evolving nature of responsible data practices, including the rapid development of PETs and other privacy and security best practices, members of the RDSC should have 20% effort reserved for continuing education and training. This may involve taking online courses or attending workshops and conferences that describe state-of-the-art PETs and other relevant technologies and methodologies.

Recommendation 6. Launch a fellowship program to maintain the RDSC's cutting-edge expertise in deploying PETs.

Finally, to ensure that the RDSC stays at the cutting edge of relevant technologies, we propose an RDSC fellowship program similar to or part of the Presidential Innovation Fellows. Fellows may be selected from academia or industry, but should have expertise in PETs and propose a novel use of PETs in a government data-sharing context. During their one-year terms, fellows will perform their proposed work and bring new knowledge to the RDSC.

Conclusion

Data sharing has become a key priority for the government in recent years, but privacy concerns make it critical to modernize technology for responsible data use in order to leverage data for policymaking and transparency. PETs such as differential privacy and secure multiparty computation offer a promising way forward. However, deploying PETs at a broad scale requires changing norms and increasing capacity in government. The executive branch should lead these efforts by encouraging agencies to consider PETs when making data-sharing decisions and by building a "Responsible Data Sharing Corps" that can provide expertise and support for agencies in this effort. By encouraging the deployment of PETs, the government can increase the fairness, utility, and transparency of data while protecting itself—and its data subjects—from privacy harms.

Frequently Asked Questions
What are the concrete risks associated with data sharing?

Data sharing requires a careful balance of multiple factors, with privacy and utility being particularly important.

  • Data products released without appropriate and modern privacy protection measures could facilitate abuse, as attackers can weaponize information contained in these data products against individuals, e.g., by blackmailing, stalking, or publicly harassing them.

  • On the other hand, the lack of accessible data can also cause harm due to reduced utility: various actors, such as state and local government entities, may have limited access to accurate or granular data, resulting in the inefficient allocation of resources to small or marginalized communities.

What are some examples of PETs to consider?

Privacy-Enhancing Technologies form a broad umbrella category that includes many different technical tools. Leading examples include differential privacy, secure multiparty computation, trusted execution environments, and federated learning. Each of these technologies is designed to address different privacy threats. For additional information, we suggest the UN Guide on Privacy-Enhancing Technologies for Official Statistics and the ICO's resources on Privacy-Enhancing Technologies.

What NIST publications are relevant to PETs?

NIST has multiple publications related to data privacy, such as the Risk Management Framework for Cybersecurity and the Privacy Framework. The report De-Identifying Government Datasets: Techniques and Governance focuses on responsible data sharing by government organizations, while the Guidelines for Evaluating Differential Privacy Guarantees provide organizations with a framework for assessing the level of privacy protection that differential privacy affords.

What is differential privacy (DP)?

Differential privacy is a framework for controlling the amount of information leaked about individuals during a statistical analysis. Typically, random noise is injected into the results of the analysis to hide individual people’s specific information while maintaining overall statistical patterns in the data. For additional information, we suggest Differential Privacy: A Primer for a Non-technical Audience.
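
To make this concrete, the following is a minimal sketch of the Laplace mechanism, a standard way of achieving differential privacy for counting queries. The dataset, threshold, and epsilon value here are illustrative assumptions, not a recommended configuration.

```python
import numpy as np

def laplace_count(data, predicate, epsilon):
    """Release a count with epsilon-differential privacy via the Laplace mechanism.

    Adding or removing one person changes a count by at most 1 (its sensitivity),
    so noise drawn from Laplace(scale = 1/epsilon) masks any single individual's
    presence while preserving aggregate patterns.
    """
    true_count = sum(1 for record in data if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Illustrative query: how many respondents reported income above $50,000?
incomes = [32_000, 58_000, 47_500, 91_000, 75_250]  # toy data
released = laplace_count(incomes, lambda x: x > 50_000, epsilon=0.5)
print(f"Noisy count: {released:.1f}")  # true count is 3; output varies per run
```

Smaller values of epsilon add more noise and thus stronger privacy; choosing epsilon for a given release is exactly the kind of deployment decision the decision-making framework proposed above would need to address.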

What is secure multiparty computation (MPC)?

Secure multiparty computation is a technique that allows several actors to jointly aggregate information while protecting each actor’s data from disclosure. In other words, it allows parties to jointly perform computations on their data while ensuring that each party learns only the result of the computation. For additional information, we suggest Secure Multiparty Computation FAQ for Non-Experts.
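
As a concrete illustration, here is a minimal sketch of additive secret sharing, one classic building block of MPC, applied to a toy version of the wage-aggregation use case described later in this FAQ. The number of parties and the payroll figures are illustrative assumptions.

```python
import secrets

PRIME = 2**61 - 1  # all arithmetic is done modulo a public prime

def share(value, n_parties):
    """Split a value into n additive shares that sum to it modulo PRIME.

    Any subset of fewer than n shares is uniformly random, so no party
    (or small coalition) learns anything about the underlying value.
    """
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Three employers want their total payroll without revealing individual payrolls.
payrolls = [1_200_000, 845_000, 2_300_000]
all_shares = [share(p, 3) for p in payrolls]  # each employer shares its value

# Computing party i receives the i-th share from every employer and sums them...
partial_sums = [sum(column) % PRIME for column in zip(*all_shares)]
# ...and only combining all of the partial sums reveals the total.
total = sum(partial_sums) % PRIME
print(total == sum(payrolls))  # True: parties learn the sum, nothing else
```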

How have privacy-enhancing technologies been used in government before, domestically and internationally?

There are multiple examples of PET deployments at the federal and local levels, both domestically and internationally. We list several examples below and refer interested readers to the in-depth reports by the Advisory Committee on Data for Evidence Building (report 1 and report 2):

  • The Census Bureau used differential privacy in their disclosure avoidance system to release results from the 2020 decennial census data. Using differential privacy allowed the bureau to provide formal disclosure avoidance guarantees as well as precise information about the impact of this system on the accuracy of the data.

  • The Boston Women’s Workforce Council (BWWC) measures wage disparities among employers in the greater Boston area using secure multiparty computation (MPC).

  • The Israeli Ministry of Health publicly released its National Live Birth Registry using differential privacy.

  • Privacy-preserving record linkage, a variant of secure multiparty computation, has been used experimentally by both the U.S. Department of Education and the National Center for Health Statistics. Additionally, it has been used at the county level in Allegheny County, PA.

Additional examples can also be found in the UN’s case-study repository of PET deployments.

What type of expertise is required to deploy PETs solutions?

Data-sharing projects are not new to the government, and pockets of relevant expertise—particularly in statistics, software engineering, subject matter areas, and law—already exist. Deploying PET solutions requires technical computer science expertise for building and integrating PETs into larger systems, as well as sociotechnical expertise in communicating the use of PETs to relevant parties and facilitating decision-making around critical choices.

Reforming the Federal Advisory Committee Landscape for Improved Evidence-based Decision Making and Increasing Public Trust

Federal Advisory Committees (FACs) are the single point of entry for the American public to provide consensus-based advice and recommendations to the federal government. These Advisory Committees are composed of experts from various fields who serve as Special Government Employees (SGEs), attending committee meetings, writing reports, and voting on potential government actions.

Advisory Committees are needed for the federal decision-making process because they provide additional expertise and in-depth knowledge for the Agency on complex topics, aid the government in gathering information from the public, and allow the public the opportunity to participate in meetings about the Agency's activities. As currently organized, however, FACs are not equipped to provide the best evidence-based advice. FACs often fail to meet transparency requirements set forth by GAO: they make pertinent decisions outside of public meetings, report inaccurate cost data, fail to make official meeting documents publicly available online, and more. FACs have also experienced difficulty with recruiting and retaining top talent to assist with decision making. For these reasons, it is critical that FACs are reformed and equipped with the necessary tools to continue providing the government with the best evidence-based advice. Specifically, reform should address issues such as 1) decreasing the burden of hiring special government employees; 2) simplifying the financial disclosure process; 3) increasing understanding of reporting requirements and conflict of interest processes; 4) expanding training for Advisory Committee members; 5) broadening the roles of Committee chairs and designated federal officials; 6) increasing public awareness of Advisory Committee roles; 7) engaging the public outside of official meetings; 8) standardizing representation from Committee representatives; 9) ensuring that Advisory Committees are meeting per their charters; and 10) bolstering Agency budgets for critical Advisory Committee issues.

Challenge and Opportunity

Protecting the health and safety of the American public and ensuring that the public has the opportunity to participate in the federal decision-making process is crucial. We must evaluate the operations and activities of federal agencies that require the government to solicit evidence-based advice and feedback from various experts through the use of federal Advisory Committees (FACs). These Committees are instrumental in facilitating transparent and collaborative deliberation between the federal government, the advisory body, and the American public, a function no other mechanism serves. Advisory Committee recommendations are integral to strengthening public trust and reinforcing the credibility of federal agencies. Nonetheless, public trust in government has been waning, and efforts should be made to restore it. Public trust is a pillar of democracy, particularly when one party is external to the federal government. The use of Advisory Committees, when appropriate, can therefore help increase public trust and ensure compliance with the law.

There have also been many success stories demonstrating the benefits of Advisory Committees. When Advisory Committees are appropriately staffed based on their charge, they can decrease the workload of federal employees, assist with developing policies for some of our most challenging issues, involve the public in the decision-making process, and more. However, the state of Advisory Committees and the need for reform have come under question, even more so as we transition to a new administration. Advisory Committees have contributed to improving the quality of life of Americans through scientific advice, as well as through the monitoring of cybersecurity. For example, an FDA Advisory Committee reviewed data and saw promising results for the treatment of sickle cell disease (SCD), a debilitating disease that has had limited treatment options for years. The Committee voted in favor of the gene therapy drugs Casgevy and Lyfgenia, which were the first to be approved by the FDA for SCD.

Under the first Trump administration, Executive Order (EO) 13875 resulted in a significant decrease in the number of federal advisory meetings, limiting agencies' ability to convene external advisors. Federal science advisory committees met less often during this administration than under any prior administration and less often than their charters required; long-standing Advisory Committees were disbanded; and scientists receiving agency grants were barred from serving on Advisory Committees. Federal Advisory Committee membership also decreased by 14%, demonstrating the difficulty of recruiting and retaining top talent. The disbandment of Advisory Committees, exclusion of key external scientific experts, and burdensome procedures can trigger severe consequences for the health and safety of Americans.

Going into a second Trump administration, it is imperative that Advisory Committees have the opportunity to assist federal agencies with the evidence-based advice needed to make critical decisions that affect the American public. The reforms suggested below can improve the overall operations of Advisory Committees while still providing the government with necessary evidence-based advice. With successful implementation of the following recommendations, the federal government will be able to reduce the administrative burden on staff across the recruitment, onboarding, and conflict-of-interest processes.

The U.S. Open Government Initiative encourages public and community engagement in governmental affairs. However, individual Agencies can and should do more to engage the public. This policy memo identifies several areas of potential reform for Advisory Committees and aims to provide recommendations for improving the overall process without compromising Agency or Advisory Committee membership integrity.

Plan of Action

The proposed plan of action identifies several policy recommendations to reform the federal Advisory Committee (Advisory Committee) process, improving both operations and efficiency. Successful implementation of these policies will 1) improve the Advisory Committee member experience, 2) increase transparency in federal government decision-making, and 3) bolster trust between the federal government, its Advisory Committees, and the public.

Streamline Joining Advisory Committees

Recommendation 1. Decrease the burden of hiring special government employees in an effort to (1) reduce the administrative burden for the Agency and (2) encourage Advisory Committee members, also known as special government employees (SGEs), to continue providing the best evidence-based advice to the federal government by reducing onerous procedures

The Ethics in Government Act of 1978 and Executive Order 12674 list the OGE-450 as the required confidential financial disclosure report for covered executive branch and special government employees. This Act gives the Office of Government Ethics (OGE) the authority to implement and regulate a financial disclosure system for executive branch and special government employees whose duties carry a "heightened risk of potential or actual conflicts of interest." Nonetheless, the reporting process becomes onerous when Advisory Committee members have to complete the OGE-450 before every meeting even if their information remains unchanged. This presents a challenge for Advisory Committee members who wish to continue serving but are burdened by time constraints. The process also burdens the federal staff who manage the financial disclosure system.

Policy Pathway 1. Increase funding for enhanced federal staffing capacity to undertake excessive administrative duties for financial reporting.

Policy Pathway 2. All federal agencies that deploy Advisory Committees can conduct a review of the current OGE-450 process, budget support for this process, and work to develop an electronic process that eliminates standalone forms and allows participants to select dropdown options indicating whether their financial interests have changed.

Recommendation 2. Create and use public platforms such as Open Payments by CMS to (1) simplify the financial disclosure reporting process and (2) increase transparency in disclosure procedures

Federal agencies should create a financial disclosure platform that streamlines the process and allows Advisory Committee members to submit their disclosures and easily make updates. This system should also be able to monitor and compare financial conflicts. In addition, agencies that utilize the expertise of Advisory Committees for drugs and devices should identify additional ways to promote financial transparency. These agencies can use Open Payments, a system operated by the Centers for Medicare & Medicaid Services (CMS), to "promote a more financially transparent and accountable healthcare system." The Open Payments system makes payments from drug and medical device companies to individuals, healthcare providers, and teaching hospitals accessible to the public. If financial disclosure forms are called into question, the Open Payments platform can act as a check in identifying potential financial interests of Advisory Committee members. A further step to simplify the financial disclosure process would be to utilize conflict of interest software such as Ethico, a comprehensive tool that allows for customizable disclosure forms, disclosure analytics for comparisons, and process automation.

Policy Pathway. The Office of Government Ethics should require all federal agencies that operate Advisory Committees to develop their own financial disclosure system and to add a due-diligence step to the financial disclosure reporting process: reviewing the CMS Open Payments system for potential financial conflicts or deploying conflict of interest monitoring software to streamline the process.

Streamline Participation in an Advisory Committee

Recommendation 3. Increase understanding of annual reporting requirements for conflict of interest (COI)

Agencies should develop guidance that explicitly states the roles of Ethics Officers, also known as Designated Agency Ethics Officials (DAEOs), within the federal government. Clarifying the roles and responsibilities of Advisory Committee members and the public will help reduce the spread of misinformation regarding the purpose of Advisory Committees. In addition, the Office of Government Ethics should encourage agencies to develop guidance that indicates the criteria for inclusion in or exclusion from participation in Committee meetings. Currently, there is no public guidance that states what types of conflicts of interest are granted waivers for participation. Full disclosure of selection and approval criteria will improve transparency with the public and clearly delineate how Agencies determine who is eligible to participate.

Policy Pathway. Develop conflict of interest (COI) and financial disclosure guidance specifically for SGEs that states under what circumstances SGEs are allowed to receive waivers for participation in Advisory Committee meetings.

Recommendation 4. Expand training for Advisory Committee members to include (1) ethics and (2) criteria for making good recommendations to policymakers

Training should be expanded for all federal Advisory Committee members to include ethics training which details the role of Designated Agency Ethics Officials, rules and regulations for financial interest disclosures, and criteria for making evidence-based recommendations to policymakers. Training for incoming Advisory Committee members ensures that all members have the same knowledge base and can effectively contribute to the evidence-based recommendations process.

Policy Pathway. Agencies should collaborate with the OGE and Agency Heads to develop comprehensive training programs for all incoming Advisory Committee members to ensure an understanding of ethics as contributing members, best practices for providing evidence-based recommendations, and other pertinent areas that are deemed essential to the Advisory Committee process.

Leverage Advisory Committee Membership

Recommendation 5. Expand the roles of Committee Chairs and Designated Federal Officers

Expanding the roles of Committee Chairs and Designated Federal Officers (DFOs) may assist federal Agencies with recruiting and retaining top talent and maximizing the Committee's ability to stay abreast of critical public concerns. Because the General Services Administration must be consulted on the formation, renewal, or alteration of Committees, it can be instrumental in this change.

Policy Pathway. The General Services Administration (GSA) should encourage federal Agencies to collaborate with Committee Chairs and DFOs to recruit permanent and ad hoc Committee members who may have broad network reach and community ties that will bolster trust amongst Committees and the public. 

Recommendation 6. Clarify intended roles for Advisory Committee members and the public

There are misconceptions among the public and Advisory Committee members about Advisory Committee roles and responsibilities. There is also ambiguity regarding the types of Advisory Committee roles, such as serving as ad hoc members, consulting, providing feedback on policies, or making recommendations.

Policy Pathway. GSA should encourage federal Agencies to develop guidance that delineates the differences between permanent and temporary Advisory Committee members, as well as their roles and responsibilities depending on whether they are providing feedback on policies or recommendations for policy decision-making.

Recommendation 7. Engage expertise and the public outside of official meetings

In an effort to continue receiving the best evidence-based advice, federal Agencies should develop alternate ways to receive advice outside of public Committee meetings. Allowing additional opportunities for engagement and feedback from Committee experts or the public will allow Agencies to expand their knowledge base and gather information from the communities their decisions will affect.

Policy Pathway. The General Services Administration should encourage federal Agencies to create opportunities outside of scheduled Advisory Committee meetings to engage Committee members and the public on areas of concern and interest as one form of engagement. 

Recommendation 8. Standardize representation from Committee representatives (i.e., industry), as well as representation limits

The Federal Advisory Committee Act (FACA) does not specify the types of expertise that should be represented on all federal Advisory Committees, but allows for many types of expertise. Incorporating various sets of expertise that are representative of the American public will ensure the government is receiving the most accurate, innovative, and evidence-based recommendations for issues and products that affect Americans. 

Policy Pathway. Congress should include standardized language in the FACA that states all federal Advisory Committees should include various sets of expertise depending on their charge. This change should then be enforced by the GSA.

Support a Vibrant and Functioning Advisory Committee System

Recommendation 9. Decrease the burden of creating an Advisory Committee and ensure Advisory Committees meet per their charters

The process of establishing an Advisory Committee should be simplified to curtail the onerous steps that delay the government's receipt of evidence-based advice.

Advisory Committee charters state the purpose of Advisory Committees, their duties, and their aspirational goals. These charters are developed by agency staff or DFOs in consultation with their agency Committee Management Office. Charters forge the path for all FACs.

Policy Pathway. Designated Federal Officers (DFOs) within federal agencies should work with their Agency head to review and simplify the steps for establishing FACs, and eliminate the requirement that FACs obtain consultation and/or approval from GSA for the formation, renewal, or alteration of Advisory Committees.

Recommendation 10. Bolster agency budgets to support FACs on critical issues where regular engagement and trust building with the public is essential for good policy

Federal Advisory Committees are an essential component of receiving evidence-based recommendations that help guide decisions at all stages of the policy process. These Advisory Committees are oftentimes the single entry point through which external experts and the public can comment on and participate in the decision-making process. However, FACs take considerable resources to operate, depending on the frequency of meetings, the number of Advisory Committee members, and supporting agency staff. Without proper appropriations, agencies have a diminished ability to recruit and retain top talent for Advisory Committees. The Government Accountability Office (GAO) reported that in 2019, approximately $373 million was spent to operate a total of 960 federal Advisory Committees. Some Agencies have experienced a decrease in the number of Advisory Committee convenings. Individual Agency heads should conduct a budget review of average operating and projected costs and develop proposals for increased funding to submit to the Appropriations Committee.

Policy Pathway. Congress should consider increasing appropriations to support FACs so they can continue to enhance federal decision-making, improve public policy, boost public credibility, and lift Agency morale.

Conclusion

Advisory Committees are necessary to the federal evidence-based decision-making ecosystem. Enlisting the advice and recommendations of experts, while also including input from the American public, allows the government to continue making decisions that truly benefit its constituents. Nonetheless, there are areas in which FACs can be improved to ensure the process remains participatory and evidence-based. Additional funding is needed to compensate the appropriate Agency staff for Committee support, provide potential incentives for experts who are volunteering their time, and finance other expenditures.

Frequently Asked Questions
How will Federal Advisory Committees (Advisory Committees) increase government efficiency?

With reform of Advisory Committees, the process for receiving evidence-based advice will be streamlined, allowing the government to receive this advice in a faster and less burdensome manner. Reform will be implemented by reducing the administrative burden for federal employees through the streamlining of recruitment, financial disclosure, and reporting processes.

A Federal Center of Excellence to Expand State and Local Government Capacity for AI Procurement and Use

The administration should create a federal center of excellence for state and local artificial intelligence (AI) procurement and use—a hub for expertise and resources on public sector AI procurement and use at the state, local, tribal, and territorial (SLTT) government levels. The center could be created by expanding the General Services Administration's (GSA) existing Artificial Intelligence Center of Excellence (AI CoE). As new waves of AI technologies enter the market, shifting both practice and policy, such a center of excellence would help bridge the gap between existing federal resources on responsible AI and the specific, grounded challenges that individual agencies face. In the decades ahead, new AI technologies will touch an expanding breadth of government services—including public health, child welfare, and housing—vital to the wellbeing of the American people. Such a center would equip public sector agencies with sustainable expertise and set a consistent standard for responsible AI procurement and use. This resource would help ensure that AI truly enhances services, protects the public interest, and builds public trust in AI-integrated state and local government services.

Challenge and Opportunity 

State, local, tribal, and territorial (SLTT) governments provide services that are critical to the welfare of our society, among them housing, child support, healthcare, credit lending, and education. SLTT governments are increasingly interested in using AI to assist with providing these services. However, they face immense challenges in responsibly procuring and using new AI technologies. While grappling with limited technical expertise and budget constraints, SLTT government agencies considering or deploying AI must navigate data privacy concerns, anticipate and mitigate biased model outputs, ensure model outputs are interpretable to workers, and comply with sector-specific regulatory requirements, among other responsibilities.

The emergence of foundation models (large AI systems adaptable to many different tasks) for public sector use exacerbates these existing challenges. Technology companies are now rapidly developing new generative AI services tailored towards public sector organizations. For example, earlier this year, Microsoft announced that Azure OpenAI Service would be newly added to Azure Government—a set of AI services that target government customers. These types of services are not specifically created for public sector applications and use contexts, but instead are meant to serve as a foundation for developing specific applications. 

For SLTT government agencies, these generative AI services blur the line between procurement and development: Beyond procuring specific AI services, we anticipate that agencies will increasingly be tasked with the responsible use of general AI services to develop specific AI applications. Moreover, recent AI regulations suggest that responsibility and liability for the use and impacts of procured AI technologies will be shared by the public sector agency that deploys them, rather than just resting with the vendor supplying them.

SLTT agencies must be well-equipped with the responsible procurement practices and accountability mechanisms pivotal to moving forward amid these shifts across products, practice, and policy. Federal agencies have started to provide guidelines for responsible AI procurement (e.g., Executive Order 13960, OMB-M-21-06, NIST RMF). But research shows that SLTT governments need additional support to apply these resources: whereas existing federal resources provide high-level, general guidance, SLTT government agencies must navigate a host of challenges that are context-specific (e.g., specific to regional laws, agency practices, etc.). SLTT government agency leaders have voiced a need for individualized support in accounting for these context-specific considerations when navigating procurement decisions.

Today, private companies are promising state and local government agencies that using their AI services can transform the public sector. They describe diverse potential applications, from supporting complex decision-making to automating administrative tasks. However, there is minimal evidence that these new AI technologies can improve the quality and efficiency of public services. There is evidence, on the other hand, that AI in public services can have unintended consequences; when these technologies go wrong, they often worsen the very problems they aim to solve, for example by increasing disparities in decision-making while attempting to reduce them.

Challenges to responsible technology procurement follow a historical trend: Government technology has frequently been critiqued for failures in the past decades. Because public services such as healthcare, social work, and credit lending have such high stakes, failures in these areas can have far-reaching consequences. They also entail significant financial costs, with millions of dollars wasted on technologies that ultimately get abandoned. Even when subpar solutions remain in use, agency staff may be forced to work with them for extended periods despite their poor performance.

The new administration is presented with a critical opportunity to redirect these trends. Training each relevant individual within SLTT government agencies, or hiring new experts within each agency, is not cost- or resource-effective. Without appropriate training and support from the federal government, AI adoption is likely to be concentrated in well-resourced SLTT agencies, leaving those with fewer resources (and potentially more low-income communities) behind. This could lead to disparate AI adoption and practices among SLTT agencies, further exacerbating existing inequalities. The administration urgently needs a plan that supports SLTT agencies in developing sustainable knowledge of how to navigate responsible AI procurement and use over time, without requiring that each relevant individual in the public sector be trained. This plan also needs to ensure that, over time, the public sector workforce is transformed in its ability to navigate complicated AI procurement processes and relationships, without requiring constant retraining as workforces turn over.

In the context of federal and SLTT governments, a federal center of excellence for state and local AI procurement would accomplish these goals through a “hub and spoke” model. This center of excellence would serve as the “hub” that houses a small number of selected experts from academia, non-profit organizations, and government. These experts would then train “spokes”—existing state and local public sector agency workers—in navigating responsible procurement practices. To support public sector agencies in learning from each others’ practices and challenges, this federal center of excellence could additionally create communication channels for information- and resource-sharing across the state and local agencies. 

Procured AI technologies in government will serve as the backbone of local public services for decades to come. Upskilling government agencies to make smart decisions about which AI technologies to procure (and which are best avoided) would not only protect the public from harmful AI systems but would also save the government money by decreasing the likelihood of adopting expensive AI technologies that end up getting dropped. 

Plan of Action 

A federal center of excellence for state and local AI procurement would ensure that procured AI technologies are responsibly selected and used to serve as a strong and reliable backbone for public sector services. This federal center of excellence can support both intra-agency and inter-agency capacity-building and learning about AI procurement and use—that is, mechanisms to support expertise development within a given public sector agency and between multiple public sector agencies. This federal center of excellence would not be deliberative (i.e., SLTT governments would receive guidance and support but would not have to seek approval on their practices). Rather, the goal would be to upskill SLTT agencies so they are better equipped to navigate their own AI procurement and use endeavors. 

To upskill SLTT agencies through intra-agency capacity-building, the federal center of excellence would house experts in relevant domain areas (e.g., responsible AI, public interest technology, and related topics). Fellows would work with cohorts of public sector agencies to provide training and consultation services. These fellows, who would come from government, academia, and civil society, would build on their existing expertise and experiences with responsible AI procurement, integrating new considerations proposed by federal standards for responsible AI (e.g., Executive Order 13960, OMB-M-21-06, NIST RMF). The fellows would serve as advisors to help operationalize these guidelines into practical steps and strategies, helping to set a consistent bar for responsible AI procurement and use practices along the way.

Cohorts of SLTT government agency workers, including existing agency leaders, data officers, and procurement experts, would work with an assigned advisor to receive consultation and training on specific tasks their agency currently faces. For example, for agencies or programs with low AI maturity or familiarity (e.g., departments beginning to explore the adoption of new AI tools), the center of excellence can help navigate the procurement decision-making process: helping them understand their agency-specific technology needs, draft procurement contracts, select among proposals, and negotiate plans for maintenance. For agencies and programs with high AI maturity or familiarity, the advisor can train staff to recognize unexpected AI behaviors and apply mitigation strategies as they arise. These communication pathways would also allow federal agencies to better understand the challenges state and local governments face in AI procurement and maintenance, which can help seed ideas for improving existing resources and creating new resources for AI procurement support.

To scaffold inter-agency capacity-building, the center of excellence can build the foundations for cross-agency knowledge-sharing. In particular, it would include a communication platform and an online hub of procurement resources, both shared among agencies. The communication platform would allow state and local government agency leaders who are navigating AI procurement to share challenges, lessons learned, and tacit knowledge with each other. The online hub of resources would be populated by the center of excellence and by SLTT government agencies. Through the online hub, agencies can upload and learn about new responsible AI resources and toolkits (e.g., those created by government and the research community), as well as examples of procurement contracts that agencies themselves have used.

To implement this vision, the new administration should expand the U.S. General Services Administration’s (GSA) existing Artificial Intelligence Center of Excellence (AI CoE), which provides resources and infrastructural support for AI adoption across the federal government. We propose expanding this existing AI CoE to include the components of our proposed center of excellence for state and local AI procurement and use. This would direct support towards SLTT government agencies—which are currently unaccounted for in the existing AI CoE—specifically via our proposed capacity-building model.

Over the next 12 months, the goals of expanding the AI CoE would be three-fold:

1. Develop the core components of our proposed center of excellence within the AI CoE. 

2. Launch collaborations with a first cohort of SLTT government agencies, focusing on building a path for successful collaborations.

3. Build a path for the proposed center of excellence to grow and gain experience. If the first few collaborations earn strong reviews, design a strategy for scaling.

Conclusion

Expanding the existing AI CoE to include our proposed federal center of excellence for AI procurement and use can help ensure that SLTT governments are equipped to make informed, responsible decisions about integrating AI technologies into public services. This body would provide necessary guidance and training, helping to bridge the gap between high-level federal resources and the context-specific needs of SLTT agencies. By fostering both intra-agency and inter-agency capacity-building for responsible AI procurement and use, this approach builds sustainable expertise, promotes equitable AI adoption, and protects public interest. This ensures that AI enhances—rather than harms—the efficiency and quality of public services. As new waves of AI technologies continue to enter the public sector, touching a breadth of services critical to the welfare of the American people, this center of excellence will help maintain high standards for responsible public sector AI for decades to come.

Frequently Asked Questions
What is existing guidance for responsible SLTT procurement and use of AI technologies?

Federal agencies have published numerous resources to support responsible AI procurement, including Executive Order 13960, OMB-M-21-06, and the NIST RMF. Some of these resources provide guidance on responsible AI development in organizations broadly, across the public, private, and non-profit sectors. For example, the NIST RMF provides organizations with guidelines to identify, assess, and manage risks in AI systems to promote the deployment of more trustworthy and fair AI systems. Others focus on public sector AI applications. For instance, the OMB Memorandum published by the Office of Management and Budget describes strategies for federal agencies to adopt responsible AI procurement and use practices.

Why a federal center? Can’t SLTT governments do this on their own?

Research shows that these resources often require additional skills and knowledge, making it challenging for agencies to use them effectively on their own. A federal center of excellence for state and local AI procurement could help agencies learn to use these resources. Adapting these guidelines to specific SLTT agency contexts requires careful interpretation, which may, in turn, require specialized expertise or resources. Creating this federal center of excellence to guide responsible SLTT procurement on the ground can help bridge this critical gap. Fellows in the center of excellence and SLTT procurement agencies can draw on this existing pool of guidance to establish a strong foundation for their practices.

How has this “hub and spoke” model been used before?

The hub and spoke model has been used across a range of applications to support efficient management of resources and services. For instance, in healthcare, providers have used the hub and spoke model to organize their network of services; specialized, intensive services would be located in “hub” healthcare establishments whereas secondary services would be provided in “spoke” establishments, allowing for more efficient and accessible healthcare services. Similar organizational networks have been followed in transportation, retail, and cybersecurity. Microsoft follows a hub and spoke model to govern responsible AI practices and disseminate relevant resources. Microsoft has a single centralized “hub” within the company that houses responsible AI experts—those with expertise on the implementation of the company’s responsible AI goals. These responsible AI experts then train “spokes”—workers residing in product and sales teams across the company, who learn about best practices and support their team in implementing them.

Who would be the experts selected as fellows by the center of excellence? What kind of training would they receive?

During their training, fellows would develop a stronger grounding in (1) the on-the-ground challenges and practices that public sector agencies grapple with when developing, procuring, and using AI technologies and (2) the existing AI procurement and use guidelines provided by federal agencies. The content of the training would be drawn from syntheses of prior research on public sector AI procurement and use challenges, as well as existing federal resources available to guide responsible AI development. For example, prior research has explored public sector challenges to supporting algorithmic fairness and accountability and responsible AI design and adoption decisions, among other topics.

The experts who would serve as fellows for the federal center of excellence would be individuals with expertise and experience studying the impacts of AI technologies and designing interventions to support more responsible AI development, procurement, and use. Given the interdisciplinary nature of the expertise required for the role, individuals should have an applied, socio-technical background on responsible AI practices, ideally (but not necessarily) for the public sector. The individual would be expected to have the skills needed to share emerging responsible AI practices, strategies, and tacit knowledge with public sector employees developing or procuring AI technologies. This covers a broad range of potential backgrounds.

What are some examples of the skills or competencies fellows might bring to the Center?

For example, a professor in academia who studies how to develop public sector AI systems that are more fair and aligned with community needs may be a good fit. A socio-technical researcher in civil society with direct experience studying or developing new tools to support more responsible AI development, who has intuition over which tools and practices may be more or less effective, may also be a good candidate. A data officer in a state government agency who has direct experience procuring and governing AI technologies in their department, with an ability to readily anticipate AI-related challenges other agencies may face, may also be a good fit. The cohort of fellows should include a balanced mix of individuals coming from government, academia, and civil society.

Strengthening Information Integrity with Provenance for AI-Generated Text Using ‘Fuzzy Provenance’ Solutions

Synthetic text generated by artificial intelligence (AI) can pose significant threats to information integrity. When users accept deceptive AI-generated content—such as large-scale false social media posts by malign foreign actors—as factual, national security is put at risk. One way to help mitigate this danger is by giving users a clear understanding of the provenance of the information they encounter online. 

Here, provenance refers to any verifiable indication of whether text was generated by a human or by AI, for example by using a watermark. However, given the limitations of watermarking AI-generated text, this memo also introduces the concept of fuzzy provenance, which involves identifying exact text matches that appear elsewhere on the internet. As these matches will not always be available, the descriptor “fuzzy” is used. While this information will not always establish authenticity with certainty, it offers users additional clues about the origins of a piece of text.

To ensure platforms can effectively provide this information to users, the National Institute of Standards and Technology (NIST)’s AI Safety Institute should develop guidance on how to display to users both provenance and fuzzy provenance—where available—within no more than one click. To expand the utility of fuzzy provenance, NIST could also issue guidance on how generative AI companies could allow the records of their free AI models to be crawled and indexed by search engines, thereby making potential matches to AI-generated text easier to discover. Tradeoffs surrounding this approach are explored further in the FAQ section.

By creating a reliable, user-friendly framework for surfacing these details, NIST would empower readers to better discern the trustworthiness of the text they encounter, thereby helping to counteract the risks posed by deceptive AI-generated content.

Challenge and Opportunity

Synthetic Text and Information Integrity

In the past two years, generative AI models have become widely accessible, allowing users to produce customized text simply by providing prompts. As a result, there has been a rapid proliferation of “synthetic” text—AI-generated content—across the internet. As NIST’s Generative Artificial Intelligence Profile notes, this means that there is a “[l]owered barrier of entry to generated text that may not distinguish fact from opinion or fiction or acknowledge uncertainties, or could be leveraged for large scale dis- and mis-information campaigns.”

Information integrity risks stemming from synthetic text—particularly when generated for non-creative purposes—can pose a serious threat to national security. For example, in July 2024 the Justice Department disrupted Russian generative-AI-enabled disinformation bot farms. These Russian bots produced synthetic text, including in the form of social media posts by fake personas, meant to promote messages aligned with the interests of the Russian government. 

Provenance Methods For Reducing Information Integrity Risks

NIST has an opportunity to provide community guidance to reduce the information integrity risks posed by all types of synthetic content. The main solution currently being considered by NIST for reducing these risks is provenance, which refers to whether a piece of content was generated by AI or a human. As described by NIST, provenance is often ascertained by creating a non-fungible watermark or cryptographic signature that is permanently associated with the piece of content. Where available, provenance information is helpful because knowing the origin of text can help a user decide whether to rely on the facts it contains. For example, an AI-generated news report may currently be less trustworthy than a human-written one because the former is more prone to fabrications.

However, there are currently no methods widely accepted as effective for determining the provenance of synthetic text. As NIST’s report, Reducing Risks Posed by Synthetic Content, details, “[t]he effectiveness of synthetic text detection is subject to ongoing debate” (Sec. 3.2.2.4). Even if a piece of text is originally generated with a watermark (e.g., with words chosen to follow a unique statistical pattern), people can easily paraphrase the text (especially via AI) without transferring the original watermark. Text watermarks are also vulnerable to adversarial attacks: malicious actors can mimic the watermark signature and make text appear watermarked when it is not.
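To make this concrete, the sketch below shows how a detector for one family of statistical watermarks might work, assuming a “green list” scheme in which the generator favors a pseudorandom subset of tokens at each step. The scheme, threshold, and function names are illustrative assumptions, not a NIST-specified method.

```python
import hashlib
import math

GREEN_FRACTION = 0.5  # share of the vocabulary favored at each step (assumed)

def is_green(prev_token: int, token: int) -> bool:
    """Deterministically assign `token` to a green list seeded by the preceding token."""
    digest = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64 < GREEN_FRACTION

def watermark_z_score(tokens: list[int]) -> float:
    """How far the observed green-token count sits above the unwatermarked expectation."""
    n = len(tokens) - 1  # number of consecutive token pairs
    if n <= 0:
        return 0.0
    greens = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    expected = GREEN_FRACTION * n
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / std

# A large z-score (say, above 4) suggests watermarked output. Paraphrasing replaces
# tokens and token pairs, which is exactly why the score, and the watermark, degrade.
```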

Plan of Action

To capture the benefits of provenance, while mitigating some of its weaknesses, NIST should issue guidance on how platforms can make available to users both provenance and “fuzzy provenance” of text. Fuzzy provenance is coined here to refer to exact text matches on the internet, which can sometimes reflect provenance but not necessarily (thus “fuzzy”). Optionally, NIST could also consider issuing guidance on how generative AI companies can make their free models’ records available to be crawled and indexed by search engines, so that fuzzy provenance information would show text matches with generative AI model records. There are tradeoffs to this recommendation, which is why it is optional; see FAQs for further discussion. Making both provenance and fuzzy provenance information available (in no more than one click) will give users more information to help them evaluate how trustworthy a piece of text is and reduce information integrity risks. 

Combined Provenance and Fuzzy Provenance Approach

Figure 1. Mock implementation of combined provenance and fuzzy provenance
The above image captures what an implementation of the combined provenance and fuzzy provenance guidance might include. When a user highlights a piece of text that is sufficiently long, they can click “learn more about this text” to find more information.

There are ways to communicate provenance and fuzzy provenance so that the information is both useful and easy to understand; the concept shown in Figure 1 is one example.

Benefits of the Combined Approach

Showing both provenance and fuzzy provenance information provides users with critical context to evaluate the trustworthiness of a piece of text. Between provenance and fuzzy provenance, users would have access to information about many pieces of high-impact text, especially claims that could be particularly harmful for individuals, groups, or society at large. Making all this information immediately available also reduces friction for users so that they can get this information right where they encounter text.

Provenance information can be helpful to provide to users when it is available. For instance, knowing that a tech support company’s website description was AI-generated may encourage users to check other sources (like reviews) to see if the company is a real entity (and AI was used just to generate the description) or a fake entity entirely, before giving a deposit to hire the company (see user journey 1 in this video for an example).

Where clear provenance information is not available, fuzzy provenance can help fill the gap by providing valuable context to users in several ways.

Fuzzy provenance is also effective because it surfaces context and gives users autonomy to decide how to interpret that context. Academic studies have found that users tend to be more receptive to additional information they can use in their own critical thinking than to a conclusion shown directly (like a label), which can backfire or be misinterpreted. This is why users may trust contextual methods, like crowdsourced information, more than provenance labels.

Finally, fuzzy provenance methods are generally feasible at scale, since they can be easily implemented with existing search engine capabilities (via an exact text match search). Furthermore, because fuzzy provenance relies only on exact text matching with other sources on the internet, it works without needing coordination among text producers or compliance from bad actors.
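As a rough illustration of why exact-match lookup is cheap to build on existing infrastructure, the toy sketch below indexes fixed-length word windows; the window length, class name, and example URL are illustrative assumptions, not a description of any search engine’s internals.

```python
from collections import defaultdict

SHINGLE_WORDS = 8  # length of each exact-match window (assumed)

def shingles(text: str):
    """Yield every consecutive SHINGLE_WORDS-word window, normalized to lowercase."""
    words = text.lower().split()
    for i in range(max(0, len(words) - SHINGLE_WORDS + 1)):
        yield " ".join(words[i : i + SHINGLE_WORDS])

class ExactMatchIndex:
    """Toy stand-in for a search engine's exact-phrase index."""

    def __init__(self):
        self.pages = defaultdict(set)  # shingle -> URLs containing it

    def add_page(self, url: str, text: str):
        for s in shingles(text):
            self.pages[s].add(url)

    def fuzzy_provenance(self, highlighted: str) -> set[str]:
        """URLs with an exact match for any window of the highlighted text."""
        hits = set()
        for s in shingles(highlighted):
            hits |= self.pages.get(s, set())
        return hits

index = ExactMatchIndex()
index.add_page(
    "https://example.com/wire-story",  # hypothetical URL
    "Officials confirmed the bridge closure would last through early spring pending repairs",
)
print(index.fuzzy_provenance("the bridge closure would last through early spring"))
# {'https://example.com/wire-story'}
```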

Conclusion

To reduce the information integrity risks posed by synthetic text in a scalable and effective way, the National Institute of Standards and Technology (NIST) should develop community guidance on how platforms hosting text-based digital content can make accessible (in no more than one click) the provenance and “fuzzy provenance” of a piece of text, when available. NIST should also consider issuing guidance on how AI companies could make their free generative AI records available to be crawled by search engines, to amplify the effectiveness of “fuzzy provenance”.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Making free generative AI records available to be crawled by search engines involves tradeoffs, which is why it is an optional recommendation. Below are some questions regarding implementation guidance and tradeoffs, including privacy and proprietary considerations.

Frequently Asked Questions on the optional free generative AI records recommendation
What are some examples of implementation guidance for AI model companies?

Guidance could instruct AI model companies on how to make their free generative AI conversation records available to be crawled and indexed by search engines. Similar to shared ChatGPT logs or Perplexity threads, a unique URL would be created for each conversation, capturing the date it occurred. The key difference is that all free-model conversation records would be made available, but containing only the AI outputs of each conversation, after removing personally identifiable information (PII) (see the privacy question in this FAQ). Because users can already choose to share conversations with each other (meaning the conversation logs are retained), and conversation logs for major model providers do not currently appear to have an expiration date, this requirement should not impose an additional storage burden on AI model companies.
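One possible shape for such a sanitized record is sketched below; the URL scheme, field names, and PII patterns are illustrative assumptions, not any provider’s actual format, and a production PII filter would need to be far more thorough.

```python
import re
import uuid
from datetime import date

# Illustrative patterns only; real PII filtering is a much harder problem.
PII_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),      # email addresses
    re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),  # US-style phone numbers
]

def scrub(text: str) -> str:
    """Replace matched PII spans with a redaction marker."""
    for pattern in PII_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def build_record(conversation: list[dict]) -> dict:
    """Keep only the AI outputs, scrubbed of PII, under a unique dated URL."""
    return {
        "url": f"https://example-model.ai/share/{uuid.uuid4().hex}",  # hypothetical scheme
        "date": date.today().isoformat(),
        "outputs": [scrub(t["text"]) for t in conversation if t["role"] == "assistant"],
    }

convo = [
    {"role": "user", "text": "Email me at jane.doe@example.com about my order."},
    {"role": "assistant", "text": "I will summarize the order status for jane.doe@example.com here."},
]
print(build_record(convo)["outputs"])  # ['I will summarize the order status for [REDACTED] here.']
```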

What are some examples of implementation guidance for search engines?

Guidance could instruct search engines on how to crawl and index these model logs so that queries with exact text matches to the AI outputs would surface the appropriate logs. This would not be very different from crawling and indexing other types of new URLs and should be well within existing search engine capabilities. In terms of storage, since only free-model logs would be crawled and indexed, and most free models rate-limit the number of user messages allowed, storage should also not be a concern. For instance, even with 200 million weekly active users for ChatGPT, the number of conversations in a year would only be on the order of billions, which is well within the scale at which existing search engines already operate to let users “search the web”.
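The order-of-magnitude claim is easy to check; the per-user conversation rate below is an assumption for illustration:

```python
weekly_active_users = 200_000_000    # reported ChatGPT scale
conversations_per_user_week = 3      # illustrative assumption
weeks_per_year = 52

total = weekly_active_users * conversations_per_user_week * weeks_per_year
print(f"{total:.1e} conversations/year")  # ~3.1e10: billions to tens of billions of URLs,
# modest next to the hundreds of billions of pages web-scale indexes reportedly handle
```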

How can we ensure user privacy when making free AI model records available?

  • Output filtering should be applied to the AI outputs to remove any personally identifiable information (PII) present in the model’s responses. However, it might still be possible to infer who the original user was by looking at the AI outputs taken together and reconstructing some of the user prompts. This is a privacy concern that should be further investigated. Possible mitigations include additionally removing location references below a certain granularity (e.g., removing mentions of neighborhoods but retaining mentions of states) and presenting the AI responses in a conversation in randomized order.

  • Removals should be made possible by a user-initiated process demonstrating privacy concerns, similar to existing search engine removal protocols.

  • User consent would also be an important consideration here. NIST could propose that free-model users must opt in, or that crawling/indexing of free-model records be on by default with the ability to opt out; either approach limits coverage, and a strict opt-in requirement in particular may greatly compromise the reliability of fuzzy provenance.

What proprietary tradeoffs should be considered when making free AI model outputs available to be crawled and indexed by search engines?

  • Training on AI-generated text: AI companies are concerned about inadvertently collecting too much AI-generated text from the web and training on it instead of higher-quality human-generated text, thus degrading the quality of their own generative models. However, because the conversation logs would have identifiable domain prefixes (e.g., chatgpt.com, perplexity.ai), it would be easy to exclude them during training if desired; a minimal sketch of such a filter appears after this list. Indeed, provenance and fuzzy provenance may help AI companies avoid unintentionally training on AI-generated text.

  • Sharing model outputs: On the flip side, AI companies might be concerned that making so many AI-generated model outputs available for competitors to access could help competitors improve their own models. This is a fair concern, though it is partially mitigated because a) specific user inputs would not be available, only the AI outputs; and b) only free-model outputs would be logged, rather than any premium models, providing some proprietary protection. However, it is still possible that competitors could enhance their own responses by training at scale on the structure of AI outputs from other models.
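A minimal sketch of the domain-based exclusion mentioned above, assuming a maintained blocklist of AI-log hosts (the blocklist contents and function name are illustrative):

```python
from urllib.parse import urlparse

# Hypothetical blocklist of domains known to host AI conversation logs.
AI_LOG_DOMAINS = {"chatgpt.com", "perplexity.ai"}

def keep_for_training(url: str) -> bool:
    """Return False for URLs on an AI-log domain or any of its subdomains."""
    host = urlparse(url).netloc.lower()
    return not any(host == d or host.endswith("." + d) for d in AI_LOG_DOMAINS)

urls = ["https://chatgpt.com/share/abc123", "https://example.org/news/story"]
print([u for u in urls if keep_for_training(u)])  # only the non-AI-log URL survives
```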

Tending Tomorrow’s Soil: Investing in Learning Ecosystems

“Tending soil.”

That’s how Fred Rogers described Mister Rogers’ Neighborhood, his beloved television program that aired from 1968 to 2001. Grounded in principles gleaned from top learning scientists, the Neighborhood offered a model for how “learning ecosystems” can work in tandem to tend the soil of learning. 

Today, a growing body of evidence suggests that Rogers’ model was not only effective, but that real-life learning ecosystems – networks that include classrooms, living rooms, libraries, museums, and more – may be the most promising approach for preparing learners for tomorrow. As such, cities and regions around the world are constructing thoughtfully designed ecosystems that leverage and connect their communities’ assets, responding to the aptitudes, needs, and dreams of the learners they serve. 

Efforts to study and scale these ecosystems at local, state, and federal levels would position the nation’s students as globally competitive, future-ready learners.

The Challenge

For decades, America’s primary tool for “tending soil” has been its public schools, which are (and will continue to be) the country’s best hope for fulfilling its promise of opportunity. At the same time, the nation’s industrial-era soil has shifted. From the way our communities function to the way our economy works, dramatic social and technological upheavals have remade modern society. This incongruity – between the world as it is and the world that schools were designed for – has blunted the effectiveness of education reforms; heaped systemic, society-wide problems on individual teachers; and shortchanged the students who need the most support.

“Public education in the United States is at a crossroads,” notes a report published by the Alliance for Learning Innovation, Education Reimagined, and Transcend: “to ensure future generations’ success in a globally competitive economy, it must move beyond a one-size-fits-all model towards a new paradigm that prioritizes innovation that holds promise to meet the needs, interests, and aspirations of each and every learner.”

What’s needed is the more holistic paradigm epitomized by Mister Rogers’ Neighborhood: a collaborative ecosystem that sparks engaged, motivated learners by providing the tools, resources, and relationships that every young person deserves.

The Opportunity

With components both public and private, virtual and natural, “learning ecosystems” found in communities around the world reflect today’s connected, interdependent society. These ecosystems are not replacements for schools – rather, they embrace and support all that schools can be, while also tending to the vital links between the many places where kids and families learn: parks, libraries, museums, afterschool programs, businesses, and beyond. The best of these ecosystems function as real-life versions of Mister Rogers’ Neighborhood: places where learning happens everywhere, both in and out of school. Where every learner can turn to people and programs that help them become, as Rogers used to say, “the best of whoever you are.”

Nearly every community contains the components of effective learning ecosystems. The partnerships forged within them can – when properly tended – spark and spread high-impact innovations; support collaboration among formal and informal educators; provide opportunities for young people to solve real-world problems; and create pathways to success in a fast-changing modern economy. By studying and investing in the mechanisms that connect these ecosystems, policymakers can build “neighborhoods” of learning that prepare students for citizenship, work, and life.

Plan of Action

Learning ecosystems can be cultivated at every level. Whether local, state, or federal, interested policymakers should:

Establish a commission on learning ecosystems. Tasked with studying learning ecosystems in the U.S. and abroad, the commission would identify best practices and recommend policy that 1) strengthens an area’s existing learning ecosystems and/or 2) nurtures new connections. Launched at the federal, state, or local level and led by someone with a track record of getting things done, the commission should include representatives from various sectors, including early childhood educators, K-12 teachers and administrators, librarians, researchers, CEOs and business leaders, artists, makers, and leaders from philanthropic and community-based organizations. The commission will help identify existing activities, research, and funding for learning ecosystems and will foster coordination and collaboration to maximize the effectiveness of the ecosystem’s resources.

A 2024 report by Knowledge to Power Catalysts notes that these cross-sector commissions are increasingly common at various levels of government, from county councils to city halls. As policymakers establish interagency working groups, departments of children and youth, and networks of human services providers, “such offices at the county or municipal level often play a role in cross-sector collaboratives that engage the nonprofit, faith, philanthropic, and business communities as well.”

Pittsburgh’s Remake Learning ecosystem, for example, is steered by the Remake Learning Council, a blue-ribbon commission of Southwestern Pennsylvania leaders from education, government, business, and the civic sector committed to “working together to support teaching, mentoring, and design – across formal and informal educational settings – that spark creativity in kids, activating them to acquire knowledge and skills necessary for navigating lifelong learning, the workforce, and citizenship.”

Establish a competitive grant program to support pilot projects. These grants could seed new ecosystems and/or support innovation among proven ecosystems. (Several promising ecosystems are operating throughout the country already; however, many are excluded from funding opportunities by narrowly focused RFPs.) This grant program can be administered by the commission to catalyze and strengthen learning ecosystems at the federal, state, or local levels, and could be modeled after existing initiatives.

Host a summit on learning ecosystems. Leveraging the gravitas of a government and/or civic institution such as the White House, a governor’s mansion, or a city hall, bring members of the commission together with learning ecosystem leaders and practitioners, along with cross-sector community leaders. A summit will underscore promising practices, share lessons learned, and highlight monetary and in-kind commitments to support ecosystems. The summit could apply to learning ecosystems the philanthropic commitments model that previous presidential administrations developed and used to secure private and philanthropic support. Visit remakelearning.org/forge to see an example of one summit’s schedule, activities, and grantmaking opportunities.

Establish an ongoing learning ecosystem grant program for scaling and implementing lessons learned. This grant program could be administered at the federal, state, or local level – by a city government, for example, or by partnerships like the Appalachian Regional Commission. As new learning ecosystems form and existing ones evolve, policymakers should continue to provide grants that support learning ecosystem partnerships between communities that allow innovations in one city or region to take root in another. 

Invest in research, publications, convenings, outreach, and engagement efforts that highlight local ecosystems and make their work more visible, especially for families. The ongoing grant program can include funding for opportunities that elevate the benefits of learning ecosystems. Events such as Remake Learning Days – an annual festival billed as “the world’s largest open house for teaching and learning” and drawing an estimated 300,000 attendees worldwide – build demand for learning ecosystems among parents, caregivers, and community leaders, ensuring grassroots buy-in and lasting change.

This memo was developed in partnership with the Alliance for Learning Innovation, a coalition dedicated to advocating for building a better research and development infrastructure in education for the benefit of all students. 

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Frequently Asked Questions
How do learning ecosystems benefit students?

Within a learning ecosystem, students aren’t limited to classrooms, schools, or even their own districts – nor do they have to travel far to find opportunities that light them up. By blurring the lines between “in school” and “out of school,” ecosystems make learning more engaging, more relevant, and even more joyful. Pittsburgh’s Remake Learning ecosystem, for example, connects robotics professionals with classroom teachers to teach coding and STEM. Librarians partner with teaching artists to offer weeklong deep dives into topics attractive to young people. A school district launches a program – say, a drone academy for girls – and opens it up to learners from neighboring districts.


As ecosystems expand to include more members, the partnerships formed within them spark exciting, ever-evolving opportunities for learners.

How do learning ecosystems benefit communities?

Within an ecosystem, learning isn’t just for young people. An ecosystem’s out-of-school components – businesses, universities, makerspaces, and more – bring real-world problems directly to learners, leading to tangible change in communities and a more talented, competitive future workforce. In greater Washington, D.C., for example, teachers partner with cultural institutions to develop curricula based on students’ suggestions for improving the city. In Kansas City, high schoolers partner with entrepreneurs and health care professionals to develop solutions for everything from salmonella poisoning to ectopic pregnancy. And in Pittsburgh, public school students are studying cybersecurity, training for aviation careers, conducting cutting-edge cancer research, and more.

How do learning ecosystems benefit educators?

Learning ecosystems also support educators. In Pittsburgh, educators involved in Remake Learning note that “they feel celebrated and validated in their work,” writes researcher Erin Gatz. Moreover, the ecosystem’s “shared learning and supportive environment were shown to help educators define or reinforce their professional identity.”

How do learning ecosystems benefit local economies?

Learning ecosystems can aid local economies, too. In eastern Kentucky, an ecosystem of school districts, universities, and economic development organizations empowers students to reimagine former coal land for entrepreneurial purposes. And in West Virginia, an ecosystem of student-run companies has helped the state recover from natural disasters.

Where are examples of learning ecosystems already operating in the United States?

Since 2007, Pittsburgh’s Remake Learning has emerged as the most talked-about learning ecosystem in the world. Studied by scholars, recognized by heads of state, and expanding to include more than 700 schools, libraries, museums, and other sites of learning, Remake Learning has – through two decades of stewardship – inspired more than 40 additional learning ecosystems. Meanwhile, the network’s Moonshot Grants are seeding new ecosystems across the nation and around the world.

What inspiration can we draw from globally?

Global demand for learning ecosystems is growing. A 2020 report released by HundrED, a Finland-based nonprofit, profiles 16 of the most promising examples operating in the United States. Likewise, the World Innovation Summit for Education explores nine learning ecosystems operating worldwide: “Across the globe, there is a growing consensus that education demands radical transformation if we want all citizens to become future-ready in the face of a more digitally enabled, uncertain, and fast-changing world,” the summit notes. “Education has the potential to be the greatest enabler of preparing everyone, young and old, for the future, yet supporting learning too often remains an issue for schools alone.”

What about public schools?

Learning ecosystems support collaboration and community among public schools, connecting classrooms, schools, and educators across diverse districts. Within Remake Learning, for example, a cohort of 42 school districts works together – and in partnership with afterschool programs, health care providers, universities, and others – to make Western Pennsylvania a model for the future of learning.


The cohort’s collaborative approach has led to a dazzling array of awards and opportunities for students: A traditional classroom becomes a futuristic flight simulator. A school district opens its doors to therapy dogs and farm animals. Students in dual-credit classes earn college degrees before they’ve even finished high school. Thanks in part to the ecosystem’s efforts, Western Pennsylvania is now recognized as home to the largest cluster of nationally celebrated school districts in the country.

I’m interested in starting or supporting a learning ecosystem in my community. Where do I start?

As demand for learning ecosystems continues to gather momentum, several organizations have released playbooks and white papers designed to guide policymakers, practitioners, and other interested parties.


What are some additional resources?

In addition, Remake Learning has released three publications that draw on more than twenty years of “tending soil.” The publications share methods and mindsets for navigating some of the most critical questions that face ecosystems’ stewards.


Protecting Infant Nutrition Security:
Shifting the Paradigm on Breastfeeding to Build a Healthier Future for all Americans

The health and wellbeing of American babies have been put at risk in recent years, and we can do better. Recent events have revealed deep vulnerabilities in our nation’s infant nutrition security. Pandemic-induced disruptions in maternity care practices that support the establishment of breastfeeding, the infant formula recall and resulting shortage, and a spate of weather-related natural disasters have demonstrated infrastructure gaps and a lack of resilience to safety and supply chain challenges. All put babies in danger during times of crisis.

Breastfeeding is foundational to lifelong health and wellness, but systemic barriers prevent many families from meeting their breastfeeding goals. The policies and infrastructure surrounding postpartum families often limit their ability to succeed in breastfeeding. Despite its important benefits, new data from the CDC show that while 84.1% of infants start out breastfeeding, these numbers fall dramatically in the weeks after birth, with only 57.5% of infants breastfeeding exclusively at one month of age. Disparities persist across geographic location and other sociodemographic factors, including race/ethnicity, maternal age, and education. Breastfeeding rates in North America are the lowest in the world. Longstanding evidence shows that it is not a lack of desire but rather a lack of support, access, and resources that creates these barriers.

This administration has an opportunity to take a systems approach to increasing support for breastfeeding and making parenting easier for new mothers. Key policy changes to address systemic barriers include providing guidance to states on expanding Medicaid coverage of donor milk, building breastfeeding support and protection into the existing emergency response framework at the Federal Emergency Management Agency, and expressing support for establishing a national paid leave program. 

Policymakers on both sides of the aisle agree that no baby should ever go hungry, as evidenced by the bipartisan passage of recent breastfeeding legislation (detailed below) and widely supported regulations. However, significant barriers remain. This administration has the power to address long-standing inequities and set the stage for the next generation of parents and infants to thrive. Ensuring that every family has the support they need to make the best decisions for their child’s health and wellness benefits the individual, the family, the community, and the economy. 

Challenge and Opportunity

Breastfeeding plays an essential role in establishing good nutrition and healthy weight, reducing the risk of chronic disease and infant mortality, and improving maternal and infant health outcomes. Breastfed children have a decreased risk of obesity, type 1 and 2 diabetes, asthma, and childhood leukemia. Women who breastfeed reduce their risk of specific chronic diseases, including type 2 diabetes, cardiovascular disease, and breast and ovarian cancers. On a relational level, the hormones produced while breastfeeding, like oxytocin, enhance the maternal-infant bond and emotional well-being. The American Academy of Pediatrics recommends infants be exclusively breastfed for approximately six months with continued breastfeeding while introducing complementary foods for two years or as long as mutually desired by the mother and child.  

Despite the well-documented health benefits of breastfeeding, deep inequities in healthcare, community, and employment settings impede success. Systemic barriers disproportionately impact Black, Indigenous, and other communities of color, as well as families in rural and economically distressed areas. These populations already bear the weight of numerous health inequities, including limited access to nutritious foods and higher rates of chronic disease—issues that breastfeeding could help mitigate. 

Breastfeeding Saves Dollars and Makes Sense 

Low breastfeeding rates in the United States cost our nation millions of dollars through higher health system costs, lost productivity, and higher household expenditures. Globally, not breastfeeding is associated with economic losses of about $302 billion annually, or 0.49% of world gross national income. At the national level, improving breastfeeding practices through programs and policies is one of the best investments a country can make, as every dollar invested is estimated to yield a $35 economic return.

In the United States, chronic disease management results in trillions of dollars in annual healthcare costs, which increased breastfeeding rates could help reduce. In the workplace setting, employers see significant cost savings when their workers are able to maintain breastfeeding after returning to work. Increased breastfeeding rates are also associated with reduced environmental impact and associated expenses. Savings can be seen at home as well, as following optimal breastfeeding practices reduces household expenditures. Investments in infant nutrition last a lifetime, paying long-term dividends critical for economic and human development. Economists have completed cost-benefit analyses, finding that investments in nutrition are one of the best value-for-money development actions, laying the groundwork for the success of investments in other sectors.

Ongoing trends in breastfeeding outcomes indicate that there are entrenched policy-level challenges and barriers that need to be addressed to ensure that all infants have an opportunity to benefit from access to human milk. Currently, for too many families, the odds are stacked against them. It’s not a question of individual choice but one of systemic injustice. Families are often forced into feeding decisions that do not reflect their true desires due to a lack of accessible resources, support, and infrastructure.

While the current landscape is rife with challenges, the solutions are known and the potential benefits are tremendous. This administration has the opportunity to realize these benefits and implement a smart and strategic response to the urgent situation that our nation is facing just as the political will is at an all-time high. 

The History of Breastfeeding Policy

In the late 1960s and early 1970s, less than 30 percent of infants were breastfed. The concerted efforts of individuals and organizations, together with the emergence of the field of lactation, have counteracted or removed many barriers, and policymakers have sent a clear and consistent message that breastfeeding is bipartisan. This is evident in the range of recent lactation-friendly legislation.

Administrative efforts, ranging from the Business Case for Breastfeeding and the Surgeon General’s Call to Action to Support Breastfeeding to the armed services’ updates on uniform requirements for lactating soldiers, demonstrate a clear commitment to breastfeeding support across the decades.

These policy changes have made a difference. But additional attention and investment, with a particular focus on the birth and early postpartum period as well as on periods during and after emergencies, are needed to secure the potential health and economic benefits of comprehensive societal support for breastfeeding. This administration can take considerable steps toward improving U.S. health and wellness and protecting infant nutrition security.

Plan of Action

A range of federal agencies coordinate programs, services, and initiatives impacting the breastfeeding journey for new parents. Expanding and building on existing efforts through the following steps can help address some of today’s most pressing barriers to breastfeeding. 

Each of the recommended actions can be implemented independently and would create meaningful, incremental change for families. However, a comprehensive approach that implements all these recommendations would create the marked shift in the landscape needed to improve breastfeeding initiation and duration rates and establish this administration as a champion for breastfeeding families. 

| Agency | Agency Role | Recommended Action | Anticipated Outcome |
| --- | --- | --- | --- |
| Federal Emergency Management Agency (FEMA) | FEMA coordinates within the federal government to make sure America is equipped to prepare for and respond to disasters. | Require FEMA to participate in the Federal Interagency Breastfeeding Workgroup, a collection of federal agencies that come together to connect and collaborate on breastfeeding issues. | Increased connection and coordination across agencies. |
| Federal Emergency Management Agency (FEMA) | FEMA coordinates within the federal government to make sure America is equipped to prepare for and respond to disasters. | Update the FEMA Public Assistance Program and Policy Guide to include breastfeeding and lactation as a functional need so that emergency response efforts can include services from lactation support providers. | Integration of breastfeeding support into emergency response and recovery efforts. |
| Office of Management & Budget (OMB) | The OMB oversees the implementation of the President’s vision across the Executive Branch, including through budget development and execution. | Include funding for the establishment of a national paid family and medical leave program as a priority in the President’s Budget. | Setting the stage for Congressional action. |
| Domestic Policy Council (DPC) | The DPC drives the development and implementation of the President’s domestic policy agenda in the White House and across the Federal government. | Support the efforts of the bipartisan, bicameral congressional Paid Leave Working Group. | Setting the stage for Congressional action. |
This table summarizes the recommendations, grouped by the federal agency that would be responsible for implementing the change to increase breastfeeding rates in the U.S. for improved health and economic outcomes.

Recommendation 1. Increase access to pasteurized donor human milk by directing the Centers for Medicare & Medicaid Services (CMS) to provide guidance to states on expanding Medicaid coverage. 

Pasteurized donor human milk is lifesaving for vulnerable infants, particularly those born preterm or with serious health complications. Across the United States, milk banks gently pasteurize donated human milk and distribute it to fragile infants in need. This lifesaving liquid gold reduces mortality rates, lowers healthcare costs, and shortens hospital stays. Specifically, the use of donor milk is associated with increased survival rates and lowered rates of infections, sepsis, serious lung disease, and gastrointestinal complications. In 2022, there were 380,548 preterm births in the United States, representing 10.4% of live births, so the potential for health and cost savings is substantial. Data from one study shows that the cost of a neonatal intensive care unit stay for infants at very low birth weight is nearly $220,000 for 56 days. The use of donor human milk can reduce hospital length of stay by 18-50 days by preventing the development of necrotizing enterocolitis in preterm infants. The benefits of human milk extend beyond the inpatient stay, with infants receiving all human milk diets in the NICU experiencing fewer hospital readmissions and better overall long-term outcomes.
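For a rough sense of the per-infant savings these figures imply, treating the cited 56-day cost as a constant daily rate (a simplifying assumption):

```python
nicu_cost_56_days = 220_000          # cited cost of a 56-day very-low-birth-weight NICU stay
daily_cost = nicu_cost_56_days / 56  # ~$3,929 per day under a constant-rate assumption

for days_avoided in (18, 50):        # cited range of stay reduction from donor milk
    print(f"{days_avoided} fewer days -> ~${days_avoided * daily_cost:,.0f} saved per infant")
# roughly $71,000 to $196,000 per infant, before counting avoided readmissions
```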

Although donor milk has important health implications for vulnerable infants in all communities and can result in significant economic benefit, donor milk is not equitably accessible. While milk banks serve all states, not all communities have easy access to donated human milk. Moreover, many insurers are not required to cover the cost, creating significant barriers to access and contributing to racial and geographic disparities.

To ensure that more babies in need have access to lifesaving donor milk, the administration should work with CMS to expand donor milk coverage under state Medicaid programs. Medicaid covers approximately 40% of all US births and 50% of all early preterm births. Medicaid programs in at least 17 states and the District of Columbia already include coverage of donor milk. The administration can expand access to this precious milk, help reduce health care costs, and address racial and geographic disparities by releasing guidance for the remaining states regarding coverage options in Medicaid.

Recommendation 2. Include infant feeding in Federal Emergency Management Agency (FEMA) emergency planning and response.

Infants and children are among the most vulnerable in an emergency, so it is critical that their unique needs are considered and included in emergency planning and response guidance. Breastfeeding provides clean, optimal nutrition; requires no fuel, water, or electricity; and remains available even in the direst circumstances. Human milk contains antibodies that fight infections, including the diarrheal and respiratory infections common among infants in emergency situations. Yet efforts to protect infant and young child feeding in emergencies are sorely lacking, particularly in the immediate aftermath of disasters and emergencies.

Ensuring access to lactation support and supplies as part of emergency response efforts is essential for protecting the health and safety of infants. Active support and coordination between federal, state, and local governments, the commercial milk formula industry, lactation support providers, and all other relevant actors involved in response to emergencies is needed to ensure safe infant and young child feeding practices and equitable access to support. There are two simple, cost-effective steps that FEMA can take to protect breastfeeding, preserve resources, and thus save additional lives during emergencies.

Recommendation 3. Expand access to paid family & medical leave by including paid leave as a priority in the President’s Budget and supporting the efforts of the bipartisan, bicameral congressional Paid Leave Working Group. 

Employment policies in the United States make breastfeeding harder than it needs to be. The United States is one of the only countries in the world without a national paid family and medical leave program. Many parents return to work quickly after birth, before a strong breastfeeding relationship is established, because they cannot afford to take unpaid leave or because they do not qualify for paid leave programs with their employer or through state or local programs. Nearly 1 in 4 employed mothers return to work within two weeks of childbirth.

Paid family leave programs make it possible for employees to take time for childbirth recovery, bond with their baby, establish feeding routines, and adjust to life with a new child without threatening their family’s economic well-being. This precious time provides the foundation for success, contributing to improved rates of breastfeeding initiation and duration, yet only a small portion of workers are able to access it. There are significant disparities in access to paid leave among racial and ethnic groups, with Black and Hispanic employees less likely than their white non-Hispanic counterparts to have access to paid parental leave. There are similar disparities in breastfeeding outcomes among racial groups.  

Momentum to improve the paid family and medical leave landscape in the United States is building substantially. Thirteen states and the District of Columbia have established mandatory state paid family leave systems. Supporting paid leave has become an important component of candidate campaign plans, and bipartisan support for establishing a national program remains strong among voters. The formation of bipartisan Paid Family Leave Working Groups in both the House and Senate demonstrates commitment from policymakers on both sides of the aisle.

By directing the Office of Management and Budget to include funding for paid leave in the President’s Budget recommendation and working collaboratively with the Congressional Paid Leave Working Groups, the administration can advance federal efforts to increase access to paid family and medical leave, improving public health and helping American businesses.  

Conclusion

These three strategies offer the opportunity for the White House to make an immediate and lasting impact by protecting infant nutrition security and addressing disparities in breastfeeding rates, on day one of the Presidential term. A systems approach that utilizes multiple strategies for integrating breastfeeding into existing programs and efforts would help shift the paradigm for new families by addressing long-standing barriers that disproportionately affect marginalized communities—particularly Black, Indigenous, and families of color. A clear and concerted effort from the Administration, as outlined, offers the opportunity to benefit all families and future generations of American babies. 

The administration’s focused and strategic efforts will create a healthier, more supportive world for babies, families, and breastfeeding parents, improve maternal and child health outcomes, and strengthen the economy. This administration has the chance to positively shape the future for generations of American families, ensuring that every baby gets the best possible start in life and that every parent feels empowered and supported.

Now is the time to build on recent momentum and create a world where families have true autonomy in infant feeding decisions. A world where paid family leave allows parents the time to heal, bond, and establish feeding routines; communities provide equitable access to donor milk; and federal, state, and local agencies have formal plans to protect infant feeding during emergencies, ensuring no baby is left vulnerable. Every family deserves to feel empowered and supported in making the best choices for their children, with equitable access to resources and support systems.

This policy memo was written with support from Suzan Ajlouni, Public Health Writing Specialist at the U.S. Breastfeeding Committee. The policy recommendations have been identified through the collective learning, idea sharing, and expertise of USBC members and partners.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Frequently Asked Questions
Isn’t the choice to breastfeed a personal one?

Rather than being a matter of personal choice, infant feeding practice is informed by circumstance and level (or lack) of support. When roadblocks exist at every turn, families are backed into a decision because the alternatives are not available, attainable, or viable. United States policies and infrastructure were not built with the realities of breastfeeding in mind. Change is needed to ensure that all who choose to breastfeed are able to meet their personal breastfeeding goals, and society at large reaps the beneficial social and economic outcomes.

How much would it cost to establish a national paid family and medical leave program?

The Fiscal Year 2024 President’s Budget proposed to establish a national, comprehensive paid family and medical leave program, providing up to 12 weeks of leave to allow eligible workers to take time off to care for and bond with a new child; care for a seriously ill loved one; heal from their own serious illness; address circumstances arising from a loved one’s military deployment; or find safety from domestic violence, sexual assault, or stalking. The budget recommendation included $325 billion for this program. It’s important to look at this with the return on investment in mind, including improved labor force attachment and increased earnings for women; better outcomes and reduced health care costs for ill, injured, or disabled loved ones; savings to other tax-funded programs, including Medicaid, SNAP, and other forms of public assistance; and national economic growth, jobs growth, and increased economic activity.

How will we know if these efforts are having an impact?

There are a variety of national monitoring and surveillance efforts tracking breastfeeding initiation, duration, and exclusivity rates that will show how well these actions are working for the American people, including the National Immunization Survey (NIS), the Pregnancy Risk Assessment Monitoring System (PRAMS), the Infant Feeding Practices Study, and the National Vital Statistics System. The CDC Breastfeeding Report Card is published every two years to bring these key data points together and place them in context. Significant improvements have already been seen across recent decades, with breastfeeding initiation rates increasing from 73.1 percent in 2004 to 84.1 percent in 2021.

Is there enough buy-in from organizations and individuals to support these systemic changes?

The U.S. Breastfeeding Committee is a coalition bringing together approximately 140 organizations from coast to coast representing the grassroots to the treetops – including federal agencies, national, state, tribal, and territorial organizations, and for-profit businesses – that support the USBC mission to create a landscape of breastfeeding support across the United States. Nationwide, a network of hundreds of thousands of grassroots advocates from across the political spectrum support efforts like these. Together, we are committed to ensuring that all families in the U.S. have the support, resources, and accommodations to achieve their breastfeeding goals in the communities where they live, learn, work, and play. The U.S. Breastfeeding Committee and our network stand ready to work with the administration to advance this plan of action.

Supporting Device Reprocessing to Reduce Waste in Health Care

The U.S. healthcare system produces 5 million tons of waste annually, or approximately 29 pounds per hospital bed daily. Roughly 80 percent of the healthcare industry’s carbon footprint comes from the production, transportation, use, and disposal of single-use devices (SUDs), which are pervasive in the hospital. Notably, 95% of the environmental impact of single-use medical products results from the production of those products. 

While the Food and Drug Administration (FDA) oversees new devices being brought to market, it is up to the manufacturer to determine whether a device will be marketed as single-use or multiple-use. Manufacturers have a financial incentive to market devices as “single-use” or “disposable,” since marketing a device as reusable requires expensive cleaning validations.

To decrease healthcare waste and environmental impact, the FDA should lead in identifying reusable devices that can be safely reprocessed and in incentivizing manufacturers to test the reprocessing of their devices. This will require the FDA to strengthen its management of single-use and reusable device labeling. Further, the Veterans Health Administration, the nation’s largest healthcare system, should reverse its prohibition on reprocessed SUDs and become a national leader in the reprocessing of medical devices.

Challenge and Opportunity

While healthcare institutions are embracing decarbonization and waste reduction plans, they cannot do this effectively without addressing the enormous impact of single-use devices (SUDs). The majority of research literature concludes that SUDs are associated with higher levels of environmental impact than reusable products. 

FDA regulations governing SUD reprocessing make it extremely challenging for hospitals to reprocess low-risk SUDs, which is inconsistent with the FDA’s “least burdensome provisions.” The FDA requires hospitals or commercial SUD reprocessing facilities to act as the device’s manufacturer, meaning they must meet the FDA’s requirements for medical device manufacturers and take on the associated liabilities. Hospitals are not keen to take on the liability of a manufacturer, yet commercial reprocessors do not offer many of the lower-risk devices that could be reprocessed.

As a result, hospitals and clinics are no longer willing to sterilize SUDs through methods like autoclaving, despite documentation showing that sterilization is safe and precedent showing that similar devices have been safely sterilized and reused for many years without adverse events. Many devices, including pessaries for pelvic organ prolapse and titanium phacoemulsification tips for cataract surgery, can be safely reprocessed for clinical use. Given their risk profile, these products need not be subject to the FDA’s full medical device manufacturer requirements.

Further, manufacturers are incentivized to bring SUDs to market more quickly than reusable devices. Manufacturers often market devices as single-use solely because they chose not to conduct expensive cleaning and sterilization validations, not because such validations cannot be done. FDA regulations that govern SUDs should be better tailored to each device so that clinicians on the front lines can provide appropriate and environmentally sustainable health care.

Reprocessed devices cost 25 to 40% less than new ones. The use of reprocessed SUDs can thus reduce hospital costs significantly: about $465 million in savings in 2023. Per the Association of Medical Device Reprocessors, if the reprocessing practices of the top 10% of hospitals were adopted across all hospitals that use reprocessed devices, U.S. hospitals could have saved an additional $2.28 billion that same year. Enabling and encouraging the use of reprocessed SUDs can yield significant cost reductions without compromising patient care.

Plan of Action

Because the FDA has regulated SUD reprocessing since 2000, it is imperative that the FDA take the lead on creating a clear, streamlined process for clearing or approving reusable devices in order to ensure the safety and efficacy of reprocessed devices. These recommendations would permit healthcare systems to reprocess and reuse medical devices without fear of noncompliance findings from the Joint Commission or the Centers for Medicare and Medicaid Services, which rely on FDA regulations. Further, the nation’s largest healthcare system, the Veterans Health Administration, should become a leader in medical device reprocessing and showcase the standard of practice for sustainable healthcare.

  1. FDA should publish a list of SUDs that have a proven track record of safe reprocessing to empower hospitals to reduce waste, costs, and environmental impact without compromising patient safety. The FDA should change the labels of single-use devices to multi-use when reuse by hospitals is possible and validated via clinical studies, as the “single-use” label has promoted the mistaken belief that SUDs cannot be safely reprocessed. Per the FDA, the single-use label simply means a given device has not undergone the original equipment manufacturer (OEM) validation tests necessary to label a device “reusable.” The label does not mean the device cannot be cleared for reprocessing.
  2. To help governments and healthcare systems prioritize the environmental and cost benefits of reusable devices over SUDs, the FDA should incentivize applications for reusable or commercially reprocessable devices, such as by expediting their review. The FDA can also incentivize use of reprocessed devices through payments to hospitals for meeting reprocessing benchmarks.
  3. The FDA should not subject low-risk devices that can be safely reprocessed for clinical use to full device manufacturer requirements. The FDA should further support healthcare procurement staff by creating an accessible database of devices cleared for reprocessing and alerting healthcare systems to regulated reprocessing options. In doing so, the FDA can reduce the burden on hospitals of reprocessing low-risk SUDs and encourage healthcare systems to sterilize SUDs through methods like autoclaving.
  4. As the only major health system in the U.S. to prohibit the use of reprocessed SUDs, the Veterans Health Administration should reverse its prohibition as soon as possible. The prohibition likely persists because of outdated risk determinations, and it comes at major costs for the environment and for Americans. Reversing it would be consistent with the FDA’s conclusions that reprocessed SUDs are safe and effective.
  5. FDA should recommend that manufacturers publicly report the materials used in the composition of devices so that end users can more easily compare products and determine their environmental impact. As explained by AMDR, some OEM practices discourage or fully prevent the use of reprocessed devices; the FDA should vigorously track and impede these practices. Requiring public reporting of device composition will not only help healthcare buyers make more informed decisions but also promote a more circular economy that supports sustainability efforts.

Conclusion

To decrease costs, waste, and environmental impact, the healthcare sector urgently needs to increase its use of reusable devices. One of the largest barriers is a set of FDA rules that impose needlessly stringent requirements on hospitals, hindering the adoption of less wasteful, less costly reprocessed devices.

The FDA’s critical role in medical device labeling, and in clearing or approving more devices as reusable, has downstream implications for many other regulatory and oversight bodies, including the Centers for Medicare & Medicaid Services (CMS), the Association for the Advancement of Medical Instrumentation (AAMI), the Joint Commission, hospitals, healthcare offices, and healthcare providers. It is essential for the FDA to step up and take the lead in revising the device reprocessing pipeline.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Blank Checks for Black Boxes: Bring AI Governance to Competitive Grants

The misuse of AI in federally funded projects can risk public safety and waste taxpayer dollars.

The Trump administration has a pivotal opportunity to spot wasteful spending, promote public trust in AI, and safeguard Americans from unchecked AI decisions. To tackle AI risks in grant spending, grant-making agencies should adopt trustworthy AI practices in their grant competitions and start enforcing them against reckless grantees.

Federal AI spending could soon skyrocket. One ambitious legislative plan from a Senate AI Working Group calls for doubling non-defense AI spending to $32 billion a year by 2026. That funding would grow AI across R&D, cybersecurity, testing infrastructure, and small business support. 

Yet as federal AI investment accelerates, safeguards against snake oil lag behind. Grants can be wasted on AI that doesn’t work. Grants can pay for untested AI with unknown risks. Grants can blur the lines of who is accountable for fixing AI’s mistakes. And grants offer little recourse to those affected by an AI system’s flawed decisions. Such failures risk exacerbating public distrust of AI, discouraging potentially beneficial uses. 

Oversight for federal grant spending is lacking. 

Watchdogs, meanwhile, play a losing game, chasing after errant programs one by one only after harm has been done. Luckily, momentum is building for reform. Policymakers recognize that investing in untrustworthy AI erodes public trust and stifles genuine innovation. Steps policymakers could take include setting clear AI quality standards, training grant judges, monitoring grantees’ AI usage, and evaluating outcomes to ensure projects achieve their potential. By establishing oversight practices, agencies can foster high-potential projects for economic competitiveness while protecting the public from harm. 

Challenge and Opportunity

Poor AI Oversight Jeopardizes Innovation and Civil Rights

The U.S. government advances public goals in areas like healthcare, research, and social programs by providing various types of federal assistance. This funding can go to state and local governments or directly to organizations, nonprofits, and individuals. When federal agencies award grants, they typically do so expecting less routine involvement than they would with other funding mechanisms, for example cooperative agreements. Not all federal grants look the same—agencies administer mandatory grants, where the authorizing statute determines who receives funding, and competitive grants (or “discretionary grants”), where the agency selects award winners. In competitive grants, agencies have more flexibility to set program-specific conditions and award criteria, which opens opportunities for policymakers to structure how best to direct dollars to innovative projects and mitigate emerging risks. 

These competitive grants fall short on AI oversight. Programmatic policy is set in cross-cutting laws, agency-wide policies, and grant-specific rules; a lack of AI oversight mars all three. To date, no government-wide AI regulation extends to AI grantmaking. Even when President Biden’s 2023 AI Executive Order directed agencies to implement responsible AI practices, the order’s implementing policies exempted grant spending (see footnote 25) entirely from the new safeguards. In this vacuum, the 26 grantmaking agencies are on their own to set agency-wide policies. Few have. Agencies can also set AI rules just for specific funding opportunities. They do not. In fact, in a review of a large set of agency discretionary grant programs, only a handful of funding notices announced a standard for AI quality in a proposed program. (See: One Bad NOFO?) The net result? A policy and implementation gap for the use of AI in grant-funded programs.

Funding mistakes damage agency credibility, stifle innovation, and undermine the support that financial assistance aims to provide to people and communities. Recent controversies highlight how today’s lax measures—particularly in setting clear rules for federal financial assistance, monitoring how funds are used, and responding to public feedback—have led to inefficient and rights-trampling results in just the last few years.

Any grant can attract controversy, and AI grants are no exception. But recent cases spotlight transparency, monitoring, and participation deficits—the same kinds of AI oversight problems weakening trust in government that policymakers aim to fix in other contexts.

Smart spending depends on careful planning. Without it, programs may struggle to drive innovation or end up funding AI that infringes people’s rights. OMB, agency Inspectors General, and grant managers will need guidance to evaluate what money is going toward AI and how to implement effective oversight. Government will face tradeoffs and challenges in promoting AI innovation in federal grants, particularly due to:

1) The AI Screening Problem. When reviewing applications, agencies might fail to screen out candidates that exaggerate their AI capabilities—or that fail to report bunk AI use altogether. Grantmaking requires calculated risks on ideas that might fail. But grant judges who are not experts in AI can make bad bets. Applicants will pitch AI solutions directly to these non-experts, and grant winners, regardless of their original proposal, will likely purchase and deploy AI, creating additional oversight challenges. 

2) The grant-procurement divide. When planning a grant, agencies might set overly burdensome restrictions that dissuade qualified applicants from applying or otherwise take up too much time, getting in the way of grant goals. Grants are meant to be hands-off; fostering breakthroughs while preventing negligence will be a challenging needle to thread. 

3) Limited agency capacity. Agencies may be unequipped to monitor grant recipients’ use of AI. After awarding funding, agencies can miss when vetted AI breaks down on launch. While agencies audit grantees, those audits typically focus on fraud and financial missteps. In some cases, agencies may not be measuring grantee performance well at all (slides 12-13). Yet regular monitoring, similar to the oversight used in procurement, will be necessary to catch emergent problems that affect AI outcomes. Enforcement, too, could be cause for concern; agencies claw back funds for procedural issues, but “almost never withhold federal funds when grantees are out of compliance with the substantive requirements of their grant statutes.” Even as the funding agency steps away, an inaccurate AI system can persist, embedding risks over a longer period of time.

Plan of Action

Recommendation 1. OMB and agencies should bake in pre-award scrutiny through uniform requirements and clearer guidelines

Recommendation 2. OMB and grant marketplaces should coordinate information sharing between agencies

To support review of AI-related grants, OMB and grantmaking agency staff should pool knowledge on AI’s tricky legal, policy, and technical matters. 

Recommendation 3. Agencies should embrace targeted hiring and talent exchanges for grant review boards

Agencies should have experts in a given AI topic judging grant competitions. Doing so requires overcoming talent acquisition challenges.

Recommendation 4. Agencies should step up post-award monitoring and enforcement

You can’t improve what you don’t measure—especially when it comes to AI. Quantifying, documenting, and enforcing against careless AI uses will be a new task for many grantmaking agencies. Incident reporting will improve the chances that existing cross-cutting regulations, including civil rights laws, can rein in AI gone awry. 

Recommendation 5. Agencies should encourage and fund efforts to investigate and measure AI harms 

Conclusion

Little limits how grant winners can spend federal dollars on AI. With the government poised to massively expand its spending on AI, that should change. 

The federal failure to oversee AI use in grants erodes public trust, civil rights, effective service delivery, and the promise of government-backed innovation. Congressional efforts to remedy these problems, such as starting probes and drafting letters, are important oversight measures, but they only come after the damage is done. 

Both the Trump and Biden administrations have recognized that AI is exceptional and needs exceptional scrutiny. Many of the lessons learned from scrutinizing federal agency AI procurement apply to grant competitions. Today’s confluence of public will, interest, and urgency is a rare opportunity to widen the aperture of AI governance to include grantmaking.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable, and safe future that we all hope for, whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Frequently Asked Questions
What authorities allow agencies to run grant competitions?

Agencies’ enabling statutes often provide the authority for grant competitions, and the statutory language typically leaves it to agencies to set further competition-specific policies. Additionally, laws like the DATA Act and the Federal Grant and Cooperative Agreement Act offer definitions and guidance on agencies’ use of federal funds.

What kinds of steps do agencies take in pre-award funding?

Agencies already conduct a great deal of pre-award planning to align grantmaking with Executive Orders. For example, in one survey of grantmakers, a little over half of respondents updated their pre-award processes, such as applications and organization information, to comply with an Executive Order. Grantmakers aligning grant planning with the Trump administration’s future Executive Orders will likely follow similar steps.

Who receives federal grant funding for the development and use of AI?

A wide range of states, local governments, companies, and individuals receive grant competition funds. Spending records, available on USASpending.gov, give some insight into where grant funding goes, though these records, too, can be incomplete.

Fighting Fakes and Liars’ Dividends: We Need To Build a National Digital Content Authentication Technologies Research Ecosystem

The U.S. faces mounting challenges posed by increasingly sophisticated synthetic content: digital media (images, audio, video, and text) produced or manipulated by generative artificial intelligence (AI). Already, there has been a proliferation in the abuse of generative AI technology to weaponize synthetic content for harmful purposes, such as financial fraud, political deepfakes, and the non-consensual creation of intimate materials featuring adults or children. As people become less able to distinguish between what is real and what is fake, it has become easier than ever to be misled by synthetic content, whether by accident or with malicious intent. This makes advancing alternative countermeasures, such as technical solutions, more vital than ever before. To address the growing risks arising from synthetic content misuse, the National Institute of Standards and Technology (NIST) should take the following steps to create and cultivate a robust digital content authentication technologies research ecosystem: 1) establish dedicated university-led national research centers, 2) develop a national synthetic content database, and 3) run and coordinate prize competitions to strengthen technical countermeasures. In turn, these initiatives will require 4) dedicated and sustained Congressional funding. This will enable technical countermeasures to keep closer pace with the rapidly evolving synthetic content threat landscape, maintaining the U.S.’s role as a global leader in responsible, safe, and secure AI.

Challenge and Opportunity

While it is clear that generative AI offers tremendous benefits, such as for scientific research, healthcare, and economic innovation, the technology also poses an accelerating threat to U.S. national interests. Generative AI’s ability to produce highly realistic synthetic content has increasingly enabled its harmful abuse and undermined public trust in digital information. Threat actors have already begun to weaponize synthetic content across a widening scope of damaging activities to growing effect. Projected losses from AI-enabled fraud are anticipated to reach up to $40 billion by 2027, while experts estimate that millions of adults and children have already been targeted with AI-generated or manipulated nonconsensual intimate media or child sexual abuse materials, a figure anticipated to grow rapidly. While the widely feared scenario of manipulative synthetic content compromising the integrity of the 2024 U.S. election did not ultimately materialize, malicious AI-generated content was nonetheless found to have shaped election discourse and bolstered damaging narratives. Equally concerning is the cumulative effect this increasingly widespread abuse is having on the broader erosion of public trust in the authenticity of all digital information. This degradation of trust has not only led to an alarming trend of authentic content being increasingly dismissed as ‘AI-generated’, but has also empowered those seeking to discredit the truth, a phenomenon known as the “liar’s dividend”.

From the amusing… to the not-so-benign.

A. In March 2023, a humorous synthetic image of Pope Francis wearing a Balenciaga coat, first posted on Reddit by creator Pablo Xavier, quickly went viral across social media.

B. In May 2023, this synthetic image was duplicitously published on X as an authentic photograph of an explosion near the Pentagon. Before being debunked by authorities, the image’s widespread circulation online caused significant confusion and even led to a temporary dip in the U.S. stock market.

Research has demonstrated that current generative AI technology can produce synthetic content realistic enough that people are no longer able to reliably distinguish between AI-generated and authentic media. It is no longer feasible to continue relying, as we currently do, predominantly on human perception to protect against the threat arising from increasingly widespread synthetic content misuse. This new reality only increases the urgency of deploying robust alternative countermeasures to protect the integrity of the information ecosystem. The suite of digital content authentication technologies (DCAT), that is, the techniques, tools, and methods that seek to make the legitimacy of digital media transparent to the observer, offers a promising avenue for addressing this challenge. These technologies encompass a range of solutions, from identification techniques such as machine detection and digital forensics to classification and labeling methods like watermarking or cryptographic signatures. DCAT also encompasses technical approaches that aim to record and preserve the origin of digital media, including content provenance, blockchain, and hashing.
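To make the hashing and cryptographic-signature approaches named above concrete, the sketch below (in Python, assuming the third-party cryptography package) shows how a publisher might bind a signature to the exact bytes of a piece of media, and how anyone holding the public key can verify it later. This is a minimal illustration of the general technique, not an implementation of any deployed provenance standard such as C2PA; the names and key handling are hypothetical.

```python
# Minimal hash-plus-signature provenance sketch (illustrative only).
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def sign_content(data: bytes, key: ed25519.Ed25519PrivateKey) -> bytes:
    """Sign a SHA-256 digest of the content, keeping the record small even for video."""
    return key.sign(hashlib.sha256(data).digest())

def verify_content(data: bytes, signature: bytes,
                   pub: ed25519.Ed25519PublicKey) -> bool:
    """Return True only if `data` is bit-for-bit what the signer saw."""
    try:
        pub.verify(signature, hashlib.sha256(data).digest())
        return True
    except InvalidSignature:
        return False

key = ed25519.Ed25519PrivateKey.generate()
media = b"...raw image bytes..."                # placeholder content
signature = sign_content(media, key)
assert verify_content(media, signature, key.public_key())             # authentic
assert not verify_content(media + b"x", signature, key.public_key())  # tampered
```

Note the tradeoff this illustrates: any single-bit change invalidates the signature, which makes such schemes tamper-evident but also fragile under routine transformations like re-encoding or resizing.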

Evolution of Synthetic Media

Screenshot from an AI-manipulated video of President Obama

Published in 2018, this now infamous PSA sought to illustrate the dangers of synthetic content. It shows an AI-manipulated video of President Obama, using narration from a comedy sketch by comedian Jordan Peele.

In 2020, a hobbyist creator employed an open-source generative AI model to ‘enhance’ the Hollywood CGI version of Princess Leia in the film Rogue One.

The hugely popular TikTok account @deeptomcruise posts parody videos featuring a Tom Cruise impersonator face-swapped with the real Tom Cruise’s face, including this 2022 video, which racked up millions of views.

The 2024 film Here relied extensively on generative AI technology to de-age and face-swap actors in real time as they were being filmed.

Robust DCAT capabilities will be indispensable for defending against the harms posed by synthetic content misuse, as well as for bolstering public trust in both information systems and AI development. These technical countermeasures will be critical for alleviating the growing burden on citizens, online platforms, and law enforcement to manually authenticate digital content. Moreover, DCAT will be vital for enforcing emerging legislation, including AI labeling requirements and prohibitions on illegal synthetic content. The importance of developing these capabilities is underscored by the ten bills currently under Congressional consideration (see Figure 1) that, if passed, would require the employment of DCAT-relevant tools, techniques, and methods.

Figure 1. Congressional bills which would require the use of DCAT tools, techniques, and methods.
| Bill Name | Senate | House |
| --- | --- | --- |
| AI Labelling Act | S.2691 | H.R.6466 |
| Take It Down Act | S.4569 | H.R.8989 |
| DEFIANCE Act | S.3696 | H.R.7569 |
| Preventing Deepfakes of Intimate Images Act |  | H.R.3106 |
| DEEPFAKES Accountability Act |  | H.R.5586 |
| AI Transparency in Elections Act | S.3875 | H.R.8668 |
| Securing Elections From AI Deception Act |  | H.R.8858 |
| Protecting Consumers from Deceptive AI Act |  | H.R.7766 |
| COPIED Act | S.4674 |  |
| NO FAKES Act | S.4875 | H.R.9551 |

However, significant challenges remain. DCAT capabilities need to be improved, with many currently possessing weaknesses or limitations such as brittleness or security gaps. Moreover, implementing these countermeasures must be carefully managed to avoid unintended consequences in the information ecosystem, like deploying confusing or ineffective labeling to denote the presence of real or fake digital media. As a result, substantial investment in DCAT R&D is needed to develop these technical countermeasures into an effective and reliable defense against synthetic content threats.
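To give a feel for the brittleness problem, the toy sketch below (assuming only NumPy; the scheme is deliberately naive and far weaker than production watermarking) embeds a watermark in the least significant bit of each pixel. The mark reads back perfectly from an untouched image but is erased by even mild requantization, a crude stand-in for the re-compression that images routinely undergo when shared online.

```python
# Toy LSB watermark: perfect recovery until mild requantization wipes it out.
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
mark = rng.integers(0, 2, size=image.shape, dtype=np.uint8)  # 1 bit per pixel

watermarked = (image & 0xFE) | mark          # overwrite each pixel's lowest bit
print(((watermarked & 1) == mark).mean())    # 1.0 -> mark fully recovered

requantized = (watermarked // 4) * 4         # mild quantization (compression proxy)
print(((requantized & 1) == mark).mean())    # ~0.5 -> no better than chance
```

Real schemes embed marks far more robustly, but they face the same arms race against transformations and deliberate removal.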

The U.S. government has demonstrated its commitment to advancing DCAT to reduce synthetic content risks through recent executive actions and agency initiatives. The 2023 Executive Order on AI (EO 14110) mandated the development of content authentication and tracking tools. Charged by EO 14110 with addressing these challenges, NIST has taken several steps toward advancing DCAT capabilities. For example, NIST’s recently established AI Safety Institute (AISI) takes the lead in championing this work in partnership with NIST’s AI Innovation Lab (NAIIL). Key developments include: the dedication of one of the U.S. Artificial Intelligence Safety Institute Consortium’s (AISIC) working groups to identifying and advancing DCAT R&D; the publication of NIST AI 100-4, which “examines the existing standards, tools, methods, and practices, as well as the potential development of further science-backed standards and techniques” regarding current and prospective DCAT capabilities; and the $11 million dedicated to international research on addressing dangers arising from synthetic content, announced at the first convening of the International Network of AI Safety Institutes. Additionally, NIST’s Information Technology Laboratory (ITL) has launched the GenAI Challenge Program to evaluate and advance DCAT capabilities. Meanwhile, two pending bills in Congress, the Artificial Intelligence Research, Innovation, and Accountability Act (S. 3312) and the Future of Artificial Intelligence Innovation Act (S. 4178), include provisions for DCAT R&D.

Although these critical first steps have been taken, an ambitious and sustained federal effort is necessary to facilitate the advancement of technical countermeasures such as DCAT. This is necessary to more successfully combat the risks posed by synthetic content—both in the immediate and long-term future. To gain and maintain a competitive edge in the ongoing race between deception and detection, it is vital to establish a robust national research ecosystem that fosters agile, comprehensive, and sustained DCAT R&D.

Plan of Action

NIST should undertake three initiatives: 1) establishing dedicated university-based DCAT research centers, 2) curating and maintaining a shared national database of synthetic content for training and evaluation, and 3) running and overseeing regular federal prize competitions to drive innovation on critical DCAT challenges. These programs, which should be spearheaded by AISI and NAIIL, are critical to creating a robust and resilient U.S. DCAT research ecosystem. In addition, the 118th Congress should 4) allocate dedicated funding to support these enterprises.

These recommendations are designed not only to accelerate DCAT capabilities in the immediate future, but also to build a strong foundation for long-term DCAT R&D efforts. As generative AI capabilities expand, authentication technologies must keep pace, meaning that developing and deploying effective technical countermeasures will require ongoing, iterative work. Success demands extensive collaboration across the technology and research sectors to expand problem coverage, maximize resources, avoid duplication, and accelerate the development of effective solutions. This coordinated approach is essential given the diverse range of technologies and methodologies that must be considered when addressing synthetic content risks.

Recommendation 1. Establish DCAT Research Institutes

NIST should establish a network of dedicated university-based research centers to scale up and foster long-term, fundamental R&D on DCAT. While headquartered at leading universities, these centers would collaborate with academic, civil society, industry, and government partners, serving as nationwide focal points for DCAT research and bringing together a network of cross-sector expertise. Complementing NIST’s existing initiatives like the GenAI Challenge, the centers’ research priorities would be guided by AISI and NAIIL, with expert input from the AISIC, the International Network of AISI, and other key stakeholders.

A distributed research network offers several strategic advantages. It leverages elite expertise from industry and academia, and permanent institutions dedicated to DCAT R&D enable the sustained, iterative development of authentication technologies needed to keep pace with advancing generative AI capabilities. Central coordination by AISI and NAIIL would also ensure comprehensive coverage of research priorities while minimizing redundant efforts. Such a structure provides the foundation for the robust, long-term research ecosystem essential for developing effective countermeasures against synthetic content threats.

There are multiple pathways by which dedicated DCAT research centers could be established. One approach is direct NIST funding and oversight, following the model of Carnegie Mellon University’s AI Cooperative Research Center. Alternatively, centers could be established through the National AI Research Institutes Program, similar to the University of Maryland’s Institute for Trustworthy AI in Law & Society, leveraging NSF’s existing partnership with NIST.

The DCAT research agenda could be structured in two ways. Informed by NIST AI 100-4, a vertical approach would assign a specific technology to each center (e.g., digital watermarking, metadata recording, provenance data tracking, or synthetic content detection). Each center would then cover all aspects of its assigned capability: improving the robustness and security of existing countermeasures; developing new techniques to address current limitations; conducting real-world testing and evaluation, especially in cross-platform environments; and studying interactions with other technical safeguards and with non-technical countermeasures like regulations or educational initiatives. Conversely, a horizontal approach would divide research agendas across areas such as: advancing multiple established DCAT techniques, tools, and methods; innovating novel techniques, tools, and methods; testing and evaluating combined technical approaches in real-world settings; and examining how multiple technical countermeasures interact with human factors, such as label perception, and with non-technical countermeasures. While either framework provides a strong foundation for advancing DCAT capabilities, institutional expertise and practical considerations make a hybrid model combining both approaches likely the most feasible option.

Recommendation 2. Build and Maintain a National Synthetic Content Database

NIST should also build and maintain a national synthetic content database to advance and accelerate DCAT R&D, similar to existing federal initiatives such as NIST’s National Software Reference Library and NSF’s AI Research Resource pilot. Current DCAT R&D is severely constrained by limited access to diverse, verified, and up-to-date training and testing data. Many researchers, especially in academia, where a significant portion of DCAT research takes place, lack the resources to build and maintain their own datasets. The result is less accurate and more narrowly applicable authentication tools that struggle to keep pace with rapidly advancing AI capabilities.

A centralized database of synthetic and authentic content would accelerate DCAT R&D in several critical ways. First, it would relieve research teams of much of the burden of generating or collecting synthetic data for training and evaluation, enabling less well-resourced groups to conduct research and allowing all researchers to focus more on other aspects of R&D. This includes providing much-needed resources for the NIST-facilitated university-based research centers and prize competitions proposed here. Moreover, a shared database could provide more comprehensive coverage of the increasingly varied synthetic content being created today, permitting the development of more effective and robust authentication capabilities. The database would also be useful for establishing standardized evaluation metrics for DCAT capabilities, one of NIST’s critical aims for addressing the risks posed by AI technology.

A national database would need to be comprehensive, encompassing samples of both early and state-of-the-art synthetic content. It should include controlled, laboratory-generated datasets alongside verified “in the wild” (real-world) synthetic content, covering both benign and potentially harmful examples. Diversity is also critical to the database’s utility: synthetic content should span multiple individual and combined modalities (text, image, audio, video) and feature varied human populations as well as a variety of non-human subject matter. To keep the database relevant as generative AI capabilities continue to evolve, it will also need to routinely incorporate novel synthetic content that reflects the state of the art in generation.
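As a purely illustrative sketch, a single record in such a database might capture the dimensions described above: modality, synthetic or authentic, lab-generated versus collected in the wild, benign versus potentially harmful, and freshness. The field names below are assumptions for the sake of example, not a NIST schema (Python 3.10+).

```python
from dataclasses import dataclass

@dataclass
class ContentRecord:
    sha256: str                    # stable identifier for the exact bytes
    modalities: list[str]          # e.g. ["image"] or ["audio", "video"]
    synthetic: bool                # True if generated or manipulated by AI
    generator: str | None = None   # model or tool used, when known
    source: str = "lab"            # "lab" or "in-the-wild"
    potentially_harmful: bool = False
    collected: str = ""            # ISO date, to track dataset freshness

record = ContentRecord(
    sha256="9f2b...",              # hypothetical truncated digest
    modalities=["image"],
    synthetic=True,
    generator="open-source diffusion model",
    source="in-the-wild",
    collected="2024-11-02",
)
```

Fields like these would also support the standardized evaluation splits and metrics discussed above.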

Initially, the database could be built on NIST’s GenAI Challenge project work, which includes “evolving benchmark dataset creation”, but as it scales up, it should operate as a standalone program with dedicated resources. The database could be grown and maintained through dataset contributions by AISIC members, industry partners, and academic institutions that have either generated synthetic content datasets themselves or, as generative AI technology providers, can create the large-scale, diverse datasets required. NIST would also direct targeted dataset acquisition to address specific gaps and evaluation needs.

Recommendation 3. Run Public Prize Competitions on DCAT Challenges

Third, NIST should set up and run a coordinated prize competition program, while also serving as the federal oversight lead for prize competitions run by other agencies. Building on existing models such as DARPA SemaFor’s AI FORCE and the FTC’s Voice Cloning challenge, the competitions would address expert-identified priorities as informed by the AISIC, the International Network of AISI, and the proposed DCAT national research centers. Competitions are a proven approach to spurring innovation on complex technical challenges, enabling the rapid identification of solutions through diverse engagement. Monetary prizes are especially effective at ensuring engagement: the 2019 Kaggle Deepfake Detection competition, which had a prize of $1 million, fielded twice as many participants as the 2024 competition, which offered no cash prize. 

By providing structured challenges and meaningful incentives, public competitions can accelerate the development of critical DCAT capabilities while building a more robust and diverse research community. Such competitions encourage novel technical approaches, enable rapid testing of new methods, facilitate the inclusion of new or non-traditional participants, and foster collaborations. The more rapid-cycle, narrowly scoped competitions would also complement the longer-term, broader research conducted by the national DCAT research centers. Centralized federal oversight would also prevent the implementation gaps that have occurred in past approved federal prize competitions. For instance, the 2020 National Defense Authorization Act (NDAA) authorized a $5 million machine detection/deepfakes prize competition (Sec. 5724), and the 2024 NDAA authorized a “Generative AI Detection and Watermark Competition” (Sec. 1543). However, neither prize competition has been carried out, and the Watermark Competition has now been delayed to 2025. Centralized oversight would also ensure that prize competitions are run consistently to address specific technical challenges raised by expert stakeholders, encouraging more rapid development of relevant technical countermeasures.

Some examples of possible prize competitions include: machine detection and digital forensic methods for detecting partially or fully AI-generated content across single or multimodal content; assessing the robustness, interoperability, and security of watermarking and other labeling methods across modalities; and testing innovations in tamper-evident or tamper-proof content provenance tools and other data-origin techniques. Regular assessment and refinement of competition categories will ensure continued relevance as synthetic content capabilities evolve.
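As a sketch of how a machine-detection track might be scored, the snippet below computes ROC AUC directly from label/score pairs, on the assumption that entrants submit one score per test item, with higher meaning more likely synthetic. A real competition would add held-out test data (for example, drawn from the proposed national database), robustness perturbations, and anti-overfitting controls; the numbers here are invented for illustration.

```python
def roc_auc(labels: list[int], scores: list[float]) -> float:
    """Probability that a random synthetic item outscores a random authentic one."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 1, 0]              # 1 = synthetic, 0 = authentic
scores = [0.9, 0.7, 0.4, 0.2, 0.6, 0.8]  # one hypothetical entrant's outputs
print(f"AUC = {roc_auc(labels, scores):.3f}")  # AUC = 0.778
```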

Recommendation 4. Congressional Funding of DCAT Research and Activities

Finally, the 118th Congress should allocate funding for these three NIST initiatives in order to establish the foundations of a strong national DCAT research infrastructure more effectively. Despite widespread acknowledgement of the vital role of technical countermeasures in addressing synthetic content risks, the DCAT research field remains severely underfunded. Recent initiatives, such as the $11 million allocated to the International Network of AI Safety Institutes, are a welcome step in the right direction, but substantially more investment is needed. Thus far, the overall financing of DCAT R&D has been only a drop in the bucket compared to the many billions of dollars industry alone is dedicating to improving generative AI technology.

This stark disparity between investment in generative AI and in DCAT capabilities presents an immediate opportunity for Congressional action. To address the widening capability gap, and to support pending legislation that will rely on technical countermeasures such as DCAT, the 118th Congress should establish multi-year appropriations with matching-fund requirements. This would encourage private sector investment and permit flexible funding mechanisms to address emerging challenges. The funding should be accompanied by regular reporting requirements to track progress and impact.

One specific action Congress could take to jumpstart DCAT R&D investment would be to reauthorize and appropriate the budget earmarked for the unexecuted machine detection competition it approved in 2020. Despite the 2020 NDAA authorizing $5 million for it, no SAC-D funding was allocated, and the competition never took place. Another action would be to explicitly allocate prize money for the watermarking competition authorized by the 2024 NDAA, which currently has no monetary prize attached, to encourage higher participation when the competition takes place this year.

Conclusion

The risks posed by synthetic content present an undeniable danger to U.S. national interests and security. Advancing DCAT capabilities is vital for protecting U.S. citizens against both the direct and more diffuse harms resulting from the proliferating misuse of synthetic content. A robust national DCAT research ecosystem is required to accomplish this. Critically, this is not a challenge that can be addressed through one-time solutions or limited investment—it will require continuous work and dedicated resources to ensure technical countermeasures keep pace alongside increasingly sophisticated synthetic content threats. By implementing these recommendations with sustained federal support and investment, the U.S. will be able to more successfully address current and anticipated synthetic content risks, further reinforcing its role as a global leader in responsible AI use.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable, and safe future that we all hope for, whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Supporting Federal Decision Making through Participatory Technology Assessment

The incoming administration needs a robust, adaptable and scalable participatory assessment capacity to address complex issues at the intersections of science, technology, and society. As such, the next administration should establish a special unit within the Science and Technology Policy Institute (STPI)—an existing federally funded research and development center (FFRDC)—to provide evidence-based, just-in-time, and fit-for-purpose capacity for Participatory Technology Assessment (pTA) to the White House Office of Science and Technology Policy and across executive branch agencies.

Robust participatory and multi-stakeholder engagement supports responsible decision making where neither science nor existing policy provide clear guidance. pTA is an established and evidence-based process to assess public values, manage sociotechnical uncertainties, integrate living and lived knowledge, and bridge democratic gaps on contested and complex science and society issues. By tapping into broader community expertise and experiences, pTA identifies plausible alternatives and solutions that may be overlooked by experts and advocates.

pTA provides critical and informed public input that is currently missing in technocratic policy- and decision-making processes. Policies and decisions will have greater legitimacy, transparency, and accountability as a result of enhanced use of pTA. When systematically integrated into research and development (R&D) processes, pTA can be used for anticipatory governance—that is, assessing socio-technical futures, engaging communities, stakeholders and publics, and  directing decisions, policies, and investments toward desirable outcomes.

A pTA unit within STPI will help build and maintain a shared repository of knowledge and experience of the state of the art and innovative applications across government, and provide pTA as a design, development, implementation, integration and training service for the executive branch regarding emerging scientific and technological issues and questions. By integrating public and expert value assessments, the next administration can ensure that federal science and technology decisions provide the greatest benefit to society.

Challenge and Opportunity

Science and technology (S&T) policy problems always involve issues of public values—such as concerns for safety, prosperity, and justice—alongside issues of fact. However, few systematic and institutional processes meaningfully integrate values from informed public engagement alongside expert consultation. Existing public-engagement mechanisms such as public-comment periods, opinion surveys, and town halls have devolved into little more than “checkbox” exercises. In recent years, the transition to online commenting, intended to improve access and participation, has also amplified the negatives. Online comment systems have “also inadvertently opened the floodgates to mass comment campaigns, misattributed comments, and computer-generated comments, potentially making it harder for agencies to extract the information needed to inform decision making and undermining the legitimacy of the rulemaking process. Many researchers have found that a large percentage of the comments received in mass comment responses are not highly substantive, but rather contain general statements of support or opposition. Commenters are an entirely self selected group, and there is no reason to believe that they are in any way representative of the larger public. … Relatedly, the group of commenters may represent a relatively privileged group, with less advantaged members of the public less likely to engage in this form of political participation.”

Moreover, existing engagement mechanisms tend to be dominated by a small number of experts and organized interest groups: people and institutions who generally have established pathways to influence policy anyway. 

Existing engagement mechanisms leave out the voices of people who may lack the time, awareness, or resources to voice their opinions in response to the Federal Register, such as the roofer, the hair stylist, or the bus driver. This means that important public values—widely held ideas about the rights and benefits that ought to guide policymaking in a democratic system—go overlooked. For S&T policy, a failure to assess and integrate public values may result in R&D and complementary investments that produce market successes with limited public value, such as cancer treatments most patients cannot afford, or in public failures when no technical or market response is immediately available, as in the early stages of a global pandemic. Failure to integrate public values may also mean that little to no attention gets paid to key areas of societal need, such as developing low-cost tools and approaches for mitigating lead and other contaminants in water supplies, or designing effective policy responses, such as behavioral and logistical actions to contain viral infections and deliver vaccinations to resistant populations.

In its 2023 Letter to the President, the President’s Council of Advisors on Science and Technology (PCAST) observed that, “As a nation, we must strive to develop public policies that are informed by scientific understandings and community values. Achieving this goal will require both access to accurate and trusted scientific information and the ability to create dialogue and participatory engagement with the American people.” The PCAST letter recommends issuing “a clarion call to Federal agencies to make science and technology communication and public engagement a core component of their mission and strategy.” It also recommended the establishment of “a new office to support Federal agencies in their continuing efforts to develop and build participatory public engagement and effective science and technology communications.”

Institutionalizing pTA within the Federal Government would provide federal agencies access to the tools and resources they need to apply pTA to existing and emerging complex S&T challenges, enabling experts, publics, and decision makers to tackle pressing issues together. pTA can be applied toward resolving long-standing issues, as well as toward anticipating and addressing questions around emerging or novel S&T issues.

pTA for Long-Standing S&T Issues

Storage and siting of disposal sites for nuclear waste is an example of the kind of ongoing, intractable problem for which pTA is ideally suited. Billions of dollars have been invested to develop a government-managed site for storing nuclear waste in the United States, yet essentially no progress has been made. Entangled political and environmental concerns, such as the risks of leaving nuclear waste in a potentially unsafe state for the long term, have stalled progress. There is also genuine uncertainty and expert disagreement surrounding the safety and efficacy of various storage alternatives. Our nation’s inability to address the issue of nuclear waste has long impacted the development of new and alternative nuclear power plants and has thus contributed to slowing the adoption of nuclear energy.

There are rarely unencumbered or obvious optimal solutions to long-standing S&T issues like nuclear-waste disposal. But a nuanced and informed dialogue among a diverse public, experts, and decision makers—precisely the type of dialogue enabled through pTA—can help break chronic stalemates and address misaligned or nonexistent incentives. By bringing people together to discuss options and to learn about the benefits and risks of different possible solutions, pTA enables stakeholders to better understand each other’s perspectives. Deliberative engagements like pTA often generate empathy, encouraging participants to collaborate and develop recommendations based on shared exploration of values. pTA is designed to facilitate timely, adequate, and pragmatic choices in the context of uncertainty, conflicting goals, and various real-world constraints. This builds transparency and trust across diverse stakeholders while helping move past gridlock.

pTA for Emerging and Novel Issues

pTA is also useful for anticipating controversies and governing emerging S&T challenges, such as the ethical dimensions of gene editing, artificial intelligence, or nuclear energy adoption. pTA helps grow institutional knowledge and expertise about complex topics, as well as about public attitudes and concerns salient to those topics, at scale. For example, challenges associated with COVID-19 vaccines presented several opportunities to deploy pTA. Public trust in the government’s pandemic response was uneven at best. Many Americans reported specific concerns about receiving a COVID-19 vaccine. Public opinion polls delivered mixed messages regarding willingness to receive a COVID-19 vaccine, but polls can overlook other historically significant concerns and socio-political developments in rapidly changing environments. Demands for expediency in vaccine development complicated the situation when normal safeguards and oversights were relaxed. Apparent pressure to deliver a vaccine as soon as possible raised public concern that vaccine safety was not being adequately vetted. Logistical and ethical questions about vaccine rollout also abounded: who should get vaccinated first, at what cost, and alongside what other public health measures? The nation needed a portfolio of differentiated and locally robust strategies for vaccine deployment. pTA would have helped officials anticipate equity challenges and trust deficits related to vaccine use and inform messaging and means of delivery, supporting effective and socially robust rollout strategies for different communities across the country.

pTA is an Established Practice

pTA has a history of use in the European Union and more recently in the United States. Inspired partly by the former U.S. Office of Technology Assessment (OTA), many European nations and the European Parliament operate their own technology assessment (TA) agencies. European TA took a distinctive turn from the OTA in further democratizing science and technology decision-making by developing and implementing a variety of effective and economical practices involving citizen participation (or pTA). Recent European Parliamentary Technology Assessment reports have taken on issues of assistive technologies, future of work, future of mobility, and climate-change innovation.

In the United States, a group of researchers, educators, and policy practitioners established the Expert and Citizen Assessment of Science and Technology (ECAST) network in 2010 to develop a distinctive 21st-century model of TA. Over the course of a decade, ECAST developed an innovative and reflexive participatory technology assessment (pTA) method to support democratic decision-making in different technical, social, and political contexts. After a demonstration project providing citizen input to the United Nations Convention on Biological Diversity in collaboration with the Danish Board of Technology, ECAST worked with the National Aeronautics and Space Administration (NASA) on the agency’s Asteroid Initiative. NASA-sponsored pTA activities about asteroid missions revealed important public concerns about mitigating asteroid impacts alongside decision support for specific NASA missions. Public audiences prioritized a U.S. role in planetary defense from asteroid impacts. These results were communicated to NASA administrators and informed the development of NASA’s Planetary Defense Coordination Office, demonstrating how pTA can identify novel public concerns to inform decision making.

This NASA pTA paved the way for pTA projects with the Department of Energy on nuclear-waste disposal and with the National Oceanic and Atmospheric Administration on community resilience. ECAST’s portfolio also includes projects on climate intervention research, the future of automated vehicles, gene editing, clean energy demonstration projects and interim storage of spent nuclear fuel. These and other pTA projects have been supported by more than six million dollars of public and philanthropic funding over the past ten years. Strong funding support in recent years highlights a growing demand for public engagement in science and technology decision-making.

However, the current scale of investment in pTA projects is vastly outstripped by the number of agencies and policy decisions that stand to benefit from pTA, with demand for applications across use cases ranging from public education and policy decisions to public value mapping and process and institutional innovation. ECAST’s capacity to partner with federal agencies is limited by existing administrative rules and procedures on the federal side and by resource and capacity constraints on the network side. Any external entity like ECAST will encounter difficulties in building institutional memory and in developing cooperative-agreement mechanisms across agencies with different missions, as well as within agencies with different divisions. Integrating public engagement as a standard component of decision making will require aligning the interests of sponsoring agencies, publics, and pTA practitioners within the context of broad and shifting political environments. An FFRDC office dedicated to pTA would provide the embedded infrastructure, staffing, and processes necessary to achieve these challenging tasks. A dedicated home for pTA within the executive branch would also enable systematic research, evaluation, and training related to pTA methods and practices, as well as better integration of pTA tools into decision making involving public education, research, innovation, and policy actions.

Plan of Action

The next administration should support and conduct pTA across the Federal Government by expanding the scope of the Science and Technology Policy Institute (STPI) to include a special unit with a separate operating budget dedicated specifically to pTA. STPI is an existing federally funded research and development center (FFRDC) that already conducts research on emerging technological challenges for the Federal Government. STPI is strategically associated with the White House Office of Science and Technology Policy (OSTP). Integrating pTA across federal agencies aligns with STPI’s mission to provide technical and analytical support to agency sponsors on the assessment of critical and emerging technologies.

A dedicated pTA unit within STPI would (1) provide expertise and resources to conduct pTA for federal agencies and (2) document and archive broader public expertise captured through pTA. Much publicly valuable knowledge generated from one area of S&T is applicable to and usable in other areas. As part of an FFRDC associated with the executive branch, STPI’s pTA unit could collaborate with universities to help disseminate best practices across all executive agencies.

We envision that STPI’s pTA unit would conduct activities related to the general theory and practice of pTA as well as partner with other federal agencies to integrate pTA into projects large and small. Small-scale projects, such as a series of public focus groups, expert consultations, or general topic research could be conducted directly by the pTA unit’s staff. Larger projects, such as a series of in-person or online deliberative engagements, workshops, and subsequent analysis and evaluation, would require additional funding and support from the requesting agencies. The STPI pTA unit could also establish longer-term partnerships with universities and science centers (as in the ECAST network), thereby enabling the federal government to leverage and learn from pTA exercises sponsored by non-federal entities.

The new STPI pTA unit would be funded in part through projects requested by other federal agencies. An agency would fund the pTA unit to design, plan, conduct, assess, and analyze a pTA effort on a project relevant to the agency. This model would enable the unit to distribute costs across the executive branch and would ensure that the unit has access to subject-matter experts (i.e., agency staff) needed to conduct an informed pTA effort. Housing the unit within STPI would contribute to OSTP’s larger portfolio of science and technology policy analysis, open innovation and citizen science, and a robust civic infrastructure.

Cost and Capacities

Adding a pTA unit to STPI would increase federal capacity to conduct pTA, utilizing existing pathways and budget lines to support additional staff and infrastructure for pTA capabilities. Establishing a semi-independent office for pTA within STPI would make it possible for the executive branch to share support staff and other costs. We anticipate that $3.5–5 million per year would be needed to support the core team of researchers, practitioners, leadership, small-scale projects, and operations within STPI for the pTA unit. This funding would require congressional approval.

The STPI pTA unit and its staff would be dedicated to housing and maintaining a critical infrastructure for pTA projects, including practical know-how, robust relationships with partner organizations (e.g., science centers, museums, or other public venues for hosting deliberative pTA forums), and analytic capabilities. This unit would not wholly be responsible for any given pTA effort. Rather, sponsoring agencies should provide resources and direction to support individual pTA projects.

We expect that the STPI pTA unit would initially be able to conduct two or three pTA projects per year. The unit’s capacity and agility would expand over time to meet growing demand from federal agencies. In the fifth year of the unit (the typical length of an FFRDC contract), the presidential administration should consider whether there is sufficient agency demand for pTA—and whether the STPI pTA unit has sufficiently demonstrated proof of concept—to merit establishing a new and independent FFRDC or other government entity fully dedicated to pTA.

Operations

The process for initiating, implementing and finalizing a pTA project would resemble the following:

Pre:

During:

Post:

Conclusion

Participatory Technology Assessment (pTA) is an established suite of tools and processes for eliciting and documenting informed public values and opinions to contribute to decision making around complex issues at the intersections of science, technology, and society.

However, its creative adaptation and innovative use by federal agencies in recent years demonstrate its utility beyond providing decision support: from increasing scientific literacy and social acceptability to defusing tensions and improving mutual trust. By creating capacity for pTA within STPI, the incoming administration will bolster its ability to address longstanding and emerging issues that lie at the intersection of scientific progress and societal well-being, where progress depends on aligning scientific, market, and public values. Such capacity and capabilities will be crucial to improving the legitimacy, transparency, and accountability of decisions regarding how we navigate and tackle the most intractable problems facing our society, now and for years to come.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable, and safe future that we all hope for, whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Frequently Asked Questions
Expert panels are the best way to address complex S&T issues. Why should S&T assessments focus on involving the public and public values?

Experts can help map potential policy and R&D options and their implications. However, there will always be an element of judgment when it comes to deciding among options. This stage is often driven more by ethical and social concerns than by technical assessments. For instance, leaders may need to figure out a fair and just process for governing hazardous-waste disposal, weigh the implications of using genetically modified organisms to control diseases, or decide where to site clean energy research and demonstration projects in resistant or disadvantaged communities. Involving the public in decision-making can help counter challenges associated with expert judgment (for example, “groupthink”) while bringing in perspectives, values, and considerations that experts may overlook or discount.

How do we know that members of the public are sufficiently informed to be able to contribute to a decision?

pTA incorporates a variety of measures to inform discussion, such as background materials distributed to participants and multimedia tools that provide relevant information about the issue. The content of background materials is developed by experts and stakeholders prior to a pTA event to give the public the information they need to thoughtfully engage with the topic at hand. Evaluation tools, such as those from the informal science-education community, can be used to assess how effective background materials are at preparing the public for an informed discussion, and to identify ineffective materials that may need revision or supplementation. Evaluations of several past pTA efforts have 1) shown consistent learning among public participants and 2) documented robust processes for the creation, testing, and refinement of pTA activities that foster informed discussions among participants.

Will doing pTA enhance the communications missions of federal agencies?

pTA can result in products and information, such as reports and data on public values, that are relevant and useful for the communication missions of agencies. However, pTA should avoid becoming a tool for strategic communications or a procedural “checkbox” activity for public engagement. Locating the Federal Government’s dedicated pTA unit within an FFRDC will ensure that pTA is informed by and accountable to a broader community of pTA experts and stakeholders who are independent of any mission agency.

Why does the Federal Government need in-house capacity to conduct pTA?

The work of universities, science centers, and nonpartisan think tanks has greatly expanded the tools and approaches available for using pTA to inform decision-making. Many past and current pTA efforts have been driven by such nongovernmental institutions, and have proven agile, collaborative, and low cost. These efforts, while successful, have limited or diffuse ties to federal decision making.


Embedding pTA within the federal government would help agencies overcome the opportunity and time cost of integrating public input into tight decision-making timelines. ECAST’s work with federal agencies has shown the need for a stable bureaucratic infrastructure surrounding pTA at the federal level to build organizational memory, create a federal community of practice, and productively institutionalize pTA into federal decision-making.


Importantly, pTA is a nonpartisan method that can help reduce tensions and find shared values. Involving a diversity of perspectives through pTA engagements can help stakeholders move beyond impasse and conflict. pTA engagements emphasize recruiting and involving Americans from all walks of life, including those historically excluded from policymaking.

How would a pTA unit within STPI complement existing technology assessment capacity? How would it differ from that existing capacity?

Currently, the Government Accountability Office’s Science, Technology Assessment, and Analytics team (STAA) conducts technology assessments for Congress. Technology Assessment (TA) is designed to enhance understanding of the implications of new technologies or existing S&T issues. The STAA certainly has the capacity to undertake pTA studies on key S&T issues if and when requested by Congress. However, the distinctive form of pTA developed by ECAST and exemplified in ECAST’s work with NASA, NOAA, and DOE follows a knowledge co-production model in which agency program managers work with pTA practitioners to co-design, co-develop, and integrate pTA into their decision-making processes. STAA, as a component of the legislative branch, is not well positioned to work alongside executive agencies in this way. The proposed pTA unit within STPI would make the proven ECAST model available to all executive agencies, nicely complementing the analytical TA capacity that STAA offers the federal legislature.

Why should the government establish a pTA unit within an FFRDC instead of using executive orders to conduct pTA or requiring agencies to undertake pTA?

Executive orders could support one-off pTA projects and require agencies to conduct pTA. However, establishing a pTA unit within an FFRDC like STPI would provide additional benefits, leading to a more robust pTA capacity.


FFRDCs are a special class of research institutions owned by the federal government but operated by contractors, including universities, nonprofits, and industrial firms. The primary purpose of FFRDCs is to pursue research and development that cannot be effectively provided by the government or other sectors operating on their own. FFRDCs also enable the government to recruit and retain diverse experts without government hiring and pay constraints, providing the government with a specialized, agile workforce to respond to agency needs and societal challenges.
Creating a pTA unit in an FFRDC would provide an institutional home for general pTA know-how and capacity: a resource that all agencies could tap into. The pTA unit would be staffed by a small but highly trained team that is well-versed in the knowledge and practice of pTA. The pTA unit would not preclude individual agencies from undertaking pTA on their own, but would serve as a “help center” that helps agencies figure out where to start and how to overcome roadblocks. pTA unit staff could also offer workshops and other opportunities to train personnel in other agencies on ways to incorporate the public perspective into their activities.


Other potential homes for a dedicated federal pTA unit include the Government Accountability Office (GAO) or the National Academies of Sciences, Engineering, and Medicine. However, GAO’s association with Congress would weaken the unit’s connections to executive agencies. The National Academies historically conduct assessments driven purely by expert consensus, which may compromise the ability of National Academies-hosted pTA to include and emphasize broader public values.

How will the government evaluate the performance and outcomes of pTA efforts?

Evaluating a pTA effort means answering four questions:


First, did the pTA effort engage a diverse public not otherwise engaged in S&T policy formulation? pTA practitioners generally do not seek statistically representative samples of participants (unlike, for instance, practitioners of mass opinion polling). Instead, pTA practitioners focus on including a diverse group of participants, with particular attention paid to groups who are generally not engaged in S&T policy formulation.


Second, was the pTA process informed and deliberative? This question is generally answered through strategies borrowed from the informal science-learning community, such as “pre- and post-” surveys of self-reported learning. Qualitative analysis of participant responses and discussions can evaluate if and how background information was used in pTA exercises. Involving decision makers and stakeholders in the evaluation process—for example, through sharing initial evaluation results—helps build the credibility of participant responses, particularly when decision makers or agencies are skeptical of the ability of lay citizens to provide informed opinions.
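
As a concrete (and purely hypothetical) illustration of how “pre- and post-” survey responses might be summarized, the short sketch below computes a mean self-reported learning gain; the rating scale and scores are invented for illustration and are not drawn from any actual pTA evaluation.

    # Hypothetical 5-point self-reported knowledge ratings, paired by
    # participant, collected before and after a pTA forum.
    pre_scores = [2, 3, 1, 2, 4, 2, 3, 1]
    post_scores = [4, 4, 3, 3, 5, 4, 4, 2]

    # Per-participant learning gains and their mean.
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    mean_gain = sum(gains) / len(gains)

    # Share of participants reporting any improvement.
    improved = sum(1 for g in gains if g > 0) / len(gains)

    print(f"Mean self-reported gain: {mean_gain:.2f} points")
    print(f"Participants reporting improvement: {improved:.0%}")

In practice, evaluators would pair such descriptive statistics with the qualitative analysis of responses and discussions described above.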


Third, did pTA generate useful and actionable outputs for the agency and, if applicable, stakeholders? pTA practitioners use qualitative tools for assessing public opinions and values alongside quantitative tools, such as surveys. A combination of qualitative and quantitative analysis helps to evaluate not just what public participants prefer regarding a given issue but why they hold those preferences and how they justify them. To ensure such information is useful to agencies and decision makers, pTA practitioners involve decision makers at various points in the analysis process (for example, to probe participant responses regarding a particular concern). Interviews with decision makers and other stakeholders can also assess the utility of pTA results.


Fourth, what impact did pTA have on participants, decisions and decision-making processes, decision makers, and organizational culture? This question can be answered through interviews with decision makers and stakeholders, surveys of pTA participants, and impact assessments.

How will the government evaluate the performance and outcomes of a dedicated pTA unit? How has pTA been evaluated previously?

Evaluation of a pTA unit within an existing FFRDC would likely involve questions similar to those above, focused on the impact of the unit on decisions, decision-making processes, and the culture and attitudes of agency staff who worked with the pTA unit. An external evaluator, such as the Government Accountability Office or the National Academies of Sciences, Engineering, and Medicine, could be tasked with carrying out such an evaluation.

How publicly accessible should the work of a pTA unit be? Should pTA results and processes be made public?

pTA results and processes should typically be made public, provided that doing so poses minimal risk to pTA participants (in line with federal regulations protecting research participants). Publishing results and processes ensures that stakeholders, other members of government (e.g., Congress), and broader audiences can view and interpret the public values explored during a pTA effort. Further, making results and processes publicly available serves as a form of accountability, helping to ensure that pTA efforts are high quality.

Unpacking Hiring: Toward a Regional Federal Talent Strategy

Government, like all institutions, runs on people. We need more people with the right skills and expertise for the many critical roles that public agencies are hiring for today. Yet hiring talent in the federal government is a longstanding challenge. The next Administration should unpack hiring strategy from headquarters and launch a series of large-scale, cross-agency recruitment and hiring surges throughout the country, reflecting the reality that 85% of federal employees are outside the Beltway. With a collaborative, cross-agency lens and a commitment to engaging jobseekers where they live, the government can enhance its ability to attract talent while underscoring to Americans that the federal government is not a distant authority but rather a stakeholder in their communities that offers credible opportunities to serve.

Challenge and Opportunity

The Federal Government’s hiring needs—already severe across many mission-critical occupations—are likely to remain acute as federal retirements continue, the labor market remains tight, and mission needs continue to grow. Unfortunately, federal hiring is misaligned with how most people approach job seeking. Most Americans search for employment in a geographically bounded way, a trend that has accelerated following the labor market disruptions of the COVID-19 pandemic. In contrast, federal agencies tend to engage jobseekers one agency at a time while recruiting across a wide variety of professions.

The result is that the federal government tends to hire agency by agency while casting a wide geographic net. This limits its ability to build deep, direct relationships with talent providers and duplicates searches for similar roles across agencies. Instead, the next Administration should align with jobseekers’ expectations by recruiting across agencies within each geography.

By embracing a new approach, the government can begin to develop a more coordinated cross-agency employer profile within regions with significant federal presence, while still leveraging its scale by aggregating hiring needs across agencies. This approach would build upon the important hiring reforms advanced under the Biden-Harris Administration, including cross-agency pooled hiring, renewed attention to the hiring experience for jobseekers, and new investments to unlock the federal government’s regional presence through elevation of the Federal Executive Board (FEB) program. FEBs are cross-agency councils of senior appointees and civil servants in regions of significant federal presence across the country. They are empowered to identify areas for cross-agency cooperation and are singularly positioned to pool talent needs and represent the federal government in communities across the country.

Plan of Action

The next Administration should embrace a cross-agency, regionally focused recruitment strategy and bring federal career opportunities closer to Americans through a series of 2-3 large-scale, cross-agency recruitment and hiring pilots in geographies outside of Washington, DC. To be effective, this effort will need both sponsorship from senior leaders at the center of government and ownership from frontline leaders who can build relationships on the ground.

Recommendation 1. Provide Strategic Direction from the Center of Government 

The Office of Personnel Management (OPM) and the Office of Management and Budget (OMB) should launch a small team, composed of leaders in recruitment, personnel policy, and workforce data, to identify promising localities for coordinated regional hiring surges. They should leverage centralized workforce data or data from Human Capital Operating Plan workforce plans to identify prospective hiring needs for government-wide and agency-specific mission-critical occupations (MCOs) in each FEB region, while ensuring that agency and sub-agency workforce plans consistently specify where hiring will occur in the future. They might also consider seasonal or cyclical cross-agency hiring needs for inclusion in the pilot to facilitate year-to-year experimentation and analysis. With this information, they should engage the FEB Center of Operations and jointly select 2-3 FEB regions outside of the capital where there are significant overlapping needs in MCOs.
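
To make the region-selection step concrete, here is a minimal sketch of the kind of aggregation such a team might run. The record format, agency names, and figures are hypothetical stand-ins, not an actual OPM dataset; the point is simply that regions rank highly when multiple agencies project hires in the same MCO.

    from collections import defaultdict

    # Hypothetical workforce-plan records: (FEB region, agency, MCO, projected hires).
    plans = [
        ("Columbus", "VA", "HR Specialist", 40),
        ("Columbus", "DOD", "HR Specialist", 35),
        ("Columbus", "USDA", "Data Scientist", 45),
        ("Denver", "DOI", "Engineer", 60),
        ("Denver", "DOE", "Engineer", 50),
        ("Kansas City", "USDA", "Data Scientist", 30),
    ]

    # Sum projected hires per (region, MCO) and track distinct agencies,
    # since cross-agency overlap is what makes a region a promising pilot site.
    totals = defaultdict(int)
    agencies = defaultdict(set)
    for region, agency, mco, hires in plans:
        totals[(region, mco)] += hires
        agencies[(region, mco)].add(agency)

    # Rank (region, MCO) pairs where two or more agencies share a need.
    shared = [(key, total) for key, total in totals.items() if len(agencies[key]) >= 2]
    for (region, mco), total in sorted(shared, key=lambda item: -item[1]):
        n = len(agencies[(region, mco)])
        print(f"{region}: {total} projected {mco} hires across {n} agencies")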

As this pilot moves forward, it is imperative that OMB and OPM empower on-the-ground federal leaders to drive surge hiring and equip them with flexible hiring authorities where needed. 

Recommendation 2. Empower Frontline Leadership from the FEBs

FEB field staff are well positioned to play a coordinating role to help drive surges, starting by convening agency leadership in their regions to validate hiring needs and make amendments as necessary. Together, they should set a reasonable, measurable goal for surge hiring in the coming year that reflects both total need and headline MCOs (e.g., “in the next 12 months, federal agencies in greater Columbus will hire 750 new employees, including 75 HR Specialists, 45 Data Scientists, and 110 Engineers”). 

To begin to develop a regional talent strategy, the FEB should form a small task force drawn from standout hiring managers and HR professionals, and then begin to develop a stakeholder map of key educational institutions and civic partners with access to talent pools in the region, sharing existing relationships and building new ones. The FEB should bring these external partners together to socialize shared needs and listen to their impressions of federal career opportunities in the region.

With these insights, the project team should publicly announce the number and types of roles needed and prepare sharp public-facing collateral that foregrounds headline MCOs and raises the profile of local federal agencies. In support, OPM should launch regional USAJOBS skins (e.g., “Columbus.USAJOBS.gov”) to make it easy to explore available positions. The team should conduct sustained, targeted outreach to local educational institutions aligned with hiring needs, so all federal agencies are on graduates’ and administrators’ radar.
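
A regional skin of this kind could draw on the USAJOBS public search API. The sketch below shows roughly how open postings for a metro area might be pulled; the credentials are placeholders, and the parameter and field names should be verified against the current USAJOBS developer documentation before use.

    import requests  # third-party HTTP library: pip install requests

    # Placeholder credentials: real keys come from the USAJOBS developer program.
    HEADERS = {
        "Host": "data.usajobs.gov",
        "User-Agent": "your.email@example.com",  # placeholder contact email
        "Authorization-Key": "YOUR-API-KEY",     # placeholder API key
    }

    def open_positions(location, results_per_page=25):
        """Fetch titles of positions currently posted near a location."""
        response = requests.get(
            "https://data.usajobs.gov/api/search",
            headers=HEADERS,
            params={"LocationName": location, "ResultsPerPage": results_per_page},
            timeout=30,
        )
        response.raise_for_status()
        items = response.json()["SearchResult"]["SearchResultItems"]
        return [item["MatchedObjectDescriptor"]["PositionTitle"] for item in items]

    # The listings a hypothetical "Columbus.USAJOBS.gov" skin might surface:
    for title in open_positions("Columbus, Ohio"):
        print(title)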

These activities should build toward one or more signature large, in-person, cross-agency recruitment and hiring fairs, perhaps headlined by a high-profile Administration leader. Candidates should be able to come to an event, learn what it means to hold a job in their discipline in federal service, and apply live for roles at multiple agencies, all while exploring what else the federal government has to offer and building tangible relationships with federal recruiters. Ahead of the event, the project team should work with agencies to align their hiring cycles so the maximum number of jobs are open at the time of the event, potentially launching a pooled hiring action to coincide. The project team should capture all interested jobseekers from the event to seed the new Talent Campaigns function in USAStaffing, which enables agencies to bucket tranches of qualified jobseekers for future sourcing.

Recommendation 3. Replicate and Celebrate

Following each regional surge, the center of government and frontline teams should collaborate to distill key learnings and conclude the sprint engagement by developing a playbook for regional recruitment surges. Especially successful surges will also present an opportunity to spotlight excellence in recruitment and hiring, which is rarely celebrated. 

The center of government team should also identify geographies with effective relationships between agencies and talent providers for key roles, and leverage the growing use of remote work and location-negotiable positions to site certain roles in “friendly” labor markets.

Conclusion

Regional, cross-agency hiring surges are an opportunity for federal agencies to fill high-need roles across the country in a manner that is proactive and collaborative, rather than reactive and competitive. They would facilitate a new level of information sharing between the frontline and the center of government and inform agency strategic planning efforts, allowing headquarters to better understand the realities of recruitment and hiring on the ground. They would enable OPM and OMB to reach, engage, and empower frontline HR specialists and hiring managers, who are so numerous and fragmented that they are difficult to engage in the present course of business.

Finally, engaging regionally will emphasize that most of the federal workforce resides outside of Washington, D.C., and build understanding and respect for the work of federal public servants in communities across the nation.


An Agenda for Ensuring Child Safety in the AI Era

The next administration should continue to make responsible policy on artificial intelligence (AI) and children, especially in K-12 education, a top priority and should create an AI and Kids Initiative led by the administration. AI is transforming how children learn and live, and policymakers, industry, and educators owe it to the next generation to set in place responsible policy that embraces this new technology while ensuring that all children’s well-being, privacy, and safety are respected. The federal government should develop clear prohibitions, enforce them, and serve as a national clearinghouse for AI K-12 educational policy. It should also support comprehensive digital literacy related to AI.

Specifically, we think these policy elements need to be front of mind for decision-makers: build a coordinated framework for AI Safety; champion legislation to support youth privacy and online safety in AI; and ensure every child can benefit from the promise of AI. 

In terms of building a coordinated framework for AI safety, the next administration should: ensure parity with existing child data protections; develop safety guidance for developers, including specific prohibitions to limit harmful designs and inappropriate uses; and direct the National Institute of Standards and Technology (NIST) to serve as the lead organizer for federal efforts on AI safety for children. When championing legislation to support youth privacy and online safety in AI, the next administration should support the passage of online safety laws that address harmful design features, which can lead to medically recognized mental health disorders and patterns of use indicating addiction-like behavior. It should also modernize federal children’s privacy laws, including updating the Family Educational Rights and Privacy Act (FERPA) and passing youth privacy laws that explicitly address AI data use issues, including prohibiting the development of commercial models from students’ educational information, with strong enforcement mechanisms. And to ensure every child can benefit from the promise of AI, the next administration should support comprehensive digital literacy efforts and prevent a deepening of the digital divide.

Importantly, policy and frameworks need to have teeth and need to take the burden off of individual states, school districts, or actors to assess AI tools for children. Enforcement should be tailored to specific laws but should include, as appropriate, private rights of action, well-funded federal enforcers, and state and local enforcement. Companies should feel incentivized to act. The framework cannot be voluntary, allowing companies to pick and choose which recommendations to follow. We’ve seen what happens when we do not put guardrails in place for tech, including increased risk of child addiction, depression, and self-harm, and it should not happen again. We cannot say that this is merely a nascent technology and that we can delay the development of protections. We already know AI will critically impact our lives. We’ve watched tech critically impact lives, and AI-enabled tech is both faster-moving and potentially more extreme.

Challenge and Opportunity

AI is already embedded in children’s lives and education. According to Common Sense Media research, seven in ten teens have used generative AI, and the most common use is for help with homework. The research also found that most parents are in the dark about their child’s generative AI use: only a third of parents whose children reported using generative AI were aware of such use. Beyond generative AI, machine learning systems are embedded in just about every application kids use at school and at home. Further, most teens and parents say schools either have no AI policy or have not communicated one.

Educational uses of AI are recognized as higher risk under the EU Artificial Intelligence Act and other international frameworks. The EU recognized that risk management requires special consideration when an AI system is likely to be accessed by children. The U.S. has developed a risk management framework, but it has not yet articulated risk levels or developed a specific educational or youth profile using NIST’s Risk Management Framework. There remains a deep need to ensure that AI systems likely to be accessed by children, including in schools, are assessed in terms of risk management and impact on youth.

It is well established that children and teenagers are vulnerable to manipulation by technology. Youth report struggling to set boundaries with technology, and according to a U.S. Surgeon General report, almost a third of teens say they are on social media almost constantly. Almost half of youth say social media has reduced their attention span and takes time away from other activities they care about. They are unequipped to assess sophisticated and targeted advertising: most children cannot distinguish ads from content until they are at least eight years old, and most do not realize ads can be customized. Additionally, social media design features lead not only to addiction but also to other mental and physical harms for teens: from unattainable beauty filters to friend comparison to recommendation systems that promote harmful content, such as the algorithmic promotion of viral “challenges” that can lead to death. AI technology is particularly concerning given its novelty, the speed and autonomy at which it can operate, and the frequent opacity, even to developers of AI systems, about how inputs and outputs may be used or exposed.

Particularly problematic uses of AI in products used in education and/or by children so far include products that use emotion detection, biometric data, facial recognition (built from scraping online images that include children), companion AI, automated education decisions, and social scoring. This list will continue to grow as AI is further adopted.

There are numerous useful frameworks and toolkits from expert organizations like EdSafe and TeachAI, and from government organizations like NIST, the National Telecommunications and Information Administration (NTIA), and the Department of Education (ED). However, we need the next administration to (1) encourage Congress to pass clear rules regarding AI products used with children, (2) have NIST develop risk management frameworks specifically addressing use of AI in education and by children more broadly, and serve a clearinghouse function so individual actors and states do not bear that responsibility, and (3) ensure frameworks are required and prohibitions are enforced. The need for action is also reflected in the lack of updated federal privacy and safety laws that protect children and teens.

Plan of Action

The federal government should take note of the innovative policy ideas bubbling up at the state level. For example, there are laws and proposals in Colorado, California, and Texas, and detailed guidance in over 20 states, including Ohio, Alabama, and Oregon.

Policymakers should take a multi-pronged approach to address AI for children and learning, recognizing that these uses are higher risk and therefore warrant additional layers of protection:

Recommendation 1. Build a coordinated framework through an AI Safety and Kids Initiative at NIST

As the federal government further details the risks associated with uses of AI, common uses of AI by kids should be designated or managed as high risk. This is a foundational step toward creating guardrails and ensuring protections for children as they use AI systems. The administration should clearly categorize education and use by children within a risk-level framework. The EU, for example, addresses AI risk through the EU AI Act, which assigns uses to tiered risk levels. If the U.S. risk framework includes education and AI systems that are likely to be accessed by children, it provides a strong signal to policymakers at the state and federal levels that these are uses requiring protections (audits, transparency, or enforcement) to prevent or address potential harm.
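
To illustrate what a tiered, child-aware designation could look like in practice, here is a minimal sketch in the spirit of the EU AI Act’s risk levels. The tier names, use cases, and assignments are illustrative assumptions only; an actual taxonomy would be set by NIST and its partner agencies.

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited"
        HIGH = "high risk: audits, transparency, and enforcement required"
        LIMITED = "limited risk: disclosure obligations"
        MINIMAL = "minimal risk"

    # Illustrative assignments only, not an existing NIST or EU taxonomy.
    USE_CASE_TIERS = {
        "social scoring of students": RiskTier.UNACCEPTABLE,
        "emotion detection in classrooms": RiskTier.UNACCEPTABLE,
        "automated education decisions": RiskTier.HIGH,
        "companion AI for minors": RiskTier.HIGH,
        "homework help chatbot": RiskTier.LIMITED,
        "spell checker": RiskTier.MINIMAL,
    }

    def required_tier(use_case, accessible_to_children):
        """Look up a use case's tier; unknown child-accessible uses default to HIGH."""
        default = RiskTier.HIGH if accessible_to_children else RiskTier.MINIMAL
        return USE_CASE_TIERS.get(use_case, default)

    print(required_tier("automated education decisions", True).value)

Defaulting unknown child-accessible uses to the high-risk tier mirrors the memo’s premise that children’s uses warrant additional layers of protection by default.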

NIST, in partnership with others, should develop risk management profiles for platform developers building AI products for use in education and for products likely to be accessed by children. Emphasis should be on safety and efficacy before technology products come to market, with audits throughout development. NIST should:

Work in partnership with NTIA, FTC, CPSC, and HHS to refine risk levels and risk management profiles for AI systems likely to be accessed by children.

The administration should also task NIST’s AI Safety Institute with providing clarity on how safety should be considered for the use of AI in education and for AI systems likely to be accessed by children.

Recommendation 2. Ensure every child benefits from the promise of AI innovations 

The administration should support comprehensive digital literacy and prevent a deepening of the digital divide. 

Recommendation 3. Encourage Congress to pass clear, enforceable rules regarding privacy and safety for AI products used by children

Champion Congressional updates to privacy laws like COPPA and FERPA to address the use (especially for training) and sharing of personal information (PI) by AI tools. These laws can work in tandem; see, for example, recently proposed COPPA updates that would address children’s use of technology in educational settings.

Push for Congress to pass AI-specific legislation addressing the development and deployment of AI systems for use by children.

Support Congressional passage of online safety laws that address harmful design features in technology–specifically addressing design features that can lead to medically recognized mental health disorders like anxiety, depression, eating disorders, substance use, and suicide, and patterns of use indicating addiction-like behavior, as in Title I of the Senate-passed Kids Online Safety and Privacy Act.

Moving Forward

One ultimate recommendation is that, critically, standards and requirements need teeth. Frameworks should require that companies comply with legal requirements or face effective enforcement (such as by a well-funded expert regulator, or private lawsuits), with tools such as fines and injunctions. We have seen with past technological developments that voluntary frameworks and suggestions will not adequately protect children. Social media, for example, has failed to voluntarily protect children and poses risks to their mental health and well-being. From exacerbating body image issues to amplifying peer pressure and social comparison, from encouraging compulsive device use to reducing attention spans, from connecting youth to extremism, illegal products, and deadly challenges, the financial incentives do not appear to exist for technology companies to appropriately safeguard children on their own. The next Administration can support enforcement by funding the government positions responsible for enforcing such laws.