An Early Warning System for AI-Powered Threats to National Security and Public Safety

In just a few years, state-of-the-art artificial intelligence (AI) models have gone from not reliably counting to 10 to writing software, generating photorealistic videos on demand, combining language and image processing to guide robots, and even advising heads of state in wartime. If responsibly developed and deployed, AI systems could benefit society enormously. However, emerging AI capabilities could also pose severe threats to public safety and national security. AI companies are already evaluating their most advanced models to identify dual-use capabilities, such as the capacity to conduct offensive cyber operations, enable the development of biological or chemical weapons, and autonomously replicate and spread. These capabilities can arise unpredictably and undetected during development and after deployment. 

To better manage these risks, Congress should set up an early warning system for novel AI-enabled threats to provide defenders maximal time to respond to a given capability before information about it is disclosed or leaked to the public. This system should also be used to share information about defensive AI capabilities. To develop this system, we recommend:

Challenge and Opportunity

In just the past few years, advanced AI has surpassed human capabilities across a range of tasks. Rapid progress in AI systems will likely continue for several years, as leading model developers like OpenAI and Google DeepMind plan to spend tens of billions of dollars to train more powerful models. As models gain more sophisticated capabilities, some of these could be dual-use, meaning they will “pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters”—but in some cases may also be applied to defend against serious risks in those domains. 

New AI capabilities can emerge unexpectedly. AI companies are already evaluating models to check for dual-use capabilities, such as the capacity to enhance cyber operations, enable the development of biological or chemical weapons, and autonomously replicate and spread. These capabilities could be weaponized by malicious actors to threaten national security or could lead to brittle, uncontrollable systems that cause severe accidents. Despite the use of evaluations, it is not clear what should happen when a dual-use capability is discovered. 

An early-warning system would allow the relevant actors to access evaluation results and other details of dual-use capability reports to strengthen responses to novel AI-powered threats. Various actors could take concrete actions to respond to risks posed by dual-use AI capabilities, but they need lead time to coordinate and develop countermeasures. For example, model developers could mitigate immediate risks by restricting access to models. Governments could work with private-sector actors to use new capabilities defensively or employ enhanced, targeted export controls to prevent foreign adversaries from accessing strategically relevant capabilities.

A warning system should ensure secure information flow between three types of actors:

  1. Finders: the parties that can initially identify dual-use capabilities in models. These include AI company staff, government evaluators such as the U.S. AI Safety Institute (USAISI), contracted evaluators and red-teamers, and independent security researchers.
  2. Coordinators: the parties that provide the infrastructure for collecting, triaging, and directing dual-use AI capability reports.
  3. Defenders: the parties that could take concrete actions to mitigate threats from dual-use capabilities or leverage them for defensive purposes, such as advanced AI companies and various government agencies.

While this system should cover a variety of finders, defenders, and capability domains, one example of early warning and response in practice might look like the following: 

The current environment already includes some elements of a functional early-warning system, such as the reporting requirements for AI developers described in Executive Order 14110 and existing interagency mechanisms for information-sharing and coordination like the National Security Council and the Vulnerabilities Equities Process.

However, gaps exist across the current system:

  1. There is a lack of clear intake channels and standards for capability reporting to the government outside of mandatory reporting under EO 14110. Also, the parts of the Executive Order that mandate reporting may be overturned in the next administration, or this specific use of the Defense Production Act (DPA) could be struck down in the courts. 
  2. Various legal and operational barriers mean that premature public disclosure, or no disclosure at all, is likely to happen. This might look like an independent researcher publishing details about a dangerous offensive cyber capability online, or an AI company failing to alert appropriate authorities due to concerns about trade secret leakage or regulatory liability. 
  3. The Bureau of Industry and Security (BIS) intakes mandatory dual-use capability reports, but it is not tasked to act as a coordinator and is not adequately resourced for that role, and information-sharing from BIS to other parts of government is limited. 
  4. There is also a lack of clear, proactive ownership of response around specific types of AI-powered threats. Unless these issues are resolved, AI-powered threats to national security and public safety are likely to arise unexpectedly without giving defenders enough lead time to prepare countermeasures. 

Plan of Action

Improving the U.S. government’s ability to rapidly respond to threats from novel dual-use AI capabilities requires actions from across government, industry, and civil society. The early warning system detailed below draws inspiration from “coordinated vulnerability disclosure” (CVD) and other information-sharing arrangements used in cybersecurity, as well as the federated Sector Risk Management Agency (SRMA) approach used to organize protections around critical infrastructure. The following recommended actions are designed to address the issues with the current disclosure system raised in the previous section.

First, Congress should assign and fund an agency office within BIS to act as a coordinator: an information clearinghouse for receiving, triaging, and distributing reports on dual-use AI capabilities. In parallel, Congress should require developers of advanced models to report dual-use capability evaluation results and other safety-critical information to BIS (more detail can be found in the FAQ). This creates a clear structure for finders looking to report to the government and provides capacity to triage reports and determine what information should be sent to which working groups.

This coordinating office should establish operational and legal clarity to encourage voluntary reporting and facilitate mandatory reporting. This should include the following:

BIS is suited to house this function because it already receives reports on dual-use capabilities from companies via DPA authority under EO 14110. Additionally, it has in-house expertise on AI and hardware from administering export controls on critical emerging technology, and it has relationships with key industry stakeholders, such as compute providers. (There are other candidates that could house this function as well. See the FAQ.)

To fulfill its role as a coordinator, this office would need an initial annual budget of $8 million to handle triaging and compliance work for an annual volume of between 100 and 1,000 dual-use capability reports.2 We provide a budget estimate below:

Budget item | Cost (USD)
Staff (15 FTE) | $400,000 x 15 = $6 million
Technology and infrastructure (e.g., setting up initial reporting and information-sharing systems) | $1.5 million
Communications and outreach (e.g., organizing convenings of working group lead agencies) | $300,000
Training and workforce development | $200,000
Total | $8 million

The office should leverage the direct hire authority outlined by the Office of Personnel Management (OPM) and associated flexible pay and benefits arrangements to attract staff with appropriate AI expertise. We expect most of the initial reports would come from the 5 to 10 companies developing the most advanced models. Later, if there is more evidence that near-term systems have capabilities with national security implications, this office could be scaled up adaptively to allow for more fine-grained monitoring (see FAQ for more detail).

Second, Congress should task specific agencies to lead working groups of government agencies, private companies, and civil society to take coordinated action to mitigate risks from novel threats. These working groups would be responsible for responding to threats arising from reported dual-use AI capabilities. They would also work to verify and validate potential threats from reported dual-use capabilities and develop incident response plans. Each working group would be risk-specific and correspond to different risk areas associated with dual-use AI capabilities:

This working group structure enables interagency and public-private coordination in the style of SRMAs and Government Coordinating Councils (GCCs) used for critical infrastructure protection. This approach distributes responsibilities for AI-powered threats across federal agencies, allowing each lead agency to be appointed based on the expertise it can leverage to deal with specific risk areas. For example, the Department of Energy (specifically the National Nuclear Security Administration) would be an appropriate lead at the intersection of AI and nuclear weapons development. In cases of very severe and pressing risks, such as threats of hundreds or thousands of fatalities, the responsibility for coordinating an interagency response should be escalated to the President and the National Security Council system.

Conclusion

Dual-use AI capabilities can amplify threats to national security and public safety but can also be harnessed to safeguard American lives and infrastructure. An early-warning system should be established to ensure that the U.S. government, along with its industry and civil society partners, has maximal time to prepare for AI-powered threats before they occur. Congress, working together with the executive branch, can lay the foundation for a secure future by establishing a government coordinating office to manage the sharing of safety-critical information across the ecosystem and tasking various agencies to lead working groups of defenders focused on specific AI-powered threats.

The longer research report this memo is based on can be accessed here.

This idea is part of our AI Legislation Policy Sprint. To see all of the policy ideas spanning innovation, education, healthcare, and trust, safety, and privacy, head to our sprint landing page.

Frequently Asked Questions
How does this proposal fit into the existing landscape of AI governance?
This plan builds on earlier developments in the area of AI safety testing and evaluations. First, the early-warning system would concretely connect dual-use capability evaluations with coordinated risk mitigation efforts. USAISI is set to partner with its United Kingdom equivalent to advance measurement science for AI safety and conduct safety evaluations. The President's FY2025 Budget requests additional funds for USAISI and DOE to develop testbeds for AI security evaluations. EO 14110 mandates reporting from companies to the government on safety test results and other safety-relevant information concerning dual-use foundation models. The early-warning system uses this foundational risk assessment work and improved visibility into model safety to concretely reduce risk.
Will this proposal stifle innovation and overly burden companies?

This plan recommends that companies developing and deploying dual-use foundation models be mandated to report safety-critical information to specific government offices. However, we expect these requirements to apply to only a few large tech companies that would be working with models that fulfill specific technical conditions. The vast majority of businesses and models would not be subject to mandatory reporting requirements, though they are free to report relevant information voluntarily.


The few companies that are required to report should have the resources to comply. An important consideration behind our plan is to reduce, where possible and reasonable, the legal and operational friction around reporting safety-critical information. This can be seen in our recommendation that relevant parties from industry and civil society work together to develop reporting standards for dual-use capabilities. Also, we suggest that the coordinating office should establish operational and legal clarity to encourage voluntary reporting and facilitate mandatory reporting, which is done with industry and other finder concerns in mind.


This plan does not place restrictions on how companies conduct their activities. Instead, it aims to ensure that all parties that have equities and expertise in AI development have the information needed to work together to respond to serious safety and security concerns. Instead of expecting companies to shoulder the responsibility of responding to novel dangers, the early-warning system distributes this responsibility to a broader set of capable actors.

What if EO 14110’s reporting requirements are struck down, and there is no equivalent statutory reporting requirement from legislation?
If broader mandatory reporting requirements are not enshrined in law, there are alternative mechanisms to consider. First, companies may still make voluntary disclosures to the government, as some of the most prominent AI companies agreed to do under the White House Voluntary Commitments from September 2023. There is an opportunity to create more structured reporting agreements between finders and the government coordinator by using contractual mechanisms in the form of Information Sharing and Access Agreements, which can govern the use of dual-use capability information by federal agencies, including (for example) maintaining security and confidentiality, exempting use in antitrust actions, and implementing safeguards against unauthorized disclosure to third parties. These have been used most often by DHS to structure information sharing with non-government parties and between agencies.
What other federal agencies could house the coordinator role? How do they compare to BIS?

Bureau of Industry and Security (BIS), Department of Commerce

  • Already intakes reports on dual-use capabilities via DPA authority under EO 14110
  • USAISI will have significant AI safety-related expertise and also sits under Commerce
  • Internal expertise on AI and hardware from administering export controls

US AI Safety Institute (USAISI), Department of Commerce

  • USAISI will have significant AI safety-related expertise
  • Part of NIST, which is not a regulator, so there may be fewer concerns on the part of companies when reporting
  • Experience coordinating relevant civil society and industry groups as head of the AI Safety Consortium

Cybersecurity and Infrastructure Security Agency (CISA), Department of Homeland Security

  • Experience managing an info-sharing regime for cyber threats that involves most relevant government agencies, including SRMAs for critical infrastructure
  • Experience coordinating with the private sector
  • Located within DHS, whose responsibilities cover counterterrorism, cyber and infrastructure protection, domestic chemical, biological, radiological, and nuclear protection, and disaster preparedness and response, a portfolio well suited to handling information related to dual-use capabilities
  • The option of a Federal Advisory Committee Act exemption for DHS federal advisory committees would mean working group meetings can be nonpublic and do not require representation from all industry stakeholders

Office of Critical and Emerging Technologies, Department of Energy (DOE)

  • Access to DOE expertise and tools on AI, including evaluations and other safety- and security-relevant work (e.g., classified testbeds in DOE National Labs)
  • Links to relevant defenders within DOE, such as the National Nuclear Security Administration
  • Partnerships with industry and academia on AI
  • This office is much smaller than the alternatives, so adding this function would require careful planning and management

Is it too early to worry about serious risks from AI models?

Based on dual-use capability evaluations conducted on today’s most advanced models, there is no immediate concern that these models can meaningfully enhance the ability of malicious actors to threaten national security or cause severe accidents. However, as outlined in earlier sections of the memo, model capabilities have evolved rapidly in the past, and new capabilities have emerged unintentionally and unpredictably.


This memo recommends initially putting in place a lean and flexible system to support responses to potential AI-powered threats. This would serve a “fire alarm” function if dual-use capabilities emerge and would be better at reacting to larger, more discontinuous jumps in dual-use capabilities. This also lays the foundation for reporting standards, relationships between key actors, and expertise needed in the future. Once there is more concrete evidence that models have major national security implications, Congress and the president can scale up this system as needed and allocate additional resources to the coordinating office and also to lead agencies. If we expect a large volume of safety-critical reports to pass through the coordinating office and a larger set of defensive actions to be taken, then the “fire alarm” system can be shifted into something involving more fine-grained, continuous monitoring. More continuous and proactive monitoring would tighten the Observe, Orient, Decide, and Act (OODA) loop between working group agencies and model developers, by allowing agencies to track gradual improvements, including from post-training enhancements.

Why focus on capabilities? Would incident reporting be better since it focuses on concrete events? What about vulnerabilities and threat information?

While incident reporting is also valuable, an early-warning system focused on capabilities aims to provide a critical function not addressed by incident reporting: preventing or mitigating the most serious AI incidents before they even occur. Essentially, an ounce of prevention is worth a pound of cure.


Sharing information on vulnerabilities in AI systems and infrastructure, and threat information (e.g., information on threat actors and their tactics, techniques, and procedures), is also important, but distinct. We think there should be processes established for this as well, which could be based on Information Sharing and Analysis Centers, though it is possible that this could happen via existing infrastructure for sharing this type of information. Information sharing around dual-use capabilities, however, is unique to the AI context and requires special attention to build out the appropriate processes.

What role could the executive branch play?

While this memo focuses on the role of Congress, an executive branch that is interested in setting up or supporting an early warning system for AI-powered threats could consider the following actions.


Our second recommendation—tasking specific agencies to lead working groups to take coordinated action to mitigate risks from advanced AI systems—could be implemented by the president via Executive Order or a Presidential Directive.


Also, the National Institute of Standards and Technology could work with other organizations in industry and academia, such as advanced AI developers, the Frontier Model Forum, and security researchers in different risk domains, to standardize dual-use capability reports, making it easier to process reports coming from diverse types of finders. A common language around reporting would make it less likely that reported information is inconsistent across reports or is missing key decision-relevant elements; standardization may also reduce the burden of producing and processing reports. One example of standardization is narrowing down thresholds for sending reports to the government and taking mitigating actions. One product that could be generated from this multi-party process is an AI equivalent to the Stakeholder-Specific Vulnerability Categorization system used by CISA to prioritize decision-making on cyber vulnerabilities. A similar system could be used by the relevant parties to process reports coming from diverse types of finders and by defenders to prioritize responses and resources according to the nature and severity of the threat.
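For illustration, the sketch below shows what a stakeholder-specific prioritization function for dual-use capability reports might look like, loosely inspired by the SSVC decision-tree approach. All field names, severity categories, and response tiers are illustrative assumptions for this memo, not part of any existing standard.

```python
# Hypothetical prioritization sketch for dual-use AI capability reports,
# loosely inspired by CISA's Stakeholder-Specific Vulnerability
# Categorization (SSVC). All fields, categories, and tiers are
# illustrative assumptions, not an existing standard.
from dataclasses import dataclass

@dataclass
class CapabilityReport:
    domain: str               # e.g., "cyber", "bio", "autonomy"
    weaponizable_now: bool    # usable by malicious actors today
    estimated_severity: str   # "low" | "moderate" | "severe" | "catastrophic"
    mitigations_available: bool

def triage(report: CapabilityReport) -> str:
    """Map a report to a response tier for the coordinating office."""
    if report.estimated_severity == "catastrophic":
        return "escalate-to-nsc"         # President/NSC-level coordination
    if report.weaponizable_now and not report.mitigations_available:
        return "activate-working-group"  # convene the lead agency and defenders
    if report.estimated_severity in ("moderate", "severe"):
        return "track-and-verify"        # validate before wider sharing
    return "log-only"                    # retain for trend analysis

print(triage(CapabilityReport("cyber", True, "severe", False)))
# -> activate-working-group
```

The value of a shared scheme like this is less the specific thresholds than the common vocabulary: finders, the coordinator, and defenders would all describe and prioritize the same report the same way.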

Should all of this be done by the government? What about a more prominent role for industry and civil society, who are at the forefront of understanding advanced AI and its risks?

The government has a responsibility to protect national security and public safety, hence its central role in this scheme. Also, many specific agencies have relevant expertise and authorities on risk areas like biological weapons development and cybersecurity that are difficult to access outside of government.


However, it is true that the private sector and civil society have a large portion of the expertise on dual-use foundation models and their risks. The U.S. government is working to develop its in-house expertise, but this is likely to take time.


Ideally, relevant government agencies would play central roles as coordinators and defenders. However, our plan recognizes the important role that civil society and industry play in responding to emerging AI-powered threats as well. Industry and civil society can take a number of actions to move this plan forward:



  • An entity like the Frontier Model Forum can convene other organizations in industry and academia, such as advanced AI developers and security researchers in different risk domains, to standardize dual-use capability reports independent of NIST.

  • Dual-use foundation model (DUFM) developers should establish clear policies and intake procedures for independent researchers reporting dual-use capabilities.

  • DUFM developers should work to identify capabilities that could help working groups to develop countermeasures to AI threats, which can be shared via the aforementioned information-sharing infrastructure or other channels (e.g., pre-print publication).

  • In the event that a government coordinating office cannot be created, there could be an independent coordinator that fulfills a role as an information clearinghouse for dual-use AI capabilities reports. This could be housed in organizations with experience operating federally funded research and development centers like MITRE or Carnegie Mellon University’s Software Engineering Institute.

  • If it is responsible for sharing information between AI companies, this independent coordinator may need to be coupled with a safe harbor provision around antitrust litigation specifically pertaining to safety-related information. This safe harbor could be created via legislation, like a similar provision in the Cybersecurity Information Sharing Act of 2015, or via a no-action letter from the Federal Trade Commission.

What is included in the reporting requirements for companies developing advanced models with potential dual-use capabilities? What companies are subject to these requirements? What information needs to be shared?

We suggest that reporting requirements should apply to any model trained using computing power greater than 10^26 floating-point operations. These requirements would only apply to a few companies working with models that fulfill specific technical conditions. However, it will be important to establish an appropriate authority within law to dynamically update this threshold as needed. For example, revising the threshold downwards (e.g., to 10^25) may be needed if algorithmic improvements allow developers to train more capable models with less compute or if other developers devise new "scaffolding" that enables them to elicit dangerous behavior from already-released models. Alternatively, revising the threshold upwards (e.g., to 10^27) may be desirable due to societal adaptation or if it becomes clear that models at this threshold are not sufficiently dangerous. The following information should be included in dual-use AI capability reports, though the specific format and level of detail will need to be worked out in the standardization process outlined in the memo (a hypothetical schema sketch follows the list below):



  • Name and address of model developer

  • Model ID information (ideally standardized)

  • Indicator of sensitivity of information

  • A full accounting of the dual-use capabilities evaluations run on the model at the training and pre-deployment stages, their results, and details of the size and scope of safety-testing efforts, including parties involved

  • Details on current and planned mitigation measures, including up-to-date incident response plans

  • Information about compute used to train models that have triggered reporting (e.g., amount of compute and training time required, quantity and variety of chips used and networking of compute infrastructure, and the location and provider of the compute)
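
To make the standardization discussion concrete, here is a minimal sketch of how such a report might be structured and how the compute threshold could be checked programmatically. The field names are hypothetical, and the FLOP estimate uses the common "6 x parameters x training tokens" heuristic, which is an assumption of this sketch rather than anything specified in the memo.

```python
# Minimal sketch of a standardized dual-use capability report and a check
# against the proposed 10^26 FLOP reporting threshold. Field names are
# hypothetical; the FLOP estimate uses the common heuristic
# "training FLOPs ~ 6 x parameters x tokens", an assumption of this sketch.
from dataclasses import dataclass, field

REPORTING_THRESHOLD_FLOP = 1e26  # updatable under the proposed statutory authority

def estimated_training_flop(n_parameters: float, n_tokens: float) -> float:
    return 6.0 * n_parameters * n_tokens

@dataclass
class DualUseCapabilityReport:
    developer_name: str
    developer_address: str
    model_id: str                    # ideally standardized
    sensitivity: str                 # indicator of information sensitivity
    evaluations: list = field(default_factory=list)  # eval name, stage, results
    mitigations: list = field(default_factory=list)  # incl. incident response plans
    training_flop: float = 0.0       # compute used to train the model

# Example: a hypothetical 2-trillion-parameter model trained on 20 trillion tokens
flop = estimated_training_flop(2e12, 2e13)
print(f"{flop:.2e} FLOP -> reportable: {flop >= REPORTING_THRESHOLD_FLOP}")
# 2.40e+26 FLOP -> reportable: True
```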


Some elements would not need to be shared beyond the coordinating office or working group lead (e.g., personal identifying information about parties involved in safety testing or specific details about incident response plans) but would be useful for the coordinating office in triaging reports.


The following information should not be included in reports in the first place since it is commercially sensitive and could plausibly be targeted for theft by malicious actors seeking to develop competing AI systems:



  • Information on model architecture

  • Datasets used in training

  • Training techniques

  • Fine-tuning techniques

Shared Classified Commercial Coworking Spaces

The legislation would establish a pilot program for the Department of Defense (DoD) to create classified commercial shared spaces (think WeWork or hotels, but for cleared small businesses and universities), professionalize industrial security protections, and accelerate the integration of new artificial intelligence (AI) technologies into actual warfighting capabilities. While the impact of this pilot program would be felt across the National Security Innovation Base, this issue is particularly pertinent to the small business and start-up community, for whom access to secure facilities is a major impediment to performing on and competing for government contracts.

Challenge and Opportunity 

The process of obtaining and maintaining a facility clearance and the appropriate industrial security protections is a major burden on nontraditional defense contractors, and as a result they are often disadvantaged when it comes to performing on and competing for classified work. Over the past decade, small businesses, nontraditional defense contractors, and academic institutions have all successfully transitioned commercial solutions onto unclassified government contracts. However, the barriers to entry (cost, complexity, administrative burden, timeline) for classified contracts have prevented similar successes. There have been significant and deliberate policy revisions and strategic pivots by the U.S. government to ignite and accelerate commercial technologies and solutions for government use cases, but similar reforms have not reduced the significant burden these organizations face when trying to secure follow-on classified work.

For small, nontraditional defense companies and universities, creating their own classified facility is a multiyear endeavor, is often cost-prohibitive, and requires coordination among several government organizations. This makes the prospect of building their own classified infrastructure a high-risk investment with an unknown return, thus deterring many of these organizations from competing in the classified marketplace and preventing the most capable technology solutions from rapid integration into classified programs. Similarly, many government contracting officers, in an effort to satisfy urgent operational requirements, select only from vendors with existing access to classified infrastructure because they know how long it takes new entrants to get their own facilities accredited, further limiting the available vendor pool and restricting which commercial technologies are available to the government.

In January 2024, the Texas National Security Review published the results of a survey of over 800 companies from the defense industrial base as well as commercial businesses, ranging from small businesses to large corporations; 44 percent identified accessing classified environments as the greatest barrier to working with the government. This was amplified in March 2024 during a House Armed Services Committee hearing on "Outpacing China in Defense Innovation," where Under Secretary for Acquisition and Sustainment William LaPlante, Under Secretary for Research and Engineering Heidi Shyu, and Defense Innovation Unit Director Doug Beck all acknowledged the seriousness of this issue. 

The current government method of approving and accrediting commercial classified facilities is based on individual customers and contracts. This creates significant costs, time delays, and inefficiencies within the system. Reforming the system to allow for a “shared” commercial model will professionalize industrial security protections and accelerate the integration of new AI technologies into actual national security capabilities. While Congress has expressed support for this concept in both the Fiscal Year 2018 National Defense Authorization Act and the Fiscal Year 2022 Intelligence Authorization Act, there has been little measurable progress with implementation. 

Plan of Action 

Congress should pass legislation to create a pilot program under the Department of Defense (DoD) to expand access to shared commercial classified spaces and infrastructure. The DoD will incur no cost for the establishment of the pilot program, as there is a viable commercial market for this model. Legislative text has been provided and will be socialized with the committees of jurisdiction and relevant congressional members' offices for support.

Legislative Specifications

SEC XXX – ESTABLISHMENT OF PILOT PROGRAM FOR ACCESS TO SHARED CLASSIFIED COMMERCIAL INFRASTRUCTURE 

(a) ESTABLISHMENT. – Not later than 180 days after the date of enactment of this act, the Secretary of Defense shall establish a pilot program to streamline access to shared classified commercial infrastructure in order to:

(b) DESIGNATION. – The Secretary of Defense shall designate a principal civilian official responsible for overseeing the pilot program authorized in subsection (a)(1), who shall report directly to the Deputy Secretary of Defense.

(c) REQUIREMENTS. 

(d) DEFINITION. – In this section:

(e) ANNUAL REPORT. – Not later than 270 days after the date of the enactment of this Act and annually thereafter until 2028, the Secretary of Defense shall provide to the congressional defense committees a report on the establishment of the pilot program pursuant to this section, to include:

(f) TERMINATION. – The authority to carry out the pilot program under subsection (a) shall terminate on the date that is five years after the date of enactment of this Act.

Conclusion

Congress must ensure that the nonfinancial barriers that prevent novel commercially developed AI capabilities and emerging technologies from transitioning into DoD and government use are reduced. Access to classified facilities and infrastructure continues to be a major obstacle for small businesses, research institutions, and nontraditional defense contractors working with the government. This pilot program will ensure reforms are initiated that reduce these barriers, professionalize industrial security protections, and accelerate the integration of new AI technologies into actual national security capabilities.

A National Center for AI in Education

There are immense opportunities associated with artificial intelligence (AI), yet it is important to vet the tools, establish threat monitoring, and implement appropriate regulations to guide the integration of AI into an equitable education system. Generative AI in particular is already being used in education: in human resource talent acquisition, predictive systems, personalized learning systems to promote students' learning, automated assessment systems to support teachers in evaluating what students know, and facial recognition systems to provide insights about learners' behaviors, to name a few. Continuous research on AI's use by teachers and schools is crucial to ensure its positive integration into education systems worldwide and improved outcomes for all. 

Congress should establish a National Center for AI in Education to build the capacity of education agencies to undertake evidence-based continuous improvement in the use of AI in education. The Center will increase the body of rigorous research and proven solutions for AI use by teachers and students, and teachers will use its testing and research to develop guidance for AI in education.

Challenge and Opportunity

It should not fall to one single person, group, industry, or country to decide what role AI's deep learning should play in education, especially when that technology will play a major role in creating new learning environments and more equitable opportunities for students. 

Teachers need appropriate professional development on using AI not only so they can implement AI tools in their teaching but also so they can impart those skills and knowledge to their students. Survey research from the EdWeek Research Center affirms that teachers, principals, and district leaders recognize the importance of teaching AI. Most disturbing is the lack of support and guidance around AI that teachers are receiving: 87% of teachers reported receiving zero hours of professional development related to incorporating AI into their work. 

A National Center for AI in Education would transform the current model of how education technology is developed and monitored from a "supply creates the demand" system to a "demand creates the supply" system. Often, education technology resources are developed in isolation from the actual end users, meaning the teachers and students, and this exacerbates inequity. The Center will help to bridge the gap between tech innovators and the classroom, driving innovation and ensuring AI aligns with educational goals.

The collection and use of data in education settings has expanded dramatically in recent decades, thanks to advancements in student information systems, statistical software, and analytic methods, as well as policy frameworks that incentivize evidence generation and use in decision-making. However, this growing body of research all too frequently ignores the effective use of AI in education. The challenges, assets, and context of AI in education vary greatly within states and across the nation. As such, evidence that is generated in real time within school settings should begin to uncover the needs of education related to AI. 

Educators need research, regulation, and policies that are understood in the context of educational settings to effectively inform practice and policy. Students’ preparedness for and transition into college or the workforce is of particular concern, given spatial inequities in the distribution of workforce and higher-education opportunities and the dual imperatives of strengthening student outcomes while ensuring future community vitality. The teaching and use of AI all play into this endeavor.

An analog for this proposal is the National Center for Rural Education Research Networks (NCRERN), an Institute of Education Sciences research and development center that has demonstrated the potential of research networks for generating rigorous, causal evidence in rural settings through multi-site randomized controlled trials. NCRERN’s work leading over 60 rural districts through continuous improvement cycles to improve student postsecondary readiness and facilitate postsecondary transitions generated key insights about how to effectively conduct studies, generate evidence, influence district practice, and improve student outcomes. NCRERN research is used to inform best practices with teachers, counselors, and administrators in school districts, as well as inform and provide guidance for policymaking on state, local, and federal levels.

Another analog is Indiana’s AI-Powered Platform Pilot created by the Indiana Department of Education. The pilot launched during the 2023–2024 school year with 2,500 teachers from 112 schools in 36 school corporations across Indiana using approved AI platforms in their classrooms. More than 45,000 students are impacted by this pilot. A recent survey of teachers in the pilot indicated that 53% rated the overall impact of the AI platform on their students’ learning and their teaching practice as positive or very positive. 

In the pilot, a competitive grant opportunity funds the subscription fees and professional development support for high-dosage student tutoring and for reducing teacher workload using an AI platform. The vision for this opportunity is to focus on a cohort of teachers and students in the integration of an AI platform. It might be used to support a specific building, grade level, subject area, or student population. Schools are encouraged to focus on student needs in response to academic impact data.

Plan of Action

Congress should authorize the establishment of a National Center for AI in Education whose purpose is to research and develop guidance for Congress regarding policy and regulations for the use of AI in educational settings. 

Through a competitive grant process, a university should be chosen to house the Center. This Center should be established within three years of enactment by Congress. The winning institution will be selected and overseen by either the Institute of Education Sciences or another office within the Department of Education. The Department of Education and National Science Foundation will be jointly responsible for supporting professional development along with the Center awardee.

The Center should begin as a pilot with teachers selected from five participating states. These PK-12 teachers will be chosen via a selection process developed by the Center. Selected teachers will have expertise in AI technology and education as evidenced by effective classroom use and academic impact data. Additional criteria could include an innovation mindset, willingness to collaborate, knowledge of AI technologies, innovative teaching methods, commitment to professional development, and a passion for improving student learning outcomes. Stakeholders such as students, parents, and policymakers should be involved in the selection process to ensure diverse perspectives are considered. 

The National Center for AI in Education’s duties should include but not be limited to:

Congress should authorize funding for the National Center for AI in Education. Funding should be provided by the federal government to support its research and operations. Plans should be made for a 3–5-year pilot grant as well as a continuation/expansion grant after the first 3–5-year funding cycle. Additional funding may be obtained through grants, donations, and partnerships with private organizations.

Progress reporting will be used to monitor and evaluate the Center's pursuits. The National Center for AI in Education would submit an annual report to Congress detailing its research findings, its advisory and regulatory guidance, and its impact on education. There will also need to be a plan for the National Center for AI in Education to be subject to regular evaluation and oversight to ensure its compliance with legislation and regulations.

To begin this work, the National Center for AI in Education will:

  1. Research and develop courses of action for improvement of AI algorithms to mitigate bias and privacy issues: Regularly reassess AI algorithms used in samples from the Center's pilot states and school districts and make all necessary adjustments to address those issues.
    1. Incorporate AI technology developers into the feedback loop by establishing partnerships and collaborations. Invite developers to participate in research projects, workshops, and conferences related to AI in education.
    2. Research and highlight promising practices in teaching responsible AI use for students: Teaching about AI is as important, if not more important, than teaching with AI. Therefore, extensive curriculum research should be done on teaching students how to ethically and effectively use AI to enhance their learning. Incorporate real-world applications of AI into coursework so students are ready to use AI effectively and ethically in the next chapter of their postsecondary journey.
  2. Develop an AI diagnostic toolkit: This toolkit, which should be made publicly available for state agencies and district leaders, will analyze teacher efficacy, students’ grade level mastery, and students’ postsecondary readiness and success. 
  3. Provide professional development for teachers on effective and ethical AI use: Training should include responsible use of generative AI and AI for learning enhancement. 
  4. Monitor systems for bias and discrimination: Test tools to identify unintended bias to ensure that they do not perpetuate gender, racial, or social discrimination. Study and recommend best practices and policies. 
  5. Develop best practices for ensuring privacy: Ensure that student, family, and staff privacy are not compromised by the use of facial recognition or recommender systems. Protect students’ privacy, data security, and informed consent. Research and recommend policies and IT solutions to ensure privacy compliance. 
  6. Curate proven algorithms that protect student and staff autonomy: Predictive systems can limit a person's ability to act on their own interests and values. The Center will identify and highlight algorithms that are proven not to jeopardize students' or teachers' autonomy.

In addition, the National Center for AI in Education will conduct five types of studies: 

  1. Descriptive quantitative studies exploring patterns and predictors of teachers' and students' use of AI. These studies will draw on district administrative, publicly available, and student survey data. 
  2. Mixed methods case studies describing the context of teachers/schools participating in the Center and how stakeholders within these communities conceptualize students’ postsecondary readiness and success. One case study per pilot state will be used, drawing on survey, focus group, observational, and publicly available data. 
  3. Development evaluations of intervention materials developed by educators and content experts. AI sites/software will be evaluated through district prototyping and user feedback from students and staff. 
  4. Block cluster randomized field trials of at least two AI interventions. The Center will use school-level randomization, blocked on state and other relevant variables, to generate impact estimates on students' postsecondary readiness and success, and will use the ingredients method to estimate cost-effectiveness. (A simple sketch of blocked school-level assignment follows this list.) 
  5. Mixed methods implementation studies of at least two AI interventions implemented in real-world conditions. The Center will use intervention artifacts (including notes from participating teachers) as well as surveys, focus groups, and observational data. 
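
As a concrete illustration of study type 4, the sketch below shows blocked cluster random assignment: schools (the clusters) are shuffled within state-defined blocks and split between treatment and control. The data and the choice of state as the blocking variable are illustrative.

```python
# Illustrative blocked cluster random assignment for study type 4: schools
# (the clusters) are shuffled within state-defined blocks, then split
# between treatment and control arms. Data and blocking variable are invented.
import random

def block_cluster_assign(schools, seed=42):
    """schools: list of (school_id, state) pairs. Returns {school_id: arm}."""
    rng = random.Random(seed)
    blocks = {}
    for school_id, state in schools:
        blocks.setdefault(state, []).append(school_id)
    assignment = {}
    for state, ids in blocks.items():
        rng.shuffle(ids)
        half = len(ids) // 2
        for school_id in ids[:half]:
            assignment[school_id] = "treatment"
        for school_id in ids[half:]:
            assignment[school_id] = "control"
    return assignment

schools = [("A1", "IN"), ("A2", "IN"), ("B1", "OH"), ("B2", "OH")]
print(block_cluster_assign(schools))
```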

Findings will be disseminated through briefs targeted at a policy and practitioner audience, academic publications, conference presentations, and convenings with district partners. 

A publicly available AI diagnostic toolkit will be developed for state agencies and district leaders to use to analyze teacher efficacy, students' grade level mastery, and students' postsecondary readiness and success. This toolkit will also serve as a resource for legislators to keep up to date on AI in education. 

Professional development, ongoing coaching, and support to district staff will also be made available to expand capacity for data and evidence use. This multifaceted approach will allow the National Center for AI in Education to expand capacity in research related to AI use in education while having practical impacts on educator practice, district decision-making, and the national field of research on AI in education. 

Conclusion

The National Center for AI in Education would be valuable for United States education for several reasons. First, it could serve as a hub for research and development in the field, helping to advance our understanding of how AI can be effectively used in educational settings. Second, it could provide resources and support for educators looking to incorporate AI tools into their teaching practices. Third, it could help to inform future policies, as well as standards and best practices for the use of AI in education, ensuring that students are receiving high-quality, ethically sound educational experiences. A National Center for AI in Education could help to drive innovation and improvement in the field, ultimately benefiting students and educators alike.

This idea is part of our AI Legislation Policy Sprint. To see all of the policy ideas spanning innovation, education, healthcare, and trust, safety, and privacy, head to our sprint landing page.

Frequently Asked Questions
What is the initial duration of the proposed project?
Three to five years for the pilot, with plans developed for another three-to-five-year continuation/expansion phase.
What is the estimated initial budget request?
$10 million. This figure parallels the funding allocated for the National Center for Rural Education Research Networks (NCRERN), a project of similar scope.
Why should a university house the Center?
Universities have the necessary capabilities to conduct research and to help create and carry out professional development programs. Additionally, this research could inform teacher preparation programs and be disseminated across them.
How would this new Center interact with the EdSafeAI alliance or similar coalitions?
The National Center for AI in Education would share research findings widely with all organizations. There could also be opportunities for collaboration.
Would the Center supplant the need for those other coalitions?
No. The Center at its core would be research-based and oriented at the street level, with the teachers and students where the data is created.

Message Incoming: Establish an AI Incident Reporting System

What if an artificial intelligence (AI) lab found their model had a novel dangerous capability? Or a susceptibility to manipulation? Or a security vulnerability? Would they tell the world, confidentially notify the government, or quietly patch it up before release? What if a whistleblower wanted to come forward – where would they go? 

Congress has the opportunity to proactively establish a voluntary national AI Incident Reporting Hub (AIIRH) to identify and share information about AI system failures, accidents, security breaches, and other potentially hazardous incidents with the federal government. This reporting system would be managed by a designated federal agency, likely the National Institute of Standards and Technology (NIST). It would be modeled after successful incident reporting and info-sharing systems operated by the National Cybersecurity FFRDC (funded by the Cybersecurity and Infrastructure Security Agency (CISA)), the Federal Aviation Administration (FAA), and the Food and Drug Administration (FDA). This system would encourage reporting by allowing for confidentiality and guaranteeing that only government agencies could access sensitive AI system specifications.

AIIRH would provide a standardized and systematic way for companies, researchers, civil society, and the public to provide the federal government with key information on AI incidents, enabling analysis and response. It would also provide the public with some access to these data in a reliable way due to its statutory mandate, albeit often with less granularity than the government will have. Nongovernmental and international organizations, including the Responsible AI Collaborative (RAIC) and the Organisation for Economic Co-operation and Development (OECD), already maintain incident reporting systems, cataloging incidents such as facial recognition systems identifying the wrong person for arrest and trading algorithms causing market dislocations. However, these two systems have a number of limitations in their scope and reliability that make them more suitable for public accountability than government use. 

By establishing this system, Congress can enable better identification of critical AI risk areas before widespread harm occurs. This proposal would both build public trust and, if implemented successfully, help relevant agencies recognize emerging patterns and take preemptive actions through standards, guidance, notifications, or rulemaking.

Challenge and Opportunity

While AI systems have the potential to produce significant benefits across industries like healthcare, education, environmental protection, finance, and defense, they are also potentially capable of serious harm to individuals and groups. It is crucial that the federal government understand the risks posed by AI systems and develop standards, best practices, and legislation around its use. 

AI risks and harms can take many forms, from representational (such as women CEOs being underrepresented in image searches), to financial (such as automated trading systems or AI agents crashing markets), to possibly existential (such as through the misuse of AI to advance chemical, biological, radiological, and nuclear (CBRN) threats). As these systems become more powerful and interact with more aspects of the physical and digital worlds, a material increase in risk is all but inevitable in the absence of a sensible governance framework. However, in order to craft public policy that maximizes the benefits of AI and ameliorates harms, government agencies and lawmakers must understand the risks these systems pose.

There have been notable efforts by agencies to catalog types of risks, such as NIST’s 2023 AI Risk Management Framework, and to combat the worst of them, such as the Department of Homeland Security’s (DHS) efforts to mitigate AI CBRN threats. However, the U.S. government does not yet have an adequate resource to track and understand specific harmful AI incidents that have occurred or are likely to occur in the real world. While entities like the RAIC and the OECD manage AI incident reporting efforts, these systems primarily collect publicly reported incidents from the media, which are likely a small fraction of the total. These databases serve more as a source of public accountability for developers of problematic systems than a comprehensive repository suitable for government use and analysis. The OECD system lacks a proper taxonomy for different incident types and contexts, and while the RAIC database applies two external taxonomies to their data, it only does so at an aggregated level. Additionally, the OECD and RAIC systems depend on their organizations’ continued support, whereas AIIRH would be statutorily guaranteed. 

The U.S. government should do all it can to facilitate reporting of AI incidents and risks that is as comprehensive as possible, enabling policymakers to make informed decisions and respond flexibly as the technology develops. As it has done in the cybersecurity space, it is appropriate for the federal government to act as a focal point for the collection, analysis, and dissemination of data that is nationally distributed, is multi-sectoral, and has national impacts. Many federal agencies are also equipped to appropriately handle sensitive and valuable data, as is the case with AI system specifications. Compiling this kind of comprehensive dataset would constitute a national public good.

Plan of Action

We propose a framework for a voluntary Artificial Intelligence Incident Reporting Hub, inspired by existing public initiatives in cybersecurity, like the list of Common Vulnerabilities and Exploits (CVE)1 funded by CISA, and in aviation, like the FAA’s confidential Aviation Safety Reporting System (ASRS). 

AIIRH should cover a broad swath of what could be considered an AI incident in order to give agencies maximal data for setting standards, establishing best practices, and exploring future safeguards. Since there is no universally agreed-upon definition of an AI safety “incident,” AIIRH would (at least initially) utilize the OECD definitions of “AI incident” and “AI hazard,” as follows:

With this scope, the system would cover a wide range of confirmed harms and situations likely to cause harm, including dangerous capabilities like CBRN threats. Having an expansive repository of incidents also sets up organizations like NIST to create and iterate on future taxonomies of the space, unifying language for developers, researchers, and civil society. This broad approach does introduce overlap on voluntary cybersecurity incident reporting with the expanded CVE and National Vulnerability Database (NVD) systems proposed by Senators Warner and Tillis in their Secure AI Act. However, the CVE provides no analysis of incidents, so it should be viewed instead as a starting point to be fed into the AIIRH2, and the NVD only applies traditional cybersecurity metrics, whereas the AIIRH could accommodate a much broader holistic analysis.

Reporting submitted to AIIRH should highlight key issues, including whether the incident occurred organically or as the result of intentional misuse. Details of harm either caused or deemed plausible should also be provided. Importantly, reporting forms should allow maximum information but require as little as possible in order to encourage industry reporting without fear of leaking sensitive information and lower the implied transaction costs of reporting. While as much data on these incidents as possible should be broadly shared to build public trust, there should be guarantees that any confidential information and sensitive system details shared remain secure. Contributors should also have the option to reveal their identity only to AIIRH staff and otherwise maintain anonymity.
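
As an illustration of this "maximum information allowed, minimum required" reporting form, below is a minimal sketch of what an AIIRH intake record might look like. All field names are hypothetical assumptions for this memo, not a proposed standard.

```python
# Hypothetical AIIRH intake record: few required fields to lower the
# transaction costs of reporting, optional fields for richer detail, and
# flags for confidentiality and reporter anonymity. All names are
# illustrative assumptions, not a proposed standard.
from dataclasses import dataclass
from typing import Optional

@dataclass
class IncidentReport:
    # Required, kept deliberately minimal to encourage reporting
    summary: str
    incident_type: str        # "incident" (actual harm) or "hazard" (likely harm)
    intentional_misuse: bool  # organic failure vs. deliberate misuse
    # Optional richer detail
    harm_description: Optional[str] = None
    system_details: Optional[str] = None     # sensitive; government access only
    reporter_identity: Optional[str] = None  # visible to AIIRH staff only
    confidential: bool = True

report = IncidentReport(
    summary="Model assisted a simulated CBRN planning task beyond baseline",
    incident_type="hazard",
    intentional_misuse=False,
)
print(report.incident_type, report.confidential)  # hazard True
```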

NIST is the natural candidate to function as the reporting agency, as it has taken a larger role in AI standards setting since the release of the Biden Administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. NIST also has experience with incident reporting through their NVD, which contains agency experts’ analysis of CVE incidents. Finally, similar to how the National Aeronautics and Space Administration (NASA) operates the FAA’s confidential reporting system, ASRS, as a neutral third party, NIST is a non-enforcing agency with excellent industry relationships due to its collaborations on standards and practices. CISA is another option, as it funds and manages several incident reporting systems, including over AI security if the Warner-Tillis bill passes, but there is no reason to believe CISA has the expertise to address harms caused by things like algorithmic discrimination or CBRN threats. 

While NIST might be a trusted party to maintain a confidential system, employees reporting credible threats to AIIRH should have additional guarantees against retaliation from their current/former employers in the form of whistleblower protections. These are particularly relevant in light of reports that OpenAI, an AI industry leader, is allegedly neglecting safety and preventing employee disclosure through restrictive nondisparagement agreements. A potential model could be whistleblower protections introduced in California SB1047, where employers are forbidden from preventing, or retaliating based upon, the disclosure of an AI incident to an appropriate government agent. 

In order to further incentivize reporting, contributors may be granted advanced, real-time, or more complete access to the AIIRH reporting data. While the goal is to encourage the active exchange of threat vectors, in acknowledgment of the aforementioned confidentiality issues, reporters could opt out from having their data shared in this way, forgoing their own advanced access. If they allow a redacted version of their incident to be shared anonymously with other contributors, they could still maintain access to the reporting data.
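
The tiered-access incentive described above reduces to a simple rule, sketched below with illustrative tier names: contributors who share their data, fully or in redacted anonymous form, retain advanced access, while those who opt out of sharing forgo it.

```python
# Sketch of the tiered access rule described above. Tier names and the
# sharing options are illustrative assumptions, not part of the proposal text.
def access_tier(contributed: bool, sharing: str) -> str:
    """sharing: 'full' | 'redacted-anonymous' | 'opt-out'."""
    if contributed and sharing in ("full", "redacted-anonymous"):
        return "advanced-realtime-access"  # incentive for active exchange
    return "public-data-only"              # opted out, or not a contributor

print(access_tier(True, "redacted-anonymous"))  # advanced-realtime-access
```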

Key stakeholders include: 

Related proposed bills include:

The proposal would likely require congressional action to appropriate funds for the creation and implementation of the AIIRH. Creating and maintaining the AIIRH would cost an estimated $10–25 million annually, with the pay-for to be determined.

Conclusion

An AI Incident Reporting System would enable informed policymaking as the risks of AI continue to develop. By allowing organizations to report information on serious risks that their systems may pose in areas like CBRN, illegal discrimination, and cyber threats, this proposal would enable the U.S. government to collect and analyze high-quality data and, if needed, promulgate standards to prevent the proliferation of dangerous capabilities to non-state actors. By incentivizing voluntary reporting, we can preserve innovative and high-value uses of AI for society and the economy, while staying up-to-date with the quickly evolving frontier in cases where regulatory oversight is paramount.


Frequently Asked Questions
Why house AIIRH at NIST?

NIST has institutional expertise with incident reporting, having maintained the National Vulnerability Database and the Disaster Data Portal. NIST’s role as a standard-setting body leaves it ideally placed to keep pace with developments in new areas of technology, and because it frequently collaborates with companies without regulating them, it can act as a trusted home for cross-industry collaboration on sensitive issues. In the Biden Administration’s Executive Order on AI, NIST was given authority over establishing testbeds and guidance for testing and red-teaming of AI systems, making it a natural home for the closely related work here.

What kinds of follow-up, if any, will be conducted after an initial incident report?

AIIRH staff shall be empowered to conduct follow-ups on credible threat reports and to share information about those reports with leadership at the Department of Commerce, the Department of Homeland Security, the Department of Defense, and other agencies.

What could come next after these reports?

AIIRH staff could work with others at NIST to build a taxonomy of AI incidents, which would provide a helpful shared language for standards and regulations. Additionally, staff might share incidents as relevant with interested offices like CISA, the Department of Justice, and the Federal Trade Commission, although steps should be taken to minimize retribution against organizations that voluntarily disclose incidents (in contrast to whistleblower cases).

Why would organizations use a voluntary reporting system?

Similar to the logic of companies disclosing cybersecurity vulnerabilities and incidents, voluntary reporting builds public trust, earns companies favor with enforcement agencies, and increases safety broadly across the community. The confidentiality guarantees provided by AIIRH should make the prospect more appealing as well. Separately, individuals at organizations like OpenAI and Google have demonstrated a propensity towards disclosure through whistleblower complaints when they believe their employers are acting unsafely.

Addressing the Disproportionate Impacts of Student Online Activity Monitoring Software on Students with Disabilities

Student activity monitoring software is widely used in K-12 schools and has been employed to address student mental health needs. Education technology companies have developed algorithms using artificial intelligence (AI) that seek to detect risk for harm or self-harm by monitoring students’ online activities. This type of software can track student logins, view the contents of a student’s screen in real time, monitor or flag web search history, or close browser tabs for off-task students. While teachers, parents, and students largely report that the benefits of student activity monitoring outweigh the risks, there is still a need to address the ways that student privacy might be compromised and to avoid perpetuating existing inequities, especially for students with disabilities.

To address these issues, Congress and federal agencies should:

Challenge and Opportunity

People with disabilities have long benefited from technological advances. For decades, assistive technology, ranging from low tech to high tech, has helped students with disabilities learn. AI tools hold promise for making lessons more accessible. A recent EdWeek survey of principals and district leaders showed that most schools are considering, actively exploring, or piloting the use of AI tools. The special education research community, including the Center for Innovation, Design and Digital Learning (CIDDL), sees both the immense potential and the risks of AI in educating students with disabilities. CIDDL states:

“AI in education has the potential to revolutionize teaching and learning through personalized education, administrative efficiency, and innovation, particularly benefiting (special) education programs across both K-12 and Higher Education. Key impacts include ethical issues, privacy, bias, and the readiness of students and faculty for AI integration.”

At the same time, AI-based student online activity monitoring software is increasingly being employed to monitor and surveil what students do online. In K-12 schools, such software is widespread: nearly 9 in 10 teachers say that their school monitors students’ online activities.

Schools have employed these technologies to attempt to address student mental health needs, such as referring flagged students to counseling or other services. These practices have significant implications for students with disabilities, as they are at higher risk for mental health issues. In 2024, NCLD surveyed 1,349 young adults ages 18 to 24 and found that nearly 15% of individuals with a learning disability had a mental health diagnosis, and 45% of respondents indicated that having a learning disability negatively impacts their mental health. Given these risks, careful attention must be paid to ensure mental health needs are identified and appropriately addressed through evidence-based supports.

Yet there is little evidence supporting the efficacy of this software. Researchers at RAND, through a review of peer-reviewed and gray literature as well as interviews, raise issues with the software, including threats to student privacy, the difficulty families face in opting out, algorithmic bias, and the escalation of situations to law enforcement. The Center for Democracy & Technology (CDT) conducted research highlighting that students with disabilities are disproportionately impacted by these AI technologies. For example, licensed special education teachers are more likely to report knowing students who have gotten in trouble and been contacted by law enforcement due to student activity monitoring. Other CDT polling found that 61% of students with learning disabilities report that they do not share their true thoughts or ideas online because of monitoring.

We also know that students with disabilities are almost three times more likely to be arrested than their nondisabled peers, with Black and Latino male students with disabilities being the most at risk of arrest. Interactions with law enforcement, especially for students with disabilities, can be detrimental to health and education. Because people with disabilities have protections under civil rights laws, including the right to a free appropriate public education in school, actions must be taken. 

Parents are also increasingly concerned about subjecting their children to greater monitoring both in and outside the classroom, leading to decreased support for the practice: 71% of parents report being concerned with schools tracking their children’s location and 66% are concerned with their children’s data being shared with law enforcement (including 78% of Black parents). Concern about student data privacy and security is higher among parents of children with disabilities (79% vs. 69%). Between the 2021–2022 and 2022–2023 school years, parent and student support of student activity monitoring fell 8% and 11%, respectively. 

Plan of Action

Recommendation 1. Improve data collection.

While data collected by private research entities like RAND and CDT captures some important information on this issue, the federal government should collect relevant data itself to capture the extent to which these technologies might be misused. Polling data, like the CDT survey of 2,000 teachers referenced above, provides a snapshot and has been influential in raising immediate concerns about the procurement of student activity monitoring software. However, the federal government is not currently collecting larger-scale data about this issue, and members of Congress, such as Senators Markey and Warren, have relied on CDT’s data in their investigation of the issue because of the absence of federal datasets.

To do this, Congress should charge the National Center for Education Statistics (NCES) within the Institute of Education Sciences (IES) with collecting large-scale data from local education agencies to examine the impact of digital learning tools, including student activity monitoring software. IES should collect data on students disaggregated by the student subgroups described in section 1111(b)(2)(B)(xi) of the Elementary and Secondary Education Act of 1965 (20 U.S.C. 6311(b)(2)(B)(xi)) and disseminate such findings to state and local educational agencies and other appropriate entities.
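
As an illustration of the kind of disaggregated reporting this would enable, the sketch below uses synthetic data and hypothetical column names to compute monitoring-flag and law-enforcement-referral rates by student subgroup; it is not a prescribed methodology.

```python
# Illustrative disaggregation sketch (synthetic data, hypothetical columns).
import pandas as pd

df = pd.DataFrame({
    "district":     ["A", "A", "B", "B"],
    "subgroup":     ["students_with_disabilities", "all_students",
                     "students_with_disabilities", "all_students"],
    "students":     [120, 1000, 90, 800],
    "flagged":      [30, 110, 27, 96],   # flagged by monitoring software
    "le_referrals": [4, 10, 5, 9],       # referrals to law enforcement
})

# Aggregate across districts, then compute per-subgroup rates.
rates = (
    df.groupby("subgroup")[["students", "flagged", "le_referrals"]]
      .sum()
      .assign(flag_rate=lambda t: t["flagged"] / t["students"],
              referral_rate=lambda t: t["le_referrals"] / t["students"])
)
print(rates)
```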

Recommendation 2. Enhance parental notification and ensure free appropriate public education.

Families and communities are not being appropriately informed about the use, or potential for misuse, of technologies installed on school-issued devices and accounts. At the start of the school year, schools should notify parents about what technologies are used, how and why they are used, and alert them of any potential risks associated with them. 

Congress should require school districts to notify parents annually, as they do with other Title I programs as described in Sec. 1116 of the Elementary and Secondary Education Act of 1965 (20 U.S.C. 6318), including “notifying parents of the policy in an understandable and uniform format and, to the extent practicable, provided in a language the parents can understand” and that “such policy shall be made available to the local community and updated periodically to meet the changing needs of parents and the school.”

For students with disabilities specifically, the Individuals with Disabilities Education Act (IDEA) provides procedural safeguards to parents to ensure they have certain rights and protections so that their child receives a free appropriate public education (FAPE). To implement IDEA, schools must convene an Individualized Education Program (IEP) team, and the IEP should outline the academic and/or behavioral supports and services the child will receive in school and include a statement of the child’s present levels of academic achievement and functional performance, including how the child’s disability affects the child’s involvement and progress in the general education curriculum. The U.S. Department of Education should provide guidance about how to leverage the current IEP process to notify parents of the technologies in place in the curriculum and use the IEP development process as a mechanism to identify which mental health supports and services a student might need, rather than relying on conclusions from data produced by the software. 

In addition, IDEA regulations address instances of significant disproportionality among children with disabilities who are students of color, including in disciplinary referrals and exclusionary discipline (which may include referral to law enforcement). Given this long history of disproportionate disciplinary actions, and the fact that special educators are more likely to report knowing students who have gotten in trouble and been contacted by law enforcement due to student activity monitoring, these incidents raise questions about whether they cause a loss of instructional time for students with disabilities and, in turn, a potential violation of FAPE. The Department of Education should provide guidance clarifying that such disproportionate discipline might result from the use of student activity monitoring software and explaining how to mitigate referrals to law enforcement for students with disabilities.

Recommendation 3. Invest in the Office for Civil Rights within the U.S. Department of Education.

The Office for Civil Rights (OCR) currently receives $140 million and is responsible for investigating and resolving civil rights complaints in education, including allegations of discrimination based on disability status. Complaints filed with OCR continued to rise in FY2023, with 19,201 complaints received. The total number of complaints has almost tripled since FY2009, while over the same period OCR’s number of full-time equivalent staff decreased by about 10%. Typically, the majority of complaints received raise allegations regarding disability.

Congress should double its appropriations for OCR, raising its funding to $280 million. A robust investment would give OCR the resources to address complaints alleging discrimination that involve educational technology software, programs, or services, including AI-driven technologies. With greater resources, OCR can initiate stronger enforcement efforts against potential violations of civil rights law and work with the Office of Educational Technology to provide guidance to schools on how to fulfill civil rights obligations.

Recommendation 4. Support state and local education agencies with technical assistance.

State education agencies (SEAs) and local education agencies (LEAs) face enormous challenges in responding to the rapidly changing market of education technologies. States and districts are inundated with products from vendors and often lack the technical expertise to differentiate between them. When education technology initiatives and products are not conceived, designed, procured, implemented, or evaluated with the needs of all students in mind, technology can exacerbate existing inequalities.

To support states and school districts in procuring, implementing, and developing state and local policy, the federal government should invest in a national center to provide robust technical assistance focused on safe and equitable adoption of schoolwide AI technologies, including student online activity monitoring software. 

Conclusion

AI technologies will have an enormous impact on public education. Yet if we do not implement these technologies with students with disabilities in mind, we risk furthering their marginalization. Both Congress and the U.S. Department of Education can play an important role by developing policy and guidance and providing the resources needed to combat the harms posed by these technologies. NCLD looks forward to working with decision makers to protect the civil rights of students with disabilities and ensure responsible use of AI technologies in schools.


Frequently Asked Questions
Why is the Institute of Education Sciences (IES) the right entity to collect such data?
The IES has invested in research to advance AI technologies used in education and has coordinated with the National Science Foundation to advance AI-driven research and innovations for learners with or at risk for disabilities, demonstrating a clear commitment to experimental studies that incorporate AI into instruction and to piloting new technologies. While this research is important and will help shape the future of teaching and learning, especially for disabled students, additional data and research are needed to fully evaluate the extent to which AI tools already used in schools are impacting students.
What would be the focus of the proposed technical assistance (TA) center?

This TA center could provide guidance to states and local education agencies that lack the capacity and subject matter expertise for both procurement and implementation. It could coordinate its services and resources with existing TA centers, like the T4PA Center and the Regional Educational Laboratories, on how to invest in evidence-based mental health supports in schools and communities, including using technology in ways that mitigate discrimination and bias.


As of February 2024, seven states had published AI guidelines (reviewed and collated by Digital Promise). While these broadly recognize the need for policies and guidelines to ensure that AI is used safely and ethically, none explicitly mention the use of student activity monitoring AI software.

Why should the Office for Civil Rights (OCR) be funded at a level of at least $280 million?

This is a funding level requested in other bills seeking to increase OCR’s capacity such as the Showing Up For Students Act. OCR is projecting 23,879 complaint receipts in FY2025. Excluding projected complaints filed by a single complainant, this number is expected to be 22,179 cases. Without staffing increases in FY2025, the average caseload per investigative staff will become unmanageable at 71 cases per staff (22,179 projected cases divided by 313 investigative staff).

How does this proposal fit into the larger landscape of congressional and administrative attention to this issue?

In late 2023, the Biden-Harris Administration issued an Executive Order on AI. Also that fall, Senate Health, Education, Labor, and Pensions (HELP) Committee Ranking Member Bill Cassidy (R-LA) released a White Paper on AI and requested stakeholder feedback on the impact of AI and the issues within his committee’s jurisdiction.


U.S. House of Representatives members Lori Trahan (D-MA) and Sara Jacobs (D-CA), among others, also recently asked Secretary of Education Miguel Cardona to provide information on the OCR’s understanding of the impacts of educational technology and artificial intelligence in the classroom.


Last, Senate Majority Leader Chuck Schumer (D-NY) and Senator Todd Young (R-IN) issued a bipartisan Roadmap for Artificial Intelligence Policy that calls for a $32 billion annual investment in research on AI. While K-12 education has not been a core focal point within ongoing legislative and administrative actions on AI, it is imperative that the federal government take the necessary steps to protect all students and play an active role in upholding federal civil rights and privacy laws that protect students with disabilities. Given these commitments from the federal government, there is a ripe opportunity to take action to address the issues of student privacy and discrimination that these technologies pose.

What existing laws should policymakers consider when improving implementation of, or working to uphold, existing statutory protections?

Individuals with Disabilities Education Act (IDEA): IDEA is the law that ensures students with disabilities receive a free appropriate public education (FAPE). IDEA regulations require states to collect data and examine whether significant disproportionality based on race and ethnicity is occurring with respect to the incidence, duration, and type of disciplinary action, including suspensions and expulsions. Guidance from the Department of Education in 2022 emphasized that schools are required to provide behavioral supports and services to students who need them in order to ensure FAPE. It also stated that “a school policy or practice that is neutral on its face may still have the unjustified discriminatory effect of denying a student with a disability meaningful access to the school’s aid, benefits, or services, or of excluding them based on disability, even if the discrimination is unintentional.”


Section 504 of the Rehabilitation Act: This civil rights statute protects individuals from discrimination based on their disability. Any school that receives federal funds must abide by Section 504, and some students who are not eligible for services under IDEA may still be protected under this law (these students usually have a “504 plan”). As the Department of Education works to update the regulations for Section 504, the implications of surveillance software on the civil rights of students with disabilities should be considered.


Elementary and Secondary Education Act (ESEA) Title I and Title IV-A: Title I of the Elementary and Secondary Education Act (ESEA) provides funding to public schools and requires states and public school systems to hold public schools accountable for monitoring and improving achievement outcomes for students and closing achievement gaps between subgroups like students with disabilities. One requirement under Title I is to notify parents of certain policies the school has and actions the school will take throughout the year. As a part of this process, schools should notify families of any school monitoring policies that may be used for disciplinary actions. The Title IV-A program within ESEA provides funding to states (95% of which must be allocated to districts) to improve academic achievement in three priority content areas, including activities to support the effective use of technology. This may include professional development and learning for educators around educational technology, building technology capacity and infrastructure, and more.


Family Educational Rights and Privacy Act (FERPA): FERPA protects the privacy of students’ educational records (such as grades and transcripts) by preventing schools or teachers from disclosing students’ records while allowing caregivers access to those records to review or correct them. However, the information from computer activity on school-issued devices or accounts is not usually considered an education record and is thus not subject to FERPA’s protections.


Children’s Online Privacy Protection Act (COPPA): COPPA requires operators of commercial websites, online services, and mobile apps to notify parents and obtain their consent before collecting any personal information on children under the age of 13. The aim is to give parents more control over what information is collected from their children online. The law regulates companies, not schools.

About the National Center for Learning Disabilities

We are working to improve the lives of individuals with learning disabilities and attention issues—by empowering parents and young adults, transforming schools, and advocating for equal rights and opportunities. We actively work to shape local and national policy to reduce barriers and ensure equitable opportunities and accessibility for students with learning disabilities and attention issues. Visit ncld.org to learn more.

Establish Data-Sharing Standards for the Development of AI Models in Healthcare

The National Institute of Standards and Technology (NIST) should lead an interagency coalition to produce standards that enable third-party research and development on healthcare data. These standards, governing data anonymization, sharing, and use, have the potential to dramatically expedite the development and adoption of medical AI technologies across the healthcare sector.

Challenge and Opportunity

The rise of large language models (LLMs) has demonstrated the predictive power and nuanced understanding that come from large datasets. Recent work in multimodal learning and natural language understanding has made complex problems—for example, predicting patient treatment pathways from unstructured health records—feasible. A Harvard study estimated that wider adoption of AI automation would reduce U.S. healthcare spending by $200 billion to $360 billion annually and reduce the spending of public payers, such as Medicare, Medicaid, and the VA, by five to seven percent, across both administrative and medical costs.

However, the practice of healthcare, while information-rich, is incredibly data-poor. There is not nearly enough medical data available for large-scale learning, particularly when focusing on the continuum of care. We generate terabytes of medical data daily, but this data is fragmented and hidden, held captive by lack of interoperability.

Currently, privacy concerns and legacy data infrastructure create significant friction for researchers working to develop medical AI. Each research project must build custom infrastructure to access data from each and every healthcare system. Even absent infrastructural issues, hospitals and health systems face liability risks by sharing data; there are no clear guidelines for sufficiently deidentifying data to enable safe use by third parties.

There is an urgent need for federal action to unlock data for AI development in healthcare. AI models trained on larger and more diverse datasets improve substantially in accuracy, safety, and generalizability. These tools can transform medical diagnosis, treatment planning, drug development, and health systems management.

New NIST standards governing the anonymization, secure transfer, and approved use of healthcare data could spur collaboration. AI companies, startups, academics, and others could responsibly access large datasets to train more advanced models.

Other nations are already creating such data-sharing frameworks, and the United States risks falling behind. The United Kingdom has facilitated a significant volume of public-private collaborations through its establishment of Trusted Research Environments. Australia has a similar offering in its SURE (Secure Unified Research Environment). Finland has the Finnish Social and Health Data Permit Authority (Findata), which houses and grants access to a centralized repository of health data. But the United States lacks a single federally sponsored protocol and research sandbox. Instead, we have a hodgepodge of offerings, ranging from the federal National COVID Cohort Collaborative Data Enclave to private initiatives like the ENACT Network.

Without federal guidance, many providers will remain reticent to participate or will provide data in haphazard ways. Researchers and AI companies will lack the data required to push boundaries. By defining clear technical and governance standards for third-party data sharing, NIST, in collaboration with other government agencies, can drive transformative impact in healthcare.

Plan of Action

The effort to establish this set of guidelines will be structurally similar to previous standard-setting projects by NIST, such as the Cryptographic Standards or Biometric Standards Program. Using those programs as examples, we expect the effort to require around 24 months and $5 million in funding. 

Assemble a Task Force

This standards initiative could be established under NIST’s Information Technology Laboratory, which has expertise in creating data standards. However, in order to gather domain knowledge, partnerships with agencies like the Office of the National Coordinator for Health Information Technology (ONCHIT), the Department of Health and Human Services (HHS), the National Institutes of Health (NIH), the Centers for Medicare & Medicaid Services (CMS), and the Agency for Healthcare Research and Quality (AHRQ) would be invaluable.

Draft the Standards

Data sharing would require standards at three levels:

  1. Syntactic: how data is structured and exchanged between systems
  2. Semantic: what the data means, so it is interpreted consistently
  3. Use and governance: how data may be anonymized, shared, and applied

Syntactic regulations already exist through standards like HL7/FHIR. Semantic formats exist as well, in standards like the Observational Medical Outcomes Partnership’s Common Data Model. We propose to develop the final class of standards, governing fair, privacy-preserving, and effective use.
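
To illustrate the distinction between the two existing layers, the simplified sketch below contrasts a FHIR-style resource (the syntactic layer) with an OMOP-style row (the semantic layer). Both examples are heavily abbreviated and not schema-complete; treat the specific concept ID as illustrative.

```python
# Simplified, non-schema-complete illustration of the two existing layers.
fhir_patient = {            # Syntactic layer: a FHIR-style Patient resource
    "resourceType": "Patient",
    "gender": "female",
    "birthDate": "1977-03-02",
}

omop_person = {             # Semantic layer: an OMOP CDM-style person row
    "person_id": 42,
    "gender_concept_id": 8532,   # standardized OMOP concept for "female"
    "year_of_birth": 1977,
}
```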

The governance standards could cover:

  1. Data Anonymization (see the sketch following this list)
  2. Secure Data Transfer Protocols
  3. Approved Usage
  4. Public-Private Coordination
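
As a sketch of what the anonymization component might formalize, the following illustrates Safe Harbor-style redaction and generalization in Python. The identifier list, field names, and rules are illustrative only, not a HIPAA-compliant implementation.

```python
# Illustrative deidentification sketch; not a compliant implementation.
DIRECT_IDENTIFIERS = {"name", "ssn", "phone", "email", "address", "mrn"}

def deidentify(record: dict) -> dict:
    """Drop direct identifiers and coarsen common quasi-identifiers."""
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # Generalize quasi-identifiers: 3-digit ZIP prefix and 10-year age bands.
    if "zip" in out:
        out["zip"] = str(out["zip"])[:3] + "**"
    if "age" in out:
        decade = (out["age"] // 10) * 10
        out["age"] = f"{decade}-{decade + 9}"
    return out

patient = {"name": "Jane Doe", "mrn": "12345", "zip": "02139",
           "age": 47, "diagnosis": "E11.9"}
print(deidentify(patient))  # {'zip': '021**', 'age': '40-49', 'diagnosis': 'E11.9'}
```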

Revise with Public Comment

After releasing the first draft of standards, seek input from stakeholders and the public. In particular, these groups are likely to have constructive input: 

Implement and Incentivize

After publishing the final standards, the task force should promote their adoption and incentivize public-private partnerships. The HHS Office for Civil Rights should issue regulatory guidance under HIPAA allowing these documents to serve as a means of meeting regulatory requirements. The standards could be adopted initially by public health data sources, such as CMS, and NIH grants could mandate participation as part of recently launched public disclosure and data-sharing requirements.

Conclusion

Developing standards for collaboration on health AI is essential for the next generation of healthcare technologies.

All the pieces are already in place. Under the HITECH Act, the Office of the National Coordinator for Health Information Technology gives grants to Regional Health Information Exchanges precisely to enable this kind of exchange. This effort directly aligns with the administration’s priority of leveraging AI and data for the national good and the White House’s recent statement on advancing healthcare AI. Collaborative protocols like these also move us toward the vision of an interoperable health system—and better outcomes for all Americans.


Frequently Asked Questions
How can we maintain patient privacy when sharing data with third parties?
Sharing data with third parties is not new. Researchers and companies often engage in data-sharing agreements with medical centers or payors. However, these agreements are usually specialized and created ad hoc. This proposal aims to standardize and scale such data-sharing agreements while still protecting patient privacy. Existing standards, such as HIPAA, may be combined with emerging technologies, like homomorphic encryption, differential privacy, or secure multi-party computation, to spur innovation without sacrificing patient privacy.
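
To make one of these techniques concrete, below is a toy sketch of differential privacy’s Laplace mechanism applied to a cohort-count query. The epsilon value and the query itself are illustrative only, not a recommended configuration.

```python
# Toy differential privacy sketch: Laplace mechanism on a count query.
import numpy as np

rng = np.random.default_rng(0)

def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    """A count query has sensitivity 1, so adding noise drawn from
    Laplace(scale = 1/epsilon) yields an epsilon-DP release."""
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# e.g., number of patients in a cohort matching some criterion
print(dp_count(1284, epsilon=0.5))
```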
Why is NIST the right body for this work, rather than a group like HHS, ONCHIT, or CMS?

Collaboration among several agencies is essential to the design and implementation of these standards. We envision NIST working closely with counterparts at HHS and other agencies. However, we think that NIST is the best agency to lead this coalition due to its rich technical expertise in emerging technologies.


NIST has been responsible for several landmark technical standards, such as the NIST Cloud Computing Reference Architecture, and has previously done related work in its report on deidentification of personal information and extensive work on assisting adoption of the HL7 data interoperability standard.


NIST has the necessary expertise for drafting and developing data anonymization and exchange protocols and, in collaboration with HHS, ONCHIT, NIH, AHRQ, and industry stakeholders, will have the domain knowledge to create useful and practical standards.

How does this differ from HL7?
HL7 and FHIR are data exchange protocols for healthcare information, maintained by the nonprofit HL7 International. Both play critical roles in enabling interoperability across the healthcare ecosystem. However, they primarily govern data formats and exchange protocols between systems, rather than specifying standards for data anonymization and responsible sharing with third parties like AI developers.

Establish a Teacher AI Literacy Development Program

The rapid advancement of artificial intelligence (AI) technology necessitates a transformation in our educational systems to equip the future workforce with necessary AI skills, starting with our K-12 ecosystem. Congress should establish a dedicated program within the National Science Foundation (NSF) to provide ongoing AI literacy training specifically for K-12 teachers and pre-service teachers. The proposed program would ensure that all teachers have the necessary knowledge and skills to integrate AI into their teaching practices effectively.

Challenge and Opportunity

Generative artificial intelligence (GenAI) has emerged as a profoundly disruptive force reshaping the landscape of nearly every industry. This seismic shift demands a corresponding transformation in our educational systems to prepare the next generation effectively. Central to this transformation is building a robust GenAI literacy among students, which begins with equipping our educators. Currently, the integration of GenAI technologies in classrooms is outpacing the preparedness of our teachers, with less than 20% feeling adequately equipped to utilize AI tools such as ChatGPT. Moreover, only 29% have received professional development in relevant technologies, and only 14 states offer any guidance on GenAI implementation in educational settings at the time of this writing.

The urgency for federal intervention cannot be overstated. Without it, there is a significant risk of exacerbating educational and technological disparities among students, which could hinder their readiness for future job markets dominated by AI. It is of particular importance that AI literacy training is deployed equitably to counter the disproportionate impact of AI and automation on women and people of color. McKinsey Global Institute reported in 2023 that women are 1.5 times more likely than men to experience job displacement by 2030 as a result of AI and automation. A previous study by McKinsey found that Black and Hispanic/Latino workers are at higher risk of occupational displacement than any other racial demographic. This proposal seeks to address the critical deficit in AI literacy among teachers, which, if unaddressed, will leave our students ill-prepared for an AI-driven world.

The opportunity before us is to establish a government program that will empower teachers to stay relevant and adaptable in an evolving educational landscape. This will not only enhance their professional development but also ensure they can provide high-quality education to their students. Teachers equipped with AI literacy skills will be better prepared to educate students on the importance and applications of AI. This will help students develop critical skills needed for future careers, fostering a workforce that is ready to meet the demands of an AI-driven economy. 

Plan of Action

To establish the NSF Teacher AI Literacy Development Program, Congress should first pass a defining piece of legislation that will outline the program’s purpose, delineate its extent, and allocate necessary funding. 

An initial funding allocation, as specified by the authorizing legislation, will be directed toward establishing the program’s operations. This funding will cover essential aspects such as staffing, the initial setup of the professional development resource hub, and the development of incentive programs for states. 

Key responsibilities of the program include:

Develop comprehensive AI literacy standards for K-12 teachers through a collaborative process involving educational experts, AI specialists, and teachers. These standards could be developed directly by the federal government as a model for states to consider adopting, or compiled from existing resources set by reputable organizations, such as the International Society for Technology in Education (ISTE) or UNESCO.

Compile a centralized digital repository of AI literacy resources, including training materials, instructional guides, best practices, and case studies. These resources will be curated from leading educational institutions, AI research organizations, and technology companies. The program would establish partnerships with universities, education technology companies, and nonprofits to continuously update and expand the resource hub with the latest tools and research findings.

Design a comprehensive grant program to support the development and implementation of AI literacy programs for both in-service and pre-service teachers. The program would outline the criteria for eligibility, application processes, and evaluation metrics to ensure that funds are distributed effectively and equitably. It would also provide funding to educational institutions to build their capacity for delivering high-quality AI literacy programs. This includes supporting the development of infrastructure, acquiring necessary technology, and hiring or training faculty with expertise in AI.

Conduct regular, comprehensive assessments to gauge the current state of AI literacy among educators. These assessments would include surveys, interviews, and observational studies to gather qualitative and quantitative data on teachers’ knowledge, skills, and confidence in using AI in their classrooms across diverse educational settings. This data would then be used to address specific gaps and areas of need.

Conduct nationwide campaigns to raise awareness about the importance of AI literacy in education, prioritizing outreach efforts in underserved and rural areas to ensure that these communities receive the necessary information and resources. This can include localized campaigns, community meetings, and partnerships with local organizations.

Prepare and present annual reports to Congress and the public detailing the program’s achievements, challenges, and future plans. This ensures transparency and accountability in the program’s implementation and progress.

Regularly evaluate the effectiveness of AI literacy programs and assess their impact on teaching practices and student outcomes. Use this data to inform policy decisions and program improvements.

Proposed Timeline

Year 1: Formation and Setup

  Quarter 1: Congress passes legislation to establish the program; allocate initial funding to support the establishment and initial operations of the program.
  Quarter 2: Formally establish the program’s administrative office and hire key staff; develop and launch the program’s official website for public communication and resource dissemination.
  Quarter 3: Initiate a national needs assessment to determine the current state of AI literacy among educators; develop AI literacy standards for K-12 teachers.
  Quarter 4: Establish AI literacy resource centers within community college and vocational school Centers of AI Excellence; distribute resources and funding to selected pilot school districts and teacher training institutions.

Year 2: Implementation and Expansion

  Quarter 1: Evaluate pilot programs and integrate initial feedback to refine training materials and strategies; expand resource distribution based on feedback from pilot programs.
  Quarter 2: Launch strategic partnerships with leading technology firms, academic institutions, and educational nonprofits to enhance resource hubs and professional development opportunities; initiate public awareness campaigns to emphasize the importance of AI literacy in education.
  Quarter 3: Offer incentives for states to develop and implement AI literacy training programs for teachers; continue to develop and refine AI literacy standards based on ongoing feedback and advancements in AI technology.
  Quarter 4: Review year-end progress and adjust strategies based on comprehensive evaluations; prepare the first annual report to Congress and the public outlining achievements, challenges, and future plans.

Year 3 and Beyond: Maturation and Nationwide Implementation

  Scale up successful initiatives to a national level based on proven effectiveness and feedback.
  Continuously update the Professional Development Resource Hub with the latest AI educational tools and best practices.
  Regularly update AI literacy standards to reflect technological advancements and educational needs.
  Sustain focus on incentivizing states and expanding reach to underserved regions to ensure equitable AI education across all demographics.

Conclusion

This proposal expands upon Section D of the Biden Administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, emphasizing the importance of building AI literacy to foster a deeper understanding before providing tools and resources. Additionally, this policy has been developed with reference to the Office of Educational Technology’s report on Artificial Intelligence and the Future of Teaching and Learning, as well as the 2024 National Education Technology Plan. These references underscore the critical need for comprehensive AI education and align with national strategies for integrating advanced technologies in education. 

We stand at a pivotal moment where our actions today will determine our students’ readiness for the world of tomorrow. Therefore, it is imperative for Congress to act swiftly to pass the necessary legislation to establish the NSF Teacher AI Literacy Development Program. Doing so will not only secure America’s technological leadership but also ensure that every student has the opportunity to succeed in the new digital age.


Frequently Asked Questions
How can we ensure that the AI literacy training is not biased or does not promote certain agendas, especially given the potential influence of technology companies involved in developing resources?

The program emphasizes developing AI literacy standards through a collaborative process involving educational experts, AI specialists, and teachers themselves. By including diverse perspectives and stakeholders, the goal is to create comprehensive and balanced training materials. Additionally, resources will be curated from a wide range of leading institutions, organizations, and companies to prevent any single entity from exerting undue influence. Regular evaluations and feedback loops will also help identify and address any potential biases.

How will this program address the digital divide and ensure equitable access to AI literacy training for teachers in underfunded schools and rural areas? Many districts may lack the necessary infrastructure and resources.

Ensuring equitable access to AI literacy training is a key priority of this program. The nationwide awareness campaigns will prioritize outreach efforts in underserved and rural areas. Additionally, the program will offer incentives and targeted funding for states to develop and implement AI literacy training programs, with a focus on supporting schools and districts with limited resources.

Given the rapid pace of AI advancements, how frequently will the training materials and resources need to be updated, and what is the long-term cost projection for keeping the program relevant?

The program acknowledges the need for continuous updating of AI literacy standards, training materials, and resources to reflect the latest advancements in AI technology. The proposal outlines plans for regular updates to the Professional Development Resource Hub, as well as periodic revisions to the AI literacy standards themselves. While specific timelines and cost projections are not provided, the program is designed with a long-term view, including strategic partnerships with leading institutions and technology firms to stay current with developments in the field. Annual reports to Congress will help assess the program’s effectiveness and inform decisions about future funding and resource allocation.

What metrics will be used to evaluate the effectiveness of the AI literacy training programs, and how will student outcomes be measured to justify the investment in this initiative?

The program emphasizes the importance of regular, comprehensive assessments to gauge the current state of AI literacy among educators. These assessments will include surveys, interviews, and observational studies to gather both qualitative and quantitative data on teachers’ knowledge, skills, and confidence in using AI in their classrooms across diverse educational settings. Additionally, the program aims to evaluate the effectiveness of AI literacy programs and assess their impact on teaching practices and student outcomes, though specific metrics are not outlined. The data gathered through these evaluations will be used to inform policy decisions, program improvements, and to justify continued investment in the initiative.

A NIST Foundation to Support the Agency’s AI Mandate

The National Institute of Standards and Technology (NIST) faces several obstacles to advancing its mission on artificial intelligence (AI) at a time when the field is rapidly advancing and consequences for falling short are wide-reaching. To enable NIST to quickly and effectively respond, Congress should authorize the establishment of a NIST Foundation to unlock additional resources, expertise, flexible funding mechanisms, and innovation, while ensuring the foundation is stood up with strong ethics and oversight mechanisms.

Challenge

The rapid advancement of AI presents unprecedented opportunities and complex challenges as it is increasingly integrated into the way that we work and live. The National Institute of Standards and Technology (NIST), an agency within the Department of Commerce, plays an important role in advancing AI-related research, measurement, evaluation, and technical standard setting. NIST was recently given new responsibilities under President Biden’s October 30, 2023, Executive Order (EO) on Safe, Secure, and Trustworthy Artificial Intelligence. To support the implementation of the EO, NIST launched an AI Safety Institute (AISI), created an AI Safety Institute Consortium (AISIC), and released a strategic vision for AISI focused on safe and responsible AI innovation, among other actions.

While work is underway to implement Biden’s AI EO and deliver on NIST’s broader AI mandate, NIST faces persistent obstacles to responding quickly and effectively. For example, recent legislation like the Fiscal Responsibility Act of 2023 has set discretionary spending limits, which means less funding is available to support NIST’s programs. Even before this, NIST’s funding (around $1–1.3 billion each year) remained a small fraction of the scale of the industries it is supposed to set standards for. Since FY22, NIST has received lower appropriations than it has requested.

In addition, NIST is struggling to attract the specialized science and technology (S&T) talent that it needs due to competition for technical talent, a lack of competitive pay compared to the private sector, a gender-imbalanced culture, and issues with transferring institutional knowledge when individuals transition out of the agency, according to a February 2023 Government Accountability Office report. Alongside this, NIST has limitations on how it can work with the private sector and is subject to procurement processes that can be a barrier to innovation, an issue the agency has struggled with in years past, according to a September 2005 Inspector General report.

The consequences of NIST not fulfilling its mandate on AI due to these challenges and limitations are wide-reaching: a lack of uniform AI standards across platforms and countries; reduced AI trust and security; limitations on AI innovation and commercialization; and the United States losing its place as a leading international voice on AI standards and governance, giving the Chinese government and companies a competitive edge as they seek to become a world leader in artificial intelligence.

Opportunity

An agency-related foundation could play a crucial role in addressing these challenges and strengthening NIST’s AI mission. Agency-related nonprofit research foundations and corporations have long been used to support the research and development (R&D) mandates of federal agencies by enabling them to quickly respond to challenges and leverage additional resources, expertise, flexible funding mechanisms, and innovation from the private sector to support service delivery and the achievement of agency programmatic goals more efficiently and effectively.

One example is the CDC Foundation. In 1992, Congress passed legislation authorizing the creation of the CDC Foundation, an independent, 501(c)(3) public charity that supports the mandate of the Centers for Disease Control and Prevention (CDC) by facilitating strategic partnerships between the CDC and the philanthropic community and leveraging private-sector funds from individuals, philanthropies, and corporations. The CDC is legally able to capitalize on these private sector funds through two mechanisms: (1) Section 231 of the Public Health Service Act, which authorizes the Secretary of Health and Human Services “to accept on behalf of the United States gifts made unconditionally by will or otherwise for the benefit of the Service or for the carrying out of any of its functions,” and (2) the legislation that authorized the creation of the CDC Foundation, which establishes its governance structure and provides the CDC director the authority to accept funds and voluntary services from the foundation to aid and facilitate the CDC’s work. 

Since 1995, the CDC Foundation has raised $2.2 billion to support 1,400 public health programs in the United States and worldwide. The importance of this model was evident at the height of the COVID-19 pandemic, when the CDC Foundation supported the agency by quickly raising funds to deploy various resources supporting communities. In the same way that the CDC Foundation bolstered the CDC’s work during the greatest public health challenge in 100 years, a foundation model could be critical in helping an agency like NIST deploy private, philanthropic funds from an independent source to quickly respond to the challenge and opportunity of AI’s advancement.

Another example of an agency-related entity is the newly established Foundation for Energy Security and Innovation (FESI), authorized by Congress via the 2022 CHIPS and Science Act following years of community advocacy to support the mission of the Department of Energy (DOE) in advancing energy technologies and promoting energy security. FESI released a Request for Information in February 2023 to seek input on DOE engagement opportunities with FESI and appointed its inaugural board of directors in May 2024.

NIST itself has demonstrated interest in the potential for expanded partnership mechanisms such as an agency-related foundation. In its 2019 report, the agency notes that “foundations have the potential to advance the accomplishment of agency missions by attracting private sector investment to accelerate technology maturation, transfer, and commercialization of an agency’s R&D outcomes.” NIST is uniquely suited to benefit from an agency-related foundation and its partnership flexibilities, given that it works on behalf of, and in collaboration with, industry on R&D and to develop standards, measurements, regulations, and guidance.

But how could NIST actually leverage a foundation model? A June 2024 paper from the Institute for Progress presents ideas for how an agency-related foundation could support NIST’s work on AI and emerging tech. These include setting up a technical fellowship program that can compete for top talent with formidable companies in the AI space; quickly raising money and deploying resources to conduct “rapid capability evaluations for the risks and benefits of new AI systems”; and hosting large-scale prize competitions to develop “complex capabilities benchmarks for artificial intelligence” that would not be subject to the usual monetary limitations and procedural burdens.

A NIST Foundation, of course, would have implications for the agency’s work beyond AI and other emerging technologies. Interviews with experts at the Federation of American Scientists working across various S&T domains have revealed additional use cases for a NIST Foundation that map to the agency’s topical areas, including but not limited to: 

Critical to the success of a foundation model is having the funding needed to support NIST’s mission and programs. While it is difficult to estimate exactly how much funding a NIST Foundation could draw in from external sources, there is clearly significant appetite from philanthropy to invest in AI research and initiatives. Reporting from Inside Philanthropy uncovered that some of the biggest philanthropic institutions and individual donors—such as Eric and Wendy Schmidt and Open Philanthropy—have donated approximately $1.5 billion to date to AI work. And in November 2023, 10 major philanthropies announced they were committing $200 million to fund “public interest efforts to mitigate AI harms and promote responsible use and innovation.”

Plan of Action

In order to enable NIST to more effectively and efficiently deliver on its mission, especially as it relates to rapid advancement in AI, Congress should authorize the establishment of a NIST Foundation. While the structure of agency-related foundations may vary depending on the agency they support, they all have several high-level elements in common, including but not limited to:

The activities of existing agency-related foundations have left them subject to criticism over potential conflicts of interest. A 2019 Congressional Research Service report highlights several case studies demonstrating concerning industry influence over foundation activities, including allegations that the National Football League (NFL) attempted to influence the selection of research applicants for a National Institutes of Health (NIH) study on chronic traumatic encephalopathy, funded by the NFL through the Foundation for the NIH (FNIH), and the implications of the Coca-Cola Company making donations to the CDC Foundation for obesity and diet research.

To mitigate conflicts of interest and address transparency and oversight concerns, a NIST Foundation should adopt rigorous policies that ensure a clear separation between external donations and project-related decisions. Foundation policies and communications with donors should make explicit that donations will not result in specific project focus and that donors will have no decision-making authority over project management. Donors would have to disclose any potential interests in foundation projects they would like to fund and would not be allowed to be listed as “anonymous” in the foundation’s regular financial reporting and auditing processes.

Additionally, instituting mechanisms for engaging with a diverse range of stakeholders is key to ensure the Foundation’s activities align with NIST’s mission and programs. One option is to mandate the establishment of a foundation advisory board composed of topical committees that map to those at NIST (such as AI) and staffed with experts across industry, academia, government, and advocacy groups who can provide guidance on strategic priorities and proposed initiatives. Many initiatives that the foundation might engage in on behalf of NIST, such as AI safety, would also benefit from strong public engagement (through required public forums and diverse stakeholder focus groups preceding program stand-up) to ensure that partnerships and programs address a broad range of potential ethical considerations and serve a public benefit.

Alongside specific structural components for a NIST Foundation, metrics will help measure its effectiveness. While quantitative measures only tell half the story, they are a starting point for evaluating whether a foundation is delivering its intended impact. Examples of potential metrics include:

Conclusion

Given financial and structural constraints, NIST risks being unable to quickly and efficiently fulfill its mandate related to AI, at a time when innovative technologies, systems, and governance structures are sorely needed to keep pace with a rapidly advancing field. Establishing a NIST Foundation to support the agency’s AI work and other priorities would bolster NIST’s capacity to innovate and set technical standards, thus encouraging the safe, reliable, and ethical deployment of AI technologies. It would also increase trust in AI technologies and lead to greater uptake of AI across various sectors where it could drive economic growth, improve public services, and bolster U.S. global competitiveness. And it would help make the case for leveraging public-private partnership models to tackle other critical S&T priorities.

This idea is part of our AI Legislation Policy Sprint. To see all of the policy ideas spanning innovation, education, healthcare, and trust, safety, and privacy, head to our sprint landing page.

Enhancing Federal Climate Initiatives: Integrating Tech-Focused Green Jobs for Equity and Innovation

Federal climate initiatives, like the ‘Climate Corps’ and the National Climate Resilience Framework, overlook the integration of technology-focused green jobs, missing opportunities for equity and innovation in technology, artificial intelligence (AI), and machine learning (ML). Our objective is to advocate for the integration of technology-focused green jobs within these initiatives to foster equity. Leveraging funding opportunities from recent legislation, notably the Infrastructure Investment and Jobs Act (IIJA) and the Environmental Protection Agency’s (EPA) environmental education fund, we aim to craft novel job descriptions, develop tailored training programs, and foster strategic public-private partnerships.

Methods and Approach

Our approach was based on comprehensive research and extensive stakeholder engagement, including discussions with key federal agencies and industry experts, to identify challenges and opportunities for integrating technology-focused green jobs. We engaged with officials and experts from various organizations, including the Department of Energy, EPA, USDA, FEMA, New America, the Benton Institute, National Urban League, Kajeet, the Blue Green Alliance, and the Alliance for Rural Innovation. We conducted data research and analysis, reviewed government frameworks and CRS reports, and surveyed programs and reports from diverse sources.

Challenge and Opportunity

The integration of technology-focused green jobs within existing federal climate initiatives presents both challenges and opportunities. One primary challenge lies in the predominant focus on traditional green jobs within current initiatives, which may inadvertently overlook the potential for equitable opportunities in technology, artificial intelligence (AI), and machine learning (ML). This narrow emphasis risks excluding individuals with expertise in emerging technologies from participating in climate-related efforts, hindering innovation and limiting the scope of solutions. Moreover, the lack of adequate integration of technology within climate strategies creates a gap in inclusive and forward-looking approaches, potentially impeding the effectiveness of initiatives aimed at addressing climate change. Addressing these challenges requires a paradigm shift in how federal climate initiatives are structured and implemented, necessitating a deliberate effort to incorporate technology-driven solutions alongside traditional green job programs.

However, amidst these challenges lie significant opportunities to foster equity and innovation in the climate sector. By advocating for the integration of technology-focused green jobs within federal initiatives, there is an opportunity to broaden the talent pool and harness the potential of emerging technologies to tackle pressing environmental issues. Leveraging funding opportunities from recent legislation, such as the Infrastructure Investment and Jobs Act (IIJA) and the Environmental Protection Agency’s (EPA) environmental education fund, presents a unique opportunity to invest in novel job descriptions, tailored training programs, and strategic public-private partnerships. Furthermore, initiatives aimed at reconciling concerns about equity in job creation and transitions, particularly in designing roles that require advanced degrees and ensuring consistent labor protections, provide avenues for fostering a more inclusive and equitable workforce in the green technology sector. By seizing these opportunities, federal climate initiatives can not only advance technological innovation but also promote diversity, equity, and inclusion in the emerging green economy.

Plan of Action

Integrating these policy frameworks internally, in a way that reflects market and community needs, will require a multi-faceted approach. In response to the identified challenges and opportunities, we propose the following policy recommendations:

Recommendation 1. Restructuring Federal Climate Initiatives to Embrace Technology-Focused Green Jobs

In light of the evolving landscape of climate challenges and technological advancements, there is a pressing need to review existing federal climate initiatives, such as the ‘Climate Corps’ and the National Climate Resilience Framework, to actively integrate technology-focused green jobs. Doing so creates an opportunity for integrated implementation guidance. This recommendation aims to ensure equitable opportunities in technology, artificial intelligence (AI), and machine learning (ML) within the climate sector while addressing the intersection between climate and technology. By undertaking this restructuring, federal climate initiatives can better align with the demands of the modern workforce and foster innovation in climate solutions. For example, the two aforementioned initiatives and the Executive Order on Artificial Intelligence all reference, implicitly or explicitly, green or climate jobs, equity, job training programs, and tech and climate literacy. There is room to create programs to research and generate solutions around the ecological impacts of AI development under the auspices of the Climate Resilience Framework, and to consider creating roles to implement those solutions as part of the Climate Corps (see Appendix II).

The rationale behind this recommendation lies in the recognition of the imperative to adapt federal climate initiatives to embrace emerging technologies and promote diversity and inclusion in green job opportunities. As the climate crisis intensifies and technological advancements accelerate, there is a growing need for skilled professionals who can leverage technology to address environmental challenges effectively. However, existing initiatives predominantly prioritize traditional green jobs, potentially overlooking the untapped potential of technology-driven solutions. Therefore, restructuring federal climate initiatives to actively integrate technology-focused green jobs is essential to harnessing the full spectrum of talent and expertise needed to confront the complexities of climate change.

  1. Developing a Green Tech Job Initiative. This initiative should focus on creating and promoting jobs in the tech, AI, and ML sectors that contribute to climate solutions. This could include roles in developing clean energy technologies, climate modeling, and data analysis for climate research and policy development. Burgeoning industries such as regenerative finance offer opportunities to combine AI and climate resilience goals. 
  2. Ensuring Equitable Opportunities. Policies should be put in place to ensure these job opportunities are accessible to all, regardless of background or location. One example would be to leverage the Justice40 Initiative and direct its allocations to underserved communities to create targeted training and education programs in tech-driven environmental solutions for underrepresented groups. Additionally, public-private partnerships could be strategically designed to support community-based projects that utilize technology to address local environmental issues.
  3. Addressing the Intersection of Climate and Technology. The intersection of climate and technology should be a key focus of federal climate policy. This could involve promoting the use of technology in climate mitigation and adaptation strategies, considering the environmental impact of the tech industry itself, and strengthening community colleges, accredited online programs, and other low-cost alternatives to traditional education and job training.

Recommendation 2. Leveraging Funding for Technology-Driven Solutions in Federal Climate Initiatives

In order to harness the funding avenues provided by recent legislation such as the Infrastructure Investment and Jobs Act (IIJA) and the Environmental Protection Agency’s (EPA) environmental education fund, strategic policy measures must be implemented to facilitate the development of comprehensive job descriptions, tailored training plans, and robust public-private partnerships aimed at advancing technology-driven solutions within federal climate initiatives. This recommendation underscores the importance of utilizing available resources to cultivate a skilled workforce, foster innovation, and enhance collaboration between government, industry, and academia in addressing climate challenges through technology.

The rationale behind this recommendation is rooted in the recognition of the transformative potential of technology-driven solutions in mitigating climate change and building resilience. With significant funding streams allocated to climate-related initiatives, there is a unique opportunity to invest in the development of job descriptions that reflect the evolving demands of the green technology sector, as well as training programs that equip individuals with the necessary skills to excel in these roles. Moreover, fostering robust public-private partnerships can facilitate knowledge sharing, resource pooling, and joint innovation efforts, thereby maximizing the impact of federal climate initiatives. By strategically leveraging available funding, federal agencies can catalyze the adoption of technology-driven solutions and drive progress towards a more sustainable and resilient future.

  1. Comprehensive Job Descriptions. Develop comprehensive job descriptions for technology-focused green jobs within federal climate initiatives. These descriptions should clearly outline the roles and responsibilities, required skills and qualifications, and potential career paths. This could be overseen by the Department of Labor (DOL) in collaboration with the Department of Energy (DOE) and the EPA.
  2. Tailored Training Plans. Establish tailored training plans to equip individuals with the necessary skills for these jobs. This could involve partnerships with educational institutions and industry bodies to develop curriculum and training programs. The National Science Foundation (NSF) could play a key role in this, given its mandate to promote science and engineering education.
  3. Public-Private Partnerships. Foster robust public-private partnerships to advance technology-driven solutions within federal climate initiatives. This could involve collaborations between government agencies, tech companies, research institutions, and non-profit organizations. The Department of Commerce, through its National Institute of Standards and Technology (NIST), could facilitate these partnerships, given its role in fostering innovation and industrial competitiveness.

Recommendation 3. Updating Bureau of Labor Statistics Job Categories for Green and Tech Jobs

To address the outdated Bureau of Labor Statistics (BLS) job categories, particularly in relation to the green and innovation economies, federal agencies and stakeholders must collaborate to support an update of these categories and classifications. This recommendation emphasizes the importance of modernizing job classifications to accurately reflect the evolving nature of the workforce, especially in sectors related to green technology and innovation.

The rationale behind this recommendation is rooted in the recognition of the significant impact that outdated job categories can have on program and policy design, particularly in areas related to green and technology-driven jobs. Currently, the green jobs categorization work has been interrupted by sequestration, and the tech job updates are on differing schedules. By updating BLS job categories to align with current market trends and emerging technologies, federal agencies can ensure that workforce development efforts are targeted and effective. Moreover, fostering collaboration between public and private sector stakeholders, alongside inter-agency work, can provide the necessary support for BLS to undertake this update process. Through coordinated efforts, agencies can contribute valuable insights and expertise to inform the revision of job categories, ultimately facilitating more informed decision-making and resource allocation in the domains of green and tech jobs.

  1. Inter-Agency Collaboration. Establish an inter-agency task force, including representatives from the BLS, Department of Energy (DOE), Environmental Protection Agency (EPA), and Department of Labor (DOL), to review and update the current job categories and classifications. This task force would be responsible for ensuring that the classifications accurately reflect the evolving nature of jobs in the green and innovation economies.
  2. Public-Private Partnerships. Engage in public-private partnerships with industry leaders, academic institutions, and non-profit organizations. These partnerships can provide valuable insights into the changing job landscape and help inform the update of job categories and classifications.
  3. Stakeholder Engagement. Conduct regular consultations with stakeholders, including employers, workers, and unions in the green and innovation economies. Their input can ensure that the updated classifications accurately represent the realities of the job market.
  4. Regular Updates. Implement a policy for regular reviews and updates of job categories and classifications, particularly in renewing and syncing the green and tech job categories. The Office of Management and Budget can offer guidance about regular reviews and feedback based on government-wide standards. Initiating such a policy may require additional personnel in the short term, but over the long term it will increase agency efficiency. It will also ensure that the classifications remain relevant as the green and innovation economies continue to evolve (see FAQ section and Appendix I).

Conclusion

The integration of technology-focused green jobs within federal climate initiatives is imperative for fostering equity and innovation in addressing climate challenges. By restructuring existing programs and leveraging funding opportunities, the federal government can create inclusive pathways for individuals to contribute to climate solutions while advancing in technology-driven fields. Collaboration between government agencies, private sector partners, educational institutions, and community stakeholders is essential for developing comprehensive job descriptions, tailored training programs, and strategic public-private partnerships. Moreover, updating outdated job categories and classifications through inter-agency collaboration and stakeholder engagement will ensure that policy design accurately reflects the evolving green and innovation economies. Through these concerted efforts, the federal government can drive sustainable economic growth, promote workforce development, and address climate change in an equitable and inclusive manner.

Frequently Asked Questions
How will the integration of technology-focused green jobs enhance federal climate initiatives?
Integrating technology-focused green jobs within federal climate initiatives aims to broaden the scope of these programs to include opportunities in technology, artificial intelligence (AI), and machine learning (ML). This approach not only fosters innovation in tackling climate challenges but also ensures equitable access to emerging job markets. By leveraging advancements in technology, these initiatives can harness a wider range of solutions to environmental issues, thereby enhancing the effectiveness and inclusivity of climate action efforts.
What measures are proposed to ensure equitable opportunities in technology-focused green jobs?
To ensure equitable opportunities, the policy memo recommends the development of novel job descriptions and tailored training programs that are accessible to all, regardless of background or location. Strategic public-private partnerships are also advocated to leverage resources and expertise from both sectors. These efforts aim to create pathways for diverse candidates to engage in technology-driven roles within the climate sector, promoting diversity, equity, and inclusion in the emerging green economy.
How does BLS normally update its job classifications?
The BLS uses surveys, public feedback, and labor market data to inform its classifications. It also works with state and local governments, private industry and other stakeholders. Different BLS products are released on a variety of timetables.
What are the opportunities to engage with the process?
The next update is in 2028, and calls for comments may happen sometime this year. Besides public comments, there is an opportunity for the Office of Management and Budget (OMB) to assist. OMB can enhance federal agencies’ data reporting efficiency by establishing clear guidelines, promoting the use of advanced technologies like data analytics, and fostering interagency collaboration to share best practices. Encouraging the adoption of modern technologies can automate and streamline data collection, leading to more frequent updates.
How will the recommended changes to BLS job categories impact the green and tech job markets?
Updating the BLS job categories to accurately reflect the evolving nature of green and tech jobs is crucial for informed decision-making and effective resource allocation. This change will provide policymakers, employers, and workers with a clearer understanding of the job market, enabling targeted workforce development efforts and facilitating the alignment of educational programs with industry needs. By accurately classifying these roles, the federal government can better track employment trends, support job creation, and ensure that policies are responsive to the dynamics of the green and innovation economies.

Appendix

The recommendations outlined in this memo represent the culmination of extensive research and collaborative efforts with stakeholders. As of March 2024, while the final project and products are still undergoing refinement through stakeholder collaboration, the values, solutions, and potential implementation strategies detailed here are the outcomes of a thorough research process.

Our research methodology was comprehensive, employing diverse approaches such as stakeholder interviews, data analysis, examination of government frameworks, review of Congressional Research Service (CRS) reports, and surveying of existing programs and reports.

Stakeholder interviews were instrumental in gathering insights and perspectives from officials and experts across various sectors, including the Department of Energy, FEMA, New America, the Benton Institute, National Urban League, Kajeet, and the Alliance for Rural Innovation. Ongoing efforts are also in place to engage with additional key stakeholders such as the EPA, USDA, select Congressional offices, labor representatives, and community-based organizations and alliances.

Furthermore, our research included a thorough analysis of Bureau of Labor Statistics (BLS) data to understand industry projections and job classification limitations. We employed text mining techniques to identify common themes and cross-topic programming or guidance within government frameworks. Additionally, we reviewed CRS reports to gain insights into public policy writings on related topics and examined existing programs and reports from various sources, including think tanks, international non-governmental organizations (INGOs), non-governmental organizations (NGOs), and journalism.
The detailed findings of our research, including analyzed data, report summaries, and interview portfolio, are provided as appendices to this report, offering further depth and context to the recommendations outlined in the main text.

I. BLS Data Analysis: Employment Trends in Tech-Related Industries (2022-2032)

This section provides a detailed analysis of employment statistics extracted from CSV data across various industries, emphasizing green, AI, and tech jobs. The analysis outlines notable growth and potential advancement areas within technology-related sectors.

The robust growth in employment figures across key sectors such as computer and electronic product manufacturing, software publishing, and computer systems design underscores the promising outlook for tech-related job sectors. Similarly, the notable expansion within the information sector, while not explicitly AI-focused due to industry constraints, signals an escalating demand for skill sets closely aligned with technological advancements.

Moreover, the significant growth observed in support activities for agriculture and forestry hints at progressive strides in integrating green technologies within these domains. This holistic analysis not only sheds light on evolving employment trends but also provides valuable insights into market dynamics. Understanding these trends can aid in identifying potential opportunities for workforce development initiatives and strategic investments, ensuring alignment with emerging industry needs and fostering sustainable growth in the broader economic landscape.

Further Analysis

This nuanced analysis illuminates the varied trajectories across different industries, highlighting both areas of growth and challenges. It underscores the importance of proactive strategic planning and adaptation to navigate the evolving employment landscape effectively.

Regarding job classifications, while the Bureau of Labor Statistics (BLS) provides valuable insights, it may not fully capture emerging roles in next-gen fields like AI, Web 3.0, Web 4.0, or climate tech. Exploring analogous roles or interdisciplinary skill sets within existing classifications can offer a starting point for understanding employment trends in these innovative domains. Additionally, leveraging alternative sources of data, such as industry reports or specialized surveys, can complement BLS data to provide a more comprehensive picture of evolving employment dynamics.

Based on information from the Bureau of Labor Statistics (BLS) and our survey of existing reports, we found the following:

Market Demand for Tech Jobs. The BLS projects that overall employment in computer and information technology occupations will grow much faster than the average for all occupations from 2022 to 2032. This suggests that these jobs are being filled, but not quickly enough to meet market demand.

Green and Tech Jobs. The BLS produces data on jobs related to the production of green goods and services, jobs related to the use of green technologies and practices, and green careers. Many of the jobs listed on the provided BLS links fall under tech jobs, especially those related to AI, Web 3.0, and Web 4.0. However, specific data on jobs related to regenerative finance or climate tech was not found in the search results.

Education Requirements. Most of the jobs listed on the provided BLS links typically require a Bachelor’s degree for entry. Some occupations may require a Master’s degree or higher. However, the exact education requirement can vary depending on the specific role and employer expectations.

These industries demonstrate growth potential from 2022 to the projected 2032 data, underscoring the increasing demand for tech-related job sectors, especially in computer and electronic product manufacturing, software publishing, and computer systems design. The information sector also shows significant growth, potentially reflecting the rise in AI and technology advancements.

Appendix I.A. BLS Data and Standard Occupation Codes (Climate Corps Specific)

These job classifications encompass a range of roles pertinent to green initiatives, infrastructure technology, and AI/ML development, reflecting the evolving landscape of employment opportunities.

Intersection of Green Jobs

Here’s a summary based on the jobs that explicitly refer to green jobs and the Federal Job Codes requiring different levels of education:

Green Jobs:

Federal Job Codes/Roles Requiring Different Levels of Education:

Bachelor’s Degree

Associate’s Degree

High School Diploma

Please note that while this list includes occupations that explicitly require a bachelor’s degree, associate’s degree, or high school diploma and showed up in an NLP search, it may have missed jobs that require certifications only. Additionally, other green job titles such as environmental engineers, conservationists, social scientists, and environmental scientists may require advanced degrees.

II. Analysis of Executive Orders, Frameworks, Technical Assistance Guides, and Initiatives

This appendix analyzes executive orders, frameworks, technical assistance guides, and initiatives related to green and climate jobs, equity, job training programs, and tech and climate literacy. It presents findings from documents such as the American Climate Corps initiative, National Climate Resilience Framework, and Executive Order on AI, focusing on their implications for job creation and skills development in the green and tech sectors.

*Note on methods: these documents were text mined (initially with SAS NLP, later with a bespoke application built by a team member; the creation of the text-mining browser add-on is explained in the methods overview/disclosure) using the key terms “green,” “climate,” “equity,” “training,” “technology,” and “literacy.”
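For readers who want to replicate the key-term scan, a minimal sketch is below. It assumes plain-text copies of the analyzed documents saved locally (the file names are hypothetical placeholders); the original analysis used SAS NLP and a bespoke tool, so this Python version only illustrates the general approach.

```python
# Illustrative keyword scan over policy documents, mirroring the key-term
# text mining described in the note above. The term list comes from the
# note; the file names are hypothetical placeholders.
import re
from collections import Counter
from pathlib import Path

KEY_TERMS = ["green", "climate", "equity", "training", "technology", "literacy"]

def term_counts(text: str) -> Counter:
    """Count whole-word, case-insensitive occurrences of each key term."""
    counts = Counter()
    for term in KEY_TERMS:
        counts[term] = len(re.findall(rf"\b{term}\b", text, flags=re.IGNORECASE))
    return counts

if __name__ == "__main__":
    # Hypothetical local copies of the documents analyzed in this appendix.
    for doc in ["climate_corps.txt", "resilience_framework.txt", "ai_executive_order.txt"]:
        path = Path(doc)
        if path.exists():
            print(doc, dict(term_counts(path.read_text(encoding="utf-8"))))
```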

Climate Corps:

National Climate Resilience Framework:

Executive Order on AI:

Please note that this analysis is based on provided excerpts, and the full documents may contain additional relevant insights.

Technical Assistance Guidance: Creating Green or Climate Jobs

The Bipartisan Infrastructure Law (BIL) and the Inflation Reduction Act (IRA) are poised to create green or climate jobs, strengthen equity, and bolster job training programs, signaling a concerted effort towards enhancing tech and climate literacy across the workforce and the general U.S. population.

Creating Green or Climate Jobs

Both the Bipartisan Infrastructure Law (BIL) and the Inflation Reduction Act (IRA) are anticipated to generate green or climate jobs. The BIL aims to enhance the nation’s resilience to extreme weather and climate change, concurrently mitigating greenhouse gas emissions. Similarly, the IRA is forecasted to yield over 9 million quality jobs in the forthcoming decade.

Considering Equity

Both the BIL and the IRA prioritize equity in their provisions. The BIL endeavors to connect historically disadvantaged and underserved communities to job opportunities and economic empowerment. Similarly, the IRA addresses energy equity through its climate provisions and investment tax credits in renewable energy.

Strengthening Job Training Programs

Both laws incorporate provisions for enhancing job training programs. The BIL allocates over $800 million in dedicated investments toward workforce development, while the IRA mandates workforce development and apprenticeship requirements.

Increasing Tech and Climate Literacy within the Federal Workforce

Although explicit information on boosting tech and climate literacy within the federal workforce is lacking, both the BIL and the IRA include provisions for workforce development and training. These initiatives could potentially encompass tech and climate literacy training.

Increasing Tech and Climate Literacy in the General U.S. Population

The substantial investments in clean energy and climate mitigation under the BIL and the IRA may indirectly contribute to enhancing tech and climate literacy across the general U.S. populace. However, there is no specific information regarding programs aimed at directly augmenting tech and climate literacy in the general population.

III. Report Summaries

Insights from various reports shed light on the demand for tech and green jobs, digital skills, and challenges in the broadband workforce. Drawing from reputable sources such as Bank of America, BCG, the Federal Reserve Bank of Atlanta, and others, these summaries emphasize the necessity for targeted educational and policy interventions.

These reports collectively underscore the growing demand for digital and green skills in the U.S. workforce, accompanied by a shortage of skilled workers. Collaboration between policymakers and educators is essential to provide adequate training and education for success in the digital and green economies.

IV. Digital Discrimination Reports

This section delves into findings from reports on digital discrimination, broadband access, and AI literacy, sourced from reputable institutions such as The Markup, Consumer Reports, Pew, and the World Economic Forum. These reports illuminate the inequities present in digital access and knowledge and emphasize the necessity for equitable policies to foster widespread participation in the digital economy.

These reports collectively underscore the urgent need for policies supporting digital skill development and ensuring affordable, high-speed internet access across the U.S. Policymakers are urged to collaborate in providing necessary training and education opportunities to empower workers for success in the digital economy.

V. CRS Report Summaries

This section provides a synopsis of Congressional Research Service (CRS) reports addressing skills gaps, broadband considerations, job training programs, and economic assistance for transitioning communities. These reports offer insights into legislative and policy contexts for bridging digital divides and supporting transitions to green economies, with a focus on workforce development and economic assistance.

Overall, these reports underscore the necessity for policies supporting the development of digital and green skills, as well as ensuring equitable access to high-speed internet across the U.S. Policymakers are urged to collaborate in providing necessary training and education opportunities to empower workers for success in evolving economic landscapes.

U.S. Water Policy for a Warming Planet

In 2000, Fortune magazine observed, “Water promises to be to the 21st century what oil was to the 20th century: the precious commodity that determines the wealth of nations.” Like petroleum, freshwater resources vary across the globe. Unlike petroleum, no living creature survives long without it. Recent global episodes of extreme heat intensify water shortages caused by extended drought and overpumping. Creating actionable solutions to the challenges of a warming planet requires cooperation across all water consumers.

The Biden-Harris administration should work with stakeholders to (1) develop a comprehensive U.S. water policy to preserve equitable access to clean water in the face of a changing climate, extreme heat, and aridification; (2) identify and invest in agricultural improvements to address extreme heat-related challenges via U.S. Department of Agriculture (USDA) and Farm Bill funding; and (3) invest in water replenishment infrastructure and activities to maintain critical surface and subsurface reservoirs. America’s legacy water rules, developed under completely different demographic and environmental conditions than today, no longer meet the nation’s current and emerging needs. A well-conceived holistic policy will optimize water supply for agriculture, tribes, cities, recreation, and ecosystem health even as the planet warms.

Challenge and Opportunity

In 2023, the National Oceanic and Atmospheric Administration (NOAA) recorded the hottest global average temperature since records began 173 years prior. In the same year, the U.S. experienced a record 28 separate billion-dollar weather and climate disasters. The earth system responds to increasing heat in a variety of ways, most of them involving swings in weather and water cycles. Warming air holds more moisture, increasing the possibility of severe storm events. Extreme heat also depletes soil moisture and increases evapotranspiration. Finally, warmer average temperatures across the U.S. induce northward shifts in plant hardiness zones, reshaping state economies in the process.

As a result, agriculture currently experiences billions of dollars in losses each year (Fig. 1). Drought, made worse by high heat conditions, accounts for a significant amount of the losses. In 2023, 80% of emergency disaster designations declared by USDA were for drought or excessive heat.

Figure 1

Agriculture consumes up to 80% of the freshwater used annually. Farmers rely on surface water and groundwater during dry conditions, as climate change systematically strains water resources. Rising heat can increase overall demand for water for irrigating crops, exacerbating water shortages. Plants need more water; evapotranspiration rates increase to keep internal temperatures in check. Warming is also shrinking the snowpack that feeds rivers, driving a “snow loss cliff” that will impact future supply. Compounding all of this, Americans have overused depleted reservoirs across the country, leading to a system in crisis.

America’s freshwater resources fall under a tangle of state, local, and watershed agreements cobbled together over the past 100 years. In general, rules fall into two main categories: riparian rights and prior appropriation. In the water-replete eastern U.S., states favor riparian rights. Under this doctrine, property owners generally maintain local use of the water running through the property or in the aquifer below it, except in the case of malicious overuse. Most riparian states currently fall under the Absolute Dominion (or the English) Rule, the Correlative Rights Doctrine, or the Reasonable Use Rule, and many use term-limited permitting to regulate water rights (Table 1). In the arid western region, states prefer the Doctrine of Prior Appropriation. Under this scheme, termed “first in time, first in right,” property owners with older claims have priority over all newer claimants. Unlike riparian rights, prior appropriation claims may be separated from the land and sold or leased elsewhere. Part of the rationale for this is that prior appropriation claims refer to shares of water that must be transported to the land via canals or pipes, rather than water that exists natively on the property, as found in the riparian case. Some states use a mix of the two approaches, and some maintain separate groundwater and surface water rules (Fig. 2).

Figure 2

Original “use it or lose it” rules required claimants to take their entire water allotment as a hedge against speculation by absentee owners. While persistent drought and overuse reduced water availability over time, “use it or lose it” rules continue to penalize reduction in usage rates, making efficiency counterproductive. For example, Colorado’s “use it or lose it” rule remains on the books, despite repeated efforts to revise it. In a sign of progress, in 2021, Arizona passed a bipartisan law to change its “use it or lose it” rule to guarantee continued water rights if users choose to conserve water.

Water scarcity extends well beyond the arid western states. In the Midwest, higher temperatures and drought exacerbate overpumping that continues to deplete the vast Ogallala Aquifer that underlies the Great Plains (Fig. 3). Driven in part by rising temperatures, the effective 100th meridian that separates the arid West from the humid East appears to have shifted east by about 140 miles since 1980, indicating creeping aridification across the Midwest. The drought-impacted Mississippi River level has dropped for two consecutive years, impeding river transport and causing saltwater intrusion into Louisiana groundwater, contaminating formerly potable water in many wells.

Figure 3. Changes to the water level of the Ogallala Aquifer that underlies most of the Great Plains states show depletion in most regions

Recognition of water’s increased importance, especially in a future of more extreme heat and its cascading impacts, drives new markets for the trade of physical water. The impetus for some markets arises from the variance in water availability and cost between different industries and communities. Ideally, benefits accrue to both sellers and buyers by offering a valuable revenue stream for meeting a resource need. Markets differ between groundwater and surface water. For groundwater markets, agreements allow one user to trade some portion of allocated pumping rights to another local user, although impacts to neighbors and ecosystems that share the aquifer must be considered. Successful groundwater trades rely on accurate assessments of subsurface water levels over time. For surface water trades, a portion of the prior appropriation water can be sold or leased to another user regardless of proximity, or banked for future use. Legislation passed in 2022 enables Colorado River Indian Tribes to lease or trade newly settled water rights, or to bank them for future use in surface or subsurface reservoirs without facing a “use it or lose it” penalty.

There are less obvious water considerations as well. Importing and exporting heavily irrigated crops or water-intensive commodities amounts to virtual water trade. The most common virtual water export involves foreign sale of American-grown crops. Other means include sales or leases of domestic land to foreign entities that grow water-intensive crops on U.S. soil, often on arid land, for export. Virtual water trades occur within the U.S. as well, through the exchange of goods and services.

Developing a framework for cooperation across end users, complementary to previous frameworks recommended for the Ogallala Aquifer, creates a mechanism to address urgent water issues. Establishing the federal government’s role to convene and collaborate with stakeholders helps all parties participate within a common structure toward solving a mutual problem. To promote sustained productivity and water resources in the face of extreme heat and aridification, a holistic federal water policy should focus on:

The Biden-Harris administration should develop a plan that creates incentives for all stakeholders to participate in water management policy development in the face of rising heat and climate change. Specifically, discussions must consider real reservoir volumes (surface and subsurface), current and future temperatures, annual rain and snow measurements, evapotranspiration calculations, and estimates of current and future water needs and trades across all end users. History supports federal assistance in thorny resource management areas. One close analog, that of fisheries management, shows the power of compromise to conserve future resources despite fierce competition. 

Plan of Action

Recommendation 1. The White House Council on Environmental Quality should convene a working group of experts from across federal and state agencies to develop a National Water Policy to future-proof water resources for a hotter nation.

Progress toward increased scientific understanding of the large-scale hydrologic cycle offers new opportunities for managing resources in the face of change. Management efforts started at local scales and expanded to regional scales. Country-wide management requires a more holistic view. The U.S. water budget is moving to a more unstable regime. Climate change and extreme heat add complexity by shifting weather and water cycles in real-time. Improving the system balance requires convening stakeholders and experts to formulate a high-level policy framework that:

As such, the White House Council on Environmental Quality should convene a working group of experts from across federal and state agencies to create a comprehensive National Water Policy. Relevant government agencies include the Department of the Interior (DOI); the U.S. Geological Survey (USGS); the Bureau of Indian Affairs; the U.S. Army Corps of Engineers (USACE); the Federal Emergency Management Agency (FEMA); the Department of Commerce; NOAA; and the USDA. The envisioned National Water Policy complements the U.S. Government Global Water Strategy.

Figure 4. Map of principal aquifers of the U.S.

via USGS

Data products to support the creation of a robust National Water Policy already exist (Fig. 4). USGS, FEMA, the National Weather Service, USDA’s Natural Resources Conservation Service, and NOAA’s National Climate Data Center, Office of Water Prediction, and National Water Center all contribute data critical to the development of both high-level and regional-scale assessments and the data layers crucial for short- and mid-term planning. Scheduling periodic reassessments as more data accrue and models improve supports effective decision-making as climate change and extreme heat continue to alter the hydrologic cycle. An overall water policy must remain dynamic due to changing trends and new data.

National, regional, and local aspects of the water budget and related models and visualizations help federal and state decision makers develop a strategic plan for modernizing water rights for river water, basins, and groundwater, and for identifying risks to supplies (e.g., decreasing snowpack due to higher heat) and opportunities for recharge. Stakeholders and water managers with shared knowledge of well-documented data are best positioned to determine minimum reservoir volumes in the primary storage basins, including aquifers, in alignment with the objectives of the National Strategy to Develop Statistics for Environmental and Economic Decisions. By creating a strategy that uses actual average values to maintain reservoir volumes, some of the potential shocks created by drought years and high heat could be cushioned, and related financial losses could be avoided or mitigated. Ultimately, stakeholders and managers must share a common understanding of the water budget when seeking to resolve water rights disputes, to review and revise water rights, and to inform trades.
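As a point of reference, the kind of water-budget accounting described here typically builds on a simplified storage identity; the notation below is illustrative rather than drawn from any specific agency model:

```latex
\Delta S \;=\; P \;-\; ET \;+\; Q_{\mathrm{in}} \;-\; Q_{\mathrm{out}} \;-\; W_{\mathrm{net}}
```

Here $\Delta S$ is the change in stored water (surface reservoirs plus aquifers) over the accounting period, $P$ is precipitation (rain and snowmelt), $ET$ is evapotranspiration, $Q_{\mathrm{in}}$ and $Q_{\mathrm{out}}$ are streamflow into and out of the basin, and $W_{\mathrm{net}}$ is net human withdrawal (pumping and diversions minus managed recharge). Maintaining target reservoir volumes amounts to choosing policies that keep $\Delta S$ near zero across average years.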

Basin and local data promote development of a strategic framework for water trades. As trades and markets continue to grow, states and municipalities must account for water rights, including both the lease and sale of rights, to buffer large fluctuations in water prices and availability. Emerging markets to “buy” water and “bank” it for sale at a higher price during drought years and/or high heat events should also be monitored and evaluated by relevant agencies such as the Department of Commerce. States’ and investors’ maintenance of transparency around market activities, including investor purchases of land with water rights, promotes fair trade and ensures stakeholder confidence in the process.

Finally, to communicate clearly with the public, funds should be provided through the DOI budget to NOAA and USGS data scientists to create decision-support tools that build on the work already underway through mature databases (e.g., at drought.gov and water.weather.gov). New water visualization tools to show the nowcast and forecast of the national water status would help the public understand policy decisions, akin to depictions used by weather forecasters. Variables should include heat index, humidity, expected evapotranspiration, precipitation, surface volumes, and groundwater levels, along with information on water use restrictions and recharge mechanisms at the local level. Making this product media-friendly aids public education and bolsters policy adoption and acceptance.
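As a rough sketch of what one record in such a decision-support feed might contain, the example below uses the variables just listed; every field name, unit, and value is an illustrative assumption, not an existing agency schema.

```python
# A minimal sketch of the kind of record a national "water status"
# nowcast/forecast tool might serve to the public. All field names,
# units, and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class WaterStatusRecord:
    region: str                              # e.g., a basin or county identifier
    heat_index_f: float                      # degrees Fahrenheit
    relative_humidity_pct: float             # percent
    expected_evapotranspiration_in: float    # inches over the forecast period
    precipitation_in: float                  # inches over the forecast period
    surface_storage_pct_capacity: float      # reservoir volume, % of capacity
    groundwater_level_ft: float              # depth to water table, feet
    use_restrictions: str                    # local water use restriction status
    recharge_activity: str                   # local recharge mechanisms in effect

# Hypothetical example record.
example = WaterStatusRecord(
    region="Upper Colorado",
    heat_index_f=104.0,
    relative_humidity_pct=18.0,
    expected_evapotranspiration_in=0.35,
    precipitation_in=0.0,
    surface_storage_pct_capacity=42.5,
    groundwater_level_ft=118.0,
    use_restrictions="Stage 2 outdoor watering limits",
    recharge_activity="None scheduled",
)
```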

Recommendation 2. USDA should invest in infrastructure, research, and development.

Agriculture, as the largest water consumer, faces scarcity in the coming years even as populations continue to grow. Increasing demands on a dwindling resource lead to conflict and acrimony. To ease tensions and maintain the goods and services needed to fuel the U.S. economy in the future, investment in both immediately practicable, future-proofed, heat-resilient water solutions and in over-the-horizon research and development must commence. To prepare, USDA will need to:

To support these efforts and broader climate resilience needs of farmers, Congress can:

Recommendation 3. Federal, state, and local governments must invest in replenishing water reserves.

To balance water shortage, federal, state, and local governments must invest in recharging aquifers and reservoirs while also reducing losses due to flooding. Opportunities for flood basin recharge arise during wet years, especially given the shift from longer, more frequent, lighter rainstorms to shorter, less frequent bursts of very heavy rainfall. Federal agencies currently have opportunities to leverage Inflation Reduction Act (IRA) and BIL money for replenishment, including the following:

Congress can further support these actions by:

Figure 5. Map of insured flood claims

via Washington Post, with credit to Federal Emergency Management Agency, Natural Resources Defense Council

Conclusion

Water policy varies regionally, by basin, and by state. Because aquifers cross regions and water supplies vary over interstate and international boundaries, the federal government is the best arbiter for managing a dynamic, precious resource. By treating the hydrologic cycle as a dynamic system, data-driven water policy benefits all stakeholders and serves as a basis for future federal investment.

This idea of merit originated from our Extreme Heat Ideas Challenge. Scientific and technical experts across disciplines worked with FAS to develop potential solutions in various realms: infrastructure and the built environment, workforce safety and development, public health, food security and resilience, emergency planning and response, and data indices. Review ideas to combat extreme heat here.

Frequently Asked Questions
Why should the Department of the Interior coordinate the stakeholder engagement rather than the states?

DOI already manages surface waters in some basins through the Bureau of Reclamation and through the decision in Arizona v. California. DOI also coordinates water infrastructure investments across multiple states via BIL funding. Furthermore, DOI agencies actively engage in collecting and sharing water resource data across the U.S. Because DOI maintains a holistic view of the hydrologic cycle and currently engages with stakeholders across the country on water concerns, it is best positioned to lead the discussions.

How does DOI know who the stakeholders should be for each region?

DOI, through the USGS, mapped out most of the largest U.S. aquifers (Fig. 4) and drainage basins. The main stakeholders for each reservoir emerge through those maps. 

How can farmers protect their livelihoods in light of all of the competing water interests?

The best way to maintain agricultural production is to invest in increasingly water-efficient farming practices and infrastructure. For example, installing canal liners, pipes, and smart watering equipment reduces water loss during conveyance and application. Funds have been allocated under the BIL and IRA for water infrastructure upgrades. Some federal and state agencies offer grants in support of increased water efficiency. Working with seed companies to select drought- and/or flood-tolerant variants offers another approach. Farmers should also encourage funding agencies to ramp up groundwater replenishment activities and to accelerate development of new supporting technologies that will help maintain production.

How can farmers add agrivoltaics or other kinds of renewable energy to their property?

Funds or tax credits are available to help defray some of the costs of installing renewable energy on rural land. Various agencies also offer targeted funding opportunities to test agrivoltaics; these opportunities tend to entail collaboration with university partners.

Why is there so much controversy around the Colorado River water allotments?

Over a century ago, the prior appropriation doctrine attracted homesteaders to the arid Colorado River basin by offering set water entitlements. Several early miscalculations contributed to the basin’s current water crisis. First, the average annual flow of the Colorado River used to calculate entitlements was overestimated. Second, entitlements grew to exceed the overestimated annual flow, compounding the deficit. Third, water entitlement plans failed to set aside specific shares for federally recognized tribes as well as the vast populations that responded to the call to move west. Finally, “use it or lose it” rules that govern prior appropriation entitlements created roadblocks to progress in water use efficiency.

Are there any existing water markets?

A water futures market already exists in California.

Many of the homeowners impacted by repeated flooding are disadvantaged. How can the government help these homeowners without disenfranchising them when converting these properties to buffer zones?

Program leaders would need to work cooperatively with impacted families to find agreeable home sites away from flood zones, especially in close-knit communities where residents have established ties with neighbors and businesses. If desired and when practicable, existing homes could be transported to drier ground. Working with all of the stakeholders in the community to chart a path forward remains the best and most equitable policy.

Defining Disaster: Incorporating Heat Waves and Smoke Waves into Disaster Policy

Extreme heat – and similar people-centered disasters like heavy wildfire smoke – kills thousands of Americans annually, more than any other weather disaster. However, U.S. disaster policy is more equipped for events that damage infrastructure than those that mainly cause deaths. Policy actions can save lives and money by better integrating people-centered disasters.

Challenge and Opportunity

At the federal level, emergency management is coordinated through the Federal Emergency Management Agency (FEMA), with many other agencies as partners, including the Centers for Disease Control and Prevention (CDC), the Department of Housing and Urban Development (HUD), and the Small Business Administration (SBA). Central to the FEMA process is the requirement under the Stafford Act that the President declare a major disaster, which has never happened for extreme heat. This gap seems to stem from a lack of tools to determine when a heat wave event escalates into a heat wave disaster, as well as the lack of a clear vision of federal responsibilities around a heat wave disaster.

Gap 1. When is a heat event a heat disaster?

A core tenet of emergency management is that events escalate into disasters when the impacts exceed available resources. Impact measurement is increasingly quantitative across FEMA programs, including quantitative metrics used in awarding Fire Management Assistance Grant (FMAG), Public Assistance (PA), and Individual Assistance (IA) and in the Benefit Cost Analysis (BCA) for hazard mitigation grants.

However, existing calculations cannot incorporate the health effects that are the main impact of heat waves. When health impacts are included in a calculation, it is only in limited cases; for example, the BCA allows mental healthcare savings, but only for residential mitigation projects that reduce post-disaster displacement.
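For context, FEMA's hazard mitigation BCA reduces, in simplified form, to a benefit-cost ratio of discounted avoided losses over discounted project costs:

```latex
\mathrm{BCR} \;=\; \frac{\sum_{t=0}^{T} B_t \,(1+r)^{-t}}{\sum_{t=0}^{T} C_t \,(1+r)^{-t}} \;\geq\; 1
```

Here $B_t$ is avoided losses in year $t$, $C_t$ is project costs, $r$ is the discount rate, and $T$ is the project's useful life; a project qualifies when the ratio is at least one. Because health effects largely drop out of $B_t$ today, heat mitigation projects are systematically undervalued.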

Gap 2. What is the federal government’s role in a heat disaster?

Separate from the declaration of a major disaster is the federal government’s role during that disaster. Existing programs within FEMA and its partner agencies are designed for historic disasters rather than those of the modern and future eras. For example, the National Risk Index (NRI), used to understand the national distribution of risks and vulnerability, bases its risk assessment on events between 1996 and 2019. As part of considering future disasters, disaster policy should consider intensified extreme events and compound hazards (e.g., wildfire during a heat wave) that are more likely in the future. 

A key part of including extreme heat and other people-centered disasters will be to shift toward future-oriented resilience and adaptation. FEMA has already been making this shift, including a reorganization to highlight resilience. The below plan of action will further help FEMA with its mission to help people before, during, and after disasters.

Plan of Action

To address these gaps and better incorporate extreme heat and people-centered disasters into U.S. emergency management, Congress and federal agencies should take several interrelated actions.

Recommendation 1. Defining disaster

To clarify that extreme heat and other people-centered disasters can be disasters, Congress should:

(1) Add heat, wildfire smoke, and compound events (e.g., wildfire during a heat wave) to the list of disasters in Section 102(2) of the Stafford Act. Though the list is intended to be illustrative rather than exhaustive, as demonstrated by the declaration of COVID-19 as a disaster despite not being on the list, explicit inclusion of these other disasters clarifies that intent. This action is widely supported; example legislation includes the Extreme Heat Emergency Act of 2023.

(2) FEMA should standardize procedures for determining when disparate events are actually a single compound event. For example, many individual tornadoes in Kentucky in 2021 were determined to be the results of a single weather pattern, so the event was declared a disaster, but wildfires that started due to a single heat dome in 2022 were determined to be individual events and therefore unable to receive a disaster declaration. Compound hazards are expected to be more common in the future, so it is critical to work toward standardized definitions.

(3) Add a new definition of “damage” to Section 102 of the Stafford Act that includes human impacts such as death, illness, economic impacts, and loss of critical function (i.e., delivery of healthcare, school operations, etc.). Including this definition in the statute facilitates the inclusion of these categories of impact.

To quantify the impacts of heat waves, thereby facilitating disaster decisions, FEMA should adopt strategies already used by the federal government. In particular, FEMA should:

(4) Work with HHS to expand the capabilities of the National Syndromic Surveillance Program (NSSP) to evaluate in real time various societal impacts, like medical-care usage and lost work or school days. Recent studies indicate that lost work productivity is a major and currently unaccounted-for impact of extreme heat, a gap of potentially billions of dollars. The NSSP Community of Practice can help expand tools across multiple jurisdictions. Expanding syndromic surveillance expands our ability to measure the impacts of heat, building on the tools available through the CDC Heat and Health Tracker.

(5) Work with CDC to expand their use of excess-death and flu-burden methods, which can provide official estimates of the health impacts of extreme heat. These methods are already in use for heat but should be regularly applied at the federal level; they would complement the data available from health records via NSSP because they estimate the deaths those records miss.
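In its simplest form, the excess-death calculation referenced in recommendation (5) compares observed deaths to a modeled baseline:

```latex
D_{\mathrm{excess}} \;=\; D_{\mathrm{observed}} \;-\; \widehat{D}_{\mathrm{expected}}
```

where $\widehat{D}_{\mathrm{expected}}$ is the mortality a baseline model (e.g., a seasonal model fit to historical death counts) predicts for the same period and population; the difference attributes the deaths that individual health records miss.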

(6) Work with EPA to expand BenMAP software to include official estimates of health impacts attributable to extreme heat. The current software provides official estimates of health impacts attributable to air pollution and is used widely in policy. Research is needed to develop health-impact functions for extreme heat, which could be solicited in a research call such as through NIH’s Climate and Health initiative, conducted by CDC epidemiologists, added to the Learning Agenda for FEMA or a partner agency, or tasked to a national lab. Additional software development is also needed to cover real-time and forecast impacts in addition to the historic impacts it currently covers. The proposed tool complements Recommendations #4-5 because it includes forecast data.
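For reference, BenMAP's existing air pollution health-impact functions commonly take a log-linear form; a heat analog would follow the same structure, with the exposure metric and coefficient supplied by the research this recommendation calls for:

```latex
\Delta y \;=\; y_0 \left(1 - e^{-\beta\,\Delta x}\right) \times \mathrm{Pop}
```

where $\Delta y$ is the change in incidence of a health outcome, $y_0$ the baseline incidence rate, $\beta$ the exposure-response coefficient, $\Delta x$ the change in exposure (pollutant concentration today; for heat, perhaps degree-days above a local threshold), and $\mathrm{Pop}$ the exposed population.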

(7) Quantify heat illness and death impacts. Real-time data is available in the CDC Heat and Health Tracker. These impacts can be converted to dollars for comparison to property damage using the Value of a Statistical Life (VSL), which FEMA already does in the NRI ($11.6 million per death and $1.16 million per injury in 2022). VSL should be expanded across FEMA programs, in particular the decision for major disaster declarations. VSL could be immediately applied to current data from NSSP, to expanded NSSP and excess-death data (Recommendations #4-5), and is already incorporated into BenMAP so would be available in the expanded BenMAP (Recommendation #6).
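A minimal sketch of the monetization step follows, using the NRI valuations cited above; the event counts are hypothetical, standing in for figures that would come from NSSP, excess-death estimates, or BenMAP output (recommendations #4-6).

```python
# Converting heat health impacts to dollars with the Value of a Statistical
# Life, using the NRI figures cited in the text ($11.6M per death and
# $1.16M per injury, 2022). Input counts below are hypothetical.
VSL_DEATH = 11_600_000   # dollars per death (FEMA NRI, 2022)
VSL_INJURY = 1_160_000   # dollars per injury (FEMA NRI, 2022)

def monetized_health_impact(deaths: float, injuries: float) -> float:
    """Dollar-equivalent health impact, comparable to property damage."""
    return deaths * VSL_DEATH + injuries * VSL_INJURY

# Example: a hypothetical heat wave with 25 deaths and 400 heat injuries.
print(f"${monetized_health_impact(25, 400):,.0f}")  # -> $754,000,000
```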

(8) Quantify the impact of extreme heat on critical infrastructure, including agriculture. Improved quantification could build on the infrastructure-damage valuation methods already in the NRI and could be integrated with the National Integrated Heat Health Information System (NIHHIS). The damage and degradation of infrastructure is often underestimated and should be accurately quantified.

Together, these proposed data tools would give FEMA a comprehensive understanding of the impacts of extreme heat on human health in the past, present, and near future, putting heat on the same footing as other disasters.

Real-time impacts are particularly important when FEMA investigates requests for a major disaster declaration. Forecast impacts are important for prepositioning resources, as is currently done for hurricanes. The goal for forecasting should be 72 hours of lead time. To reach this goal from current models (air quality forecasts, for example, generally extend only one day ahead):

(9) Congress should fund additional sensors for extreme weather disasters, to be installed by the appropriate agencies. More detailed ideas can be found in other FAS memos on extreme heat and wildfire smoke and in Recommendation 44 of the recent Wildland Fire Commission report.

(10) Congress should invest in research on integrated wildfire-meteorological models through research centers of excellence funded by national agencies or national labs. Federal agencies can also post specific questions as part of their learning agendas. Models should specifically record the contribution of wildfire smoke from each landscape parcel to overall air pollution in order to attribute impacts to their sources. This recommendation aligns with the Fire Environment Center proposed in the Wildland Fire Commission report.

Table 1. Division of proposed improvements by time period addressed and implementation readiness
Integrate existing capabilities with FEMA. Historic: excess-death methods (#5). Real time: use VSL (#7).
Expand program abilities. Real time: expand infrastructure calculations, NSSP, BenMAP, and sensors (#4–9). Forecast: expand BenMAP (#6) and improve smoke forecasts (#10).
Cross-cutting definitions. All time periods: Stafford Act amendments (#1, #3) and compound events (#2).

Recommendation 2. Determining federal response to heat disasters

To incorporate extreme heat and other people-centered disasters across emergency management, FEMA and its peer agencies can expand existing programs into new versions that cover such disasters. We organize these programs below by phase of emergency management.

Preparedness

(11) Using flood maps as a model, FEMA should create maps for extreme heat and other people-centered disasters. Like flood maps, these new maps should highlight the infrastructure at risk of failure, or the loss of access to critical infrastructure (e.g., FEMA Community Lifelines), during a disaster. Failure here means the inability of infrastructure to provide its critical function(s) when an extreme weather event occurs (e.g., bitumen softening on airport tarmacs, train lines buckling, or schools closing because classrooms are too hot or too smoky), including impacts to evacuation routes and critical infrastructure that would severely impair the functioning of society. Creating such a map requires a major interagency effort integrating detailed information on buildings, heat forecasts, energy grid capacity, and local heat island maps. NIHHIS already convenes most of the interagency collaborators needed, but the effort should also include the Department of Education, and it will likely need direct funding from Congress to proceed at the necessary scale.

(12) FEMA and its partners should publish catastrophic location-specific scenarios to align preparedness planning. Examples include ARkStorm for atmospheric rivers, HayWired for earthquakes, and Cascadia Rising for tsunamis. Such scenarios are useful because they raise public awareness and align practitioner preparedness. A key part of a heat scenario should be infrastructure failure and its cascading impacts; for example, grid failure during a heat wave, and its resulting impact on healthcare, is expected to have devastating effects.

(13) FEMA should incorporate future projections of disasters into the NRI, which currently uses only historic loss data (typically 1996 to 2019). An example framework is the $100 million Prepare California program, which combined historic and projected risks when allocating preparedness funds. An example of the data needed for extreme heat is the projected changes in extreme events included in the New York State Climate Impacts Assessment.

(14) FEMA should expand its Community Lifelines to incorporate extreme heat and its cascading impacts on critical infrastructure, which must remain operable during and after a disaster to avoid significant loss of human life and property.

(15) The Strategic National Stockpile (SNS) should be expanded to include the tools most useful in extreme weather disasters. A key consideration will be fluids, including intravenous (IV) fluids, which the current medically focused SNS excludes because of their weight. The SNS instead relies on IV fluids being present at the impacted location, so if extreme heat causes a shortage, other stockpiled medicines might not be deliverable. Including fluids will require a new logistics model that accounts for their weight.

(16) OSHA should develop occupational safety guidelines to protect workers and students from hazardous exposures, expanding on its directive on outdoor and indoor heat-related hazards. Establishing thresholds, such as maximum indoor air temperatures similar to those considered by California’s Occupational Safety and Health Standards Board, can help define when a weather event escalates into a disaster. No federal occupational standards currently govern indoor temperature or air quality, so California’s example could serve as a template. The need already exists: an average of 2,700 heat-related injuries and 38 heat-related fatalities were reported to OSHA annually between 2011 and 2019.

(17) FEMA and its partners should expand support for community-led multi-hazard resilience hubs, including learning from those focused on extreme heat. FEMA already has its Hubs for Equitable Resilience and Engagement, and EPA has major funding available to support resilience hubs. This equitable model of disaster resilience that centers on the needs of the specific community should be supported.

Response

(18) FEMA should introduce smaller disaster-assistance grants for extreme weather disasters: Heat, Cold, and Smoke Management Assistance Grants (HMAG, CMAG, and SMAG, respectively). They should be modeled on Fire Management Assistance Grants (FMAGs), which are rapidly awarded when firefighting costs exceed available resources but the event does not necessarily escalate to a major disaster declaration. For extreme weather disasters the model would be similar, but eligible activities might include climate-controlled shelters, outreach teams to reach especially vulnerable populations, or a surge in medical personnel and equipment; just as firefighting equipment and staff are needed to fight wildfires, these resources are needed to reduce the impacts of such disasters. FMAGs are supported by the Disaster Relief Fund, so if the new grant programs also tap that fund, additional appropriations will be required. Shelters are already supported by the Public Assistance (PA) program, but PA requires a major disaster declaration, so lower-threshold funds would increase access.

(19) HHS could activate Disaster Medical Assistance Teams to meet surges in medical need. These teams are designed to provide additional medical support during a disaster and are already deployed for other disaster types. See our other memos on this topic.

(20) FEMA could deploy Incident Management Assistance Teams and supporting equipment for additional logistics capacity. It could also deploy backup energy resources such as generators to prevent power failures at critical infrastructure.

Recovery and Mitigation

(21) Programs addressing gray or green infrastructure should consider the impact upgrades will have on heat mitigation. For example, EPA and DOE programs funding upgrades to school gray infrastructure should explicitly consider whether proposed upgrades will meet heat mitigation needs under climate projections, and projects funding schoolyard redesign should explicitly consider heat when placing blacktop, playgrounds, and green space to avoid accidentally creating hot spots where children play. CAL FIRE’s $47 million grant program for schools to convert asphalt to green space is a state-level example.

(22) Expand the eligible actions under FEMA’s Hazard Mitigation Assistance (HMA) to include installation or upgrade of heating, ventilation, and air conditioning (HVAC) systems and a more expansive program to support nature-based solutions (NBS) such as green space installation. Existing guidance already allows HVAC mitigation and incentivizes NBS for other hazards.

(23) Increase alignment across federal programs by identifying programs with shared goals. For example, FEMA recently announced that solar panels are eligible for the 75% federal cost share under its mitigation programs; other climate and weatherization improvements should likewise be eligible under HMA funds.

(24) FEMA should modify its Benefit-Cost Analysis (BCA) process to fairly evaluate mitigation of health and life-safety hazards, to better account for mitigation of multiple hazards, and to address the equity considerations introduced in the Office of Management and Budget’s recent BCA proposal. Some research is likely needed (e.g., the cost-effectiveness of nature-based solutions like green space is not yet well enough defined for use in a BCA); this research could be performed by national labs, added to FEMA’s Learning Agenda, or tasked to a partner agency like DOE.

(25) Expand the definition of medical devices to include items that protect against extreme weather. For example, the Centers for Medicare & Medicaid Services could make air-conditioning units and innovative personal cooling devices eligible for prescription under Medicare and Medicaid.

To support the above recommendations, Congress should:

(26) Ensure FEMA is sufficiently and consistently funded to conduct resilience and adaptation activities. Congress augments the Disaster Relief Fund in response to disasters, but the fund is projected to be billions of dollars in deficit by September 2024, and FEMA has reportedly delayed payments because of funding uncertainty during congressional budget negotiations. To support the programs above, Congress must fund FEMA at the level needed to act. To support FEMA’s shift toward resilience, the increase should come through annual appropriations rather than the Disaster Relief Fund, which is augmented only on an ad hoc basis.

(27) Convene a congressional commission, like the recent Wildland Fire Commission, to analyze federal capabilities around extreme weather disasters and/or extreme heat. Such a commission would source additional ideas and identify political pathways for implementing these solutions, and the magnitude of these disasters merits it.

Conclusion

People across the U.S. are increasingly exposed to extreme heat and other people-centered disasters. The policies and programs suggested here are needed to upgrade national emergency management for the present and future, thereby saving lives and reducing disaster costs to the public.

Frequently Asked Questions
Are the impacts of extreme heat and other people-centered disasters significant enough to be considered disasters?

We estimate a minimum of 1,670 deaths and $157.8 billion in impacts from heat annually. These figures exceed almost every recorded disaster in U.S. history: only COVID-19, 9/11, and Hurricanes Maria and Katrina have caused more deaths, and only Hurricanes Katrina and Harvey have caused more dollar damage. Note that most of the estimates reported here are several years out of date and exclude the major heat waves of 2021 and 2022; individual heat waves have produced sizable death tolls, including 395 deaths in a 2022 California heat wave and 600 deaths in the 2021 Pacific Northwest heat wave.


How could the Stafford Act be amended to include heat waves?

It is insufficient to simply add heat to the list of disasters enumerated in the Stafford Act, because doing so omits (1) recognition of the compound events often associated with extreme heat, (2) other people-centered disasters like smoke waves, and (3) the means to measure these disasters. We therefore recommend some version of the following text:


Section 102(2) of the Robert T. Stafford Disaster Relief and Emergency Assistance Act (42 U.S.C. 5122(2)) is amended by striking “or drought” and inserting “drought, heat, smoke, or any other weather pattern causing a combination of the above”.


Section 102 of the Robert T. Stafford Disaster Relief and Emergency Assistance Act (42 U.S.C. 5122) is amended by inserting:


(13) DAMAGE.—“Damage” means—

  • (A) Loss of life or health impacts requiring medical care

  • (B) Loss of property or impacts on property reducing its ability to function

  • (C) Diminished usable lifespan for infrastructure

  • (D) Economic damage, which includes the value of a statistical life, burden on the healthcare system due to injury, burden on the economy placed by lost days of work or school, agricultural losses, or any other economic damage that is directly measurable or calculated.

  • (E) Infrastructure failure of any duration, including temporary, that could lead to any of the above

Tracking and Preventing the Health Impacts of Extreme Heat

The response to the 9/11 terrorist attacks included building from scratch a bioterrorism-monitoring system that remains a model for public health systems worldwide. Today we face a similarly galvanizing moment: weather-related hazards cause several times the 9/11 death toll each year, with extreme heat, often termed the “top weather killer,” responsible for 1,670 official deaths a year and 10,000 attributed via excess-death analysis. Extreme cold and dense wildfire smoke each cause comparable numbers of deaths. By rapidly upgrading and expanding the health-tracking systems of the Centers for Disease Control and Prevention (CDC), the Veterans Health Administration (VHA), and the Centers for Medicare & Medicaid Services (CMS) to improve real-time surveillance of the health impacts of climate change, the U.S. can similarly meet the current moment and promote climate-conscious care that saves lives.

Challenge and Opportunity

The official death toll from extreme heat since 1979 stands at over 11,000, but the methods used to develop this count are known to underestimate the true impacts of heat waves. The undercounting of deaths related to extreme heat and other people-centered disasters, like extreme cold and smoke waves, hinders the political and public drive to address the problem and makes it harder to declare heat waves as disasters despite the massive loss of life. Similarly, the lack of integration of critical environmental data like “wet bulb” temperature alongside health impacts in electronic data systems hinders the provision of medical care.
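For context, wet-bulb temperature can be approximated from routine weather-station measurements; a minimal sketch using Stull’s (2011) empirical fit, which is valid for roughly 5–99% relative humidity and temperatures from −20°C to 50°C.

```python
import math

def wet_bulb_stull(t_c: float, rh_pct: float) -> float:
    """Approximate wet-bulb temperature (deg C) from air temperature
    (deg C) and relative humidity (%) using Stull's (2011) empirical
    fit, valid for roughly 5-99% RH and -20 to 50 deg C."""
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

# Example: a humid 38 C day. Wet-bulb values approaching 35 C are near
# the limit of human thermoregulation.
print(f"{wet_bulb_stull(38.0, 70.0):.1f} C")
```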

National Accounting

The national reaction to the 9/11 terrorist attacks provides a roadmap: improved data and tracking are fundamental to a nation’s evidence-based threat response. Surveillance systems, operated by the federal, state, and local public health professionals who comprise the CDC’s National Syndromic Surveillance Program (NSSP), were developed across the nation to meet new challenges in disease detection and situational awareness. Since 2020, the CDC’s Data Modernization Initiative (DMI) has provided a framework for this transformation, with the stated goal of improving the nation’s ability to predict, understand, and share data on new health threats in real time. While the DMI has focused on the pioneering role of new technologies in health protection, it also offers a once-in-a-generation opportunity for the public health and medical surveillance establishment to increase its capacity to address pressing threats to the nation’s welfare, including the evolving climate crisis. Extreme weather is increasingly responsible for both near-term disasters (more frequent and intense heat waves, dense smoke waves, and cold waves) and the long-term exacerbation of prevalent health conditions (such as heart, lung, and neurological disease). Its increasingly severe impacts demand a detailed and funded roadmap to attain the DMI’s goal of “real-time, scalable emergency response” capability.

Patient Care

Syndromic surveillance systems track the impacts of events at the population level, but other resources are needed to help individual patients directly during a disaster. Electronic health records (EHRs) allow medical providers to track relevant information that could help diagnose emerging health conditions. Some medical systems have begun tracking nonmedical information to assist in diagnosis, such as the social determinants of health (e.g., housing and food availability), which are linked to improved patient outcomes. However, the environmental conditions a patient has experienced are not typically linked to health records. Improving the links between environmental conditions and EHRs will help patients, for example by determining whether a new asthma diagnosis is related to recent smoke waves, and will also support syndromic surveillance.
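One plausible pattern for such a link, sketched here under assumptions (the field names and lookup are illustrative, not drawn from any real EHR schema or vendor API), is to join an encounter’s time and place to local environmental data and store the result as a structured record alongside the visit.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class EnvExposure:
    """Hypothetical record linking environmental conditions to a
    patient encounter; field names are illustrative only."""
    encounter_id: str
    observed_at: datetime
    wet_bulb_c: float    # wet-bulb temperature near the patient
    pm25_ug_m3: float    # fine particulates, a wildfire-smoke proxy
    heat_advisory: bool  # was a heat advisory active at that time?

def lookup_environment(zip_code: str, when: datetime):
    """Stub standing in for a real lookup against gridded weather and
    air-quality data (e.g., NOAA and EPA sources) keyed by location
    and time; returns fixed values for illustration."""
    return 31.2, 85.0, True

def attach_exposure(encounter_id: str, zip_code: str,
                    when: datetime) -> EnvExposure:
    wet_bulb, pm25, advisory = lookup_environment(zip_code, when)
    return EnvExposure(encounter_id, when, wet_bulb, pm25, advisory)

# Example: annotate an ED visit with the conditions the patient faced.
print(attach_exposure("enc-001", "97214", datetime(2021, 6, 28, 15)))
```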

A similar effect occurs with death records. Death records are typically logged at the patient level as free-form text written by a medical professional who is often under time pressure. The text of each death record is later coded into specific cause codes as it is aggregated into population-level datasets such as the National Vital Statistics System. Information about the environmental conditions that contributed to the death can be lost at any step of this process, resulting in undercounts of climate-related mortality. Improved tracking at the individual level will improve accounting at the national level.
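Some of this lost signal could be recovered by flagging heat-related terms in the free-text cause-of-death narrative and comparing them against the final ICD-10 codes (e.g., X30, exposure to excessive natural heat; T67, effects of heat and light). A minimal sketch, with an illustrative keyword list rather than a validated term set:

```python
import re

# Illustrative keyword list; real reviews of death certificates use
# richer term sets and manual adjudication.
HEAT_TERMS = re.compile(
    r"\b(heat ?stroke|hyperthermia|heat exhaustion|environmental heat)\b",
    re.IGNORECASE,
)
# Heat-related ICD-10 categories (subcodes omitted for simplicity).
HEAT_ICD10 = {"X30", "T67"}

def possibly_missed_heat_death(free_text: str, icd_codes: set[str]) -> bool:
    """True if the narrative mentions heat but no heat-related ICD-10
    code survived the coding process."""
    return bool(HEAT_TERMS.search(free_text)) and not (icd_codes & HEAT_ICD10)

# Example: the narrative mentions heatstroke, but only a cardiac code
# (I46, cardiac arrest) remains after coding.
print(possibly_missed_heat_death(
    "cardiac arrest due to probable heatstroke", {"I46"}))
```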

Plan of Action

In order to track the health impacts from extreme weather events and thereby enhance the provision of medical care during such events, both disaster and health data must be improved.

Recommendation 1. National accounting for health impacts of the climate crisis

The National Syndromic Surveillance Program provides a world-class starting point for better tracking of climate health impacts, both in its technology and in its dedicated, knowledgeable workforce. The following plan would evolve this underlying infrastructure to provide health systems and policymakers with real-time and forecast impacts.

To modernize real-time monitoring of health impacts:

To improve forecasting capabilities of health impacts:

To improve the ability to track health impacts:

Recommendation 2. Improving Patient Care

To integrate environmental conditions into EHRs nationwide:

To support patients during extreme heat:

Conclusion

Deaths from extreme conditions, already high, are forecast to increase in the coming years and decades, potentially defining a new era. It is vital to prepare our health system for these threats, including accurate accounting of their toll, and to better prepare healthcare providers and the public for the conditions they will face.

This idea originated from our Extreme Heat Ideas Challenge. Scientific and technical experts across disciplines worked with FAS to develop potential solutions in various realms: infrastructure and the built environment, workforce safety and development, public health, food security and resilience, emergency planning and response, and data indices. Review ideas to combat extreme heat here.

Frequently Asked Questions
How would including environmental data on EHRs help patients?

While some emergency care providers might be aware of the extreme weather events unfolding outside and therefore be prepared to treat related illness, the situation can change during lengthy shifts, leaving them less well prepared. This disparity between patient exposure and provider expectations can be even greater in rural areas, where patients might travel significant distances and across diverse terrain such that their exposure differs from conditions at the medical facility.


Time is also a factor. For longer-term impacts like asthma complications that could be related to smoke waves, a medical provider might be unaware that a patient experienced heavy smoke and thus be less able to diagnose the resulting respiratory issues.