Addressing the Disproportionate Impacts of Student Online Activity Monitoring Software on Students with Disabilities

Student activity monitoring software is widely used in K-12 schools, often in response to student mental health needs. Education technology companies have developed artificial intelligence (AI) algorithms that seek to detect risk of harm or self-harm by monitoring students’ online activities. This type of software can track student logins, view the contents of a student’s screen in real time, monitor or flag web search history, or close browser tabs for off-task students. While teachers, parents, and students largely report that the benefits of student activity monitoring outweigh the risks, there is still a need to address the ways student privacy might be compromised and to avoid perpetuating existing inequities, especially for students with disabilities. 

To address these issues, Congress and federal agencies should act on the recommendations detailed in the Plan of Action below.

Challenge and Opportunity

People with disabilities have long benefited from technological advances. For decades, assistive technology, ranging from low tech to high tech, has helped students with disabilities learn. AI tools hold promise for making lessons more accessible. A recent EdWeek survey of principals and district leaders showed that most schools are considering AI tools, actively exploring their use, or piloting them. Researchers in the special education community, such as those at the Center for Innovation, Design and Digital Learning (CIDDL), see both the immense potential and the risks of AI in educating students with disabilities. CIDDL states:

“AI in education has the potential to revolutionize teaching and learning through personalized education, administrative efficiency, and innovation, particularly benefiting (special) education programs across both K-12 and Higher Education. Key impacts include ethical issues, privacy, bias, and the readiness of students and faculty for AI integration.”

At the same time, AI-based student online activity monitoring software is being deployed ever more widely to monitor and surveil what students do online. In K-12 schools, such software is widespread: nearly 9 in 10 teachers say that their school monitors students’ online activities. 

Schools have employed these technologies to attempt to address student mental health needs, such as referring flagged students to counseling or other services. These practices have significant implications for students with disabilities, as they are at higher risk for mental health issues. In 2024, NCLD surveyed 1,349 young adults ages 18 to 24 and found that nearly 15% of individuals with a learning disability had a mental health diagnosis, and that 45% of respondents indicated having a learning disability negatively impacts their mental health. Given these risks, careful attention must be paid to ensure mental health needs are identified and appropriately addressed through evidence-based supports. 

Yet there is little evidence supporting the efficacy of this software. Researchers at RAND, through a review of peer-reviewed and gray literature as well as interviews, raise issues with the software, including threats to student privacy, the difficulty families face in opting out, algorithmic bias, and escalation of situations to law enforcement. The Center for Democracy & Technology (CDT) conducted research highlighting that students with disabilities are disproportionately impacted by these AI technologies. For example, licensed special education teachers are more likely to report knowing students who have gotten in trouble or been contacted by law enforcement due to student activity monitoring. Other CDT polling found that 61% of students with learning disabilities report that they do not share their true thoughts or ideas online because of monitoring. 

We also know that students with disabilities are almost three times more likely to be arrested than their nondisabled peers, with Black and Latino male students with disabilities at the highest risk of arrest. Interactions with law enforcement, especially for students with disabilities, can be detrimental to health and education. Because people with disabilities have protections under civil rights laws, including the right to a free appropriate public education, action must be taken to uphold those protections. 

Parents are also increasingly concerned about subjecting their children to greater monitoring both in and outside the classroom, leading to decreased support for the practice: 71% of parents report being concerned about schools tracking their children’s location and 66% are concerned about their children’s data being shared with law enforcement (including 78% of Black parents). Concern about student data privacy and security is higher among parents of children with disabilities (79% vs. 69%). Between the 2021–2022 and 2022–2023 school years, parent and student support of student activity monitoring fell 8% and 11%, respectively. 

Plan of Action

Recommendation 1. Improve data collection.

While data collected by private research entities like RAND and CDT captures some important information on this issue, the federal government should collect its own data to capture the full extent to which these technologies might be misused. Polling data, like the CDT survey of 2,000 teachers referenced above, provides an influential snapshot that has raised immediate concerns about the procurement of student activity monitoring software. However, the federal government is not currently collecting larger-scale data on this issue, and members of Congress, such as Senators Markey and Warren, have relied on CDT’s data in their investigation of the issue because of the absence of federal datasets.

To do this, Congress should charge the National Center for Education Statistics (NCES) within the Institute of Education Sciences (IES) with collecting large-scale data from local education agencies to examine the impact of digital learning tools, including student activity monitoring software. IES should collect data disaggregated by the student subgroups described in section 1111(b)(2)(B)(xi) of the Elementary and Secondary Education Act of 1965 (20 U.S.C. 6311(b)(2)(B)(xi)) and disseminate its findings to state and local education agencies and other appropriate entities. 

Recommendation 2. Enhance parental notification and ensure free appropriate public education.

Families and communities are not being appropriately informed about the use, or potential for misuse, of technologies installed on school-issued devices and accounts. At the start of the school year, schools should notify parents about what technologies are used, how and why they are used, and alert them of any potential risks associated with them. 

Congress should require school districts to notify parents annually, as they do with other Title I programs as described in Sec. 1116 of the Elementary and Secondary Education Act of 1965 (20 U.S.C. 6318), including “notifying parents of the policy in an understandable and uniform format and, to the extent practicable, provided in a language the parents can understand” and that “such policy shall be made available to the local community and updated periodically to meet the changing needs of parents and the school.”

For students with disabilities specifically, the Individuals with Disabilities Education Act (IDEA) provides procedural safeguards to parents to ensure they have certain rights and protections so that their child receives a free appropriate public education (FAPE). To implement IDEA, schools must convene an Individualized Education Program (IEP) team. The IEP should outline the academic and/or behavioral supports and services the child will receive in school and include a statement of the child’s present levels of academic achievement and functional performance, including how the child’s disability affects their involvement and progress in the general education curriculum. The U.S. Department of Education should provide guidance on how to leverage the current IEP process to notify parents of the technologies in place in the curriculum, and on how to use the IEP development process as a mechanism to identify which mental health supports and services a student might need, rather than relying on conclusions drawn from data produced by the software. 

In addition, IDEA regulations address instances of significant disproportionality for children with disabilities who are students of color, including in disciplinary referrals and exclusionary discipline (which may include referral to law enforcement). Given this long history of disproportionate discipline, and the fact that special educators are more likely to report knowing students who have gotten in trouble or been contacted by law enforcement due to student activity monitoring, these incidents raise questions about lost instructional time for students with disabilities and, in turn, potential violations of FAPE. The Department of Education should issue guidance clarifying that such disproportionate discipline might result from the use of student activity monitoring software and explaining how to mitigate referrals to law enforcement for students with disabilities. 

Recommendation 3. Invest in the Office for Civil Rights within the U.S. Department of Education.

The Office for Civil Rights (OCR) currently receives $140 million and is responsible for investigating and resolving civil rights complaints in education, including allegations of discrimination based on disability status. Complaints filed with OCR continued to rise in FY2023, reaching 19,201. The total number of complaints has almost tripled since FY2009, while over the same period OCR’s number of full-time equivalent staff decreased by about 10%. Typically, the majority of complaints received raise allegations regarding disability.

Congress should double its appropriation for OCR, raising it to $280 million. A robust investment would give OCR the resources to address complaints alleging discrimination that involve educational technology software, programs, or services, including AI-driven technologies. With greater resources, OCR could step up enforcement against potential violations of civil rights law and work with the Office of Educational Technology to provide guidance to schools on how to fulfill their civil rights obligations. 

Recommendation 4. Support state and local education agencies with technical assistance.

State education agencies (SEAs) and local education agencies (LEAs) face enormous challenges in responding to a rapidly changing market of education technologies. States and districts are inundated with products from vendors and often lack the technical expertise to differentiate between them. When education technology initiatives and products are not conceived, designed, procured, implemented, or evaluated with the needs of all students in mind, technology can exacerbate existing inequalities. 

To support states and school districts in procuring, implementing, and developing state and local policy, the federal government should invest in a national center to provide robust technical assistance focused on safe and equitable adoption of schoolwide AI technologies, including student online activity monitoring software. 

Conclusion

AI technologies will have an enormous impact on public education. Yet if we do not implement these technologies with students with disabilities in mind, we risk furthering their marginalization. Both Congress and the U.S. Department of Education can play an important role in developing policy and guidance and in providing the resources needed to combat the harms posed by these technologies. NCLD looks forward to working with decision makers to protect the civil rights of students with disabilities and ensure the responsible use of AI technologies in schools.

This idea is part of our AI Legislation Policy Sprint. To see all of the policy ideas spanning innovation, education, healthcare, and trust, safety, and privacy, head to our sprint landing page.

Frequently Asked Questions
Why is the Institute of Education Sciences (IES) the right entity to collect such data?
The IES has invested in research to advance AI technologies used in education and has coordinated with the National Science Foundation to advance AI-driven research and innovations for learners with or at risk for disabilities, demonstrating a clear commitment to investing in experimental studies that incorporate AI into instruction and to piloting new technologies. While this research is important and will help shape the future of teaching and learning, especially for disabled students, additional data and research are needed to fully evaluate the extent to which AI tools already used in schools are impacting students.
What would be the focus of the proposed technical assistance (TA) center?

This TA center could provide guidance to states and local education agencies that lack the capacity and subject matter expertise for both procurement and implementation. It could coordinate its services and resources with existing TA centers, like the T4PA Center or the Regional Educational Laboratories, on how to invest in evidence-based mental health supports in schools and communities, including using technology in ways that mitigate discrimination and bias.


As of February 2024, seven states had published AI guidelines (reviewed and collated by Digital Promise). While these broadly recognize the need for policies and guidelines to ensure that AI is used safely and ethically, none explicitly mention the use of student activity monitoring AI software.

Why should the Office for Civil Rights (OCR) be funded at a level of at least $280 million?

This is the funding level requested in other bills seeking to increase OCR’s capacity, such as the Showing Up For Students Act. OCR projects 23,879 complaint receipts in FY2025. Excluding projected complaints filed by a single complainant, this number is expected to be 22,179 cases. Without staffing increases in FY2025, the average caseload per investigative staff member will become unmanageable at 71 cases (22,179 projected cases divided by 313 investigative staff).

How does this proposal fit into the larger landscape of congressional and administrative attention to this issue?

In late 2023, the Biden-Harris Administration issued an Executive Order on AI. Also that fall, Senate Health, Education, Labor, and Pensions (HELP) Committee Ranking Member Bill Cassidy (R-LA) released a White Paper on AI and requested stakeholder feedback on the impact of AI and the issues within his committee’s jurisdiction.


U.S. House of Representatives members Lori Trahan (D-MA) and Sara Jacobs (D-CA), among others, also recently asked Secretary of Education Miguel Cardona to provide information on OCR’s understanding of the impacts of educational technology and artificial intelligence in the classroom.


Lastly, Senate Majority Leader Chuck Schumer (D-NY) and Senator Todd Young (R-IN) issued a bipartisan Roadmap for Artificial Intelligence Policy that calls for a $32 billion annual investment in AI research. While K-12 education has not been a core focal point within ongoing legislative and administrative actions on AI, it is imperative that the federal government take the necessary steps to protect all students and play an active role in upholding the federal civil rights and privacy laws that protect students with disabilities. Given these commitments from the federal government, there is a ripe opportunity to take action to address the issues of student privacy and discrimination that these technologies pose.

What existing laws should policymakers consider when working to improve implementation of, or uphold, existing statutory protections?

Individuals with Disabilities Education Act (IDEA): IDEA is the law that ensures students with disabilities receive a free appropriate public education (FAPE). IDEA regulations require states to collect data and examine whether significant disproportionality based on race and ethnicity is occurring with respect to the incidence, duration, and type of disciplinary action, including suspensions and expulsions. Guidance from the Department of Education in 2022 emphasized that schools are required to provide behavioral supports and services to students who need them in order to ensure FAPE. It also stated that “a school policy or practice that is neutral on its face may still have the unjustified discriminatory effect of denying a student with a disability meaningful access to the school’s aid, benefits, or services, or of excluding them based on disability, even if the discrimination is unintentional.”


Section 504 of the Rehabilitation Act: This civil rights statute protects individuals from discrimination based on their disability. Any school that receives federal funds must abide by Section 504, and some students who are not eligible for services under IDEA may still be protected under this law (these students usually have a “504 plan”). As the Department of Education works to update the regulations for Section 504, the implications of surveillance software on the civil rights of students with disabilities should be considered.


Elementary and Secondary Education Act (ESEA) Title I and Title IV-A: Title I of the Elementary and Secondary Education Act (ESEA) provides funding to public schools and requires states and public school systems to hold public schools accountable for monitoring and improving achievement outcomes for students and closing achievement gaps between subgroups like students with disabilities. One requirement under Title I is to notify parents of certain policies the school has and actions the school will take throughout the year. As a part of this process, schools should notify families of any school monitoring policies that may be used for disciplinary actions. The Title IV-A program within ESEA provides funding to states (95% of which must be allocated to districts) to improve academic achievement in three priority content areas, including activities to support the effective use of technology. This may include professional development and learning for educators around educational technology, building technology capacity and infrastructure, and more.


Family Educational Rights and Privacy Act (FERPA): FERPA protects the privacy of students’ educational records (such as grades and transcripts) by preventing schools or teachers from disclosing students’ records while allowing caregivers access to those records to review or correct them. However, the information from computer activity on school-issued devices or accounts is not usually considered an education record and is thus not subject to FERPA’s protections.


Children’s Online Privacy Protection Act (COPPA): COPPA requires operators of commercial websites, online services, and mobile apps to notify parents and obtain their consent before collecting any personal information on children under the age of 13. The aim is to give parents more control over what information is collected from their children online. The law regulates companies, not schools.

About the National Center for Learning Disabilities

We are working to improve the lives of individuals with learning disabilities and attention issues—by empowering parents and young adults, transforming schools, and advocating for equal rights and opportunities. We actively work to shape local and national policy to reduce barriers and ensure equitable opportunities and accessibility for students with learning disabilities and attention issues. Visit ncld.org to learn more.

Establish Data-Sharing Standards for the Development of AI Models in Healthcare

The National Institute for Standards and Technology (NIST) should lead an interagency coalition to produce standards that enable third-party research and development on healthcare data. These standards, governing data anonymization, sharing, and use, have the potential to dramatically expedite development and adoption of medical AI technologies across the healthcare sector.

Challenge and Opportunity

The rise of large language models (LLMs) has demonstrated the predictive power and nuanced understanding that come from large datasets. Recent work in multimodal learning and natural language understanding has made complex problems (for example, predicting patient treatment pathways from unstructured health records) feasible. A Harvard study estimated that wider adoption of AI automation would reduce U.S. healthcare spending by $200 billion to $360 billion annually and reduce spending by public payers, such as Medicare, Medicaid, and the VA, by five to seven percent across both administrative and medical costs.

However, the practice of healthcare, while information-rich, is incredibly data-poor. There is not nearly enough medical data available for large-scale learning, particularly when focusing on the continuum of care. We generate terabytes of medical data daily, but this data is fragmented and hidden, held captive by lack of interoperability.

Currently, privacy concerns and legacy data infrastructure create significant friction for researchers working to develop medical AI. Each research project must build custom infrastructure to access data from each and every healthcare system. Even absent infrastructural issues, hospitals and health systems face liability risks by sharing data; there are no clear guidelines for sufficiently deidentifying data to enable safe use by third parties.

There is an urgent need for federal action to unlock data for AI development in healthcare. AI models trained on larger and more diverse datasets improve substantially in accuracy, safety, and generalizability. These tools can transform medical diagnosis, treatment planning, drug development, and health systems management.

New NIST standards governing the anonymization, secure transfer, and approved use of healthcare data could spur collaboration. AI companies, startups, academics, and others could responsibly access large datasets to train more advanced models.

Other nations are already creating such data-sharing frameworks, and the United States risks falling behind. The United Kingdom has facilitated a significant volume of public-private collaborations through its establishment of Trusted Research Environments. Australia has a similar offering in its SURE (Secure Unified Research Environment). Finland has the Finnish Social and Health Data Permit Authority (Findata), which houses and grants access to a centralized repository of health data. But the United States lacks a single federally sponsored protocol and research sandbox. Instead, we have a hodgepodge of offerings, ranging from the federal National COVID Cohort Collaborative Data Enclave to private initiatives like the ENACT Network.

Without federal guidance, many providers will remain reluctant to participate or will provide data in haphazard ways. Researchers and AI companies will lack the data required to push boundaries. By defining clear technical and governance standards for third-party data sharing, NIST, in collaboration with other government agencies, can drive transformative impact in healthcare.

Plan of Action

The effort to establish this set of guidelines will be structurally similar to previous standard-setting projects by NIST, such as the Cryptographic Standards or Biometric Standards Program. Using those programs as examples, we expect the effort to require around 24 months and $5 million in funding. 

Assemble a Task Force

This standards initiative could be established under NIST’s Information Technology Laboratory, which has expertise in creating data standards. However, to gather domain knowledge, partnerships with agencies like the Office of the National Coordinator for Health Information Technology (ONCHIT), the Department of Health and Human Services (HHS), the National Institutes of Health (NIH), the Centers for Medicare & Medicaid Services (CMS), and the Agency for Healthcare Research and Quality (AHRQ) would be invaluable.

Draft the Standards

Data sharing would require standards at three levels: syntactic (how data is structured for exchange), semantic (what the data means), and governance (how the data may be shared and used). 

Syntactic standards already exist, such as HL7/FHIR. Semantic standards exist as well, such as the Observational Medical Outcomes Partnership (OMOP) Common Data Model. We propose developing the final class of standards, governing fair, privacy-preserving, and effective use.
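
To make these three layers concrete, the sketch below contrasts a simplified FHIR-style record (syntactic), its OMOP-style counterpart (semantic), and a hypothetical governance check. The FHIR and OMOP field names follow their public documentation in simplified form; the governance rule is an invented placeholder, since that layer is precisely what this proposal asks NIST to define.

```python
# Illustrative only: a simplified look at the three layers of standards.

# Syntactic layer (HL7 FHIR): how a record is structured for exchange.
fhir_patient = {
    "resourceType": "Patient",
    "id": "example-123",
    "gender": "female",
    "birthDate": "1987-04-12",
}

# Semantic layer (OMOP Common Data Model): what the fields mean, using
# shared concept vocabularies so "female" denotes one concept everywhere.
omop_person = {
    "person_id": 123,
    "gender_concept_id": 8532,  # OMOP standard concept for "female"
    "year_of_birth": 1987,
}

# Governance layer (proposed): rules on fair, privacy-preserving use,
# e.g., a hypothetical allow-list of fields releasable to third parties.
ALLOWED_FOR_RELEASE = {"gender", "year_of_birth"}  # assumption, not a standard

def releasable(record: dict) -> dict:
    """Keep only the fields a governance policy permits for release."""
    return {k: v for k, v in record.items() if k in ALLOWED_FOR_RELEASE}

print(releasable(omop_person))  # {'year_of_birth': 1987}
```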

The governance standards could cover the following areas (a minimal sketch of the anonymization component follows the list):

  1. Data Anonymization
  2. Secure Data Transfer Protocols
  3. Approved Usage
  4. Public-Private Coordination
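
As a concrete illustration of the first item, the sketch below applies rule-based de-identification loosely modeled on HIPAA Safe Harbor-style generalization: direct identifiers are dropped, dates are truncated to years, and ZIP codes to three digits. The field names and rule set are assumptions for illustration, not the certified de-identification procedure a standard would have to specify.

```python
# Minimal rule-based de-identification sketch (illustrative assumptions).

DIRECT_IDENTIFIERS = {"name", "ssn", "phone", "email", "mrn"}  # hypothetical list

def deidentify(record: dict) -> dict:
    """Drop direct identifiers and generalize quasi-identifiers."""
    out = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIERS:
            continue                       # remove direct identifiers entirely
        elif field == "birth_date":
            out["birth_year"] = value[:4]  # generalize a full date to its year
        elif field == "zip":
            out["zip3"] = value[:3]        # truncate ZIP to first three digits
        else:
            out[field] = value             # pass clinical fields through
    return out

record = {
    "name": "Jane Doe",
    "mrn": "A-00012345",
    "birth_date": "1987-04-12",
    "zip": "02139",
    "diagnosis_code": "E11.9",  # ICD-10: type 2 diabetes
}
print(deidentify(record))
# {'birth_year': '1987', 'zip3': '021', 'diagnosis_code': 'E11.9'}
```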

Revise with Public Comment

After releasing the first draft of the standards, the task force should seek input from stakeholders and the public, prioritizing the groups most likely to have constructive input. 

Implement and Incentivize

After publishing the final standards, the task force should promote their adoption and incentivize public-private partnerships. The HHS Office for Civil Rights must issue regulatory guidance, allowable under HIPAA, that lets these documents serve as a means of meeting regulatory requirements. The standards could be adopted initially by public health data sources, such as CMS, and NIH grants could mandate participation as part of recently launched public disclosure and data sharing requirements.

Conclusion

Developing standards for collaboration on health AI is essential for the next generation of healthcare technologies.

All the pieces are already in place. Under the HITECH Act, the Office of the National Coordinator for Health Information Technology gives grants to Regional Health Information Exchanges precisely to enable this kind of exchange. This effort directly aligns with the administration’s priority of leveraging AI and data for the national good and the White House’s recent statement on advancing healthcare AI. Collaborative protocols like these also move us toward the vision of an interoperable health system, and better outcomes for all Americans.

This idea is part of our AI Legislation Policy Sprint. To see all of the policy ideas spanning innovation, education, healthcare, and trust, safety, and privacy, head to our sprint landing page.

Frequently Asked Questions
How can we maintain patient privacy when sharing data with third parties?
Sharing data with third parties is not new. Researchers and companies often engage in data-sharing agreements with medical centers or payors. However, these agreements are usually specialized and created ad hoc. This new regulation aims to standardize and scale such data-sharing agreements while still protecting patient privacy. Existing standards, such as HIPAA, may be combined with emerging technologies, like homomorphic encryption, differential privacy, or secure multi-party computation, to spur innovation without sacrificing patient privacy.
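As a toy illustration of one of these techniques, the sketch below applies differential privacy’s Laplace mechanism to an aggregate count. The query, the epsilon value, and the count are illustrative assumptions; a real deployment would require careful calibration of the privacy budget.

```python
import numpy as np

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a noisy count satisfying epsilon-differential privacy.
    A counting query has sensitivity 1, so the noise scale is 1/epsilon."""
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical query: patients in a cohort with a given diagnosis.
print(dp_count(true_count=1234, epsilon=0.5))  # true value plus Laplace noise
```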
Why is NIST the right body for this work, rather than a group like HHS, ONCHIT, or CMS?

Collaboration among several agencies is essential to the design and implementation of these standards. We envision NIST working closely with counterparts at HHS and other agencies. However, we think that NIST is the best agency to lead this coalition due to its rich technical expertise in emerging technologies.


NIST has been responsible for several landmark technical standards, such as the NIST Cloud Computing Reference Architecture, and has previously done related work in its report on deidentification of personal information and extensive work on assisting adoption of the HL7 data interoperability standard.


NIST has the necessary expertise for drafting and developing data anonymization and exchange protocols and, in collaboration with HHS, ONCHIT, NIH, AHRQ, and industry stakeholders, will have the domain knowledge to create useful and practical standards.

How does this differ from HL7?
HL7 and FHIR are data exchange protocols for healthcare information, maintained by the nonprofit HL7 International. Both HL7 and FHIR play critical roles in enabling interoperability across the healthcare ecosystem. However, they primarily govern data formats and exchange protocols between systems, rather than specifying standards around data anonymization and responsible sharing with third parties like AI developers.

Establish a Teacher AI Literacy Development Program

The rapid advancement of artificial intelligence (AI) technology necessitates a transformation in our educational systems to equip the future workforce with necessary AI skills, starting with our K-12 ecosystem. Congress should establish a dedicated program within the National Science Foundation (NSF) to provide ongoing AI literacy training specifically for K-12 teachers and pre-service teachers. The proposed program would ensure that all teachers have the necessary knowledge and skills to integrate AI into their teaching practices effectively.

Challenge and Opportunity

Generative artificial intelligence (GenAI) has emerged as a profoundly disruptive force reshaping the landscape of nearly every industry. This seismic shift demands a corresponding transformation in our educational systems to prepare the next generation effectively. Central to this transformation is building robust GenAI literacy among students, which begins with equipping our educators. Currently, the integration of GenAI technologies in classrooms is outpacing the preparedness of our teachers: fewer than 20% feel adequately equipped to utilize AI tools such as ChatGPT, only 29% have received professional development in relevant technologies, and, as of this writing, only 14 states offer any guidance on GenAI implementation in educational settings.

The urgency for federal intervention cannot be overstated. Without it, there is a significant risk of exacerbating educational and technological disparities among students, which could hinder their readiness for future job markets dominated by AI. It is of particular importance that AI literacy training is deployed equitably to counter the disproportionate impact of AI and automation on women and people of color. McKinsey Global Institute reported in 2023 that women are 1.5 times more likely than men to experience job displacement by 2030 as a result of AI and automation. A previous study by McKinsey found that Black and Hispanic/Latino workers are at higher risk of occupational displacement than any other racial demographic. This proposal seeks to address the critical deficit in AI literacy among teachers, which, if unaddressed, will leave our students ill-prepared for an AI-driven world.

The opportunity before us is to establish a government program that will empower teachers to stay relevant and adaptable in an evolving educational landscape. This will not only enhance their professional development but also ensure they can provide high-quality education to their students. Teachers equipped with AI literacy skills will be better prepared to educate students on the importance and applications of AI. This will help students develop critical skills needed for future careers, fostering a workforce that is ready to meet the demands of an AI-driven economy. 

Plan of Action

To establish the NSF Teacher AI Literacy Development Program, Congress should first pass a defining piece of legislation that will outline the program’s purpose, delineate its extent, and allocate necessary funding. 

An initial funding allocation, as specified by the authorizing legislation, will be directed toward establishing the program’s operations. This funding will cover essential aspects such as staffing, the initial setup of the professional development resource hub, and the development of incentive programs for states. 

Key responsibilities of the program include:

Develop comprehensive AI literacy standards for K-12 teachers through a collaborative process involving educational experts, AI specialists, and teachers. These standards could be developed directly by the federal government as a model for states to consider adopting, or compiled from existing resources set by reputable organizations, such as the International Society for Technology in Education (ISTE) or UNESCO.

Compile a centralized digital repository of AI literacy resources, including training materials, instructional guides, best practices, and case studies. These resources will be curated from leading educational institutions, AI research organizations, and technology companies. The program would establish partnerships with universities, education technology companies, and nonprofits to continuously update and expand the resource hub with the latest tools and research findings.

Design a comprehensive grant program to support the development and implementation of AI literacy programs for both in-service and pre-service teachers. The program would outline the criteria for eligibility, application processes, and evaluation metrics to ensure that funds are distributed effectively and equitably. It would also provide funding to educational institutions to build their capacity for delivering high-quality AI literacy programs. This includes supporting the development of infrastructure, acquiring necessary technology, and hiring or training faculty with expertise in AI.

Conduct regular, comprehensive assessments to gauge the current state of AI literacy among educators. These assessments would include surveys, interviews, and observational studies to gather qualitative and quantitative data on teachers’ knowledge, skills, and confidence in using AI in their classrooms across diverse educational settings. This data would then be used to address specific gaps and areas of need.

Conduct nationwide campaigns to raise awareness about the importance of AI literacy in education, prioritizing outreach efforts in underserved and rural areas to ensure that these communities receive the necessary information and resources. This can include localized campaigns, community meetings, and partnerships with local organizations.

Prepare and present annual reports to Congress and the public detailing the program’s achievements, challenges, and future plans. This ensures transparency and accountability in the program’s implementation and progress.

Regularly evaluate the effectiveness of AI literacy programs and assess their impact on teaching practices and student outcomes. Use this data to inform policy decisions and program improvements.

Proposed Timeline

Year 1: Formation and Setup
Quarter 1: Congress passes legislation to establish the program; allocate initial funding to support the establishment and initial operations of the program.
Quarter 2: Formally establish the program’s administrative office and hire key staff; develop and launch the program’s official website for public communication and resource dissemination.
Quarter 3: Initiate a national needs assessment to determine the current state of AI literacy among educators; develop AI literacy standards for K-12 teachers.
Quarter 4: Establish AI literacy resource centers within community college and vocational school Centers of AI Excellence; distribute resources and funding to selected pilot school districts and teacher training institutions.

Year 2: Implementation and Expansion
Quarter 1: Evaluate pilot programs and integrate initial feedback to refine training materials and strategies; expand resource distribution based on feedback from pilot programs.
Quarter 2: Launch strategic partnerships with leading technology firms, academic institutions, and educational nonprofits to enhance resource hubs and professional development opportunities; initiate public awareness campaigns to emphasize the importance of AI literacy in education.
Quarter 3: Offer incentives for states to develop and implement AI literacy training programs for teachers; continue to develop and refine AI literacy standards based on ongoing feedback and advancements in AI technology.
Quarter 4: Review year-end progress and adjust strategies based on comprehensive evaluations; prepare the first annual report to Congress and the public outlining achievements, challenges, and future plans.
Year 3 and Beyond: Maturation and Nationwide Implementation
Scale up successful initiatives to a national level based on proven effectiveness and feedback.
Continuously update the Professional Development Resource Hub with the latest AI educational tools and best practices.
Regularly update AI literacy standards to reflect technological advancements and educational needs.
Sustain focus on incentivizing states and expanding reach to underserved regions to ensure equitable AI education across all demographics.

Conclusion

This proposal expands upon Section D of the Biden Administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, emphasizing the importance of building AI literacy to foster a deeper understanding before providing tools and resources. Additionally, this policy has been developed with reference to the Office of Educational Technology’s report on Artificial Intelligence and the Future of Teaching and Learning, as well as the 2024 National Education Technology Plan. These references underscore the critical need for comprehensive AI education and align with national strategies for integrating advanced technologies in education. 

We stand at a pivotal moment where our actions today will determine our students’ readiness for the world of tomorrow. Therefore, it is imperative for Congress to act swiftly to pass the necessary legislation to establish the NSF Teacher AI Literacy Development Program. Doing so will not only secure America’s technological leadership but also ensure that every student has the opportunity to succeed in the new digital age.

This idea is part of our AI Legislation Policy Sprint. To see all of the policy ideas spanning innovation, education, healthcare, and trust, safety, and privacy, head to our sprint landing page.

Frequently Asked Questions
How can we ensure that the AI literacy training is not biased or does not promote certain agendas, especially given the potential influence of technology companies involved in developing resources?

The program emphasizes developing AI literacy standards through a collaborative process involving educational experts, AI specialists, and teachers themselves. By including diverse perspectives and stakeholders, the goal is to create comprehensive and balanced training materials. Additionally, resources will be curated from a wide range of leading institutions, organizations, and companies to prevent any single entity from exerting undue influence. Regular evaluations and feedback loops will also help identify and address any potential biases.

How will this program address the digital divide and ensure equitable access to AI literacy training for teachers in underfunded schools and rural areas? Many districts may lack the necessary infrastructure and resources.

Ensuring equitable access to AI literacy training is a key priority of this program. The nationwide awareness campaigns will prioritize outreach efforts in underserved and rural areas. Additionally, the program will offer incentives and targeted funding for states to develop and implement AI literacy training programs, with a focus on supporting schools and districts with limited resources.

Given the rapid pace of AI advancements, how frequently will the training materials and resources need to be updated, and what is the long-term cost projection for keeping the program relevant?

The program acknowledges the need for continuous updating of AI literacy standards, training materials, and resources to reflect the latest advancements in AI technology. The proposal outlines plans for regular updates to the Professional Development Resource Hub, as well as periodic revisions to the AI literacy standards themselves. While specific timelines and cost projections are not provided, the program is designed with a long-term view, including strategic partnerships with leading institutions and technology firms to stay current with developments in the field. Annual reports to Congress will help assess the program’s effectiveness and inform decisions about future funding and resource allocation.

What metrics will be used to evaluate the effectiveness of the AI literacy training programs, and how will student outcomes be measured to justify the investment in this initiative?

The program emphasizes the importance of regular, comprehensive assessments to gauge the current state of AI literacy among educators. These assessments will include surveys, interviews, and observational studies to gather both qualitative and quantitative data on teachers’ knowledge, skills, and confidence in using AI in their classrooms across diverse educational settings. Additionally, the program aims to evaluate the effectiveness of AI literacy programs and assess their impact on teaching practices and student outcomes, though specific metrics are not outlined. The data gathered through these evaluations will be used to inform policy decisions, program improvements, and to justify continued investment in the initiative.

A NIST Foundation to Support the Agency’s AI Mandate

The National Institute of Standards and Technology (NIST) faces several obstacles to advancing its mission on artificial intelligence (AI) at a time when the field is rapidly advancing and consequences for falling short are wide-reaching. To enable NIST to quickly and effectively respond, Congress should authorize the establishment of a NIST Foundation to unlock additional resources, expertise, flexible funding mechanisms, and innovation, while ensuring the foundation is stood up with strong ethics and oversight mechanisms.

Challenge

The rapid advancement of AI presents unprecedented opportunities and complex challenges as it is increasingly integrated into the way that we work and live. The National Institute of Standards and Technology (NIST), an agency within the Department of Commerce, plays an important role in advancing AI-related research, measurement, evaluation, and technical standard setting. NIST has recently been given responsibilities under President Biden’s October 30, 2023, Executive Order (EO) on Safe, Secure, and Trustworthy Artificial Intelligence. To support the implementation of the EO, NIST launched an AI Safety Institute (AISI), created an AI Safety Institute Consortium (AISIC), and released a strategic vision for AISI focused on safe and responsible AI innovation, among other actions.

While work is underway to implement Biden’s AI EO and deliver on NIST’s broader AI mandate, NIST faces persistent obstacles in its ability to quickly and effectively respond. For example, recent legislation like the Fiscal Responsibility Act of 2023 set discretionary spending limits for FY26 through FY29, which means less funding is available to support NIST’s programs. Even before this, NIST’s funding (around $1–1.3 billion each year) remained a small fraction of the scale of the industries for which it is supposed to set standards. Since FY22, NIST has received lower appropriations than it has requested.

In addition, NIST is struggling to attract the specialized science and technology (S&T) talent that it needs due to competition for technical talent, a lack of competitive pay compared to the private sector, a gender-imbalanced culture, and issues with transferring institutional knowledge when individuals transition out of the agency, according to a February 2023 Government Accountability Office report. Alongside this, NIST has limitations on how it can work with the private sector and is subject to procurement processes that can be a barrier to innovation, an issue the agency has struggled with in years past, according to a September 2005 Inspector General report.

The consequences of NIST not fulfilling its mandate on AI due to these challenges and limitations are wide-reaching: a lack of uniform AI standards across platforms and countries; reduced AI trust and security; limitations on AI innovation and commercialization; and the United States losing its place as a leading international voice on AI standards and governance, giving the Chinese government and companies a competitive edge as they seek to become a world leader in artificial intelligence.

Opportunity

An agency-related foundation could play a crucial role in addressing these challenges and strengthening NIST’s AI mission. Agency-related nonprofit research foundations and corporations have long been used to support the research and development (R&D) mandates of federal agencies by enabling them to quickly respond to challenges and leverage additional resources, expertise, flexible funding mechanisms, and innovation from the private sector to support service delivery and the achievement of agency programmatic goals more efficiently and effectively.

One example is the CDC Foundation. In 1992, Congress passed legislation authorizing the creation of the CDC Foundation, an independent, 501(c)(3) public charity that supports the mandate of the Centers for Disease Control and Prevention (CDC) by facilitating strategic partnerships between the CDC and the philanthropic community and leveraging private-sector funds from individuals, philanthropies, and corporations. The CDC is legally able to capitalize on these private sector funds through two mechanisms: (1) Section 231 of the Public Health Service Act, which authorizes the Secretary of Health and Human Services “to accept on behalf of the United States gifts made unconditionally by will or otherwise for the benefit of the Service or for the carrying out of any of its functions,” and (2) the legislation that authorized the creation of the CDC Foundation, which establishes its governance structure and provides the CDC director the authority to accept funds and voluntary services from the foundation to aid and facilitate the CDC’s work. 

Since 1995, the CDC Foundation has raised $2.2 billion to support 1,400 public health programs in the United States and worldwide. The importance of this model was evident at the height of the COVID-19 pandemic, when the CDC Foundation supported the Centers by quickly raising and deploying resources to support communities. In the same way that the CDC Foundation bolstered the CDC’s work during the greatest public health challenge in 100 years, a foundation model could be critical in helping an agency like NIST deploy private, philanthropic funds from an independent source to quickly respond to the challenge and opportunity of AI’s advancement.

Another example of an agency-related entity is the newly established Foundation for Energy Security and Innovation (FESI), authorized by Congress via the 2022 CHIPS and Science Act, following years of community advocacy, to support the mission of the Department of Energy (DOE) in advancing energy technologies and promoting energy security. A Request for Information was released in February 2023 to seek input on DOE engagement opportunities with FESI, and FESI appointed its inaugural board of directors in May 2024.

NIST itself has demonstrated interest in the potential for expanded partnership mechanisms such as an agency-related foundation. In its 2019 report, the agency notes that “foundations have the potential to advance the accomplishment of agency missions by attracting private sector investment to accelerate technology maturation, transfer, and commercialization of an agency’s R&D outcomes.” NIST is uniquely suited to benefit from an agency-related foundation and its partnership flexibilities, given that it works on behalf of, and in collaboration with, industry on R&D and to develop standards, measurements, regulations, and guidance.

But how could NIST actually leverage a foundation model? A June 2024 paper from the Institute for Progress presents ideas for how a foundation model could support NIST’s work on AI and emerging tech. These include setting up a technical fellowship program that can compete with formidable companies in the AI space for top talent; quickly raising money and deploying resources to conduct “rapid capability evaluations for the risks and benefits of new AI systems”; and hosting large-scale prize competitions to develop “complex capabilities benchmarks for artificial intelligence” that would not be subject to usual monetary limitations and procedural burdens.

A NIST Foundation, of course, would have implications for the agency’s work beyond AI and other emerging technologies. Interviews with experts at the Federation of American Scientists working across various S&T domains have revealed additional use cases for a NIST Foundation that map to the agency’s topical areas. 

Critical to the success of a foundation model is for it to have the funding needed to support NIST’s mission and programs. While it is difficult to estimate exactly how much funding a NIST Foundation could draw in from external sources, there is clearly significant appetite from philanthropy to invest in AI research and initiatives. Reporting from Inside Philanthropy uncovered that some of the biggest philanthropic institutions and individual donors—such as Eric and Wendy Schmidt and Open Philanthropy—have donated approximately $1.5 billion to date to AI work. And in November 2023, 10 major philanthropies announced they were committing $200 million to fund “public interest efforts to mitigate AI harms and promote responsible use and innovation.”

Plan of Action

In order to enable NIST to more effectively and efficiently deliver on its mission, especially as it relates to the rapid advancement of AI, Congress should authorize the establishment of a NIST Foundation. While the structure of agency-related foundations may vary depending on the agency they support, they share several high-level elements, including congressional authorization, an independent nonprofit governance structure, and the agency director’s authority to accept funds and voluntary services from the foundation.

The activities of existing agency-related foundations have left them subject to criticism over potential conflicts of interest. A 2019 Congressional Research Service report highlights several case studies of concerning industry influence over foundation activities, including allegations that the National Football League (NFL) attempted to influence the selection of research applicants for a National Institutes of Health (NIH) study on chronic traumatic encephalopathy that the NFL funded through the Foundation for the NIH (FNIH), and donations by the Coca-Cola Company to the CDC Foundation for obesity and diet research.

To mitigate conflicts of interest and ensure transparency and oversight, a NIST Foundation should adopt rigorous policies that maintain a clear separation between external donations and project decisions. Foundation policies and communications with donors should make explicit that donations will not dictate specific project focus and that donors will have no decision-making authority over project management. Donors would have to disclose any potential interests in foundation projects they would like to fund and could not be listed as “anonymous” in the foundation’s regular financial reporting and auditing processes.

Additionally, instituting mechanisms for engaging with a diverse range of stakeholders is key to ensure the Foundation’s activities align with NIST’s mission and programs. One option is to mandate the establishment of a foundation advisory board composed of topical committees that map to those at NIST (such as AI) and staffed with experts across industry, academia, government, and advocacy groups who can provide guidance on strategic priorities and proposed initiatives. Many initiatives that the foundation might engage in on behalf of NIST, such as AI safety, would also benefit from strong public engagement (through required public forums and diverse stakeholder focus groups preceding program stand-up) to ensure that partnerships and programs address a broad range of potential ethical considerations and serve a public benefit.

Alongside specific structural components for a NIST Foundation, metrics will help measure its effectiveness. While quantitative measures tell only half the story, they are a starting point for evaluating whether a foundation is delivering its intended impact, for example, funds raised from private sources, the number of partnerships and programs supported, and the speed with which resources are deployed.

Conclusion

Given financial and structural constraints, NIST risks being unable to quickly and efficiently fulfill its mandate related to AI, at a time when innovative technologies, systems, and governance structures are sorely needed to keep pace with a rapidly advancing field. Establishing a NIST Foundation to support the agency’s AI work and other priorities would bolster NIST’s capacity to innovate and set technical standards, thus encouraging the safe, reliable, and ethical deployment of AI technologies. It would also increase trust in AI technologies and lead to greater uptake of AI across various sectors where it could drive economic growth, improve public services, and bolster U.S. global competitiveness. And it would help make the case for leveraging public-private partnership models to tackle other critical S&T priorities.

This idea is part of our AI Legislation Policy Sprint. To see all of the policy ideas spanning innovation, education, healthcare, and trust, safety, and privacy, head to our sprint landing page.