Supporting States in Balanced Approaches to AI in K-12 Education
Congress must ensure that state education agencies (SEAs) and local education agencies (LEAs) have a gold-standard policy framework, critical funding, and federal technical assistance to help them govern, map, measure, and manage the deployment of accessible and inclusive artificial intelligence (AI) in educational technology across all K-12 educational settings. Legislation that promotes access to an industry-designed and industry-accepted policy framework will help guide SEAs and LEAs as they select and use innovative, accessible AI that aligns with the goals of the National Educational Technology Plan (NETP) and reduces current and potential divides in AI.
Although the AI revolution is definitively underway across all sectors of U.S. society, questions remain about AI’s accuracy and accessibility, about how its broad application can influence the way students are represented within datasets, and about how educators use AI in K-12 classrooms. Policymakers have both the need and the capacity to support and promote thoughtful, ethical integration of AI in education and to ensure that its use complements and enhances inclusive teaching and learning while also protecting student privacy and preventing bias and discrimination. Because no federal legislation currently exists that aligns with and accomplishes these goals, Congress should develop a bill that targets grant funds and technical assistance to states and districts so they can create policy that is backed by industry and designed by educators and community stakeholders.
Challenge and Opportunity
With direction provided by Congress, the U.S. Department of Commerce, through the National Institute of Standards and Technology (NIST), has developed the Artificial Intelligence Risk Management Framework (NIST Framework). Because many states and school districts are still in the early stages of determining what type of policy is needed to comprehensively integrate AI into education while addressing both known and potential risks, this landmark guidance can serve as the foundation for legislation and directed funding designed to help.
A new bill focused on applying the NIST Framework to K-12 education could create both a new federally funded grant program and a technical assistance center designed to help states and districts infuse AI into accessible education systems and technology, and also prevent discrimination and/or data security breaches in teaching and learning. As noted in the NIST Framework:
AI risk management is a key component of responsible development and use of AI systems. Responsible AI practices can help align the decisions about AI system design, development, and uses with intended aim and values. Core concepts in responsible AI emphasize human centricity, social responsibility, and sustainability. AI risk management can drive responsible uses and practices by prompting organizations and their internal teams who design, develop, and deploy AI to think more critically about context and potential or unexpected negative and positive impacts. Understanding and managing the risks of AI systems will help to enhance trustworthiness, and in turn, cultivate public trust.
In a recent national convening hosted by the U.S. Department of Education, Office of Special Education Programs, national leaders in education technology and special education discussed several key themes and questions, including:
- How does AI work and who are the experts in the field?
- What types of professional development are needed to support educators’ effective and inclusive use of AI?
- How can AI be responsive to all learners, including those with disabilities?
Participants emphasized the importance of addressing the digital divide associated with AI, leveraging AI to help improve accessibility for students, applying AI design principles so educators can use AI as a tool to improve student engagement and performance, and ensuring that guidelines and policies are in place to protect student confidentiality and privacy. Stakeholders also specifically and consistently noted “the need for policy and guidance on the use of AI in education and, overall, the convening emphasized the need for thoughtful and ethical integration of AI in education, ensuring that it complements and enhances the learning experience,” according to notes from participants.
Given the rapid pace of innovation in education technology, states and districts are urgently looking for ways to invest in AI that can support teaching and learning. As reported in fall 2023:
Just two states—California and Oregon—have offered official guidance to school districts on using AI [in Fall 2023]. Another 11 states are in the process of developing guidance, and the other 21 states who have provided details on their approach do not plan to provide guidance on AI for the foreseeable future. The remaining states—17, or one-third—did not respond [to requests for information] and do not have official guidance publicly available.
While states and school districts are at various stages of developing policies on the use of AI in K-12 classrooms, to date there is no federally supported option that would help them make cohesive plans to invest in and use AI in evidence-based teaching and to support the administrative and other tasks educators handle outside of instructional time. A major federal investment in education could leverage state and local expertise and encourage collaboration on breakthrough innovations that address both the opportunities and the challenges. There is general agreement that investments in and support for AI within K-12 classrooms will spur educators, students, parents, and policymakers to come together to consider what skills both educators and students need to navigate and thrive in a changing educational landscape and a changing economy. Federal investments in AI, made through the application and use of the NIST Framework, can help ensure that educators have the tools to teach and support the learning of all U.S. learners. To that end, any federal policy initiative must also ensure that state, federal, and local investments in AI do not overlook the lessons learned by leading researchers who have spent years studying ways to infuse AI into America’s classrooms. As noted by Satya Nitta, former head researcher at IBM,
To be sure, AI can do sophisticated things such as generating quizzes from a class reading and editing student writing. But the idea that a machine or a chatbot can actually teach as a human can represents a profound misunderstanding of what AI is actually capable of… We missed something important. At the heart of education, at the heart of any learning, is [human] engagement.
Additionally, while current work led by Kristen DiCerbo at Khan Academy shows promise in the use of ChatGPT in Khanmigo, DiCerbo acknowledges that their online 30-minute tutoring program, which utilizes AI, “is a tool in your toolbox” and is “not a solution to replacing humans” in the classroom. “In one-to-one teaching, there is an element of humanity that we have not been able to replicate—and probably should not try to replicate—in artificial intelligence. AI cannot respond to emotion or become your friend.”
With these data in mind, there is both a great need and a timely opportunity to support states and districts in developing flexible standards based on quality evidence. The NIST Framework, designed as a voluntary guide, is also “intended to be practical and adaptable.” State and district educators would benefit from targeted federal legislation that elevates the Framework’s availability and applicability to current and future investments in AI in K-12 educational settings and helps ensure AI is used in a way that is equitable, fair, safe, and supportive of educators as they seek to improve student outcomes. Educators need access to industry-approved guidance, targeted grant funding, and technical assistance to support their efforts, especially as AI technologies continue to develop. Such state- and district-led guidance will help operationalize AI in flexible ways, supporting thoughtful development of policies and best practices that ensure school communities can benefit from AI while protecting students from potential harms.
Plan of Action
Federal legislation would provide funding for grants and technical assistance to states and districts in planning and implementing comprehensive AI policy-to-practice plans. Utilizing the NIST Framework, each state would build a locally designed plan to support and promote thoughtful and ethical integration of AI in education and to ensure that its use complements and enhances inclusive teaching, accessible learning, and an innovation-driven future for all.
- Agency: U.S. Department of Education
- Cost: $500 million
- Budget Line: ESEA: Title IV: Part A: Student Support and Academic Enrichment (SSAE) grant program, which supports well-rounded education, safe and healthy students, and the effective use of education technology.
Legislative Specifications
Sec. 1: Grant Program to States
Purposes:
(A) To provide grants to State Education Agencies (SEA/State) to guide and support local education agencies (LEA/district) in the planning, development, and investment in AI in K-12 educational settings, ensuring AI is used in a way that is equitable, fair, and safe and can support educators and help improve student outcomes.
(B) To provide federal technical assistance (TA) to States and districts in the planning, development, and investment in AI in K-12 education and to evaluate State use of funds.
- SEAs apply for and lead the planning and implementation. Required partners in planning and implementation are:
- A minimum of three LEA/district teams, each of which includes
- A district leader.
- An expert in teacher professional development.
- A systems and/or education technology expert.
Each LEA/district must be representative of the students and the school communities across the state in size, demographics, geographic locations, etc.
Other requirements for state/district planning are:
- A minimum of one accredited state university providing undergraduate and graduate level personnel preparation to teachers, counselors, administrators, and/or college faculty
- A minimum of one state-based nonprofit organization or consortia of practitioners from the state with expertise in AI in education
- A minimum of one nonprofit organization with expertise in accessible education materials, technology/assistive technology
- SEAs may also include any partners the State or district(s) deems necessary to successfully conduct planning and carry out district implementation in support of K-12 students and to increase district access to reliable guidance on investments in and use of AI.
- SEAs must develop a plan that will be carried out by the LEA/district partners [and other LEAs] within the timeframe indicated.
- SEAs must utilize the NIST Framework, the National Education Technology Plan, and the recommendations included in the Office of Educational Technology report on AI when developing the plan. Such planning and implementation must also:
- Focus on the state’s population of K-12 students including rural and urban school communities.
- Protect against bias and discrimination of all students, with specificity for student subgroups (i.e., economically disadvantaged students; students from each major racial/ethnic group; children with disabilities as defined under IDEA; and English learners) as defined by the Elementary and Secondary Education Act (ESEA).
- Support educators in the use of accessible AI in Universal Design for Learning (UDL)-enriched and inclusive classrooms, schools, and districts.
- Build in metrics essential to understanding district implementation impacts on both educators and students [by subgroup as indicated above].
- SEAs may also utilize any other resources they deem appropriate to develop a plan.
Timeline
- 12 months to plan
- 12 months to begin implementation, to be carried out over 24–36 months
- SEAs must set aside funds to:
- Conduct planning activities with partners as required/included over 12 months
- Support LEA implementation over 24 months.
- Support SEA/LEA participation in federal TA center evaluation—with the expectation such funding will not exceed 8–10% of the overall grant.
- Evaluation of SEA and LEA planning and implementation required
- Option to renew the grant (within the 24-month period). Renewals are contingent on projected need across the state, LEA uptake, and other reliable outcomes supporting an expanded rollout across the state.
Sec. 2: Federal TA Center: To assist states in planning and implementing state-designed standards for AI in education.
Cost: 6% set-aside of overall appropriated annual funding
The TA center must achieve, at a minimum, the following expected outcomes:
(a) Increased capacity of SEAs to develop useful guidance, drawing on the NIST Framework, the 2024 National Education Technology Plan, and the recommendations of the Office of Educational Technology on the use of artificial intelligence (AI) in schools, to support the use of AI by K-12 educators and students in the State and the LEAs of the State;
(b) Increased capacity of SEAs and LEAs to use new State- and LEA-led guidance that ensures AI is used in a way that is equitable, fair, and safe, protects against bias and discrimination of all students, and can support educators and help improve student outcomes;
(c) Improved capacity of SEAs to assist LEAs, as needed, in using data to drive decisions related to the use of K-12 funds so that AI is used in a way that is equitable, fair, and safe and can support educators and help improve student outcomes; and
(d) Collection of data on these and other areas as outlined by the Secretary.
Timeline: TA Center is funded by the Secretary upon congressional action to fund the grant opportunity.
Conclusion
State and local education agencies need essential tools to support their use of accessible and inclusive AI in educational technology across all K-12 educational settings. Educators need access to industry-approved guidance, targeted grant funding, and technical assistance to support their efforts. It is essential that AI be operationalized in varying degrees and capacities to support thoughtful development of policies and best practices that ensure school communities can benefit from AI, while also being protected from its potential harms, now and in the future.
This idea is part of our AI Legislation Policy Sprint. To see all of the policy ideas spanning innovation, education, healthcare, and trust, safety, and privacy, head to our sprint landing page.