The FAIR in Education Act: Federal coordination to support responsible AI deployment
Artificial Intelligence (AI) has the potential to enhance education systems by personalizing student learning, providing real-time feedback, and streamlining administrative tasks so that teachers can devote more time to instruction. AI, like other classroom technologies, can expand access to educational resources and, when used properly, support student engagement. However, no long-term studies exist on the impacts of generative AI on student learning outcomes or on the cognitive abilities of early learners, and issues around algorithm transparency and data security persist. To meet these challenges, we propose a Framework for AI Responsibility (FAIR) in Education Act, a Governors’ Conference, and the establishment of a national center to support AI deployment in K-12.
Challenge and Opportunity
No Guardrails or Guidance
Successful integration of classroom technologies relies on the availability and stability of infrastructure as well as the readiness of end users. The United States AI Action Plan and the Advancing Artificial Intelligence Education for American Youth Executive Order aim to promote streamlined pathways for AI adoption. However, neither provides practical implementation guidance for the responsible deployment of AI in educational settings, nor allocates funds to support the necessary local infrastructure. Current actions also fail to address longstanding concerns regarding data privacy, the establishment of guardrails to mitigate algorithmic bias, and efforts to reduce the digital divide, all of which become even more pressing when the technology interacts with minors.
AI competency is becoming a necessary skill for the future, much like knowing how to use a search engine effectively to navigate online information. Students who understand how AI works, its limitations, and its potential biases will be better equipped to navigate our technology-driven world. Establishing guardrails and guidance is not meant to restrict student access to AI; rather, it aims to ensure students can use these tools safely and responsibly. Proper guardrails, transparency, and guidance allow students to leverage AI as a learning aid while minimizing risks to privacy, fairness, and well-being.
In the absence of guardrails and guidance, AI can increase inequities, introduce bias, spread misinformation, and compromise data security. These negative impacts are often exacerbated in communities that are marginalized or economically disadvantaged. Simply put, the current posture toward AI puts the cart before the horse. The United States needs a better understanding of AI’s impact on student learning, and clear guardrails, before introducing it at scale.
More Data Needed
As schools continue to reel from the impacts of the COVID-19 pandemic on student learning, such as learning loss and widening achievement gaps, the most recent National Assessment of Educational Progress shows a clear decline in K-12 science, reading, and mathematics proficiency compared to 2019. The results will be used to inform educational reforms; however, educators and policymakers should be cautious about framing AI as a cure-all for America’s educational challenges. The unfulfilled promises of similar tech-driven advances foreshadow a likely failure if policy does not adapt accordingly.
Currently, there are no federal guidelines governing AI usage in the classroom, and there are no longitudinal studies on AI’s impact on student learning and cognitive development. Short-term studies have demonstrated that AI can have a positive effect on student learning; however, results are highly variable and context-specific. In addition, there are significant risks, such as student overreliance on the technology, especially generative AI chatbots. Early learners are particularly at risk for negative impacts, and it is unknown how AI use affects deeper learning, information retention, and synthesis. Studies indicate that technology use among school-aged children can negatively affect attention spans, self-control, cognitive development, and problem-solving skills. Moreover, AI chatbots may have psychological impacts on children, including “empathy gaps,” that are not well understood. Only recently has the Federal Trade Commission launched an inquiry into the impact of AI chatbots on children. We need more data on the long-term impacts of AI in the classroom in order to develop coherent policies that support educators and learners.
These shortcomings do not imply that AI cannot have a place in the classroom. Instead, they demonstrate that a comprehensive understanding of AI’s impact is necessary before its use is scaled up. Furthermore, algorithm transparency is paramount for minimizing bias, ensuring student psychological safety, and promoting data security. Organizations like TeachAI acknowledge some of these risks and provide resources for schools and universities developing AI policy; however, there is still much to learn.
Federal Support and Coordination are Paramount
Uncertainty around the future of federal support for education and education research is also a key challenge. The Department of Education (DoEd) is currently responsible for addressing national educational issues by setting federal policy, supporting equal access to education, protecting civil rights, collecting educational data, and analyzing trends. The DoEd also works to hold institutions and States accountable for educational outcomes. The current administration, however, has a stated goal of abolishing the DoEd and transferring those powers to the States. While States should be empowered to support policy development and implementation, federal coordination and oversight are vital for protecting civil rights and understanding long-term national education trends.
If the DoEd is abolished, it is uncertain which government agency would assume responsibility for the development, monitoring, and evaluation of educational standards at the dawn of the AI age. If States are tasked with this responsibility, they will require sustained federal financial support. Proposed cuts to the National Science Foundation (NSF) STEM Education Directorate and other federal funders of STEM education would limit education researchers’ ability to assess the impact of AI on students’ educational and psychological development or to develop tools for the effective use of AI.
Implementation Requires Community Involvement
While current federal initiatives promote the role of AI in education, their ultimate success depends on meaningful training experiences for educators and strong collaboration with State and local stakeholders. Federal frameworks such as the April 2025 Advancing Artificial Intelligence Education for American Youth Executive Order (E.O. 14277) address the critical need to provide America’s youth with opportunities to cultivate AI competency, but they do not recognize the major value of having States and local districts lead the implementation effort to ensure that AI integration meets community needs, supports student achievement, and strengthens workforce development.
State and local communities could potentially draw on federal resources under this E.O. (if available) and work collaboratively with education-focused professional societies, such as the National Science Teachers Association (NSTA) and the Computer Science Teachers Association (CSTA), to develop community-created standards, define clear metrics, and continuously evaluate what works within their specific contexts. Initiatives such as NSF’s EducateAI and the National AI Research Resource (NAIRR) offer curriculum models, research infrastructure, and other resources that can complement locally developed approaches. These federal programs can also support collaborative networks among educators, researchers, and industry partners to share best practices and insights. However, realizing the full potential of these programs first requires providing teachers with professional development and training to use AI tools effectively and confidently in the classroom, because even the most advanced resources are only as impactful as the educators who understand and apply them.
Recommendations
Framework for AI Responsibility (FAIR) in Education Act
Congress should propose legislation on the responsible use of AI in education. This comprehensive act, the Framework for AI Responsibility in Education Act, or the FAIR in Education Act, would support a large-scale study of AI’s impact on education, provide funding for education research, support State leadership on AI in education, require greater transparency for algorithms influencing minors, and provide infrastructure for ongoing monitoring and assessment of community-centered implementations of AI technologies in the classroom. This legislation should address both K-12 and higher education.
First, the FAIR in Education Act should instruct the National Academies of Sciences, Engineering, and Medicine (NASEM) to conduct a study and report on the impact of AI in K-12 schools, higher education, and informal learning settings such as libraries and museums. This landscape study should address student learning, cognitive and psychological impacts, and the ethical use of AI, and should provide recommendations for how federal, state, and local governments can support AI literacy and teacher education.
Next, the use of AI in the classroom raises several academic and scientific integrity issues, including plagiarism, authorship and credit, accuracy, reliability of AI outputs, reproducibility, and data bias. The FAIR in Education Act should instruct the Committee on STEM Education (CoSTEM), a committee of the National Science and Technology Council under the Office of Science and Technology Policy (OSTP), to provide guidance within 270 days of the act’s passage to assist educational institutions in thoughtfully updating their own definitions of academic integrity in light of AI and other technologies used in educational settings. This guidance would help institutions uphold ethical standards while enabling the responsible use of AI in learning and assessment.
The Act should also require transparency in how AI algorithms used in education are trained, what data they were trained on, and how their guardrails were tested. Educators should be aware of the design decisions and development processes behind these algorithms and how those decisions might affect AI’s use as a tool to enhance student learning. Such transparency will enable educators to guide students effectively in using AI as a learning tool, particularly in support of equitable outcomes for disadvantaged communities.
The Act will direct federal funds to support the requisite infrastructure and security needed to use AI safely. Previous administrations offer examples of funding opportunities and convenings through the Federal Communications Commission (FCC) to support school district cybersecurity and the infrastructure required for AI and high-speed internet use. Additionally, the Act would support streamlined implementation of the Broadband Equity, Access, and Deployment (BEAD) Program to expand high-speed internet access across the country.
The responsible use of AI requires not only federal engagement but State engagement as well. The FAIR in Education Act will require federal, State, and local coordination on AI use in the classroom and facilitate continued monitoring and evaluation. The Act will also increase funding for teacher professional development, with emphasis on development and training for STEM fields. We envision these goals being accomplished through the funding and development of a “Supporting Pedagogy and AI Readiness in K-12” (SPARK) Center, which will be informed by an inaugural nationwide Governors’ Conference.
Governors’ Conference – State-Led Design of the SPARK Center
The creation of the SPARK Center should be conducted in cooperation with state and local officials, as well as parents, educators, and students. The education system in the United States depends heavily on state and local government to lead the implementation of new initiatives and educational practices, so it is essential that they are involved in the decision making. To begin incorporating these essential voices, we recommend hosting a “Governors’ Conference” focused on AI in education, and specifically on the community-driven design of the SPARK Center. The National Governors Association (NGA) Center for Best Practices has a program area focused on K-12 education and previously led a convening of Governors on a K-12 education agenda in 2023. NGA can utilize these existing networks to drive a new focus on the use of AI in education and on the preparation and design of the SPARK Center.
As of September 2025, thirty States have issued guidance on AI in education. At the conference, Governors can share the successes and challenges of their current AI policies as they relate to education, engage in real-time conversations with teachers, students, and parents, and inspire policy action in States that may not yet have infrastructure in place to support the responsible deployment of AI in their own education systems. Attendees should include all state Governors (or their proxies, such as Secretaries of Education or those in similar positions); representatives from the American Federation of Teachers, the National Education Association, the Association of American Educators, and the Superintendents Associations; leadership from NGOs such as CSTA and NSTA; administrators of TeachAI; and relevant NSF-funded researchers and academics conducting pedagogical studies on AI’s impacts on education and childhood development. In addition to representatives from state Governors’ offices, educators from local school districts must be an essential part of this process to garner buy-in and receive guidance from the final users.
The event organizers should consider how best to integrate parent and student feedback into the outcomes of the conference, such as dedicating one day of the conference specifically to their feedback through Track 1.5 roundtables or stakeholder-prepared presentations. The goal of the conference is to help state governments identify which challenges in deploying AI in education are insurmountable for States to address independently, and where students could benefit from federal standardization of the U.S. approach. The conference should produce a deployable roadmap and a full design of the SPARK Center, including the accumulation of educational training resources for teachers and teachers’ associations. It could also generate new initiatives for the federal government, such as draft federal guidelines for AI in K-12 education, a new nationwide grand challenge, or an increase in funding or resources provided to the States. It could likewise lead to the design of new research and potential pilot projects conducted by the NGA’s Center for Best Practices. These examples are solely illustrative; the outcomes will ultimately be determined by the participants.
A community-created approach, paired with federal resources, enables a two-way exchange in which federal guidance informs local practice, while lessons learned from schools feed back into federal research, policy, and frameworks. This partnership will ensure AI is integrated responsibly, equitably, and effectively across America’s education system.
Supporting Pedagogy and AI Readiness in K-12 (SPARK) Center
For AI to truly benefit classrooms, communities must create and embrace standards that guide responsible and effective AI use. Such efforts, exemplified by CSTA’s AI Learning Priorities, will be bolstered through the establishment of the SPARK Center under the FAIR in Education Act.
To maximize AI’s benefits and minimize its risks, AI use in the classroom must be guided by community-created standards. Education stakeholders, including students, teachers, and families, need to be involved in defining how AI is used in the classroom to ensure it aligns with local values, protects student data, and supports student-centered, teacher-facilitated learning. State and local leadership in setting policies around these standards is critical for adapting practices to local contexts and monitoring effective classroom use. What works in one district or school may not work elsewhere; standards must be flexible and informed by community stakeholders, because a one-size-fits-all approach will not work in every school across America.
Effective AI use requires ongoing monitoring and evaluation. At the local level, schools should track learning outcomes, student experiences, teacher workload, and overall engagement and productivity with the technology. Feedback from students, teachers, and other education stakeholders should be part of every monitoring and evaluation cycle to help improve AI adoption in the classroom. Implementing routine monitoring and evaluation cycles will enable schools to adjust AI practices, identify unintended consequences, and ensure AI supports the learning objectives established in the curriculum instead of creating new challenges in the classroom.
This work can be burdensome for teachers and school districts. If a community finds that the deployment of AI in its educational infrastructure is not reaching anticipated goals, or is even causing unintended negative consequences for students, there are few places for educators to turn for answers. The SPARK Center will be designed as a federally managed resource that coordinates monitoring and evaluation capacities across the country and compiles best practices for educators to draw on, based on its analyses. Other functions of the Center will be determined through a community-driven approach, informed by a Governors’ Conference convened at the federal level.
Conclusion: Connecting Federal Support to Advance Community-Created Approaches
AI has enormous potential to enhance teaching and learning, but only if its adoption is guided by communities, led locally, and continuously monitored. By combining student-centered, teacher-facilitated classroom practices with State and local guidance and federal support, schools can ensure AI empowers both educators and students while safeguarding equity, ethics, and critical thinking. Federal support should strengthen these community-centered approaches, providing resources and guidance without replacing local decision making.
The views contained in this memo reflect the personal views of the authors.