An Agenda for Ensuring Child Safety in the AI Era
The next administration should continue to make responsible policy on artificial intelligence (AI) and children, especially in K-12 education, a top priority, and should create an AI and Kids Initiative led by the administration. AI is transforming how children learn and live, and policymakers, industry, and educators owe it to the next generation to put in place responsible policy that embraces this new technology while ensuring that all children’s well-being, privacy, and safety are respected. The federal government should develop clear prohibitions, enforce them, and serve as a national clearinghouse for AI K-12 educational policy. It should also support comprehensive digital literacy related to AI.
Specifically, we think three policy elements need to be front of mind for decision-makers: build a coordinated framework for AI safety; champion legislation to support youth privacy and online safety in AI; and ensure every child can benefit from the promise of AI.
To build a coordinated framework for AI safety, the next administration should: ensure parity with existing child data protections; develop safety guidance for developers, including specific prohibitions to limit harmful designs and inappropriate uses; and direct the National Institute of Standards and Technology (NIST) to serve as the lead organizer of federal efforts on AI safety for children. To champion legislation supporting youth privacy and online safety in AI, the next administration should support passage of online safety laws that address harmful design features, specifically those that can lead to medically recognized mental health disorders and to patterns of use indicating addiction-like behavior. It should also modernize federal children’s privacy laws, including updating the Family Educational Rights and Privacy Act (FERPA) and passing youth privacy laws that explicitly address AI data use, including prohibiting the development of commercial models from students’ educational information, with strong enforcement mechanisms. And to ensure every child can benefit from the promise of AI, the next administration should support comprehensive digital literacy efforts and prevent a deepening of the digital divide.
Importantly, policy and frameworks need to have teeth, and they need to take the burden of assessing AI tools for children off of individual states, school districts, and other actors. Enforcement should be tailored to each law but should include, as appropriate, private rights of action, well-funded federal enforcers, and state and local enforcement. Companies should have real incentives to act. The framework cannot be voluntary, enabling companies to pick and choose whether or not to follow recommendations. We have seen what happens when we fail to put guardrails in place for technology, including increased risk of child addiction, depression, and self-harm, and it should not happen again. We cannot claim that this is merely a nascent technology and delay the development of protections. We already know AI will critically impact our lives. We have watched technology critically impact lives, and AI-enabled technology is both faster-moving and potentially more extreme.
Challenge and Opportunity
AI is already embedded in children’s lives and education. According to Common Sense Media research, seven in ten teens have used generative AI, and the most common use is help with homework. The research also found that most parents are in the dark about their children’s generative AI use: only a third of parents whose children reported using generative AI were aware of that use. Beyond generative AI, machine learning systems are embedded in just about every application kids use at school and at home. Further, most teens and parents say their schools either have no AI policy or have not communicated one.
Educational uses of AI are recognized as higher risk under the EU Artificial Intelligence Act and other international frameworks. The EU recognized that risk management requires special consideration when an AI system is likely to be accessed by children. The United States has developed a risk management framework, but it has not yet articulated risk levels or developed a specific educational or youth profile using NIST’s Risk Management Framework. There remains a deep need to ensure that AI systems likely to be accessed by children, including in schools, are assessed for risk and for their impact on youth.
It is well established that children and teenagers are vulnerable to manipulation by technology. Youth report struggling to set boundaries around technology, and according to a U.S. Surgeon General report, almost a third of teens say they are on social media almost constantly. Almost half of youth say social media has reduced their attention span and takes time away from other activities they care about. They are ill-equipped to assess sophisticated and targeted advertising: most children cannot distinguish ads from content until they are at least eight years old, and most do not realize ads can be customized. Beyond addiction, social media design features lead teens to suffer other mental and physical harms, from unattainable beauty filters to friend comparison to recommendation systems that promote harmful content, such as the algorithmic promotion of viral “challenges” that can lead to death. AI technology is particularly concerning given its novelty, the speed and autonomy at which it can operate, and the frequent opacity, even to developers of AI systems, about how inputs and outputs may be used or exposed.
Particularly problematic uses of AI in products used in education or by children so far include emotion detection, biometric data collection, facial recognition (built by scraping online images that include children), companion AI, automated education decisions, and social scoring. This list will continue to grow as AI is further adopted.
There are numerous useful frameworks and toolkits from expert organizations like EdSafe and TeachAI, and from government bodies like NIST, the National Telecommunications and Information Administration (NTIA), and the Department of Education (ED). However, we need the next administration to (1) encourage Congress to pass clear rules regarding AI products used with children, (2) have NIST develop risk management frameworks specifically addressing the use of AI in education and by children more broadly, and serve a clearinghouse function so that individual actors and states do not bear that responsibility, and (3) ensure frameworks are required and prohibitions are enforced. This need is underscored by the lack of updated federal privacy and safety laws protecting children and teens.
Plan of Action
The federal government should take note of the innovative policy ideas bubbling up at the state level. For example, there are laws and proposals in Colorado, California, and Texas, and detailed guidance in over 20 states, including Ohio, Alabama, and Oregon.
Policymakers should take a multi-pronged approach to AI for children and learning, recognizing that these uses are higher risk and that additional layers of protection should therefore apply:
Recommendation 1. Build a coordinated framework: an AI Safety and Kids Initiative at NIST
As the federal government further details the risks associated with uses of AI, common uses of AI by kids should be designated and managed as high risk. This is a foundational step toward creating guardrails and ensuring protections for children as they use AI systems. The administration should clearly categorize education and use by children within a risk-level framework. The EU AI Act, for example, assigns AI uses to tiered risk levels. If the U.S. risk framework covers education and AI systems likely to be accessed by children, it will send a strong signal to policymakers at the state and federal levels that these uses require protections (audits, transparency, or enforcement) to prevent or address potential harm.
NIST, in partnership with others, should develop risk management profiles for platform developers building AI products for use in education and for products likely to be accessed by children. Emphasis should be on safety and efficacy before technology products come to market, with audits throughout development. NIST should:
- Develop a committee with ED, the Federal Trade Commission (FTC), and the Consumer Product Safety Commission (CPSC) to periodically update risk management framework (RMF) profiles, including benchmarking standards related to safety.
- Refine risk levels and RMF profiles relevant to education, working in partnership with NTIA and ED, through an open call to stakeholders.
- Work in partnership with NTIA, the FTC, the CPSC, and the Department of Health and Human Services (HHS) to refine risk levels and risk management profiles for AI systems likely to be accessed by children.
The administration should task NIST’s AI Safety Institute with providing clarity on how safety should be considered for the use of AI in education and for AI systems likely to be accessed by children. This can be accomplished through:
- Developer guidance: Promulgate safety guidance for developers of AI systems likely to be accessed by children or used in education.
- Procurement guidance: Collaborate with ED to provide guidance on safety, efficacy, and privacy to support educational procurement of AI systems.
- Information clearinghouse: Support state bodies and other entities developing guidance on the use of AI systems by serving as a clearinghouse for information on the state of AI systems and developments in efficacy and safety, and by highlighting the concerns and needs of users through periodic reporting.
Recommendation 2. Ensure every child benefits from the promise of AI innovations
The administration should support comprehensive digital literacy and prevent a deepening of the digital divide.
- Highlighting Meaningful Use: Provide periodically updated guidance for schools, teachers, students, and caregivers on the best available uses of AI technology for education.
- Support Professional Development: ED and the National Science Foundation (NSF) can collaborate on professional development guidelines, flag new areas for teacher training, and administer funding to support educator professional development.
- Comprehensive Digital Literacy: NTIA and ED should collaborate to administer funds for digital literacy efforts that support both students and caregivers. Digital literacy guidance should support use while dynamically addressing current risks and safety issues as they arise.
- Clearinghouse for AI Developments: In addition to funding this work, experts in government at NIST, NTIA, the FTC, the FCC, and ED can work collaboratively to periodically alert and inform consumers and digital literacy organizations about developments in AI systems. The federal government can serve as a resource that alerts downstream stakeholders to both positive and negative developments; for example, the FCC Consumer Advisory Committee was tasked with developing recommendations for a consumer education and outreach plan regarding AI-generated robocalls.
Recommendation 3. Encourage Congress to pass clear, enforceable rules on privacy and safety for AI products used by children
Champion congressional updates to privacy laws like the Children’s Online Privacy Protection Act (COPPA) and FERPA to address the use (especially for training) and sharing of personal information (PI) by AI tools. These laws can work in tandem; see, for example, recent proposed COPPA updates that would address children’s use of technology in educational settings.
- Consumer Protections: In the consumer space, consider requirements generally prohibiting the use of children’s PI for training AI models unless the data is deidentified or aggregated and consent is obtained (see CA AB 2877).
- Education Protections: In education settings, it may be unclear when information about students shared with AI systems is subject to FERPA. ED has acknowledged that educational uses of AI models may not be aligned with FERPA or state student privacy laws. FERPA should be updated to explicitly cover personal information collected by and shared with large language models (LLMs): covered education records must include this data; sharing of directory information for all purposes, including AI, should be limited; and the statute should address when ed tech vendors operate as “school officials” and generally prohibit training AI models on student personal information.
Push for Congress to pass AI-specific legislation addressing the development and deployment of AI systems for use by children.
- Address High-Risk Uses: Support legislation to prohibit the use of AI systems in high-risk educational contexts, or when likely to be accessed by children, unless committee-identified benchmarks are met. Use of AI in educational contexts and by children should be deemed high risk by default unless demonstrated otherwise. Specific examples of high-risk uses in education include AI for threat detection and disciplinary decisions, exam proctoring, automated grading and admissions, and generative and companion AI use by minor students.
- Require Third-Party Audits: Support legislation to require third-party audits at the application, model, and governance level, considering functionality, performance, robustness, security and privacy, safety, educational efficacy (as appropriate), accessibility, risks, and mitigation strategies.
- Require Transparency: Support legislation to require transparency reporting by AI developers.
Support congressional passage of online safety laws that address harmful design features in technology, specifically design features that can lead to medically recognized mental health disorders like anxiety, depression, eating disorders, substance use, and suicide, and to patterns of use indicating addiction-like behavior, as in Title I of the Senate-passed Kids Online Safety and Privacy Act.
Moving Forward
Critically, standards and requirements need teeth. Frameworks should require that companies comply with legal requirements or face effective enforcement (such as by a well-funded expert regulator, or through private lawsuits), with tools such as fines and injunctions. We have seen with past technological developments that voluntary frameworks and suggestions will not adequately protect children. Social media, for example, has failed to voluntarily protect children and poses risks to their mental health and well-being. From exacerbating body image issues to amplifying peer pressure and social comparison, from encouraging compulsive device use to reducing attention spans, and from connecting youth to extremism, illegal products, and deadly challenges, the financial incentives for technology companies to appropriately safeguard children on their own do not appear to exist. The next administration can support enforcement by funding the government positions responsible for enforcing such laws.