Emerging Technology
day one project

An Agenda for Ensuring Child Safety in the AI Era

01.12.25 | 10 min read | Text by Amina Fazlullah & Ariel Fox Johnson

The next administration should continue to make responsible policy on artificial intelligence (AI) and children, especially in K-12 education, a top priority, and should create an AI and Kids Initiative led by the administration. AI is transforming how children learn and live, and policymakers, industry, and educators owe it to the next generation to put in place responsible policy that embraces this new technology while ensuring every child's well-being, privacy, and safety are respected. The federal government should develop clear prohibitions, enforce them, and serve as a national clearinghouse for AI K-12 educational policy. It should also support comprehensive digital literacy related to AI.

Specifically, three policy elements need to be front of mind for decision-makers: build a coordinated framework for AI safety; champion legislation to support youth privacy and online safety in AI; and ensure every child can benefit from the promise of AI.

To build a coordinated framework for AI safety, the next administration should ensure parity with existing child data protections; develop safety guidance for developers, including specific prohibitions to limit harmful designs and inappropriate uses; and direct the National Institute of Standards and Technology (NIST) to serve as the lead organizer for federal efforts on AI safety for children. To champion legislation supporting youth privacy and online safety in AI, the next administration should back passage of online safety laws that address harmful design features, specifically those that can lead to medically recognized mental health disorders and to patterns of use indicating addiction-like behavior, and should modernize federal children's privacy laws by updating the Family Educational Rights and Privacy Act (FERPA) and passing youth privacy laws that explicitly address AI data use, including prohibiting the development of commercial models from students' educational information, with strong enforcement mechanisms. And to ensure every child can benefit from the promise of AI, the next administration should support comprehensive digital literacy efforts and prevent a deepening of the digital divide.

Importantly, policy and frameworks need teeth, and they need to take the burden of assessing AI tools for children off of individual states, school districts, and other actors. Enforcement should be tailored to specific laws, but should include, as appropriate, private rights of action, well-funded federal enforcers, and state and local enforcement. Companies should feel incentivized to act. The framework cannot be voluntary, allowing companies to pick and choose which recommendations to follow. We have seen what happens when guardrails for tech are absent, such as increased risk of child addiction, depression, and self-harm, and it should not happen again. We cannot claim this is merely a nascent technology and delay developing protections. We already know AI will critically impact our lives. We have watched technology critically impact lives, and AI-enabled technology is both faster and potentially more extreme.

Challenge and Opportunity

AI is already embedded in children's lives and education. According to Common Sense Media research, seven in ten teens have used generative AI, and the most common use is help with homework. The research also found that most parents are in the dark about their child's generative AI use: only a third of parents whose children reported using generative AI were aware of that use. Beyond generative AI, machine learning systems are embedded in just about every application kids use at school and at home. Further, most teens and parents say their schools either have no AI policy or have not communicated one.

Educational uses of AI are recognized as higher risk under the EU Artificial Intelligence Act and other international frameworks; the EU recognized that risk management requires special consideration when an AI system is likely to be accessed by children. The U.S. has developed a risk management framework, but it has not yet articulated risk levels or developed a specific educational or youth profile using NIST's Risk Management Framework. There remains a deep need for AI systems likely to be accessed by children, including in schools, to be assessed for risk management and impact on youth.

It is well established that children and teenagers are vulnerable to manipulation by technology. Youth report struggling to set boundaries with technology, and according to a U.S. Surgeon General report, almost a third of teens say they are on social media almost constantly. Almost half of youth say social media has reduced their attention span and takes time away from other activities they care about. They are unequipped to assess sophisticated and targeted advertising: most children cannot distinguish ads from content until they are at least eight years old, and most do not realize ads can be customized. Beyond addiction, social media design features lead teens to suffer other mental and physical harms, from unattainable beauty filters to friend comparison to recommendation systems that promote harmful content, such as the algorithmic promotion of viral "challenges" that can lead to death. AI technology is particularly concerning given its novelty, the speed and autonomy at which it can operate, and the frequent opacity, even to the developers of AI systems, about how inputs and outputs may be used or exposed.

Particularly problematic uses of AI in products used in education or by children so far include emotion detection, biometric data collection, facial recognition (built by scraping online images that include children), companion AI, automated education decisions, and social scoring. This list will continue to grow as AI is further adopted.

There are numerous useful frameworks and toolkits from expert organizations like EdSafe and TeachAI, and from government bodies like NIST, the National Telecommunications and Information Administration (NTIA), and the Department of Education (ED). However, we need the next administration to (1) encourage Congress to pass clear rules regarding AI products used with children, (2) have NIST develop risk management frameworks specifically addressing the use of AI in education and by children more broadly, and serve a clearinghouse function so individual actors and states do not bear that responsibility, and (3) ensure frameworks are required and prohibitions are enforced. The need for these steps is underscored by the absence of updated federal privacy and safety laws protecting children and teens.

Plan of Action

The federal government should take note of the innovative policy ideas bubbling up at the state level. For example, legislation and proposals have emerged in Colorado, California, and Texas, and detailed guidance exists in over 20 states, including Ohio, Alabama, and Oregon.

Policymakers should take a multi-pronged approach to AI for children and learning, recognizing that these uses are higher risk and therefore warrant additional layers of protection:

Recommendation 1. Build a coordinated framework: an AI Safety and Kids Initiative at NIST

As the federal government further details the risks associated with uses of AI, common uses of AI by kids should be designated or managed as high risk. This is a foundational step toward creating guardrails and ensuring protections for children as they use AI systems. The administration should clearly categorize education and use by children within a risk-level framework. The EU AI Act, for example, assigns AI systems to different risk levels. If the U.S. risk framework covers education and AI systems likely to be accessed by children, it will send a strong signal to policymakers at the state and federal levels that these uses require protections (audits, transparency, or enforcement) to prevent or address potential harm.

NIST, in partnership with others, should develop risk management profiles for platform developers building AI products for use in education and products likely to be accessed by children. Emphasis should be on safety and efficacy before technology products come to market, with audits throughout development. NIST should:

Work in partnership with NTIA, FTC, CPSC, and HHS to refine risk levels and risk management profiles for AI systems likely to be accessed by children.

The administration should also task NIST's AI Safety Institute with providing clarity on how safety should be considered for the use of AI in education and for AI systems likely to be accessed by children.

Recommendation 2. Ensure every child benefits from the promise of AI innovations 

The administration should support comprehensive digital literacy and prevent a deepening of the digital divide. 

Recommendation 3. Encourage Congress to pass clear, enforceable rules regarding privacy and safety for AI products used by children

Champion Congressional updates to privacy laws like COPPA and FERPA to address the use (especially for training) and sharing of personal information (PI) by AI tools. These laws can work in tandem; see, for example, recent proposed COPPA updates that would address children's use of technology in educational settings.

Push for Congress to pass AI-specific legislation addressing the development and deployment of AI systems for use by children.

Support Congressional passage of online safety laws that address harmful design features in technology, specifically features that can lead to medically recognized mental health disorders like anxiety, depression, eating disorders, substance use, and suicide, and to patterns of use indicating addiction-like behavior, as in Title I of the Senate-passed Kids Online Safety and Privacy Act.

Moving Forward

Above all, standards and requirements need teeth. Frameworks should require that companies comply with legal requirements or face effective enforcement (such as by a well-funded expert regulator, or through private lawsuits), with tools such as fines and injunctions. Past technological developments have shown that voluntary frameworks and suggestions will not adequately protect children. Social media, for example, has failed to voluntarily protect children and poses risks to their mental health and well-being. From exacerbating body image issues to amplifying peer pressure and social comparison, from encouraging compulsive device use to reducing attention spans, from connecting youth to extremism, illegal products, and deadly challenges, the financial incentives do not appear to exist for technology companies to appropriately safeguard children on their own. The next administration can support enforcement by funding the government positions responsible for enforcing such laws.
