
Protecting Children’s Privacy at Home, at School, and Everywhere in Between
Summary
Young people today face surveillance unlike any previous generation: at home, at school, and everywhere in between. Constant use of technology while their brains are still developing makes them uniquely vulnerable to privacy harms, including identity theft, cyberbullying, physical risks, algorithmic labeling, and hyper-commercialism. A lack of privacy can lead children to self-censor and can limit their opportunities. Already-vulnerable populations—those with fewer resources, less digital literacy, or limited English proficiency—are most at risk.
Congress and the Federal Trade Commission (FTC) have repeatedly considered efforts to better protect children's privacy, but the next administration must make this a genuine priority by supporting strong privacy laws, giving the FTC additional resources and authority, and providing support to the Department of Education (ED). The Biden-Harris administration should also establish a task force to explore how best to support and protect students. And the FTC should use its existing authority to deepen its understanding of the children's technology market and robustly enforce a strong Children's Online Privacy Protection Act (COPPA) rule.
To fully harness the benefits of AI, the public must have confidence that these systems are deployed responsibly and enhance their lives and livelihoods.
The first Trump administration's commitment under E.O. 13859 laid the foundation for greater government accountability in AI use; this work should continue.
As new waves of AI technologies enter the public sector, touching a breadth of services critical to the welfare of the American people, this center of excellence would help maintain high standards for responsible public-sector AI for decades to come.
By creating a reliable, user-friendly framework for surfacing provenance, NIST would empower readers to better judge the trustworthiness of the text they encounter, helping to counteract the risks posed by deceptive AI-generated content.