Protecting Children’s Privacy at Home, at School, and Everywhere in Between
Summary
Young people today face surveillance unlike any previous generation: at home, at school, and everywhere in between. Constant use of technology while their brains are still developing makes them uniquely vulnerable to privacy harms, including identity theft, cyberbullying, physical risks, algorithmic labeling, and hyper-commercialism. A lack of privacy can ultimately lead children to self-censor and can limit their opportunities. Already-vulnerable populations are most at risk, including those with fewer resources, those with less digital literacy, and non-native English speakers.
Congress and the Federal Trade Commission (FTC) have repeatedly considered efforts to better protect children’s privacy, but the next administration must make this a priority and act on it by supporting strong privacy laws, giving additional resources and authority to the FTC, and providing support to the Department of Education (ED). The Biden-Harris administration should also establish a task force to explore how best to support and protect students. And the FTC should use its current authority to deepen its understanding of the children’s technology market and robustly enforce a strong Children’s Online Privacy Protection Act (COPPA) rule.