Securing the Nation’s Educational Technology
Summary
Never before have so many children in America used so much educational technology, and never before has it been so important to ensure that these technologies are secure. Currently, however, school administrators are overburdened with complex security considerations that make it challenging for them to keep student data secure. The educational technologies now common in America’s physical and virtual classrooms should meet security standards designed to protect the nation’s students. As a civil rights agency, the Department of Education has a responsibility to lead a coordinated approach to ensuring a baseline of security for all students in the American education system.
This policy initiative will support America’s students and schools at a time when educational experiences—and student information—are increasingly online and vulnerable to exploitation. The plan of action outlined below includes a new Department of Education educational technology security rule, training support for schools, a voluntary technology self-certification system, an online registry of certified technologies to help grow a secure educational technology market, and processes for industry support and collaboration in this work. Combined, these efforts will create a safer digital learning environment for the nation’s students and a more robust educational technology marketplace.