Ensuring Platform Transparency and Accountability
Summary
Open-source investigations and public interest research using platform data (e.g., from Facebook and YouTube) have enabled the collection of evidence of human rights atrocities, identified the role of foreign adversaries in manipulating public opinion before elections, and uncovered the prevalence and reach of terrorist radicalization and recruitment tactics. Nascent data privacy legislation such as the EU General Data Protection Regulation and the California Consumer Privacy Act has placed increased pressure on platforms to limit third-party access to data. In an overly cautious interpretation of these laws, platforms are increasingly restricting third-party access to the data they collect. In doing so, platforms shield themselves from public scrutiny and accountability.
To support platform transparency and accountability, the next administration should work with Congress to ensure that any new federal data privacy legislation does not inadvertently block third-party access to platform data for open-source investigations and public interest research. The White House Office of Science and Technology Policy should take the lead by convening a workshop of key actors to make progress on these goals. Out of that workshop, a federal working group should be formed to develop principles and operational guides supporting ethical third-party access to platform data, including technical standards to ensure data privacy and security.