To sustain America’s leadership in AI innovation, accelerate adoption across the economy, and guarantee that AI systems remain secure and trustworthy, we offer a set of policy recommendations.
Current scientific understanding shows that the “anonymization” methods widely relied on in the past are inadequate for protecting privacy in the era of big data and artificial intelligence.
Dr. Lim will help develop, organize, and implement FAS’s growing work on catastrophic risk, spanning core areas such as nuclear weapons, AI and national security, space, and other emerging technologies.
To fully harness the benefits of AI, the public must have confidence that these systems are deployed responsibly and enhance their lives and livelihoods.
As new waves of AI technologies continue to enter the public sector, touching a breadth of services critical to the welfare of the American people, this center of excellence will help maintain high standards for responsible public sector AI for decades to come.
By creating a reliable, user-friendly framework for surfacing provenance, NIST would empower readers to better discern the trustworthiness of the text they encounter, thereby helping to counteract the risks posed by deceptive AI-generated content.
To tackle AI risks in grant spending, grant-making agencies should adopt trustworthy AI practices in their grant competitions and enforce them against grantees that act recklessly.
As people become less able to distinguish between what is real and what is fake, it is easier than ever to be misled by synthetic content, whether by accident or with malicious intent. This makes advancing alternative countermeasures, such as technical solutions, all the more vital.
AI is transforming how children learn and live, and policymakers, industry, and educators owe it to the next generation to put in place a responsible policy that embraces this new technology while ensuring that all children’s well-being, privacy, and safety are respected.
Given the rapid pace of AI advancement, a proactive approach will serve far better than a reactive one. To protect consumers, workers, and the economy more broadly, it is imperative that the FTC and DOJ adapt their enforcement strategies to meet the complexities of the AI era.
By leveraging its substantial purchasing power responsibly, the government can encourage high-quality, inclusive AI solutions that address diverse citizen needs while setting a strong precedent for innovation and accountability.
The research community lacks strategies to incentivize collaboration on high-quality data acquisition and sharing. The government should fund collaborative roadmapping, certification, collection, and sharing of large, high-quality datasets in the life sciences.