What exactly does “all lawful use” of AI mean? No one knows.
As a result of this weekend's highly publicized dispute between the Department of Defense (DoD) and Anthropic, we're hearing a lot about the "lawful use" of frontier AI systems in classified environments.
“Lawful” is a legal floor that will look increasingly shaky as AI capabilities advance. It doesn’t answer whether we have adequate civil liberties guardrails or technical safety standards in place. Company “red lines” only matter if they are backed by enforceable technical and contractual safeguards. Otherwise, they function primarily as signaling. From use to testing to deployment, the scaffolding for responsible integration of AI into high-risk use cases is just not there.
Privacy is a major concern for experts and the public alike. When increasingly capable models are paired with large-scale government data holdings—including commercially purchased data on Americans—the result could materially change the practical boundaries of surveillance, even if each underlying dataset was obtained legally. AI systems expand the possibility of large-scale inference, enabling automated link analysis, behavioral pattern detection, and probabilistic assessments about individuals’ networks or intent across disparate datasets.
Next, there’s the reliability problem. Frontier systems remain probabilistic and brittle, particularly in adversarial settings. The companies building this technology do not yet have a mature testing, evaluation, validation, and verification (TEVV) ecosystem for high-stakes national security uses. At the same time, DoD strategy documents are calling for a “wartime” posture toward eliminating blockers in testing and deployment. That tension should concern us all.
Then, there are the numerous cybersecurity risks. Agentic systems that access sensitive data, ingest untrusted inputs, and can take external actions create new attack surfaces that adversaries will probe and exploit. In classified environments, these risks might be mitigated, but they don’t disappear. Subtle manipulation or model failure inside a military workflow can propagate quickly.
Capability is advancing quickly, but policymakers shouldn't adopt these systems faster than we can test and govern them.