What exactly does “all lawful use” of AI mean? No one knows.
What exactly does “all lawful use” of AI mean? No one knows.
As a result of this weekend’s highly-publicized Department of Defense (DoD)-Anthropic dispute, we’re hearing a lot about the “lawful use” of frontier AI systems in classified environments.
“Lawful” is a legal floor that will look increasingly shaky as AI capabilities advance. It doesn’t answer whether we have adequate civil liberties guardrails or technical safety standards in place. Company “red lines” only matter if they are backed by enforceable technical and contractual safeguards. Otherwise, they function primarily as signaling. From use to testing to deployment, the scaffolding for responsible integration of AI into high-risk use cases is just not there.
Privacy is a major concern for experts and the public alike. When increasingly capable models are paired with large-scale government data holdings—including commercially purchased data on Americans—the result could materially change the practical boundaries of surveillance, even if each underlying dataset was obtained legally. AI systems expand the possibility of large-scale inference, enabling automated link analysis, behavioral pattern detection, and probabilistic assessments about individuals’ networks or intent across disparate datasets.
Next, there’s the reliability problem. Frontier systems remain probabilistic and brittle, particularly in adversarial settings. The companies building this technology do not yet have a mature testing, evaluation, validation, and verification (TEVV) ecosystem for high-stakes national security uses. At the same time, DoD strategy documents are calling for a “wartime” posture toward eliminating blockers in testing and deployment. That tension should concern us all.
Then, there are the numerous cybersecurity risks. Agentic systems that access sensitive data, ingest untrusted inputs, and can take external actions create new attack surfaces that adversaries will probe and exploit. In classified environments, these risks might be mitigated, but they don’t disappear. Subtle manipulation or model failure inside a military workflow can propagate quickly.
Capability is advancing quickly, but policymakers shouldn’t adopt faster than we can test and govern.
From use to testing to deployment, the scaffolding for responsible integration of AI into high-risk use cases is just not there.
The Federation of American Scientists supports Congress’ ongoing bipartisan efforts to strengthen U.S. leadership with respect to outer space activities.
By preparing credible, bipartisan options now, before the bill becomes law, we can give the Administration a plan that is ready to implement rather than another study that gathers dust.
Even as companies and countries race to adopt AI, the U.S. lacks the capacity to fully characterize the behavior and risks of AI systems and ensure leadership across the AI stack. This gap has direct consequences for Commerce’s core missions.