After a court issued a ruling last spring that a Yemeni detainee held in U.S. custody should be released, the opinion was briefly published in the case docket and then abruptly withdrawn for classification review. When it reappeared, reporter Dafna Linzer discovered, it was not only redacted but had been significantly altered.
“The alterations are extensive,” she found. “Sentences were rewritten. Footnotes that described disputes and discrepancies in the government’s case were deleted. Even the date and circumstances of [the detainee’s] arrest were changed.”
Yet, in what seems an affront to the integrity of the judicial process, no indication was given that the original opinion had been modified — not merely censored — in the course of the classification review. ProPublica obtained both versions of the ruling and published a comparison of them, highlighting the missing or altered passages. See “In Gitmo Opinion, Two Versions of Reality” by Dafna Linzer, ProPublica (co-published with The National Law Journal), October 8.
By preparing credible, bipartisan options now, before the bill becomes law, we can give the Administration a plan that is ready to implement rather than another study that gathers dust.
Even as companies and countries race to adopt AI, the U.S. lacks the capacity to fully characterize the behavior and risks of AI systems and ensure leadership across the AI stack. This gap has direct consequences for Commerce’s core missions.
The last remaining agreement limiting U.S. and Russian nuclear weapons has now expired. For the first time since 1972, there is no treaty-bound cap on strategic nuclear weapons.
As states take up AI regulation, they must prioritize transparency and develop the technical capacity needed to govern effectively and build public trust.