When Secrecy News gained unauthorized access to a restricted U.S. Army manual on visual identification of U.S. and foreign aircraft, we supposed that it was just one more case of unnecessary and inappropriate secrecy.
But it turns out to be something worse than that, since the document (pdf) contains a surprising number of technical errors.
The Entropic Memes blog astutely noted that the dimensions given in the Army manual for the Predator unmanned aerial vehicle are wrong, and that the entry for the B-52, among others, is likewise incorrect.
“Please,” Entropic Memes exclaimed. “If they can’t get the details of one of their own systems correct, how much faith can you have that they got the details of anyone else’s systems right?”
In this case, the secrecy of the Army manual was not just an arbitrary barrier to public access. It also “protected” numerous errors that may make the document worse than useless.
Conversely, exposing the document to public scrutiny may now make it possible to correct its errors so as to fulfill its intended purpose.
Since it was posted on the Federation of American Scientists website 48 hours ago, the Visual Aircraft Recognition manual has been downloaded over seventy thousand times, an exceptionally high rate of access.
Update: “This is not a subject I’ve so far spent a lot of time on, but the entry for every aircraft I’ve looked up in the manual thus far contains errors,” adds Entropic Memes in a new post.