Secrecy and Error Correction in Open Source Intel
Open source intelligence products, which are based on information gathered in the public domain, are often withheld from public disclosure for various reasons. These include habit, the cultivation of the mystique of secret intelligence, the protection of copyrighted information, and the preservation of "decision advantage," i.e., the policy-relevant insight that open source intelligence at its best may offer.
Even when it can be justified, however, such secrecy comes at a price. By restricting the distribution of unclassified intelligence products, government agencies also limit the opportunities for the discovery and correction of erroneous information or analysis. Conversely, expanding access to such materials may be expected to yield an improved product.
So, for example, Secrecy News recently published a previously undisclosed Open Source Center report on Bolivia’s Islamic community (pdf). It had not been approved for public release. Sure enough, once the report became public knowledge, it became possible to identify mistaken information that had been inadvertently disseminated by the Open Source Center throughout the U.S. government.
The report had listed the Association of the Islamic Community of Bolivia as a Shia organization (at page 11). That was incorrect. "The Asociacion de la Comunidad Islamica de Bolivia... is a SUNNI community," wrote Ahmad Ali Cuttipa Trigo, a representative of the group, in a courteous but emphatic email message from La Paz. "We would like you to correct that typing error."
In fact, mistaking a Sunni community for a Shia one is more than a typographical error. It is the kind of thing that under some circumstances could lead a reader to draw significant unwarranted inferences. And so fixing it is a service to everyone concerned.
From this perspective, the unauthorized publication of such materials may also be seen as a contribution to the open source intelligence enterprise.