A former Central Intelligence Agency employee, Thomas Waters Jr., filed a lawsuit against the Agency last week, arguing that publication of his book had been improperly blocked in the prepublication review process.
“The Central Intelligence Agency has unlawfully imposed a prior restraint upon Thomas Waters by obstructing and infringing on his right to publish his unclassified memoirs and threatening him with civil and criminal penalties,” according to the March 3 complaint (pdf) filed in DC District Court.
The case seems to reflect the tightening of controls on public disclosure of information at the CIA.
Almost all of Waters’ manuscript had been cleared for publication by the CIA in September 2004, according to the complaint. But last month, the Agency notified him that substantial portions of the book, including some material that had previously been approved, could not be published after all.
“The CIA continues to deliberately create a hostile environment for its former employees who are seeking to do nothing other than publish nonsensitive, unclassified information,” said Mark S. Zaid, Waters’ attorney. “Its actions are completely unconstitutional and designed to disable the First Amendment.”
See also “CIA Sued Over Right to Publish” by Shaun Waterman, United Press International, March 6.