Goldsmith: “Extreme Secrecy… Led to a Lot of Mistakes”
In October, the Senate Judiciary Committee held a riveting hearing with Jack Goldsmith, the former head of the Justice Department's Office of Legal Counsel. The record of that hearing has now been published.
As was widely reported at the time, Mr. Goldsmith challenged the legality of certain aspects of the President’s warrantless surveillance program and raised questions about other policies and procedures in the “war on terrorism.”
“There’s no doubt that the extreme secrecy [surrounding the Terrorist Surveillance Program] — not getting feedback from experts, and not showing it to experts, and not getting a variety of views, even inside the executive branch — led to a lot of mistakes,” he said.
The PDF version of the hearing record includes Mr. Goldsmith’s answers to questions for the record from the Senate Committee members (pp. 38-49). In most cases, he deflected the Senators’ pointed questions. But several of the exchanges are interesting nevertheless.
Asked about the Administration’s refusal to disclose to Congress the legal memoranda justifying its interrogation program, Mr. Goldsmith stated:
“I believe it is the President’s prerogative not to disclose these opinions. And I believe it is the Congress’s prerogative to use political pressure to try to force the Executive to disclose the opinions.”
See “Preserving the Rule of Law in the Fight Against Terrorism,” hearing before the Senate Judiciary Committee, October 2, 2007.