Last year the National Academy of Public Administration developed a proposal to perform an “ethics audit” of the National Institutes of Health (NIH).
The proposal, prepared at NIH's request, responded to persistent concerns from members of Congress and others that numerous NIH employees had conflicts of interest arising from compensated activities outside the agency.
Rumor had it that the resulting NAPA proposal, contained in a January 2006 report, was "not what NIH wanted, so they simply buried the paper after it was given to the Director."
“One of the … people who felt it got deep-sixed thought it would be of interest to the NIH research community,” a friendly tipster wrote.
Secrecy News requested the document under the Freedom of Information Act, and it was promptly released by NIH.
See “Enhancing Risk Management at the National Institutes of Health Through an Audit of the Ethics Program,” prepared by a National Academy of Public Administration Staff Study Team, January 2006 (4 MB PDF file).
By preparing credible, bipartisan options now, before the bill becomes law, we can give the Administration a plan that is ready to implement rather than another study that gathers dust.
Even as companies and countries race to adopt AI, the U.S. lacks the capacity to fully characterize the behavior and risks of AI systems and ensure leadership across the AI stack. This gap has direct consequences for Commerce’s core missions.
The last remaining agreement limiting U.S. and Russian nuclear weapons has now expired. For the first time since 1972, there is no treaty-bound cap on strategic nuclear weapons.
As states take up AI regulation, they must prioritize transparency and develop technical capacity to ensure effective governance and build public trust.