Last week, in response to a request from Secrecy News for a copy of a thirty-year-old history of computer development at Los Alamos in the 1940s and 1950s, a reference librarian at Los Alamos National Laboratory apologetically explained that she could not release the requested document.
“We are sorry but due to a mandate from NNSA to the Laboratory and Research Library policies, we are unable to provide technical reports until further notice,” the librarian wrote. You want information from the Library? Don’t be silly!
Fortunately, a copy of the document (pdf), which was not otherwise available online, was obtained independently and it has been added to our Los Alamos document collection.
Among other curiosities, the report describes work on an early chess-playing program for the MANIAC computer in the 1950s:
“Because of the slow speed of MANIAC (about 10,000 instructions per second) we had to restrict play to a 6 by 6 board, removing the bishops and their pawns. Even then, moves averaged about 10 minutes for a two-move look-ahead strategy.”
See “Computing at LASL in the 1940s and 1950s” by Roger B. Lazarus et al., report number LA-6943-H, May 1978.