The easy availability of high-resolution imagery of much of the Earth’s surface through Google Earth has presented a significant challenge to longstanding secrecy and national security policies, and has produced several distinct types of reactions from concerned governments, according to a recent report (pdf) from the DNI Open Source Center (OSC).
“As the initial shock wore off, five main responses to the ‘Google threat’ emerged from nations around the world: negotiations with Google, banning Google products, developing a similar product, taking evasive measures, and nonchalance,” the OSC report said.
The report documents these responses with citations to published news sources. It also notes several incidents in which terrorists or irregular military forces reportedly used Google Earth to plan or conduct attacks.
The OSC report has not been approved for public release, but a copy was obtained by Secrecy News. See “The Google Controversy — Two Years Later,” Open Source Center, 30 July 2008.
Further background on the impact of commercial satellite imagery may be found in “Can You Spot the Chinese Nuclear Sub?” by Sharon Weinberger, Discover, August 2008.
Due to government restrictions, lawsuits, or other arrangements with Google, quite a few locations have been excluded from detailed coverage in Google Earth. Many of these were identified in “Blurred Out: 51 Things You Aren’t Allowed to See on Google Maps,” IT Security, July 15, 2008.
Both articles were cited by the OSC in its new report.