The threat to public safety from unmanned aerial systems (drones) is not just foreseeable — it already exists in the form of numerous near-collisions with manned aircraft, a new report from the Congressional Research Service observes.
“Between 2016 and 2019, airline pilots reported, on average, more than 100 drone sightings per month to FAA, and social media have transmitted photos and videos taken by drones in close proximity to airports and passenger airliners,” the report said.
“In addition to careless and reckless drone operations, homeland security and law enforcement agencies have uncovered incidents involving drones transporting illegal drugs across U.S. borders, dropping contraband into prison yards, and conducting industrial espionage,” CRS said. See Protecting Against Rogue Drones, CRS In Focus, May 14, 2020.
And see, relatedly:
Counter-Unmanned Aircraft System Techniques, Army Techniques Publication 3-01.81, April 2017
Department of Defense Counter-Unmanned Aircraft Systems, Congressional Research Service, April 7, 2020
Guidance Regarding Department Activities to Protect Certain Facilities or Assets from Unmanned Aircraft and Unmanned Aircraft Systems, memorandum from the Attorney General, April 2020