Prosecutorial Discretion in Immigration Cases, and More from CRS

“Under the Federal criminal justice system, the prosecutor has wide latitude in determining when, whom, how, and even whether to prosecute for apparent violations of Federal criminal law,” says the U.S. Attorneys’ Manual. “The prosecutor’s broad discretion in such areas as initiating or foregoing prosecutions, selecting or recommending specific charges, and terminating prosecutions by accepting guilty pleas has been recognized on numerous occasions by the courts.” (Chapter 9-27).

Although prosecutors enjoy broad discretion concerning whether and whom to prosecute, there are limits, the Manual says, and consequences for prosecutorial overreach:  “Serious, unjustified departures from the principles set forth herein are [to be] followed by such remedial action, including the imposition of disciplinary sanctions, when warranted, as are deemed appropriate.”

(After the execution of Socrates, remorseful Athenians rose up against his three prosecutors, according to the uncorroborated account of Diogenes Laertius.  Meletus was stoned to death, while Anytus and Lycon were banished.)

The exercise of prosecutorial discretion is discussed in a new report from the Congressional Research Service, which focuses particularly on immigration cases.

The report “addresses the constitutional and other foundations for the doctrine of prosecutorial discretion, as well as the potential ways in which prosecutorial discretion may be exercised in the immigration context.” It also considers “potential constitutional, statutory, and administrative constraints upon the exercise of prosecutorial discretion.”

See Prosecutorial Discretion in Immigration Enforcement: Legal Issues, January 17, 2013.

Some other new and updated CRS products that Congress has not authorized CRS to release to the public include these:

Chemical Facility Security: Issues and Options for the 113th Congress, January 14, 2013

Nonstrategic Nuclear Weapons, December 19, 2012

The Protection of Classified Information: The Legal Framework, January 10, 2013

Crisis in Mali, January 14, 2013

FAS Roundup: January 21, 2013

U.S. and North Korea relations, disposal of nuclear weapons components and much more.

From the Blogs

  • Strategy Lacking for Disposal of Nuclear Weapons Components: According to an internal Department of Energy contractor report, there is a “large inventory” of classified nuclear weapons components “scattered across” the nation’s nuclear weapons complex and awaiting disposal. But there is no effective disposal strategy currently in place.
  • Sandia Scientists Model Dynamics of Social Protest: Steven Aftergood writes that researchers at Sandia National Laboratories have been studying the ways that information, ideas and behaviors propagate through social networks in order to gain advance warning of cyber attacks or other threatening behavior.

Continue reading

Sandia Scientists Model Dynamics of Social Protest

Researchers at Sandia National Laboratories have been studying the ways that information, ideas and behaviors propagate through social networks in order to gain advance warning of cyber attacks or other threatening behavior.

The initial problem is how to explain the disparate consequences of seemingly similar triggering events. Thus, in 2005, the Danish newspaper Jyllands-Posten published cartoons depicting the Prophet Muhammad, prompting widespread protests. In 2006, by contrast, the Pope gave a lecture in which he made comments about Islam that some considered derogatory, but the ensuing controversy quickly faded away.

“While each event appeared at the outset to have the potential to trigger significant protests, the ‘Danish cartoons’ incident ultimately led to substantial Muslim mobilization, including massive protests and considerable violence, while outrage triggered by the pope lecture quickly subsided with essentially no violence,” wrote Sandia authors Richard Colbaugh and Kristin Glass.  “It would obviously be very useful to have the capability to distinguish these two types of reaction as early in the event lifecycle as possible.”

What accounts for the difference in these outcomes? The intrinsic qualities of the events are not sufficient to explain why one had disruptive consequences and the other did not. Rather, the authors say, one must factor in the mechanisms of influence by which individual responses are shaped and spread.

By way of analogy, it has been shown that “it is likely to be impossible to predict movie revenues, even very roughly, based on the intrinsic information available concerning the movie” such as cast or genre, but that “it *is* possible to identify early indicators of movie success, such as temporal patterns in pre-release ‘buzz’, and to use these indicators to accurately predict ultimate box office revenues.”

The Sandia authors developed a methodology that reflects the “topological properties” of social and information networks — including the density and hierarchy of connections among network members — and modeled the dynamics of “social diffusion events” in which individuals exercise influence on one another.
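The paper itself should be consulted for the authors' actual formalism, but the underlying intuition — that network topology, not just the intrinsic character of the triggering event, determines whether a cascade becomes self-sustaining — can be illustrated with a toy linear-threshold diffusion model. The graph, threshold value, and function below are hypothetical examples, not drawn from the Sandia work:

```python
def simulate_diffusion(adjacency, seeds, threshold=0.3, max_steps=50):
    """Toy linear-threshold model: an inactive node activates once the
    fraction of its already-active neighbors reaches `threshold`."""
    active = set(seeds)
    for _ in range(max_steps):
        newly_active = set()
        for node, neighbors in adjacency.items():
            if node in active or not neighbors:
                continue
            frac = sum(n in active for n in neighbors) / len(neighbors)
            if frac >= threshold:
                newly_active.add(node)
        if not newly_active:
            break  # the cascade has stalled
        active |= newly_active
    return active

# Hypothetical network: a dense cluster (nodes 0-3) joined to a sparse chain (4-6).
graph = {
    0: [1, 2, 3],
    1: [0, 2, 3],
    2: [0, 1, 3],
    3: [0, 1, 2, 4],
    4: [3, 5],
    5: [4, 6],
    6: [5],
}

# Seeding two nodes inside the dense cluster sweeps the whole network...
print(sorted(simulate_diffusion(graph, seeds={0, 1})))  # [0, 1, 2, 3, 4, 5, 6]
# ...while an equal-sized seed on the periphery stalls after one hop.
print(sorted(simulate_diffusion(graph, seeds={5, 6})))  # [4, 5, 6]
```

The two runs start from seeds of identical size, yet one cascade becomes self-sustaining and the other dies out — a crude analogue of the early-lifecycle distinction between the cartoon protests and the papal-lecture controversy.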

They report that their model lends itself, among other things, to “distinguishing successful mobilization and protest events, that is, mobilizations that become large and self-sustaining, from unsuccessful ones early in their lifecycle.”

They tested the model to predict the spread of textual memes, to distinguish between events that generated significant protest (a May 2005 Quran desecration) and those that did not (the knighting of Salman Rushdie in 2007), and to provide early warning of cyber attacks.

The authors’ research was sponsored by the Department of Defense and the Department of Homeland Security, among others.  See Early warning analysis for social diffusion events by Richard Colbaugh and Kristin Glass, originally published in Security Informatics, Vol. 1, 2012, SAND 2010-5334C.

Strategy Lacking for Disposal of Nuclear Weapons Components

There is a “large inventory” of classified nuclear weapons components “scattered across” the nation’s nuclear weapons complex and awaiting disposal, according to an internal Department of Energy contractor report last year.

But “there is no complex-wide cost-effective classified weapon disposition strategy.” And as a result, “Only a small portion of the inventory has been dispositioned and it has not always been in a cost-effective manner.”

See Acceptance of Classified Excess Components for Disposal at Area 5, presented at the Spring 2012 Waste Generator Workshop, April 24, 2012.

FAS Roundup: January 14, 2013

New debate on cyber security, sailors sue TEPCO and much more.

Up for Debate: U.S. Cyber Policy

The United States has incorporated cyber security into its foreign policy, reportedly using the Stuxnet worm to destroy nearly 1,000 Iranian centrifuges, as detailed in June 2012 press accounts. Countries such as Iran are also using cyber technologies to cause disruptions, such as the October 2012 cyberattacks on U.S. banks. With the growing threat of cyber attacks, how should the U.S. operate in this arena? Has cyber warfare made the United States more or less safe?

In a new edition of the FAS online debate series “Up for Debate,” Mr. Joe Costa of the Cohen Group, Dr. James Lewis of the Center for Strategic and International Studies (CSIS), and Dr. Martin Libicki of the RAND Corporation debate how the United States should operate within the cyber domain.

Read the debate here.

Continue reading

Preparatory Commission CTBTO Conference

The Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO), an international organization based in Vienna, is organizing a science and technology conference in Vienna from June 17-21, 2013.

This is the fourth in a series of international conferences promoting the exchange of knowledge and ideas between the CTBTO and leading scientists around the world, with the goal of strengthening the relationship between the CTBTO and the broader scientific community.

For more information on the conference and the call for papers, click here.


Up for Debate: Cybersecurity

Mr. Joe Costa of the Cohen Group, Dr. James Lewis of the Center for Strategic and International Studies (CSIS), and Dr. Martin Libicki of the RAND Corporation debate how the United States should operate within the cyber domain.

Debate: How Should the United States Operate in the Cyber Domain?

The United States has incorporated cyber security into its foreign policy, reportedly using the Stuxnet worm to destroy nearly 1,000 Iranian centrifuges, as detailed in June 2012 press accounts. Countries such as Iran are also using cyber technologies to cause disruptions, such as the October 2012 cyberattacks on U.S. banks. With the growing threat of cyber attacks, how should the U.S. operate in this arena? Has cyber warfare made the United States more or less safe?

Mr. Joe Costa of the Cohen Group, Dr. James Lewis of the Center for Strategic and International Studies (CSIS), and Dr. Martin Libicki of the RAND Corporation debate how the United States should operate within the cyber domain.


Mr. Joe Costa, The Cohen Group

The consequences of a nuclear-armed Iran for international peace and security are so severe that any responsible country must exhaust all options short of war to prevent that outcome. The narrow and direct use of cyberweapons against Tehran is an additional policy tool to resolve the Iranian nuclear challenge diplomatically. To mitigate the long-term dangers created by cyberattacks, the United States has taken important first steps, and must continue to advance an international conversation that will place appropriate constraints on offensive cyberspace operations.

By the time President Obama assumed office in January 2009, Iran had amassed nearly a bomb’s worth of low-enriched uranium. It had the technical capability to turn this material into weapons-usable fuel if a decision was made to do so. Negotiations with Tehran had failed on multiple occasions over the previous six years. The United States had intelligence that Iran was developing a second covert enrichment plant with no civilian application under the hardened mountains of Qom. Israel was sending a clear and direct message that there was limited time remaining before it may launch a military strike.

The President was approaching a choice between two worst-case scenarios: the possibility that a nuclear-armed Iran could emerge under his watch; or, that a military conflict in the Middle East would occur to prevent that outcome. Both would have catastrophic consequences for global stability.

It was under these circumstances that a malicious worm reportedly developed by the United States and Israel infiltrated Iran’s computer network at the Natanz enrichment plant and disrupted 20% of its operating centrifuges. Nearly a year later, a separate virus collected information from the personal computers of senior Iranian officials. A third wiped out data at Iran’s Oil Ministry, forcing the government to temporarily disconnect some of its oil terminals from the Internet.

These cyberattacks served several useful purposes. The so-called Stuxnet virus that struck Iran’s spinning centrifuges temporarily delayed the program and created a slightly longer window of time to assemble a diplomatic resolution to the crisis. More importantly, they demonstrated to Israel that there was credible determination to delay a nuclear-armed Iran and thereby contributed to holding off a potential military strike.

The Flame virus secretly gathered sensitive information from the personal computers of high-ranking Iranian officials. Acquiring real-time intelligence is critical in identifying potential threats before they evolve and demonstrating to the Iranian leadership that they are being watched 24 hours a day, seven days a week. The Supreme Leader is much less likely to pursue a nuclear weapon if he believes there is a high probability of getting caught.

These tangible benefits have come at a cost. Due to a programming glitch, the Stuxnet virus was released to the world. It is now accessible to states or individuals who do not have the U.S.’s best interests in mind.

In 2011, Iran’s military created a cyber unit that U.S. officials believe is behind recent cyberattacks that knocked some U.S. banks offline, and rendered useless 30,000 computers at Saudi Arabia’s state oil company, Aramco, in what Secretary of Defense Leon Panetta called, “The most destructive attack that the private sector has seen to date.” Soon after, a similar virus shut down the website and e-mail servers of Qatar’s national energy company, RasGas.

The danger of Iranian retaliation, however, is being managed. In an indirect warning to Tehran, Secretary Panetta declared, “If we detect an imminent threat of attack that will cause significant, physical destruction in the United States or kill American citizens, we need to have the option to take action against those who would attack us to defend this nation when directed by the president.” Iran is not likely to test the credibility of that statement.

Looking to the risks of the future, the U.S. is seeking to constrain state behavior in cyberspace by applying established laws of war to this new domain. As State Department Legal Counsel Harold Hongju Koh recently said, “Cyberspace is not a ‘law-free’ zone where anyone can conduct hostile activities without rules or restraint.”

In the next four years, the Administration must continue to maintain its leadership position on this issue and drive a global dialogue that will create the international institutions and governing principles that will place appropriate boundaries around this emerging technology.


Dr. James Lewis, Center for Strategic and International Studies (CSIS) 

Some say we have opened Pandora’s Box and militarized cyberspace, unleashing an out-of-control cyber arms race. What anyone who says this has really unleashed is a herd of clichés. Nations have been exploiting computer networks since the 1980s. Cyber techniques provide powerful new tools for espionage, coercion, and attack. Since the line between espionage and “attack” is negligibly thin – once you are in, you can harvest information or, if you wish, do damage – any nation that can conduct espionage in cyberspace can also carry out an attack. No country will forsake espionage and in consequence, cyberattack is inescapably with us.

Cyber provides a new tool of coercion available to nations (and to some private actors). It is fast, covert and relatively cheap. Few defenses are in place against it. At least a dozen countries are developing offensive cyber capabilities, experimenting with their use and testing plans and doctrine. The U.S. is one of them, and so is Iran.

There has been a sporadic, largely covert conflict between the U.S. and Iran since 1979. Iran intervened in the Iraq war to attack U.S. troops. It plays a murky role in Afghanistan and a harmful, not-so-murky role in Lebanon and Syria. In turn, there are credible reports of U.S. covert actions against Iran’s nuclear programs, in cooperation with several allies – televising the wreckage of a captured drone is a good indicator of American activity.

Covert action makes people uncomfortable, but the U.S. has used it in the past against hostile authoritarian regimes. If there is covert action against Iran, it is Iran’s unwillingness to comply with IAEA rules and give up its atomic bomb program that inspires it. In any case, Iran is no stranger itself to covert warfare and can hardly complain as it flies weapons to Assad.

Cyber espionage and attack now play a role in this covert engagement.  Iran suspects that many nations exploit its networks for intelligence purposes. Recently, unknown hands used a cyberattack to interfere with Iran’s major oil terminal at Kharg Island. Iran has experienced at least one serious attack. Despite Iranian denials, the Stuxnet virus did real damage to the nuclear program. Stuxnet was precise. Although it spread to many networks, it damaged only one. There was no collateral damage and little political risk – much more attractive than an air strike or raid.

Iran has been developing cyber capabilities for about five years. The initial motive was political: Iran does not want its citizens to have untrammeled access to information, and its rulers, after their bloody suppression of the 2009 election protests, fear that the power of social networks could unleash something like the Arab Spring. Iran has developed an impressive array of institutions to manage its new cyber tools, with a “High Council of Cyberspace,” and a proxy “cyber army” controlled by intelligence agencies and the Iranian Revolutionary Guard. It is even trying to build its own national internet and national search engine (that would find only approved sites). Its programs resemble those of China, and Russia may also help.

Iran has limited technical capabilities for cyber attack, but it has shown it can use these in unexpected ways. Iran used its skills in August to erase data from 30,000 computers at Saudi Aramco, a major oil producer (probably in retaliation for Kharg), and in September against major U.S. banks (in retaliation for sanctions). The two attacks were probably tests – of a simple weapon in the case of Aramco and of the U.S. reaction in the case of the banks.

In response, Secretary of Defense Panetta announced a new doctrine that would allow Cyber Command to block attacks or preemptively disable an attacking computer in another country. In his speech, he mentioned only Russia, China – and Iran. Coincidentally, Iranian action against the banks seems to have stopped after this, but the attacks may simply have run their course.

The Gulf has become an active theater for cyberattack, with many nations engaged. This is uncertain terrain. The internet creates unknown political forces and offers new possibilities for disruption. There is little understanding among nations on how to manage the new arena for conflict. It is not a stable situation, but the source of instability is not cyber weapons but the tense relations between the two countries. Cyberattack is an effect, not the cause, another chapter in a thirty-year dispute.


Dr. Martin Libicki, The RAND Corporation

A matter of degree: Who can authorize a cyberattack?

Understanding when the United States should engage in cyberwar and who should approve cyberattacks requires understanding that cyberwar has multiple personalities: operational, strategic, and that great gray area in-between.

Operational cyberwar, for instance, is the use of cyberattacks to support the use of traditional physical (aka kinetic) force. An example (if true) would be how cyberattacks on air-defense radar enabled Israeli jets to safely knock out a Syrian nuclear reactor in 2007. Operational cyberwar is no more problematic than the kinetic operation it would support. If lethal means are acceptable, non-lethal means cannot be a problem. Thus operational cyberwar decisions need not be made by the president, at least not once a precedent is set.

Strategic cyberwar, for its part, is the use of cyberattacks to punish, harass, or annoy the people of another country. The attack by Russians on Estonians in 2007 was an act of strategic cyberwar, albeit one that stayed comfortably within the zone of annoyance rather than anything worse. Once a country has carried out a strategic cyberwar campaign on another country, there is no hiding the fact that the attacker rejoices in the other’s discomfort. The decision to carry out a strategic cyberwar campaign has to be a decision made by a head of state – the president, in the case of America – and not by any military command or intelligence agency, just as the decision to blockade another country’s harbors cannot be made by the U.S. Navy acting on its own.

It’s that great gray area in between where the authority to carry out cyberattacks could profit from further definition. Take Stuxnet. Whoever carried it out is not at war with Iran (no one is), and the Natanz enrichment plant was not a military system in a war zone. So it wasn’t an operational cyberattack. However, the purpose of the attack did not appear aimed at making life miserable for the average Iranian; so it really could not be characterized as a strategic attack, either. Stuxnet was closer to an act of sabotage. Although sabotage is not an act of war, the difference between sabotage and a strategic bombing campaign is a matter of degree (and, invariably, casualties). At a lower level, the United Kingdom reportedly penetrated a jihadist web site and substituted a harmless article (on cupcake manufacturing) for a harmful one (bomb manufacturing); this may not have been the only interference with such web sites. A good rule of thumb is that if the results of the action are going to come to the president’s attention then the responsibility rests there as well. Whether repeat applications need specific authorization is a matter of details.

But the most difficult example is an action that (supposedly) has to take place faster than presidential authorization can be acquired. Let’s say there’s an incoming cyberattack, which as we all know takes place at the speed of light. All will be lost if no one can pre-empt or at least react to it at comparable speed. And so, a return cyberattack takes place, and the president is awakened to find that disaster has been averted. Hence, the case for pre-authorization of “active defense.” But is pre-authorization wise? If intelligence on the nature, potential, and source of an attack were perfect, the response precise, and the rationale unassailable, why not? Alas, not only do men fall short of gods, but cyberwar does not really work that way. Consider, again, Stuxnet. By the time it wormed its way into the right computers at Natanz, exactly which system it came out of is not only past but irrelevant; it’s gone. It worked for months before the Iranians caught on (perhaps only by reading the New York Times). The cyberespionage campaigns that suck intellectual property from U.S. corporations take place over months; indeed, such attacks typically go on for a year prior to discovery. The attacks on bank web sites that Secretary of Defense Panetta ascribed to the Iranians did not have a detonation point that had to be stopped within milliseconds. And even if one could imagine an attack in progress that has yet to reach an imminent detonation point, blocking the attack at its destination rather than source is technically easier and raises fewer issues.

And that takes us back to our first rule. If the president has to answer for it, the president has to authorize it. In cyberspace, as in physical space, the buck stops there.


About the Debaters:

Joe Costa is an Associate at The Cohen Group. Previously, Joe was a researcher at Harvard’s Belfer Center for Science and International Affairs, where his focus was Iran’s nuclear program. He was a member of Harvard’s Iran Nuclear Negotiations Working Group, and is the current Director of the Truman National Security Project’s Nuclear Nonproliferation Expert Group. Joe served as a Rosenthal Fellow on the Committee on Homeland Security in the U.S. House of Representatives and earned a Masters in Public Policy at the University of Chicago.


James Lewis is a senior fellow and director of the Technology and Public Policy Program at CSIS. Before joining CSIS, he worked at the Departments of State and Commerce as a Foreign Service officer and as a member of the Senior Executive Service. Lewis’s recent work has focused on cybersecurity, including the groundbreaking report “Cybersecurity for the 44th Presidency,” space, and innovation. His current research examines the political effect of the Internet, strategic competition among nations, and technological innovation. Lewis received his Ph.D. from the University of Chicago.


Martin Libicki is a senior management scientist at the RAND Corporation. His research focuses on the impacts of information technology on domestic and national security. This work is documented in commercially published books—e.g., Conquest in Cyberspace: National Security and Information Warfare (Cambridge University Press, 2007) and Information Technology Standards: Quest for the Common Byte (Digital Press, 1995)—as well as in numerous monographs, notably How Insurgencies End (with Ben Connable, 2010), Cyberdeterrence and Cyberwar (2009), How Terrorist Groups End: Lessons for Countering al Qa’ida (with Seth G. Jones, 2008), Exploring Terrorist Targeting Preferences (with Peter Chalk and Melanie W. Sisson, 2007), and Who Runs What in the Global Information Grid (2000). His most recent research involved organizing the U.S. Air Force for cyberwar, exploiting cell phones in counterinsurgency, developing a post-9/11 information technology strategy for the U.S. Department of Justice, using biometrics for identity management, assessing the Terrorist Information Awareness program of the Defense Advanced Research Project Agency, conducting information security analysis for the FBI, and evaluating In-Q-Tel. Prior to joining RAND, Libicki spent 12 years at the National Defense University, three years on the Navy staff as program sponsor for industrial preparedness, and three years as a policy analyst for the U.S. General Accounting Office’s Energy and Minerals Division. Libicki received his Ph.D. in economics from the University of California, Berkeley.


About Up for Debate:

In Up For Debate, FAS invites knowledgeable outside contributors to discuss science policy and security issues. The debate among experts is conducted via email and posted on the FAS website. FAS invites a demographically and ideologically diverse group to comment – a unique FAS feature that allows readers to reach conclusions based on both sides of an argument rather than just one point of view.


Please read the guidelines for the official debate and rebuttal policy for participants of FAS’s ‘Up For Debate.’ All participants are required to follow these rules. Each opinion must stay on topic and feature relevant content, or be a rebuttal. Ad hominem and personal attacks, name calling, libel, and defamation are not allowed, and proper citations must be given.

Homeland Security Has Too Many Definitions, Says CRS

The existence of multiple, overlapping and inconsistent definitions of the term “homeland security” reflects and reinforces confusion in the homeland security mission, according to a newly updated report from the Congressional Research Service.

“Ten years after the September 11, 2001, terrorist attacks, the U.S. government does not have a single definition for ‘homeland security.’ [Instead,] different strategic documents and mission statements offer varying missions that are derived from different homeland security definitions.”

Most official definitions of homeland security include terrorism prevention.  Many but not all encompass disaster response. Most do not include border security, or maritime security, or immigration matters, or general resilience, though some do.

“An absence of consensus about the inclusion of these policy areas may result in unintended consequences for national homeland security operations,” the CRS report said. “For example, not including maritime security in the homeland security definition may result in policymakers, Congress, and stakeholders not adequately addressing maritime homeland security threats, or more specifically being able to prioritize federal investments in border versus intelligence activities.”

“The competing and varied definitions in these documents may indicate that there is no succinct homeland security concept. Without a succinct homeland security concept, policymakers and entities with homeland security responsibilities may not successfully coordinate or focus on the highest prioritized or most necessary activities.”

“At the national level, there does not appear to be an attempt to align definitions and missions among disparate federal entities,” CRS said.

Without a uniform definition, a coherent strategy cannot be formulated and homeland security policy is rudderless.  “Potentially, funding is driving priorities rather than priorities driving the funding.”

Speaking of funding, thirty federal departments, agencies, and entities other than the Department of Homeland Security receive annual homeland security funding, the CRS report said. In fact, approximately 50% of homeland security funding is appropriated to agencies other than the Department of Homeland Security.

See Defining Homeland Security: Analysis and Congressional Considerations, January 8, 2013.

Desalination, DNA Testing, and More from CRS

New and updated reports from the Congressional Research Service that have not been made available to the public include the following.

Desalination and Membrane Technologies: Federal Research and Adoption Issues, January 8, 2013

The Corporation for Public Broadcasting: Federal Funding and Issues, January 8, 2013

DNA Testing in Criminal Justice: Background, Current Law, Grants, and Issues, December 6, 2012

Environmental Considerations in Federal Procurement: An Overview of the Legal Authorities and Their Implementation, January 7, 2013

Responsibility Determinations Under the Federal Acquisition Regulation: Legal Standards and Procedures, January 4, 2013

Social Security: The Windfall Elimination Provision (WEP), January 8, 2013

Social Security: The Government Pension Offset (GPO), January 8, 2013

Economic Growth and the Unemployment Rate, January 7, 2013

Overview and Issues for Implementation of the Federal Cloud Computing Initiative: Implications for Federal Information Technology Reform Management, January 4, 2013

The National Telecommunications and Information Administration (NTIA): Issues for the 113th Congress, January 3, 2013

Military Medical Care: Questions and Answers, January 7, 2013

Israel: 2013 Elections Preview, January 8, 2013

Surveillance Court Orders Prove Hard to Declassify

The Foreign Intelligence Surveillance Court (FISC), which authorizes intelligence surveillance activities, acknowledged in 2007 that it had issued “legally significant decisions that remain classified and have not been released to the public.”

In 2010, the Office of the Director of National Intelligence and the Department of Justice undertook to declassify those Court rulings, but since then none has been released. Why not?

“We tried,” a senior intelligence agency official said, but the rulings were hard to declassify. After redacting classified operational information and other sensitive details, no intelligible text of any consequence remained, according to this official.

The Department of Justice made a similar assertion years ago in response to a lawsuit brought by the ACLU, stating that “Any legal discussion that may be contained in these materials would be inextricably intertwined with the operational details of the authorized surveillance.”

Although the 2010 declassification initiative has not been formally cancelled, it is unclear what, if anything, would reverse the failure to date to declassify the FISC orders.

In the debate over reauthorization of the FISA Amendments Act, Sen. Jeff Merkley offered an amendment that was intended to break the current impasse.  If a surveillance court order could not be declassified, the amendment proposed, then an unclassified summary of the order should be prepared.  (If even that were not possible, the amendment would have required a report on the status of the declassification process.)

The Merkley amendment, like others, was rejected by the full Senate.  But Senator Dianne Feinstein, the Intelligence Committee chair, offered her assistance to Sen. Merkley in advancing public access to FIS Court opinions.

“If the opinion cannot be made public, hopefully a summary of the opinion can,” Sen. Feinstein said on December 27. “And I have agreed with Senator Merkley to work together on this issue.”

But the intelligence agency official said that unclassified summaries of surveillance court decisions were probably not a satisfactory alternative.  A summary written by the Department of Justice would not be a statement of the court’s opinion at all, the official said.  At best, it would represent the Administration’s own understanding of what the court had ruled, paraphrased for public release.

What if the Court itself were to prepare its opinions in a “tearline” format, with a general statement of its findings presented separately from the more highly classified specifics of the case under discussion?  Would that not facilitate declassification and release of the court rulings?

“That might work,” the official said.  However, he said, it would be “awkward” for agencies to presume to tell the court how to format its opinions.

But it would not be awkward for members of Congress to make such a request, perhaps in a forthcoming letter referenced by Sen. Feinstein.

“I have offered to Senator Merkley to write a letter requesting declassification of more FISA Court opinions,” she said. “If the letter does not work, we will do another intelligence authorization bill next year, and we can discuss what can be added to that bill on this issue.”

In the past, a handful of FISA Court opinions have been declassified and made public, including a FISC opinion dated May 17, 2002, a FIS Court of Review (FISCR) opinion dated November 18, 2002, and a FISCR opinion dated August 22, 2008.