


Security Classification of Information: Table of Contents

Appendix G.
CLASSIFICATION DURATION BASED ON THE
PROBABILITY OF UNAUTHORIZED DISCLOSURE
OF THE CLASSIFIED INFORMATION

INTRODUCTION

When a classifier (the government) determines that certain information is National Security Information (NSI), then that classifier also must decide whether that NSI can be assigned a classification duration.1 Such a classification duration, when assigned,* is usually specified in terms of the happening of an event or with reference to a future date (see Chapter 8). When classification duration is specified, the classifier thereby decides that the information can be automatically declassified. However, even though the declassification is automatic, it is under governmental control because the government controls the happening of the event which declassifies the information or by which the future date is determined.

Unauthorized disclosures of classified information+ sometimes also essentially declassify that information, depending on the extent of dissemination of the unauthorized disclosure and the recipients thereof.++ "Declassifications" by such disclosures are not directly controlled by the government even though the government "owns" the information. However, the probability of an unauthorized disclosure of classified information depends on some factors that can be controlled by the government.

The major factor that determines the probability of unauthorized disclosure of classified information is the number of persons to whom the classified information has been given.** This probability increases with an increase in the number of persons who know that information. Although it is not possible to specify exactly when an unauthorized disclosure to an adversary will occur, it may be possible to predict when the probability of that unauthorized disclosure will become relatively high (because of the large number of persons who know the information) and thereby to estimate an effective classification duration. The government can certainly establish rules concerning who can legitimately have access to classified information (e.g., by security clearances and need-to-know requirements) and can thereby control the number of persons who know the classified information. Therefore, the government can control (to some extent) the probability of unauthorized disclosure of that information and thus the classification duration.

Unauthorized disclosures can be either deliberate or inadvertent, direct or indirect, oral or written. The purpose of this appendix is to describe how unauthorized disclosures of classified information to adversary foreign governments, which thereby lead to possible de facto declassifications, depend on the number of persons who know the classified information, on the number of communications by those persons, on the number of copies of each communication, on classification management practices, and on other factors. The following sections describe some relationships for estimating the probability of unauthorized disclosure of classified information to an adversary for several different unauthorized disclosure scenarios. Those discussions may be more applicable to unauthorized disclosures of classified scientific or technical information concerning major classified projects than to unauthorized disclosures in other situations.* However, the basic principles should be applicable to most types of classified information and unauthorized disclosure scenarios. It should be noted that estimates of effective classification duration based on unauthorized disclosures are applicable to Restricted Data and Formerly Restricted Data as well as to NSI.

The material in this appendix is discussed in a very qualitative manner and does not encompass all possible scenarios. The information presented is intended to stimulate further discussions about the possibility of estimating classification duration based on the number of persons who know the classified information and on other factors. This information may be useful in developing guidelines for specific types of classified information that state that this information should be considered for declassification when it becomes known to more than a certain number of authorized persons (cleared persons with a need to know). Similarly, this information may be useful in developing guidelines, based on the number of persons authorized to know the information, for downgrading some types of classified information.

Estimates of the probabilities of unauthorized disclosures by different scenarios may be of significant assistance to classification management and security personnel. Such estimates may identify, for specific classified projects, activities, or technologies, certain scenarios as having unacceptable risks of unauthorized disclosure of classified information. Classification management personnel can then channel their classification education and document review efforts toward reducing those major vulnerabilities. Security personnel can also use those estimates to allocate their resources.

The discussion in this appendix does not include "second order" effects. That is, the discussion has not considered the increased unauthorized disclosure when a recipient of an unauthorized disclosure of classified information includes that information in an original communication prepared and disseminated by that recipient.

PROBABILITY OF DELIBERATE UNAUTHORIZED
DISCLOSURE TO AN ADVERSARY

The probability of a deliberate unauthorized disclosure of classified information to an adversary government (PDD) is a linear function of the number of persons who know that information. That is, if ten persons know the information, then the probability of that information being deliberately disclosed is ten times the probability of that information being deliberately disclosed when only one person knows the information. The mathematical equation for that probability is

PDD = k1 x NP

where PDD is the probability of deliberate unauthorized disclosure of classified information to an adversary government, NP is the number of persons who know that information, and k1 is the probability that one person will deliberately disclose classified information to an adversary government (a "disloyalty" constant). The constant k1 is assumed to be the same for all persons.* Based on the number of U.S. citizens with security clearances who have deliberately revealed classified information to foreign governments in recent decades (as publicly reported), k1 might be on the order of 10^-5 (1 spy in 100,000 cleared citizens).+, ++, **, +++
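The linear relationship can be sketched numerically. In this Python sketch, only k1 ~ 10^-5 comes from the text; the NP values and the function name are illustrative assumptions.

```python
# Sketch of PDD = k1 * NP from the appendix. Only k1 ~ 1e-5 (one spy per
# 100,000 cleared persons) comes from the text; the NP values are
# illustrative assumptions.
K1 = 1e-5  # "disloyalty" constant

def pdd(n_persons: int, k1: float = K1) -> float:
    """Probability of deliberate unauthorized disclosure to an adversary."""
    return k1 * n_persons

for n in (1, 10, 100, 1000):
    print(f"NP = {n:>4}: PDD = {pdd(n):.0e}")
```

As the loop shows, the model scales strictly linearly: ten times as many knowledgeable persons means ten times the probability of a deliberate disclosure.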

Deliberate unauthorized disclosures are direct disclosures. The classified information goes directly to someone not authorized to receive that information. Whether deliberate disclosures are oral or written is not important—those deliberate disclosures get directly to an adversary foreign government.

PROBABILITY OF INADVERTENT UNAUTHORIZED
DISCLOSURE TO AN ADVERSARY

Inadvertent unauthorized disclosures occur by mistake. They occur during communications (either oral or written) between persons. The probability that an inadvertent unauthorized disclosure of classified information will occur and will subsequently reach an adversary government depends on several factors. That probability, PID (probability of inadvertent unauthorized disclosure to an adversary government), depends directly on the number of different "unclassified" communications that inadvertently contain classified information, the number of such communications that reach an unauthorized person, and the probability that such an unauthorized recipient will transmit the classified information to an adversary government. The general outline for determining PID for a classified project will be given in the following paragraphs. Subsequent sections in this appendix will discuss this probability in more detail for some common inadvertent unauthorized disclosure scenarios.

When an unclassified communication is originated and transmitted by a person associated with a classified project and that person knows classified project information, then there is always a probability that such a communication will inadvertently contain classified project information. For a classified project, the total number of different (i.e., separate and distinct) unclassified communications (i.e., intended to be unclassified but which might contain classified information) issued during the course of a year by persons associated with that project is equal to the number of persons working on that project who know classified project information (NP), multiplied by the average number of different communications per such person per year (NCOM).* This product represents the total number of communications by project personnel in which unauthorized disclosures concerning that project have the opportunity to appear during that year.

Whether or not an unclassified communication actually contains classified information when originated depends on the carelessness of the person who generates that communication. Multiplying the total number of communications per year (NP × NCOM) by a "carelessness constant," kc, which expresses the probability that a project person will inadvertently include classified information in an unclassified communication, gives the number of communications that contain classified information when they are originated (but before they are issued).

The number of unclassified communications that inadvertently contain classified information when issued can be significantly reduced by a review of those communications by classification experts before they are issued. Such classification management procedures are particularly effective for written communications but are less successful for oral communications (see later in this appendix). Therefore, a classification review factor (CREV) should be included that incorporates the effectiveness of classification review procedures in reducing the inadvertent disclosure of classified information. As used in this discussion, CREV expresses the probability that classification experts will overlook classified information during their classification review of a communication (i.e., this factor represents the probability that the classified information will escape detection and removal by a classification expert).* Effective classification management procedures result in low values for CREV. If there is no review by classification experts, then CREV = 1.

The total number of different unclassified communications that are issued during a year by personnel working on a classified project and which actually contain classified information, NCOMCL, is represented by the following equation:

NCOMCL = kc x NP x NCOM x CREV

The number of such communications that will actually reach unauthorized persons depends on the total number of distributed copies of such communications and the probability that one of those distributed communications will be obtained by an unauthorized person. The total number of distributed copies of communications containing classified information is equal to the number of different such communications, NCOMCL, multiplied by COP, the average number of distributed copies of each different communication.+ The probability that one of those communications will reach an unauthorized person is expressed by the probability factor UAR (unauthorized recipient).++

Even if an unclassified communication that inadvertently contains classified information reaches an unauthorized recipient, it is not a certainty that this recipient will transmit the classified information to an adversary government. Therefore, the equation for the probability that an inadvertent unauthorized disclosure will occur and will reach an adversary government, PID, must include the probability that an unauthorized recipient will transmit the classified information to an adversary government. This latter probability is expressed by the probability factor RTA (recipient to adversary). [The value for RTA was presumed to be 1 for the deliberate disclosure scenario.] The overall equation for PID can therefore be written as follows:

PID = kc x NP x NCOM x CREV x COP x UAR x RTA

As is evident, PID is a linear function of NP, the number of persons who know the classified project information.
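The PID equation can be evaluated for a hypothetical project to show how the factors combine; every numeric value in this sketch is an assumption chosen for illustration, not a figure from the text.

```python
def pid(kc, NP, NCOM, CREV, COP, UAR, RTA):
    """PID = kc * NP * NCOM * CREV * COP * UAR * RTA (per the appendix)."""
    return kc * NP * NCOM * CREV * COP * UAR * RTA

# Hypothetical project (all values are assumptions): 500 persons, 20
# communications each per year, carelessness kc = 1e-3, reviewed documents
# (CREV = 1e-2), 10 copies per communication, a 1% chance a copy reaches
# an unauthorized person, and a 1-in-100,000 chance that recipient passes
# the information to an adversary government.
p = pid(kc=1e-3, NP=500, NCOM=20, CREV=1e-2, COP=10, UAR=1e-2, RTA=1e-5)
print(f"PID = {p:.1e} per year")
```

Because the factors simply multiply, halving any single one (say, CREV through better review) halves the overall probability.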

There are two major types of inadvertent unauthorized disclosures, direct and indirect. Within each of those types, there can be either oral or written disclosures. The following sections will discuss values for the terms in the above equation for PID for four different unauthorized disclosure scenarios (direct–oral; direct–written; indirect–oral; indirect–written).

Probability of Inadvertent Direct Unauthorized
Disclosure to an Adversary

An inadvertent direct unauthorized disclosure occurs when a person working on a classified project communicates classified information directly (by mistake, in a communication intended to be unclassified) to a person not authorized to receive that information. [As mentioned earlier, for the purposes of this appendix the recipients of an unauthorized disclosure are assumed to not have security clearances.] This unauthorized disclosure can be by an oral or written communication to a colleague not working on the classified project, orally to a friend or neighbor, to many persons via an article in a professional journal, or by other similar means. The probability of inadvertent direct unauthorized disclosure (PIDD) is expressed as follows:

PIDD = k2 x NP x NCOM x CREV x COP x UAR x RTA

where k2 is a "carelessness" constant, the probability that a person working on a classified project will inadvertently disclose classified project information by this scenario, and the other terms are as defined earlier. This carelessness constant k2 is expected to be greater than the disloyalty constant k1 because there seem to be more careless people than espionage agents. That constant is also assumed to be the same for all individuals who have access to classified information (i.e., who have the same clearance levels).+ Therefore, if k1 is about 10^-5, then k2 might be expected to be on the order of 10^-4 or 10^-3. If there are 100 times fewer disloyal persons than careless persons, then k2 would be about 10^-3.

The carelessness constant k2 is likely to be larger for oral disclosures than for written disclosures. Writers generally review their communications before they are sent and may thereby discover and correct any classification errors. There is no opportunity to review and delete spoken words.++ It might be reasonable to assume that the oral carelessness constant, k2(o), is ten times larger than the written carelessness constant, k2(w) [e.g., k2(o) = 10^-3; k2(w) = 10^-4].

Possible values for NCOM, the average number of different communications per project person, must be estimated for each specific classified project. The average number of copies of each communication, COP, also must be estimated for each project. For oral disclosures, COP(o) could vary from one (a one-to-one communication) to several hundred or more (via a presentation at a large meeting). For written disclosures, COP(w) could vary from one (a letter to a colleague) to several thousand (an article in a professional journal).

Within the Department of Energy (DOE) and its contractors, the probability of inadvertent direct unauthorized written disclosures, PIDD(w), is significantly reduced by classification management procedures. DOE requires that all written unclassified scientific and technical communications related to a classified project and intended for widespread internal distribution or for public release must be reviewed by a Classification Officer prior to distribution of those communications.*, 2 Sometimes an organization's procedures will also require a review by an Authorized Derivative Classifier (ADC) prior to the Classification Officer review. Presumably those reviews would detect most inadvertent (careless) direct disclosures via writings. Assuming ADCs to be about 99% accurate in their reviews, that Classification Officers are about 99.9% accurate in their reviews, and that the classification management system functions as intended (the documents are actually reviewed prior to their release), then the value of the classification review factor, CREV, would be expected to be about 10^-2 (ADC review) or 10^-5 (Classification Officer and ADC review) for the inadvertent direct unauthorized written disclosure scenario.
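The stacked-review arithmetic can be checked directly. The 99% and 99.9% accuracies are the ones assumed in the text; treating the two reviews as statistically independent is an additional assumption of this sketch.

```python
# CREV is the probability that classified content slips past every review.
# Assuming the ADC and Classification Officer reviews are independent,
# their miss probabilities multiply.
adc_miss = 1 - 0.99       # ADC overlooks classified info: ~1e-2
officer_miss = 1 - 0.999  # Classification Officer overlooks it: ~1e-3

crev_adc_only = adc_miss
crev_both = adc_miss * officer_miss  # ~1e-5

print(f"CREV (ADC review only):      {crev_adc_only:.0e}")
print(f"CREV (ADC + Officer review): {crev_both:.0e}")
```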

The DOE classification management program also requires a similar classification review for formal oral presentations. However, this review is usually limited to a review of the view graphs, slides, abstracts, and handouts that are used for these oral presentations. It is difficult to control the impromptu oral statements that are a part of those presentations, especially those made during the question and answer periods after the formal presentations. The classification review factor for oral presentations will be greater than for written presentations [e.g., CREV(o) may be about 10^-1 to 10^-2].

In the inadvertent direct unauthorized disclosure scenario, the recipient of the information disclosed is an uncleared individual (by definition of the scenario). Therefore, UAR, the probability that an unauthorized person will receive the communication, is equal to 1. The "recipient to adversary" factor, RTA, which is the probability that the recipient of the unauthorized disclosure will send the classified information to an adversary government, must be estimated. In the ideal situation, the recipient would not recognize that the disclosed information was classified, or would not otherwise recognize its importance, and would not disseminate that information further or use it in any manner adverse to U.S. interests. In the worst case situation, the recipient would be a foreign agent who would recognize the significance of the information and transmit it to an adversary government.

Earlier, for the deliberate disclosure situation, it was assumed that about 1 in every 100,000 persons who had a security clearance would be an espionage agent. For the inadvertent direct disclosure situation, the question is "What is the probability that an uncleared individual who received the classified information would reveal that information to an adversary government (i.e., what is the probability that the uncleared individual is a spy)?" If 80% of the persons who apply for a security clearance get that clearance (a not unreasonable assumption), then there is only about a 20% greater probability that an individual member of the general population is an espionage agent, as compared with the probability that a cleared individual is an espionage agent.+

For the current probability estimations, a 20% increase is minor when considering the other uncertainties in these estimates. Therefore, it will be assumed that the probability that a member of the general public is an espionage agent is about 1 in 100,000.* Based on that assumption, the risk that the recipient of an inadvertent direct disclosure would send the classified information to an adversary would be about 10^-5 (RTA = 10^-5). However, this view neglects possible real-life situations where persons working on classified projects might be targeted by adversaries. Such a person's national or international colleagues might include individuals who gather intelligence for adversary governments. That circumstance might significantly increase the value of RTA to about 10^-2 or 10^-3. If the written communication was via a professional journal or other widely circulated publication and the author was well known as a participant in classified projects, then the probability that an agent of an adversary government would read the article would be expected to be very nearly 1. Generally, RTA might range from about 10^-2 to 10^-5 for these scenarios.

The overall probabilities of inadvertent direct unauthorized oral or written disclosures, PIDD(o) and PIDD(w), may be written as follows:

PIDD(o) = k2(o) x NP x NCOM(o) x CREV(o) x COP(o) x RTA(o)

PIDD(w) = k2(w) x NP x NCOM(w) x CREV(w) x COP(w) x RTA(w)

As mentioned previously, UAR(o) and UAR(w) have a value of 1, by definition of the scenario, and are not included in the above equations.
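Plugging the appendix's rough constants into the two direct-disclosure equations shows how the oral and written routes can trade off against each other. The constants k2(o) = 10^-3, k2(w) = 10^-4, CREV(o) = 10^-1, and CREV(w) = 10^-2 follow the text's estimates; the NP, NCOM, COP, and RTA values are hypothetical.

```python
def p_direct(k, NP, NCOM, CREV, COP, RTA):
    # UAR = 1 in the direct-disclosure scenario, so it is omitted.
    return k * NP * NCOM * CREV * COP * RTA

# Oral: higher carelessness and weaker review, but fewer listeners.
# Written: lower carelessness and stronger review, but many more copies
# (e.g., a journal article). NP, NCOM, COP, RTA are assumptions.
oral    = p_direct(k=1e-3, NP=200, NCOM=10, CREV=1e-1, COP=50,   RTA=1e-4)
written = p_direct(k=1e-4, NP=200, NCOM=5,  CREV=1e-2, COP=1000, RTA=1e-4)
print(f"PIDD(o) = {oral:.0e}, PIDD(w) = {written:.0e}")
```

With these particular assumptions the oral route dominates, but a larger COP(w) (wider publication) can easily reverse that ordering.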

Probability of Inadvertent Indirect Unauthorized
Disclosure to an Adversary

The indirect type of inadvertent unauthorized disclosure of classified information occurs by scenarios somewhat like the following (all steps in the scenario must occur for an unauthorized disclosure to take place):

In this scenario, the communication (either oral or written) is between individuals with the appropriate security clearance. That communication is intended to be unclassified (the communicator knows that classified information should not be included in that communication since the communication is via an unsecure mode). However, an inadvertent unauthorized disclosure of classified information occurs because some classified information about the classified project is inadvertently included in the intended-to-be-unclassified communication. This disclosure is termed an "inadvertent indirect unauthorized disclosure" because one of the eventual recipients of the communication was not an intended (direct) recipient of the communication (e.g., the indirect recipient overheard a conversation or by chance saw a written communication addressed to another person).

Probability of Inadvertent Indirect Unauthorized Written Disclosure. An inadvertent indirect unauthorized written disclosure of classified information occurs, for example, when a person working on a classified project prepares, for primary distribution to someone else working on this project, an unclassified document concerning the project. The writer intends to convey substantive information about the project but also intends that this information be unclassified. Unfortunately, when a writer tries to "cleverly" convey substantive information about a classified project in an unclassified document, the writer inadvertently may reveal some classified information.* Other times, the writer simply makes a mistake (i.e., when not even trying to be clever about conveying information) and inadvertently includes classified information in the document. In either case, because the document is not marked as classified, its subsequent distribution is not necessarily limited to cleared persons. The document is likely to become available to persons not authorized to receive the classified information (e.g., the primary recipient may send the document to nonproject personnel; the document may be put into the company's library and become available to many persons; or the document may subsequently be made available to auditing, regulatory, or similar nonproject personnel). When this happens, an unauthorized indirect disclosure of classified information has occurred.

The probability of inadvertent indirect unauthorized written disclosures, PIID(w), is expressed by the following equation:

PIID(w) = k3(w) x NP x NCOM(w) x CREV(w) x COP(w) x UAR(w) x RTA(w)

All the factors have been mentioned previously. The carelessness constant k3(w) in this scenario is expected to be greater than in the direct disclosure scenario. In the direct disclosure scenario, the writer was communicating with someone not associated with the classified project and therefore the writer would generally be expected to be quite sensitive to not including classified information in the communication. In the current scenario, the writer is communicating with someone on the classified project about the classified project and, even though the communication is intended to be unclassified, the writer is likely to be more careless about not including classified information. The fact that the subject of the document concerns the classified project is also likely to lead to classification errors. Therefore, whereas k2(w) was estimated to be about 10^-4, k3(w) is estimated to be about 10^-3.

The classification review factor CREV will also be greater for the current scenario. For the direct disclosure scenario, the value for CREV was expected to be between about 10^-2 and 10^-5, depending on whether the written document received one or two classification reviews. In the current scenario, the document would receive, at most, one classification review, and therefore CREV will be about 10^-2 or greater (CREV would have a value of 1 for no classification review).*

It is difficult to estimate a value for the factor UAR, the probability that an unauthorized person will read the communication that contains the classified information. In this scenario, the direct recipients of the communication (those on the distribution list) are persons connected with a classified project. If the original distribution is limited to persons closely connected to the project, then the probability that an unauthorized person would read the communication might be quite low (e.g., persons closely connected with the project would be expected to have their offices in secure areas; unauthorized persons would not have access to those areas and therefore would not have the opportunity to see the communication by chance). If the communication is widely distributed and sent to persons only peripherally connected with the project, then UAR might be relatively high (e.g., persons with peripheral connections to the project might be located in areas accessible to uncleared persons, with concurrent increased probability that an uncleared person would see the communication). The extent to which secondary distributions are made of the communication is also a factor.+ A communication that covers a broad topic might receive a relatively large secondary distribution. If the communication became part of a company library (e.g., a "reports" library), then the UAR factor might be relatively large. A value for UAR of 10^-2 to 10^-3 will be assumed for this scenario.

The factor that represents the probability that the recipient of the disclosure will transmit the classified information to an adversary, RTA, may be larger for an inadvertent indirect written unauthorized disclosure than was the case for an inadvertent direct written unauthorized disclosure. The reason for that increase is that an adversary may target all available unclassified written communications about a classified project. Written communications, because of their permanence (copies stay in personal files or in libraries for a long time), are especially likely to be accessed by an adversary. Therefore, RTA(w) for this scenario may be as high as about 10^-1 and may be as low as 10^-5.
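Combining this scenario's rough factor estimates gives a wide range for PIID(w). The constants k3(w) ~ 10^-3, CREV(w) ~ 10^-2, UAR(w) ~ 10^-2 to 10^-3, and RTA(w) ~ 10^-1 to 10^-5 follow the text; the NP, NCOM(w), and COP(w) values are hypothetical.

```python
def piid_w(k3w, NP, NCOMw, CREVw, COPw, UARw, RTAw):
    """PIID(w) = k3(w) * NP * NCOM(w) * CREV(w) * COP(w) * UAR(w) * RTA(w)."""
    return k3w * NP * NCOMw * CREVw * COPw * UARw * RTAw

# NP = 100, NCOM(w) = 10, COP(w) = 20 are illustrative assumptions;
# the UAR and RTA endpoints come from the text's estimated ranges.
low  = piid_w(1e-3, 100, 10, 1e-2, 20, 1e-3, 1e-5)  # favorable case
high = piid_w(1e-3, 100, 10, 1e-2, 20, 1e-2, 1e-1)  # unfavorable case
print(f"PIID(w) range: {low:.0e} to {high:.0e} per year")
```

The five-orders-of-magnitude spread shows why the UAR and RTA estimates dominate the uncertainty in this scenario.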

Probability of Inadvertent Indirect Unauthorized Oral Disclosures. Inadvertent indirect unauthorized oral disclosures of classified information occur when persons working on a classified project discuss this project, in nonsecure locations, with their colleagues who also work on that classified project. Such discussions may occur at nonsecure workplace areas such as while eating in the company cafeteria, walking between secure workplace areas, riding in vehicles between secure workplace areas, or at other nonsecure locations in the workplace. Such discussions may occur during work-related travel (e.g., in airport waiting rooms, on airplanes, in taxis, or in restaurants). Such discussions may occur in carpools traveling to and from the workplace. Such discussions may also occur during social functions in the community where the workplace is located (e.g., when two colleagues on the project meet and one tries to convey, in an unclassified oral communication, recent significant project developments).* In each of the above situations, the persons communicating with each other are aware that they should not mention classified information. However, sometimes they simply make a mistake, a "slip of the tongue," and reveal classified information. Other times they try to talk around classified information and inadvertently reveal classified information. Unfortunately, most persons are not adept at talking around classified information.3 Therefore, classified information may be revealed during such conversations and that classified information may be overheard by persons not authorized to receive that information.

The equation to estimate the probability of inadvertent indirect unauthorized oral disclosures to adversaries PIID(o) contains the same factors as did the equation for PIID(w):

PIID(o) = k3(o) x NP x NCOM(o) x CREV(o) x COP(o) x UAR(o) x RTA(o)

where k3(o) is a "talkativeness" constant+ that may be significantly higher than k2(o).++ If k1 is approximately equal to 10^-5 and k2(o) is approximately equal to 10^-3, then k3(o) may have a value of about 10^-2.

In the other scenarios, the average number of communications per person per time period, NCOM, was to be estimated for each case and was assumed to be relatively small since those communications are usually planned and usually, at least for written communications, require more than a little time and effort to prepare. For those scenarios, NCOM does not seem to depend on any "general" factor and must be estimated for each case. The total number of communications NCOM(total) was equal to NP × NCOM. However, in the current scenario, the total number of communications depends on the number of interactions (in nonsecure locations) between different persons who know the classified information. These interactions are not generally planned, as was the case with writings, but occur somewhat randomly. At each meeting between project colleagues, there is the opportunity to communicate and, generally, the expectation that there will be some information exchanged (i.e., greetings, small-talk, etc.) which may lead to discussions about the project. The number of theoretically possible interactions, I, between different persons in a group of persons who know classified information about a project, is given by the following equation:

I = NP(NP-1)/2

where NP is the number of persons who know the classified information (see Fig. G.1 for help in visualizing the situation [not included here]). The interactions calculated by this equation are the "instantaneous" number of possible interactions, and an additional parameter must be introduced to estimate the number of interactions in a specified time period. The number of probable interactions during a relatively short time period will be significantly less than the number of possible interactions. [The number of probable interactions depends on the opportunity for interaction. For large projects, each person does not normally have the opportunity to interact with all of the other persons working on that project.] Therefore, the parameter I should be multiplied by another parameter, kI, an interaction constant, to get a realistic estimate for the number of probable interactions (communications). This kI should also include time as a variable so that the number of communications can be estimated for any time period (e.g., for 1 year). For this unauthorized disclosure scenario, the total number of communications, NCOM(total), can be expressed by the following equation:

NCOM(total) = kI x I = kI x NP(NP-1)/2

According to this equation for NCOM(total), if only one person knows the classified information, then the number of inadvertent indirect oral disclosures by this scenario is zero (NP – 1 = 0), an answer consistent with Ben Franklin's adage that "three may keep a secret if two of them are dead."4 Note that for high values of NP (practically speaking, above about NP = 50) the value of NCOM(total) essentially varies with the square of NP. Thus PIID(o) is approximately a function of the square of NP, the number of persons who know the classified information. When NP increases by a factor of 10, PIID(o) will increase by a factor of 100. The probabilities for the other scenarios mentioned above vary only linearly with NP. The significance of this is that even though the number of probable interactions may be only a small fraction of the number of possible interactions (i.e., kI may be small), when NP becomes large, the total number of probable interactions may become very large (since it varies with the square of NP). Consequently, when NP is large, then PIID(o) may become larger than the other probabilities and this scenario may be the major route by which unauthorized disclosure of classified information occurs.
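The quadratic scaling described above is easy to verify; this short sketch tabulates I = NP(NP-1)/2 for increasing NP.

```python
def interactions(NP: int) -> int:
    """Possible pairwise interactions among NP persons: NP*(NP-1)/2."""
    return NP * (NP - 1) // 2

# Quadratic growth: increasing NP tenfold multiplies the number of
# possible interactions (and hence the chances for an inadvertent
# indirect oral disclosure) by roughly 100.
for NP in (1, 10, 100, 1000):
    print(f"NP = {NP:>4}: I = {interactions(NP):>6}")
```

Note that interactions(1) = 0, matching the observation that a secret known to one person cannot leak by this scenario.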

As mentioned earlier, the number of probable interactions depends on the opportunities for interaction. For large projects or activities (e.g., 10,000 persons), each person usually neither knows of, nor has the opportunity to interact with, all of the other persons working on that project. Therefore, kI, the interaction constant, in the above equation for NCOM(total) might be relatively small. For small projects (e.g., 100 persons), each person might know most of the other persons working on that project and might interact relatively frequently in nonsecure locations with many of those persons. Therefore, kI might be significantly higher for small projects than for large projects. Another way of examining the situation would be to assume that there is generally a maximum practical value for NP with respect to interactions between persons working on a project. That is, even if NP is equal to 10,000 persons, in practice one person usually does not interact with more than, for example, a few hundred persons. Therefore, kI could be assumed to be relatively constant, but NP may be assigned a limiting maximum value (e.g., 100) in the equation for NCOM(total) for this scenario.* The latter method for evaluating NCOM(total) seems the easier. A value for kI could be obtained by estimating a value for NCOM(total) for a case where NP has a value of 100 or less. For example, when NP = 100, the number of possible instantaneous interactions is 4950. One might do a survey (e.g., over a week's time) to estimate how many interactions, in nonsecure locations, take place between members of a specific group of about 100 persons. This could then be extrapolated to obtain the number of such interactions for 1 year, and thereby one could estimate an approximate value for kI.
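The survey-based estimate of kI described above can be sketched as follows. The weekly interaction count is an invented survey result used only to show the arithmetic; the group size of 100 and its 4950 possible pairs come from the text.

```python
# Sketch of the survey-based estimate of kI described above. The weekly
# interaction count (120) is an invented survey result used only to
# show the arithmetic; the group size of 100 and its 4950 possible
# pairs come from the text.

GROUP_SIZE = 100
POSSIBLE_PAIRS = GROUP_SIZE * (GROUP_SIZE - 1) // 2  # 4950 possible pairs

observed_per_week = 120                     # hypothetical survey result
observed_per_year = observed_per_week * 52  # extrapolate one week to one year

k_i = observed_per_year / POSSIBLE_PAIRS
print(f"estimated kI = {k_i:.2f} communications per possible pair per year")
```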

In the previous scenarios (except for the deliberate disclosure scenario), the value for the number-of-copies factor, COP, depended on the specific case. However, in the current inadvertent indirect oral unauthorized disclosure scenario, there is assumed to be only one copy of a communication (i.e., the conversation is assumed to be overheard by an average of only one unauthorized person). Therefore, COP = 1 for this scenario.

In this inadvertent oral disclosure scenario, there is no opportunity for classification review of the conversation before it is overheard--the Classification Officer did not even know that such a conversation would take place. Therefore, as in the deliberate disclosure scenario, the classification review factor CREV = 1.

The probability that an uncleared person will overhear the unauthorized oral disclosure, UAR, is difficult to estimate. That probability may be relatively small (e.g., 10^-4 to 10^-5) when the disclosure occurs in nonsecure areas of the facility that houses the classified project, since many of the persons in those workplace areas would be expected to have clearances. That probability would be expected to be greater when the disclosure occurs in public locations, such as on public transportation (airplanes) or in community social situations (cocktail parties). Values for UAR might range from 10^-2 to 10^-1 in those latter situations.

The probability that a recipient of an unauthorized oral disclosure will transmit it to an adversary has about the same range of values as mentioned earlier for other scenarios. Thus, RTA might vary from 10^-5 (random probability that the recipient is a spy) to 10^-2 [when a foreign agent targets community gathering places frequented by project workers (e.g., bars or taverns) or community organizations or social circles to which project workers belong].

COMPARTMENTALIZATION OF CLASSIFIED
INFORMATION

Under the above-mentioned hypothesis for estimating the probability of inadvertent indirect unauthorized oral disclosure of classified information PIID(o), that probability is proportional to the number of different interactions between persons who know the classified information. The theoretical number of those interactions is approximately proportional to the square of the number of persons who know the classified information. Of course, the actual number of interactions depends on the opportunity to interact. If the persons who know the classified information are not all at one geographic location, but are individually widely dispersed, or are members of small groups that are widely dispersed, then the opportunities to interact are decreased and PIID(o) will also be decreased.

If 1000 persons work on a classified project and those persons are widely dispersed geographically, then there are few opportunities for them to interact under the example situations described above. (If they were completely isolated, then PIID(o) would be zero.) However, if those 1000 persons were living in a single, relatively small community, there would be very many opportunities for interactions. If there were 10 groups with 100 persons in each group, and each group was isolated geographically from the other groups, the number of interactions would be intermediate between the two previously mentioned extremes. These examples may be summarized as follows:
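The three cases can be made concrete by counting possible pairwise interactions. Only interaction counts are computed here; the probabilities themselves would also require the other factors in the scenario.

```python
# Counting possible pairwise interactions for the three cases above.
# Only interaction counts are computed; the probabilities themselves
# would also require the other factors in the scenario.

def pairs(n: int) -> int:
    """Possible pairwise interactions among n persons: n(n-1)/2."""
    return n * (n - 1) // 2

one_community = pairs(1000)       # 1000 persons in one community
ten_groups = 10 * pairs(100)      # 10 isolated groups of 100 persons
fully_isolated = 1000 * pairs(1)  # complete isolation: zero interactions

print(one_community, ten_groups, fully_isolated)
print(f"reduction factor: {one_community / ten_groups:.1f}")
```

The roughly tenfold reduction from one community of 1000 to ten isolated groups of 100 is the compartmentalization effect quantified in the text.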

The above information indicates that there is a "dispersal" or "group size" effect that may significantly affect PIID(o).

The approximate tenfold reduction in probability in going from one group of 1000 to 10 isolated groups of 100 provides an approximate example of the value of compartmentalization (which was stringently practiced by Gen. L. R. Groves during the Manhattan Project) in greatly reducing the probability of inadvertent unauthorized oral disclosure of information.*

This dispersal, or compartmentalization, effect can be represented as a function of the number of groups into which NP persons are divided. For equal-sized groups, and when NP is large, the maximum possible number of interactions I varies inversely with G, the number of equal-sized groups into which NP persons are divided:

I = NP(NP-1)/(2G)

For actual situations, I would have to be evaluated for each group and then summed to give an aggregate value for I. Note that from earlier discussions of the value of NCOM(total) for this scenario, the relative benefits of compartmentalization may be reduced when NP exceeds a certain value because for large values of NP self-compartmentalization may occur.

GENERALIZED EQUATION FOR PROBABILITY OF
UNAUTHORIZED DISCLOSURE OF CLASSIFIED
INFORMATION TO AN ADVERSARY

The probability P of unauthorized disclosure of classified information to an adversary, for the scenarios mentioned above, can be represented by the following general equation:

Probability(S) = ks x NPs x NCOMs x CREVs x COPs x UARs x RTAs

where S represents the scenario, ks is a constant for that scenario, NP is the number of persons who know the classified information, NCOM is the average number of communications per person per year, CREV is a classification review factor, COP is the average number of copies of a communication, UAR is the probability that an unauthorized person would be the recipient of an unauthorized disclosure, and RTA is the probability that the unauthorized recipient will transmit the information to an adversary government. Estimated values for some of those factors for the different scenarios mentioned previously are shown in Table G.1. Where values for the factors are given in Table G.1, they should be considered only as rough approximations. However, the relative values of those factors might be of more significance.
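As a sketch, the generalized equation can be transcribed directly. Every factor value below is a placeholder chosen for illustration; Table G.1 (not reproduced here) is the text's source for rough estimates.

```python
# A direct transcription of the generalized equation. Every factor
# value below is a placeholder chosen for illustration; Table G.1
# (not reproduced here) is the text's source for rough estimates.

def disclosure_probability(k_s, n_p, ncom, crev, cop, uar, rta):
    """Probability(S) = ks x NP x NCOM x CREV x COP x UAR x RTA."""
    return k_s * n_p * ncom * crev * cop * uar * rta

# Hypothetical comparison of two routes for the same 500-person project:
p_written = disclosure_probability(1.0, 500, 10, 1e-3, 5, 1e-2, 1e-4)
p_oral = disclosure_probability(1.0, 500, 50, 1.0, 1, 1e-2, 1e-5)
print(f"written: {p_written:.1e}  oral: {p_oral:.1e}")
```

Even with made-up numbers, the comparison shows how the equation can rank routes: here the oral route, which escapes classification review (CREV = 1), dominates.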

The generalized equation for the probability of unauthorized disclosures provides a simple, direct way to describe the factors important to protecting classified information. The equation shows, in a straightforward manner, that classified information about a project can best be protected by limiting the number of persons to whom that information is given (stringently enforcing need-to-know requirements), by minimizing the number of "unclassified" communications related to that project and generated by such persons, by minimizing the number of copies of such communications, and by requiring classification review of all job-related communications originated by project personnel. The generalized equation can also be used, as will be described in a subsequent section, to assist security personnel in concentrating their resources on the most vulnerable pathways for unauthorized disclosures of information from specific classified projects. Benefits of compartmentalization and isolated "project communities" are also directly shown by that equation.

CLASSIFICATION DURATION CONSIDERATIONS

The above discussions with respect to probabilities of unauthorized disclosures of classified information were initiated for possible use in estimating the duration of classification. That is, when the number of authorized persons who have been given the classified information becomes large enough, the probability of unauthorized disclosure becomes great enough that de facto declassification becomes likely. One could specify an acceptable value for this probability and concurrently establish a classification rule stating that whenever this probability was exceeded, the information should be declassified. By keeping records of the number of persons who know the classified information, one could determine when the information should be declassified. Certain declassifications could perhaps be triggered when the cumulative number of persons who know the classified information exceeds a certain value, or when the person-years for knowledge of the classified information exceed a certain value. It should not be too difficult to keep a record of the number of persons who know certain classified information and thereby to calculate, at any point in time, the number of persons to whom the classified information has been told and the value for the person-years parameter. Such an approach might also serve to emphasize the need-to-know requirement for access to classified information, a requirement that does not seem to have received as much emphasis in recent years as it should.
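The record-keeping approach described above might be sketched as follows. This is a minimal illustration; the class name and both thresholds (5000 persons, 20,000 person-years) are invented, since the text proposes the record keeping but gives no values.

```python
# A minimal sketch of the record-keeping idea above. The class name and
# both thresholds (5000 persons, 20,000 person-years) are invented for
# illustration; the text proposes the record keeping but no values.

from datetime import date

class AccessLedger:
    def __init__(self):
        self.granted = {}  # person -> date access was first granted

    def grant(self, person: str, when: date) -> None:
        self.granted.setdefault(person, when)

    def persons(self) -> int:
        """Cumulative number of persons told the classified information."""
        return len(self.granted)

    def person_years(self, as_of: date) -> float:
        """Total person-years of knowledge as of a given date."""
        return sum((as_of - d).days / 365.25 for d in self.granted.values())

    def review_due(self, as_of: date, max_persons: int = 5000,
                   max_person_years: float = 20000.0) -> bool:
        """True when either threshold is exceeded, triggering a declassification review."""
        return (self.persons() > max_persons
                or self.person_years(as_of) > max_person_years)

ledger = AccessLedger()
ledger.grant("analyst-1", date(2020, 1, 1))
print(ledger.review_due(date(2024, 1, 1)))  # one person, ~4 person-years: no review yet
```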

In practice, a classification duration is not likely to be established based only on the considerations presented in this appendix. There are currently too many uncertainties in the various factors to enable someone to responsibly base a classification duration decision on the disclosure probability equations given above. However, further investigations could lead to establishing some reasonably reliable values for those factors. Then, at the least, estimated probabilities of unauthorized disclosures could be used to determine when the information should be evaluated for possible declassification.

On the other hand, in principle there may be more of a basis for establishing a classification duration based on the number of persons who know the classified information than there was for establishing one based on the mere passage of time. Some executive orders on classification of information, prior to the currently applicable Executive Order 12356, established a requirement that certain types of classified information be designated for automatic downgrading and, in some instances, declassification after a certain number of years had elapsed from the date of issuance. (See Chapter 8 for a brief discussion of automatic declassification and downgrading of NSI as a function of time.)

With respect to downgrading matters, a rationale similar to that given above for declassification might lead to the conclusion that, for example, for Top Secret information, when the number of persons who are authorized to know that information exceeds a certain value (say 1000), the information should be downgraded to Secret. If the information is important enough to national security to warrant a Top Secret classification level, then the number of authorized recipients of that information should be stringently limited to reduce the probability of unauthorized disclosure. Disclosure to a large number of persons suggests that the information may no longer be so important to national security and perhaps should be downgraded.* In any event, as mentioned above for declassification, a certain value for the number of persons who know the classified information might be used to trigger a review of the information for downgrading even if that value was not used to automatically downgrade that information.

CLASSIFICATION MANAGEMENT CONSIDERATIONS

Classification management procedures can significantly reduce the probability that unauthorized disclosures of classified information will result from the issuance of unclassified documents by project personnel. Classification management procedures that include two independent classification reviews of all unclassified documents generated by personnel working on a classified project can reduce, by a factor of about 10^-5, the probability that such documents will contain classified information. The "two-review" rule is a requirement at some contractor-operated DOE facilities for documents intended for release to the public. It would perhaps be advisable to have at least a "one-review" rule for all documents that touch on or concern classified projects, activities, or technologies. Such classification review procedures are less effective for oral presentations because those presentations are generally less structured and because impromptu remarks by the speaker (or a speaker's startled reactions to certain questions) are not subject to prior review by classification experts.
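The arithmetic behind the two-review figure can be sketched as follows. The per-review miss probability is an assumption chosen so that two independent reviews multiply out to roughly the 10^-5 factor cited; the text itself gives no per-review number.

```python
# Back-of-the-envelope arithmetic behind the "two-review" figure. The
# per-review miss probability is an assumption chosen so that two
# independent reviews multiply out to roughly the 10^-5 factor cited;
# the text itself gives no per-review number.

p_miss_single = 3.2e-3              # assumed chance one review misses classified content
p_miss_double = p_miss_single ** 2  # independent reviews: probabilities multiply
print(f"two-review reduction factor: {p_miss_double:.1e}")
```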

Classification briefings to employees working on classified projects can inform and remind those employees of the information that is classified. Such briefings could also emphasize how bits and pieces of seemingly innocuous project information are actually classified because that information, when combined with the information that has already been released as unclassified, can reveal classified information. Such briefings could sensitize employees to the importance of not revealing any information beyond that which they definitely know to be unclassified. Such briefings could reduce the "carelessness" and "talkativeness" constants and thereby reduce the probability of inadvertent direct and indirect disclosures of classified information.

INFORMATION SECURITY CONSIDERATIONS

Information security is concerned with protecting classified information. Information security resources should be directed toward reducing the probability of unauthorized disclosures for those disclosure scenarios that have the highest probabilities of unauthorized disclosure of classified information (e.g., toward protecting the most vulnerable routes for the unauthorized disclosure of classified information). The general equation for the probability of unauthorized disclosure of classified information can be used to evaluate probabilities for several disclosure scenarios for specific classified projects or facilities. Those estimated probabilities may reveal the most vulnerable routes for unauthorized disclosures. The resources of the security organization can then be focused on those routes.

An evaluation of the probability equation for different disclosure scenarios, and the experience of this author with respect to unauthorized disclosures, suggest that efforts in briefing employees who work on classified projects on the dangers of inadvertent indirect unauthorized oral disclosures might be very productive. A recent article on security of industrial information (e.g., trade secrets) contains three pertinent observations that support this conclusion:

The most vulnerable form of information is still that which is spoken.

Information security managers should not develop controls against technological misuse [e.g., wiretapping] until they have successful baseline controls such as safeguarding paper trash and teaching employees to resist telephone deceptions.

Criminals tell me that there are far better ways to obtain information than electronic eavesdropping, such as finding it printed on paper or overhearing what people say.5

A 1988 article on industrial intelligence activities stated that Japanese companies "station people with tape recorders on commuter trains to pick up careless conversations."6 Knowing the right "social hour" was said to aid espionage agents in seeking England's trade secrets in the 18th century.7 "If you were to add up all the information that you get at a cocktail party, you might find that there is an awful lot of classified information there."8 Enough said.

REFERENCES

1. Executive Order 12356, Fed. Reg., 47, 14874 (Apr. 6, 1982), §§1.4, 1.5(4).

2. U.S. Department of Energy, DOE Order 5650.2B, "Identification of Classified Information," Chap. V, Part G.2.c (Dec. 31, 1991).

3. R. G. Priddy, "Security: The Philosophical Bureaucracy," J. Natl. Class. Mgmt. Soc., 23, 1–5 (1987), p. 3.

4. Ben Franklin's Wit and Wisdom, Peter Pauper Press, Mt. Vernon, N.Y., p. 46. See also Poor Richard: The Almanacks for the Years 1733-1758, Van Wyck Brooks, ed., New York, Heritage Press, 1964, July 1735, p. 30, as reported by S. Bok in Secrets, Pantheon Books, New York, 1982, p. 108.

5. D. B. Parker, "Seventeen Information Security Myths Debunked," ISSA Access, Information Systems Security Association, Inc., Newport Beach, Calif., 3(1), (1990) pp. 1, 42–43.

6. E. M. Fowler, "Careers--Intelligence Experts for Corporations," The New York Times, 138(47,641) (Sept. 27, 1988), p. D23.

7. J. Harris, "Spies Who Sparked the Industrial Revolution," New Scientist, 110(1509), 42–47 (May 22, 1986), p. 43.

8. S. Fernbach, "Panel--Science and Technology, and Classification Management," J. Natl. Class. Mgmt. Soc., 2, 48–53 (1966), pp. 50–51.



