

STATEMENT
OF
MR. MICHAEL H. CAPPS
DEPUTY DIRECTOR FOR DEVELOPMENTAL PROGRAMS
DEFENSE SECURITY SERVICE

BEFORE THE SENATE COMMITTEE ON THE JUDICIARY

APRIL 25, 2001

Mr. Chairman and Members of the Judiciary Committee, my name is Michael H. Capps. I am the Deputy Director for Developmental Programs for the Defense Security Service. Among my responsibilities is oversight of the Department of Defense Polygraph Institute, where I served as Director for over five years. I have been involved in the polygraph profession for over 26 years as an examiner, researcher, and educator. I was invited here today to respond to questions on issues surrounding the use of polygraphs.

The instrument we now call the polygraph was introduced into federal service in the 1940s and, in addition to its standard role in criminal investigations, was put to such noteworthy uses as investigative support for the Nuremberg Trials, counterintelligence support to the then-new atomic weapons facilities, and investigations of crimes in prisoner-of-war camps.

The U.S. government now has 24 polygraph programs, staffed with approximately 500 polygraph examiners. These men and women serve in all regions of the country and much of the world, in the military, intelligence, and law enforcement sectors. Current polygraph applications for the federal government include: protection of the President; vetting of intelligence sources; protection of classified programs; confidential informant validation; support to counternarcotics, counterinsurgency, and counterterrorism programs; screening of applicants to intelligence agencies; investigation of human rights violations; management of convicted sex offenders; investigation of food and drug tampering; location of assets concealed by convicted thieves and drug traffickers; and traditional criminal investigation. The U.S. government has supported the use of the polygraph among allied nations when mutual interests were at stake, as when it supplied training and state-of-the-art polygraph equipment to Russia to help it maintain security over its nuclear weaponry after the fall of Communism. It has, on numerous occasions, considered providing polygraph training for friendly governments, and the U.S. Department of Defense Polygraph Institute (DoDPI) regularly receives requests for polygraph training from these nations.

The DoDPI is the U.S. government’s consolidated training facility for polygraph examiners from all Federal agencies. To qualify for entry into the 13-week program, a candidate must be a U.S. citizen, be at least 25 years of age, hold a 4-year degree or demonstrate an ability to master graduate-level courses, have two years of investigative experience, have completed a background investigation to confirm a sound temperament and character, and be nominated and supported by his or her home agency. The DoDPI polygraph curriculum is taught at the master’s degree level, and provides a balance of a challenging academic load and technical skills practica. Those students who satisfactorily complete the DoDPI education program are released to their home agencies, where they serve internships, and remain subject to quality control and continuing education requirements for their entire professional careers as Federal polygraph examiners.

One of the recurring concerns for Congress has been the scientific foundation of the polygraph technique. In the last 30 years, scientists have given their attention to fundamental questions regarding polygraphy, such as its methods, reliability, and validity. There is common agreement in the scientific community that modern polygraph techniques do produce very high inter-scorer agreement, usually in excess of 90%, which compares favorably with many other common techniques in the behavioral sciences. Algorithms developed by the government and commercial entities in recent years hold the promise of even better reliability.
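As a concrete illustration of what an inter-scorer agreement figure measures, the brief sketch below compares the chart calls of two hypothetical scorers. The scorer data are invented for illustration only and are not drawn from any federal polygraph study.

```python
# Illustrative sketch only: hypothetical chart calls by two independent scorers.
# The data below are invented; they are not taken from any federal polygraph study.

scorer_a = ["deception", "no_deception", "deception", "no_deception",
            "no_deception", "deception", "inconclusive", "no_deception",
            "deception", "no_deception"]
scorer_b = ["deception", "no_deception", "deception", "no_deception",
            "deception", "deception", "inconclusive", "no_deception",
            "deception", "no_deception"]

# Percent agreement: the share of charts on which both scorers reached the
# same call. This is the simplest measure of inter-scorer reliability.
matches = sum(a == b for a, b in zip(scorer_a, scorer_b))
agreement = matches / len(scorer_a)
print(f"Inter-scorer agreement: {agreement:.0%}")  # 90% for this invented set
```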

While reliability has not been a major issue for federal polygraph programs, a controversy exists over polygraph validity. A significant body of literature demonstrates that polygraph decisions, based on techniques employed by the U.S. government for criminal investigations, have an error rate of perhaps 10% or lower. However, these findings have been challenged by critics for many years because of problems unique to research on polygraphy.

Validation of the polygraph technique has taken two forms: laboratory research and field studies. During laboratory studies, volunteer participants are given polygraph tests regarding whether they committed a mock crime that had been scripted for them by the researchers. Some examinees are assigned to the innocent role, and others to the guilty role. Laboratory studies provide an excellent opportunity to investigate variables of interest to the researcher, because those variables can be controlled with certainty. The shortcoming of laboratory research is that mock crimes are not as emotionally engaging to the volunteer examinees as is the experience of a field polygraph examination, for which failure may have serious consequences for examinees suspected of real crimes. Critics point out that laboratory studies may be prone to underestimating error rates for innocent examinees (false positives), because these examinees are less concerned about being accused of the pretended crime than an innocent person accused of a real crime would be. Proponents concede this point, but note that laboratory studies also show high accuracy with the examinees who were “guilty” of the mock crime, an outcome that would not be expected if the lack of real consequences undermined the test.
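To make the false positive and false negative terminology concrete, the sketch below tabulates error rates for a hypothetical mock-crime study in which ground truth is known by design. The counts are invented solely to show the arithmetic and are not results from any actual study.

```python
# Minimal sketch with invented counts: how a mock-crime laboratory study
# might tabulate error rates once ground truth (the assigned role) is known.

innocent_total, innocent_failed = 50, 4    # "failed" = judged deceptive
guilty_total, guilty_passed = 50, 3        # "passed" = judged non-deceptive

false_positive_rate = innocent_failed / innocent_total   # innocent judged deceptive
false_negative_rate = guilty_passed / guilty_total       # guilty judged truthful

print(f"False positive rate: {false_positive_rate:.0%}")  # 8% in this invented example
print(f"False negative rate: {false_negative_rate:.0%}")  # 6% in this invented example
```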

Field research on polygraphy takes advantage of cases that occur as part of existing polygraph programs. Examinees are actual criminal suspects who face real-world consequences for a failed polygraph examination. The examiners have practical experience in administering examinations to criminal suspects, something usually lacking in laboratory designs. Polygraph decisions can subsequently be compared to other evidence, such as confessions, DNA, or other forensic tests, to determine how closely the polygraph outcome matches ground truth. Unlike laboratory studies, in which ground truth is known in every case, the ultimate truth in the field setting is more elusive.

DoDPI administers an independent, government-wide quality assurance program to verify that agencies conform to written policies in the preparation, conduct, reporting, and review of their polygraph examinations. DoDPI quality assurance teams make scheduled site visits to each agency biannually. DoDPI inspectors sample the work product of the participating agencies and note deficiencies. DoDPI does not evaluate individual cases for accuracy of decisions, nor does it become involved in adjudicative issues as part of this quality assurance program. However, DoDPI does determine whether polygraph practices are consistent with relevant policies.

The impact of government polygraph programs is best understood in the context of the larger process of which they are a part. While polygraphy is valued by those agencies that use it, polygraphers are not involved in determining the action an agency takes based on the results of a polygraph examination. Rather, these decisions are the responsibility of adjudication officers, hiring officials, investigating officers, or other agency customers of the polygraph reports, who must weigh the results along with whatever information is available from other sources. Questions regarding hiring, investigation, or prosecution in which polygraph results may be a consideration are better answered by those responsible for those decisions.

Counterintelligence screening of applicants and employees is one of the more controversial applications of polygraphy. Questions regarding the validity of this method are at the core of the debate. Critics argue that, as an imperfect tool, the polygraph wrongly classifies a percentage of both truthful and untruthful examinees, leading to grave consequences in both cases. I suggest that reducing the argument to this premise obscures a more relevant issue. First, let us agree that polygraphy is imperfect. Under the best of circumstances, errors occur. It is imperfect, like every personnel screening tool, including the personal interview, background investigation, credit check, or employment check. However, a properly conducted polygraph screening program, with the level of oversight imposed on government polygraph programs, results in more adjudicable information than all other sources combined. If one takes the position that employment decisions should be made using methods that exclude polygraphy, one must accept that more errors will occur, not fewer. Second, there is a presumption that polygraph results dictate employment destiny. Typically, an adverse polygraph result triggers additional investigative resources being brought to bear to help resolve the doubt. These resources could include an investigative interview, an enhanced background investigation, or simply further polygraph testing. Only in a subset of cases in which the initial polygraph results were unfavorable does the matter remain unresolved, and even then, the ultimate employment action depends on decisions made by those not involved in polygraphy.

Because of the complexity of the counterintelligence polygraph screening process, only a tentative estimate of accuracy can be stated. An error rate of perhaps less than 5% is projected for examinees who do not demonstrate a significant response in a strictly counterintelligence polygraph examination (not including suitability coverage), under the combined conditions that the examinee cooperates with all polygraph processes, including retesting, does not try to manipulate the examination, and clearly understands the questions. Retesting serves to reduce errors for that category of examinees. The error rate for examinees who demonstrate a significant response to the polygraph may be higher; however, this can be mitigated if subsequent examinations focus on discrete issues rather than the broad and general questions asked during an initial screening examination. Limiting the number of retests for examinees who demonstrate a significant response to the initial examination could reduce this error rate to less than 20%. Retesting practices are policy issues, however, not scientific issues.
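As a rough illustration of why retesting can reduce errors for truthful examinees, the sketch below assumes, purely for arithmetic, that each examination carries an independent 10% chance of a false significant response and that an adverse classification requires a significant response on every test. Real examinations are not independent events and actual retest policies differ, so the numbers show only the direction of the effect, not a validity estimate.

```python
# Rough illustration, not a validity estimate: how retesting could lower the
# false positive rate for a truthful examinee IF each test were an independent
# event and an adverse classification required a significant response on every
# test. Both assumptions are made only to show the direction of the effect.

per_test_false_positive = 0.10   # assumed chance a truthful examinee shows
                                 # a significant response on any single test

for tests in (1, 2, 3):
    # Adverse outcome only if the examinee responds significantly on every test.
    combined = per_test_false_positive ** tests
    print(f"{tests} test(s): combined false positive chance {combined:.1%}")

# 1 test(s): combined false positive chance 10.0%
# 2 test(s): combined false positive chance 1.0%
# 3 test(s): combined false positive chance 0.1%
```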

This concludes my prepared statement. I appreciate your willingness to entertain my comments, and I am now ready to answer your questions.



