Symposium: ‘Will brain imaging be lie detector test of the future?’
For almost a century, one of the staples of crime stories has been the wires, cuffs, and jiggling recording needle of the polygraph machine. In its time, the “lie detector” was hailed as a way to measure the telltale physiological signs of deception, including hard breathing, high blood pressure, and excess perspiration.
But in truth, the polygraph was never very accurate, hitting the mark only about 85 percent of the time – and meanwhile creating a lot of “false positives.” As many as 25 percent of people telling the truth during a polygraph exam come out looking like liars.
In most federal courtrooms, polygraphs have not been admissible as evidence since 1923. Many businesses avoid them in personnel screening because of unreliable results – and because of protections afforded by federal law, in the 1988 Employee Polygraph Protection Act.
A 2003 report by the National Research Council (NRC) seemed to seal the doom of this old technology, flatly concluding that the polygraph was bad science and needed replacing.
Despite the wavering faith in polygraphs, insurance companies, government and law enforcement agencies, and others still do about 40,000 of the exams a year in the United States. And interest in a technology that catches liars has been heightened by the current war on terror.
“There’s an incredible hunger to have some test that separates truth from deception,” said Harvard University Provost Steven Hyman, professor of neurobiology at Harvard Medical School. He helped moderate “Is There Science Underlying Truth Detection?” a Feb. 2 symposium at the American Academy of Arts and Sciences, co-sponsored by Harvard and the McGovern Institute for Brain Research at the Massachusetts Institute of Technology (MIT). It drew a crowd of about 200.
One panel looked at the existing science of brain imaging as a means of truth detection; another examined the legal and ethical implications.
“It’s inevitable we need better ways to detect deception,” said diagnostic imaging pioneer Marcus E. Raichle, a professor of radiology at the Washington University School of Medicine, who was on the NRC panel critical of polygraphs.
But given the shaky state of current technology, he said, the fact that 40,000 polygraphs are still done annually “is very sobering.”
What might replace the polygraph is functional magnetic resonance imaging (fMRI), a technology that uses fluctuations in blood flow to create vivid serial images of brain activity – a sort of cartography of cognition. (Traditional MRIs capture a static image; fMRIs – using scans every 2 or 3 seconds – capture a series of images in order to map changes in neural activity.)
For more than a decade, MRIs have been a noninvasive way to capture images of the inside of the body. They can identify tumors, detect the early stages of Alzheimer’s, and pinpoint areas to be avoided during surgery. Since 1999, some scientists have suggested that fMRIs might be a high-tech, reliable way of quantifying deception. The sequential images seem to indicate that certain brain centers light up when a person is lying. If this is true, the scientists say, fMRIs will be harder to outwit than polygraphs.
One study, done at Temple University in 2004, found that lying activated twice as many areas of the brain as telling the truth did. Nerve cells in the brain that are active consume more oxygen than those at rest. Pulses from an MRI scanner’s doughnut-shaped magnets can record subtle changes in blood oxygen levels. The resulting images show contrast that is blood-oxygen-level-dependent (BOLD).
These telltale shifts in blood oxygen levels suggest a bold future for the fMRI as the coming gold standard of lie detection – at least if you believe the hype, hope, and hypotheses that dominate Internet discussions.
That future might include lie detectors at airports, where security personnel shine specialized lasers on travelers from a distance. It might even include fMRI lie detectors no bigger than a suitcase, which download images to a diagnostic central computer far away.
That would eliminate one problem with current MRI scanners, which are the size of a compact car. Current scanners are also expensive – about $1 million per tesla, a measure of magnetic field strength. (The latest scanners are 7 teslas or more.) MRI scans are costly – as much as $900 a crack. And fMRIs – more elaborate and time-consuming – cost as much as 10 times more.
On the Internet, the fMRI is at the center of a brave new world of lie detection – that elusive “magic lasso” that Raichle said the NRC committee wished for, to reel in the truth.
At the Feb. 2 symposium, the fMRI was at the center of debate and discussion on the science of measuring deception. But while Internet speculation on the future of high-tech lie detection is accelerating, the panelists advised going slow.
The three science experts, including Raichle, seemed to agree that the fMRI shows promise as a lie detector, but that current research is not enough to support using it now.
“The numbers aren’t terrible,” and images and methods are getting better, said Nancy Kanwisher, an investigator at MIT’s McGovern Institute. She critiqued three recent studies, one of which seemed to detect 90 percent of the time who was lying and who was telling the truth.
But the studies, done in controlled settings and involving made-up “crimes,” don’t take into account the real world, where – at least for an accused criminal – the stakes are higher, along with the anxieties. Nor do they take into account known countermeasures for defeating the fMRI, like performing mental arithmetic – or simply fidgeting: BOLD images, acquired from subjects lying horizontally in an MRI scanner, are blurred by as little as 3 mm of head movement.
“Any use of the current technology,” concluded Kanwisher, “is likely to cause more harm than good.”
Elizabeth Phelps, a professor of psychology and neural science at New York University, was skeptical, too – in part because the neural circuitry highlighted in fMRI imaging can be altered by a subject’s emotions and even mental imagery.
“In time,” she said of fMRIs, “they’ll wind up being like polygraphs.”
Despite doubts, “we need to be involved,” said Raichle of scientists. “We have an ethical role – not as advocates, but as critics of how [lie detection] gets done.”
In an age of anxiety over terror, a feverish hunt for better lie detection will continue, he added, with the fMRI at the forefront. “Like it or not,” said Raichle, “this is a very wide agenda going on worldwide – peeking into people’s brains.”
That kind of peeking might not make it into courtrooms as evidence anytime soon. “There’s a lot of reason to be concerned about the accuracy of this technology today,” said panelist Henry T. Greely, a professor of law and genetics at Stanford University. “We shouldn’t allow it unless it’s proven effective.”
He called for public debate on the issue, along with an FDA-like agency that would oversee premarket approval of any new lie detection gear. (It may be too late. One company, No Lie MRI Inc., is already marketing the fMRI as a new-wave polygraph, and has one testing center in California. Another company, the Massachusetts-based Cephos Corporation, may launch soon.)
Panelist Jed Rakoff, a United States district judge for the Southern District of New York, made his objections clear, starting with the title of his remarks, “Can Science Detect Lies? Not in My Court.”
Rakoff said he had respect for the neuroscience behind brain imaging, and acknowledged its likely future impact on the legal arena. But in the meantime, he said, fMRIs – like polygraphs – are “more likely to cause mischief than be a real help.”
The traditional tool of lie detection in the courtroom – the cross-examination – is still the best, said Rakoff. To complicate things, he said, there is no standard legal definition of what lying is. For one, “puffing,” a form of deliberate exaggeration, is permitted in the courtroom.
Science also has no commonly accepted theory of how blood flow in the brain relates to deception, said Rakoff, and fMRI studies so far do not reflect the complications and anxieties of real-life crime and criminals.
Among those complications – not considered in current fMRI/deception studies – is substance abuse. In the top 50 U.S. cities, 55 percent to 95 percent of all those arrested for felonies test positive for drugs or alcohol, a fact that would confound any fMRI, said lawyer and psychologist Stephen J. Morse J.D. ’70, Ph.D. ’73, a professor of law at the University of Pennsylvania.
Any new lie detection technology must have both good science and legal relevance, he said – and so far the fMRI fails on both counts. An fMRI could someday have “weight” in court, said Morse, but not admissibility.
“Legally relevant neuroscience must begin with behavior,” not with a picture of blood flows in the brain, he said. “Brains don’t kill people – people kill people.”