The body does not lie. So stated William Moulton Marston, a psychology professor who devised components for the polygraph and in 1938 published The Lie Detector Test. Marston and his co-inventors maintained that regardless of how well a person could control his voice and face, other signs such as blood pressure, heart rate, respiration and skin conductivity would betray him when he told a lie. The physiological changes, they said, were triggered by the anxiety an individual feels when he knows he is fabricating information. Marston's own work with the machine convinced him that women were more trustworthy than men, and he went on to champion women's role in society, in part by creating and writing the comic book "Wonder Woman" (whose heroine wielded a "truth lasso," among other gadgets).

The trouble with the polygraph, scientists found later, was that a person could become anxious simply by being hooked up to the machine, and even more so when asked probing questions. After years of controversy, evidence gleaned from lie detectors remains inadmissible in most courtrooms.

Undeterred, today's inventors have devised a second generation of equipment that senses signals inside the brain and body, which the creators say provide clear evidence of lying. Tests are under way to determine whether any of these schemes is reliable all the time. Although the results are not clear-cut, it seems inevitable that sooner or later governments or courts will allow a new kind of test result to be used as evidence in trials. Given the pace of innovation and its potential payoff–identifying terrorists, convicting criminals and reducing the number of innocent people wrongly sentenced to prison–neuroethicists are playing catch-up. They must find some fast answers as to whether mental clues picked up by machines can actually expose a person's true intentions and whether any part of our inner minds should be considered private and inviolable.
Content from Carrier

All lie detectors operate on the basic assumption that a person who intentionally says something untrue is conscious of doing so. The new techniques further posit that a physical correlate must exist for the subjective experience of knowingly lying–a pattern of neuron activation or some other physiological sign.

Identifying such a correlate is problematic, however. Imagine that a Martian visits us, picks up some chalk and writes a series of symbols on a blackboard. The chalk marks are the physical carrier through which the alien is attempting to convey his message. But what are the symbols' contents–what do they mean? To the Martian, everything–but to us, nothing. Many philosophers maintain that researchers who are attempting to read thoughts are merely defining the carrier–the neuronal pattern on which a thought is riding–but not the content; the researcher will never be able to tell us what the message means, much less whether the messenger is lying.

But the current inventors argue that there may not be a meaningful difference between the carrier and the content of a thought. They say empirical studies of how the brain represents information indicate that carrier and content may be one and the same. These researchers cite modern theories of mental representation that suggest that information processing in the neuronal network is subsymbolic and not rule-based–meaning that, unlike a computer, the brain does not follow a rigorous syntax. Mental content actually takes the form of the strength of connections among myriad neurons; it is directly reflected by the physical structure and dynamics of the synaptic gaps that connect neurons in a network. Mental content is indeed the physical carrier.

Caught in the Act

The new lie detection schemes exploit this point of view. The working theory is that as soon as a person intentionally lies he is conscious of doing so and that a neuronal correlate exists for that consciousness.
The challenge is defining what the correlate is. The approach closest to commercialization comes from Lawrence A. Farwell, who calls his technique brain fingerprinting and runs a company called Brain Fingerprinting Laboratories in Seattle.

Suppose French police have just boarded an American airliner that has landed in Paris, on the suspicion that the crew are CIA agents who had abducted several Afghan citizens and taken them to a secret detention center. To determine if this is the case, the French investigators place a helmet on a suspected agent's head. The helmet contains electrodes that record a person's brain waves on an electroencephalogram (EEG). The investigators then show images to the suspect: some of random items, some of the missing Afghans, some of recognizable CIA offices and some of the purported detention center. According to Farwell, if the crew member sees an image of something he has already seen in real life, a specific brain wave known as the P300 will arise. Neuroscience studies have shown that the P300 occurs when the brain recognizes information as familiar. Thus, if the suspected agent falsely says he does not recognize a missing Afghan or the detention center, a P300 wave will appear on the machine's recording.

Critics of brain fingerprinting say that anxiety, as well as alcohol or drug use, can adversely affect the P300 correlation. They also note that if the crew members were indeed CIA agents who had seen simple mug shots of the missing people or photographs of the detention center, those images alone would be enough to raise the P300 wave indicating familiarity–which would certainly not be indicative of guilt.

Nevertheless, the real CIA and FBI have given Farwell a good deal of funding. He maintains that the P300 wave is a very reliable indicator of whether a respondent is being truthful.
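In signal-processing terms, the familiarity test rests on averaging many stimulus-locked EEG epochs and checking for a positive deflection roughly 300 milliseconds after an image appears. The sketch below illustrates only that general idea; the sampling rate, analysis window, and decision threshold are invented for the example and are not Farwell's actual parameters.

```python
import numpy as np

FS = 250  # sampling rate in Hz (illustrative assumption)


def p300_amplitude(epochs):
    """Average stimulus-locked EEG epochs and return the mean amplitude
    in the 250-500 ms post-stimulus window, where a P300 deflection
    would appear.  `epochs` has shape (n_trials, n_samples), each row
    time-locked to one image presentation."""
    erp = epochs.mean(axis=0)  # averaging cancels trial-to-trial noise
    lo, hi = int(0.25 * FS), int(0.50 * FS)
    return erp[lo:hi].mean()


def looks_familiar(probe_epochs, irrelevant_epochs, threshold=2.0):
    """Crude familiarity test: flag the probe image if its P300-window
    amplitude exceeds that of irrelevant items by `threshold`
    (an arbitrary cutoff in microvolts for this sketch)."""
    return (p300_amplitude(probe_epochs)
            - p300_amplitude(irrelevant_epochs)) > threshold
```

A real system would need artifact rejection, calibration per subject, and a statistically principled decision rule; this stripped-down version only shows why recognition, not deception per se, is what the wave indexes.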
Like Marston with the polygraph, Farwell says that a guilty person would possess a mental representation of people or objects related to a crime in ways that no other person would harbor. Of course, that means interrogators must find such evidence and prove its novelty.

Field tests of brain fingerprinting are under way. The apparatus figured prominently in an Iowa court's 2003 reexamination of a case involving Terry Harrington, who had been convicted of murdering a security guard in 1977 and had spent 25 years in prison. When hooked up to the machine, Harrington's brain did not react to items that the killer certainly would have known. Partly as a result of this evidence, the state's highest court reversed his conviction and set him free.

Other technologies are equally compelling and controversial. Psychiatrist Daniel D. Langleben of the University of Pennsylvania has developed a "guilty knowledge test" based on magnetic resonance imaging. Deliberate lying, he says, shows up on scans as a particular neural correlate in the anterior cingulate gyrus as well as part of the left prefrontal cortex, regions of the brain that are associated with mental representations of conflict. Langleben claims that the scientific problems involved in optimizing his lie detector are solvable. But his procedure has one operational drawback: subjects must be prepared to cooperate and stay motionless inside a scanner during interrogation.

Some psychologists have raised a fundamental objection to Langleben's procedure: even if the method can detect a mental conflict, they say, it cannot detect its resolution. There is no way to tell if the subject is experiencing a conflict because he is lying or merely because he is considering whether or not to lie.

Another University of Pennsylvania professor, biophysicist Britton Chance, has focused on a different brain property. He has fashioned a headband that sends near-infrared light into the skull and captures its reflection.
Chance says the sensors can detect changes in the prefrontal cortex–the site of decision making–that occur when a person decides to lie. The device is still in development.

James A. Levine, an endocrinologist at the Mayo Clinic in Rochester, Minn., is working with heat-sensing cameras that can detect a rush of blood to the face, particularly around the eyes, that occurs as a person tells a lie. Such a noninvasive, easily applied technology could be handy for rapidly screening people–for example, at airport security gates. But the technique is preliminary, and its accuracy remains an open question.

Paul Ekman, professor emeritus of psychology at the University of California, San Francisco, is working on a lie detector based on microexpressions–tiny changes in facial expression that most people cannot deliberately control [see "A Look Tells All," by Siri Schubert, on page 26]. But Ekman has said he is not interested in having his method applied to judicial proceedings, because it cannot provide 100 percent accuracy.

Fatal Flaws?

Whether any of the technologies can be considered foolproof remains to be seen. Psychology professor J. Peter Rosenfeld of Northwestern University is among the sharpest critics. One fundamental flaw, he notes, is that the contents of memory change over time. Furthermore, many people–in particular those with intellectual disabilities or drug addictions–do not store memories accurately or recall them reliably.

Rosenfeld and others also say that investigators can easily influence how subjects react to P300 tests merely by using emotionally laden language during questioning. University of South Florida psychologist Emanuel Donchin adds that P300 waves are very sensitive to the order in which stimuli are presented and that the subjective decisions about questions that police must necessarily make during interrogations would compromise test results. Donchin, who used to work with Farwell, also says false positives are likely.
For example, the brain of a person who sees a green sweater and responds with a P300 wave is not necessarily reacting because he had seen the murder victim wearing the garment; the same effect could arise if the suspect had recently seen a similar sweater in a store window, marked down to a very affordable price.

Paul Root Wolpe, a psychiatry professor and fellow of the University of Pennsylvania Center for Bioethics, points out that the 170 or so "scientific tests" that Farwell cites as support for the reliability of brain fingerprinting refer not to separate studies but to individual research subjects, all of whom were tested by Farwell himself. So far Farwell has not permitted independent researchers to confirm his results. Wolpe also worries that premature commercialization of any of these techniques will thwart the basic research that still needs to be done to prove them and could undermine their long-term credibility if they appear faulty in early applications.

Ethics Needed

Assuming one technology does demonstrate its accuracy, a second question arises: Is using it ethical?

This question might first arise in connection with criminal trials. Just as the improving science of genotyping led courts to allow DNA evidence to help determine the guilt of defendants, attorneys are already trying to introduce methods of "brain typing" into court. Neuroethicists would be well advised to start working on the issues now.

Already lawyers are attempting to use brain science to characterize an individual's personality–notably, whether someone accused of a violent crime has an inborn tendency toward aggressive behavior. A person's capacity for empathy, degree of neuroticism, even unconscious racial prejudice are other examples of psychological traits that can be traced to certain patterns of brain activity. But do these traits, if provable, bear on a person's potential to commit, or culpability in, a crime?
On a societal scale, the use of accurate lie detectors could have far-reaching consequences for people's private lives. First we must define privacy as it pertains to the brain. Should our inner mind be inviolable, a place that must not be invaded? Do mental representations constitute a private domain that the police and security agencies have no right to enter? That stance might be a tough limitation for criminal law, where guilt often revolves around the intent of the perpetrator.

If mental representations are off-limits, neuroethicists must balance this view against the potential social good that lie detectors could provide: helping to defend people and nations against terrorists, preventing false accusations and convictions of the innocent, simplifying investigations, and protecting society from potential criminals.

Lie detectors could also create a more transparent society, which would strengthen democratic culture. Imagine that the leading candidates for president had to appear in a televised debate, but this time a big red light in front of each politician would turn on whenever a scanner sensed that the candidate was telling a premeditated lie. "Political openness" would take on new meaning.

Society must also examine its assumptions about personal autonomy: Would we as citizens be willing to lose some freedom in exchange for security if in principle we could no longer hide anything from the government? What would it mean if such resistance tactics as lying or refusing to answer questions were no longer possible? Would simply knowing about the existence of sophisticated lie detectors change our mental lives? Before the technology advances any further, we will need some answers.