Ouch! Computer system spots fake expressions of pain better than people
The system may also be used to detect deceptive actions. A new study shows that a computer-vision system can distinguish real from faked expressions of pain more accurately than humans can.
This ability has obvious uses for uncovering pain malingering — fabricating or exaggerating the symptoms of pain for a variety of motives — but the system also could be used to detect deceptive actions in the realms of security, psychopathology, job screening, medicine and law.
The study, “Automatic Decoding of Deceptive Pain Expressions,” is published in the latest issue of Current Biology.
The study comprised two experiments with a total of 205 human observers, who were asked to assess the veracity of pain expressions in video clips. Some of the people in the clips were undergoing the cold pressor test, in which a hand is immersed in ice water to measure pain tolerance; others were faking their expressions of pain.
“Human subjects could not discriminate real from faked expressions of pain more frequently than would be expected by chance,” Frank says. “Even after training, they were accurate only 55 percent of the time. The computer system, however, was accurate 85 percent of the time.”
Bartlett noted that the computer system “managed to detect distinctive, dynamic features of facial expressions that people missed. Human observers just aren’t very good at telling real from faked expressions of pain.”
The researchers employed the Computer Expression Recognition Toolbox (CERT), an end-to-end system for fully automated facial-expression recognition that operates in real time. It was developed by Bartlett, Littlewort, Frank and others to assess the accuracy of machine versus human vision.
They found that machine vision could automatically distinguish deceptive from genuine facial signals by extracting spatiotemporal information from facial expressions that humans either cannot or do not extract.
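The paper itself includes no code, but the general approach it describes, summarizing per-frame facial-action measurements with temporal-dynamics features and classifying clips as genuine or faked, can be sketched roughly as follows. This is an illustrative sketch, not the authors' actual CERT pipeline: the choice of features, the linear support-vector classifier, and the randomly generated clips and labels are all assumptions made for the example.

```python
# Illustrative sketch (not the authors' CERT pipeline): classify real vs. faked
# pain from the temporal dynamics of per-frame facial-action measurements.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def dynamics_features(au_timeseries):
    """Summarize one clip's per-frame action-unit intensities (frames x AUs)
    with simple temporal-dynamics statistics: average intensity, intensity
    variability, and variability of frame-to-frame change."""
    diffs = np.diff(au_timeseries, axis=0)          # frame-to-frame change
    return np.concatenate([
        au_timeseries.mean(axis=0),                 # average intensity
        au_timeseries.std(axis=0),                  # intensity variability
        diffs.std(axis=0),                          # movement variability
    ])

# Hypothetical data: 40 clips, each 300 frames x 20 action units, with labels
# 1 = genuine pain, 0 = faked pain. Real inputs would come from a facial
# expression toolbox such as CERT.
rng = np.random.default_rng(0)
clips = [rng.random((300, 20)) for _ in range(40)]
labels = rng.integers(0, 2, size=40)

X = np.stack([dynamics_features(c) for c in clips])
clf = SVC(kernel="linear")
print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```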
“In highly social species such as humans,” says Lee, “faces have evolved to convey rich information, including expressions of emotion and pain. And, because of the way our brains are built, people can simulate emotions they’re not actually experiencing so successfully that they fool other people. The computer is much better at spotting the subtle differences between involuntary and voluntary facial movements.”
Frank adds, “Our findings demonstrate that automated systems like CERT may analyze the dynamics of facial behavior at temporal resolutions previously not feasible using manual coding methods.”
Bartlett says this approach illuminates basic questions pertaining to many social situations in which the behavioral fingerprint of neural control systems may be relevant.
“As with causes of pain, these scenarios also generate strong emotions, along with attempts to minimize, mask and fake such emotions, which may involve ‘dual control’ of the face,” Bartlett says.
“Dual control of the face means that the signals for our spontaneous, felt emotional expressions originate in different areas of the brain than our deliberately posed emotional expressions,” Frank explains, “and they proceed through different motor systems that account for subtle appearance differences and, in the case of this study, dynamic movement factors.”
The computer-vision system, Bartlett says, “can be applied to detect states in which the human face may provide important clues as to health, physiology, emotion or thought, such as drivers’ expressions of sleepiness, students’ expressions of attention and comprehension of lectures, or responses to treatment of affective disorders.”
The single most predictive feature of falsified expressions, the study showed, is how and when the mouth opens and closes. Fakers’ mouths open with less variation and too regularly. The researchers say further investigations will explore whether such over-regularity is a general feature of fake expressions.
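As a rough illustration of what "opening too regularly" might mean in practice, the sketch below scores the regularity of mouth-opening events in a clip using the coefficient of variation of the intervals between openings. The per-frame `mouth_opening` signal, the event-detection threshold, and the regularity metric are assumptions made for this example, not the measure reported in the paper.

```python
# Illustrative sketch: score how regularly the mouth opens over a clip.
# A low coefficient of variation of the intervals between opening events
# would indicate the kind of over-regularity the study associates with
# faked pain expressions. Threshold and metric are assumptions.
import numpy as np

def opening_regularity(mouth_opening, fps=30, threshold=0.5):
    """mouth_opening: per-frame lip-separation signal (arbitrary units).
    Returns the coefficient of variation of intervals between opening
    onsets (lower = more regular), or None if fewer than two onsets."""
    above = mouth_opening > threshold
    # Frames where the signal first crosses the threshold upward.
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    if len(onsets) < 2:
        return None
    intervals = np.diff(onsets) / fps              # seconds between openings
    return intervals.std() / intervals.mean()

# Hypothetical example: a perfectly periodic (faked-looking) signal vs. a
# frequency-jittered (more natural-looking) one.
t = np.arange(0, 10, 1 / 30)
regular = (np.sin(2 * np.pi * 1.0 * t) > 0).astype(float)
jittered = (np.sin(2 * np.pi * (1.0 + 0.3 * np.sin(0.7 * t)) * t) > 0).astype(float)
print("regular clip CV:", opening_regularity(regular))
print("jittered clip CV:", opening_regularity(jittered))
```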
Contact: Patricia Donovan
pdonovan@buffalo.edu
The authors are Marian Bartlett, PhD, research professor, Institute for Neural Computation, University of California, San Diego; Gwen C. Littlewort, PhD, co-director of the institute’s Machine Perception Laboratory; Mark G. Frank, PhD, professor of communication, University at Buffalo; and Kang Lee, PhD, Dr. Eric Jackman Institute of Child Study, University of Toronto.
For more information: http://www.eurekalert.org/multimedia/pub/71238.php?from=264462