July 21 (UPI) -- Scientists with Disney Research have developed a method for analyzing the facial expressions of movie viewers.
The new deep-learning algorithm is designed to analyze the full range of facial expressions produced by a diverse audience, but it learns by first watching a series of cues from a single face. Scientists dubbed the analysis technology "factorized variational autoencoders," or FVAEs.
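The published details of Disney's model aren't spelled out in this article, but the name suggests the general idea: a variational autoencoder whose latent code for each viewer at each moment is factorized into a per-viewer component and a per-frame component. The sketch below is a hypothetical, simplified illustration of that structure in PyTorch; the network sizes, the `FVAE` class, and the elementwise-product prior are assumptions for illustration, not Disney Research's actual implementation.

```python
# Hypothetical sketch of a factorized variational autoencoder (FVAE):
# each observation (viewer i at frame t) is encoded into a latent code whose
# prior is the product of a per-viewer embedding and a per-frame embedding,
# and a shared decoder reconstructs the facial-landmark vector.
import torch
import torch.nn as nn

class FVAE(nn.Module):
    def __init__(self, n_viewers, n_frames, n_landmarks, latent_dim=16):
        super().__init__()
        # Factorized latent structure: one embedding per viewer, one per frame.
        self.viewer_factors = nn.Embedding(n_viewers, latent_dim)
        self.frame_factors = nn.Embedding(n_frames, latent_dim)
        # Encoder maps a landmark vector to a Gaussian posterior over z.
        self.encoder = nn.Sequential(nn.Linear(n_landmarks, 64), nn.ReLU())
        self.mu_head = nn.Linear(64, latent_dim)
        self.logvar_head = nn.Linear(64, latent_dim)
        # Decoder maps a latent code back to landmark coordinates.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, n_landmarks)
        )

    def forward(self, x, viewer_idx, frame_idx):
        h = self.encoder(x)
        mu, logvar = self.mu_head(h), self.logvar_head(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        recon = self.decoder(z)
        # Factorized prior mean: the latent should stay close to
        # viewer_factor * frame_factor, so a viewer's reaction at any frame
        # can later be predicted from the two factors alone.
        prior_mu = self.viewer_factors(viewer_idx) * self.frame_factors(frame_idx)
        return recon, mu, logvar, prior_mu


def fvae_loss(x, recon, mu, logvar, prior_mu):
    recon_err = ((x - recon) ** 2).sum(dim=1).mean()
    # KL divergence between the encoder posterior and the factorized prior N(prior_mu, I).
    kl = 0.5 * (torch.exp(logvar) + (mu - prior_mu) ** 2 - 1.0 - logvar).sum(dim=1).mean()
    return recon_err + kl
```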
"The FVAEs were able to learn concepts such as smiling and laughing on their own," Zhiwei Deng, a doctoral student at Simon Fraser University and former Disney Research lab associate. "What's more, they were able to show how these facial expressions correlated with humorous scenes."
Researchers tested their algorithm on a few thousand audience members during viewings of several box office hits, including Big Hero 6, The Jungle Book and Star Wars: The Force Awakens. Four infrared cameras inside the theater recorded the facial expressions of the audience. The algorithm picked up 16 million facial cues.
"It's more data than a human is going to look through," said researcher Peter Carr. "That's where computers come in -- to summarize the data without losing important details."
The analysis software is designed to home in on faces exhibiting similar facial cues, which helps the algorithm develop an understanding of the stereotypical response to a film scene. That understanding, in turn, helps the algorithm interpret the expressions of other viewers.
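One way to picture that grouping step, as a minimal sketch rather than Disney's code: cluster viewers whose reaction trajectories look alike, treat the cluster centroid as the "stereotypical response," and use it to fill in moments where an individual viewer's face was missed. The scikit-learn clustering, array shapes, and synthetic "smile intensity" values below are all placeholders for illustration.

```python
# Minimal sketch of grouping viewers with similar reactions (illustrative only).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_viewers, n_frames = 200, 500
# Per-viewer "smile intensity" over time (synthetic stand-in for real reaction data).
reactions = rng.normal(size=(n_viewers, n_frames))

# Group viewers whose trajectories look alike.
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit(reactions)
stereotypes = clusters.cluster_centers_  # one "typical" trajectory per group

# Fill in a viewer's missing frames from their group's stereotypical response.
viewer, missing_frames = 17, np.arange(100, 120)
group = clusters.labels_[viewer]
reactions[viewer, missing_frames] = stereotypes[group, missing_frames]
```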
Researchers believe their algorithm could be applied to a range of subjects beyond movie audiences. For example, they suggest the technology could be used to study how different trees respond to wind.
"Once a model is learned, we can generate artificial data that looks realistic," said researcher Yisong Yue.
Scientists described their work this week at the IEEE Conference on Computer Vision and Pattern Recognition.