RU-FACS-1 Database
Principal Investigators: Mark S. Frank, Javier Movellan, Marian Stewart Bartlett, Gwen Littlewort
Rutgers/UCSD Facial Action Coding System Database 1
The RU-FACS-1 database (short for Rutgers and UCSD FACS database) is being collected by Mark Frank at Rutgers University as part of a collaboration with the Machine Perception Lab to automate FACS. The database consists of spontaneous facial expressions recorded from multiple views, with ground-truth FACS codes provided by two facial expression experts. The data collection equipment, environment, and paradigm were designed with advice from machine vision consultant Yaser Yacoob and facial behavior consultant Paul Ekman. The system records synchronized digital video from four Point Grey Dragonfly cameras and writes it directly to a RAID array.
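To make the recording setup concrete, here is a minimal sketch of synchronized multi-camera capture. It assumes generic OpenCV-accessible cameras, an invented output path, and an assumed frame rate; the actual system used Point Grey's own camera interface and wrote to a RAID array, so treat the details below as illustrative rather than a description of the real pipeline.

```python
# Sketch of synchronized capture from several cameras, assuming
# OpenCV-visible devices; the real system used Point Grey hardware.
import cv2

NUM_CAMERAS = 4
OUTPUT_DIR = "/raid/rufacs1"   # hypothetical mount point for the RAID array
FPS = 30.0                     # assumed frame rate

def record(num_frames: int) -> None:
    captures = [cv2.VideoCapture(i) for i in range(NUM_CAMERAS)]
    fourcc = cv2.VideoWriter_fourcc(*"MJPG")
    writers = []
    for i, cap in enumerate(captures):
        width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        writers.append(cv2.VideoWriter(
            f"{OUTPUT_DIR}/cam{i}.avi", fourcc, FPS, (width, height)))

    for _ in range(num_frames):
        # Grab all frames first so the views stay close in time,
        # then decode and write each one.
        for cap in captures:
            cap.grab()
        for cap, writer in zip(captures, writers):
            ok, frame = cap.retrieve()
            if ok:
                writer.write(frame)

    for cap in captures:
        cap.release()
    for writer in writers:
        writer.release()

if __name__ == "__main__":
    record(num_frames=int(FPS * 150))  # one 2.5-minute interview segment
```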
Subjects participated in a false opinion paradigm in which they were randomly assigned to either lie or tell the truth about their opinion on an issue about which they had indicated strong feelings (e.g., Frank & Ekman, 1997; 2004). Each subject attempts to convince an interviewer that he or she is telling the truth. The interviewers are current and former members of the police and the FBI. The participants are informed ahead of time of the following pay-offs: (1) if they tell the truth and are believed, they receive $10; (2) if they lie and are believed, they receive $50; (3) if they are not believed, regardless of whether they are lying or telling the truth, they receive no money and must fill out a long, boring questionnaire. This paradigm reliably generates many different expressions in a short period of time, which is key for collecting training data for computer vision systems. The paradigm involves natural interaction with another person and does not make subjects aware that the emphasis of the study is on their facial expressions.
We have collected data from 100 subjects, 2.5 minutes each, for a total of 250 minutes of video. This database constitutes a significant contribution toward the 400-800 minute database recommended in the feasibility study for fully automating FACS. To date we have human FACS-coded the upper faces of 20% of the subjects; this portion is now available for release. We are on target to finish the coding by the end of June and to complete the machine-based scoring by the end of the project on 8/31/04.