RUFACS1

Rutgers/UCSD Facial Action Coding System Database 1
Figure: Data collection setup. (a) The frontal camera was mounted in a bookshelf above the interrogator's head. (b) Two side-view cameras were wall-mounted; a fourth camera was mounted under the interrogator's chair. (c) Interrogators were retired members of the police and FBI, including this county sheriff.

The RU-FACS-1 database (short for Rutgers and UCSD FACS database) is being collected by Mark Frank at Rutgers University as part of a collaboration with the Machine Perception Lab to automate FACS. The database consists of spontaneous facial expressions recorded from multiple views, with ground-truth FACS codes provided by two facial expression experts. The data collection equipment, environment, and paradigm were designed with advice from machine vision consultant Yaser Yacoob and facial behavior consultant Paul Ekman. The system records synchronized digital video from four Point Grey Dragonfly cameras and writes the streams directly to a RAID array.
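
The capture software itself is not described here; as a rough illustration only, the following Python sketch shows the general pattern of grabbing near-synchronized frames from several cameras and writing each stream to disk, using OpenCV in place of the Point Grey SDK that a system like this would actually use. The camera indices, frame rate, resolution, and output paths are all assumptions.

    # Illustrative sketch only: near-synchronized multi-camera capture with
    # OpenCV, standing in for the vendor SDK used in the actual system.
    # Camera indices, resolution, frame rate, and paths are assumptions.
    import cv2

    NUM_CAMERAS = 4
    FPS = 30.0                    # assumed frame rate
    SIZE = (640, 480)             # assumed frame size

    cams = [cv2.VideoCapture(i) for i in range(NUM_CAMERAS)]
    fourcc = cv2.VideoWriter_fourcc(*"MJPG")
    writers = [cv2.VideoWriter("/raid/cam%d.avi" % i, fourcc, FPS, SIZE)
               for i in range(NUM_CAMERAS)]

    try:
        while True:
            # grab() on every camera first, then retrieve(): this keeps
            # the four frames as close together in time as the API allows.
            if not all(cam.grab() for cam in cams):
                break
            for cam, writer in zip(cams, writers):
                ok, frame = cam.retrieve()
                if ok:
                    writer.write(cv2.resize(frame, SIZE))
    finally:
        for cam in cams:
            cam.release()
        for writer in writers:
            writer.release()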

Subjects participated in a false opinion paradigm in which they were randomly assigned to either lie or tell the truth about their opinion on an issue about which they had indicated strong feelings (e.g., Frank & Ekman, 1997; 2004). The subject attempts to convince an interviewer that he or she is telling the truth. Interviewers are current and former members of the police and FBI. Participants are informed ahead of time of the following payoffs: (1) if they tell the truth and are believed, they receive $10; (2) if they lie and are believed, they receive $50; (3) if they are not believed, regardless of whether they are lying or telling the truth, they are told that they will receive no money and will have to fill out a long, boring questionnaire. This paradigm reliably generates many different expressions in a short period of time, which is key for collecting training data for computer vision systems. The paradigm involves natural interaction with another person and does not make the subject aware that the emphasis of the study is on his or her facial expression.
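
For concreteness, the incentive structure amounts to a simple payoff table. The sketch below is merely an encoding of the three outcomes described above; the function name and signature are ours, not part of the study materials.

    # Hypothetical encoding of the payoff structure described above;
    # the function and its signature are illustrative, not study code.
    def payoff(lied: bool, believed: bool) -> int:
        """Return the subject's payment in dollars for one interview."""
        if not believed:
            return 0   # disbelieved subjects also fill out a long questionnaire
        return 50 if lied else 10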

We have collected data from 100 subjects, 2.5 minutes each, for a total of roughly 250 minutes of video. This database constitutes a significant contribution towards the 400-800 minute database recommended in the feasibility study for fully automating FACS. To date we have human FACS coded the upper faces of 20% of the subjects, and this portion is now available for release. We are on target to finish the coding by the end of June and to complete the machine-based scoring by the end of the project on 8/31/04.