Feasibility Study for Spontaneous Expressions

The collaboration between the research teams in this project emerged from the 1992 NSF planning workshop on Facial Expression Understanding (Ekman et al., 1993). Our early work established proof of principle and was supported by NSF grant No. 9120868, 'Automating Facial Expression,' awarded to Paul Ekman and Terry Sejnowski from 1992 to 1995.

Prior to 2000, work on automatic facial expression recognition was based on datasets of posed expressions collected under controlled conditions, with subjects deliberately facing the camera at all times. In 2000-2001, our group at UCSD, in conjunction with a group led by Jeffrey Cohn and Takeo Kanade at CMU, undertook the first attempt that we know of to automatically measure spontaneous facial expressions (Bartlett et al., 2001; Cohn et al., 2001). We conducted a feasibility study with the goal of classifying a small set of facial actions in 20 subjects who had participated in an emotion-eliciting mock-crime experiment conducted previously by Frank and Ekman (1997). The results were evaluated by a team of computer vision experts (Yaser Yacoob, Pietro Perona) and behavioral experts (Paul Ekman, Mark Frank). These experts produced a report, 'Fully Automated Facial Action Coding,' which concluded that a fully automated system for measuring human facial expression is a realistic goal by the year 2010, and that our approach of 3D tracking and warping to frontal views, followed by machine learning techniques applied directly to the warped images, is a viable and promising technology for automatic FACS coding of spontaneous expressions.
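
For concreteness, here is a minimal Python sketch of the warp-then-classify idea. It stands in for the full pipeline with a 2D affine warp from three landmark correspondences (the 3D tracking itself is outside this sketch); the template coordinates, landmark choice, and linear SVM are illustrative assumptions, not the system described above.

# Minimal warp-then-classify sketch (illustrative assumptions throughout).
import numpy as np
import cv2
from sklearn.svm import SVC

# Assumed canonical frontal positions (pixels, 96x96 crop) for three
# stable landmarks: outer eye corners and nose tip.
TEMPLATE = np.float32([[20, 30], [76, 30], [48, 60]])

def warp_to_frontal(frame, landmarks, size=96):
    # landmarks: (3, 2) array of the same three points detected in `frame`.
    # A 2D affine fit stands in here for the project's 3D tracking/warping.
    M, _ = cv2.estimateAffine2D(np.float32(landmarks), TEMPLATE)
    return cv2.warpAffine(frame, M, (size, size))

def train_au_classifier(warped_faces, au_present):
    # Learn facial-action presence directly from warped pixel intensities,
    # mirroring "machine learning applied directly to the warped images".
    X = np.stack([f.astype(np.float32).ravel() / 255.0 for f in warped_faces])
    clf = SVC(kernel="linear")
    clf.fit(X, au_present)
    return clf

In the approach evaluated here, the warp is driven by a 3D head model rather than a 2D affine fit, which is what makes it applicable to the out-of-plane rotations discussed below.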

The consultants identified the most important challenges to fully automating FACS as:

1) Collection of large FACS-coded video databases of spontaneous behavior, covering at least 200 subjects filmed for 2-4 minutes each, for a total of 400-800 minutes of FACS-coded video.
2) Development of robust methods for handling the out-of-plane head rotations inherent to unconstrained facial behavior (a minimal pose-estimation sketch follows this list).
3) Creation of a community of interdisciplinary teams working toward automatic facial expression measurement.
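
Handling out-of-plane rotation is typically framed as recovering 3D head pose from tracked landmarks. Below is a minimal sketch using OpenCV's solvePnP; the 3D model coordinates, landmark set, and focal-length guess are illustrative assumptions, not the methods ultimately developed in this project.

# Minimal head-pose sketch via OpenCV's solvePnP (assumed values throughout).
import numpy as np
import cv2

# Rough 3D face-model points in millimetres: nose tip, chin, left/right
# outer eye corners, left/right mouth corners (generic, assumed model).
MODEL_POINTS = np.float32([
    [0.0, 0.0, 0.0],
    [0.0, -63.6, -12.5],
    [-43.3, 32.7, -26.0],
    [43.3, 32.7, -26.0],
    [-28.9, -28.9, -24.1],
    [28.9, -28.9, -24.1],
])

def estimate_head_pose(image_points, frame_size):
    # image_points: (6, 2) array of the landmarks above, detected in the frame.
    # Returns rotation (Rodrigues vector) and translation of the head.
    h, w = frame_size
    focal = w  # crude focal-length guess; lens distortion ignored
    camera_matrix = np.float32([[focal, 0, w / 2],
                                [0, focal, h / 2],
                                [0, 0, 1]])
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, np.float32(image_points),
                                  camera_matrix, None)
    return rvec, tvec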

Related Publications:
UCSD tech report
CMU tech report
Consultants Report