Machine Perception Laboratory

Feature Detection

Automatic Face Tracking

The face detector is robust to complex background and illumination conditions, and partial occlusion

We developed an automatic face detector that enables fully automated FACS coding (Fasel et al., submitted; Littlewort et al., in press). The face detector employs boosting techniques in a generative framework and extends the work of Viola & Jones (2001). The system runs in real time at 30 frames per second on a fast PC. We have made source code for the face detector freely available. Performance on standard test sets is equal to the state of the art in the computer vision literature (e.g., 90% detection with 1 in a million false alarms on the CMU face detection test set). The CMU test set has unconstrained lighting and background; when lighting and background can be controlled, as in behavioral experiments, accuracy is much higher.
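The boosting-based detection strategy can be illustrated with a toy sketch of a Viola & Jones-style attentional cascade: each stage is a boosted combination of weak threshold classifiers, and a candidate window is rejected as soon as any stage fails, so most non-face windows are discarded cheaply. The features, weights, and thresholds below are illustrative assumptions, not the lab's actual trained parameters.

```python
# Toy sketch of a boosted attentional cascade in the style of Viola & Jones (2001).
# All feature values, polarities, weights, and thresholds here are hypothetical.

def weak_classifier(feature_value, threshold, polarity):
    """Threshold test on one feature: returns 1 (face-like) or 0."""
    return 1 if polarity * feature_value < polarity * threshold else 0

def stage_passes(features, weak_params, stage_threshold):
    """One boosted stage: weighted vote of weak classifiers vs. a stage threshold."""
    score = sum(alpha * weak_classifier(features[i], t, p)
                for i, (t, p, alpha) in enumerate(weak_params))
    return score >= stage_threshold

def cascade_detect(features, stages):
    """Run stages in order; any rejection terminates early, which is what
    makes real-time scanning of many windows feasible."""
    for weak_params, stage_threshold in stages:
        if not stage_passes(features, weak_params, stage_threshold):
            return False
    return True

# Hypothetical one-stage cascade over two features.
stages = [([(0.5, 1, 1.0), (0.5, -1, 1.0)], 1.0)]
print(cascade_detect([0.2, 0.8], stages))  # face-like window passes
print(cascade_detect([0.9, 0.1], stages))  # non-face window rejected early
```

In a real detector the features would be computed from image windows (e.g., rectangle features on an integral image) and the stage parameters learned by boosting; the sketch only shows the early-rejection control flow.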

Automatic facial feature detection

a. Automatic face detection and eye detection on a sample subject from the RU-FACS-1 dataset. b. Sample output of eyeblink detector (bottom) compared to eyeblink artifacts in simultaneously recorded EEG signals (top).

We applied the generative boosting techniques developed for the face detector to the problem of detecting facial features within the face (Fasel et al., submitted). We developed an eye detector that enables more precise alignment of face images, including correction of in-plane rotations. The precision of the current system is on the order of 1/4 of an iris, similar to the precision obtained by human labelers in our previous study. The system also detects eyeblinks. Because it is a data-driven system trained on a large number of real-world images, it detects blinks robustly across a wide range of lighting, orientation, and occlusion conditions (80% correct on random images from the web; performance is in the high 90s for controlled orientation and lighting conditions).
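The in-plane rotation correction described above can be sketched as follows: given the two detected eye centers, compute the tilt of the inter-ocular axis and rotate the image (here, just the eye points themselves) by the opposite angle so the eyes lie on a horizontal line. This is a minimal geometric sketch under image coordinates (y increasing downward), not the lab's actual alignment code.

```python
import math

def eye_alignment_angle(left_eye, right_eye):
    """Tilt (degrees) of the line through the eye centers; rotating the
    image by the negative of this angle makes the eyes horizontal."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

def rotate_point(pt, center, angle_deg):
    """Rotate pt about center by -angle_deg, i.e., undo the detected tilt."""
    a = math.radians(-angle_deg)
    x, y = pt[0] - center[0], pt[1] - center[1]
    return (center[0] + x * math.cos(a) - y * math.sin(a),
            center[1] + x * math.sin(a) + y * math.cos(a))

# Hypothetical eye detections on a tilted face.
left, right = (0.0, 0.0), (10.0, 10.0)
angle = eye_alignment_angle(left, right)          # 45-degree in-plane tilt
mid = ((left[0] + right[0]) / 2, (left[1] + right[1]) / 2)
left_a = rotate_point(left, mid, angle)
right_a = rotate_point(right, mid, angle)
# After correction both eyes share the same y coordinate.
```

In practice the same rotation (plus a scaling to a canonical inter-ocular distance) would be applied to the whole image with a warp, which is what makes the 1/4-iris localization precision useful for alignment.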