Recognition of Basic Emotion
Fully Automatic Recognition of Expressions of Basic Emotion
The output of the face detector is scaled to 90×90 pixels and fed directly to the facial expression analysis system (see Figure 1). The system is essentially the same as the one used for Automatic FACS coding. First, the face image is passed through a bank of Gabor filters at 8 orientations and 9 scales (2-32 pixels/cycle in 0.5-octave steps). The filter-bank representations are then channeled to a classifier that codes the image in terms of a set of expression dimensions. We have found support vector machines to be very effective for classifying facial expressions (Littlewort et al., in press; Bartlett et al., 2003). Recent research at our lab has demonstrated that both speed and accuracy are enhanced by performing feature selection on the Gabor filter outputs prior to classification (e.g., Bartlett et al., 2003). This approach employs AdaBoost (Freund & Schapire, 1996), a state-of-the-art feature selection technique that sequentially selects the feature giving the most information about the classification, given the features already selected.
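The sketch below illustrates the pipeline just described (90×90 face crop → Gabor filter bank → AdaBoost feature selection → SVM), assuming OpenCV, NumPy, and scikit-learn. The filter-bank parameters mirror the text (8 orientations, 9 scales from 2 to 32 pixels/cycle in half-octave steps), but the kernel sizes, Gaussian bandwidth, number of boosting rounds, SVM settings, and the `train_faces` / `train_labels` / `new_face` variables are illustrative assumptions, not the lab's published implementation.

```python
# Minimal sketch of the Gabor-bank + AdaBoost feature selection + SVM pipeline.
# Assumes OpenCV (cv2), NumPy, and scikit-learn; parameter choices beyond the
# filter-bank geometry are assumptions made for illustration.
import cv2
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC

ORIENTATIONS = [k * np.pi / 8 for k in range(8)]        # 8 orientations
WAVELENGTHS = [2.0 * 2 ** (0.5 * s) for s in range(9)]  # 2-32 px/cycle, 0.5-octave steps


def gabor_bank():
    """Build the 8 x 9 = 72-filter bank as (even, odd) quadrature kernel pairs."""
    bank = []
    for lam in WAVELENGTHS:
        sigma = 0.56 * lam                        # bandwidth heuristic (assumption)
        ksize = min(int(6 * sigma) | 1, 89)       # odd kernel size, capped at the face size
        for theta in ORIENTATIONS:
            even = cv2.getGaborKernel((ksize, ksize), sigma, theta, lam, 1.0, 0)
            odd = cv2.getGaborKernel((ksize, ksize), sigma, theta, lam, 1.0, np.pi / 2)
            bank.append((even, odd))
    return bank


def gabor_magnitudes(face_90x90, bank):
    """Filter a 90x90 face with every kernel pair and concatenate the magnitudes."""
    img = face_90x90.astype(np.float32)
    feats = []
    for even, odd in bank:
        re = cv2.filter2D(img, cv2.CV_32F, even)
        im = cv2.filter2D(img, cv2.CV_32F, odd)
        feats.append(np.sqrt(re ** 2 + im ** 2).ravel())
    return np.concatenate(feats)


# `train_faces` (list of 90x90 grayscale arrays), `train_labels`, and
# `new_face` are hypothetical placeholders for labelled data.
bank = gabor_bank()
X = np.stack([gabor_magnitudes(f, bank) for f in train_faces])

# AdaBoost with decision stumps: each round picks the single Gabor feature that
# most improves classification given the rounds already run, mirroring the
# sequential selection described in the text.
ada = AdaBoostClassifier(n_estimators=200).fit(X, train_labels)
selected = sorted({s.tree_.feature[0] for s in ada.estimators_ if s.tree_.feature[0] >= 0})

# Train the SVM on the selected Gabor features only, then classify a new face.
svm = SVC(kernel="linear").fit(X[:, selected], train_labels)
prediction = svm.predict(gabor_magnitudes(new_face, bank)[selected].reshape(1, -1))
```

Selecting a small subset of Gabor outputs before the SVM step keeps the classifier's input to a few hundred values rather than the full 72 × 90 × 90 filter responses, which is consistent with the speed and accuracy gains reported above.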