2D Face Tracking




Real-time, Adaptive, Color-based Object Tracking Using Generative Models

Motivation
Color-based approaches to tracking are popular because of their low computational cost and their robustness to out-of-plane rotations, object deformations, and motion blur. Unfortunately, current color-based methods have some major disadvantages: (1) they rely on local search algorithms and thus tend to get trapped in local maxima created when an object moves too fast for the capture frame rate; (2) color is very sensitive to illumination conditions; (3) most color-based models ignore information about the background. We present a new method for real-time color-based object tracking that addresses each of these issues.

Real-time, Adaptive Tracking
Our method addresses these issues in two ways. First, it performs a global search in each frame, making it robust to fast, unpredictable motion. Second, we run a slower but more accurate feature-based face detector in parallel with the color-based system and use its results to dynamically adapt the color models of the object and the background, reducing sensitivity to lighting conditions.
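The adaptation step can be illustrated with a short sketch. Assuming the object color model is a normalized histogram, the snippet below blends a histogram measured inside the detector's most recent face box into the running model with an exponential forgetting factor. The names ColorHistogram and adaptHistogram and the flat-array layout are illustrative, not taken from the released source.

// Sketch of color-model adaptation, assuming normalized histograms stored as flat arrays.
// ColorHistogram and adaptHistogram are illustrative names, not from the released code.
#include <vector>
#include <cstddef>

struct ColorHistogram {
    std::vector<double> bins;                       // normalized bin probabilities
    explicit ColorHistogram(std::size_t n) : bins(n, 1.0 / n) {}
};

// Blend a histogram measured inside the detector's face box into the running
// object model: new = (1 - alpha) * old + alpha * measured.
void adaptHistogram(ColorHistogram& model, const ColorHistogram& measured, double alpha)
{
    for (std::size_t i = 0; i < model.bins.size(); ++i)
        model.bins[i] = (1.0 - alpha) * model.bins[i] + alpha * measured.bins[i];
}

The same update can be applied to the background model using pixels outside the detected box; how the measured histograms are extracted from the frame is left out here.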

Figure: original image with the result of the color search (green box), and the log-likelihood ratio of each pixel belonging to the face vs. the background.

Generative Model
The color system uses a generative model, allowing us to model the colors in the object and the colors in the background independently, and to adapt the parameters of the model over time.

Bayesian Filtering Using Convolutional HMMs


Download the Bayesian Filtering paper (PDF) here.
View the demo movie (MPG) here.

Bayesian filtering provides a principled approach for a variety of problems in machine perception and robotics. Current filtering methods work with analog hypothesis spaces and find approximate solutions to the resulting non-linear filtering problem using Monte Carlo approximations (i.e., particle filters) or linear approximations (e.g., the extended Kalman filter). Instead, we digitize the hypothesis space into a large number, n ≈ 100,000, of discrete hypotheses. The approach thus becomes equivalent to a standard hidden Markov model (HMM), except that we use a very large number of states. One reason this approach has not been tried in the past is that the standard forward filtering equations for discrete HMMs require on the order of n² operations per time step and thus rapidly become prohibitive. In our model, however, the states are arranged in a two-dimensional topology with location-independent dynamics. With this arrangement, predictive distributions can be computed via convolutions, and the computation of log-likelihood ratios can also be performed via convolutions.
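Under these assumptions, one step of the forward recursion can be sketched as follows: the filtering distribution is a 2D array of probabilities over object locations, the prediction step convolves it with a small, location-independent motion kernel, and the update step multiplies in the per-hypothesis likelihood of the new frame and renormalizes. The Grid type, the brute-force convolution loops, and the function names below are illustrative; a real-time implementation would presumably use separable kernels or FFT-based convolution.

// Sketch of one step of HMM forward filtering on a 2D hypothesis grid.
// Because the motion model is location independent, the prediction step is a
// convolution of the current posterior with a small transition kernel.
#include <vector>
#include <cstddef>

using Grid = std::vector<std::vector<double>>;      // distribution over object locations

// Predictive distribution: p(h_{t+1}) = sum_h p(h_{t+1} | h_t) p(h_t | data),
// computed as a convolution with a (2r+1)x(2r+1) motion kernel.
Grid predict(const Grid& posterior, const Grid& kernel)
{
    const std::size_t H = posterior.size(), W = posterior[0].size();
    const int r = static_cast<int>(kernel.size()) / 2;
    Grid prior(H, std::vector<double>(W, 0.0));
    for (std::size_t y = 0; y < H; ++y)
        for (std::size_t x = 0; x < W; ++x)
            for (int dy = -r; dy <= r; ++dy)
                for (int dx = -r; dx <= r; ++dx) {
                    const int sy = static_cast<int>(y) - dy, sx = static_cast<int>(x) - dx;
                    if (sy >= 0 && sy < static_cast<int>(H) && sx >= 0 && sx < static_cast<int>(W))
                        prior[y][x] += kernel[dy + r][dx + r] * posterior[sy][sx];
                }
    return prior;
}

// Measurement update: multiply the prior by the likelihood of the new frame
// at each hypothesis and renormalize to obtain the filtering distribution.
void update(Grid& prior, const Grid& likelihood)
{
    double z = 0.0;
    for (std::size_t y = 0; y < prior.size(); ++y)
        for (std::size_t x = 0; x < prior[0].size(); ++x) {
            prior[y][x] *= likelihood[y][x];
            z += prior[y][x];
        }
    if (z > 0.0)
        for (auto& row : prior)
            for (double& p : row) p /= z;
}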

The hidden variable H determines which pixels belong to the object and which belong to the background. Object pixels are rendered independently from an object color histogram; background pixels are rendered independently from a space-variant background histogram model.
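The per-pixel evidence that feeds the filter can be summarized as a log-likelihood-ratio image, log p(color | object) − log p(color | background). The sketch below assumes quantized RGB histograms and, for simplicity, a single background histogram rather than the space-variant model described above; the pixel layout and function names are illustrative.

// Sketch: per-pixel log-likelihood ratio log p(color | object) - log p(color | background),
// assuming both models are quantized RGB histograms. Positive values favor "object".
#include <vector>
#include <cmath>
#include <cstdint>
#include <cstddef>

struct Pixel { std::uint8_t r, g, b; };

std::size_t binIndex(const Pixel& p, std::size_t binsPerChannel)
{
    // Quantize each channel; binsPerChannel is assumed to divide 256 (e.g., 8 or 16).
    const std::size_t step = 256 / binsPerChannel;
    const std::size_t r = p.r / step, g = p.g / step, b = p.b / step;
    return (r * binsPerChannel + g) * binsPerChannel + b;
}

std::vector<double> logLikelihoodRatio(const std::vector<Pixel>& image,
                                       const std::vector<double>& objectHist,
                                       const std::vector<double>& backgroundHist,
                                       std::size_t binsPerChannel)
{
    const double eps = 1e-9;                        // avoid log(0) for empty bins
    std::vector<double> llr(image.size());
    for (std::size_t i = 0; i < image.size(); ++i) {
        const std::size_t b = binIndex(image[i], binsPerChannel);
        llr[i] = std::log(objectHist[b] + eps) - std::log(backgroundHist[b] + eps);
    }
    return llr;
}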


An example of uncertainty propagation. The image on the left shows the most probable hypotheses at time t, i.e., the filtering distribution. The image on the right shows the predictive distribution for time t+1, i.e., the prior distribution for the next time step.


Experiments
To determine the conditions under which the new system works effectively and the conditions under which it falters, we are carrying out experiments on datasets of video footage containing, among other confounding factors, variable lighting conditions and complex backgrounds. The source code, written in multi-platform C++, is freely available to the research community, and is in use as a control and feedback mechanism for communication robots and for the University of Colorado platform for computer agent animation.