FACE PROCESSING PROBLEMS

Detection | Location | Tracking | Recognition | Expression | Pose


Face Detection

The proposed method performs an exhaustive multi-scale search, where the face class is described by the vertical projection of the whole face (MVface) and the horizontal projection of the eyes' region (MHeyes).
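
As an illustration, the following minimal Python sketch computes both kinds of integral projections for a grayscale region stored as a NumPy array. The function names and the zero-mean, unit-variance normalization are choices of this example, not part of the original implementation.

import numpy as np

def vertical_projection(region):
    # Vertical integral projection PV(y): mean gray level of each row of
    # the region; the facial structure (eyebrows, eyes, nose, mouth)
    # shows up as characteristic minima and maxima along y.
    return region.mean(axis=1)

def horizontal_projection(region):
    # Horizontal integral projection PH(x): mean gray level of each
    # column; over the eyes' region the two eyes appear as two minima
    # along x, which is the pattern modelled by MHeyes.
    return region.mean(axis=0)

def normalize(proj):
    # Zero-mean, unit-variance normalization so that projections taken
    # under different illumination can be compared (an assumption of
    # this sketch, not necessarily the normalization used by the method).
    p = proj - proj.mean()
    s = p.std()
    return p / s if s > 0 else p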

Step 1. Vertical projections of overlapped strips are obtained. The pattern of MVface is searched for, and the best candidates are selected. This process is repeated at different scales.

Step 2. For each candidate, the horizontal projection of the expected eye region is computed. This is done with a certain tolerance in position and scale. The pattern of MHeyes is searched for. If it is not found, the candidate is rejected.
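
The sketch below illustrates Steps 1 and 2 at a single scale, under several assumptions: it scans square windows instead of overlapped strips, takes the upper half of each window as the eye band, and compares projections with a plain normalized distance and illustrative thresholds; normalize is the helper from the sketch above.

import numpy as np

def projection_distance(proj, model):
    # Mean squared distance between two normalized projections; the
    # model is linearly resampled to the length of the candidate.
    m = np.interp(np.linspace(0, 1, len(proj)),
                  np.linspace(0, 1, len(model)), model)
    return np.mean((normalize(proj) - normalize(m)) ** 2)

def detect_candidates(image, MVface, MHeyes, win=64, step=16,
                      thr_v=0.5, thr_h=0.5):
    # Single-scale sketch of Steps 1 and 2: window size, step and
    # thresholds are illustrative; the real method repeats the search
    # at several scales and works on strip-based projections.
    H, W = image.shape
    candidates = []
    for y in range(0, H - win + 1, step):
        for x in range(0, W - win + 1, step):
            window = image[y:y + win, x:x + win].astype(float)
            d_v = projection_distance(window.mean(axis=1), MVface)
            if d_v > thr_v:
                continue        # Step 1: no face-like vertical structure
            eye_band = window[: win // 2]
            d_h = projection_distance(eye_band.mean(axis=0), MHeyes)
            if d_h > thr_h:
                continue        # Step 2: eyes pattern not found, reject
            candidates.append(((x, y, win, win), d_v + d_h))
    return candidates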

Step 3. The resulting candidates are grouped and pruned. In case of overlapping, only the best candidate is selected.
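
Step 3 can be sketched as a greedy pruning of overlapping candidates. The overlap measure, the 0.5 threshold and the convention that a lower distance is better are assumptions of this example.

def overlap(a, b):
    # Intersection area over the smaller box; boxes are (x, y, w, h).
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    iw = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0, min(ay + ah, by + bh) - max(ay, by))
    return iw * ih / min(aw * ah, bw * bh)

def prune_candidates(candidates, max_overlap=0.5):
    # Keep the best candidate (lowest projection distance) and discard
    # any remaining candidate that overlaps one already kept.
    kept = []
    for box, dist in sorted(candidates, key=lambda c: c[1]):
        if all(overlap(box, k[0]) < max_overlap for k in kept):
            kept.append((box, dist))
    return kept

With the sketches above, a possible usage is prune_candidates(detect_candidates(image, MVface, MHeyes)).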

SAMPLE RESULTS
CMU/MIT Face Database
UMU Face Database


Facial Feature Location

The facial feature locator takes as input the results of the face detector. It refines the location of the face using the models of the vertical projection of the face (MVface) and the horizontal projection of the eyes' region (MHeyes). The alignment algorithm is the key component in each of the three steps of the process.
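
As a rough illustration, the sketch below aligns a projection with a model by searching over a small set of shifts and scale factors, returning the best shift, scale and distance. The search ranges, the linear resampling and the zero-mean/unit-variance normalization are simplifications of this example, not the exact alignment algorithm of the method.

import numpy as np

def resample(proj, length):
    # Linearly resample a 1-D projection to a given length.
    x_old = np.linspace(0.0, 1.0, len(proj))
    x_new = np.linspace(0.0, 1.0, length)
    return np.interp(x_new, x_old, proj)

def align_projection(proj, model, shifts=range(-8, 9), scales=(0.9, 1.0, 1.1)):
    # Align proj to model; both signals are normalized before comparison
    # and the best (shift, scale, distance) found in the search is returned.
    model_n = (model - model.mean()) / (model.std() + 1e-9)
    best = (0, 1.0, np.inf)
    for scale in scales:
        scaled = resample(proj, max(2, int(round(len(proj) * scale))))
        scaled = (scaled - scaled.mean()) / (scaled.std() + 1e-9)
        for shift in shifts:
            lo = max(0, shift)
            hi = min(len(model_n), len(scaled) + shift)
            if hi - lo < len(model_n) // 2:
                continue  # require enough overlap between the two signals
            dist = np.mean((model_n[lo:hi] - scaled[lo - shift:hi - shift]) ** 2)
            if dist < best[2]:
                best = (shift, scale, dist)
    return best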

Step 1. The vertical projections of the expected regions of both eyes are computed and aligned with each other. The resulting displacement is used to compute the face orientation angle. This is a robust way of exploiting face symmetry.
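
A minimal sketch of this step, assuming the two eye regions are given as NumPy arrays and reusing the align_projection helper above; the interocular distance parameter is an assumption of the example.

import numpy as np

def estimate_roll(left_eye_region, right_eye_region, interocular_px):
    # Align the vertical projections of both eye regions; the vertical
    # offset between them gives the in-plane rotation of the face.
    pv_left = left_eye_region.mean(axis=1)
    pv_right = right_eye_region.mean(axis=1)
    shift, _, _ = align_projection(pv_right, pv_left)
    return np.degrees(np.arctan2(shift, interocular_px))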

Step 2. After extracting the rectified face (with an affine transform), the vertical projection of the face is computed, including a tolerance margin. This projection is aligned with respect to MVface. Consequently, the face is relocated in Y.

Step 3. The horizontal projection of the expected eyes' region is computed, including a tolerance margin. This projection is aligned with respect to MHeyes. Consequently, the face is relocated in X.
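
Steps 2 and 3 reuse the same alignment. A compact sketch, with illustrative region names and the helpers defined above:

def relocate_face(face_img, MVface, MHeyes, eye_rows):
    # face_img: rectified face region (with a tolerance margin);
    # eye_rows: (first, last) row of the expected eye band (assumed).
    pv_face = face_img.mean(axis=1)                            # vertical projection of the face
    dy, _, _ = align_projection(pv_face, MVface)               # Step 2: correction in Y
    ph_eyes = face_img[eye_rows[0]:eye_rows[1]].mean(axis=0)   # horizontal projection of the eye band
    dx, _, _ = align_projection(ph_eyes, MHeyes)               # Step 3: correction in X
    return dx, dy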

SAMPLE RESULTS
CMU/MIT and UMU Face Databases


Face Tracking

Face tracking is an iterative process consisting of prediction and relocation. The prediction can be made with a trivial method (reusing the locations from the previous frame) or with a color-based method (the CamShift algorithm).

The relocation process is very similar to the facial feature location algorithm. The main differences are:

- The projection models used in alignment (MVface and MHeyes) are learned from the same sequence; more precisely, from the first frame in the sequence.

- The step of "orientation estimation" is now the last step in the process.

- The distances obtained from the alignment (from PVface to MVface, and from PHeyes to MHeyes) are used to detect when the tracked face is lost.
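
A sketch of one tracking iteration under these ideas, assuming a grayscale frame, a (x, y, w, h) box, the trivial prediction, the upper half of the box as the eye band, and an illustrative loss threshold; align_projection is the helper sketched in the location section.

LOSS_THRESHOLD = 0.5  # assumed value; would be tuned on real alignment distances

def track_frame(frame, prev_box, MVface, MHeyes):
    x, y, w, h = prev_box                       # trivial prediction: reuse previous box
    face_img = frame[y:y + h, x:x + w].astype(float)
    pv_face = face_img.mean(axis=1)             # vertical projection of the face (PVface)
    ph_eyes = face_img[: h // 2].mean(axis=0)   # horizontal projection of the eye band (PHeyes)
    _, _, d_v = align_projection(pv_face, MVface)
    _, _, d_h = align_projection(ph_eyes, MHeyes)
    lost = d_v > LOSS_THRESHOLD or d_h > LOSS_THRESHOLD
    return prev_box, lost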

Sample Videos
Low Resolution Video


Face Recognition

Integral projections can be applied to perform biometric face recognition. Projections are extracted from all the samples in the gallery and the probe set. The score s_ij between gallery sample g_i and probe p_j is defined as the distance between their projections after alignment. The distances from PVface and PHeyes can be combined to obtain better recognition results.
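
A possible sketch of the score computation, assuming each sample is stored as a pair (PVface, PHeyes), a weighted sum of the two distances, and the align_projection helper sketched above:

import numpy as np

def recognition_scores(gallery, probes, w=0.5):
    # s[i, j]: combined alignment distance between gallery sample i and
    # probe j; the weight w is an assumption of this example.
    s = np.zeros((len(gallery), len(probes)))
    for i, (gv, gh) in enumerate(gallery):
        for j, (pv, ph) in enumerate(probes):
            _, _, dv = align_projection(pv, gv)
            _, _, dh = align_projection(ph, gh)
            s[i, j] = w * dv + (1.0 - w) * dh
    return s

The identity assigned to probe j would then be the gallery sample with the lowest s[i, j].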

This method outperforms template matching using correlation, the eigenfaces approach, and face recognition using Hidden Markov Models.


Facial Expression Analysis

We propose a simple method to analyze facial expressions, based on a discrete number of action units (AU). Four states are defined for the eyes (normal, closed, eyebrows raised, and eyebrows lowered) and four for the mouth (closed, half opened, opened, and teeth showing).

Face detection, location and tracking are used to find the regions of the eyes and the mouth in each frame of the video sequence. Vertical projections are extracted from these regions. One classifier is used for the eye projections and another for the mouth projections.
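
One way to sketch such a classifier is a nearest-prototype rule over aligned projections; the prototype representation and the reuse of align_projection are assumptions of this example, not necessarily the classifiers actually used.

import numpy as np

EYE_STATES = ["normal", "closed", "eyebrows raised", "eyebrows lowered"]
MOUTH_STATES = ["closed", "half opened", "opened", "teeth showing"]

def classify_projection(proj, prototypes, labels):
    # Compare the projection of the eye (or mouth) region against one
    # stored prototype per state and return the closest state.
    dists = [align_projection(proj, proto)[2] for proto in prototypes]
    return labels[int(np.argmin(dists))]

For instance, classify_projection(pv_eyes, eye_prototypes, EYE_STATES) would label the eye state of the current frame.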

Sample Video and Result


Facial Pose Estimation

This method is designed to produce control signals for navigation in a virtual environment. Thus, some heuristics are introduced and the results may not be very precise. Accuracy is sacrificed in favor of manageability.

The 3D location of the face (x, y, z) and the roll angle are estimated using the tracked locations of the eyes and mouth. Yaw is heuristically computed using the horizontal projection of the eyes' region. Pitch is estimated using the vertical projection of the face.
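
A rough sketch of these heuristics, assuming tracked (x, y) positions for the eyes and mouth, the projections of the current frame, and the helpers above; the reference interocular distance and the yaw/pitch formulas are illustrative and yield relative control signals rather than calibrated angles.

import numpy as np

def estimate_pose(left_eye, right_eye, mouth, ph_eyes, pv_face, MVface,
                  ref_interocular=60.0):
    (lx, ly), (rx, ry), (mx, my) = left_eye, right_eye, mouth
    x = (lx + rx + mx) / 3.0                           # face center in the image
    y = (ly + ry + my) / 3.0
    interocular = np.hypot(rx - lx, ry - ly)
    z = ref_interocular / (interocular + 1e-9)         # relative depth from apparent eye distance
    roll = np.degrees(np.arctan2(ry - ly, rx - lx))    # in-plane rotation from the eye line
    half = len(ph_eyes) // 2
    yaw = float(ph_eyes[:half].mean() - ph_eyes[half:].mean())  # asymmetry of the eyes' projection
    pitch, _, _ = align_projection(pv_face, MVface)    # vertical offset of the face projection
    return x, y, z, roll, yaw, pitch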

Sample Result of Pose
Perceptual Interface Using Pose Estimation
Sample Video of the Perceptual Interface


Facultad de Informática. Office 2.34
Campus de Espinardo. Universidad de Murcia
30100 Espinardo, Murcia (SPAIN)
Phone: +34 968 39 85 30
Fax: +34 968 36 41 51
E-mail: ginesgm@um.es