
8 3D Face Recognition


and matched each region independently using ICP. They used Borda Count and consensus voting to combine the scores. Matching expression-insensitive regions of the face is a potentially useful approach for overcoming the sensitivity of ICP to expressions. However, determining such regions is a problem worth exploring, because these regions are likely to vary between individuals as well as between expressions. A further challenge is that matching sub-regions requires their accurate segmentation.
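To make the score-combination step concrete, the following sketch shows a Borda-count fusion of per-region match scores. It is a minimal illustration, not the cited authors' implementation: the region names, ICP distance values and gallery size are hypothetical, and lower distance is assumed to mean a better match.

```python
# Minimal sketch of Borda-count fusion of per-region ICP match scores.
# Region names, distances and gallery size are illustrative assumptions.
import numpy as np

def borda_count(region_scores):
    """Fuse per-region match scores for one probe against a gallery.

    region_scores: dict mapping region name -> array of ICP distances,
                   one entry per gallery subject (lower = better match).
    Returns the index of the gallery subject with the highest total count.
    """
    n_gallery = len(next(iter(region_scores.values())))
    totals = np.zeros(n_gallery)
    for distances in region_scores.values():
        # Rank gallery subjects for this region: best match gets most points.
        order = np.argsort(distances)               # smallest distance first
        points = np.empty(n_gallery)
        points[order] = np.arange(n_gallery - 1, -1, -1)
        totals += points
    return int(np.argmax(totals))

# Hypothetical ICP distances for three gallery subjects and two face regions.
scores = {"nose": np.array([0.8, 1.5, 1.1]),
          "forehead": np.array([0.9, 1.2, 2.0])}
print(borda_count(scores))  # -> 0
```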

Finally we note that, rather than minimizing a mean squared error metric between the probe and gallery surfaces, other metrics are possible, although a significantly different approach to minimization must be adopted and the approach is no longer termed ‘ICP’. One such metric is termed the Surface Interpenetration Measure (SIM) [81] which measures the degree to which two aligned surfaces cross over each other. The SIM metric has recently been used with a Simulated Annealing approach to 3D face recognition [76]. A verification rate of 96.5 % was achieved on the FRGC v2 dataset at 0.1 % FAR and a rank-one accuracy of 98.4 % was achieved in identification tests.

In the following two sections we discuss PCA and LDA-based 3D face recognition systems that operate on depth maps and surface feature maps (e.g. arrays of curvature values) rather than on point clouds.

8.7 PCA-Based 3D Face Recognition

Once 3D face scans have been filtered, pose normalized and re-sampled so that, for example, standard-size depth maps are generated, the simplest way to implement a face recognition system is to compare the depth maps directly. In this sense we see a p × q depth map as an m × 1 feature vector in an m = pq dimensional space and we can implement a 1-nearest neighbor scheme, for example, based on either a Euclidean (L2 norm) metric or a cosine distance metric. However, this is not generally recommended. Typical depth map sizes mean that m is large, and the resulting space contains a large amount of redundancy, since we are only imaging faces and not other objects. Dimension reduction using Principal Component Analysis (PCA) can express the variation in the data in a smaller space, thus improving the speed of feature vector comparisons, and can remove dimensions that mainly express noise, thus improving recognition performance. Note that PCA is also known in various texts as the Hotelling transform or the Karhunen-Loève transform. The transform involves a zero-mean operation and a rotation of the data such that the variables associated with each dimension become uncorrelated. It is then possible to form a reduced-dimension subspace by discarding those dimensions that express little variance in the (rotated) data. This is equivalent to a projection of the zero-mean dataset into a subspace that decorrelates the data. It is based on the second-order statistical properties of the data and maps the general covariance matrix in the original basis to a diagonal matrix in the new, rotated basis.
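As a concrete (if not recommended) baseline, the sketch below flattens depth maps into feature vectors and performs 1-nearest-neighbor matching under either metric. The array sizes and data are placeholder assumptions for illustration only.

```python
# Minimal sketch of direct depth-map matching: each p x q depth map is
# flattened to an m = p*q vector and a probe takes the identity of its
# nearest gallery vector under an L2 or cosine distance.
import numpy as np

def nearest_neighbor(probe, gallery, metric="L2"):
    """probe: (m,) vector; gallery: (n, m) matrix of gallery face vectors."""
    if metric == "L2":
        d = np.linalg.norm(gallery - probe, axis=1)
    else:  # cosine distance = 1 - cosine similarity
        d = 1.0 - (gallery @ probe) / (
            np.linalg.norm(gallery, axis=1) * np.linalg.norm(probe) + 1e-12)
    return int(np.argmin(d))

# Hypothetical data: 5 gallery depth maps of size 4 x 3, flattened to length 12.
rng = np.random.default_rng(0)
gallery = rng.random((5, 4 * 3))
probe = gallery[2] + 0.01 * rng.random(12)   # noisy copy of gallery subject 2
print(nearest_neighbor(probe, gallery))      # -> 2
```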

PCA-based 3D face recognition has become a benchmark, at least for near-frontal poses such as those provided in the FRGC v2 dataset [73]. This implies that, when a researcher presents a new 3D face recognition method for this kind of 3D scan, it is expected to at least improve upon a standard PCA performance (for the given set of features employed). The method is similar to the seminal approaches where 2D facial images are decomposed into a linear combination of eigenvectors [82] and employed within a face recognition scenario [87], except that, instead of 2D images, depth maps are used as the input feature vectors. Another important difference is that in 2D ‘eigenfaces’, the three most significant eigenvalues are usually affected by illumination variations and discarding them improves recognition performance [8]. Since depth maps do not contain any illumination component, all significant eigenvalues are used for 3D face recognition.

One of the earliest works on PCA-based 3D face recognition is that of Achermann et al. [2]. Hesher et al. [46] explored the use of different numbers of eigenvectors and image sizes for PCA-based 3D face recognition. Heseltine et al. [44] generated a set of twelve feature maps based on the gradients and curvatures over the facial surface, and applied PCA-based face recognition to these maps. Pan et al. [70] constructed a circular depth map using the nose tip as the center and the axis of symmetry as the starting point, and applied a PCA-based approach to this depth map for face recognition. Chang et al. [17] performed PCA-based 3D face recognition on a larger dataset and later expanded their work to a comparative evaluation of PCA-based 3D face recognition against 2D eigenfaces, finding similar recognition performance [18].

In order to implement and test PCA-based 3D face recognition, we need to partition our pose-normalized 3D scans into a training set and a test set. The following sub-sections provide procedures for training and testing a PCA-based 3D face recognition system.

8.7.1 PCA System Training

1. For the set of n training images, x_i, i = 1, ..., n, where each training face is represented as an m-dimensional point (column vector) in depth map or surface feature space,

x = [x_1, \ldots, x_m]^T,    (8.12)

stack the n training face vectors together (as rows) to construct the n × m training data matrix:

X = \begin{bmatrix} x_1^T \\ \vdots \\ x_n^T \end{bmatrix}.    (8.13)

2. Perform a mean-centering operation by subtracting the mean of the training face vectors, \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, from each row of matrix X to form the zero-mean training data matrix:

X_0 = X - J_{n,1}\,\bar{x}^T,    (8.14)

where J_{n,1} is an n × 1 matrix of ones.

 


3. Generate the m × m covariance matrix of the training data as:

 

C = \frac{1}{n-1} X_0^T X_0.    (8.15)

Note that dividing by n − 1 (rather than n) generates an unbiased covariance estimate from the training data (rather than a maximum likelihood estimate). As we tend to use large training sets (of the order of several hundred images), in practice there is no significant difference between these two covariance estimates.

4. Perform a standard eigendecomposition on the covariance matrix. Since the covariance matrix is symmetric, its eigenvectors are orthogonal to each other and can be chosen to have unit length such that:

V D V^T = C,    (8.16)

where both V and D are m × m matrices. The columns of matrix V are the eigenvectors, v_i, associated with the covariance matrix and D is a diagonal matrix whose elements contain the corresponding eigenvalues, \lambda_i. Since the covariance matrix is symmetric positive semidefinite, these eigenvalues are real and non-negative. A key point is that these eigenvalues describe the variance along each of the eigenvectors. (Note that eigendecomposition can be achieved with a standard function call such as the MATLAB eig function.) Eigenvalues in D and their corresponding eigenvectors in V appear in corresponding columns, and we require them to be in descending order of eigenvalue. If this ordering is not performed automatically by the eigendecomposition function, the columns should be reordered.

5. Select the number of subspace dimensions for projecting the 3D faces. This is the dimensionality reduction step and is usually done by analyzing the ratio of the cumulative variance associated with the first k dimensions of the rotated image space to the total variance associated with the full set of m dimensions in that space. This proportion of variance ratio is given by:

a_k = \frac{\sum_{i=1}^{k} \lambda_i}{\sum_{i=1}^{m} \lambda_i}    (8.17)

and takes a value between 0 and 1, which is often expressed as a percentage 0–100 %. A common approach is to choose a minimum value of k such that ak is greater than a certain percentage (90 % or 95 % are commonly used). Figure 8.7 shows a plot of ak versus k for 455 3D faces taken from the FRGC v2 dataset [73]. From Fig. 8.7 one can conclude that the shape of human faces lies in a significantly lower dimensional subspace than the dimensionality of the original depth maps. Note that the somewhat arbitrary thresholding approach described here is likely to be sub-optimal and recognition performance can be tuned later by searching for an optimal value of k in a set of face recognition experiments.

6. Project the training data set (the gallery) into the k-dimensional subspace:

\tilde{X} = X_0 V_k.    (8.18)


Fig. 8.7 Proportion of variance (%) of the first k eigenvalues to the total variance for 455 3D faces. The first 26 most significant eigenvalues retain 95 % of the total variance and the first 100 eigenvalues retain 99 % of the total variance [73]

Here V_k is an m × k matrix containing the first k eigenvectors (columns, v_i) of V, and \tilde{X} is an n × k matrix of the n training faces (stored as rows) in the k-dimensional subspace (k dimensions stored as columns).
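The six steps above can be collected into a short NumPy sketch. This is a minimal illustration under assumed inputs, with random arrays standing in for pose-normalized depth maps and a 95 % variance threshold chosen arbitrarily, rather than a reference implementation.

```python
# Sketch of the PCA training procedure of Sect. 8.7.1. The training data and
# depth map size are placeholder assumptions; in practice X holds flattened,
# pose-normalized, re-sampled face scans.
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 455, 40, 30                      # number of training scans, depth map size
X = rng.random((n, p * q))                 # step 1: n x m training data matrix (m = pq)

x_bar = X.mean(axis=0)                     # step 2: mean face vector
X0 = X - x_bar                             # zero-mean training data matrix (Eq. 8.14)

C = (X0.T @ X0) / (n - 1)                  # step 3: unbiased covariance estimate (Eq. 8.15)

evals, V = np.linalg.eigh(C)               # step 4: eigendecomposition of symmetric C
order = np.argsort(evals)[::-1]            # sort into descending eigenvalue order
evals, V = evals[order], V[:, order]

a = np.cumsum(evals) / np.sum(evals)       # step 5: proportion of variance a_k (Eq. 8.17)
k = int(np.searchsorted(a, 0.95)) + 1      # smallest k with a_k >= 95 %

Vk = V[:, :k]                              # m x k matrix of the first k eigenvectors
X_tilde = X0 @ Vk                          # step 6: project the gallery (Eq. 8.18)
print(k, X_tilde.shape)
```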

8.7.2 PCA Training Using Singular Value Decomposition

Several variants of PCA-based 3D face recognition exist in the literature, and one of the most important is to use Singular Value Decomposition (SVD) directly on the n × m zero-mean training data matrix, X_0, thus replacing steps 3 and 4 of the previous subsection. The advantage of using SVD is that it often provides superior numerical stability compared to eigendecomposition algorithms; additionally, the storage required for the data matrix is often much less than that required for the covariance matrix (since the number of training scans is much less than the dimension of the feature vector). The SVD is given as:

U S V^T = X_0,    (8.19)

where U and V are orthogonal matrices of dimension n × n and m × m respectively, and S is an n × m matrix with the singular values along its diagonal. Note that, in contrast to the eigendecomposition approach, no covariance matrix is formed, yet the required matrix of eigenvectors, V, spanning the most expressive subspace of the training data is obtained. Furthermore, we can determine the eigenvalues from the corresponding singular values. By substituting for X_0 in Eq. (8.15) using its SVD in Eq. (8.19), and then comparing with the eigendecomposition of the covariance matrix in Eq. (8.16), we see that:

D = \frac{1}{n-1} S^2.    (8.20)
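The SVD route can be sketched in a few lines of NumPy; again the zero-mean data matrix is a random placeholder, and the last line simply cross-checks Eq. (8.20) against the covariance-based route of Sect. 8.7.1.

```python
# Sketch of the SVD variant: PCA training without forming the covariance matrix.
# X0 stands in for the zero-mean n x m training data matrix of step 2; the data
# here are assumed random placeholders for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, m = 455, 1200
X0 = rng.random((n, m)) - 0.5
X0 -= X0.mean(axis=0)

U, s, Vt = np.linalg.svd(X0, full_matrices=False)   # Eq. 8.19: X0 = U S V^T
V = Vt.T                                            # right singular vectors = eigenvectors of C
evals = s**2 / (n - 1)                              # Eq. 8.20: eigenvalues from singular values
                                                    # (NumPy returns s in descending order)

# Cross-check against the covariance route (largest eigenvalue only).
C = (X0.T @ X0) / (n - 1)
print(np.isclose(evals[0], np.linalg.eigvalsh(C).max()))   # -> True
```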

 

 

 

 

 

The proof of Eq. (8.20) is given as one of the questions at the end of this chapter. Typically, SVD library functions order the singular values from highest to lowest along the