
Chapter 8

3D Face Recognition

Ajmal Mian and Nick Pears

Abstract Face recognition using standard 2D images struggles to cope with changes in illumination and pose. 3D face recognition algorithms have been more successful in dealing with these challenges. 3D face shape data is used as an independent cue for face recognition and has also been combined with texture to facilitate multimodal face recognition. Additionally, 3D face models have been used for pose correction and calculation of the facial albedo map, which is invariant to illumination. Finally, 3D face recognition has also achieved significant success towards expression invariance by modeling non-rigid surface deformations, removing facial expressions or by using parts-based face recognition. This chapter gives an overview of 3D face recognition and details both well-established and more recent state-of-the-art 3D face recognition techniques in terms of their implementation and expected performance on benchmark datasets.

8.1 Introduction

Measurement of the intrinsic characteristics of the human face is a socially acceptable biometric method that can be implemented in a non-intrusive way [48]. Face recognition from 2D images has been studied extensively for over four decades [96]. However, there has been a great deal of research activity and media publicity around 3D face recognition over the last decade. With the increased availability of affordable 3D scanners, many algorithms have been proposed by researchers and a number of competitions have been arranged for benchmarking their performance. Some commercial products have also appeared on the market, and one can now purchase a range of Commercial Off-The-Shelf (COTS) 3D face recognition systems.

A. Mian (✉)

School of Computer Science and Software Engineering, University of Western Australia, 35 Stirling Highway, Crawley, WA 6009, Australia

e-mail: ajmal.mian@uwa.edu.au

N. Pears

Department of Computer Science, University of York, Deramore Lane, York, YO10 5GH, UK
e-mail: nick.pears@york.ac.uk


This chapter will introduce the main concepts behind 3D face recognition algorithms, give an overview of the literature, and elaborate upon some carefully selected representative and seminal techniques. Note that we do not intend to give a highly comprehensive literature review, due to the size of the field and the tutorial nature of this text.

A 2D image is a function of the scene geometry, the imaging geometry, the scene reflectance and the illumination conditions. The same scene appears completely different from different viewpoints or under different illuminations. For images of human faces, it is known that the variations due to pose and illumination changes are greater than the variations between images of different subjects under the same pose and illumination conditions [3]. Therefore, 2D image-based face recognition algorithms usually struggle to cope with such imaging variations.

On the other hand, a captured face surface¹ much more directly represents the geometry of the viewed scene, and is much less dependent on ambient illumination and the viewpoint (or, equivalently, the facial pose). Therefore, 3D face recognition algorithms have been more successful in dealing with the challenges of varying illumination and pose.

Strictly, however, we observe that 3D imaging is not fully independent of pose, because when imaging with a single 3D camera with its limited field of view, the part of the face imaged is clearly dependent on pose. In other words, self-occlusion is a problem, and research issues concerning the fact that the surface view is partial come into play. Additionally, 3D cameras do have some sensitivity to strong ambient lighting as, in the active imaging case, it is more difficult to detect the projected light pattern, sometimes leading to missing parts in the 3D data. Camera designers often attempt to counter this by the use of optical filters and modulated light schemes. Finally, as pose varies, the orientation of the imaged surface affects the footprint of the projected light and how much light is reflected back to the camera. This varies the amount of noise on the measured surface geometry.

Despite these issues, 3D imaging for face recognition still provides clear benefits over 2D imaging. 3D facial shape is used as an independent cue for face recognition, in multimodal 2D/3D recognition schemes [18, 64], or to assist (pose-correct) 2D image-based face recognition. The latter two are possible because most 3D cameras also capture color-texture, in the form of a standard 2D image, along with the 3D shape, and the data from the two modalities (2D and 3D) is registered.

3D face recognition developments have also achieved significant success towards robust operation in the presence of facial expression variations. This is achieved by building expression-invariant face surface representations [15], by modeling non-rigid surface deformations [58], or by avoiding expression deformations altogether, considering only the more rigid upper parts of the face [64] or regions around the nose [19].

¹ This may be referred to as a 3D model, a 3D scan or a 3D image, depending on the mode of capture and how it is stored, as discussed in Chap. 1, Sect. 1.1. Be careful to distinguish between a specific face model, relating to a single specific 3D capture instance, and a general face model, such as Blanz and Vetter's morphable face model [11], which is generated from many registered 3D face captures.


Data from 3D scanners can be used to construct generative 3D face models offline, where such models can synthesize the face under novel pose and illumination conditions. Using such models, an online recognition system can employ standard 2D image probes, obviating the need for a 3D scanner in the live system. For example, 3D face models have been used to estimate the illumination-invariant facial albedo map. Once a 3D model and face albedo are available, any new image can be rendered synthetically under novel poses and illumination conditions. A large number of such images under different illuminations are used to build the illumination cones of a face [38], which are subsequently used for illumination-invariant face recognition. Similarly, synthesized images under different poses are used to sample the pose space of human faces and to train classifiers for pose-invariant face recognition [38]. In another approach of this general modeling type, Blanz and Vetter [11] built a statistical, morphable model, which is learnt offline from a set of textured 3D head scans acquired with a laser scanner. This model can be fitted to single probe images, and the model parameters of shape and texture are used to represent and recognize faces.
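To make the rendering step concrete, the following sketch synthesizes a face image under a novel light direction using the simple Lambertian model I = albedo · max(0, n·l), the reflectance model underlying the illumination-cone construction [38]. This is a minimal sketch, not the cited authors' implementation; the array names, shapes and the distant-point-light assumption are all illustrative.

```python
import numpy as np

def render_lambertian(albedo, normals, light_dir):
    """Render a face image under a novel illumination direction.

    albedo    : (H, W) per-pixel albedo map recovered from a 3D model
    normals   : (H, W, 3) unit surface normals from the 3D face scan
    light_dir : (3,) direction of a distant point light source

    Returns an (H, W) image under the Lambertian model I = albedo * max(0, n . l).
    (All shapes are assumptions for this sketch.)
    """
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)                        # unit light direction
    shading = np.einsum('hwc,c->hw', normals, l)  # n . l at every pixel
    return albedo * np.clip(shading, 0.0, None)   # clamp back-facing points

# Example: synthesize training images over several light directions,
# e.g. to sample the illumination cone of a face.
if __name__ == "__main__":
    H, W = 64, 64
    albedo = np.random.rand(H, W)                 # placeholder data
    normals = np.dstack([np.zeros((H, W)), np.zeros((H, W)), np.ones((H, W))])
    for l in [(0, 0, 1), (0.5, 0, 1), (0, 0.5, 1)]:
        img = render_lambertian(albedo, normals, l)
```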

Thus there are many different possibilities when using 3D face captures in face recognition systems. Although we examine several of these, the emphasis of this chapter is on recognition from 3D shape data alone, where geometric features are extracted from the 3D face and matched against a dataset to determine the identity of an unknown subject or to verify his/her claimed identity. Even within these 3D facial shape recognition systems, there are many different approaches in the literature, which can be categorized in several ways. One of the main categorizations is into holistic face representations and feature-based face representations. Holistic methods encode the full visible facial surface after its pose has been normalized (to a canonical frontal pose), resampling the surface and its properties to produce a standard-size feature vector. The feature vector could contain raw depth values and/or any combination of surface properties, such as gradients and curvature. This approach has been employed, for example, using depth maps and the associated surface feature maps in nearest neighbor schemes within both Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) derived subspaces [45]. (Note that the counterparts of these methods for 2D images are often called appearance-based methods, since their low-dimensional representation is faithful to the original image.)
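As an illustration of this holistic approach, the sketch below flattens pose-normalized depth maps into fixed-length vectors, learns a PCA subspace from a gallery, and identifies a probe by nearest neighbor in that subspace. This is a minimal sketch under assumed data shapes, not the specific method of [45]; an LDA projection could be substituted for the PCA basis in the same framework.

```python
import numpy as np

def fit_pca(gallery, n_components):
    """Learn a PCA subspace from gallery depth maps.

    gallery : (N, H*W) matrix, one flattened, pose-normalized depth map
              per row (an assumed format for this sketch).
    Returns the mean vector and the top principal directions.
    """
    mean = gallery.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(gallery - mean, full_matrices=False)
    return mean, Vt[:n_components]

def recognize(probe, gallery, labels, mean, basis):
    """1-NN identification of a flattened probe depth map in PCA space."""
    g = (gallery - mean) @ basis.T      # project gallery into the subspace
    p = (probe - mean) @ basis.T        # project the probe
    d = np.linalg.norm(g - p, axis=1)   # distances to all gallery faces
    return labels[np.argmin(d)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gallery = rng.random((20, 64 * 64))  # placeholder "depth maps"
    labels = np.arange(20)
    mean, basis = fit_pca(gallery, n_components=10)
    print(recognize(gallery[3], gallery, labels, mean, basis))  # -> 3
```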

Although holistic methods often do extract features, for example to localize a triplet of fiducial points (landmarks) for pose normalization, and a feature vector (e.g. of depths, normals, curvatures) is extracted for 3D face matching, the term feature-based method usually refers to techniques that encode the facial surface only around extracted points of interest, also known as keypoints. For example, these could be local extrema of curvature on the facial surface, or keypoints whose local properties have been learnt. Structural matching (e.g. graph matching) approaches can then be employed, where the relative spatial relations of features are key to the face matching process [65]. Alternatively, a 'bag of features' approach can be employed, where spatial relations are discarded completely and the content of more complex, 'information rich' features is the key to the face matching process.
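The 'bag of features' idea can be sketched in its simplest form as follows: descriptors (e.g. spin images or 3D-SIFT vectors) are assumed to be already extracted at keypoints, and a probe face is scored against each gallery face by the mean distance from each probe descriptor to its nearest gallery descriptor, ignoring spatial relations entirely. The interface and the scoring rule are illustrative assumptions, not a published method.

```python
import numpy as np

def bag_of_features_distance(probe_desc, gallery_desc):
    """Spatial-relation-free matching of two descriptor sets.

    probe_desc   : (P, D) descriptors extracted at probe keypoints
    gallery_desc : (G, D) descriptors extracted at gallery keypoints

    Each probe descriptor is matched to its nearest gallery descriptor;
    the mean nearest-neighbor distance is the face-to-face dissimilarity.
    """
    # Pairwise Euclidean distances between all probe/gallery descriptors.
    diff = probe_desc[:, None, :] - gallery_desc[None, :, :]
    dists = np.linalg.norm(diff, axis=2)          # (P, G)
    return dists.min(axis=1).mean()

def identify(probe_desc, gallery, labels):
    """Return the label of the gallery face with the smallest distance."""
    scores = [bag_of_features_distance(probe_desc, g) for g in gallery]
    return labels[int(np.argmin(scores))]
```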


An advantage of holistic methods is that they try to use all of the visible facial surface for discrimination. However, when 3D scans are noisy or low resolution, accurate and reliable pose normalization is difficult and feature-based approaches may perform better.

The broad steps involved in a typical 3D face recognition system are as follows:

1. 3D face scan acquisition. A 3D face scan is acquired using one of the techniques described in Chap. 2 (passive techniques) or Chap. 3 (active techniques). Currently, most cameras used for 3D face recognition are active, due to the lack of sufficiently large scale texture on most subjects' facial surfaces.

2. 3D face scan preprocessing. Unlike 2D images from a digital camera, 3D data is usually visibly imperfect: it typically contains spikes, missing data and significant surface noise. These anomalies are removed during preprocessing, and any small holes, both those in the original scan and those created by the removal of data spikes and pits (negative spikes), are filled by some form of surface interpolation. Surface smoothing, for example with a Gaussian filter, is often performed as a final stage (a minimal sketch of this stage appears after this list).

3. Fiducial point localization and pose normalization. Holistic face recognition approaches require pose normalization so that, when a feature vector is generated, specific parts of the feature vector represent properties of specific parts of the facial surface. Generally this is done by localizing a set of three fiducial points (e.g. the inner eye corners and the tip of the nose), mapping them into a canonical frame, and then refining the pose by registering the rigid upper face region to a 3D facial template in canonical pose, using some variant of Iterative Closest Points (ICP) [9]; see the alignment sketch after this list. Note that many feature-based methods avoid this pose normalization stage because, in challenging scenarios, it is often difficult to obtain a sufficiently accurate normalization.

4. Feature vector extraction. A set of features is extracted from the refined 3D scan. These features represent the geometry of the face rather than its 2D color-texture appearance. The choice of features is crucial to system performance and is often a trade-off between invariance properties and the richness of information required for discrimination. Example 'features' include the raw depth values themselves, surface normals, curvatures, spin images [49], 3D adaptations of the Scale-Invariant Feature Transform (SIFT) descriptor [57] and many others (a normal-computation sketch appears after this list).

5. Facial feature vector matching/classification. The final step of feature matching/classification is similar to that of any other pattern classification problem, and most, if not all, of the well-known classification techniques have been applied to 3D face recognition. Examples include k-nearest neighbors (k-NN) in various subspaces, such as those derived from PCA and LDA, neural networks, Support Vector Machines (SVMs), AdaBoost and many others.
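The following sketch illustrates the preprocessing of step 2 on a depth-map representation: spikes and pits are detected as large deviations from a local median, the resulting holes are filled by interpolation, and the surface is finally smoothed with a Gaussian filter. The NaN missing-data convention and the threshold values are assumptions of this sketch, not prescriptions.

```python
import numpy as np
from scipy import ndimage
from scipy.interpolate import griddata

def preprocess_depth_map(z, spike_thresh=5.0, sigma=1.0):
    """Clean a raw depth map (step 2): spike removal, hole filling, smoothing.

    z : (H, W) depth values, with np.nan marking missing data
        (both conventions are assumptions for this sketch).
    """
    z = z.copy()
    # 1. Spike/pit removal: values far from the local median are outliers.
    #    nanmedian ignores the already-missing values in each window.
    med = ndimage.generic_filter(z, np.nanmedian, size=5)
    z[np.abs(z - med) > spike_thresh] = np.nan

    # 2. Hole filling: interpolate missing values from valid neighbors.
    valid = ~np.isnan(z)
    yy, xx = np.mgrid[0:z.shape[0], 0:z.shape[1]]
    z = griddata((yy[valid], xx[valid]), z[valid], (yy, xx), method='linear')
    # 'linear' leaves NaNs outside the convex hull; fall back to nearest.
    still = np.isnan(z)
    if still.any():
        z[still] = griddata((yy[~still], xx[~still]), z[~still],
                            (yy[still], xx[still]), method='nearest')

    # 3. Gaussian smoothing to suppress residual surface noise.
    return ndimage.gaussian_filter(z, sigma=sigma)
```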
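Step 3's coarse alignment can be illustrated with the standard least-squares rigid transform (the Kabsch solution) that maps three detected fiducial points onto their positions in a canonical frontal frame; a practical system would then refine the result with an ICP variant [9]. The canonical landmark coordinates below are invented for illustration only.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ~ dst_i.

    src, dst : (N, 3) corresponding 3D points (N >= 3, not collinear).
    Kabsch algorithm via SVD of the cross-covariance matrix.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical canonical positions (mm) of the inner eye corners and nose tip.
CANONICAL = np.array([[-16.0, 30.0, -20.0],
                      [ 16.0, 30.0, -20.0],
                      [  0.0,  0.0,   0.0]])

def normalize_pose(points, landmarks):
    """Map a face scan into the canonical frame using 3 fiducial points.

    points    : (M, 3) raw scan vertices
    landmarks : (3, 3) detected inner eye corners and nose tip, in the
                same order as CANONICAL. ICP refinement against a
                canonical template would typically follow this step.
    """
    R, t = rigid_transform(landmarks, CANONICAL)
    return points @ R.T + t
```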
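Finally, as one concrete instance of the geometric features named in step 4, the short sketch below computes per-pixel unit surface normals from a depth map by finite differences; the grid-spacing parameter is an assumption of the sketch. Curvatures could be derived similarly from second-order differences.

```python
import numpy as np

def depth_map_normals(z, spacing=1.0):
    """Per-pixel unit surface normals from an (H, W) depth map.

    spacing : grid step in the same units as z (assumed for this sketch).
    """
    dz_dy, dz_dx = np.gradient(z, spacing)  # derivatives along rows, columns
    # The surface (x, y, z(x, y)) has normal proportional to (-dz/dx, -dz/dy, 1).
    n = np.dstack([-dz_dx, -dz_dy, np.ones_like(z)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)
```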