
Fig. 8.6 From left to right: a 3D capture of a face with spikes in the point cloud; shaded view with holes and noise; final preprocessed 3D data after cropping, removal of spikes, hole filling and noise removal. Figure courtesy of [64]

Many 2D image denoising methods, such as Gaussian filters, have been adapted so that they can be applied to 3D meshes. One example of this is Bilateral Mesh Denoising [35], which is based on shifting mesh vertices along their normal directions.
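As a concrete illustration, the following sketch shows the vertex-update step in the spirit of bilateral mesh denoising. The neighbor-list representation and the parameter names sigma_c and sigma_s are illustrative assumptions, not details taken from [35].

    import numpy as np

    def bilateral_denoise_step(verts, normals, neighbors, sigma_c=1.0, sigma_s=0.2):
        """One bilateral denoising pass: move each vertex along its normal.

        verts:     (n, 3) vertex positions
        normals:   (n, 3) unit per-vertex normals
        neighbors: list where neighbors[i] is an index array of vertices adjacent to i
        """
        new_verts = verts.copy()
        for i, nbrs in enumerate(neighbors):
            offsets = verts[nbrs] - verts[i]          # vectors to neighboring vertices
            t = np.linalg.norm(offsets, axis=1)       # spatial distances
            h = offsets @ normals[i]                  # signed offsets along the normal
            w_c = np.exp(-t**2 / (2 * sigma_c**2))    # closeness weight
            w_s = np.exp(-h**2 / (2 * sigma_s**2))    # feature-preserving similarity weight
            w = w_c * w_s
            d = np.sum(w * h) / (np.sum(w) + 1e-12)   # weighted displacement
            new_verts[i] = verts[i] + d * normals[i]  # shift along the normal only
        return new_verts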

Figure 8.6 shows a face scan with spikes, holes and noise before and after preprocessing.

8.5.5 Fiducial Point Localization and Pose Correction

The pose of the face can vary between scans, both across different subjects and for the same subject, even when subjects are cooperative. Therefore, pose correction is a necessary preprocessing step for holistic approaches that require normalized resampling of the facial surface in order to generate a feature vector. (Such feature vectors are often subsequently mapped into a subspace, for example in the PCA- and LDA-based methods described later.) Pose correction may also be necessary for other algorithms that rely on features which are not inherently pose-invariant.

A common approach to pose correction uses fiducial points on the 3D face: three non-collinear points are needed to normalize the pose to a canonical form, as sketched below. Often these points are identified manually; however, automatic detection of such points is desirable, particularly for online verification and identification processes.
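To make this concrete, the following sketch builds an orthonormal frame from three non-collinear fiducial points and transforms the scan into it. The choice of landmarks (the two inner eye corners and the nose tip) and the axis conventions are illustrative assumptions.

    import numpy as np

    def canonical_pose(points, p_left_eye, p_right_eye, p_nose_tip):
        """Rigidly transform a point cloud (n, 3) into a canonical frame
        defined by three non-collinear fiducial points."""
        x = p_right_eye - p_left_eye
        x /= np.linalg.norm(x)                     # x-axis: across the eyes
        v = p_nose_tip - 0.5 * (p_left_eye + p_right_eye)
        v -= (v @ x) * x                           # remove the component along x
        y = v / np.linalg.norm(v)                  # y-axis: toward the nose tip
        z = np.cross(x, y)                         # z-axis: out of the face
        R = np.stack([x, y, z])                    # rows form the rotation matrix
        return (points - p_nose_tip) @ R.T         # origin placed at the nose tip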

The shape index, derived from the principal curvatures, has been used to automatically detect the inside eye corners and the nose tip for facial pose correction [59]. Although a minimum of three fiducial points is sufficient to correct the pose, detecting these points such that all three are correctly identified and localized with high repeatability has proved a challenging research problem. The problem is more difficult in the presence of varying facial expression, which can change the local shape around a point. Worse still, as pose changes, one of the three selected fiducial points may become occluded; for example, the nose bridge occludes an inner eye corner as the head is turned from a frontal view towards a profile view. To counter this, some approaches extract a large number of fiducial points so that three or more are always visible [24].
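For reference, the shape index of Koenderink and van Doorn maps the principal curvatures κ1 ≥ κ2 to a single scalar S = (2/π) arctan((κ1 + κ2)/(κ1 − κ2)) in [−1, 1]; variants rescaled to [0, 1] are also common in the literature. A minimal implementation:

    import numpy as np

    def shape_index(k1, k2):
        """Koenderink-van Doorn shape index in [-1, 1], assuming k1 >= k2.
        Values near +1/-1 correspond to cap/cup-like surface patches;
        arctan2 handles the umbilic case k1 == k2 gracefully."""
        return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)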

In addition to a shape scan, most 3D cameras capture a registered color-texture map of the face (i.e. a standard 2D image, where the color associated with each 3D point is known). Fiducial point detection can therefore be performed on the basis of the 2D image alone, or on both the 2D and 3D data. Gupta et al. [41] detected 10 anthropometric fiducial points to calculate cranio-facial proportions [32]. Three points were detected using the 3D face alone and the rest were detected using both 2D and 3D data.

Mian et al. [64] performed pose correction based on a single automatically detected point, the nose tip. The 3D face was cropped using a sphere of fixed radius centered at the nose tip, and its pose was then corrected by iteratively applying PCA and resampling the face on a uniform grid. This process also filled the holes (due to self-occlusions) that were exposed during pose correction.

Pears et al. [72] performed pose correction by detecting the nose tip using pose-invariant features based on spherical sampling of a radial basis function (RBF) representation of the facial surface. A second sphere, centered on the detected nose tip, intersected the facial surface in a space curve, and the tangential curvature of this curve was used in a correlation scheme to normalize facial pose. The interpolating properties of RBF representations gave all steps in this approach good immunity to missing parts, although some steps in the method are computationally expensive.
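The iterative PCA-based correction of [64] can be sketched as follows. The crop radius and iteration count here are illustrative, and the uniform-grid resampling and hole-filling steps of the full method are omitted.

    import numpy as np

    def pca_pose_correct(points, nose_tip, radius=80.0, n_iter=10):
        """Iteratively align a cropped face with its principal axes (sketch).
        points: (n, 3) scan; nose_tip: (3,) detected nose tip; radius in mm."""
        crop = points[np.linalg.norm(points - nose_tip, axis=1) <= radius]
        crop = crop - crop.mean(axis=0)
        for _ in range(n_iter):
            # Singular vectors of the centered data give the principal axes.
            _, _, vt = np.linalg.svd(crop - crop.mean(axis=0), full_matrices=False)
            crop = crop @ vt.T   # rotate so the principal axes align with x, y, z
            # The full method of [64] resamples the crop on a uniform grid here,
            # which also fills holes exposed by the rotation (omitted). Axis
            # sign ambiguities must also be resolved in practice (omitted).
        return crop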

Another common pose correction approach is to register all 3D faces to a common reference face using the Iterative Closest Point (ICP) algorithm [9]. The reference is usually an average face model, in canonical pose, calculated from training data. Sometimes only the rigid parts of the face are used in this model, such as the upper face area containing the nose, eyes and forehead. ICP can find the optimal registration only if the two surfaces are already approximately registered. Therefore, the query face is first coarsely aligned with the reference face, either by zero-meaning both scans or by using fiducial points, before ICP is applied to refine the registration [59]. In refining the pose, ICP establishes correspondences between the closest points of the two surfaces and calculates the rigid transformation (rotation and translation) that minimizes the mean-squared distance between the corresponding points. These two steps are repeated until the change in mean-squared error falls below a threshold or a maximum number of iterations is reached. When registering a probe face to an average face, the surfaces are dissimilar; hence there may be several comparable local minima, and ICP may converge to a different minimum each time a query face is registered to the reference. The success of ICP therefore depends on the initial coarse registration and on the similarity between the two surfaces.
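A minimal point-to-point ICP sketch is given below. It assumes a coarse pre-alignment and uses the standard SVD-based closed form for the rigid transformation minimizing the mean-squared distance; the tolerance and iteration limit are illustrative.

    import numpy as np
    from scipy.spatial import cKDTree

    def icp(probe, reference, n_iter=50, tol=1e-6):
        """Register `probe` to `reference` (both (n, 3) arrays), assuming
        they are already coarsely aligned (e.g. both zero-meaned)."""
        tree = cKDTree(reference)
        P = probe.copy()
        prev_err = np.inf
        for _ in range(n_iter):
            dists, idx = tree.query(P)             # closest-point correspondences
            Q = reference[idx]
            mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
            H = (P - mu_p).T @ (Q - mu_q)          # cross-covariance matrix
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T                     # optimal rotation (no reflection)
            t = mu_q - R @ mu_p                    # optimal translation
            P = P @ R.T + t                        # apply the rigid transform
            err = np.mean(dists**2)
            if abs(prev_err - err) < tol:          # stop when the MSE stops improving
                break
            prev_err = err
        return P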

As a final note on pose correction, Blanz et al. [12] used a morphable model in a unified framework to simultaneously optimize pose, shape, texture and illumination. The algorithm relies on manual identification of seven fiducial points and uses the Phong lighting model [6]; the recovered shape and texture parameters can then be used for face recognition. This would be an expensive choice if pose correction alone is the aim, and it is not fully automatic.

8.5.6 Spatial Resampling

Unlike 2D images, 3D scans have an absolute scale, which means that the distance between any two fiducial points (landmarks), such as the inner corners of the eyes, can be measured in absolute units (e.g. millimeters). Thus scanning the same face from near or far only alters the spatial sampling rate; the measured distance between landmarks should vary very little, at least in scans of reasonable quality and resolution.

However, many face recognition algorithms require the face surface, or parts of the face surface, to be sampled in a uniform fashion, which requires some form of spatial sampling normalization or spatial resampling via an interpolation process. Basic interpolation processes usually involve some weighted average of neighbors, while more sophisticated schemes employ various forms of implicit or explicit surface fitting.

Assuming that we have normalized the pose of the face (or facial part), we can place a standard 2D resampling grid in the x, y plane (for example, centered on the nose tip) and resample the facial surface depth orthogonally to generate a resampled depth map. Many face recognition schemes need a feature vector of standard size, and a fixed resampling grid of size p × q provides this, giving a vector of dimension m = pq. For example, in [64], all 3D faces were sampled on a uniform x, y grid of 161 × 161 points with a planar distance of 1 mm between adjacent pixels, using cubic interpolation. Although the sampling rate was uniform in this case, different subjects had different numbers of points sampled on their faces because of their different facial sizes. In another approach, Pears et al. [72] used RBFs as implicit surface representations in order to resample 3D face scans.
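A sketch of such a resampling step is shown below. The grid parameters follow the example quoted above (161 × 161 points at 1 mm spacing, centered on the nose tip at the origin), but the implementation itself is an assumed one, not the code of [64].

    import numpy as np
    from scipy.interpolate import griddata

    def resample_depth(points, size=161, spacing=1.0):
        """Resample a pose-corrected scan (n, 3) onto a uniform depth map.
        Assumes the nose tip is at the origin and +z points out of the face."""
        half = (size - 1) / 2.0 * spacing
        xs = np.linspace(-half, half, size)
        gx, gy = np.meshgrid(xs, xs)
        # Cubic interpolation of z over the (x, y) plane; grid cells outside
        # the scan's convex hull come back as NaN and need hole filling.
        depth = griddata(points[:, :2], points[:, 2], (gx, gy), method='cubic')
        return depth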

An alternative way of resampling is to identify three non-collinear fiducial points on each scan and resample such that the number of sample points between the fiducial points is constant. However, this discards information contained in the absolute scale of the face, which is often useful for discriminating between subjects.

8.5.7 Feature Extraction on Facial Surfaces

Depth maps may not be the ideal representations for 3D face recognition because they are quite sensitive to pose. Although the pose of 3D faces can be normalized with better accuracy compared to 2D images, the normalization is never perfect. For this reason it is usually preferable to extract features that are less sensitive to pose before applying holistic approaches. The choice of features extracted is crucial to the system performance and is often a trade-off between invariance properties and the richness of information required for discrimination. Example ‘features’ include the raw depth values themselves, normals, curvatures, spin images [49], 3D adaptations of the Scale-Invariant Feature Transform (SIFT) descriptor [57] and many others. More detail on features can be found in Chaps. 5 and 7.
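As a simple example of such a feature, per-pixel surface normals, which are less sensitive to residual pose error than raw depth values, can be estimated from a resampled depth map by finite differences. The sketch below assumes a uniform grid of known spacing.

    import numpy as np

    def depth_to_normals(depth, spacing=1.0):
        """Per-pixel unit normals from a depth map z(x, y) via finite differences.
        The (unnormalized) normal of the graph z(x, y) is (-dz/dx, -dz/dy, 1)."""
        dzdy, dzdx = np.gradient(depth, spacing)
        n = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
        return n / np.linalg.norm(n, axis=2, keepdims=True)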