Face Recognition Using a Unified 3D Morphable Model



1 Anyvision, Queen's Road, Belfast BT3 9DT, UK
[email protected]
2 CVSSP, University of Surrey, Guildford GU2 7XH, UK
3 Beijing University of Posts and Telecommunications, Beijing 100876, China
4 ECIT, Queen's University of Belfast, Belfast BT3 9DT, UK
http://www.anyvision.co

Abstract. We address the problem of 3D-assisted 2D face recognition in scenarios where the input image is subject to degradations or exhibits intra-personal variations not captured by the 3D model. The proposed solution involves a novel approach to learn a subspace spanned by perturbations caused by the missing modes of variation and image degradations, using 3D face data reconstructed from 2D images rather than 3D capture. This is accomplished by modelling the difference in the texture map of the 3D-aligned input and reference images. A training set of these texture maps then defines a perturbation space, which can be represented using PCA bases. Assuming that the image perturbation subspace is orthogonal to the 3D face model space, these additive components can be recovered from an unseen input image, resulting in an improved fit of the 3D face model. The linearity of the model leads to efficient fitting. Experiments show that our method achieves very competitive face recognition performance on the Multi-PIE and AR databases. We also present baseline face recognition results on a new data set exhibiting combined pose and illumination variations as well as occlusion.
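The perturbation-subspace idea in the abstract can be sketched in a few lines of NumPy: learn PCA bases from a training set of vectorised texture-map differences, then project an unseen residual onto those bases to recover the additive perturbation component. This is an illustrative sketch under stated assumptions, not the paper's implementation; all function names and the random stand-in data are hypothetical.

```python
import numpy as np

def learn_perturbation_basis(diff_maps, n_components):
    """diff_maps: (N, D) matrix of vectorised texture-map differences
    between 3D-aligned input and reference images (hypothetical input)."""
    mean = diff_maps.mean(axis=0)
    centred = diff_maps - mean
    # PCA via SVD; rows of vt are orthonormal perturbation bases.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:n_components]

def recover_perturbation(residual, mean, basis):
    """Project an unseen residual onto the learned perturbation subspace,
    giving the additive component to explain away during model fitting."""
    coeffs = basis @ (residual - mean)
    return mean + basis.T @ coeffs

# Stand-in data: 100 training difference maps of dimension 64.
rng = np.random.default_rng(0)
train = rng.normal(size=(100, 64))
mean, basis = learn_perturbation_basis(train, n_components=10)
approx = recover_perturbation(train[0], mean, basis)
print(approx.shape)  # (64,)
```

The orthogonality assumption in the abstract is what lets this projection be computed independently of the 3D face model's own coefficients, keeping the overall fit linear.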

Keywords: 3D morphable model · Face recognition

1 Introduction

3D-assisted 2D face recognition has been attracting increasing attention because it can be used for pose-invariant face matching. This requires fitting a 3D face model to the input image, and using the fitted model to align the input and reference images for matching. As 3D facial shapes are intrinsically invariant to pose and illumination, the fitted shape also provides an invariant representation that can be used directly for recognition. The use of a face prior has been demonstrated to offer impressive performance on images of faces subject to wide pose variations, even outperforming deep learning [1,2].

© Springer International Publishing AG 2016. B. Leibe et al. (Eds.): ECCV 2016, Part VIII, LNCS 9912, pp. 73–89, 2016. DOI: 10.1007/978-3-319-46484-8_5
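The linear face prior described above can be sketched as follows: a morphable model represents a shape as a mean plus a linear combination of PCA basis vectors, and fitting reduces to recovering the coefficients, which then serve as a pose- and illumination-invariant representation. This is a minimal illustration, assuming an orthonormal shape basis and noise-free observations; it is not the paper's fitting pipeline.

```python
import numpy as np

def reconstruct_shape(mean_shape, shape_basis, coeffs):
    """shape_basis: (3V, K) PCA basis; coeffs: (K,).
    Returns a 3V-vector of stacked vertex coordinates."""
    return mean_shape + shape_basis @ coeffs

def fit_coeffs(observed, mean_shape, shape_basis):
    """Least-squares projection of an observed shape onto the model."""
    coeffs, *_ = np.linalg.lstsq(shape_basis, observed - mean_shape,
                                 rcond=None)
    return coeffs

# Toy model: K = 5 basis vectors over 10 vertices (30 coordinates).
rng = np.random.default_rng(1)
K, V3 = 5, 30
mean_shape = rng.normal(size=V3)
shape_basis = np.linalg.qr(rng.normal(size=(V3, K)))[0]  # orthonormal cols
true_coeffs = rng.normal(size=K)
observed = reconstruct_shape(mean_shape, shape_basis, true_coeffs)
est = fit_coeffs(observed, mean_shape, shape_basis)
print(np.allclose(est, true_coeffs))  # True
```

In practice the observed shape is only seen through a 2D projection with unknown pose and lighting, which is what makes the full fitting problem, discussed next, much harder than this linear projection.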


G. Hu et al.

Most popular are 3D morphable face models, which represent 3D faces in a PCA subspace. The 3D face models proposed in the literature capture and represent different modes of variation. Some focus solely on 3D shape (3DSM) [3–6]. Others (3DMM) also model the skin texture [7–9], or even facial expression (E-3DMM) [10,11]. When fitting a 3DMM to an input image, it is essential to estimate the scene illumination, as skin texture and lighting are intrinsically entwined and need to be separated. Fitting a 3D model to a 2D image becomes challenging when the input image exhibits intra-personal variations not captured by the 3D model, or the image is corrupted in some way. In this work, we use the term ‘intra-personal’ to represent any vari