Detail-Enhanced Cross-Modality Face Synthesis via Guided Image Filtering
Abstract. Face images in different modalities are often encountered in many applications, such as face images in photo and sketch styles, or under visible light and near-infrared illumination. As an active yet challenging task, cross-modality face synthesis aims to transform face images between modalities. Many existing methods successfully recover global features for a given photo but fail to capture fine-scale details in the synthesis results. In this paper, we propose a two-step algorithm to tackle this problem. First, for the input patch centered on each pixel, KNN is used to select the K most similar patches in the training set, and a combination of these patches is computed as the initial result. In the second step, guided image filtering is applied to the initial result with the test photo as guidance, so that fine-scale details are transferred to the result via a local linear transformation. Comparison experiments on public datasets demonstrate that the proposed method is superior to state-of-the-art methods in simultaneously preserving global features and enhancing fine-scale details.

Keywords: Photo-sketch synthesis · Guided image filtering · KNN · Local linear transformation
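To make the second step concrete, the following is a minimal sketch of a standard grayscale guided image filter applied to an initial synthesis result with the test photo as guidance, so that details are transferred through a local linear model q = a·I + b. The window radius r and regularizer eps are illustrative defaults, not settings taken from the paper.

```python
# Minimal guided-image-filtering sketch (grayscale, float images in [0, 1]).
# Assumed illustrative parameters: window radius r, regularizer eps.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, r=8, eps=1e-2):
    """Filter `src` (initial synthesis) using `guide` (test photo)."""
    box = lambda x: uniform_filter(x, size=2 * r + 1)  # local box average

    mean_I = box(guide)
    mean_p = box(src)
    var_I = box(guide * guide) - mean_I * mean_I       # local variance of guide
    cov_Ip = box(guide * src) - mean_I * mean_p        # local covariance guide/src

    a = cov_Ip / (var_I + eps)   # local linear coefficient
    b = mean_p - a * mean_I      # local offset

    # Average the coefficients over all windows covering each pixel, then apply
    # the local linear transformation q = a * I + b to transfer guide details.
    return box(a) * guide + box(b)

# Usage with hypothetical arrays:
# enhanced = guided_filter(photo, initial_result, r=8, eps=1e-2)
```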
1 Introduction
In many cases, we can obtain face image pairs of the same person in different modalities, such as face images in photo or sketch style, or under visible light (VIS) or near-infrared (NIR) illumination. For example, since photos of criminal suspects are often difficult to obtain, sketches of suspects are usually drawn by artists to help track them down. However, drawing face sketches is time consuming and limited by the artist's skill. Face images captured under NIR are of good quality and unaffected by visible light in the environment, so face recognition [1] using NIR images is advantageous. Thus, automatic cross-modality face synthesis plays an important role in law enforcement. Besides, face sketches can also be applied to digital entertainment [2,3,4].

Studies on the cross-modality face synthesis problem have been carried out for several years, and a number of algorithms have been proposed. Among these existing approaches, several are representative. Linear subspace learning-based approaches [5,6,7] are based on the assumption that each output patch can be generated
by using a linear combination of the selected nearest neighbors, but the synthesis results tend to be over-smoothed and lose some details. Sparse representation-based methods are also an important branch [8,9]: image patches can be sparsely represented over an over-complete dictionary of atoms. Although sparse coding and dictionary learning are effective for this problem, learning the dictionary and the mappings between dictionaries in different modalities takes excessive time. In this paper, we propose a detail-enhanced approach for cross-modality face synthesis.
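As a concrete illustration of the nearest-neighbor combination idea behind the first step, the sketch below queries the K most similar training photo patches for a flattened test patch and blends the corresponding training sketch patches. The inverse-distance weighting is an assumed, illustrative choice; in patch-based methods the combination weights are often obtained by least-squares reconstruction instead.

```python
# Illustrative KNN patch combination: train_photos and train_sketches are
# (N, d) arrays of flattened, aligned photo/sketch patch pairs (assumed layout).
import numpy as np
from scipy.spatial import cKDTree

def build_index(train_photos):
    # Build the KD-tree once over all training photo patches.
    return cKDTree(train_photos)

def synthesize_patch(test_patch, tree, train_sketches, k=5):
    """Estimate the sketch patch for one flattened test photo patch."""
    dist, idx = tree.query(test_patch, k=k)

    # Inverse-distance weights (illustrative), normalized to sum to one.
    w = 1.0 / (dist + 1e-8)
    w /= w.sum()

    # Linear combination of the K corresponding training sketch patches.
    return (w[:, None] * train_sketches[idx]).sum(axis=0)
```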