Optimization of Color Conversion for Face Recognition
Creed F. Jones III
Bradley Department of Electrical and Computer Engineering, Virginia Polytechnic Institute and State University, Blacksburg, VA 24061-0111, USA
Department of Computer Science, Seattle Pacific University, Seattle, WA 98119-1957, USA
Email: [email protected]

A. Lynn Abbott
Bradley Department of Electrical and Computer Engineering, Virginia Polytechnic Institute and State University, Blacksburg, VA 24061-0111, USA
Email: [email protected]

Received 5 November 2002; Revised 16 October 2003

Abstract: This paper concerns the conversion of color images to monochromatic form for the purpose of human face recognition. Many face recognition systems operate using monochromatic information alone even when color images are available. In such cases, simple color transformations are commonly used that are not optimal for the face recognition task. We present a framework for selecting the transformation from face imagery using one of three methods: Karhunen-Loève analysis, linear regression of color distribution, and a genetic algorithm. Experimental results are presented for both the well-known eigenface method and for extraction of Gabor-based face features to demonstrate the potential for improved overall system performance. Using a database of 280 images, our experiments using these methods resulted in performance improvements of approximately 4% to 14%.

Keywords and phrases: face recognition, color image analysis, color conversion, Karhunen-Loève analysis.
1. INTRODUCTION
Most single-view face recognition systems operate using intensity (monochromatic) information alone. This is true even for systems that accept color imagery as input. The reason for this is not that multispectral data is lacking in information content, but often because of practical considerations—difficulties associated with illumination and color balancing, for example, as well as compatibility with legacy systems. Associated with this is a lack of color image databases with which to develop and test new algorithms. Although work is in progress that will eventually aid in color-based tasks (e.g., through color constancy [1]), those efforts are still in the research stage.

When color information is present, most of today's face recognition systems convert the image to monochromatic form using simple transformations. For example, a common mapping [2, 3] produces an intensity value Ii by taking the average of the red, green, and blue (RGB) values (Ir, Ig, and Ib, respectively):

    Ii(x, y) = (Ir(x, y) + Ig(x, y) + Ib(x, y)) / 3.    (1)
The resulting image is then used for feature extraction and analysis.
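The averaging mapping of equation (1) can be sketched in a few lines of NumPy. This is an illustrative implementation only; the function name and the floating-point output convention are our own choices, not part of the paper.

```python
import numpy as np

def rgb_to_gray_average(img):
    """Convert an RGB image of shape (H, W, 3) to monochrome by simple
    channel averaging, as in equation (1): I(x, y) = (R + G + B) / 3.
    Returns a floating-point image of shape (H, W)."""
    img = np.asarray(img, dtype=np.float64)
    return img.mean(axis=2)

# Example: a single pixel with R=30, G=60, B=90 averages to 60.
pixel = np.array([[[30, 60, 90]]], dtype=np.float64)
gray = rgb_to_gray_average(pixel)  # -> array([[60.]])
```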
We argue that more effective system performance is possible if a color transformation is chosen that better matches the task at hand. For example, the mapping in (1) implicitly assumes a uniform distribution of color values over the entire color space. For a task such as face recognition, color values tend to be more tightly confined to a small portion of the color space, and it is possible to exploit this narrow c
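One way to exploit such a concentrated color distribution, in the spirit of the Karhunen-Loève approach named in the abstract, is to take the conversion weights from the first principal component of a sample of face pixels. The sketch below is our own illustration under that assumption; it does not reproduce the paper's actual optimization procedure, and the normalization and sign conventions are arbitrary choices.

```python
import numpy as np

def kl_conversion_weights(face_pixels):
    """Illustrative sketch: derive a 3-element RGB weight vector as the
    principal Karhunen-Loeve basis vector (first principal component) of
    a sample of face pixels, given as an (N, 3) array of RGB values."""
    X = np.asarray(face_pixels, dtype=np.float64)
    cov = np.cov(X - X.mean(axis=0), rowvar=False)   # 3x3 covariance
    vals, vecs = np.linalg.eigh(cov)                 # ascending eigenvalues
    w = vecs[:, np.argmax(vals)]                     # dominant eigenvector
    if w.sum() < 0:                                  # fix an arbitrary sign
        w = -w                                       # so brighter maps higher
    return w / np.abs(w).sum()                       # normalize the weights

def convert(img, w):
    """Apply a linear RGB-to-gray conversion I(x, y) = w . [R, G, B]."""
    return np.tensordot(np.asarray(img, dtype=np.float64), w, axes=([2], [0]))
```

The resulting weights replace the uniform (1/3, 1/3, 1/3) of equation (1) with a direction along which the sampled face colors vary most, which is the intuition behind tailoring the transformation to the task.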