Reliability-Based Decision Fusion in Multimodal Biometric Verification Systems
Research Article

Krzysztof Kryszczuk, Jonas Richiardi, Plamen Prodanov, and Andrzej Drygajlo
Signal Processing Institute, Swiss Federal Institute of Technology, 1015 Lausanne, Switzerland

Received 18 May 2006; Revised 1 February 2007; Accepted 31 March 2007
Recommended by Hugo Van Hamme

We present a methodology for reliability estimation in the multimodal biometric verification scenario. Reliability estimation has been shown to be an efficient and accurate way of predicting and correcting erroneous classification decisions in both unimodal (speech, face, online signature) and multimodal (speech and face) systems. While initial research results indicate the high potential of the proposed methodology, the performance of reliability estimation in a multimodal setting has not been sufficiently studied or evaluated. In this paper, we demonstrate the advantages of using unimodal reliability information to perform efficient biometric fusion of two modalities. We further show the presented method to be superior to state-of-the-art multimodal decision-level fusion schemes. The experimental evaluation presented in this paper is based on the popular benchmarking bimodal BANCA database.

Copyright © 2007 Krzysztof Kryszczuk et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. INTRODUCTION
Biometric verification systems deployed in a real-world environment often have to contend with adverse conditions of biometric signal acquisition, which can be very different from the carefully controlled enrollment conditions. Examples of such conditions include additive acoustic noise that contaminates the speech signal, or nonuniform directional illumination that alters the appearance of a face in a two-dimensional image. Methods of signal conditioning and normalization, as well as tailor-made feature extraction schemes, help reduce the recognition errors caused by degraded signal quality; however, they do not eliminate the problem entirely (see, e.g., [1, 2]).

Combining independent biometric modalities has proved to be an effective way of improving the accuracy of biometric verification systems [3]. Fusing the discriminative powers of independent biometric traits, which are not equally affected by the same environmental conditions, affords robustness to degradations of the acquired biometric signals. Common methods of classifier fusion at the decision level employ a prediction of the average error of each unimodal classifier, typically obtained by resampling the training data [3, 4]. This average modality error information can then be used to weight the unimodal classifier decisions during the fusion process. The drawback of this approach is that it does not take into account the fact that individual decisions depend on the acquisition conditions of the data presented to the classifiers.
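To make the decision-level fusion scheme discussed above concrete, the following is a minimal illustrative sketch of error-weighted decision fusion, in which each unimodal accept/reject decision is weighted by a log-odds function of that modality's estimated average error rate. The function name, the +1/-1 decision encoding, and the example error rates are assumptions for illustration only, not the authors' implementation or the BANCA protocol.

```python
import math

def fuse_decisions(decisions, error_rates):
    """Weighted vote over binary accept (+1) / reject (-1) decisions.

    decisions   -- one +1 or -1 decision per unimodal classifier
    error_rates -- estimated average error rate per modality (0 < e < 0.5),
                   e.g., obtained by resampling the training data
    """
    score = 0.0
    for d, e in zip(decisions, error_rates):
        # Log-odds weight: a more reliable modality (lower average
        # error) contributes more strongly to the fused decision.
        w = math.log((1.0 - e) / e)
        score += w * d
    return 1 if score > 0 else -1

# Example: the face classifier accepts (10% average error) while the
# speech classifier rejects (30% average error); the more reliable
# face modality dominates the fused decision.
print(fuse_decisions([+1, -1], [0.10, 0.30]))  # -> 1 (accept)
```

Note that the weights here are fixed per modality: exactly the limitation raised above, since the scheme cannot react to the acquisition conditions of an individual test sample.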