


RESEARCH

Open Access

Distant-talking speaker identification by generalized spectral subtraction-based dereverberation and its efficient computation

Zhaofeng Zhang1, Longbiao Wang1*, and Atsuhiko Kai2

Abstract

Previously, a dereverberation method based on generalized spectral subtraction (GSS) using multi-channel least mean-squares (MCLMS) was proposed. Speech recognition experiments showed that this method achieved a significant improvement over conventional methods. In this paper, we apply this method to distant-talking (far-field) speaker recognition. However, for far-field speech, the GSS-based dereverberation method using clean speech models degrades speaker recognition performance, possibly because GSS-based dereverberation introduces some distortion between clean speech and dereverberant speech. We address this problem by training speaker models on dereverberant speech obtained by suppressing reverberation from arbitrary artificial reverberant speech. Furthermore, we propose an efficient method for combining the likelihoods of dereverberant speech computed with multiple compensation parameter sets, which addresses the problem of determining optimal compensation parameters for GSS. We report the results of a speaker recognition experiment performed on large-scale far-field speech recorded in reverberant environments different from the training environments. The proposed GSS-based dereverberation method achieves a recognition rate of 92.2%, which compares well with conventional cepstral mean normalization combined with delay-and-sum beamforming using a clean speech model (49.0%) and a reverberant speech model (88.4%). We also compare the proposed method with another dereverberation technique, multi-step linear prediction-based spectral subtraction (MSLP-GSS); the proposed method exceeds its recognition rate of 90.6%. The use of multiple compensation parameter sets further improves performance, giving our approach a recognition rate of 93.6%. Finally, we evaluate this method in a real environment using the optimal compensation parameters estimated from an artificial environment. The results show a recognition rate of 87.8%, compared with 72.5% for delay-and-sum beamforming using a reverberant speech model.

Keywords: Hands-free speaker recognition; Blind dereverberation; Multi-channel least mean-squares; Generalized spectral subtraction; Gaussian Mixture Model
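To make the core operation concrete, the following is a minimal sketch of generalized spectral subtraction applied to a single spectral frame. It is not the paper's implementation: the function name, parameter names (`alpha`, `beta`, `p`), and the assumption that an estimate of the late-reverberation spectrum is already available (in the paper it is derived via MCLMS-estimated channel responses) are all illustrative.

```python
import numpy as np

def gss_dereverb(y_mag, reverb_mag, alpha=1.0, beta=0.01, p=2.0):
    """Sketch of generalized spectral subtraction for one frame.

    y_mag      : magnitude spectrum of the observed reverberant frame
    reverb_mag : estimated magnitude spectrum of the (late) reverberation
    alpha      : over-subtraction factor
    beta       : spectral floor, keeps the result from going negative
    p          : exponent (p=2 corresponds to power-spectral subtraction)
    """
    # Subtract the reverberation estimate in the p-th-power domain,
    # flooring at a small fraction of the observed power.
    powered = np.maximum(y_mag ** p - alpha * reverb_mag ** p,
                         beta * y_mag ** p)
    # Return to the magnitude domain.
    return powered ** (1.0 / p)
```

The dereverberated magnitude spectrum would then be recombined with the observed phase and inverted to the time domain before feature extraction; the spectral floor `beta` is what prevents musical-noise artifacts from negative subtraction results.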

1 Introduction

Because of reverberation in far-field environments, the recognition performance for distant-talking speech/speakers is drastically degraded. Current approaches to automatic speech recognition (ASR) and speaker recognition that are robust to reverberation can be classified as speech signal processing (pre-processing), robust feature extraction, or model adaptation [1-4].

*Correspondence: [email protected]
1 Nagaoka University of Technology, Nagaoka 940-2188, Japan
Full list of author information is available at the end of the article

In this pape