Robust Distant Speech Recognition by Combining Multiple Microphone-Array Processing with Position-Dependent CMN

Longbiao Wang, Norihide Kitaoka, and Seiichi Nakagawa
Department of Information and Computer Sciences, Toyohashi University of Technology, Toyohashi-shi 441-8580, Japan

Received 29 December 2005; Revised 20 May 2006; Accepted 11 June 2006

We propose robust distant speech recognition by combining multiple microphone-array processing with position-dependent cepstral mean normalization (CMN). In the recognition stage, the system estimates the speaker position and adopts the compensation parameters estimated a priori for that position. The system then applies CMN to the speech (i.e., position-dependent CMN) and performs speech recognition for each channel. The features obtained from the multiple channels are integrated by one of the following two methods. The first method obtains the final result from the maximum vote or the maximum summed likelihood of the recognition results of the multiple channels; we call this multiple-decoder processing. The second method calculates the output probability of each input at the frame level and performs speech recognition with a single decoder using these output probabilities; we call this single-decoder processing, which has a lower computational cost. We combine delay-and-sum beamforming with multiple-decoder or single-decoder processing, which we term multiple microphone-array processing. We evaluated the proposed method on a limited-vocabulary (100-word) distant isolated word recognition task in a real environment. The proposed multiple microphone-array processing using multiple decoders with position-dependent CMN achieved a 3.2% improvement (50% relative error reduction) over delay-and-sum beamforming with conventional CMN (i.e., the conventional method). Multiple microphone-array processing using a single decoder requires about one-third of the computational time of multiple-decoder processing without degrading recognition performance.

Copyright © 2006 Longbiao Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
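To make the pipeline concrete, the following is a minimal sketch of three building blocks named in the abstract: delay-and-sum beamforming, position-dependent CMN, and the maximum-summed-likelihood rule for multiple-decoder processing. This is not the authors' implementation; the function names, the integer-sample channel delays, and the precomputed per-position cepstral means (position_means) are illustrative assumptions.

import numpy as np

def delay_and_sum(channels, delays):
    # Delay-and-sum beamforming: align each channel by its integer sample
    # delay toward the estimated speaker position, then average.
    length = min(len(ch) - d for ch, d in zip(channels, delays))
    aligned = np.stack([ch[d:d + length] for ch, d in zip(channels, delays)])
    return aligned.mean(axis=0)

def position_dependent_cmn(cepstra, position_means, position):
    # Position-dependent CMN: subtract the cepstral mean estimated
    # a priori for the estimated speaker position.
    return cepstra - position_means[position]

def multiple_decoder_combine(hypotheses):
    # Maximum-summed-likelihood rule: sum each word's log-likelihood over
    # channels and return the best-scoring word. hypotheses is a list of
    # (word, log_likelihood) pairs, one per channel decoder.
    scores = {}
    for word, log_likelihood in hypotheses:
        scores[word] = scores.get(word, 0.0) + log_likelihood
    return max(scores, key=scores.get)

Under this sketch, single-decoder processing would instead combine the per-channel output probabilities frame by frame inside one decoder, which is why it avoids running a full decoding pass per channel.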

1. INTRODUCTION

Automatic speech recognition (ASR) systems are known to perform reasonably well when the speech signals are captured with a close-talking microphone. However, in many environments the use of a close-talking microphone is undesirable for reasons of safety or convenience. Hands-free speech communication [1–5] has become increasingly popular in environments such as offices or car cabins. Unfortunately, in a distant-talking environment, channel distortion may drastically degrade speech recognition performance, mostly because of the mismatch between the actual acoustic environment and the training environment. Compensating an input feature is the main way