A Psychoacoustic "NofM"-Type Speech Coding Strategy for Cochlear Implants
Waldo Nogueira
Laboratorium für Informationstechnologie, Universität Hannover, Schneiderberg 32, 30167 Hannover, Germany
Email: [email protected]

Andreas Büchner
Department of Otolaryngology, Medical University Hanover, Carl-Neuberg-Strasse 1, 30625 Hannover, Germany
Email: [email protected]

Thomas Lenarz
Department of Otolaryngology, Medical University Hanover, Carl-Neuberg-Strasse 1, 30625 Hannover, Germany
Email: [email protected]

Bernd Edler
Laboratorium für Informationstechnologie, Universität Hannover, Schneiderberg 32, 30167 Hannover, Germany
Email: [email protected]

Received 1 June 2004; Revised 10 March 2005

This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We describe a new signal processing technique for cochlear implants using a psychoacoustic-masking model. The technique is based on the principle of a so-called “NofM” strategy. These strategies stimulate fewer channels (N) per cycle than there are active electrodes (NofM; N < M). In “NofM” strategies such as ACE or SPEAK, only the N channels with the highest amplitudes are stimulated. The new strategy is based on the ACE strategy but uses a psychoacoustic-masking model in order to determine the essential components of any given audio signal. This new strategy was tested on device users in an acute study, with either 4 or 8 channels stimulated per cycle. For the first condition (4 channels), the mean improvement over the ACE strategy was 17%. For the second condition (8 channels), no significant difference was found between the two strategies.

Keywords and phrases: cochlear implant, NofM, ACE, speech coding, psychoacoustic model, masking.
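To make the “NofM” principle summarized in the abstract concrete, the following Python sketch (our illustration, not code from the paper; the function name select_n_of_m and the example values M = 22 and N = 8 are assumptions) keeps only the N channels with the largest envelope amplitudes out of M in a single stimulation cycle, as amplitude-based strategies such as ACE and SPEAK do:

```python
"""Illustrative sketch of the amplitude-based "NofM" channel selection:
out of M filterbank envelope amplitudes computed for one stimulation
cycle, only the N largest are retained for stimulation."""

import numpy as np


def select_n_of_m(envelopes: np.ndarray, n: int) -> np.ndarray:
    """Return a boolean mask marking the N channels (out of M) with the
    largest envelope amplitudes in the current stimulation cycle.

    envelopes : array of shape (M,), one envelope amplitude per channel.
    n         : number of channels to stimulate (N < M).
    """
    m = envelopes.shape[0]
    if not 0 < n <= m:
        raise ValueError("N must satisfy 0 < N <= M")
    # Indices of the N largest amplitudes (ties broken arbitrarily).
    selected = np.argsort(envelopes)[-n:]
    mask = np.zeros(m, dtype=bool)
    mask[selected] = True
    return mask


# Example: M = 22 channels, N = 8 maxima per cycle (assumed values).
rng = np.random.default_rng(0)
envelopes = rng.random(22)   # stand-in for one cycle of band envelopes
mask = select_n_of_m(envelopes, 8)
print("stimulated channels:", np.flatnonzero(mask))
```

The strategy proposed in this paper keeps the NofM framework but replaces this purely amplitude-based ranking with a selection guided by a psychoacoustic-masking model.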
1. INTRODUCTION
Cochlear implants are widely accepted as the most effective means of improving the auditory receptive abilities of people with profound hearing loss. Generally, these devices consist of a microphone, a speech processor, a transmitter, a receiver, and an electrode array positioned inside the cochlea. The speech processor is responsible for decomposing the input audio signal into different frequency bands or channels and delivering the most appropriate stimulation pattern to the electrodes. When signal processing strategies like continuous interleaved sampling (CIS) [1] or the advanced combination encoder (ACE) [2, 3, 4] are used, electrodes near the base of the cochlea represent high-frequency information, whereas those near the apex transmit low-frequency information. A more detailed description of the process by which the audio signal is converted into electrical stimuli is given in [5]. Speech coding strategies play an extremely important role in maximizing the user’s overall communicative potential, and different speech processing strategies have been developed over the past two decades to mimic firing patterns inside the