Subspace-Based Noise Reduction for Speech Signals via Diagonal and Triangular Matrix Decompositions: Survey and Analysis
Research Article

Per Christian Hansen¹ and Søren Holdt Jensen²

¹ Informatics and Mathematical Modelling, Technical University of Denmark, Building 321, 2800 Lyngby, Denmark
² Department of Electronic Systems, Aalborg University, Niels Jernes Vej 12, 9220 Aalborg, Denmark

Received 1 October 2006; Revised 18 February 2007; Accepted 31 March 2007

Recommended by Marc Moonen

We survey the definitions and use of rank-revealing matrix decompositions in single-channel noise reduction algorithms for speech signals. Our algorithms are based on the rank-reduction paradigm and, in particular, signal subspace techniques. The focus is on practical working algorithms, using both diagonal (eigenvalue and singular value) decompositions and rank-revealing triangular decompositions (ULV, URV, VSV, ULLV, and ULLIV). In addition, we show how the subspace-based algorithms can be analyzed and compared by means of simple FIR filter interpretations. The algorithms are illustrated with working Matlab code and applications in speech processing.

Copyright © 2007 P. C. Hansen and S. H. Jensen. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. INTRODUCTION
The signal subspace approach has proved useful for signal enhancement in speech processing and many other applications; see, for example, the recent survey [1]. The area has grown dramatically over the last 20 years, along with advances in efficient computational algorithms for matrix computations [2–4], especially singular value decompositions and rank-revealing decompositions. The central idea is to approximate a matrix, derived from the noisy data, by another matrix of lower rank, from which the reconstructed signal is derived. As stated in [5]: "Rank reduction is a general principle for finding the right trade-off between model bias and model variance when reconstructing signals from noisy data."

Throughout the literature of signal processing and applied mathematics, these methods are formulated in terms of different notations, such as eigenvalue decompositions, Karhunen-Loève transformations, and singular value decompositions. All these formulations are mathematically equivalent, but the differences in notation can be an obstacle to understanding and using the different methods in practice.

Our goal is to survey the underlying mathematics and present the techniques and algorithms in a common framework and a common notation. In addition to methods based on diagonal (eigenvalue and singular value) decompositions, we survey the use of rank-revealing triangular decompositions. Within this framework, we also discuss alternatives to the classical least-squares formulation, and we show how signals with general (nonwhite) noise are treated by explicit and, in particular, implicit prewhitening.
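To make the rank-reduction idea concrete, the following is a minimal NumPy sketch (the paper's own examples are in Matlab): a noisy frame is embedded in a Hankel matrix, the matrix is replaced by its best rank-k approximation via the truncated SVD, and the anti-diagonals are averaged to map the low-rank matrix back to a signal. The Hankel embedding, frame length, and the parameters m and k are illustrative assumptions, not the specific algorithms surveyed in the paper.

```python
import numpy as np

def hankel_matrix(x, m):
    """Build an m x (len(x)-m+1) Hankel matrix from the frame x."""
    n = len(x) - m + 1
    return np.array([x[i:i + n] for i in range(m)])

def rank_k_denoise(x, m=20, k=4):
    """Truncated-SVD rank reduction followed by anti-diagonal averaging."""
    H = hankel_matrix(np.asarray(x, dtype=float), m)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hk = (U[:, :k] * s[:k]) @ Vt[:k]  # best rank-k approximation of H
    # Each anti-diagonal of H holds copies of one sample; averaging the
    # anti-diagonals of Hk returns a signal of the original length.
    F = np.fliplr(Hk)
    y = np.array([F.diagonal(d).mean()
                  for d in range(F.shape[1] - 1, -F.shape[0], -1)])
    return y

# Two noisy sinusoids: the clean signal spans a rank-4 Hankel subspace.
t = np.arange(200)
clean = np.sin(0.3 * t) + 0.5 * np.sin(0.7 * t)
rng = np.random.default_rng(0)
noisy = clean + 0.3 * rng.standard_normal(t.size)
enhanced = rank_k_denoise(noisy, m=20, k=4)
```

Truncating at k keeps the k dominant singular directions (the estimated signal subspace) and discards the remaining directions, which are dominated by noise; choosing k is exactly the bias-variance trade-off quoted from [5] above.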