On Extended RLS Lattice Adaptive Variants: Error-Feedback, Normalized, and Array-Based Recursions
Ricardo Merched
Signal Processing Laboratory (LPS), Department of Electronics and Computer Engineering, Federal University of Rio de Janeiro, P.O. Box 68504, Rio de Janeiro, RJ 21945-970, Brazil
Email: [email protected]

Received 12 May 2004; Revised 10 November 2004; Recommended for Publication by Hideaki Sakai

Error-feedback, normalized, and array-based recursions represent equivalent RLS lattice adaptive filters which are known to offer better numerical properties under finite-precision implementations. This is the case when the underlying data structure arises from a tapped-delay-line model for the input signal. On the other hand, in the context of a more general orthonormality-based input model, these variants have not yet been derived and their behavior under finite precision is unknown. This paper develops several lattice structures for the exponentially weighted RLS problem under orthonormality-based data structures, including error-feedback, normalized, and array-based forms. We show that, besides being nonminimal, the new recursions exhibit unstable modes as well as hyperbolic rotations, so that the well-known good numerical properties observed in the case of FIR models no longer hold. We verify via simulations that, compared to the standard extended lattice equations, these variants do not improve robustness to quantization, unlike what is normally expected for FIR models.

Keywords and phrases: RLS algorithm, orthonormal model, lattice, regularized least squares.
1. INTRODUCTION
In a recent paper [1], a new framework for exploiting data structure in recursive-least-squares (RLS) problems was introduced. As a result, we have shown how to derive RLS lattice recursions for more general orthonormal networks beyond tapped-delay-line implementations [2]. As is well known, the original fast RLS algorithms are obtained by exploiting the shift structure of the successive rows of the input data matrix of the adaptive algorithm. That is, consider two successive regression (row) vectors {u_{M,N}, u_{M,N+1}} of order M, say,
    u_{M,N}   = [ u_0(N)    u_1(N)    · · ·  u_{M−1}(N)   ] = [ u_{M−1,N}  u_{M−1}(N) ],
    u_{M,N+1} = [ u_0(N+1)  u_1(N+1)  · · ·  u_{M−1}(N+1) ] = [ u_0(N+1)  ū_{M−1,N+1} ].    (1)
By recognizing that, in tapped-delay-line models,

    ū_{M−1,N+1} = u_{M−1,N},    (2)

one can exploit this relation to obtain the LS solution in a fast manner. The key for extending this concept to more
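The shift relation (2) is easy to verify numerically: for a tapped-delay-line regressor, the trailing M−1 entries of the regression vector at time N+1 coincide with the leading M−1 entries of the regression vector at time N. A minimal sketch (the helper `regressor` and the random test signal are our own illustrative constructions, not from the paper):

```python
import numpy as np

def regressor(u, M, N):
    """Order-M tapped-delay-line regression (row) vector at time N:
    u_{M,N} = [u(N), u(N-1), ..., u(N-M+1)]."""
    return np.array([u[N - i] for i in range(M)])

rng = np.random.default_rng(0)
u = rng.standard_normal(50)   # arbitrary test input signal
M, N = 4, 10

u_M_N  = regressor(u, M, N)        # u_{M,N}
u_M_N1 = regressor(u, M, N + 1)    # u_{M,N+1}

# Shift structure: u-bar_{M-1,N+1} (the last M-1 entries of u_{M,N+1})
# equals u_{M-1,N} (the first M-1 entries of u_{M,N}).
assert np.allclose(u_M_N1[1:], u_M_N[:M - 1])
```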
general structures in [1, 3] was to show that, although the above equality no longer holds for general orthonormal models, it is still possible to relate the entries of {u_{M,N}, u_{M,N+1}} as

    ū_{M−1,N+1} = u_{M,N} Φ_M,    (3)
where Φ_M is an M × (M − 1) structured matrix induced by the underlying orthonormal model. Figure 1 illustrates the structure for which the RLS lattice algorithm of [1] was derived. These recursions constitute what we will refer to in this paper as the a-posteriori-based lattice algorithm, since all
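For general orthonormal models the entries of Φ_M are induced by the network (see [1]); as a sanity check, however, (3) must reduce to the FIR shift relation (2) in the tapped-delay-line special case. The sketch below assumes (our assumption, for the FIR case only) that Φ_M then consists of the (M−1) × (M−1) identity stacked over a zero row, so that u_{M,N} Φ_M selects the first M−1 entries of u_{M,N}:

```python
import numpy as np

M = 4
# Hypothetical FIR-case choice of Phi_M: identity over a zero row,
# an M x (M-1) selection matrix (not the general structured matrix of [1]).
Phi_M = np.vstack([np.eye(M - 1), np.zeros((1, M - 1))])

rng = np.random.default_rng(1)
u = rng.standard_normal(30)
N = 10
u_M_N = np.array([u[N - i] for i in range(M)])           # u_{M,N}
ubar  = np.array([u[N + 1 - i] for i in range(1, M)])    # u-bar_{M-1,N+1}

# With this Phi_M, relation (3) reproduces the FIR shift relation (2).
assert np.allclose(u_M_N @ Phi_M, ubar)
```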