Sequential and Adaptive Learning Algorithms for M-Estimation

Research Article

Guang Deng
Department of Electronic Engineering, Faculty of Science, Technology and Engineering, La Trobe University, Bundoora, VIC 3086, Australia

Correspondence should be addressed to Guang Deng, [email protected]

Received 1 October 2007; Revised 9 January 2008; Accepted 1 April 2008

Recommended by Sergios Theodoridis

The M-estimate of a linear observation model has many important engineering applications, such as identifying a linear system under non-Gaussian noise. Batch algorithms based on the EM algorithm or the iterative reweighted least squares (IRLS) algorithm have been widely adopted. In recent years, several sequential algorithms have been proposed. In this paper, we propose a family of sequential algorithms based on a Bayesian formulation of the problem. The basic idea is that, in each step, we use a Gaussian approximation for the posterior and a quadratic approximation for the log-likelihood function. The maximum a posteriori (MAP) estimation then leads naturally to algorithms similar to the recursive least squares (RLS) algorithm. We discuss the quality of the estimate, issues related to the initialization and estimation of parameters, and the robustness of the proposed algorithm. We then develop LMS-type algorithms by replacing the covariance matrix with a scaled identity matrix under the constraint that the determinant of the covariance matrix is preserved. We propose two LMS-type algorithms that are effective and low-cost replacements for RLS-type algorithms working under Gaussian and impulsive noise, respectively. Numerical examples show that the performance of the proposed algorithms is very competitive with that of other recently published algorithms.

Copyright © 2008 Guang Deng. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
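As a side illustration of the determinant-preserving constraint mentioned in the abstract (not the paper's derivation of the LMS-type update), the sketch below replaces an M x M covariance matrix P with a scaled identity c*I such that det(c*I) = det(P); since det(c*I) = c^M, the scale is c = det(P)^(1/M). The matrix P and its size are made up purely for illustration.

```python
import numpy as np

# Illustration (only) of the determinant-preserving replacement of a
# covariance matrix by a scaled identity, as stated in the abstract.
# Since det(c*I) = c**M for an M x M matrix, c = det(P)**(1/M).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
P = A @ A.T + 1e-3 * np.eye(4)           # an arbitrary SPD "covariance" matrix

M = P.shape[0]
c = np.linalg.det(P) ** (1.0 / M)        # determinant-preserving scale factor
P_approx = c * np.eye(M)

print(np.isclose(np.linalg.det(P_approx), np.linalg.det(P)))   # True
```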

1. INTRODUCTION

We consider a robust estimation problem for a linear observation model:

    y = x^T w + r,                                                    (1)

where w is the impulse response to be estimated, {y, x} is the known training data, and the noise r is independent and identically distributed (i.i.d.). Given a set of training data {y_k, x_k}_{k=1:n}, the maximum likelihood estimation (MLE) of w leads to the following problem:

    w_n = arg min_w  Σ_{k=1}^{n} ρ(r_k),                              (2)

where ρ(r_k) = −log p(y_k | w) is the negative log-likelihood function. The M-estimate of a linear model can also be expressed as the above MLE problem when the well-developed penalty functions [1, 2] are regarded as generalized negative log-likelihood functions. This is a robust regression problem.

The solution not only is an essential data analysis tool [3, 4], but also has many practical engineering applications such as system identification, where the noise model is heavy-tailed [5]. The batch algorithms and the sequential algorithms are two basic approaches to solving the problem in (2). The batch alg