A Neurodynamic Algorithm for Sparse Signal Reconstruction with Finite-Time Convergence



Hongsong Wen¹ · Hui Wang² · Xing He¹

Received: 21 December 2019 / Revised: 29 April 2020 / Accepted: 4 May 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020

Abstract
In this paper, a neurodynamic algorithm with finite-time convergence is proposed for solving the $L_1$-minimization problem in sparse signal reconstruction, based on the projection neural network (PNN). Compared with existing PNNs, the proposed algorithm incorporates the sliding mode technique from control theory. Under certain conditions, the stability of the proposed algorithm in the sense of Lyapunov is analyzed and discussed; then the finite-time convergence of the proposed algorithm is proved and a bound on the settling time is given. Finally, simulation results on a numerical example and a comparison experiment show the effectiveness and superiority of the proposed neurodynamic algorithm.

Keywords Finite-time convergence · Sparse signal reconstruction · Projection neural network (PNN) · $L_1$-minimization

1 Introduction

In recent years, the development of compressed sensing (CS) theory has led to extensive research on sparse signal reconstruction [8]. In general, sparse solutions are estimated by solving an $L_1$-minimization problem [18]. Assume there is an unknown signal $x \in \mathbb{R}^n$, a measurement vector $b \in \mathbb{R}^m$, and a full row rank matrix $A \in \mathbb{R}^{m \times n}$ satisfying $b = Ax$. The main purpose of CS is to recover $x$ from the known $A$ and $b$.
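As a concrete illustration of this setup (not taken from the paper), the following Python sketch builds a $k$-sparse signal $x$, a random measurement matrix $A$, and the measurements $b = Ax$; the dimensions n, m, k are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative compressed-sensing setup: b = A x with a k-sparse x.
# The sizes n, m, k below are arbitrary choices for demonstration.
rng = np.random.default_rng(0)
n, m, k = 256, 64, 8

x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)   # locations of the k nonzeros
x_true[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)     # random full row rank measurement matrix
b = A @ x_true                                   # known measurement vector
```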


Hui Wang
[email protected]

Xing He
[email protected]

Hongsong Wen
[email protected]

1 Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China

2 School of Mathematical Sciences, Chongqing Normal University, Chongqing 401331, China


To recover $x$, one can solve the following $L_0$-minimization problem:
$$\min_x \ \|x\|_0 \quad \text{s.t.} \quad Ax = b, \tag{1}$$
where $\|\cdot\|_0$ denotes the $L_0$-norm, i.e., the number of nonzero elements of a vector. However, for underdetermined systems of linear equations, the optimization problem (1) is NP-hard [30]. If the solution of problem (1) is sparse enough and the restricted isometry property (RIP) condition is satisfied, then problem (1) is equivalent to the $L_1$-minimization problem [12]:
$$\min_x \ \|x\|_1 \quad \text{s.t.} \quad Ax = b, \tag{2}$$
where $\|\cdot\|_1$ is the $L_1$-norm. In the literature, problem (2) arises in many applications that can be formulated as sparse signal reconstruction, such as data clustering [13], blind source separation [21,22], face recognition [31,35], gesture recognition [1], image classification [36], and image restoration [28]. Furthermore, robust face recognition methods based on sparse signal reconstruction are proposed in [35,37]. Moreover, to solve the sparse recovery problem for nonuniform sparse models, a weighted $L_1$-minimization method is proposed in [19]. In particular, there have
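To make problem (2) concrete, here is a minimal sketch that solves the $L_1$-minimization problem through its standard basis-pursuit reformulation as a linear program, using scipy.optimize.linprog. This is only a classical baseline for comparison, not the neurodynamic (PNN-based) algorithm proposed in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def l1_minimization(A, b):
    """Solve problem (2): min ||x||_1 subject to Ax = b.

    Basis-pursuit reformulation as a linear program: introduce u with
    -u <= x <= u and minimize sum(u) over the stacked variable z = [x, u].
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])      # objective: sum of u
    A_eq = np.hstack([A, np.zeros((m, n))])            # equality constraint A x = b
    I = np.eye(n)
    A_ub = np.vstack([np.hstack([ I, -I]),             #  x - u <= 0
                      np.hstack([-I, -I])])            # -x - u <= 0
    b_ub = np.zeros(2 * n)
    bounds = [(None, None)] * n + [(0, None)] * n      # x free, u >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=bounds, method="highs")
    return res.x[:n]

# Example with the setup sketched earlier in the introduction:
# x_hat = l1_minimization(A, b)   # recovers x_true up to numerical tolerance
```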