A new conjugate gradient hard thresholding pursuit algorithm for sparse signal recovery
Zhibin Zhu¹ · Jinyao Ma¹ · Benxin Zhang²

Received: 25 December 2019 / Revised: 30 May 2020 / Accepted: 21 August 2020
© SBMAC - Sociedade Brasileira de Matemática Aplicada e Computacional 2020
Abstract
The hard thresholding pursuit algorithm in compressive sensing theory is widely used to recover sparse signals because of its good convergence. In this paper, we combine the hard thresholding pursuit algorithm with the conjugate gradient method and propose a modified conjugate gradient hard thresholding pursuit algorithm that accelerates the convergence of the traditional hard thresholding algorithm; a theoretical proof of the algorithm's convergence is given. Finally, numerical experiments on signal recovery and image reconstruction demonstrate the validity of the proposed algorithm.

Keywords Compressive sensing · Sparse signal · Conjugate gradient · Signal recovery · Image reconstruction

Mathematics Subject Classification 65K10 · 90C25
1 Introduction

Compressive sensing (CS) asserts that one can compress data while acquiring the signal: the sampling frequency can be lower than the Nyquist frequency, which reduces the amount of sampled data and saves storage space while still retaining enough information to reconstruct the signal. CS is currently used in many areas, such as signal and image recovery and face recognition. In compressed sensing, one tries to recover x from the following optimization problem (Eldar and Rauhut 2009):

\min_{x \in \mathbb{R}^N} \|x\|_0 \quad \text{subject to} \quad Ax = y, \tag{1}
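As a concrete illustration (not part of the original paper), the l0 "norm" in (1) simply counts the nonzero entries of a vector; in floating-point code one usually counts entries whose magnitude exceeds a small tolerance:

```python
import numpy as np

def l0_norm(x, tol=1e-12):
    """Number of entries of x whose magnitude exceeds tol (the l0 'norm')."""
    return int(np.sum(np.abs(np.asarray(x, dtype=float)) > tol))

# A 3-sparse vector in R^6:
x = np.array([0.0, 2.5, 0.0, -1.0, 0.0, 0.3])
print(l0_norm(x))  # → 3
```

Note that ‖·‖₀ is not a true norm (it is not homogeneous), which is part of why problem (1) is combinatorial.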
Communicated by Antonio José Silva Neto.
Corresponding author: Jinyao Ma, [email protected]
1 School of Mathematics and Computing Science, Guangxi Colleges and Universities Key Laboratory of Data Analysis and Computation, Guilin University of Electronic Technology, Guilin 541004, China
2 School of Electronic Engineering and Automation, Guangxi Key Laboratory of Automatic Detecting Technology and Instruments, Guilin University of Electronic Technology, Guilin 541004, China
where \|x\|_0 is called the l0 norm and denotes the number of nonzero entries of the vector x, A is the measurement matrix, and y is the measurement vector. If we consider noise ε on the measurements, then one solves the corrected optimization problem (Rauhut et al. 2008):

\min_{x \in \mathbb{R}^N} \|x\|_0 \quad \text{subject to} \quad \|Ax - y\| \le \varepsilon. \tag{2}
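The hard thresholding operator at the heart of the pursuit algorithm named in the title keeps the s largest-magnitude entries of a vector and zeroes out the rest. A minimal sketch (the function name `hard_threshold` is chosen here for illustration):

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero out the rest."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    if s > 0:
        keep = np.argsort(np.abs(x))[-s:]  # indices of the s largest magnitudes
        out[keep] = x[keep]
    return out

print(hard_threshold([1.0, -3.0, 0.5, 2.0], 2))  # → [ 0. -3.  0.  2.]
```

The output of this operator is, by construction, the best s-sparse approximation of x in the Euclidean norm.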
Unfortunately, problems (1) and (2) are NP-hard in general. However, l1 minimization can be interpreted as the convex relaxation of l0 minimization. Thus, we can approximate (1) by the problem

\min_{x \in \mathbb{R}^N} \|x\|_1 \quad \text{subject to} \quad Ax = y, \tag{3}

where \|x\|_1 = \sum_{i=1}^{N} |x_i|; l1 minimization is also known as basis pursuit (Foucart 2010).
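Problem (3) can be recast as a linear program by splitting x = u − v with u, v ≥ 0, so that ‖x‖₁ = Σᵢ(uᵢ + vᵢ). A sketch using SciPy's general-purpose LP solver (the problem sizes and the helper name `basis_pursuit` are chosen here for illustration only):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 s.t. Ax = y via the LP split x = u - v, u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)             # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])      # equality constraint: A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(0)
A = rng.standard_normal((12, 20))  # 12 measurements of a length-20 signal
x_true = np.zeros(20)
x_true[[3, 11]] = [2.0, -1.5]      # 2-sparse ground truth
x_hat = basis_pursuit(A, A @ x_true)
print(np.round(x_hat, 3))          # an l1-minimal vector consistent with y
```

The LP returns a feasible point of (3) with l1 norm no larger than that of the true sparse signal; under the usual CS conditions on A it coincides with the sparse solution.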
Because l1 minimization is a convex optimization problem, its solution exists and is unique under certain conditions, and many convex optimization algorithms can solve (3). Similarly, if we consider the presence of noise, one may use the quadratically constrained l1-minimization (Rauhut et al. 2008)

\min_{x \in \mathbb{R}^N} \|x\|_1 \quad \text{subject to} \quad \|Ax - y\|_2 \le \varepsilon.