Compressed Sensing for Sparse Error Correcting Model



Yuli Fu · Qiheng Zhang · Shengli Xie

Received: 22 February 2012 / Revised: 12 March 2013
© Springer Science+Business Media New York 2013

Circuits Syst Signal Process

Abstract  The compressed sensing (CS)-based cross-and-bouquet (CAB) model was proposed by J. Wright et al. to reduce the complexity of sparse error correcting. To improve the performance of CS-based decoding for the CAB model, this paper proposes an algorithm for constructing a well-designed projection matrix that minimizes two averaged measures of mutual coherence: one proposed by M. Elad, and another defined in this paper for higher-dimensional cases. Using the equivalent dictionary, the dimensionality is reduced, and high-dimensional singular value decomposition (SVD) is avoided in the procedure of constructing the projection matrix. The high-dimensional CAB model of sparse error correcting can thus be solved by the proposed algorithm without computational difficulty. The validity of the proposed method is illustrated by decoding experiments in high-dimensional cases.

Keywords  Compressed sensing · Sparse error correcting · Cross-and-bouquet model · Particle swarm optimization

Y. Fu (✉) · Q. Zhang
School of Electronic and Information Engineering, South China University of Technology, Guangzhou 510640, China
e-mail: [email protected]

Q. Zhang
e-mail: [email protected]

S. Xie
School of Automation, Guangdong University of Technology, Guangzhou 510006, China
e-mail: [email protected]

1 Introduction

Modern data processing applications in signal/image processing, web search and ranking, and bioinformatics are increasingly characterized by large quantities of very high-dimensional data. As datasets grow larger, however, methods of data collection have necessarily become less controlled, introducing corruption, outliers, and missing data [15]. High-dimensional error correction is one method for solving these problems. Consider the error correcting problem in coding theory [3], a natural error correcting problem with real-valued input/output. The problem is to recover an input vector α ∈ R^{k0} from the following n-dimensional corrupted measurement:

y = Aα + e,

(1.1)

where the coding matrix A ∈ R^{n×k0} satisfies n ≫ k0 and the error vector e ∈ R^n is unknown. The constraint on the error is usually ‖e‖₀ := |{i : eᵢ ≠ 0}| ≤ ρn for some constant 0 < ρ < 1. Obviously, the error e is a sparse vector in some sense. To recover the vector α from the measurement y, the following optimization is used:

α̂ = arg min_{g ∈ R^{k0}} ‖y − Ag‖₀.
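As a quick illustrative sketch (not from the paper), the corrupted-measurement model (1.1) with a ρn-sparse error vector can be simulated in Python; NumPy and all dimension choices below are assumptions made for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

n, k0, rho = 128, 10, 0.05          # n >> k0; error fraction rho, 0 < rho < 1
A = rng.standard_normal((n, k0))    # coding matrix A in R^{n x k0}
alpha = rng.standard_normal(k0)     # input vector to be recovered

# Sparse error e: at most rho*n nonzero entries, placed at random positions
e = np.zeros(n)
support = rng.choice(n, size=int(rho * n), replace=False)
e[support] = 10.0 * rng.standard_normal(support.size)

y = A @ alpha + e                   # corrupted measurement, as in (1.1)

# ||e||_0 = |{i : e_i != 0}| <= rho * n holds by construction
assert np.count_nonzero(e) <= rho * n
```

The ℓ₀ problem itself is combinatorial in general, which is why the ℓ₁ relaxation below is used in practice.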

Under suitable assumptions on the coding matrix A, the input α can be recovered uniquely via the following ℓ₁-minimization problem [3]:

α̂ = arg min_{g ∈ R^{k0}} ‖y − Ag‖₁.

(1.2)
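The ℓ₁-minimization (1.2) can be recast as a linear program, so a toy decoding experiment can be sketched as follows; the use of SciPy's `linprog` and the problem sizes are illustrative assumptions, not part of the paper:

```python
import numpy as np
from scipy.optimize import linprog

def l1_decode(A, y):
    """Solve min_g ||y - A g||_1 via the standard LP reformulation.

    Variables x = [g; t] with t_i >= |y_i - (A g)_i|:
        minimize sum(t)  subject to  -t <= y - A g <= t.
    """
    n, k0 = A.shape
    c = np.concatenate([np.zeros(k0), np.ones(n)])
    I = np.eye(n)
    A_ub = np.block([[A, -I], [-A, -I]])        # A g - t <= y  and  -A g - t <= -y
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * k0 + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:k0]

# Toy decoding experiment: Gaussian coding matrix, a few gross errors
rng = np.random.default_rng(1)
n, k0 = 128, 10
A = rng.standard_normal((n, k0))
alpha = rng.standard_normal(k0)
e = np.zeros(n)
e[rng.choice(n, size=6, replace=False)] = 10.0 * rng.standard_normal(6)
y = A @ alpha + e

alpha_hat = l1_decode(A, y)
print("max recovery error:", np.max(np.abs(alpha_hat - alpha)))
```

With a Gaussian coding matrix and only a few corrupted entries, ℓ₁ decoding typically recovers α exactly (up to solver tolerance), in line with the recovery guarantees of [3].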

The sparse error correcting problem is converted into a sparse error recovery problem by multiplying both sides of (1.1) by an (n − k0) × n matrix F whose kernel is the range of A in R^n, the so-called parity-check matrix or elimination matrix, satisfying FA = 0 (see [3]). Therefore, (1.1) is