Linear convergence of inexact descent method and inexact proximal gradient algorithms for lower-order regularization problems

Yaohua Hu1 · Chong Li2 · Kaiwen Meng3 · Xiaoqi Yang4
Received: 22 October 2019 / Accepted: 27 September 2020 © Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract The $\ell_p$ regularization problem with $0 < p < 1$ has been widely studied for finding sparse solutions of linear inverse problems and has gained successful applications in various fields of mathematics and applied science. The proximal gradient algorithm is one of the most popular algorithms for solving the $\ell_p$ regularization problem. In the present paper, we investigate the linear convergence issue of one inexact descent method and two inexact proximal gradient algorithms (PGAs). For this purpose, an optimality condition theorem is explored to provide the equivalences among a local minimum, the second-order optimality condition and the second-order growth property of the $\ell_p$ regularization problem. By virtue of the second-order optimality condition and second-order growth property, we establish the linear convergence properties of the inexact descent method and inexact PGAs under some simple assumptions. Both linear convergence to a local minimal value and linear convergence to a local minimum are provided. Finally, the linear convergence results of these methods are extended to infinite-dimensional Hilbert spaces. Our results cannot be established under the framework of Kurdyka–Łojasiewicz theory.
Kaiwen Meng [email protected]
Yaohua Hu [email protected]
Chong Li [email protected]
Xiaoqi Yang [email protected]
1 Shenzhen Key Laboratory of Advanced Machine Learning and Applications, College of Mathematics and Statistics, Shenzhen University, Shenzhen 518060, People’s Republic of China
2 School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, People’s Republic of China
3 School of Economics and Mathematics, Southwestern University of Finance and Economics, Chengdu 611130, People’s Republic of China
4 Department of Applied Mathematics, The Hong Kong Polytechnic University, Kowloon, Hong Kong
Keywords Sparse optimization · Nonconvex regularization · Inexact approach · Descent methods · Proximal gradient algorithms · Linear convergence
1 Introduction

The following linear inverse problem is at the core of many problems in various areas of mathematics and applied sciences: find $x \in \mathbb{R}^n$ such that
$$Ax = b,$$
where $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$ are known, and an unknown noise is included in $b$. If $m \ll n$, the above linear inverse problem is seriously ill-conditioned and has infinitely many solutions, and researchers are interested in finding solutions with certain structures, e.g., the sparsity structure. A popular technique for approaching a sparse solution of the linear inverse problem is to solve the $\ell_1$ regularization problem
$$\min_{x \in \mathbb{R}^n} \ \|Ax - b\|^2 + \lambda \|x\|_1,$$
where $\|\cdot\|$ denotes the Euclidean norm, $\|x\|_1 := \sum_{i=1}^n |x_i|$ is a sparsity-promoting norm, and $\lambda > 0$ is a regularization parameter.