Superiorization methodology and perturbation resilience of inertial proximal gradient algorithm with application to signal recovery Nuttapol Pakkaranang1,2 · Poom Kumam1,2,3 · Vasile Berinde4 · Yusuf I. Suleiman5
© Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract In this paper, we construct a novel algorithm for solving non-smooth composite optimization problems. Using an inertial technique, we propose a modified proximal gradient algorithm with outer perturbations and, under standard mild conditions, we obtain strong convergence results for finding a solution of the composite optimization problem. Based on bounded perturbation resilience, we combine the proposed algorithm with the superiorization methodology and apply it to an image recovery problem. Finally, we provide numerical experiments to show the efficiency of the proposed algorithm and compare it with previously known algorithms in signal recovery.

Keywords Composite optimization problems · Proximal gradient algorithm · Inertial algorithm · Superiorization methodology · Bounded perturbation resilience

Mathematics Subject Classification 47H05 · 47H09 · 49M37 · 65K10
1 Introduction

Let 𝛤0(H) be the class of convex, lower semicontinuous, and proper functions from a Hilbert space H to (−∞, +∞]. The non-smooth composite optimization problem (shortly, NSCOP) is defined by

min_{x∈H} ( 𝜓(x) + 𝜑(x) ),  (1)
where 𝜓, 𝜑 ∈ 𝛤0(H), 𝜓 is differentiable, 𝜑 is not necessarily differentiable, and ∇𝜓 is Lipschitz continuous on H. The NSCOP can be traced back to the classical work of Bauschke and Combettes [1], and it also arises in typical research fields such as linear inverse
problems [2]. This class of optimization problems covers a large number of applications in applied mathematics, robotics, medical imaging, financial engineering, machine learning, signal processing, and resource allocation; see e.g. [3–9]. Due to the recent boom in automation and machine learning, smart iterative algorithms are indispensable in the field of artificial intelligence. This, however, has led to an increased demand for feasible and faster algorithms in which all the relevant components can be evaluated sufficiently quickly and in the least time. One of the goals of the present paper is to explore a novel and faster algorithm for solving the NSCOP. An important and well-known powerful tool for solving problem (1) is the proximal gradient algorithm. For non-smooth, constrained, large-scale, or distributed problems, it plays a role analogous to that of Newton's method for unconstrained smooth optimization problems of modest size.

Assumption 1 [1] (Existence of a solution of the NSCOP) Let 𝛺 be the set of all solutions of (1). If 𝛷 ∶= 𝜓 + 𝜑 is coercive, that is,
lim_{‖x‖→∞} 𝛷(x) = +∞,
then 𝛺 = Arg min 𝛷 ≠ ∅. This means that 𝛷 has a minimizer over H.

Assumption 2 [1] (Uniqueness of the solution of proximal mappings) Let 𝜑 ∈ 𝛤0(H); then, for each x ∈ H, the function y ↦ 𝜑(y) + (1/2)‖x − y‖² attains its minimum at a unique point in H.
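To make the discussion concrete, the basic (non-inertial, unperturbed) proximal gradient iteration for problem (1) can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it assumes the standard lasso instance 𝜓(x) = (1/2)‖Ax − b‖², 𝜑(x) = 𝜆‖x‖₁, for which the proximal mapping of 𝜑 is componentwise soft-thresholding; the names soft_threshold and proximal_gradient are illustrative choices.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal mapping of t*||.||_1: componentwise shrinkage toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, step, n_iter=500):
    """Minimize psi(x) + phi(x) with psi(x) = 0.5*||Ax - b||^2 (smooth)
    and phi(x) = lam*||x||_1 (non-smooth).

    The step size should satisfy step <= 1/L, where L = ||A^T A||_2 is the
    Lipschitz constant of the gradient of psi.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                         # gradient step on psi
        x = soft_threshold(x - step * grad, step * lam)  # proximal step on phi
    return x
```

The paper's proposed method augments this basic scheme with an inertial (momentum) term and bounded outer perturbations; the sketch above only shows the underlying forward-backward splitting that those modifications build upon.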