Fast Proximal Gradient Methods for Nonsmooth Convex Optimization for Tomographic Image Reconstruction
Elias S. Helou¹ · Marcelo V. W. Zibetti² · Gabor T. Herman³
Received: 18 October 2019 / Revised: 18 July 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract The Fast Proximal Gradient Method (FPGM) and the Monotone FPGM (MFPGM) for minimization of nonsmooth convex functions are introduced and applied to tomographic image reconstruction. Convergence properties of the sequence of objective function values are derived, including a non-asymptotic O(1∕k²) bound. The presented theory broadens current knowledge and explains the convergence behavior of certain methods that are known to perform well in practice. Numerical experimentation involving computerized tomography image reconstruction shows the methods to be competitive in practical scenarios. An experimental comparison with Algebraic Reconstruction Techniques is performed, uncovering behaviors of accelerated Proximal Gradient algorithms that apparently have not yet been noticed when these are applied to tomographic image reconstruction.

Keywords Computerized tomography imaging · Convex optimization · Proximal gradient methods · Iterative algorithms
* Elias S. Helou [email protected]
Marcelo V. W. Zibetti [email protected]
Gabor T. Herman [email protected]
1 Instituto de Ciências Matemáticas e de Computação, São Carlos, Brazil
2 Center for Advanced Imaging Innovation and Research (CAI²R), New York University School of Medicine, New York, USA
3 PhD Program in Computer Science, City University of New York, New York, USA

Sensing and Imaging (2020) 21:45

1 Introduction

In this paper we introduce the Fast Proximal Gradient Methods (FPGM) for the solution of convex minimization problems of the form

min 𝛹(𝐱) ∶= f(𝐱) + 𝜙(𝐱),  (1)
where f ∶ ℝⁿ → ℝ is a smooth convex function with Lipschitz continuous gradient and 𝜙 ∶ ℝⁿ → ℝ is a proper convex function; 𝜙 is not assumed to be smooth. From now on, the symbols f and 𝜙 will always represent functions satisfying these assumptions.

Many practical problems fit this general model. In particular, for image reconstruction from tomographic data, f(𝐱) is typically a measure of the consistency, according to the data acquisition model, of the image 𝐱 with the data coming from the scanner, whereas 𝜙(𝐱) is a measure of structure. For example, 𝜙 could enforce sparsity, smoothness, or some other desirable a priori information known about the image being reconstructed.

The algorithms we propose here lie between the Optimized Iterative Shrinkage-Thresholding Algorithm (OISTA), presented and experimented with in [18] but for which no convergence proof exists, and the Overrelaxed Monotone Fast Iterative Shrinkage-Thresholding Algorithm (OMFISTA) [25], for which convergence proofs do exist. One contribution of the present paper is a better understanding of the convergence properties of these algorithms. We g
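To make the composite model (1) concrete, the following is a minimal sketch of the classical accelerated proximal gradient method (FISTA, the family to which the fast iterative shrinkage-thresholding algorithms mentioned above belong), applied to the standard example f(𝐱) = ½‖A𝐱 − 𝐛‖² with 𝜙(𝐱) = λ‖𝐱‖₁, whose proximal operator is soft-thresholding. This is an illustration of the general scheme, not the FPGM/MFPGM of this paper; the function names and the fixed iteration count are choices made for this sketch.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, b, lam, num_iters=200):
    """Accelerated proximal gradient iteration for
        min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
    Uses step size 1/L, where L = ||A||_2^2 is the Lipschitz
    constant of the gradient of the smooth part."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    y = x.copy()          # extrapolated point
    t = 1.0               # momentum parameter
    for _ in range(num_iters):
        grad = A.T @ (A @ y - b)                       # gradient of f at y
        x_new = soft_threshold(y - grad / L, lam / L)  # proximal (backward) step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x
```

With A the identity the minimizer is simply soft_threshold(𝐛, λ), which provides a quick sanity check of the iteration. The momentum update of t is what yields the O(1∕k²) rate, versus O(1∕k) for the plain (unaccelerated) proximal gradient step x_new = soft_threshold(x − grad∕L, λ∕L).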