An efficient augmented Lagrangian method with applications to total variation minimization
Chengbo Li · Wotao Yin · Hong Jiang · Yin Zhang
Received: 19 August 2012 © Springer Science+Business Media New York 2013
Abstract Based on the classic augmented Lagrangian multiplier method, we propose, analyze and test an algorithm for solving a class of equality-constrained non-smooth optimization problems (chiefly but not necessarily convex programs) with a particular structure. The algorithm effectively combines an alternating direction technique with a nonmonotone line search to minimize the augmented Lagrangian function at each iteration. We establish convergence for this algorithm, and apply it to solving problems in image reconstruction with total variation regularization. We present numerical results showing that the resulting solver, called TVAL3, is competitive with, and often outperforms, other state-of-the-art solvers in the field.

Keywords Compressive sensing · Non-smooth optimization · Augmented Lagrangian method · Nonmonotone line search · Barzilai-Borwein method · Single-pixel camera
C. Li (✉) · W. Yin · Y. Zhang
Department of Computational and Applied Mathematics, Rice University, 6100 Main, Houston, TX 77005, USA
e-mail: [email protected]

W. Yin
e-mail: [email protected]

Y. Zhang
e-mail: [email protected]

H. Jiang
Bell Laboratories, Alcatel-Lucent, 700 Mountain Avenue, Murray Hill, NJ 07974, USA
e-mail: [email protected]
1 Introduction

1.1 A class of non-smooth minimization problems

In this paper, we consider solving a class of equality-constrained minimization problems of the form

$$\min_{x,\,y} \; f(x, y) \quad \text{s.t.} \quad h(x, y) = 0, \tag{1}$$
where $x \in \mathbb{R}^{n_1}$, $y \in \mathbb{R}^{n_2}$, the vector-valued function $h : \mathbb{R}^{n_1+n_2} \to \mathbb{R}^m$ ($m < n_1 + n_2$) is differentiable with respect to both $x$ and $y$, but the function $f$ may or may not be differentiable with respect to $y$. In addition, we will later impose a special structure on the problems under consideration. For solving this class of problems, we will propose and study an algorithm in the framework of the classic augmented Lagrangian multiplier (ALM) method, first suggested by Hestenes [21] and Powell [31]. In the ALM framework, one obtains the $k$-th iterate $(x^k, y^k)$ by minimizing the augmented Lagrangian function

$$\mathcal{L}_A(x, y, \lambda; \mu) = f(x, y) - \lambda^T h(x, y) + \frac{\mu}{2}\, h(x, y)^T h(x, y), \qquad \mu > 0, \tag{2}$$
jointly with respect to both $x$ and $y$ for a given multiplier estimate $\lambda = \lambda^{k-1}$, then updates the multiplier estimate by the formula $\lambda^k = \lambda^{k-1} - \mu h(x^k, y^k)$. In principle, the positive parameter $\mu$ in the augmented Lagrangian function, known as the penalty parameter, can also be altered from iteration to iteration. It is evident that the iteration-complexity of an ALM algorithm depends almost entirely on how the augmented Lagrangian function is minimized jointly with respect to both $x$ and $y$. To solve these subproblems efficiently, one should exploit any useful structure present in the augmented Lagrangian function. Therefore, we concentrate on solving unconstrained minimization problems of this type.
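To make the outer iteration concrete, below is a minimal Python sketch of the ALM loop described above, run on a small toy instance of (1). The instance itself, the names `alm_toy` and `shrink`, and the exact alternating inner solves are our own illustrative assumptions; the paper instead minimizes (2) with a more general alternating direction technique and a nonmonotone line search.

```python
import numpy as np

def shrink(z, t):
    # Soft-thresholding: closed-form minimizer of t*||y||_1 + 0.5*||y - z||^2.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def alm_toy(a, mu=2.0, outer_iters=100, inner_iters=10, tol=1e-8):
    """Outer ALM loop for the toy instance of (1):
        min_{x,y} 0.5*||x - a||^2 + ||y||_1   s.t.   h(x, y) = x - y = 0,
    where f is smooth in x but non-smooth in y, matching the structure
    in Sect. 1.1. The inner subproblem minimizes the augmented
    Lagrangian (2) by alternating exact minimization over x and y
    (a simple stand-in for the paper's inner scheme)."""
    x = np.zeros_like(a)
    y = np.zeros_like(a)
    lam = np.zeros_like(a)                        # multiplier estimate lambda
    for _ in range(outer_iters):
        for _ in range(inner_iters):              # minimize L_A(x, y, lam; mu)
            x = (a + lam + mu * y) / (1.0 + mu)   # exact x-step (quadratic in x)
            y = shrink(x - lam / mu, 1.0 / mu)    # exact y-step (soft-thresholding)
        r = x - y                                 # constraint residual h(x^k, y^k)
        if np.linalg.norm(r) < tol:
            break
        lam = lam - mu * r                        # lam^k = lam^{k-1} - mu*h(x^k, y^k)
    return x, y

a = np.array([3.0, 0.5, -2.0])
x, y = alm_toy(a)
print(np.round(x, 4))   # close to shrink(a, 1) = [2., 0., -1.]
```

For this toy problem the true solution is $x = y = \text{shrink}(a, 1)$, which the loop recovers; the sketch only illustrates the outer multiplier update, while the paper's contribution lies in how the inner joint minimization is carried out.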