A general framework for ADMM acceleration



Alessandro Buccini¹ · Pietro Dell'Acqua² · Marco Donatelli³

Received: 27 December 2018 / Accepted: 28 October 2019
© Springer Science+Business Media, LLC, part of Springer Nature 2019

Abstract
The Alternating Direction Multipliers Method (ADMM) is a very popular algorithm for computing the solution of convex constrained minimization problems. Such problems are important from the application point of view, since they occur in many fields of science and engineering. ADMM is a powerful numerical tool, but unfortunately its main drawback is that it can exhibit slow convergence. Several approaches for its acceleration have been proposed in the literature, and in this paper we present a new general framework devoted to this aim. In particular, we describe an algorithmic framework that makes it possible to apply any acceleration step while still guaranteeing convergence. This result is achieved thanks to a guard condition that ensures the monotonic decrease of the combined residual. The proposed strategy is applied to image deblurring problems. Several acceleration techniques are compared; to the best of our knowledge, some of them are investigated for the first time in connection with ADMM. Numerical results show that the proposed framework leads to faster convergence than other acceleration strategies recently introduced for ADMM.

Keywords Alternating Direction Multipliers Method (ADMM) · Acceleration techniques

✉ Pietro Dell'Acqua, [email protected]
Alessandro Buccini, [email protected]
Marco Donatelli, [email protected]

1 Dipartimento di Matematica e Informatica, Università degli Studi di Cagliari, Via Ospedale 72 - Palazzo delle Scienze, 09124 Cagliari, Italy
2 Libera Università di Bolzano, Piazza Domenicani 3, 39100 Bolzano, Italy
3 Dipartimento di Scienza e Alta Tecnologia, Università degli Studi dell'Insubria, Via Valleggio 11, 22100 Como, Italy

Numerical Algorithms

1 Introduction

The task of minimizing a convex function f(x) subject to some constraints is a well-known problem which arises in many fields of science and engineering. Consider the following minimization problem

min_{x,y} f(x) + g(y)   s.t.   Ax + By = c,   (1)
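As a concrete illustration (ours, not drawn from the paper), problem (1) covers the lasso by taking f(x) = ½‖Mx − b‖², g(y) = λ‖y‖₁, A = I, B = −I, and c = 0, so the constraint reduces to x = y. A minimal sketch of the plain, unaccelerated ADMM iteration in scaled form for this instance follows; all function names, parameter values, and the choice of the penalty ρ are our own illustrative assumptions:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (component-wise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(M, b, lam, rho=1.0, n_iter=300):
    """Plain ADMM for min_x 0.5*||Mx - b||^2 + lam*||x||_1,
    cast in the form (1) with f(x) = 0.5*||Mx - b||^2,
    g(y) = lam*||y||_1, A = I, B = -I, c = 0 (i.e., x = y)."""
    n = M.shape[1]
    x = np.zeros(n)
    y = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable (Lagrange multiplier / rho)
    # The x-update solves a linear system with a fixed matrix; form it once.
    Q = M.T @ M + rho * np.eye(n)
    Mtb = M.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(Q, Mtb + rho * (y - u))  # x-minimization step
        y = soft_threshold(x + u, lam / rho)         # y-minimization (prox of g)
        u = u + x - y                                # dual update on the multiplier
    return x, y, u
```

The primal residual x − y driving the dual update is one ingredient of the combined residual whose monotonic decrease the paper's guard condition enforces.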

where f and g are convex functions. A very popular method for computing an approximate solution of (1) is the Alternating Direction Multipliers Method (ADMM) [1]. The ADMM can be easily implemented; however, its main drawback is that it can exhibit slow convergence. Thus, several approaches have been proposed to accelerate it (see, e.g., [2, 3]). In [2] the acceleration is obtained by an extrapolation strategy involving a constant acceleration factor that, to ensure convergence, has to be smaller than 1/3. In [3] the well-known FISTA approach [4, 5] is employed to speed up the ADMM iterations and a restart condition is introduced to guarantee the convergence of the accelerated algorithm. In this work, we present a general framework for the acceleration of ADMM. In