Golden ratio algorithms for variational inequalities



Series A

Yura Malitsky¹

Received: 23 March 2018 / Accepted: 22 July 2019
© Springer-Verlag GmbH Germany, part of Springer Nature and Mathematical Optimization Society 2019

Abstract  The paper presents a fully adaptive algorithm for monotone variational inequalities. In each iteration the method uses the two previous iterates to approximate the local Lipschitz constant without running a linesearch. Thus, every iteration of the method requires only one evaluation of the monotone operator F and one evaluation of the proximal mapping of g. The operator F need not be Lipschitz continuous, which also makes the algorithm interesting for composite minimization. The method exhibits an ergodic O(1/k) convergence rate and an R-linear rate under an error bound condition. We discuss possible applications of the method to fixed point problems as well as several of its generalizations.

Keywords  Variational inequality · First-order methods · Linesearch · Saddle point problem · Composite minimization · Fixed point problem

Mathematics Subject Classification  47J20 · 65K10 · 65K15 · 65Y20 · 90C33
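As a rough illustration of the kind of iteration the abstract describes, below is a minimal sketch of a fixed-step golden ratio iteration in Python. The update form, the step size bound λ ≤ φ/(2L), and the toy skew-linear operator are assumptions made for illustration only; the paper's actual contribution is the adaptive variant, which estimates the local Lipschitz constant from two previous iterates instead of using a fixed λ.

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2  # golden ratio

def graal_fixed(F, prox, z0, lam, iters=1000):
    """Sketch of a fixed-step golden ratio iteration: each step needs
    one evaluation of F and one prox, and no linesearch."""
    z, z_bar = z0.copy(), z0.copy()
    for _ in range(iters):
        z_bar = ((phi - 1) * z + z_bar) / phi   # convex combination of iterates
        z = prox(z_bar - lam * F(z))            # forward step on F, prox step on g
    return z

# Toy monotone (skew) operator: F(z) = Az + b with A^T = -A, and g = 0.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = np.array([1.0, -2.0])
F = lambda z: A @ z + b
prox = lambda z: z                  # g = 0, so the prox is the identity
lam = phi / (2 * 1.0)               # lam <= phi/(2L); here L = ||A|| = 1
z = graal_fixed(F, prox, np.zeros(2), lam)
# The unique solution satisfies F(z*) = 0, i.e. z* = (-2, -1).
```

Note that plain forward iteration z ← z − λF(z) diverges on this skew example; the averaged point z̄ is what stabilizes the method for merely monotone F.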

1 Introduction

We are interested in the variational inequality (VI) problem: find

z∗ ∈ V   s.t.   ⟨F(z∗), z − z∗⟩ + g(z) − g(z∗) ≥ 0   ∀z ∈ V,   (1)

where V is a finite dimensional vector space and we assume that

(C1) the solution set S of (1) is nonempty;
(C2) g : V → (−∞, +∞] is a proper convex lower semicontinuous (lsc) function;
(C3) F : dom g → V is monotone: ⟨F(u) − F(v), u − v⟩ ≥ 0 ∀u, v ∈ dom g.
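As a quick numerical sanity check of the monotonicity condition (C3) (a toy example of my own, not from the paper): an affine operator F(z) = Az + b is monotone exactly when the symmetric part of A is positive semidefinite, which is easy to verify at sampled pairs of points.

```python
import numpy as np

# F(z) = Az + b is monotone iff (A + A^T)/2 is positive semidefinite.
# Here the symmetric part of A is diag(2, 1), so F is monotone
# even though A itself is not symmetric.
A = np.array([[2.0, 1.0], [-1.0, 1.0]])
b = np.array([1.0, -1.0])
F = lambda z: A @ z + b

rng = np.random.default_rng(0)
for _ in range(100):
    u, v = rng.standard_normal(2), rng.standard_normal(2)
    # Monotonicity: <F(u) - F(v), u - v> >= 0 for all u, v.
    assert (F(u) - F(v)) @ (u - v) >= -1e-12
```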

This research was supported by the German Research Foundation Grant SFB755-A4.


Yura Malitsky, [email protected]
¹ Institute for Numerical and Applied Mathematics, University of Göttingen, Göttingen, Germany



The function g can be nonsmooth, and it is very common to consider VIs with g = δC, the indicator function of a closed convex set C. In this case (1) reduces to: find

z∗ ∈ C   s.t.   ⟨F(z∗), z − z∗⟩ ≥ 0   ∀z ∈ C,   (2)
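To make (2) concrete, here is a small numerical check (my own toy example, not from the paper): when F(z) = z − b, problem (2) over a box C is solved by the Euclidean projection of b onto C, and the defining inequality can be verified at sampled points of C.

```python
import numpy as np

# VI (2) with F(z) = z - b, the gradient of (1/2)||z - b||^2,
# over the box C = [-1, 1]^2. The solution is the projection of b onto C.
b = np.array([2.0, -3.0])
lo, hi = -1.0, 1.0
z_star = np.clip(b, lo, hi)   # componentwise projection onto the box
F = lambda z: z - b

rng = np.random.default_rng(1)
for _ in range(1000):
    z = rng.uniform(lo, hi, size=2)
    # VI condition (2): <F(z*), z - z*> >= 0 for all z in C.
    assert F(z_star) @ (z - z_star) >= -1e-12
```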

which is a more widely studied problem. Clearly, one can rewrite (1) as the monotone inclusion 0 ∈ (F + ∂g)(z∗). Henceforth, we implicitly assume that we can (relatively simply) compute the resolvent of ∂g (the proximal operator of g), that is, (Id + ∂g)−1, but cannot do the same for F; in other words, computing the resolvent (Id + F)−1 is prohibitively expensive.

VIs provide a common framework, in the form of (1), for many problems that arise in optimization, PDEs, control theory, and game theory. We recommend [21,29] as excellent references for a broader familiarity with the subject. As a motivation from the optimization point of view, we present two sources where VIs naturally arise. The first example is the convex-concave saddle point problem:

min_x max_y  L(x, y) := g1(x) + K(x, y) − g2(y),   (3)

where x ∈ Rn , y ∈ Rm , g1 : Rn → (−∞, +∞], g2 : Rm → (−∞, +∞] are proper convex lsc functions and K : dom g1 × dom g2 → R is a smooth convex-concave function. By writing down the first-order optimality condition, it is easy to see that problem (3)