On the Adaptive Proximal Method for a Class of Variational Inequalities and Related Problems
F. S. Stonyakin1

Received February 8, 2019; revised May 7, 2019; accepted May 13, 2019
Abstract—For problems of unconstrained optimization, the concept of inexact oracle proposed by O. Devolder, F. Glineur, and Yu. E. Nesterov is well known. We introduce an analog of the concept of inexact oracle (model of a function) for abstract equilibrium problems, variational inequalities, and saddle-point problems. This allows us to propose an analog of Nemirovskii's known mirror prox method for variational inequalities with an adaptive adjustment to the smoothness level for a fairly wide class of problems. The auxiliary problems at the iterations of the method can be solved with error. It is shown that the resulting errors do not accumulate during the operation of the method. Estimates of the convergence rate of the method are obtained, and its optimality from the viewpoint of the theory of lower oracle estimates is established. It is shown that the method is applicable to mixed variational inequalities and composite saddle-point problems. An example showing the possibility of an essential acceleration of the method as compared to the theoretical estimates due to the adaptivity of the stopping rule is given.

Keywords: inexact model of a function, variational inequality, saddle-point problem, abstract equilibrium problem, adaptive stopping rule.
DOI: 10.1134/S0081543820040161
1. INTRODUCTION

Variational inequalities and saddle-point problems often arise in the solution of a wide variety of optimization problems and have numerous applications in mathematical economics, mathematical simulation of traffic flows, game theory, and other areas of mathematics (see, for example, [1]). Research in the field of algorithmic methods for variational inequalities and saddle-point problems is actively continuing (see, for example, [2–12]). The best-known analog of the gradient method for variational inequalities is Korpelevich's extragradient method [13], and Nemirovskii's mirror prox method [8] is a modern variant of the latter. In the present paper we propose an analog of Nemirovskii's method based on a number of new ideas in the field of algorithmic optimization that have arisen in recent years (see [14–16]). Let us say a few words about these ideas. In 2015 Nesterov proposed [14] the so-called universal gradient methods for convex minimization problems. The universality of a method is understood as the possibility of its adaptive adjustment to the smoothness level of a problem, which can accelerate the operation of the method (see [14, Sect. 5]).
Crimean Federal University, Simferopol, 295007 Russia e-mail: [email protected]
A universal analog of Nemirovskii's method for variational inequalities was proposed in [16]. For an operator g : Q → Rn given on a convex compact set Q ⊂ Rn, a strong variational inequality is an inequality of the form

⟨g(x∗), x∗ − x⟩ ≤ 0 for all x ∈ Q, (1.1)

where g satisfies the Hölder condition

‖g(x) − g(y)‖∗ ≤ Lν ‖x − y‖^ν for all x, y ∈ Q

for some ν ∈ [0, 1] and Lν ≥ 0.
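To make the setting concrete, the basic extragradient iteration of Korpelevich mentioned above can be sketched numerically. The fragment below is an illustration only, not the adaptive method of this paper: it assumes a fixed step size, a Euclidean setup with a hypothetical box feasible set in place of a general prox structure, and a toy skew-symmetric (hence monotone) operator coming from the saddle-point problem min_u max_v uv on [−1, 1]^2, whose variational inequality has the unique solution (0, 0).

```python
import numpy as np

def project_box(z, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n (stand-in for a general prox step)."""
    return np.clip(z, lo, hi)

def extragradient(g, x0, step=0.2, iters=1000):
    """Korpelevich-style extragradient iteration for the VI with operator g."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = project_box(x - step * g(x))   # extrapolation (predictor) step
        x = project_box(x - step * g(y))   # correction step using the operator at the predictor point
    return x

# Monotone skew-symmetric operator of the toy saddle-point problem
# min_u max_v u*v on [-1, 1]^2; the solution of the VI is (0, 0).
g = lambda z: np.array([z[1], -z[0]])
sol = extragradient(g, x0=[0.9, -0.7])
```

Note that for this rotational operator a plain projected-gradient iteration would circle around the solution, while the extragradient correction step produces convergence; this is precisely why extragradient-type schemes are the natural "gradient analog" for variational inequalities.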