A VERSION OF THE MIRROR DESCENT METHOD TO SOLVE VARIATIONAL INEQUALITIES*

V. V. Semenov

UDC 517.988

Abstract. Nemirovski and Yudin proposed the mirror descent algorithm in the late 1970s for solving convex optimization problems. The method is well suited to huge-scale optimization problems. In this paper, we describe a new version of the mirror descent method for solving variational inequalities with pseudomonotone operators. The method can be interpreted as a modification of Popov's two-step algorithm that uses Bregman projections onto the feasible set. We prove the convergence of the sequences generated by the proposed method.

Keywords: variational inequality, pseudomonotonicity, Bregman distance, Kullback–Leibler distance, mirror descent method, convergence.

INTRODUCTION

Many interesting and important problems of operations research and mathematical physics can be written as variational inequalities. Solving such inequalities is a rapidly developing area of applied nonlinear analysis [1–16]. By now, many methods have been developed for solving variational inequalities, in particular projection-type methods, i.e., methods that use the metric projection onto the feasible set [1, 5, 7, 8]. (Sketches of the basic schemes and of the Bregman distance are given at the end of this introduction.)

It is well known that in problems of finding a saddle point or a Nash equilibrium, the simplest projection method converges only under strengthened monotonicity conditions [1]. If these conditions are not satisfied, several approaches can be used. One of them is to regularize the original problem so as to endow it with the required property. Convergence without modifying the problem is ensured by iterative extragradient methods, first proposed by Korpelevich [5] for inequalities with Lipschitz-continuous operators. Later, a method with dynamic step-size adjustment was considered in [6]. It does not require knowledge of the Lipschitz constant of the operator, which considerably extends the range of potential applications of the idea of [5]. These methods have been analyzed in many studies [7–13]. In particular, modifications of the Korpelevich algorithm with a single metric projection onto the feasible set per iteration were proposed [8, 9, 12, 13]. In the so-called subgradient–extragradient algorithms, the first stage of an iteration coincides with that of the Korpelevich algorithm; the next approximation is then obtained by projecting onto a half-space that supports the feasible set instead of projecting onto the feasible set itself.

In the early 1980s, an interesting modification of the Arrow–Hurwicz algorithm for finding saddle points of convex–concave functions was proposed [14]. Some modifications of Popov's method for solving variational inequalities with a monotone operator are analyzed in the recent studies [15, 16]. The paper [17] proposes a two-stage proximal algorithm for solving equilibrium programming problems, which adapts the method of [14] to general Ky Fan inequalities. All the above-mentioned methods use the Euclidean distance and the Euclidean projection, which does not allow one to take into account the structure of the feasible set or to solve huge-scale problems efficiently.
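For concreteness, here is a brief sketch of the projection schemes discussed above; the notation is our assumption rather than that of this section: A is the operator of the inequality, C the feasible set, \( \lambda > 0 \) a step size, and \( P_C \) the Euclidean metric projection onto C. The variational inequality is to find \( x^* \in C \) such that

\[
\langle A x^*, \, y - x^* \rangle \ge 0 \quad \text{for all } y \in C .
\]

The Korpelevich extragradient method [5] iterates

\[
y_n = P_C (x_n - \lambda A x_n), \qquad x_{n+1} = P_C (x_n - \lambda A y_n),
\]

and the subgradient–extragradient modifications replace the second projection onto C by the explicitly computable projection onto the supporting half-space

\[
T_n = \{ \, z : \langle x_n - \lambda A x_n - y_n, \, z - y_n \rangle \le 0 \, \} \supseteq C .
\]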
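Popov's two-step scheme [14], of which the method proposed in this paper is a Bregman-projection modification, requires only one evaluation of the operator per iteration (same assumed notation as above):

\[
x_{n+1} = P_C (x_n - \lambda A y_n), \qquad y_{n+1} = P_C (x_{n+1} - \lambda A y_n).
\]

Replacing the two operator evaluations of the extragradient method by one is what makes this scheme attractive when evaluating A is expensive.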
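The Bregman and Kullback–Leibler distances named in the keywords are standard objects. For a differentiable strictly convex function \( \varphi \), the Bregman distance and the associated Bregman projection are

\[
D_\varphi (x, y) = \varphi(x) - \varphi(y) - \langle \nabla \varphi(y), \, x - y \rangle,
\qquad
\Pi_C^\varphi (z) = \operatorname*{argmin}_{x \in C} D_\varphi (x, z).
\]

For the negative entropy \( \varphi(x) = \sum_i x_i \ln x_i \) on the standard simplex, \( D_\varphi \) reduces to the Kullback–Leibler distance

\[
D_\varphi (x, y) = \sum_i x_i \ln \frac{x_i}{y_i},
\]

for which the Bregman projection of a positive vector z onto the simplex is simply the normalization \( z / \sum_j z_j \); this is a standard choice in the mirror descent literature.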