Using Conical Regularization in Calculating Lagrangian Estimates in Quadratic Optimization Problems


Yu. P. Laptin1† and O. A. Berezovskyi1‡

UDC 519.8

Abstract. For nonconvex quadratic optimization problems, the authors consider the calculation of estimates of global extremum values on the basis of Lagrangian relaxation of the original problems. On the boundary of the feasible region of the estimating problem, the functions of the problem are discontinuous and ill-conditioned, which imposes certain requirements on computational algorithms. The paper presents a new approach that takes these features into account, based on conical regularizations of convex optimization problems. It makes it possible to construct an equivalent unconstrained optimization problem whose objective function is defined on the entire space of problem variables and satisfies the Lipschitz condition.

Keywords: quadratic optimization problem, Lagrangian relaxation, condition of nonnegative definiteness of a matrix, conical regularization.

INTRODUCTION

The present paper is dedicated to the cherished memory of Naum Shor, an outstanding scientist in mathematical cybernetics, a founder of the theory of nondifferentiable optimization, and an academician of the NAS of Ukraine. He would have been 80 this year. One section of his scientific heritage is related to the general class of quadratic optimization problems (maximization of a quadratic form under quadratic constraints). To estimate the values of global extrema in such problems, Lagrangian relaxations are applied (see, e.g., [1, 2]), as well as SDP relaxations (relaxations to semidefinite programming problems) [3], and the two are essentially equivalent [4, 5]. When Lagrangian relaxations are used, estimating problems are generated in the space of dual variables; their feasible sets lie in the domain of nonpositive definiteness of the linearly parameterized matrices of the Lagrangian functions. At interior points of the feasible sets, the functions of the estimating problems are continuously differentiable and take finite values.
These functions are discontinuous on the boundary of the negative definiteness domain (if the matrix has positive eigenvalues, they take the value +∞) and ill-conditioned near the boundary (the number of eigenvalues of the matrix close to zero can approach the space dimension). In this paper, we propose new approaches that account for these special features, based on conical regularizations of convex optimization problems [6, 7]. Section 1 describes the statement of the quadratic optimization problem and its Lagrangian relaxation. Section 2 overviews numerical algorithms for solving the dual problem. Section 3 considers a generalization of the conical regularizations presented in [6] to the estimating problems dual to the original quadratic optimization problem. Section 4 presents new results that allow implementing efficient one-dimensional search algorithms when conical regularizations are used for the considered problems. These results are conceptually close to the "boundary oracle" algorithm introduced in [8].
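The behavior of the estimating (dual) function described above can be illustrated with a minimal numerical sketch; this is an illustrative assumption of ours, not the authors' algorithm. For the problem max x^T A0 x subject to x^T A_i x <= b_i, the dual function is psi(u) = sup_x x^T A(u) x + u^T b with A(u) = A0 - sum_i u_i A_i; when A(u) is negative semidefinite the supremum is attained at x = 0 and psi(u) = u^T b, while a single positive eigenvalue of A(u) makes psi(u) = +∞. The function name `dual_value` and the toy instance are our own.

```python
import numpy as np

def dual_value(A0, constraint_mats, b, u):
    """Dual (estimating) function for: max x^T A0 x  s.t.  x^T A_i x <= b_i.

    psi(u) = sup_x x^T A(u) x + u^T b,  A(u) = A0 - sum_i u_i A_i.
    Returns +inf when A(u) has a positive eigenvalue (the discontinuity
    on the boundary of the nonpositive-definiteness domain); otherwise
    the supremum is 0, attained at x = 0, and psi(u) = u^T b.
    """
    A_u = A0 - sum(ui * Ai for ui, Ai in zip(u, constraint_mats))
    lam_max = np.max(np.linalg.eigvalsh(A_u))  # largest eigenvalue
    if lam_max > 0:
        return np.inf
    return float(u @ b)

# Toy instance: maximize x1^2 + x2^2 subject to x1^2 + x2^2 <= 1
# (global maximum is 1, attained on the unit circle).
A0 = np.eye(2)
A1 = np.eye(2)
b = np.array([1.0])

print(dual_value(A0, [A1], b, np.array([0.5])))  # inf: A(u) = 0.5*I is positive
print(dual_value(A0, [A1], b, np.array([1.0])))  # 1.0: boundary point, tight bound
print(dual_value(A0, [A1], b, np.array([2.0])))  # 2.0: interior point, looser bound
```

Note how the bound jumps from a finite value to +∞ as u crosses the boundary u = 1, which is exactly the discontinuity that motivates the conical regularization developed in the following sections.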