Sequential Difference-of-Convex Programming
Welington de Oliveira¹

Received: 2 December 2019 / Accepted: 13 July 2020

© Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract

Optimization methods for difference-of-convex programs iteratively solve convex subproblems to define iterates. Although convex, these subproblems are, depending on the problem's structure, often challenging and require specialized solvers. This work investigates a new methodology that defines iterates as approximate critical points of significantly easier difference-of-convex subproblems approximating the original one. Since there is considerable freedom in choosing these more accessible subproblems, several algorithms can be designed from the given approach. In some cases, the resulting algorithm reduces to a straightforward process with iterates given in analytic form. In other situations, decomposable subproblems can be chosen, opening the way for parallel computing even when the original program is not decomposable. Depending on the problem's assumptions, a possible variant of the given approach is the Josephy–Newton method applied to the system of (necessary) optimality conditions of the original difference-of-convex program. In such a setting, local convergence with superlinear and even quadratic rates can be achieved under certain conditions.

Keywords DC programming · Nonsmooth optimization · Variational analysis

Mathematics Subject Classification 49J52 · 49J53 · 49K99 · 90C26
1 Introduction

This work deals with the problem of minimizing a difference-of-convex (DC) function over a convex set in the n-dimensional Euclidean space. To avoid introducing notation prematurely, we simply note that the convex functions in the problem's objective are referred to as component functions, yielding a DC decomposition that we assume to be available. The discipline
Welington de Oliveira
[email protected]

¹ CMA – Centre de Mathématiques Appliquées, MINES ParisTech, PSL – Research University, Sophia Antipolis, France
Journal of Optimization Theory and Applications
of mathematical programming concerned with the theory and methods for finding extrema of DC functions on sets of given vector spaces is called DC programming, a subject that has been receiving significant attention from researchers and practitioners. Optimality conditions and relevant properties of DC optimization problems can be found in [1–3]. Examples of applications are cluster analysis [4,5], sensor covering [6], data analysis and classification [7–10], machine learning [11–13], engineering problems, and others [14–17].

In general, DC programs are NP-hard. In this work, we are concerned with local-solution methods for finding stationary/critical points. The most widely employed local-solution technique for DC programming is the DC algorithm (DCA) [17,18]. Like other algorithms for this class of problems, the DCA computes trial points by solving a sequence of convex subproblems obtained by replacing the second component function with its first-order approximation.
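To make the DCA's convexified subproblem concrete, the following is a minimal, hypothetical sketch in Python (it illustrates the classical DCA, not the sequential DC method proposed in this paper). For f(x) = g(x) − h(x) with g(x) = x⁴ and h(x) = 2x², both convex, linearizing h at the current iterate x_k yields a convex subproblem whose minimizer has the closed form x_{k+1} = x_k^{1/3}:

```python
# Illustrative DCA sketch (assumed example, not from the paper):
# minimize f(x) = g(x) - h(x) over the reals, with
#   g(x) = x**4   (convex)   and   h(x) = 2*x**2   (convex).
# DCA replaces h by its linearization at x_k (h'(x_k) = 4*x_k):
#   x_{k+1} = argmin_x  x**4 - 4*x_k*x,
# whose first-order condition 4*x**3 = 4*x_k gives x_{k+1} = x_k**(1/3).

def dca(x0, tol=1e-10, max_iter=200):
    """Run DCA from x0 > 0; each convex subproblem is solved in closed form."""
    x = x0
    for _ in range(max_iter):
        x_next = x ** (1.0 / 3.0)  # analytic minimizer of the convex subproblem
        if abs(x_next - x) < tol:  # stop when the iterates stall
            return x_next
        x = x_next
    return x

# Starting from x0 = 0.5, the iterates converge to the critical point x = 1,
# where f(1) = 1 - 2 = -1.
x_star = dca(0.5)
```

Here every convex subproblem admits an analytic solution, echoing the remark above that, for suitable choices of subproblems, iterates can be available in closed form. (The restriction x0 > 0 avoids fractional powers of negative floats, which Python evaluates as complex numbers.)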