A Gradient Sampling Method Based on Ideal Direction for Solving Nonsmooth Optimization Problems
Morteza Maleknia · Mostafa Shamsi
Received: 29 May 2019 / Accepted: 17 August 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract  In this paper, a modification to the original gradient sampling method for minimizing nonsmooth, nonconvex functions is presented. One computational component of the gradient sampling method is the need to solve a quadratic optimization problem at each iteration, which can be time-consuming, especially for large-scale objectives. To resolve this difficulty, this study proposes a new descent direction whose computation requires no quadratic or linear subproblem. It is shown that this direction satisfies the Armijo step size condition. We also prove that, under the proposed modifications, the global convergence of the gradient sampling method is preserved. Moreover, under some moderate assumptions, an upper bound on the number of serious iterations is presented. Using this upper bound, we develop a different strategy to study the convergence of the method. We also demonstrate the efficiency of the proposed method on small-, medium- and large-scale problems in our numerical experiments.

Keywords  Nonsmooth and nonconvex optimization · Subdifferential · Steepest descent direction · Gradient sampling · Armijo line search

Mathematics Subject Classification  49M05 · 65K05 · 90C26
1 Introduction

In this paper, we propose a modification to the original gradient sampling (GS) method for minimizing a function f : R^n → R, where the objective function f is locally Lipschitz and continuously differentiable on an open set D of full measure in R^n.
Mostafa Shamsi (corresponding author)
[email protected]

Morteza Maleknia
[email protected]

Amirkabir University of Technology, Tehran, Iran
The GS algorithm, originally developed by Burke et al. [1], is a robust approach that can be applied to a broad range of nonsmooth functions. The convergence analysis and theoretical results of the GS algorithm were first developed in [1]. Soon after, these results were strengthened by the work of Kiwiel [2]. In particular, under slight modifications, Kiwiel derived a lower bound for the step sizes and suggested a limited Armijo line search, in which the number of backtracking steps can be controlled through the choice of the initial step size. Another variant of the GS method, in which the Clarke ε-subdifferential is approximated by sampling estimates of mollifier gradients, was presented by Kiwiel [3]. In the work of Curtis and Que [4], the GS framework was merged with quasi-Newton techniques to improve the efficiency of the method. In the work of Curtis and Overton [5], the ideas of gradient sampling and sequential quadratic programming (SQP) were combined for solving nonsmooth constrained optimization problems. In addition, the local convergence rate of the GS algorithm for the class of finite-max functions was studied in [6,7].
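For concreteness, the sketch below illustrates a generic backtracking Armijo line search of the kind used inside gradient sampling iterations. It is a minimal example, not the specific limited line search of Kiwiel [2] nor the modified direction proposed in this paper; the function name armijo_backtracking, the toy objective, and the parameter defaults (beta, gamma, t0, max_backtracks) are illustrative assumptions.

```python
import numpy as np

def armijo_backtracking(f, x, d, slope, beta=1e-4, gamma=0.5, t0=1.0, max_backtracks=30):
    """Backtracking Armijo line search along a descent direction d.

    slope should be a negative estimate of the directional derivative of f
    at x along d (e.g. g.dot(d) for a sampled gradient g). The initial step
    t0 and the cap max_backtracks bound the amount of backtracking, echoing
    the idea that the number of backtracking steps is controlled by the
    choice of the initial step size.
    """
    fx = f(x)
    t = t0
    for _ in range(max_backtracks):
        # Armijo sufficient-decrease test: f(x + t d) <= f(x) + beta * t * slope
        if f(x + t * d) <= fx + beta * t * slope:
            return t
        t *= gamma  # shrink the step and retry
    return 0.0  # no acceptable step found within the backtracking budget


# Toy usage at a smooth point of a nonsmooth function, f(x) = ||x||_1.
f = lambda x: np.abs(x).sum()
x = np.array([2.0, -3.0])
g = np.sign(x)                      # gradient of the l1-norm away from the kinks
t = armijo_backtracking(f, x, -g, slope=-g.dot(g))
print(t, f(x + t * (-g)))
```

In a gradient sampling iteration, d would instead be the negative of the minimum-norm element of the convex hull of gradients sampled in an ε-ball around x, which is precisely the quadratic subproblem the present paper seeks to avoid.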