A limited memory q-BFGS algorithm for unconstrained optimization problems
Kin Keung Lai1 · Shashi Kant Mishra2 · Geetanjali Panda3 · Suvra Kanti Chakraborty4 · Mohammad Esmael Samei5 · Bhagwat Ram6

Received: 3 July 2020 / Revised: 25 August 2020 / Accepted: 29 August 2020
© Korean Society for Informatics and Computational Applied Mathematics 2020
Abstract  A limited memory q-BFGS (Broyden–Fletcher–Goldfarb–Shanno) method is presented for solving unconstrained optimization problems. It is derived from a modified BFGS-type update using the q-derivative (quantum derivative). The use of Jackson's derivative is an effective mechanism for escaping from local minima. The q-gradient method is employed to generate the parameter q for computing the step length in such a way that the search process gradually shifts from a global search at the beginning to an almost local search at the end. Further, global convergence is established under Armijo–Wolfe conditions even if the objective function is not convex. Numerical experiments show that the proposed method is potentially efficient.

Keywords  Unconstrained optimization · Large-scale optimization · Quasi-Newton method · Limited memory BFGS method

Mathematics Subject Classification  90C30 · 65K05 · 90C53
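For readers unfamiliar with Jackson's derivative, the following minimal Python sketch (our illustration, not the authors' implementation) evaluates the univariate q-derivative $D_q f(x) = (f(qx) - f(x))/((q-1)x)$ for $x \neq 0$ and $q \neq 1$; the q-gradient used in the paper is built from such partial q-derivatives applied coordinate-wise. As $q \to 1$, the q-derivative reduces to the classical derivative.

```python
def q_derivative(f, x, q=0.9):
    """Jackson's q-derivative of f at x, assuming x != 0 and q != 1:
    D_q f(x) = (f(q*x) - f(x)) / ((q - 1) * x)."""
    if x == 0.0 or q == 1.0:
        raise ValueError("Jackson's q-derivative requires x != 0 and q != 1")
    return (f(q * x) - f(x)) / ((q - 1.0) * x)

# Example: for f(x) = x**2, D_q f(x) = (1 + q) * x.
print(q_derivative(lambda t: t**2, 2.0, q=0.5))  # 3.0 == (1 + 0.5) * 2.0
```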
1 Introduction

There are several useful methods for solving unconstrained optimization problems, such as the steepest descent method [1], the Newton method [2], quasi-Newton methods [3], and conjugate gradient methods [4]. The steepest descent method converges linearly and is badly affected by ill-conditioning [5]. The Newton method requires the fewest function evaluations and has a quadratic order of convergence [6]. Conjugate gradient methods require no matrix storage and are faster than the steepest descent method, but not as fast as the Newton method. Quasi-Newton methods require only the gradient of the objective function to be supplied at every iterate. A model of the objective function is constructed by measuring the changes in gradients, and this model is good enough to produce superlinear convergence. From the computational point of view, the BFGS (Broyden–Fletcher–Goldfarb–Shanno) method
[7,8] is an efficient, robust, and inexpensive method for solving the following unconstrained optimization problem (UOP):

$$\min f(x), \quad x \in \mathbb{R}^n, \tag{1}$$
where $f : \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function. The theoretical convergence of the BFGS method with an inexact Wolfe-type line search was given by Powell [9]. For this method, a Hessian approximation of the objective function is required at every iteration. These matrices are symmetric and require $n(n+1)/2$ storage locations each [10]. For large-scale unconstrained optimization problems, however, it is not possible to retain these matrices in the high-speed memory of the computer, and other kinds of methods are needed. The limited memory BFGS (L-BFGS) method described by Nocedal [10] is suitable for solving such large-scale problems.
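To make the memory saving concrete, the classical L-BFGS scheme of Nocedal [10] never forms the $n \times n$ matrix: the product of the inverse Hessian approximation with the gradient is reconstructed from only the $m$ most recent pairs $s_i = x_{i+1} - x_i$ and $y_i = \nabla f(x_{i+1}) - \nabla f(x_i)$ via the two-loop recursion. The Python sketch below illustrates the classical (not the q-calculus) version; variable names and the steepest-descent fallback are our assumptions, not the authors' code.

```python
import numpy as np

def lbfgs_two_loop(grad, s_list, y_list):
    """Two-loop recursion: returns an approximation of H_k @ grad using
    only the stored curvature pairs (s_i, y_i), oldest first in the lists.
    The search direction is the negative of the returned vector."""
    if not s_list:                      # no history yet: steepest descent
        return grad.copy()
    q = grad.copy()
    rhos, alphas = [], []
    # First loop: newest pair to oldest
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / np.dot(y, s)
        alpha = rho * np.dot(s, q)
        q -= alpha * y
        rhos.append(rho)
        alphas.append(alpha)
    # Initial scaling H_0 = gamma * I from the most recent pair
    gamma = np.dot(s_list[-1], y_list[-1]) / np.dot(y_list[-1], y_list[-1])
    r = gamma * q
    # Second loop: oldest pair to newest
    for (s, y), rho, alpha in zip(zip(s_list, y_list),
                                  reversed(rhos), reversed(alphas)):
        beta = rho * np.dot(y, r)
        r += (alpha - beta) * s
    return r
```

Storing only $m$ pairs costs $O(mn)$ memory instead of the $n(n+1)/2$ entries of a dense Hessian approximation, which is the motivation for the limited memory variant developed in this paper.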