A Modified Descent Spectral Conjugate Gradient Method for Unconstrained Optimization
RESEARCH PAPER
Saeed Nezhadhosein

Received: 21 January 2020 / Accepted: 13 October 2020
© Shiraz University 2020
Abstract
Recently, Peyghami et al. (Optim Meth Soft 43:1–28, 2015) proposed a modified secant equation, applied through a new update of Yabe and Takano's rule as an adaptive version of the conjugate gradient parameter. Here, using this modified secant equation, we propose a new modified descent spectral nonlinear conjugate gradient method. The proposed method has two major features: higher-order accuracy in approximating the second-order curvature information of the objective function, and satisfaction of the sufficient descent condition. Global convergence of the proposed method is proved for uniformly convex functions and for general functions. Numerical experiments are carried out on a set of test functions from the CUTEr collection, and the results are compared with those of some well-known methods.

Keywords: Unconstrained optimization · Modified secant equations · Spectral conjugate gradient method
1 Introduction

Spectral conjugate gradient (SCG) methods are a class of iterative line search methods for solving large-scale unconstrained optimization problems of the form $\min_{x \in \mathbb{R}^n} f(x)$, where $f : \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable and its gradient is available. SCG methods are extensions of CG methods and retain all of their advantages, including simple iterations, low memory requirements and strong global convergence properties (Dai et al. 2000). At the same time, they are superior to CG methods in numerical performance, which makes them more efficient than CGs. SCG methods generate a sequence of solutions $\{x_k\}$ by the recursion

$$x_{k+1} = x_k + \alpha_k d_k, \quad k \ge 0, \tag{1}$$
where $x_0 \in \mathbb{R}^n$ is an initial solution, $\alpha_k$ is the step length and $d_k$ is the search direction at the $k$th iteration. The step length $\alpha_k$ is usually chosen to satisfy certain inexact line search conditions (Wolfe 1969). Among them, the so-called Wolfe conditions (Sun and Yuan 2006) have attracted special attention in the convergence analyses and implementations of CG methods, requiring that

$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k, \tag{2}$$

$$g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k, \tag{3}$$

where $0 < \delta < \sigma < 1$, $f_k = f(x_k)$ and $g_k = \nabla f(x_k)$. These conditions guarantee that $s_k^T y_k > 0$, where $y_k = g_{k+1} - g_k$ and $s_k = x_{k+1} - x_k$.
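As a concrete illustration, the following sketch checks whether a trial step length satisfies conditions (2)-(3). The callables `f` and `grad`, and the parameter values $\delta = 10^{-4}$ and $\sigma = 0.9$, are illustrative assumptions rather than values prescribed in this paper.

```python
import numpy as np

def satisfies_wolfe(f, grad, x_k, d_k, alpha, delta=1e-4, sigma=0.9):
    """Check the Wolfe conditions (2)-(3) for a trial step length alpha.

    f, grad : callables returning f(x) and the gradient of f at x
    x_k, d_k: current iterate and search direction (NumPy arrays)
    delta, sigma: line search parameters with 0 < delta < sigma < 1
    """
    gtd = grad(x_k) @ d_k                 # directional derivative g_k^T d_k
    x_new = x_k + alpha * d_k
    sufficient_decrease = f(x_new) <= f(x_k) + delta * alpha * gtd   # (2)
    curvature = grad(x_new) @ d_k >= sigma * gtd                     # (3)
    return sufficient_decrease and curvature

# Hypothetical example: f(x) = ||x||^2 with the steepest descent direction
f = lambda x: x @ x
grad = lambda x: 2 * x
x0 = np.array([1.0, -2.0])
print(satisfies_wolfe(f, grad, x0, -grad(x0), alpha=0.25))  # True
```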
In Eq. (1), the search directions are calculated by the following recursive formula (Birgin and Martínez 2001):

$$d_{k+1} = -\theta_{k+1} g_{k+1} + \beta_k d_k, \tag{4}$$
where $\theta_{k+1}$ and $\beta_k$ are the spectral (scaling) and CG (update) parameters, respectively, and the initial search direction is set as $d_0 = -g_0$. In the special cases, if $\theta_{k+1} = 1$ the SCG methods reduce to CG methods, and if $\beta_k = 0$ they reduce to the spectral gradient method (SGM) proposed by Raydan (1997), which is based on a two-point approximation of the standard secant equation suggested by Barzilai and Borwein (1988).
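To make recursion (1) and the direction update (4) concrete, here is a minimal sketch of a generic SCG loop. The particular choices of $\theta_{k+1}$ (the Barzilai-Borwein two-point formula) and $\beta_k$ (the Fletcher-Reeves rule) are illustrative assumptions only; the parameters proposed in this paper are developed in the sections that follow. Since the Wolfe conditions guarantee $s_k^T y_k > 0$, the spectral quotient below is well defined.

```python
import numpy as np

def scg_iterate(f, grad, x0, line_search, max_iter=100, tol=1e-6):
    """Generic SCG loop following Eqs. (1) and (4).

    line_search(f, grad, x, d) must return a step length satisfying
    the Wolfe conditions (2)-(3).  The theta/beta choices below are
    placeholders (assumptions), not the rules proposed in this paper.
    """
    x = x0
    g = grad(x)
    d = -g                                   # initial direction d_0 = -g_0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        alpha = line_search(f, grad, x, d)
        x_new = x + alpha * d                # recursion (1)
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        theta = (s @ s) / (s @ y)            # Barzilai-Borwein spectral parameter
        beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves update parameter
        d = -theta * g_new + beta * d        # direction update (4)
        x, g = x_new, g_new
    return x
```

Any Wolfe line search, for instance a backtracking or bisection scheme built around a check such as `satisfies_wolfe` above, can serve as the `line_search` argument.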