Two-step conjugate gradient method for unconstrained optimization



R. Dehghani 1 · N. Bidabadi 1

Received: 15 February 2019 / Revised: 20 May 2020 / Accepted: 8 August 2020
© SBMAC - Sociedade Brasileira de Matemática Aplicada e Computacional 2020

Communicated by Andreas Fischer.

N. Bidabadi: [email protected] · R. Dehghani: [email protected]

1 Department of Mathematical Science, Yazd University, Yazd, Iran

Abstract

Using Taylor's series, we propose a modified secant relation to obtain a more accurate approximation of the second-order curvature of the objective function. Using this relation together with an approach introduced by Dai and Liao, we then present a conjugate gradient algorithm for solving unconstrained optimization problems. The proposed method makes use of both gradient and function values, and exploits information from the two most recent steps, whereas the usual secant relation uses only the latest step information. Under appropriate conditions, we show that the proposed method is globally convergent without requiring a convexity assumption on the objective function. Comparative results show the computational efficiency of the proposed method in the sense of the Dolan–Moré performance profiles.

Keywords Unconstrained optimization · Two-step secant relation · Conjugate gradients · Global convergence

Mathematics Subject Classification 49M37 · 65K10 · 90C30

1 Introduction

Consider the unconstrained nonlinear optimization problem
$$\min f(x), \quad x \in \mathbb{R}^n,$$
where $f : \mathbb{R}^n \to \mathbb{R}$ is twice continuously differentiable. Conjugate gradient (CG) methods are popular for solving large-scale unconstrained minimization problems, since they avoid storing matrices. One step of these iterative methods is defined as follows:
$$x_{k+1} = x_k + \alpha_k d_k, \tag{1}$$

where $\alpha_k > 0$ is a step length and $d_k$ is the search direction defined by
$$d_k = \begin{cases} -g_k, & \text{for } k = 0, \\ -g_k + \beta_k d_{k-1}, & \text{for } k > 0, \end{cases} \tag{2}$$

where $g_k = \nabla f(x_k)$ and $\beta_k$ is a scalar called the CG parameter. Different choices of the CG parameter in (2) lead to different CG methods. The step length $\alpha_k$ is determined by an exact or inexact line search technique (Nocedal and Wright 2006). Throughout this work, we use the strong Wolfe conditions to determine the step length (Nocedal and Wright 2006):
$$f(x_k + \alpha_k d_k) \le f(x_k) + \sigma_1 \alpha_k g_k^T d_k,$$
$$\left| g(x_k + \alpha_k d_k)^T d_k \right| \le \sigma_2 \left| g(x_k)^T d_k \right|,$$
with $0 < \sigma_1 < \sigma_2 < 1$ (a runnable sketch of iteration (1)-(2) under this line search is given below).

Well-known CG methods include those proposed by Hestenes and Stiefel (HS) (1952), Fletcher and Reeves (FR) (1964), and Polak and Ribière (PR) (Polak and Ribière 1969; Polyak 1969). Zoutendijk (1970) established the global convergence of the FR method with exact line searches, and Al-Baali (1985) extended this result to inexact line searches. However, Powell (1977) showed that the FR method performs poorly in practice. Conversely, although the PR and HS methods work very well in practice, their global convergence is not guaranteed. To overcome this problem, Powell (1984) suggested another modified conjugate parameter. Re
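To make the generic scheme concrete, the following minimal Python sketch implements iteration (1) with the direction update (2) and a strong Wolfe line search. It is illustrative only: the $\beta_k$ used here is the classical Fletcher–Reeves choice, not the two-step Dai–Liao-type parameter proposed in this paper, and the function name `cg_minimize` and the tolerance defaults are our own assumptions. SciPy's `line_search` enforces the strong Wolfe conditions with `c1` playing the role of $\sigma_1$ and `c2` that of $\sigma_2$.

```python
# Minimal sketch of the generic CG scheme (1)-(2) with a strong Wolfe
# line search. Illustrative only: uses the Fletcher-Reeves beta, not the
# two-step Dai-Liao-type parameter proposed in the paper.
import numpy as np
from scipy.optimize import line_search


def cg_minimize(f, grad, x0, sigma1=1e-4, sigma2=0.1, tol=1e-6, max_iter=500):
    """Nonlinear CG with FR beta and a strong Wolfe line search (sketch)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # d_0 = -g_0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        # Strong Wolfe step: f(x+ad) <= f(x) + sigma1*a*g^T d and
        # |grad(x+ad)^T d| <= sigma2 * |g^T d|.
        alpha, *_ = line_search(f, grad, x, d, gfk=g, c1=sigma1, c2=sigma2)
        if alpha is None:                    # line search failed; restart
            d = -g
            alpha, *_ = line_search(f, grad, x, d, gfk=g, c1=sigma1, c2=sigma2)
            if alpha is None:
                break
        x = x + alpha * d                    # iteration (1)
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)     # FR beta; HS/PR use y_{k-1} instead
        d = -g_new + beta * d                # direction update (2)
        g = g_new
    return x


if __name__ == "__main__":
    # Usage example on the Rosenbrock function (minimizer at [1, 1]).
    from scipy.optimize import rosen, rosen_der
    print(cg_minimize(rosen, rosen_der, np.array([-1.2, 1.0])))
```

With $\sigma_2 = 0.1$, the line search is fairly accurate, which is the customary choice for CG methods (as opposed to the looser $\sigma_2 = 0.9$ typical of quasi-Newton methods).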