Improved conjugate gradient method for nonlinear system of equations



Mohammed Yusuf Waziri1 · Aliyu Yusuf2 · Auwal Bala Abubakar1

Received: 21 July 2020 / Revised: 13 October 2020 / Accepted: 27 October 2020
© SBMAC - Sociedade Brasileira de Matemática Aplicada e Computacional 2020

Communicated by Andreas Fischer.

Mohammed Yusuf Waziri: [email protected]
Aliyu Yusuf: [email protected]
Auwal Bala Abubakar: [email protected]

1 Department of Mathematical Sciences, Faculty of Physical Sciences, Bayero University, Kano, Nigeria
2 Department of Sciences, School of Continuing Education, Bayero University, Kano, Nigeria

Abstract
In this paper, we propose a hybrid conjugate gradient (CG) method based on a convex combination of the Fletcher–Reeves (FR) and Polak–Ribière–Polyak (PRP) parameters together with a quasi-Newton update. This is achieved by combining a self-scaling memoryless Broyden update with a hybrid direction built from the two CG parameters. An important property of the new algorithm is that it generates a descent search direction in conjunction with a non-monotone line search. The global convergence of the algorithm is established under appropriate conditions. Finally, numerical experiments on benchmark test problems demonstrate the effectiveness of the proposed algorithm over some existing alternatives.

Keywords Conjugate gradient method · Convex combination · Self-scaling memoryless Broyden update · Global convergence

Mathematics Subject Classification 46N10 · 47A05 · 47N10 · 90C26
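For orientation, a convex combination of the FR and PRP parameters is typically written in the generic hybrid form below. This is only a sketch under standard conventions, with the residual F_k playing the role of the gradient; the exact mixing parameter θ_k and any modifications used in this paper are specified in the derivation of the method:

β_k^hyb = (1 − θ_k) β_k^FR + θ_k β_k^PRP,   θ_k ∈ [0, 1],

where

β_k^FR = ||F_k||^2 / ||F_{k−1}||^2   and   β_k^PRP = F_k^T (F_k − F_{k−1}) / ||F_{k−1}||^2.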

1 Introduction

The system of nonlinear equations has the general form

F(x) = 0,   (1)


where F : R^n → R^n is a continuously differentiable nonlinear mapping. Systems of nonlinear equations have many applications in science and engineering, and various methods have been developed to handle such problems. Numerical methods for solving (1) include Newton's and quasi-Newton's methods (Halilu and Waziri 2017; Waziri et al. 2010a; Li and Fukushima 1999; Dauda et al. 2019b), the diagonal Jacobian approximation method (Waziri et al. 2010b), and the derivative-free method (Fang and Ni 2017). However, the need to compute and store the Jacobian matrix, or an approximation to it, at each iteration makes the first two methods unattractive for solving large-scale nonlinear systems (Waziri and Sabi'u 2015).

One of the best-known methods for the numerical solution of large-scale problems is the CG method, which is mostly used for large-scale unconstrained optimization. The method is popular with mathematicians and engineers because of its low memory requirements, simple implementation, and global convergence properties (Waziri et al. 2010a). It generates a sequence of iterates {x_k}, starting from a given initial point x_0 ∈ R^n, using the recurrence relation

x_{k+1} = x_k + α_k d_k,   k = 0, 1, 2, . . . ,   (2)

where x_k is the k-th approximation to the solution, α_k > 0 is the step length, and d_k is the search direction.
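To make the recurrence in (2) concrete, the following is a minimal, illustrative sketch (not the method proposed in this paper) of a derivative-free CG-type iteration for F(x) = 0. It uses a plain PRP-style parameter with the residual F_k in place of the gradient and a simple norm-reduction backtracking line search; the hybrid parameter, self-scaling memoryless Broyden direction, and non-monotone line search of the proposed algorithm are developed in the sections that follow.

```python
import numpy as np


def cg_solve(F, x0, tol=1e-8, max_iter=500):
    """Illustrative CG-type iteration x_{k+1} = x_k + alpha_k * d_k for F(x) = 0.

    Not the paper's hybrid method: a plain PRP-style parameter (with the
    residual F_k in place of the gradient) and a simple norm-reduction
    backtracking line search, purely to illustrate recurrence (2).
    """
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    d = -Fx  # initial direction: steepest-descent-like
    for k in range(max_iter):
        if np.linalg.norm(Fx) <= tol:
            return x, k
        # Backtracking: shrink alpha until ||F|| is sufficiently reduced.
        alpha, rho, sigma = 1.0, 0.5, 1e-4
        while np.linalg.norm(F(x + alpha * d)) > (1.0 - sigma * alpha) * np.linalg.norm(Fx):
            alpha *= rho
            if alpha < 1e-12:  # give up on the line search and take a tiny step
                break
        x_new = x + alpha * d
        F_new = F(x_new)
        # PRP-style parameter with residuals playing the role of gradients.
        beta = F_new @ (F_new - Fx) / (Fx @ Fx)
        d = -F_new + beta * d
        x, Fx = x_new, F_new
    return x, max_iter


# Example: a small nonlinear system with two equations and two unknowns.
if __name__ == "__main__":
    F = lambda x: np.array([x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0])
    sol, iters = cg_solve(F, np.array([1.0, 1.0]))
    print(sol, iters, F(sol))
```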