Two nonmonotone trust region algorithms based on an improved Newton method



T. Dehghan Niri¹ · M. Heydari¹ · M. M. Hosseini²

¹ Department of Mathematics, Yazd University, Yazd, Iran
² Department of Applied Mathematics, Faculty of Mathematics and Computer, Shahid Bahonar University of Kerman, Kerman, Iran

Received: 22 January 2020
© Korean Society for Informatics and Computational Applied Mathematics 2020

Abstract

In this paper, we present two improved regularized Newton methods for minimizing nonconvex functions within the trust region framework. The fundamental idea is to combine the nonmonotone Armijo-type line search proposed by Gu and Mo (Comput Math Appl 55:2158–2172, 2008) with the traditional trust region method. Under some standard assumptions, we analyze the global convergence of the new methods. Preliminary numerical results are reported and confirm the higher efficiency of the modified algorithms compared with the other algorithms considered.

Keywords: Modified Newton method · Trust region method · Nonmonotone Armijo-type line search · Convergence analysis

Mathematics Subject Classification: 65K05 · 90C26 · 90C30

1 Introduction

We consider the unconstrained optimization problem

min_{x ∈ R^n} f(x),   (1)

where f : R^n → R is a smooth function with f ∈ C²(R^n). Throughout this investigation, we assume that the solution set of (1) is nonempty. The gradient ∇f(x) and the Hessian ∇²f(x) are denoted by g(x) and H(x), respectively. Also, B(x) ∈ R^{n×n} denotes an approximate Hessian matrix of f. In the sequel, we adopt the notations f_k := f(x_k), g_k := g(x_k), H_k := H(x_k), B_k := B(x_k).

There are many practical algorithms for solving unconstrained optimization problems. Among them, the classical Newton method is one of the most widely used because of its fast local convergence. Several improvements of the Newton method for unconstrained minimization have been proposed to guarantee convergence [10,22]. In the classical Newton method, positive definiteness of the Hessian of f is essential for convergence. To overcome the difficulty caused by a possibly singular Hessian, Sun [24] proposed a regularized Newton method, in which the trial step is the solution of the linear system (H_k + λ_k I) d_k = −g_k, that is,

d_k = −(H_k + λ_k I)^{−1} g_k,   (2)

where I is the identity matrix and λ_k ∈ R_+.
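To make the step (2) concrete, the following minimal Python sketch (not taken from the paper) computes the regularized Newton trial step by solving the shifted linear system (H_k + λ_k I) d_k = −g_k rather than forming the inverse; the toy objective, the simple choice λ_k = ‖g_k‖ (the δ = 1 case of the rule recalled below), and all identifiers are illustrative assumptions.

import numpy as np

def regularized_newton_step(g, H, lam):
    # Trial step of Eq. (2): solve (H + lam*I) d = -g instead of forming the inverse.
    n = g.size
    return np.linalg.solve(H + lam * np.eye(n), -g)

# Toy problem f(x, y) = x**4 + y**2, whose Hessian is singular at the minimizer (0, 0).
def grad(x):
    return np.array([4.0 * x[0]**3, 2.0 * x[1]])

def hess(x):
    return np.diag([12.0 * x[0]**2, 2.0])

x = np.array([1.0, 1.0])
for _ in range(30):
    g, H = grad(x), hess(x)
    lam = np.linalg.norm(g)   # illustrative choice: lambda_k = ||g_k||^delta with delta = 1
    x = x + regularized_newton_step(g, H, lam)
print(x)                      # close to the minimizer (0, 0)

Solving the shifted system typically keeps the trial step well defined even when H_k itself is singular, which is the difficulty the regularization is intended to address.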

Fan [14] proposed λ_k = ‖g_k‖^δ with δ ∈ [1, 2], and later showed that the choice λ_k = ‖f_k‖ performs in a more stable and preferable way [15]. A new trust region algorithm for solving nonlinear equations, with the trust region radius converging to zero, is presented in [13], and its convergence is proved under some mild conditions. Ueda and Yamashita [25] employed a regularized algorithm for nonconvex minimization problems; they presented a global complexity bound and analyzed the superlinear convergence of the proposed method. Wang [26] introduced a modified regularized