A note on hybridization process applied on transformed double step size model

Milena J. Petrović1 · Vladimir Rakočević2,3 · Dragana Valjarević1 · Dejan Ilić3
Received: 7 May 2019 / Accepted: 27 September 2019 / © Springer Science+Business Media, LLC, part of Springer Nature 2019
Abstract We introduce a hybrid gradient model for solving unconstrained optimization problems based on one specific accelerated gradient iteration. By applying a three-term hybridization relation to the transformed accelerated double step size model, we develop an efficient hybrid accelerated scheme. We determine the iterative step size using the Backtracking line search technique, in which we take an optimally calculated starting value for the posed method. In the convergence analysis, we show that the proposed method is at least linearly convergent on the sets of uniformly convex functions and strictly convex quadratic functions. Numerical computations confirm a significant improvement over some relevant hybrid and accelerated gradient processes. More precisely, with respect to the number of iterations, CPU time, and the number of evaluations of the objective function, the defined process outperforms the comparative schemes several times over.

Keywords Unconstrained optimization · Line search · Gradient descent methods · Newton method · Convergence rate

Mathematics Subject Classification (2010) 65K05 · 90C30 · 90C53
1 Analysis of the relevant gradient descent iterations

Our aim is to find an easy, operative way of solving unconstrained optimization problems, simply posed as

\min f(x), \quad x \in \mathbb{R}^n, \qquad (1.1)

knowing that \mathbb{R}^n is the set of n-tuples with elements in the set of real numbers \mathbb{R}. The objective function f \colon \mathbb{R}^n \to \mathbb{R} represents a concrete, generally nonlinear problem
that needs to be minimized. In this regard, we presume that the goal function f \colon \mathbb{R}^n \to \mathbb{R} is uniformly convex and twice continuously differentiable. Starting from the most common general expression of iterative methods for solving unconstrained optimization tasks,

x_{k+1} = x_k + t_k d_k, \qquad (1.2)
many authors have proposed variously defined iterations that can be applied in resolving these problems. These methods are usually gradient descent schemes. They differ from one another in how the two key elements of iteration (1.2) are defined: (1) the iterative step length t_k and (2) the direction vector d_k. Regarding the research proposed in this paper, we mainly mention the SM method introduced in [19] (i.e., the Accelerated gradient method), the TADSS method presented in [14] (i.e., the Transformed accelerated double step size method), and the HSM method described in [12] (i.e., the Hybrid SM method). In [19], the authors present an accelerated gradient descent model in which the step size is calculated through the features of Armijo's Backtracking inexact line search algorithm. Herein, the direction vector is chosen as the antigradient scaled by the reciprocal of an acceleration parameter \gamma_k > 0, as sketched below.
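To make the preceding description concrete, the following is a minimal, self-contained Python sketch of an SM-type accelerated gradient iteration with Armijo backtracking. The update rule used here for the acceleration parameter gamma (a second-order Taylor-based ratio, reset to 1 when it fails to stay positive) and the constants t0, beta, sigma are illustrative assumptions in the spirit of [19], not the exact formulas of the cited papers.

import numpy as np

def armijo_backtracking(f, x, g, d, t0=1.0, beta=0.5, sigma=1e-4):
    # Backtracking (Armijo) line search: shrink t until
    # f(x + t*d) <= f(x) + sigma * t * g^T d holds.
    t = t0
    fx = f(x)
    slope = sigma * g.dot(d)  # negative for a descent direction d
    while f(x + t * d) > fx + t * slope:
        t *= beta
    return t

def sm_type_gradient(f, grad, x0, tol=1e-6, max_iter=1000):
    # SM-type accelerated gradient iteration (illustrative sketch):
    #   d_k = -(1/gamma_k) * g_k,   x_{k+1} = x_k + t_k * d_k,
    # with gamma_k > 0 an acceleration parameter updated from a
    # second-order Taylor ratio and reset to 1 if that ratio is not positive.
    x = np.asarray(x0, dtype=float)
    gamma = 1.0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g / gamma                        # scaled antigradient direction
        t = armijo_backtracking(f, x, g, d)
        x_new = x + t * d
        step = t * d
        # gamma_new solves f(x_new) ~ f(x) + g.step + 0.5*gamma_new*||step||^2,
        # i.e. a scalar approximation of the Hessian along the step.
        num = 2.0 * (f(x_new) - f(x) - g.dot(step))
        gamma = num / step.dot(step) if num > 0 else 1.0
        x = x_new
    return x

# Usage on a strictly convex quadratic f(x) = 0.5 x^T A x:
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x.dot(A @ x)
grad = lambda x: A @ x
print(sm_type_gradient(f, grad, np.array([3.0, -2.0])))  # approx. [0, 0]

On a strictly convex quadratic such as the one in the usage block, the Armijo loop terminates after finitely many reductions of t because d_k is a descent direction, and the positivity test on the Taylor ratio keeps gamma well defined between iterations.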