New subspace minimization conjugate gradient methods based on regularization model for unconstrained optimization
Ting Zhao 1 · Hongwei Liu 1 · Zexian Liu 2,3

Received: 4 March 2019 / Accepted: 20 September 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract
In this paper, two new subspace minimization conjugate gradient methods based on p-regularization models are proposed, in which a special scaled norm in the p-regularization model is analyzed. Different choices of this scaled norm lead to different solutions of the p-regularized subproblem. Based on the analysis of these solutions in a two-dimensional subspace, we derive new search directions satisfying the sufficient descent condition. With a modified nonmonotone line search, we establish the global convergence of the proposed methods under mild assumptions; R-linear convergence of the proposed methods is also analyzed. Numerical results on the CUTEr library show that the proposed methods are superior to four conjugate gradient methods, proposed by Hager and Zhang (SIAM J. Optim. 16(1):170–192, 2005), Dai and Kou (SIAM J. Optim. 23(1):296–320, 2013), Liu and Liu (J. Optim. Theory Appl. 180(3):879–906, 2019), and Li et al. (Comput. Appl. Math. 38(1), 2019), respectively.

Keywords Conjugate gradient method · p-regularization model · Subspace technique · Nonmonotone line search · Unconstrained optimization

Mathematics Subject Classification (2010) 90C30 · 90C06 · 65K05

Hongwei Liu
[email protected]

Ting Zhao
zhaoting [email protected]

Zexian Liu
[email protected]

1 School of Mathematics and Statistics, Xidian University, Xi’an, 710126, China
2 State Key Laboratory of Scientific and Engineering Computing, Institute of Computational Mathematics and Scientific/Engineering Computing, AMSS, Chinese Academy of Sciences, Beijing, 100190, China
3 School of Mathematics and Statistics, Guizhou University, Guiyang, 550025, China
1 Introduction

Conjugate gradient (CG) methods are of great importance for solving the large-scale unconstrained optimization problem

$$\min_{x \in \mathbb{R}^n} f(x), \qquad (1)$$

where $f : \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function. A key feature of CG methods is that they require no matrix storage. The iterates $\{x_k\}$ satisfy the iterative scheme

$$x_{k+1} = x_k + \alpha_k d_k, \qquad (2)$$

where $\alpha_k$ is the stepsize and $d_k$ is the search direction defined by

$$d_k = \begin{cases} -g_k, & \text{if } k = 0, \\ -g_k + \beta_k d_{k-1}, & \text{if } k > 0, \end{cases} \qquad (3)$$
where $g_k = \nabla f(x_k)$, $f_k = f(x_k)$, and $\beta_k \in \mathbb{R}$ is called the CG parameter. For general nonlinear functions, different choices of $\beta_k$ give rise to different CG methods. Some well-known options for $\beta_k$ are FR [19], HS [27], PRP [39], DY [13] and HZ [24], given by

$$\beta_k^{FR} = \frac{\|g_{k+1}\|^2}{\|g_k\|^2}, \quad \beta_k^{HS} = \frac{g_{k+1}^T y_k}{d_k^T y_k}, \quad \beta_k^{PRP} = \frac{g_{k+1}^T y_k}{\|g_k\|^2}, \quad \beta_k^{DY} = \frac{\|g_{k+1}\|^2}{d_k^T y_k},$$

and

$$\beta_k^{HZ} = \frac{1}{d_k^T y_k} \left( y_k - 2 d_k \frac{\|y_k\|^2}{d_k^T y_k} \right)^T g_{k+1},$$
where $y_k = g_{k+1} - g_k$ and $\|\cdot\|$ denotes the Euclidean norm. Recently, other efficient CG methods have been proposed based on different ideas; see, e.g., [12, 18, 24, 25,
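As a concrete illustration of the notation above, the following Python sketch evaluates each of the five parameters from $g_k$, $g_{k+1}$ and $d_k$ (passed as NumPy vectors). The function names are hypothetical, and no safeguards (e.g., restarts or truncation, which practical implementations add to guarantee descent) are included; the raw formulas are shown only to clarify the notation.

```python
import numpy as np  # g, g_new, d below are 1-D NumPy vectors

def beta_FR(g, g_new, d):
    """Fletcher-Reeves: ||g_{k+1}||^2 / ||g_k||^2."""
    return g_new.dot(g_new) / g.dot(g)

def beta_HS(g, g_new, d):
    """Hestenes-Stiefel: g_{k+1}^T y_k / (d_k^T y_k), y_k = g_{k+1} - g_k."""
    y = g_new - g
    return g_new.dot(y) / d.dot(y)

def beta_PRP(g, g_new, d):
    """Polak-Ribiere-Polyak: g_{k+1}^T y_k / ||g_k||^2."""
    y = g_new - g
    return g_new.dot(y) / g.dot(g)

def beta_DY(g, g_new, d):
    """Dai-Yuan: ||g_{k+1}||^2 / (d_k^T y_k)."""
    y = g_new - g
    return g_new.dot(g_new) / d.dot(y)

def beta_HZ(g, g_new, d):
    """Hager-Zhang: (y_k - 2 d_k ||y_k||^2 / (d_k^T y_k))^T g_{k+1} / (d_k^T y_k)."""
    y = g_new - g
    dy = d.dot(y)
    return (y - 2.0 * d * y.dot(y) / dy).dot(g_new) / dy
```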
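A minimal driver for the generic scheme (2)–(3) follows. It assumes a simple Armijo backtracking line search in place of the modified nonmonotone line search developed later in the paper, so it is a sketch for orientation rather than the method proposed here; `cg_minimize` and `backtracking` are hypothetical names.

```python
import numpy as np

def backtracking(f, x, d, g, alpha=1.0, rho=0.5, c=1e-4, max_halvings=50):
    """Armijo backtracking line search (illustrative stand-in for the
    paper's modified nonmonotone line search)."""
    for _ in range(max_halvings):
        if f(x + alpha * d) <= f(x) + c * alpha * g.dot(d):
            break
        alpha *= rho
    return alpha

def cg_minimize(f, grad, x0, beta_rule, tol=1e-6, max_iter=1000):
    """Generic nonlinear CG scheme of (2)-(3):
    x_{k+1} = x_k + alpha_k d_k, with d_0 = -g_0 and
    d_k = -g_k + beta_k d_{k-1} for k > 0."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                        # d_0 = -g_0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        alpha = backtracking(f, x, d, g)          # stepsize alpha_k
        x_new = x + alpha * d                     # iteration (2)
        g_new = grad(x_new)
        d = -g_new + beta_rule(g, g_new, d) * d   # direction update (3)
        x, g = x_new, g_new
    return x
```

For example, `cg_minimize(f, grad, x0, beta_rule=beta_PRP)` runs the PRP variant; swapping in any of the other rules above changes only the CG parameter, not the iteration scheme.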