Rayleigh quotient minimization method for symmetric eigenvalue problems


Cun-Qiang Miao¹ · Hao Liu²

Received: 19 February 2018 / Revised: 27 August 2019 / Accepted: 24 September 2019
© SBMAC - Sociedade Brasileira de Matemática Aplicada e Computacional 2019

Abstract In this paper, we present a new method, referred to as the Rayleigh quotient minimization method, for computing one extreme eigenpair of a symmetric matrix. The method converges globally and attains a cubic convergence rate locally. In addition, inexact implementations of the Rayleigh quotient minimization method and their numerical stability are explored. Finally, we use numerical experiments to demonstrate the convergence properties and the competitiveness of the new method for solving symmetric eigenvalue problems.

Keywords Eigenvalue · Eigenvector · Rayleigh quotient · Convergence analysis · Newton method

Mathematics Subject Classification 15A18 · 65F15 · 65F50 · 34L15 · 65N25

Communicated by Jinyun Yuan.

Cun-Qiang Miao is supported by the National Natural Science Foundation of China (No. 11901361) and the Natural Science Foundation of Shandong Province (No. ZR2018BG002). Hao Liu is supported by the National Natural Science Foundation of China (No. 11401305 and No. 61573181).

Corresponding author: Cun-Qiang Miao, [email protected]
Hao Liu, [email protected]

1 School of Mathematics and Statistics, Central South University, Changsha 410083, People’s Republic of China
2 College of Science, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, People’s Republic of China

1 Introduction

For computing several extreme or interior eigenvalues of the standard eigenvalue problem

$$A x = \lambda x, \quad \text{with } \|x\| = 1, \tag{1.1}$$


with $A \in \mathbb{R}^{n \times n}$ a large sparse symmetric matrix, many popular methods are available, such as the steepest descent method (Hestenes and Karush 1951), the conjugate gradient method or, more generally, the locally optimal preconditioned conjugate gradient method (Knyazev 2001), Davidson-type iteration methods (Crouzeix et al. 1994; Sleijpen and Van der Vorst 1996), and the Lanczos (Grimes et al. 1994; Saad 1980, 2011) and Arnoldi methods or their variants (Jiang and Wu 2010; Miao 2018a, b; Morgan and Scott 1993; Parlett 1998; Saad 2011). The common framework of these methods is to generate a sequence of subspaces containing increasingly accurate approximations to the desired eigenpairs, and then to extract approximate eigenpairs by the Rayleigh–Ritz procedure (Parlett 1998), as sketched below.
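The Rayleigh–Ritz extraction just mentioned takes only a few lines of code. The following is a minimal sketch in Python/NumPy, not the authors' implementation; the helper name `rayleigh_ritz` and the random test matrix are illustrative assumptions:

```python
import numpy as np

def rayleigh_ritz(A, V):
    """Extract Ritz pairs of the symmetric matrix A from the
    subspace spanned by the columns of V."""
    # Orthonormalize the basis of the search subspace.
    Q, _ = np.linalg.qr(V)
    # Project A onto the subspace: H = Q^T A Q is small and symmetric.
    H = Q.T @ (A @ Q)
    # Solve the small symmetric eigenproblem exactly.
    theta, S = np.linalg.eigh(H)          # Ritz values, ascending
    return theta, Q @ S                   # Ritz vectors X = Q S

# Toy example: dense symmetric test matrix, 5-dimensional subspace.
rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                         # symmetrize
theta, X = rayleigh_ritz(A, rng.standard_normal((n, 5)))
# Each Ritz pair (theta_i, x_i) satisfies theta_i = x_i^T A x_i with
# ||x_i|| = 1; the residual norm measures its accuracy.
print(np.linalg.norm(A @ X[:, 0] - theta[0] * X[:, 0]))
```

As the search subspace is enlarged from step to step, the Ritz pairs become increasingly accurate approximations to the desired eigenpairs.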

As is well known, the Jacobi–Davidson method, proposed by Sleijpen and Van der Vorst (1996), is practically useful and effective for the eigenvalue problem (1.1). In each iteration of the Jacobi–Davidson method, one (approximately) solves a so-called correction equation

$$\left(I - x^{(k)} (x^{(k)})^{T}\right) \left(A - \theta^{(k)} I\right) \left(I - x^{(k)} (x^{(k)})^{T}\right) t = -r^{(k)}, \quad \text{for } t \perp x^{(k)}, \tag{1.2}$$

where $x^{(k)}$ is the current $k$-th approximation to the desired eigenvector with unit norm, $\theta^{(k)} = (x^{(k)})^{T} A x^{(k)}$ is its Rayleigh quotient, and $r^{(k)} = (A - \theta^{(k)} I) x^{(k)}$ is the corresponding residual.
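To make the projections in (1.2) concrete, here is one minimal way to apply them and solve the correction equation approximately with a Krylov solver. This is an illustrative sketch, not the implementation studied in this paper: the helper `jd_correction`, the choice of SciPy's MINRES, and the iteration cap are all assumptions:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

def jd_correction(A, x, theta, maxiter=20):
    """Approximately solve the Jacobi-Davidson correction equation
        (I - x x^T) (A - theta*I) (I - x x^T) t = -r,  t ⊥ x,
    for a symmetric A, a unit-norm x, and theta = x^T A x."""
    r = A @ x - theta * x                  # residual r^(k)

    def project(v):                        # apply I - x x^T
        return v - x * (x @ v)

    def matvec(v):
        # Projected operator: iterates stay orthogonal to x.
        w = project(v)
        return project(A @ w - theta * w)

    op = LinearOperator((x.size, x.size), matvec=matvec, dtype=x.dtype)
    # theta is the Rayleigh quotient of x, so -r is orthogonal to x;
    # a few MINRES steps give the rough solve that the correction
    # equation is typically asked for in practice.
    t, _ = minres(op, -r, maxiter=maxiter)
    return project(t)                      # enforce t ⊥ x exactly

# Hypothetical usage with a unit-norm approximation x:
#   theta = float(x @ (A @ x))
#   t = jd_correction(A, x, theta)   # t then expands the search subspace
```

The correction $t$ is then used to expand the search subspace, from which a new approximate eigenpair is extracted by the Rayleigh–Ritz procedure.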