Eigenvalue Problems
In this chapter, we describe domain decomposition and block matrix methods for large sparse symmetric eigenproblems [WI8, PA7, CH21, GO4, CI4, SA]. We focus on algorithms which iteratively approximate the minimal eigenvalue and corresponding eigenvector of a matrix, though most such methods can also be extended to simultaneously approximate several eigenvalues and their associated eigenvectors, see [KR, KU2, BO10, BO11, BO12, BR10, MA9, LU5] and [LU6, KN2, BO13, KN3, CH16]. Our discussion will be organized as follows. In Chap. 16.1, we describe some background on the symmetric eigenvalue problem. Following this, Chap. 16.2 describes preconditioned gradient methods for eigenvalue problems. Chap. 16.3 describes block matrix methods for eigenvalue problems, involving a Schur complement matrix. Chap. 16.4 describes Schwarz subspace algorithms for eigenvalue problems. We conclude our discussion with an outline of the modal synthesis Rayleigh-Ritz approximation of eigenproblems in Chap. 16.5. We focus primarily on the matrix formulation of the underlying algorithms.
16.1 Background

We consider an eigenvalue problem associated with an elliptic operator L. It seeks λ ∈ IR and a sufficiently smooth u(·) ≠ 0 such that:
\[
L\, u(x) \equiv -\nabla \cdot \left( a(x) \nabla u \right) + c(x)\, u = \lambda\, u(x) \quad \mbox{in } \Omega,
\qquad u = 0 \quad \mbox{on } \partial\Omega.
\]
A discretization of the eigenproblem, by a finite element or finite difference method, will seek λ ∈ IR and u ∈ IR^n with u ≠ 0, satisfying:
\[
A\, u = \lambda\, M\, u,
\tag{16.1}
\]
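To make the discrete eigenproblem (16.1) concrete, the following sketch assembles A and M for a one-dimensional model problem, assuming a(x) = 1, c(x) = 0 on Ω = (0, 1) with piecewise linear finite elements on a uniform mesh; the mesh size, variable names, and the use of SciPy's shift-invert Lanczos solver are illustrative choices, not prescribed by the text.

```python
# Minimal 1D sketch (assumptions: a = 1, c = 0, Omega = (0,1), uniform mesh,
# piecewise linear finite elements).  A is the stiffness matrix, M the
# tridiagonal mass matrix, and we approximate the smallest eigenpair of
# A u = lambda M u.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 199                       # number of interior mesh nodes
h = 1.0 / (n + 1)             # mesh width

# Stiffness matrix A = (1/h) * tridiag(-1, 2, -1).
A = (1.0 / h) * sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
# Consistent mass matrix M = (h/6) * tridiag(1, 4, 1).
M = (h / 6.0) * sp.diags([1.0, 4.0, 1.0], [-1, 0, 1], shape=(n, n), format="csc")

# Smallest generalized eigenvalue via shift-invert Lanczos (shift sigma = 0).
lam, u = spla.eigsh(A, k=1, M=M, sigma=0.0, which="LM")
print(lam[0], np.pi**2)       # lam[0] approximates pi^2, the lowest exact eigenvalue
```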
where A^T = A is a real symmetric matrix of size n, obtained by the discretization of the self-adjoint elliptic operator L, while M = M^T > 0 is a matrix of size n, corresponding to the mass matrix in finite element methods [ST14], or to the identity matrix M = I in finite difference methods. The Rayleigh quotient function associated with (16.1) is defined as:
\[
R(v) \equiv \frac{v^T A v}{v^T M v}, \quad \mbox{for } v \neq 0.
\tag{16.2}
\]
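As a small illustration of (16.2), the sketch below evaluates R(v) for an assumed 2 × 2 example with M = I; the matrices and vectors are test data introduced here for illustration only.

```python
# Evaluate the Rayleigh quotient R(v) = (v^T A v) / (v^T M v) from (16.2)
# on an assumed 2x2 test problem with M = I.
import numpy as np

def rayleigh_quotient(A, M, v):
    """Return R(v) for a nonzero vector v."""
    return (v @ (A @ v)) / (v @ (M @ v))

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 1 and 3
M = np.eye(2)

print(rayleigh_quotient(A, M, np.array([1.0, 0.0])))    # 2.0, between lambda_1 and lambda_2
print(rayleigh_quotient(A, M, np.array([1.0, -1.0])))   # 1.0 = lambda_1 (v is an eigenvector)
```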
Computing ∇R(u) and solving ∇R(u) = 0 yields:
\[
\nabla R(u) = \frac{2}{u^T M u} \left( A\, u - R(u)\, M\, u \right) = 0
\;\Longrightarrow\; A\, u = R(u)\, M\, u.
\]
Thus, if u is a critical point of R(·), then u will be an eigenvector of the generalized eigenvalue problem (16.1) corresponding to the eigenvalue λ = R(u). Since A and M are Hermitian, applying inner products yields that the generalized eigenvalues of (16.1) are real, see [ST13], and can be ordered as:
\[
\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n.
\]
We let u_k ∈ IR^n denote the eigenvector corresponding to eigenvalue λ_k:
\[
A\, u_k = \lambda_k\, M\, u_k.
\]
By the preceding, the minimum value of R(·) will be attained when:
\[
\lambda_1 = \lambda_{\min}\!\left( M^{-1} A \right) = R(u_1) = \min_{v \neq 0} R(v).
\]
Additionally, the following property will hold for the k'th eigenvalue:
\[
\lambda_k = R(u_k) = \min_{\{ v \neq 0 \,:\; v^T M u_i = 0, \; 1 \leq i \leq k-1 \}} R(v),
\]
see for instance [GO4], where the min-max characterization of the eigenvalues is also described for the case M = I.
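The critical point identity and the variational characterizations above can be checked numerically. The sketch below uses an assumed random symmetric positive definite test pair (A, M); the dense solver scipy.linalg.eigh supplies reference eigenpairs, and the checks confirm that ∇R vanishes at u_1, that R(v) ≥ λ_1 for an arbitrary v, and that R(v) ≥ λ_2 once v is made M-orthogonal to u_1.

```python
# Numerical check (assumed dense test matrices) of the critical-point identity
# and the variational characterization of lambda_1 and lambda_2 for A u = lambda M u.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 50
B = rng.standard_normal((n, n))
A = B + B.T + n * np.eye(n)            # symmetric test matrix
C = rng.standard_normal((n, n))
M = C @ C.T + n * np.eye(n)            # symmetric positive definite "mass" matrix

def R(v):
    return (v @ (A @ v)) / (v @ (M @ v))

def grad_R(v):
    return (2.0 / (v @ (M @ v))) * (A @ v - R(v) * (M @ v))

lam, U = eigh(A, M)                    # reference eigenpairs, lam[0] <= ... <= lam[-1]
u1 = U[:, 0]                           # eigh returns M-orthonormal eigenvectors

print(np.linalg.norm(grad_R(u1)))      # ~ 0: u1 is a critical point of R
print(abs(R(u1) - lam[0]))             # ~ 0: R(u1) equals lambda_1

v = rng.standard_normal(n)
print(R(v) >= lam[0])                  # True: R(v) >= lambda_1 for any v != 0

w = v - (v @ (M @ u1)) * u1            # enforce w^T M u1 = 0
print(R(w) >= lam[1] - 1e-10)          # True: the constrained minimum is lambda_2
```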
In the following, we indicate several traditional methods for determining the minimal eigenvalue of a generalized eigenvalue problem [WI8, PA7, CH21, GO4, CI4, SA].

Shifted Inverse Power Method. The shifted inverse power method is motivated by the following observation. Suppose M = I and that σ is a shift close to, but not equal to, the desired eigenvalue λ_1. Then the matrix (A − σ I)^{−1} has eigenvalues (λ_k − σ)^{−1} with the same eigenvectors u_k, and (λ_1 − σ)^{−1} will dominate in magnitude, so the power method applied to (A − σ I)^{−1} will converge rapidly to u_1.
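A minimal sketch of this observation, assuming M = I, a fixed shift, and a simple tridiagonal test matrix (all illustrative choices, not taken from the text), might look as follows.

```python
# Shifted inverse power method sketch for M = I: factor A - sigma*I once, then
# repeatedly solve and renormalize; the iterates converge to the eigenvector
# whose eigenvalue is closest to the shift sigma.  (Test matrix, shift, and
# tolerances are illustrative assumptions.)
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def shifted_inverse_power(A, sigma, tol=1e-10, max_iter=200):
    n = A.shape[0]
    factors = lu_factor(A - sigma * np.eye(n))   # factor once, reuse each iteration
    v = np.random.default_rng(2).standard_normal(n)
    v /= np.linalg.norm(v)
    lam = sigma
    for _ in range(max_iter):
        w = lu_solve(factors, v)                 # w = (A - sigma*I)^{-1} v
        w /= np.linalg.norm(w)
        lam_new = w @ (A @ w)                    # Rayleigh quotient estimate (M = I)
        if abs(lam_new - lam) < tol * max(1.0, abs(lam_new)):
            return lam_new, w
        lam, v = lam_new, w
    return lam, v

# Example: finite difference discretization of -u'' on (0,1) with M = I and a
# shift near the smallest eigenvalue.
n, h = 99, 1.0 / 100
A = (1.0 / h**2) * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
lam, u = shifted_inverse_power(A, sigma=5.0)
print(lam, np.pi**2)                             # approximately pi^2 for the lowest mode
```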