Optimal orthogonalization processes


https://doi.org/10.1007/s11425-018-1711-x

Marko Huhtanen∗ & Pauliina Uusitalo

Department of Electrical and Information Engineering, University of Oulu, Oulu 90570, Finland
Email: [email protected], [email protected]

Received November 13, 2018; accepted June 5, 2020

Abstract

Two optimal orthogonalization processes are devised to orthogonalize, possibly approximately, the columns of a very large and possibly sparse matrix A ∈ Cn×k. Algorithmically the aim is, at each step, to optimally decrease the nonorthogonality of all the columns of A. One process relies on translated small rank corrections; the other is a polynomial orthogonalization process for performing the Löwdin orthogonalization. The steps rely on iterative methods combined, preferably, with preconditioning, which can have a dramatic effect on how fast the nonorthogonality decreases. The speed of orthogonalization depends on how bunched the singular values of A are, modulo the number of steps taken. These methods put the steps of the Gram-Schmidt orthogonalization process into perspective regarding their (lack of) optimality. The constructions are entirely operator theoretic and can be extended to infinite dimensional Hilbert spaces.

Keywords: optimal orthogonalization, sparse matrix, Gram-Schmidt orthogonalization, Löwdin orthogonalization, polynomial orthogonalization, implicit orthogonalization, preconditioning, Gram matrix, frame inequality

MSC(2010): 65T99, 65F25, 65F50

Citation: Huhtanen M, Uusitalo P. Optimal orthogonalization processes. Sci China Math, 2020, 63, https:// doi.org/10.1007/s11425-018-1711-x

1  Introduction

This paper is concerned with optimal orthogonalization, possibly approximate, of the columns of a very large and possibly sparse matrix A ∈ Cn×k with linearly independent columns. Preconditioning is investigated to speed up orthogonalization processes. Denote by Q the set of n-by-k matrices with orthonormal columns. Interpreting the steps of the Gram-Schmidt process as translated rank-one corrections shows that, among such methods, they are far from optimal. For optimality, consider

    min_{X ∈ X, Q ∈ Q} ∥AX − Q∥,    (1.1)
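With X fixed to the identity, problem (1.1) asks for the matrix with orthonormal columns closest to A; in any unitarily invariant norm the minimizer is the unitary polar factor UV∗ from the thin SVD A = UΣV∗, which is exactly the Löwdin orthogonalization. A minimal NumPy sketch (dimensions and test matrix are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5))  # tall matrix with linearly independent columns

# Closest matrix with orthonormal columns: the unitary polar factor
# U V^* from the thin SVD A = U diag(s) V^*.
U, s, Vh = np.linalg.svd(A, full_matrices=False)
Q = U @ Vh  # Löwdin orthogonalization of the columns of A

# Q has orthonormal columns ...
assert np.allclose(Q.T @ Q, np.eye(5), atol=1e-10)

# ... and is at least as close to A as the Gram-Schmidt (QR) factor.
Q_gs, _ = np.linalg.qr(A)
assert np.linalg.norm(A - Q, "fro") <= np.linalg.norm(A - Q_gs, "fro") + 1e-12
```

The comparison with the QR factor illustrates the paper's point: the Gram-Schmidt factor also has orthonormal columns, but it is not the minimizer of ∥A − Q∥.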

where the subset X ⊂ Ck×k is determined by the constraints that X should satisfy. It is assumed that X is set in such a way that the minimum exists. There should also exist a parameter j that can be increased so as to improve the approximation. Moreover, for moderate values of j, matrix-vector products with X should be inexpensive. We are primarily interested in translated small rank corrections CI + Fj, where Fj ⊂ Ck×k denotes the set of matrices of rank at most j with j ≪ n. Also a polynomial orthogonalization scheme is devised to perform a sparse Löwdin orthogonalization of quantum chemistry [25, 26]. In both cases the aim is, at each step, to optimally decrease nonorthogonality.

∗ Corresponding author
© Science China Press and Springer-Verlag GmbH Germany, part of Springer Nature 2020
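As a point of reference for such polynomial schemes, the classical Newton-Schulz iteration performs the Löwdin orthogonalization using matrix products only: Qk+1 = Qk(3I − Q∗kQk)/2 converges to the unitary polar factor whenever all singular values of the starting matrix lie in (0, √3). The sketch below is this classical iteration, not the paper's optimal variant; the normalization and step count are illustrative assumptions:

```python
import numpy as np

def newton_schulz_orthogonalize(A, steps=30):
    """Polynomial (inverse-free) orthogonalization: iterate
    Q <- Q (3 I - Q^* Q) / 2, which converges to the unitary polar
    factor when all singular values of the start lie in (0, sqrt(3))."""
    Q = A / np.linalg.norm(A, 2)  # scale so singular values lie in (0, 1]
    I = np.eye(A.shape[1])
    for _ in range(steps):
        Q = Q @ (3 * I - Q.T @ Q) / 2
    return Q

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 4))
Q = newton_schulz_orthogonalize(A)
assert np.allclose(Q.T @ Q, np.eye(4), atol=1e-8)
```

Each step applies the scalar map s ↦ s(3 − s²)/2 to every singular value, pushing them toward 1; this is why, as the abstract notes, convergence speed depends on how bunched the singular values of A are.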