A Note on Nonclosed Tensor Formats

W. Hackbusch¹

Received: 24 February 2019 / Accepted: 23 August 2019 / © The Author(s) 2019

Abstract  Various tensor formats exist which allow a data-sparse representation of tensors. Some of these formats are not closed. The consequences are (i) the possible non-existence of best approximations and (ii) the divergence of the representing parameters when a tensor within the format tends to a border tensor outside the format. The paper tries to describe the nature of this divergence. A particular question is whether the divergence is uniform for all border tensors.

Keywords  Tensor representation · Tensor format · Nonclosed tensor format · Numerical instability

Mathematics Subject Classification (2010)  15A69 · 65F99

1 Introduction

Given (finite-dimensional) vector spaces Vj, we denote the corresponding tensor space by

    V = ⊗_{j=1}^{d} Vj .

Since usually the dimension dim(V) = ∏_{j=1}^{d} dim(Vj) of V is rather huge, the numerical treatment of tensors needs special data-sparse representation techniques. The oldest one is the r-term format: given a representation rank r ∈ N0 := N ∪ {0}, we form all tensors which can be written as a sum of r elementary tensors. This yields the subset

    Rr = { v ∈ V : v = ∑_{ν=1}^{r} ⊗_{j=1}^{d} vν^{(j)} with vν^{(j)} ∈ Vj }

of V.
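As a concrete illustration (a minimal sketch added here, assuming Python with NumPy and a hypothetical helper name r_term_tensor; it is not code from the paper), a tensor of Rr for d = 3 can be assembled from its factor vectors vν^{(j)} as follows:

```python
# Minimal sketch (illustrative, not from the paper): assembling a tensor of the
# r-term format R_r from its factor vectors v_nu^(j), here for d = 3.
import numpy as np

def r_term_tensor(factors):
    """factors[nu] = (v_nu^(1), ..., v_nu^(d)); returns the sum over nu of the
    elementary tensors v_nu^(1) (x) ... (x) v_nu^(d)."""
    v = None
    for vecs in factors:
        term = vecs[0]
        for w in vecs[1:]:
            term = np.tensordot(term, w, axes=0)   # outer (tensor) product
        v = term if v is None else v + term
    return v

# r = 2 terms in R^4 (x) R^3 (x) R^5: 2*(4+3+5) = 24 stored parameters
# instead of the 4*3*5 = 60 entries of the full tensor.
rng = np.random.default_rng(0)
factors = [(rng.standard_normal(4), rng.standard_normal(3), rng.standard_normal(5))
           for _ in range(2)]
v = r_term_tensor(factors)
print(v.shape)   # (4, 3, 5)
```

For r terms only r·(dim V1 + ··· + dim Vd) numbers are stored instead of the ∏_j dim(Vj) entries of the full tensor, which is the data-sparsity referred to above.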

Dedicated to the 65th birthday of Volker Mehrmann.

W. Hackbusch
[email protected]

1  Max-Planck-Institut Mathematik in den Naturwissenschaften, Inselstraße 22, 04103 Leipzig, Germany


Under certain conditions a tensor in Rr might have an essentially unique representation, i.e., different representations differ only by the order of the terms and the scaling of the vectors {vν^{(j)} : 1 ≤ j ≤ d} (cf. Section 3.2). Although this approach may be very successful for certain problems, it also has an unpleasant property which we are going to explain.

The tensor rank of v ∈ V is defined as the smallest r with v ∈ Rr:

    rank(v) := min { r ∈ N0 : v ∈ Rr } .

This allows us to describe Rr as {v ∈ V : rank(v) ≤ r}. In the case of d = 2, tensor spaces are isomorphic to matrix spaces, and the tensor rank coincides with the usual matrix rank. For matrices it is well known that a convergent sequence {Mk} with rank(Mk) ≤ r has a limit M with rank(M) ≤ r, i.e., the set of matrices of rank ≤ r is closed. This is not true for tensors of order d ≥ 3.

As an example consider the tensor-valued function

    w(t) := (a + tb) ⊗ (a + tb) ⊗ (a + tb) ∈ ⊗³V ,

where a, b ∈ V are linearly independent vectors (i.e., V = Vj and dim(V) ≥ 2). The derivative v := w′(0) is the symmetric tensor

    v = b ⊗ a ⊗ a + a ⊗ b ⊗ a + a ⊗ a ⊗ b.                                        (1)
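Indeed, expanding the cubic function w(t) gives

    w(t) = a ⊗ a ⊗ a + t (b ⊗ a ⊗ a + a ⊗ b ⊗ a + a ⊗ a ⊗ b) + t² (a ⊗ b ⊗ b + b ⊗ a ⊗ b + b ⊗ b ⊗ a) + t³ b ⊗ b ⊗ b,

so the derivative at t = 0 is the coefficient of the linear term, i.e., the tensor v in (1).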

The derivative can be approximated by Newton's divided difference quotient

    ṽ(h) := (1/h) (w(h) − w(0)) → v    for h → 0.                                 (2)
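The following small numerical sketch (an illustration added here, with the specific choices V = R², a = (1,0), b = (0,1), and NumPy as the tool; none of this is prescribed by the paper) shows the two effects at once: ṽ(h) is built from only two elementary tensors and converges to the rank-3 tensor v of (1), while the coefficient 1/h of its 2-term representation diverges as h → 0.

```python
# Illustrative sketch (assumes V = R^2, a = e1, b = e2; not from the paper):
# v~(h) = (w(h) - w(0))/h converges to the rank-3 tensor v of (1),
# while the coefficients of its 2-term representation blow up like 1/h.
import numpy as np

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

def outer3(x, y, z):
    return np.einsum('i,j,k->ijk', x, y, z)

def w(t):                                   # rank-1 tensor (a + t b)^(x 3)
    c = a + t * b
    return outer3(c, c, c)

v = outer3(b, a, a) + outer3(a, b, a) + outer3(a, a, b)   # the tensor (1)

for h in [1e-1, 1e-3, 1e-6]:
    v_tilde = (w(h) - w(0)) / h             # combination of two rank-1 tensors
    err = np.linalg.norm(v_tilde - v)       # O(h) approximation error
    print(f"h = {h:.0e}   ||v~(h) - v|| = {err:.1e}   coefficient 1/h = {1/h:.0e}")
```

This is exactly the divergence of the representing parameters mentioned in the abstract.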

Remark 1
(a) It can be proved that rank(v) = 3 for v in (1).
(b) Since w(t) is of rank 1, the approximation ṽ(h) is a combination of the two rank-1 tensors w(h) and w(0) and therefore satisfies rank(ṽ(h)) ≤ 2, i.e., ṽ(h) ∈ R2.
(c) From (a) and (b) we conclude that R2 is not closed.

Part (c) states that, in general, the r-term representation is not closed. This leads to the notion of the border