MERACLE: Constructive Layer-Wise Conversion of a Tensor Train into a MERA



Kim Batselier¹ · Andrzej Cichocki² · Ngai Wong³

Received: 20 December 2019 / Revised: 17 July 2020 / Accepted: 19 July 2020
© The Author(s) 2020

Abstract

In this article, two new algorithms are presented that convert a given data tensor train into either a Tucker decomposition with orthogonal matrix factors or a multi-scale entanglement renormalization ansatz (MERA). The Tucker core tensor is never explicitly computed but stored as a tensor train instead, resulting in algorithms that are efficient in both computation and storage. Both the multilinear Tucker ranks and the MERA ranks are determined automatically by the algorithms for a given upper bound on the relative approximation error. In addition, an iterative algorithm with low computational complexity, based on solving an orthogonal Procrustes problem, is proposed for the first time to retrieve optimal rank-lowering disentangler tensors, which are a crucial component in the construction of a low-rank MERA. Numerical experiments demonstrate the effectiveness of the proposed algorithms together with the potential storage benefit of a low-rank MERA over a tensor train.

Keywords: Tensors · Tensor train · Tucker decomposition · HOSVD · MERA · Disentangler

Mathematics Subject Classification: 15A23 · 15A69 · 65F99
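For readers unfamiliar with the Procrustes subproblem mentioned in the abstract, the following is a minimal NumPy sketch (not taken from the paper; the function name procrustes and the test data are illustrative) of the classical SVD-based solution to min over orthogonal Q of ||Q A - B||_F. The paper's disentangler algorithm is built on this subproblem; how it embeds the solve into the layer-wise MERA construction is described in the body of the article, not here.

    import numpy as np

    def procrustes(A, B):
        # Orthogonal Procrustes problem: minimize ||Q @ A - B||_F over
        # orthogonal Q. The classical closed-form solution is Q = U @ Vt,
        # where U, Vt come from the SVD of B @ A.T.
        U, _, Vt = np.linalg.svd(B @ A.T)
        return U @ Vt

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 10))
    Q_true = np.linalg.qr(rng.standard_normal((4, 4)))[0]  # random orthogonal matrix
    B = Q_true @ A
    Q = procrustes(A, B)
    print(np.allclose(Q @ A, B))  # True: the rotation is recovered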

This research was partially supported by the Ministry of Education and Science of the Russian Federation (Grant 14.756.31.0001).

* Corresponding author: Kim Batselier, [email protected]

¹ Delft Center for Systems and Control, Delft University of Technology, Delft, the Netherlands
² Skolkovo Institute of Science and Technology (SKOLTECH), 121205 Moscow, Russia
³ Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China

1 Introduction

Tensor decompositions have played an important role over the past two decades in lifting the curse of dimensionality in a myriad of applications [3–5, 18, 26]. The key idea is the use of low-rank approximations. Many kinds of decompositions have consequently been developed, and each has its own rank definition. The canonical polyadic decomposition (CPD) [2, 15, 16] and the Tucker decomposition [2, 27] both generalize the notion of the matrix singular value decomposition (SVD) to higher-order tensors and have, therefore, received a lot of attention. More recent tensor decompositions are the tensor train (TT) [8, 9, 18, 21] and the hierarchical Tucker decomposition [12, 13]. It turns out that the latter two decompositions were already known in the quantum mechanics and condensed matter physics communities as the matrix product state (MPS) [23] and the tensor tree network (TTN) [25], respectively. The multi-scale entanglement renormalization ansatz (MERA) [10, 30] is an extension of the TTN decomposition that was recently proposed in quantum mechanics but has so far not received enough attention.
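Since the tensor train is the input object of both proposed conversion algorithms, the following minimal NumPy sketch of the standard TT-SVD construction (sequential truncated SVDs, in the spirit of [21]) may help make it concrete. It is not taken from the paper; the function names and the even distribution of the error budget over the d-1 SVDs are illustrative choices, assuming a tensor of order d >= 2.

    import numpy as np

    def tt_svd(X, eps=1e-10):
        # Split a d-way array (d >= 2) into TT/MPS cores G_k of shape
        # (r_{k-1}, n_k, r_k) via sequential truncated SVDs.
        dims, d = X.shape, X.ndim
        # distribute the relative error budget evenly over the d-1 SVDs
        delta = eps / np.sqrt(d - 1) * np.linalg.norm(X)
        cores, r_prev = [], 1
        C = X.reshape(dims[0], -1)
        for k in range(d - 1):
            C = C.reshape(r_prev * dims[k], -1)
            U, s, Vt = np.linalg.svd(C, full_matrices=False)
            # smallest rank whose discarded tail energy falls below delta
            tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]
            r = max(1, int(np.sum(tail > delta)))
            cores.append(U[:, :r].reshape(r_prev, dims[k], r))
            C = s[:r, None] * Vt[:r]  # carry the remainder to the next step
            r_prev = r
        cores.append(C.reshape(r_prev, dims[-1], 1))
        return cores

    def tt_full(cores):
        # Contract the TT cores back into a full array (for verification).
        Y = cores[0].reshape(-1, cores[0].shape[-1])
        for G in cores[1:]:
            r, n, r2 = G.shape
            Y = (Y @ G.reshape(r, n * r2)).reshape(-1, r2)
        return Y.reshape([G.shape[1] for G in cores])

    rng = np.random.default_rng(0)
    a, b, c = (rng.standard_normal(5) for _ in range(3))
    X = np.einsum('i,j,k->ijk', a, b, c)   # rank-1 tensor: all TT-ranks 1
    cores = tt_svd(X)
    print([G.shape for G in cores])        # [(1, 5, 1), (1, 5, 1), (1, 5, 1)]
    print(np.allclose(tt_full(cores), X))  # True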
