Quantum locally linear embedding for nonlinear dimensionality reduction
Xi He1 · Li Sun1 · Chufan Lyu1 · Xiaoting Wang1
Received: 5 November 2019 / Accepted: 12 August 2020 © Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract

Reducing the dimension of nonlinear data is crucial in data processing and visualization. The locally linear embedding (LLE) algorithm is a representative nonlinear dimensionality reduction method that preserves the original manifold structure well. In this paper, we present two implementations of the quantum locally linear embedding (QLLE) algorithm to perform nonlinear dimensionality reduction on quantum devices. The first implementation, the linear-algebra-based QLLE algorithm, utilizes quantum linear algebra subroutines to reduce the dimension of the given data. The second, the variational quantum locally linear embedding (VQLLE) algorithm, utilizes a variational hybrid quantum-classical procedure to acquire the low-dimensional data. The classical LLE algorithm requires time polynomial in N, where N is the total number of the original high-dimensional data points. Compared with the classical LLE, the linear-algebra-based QLLE achieves a quadratic speedup in the number and dimension of the given data. The VQLLE can be implemented on near-term quantum devices in two different designs. In addition, numerical experiments are presented to demonstrate that the two implementations in our work achieve the procedure of locally linear embedding.

Keywords Locally linear embedding · Quantum dimensionality reduction · Quantum machine learning · Quantum computation
This work is supported by the National Key R&D Program of China, Grant No. 2018YFA0306703.

1 Xi He, [email protected], Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu 610054, Sichuan, China

1 Introduction

Dimensionality reduction in machine learning refers to mapping data points in an original high-dimensional space into some low-dimensional space. The reason for reducing the data dimension is that the original high-dimensional data contain redundant information, which sometimes causes errors in practical applications. By reducing the data dimension, we hope to suppress the errors caused by noisy information and to improve the accuracy of identification and other applications. In some cases, we also want to uncover the intrinsic structure of the data through dimensionality reduction. Among the many dimensionality reduction algorithms, one of the most widely used is principal component analysis (PCA). PCA is a linear dimensionality reduction algorithm that embeds the given data into a linear low-dimensional subspace [1,2]. In addition to PCA, the linear discriminant analysis (LDA) algorithm [3] and the A-optimal projection (AOP) algorithm [4] both show outstand-
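As a point of reference for the linear baseline discussed above, the PCA projection of data onto a low-dimensional linear subspace can be sketched as follows. This is a minimal illustration on synthetic data, not code from the paper; the function name `pca` and the toy dataset are our own choices for exposition.

```python
import numpy as np

def pca(X, k):
    """Project N x D data X onto its top-k principal components."""
    Xc = X - X.mean(axis=0)  # center the data
    # SVD of the centered data; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T  # N x k low-dimensional embedding

rng = np.random.default_rng(0)
# 100 points lying exactly on a 2-D plane embedded in 5-D space
X = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 5))
Y = pca(X, 2)
print(Y.shape)  # (100, 2)
```

Because PCA only composes centering with an orthogonal projection, it cannot unfold curved manifolds; this limitation is what motivates nonlinear methods such as LLE.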