Quantum-inspired learning vector quantizers for prototype-based classification




S.I. : WSOM 2019

Thomas Villmann1 · Marika Kaden1 · Alexander Engelsberger1 · Jensun Ravichandran1 · Andrea Villmann2



Received: 17 May 2020 / Accepted: 6 November 2020
© The Author(s) 2020

Abstract

Prototype-based models such as Generalized Learning Vector Quantization (GLVQ) belong to the class of interpretable classifiers. Moreover, quantum-inspired methods are attracting growing attention in machine learning due to their potential for efficient computing, and their interesting mathematical perspectives offer new ideas for alternative learning scenarios. This paper proposes a quantum-computing-inspired variant of the prototype-based GLVQ for classification learning. We first consider kernelized GLVQ with real- and complex-valued kernels and their respective feature mappings. Thereafter, we explain how quantum-space ideas can be integrated into GLVQ using quantum bit vectors in the quantum state space Hn, and we show the relations to kernelized GLVQ. In particular, we explain the related feature mapping of data into the quantum state space Hn. A key feature of this approach is that Hn is a Hilbert space with particular inner-product properties, which ultimately restrict the prototype adaptations to unitary transformations. The resulting approach is denoted Qu-GLVQ. We provide the mathematical framework and give exemplary numerical results.

Keywords Quantum machine learning · Learning vector quantization · Classification · Interpretable models · Prototype-based models
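To make the prototype-based setting concrete, the following minimal sketch shows the classical GLVQ ingredients referred to in the abstract: labeled prototypes, nearest-prototype classification, and the relative distance difference that GLVQ minimizes during training. This is an illustrative sketch only (function names and the toy data are our own, not the authors' implementation), and it shows plain GLVQ in Euclidean space, before any kernelization or quantum-state mapping.

```python
import numpy as np

def glvq_mu(x, prototypes, proto_labels, label):
    """GLVQ classifier cost mu(x) = (d+ - d-)/(d+ + d-) in [-1, 1].

    d+ is the squared distance to the closest prototype of the correct
    class, d- to the closest prototype of any other class; mu < 0 means
    the sample is correctly classified.
    """
    d = np.sum((prototypes - x) ** 2, axis=1)   # squared Euclidean distances
    d_plus = d[proto_labels == label].min()     # best matching correct prototype
    d_minus = d[proto_labels != label].min()    # best matching incorrect prototype
    return (d_plus - d_minus) / (d_plus + d_minus)

def glvq_predict(x, prototypes, proto_labels):
    """Assign the class label of the overall nearest prototype."""
    d = np.sum((prototypes - x) ** 2, axis=1)
    return proto_labels[np.argmin(d)]

# Toy example: one prototype per class in 2-D
W = np.array([[0.0, 0.0], [1.0, 1.0]])   # prototype vectors
c = np.array([0, 1])                     # their class labels
x = np.array([0.1, -0.2])                # a sample near class 0
print(glvq_predict(x, W, c))             # nearest prototype is class 0
print(glvq_mu(x, W, c, 0) < 0)           # correct classification -> mu negative
```

In the quantum-inspired variant discussed later, the same cost structure is kept, but data and prototypes live in the quantum state space and prototype updates are restricted to unitary transformations.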

Thomas Villmann (corresponding author): [email protected]
Alexander Engelsberger: [email protected]
Jensun Ravichandran: [email protected]
Andrea Villmann: [email protected]
Marika Kaden: [email protected]

1 Saxon Institute for Computational Intelligence and Machine Learning (SICIM), University of Applied Sciences Mittweida, Technikumplatz 17, 09648 Mittweida, Germany

2 Berufliches Schulzentrum Döbeln-Mittweida, Schulteil Mittweida, Poststraße 13, 09648 Mittweida, Germany

1 Introduction

Classification learning remains one of the main tasks in machine learning [5]. Although powerful methods are available, there is still a need for improvements and for alternatives to existing strategies. Great progress was achieved with the realization of deep networks [15, 28], which have overtaken the previously dominant support vector machines (SVM) in classification learning [11, 51]. However, deep architectures have the disadvantage that their interpretability is difficult at best. Therefore, great effort is currently spent on explaining deep models; see [40] and references therein. Yet due to the complexity of deep networks, this is often impossible [70]. Thus, alternatives are required for many applications, for example in medicine [39]. A promising alternative is the concept of distance-based