Asymptotic Bayesian Generalization Error in Latent Dirichlet Allocation and Stochastic Matrix Factorization

ORIGINAL RESEARCH

Naoki Hayashi¹ · Sumio Watanabe²

¹ Simulation & Mining Division, NTT DATA Mathematical Systems Inc., 1F Shinano-machi-Renga-kan, Shinano-machi 35, Shinjuku-ku, Tokyo 160-0016, Japan
² Department of Mathematical and Computing Science, Tokyo Institute of Technology, W8-42, 2-12-1 Ookayama, Meguro-ku, Tokyo 152-8552, Japan

Received: 26 August 2019 / Accepted: 30 January 2020
© Springer Nature Singapore Pte Ltd 2020

Abstract

Latent Dirichlet allocation (LDA) is useful in document analysis, image processing, and many information systems; however, its generalization performance has remained unknown because LDA is a singular learning machine to which regular statistical theory cannot be applied. Stochastic matrix factorization (SMF) is a restricted matrix factorization in which the matrix factors are stochastic; each column of a factor lies in a simplex. SMF is applied to image recognition and text mining. SMF can be understood as a statistical model in which a stochastic matrix of given data is represented by a product of two stochastic matrices; its generalization performance has likewise remained unknown because of non-regularity. In this paper, using an algebraic and geometric method, we show the analytic equivalence of LDA and SMF: both have the same real log canonical threshold (RLCT), and consequently they asymptotically have the same Bayesian generalization error and the same log marginal likelihood. Moreover, we derive an upper bound on the RLCT and prove that it is smaller than half the dimension of the parameter space; hence, the Bayesian generalization errors of LDA and SMF are smaller than those of regular statistical models.

Keywords: Topic model · Latent Dirichlet allocation · Matrix factorization · Singular model · Bayesian learning · Algebraic geometry
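For orientation, the two quantitative claims in the abstract can be written down explicitly. The following is a sketch using standard conventions from singular learning theory; the symbols G_n, F_n, S_n, λ, m, and d are our own notation, not fixed by this excerpt. The expected Bayesian generalization error and the Bayesian free energy (the negative log marginal likelihood) behave asymptotically as

\mathbb{E}[G_n] = \frac{\lambda}{n} + o\!\left(\frac{1}{n}\right), \qquad F_n = n S_n + \lambda \log n - (m-1) \log\log n + O_p(1),

where n is the sample size, S_n is the empirical entropy, λ is the RLCT, and m is its multiplicity. For a regular statistical model, λ = d/2 with d the parameter dimension, so the bound λ < d/2 claimed above means the leading term of the generalization error is strictly smaller than in the regular case. Likewise, SMF in the sense above factors a column-stochastic data matrix as

A = BC, \qquad A_{ij},\, B_{ik},\, C_{kj} \ge 0, \qquad \sum_i A_{ij} = \sum_i B_{ik} = \sum_k C_{kj} = 1,

with A, B, C again our own symbols; each column of each factor lies in a probability simplex.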

Introduction

Latent Dirichlet Allocation

The topic model [13] is a ubiquitous learning machine used in many research areas, including text mining [8, 14], computer vision [21], marketing research [32], and geology [40]. Latent Dirichlet allocation (LDA) [8] is one of the most popular Bayesian topic models. It was devised for text analysis, and it exploits the information in documents by assigning topics to words. The topics are formulated as one-hot vectors drawn from categorical distributions that differ from document to document (Fig. 1).
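To make the generative description above concrete, here is the standard formulation of LDA as usually stated in the literature; the symbols K, α, β, θ_d, B, z_{d,i}, and w_{d,i} are our own notation, not fixed by this excerpt. For each document d and each word position i,

\theta_d \sim \mathrm{Dirichlet}(\alpha), \qquad B_{:,k} \sim \mathrm{Dirichlet}(\beta) \quad (k = 1, \dots, K),

z_{d,i} \mid \theta_d \sim \mathrm{Categorical}(\theta_d), \qquad w_{d,i} \mid z_{d,i} = k \sim \mathrm{Categorical}(B_{:,k}),

where θ_d is the topic proportion of document d, B_{:,k} is the word distribution of topic k, and z_{d,i} is the one-hot topic assignment of the i-th word. Marginalizing out z gives p(w_{d,i} = v) = \sum_k B_{v,k}\, \theta_{d,k}, i.e., a product of two stochastic matrices, which is the connection to SMF stated in the abstract.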

The standard inference algorithms, such as Gibbs sampling [14] and the variational Bayesian approximation [8], require an appropriate number of topics to be set. If the chosen number of topics is too small, different topics are inferred as the same one; that is, LDA suffers from underfitting. On the other hand, if the chosen number of topics is too large, the model overfits the training data. In practical applications, the true number of topics is unknown; thus, researchers and pra