THEORETICAL ADVANCES
Efficient integration of generative topic models into discriminative classifiers using robust probabilistic kernels

Koffi Eddy Ihou1 · Nizar Bouguila1 · Wassim Bouachir2

Received: 7 January 2020 / Accepted: 9 September 2020
© Springer-Verlag London Ltd., part of Springer Nature 2020
Abstract

We propose an alternative to the generative classifier, which usually models the class conditionals and class priors separately and then uses Bayes' theorem to compute the posterior distribution of classes given the training set as a decision boundary. Because the SVM (support vector machine) is not a probabilistic framework, it is difficult to implement a direct posterior distribution-based discriminative classifier with it. Since the SVM lacks a full Bayesian analysis, we propose a hybrid (generative–discriminative) technique in which generative topic features obtained by Bayesian learning are fed to the SVM. The standard latent Dirichlet allocation topic model, with its Dirichlet (Dir) priors, can be described as a Dir–Dir topic model to characterize the Dirichlet distributions placed on the document and corpus parameters. Using very flexible conjugate priors to the multinomials, namely the generalized Dirichlet (GD) and the Beta-Liouville (BL), our proposed approach defines two new topic models: the BL–GD and the GD–BL. We take advantage of the geometric interpretation of our generative (latent) topic models, which associate a K-dimensional manifold (K being the number of topics) embedded in a V-dimensional feature space (the word simplex), where V is the vocabulary size. Under this structure, the low-dimensional topic simplex (the subspace) represents each document as a single point on its manifold and associates each document with a single probability distribution. The SVM, through its kernel trick, operates on these document probabilities for classification, using the maximum margin approach to learn the decision boundary. The key observation is that points, or documents, that are close to each other on the manifold should belong to the same class. Experimental results on text documents and images show the merits of the proposed framework.

Keywords Hybrid (generative–discriminative) models · Support vector machine · Conjugate priors · Beta-Liouville · Generalized Dirichlet · Probabilistic kernels · Document classification
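To make the pipeline in the abstract concrete, the sketch below feeds topic-simplex representations of documents into an SVM through a precomputed probabilistic kernel. It is a minimal, hypothetical illustration, not the paper's implementation: since the BL–GD and GD–BL models are not available in standard libraries, scikit-learn's standard LDA stands in as the generative topic model, a Bhattacharyya-style kernel on topic proportions stands in for the paper's robust probabilistic kernels, and the corpus and labels are toy data.

```python
# Minimal sketch of the hybrid generative-discriminative pipeline (assumptions:
# standard LDA replaces the paper's BL-GD / GD-BL models, and a Bhattacharyya-style
# kernel replaces the paper's probabilistic kernels; data below is toy data).
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import SVC

def bhattacharyya_kernel(P, Q):
    """Kernel on topic proportions: K(p, q) = sum_k sqrt(p_k * q_k)."""
    return np.sqrt(P) @ np.sqrt(Q).T

# Toy two-class corpus (hypothetical, for illustration only).
docs_train = ["the cat sat on the mat", "dogs and cats are pets",
              "stocks fell as markets closed", "investors traded shares today"]
y_train = [0, 0, 1, 1]
docs_test = ["my cat chased the dog", "the market rallied on trade news"]

# Generative step: learn a K-dimensional topic representation of each document.
vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(docs_train)
X_test = vectorizer.transform(docs_test)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta_train = lda.fit_transform(X_train)   # each row is a point on the topic simplex
theta_test = lda.transform(X_test)

# Discriminative step: SVM with a precomputed probabilistic kernel on the simplex.
svm = SVC(kernel="precomputed")
svm.fit(bhattacharyya_kernel(theta_train, theta_train), y_train)
print(svm.predict(bhattacharyya_kernel(theta_test, theta_train)))
```

The design point this illustrates is the division of labor: the topic model reduces each document to a low-dimensional probability vector, and the SVM's maximum-margin decision boundary is learned entirely through a kernel defined on those probability vectors.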
* Nizar Bouguila
[email protected]

Koffi Eddy Ihou
[email protected]

Wassim Bouachir
[email protected]

1 Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC H3G 1M8, Canada

2 Department of Science and Technology, TÉLUQ University, Montreal, QC H2S 3L5, Canada

1 Introduction

Machine learning and AI (artificial intelligence) have been responsible for a wide variety of applications such as object detection and recognition, information retrieval, and natural language understanding and processing, which remain very active research topics. However, object categorization has always received particular attention from researchers in the area of computer vision