Probabilistic inductive constraint logic
Fabrizio Riguzzi¹ · Elena Bellodi² · Riccardo Zese² · Marco Alberti¹ · Evelina Lamma²

Received: 22 March 2020 / Revised: 22 July 2020 / Accepted: 26 August 2020
© The Author(s) 2020
Abstract

Probabilistic logical models deal effectively with the uncertain relations and entities typical of many real-world domains. In the field of probabilistic logic programming, the aim is usually to learn models of this kind in order to predict specific atoms or predicates of the domain, called target atoms/predicates. However, it can also be useful to learn classifiers for interpretations as a whole: to this end, we consider the models produced by the inductive constraint logic system, represented by sets of integrity constraints, and we propose a probabilistic version of them. Each integrity constraint is annotated with a probability, and the resulting probabilistic logical constraint model assigns to each interpretation a probability of being positive. To learn both the structure and the parameters of such probabilistic models we propose the system PASCAL, for "probabilistic inductive constraint logic". Parameter learning can be performed using gradient descent or L-BFGS. PASCAL has been tested on 11 datasets and compared with a few statistical relational systems and with a system that builds relational decision trees (TILDE): it achieves better or comparable results in terms of area under the precision-recall and receiver operating characteristic curves, in a comparable execution time.
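As a concrete illustration of the model described above, the following minimal Python sketch scores an interpretation with probability-annotated integrity constraints. It assumes a semantics (not spelled out in the abstract) in which each violated grounding of a constraint independently makes the interpretation negative with that constraint's probability; the function and variable names are illustrative, not PASCAL's actual API.

```python
# Hypothetical sketch: scoring an interpretation with probabilistic
# integrity constraints. Assumed semantics: each constraint i carries
# a probability p_i, and each of the m_i violated groundings of
# constraint i independently makes the interpretation negative with
# probability p_i, so P(positive) = prod_i (1 - p_i)^{m_i}.

def prob_positive(constraints, interpretation):
    """constraints: list of (p, count_violations) pairs, where
    count_violations maps an interpretation (a set of ground atoms)
    to the number m of violated groundings of that constraint."""
    prob = 1.0
    for p, count_violations in constraints:
        m = count_violations(interpretation)
        prob *= (1.0 - p) ** m  # each violation "survives" with prob 1-p
    return prob

# Toy example: one constraint, with probability 0.8, violated whenever
# the atom 'a' is present in the interpretation.
constraints = [(0.8, lambda itp: 1 if "a" in itp else 0)]
print(prob_positive(constraints, {"a", "b"}))  # ~0.2 (one violation)
print(prob_positive(constraints, {"b"}))       # 1.0 (no violations)
```

An interpretation violating no constraint grounding keeps probability 1 of being positive; each additional violation multiplies in another factor of 1 − p, so heavily violating interpretations are pushed towards the negative class.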
Editors: Nikos Katzouris, Alexander Artikis, Luc De Raedt, Artur d'Avila Garcez, Ute Schmid, Jay Pujara.

* Riccardo Zese (corresponding author): [email protected]
  Fabrizio Riguzzi: [email protected]
  Elena Bellodi: [email protected]
  Marco Alberti: [email protected]
  Evelina Lamma: [email protected]

1 Dipartimento di Matematica e Informatica – University of Ferrara, via Machiavelli 30, 44121 Ferrara, Italy
2 Dipartimento di Ingegneria – University of Ferrara, Via Saragat 1, 44122 Ferrara, Italy
Machine Learning
1 Introduction

Uncertain information is being taken into account in an increasing number of application fields. Probabilistic logical models are a suitable framework for handling uncertain information, but they usually require expensive inference and learning procedures. For this reason, in the last decade many languages imposing restrictions on the form of sentences have been proposed. One possible way to pursue this goal is to apply learning from interpretations (De Raedt and Džeroski 1994; Blockeel et al. 1999) instead of the classical setting of learning from entailment. In fact, given fixed bounds on the maximal length of clauses and the maximal arity of literals, first-order clausal theories are polynomial-sample polynomial-time PAC-learnable (De Raedt and Džeroski 1994). Moreover, examples in learning from interpretations can be considered in isolation (Blockeel et al. 1999), so coverage tests are local and learning algorithms take a time that i
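The locality of coverage tests in learning from interpretations can be sketched as follows. This is a simplified illustration with hypothetical names, using ground clauses only: a clause body → head₁ ; … ; headₙ is satisfied by an interpretation I (a finite set of ground atoms) unless the whole body is true in I and no head atom is; each example interpretation is checked in isolation, without reference to any other example.

```python
# Minimal sketch (hypothetical names): a local coverage test in the
# learning-from-interpretations setting. A clause is a pair
# (body, heads) of sets of ground atoms; an interpretation is a
# finite set of ground atoms.

def satisfies(clause, interpretation):
    body, heads = clause
    if not body <= interpretation:       # body not fully true: clause holds
        return True
    return bool(heads & interpretation)  # otherwise some head must be true

def covers(theory, interpretation):
    """The interpretation is a model of the theory iff it satisfies
    every clause; only this one example is inspected."""
    return all(satisfies(c, interpretation) for c in theory)

# Toy ground theory: father(john) -> male(john).
theory = [({"father(john)"}, {"male(john)"})]
print(covers(theory, {"father(john)", "male(john)"}))  # True
print(covers(theory, {"father(john)"}))                # False
```

Because `covers` touches nothing outside the single interpretation passed to it, the cost of testing a candidate theory scales with the size of each example rather than with the whole dataset's background knowledge, which is what makes the setting tractable.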