Statistical Relational Learning with Soft Quantifiers
1 Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Ghent, Belgium
2 Department of Computer Science, Katholieke Universiteit Leuven, Leuven, Belgium
3 Statistical Relational Learning Group, University of Maryland, College Park, USA
4 Ghent University Global Campus, Incheon, South Korea
5 Statistical Relational Learning Group, University of California, Santa Cruz, USA
6 Center for Data Science, University of Washington, Tacoma, USA
Abstract. Quantification in statistical relational learning (SRL) is traditionally either existential or universal; humans, however, are often more inclined to express knowledge using soft quantifiers, such as “most” and “a few”. In this paper, we define the syntax and semantics of PSL^Q, a new SRL framework that supports reasoning with soft quantifiers, and present its most probable explanation (MPE) inference algorithm. To the best of our knowledge, PSL^Q is the first SRL framework that combines soft quantifiers with first-order logic rules for modeling uncertain relational data. Our experimental results for link prediction in social trust networks demonstrate that the use of soft quantifiers not only allows for a natural and intuitive formulation of domain knowledge, but also improves the accuracy of inferred results.
1 Introduction
Statistical relational learning (SRL) has become a popular paradigm for knowledge representation and inference in application domains with uncertain data of a complex, relational nature. A variety of SRL frameworks have been developed over the last decade, based on ideas from probabilistic graphical models, first-order logic, and programming languages (see e.g., [11,21,26]). Quantification in first-order logic is traditionally either existential (∃) or universal (∀). Given the strong roots of the existing SRL frameworks in (a subset of) first-order logic as a knowledge representation language, it is no surprise that these are the two kinds of quantification that are known and commonly used in
SRL, even though in many application scenarios humans might be more inclined to express knowledge using softer quantifiers, such as most and a few. For example, in models for social networks it is common to include the knowledge that the behaviour, beliefs, and preferences of friends all influence each other. How this information can be incorporated depends on the expressivity of the model. In a traditional probabilistic model, a dependency might be included for each pair of friends (corresponding to a universally quantified rule), each expressing the knowledge that it is more probable that two friends share a trait. An often-cited example in SRL contexts describing smoking behaviour among friends is the universally quantified rule ∀X∀Y : smokes(X) ∧ friend(X, Y) → smokes(Y).
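To make the contrast concrete, the minimal Python sketch below (not part of the paper) grounds a universally quantified rule over every friend of a person and compares it with a soft reading of “most”. The toy friends/trusts data, and the piecewise-linear interpretation of “most” with thresholds 0.3 and 0.8, are illustrative assumptions in the spirit of Zadeh-style relative quantifiers, not the exact semantics defined for PSL^Q.

```python
# Hypothetical illustration: universal vs. soft quantification over a
# small friendship network. The mapping for "most" (piecewise-linear in
# the satisfied proportion, thresholds 0.3 and 0.8) is an assumed
# Zadeh-style relative quantifier, not PSL^Q's exact semantics.

friends = {
    "alice": ["bob", "carol", "dave", "erin"],
    "bob": ["alice", "carol"],
}
trusts = {("alice", "bob"), ("alice", "carol"), ("alice", "dave")}

def universal(person):
    """Universal rule: *every* friend must be trusted (True/False)."""
    return all((person, f) in trusts for f in friends[person])

def most(person):
    """Soft rule: degree in [0, 1] to which *most* friends are trusted."""
    fs = friends[person]
    p = sum((person, f) in trusts for f in fs) / len(fs)  # proportion trusted
    # Piecewise-linear membership: 0 below 0.3, 1 above 0.8.
    return min(1.0, max(0.0, (p - 0.3) / 0.5))

print(universal("alice"))        # False: one friend (erin) is not trusted
print(round(most("alice"), 2))   # 0.9: 3 of 4 friends trusted -> high degree
```

Under the universal reading, a single untrusted friend falsifies the rule outright, whereas the soft reading degrades gracefully with the proportion of trusted friends, which is precisely the kind of knowledge soft quantifiers are meant to capture.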