Beyond Prediction: Directions for Probabilistic and Relational Learning
Abstract. Research over the past several decades in learning logical and probabilistic models has greatly increased the range of phenomena that machine learning can address. Recent work has extended these boundaries even further by unifying these two powerful learning frameworks. However, new frontiers await. Current techniques are capable of learning only a subset of the knowledge needed by practitioners in important domains, and further unification of probabilistic and logical learning offers a unique ability to produce the full range of knowledge needed in a wide range of applications.

Keywords: Statistical relational learning, causal inference, quasi-experimental design.
H. Blockeel et al. (Eds.): ILP 2007, LNAI 4894, pp. 4-21, 2008. © Springer-Verlag Berlin Heidelberg 2008

1 Introduction

The past decade has produced substantial progress in unifying methods for learning logical and probabilistic models. We now have a large array of representations, algorithms, performance characterizations, and applications that bring together these two keys to effective learning and reasoning. Much of this work has sought to preserve key attributes of existing learning algorithms (e.g., efficiency) while extending the expressiveness of the representations that can be learned.

However, combining probabilistic and logical techniques may open up entirely new capabilities that should not be ignored. Specifically, these recent advances in machine learning for relational data sets have revealed a surprising new opportunity to learn causal models of complex systems. The opportunity is a potentially deep and unexploited technical interaction between two previously unconnected areas: (1) work in statistical relational learning; and (2) work on quasi-experimental design in the social sciences. Specifically, the type of new data representations conceived and exploited recently by researchers in statistical relational learning (SRL) may provide all the information needed to automatically apply powerful statistical techniques from the social sciences known as quasi-experimental designs (QEDs). QEDs allow a researcher to exploit unique characteristics of sub-populations of data to make strong inferences about cause-and-effect
dependencies that would otherwise be undetectable. Such causal dependencies indicate whether manipulating one variable will affect the value of another variable, and these inferences are made from non-experimental data. To date, QEDs have been painstakingly applied by social scientists in an entirely manual way. However, data representations from SRL that record relations (organizational, temporal, spatial, and others) could facilitate automatic application of QEDs. Constructing methods that automatically identify sub-populations of data that meet the requirements of specific QEDs would enable strong and automatic causal inferences from non-experimental data. This fusion of work in SRL and QED would lead to: (1) large increases in the percen
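To make the idea of exploiting sub-populations concrete, the following is a minimal sketch of one common QED, a regression-discontinuity design, on synthetic data. Units whose running score crosses a cutoff receive treatment; units just above and just below the cutoff are otherwise nearly identical, so the jump in outcomes at the cutoff is a local estimate of the causal effect. All names, parameters, and the data-generating process here are illustrative assumptions, not taken from the paper.

```python
# Sketch of a regression-discontinuity design (RDD) on synthetic data.
# The sub-population exploited is the set of units with scores in a
# narrow bandwidth around the assignment cutoff.
import random

random.seed(0)

CUTOFF = 50.0       # treatment is assigned when score >= CUTOFF
BANDWIDTH = 5.0     # half-width of the window around the cutoff
TRUE_EFFECT = 3.0   # causal effect baked into the synthetic data

def simulate_unit():
    """One unit: outcome depends smoothly on score, plus a treatment jump."""
    score = random.uniform(0, 100)
    treated = score >= CUTOFF
    outcome = 0.2 * score + (TRUE_EFFECT if treated else 0.0) + random.gauss(0, 1.0)
    return score, outcome

def fit_line(points):
    """Ordinary least-squares slope and intercept for (x, y) pairs."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def rdd_estimate(units, cutoff=CUTOFF, h=BANDWIDTH):
    """Fit a line on each side of the cutoff within the bandwidth and
    return the gap between their predictions at the cutoff."""
    below = [(s, y) for s, y in units if cutoff - h <= s < cutoff]
    above = [(s, y) for s, y in units if cutoff <= s <= cutoff + h]
    sb, ib = fit_line(below)
    sa, ia = fit_line(above)
    return (sa * cutoff + ia) - (sb * cutoff + ib)

units = [simulate_unit() for _ in range(20000)]
effect = rdd_estimate(units)
print(round(effect, 2))  # recovers a value close to TRUE_EFFECT
```

An automated SRL-plus-QED system of the kind the paper envisions would, in effect, discover candidate cutoff-like structures (temporal, organizational, or spatial) in relational data and then run this style of within-bandwidth comparison, rather than having an analyst select the sub-population by hand.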