Conditional learning through causal models
Jonathan Vandenburgh
Received: 24 April 2020 / Accepted: 23 September 2020
© Springer Nature B.V. 2020
Abstract
Conditional learning, where agents learn a conditional sentence 'If A, then B,' is difficult to incorporate into existing Bayesian models of learning. This is because conditional learning is not uniform: in some cases, learning a conditional requires decreasing the probability of the antecedent, while in other cases, the antecedent probability stays constant or increases. I argue that how one learns a conditional depends on the causal structure relating the antecedent and the consequent, leading to a causal model of conditional learning. This model extends traditional Bayesian learning by incorporating causal models into agents' epistemic states. On this theory, conditional learning proceeds in two steps. First, an agent learns a new causal model with the appropriate relationship between the antecedent and the consequent. Then, the agent narrows down the set of possible worlds to include only those which make the conditional proposition true. This model of learning can incorporate both standard cases of Bayesian learning and the non-uniform learning required to learn conditional information.

Keywords Conditionals · Bayesian Learning · Causal Models
I would like to thank Fabrizio Cariani, as well as an anonymous reviewer, for helpful comments on an earlier version of this paper.

Jonathan Vandenburgh
[email protected]
Northwestern University, 1880 Campus Dr., Evanston, IL 60208, USA

1 Introduction

Suppose someone is looking for their keys in drawers A, B and C; they think each drawer is equally likely to contain the keys, so the probability that the keys are in any given drawer is 1/3. Upon learning that the keys are not in drawer A, the most reasonable way to change one's beliefs is to believe that the keys are in either drawer B or drawer C, each with probability 1/2. This kind of learning is successfully captured by the Bayesian theory of learning, which assumes that beliefs are represented by a probability distribution and that when someone learns a proposition A, their beliefs change from some prior distribution Pr to a posterior distribution Pr_A according to Bayes' Theorem. The posterior distribution Pr_A is given by conditionalization, so for any proposition B, Pr_A(B) = Pr(B | A) = Pr(B ∧ A)/Pr(A). Bayesian learning therefore predicts that learning is uniform, so the same updating procedure applies for any proposition in any situation.

Now suppose the information one learns is in the form of a conditional sentence 'If A, then B.' While one might expect conditional learning to be uniform in the same way Bayesian learning is, many examples (Douven 2012) suggest this is not the case. Consider, for example, the conditionals 'If my brother is here, the keys are in drawer C' and 'If the keys are in drawer A, then someone moved them.' In the first example, the conditional should not change the credence that the s
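To make the conditionalization rule concrete, it helps to work through the drawer example above with the formula just stated (here ¬A abbreviates 'the keys are not in drawer A' and B abbreviates 'the keys are in drawer B'; the abbreviations are introduced only for illustration, while the numbers are those given in the example):

Pr_¬A(B) = Pr(B | ¬A) = Pr(B ∧ ¬A)/Pr(¬A) = (1/3)/(2/3) = 1/2

The same calculation applies to drawer C, so learning ¬A shifts the credence in each remaining drawer from 1/3 to 1/2, which is exactly the verdict described at the start of the section.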