Jeffrey Meets Kolmogorov
Jeffrey Meets Kolmogorov: A General Theory of Conditioning

Alexander Meehan¹ · Snow Zhang¹

Received: 16 January 2019 / Accepted: 19 December 2019 / © Springer Nature B.V. 2020
Abstract

Jeffrey conditionalization is a rule for updating degrees of belief in light of uncertain evidence. It is usually assumed that the partitions involved in Jeffrey conditionalization are finite and only contain positive-credence elements. But there are interesting examples, involving continuous quantities, in which this is not the case.

Q1 Can Jeffrey conditionalization be generalized to accommodate continuous cases?

Meanwhile, several authors, such as Kenny Easwaran and Michael Rescorla, have been interested in Kolmogorov's theory of regular conditional distributions (rcds) as a possible framework for conditional probability which handles probability-zero events. However, the theory faces a major shortcoming: it seems messy and ad hoc.

Q2 Is there some axiomatic theory which would justify and constrain the use of rcds, thus serving as a possible foundation for conditional probability?

These two questions appear unrelated, but they are not, and this paper answers both. We show that when one appropriately generalizes Jeffrey conditionalization as in Q1, one obtains a framework which necessitates the use of rcds. It is then a short step to develop a general theory which addresses Q2, which we call the theory of extensions. The theory is a formal model of conditioning which recovers Bayesian conditionalization, Jeffrey conditionalization, and conditionalization via rcds as special cases.

Keywords Conditional probability · Jeffrey conditionalization · Kolmogorovian conditionalization · Conditional distributions · Probability-zero conditioning
Alexander Meehan
[email protected]

Snow Zhang
[email protected]

¹ Department of Philosophy, Princeton University, 1879 Hall, Princeton, NJ 08544, USA
1 Introduction

Jeffrey conditionalization is a well-known generalization of Bayesian conditionalization. It specifies how an agent should update her degrees of belief in light of uncertain evidence. Given a change in the agent's credences over some finite partition of possibilities {G1, G2, ..., Gn} with non-zero prior credence (P(Gi) > 0 for all i), Jeffrey's rule says how the rest of the agent's credences should be redistributed [16]. It is natural to ask whether the rule can be generalized to infinite partitions with credence-zero elements [4, 15].

Imagine it's a misty morning out on the lake. You are trying to discern your distance to the pier. Every time you look up, more of the mist has cleared and you're able to get a better view. Your smooth credence distribution over the possible distances, which form an infinite partition, starts out wide and significantly narrows upon each observation.

Q1 Can Jeffrey conditionalization be generalized to accommodate this type of continuous case?¹

Less well-known to philosophers is Kolmogorov's [18] generalization of conditional probability, in particular his notion
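For reference, the finite rule under discussion can be stated as follows. This is the standard formulation of Jeffrey's rule (the prime notation for the post-update credence function is a common convention, not taken from this paper):

```latex
% Jeffrey's rule: given a prior credence P, a finite partition {G_1, ..., G_n}
% with P(G_i) > 0 for all i, and new credences P'(G_i) over the partition,
% the posterior credence in any event A is
P'(A) \;=\; \sum_{i=1}^{n} P(A \mid G_i)\, P'(G_i).
```

When the new evidence makes some G_k certain, so that P'(G_k) = 1, the sum collapses to P'(A) = P(A | G_k), recovering Bayesian conditionalization as a special case. The continuous lake example strains both assumptions at once: the partition of possible distances is infinite, and each individual distance typically has prior credence zero, so the conditional probabilities P(A | G_i) in the sum are undefined on the classical ratio definition.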