Bayesians Still Don't Learn from Conditionals
Mario Günther and Borut Trpin
Presentation at “Bayesian Epistemology: Perspectives and Challenges”, Virtual Conference hosted by the Munich Center for Mathematical Philosophy.
How should a rational Bayesian agent update her beliefs when she learns an indicative conditional? Douven (2016, Ch. 6) surveys the extant answers to this question and concludes that none is satisfactory. Should Bayesians then abandon all hope and admit defeat when it comes to updating on conditionals? Not according to Eva et al. (2019), who put forth ‘a normatively privileged updating procedure for this kind of learning’ (p. 1). Their procedure can be roughly glossed as follows. A Bayesian agent learns ‘If A then C’ by minimizing the inverse Kullback-Leibler divergence (IKL) subject to two constraints, one on the conditional probability Q(C|A), the other on the probability of the antecedent Q(A). The authors argue that learning by minimizing the IKL is normatively privileged, for two reasons: this minimization method generalizes Jeffrey conditionalization, an update rule that is widely embraced as rational, and it minimizes expected epistemic inaccuracy. However, learning ‘If A then C’ by minimizing the IKL subject only to the constraint on Q(C|A) keeps the probability of the antecedent fixed. In general, as Eva et al. (2019, p. 32) note, this is implausible: in many examples, the probability of the antecedent should intuitively change. Hence, they impose an additional constraint on Q(A).

Here we will show that Eva et al.’s (2019, p. 32) ‘general updating strategy for learning non-strict conditionals’ fails for two reasons. Firstly, their updating strategy does not account for cases where propositions besides the antecedent and consequent are relevant. This is surprising, to say the least, because they write in the first half of their paper ‘If more than two relevant propositions are involved, then these have to be accounted for and modelled in a proper way’ (p. 15). Secondly, the constraint on Q(A) leads in many scenarios to intuitively false results. We generalize their updating strategy to overcome the restriction to two variables. Yet, as we will see, the constraint on Q(A) remains inappropriate for non-evidential conditionals.

We will present Eva et al.’s (2019) updating strategy before we investigate its claimed generality. Then we will show that their updating strategy fails for a number of examples, including the Ski Trip Example. We generalize their updating strategy to give some hope to the Bayesians. Yet we will show that Bayesians still don’t learn from conditionals.
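To make the target procedure concrete, here is a minimal numerical sketch (ours, not Eva et al.’s): it minimizes a Kullback-Leibler-style divergence between a prior P and a posterior Q over the four worlds generated by A and C, subject to the two constraints on Q(C|A) and Q(A). The direction of the divergence, the flat prior, and the target values 0.9 and 0.6 are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch only: constrained divergence minimization over the four worlds
# [A&C, A&~C, ~A&C, ~A&~C], subject to Q(C|A) = q_c_given_a and Q(A) = q_a.

def divergence(q, p):
    # Taken here as sum_w p(w) * log(p(w)/q(w)); whether this or the reverse
    # direction matches Eva et al.'s 'inverse KL' is an assumption of this sketch.
    return float(np.sum(p * np.log(p / q)))

def update_on_conditional(p, q_c_given_a, q_a):
    constraints = [
        {"type": "eq", "fun": lambda q: np.sum(q) - 1.0},                     # Q is a probability
        {"type": "eq", "fun": lambda q: q[0] + q[1] - q_a},                   # Q(A) = q_a
        {"type": "eq", "fun": lambda q: q[0] - q_c_given_a * (q[0] + q[1])},  # Q(C|A) = q_c_given_a
    ]
    bounds = [(1e-9, 1.0)] * 4
    res = minimize(divergence, x0=p, args=(p,), bounds=bounds, constraints=constraints)
    return res.x

# Illustration: flat prior, learning a conditional that pushes Q(C|A) to 0.9
# while the second constraint pins Q(A) at 0.6.
prior = np.array([0.25, 0.25, 0.25, 0.25])
posterior = update_on_conditional(prior, q_c_given_a=0.9, q_a=0.6)
print(posterior.round(3))  # roughly [0.54, 0.06, 0.2, 0.2]
```

On this toy input the A-worlds are fixed by the two constraints, while the not-A-worlds retain their prior proportions; whether the second constraint on Q(A) is the right one is exactly what the talk examines.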
Details of the conference here: https://www.mcmp.philosophie.uni-muenchen.de/events/workshops/container/bayesian_epistemology_2020/index.html