Descriptive Uncertainty and Maximizing Expected Choice-Worthiness

Andrew Kernohan 1

1 Department of Philosophy, Dalhousie University, 6135 University Avenue, PO Box 15000, Halifax, NS B3H 4R2, Canada

* Correspondence: Andrew Kernohan, [email protected]

Accepted: 28 October 2020 / © Springer Nature B.V. 2020

Abstract

A popular model of normative decision-making under uncertainty suggests choosing the option with the maximum expected moral choice-worthiness (MEC), where the choice-worthiness values from each moral theory, which are assumed commensurable, are weighted by credence and combined. This study adds descriptive uncertainty about the non-moral facts of a situation into the model by treating choice-worthiness as a random variable. When agents face greater descriptive uncertainty, the choice-worthiness random variable will have a greater spread and a larger standard deviation. MEC, as a decision rule, is sensitive only to the expected value of the random variable and not to its standard deviation. For example, MEC is insensitive to cases in which an option with a large degree of descriptive uncertainty has a higher probability of falling below some threshold of impermissibility than an option with less dispersion, even though the riskier option has the higher expected choice-worthiness. When applied to the same situation, similar moral theories will have statistically correlated choice-worthiness values. This correlation affects the dispersion of the credence-weighted sum of the random variables but not its expected value. Thus, MEC is insensitive to aspects of the normative situation to which a good decision rule should be sensitive.

Keywords: Normative uncertainty · Maximum expected choice-worthiness · Commensurability · Credence · Moral theories
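To make the abstract's two statistical claims concrete, here is a minimal numerical sketch. It is not drawn from the paper: the normal distributions, the particular means, standard deviations, credences, correlation, and the impermissibility threshold are all illustrative assumptions. The first half shows a riskier option that MEC prefers on expected value alone, despite its much higher probability of falling below a threshold of impermissibility; the second half shows how correlation between the choice-worthiness values of similar theories changes the spread of the credence-weighted sum without changing its mean.

```python
# Minimal numerical sketch (not from the paper); all figures are illustrative
# assumptions, and choice-worthiness (CW) is modelled as normally distributed
# purely for simplicity.
from statistics import NormalDist

# Option A: higher expected choice-worthiness, but large descriptive
# uncertainty (large standard deviation).
# Option B: slightly lower expected choice-worthiness, tightly concentrated.
option_a = NormalDist(mu=10.0, sigma=8.0)
option_b = NormalDist(mu=9.0, sigma=1.0)

THRESHOLD = 0.0  # hypothetical threshold of impermissibility

# MEC compares only expected values, so it recommends A ...
print("Expected CW:", option_a.mean, ">", option_b.mean)

# ... even though A is far more likely to fall below the threshold.
print("P(A impermissible):", round(option_a.cdf(THRESHOLD), 3))  # ~0.106
print("P(B impermissible):", option_b.cdf(THRESHOLD))            # ~1e-19

# Correlation between similar theories changes the spread, not the mean,
# of the credence-weighted sum c1*X1 + c2*X2:
#   Var = c1^2*Var(X1) + c2^2*Var(X2) + 2*c1*c2*rho*SD(X1)*SD(X2)
c1, c2 = 0.6, 0.4       # credences in two similar moral theories
sd1, sd2 = 4.0, 5.0     # standard deviations of their CW values
for rho in (0.0, 0.9):  # uncorrelated vs. highly correlated theories
    var = (c1 * sd1) ** 2 + (c2 * sd2) ** 2 + 2 * c1 * c2 * rho * sd1 * sd2
    print(f"rho={rho}: SD of combined CW = {var ** 0.5:.2f}")
```

In the first comparison, MEC's ranking is settled by the means alone; in the second, the expected value of the credence-weighted sum is unaffected by the correlation, while its standard deviation grows markedly. Both are aspects of the situation to which, on the paper's argument, a good decision rule should be sensitive.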

1 Introduction


I can be uncertain about what to do in two separate ways. First, my uncertainty can be normative. This is uncertainty about which theoretical moral principles are true. For example, I am somewhat confident, but not fully certain, that I ought to respect everyone’s autonomy, and I am a bit more confident, but also not fully certain, that I ought to maximize everyone’s pleasure. Sometimes these principles conflict, and then it matters how confident I am in the truth of each principle. Second, my uncertainty can be descriptive. This is uncertainty about the non-moral, empirical facts of the case (Jackson and Smith 2006; Hedden 2016). For example, to apply my moral principles, I need to predict when people will experience pleasure, how they will choose autonomously, and what the effects of my actions will be. Unfortunately, I can never be fully certain about these predictions.

One interesting approach to these problems represents moral reasoning under uncertainty the way that decision theory represents rational choice under uncertainty (Hudson 1989; Oddie 1995; Gracely 1996; Smith 2002; Sepielli 2009; MacAskill 2016; Bykvist 2017; MacAskill and Ord forthcoming). This approach models the agent’s credence in each moral theory, and in the facts of the case, by usin