Accuracy-based argument for Conditionalization

In two previous posts (here and here), I described Jim Joyce's accuracy-based argument in favour of Probabilism, the norm that says that an agent's credence function ought to be a probability function.  As noted there, the argument has the following structure:
  • First, Joyce claims that the cognitive value of a credence function at a possible world is given by its accuracy at that world, where this is measured by a particular mathematical function, such as the Brier score.
  • Second, he states a norm of decision theory:  in this case it is Dominance, which says (roughly) that an option is irrational if there is another that is guaranteed to be better than it.  
  • Finally, he proves a mathematical theorem, which shows that an epistemic norm -- in this case, Probabilism -- follows from his account of cognitive value together with the decision-theoretic norm. 
In this post, I will describe an accuracy-based argument in favour of Conditionalization, the norm that says that an agent who learns a proposition with certainty ought to update her credences by conditionalizing on that proposition.  This argument shares Joyce's first premise concerning cognitive value.  But it adopts a different decision-theoretic norm:  Joyce's argument employs Dominance; this argument employs Maximize Subjective Expected Utility.  The argument is originally sketched in (Oddie, 1997); it is presented more fully in (Greaves and Wallace, 2006); and the mathematical theorem on which it turns is due originally (as far as I know) to (Brown, 1976).

Throughout this post, we assume that the credence functions we are discussing are probability functions.

What is Conditionalization?


Our first job is to say precisely what Conditionalization is.  For the purpose of this post it is as follows:

Definition 1  An updating rule is a function $\mathbf{R}$ that takes a credence function $c$, a partition $\mathcal{E}$, and an element $E$ of $\mathcal{E}$, and returns a credence function $c_{\mathbf{R}(c, \mathcal{E}, E)}$.

Intuitively, of course, $c_{\mathbf{R}(c, \mathcal{E}, E)}$ is the credence function that the updating rule demands of an agent with initial credence function $c$, who knows her evidence will come from $\mathcal{E}$, and who then receives evidence $E$.

Definition 2  Let $\mathbf{Cond}$ be the following updating rule (defined whenever $c(E) > 0$):
\[
c_{\mathbf{Cond}(c, \mathcal{E}, E)}(X) = c(X | E)
\]
 
Conditionalization  An agent ought to plan to update in accordance with the updating rule $\mathbf{Cond}$.
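To fix ideas, here is a minimal sketch of Definitions 1 and 2 in Python. The representation of worlds, credence functions, and partitions is my own illustrative choice, not anything taken from the sources cited.

```python
# A minimal sketch of Definitions 1 and 2. Worlds are the indices
# 0..n-1, a credence function is a list of probabilities over worlds,
# and a partition is a list of disjoint lists of worlds.

def cond(c, partition, E):
    """The updating rule Cond: conditionalize c on the evidence cell E."""
    pE = sum(c[w] for w in E)
    assert pE > 0, "Cond is only defined when c gives E positive credence"
    return [c[w] / pE if w in E else 0.0 for w in range(len(c))]

# Example: four worlds, a uniform prior, and the evidence cell {0, 1}.
c = [0.25, 0.25, 0.25, 0.25]
partition = [[0, 1], [2, 3]]
print(cond(c, partition, [0, 1]))   # [0.5, 0.5, 0.0, 0.0]
```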

Brown's pragmatic argument for Conditionalization


Before we turn to the accuracy-based argument for Conditionalization, we present Peter M. Brown's pragmatic argument for that norm.  The accuracy-based argument essentially adapts this argument.

Brown's argument is based on the following result.  Suppose the present time is $t$.  Our agent knows that, by the later time $t'$, she will learn with certainty a proposition from the partition $\mathcal{E}$.  At time $t$, she must choose how she will update her credences in the light of whatever new evidence she obtains; that is, at $t$ she must choose the updating rule she will use to set her credences at $t'$.  Suppose that one of the available updating rules is $\mathbf{Cond}$.  And suppose further that she knows that, at time $t'$, she will have to choose an action from a set of alternative actions, and that she will choose an action with maximal subjective expected utility calculated relative to her credences at $t'$.  Then, relative to her current credences at $t$, she will always expect herself to do at least as well if she conditionalizes as if she uses any of the alternative updating rules; and there will be sets of actions from amongst which she has to choose such that she will expect herself to do better in this choice if she conditionalizes than if she updates using an alternative rule.

More formally:
  • $c$ is the agent's credence function at $t$.
  • $\mathcal{W}$ is the set of possible worlds or states of the world.
  • $\mathcal{A}$ is the set of alternative actions from which the agent must choose at $t'$.
  • $U(a, w)$ is the utility of $a$ at $w$, where $a$ is an action in $\mathcal{A}$ and $w$ is a world in $\mathcal{W}$.
  • $E_w$ is the element of $\mathcal{E}$ that is true at world $w$.
Theorem 1 (Brown)  Suppose $\mathbf{R}$ is an updating rule.  For each $w$ in $\mathcal{W}$:
  • Let $a^*_w$ be an action with maximal subjective expected utility relative to $c_{\mathbf{Cond}(c, \mathcal{E}, E_w)}$.
  • Let $a_w$ be an action with maximal subjective expected utility relative to $c_{\mathbf{R}(c, \mathcal{E}, E_w)}$.
Then:
\[
\sum_{w \in \mathcal{W}} c(w)U(a^*_w, w) \geq \sum_{w \in \mathcal{W}} c(w) U(a_w, w)
\]
Moreover, suppose that, for some $w$ in $\mathcal{W}$ with $c(w) > 0$, $a^*_w$ is the only action with maximal subjective expected utility relative to $c_{\mathbf{Cond}(c, \mathcal{E}, E_w)}$ and $a_w \neq a^*_w$.  Then:
\[
\sum_{w \in \mathcal{W}} c(w)U(a^*_w, w) > \sum_{w \in \mathcal{W}} c(w) U(a_w, w)
\]
Proof. Fix $E$ in $\mathcal{E}$, and note that $E_w = E$ for every $w$ in $E$, so that $a^*_w$ is the same action for every $w$ in $E$.  By the definition of $a^*_w$, we have
\[
\sum_{w \in \mathcal{W}} c_{\mathbf{Cond}(c, \mathcal{E}, E)}(w) U(a^*_w, w) \geq \sum_{w \in \mathcal{W}} c_{\mathbf{Cond}(c, \mathcal{E}, E)}(w) U(a, w)
\]
for all $a$ in $\mathcal{A}$.  Thus, by the definition of $c_{\mathbf{Cond}(c, \mathcal{E}, E)}$, we have
\[
\sum_{w \in \mathcal{W}} c(w|E) U(a^*_w, w) \geq \sum_{w \in \mathcal{W}} c(w|E) U(a, w)
\]
So, writing $w \in E$ to mean that $E$ is true at $w$, we have
\[
\sum_{w \in E} \frac{c(w)}{c(E)} U(a^*_w, w) \geq \sum_{w \in E} \frac{c(w)}{c(E)} U(a, w)
\]
Multiplying both sides by $c(E)$, we get
\[
\sum_{w \in E} c(w) U(a^*_w, w) \geq \sum_{w \in E} c(w) U(a, w)
\]
for all $a$ in $\mathcal{A}$.  In particular:
\[
\sum_{w \in E} c(w) U(a^*_w, w) \geq \sum_{w \in E} c(w) U(a_w, w)
\]
Thus, summing over all $E$ in $\mathcal{E}$ (cells with $c(E) = 0$ contribute nothing to either side), we get
\[
\sum_{w \in \mathcal{W}} c(w) U(a^*_w, w) \geq \sum_{w \in \mathcal{W}} c(w) U(a_w, w)
\]
If, for some $w$ with $c(w) > 0$, $a^*_w$ is the unique expected utility maximizer relative to $c_{\mathbf{Cond}(c, \mathcal{E}, E_w)}$ and $a_w \neq a^*_w$, then the inequality for the cell containing $w$ is strict, and hence so is the total.
$\Box$
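Here is a small numerical illustration of the theorem (a sanity check, not part of the proof). Everything in it, including the prior, the utility table, and the rival "stay put" rule, is invented for the example.

```python
# A toy check of Theorem 1 (an illustration, not a proof). The prior,
# partition, actions, and utility table below are all invented.

def cond(c, partition, E):
    """The updating rule Cond: conditionalize c on the evidence cell E."""
    pE = sum(c[w] for w in E)
    return [c[w] / pE if w in E else 0.0 for w in range(len(c))]

def expected_utility(credence, a, U):
    return sum(credence[w] * U[a][w] for w in range(len(credence)))

def best_action(credence, actions, U):
    return max(actions, key=lambda a: expected_utility(credence, a, U))

prior = [0.25, 0.25, 0.25, 0.25]
partition = [[0, 1], [2, 3]]
actions = [0, 1]
U = [[10, 0, 0, 0],   # utility of action 0 at each world
     [0, 4, 4, 4]]    # utility of action 1 at each world

def prior_expected_value(rule):
    """sum_w c(w) U(a_w, w), where a_w maximizes expected utility
    relative to the credences the rule outputs on learning E_w."""
    total = 0.0
    for E in partition:
        a = best_action(rule(prior, partition, E), actions, U)
        total += sum(prior[w] * U[a][w] for w in E)
    return total

stay_put = lambda c, p, E: c   # a rival rule that ignores the evidence
print(prior_expected_value(cond))      # 4.5
print(prior_expected_value(stay_put))  # 3.0 (never larger, per Theorem 1)
```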

Oddie and Greaves & Wallace's accuracy-based argument for Conditionalization


One way to understand Brown's argument for Conditionalization is this:  An agent is faced with a choice of updating rules; Conditionalization always maximizes subjective expected utility; sometimes, it is the only updating rule that does.  Of course, this relies on an account of the utility of an updating rule at a world.  But our utility function is not defined on updating rules; it is only defined on actions $a$ in $\mathcal{A}$.  However, Brown gives us an account of the utility of an updating rule in terms of the utility of actions.  For instance, for Brown, the utility of $\mathbf{Cond}$ at world $w$ is $U(a^*_w, w)$.  And the utility of $\mathbf{R}$ at $w$ is $U(a_w, w)$.  In general, the utility of an updating rule at a world is given by the utility of the action that an agent would choose based on the credence function demanded by that updating rule at that world (if she were to choose by maximizing subjective expected utility).

This gives us a hint as to how to give an accuracy-based argument for Conditionalization.  We want to argue that Conditionalization is the updating rule that maximizes subjective expected accuracy.  So we need to say what the accuracy of an updating rule is at a given world.  Following Brown's lead, we define the accuracy of an updating rule at a world to be the accuracy of the credence function that it demands of the agent at that world.  Thus, if the inaccuracy of a credence function $c$ at a world $w$ is given by $I(c, w)$, then the inaccuracy at $w$ of an updating rule $\mathbf{R}$ for an agent with credence function $c$ and partition $\mathcal{E}$ is given by $I(c_{\mathbf{R}(c, \mathcal{E}, E_w)}, w)$.  And, since the accuracy of $c$ at $w$ is $-I(c, w)$, the accuracy of $\mathbf{R}$ at $w$ is $-I(c_{\mathbf{R}(c, \mathcal{E}, E_w)}, w)$.
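In code, with the Brier score as our example inaccuracy measure $I$, this definition looks as follows (a sketch in the same illustrative representation as above; the helper names are mine):

```python
# The inaccuracy at a world of an updating rule, with the Brier score
# as the inaccuracy measure I.

def brier(c, w):
    """Brier inaccuracy of c at w: squared Euclidean distance from the
    omniscient credence function at w (1 on w, 0 elsewhere)."""
    return sum((c[v] - (1.0 if v == w else 0.0)) ** 2 for v in range(len(c)))

def rule_inaccuracy(rule, c, partition, w):
    """I(c_R(c, E, E_w), w): inaccuracy at w of the credences the rule
    outputs on learning the cell E_w that is true at w."""
    E_w = next(E for E in partition if w in E)
    return brier(rule(c, partition, E_w), w)

# e.g. rule_inaccuracy(cond, [0.25]*4, [[0, 1], [2, 3]], 0) == 0.5,
# using cond from the first sketch above.
```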

It might seem that we can now simply take Brown's argument, substitute our measure of accuracy (namely, $-I$) for the utility function (namely, $U$), and thereby obtain an accuracy-based argument for Conditionalization.  But that isn't quite true.  Our proof of Brown's theorem began by using the fact that $a^*_w$ has maximal subjective expected utility relative to $c_{\mathbf{Cond}(c, \mathcal{E}, E)}$.  But, for all that has been said so far, we have no reason for thinking that $c_{\mathbf{Cond}(c, \mathcal{E}, E)}$ has maximal subjective expected accuracy relative to $c_{\mathbf{Cond}(c, \mathcal{E}, E)}$ itself.  If we make that assumption, we can simply run Brown's proof, and we get the result we want, namely, that $\mathbf{Cond}$ maximizes subjective expected accuracy.  But is it a reasonable assumption?

Here is an argument in its favour:  For every probabilistic credence function, there is an evidential situation to which that credence function is the unique rational response, namely, the situation in which one learns that that probability function gives the objective chances (see (Joyce, 2009) for the original argument; see (Hájek, 2009) for an objection; see (Pettigrew, ms, section 5.2.1) for a response to Hájek on Joyce's behalf).  Thus, each credence function ought to assign maximal expected accuracy to itself and only to itself.  For suppose some credence function did not: then it would be permissible for an agent with that credence function to adopt some other credence function that also has maximal expected accuracy relative to her original one; but, by hypothesis, this is not permissible for an agent in the evidential situation posited.  Thus, for each probabilistic credence function $c$, we ought to have:
\[
\sum_{w \in \mathcal{W}} c(w) I(c, w) < \sum_{w \in \mathcal{W}} c(w) I(c', w)
\]
for all $c' \neq c$.  If this is true, we say that $I$ renders probabilistic credence functions immodest.  We note that the Brier score renders probabilistic credence functions immodest, as do all of the inaccuracy measures based on Bregman divergences mentioned in my previous post.  Then we have the following theorem:
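Here is a quick numerical spot-check of this property for the Brier score (a random search, not a proof, in the same illustrative representation as above):

```python
# Spot-check (not a proof) that the Brier score renders probabilistic
# credence functions immodest: each probability function expects itself
# to be strictly less inaccurate than any distinct rival.

import random

def brier(c, w):
    return sum((c[v] - (1.0 if v == w else 0.0)) ** 2 for v in range(len(c)))

def expected_inaccuracy(c, c_prime):
    """Expected Brier inaccuracy of c_prime by the lights of c."""
    return sum(c[w] * brier(c_prime, w) for w in range(len(c)))

def random_probability(n):
    xs = [random.random() for _ in range(n)]
    return [x / sum(xs) for x in xs]

for _ in range(10_000):
    c, rival = random_probability(4), random_probability(4)
    assert expected_inaccuracy(c, c) < expected_inaccuracy(c, rival)
```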

Theorem 2 (Greaves and Wallace)  Suppose $I$ is an inaccuracy measure that renders probabilistic credence functions immodest.  And suppose $\mathbf{R}$ is an updating rule that differs from $\mathbf{Cond}$; that is, $c_{\mathbf{R}(c, \mathcal{E}, E)} \neq c_{\mathbf{Cond}(c, \mathcal{E}, E)}$ for some $E$ in $\mathcal{E}$ with $c(E) > 0$.  Then
\[
\sum_{w \in \mathcal{W}} c(w) I(c_{\mathbf{Cond}(c, \mathcal{E}, E_w)}, w) < \sum_{w \in \mathcal{W}} c(w) I(c_{\mathbf{R}(c, \mathcal{E}, E_w)}, w)
\]
Proof. Run the proof of Brown's theorem with $-I$ in place of $U$ and credence functions in place of actions; immodesty supplies the fact that $c_{\mathbf{Cond}(c, \mathcal{E}, E)}$ uniquely maximizes expected accuracy relative to itself, which plays the role there played by the definition of $a^*_w$.
$\Box$
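And here is the corresponding toy check for Theorem 2, in the same illustrative setting as before, again with a rival rule that ignores the evidence:

```python
# A toy check of Theorem 2 with the Brier score (an illustration, not
# a proof). The prior, partition, and rival rule are invented.

def brier(c, w):
    return sum((c[v] - (1.0 if v == w else 0.0)) ** 2 for v in range(len(c)))

def cond(c, partition, E):
    pE = sum(c[w] for w in E)
    return [c[w] / pE if w in E else 0.0 for w in range(len(c))]

def expected_rule_inaccuracy(rule, c, partition):
    """sum_w c(w) I(c_R(c, E, E_w), w)."""
    total = 0.0
    for E in partition:
        posterior = rule(c, partition, E)
        total += sum(c[w] * brier(posterior, w) for w in E)
    return total

prior = [0.4, 0.1, 0.3, 0.2]
partition = [[0, 1], [2, 3]]
stay_put = lambda c, p, E: c   # rival rule: ignore the evidence

print(expected_rule_inaccuracy(cond, prior, partition))      # 0.4
print(expected_rule_inaccuracy(stay_put, prior, partition))  # 0.7
```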

This is the accuracy-based argument for Conditionalization.  It has the following form:
  1. The cognitive value of a credence function is given by its accuracy; and any legitimate inaccuracy measure renders all probabilistic credence functions immodest.
  2. Maximize Subjective Expected Utility
  3. Theorem 2
  4. Therefore, Conditionalization.
In this post, as in previous posts, I have assumed that the set $\mathcal{F}$ of propositions over which our credence functions are defined is finite.  To see how the argument works when $\mathcal{F}$ is infinite, see (Easwaran, 2013).

References


  • Brown, Peter M. (1976) 'Conditionalization and Expected Utility' Philosophy of Science 43(3): 415-419.
  • Easwaran, Kenny (2013) 'Expected Accuracy Supports Conditionalization--and Conglomerability and Reflection' Philosophy of Science 80(1): 119-142. 
  • Greaves, Hilary and David Wallace (2006) 'Justifying Conditionalization: Conditionalization Maximizes Expected Epistemic Utility' Mind 115(459): 607-632.
  • Hájek, Alan (2009) 'Arguments for--or against--Probabilism?' in Huber, F. & C. Schmidt-Petri (eds.) Degrees of Belief, Synthese Library 342: 229-251.
  • Joyce, James M. (2009) 'Accuracy and Coherence: Prospects for an Alethic Epistemology of Partial Belief' in Huber, F. & C. Schmidt-Petri (eds.) Degrees of Belief, Synthese Library 342: 263-97.
  • Oddie, Graham (1997) 'Conditionalization, Cogency, and Cognitive Value' British Journal for the Philosophy of Science 48: 533-541.
  • Pettigrew, Richard (ms) 'Accuracy, Risk, and the Principle of Indifference'
