Chloé de Canson (LSE)
Abstract
The thought that agents ought not assign extremal credences to propositions they think could be either true or false is intuitively appealing to many. In the Bayesian literature, it has been captured by a putative norm of rationality called regularity. However, arguments against regularity abound, and arguments in its favour are notoriously unsatisfactory. In this paper, I argue that the thought is best captured by a weaker claim, which I call humility. I argue for humility by showing that it follows from what it means to be rational.
Biography
Chloé is completing a PhD at the London School of Economics, and, in September 2020, will start as an assistant professor in the Department of Theoretical Philosophy at the University of Groningen. She works mainly in epistemology. She is working on two research projects: one on the relationship between an agent’s means of inquiry and what it is rational for her to believe; and another on the nature of agential perspective, its relationship to objectivity, and the rationality of perspectival change. She holds a BSc in Philosophy, Logic, and Scientific Method from LSE, and an MPhil in Philosophy from the University of Cambridge.
Graeme A Forbes
11th July 2020 at 12:56 pm
I enjoyed that (and I’m very sympathetic to your argument).
Do you think the telic/poric distinction is similar to Friedman’s epistemic/zetetic distinction? (https://philpapers.org/rec/FRITEA-3)
There certainly seems to be something to the idea (that virtue epistemologists like Hookway and Zagzebski have long been keen on) that the regulation of inquiry has been neglected compared to simply having the right mental state. Making the connection between the regulation of inquiry and one’s epistemic state seems like a good move.
Chloé de Canson
11th July 2020 at 8:50 pm
Thank you Graeme for your question, and for pointing me to Jane Friedman’s really interesting paper! I have thought about it, and I think that her distinction and mine relate in the following way.
Friedman begins her paper with the observation that there are many mental acts that one could perform at any given time: counting windows, reflecting on the implications of views one already holds, etc. And since these mental acts take time, one must choose which to engage in. From this she identifies a tension between two types of norms which seek to regulate this kind of activity: norms she calls zetetic (for instance: if you want to figure out Q, you ought to take the necessary means to figuring out Q), and norms she calls epistemic (for instance, and with changed terminology: if [a belief in p is justified] at t, then one is permitted to [form the belief that] p at t). The problem is that taking the necessary means to figuring out Q and forming the belief that p are incompatible (in the weak sense that, since they are both activities, one may not do both at once), but the antecedents are compatible (in the sense that it might very well be that the agent wants to figure out Q, and that a belief in p is justified for her). Thus a tension arises.
On my end, the tension between the poric and the telic accounts concerns which beliefs are justified at some time and for some agent. So, the distinction I am drawing is between two different ways one might analyse the antecedent of the epistemic norms that Friedman considers. (To repeat it from the paragraph above, the norm is: if [a belief in p is justified] at t, then one is permitted to [form the belief that] p at t.)
I think Friedman and I are using the term “inquiry” differently. She views inquiry as the mental activity of e.g. gathering specific evidence, forming specific beliefs, etc., and is interested in the norms that regulate this process. But when I talk of the means and ends of inquiry, I mean something quite different: the means of inquiry are what are sometimes referred to as “sources of knowledge” (observation, reason, testimony, etc.), and the end of inquiry is the ideal epistemic state of being certain of the truth-value of propositions. But I completely agree with you that the kind of inquiry Friedman has in mind, which I think (?) is also the kind that the virtue epistemologists you mention have in mind, is an important and neglected topic for epistemologists. I will think some more about whether I should use a different term to avoid conflations!
Graeme A Forbes
11th July 2020 at 10:27 pm
Thanks for your reply. That’s helpful.
I think the virtue epistemologists I mentioned (at least Hookway) were thinking of inquiry as quite an active and social activity (so it would involve discussing things, asking questions, formulating hypotheses to test, running experiments, conducting a fingertip search of a crime scene, etc.). Now sitting back in an armchair and having a good old mull might also be a form inquiry takes, but it needn’t be stuff that happens in one’s head.
Daniel Whiting
11th July 2020 at 3:17 pm
Thanks for the interesting talk.
I have a question about your criticism of the Poric Argument in 2.1, which probably reveals a misunderstanding on my part. (In that case, apologies in advance.)
According to Premise 2, no particular credence in poems is justified (by the means of inquiry). But doesn’t the objectivist deny this? The objectivist thinks that credence 0.5 is justified. The objectivist can then also explain why extremal credences in poems are particularly bad – they are farthest from the justified credence.
Perhaps your thought is that the Poric view doesn’t deliver the objectivist view. But why not? I think I need to hear more about what constitutes a means of inquiry. But prima facie reflecting on one’s evidential situation counts as a means. Such reflection, the objectivist might say, warrants a credence of 0.5 in Poems.
Perhaps instead the thought is that the Poric view should not pre-judge the dispute between objectivists and subjectivists. But, for what it’s worth, I’m not sure why we should accept that.
Let me know what I’ve missed!
Chloé de Canson
12th July 2020 at 4:44 pm
Thank you Daniel for your question! I actually believe that the poric account of justification entails subjectivism (or more precisely, that there is a very compelling argument for subjectivism, and that the usual objections to subjectivism do not have the required traction, on the poric account). I make the case for this in another paper, and I argue on that basis that the dispute between subjectivists and anti-subjectivists (including objectivists) is best understood as a dispute about the account of justification that is assumed.
Roughly, the reason I have for thinking that subjectivism holds on the poric account is this. Take a proposition about the unobserved, say, that the book will be a collection of poems. What credence do my means of inquiry warrant in this proposition? Well, observation alone does not tell me anything about this proposition: all that observation tells me is, by definition, about the observed. Reason alone does not tell me anything either: all that reason tells me is, by definition, about the a priori. And since the proposition is about the unobserved, no other means will help, not testimony, or memory, or introspection. (This is one way of reconstructing Hume’s argument for inductive scepticism.) As you can see, I use “means of inquiry” to refer to those abilities an agent has on the basis of which she might come to learn about the world: observation, reason, testimony, memory, introspection, and so on.
Now of course the objectivist will disagree with this, and insist that an agent ought to have credence 1/2 in poems. There are several ways an objectivist can argue for this. One is to say that it would be bad for the agent to have a credence other than 1/2 in poems, because the agent would then be taking an undue (pragmatic or epistemic) risk. This is roughly the way in which Jon Williamson and Richard Pettigrew argue for the principle of indifference—though their arguments differ significantly in the details. This is clearly operating on a telic account of justification, on which what is justified is what it would be best for the agent to believe; so it is not going to move the theorist working with the poric account.
Another way one can argue for objectivism, which you mention in your question, is by appealing to a kind of evidentialism. An argument of this kind might go: the agent ought to have credence 1/2 in each of poems and novel, because they are both equally supported by the evidence. You suggest that this can be interpreted as assuming a poric account of justification, if we take one’s “evidential situation” as a means of inquiry. But I don’t think this can work. If you interpret “evidential situation” as “that which the agent has observed”, then the argument is poric. It explicates justification by reference to the agent’s means of inquiry, here, observation. But it also doesn’t yield the required result! For the proposition in question is about the unobserved, and so outside the scope of observation (to date). Alternatively, you might interpret “evidential situation” in a different way, and I think that this is what evidentialists in fact do. But this raises the question: how does one interpret/explicate this notion of evidential support? My sense is that this was a difficulty for the original objectivists (Keynes, maybe even Laplace), and that it is still a weak point of Bayesian evidentialism. But more importantly for our purposes, even if such an interpretation can be provided, and even if it indeed yields the conclusion that the agent ought to have credence 1/2 in poems and novel, this is unlikely to convince the proponent of the poric account of justification. This is because, unless the evidentialist has solved the problem of induction (that is, has identified which of the agent’s means of inquiry can reach the unobserved), this will not be a poric interpretation.
Sorry this was quite long! Before finishing, I want to say something about your contention that “The objectivist can then also explain why extremal credences in poems are particularly bad – they are farthest from the justified credence.” I’m not convinced by this. Because suppose that a non-subjectivist believes that the justified credence in a particular proposition is 0.8. Then, credence 1 will be closer to the justified credence than 0.5. But I take it that the non-subjectivist would still think that having credence 1 is worse than having credence 0.5. But if they do not, then they will disagree with my argument for humility, which is based on a principled distinction between extremal and non-extremal credences. I’d love to talk with such a non-subjectivist if they exist!
Jon Williamson
11th July 2020 at 3:30 pm
Thanks for a very nice talk. A couple of thoughts:
– might it be a problem for humility that it can’t extend to more complex cases? In probability theory there are various zero-one laws that force extreme probabilities on certain contingent propositions that are neither logical consequences nor logically incompatible with the evidence. These seem to conflict with humility.
– I agree that it’s right to exempt the evidence from humility, but we (reasonably) take as evidence lots of propositions that go beyond what we know – e.g., various universal laws. So we give probability 1 to these propositions, even though they’re not absolutely secure. Does this practice undermine humility perhaps?
Jonathan Birch, LSE
12th July 2020 at 2:36 pm
We give probability 1 to contingent, empirical laws? Is there a particular example you have in mind? My reaction was the opposite – that in real scientific inquiry, all contingent propositions are uncertain, including the evidence.
Chloé de Canson
12th July 2020 at 5:48 pm
Thank you Jon and Jonathan for your questions about assigning credence 1 to the evidence. I will answer them together because they pull apart in ways that I think reflect nicely what’s being said in the literature.
I agree with Jon that formal epistemologists model their agents as assigning credence 1 to a host of contingent propositions about the unobserved. In fact in the talk, I assumed that the agent has credence 1 in the proposition that she has received a book that is either a novel or a collection of poems. But of course she ought not be *absolutely certain* that this is the case (she could be deceived by an evil demon, etc.). And furthermore, proponents of conditionalisation hold that agents ought to assign credence 1 to their evidence, even while (presumably) recognising that agents ought not be absolutely certain about contingent propositions about the world.
There are three possible responses to this, I think.
1) The first and most popular is to reject conditionalisation in favour of a more general norm like Jeffrey conditionalisation, which allows you to acquire evidence with a non-extremal probability (both update rules are sketched just after this list).
2) The second is that one might say that agents can be and in fact are absolutely certain of their evidence. It is just that their evidence is about their mind and not the external world, so that all evidential propositions must be of the form “this book seems red to me”, and not “this book is red”.
3) The third is that one might appeal to an idealisation strategy, and insist that although real human agents are not always in a position to become certain of their evidence, we can assume they are, or proceed as if they were.
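For concreteness, the contrast between the two update rules at issue in option 1) is the standard one; here is a minimal textbook sketch (the notation is mine, not anything from the talk):

% Strict conditionalisation: on learning evidence E with certainty,
\[
P_{\mathrm{new}}(X) = P_{\mathrm{old}}(X \mid E),
\qquad \text{so in particular } P_{\mathrm{new}}(E) = 1.
\]
% Jeffrey conditionalisation: experience redistributes credence over a
% partition \{E_1, \dots, E_n\} to new values q_1, \dots, q_n, which may all
% be strictly between 0 and 1:
\[
P_{\mathrm{new}}(X) = \sum_{i=1}^{n} P_{\mathrm{old}}(X \mid E_i)\, q_i,
\qquad \text{with } 0 < q_i < 1 \text{ permitted}.
\]

On the first rule the evidence proposition is always driven to credence 1; on the second it need not be, which is why option 1) removes the pressure towards extremal credences in the evidence.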
My sense is that no one accepts 2), and that most people accept 1) in principle but 3) in practice. That is, they say things like: “If I did things properly, I’d have to make space for the fact that the agent is not totally certain that E, but for the purpose of analysing this specific case, let’s just assume that the agent is in fact certain that E”. I think that this is quite an interesting state of affairs. For if most people most of the time make an idealisation assumption, this suggests that there might be a reason for this beyond economy. Indeed this is what I argue in another paper: that Bayesian epistemology is an epistemology of the empirical unobserved—these are the cases of interest to Bayesian theorists, and these are the cases for which it makes sense to use Bayesian tools (no one would attempt to use Bayesian tools for metaphysical epistemology, or for mathematical epistemology! And although some have attempted to use such tools for moral epistemology, I think it’s not a coincidence that they have had such trouble.) If that’s the case, then the central question for Bayesian epistemologists is: what ought one believe about the unobserved? And since this is a difficult question (for on what basis would such a belief be justified?), we often assume that other questions have been solved, including questions about whether the agent’s observations have been successful, and whether the agent is a competent logician. This is not because anyone believes that these questions admit of easy solutions, but because by making such idealising assumptions, we are “propping up” the agent: we are granting her as much success as possible in all domains but the empirical unobserved, in order to determine how much about the empirical unobserved she might be able to find out in principle.
If this is right, then it explains the general interest in cases about the empirical unobserved amongst Bayesian epistemologists, and it allows me to restrict the domain of applicability of humility as I have.
Jonathan Birch, LSE
12th July 2020 at 8:19 pm
Thanks! Are people who go for option (1) also free to accept Regularity “in principle”, at least for countable cases? The Humility principle, for such people, would then have the status of something like “What I assume instead of Regularity when, for the sake of tractability, I’m making the ‘certain evidence’ idealization.”
Chloé de Canson
15th July 2020 at 9:52 pm
Yes on the question: if one is committed to option (1), then one can just accept regularity (on countable algebras).
However, I’m not sure about your second sentence. I agree that if one makes the ‘certain evidence’ idealisation, one must weaken regularity to humility, on pain of inconsistency. But this raises the question of what a norm is. It cannot be the case that the norm which applies *to the agent* depends on the idealisations made by the theorist. And it seems plausible that if one accepts (say) regularity, then one accepts it as applying to the agent. This is just to say that I think the questions of which idealisations are justified and of which norms are true are too closely related for us to be able to make the move you suggest: from regularity (in reality) to humility (when one makes the ‘certain evidence’ idealisation).
Jon Williamson
13th July 2020 at 10:47 am
Thanks Chloe. I agree that (3) is the right sort of approach. Here’s the kind of thing I was thinking of: a structural engineer takes Newtonian mechanics as part of her evidence base for a proposition about whether a building will stay up, even though she knows it’s false, strictly speaking. Clearly, the engineer is reasonable to grant Newtonian mechanics in this particular operating context and to act as if she fully believes it. So she is rational to take it as evidence in this context. Epistemically it’s the right thing to do, because it will lead to a correct inference about whether the building will stay up, if the rest of her reasoning is sound. (In another context – e.g., astrophysics – it would not be reasonable to take it as evidence.)
Chloé de Canson
15th July 2020 at 10:00 pm
I’m actually quite conflicted about option (3), although ultimately I think it’s the right one. It seems to me not obvious at all that we are justified in the kind of idealisation you mention, for instance when you write: “Clearly, the engineer is reasonable to grant Newtonian mechanics in this particular operating context and to act as if she fully believes it.” In the comment above I give a very sketchy overview of my argument for a ‘certain evidence’ idealisation, though it wouldn’t yield an idealisation to the effect that the agent can have credence 1 in Newtonian mechanics, because that goes beyond her observations (even understood in a loose sense). So now I’m wondering why you think this kind of idealisation is acceptable! (And if you have made or know of this argument in print, I would be very grateful for a pointer!)
Chloé de Canson
12th July 2020 at 5:14 pm
Thank you Jon for your questions! Let me answer the first one here. You mention cases where there are zero-one laws that force extreme probabilities on certain contingent propositions. These do in fact conflict with humility. But, and let me know if I’m mistaken, I think that these all occur in cases where the algebra on which the probabilities are defined is uncountable. This brings up a puzzle. We can (1) reject humility, (2) hold that humility holds in finite cases and only in those, (3) hold that humility holds in all cases, finite and infinite. I take your question to be that of why we shouldn’t go with (1). The reason I personally veer towards (2) or even (3) is this. The reason that humility fails in infinite cases has nothing to do with the reason why it might hold in finite cases, and it does not transfer over. So, given that I’m convinced by my own argument for humility in the finite cases (for now at least), I do not want to reject it unless I have a good reason to think that the finite and infinite cases must absolutely be continuous, and that humility must absolutely be rejected in the infinite cases. But I am convinced by neither of these. Starting with the latter: I agree that probabilities are such that what I’ve called “propositions about the unobserved” must be assigned extremal credences in uncountable cases. But why does that not entail that we should cease using (classical) probabilities for infinite cases? For if one is convinced by my argument for humility, and if non-classical probabilities could vindicate humility, then why not do that? Here, I suspect that my opponent will say either that they are unconvinced by my argument for humility, or that they are convinced but that it is “not worth it” to move to non-classical probabilities. And in that case, either we have a substantive disagreement about my argument, or (and I think this is more likely), my opponent and I will come to realise that we are using the Bayesian framework in different ways—that the kind of use they make of the Bayesian machinery is very different from mine, and in particular, that they are not engaging in what I’ve called “means-ends epistemology”. I think that’s completely fine, and great that the Bayesian tools are being used in many different ways! But in that case, it seems clear to me that I shouldn’t worry about the fact that my argument doesn’t transfer over to the cases they are concerned with.
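To make the kind of case Jon mentions concrete, here is a standard textbook illustration (my example, not one discussed in the talk): take an infinite sequence of independent fair coin tosses, and let H_n be the proposition that toss n lands heads.

% The proposition "heads occurs infinitely often" is logically contingent,
% yet the second Borel--Cantelli lemma (a zero-one law) forces its probability
% to 1, since the tosses are independent and
\[
\sum_{n=1}^{\infty} P(H_n) = \sum_{n=1}^{\infty} \tfrac{1}{2} = \infty
\quad\Longrightarrow\quad
P\Bigl(\limsup_{n \to \infty} H_n\Bigr) = 1.
\]
% So on this (uncountable) algebra, a contingent proposition about the
% unobserved receives an extremal probability whatever the agent's evidence,
% which is exactly the kind of conflict with humility at issue.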
Jon Williamson
13th July 2020 at 10:07 am
I think that’s a nice line of reply, Chloe. I think you’re right that Bayesian epistemology and the practice of Bayesian probability and statistics might need to diverge at some points. Personally, I prefer a slightly different strategy, though: one can say that probability 1 is ok for consequences of the evidence, where you include probabilistic consequences as well as logical consequences of the evidence – i.e., probability 0 and 1 are ok if forced by the deductive logic together with the axioms of probability. Then humility becomes something like: degrees of belief should be more equivocal than probability 0 and 1, unless forced by the evidence. This would preserve probability theory.
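Stated a little more formally, the weakened principle would be something like this (my gloss, not an exact formulation from the comment above):

% Weakened humility (gloss): for every proposition A in the algebra,
\[
0 < P(A) < 1
\quad \text{unless the value } P(A) \in \{0, 1\}
\text{ is forced by the evidence together with the axioms of probability.}
\]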
Chloé de Canson
15th July 2020 at 10:09 pm
I agree that, if we’ve assigned credence 1 to the evidence, where the evidence is broader than just the kind that can be determined through observation capaciously understood, the problem is a lot less pressing. Which I think brings us back to the question above: on what basis would such an idealisation be justified?
Jonathan Birch, LSE
12th July 2020 at 2:33 pm
Well done Chloé, that’s a great talk, very well presented.
I don’t follow the world of Bayesianism all that closely, but it surprised me that “evidential omniscience” seems to be as popular as it is. Giving evidential propositions a credence of 1 feels like an idealization. I liked the idea of Jeffrey conditionalization, where the evidential propositions can have probability <1. This seems to bring Bayesianism a little closer to the real world, where evidence is uncertain. Do Bayesians these days reject the idea of uncertain evidence – and, if so, why?
Chloé de Canson
12th July 2020 at 5:49 pm
Thanks Jonathan for this question! I’ve done a group reply with Jon’s question above, because I think they are nicely taken together.
Dave Lucas
12th July 2020 at 2:56 pm
Thx, interesting and very smart argument for a very tricky problem.
Please comment on the case where there is some means to determine if an event will occur, but these means are not entirely reliable. For instance, I am of the view that, based on history, Facebook stock will appreciate. But can I trust this intuition, given that there may be an external event, which I did not know about, that could affect the stock price? Is it rational for me to buy Facebook stock?
Chloé de Canson
12th July 2020 at 6:10 pm
Thank you Dave for this question! I take it that the question Bayesians pose is that of what to believe given one’s limited perspective. So, whether it is rational (for instance) to buy Facebook stock will depend on how likely you think it is that your intuition is correct. And how confident you should be that your intuition is correct, and on what basis, is the kind of question that Bayesians debate. I discuss this in my reply to Daniel Whiting above!
Joe Roussos
12th July 2020 at 4:10 pm
A small comment about the Objective Bayesian. As I understand it, they arrive at the principle of indifference via an argument about entropy (or at least some of them do). This MaxEnt principle gives them a reason for the plausible thought directly. Assigning credence 1 to something contingent is the worst thing you can do, according to MaxEnt. So do they need your means-ends argument? (I don’t think they’re right generally, so this isn’t an objection so much as a question of framing.)
Chloé de Canson
12th July 2020 at 6:19 pm
Thanks Joe for this suggestion! I’ll look into this argument. But prima facie I’m surprised, given that Jon Williamson, in his question above, seems to push against the plausible thought—though this doesn’t entail that he rejects it of course.
But yes–if the MaxEnt-ers already have their own argument for the plausible thought, you’re right that they do not need this one! It might be nice to see how the arguments relate though, to see whether they spring from the same/similar reasons and considerations. I will look into this.
Jon Williamson
13th July 2020 at 11:13 am
Hi Joe. It is true that on a finite domain, maxent will ensure that the only propositions that get probability 0 or 1 are those where those values are forced by evidence. But already where the domain is a predicate language (not uncountable), maxent will give some (universal and existential) propositions probability 0 or 1, even if these values are not forced by the evidence. There’s a sense in which these probabilities 0 and 1 are less of a concern for objective Bayesianism than for subjective Bayesianism: for subjective Bayesianism, Bayesian conditionalisation (or its generalisations) offers the only mechanism for updating, so once a proposition has probability 0 or 1 it is stuck there; objective Bayesians can update by reapplying maxent, so probabilities can shift away from 0 or 1 as the evidence changes. So these extreme probabilities are only defeasibly extreme for the objectivist.
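A minimal worked example of the finite-domain point, for concreteness: with a single binary proposition A and no constraining evidence, maximising entropy selects the non-extremal value and makes the extremal values the worst options.

% Entropy over the two-cell partition \{A, \neg A\}, writing P(A) = p:
\[
H(p) = -\,p \log p - (1 - p)\log(1 - p),
\qquad
\frac{dH}{dp} = \log\frac{1 - p}{p} = 0 \iff p = \tfrac{1}{2}.
\]
% H is maximised at p = 1/2 and minimised (H = 0) at p = 0 and p = 1, so with
% no evidential constraints maxent assigns 1/2 and rules out the extremal values.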