Group Belief: Lessons from Lies and Bullshit
Jennifer Lackey (Northwestern)
Abstract
Groups and other sorts of collective entities are frequently said to believe things. Sarah Huckabee Sanders, for instance, was asked by reporters at White House press conferences whether the Trump Administration ‘believes in climate change’ or ‘believes that slavery is wrong.’ Similarly, it is said on the website of the ACLU of Illinois that the organization ‘firmly believes that rights should not be limited based on a person’s sexual orientation or gender identity.’ A widespread philosophical view is that belief on the part of a group’s members is neither necessary nor sufficient for group belief. In other words, groups are said to be able to believe that p, even when not a single individual member of the group believes that p.
In this paper, I challenge this view by focusing on two phenomena that have been entirely ignored in the literature: group lies and group bullshit. I show that when group belief is understood in terms of actions over which group members have voluntary control, as is standardly thought, paradigmatic instances of a group lying or bullshitting end up counting as a group believing. Thus, we need to look elsewhere for an adequate account of group belief.
Biography
Jennifer Lackey is the Wayne and Elizabeth Jones Professor of Philosophy at Northwestern University and the Director of the Northwestern Prison Education Program. Most of her research is in the area of social epistemology, with a recent focus on issues involving credibility and false confessions, the epistemology of groups, and testimonial injustice. Jennifer is the winner of the Dr. Martin R. Lebowitz and Eve Lewellis Lebowitz Prize for Philosophical Achievement and Contribution and the Young Epistemologist Prize, and has received grants and fellowships from the Mellon Foundation, the American Council of Learned Societies, and the National Endowment for the Humanities.
Group Lies and Reflections on The Purpose of Social Epistemology
Liam Kofi Bright (LSE)
Abstract
In her fascinating piece on collective social epistemology, Jennifer Lackey makes the case that non-summativist accounts of group belief cannot adequately account for an important difference between Group Lies (or, separately, Group Bullshit) and Group Belief. Since non-summativist accounts fail to do this, she argues that they ought to be rejected and that we should seek an account of group belief that can do better by this standard. I am sorry to say that it is my (de re) intent to respond to this in the most irritating fashion possible, and to focus on questioning the standard of evaluation rather than the first-order claims at issue. I hope in this essay to sketch, or at least make apparent via remarking on Lackey’s essay, something like a different overall metaphilosophical orientation to the questions of social epistemology.
Biography
Liam Kofi Bright is an assistant professor of philosophy at the London School of Economics. He works on the social epistemology of science and Africana philosophy.
The supplementary volume for the conference will be available from 3-13 July.
Bill Wringe
10th July 2020 at 8:21 pm
Just to say I’ll be moderating this discussion over the weekend, and if no one has jumped in to start discussion overnight (which they are obviously welcome to), I’ll pop a couple of questions into the comments tomorrow morning (UK time) to start things rolling.
Kian Mintz-Woo
11th July 2020 at 2:32 am
Thanks to both of you for these stimulating works–and very clear presentations!
I will try with a couple questions–one for each of you–to get us started.
@Kofi: Your objection to (the normative) defence of the Group Lie Desideratum involves taking a pill to induce a mistaken/unjustified belief. I think this could conflate a bunch of things. First, many of us (and I strongly count myself in this camp) are just angry about selling dangerous and addictive tobacco products. So we are inclined to judge the company harshly given horn (negative halo) effects. Second, we might think that if these pills were intentionally taken to induce this belief, then the state of belief at the time the pill kicks in is not relevant–it’s the intention to be morally excused with respect to the mistaken state that is culpable. I am pretty sure that these are not supposed to feed into our judgments on these cases, so let’s try to set them aside. Your Anansi case is helpful here because Anansi is clearly outside their control (and likely not even someone they take seriously, more the fool them). In this case, we are left with what I think drives the relevant intuitions: the significant harm done by tobacco-sellers and their addictive products. At this point, the deep pragmatism that you and I may share kicks in and it seems like–who cares what they believe? Their actions are harmful and must be stopped! [“I do not think the introduction of a magic belief-changing pill or charm is normatively significant at all” (214)]
But I disagree. We often think that beliefs matter (e.g. mens rea) even if the outcome is the same. And the reason might not be (merely) some consequentialist public commitments; it’s the deeper claim that the ought relevant to choice is the subjective ought, and the commitment that we cannot hold people liable for doing what they objectively ought not if they are doing what they subjectively ought.
Now, you are smart enough to recognise this, so you explicitly admit that your goal is not to change the mind of JL (or those who hold similar views), since your projects may differ. But I think that even if you adopt different projects, it could well be that the pragmatist project is better when it distinguishes cases of doing what you objectively ought not while believing you subjectively ought from cases where you believe you subjectively ought not. If so for individuals, perhaps likewise for groups.
@Jennifer Lackey: I found the core argument quite persuasive. So I have a question about group responsibility that is somewhat an extension–for that reason, feel free to remain agnostic (or perhaps other commentators joining the thread can help). Let us suppose that a group agent forms a belief in any of these ways (except the conservative summative account) such that not all individuals in the group assent to the proposition p, but enough/the right others do, such that the group on that account is appropriately said to believe p. That group then goes on to act (φ) on the basis of p. Now, by construction, some of the individuals in the group do not support p, and presumably neither do they support φ-ing on the basis of p. In one sense, they are not responsible–certainly their support is counterfactually irrelevant. But insofar as they belong to that group, they might well gain or suffer on the basis of φ-ing. So, for instance, there are people in BP who could not have affected the choice to drill in the Gulf, but who would have gained if it had gone off without a hitch (as the company flourished) and lost to some extent because of its massive failure and the costs of clean-up (as the company paid)–most of the rank-and-file workers and administrative assistants, for instance, presumably don’t make these decisions but pay for (and gain from!) the decisions anyway. And I don’t think this is repugnant to common sense (for instance, we don’t think it is society’s job to shield those who could not have affected the spill from the financial consequences that stem from the company’s paying for its failure). It seems to me that this suggests that a certain type of responsibility–a sharing or facing of burdens for effects–can be divorced from causal or counterfactual responsibility for those same effects. Do you agree? Do you think that those who don’t make a difference (on some of the accounts of group belief) can reasonably face burdens in light of their group membership if the group beliefs lead the group astray in this way?
Looking forward to hearing and reading more!
Bill Wringe
11th July 2020 at 8:06 am
Hi Kian – thanks for getting the ball rolling: I’m very interested to see what Jennifer and Liam have to say in response.
Liam Kofi Bright
11th July 2020 at 12:58 pm
Thank you for your fascinating comment and question Kian! (Hope you’re well btw, was lovely meeting you 🙂 )
So you have picked up on just the difference I should have put my finger on in the paper – thank you! If one is sufficiently normatively realist to agree with the presuppositions, I think it’d be a fair summary to say that I see the project of explication (or conceptual engineering in general) as helping us design our reasoning tools and conceptual environment such that when we decide what to do in light of our model of the world, we are more likely to behave in line with what we objective-ought to do. As such, considerations of whether agents are doing what they subjective-ought become rather secondary and only of derivative importance, and in this case I argue that we ought to insist people follow what would in fact be good epistemic procedure, regardless of whether from their subjective perspective it would seem so to them. (Of course it need not always go this way; for some purposes one might think the best way to achieve the objective duties would be to insist on people doing as they subjectively feel they ought.) So you have identified my perspective quite accurately! I’d like to hear a bit more about why you disagree–why you think someone with these sorts of pragmatic concerns should be more worried about ensuring people do as they subjectively-ought?
Just one note, though. In the Anansi case I take part of what they are doing wrong to be inherent to their reasoning practices, not just their subsequent behaviour: they have set up their epistemic machinery such that it is going to output bad results (in this case with further undesirable social consequences) regardless of whether they sincerely believe the results of this machinery operating. Anansi doesn’t change that; he just intervenes to make them sincerely buy into the results of their own bad process. Now, maybe ultimately the reason one cares about evaluating the epistemic machinery is because one cares about the resultant behaviour. But I do not think one has to buy into that consequentialism to share the concern; it could be that one is intrinsically just worried about evaluating their epistemic practices–and my point is that even if it were this specifically that one wished to evaluate, the Anansi case suggests that it is not the difference between sincere and insincere belief in the output that is of primary normative concern in the sorts of uses I imagine the explication being put to.
Jennifer Lackey
11th July 2020 at 4:26 pm
Thank you so much for your excellent comment, Kian! The short answer to your question is, yes, I think a group member can incur certain responsibilities in virtue of being a member of a group, even if the member in question did not make a causal contribution to a particular action. Consider, for instance, a person who is a member of a white supremacist group. Even if this member was home sick when the group vandalized a Black church, he might still bear responsibility for the collective action of the group of which he is a member. This action is the kind of action that is undertaken by this white supremacist group, he is knowingly a member of the group, and so on. Here is another example, but where a member doesn’t even knowingly join the group in question: Americans have inherited a history of slavery, convict leasing, Jim Crow, and mass incarceration. Even if a white American living today was simply born an American and did not directly causally contribute to any of these institutions, she is part of–and indeed benefits from–a system in which Black Americans are systematically marginalized and oppressed for the benefit of whites. Given this, she may bear responsibility for white supremacy through being an American, despite not making a direct causal contribution to the institutions themselves. Of course, there are many details to work out here, but I hope to have gestured at how I think about your question. Many thanks, again!
Bill Wringe
11th July 2020 at 8:27 am
Thanks Kian – excellent questions, and I’m looking forward to seeing what Jennifer and Liam have to say about them.
In the meantime, maybe a couple of other things to think about:
For Jennifer, about the Philip Morris case:
When you were motivating non-summative conceptions of group belief, and in particular when you talked about the acceptance account and the Jane Smith case, you said something like this: ‘if the department commits, as a group, to treating Jane Smith as the best candidate, this will be reflected in its behaviour’. So you might be tempted to think that there’s some kind of quasi-functionalist constraint on when collective acceptance counts as group belief: the group belief has got to connect up with behaviour in the right kind of way.
If that’s right, doesn’t it give the non-summativist something sensible to say in the Philip Morris case? Because in that case, what makes it plausible to say that PM was lying is that at the same time as making all kinds of public pronouncements that smoking isn’t harmful, they are *also* pursuing a whole bunch of policies which only make sense on the supposition that it is (for example, seeking out various kinds of potentially muddying evidence, lobbying the AMA, etc.). Similarly with climate change denial: if you look at what actually happens (studies by Naomi Oreskes et al), you see evidence of companies doing things in a co-ordinated way that’s best explained by the groups believing one thing while the individual members believe something else. (Quick shout-out here to Sade Hormio, whose PhD thesis has a really nice discussion of some of these issues.)
(Maybe the non-summative analysis should go something like this: a group counts as believing p if its members commit to accepting p, *in the absence of behaviour on the part of the group that’s best rationalized on the assumption that they believe not-p*. If you’ve got Wittgensteinian sympathies, you might think that’s not too different from how we think about belief in the individual case.)
Question for Liam in the next comment (to avoid wall-of-text).
Jennifer Lackey
11th July 2020 at 4:44 pm
Thanks so much for this insightful comment, Bill! I don’t think that the non-summativist typically endorses a full functionalist conception of belief to motivate the view. Rather, she seems to appeal to a weaker, non-theoretical commitment–belief and action are intimately connected. If you assert that p, act as if p, reason with p as a premise, and so on, then, the non-summativist says, it is plausible to conclude that you believe that p. Unfortunately, really good liars do all of these things, too. Philip Morris, for instance, may behave exactly as one who genuinely believes that p would act–they assert that smoking is safe, act as if smoking is safe, reason as if smoking is safe, and so on. The joint acceptance theorist has a serious problem here, since some of the most dangerous and effective liars can behave in ways that are indistinguishable from those who actually believe the proposition in question. Moreover, even if there are some instances where liars reveal their true colors and deviate from the behavior that one would expect if one believed a given proposition, this is also true in the case of belief. I believe that scuba diving is safe–I allow my daughter to take scuba diving classes, I tell my husband that it is safe, and so on. However, I have momentary lapses where I can’t bring myself to scuba dive because of fear. Many thanks, again, for the great question!
Bill Wringe
11th July 2020 at 8:48 am
OK – question for Liam
I’m quite sympathetic to the idea that what we might need in collective epistemology is something more akin to Carnapian explication than straightforward conceptual analysis. But it seems to me that there are at least three ways things might go in a project like that, and that you’ve opted for one of them as opposed to two others: it’s kind of a middle-of-the-road position. So my question is partly clarificatory and, if I’ve understood right, partly an invitation to say more about why one should go one way rather than the others.
So – three ways the explicatory project could go:
1) We need a way to distinguish good and bad collective belief-forming processes/policies, and we need to get institutions that form collective beliefs to adopt good policies (and maybe sanction those that don’t). But the concepts we use there won’t distinguish between cases where institutions are culpably bad and those where they are non-culpably bad – though they could (perhaps) allow for distinguishing between culpable and non-culpable individual actors.
This seems like your view – at least as I understand the Anansi case
2) Something like 1, except a bit more conservative: the concepts we come up with do make a distinction between cases where the collective is culpable for the bad belief-forming policies it has and those where it isn’t – they’ve just made honest mistakes about how to assess evidence, or how to weight different opinions. BUT there’s no reason to think this will line up particularly closely with a natural extension of our folk conceptions of lying and bullshit to the collective case (this position looks as though it might get some support from your discussion of Shiffrin).
3) A much more radical view (I guess) where the assessment of collective epistemic policies doesn’t require anything like a conception of collective belief at all (or the concept fragments into various successors, some but only some of which look like the summative conception).
Why go for 1 rather than either 2 or 3 (if that is the way you would go)?
(Apologies for length: hope I haven’t inadvertently strawpersoned your position).
Liam Kofi Bright
11th July 2020 at 12:34 pm
Thank you for the question! I think it gives me a lovely opportunity to try and clarify what it is I am saying.
Broadly speaking, I think it is a contingency that I happen to go for something more like (1). The contingency being: it happens to seem useful to me, as best as I can tell, that we ought actually to frame our policy or regulatory interventions in terms of something like group attitudes or group beliefs. This is because I think conceptualising things in group-attitude terms is already pretty well embedded in our evaluative practices, so disentangling it would be a lot of work, and there’s no clear reason to engage in that work when we can just refine what we already have. However, despite the weight it gets in everyday evaluation (at least, it seems to me accusing someone of dishonesty or hypocrisy is one of our more fraught charges), I don’t think sincerity vs insincerity is the right normative joint to cut, given the regulatory projects I envision this playing a role in. So while I do want to be engaged in something like the same general project of analysis here, I end up advocating a somewhat more revisionary attitude than the paper I was responding to. It is this combination of factors that leads to my having a middle position here regarding how revisionary I advocate our explication ought to be.
But even if that is how I happen to come down on this question, I wish to stress that, on the whole, I think very revisionary explication projects should be more often countenanced and engaged in. I think our problem more often tends to be that we are too timid, too small-c conservative, rather than too bold. This is because I am committed to this being an essentially pragmatic normative question that must take into account the contingencies of where we actually are socially and conceptually. We are trying to tailor our conceptual apparatus to our theoretical and practical needs, and to do this well we must know where we are starting from and what those needs happen to be in the present environment. Weighing the costs and benefits will not (or need not) always come down on the side of working with close analogues of the concepts we find ourselves with. We always begin in medias res, repairing Neurath’s boat – this rules out, I grant, changing absolutely all of our conceptual apparatus at once. But for any one portion of our conceptual lives, rather dramatic changes are perfectly possible and could be the wisest course of action. We should not rule out quite large leaps, or quite dramatic changes to the ship as we sail along.
Elliot Porter
11th July 2020 at 12:51 pm
Thanks both, that was really interesting.
I wonder if there is a set of cases that troubles the waters a little. It seems natural to talk of ‘Britain’ as a group, maybe composed of citizens+state apparatus+some other institutions (or perhaps citizenry+…etc). And it seems natural to talk about ‘Britain’ having certain attitudes that look very much like beliefs, even if they are not held in the all-out 0-or-1 way that some beliefs are held (so belief-like attitudes, if not beliefs-proper). ‘Britain’ has some wrongheaded attitudes about its imperial past (around 30% of people think the empire is more to be proud of than ashamed of). More importantly, there is something wrong with this. It’s bad that this attitude is held, and it would be good for ‘Britain’ to apologise for its empire and role in transatlantic slavery.
So it looks like Jennifer’s normative case pushes us to make sense of the attitudes held by groups like ‘Britain’ or ‘Spain’. But it’s not obvious that Liam’s reframe helps us here. We can criticise the UK’s education system for eliding imperialism, but the problem with these belief-like attitudes isn’t just that they are allowed to come about in epistemically irresponsible ways, or that they endorse false propositions. It’s also that they’re evaluatively wrong. They fail to recognise harm, and fail to acknowledge wrongdoing.
So, it might be that this is a case where we’re engaged in a different project, as Liam suggests, and so we will need to construct the dispute with the different concepts you were talking about. But it seems that this case has at least one foot in the social epistemology you’re interested in? I wondered whether you both think this is comfortably excluded from the question, or whether cases like these do push us one way or the other.
(Maybe the Spanish ‘Pact of Forgetting’ might have been a better eg, since for us all to ‘forget’ p we need to keep an eye on p and avoid it, but idk anything about Spanish politics so I’m less ready to use the eg.)
Liam Kofi Bright
11th July 2020 at 1:05 pm
Thank you for the comment and the kind words! I think it might be disputed whether Britain constitutes a group in the right sort of way for these sorts of group attitudes to form, but I also think Dr. Lackey will be better placed to speak to that.
For my part I will stress a different point. Namely, I think we ought often to see our goal as being to eliminate the harm, rather than coming up with a conceptual analysis that identifies the harm appropriately. In the case discussed in the paper, for instance, I do not deny (or do not mean to deny!) that there is something bad about lying; I just do not think the appropriate or most useful task for an explicator to be engaged in is picking that out. Rather, I think the task is to aid regulatory or public efforts to reduce or eliminate the harms of corporate misinformation or propaganda, and in this case to do so by aiding people in holding those responsible to account.
So likewise here. I think that actually holding our history curricula to higher epistemic standards (in terms of insisting that students as often as possible come out with true and well-justified beliefs regarding, as well as well-grounded modes of reasoning about, British history) might not be a direct identification of the harm in question, but would in fact do much to curtail it. So I think even in this case something like the same approach could well be appropriate.
Joel Yalland
11th July 2020 at 5:42 pm
Great talks from both Jennifer and Liam, thank you both for your contributions!
I have one very brief question for Jennifer. It seems that one element of both individual and perhaps group belief that hasn’t been considered is that of uncertainty. On the summativist understanding of group belief, it stands to reason that if a substantial number of individuals are not swayed either to accept or to deny the contested belief, then the group has no (uniform or overall) belief. I grant that on the non-summativist understanding we can’t characterise the disconnect between the (public) group belief and the different belief/beliefs of the individuals as uncertainty, because this disconnect is presumably ignored and overridden by the attribution (publication?) of the group belief.
I agree with the worry you raise about joint acceptance as compelling individuals to accept and defend the belief, to reflect the group belief in their practices or otherwise obscure the truth. Does this then present a point in favour of the summativist view, since lies and BS could simply be accommodated within group belief as insincere or ersatz, just as uncertain individual beliefs could be said to generate uncertain (or perhaps conflicted, lacking full conviction) group belief? Or is it merely that non-summativists fail to sufficiently accommodate lies and BS?
Jennifer Lackey
11th July 2020 at 6:42 pm
This is a great question, Joel–many thanks! Yes, I think views of group belief with a summativist constraint can accommodate uncertainty with some ease, insofar as the doxastic states of the individuals would be reflected in the doxastic states of the group. Thus, to the extent that individuals can be uncertain, so, too, can groups. On non-summativist views, the situation is a bit less clear. On the joint acceptance account, for instance, it seems that it would have to be the case that the members of the group jointly accept an uncertain state. On the judgment aggregation view, it would have to be the case that members vote for states of uncertainty. Maybe there is nothing in principle problematic about these responses, but in the same way that one can accept that p in order to lie or bullshit, one can accept uncertainty about whether p in order to be deceptive. Thanks, again, for the question!
Daniel Whiting
12th July 2020 at 11:47 am
Thanks both for such interesting talks. I’d like to ask Jennifer Lackey a question.
In the written version, you consider a response to your challenge to the joint acceptance account—adding a positive epistemic condition. The condition you consider, roughly, is that the joint acceptance be motivated by the evidence. Your response is that this doesn’t get the right result in the Medical Association case or in cases of wishful thinking. For what it’s worth, that seemed right to me. But I wonder if there’s a proposal in the same spirit that might survive your objections.
Consider this weaker proposal, inspired by the aim-of-belief literature: the joint acceptance must be regulated by the aim to accept only what is true. This seems to handle the Medical Association case. Plausibly, the Board would not have accepted the relevant proposition had its members not taken the evidence to sufficiently support that proposition. Cases of wishful thinking are tricky–for everyone!–but a not unpopular approach is to understand regulation in dispositional terms, such that the presence of a competing motivation might block it without removing it.
The proposal might then help with cases of lies and bullshit, since joint acceptance in those cases is not regulated by the above aim.
Of course, you can’t consider every possible supplement to the joint acceptance account – in the paper or in these comments! But, if you’ve more to add, I’d be interested to hear it.
Bill Wringe
12th July 2020 at 6:55 pm
Since I think comments are about to close, I’d like to finish by thanking both our symposiasts for really interesting talks (and also for coming back and engaging with the questions), and also Simon, Graeme, and Alyx for putting on an excellent conference. Thank you!
Susanne Burri
14th July 2020 at 1:18 pm
Not sure this comment will still be added, but a brief question for Liam, based on watching the two videos but not reading the papers: my inclination is to think you are right that our normative focus should be on good epistemic procedures and whether these were followed when it comes to groups. But lies still matter as well, and independently, as they add something malicious that is of interest in its own right. More precisely, what I have in mind is that a group that lies will hide behind procedures it claims to be apt ways of taking into account relevant evidence. And this is very different from negligently (say) failing to implement or follow such procedures. Such malicious conduct can be difficult to prove, and naïve people, or those whom it happens to suit, may come to believe–as a consequence of the malicious conduct–that there really are reasonable differences of opinion when actually there are not (think climate change). And this may be the goal of the group that lies. Etc. In short, group lying is bad and harmful in ways that go beyond what you say we should care about, and we should care about it also.