Many of the beliefs that play a fundamental role in our worldview are largely the result of the communities in which we’ve been immersed.
Religious parents tend to beget religious children, liberal educational institutions tend to produce liberal graduates, blue states stay mostly blue, and red ones stay mostly red. Of course, some people, through their own sheer intelligence, might be able to see through fallacious reasoning, detect biases and, as a result, resist the social influences that shape most of our beliefs. But I’m not that special, and so learning how susceptible my beliefs are to these sorts of influences makes me a bit squirmy.
Let’s work with a hypothetical example. Suppose I’m raised among atheists and firmly believe that God doesn’t exist. I realise that, had I grown up in a religious community, I would almost certainly have believed in God. Furthermore, we can imagine that, had I grown up a theist, I would have been exposed to all the considerations that I take to be relevant to the question of whether God exists: I would have learned science and history, I would have heard all the same arguments for and against the existence of God. The difference is that I would interpret this evidence differently. Divergences in belief result from the fact that people weigh the evidence for and against theism in varying ways. It’s not as if pooling resources and having a conversation would result in one side convincing the other – we wouldn’t have had centuries of religious conflict if things were so simple. Rather, each side will insist that the balance of considerations supports its position – and this insistence will be a product of the social environments that people on that side were raised in.
The you-just-believe-that-because challenge is meant to make us suspicious of our beliefs, to motivate us to reduce our confidence, or even abandon them completely. But what exactly does this challenge amount to? The fact that I have my particular beliefs as a result of growing up in a certain community is just a boring psychological fact about me and is not, in itself, evidence for or against anything so grand as the existence of God. So, you might wonder, if these psychological facts about us are not themselves evidence for or against our worldview, why would learning them motivate any of us to reduce our confidence in such matters?
The method of believing whatever one’s social surroundings tell one to believe is not reliable. So, when I learn about the social influences on my belief, I learn that I’ve formed my beliefs using an unreliable method. If it turns out that my thermometer produces its readings using an unreliable mechanism, I cease to trust the thermometer. Similarly, learning that my beliefs were produced by an unreliable process means that I should cease to trust them too.
But in the hypothetical example, do I really hold that my beliefs were formed by an unreliable mechanism? I might think as follows: ‘I formed my atheistic beliefs as a result of growing up in my particular community, not as a result of growing up in some community or other. The fact that there are a bunch of communities out there that inculcate their members with false beliefs doesn’t mean that my community does. So I deny that my beliefs were formed by an unreliable method. Luckily for me, they were formed by an extremely reliable method: they are the result of growing up among intelligent, well-informed people with a sensible worldview.’
The thermometer analogy, then, is inapt. Learning that I would have believed differently if I’d been raised by a different community is not like learning that my thermometer is unreliable. It’s more like learning that my thermometer came from a store that sells a large number of unreliable thermometers. But the fact that the store sells unreliable thermometers doesn’t mean I shouldn’t trust the readings of my particular thermometer. After all, I might have excellent reasons to think that I got lucky and bought one of the few reliable ones.
There’s something fishy about the ‘I got lucky’ response because I would think the very same thing if I were raised in a community that I take to believe falsehoods. If I’m an atheist, I might think: ‘Luckily, I was raised by people who are well-educated, take science seriously, and aren’t in the grip of old-fashioned religious dogma.’ But if I were a theist, I would think something along the lines of: ‘If I’d been raised among arrogant people who believe that there is nothing greater than themselves, I might never have personally experienced God’s grace, and would have ended up with a completely distorted view of reality.’ The fact that the ‘I got lucky’ response is a response anyone could give seems to undermine its legitimacy.
Despite the apparent fishiness of the ‘I got lucky’ response in the case of religious belief, this response is perfectly sensible in other cases. Return to the thermometers. Suppose that, when I was looking for a thermometer, I knew very little about the different types and picked a random one off the shelf. After learning that the store sells many unreliable thermometers, I get worried and do some serious research. I discover that the particular thermometer I bought is produced by a reputable company whose thermometers are extraordinarily reliable. There’s nothing wrong with thinking: ‘How lucky I am to have ended up with this excellent thermometer!’
What’s the difference? Why does it seem perfectly reasonable to think I got lucky about the thermometer I bought but not to think that I got lucky with the community I was raised in? Here’s the answer: my belief that the community I was raised in is a reliable one is itself, plausibly, a result of growing up in that community. If I don’t take for granted the beliefs that my community instilled in me, then I’ll find that I have no particular reason to think that my community is more reliable than others. If we’re evaluating the reliability of some belief-forming method, we can’t use beliefs that are the result of that very method in support of that method’s reliability.
So, if we ought to abandon our socially influenced beliefs, it is for the following reason: deliberation about whether to maintain or abandon a belief, or set of beliefs, due to the worries about how the beliefs were formed must be conducted from a perspective that doesn’t rely on the beliefs in question. Here’s another way of putting the point: when we’re concerned about some belief we have, and are wondering whether to give it up, we’re engaged in doubt. When we doubt, we set aside some belief or cluster of beliefs, and we wonder whether the beliefs in question can be recovered from a perspective that doesn’t rely on those beliefs. Sometimes, we learn that they can be recovered once they’ve been subject to doubt, and other times we learn that they can’t.
What’s worrisome about the realisation that our moral, religious, and political beliefs are heavily socially influenced is that many ways of recovering belief from doubt are not available to us in this case. We can’t make use of ordinary arguments in support of these beliefs because, in the perspective of doubt, the legitimacy of those very arguments is being questioned: after all, we are imagining that we find the arguments for our view more compelling than the arguments for alternative views as a result of the very social influences with which we’re concerned. In the perspective of doubt, we also can’t take the fact that we believe what we do as evidence for the belief’s truth, because we know that we believe what we do simply because we were raised in a certain environment, and the fact that we were raised here rather than there is no good reason to think that our beliefs are the correct ones.
It’s important to realise that the concern about beliefs being socially influenced is worrisome only if we’re deliberating about whether to maintain belief from the perspective of doubt. For recall that the facts about how my particular beliefs were caused are not, in themselves, evidence for or against any particular religious, moral, or political outlook. So if you were thinking about whether to abandon your beliefs from a perspective in which you’re willing to make use of all of the reasoning and arguments that you normally use, you would simply think that you got lucky – just as you might have got lucky buying a particular thermometer, or reaching the train moments before it shuts its doors, or striking up a conversation on an airplane with someone who ends up being the love of your life.
There’s no general problem with thinking that we’ve been lucky – sometimes we are.
The worry is just that, from the perspective of doubt, we don’t have the resources to justify the claim that we’ve been lucky. What’s needed to support such a belief is part of what’s being questioned.
Miriam Schoenfield is associate professor in the Department of Philosophy at the University of Texas at Austin.
This article was originally published at Aeon and has been republished under Creative Commons.