With fake news and disinformation seemingly thriving during the COVID-19 pandemic, J. McKenzie Alexander looks at the epistemology and psychology of fringe beliefs.
It goes without saying that the coronavirus pandemic has brought about huge changes in contemporary life. Many international flights are grounded, many people have been kept inside their homes in lockdown, and economies across the world are virtually frozen. Enormous uncertainty exists regarding what form the “new normal” will take when this is all over. And, as if all of that were not bad enough, arsonists throughout Europe have set about torching 5G towers because they believe the towers are somehow connected to the spread of coronavirus.1 According to the Financial Times, as of 16 April at least 60 towers had been burnt down in the UK, where the movement started, and more across Europe. Given that these beliefs are not based on evidence, and lead to actions detrimental to society, believing that 5G towers are implicated in the spread of coronavirus seems like some seriously weird sh*t.
Now, believing weird sh*t (to introduce a term of art) is by no means new. People have believed weird sh*t ever since humans evolved the ability to form beliefs. But one thing which cries out for explanation is why, in a time when scientific knowledge is growing faster than ever, so many people believe so much weird sh*t. For example: that airplane contrails are actually a governmental effort to release toxic chemicals into the sky, that Hillary Clinton was involved in a paedophile ring run out of the Comet Ping Pong pizzeria, that Barack Obama was not born in the United States, and – somewhat reassuringly down-to-earth in contrast – that anthropogenic climate change has been vastly overstated. What’s going on? One important aspect, I suggest, is that in contemporary society, for some people, what matters most about beliefs is not their truth-value, but rather their role in signalling group membership and in alleviating unmet psychological needs.
So how should we think of beliefs? Does it matter if they are not true? The question is tricky because some beliefs – like those underlying the theory of Newtonian mechanics – are, strictly speaking, false. But Newtonian mechanics, although false, is approximately true and well-founded on evidence. We can use Newtonian mechanics to accurately describe the world and to make powerful interventions in it. Although both are literally false, there is a world of difference between Newtonian mechanics and the weird sh*t some people believe.
Two models of belief formation
When philosophers think about beliefs – cognitive attitudes towards propositions that have a truth-value – they often worry about the following questions: (i) what counts as appropriate evidence for a belief? or, (ii) what is the appropriate degree of belief, given the evidence? Both questions are motivated by the desire to avoid false beliefs. We cannot entirely avoid error, but we can try to minimise the likelihood of error. And minimising the likelihood of error means that an epistemically careful, rational agent will try to ensure that they base their beliefs on appropriate evidence. And this is where things become interesting, because people often form beliefs by relying on the wider community in which they are embedded.
Let’s consider an issue which affects us all: anthropogenic climate change. The belief that it exists has a truth-value independent of whether we actually know it. One idealised model of how rational actors form beliefs is that they use a process, like Bayesian updating, to adjust their credences based on the available evidence. However, in the real world, most of us face the problem that we cannot engage with the relevant evidence directly because we have neither the time nor the expertise required. In cases like this, we often rely on another person’s expert opinion to form a belief by proxy, using their judgement to fix our belief. Or, since most of us don’t have a single scientific expert on whom we rely, what we do is rely on the considered opinion of a reliable epistemic community. For example, we rely on epidemiologists to tell us about the spread of SARS-CoV-2, and what we should do to avoid developing COVID-19. Traditionally, in the case of beliefs about the nature and causal structure of the world, we would look to the epistemic community of scientists: their expert knowledge and track record give them the credibility to act as arbiters on such matters. In this idealised model, beliefs aim at the truth.
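For readers who have not encountered it, here is a minimal sketch of the Bayesian updating rule mentioned above; the hypothesis H and the evidence E are generic placeholders rather than anything specific to the climate example:

\[ P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)} \]

For instance, an agent who starts with a credence of 0.5 in H, and who regards the evidence E as three times as likely if H is true (0.9) than if it is false (0.3), should revise their credence upwards to 0.45/(0.45 + 0.15) = 0.75. The new credence is simply the old one, reweighted by how strongly the hypothesis predicts the evidence.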
What the model of idealised rational belief formation neglects is the fact that, for some, perhaps many, people, what matters most about a belief is the relationship between the person and a group associated with that particular belief. Individual belief often serves as a signifier of group membership, and endorsing a particular belief often becomes an informal requirement of group membership. The idealised model assumes the following process of belief formation: beliefs should be based on evidence and, therefore, when I cannot fix a belief with sufficient reason to be confident that it is true (or likely to be true), I should rely on a reliable epistemic community whose considered judgement is unlikely to lead me astray. An alternative sociological model conceives of belief formation as follows: my beliefs are often associated with a particular group, and that group contributes (sometimes significantly) to my social identity. My social identity, as a source of esteem, friendship, and camaraderie, constructs meaning in my life. Preserving my social identity is a matter of great importance to me. Certain matters are earmarked as constitutive of membership in that group and, hence, when it comes to beliefs concerning those matters, they are not determined by evidence but rather determined by my group identity. And this can happen even when, objectively, none of the group members are qualified to advise on those beliefs. Why would any agent behave in such a way? Because doing so is the best way to ensure continued group membership.2
I have described these two models of belief formation as though they were mutually exclusive, but we must keep in mind that both can operate within individuals to varying degrees at different times. The two models should be thought of as ideal types, useful to distinguish for the purpose of understanding. It is not uncommon for both processes to operate at the same time, and for the individual to feel torn between accepting a belief constitutive of group membership, and rejecting that belief based on evidence to the contrary. Some religious beliefs fall into this category, particularly ones concerning metaphysics. And we should also not overlook that the category “a believer in science” can also be understood as a social identity, so there is interplay between the two models as well.
The decoupling of belief and truth
The inversion of grounds for belief on the sociological model might strike some as odd. To begin with, it would seem to decouple beliefs from reality in ways that the “reality-based community”3 would legitimately see as harmful. If there is no mental state a person can adopt which would, for example, make bleach safe to consume,4 why wouldn’t blind deference to beliefs determined by a group identity – especially that kind of weird sh*t – eventually be eliminated by natural selection? How can obviously false, potentially harmful and socially damaging beliefs persist and spread?
Here we must recognise that, in the developed world, we have engineered society in such a way that we rarely need to rely on the accuracy of the vast majority of our beliefs in order to navigate the world safely. If you have what David Graeber calls a “bullshit job”, you don’t need many accurate beliefs to do your job and get paid. And if your most deeply held beliefs go against what you are required to do in your job (because they are racist, sexist, Islamophobic, not politically correct, anti-religious, etc.), you can go through the motions, bracketing what you really think and telling yourself “this is what I need to do in order to get paid”. A person doesn’t really need to have accurate scientific beliefs, coherent political beliefs, or reasonable economic or social beliefs to go to the supermarket and buy food. A person doesn’t have to believe in evolutionary theory to go to the doctor, be prescribed antibiotics, take them and get better. A person doesn’t have to believe in general relativity to use their phone’s GPS to find their way, even though the technology would malfunction if its designers didn’t take the gravitational time shift implied by general relativity into account. People who believe weird sh*t can free ride on those in society who strive for truth, because technology and social institutions provide a pretty big buffer between any single person’s beliefs and the stark fist of reality.
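To give a sense of the magnitude in the GPS case – a standard back-of-envelope illustration, not a figure from the article – relativistic effects make the satellites’ onboard clocks run fast by roughly 38 microseconds per day relative to clocks on the ground. Since GPS positions are computed from signal travel times, an uncorrected drift of that size would accumulate into a positioning error of roughly

\[ \Delta x \approx c\,\Delta t \approx (3 \times 10^{8}\ \text{m/s}) \times (38 \times 10^{-6}\ \text{s}) \approx 11\ \text{km per day}, \]

which would render the system useless within hours.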
Of course, to someone fundamentally committed to truth it appears deeply hypocritical to behave in this way but, from the weird sh*t believer’s point of view, so what? The person who thinks like that will still feed themselves, be cured of their infection, and be able to get to where they want to go. The only beliefs a person really needs to survive in a modern society are elementary ones like how to cross the street without being run over, how to drive a car (not an issue if you can take public transportation), how to pay your bills on time, how to cook over a gas stove without blowing the house up (not an issue if you have an electric hob), how to use your smartphone, and so on. These beliefs are instances of highly specific pragmatic local knowledge, and are compatible with a wide variety of nonstandard theoretical beliefs about the world. Members of the Flat Earth Society can negotiate daily life just fine. Society’s collective technological prowess has, for better or for worse, radically decoupled people’s ability to survive from the theoretical coherence, truthfulness and accuracy of their beliefs. Furthermore, certain false beliefs can even be fitness-enhancing. If a person erroneously believes that crime is on the increase and, as a result, they insist on staying home in the evenings rather than going out in public or driving places, this change in behaviour reduces their exposure to car accidents and being mugged or otherwise assaulted.
So let’s recap: this decoupling of a person’s ability to survive from the truthfulness and accuracy of their beliefs frees up a person’s system of beliefs to assume a different functional role. What may matter most about a belief is not its particular content or actual truth-value, but what that belief signals about a person and a group with which they identify. The denial of anthropogenic climate change provides a nice illustration of this phenomenon in action. Climate change requires a coordinated, global response in order to be combated effectively. The actions of any single person are entirely irrelevant to the global outcome. Furthermore, due to the long delay in the climate’s response to environmental legislation, it is essentially impossible for any person to see a material difference between what they do and any improvement to the environment over the short to medium run. Given this, beliefs about whether anthropogenic climate change exists can be co-opted to serve a signalling function about which group a person belongs to, because over the short run they are decoupled from noticeable material consequences.
Looking back over recent history, this seems to be what has happened with the Republican party in the US on several issues. I find it remarkable that, under Nixon, the Republican party passed a number of pro-environment pieces of legislation and founded the EPA. Yet today, the majority of Republican supporters are sceptical about climate change. Why? The importance of their social identity as Republicans leads them to defer to a core belief of their group. The real question then becomes: why have Republicans arrived at a consensus denying climate change when the evidence points to the contrary? The answer, I believe, is this: because the majority of economic and business interests represented among top Republican donors benefit from continuing with business as usual, rather than making the effective changes required to combat climate change. To a great extent, the Republican party line has been ideologically captured and subordinated to these economic and business interests. Furthermore, the denial of climate change can be spun, in the public context, in two ways. First, as a rejection of “liberal science” with its purported political bias. (The comedian Stephen Colbert once said that “reality has a well-known liberal bias”.) Second, as an attempt to bring back traditional extractive industries (such as coal mining) or introduce new ones (such as fracking), which appeal to communities that have historically voted conservative. In either case, both aspects of climate change denial primarily have the function of reaffirming the group identity of those who adopt the belief and advancing their local self-interest. Concerns about truth and evidence take a back seat to these other functions.
The social and psychological functions of systems of beliefs
This last example illustrates how the various social functions played by beliefs can subordinate the truth-functional role of beliefs. Once we recognise that there are alternative functional roles played by beliefs that can trump an interest in the truth, we see that the way to confront beliefs in weird sh*t involves engaging with the underlying functions that are served by that system of beliefs, rather than engaging with their theoretical content or evidential basis. And this shift towards the functionality of systems of beliefs means we have to acknowledge that, sometimes, the real social function served by a system of beliefs is not necessarily known by many – perhaps any – of the people who have those beliefs.
What we are talking about, indirectly, is the well-known distinction between manifest and latent functions, which features in classic work in sociology and anthropology. A manifest function of a social practice is one which that practice has been deliberately designed to have. For example: randomised police patrols keep criminals from being able to predict a safe time to commit burglaries. A latent function of a social practice is one which that practice has, but for which it was not deliberately designed. The classic anthropological example is how the practice of extended lactation in hunter-gatherer tribes (i.e., breastfeeding infants for longer than twelve months) has the latent function of controlling the population of the tribe, since breastfeeding reduces fertility.
When it comes to the social functions of systems of belief, the distinction between manifest and latent functions helps us understand better why people believe weird sh*t. Beliefs which seem irrational when we treat them as evidence-based vehicles of truth can easily make sense from another point of view. When we appreciate that systems of beliefs can have, as their latent function, the satisfaction of other needs, the fact that those beliefs are unresponsive to contrary evidence no longer seems unusual. If a system of beliefs gives me a way to understand my place in society, an explanation of why I am unhappy (or happy), unsuccessful (or successful), and a justification for feeling the way I feel, there is little incentive to revise those beliefs in the light of evidence to the contrary, because doing so would leave me with unaddressed psychological and sociological needs.
This brings us back to the phenomenon with which we started: the torching of 5G masts. Providing evidence that 5G masts are not harmful will not stop the arsonists because, I suspect, this is not what is driving their actions. I think they are driven by a combination of several factors. To begin with, there is widespread fear of SARS-CoV-2 and a belief – in many cases justified – that governments have fallen short in their response, failing, in whole or in part, in their duty to keep the public safe. Given this, people want there to be an easy solution, a “magic bullet” which will make everything better. In addition, there is a long-standing feeling held by many that decisions which negatively impact them are imposed from the outside: a new road that increases pollution, an increased number of flights to an airport which raises noise levels, or the installation of an unsightly and unwanted 5G mast in the community. And we know that some industries, like the tobacco and fossil fuel industries, really have in the past deliberately worked to suppress information about the harmful effects of their products. This volatile cocktail of unaddressed desires, feelings and beliefs creates the space where beliefs in weird sh*t can multiply and flourish. A virus of the mind, so to speak. The metaphor is apt because, just like a virus, you cannot reason with it: eradication requires identifying, and removing, the background conditions which made it possible for it to take hold and spread in the first place.
J. McKenzie Alexander is Professor of Philosophy and head of LSE’s Department of Philosophy, Logic and Scientific Method. His primary field of research concerns evolutionary game theory as applied to the evolution of morality and social norms, but more recently he has worked on the foundations of decision theory. He also has broad interests in the philosophy of science and social science.
Notes
1 – https://www.ft.com/content/1eeedb71-d9dc-4b13-9b45-fcb7898ae9e1
2 – In speaking of a single group, I do not mean to ignore or downplay the importance of intersectionality. In some cases, of the many groups to which we belong simultaneously, there is one which is particularly salient. In those cases, the salient group can drive the process of belief formation. However, in other cases, no one group is salient and different groups can pull in different directions. How a person negotiates this is a fascinating question, but one which falls outside the scope of the present discussion.
3 – In a 2004 article in the New York Times, Ron Suskind quoted an anonymous White House aide who said people “in what we call the reality-based community, believe that solutions emerge from your judicious study of discernible reality.” The aide then said, “That’s not the way the world really works anymore.”
Comments
Really enjoyed reading the article. A few points this article brought to my mind:
Some cultural evolutionary theorists have used the umbrella term “cultural replicators” to capture the way conceptual entities such as ideas and beliefs replicate and perpetuate through social interactions. In my dissertation I discussed an interpretation of cultural fitness which is most suitable for the task of explaining scientific change at the population level (i.e. theory acceptance in the scientific community). I argued that the cultural fitness of a theory may be measured in terms of the number of its proponents (*In the history of science, there are many instances in which some theories are accepted while others are dismissed and rejected. In a paradigm shift, in particular, a new theory replaces the old theory as it comes to be widely accepted by members of the scientific community.), and that this interpretation of cultural fitness may be applied to many other areas. These include social and political change, where the same pattern of replicator dynamics can be observed (see the sketch after this comment). The implication of this for the “5G” example is that, even if the idea that “5G towers are dangerous” has spread and seemingly become popular among the public, it did so only to a limited extent – perhaps because it embodies some emotions that the public share (e.g. discontent, fear, an unwillingness to accept something). But it is never going to achieve the same popularity as scientific ideas such as gravity or natural selection because of its low cultural fitness.
Additionally, the formation of ideas or beliefs can in principle be traced to genes through a long causal process that involves protein synthesis, neuron firing, etc. (*Although there have been objections on the grounds that there is no one-to-one correspondence between genes and ideas. It is pointed out that genes do not function in isolation but as part of integrated gene complexes. In other words, being a discrete unit is not a necessary requirement for an entity or pattern to be treated as a cultural replicator. Neural Darwinists have also made attempts to treat neuronal groups or patterns of neural connectivity as the unit of selection.). The general idea here is that today some “new” genes are trying to achieve dominance in the world, but only through producing “mutant strategies” such as lies, provocative content, or a sexually attractive look or sound (there is scientific evidence that sensory receptors in humans are highly evolutionarily labile, and I conjecture that this creates room for some genes to mutate and “invade” the population they belong to by producing stimuli that directly target these structures).
In the Qin dynasty in China, there was a powerful political figure named Zhao Gao. In a meeting, he deliberately pointed to a deer and said “horse”. No one in the room dared to point out his mistake! In doing so, Zhao was able to tell his supporters (or at least those who appeared to be supporters) from those who dared to challenge him in public. Zhao was so powerful that anyone identified as his enemy would be banished or killed the next day. But this did not mean that the people in the room firmly believed that the deer was a horse.
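The replicator dynamics alluded to in the first comment above has a standard compact form. As a sketch, with the cultural interpretation of the variables supplied here for illustration: if \(x_i\) is the proportion of a community that accepts theory \(i\), and \(f_i(x)\) is the cultural fitness of that theory, then

\[ \dot{x}_i = x_i\left(f_i(x) - \bar{f}(x)\right), \qquad \bar{f}(x) = \sum_j x_j f_j(x), \]

so a theory spreads just in case its cultural fitness exceeds the population average – one way of making precise the claim that a belief like “5G towers are dangerous” can spread to a limited extent without ever displacing higher-fitness scientific ideas.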
Deep, meaningful article and follow-up comment, but there is an important element missing: the growing mistrust of science across the spectrum. Independent scientists are concerned, but not those funded by corporates like Big Pharma and the telecom industry.
It’s clear that a rapidly increasing number of citizens around the world just want unbiased, clear input: unequivocal confirmation of the real truth, not whitewashes or immediate fact-checker denials of anything outside the prevailing narrative. People aren’t prepared just to believe everything without facts. You must know there is a deep credibility chasm for all things mainstream and media-based too – so can you blame them?
I think that stupid, irrational beliefs are a failure of parents and the educational system. People should be brought up so that they have respect for science, facts and rational analysis. I don’t see any other way to combat this than to change the way we educate children.
The belief that 5G causes coronavirus is one of the most bizarre beliefs I have ever heard of. Really good article.