Dr Ben Tappin is Assistant Professor in Psychological and Behavioural Science.
What is your field of study?
I would call my field of study “persuasion”, which in practice straddles several more traditionally defined fields. The common thread in my work is a motivation to understand when and why people change their minds and behaviours (or not, as the case may be), and the implications of that for themselves and wider society. I mostly study persuasion in the context of human social and political behaviour and how it interacts with technology.
Can you tell us a bit about your academic background, and how it led you to your current field of study?
My motivation to do a PhD was originally to better understand religious belief. At the time I was an atheist, but had friends who were not, and I was motivated to understand that difference. So I began my PhD in 2015 with the goal of investigating religious belief and disbelief formation. A year later, Donald Trump was elected for his first term as US president and the UK voted Leave in the EU referendum. These events had a big impact on the direction of my PhD. Many observers in 2016 (and since) tried to make sense of those events by appealing to the influence of propaganda and misinformation on people’s beliefs, and ultimately on their voting behaviour – in a word, by appealing to persuasion. It felt important to me to understand whether that was true, but I also felt like the evidence for that explanation wasn’t super convincing. So, by the end of 2016 I had resolved to apply the theories and methods I had originally been using in my PhD to study religious belief formation to the domain of political attitudes and behaviour instead. And I haven’t looked back since!
What is your favourite project or piece of research you’ve worked on to date?
In 2023 I published a piece of research whose results were a big surprise to me, so I think that takes the cake for my favourite so far. I’m also a big nerd for statistical methods and the R programming language, and I got to fit some fancy models and make some unusual visualisations for that project, so that’s another reason it ranks as my favourite.
The motivation for the project was the following. In the field of political psychology, it is widely assumed that people’s loyalty to a particular political party can distort their information processing, making them less receptive, or even unreceptive, to arguments that go against the party line. I think this also fits with many people’s personal experience of talking with relatives and friends who strongly support a particular political party. However, despite the intuitive appeal of this assumption, there is a distinct lack of evidence that people’s party loyalty actually causes them to be unaffected by arguments against the party line. This is because identifying causality requires us to compare what happened in two different worlds: one world where people had their party loyalty, and one world where they didn’t. And the problem of course is that we can only observe one world.
So my colleagues and I attempted to identify the causal effect of party loyalty on people’s receptivity to arguments in the way we know best: by conducting an experiment. We asked people for their attitudes on various political issues. Before giving their opinion, however, they were randomly assigned to one of four groups, in which they were shown either (1) the position of their favoured party leader (e.g., Donald Trump) on the issue; (2) an argument for the position opposite to their leader’s; (3) both the leader’s position and the argument; or (4) neither. This experimental design allowed us to test how receptive people were to the argument in each of our two “worlds” of interest: one world in which people were explicitly confronted with the fact that their party leader took the opposite position (activating party loyalty), and the other world in which they weren’t so confronted. Based on the widespread assumption described above, we expected people to be less receptive to the argument when their party loyalty was explicitly activated.
But we didn’t see that happen. While telling people that their leader supported a certain viewpoint did cause them to update their attitudes in the direction of their leader (replicating previous research), it didn’t cause them to ignore or discount the countervailing arguments. On the contrary, exposure to the arguments also caused people to update their attitudes, this time in the direction of the argument. They were receptive. And this receptivity to the argument was similar even when people were explicitly confronted with the fact that their leader took the opposite position. The result was consistent across the range of policy issues and demographic groups we examined, and it came as a genuine surprise to me. I had expected at least some diminished receptivity to the arguments when party loyalty was staring people in the face. But we didn’t find any evidence of that in this research, which suggests that party loyalty doesn’t render people unreceptive to the persuasive effect of counterarguments, even if it often feels that way.
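For readers who want to see the logic of the design in code, below is a minimal sketch in R (the language mentioned above) of how such a 2 × 2 design could be simulated and analysed. It is not the study’s actual code: the variable names, effect sizes, and model are hypothetical, and the point is simply that the “two worlds” comparison boils down to an interaction term, i.e. whether the argument’s effect shrinks when the leader cue is present.

```r
# A minimal sketch of the logic of a 2 x 2 design like the one described above.
# NOT the study's actual code: variable names, effect sizes, and the model
# are hypothetical illustrations only.

set.seed(2023)
n <- 4000

# Random assignment: leader cue shown (1) or not (0), crossed with argument shown (1) or not (0)
leader_cue <- rbinom(n, 1, 0.5)
argument   <- rbinom(n, 1, 0.5)

# Simulated issue attitude on a 0-100 scale. The leader cue pulls attitudes
# toward the leader's position; the argument pulls them the opposite way.
# The interaction is set to 0 here, mirroring the headline result described
# above: the argument's effect is not blunted when party loyalty is activated.
attitude <- 50 + 8 * leader_cue - 6 * argument +
  0 * leader_cue * argument +
  rnorm(n, sd = 15)

dat <- data.frame(attitude, leader_cue, argument)

# The key test is the interaction term: is the argument's effect smaller
# when the leader cue is present? A coefficient near zero means "no".
summary(lm(attitude ~ leader_cue * argument, data = dat))
```

In this framing, a leader_cue:argument coefficient near zero corresponds to the result described above: people updated towards the argument to a similar degree whether or not their party loyalty had been activated.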
Who is your biggest academic inspiration?
A tough question! There are several answers I could give to this that feel equally valid to me, but to spare the poor readers I’ll restrict myself to just one: my parents.
My dad was an academic, albeit in a field very different from my own. He was a marine chemist, which meant he spent time on boats taking samples from rivers and oceans, and yet he never learnt to swim and hates being in the water. When I was growing up, I inferred from this that my dad must really have loved his job to put himself in that position repeatedly! He also served as my first role model, showing me that a career in academia was something to realistically consider, and I think that had an important influence on me. My mum wasn’t an academic, but I think I also learned an important lesson from her career choices that nudged me toward an academic path. Fairly late in her working life she switched into working in mental health care from something very different she had been doing for a long time previously, motivated by a newfound passion for improving the quality of mental health care that patients were receiving in the UK. I was a teenager at the time, and my mum’s behaviour licensed and encouraged me to try and pursue something I was passionate about in my working life too (to be fair to them, my parents also explicitly encouraged this, but seeing them actually do it themselves is a different, arguably more credible, signal!). It’s a tired trope which I’ll deservedly receive flak for saying, but I feel like I’m doing something I’m passionate about by working as an academic. I’m very fortunate I received the opportunities and support to do so.
Are there any big questions or problems in your field that you hope will be solved in your lifetime?
From my corner of the field, a big question for persuasion research right now is: how persuasive could advanced AI models become? Many people seem concerned that in the near or medium future AI could become superhumanly persuasive, capable of influencing public opinion and behaviour at a scale and magnitude far beyond what has previously been possible. In such a future, actors who control the most capable AI models could have a large political advantage, potentially compounding the influence of wealth in politics, making it easier for existing power to be “locked in”, and/or augmenting foreign political interference. The field has started trying to answer this question, including my colleagues and me, but I think a comprehensive answer is a ways off, in part because of the uncertainty over exactly how far and how fast AI may advance in the coming years. My hope is that the research will keep pace with the technical developments in AI, and can provide accurate information about the potential persuasion risks of advanced AI so that society is clear-eyed about any such risks while enjoying its many promised benefits.
How do you see AI impacting psychological and behavioural science more generally?
It seems like there are two broad ways in which AI is having an impact so far. One is as a tool through which to enhance current research practices. For example, large language models are unlocking new opportunities for large-scale text analysis, and various research teams are exploring the potential of LLMs for simulating human responses in experiments (with a variety of use cases) and for designing adaptive and highly personalised questions and stimuli in surveys. These are just a few of the ways in which AI could potentially enhance research practices and open new doors, but there are many more. Of course, despite the potential benefits for research, it’s important we remain critically minded and do the hard work of validating such uses of AI tools instead of blindly adopting them.
Second, AI is quickly becoming a topic of study in and of itself for psychological and behavioural science, and for good reason. It seems highly likely that interactive AI tools will be increasingly deployed throughout society, and scientists of human behaviour are in a good position to evaluate the implications of this for human psychology and behaviour. While there are many potential risks, there are also potential benefits. For example, AI agents could help facilitate consensus-building in human groups, challenge false beliefs, support education, and more. Given all this, it’s a very exciting time to be a scientist of human behaviour, especially in a social science-focused institution like the LSE.