David Kinney (Santa Fe): “Why Average When You Can Stack? Better Methods for Generating Accurate Group Credences”
Formal and social epistemologists have devoted significant attention to the question of how to aggregate the credences of a group of agents who disagree about the probabilities of events. Most of this work focuses on strategies for calculating the mean credence function of the group. In particular, Moss (2011) and Pettigrew (2019) argue that group credences should be calculated by taking a linear mean of the credences of each individual in the group, on the grounds that this method leads to more accurate group credences than any other method. In this paper, I argue that if the epistemic value of a credence function is determined solely by its accuracy, then we should not generate group credences by taking the mean of the credences of the individuals in a group. Rather, where possible, we should aggregate the underlying statistical models that individuals use to generate their credences, using “stacking” techniques from statistics and machine learning first developed by Wolpert (1992). My argument draws on a result by Le and Clarke (2017) showing that stacking techniques can generate predictively accurate aggregations of statistical models even when every model being aggregated is highly inaccurate.
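To make the contrast concrete, the sketch below compares the two aggregation strategies on toy data. It is an illustrative assumption, not Kinney's own code: the base models, dataset, and scoring rule are hypothetical stand-ins, with scikit-learn's StackingClassifier used as an off-the-shelf implementation of Wolpert-style stacking and log loss serving as the accuracy measure.

```python
# Minimal sketch: linear pooling of credences vs. stacking of the
# underlying models. All modeling choices here are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Toy binary-prediction problem standing in for the events that the
# agents assign credences to.
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each "agent" is a statistical model whose predicted probability for
# y = 1 plays the role of that agent's credence.
agents = [
    ("tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
    ("nb", GaussianNB()),
    ("logit", LogisticRegression(max_iter=1000)),
]
for _, model in agents:
    model.fit(X_train, y_train)

# Linear pooling: the group credence is the unweighted mean of the
# individual credences (the Moss/Pettigrew-style aggregate).
pooled = np.mean(
    [model.predict_proba(X_test)[:, 1] for _, model in agents], axis=0
)

# Stacking (Wolpert 1992): a meta-model learns, from cross-validated
# predictions, how to combine the base models' outputs.
stack = StackingClassifier(
    estimators=agents, final_estimator=LogisticRegression(), cv=5
)
stack.fit(X_train, y_train)
stacked = stack.predict_proba(X_test)[:, 1]

# Compare the accuracy of the two group credence functions via log
# loss, a strictly proper scoring rule (lower is better).
print("linear pool log loss:", log_loss(y_test, pooled))
print("stacking log loss:  ", log_loss(y_test, stacked))
```

On data like this, the stacked aggregate will often, though not always, score better than the unweighted pool; the Le and Clarke (2017) result concerns how far this advantage extends even when every base model is badly misspecified.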
David Kinney is a Complexity Postdoctoral Fellow at the Santa Fe Institute. His research focuses mostly on mathematical models that represent the causal structure of their target systems.