Current Research
Peer Reviewed Publications
- Wollesen, B (forthcoming in Ergo). The Condorcet Jury Theorem under Ambiguity (download manuscript here)
The paper evaluates the Condorcet Jury Theorem under ambiguity. It explores the effects on the voter-competence assumption when voters face situations to which they can no longer ascribe a single probability. In contrast to voting under risk, where voters can do so, the paper shows that voters can fail to vote competently under ambiguity even if they are honest, practically rational, and epistemically competent. Thus, the conditions under which we can guarantee voter competence become obscure: conditions that guarantee voter competence under risk do not guarantee it under ambiguity. The second contribution is more positive. There is a fruitful research project that identifies collective decision procedures better suited to less idealised uncertainty frameworks. In this vein, the paper shows how allowing abstention can improve the epistemic benefits of voting, and it extends the Condorcet Jury Theorem accordingly.
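For orientation, the classical result in the risk setting that the paper departs from (a standard textbook statement, not quoted from the manuscript): with an odd number $n$ of independent voters, each voting correctly with probability $p$, the majority verdict is correct with probability

```latex
P_n \;=\; \sum_{k=(n+1)/2}^{n} \binom{n}{k}\, p^{k} (1-p)^{n-k},
\qquad \text{and if } p > \tfrac{1}{2}:\quad
P_n > p, \quad P_n \text{ increasing in odd } n, \quad \lim_{n\to\infty} P_n = 1.
```

Under ambiguity, no single $p$ is available, which is where the competence assumption comes under pressure.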
Other Publications
Under Review
A Paper about the Principle of One Person One Vote and Strategic Voting
Strategic voting is widely regarded as something to be discouraged, yet the normative grounds for its discouragement remain remarkably underdeveloped. This paper revisits a concern raised by Satterthwaite (1973): that differences in voters’ strategic skills may undermine the principle "one person, one vote" (OPOV) by turning elections into "games of skill". Taking this concern as a starting point, I argue that strategic voting is problematic insofar as it redistributes power in ways that conflict with the commitments of OPOV.
A Paper about Generalisations and the Harm to Marginalised Groups
(with Shira Ahissar)
In everyday conversations, we provide generalised advice of the form “doing x will lead to the desired result y”, without realising the resulting social implications. In this paper, we demonstrate the grave social harm caused by such generalisations, showing they disproportionately disadvantage marginalised groups. We term this phenomenon Dominated Discourse. Using formal network analysis, we show how a dominated discourse emerges through information asymmetries between social groups. Importantly, those same asymmetries have been previously credited with conferring epistemic advantages to marginalised groups (Wu, 2023). We examine whether scepticism about the applicability of generalisations suffices for marginalised individuals to overcome their disadvantage. To capture cases of justifiably acquired scepticism, we develop a novel account of how agents in a network weigh evidence where agents gradually update their beliefs about their neighbours’ evidence. Contrary to existing approaches, which fix the reasons for disvaluing evidence externally, our framework lets those reasons arise from within the agents’ own epistemic perspectives. Strikingly, agents remain epistemically disadvantaged simply by being situated within a dominated discourse, even when they meet this higher standard of rationality. Given our results, we raise potential social solutions and discuss how identifying this phenomenon complements several leading theories in social epistemology.
Human Agency Under Algorithmic Prediction: An Experimental Study of Decision Making in Newcomb’s Problem
(with Jan Woike)
Algorithms in our present environment predict our future decisions and behavior, and some of these predictions will lead to interventions. Here, we investigate how partial knowledge about predictions and interventions affects individuals whose decisions are predicted. In six studies (N = 1,274), we confronted participants with variants of Newcomb’s Problem, which creates a conflict between decision principles in response to a prediction. Participants in the scenarios faced two boxes and chose between opening one of them or both. While the second box always contained a positive amount of money, the contents of the first box depended on an undisclosed prediction of their choice. Across studies, we varied the contents of the boxes, presentation features of the scenario, and the presentation of arguments for both decision alternatives. We show that a considerable proportion of participants forfeit the additional money in response to this situation, and we demonstrate that this cannot be fully explained by confusion or peripheral features of the scenario. We also demonstrate that box contents and the confrontation with arguments affect the proportion of participants who open both boxes. We examine the rationality of both decisions, and we discuss implications for applications of predictive algorithms, for human agency, and for potential predictive privacy harms.
Completed Manuscripts
Models, Measurement and Manipulability
Different metrics of manipulability disagree in their assessments of how vulnerable a particular collective decision mechanism is to manipulation. Surprisingly, little attention has been paid to the normative assumptions that underlie these assessments. First, I contend that concerns about manipulation in the case of voting can be grounded in both welfarist and relational egalitarian considerations. I then demonstrate that these distinct value commitments can render different metrics appropriate for normative guidance in the selection of voting rules. Finally, I argue that adopting a single value framework for manipulation risks circularity, and that the literature should aim to develop multiple metrics that track different value judgements about manipulation. The paper thus calls for the development of multiple metrics of vulnerability to manipulation, together with an explicit account of the value judgements underlying each.
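One familiar metric of this kind (a standard illustration, not taken from the manuscript) is the share of preference profiles at which at least one voter can gain by misreporting. A minimal sketch for plurality voting with three voters and three candidates, assuming alphabetical tie-breaking:

```python
from itertools import permutations, product

CANDS = "abc"
RANKINGS = list(permutations(CANDS))  # all 6 strict preference rankings

def plurality(profile):
    """Plurality winner over a tuple of rankings; ties broken alphabetically."""
    tally = {c: sum(r[0] == c for r in profile) for c in CANDS}
    return max(CANDS, key=lambda c: (tally[c], -ord(c)))

def manipulable(profile):
    """Can some voter obtain an outcome they truly prefer by misreporting?"""
    winner = plurality(profile)
    for i, truth in enumerate(profile):
        for lie in RANKINGS:
            new_winner = plurality(profile[:i] + (lie,) + profile[i + 1:])
            if truth.index(new_winner) < truth.index(winner):
                return True
    return False

# One simple vulnerability metric: the fraction of manipulable profiles.
profiles = list(product(RANKINGS, repeat=3))
share = sum(map(manipulable, profiles)) / len(profiles)
```

Different metrics would, for example, weight these profiles by how many voters can manipulate, or by how much they gain, and can therefore rank voting rules differently.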
Feel free to contact me for a copy!
Work in Progress
Heuristic Strategic Voting: Why more votes don’t mean more manipulation
This paper is based on a simulation I wrote that models voters as boundedly rational. They vote repeatedly (for instance, in reaction to polls) and use a simple heuristic, rather than orthodox rational choice theory, to make strategic decisions. The results suggest, contrary to the literature, that giving voters more votes will not necessarily lead to more strategic effects on the outcome.
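To illustrate the kind of setup involved, here is a simplified sketch, not the actual simulation from the paper; the heuristic, the poll dynamics, and all parameters below are hypothetical:

```python
import random
from collections import Counter

def simulate(num_voters=100, num_candidates=4, rounds=5, seed=0):
    """Boundedly rational repeated voting: after each poll, a voter whose
    current vote is not viable switches to their preferred front-runner."""
    rng = random.Random(seed)
    candidates = list(range(num_candidates))
    # Each voter holds a strict preference ranking (most preferred first).
    prefs = [rng.sample(candidates, num_candidates) for _ in range(num_voters)]
    votes = [p[0] for p in prefs]  # round 1: everyone votes sincerely
    for _ in range(rounds):
        poll = Counter(votes)
        top_two = [c for c, _ in poll.most_common(2)]
        votes = [
            # Heuristic: keep a viable vote; otherwise back the more
            # preferred of the two current front-runners.
            v if v in top_two else min(top_two, key=prefs[i].index)
            for i, v in enumerate(votes)
        ]
    return Counter(votes).most_common(1)[0][0]
```

The paper's question then becomes: as voters get more votes to cast (e.g., approval-style ballots), does this kind of heuristic adjustment distort outcomes more, or less?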
The Communication Structure of Cross-disciplinary Communities
(with Catherine Herfeld)
This paper investigates how the structural organization of scientific communities shapes the transfer and broad adoption of models across disciplinary boundaries. Building on and extending insights from epistemic network simulations (Zollman 2007, 2010), we propose a model of cross-disciplinary scientific communities in which agents exchange not merely information, but models themselves. A central and novel idea grounding our framework is the introduction of multiple disciplines whose members pursue distinct epistemic aims and are more densely connected within their own discipline than across disciplines. We incorporate two key social roles that agents can take on and that facilitate cross-disciplinary exchange: brokers, who maintain connections across disciplinary boundaries and can learn from successful applications of models in other fields, and translators (Herfeld and Doehne 2019), who modify models originating in a source domain so that they better align with the epistemic aims and evaluative standards of a new target discipline.
Building on Zollman’s results concerning the trade-offs between connectivity and epistemic performance, we extend the analysis from network-level properties to role-sensitive structures, asking how the placement of brokers and translators within partially segregated communities influences collective epistemic success.
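A minimal sketch of the kind of baseline model being extended (illustrative only; the actual framework, its disciplines' distinct epistemic aims, and the translator role are richer than this): agents test an uncertain new model against a known baseline and share results along network edges, with brokers bridging two otherwise separate disciplines.

```python
import random

def make_network(size=6, num_brokers=1):
    """Two complete graphs (disciplines); brokers add cross-discipline links."""
    a = list(range(size))
    b = list(range(size, 2 * size))
    edges = {i: set() for i in a + b}
    for group in (a, b):
        for i in group:
            edges[i] |= set(group) - {i}
    for k in range(num_brokers):
        edges[a[k]].add(b[k])
        edges[b[k]].add(a[k])
    return edges

def simulate(edges, p_good=0.6, p_bad=0.5, steps=200, seed=0):
    """Zollman-style bandit learning: each agent tries the new model only if
    it currently believes it beats the known baseline, and every result is
    shared with network neighbours."""
    rng = random.Random(seed)
    succ = {i: 1 for i in edges}   # optimistic start: 1 success
    trials = {i: 1 for i in edges} # in 1 trial, so everyone experiments
    for _ in range(steps):
        results = {i: rng.random() < p_good
                   for i in edges if succ[i] / trials[i] > p_bad}
        for i, r in results.items():
            for j in edges[i] | {i}:
                succ[j] += r
                trials[j] += 1
    # Fraction of agents who end up endorsing the (truly better) new model.
    return sum(succ[i] / trials[i] > p_bad for i in edges) / len(edges)
```

Varying `num_brokers` and the within-discipline density then lets one ask the role-sensitive version of Zollman's connectivity question: where brokers (and, in the full model, translators) sit determines how quickly, and how reliably, a community converges.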
Paul Klee, Pavillon der Zahlen, 1918 (Pavilion of Numbers)