For my papers, see my PhilPeople page, https://philpeople.org/profiles/spencer-paulson
​
Research Overview: Somehow minds do things that make it appropriate to assess them normatively. I'm interested in what it is about them that makes this the case. My thought is that it has to do with our ability to self-monitor or, as it is sometimes put, to "step back." To better understand the norms, their place in nature, and how they get their grip on us, I try to give an account of the self-monitoring faculty that is simultaneously empirical and philosophical, drawing on work in developmental and evolutionary psychology, philosophy of mind, epistemology, and ethics. I think self-monitoring takes different forms in different creatures, and that these different forms ground different epistemic norms and, consequently, different ways of knowing. I think we self-monitor largely by simulating interpersonal argumentation: toggling between our own perspective and that of a potential critic, and modifying our beliefs and intentions when we simulate counterarguments to which we can't respond. This helps make sense of topics of interest to epistemologists, such as knowledge, justification, and defeasibility. Defeaters are the materials out of which counterarguments are made. We make ourselves answerable to them (as opposed to merely evaluable in terms of them) by self-monitoring in our characteristic way. Knowledge is the standing we have when our beliefs are immune to defeat and, consequently, to counterargument. It is like a winning hand in interpersonal argumentation; justification is like a winning hand in the solitaire version of it.
​
I am working on extending the account to inquiry, suspended judgment, assertion, and knowledge-how. I am also working on filling in the sketch of the self-monitoring faculty provided in previous work by integrating it into a broader account of mental architecture. In particular, I'm interested in how the self-monitoring faculty integrates the outputs of modules, in the format(s) of the representations it uses, in the ways it makes use of perceptual states to run offline simulations, and in the perception-cognition interface more generally.
​
I also have an interest in applications. On my account, much of our mental life is a rehearsal for epistemic social coordination. While our ability to do this is impressive, polarization illustrates our limitations. I'm interested in the extent to which our failure to coordinate on important matters results from an informationally polluted environment, and the extent to which it results from a failure of the self-monitoring faculty (which competes for shared resources with the faculties it oversees). I am also interested in whether a silicon-based computer could ever self-monitor in the way we do. If programmed correctly, it could resemble us at an abstract, functional level. Is that enough to make it appropriate for us to take the participant's stance toward it? Part of what makes the self-monitoring faculty normatively significant is that it enables us to toggle between our own perspective and that of an interlocutor. The extent to which it could enable a digital computer to do the same is not immediately clear.
