In my dissertation, I offer a new argument for the elimination of “beliefs” from scientific psychology based on Wimsatt’s (1981) concept of robustness and its inverse, fragility. A theoretical entity is robust if multiple independent means of detection show invariant results in measuring the posited entity. A theoretical entity is fragile when multiple independent and reliable means of detecting it produce variant results. Fragile posits are good candidates for elimination. This approach to eliminativism avoids the semantic and reductionist pitfalls of previous eliminativist arguments (P. M. Churchland 1981; Stich 1983). I argue that new techniques in experimental psychology such as implicit association tests (Lane et al. 2007) ought to count as reliable measures of participants’ beliefs. Implicit association tests and other experiments in social psychology show radical variance between what people sincerely report their “beliefs” to be and what their nonverbal behavior indicates about their “beliefs.” This variance between self-report and nonverbal behavior, two independent means of detection, is evidence that “belief” is a fragile theoretical category and therefore a strong candidate for elimination. My paper for The British Journal for the Philosophy of Science, “The Belief Illusion,” can be viewed as a précis of my dissertation. Three related cognitive science projects have come out of my dissertation.
Going Beyond Belief
The first project is developing an account of the mental categories that ought to replace beliefs in cognitive science. Sterelny (2003) has developed an account of three different mental categories with evolutionary pressures in mind. Stanovich (2011) has also been working on a set of three distinct mental categories that line up with his tripartite theory of cognitive architecture. Both of these accounts fit quite well within dual-process theory. These are promising theories, and I intend to build on them. I propose three mental categories as well. Automatic representations would be the kind of representation that operates in what dual-process theorists have called “System 1” (Stanovich 1999; Kahneman 2011). These states are inaccessible to conscious introspection and are activated automatically by specific stimuli. Control representations would be the kind of representation that operates in “System 2.” These are representations that are decoupled from specific behaviors and are accessible to conscious introspection. Finally, expert representations have properties associated with both “System 1” and “System 2.”
The second project is an argument for the elimination of desires from cognitive science. Timothy Schroeder (2004) has argued that desire has three faces: motivation, pleasure, and reward. Schroeder provides evidence that each of these “faces” is neurologically and functionally distinct. I argue that the heterogeneous nature of the folk concept “desire” is indicative of a fragile theoretical category. Maintaining a unified category of desire will likely lead to highly variable results between distinct methods of determining a person’s desires. In this case, the very folk categories Schroeder uses to characterize each “face” are perfectly adequate. There is no reason to pick one of these faces as the “real” desire, nor is there any reason to treat them as subtypes of desire.
Experimental Philosophy and Belief
The third project is developing surveys in experimental philosophy that test intuitions about borderline cases of “belief.” For example, I have created a survey that presents test participants with scenarios where a person’s implicit reactions are at odds with their explicit avowals. The participants are asked what the individuals in the scenario believe. My prediction is a bimodal distribution in which the majority attribute beliefs consistent with explicit avowals and a significant minority attribute beliefs consistent with implicit reactions. If my prediction is correct, then it provides evidence against a number of accounts of belief (Gendler 2008a, 2008b; Sommers 2009; Zimmerman 2007) which choose either explicit avowals or nonverbal behavior as indicative of the agent’s “real” beliefs.
My dissertation has also given rise to two projects in the philosophy of science. The first is to extend my analysis of measurement robustness. Robustness analysis has come under significant criticism (Justus 2012; Odenbaugh & Alexandrova 2011; Orzack & Sober 1993; Plutynski 2006; Woodward 2006). Woodward (2006) distinguishes several types of robustness. According to Woodward, inferential robustness involves applying multiple independent models, which make different and sometimes contrary idealizing assumptions, to a fixed data set in order to reach a conclusion about a hypothesis. Measurement robustness, on the other hand, consists of the agreement of multiple independent measurement or detection procedures. I am developing a more detailed argument for the view that measurement robustness is not susceptible to many of the criticisms leveled against other forms of robustness.
The second project is to argue that thinking in terms of natural kinds in science is a mistake. My approach is pragmatic. Recent work in psychology and anthropology provides evidence that we have a natural essentialist cognitive bias. This raises worries for projects which attempt to redefine natural kinds in terms of causal processes that give rise to stable property clusters (Boyd 1991; Griffiths 1997; Machery 2009) and related projects that attempt to resurrect a concept of human nature that avoids essentialism (Machery 2008; Samuels forthcoming). I argue that the constant need to consciously override the automatic essentialist intuitions invoked by thinking in terms of “natural kinds” or “human nature” is not worth the trouble when there are perfectly good non-essentialist alternatives to these concepts.