Scientific theories always presuppose an ontology: a set of entities, relationships, and processes that together form the basic building blocks of theories. Chemistry has atoms, molecules, and ways of making and breaking bonds. Evolutionary biology has species and processes of reproduction, competition, and natural selection. A cognitive ontology is the set of entities presupposed by a theory in the behavioral and cognitive sciences. A theory that postulates a lexical and a phonological route for reading written words, for example, posits two decoding processes, each of which is part of a larger process of reading. Debates about the correct ontology are as old as cognitive theorizing itself, but the term cognitive ontology is usually discussed in the context of challenges to traditional ontologies stemming from neuroimaging research beginning in the early 2000s. The extent to which traditional cognitive theorizing must line up with discoveries in cognitive neuroscience remains a central question in the cognitive ontology debate.
Every cognitive theory implies an ontology, so there is a sense in which cognitive scientists have always debated the right cognitive ontology. The current use of the term, however, largely dates to a paper by Price and Friston (2005). They argued that traditional cognitive categories tend not to line up neatly with patterns of brain activation. In particular, cognitive processes that any traditional theory would distinguish—such as naming pictures, making braille discriminations, and deciding whether words rhyme—activate the very same brain regions (Price & Friston, 2005; Anderson, 2014). Conversely, what ought to be instances of the same cognitive process can be associated with the activation of very different brain regions (Price & Friston, 2002).
Such findings suggest that the ontologies embodied in traditional cognitive science might be mistaken. The most radical responses to the lack of alignment between cognitive terms and brain regions suggested that we stand to a mature cognitive neuroscience as medieval alchemy stood to modern chemistry: We are mostly carving up the world wrong, and when we do get it right, it's for the wrong reasons. Price and Friston themselves suggested as much. This revisionist agenda was pursued most vigorously by Poldrack and his collaborators, who suggested that we should rebuild our cognitive ontologies in a bottom-up, data-driven way (cf. Yarkoni et al., 2010). This position is reminiscent of philosopher Paul Churchland’s earlier advocacy of eliminative materialism, that is, the thesis that common-sense psychology is a false theory that will be completely replaced by cognitive science (Churchland, 1981), though the revisionists claim more direct empirical grounding for their view. A less radical response is that traditional ontologies might need to be revised but that neuroimaging can show us which things can be reliably discriminated and so retained (Lenartowicz et al., 2010).
The term cognitive ontology has been used to refer to three closely related but distinct concepts (Janssen et al., 2017).
First, an ontology can mean a particular proposal about which cognitive states there are and how they are related. This is ontology in Quine’s sense of “what there is” (Quine, 1948): the things that theories talk about and that researchers observe and measure. This is the sense in which this article uses the term, and it is the most common usage in philosophy and cognitive science.
Second, ontology can refer to a particular, standardized set of terminology used for talking about cognitive states. This is a sense of ontology inherited from data science and is becoming more common where cognitive science meets large-scale automated meta-analyses. Consistent terminology is very important: If some labs call a brain region “V5” and others “MT,” then it will be difficult to coordinate results across labs.
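To make this data-science sense concrete, an ontology in this usage amounts to a controlled vocabulary plus an alias table: lab-specific labels are mapped onto canonical terms so that results can be pooled across studies. The sketch below is a purely hypothetical illustration under that assumption; the alias table and the canonicalize helper are invented for this example and do not reflect the schema or API of any existing resource.

```python
# Minimal sketch of an ontology in the data-science sense: a controlled
# vocabulary with an alias table mapping lab-specific labels to canonical
# terms. The table and function are hypothetical illustrations only.

CANONICAL_TERMS = {
    "MT": {"V5", "hMT+", "MT"},                           # motion-sensitive visual area
    "fusiform face area": {"FFA", "fusiform face area"},  # face-selective region
}

def canonicalize(label: str) -> str | None:
    """Map a reported label onto its canonical term, or None if unknown."""
    for term, aliases in CANONICAL_TERMS.items():
        if label in aliases:
            return term
    return None

# Results reported under "V5" and "MT" can now be pooled under one term.
assert canonicalize("V5") == canonicalize("MT") == "MT"
assert canonicalize("FFA") == "fusiform face area"
```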
Third, ontology can refer to very abstract proposals about the types of entities admissible in a cognitive theory: processes versus states, innate modules versus learned associations, or feedforward computations versus predictive feedback. This sense of ontology is mostly used by philosophers, and debates tend to be more theoretical than empirical.
The three uses are distinct but closely related. For example, if you think that cognitive scientists ought to build a cognitive ontology using empirical data, then the question of how we name and organize that data becomes pressing. Someone who thinks that the basic type of mental entity is a dynamic, continuous process will probably give a very different catalog of mental states than one who thinks that there are innate computational modules.
The classic controversies over cognitive ontology involve the relative role of top-down versus bottom-up approaches to theorizing. Price and Friston (2005) raise an implicit challenge to top-down theorizing: Whatever our traditional psychological ontologies are, they must ultimately line up with sensible neural ontologies. Revisionary projects go further: Having accepted that constraint, they conclude we ought to just start with the brain and rebuild our ontologies in a way that guarantees alignment.
Yet many have challenged the assumption that the mapping between cognitive states and the brain needs to be one-to-one. Most obviously, many tasks are performed by conjunctions of more basic operations: As Petersen and Fiez (1993) point out, there is no “tennis forehand area” to be discovered in the brain. Even basic cognitive activities, whatever those are, need not have a simple relationship to the brain. We do not find a one-to-one mapping between levels in other scientific domains, including the computational ones that inspire cognitive psychology (Roskies, 2009; Jonas & Kording, 2017; McCaffrey, 2023). The influential philosophical thesis of nonreductive physicalism explicitly denies that there must be a simple mapping. So there remains real debate about the constraints on building cognitive ontologies.
More recent work has emphasized the role of broader context in settling ontologies. It is possible, for example, that the best characterization of a brain region’s function itself depends on what is going on elsewhere in the brain (Anderson, 2014; Dewhurst, 2018). Several authors have also emphasized that one cannot really distinguish an ontology of cognition from an ontology of tasks (Figdor, 2010; Klein, 2012; Burnston, 2021). If a researcher thinks that recognizing faces is just a kind of object recognition, for example, they will use different methods (and potentially come to a different ontology) than one who thinks it is a wholly distinct task [see Face Perception]. The ways we operationalize psychological functions both embody and shape assumptions about the underlying realizers, sometimes in unexpected ways.
A final open question, coming from philosophy of science, concerns the extent to which a cognitive ontology can be pluralist. If one author says face recognition is a distinct, modular process while another says that it is a specialized function of a broader distributed system (Haxby et al., 2001), do they really disagree? Or can we carve up cognition in many different but equally valid ways? There are strong arguments that other areas of science (such as biology) admit multiple overlapping taxonomies (Dupré, 1993); perhaps many apparent debates in cognitive science might dissolve if we embrace similar pluralism (Dale et al., 2009).
The debate over cognitive ontology has been fruitful because it links to many other topics in philosophy and cognitive science. Some classic debates—such as those over the unity or disunity of working memory (Gomez-Lavin, 2021), the correct level of abstraction at which to describe the fusiform face area (Farah, 2004), or the proper taxonomy of delusions (Clutton et al., 2017)—are readily cast as debates about cognitive ontology and so can be linked to the philosophical resources developed in that literature.
Broader revisionary projects in cognitive psychology can also be better conceptualized as debates over cognitive ontology rather than as a simple clash of theories. The work of Poldrack and colleagues noted above is one radical version of this. Non-mainstream theories such as 4E Cognition (Chemero, 2009) can likewise be seen as attempts to enlarge our cognitive ontology by including Gibsonian affordances, which in turn suggests that a full cognitive ontology includes the environment, not just the individual (Hutto et al., 2017).
Finally, the debate over cognitive ontology has the potential to inform philosophical debates as well. Much of the traditional work on natural kinds takes as its starting point kinds in chemistry or biology. Yet cognitive processes are arguably natural kinds as well, and ones with unique properties. So thinking about cognitive ontology might provide a richer foundation for philosophical theorizing about the goals of science and of building ontologies in the first place (Francken et al., 2022; Khalidi, 2023).
Work on this paper was supported by a grant from the Templeton World Charity Foundation (TWCF-2020-20539).
Janssen, A., Klein, C., & Slors, M. (2017). What is a cognitive ontology, anyway? Philosophical Explorations, 20(2), 123–128. https://doi.org/10.1080/13869795.2017.1312496
Khalidi, M. A. (2023). Cognitive ontology: Taxonomic practices in the mind-brain sciences. Cambridge University Press. https://doi.org/10.1017/9781009223645
Price, C. J., & Friston, K. J. (2005). Functional ontologies for cognition: The systematic definition of structure and function. Cognitive Neuropsychology, 22(3), 262–275. https://doi.org/10.1080/02643290442000095