
Conceptual Analysis

Published on Jul 24, 2024

Conceptual analysis is concerned with clarifying concepts in the sense of illuminating what it takes to fall under a concept. Thus, when political theorists offer accounts of what it takes to be a democracy, and statisticians offer accounts of what it takes to be a random sequence, they are doing conceptual analysis. And when we were told in high school physics classes that what it takes for a body to move is for it to occupy different places at different times, we were being offered a conceptual analysis of motion. Sometimes analyses are framed in ordinary language; sometimes they are framed using one or another technical vocabulary, perhaps supplemented with terms drawn from ordinary language. Often, the analyses are intended to capture an existing concept, but sometimes they are offered as stipulations designed to advance theoretical inquiry. There is controversy over the importance of conceptual analyses, over exactly what role they play in scholarly inquiry, and over how to validate them.

History

The history of conceptual analysis is part and parcel of the history of theoretical inquiry. In the Republic, Plato (c. 428/7 to 348/7 BCE) asks, “What is justice?” He did this because he saw that one cannot advance views about the desirability of justice in the absence of a view about what it takes to be just. In Discourse on Method, Descartes (1596 to 1650) asks, “What does it take to be a thinker?” He did this because he saw that one cannot discuss whether or not machines and nonhuman animals think in the absence of a view on what it takes to be a thinker. Recent discussions of AI remind us of the importance of asking the “what it takes” question.

Similar remarks apply to the rise of modern mathematics. Georg Cantor (1845 to 1918) offered an analysis of what it takes for two sets to be the same size in terms of one-to-one correspondence between their members (Boolos et al., 2007, p. 16ff). Without this analysis, we could not show that the set of real numbers is bigger than the set of natural numbers. Much the same can be said for the epsilon-delta account of a limit (Larson et al., 1994, p. 66). We need this analysis in order to prove theorems about which functions have which limits, and to do so rigorously.
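
For concreteness, the two analyses can be stated in their standard textbook forms; the notation below is ours, not the article's.

```latex
% Cantor's analysis: two sets are the same size (equinumerous) just in
% case there is a one-to-one correspondence between their members.
\[
  |A| = |B| \iff \text{there is a bijection } f \colon A \to B
\]

% The epsilon-delta analysis of a limit: f(x) can be made arbitrarily
% close to L by taking x sufficiently close to (but distinct from) a.
\[
  \lim_{x \to a} f(x) = L \iff
  \forall \varepsilon > 0 \;\exists \delta > 0 \;\forall x\,
  \bigl( 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon \bigr)
\]
```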

What has changed with the passage of time is not the prevalence and importance of conceptual analysis but rather the extent to which we theorists are consciously aware that this is what we are doing, use various technical tools to express our analyses, acknowledge variation in concepts across individuals and groups, and see ourselves as putting forward new concepts to serve one or another purpose rather than articulating existing ones.

Core concepts

Prescription, folk concepts, indeterminacy

Does the atomic theory of matter—the theory according to which, for example, I and the chair I am sitting on are lattice-like arrays of widely separated particles held in place by internal forces—show that the chair I am sitting on is not solid? In order to answer this question, one needs an account of what it takes to be solid. There are, however, two competing analyses of being solid: one that requires, as a necessary condition, filling every part of space within its bounds, and one that analyzes solidity in terms of resisting intrusion and deformation. If the first is correct, the atomic theory of matter shows that the chair is not solid (and neither is my body). If the second is correct, the atomic theory does not show that the chair is not solid; the relevant internal forces are sufficient to stop my body from falling through the chair (Eddington, 1948). It is hard to think that there is a deep puzzle here. Prior to the arrival of the atomic theory of matter, it is plausible that our ordinary concept of being solid was indeterminate between the two conceptions of solidity because it had not occurred to us that objects that resist intrusion and deformation might be radically “gappy.” The atomic theory was a surprise. After the arrival of the atomic theory, the sensible response was to note the indeterminacy in the existing concept of being solid and to resolve it in favor of the conception in terms of resisting deformation and intrusion; this way, we allow chairs and tables and the foundations of our houses to be solid despite their “gappiness.”

It is plausible that a similar attitude makes sense when we enter the debate over whether or not one or another manifestation of AI is really a case of intelligence. We can hardly enter that debate without a view about what it takes to be intelligent—the concept of intelligence—but plausibly any view we have leaves some matters open. That is, the concept of intelligence is to that extent indeterminate. We are then in a position to resolve the indeterminacy in the light of future developments in AI. 

The examples of solidity and intelligence are ones in which we have pre-existing, shared concepts—folk concepts, as they are called [see Concepts]. But not all examples of conceptual analysis are analyses of pre-existing concepts. Cantor was not offering an explication of the pre-existing concept of two sets being the same size; he was offering a new concept that allows us to make sense of infinite sets differing in size. He was, that is, doing some prescribing in the service of advancing set theory. The same goes for many concepts that make their appearance in one or another science as it emerges and develops. For example, the rise of economics was accompanied by the introduction of new concepts to do the needed predictive and explanatory work: fiscal drag, opportunity cost, inflation, etc. 

Thought experiments

Among the folk concepts are ones that play important roles in one or another established area of inquiry. An example is the concept of knowledge in epistemology. A natural thought is that we can analyze knowledge in terms of true, justified belief (Ayer, 1956). This is a reductive analysis in the sense that it is an account of what it takes to be knowledge in terms more basic than knowledge itself. It is also an example of an analysis that faces serious challenges from a range of thought experiments—imagined cases that are, intuitively, cases of true justified belief but are not cases of knowledge (Gettier, 1963). Here is one: I glance at a railway station clock and see that it says 12 noon. I know I am in a country famous for the reliability of its station clocks and so believe it is 12 noon. It is in fact 12 noon. My belief that it is 12 noon is, thus, a true justified one. However, unknown to me, the clock broke down yesterday at 12 noon and has not been repaired since. It seems clear that I do not know that it is 12 noon. 
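
Schematically, and in our notation rather than the article's, the justified-true-belief analysis can be written as a biconditional:

```latex
% The justified-true-belief (JTB) analysis of knowledge, schematically:
% S knows that p iff (i) p is true, (ii) S believes that p, and
% (iii) S is justified in believing that p.
\[
  K_S(p) \;\iff\; p \;\wedge\; B_S(p) \;\wedge\; J_S(p)
\]
% Gettier-style cases, such as the station clock, are ones in which all
% three conjuncts on the right-hand side hold while, intuitively,
% K_S(p) fails; it is the right-to-left direction that is challenged.
```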

How should we respond to this thought experiment? One option is to deny its relevance to the proffered analysis of knowledge. This is hard to believe. As we observed, the concept of knowledge is a folk concept. We have to take seriously our intuitive responses to imagined cases; it is our concept, after all. A second option is to seek a replacement for the justified true belief analysis. It is a matter of record (Shope, 1983) that finding a good replacement has proved to be very challenging. Each proposal has been vulnerable to one or another counterexample in the sense that, for each offering, there is at least one thought experiment in which, intuitively, we have knowledge but the offered analysis isn’t satisfied, or else we don’t have knowledge but the offered analysis is satisfied (as in the station clock example). A third option is to urge that not all concepts are analyzable, and to hold that knowledge is an example of such a concept.

The case of knowledge is not an isolated one. Conceptual analyses of folk concepts are typically subjected to a kind of trial by intuitions about thought experiments. Here is another example: Some glasses are brittle. What does it take for a glass to be brittle? A natural thought is that it is a matter of the glass being such that if it were dropped, it would break easily. But encasing a glass in cotton wool doesn’t stop it from being brittle; indeed, its brittleness may explain why the glass is encased in cotton wool. The encasing does, however, stop it from being the case that, were the glass dropped, it would break.

Much of the history of conceptual analyses of folk concepts is the history of contests between those offering one or another analysis of a folk concept and those making trouble for the offered analysis by describing a counterexample—a thought experiment where, intuitively, we have an instance of the concept, but what is offered does not obtain, or what is offered obtains, but we do not have an instance of the concept.

The role of conceptual analysis in theoretical controversies

Many philosophers of mind and cognitive scientists are materialists. Materialism takes many forms, but a popular version holds that mental states are identical with brain states. One way to argue for this view (see, e.g., Armstrong, 1998) goes via a conceptual analysis of mental states in causal-functional terms. Beliefs, for example, are analyzed as states that respond to how things are around us in ways that carry information about them. Pains are responses to bodily damage that cause behavior that minimizes the extent of the damage. These conceptual analyses—refined, modified, made more precise in one or another way, etc.—then allow a simple argument for the thesis that mental states are brain states. Given that we have analyzed mental states in terms of the roles they play, it will turn out (runs the argument) that the states that play these roles are one or another state of the brain. 

Here is another example. Is free will compatible with determinism? A natural thought is that it isn’t. How could my actions be free if they are determined by how things are before I act? But a rigorous discussion of this issue requires that we do some conceptual analysis. After all, in order to answer whether or not being an even number is compatible with being a prime number, one needs to know what it takes to be an even number and what it takes to be a prime number. Only that way can one know that the answer is yes: 2 is both even and prime. The same, plausibly, goes for the question of the compatibility of being free with being determined (Frankfurt, 1969). It is no surprise then that the history of the debate over the compatibility of free will and determinism is in large part the history of how best to analyze what it is to act freely.

Final example. We favor those who are close to us—our family and friends and the community we belong to—ahead of the world at large in all sorts of ways. Charity begins at home, blood is thicker than water, and all that. There is an evolutionary explanation of this fact: preferring those who are close to us enhanced our chances of survival once we formed mutually supportive groups. Does the fact that there is an explanation of this kind mean that there is no moral justification, or perhaps no objective moral justification, for favoring those who are close to us? A rigorous discussion of this question requires attention to what it takes to be morally right and what it takes to be objective (Sterelny & Fraser, 2016). 

Questions, controversies, and new developments

Whose intuitions?

We noted earlier the role of thought experiments—intuitions about imagined cases—in assessing analyses of folk concepts. But whose intuitions should we attend to? Traditionally, philosophers have drawn on their own intuitions about this or that thought experiment, perhaps moderated by discussions with colleagues. But we are talking about folk concepts, not just those of philosophers. Experimental philosophers (see Knobe & Nichols, 2008) respond to this point by insisting that we should conduct surveys to ascertain the intuitions of the folk in general about one or another thought experiment, and they argue that when we do, the results are not always in agreement with the intuitions of philosophers.

Cross-cultural empirical work has also suggested that there are systematic cross-cultural differences in philosophical intuitions (Henrich, 2020) and, thus, in concepts and the mapping of language to categories (Malt & Majid, 2013). 

Conceptual engineering

Is one a prime number? Many folk say yes, and the matter was left unresolved by mathematicians for some years. The question is now settled in the negative. One is not a prime number. This was a decision by mathematicians in order to ensure that each positive integer greater than one has a unique factorization into primes. We can think of this as a bit of what is called conceptual engineering. Likewise, Cantor was doing some conceptual engineering when he gave us the one-to-one correspondence analysis of what it takes for two sets to be the same size. He was not seeking fidelity to an existing concept; he was offering the right concept for a certain job. Above, we noted the role of prescribing in resolving indeterminacy, such as in what it takes to be solid. Prescribing is a form of conceptual engineering.
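
As a quick illustration of the point about unique factorization (the particular numbers below are ours, not the article's): if 1 were counted as a prime, factorizations into primes would no longer be unique.

```latex
% With 1 excluded from the primes, the fundamental theorem of
% arithmetic holds: every integer greater than 1 has exactly one
% factorization into primes, up to the order of the factors.
\[
  12 = 2^{2} \cdot 3
\]
% If 1 were admitted as a prime, uniqueness would fail, since factors
% of 1 could be inserted at will:
\[
  12 = 1 \cdot 2^{2} \cdot 3 = 1^{2} \cdot 2^{2} \cdot 3 = \cdots
\]
```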

Recent work in conceptual analysis has highlighted the importance of conceptual engineering. The guiding thought is that what matters most is the best way of categorizing a subject matter, be that subject matter numbers, sets, physical properties of matter, mental states, human actions, brain states, economies, etc., and not fidelity to an existing way of categorizing that subject matter. Finally, some authors have argued that conceptual engineering is just one of many ways in which individuals engage in forms of conceptual exploration (Rudolph, 2021). 

Words and concepts

There are a number of ways to specify a prime number: for example, as a natural number greater than one that is exactly divisible by itself and one alone, or as a natural number that has exactly two distinct divisors. Likewise, there are different ways to pick out the number two: for example, as the smallest even number or as the smallest prime number.
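
In symbols (ours, not the article's), the two specifications of primality can be written as follows; they are provably coextensive, picking out exactly the same numbers.

```latex
% First specification: greater than 1 and divisible only by 1 and itself.
\[
  \mathrm{Prime}_1(n) \;\iff\; n > 1 \;\wedge\; \forall d \in \mathbb{N}\,
  \bigl( d \mid n \implies d = 1 \vee d = n \bigr)
\]
% Second specification: having exactly two distinct divisors
% (which must then be 1 and n itself).
\[
  \mathrm{Prime}_2(n) \;\iff\; \bigl|\{\, d \in \mathbb{N} : d \mid n \,\}\bigr| = 2
\]
```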

Should we think of these different ways as different ways of expressing in words the one concept, or should we think of the different wordings as corresponding to different concepts? Some insist that being the smallest prime and being the smallest even number are one and the same concept captured in different words; others insist that we have two different concepts that apply (of necessity and a priori) to one and the same number.

Broader connections

The concept of representation plays a huge role in cognitive science [see Mental Representation] (Shea, 2018). First, many theorists (e.g., Byrne, 2001) appeal to it in accounts of what happens when we are under one or another visual illusion. In the Müller-Lyer Illusion, for example, two lines that are in fact the same length look to be different in length. Representational theorists hold that the way to account for the distinctive phenomenology is in terms of one’s visual system representing (mistakenly) that the lines differ in length, or maybe in terms of one’s awareness that one’s visual system is representing that the two lines differ in length.

Second, many philosophers and cognitive scientists hold that beliefs are a kind of sentence in the head with elements that represent items in the world (e.g., Fodor, 1975; see Porot & Mandelbaum, 2021). Thus, the belief that snow is white might be a structure in the brain with an element representing snow and an element representing whiteness. Third, what goes on in our brains and central nervous systems is crucial for successfully navigating our surroundings and for accurately reporting on them. It follows that what goes on in our brains and central nervous systems carries information about how things are around us and so, in some good sense, represents how things are around us.

Do we have one concept of representation at work in these examples, or two, or three? How should we understand it or them? To what extent should we think of this enterprise as analyzing an existing concept or concepts—or should we approach the question as conceptual engineers focused on finding the concept or concepts needed for one or another purpose? This is a place where philosophers and cognitive scientists have found fruitful interchange, demonstrating the relevance of conceptual analysis for ongoing interdisciplinary work. 

Finally, there are important connections between conceptual analysis and the methodology of cognitive science. Cognitive science deals in constructs: unobservable entities postulated to have causal powers that explain observed behavior (Cronbach & Meehl, 1955). To test constructs, cognitive scientists propose an operationalization: something measurable that naturally counts as a measure of the construct (Frank et al., 2024). The early stages of operationalization are often similar to the process of conceptual analysis, though tests are typically refined empirically.

Further reading

  • Braddon-Mitchell, D., & Nola, R. (Eds.). (2009). Conceptual analysis and philosophical naturalism. MIT Press.

  • Isaac, M. G., Koch, S., & Nefdt, R. (2022). Conceptual engineering: A road map to practice. Philosophy Compass, 17(10). https://doi.org/10.1111/phc3.12879

  • Jackson, F. (1998). From metaphysics to ethics: A defence of conceptual analysis. Oxford University Press. https://doi.org/10.1093/0198250614.001.0001

  • Urmson, J. O. (1956). Philosophical analysis: Its development between the two world wars. Oxford University Press.
