
Psycholinguistics

Published on Jul 24, 2024

Psycholinguistics is an interdisciplinary field that combines principles and methods from psychology and linguistics and applies them to the study of the psychological and neurobiological factors that enable humans to acquire, use, comprehend, and produce language. The field uses a range of methods, including controlled behavioral experiments, computational modeling, analyses of large amounts of naturally produced linguistic data, and techniques from neuroscience such as electrophysiology and functional magnetic resonance imaging. The key areas covered in psycholinguistics are language comprehension, which focuses on how people understand spoken, written, and signed languages and how context affects this understanding; language production, the study of how we generate language when we speak, write, or sign, including the selection of appropriate words, the construction of sentences, and the organization of larger multi-sentence sequences; language acquisition, which focuses on how children learn to understand and produce language; and neurolinguistics, the study of the neural mechanisms that support the comprehension, production, and acquisition of language and of how changes in the brain affect typical language processing.

History

Investigations of human language use go back to at least the late 19th century (Levelt, 2012), but it is probably fair to say that psycholinguistics established itself as a recognizable and distinctive scientific field in the 1950s, when a set of significant scientific and social forces led to the emergence of the discipline of cognitive science. One of those forces was the work of Noam Chomsky and the emergence of generative linguistics, which emphasized grammatical rules as the basis of human linguistic knowledge and was explicitly mentalist—that is, its goal was to explain the knowledge humans possess that allows them to understand and produce language (Chomsky, 1965). For example, early in this enterprise, it was argued that a passive sentence such as “the dog was chased by the cat” was generated from an active form like “the cat chased the dog” via a set of transformational rules (Fodor et al., 1974). Rules were also the basis of humans’ knowledge of how to order words within phrases and how to establish agreement between elements of a sentence (e.g., in English, between a subject and verb). This explicitly mentalist approach inspired psychologists interested in language to abandon behaviorist paradigms, which appealed only to principles such as association and reinforcement and largely rejected notions of internal representation, and to develop processing theories that attempted to explain how these putative grammatical rules were applied in real time by human language users (Berwick & Weinberg, 1983).

In the 1980s and beyond, linguistic theories took another turn that helped to inspire a new paradigm for psycholinguistics. Explicit rules were replaced by a set of constraints, such as a principle that any phrase of a type XP must be headed by an element of type X (e.g., a verb phrase, or VP, must be headed by a V; a prepositional phrase must be headed by a P; and so on—so-called X-bar theory; Jackendoff, 1977). Another development in linguistics was the lexicalist hypothesis, which stated that syntactic structure is commonly linked to specific lexical items rather than needing to be stipulated in a general rule. On this view, there is no need for a rule stating that a VP may contain an object because whether the VP is transitive or intransitive depends on the semantic and syntactic features of the verb (Chomsky, 1970).

Both of these trends—the move toward constraint-based and lexicalist approaches to grammar—were remarkably compatible with new developments in cognitive science and psychology, particularly the growing popularity of connectionist or neural network models of cognition, which also rejected abstract, formal rules and attempted to explain complex behavior in terms of spreading patterns of activation acquired during learning [see Language Acquisition] (MacDonald et al., 1994). In the end, some proponents of these connectionist models went well beyond what many linguists were comfortable with, rejecting almost all formalisms and appeals to innateness. These arguments continue to this day in the form of debates concerning large language models (LLMs) and whether their abilities suggest that language can be acquired and processed entirely via exposure to vast amounts of data [see Large Language Models] (Contreras Kallens et al., 2023).

Today, psycholinguistics is characterized by diversity: diversity in the types of theories that are assumed, the methods for testing those theories, and the languages that are investigated. Computational approaches are also far more influential than in the past, both because the methods are useful tools for evaluating psycholinguistic theories and because artificial systems raise important questions about the capacities and architecture of human systems. 

Core concepts

Understanding language

Modern psycholinguistics focuses on a range of topics including speech perception, word processing, reading, and sentence processing. This article focuses on sentence processing, where three major theoretical issues have dominated the field. The first concerns the unit over which meaning is computed: Is it the word, the phrase, or an entire clause? Earlier theories of processing posited the existence of multiword processing units aligned with major syntactic constituents such as phrases and clauses, which were assumed to serve as processing chunks (Bever et al., 1973). For example, a two-clause sentence such as “the dog barked and the cat meowed” would be processed as two units corresponding to the two conjoined clauses. The generation of compositional meaning was thought to be delayed until the end of a processing unit.

More recent theories assume incremental processing, whereby each word is integrated and maximally interpreted as it is encountered. Evidence from the last few decades suggests that people attempt to get ahead of the input, predicting the next word or word sequence in response to semantic and syntactic constraints. For example, if listeners hear “the boy will eat” while viewing a visual context containing a range of objects, they will move their eyes to an image of a food item before that food is mentioned in the utterance (Altmann & Kamide, 1999). This work suggests a rather “greedy” processing system that builds semantic interpretations as quickly as possible, usually on a word-by-word basis. Anticipating a word allows the system to get a head start on processing it, because preactivated features can be integrated with information from the item when it is encountered in the input.

A second theoretical issue relates to whether representations are built serially or in parallel. This debate is closely related to the problem of ambiguity in language: Linguistic constituents can often be assigned more than one structure or meaning. For example, words may have more than one unrelated meaning (e.g., “bank” can mean either a financial institution or the land that slopes toward a river). Studies suggest that these alternative interpretations are activated in parallel, in proportion to the evidence for them, including how often the alternative meanings of the same form occur in the language. The frequency of the alternative meanings and the degree of sentence constraint combine to influence the ease with which the ambiguity is resolved (Blott et al., 2021). Ambiguities arise at the sentential level as well, in part because a sequence of words can often be assigned more than one syntactic structure, as in “Mary saw the kid with the glasses,” where it is indeterminate whether Mary or the kid is the one in possession of the glasses. The debate concerns whether only one interpretation is considered at a time or whether all of them are activated and evaluated with respect to one another. On the serial view (Frazier & Fodor, 1978), the system builds only one analysis (e.g., the kid is wearing glasses) and revises it if subsequent information suggests the analysis is wrong (e.g., the context later states or implies that Mary is the one with glasses). On the parallel view, both are activated simultaneously, although not necessarily equally strongly—the one that is more frequent or more likely given the preceding context may have greater activation (Taraban & McClelland, 1988).
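
To make the parallel view concrete, here is a minimal sketch (not a model from the cited literature) of how a meaning’s overall frequency and its fit to the current context might combine to determine relative activation; all numbers are invented for illustration:

```python
# A minimal sketch of frequency-weighted parallel activation for an
# ambiguous word. All numbers are invented for illustration; real
# accounts estimate them from corpora and norming data.

def activation(frequencies, context_fit):
    """Combine each meaning's relative frequency with its fit to the
    current context, then normalize to yield activation levels."""
    raw = {m: frequencies[m] * context_fit[m] for m in frequencies}
    total = sum(raw.values())
    return {m: round(v / total, 2) for m, v in raw.items()}

# "bank": the financial sense is more frequent overall...
frequencies = {"financial": 0.8, "riverside": 0.2}
# ...but a context like "the fisherman sat on the bank" favors the river sense.
context_fit = {"financial": 0.1, "riverside": 0.9}

print(activation(frequencies, context_fit))
# {'financial': 0.31, 'riverside': 0.69} -- both senses remain active,
# each in proportion to the combined evidence for it.
```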

Serial versus parallel views imply different processes at work when the comprehender encounters a so-called garden-path sentence. These sentences have been the basis of research on what psycholinguists refer to as parsing, the process of building syntactic structure as words are encountered. In garden-path sentences, a sequence of words seems headed in one syntactic direction but then takes an unexpected turn, as in “Mary bumped into the student and the teacher told her to be careful.” The verb “told” indicates that Mary bumped into only the student and probably not the teacher, which means the preceding conjoined noun phrase analysis must be revised. On the serial view, only one analysis is considered at a time, usually the one that is syntactically simpler, more frequent, or both. The onset of “told” forces that single parse to be repaired to create an analysis that conforms to the rules of English grammar. On the parallel view, the sequence “the student and the teacher” would be analyzed both as a conjoined object of “bumped into” and as an object noun phrase followed by the subject of a second clause, but, again, one of those two analyses might be more strongly activated than the other. The word “told” forces the parser to select the latter analysis, the one on which two clauses are conjoined, and the competing nonviable analysis is either inhibited or passively decays.

The third key issue in psycholinguistics concerns the architecture of the processing system and specifically whether it is modular or interactive. Modular models assume the parser generates an interpretation based on a proprietary database of information pertaining specifically to the function of that module (e.g., syntax), whereas interactive models assume the immediate use of all relevant constraints, including beliefs about the world. Evidence in favor of modularity includes demonstrations that the parser builds the same syntactic structure regardless of the availability of a potentially relevant piece of nonsyntactic information. For example, in reading studies, the simplest syntactic analysis seems to be adopted even when a discourse context motivates the use of a more complex linguistic construction (F. Ferreira & Clifton, 1986). 

A study making use of the so-called visual world paradigm, where participants’ eye movements to objects or scenes are recorded as they listen to language, seems to provide contradictory evidence (Spivey et al., 2002). Take the imperative sentence “put the apple on the towel in the box.” The listener may construe “on the towel” in one of two ways: The phrase might specify where to place the apple, or it might be a modifier indicating which apple is to be moved. The subsequent phrase “in the box” clarifies that “on the towel” is a modifier of apple because “in the box” clearly indicates a location. According to a specific interactive model known as Referential Theory, one function of modifiers is to allow comprehenders to identify a specific referent, such as when it is necessary to distinguish among multiple items of the same type (e.g., more than one apple in the relevant context). Therefore, if there are two apples and the command is to move one of them, the phrase “on the towel” should be understood as identifying which apple to move, and this should happen from the earliest stages of processing. Thus, when presented with a visual world with two apples, the listener should immediately infer that “on the towel” is meant to distinguish between them, preventing any confusion, and the evidence appears to support this. 

This evidence is compelling in some ways, but a potential concern is that the visual world paradigm may encourage listeners to adopt an experiment-specific strategy for processing sentences when they are repeatedly presented with this grammatical form (F. Ferreira et al., 2013). The sentence form “put X, currently in location Y, in location Z” occurs over and over throughout the experiment, and the visual context is shown for several seconds prior to the onset of the sentence. Both the visual display and the sentences conform to predictable patterns, which participants may learn over the course of the experiment. This could lead participants to adopt atypical processing strategies, ones that work in this experiment but do not characterize normal sentence processing.

Recent psycholinguistic models treat these three issues—units of meaning, serial/parallel processing, and modular versus interactive processing—rather differently. Here we describe three examples: probabilistic parsing models, good-enough processing models, and noisy channel models. 

Probabilistic parsing models incorporate probability theory to predict the structure of a sentence on a word-by-word basis (Futrell et al., 2020). Unlike deterministic models that apply hard-and-fast rules to parse sentences, probabilistic models acknowledge that language is often ambiguous and assume that people use statistical cues based on linguistic experience to interpret sentences. Different continuations of a string are weighted based on their frequency in the language as well as contextual information. A measure called surprisal (Hale, 2016), rooted in information theory, reflects the unexpectedness of a word in a given context (e.g., “butter” is less surprising in “bread and butter” than in “socks and butter”). A word with a high surprisal value slows comprehension, as reflected in reading times and other measures of language processing. The probability distributions are typically calculated from large corpora of text, which reflect actual usage patterns in the language. Real humans likely learn those distributions from a range of linguistic experiences, including texts, conversations, and so on. Probabilistic parsing models are therefore highly incremental: Probability distributions are updated as each word is encountered and are derived in part from what is expected to come next. The models assume parallel activation of interpretive alternatives, as the very idea of a probability distribution over possible continuations suggests. They also assume interactivity over modularity, since any relevant cue or constraint may influence the probabilities.
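
Surprisal itself is simple to compute once a probability estimate is available: it is the negative log probability of a word in its context. A minimal sketch in Python, using invented probabilities for the “butter” example above:

```python
import math

def surprisal(p):
    """Surprisal in bits: the negative log probability of a word in its
    context. Predictable words carry little information; unexpected
    words carry a lot, and reading times tend to track this quantity."""
    return -math.log2(p)

# Hypothetical probabilities for "butter" in the two contexts above:
print(surprisal(0.5))    # after "bread and ..." -> 1.0 bit
print(surprisal(0.001))  # after "socks and ..." -> ~9.97 bits
```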

A different model is known as good-enough processing (F. Ferreira et al., 2002). Here, it is assumed that humans often process and understand language based on partial information rather than striving for a complete, detailed understanding of every aspect of the input. This approach to comprehension implies that listeners and readers sometimes rely on heuristics and contextual cues to construct a “good enough” interpretation of linguistic input, particularly when faced with complex or ambiguous sentences. For example, when people encounter a garden-path sentence, they might not take the time to reanalyze it completely, leaving them with a partial or even erroneous interpretation. This has been shown in studies in which people presented with a sentence such as “while Mary bathed the baby played” are then asked whether Mary bathed the baby; often their response is yes, suggesting that they did not succeed in fully revising their initial syntactic interpretation (on the correct analysis, Mary is bathing herself, not the baby).

Noisy channel models (Zhang et al., 2023) are similar to good-enough processing models in that they assume that a linguistic sequence may contain a speaker error or may otherwise fail to reflect the speaker’s communicative intention; in these models, the comprehension system therefore essentially “auto-corrects” the input to make it conform to expectations. For example, a comprehender asked to “put the milk back in the stove” will assume that the speaker intended the word “fridge” and will revise the lexical content to conform to their belief that a carton of milk belongs in a fridge and not a stove. These models also assume incremental interpretation, but they are essentially agnostic on the question of whether alternative analyses are created serially or in parallel—both architectures are compatible with these shallow interpretation approaches.
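
The logic of this “auto-correction” can be stated in Bayesian terms: the probability of an intended message given the observed input is proportional to the prior plausibility of that message times the likelihood that it would surface as that input. Below is a minimal sketch with invented numbers for the milk example; actual noisy channel models estimate these quantities from corpora and error data:

```python
# A minimal sketch of the Bayesian computation behind noisy channel
# models: weigh the prior plausibility of each candidate intended
# message against the likelihood that it would surface as the observed
# input. All numbers are invented for illustration.

def posterior(priors, likelihoods):
    """P(intended | observed) is proportional to
    P(observed | intended) * P(intended)."""
    unnorm = {msg: priors[msg] * likelihoods[msg] for msg in priors}
    total = sum(unnorm.values())
    return {msg: round(v / total, 2) for msg, v in unnorm.items()}

# Observed input: "put the milk back in the stove"
priors = {
    "... in the fridge": 0.95,  # milk usually belongs in the fridge
    "... in the stove": 0.05,   # the literal message is implausible
}
likelihoods = {
    "... in the fridge": 0.10,  # requires assuming a speaker slip
    "... in the stove": 0.90,   # speaker said exactly what they meant
}

print(posterior(priors, likelihoods))
# {'... in the fridge': 0.68, '... in the stove': 0.32} -- the plausible
# "fridge" reading wins even though it mismatches the literal input.
```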

Modularity is also compatible with good-enough processing, because this model does not necessarily assume a top-down imposition of cues on parsing but instead posits that some stages of processing may simply be skipped in the interests of efficiency. In the example above, the comprehender ends up thinking Mary bathed the baby not because of their beliefs about who is likely to engage in baby-bathing but because the syntactic revision steps that would have removed the link between “bathe” and “the baby” were not implemented.

Producing language

For a number of reasons, the study of language production lagged behind the study of comprehension. Likely the most important reason is that it is relatively easy to specify and control the input to the comprehension process (the words that people see or hear), but it is harder to specify and control the input to the production process. That is, an act of production begins with a thought that a speaker wishes to convey—typically termed a message—and it is difficult to plant particular messages in speakers’ minds.

As such, the modern-day study of language production was initiated in the 1970s largely by the study of naturalistic speech errors (Fromkin, 1973). When speakers speak, they err, in that they say something different from what they intend. A speaker who says “that log could use another fire,” intending “that fire could use another log,” has swapped the positions of the nouns “log” and “fire”—not because they erroneously thought that a log could use a fire but because the language production system erred in the positioning of each noun. Critically, as a source of evidence, speech errors sidestep the problem of specifying the input to the production process; by systematically observing the momentary breakdown of a naturalistically operating process, the study of speech errors provides face-valid evidence of the workings of language production in action.

The systematic study of speech errors yielded surprisingly orderly insights. As in “that log could use another fire,” when speakers err in where they place their words, those words nearly always come from the same grammatical category—nouns replace nouns, verbs replace verbs, and so forth. This is true both in movement errors (as in “that log could use another fire,” where the nouns have effectively moved from where they were to have been positioned) and in substitution errors (“torn together three issues,” where “together” is substituted for “apart”). People err when assembling the sounds of their speech as well, for example saying “darn bore” instead of “barn door,” and these errors are also systematic. Sounds of the same type replace one another (vowels replace vowels and consonants replace consonants), again both in movement errors and in substitution errors. In movement errors, sounds tend to move between the same positions in syllables, such that beginning sounds exchange with beginning sounds and ending sounds with ending sounds. These patterns give psychological reality to the categories that describe them: The fact that nouns exchange with nouns and verbs with verbs demonstrates the psychological reality of the categories of noun and verb, and the fact that sound errors respect syllable position shows that we psychologically represent syllable position when we speak.

What’s more, movement errors at different levels of language travel over different distances (Garrett, 1975). Word errors travel over larger distances, because words from the beginnings and endings of sentences can exchange. Sound errors travel over shorter distances, with sounds from the beginnings and endings of sentences very rarely exchanging. This shows that which parts of a sentence are planned at the same time differs depending on the level of linguistic planning at stake. For words, parts of the sentence that end up far apart are nonetheless planned at the same time, allowing the words to take each other’s places. For sounds, parts of the sentence that end up far apart are not planned at the same time, and so, far-apart sounds do not take each other’s places. These are some of the simplest insights that have been gleaned from the systematic analysis of speech errors (for a more complex and comprehensive presentation, see Dell, 1995).

Later, more artificial but better controlled laboratory-based approaches emerged (see Bock, 1995), including having participants name pictures of objects or simple scenes, recall sentences after delays long enough for superficial memory to decay, or complete sentence fragments. One key issue driving research is determining the nature of the linguistic (and nonlinguistic) representations behind key language-production behaviors. One common method uses structural priming (Pickering & V. S. Ferreira, 2008), the tendency to repeat structures speakers have previously experienced. For example, a speaker who hears a passive sentence structure (“the referee was punched by one of the fans”) will be more likely to subsequently use another passive (“the boy was awakened by the alarm clock”), even if the previous and current sentences are unrelated in almost every other way. Structural priming evidence shows that sentences are organized by relatively abstract hierarchical structures that group parts of sentences in terms of the grammatical categories of words. Other nonsyntactic properties of sentences can influence structural priming as well (e.g., meaning relationships), but it is unlikely that effects due to syntactic structure are reducible to these nonsyntactic properties.

Another subfield has investigated the types of representations that underlie grammatical agreement [see Morphology] (Bock & Miller, 1991). In English agreement, singular subject noun phrases require singular marking on the verb (“the cat meows”), whereas plural subject noun phrases require plural marking (“the cats meow”). In principle, number marking on the verb could be determined by a representation of grammatical number (“the cats” marked grammatically as plural) or by a representation of the conceptual number of the referent described by the grammatical subject (“the cats” referring to more than one cat). By investigating a phenomenon termed agreement attraction, a long line of research suggests that verb number in English is determined by a grammatical number feature of the noun phrase. For example, a speaker is more likely to incorrectly say “the strength of the soldiers were” than “the strength of the soldier were,” but they are no more likely to say “the strength of the army were” than “the strength of the soldier were.” The word “army” is as unlikely to attract plural agreement as the word “soldier” (and is even less likely to do so than the grammatical plural “soldiers”), even though an individual army consists of many soldiers.

Another line of work has investigated the effect of the accessibility of linguistic material on speakers’ linguistic choices (V. S. Ferreira, 2008). People are more likely to say easier-to-retrieve words than harder-to-retrieve words, even when the harder-to-retrieve words might be closer to what speakers mean. People are also more likely to position easier-to-retrieve words earlier in sentences, which helps the production system manage the challenges of the variable lexical-retrieval process during production. This in turn implicates an important property of production (one that mirrors comprehension): It is to a large extent incremental—people produce sentences piecemeal, from start to finish, rather than formulating a whole sentence and saying it when it is ready. Such incrementality is limited in some respects—for example, people plan linguistically dependent units together, and people are strategic about how incremental to be. Some situations allow for and even encourage careful planning—for example, replying to questions presented during an interview—whereas others require quick responses, such as when we are under time pressure to speak.

Finally, corpus research uses (very) large repositories of produced language to gain insight into language production. Corpus studies have confirmed many of the above findings and have led to new insights—for example, that people tend to word utterances so that particular stretches of language are neither too heavy nor too light with new information. That is, according to the principle of uniform information density (Jaeger, 2010), speakers make lexical and grammatical choices so that information tends to be distributed evenly across the words of a sentence. Uniform information density predicts, for example, that a speaker is more likely to include the optional complementizer “that” when the word following it is less expected, allowing the surprisal associated with that material to be distributed over two words rather than one.
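
To illustrate the complementizer prediction, here is a minimal sketch using invented per-word probabilities; including “that” lowers the peak surprisal and flattens the per-word information profile:

```python
import math
import statistics

def surprisals(word_probs):
    """Per-word surprisal (bits), given each word's probability in context."""
    return [-math.log2(p) for p in word_probs]

# Invented probabilities for the words after "The coach knew ...":
# without "that", the unexpected material lands on a single word;
variant_a = [0.02, 0.5, 0.6]        # "you", "missed", "practice"
# with "that", the same information is spread over an extra word.
variant_b = [0.4, 0.1, 0.5, 0.6]    # "that", "you", "missed", "practice"

for label, probs in [("without 'that'", variant_a), ("with 'that'", variant_b)]:
    s = surprisals(probs)
    print(f"{label}: peak = {max(s):.2f} bits, "
          f"variance = {statistics.variance(s):.2f}")
# The "that" variant shows a lower peak and a flatter (more uniform) profile.
```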

Conversation and dialogue

Most of psycholinguistics is descended from the broader field of cognitive psychology, and so, studies involving isolated language users dominate. However, the study of situated naturalistic language use, primarily in dialogue settings, has always been present. There are two prominent topics in this subfield.

The first is common ground (Clark et al., 1983): How do people in dialogue know what knowledge they share with their interlocutors? This matters because the whole point of conversation is to tell people things they do not already know, but to do so, you must describe those things in terms of what they do know. How we know the latter—what our interlocutor knows—is a philosophical puzzle, because in principle, you must not only know a piece of shared knowledge; you must know your partner knows it, know that your partner knows you know it, and so on, into infinite regress. Instead, people must rely on heuristics to decide whether a fact is in common ground. The primary dimension of debate within the study of these heuristics has been the extent to which the computation of common ground begins with a well-specified model of the addressee versus beginning with what speakers themselves know and adjusting based on what they know about their addressee. The former approach emphasizes the sophistication with which speakers can compute aspects of common ground, whereas the latter emphasizes the inherent difficulty of computing common ground and so focuses on strategies that might ease the process. A recent computational model specifies how speakers tailor their expressions to facilitate listeners’ comprehension as part of a collaborative effort to facilitate communication (Jara-Ettinger & Rubio-Fernandez, 2022).

A second topic is the sophisticated way that people in conversation take turns speaking. Naturalistic investigations have shown that the lag between when one person stops speaking and the next person starts is surprisingly consistent, even across languages and cultures—only about 200 ms (though with notable variation; Barthel et al., 2017). This short gap is intriguing in part because it means that people must be planning what to say at the same time as they are listening to their partners, and much work in cognitive psychology has shown that people are terrible at dual-tasking in this way. The gap is also intriguing because it is too short for people to recognize that their partners have stopped speaking before initiating their own speech—something in what people say or how they say it must provide reliable information to conversational partners about when a speaking turn will end. Accurately estimating this gap is important, because longer-than-normal gaps are noteworthy in conversation and shorter-than-normal gaps constitute interruptions, so both of these timing constraints must be managed to converse successfully.

Questions, controversies, and new developments

Several developments in cognitive science, artificial intelligence (AI), neuroscience, and engineering have altered the theoretical landscape and even the empirical approaches that psycholinguists apply to the understanding of human language processing. These developments include the advent of LLMs in AI that allow for astonishingly realistic generation of language (Blank, 2023), the emergence of information theory as a key paradigm for understanding the nature of language processing (Mahowald et al., 2013), and the increasing focus on how language is implemented in the human brain, based in part on the availability of innovative techniques in neuroscience (Friederici et al., 2017). These developments are not independent, of course. For example, LLMs are inspired by the concept of neural networks, which are often touted as AI modeled on the functioning of biological brains. 

LLMs are being used to test psycholinguistic theories, including ideas about the sort of efficiency that emerges from information theory approaches, and they are useful for establishing the extent of language localization in the human brain. LLMs are based on deep neural network architectures that learn from massive amounts of data typically scraped from the internet. During training, LLMs analyze patterns and relationships among adjacent and nonadjacent words to gain the ability to predict and generate coherent and often lengthy linguistic passages. The learning algorithm allows LLMs to generate responses that are novel and “creative” in the traditional sense of constituting sequences that have likely never been produced before; they do not simply mimic texts they have encountered. The relevance of LLMs for psycholinguistics is that they potentially serve as existence proofs of the power of processing mechanisms such as prediction and responsiveness to frequency of use. 

Additionally, LLMs produce high-quality linguistic output with no explicit knowledge of syntax or other grammatical rules and constraints. Their ability to respect grammatical constraints emerges as a result of exposure to vast amounts of data combined with an “attentional” mechanism that allows LLMs to emphasize certain words in the input over others and link them to establish important syntactic and semantic relationships. This ability means that an LLM will understand that a verb that might continue the sequence “the key to the cabinets” should be singular because that is the number feature on the word “key,” which is the head of the noun phrase; unlike people, the LLM is never tricked into computing the agreement feature based on the more local noun, which in the example happens to be a plural (“cabinets”). There is currently intense debate over whether LLMs behave the way human systems do (e.g., Contreras Kallens et al., 2023; Yiu et al., 2023)—for example, whether they “experience” garden-paths when presented with the same constructions. Another issue is that LLMs ingest amounts of data that are orders of magnitude greater than any human would encounter in a lifetime (or indeed, in thousands of lifetimes), suggesting they may be implausible as models of human language learning.
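
This kind of claim can be probed directly by querying a model for its next-word probabilities. The sketch below assumes the Hugging Face transformers library and the small GPT-2 model (illustrative choices, not the systems discussed in the cited work); it compares the probability of a singular versus a plural verb after “the key to the cabinets”:

```python
# A minimal sketch, assuming the Hugging Face transformers library and
# the small GPT-2 model: compare the probability of a singular vs.
# plural verb after a subject whose head noun ("key") is singular but
# whose local noun ("cabinets") is plural.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

context = "The key to the cabinets"
inputs = tokenizer(context, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Probability distribution over the next token after the context.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

for verb in [" is", " are"]:  # each happens to be a single GPT-2 token
    token_id = tokenizer.encode(verb)[0]
    p = next_token_probs[token_id].item()
    print(f"P({verb.strip()!r} | {context!r}) = {p:.4f}")
```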

Information theory relates to how messages are conveyed for the purposes of communication and posits that messages are encoded; sent over a limited-capacity, potentially noisy channel; and then decoded by a receiver. Communication is successful when the message sent and the message received are similar, if not identical, and when the meaning is derived with minimal effort. Information theory implies that linguistic forms have properties that enhance communicative efficiency. For example, words that are used often tend to be short, and changes in linguistic forms tend to enhance communicative efficiency (e.g., shortening and simplification of commonly used expressions). One of the challenges of efficient communication is that the linguistic signal is often noisy: Speakers make errors, listeners’ perceptual and cognitive systems do not operate flawlessly, and the environments in which linguistic exchanges happen often contain noise and distractions. Returning to the idea behind probabilistic approaches, models inspired by information theory assume that the language system uses its expectations, or priors, to assign probabilities to potential interpretations. These priors are used in a rational way: If the input is clear and the environment is optimal, then the “data”—the actual linguistic content—will tend to drive the interpretation; if the input is messy or the environment is noisy (or both), priors will have a stronger influence.

Finally, the study of brain and language has been at the center of both neuroscience and linguistics for more than a century, but research linking language to brain mechanisms has accelerated dramatically since brain imaging techniques such as functional magnetic resonance imaging became available. Electrophysiological methods such as electroencephalography have been used for many decades, and they are useful for testing psycholinguistic theories because electroencephalography effects linked to specific linguistic manipulations—event-related potentials—permit psycholinguists to evaluate the effects of certain linguistic features on online processing. For example, the component known as the N400 (a negative-going waveform that peaks approximately 400 ms after the onset of a stimulus) reflects the difficulty of integrating an unexpected word (e.g., one with poor semantic fit) into an ongoing sequence. The N400 and other event-related potentials have been used extensively to examine prediction in language processing, for example. More recent work focuses on the brain itself rather than on using brain-based methods to evaluate psycholinguistic models. For example, the localizer method (Fedorenko & Kanwisher, 2009) allows researchers to identify the putative language areas in healthy individuals as well as in those experiencing a language disorder, in order to determine to what extent those areas overlap with other cognitive domains such as music and planning. Recent work using this method has shown that LLMs can predict the size of the neural response associated with different sentence types (Tuckute et al., 2024), illustrating again the interconnections among these new theoretical and methodological developments.

Broader connections

The field of psycholinguistics has links to almost all areas of cognitive science. Aside from the importance of generative AI and other types of computational models for psycholinguistic theory development and testing, neuroscience is also embedded in the study of language processing as psycholinguists attempt to map out the brain’s language networks. Because language processing involves retrieval of information, the study of both short-term and long-term memory connects to issues such as lexical and syntactic retrieval and the integration of preexisting knowledge into representations of sentence meaning. Attention, inhibition, and cognitive control come into the picture as psycholinguists assess whether listeners and readers pay more attention to some words in sentences than to others and how they suppress unwanted or nonviable bits of linguistic information (e.g., irrelevant word meanings) [see Attention]. Decision-making processes also influence the selection of information for linguistic processing as well as the organization of ideas for orderly linguistic communication.

Psycholinguistics, then, is not only significant as basic research but also has practical implications in today’s technology-driven society. Innovations like generative AI have brought to the forefront essential questions about how language is learned and used. Additionally, as the world becomes more interconnected, there is greater demand for language education and a deeper understanding of bilingualism’s impact on cognitive and neural functioning.
