
Gesture

Published on Jul 24, 2024

Co-speech gesture is a form of bodily communication, primarily involving the hands and arms, that conveys a message alongside speech. These gestures enrich the conventional format of spoken words by embodying thoughts in ways that speech cannot (McNeill, 1992). To illustrate, imagine describing how to tie a shoe while gesturing. Notice how the words in this description follow arbitrary conventions and bear no resemblance to their meaning, while the gestures physically simulate the act itself. Iconic gestures, as in this example, communicate meaning visually and spatially. Other prominent types of gesture include deictics, manual points that connect speech to context, and beats, rhythmic hand and arm movements that emphasize words and phrases. Unlike conventional sign languages, co-speech gestures require speech to fully convey their meaning. Together, speech and gesture serve not only to clarify intentions and aid listener comprehension but also to help the speaker organize thought and manage cognitive resources.

History

Since antiquity, the art of gesturing has been recognized as an effective tool for oration and persuasion (Quintilian, 1922). The first detailed scientific study of gesture was by Darwin (1872), who recognized that the hands, together with other bodily displays, greatly accentuated the emotional force of language. Nearly 100 years later, these ideas coalesced into the field of nonverbal behavior, which conceived of hand gestures—along with facial expressions, eye gaze, and tone of voice—as emotional and cultural expressions adding to language but operating largely outside of it (Ekman & Friesen, 1969).

In contrast to this nonverbal tradition, more modern theories situate gesture not on the periphery of language but as a central part of it (Kendon, 1986). On these accounts, co-speech gestures are associated primarily with thinking rather than feeling. This new view is based on numerous empirical observations: gesture and speech develop in parallel over the course of children’s language acquisition; gesture compensates for language deficits; gestures and words mutually disambiguate one another; the two modalities share neural mechanisms; and gestures spur cognitive change (Goldin-Meadow, 2005; McNeill, 1992; Willems & Hagoort, 2007). These findings add credibility to theories that spoken language emerged from gestural communication systems in our evolutionary past (Tomasello, 2010).

Core concepts

The present-day study of co-speech gestures is wide-ranging; three main themes are highlighted below.

Gesture plays a key role in language acquisition

Babies begin pointing at roughly nine months of age, and these gestures are among their first explicit attempts to share intentions (Bates, 1976). Unlike chimpanzee pointing, which serves exclusively to request, infant deictics are also commonly used to inform and help others (Tomasello, 2010). These pointing gestures predict one-word speech, and later, deictic and iconic gestures combine with single words to predict two-word speech (Goldin-Meadow, 2005) [see Language Acquisition]. Moreover, young learners pay keen attention to all types of gestures to better understand the language of others. The fact that blind children gesture even when speaking to other blind people suggests that this behavior is innate and integral to language itself. Indeed, atypical gesturing is associated with differences in language outcomes, as seen in autism spectrum disorder (de Marchena & Eigsti, 2010) [see Autism].

Gesture enhances multimodal expression

Co-speech gestures go beyond the meaning of speech, even when the two modalities overlap extensively in content. Recall the shoe-tying example, in which speech and gesture simultaneously convey complementary information. This example illustrates the property of iconicity in gesture, whereby object attributes, action sequences, and spatial relationships are vividly depicted by the hands. Such iconicity greatly enriches the abstract content of speech not only in one’s native language (McNeill, 1992) but also in a second language (Gullberg, 2006). Moreover, the layered nature of these multimodal expressions helps listeners understand a message more quickly, accurately, and thoroughly (Hostetter, 2011).

Gesture is an embodied form of thinking and learning

Current theories view gesture not just as reflecting thought, but as thought itself (Streeck, 2009). From this perspective, when people gesture about tying a shoe, part of that knowledge is literally in the hands. This sort of embodied thinking allows speakers to off-load cognitive effort, freeing resources for other processes such as planning speech, solving problems, or generating insights. Embodiment also helps speakers simulate visuospatial concepts (Hostetter & Alibali, 2008) [see Spatial Cognition] and generate schemas that are hard to articulate verbally (Kita et al., 2017). In this way, gestures bridge abstract thought and concrete action, uniquely enabling humans to engage with ideas using their hands. As a consequence, embodied thinking is well suited to play a powerful role in learning. For example, in mathematical instruction, gestures not only reveal what learners know and do not know; attending to them also guides teachers’ instruction and facilitates student growth (Alibali & Nathan, 2012).

Open questions

It is now widely accepted that co-speech gestures are part of language, yet several questions remain (Church et al., 2017). For example, it is known that gesture and speech share extensive neural mechanisms, but how and when these two modalities interact in the brain is less understood. Furthermore, research should delve deeper into the shared and unique neural processes of co-speech gestures and conventionalized signs (Emmorey & Özyürek, 2014). Such investigations are likely to yield valuable insights into the diverse biological underpinnings of language, enhancing our understanding of its complexity and adaptability. Relatedly, there should be further investigation into how iconicity is expressed in sign language and co-speech gesture, given that signs are bound by convention much more than gestures (Perniss & Vigliocco, 2014). Another important question revisits the emotional function of co-speech gestures: How do the hands interact with other bodily actions in thinking and feeling (Kelly & Tran, 2023)? Finally, although somewhat contentious, the limitations of gestures deserve more attention: Why are there such large individual differences in the use of gestures? When does gesture disrupt rather than help thinking and learning? If gesture is indeed part of language, why do many aspects of language still develop and function in its absence?

Broader connections

The study of gesture is inherently interdisciplinary and has implications for diverse areas in cognitive science. Although humans gesture in unique ways, gesturing is not unique to humans, so better understanding similarities and differences across species will inform comparative psychology [see Signaling]. Within humans, pinpointing the relative linguistic contributions of speech and gesture informs debates about what aspects of cognition are embodied and concrete versus computational and abstract. Beyond living things, the study of gesture and other bodily actions can improve the field of artificial intelligence by adding missing context to large language models [see Large Language Models]. Similarly, exploring the biomechanical basis for gesture–speech synchrony has implications for the fields of robotics and human–machine interaction (Pouw et al., 2020). Finally, the study of gesture enriches philosophical frameworks that challenge traditional notions of where the mind begins and ends (Clark, 2006).

Acknowledgements

This work was supported by Colgate's Center for Language and Brain.
