
Perceptual Learning

Published on Sep 30, 2024

Perceptual learning is a change in perception due to practice or repeated exposure to a given category of stimuli. It is commonly measured via performance improvements in perceptual tasks. The study of perceptual learning can be separated into research on basic perceptual tasks, real-world perceptual expertise, and object recognition. Perceptual learning is distinct from other forms of learning in that it involves structural and functional changes to perceptual systems. It can occur under a wide variety of conditions, such as with or without feedback, task relevance, and conscious attention. Open questions include whether a single perceptual learning model can accommodate this wide variety of conditions, and what neural and psychological mechanisms underlie perceptual learning. Practical applications include improving expert performance and treating pathologies such as amblyopia (lazy eye). Recent philosophical debate over perceptual learning includes whether high-level (more complex) properties are represented in perception, whether perception is theory laden or penetrated by cognition as a result, and whether there can be “bad” or biased cases of perceptual learning. 

History

Two early experiments in perceptual learning illustrate different approaches to its study. In 1858, German physiologist Alfred Wilhelm Volkmann measured improvements in people’s ability to perform a basic perceptual task with a simple stimulus: distinguish between two distinct but simultaneous points of stimulation on their skin (Volkmann, 1858). In contrast, in 1897, American psychologists Bryan and Harter studied real-life skills involving perceptual expertise: those of telegraph operators, who learned to comprehend and produce messages in telegraphic code (Bryan & Harter, 1897).

These disparate approaches have largely persisted. Pioneering work on perceptual learning by Eleanor Gibson (1969) employed relatively complex perceptual stimuli, in contrast to the work of researchers who studied phenomena such as learned sensitivity to simple line orientations and motion discrimination.

Today, three relatively self-contained research programs have emerged, investigating the perceptual processes involved in learning to detect and discriminate simple stimuli in lab-created tasks (Dosher & Lu, 2017), real-world objects of perceptual expertise (Gauthier et al., 2010), and general object recognition (Peissig & Tarr, 2007).

Core concepts

Perceptual learning is distinct from other forms of learning, such as motor and cognitive learning, in that there is a change in the functional and structural organization of the perceptual system itself [see Statistical Learning]. 

Evidence for a perceptual locus of change comes from brain imaging that demonstrates structural and functional changes in early visual cortical areas after learning (Folstein et al., 2013), changes to perceptual processing within the first 170 ms after stimulus onset (Tanaka & Curran, 2001), and training specificity, in which a lack of transfer from learned to novel contexts is thought to indicate a perceptual change (Ahissar & Hochstein, 1997).

While the bulk of evidence for perceptual learning comes from vision research, perceptual learning has been observed in all other sensory modalities (e.g., Spence, 2019) and in multisensory integration (Shams & Seitz, 2008). 

Characterizing the variety of conditions under which perceptual learning changes perception is an important area of investigation. Distinctions have been made between fast and slow learning processes, task-relevant and task-irrelevant stimuli, learning with or without conscious attention to (or awareness of) the stimuli, and training with or without feedback (Watanabe et al., 2001) [see Attention].

Questions, controversies, and new developments

Given the different conditions and sensory modalities in which perceptual learning occurs, as well as considerable variation in the complexity of the learned stimuli, there is skepticism about the prospects for a unified model. For example, many models of the neural mechanisms of perceptual learning do not explain both task-relevant and task-irrelevant varieties, or they explain perceptual learning of simple but not more complex stimuli (although, see Watanabe & Sasaki, 2015). Whether these models can be extended to multisensory perceptual learning is a further question. Several attempts have been made to give a unified account of the psychological mechanisms responsible for perceptual learning (Goldstone, 1998; Kellman & Garrigan, 2009).

Computational models offer mechanistic theories of perceptual learning that provide accounts of the time course, durability, and specificity of training. Most of these models are neural networks because their gradual, incremental adaptation of weights between neuron-like units aligns well with the typically protracted course of human perceptual learning (Dosher & Lu, 2017). There is an important distinction between unsupervised and supervised models. Unsupervised neural networks pick up on coherent patterns among input elements across many inputs (Austerweil & Griffiths, 2013), whereas supervised models create new perceptual components (Goldstone, 2003) or reweight preexisting ones (Petrov et al., 2005) that are useful for an important task. An important future direction will be to incorporate both unsupervised and supervised learning so that a single learning system simultaneously finds underlying regularities in the form of stimulus clusters, dimensions, and features and also warps these regularities in response to environmental feedback.
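The supervised reweighting idea can be illustrated with a minimal sketch (the task, tuning curves, and all parameters here are hypothetical, not drawn from any published model): fixed "early" orientation-tuned channels feed a decision unit, and trial-by-trial feedback adjusts only the readout weights, so performance improves without the representation itself changing.

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_responses(orientation_deg, n_channels=12, noise=0.3):
    """Fixed 'early' channels with Gaussian orientation tuning; never altered by learning."""
    prefs = np.linspace(0, 180, n_channels, endpoint=False)
    d = np.abs(orientation_deg - prefs)
    d = np.minimum(d, 180 - d)                      # orientation is circular (0 == 180)
    return np.exp(-(d / 20.0) ** 2) + noise * rng.standard_normal(n_channels)

def make_trial():
    """Two-alternative discrimination: is the stimulus tilted 85 or 95 degrees?"""
    label = rng.integers(0, 2)                      # 0 -> 85 deg, 1 -> 95 deg
    return channel_responses(85 + 10 * label), label

def train_readout(n_trials=2000, lr=0.1):
    """Supervised reweighting: a logistic readout over the fixed channel outputs."""
    w, b = np.zeros(12), 0.0
    for _ in range(n_trials):
        x, y = make_trial()
        p = 1.0 / (1.0 + np.exp(-(w @ x + b)))      # predicted P(label = 1)
        w += lr * (y - p) * x                        # gradient step using feedback
        b += lr * (y - p)
    return w, b

def accuracy(w, b, n=500):
    correct = sum(int((w @ x + b > 0) == bool(y)) for x, y in (make_trial() for _ in range(n)))
    return correct / n

w, b = train_readout()
print(f"post-training accuracy: {accuracy(w, b):.2f}")  # well above chance (0.5)
```

Because the channels are fixed and only the readout changes, the learned weights concentrate on channels tuned near the trained orientations, which is one way such models account for the specificity of training effects.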

Broader connections

Practical applications of perceptual learning include enhancing perceptual expertise in areas such as sports, music, medical diagnosis, and mathematics (Goldstone et al., 2017); treating pathologies such as amblyopia (lazy eye); and mitigating age-related visual decline (Watanabe & Sasaki, 2015). Some suggestions for improving perceptual learning include pairing confusable stimuli to increase their discriminability, creating easy-to-hard training sequences, requiring learners to actively predict components of stimuli, providing immediate feedback, employing contrastive labels, and initially presenting systematically caricatured members for highly similar categories.
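One of the suggestions above, easy-to-hard training sequences, can be sketched as a simple adaptive schedule (the starting difficulty, shrink factor, and run length are hypothetical illustration values): trials begin with a large stimulus difference that shrinks after runs of correct responses, so learners face harder discriminations only once easier ones are mastered.

```python
def easy_to_hard_schedule(start_diff=40.0, min_diff=2.0, step=0.8, run_length=3):
    """Yield per-trial stimulus differences, easing from easy (large) to hard (small).

    After `run_length` consecutive correct responses the difference shrinks by
    factor `step`; an error resets the streak. The caller reports each outcome
    via .send(True/False).
    """
    diff, streak = start_diff, 0
    while True:
        correct = yield diff
        streak = streak + 1 if correct else 0
        if streak == run_length:
            diff = max(min_diff, diff * step)
            streak = 0

sched = easy_to_hard_schedule()
diff = next(sched)               # first trial uses the easiest difference (40.0)
for _ in range(9):               # nine correct responses -> three shrink steps
    diff = sched.send(True)
print(f"difference after 9 correct trials: {diff:.1f}")
```

The `min_diff` floor keeps the task from becoming impossible, and resetting the streak on errors keeps difficulty roughly matched to the learner's current ability.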

Some philosophers have recently invoked perceptual learning to support the claim that we can perceive high-level properties of objects, such as a tree or a person (Ransom, 2020; Stokes, 2021; Burnston, 2021). However, others have argued that perceptual learning merely alters what sorts of low-level properties we are able to perceive, such as color, shape, and motion (Connolly, 2019). Allowing for the alteration of perception through learning raises the question of whether perception is theory laden or penetrable by our theories and beliefs (cognitive penetration), which would have implications for scientific observation (Ransom, 2020; Stokes, 2021). Finally, there is the possibility that perception may be biased in undesirable ways as a result of biases in the learning environment or process (Connolly, 2019; Prettyman, 2019; Jenkin, 2023).

Regardless of whether perceptual learning involves cognitive penetration, it is broadly connected to cognition as a whole because it leads to changes to the input representations for all processes downstream from perception in the flow of information. Given the major consequences of lasting changes to perceptual processing, it is not surprising that perceptual systems generally change slowly and conservatively. Nonetheless, perceptual learning is an important tool for creating flexible cognitive systems in which environmental regularities can be assimilated into a learning device that is subject to perceptual constraints [see Affordances]. Viewed in this way, perceptual learning critically involves a learner modifying the constraints according to which their own subsequent learning unfolds.

