Visual processing has always been one of the topics in cognitive science that interests me the least, but after reading this paper, “Constructing Meaning,” by Seana Coulson, I’ve changed my mind (at least a tiny bit). Instead of subscribing to the folk psychological view of vision as a “passive registration process in which people record optical information that exists in the world,” she suggests that it’s an “active, constructive process in which people exploit optical information to interact with the world.”
Early accounts of vision represented it as a hierarchical, feed-forward process, but more recent studies have revealed that there are in fact a number of backward connections, in which information is passed from higher-level areas to lower ones, as well as a number of lateral information transfers. Vision isn’t as simple as was once thought.
Further demonstrating this point is the notion of context sensitivity. One example is the phenomenon of color constancy: even when lighting conditions change, the color we perceive objects to be remains constant. Another example is neural filling-in. We all have a blind spot, the region of the retina where the optic nerve attaches and there are no receptors, yet we don’t perceive the small hole in our visual field that we would if our brains weren’t somehow filling in the gap. This is a specific instance of the more general problem that despite frequent blinks (about every 5 seconds), we don’t experience perceptual discontinuity. And a final example that I thought was earth-shattering the first time I read Noë’s account of it: when we look at an object, we’re actually only seeing it in two dimensions, yet we perceive the whole of it in its three-dimensional glory.
The short story: we don’t perceive what’s actually there, but instead construct a representation of what we’re seeing based on context and prior knowledge of the world around us.
Coulson then likens the process of visual perception to language processing. Making meaning out of an utterance is not simply a decoding process based solely on the linguistic information, just as visual perception isn’t simply a passive absorption of visual stimuli. Instead, perceiving linguistic meaning involves a complex interplay between linguistic and nonlinguistic knowledge. After reviewing quite a bit of cognitive neuroscience data, Coulson concludes that the studies “argue against a deterministic, algorithmic coding and decoding model of meaning, and for a dynamic, context sensitive one.” I thought this paper was a really cool way of saying that context is crucial for making meaning, whether out of visual or linguistic input, rather than an incidental property of the stimuli.