Staying within the lines: the formation of visuospatial boundaries influences multisensory feature integration.
Abstract
The brain processes multisensory features of an object (e.g., its sound and shape) in separate cortical regions. A key question is how representations of these features bind together to form a coherent percept (the 'binding problem'). Here we tested the hypothesis that the determination of an object's visuospatial boundaries is paramount to the linking of its multisensory features (i.e., that the refinement of attended space through the formation of visual boundaries establishes the boundaries for multisensory feature integration). We recorded both scalp and intracranial electrophysiological data in response to Kanizsa-type illusory contour stimuli (in which pacman-like elements give the impression of a single object), their non-illusory counterparts, and auditory stimuli. Participants performed a visual task and ignored sounds. Enhanced processing of task-irrelevant sounds when paired with attended visual stimuli served as our metric for multisensory feature integration [e.g., Busse et al. (2005) Proc. Natl Acad. Sci. USA 102: 18751-18756]. According to our hypothesis, task-irrelevant sounds paired with Kanizsa-type illusory contour stimuli (which have well-defined boundaries) should receive enhanced processing relative to task-irrelevant sounds paired with non-illusory contour stimuli (which have ambiguous boundaries). The scalp data clearly support this prediction and, combined with the intracranial data, advocate for an important extension of models for multisensory feature integration. We propose a model in which (i) the visual boundaries of an object are established through processing in occipitotemporal cortex, and (ii) attention then spreads to cortical regions that process features that fall within the object's established visual boundaries, including its task-irrelevant multisensory features.