Distributed Representation of Touch in Visual and Somatosensory Cortex
Author: Ying Zhu
Committee Members: Hualou Liang, PhD (co-author); Michael S. Beauchamp, PhD (co-author)
Master's thesis, The University of Texas School of Health Information Sciences at Houston.
Multi-voxel pattern analysis (MVPA) was used to analyze blood-oxygen-level-dependent functional magnetic resonance imaging (BOLD fMRI) data, which were acquired as human subjects received vibrotactile stimulation of their hands and feet. Support vector machines trained and tested on the whole-brain fMRI data were able to accurately decode the body site of single stimulation trials, with mean performance of 92% in a two-way discrimination task (chance performance 50%) and 70% in a four-way discrimination task (chance performance 25%). Primary and secondary somatosensory areas (S1 and S2) alone decoded the body site of stimulation with similarly high accuracy. The hand and foot regions of S1 (S1hand and S1foot) were examined separately in a two-way classification task. S1hand was better able to decode the hand of stimulation (left vs. right), and S1foot was better able to decode the foot of stimulation. Surprisingly, S1foot was also able to decode the hand of stimulation at above-chance levels, suggesting that representations in somatosensory cortex may be more distributed than the classical model of the sensory homunculus implies. In addition to S1 and S2, vibrotactile responses were observed in a region of visual cortex, area MST and associated areas (MST+) in the lateral occipitotemporal lobe. MST+ was able to accurately decode the hand but not the foot of stimulation, supporting a role for MST+ in eye-hand coordination.
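The decoding logic described above can be sketched in miniature. The thesis used support vector machines on real fMRI data; the example below is a simplified stand-in that substitutes a nearest-centroid classifier and fully synthetic "voxel" patterns (all values and dimensions here are hypothetical, chosen only to illustrate how single-trial MVPA decoding and its comparison to chance are set up):

```python
import random

random.seed(0)
N_VOXELS, N_TRIALS = 100, 40  # hypothetical voxel count and trials per class

# Fixed "true" activation pattern for each body site (synthetic, not real data)
patterns = {lab: [random.gauss(0, 1) for _ in range(N_VOXELS)]
            for lab in ("hand", "foot")}

def simulate(label):
    """One noisy single-trial response for the given body site."""
    return [m + random.gauss(0, 0.5) for m in patterns[label]]

def centroid(trials):
    """Mean activation pattern across a set of trials."""
    n = len(trials)
    return [sum(t[v] for t in trials) / n for v in range(N_VOXELS)]

def dist2(a, b):
    """Squared Euclidean distance between two voxel patterns."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Separate training and test trials, as in a held-out decoding analysis
train = {lab: [simulate(lab) for _ in range(N_TRIALS)] for lab in patterns}
test = [(lab, simulate(lab)) for lab in patterns for _ in range(N_TRIALS)]
cents = {lab: centroid(tr) for lab, tr in train.items()}

# Classify each test trial by its nearest class centroid
correct = sum(min(cents, key=lambda c: dist2(cents[c], x)) == lab
              for lab, x in test)
acc = correct / len(test)
print(f"two-way decoding accuracy: {acc:.0%} (chance 50%)")
```

In an actual MVPA analysis the synthetic `simulate` trials would be replaced by per-trial BOLD response estimates restricted to a region of interest (e.g., S1hand voxels), and the classifier would typically be a linear SVM evaluated with cross-validation rather than a single train/test split.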