Natural language comprehension in Japanese-English bilinguals

Abstract

Introduction

Comprehending natural language requires the integration of information from speech, body movements, manual gestures and facial expressions. Recent neuroimaging studies have investigated how people process their native language in real-world contexts (e.g., Skipper et al., 2007; Willems et al., 2007). Here we examine second-language comprehension in a naturalistic context by studying brain activity in native Japanese speakers while they watch an American game show (Skipper & Zevin, in prep.).

Methods

Participants were 13 Japanese-English bilinguals with moderate to high English proficiency and 13 age-matched monolingual English-speaking controls, who passively viewed and listened to an American game show while brain activity was measured with fMRI. Task-state brain networks for the two groups were identified with Tensor Independent Component Analysis (TICA), as implemented in MELODIC. The video stimuli were coded for several types of events, and a turnpoints analysis (Skipper & Zevin, in prep.) was then used to select the independent components most strongly associated with each event type. Finally, a dual-regression procedure (Beckmann et al., 2009) was used to identify differences between the two groups with respect to these components.
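
As an illustration only, the minimal sketch below shows how a group TICA decomposition and dual regression of this general kind can be run with FSL's command-line tools (MELODIC and the dual_regression script); the file names, TR and permutation count are placeholders, not the values used in this study, and the turnpoints-based selection of components is a separate in-house step that is not sketched here.

# Minimal sketch of a group-ICA + dual-regression pipeline using FSL's
# command-line tools; file names, TR and permutation count are placeholders.
import subprocess

# One preprocessed 4D fMRI file per participant (bilinguals and monolinguals).
subject_files = ["sub-01_filtered_func.nii.gz", "sub-02_filtered_func.nii.gz"]

# MELODIC expects a text file listing one input image per line.
with open("input_files.txt", "w") as f:
    f.write("\n".join(subject_files) + "\n")

# Group decomposition with Tensor ICA ("-a tica" in MELODIC).
subprocess.run([
    "melodic",
    "-i", "input_files.txt",
    "-o", "groupICA.tica",
    "-a", "tica",
    "--tr=2.0",   # placeholder repetition time
    "--nobet",    # data assumed to be brain-extracted already
    "--report",
], check=True)

# Voxel-wise group comparison of the component maps with dual regression
# (Beckmann et al., 2009); design.mat/design.con encode the two-group contrast.
subprocess.run([
    "dual_regression",
    "groupICA.tica/melodic_IC.nii.gz",  # group-level spatial maps
    "1",                                # variance-normalise stage-1 time series
    "design.mat", "design.con",
    "5000",                             # permutations for randomise
    "dual_regression_out",
] + subject_files, check=True)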

Results

The turnpoints analysis identified two components (IC3 and IC6) as tuned to speech; both indicated a similar bilateral network including superior temporal gyrus, middle temporal gyrus, supramarginal gyrus, inferior frontal gyrus, precentral gyrus, supplementary motor area, medial frontal gyrus and the cerebellum. Group comparison by dual regression showed stronger effects for the monolingual group in left superior temporal gyrus, left inferior frontal gyrus (pars opercularis), right middle and inferior temporal gyrus, and right supplementary motor area. In contrast, the bilingual group showed stronger activation in anterior cingulate cortex and medial frontal gyrus.
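
For illustration, corrected group-difference maps produced by FSL's dual_regression can be inspected as in the sketch below; the output file name follows FSL's default naming convention, but the component index (ic0003) and the contrast number are hypothetical.

# Sketch: threshold and display a TFCE-corrected group-difference map from the
# dual-regression output; file name and component index are hypothetical.
from nilearn import image, plotting

# 1-p map (TFCE-corrected) for one contrast of one speech-tuned component.
corrp = "dual_regression_out/dr_stage3_ic0003_tfce_corrp_tstat1.nii.gz"

# Keep voxels with corrected p < .05, i.e. 1 - p > 0.95.
sig = image.math_img("img * (img > 0.95)", img=corrp)

# Glass-brain rendering of the surviving clusters.
plotting.plot_glass_brain(sig, title="Monolingual > bilingual (IC3)", colorbar=True)
plotting.show()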

Conclusions

The current results indicate that non-native listeners are less efficient than native listeners at engaging the “dorsal stream” associated with auditory-motor mappings for speech perception during natural language processing (Hickok & Poeppel, 2007). In contrast, Japanese-English bilinguals may depend more on higher-order comprehension of the stimulus, based on non-linguistic information. Future analyses, in particular examination of temporal precedence relationships among regions, will focus on determining whether any differences arise within the typical “language network” at all.

References

Beckmann, C., Mackay, C., Filippini, N., Smith, S. (2009), Group comparison of resting-state fMRI data using multi-subject ICA and dual regression, 15th Annual Meeting of the Organization for Human Brain Mapping, poster 441 SU-AM.

Hickok, G., Poeppel, D. (2007), The cortical organization of speech processing, Nature Reviews Neuroscience, vol. 8, pp. 393-402.

Skipper, J.I., van Wassenhove, V., Nusbaum, H.C., Small, S.L. (2007), Hearing lips and seeing voices: how cortical areas supporting speech production mediate audiovisual speech perception, Cerebral Cortex, vol. 17, no. 10, pp. 2387-2399.

Willems, R.M., Özyürek, A., Hagoort, P. (2007), When language meets action: the neural integration of gesture and speech, Cerebral Cortex, vol. 17, no. 10, pp. 2322-2333.