UCL Psychology and Language Sciences

Multi-modal communication

Professor Bencie Woll, Chair in Sign Language and Deaf Studies, DCAL, UCL

INTRODUCTION

Research on the “Deaf Brain” is beginning to provide a new evidence base for policy and practice in relation to intervention with deaf children. This talk outlines the multi-channel nature of language and reviews recent neuroscience research on sign language processing and on spoken language processing with and without sound. Issues of brain plasticity (the ways in which the brain adapts to different sensory and cognitive experiences) are discussed, with special emphasis on first language acquisition and bilingualism. The evidence base provided by such studies should inform practice with children who have cochlear implants (CIs).

MULTI-CHANNEL COMMUNICATION

Human communication is essentially multi-channel. Although it is possible to communicate through the auditory components of speech alone, people typically talk face-to-face, providing concurrent auditory and visual input. The visual elements include facial expression, body movement and gesture. Sign language is, of course, also multi-channel; although there is no auditory component, facial expression, body movement and gesture accompany the linguistic channel.

AUDIO-VISUAL LANGUAGE PROCESSING

There is an intimate, two-way relationship between vision and hearing in processing language. The McGurk effect, identified more than 30 years ago, shows that when people hear one syllable (e.g. “ba”) while watching a speaker mouth a different one (e.g. “ga”), the visual information alters perception, with many listeners reporting a third syllable (e.g. “da”). More recent research has shown that observing a silent recording of a specific person talking for as little as two minutes improves subsequent auditory-only speech recognition for that person. This improvement is based on activation in an area of the brain typically involved in face-movement processing. Such findings challenge unisensory models of speech processing, because they show that, even in auditory-only speech, the brain exploits previously encoded audiovisual correlations to optimize communication.

AUDITORY PROCESSING BY DEAF PEOPLE

A number of studies have addressed the question of whether there are reliable indicators (neural, cognitive, behavioural) of individual differences in the ability to benefit from auditory prostheses. As discussed above, when the auditory cortex is not activated by acoustic stimulation, it can nevertheless be activated by silent speech in the form of speechreading. It has been suggested that visual-to-auditory cross-modal plasticity is an important factor limiting hearing ability in non-proficient CI users. However, as we have seen, cross-modal activation is found in hearing as well as deaf people. The suggestion is sometimes advanced that a deaf child should not watch speech or use sign language, since this may adversely affect the sensitivity of auditory cortical regions to acoustic activation following cochlear implantation. Such advice may not be warranted if speechreading activates auditory regions irrespective of hearing status, and if such activation is relatively specific to that kind of stimulation.

Adults with CIs are known to show a greater visuo-auditory gain in noisy conditions than normally hearing subjects. This suggests that CI users develop specific skills in visuo-auditory interaction, optimising the integration of visual temporal cues with sound in the absence of fine spectro-temporal information.

SPEECHREADING

Speechreading gives access to spoken language structure by eye and, at the segmental level, can complement auditory processing, as discussed above. It does not need to be taught or learned explicitly, since infants are highly sensitive to seen speech. It should therefore be considered whether speechreading has the potential to impact positively on the development of auditory speech processing following cochlear implantation. Speechreading is strongly implicated in general speech processing and in literacy development in both hearing and deaf children. Speechreading capabilities interact with prosthetically enhanced acoustic speech processing skills to predict speech processing outcomes for cochlear implantees (Rouger et al., 2007), and speechreading continues to play an important role in segmental speech processing post-implant (Rouger et al., 2008).

CONCLUSIONS

The age of acquisition of a first language impacts on brain development. The importance of early exposure to an accessible language for those born profoundly deaf cannot be overstated, as it is necessary to establish the neural networks for language, social interaction and cognitive development while maximum plasticity is available. The studies reported above indicate that superior temporal regions of the deaf brain, once tuned to visible speech, can more readily adapt to perceiving speech multimodally. These findings should inform preparation and intervention strategies for cochlear implantation in deaf children.

Link to slides