
UCL Psychology and Language Sciences


About ECOLANG

The ECOLANG Multimodal Corpus enables a new approach to the study of language in its core ecological niche: face-to-face, multimodal environments.

Context

To account for the human capacity for language, we must study language in settings that represent the conditions in which it evolved, is learned, and is commonly used. In these settings, communication comprises a wealth of multimodal signals – such as gestures, eye gaze, and intonation – alongside speech. However, most research on language processing, language learning, and the neurobiology of language has focused on speech or text only. The ECOLANG Multimodal Corpus of adult-adult and adult-child conversation enables a new approach to the study of language in its core ecological niche: face-to-face, multimodal environments.

Gabriella Vigliocco presents Ecological Language: A Multimodal Approach to Language Learning and Processing (Abralin ao Vivo, 2021) with Susan Goldin-Meadow

Video: https://www.youtube.com/watch?v=l9NoF5R3GBA&ab_channel=Abralin

Corpus

The corpus provides audiovisual recordings and annotations of multimodal behaviours (speech transcription, gesture, object manipulation, and eye gaze) by British and American English-speaking adults engaged in semi-naturalistic conversation with their child (N = 38; children 3–4 years old) or a familiar adult (N = 32). Speakers were asked to talk about familiar or unfamiliar objects with their interlocutors, both when the objects were physically present and when they were absent. Thus, the corpus characterises the use of multimodal signals in social interaction and how they are modulated by the age of the interlocutor (child or adult), whether the interlocutor is learning new concepts/words (unfamiliar or familiar objects), and whether the interlocutor can see and manipulate the objects (present or absent).

Schematic of the ECOLANG corpus design
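The design above crosses three factors: interlocutor (child or adult), object familiarity (familiar or unfamiliar), and object presence (present or absent). As a minimal sketch of how a researcher might organise and query sessions along these factors – the field names and values here are hypothetical, not the actual ECOLANG annotation schema:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical session record; the real ECOLANG metadata schema may
# use different field names and file formats.
@dataclass
class Session:
    speaker_id: str
    interlocutor: str      # "child" or "adult"
    object_familiar: bool  # familiar vs. unfamiliar objects
    object_present: bool   # objects physically present vs. absent

def filter_sessions(sessions: List[Session], *, interlocutor: str,
                    object_familiar: bool, object_present: bool) -> List[Session]:
    """Select the sessions belonging to one cell of the 2 x 2 x 2 design."""
    return [s for s in sessions
            if s.interlocutor == interlocutor
            and s.object_familiar == object_familiar
            and s.object_present == object_present]

sessions = [
    Session("S01", "child", True, True),
    Session("S02", "adult", False, False),
    Session("S03", "child", True, False),
]
cell = filter_sessions(sessions, interlocutor="child",
                       object_familiar=True, object_present=True)
print([s.speaker_id for s in cell])  # -> ['S01']
```

Grouping sessions by design cell in this way is one straightforward route to comparing, for example, gesture rates when objects are present versus absent.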

Application

The corpus provides ecologically valid data about the distribution and co-occurrence of multimodal signals, enabling cognitive scientists and neuroscientists to address questions about real-world language learning and processing, and computer scientists to develop more human-like artificial agents.