
PhD Colloquium of Akshara Soman, EE, on 14/7 at 11 AM: Investigating Neural Encoding of Word Learning and Speech Perception

July 14 @ 11:00 AM - 1:00 PM IST

Dear All,
You are cordially invited to the PhD Thesis Colloquium with the following details.
Speaker: Ms. Akshara Soman
Title: Investigating Neural Encoding of Word Learning and Speech Perception
Date & Time: 14-07-2023, 11:00 AM
Venue: MMCR (C241, 1st floor), EE Department, IISc
Research Supervisor: Prof. Sriram Ganapathy, EE
Language learning and speech perception are remarkable feats performed by the human brain, involving complex neural mechanisms that allow us to understand and communicate with one another. Unravelling the mysteries of these mechanisms has far-reaching implications, from theories of human cognition to developing effective language learning strategies and advancing speech technology. By employing a multidisciplinary approach encompassing neural investigations using EEG signals, behavioral analyses, and machine learning perspectives, this talk sheds light on the underlying processes involved in word learning and speech perception.
The talk is divided into three parts. The first part examines how imitation-based learning of foreign sounds is captured in EEG signals. In this listen-and-reproduce setting, subjects were introduced to words from a foreign language (Japanese) and from English, and were also asked to articulate the words. The results show that time-frequency features and phase information in the EEG signal carry cues for language discrimination. Further analysis showed that speech production improved over time and that frontal brain regions were involved in language learning. These findings suggest the potential of EEG for personalized language exercises and for assessing learners' abilities.
The next part of the talk investigates how learning patterns change when semantics are introduced and words are presented in a sentence context. Participants listened to Japanese words embedded in English sentences, first before learning the meanings of these words and again after semantic exposure, and we quantify the resulting learning patterns in the EEG signal. Notably, a delayed P600 component emerges for the Japanese words, suggesting short-term memory processing, unlike the N400 typically seen for incongruent words in a known language. The brain regions associated with semantic learning are also identified in this study using the EEG data.
In the final part of the talk, we analyze the neural mechanisms of human speech comprehension using a match-mismatch classification between the continuous speech stimulus and the neural response (EEG). We make three major contributions on this front: i) illustrate, for the first time, the role of word boundaries in continuous speech comprehension; ii) elicit the encoding of both the speech data (acoustics) and the text data (semantics) in the EEG signal; and iii) show an increased signature of semantic content (text) in the EEG data in the acoustically challenging setting of dichotic listening. The findings have potential applications in understanding speech recognition in noise, brain-computer interfaces, and attention studies.
In summary, the talk aims to enhance our understanding of language learning, speech comprehension, and the underlying neural mechanisms.

