Defense Talk on Investigating Neural Mechanisms of Word Learning and Speech Perception

April 23 @ 10:30 AM - 11:30 AM IST

Title

Investigating Neural Mechanisms of Word Learning and Speech Perception: Insights from Behavioral, Neural, and Machine Learning Perspectives

Speaker: Ms. Akshara Soman

Time: 10:30 AM - 11:30 AM IST

Venue: MMCR (Room C241), EE Department, and online via the Teams meeting link

Abstract

The process of language learning and speech perception is a remarkable feat of the human brain, involving complex neural mechanisms that allow us to understand and communicate with one another. By employing a multidisciplinary approach, this talk sheds light on the underlying processes involved in word learning and speech perception.

The talk begins by examining, using EEG signals, how the imitation of sounds influences language learning and language discrimination. Results show that time-frequency features and phase information in the EEG signal carry cues for language discrimination. Further experiments confirm these findings and analyse improvements in pronunciation over time, identifying the frontal brain regions involved.
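To make the time-frequency analysis concrete, the sketch below shows one conventional way such features can be extracted and used for language discrimination. It is an illustrative sketch, not the speaker's pipeline: the epoch shape, sampling rate, STFT settings, and the linear SVM classifier are all assumptions, and the data are synthetic.

```python
# Illustrative sketch, not the speaker's pipeline: time-frequency (log-power
# and phase) features from EEG epochs, fed to a linear SVM for a two-class
# language-discrimination task. Epoch shape, sampling rate, and STFT
# settings are assumptions; the data below are synthetic.
import numpy as np
from scipy.signal import stft
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def time_frequency_features(epochs, fs=250):
    """Flatten STFT log-power and phase across channels for each trial."""
    feats = []
    for trial in epochs:                       # trial: (n_channels, n_samples)
        _, _, Z = stft(trial, fs=fs, nperseg=64)
        log_power = np.log(np.abs(Z) ** 2 + 1e-12)
        phase = np.angle(Z)
        feats.append(np.concatenate([log_power.ravel(), phase.ravel()]))
    return np.array(feats)

rng = np.random.default_rng(0)
epochs = rng.standard_normal((40, 8, 500))     # 40 trials, 8 channels, 2 s
labels = rng.integers(0, 2, size=40)           # 0/1 = two languages

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, time_frequency_features(epochs), labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```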

The talk then discusses how learning patterns change when semantics are introduced. Participants learn Japanese words while their EEG is recorded, and event-related potential (ERP) analysis reveals distinct patterns for the newly learned words. Notably, a delayed P600 component emerges, suggesting short-term memory processing. Building on these neuro-behavioural experiments, a machine model is proposed to compare human and machine performance in audio-visual association learning. The model performs comparably to humans when learning from a few examples, with slightly weaker generalisation ability.
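As a rough illustration of few-shot audio-visual association learning, the sketch below maps audio embeddings onto visual prototypes with a closed-form least-squares fit and classifies by nearest prototype. The embedding dimensions, the linear map, and the synthetic data are all assumptions; the abstract does not specify the thesis model's architecture.

```python
# Hedged sketch of few-shot audio-visual association learning: fit a linear
# audio-to-visual map from a handful of examples per word, then classify
# held-out audio by the nearest visual prototype. All dimensions and data
# are illustrative, not taken from the thesis.
import numpy as np

rng = np.random.default_rng(1)
n_words, shots, d_audio, d_visual = 5, 3, 32, 16

audio_means = rng.standard_normal((n_words, d_audio))     # per-word audio centroid
visual_protos = rng.standard_normal((n_words, d_visual))  # per-word visual prototype

# Few-shot training set: `shots` noisy audio examples per word.
X = np.vstack([m + 0.1 * rng.standard_normal((shots, d_audio)) for m in audio_means])
Y = visual_protos.repeat(shots, axis=0)

# Linear audio-to-visual map, minimising ||X @ W - Y||^2 in closed form.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Evaluate on fresh noisy audio examples: nearest visual prototype wins.
test = audio_means + 0.1 * rng.standard_normal((n_words, d_audio))
pred = np.argmin(
    np.linalg.norm((test @ W)[:, None, :] - visual_protos[None], axis=-1), axis=1
)
print("few-shot accuracy:", (pred == np.arange(n_words)).mean())
```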

Moving to naturalistic stimuli, the talk analyses continuous speech perception using a deep learning model. This model achieves 93% accuracy in stimulus-response modelling on a speech-EEG dataset, surpassing previous efforts, and demonstrates the role of word-level segmentation during speech comprehension in the human brain.

The study is then extended to speech perception in complex listening environments where multiple speech streams are heard simultaneously. The proposed model, based on Long Short-Term Memory (LSTM) networks, reveals that under such challenging conditions the human brain prioritises the semantics of speech over its acoustics. The model has potential applications in speech recognition, brain-computer interfaces, and attention studies.
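The abstract does not describe the stimulus-response model's exact architecture, but LSTM-based match-mismatch classification is a common formulation of this task: given an EEG segment and a candidate speech segment, decide whether they were recorded together. The PyTorch sketch below illustrates that formulation; the channel counts, hidden size, and speech-envelope feature are assumptions.

```python
# Hedged sketch of a match-mismatch stimulus-response classifier: two LSTMs
# encode an EEG segment and a speech-envelope segment, and a linear head
# predicts whether the pair is matched. Shapes and features are assumptions;
# the thesis model's exact architecture is not given in the abstract.
import torch
import torch.nn as nn

class MatchMismatchLSTM(nn.Module):
    """Decide whether an EEG segment was evoked by a given speech segment."""
    def __init__(self, eeg_ch=64, speech_dim=1, hidden=32):
        super().__init__()
        self.eeg_rnn = nn.LSTM(eeg_ch, hidden, batch_first=True)
        self.sp_rnn = nn.LSTM(speech_dim, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, 1)   # joint embedding -> match logit

    def forward(self, eeg, speech):
        # eeg: (batch, time, eeg_ch); speech: (batch, time, speech_dim)
        _, (h_eeg, _) = self.eeg_rnn(eeg)
        _, (h_sp, _) = self.sp_rnn(speech)
        z = torch.cat([h_eeg[-1], h_sp[-1]], dim=-1)
        return self.head(z).squeeze(-1)        # logit > 0 means "match"

model = MatchMismatchLSTM()
eeg = torch.randn(8, 320, 64)        # 8 segments, 320 time steps, 64 channels
speech = torch.randn(8, 320, 1)      # candidate speech envelopes
logits = model(eeg, speech)
loss = nn.functional.binary_cross_entropy_with_logits(
    logits, torch.ones(8))           # label 1 = matched pair
print(loss.item())
```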

Overall, the thesis enhances our understanding of language learning, speech comprehension, and the neural mechanisms involved. It provides insights into familiar and unfamiliar language processing, semantic effects, audiovisual learning, and word boundaries in sentence comprehension. These findings have implications for both human language learning and the development of machine systems aimed at understanding and processing speech.

Coffee/tea will be served before the talk.

Details

Date: April 23
Time: 10:30 AM - 11:30 AM IST
Venue: Multi-Media Class Room (MMCR), EE Department (hybrid mode)