BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//EE - ECPv5.10.0//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:EE
X-ORIGINAL-URL:https://ee.iisc.ac.in
X-WR-CALDESC:Events for EE
BEGIN:VTIMEZONE
TZID:Asia/Kolkata
BEGIN:STANDARD
TZOFFSETFROM:+0530
TZOFFSETTO:+0530
TZNAME:IST
DTSTART:20240101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Asia/Kolkata:20240423T103000
DTEND;TZID=Asia/Kolkata:20240423T113000
DTSTAMP:20260421T034246Z
CREATED:20240422T042104Z
LAST-MODIFIED:20240422T045632Z
UID:241458-1713868200-1713871800@ee.iisc.ac.in
SUMMARY:Defense Talk on Investigating Neural Mechanisms of Word Learning and Speech Perception
DESCRIPTION:Title \nInvestigating Neural Mechanisms of Word Learning and Speech Perception: Insights from Behavioral\, Neural\, and Machine Learning Perspectives \nSpeaker: Ms. Akshara Soman \nTime: 10:30 AM - 11:30 AM \nVenue: MMCR\, EE\, C241 and on the Teams-Meeting-Link\n\n\nAbstract \nThe process of language learning and speech perception is a remarkable feat of the human brain\, involving complex neural mechanisms that allow us to understand and communicate with one another. By employing a multidisciplinary approach\, this talk sheds light on the underlying processes involved in word learning and speech perception. \nThe talk begins by examining how imitation of sounds influences language learning and language discrimination using EEG signals. Results show that time-frequency features and phase in the EEG signal carry information for language discrimination. Further experiments confirm these findings and analyse improvements in pronunciation over time\, identifying the frontal brain regions involved. \nThe talk then discusses how learning patterns change when semantics are introduced. Participants learn Japanese words and undergo ERP analysis\, revealing distinct EEG patterns for newly learned words. Notably\, a delayed P600 component emerges\, suggesting short-term memory processing. Based on these neuro-behavioural experiments\, a machine model is proposed to compare human and machine performance in audio-visual association learning. The model performs comparably to humans when learning from few examples\, with slightly inferior generalisation abilities. \nMoving to naturalistic stimuli\, the talk analyses continuous speech perception using a deep learning model. This model achieves 93% accuracy in stimulus-response modelling on a speech-EEG dataset\, surpassing previous efforts. It demonstrates the role of word-level segmentation during speech comprehension in the human brain. \nWe further extend this study to investigate speech perception in complex listening environments where multiple speech streams are heard simultaneously. Our proposed model\, based on Long Short-Term Memory (LSTM)\, reveals that the human brain prioritises understanding the semantics rather than the acoustics under such challenging listening conditions. The proposed model has potential applications in speech recognition\, brain-computer interfaces\, and attention studies. \nOverall\, the thesis enhances our understanding of language learning\, speech comprehension\, and the neural mechanisms involved. It provides insights into familiar and unfamiliar language processing\, semantic effects\, audiovisual learning\, and word boundaries in sentence comprehension. These findings have implications for both human language learning and the development of machine systems aimed at understanding and processing speech. \nCoffee/Tea will be served before the talk.
URL:https://ee.iisc.ac.in/event/defense-talk-on-investigating-neural-mechanisms-of-word-learning-and-speech-perception/
LOCATION:Multi-Media Class Room (MMCR)\, EE Department (Hybrid mode)
END:VEVENT
END:VCALENDAR