Colloquium on Low-Complexity Classification of Patients with Amyotrophic Lateral Sclerosis from Healthy Controls: Exploring the Role of Hypernasality

December 19, 2024 @ 12:00 PM - 1:00 PM IST

NAME OF THE STUDENT : Anjali Jayakumar

DEGREE REGISTERED   : M. Tech. (Research)

DATE AND DAY        : 19th December 2024, Thursday

TIME                : 12:00 PM

VENUE               : EE, MMCR

TEAMS MEETING LINK  : https://tinyurl.com/2zckabj2

TITLE:
Low-Complexity Classification of Patients with Amyotrophic Lateral Sclerosis from Healthy Controls: Exploring the Role of Hypernasality

Abstract:
Amyotrophic Lateral Sclerosis (ALS) is a progressive neurodegenerative disorder characterized by motor neuron degeneration, leading to muscle weakness, atrophy, and speech impairments. Dysarthria, a motor speech disorder, is an early symptom in approximately 30% of ALS patients, and hypernasality (excessive nasal resonance caused by velopharyngeal dysfunction) is observed in around 73.88% of individuals with bulbar-onset ALS. These speech impairments significantly hinder communication and reduce patients' quality of life. Current ALS monitoring methods, including clinical assessments, genetic testing, electromyography (EMG), and magnetic resonance imaging (MRI), can be time-consuming and invasive, whereas speech-based approaches provide a non-invasive and efficient alternative for continuous monitoring. However, the lack of large ALS-specific speech datasets hinders the development of reliable models. This study develops a simplified, low-complexity model to distinguish ALS speech from healthy control (HC) speech, exploring the role of hypernasality in effective classification. By leveraging hypernasality as an indicator of ALS, the study builds machine learning models trained only on healthy speech data, avoiding the need for large amounts of ALS speech data. Ultimately, the goal is a low-complexity method for classifying ALS patients versus HC subjects from their speech.
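To make the core idea concrete, here is a minimal sketch of the kind of decision rule such an approach could use, assuming a nasal vs. non-nasal phoneme classifier trained only on healthy speech already produces frame-level posteriors. The function name, pooling choice, and threshold are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def als_vs_hc_decision(nasal_posteriors: np.ndarray, threshold: float = 0.5) -> str:
    """Classify an utterance as ALS or HC from frame-level nasal posteriors.

    nasal_posteriors: per-frame P(nasal) from a nasal vs. non-nasal phoneme
    classifier trained only on healthy speech. The threshold is hypothetical;
    in practice it would be tuned on held-out data.
    """
    # Pool frame-level evidence into one utterance-level hypernasality score.
    utterance_score = float(np.mean(nasal_posteriors))
    # Higher hypernasality is taken as evidence of ALS (nasal class),
    # lower as healthy control (non-nasal class).
    return "ALS" if utterance_score > threshold else "HC"

# Illustrative usage with synthetic posteriors.
rng = np.random.default_rng(0)
print(als_vs_hc_decision(rng.uniform(0.4, 0.9, size=300)))  # likely "ALS"
print(als_vs_hc_decision(rng.uniform(0.1, 0.5, size=300)))  # likely "HC"
```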
The study begins by simplifying deep learning models, transitioning from complex Convolutional Neural Network (CNN) and Bidirectional Long Short-Term Memory (BiLSTM) architectures to simpler Deep Neural Networks (DNNs) of varying complexity. These models are trained on Mel-Frequency Cepstral Coefficients (MFCCs) along with their deltas and double-deltas. Additionally, various temporal statistics of the MFCCs and their derivatives are explored to reduce feature dimensionality, which in turn decreases model complexity in terms of the number of model parameters and Floating-Point Operations (FLOPs) and hence computational cost. The study then investigates the presence of hypernasality in ALS speech of varying dysarthria severity, as well as in HC speech, using HuBERT representations and a DNN trained on healthy speech for nasal vs. non-nasal phoneme classification. Finally, hypernasality is integrated into the ALS vs. HC classification: a nasal vs. non-nasal phoneme classifier is trained using only healthy speech data and then applied to ALS vs. HC speech, with ALS treated as the nasal class and HC as the non-nasal class. This demonstrates its effectiveness in distinguishing ALS speech from HC speech, while also validating the potential of simplified DNN models for the task.
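The sketch below illustrates the feature pipeline and a low-complexity DNN of the kind described above, assuming librosa for MFCC extraction and PyTorch for the model. The layer sizes, and the choice of mean and standard deviation as the temporal statistics, are assumptions for illustration rather than the thesis's exact configuration.

```python
import numpy as np
import librosa
import torch.nn as nn

def utterance_features(path: str, n_mfcc: int = 13) -> np.ndarray:
    """Frame-level MFCCs with deltas/double-deltas, pooled to a fixed-length
    utterance vector via temporal statistics (mean and std here; the thesis
    explores several statistics, and these two are illustrative choices)."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)      # (n_mfcc, T)
    feats = np.vstack([mfcc,
                       librosa.feature.delta(mfcc),
                       librosa.feature.delta(mfcc, order=2)])   # (3*n_mfcc, T)
    # Pooling over time collapses T frames to 2 statistics per coefficient,
    # shrinking the input the downstream DNN must handle.
    return np.concatenate([feats.mean(axis=1), feats.std(axis=1)])  # (6*n_mfcc,)

class SmallDNN(nn.Module):
    """Low-complexity fully connected classifier; layer sizes are illustrative.
    in_dim = 78 corresponds to 13 MFCCs x 3 streams x 2 statistics."""
    def __init__(self, in_dim: int = 78, hidden: int = 32, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)
```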
The results show that reduced-complexity DNN models can outperform CNN-BiLSTM models, achieving up to 5.67% and 6.59% higher classification accuracies for the Spontaneous Speech (SPON) and Diadochokinetic Rate (DIDK) tasks, respectively, while reducing the number of model parameters by 99.99% and FLOPs by 99.60%. Dimensionality reduction via temporal statistics lowers complexity further, cutting model parameters by an additional 94.59% and FLOPs by 94.61%, at a minimal accuracy loss of 1.76% for SPON and 5.17% for DIDK. Analysis of hypernasality across ALS severity levels reveals that individuals with severe dysarthria exhibit the most nasalized speech, followed by those with mild dysarthria, with ALS patients with perceptually normal speech and healthy controls showing the lowest levels; this finding is validated against manually annotated nasality scores. Hypernasality proves to be an effective cue for distinguishing ALS from HC, achieving up to 66.48% and 81.46% accuracy for the SPON and DIDK tasks, respectively, with low-complexity models.
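Parameter and FLOP reductions of this kind follow from simple arithmetic over layer sizes. The sketch below uses the common convention of counting one multiply plus one add per weight; the layer sizes compared are invented for illustration and are not the thesis's actual models.

```python
def mlp_params_and_flops(layer_sizes):
    """Parameter count and per-inference FLOPs for a fully connected net.

    Counts weights + biases; FLOPs as one multiply + one add per weight
    (a common convention; the thesis may count differently).
    """
    params = sum(i * o + o for i, o in zip(layer_sizes, layer_sizes[1:]))
    flops = sum(2 * i * o for i, o in zip(layer_sizes, layer_sizes[1:]))
    return params, flops

# Illustrative comparison: a wide DNN on stacked frame-level features vs. a
# small DNN on statistics-pooled features (sizes are hypothetical).
big = mlp_params_and_flops([39 * 25, 512, 512, 2])  # frame-context input
small = mlp_params_and_flops([78, 32, 2])           # pooled-statistics input
print(big, small)
print("param reduction: %.2f%%" % (100 * (1 - small[0] / big[0])))
```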
