Utilizing CNN Architectures for Non-Invasive Diagnosis of Speech Disorders – Further Experiments and Insights

Authors

  • Filip Ratajczak, Wrocław University of Science and Technology
  • Mikołaj Najda, Institute of Data Science, Maastricht University
  • Kamil Szyc, Faculty of Information and Communication Technology, Wrocław University of Science and Technology

Abstract

This research investigated the application of deep neural networks to diagnosing diseases that affect the voice and speech mechanisms through the non-invasive analysis of vowel sound recordings. Using the Saarbruecken Voice Database, voice recordings of the vowels /a/, /u/, and /i/ were converted to spectrograms to train the models. The study applied Explainable Artificial Intelligence (XAI) methodologies to identify the features within these spectrograms that are essential for pathology identification, with the aim of giving medical professionals deeper insight into how diseases manifest in sound production. In the F1-score evaluation, the DenseNet model achieved 0.70 ± 0.03, with a best score of 0.74. The findings indicated that neither vowel selection nor data augmentation strategies significantly improved model performance. Additionally, the research showed that signal splitting was ineffective in enhancing the models' ability to extract features. This study builds on our previous research, offering a more comprehensive understanding of the topic. \footnote{All results are fully reproducible; the source code is available at https://github.com/Tesla2000/DepCoS2024/}
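The preprocessing step described above (vowel recording → spectrogram image for CNN input) can be sketched as follows. This is a minimal illustration using a NumPy short-time Fourier transform; the window length, hop size, sampling rate, and the synthetic test tone are all illustrative assumptions, not the paper's actual parameters or data.

```python
import numpy as np

def spectrogram(signal, win_len=512, hop=256):
    """Magnitude spectrogram via a Hann-windowed STFT.

    win_len and hop are illustrative defaults, not the study's settings.
    Returns an array of shape (n_frames, win_len // 2 + 1).
    """
    window = np.hanning(win_len)
    n_frames = 1 + (len(signal) - win_len) // hop
    frames = np.stack([
        signal[i * hop : i * hop + win_len] * window
        for i in range(n_frames)
    ])
    # One-sided FFT magnitude per frame
    return np.abs(np.fft.rfft(frames, axis=1))

# Example: a synthetic one-second tone at 16 kHz standing in for a vowel recording
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)  # hypothetical 220 Hz fundamental
spec = spectrogram(tone)
print(spec.shape)  # (time frames, frequency bins)
```

In a pipeline like the one described, such a 2-D array would typically be converted to a (log-scaled) image and fed to the CNN as input.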

Published

2025-07-09

Section

Biomedical Engineering