Khadijeh Aghajani

Academic rank: Assistant Professor
ORCID:
Education: PhD.
ScopusId:
H-Index: 0.00
Faculty: Faculty of Technology and Engineering
Address:
Phone: 0113533000

Research

Title
Deep Learning Approach for Robust Voice Activity Detection: Integrating CNN and Self-Attention with Multi-Resolution MFCC
Type
Journal Paper
Keywords
Voice Activity Detection, Self-Attention Mechanism, Multi-Resolution Mel-Frequency Cepstral Coefficients, Deep Learning.
Year
2024
Journal
Journal of Artificial Intelligence and Data Mining (JAIDM)
DOI
Researchers
Khadijeh Aghajani

Abstract

Voice Activity Detection (VAD) plays a vital role in various audio processing applications, such as speech recognition, speech enhancement, telecommunications, satellite telephony, and noise reduction. The performance of these systems can be enhanced by an accurate VAD method. In this paper, multi-resolution Mel-Frequency Cepstral Coefficients (MRMFCCs), together with their first- and second-order derivatives (delta and delta-delta), are extracted from the speech signal and fed into a deep model. The proposed model begins with convolutional layers, which are effective at capturing local features and patterns in the data. The captured features are fed into two consecutive multi-head self-attention layers. With the help of these two layers, the model can selectively focus on the most relevant features across the entire input sequence, thus reducing the influence of irrelevant noise. The combination of convolutional layers and self-attention enables the model to capture both local and global context within the speech signal. The model concludes with a dense layer for classification. To evaluate the proposed model, 15 different noise types from the NoiseX-92 corpus were used to validate the proposed method under noisy conditions. The experimental results show that the proposed framework achieves superior performance compared to traditional VAD techniques, even in noisy environments.
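The feature pipeline described in the abstract (cepstral coefficients augmented with delta and delta-delta derivatives, then re-weighted by self-attention over frames) can be sketched in NumPy. This is an illustrative sketch, not the authors' implementation: the multi-resolution windowing, the convolutional layers, and the learned multi-head projections are omitted, the delta filter uses the standard regression formula with an assumed window of N=2, and the attention uses a single head with identity projections.

```python
import numpy as np

def deltas(feats, N=2):
    """First-order delta features via the standard regression formula:
    d_t = sum_{n=1..N} n * (c_{t+n} - c_{t-n}) / (2 * sum_n n^2).
    feats: (T, D) matrix of frame-level features; edges are padded by repetition."""
    T, _ = feats.shape
    denom = 2 * sum(n * n for n in range(1, N + 1))
    padded = np.pad(feats, ((N, N), (0, 0)), mode="edge")
    d = np.zeros_like(feats)
    for n in range(1, N + 1):
        d += n * (padded[N + n : N + n + T] - padded[N - n : N - n + T])
    return d / denom

def self_attention(x):
    """Scaled dot-product self-attention over frames (single head,
    identity Q/K/V projections for illustration). x: (T, D)."""
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)                # (T, T) frame-to-frame similarity
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability for softmax
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)            # rows are attention weights
    return w @ x                                 # each frame re-weighted by context

rng = np.random.default_rng(0)
mfcc = rng.standard_normal((100, 13))            # stand-in for 13 MFCCs over 100 frames
delta = deltas(mfcc)
delta2 = deltas(delta)                           # second-order (delta-delta)
features = np.concatenate([mfcc, delta, delta2], axis=1)  # (100, 39) input to the model
context = self_attention(features)               # globally contextualized frames
print(features.shape, context.shape)
```

In the paper's model, the attention weights are learned (multi-head, with trained projections) and sit on top of convolutional feature maps rather than raw features; this sketch only shows why attention lets each frame draw on the entire sequence.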