Multimodal Emotion Recognition Using EEG and Facial Expressions with Potential Applications in Driver Monitoring

Authors: Ch. Raga Madhuri, Anideep Seelam, Fatima Farheen Shaik, Aadi Siva Kartheek Pamarthi, Mohan Kireeti Krovi

Journal: International Journal of Image, Graphics and Signal Processing (IJIGSP)

Issue: Vol. 18, No. 1, 2026.


Mental conditions such as fatigue, distraction, and cognitive overload are known to contribute significantly to traffic accidents. Accurate recognition of these cognitive and emotional states is therefore important for the development of intelligent monitoring systems. In this study, a multimodal emotion recognition framework using electroencephalography (EEG) signals and facial expression features is proposed, with potential applications in driver monitoring. The approach integrates Long Short-Term Memory (LSTM) networks and Transformer architectures for EEG-based temporal feature extraction, along with Vision Transformers (ViT) for facial feature representation. Feature-level fusion is employed to combine the physiological and visual modalities, yielding improved emotion classification performance compared to unimodal approaches. The model is evaluated using accuracy, precision, recall, and F1-score metrics, achieving an overall accuracy of 96.38% and demonstrating the effectiveness of multimodal learning. Although the experiments are conducted on general-purpose emotion datasets, the results indicate that the proposed framework can serve as a reliable foundation for driver monitoring applications, such as fatigue detection, distraction recognition, and cognitive state assessment, in intelligent transportation systems.
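The feature-level fusion described in the abstract can be illustrated with a minimal sketch: per-sample embeddings from an EEG encoder (e.g. LSTM/Transformer) and a facial encoder (e.g. ViT) are concatenated into one vector before classification. All dimensions, the random weights, and the four-class head below are illustrative assumptions, not the authors' actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sample embeddings (dimensions are assumptions):
# a 128-d EEG embedding from an LSTM/Transformer encoder and a
# 768-d facial embedding from a ViT, for a batch of 4 samples.
eeg_feat = rng.standard_normal((4, 128))
face_feat = rng.standard_normal((4, 768))

# Feature-level fusion: concatenate modality embeddings per sample.
fused = np.concatenate([eeg_feat, face_feat], axis=1)  # shape (4, 896)

# An untrained linear classification head over the fused vector,
# shown only to make the fused-vector-to-label mapping concrete.
n_classes = 4
W = rng.standard_normal((896, n_classes))
logits = fused @ W
pred = logits.argmax(axis=1)  # one emotion class index per sample
print(fused.shape, pred.shape)
```

In practice the concatenated vector would feed a trained classifier; concatenation is the simplest feature-level fusion scheme, as opposed to decision-level fusion, which combines per-modality predictions instead.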


Keywords: Advanced Driver Safety, Electroencephalography (EEG), Cognitive Monitoring, LSTM, CNN, Fatigue Detection, Transformers, Vision Transformers (ViT)

Short address: https://sciup.org/15020145

IDR: 15020145   |   DOI: 10.5815/ijigsp.2026.01.10