A deep learning approach to detect the electroencephalogram-based cognitive task states
Hitesh Yadav1 and Surita Maini2
2Associate Professor, Department of Electrical and Electronic Engineering, Sant Longowal Institute of Engineering and Technology (SLIET), Punjab, India
Corresponding Author: Hitesh Yadav
Received Date
04-Sep-2023
Revised Date
06-Aug-2024
Accepted Date
10-Sep-2024
Abstract
Cognitive abilities underpin the simple and complex activities that shape a person's mental performance, as well as many day-to-day actions in human life. In recent years, studies of cognitive ability, mental performance, and mindfulness meditation have become increasingly common. The electroencephalogram (EEG) is an effective technique for studying brain dynamics during cognitive tasks and opens new possibilities in the brain-computer interface (BCI) field. In this study, twenty-seven (27) healthy subjects performed a designed cognitive task comprising three states (rest, meditation, and arithmetic) to stimulate the brain's cognitive functions. EEG signals for the designed cognitive task were acquired with a BIOPAC MP-160 system using the international 10-20 electrode placement system. EEGLAB was used to visualize, pre-process, filter, and remove noise from the data. Phase-amplitude coupling was then computed to extract the features. After feature extraction, classification was performed with three deep learning approaches: a sequential convolutional network (SCN), a multi-branch convolutional network (MBCN), and a multi-branch convolutional network with a bidirectional long short-term memory network (MBCN-Bi-LSTM). The performance of the classification models was evaluated in terms of accuracy, precision, F1 score, and recall. The results demonstrate that MBCN-Bi-LSTM outperforms SCN and MBCN, achieving an accuracy of 97.99%. A comparative analysis with previously used deep learning and machine learning approaches for classifying EEG signals of different brain states indicates that the proposed MBCN-Bi-LSTM model performs better in terms of accuracy and error rate. In addition, the computational execution time of the proposed MBCN-Bi-LSTM is lower than that of previous methods. The proposed classification approach may be utilized in future research to classify various physiological signals.
Keywords
BCI, EEG, Deep learning, Classification, Cognitive task, Mental state classification.
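As a rough illustration of the classification stage summarized in the abstract, the sketch below shows one possible way to assemble an MBCN-Bi-LSTM in Keras: several parallel temporal convolution branches whose outputs are concatenated and passed to a bidirectional LSTM, ending in a three-class softmax (rest, meditation, arithmetic). The input shape, number of branches, kernel sizes, layer widths, and training settings are assumptions made for illustration only; the authors' exact architecture and the phase-amplitude-coupling feature layout are not specified here.

# Minimal sketch of a multi-branch CNN + bidirectional LSTM classifier
# for three-class EEG state recognition. All shapes and hyperparameters
# below are assumptions, not the authors' reported configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

N_TIMESTEPS = 256   # assumed samples per EEG epoch
N_FEATURES = 19     # assumed feature channels (e.g., 10-20 electrodes)
N_CLASSES = 3       # rest, meditation, arithmetic

inputs = layers.Input(shape=(N_TIMESTEPS, N_FEATURES))

# Multi-branch CNN: parallel temporal convolutions with different kernel sizes.
branches = []
for kernel_size in (3, 5, 7):
    x = layers.Conv1D(32, kernel_size, padding="same", activation="relu")(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.MaxPooling1D(pool_size=2)(x)
    branches.append(x)

# Concatenate branch outputs along the feature axis.
merged = layers.Concatenate()(branches)

# Bidirectional LSTM summarizes the merged feature sequence.
x = layers.Bidirectional(layers.LSTM(64))(merged)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(N_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

In practice the model would be trained on the extracted phase-amplitude-coupling features with integer labels 0-2 for the three task states, e.g. model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=50), and then evaluated with accuracy, precision, recall, and F1 score as in the abstract.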