International Journal of Advanced Technology and Engineering Exploration (IJATEE) ISSN (P): 2394-5443 ISSN (O): 2394-7454 Vol. 9, Issue 87, February 2022
A customized 1D-CNN approach for sensor-based human activity recognition

Shilpa Ankalaki and Thippeswamy M. N.

Abstract

Sensor-based human activity recognition (HAR) plays a major role in healthcare and security applications. The significance of this study is to understand the state of the art of techniques for recognizing human activities based on physiological signals acquired by body-worn sensors. Accurate recognition of activities from wearable-sensor signals alone is a difficult task due to the inherent complexity of physical activities. Although sensor-based HAR has been accomplished using various machine learning and deep learning algorithms, only a handful of researchers have studied extensively how different parameters contribute to the accuracy of recognition. The main focus of this study is a comparative evaluation of state-of-the-art machine learning and deep learning algorithms proposed for HAR. Principal component analysis (PCA) and t-distributed stochastic neighbor embedding (t-SNE) are employed for dimensionality reduction and visualization. Machine learning algorithms such as random forest (RF) and kernel-based support vector machines (SVM), and deep learning algorithms such as convolutional neural networks (CNN), have been applied to the University of California, Irvine (UCI) HAR dataset. A comprehensive study has been conducted to understand the impact of changing CNN parameters, such as pooling, activation functions, number of dense layers, and dropout percentage, on the accuracy of recognition. The proposed work employs in the dense layer the swish activation function recently proposed by Google. The output layer of the last dense network employs the softmax function to classify human activities. The proposed CNN architecture (with a max-pooling layer, the swish activation function in one dense layer, and the softmax function at the output layer) achieved training and validation accuracies of 99.58% and 92.57%, respectively, for HAR.
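The building blocks named in the abstract (non-overlapping max pooling, the swish activation in the dense layer, and a softmax output) can be illustrated with a minimal NumPy sketch. This is an illustrative reconstruction of those operations only, not the authors' CNN; the six-sample toy signal and the pool size of 2 are hypothetical choices for demonstration.

```python
import numpy as np

def swish(x):
    # Swish (Ramachandran et al. [42]): f(x) = x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

def softmax(z):
    # Numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def max_pool1d(x, size=2):
    # Non-overlapping 1-D max pooling; drops a trailing remainder
    n = (len(x) // size) * size
    return x[:n].reshape(-1, size).max(axis=1)

# Toy pipeline on a hypothetical 6-sample sensor window
signal = np.array([0.1, 0.5, -0.2, 0.9, 0.3, 0.7])
pooled = max_pool1d(signal)   # -> [0.5, 0.9, 0.7]
hidden = swish(pooled)        # dense-layer nonlinearity
probs  = softmax(hidden)      # class probabilities; sums to 1
```

In the paper's setting the softmax output would span the six UCI HAR activity classes; here the toy vector yields three pseudo-class probabilities purely to show the data flow.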

Keywords

Activation function, Convolutional neural network (CNN), Human activity recognition, Machine learning and deep learning algorithms, Principal component analysis (PCA), t-Distributed stochastic neighbor embedding (t-SNE).

Cite this article

Ankalaki S, Thippeswamy MN. A customized 1D-CNN approach for sensor-based human activity recognition. International Journal of Advanced Technology and Engineering Exploration (IJATEE). 2022; 9(87).

References

[1] Yazdansepas D, Niazi AH, Gay JL, Maier FW, Ramaswamy L, Rasheed K, et al. A multi-featured approach for wearable sensor-based human activity recognition. In international conference on healthcare informatics 2016 (pp. 423-31). IEEE.

[2] Bhattacharya S, Lane ND. From smart to deep: robust activity recognition on smartwatches using deep learning. In international conference on pervasive computing and communication workshops 2016 (pp. 1-6). IEEE.

[3] Capela NA, Lemaire ED, Baddour N. Feature selection for wearable smartphone-based human activity recognition with able bodied, elderly, and stroke patients. PLoS One. 2015; 10(4):1-18.

[4] Plötz T, Hammerla NY, Olivier PL. Feature learning for activity recognition in ubiquitous computing. In twenty-second international joint conference on artificial intelligence 2011 (pp. 1729-34).

[5] Zebin T, Scully PJ, Ozanyan KB. Inertial sensor based modelling of human activity classes: feature extraction and multi-sensor data fusion using machine learning algorithms. In eHealth 360° 2017 (pp. 306-14). Springer, Cham.

[6] Zhang X, Wong Y, Kankanhalli MS, Geng W. Hierarchical multi-view aggregation network for sensor-based human activity recognition. PLoS One. 2019; 14(9):e0221390.

[7] De-la-Hoz-Franco E, Ariza-Colpas P, Quero JM, Espinilla M. Sensor-based datasets for human activity recognition – a systematic review of literature. IEEE Access. 2018; 6:59192-210.

[8] Al Machot F, Ranasinghe S, Plattner J, Jnoub N. Human activity recognition based on real life scenarios. In international conference on pervasive computing and communications workshops 2018 (pp. 3-8). IEEE.

[9] Nafea O, Abdul W, Muhammad G, Alsulaiman M. Sensor-based human activity recognition with spatio-temporal deep learning. Sensors. 2021; 21(6):1-20.

[10] Ahmed BR, Ahmed N, Amiruzzaman M, Islam MR. A robust feature extraction model for human activity characterization using 3-axis accelerometer and gyroscope data. Sensors. 2020; 20(23):1-17.

[11] Irvine N, Nugent C, Zhang S, Wang H, Ng WW. Neural network ensembles for sensor-based human activity recognition within smart environments. Sensors. 2020; 20(1):1-26.

[12] Voicu RA, Dobre C, Bajenaru L, Ciobanu RI. Human physical activity recognition using smartphone sensors. Sensors. 2019; 19(3):1-18.

[13] Han J, Shao L, Xu D, Shotton J. Enhanced computer vision with microsoft kinect sensor: a review. IEEE Transactions on Cybernetics. 2013; 43(5):1318-34.

[14] Fortin-Simard D, Bilodeau JS, Bouchard K, Gaboury S, Bouchard B, Bouzouane A. Exploiting passive RFID technology for activity recognition in smart homes. IEEE Intelligent Systems. 2015; 30(4):7-15.

[15] Ponce H, Martínez-Villaseñor MD, Miralles-Pechuán L. A novel wearable sensor-based human activity recognition approach using artificial hydrocarbon networks. Sensors. 2016; 16(7):1033.

[16] Anguita D, Ghio A, Oneto L, Parra X, Reyes-Ortiz JL. A public domain dataset for human activity recognition using smartphones. In proceedings of the 21st international European symposium on artificial neural networks, computational intelligence and machine learning 2013 (pp. 437-42).

[17] Le TD, Van NC. Human activity recognition by smartphone. In 2nd national foundation for science and technology development conference on information and computer science 2015 (pp. 219-24). IEEE.

[18] Gupta P, Dallas T. Feature selection and activity recognition system using a single triaxial accelerometer. IEEE Transactions on Biomedical Engineering. 2014; 61(6):1780-6.

[19] Fazli M, Kowsari K, Gharavi E, Barnes L, Doryab A. HHAR-net: hierarchical human activity recognition using neural networks. In international conference on intelligent human computer interaction 2020 (pp. 48-58). Springer, Cham.

[20] De Leonardis G, Rosati S, Balestra G, Agostini V, Panero E, Gastaldi L, et al. Human activity recognition by wearable sensors: comparison of different classifiers for real-time applications. In international symposium on medical measurements and applications 2018 (pp. 1-6). IEEE.

[21] Sun J, Fu Y, Li S, He J, Xu C, Tan L. Sequential human activity recognition based on deep convolutional network and extreme learning machine using wearable sensors. Journal of Sensors. 2018.

[22] Ignatov A. Real-time human activity recognition from accelerometer data using convolutional neural networks. Applied Soft Computing. 2018; 62:915-22.

[23] Cho H, Yoon SM. Divide and conquer-based 1D CNN human activity recognition using test data sharpening. Sensors. 2018; 18(4):1-24.

[24] Avilés-Cruz C, Ferreyra-Ramírez A, Zúñiga-López A, Villegas-Cortéz J. Coarse-fine convolutional deep-learning strategy for human activity recognition. Sensors. 2019; 19(7):1-16.

[25] Ordóñez FJ, Roggen D. Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition. Sensors. 2016; 16(1):115.

[26] Gupta S. Deep learning based human activity recognition (HAR) using wearable sensor data. International Journal of Information Management Data Insights. 2021; 1(2):1-18.

[27] Uddin MZ, Soylu A. Human activity recognition using wearable sensors, discriminant analysis, and long short-term memory-based neural structured learning. Scientific Reports. 2021; 11(1):1-15.

[28] Ye J, Li X, Zhang X, Zhang Q, Chen W. Deep learning-based human activity real-time recognition for pedestrian navigation. Sensors. 2020; 20(9):1-30.

[29] Gao W, Zhang L, Huang W, Min F, He J, Song A. Deep neural networks for sensor-based human activity recognition using selective kernel convolution. IEEE Transactions on Instrumentation and Measurement. 2021; 70:1-13.

[30] Teng Q, Wang K, Zhang L, He J. The layer-wise training convolutional neural networks using local loss for sensor-based human activity recognition. IEEE Sensors Journal. 2020; 20(13):7265-74.

[31] Mukherjee D, Mondal R, Singh PK, Sarkar R, Bhattacharjee D. EnsemConvNet: a deep learning approach for human activity recognition using smartphone sensors for healthcare applications. Multimedia Tools and Applications. 2020; 79(41):31663-90.

[32] Yang R, Wang B. PACP: a position-independent activity recognition method using smartphone sensors. Information. 2016; 7(4):72.

[33] Dharavath R, Madhukar Rao G, Khurana H, Edla DR. t-SNE manifold learning based visualization: a human activity recognition approach. In advances in data science and management 2020 (pp. 33-43). Springer, Singapore.

[34] Van der Maaten L, Hinton G. Visualizing data using t-SNE. Journal of Machine Learning Research. 2008; 9:2579-605.

[35] Anguita D, Ghio A, Oneto L, Parra X, Reyes-Ortiz JL. Human activity recognition on smartphones using a multiclass hardware-friendly support vector machine. In international workshop on ambient assisted living 2012 (pp. 216-23). Springer, Berlin, Heidelberg.

[36] Ravi N, Dandekar N, Mysore P, Littman ML. Activity recognition from accelerometer data. In AAAI 2005 (pp. 1541-6).

[37] Amami R, Ayed DB, Ellouze N. Practical selection of SVM supervised parameters with different feature representations for vowel recognition. arXiv preprint arXiv:1507.06020. 2015.

[38] Goodfellow I, Bengio Y, Courville A. Deep learning. MIT Press; 2016.

[39] Nwankpa C, Ijomah W, Gachagan A, Marshall S. Activation functions: comparison of trends in practice and research for deep learning. arXiv preprint arXiv:1811.03378. 2018.

[40] Zhou Y, Wang X, Zhang M, Zhu J, Zheng R, Wu Q. MPCE: a maximum probability based cross entropy loss function for neural network classification. IEEE Access. 2019; 7:146331-41.

[41] Gordon-Rodriguez E, Loaiza-Ganem G, Pleiss G, Cunningham JP. Uses and abuses of the cross-entropy loss: case studies in modern deep learning. In proceedings of machine learning research 2020 (pp. 1-10).

[42] Ramachandran P, Zoph B, Le QV. Swish: a self-gated activation function. arXiv preprint arXiv:1710.05941. 2017.

[43] Murad A, Pyun JY. Deep recurrent neural networks for human activity recognition. Sensors. 2017; 17(11):1-17.

[44] Li Y, Shi D, Ding B, Liu D. Unsupervised feature learning for human activity recognition using smartphone sensors. In mining intelligence and knowledge exploration 2014 (pp. 99-107). Springer, Cham.

[45] Ronao CA, Cho SB. Human activity recognition with smartphone sensors using deep learning neural networks. Expert Systems with Applications. 2016; 59:235-44.

[46] Kolosnjaji B, Eckert C. Neural network-based user-independent physical activity recognition for mobile devices. In international conference on intelligent data engineering and automated learning 2015 (pp. 378-86). Springer, Cham.

[47] Inoue M, Inoue S, Nishida T. Deep recurrent neural network for mobile human activity recognition with high throughput. Artificial Life and Robotics. 2018; 23(2):173-85.