Universiti Teknologi Malaysia Institutional Repository

Cascading pose features with CNN-LSTM for multiview human action recognition

Rehman Malik, Najeeb and Syed Abu Bakar, Syed Abdul Rahman and Sheikh, Usman Ullah and Channa, Asma and Popescu, Nirvana (2023) Cascading pose features with CNN-LSTM for multiview human action recognition. Signals, 4 (1). pp. 40-55. ISSN 2624-6120

Full text: PDF (864kB)

Official URL: http://dx.doi.org/10.3390/signals4010002

Abstract

Human Action Recognition (HAR) is a branch of computer vision that deals with the identification of human actions at various levels, including the low level, action level, and interaction level. Many HAR algorithms based on handcrafted features have previously been proposed; however, handcrafted techniques are inefficient at recognizing interaction-level actions, which involve complex scenarios. Meanwhile, traditional deep learning-based approaches take the entire image as input and extract large volumes of features, which greatly increases system complexity and leads to significantly higher computation time and resource usage. This research therefore develops an efficient deep learning-based multi-view action recognition system for interaction-level actions that uses 2D skeleton data to achieve high accuracy at reduced computational complexity. The proposed system extracts 2D skeleton data from the dataset using the OpenPose technique. The extracted 2D skeleton features are then fed directly into a Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) architecture for action recognition. To reduce complexity, only the extracted pose features, rather than whole images, are passed to the CNN-LSTM, eliminating the need for image-level feature extraction. The proposed method was compared with existing methods, and the outcomes confirm its potential. The proposed OpenPose-CNNLSTM achieved an accuracy of 94.4% on MCAD (Multi-Camera Action Dataset) and 91.67% on IXMAS (INRIA Xmas Motion Acquisition Sequences). The proposed method also significantly decreases computational complexity by reducing the number of input features to 50.
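For illustration, the pipeline described in the abstract can be sketched as follows. This is a minimal sketch, not the authors' implementation: it assumes each frame is reduced to 50 values (e.g., 25 OpenPose BODY_25 keypoints x (x, y), matching the 50 input features reported above), and the layer sizes, 18-class output, and clip length are illustrative placeholders.

```python
# Hedged sketch of a CNN-LSTM classifier over per-frame 2D skeleton features.
# All hyperparameters below are assumptions, not values from the paper.
import torch
import torch.nn as nn

class PoseCNNLSTM(nn.Module):
    def __init__(self, n_features=50, n_classes=18):  # illustrative sizes
        super().__init__()
        # 1D convolutions extract local patterns across the 50 pose values
        # within each frame before temporal modelling.
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(8),
        )
        # The LSTM models how the per-frame pose embeddings evolve over time.
        self.lstm = nn.LSTM(input_size=64 * 8, hidden_size=128, batch_first=True)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, x):
        # x: (batch, frames, n_features) skeleton features per frame
        b, t, f = x.shape
        x = x.reshape(b * t, 1, f)           # fold time into batch for the CNN
        x = self.cnn(x).reshape(b, t, -1)    # (batch, frames, 64 * 8)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])           # classify from the last time step

# Example: a batch of 4 clips, 30 frames each, 50 pose values per frame.
logits = PoseCNNLSTM()(torch.randn(4, 30, 50))
print(logits.shape)  # torch.Size([4, 18])
```

The key design point the abstract emphasizes is visible here: the network's input is the compact 50-value pose vector per frame, not the raw image, so no image-level feature extraction is needed inside the recognition model.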

Item Type: Article
Uncontrolled Keywords: CNN-LSTM, deep learning, human action recognition (HAR)
Subjects: T Technology > TK Electrical engineering. Electronics. Nuclear engineering
Divisions: Electrical Engineering
ID Code: 106948
Deposited By: Yanti Mohd Shah
Deposited On: 23 Aug 2024 01:32
Last Modified: 23 Aug 2024 01:32
