Jadooki, S. and Mohamad, D. and Saba, T. and Almazyad, A. S. and Rehman, A. (2017) Fused features mining for depth-based hand gesture recognition to classify blind human communication. Neural Computing and Applications, 28 (11). pp. 3285-3294. ISSN 0941-0643
Full text not available from this repository.
Official URL: https://www.scopus.com/inward/record.uri?eid=2-s2....
Abstract
Gesture recognition and hand pose tracking are widely applied techniques in human–computer interaction. Depth data obtained from depth cameras provide a very informative description of the body, and of the hand pose in particular, that can be used to build more accurate gesture recognition systems. Hand detection and feature extraction are very challenging tasks in RGB images, yet they can be solved effectively and simply with depth data. Moreover, depth data can be combined with color information for more reliable recognition. A common hand gesture recognition system must identify the hand and its position or direction, extract useful features, and apply a suitable machine-learning method to detect the performed gesture. This paper presents a novel fusion of enhanced features for the classification of static signs of sign language. It begins by explaining how the hand can be separated from the scene using depth data. A combined feature extraction method is then introduced to extract appropriate features from the images. Finally, an artificial neural network classifier is trained on these fused features and used to critically analyze the performance of the various descriptors.
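The abstract describes a three-stage pipeline: depth-based hand segmentation, fused feature extraction, and an artificial neural network classifier. The sketch below illustrates one plausible reading of that pipeline in Python, using the DCT and moment-invariant descriptors named in the keywords. The depth band, resize resolution, DCT block size, and network shape are illustrative assumptions, not parameters taken from the paper.

```python
# A minimal sketch of the pipeline outlined in the abstract, assuming a
# single-channel depth image (in millimetres) where the hand is the object
# closest to the camera. All numeric parameters here are assumptions.
import numpy as np
import cv2
from sklearn.neural_network import MLPClassifier

def segment_hand(depth, near_mm=400, far_mm=800):
    """Separate the hand from the scene by keeping pixels in a depth band."""
    return ((depth > near_mm) & (depth < far_mm)).astype(np.uint8)

def fused_features(mask, dct_size=8):
    """Fuse moment-invariant and DCT descriptors into one feature vector."""
    hu = cv2.HuMoments(cv2.moments(mask)).flatten()       # 7 Hu moment invariants
    hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)      # log scale for numeric stability
    img = cv2.resize(mask.astype(np.float32), (64, 64))
    dct = cv2.dct(img)[:dct_size, :dct_size].flatten()    # low-frequency DCT block
    return np.concatenate([hu, dct])

def train(depth_images, labels):
    """Train an ANN on the fused features (hypothetical data shapes)."""
    X = np.array([fused_features(segment_hand(d)) for d in depth_images])
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
    clf.fit(X, labels)
    return clf
```

Keeping only a low-frequency DCT block mirrors the usual use of the DCT as a compact shape descriptor, while the log-scaled Hu moments add rotation- and scale-invariant cues; concatenating the two is one straightforward way to realize the "fused features" the title refers to.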
| Item Type: | Article |
|---|---|
| Uncontrolled Keywords: | DCT, Depth data, Fused features mining, Hand gesture recognition, Moment invariant |
| Subjects: | Q Science > QA Mathematics > QA75 Electronic computers. Computer science |
| Divisions: | Computing |
| ID Code: | 76989 |
| Deposited By: | Fazli Masari |
| Deposited On: | 30 Apr 2018 14:32 |
| Last Modified: | 30 Apr 2018 14:32 |