Universiti Teknologi Malaysia Institutional Repository

An improved retraining scheme for convolutional neural network

Radzi, Feeza and Mohd. Hani, Mohamed Khalil and Mohd. Saad, Norhashimah and Salehuddin, Fauziyah and Abdul Hamid, Norihan (2015) An improved retraining scheme for convolutional neural network. Journal Of Telecommunication, Electronic And Computer Engineering, 7 (1). pp. 5-9. ISSN 2180-014

Full text not available from this repository.

Official URL: http://journal.utem.edu.my/index.php/jtec/article/...

Abstract

A feed-forward artificial neural network model, or multilayer perceptron (MLP), learns adaptively from input samples and solves non-linear problems on data that are noisy and imprecise. A variant of the MLP, the Convolutional Neural Network (CNN), adds features such as weight sharing, local receptive fields, and subsampling, making the CNN superior at challenging pattern-recognition tasks. Although the CNN improves on the performance of the MLP, the complexity of its structure makes retraining inefficient whenever new categories, implemented as winner-takes-all neurons at the classifier stage, are added: the complete network must be retrained, which incurs additional cost and training time. In this paper, we propose a retraining scheme that overcomes this problem. The proposed scheme generalizes the feature-extraction layers, so the retraining process involves only the last two layers instead of the whole network. The design was evaluated on the AT&T and JAFFE databases. The results show that training an additional category is more than 70 times faster than retraining the whole network architecture.
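The core idea described in the abstract, freezing the feature-extraction layers and retraining only the final classifier when a category is added, can be illustrated with a minimal sketch. This is not the paper's implementation: the `extract_features` function below is a hypothetical stand-in for the frozen convolution/subsampling stages, and the classifier is a simple winner-takes-all layer trained with perceptron-style updates.

```python
# Sketch of the retraining idea: frozen feature extractor, trainable classifier.
# All names here are illustrative, not taken from the paper.

FEAT_DIM = 4

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def extract_features(x):
    # Placeholder for the CNN's frozen feature-extraction layers.
    # These weights are NEVER updated during retraining; here the
    # "extractor" is just the identity map for simplicity.
    return list(x)

class WinnerTakesAllClassifier:
    """Trainable last layer: one weight row per category."""

    def __init__(self):
        self.rows = []  # one weight vector per category

    def add_category(self):
        # Adding a category only appends one output neuron;
        # the feature extractor is untouched.
        self.rows.append([0.0] * FEAT_DIM)

    def predict(self, x):
        f = extract_features(x)
        scores = [dot(r, f) for r in self.rows]
        return scores.index(max(scores))  # winner takes all

    def train(self, samples, labels, lr=0.1, epochs=20):
        # Perceptron-style updates on the classifier weights only.
        for _ in range(epochs):
            for x, y in zip(samples, labels):
                pred = self.predict(x)
                if pred != y:
                    f = extract_features(x)
                    self.rows[y] = [w + lr * fi for w, fi in zip(self.rows[y], f)]
                    self.rows[pred] = [w - lr * fi for w, fi in zip(self.rows[pred], f)]

# Usage: train two categories, then add a third and retrain only the classifier.
clf = WinnerTakesAllClassifier()
clf.add_category()
clf.add_category()
data = [[1, 0, 0, 0], [0, 1, 0, 0]]
clf.train(data, [0, 1])

clf.add_category()  # new category: no full-network retraining needed
data.append([0, 0, 1, 0])
clf.train(data, [0, 1, 2])
```

Because only the small classifier is updated, the cost of accommodating a new category is independent of the depth of the frozen feature-extraction stages, which is the source of the speed-up the abstract reports.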

Item Type: Article
Subjects: A General Works
ID Code: 57765
Deposited By: Haliza Zainal
Deposited On: 04 Dec 2016 12:07
Last Modified: 11 Sep 2017 11:40
