Universiti Teknologi Malaysia Institutional Repository

An optimized second order stochastic learning algorithm for neural network training

Liew, S. S. and Khalil-Hani, M. and Bakhteri, R. (2016) An optimized second order stochastic learning algorithm for neural network training. Neurocomputing, 186 . pp. 74-89. ISSN 0925-2312

Full text not available from this repository.

Official URL: https://www.scopus.com/inward/record.uri?eid=2-s2....

Abstract

This paper proposes an improved stochastic second order learning algorithm for supervised neural network training. The proposed algorithm, named bounded stochastic diagonal Levenberg-Marquardt (B-SDLM), uses both gradient and curvature information to achieve fast convergence while incurring only minimal computational overhead compared to the stochastic gradient descent (SGD) method. B-SDLM has only a single hyperparameter, unlike most other learning algorithms, which suffer from the hyperparameter overfitting problem because they have more hyperparameters to tune. Experiments with multilayer perceptron (MLP) and convolutional neural network (CNN) models show that B-SDLM outperforms other learning algorithms in both classification accuracy and computational efficiency (about 5.3% faster than SGD on the mnist-rot-bg-img database). It classifies all testing samples correctly in the face recognition case study based on the AR Purdue database. In addition, experiments on handwritten digit classification case studies show significant improvements in the testing misclassification error rates (MCRs): 19.6% on the MNIST database and 17.5% on the mnist-rot-bg-img database. The computationally expensive Hessian calculations are kept to a minimum by using just 0.05% of the training samples in the estimation, or by updating the learning rates only once every two training epochs, while maintaining or even lowering the testing MCRs. It is also shown that B-SDLM works well in the mini-batch learning mode, and a 3.32× performance speedup is achieved when deploying the proposed algorithm in a distributed learning environment with a quad-core processor.
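The general idea behind SDLM-style methods is to scale each parameter's learning rate by an estimate of the corresponding diagonal Hessian entry, and the "bounded" variant described in the abstract additionally constrains these per-parameter rates. A minimal sketch of such an update is shown below; the names `eta`, `mu`, and `lr_max` are hypothetical, and this is a generic illustration of the diagonal Levenberg-Marquardt scheme, not the paper's exact single-hyperparameter B-SDLM rule:

```python
import numpy as np

def bounded_sdlm_step(w, grad, hess_diag, eta=0.01, mu=0.01, lr_max=1.0):
    """One illustrative bounded SDLM-style parameter update.

    Per-parameter learning rates are derived from a diagonal
    curvature (Hessian) estimate, then capped so that a near-zero
    curvature entry cannot produce an arbitrarily large step.
    All hyperparameter names here are assumptions for illustration.
    """
    # Larger rate where curvature is flat, smaller where it is sharp;
    # mu regularizes against division by (near-)zero curvature.
    lr = eta / (np.abs(hess_diag) + mu)
    # Bound the per-parameter rates (the safeguard suggested by the
    # "bounded" in B-SDLM; the exact bounding rule is in the paper).
    lr = np.minimum(lr, lr_max)
    return w - lr * grad
```

In this sketch the diagonal Hessian would be estimated stochastically, e.g. from a small fraction of training samples, consistent with the abstract's note that using only 0.05% of the samples suffices for the curvature estimate.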

Item Type: Article
Uncontrolled Keywords: Algorithms, Artificial intelligence, Character recognition, Classification (of information), Computational efficiency, Computer aided instruction, Convolution, Database systems, Distributed computer systems, Efficiency, Face recognition, Learning systems, Neural networks, Stochastic systems, Convolutional neural network, Distributed machine learning, Fast convergence, Levenberg-Marquardt, Overfitting, Learning algorithms, accuracy, algorithm, Article, artificial neural network, controlled study, convolutional neural network, intermethod comparison, learning algorithm, machine learning, mathematical analysis, mathematical computing, mathematical phenomena, measurement error, misclassification error rate, priority journal, process development, process optimization, stochastic diagonal Levenberg Marquardt algorithm, stochastic gradient descent method, stochastic learning algorithm
Subjects: T Technology > TK Electrical engineering. Electronics Nuclear engineering
Divisions: Electrical Engineering
ID Code: 72624
Deposited By: Haliza Zainal
Deposited On: 27 Nov 2017 04:42
Last Modified: 27 Nov 2017 04:42
