Liew, S. S. and Khalil-Hani, M. and Bakhteri, R. (2016) Distributed B-SDLM: accelerating the training convergence of deep neural networks through parallelism. In: 14th Pacific Rim International Conference on Artificial Intelligence, PRICAI 2016, 22-26 Aug 2016, Phuket, Thailand.
Full text not available from this repository.
Official URL: https://www.scopus.com/inward/record.uri?eid=2-s2....
Abstract
This paper proposes an efficient asynchronous stochastic second-order learning algorithm for distributed learning of neural networks (NNs). The proposed algorithm, named distributed bounded stochastic diagonal Levenberg-Marquardt (distributed B-SDLM), is based on the B-SDLM algorithm, which converges quickly and incurs only minimal computational overhead compared to the stochastic gradient descent (SGD) method. The proposed algorithm is implemented using the parameter server thread model in the MPICH implementation of MPI. Experiments on the MNIST dataset show that training with distributed B-SDLM on a 16-core CPU cluster allows the convolutional neural network (CNN) model to reach convergence much faster, with speedups of 6.03× and 12.28× to reach training and testing loss values of 0.01 and 0.08, respectively. This also translates into significantly less time to reach a given classification accuracy (5.67× and 8.72× faster to reach 99% training and 98% testing accuracy on MNIST, respectively).
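Since the full text is not available from this repository, the following is only a rough sketch of the kind of per-parameter update the abstract describes: a stochastic diagonal Levenberg-Marquardt step whose curvature estimate is bounded so that the effective learning rates stay stable. The function names, the clipping-based bounding rule, and the default values are illustrative assumptions, not the authors' implementation (which uses the parameter server thread model in MPICH).

```python
# Minimal NumPy sketch of a bounded stochastic diagonal Levenberg-Marquardt
# (B-SDLM)-style update, as it might be applied by a parameter server to
# gradients and curvature estimates pushed asynchronously by worker threads.
# NOTE: names, the clipping-based bounding rule, and all default values are
# assumptions for illustration; they are not taken from the paper.
import numpy as np

def bsdlm_step(w, grad, diag_h, lr=0.01, mu=0.02, h_min=0.0, h_max=10.0):
    """Apply one per-parameter second-order update.

    w      : flat parameter vector
    grad   : stochastic gradient from a worker's mini-batch
    diag_h : worker's estimate of the diagonal (Gauss-Newton) Hessian
    mu     : damping constant in the SDLM rule lr / (diag_h + mu)
    h_min, h_max : assumed bounds on the curvature estimate ("bounded" SDLM)
    """
    h = np.clip(diag_h, h_min, h_max)   # keep the curvature estimate bounded
    step = lr / (h + mu)                # per-parameter learning rates
    return w - step * grad

# Assumed usage on the parameter-server side: each worker computes
# (grad, diag_h) on its own mini-batch and the server applies updates as
# they arrive, without waiting for the other workers (asynchronous update).
```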
Item Type: Conference or Workshop Item (Paper)
Uncontrolled Keywords: Convolutional neural network, Deep learning, Distributed machine learning, Stochastic diagonal Levenberg-Marquardt
Subjects: T Technology > TK Electrical engineering. Electronics. Nuclear engineering
Divisions: Electrical Engineering
ID Code: 73608
Deposited By: Mohd Zulaihi Zainudin
Deposited On: 28 Nov 2017 05:01
Last Modified: 28 Nov 2017 05:01