Liew, Shan Sung (2016) An efficient and effective convolutional neural network for visual pattern recognition. PhD thesis, Universiti Teknologi Malaysia, Faculty of Electrical Engineering.
PDF (951kB)
Official URL: http://dms.library.utm.my:8080/vital/access/manage...
Abstract
Convolutional neural networks (CNNs) are a variant of deep neural networks (DNNs) optimized for visual pattern recognition and typically trained with first order learning algorithms, most commonly stochastic gradient descent (SGD). Training deeper CNNs (deep learning) on large data sets (big data) has motivated distributed machine learning (ML), which has contributed to state-of-the-art performance on computer vision problems. However, several issues remain with current models and learning algorithms. Propagation through a convolutional layer requires flipping the kernel weights, which increases the computation time of a CNN. Sigmoidal activation functions suffer from the gradient diffusion problem, which degrades training efficiency, while other activation functions cause numerical instability because their outputs are unbounded. Common learning algorithms converge slowly and are prone to hyperparameter overfitting. To date, most distributed learning algorithms are still based on first order methods, which are susceptible to various learning issues. This thesis presents an efficient CNN model, proposes an effective learning algorithm to train CNNs, and maps the algorithm onto parallel and distributed computing platforms for improved training speedup. The proposed CNN consists of convolutional layers that use correlation filtering instead of kernel flipping, and employs novel bounded activation functions, giving faster performance (up to 1.36x), improved learning performance (up to 74.99% better), and better training stability (up to 100% improvement). The bounded stochastic diagonal Levenberg-Marquardt (B-SDLM) learning algorithm is proposed to encourage fast convergence (up to 5.30% faster and 35.83% better than first order methods) while requiring only a single hyperparameter. B-SDLM also supports mini-batch learning for high parallelism. Based on previous works, this is among the first successful attempts to map a stochastic second order learning algorithm onto distributed ML platforms. Running the distributed B-SDLM on a 16-core cluster reaches a given convergence state and accuracy on the Modified National Institute of Standards and Technology (MNIST) data set up to 12.08x and 8.72x faster, respectively. All three complex case studies tested with the proposed algorithms achieve classification accuracies comparable to or better than those reported in previous works, with better efficiency. For example, the proposed solutions achieve 99.14% classification accuracy on the MNIST case study and 100% for face recognition on the AR Purdue data set, demonstrating the feasibility of the proposed algorithms for visual pattern recognition tasks.
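The abstract's point about correlation filtering rests on a standard identity: cross-correlating an input with a kernel equals convolving it with the 180°-rotated kernel, so a network that learns its kernels can skip the flip entirely. Below is a minimal NumPy/SciPy sketch of that identity; the array values and variable names are illustrative and not taken from the thesis.

```python
import numpy as np
from scipy.signal import convolve2d, correlate2d

# Toy 2-D input and kernel; the values are arbitrary, for illustration only.
x = np.arange(16, dtype=float).reshape(4, 4)
k = np.array([[1.0, 0.0],
              [0.0, -1.0]])

# True convolution rotates the kernel 180 degrees before sliding it.
conv_out = convolve2d(x, k, mode="valid")

# Cross-correlation slides the kernel as-is, skipping the flip.
corr_out = correlate2d(x, k, mode="valid")

# Correlating with k equals convolving with the flipped kernel, so a CNN
# whose kernels are learned can simply learn the "pre-flipped" weights.
assert np.allclose(corr_out, convolve2d(x, np.flip(k), mode="valid"))
```

The B-SDLM algorithm itself is not spelled out in the abstract. For context, the classical stochastic diagonal Levenberg-Marquardt update it builds on gives each parameter its own learning rate derived from a diagonal Hessian estimate. The sketch below shows only that classical update, assuming a LeCun-style formulation; the function name and the `mu` regulariser are assumptions, and the thesis's bounding scheme and single-hyperparameter design are not reproduced here.

```python
def sdlm_step(w, grad, h_diag, eta=0.01, mu=0.02):
    """One classical stochastic diagonal Levenberg-Marquardt step (sketch).

    Each parameter i is updated with its own step size eta / (h_ii + mu),
    where h_ii is a non-negative (Gauss-Newton) estimate of the Hessian
    diagonal and mu guards against division by very small curvature.
    The thesis's B-SDLM additionally bounds this update and reduces it
    to a single hyperparameter; that variant is not reproduced here.
    """
    return w - (eta / (h_diag + mu)) * grad
```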
Item Type: | Thesis (PhD) |
---|---|
Additional Information: | Thesis (Ph.D (Electrical Engineering)) - Universiti Teknologi Malaysia, 2016; Supervisor: Prof. Dr. Mohamed Khalil Mohd. Hani |
Uncontrolled Keywords: | convolutional neural networks (CNNs), machine learning (ML) |
Subjects: | T Technology > TK Electrical engineering. Electronics. Nuclear engineering |
Divisions: | Electrical Engineering |
ID Code: | 60714 |
Deposited By: | Fazli Masari |
Deposited On: | 27 Feb 2017 05:00 |
Last Modified: | 03 Jan 2021 01:31 |