Quah, Poey Poey (2020) Salient map image for plant disease using deep learning. Masters thesis, Universiti Teknologi Malaysia, Faculty of Engineering - School of Electrical Engineering.
Official URL: http://dms.library.utm.my:8080/vital/access/manage...
Abstract
Plant diseases are a critical factor affecting the yield and quality of crops and the economics of the agricultural sector. This is illustrated by a fungal wheat disease outbreak in North Texas that caused a $250 million loss of revenue in the affected region in 2001. Hence, protecting crops from diseases is essential to ensure production quality and quantity. Early detection of plant diseases is necessary, as it helps to prevent the spread of disease by allowing an appropriate treatment to be chosen for the plants. However, the process is often hampered by the lack of infrastructure needed to perform accurate classification simply. Thus, rapid and accurate detection of plant disease through machine learning is essential to minimize or avert this hardship. In addition, existing work does not segment the progression area of the disease on the leaf, even though this area carries a great deal of information about the disease; in particular, diseases with very similar symptom patterns, such as vegetable early and late blight, are currently not given much consideration in the machine learning process. Hence, the objective of this project is to construct a salient map image that tracks the disease progression from inception to manifestation, following the pathological anatomy of the disease. Semantic segmentation with a Convolutional Neural Network (CNN) is used to construct the salient map image through transfer learning with SegNet. In this project, 460 images of plants with early blight and late blight diseases from the PlantVillage dataset are used for the training and testing of the CNN. Next, the training parameters are fine-tuned in order to optimize the accuracy of the deep learning model. At the end of the project, the deep learning model is able to segment the leaf image into several regions with an overall accuracy of 89.567% and an overall IoU of 52.5448%. Although transfer learning on an FCN with the same dataset and training parameters performs slightly better, with an overall accuracy of 89.91% and an IoU of 53.92%, its main drawbacks of long model training duration and large memory consumption make SegNet preferable in this project. With the gradient map image generated, the pattern of each disease manifestation along the leaf surface can be tracked and quantified for better understanding and characterization based on its anatomy.
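The reported figures (overall accuracy of 89.567% and overall IoU of 52.5448% for SegNet; 89.91% and 53.92% for the FCN baseline) follow the standard semantic-segmentation definitions. Below is a minimal Python sketch, not taken from the thesis, showing how these two metrics are typically computed from a pixel-level confusion matrix; the three-class label map in the example (background, healthy leaf, diseased region) is a hypothetical illustration of the kind of segmentation output described in the abstract.

```python
# Minimal sketch (not from the thesis) of the two reported evaluation metrics
# for semantic segmentation: overall pixel accuracy and intersection-over-union
# (IoU), both derived from a confusion matrix over label maps.
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes):
    """Accumulate a pixel-level confusion matrix from flattened label maps."""
    valid = (y_true >= 0) & (y_true < num_classes)
    idx = num_classes * y_true[valid].astype(int) + y_pred[valid].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def overall_accuracy(cm):
    """Fraction of correctly classified pixels over all classes."""
    return np.diag(cm).sum() / cm.sum()

def mean_iou(cm):
    """Per-class IoU = TP / (TP + FP + FN), averaged over classes."""
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    iou = tp / np.maximum(tp + fp + fn, 1)
    return iou.mean()

if __name__ == "__main__":
    # Hypothetical ground-truth and predicted label maps for a 3-class problem
    # (e.g. background, healthy leaf tissue, diseased region).
    rng = np.random.default_rng(0)
    gt = rng.integers(0, 3, size=(256, 256))
    pred = rng.integers(0, 3, size=(256, 256))
    cm = confusion_matrix(gt.ravel(), pred.ravel(), num_classes=3)
    print(f"overall accuracy: {overall_accuracy(cm):.4f}")
    print(f"mean IoU: {mean_iou(cm):.4f}")
```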
| Item Type: | Thesis (Masters) |
|---|---|
| Additional Information: | Thesis (Sarjana Kejuruteraan (Komputer dan Sistem Mikroelektronik)) - Universiti Teknologi Malaysia, 2020; Supervisors: Dr. Musa Mohd. Mokji |
| Subjects: | T Technology > TK Electrical engineering. Electronics. Nuclear engineering |
| Divisions: | Electrical Engineering |
| ID Code: | 92995 |
| Deposited By: | Yanti Mohd Shah |
| Deposited On: | 07 Nov 2021 06:00 |
| Last Modified: | 07 Nov 2021 06:00 |