
Enhanced Pest and Disease Detection in Agriculture Using Deep Learning-Enabled Drones

Wenqi Li 1, Xixi Han 1*, Zhibo Lin 1, Atta-ur Rahman 2
1 School of Electronic and Information Engineering, Zhongyuan University of Technology, 451191 Zhengzhou, China
2 College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, 31441 Dammam, Saudi Arabia
Acadlore Transactions on AI and Machine Learning | Volume 3, Issue 1, 2024 | Pages 1-10
Received: 10-12-2023, Revised: 12-21-2023, Accepted: 01-01-2024, Available online: 01-11-2024

Abstract:

In this study, an integrated pest and disease recognition system for agricultural drones has been developed, leveraging deep learning technologies to significantly improve the accuracy and efficiency of pest and disease detection in agricultural settings. By employing convolutional neural networks (CNN) in conjunction with high-definition image acquisition and wireless data transmission, the system demonstrates proficiency in the effective identification and classification of various agricultural pests and diseases. Methodologically, a deep learning framework has been innovatively applied, incorporating critical modules such as image acquisition, data transmission, and pest and disease identification. This comprehensive approach facilitates rapid and precise classification of agricultural pests and diseases, while catering to the needs of remote operation and real-time data processing, thus ensuring both system efficiency and data security. Comparative analyses reveal that this system offers a notable enhancement in both accuracy and response time for pest and disease recognition, surpassing traditional detection methods and optimizing the management of agricultural pests and diseases. The significant contribution of this research is the successful integration of deep learning into the domain of agricultural pest and disease detection, marking a new era in smart agriculture technology. The findings of this study bear substantial theoretical and practical implications, advancing precision agriculture practices and contributing to the sustainability and efficiency of agricultural production.

Keywords: Deep learning, Agricultural drones, Pest and disease recognition, Convolutional neural networks, Recognition accuracy

1. Introduction

In China, known for its robust agricultural sector, favorable climatic conditions, abundant sunlight, and ample water resources are recognized as key factors promoting agricultural development [1], [2]. These conditions, however, also contribute to the proliferation of pests and diseases, positioning pest and disease detection as a critical factor in advancing agricultural practices. Traditional approaches, which predominantly rely on experts or technicians for the identification of pests and diseases, are characterized by their low efficiency, susceptibility to subjective biases, and high labor intensity, rendering them inadequate for practical needs [3], [4].

The advent of machine vision, the Internet of Things (IoT), and drone technologies has led to a significant expansion in the use of drones within the agricultural sector [5], [6]. Machine vision, substituting human observation with computerized systems, imitates human visual capabilities to measure and evaluate the identified objects. This process involves capturing images through imaging devices, which are then processed by a series of algorithms within an image processing unit, ultimately yielding detection results [7], [8]. The role of target detection is fundamental in the machine vision system. These technologies introduce innovative methods for detecting agricultural pests and diseases, thereby significantly enhancing detection efficiency [9], [10].

In traditional drone-based pest and disease recognition systems, machine vision plays a pivotal role, primarily processing extracted images via algorithms to identify attributes such as texture, shape, and color. These systems then employ various algorithms for classification. Conventional methods involve the manual design of pest characteristics and classifiers, utilizing techniques such as threshold segmentation, edge detection with the Sobel, Roberts, and Prewitt operators, and feature extraction methods including the Scale-Invariant Feature Transform (SIFT), the Histogram of Oriented Gradients (HOG), and shape, color, and texture analysis. Classification is typically performed using methods such as Support Vector Machines (SVM), Back Propagation (BP) neural networks, and Bayesian analysis [11], [12]. The imaging requirements for object detection in these systems are stringent, demanding a stark contrast between pests and their surrounding environment. However, the diversity of pest types and their morphological changes during different growth stages, along with the possibility of different pests displaying similar characteristics at various stages, pose significant challenges. These challenges are compounded by the complexities of natural light and background similarities in real-world detection scenarios, which can compromise the robustness and effectiveness of traditional machine vision recognition algorithms [13], [14].

Deep learning technology, which mimics the structure and functionality of human brain neural networks, outperforms traditional machine learning by autonomously learning from extensive datasets, thus exhibiting formidable computational capabilities. Provided with ample learning data and high-performance computing units, deep learning demonstrates enhanced robustness, superior detection performance, and greater recognition accuracy [15], [16]. The evolution of drone technology has markedly advanced the deployment of agricultural drones for pest control. The application of these drones, particularly in pesticide spraying, not only utilizes machine vision technology for efficient planning of spray routes but also significantly reduces human exposure to pesticides. This advancement facilitates increased spray efficiency, uniformity, and the automation and precision of pesticide application. The role of pest and disease detection and control in agricultural production is crucial, directly influencing crop yield. Therefore, the integration of deep learning in pest and disease recognition systems for agricultural drones is vital for the advancement of agricultural practices.

In this study, the CNN has been selected as the deep learning model for the pest and disease recognition system tailored to agricultural drones. The CNN is meticulously designed to take into account the spatial structure and local correlation of images, thereby enhancing the efficiency of image data processing. These networks are adept at tasks such as object detection, image classification, and image recognition, excelling in the extraction of features from detected images, which equips them with robust image recognition and analysis capabilities. Particularly proficient in extracting deep features from images, the CNN significantly bolsters image recognition performance, exhibiting formidable robustness [17], [18]. This research delves into the critical technologies integral to deep learning in agricultural drone pest and disease recognition systems, encompassing deep learning models, algorithms, and evaluation metrics for such systems. The deep learning-based algorithms employed here are characterized by their potent feature extraction ability, high detection precision, and recognition efficiency, surpassing traditional algorithms and offering substantial guidance in identifying crop pests and diseases.

The application of CNN in crop pest and disease recognition by various researchers has yielded noteworthy results. However, challenges persist due to the complex and variable backgrounds encountered in real crop pest and disease scenarios, difficulties in acquiring agricultural disease image data, and the constraints posed by limited data volume. Enhancing the accuracy and robustness of recognition algorithms remains an ongoing focus of research. Striking a balance between heightened recognition precision and maintaining processing speed is an ongoing challenge in the field.

2. Methodology

2.1 Overall System Design

In this research, a pest and disease recognition system designed for agricultural drones is structured into a three-tier architecture. This architecture encompasses the sensing layer, transmission layer, and application layer. The structural framework of the system is systematically represented in Figure 1.

Figure 1. Structural framework of the recognition system
2.1.1 Sensing layer

The sensing layer forms the foundation of the recognition system, incorporating the drone system, ground monitoring station, and mobile client. This layer is primarily dedicated to fulfilling the dynamic recognition demands of agricultural drones.

Within the drone system, a series of components is coordinated to control device rotation, movement, focus adjustment, and image acquisition. Key components include a structural frame, a Digital Signal Processor (DSP) serving as the main processor, a data collection module, a power supply module, a Charge Coupled Device (CCD) camera, and a propulsion system. The drone system's control framework is comprehensively illustrated in Figure 2.

Figure 2. Control framework of the drone system

The central processing unit of the drone system is a DM643 model, specifically engineered for digital multimedia applications. This unit functions as the heart of the drone's flight control system, managing tasks such as receiving flight data, orchestrating trajectory planning, and processing the images captured. The data collection module is integral to this system, tasked with the real-time acquisition of flight status data. This data is then relayed to a microcontroller responsible for monitoring and controlling the drone's attitude, thereby ensuring the safety of its flight. Constituting this module are various components, including an altitude sensor, a Global Positioning System (GPS) receiver, an angle sensor, an angular velocity sensor, an accelerometer, and an airspeed indicator. Proportional-Integral-Derivative (PID) control technology is implemented to precisely manage the drone's flight attitude while minimizing systematic errors. This control mechanism is detailed in Figure 3.

Figure 3. Drone flight attitude control mechanism
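To make the attitude-control loop concrete, a single-axis discrete PID controller is sketched below in Python. The gains, sampling period, and pitch-angle values are illustrative assumptions; the paper does not report the controller parameters used on the drone.

```python
# Minimal sketch of one axis of the discrete PID attitude-control loop described above.
# Gains (kp, ki, kd), the 50 Hz loop rate, and the pitch error values are assumptions.

class PIDController:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        """Return the control output for one sampling step."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Example: regulate the pitch angle (degrees) toward level flight at a 50 Hz loop rate.
pitch_pid = PIDController(kp=1.2, ki=0.05, kd=0.3, dt=0.02)
motor_correction = pitch_pid.update(setpoint=0.0, measurement=3.5)
```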

The propulsion system of the drone, crucial for its operation, comprises a motor, an Electronic Speed Controller (ESC), and a propeller. The motor selected, known for its precise controllability, is of the brushless variety. The ESC, with its 20A rated current, is utilized to modulate the motor's speed. The propeller, driven by the motor, generates necessary lift, ensuring that the motor's operational capacity is not exceeded.

For imaging purposes, a CCD camera, model DS-2DC4423IW-D, is employed. This camera is characterized by its spherical shape and expansive field of view. The GPS system plays a pivotal role in facilitating the drone's navigational accuracy.

The ground monitoring station serves as a control hub for the drone, overseeing its flight status and missions. Its primary components include an imaging system and a Liquid Crystal Display (LCD). The imaging system is tasked with adjusting and controlling the CCD camera, modifying parameters like resolution, shooting mode, and focal length to suit various area characteristics, thus ensuring optimal image acquisition.

2.1.2 Transmission layer

The transmission layer is designed with two essential functions. Firstly, it is responsible for the secure and stable transfer of flight data from the recognition drone and the image data collected by the CCD camera. Secondly, it facilitates the transmission of commands from ground personnel to the drone. Wireless networking is employed for data transmission to accommodate the requirements of long-distance drone operations, enhancing both the safety and stability of the data transmission process. The wireless network operates on the Ubuntu system, utilizing SDCC, EC2Tools, and MonoDeveloper as its primary development tools. Data transmission is executed in packet form, ensuring efficient and reliable delivery of information. The sequential steps of data transmission within this wireless network are detailed in Figure 4.

Figure 4. Steps of data transmission in the wireless network
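As an illustration of the packet-based transfer just described, the following sketch sends an encoded image in numbered UDP packets. The ground-station address, port, payload size, and header layout are assumptions made for demonstration, not details of the deployed wireless network.

```python
# Sketch of packet-based image transmission over a wireless link, assuming a UDP socket.
# The ground-station address, payload size, and header format are illustrative assumptions.
import socket
import struct

GROUND_STATION = ("192.168.1.10", 9000)   # assumed address of the ground monitoring station
PACKET_PAYLOAD = 1024                      # assumed payload size per packet (bytes)

def send_image(image_bytes: bytes) -> None:
    """Split an encoded image into numbered packets and send them via UDP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    total = (len(image_bytes) + PACKET_PAYLOAD - 1) // PACKET_PAYLOAD
    for seq in range(total):
        chunk = image_bytes[seq * PACKET_PAYLOAD:(seq + 1) * PACKET_PAYLOAD]
        header = struct.pack("!II", seq, total)   # sequence number and total packet count
        sock.sendto(header + chunk, GROUND_STATION)
    sock.close()
```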
2.1.3 Application layer

The application layer's primary role is the analysis and processing of the information acquired. It employs deep learning algorithms for the recognition of diseases, yielding definitive results. To align with the functional demands of this recognition system, the application layer is segregated into six distinct modules: the image acquisition module, storage module, virus detection module, disease recognition module, flight route planning module, and control module.

2.2 Core Algorithm Design of the System

Deep learning integrates low-level features to create abstract high-level features, representing categories or characteristics of attributes, thereby enabling complex classification tasks to be performed using simplified models. This technique grants machines capabilities akin to human analytical skills, advancing the development of artificial intelligence [19], [20]. In the realm of deep learning, the CNN stands out as a quintessential model, automating the learning of image features and patterns. This automation is realized through an amalgamation of convolutional layers, pooling layers, and fully connected layers, which collectively facilitate the automatic extraction and classification of image features. Central to the CNN is the convolutional layer, comprised of multiple convolutional kernels. Each kernel interacts with the input image to extract local features, capturing edges, textures, and other localized aspects of the image. The pooling layer plays a pivotal role in reducing the spatial dimensions of the output from the convolutional layer, thereby diminishing the number of parameters and highlighting the key features. Compared to other deep learning algorithms, the CNN demands fewer parameters and demonstrates superior generalization ability [21]. Given its extensive application in target detection within machine vision, the CNN has been deemed appropriate for this study. Consequently, the CNN has been selected as the foundational deep learning algorithm in the development of the pest and disease recognition system for agricultural drones.

2.2.1 CNN design

CNN is comprised of several layers: convolutional layers, activation function layers, pooling layers, fully connected layers, and output layers. At the heart of the CNN architecture lies the convolutional layer, entrusted with the critical task of extracting local features from images. This layer is recognized as the most computationally demanding and time-intensive component, yet it is essential for the network's functionality. The core principle of the convolutional layer involves sliding a window across the input image data and performing calculations to extract image features. The convolution operation process is systematically illustrated in Figure 5.

Figure 5. Illustration of the convolution operation process

In the convolution operation, the input pixels are multiplied element-wise by the corresponding weights of the convolution kernel and summed, with a bias term added. The network algorithm design employs various convolution kernels to achieve specific functionalities. The dimensions ${{n}_{out}}$ of the resulting image are determined by the following equation:

${{n}_{out}}=\left[ \frac{{{n}_{in}}+2q-f}{s} \right]+1 $
(1)

where, ${{n}_{in}}$ denotes the size of the input feature map, $f$ the size of the convolution kernel, $s$ the stride (step size), and $q$ the amount of padding; all are hyperparameters of the convolutional layer.
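As a concrete check of Eq. (1), a small helper function is sketched below; the example values (a 224×224 input, 3×3 kernel, stride 1, padding 1) are chosen only for illustration.

```python
# Helper evaluating Eq. (1): the spatial size of a convolutional layer's output
# given input size n_in, kernel size f, stride s, and padding q.
def conv_output_size(n_in: int, f: int, s: int = 1, q: int = 0) -> int:
    return (n_in + 2 * q - f) // s + 1

# Example: a 224x224 input with a 3x3 kernel, stride 1, and padding 1 keeps its size.
assert conv_output_size(224, f=3, s=1, q=1) == 224
```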

The activation function layer is pivotal in addressing data non-linearities, thus equipping the CNN to handle more complex tasks. Prominent activation functions utilized include the Sigmoid, Tanh, and ReLU activation functions, each described by their respective mathematical expressions:

${Sigmoid}(x)=\frac{1}{1+{{e}^{-x}}}, \quad {Sigmoid}'(x)={Sigmoid}(x)\left[ 1-{Sigmoid}(x) \right]$
(2)
${Tanh}(x)=\frac{{{e}^{x}}-{{e}^{-x}}}{{{e}^{x}}+{{e}^{-x}}},{Tanh}'(x)=1-{{{Tanh}}^{2}}(x) $
(3)
${ReLU}(x)=\max (0,x)$
(4)
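For reference, the following NumPy sketch implements the activation functions of Eqs. (2)-(4), together with the derivatives quoted for Sigmoid and Tanh.

```python
# NumPy implementations of the activation functions in Eqs. (2)-(4) and the
# derivatives quoted for Sigmoid and Tanh.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime(x):
    s = sigmoid(x)
    return s * (1.0 - s)

def tanh_prime(x):
    return 1.0 - np.tanh(x) ** 2

def relu(x):
    return np.maximum(0.0, x)
```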

The pooling layer, often referred to as the subsampling layer, employs mathematical methods to process data derived from the convolutional and activation function layers, ultimately producing a feature map. This layer plays a crucial role in data and parameter compression by reducing the feature map's size, which consequently diminishes the consumption of computational resources and aids in preventing overfitting. The size of the output feature map is determined by the formula provided below:

$ {{n}_{out}}=\left[ \frac{{{n}_{in}}-f}{s} \right]+1$
(5)

For the augmentation of image classification, recognition, and detection accuracy, the network architecture may be further enhanced by incorporating additional layers, such as fully connected layers and loss function layers. These layers are instrumental in classifying the features extracted from the images, thereby yielding a quantitative description of the input feature map.
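To illustrate how these layers combine, a minimal sketch of the convolution, activation, pooling, and fully connected pipeline is given below, assuming PyTorch. The layer counts, channel widths, 224×224 input size, and four-class output (matching the grape-leaf classes used in Section 3.1) are illustrative assumptions, not the exact architecture deployed on the drone system.

```python
# Minimal sketch of a convolution -> activation -> pooling -> fully connected pipeline.
# All dimensions and layer choices are illustrative assumptions.
import torch
import torch.nn as nn

class SimplePestCNN(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # local feature extraction
            nn.ReLU(),
            nn.MaxPool2d(2),                              # subsampling / compression
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # fully connected output

    def forward(self, x):
        x = self.features(x)                 # (N, 32, 56, 56) for 224x224 inputs
        return self.classifier(torch.flatten(x, 1))

logits = SimplePestCNN()(torch.randn(1, 3, 224, 224))  # output shape: (1, 4)
```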

2.2.2 Design of evaluation metrics for the recognition system

Upon the extraction of image features and completion of localization via the CNN, establishing evaluation metrics for the system becomes imperative to assess whether the output aligns with the specified criteria. In instances where the results adhere to the predefined range, they are considered valid outputs; otherwise, a further assessment of image features is necessitated. Key evaluation metrics adopted include the Intersection over Union (IoU), precision, recall, F-measure, and the precision-recall (PR) curve.

The IoU metric quantitatively assesses the extent of overlap between the predicted outcomes and the actual results. It is computed using the following formula:

$ IoU=\frac{{{S}_{Overlap}}}{{{S}_{Union}}}$
(6)

Here, IoU is expressed as the proportion of the intersection ${{S}_{Overlap}}$ compared to the union ${{S}_{Union}}$ of the predicted and actual results.
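A minimal sketch of Eq. (6) for axis-aligned bounding boxes is given below; representing each box by its (x1, y1, x2, y2) corner coordinates is an assumption made for illustration.

```python
# Eq. (6) evaluated for two axis-aligned bounding boxes given as (x1, y1, x2, y2).
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle between the predicted and actual boxes
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    overlap = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1)
             - overlap)
    return overlap / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```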

Precision in this context is defined as the system's accuracy in identifying image features and is calculated as:

${{Precision}}=\frac{TP}{ZP} $
(7)

where, $TP$ denotes the count of samples accurately identified as positive by the system, and $ZP$ represents the total number of samples the system classifies as positive. Recall measures the system's ability to retrieve all actual positive samples, computed using:

$ {Recall}=\frac{TP}{ZN}$
(8)

where, $ZN$ signifies the actual count of positive samples. For practical application, a balance between precision and recall is sought, achieved through the F-measure, which is articulated as:

$F = \frac{(1 + \beta^2) \cdot \textit{Precision} \cdot \textit{Recall}}{\beta^2 \cdot \textit{Precision} + \textit{Recall}} $
(9)

where, $\beta$ is a weighting factor that balances the relative importance of recall against precision; when $\beta = 1$, the F-measure reduces to the harmonic mean of precision and recall.
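The following sketch computes Eqs. (7)-(9) using the notation above ($TP$, $ZP$, $ZN$); the sample counts in the usage example are illustrative only.

```python
# Precision, recall, and F-measure following Eqs. (7)-(9):
# tp = correctly identified positives, zp = samples predicted positive,
# zn = actual positive samples.
def precision(tp: int, zp: int) -> float:
    return tp / zp

def recall(tp: int, zn: int) -> float:
    return tp / zn

def f_measure(p: float, r: float, beta: float = 1.0) -> float:
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)

p, r = precision(80, 100), recall(80, 120)   # illustrative counts only
print(f_measure(p, r))                        # beta = 1: harmonic mean of p and r
```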

In the PR curve, precision and recall are delineated as the x and y coordinates, respectively. This curve illustrates the relationship between precision and recall at varying threshold values, as demonstrated in Figure 6.

Within this plot, curves are compared directly: a curve that encloses a larger area indicates better performance. As depicted in Figure 6, the performance denoted by Curves A and B is observed to surpass that of Curve C. The break-even point of a curve, where precision equals recall, serves as an additional comparative metric; a higher break-even point indicates more favorable behavior. Utilizing these analytical methods enables the determination of image feature values and the achievement of accurate localization.

Figure 6. PR curve
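A PR curve such as the one in Figure 6 can be generated from a model's predicted confidences, for example with scikit-learn as sketched below; the labels and scores are dummy values, and the axes follow the convention stated above.

```python
# Sketch of generating a PR curve from predicted confidence scores.
# The labels and scores are dummy values chosen only for illustration.
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]                              # ground-truth labels
y_score = [0.9, 0.4, 0.8, 0.65, 0.3, 0.7, 0.55, 0.2, 0.85, 0.6]      # model confidences

precision_vals, recall_vals, thresholds = precision_recall_curve(y_true, y_score)
# Axes follow the convention stated in the text: precision on x, recall on y.
plt.plot(precision_vals, recall_vals)
plt.xlabel("Precision")
plt.ylabel("Recall")
plt.title("PR curve")
plt.show()
```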

3. Experiments

To evaluate the effectiveness of the pest and disease recognition system developed for agricultural drones, a comprehensive series of experiments was undertaken. Initially, the system required training with a specific dataset, followed by a testing phase. Additionally, an emphasis was placed on the system's portability, namely its capability to adapt to entirely new datasets. Consequently, the experimental design encompassed both training and testing trials, as well as the application of the model to alternative datasets.

3.1 Training and Testing Experiments

For the training and testing phase, grape leaves, characterized by their distinct pest features, were selected as the primary subject. Although the Plant Village dataset is extensively utilized in the domain of plant leaf pest and disease visual detection, it predominantly comprises images acquired under laboratory conditions rather than natural environments. Therefore, in this study, images captured by agricultural drones, representing pest and disease instances in actual natural settings, were employed. To resolve issues related to dataset accessibility and nomenclature, the downloaded data files were meticulously processed and renamed, facilitating the formation of a dataset suitable for subsequent training. The selected pest images underwent preprocessing and data augmentation procedures, including rotation, flipping, scaling, and translation. These processes were essential to conform to the size requirements of the chosen model and to mitigate the limitations posed by a smaller dataset size, which could potentially impact the efficacy of model training. This methodology also aimed to enhance the generalization capability of the model trained on this dataset. Post preprocessing, pest image samples were inputted for model training, which was conducted in alignment with the target detection recognition algorithm. Following the completion of the model training, pest and disease recognition and detection were executed using the obtained model weight files. The dataset utilized in the system included 1000 grape leaf images, encompassing 184 images of healthy leaves, 328 images of brown spot disease leaves, 301 images of black rot disease leaves, and 187 images of ring spot disease leaves. A random selection of 70% of these images served as the training data, while the remaining 30% were used for testing purposes. The outcomes of these experiments are detailed in Table 1.
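The preprocessing, augmentation, and 70/30 split described above can be sketched as follows; torchvision is assumed here, and the parameter ranges, target image size, and dataset directory name are illustrative rather than values reported in this study.

```python
# Sketch of the preprocessing pipeline: the listed augmentations (rotation, flipping,
# scaling, translation) and a random 70/30 train/test split. Parameter ranges, the
# 224x224 target size, and the dataset path are assumptions.
import torch
from torchvision import datasets, transforms
from torch.utils.data import random_split

augment = transforms.Compose([
    transforms.RandomRotation(30),                       # rotation
    transforms.RandomHorizontalFlip(),                   # flipping
    transforms.RandomAffine(degrees=0,
                            translate=(0.1, 0.1),        # translation
                            scale=(0.8, 1.2)),           # scaling
    transforms.Resize((224, 224)),                       # conform to model input size
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("grape_leaf_dataset/", transform=augment)  # assumed path
n_train = int(0.7 * len(dataset))
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train],
                                   generator=torch.Generator().manual_seed(0))
```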

Table 1. Results of training and testing experiments

| Dataset | Leaf Type | Number of Samples | Correct Identifications | Identification Rate (%) |
|---|---|---|---|---|
| Training set | Healthy | 130 | 122 | 93.8 |
| Training set | Affected by diseases and pests | 57 | 55 | 96.4 |
| Testing set | Healthy | 54 | 51 | 94.4 |
| Testing set | Affected by diseases and pests | 246 | 230 | 93.5 |

As presented in Table 1, the results from both the training and testing sets of the experiment demonstrate a noteworthy achievement in identification accuracy, surpassing 93.5%. This high level of accuracy, coupled with the system's automation and intelligence, aligns with the stipulated accuracy requirements for the pest and disease recognition system.

3.2 Testing the Recognition System on Other Training Sets

The evaluation of the recognition system's portability was conducted by applying it to tomato, corn, and potato leaves for pest and disease identification. This phase encompassed an initial training with respective samples, followed by a subsequent testing phase. The results of these tests are detailed in Table 2.

Table 2. Test results of the recognition system on other training sets

| Plant Type | Leaf Type | Number of Samples | Correct Identifications | Identification Rate (%) |
|---|---|---|---|---|
| Tomato leaf | Healthy | 320 | 309 | 96.5 |
| Tomato leaf | Affected by diseases and pests | 250 | 241 | |
| Corn leaf | Healthy | 310 | 303 | 97.2 |
| Corn leaf | Affected by diseases and pests | 240 | 232 | |
| Potato leaf | Healthy | 300 | 290 | 97.4 |
| Potato leaf | Affected by diseases and pests | 230 | 226 | |

As illustrated in Table 2, the system exhibited identification accuracies of 96.5% for tomato leaves, 97.2% for corn leaves, and 97.4% for potato leaves, underscoring its remarkable portability.

The outcomes from both the training and testing experiments, along with the tests on additional training sets, corroborate that the deep learning-based pest and disease detection system developed in this study substantially bolsters the implementation of deep learning in crop pest and disease recognition. This enhancement markedly improves the efficiency and accuracy of pest and disease identification and detection, thus providing valuable insights for the furtherance of pest and disease detection technologies in practical applications.

4. Conclusions

This study addressed the challenges of manual pest and disease identification in agriculture, characterized by low accuracy, slow speed, and high labor costs, by exploring a deep learning-based recognition system for agricultural drones. Deep learning, a leading technology in image detection, has been effectively applied to agricultural drones, enabling intelligent, automated detection and identification of pests and diseases. This approach significantly surpasses traditional image processing methods by autonomously extracting image features, thereby enhancing the accuracy in identifying characteristics of pests and diseases. Such advancements contribute a novel methodology to the prevention and treatment of crop pests and diseases, holding substantial value for the advancement of intelligent and precision agriculture. The developed recognition system encompasses a three-tier architecture: the sensing layer, transmission layer, and application layer. Deep learning algorithms were instrumental in the artificial intelligence recognition process, with specific focus on designing both the CNN and the recognition system's evaluation metrics. Comprehensive training and testing trials were conducted on the system, supplemented by testing on various datasets. The findings reveal that the system not only possesses a high recognition rate for viruses but also demonstrates commendable portability, effectively addressing the challenges associated with detecting crop pests and diseases using agricultural drones. The system's potential for broad application signifies a significant step towards the further intelligent evolution of agriculture. Additionally, the integration of drone technology and deep learning presents expansive possibilities, such as in the detection and analysis of environmental elements like land surfaces and water quality, thus enhancing the efficiency and precision in areas like environmental monitoring, disaster assessment, and land use planning.

The methodologies employed in this research were constrained by limited datasets, presenting significant challenges for future studies. In real-world agricultural settings, the diversity of crop types and the plethora of pest and disease species necessitate the recognition and detection of a wider array of crops and their afflictions, marking a critical direction for future research endeavors. The dataset utilized in this study, while pivotal, was not exhaustive, and there is a noticeable scarcity of images depicting crop pests and diseases under varied environmental conditions. Future research should focus on continuously augmenting and developing these datasets, expanding their scope to surmount the limitations inherent in current research methodologies, thereby enhancing the stability and adaptability of the algorithms. Investigating the states of crops prior to the onset of pests and diseases, with the aim of achieving preventative measures, emerges as a prospective trend in this field. Moreover, a paramount task in the evolution of the pest and disease recognition system for agricultural drones is the refinement of algorithms. The objective is to achieve an optimal balance between recognition accuracy and processing speed, which remains a pivotal area of focus for upcoming research initiatives.

Funding
The study was funded by Scientific Research Fund Project of Zhengzhou University of Economics and Trade Young in 2022 (Grant No.: QK2228), and Henan Province Higher Education Institution College Student Innovation Training Program Project in 2023 (Grant No.: 202310465013).
Data Availability

The data used to support the research findings are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References
1. L. Zhang, J. Pang, X. Chen, and Z. Lu, “Carbon emissions, energy consumption and economic growth: Evidence from the agricultural sector of China’s main grain-producing areas,” Sci. Total Environ., vol. 665, pp. 1017–1025, 2019.
2. H. O. Guo, S. Xu, and C. L. Pan, “Measurement of the spatial complexity and its influencing factors of agricultural green development in China,” Sustainability, vol. 12, no. 21, p. 9259, 2020.
3. L. C. Ngugi, M. Abelwahab, and M. Abo-Zahhad, “Recent advances in image processing techniques for automated leaf pest and disease recognition–A review,” Inf. Process. Agric., vol. 8, no. 1, pp. 27–51, 2021.
4. A. Kumar, S. Sarkar, and C. Pradhan, “Recommendation system for crop identification and pest control technique in agriculture,” in 2019 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 2019, pp. 185–189.
5. Y. Yuan, L. Chen, H. R. Wu, and L. Lin, “Advanced agricultural disease image recognition technologies: A review,” Inf. Process. Agric., vol. 9, no. 1, pp. 48–59, 2022.
6. D. M. Gao, Q. Sun, B. Hu, and S. Zhang, “A framework for agricultural pest and disease monitoring based on Internet-of-Things and unmanned aerial vehicles,” Sensors, vol. 20, no. 5, p. 1487, 2020.
7. T. Daniya and S. Vigneshwari, “A review on machine learning techniques for rice plant disease detection in agricultural research,” Int. J. Adv. Sci. Technol., vol. 28, no. 13, pp. 49–62, 2019.
8. K. Thenmozhi and U. S. Reddy, “Crop pest classification based on deep convolutional neural network and transfer learning,” Comput. Electron. Agric., vol. 164, p. 104906, 2019.
9. C. J. Chen, Y. Y. Huang, Y. S. Li, C. Y. Chang, Y. C. Chen, and Y. M. Huang, “Identification of fruit tree pests with deep learning on embedded drone to achieve accurate pesticide spraying,” IEEE Access, vol. 9, pp. 21986–21997, 2021.
10. G. J. Sun, S. H. Liu, H. L. Luo, et al., “Intelligent monitoring system of migratory pests based on searchlight trap and machine vision,” Front. Plant Sci., vol. 13, p. 897739, 2022.
11. A. Devaraj, K. Rathan, S. Jaahnavi, and K. Indira, “Identification of plant disease using image processing technique,” in 2019 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 2019, pp. 749–753.
12. S. S. Kumar and B. K. Raghavendra, “Diseases detection of various plant leaf using image processing techniques: A review,” in 2019 5th International Conference on Advanced Computing & Communication Systems (ICACCS), Coimbatore, India, 2019, pp. 313–316.
13. Y. Ai, C. Sun, J. Tie, and X. T. Cai, “Research on recognition model of crop diseases and insect pests based on deep learning in harsh environments,” IEEE Access, vol. 8, pp. 171686–171693, 2020.
14. M. Türkoğlu and D. Hanbay, “Plant disease and pest detection using deep learning-based features,” Turk. J. Electr. Eng. Comput. Sci., vol. 27, no. 3, pp. 1636–1651, 2019.
15. D. J. A. Rustia, J. J. Chao, L. Y. Chiu, Y. F. Wu, J. Y. Chung, J. C. Hsu, and T. T. Lin, “Automatic greenhouse insect pest detection and recognition based on a cascaded deep learning classification method,” J. Appl. Entomol., vol. 145, no. 3, pp. 206–222, 2021.
16. C. R. Rahman, P. S. Arko, M. E. Ali, M. A. I. Khan, S. H. Apon, F. Nowrin, and A. Wasif, “Identification and recognition of rice diseases and pests using convolutional neural networks,” Biosyst. Eng., vol. 194, pp. 112–120, 2020.
17. S. H. Lee, S. R. Lin, and S. F. Chen, “Identification of tea foliar diseases and pest damage under practical field conditions using a convolutional neural network,” Plant Pathol., vol. 69, no. 9, pp. 1731–1739, 2020.
18. H. Kuzuhara, H. Takimoto, Y. Sato, and A. Kanagawa, “Insect pest detection and identification method based on deep learning for realizing a pest control system,” in 2020 59th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE), Chiang Mai, Thailand, 2020, pp. 709–714.
19. J. L. Kong, H. X. Wang, C. C. Yang, X. B. Jin, M. Zuo, and X. Zhang, “A spatial feature-enhanced attention neural network with high-order pooling representation for application in pest and disease recognition,” Agriculture, vol. 12, no. 4, p. 500, 2022.
20. Y. S. Peng and Y. Wang, “CNN and transformer framework for insect pest classification,” Ecol. Inform., vol. 72, p. 101846, 2022.
21. Z. Li, F. Liu, W. J. Yang, S. H. Peng, and J. Zhou, “A survey of convolutional neural networks: Analysis, applications, and prospects,” IEEE Trans. Neural Netw. Learn. Syst., vol. 33, no. 12, pp. 6999–7019, 2021.

©2024 by the author(s). Published by Acadlore Publishing Services Limited, Hong Kong. This article is available for free download and can be reused and cited, provided that the original published version is credited, under the CC BY 4.0 license.