
A robot that uses a machine learning algorithm to detect plant leaf diseases

Abstract
For food security to be guaranteed, plant health and agricultural output are
essential. However, lower yields are frequently the consequence of ineffective
agricultural methods and plant diseases. This project offers a novel "Leaf Disease
Detection and Fertilizer Spraying Robot," which integrates automation and Internet of
Things technologies for efficient farm management in order to address these issues,
and is integrated with a solar-powered energy system.
There are two modes of operation for the robot: manual and automated. In manual
mode, a virtual joystick in the Blynk app is used to operate the robot precisely, while
in automated mode the robot uses ultrasonic sensors to identify obstacles and navigate
on its own. The robot's primary functions, such as mobility driven by four 60 RPM
motors and an L298N motor driver, are controlled by an ESP32 microcontroller. The
robot's adaptability is further increased by additional features, including an automatic
seed-sowing mechanism controlled by a servo motor and a plowing configuration
powered by two 10 RPM motors.
A Raspberry Pi with a camera module is used to identify leaf diseases in real
time for plant health monitoring. In order to ensure that afflicted plants receive
targeted treatment, the system uses machine learning-based image processing
algorithms to identify illnesses and activate a relay that turns on a fertilizer spraying
pump. The Blynk cloud integration of the system offers smooth control, real-time
monitoring, and intuitive operation.
For small and medium-sized farmers, this project provides a complete solution
to maximize crop care, minimize manual work, and boost productivity. The robot
exemplifies a realistic approach to upgrading agriculture by using cutting-edge
technology like computer vision and the Internet of Things. By facilitating accurate
fertilizer delivery and early disease diagnosis, the technology is affordable, scalable,
and supports sustainable agricultural methods.
CHAPTER 1
INTRODUCTION
The foundation of the world economy, agriculture provides millions of people
with food, raw resources, and a means of subsistence. Traditional agricultural
methods, however, are frequently time-consuming and vulnerable to inefficiencies
brought on by unidentified plant diseases and suboptimal resource management. To
ensure maximum crop production and stop the spread of infections, early diagnosis of
plant illnesses is essential. Increasing agricultural output simultaneously requires
efficient planting, plowing, and fertilizer techniques.
IoT (Internet of Things) and automation technologies have become
revolutionary instruments for tackling these issues in agriculture. Smart agricultural
systems may automate time-consuming chores, improve resource utilization, and
monitor plant health by combining robots, machine learning, and the Internet of
Things. In order to increase farming productivity and lessen dependency on manual
labor, this project presents a Leaf Disease Detection and Fertilizer Spraying Robot.
The robot ensures a cutting-edge, effective, and scalable crop management solution by
fusing real-time disease diagnosis with IoT-enabled control.
1.1 Problem Statement
Maintaining plant health and increasing agricultural output are major issues for
farmers because of:
Undetected Plant Diseases: Many plant diseases generate large output losses because
they spread quickly and go undetected until they have done a great deal of harm.
Manual Labor Intensity: Farming procedures like plowing, seeding, and fertilizing are
labor-intensive and time-consuming, which makes them difficult to sustain in large-scale
enterprises.
Inefficient Fertilizer Application: Excessive or irregular fertilizer usage can damage
plants, degrade soil, and raise operating expenses.
Limited Technological Adoption: The productivity and development potential of
small and medium-sized farmers are typically limited by their lack of access to
sophisticated equipment for automation and real-time disease diagnosis.
1.2 Objectives
The primary objectives of this project are:
Implement Leaf Disease Detection: Utilize machine learning-based image
processing techniques to identify plant diseases in real time using a Raspberry Pi and a
camera.
Enable Targeted Fertilizer Spraying: Optimize resource utilization and reduce waste by
integrating a relay-controlled pump system that sprays fertilizer only on unhealthy plants.
Provide IoT-based Control: Use the Blynk cloud platform with an ESP32
microcontroller to provide remote control and robot monitoring in both manual and
automated modes.
1.3 Scope of the Project
For use in small- and medium-sized farms, the Leaf Disease Detection and
Fertilizer Spraying Robot was created. Among the project's salient aspects are:
 Autonomous and manual operation modes for flexibility.
 Integration of machine learning for disease detection.
 IoT-enabled monitoring and control via the Blynk platform.
 Modular design for scalability and adaptability to diverse farming needs.
The foundation for future developments in agricultural automation is laid by this
project, which may be expanded to incorporate soil analysis, weather monitoring, and
multi-crop assistance.
CHAPTER 2
LITERATURE SURVEY
2.1 Abed, Sudad H., Alaa S. Al-Waisy, Hussam J. Mohammed, and Shumoos Al-
Fahdawi. "A modern deep learning framework in robot vision for automated
bean leaves diseases detection." International Journal of Intelligent Robotics and
Applications 5, no. 2 (2021): 235-251.
Numerous diseases, including bean rust and angular leaf spots, can affect bean
leaves, seriously harming bean crops and reducing their yield. Therefore, addressing
these illnesses early on can enhance the product's quality and quantity. Recently, a
number of robotic frameworks based on artificial intelligence and image processing
have been employed to automatically cure various illnesses. But if the diseased leaf is
misdiagnosed, chemical treatments for healthy leaves may be used, which won't fix
the problem and might be expensive and dangerous. A cutting-edge deep learning
framework in robot vision is suggested as a solution to these problems in order to
identify bean leaf illnesses early. The two main steps of the suggested framework are
identifying the bean leaves in the input photos and determining if the leaves have any
illnesses. In order to identify the bean leaves in input photos taken under uncontrolled
environmental settings, the U-Net architecture—which is based on a pre-trained
ResNet34 encoder—is used. To determine the healthiness of bean leaves, the
classification stage properly evaluates the performance of five different deep learning
models (e.g., Densenet121, ResNet34, ResNet50, VGG-16, and VGG-19).
1295 photos from three different classes (Healthy, Angular Leaf Spot, and Bean Rust)
are used to assess the efficacy of the suggested framework. With a CAR of 98.31%,
Sensitivity of 99.03%, Specificity of 96.82%, Precision of 98.45%, F1-Score of
98.74%, and AUC of 100%, the Densenet121 model performs best in the binary
classification challenge. In the multi-classification challenge, the same model yields a
CAR of 91.01%, taking less than two seconds per image to reach the final judgment.

2.2 Rahul, M. S. P., and M. Rajesh. "Image processing based automatic plant
disease detection and stem cutting robot." In 2020 Third International
Conference on Smart Systems and Inventive Technology (ICSSIT), pp. 889-894.
IEEE, 2020.
With an emphasis on coffee plantations and cotton plants, the reference paper
investigates the creation of autonomous systems for managing agricultural pests and
diseases. Pests like Xylotrechus Quadripes (Coffee White Stem Borer), which can
reduce yields by up to 70%, pose a threat to coffee, a major crop in the mountainous
areas of southern India. In order to protect the ecosystem and water bodies while
reducing pollution, the report emphasizes the need for organic agricultural practices.
The problem of stem cutting, where damaged stems must be cut off before fruit ripens
to minimize harm, is addressed by a robotic system. With economic viability as a key
factor, this system seeks to effectively and autonomously detect and remove damaged
or pest-affected stems.
The study focuses on the use of image processing techniques for disease
identification and classification in the cotton area. Machine learning-based
categorization algorithms identify diseases with a 79% accuracy rate, including
Alternaria, Bacterial Blight, and Myrothecium. In 20% of instances, healthy leaves
were mistakenly classified as sick, and vice versa. The main goal of the integrated
system is to walk across fields on its own, take pictures of plants, check them for
illnesses, and eliminate those that are ill. Because plants in the field have a
homogeneous look ("sea of green"), it is difficult to identify the right target, and path
planning becomes a major challenge.
The study lays the groundwork for fusing robots with image processing to
produce practical and effective answers to problems in agriculture. By combining
reliable path-planning algorithms with image-based illness diagnosis, future research
seeks to increase the system's autonomy. This would allow the robot to move about on
its own, precisely detect sick plants, and take the appropriate action. By lowering
human labor and guaranteeing sustainable farming methods, such developments have
the potential to completely transform crop management.
2.3 Nooraiyeen, Aamina. "Robotic vehicle for automated detection of leaf
diseases." In 2020 IEEE International Conference on Electronics, Computing and
Communication Technologies (CONECCT), pp. 1-6. IEEE, 2020.
The growing need for automated technology has sparked the creation of
creative solutions that improve speed, accuracy, and efficiency across a range of
industries. According to the reference study, one such method is the combination of
image processing and robotics for the identification of plant leaf disease. The study
highlights the difficulties with traditional illness detection techniques, which depend
on visual inspection and can produce erroneous and inconsistent results. The authors
suggest an autonomous robotic system using a network of image sensors and a
microprocessor to solve this problem. For convenience and portability, the system
uses voice control, which allows it to move over uneven ground in gardens or fields.
The robotic vehicle efficiently detects plant illnesses, such as those affecting basil, using image
processing methods like K-means clustering and Support Vector Machine (SVM)
algorithms, and notifies the user with practical remedies.
This method minimizes the need for human intervention by combining accurate
illness diagnosis with autonomous mobility in a novel way. The outcomes show how
these systems may be used to enhance farming methods. The suggested approach,
which combines robotics and image processing, provides a practical and affordable
way to manage leaf diseases while guaranteeing precise diagnosis and prompt
response.
In order to enable smooth monitoring, the research also emphasizes the usage
of a Bluetooth module for data transfer between the robotic vehicle and a distant
computer system. According to the results, the system may be used in small-scale
agricultural settings since it produces the best outcomes with the least amount of
computing work. By effectively identifying and categorizing illnesses, the
combination of clustering and SVM algorithms helped to extend the life of plants. All
things considered, this study emphasizes how automation and robots may boost
agricultural output and highlights the possibility of wider precision farming
applications.
2.4 Fernando, Sandunika, Ranusha Nethmi, Ashen Silva, Ayesh Perera, Rajitha
De Silva, and Pradeep KW Abeygunawardhana. "Intelligent disease detection
system for greenhouse with a robotic monitoring system." In 2020 2nd
International Conference on Advancements in Computing (ICAC), vol. 1, pp. 204-
209. IEEE, 2020.
Greenhouse farming creates controlled climatic conditions that encourage
ideal plant development and is essential to the agricultural industry. However, new
research shows that agricultural yields under greenhouse conditions have decreased,
mostly as a result of bacterial infections, pests, and microorganisms. Current methods
frequently apply pesticides carelessly without taking into account the needs of certain
plants, which creates a number of ecological and financial problems. As a result,
scientists have put forward sophisticated methods for identifying diseases in
greenhouse plants early on. One noteworthy study looks at illnesses that cause tomato
plants to become yellow, which is a serious problem in greenhouse-controlled
settings. The study focuses on early-stage illness diagnosis by utilizing methods
including image processing, machine learning, and deep learning, which allow for
prompt intervention to reduce production losses.
The study's technique combines automated illness detection with robotics.
Using a camera on a robotic arm, a robotic system using a Raspberry Pi CPU takes
pictures of plant leaves. After undergoing pre-processing to identify illness signs,
machine learning algorithms are used to evaluate the photos. Additionally, the robot
keeps an eye on environmental factors like humidity and temperature, comparing them
to ideal circumstances for plant health. A database and a backend processing system
are housed on a centralized server that handles disease detection and leaf analysis.
Furthermore, a graphical user interface (GUI) improves usability for farmers by
providing a remote, interactive platform for greenhouse plant monitoring.
By adding additional technologies and broadening the scope of plant illnesses
that may be detected, the research shows promise for scalability. The technology lays
the groundwork for more thorough disease diagnostic solutions in greenhouse
farming, despite being restricted to two particular illnesses that impact several plant
species. With the goals of increasing productivity, lowering chemical overuse, and
guaranteeing better crop production, this work highlights the revolutionary role that
robots and machine learning play in sustainable agriculture.
2.5 Dharanika, T., S. Ruban Karthik, S. Sabhariesh Vel, S. Vyaas, and S.
Yogeshwaran. "Automatic leaf disease identification and fertilizer agrobot."
In 2021 7th International Conference on Advanced Computing and
Communication Systems (ICACCS), vol. 1, pp. 1341-1344. IEEE, 2021.
Plant diseases that cause food loss and lower crop yields pose serious problems
for agriculture, a crucial industry that helps meet the population's expanding
requirements. An IoT-based system for automated disease diagnosis and pesticide
spraying is one creative way to handle these issues, according to a cited study. The
suggested approach analyzes plant health using image processing techniques, with a
special emphasis on identifying afflicted leaf sections. When a sickness is detected,
the system instantly notifies the person in question and uses automated robots to apply
pesticides, minimizing the need for manual work. This approach is a realistic and
effective way to reduce the amount of time and effort needed, as well as the
requirement for professional supervision.
According to the study's findings, the treatment of plant diseases may be
greatly simplified by the use of such automated systems. The model provides an
efficient substitute for time-consuming and labor-intensive traditional approaches by
combining image processing with Internet of Things technologies. Additionally, the
automation of pesticide spraying guarantees accuracy in treating contaminated areas,
minimizing resource waste and exposure to hazardous chemicals. The paper makes
recommendations for future improvements, such as expanding the detection
capabilities to detect illnesses in stems and roots and allowing real-time feedback
when applying pesticides. Additionally, adding navigation controls through an
Android interface may make the system more user-friendly by enabling farmers to
track and direct the robot's movements in real time.
All things considered, this strategy is a promising development in agricultural
technology that seeks to boost crop yields while lowering labor dependence and
operating inefficiencies. IoT and robotics integration in agriculture opens the door to
sustainable farming methods while tackling important issues with crop health
maintenance and disease control.
2.6 Ahmed, Shahad, and Saman Hameed Ameen. "Detection and classification of
leaf disease using deep learning for a greenhouses’ robot." Iraqi Journal of
Computers, Communications, Control and Systems Engineering 21, no. 4 (2021):
15-28.
Early detection of plant diseases is essential because they present serious
threats to the economy, ecology, and public health. In order to overcome the
difficulties of early illness diagnosis, especially in areas with limited infrastructure
and resources like Iraq, a cited research makes use of deep learning techniques. The
researchers used pictures from two datasets (PlantVillage and a cotton dataset) and
several convolutional neural network (CNN) architectures to identify plant illnesses.
While the cotton dataset included 2,204 training pictures and 106 testing images
covering four classes, the PlantVillage dataset had 10,190 photos covering four crops
with ten classes of damaged and healthy leaves. With the greatest accuracy (99.908%)
among the studied models, VGG16 was determined to be the most appropriate for
incorporation into an autonomous greenhouse robot operating in real time.
In order to improve the performance of pre-trained CNN architectures such as
Inception-v3, ResNet50, Squeezenet1-1, and VGG16, the study also investigated
transfer learning techniques, contrasting shallow and deep approaches. For crucial
applications like plant disease identification, deep transfer learning proved more
dependable owing to its higher accuracy and lower training loss, even though
shallow transfer learning produced faster results. The results underlined that the best
option for incorporation into autonomous systems to successfully stop the spread of
plant diseases is VGG16 trained using deep transfer learning. The
authors also paved the path for future research by highlighting the possibility of using
localized datasets to further enhance detection algorithms.
This study emphasizes the value of strong architectures for disease detection
and the effectiveness of deep learning in agricultural applications. In order to improve
system accuracy and flexibility, it also promotes the use of region-specific datasets
and sophisticated transfer learning algorithms. The results lay a solid basis for the
creation of intelligent robotic systems for environmentally friendly farming practices.
2.7 Hidayah, AH Nurul, Syafeeza Ahmad Radzi, Norazlina Abdul Razak, Wira
Hidayat Mohd Saad, Y. C. Wong, and A. Azureen Naja. "Disease Detection of
Solanaceous Crops Using Deep Learning for Robot Vision." Journal of Robotics
and Control (JRC) 3, no. 6 (2022): 790-799.
Traditionally, farmers manually diagnose and monitor plant illnesses,
nutritional shortages, regulated irrigation, and controlled fertilizers and pesticides to
manage the crops from the early development stage to the mature harvest stage.
Because many agricultural diseases look very similar, even farmers find it challenging to
identify them with their unaided eyes. Since it may increase crop yield in both quality
and quantity, identifying the right illnesses is essential. With the development of
artificial intelligence (AI) technology, a robot that replicates a farmer's skills can
automate all crop management activities. Another difficulty to take into account is
creating a robot with human-like abilities, particularly in real-time crop disease
detection. The goal of further studies is to increase mean average precision, and
YOLOv5 has produced the best result to date, with a mean Average Precision (mAP) of
93%. In order to detect solanaceous crop diseases for robot vision, this article focuses
on object detection using a Convolutional Neural Network (CNN) architecture. The
contribution of this study was the reporting of developmental details and a proposed
remedy for problems that arose throughout the investigation. Furthermore, the results
of this study are anticipated to be used as the robot's vision algorithm. Images of four
crops (tomato, potato, eggplant, and pepper), covering 23 classes of healthy and sick
crops with leaf and fruit infections, are used in this study. The dataset used is a
combination of self-collected samples and the publicly available PlantVillage dataset.
The whole dataset for all 23 classes comprises 16,580 photos, which is separated into
three sections: the training set, the validation set, and the testing set. Eighty-eight
percent of the overall dataset (15,000 photos) was utilized for training, eight percent
underwent validation (1,400 images), and the remaining four percent (699 images) was
used for testing. With a 94.2% mAP, YOLOv5's performance was more reliable,
and its speed was somewhat quicker than Scaled-YOLOv4's. This object detection-based
method has shown promise as a real-time, effective way to identify crop diseases.
2.8 Cubero, Sergio, Ester Marco-Noales, Nuria Aleixos, Silvia Barbé, and Jose
Blasco. "Robhortic: A field robot to detect pests and diseases in horticultural
crops by proximal sensing." Agriculture 10, no. 7 (2020): 276.
The referenced study shows how RobHortic, a remote-controlled
field robot, was created to use proximate sensing to find pests and diseases in
horticultural crops. To monitor crops in controlled illumination, the robot incorporates
cutting-edge imaging technologies, such as colour, multispectral, and hyperspectral
cameras (400–1000 nm range). A protective sheet and halogen bulbs were used to
lessen the interference of natural sunlight. A GNSS receiver made sure that the
gathered data was geospatially referenced, allowing for high-accuracy mapping. The
robot's mobility was synchronised with the system's software, guaranteeing accurate
geolocation and image capturing while in use. The robot's ability to identify bacterial
infections brought on by Candidatus Liberibacter solanacearum (CaLsol) was tested
over a three-year period in commercial carrot farms. Molecular PCR analyses were
used as a benchmark, and the robot's detection accuracy utilising Partial Least
Squares-Discriminant Analysis (PLS-DA) was 66.4% in the lab and 59.8% in the
field.
With notable developments in spectral imaging and data processing, the study
highlights the potential of robotics in precision agriculture. RobHortic delivers higher
resolution than drone-based systems, allowing for leaf-level analysis with spatial
resolutions of 1–2.5 mm per pixel. Furthermore, accurate geographic reference with
an accuracy of roughly 3 cm is ensured by the combination of GNSS and RTK
correction. For crop monitoring, this combination makes it easier to create
comprehensive spectral index maps. According to the results, optical sensing, when
paired with sophisticated modelling methodologies such as PLS-DA, LDA, QDA, and
SVM, can identify asymptomatic bacterial infections with reasonable accuracy, which
demonstrates the viability of such systems for early disease diagnosis. But there are
still issues, such as the requirement for scalability for
wider agricultural uses and increased detection accuracy in field settings. This study
makes a substantial contribution to the field of agricultural robotics by emphasising
how it might enhance crop health monitoring and sustainable farming methods.
2.9 Forhad, Shamim, Kazi Zakaria Tayef, Mahamudul Hasan, A. N. M.
Shahebul Hasan, Md Zahurul Islam, and Md Riazat Kabir Shuvo. "An
autonomous agricultural robot for plant disease detection." In The Fourth
Industrial Revolution and Beyond: Select Proceedings of IC4IR+, pp. 695-708.
Singapore: Springer Nature Singapore, 2023.
The bulk of the workforce depends on farming for employment, making it an
important sector of a nation's economy. Plant diseases, on the other hand, present a
serious problem as they affect crop quality and yield. Conventional illness detection
techniques frequently involve physical labour, which is labour-intensive and prone to
mistakes. This problem has been creatively solved by recent developments in robots
and artificial intelligence (AI), which offer a more effective method of monitoring and
identifying plant illnesses.
Convolutional Neural Networks (CNN), a sophisticated image processing
method, are one potential method for detecting plant diseases. CNNs have
demonstrated significant promise in precisely detecting a range of plant diseases using
crop image analysis. Farmers' production and efficiency are greatly increased when
CNNs are integrated into robotic systems to identify diseases in real time. Robots
using image processing models, such VGG16, have been developed to identify plant
illnesses and give farmers prompt feedback. By putting these technologies in place,
farmers can detect infections early and take the necessary steps to prevent crop harm.
Some systems include robotic elements like pesticide sprinkling, which
automates the application procedure and eliminates the need for physical labour, in
addition to disease detection. These robots are also sustainable since they are powered
by solar energy, which makes them economical and environmentally beneficial
options for farmers. Because they offer ongoing plant health monitoring and enhance
the general health of the crops, these systems are also advantageous for long-term
agricultural profitability.
The broad use of these technologies is essential to the future of precision
agriculture. These robots will help enhance agricultural productivity, lessen their
impact on the environment, and boost farmers' profits as they become more widely
available and reasonably priced. AI and robotics developments hold great promise for
transforming the agriculture sector and opening the door to a more productive and
sustainable farming future.
2.10 Xenakis, Apostolos, Georgios Papastergiou, Vassilis C. Gerogiannis, and
George Stamoulis. "Applying a convolutional neural network in an IoT robotic
system for plant disease diagnosis." In 2020 11th International Conference on
Information, Intelligence, Systems and Applications (IISA), pp. 1-8. IEEE, 2020.
The study tackles a crucial problem in agriculture: the early identification of
plant diseases, which has a big influence on crop quality and productivity. Because
plant diseases are a serious hazard to agriculture, prompt identification is crucial for
preventing financial losses. Significant production harm results from traditional illness
detection techniques' frequent inability to identify issues in their early phases. As a
result, using cutting-edge technology to identify plant diseases early on is essential for
implementing more effective management strategies.
A Plant Disease Diagnosis Support System (DDSS) that combines a robotic
system and an Internet of Things (IoT) platform is the suggested remedy in the article.
In order to categorise plant illnesses and assess the health of plants, this system makes
use of artificial intelligence (AI), more specifically a convolutional neural network
(CNN). The DDSS analyses plant conditions using data analytics and inference
algorithms, which allows the system to spot disease early. In precision agriculture, this
method has several benefits as it enables farmers to carry out focused treatments to
stop the spread of disease and lessen the need for costly and time-consuming human
labour.
In the example case study, the system achieved a 98% classification accuracy
rate, demonstrating its excellent performance. The lightweight, compact robotic
device is useful for real-time illness diagnostics because it is made to function in a
greenhouse. Additionally, the technology gives farmers useful information for prompt
remedial action by providing feedback on the best treatments for each ailment that is
identified. In order to identify a wider variety of plant illnesses and increase the
system's functionality and suitability in various agricultural contexts, future research
will concentrate on growing the dataset used to train the AI model.
2.11 Karpyshev, Pavel, Valery Ilin, Ivan Kalinov, Alexander Petrovsky, and
Dzmitry Tsetserukou. "Autonomous mobile robot for apple plant disease
detection based on cnn and multi-spectral vision system." In 2021 IEEE/SICE
international symposium on system integration (SII), pp. 157-162. IEEE, 2021.
The use of autonomous systems for crop monitoring and disease detection is
becoming increasingly popular in the precision agriculture space. One such system,
made for inspecting apple orchards, combines cutting-edge sensors with machine
learning methods to detect diseases early and treat them locally. The system offers a
more proactive approach to illness management by combining visible range,
multispectral, and hyperspectral scanners to identify diseases before visual signs
appear. Reducing the use of pesticides and increasing agricultural yields depend on
this early identification.
In order to provide accurate data collecting and mapping of orchard conditions,
the system additionally integrates 2D LiDARs and RTK GNSS receivers for precise
localisation and obstacle identification. The program can identify diseases and
separate plants using neural networks, giving users comprehensive information on the
health of individual trees. The study emphasises the value of using narrow infrared
bands for illness diagnosis, which have been investigated in the lab before and have
shown promise in detecting infections early on.
The suggested method promotes decision-making for illness prevention and
treatment in addition to facilitating early disease identification. The platform can
identify diseased trees using high-precision RTK GNSS technology, enabling focused
treatments and reducing the needless use of fungicides. This customised strategy
lowers expenses and the impact on the environment by ensuring that chemical
treatments are only used where necessary.
The next stage of the research, according to the authors, will entail building a
prototype and conducting real-world testing, as well as growing the illness dataset and
improving detection algorithms. The ultimate objective is to create a recommendation
system that would assist operators in selecting the best methods for managing
diseases, maybe including fungicide spraying devices to deliver treatments straight to
afflicted regions. By improving sustainability and efficiency, this integrated strategy
has the potential to completely transform disease control in apple orchards.
2.12 Ouyang, Chen, Emiko Hatsugai, and Ikuko Shimizu. "Tomato disease
monitoring system using modular extendable mobile robot for greenhouses:
Automatically reporting locations of diseased tomatoes." Agronomy 12, no. 12
(2022): 3160.
In recent years, there has been a lot of interest in the development of automated
systems for monitoring agricultural illnesses, especially for crops like tomatoes where
early disease identification can save considerable output losses. Conventional disease
detection techniques mostly rely on human examination, which can be labour-
intensive and time-consuming. Numerous automated methods have been put out to
address this, utilising developments in robotics and machine learning.
One noteworthy system is the one described in the reference paper for
autonomous tomato disease monitoring utilising a mobile robot that is modular and
expandable. The device is intended to function independently in greenhouse settings,
gathering tomato plant picture data in order to identify illnesses. Because of its
modular design, the robot may be set up to match the height of the tomato plants,
guaranteeing thorough monitoring coverage. The system incorporates a server that
uses a two-level illness detection model to handle the gathered data. In order to
identify and confirm infected tomatoes, this model combines a classification network
(MobileNetv2) with an object detection network (YOLOv5l).
Numerous object identification methods, such as RetinaNet, Faster R-CNN,
YOLOv5, and YOLOv7, were investigated. With a remarkable mAP@0.5 of 90.4%,
YOLOv5l demonstrated the greatest performance, especially when trained on a
randomly split dataset. MobileNetv2, a categorisation network, was chosen because it
strikes the best possible balance between model size and accuracy, making it
appropriate for system deployment. When compared to YOLOv5l alone, the final
model showed a significant decrease in false positive rates, increasing the overall
precision and effectiveness of illness diagnosis.
This method fills a major gap in agricultural automation, since the manual
labour required to monitor plant health may be greatly decreased by promptly and
accurately identifying diseases such as blossom-end rot (BER). Furthermore, the
system's practical applicability is improved by reducing false positives and false
negatives through the use of a two-level model. The system's usefulness in large-scale
agricultural settings will likely be further enhanced by future research that aims to
fully automate the process and broaden its scope of disease detection.
2.13 Chowdhury, Muhammad EH, Tawsifur Rahman, Amith Khandakar,
Mohamed Arselene Ayari, Aftab Ullah Khan, Muhammad Salman Khan, Nasser
Al-Emadi, Mamun Bin Ibne Reaz, Mohammad Tariqul Islam, and Sawal Hamid
Md Ali. "Automatic and reliable leaf disease detection using deep learning
techniques." AgriEngineering 3, no. 2 (2021): 294-312.
The difficulties presented by conventional manual monitoring techniques have
drawn a lot of interest in recent years to the application of computer vision and
artificial intelligence (AI) for the early identification of plant diseases. In addition to
being time-consuming, manually inspecting plants for illness is prone to human
mistake. Deep learning methods, in particular convolutional neural networks (CNNs),
have been suggested as a successful way to overcome these difficulties. In the
reference research, a deep CNN based on the EfficientNet architecture is used in a
unique way to analyse leaf photos and identify tomato plant illnesses.
In order to preprocess the 18,161 tomato leaf pictures for disease identification,
the study uses two segmentation models, U-net and Modified U-net. The models'
performance is compared in the study across a range of classification tasks, such as
binary, six-class, and ten-class classifications. With a Dice score of 98.73% and an
IoU of 98.5%, the findings show that the Modified U-net segmentation model obtains
remarkable accuracy and segmentation scores. Additionally, the EfficientNet-B7
architecture performs better than previous models in terms of classification accuracy,
with 99.12% accuracy for six-class classification tasks and 99.95% accuracy for
binary classification tasks. These results highlight how deep learning models may
improve the precision of plant disease detection systems, particularly when paired
with excellent picture segmentation.
The study also emphasises the benefits of deep learning over conventional
techniques, providing a way to effectively identify plant diseases without the need for
specialised knowledge. With the use of widely available technologies, including
cellphones, drones, and robotic platforms, the suggested system can offer automated,
real-time illness diagnosis. Furthermore, by offering suggestions for illness
management, the authors contend that the inclusion of a feedback mechanism might
improve the model's usefulness even further. Future advancements in precision
agriculture, where real-time monitoring and decision-making may greatly increase
crop output and lower losses from plant diseases, are made possible by this study.
2.14 Bir, Paarth, Rajesh Kumar, and Ghanshyam Singh. "Transfer learning
based tomato leaf disease detection for mobile applications." In 2020 IEEE
International Conference on Computing, Power and Communication Technologies
(GUCON), pp. 34-39. IEEE, 2020.
The use of deep learning in agriculture has drawn a lot of interest lately,
especially for crop disease identification, as a way to increase food security and lower
losses. This paper highlights a project that uses Convolutional Neural Networks
(CNNs) to handle the problem of plant disease detection. It highlights the need for
new technology in early disease detection and notes that pests, illnesses, and weeds
cause a significant loss in agricultural yield in India—roughly 15–25%. CNNs have
been widely used in industries including robotics, healthcare, and agriculture because
of their success in a variety of visual tasks, such as object recognition and
categorisation. However, the computational demands of these models, particularly
regarding memory and power, pose a challenge for their deployment on mobile
devices, which are essential for real-time, on-field applications.
Using pre-trained models such as EfficientNetB0, MobileNetV2, and VGG19
to extract features from tomato plant photos, the research suggests a solution using
transfer learning. The usefulness of these models in agricultural applications was
demonstrated using a dataset of 15,000 photos from nine different disease kinds and
one healthy class. Through the use of smaller, less computationally costly models,
transfer learning presents a feasible strategy without compromising performance. This
makes it possible to implement deep learning models on mobile devices, which makes
it an affordable option for real-world applications in agriculture.
The study's conclusion emphasises the benefits of applying transfer learning to
smaller models, particularly the EfficientNets, which demonstrated a significant boost
in performance when compared to other designs despite having lower computing
costs. In order to minimise model sizes for mobile inference, the article also
recommends further TensorFlow Lite optimisations. However, there was no
discernible gain in performance as a result of the segmentation of plant leaves.
According to the study, a larger dataset might be helpful for even better outcomes.
This study shows how deep learning may be used to solve practical agricultural issues,
paving the door for more effective and user-friendly crop disease detection systems.
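As a generic illustration of the transfer-learning approach summarised above (a sketch of the idea, not the authors' implementation), a pre-trained backbone such as MobileNetV2 can be frozen and topped with a small classification head. The class count, image size, and the commented training call below are placeholder assumptions:

```python
# Generic transfer-learning sketch: frozen MobileNetV2 backbone plus a small head.
# Class count, image size, and the commented training call are placeholders.
import tensorflow as tf

NUM_CLASSES = 10            # e.g., nine disease classes plus one healthy class
IMG_SIZE = (224, 224)

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False      # keep the pre-trained features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets not shown here
```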
2.15 Chaitanya, Pvr, Dileep Kotte, A. Srinath, and K. B. Kalyan. "Development
of smart pesticide spraying robot." International Journal of Recent Technology
and Engineering 8, no. 5 (2020): 2193-2202.
An innovative approach to successfully combating crop diseases is the
application of machine learning and image processing technology in agriculture. A
noteworthy study emphasises how critical it is to identify and control plant diseases in
order to achieve excellent crop yields and quality. The study highlights that plant
illnesses produced by pathogenic organisms frequently show up as visible signs on
leaves, stems, and branches. Reducing post-harvest losses requires early detection and
treatment of these illnesses. The suggested approach precisely identifies contaminated
regions by analysing camera photos using a Raspberry Pi 3 and machine learning
techniques. To ensure effective pesticide use, the device also automatically applies
pesticides just to the impacted areas. This method is a health-conscious alternative as
it lessens farmers' direct exposure to dangerous chemicals.
Using Python-based machine learning algorithms for disease detection and an
L293D motor driver for mobility, the study further illustrates the promise of robots in
agriculture. With just sporadic pesticide and battery refills needed, this creative
method does away with the necessity for continuous surveillance. In order to improve
sustainability, the scientists also suggest using solar technologies for future
autonomous power management.
There are substantial social and economic advantages to such a system. The
technology boosts crop output, lowers labour costs, and reduces pesticide waste by
automating disease control, which raises farmers' incomes. According to the analysis,
the project is a worthwhile, one-time expenditure that will pay off in the long run.
Additionally, the younger generation finds it appealing and accessible due to its
simplicity of use and remote control capabilities. As a cutting-edge development in
agriculture, this technology may support sustainable farming methods and lessen food
crises, which is in line with the goal of contemporary, technologically sophisticated
agriculture.
CHAPTER 3
EXISTING METHOD
3.1 EXISTING METHOD INTRODUCTION
Traditionally, the agricultural industry has used a variety of techniques to
manage crop care, identify plant diseases, and improve farming operations. These
approaches can be broadly divided into three categories: isolated automation
techniques, basic mechanical tools, and manual inspection. Although these methods
have been useful over time, they are not very effective at solving contemporary
farming issues including accuracy, efficiency, and scalability.
Manual Inspection
Plant illnesses are visibly identified during manual inspection, and fertilizer or
herbicides are then applied appropriately. To evaluate the condition of plants and
gauge the severity of diseases, farmers rely on their experience. For small-scale
farming, this approach is economical, but for larger agricultural enterprises, it is error-
prone and ineffective.
Basic Mechanical Tools
In order to help in planting, fertilization, and disease management, mechanical
tools like hand-operated sprayers and ploughing equipment have been utilized
extensively. Although these technologies offer some mechanization, a great deal of
human labor and supervision are still necessary.
Isolated Automation Techniques
Standalone automation solutions such as basic drones, irrigation controls, and
tractor-mounted sprayers have been introduced in recent years. Individual activities
can be completed by these systems with little assistance from humans. They are
unable to combine several features, nevertheless, including real-time control, resource
optimization, and disease detection.
3.2 DISADVANTAGES
Existing techniques are widely used; however, they have a number of shortcomings
that reduce their efficacy and efficiency:
Time-Consuming: Mechanical tool operation and inspection by hand take a lot of
time and effort, which makes them unsuitable for large-scale farming.
Lack of Precision: Human error can lead to erroneous judgments and postponed
actions when disease detection is done by hand observation.
Resource Wastage: Excessive use of water, herbicides, and fertilizers is frequently
caused by mechanical tools and stand-alone systems, raising expenses and damaging
the environment.
Limited Integration: Current approaches are task-specific and do not provide a
comprehensive solution for automation, fertilization, and disease detection.
High Dependency on Labor: Human labor is a major component of traditional
systems, although it may be erratic and challenging to coordinate during busy periods
or labor shortages.
Scalability Challenges: These techniques remain confined to small-scale farms since
they cannot be scaled to satisfy the demands of contemporary, large-scale farming
operations.
Inadequate Real-Time Monitoring: The absence of IoT-based monitoring and control
systems hinders remote administration and real-time data collection, which slows the
response to shifting field conditions.
CHAPTER 4
PROPOSED SYSTEM
4.1 PROPOSED SYSTEM INTRODUCTION
The suggested system integrates technologies including machine learning,
embedded systems, and the Internet of Things (IoT) to build and deploy a leaf disease
detection and fertilizer spraying robot. With the help of this technology, agricultural
output will increase and labor costs will be decreased, since it automatically detects
leaf illnesses and applies fertilizer in real time depending on plant health. The platform
is also integrated with a solar-powered energy system.

Figure 4.1.1 Block Diagram


The two primary parts of the system are the Raspberry Pi-based image
processing and disease detection system for regulating fertilizer spraying and
monitoring plant health, and the ESP32-based robotic platform for managing mobility
and carrying out simple agricultural activities. There are two modes of operation for
the ESP32-based robot: Auto and Manual. In manual mode, users may control the
robot's motions using the Blynk app, guiding four 60 RPM motors with the L298N
motor driver. In auto mode, it employs an ultrasonic sensor to identify obstacles and
navigate on its own. Additionally, the robot has a 1000 RPM grass-cutting motor that
can be turned on via a relay. The robot also features a servo motor for automatic seed
sowing and two 10 RPM plowing motors.

Figure 4.1.2 Flowchart: a test image captured from the USB camera is fed to the
trained CNN model (trained on the dataset, evaluated with performance metrics, and
loaded at run time); the model's disease prediction drives the pump ON/OFF decision.


However, the Raspberry Pi is in charge of detecting leaf disease. A trained
Convolutional Neural Network (CNN) model processes the photos of the plant leaves
that are taken using a USB camera. To find any patterns of illness, this model is
trained on a dataset of both healthy and sick leaves. The technology uses a relay to
turn on a pump that sprays fertilizer on the afflicted region whenever a disease is
detected.
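A minimal sketch of this detect-then-spray loop on the Raspberry Pi is shown below; the model file name, class labels, relay GPIO pin, input size, and spray duration are illustrative assumptions rather than the project's exact implementation:

```python
# Hypothetical detect-and-spray loop: load a trained CNN, classify frames from
# the USB camera, and drive the relay-controlled pump when disease is predicted.
# Model path, class labels, GPIO pin, input size, and timings are assumptions.
import time
import cv2
import numpy as np
import tensorflow as tf
from gpiozero import OutputDevice

model = tf.keras.models.load_model("leaf_cnn.h5")   # trained CNN (assumed file name)
CLASSES = ["healthy", "diseased"]                    # placeholder class labels
relay = OutputDevice(17)                             # relay on GPIO17 (assumed pin)
cam = cv2.VideoCapture(0)                            # USB camera

while True:
    ok, frame = cam.read()
    if not ok:
        continue
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)     # OpenCV delivers BGR frames
    img = cv2.resize(rgb, (128, 128)) / 255.0        # match the training input size
    pred = model.predict(np.expand_dims(img, axis=0), verbose=0)
    label = CLASSES[int(np.argmax(pred))]
    if label == "diseased":
        relay.on()                                   # energise the fertilizer pump
        time.sleep(3)                                # spray for a fixed interval
        relay.off()
    time.sleep(1)
```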

Figure 4.1.3 Circuit Diagram


This system offers an effective answer for contemporary farming by combining
machine learning techniques, ESP32, Raspberry Pi, and the Blynk app. It is a reliable,
reasonably priced precision agricultural technology that lowers human labor and
enhances plant health monitoring.

4.2 BLOCK DIAGRAM EXPLANATION


4.2.1 12V LEAD ACID BATTERY
A lead acid battery is a type of rechargeable battery that uses lead plates and
sulphuric acid. The lead is immersed in sulphuric acid to facilitate a regulated chemical
reaction. It is this chemical process that generates power in the battery. To recharge
the battery, this reaction is then reversed.
Figure 4.2.1 Lead Acid Battery

The lead peroxide plate and sponge lead plate are dipped in diluted sulphuric acid to
create the lead acid storage battery. An external electric circuit is connected between
these plates. The acid molecules in diluted sulphuric acid separate into negative
sulphate ions (SO4^2-) and positive hydrogen ions (H+). The hydrogen ions absorb
electrons from the PbO2 plate upon arrival, transforming into hydrogen atoms that
attack PbO2 again to produce PbO and H2O (water). PbSO4 and H2O (water)
are the products of this PbO's reaction with H2SO4.
SO4^2- ions (anions) move toward the electrode (anode) that is attached to the
DC source's positive terminal. There, they surrender their excess electrons and
transform into radical SO4. Since this radical SO4 cannot exist on its own, it combines
with the anode's PbSO4 to produce sulphuric acid (H2SO4) and lead peroxide (PbO2).
The electrochemical reaction is reversed during the charging procedure. It transforms
the charger's electrical energy into the battery's chemical energy. However, a battery
retains the chemical energy required to generate electricity rather than storing power.
As long as the charger's voltage is higher than the battery's, a battery charger reverses
the current flow. Positive hydrogen ions are drawn to the negative plates because of
the charger's creation of an excess of electrons there. When the majority of the
sulphate is removed, hydrogen rises from the negative plates as a result of the
hydrogen's reaction with the lead sulphate to produce sulphuric acid and lead. When
the reaction is nearly finished, oxygen bubbles emerge from the positive plates as the
oxygen in the water combines with the lead sulphate on the plates to transform them
back into lead dioxide. We refer to this as gassing.
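The plate reactions described above can be summarised by the overall reversible cell reaction, read left to right on discharge and right to left on charge:

\[
\mathrm{Pb} + \mathrm{PbO_2} + 2\,\mathrm{H_2SO_4} \;\rightleftharpoons\; 2\,\mathrm{PbSO_4} + 2\,\mathrm{H_2O}
\]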
4.2.2 POWER SUPPLY MODULE
Usually, an RC filter is made up of a series resistor (R) followed by a capacitor (C)
connected across the load. The capacitor charges during the rectified waveform's peaks
and discharges into the load during its troughs, which lessens the ripple. The required
load and the intended degree of smoothing are taken into consideration when
determining the values of R and C. The output following the RC filter is a fairly steady
DC voltage, although some fluctuations may remain. A 7805 voltage
regulator is used to give a steady and controlled output voltage of +5 volts. A common
linear voltage regulator, the 7805 keeps the output voltage constant despite variations
in the input voltage or load circumstances. It guarantees a steady and dependable
power source for electrical circuits that are linked.
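As a rough sizing guide (an approximation that assumes a full-wave rectified supply with line frequency f, load current I_load, and filter capacitance C), the peak-to-peak ripple seen before the regulator is approximately

\[
V_{ripple} \approx \frac{I_{load}}{2 f C},
\]

so a larger capacitance or a lighter load yields a smoother input to the 7805.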

Figure 4.2.2 Circuit Diagram of Power Supply Module

4.2.3 ESP 32 MICROCONTROLLER


The ESP32 microcontroller is a small integrated circuit with programmable
input/output peripherals, memory, and a processing core. Its foundation is the dual-
core Xtensa LX6 CPU, which enables effective multitasking and parallel processing.
With many Wi-Fi modes, such as Station, Access Point, and both at once in SoftAP
mode, the ESP32 has built-in support for both Bluetooth and Wi-Fi connectivity.
Bluetooth has both Bluetooth Low Energy (BLE) and Classic Bluetooth. GPIO
(General Purpose Input/Output) pins, SPI (Serial Peripheral Interface), I2C (Inter-
Integrated Circuit), UART (Universal Asynchronous Receiver/Transmitter), PWM
(Pulse Width Modulation), and other peripherals are all available on the ESP32.

Figure 4.2.3 Pin Diagram of ESP 32 Microcontroller


The ESP32 can communicate with a variety of sensors, actuators, and other devices
thanks to these peripherals. Typically, the ESP32 has SRAM (Static Random Access
Memory) for data storage and flash memory for program storage. The particular
ESP32 module or development board might affect the quantity of flash and SRAM.
The ESP32 can be programmed through the Arduino IDE, which offers low-level
control over its peripherals. Apps for the ESP32 may be created using a number of
programming languages, including C and C++.
4.2.4 Camera
A webcam is a type of video camera used to record and send video over the
internet that is linked to a computer or other device, usually through a USB
connection. Applications requiring real-time visual communication, such as live
streaming and videoconferencing, frequently employ webcams. Webcams can be
standalone devices that can be connected to a computer or other device, or they can be
integrated into laptops and desktop computers. Some webcams can record audio in
addition to video since they come with built-in microphones. Because they are
portable and usually have a tiny form factor, webcams are useful in a range of
situations.
Figure 4.2.4 webcam
Along with additional capabilities that let users tailor the video capture to their
requirements, many webcams offer focus that can be adjusted. Webcams are used for
a number of uses, such as live streaming, online schooling, and videoconferencing.
They are a helpful tool for facilitating real-time video communication.
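For reference, a single frame can be grabbed from such a USB webcam with OpenCV as in the short sketch below; the device index and the output file name are assumptions:

```python
# Minimal OpenCV capture sketch: grab one frame from the first attached USB webcam
# and save it to disk. The device index (0) and output file name are assumptions.
import cv2

cam = cv2.VideoCapture(0)             # open the first attached camera
ok, frame = cam.read()                # capture a single frame
if ok:
    cv2.imwrite("leaf_sample.jpg", frame)
else:
    print("Camera frame could not be read")
cam.release()
```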
4.2.5 Raspberry pi

Figure 4.2.5 Raspberry Pi
A popular single-board computer for DIY, education, and prototyping
applications is the Raspberry Pi, which is compact and reasonably priced. The
Raspberry Pi Foundation created it with the intention of advancing computer science
education, especially in underdeveloped nations. Since its launch, it has grown in
popularity as a tool for engineers, developers, and hobbyists working on a variety of
projects, from robots to home automation systems to AI-based apps such as facial
recognition attendance systems.
A Broadcom System on Chip (SoC), which integrates a CPU, GPU, and
memory into a single chip, is the fundamental component of the Raspberry Pi. GPIO
(General Purpose Input/Output) pins for hardware interface, USB ports for input
devices, HDMI for video output, and wireless connectivity choices like Wi-Fi and
Bluetooth are just a few of the peripherals it supports. For projects requiring
interaction with sensors, cameras, and other devices, this makes it a great platform.
Although other operating systems are also compatible, the Raspberry Pi runs a
variant of Linux, usually Raspberry Pi OS. Because it can be programmed in a variety
of languages, including Python, C++, and Java, developers may create a vast range of
applications, from straightforward automation scripts to intricate machine learning
models. Strong deep learning frameworks like TensorFlow and Keras are also
supported by the Raspberry Pi, which qualifies it for AI-based applications like object
identification, speech recognition, and facial recognition.
The Raspberry Pi's cost is one of its main advantages. Because it offers an affordable
substitute for more costly computer hardware, it is now a viable choice for people,
businesses, and educational institutions. The Raspberry Pi is a perfect choice for
developing embedded systems, smart devices, and learning platforms because of its
tiny form factor, low power consumption, and versatile networking possibilities.
4.2.5.1 Convolutional Neural Network (CNN)
Convolutional Neural Networks (CNNs) are an effective machine learning
technique, particularly for computer vision problems. A specific kind of neural
networks called convolutional neural networks, or CNNs, is made to handle grid-like
data like images efficiently.
One kind of deep learning technique that works especially well for image
processing and recognition applications is the Convolutional Neural Network (CNN).
It is composed of several layers, such as fully connected, pooling, and convolutional
layers. CNNs are ideal for identifying hierarchical patterns and spatial correlations in
pictures because of their architecture, which draws inspiration from the visual
processing in the human brain.
Figure 4.2.5.1 Convolutional Neural Network
Key components of a Convolutional Neural Network include:
Convolutional Layers: In order to identify characteristics like edges, textures, and
more intricate patterns, these layers perform convolutional operations to input pictures
using filters, sometimes referred to as kernels. The spatial associations between pixels
are preserved with the use of convolutional techniques.
Pooling Layers: By down sampling the input's spatial dimensions, pooling layers
lower the network's computational complexity and parameter count. A popular
pooling technique is max pooling, which chooses the highest value from a collection
of nearby pixels.
Activation Functions: By adding non-linearity to the model, non-linear activation
functions like the Rectified Linear Unit (ReLU) enable the model to discover more
intricate links in the data.
Fully Connected Layers: These layers are in charge of forecasting using the high-level
characteristics that the preceding layers have learnt. Every neuron in one layer is
connected to every neuron in the next layer.
In order to train CNNs to identify patterns and characteristics linked to certain objects
or classes, a sizable collection of labeled pictures is used. CNNs have demonstrated
exceptional efficacy in image-related tasks, attaining cutting-edge results across a range
of computer vision applications. They are ideal for jobs where the spatial connections
and patterns in the data are essential for precise predictions because of their capacity
to automatically learn hierarchical representations of characteristics. CNNs are
extensively utilized in fields including medical image analysis, object identification,
facial recognition, and image categorization.
The core of a CNN is its convolutional layers, which apply filters to the input image to extract features such as edges, textures, and shapes. The output of the convolutional layers is passed through pooling layers, which downsample the feature maps while preserving the most important information. Finally, one or more fully connected layers take the pooled output and classify the image or generate a prediction.
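To make this architecture concrete, the following minimal sketch shows how such a CNN could be defined in Keras for 128 x 128 leaf images and ten output classes (matching the tomato disease labels used later in this report). The layer sizes here are illustrative assumptions and are not necessarily the exact architecture of the trained model used by the robot.

# Minimal CNN sketch for 128x128 RGB leaf images and 10 disease classes.
# Layer sizes are illustrative; the project's trained model may differ.
from tensorflow.keras import layers, models

def build_leaf_cnn(input_shape=(128, 128, 3), num_classes=10):
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation='relu', input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),                      # downsample feature maps
        layers.Conv2D(64, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),                                 # 2D feature maps -> 1D vector
        layers.Dense(128, activation='relu'),             # fully connected layer
        layers.Dense(num_classes, activation='softmax'),  # class probabilities
    ])
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

model = build_leaf_cnn()
model.summary()

Training such a model on a labeled leaf image dataset (for example with model.fit) produces the .h5 file that the Raspberry Pi later loads for real-time inference.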
4.2.6 Servo Motor
One kind of motor that can rotate extremely precisely is a servo motor. This
kind of motor typically has a control circuit that gives feedback on the motor shaft's
present location. This feedback enables the servo motors to rotate extremely precisely.
A servo motor is used when an object must be rotated to a specific angle or moved through a specific distance. It consists of a basic motor driven by a servo mechanism. A DC servo motor is powered by a DC supply, while an AC servo motor is powered by an AC supply; only the DC servo motor is discussed in this section. Beyond these main categories, servo motors are further classified by their gear arrangement and operating characteristics. The gear train that servo motors usually include makes very high torque available in a compact, lightweight package. Because of these features, servo motors are used in many applications such as toy cars, RC helicopters and planes, and robotics.
Most hobby servo motors are rated at 3 kg/cm, 6 kg/cm, or 12 kg/cm. This kg/cm rating indicates the weight the servo can lift at a given distance from the shaft. For example, a 6 kg/cm servo should be able to lift 6 kg when the load hangs 1 cm from the motor shaft; the further the load is from the shaft, the less weight the servo can support. The servo's control electronics are positioned next to the motor, and the shaft position is set by an electrical pulse.
A servo consists of a DC or AC motor, a potentiometer, a gear assembly, and a control circuit. The gear assembly reduces the motor's RPM and increases its torque. Suppose the potentiometer knob is positioned so that no electrical signal is produced at the potentiometer's output while the servo shaft is in its initial position, and an external electrical signal is applied to the other input terminal of the error detector amplifier. A feedback mechanism processes the difference between these two signals, one from the potentiometer and the other from the external source, and delivers an error signal at its output.
This error signal drives the motor, and the motor begins to rotate. Because the potentiometer is coupled to the motor shaft, its output changes as the motor turns, so the feedback signal varies with the shaft's angular position. After some time, the potentiometer reaches a position where its output matches the externally applied signal. Since there is then no difference between the two signals, the amplifier produces no output to the motor, and the motor stops.

Figure 4.2.6 Servo Motor
Hobby servo motors such as the SG90 are fairly simple to interface with an MCU. A servo has three wires: one carries the control signal from the MCU, and the other two are the positive and negative supply connections. Larger servos such as the MG995 metal-gear servo are most commonly used in remote-controlled vehicles, humanoid robots, and similar applications.
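As an illustration of the pulse-based control described above, the sketch below sets a hobby servo's angle from a Raspberry Pi GPIO pin using a 50 Hz PWM signal. In this project the seed-sowing servo is actually driven by the ESP32, so the pin number and duty-cycle mapping here are assumptions for the example only.

# Hedged sketch: drive a hobby servo from a Raspberry Pi GPIO pin (assumed BCM 17).
# A 50 Hz PWM signal with roughly 2.5%-12.5% duty cycle maps to about 0-180 degrees.
import time
import RPi.GPIO as GPIO

SERVO_PIN = 17  # assumed pin for the example

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
pwm = GPIO.PWM(SERVO_PIN, 50)  # 50 Hz servo frame rate
pwm.start(0)

def set_angle(angle):
    # Convert an angle in degrees (0-180) to an approximate duty cycle.
    duty = 2.5 + (angle / 180.0) * 10.0
    pwm.ChangeDutyCycle(duty)
    time.sleep(0.5)          # give the servo time to reach the position
    pwm.ChangeDutyCycle(0)   # stop sending pulses to reduce jitter

try:
    set_angle(0)    # e.g. seed hopper closed
    set_angle(90)   # e.g. seed hopper open
finally:
    pwm.stop()
    GPIO.cleanup()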
4.2.7 WATER PUMP
A water pump is a mechanical device used to transfer water from one place to another. It produces a pressure differential that forces water through a pipe or hose. Water pumps have numerous applications, including drainage, irrigation, and water delivery.

Figure 4.2.7 12V Water Pump
A water pump's impeller, casing, and motor are its fundamental parts. The
revolving part that accelerates the water to create the pressure differential is called the
impeller. The fixed component that houses the impeller and controls water flow is
called the casing. The impeller is driven by mechanical energy from the motor.
4.2.8 RELAY
Relays are electromechanical devices that use a moveable mechanical
component that is electrically controlled by an electromagnet to create or break
electrical connections. It functions similarly to a mechanical switch, but instead of
requiring human involvement to activate, an electrical signal does it. One popular
version uses electromagnets as switches. The word "relay" refers to its function of
sending signals to control switching operations. Relays manipulate contacts with a
signal to function independently, controlling circuit connection without the need for
human involvement.
Usually, high-powered circuits, such those that operate AC home appliances,
are controlled by a DC signal from low-power sources like microcontrollers. The relay
is constructed with a moveable armature that serves as a common terminal connecting
to external circuitry and a case that houses a core with copper windings creating a coil.
Additionally, it has two pins that connect to the armature or common terminal:
ordinarily closed (NC) and usually opened (NO). As long as current passes through
the coil, the armature moves when the coil is energised, connecting with the usually
opened contact. The armature returns to its starting position when the coil is de-
energised.

Figure 4.2.8 Relay Working Diagram
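As an illustration, the sketch below shows how the Raspberry Pi could drive such a relay module to switch the 12 V spraying pump once a disease is detected. The GPIO pin number and the active-low behaviour of the relay board are assumptions for this example, not the project's exact wiring.

# Hedged sketch: switch the fertilizer-spraying pump through a relay module.
# BCM pin 27 and the active-low relay input are assumptions for illustration.
import time
import RPi.GPIO as GPIO

RELAY_PIN = 27          # assumed pin wired to the relay IN terminal
RELAY_ON = GPIO.LOW     # many relay boards are active-low
RELAY_OFF = GPIO.HIGH

GPIO.setmode(GPIO.BCM)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=RELAY_OFF)

def spray(duration_s=3):
    # Energise the relay coil so the pump runs for duration_s seconds.
    GPIO.output(RELAY_PIN, RELAY_ON)
    time.sleep(duration_s)
    GPIO.output(RELAY_PIN, RELAY_OFF)

try:
    spray(3)  # e.g. triggered after the CNN flags a diseased leaf
finally:
    GPIO.cleanup()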


4.2.9 ULTRASONIC SENSOR
The HC-SR04 ultrasonic sensor is a flexible tool for precise distance measurement in robotics and electronics projects. It works by emitting a brief burst of ultrasonic sound and timing how long the sound waves take to return after striking an object. The sensor's transceiver module, which contains an ultrasonic transmitter and receiver, makes accurate and reliable measurements possible. Its affordability, simplicity, and ease of use make it a popular choice for professionals, students, and hobbyists.

Figure 4.2.9 Pin Configuration of the Ultrasonic Sensor
The sensor usually delivers distance readings in centimetres or inches, and its effective range can be tuned by adjusting the trigger pulse timing, which has made it a staple of engineers' and electronics enthusiasts' toolkits. In this project, the HC-SR04 gives the robot real-time distance measurements to nearby obstacles, allowing the autonomous navigation routine to slow down, stop, or change direction before a collision and keeping operation in the field safe and efficient.
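The underlying calculation is simple: because the ultrasonic pulse travels to the object and back, distance = (echo duration x speed of sound) / 2. The short sketch below applies this conversion; the echo duration used is an arbitrary example value.

# Convert an HC-SR04 echo pulse duration (microseconds) into a distance in centimetres.
SOUND_SPEED_CM_PER_US = 0.0344  # speed of sound in cm per microsecond

def echo_to_distance_cm(echo_duration_us):
    # The pulse travels out and back, so divide the round trip by two.
    return (echo_duration_us * SOUND_SPEED_CM_PER_US) / 2

print(echo_to_distance_cm(1744))  # about 30 cm for a 1744 microsecond echo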
4.2.10 DC GEAR MOTOR
The performance of a gear motor is determined by its design, gears, lubrication, and coupling. A gear motor is a motor with an integrated gearbox that acts as a torque multiplier and speed reducer, so less power is needed to move a given load. Mechanically, it consists of an electric motor plus a gearbox containing a train of gears; the gearbox raises the torque and lowers the speed of the motor for the task at hand. Because of their straightforward design and gearbox versatility, gear motors are used in a wide range of mechanical automation applications, including vending machines, printers, and industrial and residential automation. The motor itself may be a stepper, brushed, or brushless type.

Figure 4.2.10 DC Gear Motor
A DC motor is a rotating device that converts electrical energy into mechanical energy. Supplying DC power to the motor terminals creates a magnetic field, and torque is produced by the interaction between the motor's wire-wound armature and its fixed magnets. Both brushed and brushless DC gear motors are available, and their size and characteristics can be tailored to specific requirements. Electric gear motors serve applications in many sectors that need high output torque and low shaft rotational speed, especially where space and power are limited.
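To make the torque-multiplication idea concrete, the short calculation below estimates the output speed and torque of a geared DC motor from its gear ratio. The motor figures and the 90% gearbox efficiency are assumed values for illustration only.

# Illustrative gear motor calculation with assumed values.
def gear_motor_output(motor_rpm, motor_torque_kg_cm, gear_ratio, efficiency=0.9):
    # Speed is divided by the gear ratio; torque is multiplied (minus gearbox losses).
    output_rpm = motor_rpm / gear_ratio
    output_torque = motor_torque_kg_cm * gear_ratio * efficiency
    return output_rpm, output_torque

# Example: a 3000 RPM motor with 0.2 kg-cm torque behind a 50:1 gearbox
rpm, torque = gear_motor_output(3000, 0.2, 50)
print(f"Output speed: {rpm:.0f} RPM, output torque: {torque:.1f} kg-cm")
# -> Output speed: 60 RPM, output torque: 9.0 kg-cm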

4.2.11 Solar Panel
A solar panel, also referred to as a PV (photovoltaic) panel, is a device that transforms sunlight, which is made up of energy particles called photons, into electrical power for electrical loads. Solar panels have numerous applications, including remote power systems for cabins, telecommunications equipment, remote sensing, and, of course, the generation of electricity by residential and commercial solar electric systems.

Figure 4.2.11 Solar Panel
Sunlight is a clean, sustainable energy source, and solar panels capture it and convert it into electricity that can power electrical loads. A panel is made up of multiple individual solar cells, each composed of layers of silicon, phosphorus (which provides the negative charge), and boron (which provides the positive charge). When photons strike the panel's surface, their energy knocks electrons out of their atomic orbits; the electric field created by the solar cells then draws these freed electrons into a directed current. This entire process is called the photovoltaic effect. The roof space of an ordinary home is usually more than sufficient for the number of solar panels required to generate enough electricity to meet all of its needs, and any excess electricity produced is fed into the main power grid, offsetting electricity drawn at night.

A solar array produces electricity during the day and uses it in the house at
night in a well-balanced grid-connected setup. Solar generator owners can get
payment through net metering programs if their system generates more electricity than
their residence requires. A battery bank, charge controller, and, often, an inverter are
required parts for off-grid solar applications. Direct current (DC) power is sent to the
battery bank from the solar array via the charge controller. After that, the inverter
receives electricity from the battery bank and transforms it into alternating current
(AC), which non-DC equipment may use. Solar panel arrays may be sized to satisfy
the most exacting electrical load needs with the help of an inverter. Residential or
commercial structures, recreational vehicles and boats, distant cabins, cottages, or
residences, remote traffic controls, telecommunications devices, oil and gas flow
monitoring, RTU, SCADA, and many other things can all be powered by the AC
current.
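As a rough illustration of how such an off-grid system can be sized, the sketch below estimates the panel wattage and battery capacity needed for a small load. All input values (load wattage, run time, sun hours, battery voltage, depth of discharge, system efficiency) are assumptions for the example and are not measurements from this project.

# Hedged back-of-the-envelope sizing for a small off-grid solar setup.
# All input values are illustrative assumptions.
def size_offgrid_system(load_watts, hours_per_day, sun_hours=5.0,
                        battery_voltage=12.0, depth_of_discharge=0.5,
                        system_efficiency=0.8):
    daily_wh = load_watts * hours_per_day                     # energy the load consumes per day
    panel_watts = daily_wh / (sun_hours * system_efficiency)  # PV power needed
    battery_wh = daily_wh / depth_of_discharge                # storage with depth-of-discharge margin
    battery_ah = battery_wh / battery_voltage                 # capacity at the battery voltage
    return panel_watts, battery_ah

# Example: a 20 W load running 6 hours per day
panel_w, battery_ah = size_offgrid_system(20, 6)
print(f"Panel size: ~{panel_w:.0f} W, battery: ~{battery_ah:.0f} Ah at 12 V")
# -> Panel size: ~30 W, battery: ~20 Ah at 12 V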
CHAPTER 5
SOFTWARE DESCRIPTION
5.1 Arduino IDE
The open-source Arduino IDE is used to write and upload code for Arduino boards. It supports the C and C++ programming languages and runs on Linux, macOS, and Windows. Connecting a Genuino or Arduino board to the Arduino IDE allows a program, called a sketch and saved with the '.ino' extension, to be uploaded to the board. The IDE simplifies code compilation to the point that even non-technical users can work with it. Each board carries a programmable microcontroller that accepts the compiled code: the IDE compiles the sketch into a hex file, which is then uploaded to the board's controller. For novices, this makes learning the build-and-upload workflow straightforward.

Figure 5.1.1 Menu Bar of Arduino IDE
To add or change code, utilize the Arduino software's five major menus: File,
Edit, Sketch, Tools, and Help. The toolbar, which includes functions like Verify,
Upload, New, Open, Save, and Serial Monitor, is essential for continuous
programming. Code is reviewed using Verify to make sure it is error-free. Save is
used to save the current sketch, Open is used to open the sketch from the sketchbook,
New is used to create a new project or sketch, and Upload is used to upload code to
the Arduino board. The program's code editor is a blank area where code is created
and edited. The operation's completion status is displayed in the Status bar. Program
notifications highlight errors and issues that occurred during the programming process
and offer clarifications and guidance on how to handle them.
Figure 5.1.2 Arduino IDE Environment
The port selection shows the serial port used to connect the Arduino to the computer, while the board selection shows the type of Arduino board in use.
Figure 5.1.3 Arduino IDE Board And Port Selection
The Tools panel contains a distinct pop-up window called the Serial Monitor,
which functions as a stand-alone terminal for transmitting and receiving serial data.
To access it, concurrently press Ctrl+Shift+M. The Serial Monitor helps debug written
sketches so that the program can be understood. The Arduino Module has to be linked
to the computer via a USB connection in order to enable the Serial Monitor.
Figure 5.1.4 Arduino IDE Uploading
An Arduino board can be programmed with the following steps: connect the electrical components to the board's input/output pins; connect the board to a computer with a USB cable; open the Arduino software, click "Tools", and under "Board" and "Port" choose the board type and the serial port the board is attached to; then write the program in the code editor, verify it, and upload it to the board. Following this procedure ensures that the program and configuration are correct.
5.1.2 Source Code ESP32
#define BLYNK_TEMPLATE_ID "TMPL3haXkXnd5"
#define BLYNK_TEMPLATE_NAME "US car controller"
#define BLYNK_AUTH_TOKEN "yaz-74_6aiYS3LekKK1OyJK5GPeIALti"

#define echoPin 5 // Pin connected to Echo of HC-SR04


#define trigPin 18 // Pin connected to Trigger of HC-SR04
int V8State = 0; // To track the state of V8 (ultrasonic control)

const float SOUND_SPEED = 0.0344; // Speed of sound in cm/µs


const int MAX_DISTANCE = 400; // Maximum measurable distance in cm
const int MIN_DISTANCE = 2; // Minimum measurable distance in cm

long duration; // Variable to store the time of pulse


int distance; // Variable to store the calculated distance

#include <WiFi.h>
#include <BlynkSimpleEsp32.h>

// Blynk and WiFi credentials


char auth[] = BLYNK_AUTH_TOKEN;
char ssid[] = "vivo V23 5G";
char pass[] = "123654789";

// Motor driver pins

int M1F = 13; // Forward for Motor 1


int M1R = 12; // Reverse for Motor 1
int M2F = 14; // Forward for Motor 2
int M2R = 27; // Reverse for Motor 2

int speedValue = 1; // Speed value for motors


// Function to stop all motors
void stopMotors() {
digitalWrite(M1F, LOW);
digitalWrite(M1R, LOW);
digitalWrite(M2F, LOW);
digitalWrite(M2R, LOW);

Serial.println("Motors Stopped");
}

// Blynk CUTTING MOTOR


BLYNK_WRITE(V0) {
if (param.asInt()) { // If button is pressed

digitalWrite(19, HIGH);

}
else {
digitalWrite(19, LOW);
}
} // end BLYNK_WRITE(V0)

// Forward button (V1)


BLYNK_WRITE(V1) {
int pinv1 = param.asInt();
if (pinv1 ==HIGH) { // If button is pressed

digitalWrite(M1F, HIGH);
digitalWrite(M1R, LOW);
digitalWrite(M2F, HIGH);
digitalWrite(M2R, LOW);
Serial.println("Moving Forward");
} else {
stopMotors();
}
}

// Reverse button (V2)


BLYNK_WRITE(V2) {
if (param.asInt()) {

digitalWrite(M1F, LOW);
digitalWrite(M1R, HIGH);
digitalWrite(M2F, LOW);
digitalWrite(M2R, HIGH);
Serial.println("Moving Reverse");
} else {
stopMotors();
}
}

// Left button (V3)


BLYNK_WRITE(V3) {
if (param.asInt()) {

digitalWrite(M1F, LOW);
digitalWrite(M1R, HIGH);
digitalWrite(M2F, HIGH);
digitalWrite(M2R, LOW);
Serial.println("Turning Left");
} else {
stopMotors();
}
}

// Right button (V4)


BLYNK_WRITE(V4) {
int pinv4 = param.asInt();
if (pinv4 == HIGH) {

digitalWrite(M1F, HIGH);
digitalWrite(M1R, LOW);
digitalWrite(M2F, LOW);
digitalWrite(M2R, HIGH);
Serial.println("Turning Right");
} else {
stopMotors();
}
}

BLYNK_WRITE(V5) {
if (param.asInt()) {
int pinValue = param.asInt();

digitalWrite(25, HIGH);
digitalWrite(26, LOW);

delay(1000);
digitalWrite(25, LOW);
digitalWrite(26, LOW);
Blynk.virtualWrite(V5, 0);
}
}

BLYNK_WRITE(V6) {
if (param.asInt()) {
int pinValue2 = param.asInt();

digitalWrite(25, LOW);
digitalWrite(26, HIGH);

delay(1000);
digitalWrite(25, LOW);
digitalWrite(26, LOW);

Blynk.virtualWrite(V6, 0);
}
}
BLYNK_WRITE(V7) {
if (param.asInt()) {
int pinValue = param.asInt();

digitalWrite(33, HIGH);
digitalWrite(32, LOW);

delay(1000);
digitalWrite(33, LOW);
digitalWrite(32, HIGH);

Blynk.virtualWrite(V7, 0);
}
}

BLYNK_WRITE(V8) {
int pin8 = param.asInt();

V8State = param.asInt(); // Get the state of V8 from the app


if (V8State) {
Serial.println("Ultrasonic control enabled");
} else {
stopMotors();
Serial.println("Ultrasonic control disabled");
}
}

void setup() {

// Motor pins as outputs


pinMode(M1F, OUTPUT);
pinMode(M1R, OUTPUT);
pinMode(M2F, OUTPUT);
pinMode(M2R, OUTPUT);
pinMode(32, OUTPUT);
pinMode(33, OUTPUT);
pinMode(25, OUTPUT);
pinMode(26, OUTPUT);
pinMode(19, OUTPUT);
pinMode(35, OUTPUT); // note: GPIO 35 is input-only on the ESP32

pinMode(trigPin, OUTPUT); // Set trigPin as OUTPUT


pinMode(echoPin, INPUT); // Set echoPin as INPUT
Serial.begin(9600); // Start serial communication
Serial.println("Ultrasonic Sensor Distance Measurement");

// Connect to WiFi and Blynk


WiFi.begin(ssid, pass);
while (WiFi.status() != WL_CONNECTED) {
delay(500);
Serial.print(".");
}
Blynk.begin(auth, ssid, pass);

// Initialize motors to stopped


stopMotors();
delay(500);
}

void loop() {
if (V8State){
// Trigger an ultrasonic pulse
digitalWrite(trigPin, LOW);
delayMicroseconds(2);
digitalWrite(trigPin, HIGH);
delayMicroseconds(10);
digitalWrite(trigPin, LOW);

// Measure the duration of the echo pulse


duration = pulseIn(echoPin, HIGH);

// Calculate distance in cm
distance = duration * SOUND_SPEED / 2;
// Check if the distance is within the sensor's range
if (distance >= MIN_DISTANCE && distance <= MAX_DISTANCE) {
Serial.print("Distance: ");
Serial.print(distance);
Serial.println(" cm");
delay(100); // Wait for 100ms before the next measurement

if (distance <= 60){


digitalWrite(M1F, HIGH);
digitalWrite(M1R, LOW);
digitalWrite(M2F, LOW);
digitalWrite(M2R, HIGH);
Serial.println("Turning Right");

}else {

digitalWrite(M1F, HIGH);
digitalWrite(M1R, LOW);
digitalWrite(M2F, HIGH);
digitalWrite(M2R, LOW);
Serial.println("Moving Forward");
}
} else {
Serial.println("Out of range");
}

}
else {

}
Blynk.run();
}
5.2 IOT (Blynk)
The network of physical things, automobiles, appliances, and other items that
are equipped with sensors, software, and network connectivity so they may gather and
share data is known as the Internet of Things (IoT). Connecting and integrating
various devices is the aim of IoT in order to improve decision-making, ease, and
efficiency.
Figure 5.2 IoT Working with ESP 32
Blynk is a well-known IoT platform that simplifies the creation of IoT applications. It offers a drag-and-drop interface for building mobile apps that can be customised to operate Internet of Things devices, and it supports a large number of hardware platforms, making it adaptable to a variety of IoT applications. A smartphone application built with Blynk gives users an interface for controlling and monitoring connected devices. The Blynk cloud server acts as a bridge between the mobile app and the physical hardware, providing secure and efficient data exchange and allowing devices to be controlled remotely. In this way, Blynk makes it possible to build a smartphone application that controls IoT devices without requiring extensive technical knowledge.
5.3 Thonny IDE
The Raspberry Pi OS comes pre-installed with Thonny, an easy-to-use Python code
editor. With the integrated Python interpreter, we can begin coding immediately
without the need to install any additional software, and the interface is clear and easy
to use.
Figure 5.3 Thonny IDE Interface
Thonny's built-in debugger, which enables step-by-step code execution, is one of its
best features. By graphically illustrating how variables change while a program is
running, this aids users in understanding how their programs operate. This experience
is further improved by the integrated variable inspector, which shows the variables'
current status in real time. This facilitates learning Python principles and debugging.
Programming microcontrollers such as the Raspberry Pi Pico, ESP32, and other
devices that support MicroPython is another area in which Thonny excels. Users may
submit scripts, test functionality, and debug straight from the IDE thanks to its ability
to establish direct USB communication with these devices. For applications requiring
embedded devices and the Internet of Things, this makes Thonny an excellent option.
A built-in Python shell in the IDE allows you to run brief code fragments without
writing whole scripts. Additionally, it has a package manager that makes managing
and installing Python libraries easier. When incorporating third-party libraries into
projects, this feature is quite helpful.
Thonny helps users swiftly find and fix problems in their code by highlighting syntax
mistakes and providing informative error messages. Additionally, it facilitates syntax
highlighting and code completion, which increase readability and coding efficiency.
Thonny may be downloaded for free from the official website and works with
Windows, macOS, and Linux. It is a great tool for learning Python and working on
complex programming projects requiring Python and MicroPython because of its
simplicity and functionality.
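For instance, Thonny can flash and run a MicroPython script such as the one below directly on an ESP32 or Raspberry Pi Pico. GPIO 2 is the on-board LED on many ESP32 development boards and is assumed here purely for illustration.

# Minimal MicroPython sketch that can be run from Thonny on an ESP32 or Pico.
from machine import Pin
import time

led = Pin(2, Pin.OUT)  # assumed on-board LED pin

for _ in range(10):
    led.value(not led.value())  # toggle the LED
    time.sleep(0.5)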

5.3.1 Source Code


import cv2
from matplotlib import pyplot as plt
import os
import numpy as np
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.models import load_model

filepath = ('C:/Users/Madhuri/AppData/Local/Programs/Python/Python38/'
            'Tomato_Leaf_Disease_Prediction/model.h5')
model = load_model(filepath)
print(model)

print("Model Loaded Successfully")

tomato_plant = cv2.imread('D:/DISEASE DETECTION AND PREVENTION/Plant-Leaf-Disease-Prediction'
                          '/Dataset/test/Tomato___Bacterial_spot (1).JPG')
test_image = cv2.resize(tomato_plant, (128, 128))  # resize to the model's 128x128 input
test_image = img_to_array(test_image) / 255  # convert image to np array and normalize
test_image = np.expand_dims(test_image, axis=0)  # change dimension from 3D to 4D (batch of 1)

result = model.predict(test_image)  # predict whether the plant is diseased

pred = np.argmax(result, axis=1)


print(pred)
if pred == 0:
    print("Tomato - Bacteria Spot Disease")
elif pred == 1:
    print("Tomato - Early Blight Disease")
elif pred == 2:
    print("Tomato - Healthy and Fresh")
elif pred == 3:
    print("Tomato - Late Blight Disease")
elif pred == 4:
    print("Tomato - Leaf Mold Disease")
elif pred == 5:
    print("Tomato - Septoria Leaf Spot Disease")
elif pred == 6:
    print("Tomato - Target Spot Disease")
elif pred == 7:
    print("Tomato - Tomato Yellow Leaf Curl Virus Disease")
elif pred == 8:
    print("Tomato - Tomato Mosaic Virus Disease")
elif pred == 9:
    print("Tomato - Two Spotted Spider Mite Disease")
CHAPTER 6
RESULT AND DISCUSSION
The Leaf Disease Detection and Fertilizer Spraying Robot was designed, implemented, and tested to assess its performance in both manual and autonomous modes. The results show how effectively the system performs agricultural operations such as plowing, seeding, and fertilizer spraying, as well as how well it identifies leaf diseases.

The navigation and task-execution capabilities of the ESP32-based robotic platform were evaluated. In manual mode, controlled through the Blynk app, the robot moved precisely in all four directions using the 60 RPM motors, and the app's control interface was responsive and easy to use. In auto mode, the ultrasonic sensor reliably identified obstacles, ensuring safe travel. During testing, the 1000 RPM motor-powered cutting mechanism successfully cleared the area around the plants, the plowing arrangement driven by two 10 RPM motors produced consistent soil preparation, and the servo-driven seed-sowing device placed seeds precisely and consistently. Together these capabilities improved the robot's suitability for use in agricultural settings.
Figure: Blynk App Control Interface
The Raspberry Pi-based disease detection system used the trained Convolutional Neural Network (CNN) model to correctly identify leaf diseases. The model reliably distinguished healthy from unhealthy leaves while processing, in real time, test photos captured by the USB camera. When a diseased plant was detected, the fertilizer-spraying relay and pump were triggered promptly, ensuring targeted treatment. The trained CNN model showed high accuracy, with performance metrics including precision, recall, and F1-score confirming dependable disease identification. Integration of the ESP32 and Raspberry Pi components through IoT connectivity made the overall system run smoothly, and the Blynk cloud improved accessibility and user experience by providing real-time updates and control.
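For reference, these metrics are computed from the counts of true positives (TP), false positives (FP), and false negatives (FN), as in the short sketch below; the example counts are hypothetical and not measurements from this project.

# Precision, recall, and F1-score from hypothetical confusion counts.
def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)  # of all "diseased" predictions, how many were correct
    recall = tp / (tp + fn)     # of all truly diseased leaves, how many were found
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=5)
print(f"precision={p:.2f}, recall={r:.2f}, F1={f1:.2f}")
# -> precision=0.90, recall=0.95, F1=0.92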
Overall, the system improved the accuracy of plant health monitoring, reduced labor requirements, and effectively automated routine agricultural tasks. The main difficulties observed were occasional false-positive disease detections and minor delays in relay activation during high processing load; these can be resolved by further optimizing the CNN model and improving the system's hardware-software integration.
CHAPTER 7
CONCLUSION
The promise of combining IoT, machine learning, and renewable energy for
precision agriculture has been effectively illustrated by the creation of the Leaf
Disease Detection and Fertilizer Spraying Robot. By combining cutting-edge features
like multifunctional agricultural operations, targeted fertilizer spraying, and automated
disease diagnosis, the robot offers a workable and sustainable answer to today's
farming problems. With both manual and autonomous operation modes, the ESP32-
based robotic platform efficiently controls navigation and job execution. While Auto
mode uses an ultrasonic sensor for obstacle identification and navigation, Manual
mode allows users to operate the robot remotely via the Blynk app. The robot's adaptability is further increased by the attachments for root cutting, seed sowing, and plowing, and the dual 10 RPM ploughing motors and the servo-controlled seed-sowing mechanism operated dependably, ensuring steady functioning throughout testing.
Using real-time photos taken with a USB camera, the Raspberry Pi-based leaf disease
detection system correctly diagnosed unhealthy plants using a trained Convolutional
Neural Network (CNN). In order to assure accurate and effective treatment, minimize
chemical waste, and enhance plant health monitoring, the fertilizer spraying system
was automatically activated upon disease detection. Performance indicators such as
recall and accuracy confirmed the model's efficacy. The system is environmentally
beneficial and appropriate for installation in places with restricted access to power
since it incorporates a solar panel, which offers a renewable energy source. This
function lessens the robot's reliance on external power sources while improving its
operating efficiency and sustainability.
All things considered, the project met its goals and demonstrated a low-cost,
scalable solution for automated agricultural chores. Even if issues like sporadic false-
positive detections and relay activation delays were observed, they can be resolved
with more tuning. This system's combination of solar power, machine learning, and
the Internet of Things makes it a noteworthy development in precision agriculture that
has the potential to transform conventional farming methods and boost agricultural
output.
CHAPTER 8
REFERENCES
[1]. Abed, Sudad H., Alaa S. Al-Waisy, Hussam J. Mohammed, and Shumoos Al-
Fahdawi. "A modern deep learning framework in robot vision for automated
bean leaves diseases detection." International Journal of Intelligent Robotics
and Applications 5, no. 2 (2021): 235-251.
[2]. Rahul, M. S. P., and M. Rajesh. "Image processing based automatic plant
disease detection and stem cutting robot." In 2020 Third International
Conference on Smart Systems and Inventive Technology (ICSSIT), pp. 889-
894. IEEE, 2020.
[3]. Nooraiyeen, Aamina. "Robotic vehicle for automated detection of leaf
diseases." In 2020 IEEE International Conference on Electronics, Computing
and Communication Technologies (CONECCT), pp. 1-6. IEEE, 2020.
[4]. Fernando, Sandunika, Ranusha Nethmi, Ashen Silva, Ayesh Perera, Rajitha De
Silva, and Pradeep KW Abeygunawardhana. "Intelligent disease detection
system for greenhouse with a robotic monitoring system." In 2020 2nd
International Conference on Advancements in Computing (ICAC), vol. 1, pp.
204-209. IEEE, 2020.
[5]. Dharanika, T., S. Ruban Karthik, S. Sabhariesh Vel, S. Vyaas, and S.
Yogeshwaran. "Automatic leaf disease identification and fertilizer agrobot."
In 2021 7th International Conference on Advanced Computing and
Communication Systems (ICACCS), vol. 1, pp. 1341-1344. IEEE, 2021.
[6]. Ahmed, Shahad, and Saman Hameed Ameen. "Detection and classification of
leaf disease using deep learning for a greenhouses’ robot." Iraqi Journal of
Computers, Communications, Control and Systems Engineering 21, no. 4
(2021): 15-28.
[7]. Cubero, Sergio, Ester Marco-Noales, Nuria Aleixos, Silvia Barbé, and Jose
Blasco. "Robhortic: A field robot to detect pests and diseases in horticultural
crops by proximal sensing." Agriculture 10, no. 7 (2020): 276.
[8]. Forhad, Shamim, Kazi Zakaria Tayef, Mahamudul Hasan, A. N. M. Shahebul
Hasan, Md Zahurul Islam, and Md Riazat Kabir Shuvo. "An autonomous
agricultural robot for plant disease detection." In The Fourth Industrial
Revolution and Beyond: Select Proceedings of IC4IR+, pp. 695-708.
Singapore: Springer Nature Singapore, 2023.
[9]. Xenakis, Apostolos, Georgios Papastergiou, Vassilis C. Gerogiannis, and
George Stamoulis. "Applying a convolutional neural network in an IoT robotic
system for plant disease diagnosis." In 2020 11th International Conference on
Information, Intelligence, Systems and Applications (IISA), pp. 1-8. IEEE,
2020.
[10]. Karpyshev, Pavel, Valery Ilin, Ivan Kalinov, Alexander Petrovsky, and Dzmitry
Tsetserukou. "Autonomous mobile robot for apple plant disease detection based
on cnn and multi-spectral vision system." In 2021 IEEE/SICE international
symposium on system integration (SII), pp. 157-162. IEEE, 2021.
[11]. Ouyang, Chen, Emiko Hatsugai, and Ikuko Shimizu. "Tomato disease
monitoring system using modular extendable mobile robot for greenhouses:
Automatically reporting locations of diseased tomatoes." Agronomy 12, no. 12
(2022): 3160.
[12]. Chowdhury, Muhammad EH, Tawsifur Rahman, Amith Khandakar, Mohamed
Arselene Ayari, Aftab Ullah Khan, Muhammad Salman Khan, Nasser Al-
Emadi, Mamun Bin Ibne Reaz, Mohammad Tariqul Islam, and Sawal Hamid
Md Ali. "Automatic and reliable leaf disease detection using deep learning
techniques." AgriEngineering 3, no. 2 (2021): 294-312.
[13]. Bir, Paarth, Rajesh Kumar, and Ghanshyam Singh. "Transfer learning based
tomato leaf disease detection for mobile applications." In 2020 IEEE
International Conference on Computing, Power and Communication
Technologies (GUCON), pp. 34-39. IEEE, 2020.
[14]. Chaitanya, Pvr, Dileep Kotte, A. Srinath, and K. B. Kalyan. "Development of
smart pesticide spraying robot." International Journal of Recent Technology
and Engineering 8, no. 5 (2020): 2193-2202.
[15]. Hidayah, AH Nurul, Syafeeza Ahmad Radzi, Norazlina Abdul Razak, Wira
Hidayat Mohd Saad, Y. C. Wong, and A. Azureen Naja. "Disease Detection of
Solanaceous Crops Using Deep Learning for Robot Vision." Journal of
Robotics and Control (JRC) 3, no. 6 (2022): 790-799.