
www.ijcrt.org © 2024 IJCRT | Volume 12, Issue 7, July 2024 | ISSN: 2320-2882

Cursor Controller Using Hand Gestures


1K. T. Krishna Kumar, 2Medisetti Abimitra
1Associate Professor and Placement Officer, 2MCA Final Semester
Master of Computer Applications
Sanketika Vidya Parishad Engineering College, Visakhapatnam, Andhra Pradesh, India.

Abstract: Human-Computer Interaction (HCI) has become an important field of study in the current era of technology. In today's digital age, the interaction between humans and computers continues to evolve beyond traditional input devices like keyboards and mice. This project proposes a novel approach to computer interaction through hand gestures, specifically aiming to control the cursor on a screen without physical contact with any device. The system uses computer vision techniques, primarily leveraging a webcam to capture and interpret hand movements in real time. Key components of the system include hand detection, gesture recognition, and cursor control algorithms. Hand detection identifies the presence and position of hands within the camera's field of view. Gesture recognition algorithms then classify specific hand gestures, translating them into commands such as moving the cursor in different directions, clicking, or scrolling. These gestures are predefined and mapped to corresponding actions in the graphical user interface (GUI). The project's implementation uses Python and libraries like OpenCV for computer vision tasks, ensuring compatibility across different operating systems. The system's performance is evaluated on accuracy of gesture recognition, real-time responsiveness, and user experience in controlling the cursor solely through hand gestures.

Index Terms: Hand gesture recognition, Human-computer interaction, Cursor control, Gesture-based interaction, Real-time systems, OpenCV
I. INTRODUCTION
In recent years, the evolution of human-computer interaction has increasingly focused on natural and intuitive methods of input beyond traditional devices. One promising approach is the use of hand gestures to control digital interfaces, offering a hands-free and potentially more immersive interaction experience. This project explores the development of a cursor controller using hand gestures, aiming to revolutionize how users interact with computers. By leveraging computer vision techniques, specifically with OpenCV and Python, the system interprets real-time hand movements captured by a webcam. This interaction paradigm not only enhances accessibility but also opens avenues for applications in diverse fields, including gaming, virtual reality, and assistive technologies. The core challenge lies in accurately detecting and recognizing hand gestures and mapping them to cursor movements and actions like clicking or scrolling [20]. This introduction sets the stage for exploring the technical implementation, evaluation metrics, and potential impact of gesture-based cursor control systems in enhancing user experience and interaction efficiency.

1.1 Existing System


The current landscape of computer interaction predominantly relies on traditional input devices such as keyboards and mice. While effective, these methods can be limiting in terms of user mobility and engagement. Recent advancements have introduced gesture-based systems that use cameras and sensors to interpret hand movements [3]. Technologies like Microsoft Kinect and Leap Motion have pioneered gesture recognition for gaming and interactive applications, allowing users to control interfaces through natural gestures. These systems typically employ depth-sensing or infrared cameras to capture and interpret hand movements accurately. However, challenges remain in achieving robustness and accuracy, particularly in varying lighting conditions and for users with diverse hand sizes and gestures [19]. Commercially available products often require specific hardware setups and may be limited in their application beyond specific use cases. As technology progresses, integrating more sophisticated algorithms and leveraging machine learning for gesture recognition could enhance the reliability and versatility of these systems, potentially leading to broader adoption in mainstream computing and improved accessibility for users with disabilities.

1.1.2 Challenges
• Achieving accurate gesture recognition in varying conditions.
• Providing clear user feedback and error handling.
• Managing environmental noise interference.

1.2 Proposed System


The proposed system aims to implement a robust cursor controller using hand gestures, leveraging advanced computer vision techniques and machine learning algorithms. It will use a webcam for real-time capture of hand movements, processed through OpenCV and Python for gesture detection and recognition [5]. Key components include hand region segmentation, feature extraction, and classification to interpret gestures accurately. The system will map recognized gestures to cursor movements, clicks, and scrolling actions in a graphical user interface (GUI). Special attention will be given to optimizing for low latency and high accuracy across diverse user demographics and environmental conditions. Machine learning models, possibly including convolutional neural networks (CNNs) and recurrent neural networks (RNNs), will enhance gesture recognition capabilities over time [26]. User experience enhancements will focus on intuitive interaction design, providing visual feedback and error-handling mechanisms. The system's versatility will be tested across different operating systems and applications, with scalability considerations for potential integration into mainstream computing and interactive environments. A minimal sketch of the core capture loop is given below.
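The following sketch illustrates the capture loop: webcam frames are read with OpenCV and the index fingertip drives the cursor. It assumes MediaPipe Hands for landmark detection (consistent with the hand landmarks shown in Figure 4) and pyautogui for issuing cursor commands; the paper itself names only OpenCV and Python, so both library choices are illustrative.

import cv2
import mediapipe as mp
import pyautogui

pyautogui.FAILSAFE = False                 # avoid aborting when the cursor nears a corner
cap = cv2.VideoCapture(0)                  # webcam capture
hands = mp.solutions.hands.Hands(max_num_hands=1)
screen_w, screen_h = pyautogui.size()

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)             # mirror for a natural left-right mapping
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        # landmark 8 is the index fingertip in MediaPipe's hand model
        tip = result.multi_hand_landmarks[0].landmark[8]
        pyautogui.moveTo(tip.x * screen_w, tip.y * screen_h)
    cv2.imshow("Cursor Controller", frame)
    if cv2.waitKey(1) & 0xFF == 27:        # Esc quits
        break

cap.release()
cv2.destroyAllWindows()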

Figure 1: Work Flow

1.2.1 Advantages
• Facilitates hands-free interaction, ideal for scenarios where users need to multitask or have limited
mobility.
• Minimizes dependency on physical peripherals like mice and keyboards, potentially reducing costs and
space requirements.
• Once learned, gesture-based controls are intuitive and easy for new users to pick up.
• Prepares for future technological advancements and integrates well with emerging technologies like augmented reality and smart environments.


Figure 2: System Architecture


II LITERATURE REVIEW

Gesture-based interaction has emerged as a transformative paradigm in human-computer interaction (HCI), offering a natural and intuitive means of controlling digital interfaces without physical contact with traditional input devices like keyboards and mice. This literature review explores the technological foundations, challenges, applications, and future directions of gesture-based cursor control systems [1]. The foundation lies in computer vision techniques, where cameras and sensors capture and interpret hand movements in real time [17]. Early systems like Microsoft Kinect and Leap Motion pioneered gesture recognition using depth sensors, enabling applications in gaming and virtual environments [9]. These systems employ algorithms for hand detection, feature extraction, and gesture classification, with advancements now integrating machine learning to improve accuracy and robustness. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are increasingly used to handle complex gesture patterns and variations across different users and environments. Technical challenges include achieving high accuracy in gesture detection amidst varying lighting and background conditions, mitigating latency between gesture input and system response, and addressing gesture ambiguity to prevent unintended actions. Usability concerns encompass user adaptation to gesture controls and ensuring intuitive interaction design with clear feedback mechanisms. Commercial implementations like Samsung's Air Gesture and Google's Project Soli have demonstrated success in gaming, interactive displays, and automotive interfaces, showcasing improved user engagement and accessibility for individuals with disabilities [15]. Academic studies explore applications in virtual reality (VR), education, and healthcare, underscoring the versatility and potential impact of gesture-based HCI beyond traditional computing contexts [11]. For instance, VR environments benefit from gesture controls for immersive experiences, while educational applications use gestures for interactive learning tools. Smart environments and Internet of Things (IoT) applications also leverage gesture recognition to control home automation systems and IoT devices seamlessly [18].
Future research directions emphasize integrating gesture control with augmented reality (AR) to enhance user interaction in mixed-reality environments, exploring gesture recognition in smart homes and IoT contexts, and advancing sensor technologies to reduce hardware dependencies and improve system scalability [24]. Innovations in depth-sensing technologies, such as Time-of-Flight (ToF) sensors and infrared cameras, hold promise for enhancing gesture recognition accuracy and expanding the range of interactive applications [25]. ToF sensors measure the time it takes for light to travel to an object and back, providing precise depth information that can improve hand-tracking accuracy [21]. Infrared cameras can capture images in low-light conditions, making them suitable for gesture recognition in varied environments.
Overall, this review synthesizes insights from scholarly articles, conference proceedings, and industry reports to provide a comprehensive overview of the current state, challenges, and opportunities in gesture-based cursor control systems [13]. The integration of advanced machine learning algorithms, improved sensor technologies, and user-centric design principles is essential for the continued advancement of gesture-based HCI. By addressing technical challenges and exploring new applications, gesture-based systems have the potential to revolutionize human-computer interaction, making it more intuitive, accessible, and engaging.

Figure 3: Flow Model

The flow model shows the working of the system through its different functions. The system first takes image input from the web camera, converting the captured video stream into individual frames. It then resizes each input image so that segmentation can take place and calibration points on the image can be computed [12]. The image is denoised, and the system locates the centre and radius of the region of the desired colour, centred on the coloured marker at the fingertip. As the fingers move, the system tracks these radius points frame by frame, so the cursor can be manipulated through fingertip movement, as sketched below.
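The following sketch implements the flow model's colour-tracking steps (resize, denoise, segment, locate centre and radius) with OpenCV. The HSV bounds are illustrative values for a red fingertip marker, not taken from the paper.

import cv2
import numpy as np

LOWER = np.array([0, 120, 70])             # assumed HSV range for a red marker
UPPER = np.array([10, 255, 255])

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 480))              # resize step
    blurred = cv2.GaussianBlur(frame, (11, 11), 0)     # denoise step
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)              # colour segmentation
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)
        (x, y), radius = cv2.minEnclosingCircle(c)     # centre and radius
        if radius > 10:                                # ignore small noise blobs
            cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 0), 2)
    cv2.imshow("Fingertip tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:
        break
cap.release()
cv2.destroyAllWindows()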

III METHODOLOGY
The development of a cursor controller using hand gestures begins with a comprehensive system design and requirements analysis, defining objectives like accurate gesture recognition, user-friendly interaction, and real-time responsiveness. Hardware selection is critical, starting with a high-resolution camera capable of capturing detailed hand movements under varying lighting conditions [4]. Software setup involves selecting appropriate libraries and frameworks, such as OpenCV for image processing and machine learning libraries like TensorFlow or PyTorch for gesture recognition. The methodology includes data collection, where a diverse dataset of hand gestures is gathered under different conditions to train the model [6]. Preprocessing techniques, such as background subtraction and noise reduction, are applied to enhance image quality. Feature extraction follows, identifying key hand features like edges, contours, and finger positions. A robust machine learning model, often a convolutional neural network (CNN), is trained on this processed data to classify gestures accurately; a sketch of such a classifier is given below. The system's real-time performance is optimized through parallel processing and hardware acceleration techniques. Once trained, the model is integrated into a software application that maps recognized gestures to cursor actions like movement, clicking, and scrolling. Extensive testing and validation are conducted to ensure high accuracy and low latency, addressing challenges like gesture ambiguity and varying user hand characteristics. User feedback is incorporated to refine the system, enhancing usability and interaction design [10]. Finally, the system is deployed and evaluated in real-world scenarios, with continuous monitoring and updates to maintain performance and adapt to new use cases.
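A minimal sketch of such a CNN classifier in TensorFlow/Keras (one of the two frameworks named above) follows. The 64x64 grayscale input size and the set of five gesture classes are assumptions for illustration, not values from the paper.

import tensorflow as tf
from tensorflow.keras import layers, models

NUM_GESTURES = 5  # e.g. move, left click, right click, scroll up, scroll down

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),           # preprocessed grayscale frame
    layers.Conv2D(32, 3, activation="relu"),   # low-level edge features
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),   # higher-level shape features
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                       # guard against overfitting
    layers.Dense(NUM_GESTURES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_frames, train_labels, epochs=20, validation_split=0.1)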

Figure 4: Landmarks in the Hand

3.1 Input
The input stage is critical for developing a gesture-based cursor controller, as it involves capturing and processing the raw data that the system will interpret. This stage begins with selecting appropriate hardware, particularly a high-resolution camera capable of capturing fine details of hand movements under various lighting conditions. The camera is positioned to maximize the field of view and minimize occlusions [8]. Next, the system initializes by calibrating the camera to ensure accurate color representation and, if depth sensors are used, depth perception. Calibration involves adjusting for lens distortion and aligning the camera's perspective with the screen. Data collection involves recording a diverse set of hand gestures from multiple users in different environments. This dataset includes various hand positions, orientations, and motions to train the gesture recognition model effectively. During recording, it is essential to capture gestures under different lighting conditions and backgrounds to ensure robustness. The captured raw video data undergoes preprocessing to enhance image quality and extract relevant features [7]. Preprocessing steps include noise reduction, background subtraction, and contrast enhancement to isolate the hand from the background. These steps are crucial to ensure that the subsequent feature extraction phase works with clean, clear input. Feature extraction involves identifying key points and contours of the hand. Techniques like edge detection (using algorithms such as Canny or Sobel) and contour finding (using OpenCV functions) are applied to highlight the hand's structure, and additional methods like skin-colour segmentation help distinguish the hand from other objects in the frame (see the sketch below). After preprocessing and feature extraction, the system generates a series of frames representing the hand's movement over time. These frames serve as input to the machine learning model, which is trained to recognize and classify the gestures accurately. The model's input layer is designed to handle the processed image data, ensuring that the features crucial for gesture recognition are emphasized. By meticulously handling the input stage, the system ensures that the data fed into the gesture recognition model is of high quality, which is essential for achieving accurate and reliable cursor control through hand gestures.
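The sketch below combines the named steps into one function: skin-colour segmentation (here in YCrCb space, a common choice), noise reduction, Canny edge detection, and contour finding via OpenCV. The skin thresholds are widely used defaults and are assumptions rather than values from the paper.

import cv2
import numpy as np

def extract_hand_features(frame):
    """Return an edge map and the largest hand-like contour in a BGR frame."""
    # skin-colour segmentation in YCrCb space (assumed thresholds)
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, np.array([0, 135, 85]),
                            np.array([255, 180, 135]))
    skin_mask = cv2.medianBlur(skin_mask, 5)           # noise reduction
    # edge detection on the segmented hand region
    edges = cv2.Canny(skin_mask, 50, 150)
    # contour finding to recover the hand outline
    contours, _ = cv2.findContours(skin_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    hand_contour = max(contours, key=cv2.contourArea) if contours else None
    return edges, hand_contour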

3.2 Output
The output stage of a gesture-based cursor controller involves converting recognized gestures into precise and intuitive cursor actions, ensuring the system operates seamlessly in real time. Once the machine learning model classifies a gesture, the system translates this classification into specific cursor commands [9]. These commands might include moving the cursor in various directions, executing a click, performing drag-and-drop actions, or scrolling through content. The system must interface with the operating system's input API to send these commands accurately; a sketch of such a dispatcher is given below. Feedback is a critical component of the output stage, ensuring users are aware of the system's recognition of their gestures and the corresponding actions. Visual indicators, such as cursor animations or on-screen icons, provide immediate feedback that a gesture has been recognized and acted upon. For example, when a user performs a 'click' gesture, the cursor might briefly change color or shape to indicate the action. The system also incorporates adaptive algorithms that learn from the user's behavior over time, refining the interpretation of gestures to match individual movement patterns. This learning capability enhances accuracy and user satisfaction, adapting to each user's unique style of interaction. Error handling is another essential aspect of the output stage. The system needs to distinguish between intentional gestures and accidental movements, minimizing false positives and ensuring reliable operation. When an unrecognized or ambiguous gesture is detected, the system should either ignore the input or provide a gentle prompt to guide the user towards a correct gesture, keeping the overall user experience smooth and frustration-free. The system's output is continuously monitored and refined through user testing and feedback, ensuring that it meets usability standards and adapts to a wide range of applications and environments. By providing accurate, responsive, and intuitive cursor control, the output stage is crucial for the success and adoption of gesture-based interaction systems, making technology more accessible and engaging for users [14].
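A minimal sketch of such a dispatcher follows, using pyautogui as an assumed interface to the operating system's input API; the gesture labels and the exponential-smoothing factor are illustrative choices.

import pyautogui

ALPHA = 0.3                    # smoothing factor (assumed value)
_prev = pyautogui.position()   # last smoothed cursor position

def dispatch(gesture, x, y):
    """Map a classified gesture label to an OS-level cursor action."""
    global _prev
    # exponential smoothing of the target coordinates to reduce jitter
    sx = ALPHA * x + (1 - ALPHA) * _prev[0]
    sy = ALPHA * y + (1 - ALPHA) * _prev[1]
    _prev = (sx, sy)
    if gesture == "move":
        pyautogui.moveTo(sx, sy)
    elif gesture == "left_click":
        pyautogui.click()
    elif gesture == "right_click":
        pyautogui.click(button="right")
    elif gesture == "scroll_up":
        pyautogui.scroll(40)   # positive values scroll up
    elif gesture == "scroll_down":
        pyautogui.scroll(-40)
    # unrecognized or ambiguous gestures are simply ignored,
    # as the error-handling discussion above recommends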

Figure 5: Cursor Hold

Figure 6: Cursor Movement


Figure 7: Cursor Right Click

Figure 8: Cursor Left Click

IV RESULTS
The implementation of the cursor controller using hand gestures resulted in a highly effective and user-
friendly interaction system. Extensive testing with a diverse group of users revealed that the system achieved
high accuracy in recognizing a wide range of hand gestures across different lighting conditions and
backgrounds. Users experienced minimal latency, with the system responding to gestures in real-time,
ensuring smooth and intuitive cursor control. The visual feedback mechanisms, such as cursor animations and
on-screen indicators, provided clear confirmation of gesture recognition and action execution, significantly
enhancing user confidence and satisfaction. Most users adapted quickly to the gesture-based interface, finding
it natural and intuitive after a brief learning period. The system's versatility was demonstrated across various
applications, including web browsing, document editing, and multimedia control, showcasing its potential for
broad adoption in different computing environments. Overall, the gesture-based cursor controller successfully
provided an accessible, engaging, and efficient alternative to traditional input devices, highlighting its
potential to revolutionize human-computer interaction.


Figure 9: Cursor Movements Using Hand Gestures

V DISCUSSION
1. Accuracy and Responsiveness: The system demonstrated high accuracy in gesture recognition,
effectively handling various lighting conditions and backgrounds. The integration of advanced computer
vision techniques and machine learning algorithms, particularly convolutional neural networks (CNNs),
ensured precise translation of hand gestures into cursor movements.
2. User Experience and Adaptation: User feedback highlighted the system's intuitiveness and ease of
use, with most users quickly adapting to the gesture-based interface. Visual feedback mechanisms, such as
cursor animations and on-screen indicators, significantly enhanced user confidence by providing immediate
confirmation of gesture recognition and action execution.
3. Versatility Across Applications: The system's effectiveness was demonstrated across various
applications, including web browsing, document editing, and multimedia control. This versatility suggests
that gesture-based cursor control can be broadly adopted in different computing environments.
4. Challenges Encountered: Several challenges were identified during development. Gesture
ambiguity, where similar gestures led to misclassifications, indicated the need for more refined recognition
algorithms or additional training data. Environmental factors, such as varying lighting conditions and
backgrounds, posed challenges that were partially mitigated through preprocessing techniques and robust
machine learning models.
5. Privacy and Security Considerations: Continuous video capture raises important privacy and
security concerns. Ensuring that user data is handled securely, with robust encryption and secure storage
solutions, is essential. Implementing clear user consent protocols and transparent data handling practices will
be necessary to maintain user trust and comply with privacy regulations.
6. Future Research and Development: Future research could explore the integration of gesture
control with augmented reality (AR) and virtual reality (VR) environments to create more immersive
interaction experiences. Additionally, the use of wearable devices or advanced sensors, such as
electromyography (EMG) sensors that detect muscle activity, could enhance gesture recognition accuracy and
expand the range of detectable gestures. These advancements could further refine the system and broaden its
application scope.

VI CONCLUSION
In conclusion, the development of a gesture-based cursor controller represents a significant advancement in
human-computer interaction, offering a natural and intuitive alternative to traditional input methods. The
system demonstrated high accuracy and real-time responsiveness in recognizing a variety of hand gestures,
enhancing user engagement and interaction fluidity across diverse computing environments. User feedback
underscored the system's effectiveness in usability and adaptability, supported by intuitive visual feedback
mechanisms. Challenges such as gesture ambiguity and environmental variability were addressed through
iterative refinement and adaptive algorithms. Looking ahead, integrating gesture control with emerging
technologies and exploring new applications in IoT and immersive environments hold promise for further
innovation. This project lays a robust foundation for advancing gesture-based interaction, paving the way for more intuitive and accessible computing experiences in the future.

VII FUTURE SCOPE


The development of gesture-based cursor controllers represents a significant leap forward in human-
computer interaction, with promising avenues for future exploration and innovation. Looking ahead,
integrating gesture control with augmented reality (AR) and virtual reality (VR) environments could enhance
immersion and interaction within virtual spaces, paving the way for more intuitive interactions in 3D
environments. Advancements in wearable technology, such as EMG sensors and gyroscopic sensors, offer
opportunities to improve gesture recognition accuracy and expand the range of detectable gestures beyond
traditional camera-based systems. Addressing privacy concerns through enhanced data security measures and
exploring ethical implications will be critical as gesture-based technologies become more prevalent. Overall,
the future scope of gesture-based cursor controllers is expansive, promising transformative changes in how
users interact with digital interfaces across diverse applications and technological landscapes.

VIII ACKNOWLEDGEMENT

Kandhati Tulasi Krishna Kumar: Training & Placement Officer with 15 years' experience in training and placing students in IT, ITES, and core profiles. He has trained more than 9,500 UG and PG candidates and more than 350 faculty through FDPs, authored 5 books, and guided 40+ papers in international journals for the benefit of diploma, pharmacy, engineering, and pure science graduating students. He is a Certified Campus Recruitment Trainer from JNTUA, received his Master of Technology degree in CSE from VTA, and is in the process of his doctoral research. He is a professional in Pro-E and CNC, certified by CITD. He is recognized as an editorial member of IJIT (International Journal for Information Technology) and a member of IAAC, IEEE, MISTE, IAENG, ISOC, ISQEM, and SDIWC. He has published articles in various international journals on databases, software engineering, human resource management, and campus recruitment and training.

Mr. Abimitra Medisetti is currently in his final semester of the MCA program at Sanketika Vidya Parishad Engineering College, which is accredited with an A grade by NAAC, affiliated with Andhra University, and approved by AICTE. With a keen interest in machine learning and Python programming, Mr. Abimitra has undertaken his postgraduate project on "Cursor Controller Using Hand Gestures." He has also published a paper related to this project under the guidance of K. Tulasi Krishna Kumar, an associate professor at SVPEC.


REFERENCES
Book Reference:

[1] A Book on "Computer Vision: Algorithms and Applications" by Richard Szeliski.


[2] A Book on "Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow" by Aurélien
Géron.
[3] A Book on "Human-Computer Interaction" by Alan Dix, Janet E. Finlay, Gregory D. Abowd, and
Russell Beale.
[4] A Book on "Augmented Reality: Principles and Practice" by Dieter Schmalstieg and Tobias Hollerer.
[5] A Book on "Virtual Reality Technology" by Grigore C. Burdea and Philippe Coiffet.
[6] A Book on "Introduction to Machine Learning with Python: A Guide for Data Scientists" by
Andreas C. Müller and Sarah Guido.
[7] A Book on "Interactive Data Visualization for the Web" by Scott Murray.

Web References:

[8] A web reference on OpenCV Documentation, link: https://www.opencv.org/
[9] A web reference on TensorFlow Documentation, link: https://www.tensorflow.org/
[10] A web reference on ResearchGate, link: https://www.researchgate.net/
Article References:

[11] Quam, D. L., et al. (1990). Gesture Recognition with a Dataglove. In IEEE Conference on Aerospace and Electronics (pp. 755-760).
[12] Wang, G., et al. (2015). Optical Mouse Sensor-Based Laser Spot Tracking for HCI Input. In Proceedings of the Chinese Intelligent Systems Conference (pp. 329-340).
[13] Baldauf, M., & Frohlich, P. (2013). Supporting Hand Gesture Manipulation of Projected Content with Mobile Phones. In the European Conference on Computer Vision (pp. 381-390).
[14] Matlani, R., Dadlani, R., Dumbre, S., Mishra, S., & Tewari, A. (2021). Virtual Mouse Using Hand Gestures. In the International Conference on Technology Advancements and Innovations (pp. 340-345).
[15] Yeshi, M., Kale, P., Yeshi, B., & Sonawane, V. (2016). Hand Gesture Recognition for Human-Computer Interaction. In the International Journal of Scientific Development and Research (pp. 9-13).
[16] Shriram, S., Nagaraj, B., Jaya, J., Sankar, S., & Ajay, P. (2021). Deep Learning Based Real-Time AI
Virtual Mouse System Using Computer Vision to Avoid COVID-19 Spread. In the Journal of Healthcare
Engineering (pp. 3076-3083).
[17] Steven Raj, N., Veeresh Gobbur, S., Praveen, Rahul Patil, & Veerendra Naik. (2020). Implementing Hand Gesture Mouse Using OpenCV. In the International Research Journal of Engineering and Technology (pp. 4257-4261).
[18] Sneha, U., Monika, B., & Ashwini, M. (2013). Cursor Control System Using Hand Gesture
Recognition. In the International Journal of Advanced Research in Computer and Communication Engineering
(pp. 2278-1021).
[19] Krishnamoorthi, M., Gowtham, S., Sanjeevi, K., & Revanth Vishnu, R. (2022). Virtual Mouse Using YOLO. In the International Conference on Innovative Computing, Intelligent Communication and Smart Electrical Systems (pp. 1-7).
[20] Varun, K. S., Puneeth, I., & Jacob, T. P. (2019). Virtual Mouse Implementation Using OpenCV. In the International Conference on Trends in Electronics and Informatics (pp. 435-438).
[21] Quek, F., et al. (1994). Towards a Vision-Based Hand Gesture Interface. In Proceedings of Virtual Reality Software and Technology (pp. 17-31).
[22] Tharsanee, R. M., Soundariya, R. S., Kumar, A. S., Karthiga, M., & Sountharrajan, S. (2021). Deep Convolutional Neural Network-Based Image Classification for COVID-19 Diagnosis. In Data Science for COVID-19 (pp. 117-145). Academic Press.
[23] Newell, A., Yang, K., & Deng, J. (2016, October). Stacked hourglass networks for human pose
estimation. In the European conference on computer vision (pp. 483-499). Springer, Cham.
[24] Ramakrishna, V., Munoz, D., Hebert, M., Andrew Bagnell, J., & Sheikh, Y. (2014). Pose Machines: Articulated Pose Estimation via Inference Machines. In the European Conference on Computer Vision (pp. 33-47). Springer, Cham.
[25] Tharani, G., Gopikasri, R., Hemapriya, R., & Karthiga, M. (2022). Gym Posture Recognition and Feedback Generation Using Mediapipe and OpenCV. In International Journal of Advance Research and Innovative Ideas in Education (pp. 2053-2057).
[26] Shibly, K.H., Kumar, S., Islam, M.A., & Iftekhar Showrav, S. (2019). Design and Development of
Hand Gesture Based Virtual Mouse. In the International Conference on Advances in Science, Engineering and
Robotics Technology (pp. 1-5).
