
Proceedings of the Third International Conference on Intelligent Data Communication Technologies and Internet of Things (IDCIoT-2025)

IEEE Xplore Part Number: CFP25CV1-ART; ISBN: 979-8-3315-2754-9

Autonomous Indoor Navigation using Nav2


DOI: 10.1109/IDCIOT64235.2025.10915000 | 979-8-3315-2754-9/25/$31.00 ©2025 IEEE

Sumathi K, Ajay Balaji L, Danaraj AP, and Deepaganesh M
Department of Electronics and Communication Engineering,
KIT-KalaignarKarunanidhi Institute of Technology, Coimbatore, India
sumathikkit@gmail.com, kit.25.21bec007@gmail.com, kit.25.21bec023@gmail.com, kit.25.21bec024@gmail.com

Abstract—The project focuses on the design, development, and implementation of an Autonomous Mobile Robot (AMR) system specifically tailored for warehouse environments. The escalating demand for efficient warehouse management, driven by the e-commerce boom, necessitates the adoption of automated solutions to overcome challenges like labor shortages and operational inefficiencies. The proposed AMR system integrates advanced sensing technologies, including 2D LiDAR and IMU, with robust navigation and control algorithms to enable the robot to autonomously navigate through dynamic warehouse environments. The core of the system is built upon the ROS 2 framework, providing a robust and flexible platform for development and deployment. The project explores the implementation of Simultaneous Localization and Mapping (SLAM) techniques, utilizing the SLAM Toolbox, to enable the robot to build and update maps of its environment while simultaneously determining its position. Furthermore, the system incorporates advanced path planning algorithms and obstacle avoidance mechanisms to ensure safe and efficient navigation. The project utilizes Gazebo for realistic simulation and validation, allowing for thorough testing and optimization of the AMR system before real-world deployment. This research aims to contribute to the advancement of warehouse automation by developing a cost-effective and efficient AMR system that can significantly improve operational efficiency and productivity.

Keywords—SLAM (Simultaneous Localization and Mapping), AMR (Autonomous Mobile Robot), IMU (Inertial Measurement Unit)

OVERVIEW

The exponential growth of e-commerce has amplified the need for efficient and automated warehouse management systems. Conventional logistics processes often face challenges such as labor shortages, high attrition rates, and operational inefficiencies, which are further compounded by spatial constraints and increasing market demands. Autonomous Mobile Robots (AMRs) have become a viable solution to overcome these challenges by enhancing warehouse operations through sophisticated navigation and control systems. However, existing AMR solutions often fall short due to high costs and limited precision, necessitating the development of specialized systems tailored for dynamic warehouse environments. This project aims to design, develop, and implement an AMR system that leverages cutting-edge technologies such as LiDAR, machine learning, and advanced sensing algorithms to enable safe, reliable, and efficient navigation. The system integrates key features such as robust path planning, real-time obstacle avoidance, and compliance with safety standards to ensure seamless operations. With a focus on operational efficiency and adaptability, the proposed AMR system addresses pressing logistical challenges, enabling warehouses to meet evolving e-commerce demands while ensuring safety and regulatory compliance. By streamlining workflows and minimizing manual intervention, this system holds the potential to revolutionize warehouse logistics and enhance productivity.

I. LITERATURE REVIEW

The integration of Simultaneous Localization and Mapping (SLAM) for autonomous navigation has been extensively explored in the literature, with various approaches aiming to improve mapping accuracy and real-time performance. Among these, methods like the Extended Kalman Filter (EKF) and Rao-Blackwellized filters have been applied to LiDAR sensor data for efficient map creation. Several studies, such as Cui et al. (2020) [1], have evaluated various SLAM algorithms, including CoreSLAM, Gmapping, and Hector SLAM, and their application in autonomous navigation systems. The research also highlighted the use of LiDAR sensors and RViz in developing navigation capabilities, such as in the case of TurtleBot autonomous systems [2]. Moreover, other studies like Macenski et al. (2020) [3] explored the use of RFID sensors for object-level mapping in indoor environments, illustrating their potential for improving mapping accuracy in constrained or dynamic settings. A similar study by Tang (2020) [4] focused on assessing the adaptability of a SLAM-based mobile robot for indoor navigation, using the ROS framework and integrating the GMapping algorithm in a simulated environment with RViz. In parallel, Heggem et al. (2020) [5] underscored the importance of real-time performance in autonomous navigation systems by proposing a navigation platform built on automated vision and the ROS framework, utilizing the


Kinect sensor to replace traditional laser range finders. Furthermore, Jeonghyeon Pak et al. (2023) [6] conducted a comparative study on the performance of three popular SLAM algorithms—CoreSLAM, Gmapping, and Hector SLAM—in simulations for unmanned ground vehicles (UGVs) navigating various terrains, specifically focusing on defense-related applications. Marder-Eppstein et al. (2010) [7] explored the integration of voxel-based 3D mapping for autonomous navigation to enhance robustness in dynamic environments. Finally, in 1998, Burgard et al. [8] presented the software architecture of an autonomous tour-guide robot deployed at the Deutsches Museum Bonn, combining probabilistic reasoning and first-order logic for safe, high-speed navigation in crowded environments.
II. RELATED METHODOLOGY

Designing a robot that can efficiently navigate environments such as agricultural fields presents unique challenges compared to robots designed for industrial use. Industrial robots typically operate on smooth, paved surfaces where turning and maintaining speed are relatively straightforward. However, in agricultural fields, the terrain is often bumpy, and the robot must have the capability to maintain its center of gravity to avoid tipping. Additionally, turning in a field requires more force due to higher friction compared to paved roads. The robot must also maintain balance and stay on the intended path while navigating these uneven surfaces.

SLAM (Simultaneous Localization and Mapping) is used in autonomous navigation to allow the robot to create a map of its environment while simultaneously identifying its position within that map. SLAM is critical in scenarios where the environment is unknown, as it helps the robot localize itself and build a map in real time. While various mapping and localization techniques exist, SLAM adds the complexity of both tasks happening concurrently.

In this project, the SLAM Toolbox is used to implement the SLAM algorithm. The robot employs odometry (wheel encoder data) and laser data (from the LiDAR sensor) to generate a map of the environment. The SLAM Toolbox helps produce a 2D grid map, similar to a building's floor plan, to represent the environment. As the robot explores more of the unknown area, the quality of the generated map improves.
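For concreteness, a minimal ROS 2 launch file along the following lines can bring up the SLAM Toolbox's online mapping node. The package and executable names follow the public slam_toolbox distribution; the parameter values shown are illustrative assumptions, not this project's exact tuning:

```python
# Minimal sketch: launch slam_toolbox's online (live) asynchronous mapper.
# Parameter values are illustrative, not the project's exact configuration.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='slam_toolbox',
            executable='async_slam_toolbox_node',
            name='slam_toolbox',
            output='screen',
            parameters=[{
                'use_sim_time': True,      # Gazebo provides the clock
                'odom_frame': 'odom',      # odometry reference frame
                'base_frame': 'base_link', # robot body frame
                'map_frame': 'map',        # global map frame
                'scan_topic': '/scan',     # 2D LiDAR input
                'resolution': 0.05,        # occupancy-grid cell size (m)
            }],
        ),
    ])
```

Run alongside the simulation, this node consumes odometry and laser scans and publishes the growing occupancy grid on the /map topic.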
III. ANALYSIS OF ROS2 AND 2D LIDAR

The Robot Operating System (ROS) acts as the foundational framework for developing the robot's software. It facilitates low-level device control, implementation of common functions, and communication between different nodes in the system. ROS processes are organized into distinct nodes that interact with one another via a graph-based structure.

A. Analysis of ROS 2 and 2D LiDAR:

ROS 2 Humble Hawksbill, a long-term support (LTS) distribution, is a robust framework for robot software development. It integrates sensors, actuators, and algorithms, enabling precise navigation in complex environments. Key features include real-time processing, modular architecture, and compatibility with tools like Gazebo for simulation. The framework supports distributed systems, ensuring modular and scalable development with secure communication. Security in ROS 2 has been significantly enhanced through various measures, including encrypted communication between nodes, ensuring data integrity and confidentiality across the system. ROS 2 utilizes Secure DDS (Data Distribution Service) configurations to authenticate nodes and manage access control, preventing unauthorized interactions. Additionally, the framework supports transport-layer security and security plugins, which provide further protection against potential cyber threats. ROS 2 Humble facilitates motion control, sensor integration, and navigation with tools like ros2_control and AMCL, simplifying tasks like obstacle avoidance and path planning. These capabilities make it ideal for building reliable and efficient autonomous robotic systems for dynamic environments.
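A minimal rclpy node illustrates the node-and-topic graph structure described above; the node and topic names here are illustrative placeholders rather than identifiers taken from this project:

```python
# Sketch of a minimal ROS 2 node: one process in the graph, publishing
# on a topic that any other node may subscribe to.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class StatusPublisher(Node):
    def __init__(self):
        super().__init__('status_publisher')          # node name in the graph
        self.pub = self.create_publisher(String, 'robot_status', 10)
        self.create_timer(1.0, self.tick)             # publish at 1 Hz

    def tick(self):
        msg = String()
        msg.data = 'navigation active'
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(StatusPublisher())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```

With the node running, `ros2 topic echo /robot_status` in a second terminal shows messages traversing the graph.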
B. 2D LiDAR:

The 2D LiDAR sensor is a cornerstone of this project, providing accurate and real-time environmental data essential for autonomous navigation and mapping. LiDAR (Light Detection and Ranging) functions by emitting laser beams and measuring the time it takes for the beams to bounce back after striking objects. This time-of-flight information is then used to determine distances and generate a detailed 2D map of the surrounding environment.
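To make the time-of-flight geometry concrete, the sketch below converts a ROS 2 LaserScan message into Cartesian points in the sensor frame. The /scan topic name is the common convention and is assumed here rather than taken from the paper:

```python
# Sketch: recover 2D points from a LaserScan. Each range is a
# time-of-flight distance; paired with its beam angle it yields a
# point (x, y) in the sensor frame.
import math
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan

class ScanToPoints(Node):
    def __init__(self):
        super().__init__('scan_to_points')
        self.create_subscription(LaserScan, '/scan', self.on_scan, 10)

    def on_scan(self, scan: LaserScan):
        points = []
        for i, r in enumerate(scan.ranges):
            if scan.range_min < r < scan.range_max:   # discard invalid returns
                theta = scan.angle_min + i * scan.angle_increment
                points.append((r * math.cos(theta), r * math.sin(theta)))
        self.get_logger().info(f'{len(points)} valid returns in this sweep')

def main():
    rclpy.init()
    rclpy.spin(ScanToPoints())

if __name__ == '__main__':
    main()
```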
IV. SIMULATION

A. Gazebo:

Gazebo is a sophisticated 3D simulation tool that provides a virtual testing ground for developing and validating the robot's capabilities before real-world deployment. This project leverages Gazebo to create a highly realistic simulation environment resembling an agricultural field, complete with uneven terrain, dynamic obstacles, narrow passages, and regions of varying friction. Such an environment mimics the challenges the robot is expected to face, ensuring that its navigation and control systems are thoroughly tested. Custom models, including walls, trees, and other agricultural structures, are integrated into the simulation to test the robot's interaction with its surroundings. Gazebo's compatibility with ROS 2 Humble enables seamless integration of the robot model, designed using URDF and Xacro files, and its physical characteristics such as mass, friction, and inertia are accurately represented in the simulation. Gazebo also supports the simulation of key sensors such as 2D LiDAR, IMU, and wheel encoders, which provide the data needed for tasks like SLAM (Simultaneous Localization and Mapping), obstacle detection, and navigation. Its real-time physics engine simulates forces, inertia, and collisions, ensuring realistic robot behavior. By visualizing the robot's operations and continuously iterating on its design, Gazebo allows for optimization of the robot's systems, reducing risks


and development costs. This ensures the robot's readiness for the complexities of an agricultural field before transitioning to physical testing and deployment.
Creating custom objects for the Gazebo environment using Blender involves designing the 3D model, texturing it, and exporting it in the Collada (.dae) file format, which is supported by Gazebo. Blender, an open-source 3D modeling software, offers a robust platform for crafting these models. To begin, download and install Blender from its official website. Once installed, set up the workspace to create the object, keeping in mind the dimensions and scale suitable for the Gazebo simulation. Design the object using Blender's modeling tools, ensuring that the geometry is optimized for simulation to avoid unnecessary computational load. After modeling, apply textures and materials to enhance the object's visual realism. When the design is completed, export it in the Collada (.dae) format by selecting "File > Export > Collada (.dae)" in Blender. Ensure that the export settings are adjusted to include textures and transformations. The resulting .dae file, along with any texture files, can then be imported into Gazebo, where it can be integrated into the simulation environment.
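The same export can also be scripted through Blender's Python API. The sketch below, with an assumed output path, creates a simple box obstacle and writes it to a Collada file that Gazebo can load; it must be run inside Blender, where the bpy module is available:

```python
# Sketch: programmatic model creation and Collada export in Blender.
# Run inside Blender (bpy is only available there). The output path
# is an arbitrary placeholder.
import bpy

# Model a simple 1 m box obstacle resting on the ground plane
# (dimensions in metres, matching Gazebo's scale).
bpy.ops.mesh.primitive_cube_add(size=1.0, location=(0.0, 0.0, 0.5))

# Export the scene to Collada (.dae), the format Gazebo consumes.
# This mirrors "File > Export > Collada (.dae)" in the UI.
bpy.ops.wm.collada_export(filepath='/tmp/crate.dae')
```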
B. RViz:

RViz (ROS Visualization Tool) is a crucial element of the Robot Operating System (ROS) framework, playing a key role in this project by offering a graphical interface to visualize the robot's status, sensor data, and navigation processes. It is invaluable during both simulation and debugging stages, allowing developers to track and enhance the robot's performance in real time.

In this project, RViz is used to visualize data from the robot's sensors, including the 2D LiDAR, IMU, and wheel encoders. One of its key applications is displaying LiDAR point clouds, which represent the robot's perception of its surroundings in a 2D or 3D format. This allows developers to assess how well the robot detects obstacles and interacts with its environment, which is crucial for navigation and obstacle avoidance in agricultural fields.
V. ALGORITHMS FOR SLAM

Autonomous robots are designed to navigate indoor environments without colliding with obstacles. This capability is enabled by a method called "Simultaneous Localization and Mapping" (SLAM), which simultaneously builds and updates maps while determining the robot's position. The methods employed in SLAM include the Particle Filter, Extended Kalman Filter, FastSLAM, Covariance Intersection, and Graph-Based SLAM. These techniques are designed to enhance the accuracy and efficiency of localization and mapping in various environments. For indoor applications, SLAM integrates sensing, mapping, kinematic modeling, and the handling of dynamic obstacles and loop closure detection, ensuring reliable navigation.

The SLAM process relies on range measurement devices to observe the environment. These devices, combined with sensors and localization tools, enable the robot to identify its position based on landmarks. Once a landmark is detected, the robot processes this data to recognize and interpret its surroundings. Algorithms such as CoreSLAM, Gmapping, KartoSLAM, LagoSLAM, and HectorSLAM are widely applied in indoor SLAM tasks.

Performance optimization in autonomous navigation is achieved through several key strategies. Fine-tuning parameters within the Navigation2 (Nav2) stack enhances the efficiency of path-planning algorithms. Moreover, localization accuracy is significantly improved through sensor fusion, where data from the Inertial Measurement Unit (IMU) and odometry sensors are combined and filtered using the Extended Kalman Filter (EKF). This fusion effectively mitigates sensor noise and provides a more accurate state estimation. Additionally, the implementation of the Model Predictive Path Integral (MPPI) controller facilitates adaptive and smooth trajectory planning by predicting and optimizing future motion paths. Collectively, these methods contribute to the development of a highly reliable and efficient autonomous robotic system capable of navigating dynamic indoor environments.

This work leverages the SLAM Toolbox, a tool optimized for LiDAR scan matching, to address indoor navigation challenges. The SLAM Toolbox allows robots to create detailed indoor maps while simultaneously localizing themselves within those environments.
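The IMU/odometry EKF fusion described above is typically configured through the robot_localization package. A hedged launch sketch follows: the executable and parameter names come from that package's public interface, while the input topics and the choice of fused fields are illustrative assumptions:

```python
# Sketch: launch robot_localization's EKF to fuse wheel odometry and IMU.
# Topic names and fused fields are illustrative assumptions.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='robot_localization',
            executable='ekf_node',
            name='ekf_filter_node',
            output='screen',
            parameters=[{
                'frequency': 30.0,             # filter update rate (Hz)
                'two_d_mode': True,            # planar indoor robot
                'odom_frame': 'odom',
                'base_link_frame': 'base_link',
                'world_frame': 'odom',         # EKF publishes odom->base_link
                'odom0': '/odom',              # wheel-encoder odometry input
                'imu0': '/imu',                # IMU input
                # Fused fields per input, ordered as:
                # [x, y, z, roll, pitch, yaw,
                #  vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az]
                'odom0_config': [False, False, False, False, False, False,
                                 True,  True,  False, False, False, True,
                                 False, False, False],
                'imu0_config':  [False, False, False, False, False, True,
                                 False, False, False, False, False, True,
                                 True,  False, False],
            }],
        ),
    ])
```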
VI. SUGGESTED IMPLEMENTATION

Fig. 1 Suggested Implementation


Implementation Steps

Fig. 1 shows the implementation, which starts with a URDF file, a control system, and 2D LiDAR data. The Nav2 stack handles localization and path planning using costmaps to optimize navigation and avoid obstacles. Initial and goal coordinates guide the robot, with the system computing a collision-free path. Sensors like the LiDAR continuously detect obstacles, enabling real-time adjustments. The control system executes movement commands for smooth navigation. Adaptive Monte Carlo Localization (AMCL) ensures positional accuracy, recalculating paths if deviations occur. Feedback mechanisms address navigation failures, and the task completes when the robot verifies goal achievement with accurate position and orientation.

A. Understanding SLAM for Indoor Navigation:

Simultaneous Localization and Mapping (SLAM) is an essential technique that allows robots to navigate environments, like indoors, where GPS signals are unavailable. It allows a robot to identify its position within a building while simultaneously creating a map of its surroundings.

Role of SLAM in Indoor Navigation:
Mapping Objects and Layouts: SLAM identifies objects like walls, furniture, and obstacles, tracking their relative positions within the indoor space.
Tracking the Robot's Position: It updates the robot's location continuously as it moves, ensuring precise navigation in a dynamic environment.

Categories of SLAM for Indoor Applications:
Feature SLAM: Focuses on recognizing distinct features or landmarks within an indoor space. Suitable for environments with recognizable and repeatable elements, such as doors, windows, or unique objects like vending machines or bookcases.
Grid SLAM: Divides the indoor space into a grid of cells. Each cell is categorized as occupied, unoccupied, or uncertain.

B. Understanding Coordinate Transformations in Indoor Navigation

Role of Coordinate Frames - In robotic navigation, reference frames are used to define the robot's position and orientation in relation to its environment. The main frames used include:
Base Link: This frame is attached to the robot itself and serves as its local reference point.
Odom (Odometry) Frame: Acts as a reference frame relative to the robot's starting position and is used as the default world origin when no other reference is provided.

C. Integrating SLAM with Coordinate Frames

To resolve the drift issue while maintaining a smooth trajectory, SLAM introduces a map frame, which serves as a global reference for navigation. The relationship between the frames is managed as follows (a lookup sketch follows this list):
Odom to Base Link Transformation: Provides a smooth but potentially inaccurate estimate of the robot's position due to odometry drift.
Map to Odom Transformation: Corrects the drift by combining SLAM-based pose estimates with odometry data.
Map to Base Link Transformation: Provides a precise depiction of the robot's position in relation to the global map.
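As a concrete illustration of querying these frames, the following rclpy sketch looks up the map-to-base_link transform through tf2. The frame names match the convention above, while the node name and 1 Hz report rate are arbitrary choices:

```python
# Sketch: query the drift-corrected map -> base_link transform via tf2.
import rclpy
from rclpy.node import Node
from rclpy.time import Time
from tf2_ros import TransformException
from tf2_ros.buffer import Buffer
from tf2_ros.transform_listener import TransformListener

class PoseReporter(Node):
    def __init__(self):
        super().__init__('pose_reporter')
        self.tf_buffer = Buffer()
        self.tf_listener = TransformListener(self.tf_buffer, self)
        self.create_timer(1.0, self.report)

    def report(self):
        try:
            # Composed internally from SLAM's map->odom correction and
            # odometry's smooth odom->base_link estimate.
            t = self.tf_buffer.lookup_transform('map', 'base_link', Time())
            p = t.transform.translation
            self.get_logger().info(f'robot at x={p.x:.2f} m, y={p.y:.2f} m in map')
        except TransformException as ex:  # transform may not be available yet
            self.get_logger().warn(f'transform not yet available: {ex}')

def main():
    rclpy.init()
    rclpy.spin(PoseReporter())

if __name__ == '__main__':
    main()
```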
D. Using LiDAR and IMU for Dynamic Obstacle Avoidance

LiDAR Sensors: Continuously scan the surroundings to detect dynamic obstacles like moving people or objects. Provide real-time data to update the map and adjust the robot's trajectory, avoiding collisions.

IMU (Inertial Measurement Unit): Tracks the robot's orientation and angular velocity to assist in maintaining stability and precise motion. Works in conjunction with odometry to refine velocity and position estimates.

E. Map Generation for Indoor Navigation

Creating a precise navigation map is essential for enabling robots to navigate indoor environments autonomously. Below is a structured process for map generation tailored for indoor navigation:

Setting Up the Indoor Environment
Simulation Setup: Using tools like Gazebo, create a virtual indoor environment and position the robot within it. This environment can include walls, furniture, and other static objects typically found indoors.
Key Sensor: A 2D LiDAR sensor mounted on the robot serves as the main device for collecting spatial data of the indoor environment.

Capturing Indoor Spatial Data
How LiDAR Works: The 2D LiDAR sensor emits infrared beams to measure distances, generating an array of depth points in a 360-degree radius. These points represent the distance of surrounding objects from the robot.
Initial Data: The raw LiDAR data serves as a foundational input for creating the indoor map.

Transforming Sensor Data
To make the LiDAR data usable for mapping, certain ROS packages are utilized (a launch sketch follows this list):
SLAM Toolbox: Implements the online asynchronous mapping method to process live sensor data during robot movement. Generates and stores a reusable map for future navigation tasks.
pointcloud_to_laserscan: Converts point cloud data from the LiDAR into a 2D laser scan format suitable for indoor mapping.
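A hedged launch sketch for the pointcloud_to_laserscan conversion is shown below. The node and parameter names come from the public package, while the topic remappings and the height/range bounds are illustrative assumptions:

```python
# Sketch: flatten a point cloud into a 2D LaserScan for mapping.
# Topic names and slice bounds are illustrative assumptions.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='pointcloud_to_laserscan',
            executable='pointcloud_to_laserscan_node',
            name='pointcloud_to_laserscan',
            remappings=[
                ('cloud_in', '/points'),  # input point cloud (assumed topic)
                ('scan', '/scan'),        # flattened 2D scan for SLAM
            ],
            parameters=[{
                'min_height': 0.0,   # vertical slice of the cloud to keep (m)
                'max_height': 0.5,
                'range_min': 0.2,    # clip ranges to the sensor's valid band (m)
                'range_max': 12.0,
            }],
        ),
    ])
```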


Generating the Indoor Map

The SLAM Toolbox package plays a critical role in map creation by requiring two essential inputs:
1. Odometry Data
2. Laser Scan Data

Exploring the Indoor Space for Mapping

Manual Control: Initially, the robot can be operated manually using the teleop_key package to explore the indoor environment (a scripted stand-in is sketched below).
Live Visualization: RViz provides a graphical representation of the mapping process, showing how the indoor environment is being mapped in real time.
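In place of keyboard teleoperation, a scripted velocity publisher can drive the robot during mapping. This sketch assumes the conventional /cmd_vel topic; the creep speeds are arbitrary illustrative values:

```python
# Sketch: a scripted stand-in for keyboard teleop during mapping.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class SimpleDriver(Node):
    """Publishes a slow, constant velocity command to sweep the space."""
    def __init__(self):
        super().__init__('simple_driver')
        self.pub = self.create_publisher(Twist, '/cmd_vel', 10)
        self.create_timer(0.1, self.tick)  # 10 Hz command stream

    def tick(self):
        cmd = Twist()
        cmd.linear.x = 0.15   # slow forward creep (m/s)
        cmd.angular.z = 0.1   # gentle turn (rad/s) to cover the area
        self.pub.publish(cmd)

def main():
    rclpy.init()
    rclpy.spin(SimpleDriver())

if __name__ == '__main__':
    main()
```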

Saving the Indoor Map

Map Publication: The SLAM Toolbox publishes the generated map on the /map topic.
Persistent Storage: The map_server package stores the map from the /map topic in a file format, making it available for future navigation tasks (see the sketch below).
Reusable Map: The saved map eliminates the need to recreate it for subsequent indoor navigation activities.
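In practice this is usually done with the map saver tool from nav2_map_server. Purely for illustration, the self-contained sketch below subscribes to /map and writes the occupancy grid by hand as a PGM image, using the same free/occupied/unknown pixel conventions that map_server files use; the output path is an arbitrary placeholder:

```python
# Sketch: save the next OccupancyGrid received on /map as a PGM image.
import rclpy
from rclpy.node import Node
from nav_msgs.msg import OccupancyGrid

class MapSaverSketch(Node):
    def __init__(self):
        super().__init__('map_saver_sketch')
        self.create_subscription(OccupancyGrid, '/map', self.on_map, 1)

    def on_map(self, grid: OccupancyGrid):
        w, h = grid.info.width, grid.info.height
        with open('/tmp/indoor_map.pgm', 'wb') as f:
            f.write(f'P5\n{w} {h}\n255\n'.encode())  # binary PGM header
            # Occupancy values: 0 = free, 100 = occupied, -1 = unknown.
            for row in range(h - 1, -1, -1):         # PGM rows run top-to-bottom
                for col in range(w):
                    v = grid.data[row * w + col]
                    f.write(bytes([254 if v == 0 else 0 if v == 100 else 205]))
        self.get_logger().info(f'saved {w}x{h} map to /tmp/indoor_map.pgm')
        raise SystemExit  # one map is enough; stop spinning

def main():
    rclpy.init()
    try:
        rclpy.spin(MapSaverSketch())
    except (KeyboardInterrupt, SystemExit):
        pass

if __name__ == '__main__':
    main()
```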

Navigating the Indoor Environment

Once the map is created, the robot can use it for autonomous indoor navigation. It integrates:
Obstacle Detection: The 2D LiDAR continuously scans the environment for dynamic obstacles and updates the map accordingly.
Path Planning: The Nav2 stack utilizes the map to compute a collision-free path to the goal.
VII. AUTONOMOUS NAVIGATION FOR INDOOR ENVIRONMENTS

Once mapping and localization are complete, the robot is ready for autonomous navigation by combining data sources and planning algorithms. SLAM estimates the robot's position, providing a foundation for navigation. Obstacle detection uses static maps and live LiDAR data to create cost maps, identifying areas to avoid or navigate. Autonomous navigation depends on robot-specific parameters like size and hardware. AMCL, implemented through ROS Nav2, integrates static maps, LiDAR, and transform (TF) data to publish accurate poses. It generates global and local cost maps for path planning, visualized in RViz, enabling the robot to plan routes, adjust movements, and navigate smoothly. Fig. 2, Fig. 3, and Fig. 4 depict the initial pose of the ebot, the initiation of navigation, and the final pose reached, respectively.

Fig. 2 Initial Pose
Fig. 3 Initiating the navigation
Fig. 4 Goal pose reached
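The initial-pose/goal-pose workflow of Figs. 2-4 can also be driven programmatically through Nav2's simple commander API; in the sketch below the coordinates are illustrative placeholders, not the poses used in the figures:

```python
# Sketch: set the initial pose and send a goal pose via Nav2's
# simple commander. Coordinates are illustrative placeholders.
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator, TaskResult

def make_pose(nav, x, y):
    """Build a map-frame pose; orientation left as identity for brevity."""
    pose = PoseStamped()
    pose.header.frame_id = 'map'
    pose.header.stamp = nav.get_clock().now().to_msg()
    pose.pose.position.x = x
    pose.pose.position.y = y
    pose.pose.orientation.w = 1.0
    return pose

def main():
    rclpy.init()
    nav = BasicNavigator()

    # Seed AMCL with the starting pose, then wait for Nav2 to activate.
    nav.setInitialPose(make_pose(nav, 0.0, 0.0))
    nav.waitUntilNav2Active()

    # Send a goal; Nav2 plans over the static map plus live costmaps.
    nav.goToPose(make_pose(nav, 2.0, 1.5))
    while not nav.isTaskComplete():
        feedback = nav.getFeedback()  # e.g. distance remaining to goal

    if nav.getResult() == TaskResult.SUCCEEDED:
        nav.get_logger().info('goal pose reached')
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```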


VIII. COMPARISON OF PROPOSED AMR WITH EXISTING SYSTEMS

Ref            | Navigation framework                                           | Precision                                             | Controller
AMR (proposed) | Nav2 stack with advanced path planning and recovery behaviors  | High precision with 2D LiDAR, IMU, and wheel encoders | MPPI (Model Predictive Path Integral)
[1]            | ROS 2 Navigation Stack with sensor fusion                      | High precision                                        | DWB (Dynamic Window Approach)
[2]            | Custom ROS 2 framework with KMR iiwa sensors                   | High                                                  | Pure Pursuit controller
IX. CONCLUSION

In this research, a ROS 2-based system for autonomous robot navigation is assessed and deployed in indoor environments. A custom robot was designed for efficient indoor navigation and tested through simulations in a Gazebo environment. To implement SLAM, a Grid SLAM approach was adopted, employing the SLAM Toolbox plugin to generate and store a static map of the environment on the local system. This map was then utilized for the robot's navigation. Following the creation of the SLAM map, the ROS 2 Nav2 stack for autonomous navigation was incorporated. The system relies on the map server, which uses the static map generated by SLAM, and integrates AMCL with real-time 2D LiDAR data for detecting and avoiding dynamic obstacles; the Nav2 stack uses this map data to calculate an optimal path, considering any obstacles along the way to the target location. To enhance reliability, the system incorporates sensor fusion by combining IMU and odometry data, filtered through the Extended Kalman Filter (EKF), ensuring accurate localization. Additionally, the Model Predictive Path Integral (MPPI) controller improves trajectory planning, enabling adaptive and fault-tolerant navigation in dynamic environments. The system defines a goal pose, specifying both the location and orientation the robot should reach, which is essential for guiding the robot to its intended position within the indoor space. Currently, the approach shows a good level of navigational accuracy. However, further work is necessary, including the use of fully calibrated hardware and more accurate mapping, to improve precision. Future improvements will also focus on refining the robot's specifications to enhance its performance in indoor navigation.

REFERENCES

[1] H. Cui, J. Zhang, and W. R. Norris, "An Enhanced Safe and Reliable Autonomous Driving Platform using ROS2," in Proc. 2020 IEEE International Conference on Mechatronics and Automation (ICMA), 2020, pp. 290-295. doi: 10.1109/ICMA49215.2020.9233814.
[2] C. Heggem, N. M. Wahl, and L. Tingelstad, "Configuration and Control of KMR iiwa Mobile Robots using ROS2," in Proc. 2020 3rd International Symposium on Small-scale Intelligent Manufacturing Systems (SIMS), 2020, pp. 1-6.
[3] S. Macenski, F. Martín, R. White, and J. G. Clavero, "The Marathon 2: A Navigation System," in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020, pp. 2718-2725.
[4] Y. Tang, "Response Time Analysis and Priority Assignment of Processing Chains on ROS2 Executors," in Proc. IEEE Real-Time Systems Symposium (RTSS), 2020, pp. 231-243.
[5] K. S. M. Sahari et al., "Indoor mapping using Kinect and ROS," in Proc. International Symposium on Agents, Multi-Agent Systems and Robotics (ISAMSR), Putrajaya, Malaysia, 2015.
[6] J. Pak, B. Kim, C. Ju, S. H. You, and H. I. Son, "UAV-Based Trilateration System for Localization and Tracking of Radio-Tagged Flying Insects: Development and Field Evaluation," in Proc. 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023, pp. 1-8.
[7] E. Marder-Eppstein, E. Berger, T. Foote, B. Gerkey, and K. Konolige, "The Office Marathon: Robust navigation in an indoor office environment," in Proc. IEEE International Conference on Robotics and Automation (ICRA), May 2010, pp. 300-307.
[8] W. Burgard, A. Cremers, D. Fox, D. Hähnel, G. Lakemeyer, D. Schulz, W. Steiner, and S. Thrun, "The interactive museum tour-guide robot," in Proc. National Conference on Artificial Intelligence (AAAI), 1998.
