There are 36 repositories listed under the robot-navigation topic.
Deep Reinforcement Learning for mobile robot navigation in the ROS Gazebo simulator. Using a Twin Delayed Deep Deterministic Policy Gradient (TD3) neural network, a robot learns to navigate to a random goal point in a simulated environment while avoiding obstacles.
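For readers new to TD3, here is a minimal, hedged sketch of the core idea behind this family of DRL navigation repositories: the clipped double-Q critic target with target-policy smoothing. This is generic illustration code, not the repository's implementation; all function names, tensor shapes, and hyperparameter values are assumptions.

```python
import torch

def td3_target(actor_t, critic1_t, critic2_t, next_state, reward, done,
               gamma=0.99, noise_std=0.2, noise_clip=0.5, max_action=1.0):
    """Compute the TD3 critic target (illustrative sketch, assumed interfaces).

    actor_t/critic1_t/critic2_t are the *target* networks; done is a float
    tensor (1.0 at episode termination, 0.0 otherwise).
    """
    with torch.no_grad():
        next_action = actor_t(next_state)
        # Target-policy smoothing: add clipped Gaussian noise to the target action.
        noise = (torch.randn_like(next_action) * noise_std).clamp(-noise_clip, noise_clip)
        next_action = (next_action + noise).clamp(-max_action, max_action)
        # Clipped double-Q: take the minimum of the twin critics to curb overestimation.
        q_next = torch.min(critic1_t(next_state, next_action),
                           critic2_t(next_state, next_action))
        return reward + gamma * (1.0 - done) * q_next
```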
[IEEE RA-L'25] NavRL: Learning Safe Flight in Dynamic Environments (NVIDIA Isaac/Python/ROS1/ROS2)
[TRO 2025] NeuPAN: Direct Point Robot Navigation with End-to-End Model-based Learning.
[RSS2024] Official implementation of "Hierarchical Open-Vocabulary 3D Scene Graphs for Language-Grounded Robot Navigation"
A curated list of robot social navigation resources.
[T-RO 2023] DRL-VO: Learning to Navigate Through Crowded Dynamic Scenes Using Velocity Obstacles
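Since velocity obstacles come up in the DRL-VO entries, a small illustrative sketch of the classic geometric VO test may help. This is a simplified textbook check, not DRL-VO's learned policy; the function and variable names are made up for illustration.

```python
import numpy as np

def in_velocity_obstacle(p_r, v_r, r_r, p_o, v_o, r_o):
    """Return True if candidate robot velocity v_r eventually collides with an
    obstacle moving at constant velocity v_o (classic velocity-obstacle test)."""
    d = np.asarray(p_o, float) - np.asarray(p_r, float)   # relative position
    R = r_r + r_o                                          # combined radius
    dist = np.linalg.norm(d)
    if dist <= R:
        return True                                        # already in contact
    v_rel = np.asarray(v_r, float) - np.asarray(v_o, float)
    speed = np.linalg.norm(v_rel)
    if speed == 0.0:
        return False
    # Collision iff the relative-velocity ray enters the collision cone:
    # the angle between v_rel and d must not exceed the cone half-angle asin(R/dist).
    cos_angle = np.dot(v_rel, d) / (speed * dist)
    half_angle = np.arcsin(R / dist)
    return np.arccos(np.clip(cos_angle, -1.0, 1.0)) <= half_angle
```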
Wild Visual Navigation: A system for fast traversability learning via pre-trained models and online self-supervision
Deep Reinforcement Learning for mobile robot navigation in IR-SIM simulation. Using DRL (SAC, TD3, PPO, DDPG) neural networks, a robot learns to navigate to a random goal point in a simulated environment while avoiding obstacles.
Goal-driven autonomous exploration through deep reinforcement learning (ICRA 2022): a system that combines reactive and planned robot navigation in unknown environments.
Implementation of the D* lite algorithm in Python for "Improved Fast Replanning for Robot Navigation in Unknown Terrain"
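As a small fragment for orientation (assumed names, not necessarily this repository's code), D* Lite orders its priority queue by the lexicographic key from Koenig and Likhachev's formulation:

```python
def calculate_key(s, g, rhs, h, s_start, k_m):
    """D* Lite priority key: a lexicographically ordered pair.

    g/rhs map vertices to cost estimates, h is the heuristic, and k_m
    accumulates heuristic drift as the robot moves along its path.
    """
    v = min(g.get(s, float("inf")), rhs.get(s, float("inf")))
    return (v + h(s_start, s) + k_m, v)

def manhattan(a, b):
    """Typical heuristic paired with D* Lite on 4-connected grid maps."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])
```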
[RA-L'25] A Reliable and Efficient Framework for Zero-Shot Object Navigation
Deep Reinforcement Learning for mobile robot navigation in ROS2 Gazebo simulator. Using DRL (SAC, TD3) neural networks, a robot learns to navigate to a random goal point in a simulated environment while avoiding obstacles.
Indoor segmentation for robot navigation, based on the DeepLab model in TensorFlow.
Repository for the paper "Extending Maps with Semantic and Contextual Object Information for Robot Navigation: a Learning-Based Framework using Visual and Depth Cues"
[IROS23] Hybrid Map-Based Path Planning for Robot Navigation in Unstructured Environments
Pedestrian ROS simulator with Gazebo and differential wheeled robots
Code base for the SICNav (T-RO) and SICNav-Diffusion (RA-L) papers
RosNav-RL is a modular DRL framework for ROS 2 with a pluggable architecture, allowing you to switch between RL backends like Stable-Baselines3 and DreamerV3 to accelerate research and deployment.
This repository contains the source code for our paper: "NaviSTAR: Socially Aware Robot Navigation with Hybrid Spatio-Temporal Graph Transformer and Preference Learning". For more details, please refer to our project website at https://sites.google.com/view/san-navistar.
Deep Reinforcement Learning Based Mobile Robot Navigation Using ROS2 and Gazebo
Simple differential drive robot for indoor environments simulated using ROS and Gazebo.
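Several of the simulators above use differential-drive (unicycle) robots, so a minimal kinematics sketch is included here for reference. This is generic textbook kinematics with assumed parameter names, not code from any of the listed packages.

```python
import math

def diff_drive_step(x, y, theta, v, omega, dt):
    """Integrate unicycle kinematics (position x, y and heading theta) for one step."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta = (theta + omega * dt + math.pi) % (2.0 * math.pi) - math.pi  # wrap to [-pi, pi)
    return x, y, theta

def wheel_speeds(v, omega, wheel_separation):
    """Convert body linear/angular velocity to left/right wheel linear speeds."""
    v_left = v - 0.5 * omega * wheel_separation
    v_right = v + 0.5 * omega * wheel_separation
    return v_left, v_right
```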
Adaptive Risk Tendency Implicit Quantile Network for Drone Navigation under Partial Observability.
A Reinforcement Learning (RL) based implementation of mobile robot navigation, comparing the Q-Learning, SARSA, and Deep Q-Network algorithms.
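The difference between the two tabular methods compared there fits in two update rules. Below is a generic sketch with a dict-of-lists Q-table and assumed names, not the repository's code.

```python
def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Off-policy: bootstrap from the greedy action in the next state."""
    Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    """On-policy: bootstrap from the action actually taken in the next state."""
    Q[s][a] += alpha * (r + gamma * Q[s_next][a_next] - Q[s][a])
```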
(IJRR) Mixed Strategy Nash Equilibrium for Crowd Navigation
Mobile Robot Planner with Low-cost Cameras Using Deep Reinforcement Learning
DRL-VO navigation policy for BARN Challenge
Socially normative mobile robot navigation
(RSS 2021) Move Beyond Trajectories: Distribution Space Coupling for Crowd Navigation
Robot Navigation Tutorials for Move Base Flex (MBF)
A library for Pepper QiSDK that finds ArUco markers and provides their positions as frames, useful for localization and navigation.
A ROS package for topological navigation
[RA-L] DRAGON: A Dialogue-Based Robot for Assistive Navigation with Visual Language Grounding
[TRO-2025] SCOPE: Stochastic Cartographic Occupancy Prediction Engine for Uncertainty-Aware Dynamic Navigation
An online social robot navigation framework that implements several techniques for this purpose, such as social relevance validity checking and an extended social comfort cost function.
A ROS toolkit for waypoint generation for robot navigation
[IJSR] Crowd-Comfort Robot Navigation in Dynamic Environments Based on Social-Stressed Deep Reinforcement Learning