mbaran's starred repositories
3D-Point-Cloud-Curve-Extraction-and-Optimization
This code is designed to extract a representative curve (or a thin center line) from a noisy 3D point cloud, especially when the point cloud has discernible geometric patterns. Given a noisy point cloud that has an inherent structure or shape, the goal is to identify and trace a continuous curve that best captures the essence of that shape.
Online-3D-BPP-DRL
This repository contains the implementation of paper Online 3D Bin Packing with Constrained Deep Reinforcement Learning.
adaptive_clustering
[ROS package] Lightweight and Accurate Point Cloud Clustering
segment-anything
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
Pursuit-Evasion-Game-with-Deep-Reinforcement-Learning-in-an-environment-with-an-obstacle
In this study, a multi-agent pursuit-evasion problem is solved using Deep Q-learning. The actors are a smart evader and smart pursuers with opposing goals. At the start of the game the agents have homogeneous properties, and neither the evader nor the pursuers have any knowledge of the map. The pursuers aim to catch the evader as quickly as possible, while the evader aims to escape for as long as possible. Games like this, in which one player's gain is balanced by the other players' losses, are called zero-sum games. The end condition, which may differ depending on the approach applied, is in our study that any pursuer or the evader occupies the same or a neighboring pixel as an obstacle or the map border, or that a pursuer and the evader occupy the same or a neighboring pixel; in other words, the evader is caught by any pursuer, or the evader or any pursuer hits an obstacle. A new episode of the game starts after each collision or capture, so pursuit-evasion problems also belong to the class of repeated games. This study asks what any pursuer or the evader can do to improve its performance in a repeated round of the game. The method used is Deep Reinforcement Learning: agents receive rewards or penalties based on their moves within an episode and feed this information back into the neural network.
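The episode structure described above (move, check for capture, reward, value update) can be sketched with a minimal tabular Q-learning update on a 1-D grid. This is an illustrative simplification only: the grid size, reward values, and capture rule below are assumptions, and the actual repository uses a neural network (Deep Q-learning) rather than a Q-table.

```python
import numpy as np

GRID = 10                  # number of cells (hypothetical)
ACTIONS = [-1, 0, 1]       # move left, stay, move right
ALPHA, GAMMA = 0.1, 0.9    # learning rate, discount factor (assumed values)

# Q-table indexed by (pursuer cell, evader cell, action index);
# a DQN would replace this table with a neural network.
Q = np.zeros((GRID, GRID, len(ACTIONS)))

def step(pursuer, evader, action_idx):
    """Apply the pursuer's move; reward is +1 on capture,
    otherwise a small step penalty that encourages fast pursuit."""
    pursuer = int(np.clip(pursuer + ACTIONS[action_idx], 0, GRID - 1))
    caught = abs(pursuer - evader) <= 1        # same or neighboring cell
    reward = 1.0 if caught else -0.01
    return pursuer, reward, caught

def q_update(pursuer, evader, action_idx, reward, next_pursuer, done):
    """Standard temporal-difference update of the Q-table."""
    target = reward
    if not done:
        target += GAMMA * Q[next_pursuer, evader].max()
    Q[pursuer, evader, action_idx] += ALPHA * (target - Q[pursuer, evader, action_idx])

# One illustrative transition: pursuer at cell 3, evader at cell 5, move right.
p, r, done = step(3, 5, 2)
q_update(3, 5, 2, r, p, done)
print(p, done, round(Q[3, 5, 2], 3))   # capture at cell 4 ends the episode
```

A full agent would wrap this loop with epsilon-greedy action selection and restart a new episode after each capture or collision, as the description states.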
webots_ros2
Webots ROS 2 packages
Detectron2_ros
A ROS Node for detecting objects using Detectron2.
zed-ros-wrapper
ROS wrapper for the ZED SDK
Computer-Vision-and-Robotics-Paper-List
International conferences and journals on Computer Vision / Robotics
darknet_ros
YOLO ROS: Real-Time Object Detection for ROS
kuka_experimental
Experimental packages for KUKA manipulators within ROS-Industrial (http://wiki.ros.org/kuka_experimental)
kuka-rsi3-communicator
For controlling KUKA manipulators via RSI 3
kuka-rsi-ros-interface
A ROS node for the manipulation of a KUKA robot arm via RSI 3
Reinforcement-Learning-Tutorial
Sample reinforcement learning tutorial notebooks 🎉
guided_filter_point_cloud_denoising
Denoising of Point Cloud with Guided Filter
ROS_Raw_Kitti_Player
ROS package to access and manipulate raw KITTI data, with camera-LIDAR sensor fusion and perception tasks
ROS_Self_Driving_Car_Sim
Minimalistic Self Driving Car Simulation with basic Sensors and Perception Tasks