Repositories under the depth-image topic:
:taxi: Fast and robust clustering of point clouds generated with a Velodyne sensor.
Real-Time 3D Semantic Reconstruction from 2D data
Implementation of the KinectFusion approach in modern C++14 and CUDA
ICRA 2018 "Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image" (PyTorch Implementation)
ICRA 2018 "Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image" (Torch Implementation)
Code for the paper "A2J: Anchor-to-Joint Regression Network for 3D Articulated Pose Estimation from a Single Depth Image". ICCV 2019
TensorRT implementation of Depth-Anything V1, V2
Sample implementation of an application using KinectFusionLib
Algorithm for user tracking and following (TurtleBot control)
"Kinect Smoothing" helps you to smooth and filter the Kinect depth image as well as trajectory data
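To illustrate the kind of filtering such a library performs, here is a minimal NumPy-only sketch of depth hole filling; the function name and single-pass median strategy are assumptions for illustration, not the repo's actual API:

```python
import numpy as np

def fill_depth_holes(depth, ksize=3):
    """Fill zero-valued (invalid) depth pixels with the median of the
    valid neighbors inside a ksize x ksize window (single pass)."""
    out = depth.copy()
    r = ksize // 2
    for y, x in zip(*np.nonzero(depth == 0)):
        window = depth[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
        valid = window[window > 0]
        if valid.size:                 # leave the hole if no valid neighbor
            out[y, x] = np.median(valid)
    return out
```

A real Kinect pipeline would typically iterate this, or also average temporally across frames to suppress flicker.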
Probabilistic depth fusion based on Optimal Mixture of Gaussians for depth cameras
Displays the depth values received by the front-facing camera.
Generate blurred images with three blur types, `motion`, `lens`, and `gaussian`, using OpenCV.
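The three blur types differ only in the convolution kernel. The repo itself uses OpenCV; the sketch below builds the same kinds of kernels with NumPy and applies them with SciPy, so the kernel shapes (a line for motion, a disk for lens, a separable Gaussian) are explicit. Function names and the kernel size are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import convolve

def blur_kernel(kind, size=9):
    """Build a normalized blur kernel: 'motion' (horizontal line),
    'lens' (disk / circular aperture), or 'gaussian'."""
    k = np.zeros((size, size))
    c = size // 2
    if kind == 'motion':
        k[c, :] = 1.0                                   # horizontal streak
    elif kind == 'lens':
        yy, xx = np.mgrid[:size, :size]
        k[(yy - c) ** 2 + (xx - c) ** 2 <= c ** 2] = 1.0  # disk aperture
    elif kind == 'gaussian':
        g = np.exp(-0.5 * ((np.arange(size) - c) / (size / 6.0)) ** 2)
        k = np.outer(g, g)                              # separable Gaussian
    else:
        raise ValueError(kind)
    return k / k.sum()

def apply_blur(image, kind, size=9):
    """Convolve a grayscale image with the chosen blur kernel."""
    return convolve(image.astype(float), blur_kernel(kind, size),
                    mode='nearest')
```

With OpenCV the same effect is obtained by passing the kernel to `cv2.filter2D`, or `cv2.GaussianBlur` for the Gaussian case.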
IROS'16/IJRR "Sparse Sensing for Resource-Constrained Depth Reconstruction"
Code examples of point cloud processing in python.
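A fundamental step in depth-image processing of this kind is back-projecting a depth map into a point cloud with the camera's pinhole intrinsics. A minimal sketch (function name and intrinsics handling are assumptions, not taken from the repo):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into an N x 3 point cloud
    using pinhole intrinsics; zero (invalid) depths are dropped."""
    v, u = np.indices(depth.shape)          # pixel row/col coordinates
    z = depth.ravel()
    u, v = u.ravel(), v.ravel()
    valid = z > 0
    x = (u[valid] - cx) * z[valid] / fx
    y = (v[valid] - cy) * z[valid] / fy
    return np.column_stack([x, y, z[valid]])
```

The resulting array can be handed directly to libraries such as Open3D for downsampling, normal estimation, or registration.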
Omnidirectional synthetic image generator for computer vision
Which object is a person pointing at? Detect it using YOLO, OpenPose, and a depth image (in a customized scene).
A Python node that detects planes in a depth image using the RANSAC algorithm. Input and output via ROS topics.
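The core of such a node, stripped of the ROS plumbing, is a RANSAC plane fit over the 3D points back-projected from the depth image. A minimal NumPy sketch under assumed parameter names (iteration count and inlier threshold are illustrative defaults):

```python
import numpy as np

def ransac_plane(points, n_iters=200, thresh=0.01, seed=None):
    """Fit a plane n.p + d = 0 to an N x 3 point set with RANSAC;
    returns (unit normal, d, boolean inlier mask)."""
    rng = np.random.default_rng(seed)
    best = (None, None, np.zeros(len(points), bool))
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                        # degenerate (collinear) sample
        n = n / norm
        d = -n @ p0
        inliers = np.abs(points @ n + d) < thresh
        if inliers.sum() > best[2].sum():   # keep the best-supported plane
            best = (n, d, inliers)
    return best
```

In the node, the inlier mask would then be published (e.g. as a segmented point cloud or plane coefficients) on an output topic.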
Monocular depth estimation using Feature Pyramid Network implemented in PyTorch 1.1.0
Cross-platform library to communicate with LiDAR devices of the Blickfeld GmbH.
Capture RGB-D data from a depth camera
Deep-learning approaches to object recognition from 3D data
A remote color-depth camera for iOS without any third-party dependencies.
Depth-Based Region-of-Interest (ROI) Selection
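Depth-based ROI selection amounts to thresholding the depth image to a near/far range and taking the bounding box of the surviving pixels. A minimal sketch, with the function name and `(x0, y0, x1, y1)` return convention as assumptions:

```python
import numpy as np

def depth_roi(depth, near, far):
    """Return the bounding box (x0, y0, x1, y1) of pixels whose depth
    lies in [near, far], or None if no pixel is in range."""
    mask = (depth >= near) & (depth <= far)
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max() + 1, ys.max() + 1
```

The returned box can then be used to crop the aligned RGB frame to the foreground object.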
The purpose of this project is to detect and track people in an indoor environment and recognize events regarding their movement using visual information. The visual information used consists of an RGB stream and a depth stream from an ASUS Xtion Pro or Microsoft Kinect.
Uses pre-trained deep-learning models and transformations to generate occupancy maps. Also includes some other basic deep-learning tasks. Feel free to contribute.
LiDAR processing ROS2. Segmentation: "Fast Ground Segmentation for 3D LiDAR Point Cloud Based on Jump-Convolution-Process". Clustering: "Curved-Voxel Clustering for Accurate Segmentation of 3D LiDAR Point Clouds with Real-Time Performance".
Dense correspondence matching and depth-image generation using PatchMatch and gradient-based feature algorithms.
Nearest neighbor depth completion
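Nearest-neighbor completion assigns each invalid depth pixel the value of its closest valid pixel. One compact way to sketch this (an assumption about the repo's approach) uses SciPy's Euclidean distance transform with `return_indices=True`:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def nn_depth_completion(depth):
    """Fill invalid (zero) depth pixels with the value of the nearest
    valid pixel, by Euclidean distance."""
    invalid = depth == 0
    if not invalid.any():
        return depth.copy()
    # For every pixel, indices of the nearest valid (non-hole) pixel;
    # valid pixels map to themselves.
    idx = distance_transform_edt(invalid, return_distances=False,
                                 return_indices=True)
    return depth[tuple(idx)]
```

This produces piecewise-constant fills with hard seams at the midlines between valid regions; smoother variants blend the k nearest values instead.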
ROS 2 human detector
Contains the code and weights for our paper "Multi-Task Deep Learning for Depth-based Person Perception in Mobile Robotics", published at IROS 2020.