There are 28 repositories under the depth-map topic.
OBS Plugin to use a Kinect (all models supported) in OBS (and setup a virtual green screen based on depth and/or body detection).
Accompanying library for the Record3D iOS app (https://record3d.app/). Allows you to receive RGBD stream from iOS devices with TrueDepth camera(s).
Single Image Depth Estimation with Feature Pyramid Network
🌊 Image → 2.5D parallax-effect video. A free and open-source ImmersityAI alternative
Create a dense depth-map image for a known-position camera from a LiDAR point cloud
📷 Threaded depth-map cleaning and inpainting using OpenCV
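Depth maps from consumer sensors often contain holes (zero-valued pixels) where the sensor returned no reading. As a minimal sketch of what such cleaning can do, here is a simple diffusion-style hole fill that repeatedly replaces hole pixels with the mean of their valid 4-neighbors; the repo above uses OpenCV and threading, whereas this numpy-only version and the `hole_value` convention are assumptions for illustration:

```python
import numpy as np

def fill_depth_holes(depth, hole_value=0, iterations=50):
    # Iteratively replace hole pixels with the mean of their valid
    # 4-neighbors -- a simple diffusion-style stand-in for inpainting.
    d = depth.astype(np.float64).copy()
    for _ in range(iterations):
        holes = d == hole_value
        if not holes.any():
            break
        padded = np.pad(d, 1, mode='edge')
        # Stack the 4-neighborhood: up, down, left, right.
        neighbors = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                              padded[1:-1, :-2], padded[1:-1, 2:]])
        valid = neighbors != hole_value
        counts = valid.sum(axis=0)
        sums = (neighbors * valid).sum(axis=0)
        fill = holes & (counts > 0)
        d[fill] = sums[fill] / counts[fill]
    return d
```

Running several iterations lets valid depth values diffuse inward from the borders of larger holes.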
Collection of scripts for generating depth maps for videos using machine learning.
Full-body pose estimation for use with an HMD (Quest 2), built in Unity
TriDepth: Triangular Patch-based Deep Depth Prediction [Kaneko+, ICCVW2019(oral)]
ComfyUI Depth Anything (v1/v2) Tensorrt Custom Node (up to 14x faster)
🎬 An OpenGL application for editing and retouching images using depth-maps in 2.5D
Semiglobal Matching with Census Matching Cost
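The census transform is the per-pixel descriptor that semiglobal matching compares between the left and right images: each pixel is encoded as a bit string of comparisons against its window neighbors, and the matching cost is the Hamming distance between codes. A minimal sketch (the window size and bit-packing order are illustrative choices, not necessarily those of the repo above):

```python
import numpy as np

def census_transform(img, window=3):
    # Encode each pixel as a bit string of "neighbor < center" tests
    # over a square window, packed into a uint64 per pixel.
    h, w = img.shape
    r = window // 2
    census = np.zeros((h, w), dtype=np.uint64)
    padded = np.pad(img, r, mode='edge')
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[r + dy:r + dy + h, r + dx:r + dx + w]
            census = (census << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return census

def hamming_cost(c1, c2):
    # Census matching cost = Hamming distance between the bit codes.
    x = c1 ^ c2
    count = np.zeros_like(x)
    while np.any(x):
        count += x & np.uint64(1)
        x >>= np.uint64(1)
    return count
```

Because the census code depends only on intensity *order*, the cost is robust to radiometric differences between the two cameras.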
Simple, locally-running web app for generating depth maps using machine learning.
Converts a depth map image to a normal map image using Python
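The standard conversion behind such a tool takes the image-space gradients of the depth map as surface slopes, builds a normal vector per pixel, and remaps it from [-1, 1] into RGB. A minimal numpy sketch of that idea (not the repo's actual code):

```python
import numpy as np

def depth_to_normals(depth):
    # Gradients of the depth map approximate the surface slope.
    dzdy, dzdx = np.gradient(depth.astype(np.float64))
    # Surface normal per pixel: (-dz/dx, -dz/dy, 1), normalized.
    n = np.dstack((-dzdx, -dzdy, np.ones_like(depth, dtype=np.float64)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    # Remap components from [-1, 1] to [0, 255] for an RGB normal map.
    return ((n + 1.0) * 0.5 * 255).astype(np.uint8)
```

A flat depth map yields the familiar uniform bluish normal map, since every normal points straight at the camera.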
Visualizing a point cloud using scene depth in Unity, similar to the WWDC20 demo.
Awesome-3D/Multimodal-Anomaly-Detection-and-Localization/Segmentation/3D-KD/3D-knowledge-distillation
Generate blurred images with three blur types, `motion`, `lens`, and `gaussian`, using OpenCV.
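Each blur type boils down to convolving the image with a different kernel. As a sketch of two of them (numpy-only here; kernel sizes and the horizontal motion direction are illustrative assumptions, and the actual repo applies these with OpenCV's `filter2D`):

```python
import numpy as np

def motion_blur_kernel(size, horizontal=True):
    # A single line of weights smears pixels along one direction.
    k = np.zeros((size, size))
    if horizontal:
        k[size // 2, :] = 1.0
    else:
        k[:, size // 2] = 1.0
    return k / size

def gaussian_kernel(size, sigma):
    # Separable Gaussian: outer product of a 1-D Gaussian with itself.
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    k = np.outer(g, g)
    return k / k.sum()
```

Both kernels are normalized to sum to 1 so the blurred image keeps the same overall brightness.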
Image viewer with depth-map support inside ComfyUI
(Eyebeam #1 of 13) Developed with @FakeGreenDress. Record, stream, and export Kinect mocap data to After Effects puppet pins. Record directly from the Kinect or over OSC. Compiling or running from source requires SimpleOpenNI.
Using PyTorch's MiDaS model and Open3D's point cloud to map a scene in 3D 🏞️🔭
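The core step in turning a monocular depth estimate into a point cloud is back-projecting each pixel through the camera intrinsics. A minimal sketch of that unprojection (pinhole model; the intrinsics `fx, fy, cx, cy` are assumed inputs, and the repo above would then hand these points to Open3D for visualization):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    # Back-project each pixel (u, v) with depth z into camera space:
    #   X = (u - cx) * z / fx,  Y = (v - cy) * z / fy,  Z = z
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.dstack((x, y, z)).reshape(-1, 3)
```

Note that MiDaS outputs relative (inverse) depth, so in practice the map is rescaled before unprojection.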
Create a point cloud and vertices mesh by using the Google Pixel's IR camera. This is a non-commercial project that I am doing for educational purposes only. Project based on this article: https://ai.googleblog.com/2020/04/udepth-real-time-3d-depth-sensing-on.html
This is a sample C# project that extracts Depth and Color information from videos shot in iPhone's Cinematic mode and outputs each as separate videos, along with a sample Unity project for 3D playback of these videos.
Insert your face, detected in your camera feed, in a web 3D scene in real-time.
High resolution and depth rendering to PNG for Three.js
Rankings include: BetterDepth, Depth Anything, DPT, FutureDepth, GBDMF, GenPercept, GeoWizard, LeReS, LightedDepth, LFVRT, Marigold, Metric3D, MiDaS, NeWCRFs, PatchFusion, UniDepth, ZoeDepth
edepth is an open-source, trainable CNN-based model for depth estimation from single images, videos, and live camera feeds.
Programs to detect keypoints in images using SIFT, compute homographies and stitch images into a panorama, and compute epilines and depth maps between stereo images.
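Once a homography has been estimated (e.g. from SIFT matches with RANSAC), stitching works by mapping one image's pixel coordinates into the other's frame through the 3x3 matrix in homogeneous coordinates. A minimal sketch of that mapping step alone (not the repo's full pipeline):

```python
import numpy as np

def apply_homography(H, pts):
    # Map 2D points through a 3x3 homography using homogeneous coords:
    # lift (x, y) to (x, y, 1), multiply by H, divide by the last row.
    pts = np.asarray(pts, dtype=np.float64)
    ones = np.ones((pts.shape[0], 1))
    homo = np.hstack([pts, ones]) @ H.T
    return homo[:, :2] / homo[:, 2:3]
```

The perspective divide by the third coordinate is what distinguishes a homography from a plain affine transform.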
In this project, we implement the concept of stereo vision. We test the code on three datasets, each containing two images of the same scene taken from different camera angles. By comparing the information about a scene from two vantage points, we can recover 3D structure from the relative positions of objects.
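The 3D information in a rectified stereo pair comes from the disparity between corresponding pixels: depth is inversely proportional to disparity via Z = f·B/d, where f is the focal length in pixels and B the baseline between the cameras. A minimal sketch of that final conversion (the focal length and baseline values below are illustrative, not from the datasets above):

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    # Pinhole stereo with parallel cameras: Z = f * B / d.
    # Pixels with zero disparity have no match and map to infinity.
    d = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(d, np.inf)
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth
```

Because of the inverse relation, depth resolution degrades quadratically with distance: a one-pixel disparity error matters far more for distant objects than for near ones.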