Lihe Ding's starred repositories

mcc-ho

MCC-HO

Language: Python · License: MIT · Stars: 18 · Issues: 0

mannequinchallenge

Inference code and trained models for "Learning the Depths of Moving People by Watching Frozen People."

Language: Python · License: Apache-2.0 · Stars: 492 · Issues: 0

MeshAnythingV2

From anything to meshes, as human artists make them. Official implementation of "MeshAnything V2: Artist-Created Mesh Generation With Adjacent Mesh Tokenization"

Language: Python · License: NOASSERTION · Stars: 454 · Issues: 0

ambient-tweedie

[ICML 2024] Official implementation of the paper "Consistent Diffusion Meets Tweedie"

Language: Python · License: GPL-3.0 · Stars: 42 · Issues: 0

glomap

GLOMAP - Global Structure-from-Motion Revisited

Language: C++ · License: BSD-3-Clause · Stars: 1174 · Issues: 0

bilarf

Code Release for "Bilateral Guided Radiance Field Processing"

Language: Python · License: Apache-2.0 · Stars: 110 · Issues: 0

SDS-Bridge

Official implementation of "Rethinking Score Distillation as a Bridge Between Image Distributions"

Language: Python · License: MIT · Stars: 31 · Issues: 0

segment-anything-2

The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.

Language: Jupyter Notebook · License: Apache-2.0 · Stars: 9971 · Issues: 0
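
A minimal image-inference sketch following the usage documented in the SAM 2 README; the checkpoint path, config name, and prompt values below are assumptions that depend on which model variant you download:

```python
import numpy as np
import torch
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

# Assumed checkpoint path and config name (large variant); adjust to your download.
checkpoint = "./checkpoints/sam2_hiera_large.pt"
model_cfg = "sam2_hiera_l.yaml"

predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint))

# Placeholder RGB image (HxWx3, uint8); replace with a real photo.
image = np.zeros((512, 512, 3), dtype=np.uint8)

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    predictor.set_image(image)
    # Prompt with one foreground point; returns candidate masks and their scores.
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[256, 256]]),
        point_labels=np.array([1]),
        multimask_output=True,
    )
```

The video predictor in the same repo follows a similar pattern but tracks prompts across frames; see the example notebooks it ships with.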

Open-Sora-Plan

This project aims to reproduce Sora (OpenAI's text-to-video model); we hope the open-source community will contribute to it.

Language: Python · License: MIT · Stars: 11168 · Issues: 0

PGSR

code for "PGSR: Planar-based Gaussian Splatting for Efficient and High-Fidelity Surface Reconstruction"

Language: Python · License: NOASSERTION · Stars: 371 · Issues: 0

FSGS

[ECCV 2024]"FSGS: Real-Time Few-Shot View Synthesis using Gaussian Splatting", Zehao Zhu*, Zhiwen Fan*, Yifan Jiang, Zhangyang Wang

Language: Python · License: NOASSERTION · Stars: 351 · Issues: 0

DeformingThings4D

[ICCV 2021] A dataset of non-rigidly deforming objects.

Language: Python · Stars: 298 · Issues: 0

point_odyssey

Official code for PointOdyssey: A Large-Scale Synthetic Dataset for Long-Term Point Tracking (ICCV 2023)

Language: Python · Stars: 108 · Issues: 0

gaussian_surfels

Implementation of the SIGGRAPH 2024 conference paper "High-quality Surface Reconstruction using Gaussian Surfels".

Language: Python · Stars: 473 · Issues: 0

locotrack

Official implementation of "Local All-Pair Correspondence for Point Tracking" (ECCV 2024)

Language: Python · License: Apache-2.0 · Stars: 91 · Issues: 0

flowmap

Code for "FlowMap: High-Quality Camera Poses, Intrinsics, and Depth via Gradient Descent" by Cameron Smith*, David Charatan*, Ayush Tewari, and Vincent Sitzmann

Language: Python · License: MIT · Stars: 854 · Issues: 0

Motion-I2V

[SIGGRAPH 2024] Motion-I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling

Language: Python · Stars: 76 · Issues: 0

MotionCtrl

Official Code for MotionCtrl [SIGGRAPH 2024]

Language: Python · License: Apache-2.0 · Stars: 1244 · Issues: 0

tapnet

Tracking Any Point (TAP)

Language: Jupyter Notebook · License: Apache-2.0 · Stars: 1236 · Issues: 0

DecoMotion

[ECCV 2024] Decomposition Betters Tracking Everything Everywhere

License: MIT · Stars: 99 · Issues: 0

mast3r

Grounding Image Matching in 3D with MASt3R

Language: Python · License: NOASSERTION · Stars: 682 · Issues: 0

ProPainter

[ICCV 2023] ProPainter: Improving Propagation and Transformer for Video Inpainting

Language: Python · License: NOASSERTION · Stars: 5375 · Issues: 0

NVS_Solver

Source code of paper "NVS-Solver: Video Diffusion Model as Zero-Shot Novel View Synthesizer"

Language: Python · Stars: 235 · Issues: 0

Track-Anything

Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI.

Language: Python · License: MIT · Stars: 6375 · Issues: 0

Depth-Anything-V2

Depth Anything V2: a more capable foundation model for monocular depth estimation

Language: Python · License: Apache-2.0 · Stars: 3078 · Issues: 0
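
A single-image inference sketch following the pattern shown in the Depth Anything V2 README; the encoder settings correspond to the ViT-L variant, and the checkpoint and image paths are assumptions:

```python
import cv2
import torch
from depth_anything_v2.dpt import DepthAnythingV2

# ViT-L configuration as listed in the repo's README; other encoders use
# different feature widths. The checkpoint path is an assumption.
model = DepthAnythingV2(encoder="vitl", features=256, out_channels=[256, 512, 1024, 1024])
model.load_state_dict(torch.load("checkpoints/depth_anything_v2_vitl.pth", map_location="cpu"))
model = model.eval()

raw_img = cv2.imread("path/to/your_image.jpg")  # BGR uint8 image, HxWx3
depth = model.infer_image(raw_img)              # HxW float32 relative depth map
```

The output is relative (not metric) depth; the repo provides separate metric-depth checkpoints fine-tuned on indoor and outdoor datasets.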