Wu Xiaodong's starred repositories
segment-anything
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
Grounded-Segment-Anything
Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and Generate Anything
leetcode_101
LeetCode 101: Solving LeetCode problems with ease, together (C++)
Deep-Learning-Interview-Book
A deep learning interview handbook (covering mathematics, machine learning, deep learning, computer vision, natural language processing, SLAM, and more)
Segment-Everything-Everywhere-All-At-Once
[NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once"
Semantic-Segment-Anything
Automated dense category annotation engine that serves as the initial semantic labeling for the Segment Anything dataset (SA-1B).
Anything-3D
Segment-Anything + 3D. Let's lift anything to 3D.
Awesome-CLIP
Awesome list for research on CLIP (Contrastive Language-Image Pre-Training).
SegmentAnything3D
[ICCV'23 Workshop] SAM3D: Segment Anything in 3D Scenes
ucasproposal
LaTeX Proposal Template for the University of Chinese Academy of Sciences
Segment-Any-Point-Cloud
[NeurIPS'23 Spotlight] Segment Any Point Cloud Sequences by Distilling Vision Foundation Models
segment-anything-annotator
A Python UI for pixel-level annotation built on labelme and segment-anything. It supports generating multiple masks with SAM (box/point prompts), efficient polygon editing, and category recording. More features are planned, such as CLIP-based methods for category proposals and VOS methods for video datasets.
PointCLIP_V2
[ICCV 2023] PointCLIP V2: Prompting CLIP and GPT for Powerful 3D Open-world Learning
Virtual-Multi-View-Fusion
An elegant PyTorch implementation of ECCV 2020: Virtual Multi-View Fusion for 3D Semantic Segmentation