Soyun Choi (soyunchoi)

Soyun Choi's starred repositories

segment-anything

The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.

Language: Jupyter Notebook · License: Apache-2.0 · Stars: 46,416 · Watchers: 303 · Issues: 658

Grounded-Segment-Anything

Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment, and Generate Anything

Language: Jupyter Notebook · License: Apache-2.0 · Stars: 14,614 · Watchers: 114 · Issues: 382

Swin-Transformer

This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".

Language: Python · License: MIT · Stars: 13,519 · Watchers: 127 · Issues: 309

dinov2

PyTorch code and models for the DINOv2 self-supervised learning method.

Language: Jupyter Notebook · License: Apache-2.0 · Stars: 8,681 · Watchers: 95 · Issues: 386

mae

PyTorch implementation of MAE: https://arxiv.org/abs/2111.06377

Language: Python · License: NOASSERTION · Stars: 7,117 · Watchers: 58 · Issues: 189
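MAE's core idea is to mask a large fraction of image patches (typically 75%) and encode only the visible ones. A minimal NumPy sketch of that random-masking step — the function name and shapes are illustrative, not the repository's actual API:

```python
import numpy as np

def random_masking(patches, mask_ratio=0.75, seed=0):
    """Keep a random subset of patches; the MAE encoder sees only these.

    patches: (num_patches, dim) array of flattened patch embeddings.
    Returns (visible_patches, mask, keep_idx); mask is True where masked.
    """
    rng = np.random.default_rng(seed)
    n = patches.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    keep_idx = np.sort(rng.permutation(n)[:n_keep])
    mask = np.ones(n, dtype=bool)
    mask[keep_idx] = False  # False = visible to the encoder
    return patches[keep_idx], mask, keep_idx
```

A lightweight decoder then reconstructs the pixels of the masked patches from the encoded visible ones, which is what makes pre-training cheap at high mask ratios.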

EVA

EVA Series: Visual Representation Fantasies from BAAI

Language: Python · License: MIT · Stars: 2,195 · Watchers: 30 · Issues: 156

Semantic-Segment-Anything

Automated dense category annotation engine that serves as the initial semantic labeling for the Segment Anything dataset (SA-1B).

Language: Python · License: Apache-2.0 · Stars: 2,093 · Watchers: 19 · Issues: 57

Swin-Transformer-Object-Detection

This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows" on Object Detection and Instance Segmentation.

Language: Python · License: Apache-2.0 · Stars: 1,778 · Watchers: 22 · Issues: 218

ViT-Adapter

[ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions

Language: Python · License: Apache-2.0 · Stars: 1,205 · Watchers: 17 · Issues: 179

Awesome_Prompting_Papers_in_Computer_Vision

A curated list of prompt-based papers in computer vision and vision-language learning.

ov-seg

This is the official PyTorch implementation of the paper Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP.

Language: Jupyter Notebook · License: NOASSERTION · Stars: 673 · Watchers: 13 · Issues: 30

OpenAI-CLIP

A simple implementation of the OpenAI CLIP model in PyTorch.

Language: Jupyter Notebook · License: MIT · Stars: 600 · Watchers: 5 · Issues: 21
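The heart of any CLIP implementation is the symmetric contrastive loss over a batch of paired image and text embeddings: matching pairs sit on the diagonal of the similarity matrix. A self-contained NumPy sketch of that loss — names are illustrative and this is not the repository's code:

```python
import numpy as np

def softmax_xent(logits, targets):
    # Row-wise softmax cross-entropy against integer class targets.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

def clip_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of (image, text) pairs."""
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature   # (batch, batch) similarity matrix
    targets = np.arange(len(img))        # pair i matches caption i
    return 0.5 * (softmax_xent(logits, targets) +
                  softmax_xent(logits.T, targets))
```

Averaging the image-to-text and text-to-image directions is what makes the objective symmetric; the temperature is usually a learned parameter in real implementations.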

Awesome-Mixture-of-Experts-Papers

A curated reading list of research on Mixture-of-Experts (MoE).

model-soups

Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time

Language: Python · License: MIT · Stars: 402 · Watchers: 10 · Issues: 18
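The recipe in the description is literally parameter averaging: a "uniform soup" takes the element-wise mean of each weight tensor across several fine-tuned checkpoints, so inference costs the same as a single model. A hedged NumPy sketch, treating state dicts as plain name-to-array mappings (not the repository's actual API):

```python
import numpy as np

def uniform_soup(state_dicts):
    """Element-wise average of each parameter across fine-tuned models.

    state_dicts: list of {param_name: ndarray} with identical keys/shapes.
    Returns one averaged state dict of the same shape as each input.
    """
    keys = state_dicts[0].keys()
    return {k: np.mean([sd[k] for sd in state_dicts], axis=0) for k in keys}
```

The paper's "greedy soup" variant adds checkpoints to the average one at a time, keeping each only if held-out accuracy improves.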

deephar

Deep human action recognition and pose estimation

Language: Python · License: MIT · Stars: 381 · Watchers: 16 · Issues: 41

PointTransformerV2

[NeurIPS'22] An official PyTorch implementation of PTv2.

Language: Python · License: Apache-2.0 · Stars: 56 · Watchers: 2 · Issues: 3

PGN

Prompt Generation Networks for Efficient Adaptation of Frozen Vision Transformers. Jochem Loedeman, Maarten C. Stol, Tengda Han, Yuki M. Asano. Technical report, 2022.

Language: Python · License: MIT · Stars: 40 · Watchers: 3 · Issues: 2

dap-cl

Official code of "Generating Instance-level Prompts for Rehearsal-free Continual Learning (ICCV 2023)"

Language: Python · License: NOASSERTION · Stars: 39 · Watchers: 1 · Issues: 10

mvitac

Self-Supervised Visual-Tactile Representation Learning via Multimodal Contrastive Training

Language: Jupyter Notebook · Stars: 11 · Watchers: 1 · Issues: 6

ICL_multimodal

Code for paper 'Borrowing Treasures from Neighbors: In-Context Learning for Multimodal Learning with Missing Modalities and Data Scarcity'

Language: Python · Stars: 10 · Watchers: 0 · Issues: 0

NYUDepthV2_PointCloud_Converter

Utility to convert the NYU Depth V2 dataset into point clouds for advanced 3D visualization and analysis.

Language: Python · Stars: 7 · Watchers: 0 · Issues: 0
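Converting a depth map to a point cloud is the standard pinhole back-projection: each pixel (u, v) with depth z maps to ((u - cx)·z/fx, (v - cy)·z/fy, z). A minimal NumPy sketch under that assumption — the intrinsics parameters and function name are hypothetical, and the repository's actual interface may differ:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project an (H, W) metric depth map into an (H*W, 3) point cloud.

    fx, fy: focal lengths in pixels; cx, cy: principal point in pixels.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```

NYU Depth V2 frames are 640×480 Kinect RGB-D captures, so in practice the dataset's published camera intrinsics would be plugged into fx, fy, cx, cy.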