csguomy's starred repositories
Awesome-Visual-Transformer
A collection of papers on Transformers for vision. Awesome Transformer with Computer Vision (CV)
knowledge-distillation-papers
knowledge distillation papers
awesome-knowledge-distillation
Awesome Knowledge Distillation
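The papers in these lists mostly build on the classic soft-target distillation loss: a weighted sum of the hard-label cross-entropy and a temperature-softened KL divergence between teacher and student outputs. A minimal pure-Python sketch (function names and defaults are illustrative, not from any listed repo):

```python
import math

def softmax(logits, T=1.0):
    """Softmax with temperature T; higher T gives softer distributions."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, label, T=4.0, alpha=0.5):
    """Hinton-style knowledge distillation loss (illustrative sketch).

    alpha weights the hard-label cross-entropy; (1 - alpha) weights the
    KL term, scaled by T**2 to keep gradient magnitudes comparable.
    """
    # Hard-label cross-entropy at temperature 1
    ce = -math.log(softmax(student_logits)[label])
    # KL(teacher || student) at temperature T
    pt = softmax(teacher_logits, T)
    ps = softmax(student_logits, T)
    kl = sum(t * math.log(t / s) for t, s in zip(pt, ps))
    return alpha * ce + (1 - alpha) * (T ** 2) * kl
```

When student and teacher logits coincide, the KL term vanishes and the loss reduces to `alpha` times the plain cross-entropy.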
ConvNeXt-V2
Code release for ConvNeXt V2 model
llm-action
This project shares the technical principles behind large language models along with hands-on experience.
focal_loss_pytorch
A PyTorch implementation of Focal Loss.
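Focal loss down-weights easy examples by scaling cross-entropy with `(1 - p_t) ** gamma`, focusing training on hard misclassified samples. A minimal pure-Python sketch of the binary form (the function name and defaults are illustrative, not taken from this repo):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for a single prediction (illustrative sketch).

    p: predicted probability of the positive class, in (0, 1).
    y: ground-truth label, 0 or 1.
    alpha balances the classes; gamma down-weights easy examples.
    """
    p_t = p if y == 1 else 1.0 - p          # probability of the true class
    a_t = alpha if y == 1 else 1.0 - alpha  # class-balancing weight
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

With `gamma=0` and `alpha=1` this reduces to plain binary cross-entropy for the positive class; increasing `gamma` shrinks the loss on well-classified examples.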
deepcluster
Deep Clustering for Unsupervised Learning of Visual Features
segment-anything
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
Semantic-Segment-Anything
Automated dense category annotation engine that serves as the initial semantic labeling for the Segment Anything dataset (SA-1B).
Track-Anything
Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI.
Segment-and-Track-Anything
An open-source project dedicated to tracking and segmenting any objects in videos, either automatically or interactively. The primary algorithms utilized include the Segment Anything Model (SAM) for key-frame segmentation and Associating Objects with Transformers (AOT) for efficient tracking and propagation purposes.
yeezhu.github.io
personal website https://yeezhu.github.io/
robust_loss_pytorch
A PyTorch port of google-research/google-research/robust_loss/
google-research
Google Research
fast-transformers
A PyTorch library for fast Transformer implementations