Zhixing Sun's repositories

2024-AAAI-HPT

Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models (AAAI 2024)

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

Agent-Attention

Official repository of Agent Attention

Language: Python · Stargazers: 0 · Issues: 0

APE

[ICCV 2023] Code for "Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement"

Stargazers: 0 · Issues: 0

AttriCLIP

CVPR 2023: AttriCLIP: A Non-Incremental Learner for Incremental Knowledge Learning

Stargazers: 0 · Issues: 0

BiDistFSCIL

Official implementation of CVPR 2023 paper Few-Shot Class-Incremental Learning via Class-Aware Bilateral Distillation.

Stargazers: 0 · Issues: 0

CLIP_Surgery

CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks

Stargazers: 0 · Issues: 0

code-samples

Holds code for our CVPR'23 tutorial: All Things ViTs: Understanding and Interpreting Attention in Vision.

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 0 · Issues: 0

FGVP

Official code for Fine-Grained Visual Prompting, NeurIPS 2023

Stargazers: 0 · Issues: 0

FLatten-Transformer

Official repository of FLatten Transformer (ICCV 2023)

Stargazers: 0 · Issues: 0

Gard

Code for "Graph-based High-Order Relation Discovery for Fine-grained Recognition" (CVPR 2021)

License: MIT · Stargazers: 0 · Issues: 0

IELT

Source code of the paper "Fine-Grained Visual Classification via Internal Ensemble Learning Transformer"

License: MIT · Stargazers: 0 · Issues: 0

LLaVA-Plus-Codebase

LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills

License: Apache-2.0 · Stargazers: 0 · Issues: 0

MiniGPT-4

Open-source code for MiniGPT-4 and MiniGPT-v2

License: BSD-3-Clause · Stargazers: 0 · Issues: 0

Monkey

[CVPR 2024 Highlight] Monkey (LMM): Image Resolution and Text Label Are Important Things for Large Multi-modal Models

License: MIT · Stargazers: 0 · Issues: 0

multimodal-prompt-learning

[CVPR 2023] Official repository of paper titled "MaPLe: Multi-modal Prompt Learning".

License: MIT · Stargazers: 0 · Issues: 0

opencon

Code for TMLR 2023 paper "OpenCon: Open-world Contrastive Learning"

Language: Python · Stargazers: 0 · Issues: 0

ovsam

[arXiv preprint] The official code of the paper "Open-Vocabulary SAM"

Language: Python · License: NOASSERTION · Stargazers: 0 · Issues: 0

recognize-anything

Code for the Recognize Anything Model (RAM) and Tag2Text Model

License: Apache-2.0 · Stargazers: 0 · Issues: 0

RevisitingCIL

The code repository for "Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need" in PyTorch.

Stargazers: 0 · Issues: 0

RPF

This repository contains the implementation of our SIGIR'23 full paper "From Region to Patch: Attribute-Aware Foreground-Background Contrastive Learning for Fine-Grained Fashion Retrieval".

Stargazers: 0 · Issues: 0

SHIP

Official code for ICCV 2023 paper, "Improving Zero-Shot Generalization for CLIP with Synthesized Prompts"

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

sunhongbo.github.io

GitHub Pages template for academic personal websites, forked from mmistakes/minimal-mistakes

License: MIT · Stargazers: 0 · Issues: 0

vit-pytorch

Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

Vitron

A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing

Stargazers: 0 · Issues: 0