Zhao Jiahe's starred repositories
computer-vision-in-action
A computer vision closed-loop learning platform where code can be run interactively online. Includes the Chinese e-book "Computer Vision in Action: Algorithms and Applications", its source code, and a reader community (continuously updated). 📘 Online e-book: https://charmve.github.io/computer-vision-in-action/
ContextDET
Contextual Object Detection with Multimodal Large Language Models
pointingqa
Code for paper "Point and Ask: Incorporating Pointing into Visual Question Answering"
mmtracking
OpenMMLab Video Perception Toolbox. It supports Video Object Detection (VID), Multiple Object Tracking (MOT), Single Object Tracking (SOT), Video Instance Segmentation (VIS) with a unified framework.
LLaMA-Adapter
[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
Awesome-Multimodal-Large-Language-Models
:sparkles::sparkles:Latest Advances on Multimodal Large Language Models
ICCV2023-Papers-with-Code
A collection of ICCV 2023 papers and open-source projects
UCASDeepLearning
Assignments for the deep learning course at UCAS (University of Chinese Academy of Sciences)
visual_prompting
Exploring Visual Prompts for Adapting Large-Scale Models
HumanBench
This repo is official implementation of HumanBench (CVPR2023)
insightface
State-of-the-art 2D and 3D Face Analysis Project
Self-Correction-Human-Parsing
An out-of-the-box human parsing representation extractor.
gnn-re-ranking
A real-time GNN-based re-ranking method, from "Understanding Image Retrieval Re-Ranking: A Graph Neural Network Perspective"
vit-pytorch
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch
deep-person-reid
Torchreid: Deep learning person re-identification in PyTorch.
Person_reID_baseline_pytorch
:bouncing_ball_person: PyTorch ReID: A tiny, friendly, strong PyTorch implementation of a person re-id / vehicle re-id baseline. Tutorial 👉 https://github.com/layumi/Person_reID_baseline_pytorch/tree/master/tutorial
Simple-CCReID
PyTorch implementation of "Clothes-Changing Person Re-identification with RGB Modality Only" (CVPR 2022)
gdrnpp_bop2022
PyTorch implementation of GDRNPP, winner of most awards in the BOP Challenge 2022 at ECCV'22