
A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation

Official implementation of A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation.

This paper has been accepted by ICLR 2024.

Requirements

Installation

Create a conda environment and install dependencies:

conda create -n h2b python=3.9
conda activate h2b

pip install -r requirements.txt

# Install versions of torch and torchvision that match your CUDA setup
conda install pytorch torchvision cudatoolkit
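
For example, assuming a CUDA 11.3 driver (pin the versions to match your own setup):

conda install pytorch==1.12.1 torchvision==0.13.1 cudatoolkit=11.3 -c pytorch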

Dataset

Follow DATASET.md to install ImageNet and the other datasets; the preparation steps follow CoOp.

Get Started

Configs

The running configurations can be modified in configs/setting/dataset.yaml, where setting is few_shots or base2new and dataset names the target dataset. The options include the evaluation setting, number of shots, visual encoders, and hyperparameters.
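
A minimal sketch of what such a config might contain (the keys below are illustrative assumptions, not the repo's actual schema; check the shipped YAML files for the real field names):

# illustrative config sketch; actual keys may differ
dataset: imagenet      # target dataset
shots: 16              # number of labeled examples per class
backbone: RN50         # CLIP visual encoder, e.g. RN50 or ViT-B/16
# method hyperparameters (names here are placeholders)
alpha: 1.0
beta: 5.5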

Numerical Results

We provide the numerical results of the few-shot classification experiments (Figure 1 in the paper) in exp.log.

Running

For few-shot classification:

CUDA_VISIBLE_DEVICES=0 python main_few_shots.py --config configs/few_shots/dataset.yaml
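
For example, to evaluate on ImageNet (assuming a per-dataset config file named configs/few_shots/imagenet.yaml):

CUDA_VISIBLE_DEVICES=0 python main_few_shots.py --config configs/few_shots/imagenet.yaml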

For base-to-new generalization:

CUDA_VISIBLE_DEVICES=0 python main_base2new.py --config configs/base2new/dataset.yaml
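
As above, substitute dataset.yaml with the config of the dataset you want to evaluate, e.g. (again assuming such a file exists under configs/base2new/):

CUDA_VISIBLE_DEVICES=0 python main_base2new.py --config configs/base2new/imagenet.yaml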

Acknowledgement

This repo benefits from CLIP, CoOp, and SHIP. Thanks for their wonderful work.

Citation

@inproceedings{wang2024baseline,
  title={A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation},
  author={Zhengbo Wang and Jian Liang and Lijun Sheng and Ran He and Zilei Wang and Tieniu Tan},
  booktitle={Proceedings of International Conference on Learning Representations (ICLR)},
  year={2024}
}

Contact

If you have any questions, feel free to contact zhengbowang@mail.ustc.edu.cn.
