
Alpha-CLIP

This repository is the official implementation of Alpha-CLIP.

Alpha-CLIP: A CLIP Model Focusing on Wherever You Want
Zeyi Sun*, Ye Fang*, Tong Wu, Pan Zhang, Yuhang Zang, Shu Kong, Yuanjun Xiong, Dahua Lin, Jiaqi Wang

*Equal Contribution

Demo Alpha-CLIP with Stable Diffusion: available on Hugging Face Spaces and OpenXLab

Demo Alpha-CLIP with LLaVA: coming soon

πŸ“œ News

[2023/12/7] The paper and project page are released!

πŸ’‘ Highlights

  • πŸ”₯ 3.93% improvement in zero-shot ImageNet classification accuracy when a foreground alpha map is provided (see the sketch after this list).
  • πŸ”₯ Plug-and-play region focus in any work that uses the CLIP vision encoder.
  • πŸ”₯ A strong visual encoder that serves as a versatile tool whenever a foreground mask is available.
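
As a rough sketch of what the zero-shot setting and the plug-and-play claim look like in code (assumptions: the checkpoint path from the Usage section below, that alpha_clip mirrors the original CLIP interface with load returning (model, preprocess), tokenize and encode_text, and placeholder image/alpha tensors standing in for real preprocessed inputs):

import torch
import alpha_clip

# Sketch only, not the paper's evaluation pipeline; see the Usage section below
# for how to prepare real image and alpha inputs.
model, preprocess = alpha_clip.load(
    "ViT-B/16",
    alpha_vision_ckpt_pth="checkpoints/clip_b16_grit1m_fultune_8xe.pth",
    device="cpu",
)

# Placeholder tensors; in practice build them with preprocess() and the
# mask_transform shown in the Usage section.
image = torch.zeros(1, 3, 224, 224)  # preprocessed RGB image
alpha = torch.zeros(1, 1, 224, 224)  # normalized foreground alpha map
text = alpha_clip.tokenize(["a photo of a dog", "a photo of a cat"])

with torch.no_grad():
    image_features = model.visual(image, alpha)  # region-focused image features
    text_features = model.encode_text(text)

# Standard CLIP zero-shot scoring: cosine similarity between image and text features.
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)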

πŸ‘¨β€πŸ’» Todo

  • Training and evaluation code for Alpha-CLIP
  • Web demo and local demo of Alpha-CLIP with LLaVA
  • Web demo and local demo of Alpha-CLIP with Stable Diffusion
  • Usage example notebook of Alpha-CLIP
  • Checkpoints of Alpha-CLIP

πŸ› οΈ Usage

Installation

Our model is based on CLIP. Please first prepare the environment for CLIP, then install Alpha-CLIP directly:

pip install -e .

Then install loralib:

pip install loralib

How to use

Download the model from the model zoo and place it under checkpoints.

import alpha_clip
model, preprocess = alpha_clip.load("ViT-B/16", alpha_vision_ckpt_pth="checkpoints/clip_b16_grit1m_fultune_8xe.pth", device="cpu")  # change to your own checkpoint path
image_features = model.visual(image, alpha)

alpha needs to be normalized via torchvision transforms when using a binary_mask with values in {0, 1}:

from torchvision import transforms

mask_transform = transforms.Compose([
    transforms.ToTensor(), 
    transforms.Resize((224, 224)),  # match the vision encoder input resolution
    transforms.Normalize(0.5, 0.26)
])
alpha = mask_transform(binary_mask * 255)
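
Putting the pieces together, a minimal end-to-end sketch (the image path and mask region are hypothetical, and it assumes alpha_clip.load returns (model, preprocess) like the original clip.load):

import numpy as np
import torch
from PIL import Image
from torchvision import transforms

import alpha_clip

# Load the model together with the standard CLIP image preprocessing pipeline.
model, preprocess = alpha_clip.load(
    "ViT-B/16",
    alpha_vision_ckpt_pth="checkpoints/clip_b16_grit1m_fultune_8xe.pth",
    device="cpu",
)

# Hypothetical inputs: an RGB image and a binary foreground mask of the same size.
image = Image.open("example.jpg").convert("RGB")
binary_mask = np.zeros((image.height, image.width), dtype=np.uint8)
binary_mask[100:300, 150:400] = 1  # mark the region to focus on

mask_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize(0.5, 0.26),
])

image_tensor = preprocess(image).unsqueeze(0)           # (1, 3, 224, 224)
alpha = mask_transform(binary_mask * 255).unsqueeze(0)  # (1, 1, 224, 224)

with torch.no_grad():
    image_features = model.visual(image_tensor, alpha)  # features focused on the masked region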

Usage examples are available:

  • Visualization of attention map: notebook
  • Alpha-CLIP used in BLIP-Diffusion: notebook
  • Alpha-CLIP used in SD_ImageVar: demo

⭐ Demos

❀️ Acknowledgments

  • CLIP: The codebase we built upon. Thanks for their wonderful work.
  • LAVIS: The amazing open-source multimodal learning codebase, where we test Alpha-CLIP in BLIP-2 and BLIP-Diffusion.
  • Point-E: Wonderful point-cloud generation model, where we test Alpha-CLIP on the 3D generation task.
  • LLaVA: Wonderful MLLM that uses CLIP as its visual backbone, where we test the effectiveness of Alpha-CLIP.

βœ’οΈ Citation

If you find our work helpful for your research, please consider giving a star ⭐ and a citation πŸ“:

@misc{sun2023alphaclip,
      title={Alpha-CLIP: A CLIP Model Focusing on Wherever You Want}, 
      author={Zeyi Sun and Ye Fang and Tong Wu and Pan Zhang and Yuhang Zang and Shu Kong and Yuanjun Xiong and Dahua Lin and Jiaqi Wang},
      year={2023},
      eprint={2312.03818},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

License

Usage and License Notices: The data and checkpoints are intended and licensed for research use only. They are also restricted to uses that follow the license agreement of CLIP. The dataset is CC BY-NC 4.0 (allowing only non-commercial use), and models trained using the dataset should not be used outside of research purposes.

About

Alpha-CLIP: A CLIP Model Focusing on Wherever You Want

https://aleafy.github.io/alpha-clip

License: Apache License 2.0

