chongzhou96 / MaskCLIP

Official PyTorch implementation of "Extract Free Dense Labels from CLIP" (ECCV 22 Oral)

Home Page: https://www.mmlab-ntu.com/project/maskclip/


Backbone pre-trained weights?

cuiziteng opened this issue

Hello, thanks for your nice code and nice paper!

One question: looking through the code, I can't find where pre-trained weights are loaded into the backbone; the pre-trained weights only seem to be loaded into the segmentation head. Could you point me to the code that loads the CLIP encoder weights into the backbone? Thanks.
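One quick way to check whether a converted checkpoint contains backbone weights at all is to look at its state-dict keys. The snippet below is a generic sketch, not code from this repo; it assumes the converted file is a plain PyTorch state dict (or wraps one under a "state_dict" entry), and the key prefixes it prints are whatever the checkpoint actually uses.

```python
# Generic sketch (not from this repo): inspect which modules a converted
# checkpoint covers by counting its state-dict key prefixes. Assumes the
# file is a plain state dict, or a dict wrapping one under 'state_dict'.
from collections import Counter

import torch

ckpt = torch.load("ViT16_clip_weights.pth", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)

prefixes = Counter(key.split(".")[0] for key in state_dict)
for prefix, count in prefixes.most_common():
    print(f"{prefix}: {count} tensors")
```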

I could run the zero-shot segmentation after getting the CLIP weights, ViT16_clip_weights.pth, with:

python tools/maskclip_utils/convert_clip_weights.py --model ViT16

i.e., without the "--backbone" arg mentioned in the README.
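For reference, the general idea behind such a conversion step is to pull the weights out of the official CLIP package and save them as a regular state dict. The sketch below is not the repo's convert_clip_weights.py (the real script likely also renames keys to match MaskCLIP's modules and handles the text-side weights used by the head); it only shows the basic mechanics via the `clip` package's public API.

```python
# Rough sketch of the basic mechanics only -- NOT the repo's
# convert_clip_weights.py, which likely renames keys and also handles the
# text-encoder weights used by the segmentation head.
import clip
import torch

model, _ = clip.load("ViT-B/16", device="cpu")   # downloads the official CLIP ViT-B/16
visual_state = model.visual.state_dict()         # image-encoder (backbone) weights
torch.save(visual_state, "ViT16_visual_sketch.pth")
```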

Hello, I want to ask a question: which version of mmsegmentation should I install to run this code properly? I installed 0.20.0 but couldn't run it.


> Hello, I want to ask a question: which version of mmsegmentation should I install to run this code properly? I installed 0.20.0 but couldn't run it.

Maybe MMCV is the most important issue. I have tried a lot of methods; this is how I prepared my environment.

First, make sure your CUDA toolkit is installed (I have cudatoolkit 11.1).

Install PyTorch:
pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html

Install MMCV (mmcv-full) via pip:
pip install mmcv-full==1.5.0 -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.8.0/index.html

Install CLIP and the other required packages:
pip install git+https://github.com/openai/CLIP.git
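After these steps, a quick sanity check (a generic snippet, not part of the repo) confirms that the CUDA build of PyTorch, mmcv-full, and CLIP all import correctly:

```python
# Generic environment sanity check (not part of the repo).
import clip
import mmcv
import torch
import torchvision

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("torchvision:", torchvision.__version__)
print("mmcv:", mmcv.__version__)
print("CLIP models:", clip.available_models())
```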