ruotianluo / Context-aware-ZSR

Official code for the paper Context-aware Zero-shot Recognition (https://arxiv.org/abs/1904.09320), to appear at AAAI 2020


Unable to download pretrained imagenet-model

buxpeng opened this issue · comments

Hi, when I download the pretrained ImageNet model following the README.md, the connection always fails. Has the URL in it changed?

Interesting. Maybe the code broke.

You can still download it by entering the URL directly: https://docs.google.com/uc?export=download&id=1wHSvusQ1CiEMc5Nx5R8adqoHQjIDWXl1

You can replace the id with the other ids in that Python file.
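For reference, the direct-download URL above can be rebuilt from any Drive file id with a one-line helper; the function name below is my own, not something from the repo:

```python
# Hypothetical helper (not from the repo): build the docs.google.com
# direct-download URL for a given Google Drive file id.
def gdrive_download_url(file_id: str) -> str:
    return f"https://docs.google.com/uc?export=download&id={file_id}"

# The id mentioned in this thread:
url = gdrive_download_url("1wHSvusQ1CiEMc5Nx5R8adqoHQjIDWXl1")
```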

Thank you. Another question: into which folder do I need to put the model downloaded from the link in README.md?

a new directory called pretrained_model under the repo dir.

[screenshot]
Sorry, I meant the model reproduced in your paper, not the pretrained ImageNet model (for example, this we_relt_geo_sc.pth file).

A dir called pretrained.
Check out scripts/reproduce for clues.

thank you very much!

Hello, sorry to bother you again. When I run the test file, it says the tagging_eval.pkl file is missing. Does this file need to be downloaded? If so, where from? Thank you!

No, it's the output of tools/test_net.py. I may have put the path wrong. Do you mind searching a bit for where tagging_eval.pkl ends up?
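If it helps, a quick way to search the repo tree for the file is a small recursive walk; this is a generic sketch, not code from the repo:

```python
import os

def find_file(root, name):
    """Recursively search `root` and return all paths whose basename is `name`."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        if name in files:
            hits.append(os.path.join(dirpath, name))
    return hits

# e.g. run from the repo dir: find_file(".", "tagging_eval.pkl")
```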

[screenshot]
Sorry, I didn't find where the file was.

Can you show me the full error log? If possible, can you run the script line by line? Also, what's your version of PyTorch?

Do you have to use CUDA to run? My PyTorch version is 1.0.

Yes, CUDA is needed.

I see, thank you very much!

[screenshot]
Hi, when I run the test file (bash scripts/test/we_infer.sh), I get the above error. The two parameters load_ckpt and load_detectron in the run command are None. Could you take a look for me?

You should run scripts/reproduce

[screenshot]
Hi, do you need both Python 2 and Python 3? I've installed Python 3. Do commands like python lib/datasets/vg/convert_from_bansal.py (run with Python 2) need to be executed?

I used a conda environment to have both py2 and py3.

I completed "Convert from bansal train test split" using Python 2, but the later compilation fails. My machine is Ubuntu 18 with CUDA 9.0; I have gcc 4.8, 5.4, 6, and 7 and have tried each version. Can you take a look?
[screenshot]

Sorry, the gcc versions I have are 4.8, 5.5, 6, and 7.

There is no change in this part compared to the original Detectron.pytorch. I suggest you search for a solution there.

Can you tell me which versions of CUDA and gcc you are using?

gcc 4.9.4 and CUDA 10.

Hi, I want to ask where this weight can be downloaded.
[screenshot]

Hi, I didn't provide that weight; I only provided the weight after conversion.

When I run this test file, it can't find this file. Can this file be downloaded?
[screenshot]

[screenshot]

Have you trained the model by yourself?

No, I'm not going to train; I want to use the trained models directly.

Does everything need to be run? I have already run these two:
[screenshot]

It depends on what you want.

I want to run this:
[screenshot]

That's gcn_infer and sync_infer.

I have run gcn_infer in scripts/reproduce, but when I run bash scripts/test/detection_gcn.sh, the weight ckpt/model_final_gcn_wn.pth is still missing.
[screenshot]

Sorry, my bad; now I understand.
Replace the load_ckpt in detection_gcn with the load_ckpt in gcn_infer.
Same for sync.

Oh, thanks. This is the result of running bash scripts/test/we_infer.sh. Can you tell me what these numbers represent?
[screenshot]

It corresponds to the GCN+Context row of Table 1.

Thanks. Now I want to give it a picture and use this zero-shot recognition to identify both seen and unseen categories. Which file do I need to run?

Not sure what you mean.

How can the 7794 pictures in the test set be replaced with my own pictures?

I haven't done this before. My guess is you need to create a COCO-style annotation JSON file for your own images.
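To sketch what such a COCO-style annotation file might look like for a single custom image (all field values here are hypothetical; the category ids would have to match the dataset jsons the repo actually uses):

```python
import json

# Minimal COCO-style skeleton for one custom image (hypothetical values).
coco = {
    "images": [{"id": 1, "file_name": "my_image.jpg", "width": 640, "height": 480}],
    "annotations": [{
        "id": 1, "image_id": 1, "category_id": 1,
        "bbox": [50, 60, 120, 80],  # [x, y, width, height]
        "area": 120 * 80, "iscrowd": 0,
    }],
    "categories": [{"id": 1, "name": "person"}],
}

# Serialize; in practice you would write this to a json file the loader reads.
annotation_json = json.dumps(coco)
```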

If you want to test a single image, you may need to dig around. It shouldn't be too complicated, but it may take a few hours.

If I only want to test one picture, what do I need to modify?

See if you can extend from this function:

But you still need to be able to run edgebox.

[screenshot]
Hi, can this file be replaced? Is it necessary to install MATLAB to run this?

Unfortunately, it's necessary.

Oh, thanks. Is it okay to follow these instructions?
[screenshot]

So now I need to install MATLAB, and then install this MATLAB toolbox?

I think so.

[screenshots]
Hi, this score is not very clear to me; why are some greater than 1?

OK, I tested a picture. The category_id in the generated bbox_vgbansalone_test_results.json is the final category detected by the ZSR, right? How does this category_id correspond to the real label?

[screenshots]

It should be in some json file in the datasets folder.

OK, can you tell me where the value after the CRF can be normalized?

I am not sure. In fact, from https://github.com/ruotianluo/Context-aware-ZSR/blob/master/lib/modeling/rel_heads.py#L135, it seems the scores should already be somewhat normalized.
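If the CRF outputs are not probabilities, one simple option is to rescale each box's class scores so they sum to 1; this is my own sketch, not the repo's code:

```python
def normalize_scores(scores):
    """Rescale a list of non-negative scores so they sum to 1."""
    total = sum(scores)
    return [s / total for s in scores] if total > 0 else list(scores)

# Example: scores above 1 become a proper distribution.
probs = normalize_scores([2.0, 1.0, 1.0])
```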

> Interesting. Maybe the code broke.
> You can still download it by entering the URL directly: https://docs.google.com/uc?export=download&id=1wHSvusQ1CiEMc5Nx5R8adqoHQjIDWXl1
> You can replace the id with other ids in that Python file.

Hello, could you please refresh this download link? Best wishes.


This folder should have all the weights: https://drive.google.com/drive/folders/0B7fNdx_jAqhtdXR1NUN4NkZwS00?resourcekey=0-4-sXYhJKBsRD8sohsPfDvA&usp=sharing