akshitac8 / tfvaegan

[ECCV 2020] Official PyTorch implementation of "Latent Embedding Feedback and Discriminative Features for Zero-Shot Classification". State-of-the-art results for ZSL and GZSL.

There is a gap in performance compared to the paper

in-my-heart opened this issue · comments

To make the code run on PyTorch 1.8, I made two changes: replacing `.data[0]` with `.item()`, and removing the `volatile=True` argument from all `Variable(..., volatile=True)` calls.
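For reference, a minimal sketch of the two migration patterns (the tensors below are illustrative, not lines from the repo):

```python
import torch

# PyTorch 0.3.x idioms vs. their 1.x replacements:
#
#   loss.data[0]               -> loss.item()
#   Variable(x, volatile=True) -> with torch.no_grad(): ...

loss = torch.tensor([0.3, 0.2]).sum()
scalar = loss.item()           # replaces loss.data[0]

with torch.no_grad():          # replaces volatile=True inference mode
    x = torch.randn(4)
    y = x * 2                  # no autograd graph is built here
```

In PyTorch >= 0.4, `Variable` is merged into `Tensor`, so wrapping inference code in `torch.no_grad()` is the idiomatic replacement for `volatile=True`.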

When I use the link https://drive.google.com/drive/folders/16Xk1eFSWjQTtuQivTogMmvL3P6F_084u?usp=sharing :

```
Dataset CUB
the best ZSL unseen accuracy is tensor(0.6168)
Dataset CUB
the best GZSL seen accuracy is tensor(0.6192)
the best GZSL unseen accuracy is tensor(0.4801)
the best GZSL H is tensor(0.5408)
```

When I use cub_feat.mat (fine-tuned features, link: https://drive.google.com/drive/folders/1SOUNd8mgNmY0kFn4iKSvPE8lsMuxaJhp):

```
Dataset CUB
the best ZSL unseen accuracy is tensor(0.7142)
Dataset CUB
the best GZSL seen accuracy is tensor(0.7154)
the best GZSL unseen accuracy is tensor(0.6128)
the best GZSL H is tensor(0.6602)
```
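For readers comparing the numbers: the GZSL H reported above is the harmonic mean of the seen and unseen accuracies, e.g. for the first run:

```python
def harmonic_mean(seen: float, unseen: float) -> float:
    # H = 2 * s * u / (s + u), the standard GZSL summary metric
    return 2 * seen * unseen / (seen + unseen)

# Numbers from the first (non-fine-tuned) run above:
h = harmonic_mean(0.6192, 0.4801)   # ~0.5408
```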

When I read "Counterfactual Zero-Shot and Open-Set Visual Recognition" (CVPR 2021), I found that their reproduction using your code is also about 2% lower than the paper.

Is there anything we need to pay attention to when reproducing the results?

Hello @in-my-heart,
Thank you for your interest in our work. The counterfactual paper used the PS2.0 splits, whereas our paper uses PS1.0. The 2% drop mentioned in the counterfactual paper exists because of PS2.0.
Also, when changing the PyTorch version from 0.3.1 to 1.8, you will need to redo the hyperparameter search.

I see. Let me confirm: PSv1 is used in the paper, but the dataset link you provided (https://drive.google.com/drive/folders/16Xk1eFSWjQTtuQivTogMmvL3P6F_084u?usp=sharing) is PSv2. So the results I got using the dataset you provided will be a bit lower than the paper's.

Hello @in-my-heart, the training data link I provided is for the PSv1 split. If you run our code using the PSv2 split, the numbers will be lower, as reported by the counterfactual paper.

> Hello @in-my-heart, the training data link I provided is for the PSv1 split. If you run our code using the PSv2 split, the numbers will be lower, as reported by the counterfactual paper.

Sorry to disturb you; this is the split of the CUB dataset I downloaded from your link. Is this v1?
[screenshot of the downloaded CUB dataset split files]

@in-my-heart if you downloaded the dataset from the drive I shared, then it's correct; this is the v1 dataset.

I'm really sorry to trouble you for so long, but I want to ask a question about PSv2. This small question has puzzled me for a long time.
Q1: In "Zero-Shot Learning—A Comprehensive Evaluation of the Good, the Bad and the Ugly", are the SS and PS splits in the figure below PSv1 and PSv2 respectively? (Note: I used MATLAB to determine that your data link uses PS.)
[screenshot of the SS and PS results table from that paper]
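For anyone checking this without MATLAB, the variables stored in a `.mat` split file can be listed with `scipy.io` (the key names below follow the common GBU-style `att_splits.mat` layout and are assumptions, not a description of the exact file shared above):

```python
import os
import tempfile
import numpy as np
from scipy import io as sio

def mat_keys(path):
    """Return the non-metadata variable names stored in a .mat file."""
    data = sio.loadmat(path)
    return sorted(k for k in data if not k.startswith("__"))

# Demo on a synthetic file mimicking a GBU-style att_splits.mat
# (key names are assumptions; inspect your actual downloaded file):
path = os.path.join(tempfile.mkdtemp(), "att_splits.mat")
sio.savemat(path, {
    "att": np.zeros((312, 200)),        # class attribute matrix
    "trainval_loc": np.arange(10) + 1,  # 1-based indices, MATLAB style
    "test_unseen_loc": np.arange(5) + 1,
})
print(mat_keys(path))
```

Comparing the stored location indices (e.g. `trainval_loc`, `test_unseen_loc`) against the split definitions published by the GBU authors is one way to tell PS from SS without opening MATLAB.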

@in-my-heart For our paper, we used the PSv1 splits, which were uploaded by the original authors on their webpage.