yaoyao-liu / meta-transfer-learning

TensorFlow and PyTorch implementation of "Meta-Transfer Learning for Few-Shot Learning" (CVPR2019)

Home Page: https://lyy.mpi-inf.mpg.de/mtl/


Question Regarding the Finetuning Ablation Study

JKDomoguen opened this issue

Hello, in Table 1 of the paper, which reports some of the ablation experiments you performed, the FT \theta, FT [\Theta4; \theta], and FT [\Theta; \theta] configurations correspond to 55.9%, 57.2%, and 58.3% test accuracy in the 1-shot miniImageNet setting.

I am trying to replicate these results in my experiments, and I want to clarify the settings. Does FT \theta mean freezing the backbone feature extractor (ResNet-12 in this case) and meta-learning only the single fully connected classifier layer? Is that right?

And does FT [\Theta4; \theta] mean freezing the first three residual blocks while meta-learning the classifier \theta and the last residual block \Theta4? Is this correct?

And finally, does FT [\Theta; \theta] mean that you do not freeze the feature extractor at all and instead treat the pre-trained ResNet-12 as part of the full set of meta-learned parameters? Again, is this correct?
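
To make sure I am interpreting the three settings the same way you do, here is a rough PyTorch-style sketch of what I have in mind (the `block1`..`block4` and `fc` attribute names are my own placeholders, not identifiers from this repository):

```python
import torch.nn as nn

def configure_trainable(model: nn.Module, setting: str) -> nn.Module:
    """Freeze/unfreeze parameters according to my reading of Table 1.

    'FT theta'           -> freeze the whole backbone, meta-learn only the classifier
    'FT [Theta4; theta]' -> freeze blocks 1-3, meta-learn block 4 and the classifier
    'FT [Theta; theta]'  -> meta-learn the entire pre-trained backbone plus the classifier
    """
    # Start with everything frozen, then re-enable the chosen parts.
    for p in model.parameters():
        p.requires_grad = False

    if setting == 'FT theta':
        trainable = [model.fc]                  # classifier theta only
    elif setting == 'FT [Theta4; theta]':
        trainable = [model.block4, model.fc]    # last residual block + classifier
    elif setting == 'FT [Theta; theta]':
        trainable = [model]                     # whole backbone + classifier
    else:
        raise ValueError(f'unknown setting: {setting}')

    for module in trainable:
        for p in module.parameters():
            p.requires_grad = True
    return model
```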

Furthermore, does this mean that, for example in the case of FT [\Theta; \theta], during the inner update that learns the task-specific parameters you also update \Theta in addition to \theta (the classifier)?
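
In other words, my guess for the inner loop under FT [\Theta; \theta] is something like the minimal MAML-style sketch below, where `model`, `support_x`, and `support_y` are my own placeholder names (not from this repo) and the fast weights are whatever parameters were left trainable:

```python
import torch
import torch.nn.functional as F

def inner_update(model, support_x, support_y, inner_lr=0.01):
    """One task-specific (inner-loop) gradient step on the support set.

    Under my reading of FT [Theta; theta], `params` contains both the backbone
    Theta and the classifier theta; under FT theta it would contain only the
    classifier parameters (i.e. whatever has requires_grad=True).
    """
    params = [p for p in model.parameters() if p.requires_grad]
    loss = F.cross_entropy(model(support_x), support_y)
    # create_graph=True so the outer meta-update can backprop through this step.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    # Manual SGD step producing the task-specific (fast) weights.
    fast_weights = [p - inner_lr * g for p, g in zip(params, grads)]
    return fast_weights
```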

Thank you for your time!

Hi @JKDomoguen,

Thanks for your interest in our work.
The detailed ablation settings are described in our extended version; please see Section 5.3 of that paper.

If you have any further questions, feel free to add more comments.

I see, this is most helpful! Thank you very much for your time.