yaoyao-liu / meta-transfer-learning

TensorFlow and PyTorch implementation of "Meta-Transfer Learning for Few-Shot Learning" (CVPR2019)

Home Page: https://lyy.mpi-inf.mpg.de/mtl/


How do you update the base_learner's parameters?

xugy16 opened this issue · comments

Thank you for the code.

I have a question about the base_learner update.

  1. The base-learner is fast-updated for 100 steps.
  2. Then we return qry_logits and compute the cross-entropy loss on the query set.
  3. We use self.optimizer to update.

But what gradient is stored in the base_learner? You use the fast model to calculate the query loss.

Thanks for your interest in our work.

The fast model can be regarded as a function of the base learner. Thus, we can calculate the derivative of the query loss with respect to the base model. We update the base learner using Eq. 5 in our paper.
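To make this concrete, here is a minimal PyTorch toy example (illustrative names only, not the repo's code) showing that the fast weights are a differentiable function of the base-learner weights, so autograd can carry the query-loss gradient back to the base learner through the inner update:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical base-learner FC weight (5 classes, 10 features)
theta = torch.randn(5, 10, requires_grad=True)
x_spt, y_spt = torch.randn(25, 10), torch.randint(0, 5, (25,))
x_qry, y_qry = torch.randn(25, 10), torch.randint(0, 5, (25,))

# One inner step; create_graph=True keeps theta_fast's dependence on theta
spt_loss = F.cross_entropy(x_spt @ theta.t(), y_spt)
(g,) = torch.autograd.grad(spt_loss, theta, create_graph=True)
theta_fast = theta - 0.01 * g  # fast model: a function of theta

# The query loss is computed with theta_fast, but because theta_fast is a
# function of theta, backward() delivers a gradient to theta itself.
qry_loss = F.cross_entropy(x_qry @ theta_fast.t(), y_qry)
qry_loss.backward()
```

After `backward()`, `theta.grad` holds the derivative of the query loss with respect to the base learner, which is what the outer update uses.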

If you have any further questions, please feel free to contact me.

Best,

Yaoyao

Really appreciate the response.

So you are using 1st-Order MAML to update the classifier-head (base learner)?

Yes. We use the first-order approximation MAML to update the FC classifier.
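The first-order approximation can be sketched as follows (a minimal PyTorch sketch with illustrative names, not the repo's API): the inner-step gradients are detached, so backpropagating the query loss gives the base-learner FC parameters the gradient taken with respect to the fast weights.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical base-learner FC classifier (5 classes, 10 features)
w = torch.randn(5, 10, requires_grad=True)
b = torch.zeros(5, requires_grad=True)
optimizer = torch.optim.SGD([w, b], lr=1e-3)

x_spt, y_spt = torch.randn(25, 10), torch.randint(0, 5, (25,))
x_qry, y_qry = torch.randn(25, 10), torch.randint(0, 5, (25,))

w_before = w.detach().clone()

# Inner loop: fast weights start from the base learner (clone keeps the
# differentiable link to w and b). create_graph defaults to False, so the
# inner gradients are constants to autograd: first-order approximation.
fast_w, fast_b = w.clone(), b.clone()
for _ in range(10):  # fewer steps than the 100 used in practice, for brevity
    spt_loss = F.cross_entropy(F.linear(x_spt, fast_w, fast_b), y_spt)
    g_w, g_b = torch.autograd.grad(spt_loss, [fast_w, fast_b])
    fast_w = fast_w - 0.01 * g_w
    fast_b = fast_b - 0.01 * g_b

# Query loss on the fast model; since the inner gradients were detached,
# d(qry_loss)/dw here equals d(qry_loss)/d(fast_w), i.e. first-order MAML.
qry_loss = F.cross_entropy(F.linear(x_qry, fast_w, fast_b), y_qry)
optimizer.zero_grad()
qry_loss.backward()
optimizer.step()
```

With `create_graph=True` in the inner loop one would instead get the full second-order MAML gradient; dropping it is exactly the first-order approximation discussed above.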

If you have any further questions, please do not hesitate to contact me.