dahiyaaneesh / peclr

This is the pretraining code for PeCLR, an equivariant contrastive learning framework for 3D hand pose estimation, presented at ICCV 2021.

Home Page: https://ait.ethz.ch/projects/2021/PeCLR/

end-to-end fine-tuning? linear probing?

hongsukchoi opened this issue

Hi, thank you for your great work.

I read the paper and tried to analyze the code, but I wasn't able to figure out whether PeCLR adopts end-to-end fine-tuning or linear probing when evaluating the latent representation.

In the ablation section, the paper says the encoder is frozen, but other parts of the paper use the term "fine-tuning".

Evaluation of the learned feature representation, as done in the ablation study (Section 4.4), is performed by freezing the PeCLR-trained encoder and training an MLP on top of the frozen representation. A sketch of this setup follows below.
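
In PyTorch terms, this linear-probing setup might look like the following minimal sketch. Note that the encoder architecture, `feat_dim`, `num_joints`, and the head shape are illustrative assumptions, not identifiers from this repository:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

# Hypothetical stand-ins: PeCLR uses ResNet backbones, but the feature
# dimension, joint count, and MLP head here are illustrative only.
encoder = resnet50()
encoder.fc = nn.Identity()       # expose the 2048-d pooled features
feat_dim, num_joints = 2048, 21

# Freeze the PeCLR-trained encoder: no gradients flow into it.
for p in encoder.parameters():
    p.requires_grad = False
encoder.eval()

# Train only a small MLP head on top of the frozen representation.
head = nn.Sequential(
    nn.Linear(feat_dim, 512),
    nn.ReLU(),
    nn.Linear(512, num_joints * 3),   # e.g. one 3D coordinate per joint
)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

images = torch.randn(8, 3, 224, 224)  # dummy batch for illustration
with torch.no_grad():                 # encoder is frozen
    feats = encoder(images)
preds = head(feats).view(-1, num_joints, 3)
```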

For the final numbers in Sections 4.5 and 4.6, we perform end-to-end fine-tuning: the entire model is first pre-trained using PeCLR and then fine-tuned end-to-end on the labeled data.
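
By contrast, end-to-end fine-tuning updates every parameter. A minimal sketch under the same assumptions as above (`peclr_checkpoint.pth` is a placeholder path, and the regression head is illustrative, not the paper's actual pose model):

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

# Hypothetical sketch: only the backbone weights come from PeCLR
# pre-training; the head is randomly initialized and trained from scratch.
model = resnet50()
model.fc = nn.Linear(model.fc.in_features, 21 * 3)  # pose-regression head

state = torch.load('peclr_checkpoint.pth', map_location='cpu')
model.load_state_dict(state, strict=False)  # head keys are absent, hence strict=False

# Every parameter receives gradients, so encoder and head are updated jointly.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()

images = torch.randn(8, 3, 224, 224)   # dummy labeled batch
targets = torch.randn(8, 21 * 3)
optimizer.zero_grad()
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
```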