dahiyaaneesh / peclr

This is the pretraining code for PeCLR, an equivariant contrastive learning framework for 3D hand pose estimation. The paper was presented at ICCV 2021.

Home Page: https://ait.ethz.ch/projects/2021/PeCLR/


How to finetune on FreiHAND?

Lily-Le opened this issue

Hi, sorry to bother you. You've really done a fantastic job. I think the idea of translating image transformations to the latent space is cool. I'm working on my graduation project now and want to include your experiments in the paper. I wonder whether the fine-tuning code for the FreiHAND dataset is available? Thanks!
I'm looking forward to hearing from you soon :D

Thank you for your interest in our work! The fine-tuning code is not available, as it depends on a code-base I developed during an internship, and I have not gotten around to refactoring it for release. In practice, you can use any training pipeline to fine-tune the PeCLR models. In fact, I encourage you to do so, as I would love to see how well the model generalizes outside of our pipeline!
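
For anyone else landing here: below is a minimal sketch of what "any training pipeline" could look like in plain PyTorch, assuming a PeCLR-pretrained ResNet-50 encoder. The checkpoint filename, the `encoder.` key prefix, and the simple 2D keypoint regression head are all assumptions for illustration, not part of the official PeCLR release.

```python
# Hypothetical fine-tuning sketch: load a PeCLR-pretrained ResNet-50 backbone and
# train it on FreiHAND with a standard supervised loop. Checkpoint path, state-dict
# key names, and the regression head are assumptions, not the authors' code.
import torch
import torch.nn as nn
from torchvision import models

NUM_KEYPOINTS = 21  # FreiHAND annotates 21 hand joints

# 1. Build the backbone and load the pretrained encoder weights.
backbone = models.resnet50(weights=None)
backbone.fc = nn.Identity()  # drop the classification head, keep 2048-d features

ckpt = torch.load("peclr_resnet50.pth", map_location="cpu")  # hypothetical path
state_dict = ckpt.get("state_dict", ckpt)
# Keep only encoder weights; the key prefix depends on how the checkpoint was saved.
encoder_weights = {
    k.replace("encoder.", ""): v
    for k, v in state_dict.items()
    if k.startswith("encoder.")
}
backbone.load_state_dict(encoder_weights, strict=False)

# 2. Attach a task head, e.g. direct 2D keypoint regression.
model = nn.Sequential(
    backbone,
    nn.Linear(2048, NUM_KEYPOINTS * 2),
)

# 3. Standard supervised fine-tuning step (dataloader construction omitted).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()

def train_step(images, keypoints_2d):
    """images: (B, 3, H, W); keypoints_2d: (B, NUM_KEYPOINTS, 2)."""
    preds = model(images).view(-1, NUM_KEYPOINTS, 2)
    loss = criterion(preds, keypoints_2d)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key point is only that the PeCLR weights initialize the encoder; the head, loss, and augmentations can follow whatever FreiHAND pipeline you already use.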

Thanks for your reply! I'll have a try. :)

Hello, I would like to ask whether you are still working on this project. I wrote fine-tuning code but could not reproduce the results in the paper, and I still have some questions. Could you share your code or results? Thank you very much.