cmhungsteve / SSTDA

[CVPR 2020] Action Segmentation with Joint Self-Supervised Temporal Domain Adaptation (PyTorch)

Home Page: https://arxiv.org/abs/2003.02824


Results with fewer labeled training data

wlin-at opened this issue

Hi, I would like to thank you for the refreshing paper.
I have a question regarding the experiments with fewer labeled training data (Table 4 in the main paper and Table 8 in the appendix). I wonder whether the results with 65% of labeled training data were obtained by setting ratio_source or ratio_label_source to 65%.
To my understanding:
(1) ratio_source: drops both frame features and labels.
(2) ratio_label_source: drops labels only. The dropped labels are not used in the TCN cross-entropy loss, but the corresponding frame features are still used in the adversarial domain-prediction loss.
I thought the results in Table 4 were obtained with ratio_source = 65%, since the paper says "we drop labeled frames from source domains with uniform sampling for training".
However, the appendix also mentions "The additional trained data are all unlabeled, so they cannot be directly trained with standard prediction loss. There we propose SSTDA to exploit unlabeled data" and "achieve performance with this strong baseline using only 65% of labels for training", which instead suggests that the results were obtained with ratio_label_source = 65%.
Thank you in advance and please correct me if there is any misunderstanding.
Regards

I set ratio_label_source to 0.65.
Sorry for the confusing wording in the paper.
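
For reference, here is a minimal sketch of how the two ratios differ as discussed above, assuming uniform sampling over frames. The helper make_source_masks and its arguments are hypothetical illustrations, not the repo's actual code:

```python
import torch

def make_source_masks(num_frames, ratio_source=1.0, ratio_label_source=1.0):
    """Build boolean masks for source-domain frames (hypothetical helper).

    ratio_source       -- fraction of frames kept at all (features AND labels);
                          dropped frames are excluded from every loss.
    ratio_label_source -- fraction of kept frames whose labels are used in the
                          cross-entropy loss; the remaining frames still feed
                          the adversarial domain-prediction loss as unlabeled data.
    """
    # Uniformly sample which frames are kept at all.
    keep_frames = torch.rand(num_frames) < ratio_source
    # Among the kept frames, uniformly sample which ones keep their labels.
    keep_labels = keep_frames & (torch.rand(num_frames) < ratio_label_source)
    return keep_frames, keep_labels

# The setting clarified in this thread: all frames are used,
# but only 65% of them contribute labels.
keep_frames, keep_labels = make_source_masks(num_frames=1000,
                                             ratio_source=1.0,
                                             ratio_label_source=0.65)

# Supervised loss only on labeled frames; domain loss on every kept frame:
# ce_loss  = cross_entropy(pred[keep_labels], labels[keep_labels])
# adv_loss = domain_loss(features[keep_frames], domain_targets[keep_frames])
```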