junyongyou / triq

TRIQ implementation

Training

zmm96 opened this issue · comments

commented

Hello, I want to repeat your work and rewrite it in PyTorch. Can you tell me more about the training details? "A base learning rate 5e-5 was used for pretraining": do you mean pretraining on the same datasets (KonIQ-10k and LIVE-C)?

Yes, I used 5e-5 to train the model first, based on the ImageNet pretrained weights for the base net. Subsequently, I used 1e-6 to retrain the model based on the weights obtained from the previous training.
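As a rough sketch of how that two-stage schedule might look in a PyTorch rewrite (the model, loss, loader, and epoch counts below are placeholders; only the two learning rates come from this thread):

```python
import torch
import torch.nn as nn

def run_stage(model, loader, lr, epochs):
    """One training stage at a fixed base learning rate."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()  # placeholder quality-regression loss
    model.train()
    for _ in range(epochs):
        for images, mos in loader:
            optimizer.zero_grad()
            loss = criterion(model(images).squeeze(-1), mos)
            loss.backward()
            optimizer.step()

# Stage 1: base training, starting from ImageNet-initialized backbone weights.
run_stage(model, train_loader, lr=5e-5, epochs=base_epochs)      # placeholders
# Stage 2: fine-tune the stage-1 weights on the same training set.
run_stage(model, train_loader, lr=1e-6, epochs=finetune_epochs)  # placeholders
```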

commented

Thanks for your reply. I want to know whether you partitioned the data into training, validation, and test sets. I don't see a validation set. Maybe you use the test set as validation?

commented

And what do you mean by KonIQ-half-sized in the experiment table? Thanks so much for your work.

Hi, I didn't split the dataset into three parts, so basically there was no test set; instead, I used other datasets as the test set. KonIQ-half-sized means KonIQ images that have been halved in size, as in the original KonIQ paper. Maybe you could first read the papers and then see if you have any questions.
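For reference, a minimal sketch of that preprocessing, assuming "half-sized" simply means downscaling each 1024x768 KonIQ image to 512x384 (the directory paths here are hypothetical):

```python
from pathlib import Path
from PIL import Image

src = Path("koniq10k/1024x768")  # hypothetical source directory
dst = Path("koniq10k/512x384")
dst.mkdir(parents=True, exist_ok=True)

for path in src.glob("*.jpg"):
    img = Image.open(path)
    # Downscale to half the original resolution (1024x768 -> 512x384).
    img.resize((img.width // 2, img.height // 2), Image.BILINEAR).save(dst / path.name)
```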

commented

Do you use the same training set for the two stages (lr=5e-5, lr=1e-6) of your experiment?

Yes, I did.

commented

Then I saw ImageNet in your dataloader code. Do you mean you trained on ImageNet again, starting from the pretrained weights?

No. I only used ImageNet pretrained weights.
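In a PyTorch rewrite, that would amount to initializing the backbone from torchvision's ImageNet checkpoint and doing no ImageNet training yourself. A sketch, assuming a ResNet-50 backbone (your backbone choice may differ):

```python
import torch.nn as nn
from torchvision.models import resnet50

# Load ImageNet pretrained weights once; no additional ImageNet training.
backbone = resnet50(pretrained=True)
# Keep only the convolutional feature extractor (drop avgpool and fc),
# so a transformer head can consume the spatial feature map.
features = nn.Sequential(*list(backbone.children())[:-2])
```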

commented

Can you tell me details about the training: total epochs, warm-up epochs, and so on? It seems that your code does not match the paper (e.g., the learning rate).

Hi, there might be some slight differences in settings between the paper and the code. You can find the total epochs, warm-up epochs, and hold epochs in the code. I used 5e-5 as the learning rate in the base training and 1e-6 in the fine-tuning.
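As a sketch of what a warm-up/hold schedule of that shape could look like in PyTorch (the epoch counts below are placeholders; the real values are in the TRIQ code):

```python
import torch

warmup_epochs, hold_epochs, total_epochs = 5, 10, 40  # placeholder values

def lr_scale(epoch):
    """Linear warm-up, flat hold, then linear decay to zero."""
    if epoch < warmup_epochs:
        return (epoch + 1) / warmup_epochs
    if epoch < warmup_epochs + hold_epochs:
        return 1.0
    decay_epochs = total_epochs - warmup_epochs - hold_epochs
    done = epoch - warmup_epochs - hold_epochs
    return max(0.0, 1.0 - done / decay_epochs)

optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)  # `model` as above
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_scale)
# Call scheduler.step() once per epoch inside the training loop.
```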