qiqihaer / RandLA-Net-pytorch

A PyTorch implementation of RandLA-Net

Learning Rate Schedule

tsunghan-wu opened this issue · comments

Hi,

Thanks for sharing this excellent work.
As you mentioned in the README, the performance is somewhat worse than the original implementation's.

I found that the learning rate scheduling in your repo differs from the original one.

In this repo, the learning rate starts at 0.1 * 0.95 = 0.095, because adjust_learning_rate is called before each training epoch.
In the original repo, however, the learning rate is adjusted after each training epoch, so the first epoch runs at the base rate of 0.1.
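To make the difference concrete, here is a minimal sketch (not the repo's actual code) of the two call orders being compared, assuming the scheduler applies an exponential decay of lr = 0.1 * 0.95^epoch:

```python
# Compare the learning rate seen by each epoch when the decay step is
# applied before vs. after the training epoch. Constants are assumptions
# for illustration, matching the 0.1 and 0.95 mentioned above.

BASE_LR = 0.1
DECAY = 0.95

def lrs_adjust_before(num_epochs):
    """Decay applied *before* each epoch (this repo's behavior)."""
    lr, seen = BASE_LR, []
    for _ in range(num_epochs):
        lr *= DECAY      # adjust_learning_rate called first ...
        seen.append(lr)  # ... so the epoch trains at the decayed rate
    return seen

def lrs_adjust_after(num_epochs):
    """Decay applied *after* each epoch (original repo's behavior)."""
    lr, seen = BASE_LR, []
    for _ in range(num_epochs):
        seen.append(lr)  # the epoch trains at the current rate ...
        lr *= DECAY      # ... and the decay is applied afterwards
    return seen

print(lrs_adjust_before(3))  # first epoch already decayed to 0.095
print(lrs_adjust_after(3))   # first epoch still at the base rate 0.1
```

The two schedules are identical curves shifted by one epoch; the "before" variant simply never trains at the base rate of 0.1.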

I'm not sure whether my observation is correct; I look forward to discussing it with you.

Tsung-Han Wu

Yes, you are right. Maybe you can try to modify it and train again. But I don't think it's going to help.
