The sample_shape parameter does not work when fine-tuning the generative model with 256x256 pixel samples
bugsuse opened this issue · comments
Hi, I was running train_genforecast.py to fine-tune the model with 256x256 pixel samples. The model was initialized with the weights obtained from pre-training with 128x128 pixel samples.
I found that the sample_shape parameter of the train function in train_genforecast.py was not used when fine-tuning the model. That is, sample_shape is still (4, 4) when the datamodule is built for fine-tuning.
ldcast/scripts/train_genforecast.py
Lines 94 to 97 in 1345cb2
Would you mind checking this for me?
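For illustration, here is a minimal sketch of the kind of bug being reported: a parameter accepted by the training function but never forwarded to the datamodule builder, so the default is silently used. The function names and defaults are hypothetical, not the actual ldcast API.

```python
# Hypothetical sketch of the reported bug; names and defaults are
# illustrative only, not the real train_genforecast.py code.

def build_datamodule(sample_shape=(4, 4)):
    # If the caller never passes sample_shape, the pre-training
    # default (4, 4) is used even during fine-tuning.
    return {"sample_shape": sample_shape}

def train_buggy(sample_shape=(8, 8)):
    # Bug pattern: sample_shape is accepted but not forwarded,
    # so build_datamodule falls back to its default.
    return build_datamodule()

def train_fixed(sample_shape=(8, 8)):
    # Fix pattern: forward the parameter when building the datamodule.
    return build_datamodule(sample_shape=sample_shape)

print(train_buggy(sample_shape=(8, 8))["sample_shape"])  # (4, 4)
print(train_fixed(sample_shape=(8, 8))["sample_shape"])  # (8, 8)
```

This kind of omission is easy to miss because training still runs without error; it just uses the wrong sample size.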
Thanks for the report again! This is fixed in the above commit.
I also double-checked that this was not affecting the training runs for the paper (for instance, the increase in time per training sample during the fine-tuning phase is consistent with the increase in sample size). Rather, this parameter was omitted when preparing the scripts for the code release.