Error in optim/adamw.py
caodoanh2001 opened this issue · comments
🐛 Bug
Hi,
I think optim/adamw.py has a small indentation mistake at line 110.
F.adamw(params_with_grad,
        grads,
        exp_avgs,
        exp_avg_sqs,
        max_exp_avg_sqs,
        state_steps,
        amsgrad,
        beta1,
        beta2,
        group['lr'],
        group['weight_decay'],
        group['eps'])
At line 110, I think the indentation should be increased by one tab.
I hit this bug when using the mmdetection toolbox.
cc @vincentqb
What is the bug that you are encountering?
Hi, when I start training a model with mmdetection, I get this error:
UnboundLocalError: local variable 'beta1' referenced before assignment
When I decrease the indentation at line 110 of adamw.py by one tab, it seems to solve the problem.
I think the variables beta1 and beta2 end up being referenced outside the scope where they are assigned, which causes the mentioned error.
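To illustrate the failure mode, here is a minimal, hypothetical sketch (not PyTorch's actual code; names like `step` and `betas` are made up for the example). An UnboundLocalError appears whenever the assignment only runs inside a loop body that can be skipped, while the use sits at an outer indentation level:

```python
def step(params, betas=(0.9, 0.999)):
    """Sketch of the bug pattern: beta1/beta2 are assigned inside the
    per-parameter loop, after a `continue`, so code at an outer
    indentation level can see them before any assignment has run."""
    for p in params:
        if p is None:          # parameters without gradients are skipped
            continue
        beta1, beta2 = betas   # only assigned when the loop body runs
    # Mis-indented use: if every parameter was skipped, beta1/beta2 were
    # never assigned and Python raises UnboundLocalError right here.
    return beta1, beta2
```

With at least one non-skipped parameter the function works; with only skipped parameters it raises `UnboundLocalError: local variable 'beta1' referenced before assignment`, matching the traceback above.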
This was fixed in #52944. What PyTorch version are you using?
I also encountered this problem; I'm using PyTorch 1.8.1 (py3.9_cuda11.1_cudnn8.0.5_0).
I confirm that pytorch-1.8.1 doesn't include this fix, and I'm getting the same problem.
I can confirm this too.
It seems that beta1, beta2 = group['betas'] has to be moved up to line 76?
f8238d7#diff-46de6ea1d9fce81c27638ecd7f137c781fd64d02acea698c432a8ddb916ea51f
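A minimal sketch of that kind of fix (hypothetical names, not the actual diff in the linked commit): hoisting the betas assignment above the per-parameter loop guarantees the names are bound even when every parameter is skipped:

```python
def step_fixed(params, betas=(0.9, 0.999)):
    """Sketch of the fix: read the betas before the per-parameter loop
    so they are always assigned, regardless of which parameters are
    skipped inside the loop."""
    beta1, beta2 = betas       # hoisted out of the loop, always bound
    for p in params:
        if p is None:          # parameters without gradients are skipped
            continue
        # ... per-parameter state updates would happen here ...
    return beta1, beta2
```

Now the edge case that previously crashed (all parameters skipped) simply returns the configured betas instead of raising UnboundLocalError.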
hmmm, it seems that 1.8.2 LTS doesn't have this fix included.
It does work.