yunjey / pytorch-tutorial

PyTorch Tutorial for Deep Learning Researchers

some question about the position of 'optimizer.zero_grad()'

languandong opened this issue · comments

I think the correct way to code the training is this:

    optimizer.zero_grad()
    # Forward pass
    outputs = model(images)
    loss = criterion(outputs, labels)
    
    # Backward and optimize
    loss.backward()
    optimizer.step()

not this:

    # Forward pass
    outputs = model(images)
    loss = criterion(outputs, labels)
    
    # Backward and optimize
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
commented

any difference?

@languandong
You can use either; it doesn't matter, as long as optimizer.zero_grad() is called before loss.backward().
Note that optimizer.zero_grad() zeroes out the gradients stored in the grad field of the parameter tensors, and loss.backward() computes the gradients, which are then accumulated into that grad field.
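
For illustration, here is a minimal sketch (a toy nn.Linear model with random data; the names are placeholders, not code from this repo) showing that backward() accumulates into the grad field and that zero_grad() is what resets it:

    import torch
    import torch.nn as nn

    model = nn.Linear(4, 2)                      # toy model
    criterion = nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.randn(8, 4), torch.randn(8, 2)  # random data

    criterion(model(x), y).backward()            # first backward fills .grad
    g1 = model.weight.grad.clone()

    criterion(model(x), y).backward()            # no zero_grad(): gradients accumulate
    print(torch.allclose(model.weight.grad, 2 * g1))  # True

    optimizer.zero_grad()                        # reset the grad fields
    criterion(model(x), y).backward()            # fresh gradients again
    print(torch.allclose(model.weight.grad, g1))      # True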

As pointed out in the previous comment, the critical factor is the order in which optimizer.zero_grad() and loss.backward() are called. Both code snippets are valid as long as optimizer.zero_grad() is invoked before loss.backward(): the gradients are first zeroed out, then recomputed by the backward pass and stored in each parameter's grad field.
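
As a quick check, the sketch below (a hypothetical toy setup with placeholder names) runs one training step with each ordering on identical copies of a model and confirms both produce the same update:

    import copy
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    model_a = nn.Linear(4, 2)
    model_b = copy.deepcopy(model_a)                 # identical copy for the second ordering
    criterion = nn.MSELoss()
    opt_a = torch.optim.SGD(model_a.parameters(), lr=0.1)
    opt_b = torch.optim.SGD(model_b.parameters(), lr=0.1)
    x, y = torch.randn(8, 4), torch.randn(8, 2)

    # Ordering 1: zero_grad() before the forward pass
    opt_a.zero_grad()
    loss_a = criterion(model_a(x), y)
    loss_a.backward()
    opt_a.step()

    # Ordering 2: zero_grad() between the forward and backward passes
    loss_b = criterion(model_b(x), y)
    opt_b.zero_grad()
    loss_b.backward()
    opt_b.step()

    print(torch.allclose(model_a.weight, model_b.weight))  # True: same update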

commented

@languandong I think the confusion comes from the misconception that gradients are computed and stored during the forward pass. In fact, the forward pass only constructs the computation graph (a DAG). Gradients are computed lazily: nothing is computed until loss.backward() is explicitly invoked.
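
A tiny sketch of this (placeholder tensors, not repo code): after the forward pass the grad field is still None, and it is only populated once loss.backward() runs:

    import torch

    w = torch.randn(3, requires_grad=True)
    x = torch.randn(3)

    loss = (w * x).sum()    # forward pass: only builds the graph
    print(w.grad)           # None - nothing has been computed yet

    loss.backward()         # gradients are computed here
    print(w.grad)           # equals x, since d(w·x)/dw = x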