Why avoid gradient accumulation?
RonanKMcGovern opened this issue
There is this quote:
> **Gradient accumulation** simulates a larger batch size than the hardware can support and therefore does not provide any throughput benefits. It should generally be avoided in applied work.
For large GPUs and multi-GPU setups, I can see this making sense, as you can run batches of 32 and don't need accumulation. But on smaller GPUs, gradient accumulation can be important because it provides averaging over the virtual batches, which stabilises training.
Am I mistaken or missing something?
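For reference, this is the pattern I mean; a minimal PyTorch sketch, where the model, data, and `accum_steps` values are made up purely for illustration:

```python
import torch
from torch import nn

# Toy setup: model, data, and accum_steps are placeholders for illustration only.
model = nn.Linear(16, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
accum_steps = 4  # virtual batch = 4 micro-batches of 8 samples

micro_batches = [(torch.randn(8, 16), torch.randn(8, 1)) for _ in range(8)]

optimizer.zero_grad()
for step, (x, y) in enumerate(micro_batches, start=1):
    loss = loss_fn(model(x), y)
    # Scale the loss so the accumulated gradient equals the mean over the virtual batch.
    (loss / accum_steps).backward()
    if step % accum_steps == 0:
        optimizer.step()       # one weight update per virtual batch of 32
        optimizer.zero_grad()
```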
A lot of architectures have BN layers, which I think don't work properly unless they are actually backpropagated through.
Batch normalization. Essentially, BN blocks keep track of the running batch mean and standard deviation and use them to normalize their inputs.
These running statistics are non-trainable and are updated with every minibatch the blocks receive, i.e. on every forward pass. With gradient accumulation, however, the number of forward passes per epoch no longer matches the number of optimizer updates, so BN blocks compute their statistics over the small micro-batches, while their trainable parameters (scale and shift) are still updated according to the accumulated virtual batches. Basically, the batches and their descriptive statistics become "unsynchronized".
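To make the mismatch concrete, here is a toy sketch (assuming PyTorch and a single `BatchNorm1d` layer; the batch sizes are invented): the running statistics update on every forward pass over a micro-batch, while the trainable parameters update only once per virtual batch.

```python
import torch
from torch import nn

# Hypothetical numbers: micro-batches of 8, accumulated into a virtual batch of 32.
bn = nn.BatchNorm1d(4)
optimizer = torch.optim.SGD(bn.parameters(), lr=1e-2)
accum_steps = 4

optimizer.zero_grad()
for _ in range(accum_steps):
    x = torch.randn(8, 4)                 # one micro-batch
    out = bn(x)                           # running_mean / running_var update here,
                                          # i.e. once per micro-batch of 8 samples ...
    (out.pow(2).mean() / accum_steps).backward()

optimizer.step()                          # ... while weight and bias update only once,
optimizer.zero_grad()                     # for the whole virtual batch of 32

print(bn.num_batches_tracked)             # tensor(4): the stats saw 4 small batches, not 1 large one
```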
BN blocks are very popular in computer vision tasks, and unfortunately, I'm not too familiar with much else. However, I believe that transformer blocks typically use layer normalization, which does not depend on batch size, so you should be safe.
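A quick way to see that difference (again a hypothetical PyTorch snippet): a sample's LayerNorm output is independent of whatever else is in the batch, whereas its BatchNorm output in training mode is not.

```python
import torch
from torch import nn

torch.manual_seed(0)
ln = nn.LayerNorm(4)     # normalizes each sample over its feature dimension
bn = nn.BatchNorm1d(4)   # normalizes each feature over the batch dimension

x = torch.randn(8, 4)

# LayerNorm: sample 0 gets the same output whether it is processed alone
# or as part of the full batch.
print(torch.allclose(ln(x)[:1], ln(x[:1])))        # True

# BatchNorm (train mode): sample 0's output depends on which other samples
# share its batch, since the normalizing statistics come from the batch.
print(torch.allclose(bn(x)[:1], bn(x[:4])[:1]))    # typically False
```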
By the way, large batch sizes are just as "dangerous" as small ones due to potential oversmoothing of the gradient landscape. It's kind of a "pick your poison" situation.