warner-benjamin / fastxtend

Train fastai models faster (and other useful tools)

Home Page: https://fastxtend.benjaminwarner.dev


CutMixUpAugment's augment_finetune param applies at the beginning of training rather than the end

csaroff opened this issue

The documentation for CutMixUpAugment says "Use augment_finetune to only apply dataloader augmentations at the end of training."

Reviewing the code, however, it appears that augment_finetune applies the augmentations only at the beginning of training.

if self.element and self.augment_finetune >= self.learn.pct_train:

Let's assume I want to apply augmentations only for the last 20% of training, so self.augment_finetune = 0.2.

During the first epoch, the check would be

if True and 0.2 >= 0.0:

Meaning that the augmentations would be applied during the first epoch (and for the first 20% of training).
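
To make that concrete, here is a minimal standalone sketch (not fastxtend code) that traces the original condition as self.learn.pct_train advances from 0.0 to 1.0, assuming self.element is True throughout:

augment_finetune = 0.2  # intent: augment only during the last 20% of training

for pct_train in (0.0, 0.1, 0.2, 0.5, 0.8, 1.0):
    # Original condition from the callback (self.element assumed True)
    applies = augment_finetune >= pct_train
    print(f"pct_train={pct_train:.1f} -> augmentations applied: {applies}")

# Prints True for pct_train 0.0 through 0.2 and False afterwards, i.e. the
# augmentations run during the *first* 20% of training, not the last.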

I might be missing something, but I think the check should be:

if self.element and self.augment_finetune >= 1 - self.learn.pct_train:
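
Tracing the proposed condition through the same sketch shows it fires only once pct_train reaches 0.8, i.e. during the last 20% of training:

augment_finetune = 0.2

for pct_train in (0.0, 0.5, 0.79, 0.8, 0.9, 1.0):
    # Proposed condition: apply during the final augment_finetune fraction
    applies = augment_finetune >= 1 - pct_train
    print(f"pct_train={pct_train:.2f} -> augmentations applied: {applies}")

# False until pct_train reaches 0.8, then True for the final 20% of training.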

I agree it would make more sense for augment_finetune to be the number of epochs or percent of training it's applied for, not the starting epoch. Will change.
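
A hedged sketch of those semantics, assuming the common fastai-style convention of reading values >= 1 as an epoch count and values < 1 as a fraction of training (the helper name is hypothetical, not fastxtend's API):

def augment_for(pct_train, augment_finetune, n_epoch):
    # Hypothetical helper: interpret augment_finetune as a duration,
    # either a number of final epochs (>= 1) or a fraction of training (< 1).
    frac = augment_finetune / n_epoch if augment_finetune >= 1 else augment_finetune
    # Apply the dataloader augmentations only during that final stretch.
    return pct_train >= 1 - frac

assert augment_for(pct_train=0.9, augment_finetune=0.2, n_epoch=10)       # last 20%
assert not augment_for(pct_train=0.5, augment_finetune=0.2, n_epoch=10)
assert augment_for(pct_train=0.9, augment_finetune=2, n_epoch=10)         # last 2 of 10 epochs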