Nixtla / neuralforecast

Scalable and user friendly neural :brain: forecasting algorithms.

Home page: https://nixtlaverse.nixtla.io/neuralforecast

[BUG] Model refit after cross-validation

elephaint opened this issue · comments

What happened + What you expected to happen

Raising this issue so that I don't forget to solve it (the change is simple, but I need to spend a bit more time on it).

In Auto* models, it seems we refit the model after finding the best set of hyperparameters here:

self.model = self._fit_model(

However, it seems it should be val_size=val_size * self.refit_with_val instead of val_size=val_size * (1 - self.refit_with_val).

refit_with_val is a boolean that defaults to False, so the current code will, by default, refit with a validation set: 1 - False evaluates to 1, leaving val_size > 0. The boolean therefore works the wrong way around. I'd assume that with refit_with_val=False you don't want a validation set in the final fit, but the current implementation does the opposite.
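The inversion is easy to see in isolation. A minimal sketch (the helper names below are hypothetical, not neuralforecast's API) comparing the effective validation size under the current expression versus the proposed fix:

```python
def effective_val_size_current(val_size: int, refit_with_val: bool) -> int:
    # Current behaviour: bools coerce to ints, so False -> 1 - 0 = 1,
    # meaning the validation set is kept when refit_with_val is False.
    return val_size * (1 - refit_with_val)


def effective_val_size_proposed(val_size: int, refit_with_val: bool) -> int:
    # Proposed fix: False -> 0 (no validation set), True -> val_size.
    return val_size * refit_with_val


# With the default refit_with_val=False and, say, val_size=12:
print(effective_val_size_current(12, False))   # 12 -> validation set still used
print(effective_val_size_proposed(12, False))  # 0  -> no validation set, as expected
print(effective_val_size_proposed(12, True))   # 12 -> validation set used when requested
```

This makes the asymmetry concrete: only the proposed expression matches the documented meaning of the flag.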

Versions / Dependencies

1.7.1

Reproduction script

n/a

Issue Severity

Low: It annoys or frustrates me.