Nixtla / neuralforecast

Scalable and user-friendly neural :brain: forecasting algorithms.

Home Page: https://nixtlaverse.nixtla.io/neuralforecast


Transfer learning tutorial issue with --ntasks error

neilmartindev opened this issue · comments

What happened + What you expected to happen

Hey, I tried to follow the Transfer Learning tutorial and ran this section:

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS

horizon = 12
stacks = 3
models = [NHITS(input_size=5 * horizon,
                h=horizon,
                max_steps=100,
                stack_types=stacks * ['identity'],
                n_blocks=stacks * [1],
                mlp_units=[[256, 256] for _ in range(stacks)],
                n_pool_kernel_size=stacks * [1],
                batch_size=32,
                scaler_type='standard',
                n_freq_downsample=[12, 4, 1])]
nf = NeuralForecast(models=models, freq='M')
nf.fit(df=Y_df)

nf.save(path='./results/transfer/', model_index=None, overwrite=True, save_dataset=False)
```

However, I got this runtime error:

```
RuntimeError: You set --ntasks=6 in your SLURM bash script, but this variable is not supported. HINT: Use --ntasks-per-node=6 instead.
```

I've not used or touched SLURM myself, but I did look into my slurm.py script and can't see anything that jumps out at me. Do you have any advice? I'm a new PhD student, so any help is really appreciated!

Thanks,
Neil

Versions / Dependencies

Python 3.10.4
Linux '4.18.0-372.32.1.el8_6.x86_64'

Reproduction script

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS

horizon = 12
stacks = 3
models = [NHITS(input_size=5 * horizon,
                h=horizon,
                max_steps=100,
                stack_types=stacks * ['identity'],
                n_blocks=stacks * [1],
                mlp_units=[[256, 256] for _ in range(stacks)],
                n_pool_kernel_size=stacks * [1],
                batch_size=32,
                scaler_type='standard',
                n_freq_downsample=[12, 4, 1])]
nf = NeuralForecast(models=models, freq='M')
nf.fit(df=Y_df)

nf.save(path='./results/transfer/', model_index=None, overwrite=True, save_dataset=False)
```

Issue Severity

Medium: It is a significant difficulty but I can work around it.
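If the error comes from PyTorch Lightning's SLURM auto-detection (an assumption here, since NeuralForecast trains its models via Lightning, and the `--ntasks` hint in the traceback matches Lightning's SLURM environment check), one possible workaround on a login or interactive node is to clear the SLURM task variables before calling `nf.fit()`, so Lightning falls back to a plain local environment. A minimal sketch:

```python
import os

# Assumption: the job was not launched through SLURM's task mechanism,
# but inherited SLURM_* variables (e.g. from an sbatch wrapper) that
# trigger Lightning's cluster detection. Removing them before nf.fit()
# should make Lightning use the default single-node environment.
for var in ("SLURM_NTASKS", "SLURM_JOB_NAME"):
    os.environ.pop(var, None)  # remove if present; no error if absent
```

This only sidesteps the detection; if the script is genuinely meant to run as a multi-task SLURM job, following the hint and requesting `--ntasks-per-node` in the sbatch script would be the proper fix.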

I mixed this up with MLForecast, sorry!