time-series-foundation-models / lag-llama

Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting

How to reuse best checkpoint and turned hparams.yaml to predict results

YonDraco opened this issue · comments

I used the best checkpoint after turning, then used the get_lag_llama_predictions function to predict from that checkpoint. However, the predicted results are much worse than when I turned and trained that checkpoint.

Sorry, I don't understand. Can you elaborate? Which checkpoint did you use?

@ashok-arjun I saved the best checkpoint (epoch=36-step=1850.ckpt) after turning and used the get_lag_llama_predictions function, as in Colab demo 2, to make predictions from this checkpoint. However, when I load this checkpoint to make predictions, the results are worse than during turning. So I think I need to load both the checkpoint and hparams.yaml, but I don't know how to handle them.
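For reference, a PyTorch Lightning .ckpt file already embeds the hyperparameters that hparams.yaml mirrors, so loading the yaml file separately is usually unnecessary: the Lag-Llama Colab demos read the saved model arguments back out of the checkpoint dict returned by torch.load and pass them to the estimator. A minimal sketch of that merge step, assuming the usual "hyper_parameters" / "model_kwargs" layout inside the checkpoint (the dict below is a mock standing in for torch.load's output, and the key names and values are illustrative, not taken from this issue):

```python
def build_estimator_kwargs(checkpoint, overrides=None):
    """Merge the model kwargs saved inside a Lightning checkpoint
    with any per-prediction overrides (e.g. context_length)."""
    # Lightning stores the module's saved hyperparameters under this key;
    # the nested "model_kwargs" layout is how the Lag-Llama demos access them.
    kwargs = dict(checkpoint["hyper_parameters"]["model_kwargs"])
    if overrides:
        kwargs.update(overrides)
    return kwargs

# Mock of what torch.load("epoch=36-step=1850.ckpt") would return
# (real checkpoints also carry state_dict, optimizer states, etc.).
mock_ckpt = {
    "hyper_parameters": {
        "model_kwargs": {
            "input_size": 1,
            "n_layer": 8,
            "n_head": 9,
            "n_embd_per_head": 16,
        }
    }
}

kwargs = build_estimator_kwargs(mock_ckpt, overrides={"context_length": 64})
print(kwargs["n_layer"], kwargs["context_length"])
```

The resulting kwargs dict would then be passed to the estimator constructor alongside ckpt_path, so the fine-tuned weights and the matching architecture hyperparameters come from the same file.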

Sorry, I don't understand what "turning" is. Do you mean finetuning or training?