functime-org / functime

Time-series machine learning at scale. Built with Polars for embarrassingly parallel feature extraction and forecasts on panel data.

Home Page: https://docs.functime.ai


[FEAT] LLM improvements

TomBurdge opened this issue · comments

While building a small project with functime's OpenAI integration, I would like to suggest a few improvements.
In order of complexity:

  • Change how the API key is loaded

Currently, if you are using a .env file, you have to call load_dotenv before importing from functime.llm.
If everything lives in a single Python file, this means executing code before your imports are finished, which violates Ruff rule E402 (module-level import not at top of file) and the equivalent Flake8 rule.

  • Budgeting: it should be possible to choose a maximum model and a maximum token count.

Currently, if the prompt is too long for gpt-3.5-turbo, functime falls back to a model with a larger context window, up to gpt-4-32k.
If I am concerned about my OpenAI spend, I may wish to limit the size of my token request, or limit which models can be used.
For budgeting purposes, the two are roughly equivalent.
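A budget guard along these lines is what I have in mind. The model names, context sizes, and function name below are illustrative assumptions, not functime's actual escalation logic:

```python
# Context windows ordered from smallest to largest.
# These names and limits are assumptions for illustration.
MODEL_CONTEXT = {
    "gpt-3.5-turbo": 4_096,
    "gpt-4": 8_192,
    "gpt-3.5-turbo-16k": 16_384,
    "gpt-4-32k": 32_768,
}


def pick_model(n_tokens: int, max_model: str = "gpt-4-32k") -> str:
    """Return the smallest model that fits, never escalating past max_model."""
    cap = MODEL_CONTEXT[max_model]
    if n_tokens > cap:
        raise ValueError(
            f"prompt of {n_tokens} tokens exceeds the budget cap of {cap}"
        )
    for model, limit in MODEL_CONTEXT.items():
        if n_tokens <= limit:
            return model
    raise AssertionError("unreachable: the cap check guarantees a fit")
```

Setting max_model then caps both the model choice and, implicitly, the token spend in one place.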

  • Generate a bit more dataset context/metadata for the prompt.

This could be simple metadata: the periodicity and the number of periods.
It could also include relevant derived features: the seasonality according to a simple statistical method, or the holidays for the country the data covers.

Most of this can already be added via the context option, but it could be easier.
You never know how an LLM will respond to information until it receives it, but this could help elicit better responses.

  • Compatibility with the HuggingFace API.

OpenAI is great and gives good responses, but for a variety of reasons I may wish to use HuggingFace instead.
Streaming the tokens is probably overkill, but this should just be a case of adding something like the code here.
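For reference, the hosted HuggingFace Inference API is just an authenticated POST, so a backend could be quite small. A sketch, where the endpoint shape follows the public HF convention and the function name is hypothetical:

```python
import json
import urllib.request

HF_ENDPOINT = "https://api-inference.huggingface.co/models/{model}"


def build_hf_request(prompt: str, model: str, token: str) -> urllib.request.Request:
    """Assemble (but do not send) a text-generation request."""
    return urllib.request.Request(
        HF_ENDPOINT.format(model=model),
        data=json.dumps({"inputs": prompt}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Sending it would then look like:
#   with urllib.request.urlopen(build_hf_request(prompt, model, token)) as resp:
#       text = json.load(resp)[0]["generated_text"]
```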

Ciao Tom, I recall that some of this was discussed on the Discord. Do you think the issue is still unaddressed?