Port `autogpt.core.resource.model_provider` from AutoGPT to Forge
Pwuts opened this issue
Actionable for #6970
Move clear-cut library code from AutoGPT to Forge (or to /dev/null if Forge already has a better version):
- `autogpt.core.resource.model_provider`
- ...

Proposed new module name: `forge.llm`
Dependencies
TODO
- Port `autogpt.core.resource.model_provider`
- Make a single interface for client initialization/usage
- Check module configuration setup (see below)
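The "single interface for client initialization/usage" item could be sketched as a small factory that hides which provider class is constructed. The `ChatProvider` protocol, the registry, and `get_provider` below are illustrative assumptions, not the actual Forge API:

```python
from dataclasses import dataclass
from typing import Protocol

class ChatProvider(Protocol):
    """Minimal behavior every provider would share (hypothetical)."""
    def create_chat_completion(self, prompt: str) -> str: ...

@dataclass
class DummyProvider:
    """Stand-in for a real provider such as an OpenAI wrapper."""
    model: str
    def create_chat_completion(self, prompt: str) -> str:
        return f"[{self.model}] {prompt}"

# Registry mapping provider names to implementations; real code would
# register OpenAI/Azure/etc. classes here.
_PROVIDERS: dict[str, type] = {"dummy": DummyProvider}

def get_provider(name: str, **kwargs) -> ChatProvider:
    """One entry point for client initialization, regardless of backend."""
    try:
        cls = _PROVIDERS[name]
    except KeyError:
        raise ValueError(f"unknown provider: {name}")
    return cls(**kwargs)
```

With such a factory, calling code never imports a concrete provider class directly, which keeps the module portable.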
Notes
Configuration may need revision
We want Forge components to be portable and usable as stand-alone imports. Modules should be able to configure themselves if no configuration is passed in.
Example: `OpenAI`'s constructor has an `api_key` parameter. If not set, it will try to read the API key from the `OPENAI_API_KEY` environment variable. Our `OpenAIProvider` wraps an `OpenAI` or `AzureOpenAI` client, depending on the configuration. We think it makes sense to preserve this behavior.
Why migrate this module?
The `model_provider` module provides functionality and extensibility that are not available from any multi-model client we know of (e.g. LiteLLM). We would like to have support for as many models as possible, but:
- As it is, AutoGPT's prompts are not portable between different model families. Until this is fixed, having access to any number of LLMs / LLM providers doesn't add much value.
- We are eyeing some opportunities (developing LLM polyfills/middleware) for which having low-level access to the native clients is beneficial. Related: #6969.
For these reasons, we want to keep our own client implementation for now.
This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.
This issue was closed automatically because it has been stale for 10 days with no activity.
Unstale @kcze