OpenAI compatible API
Stargate256 opened this issue
Hi, is it possible to connect to a local LLM server via the OpenAI-compatible API?
Basically I would like to use oobabooga/text-generation-webui with the OpenAI API extension.
I can't use Ollama because it is almost 3x slower than text-generation-webui, as it can't run EXL2.
Yes, in the .env file in the root folder, add:
OPENAI_API_KEY=sk-11111111111111111111111111111111
OPENAI_BASE_URL=http://127.0.0.1:5000/v1
But you need a model with function-calling abilities in order to call the search API; see #15 (comment).
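For reference, here is a quick way to probe whether a local endpoint actually honors tool calls. This is a minimal sketch: the search tool schema below is made up for testing and is not the app's real tool definition.

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:5000/v1",  # text-generation-webui's OpenAI extension
    api_key="sk-11111111111111111111111111111111",  # dummy key; local servers usually ignore it
)

# Hypothetical tool schema, only to see whether the model emits a tool call at all.
tools = [{
    "type": "function",
    "function": {
        "name": "search",
        "description": "Search the web for a query.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

resp = client.chat.completions.create(
    model="phi3-medium-8_0bpw-exl2",  # whatever model the server has loaded
    messages=[{"role": "user", "content": "Find recent news about EXL2 quantization."}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)  # None means the model never called the tool
```

If this prints None for every prompt that should trigger a search, the model (or the server's chat template) likely lacks function calling, and the app will appear to do nothing.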
What must I select when I run it then?
I tried this, and even added OPENAI_BASE_URL=${OPENAI_BASE_URL} to docker-compose.dev.yaml, but it doesn't work: there are no errors, it just doesn't do anything.
LLM: phi3-medium-8_0bpw-exl2
@yhyu13 Are you sure that you don't need to do anything else?
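A note for anyone else debugging this: Compose only substitutes `${OPENAI_BASE_URL}` if the variable is set in the shell or in the `.env` file Compose reads, and inside a container `http://127.0.0.1:5000/v1` points at the container itself rather than the host, which may explain the silent failure. A hypothetical excerpt (the service name is a guess; match it to the real docker-compose.dev.yaml):

```yaml
services:
  backend:
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - OPENAI_BASE_URL=${OPENAI_BASE_URL}
    extra_hosts:
      - "host.docker.internal:host-gateway"  # lets the container reach the host on Linux
```

With that mapping in place, the base URL would be `http://host.docker.internal:5000/v1` instead of `http://127.0.0.1:5000/v1`.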
Sorry, I missed that. In src/backend/related_queries.py, I added openai.base_url support so that the OpenAI base URL can be configured from the .env file:
```diff
 load_dotenv()
+# Default keeps the /v1 prefix so the client builds correct endpoint paths.
+openai.base_url = os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1")
+print(f"openai.base_url is {openai.base_url}")
+
 OLLAMA_HOST = os.environ.get("OLLAMA_HOST", "http://localhost:11434")
```
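With that patch applied, a quick smoke test (a sketch, assuming the text-generation-webui endpoint from above) would be:

```python
import os

import openai
from dotenv import load_dotenv

load_dotenv()
# Same module-level configuration as in the patch above.
openai.base_url = os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1")
openai.api_key = os.environ.get("OPENAI_API_KEY", "sk-11111111111111111111111111111111")

resp = openai.chat.completions.create(
    model="phi3-medium-8_0bpw-exl2",  # the model loaded in the local server
    messages=[{"role": "user", "content": "Say hello."}],
)
print(resp.choices[0].message.content)
```

If this returns a completion but the app still does nothing, the problem is in the app's own configuration rather than the endpoint.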
Just added support for this through https://www.litellm.ai/!
You should be able to do this by setting OPENAI_BASE_URL in your .env. Let me know if you have any trouble setting this up.
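For anyone wiring this up, a LiteLLM-routed call to an OpenAI-compatible server typically looks like the sketch below; the model name is whatever your local server reports, and the key is a dummy.

```python
import litellm

# The "openai/" prefix tells LiteLLM to speak the OpenAI protocol
# against the server given in api_base.
response = litellm.completion(
    model="openai/phi3-medium-8_0bpw-exl2",
    messages=[{"role": "user", "content": "Say hello."}],
    api_base="http://127.0.0.1:5000/v1",
    api_key="sk-11111111111111111111111111111111",  # most local servers ignore it
)
print(response.choices[0].message.content)
```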
And how do I make it use an OpenAI-compatible API? (Sorry for the late follow-up; my AI server had died.)