mudler / LocalAI

:robot: The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware; no GPU required. Runs gguf, transformers, diffusers, and many more model architectures, and can generate text, audio, video, and images, with voice-cloning capabilities.

Home Page: https://localai.io

Feature Request: mimic OpenAI API endpoints

mkellerman opened this issue

I'm using this Docker Compose file to deploy a front-end UI that is very similar to the ChatGPT interface.

version: '3.6'

services:
  chatgpt:
    build: .
    # image: ghcr.io/mckaywrigley/chatbot-ui:main
    ports:
      - 9080:3000
    environment:
      - 'OPENAI_API_KEY='
      - 'OPENAI_API_HOST=http://api:8080'

  api:
    image: quay.io/go-skynet/llama-cli:v0.4
    volumes:
      - /Users/Shared/Models:/models
    ports:
      - 9000:8080
    environment:
      - MODEL_PATH=/models/7B/gpt4all-lora-quantized.bin
      - CONTEXT_SIZE=700
      - THREADS=4
    command: api

Would it be possible to add API endpoints that mimic the same output as OpenAI's? I'm not sure whether it's easier to do it here or to add a proxy that converts the input/output of each call. But I see value in it: other tools that normally call OpenAI APIs could simply target this local instance.
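
For example (just a sketch of what I have in mind: the model name is a placeholder, and I'm assuming the endpoints would live under the usual /v1 prefix like OpenAI's), the stock openai Python client could be pointed at the api service from the compose file above:

import openai

# Point the standard OpenAI client at the local instance instead of
# api.openai.com. Host port 9000 maps to the api service above; the key
# just has to be non-empty, since nothing would check it locally.
openai.api_key = "sk-local"
openai.api_base = "http://localhost:9000/v1"  # assumption: /v1 prefix

response = openai.ChatCompletion.create(
    model="gpt4all-lora-quantized",  # placeholder model name
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)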

Your thoughts?

Wonderful idea! I'd be more than happy to have it work in a way that is compatible with chatbot-ui, and I'll try to have a look. On the other hand, I'm concerned about whether the OpenAI API makes some assumptions (e.g. prefixed prompts, roles, etc.): at the moment the llama-cli API is very simple, as you need to inject your prompt into the input text yourself.

Although we could just offer an opinionated setup, e.g. by injecting the prompts ourselves.
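
For instance, here's a minimal sketch of the kind of prompt injection I mean (the template is completely hypothetical, just to show the shape of it):

def build_prompt(messages):
    # Flatten OpenAI-style chat messages ({"role": ..., "content": ...})
    # into the single flat string that llama-cli expects today.
    # The Alpaca-style markers below are only an example template.
    parts = []
    for message in messages:
        if message["role"] == "system":
            parts.append(message["content"])
        elif message["role"] == "user":
            parts.append("### Instruction:\n" + message["content"])
        else:  # assistant
            parts.append("### Response:\n" + message["content"])
    parts.append("### Response:")  # cue the model to reply
    return "\n\n".join(parts)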

Alright, master is capable of handling multiple models and mimics the OpenAI API. However, chatbot-ui doesn't seem to completely follow the OpenAI spec, so I'm struggling to make it work. https://github.com/Niek/chatgpt-web, on the other hand, follows the spec closely and seems to work just fine.

I've just contacted upstream, as it would be super-nice if they could list the available models returned by the API instead of filtering them.
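
For reference, this is roughly what the lookup looks like from the client's side, assuming the endpoint mirrors OpenAI's GET /v1/models response shape (port taken from the compose file above):

import requests

# List whatever models the local instance exposes; OpenAI's models
# endpoint returns {"data": [{"id": ...}, ...]}, so we assume the same.
response = requests.get("http://localhost:9000/v1/models")
response.raise_for_status()
for model in response.json()["data"]:
    print(model["id"])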

I can now see all the models when using the master branch. Thank you, nice work!

Couple of suggestions:
1 - Open PRs, so when a new feature is introduced, people can go to the PR and see what the changes were, along with any comments related to the issue it resolves.
2 - Update your documentation. Your README.md doesn't show how this implementation works.

Feature Request:
1 - Is it possible to add a drop-down in the index.html to show the list of models?
2 - Have it switch models when a different one is selected in the drop-down?

Good points! Lately I've had only a few cycles to dedicate to this and less time here, hence I rushed it a bit. Sorry about that!

That sounds like a great addition! Could you please file separate issues, so we can track them and tackle each one separately?

My plan indeed was to close this issue once I've got the documentation and a tag with all the new features in!

If you want to test out chatgpt-web with your CLI/API:

https://github.com/mkellerman/chatgpt-web/tree/feature/add-llama-cli

That's super-nice! Loving it! You are so fast :)

I've opened Niek/chatgpt-web#105 upstream to track it; maybe it's worth mentioning your branch there directly?

Let me close this issue: I've just tagged v0.7 and added the docs for it (https://github.com/go-skynet/llama-cli#supported-openai-api-endpoints), so the original request is addressed. We can track the webui API in a separate issue or directly upstream - I'm fine with both!