wandb / openui

OpenUI lets you describe UI using your imagination, then see it rendered live.

Home Page: https://openui.fly.dev


Using OpenUI without Ollama and Llava?

hukarere opened this issue · comments

Hello,

After reading the README file, I have the following questions:

  1. Is it possible to run OpenUI locally without Ollama?
  2. Is Llava or any other model with image processing capabilities required for running OpenUI?
  3. What OpenAI or Groq image processing model can be used instead of Ollama/Llava, and how?

Thanks in advance.

Hey @hukarere, by default OpenUI will use gpt-4o from OpenAI when an image is uploaded. If you don't upload an image and instead just chat with the model, we can use any language model. Groq currently only supports language models, so when choosing one of them you won't be able to upload an image.

Also, to answer your questions directly:

  1. Yes, just set an `OPENAI_API_KEY` and/or a `GROQ_API_KEY`.
  2. No, image models enable the ability to upload a screenshot but aren't required. The tool works fine with only text.
  3. Answered in the above comment: `gpt-4o` when using `gpt-3.5-turbo`; otherwise, the model selected in the "settings" menu, if it supports images. For Groq, there's no image support.
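For reference, the answer to question 1 boils down to exporting one or both keys before starting the server. This is a minimal sketch: the key values are placeholders, and the `python -m openui` invocation assumes the backend is started as described in the project README.

```shell
# Placeholder credentials; substitute your real keys.
export OPENAI_API_KEY="sk-..."   # enables OpenAI models (gpt-4o handles uploaded images)
export GROQ_API_KEY="gsk-..."    # optional: enables Groq language models (text only)

# Start the OpenUI backend without Ollama running (per the project README).
python -m openui
```

With neither Ollama nor LLaVA installed, OpenUI then serves requests through whichever provider's key is set.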

Hello @vanpelt,

Thanks for your explanations!

Perhaps it would make sense to add this detail about `gpt-4o` being used for image processing by default to the README? Currently it only mentions LLaVA, so I had mistakenly concluded that LLaVA was the only option...

@vanpelt,

> 3. Answered in the above comment. `gpt-4o` when using `gpt-3.5-turbo`, otherwise the selected model from the "settings" menu if it supports images. For Groq, there's no image support.

Another question: if I understood you correctly, `gpt-4o` is only used for uploaded images when `gpt-3.5-turbo` is selected for text. What if I would like to use a Groq model for text, but still use `gpt-4o` for uploaded images? Is that possible?