ItzCrazyKns / Perplexica

Perplexica is an AI-powered search engine. It is an open-source alternative to Perplexity AI.

Add the LLM API provider config details right in the UI, instead of or in addition to .toml or .env

arunkumarakvr opened this issue · comments

commented

Describe the bug
A clear and concise description of what the bug is.

Please add the LLM API provider config details right in the UI, say under Settings, instead of or in addition to the .toml or .env files, and make it work seamlessly.
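For reference, the file-based setup this refers to looks roughly like the sketch below. This is only an illustration of the pattern, not the project's exact schema; the section and key names are assumptions, so check sample.config.toml in the repo for the real format.

```toml
# Rough sketch of a file-based provider config (key names are assumptions,
# not necessarily what Perplexica actually reads -- see sample.config.toml).

[GENERAL]
PORT = 3001                         # port the backend listens on

[API_KEYS]
OPENAI = "sk-..."                   # OpenAI API key
GROQ = ""                           # leave empty if the provider is unused

[API_ENDPOINTS]
OLLAMA = "http://localhost:11434"   # Ollama base URL for local models
SEARXNG = "http://localhost:32768"  # SearxNG instance used for web search
```

The request here is that these same values be editable from a Settings page in the UI, so users wouldn't have to edit files and redeploy just to switch providers.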

To Reproduce
Steps to reproduce the behavior:

  1. Go to '...'
  2. Click on '....'
  3. Scroll down to '....'
  4. See error

Expected behavior
A clear and concise description of what you expected to happen.

Screenshots
If applicable, add screenshots to help explain your problem.

Additional context
Add any other context about the problem here.

Please provide more context

commented

Anyone who doesn't have access to the OpenAI API and can't serve Ollama still can't deploy this locally and use it properly. That's where and why this wrapper becomes useless, since none of the LLM APIs are really usable. Even when one tries the existing options, nothing works yet. You may know an alternative solution to this, I guess. If so, please fix it.

commented

In conclusion, this wrapper is not working properly: neither the OpenAI API nor Ollama becomes usable, nor do the popular OpenAI-compatible APIs. Is that due to an incorrect config, or what?
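For what it's worth, what an "OpenAI-compatible" provider needs boils down to three values, wherever they end up being configured. The keys in the sketch below are invented for illustration only and are not something the project currently documents:

```toml
# Hypothetical config section, invented for illustration only.
# An OpenAI-compatible server (llama.cpp, LM Studio, vLLM, ...) needs
# three pieces of information, whether set in a .toml file or in the UI:

[CUSTOM_OPENAI]
BASE_URL = "http://localhost:8080/v1"   # any endpoint that speaks the OpenAI API
API_KEY = "not-needed-for-local"        # many local servers accept any string
MODEL = "llama-3-8b-instruct"           # model name the server exposes
```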

Please open only one issue; we can continue in #107.