Working with ollama or llama.cpp
region23 opened this issue
Hello @region23,
Yes, it is possible to use a local model. What you'd need to do is:
- Serve the model locally at some endpoint
- Change the extension's endpoint setting so that it points to your local endpoint (see the sketch after this list)
- Make sure to update the prompt template so it matches your model
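For illustration, here is a minimal sketch of what "serve the model locally and point the client at it" can look like. It assumes a llama.cpp / text-generation-inference style server listening on `http://localhost:8080` and an HF-Inference-API-style request body; the URL, port, prompt-template tokens, and field names are assumptions and depend on your setup:

```python
import requests

# Hypothetical local endpoint -- adjust host/port/path to wherever your server listens.
ENDPOINT = "http://localhost:8080/generate"

# HF Inference API / TGI style body (assumption: your local server accepts this shape).
payload = {
    # Prompt built from your model's template; StarCoder-style FIM tokens shown as an example.
    "inputs": "<fim_prefix>def add(a, b):\n<fim_suffix>\n<fim_middle>",
    "parameters": {"max_new_tokens": 60, "temperature": 0.2},
}

resp = requests.post(ENDPOINT, json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())  # e.g. {"generated_text": "..."} or [{"generated_text": "..."}]
```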
For now, ollama's API is not supported; it's on the to-do list though!
I've also created an issue for llama.cpp: huggingface/llm-ls#28
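To illustrate why an adapter is needed: ollama exposes its own request/response shape (POST `/api/generate` with a model name and prompt, returning the generated text in a `response` field), which differs from the HF-style body sketched above. Below is a rough, hypothetical translation proxy in Python, not the actual llm-ls adapter; the model name, option fields, and ports are assumptions:

```python
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

OLLAMA_URL = "http://localhost:11434/api/generate"  # ollama's default local endpoint
MODEL = "codellama:7b-code"  # hypothetical model name, pick your own

@app.route("/generate", methods=["POST"])
def generate():
    body = request.get_json(force=True)
    params = body.get("parameters", {})
    # Translate the HF/TGI-style request into an ollama generate request.
    ollama_req = {
        "model": MODEL,
        "prompt": body["inputs"],
        "stream": False,  # ask ollama for a single JSON response instead of a stream
        "options": {
            "num_predict": params.get("max_new_tokens", 60),
            "temperature": params.get("temperature", 0.2),
        },
    }
    resp = requests.post(OLLAMA_URL, json=ollama_req, timeout=120)
    resp.raise_for_status()
    # Map ollama's {"response": "..."} back to the shape the client expects.
    return jsonify({"generated_text": resp.json().get("response", "")})

if __name__ == "__main__":
    app.run(port=8080)
```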
This issue is stale because it has been open for 30 days with no activity.
+1
Is there a timeline for when "feat: Add adaptors for ollama and openai" (#117) might be merged?
I'm finishing the last touches on fixes in llm-ls and testing that everything works as expected for 0.5.0, and then we should be good to go for a release. I'd say either I find some time this weekend or next week :)