alexrozanski / LlamaChat

Chat with your favourite LLaMA models in a native macOS app

Home Page: https://llamachat.app

Add ability to configure runtime and model hyperparameters

zakkor opened this issue

zakkor commented:

Some essential ones that come to mind:

  • -t {n} to specify the number of threads. From what I can tell, LlamaChat uses 3 threads by default, but on my machine 8 threads gives the best performance.
  • --temp and the rest of the useful sampling params (a rough sketch of what configuring these could look like follows below)
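
For illustration, here is a minimal Swift sketch of what such a configuration could look like inside a macOS app. The type and property names are hypothetical, not LlamaChat's actual API; the -t and --temp mappings refer to the llama.cpp flags mentioned above, and the default values are only illustrative.

```swift
import Foundation

// Hypothetical configuration type — a sketch of the requested feature,
// not LlamaChat's real implementation.
struct ModelHyperparameters {
    // Number of inference threads (llama.cpp's -t flag). Defaulting to the
    // machine's active core count is a common heuristic; the issue notes that
    // 8 threads performs best on some machines.
    var threadCount: Int = ProcessInfo.processInfo.activeProcessorCount

    // Sampling temperature (llama.cpp's --temp flag).
    var temperature: Double = 0.8

    // Other commonly exposed sampling/runtime parameters (illustrative defaults).
    var topK: Int = 40
    var topP: Double = 0.95
    var contextLength: Int = 512
}

// Example: a per-chat override a user might pick in a settings sheet.
let params = ModelHyperparameters(threadCount: 8, temperature: 0.7)
print("Running with \(params.threadCount) threads at temperature \(params.temperature)")
```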

@zakkor Yep this is definitely coming ☺️

Added in v1.2.0