withcatai/node-llama-cpp Issues
Issue with Webpack Compilation · Closed · 9 comments
Support file based prompt caching · Updated · 9 comments
Loading Llama3 in Electron · Closed · 2 comments
Support for Llama 3 · Closed · 1 comment
Function call error · Closed · 2 comments
kv slot none · Closed · 2 comments
feat: max GPU layers param · Closed · 1 comment
Inconsistent tokenization/encoding · Closed · 3 comments
Fail to run in docker image · Closed · 6 comments
Please add AMD ROCm support · Closed · 8 comments
Grammars folder not found · Closed · 4 comments
feat: minP support · Closed · 1 comment
ESM support? · Closed · 3 comments
feat: hide llama.cpp logs · Closed · 2 comments
Bun support · Closed · 1 comment
CLI does not work with Bun · Closed · 1 comment
feat: get embedding for text · Closed · 2 comments
Could not find a KV slot · Closed · 6 comments
feat: automatic batching · Updated · 8 comments
support commonjs · Closed · 4 comments
Not worlking as intended. · Closed · 3 comments
Error building with Cuda · Closed · 1 comment
Set repeat-penalty request · Closed · 1 comment