thomasantony/llamacpp-python
Python bindings for llama.cpp
Stargazers: 198 · Watchers: 10 · Issues: 21 · Forks: 26
thomasantony/llamacpp-python Issues
- KeyError: 'transformer.h.0.attn.c_attn.bias' (Closed, 3 months ago)
- Update llamacpp (Updated 10 months ago)
- Not work (Updated a year ago, 1 comment)
- batch inference? (Updated a year ago)
- Update for llama.cpp quantization changes (Updated a year ago)
- Better support for editable install (Closed, a year ago, 3 comments)
- delete (Closed, a year ago)
- unable to convert (Updated a year ago)
- Are you even tested with code? (Closed, a year ago, 2 comments)
- Clean up llamacpp-chat interface (Updated a year ago, 1 comment)
- how to set customized tokenizer? (Updated a year ago)
- Updating to latest llama.cpp version (Closed, a year ago, 1 comment)
- AttributeError: module 'llamacpp' has no attribute 'llama_model_quantize' (Closed, a year ago, 2 comments)
- ImportError: DLL load failed while importing _pyllamacpp: A DLL initialization routine failed. (Closed, a year ago, 1 comment)
- Llama Inference not working (Closed, a year ago, 3 comments)
- Segmentation fault for generations larger than ~512 tokens (Updated a year ago, 4 comments)
- Better command-line argument parsing for cli.py and chat.py (Updated a year ago)
- It it possible to reset the model context for use as a REST API? (Closed, a year ago, 1 comment)
- problem with llama-convert (Closed, a year ago, 4 comments)
- Adding new params (Closed, a year ago, 3 comments)
- Script crash with no output (Updated a year ago, 1 comment)