0cc4m/KoboldAI
Stargazers: 151 | Watchers: 4 | Issues: 47 | Forks: 30
0cc4m/KoboldAI Issues
- ./play-rocm.sh gptq error fedora 39 (Closed 7 months ago, 1 comment)
- ImportError: cannot import name 'url_quote' from 'werkzeug.urls' (Updated a year ago)
- Attempting to pass model params to ExLlama on startup causes an AttributeError (Updated a year ago, 2 comments)
- [Regression] Can't participate in horde with `exllama` branch, stopping sharing breaks processing (Updated a year ago)
- Support for MythoMax-L2-13B-GPTQ (Updated a year ago)
- How to load multiple graphics cards (Updated a year ago)
- Exllama in KoboldAI emits a spurious space at the beginning of generations that end with a stop token. (Closed a year ago, 2 comments)
- Significant Speed Regression on P40 compared to United (Updated a year ago)
- Slow speed for some models. (Updated a year ago, 4 comments)
- "expected scalar type BFloat16 but found Half" (Updated a year ago)
- i keep getting a merge conflict when trying to git pull from the new updated 4bit-plugin dev branch (Updated a year ago, 1 comment)
- when will the new update kobold just got for llama-2 be pushed here? (Updated a year ago, 1 comment)
- cant load models 4bit (Closed a year ago, 4 comments)
- Can't load 4bit models on Rocm (Updated a year ago, 4 comments)
- WinError 127 on nvfuser_codegen.dll (Updated a year ago)
- Request for T5 gptq model support. (Updated a year ago, 1 comment)
- please add code for landmark attention to 4bit-plugin (Updated a year ago)
- 1 token generation in story mode (Updated a year ago, 2 comments)
- i cannot load any ai models and i keep getting this error no matter what i do. this happened after i did "git pull" command from this repository (Updated a year ago, 1 comment)
- Hey, I'm not sure what's wrong, but it does automatically delete a lot of output at the end of each generation. (Closed a year ago, 4 comments)
- ModuleNotFoundError: No module named 'gptq.bigcode' (Closed a year ago, 1 comment)
- anaconda3/lib/python3.9/runpy.py:127: RuntimeWarning: 'gptq.bigcode' found in sys.modules after import of package 'gptq', but prior to execution of 'gptq.bigcode'; this may result in unpredictable behaviour (Updated a year ago)
- Interface not loading... WSL/Windows (Updated a year ago)
- ModuleNotFoundError when starting "play.bat" (Closed a year ago, 4 comments)
- how i can uninstall (Updated a year ago, 1 comment)
- Can't split 4bit model between gpu/cpu, and can't run only on cpu (Closed a year ago, 1 comment)
- install_requirements error libmamba (Updated a year ago)
- Cannot find the path specified & No module named 'hf_bleeding_edge' when trying to start. (Closed a year ago, 20 comments)
- Failed to load 4bit-128g WizardLM 7B (Updated a year ago, 3 comments)
- Loading a model via command line (--model) does not work in 0cc4m Branch (Closed a year ago, 5 comments)
- AMD install out of date? (Updated a year ago, 6 comments)
- Error involving bfloat 16 on generation with MPT 7B 4-bit_128g (Updated a year ago, 2 comments)
- Issue with loading 30b model which was previously good (Updated a year ago, 1 comment)
- Error using previously good model. (Closed a year ago, 2 comments)
- NameError: name 'os' is not defined after last commit (Closed a year ago, 2 comments)
- What is the best way to update? (Closed a year ago, 2 comments)
- ImportError when running "play.sh" (Closed a year ago, 1 comment)
- No 4-bit toggle (Closed a year ago, 2 comments)
- Can't Generate With 4bit Quantized Model (Closed a year ago, 13 comments)
- i got an other error (Closed a year ago, 1 comment)
- Error on start (Closed a year ago, 2 comments)
- pt not found (Closed a year ago, 2 comments)
- Can't Find 4Bit Model (Closed a year ago, 3 comments)
- Flask Error (Closed a year ago, 1 comment)
- ERROR: quant_cuda-0.0.0-cp38-cp38-win_amd64.whl is not a supported wheel on this platform. (Closed a year ago, 2 comments)
- src/sentencepiece_processor.cc error when loading GPT4X model (Closed a year ago, 1 comment)