lxe/simple-llm-finetuner — Simple UI for LLM Model Finetuning
Stargazers: 2039 · Watchers: 20 · Issues: 48 · Forks: 134
lxe/simple-llm-finetuner Issues
- How should I prepare the dataset for generative question answering on private documents? (updated 3 months ago, 47 comments)
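For the dataset question above, a common starting point is to extract question/answer pairs from the documents and render them as plain-text samples. The sketch below assumes the tool accepts raw text with one sample per blank-line-separated block (check the repo's README for the exact separator it expects); the `Question:`/`Answer:` labels are an illustrative convention, not something the repo mandates.

```python
# Sketch: turn (question, answer) pairs extracted from private documents
# into blank-line-separated plain-text training samples.
# Assumption (verify against the repo README): samples are separated by
# blank lines, and prompt/completion live together in one sample.

def build_training_text(qa_pairs):
    """Render question/answer pairs as blank-line-separated samples."""
    samples = []
    for question, answer in qa_pairs:
        # One sample = the prompt followed by the expected completion.
        samples.append(f"Question: {question}\nAnswer: {answer}")
    return "\n\n".join(samples)

pairs = [
    ("What is the refund window?", "30 days from delivery."),
    ("Who approves expense reports?", "The department manager."),
]
text = build_training_text(pairs)
print(text)
```

Keeping each sample short and self-contained matters more than the exact labels: the model learns the mapping from the prompt shape you train on, so inference prompts should follow the same template.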
- Will this work with quantized GGUF files? (updated 7 months ago)
- [Request] Retrain adapter from checkpoint? (updated 7 months ago)
- Performance after finetuning (updated 9 months ago, 3 comments)
- About llama-2-70B fine-tuning (updated 9 months ago)
- Getting the repo id error from the web interface (updated 9 months ago)
- How to use CPU instead of GPU (updated 10 months ago, 2 comments)
- AMD GPU compatibility or CPU (updated 10 months ago)
- RuntimeError: expected scalar type Half but found Float (updated 10 months ago, 3 comments)
- M1/M2 Metal support? (updated 10 months ago)
- [Request] Mac ARM support (updated a year ago)
- RuntimeError: unscale_() has already been called on this optimizer since the last update() (closed a year ago, 3 comments)
- [Request] QLoRA support (updated a year ago)
- In trainer.py, ignoring the last token is not suitable for all situations (updated a year ago)
- `LLaMATokenizer` vs `LlamaTokenizer` class names (updated a year ago, 5 comments)
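The `LLaMATokenizer` vs `LlamaTokenizer` issue stems from transformers renaming the class: older checkpoints (e.g. the decapoda-research conversions) ship a `tokenizer_config.json` that still says `"tokenizer_class": "LLaMATokenizer"`, which newer transformers releases refuse to load. One common workaround, sketched below with stdlib-only code, is to patch that field in the downloaded checkpoint; the demo file here stands in for a real checkpoint directory.

```python
# Sketch: patch a checkpoint's tokenizer_config.json whose "tokenizer_class"
# still reads "LLaMATokenizer" (the pre-rename casing) so newer transformers
# releases, which expect "LlamaTokenizer", can load it.
import json
import tempfile
from pathlib import Path

def patch_tokenizer_config(config_path):
    """Rewrite the old class name in place; return the resulting class name."""
    path = Path(config_path)
    config = json.loads(path.read_text())
    if config.get("tokenizer_class") == "LLaMATokenizer":
        config["tokenizer_class"] = "LlamaTokenizer"
        path.write_text(json.dumps(config, indent=2))
    return config["tokenizer_class"]

# Demo on a throwaway file standing in for a real checkpoint directory.
with tempfile.TemporaryDirectory() as d:
    cfg = Path(d) / "tokenizer_config.json"
    cfg.write_text(json.dumps({"tokenizer_class": "LLaMATokenizer"}))
    print(patch_tokenizer_config(cfg))  # → LlamaTokenizer
```

Upgrading to a checkpoint published with the new casing avoids the edit entirely; patching is only needed when you are stuck with an older download.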
- Question: Is fine-tuning suitable for factual answers from custom data, or is it better to use a vector database and include only the relevant chunk in the prompt? (closed a year ago, 2 comments)
- Multi-GPU running (updated a year ago)
- Issue training in Colab (updated a year ago, 7 comments)
- Getting OOM (updated a year ago, 2 comments)
- How do I merge the trained LoRA and Llama 7B weights? (updated a year ago, 2 comments)
- Error: Adapter lora/decapoda-research_llama-{ADAPTER_NAME} not found (closed a year ago)
- Suggestion to improve UX (updated a year ago, 3 comments)
- "The tokenizer class you load from this checkpoint is 'LLaMATokenizer'." (closed a year ago, 3 comments)
- Verbose function to find out what leads to a crash during training? (closed a year ago, 1 comment)
- How can I use the finetuned model with text-generation-webui or KoboldAI? (updated a year ago, 4 comments)
- Question: could the trained model be used with alpaca.cpp? (updated a year ago, 2 comments)
- Slow generation speed: around 10 minutes / loading forever on an RTX 3090 with 64 GB RAM (updated a year ago, 3 comments)
- AttributeError: type object 'Dataset' has no attribute 'from_list' (updated a year ago, 3 comments)
- "Error" in training: AttributeError: 'CastOutputToFloat' object has no attribute 'weight'; RuntimeError: Only Tensors of floating point and complex dtype can require gradients (updated a year ago, 5 comments)
- Training using long stories instead of question/response pairs (updated a year ago, 3 comments)
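For training on long stories rather than question/response pairs, the usual approach is plain causal-LM finetuning: slice the story into fixed-length, overlapping chunks so no passage is lost at a chunk boundary. The sketch below uses whitespace-split words as a stand-in for real tokenizer tokens (an assumption for brevity — real chunking should count tokens from the model's tokenizer).

```python
# Sketch: prepare a long story for causal-LM finetuning by slicing it into
# fixed-length, overlapping chunks instead of question/response pairs.
# Whitespace "tokens" stand in for a real tokenizer here.

def chunk_text(text, chunk_tokens=256, overlap=32):
    """Split text into chunks of chunk_tokens words, overlapping by overlap."""
    words = text.split()
    step = chunk_tokens - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_tokens]))
        if start + chunk_tokens >= len(words):
            break  # last chunk reached the end of the story
    return chunks

story = "word " * 600  # a 600-"token" story
chunks = chunk_text(story, chunk_tokens=256, overlap=32)
print(len(chunks))  # → 3
```

The overlap gives the model context across chunk boundaries; pick `chunk_tokens` at or below the sequence length you train with.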
- How to finetune with 'system information' (updated a year ago, 1 comment)
- Attempting to use 13B in the simple tuner (updated a year ago, 2 comments)
- Not a problem, but people should know (updated a year ago, 2 comments)
- Error during training: RuntimeError: mat1 and mat2 shapes cannot be multiplied (511x2 and 3x4096) (updated a year ago, 2 comments)
- Traceback during inference (updated a year ago, 8 comments)
- Examples to get started with (updated a year ago, 4 comments)
- What does the finetuning output look like? (closed a year ago, 1 comment)
- Question: native Windows support (updated a year ago, 2 comments)
- Can an Nvidia 3090 with 24 GB of video memory handle finetuning? (updated a year ago, 4 comments)
- Inference doesn't work after training (closed a year ago, 2 comments)
- Finetuning in an unsupported language (updated a year ago, 2 comments)
- Host on Hugging Face Spaces (closed a year ago, 4 comments)
- Inference output text keeps running on (updated a year ago, 1 comment)
- Inference works just once (closed a year ago, 12 comments)
- (WSL2) No GPU / CUDA detected (closed a year ago, 6 comments)
- Collecting info on memory requirements (updated a year ago, 1 comment)
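For collecting memory requirements, a back-of-the-envelope estimate is a useful sanity check before benchmarking: weight memory is parameter count times bytes per weight, plus overhead for activations, LoRA parameters, and optimizer state. The overhead factor below is a rough assumption, not a measured constant — real usage varies with sequence length, batch size, and optimizer choice.

```python
# Sketch: rough VRAM estimate for LoRA finetuning a quantized base model.
# The 1.35x overhead factor is an assumption covering activations, LoRA
# weights, and optimizer state; measure real usage to refine it.

def estimate_vram_gb(n_params_billion, bits_per_weight=8, overhead=1.35):
    """Estimate VRAM in GiB: weight storage scaled by an overhead factor."""
    weight_gb = n_params_billion * 1e9 * (bits_per_weight / 8) / (1024 ** 3)
    return round(weight_gb * overhead, 1)

# A 7B model loaded in 8-bit: weights alone are ~6.5 GiB before overhead.
print(estimate_vram_gb(7))                      # → 8.8
print(estimate_vram_gb(7, bits_per_weight=16))  # → 17.6
```

By this estimate a 24 GB card comfortably fits a 7B model in 8-bit, which matches the kind of numbers reported in the 3090 issue above.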
- Is CUDA 12.0 supported? (updated a year ago, 1 comment)
- Where are the downloaded ".bin" files for the llama model stored on the disk? (closed a year ago, 2 comments)