EricLBuehler/candle-lora
Low rank adaptation (LoRA) for Candle.
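For context, LoRA augments a frozen weight matrix W with a trainable low-rank update, so the forward pass computes y = W x + (alpha / r) * B (A x), where A is r x d_in and B is d_out x r. The following is a minimal, dependency-free sketch of that computation; it illustrates the technique only and is not candle-lora's actual API.

```rust
/// Multiply a row-major `rows x cols` matrix by a vector.
fn matvec(m: &[f32], rows: usize, cols: usize, x: &[f32]) -> Vec<f32> {
    (0..rows)
        .map(|i| (0..cols).map(|j| m[i * cols + j] * x[j]).sum())
        .collect()
}

/// LoRA forward pass: y = W x + (alpha / r) * B (A x).
/// `w` is d_out x d_in (frozen), `a` is r x d_in, `b` is d_out x r.
fn lora_forward(
    w: &[f32], a: &[f32], b: &[f32],
    d_out: usize, d_in: usize, r: usize,
    alpha: f32, x: &[f32],
) -> Vec<f32> {
    let base = matvec(w, d_out, d_in, x);
    let ax = matvec(a, r, d_in, x);          // A x : length r
    let bax = matvec(b, d_out, r, &ax);      // B (A x) : length d_out
    let scale = alpha / r as f32;
    base.iter().zip(bax.iter()).map(|(y, d)| y + scale * d).collect()
}

fn main() {
    // 2x2 identity W, rank-1 adapter with A = [1, 0], B = [1, 1]^T, alpha = 1.
    let w = [1.0, 0.0, 0.0, 1.0];
    let a = [1.0, 0.0];
    let b = [1.0, 1.0];
    let y = lora_forward(&w, &a, &b, 2, 2, 1, 1.0, &[2.0, 3.0]);
    // base = [2, 3]; A x = [2]; B (A x) = [2, 2]; y = [4, 5]
    assert_eq!(y, vec![4.0, 5.0]);
    println!("{:?}", y);
}
```

Because A and B are small (rank r is typically 4-64), only a tiny fraction of the parameters are trained.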
Stargazers: 110 | Watchers: 7 | Issues: 14 | Forks: 8
EricLBuehler/candle-lora Issues
- Could we have a written walkthrough of finetuning llama/mistral with this? (Updated 2 months ago, 1 comment)
- In the Llama model, only the embedding layer is converted to a LoRA layer. (Updated 2 months ago, 2 comments)
- Can LoRA also be implemented with Stable Diffusion? (Updated 3 months ago, 7 comments)
- Updated candle-core, candle-nn [0.5.0] release breaks installation of candle-lora and candle-lora-macro dependencies (Closed 3 months ago)
- Is there any way to save a LoRA-converted model? (Closed 3 months ago, 5 comments)
- How to use a candle_lora model with a Rust axum web server (Closed 4 months ago, 4 comments)
- error[E0277]: expected a `Fn<(&candle_core::Tensor,)>` closure, found `BatchNorm` (Closed 5 months ago)
- Model Merging (Closed 5 months ago, 6 comments)
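The model merging asked about above typically means folding the adapter into the base weight, W' = W + (alpha / r) * B A, so inference afterwards needs no extra matrix multiplies. A minimal sketch under that assumption (illustrative only, not candle-lora's merge API):

```rust
/// Fold a LoRA adapter into the base weight: W' = W + (alpha / r) * B A.
/// All matrices are row-major: `w` is d_out x d_in, `a` is r x d_in, `b` is d_out x r.
fn merge_lora(
    w: &[f32], a: &[f32], b: &[f32],
    d_out: usize, d_in: usize, r: usize, alpha: f32,
) -> Vec<f32> {
    let scale = alpha / r as f32;
    let mut out = w.to_vec();
    for i in 0..d_out {
        for j in 0..d_in {
            // (B A)[i][j] = sum_k B[i][k] * A[k][j]
            let mut acc = 0.0;
            for k in 0..r {
                acc += b[i * r + k] * a[k * d_in + j];
            }
            out[i * d_in + j] += scale * acc;
        }
    }
    out
}

fn main() {
    // 2x2 identity W merged with the rank-1 adapter A = [1, 0], B = [1, 1]^T, alpha = 1.
    // B A = [[1, 0], [1, 0]], so W' = [[2, 0], [1, 1]].
    let merged = merge_lora(&[1.0, 0.0, 0.0, 1.0], &[1.0, 0.0], &[1.0, 1.0], 2, 2, 1, 1.0);
    assert_eq!(merged, vec![2.0, 0.0, 1.0, 1.0]);
    println!("{:?}", merged);
}
```

After merging, applying W' to an input gives the same result as running the base layer plus the adapter path, which is why merging is attractive for deployment.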
- Any example for llama_lora training? (Closed 5 months ago, 2 comments)
- replace_layer_fields and AutoLoraConvert not working as expected (Closed 7 months ago, 1 comment)
- QA-LoRA Implementation and Review (Closed 9 months ago, 3 comments)
- Add more LoRA transformers (Closed 8 months ago)
- Examples for Llama model architecture (Closed 10 months ago, 5 comments)
- Question: Could we use the same mechanism for Quantization? (Closed 10 months ago, 4 comments)