EricLBuehler/candle-lora
Low rank adaptation (LoRA) for Candle.
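For context on what the crate does: LoRA freezes a layer's base weight W and learns a low-rank update, so the forward pass becomes y = W x + (alpha / r) · B (A x), where A is r × d_in and B is d_out × r with small rank r. The following is a minimal, dependency-free Rust sketch of that math only; it is not candle-lora's actual API (the crate wraps candle layers via traits and macros), and all names here are illustrative:

```rust
// Minimal LoRA forward pass: y = W x + (alpha / r) * B (A x).
// Illustrative sketch only; candle-lora operates on candle tensors, not Vecs.

fn matvec(m: &[Vec<f64>], x: &[f64]) -> Vec<f64> {
    m.iter()
        .map(|row| row.iter().zip(x).map(|(w, xi)| w * xi).sum())
        .collect()
}

/// A linear layer with a frozen base weight plus a trainable low-rank adapter.
struct LoraLinear {
    w: Vec<Vec<f64>>, // frozen base weight, d_out x d_in
    a: Vec<Vec<f64>>, // trainable down-projection, r x d_in
    b: Vec<Vec<f64>>, // trainable up-projection, d_out x r
    alpha: f64,       // LoRA scaling hyperparameter
}

impl LoraLinear {
    fn forward(&self, x: &[f64]) -> Vec<f64> {
        let r = self.a.len() as f64; // rank
        let base = matvec(&self.w, x);
        let delta = matvec(&self.b, &matvec(&self.a, x));
        base.iter()
            .zip(&delta)
            .map(|(y, d)| y + (self.alpha / r) * d)
            .collect()
    }
}

fn main() {
    let layer = LoraLinear {
        w: vec![vec![1.0, 0.0], vec![0.0, 1.0]], // identity base weight
        a: vec![vec![1.0, 1.0]],                 // rank r = 1
        b: vec![vec![0.5], vec![0.5]],
        alpha: 1.0,
    };
    // base = [2, 4]; A x = [6]; B (A x) = [3, 3]; scaled by alpha/r = 1
    let y = layer.forward(&[2.0, 4.0]);
    println!("{:?}", y); // [5.0, 7.0]
}
```

Because only A and B are trained, the number of trainable parameters drops from d_out · d_in to r · (d_out + d_in), which is the whole point of low-rank adaptation.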
Stargazers: 116, Watchers: 7, Issues: 15, Forks: 8
EricLBuehler/candle-lora Issues
- Installation does not work with newer version of candle (0.6.0) (updated 16 days ago)
- Can LoRA also be implemented with Stable Diffusion? (updated 23 days ago; 8 comments)
- Could we have a written walkthrough of finetuning Llama/Mistral with this? (updated 3 months ago; 1 comment)
- In the Llama model, only the embedding layer is converted to a LoRA layer (updated 3 months ago; 2 comments)
- candle-core, candle-nn 0.5.0 release breaks installation of the candle-lora and candle-lora-macro dependencies (closed 3 months ago)
- Is there any way to save a LoRA-converted model? (closed 4 months ago; 5 comments)
- How to use a candle_lora model with a Rust axum web server (closed 4 months ago; 4 comments)
- error[E0277]: expected a `Fn<(&candle_core::Tensor,)>` closure, found `BatchNorm` (closed 6 months ago)
- Model merging (closed 6 months ago; 6 comments)
- Any example for llama_lora training? (closed 6 months ago; 2 comments)
- replace_layer_fields and AutoLoraConvert not working as expected (closed 8 months ago; 1 comment)
- QA-LoRA implementation and review (closed 10 months ago; 3 comments)
- Add more LoRA transformers (closed 9 months ago)
- Examples for the Llama model architecture (closed a year ago; 5 comments)
- Question: could we use the same mechanism for quantization? (closed a year ago; 4 comments)