yuhuixu1993/qa-lora
Official PyTorch implementation of QA-LoRA
Stargazers: 107 | Watchers: 4 | Issues: 32 | Forks: 12
yuhuixu1993/qa-lora Issues
- Merging problem (updated 2 months ago, 6 comments)
- Is that right? Could you tell me how to fix this error? (updated 3 months ago, 2 comments)
- Thanks for your helpful project; could you give us the model checkpoint shown in the figure? (updated 3 months ago, 2 comments)
- Hi, can you tell me how to fix the error? (closed 3 months ago)
- ValueError: Target modules [] not found in the base model. (updated 4 months ago, 2 comments)
- AWQ+LoRA available? (updated 4 months ago)
- The difference between QA-LoRA and simple GPTQ+LoRA in training (closed 4 months ago)
- After merge.py, is the model int4 data type? (closed 4 months ago, 2 comments)
- Encountered a data-type problem in training (closed 4 months ago, 3 comments)
- Merge error: scales and qzeros dimension mismatch (updated 5 months ago, 4 comments)
- Cannot reproduce the MMLU accuracy claimed in the paper; could you release the script? (updated 5 months ago, 1 comment)
- Is Llama 2 supported? (closed 5 months ago, 6 comments)
- Request for replication script for LLaMA 7B on MMLU (closed 5 months ago, 2 comments)
- Is 3-bit not supported by AutoGPTQ with Triton? (updated 5 months ago)
- RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:3! (when checking argument for argument mat2 in method wrapper_CUDA_mm) (updated 5 months ago, 1 comment)
- Discrepancy in reproduced results for LLaMA tuning on the Alpaca dataset (closed 5 months ago)
- The loss does not converge when fine-tuning Llama2-7b-GPTQ on a 4090 (updated 7 months ago, 11 comments)
- Encountered a bug in "auto_gptq/modeling/_base.py" (closed 7 months ago, 2 comments)
- Question about the derivation of merge_with_quantization in the paper (updated 7 months ago, 1 comment)
- Question about the equation, algorithm, and experimental results (closed 7 months ago, 3 comments)
- Is fine-tuning vision foundation models like OWL-ViT and Grounding DINO possible? Is any reference available? (updated 8 months ago)
- Training with multiple GPUs: how to increase the batch size, and how to evaluate? (updated 8 months ago, 3 comments)
- How to set the batch size (updated 8 months ago, 1 comment)
- RuntimeError: self and mat2 must have the same dtype (updated 8 months ago, 6 comments)
- quantize_config.json file (updated 8 months ago, 1 comment)
- How to support the FLAN v2 dataset (updated 9 months ago)
- Adapter dimensions question (updated 10 months ago, 4 comments)
- Can this be merged with a normal 16-bit model? (closed 10 months ago, 1 comment)
- (Enhancement) Suggestion to incorporate GPTQ adapter merging into the axolotl library (updated 10 months ago, 1 comment)
- Is it possible to use an open-source model that Hugging Face has quantized? (updated 10 months ago, 1 comment)
- The paper uses sum pooling but the script uses average pooling (updated a year ago, 1 comment)
- What is the expected time for the release of the code? (closed a year ago, 1 comment)