hiyouga/FastEdit
🩹 Editing large language models within 10 seconds ⚡
Stargazers: 1209 · Watchers: 14 · Issues: 26 · Forks: 82
hiyouga/FastEdit Issues
- It seems like there is an ignored value in the delta calculation? · Closed 2 months ago · 3 comments
- Why modify down_proj in LLaMA? · Updated 3 months ago · 1 comment
- [Llama-2-7b-chat] RuntimeError: expected scalar type Float but found Half · Updated 7 months ago · 7 comments
- Online-quantized Baichuan-13B-Chat raises LookupError: model.layers.5.mlp.down_proj.weight · Updated 7 months ago · 3 comments
- Llama-2-7b-chat - RuntimeError: Inference tensors cannot be saved for backward · Closed 7 months ago · 2 comments
- Difference between this training approach and LLaMA-Efficient-Tuning-main · Updated 8 months ago
- Where is the edited model saved? · Updated 8 months ago · 3 comments
- Runtime error · Updated 9 months ago · 2 comments
- RuntimeError: computing v Vector · Updated 9 months ago
- Qwen support · Updated 9 months ago · 1 comment
- Would you consider supporting ChatGLM2-6B? · Updated 10 months ago · 5 comments
- GPU memory usage · Closed 10 months ago · 1 comment
- This is not a good idea; it may lead to severe overfitting. · Updated 10 months ago · 1 comment
- LLaMA-2-7b-chat editing failed · Updated 10 months ago
- Dataset format · Closed 10 months ago
- How should the edited Baichuan-13B be saved? · Updated 10 months ago · 3 comments
- Is there any way to apply this interesting algorithm to the chatGLM-6B or chatGLM2-6B models? · Updated 10 months ago · 2 comments
- Editing a 7B model on a single 80 GB GPU reports out-of-memory; how can I run it across multiple GPUs on one machine? · Updated 10 months ago · 2 comments
- Does this editing method have side effects, such as model forgetting? · Updated 10 months ago · 2 comments
- A little mistake in HyperParams · Closed a year ago · 1 comment
- Error occurs when editing Baichuan-13B · Updated a year ago · 2 comments
- Error: TypeError: can't convert cuda:0 device type tensor to numpy · Closed a year ago · 1 comment
- NotImplementedError when editing Baichuan-13B · Closed a year ago
- How should the config be set up? · Closed a year ago · 1 comment
- At first glance, the principle seems to be capturing the parameter diff between two data samples inside the model? · Updated a year ago