datawhalechina/self-llm

《开源大模型食用指南》("A Beginner's Guide to Open-Source LLMs"): tutorials for quickly deploying open-source large models in a Linux environment, written to be approachable even for complete beginners.

Stargazers: 4586 · Watchers: 48 · Issues: 64 · Forks: 562
datawhalechina/self-llm Issues
- Fine-tuning Qwen1.5-0.5b fails with PermissionError: [Errno 13] Permission denied: './output/Qwen1.5\checkpoint-100' (updated 7 hours ago, 2 comments)
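The PermissionError above points at a path that mixes Windows-style backslashes into a POSIX path: on Linux the backslash is an ordinary filename character, so './output/Qwen1.5\checkpoint-100' is one oddly named entry rather than a checkpoint directory tree. A minimal stdlib sketch of the safer way to build such a path (the directory names mirror the issue title; they are not taken from the repo's training script):

```python
import os

# The path from the issue title, with a literal backslash: on Linux this names
# a single file "Qwen1.5\checkpoint-100", not a checkpoint-100 subdirectory.
broken = './output/Qwen1.5\\checkpoint-100'

# os.path.join picks the correct separator for the current OS.
fixed = os.path.join('.', 'output', 'Qwen1.5', 'checkpoint-100')
print(fixed)  # './output/Qwen1.5/checkpoint-100' on POSIX systems
```

Using forward slashes or os.path.join in the training script's output_dir keeps the path portable between Windows and Linux.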
- LoRA fine-tuning of llama3 fails: NotImplementedError: Cannot copy out of meta tensor; no data! (updated 2 days ago, 3 comments)
- llama3 API error (updated 2 days ago, 1 comment)
- Are there deployment/fine-tuning docs for multimodal LLMs, and are any planned? (updated 3 days ago)
- [XVERSE-7B-chat WebDemo deployment] fails: torch.cuda.OutOfMemoryError: CUDA out of memory. (updated 3 days ago, 2 comments)
- Request for a deepseek-v2 deployment tutorial (updated 5 days ago, 2 comments)
- Error during 04-Qwen-7B-Chat LoRA fine-tuning (updated 7 days ago, 1 comment)
- Can this run on CPU only, e.g. on a Mac without CUDA? (updated 8 days ago, 1 comment)
- chatglm3 LoRA fine-tuning error (updated 8 days ago, 1 comment)
- While fine-tuning LLAMA3 I get NotImplementedError: Cannot copy out of meta tensor; no data! (updated 8 days ago, 8 comments)
- Qwen1.5-7B LoRA fine-tuning fails: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn (updated 9 days ago, 2 comments)
- Question about llama3 API calls (updated 9 days ago, 1 comment)
- [Discussion, not an issue] A Qwen fine-tuned with LoRA and one steered by a preset system prompt seem to differ very little (updated 10 days ago, 3 comments)
- LLaMA3-8B-Instruct + LoRA fine-tuning at length 8192 on an A800 (80 GB VRAM) (updated 10 days ago, 2 comments)
- Qwen1.5-7B-Chat vLLM deployment speed test: wrong hf command (updated 11 days ago, 1 comment)
- Fine-tuned model produces impolite or offensive language (updated 12 days ago, 3 comments)
- Qwen1.5-7B LoRA fine-tuning error (updated 13 days ago, 1 comment)
- Is there a deployment guide for the quantized qwen1.5? (closed 17 days ago, 3 comments)
- Thanks, this helped a lot (closed 17 days ago)
- Building wheel for flash-attn (setup.py) ... hangs (closed 23 days ago, 4 comments)
- peft training finished (following 04-Qwen-7B-Chat Lora 微调.ipynb), but reloading the model with model = AutoModelForCausalLM.from_pretrained("../output/Qwen/checkpoint-1300/", trust_remote_code=True).eval() raises ValueError: The version of PEFT you are using is not compatible, please use a version that is greater than 0.5.0 (closed 23 days ago, 2 comments)
- InternLM2 is missing a package (closed 24 days ago, 2 comments)
- Error: asyncio.run() cannot be called from a running event loop; please take a look (updated a month ago, 6 comments)
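The asyncio.run() error in the entry above is a general Python pitfall rather than anything specific to this repo: asyncio.run() refuses to start a loop when one is already running, as inside a Jupyter cell or a FastAPI route handler. A minimal sketch of the rule, with hypothetical coroutine names:

```python
import asyncio

async def fetch_answer():
    return 42

async def handler():
    # Inside an already-running loop (a Jupyter cell, a FastAPI handler),
    # calling asyncio.run() here would raise:
    #   RuntimeError: asyncio.run() cannot be called from a running event loop
    # The fix is simply to await the coroutine instead.
    return await fetch_answer()

# At top level no loop is running yet, so asyncio.run() is the right entry point.
result = asyncio.run(handler())
print(result)  # 42
```

In notebooks, where a loop is always running, `await handler()` directly in a cell serves the same purpose.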
- Why does the LoRA fine-tuning data format in this project differ from ChatGLM3's official fine-tuning demo? (closed a month ago, 1 comment)
- For LLAMA3, should the steps be executed in order 1, 2, 3, 4? (updated a month ago, 6 comments)
- deepseek lora (updated a month ago, 2 comments)
- qwen-vl (updated a month ago)
- Where can I get the huanhuan dataset used in ChatGLM3 fine-tuning? (updated a month ago, 3 comments)
- Multi-GPU error with Qwen1.5-7B-Chat FastApi deployment (updated a month ago, 3 comments)
- After LoRA fine-tuning the chatglm model, how do I load the new model? (updated a month ago, 8 comments)
- ChatGLM3-6B went mute (literally) after fine-tuning (closed 2 months ago, 4 comments)
- Differences from FastChat (updated a month ago)
- Suggestion: write the README tutorials in order (updated a month ago, 1 comment)
- chatglm3-6b fastapi invocation (closed a month ago, 1 comment)
- Qwen-1.5-4B LLM inference bug (closed a month ago, 3 comments)
- For the model API deployment, could a high-performance async version be provided, e.g. using vllm or fastchat? (updated 2 months ago, 1 comment)
- In the deepseek official readme, the loss from the second round onward is (closed 2 months ago, 7 comments)
- Could I submit a PR to add support for our BlueLM model? (closed 2 months ago, 1 comment)
- Error reading files when building a knowledge base with chatglm (closed 2 months ago, 3 comments)
- Helped a lot, thanks (closed 2 months ago, 3 comments)
- Ran into a problem deploying chatglm (closed 2 months ago, 5 comments)
- What is going on here? (closed 2 months ago)
- How about a llama fine-tuning tutorial? (closed 2 months ago, 2 comments)
- Qwen1.5-7B inference: why does fastapi error out during GPU debugging on modelscope? (updated 2 months ago, 1 comment)
- chatGLM3 model save fails: TypeError: Object of type set is not JSON serializable (closed 3 months ago, 4 comments)
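The TypeError in the entry above is Python's json module refusing to serialize a set. A generic stdlib workaround (not necessarily the fix the maintainers applied) is to pass a default= converter that turns sets into lists before dumping:

```python
import json

# A config dict in which a set has sneaked in; json.dumps(cfg) alone raises:
#   TypeError: Object of type set is not JSON serializable
cfg = {"model": "chatGLM3", "tags": {"lora", "sft"}}

def jsonable(obj):
    # Convert sets to sorted lists; let any other unknown type fail loudly.
    if isinstance(obj, set):
        return sorted(obj)
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")

text = json.dumps(cfg, default=jsonable)
print(text)  # {"model": "chatGLM3", "tags": ["lora", "sft"]}
```

Sorting the converted set makes the serialized output deterministic across runs, which matters for reproducible config files.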
- Following the DeepSeek-7B-chat LoRA fine-tuning script to fine-tune deepseek-coder-7b-v1.5, the generated output is all exclamation marks (closed 3 months ago, 3 comments)
- TypeError: Object of type set is not JSON serializable during chatGLM3 fine-tuning (closed 3 months ago, 3 comments)
- Failed to import transformers.models.qwen2 (closed 4 months ago, 8 comments)
- Code typo: pd.read_json('../dataset/huanhuan.jsonl') should be pd.read_json('../dataset/huanhuan.json') (closed 4 months ago, 2 comments)
- Custom service page shows "detail": "Method Not Allowed" (closed 4 months ago, 1 comment)