X-jun-0130 / LLM-Pretrain-FineTune
DeepSpeed, LLM, Medical_Dialogue, medical large language model, pretraining, fine-tuning
Stargazers: 218 · Watchers: 2 · Issues: 15 · Forks: 31
X-jun-0130/LLM-Pretrain-FineTune Issues
Building the dataset · Closed 3 months ago · 1 comment
Some questions about incremental pretraining · Closed 3 months ago · 3 comments
About dataset construction · Updated 3 months ago · 1 comment
Will the data used here be open-sourced? · Updated 3 months ago · 1 comment
Requesting the data formats for pretraining and SFT · Closed 7 months ago · 1 comment
Is there code for multi-GPU LoRA? · Closed a year ago · 1 comment
Could you share the processed data? · Closed a year ago · 3 comments
How are the tables in the pretraining data processed and converted? · Closed a year ago · 1 comment
Does this project support ChatGLM-6B? · Closed a year ago
Switching to LLaMA: Error: Incorrect padding · Closed a year ago · 3 comments
What is the pretraining text format? · Closed a year ago · 1 comment
Repo id must be in the form 'repo_name' or 'namespace/repo_name': './Model_TH/Bloom_6B4_zh/'. Use `repo_type` argument if needed. · Closed a year ago · 1 comment
os.chdir('./Nlp_2023/Dialogue_Bloom/') · Closed a year ago · 1 comment
How well does the fine-tuned model perform? · Closed a year ago · 1 comment
I'm a beginner; can this be applied to llama? · Closed a year ago · 1 comment