codefuse-ai/MFTCoder Issues
Experiments with MFTCoder consistently yield unsatisfactory results
Updated · 1 · How to handle uneven loss decrease across datasets
Closed · 1 · Evaluating MFTCoder with HumanEval
Updated · What changes are needed for multi-node training?
Closed · 1 · No performance improvement after fine-tuning on CodeLlama
Closed · 2 · Can an int4 GPTQ model be fine-tuned with LoRA?
Closed · 4 · Are the task types also generated with GPT?
Closed · 1 · Training datasets used in the MFTCoder paper
Closed · 2 · Convergence curves
Closed · Error when merging weights after QLoRA fine-tuning
Closed · 4 · What changes are needed to support chatglm3-6b-base?
Closed · 2 · Is Wandb or TensorBoard supported?
Closed · 1 · No 7B model size?
Closed · 2 · Inquiry about weighted_loss_mode
Closed · 1 · When will the FSDP training API be open-sourced?
Closed · 1 · NCCL error
Closed · 3 · How to construct prompts and stop tokens for codefuse-llamacode
Closed · 29 · How should a JSONL dataset be designed for multi-task fine-tuning?
Closed · 5 · Is the model licensed for commercial use?
Closed · 5 · Little bug fix
Closed · 2 · Fine-tuning error on a single v1000 card
Closed · 2 · No progress bar during model training
Closed · data.helper fails to load?
Closed · 4 · How was the high-quality ChatGPT-generated Python exercise data obtained?
Closed · 1 · Pass@1 on HumanEval evaluation is low
Closed · 2 · Does the training data include Chinese data?
Closed · 1 · Could you provide a complete fine-tuning example?
Closed · 1 · Download options within China