Baiph's starred repositories
Langchain-Chatchat
Langchain-Chatchat (formerly langchain-ChatGLM): a local-knowledge-base RAG and Agent application built with Langchain and LLMs such as ChatGLM, Qwen, and Llama
ChatGLM2-6B
ChatGLM2-6B: An Open Bilingual Chat LLM | 开源双语对话语言模型
chineseocr
YOLOv3 + OCR
LangChain-ChatGLM-Webui
Automatic question answering over local knowledge bases, based on LangChain and LLMs such as ChatGLM-6B
MedicalGPT
MedicalGPT: Training Your Own Medical GPT Model with ChatGPT Training Pipeline. Trains medical LLMs, implementing incremental pretraining (PT), supervised fine-tuning (SFT), RLHF, DPO, and ORPO.
ChatGLM-Finetuning
Fine-tuning ChatGLM-6B, ChatGLM2-6B, and ChatGLM3-6B on downstream tasks, covering Freeze, LoRA, P-tuning, and full-parameter fine-tuning
province-city-china
🇨🇳 The most complete and up-to-date JSON, CSV, and SQL data for China's provinces, cities, districts/counties, and townships/streets
chinese_speech_pretrain
chinese speech pretrained models
LLM-Tuning
Tuning LLMs with no tears💦; Sample Design Engineering (SDE) for more efficient downstream tuning.
Cornucopia-LLaMA-Fin-Chinese
Cornucopia (聚宝盆): a series of open-source, commercially usable Chinese financial LLMs, with an efficient, lightweight training framework for vertical-domain LLMs (pretraining, SFT, RLHF, quantization, etc.)
benchmarking-chinese-text-recognition
This repository contains datasets and baselines for benchmarking Chinese text recognition.
chatGLM-6B-QLoRA
Efficient 4-bit QLoRA fine-tuning of chatGLM-6B/chatGLM2-6B with the peft library, plus merging the LoRA model into the base model and 4-bit quantization
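The setup this repo describes can be sketched with peft and bitsandbytes roughly as follows; this is a minimal, hypothetical configuration fragment (the `r`, `lora_alpha`, and `target_modules` values are illustrative assumptions, not the repo's actual settings):

```python
import torch
from transformers import AutoModel, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model quantized to 4 bit (NF4) via bitsandbytes.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModel.from_pretrained(
    "THUDM/chatglm2-6b",
    quantization_config=bnb_config,
    trust_remote_code=True,
)
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters; "query_key_value" is ChatGLM's fused attention projection.
lora_config = LoraConfig(
    r=8,                # illustrative rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query_key_value"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# After training, the adapters can be folded back into the base weights:
# model = model.merge_and_unload()
```

Only the small LoRA matrices are trained; the 4-bit base weights stay frozen, which is what makes the fine-tuning memory-efficient.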
trocr-chinese
transformers ocr for chinese
FinanceChatGLM
Walkthrough of a 60-point baseline for the SMP 2023 ChatGLM Financial LLM Challenge
Dual-Contrastive-Learning
Code for our paper "Dual Contrastive Learning: Text Classification via Label-Aware Data Augmentation"
kaggle-feedback-effectiveness-1st-place-solution
Winning solution for the Kaggle Feedback Prize Challenge.
Tianchi-LLM-retrieval
2023 Global Intelligent Vehicle AI Challenge, Track 1: LLM retrieval-based QA, 75+ baseline
End-to-End-Mandarin-ASR
End-to-end speech recognition on AISHELL dataset.