zouyang's starred repositories
Chinese-LLaMA-Alpaca
Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
tensorflow2_tutorials_chinese
TensorFlow 2 tutorials in Chinese, continuously updated (current version: TensorFlow 2.0); tag: tensorflow 2.0 tutorials
ChatGLM-Tuning
A finetuning scheme based on ChatGLM-6B + LoRA
Luotuo-Chinese-LLM
骆驼 (Luotuo): open-sourced Chinese language models. Developed by 陈启源 @ Central China Normal University & 李鲁鲁 @ SenseTime & 冷子昂 @ SenseTime
Alpaca-CoT
We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-tuning) for easy use. We welcome open-source enthusiasts to initiate any meaningful PR on this repo and integrate as many LLM-related technologies as possible. We built this finetuning platform to make it easy for researchers to get started with large models, and we welcome any meaningful PR from open-source enthusiasts!
transformers_tasks
⭐️ NLP algorithms with the transformers lib, supporting text classification, text generation, information extraction, text matching, RLHF, SFT, etc.
TextBrewer
A PyTorch-based knowledge distillation toolkit for natural language processing
China_House
A collection of home-buying resources and projects, organized for easy reference and continuously updated...
bert4torch
An elegant PyTorch implementation of transformers
alpaca_chinese_dataset
A manually curated Chinese dialogue dataset, plus finetuning code for ChatGLM
pretrained-models
Open Language Pre-trained Model Zoo
simple_tensorflow_serving
Generic and easy-to-use serving service for machine learning models
keras-mmoe
A TensorFlow Keras implementation of "Modeling Task Relationships in Multi-task Learning with Multi-gate Mixture-of-Experts" (KDD 2018)
roformer-sim
An upgraded version of SimBERT (SimBERTv2)!
ChatGLM-LLaMA-chinese-insturct
Exploring the finetuning performance of Chinese instruct data on ChatGLM and LLaMA
alpaca-chinese-dataset
An Alpaca-style Chinese instruction-tuning dataset
Keyword_Extraction
Second-place code solution for the 2018 Shence Cup (神策杯) College Algorithm Masters Competition (Chinese keyword extraction)
GlobalPointer
GlobalPointer: handling nested and non-nested NER in a unified way
CamelBell-Chinese-LoRA
CamelBell (驼铃) is a Chinese language tuning project based on LoRA. CamelBell belongs to Project Luotuo (骆驼), an open-sourced Chinese LLM project created by 冷子昂 @ SenseTime & 陈启源 @ Central China Normal University & 李鲁鲁 @ SenseTime
RasaChatBot
A case-information question-answering system built on Rasa
chengdu_house_knowledge
🏡 Knowledge for buying a home in Chengdu
roberta-wwm-base-distill
A RoBERTa-wwm-base model distilled from RoBERTa-wwm-large
KnowledgeDistillation
A general framework for knowledge distillation