wanglaiqi's repositories
mrc-for-flat-nested-ner
Code for ACL 2020 paper `A Unified MRC Framework for Named Entity Recognition`
chatbot-base-on-Knowledge-Graph
A dialogue system for the medical vertical domain: deep learning parses user questions, and a knowledge graph stores and serves the knowledge points queried
kaggleSolution
Machine learning algorithm exercises
NERBertProject
A Chinese named entity recognition example based on the pretrained language model BERT
Task_simbert_project
A project demonstrating SimBERT usage
baichuan-speedup
A pure C++ cross-platform LLM acceleration library with Python bindings; supports Baichuan, GLM, LLaMA, and MOSS base models; runs ChatGLM-6B-class models smoothly on mobile and reaches 10,000+ tokens/s on a single GPU
bert4keras
A lightweight reimplementation of BERT for Keras
chineseocr_lite
Ultra-lightweight Chinese OCR; supports vertical text recognition and ncnn inference; PSENet (8.5M) + CRNN (6.3M) + AngleNet (1.5M), total model size only 17M
fairseq
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
japanese-pretrained-models
Code for producing Japanese pretrained models provided by rinna Co., Ltd.
KnowledgeDistillation
Knowledge distillation for text classification with PyTorch: Chinese text classification with BERT/XLNet teacher models and a biLSTM student model
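As context for the distillation repos listed here, a minimal sketch of the standard soft-label distillation loss (temperature-scaled KL divergence plus hard-label cross-entropy); the function name, `T`, and `alpha` are illustrative choices, not taken from this repository:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-softened distributions.
    # The T*T factor keeps gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy batch: 4 examples, 3 classes (e.g. a biLSTM student vs. a BERT teacher).
student = torch.randn(4, 3)
teacher = torch.randn(4, 3)
labels = torch.tensor([0, 2, 1, 0])
loss = distillation_loss(student, teacher, labels)
```

In practice the teacher logits come from a frozen BERT/XLNet forward pass and only the student is updated.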
LLaMA-Efficient-Tuning
Fine-tuning LLaMA with PEFT (PT+SFT+RLHF with QLoRA)
Llama2-Chinese
Llama Chinese community; the best Chinese Llama large model; fully open source and commercially usable
nlpcda
One-click Chinese data augmentation package; NLP data augmentation, BERT data augmentation, EDA: pip install nlpcda
PKD-for-BERT-Model-Compression
PyTorch implementation of Patient Knowledge Distillation for BERT Model Compression
Pretrained-Language-Model
Pretrained language model and its related optimization techniques developed by Huawei Noah's Ark Lab.
roberta_zh
Chinese pretrained RoBERTa models: RoBERTa for Chinese
textRewriting
Chinese text rewriting
transformers
🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch, TensorFlow, and JAX.