Aaron's repositories
Bert-In-Relation-Extraction
Relation extraction between entities using BERT
bisheng
Bisheng is an open LLM DevOps platform for next-generation AI applications.
ChatGLM-Tuning
An affordable ChatGPT implementation based on ChatGLM-6B + LoRA
CLIP
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image
Firefly
Firefly (流萤): a Chinese conversational large language model
GPT2-chitchat
GPT-2 for Chinese chitchat (implements the MMI idea from DialoGPT)
langchain-ChatGLM
langchain-ChatGLM: ChatGLM question answering over a local knowledge base with langchain
LoRA
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
NLP-Loss-Pytorch
Implementation of several losses for unbalanced data, such as focal loss, dice loss, DSC loss, and GHM loss
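To make the kind of loss this repo covers concrete, here is a minimal sketch of binary focal loss (the best-known of the unbalanced losses listed); the default parameter values are the commonly used ones, not necessarily this repo's:

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-12):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p: predicted probability of the positive class, y: label in {0, 1}.
    The (1 - p_t)**gamma factor down-weights easy, well-classified examples.
    """
    p_t = p if y == 1 else 1.0 - p
    a_t = alpha if y == 1 else 1.0 - alpha
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t + eps)

# An easy example (high confidence, correct) contributes far less than a hard one:
easy = focal_loss(0.95, 1)
hard = focal_loss(0.30, 1)
assert easy < hard
```

With gamma = 0 and alpha = 0.5 this reduces to (half of) the ordinary cross-entropy, which is how the focusing behavior is usually sanity-checked.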
P-tuning-v2
An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks
pdfGPT
PDF GPT lets you chat with the contents of your PDF file using GPT capabilities. An open-source solution that turns your PDF files into a chatbot!
PrefixTuning
Prefix-Tuning: Optimizing Continuous Prompts for Generation
prompt_text_classification
Prompt-based Chinese text classification.
RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best of RNNs and transformers: great performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embeddings.
SimCSE-Pytorch
Implementation of SimCSE + ESimCSE on Chinese datasets
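The unsupervised SimCSE objective that this repo implements can be sketched with NumPy: each sentence is encoded twice under different dropout masks, and the two views form a positive pair against in-batch negatives under an InfoNCE loss. The embeddings below are random stand-ins for encoder outputs, and tau = 0.05 is the temperature commonly used in the paper, not necessarily this repo's setting:

```python
import numpy as np

rng = np.random.default_rng(1)
batch, dim, tau = 4, 16, 0.05

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

z1 = normalize(rng.standard_normal((batch, dim)))       # view 1 of each sentence
z2 = normalize(z1 + 0.1 * rng.standard_normal((batch, dim)))  # view 2 (dropout noise stand-in)

# Scaled cosine-similarity matrix; the positive pair for row i is column i.
sim = (z1 @ z2.T) / tau
logits = sim - sim.max(axis=1, keepdims=True)           # stabilize softmax
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
loss = -np.log(np.diag(probs)).mean()                   # InfoNCE over in-batch negatives
```

ESimCSE extends this scheme with word-repetition augmentation and a momentum-updated queue of extra negatives; the contrastive loss itself is the same shape as above.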