慢半拍's repositories
alpaca-rlhf
Fine-tuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat
multi-turn-alpaca
Multi-turn Alpaca is an extension of Stanford Alpaca that supports multi-turn dialogue (a multi-turn dialogue version of Alpaca)
ChatGPT-Techniques-Introduction-for-Everyone
An introduction to ChatGPT techniques
entire-space-aste
A Better Choice: Entire-space Datasets for Aspect Sentiment Triplet Extraction
ABSA-datasets
Datasets for Aspect-Based Sentiment Analysis and code for reading them.
SIGIR22-TOWE
[SIGIR 2022] Training Entire-Space Models for Target-oriented Opinion Words Extraction
ACOS
The datasets and code of ACL 2021 paper "Aspect-Category-Opinion-Sentiment Quadruple Extraction with Implicit Aspects and Opinions".
alpaca-lora
Instruct-tune LLaMA on consumer hardware
ChatAlpaca
A Multi-Turn Dialogue Corpus based on Alpaca Instructions
ChatGLM-6B
ChatGLM-6B: An Open-Source Bilingual Dialogue Language Model
chatgpt-evaluation-01-2023
Code, datasets and results of the ChatGPT evaluation presented in paper "ChatGPT: Jack of all trades, master of none"
Chinese-LangChain
A Chinese LangChain project | 小必应, Q.Talk, 强聊, QiangTalk
clv-related-distributions
Distributions used in calculating customer lifetime value.
DeepSpeedExamples
Example models using DeepSpeed
Entire-Space-TOWE-ARGCN
Train and evaluate ARGCN on the entire space
llama
Inference code for LLaMA models
my-autocrit
Experiments using autocrit
ranksim-imbalanced-regression
[ICML 2022] RankSim: Ranking Similarity Regularization for Deep Imbalanced Regression
stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
Transfer-Learning-Library
Transfer Learning Library for Domain Adaptation, Task Adaptation, and Domain Generalization
try-large-models
Try large models on Colab