zhaopu7's repositories
alpaca_eval
A validated automatic evaluator for instruction-following language models. High-quality, cheap, and fast.
Awesome-Chinese-NLP
A curated list of resources for Chinese NLP (Chinese natural language processing)
Awesome-Diffusion-Models
A collection of resources and papers on Diffusion Models
Awesome-LLM
Awesome-LLM: a curated list of Large Language Model resources
awesome-rl-nlp
Curated Reinforcement Learning Resources for Natural Language Processing
Chinese-LLaMA-Alpaca
Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
Chinese-LLaMA-Alpaca-2
Phase 2 of the Chinese LLaMA-2 & Alpaca-2 large language model project, with local CPU/GPU training and deployment (Chinese LLaMA-2 & Alpaca-2 LLMs)
Chinese_Corpus
Chinese corpora, including sentiment lexicons, sentiment analysis, text classification, single-turn dialogue, Chinese dictionaries, and Zhihu data
Conference-Acceptance-Rate
Acceptance rates for the major AI conferences
data-juicer
A one-stop data processing system to make data higher-quality, juicier, and more digestible for LLMs! 🍎 🍋 🌽 ➡️ ➡️ 🍸 🍹 🍷
evals
Evals is a framework for evaluating OpenAI models and an open-source registry of benchmarks.
gptq
Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers".
GPTQ-for-LLaMa
4-bit quantization of LLaMA using GPTQ
llama
Inference code for LLaMA models
llama.cpp
Port of Facebook's LLaMA model in C/C++
llama_index
LlamaIndex (formerly GPT Index) is a data framework for your LLM applications
Megatron-LM
Ongoing research training transformer models at scale
nlp-datasets
Alphabetical list of free/public domain datasets with text data for use in Natural Language Processing (NLP)
nnPUlearning
Reproduction code for non-negative Positive-Unlabeled (nnPU) and unbiased Positive-Unlabeled (uPU) learning on MNIST and CIFAR-10
Prompt-Engineering-Guide
:octopus: Guides, papers, lectures, and resources for prompt engineering
PU-Learning
This repo lists research and applications in PU learning.
pytorch-A3C
Simple A3C implementation with PyTorch + multiprocessing
Reinforcement-learning-with-tensorflow
Simple reinforcement learning tutorials
RLHF-Reward-Modeling
Recipes for training reward models for RLHF.