littlehacker26's repositories
Discriminator-Cooperative-Unlikelihood-Prompt-Tuning
The code implementation of the EMNLP2022 paper: DisCup: Discriminator Cooperative Unlikelihood Prompt-tuning for Controllable Text Generation
2018-NanJing-AI-Application-Competition
Description of the solution approach and project source code for the competition task
Residual_Memory_Transformer
This repository contains code, data, checkpoints, and training and evaluation instructions for the paper: Controllable Text Generation with Residual Memory Transformer
ACL2021MF
Source Code For ACL 2021 Paper "Mention Flags (MF): Constraining Transformer-based Text Generators"
adavae
VAE with adaptive parameter-efficient GPT-2s for language modeling
awesome-phd-advice
Collection of advice for prospective and current PhD students
baichuan-7B
A large-scale 7B pretrained language model developed by BaiChuan-Inc.
BIThesis
📖 An unofficial collection of LaTeX templates for Beijing Institute of Technology, including undergraduate and graduate thesis templates and more. 🎉 (See the wiki and the manuals in the releases for further documentation.)
COCON_ICLR2021
PyTorch implementation of CoCon: A Self-Supervised Approach for Controlled Text Generation
CommonGen
A Constrained Text Generation Challenge Towards Generative Commonsense Reasoning
DExperts
Code associated with the ACL 2021 DExperts paper
diasenti
Conversational Multimodal Emotion Recognition
HuatuoGPT
HuatuoGPT, Towards Taming Language Models To Be a Doctor. (An Open Medical GPT)
LLaMA-Factory
Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
Mengzi
Mengzi Pretrained Models
OpenRLHF
An Easy-to-use, Scalable and High-performance RLHF Framework (70B+ PPO Full Tuning & Iterative DPO & LoRA & Mixtral)
P-tuning
A novel method for tuning language models. Code and datasets for the paper "GPT Understands, Too".
PLMpapers
Must-read Papers on pre-trained language models.
PPLM
Plug and Play Language Model implementation. Allows steering the topic and attributes of GPT-2 models.
Progressive-Hint
This is the official implementation of "Progressive-Hint Prompting Improves Reasoning in Large Language Models"
Residual-EBM
Code for Residual Energy-Based Models for Text Generation in PyTorch.
self-refine
LLMs can generate feedback on their own output, use it to improve that output, and repeat this process iteratively.
transformers
🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0.