Chien Nguyen's starred repositories
text-generation-inference
Large Language Model Text Generation Inference
torchscale
Foundation Architecture for (M)LLMs
neurips_llm_efficiency_challenge
NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day
ProphetNet
A research project for natural language generation, containing the official implementations by MSRA NLC team.
direct-preference-optimization
Reference implementation for DPO (Direct Preference Optimization)
baize-chatbot
Let ChatGPT teach your own chatbot in hours with a single GPU!
denoising-diffusion-gan
Tackling the Generative Learning Trilemma with Denoising Diffusion GANs https://arxiv.org/abs/2112.07804
latex-examples
Simple LaTeX examples
LLaMA-Cult-and-More
Large Language Models for All, 🦙 Cult and More. Stay in touch!
torchrl_examples
Training examples using the TorchRL repo
self-instruct
Aligning pretrained language models with instruction data generated by themselves.
self-refine
LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively.
consistency_models
Official repo for consistency models.
stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
DA-Transformer
Official Implementation for the ICML2022 paper "Directed Acyclic Transformer for Non-Autoregressive Machine Translation"
prompt-in-context-learning
Awesome resources for in-context learning and prompt engineering: mastering LLMs such as ChatGPT, GPT-3, and FlanT5, with up-to-date, cutting-edge updates.
chatgpt-retrieval-plugin
The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language.
RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embeddings.