Xuechen Li's repositories
Make differentially private training of transformers easy
My ML research codebase
🚀 A simple way to train and use PyTorch models with multi-GPU, TPU, and mixed precision
GitHub Pages backend for https://differentialprivacy.org
Distilling Model Failures as Directions in Latent Space
Implementation of Imagen, Google's Text-to-Image Neural Network, in PyTorch
Algorithms for Privacy-Preserving Machine Learning in JAX
High-Resolution Image Synthesis with Latent Diffusion Models
Training PyTorch models with differential privacy
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
The password hash Argon2, winner of the Password Hashing Competition (PHC)
A modular RL library to fine-tune language models to human preferences
A library for experimenting with, training, and evaluating neural networks, with a focus on adversarial robustness.
Aligning pretrained language models with instruction data generated by themselves.
A game theoretic approach to explain the output of any machine learning model.
Code for "Learning to summarize from human feedback"
🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0.
Development repository for the Triton language and compiler
Train transformer language models with reinforcement learning.