Xuechen Li's repositories
private-transformers
Make differentially private training of transformers easy
ml-swissknife
My ML research codebase
accelerate
🚀 A simple way to train and use PyTorch models with multi-GPU, TPU, mixed-precision
differentialprivacy
GitHub Pages backend for https://differentialprivacy.org
failure-directions
Distilling Model Failures as Directions in Latent Space
imagen-pytorch
Implementation of Imagen, Google's Text-to-Image Neural Network, in Pytorch
jax_privacy
Algorithms for Privacy-Preserving Machine Learning in JAX
latent-diffusion
High-Resolution Image Synthesis with Latent Diffusion Models
opacus
Training PyTorch models with differential privacy
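Several of these repositories (private-transformers, jax_privacy, opacus) center on differentially private training. The core idea they share is DP-SGD: clip each per-example gradient to a fixed norm, then add Gaussian noise before the update. A minimal pure-Python sketch of that idea on 1-D least squares, not the opacus API itself; the clip norm, noise multiplier, and learning rate are illustrative choices:

```python
import random

def dp_sgd_step(w, data, lr=0.1, clip=1.0, noise_mult=1.0):
    """One DP-SGD step for 1-D least squares (loss (w*x - y)^2):
    clip each per-example gradient, sum, add Gaussian noise, update."""
    grads = []
    for x, y in data:
        g = 2.0 * (w * x - y) * x                      # per-example gradient
        g *= min(1.0, clip / max(abs(g), 1e-12))       # clip to norm <= clip
        grads.append(g)
    noisy_sum = sum(grads) + random.gauss(0.0, noise_mult * clip)
    return w - lr * noisy_sum / len(data)

random.seed(0)
w = 0.0
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # noiseless y = 2x
for _ in range(200):
    w = dp_sgd_step(w, data)
# w drifts toward the true slope 2, up to the injected DP noise
```

Libraries like opacus implement the same recipe for full PyTorch models (vectorized per-sample gradients, privacy accounting), but the clip-then-noise structure is identical.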
peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
phc-winner-argon2
The password hash Argon2, winner of PHC
RL4LMs-lxuechen
A modular RL library to fine-tune language models to human preferences
robustness
A library for experimenting with, training and evaluating neural networks, with a focus on adversarial robustness.
self-instruct
Aligning pretrained language models with instruction data generated by the model itself.
shap
A game theoretic approach to explain the output of any machine learning model.
summarize-from-feedback
Code for "Learning to summarize from human feedback"
transformers
🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0.
triton
Development repository for the Triton language and compiler
trl-lxuechen
Train transformer language models with reinforcement learning.