Thomas Wang's repositories
apex
A PyTorch extension: tools for easy mixed precision and distributed training in PyTorch
bigscience
Codebase for the engineering/scaling WG
drjit
Dr.Jit — a just-in-time compiler for differentiable rendering
eval_t0_deepspeed
Evaluate T0 with DeepSpeed
lm-evaluation-harness
A framework for few-shot evaluation of autoregressive language models.
lxmls-toolkit
Machine Learning applied to Natural Language Processing Toolkit used in the Lisbon Machine Learning Summer School
Megatron-DeepSpeed
Ongoing research on training transformer language models at scale, including BERT and GPT-2
Megatron-LM
Ongoing research on training transformer models at scale
nerfacc
A General NeRF Acceleration Toolbox in PyTorch.
nerfstudio
A collaboration friendly studio for NeRFs
promptsource
Toolkit for collecting and applying templates of prompting instances
ReinforcementLearningMVAProject
Self-Paced IRL
svox2
Plenoxels: Radiance Fields without Neural Networks (code release, work in progress)
text-to-text-transfer-transformer
Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"
transformers
🤗 Transformers: state-of-the-art Natural Language Processing for PyTorch, TensorFlow, and JAX.
trl
Train transformer language models with reinforcement learning.