Phil Wang's repositories
vit-pytorch
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch
DALLE2-pytorch
Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in PyTorch
imagen-pytorch
Implementation of Imagen, Google's text-to-image neural network, in PyTorch
denoising-diffusion-pytorch
Implementation of the Denoising Diffusion Probabilistic Model in PyTorch
x-transformers
A simple but complete full-attention transformer with a set of promising experimental features from various papers
stylegan2-pytorch
Simplest working implementation of StyleGAN2, a state-of-the-art generative adversarial network, in PyTorch. Enabling everyone to experience disentanglement
vector-quantize-pytorch
Vector (and Scalar) Quantization, in PyTorch
make-a-video-pytorch
Implementation of Make-A-Video, the new SOTA text-to-video generator from Meta AI, in PyTorch
byol-pytorch
Usable implementation of "Bootstrap Your Own Latent" self-supervised learning, from DeepMind, in PyTorch
video-diffusion-pytorch
Implementation of Video Diffusion Models, Jonathan Ho's new paper extending DDPMs to video generation, in PyTorch
soundstorm-pytorch
Implementation of SoundStorm, efficient parallel audio generation from Google DeepMind, in PyTorch
linear-attention-transformer
Transformer based on a variant of attention that is linear in complexity with respect to sequence length
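The linear-complexity idea rests on the kernel feature-map trick: apply a positive feature map to queries and keys, then reassociate the matrix products so the sequence length never appears squared. A minimal sketch in PyTorch, assuming an ELU+1 feature map (the repository may use a different variant):

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v):
    # q, k, v: (batch, seq_len, dim)
    # ELU + 1 keeps features positive; one common choice for
    # linearized attention (an assumption, not necessarily the
    # exact kernel used in the repository)
    q = F.elu(q) + 1
    k = F.elu(k) + 1
    # Compute K^T V first: O(n * d^2) instead of the O(n^2 * d)
    # cost of materializing the full attention matrix
    context = torch.einsum('bnd,bne->bde', k, v)
    # Per-query normalizer (sum of attention weights over all keys)
    z = 1.0 / (torch.einsum('bnd,bd->bn', q, k.sum(dim=1)) + 1e-6)
    return torch.einsum('bnd,bde,bn->bne', q, context, z)
```

Because of the normalizer, each output row is still a weighted average of the values, just computed without ever forming the n-by-n similarity matrix.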
MEGABYTE-pytorch
Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch
meshgpt-pytorch
Implementation of MeshGPT, SOTA mesh generation using attention, in PyTorch
rotary-embedding-torch
Implementation of Rotary Embeddings, from the RoFormer paper, in PyTorch
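Rotary embeddings rotate feature pairs of the queries and keys by position-dependent angles, so that their dot products depend only on relative position. A minimal sketch, using the split-halves pairing for readability (the RoFormer paper interleaves adjacent dimensions; this layout is an assumption, not necessarily the repository's convention):

```python
import torch

def rotary_embed(x, base=10000):
    # x: (seq_len, dim) with dim even. Each (x1_i, x2_i) pair is
    # rotated by angle position * base^(-i / half), as in RoFormer.
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-torch.arange(half, dtype=torch.float32) / half)
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * freqs[None, :]
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, :half], x[:, half:]
    # 2D rotation applied independently to each feature pair
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)
```

Since each pair undergoes a pure rotation, vector norms are preserved and position 0 is left unchanged.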
magvit2-pytorch
Implementation of the MagViT2 tokenizer in PyTorch
ema-pytorch
A simple way to keep track of an Exponential Moving Average (EMA) version of your PyTorch model
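The EMA update itself is a one-line interpolation, ema = decay * ema + (1 - decay) * param, applied after each optimizer step. A minimal sketch of such a tracker (a hypothetical `EMA` class for illustration, not the repository's actual API):

```python
import copy
import torch

class EMA:
    """Keeps a frozen copy of a model whose parameters track an
    exponential moving average of the live model's parameters."""

    def __init__(self, model, decay=0.999):
        self.decay = decay
        self.ema_model = copy.deepcopy(model)
        self.ema_model.requires_grad_(False)

    @torch.no_grad()
    def update(self, model):
        for ema_p, p in zip(self.ema_model.parameters(), model.parameters()):
            # lerp_(p, w) computes ema_p + w * (p - ema_p),
            # i.e. decay * ema_p + (1 - decay) * p
            ema_p.lerp_(p, 1.0 - self.decay)
```

At evaluation time one would run `ema.ema_model` instead of the live model, which is the usual reason to maintain the average in the first place.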
alphafold3-pytorch
Implementation of AlphaFold 3 in PyTorch
iTransformer
Unofficial implementation of iTransformer, SOTA time series forecasting using attention networks, out of Tsinghua / Ant Group
q-transformer
Implementation of Q-Transformer, Scalable Offline Reinforcement Learning via Autoregressive Q-Functions, out of Google DeepMind
lumiere-pytorch
Implementation of Lumiere, SOTA text-to-video generation from Google DeepMind, in PyTorch
bidirectional-cross-attention
A simple cross attention that updates both the source and target in one step
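The idea can be sketched with a single similarity matrix normalized along each of its two axes, so one attention step yields an update for the source and for the target simultaneously. A minimal sketch, assuming plain dot-product attention with no heads or projections (not the repository's exact module):

```python
import torch

def bidirectional_cross_attention(src, tgt):
    # src: (batch, n, dim), tgt: (batch, m, dim)
    # One shared similarity matrix between the two sequences
    sim = torch.einsum('bnd,bmd->bnm', src, tgt)
    # Softmax over targets: each source token attends to all targets
    src_out = torch.einsum('bnm,bmd->bnd', sim.softmax(dim=-1), tgt)
    # Softmax over sources: each target token attends to all sources
    tgt_out = torch.einsum('bnm,bnd->bmd', sim.softmax(dim=-2), src)
    return src_out, tgt_out
```

Sharing one similarity computation, rather than running two separate cross-attention passes, is what makes the update a single step.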
infini-transformer-pytorch
Implementation of Infini-Transformer in PyTorch
videogigagan-pytorch
Implementation of VideoGigaGAN, SOTA video upsampling out of Adobe AI Labs, in PyTorch
self-reasoning-tokens-pytorch
Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto
nim-genetic-algorithm
A simple genetic algorithm written in Nim
crystal-genetic-algorithm
A simple toy genetic algorithm in Crystal
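The loop behind both toy genetic algorithms, selection, crossover, mutation, repeat, can be sketched in a few lines. A hypothetical string-matching example in Python (not a port of either repository):

```python
import random

TARGET = "hello world"
GENES = "abcdefghijklmnopqrstuvwxyz "

def fitness(s):
    # Number of characters matching the target string
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.1):
    # Randomly replace each character with a small probability
    return "".join(random.choice(GENES) if random.random() < rate else c
                   for c in s)

def crossover(a, b):
    # Single-point crossover of two parent strings
    cut = random.randrange(len(TARGET))
    return a[:cut] + b[cut:]

def evolve(pop_size=100, generations=500):
    pop = ["".join(random.choice(GENES) for _ in TARGET)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if pop[0] == TARGET:
            break
        # Keep the fittest fifth, refill by breeding random elite pairs
        elite = pop[: pop_size // 5]
        pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)
```

The same structure maps directly onto Nim or Crystal; only the string and random-number APIs change.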