Phil Wang's repositories
vit-pytorch
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch
imagen-pytorch
Implementation of Imagen, Google's Text-to-Image Neural Network, in Pytorch
denoising-diffusion-pytorch
Implementation of Denoising Diffusion Probabilistic Model in Pytorch
DALLE-pytorch
Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
x-transformers
A simple but complete full-attention transformer with a set of promising experimental features from various papers
lion-pytorch
🦁 Lion, a new optimizer discovered by Google Brain via evolutionary program search that is purportedly better than Adam(W), in Pytorch
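As a hedged illustration (not the repo's API), the Lion update rule from the paper can be sketched for a single scalar parameter in plain Python; the optimizer only uses the sign of an interpolated momentum, with decoupled weight decay:

```python
def sign(x):
    return (x > 0) - (x < 0)

def lion_step(p, g, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion update for a single scalar parameter.

    p: parameter, g: gradient, m: momentum state.
    The update direction is only the sign of the interpolated momentum.
    """
    update = sign(beta1 * m + (1 - beta1) * g)
    p = p - lr * (update + wd * p)   # decoupled weight decay, as in AdamW
    m = beta2 * m + (1 - beta2) * g  # momentum is an EMA of gradients
    return p, m
```

Because the update magnitude is always `lr` (plus weight decay) regardless of gradient scale, the paper recommends a smaller learning rate than Adam's.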
vector-quantize-pytorch
Vector (and Scalar) Quantization, in Pytorch
self-rewarding-lm-pytorch
Implementation of the training framework proposed in Self-Rewarding Language Models, from Meta AI
soundstorm-pytorch
Implementation of SoundStorm, Efficient Parallel Audio Generation from Google DeepMind, in Pytorch
video-diffusion-pytorch
Implementation of Video Diffusion Models, Jonathan Ho's new paper extending DDPMs to video generation, in Pytorch
muse-maskgit-pytorch
Implementation of Muse: Text-to-Image Generation via Masked Generative Transformers, in Pytorch
phenaki-pytorch
Implementation of Phenaki Video, which uses MaskGIT to produce text guided videos of up to 2 minutes in length, in Pytorch
meshgpt-pytorch
Implementation of MeshGPT, SOTA Mesh generation using Attention, in Pytorch
voicebox-pytorch
Implementation of Voicebox, new SOTA Text-to-speech network from Meta AI, in Pytorch
rotary-embedding-torch
Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch
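To show the core idea (a hedged sketch in plain Python, not the repo's torch API): RoPE rotates each consecutive pair of feature dimensions by an angle proportional to the token's position, so that attention dot products depend only on relative position:

```python
import math

def rotary_embed(x, pos, base=10000.0):
    """Apply a rotary position embedding to one token vector `x`
    (even length) at integer position `pos`.

    Each dimension pair (x[i], x[i+1]) is rotated by pos * base**(-i/d),
    i.e. lower dimensions spin faster than higher ones.
    """
    d = len(x)
    out = []
    for i in range(0, d, 2):
        theta = pos * base ** (-i / d)
        c, s = math.cos(theta), math.sin(theta)
        x1, x2 = x[i], x[i + 1]
        out += [x1 * c - x2 * s, x1 * s + x2 * c]
    return out
```

Since each pairwise rotation is orthogonal, `rotated_q · rotated_k` depends only on the offset between the two positions, which is what makes the embedding relative.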
ema-pytorch
A simple way to keep track of an Exponential Moving Average (EMA) version of your pytorch model
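The underlying mechanism is simple enough to sketch with plain Python floats standing in for tensors (a minimal illustration, not the repo's wrapper API); the shadow copy of each weight is nudged toward the live weight every step:

```python
def ema_update(shadow, params, decay=0.999):
    """Move shadow weights a (1 - decay) fraction toward current params.

    shadow, params: parallel lists of floats standing in for model tensors.
    A decay near 1.0 gives a slow, smooth average for evaluation/sampling.
    """
    return [decay * s + (1.0 - decay) * p for s, p in zip(shadow, params)]

shadow = [0.0, 0.0]
params = [1.0, 2.0]
shadow = ema_update(shadow, params, decay=0.9)  # shadow moves 10% toward params
```

In practice the shadow model, not the raw training weights, is often what gets evaluated or used for sampling in diffusion models.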
enformer-pytorch
Implementation of Enformer, DeepMind's attention network for predicting gene expression, in Pytorch
ring-attention-pytorch
Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch
CoLT5-attention
Implementation of the conditionally routed attention in the CoLT5 architecture, in Pytorch
st-moe-pytorch
Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch
lumiere-pytorch
Implementation of Lumiere, SOTA text-to-video generation from Google DeepMind, in Pytorch
soft-moe-pytorch
Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch
recurrent-interface-network-pytorch
Implementation of Recurrent Interface Network (RIN), for highly efficient generation of images and video without cascading networks, in Pytorch
flash-attention-jax
Implementation of Flash Attention in Jax
CALM-pytorch
Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind
quartic-transformer
Exploring an idea where one forgets about efficiency and carries out attention across every edge between the nodes (tokens)
genetic-algorithm-pytorch
Toy genetic algorithm in Pytorch
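For readers unfamiliar with the loop such a toy implements, here is a hedged plain-Python sketch (not the repo's code, which uses Pytorch tensors) of one classic recipe: truncation selection, single-point crossover, and per-bit mutation on bit-string genomes:

```python
import random

def evolve(fitness, pop, n_gens=100, mut_rate=0.05, seed=0):
    """Toy genetic algorithm over bit-string genomes.

    Each generation: keep the fitter half, breed children by single-point
    crossover between two random parents, then flip bits with prob mut_rate.
    Returns the fittest genome in the final population.
    """
    rng = random.Random(seed)
    for _ in range(n_gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]          # truncation selection
        children = []
        while len(children) < len(pop):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(a))      # single-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < mut_rate) for bit in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# "one-max" objective: maximize the number of ones in the genome
pop = [[random.Random(i).randint(0, 1) for _ in range(16)] for i in range(20)]
best = evolve(lambda g: sum(g), pop)
```

On one-max the population converges toward the all-ones genome; the mutation rate trades off exploration against disrupting good solutions.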
SAC-pytorch
Implementation of Soft Actor Critic and some of its improvements in Pytorch
flash-attention
Fast and memory-efficient exact attention