Phil Wang's repositories
lightweight-gan
Implementation of 'lightweight' GAN, proposed in ICLR 2021, in Pytorch. High-resolution image generation that can be trained within a day or two
alphafold2
To eventually become an unofficial Pytorch implementation / replication of Alphafold2, as details of the architecture get released
flamingo-pytorch
Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of Deepmind, in Pytorch
PaLM-pytorch
Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways
nuwa-pytorch
Implementation of NÜWA, state-of-the-art attention network for text-to-video synthesis, in Pytorch
slot-attention
Implementation of Slot Attention from GoogleAI
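The defining trick of Slot Attention is that the softmax runs over the slot axis rather than the input axis, so slots compete to explain inputs. A minimal sketch of one iteration, with the learned q/k/v projections, GRU update, LayerNorm, and MLP of the full module omitted (the toy inputs and slot values below are illustrative, not from the repository):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def slot_attention_step(slots, inputs):
    """One simplified Slot Attention iteration.

    For each input, attention is normalized over the *slots* (so slots
    compete for inputs); each slot then becomes the weighted mean of
    the inputs it attracted. The real module adds learned projections,
    a GRU state update, LayerNorm, and a residual MLP.
    """
    # dots[i][j] = <input i, slot j>
    dots = [[sum(a * b for a, b in zip(x, s)) for s in slots] for x in inputs]
    attn = [softmax(row) for row in dots]  # softmax over the slot axis
    dim = len(inputs[0])
    new_slots = []
    for j in range(len(slots)):
        weights = [attn[i][j] for i in range(len(inputs))]
        total = sum(weights) + 1e-8
        new_slots.append([
            sum(weights[i] * inputs[i][d] for i in range(len(inputs))) / total
            for d in range(dim)
        ])
    return new_slots

# two clusters of 2-D points, two slots initialized apart
inputs = [[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 5.0]]
slots = [[0.5, 0.5], [4.0, 4.0]]
for _ in range(3):
    slots = slot_attention_step(slots, inputs)
```

After a few iterations the second slot is pulled toward the far cluster while the first stays near the origin; the repository's module wraps this competition step with the learned components noted in the docstring.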
segformer-pytorch
Implementation of Segformer, Attention + MLP neural network for segmentation, in Pytorch
Adan-pytorch
Implementation of the Adan (ADAptive Nesterov momentum algorithm) Optimizer in Pytorch
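Adan tracks three exponential moving averages: the gradient, the gradient *difference* (a Nesterov-style correction), and a second moment of their combination. A simplified scalar sketch of the update, without the bias correction of the full optimizer (hyperparameter values here are illustrative, and the betas weight the new term rather than the running average):

```python
import math

def adan_step(theta, grad, prev_grad, m, v, n,
              lr=0.1, beta1=0.02, beta2=0.08, beta3=0.01,
              weight_decay=0.0, eps=1e-8):
    """One simplified Adan step on a scalar parameter (no bias correction)."""
    diff = grad - prev_grad
    m = (1 - beta1) * m + beta1 * grad                 # EMA of gradient
    v = (1 - beta2) * v + beta2 * diff                 # EMA of gradient difference
    n = (1 - beta3) * n + beta3 * (grad + (1 - beta2) * diff) ** 2  # second moment
    eta = lr / (math.sqrt(n) + eps)                    # adaptive step size
    theta = (theta - eta * (m + (1 - beta2) * v)) / (1 + weight_decay * lr)
    return theta, m, v, n

# minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3)
x, m, v, n, prev_g = 0.0, 0.0, 0.0, 0.0, 0.0
for _ in range(500):
    g = 2 * (x - 3)
    x, m, v, n = adan_step(x, g, prev_g, m, v, n)
    prev_g = g
```

The repository packages this as a standard `torch.optim`-style optimizer over tensors; the scalar loop above only illustrates the three-moment structure of the update.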
se3-transformer-pytorch
Implementation of SE3-Transformers for Equivariant Self-Attention, in Pytorch. This specific repository is geared towards integration with an eventual Alphafold2 replication.
flash-cosine-sim-attention
Implementation of fused cosine similarity attention in the same style as Flash Attention
res-mlp-pytorch
Implementation of ResMLP, an all-MLP solution to image classification, in Pytorch
chroma-pytorch
Implementation of Chroma, generative models of proteins using DDPMs and GNNs, in Pytorch
invariant-point-attention
Implementation of Invariant Point Attention, used for coordinate refinement in the structure module of Alphafold2, as a standalone Pytorch module
gated-state-spaces-pytorch
Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch
perceiver-ar-pytorch
Implementation of Perceiver AR, Deepmind's new long-context attention network based on the Perceiver architecture, in Pytorch
n-grammer-pytorch
Implementation of N-Grammer, augmenting Transformers with latent n-grams, in Pytorch
memory-compressed-attention
Implementation of Memory-Compressed Attention, from the paper "Generating Wikipedia By Summarizing Long Sequences"
transframer-pytorch
Implementation of Transframer, Deepmind's U-net + Transformer architecture for video generation of up to 30 seconds, in Pytorch
adjacent-attention-network
Graph neural network message passing reframed as a Transformer with local attention
equiformer-diffusion
Implementation of Denoising Diffusion for protein design, but using the new Equiformer (successor to SE3 Transformers) with some additional improvements
isab-pytorch
An implementation of the Induced Set Attention Block (ISAB), from the Set Transformer paper
einops-exts
Implementation of some personal helper functions for Einops, my favorite tensor manipulation library ❤️
memory-editable-transformer
My explorations into editing the knowledge and memories of an attention network
bitsandbytes
8-bit CUDA functions for PyTorch
nucleotide-transformer
🧬 Nucleotide Transformer: Building and Evaluating Robust Foundation Models for Human Genomics
RFdiffusion
Code for running RFdiffusion