Phil Wang (lucidrains)

Location: San Francisco

Home Page: lucidrains.github.io

Twitter: @lucidrains

Phil Wang's repositories

vit-pytorch

Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch

Language: Python · License: MIT · Stargazers: 18,533 · Issues: 143 · Issues: 258

DALLE2-pytorch

Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in Pytorch

Language: Python · License: MIT · Stargazers: 10,932 · Issues: 122 · Issues: 207

denoising-diffusion-pytorch

Implementation of Denoising Diffusion Probabilistic Model in Pytorch

Language: Python · License: MIT · Stargazers: 7,346 · Issues: 32 · Issues: 274

x-transformers

A simple but complete full-attention transformer with a set of promising experimental features from various papers

Language: Python · License: MIT · Stargazers: 4,271 · Issues: 52 · Issues: 198

vector-quantize-pytorch

Vector (and Scalar) Quantization, in Pytorch

Language: Python · License: MIT · Stargazers: 2,050 · Issues: 31 · Issues: 98
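
The core of vector quantization can be sketched in a few lines of plain Python — an illustrative nearest-neighbour codebook lookup, not this repo's actual API:

```python
# Minimal vector quantization: map an input vector to its nearest
# codebook entry under squared L2 distance (illustrative sketch only).

def quantize(vec, codebook):
    """Return (index, code) of the codebook entry closest to vec."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    index = min(range(len(codebook)), key=lambda i: sq_dist(vec, codebook[i]))
    return index, codebook[index]

codebook = [[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0]]
idx, code = quantize([0.9, 1.2], codebook)
# idx == 1, code == [1.0, 1.0]
```

The library adds the pieces a sketch like this omits: codebook learning (e.g. EMA updates), the straight-through gradient estimator, and the scalar-quantization variants.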

lion-pytorch

🦁 Lion, a new optimizer discovered by Google Brain using genetic algorithms, purportedly better than Adam(W), in Pytorch

Language: Python · License: MIT · Stargazers: 1,938 · Issues: 15 · Issues: 23
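
Lion's update rule — the sign of an interpolated momentum, plus decoupled weight decay — can be sketched for a single scalar parameter. Hyperparameter names follow the paper, not necessarily this repo's API:

```python
# Sketch of one Lion step on a scalar parameter (illustrative only).
import math

def lion_step(p, grad, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    # interpolate momentum and gradient, then take only the sign
    update = math.copysign(1.0, beta1 * m + (1 - beta1) * grad)
    p = p - lr * (update + wd * p)       # signed update + decoupled weight decay
    m = beta2 * m + (1 - beta2) * grad   # momentum is updated *after* the step
    return p, m

p, m = 1.0, 0.0
p, m = lion_step(p, grad=0.5, m=m, lr=0.1)
# p == 0.9 (sign of the interpolation was +1)
```

Because the update magnitude is always 1, Lion typically wants a smaller learning rate and larger weight decay than AdamW.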

byol-pytorch

Usable Implementation of "Bootstrap Your Own Latent" self-supervised learning, from Deepmind, in Pytorch

Language: Python · License: MIT · Stargazers: 1,709 · Issues: 25 · Issues: 80

gigagan-pytorch

Implementation of GigaGAN, a new SOTA GAN out of Adobe and the culmination of nearly a decade of research into GANs

Language: Python · License: MIT · Stargazers: 1,646 · Issues: 72 · Issues: 46

linear-attention-transformer

Transformer based on a variant of attention with linear complexity with respect to sequence length

Language: Python · License: MIT · Stargazers: 627 · Issues: 12 · Issues: 19
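
The linear-complexity trick rests on associativity: with a feature map φ in place of softmax, (φ(Q)φ(K)ᵀ)V equals φ(Q)(φ(K)ᵀV), and the second grouping costs O(n) in sequence length. A tiny pure-Python check (normalization omitted, values illustrative):

```python
# Verify that the two matmul groupings of linear attention agree.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

phi_q = [[1.0, 2.0], [0.5, 1.0], [2.0, 0.5]]   # phi(Q): n=3 tokens, d=2
phi_k = [[1.0, 0.5], [2.0, 1.0], [0.5, 2.0]]   # phi(K)
v     = [[1.0], [2.0], [3.0]]                  # V

left  = matmul(matmul(phi_q, transpose(phi_k)), v)  # O(n^2) grouping
right = matmul(phi_q, matmul(transpose(phi_k), v))  # O(n) grouping
# left == right up to float error
```

The right-hand grouping first reduces keys and values to a d×d summary, so cost grows linearly in n rather than quadratically.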

alphafold3-pytorch

Implementation of Alphafold 3 in Pytorch

Language: Python · License: MIT · Stargazers: 599 · Issues: 41 · Issues: 5

meshgpt-pytorch

Implementation of MeshGPT, SOTA Mesh generation using Attention, in Pytorch

Language: Python · License: MIT · Stargazers: 589 · Issues: 18 · Issues: 67

magvit2-pytorch

Implementation of MagViT2 Tokenizer in Pytorch

Language: Python · License: MIT · Stargazers: 459 · Issues: 29 · Issues: 32

rotary-embedding-torch

Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch

Language: Python · License: MIT · Stargazers: 443 · Issues: 10 · Issues: 19
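
The idea can be sketched in two dimensions: each feature pair is rotated by an angle proportional to its position, so query–key dot products depend only on relative position (an illustrative sketch, not this repo's API):

```python
# Rotate a 2-D feature pair by pos * theta radians, then check that
# <R(m)q, R(n)k> depends only on the offset n - m.
import math

def rotate(x, pos, theta=1.0):
    a = pos * theta
    return [x[0] * math.cos(a) - x[1] * math.sin(a),
            x[0] * math.sin(a) + x[1] * math.cos(a)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

q, k = [1.0, 0.5], [0.3, 2.0]
s1 = dot(rotate(q, 3), rotate(k, 5))    # positions 3 and 5: offset 2
s2 = dot(rotate(q, 10), rotate(k, 12))  # positions 10 and 12: offset 2
# s1 ≈ s2 — the score only sees the relative distance
```

This works because rotations are orthogonal: R(m)ᵀR(n) = R(n − m). The full scheme applies a different rotation frequency to each feature pair.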

ema-pytorch

A simple way to keep track of an Exponential Moving Average (EMA) version of your Pytorch model

Language: Python · License: MIT · Stargazers: 428 · Issues: 4 · Issues: 11
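
The underlying update is a one-liner per parameter. A plain-Python sketch over a dict of scalars (the repo itself wraps an nn.Module, so all names here are illustrative):

```python
# Minimal EMA tracker: shadow = decay * shadow + (1 - decay) * param.

class EMA:
    def __init__(self, params, decay=0.99):
        self.decay = decay
        self.shadow = dict(params)  # copy of the initial parameters

    def update(self, params):
        for name, value in params.items():
            self.shadow[name] = self.decay * self.shadow[name] + (1 - self.decay) * value

ema = EMA({"w": 0.0}, decay=0.9)
ema.update({"w": 1.0})
# ema.shadow["w"] == 0.1
```

The EMA weights are typically used for evaluation and sampling, while training continues on the raw weights.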

iTransformer

Unofficial implementation of iTransformer - SOTA Time Series Forecasting using Attention networks, out of Tsinghua / Ant Group

Language: Python · License: MIT · Stargazers: 360 · Issues: 7 · Issues: 20

BS-RoFormer

Implementation of Band Split Roformer, SOTA Attention network for music source separation out of ByteDance AI Labs

Language: Python · License: MIT · Stargazers: 311 · Issues: 11 · Issues: 25

q-transformer

Implementation of Q-Transformer, Scalable Offline Reinforcement Learning via Autoregressive Q-Functions, out of Google Deepmind

Language: Python · License: MIT · Stargazers: 297 · Issues: 6 · Issues: 9

classifier-free-guidance-pytorch

Implementation of Classifier Free Guidance in Pytorch, with emphasis on text conditioning, and flexibility to include multiple text embedding models

Language: Python · License: MIT · Stargazers: 289 · Issues: 9 · Issues: 4
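
The guidance formula itself is simple: guided = uncond + scale · (cond − uncond). A minimal sketch on plain lists (names and the scale value are illustrative, not this repo's API):

```python
# Classifier-free guidance: extrapolate from the unconditional
# prediction toward the conditional one by a guidance scale.

def guide(uncond, cond, scale=3.0):
    return [u + scale * (c - u) for u, c in zip(uncond, cond)]

out = guide([0.0, 1.0], [1.0, 1.0], scale=2.0)
# out == [2.0, 1.0]
```

A scale of 1 recovers the conditional prediction; larger scales trade diversity for adherence to the conditioning text.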

st-moe-pytorch

Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch

Language: Python · License: MIT · Stargazers: 243 · Issues: 5 · Issues: 11

lumiere-pytorch

Implementation of Lumiere, SOTA text-to-video generation from Google Deepmind, in Pytorch

Language: Python · License: MIT · Stargazers: 224 · Issues: 24 · Issues: 4

En-transformer

Implementation of E(n)-Transformer, which incorporates attention mechanisms into Welling's E(n)-Equivariant Graph Neural Network

Language: Python · License: MIT · Stargazers: 207 · Issues: 6 · Issues: 12

mmdit

Implementation of a single layer of the MMDiT, proposed in Stable Diffusion 3, in Pytorch

Language: Python · License: MIT · Stargazers: 172 · Issues: 3 · Issues: 0

bidirectional-cross-attention

A simple cross attention that updates both the source and target in one step

Language: Python · License: MIT · Stargazers: 136 · Issues: 4 · Issues: 2

pytorch-custom-utils

Just some miscellaneous utility functions / decorators / modules related to Pytorch and Accelerate to help speed up implementation of new AI research

Language: Python · License: MIT · Stargazers: 106 · Issues: 8 · Issues: 0

infini-transformer-pytorch

Implementation of Infini-Transformer in Pytorch

Language: Python · License: MIT · Stargazers: 94 · Issues: 3 · Issues: 0

self-reasoning-tokens-pytorch

Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto

Language: Python · License: MIT · Stargazers: 51 · Issues: 6 · Issues: 0

frame-averaging-pytorch

Pytorch implementation of a simple way to enable (Stochastic) Frame Averaging for any network

Language: Python · License: MIT · Stargazers: 41 · Issues: 2 · Issues: 0

mogrifier

Usable implementation of Mogrifier, a circuit for enhancing LSTMs and potentially other networks, from Deepmind

Language: Python · License: MIT · Stargazers: 11 · Issues: 4 · Issues: 0

nim-genetic-algorithm

A simple genetic algorithm written in Nim

Language: Nim · License: MIT · Stargazers: 7 · Issues: 2 · Issues: 1
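
The core loop of such a toy genetic algorithm, sketched here in Python rather than Nim (all names and parameters are illustrative): evolve bitstrings toward all-ones via selection, crossover, and mutation.

```python
# Toy genetic algorithm: maximize the number of 1-bits in a genome.
import random

random.seed(0)

TARGET_LEN = 12

def fitness(genome):
    return sum(genome)  # count of 1-bits

def evolve(pop_size=20, generations=60, mut_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # selection: keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, TARGET_LEN)
            child = a[:cut] + b[cut:]           # single-point crossover
            child = [1 - g if random.random() < mut_rate else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Lion (above) was reportedly discovered with a far more elaborate version of this same evolve-and-select loop, searching over optimizer programs instead of bitstrings.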

crystal-genetic-algorithm

A simple toy genetic algorithm in Crystal

Language: Crystal · License: MIT · Stargazers: 4 · Issues: 1 · Issues: 0