Phil Wang (lucidrains)

Location: San Francisco

Home Page: lucidrains.github.io

Twitter: @lucidrains

Phil Wang's repositories

reformer-pytorch

Reformer, the efficient Transformer, in Pytorch

Language: Python · License: MIT · Stargazers: 2062 · Watchers: 53 · Issues: 121

nuwa-pytorch

Implementation of NÜWA, state-of-the-art attention network for text-to-video synthesis, in Pytorch

Language: Python · License: MIT · Stargazers: 535 · Watchers: 23 · Issues: 9

slot-attention

Implementation of Slot Attention, from Google AI

Language: Python · License: MIT · Stargazers: 358 · Watchers: 11 · Issues: 6
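
For context, Slot Attention lets a small set of learned "slots" compete for input features via a softmax across slots, then refines each slot with a recurrent update. A condensed PyTorch sketch of that loop follows; class and hyperparameter names are illustrative, the paper's LayerNorms and learned slot variance are omitted, and this is not the repo's exact API:

    import torch
    from torch import nn

    class SlotAttention(nn.Module):
        # Condensed, illustrative sketch of the Slot Attention update loop.
        def __init__(self, dim, num_slots=5, iters=3):
            super().__init__()
            self.slots = nn.Parameter(torch.randn(1, num_slots, dim))
            self.to_q = nn.Linear(dim, dim)
            self.to_k = nn.Linear(dim, dim)
            self.to_v = nn.Linear(dim, dim)
            self.gru = nn.GRUCell(dim, dim)
            self.iters = iters
            self.scale = dim ** -0.5

        def forward(self, x):                        # x: (batch, n, dim)
            b, d = x.shape[0], x.shape[-1]
            slots = self.slots.expand(b, -1, -1)
            k, v = self.to_k(x), self.to_v(x)
            for _ in range(self.iters):
                q = self.to_q(slots)
                # softmax over the slot dimension: slots compete for inputs
                attn = (q @ k.transpose(1, 2) * self.scale).softmax(dim=1)
                attn = attn / (attn.sum(dim=-1, keepdim=True) + 1e-8)
                updates = attn @ v                   # (batch, slots, dim)
                slots = self.gru(updates.reshape(-1, d),
                                 slots.reshape(-1, d)).reshape(b, -1, d)
            return slots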

conformer

Implementation of the convolutional module from the Conformer paper, for use in Transformers

Language: Python · License: MIT · Stargazers: 339 · Watchers: 9 · Issues: 12
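
The module in question is the paper's sandwich of pointwise and depthwise convolutions. A rough single-block sketch under assumed defaults (the paper's pre-LayerNorm and dropout are omitted; this is not the repo's exact API):

    import torch
    from torch import nn

    class ConformerConvModule(nn.Module):
        # Illustrative sketch: pointwise conv + GLU, depthwise conv,
        # BatchNorm, SiLU, pointwise conv, with a residual connection.
        def __init__(self, dim, kernel_size=31):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(dim, dim * 2, 1),
                nn.GLU(dim=1),
                nn.Conv1d(dim, dim, kernel_size,
                          padding=kernel_size // 2, groups=dim),  # depthwise
                nn.BatchNorm1d(dim),
                nn.SiLU(),
                nn.Conv1d(dim, dim, 1),
            )

        def forward(self, x):                 # x: (batch, n, dim)
            return self.net(x.transpose(1, 2)).transpose(1, 2) + x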

segformer-pytorch

Implementation of SegFormer, an attention + MLP neural network for segmentation, in Pytorch

Language: Python · License: MIT · Stargazers: 318 · Watchers: 9 · Issues: 12

se3-transformer-pytorch

Implementation of SE3-Transformers for Equivariant Self-Attention, in Pytorch. This specific repository is geared towards integration with an eventual Alphafold2 replication.

Language: Python · License: MIT · Stargazers: 244 · Watchers: 11 · Issues: 15

electra-pytorch

A simple and working implementation of Electra, the fastest way to pretrain language models from scratch, in Pytorch

Language: Python · License: MIT · Stargazers: 218 · Watchers: 9 · Issues: 11

jax2torch

Use Jax functions in Pytorch

Language: Python · License: MIT · Stargazers: 216 · Watchers: 5 · Issues: 3
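
Typical usage, adapted from the repo's README (the `jax2torch` wrapper name is assumed from there): a jitted Jax function is wrapped so it accepts and returns Torch tensors, with gradients flowing back through torch.autograd:

    import jax
    import torch
    from jax2torch import jax2torch  # wrapper function assumed from the repo

    @jax.jit
    def jax_pow(x, y):
        return x ** y

    torch_pow = jax2torch(jax_pow)   # now takes and returns torch tensors

    x = torch.tensor([1., 2., 3.], requires_grad=True)
    z = torch_pow(x, torch.tensor(2.))  # tensor([1., 4., 9.])
    z.sum().backward()                  # gradients flow through the Jax function
    print(x.grad)                       # tensor([2., 4., 6.])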

flash-cosine-sim-attention

Implementation of fused cosine similarity attention in the same style as Flash Attention

Language: Cuda · License: MIT · Stargazers: 195 · Watchers: 12 · Issues: 10

res-mlp-pytorch

Implementation of ResMLP, an all-MLP solution to image classification, in Pytorch

Language: Python · License: MIT · Stargazers: 192 · Watchers: 4 · Issues: 3
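
As a sketch of what "all-MLP" means here: each block mixes tokens with a plain linear layer across patches, then mixes channels with an MLP. In the sketch below, the paper's Affine layers are swapped for LayerNorm for brevity; it is illustrative, not the repo's API:

    import torch
    from torch import nn

    class ResMLPBlock(nn.Module):
        # Illustrative sketch: token mixing across patches via a linear layer,
        # then channel mixing via an MLP, each with a residual connection.
        def __init__(self, num_patches, dim):
            super().__init__()
            self.norm1 = nn.LayerNorm(dim)   # paper uses Affine; LayerNorm for brevity
            self.token_mix = nn.Linear(num_patches, num_patches)
            self.norm2 = nn.LayerNorm(dim)
            self.channel_mlp = nn.Sequential(
                nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))

        def forward(self, x):                # x: (batch, patches, dim)
            x = x + self.token_mix(self.norm1(x).transpose(1, 2)).transpose(1, 2)
            return x + self.channel_mlp(self.norm2(x))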

graph-transformer-pytorch

Implementation of Graph Transformer in Pytorch, for potential use in replicating Alphafold2

Language: Python · License: MIT · Stargazers: 177 · Watchers: 4 · Issues: 2

chroma-pytorch

Implementation of Chroma, a generative model of proteins using DDPM and GNNs, in Pytorch

Language: Python · License: MIT · Stargazers: 155 · Watchers: 20 · Issues: 3

gated-state-spaces-pytorch

Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch

Language: Python · License: MIT · Stargazers: 94 · Watchers: 6 · Issues: 3

perceiver-ar-pytorch

Implementation of Perceiver AR, DeepMind's long-context attention network based on the Perceiver architecture, in Pytorch

Language: Python · License: MIT · Stargazers: 85 · Watchers: 4 · Issues: 8

rvq-vae-gpt

My attempts at applying the SoundStream design to learned tokenization of text, then applying hierarchical attention to text generation

Language: Python · License: MIT · Stargazers: 75 · Watchers: 5 · Issues: 0

memory-compressed-attention

Implementation of Memory-Compressed Attention, from the paper "Generating Wikipedia by Summarizing Long Sequences"

Language: Python · License: MIT · Stargazers: 72 · Watchers: 2 · Issues: 4
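
The core idea from the paper is to shorten the keys and values with a strided convolution before attending, so attention costs roughly O(n·n/c) instead of O(n²). A minimal single-head, non-causal sketch (the paper and repo also handle causal masking and multiple heads):

    import torch
    from torch import nn

    class MemoryCompressedAttention(nn.Module):
        # Illustrative sketch: keys and values are compressed along the
        # sequence with a strided convolution before standard attention.
        def __init__(self, dim, compress=3):
            super().__init__()
            self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
            self.compress_k = nn.Conv1d(dim, dim, compress, stride=compress)
            self.compress_v = nn.Conv1d(dim, dim, compress, stride=compress)

        def forward(self, x):                           # x: (batch, n, dim)
            q, k, v = self.to_qkv(x).chunk(3, dim=-1)
            k = self.compress_k(k.transpose(1, 2)).transpose(1, 2)  # (b, n/c, d)
            v = self.compress_v(v.transpose(1, 2)).transpose(1, 2)
            attn = (q @ k.transpose(1, 2) / q.shape[-1] ** 0.5).softmax(dim=-1)
            return attn @ v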

n-grammer-pytorch

Implementation of N-Grammer, augmenting Transformers with latent n-grams, in Pytorch

Language: Python · License: MIT · Stargazers: 72 · Watchers: 7 · Issues: 3

adjacent-attention-network

Graph neural network message passing reframed as a Transformer with local attention

Language: Python · License: MIT · Stargazers: 61 · Watchers: 6 · Issues: 0
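
The reframing can be stated in a few lines: run ordinary self-attention, but mask the score matrix with the graph's adjacency so each node only aggregates messages from its neighbors. An illustrative dense-adjacency sketch (not the repo's API):

    import torch
    from torch import nn

    class AdjacentAttention(nn.Module):
        # Illustrative sketch: self-attention masked by the adjacency matrix,
        # so attention aggregation coincides with GNN message passing.
        def __init__(self, dim, heads=4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, x, adj):   # x: (b, n, dim), adj: (b, n, n) bool
            eye = torch.eye(adj.shape[-1], dtype=torch.bool, device=adj.device)
            mask = ~(adj | eye)      # add self-loops; True entries are not attended
            mask = mask.repeat_interleave(self.attn.num_heads, dim=0)
            out, _ = self.attn(x, x, x, attn_mask=mask)
            return out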

equiformer-diffusion

Implementation of denoising diffusion for protein design, but using the new Equiformer (the successor to SE3 Transformers) with some additional improvements

License: MIT · Stargazers: 55 · Watchers: 13 · Issues: 0

isab-pytorch

An implementation of the (Induced) Set Attention Block, from the Set Transformer paper

Language: Python · License: MIT · Stargazers: 55 · Watchers: 6 · Issues: 1
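
For reference, an ISAB routes attention through m learned inducing points: the points first attend to the full set, then the set attends to that summary, reducing cost from O(n²) to O(nm). A minimal sketch (illustrative, not the repo's API):

    import torch
    from torch import nn

    class ISAB(nn.Module):
        # Illustrative sketch of an Induced Set Attention Block.
        def __init__(self, dim, num_induced=32, heads=4):
            super().__init__()
            self.induced = nn.Parameter(torch.randn(num_induced, dim))
            self.attn1 = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.attn2 = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, x):                   # x: (batch, n, dim)
            i = self.induced.expand(x.shape[0], -1, -1)
            h, _ = self.attn1(i, x, x)          # inducing points summarize the set
            out, _ = self.attn2(x, h, h)        # the set queries the summary
            return out

    x = torch.randn(2, 128, 64)
    print(ISAB(64)(x).shape)                    # torch.Size([2, 128, 64])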

flash-genomics-model

My own attempt at a long-context genomics model, leveraging recent advances in long-context attention modeling (Flash Attention plus other hierarchical methods)

Language: Python · License: MIT · Stargazers: 52 · Watchers: 6 · Issues: 3

einops-exts

Implementation of some personal helper functions for Einops, my favorite tensor manipulation library ❤️

Language: Python · License: MIT · Stargazers: 51 · Watchers: 4 · Issues: 1
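
A representative helper applies one einops pattern to several tensors at once; the helper name and signature below are assumed from the repo's README:

    import torch
    from einops_exts import rearrange_many  # helper assumed from the repo's README

    q, k, v = (torch.randn(2, 8, 1024, 64) for _ in range(3))
    # the same rearrange pattern applied to all three tensors in one call
    q, k, v = rearrange_many((q, k, v), 'b h n d -> b n (h d)')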

autoregressive-linear-attention-cuda

CUDA implementation of autoregressive linear attention, incorporating the latest research findings

Language: Python · License: MIT · Stargazers: 44 · Watchers: 4 · Issues: 0

memory-editable-transformer

My explorations into editing the knowledge and memories of an attention network

License: MIT · Stargazers: 35 · Watchers: 5 · Issues: 0

open_clip

An open source implementation of CLIP.

Language: Python · License: NOASSERTION · Stargazers: 9 · Watchers: 1 · Issues: 0
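
Typical usage looks like the following; the model name and pretrained tag are examples, so consult the repo for the currently available weights:

    import torch
    import open_clip

    # example model/pretrained tags; see the repo for the current list
    model, _, preprocess = open_clip.create_model_and_transforms(
        'ViT-B-32', pretrained='laion2b_s34b_b79k')
    tokenizer = open_clip.get_tokenizer('ViT-B-32')

    text = tokenizer(['a photo of a dog', 'a photo of a cat'])
    with torch.no_grad():
        text_features = model.encode_text(text)  # text embeddings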

bitsandbytes

8-bit CUDA functions for PyTorch

Language: Python · License: MIT · Stargazers: 4 · Watchers: 1 · Issues: 0
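
Its 8-bit optimizers are drop-in replacements for their torch.optim counterparts, keeping optimizer state in 8 bits to cut memory (requires a CUDA build):

    import torch
    import bitsandbytes as bnb

    model = torch.nn.Linear(512, 512).cuda()
    # same signature as torch.optim.Adam, but state is stored in 8 bits
    optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-3)

    loss = model(torch.randn(8, 512, device='cuda')).sum()
    loss.backward()
    optimizer.step()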

Nim

Nim is a statically typed compiled systems programming language. It combines successful concepts from mature languages like Python, Ada and Modula. Its design focuses on efficiency, expressiveness, and elegance (in that order of priority).

Language: Nim · License: NOASSERTION · Stargazers: 3 · Watchers: 1 · Issues: 0

nucleotide-transformer

🧬 Nucleotide Transformer: Building and Evaluating Robust Foundation Models for Human Genomics

Language: Python · License: NOASSERTION · Stargazers: 3 · Watchers: 1 · Issues: 0

CLIP

Contrastive Language-Image Pretraining

Language: Jupyter Notebook · License: MIT · Stargazers: 2 · Watchers: 1 · Issues: 0
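
The pretraining objective behind CLIP fits in a few lines: embed images and texts, normalize, and apply a symmetric cross-entropy where matching pairs lie on the diagonal of the similarity matrix (an illustrative sketch, not the repo's code):

    import torch
    import torch.nn.functional as F

    def clip_loss(image_emb, text_emb, temperature=0.07):
        # matching image/text pairs sit on the diagonal of the logits matrix
        image_emb = F.normalize(image_emb, dim=-1)
        text_emb = F.normalize(text_emb, dim=-1)
        logits = image_emb @ text_emb.t() / temperature
        labels = torch.arange(len(logits), device=logits.device)
        return (F.cross_entropy(logits, labels) +
                F.cross_entropy(logits.t(), labels)) / 2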

RFdiffusion

Code for running RFdiffusion

Language: Python · License: NOASSERTION · Stargazers: 2 · Watchers: 1 · Issues: 0