Mahdi Pourmirzaei's starred repositories

pytorch-lightning

Pretrain, finetune, and deploy AI models on multiple GPUs and TPUs with zero code changes.

Language: Python · License: Apache-2.0 · Stars: 27,742 · Issues: 0

protein-structure-tokenizer

Official implementation of "Learning the language of protein structures"

Language: Python · License: Apache-2.0 · Stars: 3 · Issues: 0

ProteinWorkshop

Benchmarking framework for protein representation learning. Includes a large number of pre-training and downstream task datasets, models and training/task utilities. (ICLR 2024)

Language: Python · License: MIT · Stars: 187 · Issues: 0

llama-models

Utilities intended for use with Llama models.

Language: Python · License: NOASSERTION · Stars: 3,383 · Issues: 0

ttt-lm-pytorch

Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States

Language: Python · License: MIT · Stars: 923 · Issues: 0
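The paper's idea, an RNN whose hidden state is itself a small model updated by a self-supervised gradient step on every token, can be cartooned in a few lines. This is a scalar toy sketch, not the paper's architecture; `ttt_scan` and its reconstruction loss are illustrative only:

```python
# Toy scalar illustration of the test-time-training idea: the "hidden
# state" is a single weight w that takes one gradient-descent step per
# token on a self-supervised reconstruction loss (w * x - x)^2.
# A cartoon of the mechanism, not the paper's architecture.

def ttt_scan(tokens, w=0.0, lr=0.1):
    outputs = []
    for x in tokens:
        grad = 2 * x * (w * x - x)   # d/dw of (w * x - x)^2
        w = w - lr * grad            # update the hidden state by SGD
        outputs.append(w * x)        # output uses the updated state
    return outputs, w

outs, w_final = ttt_scan([1.0] * 50)
# After enough identical tokens, w has learned to reconstruct them (w -> 1).
```

Because the state update is a gradient step rather than a fixed recurrence, the hidden state grows more expressive as the model used for it grows, which is the point the paper makes with linear and MLP variants.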

TTS

🐸💬 A deep learning toolkit for Text-to-Speech, battle-tested in research and production

Language: Python · License: MPL-2.0 · Stars: 32,688 · Issues: 0

schedule_free

Schedule-Free Optimization in PyTorch

Language: Python · License: Apache-2.0 · Stars: 1,763 · Issues: 0
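Schedule-free methods replace the learning-rate schedule with interpolation and averaging: gradients are evaluated at a point y interpolated between a base iterate z and a running average x. A pure-Python sketch on a 1-D quadratic, following my reading of the update rule (the repository itself ships drop-in PyTorch optimizers, which this is not):

```python
# Pure-Python sketch of schedule-free SGD on f(x) = (x - 3)^2.
# z is the base SGD iterate, x the running average returned to the user,
# and y the interpolation point where gradients are evaluated.

def schedule_free_sgd(grad, x0, lr=0.1, beta=0.9, steps=1000):
    z = x = x0
    for t in range(1, steps + 1):
        y = (1 - beta) * z + beta * x   # gradient is taken at y, not z or x
        z = z - lr * grad(y)            # plain SGD step on the base iterate
        c = 1.0 / t                     # uniform averaging weight
        x = (1 - c) * x + c * z         # x tracks the mean of the z iterates
    return x

x_star = schedule_free_sgd(lambda y: 2.0 * (y - 3.0), x0=0.0)
# x_star converges to the minimizer x = 3 with no schedule to tune.
```

The only moving parts are a fixed learning rate and the interpolation constant beta; there is no decay schedule and no horizon to specify, which is the property the repository's name refers to.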

mergekit

Tools for merging pretrained large language models.

Language: Python · License: LGPL-3.0 · Stars: 4,321 · Issues: 0
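Of the merge methods mergekit supports, the simplest to illustrate is a weighted linear average of checkpoints. The sketch below uses plain dicts of floats in place of real state dicts; `linear_merge` is an illustrative helper, not mergekit's API:

```python
# Illustrative only: a weighted linear average of model parameters,
# the simplest kind of merge. Checkpoints are plain dicts of floats
# here rather than torch state_dicts.

def linear_merge(checkpoints, weights):
    assert len(checkpoints) == len(weights)
    total = sum(weights)
    merged = {}
    for name in checkpoints[0]:
        merged[name] = sum(
            w * ckpt[name] for w, ckpt in zip(weights, checkpoints)
        ) / total
    return merged

a = {"layer.w": 1.0, "layer.b": 0.0}
b = {"layer.w": 3.0, "layer.b": 2.0}
print(linear_merge([a, b], [0.5, 0.5]))  # {'layer.w': 2.0, 'layer.b': 1.0}
```

Real merges add refinements on top of this (spherical interpolation, sign-resolution schemes such as TIES), but they all reduce to combining same-shaped parameter tensors entry by entry.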

ProSST

Code for ProSST: A Pre-trained Protein Sequence and Structure Transformer with Disentangled Attention.

Language: Python · License: GPL-3.0 · Stars: 32 · Issues: 0

ml-4m

4M: Massively Multimodal Masked Modeling

Language: Python · License: Apache-2.0 · Stars: 1,505 · Issues: 0

LLM101n

LLM101n: Let's build a Storyteller

Stars: 26,935 · Issues: 0

Uni-Mol

Official Repository for the Uni-Mol Series Methods

Language: Python · License: MIT · Stars: 637 · Issues: 0

nablaDFT

nablaDFT: Large-Scale Conformational Energy and Hamiltonian Prediction benchmark and dataset

Language: Python · License: MIT · Stars: 154 · Issues: 0

magvit

Official JAX implementation of MAGVIT: Masked Generative Video Transformer

Language: Python · License: Apache-2.0 · Stars: 926 · Issues: 0

Open-Sora

Open-Sora: Democratizing Efficient Video Production for All

Language: Python · License: Apache-2.0 · Stars: 21,217 · Issues: 0

chroma

A generative model for programmable protein design

Language: Python · License: Apache-2.0 · Stars: 648 · Issues: 0

alphafold3-pytorch

Implementation of AlphaFold 3 in PyTorch

Language: Python · License: MIT · Stars: 785 · Issues: 0

magvit2-pytorch

Implementation of the MagViT2 tokenizer in PyTorch

Language: Python · License: MIT · Stars: 513 · Issues: 0

En-transformer

Implementation of E(n)-Transformer, which incorporates attention mechanisms into Welling's E(n)-Equivariant Graph Neural Network

Language: Python · License: MIT · Stars: 206 · Issues: 0

egnn-pytorch

Implementation of E(n)-Equivariant Graph Neural Networks in PyTorch

Language: Python · License: MIT · Stars: 401 · Issues: 0
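The E(n)-equivariance these repositories implement can be demonstrated with a toy layer: edge messages depend only on invariant squared distances, and coordinate updates point along difference vectors, so rotating the inputs rotates the outputs. A pure-Python sketch in 2-D with scalar node features, where `phi_e`/`phi_x`/`phi_h` are stand-in scalar functions rather than learned MLPs:

```python
import math

# Toy sketch of an E(n)-equivariant layer in the spirit of EGNN
# (Satorras, Hoogeboom & Welling, 2021). Not the repository's code:
# 2-D coordinates, scalar features, hand-picked stand-in "MLPs".

def egnn_layer(h, x):
    phi_e = lambda hi, hj, d2: math.tanh(hi + hj - d2)  # edge message
    phi_x = lambda m: 0.1 * m                           # coordinate weight
    phi_h = lambda hi, m: hi + m                        # feature update
    h_new, x_new = [], []
    for i in range(len(h)):
        m_sum, dx, dy = 0.0, 0.0, 0.0
        for j in range(len(h)):
            if i == j:
                continue
            ex, ey = x[i][0] - x[j][0], x[i][1] - x[j][1]
            m = phi_e(h[i], h[j], ex * ex + ey * ey)  # invariant distance only
            m_sum += m
            dx += ex * phi_x(m)                       # update along x_i - x_j
            dy += ey * phi_x(m)
        h_new.append(phi_h(h[i], m_sum))
        x_new.append((x[i][0] + dx, x[i][1] + dy))
    return h_new, x_new

def rot(p, a):
    c, s = math.cos(a), math.sin(a)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

h = [0.5, -0.2, 1.0]
x = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
h1, x1 = egnn_layer(h, x)
h2, x2 = egnn_layer(h, [rot(p, 0.7) for p in x])
# Rotating the inputs rotates the output coordinates and leaves features fixed.
assert all(abs(a - b) < 1e-9 for a, b in zip(h1, h2))
assert all(abs(u - v) < 1e-9 for p, q in zip(x1, x2) for u, v in zip(rot(p, 0.7), q))
```

The equivariant repositories above differ mainly in how much symmetry they bake in (E(n) for EGNN, SE(3) for the transformer variants) and in how the messages are parameterized, but this invariant-message / equivariant-update split is the shared skeleton.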

equiformer-pytorch

Implementation of the Equiformer, SE3/E3 equivariant attention network that reaches new SOTA, and adopted for use by EquiFold for protein folding

Language: Python · License: MIT · Stars: 235 · Issues: 0

COATI

COATI: multi-modal contrastive pre-training for representing and traversing chemical space

Language: Python · License: Apache-2.0 · Stars: 89 · Issues: 0

matmulfreellm

Implementation for MatMul-free LM.

Language: Python · License: Apache-2.0 · Stars: 2,801 · Issues: 0

matchem-llm

A public repository collecting links to state-of-the-art QA and evaluation sets for various ML and LLM applications

Stars: 73 · Issues: 0

LlamaGen

Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation

Language: Python · License: MIT · Stars: 1,133 · Issues: 0

se3-transformer-public

Code for the SE(3)-Transformers paper: https://arxiv.org/abs/2006.10503

Language: Python · Stars: 482 · Issues: 0

se3-transformer-pytorch

Implementation of SE(3)-Transformers for equivariant self-attention in PyTorch. This specific repository is geared towards integration with an eventual Alphafold2 replication.

Language: Python · License: MIT · Stars: 247 · Issues: 0

Torch-Linguist

Language Modeling with PyTorch

Language: Jupyter Notebook · Stars: 15 · Issues: 0