Repositories under the distributed-training topic:
Learn how to design, develop, deploy and iterate on production-grade ML applications.
PyTorch image models, scripts, pretrained weights -- ResNet, ResNeXt, EfficientNet, NFNet, Vision Transformer (ViT), MobileNet-V3/V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more
PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (the PaddlePaddle core framework: high-performance single-machine and distributed training, and cross-platform deployment, for deep learning and machine learning)
👑 Easy-to-use and powerful NLP and LLM library with 🤗 Awesome model zoo, supporting a wide range of NLP tasks from research to industrial applications, including 🗂 Text Classification, 🔍 Neural Search, ❓ Question Answering, ℹ️ Information Extraction, 📄 Document Intelligence, 💌 Sentiment Analysis, etc.
SkyPilot: Run LLMs, AI, and Batch jobs on any cloud. Get maximum savings, highest GPU availability, and managed execution—all with a simple interface.
FEDML - The unified and scalable ML library for large-scale distributed training, model serving, and federated learning. FEDML Launch, a cross-cloud scheduler, further enables running any AI jobs on any GPU cloud or on-premise cluster. Built on this library, FEDML Nexus AI (https://fedml.ai) is your generative AI platform at scale.
Fengshenbang-LM is an open-source large-model ecosystem led by the Cognitive Computing and Natural Language Research Center at IDEA, serving as foundational infrastructure for Chinese AIGC and cognitive intelligence.
Fast and flexible AutoML with learning guarantees.
Training and serving large-scale neural networks with auto parallelization.
Determined is an open-source machine learning platform that simplifies distributed training, hyperparameter tuning, experiment tracking, and resource management. Works with PyTorch and TensorFlow.
Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world.
Library for Fast and Flexible Human Pose Estimation
DeepRec is a high-performance recommendation deep learning framework based on TensorFlow. It is hosted in incubation in LF AI & Data Foundation.
DLRover: An Automatic Distributed Deep Learning System
Efficient Deep Learning Systems course materials (HSE, YSDA)
Best practice for training LLaMA models in Megatron-LM
Official code for ReLoRA from the paper Stack More Layers Differently: High-Rank Training Through Low-Rank Updates
LiBai(李白): A Toolbox for Large-Scale Distributed Parallel Training
A full pipeline AutoML tool for tabular data
YOLO3D: End-to-end real-time 3D Oriented Object Bounding Box Detection from LiDAR Point Cloud (ECCV 2018)
Distributed Deep Learning on AWS Using CloudFormation (CFN), MXNet and TensorFlow
Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training.
How to use Cross-Replica / Synchronized BatchNorm in PyTorch
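To illustrate what the repo above covers: synchronized batch norm aggregates per-replica statistics so every replica normalizes with the global mean and variance rather than its local batch's. The sketch below is a pure-Python illustration of that aggregation, not the repo's code; the function name is hypothetical. In modern PyTorch the built-in `torch.nn.SyncBatchNorm` provides this behavior.

```python
# Illustrative sketch: synchronized batch norm combines per-replica
# sums so all replicas normalize with the GLOBAL mean/variance.

def sync_batchnorm_stats(replica_batches):
    """Combine per-replica batches into a global mean and variance."""
    n = sum(len(b) for b in replica_batches)             # total sample count
    s = sum(x for b in replica_batches for x in b)       # global sum
    sq = sum(x * x for b in replica_batches for x in b)  # global sum of squares
    mean = s / n
    var = sq / n - mean * mean                           # E[x^2] - E[x]^2
    return mean, var

# Two "replicas" holding different local batches:
r1 = [1.0, 2.0, 3.0]
r2 = [5.0, 6.0, 7.0]
mean, var = sync_batchnorm_stats([r1, r2])  # mean = 4.0
```

With real PyTorch under DistributedDataParallel, the usual route is `torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)`, which swaps a model's BatchNorm layers for synchronized ones.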
[ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
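The core idea behind gradient compression of this kind is to transmit only the largest-magnitude gradient entries each step and accumulate the untransmitted remainder locally, to be added back before the next selection. A minimal pure-Python sketch of that top-k sparsification with residual accumulation (a simplification; the paper also applies momentum correction and other techniques, and the function name here is hypothetical):

```python
# Sketch of top-k gradient sparsification with local residual accumulation:
# send only the k largest-magnitude entries; keep the rest locally so no
# gradient information is permanently dropped.

def topk_sparsify(grad, residual, k):
    """Return (sparse_updates, new_residual).

    sparse_updates: {index: value} for the k largest |grad + residual| entries.
    new_residual:   the untransmitted remainder, kept on this worker.
    """
    acc = [g + r for g, r in zip(grad, residual)]
    # indices of the k largest-magnitude accumulated entries
    top = sorted(range(len(acc)), key=lambda i: abs(acc[i]), reverse=True)[:k]
    sparse = {i: acc[i] for i in top}
    new_residual = [0.0 if i in sparse else acc[i] for i in range(len(acc))]
    return sparse, new_residual

grad = [0.1, -0.9, 0.05, 0.7]
residual = [0.0, 0.0, 0.0, 0.0]
sparse, residual = topk_sparsify(grad, residual, k=2)
# only indices 1 and 3 are sent; entries 0 and 2 accumulate locally
```

Because only k indices and values cross the network per tensor, the communicated volume shrinks roughly by the sparsity ratio, which is what reduces the bandwidth needed for distributed training.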
Paddle Large Scale Classification Tools; supports ArcFace, CosFace, PartialFC, and Data Parallel + Model Parallel. Models include ResNet, ViT, Swin, DeiT, CaiT, FaceViT, MoCo, MAE, ConvMAE, CAE.
Minimal sharded dataset loaders, decoders, and utils for multi-modal document, image, and text datasets.