Repositories under the distributed-machine-learning topic:
Distributed Machine Learning Patterns from Manning Publications by Yuan Tang https://bit.ly/2RKv8Zo
A suite of hands-on training materials showing how to scale CV, NLP, and time-series forecasting workloads with Ray.
[ICLR 2021] HeteroFL: Computation and Communication Efficient Federated Learning for Heterogeneous Clients
FedERA is a modular and fully customizable open-source FL framework that offers comprehensive support for heterogeneous edge devices and supports both standalone and distributed computing. It includes new software modules to enhance usability and promote environmental sustainability.
Paddle with Decentralized Trust based on Xuperchain
A curated list of Federated Learning papers/articles and recent advancements.
sensAI: ConvNets Decomposition via Class Parallelism for Fast Inference on Live Data
🔨 A toolbox for federated learning that provides implementations of FedAvg, FedProx, Ditto, etc. in multiple variants: PyTorch/TensorFlow, single-machine/distributed, and synchronous/asynchronous.
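For context, the core of FedAvg that such a toolbox implements is a dataset-size-weighted average of client models; a minimal NumPy sketch (the function and variable names here are illustrative, not taken from the toolbox):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: weight each client's parameters by its local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy example: three clients with different amounts of local data.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 30, 60]
global_model = fedavg(clients, sizes)
print(global_model)  # pulled toward the clients holding more data
```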
CSCE 585 - Machine Learning Systems
Vector quantization for stochastic gradient descent.
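The general idea behind gradient quantization is to compress gradients to a few bits per entry before communication and dequantize them on the receiver; a schematic uniform-quantization sketch, not the repository's actual scheme:

```python
import numpy as np

def quantize(grad, num_levels=256):
    """Uniformly quantize a gradient vector to int8 levels plus a single float scale."""
    scale = float(np.abs(grad).max()) or 1.0          # avoid division by zero for an all-zero gradient
    q = np.round(grad / scale * (num_levels // 2 - 1)).astype(np.int8)
    return q, scale

def dequantize(q, scale, num_levels=256):
    """Reconstruct an approximate gradient from its quantized form."""
    return q.astype(np.float32) / (num_levels // 2 - 1) * scale

grad = np.random.randn(5).astype(np.float32)
q, scale = quantize(grad)
print(grad, dequantize(q, scale))  # close, but communicated with one byte per entry
```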
[NeurIPS 2022] SemiFL: Semi-Supervised Federated Learning for Unlabeled Clients with Alternate Training
The code of AAAI-21 paper titled "Defending against Backdoors in Federated Learning with Robust Learning Rate".
🔨 Distributed algorithms implemented with Spark/PyTorch, including graph/matrix computation, randomized algorithms, optimization, and machine learning. Based on Tie-Yan Liu's book Distributed Machine Learning and the CME 323 course.
Materials for "Machine Learning on Big Data" course
Event-Triggered Communication in Parallel Machine Learning
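The idea behind event-triggered communication is that a worker broadcasts its parameters only when they have drifted sufficiently since the last transmission; a single-process sketch of that pattern under a simple Euclidean-distance trigger (not the paper's exact rule, and all names are hypothetical):

```python
import numpy as np

class EventTriggeredWorker:
    """Send the local model only when it drifts far enough from the last sent copy."""
    def __init__(self, dim, threshold=0.1):
        self.model = np.zeros(dim)
        self.last_sent = np.zeros(dim)
        self.threshold = threshold

    def local_step(self, grad, lr=0.01):
        self.model -= lr * grad

    def maybe_communicate(self):
        # Trigger: distance since the last transmission exceeds the threshold.
        if np.linalg.norm(self.model - self.last_sent) > self.threshold:
            self.last_sent = self.model.copy()
            return self.model   # would be sent over the network
        return None             # stay silent, saving bandwidth

worker = EventTriggeredWorker(dim=3)
for step in range(100):
    worker.local_step(np.random.randn(3))
    msg = worker.maybe_communicate()
    if msg is not None:
        print(f"step {step}: transmitted {msg}")
```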
Collaborative Data Analysis for All
Associated codebase for Byzantine-resilient distributed / decentralized machine learning papers from INSPIRE Lab
A parameter-server (PS) ML training architecture with P4 programmable switches.
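Setting the in-network (P4) acceleration aside, the parameter-server pattern itself reduces to workers pushing gradients and pulling updated weights; a minimal single-process sketch with hypothetical names:

```python
import numpy as np

class ParameterServer:
    """Holds the global model; workers push gradients and pull fresh weights."""
    def __init__(self, dim, lr=0.1):
        self.weights = np.zeros(dim)
        self.lr = lr

    def push(self, grad):
        self.weights -= self.lr * grad   # apply a worker's gradient

    def pull(self):
        return self.weights.copy()       # worker fetches the current model

ps = ParameterServer(dim=4)
for _ in range(3):                        # three simulated worker rounds
    local_weights = ps.pull()
    grad = local_weights - np.ones(4)     # toy gradient pulling the weights toward 1
    ps.push(grad)
print(ps.weights)
```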
A framework that supports pipelined federated split learning with multiple hops.
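In split learning, the client keeps the first part of the model and forwards only intermediate activations to the next hop, so raw data never leaves the client; a minimal forward-pass sketch (the multi-hop and pipelining aspects are omitted, and all weights and names are illustrative):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

W_client = np.random.randn(8, 4)   # client-side layers: raw input -> hidden activations
W_server = np.random.randn(4, 1)   # server-side layers: hidden activations -> prediction

def client_forward(x):
    return relu(x @ W_client)      # only these "smashed" activations cross the network

def server_forward(activations):
    return activations @ W_server  # server completes the forward pass

x = np.random.randn(2, 8)          # a private client batch
pred = server_forward(client_forward(x))
print(pred.shape)                  # (2, 1)
```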
Scalable/Distributed Computer Vision with Ray
Solution for the Ultimate Student Hunt Challenge (1st place).
A distributed implementation of "Nested Subtree Hash Kernels for Large-Scale Graph Classification Over Streams" (ICDM 2012).
Distributed Neural Network Training
[NeurIPS 2022] GAL: Gradient Assisted Learning for Decentralized Multi-Organization Collaborations
Scalable NLP model fine-tuning and batch inference with Ray and Anyscale
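For context, the core Ray pattern for scaling batch inference is to wrap per-shard work in a remote task and fan it out across the cluster; a generic sketch in which the "model" and data are placeholders rather than the repository's actual code:

```python
import ray

ray.init()  # connects to an existing cluster, or starts a local one

@ray.remote
def predict_shard(shard):
    # Placeholder "model": in practice this would load an NLP model and run inference.
    return [len(text) for text in shard]

texts = [f"example document {i}" for i in range(1000)]
shards = [texts[i::8] for i in range(8)]             # split the batch into 8 shards
futures = [predict_shard.remote(s) for s in shards]  # run the shards in parallel
results = ray.get(futures)                           # gather the predictions
print(sum(len(r) for r in results), "predictions")
```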
Ring Allreduce implementation in Spark with a Barrier Scheduling experiment
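Ring allreduce runs in two phases, a reduce-scatter followed by an allgather, after which every worker holds the element-wise sum of all workers' data; a single-process simulation of that schedule (not the repository's Spark code):

```python
import numpy as np

def ring_allreduce(worker_chunks):
    """Simulate ring allreduce over n workers, each holding n chunks of a vector."""
    n = len(worker_chunks)
    data = [[np.array(c, dtype=float) for c in w] for w in worker_chunks]

    # Phase 1, reduce-scatter: after n-1 steps worker i holds the fully reduced chunk (i+1) % n.
    for step in range(n - 1):
        sends = [((i + 1) % n, (i - step) % n, data[i][(i - step) % n].copy()) for i in range(n)]
        for dst, c, payload in sends:
            data[dst][c] += payload

    # Phase 2, allgather: circulate the reduced chunks so every worker ends with all of them.
    for step in range(n - 1):
        sends = [((i + 1) % n, (i + 1 - step) % n, data[i][(i + 1 - step) % n].copy()) for i in range(n)]
        for dst, c, payload in sends:
            data[dst][c] = payload

    return data

# Three workers, each with a vector split into three chunks of size 2.
workers = [[[i, i], [i, i], [i, i]] for i in range(3)]
print(ring_allreduce(workers)[0])  # every chunk on every worker sums to [3., 3.]
```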
[DCC 2020] DRASIC: Distributed Recurrent Autoencoder for Scalable Image Compression
Ray Saturday Dec 2022 edition