FedSimulate

Simulation framework for Federated Learning (FedProx and FedAvg) in PyTorch, based on this paper. The distributed devices run their updates sequentially, so no high computational power is needed, although training can take a long time in some cases.

Server: Centralizes and unifies the clients' weights following a weight-generation policy (e.g. averaging, weighted averaging, top-k, ...) and then updates the client devices with the result. This is performed for rounds turns, and at each round pool devices are sampled from the whole set to update the server model.
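A weighted-averaging policy of the kind described above can be sketched as follows. This is an illustrative example, not the repo's actual API: the function name `average_weights` and its arguments are assumptions; it averages PyTorch `state_dict`s weighted by each client's dataset size, as FedAvg does.

```python
# Hypothetical sketch of a weighted-averaging aggregation policy
# (FedAvg-style); names are illustrative, not the repo's actual API.
import torch

def average_weights(client_states, client_sizes):
    """Weighted-average a list of model state_dicts by client dataset size."""
    total = sum(client_sizes)
    avg = {}
    for key in client_states[0]:
        avg[key] = sum(
            state[key].float() * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return avg
```

Plain averaging is the special case where every client gets the same size; a top-k policy would instead keep only the k best-performing clients before averaging.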

Clients: Each client is defined by a Model-Free Trainer which handles its hyperparameters and training procedure. At each round, only the sampled devices have their models instantiated, to optimize memory consumption.

Loner: This is a stand-alone model trained in parallel for rounds epochs; it is used as a baseline to evaluate the performance of the FL algorithms. It trains on the complete dataset.
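The overall round loop described above (sample a pool of devices each round, instantiate trainers only for the sampled ones, run them sequentially, then aggregate) can be sketched as a minimal skeleton. All names here (`simulate`, `make_trainer`, `aggregate`) are hypothetical placeholders, not the repo's actual interfaces.

```python
# Hypothetical sketch of the sequential simulation loop; function and
# parameter names are illustrative assumptions, not the repo's API.
import random

def simulate(num_clients, rounds, pool, make_trainer, aggregate, server_state):
    for _ in range(rounds):
        # Sample `pool` devices from the whole client set for this round.
        sampled = random.sample(range(num_clients), pool)
        updates = []
        for cid in sampled:
            # Trainers are instantiated lazily, only for sampled clients,
            # and run one after another: no parallel hardware is needed.
            trainer = make_trainer(cid, server_state)
            updates.append(trainer())
        # Unify the client updates and push the result back to the server.
        server_state = aggregate(updates)
    return server_state
```

Running clients sequentially is what keeps the hardware requirements low at the cost of wall-clock time, as noted above.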

Imbalanced Data Distribution

The clients' datasets are made imbalanced by the following distribution over the classes (in this case, the CIFAR10 dataset):

Obviously, the classes are shuffled before applying this distribution. An example is presented in the following plot:
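The shuffle-then-skew idea can be sketched as below. The geometric decay profile is an illustrative assumption (the repo's actual distribution is given above), but the key step matches the text: classes are shuffled first, then the imbalanced weights are applied.

```python
# Hypothetical sketch of an imbalanced per-client class distribution;
# the geometric decay profile is an assumption, not the repo's exact one.
import random

def imbalanced_distribution(num_classes=10, decay=0.5, seed=0):
    """Return a class -> probability mapping: classes are shuffled,
    then a decaying weight profile is normalized over them."""
    rng = random.Random(seed)
    classes = list(range(num_classes))
    rng.shuffle(classes)                      # shuffle classes first
    raw = [decay ** i for i in range(num_classes)]
    total = sum(raw)
    return {c: w / total for c, w in zip(classes, raw)}
```

Shuffling before applying the weights ensures that which classes are over- or under-represented differs from client to client, rather than always penalizing the same labels.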

Results

About

Simulation framework for Federated Learning (FedProx and FedAvg) in PyTorch.

License: MIT License


Languages

Language: Python 100.0%