- FedAvg -- Communication-Efficient Learning of Deep Networks from Decentralized Data (AISTATS'17)
- FedAvgM -- Measuring the Effects of Non-Identical Data Distribution for Federated Visual Classification (ArXiv'19)
- FedProx -- Federated Optimization in Heterogeneous Networks (MLSys'20)
- SCAFFOLD -- SCAFFOLD: Stochastic Controlled Averaging for Federated Learning (ICML'20)
- MOON -- Model-Contrastive Federated Learning (CVPR'21)
- FedDyn -- Federated Learning Based on Dynamic Regularization (ICLR'21)
- FedLC -- Federated Learning with Label Distribution Skew via Logits Calibration (ICML'22)
- Local-Only -- Local training only (without communication).
- FedMD -- FedMD: Heterogenous Federated Learning via Model Distillation (NIPS'19)
- APFL -- Adaptive Personalized Federated Learning (ArXiv'20)
- LG-FedAvg -- Think Locally, Act Globally: Federated Learning with Local and Global Representations (ArXiv'20)
- FedBN -- FedBN: Federated Learning on Non-IID Features via Local Batch Normalization (ICLR'21)
- FedPer -- Federated Learning with Personalization Layers (AISTATS'20)
- FedRep -- Exploiting Shared Representations for Personalized Federated Learning (ICML'21)
- Per-FedAvg -- Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach (NIPS'20)
- pFedMe -- Personalized Federated Learning with Moreau Envelopes (NIPS'20)
- Ditto -- Ditto: Fair and Robust Federated Learning Through Personalization (ICML'21)
- pFedHN -- Personalized Federated Learning using Hypernetworks (ICML'21)
- pFedLA -- Layer-Wised Model Aggregation for Personalized Federated Learning (CVPR'22)
- CFL -- Clustered Federated Learning: Model-Agnostic Distributed Multi-Task Optimization under Privacy Constraints (ArXiv'19)
- FedFomo -- Personalized Federated Learning with First Order Model Optimization (ICLR'21)
- FedBabu -- FedBabu: Towards Enhanced Representation for Federated Image Classification (ICLR'22)
- FedAP -- Personalized Federated Learning with Adaptive Batchnorm for Healthcare (IEEE'22)
- kNN-Per -- Personalized Federated Learning through Local Memorization (ICML'22)
- MetaFed -- MetaFed: Federated Learning among Federations with Cyclic Knowledge Distillation for Personalized Healthcare (IJCAI'22)
More reproductions/features will come sooner or later (depending on my mood 🤣).
```shell
# partition CIFAR-10 according to Dir(0.1) for 100 clients
cd data/utils
python run.py -d cifar10 -a 0.1 -cn 100
cd ../../

# run FedAvg under the default settings
cd src/server
python fedavg.py
```
For details on how the federated datasets are generated, see `data/README.md`.
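To make the `Dir(0.1)` flag above concrete: a Dirichlet partitioner draws, for each class, a vector of per-client shares from Dir(α), so a smaller α yields stronger label skew. Below is a minimal, self-contained sketch of that idea; the function name and details are hypothetical, and `data/utils/run.py` may implement it differently.

```python
import numpy as np

def dirichlet_partition(labels: np.ndarray, n_clients: int, alpha: float, seed: int = 42):
    """Split sample indices across clients with Dir(alpha) label skew.

    Hypothetical helper for illustration -- not the benchmark's run.py.
    """
    rng = np.random.default_rng(seed)
    n_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(n_clients)]
    for c in range(n_classes):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # One Dirichlet vector per class: entry i is the share of this
        # class's samples that client i receives.
        shares = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(shares) * len(idx)).astype(int)[:-1]
        for client_id, shard in enumerate(np.split(idx, cuts)):
            client_indices[client_id].extend(shard.tolist())
    return client_indices

# Toy usage: 10 classes, 100 clients, strong skew (alpha = 0.1).
labels = np.random.default_rng(0).integers(0, 10, size=50_000)
parts = dirichlet_partition(labels, n_clients=100, alpha=0.1)
print(len(parts[0]), "samples on client 0")
```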
- Run `python -m visdom.server` in a terminal.
- Run `python src/server/${algo}.py --visible 1`.
- Open `localhost:8097` in your browser.
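For the curious, here is a minimal, hypothetical sketch of what pushing an accuracy curve to a running Visdom server looks like; the window name and values are made up, and the benchmark's actual plotting code may differ.

```python
import numpy as np
import visdom  # pip install visdom; start the server first

viz = visdom.Visdom(port=8097)  # Visdom's default port

# Append one point per communication round to an accuracy curve.
# Window name and accuracy values are made up for this demo.
for round_id, acc in enumerate([0.31, 0.45, 0.52]):
    viz.line(
        X=np.array([round_id]),
        Y=np.array([acc]),
        win="test_accuracy",
        update="append" if round_id > 0 else None,
        opts=dict(title="Test accuracy", xlabel="round", ylabel="accuracy"),
    )
```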
💡 All arguments have default values.
For the default values and the hyperparameters of the advanced FL methods, see `src/config/args.py` for full details (a trimmed, illustrative sketch of such a parser follows the table below).
| General Argument | Description |
| --- | --- |
| `--dataset`, `-d` | The name of the dataset the experiment runs on. |
| `--model`, `-m` | The model backbone used in the experiment. |
| `--seed` | Random seed for running the experiment. |
| `--join_ratio`, `-jr` | Ratio of (clients per round) / (total number of clients). |
| `--global_epoch`, `-ge` | Number of global epochs, also called communication rounds. |
| `--local_epoch`, `-le` | Number of epochs for client local training. |
| `--finetune_epoch`, `-fe` | Number of epochs for clients to fine-tune their models before testing. |
| `--test_gap`, `-tg` | Interval (in rounds) between tests on clients. |
| `--eval_test`, `-ee` | Non-zero value to evaluate on joined clients' test sets before and after local training. |
| `--eval_train`, `-er` | Non-zero value to evaluate on joined clients' training sets before and after local training. |
| `--local_lr`, `-lr` | Learning rate for client local training. |
| `--momentum`, `-mom` | Momentum for the client local optimizer. |
| `--weight_decay`, `-wd` | Weight decay for the client local optimizer. |
| `--verbose_gap`, `-vg` | Interval (in rounds) between displays of clients' training performance in the terminal. |
| `--batch_size`, `-bs` | Batch size for client local training. |
| `--server_cuda` | Non-zero value to keep server-side tensors on the GPU. |
| `--client_cuda` | Non-zero value to keep client-side tensors on the GPU. |
| `--visible` | Non-zero value to use Visdom to monitor algorithm performance at `localhost:8097`. |
| `--save_log` | Non-zero value to save the algorithm's running log to `FL-bench/out/${algo}`. |
| `--save_model` | Non-zero value to save the output model parameters to `FL-bench/out/${algo}`. |
| `--save_fig` | Non-zero value to save the accuracy curves shown on Visdom as a `.jpeg` file to `FL-bench/out/${algo}`. |
| `--save_metrics` | Non-zero value to save metric stats to a `.csv` file at `FL-bench/out/${algo}`. |
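Here is that trimmed sketch: the flag names mirror the table above, but the defaults shown are placeholders, not necessarily the actual values defined in `src/config/args.py`.

```python
from argparse import ArgumentParser

def get_args():
    # Trimmed, illustrative parser; defaults are placeholders, not the
    # actual values in src/config/args.py.
    parser = ArgumentParser()
    parser.add_argument("--dataset", "-d", type=str, default="cifar10")
    parser.add_argument("--model", "-m", type=str, default="lenet5")  # placeholder name
    parser.add_argument("--seed", type=int, default=42)
    parser.add_argument("--join_ratio", "-jr", type=float, default=0.1)
    parser.add_argument("--global_epoch", "-ge", type=int, default=100)
    parser.add_argument("--local_epoch", "-le", type=int, default=5)
    parser.add_argument("--local_lr", "-lr", type=float, default=1e-2)
    parser.add_argument("--batch_size", "-bs", type=int, default=32)
    parser.add_argument("--visible", type=int, default=0)
    return parser.parse_args()

if __name__ == "__main__":
    # e.g. python demo_args.py -d cifar10 -jr 0.1 --visible 1
    print(vars(get_args()))
```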
For now, this benchmark only supports image classification tasks.
Regular Image Datasets

- MNIST (1 x 28 x 28, 10 classes)
- CIFAR-10/100 (3 x 32 x 32, 10/100 classes)
- EMNIST (1 x 28 x 28, 62 classes)
- FashionMNIST (1 x 28 x 28, 10 classes)
- FEMNIST (1 x 28 x 28, 62 classes)
- CelebA (3 x 218 x 178, 2 classes)
- SVHN (3 x 32 x 32, 10 classes)
- USPS (1 x 16 x 16, 10 classes)
- Tiny-ImageNet-200 (3 x 64 x 64, 200 classes)
- CINIC-10 (3 x 32 x 32, 10 classes)

Medical Image Datasets

- COVID-19 (3 x 244 x 224, 4 classes)
- Organ-S/A/CMNIST (1 x 28 x 28, 11 classes)
Some reproductions in this benchmark refer to https://github.com/TsingZ0/PFL-Non-IID, which is a great FL benchmark.

This benchmark is still young, which means I will update it frequently and unpredictably, so periodically fetching the latest code is recommended. 🤗

If this benchmark is helpful to your research, it's my pleasure. 😊