FL-bench

Benchmark of federated learning. Dedicated to the community. πŸ€—

Federated Learning Benchmark

Method 🧬

Regular FL Methods

Personalized FL Methods

More reproductions/features will come sooner or later (depends on my mood 🀣).
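FedAvg, the baseline run in the Easy Run example below, aggregates client updates on the server by a data-size-weighted average of model parameters. A minimal sketch of that aggregation step in plain NumPy (names and shapes are illustrative, not the benchmark's actual API):

```python
import numpy as np

def fedavg_aggregate(client_params, client_sizes):
    """FedAvg server step: weighted average of client parameter vectors."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()  # weight each client by its share of the data
    stacked = np.stack(client_params)  # shape: (num_clients, num_params)
    return (weights[:, None] * stacked).sum(axis=0)

# two clients: the first holds 3x as much data as the second
params = [np.array([1.0, 2.0]), np.array([3.0, 6.0])]
global_params = fedavg_aggregate(params, client_sizes=[300, 100])
print(global_params)  # -> [1.5 3. ]
```

In the benchmark itself, each server script (e.g. src/server/fedavg.py) implements its own aggregation rule; this is only the core idea.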

Easy Run πŸƒβ€β™‚οΈ

# partition CIFAR-10 among 100 clients according to Dir(0.1)
cd data/utils
python run.py -d cifar10 -a 0.1 -cn 100
cd ../../

# run FedAvg under default setting.
cd src/server
python fedavg.py

For details on how the federated datasets are generated, see data/README.md.
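The Dir(0.1) partition used in the command above draws, for each class, a Dirichlet vector over clients and splits that class's samples in those proportions; a smaller alpha yields more skewed (more non-IID) client datasets. A rough sketch of the idea (illustrative only; the benchmark's actual partitioning logic lives in data/utils/run.py):

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices among clients with per-class Dirichlet proportions."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        # proportions of this class assigned to each client
        props = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, shard in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(shard.tolist())
    return client_indices

labels = np.repeat(np.arange(10), 500)  # toy stand-in for CIFAR-10 labels
parts = dirichlet_partition(labels, num_clients=100, alpha=0.1)
assert sum(len(p) for p in parts) == len(labels)  # every sample assigned once
```

With alpha = 0.1, most clients end up holding samples from only a few classes; with a large alpha the split approaches an even, IID partition.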

Monitor πŸ“ˆ (optional and recommended πŸ‘)

  1. Run python -m visdom.server in a terminal.
  2. Run src/server/${algo}.py --visible 1.
  3. Open localhost:8097 in your browser.

Arguments πŸ”§

πŸ“’ All arguments have default values.

For the default values and hyperparameters of the advanced FL methods, see src/config/args.py.

General Argument Description
--dataset, -d The name of the dataset the experiment runs on.
--model, -m The model backbone used in the experiment.
--seed Random seed for the experiment.
--join_ratio, -jr Ratio of clients joining each round to the total number of clients.
--global_epoch, -ge Number of global epochs (communication rounds).
--local_epoch, -le Number of local epochs for client training.
--finetune_epoch, -fe Number of epochs clients fine-tune their models before testing.
--test_gap, -tg Round interval for performing tests on clients.
--eval_test, -ee Non-zero value to evaluate the joined clients' test sets before and after local training.
--eval_train, -er Non-zero value to evaluate the joined clients' training sets before and after local training.
--local_lr, -lr Learning rate for client local training.
--momentum, -mom Momentum for the client local optimizer.
--weight_decay, -wd Weight decay for the client local optimizer.
--verbose_gap, -vg Round interval for displaying clients' training performance in the terminal.
--batch_size, -bs Batch size for client local training.
--server_cuda Non-zero value to keep server-side tensors on the GPU.
--client_cuda Non-zero value to keep client-side tensors on the GPU.
--visible Non-zero value to monitor algorithm performance with Visdom at localhost:8097.
--save_log Non-zero value to save the run's log to FL-bench/out/{$algo}.
--save_model Non-zero value to save the output model parameters to FL-bench/out/{$algo}.
--save_fig Non-zero value to save the accuracy curves shown on Visdom as a .jpeg file to FL-bench/out/{$algo}.
--save_metrics Non-zero value to save metric stats as a .csv file to FL-bench/out/{$algo}.
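To illustrate how flags like these combine with their defaults, here is a cut-down argparse sketch covering a few of the general arguments. The defaults shown are guesses for illustration, not the benchmark's real values; the authoritative definitions live in src/config/args.py.

```python
import argparse

def get_args(argv=None):
    """Cut-down parser for a few of the general arguments.

    Defaults here are illustrative only; see src/config/args.py
    for the real definitions used by the benchmark.
    """
    parser = argparse.ArgumentParser()
    parser.add_argument("--dataset", "-d", type=str, default="cifar10")
    parser.add_argument("--join_ratio", "-jr", type=float, default=0.1)
    parser.add_argument("--global_epoch", "-ge", type=int, default=100)
    parser.add_argument("--local_epoch", "-le", type=int, default=5)
    parser.add_argument("--local_lr", "-lr", type=float, default=1e-2)
    parser.add_argument("--visible", type=int, default=0)  # non-zero enables Visdom
    return parser.parse_args(argv)

args = get_args(["-d", "mnist", "-jr", "0.5"])
print(args.dataset, args.join_ratio, args.global_epoch)  # -> mnist 0.5 100
```

Any flag left off the command line falls back to its default, which is why every server script can be launched with no arguments at all.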

Supported Datasets 🎨

For now, this benchmark only supports algorithms for image classification tasks.

Regular Image Datasets

  • MNIST (1 x 28 x 28, 10 classes)

  • CIFAR-10/100 (3 x 32 x 32, 10/100 classes)

  • EMNIST (1 x 28 x 28, 62 classes)

  • FashionMNIST (1 x 28 x 28, 10 classes)

  • Synthetic dataset

  • FEMNIST (1 x 28 x 28, 62 classes)

  • CelebA (3 x 218 x 178, 2 classes)

  • SVHN (3 x 32 x 32, 10 classes)

  • USPS (1 x 16 x 16, 10 classes)

  • Tiny-ImageNet-200 (3 x 64 x 64, 200 classes)

  • CINIC-10 (3 x 32 x 32, 10 classes)
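The (channels x height x width) shapes listed above determine each model's input layer; a fully connected backbone, for instance, would flatten them into a vector. A small sanity check of the flattened sizes, using a few of the shapes from the list (the dataset keys are illustrative, not the benchmark's identifiers):

```python
# (channels, height, width) as listed above
shapes = {
    "mnist": (1, 28, 28),
    "cifar10": (3, 32, 32),
    "usps": (1, 16, 16),
    "tiny_imagenet_200": (3, 64, 64),
}

def flat_dim(shape):
    """Input dimension after flattening an image into a single vector."""
    c, h, w = shape
    return c * h * w

print(flat_dim(shapes["mnist"]))    # -> 784
print(flat_dim(shapes["cifar10"]))  # -> 3072
```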

Medical Image Datasets

Acknowledgement πŸ€—

Some reproductions in this benchmark refer to https://github.com/TsingZ0/PFL-Non-IID, which is a great FL benchmark. πŸ‘

This benchmark is still young, so I will update it frequently and unpredictably. Periodically fetching the latest code is recommended. πŸ€–

If this benchmark is helpful to your research, it's my pleasure. 😏

License: MIT