
FedCL: Federated Multi-Phase Curriculum Learning to Synchronously Correlate User Heterogeneity

Research code that accompanies the paper FedCL: Federated Multi-Phase Curriculum Learning to Synchronously Correlate User Heterogeneity. It contains implementations of the following algorithms: FedCL (registered as FedGen in the code), FedAvg, FedProx, and FedDistll-FL.

Install Requirements:

pip3 install -r requirements.txt

Prepare Dataset:

  • To generate a non-iid Mnist dataset following the Dirichlet distribution D(α=0.1) for 20 clients, using 50% of the total available training samples (a sketch of this partitioning scheme appears after this list):
cd ./data/Mnist
python generate_niid_dirichlet.py --n_class 10 --sampling_ratio 0.5 --alpha 0.1 --n_user 20
### This will generate a dataset located at FedGen/data/Mnist/u20c10-alpha0.1-ratio0.5/
  • Similarly, to generate a non-iid EMnist dataset using 10% of the total available training samples:
cd FedGen/data/EMnist
python generate_niid_dirichlet.py --sampling_ratio 0.1 --alpha 0.1 --n_user 20 
### This will generate a dataset located at FedGen/data/EMnist/u20-letters-alpha0.1-ratio0.1/
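
The generation scripts partition each class's samples across clients according to a Dirichlet prior; smaller alpha concentrates a class on fewer clients, producing stronger non-iid skew. The following is a minimal, hypothetical sketch of this style of partitioning (the function name and exact logic are illustrative assumptions, not the repo's generate_niid_dirichlet.py):

import numpy as np

def dirichlet_partition(labels, n_users=20, alpha=0.1, seed=0):
    """Illustrative sketch: split sample indices across users via a per-class Dirichlet prior."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    user_indices = [[] for _ in range(n_users)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Smaller alpha -> most of a class lands on few users (stronger heterogeneity).
        proportions = rng.dirichlet(alpha * np.ones(n_users))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for u, part in enumerate(np.split(idx, cuts)):
            user_indices[u].extend(part.tolist())
    return user_indices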

Run Experiments:

All experiments are run through the main file main.py. Note that the algorithm registered as FedGen in this code is actually FedCL, so pass --algorithm FedGen to run FedCL; to run the original FedGen, use its original codebase.

Run experiments on the Mnist Dataset:

python main.py --dataset Mnist-alpha0.1-ratio0.5 --algorithm FedGen --batch_size 32 --num_glob_iters 200 --local_epochs 20 --num_users 10 --lamda 1 --learning_rate 0.01 --model cnn --personal_learning_rate 0.01 --times 3 
python main.py --dataset Mnist-alpha0.1-ratio0.5 --algorithm FedAvg --batch_size 32 --num_glob_iters 200 --local_epochs 20 --num_users 10 --lamda 1 --learning_rate 0.01 --model cnn --personal_learning_rate 0.01 --times 3 
python main.py --dataset Mnist-alpha0.1-ratio0.5 --algorithm FedProx --batch_size 32 --num_glob_iters 200 --local_epochs 20 --num_users 10 --lamda 1 --learning_rate 0.01 --model cnn --personal_learning_rate 0.01 --times 3 
python main.py --dataset Mnist-alpha0.1-ratio0.5 --algorithm FedDistll-FL --batch_size 32 --num_glob_iters 200 --local_epochs 20 --num_users 10 --lamda 1 --learning_rate 0.01 --model cnn --personal_learning_rate 0.01 --times 3 

Run experiments on the EMnist Dataset:

python main.py --dataset EMnist-alpha0.1-ratio0.1 --algorithm FedAvg --batch_size 32 --local_epochs 20 --num_users 10 --lamda 1 --model cnn --learning_rate 0.01 --personal_learning_rate 0.01 --num_glob_iters 200 --times 3 
python main.py --dataset EMnist-alpha0.1-ratio0.1 --algorithm FedGen --batch_size 32 --local_epochs 20 --num_users 10 --lamda 1 --model cnn --learning_rate 0.01 --personal_learning_rate 0.01 --num_glob_iters 200 --times 3 
python main.py --dataset EMnist-alpha0.1-ratio0.1 --algorithm FedProx --batch_size 32 --local_epochs 20 --num_users 10 --lamda 1 --model cnn --learning_rate 0.01 --personal_learning_rate 0.01 --num_glob_iters 200 --times 3 
python main.py --dataset EMnist-alpha0.1-ratio0.1 --algorithm FedDistll-FL --batch_size 32 --local_epochs 20 --num_users 10 --lamda 1 --model cnn --learning_rate 0.01 --personal_learning_rate 0.01 --num_glob_iters 200 --times 3 
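
FedAvg, one of the baselines above, aggregates client updates by a sample-weighted average of model parameters each global round. A minimal sketch of that aggregation step, assuming PyTorch state_dicts (names and usage below are hypothetical, not this repo's code):

import torch

def fedavg_aggregate(client_states, client_sizes):
    """Illustrative sketch: sample-weighted average of client state_dicts."""
    total = float(sum(client_sizes))
    return {
        key: sum((n / total) * state[key].float()
                 for state, n in zip(client_states, client_sizes))
        for key in client_states[0]
    }

# Hypothetical usage with two clients holding 60 and 40 samples:
s1 = {"w": torch.ones(2, 2)}
s2 = {"w": torch.zeros(2, 2)}
print(fedavg_aggregate([s1, s2], [60, 40]))  # "w" == 0.6 everywhere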


Plot

For the --algorithms argument, list the algorithm names separated by commas, e.g. --algorithms FedAvg,FedGen,FedProx

  python main_plot.py --dataset EMnist-alpha0.1-ratio0.1 --algorithms FedGen --batch_size 32 --local_epochs 50 --num_users 10 --num_glob_iters 200 --plot_legend 1
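
main_plot.py overlays learning curves for the listed algorithms. A minimal sketch of this kind of curve plot, assuming matplotlib and per-round test-accuracy arrays that have already been collected (the file names and shapes below are hypothetical):

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical inputs: one test-accuracy-per-round array per algorithm,
# averaged over the repeated runs (--times).
curves = {
    "FedAvg": np.load("results/fedavg_acc.npy"),
    "FedGen": np.load("results/fedgen_acc.npy"),   # FedGen in this code is FedCL
    "FedProx": np.load("results/fedprox_acc.npy"),
}

for name, acc in curves.items():
    plt.plot(np.arange(len(acc)), acc, label=name)
plt.xlabel("Global communication round")
plt.ylabel("Test accuracy")
plt.legend()
plt.savefig("emnist_curves.png", dpi=200)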

Citation

Please cite the following paper if you use this code in your work.

@article{wang2022fedcl,
  title={FedCL: Federated Multi-Phase Curriculum Learning to Synchronously Correlate User Heterogeneity},
  author={Wang, Mingjie and Guo, Jianxiong and Jia, Weijia},
  journal={arXiv preprint arXiv:2211.07248},
  year={2022}
}
