TinfoilHat0 / Defending-Against-Backdoors-with-Robust-Learning-Rate

Code for the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate".

Home Page: https://ojs.aaai.org/index.php/AAAI/article/view/17118


Unstable Results Over 500 Round Experiments

AndrewMerrow opened this issue

I have tried running tests on the fedemnist dataset with the default parameters from the runner.sh file. In my 500-round tests, the model's accuracy starts to degrade after approximately round 100.

[Figure: UTD 500-round test accuracy graph]

I have run experiments in two separate environments and have tried tweaking some parameters, but the results all show the same issue. Here are the library versions I am using:

NVIDIA PyTorch Container version 22.12
PyTorch version 1.14.0+410ce96
Python3 version 3.8.10

Hey Andrew - the plot makes me think this is a learning rate problem. Are you decaying the learning rate?

I have not decayed the learning rate. Here are the values I have used:
server_lr: 1
client_lr: 0.1

We used all the parameters straight from the provided runner.sh file.
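
For reference, here is a minimal sketch of what per-round decay of the client learning rate could look like in a federated training loop. The names (`client_lr`, `decay`, `n_rounds`) and the decay factor are illustrative assumptions, not values taken from the repository's runner.sh.

```python
# Sketch only: exponential per-round decay of the client learning rate.
# The decay factor and loop structure are assumptions for illustration.
import torch

n_rounds = 500
client_lr = 0.1      # initial client learning rate from this thread
decay = 0.99         # assumed per-round multiplicative decay factor

model = torch.nn.Linear(10, 2)  # stand-in for the federated model

for rnd in range(n_rounds):
    lr = client_lr * (decay ** rnd)
    # each selected client would build its local optimizer with the decayed rate
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    # ... local client training and server aggregation would happen here ...
```

With a schedule like this, the effective client step size shrinks in later rounds, which may avoid the kind of late-round degradation shown in the plot; the right decay factor would need to be tuned for the fedemnist setup.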