securefederatedai / openfl

An open framework for Federated Learning.

Home Page: https://openfl.readthedocs.io/en/latest/index.html


FedProx: getting same model despite different mu values

chinglamchoi opened this issue

Hi, I'm using the FedProx optimizer and following the PyTorch MNIST demos. I passed different mu values (I tried 0, 0.001, 0.01, 0.1, 1, 2, 5, 10), but for the same seed I always get identical trained client and server models, regardless of mu.
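For reference, this is roughly how I construct the optimizer (a minimal sketch: the import path follows the file mentioned below, and the exact constructor signature is my assumption from the demos):

```python
import torch.nn as nn

# Import path inferred from openfl/utilities/optimizers/torch/fedprox.py;
# the constructor arguments (lr, mu) are my assumption from the MNIST demos.
from openfl.utilities.optimizers.torch.fedprox import FedProxOptimizer

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy MNIST model

# mu is the proximal coefficient I swept; everything else is held fixed.
optimizer = FedProxOptimizer(model.parameters(), lr=0.01, mu=0.1)
```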

In the original paper, mu controls the strength of the proximal term that keeps local models close to the global model (i.e., the degree of personalization), and mu = 0 is equivalent to FedAvg. In this implementation, how does mu affect model training? Could you point me to the relevant files where mu enters the optimization?
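Concretely, the local objective from the FedProx paper is

```math
\min_{w} \; h_k(w) \;=\; F_k(w) \;+\; \frac{\mu}{2}\,\lVert w - w^{t} \rVert^{2}
```

where $w^t$ is the global model at round $t$, so each local gradient picks up an extra $\mu (w - w^t)$ term, and $\mu = 0$ recovers plain local SGD, i.e., FedAvg.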

I found Line 93 in openfl/utilities/optimizers/torch/fedprox.py: `d_p.add_(p - w_old_p, alpha=mu)`. I verified that my mu values had not been overwritten and were indeed different. Other than that, I couldn't find anything else that uses mu directly in the optimization.
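To check my understanding of what that line does, here is a standalone toy reproduction of the proximal term (my own sketch, not the repo's code):

```python
import torch

# For each parameter p, FedProx augments the gradient d_p with
# mu * (p - w_old_p), where w_old_p is the global model from the
# start of the round (my reading of the Line-93 update above).
torch.manual_seed(0)
p = torch.randn(5, requires_grad=True)  # current local parameter
w_old_p = torch.zeros(5)                # stand-in for the global weights

loss = (p ** 2).sum()
loss.backward()  # p.grad = 2 * p

for mu in (0.0, 0.1, 1.0):
    d_p = p.grad.clone()
    d_p.add_(p.detach() - w_old_p, alpha=mu)  # the Line-93 update
    print(mu, d_p[:3])  # different mu -> different effective gradient
```

When `p` differs from `w_old_p`, different mu values clearly change the effective gradient, so one thing I wondered is whether `w_old_p` somehow stays equal to the current weights during training (which would zero the term for every mu), but I haven't been able to confirm that.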

Thanks!