AshwinRJ / Federated-Learning-PyTorch

Implementation of Communication-Efficient Learning of Deep Networks from Decentralized Data


Bug with calculating the Training Loss

sashkboos opened this issue · comments

```python
idxs=user_groups[idx], logger=logger)
```

Here, I think we have to replace `idx` with `c`, because we want to calculate the training error of the `global_model` on all of the training data (after the averaging).
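To illustrate why this matters, here is a minimal, self-contained sketch of the evaluation loop's behavior. The names `user_groups` and the `evaluate_client` stand-in are simplified placeholders for the repo's `user_groups` dict and `LocalUpdate(...).inference(...)`; the data is fabricated for illustration only.

```python
# client id -> indices of that client's training samples (toy data)
user_groups = {0: [0, 1], 1: [2, 3], 2: [4, 5]}

def evaluate_client(idxs):
    """Stand-in for LocalUpdate(...).inference(global_model):
    returns the set of sample indices actually evaluated."""
    return set(idxs)

# Buggy version: `idx` is a leftover variable from the earlier
# local-training loop, so every iteration evaluates the SAME client.
idx = 1  # whatever client happened to be sampled last
covered_buggy = set()
for c in range(len(user_groups)):
    covered_buggy |= evaluate_client(user_groups[idx])

# Fixed version: use the loop variable `c`, covering every client.
covered_fixed = set()
for c in range(len(user_groups)):
    covered_fixed |= evaluate_client(user_groups[c])

print(sorted(covered_buggy))  # [2, 3] -- only one client's data
print(sorted(covered_fixed))  # [0, 1, 2, 3, 4, 5] -- all training data
```

With the bug, the reported "training loss/accuracy" reflects only one client's data; with `c`, the averaged global model is evaluated on every client's samples.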

I agree with sashkboos

Agree. It should loop over all the clients.


Of course, it is clearly a mistake.