An implementation of the paper "Towards Federated Learning at Scale: System Design" (Bonawitz et al., 2019) in PyTorch.

Federated learning has two major components, the Server and the Device.

Server components:
- Coordinator
- Selector
- Aggregator
- Master Aggregator
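A minimal sketch of how these server-side actors could fit together, following the paper's description: a Selector samples checked-in devices for a round, each Aggregator averages the updates from its shard of devices, and the Master Aggregator combines the partial results, with the Coordinator driving the round. All class and method names here are hypothetical illustrations, not this repo's actual API, and updates are reduced to single floats for brevity.

```python
import random


class Selector:
    """Samples a subset of checked-in devices for a training round."""

    def __init__(self, num_to_select):
        self.num_to_select = num_to_select

    def select(self, device_ids):
        return random.sample(device_ids, self.num_to_select)


class Aggregator:
    """Averages the updates from its assigned shard of devices."""

    def aggregate(self, updates):
        return sum(updates) / len(updates)


class MasterAggregator:
    """Combines partial aggregates, weighted by shard size."""

    def combine(self, partials):  # partials: [(mean, count), ...]
        total = sum(count for _, count in partials)
        return sum(mean * count for mean, count in partials) / total


class Coordinator:
    """Drives one round: select devices, shard them, aggregate, combine."""

    def __init__(self, selector, num_aggregators):
        self.selector = selector
        self.aggregators = [Aggregator() for _ in range(num_aggregators)]

    def run_round(self, device_ids, get_update):
        chosen = self.selector.select(device_ids)
        # For simplicity, split evenly and drop any remainder devices.
        shard = len(chosen) // len(self.aggregators)
        partials = []
        for i, agg in enumerate(self.aggregators):
            subset = chosen[i * shard:(i + 1) * shard]
            updates = [get_update(device) for device in subset]
            partials.append((agg.aggregate(updates), len(subset)))
        return MasterAggregator().combine(partials)
```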
The Device side. A device holds its local training data; it checks in with the server, downloads the current global model, trains on its data, and uploads the resulting model update. Its communication layer comprises:
- Message definitions
- Message class
- Communicator class
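One plausible shape for this message layer, sketched with hypothetical names (the actual definitions live in this repo's code): a typed `Message` that is serialised with `pickle`, and a `Communicator` that frames each message with a 4-byte length prefix so it can travel over a socket or pipe.

```python
import pickle
import struct
from dataclasses import dataclass, field
from enum import Enum, auto


class MessageType(Enum):
    """Illustrative message kinds exchanged between server and device."""
    CHECK_IN = auto()
    MODEL_DOWNLOAD = auto()
    UPDATE_UPLOAD = auto()


@dataclass
class Message:
    kind: MessageType
    sender: str
    payload: dict = field(default_factory=dict)


class Communicator:
    """Encodes/decodes length-prefixed pickled messages for a byte stream."""

    @staticmethod
    def encode(msg: Message) -> bytes:
        body = pickle.dumps(msg)
        # Network-order unsigned int length prefix, then the pickled body.
        return struct.pack("!I", len(body)) + body

    @staticmethod
    def decode(data: bytes) -> Message:
        (length,) = struct.unpack("!I", data[:4])
        return pickle.loads(data[4:4 + length])
```

The length prefix lets a receiver read exactly one message at a time from a stream, which avoids ambiguity when several messages arrive back to back.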
Training uses Federated Averaging (FedAvg): in each round, the selected devices train the global model on their local data, and the server replaces the global model with the average of the returned device models, weighted by each device's number of training samples.
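The averaging step can be sketched as follows. To keep the example dependency-free, each client state is a plain mapping of parameter names to lists of floats; in PyTorch these would be the tensors from `model.state_dict()`. The function name `fedavg` is illustrative, not this repo's API.

```python
def fedavg(client_states, client_sizes):
    """Weighted average of client model states (FedAvg server step).

    client_states: list of dicts {param_name: list of float values}
    client_sizes:  list of per-client training-sample counts
    """
    total = sum(client_sizes)
    averaged = {}
    for name in client_states[0]:
        length = len(client_states[0][name])
        averaged[name] = [
            sum(state[name][i] * size
                for state, size in zip(client_states, client_sizes)) / total
            for i in range(length)
        ]
    return averaged
```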
Usage: commands for launching the server and the devices (to be documented).
TODO:
- Gradient compression
- Secure Byzantine-robust learning
- Enable distillation
- Check optimiser modification during training
- Message synchronisation methods
- Check if the logger can display the device number
- Eliminate the ready-alive class
- Investigate polling the pipe
- Better Message class organisation
- Comment out functions