ahmedcs / Compressed_SGD_PyTorch

Implementation of Compressed SGD with Compressed Gradients in Pytorch

Code guidelines

This implementation is based on PyTorch (1.5.0) in Python (3.8).

It enables running simulated distributed optimization with a master node and any number of workers, based on the PyTorch SGD optimizer with gradient compression. Communication can be compressed at both the worker and the master level. Error feedback is also supported. For more details, please see our manuscript. A minimal sketch of the overall idea is shown below.
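The following is an illustrative sketch only, not the repository's actual API: it simulates two workers that compress their gradients with top-k sparsification and keep an error-feedback memory, while a master averages the compressed messages and performs a plain SGD step. All names here (`top_k`, `Worker`, the error buffers) are assumptions for illustration.

```python
import torch


def top_k(x, k):
    """Keep the k largest-magnitude entries of x, zero out the rest."""
    flat = x.flatten()
    idx = flat.abs().topk(k).indices
    out = torch.zeros_like(flat)
    out[idx] = flat[idx]
    return out.view_as(x)


class Worker:
    """Hypothetical worker that compresses gradients with error feedback."""

    def __init__(self, shape, k):
        self.k = k
        self.error = torch.zeros(shape)  # error-feedback memory

    def compress_grad(self, grad):
        corrected = grad + self.error          # add back previously dropped mass
        compressed = top_k(corrected, self.k)  # communicate only k entries
        self.error = corrected - compressed    # remember what was dropped
        return compressed


# Toy usage: one parameter vector, two workers, SGD step on the master.
torch.manual_seed(0)
param = torch.randn(10)
workers = [Worker(param.shape, k=3) for _ in range(2)]
lr = 0.1

for step in range(5):
    # Each worker computes a (here, synthetic) local gradient and compresses it.
    local_grads = [param + 0.01 * torch.randn_like(param) for _ in workers]
    messages = [w.compress_grad(g) for w, g in zip(workers, local_grads)]
    # Master averages the compressed gradients and applies the SGD update.
    avg_grad = torch.stack(messages).mean(dim=0)
    param -= lr * avg_grad
```

In the actual implementation, the compression operator and error feedback can be applied on the master side as well; the sketch only shows the worker-side path.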

Installation

To install the requirements, run:

$ pip install -r requirements.txt

Example Notebook

To run our code, see the example notebook.

Citing

In case you find this code useful, please consider citing:

@article{horvath2020better,
  title={A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning},
  author={Horv\'{a}th, Samuel and Richt\'{a}rik, Peter},
  journal={arXiv preprint arXiv:2006.11077},
  year={2020}
}

License

License: MIT
