There are 12 repositories under the distributed-data-parallel topic.
Unofficial implementation of "TTNet: Real-time temporal and spatial video analysis of table tennis" (CVPR 2020)
An On-Chain Open-Source Platform for Rapid AI Model Productization Using Decentralized Resources with Flexibility and Scalability
Distributed training (multi-node) of a Transformer model
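For orientation, here is a minimal sketch of the standard multi-node DDP pattern that training repositories like this one build on, assuming a launch via torchrun (which sets RANK, LOCAL_RANK, and WORLD_SIZE) and using a toy Transformer with random data; it is illustrative, not any listed repository's actual code:

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # One process per GPU; torchrun supplies the rendezvous environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy stand-in model; any nn.Module is wrapped the same way.
    layer = torch.nn.TransformerEncoderLayer(d_model=256, nhead=8)
    model = torch.nn.TransformerEncoder(layer, num_layers=4).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
    for step in range(100):
        x = torch.randn(32, 16, 256, device=local_rank)  # (seq, batch, d_model)
        loss = model(x).pow(2).mean()   # placeholder objective
        opt.zero_grad()
        loss.backward()                 # DDP all-reduces gradients here
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

Launched as, e.g., torchrun --nnodes=2 --nproc_per_node=8 --rdzv_backend=c10d --rdzv_endpoint=host:29500 train.py.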
Code for "Active Learning at the ImageNet Scale". This repository implements many popular active learning algorithms and supports training with PyTorch's DistributedDataParallel (DDP).
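The acquisition step in such pipelines usually reduces to scoring unlabeled examples and taking the top-k; a generic entropy-based uncertainty sampler (illustrative only, not this repository's implementation) looks roughly like:

import torch

@torch.no_grad()
def select_most_uncertain(model, unlabeled_loader, budget, device="cuda"):
    # Assumes the loader yields (index, image) pairs for the unlabeled pool.
    model.eval()
    scores, indices = [], []
    for idx, x in unlabeled_loader:
        probs = torch.softmax(model(x.to(device)), dim=1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
        scores.append(entropy.cpu())
        indices.append(idx)
    scores, indices = torch.cat(scores), torch.cat(indices)
    top = scores.topk(budget).indices   # label the budget highest-entropy points
    return indices[top]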
Unofficial implementation of "Sigmoid Loss for Language Image Pre-Training" (SigLIP).
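The paper's core idea is to replace the softmax contrastive loss with an independent binary (sigmoid) classification over every image-text pair in the batch, positive on the diagonal and negative elsewhere; a minimal sketch of that loss, with the paper's learnable scalar temperature t and bias b:

import torch
import torch.nn.functional as F

def sigmoid_contrastive_loss(img_emb, txt_emb, t, b):
    # img_emb, txt_emb: L2-normalized embeddings of shape (batch, dim).
    logits = t * img_emb @ txt_emb.T + b
    labels = 2 * torch.eye(logits.size(0), device=logits.device) - 1  # +1 diagonal, -1 elsewhere
    return -F.logsigmoid(labels * logits).sum() / logits.size(0)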
mpify is a simple API for launching Python functions on multiple ranked processes, designed to enable interactive multiprocessing experiments in Jupyter/IPython, such as distributed data parallel training over multiple GPUs.
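For comparison, the underlying spawn-per-rank pattern that such tools streamline, written with plain torch.multiprocessing (this is deliberately not mpify's own API):

import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank, world_size):
    dist.init_process_group(
        "gloo", init_method="tcp://127.0.0.1:29500",
        rank=rank, world_size=world_size,
    )
    x = torch.ones(1) * rank
    dist.all_reduce(x)   # sums across ranks: 0 + 1 + ... + (world_size - 1)
    print(f"rank {rank}: {x.item()}")
    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(worker, args=(4,), nprocs=4)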
Helmet detector based on CenterNet.
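CenterNet predicts a per-class heatmap of object centers and replaces IoU-based NMS with a max-pool peak-extraction step; a sketch of that decoding (illustrative, not necessarily this repository's exact code):

import torch
import torch.nn.functional as F

def decode_centers(heatmap, k=100):
    # heatmap: (batch, num_classes, H, W), already passed through sigmoid.
    pooled = F.max_pool2d(heatmap, kernel_size=3, stride=1, padding=1)
    peaks = heatmap * (pooled == heatmap)      # 3x3 max-pool acts as NMS
    b, c, h, w = peaks.shape
    scores, idx = peaks.view(b, -1).topk(k)    # top-k peaks per image
    cls = torch.div(idx, h * w, rounding_mode="floor")
    ys = torch.div(idx % (h * w), w, rounding_mode="floor")
    xs = idx % w
    return scores, cls, ys, xs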
A collection of template code for deep learning with PyTorch.
Demo for PyTorch distributed training.
A simulator for access strategies in distributed caching. The simulator considers a user who is equipped with several caches and receives periodic updates from them about the cached content. The problem and the algorithms implemented here are detailed in: I. Cohen, G. Einziger, R. Friedman, and G. Scalosub, “Access Strategies for Network Caching”, IEEE/ACM Transactions on Networking, 29(2), pp. 609-622, 2021.
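As a hedged illustration of the decision problem only (the paper's algorithms handle stale and false-positive indicators and are considerably more involved), a naive cost-based access rule might read:

def choose_access(item, caches, remote_cost):
    # caches: list of (access_cost, indicator) pairs, where each indicator
    # is the set of item ids from that cache's latest periodic update
    # (and may therefore be stale or wrong).
    positive = [cost for cost, indicator in caches if item in indicator]
    if positive and min(positive) < remote_cost:
        return min(positive)   # cheapest cache claiming to hold the item
    return remote_cost         # otherwise fetch from the remote server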
This repository is intended to be a template for starting new projects with PyTorch, in which deep learning models are trained and evaluated on medical imaging data.
Acceleration of a classification model for thoracic diseases