Rahul V's starred repositories
pytorch-tutorial
PyTorch Tutorial for Deep Learning Researchers
tensorflow-wavenet
A TensorFlow implementation of DeepMind's WaveNet paper
improved_wgan_training
Code for reproducing experiments in "Improved Training of Wasserstein GANs"
Deep-Learning-for-Medical-Applications
Deep Learning Papers on Medical Image Analysis
kaggle_ndsb2017
Kaggle Data Science Bowl 2017
pytorch-maml
PyTorch implementation of MAML: https://arxiv.org/abs/1703.03400
Deep-MRI-Reconstruction
Deep Cascade of Convolutional Neural Networks for MR Image Reconstruction: Implementation & Demo
BEGAN-pytorch
in progress
GAN-Sandbox
A vanilla GAN implemented on top of Keras/TensorFlow, enabling rapid experimentation and research. Branches correspond to implementations of stable GAN variations (e.g. ACGAN, InfoGAN) and other promising GAN variants, such as conditional and Wasserstein GANs.
deligan
This project is an implementation of the Generative Adversarial Network proposed in our CVPR 2017 paper, "DeLiGAN: Generative Adversarial Networks for Diverse and Limited Data". DeLiGAN is a simple but effective modification of the GAN framework that aims to improve performance on datasets which are diverse yet small in size.
very-deep-convnets-raw-waveforms
Tensorflow - Very Deep Convolutional Neural Networks For Raw Waveforms - https://arxiv.org/pdf/1610.00087.pdf
MixtureOfExperts
Master's thesis. Code written in Python (Keras with TensorFlow backend).
which-of-your-friends-are-on-tinder
Discover which of your Facebook friends are on Tinder!
ECG_ArrhythmiaDetection
Notebook to create images from raw ECG values from the MIT-BIH database
Expert-Gate
We introduce a model of lifelong learning based on a Network of Experts. New tasks / experts are learned and added to the model sequentially, building on what was learned before. To ensure scalability of this process, data from previous tasks cannot be stored and hence is not available when learning a new task. A critical issue in such a context, not addressed in the literature so far, is deciding which expert to deploy at test time.

We introduce a set of gating autoencoders that learn a representation for the task at hand and, at test time, automatically forward the test sample to the relevant expert. This also brings memory efficiency, as only one expert network has to be loaded into memory at any given time. Further, the autoencoders inherently capture the relatedness of one task to another, which can be used to select the most relevant prior model for training a new expert (with fine-tuning or learning without forgetting). We evaluate our method on image classification and video prediction problems.
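The gating idea above (route each test sample to the expert whose task-specific autoencoder reconstructs it best) can be sketched in a few lines. This is not code from the Expert-Gate repo: it is a minimal NumPy illustration that stands in linear (PCA-style) autoencoders for the learned ones, with all function names and the toy data invented for the example.

```python
import numpy as np

def fit_linear_autoencoder(X, k=2):
    """Fit a linear autoencoder for one task: mean + top-k principal directions.
    (Stand-in for the per-task gating autoencoders in Expert-Gate.)"""
    mu = X.mean(axis=0)
    # SVD of the centered data gives the optimal linear encoder/decoder
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def reconstruction_error(x, model):
    """Squared reconstruction error of sample x under one task's autoencoder."""
    mu, W = model
    z = (x - mu) @ W.T        # encode into the task subspace
    x_hat = mu + z @ W        # decode back to input space
    return np.sum((x - x_hat) ** 2)

def select_expert(x, gates):
    """Forward the sample to the expert whose autoencoder reconstructs it best."""
    errors = [reconstruction_error(x, g) for g in gates]
    return int(np.argmin(errors))

# Toy demo: two "tasks" whose data live in different subspaces of R^3
rng = np.random.default_rng(0)
task_a = rng.normal(size=(200, 2)) @ np.array([[1., 0., 0.], [0., 1., 0.]])
task_b = rng.normal(size=(200, 2)) @ np.array([[0., 0., 1.], [0., 1., 0.]])
gates = [fit_linear_autoencoder(task_a), fit_linear_autoencoder(task_b)]

sample = np.array([0.0, 0.5, 2.0])   # lies (mostly) in task B's subspace
print(select_expert(sample, gates))  # → 1 (task B's expert)
```

The memory-efficiency point in the description follows directly: only the tiny gating models need to stay resident; the chosen expert network is loaded on demand once `select_expert` has picked it.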