hwalsuklee's starred repositories
tensorflow
An Open Source Machine Learning Framework for Everyone
neural-style
Torch implementation of neural style algorithm
fast-style-transfer
TensorFlow CNN for fast style transfer ⚡🖥🎨🖼
TensorFlow-Tutorials
TensorFlow Tutorials with YouTube Videos
py-faster-rcnn
Faster R-CNN (Python implementation) -- see https://github.com/ShaoqingRen/faster_rcnn for the official MATLAB version
deep-residual-networks
Deep Residual Learning for Image Recognition
tensorpack
A Neural Net Training Interface on TensorFlow, with focus on speed + flexibility
TensorFlow-Tutorials
Simple tutorials using Google's TensorFlow Framework
neural-style
Neural style in TensorFlow! 🎨
fast-neural-style
Feedforward style transfer
PaintsChainer
line drawing colorization using chainer
neural-style-tf
TensorFlow (Python API) implementation of Neural Style
cnn-benchmarks
Benchmarks for popular CNN models
neural_artistic_style
Neural Artistic Style in Python
TensorFlow-Tutorials
Provides source code for practicing TensorFlow step by step, from the basics to applications
artistic-videos
Torch implementation for the paper "Artistic style transfer for videos"
tensorflow-resnet
ResNet model in TensorFlow
tensorflow-mnist-tutorial
Sample code for "Tensorflow and deep learning, without a PhD" presentation and code lab.
wide-residual-networks
3.8% and 18.3% on CIFAR-10 and CIFAR-100
texture_nets
Code for "Texture Networks: Feed-forward Synthesis of Textures and Stylized Images" paper.
densenet-tensorflow
DenseNet Implementation in Tensorflow
tensorflow-style-transfer
A simple, concise tensorflow implementation of style transfer (neural style)
tensorflow-mnist-cnn
MNIST classification using a convolutional neural network (CNN). Various techniques such as data augmentation, dropout, and batch normalization are implemented.
domain_adversarial_neural_network
Domain Adaptation Representation Learning Algorithm (as published in JMLR 2016)
tensorflow-mnist-MLP-batch_normalization-weight_initializers
MNIST classification using a multi-layer perceptron (MLP) with two hidden layers. Several weight initializers and batch normalization are implemented.