anvinhnguyendinh / MangaColorizationCGAN

Manga Colorization with Conditional Generative Adversarial Network


We partially modified the repo https://github.com/ImagingLab/Colorizing-with-GANs for our self-defined project task of manga colorization.

From here on, we keep the original repository's README.

In this work, we generalize the colorization procedure using a conditional Deep Convolutional Generative Adversarial Network (DCGAN), as suggested by [Pix2Pix]. The network is trained on the CIFAR-10 and Places365 datasets. Some of the results from the Places365 dataset are shown here.

Prerequisites

  • Linux
  • Tensorflow 1.7
  • NVIDIA GPU (12G or 24G memory) + CUDA cuDNN

Getting Started

Installation

  • Clone this repo:
git clone https://github.com/ImagingLab/Colorizing-with-GANs.git
cd Colorizing-with-GANs
pip install -r requirements.txt

Dataset

  • We use the CIFAR-10 and Places365 datasets. To train a model on the full dataset, download the datasets from their official websites. After downloading, put them under the dataset folder.
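Assuming the --dataset-path arguments used in the training commands below (the exact folder names are otherwise up to you), the expected layout would be:

dataset/
  cifar10/
  places365/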

Training

  • To train the model, run the main.py script:
python main.py
  • To train the model on the places365 dataset with tuned hyperparameters:
python train.py \
  --seed 100 \
  --dataset places365 \
  --dataset-path ./dataset/places365 \
  --checkpoints-path ./checkpoints \
  --batch-size 16 \
  --epochs 10 \
  --lr 3e-4 \
  --augment True
  
  • To train the model on the cifar10 dataset with tuned hyperparameters:
python train.py \
  --seed 100 \
  --dataset cifar10 \
  --dataset-path ./dataset/cifar10 \
  --checkpoints-path ./checkpoints \
  --batch-size 128 \
  --epochs 200 \
  --lr 3e-4 \
  --lr-decay-steps 5e4 \
  --augment True
  

Evaluate

  • To evaluate the model quantitatively on the test set, run the test-eval.py script:
python test-eval.py

Turing Test

  • To evaluate the model qualitatively using human perception, run test-turing.py:
python test-turing.py
  • To apply a time-based Turing test (2-second decision time), run:
python test-turing.py --test-delay 2

Method

Generative Adversarial Network

Both the generator and the discriminator use CNNs. The generator is trained to minimize the probability that the discriminator makes a correct prediction on generated data, while the discriminator is trained to maximize the probability of assigning the correct label. This is presented as a single minimax game problem:
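In the standard notation of Goodfellow et al., with y a color image drawn from the data and z a noise vector, this objective reads:

\min_G \max_D \; \mathbb{E}_{y \sim p_{data}(y)}\big[\log D(y)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]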

In our model, we have redefined the generator's cost function by maximizing the probability of the discriminator being mistaken, as opposed to minimizing the probability of the discriminator being correct. In addition, the cost function was further modified by adding an L1-based regularizer. This theoretically preserves the structure of the original images and prevents the generator from assigning arbitrary colors to pixels just to fool the discriminator:
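Following the formulation in the cited paper, with y the ground-truth color image and \lambda weighting the regularizer, the modified generator cost takes the form (conditioning on the grayscale input is introduced in the next section):

\min_{\theta_G} J^{(G)}(\theta_D, \theta_G) = -\mathbb{E}_z\big[\log D(G(z))\big] + \lambda \, \lVert G(z) - y \rVert_1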

Conditional GAN

In a traditional GAN, the input of the generator is randomly generated noise data z. However, this approach is not applicable to the automatic colorization problem because of the nature of its inputs: the generator must accept grayscale images rather than noise. This problem was addressed by using a variant of GANs called conditional generative adversarial networks (cGANs). Since no noise is introduced, the input of the generator is treated as zero noise with the grayscale input as a prior:
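Writing the zero noise as 0_z and the grayscale input as x, the generator cost from the previous section becomes, in the notation of the cited paper:

\min_{\theta_G} J^{(G)}(\theta_D, \theta_G) = -\mathbb{E}_z\big[\log D(G(0_z \mid x))\big] + \lambda \, \lVert G(0_z \mid x) - y \rVert_1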

The discriminator receives colored images from both the generator and the original data, along with the grayscale input as the condition, and tries to tell which pair contains the true colored image:
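In the same notation, the discriminator maximizes:

\max_{\theta_D} J^{(D)}(\theta_D, \theta_G) = \mathbb{E}_y\big[\log D(y \mid x)\big] + \mathbb{E}_z\big[\log\big(1 - D(G(0_z \mid x) \mid x)\big)\big]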

Networks Architecture

The architecture of the generator is inspired by U-Net: the model is symmetric, with n encoding units and n decoding units. The contracting path consists of 4x4 convolution layers with stride 2 for downsampling, each followed by batch normalization and a Leaky-ReLU activation function with slope 0.2. The number of channels is doubled after each step. Each unit in the expansive path consists of a 4x4 transposed convolution layer with stride 2 for upsampling, a concatenation with the activation map of the mirroring layer in the contracting path, followed by batch normalization and a ReLU activation function. The last layer of the network is a 1x1 convolution, which is equivalent to a cross-channel parametric pooling layer. We use the tanh function for the last layer.
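As a minimal sketch of this generator in TensorFlow 1.x (matching the Tensorflow 1.7 prerequisite; function names, depth, and channel counts here are illustrative, not the repo's actual code):

import tensorflow as tf

def encoder_unit(x, channels, training):
    # 4x4 convolution with stride 2, then batch norm and Leaky-ReLU (slope 0.2)
    x = tf.layers.conv2d(x, channels, kernel_size=4, strides=2, padding='same')
    x = tf.layers.batch_normalization(x, training=training)
    return tf.nn.leaky_relu(x, alpha=0.2)

def decoder_unit(x, skip, channels, training):
    # 4x4 transposed convolution with stride 2, concatenation with the
    # mirroring encoder activation, then batch norm and ReLU
    x = tf.layers.conv2d_transpose(x, channels, kernel_size=4, strides=2, padding='same')
    x = tf.concat([x, skip], axis=-1)
    x = tf.layers.batch_normalization(x, training=training)
    return tf.nn.relu(x)

def generator(grayscale, training):
    # contracting path: channels double after each downsampling step
    e1 = encoder_unit(grayscale, 64, training)
    e2 = encoder_unit(e1, 128, training)
    e3 = encoder_unit(e2, 256, training)
    e4 = encoder_unit(e3, 512, training)
    # expansive path with skip connections to the mirroring layers
    d3 = decoder_unit(e4, e3, 256, training)
    d2 = decoder_unit(d3, e2, 128, training)
    d1 = decoder_unit(d2, e1, 64, training)
    d0 = tf.nn.relu(tf.layers.conv2d_transpose(d1, 64, kernel_size=4, strides=2, padding='same'))
    # final 1x1 convolution (cross-channel parametric pooling) with tanh output
    return tf.tanh(tf.layers.conv2d(d0, 3, kernel_size=1, padding='same'))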

For the discriminator, we use an architecture similar to the baseline's contracting path: a series of 4x4 convolution layers with stride 2, with the number of channels doubled after each downsampling. All convolution layers are followed by batch normalization and leaky ReLU activation with slope 0.2. After the last layer, a convolution is applied to map to a one-dimensional output, followed by a sigmoid function that returns the probability of the input being real or fake.
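A corresponding sketch of the discriminator under the same assumptions (again illustrative, not the repo's actual code):

import tensorflow as tf

def discriminator(grayscale, color, training):
    # condition on the grayscale input by channel-wise concatenation
    x = tf.concat([grayscale, color], axis=-1)
    channels = 64
    for _ in range(4):
        # 4x4 convolution, stride 2; channels double after each downsampling
        x = tf.layers.conv2d(x, channels, kernel_size=4, strides=2, padding='same')
        x = tf.layers.batch_normalization(x, training=training)
        x = tf.nn.leaky_relu(x, alpha=0.2)
        channels *= 2
    # map to a one-dimensional output and squash it to a probability
    # of the (grayscale, color) pair being real
    x = tf.layers.conv2d(x, 1, kernel_size=4, strides=1, padding='same')
    return tf.sigmoid(x)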

Places365 Results

Colorization results with Places365. (a) Grayscale. (b) Original Image. (c) Colorized with GAN.

Citation

If you use this code for your research, please cite our paper Image Colorization with Generative Adversarial Networks:

@article{nazeri2018image,
  title={Image Colorization with Generative Adversarial Networks},
  author={Nazeri, Kamyar and Ng, Eric and Ebrahimi, Mehran},
  journal={arXiv preprint arXiv:1803.05400},
  year={2018}
}

License: Apache License 2.0

