elu697 / image-super-resolution

Super-scale your images and run experiments with Residual Dense and Adversarial Networks.

Home Page: https://idealo.github.io/image-super-resolution/


Image Super-Resolution (ISR)


The goal of this project is to upscale and improve the quality of low resolution images.

This project contains Keras implementations of different Residual Dense Networks for Single Image Super-Resolution (ISR) as well as scripts to train these networks using content and adversarial loss components.

The implemented networks include:

  • The super-scaling Residual Dense Network (RDN) described in Residual Dense Network for Image Super-Resolution
  • The super-scaling Residual in Residual Dense Network (RRDN) described in ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks
  • A multi-output version of the Keras VGG19 network used for deep feature extraction in the perceptual loss (Cut_VGG19)
  • A discriminator network used for the adversarial training (Discriminator)

Read the full documentation at: https://idealo.github.io/image-super-resolution/.

Docker scripts and Google Colab notebooks are available to carry out training and prediction. We also provide scripts to facilitate training on the cloud with AWS and nvidia-docker with only a few commands.

ISR is compatible with Python 3.6 and is distributed under the Apache 2.0 license. We welcome any kind of contribution. If you wish to contribute, please see the Contribute section.

Contents

  • Sample Results
  • Installation
  • Usage
  • Additional Information
  • Contribute
  • Citation
  • Maintainers
  • Copyright

Sample Results

The samples are upscaled with a factor of two. The weights used to produce these images are available under sample_weights (see Additional Information). They are stored on git lfs; if you want to download the weights, you need to run git lfs pull after cloning the repository.

The original low resolution image (left), the super scaled output of the network (center) and the result of the baseline scaling obtained with GIMP bicubic scaling (right).



Below is a comparison of different methods on a noisy image: the baseline, bicubic scaling; the RDN network trained using a pixel-wise content loss (PSNR-driven); and the same network re-trained on a compressed dataset using VGG19 content and adversarial loss components (VGG+GANs). The weights used here are available in this repo.

  • Bicubic up-scaling (baseline).
  • RDN trained with a pixel-wise content loss (PSNR-driven).
  • RDN trained with VGG content and adversarial loss components (VGG+GANs).

Installation

There are two ways to install the Image Super-Resolution package:

  • Install ISR from PyPI (recommended):
pip install ISR
  • Install ISR from the GitHub source:
git clone https://github.com/idealo/image-super-resolution
cd image-super-resolution
git lfs pull
python setup.py install
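
A quick sanity check for either install method is to import the model classes used in the examples below (this only verifies that the package is importable, not that any weights are present):

from ISR.models import RDN, RRDN
print('ISR import OK')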

Usage

Prediction

Load image and prepare it

import numpy as np
from PIL import Image

img = Image.open('data/input/test_images/sample_image.jpg')
lr_img = np.array(img)

Load model and run prediction

from ISR.models import RDN

rdn = RDN(arch_params={'C':6, 'D':20, 'G':64, 'G0':64, 'x':2})
rdn.model.load_weights('weights/sample_weights/rdn-C6-D20-G64-G064-x2/ArtefactCancelling/rdn-C6-D20-G64-G064-x2_ArtefactCancelling_epoch219.hdf5')

sr_img = rdn.predict(lr_img)
Image.fromarray(sr_img)
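
To keep the result, convert the returned array back to a PIL image and save it; the output path below is only an example:

# Save the super-resolved image (output path is just an example)
Image.fromarray(sr_img).save('data/output/sample_image_sr.jpg')

Depending on the installed ISR version, predict can also process large images patch by patch to avoid memory allocation errors, e.g. rdn.predict(lr_img, by_patch_of_size=50); check your version before relying on this option.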

Training

Create the models

from ISR.models import RRDN
from ISR.models import Discriminator
from ISR.models import Cut_VGG19

lr_train_patch_size = 40
layers_to_extract = [5, 9]
scale = 2
hr_train_patch_size = lr_train_patch_size * scale

rrdn  = RRDN(arch_params={'C':4, 'D':3, 'G':64, 'G0':64, 'T':10, 'x':scale}, patch_size=lr_train_patch_size)
f_ext = Cut_VGG19(patch_size=hr_train_patch_size, layers_to_extract=layers_to_extract)
discr = Discriminator(patch_size=hr_train_patch_size, kernel_size=3)

Create a Trainer object using the desired settings and give it the models (f_ext and discr are optional)

from ISR.train import Trainer

loss_weights = {
  'generator': 0.0,
  'feature_extractor': 0.0833,
  'discriminator': 0.01,
}

trainer = Trainer(
    generator=rrdn,
    discriminator=discr,
    feature_extractor=f_ext,
    lr_train_dir='low_res/training/images',
    hr_train_dir='high_res/training/images',
    lr_valid_dir='low_res/validation/images',
    hr_valid_dir='high_res/validation/images',
    loss_weights=loss_weights,
    dataname='image_dataset',
    logs_dir='./logs',
    weights_dir='./weights',
    weights_generator=None,
    weights_discriminator=None,
    n_validation=40,
    lr_decay_frequency=30,
    lr_decay_factor=0.5,
)

Start training

trainer.train(
    epochs=80,
    steps_per_epoch=500,
    batch_size=16,
)
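
During training the Trainer saves checkpoints under weights_dir. As a rough sketch (the run directory and file names below are placeholders; the actual names are generated by the Trainer and depend on your settings), the resulting generator weights can be loaded back into an RRDN built with the same arch_params for prediction:

# Placeholder path: substitute the checkpoint written to ./weights during your run
rrdn.model.load_weights('./weights/<run_directory>/<generator_checkpoint>.hdf5')
sr_img = rrdn.predict(lr_img)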

Additional Information

You can read about how we trained these network weights in our Medium posts.

RDN Pre-trained weights

The weights of the RDN network trained on the DIV2K dataset are available in weights/sample_weights/rdn-C6-D20-G64-G064-x2/PSNR-driven/rdn-C6-D20-G64-G064-x2_PSNR_epoch086.hdf5.
The model was trained using C=6, D=20, G=64, G0=64 as parameters (see architecture for details) for 86 epochs of 1000 batches of 8 32x32 augmented patches taken from LR images.
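
Putting this together with the prediction example above, a minimal sketch for using these PSNR-driven weights (same parameters as listed, weights path as given above) is:

from ISR.models import RDN

rdn = RDN(arch_params={'C': 6, 'D': 20, 'G': 64, 'G0': 64, 'x': 2})
rdn.model.load_weights('weights/sample_weights/rdn-C6-D20-G64-G064-x2/PSNR-driven/rdn-C6-D20-G64-G064-x2_PSNR_epoch086.hdf5')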

The artefact cancelling weights, obtained with a combination of different training sessions using different datasets and a perceptual loss with VGG19 and GAN components, can be found at weights/sample_weights/rdn-C6-D20-G64-G064-x2/ArtefactCancelling/rdn-C6-D20-G64-G064-x2_ArtefactCancelling_epoch219.hdf5. We recommend using these weights only when cancelling compression artefacts is a desirable effect.

RDN Network architecture

The main parameters of the architecture structure are:

  • D - number of Residual Dense Blocks (RDB)
  • C - number of convolutional layers stacked inside a RDB
  • G - number of feature maps of each convolutional layer inside the RDBs
  • G0 - number of feature maps for convolutions outside of RDBs and of each RDB output


source: Residual Dense Network for Image Super-Resolution

RRDN Network architecture

The main parameters of the architecture structure are:

  • T - number of Residual in Residual Dense Blocks (RRDB)
  • D - number of Residual Dense Blocks (RDB) inside each RRDB
  • C - number of convolutional layers stacked inside a RDB
  • G - number of feature maps of each convolutional layer inside the RDBs
  • G0 - number of feature maps for convolutions outside of RDBs and of each RDB output


source: ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks

Contribute

We welcome all kinds of contributions: models trained on different datasets, new model architectures, and/or hyperparameter combinations that improve the performance of the currently published model.

We will publish the performance of new models in this repository.

See the Contribution guide for more details.

Citation

Please cite our work in your publications if it helps your research.

@misc{cardinale2018isr,
  title={ISR},
  author={Francesco Cardinale et al.},
  year={2018},
  howpublished={\url{https://github.com/idealo/image-super-resolution}},
}

Maintainers

Copyright

See LICENSE for details.
