Single image deblurring with deep learning.
This is a project page for our research. Please refer to our CVPR 2017 paper for details:
Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring [paper] [supplementary] [slide]
If you find our work useful in your research or publication, please cite our work:
```
@InProceedings{Nah_2017_CVPR,
  author = {Nah, Seungjun and Kim, Tae Hyun and Lee, Kyoung Mu},
  title = {Deep Multi-Scale Convolutional Neural Network for Dynamic Scene Deblurring},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {July},
  year = {2017}
}
```
Our code is based on Torch7 and requires the following dependencies:
- torch7
- torchx
- cudnn
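As a quick sanity check (a minimal sketch, not part of the repository; it assumes a working CUDA setup), each dependency should load from a Torch prompt without errors:

```lua
-- each require throws an error if the corresponding package is missing
require 'torch'
require 'cudnn'    -- needs CUDA and the cuDNN library installed
require 'torchx'
print('All dependencies found.')
```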
To run the demo, download and extract the trained models into the "experiment" folder, then run one of the following commands from the "code" folder:
```bash
qlua -i demo.lua -load -save release_scale3_adv_gamma -blur_type gamma2.2
qlua -i demo.lua -load -save release_scale3_adv_lin -blur_type linear
```
The -blur_type option should match the model: gamma2.2 for the model trained on gamma-corrected blurry images, linear for the one trained on linear CRF images.
To train a model, clone this repository and download the dataset below into the "dataset" directory.
The data structure should look like "dataset/GOPRO_Large/train/GOPRxxxx_xx_xx/blur/xxxxxx.png".
Then run main.lua in the "code" directory with optional parameters.
```bash
# Train for 450 epochs, save in 'experiment/scale3'
th main.lua -nEpochs 450 -save scale3

# Load the saved model
th main.lua -load -save scale3
```
In the interactive session (qlua -i), you can deblur a directory of your own images with deblur_dir:
```lua
> blur_dir, output_dir = ...
> deblur_dir(blur_dir, output_dir)
```
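For example, with hypothetical paths (any directories work; blur_dir should contain the input images):

```lua
> blur_dir, output_dir = 'my_photos/blur', 'my_photos/deblurred'  -- hypothetical paths
> deblur_dir(blur_dir, output_dir)  -- deblurred images are written to output_dir
```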
Optional parameters are listed in opts.lua.
In this work, we propose a new dataset of realistic blurry and sharp image pairs captured with a high-speed camera. However, we do not provide blur kernels, as they are unknown.
| Statistics  | Training | Test | Total |
| ----------- | -------- | ---- | ----- |
| sequences   | 22       | 11   | 33    |
| image pairs | 2103     | 1111 | 3214  |
Download links
- GOPRO_Large : Blurry and sharp image pairs. Blurry images include both gamma-corrected and uncorrected (linear CRF) versions.
- GOPRO_Large_all : All the sharp images used to generate the blurry images. You can generate new blurry images by accumulating a varying number of sharp frames; a sketch of the procedure follows this list.
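Blur accumulation amounts to averaging consecutive sharp frames in linear intensity. Below is a minimal Torch sketch, not the exact script we used; the file layout, the frame count, and the gamma-2.2 approximation of the camera response function are assumptions:

```lua
require 'image'

local gamma = 2.2   -- assumed camera response function (approximated as a gamma curve)
local nFrames = 7   -- number of sharp frames to average; more frames give stronger blur

local sum
for i = 1, nFrames do
  -- hypothetical file layout: sharp/000001.png, sharp/000002.png, ...
  local frame = image.load(('sharp/%06d.png'):format(i), 3, 'float')
  local linear = torch.pow(frame, gamma)   -- invert the CRF: gamma-corrected -> linear intensity
  sum = sum and sum:add(linear) or linear
end

local blurry = torch.pow(sum:div(nFrames), 1 / gamma)   -- average, then re-apply the CRF
image.save('blur.png', blurry)
```

For frames stored in linear CRF, the two torch.pow calls can simply be dropped.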
Here are some examples.
This project is partially funded by Microsoft Research Asia.