MedDeblur

This is the official implementation of the state-of-the-art medical image deblurring method "MedDeblur: Medical Image Deblurring with Residual Dense Spatial-Asymmetric Attention". [Click Here] to download the full paper (PDF).

Please consider citing this paper as follows:

@article{sharif2022meddeblur,
  title={MedDeblur: Medical Image Deblurring with Residual Dense Spatial-Asymmetric Attention},
  author={Sharif, SMA and Naqvi, Rizwan Ali and Mehmood, Zahid and Hussain, Jamil and Ali, Ahsan and Lee, Seung-Won},
  journal={Mathematics},
  volume={11},
  number={1},
  pages={115},
  year={2022},
  publisher={MDPI}
}

Network Architecture

Multi-scale Network

Figure: Overview of the proposed network for learning medical image deblurring. The proposed method incorporates a novel RD-SAM block in a scale-recurrent network to learn salient features and improve deblurring performance.
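
To make the scale-recurrent idea concrete, here is a minimal sketch (not the official code) of a coarse-to-fine forward pass: the blurry input is processed at progressively finer scales, and the upsampled estimate from the coarser scale is concatenated with the current-scale input. DeblurNet is a hypothetical stand-in for the per-scale network, which in the actual model is built from RD-SAM blocks (see the next figure).

import torch
import torch.nn as nn
import torch.nn.functional as F

class DeblurNet(nn.Module):
    """Hypothetical stand-in for the per-scale network (the real model uses RD-SAM blocks)."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

def scale_recurrent_deblur(net, blurry, num_scales=3):
    """Run the network coarse-to-fine, conditioning each scale on the
    upsampled estimate from the previous (coarser) scale."""
    outputs, prev = [], None
    for s in reversed(range(num_scales)):  # coarsest scale first
        if s > 0:
            x = F.interpolate(blurry, scale_factor=1.0 / 2 ** s,
                              mode='bilinear', align_corners=False)
        else:
            x = blurry  # finest scale uses the full-resolution input
        if prev is None:
            prev = x  # no coarser estimate exists at the first (coarsest) scale
        else:
            prev = F.interpolate(prev, size=x.shape[-2:],
                                 mode='bilinear', align_corners=False)
        out = net(torch.cat([x, prev], dim=1))  # 3 input + 3 recurrent channels
        outputs.append(out)
        prev = out
    return outputs  # coarse-to-fine predictions

preds = scale_recurrent_deblur(DeblurNet(), torch.randn(1, 3, 128, 128))
print([tuple(p.shape) for p in preds])  # [(1, 3, 32, 32), (1, 3, 64, 64), (1, 3, 128, 128)]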

Proposed RD-SAM

Figure: Overview of the proposed RD-SAM. It comprises a residual dense block followed by a spatial-asymmetric attention module. (a) Proposed RD-SAM. (b) Spatial-asymmetric attention module.
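
A rough PyTorch sketch of the two components named in the caption follows: a residual dense block and a spatial-asymmetric attention module built from 1xk and kx1 convolutions. The layer count, channel widths, and attention formulation here are assumptions for illustration; refer to the paper for the authors' exact definition.

import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Densely connected 3x3 convolutions with a local residual connection
    (assumed 4 layers with growth rate g)."""
    def __init__(self, ch=64, g=32, layers=4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(ch + i * g, g, 3, padding=1) for i in range(layers))
        self.fuse = nn.Conv2d(ch + layers * g, ch, 1)  # 1x1 local feature fusion
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(self.act(conv(torch.cat(feats, dim=1))))
        return x + self.fuse(torch.cat(feats, dim=1))  # local residual learning

class SpatialAsymmetricAttention(nn.Module):
    """Spatial attention map from parallel asymmetric (1xk / kx1) convolutions
    (assumed formulation)."""
    def __init__(self, ch=64, k=3):
        super().__init__()
        self.horiz = nn.Conv2d(ch, ch, (1, k), padding=(0, k // 2))
        self.vert = nn.Conv2d(ch, ch, (k, 1), padding=(k // 2, 0))
        self.mix = nn.Conv2d(ch, ch, 1)

    def forward(self, x):
        attn = torch.sigmoid(self.mix(self.horiz(x) + self.vert(x)))
        return x * attn  # re-weight features spatially

class RDSAM(nn.Module):
    """Residual dense block followed by spatial-asymmetric attention."""
    def __init__(self, ch=64):
        super().__init__()
        self.rdb = ResidualDenseBlock(ch)
        self.saa = SpatialAsymmetricAttention(ch)

    def forward(self, x):
        return self.saa(self.rdb(x))

print(RDSAM()(torch.randn(1, 64, 48, 48)).shape)  # torch.Size([1, 64, 48, 48])

The 1xk / kx1 pair covers horizontal and vertical context with fewer parameters than a full kxk kernel, which is presumably the motivation behind the asymmetric design.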

Medical Image Deblurring Results

Qualitative Comparison

Figure: Performance of existing medical image deblurring methods in removing blind motion blur. The existing deblurring methods largely fail to remove blur from medical images. (a) Blurry input. (b) Result obtained by TEMImageNet. (c) Result obtained by ZhaoNet. (d) Result obtained by Deep Deblur. (e) Result obtained by SRN Deblur [13]. (f) Proposed method.

Quantitative Comparison

Table: Objective comparison of deep deblurring methods for medical image deblurring (MID). We evaluated each method with PSNR, SSIM, and deltaE on every test image, averaged the per-image scores for each dataset to observe performance on the respective modality, and then summarized each method by the mean PSNR, SSIM, and deltaE obtained across the individual modalities.
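
For reference, the per-image scores can be computed along the following lines. This is a minimal sketch using scikit-image, assuming 8-bit RGB images and CIEDE2000 for deltaE; it may differ from the authors' evaluation script.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from skimage.color import rgb2lab, deltaE_ciede2000

def evaluate_pair(restored: np.ndarray, reference: np.ndarray):
    """Return (PSNR, SSIM, mean deltaE) for one restored / ground-truth pair of uint8 RGB images."""
    psnr = peak_signal_noise_ratio(reference, restored, data_range=255)
    ssim = structural_similarity(reference, restored, channel_axis=-1, data_range=255)
    # deltaE is measured in CIELAB space; average the per-pixel CIEDE2000 distance.
    delta_e = deltaE_ciede2000(rgb2lab(reference), rgb2lab(restored)).mean()
    return psnr, ssim, delta_e

Averaging these per-image scores over a dataset gives the per-modality numbers, and averaging those gives the summary scores described above.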

Prerequisites

Python 3.8
CUDA 10.1 + CuDNN
pip
Virtual environment (optional)

Installation

Please consider using a virtual environment before continuing with the installation.

git clone https://github.com/sharif-apu/MID-DRAN.git
cd MID-DRAN
pip install -r requirement.txt

Testing

MedDeblur can be run with the pretrained weights and default settings as follows:
python main.py -i

A few test images are provided in a sub-directory under blurTest (i.e., blurTest/samples/).
In that case, the outputs will be saved in modelOutput/samples/.

To run inference with custom settings, execute the following command:
python main.py -i -s path/to/inputImages -d path/to/outputImages
Here, -s specifies the root directory of the source images (i.e., blurTest/), and -d specifies the destination root (i.e., modelOutput/).

Training

To train with your own dataset, execute:
python main.py -ts -e X -b Y
To specify the path of your training images, open mainModule/config.json and update the "trainingImagePath" entry (a small sketch follows below).
You can specify the number of epochs with the -e flag (i.e., -e 5) and the number of images per batch with the -b flag (i.e., -b 24).
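
If you prefer to set the path programmatically, the entry can also be updated with a few lines of Python. This is a small convenience sketch that only edits the "trainingImagePath" key mentioned above:

import json

CONFIG_PATH = "mainModule/config.json"

with open(CONFIG_PATH) as f:
    config = json.load(f)

config["trainingImagePath"] = "path/to/trainingImages/"  # root directory of your training images

with open(CONFIG_PATH, "w") as f:
    json.dump(config, f, indent=4)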

For transfer learning, execute:
python main.py -tr -e -b

Others

Check model configuration:
python main.py -ms
Create a new configuration file:
python main.py -c
Update the configuration file:
python main.py -u
Run the overfitting test:
python main.py -to

Contact

For any further queries, feel free to contact us at the following email addresses: apuism@gmail.com, rizwanali@sejong.ac.kr
