Fabio-Gil-Z / IRUNet

Blind microscopy image denoising with a deep residual and multiscale encoder/decoder network.

Blind microscopy image denoising with a deep residual and multiscale encoder/decoder network

Abstract

In computer-aided diagnosis (CAD) focused on microscopy, denoising improves the quality of image analysis. In general, the accuracy of this process may depend both on the experience of the microscopist and on the sensitivity and specificity of the equipment. A medical image can be corrupted both by intrinsic noise, due to device limitations, and by extrinsic signal perturbations during image acquisition. Nowadays, CAD deep learning applications pre-process images with image denoising models to reinforce learning and prediction. In this work, an innovative and lightweight deep multiscale convolutional encoder-decoder neural network is proposed. Specifically, the encoder uses deterministic mapping to map features into a hidden representation. Then, the latent representation is rebuilt to generate the reconstructed denoised image. Residual learning strategies are used to improve and accelerate the training process, using skip connections to bridge across convolutional and deconvolutional layers. The proposed model reaches an average PSNR of 38.38 and SSIM of 0.98 on a test set of 57,458 images, outperforming state-of-the-art models in the same application domain.

IRUNet - Paper

IRUNet architecture
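The full architecture is described in the paper and implemented in the Code folder; the snippet below is only a minimal Keras sketch of the idea summarized in the abstract: an encoder mapping the noisy image to a latent representation, a decoder rebuilding the denoised image, and skip connections bridging convolutional and deconvolutional layers. Layer counts and sizes are illustrative assumptions, not the published configuration.

import tensorflow as tf
from tensorflow.keras import layers, Model

def tiny_encoder_decoder(filters=16):
    """Minimal sketch: encoder -> latent -> decoder with residual skip connections."""
    inputs = layers.Input(shape=(None, None, 3))

    # Encoder: map the noisy image into a hidden representation.
    e1 = layers.Conv2D(filters, 3, padding="same", activation="relu")(inputs)
    e2 = layers.Conv2D(filters * 2, 3, strides=2, padding="same", activation="relu")(e1)
    latent = layers.Conv2D(filters * 4, 3, strides=2, padding="same", activation="relu")(e2)

    # Decoder: rebuild the latent representation into the denoised image,
    # with skip connections bridging encoder and decoder layers.
    d2 = layers.Conv2DTranspose(filters * 2, 3, strides=2, padding="same", activation="relu")(latent)
    d2 = layers.Add()([d2, e2])
    d1 = layers.Conv2DTranspose(filters, 3, strides=2, padding="same", activation="relu")(d2)
    d1 = layers.Add()([d1, e1])

    outputs = layers.Conv2D(3, 3, padding="same")(d1)
    return Model(inputs, outputs, name="tiny_irunet_sketch")

model = tiny_encoder_decoder()
model.compile(optimizer="adam", loss="mae")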

Cite this paper

@INPROCEEDINGS{9630502, 
author={Gil Zuluaga, Fabio Hernan and Bardozzo, Francesco and Rios Patino, Jorge Ivan and Tagliaferri, Roberto},
booktitle={2021 43rd Annual International Conference of the IEEE Engineering in Medicine Biology Society (EMBC)},
title={Blind microscopy image denoising with a deep residual and multiscale encoder/decoder network},
year={2021},
volume={},
number={},
pages={3483-3486},
doi={10.1109/EMBC46164.2021.9630502}
}

Original Dataset Kaggle

kaggle dataset description

Created Datasets for Training and Testing

self created datasets

For model training

Histopathologic Cancer Detection dataset "train" was used to create the training set of images (clean,noise) named noise_0_to_50 using multipleImageNoiseCreator.py program from Util folder, meaning with noise ranges between σ[0,50].

For model testing

Histopathologic Cancer Detection dataset "test" was used to create three testing sets of images (clean,noise) named: noise_10, noise_25 and noise_50 using multipleImageNoiseCreator.py program from Util folder with a fixed noise:
σ = 10, σ = 25 , σ = 50.
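For reference, the only difference between the training and testing sets is how σ is chosen per image. A minimal sketch of that choice (the actual logic lives in multipleImageNoiseCreator.py and may differ; the seed value is an assumption):

import numpy as np

rng = np.random.default_rng(seed=42)  # seeded for reproducibility (seed value is an assumption)

def pick_sigma(split):
    """Training draws a random sigma in [0, 50]; each test set uses a fixed sigma."""
    if split == "noise_0_to_50":                      # training: blind noise level
        return rng.uniform(0, 50)
    return {"noise_10": 10, "noise_25": 25, "noise_50": 50}[split]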

Sample images from the Histopathologic Cancer Detection dataset

Kaggle dataset Sample Images

Denoising results

Denoising results

Denoising results

Denoising results

The following are the results obtained by loading the weights of the current IRUNet model and training it for 70 additional epochs.

It can be appreciated that the loss barely changes (cyan), while the PSNR keeps improving (orange). This is why we select the best model by PSNR instead of loss. Additionally, the best model so far was obtained at epoch 65, which suggests the model can still be improved.
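Since the loss plateaus while PSNR keeps improving, model selection is done on PSNR. Below is a minimal sketch of how that can be wired in Keras; the metric name, checkpoint path and the commented compile/fit calls are illustrative assumptions, not the exact code in main.py.

import tensorflow as tf

def psnr_metric(y_true, y_pred):
    # Images are assumed to be scaled to [0, 1].
    return tf.reduce_mean(tf.image.psnr(y_true, y_pred, max_val=1.0))

# Keep the weights of the epoch with the highest validation PSNR,
# instead of the epoch with the lowest loss.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "Models/best_psnr.h5",            # path is a placeholder
    monitor="val_psnr_metric",
    mode="max",
    save_best_only=True,
)

# model.compile(optimizer="adam", loss="mae", metrics=[psnr_metric])
# model.fit(train_ds, validation_data=val_ds, epochs=70, callbacks=[checkpoint])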

Denoising results

Requirements

TensorFlow 2.0 or newer

CUDA and cuDNN 10.1 or newer

This work has been developed with:

Denoising results

Instructions of use

Model Training

Make sure you have downloaded and extracted the files of the training dataset noise_0_to_50 from the drive folder, which is ready to use.

Alternatively, it is possible to download the original dataset from Kaggle and extract the files.

If you downloaded the original dataset from Kaggle, please follow the next steps:

I) Use the snippet renaming_files_ascending_order located in the Util folder.
II) Use the program multipleImageNoiseCreator.py located in the Util folder.
III) Make sure you end up with a folder containing files named 1_clean.tif, 1_noise.tif, 2_clean.tif, 2_noise.tif ... etc.

Here is an example of how your folder should look with only 10 images.

At this point you should have the dataset ready to use for training.

We may now configure the main.py program located in the Code folder.

The following are the default settings of main.py:

BATCH_SIZE = 32

DATASET_DIRECTOY = "path/to/noise_0_to_50"
Set this to the directory where you downloaded / created the training dataset.

DIRECTORY_TO_SAVE_MODELS = "Models"
Default name: Models

DIRECTORY_TO_SAVE_TENSORBOARD = "Tensorboard"
Default name: Tensorboard

DIRECTORY_TO_SAVE_EACH_EPOCH_RESULT = "Epoch_results"
Default name: Epoch_results
In this folder images will be saved after each epoch, showcasing the learning progress of the network.

modelName = "myDenoiser"
weights = "myDenoiser"
Make sure both of these names match, for more information look at main.py.

loadWeights = False
Defaults to "False"
Change it to "True" if you want to resume training that was stopped for some reason.

epoch_assignation = 1000
Sets the number of epochs for training.

filters = 16
Sets the number of filters.

optimizer = ADAM_optimizer
Defaults to: ADAM_optimizer
It is possible to choose other optimizers; for more information look at main.py.

loss_function = "MeanAbsoluteError"
Defaults to: "MeanAbsoluteError"
It is also possible to use "MeanSquaredError". A consolidated sketch of these settings is shown below.
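Putting the defaults above together, this is roughly how the configuration block at the top of main.py might look. This is a sketch only: the variable names are taken from the list above, the dataset path is a placeholder, and ADAM_optimizer is defined here just to make the snippet self-contained (main.py defines its own).

import tensorflow as tf

ADAM_optimizer = tf.keras.optimizers.Adam()           # assumed; main.py defines its own optimizer

# Default configuration, as listed above (the dataset path is a placeholder).
BATCH_SIZE = 32
DATASET_DIRECTOY = "path/to/noise_0_to_50"            # training dataset location
DIRECTORY_TO_SAVE_MODELS = "Models"
DIRECTORY_TO_SAVE_TENSORBOARD = "Tensorboard"
DIRECTORY_TO_SAVE_EACH_EPOCH_RESULT = "Epoch_results"

modelName = "myDenoiser"
weights = "myDenoiser"                                # must match modelName

loadWeights = False                                   # set to True to resume training
epoch_assignation = 1000                              # number of training epochs
filters = 16                                          # number of filters

optimizer = ADAM_optimizer
loss_function = "MeanAbsoluteError"                   # or "MeanSquaredError"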

We have successfully finished configuring our main.py file.

Now you can run the program from the terminal with "python3 main.py".

Model Testing

Testing over a group of images (average testing PSNR / SSIM)

In order to test the model over a group of images, we will be using averageTester.py located in the Util folder.

Example of expected output

averageTester_expected_output

It is possible to configure the file to change the default path from the sample testing folder, as well as the number of testing pairs (clean, noise). In this case there are only 10 images per noise level in our GitHub repository folder; it is possible to configure it for 100, 1000, or the whole test dataset of 57,458 testing pairs. The idea is to use it with the noise_10, noise_25 and noise_50 testing datasets.
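For orientation, here is a minimal sketch of what an average tester conceptually does: load each (clean, noise) pair, denoise the noisy image and average PSNR / SSIM over the set. The file layout follows the k_clean.tif / k_noise.tif convention above; the real averageTester.py may differ.

import numpy as np
import tensorflow as tf
from PIL import Image  # Pillow can read .tif files

def average_psnr_ssim(model, folder, n_pairs):
    """Average PSNR / SSIM over pairs named k_clean.tif / k_noise.tif."""
    psnrs, ssims = [], []
    for k in range(1, n_pairs + 1):
        clean = np.asarray(Image.open(f"{folder}/{k}_clean.tif"), dtype=np.float32) / 255.0
        noisy = np.asarray(Image.open(f"{folder}/{k}_noise.tif"), dtype=np.float32) / 255.0
        denoised = np.clip(model.predict(noisy[None, ...], verbose=0)[0], 0.0, 1.0)
        psnrs.append(float(tf.image.psnr(clean, denoised, max_val=1.0)))
        ssims.append(float(tf.image.ssim(clean, denoised, max_val=1.0)))
    return np.mean(psnrs), np.mean(ssims)

# Example: the 10 sample pairs shipped in the repository (the path is an assumption).
# avg_psnr, avg_ssim = average_psnr_ssim(model, "testSample_25", 10)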

Testing over a single image

In order to test the model over a single image, we will be using singleImageTester.py located in the Util folder.

Example of expected output

singleImageTester_expected_output

It is possible to configure the file to change the default paths; in this case there are three of them: the noisy image path, the clean image path and the output folder path, which defaults to single_Image_tester_Results located in Util.

It is possible to test it with images from the sample testing folders testSample_10, testSample_25 or testSample_50. Remember there are only 10 images per noise level in our GitHub repository folder; the idea is to use it with the noise_10, noise_25 and noise_50 testing datasets.
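Conceptually, testing a single image boils down to loading the trained weights, running the noisy image through the model and saving the result. A minimal sketch under those assumptions (all paths and the weights file name are placeholders; see singleImageTester.py for the actual defaults):

import os
import numpy as np
import tensorflow as tf
from PIL import Image

# Paths below are placeholders; see singleImageTester.py for the actual defaults.
model = tf.keras.models.load_model("Models/myDenoiser.h5", compile=False)

noisy = np.asarray(Image.open("testSample_25/1_noise.tif"), dtype=np.float32) / 255.0
denoised = np.clip(model.predict(noisy[None, ...], verbose=0)[0], 0.0, 1.0)

os.makedirs("single_Image_tester_Results", exist_ok=True)
Image.fromarray((denoised * 255).astype(np.uint8)).save("single_Image_tester_Results/1_denoised.tif")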

Creation of noise

The following is the code that was used to generate the Additive White Gaussian Noise (AWGN).

The noise was created using the numpy library.

For fair comparison and reproducibility, seeding was employed.

The process to create noisy images is displayed below.

A noise map is added to a clean image to create the noisy image.

noise_creation
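A minimal numpy sketch of this step (the actual code is in the Util scripts; the seed value and the clipping to [0, 255] are assumptions):

import numpy as np

def add_awgn(clean, sigma, seed=0):
    """Add Additive White Gaussian Noise with standard deviation sigma to a uint8 image."""
    rng = np.random.default_rng(seed)                # seeding for reproducibility
    noise_map = rng.normal(0.0, sigma, clean.shape)  # zero-mean Gaussian noise map
    noisy = clean.astype(np.float32) + noise_map     # clean image + noise map
    return np.clip(noisy, 0, 255).astype(np.uint8)   # keep a valid pixel range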

Creating noise for the whole Histopathologic Cancer Detection dataset

Before you begin

Make sure you have a folder with the images named 1.tif, 2.tif, 3.tif, 4.tif ... etc. It is possible to use renaming_files_ascending_order for this task, because the file names from the original dataset are too long.

This is how the image names come by default from the Kaggle website

dataset_long_names

After running renaming_files_ascending_order, we will have a folder looking like this

dataset_short_names

If your folder looks like the previous image, we can continue.
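A minimal sketch of what renaming_files_ascending_order does under that assumption (the actual snippet in the Util folder may differ):

import os

def rename_ascending(folder):
    """Rename the long Kaggle file names to 1.tif, 2.tif, 3.tif, ..."""
    tif_files = sorted(f for f in os.listdir(folder) if f.endswith(".tif"))
    for i, name in enumerate(tif_files, start=1):
        os.rename(os.path.join(folder, name), os.path.join(folder, f"{i}.tif"))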

We will be using multipleImageNoiseCreator.py to corrupt the images with noise in the range σ[0,50].
You need to state the <<"inputfolder">>; it does not have a default folder path.

After you have written down the input folder, we need to state the <<"outputfolder">>, in which the image pairs (clean, noise) will be created.

The expected output folder would look like this

averageTester_expected_output
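In essence, the script walks the input folder and, for every k.tif, writes a k_clean.tif / k_noise.tif pair with a random σ drawn from [0, 50]. The sketch below assumes that behavior; the function and parameter names are illustrative, not the script's actual interface.

import os
import numpy as np
from PIL import Image

def create_noisy_dataset(input_folder, output_folder, sigma_low=0, sigma_high=50, seed=0):
    """Write k_clean.tif / k_noise.tif pairs for every k.tif found in input_folder."""
    rng = np.random.default_rng(seed)
    os.makedirs(output_folder, exist_ok=True)
    for name in sorted(f for f in os.listdir(input_folder) if f.endswith(".tif")):
        k = os.path.splitext(name)[0]
        clean = np.asarray(Image.open(os.path.join(input_folder, name)))
        sigma = rng.uniform(sigma_low, sigma_high)    # blind noise level per image
        noisy = np.clip(clean.astype(np.float32)
                        + rng.normal(0.0, sigma, clean.shape), 0, 255).astype(np.uint8)
        Image.fromarray(clean).save(os.path.join(output_folder, f"{k}_clean.tif"))
        Image.fromarray(noisy).save(os.path.join(output_folder, f"{k}_noise.tif"))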

Creating noise for a single image

Here we will be creating a noisy image using noiseCreatorSingleImage.

We only need to state two things: the path to the clean image and the output folder, which defaults to noise_Images_Created_By_User; the expected output can be seen in the same folder.
It is possible to choose the level of corruption, which is stated as noise_standard_deviation.




That would be it for now. If you have any questions / suggestions, feel free to send me an email at fhgil@utp.edu.co or fgilzuluaga@unisa.it.
Thank you for reading, have a great day!
