americofmoliveira / VesselSegmentation_ESWA

Support repository for the paper "Retinal vessel segmentation based on Fully Convolutional Neural Networks", Expert Systems with Applications, Volume 112, 1 December 2018, Pages 229-242.

Retinal vessel segmentation based on Fully Convolutional Neural Networks

This repository contains materials supporting the paper: Oliveira, Américo, et al., "Retinal vessel segmentation based on Fully Convolutional Neural Networks", Expert Systems with Applications, Volume 112, 1 December 2018, Pages 229-242.

(Google Scholar | Journal version | ArXiv preprint version)

Abstract

The retinal vascular condition is a reliable biomarker of several ophthalmologic and cardiovascular diseases, so automatic vessel segmentation may be crucial to diagnose and monitor them. In this paper, we propose a novel method that combines the multiscale analysis provided by the Stationary Wavelet Transform (SWT) with a multiscale Fully Convolutional Neural Network (FCN) to cope with the varying width and direction of the vessel structure in the retina. Our proposal uses rotation operations as the basis of a joint strategy for both data augmentation and prediction, which allows us to explore the information learned during training to refine the segmentation. The method was evaluated on three publicly available databases, achieving an average accuracy of 0.9576, 0.9694, and 0.9653, and average area under the ROC curve of 0.9821, 0.9905, and 0.9855 on the DRIVE, STARE, and CHASE_DB1 databases, respectively. It also appears to be robust to the training set and to the inter-rater variability, which shows its potential for real-world applications.
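The abstract mentions using rotation operations jointly for data augmentation and prediction. One common way to realize the prediction side of such a strategy is test-time augmentation: predict on several rotated copies of the input, rotate the predictions back, and average them. The sketch below illustrates this idea for 90-degree rotations only; the `model_fn` interface is hypothetical and not taken from the repository.

```python
import numpy as np

def predict_with_rotations(model_fn, image):
    """Average predictions over the four 90-degree rotations of the input.

    model_fn: a callable mapping an image to a per-pixel probability map
    (a stand-in for a trained FCN; the interface is illustrative).
    """
    probs = []
    for k in range(4):
        rotated = np.rot90(image, k)       # rotate the input view
        pred = model_fn(rotated)           # predict on the rotated view
        probs.append(np.rot90(pred, -k))   # undo the rotation on the output
    return np.mean(probs, axis=0)
```

Averaging over rotated views tends to smooth out orientation-dependent errors, which is consistent with the paper's goal of handling the varying direction of vessel structures.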

Overview

Pipeline

Architecture

Results

The proposed method was evaluated on three databases: DRIVE, STARE, and CHASE_DB1. Below is a brief comparison between our method and other state-of-the-art works. For a more detailed analysis, please refer to the manuscript.


**DRIVE**

| Method | Sn | Sp | Acc | AUC |
| --- | --- | --- | --- | --- |
| Soares et al. [1] | .7332 | .9782 | .9466 | .9614 |
| Fraz et al. [2] | .7406 | .9807 | .9480 | .9747 |
| Roychowdhury et al. [3] | .7249 | .9830 | .9519 | .9620 |
| Li et al. [4] | .7569 | .9816 | .9527 | .9738 |
| Liskowski and Krawiec [5] | .7520 | .9806 | .9515 | .9710 |
| This work | .8039 | .9804 | .9576 | .9821 |

**STARE**

| Method | Sn | Sp | Acc | AUC |
| --- | --- | --- | --- | --- |
| Soares et al. [1] | .7207 | .9747 | .9480 | .9671 |
| Fraz et al. [2] | .7548 | .9763 | .9534 | .9768 |
| Roychowdhury et al. [3] | .7719 | .9726 | .9515 | .9688 |
| Li et al. [4] | .7726 | .9844 | .9628 | .9879 |
| Liskowski and Krawiec [5] | .8145 | .9866 | .9696 | .9880 |
| This work | .8315 | .9858 | .9694 | .9905 |

**CHASE_DB1**

| Method | Sn | Sp | Acc | AUC |
| --- | --- | --- | --- | --- |
| Fraz et al. [2] | .7224 | .9711 | .9469 | .9712 |
| Roychowdhury et al. [3] | .7201 | .9824 | .9530 | .9532 |
| Li et al. [4] | .7507 | .9793 | .9581 | .9716 |
| Zhang et al. [6] | .7644 | .9716 | .9502 | .9706 |
| This work | .7779 | .9864 | .9653 | .9855 |
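The sensitivity (Sn), specificity (Sp), and accuracy (Acc) reported above are standard confusion-matrix metrics over vessel/background pixels. A minimal NumPy sketch of how they can be computed from a binary segmentation and its ground truth (the function name and interface are illustrative, not taken from the repository):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Sensitivity, specificity, and accuracy from boolean masks.

    pred, truth: boolean arrays of the same shape (vessel pixel = True).
    """
    tp = np.sum(pred & truth)     # vessel pixels correctly detected
    tn = np.sum(~pred & ~truth)   # background correctly rejected
    fp = np.sum(pred & ~truth)    # background marked as vessel
    fn = np.sum(~pred & truth)    # vessel pixels missed
    sn = tp / (tp + fn)
    sp = tn / (tn + fp)
    acc = (tp + tn) / pred.size
    return sn, sp, acc
```

The AUC column, by contrast, is computed from the continuous probability maps before thresholding, e.g. with `sklearn.metrics.roc_auc_score`.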

Contents

The materials in this repository are organized as follows:

  • code: Code required to test our model, including calibration files, i.e., files containing the statistical parameters (mean and standard deviation) used to standardize each dataset.

  • folds_constitution: Since there is no explicit division between training and test sets for the STARE and CHASE_DB1 databases, we used 5-fold and 4-fold cross-validation, respectively, to evaluate the results in these cases. Here we show the constitution of the folds, so that future works can replicate our evaluation conditions.

  • image_level_results: Evaluation metrics for each image in terms of Sensitivity, Specificity, Accuracy, Area under the ROC curve (AUC), and Matthews correlation coefficient (MCC). For STARE, in particular, we also provide the performance of our model on the set of pathological images.

  • resources: Mask, probability map output by the model, and final binary segmentation for each image.

  • statistical_comparison: Statistical comparison between our method and other state-of-the-art works that have made their segmentations publicly available.

  • supplementary_materials: Citable document summarizing the metrics obtained per image and per dataset.
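As a rough illustration of how the calibration statistics mentioned above might be applied, standardization subtracts the dataset mean and divides by the dataset standard deviation. The function below is a hypothetical sketch with placeholder values, not the repository's actual code:

```python
import numpy as np

def standardize(image, mean, std):
    """Zero-mean, unit-variance scaling using precomputed dataset statistics.

    mean and std would come from a calibration file for the target
    dataset; the values used here are purely illustrative.
    """
    return (image - mean) / std

# Illustrative usage with made-up statistics for a toy image:
toy = np.array([[2.0, 4.0], [3.0, 3.0]])
scaled = standardize(toy, mean=3.0, std=1.0)
```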

Citation

If you find this work useful, please cite our paper:

Oliveira, A. F. M., Pereira, S. R. M., & Silva, C. A. B. (2018). Retinal vessel segmentation based on Fully Convolutional Neural Networks. Expert Systems with Applications, 112, 229-242.

Contact

For information related to the paper, please feel free to contact me (americofmoliveira@gmail.com) or Prof. Carlos A. Silva (csilva@dei.uminho.pt).

Bibliography

[1] Soares, J. V., Leandro, J. J., Cesar, R. M., Jelinek, H. F., & Cree, M. J. (2006). Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification. IEEE Transactions on Medical Imaging, 25(9), 1214-1222.

[2] Fraz, M. M., Remagnino, P., Hoppe, A., Uyyanonvara, B., Rudnicka, A. R., Owen, C. G., & Barman, S. A. (2012). An ensemble classification-based approach applied to retinal blood vessel segmentation. IEEE Transactions on Biomedical Engineering, 59(9), 2538-2548.

[3] Roychowdhury, S., Koozekanani, D. D., & Parhi, K. K. (2015). Blood vessel segmentation of fundus images by major vessel extraction and subimage classification. IEEE Journal of Biomedical and Health Informatics, 19(3), 1118-1128.

[4] Li, Q., Feng, B., Xie, L., Liang, P., Zhang, H., & Wang, T. (2016). A cross-modality learning approach for vessel segmentation in retinal images. IEEE Transactions on Medical Imaging, 35(1), 109-118.

[5] Liskowski, P., & Krawiec, K. (2016). Segmenting retinal blood vessels with deep neural networks. IEEE Transactions on Medical Imaging, 35(11), 2369-2380.

[6] Zhang, J., Chen, Y., Bekkers, E., Wang, M., Dashtbozorg, B., & ter Haar Romeny, B. M. (2017). Retinal vessel delineation using a brain-inspired wavelet transform and random forest. Pattern Recognition, 69, 107-123.


