SpineFinder

This repository contains the code used to produce the results for the paper 'Vertebrae Detection and Localization in CT with Two-Stage CNNs and Dense Annotations' by James McCouat and Ben Glocker, which was accepted into the MSKI 2019 workshop at MICCAI.

Introduction

This repo contains the code I used to produce the results for my master's project at Imperial College London, which uses deep learning to detect and localise vertebrae centroids in CT scans.

A paper presenting my results was subsequently written and accepted into the MSKI workshop at MICCAI 2019.

The purpose of this repository is to allow other researchers to reproduce those results.

Setup

Note: this setup guide will shortly be replaced with a setup.py.

The experiments for the paper were run on a Microsoft Azure Data Science VM (https://azuremarketplace.microsoft.com/en-gb/marketplace/apps/microsoft-dsvm.linux-data-science-vm-ubuntu) with the NC6 Promo size. Once SSH'd into the VM, the following tasks were performed:

  1. Clone this repository
  2. We used conda to install the packages required for this project (a quick environment check is sketched after this list):
    conda create -n spine-env
    source activate spine-env
    conda install tensorflow-gpu==1.12.0
    conda install keras
    conda install matplotlib
    conda install -c https://conda.anaconda.org/simpleitk SimpleITK
    conda install pip
    pip install keras-metrics
    pip install elasticdeform
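
Before moving on, it can help to verify the environment. The script below is a minimal sketch, not part of the repository; it simply checks that the key packages import and that TensorFlow can see the GPU:

    # Hypothetical environment check, not part of this repository.
    # The imports alone confirm SimpleITK, elasticdeform and matplotlib
    # are installed; the prints confirm versions and GPU visibility.
    import tensorflow as tf
    import keras
    import SimpleITK as sitk
    import elasticdeform
    import matplotlib

    print('TensorFlow:', tf.__version__)   # expect 1.12.0
    print('Keras:', keras.__version__)
    print('GPU available:', tf.test.is_gpu_available())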

Usage

To reproduce the results of the paper, follow these instructions:

  1. First, download the data from BioMedia: https://biomedia.doc.ic.ac.uk/data/spine/. The Dropbox package contains collections of spine scans called 'spine-1', 'spine-2', 'spine-3', 'spine-4' and 'spine-5'; download and unzip these files and move all of the scans into a directory called 'training_dataset'. You will also see a zip file called 'spine-test-data'; download and unzip this file and rename the resulting directory 'testing_dataset'. (A quick sanity check for this layout is sketched after this list.)
  2. Generate the samples used to train and test the detection network:

    python generate_detection_samples.py 'training_dataset' 'samples/detection/training'
    python generate_detection_samples.py 'testing_dataset' 'samples/detection/testing'

  3. Train the detection network:

    python train_detection_model.py 'samples/detection/training' 'samples/detection/testing' 'saved_models/detection.h5'

  4. Generate the samples used to train and test the identification network:

    python generate_identification_samples.py 'training_dataset' 'samples/identification/training'
    python generate_identification_samples.py 'testing_dataset' 'samples/identification/testing'

  5. Train the identification network:

    python train_identification_model.py 'samples/identification/training' 'samples/identification/testing' 'saved_models/identification.h5'

  6. You can now run the full algorithm on the test data. Note that, because sample generation is random and network training is stochastic, the results may not exactly match those stated in the paper (they could be higher or lower). To compute the same metrics as the paper (a rough illustration of the localisation-error metric is sketched after this list), run:

    python measure.py 'testing_dataset' 'saved_models/detection.h5' 'saved_models/identification.h5'
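
The layout check referenced in step 1 is below. It is a minimal sketch, not repository code; it assumes the two dataset directories sit in the working directory and that the scans are NIfTI files (.nii/.nii.gz), so adjust the extensions if your download differs:

    # Hypothetical sanity check for step 1: confirm both dataset
    # directories are populated and that SimpleITK can read a scan.
    import os
    import SimpleITK as sitk

    for d in ('training_dataset', 'testing_dataset'):
        scans = [f for f in os.listdir(d) if f.endswith(('.nii', '.nii.gz'))]
        print(d, 'contains', len(scans), 'scans')
        if scans:
            image = sitk.ReadImage(os.path.join(d, scans[0]))
            print('  size:', image.GetSize(), 'spacing (mm):', image.GetSpacing())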
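For step 6, measure.py reports the metrics used in the paper. As a rough illustration only, the sketch below computes a mean localisation error: the mean Euclidean distance in mm between predicted and ground-truth centroids, over vertebrae present in both sets. The function name and data layout here are hypothetical, not the repository's actual interface:

    # Hypothetical illustration of a localisation-error metric; this is
    # NOT the implementation in measure.py.
    import numpy as np

    def mean_localisation_error(predicted, ground_truth):
        # Both arguments map vertebra labels (e.g. 'L1') to (x, y, z)
        # centroid coordinates in mm.
        common = sorted(set(predicted) & set(ground_truth))
        distances = [np.linalg.norm(np.subtract(predicted[v], ground_truth[v]))
                     for v in common]
        return float(np.mean(distances))

    print(mean_localisation_error(
        {'L1': (10.0, 20.0, 30.0), 'L2': (12.0, 48.0, 31.0)},
        {'L1': (11.0, 21.0, 29.0), 'L2': (12.5, 47.0, 30.0)}))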

Citation

If you find this repository useful for your own research, please consider citing our paper:

@article{mccouat2019vertebrae,
    title={Vertebrae Detection and Localization in CT with Two-Stage CNNs and Dense Annotations},
    author={James McCouat and Ben Glocker},
    journal={arXiv preprint arXiv:1910.05911},
    year={2019}
}


License

GNU General Public License v3.0

