SynCT_TcMRgFUS

This repository is the official PyTorch implementation of the paper: "Synthetic CT Skull Generation for Transcranial MR Imaging–Guided Focused Ultrasound Interventions with Conditional Adversarial Networks". [paper]

The journal version, "Evaluation of Synthetically Generated CT for use in Transcranial Focused Ultrasound Procedures", is now available in the Journal of Medical Imaging [paper], where you will find more implementation details and in-depth result analysis.

Overview

We trained a 3D cGAN model to convert a T1-weighted MRI (input) to a synthetic CT (output). Our approach was originally developed for Transcranial MR Imaging–Guided Focused Ultrasound Interventions (TcMRgFUS), but it can also serve as a benchmark for MR-to-CT image synthesis. Please note that our model is 3D and thus avoids the slice-to-slice inconsistency artifacts that typically occur with 2D networks.
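
As a rough illustration of why volumetric convolutions help (a sketch, not the repo's actual generator, which is the ResNet with 9 blocks selected via --netG resnet_9blocks):

import torch
import torch.nn as nn

# Each output voxel of a 3D convolution depends on a 3x3x3 neighborhood that
# spans adjacent slices, so neighboring slices are predicted from shared
# volumetric context instead of independent 2D passes.
conv3d = nn.Conv3d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
mr_patch = torch.randn(1, 1, 32, 128, 128)  # (N, C, D, H, W) MRI patch
print(conv3d(mr_patch).shape)  # torch.Size([1, 8, 32, 128, 128])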


Please check out our manuscript if you are interested in the comparison between real and synthetic CTs in (1) tFUS planning using Kranion and (2) acoustic simulation using k-Wave!

If you find our repo useful to your research, please consider citing our works:

@article{liu2023evaluation,
  title={Evaluation of synthetically generated computed tomography for use in transcranial focused ultrasound procedures},
  author={Liu, Han and Sigona, Michelle K and Manuel, Thomas J and Chen, Li Min and Dawant, Benoit M and Caskey, Charles F},
  journal={Journal of Medical Imaging},
  volume={10},
  number={5},
  pages={055001--055001},
  year={2023},
  publisher={Society of Photo-Optical Instrumentation Engineers}
}

@inproceedings{liu2022synthetic,
  title={Synthetic CT skull generation for transcranial MR imaging--guided focused ultrasound interventions with conditional adversarial networks},
  author={Liu, Han and Sigona, Michelle K and Manuel, Thomas J and Chen, Li Min and Caskey, Charles F and Dawant, Benoit M},
  booktitle={Medical Imaging 2022: Image-Guided Procedures, Robotic Interventions, and Modeling},
  volume={12034},
  pages={135--143},
  year={2022},
  organization={SPIE}
}

If you have any questions, feel free to contact han.liu@vanderbilt.edu or open an Issue in this repo.

Prerequisites

  • NVIDIA GPU + CUDA + cuDNN

Installation

  • Create a conda environment and install the dependencies:
conda create --name synct python=3.8
conda activate synct
pip install -r requirements.txt
pip install torch==1.10.1+cu111 torchvision==0.11.2+cu111 torchaudio==0.10.1 -f https://download.pytorch.org/whl/torch_stable.html
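
Optionally, you can verify that the CUDA build of PyTorch was picked up (the expected versions below match the pinned install above):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
# expected output: 1.10.1+cu111 True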

Training and Validation

python train.py --dataroot . --model han_pix2pix --input_nc 1 --output_nc 1 --direction AtoB --netG resnet_9blocks --display_id 0 --print_freq 20 --n_epochs 3000 --n_epochs_decay 3000 --save_epoch_freq 100 --training_mode cGAN --name synct --lambda_L1 100 --lambda_Edge 10

As reported in the paper, optimal performance on our validation set was achieved with lambda_L1=100 and lambda_Edge=10.
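
For reference, the generator objective combines an adversarial term with weighted L1 and edge terms. The sketch below is a simplified illustration under these weights, not the repo's exact implementation; see the han_pix2pix model code and the paper for the precise edge loss definition.

import torch
import torch.nn.functional as F

def edge_l1(a, b):
    # L1 distance between spatial finite differences of two volumes; a simple
    # stand-in for the paper's edge loss, which penalizes mismatched boundaries.
    loss = 0.0
    for dim in (2, 3, 4):  # the D, H, W axes of an (N, C, D, H, W) tensor
        loss = loss + F.l1_loss(a.diff(dim=dim), b.diff(dim=dim))
    return loss

def generator_loss(d_fake_logits, fake_ct, real_ct,
                   lambda_l1=100.0, lambda_edge=10.0):
    # Adversarial term: the generator tries to make the discriminator
    # classify its synthetic CT patches as real.
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    # Voxel-wise intensity term, weighted by lambda_L1.
    l1 = F.l1_loss(fake_ct, real_ct)
    # Edge term, weighted by lambda_Edge.
    return adv + lambda_l1 * l1 + lambda_edge * edge_l1(fake_ct, real_ct)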

Testing

Two arguments are required: (1) input_dir, the folder containing your input MR images (in .nii or .nii.gz format), and (2) output_dir, the folder where the output synthetic CTs will be stored; this can be a new folder. Optionally, you can adjust the overlap ratio of the sliding-window inference function (default: 0.6). A higher overlap ratio typically produces better synthetic images but requires longer inference time.

We provide an example MRI image [click to download] and our trained model [click to download]. To reproduce our experimental result on the example MRI, please (1) download the model checkpoint and place it at /src/checkpoints/jmi, and (2) download the example MRI and place it in a folder of your choice (this becomes input_dir).

Example:

python run_inference.py --input_dir ./AnyName --output_dir ./output --overlap_ratio 0.6
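
run_inference.py takes care of the sliding-window logic for you; for readers curious how the overlap ratio enters, below is a minimal sketch assuming MONAI's sliding_window_inference utility. The patch size and the dummy generator are illustrative stand-ins, not the repo's actual values.

import torch
from monai.inferers import sliding_window_inference

# Illustrative stand-ins so the snippet runs on its own; in the repo, netG is
# the trained 3D generator and mr_volume a preprocessed MRI of shape (N, C, D, H, W).
netG = torch.nn.Conv3d(1, 1, kernel_size=3, padding=1)
mr_volume = torch.randn(1, 1, 128, 128, 128)

with torch.no_grad():
    syn_ct = sliding_window_inference(
        inputs=mr_volume,
        roi_size=(96, 96, 96),   # hypothetical training patch size
        sw_batch_size=1,
        predictor=netG,
        overlap=0.6,             # corresponds to --overlap_ratio
        mode="gaussian",         # blends overlapping patch predictions
    )
print(syn_ct.shape)  # torch.Size([1, 1, 128, 128, 128])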

Docker (use our code as an out-of-the-box tool!)

Docker Hub:

liuh26/syn_ct

How to use:

  1. Install Docker and the NVIDIA Container Toolkit.
  2. Run inference with the following command:
docker run --gpus all --rm -v [input_directory]:/input/:ro -v [output_directory]:/output -it syn_ct

where

  • input_directory is the directory where you put your input MR images (.nii or .nii.gz).
  • output_directory is the directory where you will see your output files (synthesized CT images).
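
For example, pulling the image from Docker Hub (note that the pulled image name is liuh26/syn_ct) and mounting illustrative local input/ and output/ folders:

docker pull liuh26/syn_ct
docker run --gpus all --rm -v $(pwd)/input:/input/:ro -v $(pwd)/output:/output -it liuh26/syn_ct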

Acknowledgement

Our code is adapted from the pix2pix repo.


License

MIT License

