SetsompopLab / deli-cs


Deli-CS

This repository contains the code needed to reproduce and explore the results presented in "Deep Learning Initialized Compressed Sensing (Deli-CS) in Volumetric Spatio-Temporal Subspace Reconstruction", available at: https://www.biorxiv.org/content/10.1101/2023.03.28.534431v1

Installation

Run the following commands in sequence to set up your environment to run the experiments.

  1. conda update -n base -c defaults conda
  2. make conda
  3. conda activate deliCS
  4. make data OR make data+ OR make data++
    • data downloads and sets up the pre-processed data needed to regenerate the figures in the DeliCS paper. Total download size: 18GB
    • data+ downloads everything in data plus the raw testing data and the shared parameters needed to run the pipeline (which generates the data for the figures). Total download size: 18GB + 18GB = 36GB
    • data++ downloads everything in data+ plus the raw training and validation data needed to re-train the DL component of DeliCS. Total download size: 18GB + 18GB + 50GB = 86GB
  5. make docker

Note: the above steps require an Anaconda/Miniconda installation, a Docker installation, and the nvidia-container-toolkit.


Pipeline description

To run the DeliCS pipeline, navigate to the pipeline directory with the deliCS conda environment activated, then run python3 XX_script.py, replacing XX_script.py with the script you want to run. Note that you need to use the data+ or data++ option in the setup stage for the pipeline to run.

Data preparation

The data is prepared by pipeline/00_prepare.py. It performs the following steps for all cases in the training, validation, and testing folders where data is available. If the data has already been prepared, it will not be reprocessed unless the previously processed data is deleted or renamed.

  1. Using the GRE data: estimate the coil compression matrix with RoVir and calculate shifts for AutoFOV.
  2. Prepare acquired MRF data by applying the coil compression, and FOV shifting.
  3. Subsample the reference 6min data to 2min when available.
  4. Perform initial gridding reconstruction of 2min data and estimate the coil sensitivity maps using JSENSE.
  5. Perform reference LLR reconstruction of 6min data when available.
  6. Perform reference LLR reconstructions of the 2min testing data with a varying number of iterations.
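Step 2 above amounts to projecting the multi-channel k-space data onto a smaller set of virtual coils. A minimal numpy sketch of that projection (the calibration of the compression matrix, e.g. via RoVir, is out of scope here; apply_coil_compression is a hypothetical helper, calibrated below with a plain SVD purely for illustration):

```python
import numpy as np

def apply_coil_compression(ksp, cc_mat):
    """Project multi-channel k-space onto virtual coils.

    ksp:    complex array of shape (n_coils, n_readout, n_spokes)
    cc_mat: complex array of shape (n_virtual, n_coils), e.g. from an
            SVD- or RoVir-style calibration (hypothetical helper).
    """
    # Contract the physical-coil axis of ksp against the compression matrix.
    return np.tensordot(cc_mat, ksp, axes=(1, 0))

# Toy example: 8 physical coils compressed to 4 virtual coils.
rng = np.random.default_rng(0)
ksp = rng.standard_normal((8, 64, 32)) + 1j * rng.standard_normal((8, 64, 32))
u, _, _ = np.linalg.svd(ksp.reshape(8, -1), full_matrices=False)
cc_mat = u[:, :4].conj().T                    # (4, 8)
ksp_cc = apply_coil_compression(ksp, cc_mat)
print(ksp_cc.shape)                           # (4, 64, 32)
```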

The data preparation script also averages the energy in the 2-min reference reconstructions of the training subjects to use as a scaling factor for the DeliCS initialization in the refinement step. If you do not download the training data (data++), this scaling factor is already saved in the shared data folder.
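Conceptually, this scaling factor is just the average energy (L2 norm) of the training reconstructions; a hypothetical sketch, not the repo's exact computation:

```python
import numpy as np

def deli_scaling(recons):
    """Average energy across training reconstructions (hypothetical sketch).

    recons: list of complex subspace-coefficient volumes, one per subject.
    Returns a single scalar used to scale the DeliCS output before the
    refinement step.
    """
    energies = [np.linalg.norm(r) for r in recons]
    return float(np.mean(energies))

# Two toy "subjects" with energies sqrt(128) and 2*sqrt(128).
vols = [np.ones((2, 4, 4, 4), dtype=np.complex64) * s for s in (1.0, 2.0)]
print(deli_scaling(vols))
```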

Reconstruction parameters can be altered using the pipeline/params.py file.

Note: since each subject is prepared serially, this step takes a very long time to complete for all training, validation, and test subjects.

Block preparation for training

The training and validation data are further processed by pipeline/01_make_blocks.py. Here, inputs and targets are augmented and split into blocks to be processed by the DeliCS network during training. The number of augmentations and the block size are set in pipeline/params.py.
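Blocking a coefficient volume can be sketched as follows (make_blocks is a hypothetical helper, not the repo's exact implementation; augmentation is omitted):

```python
import numpy as np

def make_blocks(vol, block, stride):
    """Cut a volume into strided training blocks.

    vol:    array of shape (n_coeffs, nx, ny, nz) - subspace coefficient volume
    block:  spatial block edge length
    stride: step between block origins (stride == block -> non-overlapping)
    """
    _, nx, ny, nz = vol.shape
    blocks = []
    for x in range(0, nx - block + 1, stride):
        for y in range(0, ny - block + 1, stride):
            for z in range(0, nz - block + 1, stride):
                blocks.append(vol[:, x:x + block, y:y + block, z:z + block])
    return np.stack(blocks)

# Toy volume: 5 coefficients over a 16^3 grid, cut into 8 blocks of 8^3.
vol = np.zeros((5, 16, 16, 16), dtype=np.complex64)
print(make_blocks(vol, block=8, stride=8).shape)   # (8, 5, 8, 8, 8)
```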

Train DeliCS

To train DeliCS, run pipeline/02_train.py. To view training progress, launch tensorboard from the main deliCS directory: tensorboard --logdir logs/case_2min. This opens a port that lets you follow the training progress in your browser.

Run DeliCS

Once you have a trained network (either by running the pipeline with data+ or by using the provided checkpoint file checkpoints/case_2min/version_000/epoch=433-step=276024.ckpt), the testing data is run through the network using pipeline/03_deli.py. This script takes an argument with the path to the file containing the weights; from the pipeline directory, you would run: python3 03_deli.py --chk ../checkpoints/case_2min/version_000/epoch=479-step=299520.ckpt.

The testing blocks differ from training blocks in that they are overlapping and combined using a linear cross-blending method that smooths out the overlapping regions.
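The cross-blending idea can be illustrated in 1-D: each block is weighted by a linear (triangular) ramp, contributions are accumulated, and the summed weights are divided out so overlapping regions transition smoothly. A sketch under these assumptions, not the repo's exact scheme:

```python
import numpy as np

def blend_1d(n_total, block, stride):
    """Recombine overlapping 1-D blocks with linear cross-blending.

    Each block is weighted by a triangular ramp; dividing by the
    accumulated weight normalises the result in overlap regions.
    """
    # Triangular ramp, e.g. [1, 2, 3, 4, 4, 3, 2, 1] for block=8.
    ramp = np.minimum(np.arange(1, block + 1), np.arange(block, 0, -1)).astype(float)
    out = np.zeros(n_total)
    wsum = np.zeros(n_total)
    for start in range(0, n_total - block + 1, stride):
        seg = np.ones(block)               # stand-in for one network output block
        out[start:start + block] += ramp * seg
        wsum[start:start + block] += ramp
    return out / np.maximum(wsum, 1e-12)

# Constant blocks recombine to a constant signal, with no seams at overlaps.
print(blend_1d(n_total=16, block=8, stride=4))
```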

Refinement reconstructions

The refinement reconstruction uses the output of DeliCS to initialize the same LLR FISTA algorithm that was used for the reference reconstruction in step 6 of pipeline/00_prepare.py. In the pipeline/04_refinement.py script, the DeliCS test images are reconstructed with varying numbers of iterations to examine the convergence of the solution.
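The warm-start idea can be sketched with a generic FISTA loop that accepts an initial estimate x0 (a simplified stand-in for the repo's LLR solver; names are illustrative):

```python
import numpy as np

def fista(grad_f, prox_g, x0, step, n_iter):
    """Generic FISTA loop with warm start (sketch, not the repo's exact solver).

    grad_f: gradient of the smooth data-consistency term
    prox_g: proximal operator of the regulariser (e.g. LLR thresholding)
    x0:     initial estimate - zeros for the reference recon, the DeliCS
            output for the refinement recon
    """
    x, z, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        x_new = prox_g(z - step * grad_f(z))           # gradient + prox step
        t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2        # momentum schedule
        z = x_new + ((t - 1) / t_new) * (x_new - x)    # extrapolation
        x, t = x_new, t_new
    return x

# Toy problem: min ||x - b||^2 / 2 with no regulariser.
b = np.array([1.0, 2.0, 3.0])
x = fista(lambda v: v - b, lambda v: v, np.zeros(3), step=1.0, n_iter=20)
print(np.round(x, 6))   # [1. 2. 3.]
```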

Quantifications

pipeline/05_quantification.py is the final step of DeliCS. Here, the final output is dictionary-matched to T1 and T2 values for each voxel in each full and partial reconstruction generated by the pipeline so far.
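Dictionary matching typically picks, per voxel, the dictionary atom with the largest normalised inner product and reads off that atom's T1/T2. A hypothetical sketch of that step (not the repo's exact implementation):

```python
import numpy as np

def dictionary_match(signals, dictionary, t1_vals, t2_vals):
    """Match voxel fingerprints to a dictionary via normalised inner product.

    signals:    (n_voxels, n_coeffs) complex voxel coefficients
    dictionary: (n_atoms, n_coeffs) unit-normalised dictionary atoms
    t1_vals, t2_vals: (n_atoms,) T1/T2 assigned to each atom
    """
    sig_n = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    corr = np.abs(sig_n @ dictionary.conj().T)   # (n_voxels, n_atoms)
    best = np.argmax(corr, axis=1)               # best-matching atom per voxel
    return t1_vals[best], t2_vals[best]

# Toy dictionary of two atoms; the voxel correlates best with atom 1.
dic = np.array([[1.0, 0.0], [0.0, 1.0]], dtype=complex)
t1, t2 = dictionary_match(np.array([[0.1 + 0j, 2.0]]), dic,
                          np.array([800.0, 1400.0]), np.array([50.0, 80.0]))
print(t1, t2)   # [1400.] [80.]
```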

Extra - BART reconstruction

For comparison, a script to run a BART reconstruction of one of the training datasets has also been included. pipeline/recon_2min_bart.py is not part of the DeliCS pipeline, but it generates the reference data presented in the DeliCS paper. It uses the BART installation in the setsompop/calib Docker container.

Generate figures from manuscript

The figures directory contains multiple Jupyter notebooks that can be used to regenerate the figures from the manuscript. They are numbered according to the figure numbers in the manuscript.

Project structure overview

This is the structure the whole project takes when using the data++ option and running through the whole pipeline. Some files/folders will not exist when a smaller dataset is used (data or data+) or before the pipeline has been run, as some files in the shared directory contain values that are computed within the pipeline, for example the density compensation functions (dcf...).

DeliCS
|-- checkpoints
|-- data
|    |-- training
|    |    |-- case000
|    |    |   |-- noise.npy
|    |    |   |-- raw_mrf.npy
|    |    |   |-- raw_gre.npy
|    |    |   |-- ...
|    |    |-- case001
|    |    |   |-- ...
|    |    |   |-- ...
|    |    |-- case002
|    |    |   |-- ...
|    |    |   |-- ...
|    |    |-- ...
|    |    |-- ...
|    |-- validation
|    |    |-- case000
|    |    |   |-- noise.npy
|    |    |   |-- raw_mrf.npy
|    |    |   |-- raw_gre.npy
|    |    |   |-- ...
|    |    |-- case001
|    |    |   |-- ...
|    |    |   |-- ...
|    |-- testing
|    |    |-- case000
|    |    |   |-- noise.npy
|    |    |   |-- raw_mrf.npy
|    |    |   |-- raw_gre.npy
|    |    |   |-- ...
|    |    |-- case001
|    |    |   |-- ...
|    |    |   |-- ...
|    |    |-- case002
|    |    |   |-- ...
|    |    |   |-- ...
|    |    |-- ...
|    |    |-- ...
|    |-- shared
|    |    |-- dcf_2min.npy
|    |    |-- dcf_6min.npy
|    |    |-- deli_scaling_2min.mat
|    |    |-- dictionary.mat
|    |    |-- phi.mat
|    |    |-- traj_grp16_inacc2.mat
|    |    |-- traj_grp48_inacc1.mat
|-- figures
|    |-- 01_pipeline.txt
|    |-- 02_basis_balancing.ipynb
|    |-- 03_comparebart.ipynb
|    |-- 04_convergence.ipynb
|    |-- 05_compare_recons.ipynb
|    |-- 06_07_block.ipynb
|    |-- 08_compare_recons_patient.ipynb
|    |-- 09_block_patient.ipynb
|    |-- 10_patients.ipynb
|-- logs
|-- MRF [from: https://github.com/SetsompopLab/MRF]
|    |-- src
|    |    |-- 00_io
|    |    |   |-- Dockerfile
|    |    |   |-- main.py
|    |    |   |-- Makefile
|    |    |   |-- ...
|    |    |-- 01_calib
|    |    |   |-- Dockerfile
|    |    |   |-- main.py
|    |    |   |-- Makefile
|    |    |   |-- ...
|    |    |-- 02_recon
|    |    |   |-- Dockerfile
|    |    |   |-- main.py
|    |    |   |-- Makefile
|    |    |   |-- ...
|    |-- README.md
|-- pipeline
|    |-- 00_prepare.py
|    |-- 01_make_blocks.py
|    |-- 02_train.py
|    |-- 03_deli.py
|    |-- 04_refinement.py
|    |-- 05_quantification.py
|    |-- metrics.py
|    |-- params.py
|    |-- recon_2min_bart.py
|    |-- resunet.py
|-- .gitignore
|-- environment.yaml
|-- LICENSE
|-- Makefile
|-- README.md


License: BSD 3-Clause "New" or "Revised" License

