

Surface Vision Transformers

This repository contains the codebase to apply vision transformers on surface data. It is the official PyTorch implementation of Surface Vision Transformers: Attention-Based Modelling applied to Cortical Analysis, presented at the MIDL 2022 conference.

Here, the Surface Vision Transformer (SiT) is applied to cortical data for phenotype prediction.
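As a rough illustration of the idea (a sketch only, not the repository's exact code; the triangle_indices.npy file name and the channel count below are hypothetical), cortical metrics resampled to a sixth-order icosphere are grouped into non-overlapping triangular patches, and each flattened patch becomes one token of the transformer sequence:

```python
import numpy as np

# Per-vertex cortical metrics on an ico-6 sphere (40962 vertices),
# e.g. 4 channels such as myelin, curvature, thickness, sulcal depth.
metrics = np.random.rand(4, 40962)

# Index table mapping each triangular patch to its ico-6 vertex indices
# (hypothetical file name; the repository ships such mesh indices).
triangle_indices = np.load("triangle_indices.npy")   # (num_patches, verts_per_patch)

num_patches, verts_per_patch = triangle_indices.shape
# Gather every patch and flatten it into one token: the resulting sequence of
# shape (num_patches, channels * verts_per_patch) replaces the 16x16 pixel
# patches of a standard ViT.
sequence = metrics[:, triangle_indices]              # (channels, num_patches, verts_per_patch)
sequence = sequence.transpose(1, 0, 2).reshape(num_patches, -1)
print(sequence.shape)
```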


Updates

V.1.1 - 12.02.24 Major codebase update
  • Adding masked patch pretraining code to the codebase
  • It can be run simply with: python pretrain.py ../config/SiT/pretraining/mpp.yml
V.1.0 - 18.07.22 Major codebase update
  • birth age and scan age prediction tasks
  • simplifying training script
  • adding birth age prediction script
  • simplifying preprocessing script
  • single config file for tasks (scan age / birth age) and data configurations (template / native)
  • adding mesh indices to extract non-overlapping triangular patches from a cortical mesh ico-6 sphere representation
V.0.2 Update - 25.05.22
  • testing file and config
  • installation guidelines
  • data access
V.0.1 Initial commits - 12.10.21
  • training script
  • README
  • config file for training

Installation & Set-up

1. Connectome Workbench

Connectome Workbench is free software for visualising neuroimaging data and can be used to visualise cortical metrics on surfaces. Downloads and instructions are available here.

2. Conda usage

To install PyTorch and the dependencies with conda, please follow the instructions in install.md.

3. Docker usage

Coming soon

For Docker support, please follow the instructions in docker.md.

Data

Data used in this project comes from the dHCP dataset. Instructions for processing the MRI scans and extracting cortical metrics can be found in S. Dahan et al. 2021 and the references cited therein.

To simplify reproducibility of this work, the data has already been processed and is made available by following the guidelines below.

1. Accessing processed data

Cortical surface metrics already processed as in S. Dahan et al. 2021 and A. Fawaz et al. 2021 are available upon request.

How to access the processed data?

To access the data please:

  • Sign in here
  • Sign the dHCP open access agreement
  • Forward the confirmation email to slcn.challenge@gmail.com

G-Node GIN repository

Once the confirmation has been sent, you will have access to the G-Node GIN repository containing the already-processed data. The data used for this project is in the zip files `regression_native_space_features.zip` and `regression_template_space_features.zip`. You also need to use the `ico-6.surf.gii` spherical mesh.

Training and validation sets are available for the birth-age and scan-age prediction tasks, in both template and native configurations.

However, the test set is not currently publicly available, as it is used as the testing set of the SLCN challenge on surface learning, organised alongside the MLCN workshop at MICCAI 2022.

2. Data preparation for training

Once the data is accessible, a few further preparation steps are required to get the right and left metric files into the same orientation, before extracting the sequences of patches.

  1. Download zip files containing the cortical features: regression_template_space_features.zip and regression_native_space_features.zip. Unzip the files. Data is in the format
{uid}_{hemi}.shape.gii 
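For example, one of these GIFTI metric files can be inspected with nibabel (a minimal sketch; the subject id below is hypothetical):

```python
import nibabel as nib
import numpy as np

# Example file following the {uid}_{hemi}.shape.gii naming pattern (hypothetical uid).
gii = nib.load("sub-CC00050XX01_L.shape.gii")

# Each data array holds one per-vertex metric; on an ico-6 sphere that is 40962 values.
data = np.stack([darray.data for darray in gii.darrays])
print(data.shape)
```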
  2. Download the ico-6.surf.gii spherical mesh from the G-Node GIN repository. This icosphere is by default set to a CORTEX_RIGHT structure in workbench.

  3. Rename the ico-6.surf.gii file as ico-6.R.surf.gii

  4. Create a new sphere by symmetrising the right sphere using workbench. In bash:

wb_command -surface-flip-lr ico-6.R.surf.gii ico-6.L.surf.gii
  5. Then, set the structure of the new icosphere to CORTEX_LEFT. In bash:
wb_command -set-structure ico-6.L.surf.gii CORTEX_LEFT
  6. Use the new left sphere to resample all left metric files in the template and native data folders. In bash:
cd regression_template_space_features

for i in *L*; do wb_command -metric-resample ${i} ../ico-6.R.surf.gii ../ico-6.L.surf.gii BARYCENTRIC ${i}; done

and

cd regression_native_space_features

for i in *L*; do wb_command -metric-resample ${i} ../ico-6.R.surf.gii ../ico-6.L.surf.gii BARYCENTRIC ${i}; done
  7. Set the structure of the right metric files to CORTEX_LEFT, in both the template and native data folders. In bash:
cd regression_template_space_features

for i in *R*; do wb_command -set-structure ${i} CORTEX_LEFT; done

and

cd regression_native_space_features

for i in *R*; do wb_command -set-structure ${i} CORTEX_LEFT; done
Example of left and right myelin maps after resampling

Once symmetrised, both left and right hemispheres have the same orientation when visualised on a left hemisphere template.
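The symmetrisation and resampling steps above can also be scripted in one go; the sketch below simply wraps the same wb_command calls with Python's subprocess (assuming wb_command is on the PATH, the two feature folders are unzipped in the current directory, and the sphere has already been renamed to ico-6.R.surf.gii):

```python
import subprocess
from pathlib import Path

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# Steps 4-5: symmetrise the right ico-6 sphere and label it as a left cortex.
run(["wb_command", "-surface-flip-lr", "ico-6.R.surf.gii", "ico-6.L.surf.gii"])
run(["wb_command", "-set-structure", "ico-6.L.surf.gii", "CORTEX_LEFT"])

# Steps 6-7: resample the left metric files and relabel the right metric files,
# in both the template and native data folders.
for folder in ["regression_template_space_features", "regression_native_space_features"]:
    for f in sorted(Path(folder).glob("*L*")):
        run(["wb_command", "-metric-resample", str(f), "ico-6.R.surf.gii",
             "ico-6.L.surf.gii", "BARYCENTRIC", str(f)])
    for f in sorted(Path(folder).glob("*R*")):
        run(["wb_command", "-set-structure", str(f), "CORTEX_LEFT"])
```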

  8. Once this step is done, the preprocessing script can be used to prepare the training and validation numpy array files, per task (birth-age, scan-age) and data configuration (template, native).

In the YAML file config/preprocessing/hparams.yml, change the path to data, set the parameters and run the preprocessing.py script in ./tools:

cd tools
python preprocessing.py ../config/preprocessing/hparams.yml
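The resulting arrays can then be quickly sanity-checked before training; a minimal sketch (the file names and array layout are assumptions, check the output of preprocessing.py for the exact names used):

```python
import numpy as np

# Hypothetical output names: one array of patched features and one array of
# labels per split, task (birth-age / scan-age) and configuration (template / native).
train_data = np.load("train_data.npy")      # e.g. (num_subjects, num_patches, patch_dim)
train_labels = np.load("train_labels.npy")  # e.g. (num_subjects,) ages to regress
print(train_data.shape, train_labels.shape)
assert len(train_data) == len(train_labels)
```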

Training & Inference

Training SiT

For training a SiT model, use the following command:

python train.py ../config/SiT/training/hparams.yml

Where all hyperparameters for training and model design are to be set in the YAML file config/SiT/training/hparams.yml, such as:

  • Transformer architecture
  • Training strategy: from scratch, ImageNet or SSL weights
  • Optimisation strategy
  • Patching configuration
  • Logging
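The configuration can also be loaded and tweaked programmatically before launching a run (a sketch, assuming PyYAML is installed; the keys shown in the comments are illustrative, not the exact schema of the file):

```python
import yaml

# Load the training configuration used by train.py.
with open("../config/SiT/training/hparams.yml") as f:
    hparams = yaml.safe_load(f)

print(hparams.keys())
# Illustrative edit only; the real key names are defined by the config file itself, e.g.
# hparams["optimisation"]["learning_rate"] = 3e-4
with open("../config/SiT/training/hparams_custom.yml", "w") as f:
    yaml.safe_dump(hparams, f)
```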

Testing SiT

For testing a SiT model, please set the path to the SiT weights in /testing/hparams.yml and use the following command:

python test.py ../config/SiT/training/hparams.yml

Tensorboard support

Coming soon

References

This codebase uses the vision transformer implementation from
lucidrains/vit-pytorch and the pre-trained ViT models from the timm library.
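For instance, the ImageNet-pretrained weights used by the "ImageNet" training strategy can be fetched through timm (a sketch; the exact variant and how its weights are mapped onto the SiT architecture are handled by the training code):

```python
import timm

# Download an ImageNet-pretrained ViT; the variant name here is only an example.
vit = timm.create_model("vit_tiny_patch16_224", pretrained=True)
state_dict = vit.state_dict()

# The transformer blocks (attention + MLP) are shape-compatible with a surface
# transformer of matching width and depth; the patch embedding is not, since SiT
# embeds flattened triangular patches rather than 16x16 pixel patches.
transferable = {k: v for k, v in state_dict.items() if not k.startswith("patch_embed")}
print(f"{len(transferable)} tensors could be reused")
```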

Citation

Please cite these works if you find them useful:

Surface Vision Transformers: Attention-Based Modelling applied to Cortical Analysis

@article{dahan2022surface,
  title={Surface Vision Transformers: Attention-Based Modelling applied to Cortical Analysis},
  author={Dahan, Simon and Fawaz, Abdulah and Williams, Logan ZJ and Yang, Chunhui and Coalson, Timothy S and Glasser, Matthew F and Edwards, A David and Rueckert, Daniel and Robinson, Emma C},
  journal={arXiv preprint arXiv:2203.16414},
  year={2022}
}

Surface Vision Transformers: Flexible Attention-Based Modelling of Biomedical Surfaces

@article{dahan2022surface,
  title={Surface Vision Transformers: Flexible Attention-Based Modelling of Biomedical Surfaces},
  author={Dahan, Simon and Xu, Hao and Williams, Logan ZJ and Fawaz, Abdulah and Yang, Chunhui and Coalson, Timothy S and Williams, Michelle C and Newby, David E and Edwards, A David and Glasser, Matthew F and others},
  journal={arXiv preprint arXiv:2204.03408},
  year={2022}
}
