Ali-Mahzarnia / mrtrix_amd

Explanation of the Python pipeline script:

This Python script processes diffusion-weighted imaging (DWI) data and builds connectomes from the processed data. It involves several steps: data preprocessing, estimation of basis functions, computation of fiber orientation distributions (FODs), streamline generation and sifting, and connectome construction. Below is a step-by-step explanation of the code.

Importing necessary libraries:

The script begins by importing the required libraries: os, nibabel, numpy, pandas, and shutil.
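Since the pipeline mostly shells out to MRtrix and ANTs command-line tools, a minimal sketch of the imports plus a small command wrapper might look like this (the run helper is illustrative, not taken from the repository):

```python
import os
import shutil
import subprocess

import nibabel as nib
import numpy as np
import pandas as pd

def run(cmd):
    """Run an external command-line tool and raise if it fails."""
    subprocess.run(cmd, check=True)
```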

Setting up file paths and directories:

The script defines file paths for the DWI data, anatomical data, and the bvec and bval files, and creates directories to store temporary and processed data.
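A hypothetical layout could be set up as below; all real paths in the script are masked, so every name here is a placeholder:

```python
import os

subj = "S00001"                            # hypothetical subject ID
base = "/path/to/AMD_data"                 # placeholder; real paths are masked
dwi_path  = os.path.join(base, f"{subj}_subjspace_dwi.nii.gz")
bvec_path = os.path.join(base, f"{subj}_bvecs.txt")
bval_path = os.path.join(base, f"{subj}_bvals.txt")

tmp_dir = os.path.join(base, "mrtrix_tmp", subj)   # scratch space
out_dir = os.path.join(base, "connectome")         # final outputs
for d in (tmp_dir, out_dir):
    os.makedirs(d, exist_ok=True)
```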

Denoising and tensor metrics:

The DWI data are denoised using dwidenoise. The diffusion tensor is then estimated using dwi2tensor, and tensor-derived metrics (FA, MD, AD, RD) are computed using tensor2metric.
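With MRtrix installed, the corresponding calls are along these lines (file names are placeholders):

```python
import subprocess

# Denoise the raw DWI series (MP-PCA denoising).
subprocess.run(["dwidenoise", "dwi.nii.gz", "dwi_den.nii.gz"], check=True)

# Fit the diffusion tensor, supplying FSL-style gradient files.
subprocess.run(["dwi2tensor", "-fslgrad", "bvecs.txt", "bvals.txt",
                "dwi_den.nii.gz", "dt.mif"], check=True)

# Derive scalar maps: FA, MD (-adc), axial and radial diffusivity.
subprocess.run(["tensor2metric", "dt.mif",
                "-fa", "fa.nii.gz", "-adc", "md.nii.gz",
                "-ad", "ad.nii.gz", "-rd", "rd.nii.gz"], check=True)
```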

Creating masks:

A brain mask is generated from the DWI data using dwi2mask. Cerebrospinal fluid (CSF) regions are segmented using ANTs tools (ImageMath and ThresholdImage), and the final mask is obtained by excluding the CSF regions from the brain mask.
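The exact ANTs invocations are not visible in the masked code; one plausible sketch (assumption: CSF is segmented by thresholding the normalized MD map, where CSF appears bright) is:

```python
import subprocess

# Brain mask straight from the DWI data.
subprocess.run(["dwi2mask", "-fslgrad", "bvecs.txt", "bvals.txt",
                "dwi_den.nii.gz", "mask.nii.gz"], check=True)

# Assumed CSF segmentation: normalize MD to [0, 1], then threshold.
subprocess.run(["ImageMath", "3", "md_norm.nii.gz",
                "Normalize", "md.nii.gz"], check=True)
subprocess.run(["ThresholdImage", "3", "md_norm.nii.gz", "csf.nii.gz",
                "0.7", "1.0"], check=True)

# Exclude CSF from the brain mask: mask * (1 - csf).
subprocess.run(["mrcalc", "mask.nii.gz", "1", "csf.nii.gz", "-sub",
                "-mult", "mask_nocsf.nii.gz"], check=True)
```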

Estimating basis functions:

White matter (WM), gray matter (GM), and CSF basis functions are estimated using dwi2response with the Dhollander algorithm.
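A sketch of the corresponding call (file names are placeholders):

```python
import subprocess

# Estimate per-tissue response functions with the Dhollander algorithm.
subprocess.run(["dwi2response", "dhollander",
                "-fslgrad", "bvecs.txt", "bvals.txt",
                "-mask", "mask_nocsf.nii.gz",
                "dwi_den.nii.gz", "wm.txt", "gm.txt", "csf.txt"],
               check=True)
```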

Computing fiber orientation distributions (FODs):

FODs for white matter are computed using dwi2fod with multi-shell multi-tissue constrained spherical deconvolution (MSMT-CSD).
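With the response functions from the previous step, the MSMT-CSD call might look like this (placeholder file names):

```python
import subprocess

# Multi-shell multi-tissue CSD: one FOD image per tissue type.
subprocess.run(["dwi2fod", "msmt_csd",
                "-fslgrad", "bvecs.txt", "bvals.txt",
                "-mask", "mask_nocsf.nii.gz",
                "dwi_den.nii.gz",
                "wm.txt",  "wmfod.mif",
                "gm.txt",  "gm.mif",
                "csf.txt", "csf.mif"], check=True)
```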

Tractography:

Ten million streamlines are generated using tckgen, seeded at the gray matter-white matter (GM-WM) interface. A subset of 2 million streamlines per subject, with a minimum length of 0.1 mm, is then selected.
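A sketch of these two steps, assuming a five-tissue-type (5TT) image is available for anatomically-constrained tractography (the 5TT source is not shown in the masked code):

```python
import subprocess

# Derive the GM-WM interface seed mask from an assumed 5TT image.
subprocess.run(["5tt2gmwmi", "5tt.mif", "gmwmi.mif"], check=True)

# Generate 10 million streamlines seeded at the GM-WM interface.
subprocess.run(["tckgen", "-act", "5tt.mif", "-seed_gmwmi", "gmwmi.mif",
                "-select", "10000000", "-minlength", "0.1",
                "wmfod.mif", "tracks_10m.tck"], check=True)

# Subset to 2 million streamlines per subject.
subprocess.run(["tckedit", "-number", "2000000",
                "tracks_10m.tck", "tracks_2m.tck"], check=True)
```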

Sifting and creating connectomes:

tcksift2 is used to sift the streamlines against the FODs, reducing them to a set of 1 million. Multiple connectomes are then generated: sift node, sift, distances, an FA connectome, and a plain connectome.
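In current MRtrix releases tcksift2 outputs per-streamline weights rather than a reduced track file, so the exact reduction step in the masked script is unknown; a hedged sketch of the weight computation and a few connectome variants (placeholder file names, parcellation.nii.gz assumed) is:

```python
import subprocess

# SIFT2: compute per-streamline weights against the WM FOD.
subprocess.run(["tcksift2", "-act", "5tt.mif",
                "tracks_2m.tck", "wmfod.mif", "sift_weights.txt"],
               check=True)

# SIFT-weighted connectome over a parcellation image.
subprocess.run(["tck2connectome", "-symmetric", "-zero_diagonal",
                "-tck_weights_in", "sift_weights.txt",
                "tracks_2m.tck", "parcellation.nii.gz", "sift_conn.csv"],
               check=True)

# Mean-length ("distances") connectome.
subprocess.run(["tck2connectome", "-symmetric", "-zero_diagonal",
                "-scale_length", "-stat_edge", "mean",
                "tracks_2m.tck", "parcellation.nii.gz", "dist_conn.csv"],
               check=True)

# FA connectome: sample mean FA per streamline, then average per edge.
subprocess.run(["tcksample", "-stat_tck", "mean",
                "tracks_2m.tck", "fa.nii.gz", "fa_per_track.csv"],
               check=True)
subprocess.run(["tck2connectome", "-symmetric", "-zero_diagonal",
                "-scale_file", "fa_per_track.csv", "-stat_edge", "mean",
                "tracks_2m.tck", "parcellation.nii.gz", "fa_conn.csv"],
               check=True)
```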

Creating connectomes based on parcellation:

The script uses a parcellation map to group the streamlines and compute connectome values for each pair of parcels. Each connectome is made symmetric, and its diagonal is zeroed to avoid self-connections.
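The symmetrization rule in the masked script is not shown; a minimal numpy sketch (assuming element-wise maximum) of the post-processing is:

```python
import numpy as np
import pandas as pd

# Load an edge matrix written by tck2connectome (delimiter sniffed,
# since outputs may be comma- or whitespace-separated).
conn = pd.read_csv("sift_conn.csv", sep=None, engine="python",
                   header=None).to_numpy()

# Enforce symmetry and zero the diagonal to drop self-connections.
conn = np.maximum(conn, conn.T)
np.fill_diagonal(conn, 0.0)

pd.DataFrame(conn).to_csv("sift_conn_sym.csv",
                          header=False, index=False)
```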

Cleaning up:

Temporary directories are deleted to free up disk space. Note: some parts of the code are commented out, indicating that they may be optional or intended for testing.
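The cleanup amounts to something like (path is a placeholder):

```python
import shutil

tmp_dir = "/placeholder/mrtrix_tmp/S00001"   # hypothetical scratch path

# Remove the per-subject scratch directory once outputs are saved.
shutil.rmtree(tmp_dir, ignore_errors=True)
```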

Please note that the code assumes several software packages are available, including MRtrix, ANTs, and FreeSurfer; these must be properly installed and configured for the script to execute correctly. Additionally, the script is written for a specific data and directory structure, so modifications may be required to adapt it to different datasets.

Cluster wrapper

Explanation of the Code File:

This Python script runs the MRtrix connectome analysis on a cluster using the Sun Grid Engine (SGE) job scheduler. It walks a list of subjects and submits one connectome-generation job per subject for execution on the cluster.

Importing Libraries:

The script imports the necessary modules: os and subprocess.

Setting Up Environment:

The script tries to read the environment variable BIGGUS_DISKUS to determine the base directory (BD) for data processing; if the variable is not found, it falls back to a placeholder value ("+++++#####mouse"). The variable GD is set to a directory path ("#####rja20_dev/gunnies//") that appears to hold cluster helper scripts.
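A minimal sketch of that fallback (the real values are masked, so both assignments below are placeholders):

```python
import os

# Fall back to a placeholder when BIGGUS_DISKUS is not defined.
try:
    BD = os.environ["BIGGUS_DISKUS"]
except KeyError:
    BD = "/placeholder/mouse"

GD = "/placeholder/gunnies/"   # directory holding cluster helper scripts
```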

Creating sbatch Folder:

A folder is created for storing the SGE job files that are later submitted to the cluster. Its name is derived from the job description ("mrtrix") with "_sbatch" appended.
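For example (base directory is a placeholder):

```python
import os

BD = "/placeholder/mouse"      # masked base directory
job_descrp = "mrtrix"
sbatch_folder_path = os.path.join(BD, job_descrp + "_sbatch")
os.makedirs(sbatch_folder_path, exist_ok=True)
```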

Obtaining Subject List:

The script reads the list of subject folders from the path "#####/jacques/AMD_data/" (the actual path is masked). It extracts the subject names by searching for entries containing "dwi" in their names and stripping the "_subjspace_dwi.nii.gz" suffix, yielding list_of_subjs.
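A sketch of that extraction (the data path is a placeholder):

```python
import os

data_path = "/placeholder/jacques/AMD_data/"   # actual path is masked

# Keep entries that mention "dwi" and strip the suffix to get IDs.
list_of_subjs = [
    name.replace("_subjspace_dwi.nii.gz", "")
    for name in os.listdir(data_path)
    if "dwi" in name
]
```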

Checking Completed Subjects:

The script checks for a connectome folder "+++++/mouse/mrtrix_amd/connectome/" (the actual path is masked). If it exists, the subjects whose connectome generation is already complete are read into done_subj, and list_of_subjs is updated to exclude them.
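A sketch of that filtering; the rule for recovering a subject ID from a finished output's file name is an assumption:

```python
import os

conn_path = "/placeholder/mouse/mrtrix_amd/connectome/"   # masked
list_of_subjs = ["S00001", "S00002"]   # hypothetical, from the previous step

if os.path.exists(conn_path):
    # Assumption: a finished subject is identified by the leading
    # token of each file name in the connectome folder.
    done_subj = {f.split("_")[0] for f in os.listdir(conn_path)}
    list_of_subjs = [s for s in list_of_subjs if s not in done_subj]
```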

Submitting Jobs to the Cluster:

For each subject left in list_of_subjs, the script builds a Python command that runs the main connectome-generation script (main_trc_conn.py) with the subject name as an argument. It forms a job_name from the job description and the subject name, then constructs a submission command from GD and sbatch_folder_path that wraps the Python command in an SGE job via the submit_sge_cluster_job.bash script.

Job Submission:

The constructed SGE command is executed with os.system(command), launching the connectome-generation process for each subject on the cluster. Please note that some directory paths and specific details in the code have been masked with placeholders (e.g., +++++ and #####) and are not visible in the provided code snippet.
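A sketch of the submission loop; the argument order for submit_sge_cluster_job.bash is an assumption, since the masked script defines the exact interface:

```python
import os

GD = "/placeholder/gunnies/"            # masked helper-script directory
sbatch_folder_path = "/placeholder/mrtrix_sbatch"
job_descrp = "mrtrix"
list_of_subjs = ["S00001"]              # filtered list from the steps above

for subj in list_of_subjs:
    python_command = f"python3 main_trc_conn.py {subj}"
    job_name = f"{job_descrp}_{subj}"
    # Assumed argument order: sbatch folder, job name, wrapped command.
    command = (f"bash {GD}submit_sge_cluster_job.bash "
               f"{sbatch_folder_path} {job_name} '{python_command}'")
    os.system(command)
```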
