ANTsX / ANTs

Advanced Normalization Tools (ANTs)

ImageMath TimeSeriesAssemble

butellyn opened this issue

I am trying to reassemble a time series using ImageMath's TimeSeriesAssemble. I originally split it here using ImageMath's TimeSeriesDisassemble, and then transformed the volumes with antsApplyTransforms, as I asked about here. I tried `ImageMath 4 ${VolumefMRI_fs} TimeSeriesAssemble ${funcoutdir}/TR*` but got the following error:

"terminate called after throwing an instance of 'itk::ExceptionObject'
what(): /software/sources/builds/ants/2.3.4/ANTsX-ANTs-6829396/build/staging/include/ITK-5.2/itkImageBase.hxx:177:
itk::ERROR: itk::ERROR: Image(0x29aead0): A spacing of 0 is not allowed: Spacing is [0.8, 0.8, 0.8, 0]
Aborted (core dumped)"

This call comes out as:

ImageMath 4 /projects/b1108/studies/mwmh/data/processed/neuroimaging/surf/sub-MWMH212/ses-2/func/sub-MWMH212_ses-2_task-rest_space-fsnative_desc-preproc_bold.nii.gz \
TimeSeriesAssemble \
/projects/b1108/studies/mwmh/data/processed/neuroimaging/surf/sub-MWMH212/ses-2/func/TR1000.nii.gz \
/projects/b1108/studies/mwmh/data/processed/neuroimaging/surf/sub-MWMH212/ses-2/func/TR1001.nii.gz \
/projects/b1108/studies/mwmh/data/processed/neuroimaging/surf/sub-MWMH212/ses-2/func/TR1002.nii.gz \
...
/projects/b1108/studies/mwmh/data/processed/neuroimaging/surf/sub-MWMH212/ses-2/func/TR2109.nii.gz

I get the same output when I run it with the `-v 1` flag. How can I address this issue?

TimeSeriesAssemble: Outputs a 4D time-series image from a list of 3D volumes. Usage: TimeSeriesAssemble time_spacing time_origin *images.nii.gz

It requires two initial arguments for the time dimension: the spacing (the TR) and the origin (usually 0).

Oh! I see. What does "origin" mean here?

It should go into the "toffset" field of the NIfTI header, but it appears not to work. I've never used it, so I don't know whether it's supported in ITK.

It looks like the origin is set correctly in ImageMath but is not written to disk. I would go ahead and use 0 here.

Thanks! I tried TimeSeriesAssemble, but I keep ending up with a core dump even with up to 40G of memory, so it looks like I am running into a similar problem as when I tried to use antsApplyTransforms directly. I'm trying to do the assembly in smaller chunks, in the hope that that will fix the problem. The first chunk of 100 TRs worked with 40G (I haven't tried a different chunk size). Here's the call I used for all of them at once:

ImageMath 4 \
/projects/b1108/studies/mwmh/data/processed/neuroimaging/surf/sub-MWMH212/ses-2/func/sub-MWMH212_ses-2_task-rest_space-fsnative_desc-postproc_bold.nii.gz \
TimeSeriesAssemble 0.555 0 \
/projects/b1108/studies/mwmh/data/processed/neuroimaging/surf/sub-MWMH212/ses-2/func/TR*

And the exact error I got: "Bus error (core dumped)".
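The chunked approach can be sketched in bash. This is a sketch only, using the 0.555 s TR and the TR1000-TR2109 file names from this thread: the list here is synthetic and the ImageMath call is left commented so the loop runs without ANTs, and the per-chunk outputs would still need to be concatenated into one 4D file afterwards (a step this thread does not settle on a tool for).

```shell
#!/usr/bin/env bash
# Sketch: assemble the time series in chunks of 100 volumes instead of
# all at once. File names mirror the TR####.nii.gz volumes above but are
# synthetic here; the real ImageMath call is shown commented.
chunk=100
files=()
for ((t = 1000; t <= 2109; t++)); do files+=( "TR${t}.nii.gz" ); done

nchunks=0
for ((i = 0; i < ${#files[@]}; i += chunk)); do
  out=$(printf 'chunk%03d.nii.gz' "$nchunks")
  # Real per-chunk call (TR = 0.555 s, time origin 0):
  # ImageMath 4 "$out" TimeSeriesAssemble 0.555 0 "${files[@]:i:chunk}"
  nchunks=$((nchunks + 1))
done
echo "${nchunks} chunks of up to ${chunk} volumes (${#files[@]} total)"
```

For the 1110 volumes in this thread, that comes to 12 chunks.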

I think it should require less memory than antsApplyTransforms because it doesn't need to allocate both the input and the output image, but even the output image alone is very large: 320 * 320 * 320 * 1100 * 4 bytes, or roughly 134 GiB.
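As a quick sanity check, the arithmetic behind that estimate (the 320³ grid, ~1100 volumes, and 4 bytes per float32 voxel are the numbers from above):

```shell
# Output image footprint: 320^3 voxels per volume, 1100 volumes,
# 4 bytes per float32 voxel.
bytes=$((320 * 320 * 320 * 1100 * 4))
gib=$((bytes / 1024 / 1024 / 1024))
echo "${bytes} bytes ≈ ${gib} GiB"   # ≈ 134 GiB
```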

Did you intend to resample the data to the template at full resolution? If not, you can downsample it with ResampleImageBySpacing and probably get away with calling antsApplyTransforms directly.
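For the downsampling step, ResampleImageBySpacing takes the target spacing directly on the command line. A sketch, assuming a 2 mm isotropic target and illustrative file names (run `ResampleImageBySpacing` with no arguments to confirm the exact usage on your build):

```shell
# Sketch: downsample the 0.8 mm FreeSurfer T1w reference to 2 mm
# isotropic before resampling the functional data into it.
# Arguments: dimension, input, output, x/y/z spacing; the trailing 0
# disables pre-smoothing. File names are illustrative.
ResampleImageBySpacing 3 T1w_fs.nii.gz T1w_fs_2mm.nii.gz 2 2 2 0
```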

Oh wow, I didn't realize how large the output image was. That definitely has me questioning my broader framework. Definitely can't store many images of that size...

Basically, I am putting the preprocessed functional data from fmriprep into the T1w space (the subject's anat space) using fmriprep's --output-spaces flag, and then applying the transform from the fmriprep T1w space to the FreeSurfer T1w space, because I want to go through the subject's native surface to get to fsLR space (a group surface plus MNI volumetric subcortical space). By not going through MNI, as I have seen a few people do, I think I will better retain the layout of networks on the surface, because I will not lose any data if, for instance, someone has more gyri than the MNI template.

So if I can resample freesurfer's T1w image to a lower resolution and still project the functional data to the surface in this downsampled space, then I don't need the data at full resolution. I'll try downsampling and see if that fixes things.

The pipeline for getting the data to the surface, as it exists right now: https://github.com/NU-ACNLab/mwmh/blob/main/scripts/process/create_ciftis.sh

Sounds sensible. Does fmriprep's --output-spaces fsLR not do this? I don't know the details of its implementation, just curious if there's a reason you aren't using that directly.

I looked into it briefly, but it would have required me to rework my postprocessing pipeline to work on the surface. I wasn't even sure whether that was possible, so I decided to take this route, perhaps foolishly thinking that it would be easier 😰