khanlab / hippunfold

BIDS App for Hippunfold (automated hippocampal unfolding and subfield segmentation)

Home Page: https://hippunfold.readthedocs.io


Cyclic dependency on rule warp_gii_to_native.

debinz opened this issue · comments

commented

Hello there,

Thank you for developing this awesome tool. I tried to run it on my HCP-style data, which I have organized in BIDS format, using this command line:
poetry run hippunfold /data8/HCP/HCP_struct_bids/derivatives/T1w_proc /data8/HCP/HCP_struct_bids/hippo_seg participant --modality T1w --cores 16 --generate_myelin_map --use-gpu --force_nnunet_model T1T2w

but I got the following log output and error:
Config file config/snakebids.yml is extended by additional config specified via the command line. Building DAG of jobs... CyclicGraphException in line 225 of /home/zeng/Documents/hippunfold/hippunfold/workflow/rules/gifti.smk: Cyclic dependency on rule warp_gii_to_native.

I am puzzled, since I would not expect the surface data to be processed in the initial steps of this tool. I'm not familiar with Snakemake; could you provide some advice on how to deal with this issue? Thank you!

Best regards,
Debin Zeng

Yes, that error message is misleading: the T1T2w model was omitted in a couple of places in the config, which led to the model not being found.

This is fixed now in the master branch; do you mind giving it a try again after running hippunfold_download_models once more?

commented

Thank you for your reply. Since the download speed was slow here using that command, I downloaded the models from
zenodo.org with other tools; here are the models I have now:
(screenshot of downloaded model files)

Maybe I could re-download the file [trained_model.3d_fullres.Task103_hcp1200_T1T2w.nnUNetTrainerV2.model_best.tar](https://zenodo.org/record/4508747/files/trained_model.3d_fullres.Task103_hcp1200_T1T2w.nnUNetTrainerV2.model_best.tar?download=1) from the website and replace the old version of the trained model? Is the newest version of the model uploaded to that site now?

Thanks for catching this.
Just to clarify: the model itself didn't change, only the hippunfold config file that denotes where to find the model. So you don't need to download the model again; just git pull the latest hippunfold changes (including the updated config file!) and redo poetry install from inside the updated hippunfold directory.
Please let me know if that fixes the issue! Thanks
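For reference, the suggested update might look like this (a sketch; the clone path is illustrative and assumes an existing local checkout):

```shell
# Sketch of the suggested update, assuming a clone at ~/Documents/hippunfold
cd ~/Documents/hippunfold
git pull          # fetch the config fix from the master branch
poetry install    # reinstall so the updated config file is picked up
```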

commented

Thanks for your advice. I have redone poetry install for the latest version of the tool. Everything went well except for the installation of pygraphviz. The error is:

creating build/temp.linux-x86_64-cpython-38/pygraphviz
gcc -pthread -B /home/zeng/.conda/envs/hippunfold/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/tmp/tmp1f7ve6o2/.venv/include -I/home/zeng/.conda/envs/hippunfold/include/python3.8 -c pygraphviz/graphviz_wrap.c -o build/temp.linux-x86_64-cpython-38/pygraphviz/graphviz_wrap.o
pygraphviz/graphviz_wrap.c:2711:10: fatal error: graphviz/cgraph.h: No such file or directory
#include "graphviz/cgraph.h"
^~~~~~~~~~~~~~~~~~~
compilation terminated.
error: command '/usr/bin/gcc' failed with exit code 1
----------------------------------------
ERROR: Failed building wheel for pygraphviz
Failed to build pygraphviz
ERROR: Could not build wheels for pygraphviz which use PEP 517 and cannot be installed directly

at /etc/poetry/venv/lib/python3.6/site-packages/poetry/utils/env.py:1300 in run
1296│ output = subprocess.check_output(
1297│ cmd, stderr=subprocess.STDOUT, env=env, **kwargs
1298│ )
1299│ except CalledProcessError as e:
→ 1300│ raise EnvCommandError(e, input=input
)
1301│
1302│ return decode(output)
1303│
1304│ def execute(self, bin: str, *args: str, **kwargs: Any) -> Optional[int]:

Nevertheless, I also tried to run the tool on my data, and it failed with another error:

Unknown command -retain-labels
Unknown exception caught by convert3d
When processing command: -retain-labels
[Tue Mar 7 15:01:22 2023]
Error in rule prep_segs_for_greedy:
jobid: 26
output: work/sub-HCD2996590/anat/sub-HCD2996590_space-template_desc-hipptissue_dsegsplit
shell:
mkdir -p work/sub-HCD2996590/anat/sub-HCD2996590_space-template_desc-hipptissue_dsegsplit && c3d work/sub-HCD2996590/anat/sub-HCD2996590_space-template_desc-hipptissue_dseg.nii.gz -retain-labels 1 2 3 4 5 6 8 -split -foreach -smooth 0.5x0.5x0.5mm -endfor -oo work/sub-HCD2996590/anat/sub-HCD2996590_space-template_desc-hipptissue_dsegsplit/label_%02d.nii.gz
(one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)

Removing output files of failed job prep_segs_for_greedy since they might be corrupted:
work/sub-HCD2996590/anat/sub-HCD2996590_space-template_desc-hipptissue_dsegsplit

Those are dependencies that hippunfold requires; you will need to use Singularity to ensure they are available. Assuming you are on Linux, install Singularity (now also called Apptainer) and then run hippunfold with the --use-singularity option. Any rules that use those dependencies will then download and run in a container.

Please also see the docs for a simpler, recommended approach: just download the hippunfold container (i.e. Docker or Singularity) and run hippunfold using that (instead of using poetry). https://hippunfold.readthedocs.io/en/latest/getting_started/singularity.html
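For reference, running the prebuilt container directly might look like this (a sketch with illustrative paths; see the linked docs for the authoritative invocation):

```shell
# Sketch: pull the hippunfold image and run it directly (paths are illustrative)
singularity pull docker://khanlab/hippunfold:v1.2.1
singularity run -e hippunfold_v1.2.1.sif \
    /path/to/bids /path/to/output participant --modality T1w --cores 16
```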

I should also note that running without any containers is possible but not at all recommended, since it would be up to you to install the right versions of all the dependencies. Installing graphviz is simple, but e.g. the version of c3d you have is older than what hippunfold expects (which is why it is missing the -retain-labels subcommand).
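As a side note on the pygraphviz build failure above: the missing graphviz/cgraph.h header usually means the graphviz development package is absent. On Debian/Ubuntu systems the fix is typically (package names are an assumption for that distro family):

```shell
# Sketch: install the graphviz C headers that pygraphviz compiles against
# (Debian/Ubuntu package names; an assumption based on the cgraph.h error)
sudo apt-get install -y graphviz graphviz-dev
```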

commented

Many thanks for your help. I have since tried the newer version of c3d (1.4.0), but it turns out c3d requires a newer version (2.29) of GLIBC, which the Ubuntu 18.04 I use does not provide.
I also tried to install Apptainer, but found that "Pre-built Ubuntu packages are not available on Ubuntu 18.04."
Finally, I tried to run the tool with Docker, but I hit the same error as before:

Config file config/snakebids.yml is extended by additional config specified via the command line. Building DAG of jobs... CyclicGraphException in line 225 of /home/zeng/Documents/hippunfold/hippunfold/workflow/rules/gifti.smk: Cyclic dependency on rule warp_gii_to_native.

Maybe the Docker version of this tool has not been updated to the newest one?

What version/tag of the hippunfold container did you use when running with Docker? You will need v1.2.1.

commented

When I reran this tool with v1.2.1:

sudo docker run -it --rm -v /data8/HCP/HCP_struct_bids/derivatives/T1w_proc:/bids -v /data8/HCP/HCP_struct_bids/hippo_seg:/output khanlab/hippunfold:v1.2.1 /bids /output participant --modality T1w --cores 16 --generate_myelin_map --use-gpu --force_nnunet_model T1T2w

it started to pull the image 'khanlab/hippunfold:v1.2.1', then ran part of the workflow (47 of 221 steps (21%) done) until a new error occurred:

Error in rule run_inference:
jobid: 0
input: work/sub-HCD2996590/anat/sub-HCD2996590_hemi-R_space-corobl_desc-preproc_T1w.nii.gz, /opt/hippunfold_cache/trained_model.3d_fullres.Task103_hcp1200_T1T2w.nnUNetTrainerV2.model_best.tar
output: work/sub-HCD2996590/anat/sub-HCD2996590_hemi-R_space-corobl_desc-nnunet_dseg.nii.gz
log: logs/sub-HCD2996590/sub-HCD2996590_hemi-R_space-corobl_nnunet.txt (check log file(s) for error details)
shell:
mkdir -p tempmodel tempimg templbl && cp work/sub-HCD2996590/anat/sub-HCD2996590_hemi-R_space-corobl_desc-preproc_T1w.nii.gz tempimg/temp_0000.nii.gz && tar -xf /opt/hippunfold_cache/trained_model.3d_fullres.Task103_hcp1200_T1T2w.nnUNetTrainerV2.model_best.tar -C tempmodel && export RESULTS_FOLDER=tempmodel && export nnUNet_n_proc_DA=16 && nnUNet_predict -i tempimg -o templbl -t Task103_hcp1200_T1T2w -chk model_best --disable_tta &> logs/sub-HCD2996590/sub-HCD2996590_hemi-R_space-corobl_nnunet.txt && cp templbl/temp.nii.gz work/sub-HCD2996590/anat/sub-HCD2996590_hemi-R_space-corobl_desc-nnunet_dseg.nii.gz
(one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)

However, if I run the command without --force_nnunet_model T1T2w, it completes all steps. I wonder whether the model Task103_hcp1200_T1T2w is better than the one the latter command used (maybe it has better segmentation accuracy for the hippocampus and its surrounding structures).

Besides, after the command without --force_nnunet_model T1T2w completed, I tried to re-run the one with that option (after changing the output directory name), but something weird happened: the error Cyclic dependency on rule warp_gii_to_native was produced again. This error now always occurs, even when I rerun the command without --force_nnunet_model T1T2w, or remove the current images and re-pull the v1.2.1 hippunfold container. I don't know why this could happen.

Not sure what is going on in your case, but I suggest you just use either the T1w or T2w model -- in our paper we showed that the performance of those models is higher than that of the T1T2w model in any case.

commented

Finally, I figured out why this error occurs in my case. I used this tool https://github.com/suyashdb/hcp2bids to convert my HCP-style data to BIDS structure. To save storage space during the conversion, I chose to create symlinks in the BIDS folder instead of copying my data. When I replaced the symlinks with the real files, the error no longer occurred. Sorry for taking up so much of your time, and many thanks for your help.
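For anyone hitting the same symlink issue: one way to materialize a symlinked BIDS tree into real files is to copy it with symlinks dereferenced. A minimal sketch (the directory names and toy file are illustrative, not from the original dataset):

```shell
# Sketch: turn a symlinked dataset into one containing real files.
# Set up a toy "BIDS" folder whose file is a symlink to data stored elsewhere.
mkdir -p src bids_symlinked
echo "toy-nifti" > src/sub-01_T1w.nii.gz
ln -s "$(pwd)/src/sub-01_T1w.nii.gz" bids_symlinked/sub-01_T1w.nii.gz

# cp -rL follows (dereferences) symlinks, so the copy holds regular files
cp -rL bids_symlinked bids_real
```

The resulting bids_real directory can then be bind-mounted into the container without the symlink targets going missing.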