TIF conversion fails due to missing extension in directory path
martinschorb opened this issue
Hi,
I'm trying to convert a directory of tif slices, and it fails with a missing-extension error:
convert_to_bdv --resolution 0.01 0.01 0.05 --n_threads 32 /g/emcf/ronchi/Benvenuto-Giovanna-seaurchin_SBEMtest_5833/giovanna_G3-1_20-07-07_3view/S010_acquisition/aligned_dec2020/aligned_full_stomach/ *.tif stomach.n5
Traceback (most recent call last):
File "/g/emcf/software/python/miniconda/envs/bdv/bin/convert_to_bdv", line 33, in <module>
sys.exit(load_entry_point('pybdv', 'console_scripts', 'convert_to_bdv')())
File "/g/emcf/schorb/code/pybdv/pybdv/scripts/pybdv_converter.py", line 91, in main
chunks=chunks, n_threads=args.n_threads)
File "/g/emcf/schorb/code/pybdv/pybdv/converter.py", line 252, in convert_to_bdv
with open_file(input_path, 'r') as f:
File "/g/emcf/schorb/code/pybdv/pybdv/util.py", line 35, in open_file
raise ValueError(f"Invalid extension: {ext}")
ValueError: Invalid extension:
This happens regardless of giving a trailing /.
I just tried the command you posted and it works for me.
However, you need elf for the tif wrapper; I suspect it's not in the Python env you are using.
If you have trouble installing it from conda-forge (there seems to be some issue with the version pinning), just do the following:
pip install https://github.com/constantinpape/elf/archive/0.2.3.tar.gz
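A quick way to confirm whether elf is importable in the active environment (a minimal stdlib sketch; only the package name elf is taken from this thread):

```python
import importlib.util

# Check whether the 'elf' package is resolvable in the current environment
# without actually importing it (find_spec returns None if it is missing).
spec = importlib.util.find_spec("elf")
if spec is None:
    print("elf is NOT installed in this environment")
else:
    print(f"elf found at {spec.origin}")
```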
OK, thanks.
Seems to work now. I remember doing this task with --target slurm. However, this parameter is no longer supported. Will it return at some point?
No, that has never been supported in pybdv. If you want to use slurm, you can use the cluster_tools.DownscalingWorkflow with format bdv.n5. Here is a short function that wraps it: https://github.com/constantinpape/paintera_tools/blob/master/paintera_tools/convert/converter.py#L96-L141
Cool thanks!
convert_to_bdv seems to not downsample without explicit factors. Can I run it again on an existing n5 to just downsample, or what would you use for that?
> convert_to_bdv seems to not downsample without explicit factors
Yes, you need to pass the downsampling factors for this.
> Can I run it again on an existing n5 to just downsample, or what would you use for that?
It will fail by default, or overwrite the data if you pass the flag overwrite data.
The downsample functions I linked to above should work if you run them with the s0 dataset already existing; they will just downsample.
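To make the role of the downsampling factors concrete (a minimal sketch; the shape and factor values are example numbers, not taken from this dataset): each [z, y, x] factor divides the previous level's shape, so a chain like [[1, 2, 2], [2, 2, 2], [2, 2, 2]] yields levels s1 to s3 from s0:

```python
# Compute the pyramid shapes produced by a chain of per-axis downscaling factors.
def pyramid_shapes(shape, factors):
    shapes = [tuple(shape)]
    for factor in factors:
        # ceil-divide each axis by its factor to get the next level's shape
        shape = tuple((s + f - 1) // f for s, f in zip(shape, factor))
        shapes.append(shape)
    return shapes

# Example: a 100 x 2048 x 2048 stack (z, y, x) with the factors from this thread
print(pyramid_shapes((100, 2048, 2048), [[1, 2, 2], [2, 2, 2], [2, 2, 2]]))
# [(100, 2048, 2048), (100, 1024, 1024), (50, 512, 512), (25, 256, 256)]
```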
OK, what is halo for?
Something still fails here...
ERROR: [pid 39011] Worker Worker(salt=066972180, workers=1, host=login.cluster.embl.de, username=schorb, pid=39011) failed DownscalingSlurm(tmp_folder=/scratch/schorb, max_jobs=50, config_dir=/scratch/schorb/configs, input_path=/g/emcf/ronchi/Benvenuto-Giovanna-seaurchin_SBEMtest_5833/giovanna_G3-1_20-07-07_3view/S010_acquisition/aligned_dec2020/stomach.n5, input_key=setup0/s0, output_path=/g/emcf/ronchi/Benvenuto-Giovanna-seaurchin_SBEMtest_5833/giovanna_G3-1_20-07-07_3view/S010_acquisition/aligned_dec2020/stomach.n5, output_key=setup0/s1, scale_factor=(1, 2, 2), scale_prefix=s1, halo=[1, 2, 2], effective_scale_factor=[1, 2, 2], dependency=DummyTask)
Traceback (most recent call last):
File "/g/emcf/software/python/miniconda/envs/mobie/lib/python3.7/site-packages/luigi/worker.py", line 191, in run
new_deps = self._run_get_new_deps()
File "/g/emcf/software/python/miniconda/envs/mobie/lib/python3.7/site-packages/luigi/worker.py", line 133, in _run_get_new_deps
task_gen = self.task.run()
File "/g/emcf/software/python/miniconda/envs/mobie/lib/python3.7/site-packages/cluster_tools/cluster_tasks.py", line 95, in run
raise e
File "/g/emcf/software/python/miniconda/envs/mobie/lib/python3.7/site-packages/cluster_tools/cluster_tasks.py", line 81, in run
self.run_impl()
File "/g/emcf/software/python/miniconda/envs/mobie/lib/python3.7/site-packages/cluster_tools/downscaling/downscaling.py", line 86, in run_impl
prev_shape = f[self.input_key].shape
AttributeError: 'Group' object has no attribute 'shape'
call:
DownscalingWorkflow(tmp_folder=/scratch/schorb, max_jobs=50, config_dir=/scratch/schorb/configs, target=slurm, dependency=DummyTask, input_path=/g/emcf/ronchi/Benvenuto-Giovanna-seaurchin_SBEMtest_5833/giovanna_G3-1_20-07-07_3view/S010_acquisition/aligned_dec2020/stomach.n5, input_key=setup0, scale_factors=[[1, 2, 2], [2, 2, 2], [2, 2, 2]], halos=[[1, 2, 2], [2, 2, 2], [2, 2, 2]], metadata_format=paintera, metadata_dict={}, output_path=, output_key_prefix=setup0, force_copy=False, skip_existing_levels=False, scale_offset=0)
I annotated some of the arguments that you should change.
DownscalingWorkflow(
tmp_folder=/scratch/schorb, # this will write all the logs etc.; I would recommend a dedicated subfolder like /scratch/schorb/tmp_downscale
input_key=setup0, # this needs to be the name of the actual dataset: "setup0/timepoint0/s0". That is causing the error you see
halos=[[1, 2, 2], [2, 2, 2], [2, 2, 2]], # the halo is used to enlarge the blocks in downscaling to avoid artifacts. Setting it to the same as the factors is fine
metadata_format=paintera, # this needs to be "bdv.n5"
metadata_dict={}, # you should pass the resolution / voxel size here: {"resolution": [rz, ry, rx]}
output_path=, # you need to specify the output path! probably the same as the input path will do in your case
output_key_prefix=setup0, # this should be blank, i.e. ""
)
Edit: output_key_prefix needs to be blank.
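On the halo question above: the halo just enlarges each processing block before downscaling so the interpolation at block edges sees its neighbors, avoiding seam artifacts. A minimal sketch of the bookkeeping (the clipping-to-volume behavior illustrates the general technique, not cluster_tools internals):

```python
# Enlarge a block's [start, stop) bounds by a per-axis halo, clipped to the volume shape.
def grow_block(start, stop, halo, shape):
    grown_start = tuple(max(s - h, 0) for s, h in zip(start, halo))
    grown_stop = tuple(min(e + h, dim) for e, h, dim in zip(stop, halo, shape))
    return grown_start, grown_stop

# Example: a (64, 64, 64) block with halo [1, 2, 2] inside a (100, 2048, 2048) volume
print(grow_block((0, 64, 64), (64, 128, 128), (1, 2, 2), (100, 2048, 2048)))
# ((0, 62, 62), (65, 130, 130))
```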
I cannot run it due to issues with anisotropic voxel sizes...
ERROR: [pid 57435] Worker Worker(salt=327618298, workers=1, host=login.cluster.embl.de, username=schorb, pid=57435) failed WriteDownscalingMetadata(tmp_folder=/scratch/schorb/blbla, output_path=/g/emcf/ronchi/Benvenuto-Giovanna-seaurchin_SBEMtest_5833/giovanna_G3-1_20-07-07_3view/S010_acquisition/aligned_dec2020/stomach.n5, scale_factors=[[1, 2, 2], [2, 2, 2], [2, 2, 2]], dependency=DownscalingSlurm, metadata_format=bdv.n5, metadata_dict={"resolution": [0.05, 0.015, 0.015]}, output_key_prefix=, scale_offset=0, prefix=downscaling)
Traceback (most recent call last):
File "/g/emcf/software/python/miniconda/envs/mobie/lib/python3.7/site-packages/luigi/worker.py", line 191, in run
new_deps = self._run_get_new_deps()
File "/g/emcf/software/python/miniconda/envs/mobie/lib/python3.7/site-packages/luigi/worker.py", line 133, in _run_get_new_deps
task_gen = self.task.run()
File "/g/emcf/software/python/miniconda/envs/mobie/lib/python3.7/site-packages/cluster_tools/downscaling/downscaling_workflow.py", line 95, in run
self._bdv_metadata()
File "/g/emcf/software/python/miniconda/envs/mobie/lib/python3.7/site-packages/cluster_tools/downscaling/downscaling_workflow.py", line 84, in _bdv_metadata
overwrite_data=False, enforce_consistency=False)
File "/g/emcf/software/python/miniconda/envs/mobie/lib/python3.7/site-packages/pybdv/metadata.py", line 266, in write_xml_metadata
overwrite, overwrite_data, enforce_consistency)
File "/g/emcf/software/python/miniconda/envs/mobie/lib/python3.7/site-packages/pybdv/metadata.py", line 97, in _require_view_setup
_check_setup(vs)
File "/g/emcf/software/python/miniconda/envs/mobie/lib/python3.7/site-packages/pybdv/metadata.py", line 75, in _check_setup
raise ValueError("Incompatible voxel size")
ValueError: Incompatible voxel size
I tried with [0.015, 0.015, 0.05] as well...
The problem is that you already have an xml with metadata. Now it tries to overwrite the data, sees that the voxel sizes are incompatible, and complains.
Just remove /g/emcf/ronchi/Benvenuto-Giovanna-seaurchin_SBEMtest_5833/giovanna_G3-1_20-07-07_3view/S010_acquisition/aligned_dec2020/stomach.xml and it should work.
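For reference, the voxel size that this consistency check compares against lives in the BDV xml under each ViewSetup. A minimal sketch of reading it with the stdlib parser (the embedded xml is a hand-written, trimmed example, not the actual file):

```python
import xml.etree.ElementTree as ET

# A trimmed, hand-written example of the relevant part of a BDV xml
bdv_xml = """
<SpimData>
  <SequenceDescription>
    <ViewSetups>
      <ViewSetup>
        <id>0</id>
        <voxelSize>
          <unit>micrometer</unit>
          <size>0.015 0.015 0.05</size>
        </voxelSize>
      </ViewSetup>
    </ViewSetups>
  </SequenceDescription>
</SpimData>
"""

root = ET.fromstring(bdv_xml)
size_text = root.find(".//ViewSetup/voxelSize/size").text
voxel_size = [float(v) for v in size_text.split()]
print(voxel_size)  # [0.015, 0.015, 0.05]
```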
That's it, thanks!
Will you join the call at 3?
Oh, I was not aware there was anything scheduled at 3. What is it about?
(Unfortunately I have something else scheduled already, Thursdays are really full for me.)