stardist-predict3D ignores values for nms and prob thresholds
lguerard opened this issue
Describe the bug
stardist-predict3d does not take the threshold arguments (--prob_thresh, --nms_thresh) into account.
To reproduce
In a working environment, run:
stardist-predict3d -i path/to/tif -m 3D_demo -o output/path --n_tiles 4 8 8 --prob_thresh 0.5
Expected behavior
The value I pass on the command line should be used, but the output always shows:
There is 1 registered model for 'StarDist3D':
Name Alias(es)
──── ─────────
'3D_demo' None
Found model '3D_demo' for 'StarDist3D'.
2022-08-03 11:29:54.051304: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-08-03 11:29:54.707360: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 8964 MB memory: -> device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:d5:00.0, compute capability: 7.5
Loading network weights from 'weights_best.h5'.
Loading thresholds from 'thresholds.json'.
Using default values: prob_thresh=0.707933, nms_thresh=0.3.
Environment (please complete the following information):
- StarDist version 0.8.3
- CSBDeep version 0.7.2
- TensorFlow version 2.9.1
- OS: Windows 10
- GPU memory (if applicable): 11GB
You may run this code and paste the output:
2022-08-03 11:41:08.243702: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-08-03 11:41:09.080689: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /device:GPU:0 with 8964 MB memory: -> device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:d5:00.0, compute capability: 7.5
tensorflow GPU: True
I don't think this is a real bug.
When the pre-trained model is loaded, the threshold values stored alongside it in thresholds.json are printed.
During prediction, however, the values from the command line are passed to the model; they are just not printed to stdout. You can verify this by trying different thresholds: they produce different results.
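To illustrate the behavior described above (load-time defaults printed once, call-time arguments silently overriding them), here is a minimal, self-contained sketch of the pattern. This is not the actual StarDist code; the class and messages are hypothetical stand-ins:

```python
class FakeModel:
    """Hypothetical stand-in mimicking how a model handles thresholds."""

    def __init__(self, prob_thresh, nms_thresh):
        # Defaults as if loaded from thresholds.json; printed once at load time.
        self.prob_thresh = prob_thresh
        self.nms_thresh = nms_thresh
        print(f"Using default values: prob_thresh={prob_thresh}, nms_thresh={nms_thresh}.")

    def predict_instances(self, prob_thresh=None, nms_thresh=None):
        # Call-time arguments override the loaded defaults, without any printout.
        prob = prob_thresh if prob_thresh is not None else self.prob_thresh
        nms = nms_thresh if nms_thresh is not None else self.nms_thresh
        return prob, nms


model = FakeModel(prob_thresh=0.707933, nms_thresh=0.3)
# The load-time message shows 0.707933, but this call actually uses 0.5:
print(model.predict_instances(prob_thresh=0.5))
```

In the real CLI, the --prob_thresh and --nms_thresh flags are presumably forwarded to StarDist3D.predict_instances in the same way, which is why different values produce different results even though the printed "default" line never changes.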
Hmmm, you're actually right. The differences are there when using different values. I thought I had tested that, but apparently not.
Thanks for your help, I'll close!
Yes, @jonasutz is right: the values you see are printed only upon model loading.