MIC-DKFZ / nnDetection

nnDetection is a self-configuring framework for 3D (volumetric) medical object detection which can be applied to new data sets without manual intervention. It includes guides for 12 data sets that were used to develop and evaluate the performance of the proposed method.

Prepare labels for a custom 3D dataset

Yasmin-Kassim opened this issue

Dear nnDetection Community,

I am planning to use nnDetection for object detection on a custom 3D dataset containing multiple instances of the same type of object: hair cells. I aim to detect each individual 3D hair cell, but since all the hair cells belong to the same category, there is only one object class in my dataset.

For the labelsTr directory, my current approach is to prepare 3D masks in which each hair cell is assigned a unique identifier from 1 to n, where n is the total number of hair cells present, to facilitate instance segmentation.

Could you please confirm if this approach aligns with the requirements for training with nnDetection? Should each hair cell be labeled distinctly even if they are all of the same type, or is there a different protocol for such scenarios?

Thank you for your assistance.

Best regards,
Yasmin

Dear @Yasmin-Kassim ,

the description of your labels matches the requirements of nnDetection. The mask should contain unique identifiers from 1 to n, while the associated label json file should map all of the identifiers to 0 (i.e. the first and only class in your dataset).

Please note that nnDetection produces bounding boxes; there is no support for instance segmentation.
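For illustration, a minimal sketch of how such a per-case json could be generated from an instance mask; the file names are placeholders, and the `{"instances": ...}` layout follows the mapping described above:

```python
import json

import numpy as np
import SimpleITK as sitk

# Placeholder file names; adjust to your own case ids.
mask = sitk.GetArrayFromImage(sitk.ReadImage("labelsTr/case_000.nii.gz"))

# Collect all instance identifiers (background 0 is excluded) and map
# each of them to class 0, the single "hair cell" class.
instance_ids = [int(i) for i in np.unique(mask) if i != 0]
label_info = {"instances": {str(i): 0 for i in instance_ids}}

with open("labelsTr/case_000.json", "w") as f:
    json.dump(label_info, f, indent=2)
```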

Best,
Michael

Thank you so much. I have another question: is it possible to run the training without five-fold cross-validation? It is taking too much time to finish all five folds.

Yes, it is possible to train only a subset of the folds, but that will reduce the final results on a held-out test set. If your main evaluation is based on cross-validation, you need to create your own split where only e.g. 3 folds are used (just replace 'splits_final.pkl' in the preprocessed folder).
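A minimal sketch of one way to build such a reduced split, assuming the split file uses the nnU-Net format (a pickled list of dicts with 'train' and 'val' case-id lists); the case ids below are placeholders:

```python
import pickle

from sklearn.model_selection import KFold

# Placeholder case identifiers; replace with the ids of your training cases.
case_ids = [f"case_{i:03d}" for i in range(30)]

# Build 3 folds instead of the default 5.
splits = []
for train_idx, val_idx in KFold(n_splits=3, shuffle=True, random_state=12345).split(case_ids):
    splits.append({
        "train": [case_ids[i] for i in train_idx],
        "val": [case_ids[i] for i in val_idx],
    })

# Overwrite splits_final.pkl inside the preprocessed task folder.
with open("splits_final.pkl", "wb") as f:
    pickle.dump(splits, f)
```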

I trained the network, and this is the prediction of the trained model. It looks promising; however, I'm not sure why the bounding boxes do not cover the whole cell, or why there are sometimes two bounding boxes over a single cell. How can I avoid the duplicate predictions and produce a single prediction covering the whole cell? Please see the PPT I attached here; it contains the predictions and also the ground truth that I provided to the network. My stacks are 512x512 with varying depth. I also noticed that during training I got two pickles, one is 'D3V001_3d.pkl' and the other one is 'D3V001_3dlr1.pkl'. Please advise how to proceed.
nnDetection_testing_results.pptx

Dear @Yasmin-Kassim ,

thanks for providing such detailed info on your problem. Based on the pptx, it looks like the objects may exceed the patch size during training, which results in problems during postprocessing. The current approach for this is to use a downsampled version of the images so that the objects fit into a single patch during training. This is also automatically prepared by nnDetection => D3V001_3dlr1.pkl

You can simply switch the training by using -o plan=D3V001_3dlr1 in the training command, which will create a separate training directory. This will likely fix both of your problems.

Best,
Michael

I have a question about predicting new images. Let's say I have a new image and I just want to run the prediction with the model that I trained. How can I preprocess only the new image and run the prediction?

You can place the image into the {Task}/{raw_splitted}/{imagesTs} folder and nnDetection will predict every image inside the folder (it still needs to follow the [sampled_id]_0000.nii.gz format, otherwise it is not possible to correctly identify the modality).
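A small sketch of how a new image could be staged for prediction; the task folder and file names below are hypothetical:

```python
import shutil
from pathlib import Path

# Hypothetical task folder and input file; adjust to your setup.
raw_splitted = Path("Task000_HairCells/raw_splitted")
new_image = Path("new_scan.nii.gz")

# nnDetection identifies the modality from the _0000 suffix, so the copy
# must follow the [sampled_id]_0000.nii.gz naming scheme mentioned above.
target = raw_splitted / "imagesTs" / "new_scan_0000.nii.gz"
target.parent.mkdir(parents=True, exist_ok=True)
shutil.copy(new_image, target)
```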
