yeerwen / UniSeg

MICCAI 2023 Paper (Early Acceptance)


How to design my own segmentation task_id for my own dataset?

angolin22 opened this issue · comments

commented

Thanks for your work. I'd like to ask: for my own datasets and segmentation tasks (for example, segmenting the heart, liver, and brain from several datasets), how should I design my own segmentation task_id in the code? Or could you point me to the relevant part of the code?

You need to assign a task ID to each task. Then some necessary modifications need to be made, for example to self.task, self.task_class, and self.total_task_num in the UniSeg_Trainer class. In addition, the shape of the Universal Prompt needs to be adapted to your own tasks. Note that you also need to build a merged pre-processed dataset containing all the data you want to use.
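A minimal sketch of what those task-related attributes might look like for a hypothetical three-task setup (heart, liver, brain). The attribute names follow the reply above; the class name, values, and wrapping config class are illustrative, not the repository's actual code:

```python
# Hypothetical sketch of the task-related attributes described above,
# adapted for three single-organ tasks. Values are examples only.

class UniSegTrainerConfig:
    def __init__(self):
        # Map each dataset/task name to an integer task ID.
        self.task = {"heart": 0, "liver": 1, "brain": 2}
        # Number of output categories per task, background included
        # (e.g. background + organ = 2 classes).
        self.task_class = {0: 2, 1: 2, 2: 2}
        # Total number of tasks; the Universal Prompt shape depends on it.
        self.total_task_num = len(self.task)

cfg = UniSegTrainerConfig()
print(cfg.total_task_num)  # 3
```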

commented

ok, thanks for your reply.
May I ask what self.task_class means?
self.task_class = {0: 3, 1: 3, 2: 3, 3: 3, 4: 2, 5: 2, 6: 2, 7: 2, 8: 2, 9: 4, 10: 2}
For example, in the entry 0: 3, 0 is the task ID; what does the 3 mean?

The number of categories for task 0 (the background is also counted as a category).
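A quick illustration of the counting rule: for a liver task whose label map uses 0 = background, 1 = liver, 2 = tumour, the entry in self.task_class would be 3 (the label values here are a made-up example):

```python
# Example: background counts as a class, so a task with labels
# {0: background, 1: liver, 2: tumour} has 3 categories.
import numpy as np

label_map = np.array([[0, 1, 1], [0, 2, 0]])  # toy 2D label patch
num_classes = int(label_map.max()) + 1        # background included
print(num_classes)  # 3
```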

commented

ok, thanks for your reply

commented

Sorry, I have a question about --planner3d MOTSPlanner3D/Verse20Planner3D/ProstatePlanner3D/....
MOTSPlanner3D: target spacing is target = np.array([3, 1.5, 1.5])
Verse20Planner3D: target spacing is target = np.array([4, 2, 2])
ProstatePlanner3D: target spacing is target = np.array([3, 1.5, 1.5])
...
May I ask why you set the target spacing like this?

These planners just resample all the data in each dataset to a uniform spacing. If possible, I suggest setting all the datasets to the same spacing.
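For intuition, a minimal sketch of resampling a volume to a target spacing, assuming SciPy is available. nnU-Net's own planners do this (with more care about anisotropy and interpolation order); this is only a simplified stand-in:

```python
# Minimal spacing resampling sketch: scale each axis by
# original_spacing / target_spacing (z, y, x order).
import numpy as np
from scipy.ndimage import zoom

def resample_to_spacing(volume, original_spacing, target_spacing):
    """Resample a 3D array from original_spacing to target_spacing."""
    factors = np.asarray(original_spacing) / np.asarray(target_spacing)
    return zoom(volume, factors, order=1)  # linear interpolation

vol = np.zeros((30, 100, 100), dtype=np.float32)           # toy volume
resampled = resample_to_spacing(vol, (6, 3, 3), (3, 1.5, 1.5))
print(resampled.shape)  # (60, 200, 200)
```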

commented

OK, thanks for your reply. In addition, I don't understand how the shape of the Universal Prompt needs to be adapted to my own tasks. Suppose I have 3 tasks; which part of the code do I need to modify?

Set task_total_number in the UniSeg_model class to 3.
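A toy illustration (not the repository's actual layer code) of why this one value matters: the Universal Prompt is a learnable tensor whose leading dimension equals task_total_number, so changing the task count changes the parameter's shape. The spatial size below is a made-up placeholder:

```python
# Stand-in for a learnable prompt parameter of shape
# (task_total_number, D, H, W); here a plain NumPy array.
import numpy as np

task_total_number = 3            # set this to your number of tasks
prompt_spatial = (8, 8, 8)       # hypothetical spatial size of the prompt

universal_prompt = np.zeros((task_total_number, *prompt_spatial))
print(universal_prompt.shape)  # (3, 8, 8, 8)
```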

commented

Oh, thanks. When running, I encountered a problem:

Exception in background worker 0:
 list index out of range
Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/site-packages/nnunet/training/dataloading/multi_threaded_augmenter.py", line 46, in producer
    item = data_loader.__next__(task_pool, epoch_choice_id, lock)
  File "/opt/conda/lib/python3.8/site-packages/nnunet/training/dataloading/dataset_loading.py", line 448, in __next__
    return self.generate_train_batch(task_pool, epoch_choice_id, lock)
  File "/opt/conda/lib/python3.8/site-packages/nnunet/training/dataloading/dataset_loading.py", line 490, in generate_train_batch
    selected_keys = np.random.choice(self.list_of_keys_task[choice_id], self.batch_size, True, None)
IndexError: list index out of range

do you know the reason?

commented

I found the cause of this error: you need to set self.task_num = 3 in multi_threaded_augmenter.py.

Yes, we have updated our code to improve its compatibility with new multi-task datasets. We appreciate your suggestions for further improvements and bug reports.
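For anyone hitting the same IndexError: the loader keeps one list of case keys per task and indexes it by a task choice drawn from range(task_num), so a task_num larger than the number of key lists runs past the end. A toy reconstruction of that invariant (variable names echo the traceback; the data is made up):

```python
# The per-task key lists must match task_num, otherwise indexing
# list_of_keys_task[choice_id] raises IndexError.
list_of_keys_task = [["case0"], ["case1"], ["case2"]]  # keys for 3 tasks
task_num = 3                                           # must match above

for choice_id in range(task_num):
    keys = list_of_keys_task[choice_id]  # safe only when counts agree
    assert len(keys) >= 1
print("ok")
```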

commented

Hello, after training on my own dataset, what should I modify in order to predict on new data?
When I only modify the following content, the program no longer executes after preprocessing finishes:

if modality_used[0] == "CT" and num_image == 1:
    # couple_id = {"live": [1, 2], "kidn": [3, 4], "hepa": [5, 6], "panc": [7, 8], "colo": [9], "lung": [10],
    #              "sple": [11], "sub-": [12]}
    # id_2_name = {-1: "all", 0: "live", 1: "kidn", 2: "hepa", 3: "panc", 4: "colo", 5: "lung", 6: "sple", 7: "sub-"}
    couple_id = {"hear": [1], "ct_t": [2, 3, 4, 5, 6, 7, 8], "asoc": [9]}
    id_2_name = {-1: "all", 0: "hear", 1: "ct_t", 2: "asoc"}
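A hedged reading of the snippet above: couple_id appears to map a four-character case prefix to the label IDs that task owns, and id_2_name maps a task ID back to its prefix, with -1 meaning all tasks. The helper below is hypothetical, added only to show how the two dictionaries combine; the values mirror the snippet:

```python
# Hypothetical helper illustrating the couple_id / id_2_name mappings.
couple_id = {"hear": [1], "ct_t": [2, 3, 4, 5, 6, 7, 8], "asoc": [9]}
id_2_name = {-1: "all", 0: "hear", 1: "ct_t", 2: "asoc"}

def labels_for_task(task_id):
    """Return the label IDs belonging to a task (-1 = every task)."""
    if task_id == -1:
        return sorted(l for labels in couple_id.values() for l in labels)
    return couple_id[id_2_name[task_id]]

print(labels_for_task(1))   # [2, 3, 4, 5, 6, 7, 8]
print(labels_for_task(-1))  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```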

I think you can delete the output directory first and then execute the prediction.

commented

Oh, it works. Thanks very much.