yd-yin / SAI3D

[CVPR 2024] SAI3D: Segment Any Instance in 3D Scenes


results in paper

RyanG41 opened this issue · comments

Hi,

May I ask about which part (train/val/test) of ScanNetV2 is used for evaluating the metrics shown in Table 2 of the paper?

I ran the evaluation on the validation set of ScanNetV2 only and got results similar to those reported in the paper, though I don't know whether that is simply a coincidence:

AP: 0.308876279958
AP50: 0.503174827842
AP25: 0.706655160447

Also, could you clarify which split (train/val/test) you used for ScanNet++/Matterport3D?

Best Regards

Hi,

This is clarified in the latest arXiv version of our paper. As stated in A. Implementation Details of the supplementary material: "we evaluate the numerical results on the validation set for ScanNetV2, ScanNet200 and ScanNet++ datasets and on the test set for Matterport3D dataset."

Hi,

Thanks, and sorry I missed that part.

Another question: when evaluating AP on the ScanNetV2 dataset, the script ignores objects with the label "40" (meaning other furniture). I understand that this minimizes ambiguity when dealing with open-vocabulary object search. However, it may not be a big issue when evaluating class-agnostic instance segmentation.
So I re-ran the evaluation including label "40" (adding it to valid_id), and the results improve slightly:

AP: 0.317002480729
AP50: 0.51299535723
AP25: 0.707198994727
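For concreteness, the change described above can be sketched as follows. This is a minimal, hypothetical illustration of filtering ground-truth instances by a set of valid semantic label IDs; the function and field names (`filter_instances`, `label_id`, `valid_ids`) are illustrative and not taken from the actual SAI3D/ScanNet evaluation code.

```python
# Hypothetical sketch: class-agnostic evaluation typically keeps only GT
# instances whose semantic label is in a "valid" set. Adding a label ID to
# that set makes more GT instances count toward AP.

def filter_instances(instances, valid_ids):
    """Keep only instances whose semantic label is in valid_ids."""
    return [inst for inst in instances if inst["label_id"] in valid_ids]

# Toy ground truth with three instances (label IDs follow the NYU40 scheme).
gt = [
    {"id": 0, "label_id": 39},  # "otherfurniture" under NYU40
    {"id": 1, "label_id": 40},  # "otherprop" under NYU40
    {"id": 2, "label_id": 4},   # "bed" under NYU40
]

base = filter_instances(gt, {4, 39})          # label 40 excluded
extended = filter_instances(gt, {4, 39, 40})  # label 40 added to valid_ids
```

With the extended set, the instance with label 40 is no longer ignored, so the evaluation sees one additional ground-truth object, which is why the metrics can shift slightly.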

Does it make sense to include label 40? I hope you can give me some insight.

Best Regards

Hi,

According to here, "otherfurniture" corresponds to "39" rather than "40", so you may have included another class rather than "otherfurniture".
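The mix-up above is easy to make, since the NYU40 scheme has two similarly named catch-all classes near the end. A small sketch of the relevant tail of the mapping (the full table is distributed with the ScanNet label mapping files; only a few entries are shown here):

```python
# Partial NYU40 label-ID-to-name mapping, showing the tail entries relevant
# to this discussion. Under this scheme, 39 is "otherfurniture" and 40 is
# "otherprop", so adding ID 40 to valid_id includes "otherprop", not
# "otherfurniture".
NYU40_NAMES = {
    36: "bathtub",
    37: "bag",
    38: "otherstructure",
    39: "otherfurniture",
    40: "otherprop",
}

assert NYU40_NAMES[39] == "otherfurniture"
assert NYU40_NAMES[40] == "otherprop"
```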

BTW, the improvement in performance is reasonable, because our method gives finer segmentations than the GT of the 18 classes.

Hi,

Sorry again, I was misled by another paper. I should've checked the label correspondence.

Thanks for your patient explanation!