er-muyue / BeMapNet

Different evaluation results on nuScenes val

imyyf opened this issue · comments

commented

[Screenshot: swint_eval_val_results]
We used your BeMapNet-SwinT checkpoint to evaluate on the nuScenes val set, but got results that differ from the ones on your GitHub page.
Ours: mAP@EASY 70, mAP@HARD 55.3
Yours: mAP@EASY 67, mAP@HARD 49.1
Our test settings and environment:
python==3.8, torch==1.9.1, CUDA==11.4, mmcv-full==1.4.0, pillow==8.1.2, numpy==1.21.6, detectron2 (built from source)
We tested on 1×A6000.

Similar here. With ResNet-50 we also get results higher than those reported in the paper (ours: 62, paper: 59.8). The code clearly performs better than the paper reports. Can you explain such a large performance gap?

The performance differences between GPUs arise from numerical error in the matmul-based distance computation when the tensors are in float32 (torch.FloatTensor).
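To illustrate the float32 matmul issue, here is a minimal sketch (not the repo's code; the point sets are placeholders, but `compute_mode` is a real parameter of `torch.cdist` that selects between the matmul-based and direct distance paths):

```python
import torch

torch.manual_seed(0)
x = torch.rand(500, 2)   # stand-in for an evaluation point set
y = torch.rand(500, 2)

# Matmul path: d^2 = |x|^2 + |y|^2 - 2 x.y^T, prone to float32 rounding,
# and the rounding of the matmul itself can differ between GPUs.
d_mm = torch.cdist(x, y, compute_mode='use_mm_for_euclid_dist')

# Direct subtraction path: no matmul, much smaller rounding error.
d_exact = torch.cdist(x, y, compute_mode='donot_use_mm_for_euclid_dist')

# The two float32 paths already disagree slightly on the same device;
# device-dependent matmul rounding then shifts thresholded metrics like mAP.
print((d_mm - d_exact).abs().max().item())
```

The discrepancy is tiny per pair, but the evaluation thresholds distances, so points near a threshold can flip between GPUs.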

Quick fix: modify the line in tools/evaluation/cd.py to dist = torch.cdist(source_pc.type(torch.float64), target_pc.type(torch.float64)). This yields consistent evaluation results that match the paper.
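The one-line fix can be sketched as follows (the `source_pc`/`target_pc` names come from the thread; the data here is made up for illustration):

```python
import torch

torch.manual_seed(0)
source_pc = torch.rand(300, 2)   # placeholder data, not real map points
target_pc = torch.rand(300, 2)

# Promote to float64 before cdist, as in the proposed fix.
dist = torch.cdist(source_pc.type(torch.float64),
                   target_pc.type(torch.float64))

# Reference distances via explicit differences (no matmul shortcut).
ref = (source_pc.double().unsqueeze(1)
       - target_pc.double().unsqueeze(0)).norm(dim=-1)

# In float64 the matmul rounding is negligible, so the result is stable
# across devices and agrees with the direct computation.
print((dist - ref).abs().max().item())
```

Since the cost of the chamfer-distance step is small relative to inference, the float64 promotion should not noticeably slow evaluation.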