valeoai / obsnet

Results on the Segment Me If You Can datasets

haoz19 opened this issue · comments

Hello,

Thanks for your great work; it has inspired me a lot.

I see the results are quite good on the Segment Me If You Can dataset, but when I follow your instructions and download the pre-trained model for it, I get FPR@TPR95: 0.617, AUPRC: 0.569, AUROC: 0.862, which is quite different from the leaderboard (AUPRC: 0.754, FPR@TPR95: 0.267). (I did the annotation of SMIYC myself, so the results may differ.) Would it be possible to send me the anomaly scores (.hdf5) for the Segment Me If You Can dataset that you submitted to the SMIYC organizers, so I can check whether my results are right?

It would be a great help for my current work!

Many Thanks

Hi @haoz19,
What performance did you get on the Segment Me If You Can RoadAnomaly validation set? I am getting very poor results.

I got FPR@TPR95: 0.402; AP: 0.727; AUROC: 0.919 on the SMIYC RoadAnomaly validation set (10 images).

Did you get similar results?

Unfortunately no. Did you use evaluation.py or inference.py?

Okay, I see. I am getting:
AUROC score: 0.7873951347082428
AUPRC score: 0.40464778006347774
FPR@TPR95: 0.5757097820937631
which is a lot worse.
One question: did you interpolate the final anomaly scores, since they come at a fixed size? (See the sketch below.)
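For reference, a minimal sketch of how a fixed-size score map could be upsampled back to the original image resolution before computing metrics. This uses bilinear interpolation in PyTorch; the tensor names and shapes are assumptions for illustration, not the repository's actual code.

```python
import torch
import torch.nn.functional as F

# score_map: (H_net, W_net) anomaly scores at the network's output resolution
# (hypothetical tensor; the actual output shape may differ)
score_map = torch.rand(512, 1024)

# target size = original image resolution (e.g. SMIYC RoadAnomaly images)
orig_h, orig_w = 1080, 1920

# F.interpolate expects a (N, C, H, W) tensor; bilinear keeps scores continuous
upsampled = F.interpolate(
    score_map[None, None],           # add batch and channel dims
    size=(orig_h, orig_w),
    mode="bilinear",
    align_corners=False,
)[0, 0]                              # back to (H, W)

print(upsampled.shape)  # torch.Size([1080, 1920])
```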

Thank you so much! Also, the results get worse when I test on the road obstacle track:
AUROC score: 0.8077338939302358
AUPRC score: 0.12090705488023645
FPR@TPR95: 0.5609430880181637
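In case it helps track down the discrepancy, here is a hedged sketch of how these three metrics are commonly computed from per-pixel scores and binary anomaly labels with scikit-learn. This is a generic recipe, not necessarily the exact evaluation code used in this repository or by the SMIYC benchmark.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, roc_curve

def evaluate(scores, labels):
    """scores: flat array of per-pixel anomaly scores; labels: 1 = anomaly, 0 = inlier."""
    auroc = roc_auc_score(labels, scores)
    auprc = average_precision_score(labels, scores)
    fpr, tpr, _ = roc_curve(labels, scores)
    # FPR at the first threshold where TPR reaches 95%
    fpr_at_95tpr = fpr[np.searchsorted(tpr, 0.95)]
    return auroc, auprc, fpr_at_95tpr

# toy example with random scores (a real evaluation uses every labelled pixel)
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=10_000)
scores = labels * 0.5 + rng.random(10_000) * 0.8
print(evaluate(scores, labels))
```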

No, because I think the authors released the pre-trained models for the SMIYC benchmark, so I stuck to the Road Obstacle and Road Anomaly datasets. But I will give it a try!

But did you see a similar drop in accuracy when testing on the road obstacle dataset?

Hello,

You can find the .hdf5 file submitted to the leaderboard at the following Google Drive link: https://drive.google.com/drive/folders/1yDT5dJsWoxRapROTRRUVPqXF0qTId85E?usp=sharing
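(For anyone wanting to inspect that file and compare it against their own scores, a minimal h5py sketch follows. The internal key layout is an assumption and depends on how the file was written; listing the names first reveals the actual structure.)

```python
import h5py

# hypothetical local path to the file downloaded from the Drive link above
path = "./anomaly_scores.hdf5"

with h5py.File(path, "r") as f:
    # print every group/dataset name to discover the actual layout
    f.visit(print)
    # once a dataset name is known, read it as a numpy array, e.g.:
    # scores = f["some_dataset_name"][()]
```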

You should be able to reproduce the results using inference.py, for example:
python inference.py --data "CityScapes" --model "road_anomaly" --img_folder "./data/dataset_AnomalyTrack/images/" --segnet_file "./ckpt/WideResnet_DeepLabv3plus_CityScapes.pth" --obsnet_file "./ckpt/ObsNet_CityScapes.pth"

The pre-trained models are available in the README.md or here:
https://drive.google.com/drive/folders/1Xxrl0JPU1KNdrZSFTBG1S2be5eu0XjPS?usp=share_link

We did not test our method on the obstacle track of SegmentMeIfYouCan.

Hoping that it helps 🙂

Victor

I just met with the author and now I can reproduce the results. I used to reshape the images; when I run the code without any changes, I get the correct results!
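To illustrate the pitfall described above, here is a hedged sketch (the function, model, and variable names are made up for illustration): if the input image is resized before inference, the predicted score map has to be mapped back to the ground-truth resolution before evaluation, otherwise the per-pixel comparison is done on mismatched grids.

```python
import torch
import torch.nn.functional as F

def scores_at_label_resolution(image, label_hw, model, net_hw=(512, 1024)):
    """Run a (hypothetical) anomaly model on a resized input, then bring the
    score map back to the ground-truth resolution for evaluation.

    image: (C, H, W) tensor; label_hw: (H, W) of the ground-truth mask.
    """
    # resize the input to the resolution the network expects
    x = F.interpolate(image[None], size=net_hw, mode="bilinear", align_corners=False)
    with torch.no_grad():
        score_map = model(x)                      # assumed shape (1, 1, *net_hw)
    # undo the resize so scores align pixel-wise with the original labels
    score_map = F.interpolate(score_map, size=label_hw, mode="bilinear",
                              align_corners=False)
    return score_map[0, 0]
```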