System hangs and F-measure difference
tuhinkm opened this issue
Hi,
I am having the following issues/doubts:
- My system hangs when I try to run the saliency part of the code, for both test.py and predict.py. I am using the pretrained model you shared. Can you please help?
- The paper reports an F-measure of around 91% on DSS. However, the DSS paper itself reports a 92% F-measure. Can you please explain the reason for the difference?
- Please give me more information about your problem.
- We couldn't reproduce the reported F-measure (92%) with PyTorch :(
Thanks for the prompt reply.
- I get the one-line print of the parsed arguments, and then the system hangs during testing. I'm not sure why.
- Can you please share the saliency maps you reported, on Google Drive, both with and without the Deep Guided Filter? It would be a great help for comparison.
- Without more info, I can't reproduce your problem. I suggest using PyCharm for debugging.
- Currently I'm busy with ICCV. I can release the results after that.
The saliency maps are here: `dss_dgf.zip` is with DGF and `dss.zip` is without DGF. `{id}.png` is the ground truth, `{id}_sal.png` is the saliency map, and `{id}_sal_{thres}.png` is the result after applying a threshold. `test.txt` is the list of test images and `test_result.txt` is the performance.
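For comparing the shared maps against the ground truth, the F-measure can be sketched roughly as below. This is a minimal illustration, not the repository's own evaluation script: the function name, the fixed threshold, and the choice of beta² = 0.3 (a common convention in saliency benchmarks) are assumptions, and the repo's actual thresholding and averaging may differ.

```python
import numpy as np

def f_measure(pred, gt, beta2=0.3, thres=0.5):
    """F-measure between a binarized saliency map and a binary ground truth.

    pred, gt: float arrays in [0, 1] of the same shape.
    beta2=0.3 follows the usual saliency-benchmark convention (an
    assumption here); thres is the binarization threshold for pred.
    """
    pred_bin = pred >= thres
    gt_bin = gt >= 0.5
    tp = np.logical_and(pred_bin, gt_bin).sum()
    # Guard against empty predictions / empty ground truth.
    precision = tp / max(pred_bin.sum(), 1)
    recall = tp / max(gt_bin.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)

# Toy example on synthetic 4x4 maps (a perfect prediction).
gt = np.zeros((4, 4))
gt[1:3, 1:3] = 1.0
pred = gt.copy()
print(f_measure(pred, gt))  # -> 1.0
```

In practice one would load `{id}.png` as `gt` and `{id}_sal.png` as `pred` (scaled to [0, 1]) for each id in `test.txt` and average the per-image scores; small differences in thresholding or averaging choices like these can easily account for a ~1% gap between reported F-measures.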