TiagoCortinhal / SalsaNext

Uncertainty-aware Semantic Segmentation of LiDAR Point Clouds for Autonomous Driving

About the predicted values in the eval script, and how to run testing on this network

AkinoriKotani opened this issue · comments

Nice to meet you.
I am a student studying semantic segmentation in Japan.

I'm interested in SalsaNext, published by your team, and am trying to run it.
However, I'm having trouble with the prediction step.

Specifically, when I run the eval script provided by your team with the pretrained model, I get the following error:

labels:  4071
predictions:  0
Traceback (most recent call last):
  File "./evaluate_iou.py", line 237, in <module>
    eval(DATA["split"][FLAGS.split],splits,FLAGS.predictions)
  File "./evaluate_iou.py", line 66, in eval
    assert (len(label_names) == len(scan_names) and
AssertionError

When I checked, `label_names` and `scan_names` were populated normally; the cause was that `len(pred_name) == 0`.
This is because there was no prediction directory, so I created an empty one for the time being.
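For context, the assertion that fires here is a sanity check that the number of prediction files matches the number of label files, so an empty prediction directory makes the prediction count zero. A minimal sketch of that counting logic (hypothetical paths and helper names, not the script's actual code):

```python
import tempfile
from pathlib import Path

def count_files(directory, suffix=".label"):
    """Count files with a given suffix under a directory (recursively);
    a missing directory simply counts as zero files."""
    return sum(1 for _ in Path(directory).rglob(f"*{suffix}"))

# Throwaway layout for illustration (hypothetical names, not the real dataset):
root = Path(tempfile.mkdtemp())
(root / "labels").mkdir()
(root / "labels" / "000000.label").touch()
(root / "predictions").mkdir()  # left empty, like the workaround above

n_labels = count_files(root / "labels")
n_preds = count_files(root / "predictions")
print("labels:", n_labels, "predictions:", n_preds)
```

The evaluation script asserts the two counts match, which is why an empty predictions directory raises the AssertionError rather than a clearer message.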

Could you tell me how to resolve this error?
If there is a test script that should be run before the eval script, could you please post it?
Thank you.

Hello @AkinoriKotani. eval.sh first runs the inference (via infer.py) and then the evaluation, which I think is what you are trying to do.
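For readers following along, the two stages the maintainer describes can be sketched roughly as below (placeholder messages only; the real script passes dataset, model, and log flags to each stage):

```shell
#!/bin/sh
# Sketch of what eval.sh chains together, per the note above
# (not the actual script; see eval.sh in the repository for real arguments):
set -e
run_pipeline() {
  echo "stage 1: infer.py writes per-scan .label predictions into the prediction dir"
  echo "stage 2: evaluate_iou.py compares those predictions against ground truth"
}
run_pipeline
```

The key point is that the prediction directory is populated by the inference stage, so running evaluate_iou.py on its own against an empty directory will always fail the file-count assertion.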

Please check it out!

Best

Thank you for the guidance.

I tried running eval.sh.
I believe the command itself is fine, but the output is as follows.

./eval.sh -d ~/Dataset/SemanticKITTI/dataset -p pred -m eval_model -s valid -n salsanext
Traceback (most recent call last):
  File "./infer.py", line 13, in <module>
    from tasks.semantic.modules.user import *
  File "../../tasks/semantic/modules/user.py", line 20, in <module>
    from tasks.semantic.modules.SalsaNextUncertainty import *
ModuleNotFoundError: No module named 'tasks.semantic.modules.SalsaNextUncertainty'
finishing infering.
 Starting evaluating
********************************************************************************
INTERFACE:
Data:  /home/kotani/Dataset/SemanticKITTI/dataset
Predictions:  /home/kotani/SalsaNext/SalsaNext/pred
Split:  valid
Config:  config/labels/semantic-kitti.yaml
Limit:  None
********************************************************************************
Opening data config file config/labels/semantic-kitti.yaml
Ignoring xentropy class  0  in IoU evaluation
[IOU EVAL] IGNORE:  tensor([0])
[IOU EVAL] INCLUDE:  tensor([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17, 18,
        19])
[]
labels:  4071
predictions:  0
Traceback (most recent call last):
  File "./evaluate_iou.py", line 237, in <module>
    eval(DATA["split"][FLAGS.split],splits,FLAGS.predictions)
  File "./evaluate_iou.py", line 66, in eval
    assert (len(label_names) == len(scan_names) and
AssertionError

ModuleNotFoundError: No module named 'tasks.semantic.modules.SalsaNextUncertainty'
Is it possible to run testing and evaluation so that the results of SalsaNextUncertainty are output?
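A quick way to confirm whether this module is visible from the current working directory before rerunning eval.sh (a generic sketch; the dotted name is taken from the traceback, and the helper name is hypothetical):

```python
import importlib.util

def module_available(name):
    """Return True if the dotted module name resolves on sys.path."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # Raised when a parent package in the dotted path is itself missing.
        return False

# The module from the traceback resolves only from a repository checkout
# that actually contains SalsaNextUncertainty.py:
print(module_available("tasks.semantic.modules.SalsaNextUncertainty"))
# A stdlib module confirms the checker itself works:
print(module_available("json"))
```

If the first check prints False from inside the repository, the file is genuinely absent from that checkout rather than an import-path problem.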

Hello @AkinoriKotani, and sorry for the delay. If you pull the latest changes, you will see that the issue is fixed.

Best