ZwwWayne / mmMOT

[ICCV2019] Robust Multi-Modality Multi-Object Tracking

Using different detection set in .pkl file error

dmatos2012 opened this issue · comments

Hi! Thanks for your great work. Evaluation with the pre-trained models worked as described. However, I wanted to evaluate on a different set of detections, specifically from PointRCNN. I grabbed detections from AB3D, where they provide detections in their /data/KITTI for car, pedestrian, etc., and converted the format to match your .pkl format.
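For anyone attempting the same conversion, a rough sketch of regrouping per-frame detection lines into a pickled dict is below. The column layout assumed here (`frame_id score x y z ...`) and the output structure are illustrative assumptions only; the actual .pkl schema expected by mmMOT and the exact AB3D column order should be checked against the respective repos.

```python
import pickle
from collections import defaultdict

def txt_to_pkl(txt_path, pkl_path):
    """Group whitespace-separated per-frame detections into a dict keyed
    by frame index and pickle it. The assumed line layout is
    `frame_id score x y z ...`; adjust the slicing to the real AB3D
    column order and to mmMOT's expected schema."""
    frames = defaultdict(list)
    with open(txt_path) as f:
        for line in f:
            fields = line.split()
            if not fields:
                continue  # skip blank lines
            frame_id = int(float(fields[0]))
            frames[frame_id].append([float(v) for v in fields[1:]])
    with open(pkl_path, 'wb') as f:
        pickle.dump(dict(frames), f)
```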

I get the following error only on sequences 0007.txt and 0019.txt; the code works on the rest.
```
[2020-04-16 19:36:15,628][eval_seq.py][line: 90][ INFO] Test: [9/11] Sequence ID: KITTI-0019
[2020-04-16 19:36:16,832][eval_seq.py][line: 161][ INFO] Test Frame: [0/1036] Time 1.201(1.201)
Traceback (most recent call last):
  File "eval_seq.py", line 205, in <module>
    main()
  File "eval_seq.py", line 72, in main
    validate(val_dataset, tracking_module, args.result_sha, part='val')
  File "eval_seq.py", line 107, in validate
    seq_loader, tracking_module)
  File "eval_seq.py", line 154, in validate_seq
    input[0], det_info, dets, det_split)
  File "/home/david/Documents/trackers/mmMOT/tracking_model.py", line 81, in predict
    assign_id, assign_bbox)
  File "/home/david/Documents/trackers/mmMOT/tracking_model.py", line 136, in align_id
    dets_out[i]['id'] += id_offset
RuntimeError: result type Float can't be cast to the desired output type Long
```
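For reference, the failing line increments integer track IDs in place, and newer PyTorch releases refuse to cast a float result back into an integer buffer during an in-place add. A minimal sketch of the same failure mode, using NumPy so it stays self-contained (the variable names mirror the traceback but are otherwise illustrative):

```python
import numpy as np

# Stand-ins for the objects in the traceback: integer track IDs and a
# float offset. Newer PyTorch raises an analogous error for
# `long_tensor += float_tensor`.
ids = np.array([0, 1, 2], dtype=np.int64)   # like dets_out[i]['id']
offset = np.float64(10.0)                   # like id_offset

try:
    ids += offset  # in-place add cannot cast the float result to int64
except TypeError as exc:
    print("refused:", type(exc).__name__)

# An explicit cast sidesteps the version difference:
ids += np.int64(offset)
print(ids.tolist())  # [10, 11, 12]
```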

So, a couple of questions.

  1. Is there anything particular about those sequences that causes the error only there, or is it more likely coming from my detection file (I made my own .pkl file replicating yours)?
  2. After deleting these sequences from the detections, I get the following on the validation set:

```
==========================tracking evaluation summary===========================
 Multiple Object Tracking Accuracy (MOTA)    0.416359
 Multiple Object Tracking Precision (MOTP)   0.865596
 Multiple Object Tracking Accuracy (MOTAL)   0.460030
 Multiple Object Detection Accuracy (MODA)   0.460272
 Multiple Object Detection Precision (MODP)  0.941595
 Recall                                      0.682547
 Precision                                   0.806635
 F1                                          0.739421
 False Alarm Rate                            0.507463
```
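For context on the gap between MOTA and MODA in a summary like the one above: the KITTI devkit computes CLEAR-MOT accuracy from misses, false positives, and ID switches, while MODA omits the ID-switch term. A small sketch of the two formulas (the counts are made up purely for illustration, not taken from this run):

```python
def mota(misses, false_positives, id_switches, num_gt):
    """CLEAR-MOT accuracy: 1 - (FN + FP + IDSW) / GT."""
    return 1.0 - (misses + false_positives + id_switches) / num_gt

def moda(misses, false_positives, num_gt):
    """Detection-only accuracy: same formula, but ignores ID switches."""
    return 1.0 - (misses + false_positives) / num_gt

# Illustrative counts only: ID switches pull MOTA below MODA.
print(round(mota(300, 150, 20, 1000), 3))  # 0.53
print(round(moda(300, 150, 1000), 3))      # 0.55
```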

  3. Would you say this is somehow expected, or am I missing something I have to change in your code to accept other detections? The numbers seem a bit low, particularly since the model was trained on KITTI and on point clouds, so I was hoping they would not differ by much.

Thanks!

  1. They may be caused by bugs where some part of the code is not robust to corner cases. I am not sure.
  2. If the formats are the same, there should be no problem; this might be because the detections are not as good, or the training schedule should be changed for the new detections.
  1. Could you give me more detail about what you mean by "robust to the corner cases"?

Thanks for the quick response.

The problem was with the PyTorch version. I downgraded to the recommended PyTorch version and didn't get the error again.