dsx0511 / 3DMOTFormer

Official implementation of the ICCV 2023 paper 3DMOTFormer: Graph Transformer for Online 3D Multi-Object Tracking.


The results with the training code are worse than the results in the paper.

sjtuljw520 opened this issue · comments

Hi, thank you for sharing the nice code.

I tried to train the model with the CenterPoint detections as input and got an AMOTA of 0.513 on the validation set, which is significantly lower than the result in the paper (AMOTA 0.712). Maybe the config file (default.json) in this repo is not the same as the one used for the paper? Can you share how to modify the config file? Thank you!

Hi, the config file in the repository is the same one I used for the paper. It is hard to find the reason for the performance gap without your reproduction details. I would suggest going over the data preprocessing again to verify that everything ran correctly, and also making sure that you used the correct detection results.

Hi, can you help me with a problem? Which PyG version is required? In this code, `out = self.propagate(edge_index, query=query, key=key, value=value, edge_attr=edge_attr, edge_gate=edge_gate, size=None)`, there is no `edge_gate` parameter in the signature of `propagate()`.
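This is expected behavior in PyTorch Geometric: `MessagePassing.propagate()` accepts arbitrary extra keyword arguments and forwards them to the subclass's `message()` method by inspecting its signature, so `edge_gate` does not need to appear in `propagate()`'s own parameter list. The following is a minimal pure-Python sketch of that forwarding mechanism (the class and method names `MessagePassingSketch` and `GatedAttention` are illustrative, not part of PyG or this repo):

```python
import inspect

class MessagePassingSketch:
    """Sketch of how a PyG-style propagate() forwards extra kwargs.

    propagate() declares no `edge_gate` parameter itself; it inspects
    the subclass's message() signature and passes along any kwargs
    whose names match, silently dropping the rest (e.g. `value` here).
    """

    def propagate(self, edge_index, **kwargs):
        # Keep only the kwargs that message() actually declares.
        params = inspect.signature(self.message).parameters
        msg_kwargs = {k: v for k, v in kwargs.items() if k in params}
        return self.message(**msg_kwargs)

class GatedAttention(MessagePassingSketch):
    def message(self, query, key, edge_gate):
        # A real layer would compute gated attention messages here;
        # this just shows that edge_gate arrives intact.
        return query + key * edge_gate

layer = GatedAttention()
out = layer.propagate(edge_index=None, query=1.0, key=2.0,
                      value=3.0, edge_gate=0.5)
print(out)  # 2.0
```

So as long as your `message()` override declares an `edge_gate` argument, passing `edge_gate=...` to `propagate()` works; if you see an error about an unexpected keyword, it is more likely a PyG version mismatch, since older releases differ in how `propagate()` collects these arguments.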