qiqihaer / RandLA-Net-pytorch

RandLA-Net implementation in PyTorch

About the performance on SemanticKITTI

a18700 opened this issue · comments

Hello, thanks for the amazing work.

Do you have any idea what the main factor is behind the performance degradation when using the PyTorch code?

I want to boost your model's performance to the level reported in the paper.

I have no idea about the degradation. As for the performance, I think the performance on the validation set is similar to the original paper's. Maybe you can try training for more epochs and adding some different augmentation methods.
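For reference, here is a minimal sketch of the kind of point-cloud augmentation that suggestion might mean (random z-axis rotation, anisotropic scaling, jitter). The function name and parameter values are illustrative, not this repo's actual pipeline:

```python
import numpy as np

def augment_point_cloud(xyz, rotate=True, scale_range=(0.95, 1.05), jitter_sigma=0.01):
    """Hypothetical augmentation for an (N, 3) point cloud.

    Applies a random rotation about the z-axis, a random per-axis scale,
    and Gaussian jitter. Parameters are illustrative defaults only.
    """
    xyz = xyz.copy()
    if rotate:
        theta = np.random.uniform(0.0, 2.0 * np.pi)
        c, s = np.cos(theta), np.sin(theta)
        # Rotation about the vertical (z) axis, common for outdoor LiDAR scans
        R = np.array([[c, -s, 0.0],
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])
        xyz = xyz @ R.T
    # Independent random scale per axis
    xyz *= np.random.uniform(*scale_range, size=(1, 3))
    # Small Gaussian jitter on every point
    xyz += np.random.normal(0.0, jitter_sigma, size=xyz.shape)
    return xyz
```

Such transforms only touch the input coordinates, so they can be dropped into the data loader without changing the model.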

Thank you for the reply!

Apart from the performance, I have an issue with the memory cost of this implementation.

Can I ask how much memory is used when experimenting on SemanticKITTI with the default settings?

I'm experimenting on an 8 GB RTX 2080 with batch size 6; even after reducing it to 2, OOM errors still come up.

8 GB seems small for the default settings. You can accumulate gradients over several iterations, i.e. run several small forward/backward passes before each optimizer step.
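A minimal sketch of that gradient-accumulation idea in PyTorch: run `accum_steps` micro-batches, scale each loss so the gradients average, and only call `optimizer.step()` once per group. The function name and `accum_steps` value are illustrative, not part of this repo:

```python
import torch

def train_step(model, criterion, optimizer, batches, accum_steps=4):
    """Hypothetical accumulation loop: effective batch = micro-batch * accum_steps,
    but peak memory is that of a single micro-batch."""
    optimizer.zero_grad()
    total_loss = 0.0
    for i, (x, y) in enumerate(batches):
        loss = criterion(model(x), y) / accum_steps  # scale so grads average
        loss.backward()                              # grads accumulate in .grad
        total_loss += loss.item()
        if (i + 1) % accum_steps == 0:
            optimizer.step()       # one update per accum_steps micro-batches
            optimizer.zero_grad()
    return total_loss
```

Since `.backward()` adds into `.grad` by default, no extra bookkeeping is needed; the trade-off is that batch-norm statistics are still computed per micro-batch.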