hzxie / Pix2Vox

The official implementation of "Pix2Vox: Context-aware 3D Reconstruction from Single and Multi-view Images" (Xie et al., ICCV 2019).

Home Page: https://haozhexie.com/project/pix2vox


The test results were poor on my own separate dataset

zjp00 opened this issue · comments

commented

Hi, thank you very much for the excellent work by your team.

I am trying to use your Pix2Vox model to reconstruct a 64×64 image into a 64×64×64 3D voxel grid. I am using my own separate dataset, which contains 20,000 samples. I used your Pix2Vox-F model with the BCE loss scaled by 10 and trained for 500 epochs, but the results are not satisfactory: the validation loss diverges. I thought it was an overfitting problem, but I believe I have enough data. I also tried adding an MSE term scaled by 40 and training for 250 epochs; the loss curve looks good, but the reconstruction results are terrible. Do you have any advice on what the problem is and how I can adjust it?
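To be concrete about what I mean by "BCE loss ×10", here is a minimal NumPy sketch of the objective as I set it up (my own setup, not the repository's implementation):

```python
import numpy as np

def scaled_bce_loss(pred, target, scale=10.0, eps=1e-7):
    """Mean voxel-wise binary cross-entropy, multiplied by `scale`.

    pred:   predicted occupancy probabilities in (0, 1), shape (D, H, W)
    target: binary ground-truth occupancy grid, same shape
    """
    pred = np.clip(pred, eps, 1.0 - eps)  # avoid log(0)
    bce = -(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))
    return scale * bce.mean()

# A 2x2x2 toy grid: an uninformed prediction of 0.5 everywhere
# against an all-occupied target gives 10 * ln(2) ~= 6.93.
loss = scaled_bce_loss(np.full((2, 2, 2), 0.5), np.ones((2, 2, 2)))
```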
[Image 1]

Looking forward to your quick reply, I would appreciate it.

Maybe you should try more data augmentation.
See also: https://www.quora.com/How-do-you-reduce-model-overfitting
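For example, photometric augmentation can be applied to the input images without invalidating the voxel labels. A hypothetical NumPy sketch (not code from this repository; geometric augmentations such as flips or crops would also require applying the matching transform to the voxel grid):

```python
import numpy as np

def augment(image, rng):
    """Photometric augmentation for an (H, W, 3) float image in [0, 1].

    Only photometric changes are used here, so the voxel ground truth
    stays valid as-is; geometric augmentations (flips, crops) would
    require the same transform on the target voxel grid.
    """
    image = image * rng.uniform(0.8, 1.2)               # brightness jitter
    image = image + rng.normal(0.0, 0.02, image.shape)  # light Gaussian noise
    return np.clip(image, 0.0, 1.0)

# Example: augment one synthetic training image.
rng = np.random.default_rng(42)
img = np.full((64, 64, 3), 0.5)
aug = augment(img, rng)
```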

commented

Maybe you should try more data augmentation. See also: https://www.quora.com/How-do-you-reduce-model-overfitting

Thank you very much for your prompt reply. If more data is what's needed, do you think 50,000 samples would be enough? Or 100,000, or more? More data can certainly be collected, but that brings additional considerations, so I would like a rough range based on your experience. Also, do you think adding the MSE term scaled by 40 is necessary? With it, the loss no longer diverges and drops very low, but the reconstructed 3D results on validation still do not look good. Looking forward to your reply again.

It's difficult to say definitively. You have to experiment and see for yourself because deep learning is like a black box.

commented

It's difficult to say definitively. You have to experiment and see for yourself because deep learning is like a black box.

Thanks again for your prompt reply. I will keep trying. Thanks for your help, and good luck with your research!