sunghoonim / DPSNet

[ICLR19] DPSNet: End-to-end Deep Plane Sweep Stereo

MVS Dataset not used for training?

ShilinC opened this issue · comments

Hi! I found that in your download_train.py you download the mvs_train dataset, but according to your datapreparation_train.py you don't use the MVS dataset for training. I can see mvs_test in your test data preparation, so I'm assuming you use the MVS dataset ONLY for testing. Is that correct?

I tried to retrain your network with your default configuration, without the MVS dataset in training, but the resulting accuracy on mvs_test is noticeably worse than with your pretrained model (and the paper). So I'm wondering whether you included the MVS dataset in training to get that pretrained model?

Hi!
I did not include the MVS dataset in training.
I only used SUN3D, RGBD, and Scenes11.
Does your retrained network give similar accuracy results on the other datasets (SUN3D, RGBD, Scenes11)?
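
For anyone double-checking the split, here is a minimal sketch (not the repository's code) that counts the prepared scene folders per dataset family, assuming the prepared folders are named like sun3d_*, rgbd_*, scenes11_*; an absent "mvs" key would confirm MVS is not in the training data:

```python
# Hypothetical check, not code from this repository: count prepared scene
# folders per dataset family under the training directory.
from collections import Counter
from pathlib import Path

def count_dataset_families(train_root):
    """Tally scene folders by family prefix (e.g. sun3d, rgbd, scenes11, mvs),
    assuming folder names look like "<family>_<scene>"."""
    counts = Counter()
    for scene in Path(train_root).iterdir():
        if scene.is_dir():
            counts[scene.name.split("_")[0].lower()] += 1
    return counts

print(count_dataset_families("dataset/train"))  # expect no "mvs" key
```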

Thanks for getting back.
Generally speaking, on MVS and SUN3D our pretrained model is worse than the number reported in the paper. But on RGBD and Scene11 our retrained model is comparable to the paper. Is there anything we should be specifically aware of during training? I think I may be missing some tricks for training (e.g., pick the last epoch or the best one on validation set)? I trained 10 epochs and I picked the last epoch's model.

After your comment, I re-trained my code.
The default batch size in the uploaded code is 8, but I trained with 16, which is the number I reported in my paper.
When I trained with a batch size of 8, the results were worse overall, especially on MVS and SUN3D.
So you'd better train with a batch size of 16 to get results similar to my paper.
As I remember, the results I reported in the paper were produced with 4 epochs.
This time I trained for 10 epochs. The results are slightly better than those with 4 epochs on SUN3D, RGBD, and Scenes11, but slightly worse on MVS.
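
To make the two settings above concrete, here is a minimal, self-contained PyTorch sketch with batch size 16 and 4 epochs; dummy_set is a stand-in for the prepared training data, not the repository's actual loader:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in data: 64 random 3x240x320 "images" just so the loop runs.
dummy_set = TensorDataset(torch.randn(64, 3, 240, 320))

train_loader = DataLoader(
    dummy_set,
    batch_size=16,   # paper setting; the uploaded default of 8 gave worse MVS/SUN3D results
    shuffle=True,
    num_workers=2,
    pin_memory=True,
)

for epoch in range(4):  # the paper's numbers came from roughly 4 epochs
    for (batch,) in train_loader:
        pass  # the actual forward/backward step would go here
```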

Thanks! This is very helpful.