DrSleep / DenseTorch

An easy-to-use wrapper for working with dense per-pixel tasks in PyTorch (including multi-task learning)

Training doesn't match the paper performance

kspeng opened this issue

commented

Hi,

I am trying to follow your instructions to reproduce the paper's results on the NYU dataset, but the mIoU and RMSE still don't match. They stabilize after 300 iterations and plateau at 25% and 0.7. Is there anything I missed in the instructions? Thanks!

Best,
Kuo

commented

Have you achieved multi-task training with this code?

commented

Running multi-task training works, no doubt.
However, I can't fully reproduce the training procedure described in the paper to get the claimed performance.
Interestingly, the provided pretrained model does reach the performance claimed in the paper.
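
For what it's worth, a minimal sketch of how one might load the released checkpoint to verify those numbers; the file name `checkpoint.pth.tar` and the `state_dict` key are assumptions about how the weights are packaged, not taken from the repo:

```python
import torch

# Hypothetical checkpoint file -- substitute the weights released with the repo.
ckpt = torch.load("checkpoint.pth.tar", map_location="cpu")

# Released checkpoints often wrap the weights in a dict; unwrap if so.
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt

# `model` must be constructed exactly as in examples/multitask/train.py
# (MobileNet-v2 encoder + Multi-Task Light-Weight RefineNet decoder):
# model.load_state_dict(state_dict)
# model.eval()
```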

commented

Same here. I also trained examples/multitask/train.py, i.e. MobileNet-v2 + MTLWRefineNet, without any changes.
After 1000 epochs I get mIoU = 0.27 and RMSE = 0.72, which doesn't reach the paper's claimed performance.
Does anyone know why?
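
In case the gap is in evaluation rather than training, here is a minimal NumPy sketch of how mIoU and depth RMSE are commonly computed; the ignore index of 255 and the positive-depth validity mask are my assumptions, not DenseTorch's own evaluation code:

```python
import numpy as np

def mean_iou(pred, gt, num_classes, ignore_index=255):
    """Mean intersection-over-union over classes, skipping ignored pixels."""
    valid = gt != ignore_index
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c)[valid].sum()
        union = np.logical_or(pred == c, gt == c)[valid].sum()
        if union > 0:  # skip classes absent from both prediction and label
            ious.append(inter / union)
    return float(np.mean(ious))

def depth_rmse(pred, gt):
    """Root-mean-square error over pixels with a valid (positive) depth."""
    valid = gt > 0
    return float(np.sqrt(np.mean((pred[valid] - gt[valid]) ** 2)))
```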

commented

See the following script for multi-task training with MobileNet-v2 and Multi-Task Light-Weight RefineNet: https://github.com/DrSleep/DenseTorch/blob/dev/examples/scripts/segm-depth-normals-mbv2-rflw.sh

Without changes, it should reach ~37% mean IoU, 0.60 RMSE, and 24 mean angular error. To achieve the higher numbers described in the paper, you have to use the raw NYUD dataset with knowledge distillation on missing segmentation labels; this is not provided by this repository.
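
Since the distillation stage isn't included in the repository, here is a rough PyTorch sketch of what knowledge distillation on missing segmentation labels typically looks like; the frozen-teacher setup and the temperature are standard practice, not the author's confirmed recipe:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=1.0):
    """KL divergence between teacher and student per-pixel class distributions,
    applied on frames that lack ground-truth segmentation labels."""
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=1)
    p_teacher = F.softmax(teacher_logits / t, dim=1)
    # 'batchmean' matches the mathematical definition of KL divergence;
    # the t*t factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)

# Usage sketch: the teacher is a stronger segmentation model kept frozen.
# with torch.no_grad():
#     teacher_logits = teacher(image)
# loss = distillation_loss(student(image), teacher_logits)
```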