AlexYouXin / Explicit-Shape-Priors


Training Verse2019

prinshul opened this issue · comments

Hi

Thank you for sharing the code.

Training on one A6000 (48 GB) GPU takes around 44 hours on Verse2019. Is this expected?

Also, after 235 (out of 1000) epochs the mean val dice is around 0.44, and it fluctuates as well.
Is this expected?

epoch : 229, iteration : 4592, train loss : 0.508565, train loss_ce: 0.428673, train loss_dice: 0.588457, train dice : 0.411543
epoch : 229, iteration : 4600, train loss : 0.457253, train loss_ce: 0.330832, train loss_dice: 0.583674, train dice : 0.416326
epoch : 229, mean train loss : 0.463446, mean train ce loss: 0.348929, mean train dice : 0.422036
epoch : 229, iteration : 2292, val loss : 0.463003, val loss_ce: 0.381617, val loss_dice: 0.544389, val dice : 0.455611
epoch : 229, iteration : 2295, val loss : 0.456334, val loss_ce: 0.343845, val loss_dice: 0.568823, val dice : 0.431177
epoch : 229, iteration : 2298, val loss : 0.596561, val loss_ce: 0.583527, val loss_dice: 0.609596, val dice : 0.390404
epoch : 229, mean val loss : 0.540370, mean val ce loss: 0.489271, mean val dice : 0.408531
23%|█████▌ | 230/1000 [9:52:30<32:47:30, 153.31s/it]epoch : 230, iteration : 4608, train loss : 0.365300, train loss_ce: 0.220092, train loss_dice: 0.510508, train dice : 0.489492
epoch : 230, iteration : 4616, train loss : 0.395917, train loss_ce: 0.241295, train loss_dice: 0.550540, train dice : 0.449460
epoch : 230, mean train loss : 0.462211, mean train ce loss: 0.356120, mean train dice : 0.431698
epoch : 230, iteration : 2301, val loss : 0.402726, val loss_ce: 0.246581, val loss_dice: 0.558871, val dice : 0.441129
epoch : 230, iteration : 2304, val loss : 0.468381, val loss_ce: 0.412031, val loss_dice: 0.524732, val dice : 0.475268
epoch : 230, iteration : 2307, val loss : 0.457529, val loss_ce: 0.327538, val loss_dice: 0.587521, val dice : 0.412479
epoch : 230, iteration : 2310, val loss : 0.679444, val loss_ce: 0.721153, val loss_dice: 0.637735, val dice : 0.362265
epoch : 230, mean val loss : 0.475301, mean val ce loss: 0.389813, mean val dice : 0.439211
23%|█████▌ | 231/1000 [9:55:03<32:43:57, 153.24s/it]epoch : 231, iteration : 4624, train loss : 0.534674, train loss_ce: 0.455874, train loss_dice: 0.613474, train dice : 0.386526
epoch : 231, iteration : 4632, train loss : 0.389544, train loss_ce: 0.271356, train loss_dice: 0.507731, train dice : 0.492269
epoch : 231, iteration : 4640, train loss : 0.397096, train loss_ce: 0.302117, train loss_dice: 0.492075, train dice : 0.507925
epoch : 231, mean train loss : 0.432567, mean train ce loss: 0.313274, mean train dice : 0.448140
epoch : 231, iteration : 2313, val loss : 0.441968, val loss_ce: 0.296965, val loss_dice: 0.586970, val dice : 0.413030
epoch : 231, iteration : 2316, val loss : 0.560039, val loss_ce: 0.491504, val loss_dice: 0.628573, val dice : 0.371427
epoch : 231, iteration : 2319, val loss : 0.418460, val loss_ce: 0.282845, val loss_dice: 0.554075, val dice : 0.445925
epoch : 231, mean val loss : 0.497063, mean val ce loss: 0.406438, mean val dice : 0.412313
23%|█████▌ | 232/1000 [9:57:39<32:51:29, 154.02s/it]epoch : 232, iteration : 4648, train loss : 0.375251, train loss_ce: 0.247406, train loss_dice: 0.503096, train dice : 0.496904
epoch : 232, iteration : 4656, train loss : 0.335354, train loss_ce: 0.199348, train loss_dice: 0.471360, train dice : 0.528640
epoch : 232, mean train loss : 0.416142, mean train ce loss: 0.293162, mean train dice : 0.460878
epoch : 232, iteration : 2322, val loss : 0.368972, val loss_ce: 0.211474, val loss_dice: 0.526470, val dice : 0.473530
epoch : 232, iteration : 2325, val loss : 0.440467, val loss_ce: 0.284251, val loss_dice: 0.596684, val dice : 0.403316
epoch : 232, iteration : 2328, val loss : 0.477646, val loss_ce: 0.337023, val loss_dice: 0.618270, val dice : 0.381730
epoch : 232, mean val loss : 0.480204, mean val ce loss: 0.392196, mean val dice : 0.431788
23%|█████▎ | 233/1000 [10:00:05<32:17:58, 151.60s/it]epoch : 233, iteration : 4664, train loss : 0.388309, train loss_ce: 0.267901, train loss_dice: 0.508716, train dice : 0.491284
epoch : 233, iteration : 4672, train loss : 0.332113, train loss_ce: 0.224117, train loss_dice: 0.440109, train dice : 0.559891
epoch : 233, iteration : 4680, train loss : 0.344548, train loss_ce: 0.182094, train loss_dice: 0.507002, train dice : 0.492998
epoch : 233, mean train loss : 0.427070, mean train ce loss: 0.302198, mean train dice : 0.448059
epoch : 233, iteration : 2331, val loss : 0.575628, val loss_ce: 0.541614, val loss_dice: 0.609642, val dice : 0.390358
epoch : 233, iteration : 2334, val loss : 0.732134, val loss_ce: 0.846880, val loss_dice: 0.617388, val dice : 0.382612
epoch : 233, iteration : 2337, val loss : 0.641370, val loss_ce: 0.718466, val loss_dice: 0.564274, val dice : 0.435726
epoch : 233, iteration : 2340, val loss : 0.508094, val loss_ce: 0.401183, val loss_dice: 0.615006, val dice : 0.384994
epoch : 233, mean val loss : 0.587039, mean val ce loss: 0.570158, mean val dice : 0.396080
23%|█████▍ | 234/1000 [10:02:39<32:25:32, 152.39s/it]epoch : 234, iteration : 4688, train loss : 0.549605, train loss_ce: 0.476733, train loss_dice: 0.622477, train dice : 0.377523
epoch : 234, iteration : 4696, train loss : 0.418690, train loss_ce: 0.301662, train loss_dice: 0.535718, train dice : 0.464282
epoch : 234, mean train loss : 0.422967, mean train ce loss: 0.306612, mean train dice : 0.460677
epoch : 234, iteration : 2343, val loss : 0.433280, val loss_ce: 0.349381, val loss_dice: 0.517179, val dice : 0.482821
epoch : 234, iteration : 2346, val loss : 0.474856, val loss_ce: 0.337154, val loss_dice: 0.612558, val dice : 0.387442
epoch : 234, iteration : 2349, val loss : 0.548544, val loss_ce: 0.491878, val loss_dice: 0.605210, val dice : 0.394790
epoch : 234, mean val loss : 0.463013, mean val ce loss: 0.358985, mean val dice : 0.432958
24%|█████▍ | 235/1000 [10:05:13<32:30:13, 152.96s/it]epoch : 235, iteration : 4704, train loss : 0.411904, train loss_ce: 0.289972, train loss_dice: 0.533836, train dice : 0.466164
epoch : 235, iteration : 4712, train loss : 0.384918, train loss_ce: 0.263261, train loss_dice: 0.506575, train dice : 0.493425
epoch : 235, iteration : 4720, train loss : 0.427208, train loss_ce: 0.266072, train loss_dice: 0.588343, train dice : 0.411657
epoch : 235, mean train loss : 0.429935, mean train ce loss: 0.314582, mean train dice : 0.454712
epoch : 235, iteration : 2352, val loss : 0.478716, val loss_ce: 0.353607, val loss_dice: 0.603824, val dice : 0.396176
epoch : 235, iteration : 2355, val loss : 0.552514, val loss_ce: 0.493493, val loss_dice: 0.611534, val dice : 0.388466
epoch : 235, iteration : 2358, val loss : 0.394575, val loss_ce: 0.237227, val loss_dice: 0.551922, val dice : 0.448078
epoch : 235, mean val loss : 0.499052, mean val ce loss: 0.414196, mean val dice : 0.416091
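For reference, the relationship between the logged quantities can be checked numerically. This is a sketch inferred from the log values themselves, not confirmed from the repo's code: the total loss looks like the equal-weighted mean of the CE and Dice terms, and the reported dice looks like 1 − loss_dice.

```python
# Sanity check on the logged loss composition (inferred from the numbers
# above; the exact weighting in the repo is an assumption here).

def total_loss(loss_ce: float, loss_dice: float) -> float:
    """Equal-weighted combination of cross-entropy and soft-Dice loss."""
    return 0.5 * loss_ce + 0.5 * loss_dice

def dice_score(loss_dice: float) -> float:
    """Dice score implied by a soft-Dice loss of (1 - dice)."""
    return 1.0 - loss_dice

# Values taken from epoch 229, iteration 4592 in the log above.
assert abs(total_loss(0.428673, 0.588457) - 0.508565) < 1e-6
assert abs(dice_score(0.588457) - 0.411543) < 1e-6
print("log values are consistent with loss = (loss_ce + loss_dice) / 2")
```

Under that reading, a mean val dice of ~0.44 simply mirrors a mean val Dice loss of ~0.56, so the two curves carry the same information.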

**prinshul** commented Jul 7, 2023

Have you completed your training? Is the final result consistent with the paper?


Our training on 2 Tesla V100s (with a batch size of 2) takes around 64 hours on Verse2019.

Also, after 245 (out of 1000) epochs the mean val dice is around 0.60.

The val dice then increases sharply between epochs 300 and 500, from 0.60 to 0.85.

Please carry out the data preprocessing before training.

Hi.
We ran the code exactly as provided here on GitHub, including the preprocessing code as given.

We have verified that a batch size of 1 leads to unstable training and a longer training time. Please increase it to at least 2.