marian-nmt / marian

Fast Neural Machine Translation in C++

Home Page: https://marian-nmt.github.io

Bouncy training with streaming training data

phikoehn opened this issue · comments

Bug description

I am mainly interested in streaming the training data (-t stdin) to reduce RAM usage with very large training files. However, training fails: after a while the loss bounces around and the BLEU score roller-coasters between 25 and 0 and anything in between.

How to reproduce

Regular training:

$marian/build/marian \
        --sync-sgd \
        -T . \
        --model model/model.npz \
        --devices $GPU \
        --train-sets data/train.bpe.de data/train.bpe.en \
        --vocabs data/train.bpe.de.json data/train.bpe.en.json \
        --dim-vocabs 50000 50000 \
        --mini-batch-fit -w 3000 \
        --type s2s \
        --best-deep --dec-cell lstm --enc-cell lstm \
        --layer-normalization --dropout-rnn 0.2 --dropout-src 0.1 --dropout-trg 0.1 \
        --learn-rate 0.0001 \
        --after-epochs 0 \
        --early-stopping 5 \
        --valid-freq 20000 --save-freq 20000 --disp-freq 2000 \
        --valid-mini-batch 8 \
        --valid-sets data/dev.bpe.de data/dev.bpe.en \
        --valid-metrics cross-entropy perplexity translation \
        --valid-translation-output model/dev.out \
        --valid-script-path ./score-dev.sh \
        --seed 1111 --exponential-smoothing \
        --normalize=1 --beam-size=12 --quiet-translation \
        --log model/train.log --valid-log model/valid.log

Log:

[2021-07-15 21:45:46] [data] Shuffling data
[2021-07-15 22:12:57] [data] Done reading 206,145,740 sentences
[2021-07-15 22:54:36] [data] Done shuffling 206,145,740 sentences to temp files
[2021-07-15 22:55:34] [training] Batches are processed as 1 process(es) x 1 devices/process
[2021-07-15 23:05:18] Ep. 1 : Up. 2000 : Sen. 141,379 : Cost 8.30000973 : Time 4762.05s : 421.10 words/s
[2021-07-15 23:15:03] Ep. 1 : Up. 4000 : Sen. 281,544 : Cost 7.60127163 : Time 583.78s : 3443.01 words/s
[2021-07-15 23:24:52] Ep. 1 : Up. 6000 : Sen. 422,074 : Cost 7.34017038 : Time 586.89s : 3436.09 words/s
[2021-07-15 23:34:32] Ep. 1 : Up. 8000 : Sen. 563,558 : Cost 7.10902023 : Time 579.04s : 3470.11 words/s
[2021-07-15 23:44:22] Ep. 1 : Up. 10000 : Sen. 702,613 : Cost 6.85724354 : Time 588.92s : 3426.64 words/s
[2021-07-15 23:54:17] Ep. 1 : Up. 12000 : Sen. 841,703 : Cost 6.56500673 : Time 593.60s : 3433.72 words/s
[2021-07-16 00:03:56] Ep. 1 : Up. 14000 : Sen. 983,784 : Cost 6.28361940 : Time 578.33s : 3452.99 words/s
[2021-07-16 00:13:46] Ep. 1 : Up. 16000 : Sen. 1,123,212 : Cost 6.05325747 : Time 589.20s : 3465.00 words/s
[2021-07-16 00:23:36] Ep. 1 : Up. 18000 : Sen. 1,262,402 : Cost 5.75356102 : Time 587.62s : 3433.56 words/s
[2021-07-16 00:33:21] Ep. 1 : Up. 20000 : Sen. 1,404,169 : Cost 5.31719160 : Time 584.76s : 3427.10 words/s
[...]
[2021-07-16 01:56:22] Ep. 1 : Up. 22000 : Sen. 1,540,853 : Cost 4.85009193 : Time 2193.51s : 934.75 words/s
[2021-07-16 02:09:55] Ep. 1 : Up. 24000 : Sen. 1,683,183 : Cost 4.45044994 : Time 813.27s : 2452.73 words/s
[2021-07-16 02:23:32] Ep. 1 : Up. 26000 : Sen. 1,823,385 : Cost 4.18674564 : Time 816.95s : 2458.90 words/s
[2021-07-16 02:37:07] Ep. 1 : Up. 28000 : Sen. 1,966,223 : Cost 4.00612354 : Time 814.82s : 2441.55 words/s
[2021-07-16 02:50:58] Ep. 1 : Up. 30000 : Sen. 2,106,144 : Cost 3.83319879 : Time 830.38s : 2451.93 words/s
[2021-07-16 03:04:36] Ep. 1 : Up. 32000 : Sen. 2,247,992 : Cost 3.71553779 : Time 817.93s : 2471.99 words/s
[2021-07-16 03:18:26] Ep. 1 : Up. 34000 : Sen. 2,386,818 : Cost 3.63044572 : Time 830.13s : 2425.15 words/s
[2021-07-16 03:32:06] Ep. 1 : Up. 36000 : Sen. 2,528,145 : Cost 3.53368115 : Time 820.12s : 2451.35 words/s
[2021-07-16 03:45:52] Ep. 1 : Up. 38000 : Sen. 2,668,603 : Cost 3.48313785 : Time 826.42s : 2452.20 words/s
[2021-07-16 03:59:26] Ep. 1 : Up. 40000 : Sen. 2,811,273 : Cost 3.41922927 : Time 814.00s : 2461.11 words/s
[2021-07-16 03:59:26] Saving model weights and runtime parameters to model/model.npz.orig.npz
[2021-07-16 03:59:38] Saving model weights and runtime parameters to model/model.iter40000.npz
[2021-07-16 03:59:47] Saving model weights and runtime parameters to model/model.npz
[2021-07-16 03:59:59] Saving Adam parameters to model/model.npz.optimizer.npz
[2021-07-16 04:00:48] [valid] Ep. 1 : Up. 40000 : cross-entropy : 66.6809 : new best
[2021-07-16 04:01:12] [valid] Ep. 1 : Up. 40000 : perplexity : 14.3618 : new best
[2021-07-16 04:02:08] [valid] Ep. 1 : Up. 40000 : translation : 24.31 : new best
[2021-07-16 04:16:01] Ep. 1 : Up. 42000 : Sen. 2,949,289 : Cost 3.38994718 : Time 994.69s : 2028.04 words/s
[2021-07-16 04:29:45] Ep. 1 : Up. 44000 : Sen. 3,090,886 : Cost 3.31471109 : Time 824.43s : 2434.13 words/s
[2021-07-16 04:43:24] Ep. 1 : Up. 46000 : Sen. 3,232,057 : Cost 3.30455637 : Time 819.08s : 2455.17 words/s
[2021-07-16 04:57:09] Ep. 1 : Up. 48000 : Sen. 3,372,198 : Cost 3.26814723 : Time 824.39s : 2438.16 words/s
[2021-07-16 05:10:53] Ep. 1 : Up. 50000 : Sen. 3,513,722 : Cost 3.22496486 : Time 824.35s : 2434.34 words/s
[2021-07-16 05:24:44] Ep. 1 : Up. 52000 : Sen. 3,652,826 : Cost 3.20088148 : Time 830.32s : 2426.83 words/s
[2021-07-16 05:38:27] Ep. 1 : Up. 54000 : Sen. 3,793,266 : Cost 3.19381070 : Time 823.26s : 2464.03 words/s
[2021-07-16 05:52:20] Ep. 1 : Up. 56000 : Sen. 3,933,570 : Cost 3.17313480 : Time 832.81s : 2421.73 words/s
[2021-07-16 06:06:08] Ep. 1 : Up. 58000 : Sen. 4,071,882 : Cost 3.14957476 : Time 828.11s : 2446.76 words/s
[2021-07-16 06:20:03] Ep. 1 : Up. 60000 : Sen. 4,209,619 : Cost 3.11990714 : Time 835.41s : 2429.04 words/s
[2021-07-16 06:20:03] Saving model weights and runtime parameters to model/model.npz.orig.npz
[2021-07-16 06:20:16] Saving model weights and runtime parameters to model/model.iter60000.npz
[2021-07-16 06:20:26] Saving model weights and runtime parameters to model/model.npz
[2021-07-16 06:20:38] Saving Adam parameters to model/model.npz.optimizer.npz
[2021-07-16 06:21:30] [valid] Ep. 1 : Up. 60000 : cross-entropy : 56.9303 : new best
[2021-07-16 06:21:54] [valid] Ep. 1 : Up. 60000 : perplexity : 9.72729 : new best
[2021-07-16 06:22:48] [valid] Ep. 1 : Up. 60000 : translation : 28.08 : new best
[2021-07-16 06:36:31] Ep. 1 : Up. 62000 : Sen. 4,351,451 : Cost 3.09047627 : Time 988.00s : 2025.96 words/s
[2021-07-16 06:50:13] Ep. 1 : Up. 64000 : Sen. 4,493,536 : Cost 3.07874060 : Time 822.03s : 2437.42 words/s
[2021-07-16 07:03:40] Ep. 1 : Up. 66000 : Sen. 4,637,968 : Cost 3.06804967 : Time 807.32s : 2470.76 words/s
[2021-07-16 07:17:28] Ep. 1 : Up. 68000 : Sen. 4,775,827 : Cost 3.05924678 : Time 827.17s : 2443.62 words/s
[2021-07-16 07:31:18] Ep. 1 : Up. 70000 : Sen. 4,915,299 : Cost 3.04785180 : Time 829.90s : 2443.11 words/s
[2021-07-16 07:44:53] Ep. 1 : Up. 72000 : Sen. 5,057,453 : Cost 3.04158092 : Time 815.38s : 2457.77 words/s
[2021-07-16 07:58:36] Ep. 1 : Up. 74000 : Sen. 5,198,101 : Cost 3.01656175 : Time 823.49s : 2445.83 words/s
[2021-07-16 08:12:18] Ep. 1 : Up. 76000 : Sen. 5,339,386 : Cost 2.99259973 : Time 821.15s : 2452.71 words/s
[2021-07-16 08:25:59] Ep. 1 : Up. 78000 : Sen. 5,480,644 : Cost 2.99327040 : Time 821.60s : 2438.89 words/s
[2021-07-16 08:39:56] Ep. 1 : Up. 80000 : Sen. 5,621,598 : Cost 2.97224736 : Time 836.54s : 2421.35 words/s
[2021-07-16 08:39:56] Saving model weights and runtime parameters to model/model.npz.orig.npz
[2021-07-16 08:40:09] Saving model weights and runtime parameters to model/model.iter80000.npz
[2021-07-16 08:40:19] Saving model weights and runtime parameters to model/model.npz
[2021-07-16 08:40:31] Saving Adam parameters to model/model.npz.optimizer.npz
[2021-07-16 08:41:24] [valid] Ep. 1 : Up. 80000 : cross-entropy : 52.5141 : new best
[2021-07-16 08:41:48] [valid] Ep. 1 : Up. 80000 : perplexity : 8.15366 : new best
[2021-07-16 08:42:43] [valid] Ep. 1 : Up. 80000 : translation : 29.74 : new best
[2021-07-16 08:56:52] Ep. 1 : Up. 82000 : Sen. 5,760,030 : Cost 2.98988581 : Time 1015.89s : 1988.86 words/s
[2021-07-16 09:10:43] Ep. 1 : Up. 84000 : Sen. 5,901,704 : Cost 2.96153259 : Time 831.60s : 2419.60 words/s
[2021-07-16 09:24:31] Ep. 1 : Up. 86000 : Sen. 6,040,124 : Cost 2.95955634 : Time 827.86s : 2444.85 words/s
[2021-07-16 09:38:08] Ep. 1 : Up. 88000 : Sen. 6,182,677 : Cost 2.94756746 : Time 816.55s : 2447.18 words/s
[2021-07-16 09:51:56] Ep. 1 : Up. 90000 : Sen. 6,322,067 : Cost 2.94076109 : Time 828.61s : 2458.90 words/s
[2021-07-16 10:05:43] Ep. 1 : Up. 92000 : Sen. 6,462,623 : Cost 2.92255950 : Time 826.58s : 2435.69 words/s
[2021-07-16 10:19:14] Ep. 1 : Up. 94000 : Sen. 6,606,537 : Cost 2.90472412 : Time 811.67s : 2464.81 words/s
[2021-07-16 10:33:01] Ep. 1 : Up. 96000 : Sen. 6,745,859 : Cost 2.91221237 : Time 826.63s : 2441.39 words/s
[2021-07-16 10:46:52] Ep. 1 : Up. 98000 : Sen. 6,884,510 : Cost 2.90941739 : Time 830.97s : 2428.40 words/s
[2021-07-16 11:00:53] Ep. 1 : Up. 100000 : Sen. 7,022,399 : Cost 2.90232229 : Time 840.99s : 2403.52 words/s
[2021-07-16 11:00:53] Saving model weights and runtime parameters to model/model.npz.orig.npz
[2021-07-16 11:01:06] Saving model weights and runtime parameters to model/model.iter100000.npz
[2021-07-16 11:01:16] Saving model weights and runtime parameters to model/model.npz
[2021-07-16 11:01:28] Saving Adam parameters to model/model.npz.optimizer.npz
[2021-07-16 11:02:19] [valid] Ep. 1 : Up. 100000 : cross-entropy : 49.7835 : new best
[2021-07-16 11:02:44] [valid] Ep. 1 : Up. 100000 : perplexity : 7.31078 : new best
[2021-07-16 11:03:38] [valid] Ep. 1 : Up. 100000 : translation : 30.9 : new best
[...]
[2021-07-16 13:24:02] [valid] Ep. 1 : Up. 120000 : cross-entropy : 47.7031 : new best
[2021-07-16 13:24:27] [valid] Ep. 1 : Up. 120000 : perplexity : 6.72759 : new best
[2021-07-16 13:25:21] [valid] Ep. 1 : Up. 120000 : translation : 31.47 : new best
[2021-07-16 13:39:20] Ep. 1 : Up. 122000 : Sen. 8,568,638 : Cost 2.83089995 : Time 1003.86s : 2007.51 words/s

Streaming training:

paste data/train.bpe.de data/train.bpe.en | shuf | xz -T0 > data/train.bpe.shuffled.xz
xzcat data/train.bpe.shuffled.xz | $marian/build/marian \
        --sync-sgd \
        -T . \
        --model model/model.npz \
        --devices 0 \
        -t stdin --tsv --tsv-fields 2 --no-shuffle \
        --vocabs data/train.bpe.de.json data/train.bpe.en.json \
        --dim-vocabs 50000 50000 \
        --mini-batch-fit -w 3000 \
        --type s2s \
        --best-deep --dec-cell lstm --enc-cell lstm \
        --layer-normalization --dropout-rnn 0.2 --dropout-src 0.1 --dropout-trg 0.1 \
        --learn-rate 0.0001 \
        --after-epochs 0 \
        --early-stopping 5 \
        --valid-freq 20000 --save-freq 20000 --disp-freq 2000 \
        --valid-mini-batch 8 \
        --valid-sets data/dev.bpe \
        --valid-metrics cross-entropy perplexity translation \
        --valid-translation-output model/dev.out \
        --valid-script-path ./score-dev.sh \
        --seed 1111 --exponential-smoothing \
        --normalize=1 --beam-size=12 --quiet-translation \
        --log model/train.log --valid-log model/valid.log

Log:

[2021-07-17 11:37:54] Ep. 1 : Up. 2000 : Sen. 60,200 : Cost 8.51329231 : Time 1291.57s : 1910.59 words/s
[2021-07-17 11:56:08] Ep. 1 : Up. 4000 : Sen. 133,104 : Cost 7.83979893 : Time 1094.93s : 2128.58 words/s
[2021-07-17 12:11:58] Ep. 1 : Up. 6000 : Sen. 224,154 : Cost 7.46361542 : Time 949.08s : 2457.30 words/s
[2021-07-17 12:25:47] Ep. 1 : Up. 8000 : Sen. 332,432 : Cost 7.21320629 : Time 829.89s : 2616.60 words/s
[2021-07-17 12:38:16] Ep. 1 : Up. 10000 : Sen. 471,101 : Cost 6.92912674 : Time 748.79s : 2724.82 words/s
[2021-07-17 12:48:33] Ep. 1 : Up. 12000 : Sen. 682,459 : Cost 6.38286877 : Time 616.71s : 3015.63 words/s
[2021-07-17 12:54:33] Ep. 1 : Up. 14000 : Sen. 965,843 : Cost 5.20174599 : Time 359.59s : 3007.81 words/s
[2021-07-17 13:13:46] Ep. 1 : Up. 16000 : Sen. 1,052,108 : Cost 6.59608603 : Time 1153.11s : 1942.28 words/s
[2021-07-17 13:32:37] Ep. 1 : Up. 18000 : Sen. 1,123,604 : Cost 6.29688978 : Time 1131.22s : 2070.52 words/s
[2021-07-17 13:49:01] Ep. 1 : Up. 20000 : Sen. 1,212,903 : Cost 5.85167885 : Time 984.07s : 2382.72 words/s
[2021-07-17 13:49:01] Saving model weights and runtime parameters to model/model.npz.orig.npz
[2021-07-17 13:49:15] Saving model weights and runtime parameters to model/model.iter20000.npz
[2021-07-17 13:49:25] Saving model weights and runtime parameters to model/model.npz
[2021-07-17 13:49:39] Saving Adam parameters to model/model.npz.optimizer.npz
[2021-07-17 13:50:36] [valid] Ep. 1 : Up. 20000 : cross-entropy : 130.97 : new best
[2021-07-17 13:51:02] [valid] Ep. 1 : Up. 20000 : perplexity : 187.458 : new best
[2021-07-17 13:51:53] [valid] Ep. 1 : Up. 20000 : translation : 2.35 : new best
[2021-07-17 14:06:01] Ep. 1 : Up. 22000 : Sen. 1,316,283 : Cost 5.25472784 : Time 1019.73s : 2097.49 words/s
[2021-07-17 14:19:04] Ep. 1 : Up. 24000 : Sen. 1,454,670 : Cost 4.57147598 : Time 783.38s : 2713.49 words/s
[2021-07-17 14:29:40] Ep. 1 : Up. 26000 : Sen. 1,650,105 : Cost 3.89087892 : Time 636.06s : 2924.37 words/s
[2021-07-17 14:36:20] Ep. 1 : Up. 28000 : Sen. 1,933,030 : Cost 3.42473960 : Time 399.72s : 3040.25 words/s
[2021-07-17 14:53:49] Ep. 1 : Up. 30000 : Sen. 2,044,344 : Cost 4.49112606 : Time 1048.87s : 1923.33 words/s
[2021-07-17 15:13:00] Ep. 1 : Up. 32000 : Sen. 2,113,937 : Cost 4.20546579 : Time 1151.71s : 2037.06 words/s
[2021-07-17 15:30:15] Ep. 1 : Up. 34000 : Sen. 2,201,896 : Cost 3.90361929 : Time 1034.13s : 2296.19 words/s
[2021-07-17 15:44:48] Ep. 1 : Up. 36000 : Sen. 2,302,145 : Cost 3.62893677 : Time 873.05s : 2454.60 words/s
[2021-07-17 15:58:27] Ep. 1 : Up. 38000 : Sen. 2,437,814 : Cost 3.43059921 : Time 819.08s : 2650.84 words/s
[2021-07-17 16:09:24] Ep. 1 : Up. 40000 : Sen. 2,617,206 : Cost 3.10532475 : Time 657.75s : 2813.20 words/s
[2021-07-17 16:09:24] Saving model weights and runtime parameters to model/model.npz.orig.npz
[2021-07-17 16:09:39] Saving model weights and runtime parameters to model/model.iter40000.npz
[2021-07-17 16:09:49] Saving model weights and runtime parameters to model/model.npz
[2021-07-17 16:10:04] Saving Adam parameters to model/model.npz.optimizer.npz
[2021-07-17 16:11:02] [valid] Ep. 1 : Up. 40000 : cross-entropy : 92.9542 : new best
[2021-07-17 16:11:30] [valid] Ep. 1 : Up. 40000 : perplexity : 41.0361 : new best
[2021-07-17 16:12:07] [valid] Ep. 1 : Up. 40000 : translation : 13.26 : new best
[2021-07-17 16:19:37] Ep. 1 : Up. 42000 : Sen. 2,899,031 : Cost 2.94160414 : Time 612.87s : 2197.98 words/s
[2021-07-17 16:35:39] Ep. 1 : Up. 44000 : Sen. 3,036,592 : Cost 3.71297169 : Time 961.60s : 1891.65 words/s
[2021-07-17 16:55:31] Ep. 1 : Up. 46000 : Sen. 3,105,392 : Cost 3.64552140 : Time 1192.41s : 1996.62 words/s
[2021-07-17 17:12:45] Ep. 1 : Up. 48000 : Sen. 3,190,826 : Cost 3.45583391 : Time 1033.91s : 2288.83 words/s
[2021-07-17 17:27:17] Ep. 1 : Up. 50000 : Sen. 3,288,961 : Cost 3.30208683 : Time 872.18s : 2479.78 words/s
[2021-07-17 17:41:03] Ep. 1 : Up. 52000 : Sen. 3,420,950 : Cost 3.14162445 : Time 825.95s : 2650.84 words/s
[2021-07-17 17:51:57] Ep. 1 : Up. 54000 : Sen. 3,584,691 : Cost 2.92383456 : Time 653.80s : 2784.28 words/s
[2021-07-17 18:00:00] Ep. 1 : Up. 56000 : Sen. 3,865,300 : Cost 2.76975894 : Time 483.22s : 3113.72 words/s
[2021-07-17 18:13:39] Ep. 1 : Up. 58000 : Sen. 4,029,428 : Cost 3.35970497 : Time 818.92s : 1995.91 words/s
[2021-07-17 18:33:37] Ep. 1 : Up. 60000 : Sen. 4,097,043 : Cost 3.42396569 : Time 1197.45s : 2008.07 words/s
[2021-07-17 18:33:37] Saving model weights and runtime parameters to model/model.npz.orig.npz
[2021-07-17 18:33:51] Saving model weights and runtime parameters to model/model.iter60000.npz
[2021-07-17 18:34:01] Saving model weights and runtime parameters to model/model.npz
[2021-07-17 18:34:15] Saving Adam parameters to model/model.npz.optimizer.npz
[2021-07-17 18:35:09] [valid] Ep. 1 : Up. 60000 : cross-entropy : 71.799 : new best
[2021-07-17 18:35:35] [valid] Ep. 1 : Up. 60000 : perplexity : 17.621 : new best
[2021-07-17 18:36:18] [valid] Ep. 1 : Up. 60000 : translation : 13.76 : new best
[2021-07-17 18:53:37] Ep. 1 : Up. 62000 : Sen. 4,179,759 : Cost 3.28512287 : Time 1200.06s : 1957.98 words/s
[2021-07-17 19:07:59] Ep. 1 : Up. 64000 : Sen. 4,274,570 : Cost 3.13071728 : Time 862.47s : 2492.18 words/s
[2021-07-17 19:21:40] Ep. 1 : Up. 66000 : Sen. 4,404,758 : Cost 2.99354339 : Time 820.92s : 2732.62 words/s
[2021-07-17 19:32:25] Ep. 1 : Up. 68000 : Sen. 4,555,581 : Cost 2.84862542 : Time 644.52s : 2774.15 words/s
[2021-07-17 19:40:58] Ep. 1 : Up. 70000 : Sen. 4,832,531 : Cost 2.59583044 : Time 513.05s : 3211.68 words/s
[2021-07-17 19:52:32] Ep. 1 : Up. 72000 : Sen. 5,022,932 : Cost 3.22295165 : Time 694.49s : 2137.28 words/s
[2021-07-17 20:12:06] Ep. 1 : Up. 74000 : Sen. 5,088,745 : Cost 3.30707741 : Time 1173.26s : 2051.18 words/s
[2021-07-17 20:29:25] Ep. 1 : Up. 76000 : Sen. 5,168,787 : Cost 3.18555689 : Time 1039.06s : 2246.12 words/s
[2021-07-17 20:44:04] Ep. 1 : Up. 78000 : Sen. 5,263,329 : Cost 3.03532243 : Time 879.45s : 2511.69 words/s
[2021-07-17 20:57:53] Ep. 1 : Up. 80000 : Sen. 5,387,618 : Cost 2.94611120 : Time 829.31s : 2681.41 words/s
[2021-07-17 20:57:53] Saving model weights and runtime parameters to model/model.npz.orig.npz
[2021-07-17 20:58:06] Saving model weights and runtime parameters to model/model.iter80000.npz
[2021-07-17 20:58:16] Saving model weights and runtime parameters to model/model.npz
[2021-07-17 20:58:30] Saving Adam parameters to model/model.npz.optimizer.npz
[2021-07-17 20:59:21] [valid] Ep. 1 : Up. 80000 : cross-entropy : 67.9483 : new best
[2021-07-17 20:59:45] [valid] Ep. 1 : Up. 80000 : perplexity : 15.1079 : new best
[2021-07-17 21:00:26] [valid] Ep. 1 : Up. 80000 : translation : 20.73 : new best
[2021-07-17 21:11:31] Ep. 1 : Up. 82000 : Sen. 5,529,093 : Cost 2.79048038 : Time 817.42s : 2193.04 words/s
[2021-07-17 21:20:49] Ep. 1 : Up. 84000 : Sen. 5,797,526 : Cost 2.56940508 : Time 557.78s : 3182.58 words/s
[2021-07-17 21:30:44] Ep. 1 : Up. 86000 : Sen. 6,016,044 : Cost 3.01187825 : Time 595.39s : 2238.55 words/s
[2021-07-17 21:50:53] Ep. 1 : Up. 88000 : Sen. 6,079,988 : Cost 3.23542666 : Time 1209.09s : 1992.57 words/s
[2021-07-17 22:08:34] Ep. 1 : Up. 90000 : Sen. 6,157,908 : Cost 3.11294174 : Time 1061.10s : 2199.10 words/s
[2021-07-17 22:23:36] Ep. 1 : Up. 92000 : Sen. 6,251,509 : Cost 2.99662519 : Time 901.76s : 2493.90 words/s
[2021-07-17 22:37:27] Ep. 1 : Up. 94000 : Sen. 6,370,705 : Cost 2.86904597 : Time 831.21s : 2652.73 words/s
[2021-07-17 22:48:55] Ep. 1 : Up. 96000 : Sen. 6,511,680 : Cost 2.74057293 : Time 687.96s : 2721.08 words/s
[2021-07-17 22:58:32] Ep. 1 : Up. 98000 : Sen. 6,763,180 : Cost 2.56433797 : Time 576.87s : 3148.54 words/s
[2021-07-17 23:06:35] Ep. 1 : Up. 100000 : Sen. 7,009,268 : Cost 2.82691193 : Time 483.22s : 2420.35 words/s
[2021-07-17 23:06:35] Saving model weights and runtime parameters to model/model.npz.orig.npz
[2021-07-17 23:06:48] Saving model weights and runtime parameters to model/model.iter100000.npz
[2021-07-17 23:06:58] Saving model weights and runtime parameters to model/model.npz
[2021-07-17 23:07:11] Saving Adam parameters to model/model.npz.optimizer.npz
[2021-07-17 23:08:01] [valid] Ep. 1 : Up. 100000 : cross-entropy : 83.3228 : stalled 1 times (last best: 67.9483)
[2021-07-17 23:08:26] [valid] Ep. 1 : Up. 100000 : perplexity : 27.9267 : stalled 1 times (last best: 15.1079)
[2021-07-17 23:08:57] [valid] Ep. 1 : Up. 100000 : translation : 7.02 : stalled 1 times (last best: 20.73)
[...]
[2021-07-18 01:39:45] [valid] Ep. 1 : Up. 120000 : cross-entropy : 58.7706 : new best
[2021-07-18 01:40:09] [valid] Ep. 1 : Up. 120000 : perplexity : 10.4696 : new best
[2021-07-18 01:40:50] [valid] Ep. 1 : Up. 120000 : translation : 19.77 : stalled 2 times (last best: 20.73)
[...]
[2021-07-18 03:54:14] [valid] Ep. 1 : Up. 140000 : cross-entropy : 67.5061 : stalled 1 times (last best: 58.7706)
[2021-07-18 03:54:39] [valid] Ep. 1 : Up. 140000 : perplexity : 14.8433 : stalled 1 times (last best: 10.4696)
[2021-07-18 03:55:14] [valid] Ep. 1 : Up. 140000 : translation : 14.64 : stalled 3 times (last best: 20.73)
[...]
[2021-07-18 06:14:12] [valid] Ep. 1 : Up. 160000 : cross-entropy : 57.3129 : new best
[2021-07-18 06:14:36] [valid] Ep. 1 : Up. 160000 : perplexity : 9.87716 : new best
[2021-07-18 06:15:17] [valid] Ep. 1 : Up. 160000 : translation : 18.45 : stalled 4 times (last best: 20.73)
[...]
[2021-07-18 08:36:06] [valid] Ep. 1 : Up. 180000 : cross-entropy : 57.3422 : stalled 1 times (last best: 57.3129)
[2021-07-18 08:36:30] [valid] Ep. 1 : Up. 180000 : perplexity : 9.88872 : stalled 1 times (last best: 9.87716)
[2021-07-18 08:37:11] [valid] Ep. 1 : Up. 180000 : translation : 21.33 : new best
[...]
[2021-07-18 10:44:17] [valid] Ep. 1 : Up. 200000 : cross-entropy : 66.3693 : stalled 2 times (last best: 57.3129)
[2021-07-18 10:44:41] [valid] Ep. 1 : Up. 200000 : perplexity : 14.1841 : stalled 2 times (last best: 9.87716)
[2021-07-18 10:45:16] [valid] Ep. 1 : Up. 200000 : translation : 10.94 : stalled 1 times (last best: 21.33)

BLEU:

Sat Jul 17 13:51:53 EDT 2021 model/model.iter20000.npz: BLEU = 2.35, 24.2/4.8/1.0/0.3 (BP=1.000, ratio=1.193, hyp_len=53543, ref_len=44898)
Sat Jul 17 16:12:07 EDT 2021 model/model.iter40000.npz: BLEU = 13.26, 63.0/31.6/17.7/9.9 (BP=0.546, ratio=0.623, hyp_len=27967, ref_len=44898)
Sat Jul 17 18:36:18 EDT 2021 model/model.iter60000.npz: BLEU = 13.76, 64.6/34.6/21.2/13.0 (BP=0.492, ratio=0.585, hyp_len=26264, ref_len=44898)
Sat Jul 17 21:00:26 EDT 2021 model/model.iter80000.npz: BLEU = 20.73, 65.2/36.1/22.5/14.1 (BP=0.705, ratio=0.741, hyp_len=33261, ref_len=44898)
Sat Jul 17 23:08:57 EDT 2021 model/model.iter100000.npz: BLEU = 7.02, 68.0/34.6/20.6/12.1 (BP=0.253, ratio=0.421, hyp_len=18922, ref_len=44898)
Sun Jul 18 01:40:50 EDT 2021 model/model.iter120000.npz: BLEU = 19.77, 67.2/38.1/24.4/15.9 (BP=0.626, ratio=0.681, hyp_len=30587, ref_len=44898)
Sun Jul 18 03:55:14 EDT 2021 model/model.iter140000.npz: BLEU = 14.64, 68.1/36.9/22.8/14.2 (BP=0.487, ratio=0.582, hyp_len=26118, ref_len=44898)
Sun Jul 18 06:15:16 EDT 2021 model/model.iter160000.npz: BLEU = 18.45, 68.6/39.6/25.6/16.7 (BP=0.562, ratio=0.634, hyp_len=28483, ref_len=44898)
Sun Jul 18 08:37:11 EDT 2021 model/model.iter180000.npz: BLEU = 21.33, 68.3/39.4/25.4/16.6 (BP=0.653, ratio=0.701, hyp_len=31484, ref_len=44898)
Sun Jul 18 10:45:16 EDT 2021 model/model.iter200000.npz: BLEU = 10.94, 69.4/37.4/22.9/14.1 (BP=0.361, ratio=0.496, hyp_len=22251, ref_len=44898)
Sun Jul 18 13:15:07 EDT 2021 model/model.iter220000.npz: BLEU = 24.16, 68.7/40.8/27.0/18.1 (BP=0.706, ratio=0.742, hyp_len=33315, ref_len=44898)
Sun Jul 18 15:28:50 EDT 2021 model/model.iter240000.npz: BLEU = 16.21, 69.6/38.7/24.5/15.6 (BP=0.509, ratio=0.597, hyp_len=26811, ref_len=44898)
Sun Jul 18 17:50:39 EDT 2021 model/model.iter260000.npz: BLEU = 21.13, 69.5/40.9/26.9/17.9 (BP=0.618, ratio=0.675, hyp_len=30315, ref_len=44898)
Sun Jul 18 20:13:27 EDT 2021 model/model.iter280000.npz: BLEU = 23.04, 68.8/40.5/26.7/17.8 (BP=0.679, ratio=0.721, hyp_len=32377, ref_len=44898)
Sun Jul 18 22:23:20 EDT 2021 model/model.iter300000.npz: BLEU = 11.59, 70.0/38.3/24.3/15.3 (BP=0.367, ratio=0.499, hyp_len=22420, ref_len=44898)
Mon Jul 19 00:52:03 EDT 2021 model/model.iter320000.npz: BLEU = 25.09, 69.1/41.8/28.2/19.2 (BP=0.710, ratio=0.745, hyp_len=33434, ref_len=44898)
Mon Jul 19 03:04:46 EDT 2021 model/model.iter340000.npz: BLEU = 15.99, 69.9/39.2/24.7/15.8 (BP=0.497, ratio=0.589, hyp_len=26436, ref_len=44898)
Mon Jul 19 05:27:40 EDT 2021 model/model.iter360000.npz: BLEU = 22.41, 69.6/41.8/28.1/19.2 (BP=0.633, ratio=0.686, hyp_len=30810, ref_len=44898)
Mon Jul 19 07:47:37 EDT 2021 model/model.iter380000.npz: BLEU = 23.87, 69.1/41.2/27.5/18.6 (BP=0.688, ratio=0.728, hyp_len=32668, ref_len=44898)
Mon Jul 19 09:58:44 EDT 2021 model/model.iter400000.npz: BLEU = 14.30, 70.2/39.7/25.2/16.0 (BP=0.440, ratio=0.549, hyp_len=24643, ref_len=44898)
Mon Jul 19 12:27:54 EDT 2021 model/model.iter420000.npz: BLEU = 26.04, 69.2/42.1/28.4/19.4 (BP=0.732, ratio=0.762, hyp_len=34211, ref_len=44898)
Mon Jul 19 14:41:42 EDT 2021 model/model.iter440000.npz: BLEU = 15.81, 70.5/40.0/25.5/16.5 (BP=0.479, ratio=0.576, hyp_len=25862, ref_len=44898)
Mon Jul 19 17:08:09 EDT 2021 model/model.iter460000.npz: BLEU = 22.88, 70.0/42.2/28.3/19.1 (BP=0.643, ratio=0.694, hyp_len=31155, ref_len=44898)
Mon Jul 19 19:29:06 EDT 2021 model/model.iter480000.npz: BLEU = 23.85, 69.8/42.0/28.1/19.0 (BP=0.675, ratio=0.717, hyp_len=32214, ref_len=44898)

Context

  • Marian version: v1.10.0 6f6d484 2021-02-06 15:35:16 -0800
  • CMake command: ~/statmt/project/cmake-3.12.2/bin/cmake ..

Hi,
Try --shuffle batches instead of --no-shuffle. --no-shuffle combined with the default setting for --maxi-batch-sort trg probably results in your batches being sorted by increasing length inside each maxi-batch chunk; I think that would explain the bouncing. With --shuffle batches the data is still read in a streaming fashion, but the accumulated mini-batches get shuffled within a maxi-batch. If you want to keep the true input order you would need --no-shuffle (or --shuffle none) together with --maxi-batch-sort none, but that can result in badly padded batches.
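To make the mechanism concrete, here is a small Python sketch (not Marian code; a toy model under the assumption that a maxi-batch is read as a chunk, sorted by length, split into mini-batches, and the mini-batch order is then either kept or shuffled). With the order kept, the trainer sees runs of all-short then all-long batches inside every chunk, which is the correlated input that can make the loss bounce; with the order shuffled, the same batches arrive mixed.

```python
import random

def stream_batches(sentence_lengths, maxi_batch=8, mini_batch=2,
                   sort_by_length=True, shuffle_batches=True, seed=0):
    """Toy model of the maxi-batch pipeline described above: read a chunk
    of `maxi_batch` sentences from the stream, sort the chunk by length
    (as --maxi-batch-sort trg would), split it into mini-batches, and
    optionally shuffle the mini-batch order (as --shuffle batches would)."""
    rng = random.Random(seed)
    out = []
    for i in range(0, len(sentence_lengths), maxi_batch):
        chunk = sentence_lengths[i:i + maxi_batch]
        if sort_by_length:
            chunk = sorted(chunk)
        batches = [chunk[j:j + mini_batch]
                   for j in range(0, len(chunk), mini_batch)]
        if shuffle_batches:
            rng.shuffle(batches)
        out.extend(batches)
    return out

# Sentence lengths arriving on the stream in pre-shuffled corpus order.
lengths = [5, 30, 7, 22, 3, 18, 9, 40, 6, 25, 4, 35, 8, 15, 2, 28]

# No batch shuffling: within each chunk, batch lengths climb monotonically.
unshuffled = stream_batches(lengths, shuffle_batches=False)
# --shuffle batches: identical batches, mixed order within each chunk.
shuffled = stream_batches(lengths, shuffle_batches=True)
print(unshuffled)
print(shuffled)
```

In the streaming invocation above, that would amount to replacing --no-shuffle with --shuffle batches and leaving everything else as is.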

Great - this works now!