grammarly / gector

Official implementation of the papers "GECToR – Grammatical Error Correction: Tag, Not Rewrite" (BEA-20) and "Text Simplification by Tagging" (BEA-21)

Error when training model

tiensu opened this issue · comments

I want to train the model myself. This is what I did:

  1. Preprocess data
    $ python utils/preprocess_lang8.py --source data/lang8_raw/lang-8-20111007-L1-v2.dat --output_dir data/lang8_processed/ --processes 1
  2. Train
    I only tested with a small dataset and 1 epoch (a sketch of how I count the examples for --dataset_len is at the end of this issue):
    $ python train.py --corpora_dir data/lang8_processed/ --output_weights_path model/own_model/ --dataset_len 601 --n_epochs 1
    This is the error I got:

18/19 [===========================>..] - ETA: 0s - loss: 1.5961 - labels_probs_loss: 1.4159 - detect_probs_loss: 0.1801 - labels_probs_sparse_categorical_accuracy: 0.0000e+00 - detect_probs_sparse_categorical_accuracy: 0.4525
Traceback (most recent call last):
File "train.py", line 123, in
main(args)
File "train.py", line 78, in main
train(args.corpora_dir, args.output_weights_path, args.vocab_dir,
File "train.py", line 72, in train
gec.model.fit(train_set, epochs=n_epochs, batch_size=batch_size, validation_data=dev_set,
File "/home/sunt/anaconda3/envs/jp_grm_crt/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1183, in fit
tmp_logs = self.train_function(iterator)
File "/home/sunt/anaconda3/envs/jp_grm_crt/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 889, in call
result = self._call(*args, **kwds)
File "/home/sunt/anaconda3/envs/jp_grm_crt/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 917, in _call
return self._stateless_fn(*args, **kwds) # pylint: disable=not-callable
File "/home/sunt/anaconda3/envs/jp_grm_crt/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3023, in call
return graph_function._call_flat(
File "/home/sunt/anaconda3/envs/jp_grm_crt/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 1960, in _call_flat
return self._build_call_outputs(self._inference_function.call(
File "/home/sunt/anaconda3/envs/jp_grm_crt/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 591, in call
outputs = execute.execute(
File "/home/sunt/anaconda3/envs/jp_grm_crt/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.FailedPreconditionError: 2 root error(s) found.
(0) Failed precondition: Input dataset was expected to contain 601 elements but contained at least 602 elements.
[[node IteratorGetNext (defined at train.py:72) ]]
(1) Failed precondition: Input dataset was expected to contain 601 elements but contained at least 602 elements.
[[node IteratorGetNext (defined at train.py:72) ]]
[[IteratorGetNext/_10]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_23931]

Function call stack:
train_function -> train_function
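
From the message, it looks like train.py pins the input pipeline to exactly the --dataset_len I passed (601), while my processed corpus apparently contains at least 602 examples. Below is a minimal sketch of what I think is happening, assuming the pipeline asserts the cardinality with tf.data.experimental.assert_cardinality (I have not verified that this is what train.py actually does internally):

# Hypothetical reproduction, not copied from train.py: my assumption is that the
# training pipeline asserts the dataset cardinality taken from --dataset_len.
import tensorflow as tf

ds = tf.data.Dataset.range(602)  # pretend the processed corpus really has 602 examples
ds = ds.apply(tf.data.experimental.assert_cardinality(601))  # but 601 was declared

try:
    for _ in ds:  # the check fires while iterating, just as it does inside model.fit()
        pass
except tf.errors.FailedPreconditionError as err:
    # Prints: "Input dataset was expected to contain 601 elements
    # but contained at least 602 elements."
    print(err)

If that is the cause, it would mean my --dataset_len simply has to match the real size of the processed data.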


I tried to solve it but couldn't figure it out. Please help me with a solution to get past this error.
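
In case it helps, this is the sketch I mentioned above for counting the examples to pass as --dataset_len instead of hard-coding 601. The glob pattern is only a guess at the layout of data/lang8_processed/; I do not know the exact output format of preprocess_lang8.py, so treat the file handling as a placeholder:

# Hypothetical helper: count non-empty lines across the processed files and use
# that number as --dataset_len. The assumption that one line equals one training
# example is mine, not something I verified in preprocess_lang8.py.
import glob
import os

def count_examples(processed_dir: str) -> int:
    total = 0
    for path in glob.glob(os.path.join(processed_dir, "*")):
        if not os.path.isfile(path):
            continue  # skip sub-directories
        with open(path, encoding="utf-8") as f:
            total += sum(1 for line in f if line.strip())
    return total

print(count_examples("data/lang8_processed"))  # pass this value to --dataset_len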
Thank you very much!