
TODO: LAMB optimizer
TODO: short sentences and other hard cases (e.g. "У С Т А Н О В И Л:", "Критика")

tfstbd

Sentence & token boundary detector implemented with TensorFlow. This is the model development part of the project.

Training a custom model

  1. Obtain a dataset with already split sentences and tokens in CoNLL-U format.

Universal Dependencies is a good choice, but maybe you have more data? Copy your *.conllu files (or just the train part) to the "data/prepare/" folder.
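If you want a quick sanity check of the files before copying them, something like the sketch below works. It assumes the third-party `conllu` package (pip install conllu), which is not part of tfstbd.

```python
# Count sentences and tokens in each *.conllu file under data/prepare/.
# Assumes the third-party `conllu` package (pip install conllu); not part of tfstbd.
from pathlib import Path

from conllu import parse_incr

for path in sorted(Path('data/prepare').glob('*.conllu')):
    sentences, tokens = 0, 0
    with open(path, encoding='utf-8') as f:
        for sentence in parse_incr(f):
            sentences += 1
            tokens += len(sentence)
    print(f'{path.name}: {sentences} sentences, {tokens} tokens')
```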

  2. Convert the *.conllu files (with auto-augmentation) into the trainer-accepted dataset format.
tfkstbd-dataset data/prepare/ data/ready/
  3. Prepare a configuration file with hyperparameters. Start from config/default.json in this module's repository.
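There is no fixed recipe here; a minimal sketch for tweaking a copy of the default config with the standard json module is shown below. The hyperparameter names in it are placeholders, not necessarily the keys tfstbd uses — check config/default.json for the real ones.

```python
import json

# Load the default hyperparameters shipped with the module.
with open('config/default.json') as f:
    config = json.load(f)

# Placeholder keys for illustration only -- the real hyperparameter names
# are whatever config/default.json defines.
config['batch_size'] = 256
config['learning_rate'] = 1e-3

# Save a custom configuration to pass to the trainer instead of the default one.
with open('config/my.json', 'w') as f:
    json.dump(config, f, indent=2)
```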

  4. Extract a vocabulary of the most frequent non-alphanumeric ngrams from the train dataset. This will include the "<start" and "end>" ngrams too.

tfkstbd-vocab data/ready/ config/default.json data/vocabulary.pkl
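To get a feeling for what a "most frequent non-alphanumeric ngram" vocabulary contains, here is a rough illustration with collections.Counter. It is not the tfkstbd-vocab implementation, just a sketch of the idea on an arbitrary text file.

```python
# Rough illustration of counting frequent non-alphanumeric character ngrams.
# This is NOT the tfkstbd-vocab implementation, only a sketch of the idea.
from collections import Counter

def char_ngrams(text, min_n=2, max_n=4):
    for n in range(min_n, max_n + 1):
        for i in range(len(text) - n + 1):
            yield text[i:i + n]

counter = Counter()
with open('some_text_document.txt', encoding='utf-8') as f:
    for line in f:
        for ngram in char_ngrams(line.rstrip('\n')):
            if not ngram.isalnum():  # keep ngrams with at least one non-alphanumeric char
                counter[ngram] += 1

print(counter.most_common(20))
```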
  5. Run training.

The first run will only compute the starting metrics, so you should repeat this step multiple times.

tfkstbd-train data/ready/ data/ready/vocabulary.pkl config/default.json model/

Optionally use --eval_data data/ready_eval/ to evaluate the model and --export_path export/ to export it. You can also provide the --threads_count NN flag if you have a lot (>8) of CPU cores.
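Since this step has to be repeated, it may be convenient to drive the CLI from a small script. A minimal sketch using subprocess follows; the number of rounds is an arbitrary choice.

```python
import subprocess

# Re-run the training CLI several times; the first run only computes
# the starting metrics. Five rounds is an arbitrary choice.
for i in range(5):
    print(f'Training round {i + 1}')
    subprocess.run(
        ['tfkstbd-train', 'data/ready/', 'data/ready/vocabulary.pkl',
         'config/default.json', 'model/'],
        check=True,
    )
```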

  6. Test your model on a plain text file.
tfkstbd-infer export/<model_version> some_text_document.txt
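If you exported the model with --export_path, you can also inspect it directly. The sketch below assumes the export is a regular TensorFlow SavedModel; the actual signature names and input format are defined by tfstbd, so it only lists what is available.

```python
import tensorflow as tf

# Assumes --export_path produced a regular TensorFlow SavedModel.
# Replace <model_version> with the actual version directory.
loaded = tf.saved_model.load('export/<model_version>')
print(list(loaded.signatures.keys()))  # available serving signatures
```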

Metrics without any training (baseline)

{'accuracy': 0.96256346, 'accuracy_baseline': 0.96256346, 'auc': 0.5837398, 'auc_precision_recall': 0.07934569, 'average_loss': 0.30893928, 'label/mean': 0.03743653, 'loss': 4676.206, 'precision': 0.0, 'prediction/mean': 0.23343459, 'recall': 0.0, 'global_step': 1, 'f1': 0.0}

TODO: urldecode, entities? Hard cases: "г/кВт∙ч.", "тонн/ТВт∙ч)", "КП.", "АМ."

TODO: focal loss
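For reference, the focal loss from Lin et al. (2017) in its common binary form looks like the sketch below; how it would actually be wired into this model is left open by the TODO.

```python
import tensorflow as tf

def binary_focal_loss(y_true, y_pred, gamma=2.0, alpha=0.25, eps=1e-7):
    """Per-element binary focal loss: -alpha_t * (1 - p_t)**gamma * log(p_t)."""
    y_true = tf.cast(y_true, y_pred.dtype)
    y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
    # p_t is the predicted probability of the true class.
    p_t = y_true * y_pred + (1.0 - y_true) * (1.0 - y_pred)
    alpha_t = y_true * alpha + (1.0 - y_true) * (1.0 - alpha)
    return -alpha_t * tf.pow(1.0 - p_t, gamma) * tf.math.log(p_t)
```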

https://github.com/Koziev/rutokenizer

License: MIT License

