TrTokenizer 🇹🇷

Sentence and word tokenizers for the Turkish language

TrTokenizer is a complete solution for Turkish sentence and word tokenization, with extensive coverage of the language's conventions. If you think natural language models always need robust, fast, and accurate tokenizers, you are in the right place. Sentence tokenization relies on the non-suffix keywords listed in the 'tr_non_suffixes' file. This file can be extended if required; for developer convenience, lines starting with the # symbol are treated as comments. All regular expressions are pre-compiled to speed up tokenization.
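As a sketch of what the file format looks like, based on the comment convention described above, an entry in 'tr_non_suffixes' marks a token that ends with a period without ending the sentence. The specific abbreviations below are our own illustrative examples; the file shipped with the package defines the actual list:

# Turkish abbreviations that take a trailing period mid-sentence
Dr
Prof
Av
Sok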

Install

pip install trtokenizer

Usage

from trtokenizer.tr_tokenizer import SentenceTokenizer, WordTokenizer

sentence_tokenizer = SentenceTokenizer()  # regexes are compiled only once, at object creation

sentence_tokenizer.tokenize("Merhaba dünya. Bugün hava çok güzel.")  # pass a paragraph as a string

word_tokenizer = WordTokenizer()  # regexes are compiled only once, at object creation

word_tokenizer.tokenize("Bugün hava çok güzel.")  # pass a sentence as a string
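
Putting the two together, here is a minimal end-to-end sketch. The sample text is our own, and we assume tokenize returns a list of strings (one per sentence or word), which the snippets above suggest but do not state explicitly:

from trtokenizer.tr_tokenizer import SentenceTokenizer, WordTokenizer

text = "Dr. Ahmet geldi. Toplantı saat 10.30'da başlayacak."

sentence_tokenizer = SentenceTokenizer()
word_tokenizer = WordTokenizer()

# split the paragraph into sentences, then each sentence into words
for sentence in sentence_tokenizer.tokenize(text):
    print(sentence)
    print(word_tokenizer.tokenize(sentence))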

To-do

  • Usage examples (Done)
  • Cython C-API for performance (Done, see build/tr_tokenizer.c; a build sketch follows this list)
  • Release platform-specific shared libraries (Done, build/tr_tokenizer.cpython-38-x86_64-linux-gnu.so, Debian Linux with the gcc compiler only)
  • Document limitations
  • Prepare a simple contribution guide
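
For contributors curious about the Cython build step mentioned above, the following setup.py is a hypothetical sketch of standard Cython packaging; the module path and the project's actual build configuration are assumptions, not taken from the repository:

from setuptools import setup
from Cython.Build import cythonize

# compile the pure-Python tokenizer module to a C extension;
# cythonize generates tr_tokenizer.c, which gcc builds into the
# platform-specific shared library (.so)
setup(
    name="trtokenizer",
    ext_modules=cythonize("trtokenizer/tr_tokenizer.py", language_level="3"),
)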


License

MIT License

