doc2term

A fast sentence/word tokenizer and punctuation remover.

A fast NLP tokenizer that detects sentences, words, numbers, URLs, hostnames, emails, filenames, dates, and phone numbers. It also normalizes and standardizes documents, removing punctuation and duplicate tokens.
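
As a quick sketch of these token types in action, the snippet below runs a sample document through doc2term_str (the sample text is illustrative, and the exact normalized output depends on the library's rules):

import doc2term

# A sample document mixing an email, a date, and a URL.
doc = "Write to user@example.com about the 2021-01-01 release at https://example.com"

# doc2term_str standardizes the text and strips punctuation; how the
# email, date, and URL tokens are normalized is up to the library.
print(doc2term.doc2term_str(doc))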

Installation

git clone https://github.com/callforpapers-source/doc2term
cd doc2term
python setup.py install
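
A quick way to verify the build is to call the library from Python (the sample sentence is illustrative):

import doc2term

# If the C extension compiled and installed correctly, this import
# succeeds and the call returns the cleaned string.
print(doc2term.doc2term_str("Hello, world!"))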

Compilation

Installation compiles the bundled C sources with gcc, so make sure gcc is available on your PATH.

Usage

Example notebook: doc2term

Example

>>> import doc2term

>>> doc2term.doc2term_str("Actions speak louder than words. ... ")
"Actions speak louder than words ."
>>> doc2term.doc2term_str("You can't judge a book by its cover. ... from thoughtcatalog.com")
"You can't judge a book by its cover . from"

>>> doc2term.doc2term_str("You can't judge a book by its cover. ... from thoughtcatalog.com", include_hosts_files=1)
"You can't judge a book by its cover . from thoughtcatalog.com"

License

Apache License 2.0


Languages

C 89.9%, Python 5.5%, Lex 3.2%, Makefile 1.5%