Torsha-Sett / Toxic-comment-challenge

Classify texts

Toxic-comment-challenge

We are provided with a large number of Wikipedia comments which have been labeled by human raters for toxic behavior. The types of toxicity are:

toxic

severe_toxic

obscene

threat

insult

identity_hate

We must create a model that predicts the probability of each type of toxicity for each comment.
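One common baseline for this kind of multi-label task is to train one classifier per label on TF-IDF features and read off per-label probabilities. The sketch below is a minimal illustration, assuming scikit-learn; the comments and label flags are invented for the example and are not from the real dataset (the repository's actual approach lives in the notebook).

```python
# Minimal multi-label baseline sketch: one logistic regression per label
# over TF-IDF features. Assumes scikit-learn; data below is hypothetical.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

# Tiny illustrative corpus (made up, not from the Wikipedia data)
comments = [
    "thanks for the helpful edit",
    "you are a worthless idiot and I will hurt you",
    "people like you don't belong here",
    "shut up, stupid",
]
# One row of 0/1 flags per comment, one column per label
y = np.array([
    [0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 0],
    [1, 0, 0, 0, 0, 1],
    [1, 0, 1, 0, 1, 0],
])

vec = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
X = vec.fit_transform(comments)

# One-vs-rest fits an independent binary classifier for each label column
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X, y)

# predict_proba returns one probability per label for each new comment
probs = clf.predict_proba(vec.transform(["thanks, this looks great"]))
print(dict(zip(LABELS, probs[0].round(3))))
```

Because the labels are not mutually exclusive (a comment can be both `toxic` and `insult`), the six probabilities are predicted independently and need not sum to one.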


Languages

Jupyter Notebook 100.0%