OshaPandey / Toxic_Comment_Classifier_NLP

The Toxic Comment Classifier is a Natural Language Processing (NLP) project that detects and categorizes comments under six toxicity labels: toxic, severe toxic, obscene, threat, insult, and identity hate. By identifying and flagging potentially harmful content, it can help promote healthier discourse on digital communication platforms.
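A task like this is typically framed as multi-label text classification: each comment can carry any subset of the six labels. As a minimal sketch of that idea (assuming scikit-learn; the repository's notebook may use different features or models, and the toy comments and two-label subset below are invented for illustration):

```python
# Minimal multi-label toxicity classification sketch.
# Assumption: scikit-learn is available; this is NOT the repo's exact pipeline.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

# Toy training comments with two of the six labels: [toxic, insult].
comments = [
    "you are an idiot",
    "have a great day",
    "what a stupid take",
    "thanks for sharing this",
    "nobody likes you, loser",
    "interesting point, well argued",
]
labels = np.array([
    [1, 1],
    [0, 0],
    [1, 0],
    [0, 0],
    [1, 1],
    [0, 0],
])

# TF-IDF features feed one binary classifier per label column.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(comments)
clf = MultiOutputClassifier(LogisticRegression()).fit(X, labels)

# Flag a new comment: a 1 in a column means that label applies.
pred = clf.predict(vectorizer.transform(["you idiot"]))
print(pred.shape)  # one row, one column per label
```

Extending the label matrix to all six columns (toxic, severe toxic, obscene, threat, insult, identity hate) follows the same pattern, since each label gets its own independent binary classifier.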

Repository from GitHub: https://github.com/OshaPandey/Toxic_Comment_Classifier_NLP

This repository is not active



Languages

- Jupyter Notebook: 99.6%
- Python: 0.4%