The Toxic Comment Classifier using Natural Language Processing (NLP) is a project designed to detect and categorize harmful comments on digital communication platforms across six categories: toxic, severe toxic, obscene, threat, insult, and identity hate. By identifying and flagging potentially harmful content, it helps promote healthier online discourse.
Repository on GitHub: https://github.com/OshaPandey/Toxic_Comment_Classifier_NLP
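The description does not specify the model used in the repository, so the following is only a minimal sketch of the task it describes: multi-label classification over the six toxicity categories. It assumes a TF-IDF plus one-vs-rest logistic regression baseline with scikit-learn; the toy texts, labels, and variable names are illustrative and are not taken from the project's code.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import Pipeline

# The six label columns used by the classifier.
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

# Tiny illustrative training set; a real run would use a full labeled dataset.
train_texts = [
    "thanks for the helpful answer",
    "i will hurt you, watch your back",
    "you are a disgusting idiot",
    "people from your group are subhuman",
]
# One row per comment, one column per label (multi-label targets).
train_labels = np.array([
    [0, 0, 0, 0, 0, 0],
    [1, 0, 0, 1, 0, 0],
    [1, 0, 1, 0, 1, 0],
    [1, 1, 0, 0, 0, 1],
])

# TF-IDF features feeding one binary logistic regression per label.
model = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True)),
    ("ovr", OneVsRestClassifier(LogisticRegression(max_iter=1000))),
])
model.fit(train_texts, train_labels)

# Score a new comment and flag any label whose probability exceeds 0.5.
probs = model.predict_proba(["shut up you idiot"])[0]
for label, p in zip(LABELS, probs):
    print(f"{label}: {p:.2f}", "<- flagged" if p > 0.5 else "")
```

With this framing, each category is an independent binary decision, so a single comment can be flagged as, for example, both obscene and insult at the same time.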