Repositories under the toxicity-detection topic:
This demo shows the functionality of the Voximplant instant messaging SDK, including silent supervision by a bot.
A simple Python program that uses a machine learning model to detect toxicity in tweets, served as a Flask web app.
AntiToxicBot is a bot that detects toxic users in a chat using data science and machine learning techniques. The bot warns admins about toxic users, and admins can also allow the bot to ban them.
A simple Python program that uses a machine learning model to detect toxicity in tweets, with a GUI built in Tkinter.
NLP deep learning model for multilingual toxicity detection in text 📚
Open-source Discord moderation bot leveraging NLP, with a focus on explainability.
This library detects toxicity in a text string and returns the toxicity percentage along with the toxic words found in the text.
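A minimal sketch of such an interface, assuming a simple word-list approach; `TOXIC_WORDS` and `detect_toxicity` are illustrative names, not this library's actual API:

```python
import re

# Toy lexicon for illustration only; a real library would ship a larger list
# or a trained model.
TOXIC_WORDS = {"idiot", "stupid", "hate"}

def detect_toxicity(text):
    """Return the percentage of toxic tokens and the toxic words found."""
    tokens = re.findall(r"[a-z']+", text.lower())
    found = [t for t in tokens if t in TOXIC_WORDS]
    percentage = 100.0 * len(found) / len(tokens) if tokens else 0.0
    return {"toxicity_percent": round(percentage, 1), "toxic_words": found}

print(detect_toxicity("You are a stupid idiot"))
# → {'toxicity_percent': 40.0, 'toxic_words': ['stupid', 'idiot']}
```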
This work focuses on developing machine learning models, in particular neural networks and SVMs, that detect toxicity in comments. Topics covered: a) cost-sensitive learning, b) class imbalance.
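Cost-sensitive learning for an imbalanced toxicity corpus can be sketched with scikit-learn, where `class_weight="balanced"` reweights the minority (toxic) class; the tiny corpus below is invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy, imbalanced corpus: toxic comments (label 1) are the minority class.
texts = ["have a nice day", "great work", "thanks a lot", "well done",
         "you are awful", "I hate you"]
labels = [0, 0, 0, 0, 1, 1]

model = make_pipeline(
    TfidfVectorizer(),
    # Cost-sensitive SVM: errors on the rare toxic class are penalized more.
    LinearSVC(class_weight="balanced"),
)
model.fit(texts, labels)
print(model.predict(["I hate this awful thing"]))  # expect a toxic prediction
```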
A trained deep learning model that predicts different types of toxic comments: threats, obscenity, insults, and identity-based hate.
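Predicting several toxicity types at once is a multi-label problem; a minimal sketch, assuming scikit-learn and an invented toy corpus (one binary classifier per label, not this repo's actual model):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

# Toy corpus: each comment can carry zero or more toxicity labels.
texts = ["I will hurt you", "you disgusting pig", "you are an idiot",
         "go back to your country", "have a lovely day", "what a fine morning"]
labels = [{"threat"}, {"obscene", "insult"}, {"insult"},
          {"identity_hate"}, set(), set()]

mlb = MultiLabelBinarizer()
y = mlb.fit_transform(labels)  # one indicator column per toxicity label

clf = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LinearSVC()),  # one binary classifier per label
)
clf.fit(texts, y)

pred = clf.predict(["you are an idiot"])
print(mlb.inverse_transform(pred))
```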
An explainable toxicity detector for code review comments. Published at ESEM 2023.
Toxicity detection in conversations or phrases.
In-game Toxic Language Detection: Shared Task and Attention Residuals
project for ms_system_design
Comparing Toxic Texts with Transformers