whylabs / langkit

πŸ” LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). πŸ“š Extracts signals from prompts & responses, ensuring safety & security. πŸ›‘οΈ Features include text quality, relevance metrics, & sentiment analysis. πŸ“Š A comprehensive tool for LLM observability. πŸ‘€

Home Page: https://whylabs.ai

add support for detoxify local models

FelipeAdachi opened this issue Β· comments

With the changes in this PR, local models are supported for the default model (martin-ha's toxic-comment model), but not for any of the detoxify models. It would be useful to add support for this.
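One way to frame the request: the toxicity metric needs to dispatch on what kind of `model_path` it was given (a detoxify variant name, a local checkpoint or directory, or a Hugging Face hub id) before loading anything. The helper below is a hypothetical sketch of that dispatch logic, not LangKit's actual implementation; the backend labels and the `.ckpt` convention (detoxify distributes PyTorch Lightning checkpoints) are assumptions.

```python
import os

# The detoxify project ships three pretrained variants; the default
# LangKit model is martin-ha/toxic-comment-model via transformers.
DETOXIFY_VARIANTS = {"original", "unbiased", "multilingual"}


def resolve_toxicity_model(model_path: str) -> tuple:
    """Return (backend, source) describing how to load the model.

    Hypothetical helper: a variant name maps to a detoxify download,
    an existing local path is loaded from disk (detoxify for a .ckpt
    checkpoint, transformers for a saved model directory), and
    anything else is treated as a Hugging Face hub id.
    """
    if model_path in DETOXIFY_VARIANTS:
        return ("detoxify", model_path)  # fetch detoxify's released weights
    if os.path.exists(model_path):
        backend = "detoxify" if model_path.endswith(".ckpt") else "transformers"
        return (backend, model_path)     # load from the local filesystem
    return ("transformers", model_path)  # assume a hub model id
```

A caller could then pass the result to either `Detoxify(...)` or a transformers pipeline; the point is that local detoxify checkpoints get their own branch instead of falling through to the default loader.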