whylabs / langkit

πŸ” LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). πŸ“š Extracts signals from prompts & responses, ensuring safety & security. πŸ›‘οΈ Features include text quality, relevance metrics, & sentiment analysis. πŸ“Š A comprehensive tool for LLM observability. πŸ‘€

Home Page:https://whylabs.ai

Geek Repo:Geek Repo

Github PK Tool:Github PK Tool

tests for injections module

FelipeAdachi opened this issue Β· comments

We need some basic tests for the injections module - maybe some obvious injections/non-injections examples and asserting the scores for each.