protectai / llm-guard

The Security Toolkit for LLM Interactions

Home Page: https://llm-guard.com/


Print the sentence along with the toxicity score.

kadamsolanki opened this issue

Is there a way to get each sentence along with the full results generated by the Toxicity input scanner?
That would be really helpful, as it would give an idea of how each sentence is understood by a non-LLM model.
Please tell me how this can be done. Thanks in advance.
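
As an illustration of the kind of per-sentence output being requested here, below is a minimal sketch that splits a prompt into sentences and runs each one through the documented `Toxicity` input scanner, assuming the standard `scan()` return signature of `(sanitized_prompt, is_valid, risk_score)`. The naive sentence splitting is illustrative only and not part of llm-guard.

```python
# Minimal sketch: score each sentence separately with the Toxicity
# input scanner. Assumes scan() returns a
# (sanitized_prompt, is_valid, risk_score) tuple, per the llm-guard docs.
from llm_guard.input_scanners import Toxicity

scanner = Toxicity(threshold=0.5)

prompt = "You are wonderful. I will hurt you. Have a nice day."

# Naive split on periods; a real implementation might use nltk or spaCy.
sentences = [s.strip() for s in prompt.split(".") if s.strip()]

for sentence in sentences:
    _, is_valid, risk_score = scanner.scan(sentence)
    print(f"{sentence!r}: risk_score={risk_score}, is_valid={is_valid}")
```

Depending on the installed version, the `Toxicity` scanner may also accept a `match_type` argument for sentence-level matching; check the documentation for the version you are using.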

Hey @kadamsolanki , thanks for submitting an issue.

We have a similar request: #111
We are planning to change the return type to an object with more context.
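
For illustration only, such a return object might look something like the hypothetical dataclass below; this is one possible shape, not the design decided in #111.

```python
# Purely hypothetical sketch of a richer scan result object; the actual
# shape planned in #111 is not specified here.
from dataclasses import dataclass, field


@dataclass
class ScanResult:
    sanitized_prompt: str
    is_valid: bool
    risk_score: float
    # Hypothetical per-sentence breakdown, as requested in this issue.
    sentence_scores: list[tuple[str, float]] = field(default_factory=list)
```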

Hey @asofter, thanks for responding.

It's great to know that this is already being worked on. Could you share a tentative timeline for when we can expect it?
For now, I have found a workaround to get more context, but it would be better to have this built in.