protectai / llm-guard

The Security Toolkit for LLM Interactions

Home Page: https://llm-guard.com/

Risk score logic explanation

kadamsolanki opened this issue

Hey, can anyone explain the logic behind the risk score calculation in the Toxicity input scanner? The formula in `util` does not do justice to the scores the model generates.

If possible, please also provide a detailed explanation of why risk_score was added as a metric/indicator.

Thanks,
Kadam

Hey @kadamsolanki , thanks for reaching out.

We use the configured threshold and only calculate the risk score if the model's confidence score is above that threshold. The risk score is then basically how far the confidence score is above the threshold.
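
For illustration, here is a minimal sketch of that logic in Python (the actual formula lives in `llm_guard.util` and may normalise or round differently; the function below is only to show the idea):

```python
def risk_from_confidence(confidence: float, threshold: float) -> float:
    """Hypothetical helper: risk is how far the confidence sits above the threshold."""
    if confidence <= threshold:
        # At or below the threshold the input is considered safe, so risk is 0.
        return 0.0
    # Normalise the distance above the threshold by the remaining headroom,
    # so a confidence of 1.0 always maps to a risk score of 1.0.
    return round((confidence - threshold) / (1.0 - threshold), 2)


# Example: threshold 0.5, model confidence 0.8 -> risk score 0.6
print(risk_from_confidence(0.8, 0.5))
```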

Hope it makes sense

Hey @asofter, it does make sense, and I was aware of this. What I meant is that I want to use the risk score for evaluation, and there it breaks down for all the scanners that produce sentence-level scores, because the scanner takes the max score across all sentences for any one of the labels.

Using that same max score for the risk score calculation does not help me, because I cannot tell which sentence or which label is failing. So I wanted to understand whether there is some sort of aggregation, or an overall confidence score, that would let me be clear about the model output (roughly along the lines of the sketch below).
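
Something like this, for example (the sentence splitting, labels and scores below are invented for illustration; this is not how the scanner returns its output today):

```python
from statistics import mean

# Assumed per-sentence, per-label model output (values made up).
sentence_scores = [
    {"sentence": "First sentence.",  "label": "toxicity", "score": 0.10},
    {"sentence": "Second sentence.", "label": "insult",   "score": 0.85},
    {"sentence": "Third sentence.",  "label": "toxicity", "score": 0.20},
]

# Today only the single highest score feeds the risk calculation.
worst = max(sentence_scores, key=lambda item: item["score"])

# An aggregate such as the mean (or a per-label breakdown) would make it
# clearer how the text scores overall, not just at its worst sentence.
average = mean(item["score"] for item in sentence_scores)

print(worst)    # which sentence/label drove the max score
print(average)  # overall picture across all sentences
```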

I see, so your use case is sentence-level matching instead of the overall text. Do you mean something that provides the average score across all sentences instead of the highest one?