confident-ai / deepeval

The LLM Evaluation Framework

Home Page: https://docs.confident-ai.com/


Make evaluate() run test cases concurrently

penguine-ip opened this issue

Currently, the metrics for each test case are run concurrently, but the test cases within a test run are not.
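
A minimal sketch of what this could look like with asyncio, assuming a hypothetical per-test-case helper (`a_evaluate_test_case`) and that each metric exposes an async `a_measure` method; this is not deepeval's actual implementation, just an illustration of the requested behavior:

```python
import asyncio

# Hypothetical helper standing in for deepeval's per-test-case evaluation.
async def a_evaluate_test_case(test_case, metrics):
    # Run every metric for this one test case concurrently
    # (the behavior the issue says already exists today).
    await asyncio.gather(*(m.a_measure(test_case) for m in metrics))
    return test_case

async def a_evaluate(test_cases, metrics, max_concurrent=10):
    # Bound concurrency so a large test run doesn't flood the LLM API.
    semaphore = asyncio.Semaphore(max_concurrent)

    async def bounded(test_case):
        async with semaphore:
            return await a_evaluate_test_case(test_case, metrics)

    # The requested change: run the test cases themselves concurrently,
    # not just the metrics within each test case.
    return await asyncio.gather(*(bounded(tc) for tc in test_cases))

# Usage (names hypothetical):
# results = asyncio.run(a_evaluate(test_cases, metrics))
```

The semaphore keeps the number of in-flight test cases capped, which matters in practice since each test case may fan out into several concurrent LLM calls of its own.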