confident-ai / deepeval

The LLM Evaluation Framework

Home Page: https://docs.confident-ai.com/


Suggestion for Future Improvement: Enable Function Calling for Consistent JSON Output

BarendPotijk opened this issue · comments

I am currently using a custom LLM class based on Azure OpenAI with the model gpt-3.5-turbo-16k, which supports function calling. While working with DeepEval, I observed that the prompts include instructions to produce JSON-parseable output. However, when the output is not JSON-parseable, DeepEval suggests switching to a more powerful model.

Would it not be more efficient to include an optional parameter that enables function calling? This enhancement would allow less powerful models, such as gpt-3.5-turbo, to produce correctly formatted JSON output far more consistently, since the API itself constrains the response to the declared schema.
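To illustrate the idea, here is a minimal sketch of how a tool (function-calling) schema could constrain a model to structured output, and how the arguments could be parsed back into a dict. The schema fields (`score`, `reason`) and the function name `record_evaluation` are purely illustrative, not DeepEval's actual metric schema, and the sample response below only mimics the shape of an OpenAI chat-completion message rather than making a real API call.

```python
import json

# Hypothetical tool schema: forces the model to emit its evaluation as
# structured arguments instead of free-form JSON text. Field names are
# illustrative, not DeepEval's real schema.
EVAL_TOOL = {
    "type": "function",
    "function": {
        "name": "record_evaluation",
        "description": "Record an evaluation score with a reason.",
        "parameters": {
            "type": "object",
            "properties": {
                "score": {"type": "number"},
                "reason": {"type": "string"},
            },
            "required": ["score", "reason"],
        },
    },
}


def parse_tool_call(message: dict) -> dict:
    """Extract the JSON arguments from an assistant message containing a
    tool call; the API returns `arguments` as a JSON-encoded string."""
    call = message["tool_calls"][0]
    return json.loads(call["function"]["arguments"])


# Example assistant message shaped like a chat-completion tool-call response:
sample = {
    "role": "assistant",
    "tool_calls": [
        {
            "function": {
                "name": "record_evaluation",
                "arguments": '{"score": 0.8, "reason": "Answer is relevant."}',
            }
        }
    ],
}

result = parse_tool_call(sample)
print(result)
```

In a real request, `EVAL_TOOL` would be passed via the `tools` parameter (with a forced `tool_choice`), so even a weaker model returns arguments that always parse, instead of JSON embedded in prose.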