confident-ai / deepeval

The LLM Evaluation Framework

Home Page: https://docs.confident-ai.com/

Dramatically simpler and more reliable cache

prescod opened this issue

It's my belief that the cache can be dramatically simplified, and made more reliable, by using the Python "diskcache" library.

DiskCache handles so much:

  • locking
  • reliability in the face of Ctrl-C/KeyboardInterrupt and other signals
  • easy clearing

I think that hundreds of lines of code could be replaced with diskcache.
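
For illustration, here is a minimal sketch of what diskcache gives you out of the box; the cache directory name below is an assumption, not deepeval's actual path:

```python
import diskcache

# SQLite-backed store: safe across threads and processes.
cache = diskcache.Cache(".deepeval_cache")

cache["key"] = "value"   # each write is one SQLite transaction, so a
                         # Ctrl-C mid-write cannot corrupt the store
print(cache.get("key"))  # -> "value"

cache.clear()            # easy clearing
cache.close()
```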

JSON is a poor format for a cache because it is hard to update it transactionally and incrementally. If you Ctrl-C in the middle of a write, you'll end up with corrupted data.
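
To make that failure mode concrete, here is the naive single-JSON-file pattern in sketch form (an illustration, not deepeval's actual code):

```python
import json

with open("cache.json") as f:
    cache = json.load(f)        # read the entire cache

cache["new_key"] = "response"   # one incremental update...

with open("cache.json", "w") as f:
    json.dump(cache, f)         # ...forces a full rewrite; a Ctrl-C
                                # here leaves a truncated, unparseable file
```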

Furthermore, I think the right unit of caching is the LLM response.

This will solve the problem where some kinds of tests are cached and others are not.
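
As a sketch of what response-level caching could look like, using diskcache's memoize decorator; `call_model` and the cache directory are placeholders, not deepeval internals:

```python
import diskcache

cache = diskcache.Cache(".deepeval_llm_cache")

def call_model(model_name: str, prompt: str) -> str:
    # Stand-in for the real LLM call.
    return f"<response from {model_name}>"

@cache.memoize()
def generate(model_name: str, prompt: str) -> str:
    # The cache key is derived from (model_name, prompt), so every
    # LLM call is cached uniformly, whichever metric or test made it.
    return call_model(model_name, prompt)
```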

I will attach two files that show how I monkey-patched deepeval to add a much more reliable caching system in a few lines of code.

caching.zip

It might be cleaner to add caching to DeepEvalBaseLLM, but then I'd need to change every place it is called, so this monkey-patching hack worked better for me. I turned off the DeepEval cache because I was frustrated with it.
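
Since the attached files aren't reproduced inline, the following is only a hedged reconstruction of the general shape of such a patch; it assumes the model class exposes `generate(self, prompt)` the way DeepEvalBaseLLM subclasses do, and none of the names below come from caching.zip:

```python
import functools

import diskcache

cache = diskcache.Cache(".deepeval_llm_cache")

def patch_with_cache(model_cls):
    """Replace model_cls.generate with a disk-cached wrapper."""
    original = model_cls.generate

    @functools.wraps(original)
    def cached_generate(self, prompt, *args, **kwargs):
        key = (model_cls.__name__, prompt)
        if key in cache:
            return cache[key]
        response = original(self, prompt, *args, **kwargs)
        cache[key] = response  # one transactional write
        return response

    model_cls.generate = cached_generate
    return model_cls

# e.g. patch_with_cache(MyCustomLLM) once at startup (class name hypothetical)
```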

Wow! This sounds like a very big improvement, as I'm having trouble with the caching myself! Are you working on a PR already? @prescod