langgenius / dify

Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.

Home Page: https://dify.ai


Consider using gptcache for caching

prs1022 opened this issue

Self Checks

  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • Please be sure to submit issues in English, otherwise they will be closed. Thank you! :)
  • Please do not modify this template :) and fill in all the required fields.

1. Is this request related to a challenge you're experiencing? Tell me about your story.

Repeatedly calling LLM APIs can be expensive, and response speed may slow down during LLMs' peak hours, so I think Dify needs a cache system to deal with this. The open-source GPTCache library could be adopted for the models it supports. What do you think?
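For context, GPTCache is designed as a drop-in replacement for the openai client: repeated (or, with a similarity evaluator configured, semantically similar) prompts are answered from a local cache instead of hitting the API. A minimal sketch following GPTCache's documented adapter usage; the model name and prompt below are placeholders, not anything Dify-specific:

```python
# Minimal sketch of GPTCache's adapter pattern (per the GPTCache docs).
# The gptcache openai adapter mirrors the openai module, so existing
# call sites only need to change their import.
from gptcache import cache
from gptcache.adapter import openai  # drop-in replacement for `import openai`

cache.init()            # default setup: exact-match caching on the prompt
cache.set_openai_key()  # reads OPENAI_API_KEY from the environment

# The first call goes to the API; repeating the same prompt is then
# served from the cache, saving both cost and latency.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": "What is Dify?"}],
)
print(response["choices"][0]["message"]["content"])
```

GPTCache can also be configured for semantic matching by passing an embedding function and a similarity evaluator to cache.init, which is where most of the savings would likely come from in practice.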

2. Additional context or comments

No response

3. Can you help us with this feature?

  • I am interested in contributing to this feature.

Duplicate of #848