langchain-ai / opengpts


Implement error handling on OpenGPTs

machulav opened this issue

Steps to reproduce

  1. Create a new "assistant" GPT with "Connery" tools enabled.
  2. Ask the GPT: "Please summarize the page https://github.com/langchain-ai/opengpts" and press "Send."
  3. The summarization tool fails because it uses the GPT-3 model with a 16k-token context window, and the repository page is larger than that, which is expected (see the rough token count after these steps).
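For context, here is a rough, standalone check (not part of OpenGPTs) of how many tokens the page contains. It assumes `requests` and `tiktoken` are installed, and the raw HTML is only a stand-in for whatever text the tool actually passes to the model:

```python
# Rough token count for the page from step 2, compared against the 16k limit
# mentioned in step 3. The model name passed to tiktoken is an assumption.
import requests
import tiktoken

url = "https://github.com/langchain-ai/opengpts"
page_text = requests.get(url, timeout=30).text

enc = tiktoken.encoding_for_model("gpt-3.5-turbo-16k")
token_count = len(enc.encode(page_text))

print(f"{token_count} tokens vs. a 16,385-token context window")
```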

Actual

  • The GPT responds with a "Click to continue" link (screenshot omitted).
  • When the link is pressed, it shows the same link again, endlessly.

Expected

I suggest one of the following:

  1. The GPT should show the error message returned by the tool "as is". In this particular case, the error message is the following:
"[Action execution error] 400 This model's maximum context length is 16385 tokens. However, your messages resulted in 43518 tokens. Please reduce the length of the messages."
  2. The GPT should summarize the error.

In both cases, the user should be able to understand that the tool failed with an error, identify which tool failed, and fix the issue.
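To make the suggestion concrete, here is a minimal sketch of option 1, assuming a LangChain-style tool execution loop. This is not the OpenGPTs implementation; `run_tool_call` is a hypothetical helper:

```python
# Sketch: surface tool errors to the assistant (and the user) instead of
# looping on "Click to continue". `run_tool_call` is hypothetical.
from langchain_core.messages import ToolMessage


def run_tool_call(tool, tool_call_id: str, args: dict) -> ToolMessage:
    """Run a tool and return its output, or its error text if it fails."""
    try:
        result = tool.invoke(args)
        return ToolMessage(content=str(result), tool_call_id=tool_call_id)
    except Exception as exc:  # e.g. the 400 "maximum context length" error above
        # Option 1 from the list above: pass the error message through "as is",
        # so the model can tell the user which tool failed and why.
        return ToolMessage(
            content=f"[Action execution error] {exc}",
            tool_call_id=tool_call_id,
        )
```

With something like this in place, the assistant would receive the error text as the tool's output and could either show it verbatim (option 1) or summarize it (option 2).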

Here is the LangSmith trace for the same prompt as above: https://smith.langchain.com/public/b793dd80-d36c-47dd-a0c0-e3219c6bc473/r