ollama / ollama

Get up and running with Llama 3, Mistral, Gemma, and other large language models.

Home Page: https://ollama.com


Agent retrieval is not persistent

PietFourie opened this issue · comments

What is the issue?

I use the "webscraper" agent to scrape a website. The @ agent icon disappears, and the interface appears to still be talking to the agent. However, this is not the case: when I ask a question about the retrieved web page, the LLM answers (as seen from the icon and from the answers themselves) without considering the retrieved website or its information. You have to type /exit before the @ icon is shown again.
Basically, you have to formulate everything in the prompt before running the agent; after that you lose control over the results. They are never used in any context again.
What I expect to happen is that the agent remains in agent mode, that alternative agents are called when necessary, and that the retrieved data is considered by the LLM as context. This is necessary because you often want to retrieve a web page and then process it further.
You might argue that the other options already cover this, but that would be neither user friendly nor practical. You want to have a chat with the agent AND the LLM, especially about information retrieved from the net or from files.
If you propose the latter RAG method instead, then there is no need for agents in the chat window except to query the database. It just confuses the issue.
Maybe it is as simple as feeding the agent's retrieved information into the LLM's context window and parsing each follow-up question to decide whether it needs an agent or the LLM, such as automatically initialising a storage agent when the next question says "save the result for later processing", etc.

Just to say...
Thanks for the great program. It is really exceptional. Keep up the good work!

OS

Linux

GPU

Nvidia

CPU

No response

Ollama version

latest