zylon-ai / private-gpt

Interact with your documents using the power of GPT, 100% privately, no data leaks

Home Page: https://privategpt.dev

Ollama with LLAMA3: trouble referencing ingested files

anthonyselias opened this issue · comments

I'm using PrivateGPT with Ollama and llama3. I ingested a number of .cs scripts from my project.

Whenever I ask the model to reference something quite obvious, it's completely oblivious to the ingested files. It's almost as if the ingested files aren't there.

If I give a hint about the location or name of the file, it will retrieve what the prompt asks for. But this seems counterproductive. The whole point of ingesting the files is to be able to reference things (functions, for example) that are present in the scripts.

Is there anything I can do to increase the accuracy, or to increase how much of the ingested files it reads, so its responses are more accurate?
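For reference, these are the knobs I've been looking at in `settings.yaml` (names are from my copy of the config and may differ in your version, so treat this as a sketch rather than a recommendation): raising the number of retrieved chunks and the LLM context window so more of the ingested code actually reaches the model.

```yaml
# settings.yaml (excerpt) - values here are illustrative, not defaults I'm endorsing
llm:
  mode: ollama
  # Larger context window lets more retrieved chunks fit into the prompt,
  # provided the model (llama3 here) actually supports that length.
  context_window: 8192

ollama:
  llm_model: llama3

rag:
  # How many document chunks the retriever passes to the LLM per query.
  # Higher values surface more of the ingested files but use more context.
  similarity_top_k: 6
```

No idea yet whether bumping these is enough, or whether the embeddings just aren't matching code identifiers well in the first place.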