iyaja / llama-fs

A self-organizing file system with llama 3


hardware requirement for local use

laoshaw opened this issue · comments

Can I run this locally? It seems possible, but what are the hardware requirements? Will an RTX 4090 suffice?

I have the same need, but alas, I only have a GTX 1060...

I'll just chime in here to say that I'm running LLMs such as Qwen2 and Llama 3 at 7B size on a Ryzen 9 5950X, CPU-only - no GPU. My GPU is just a 2GB GIGABYTE Radeon RX 550, which is not used by the LLMs at all. The Ryzen is a high-end CPU with 16 cores and 32 threads. I had 64GB of RAM, which I just upgraded to 128GB, though mostly for virtual machine workloads. My suspicion is that any Ryzen 7 or Intel equivalent would be adequate to run 7B models provided there is enough RAM, which probably means at least 16GB and preferably 32GB. Any AMD or NVIDIA GPU with 8-16GB VRAM (and the appropriate ROCm or CUDA software installed) would improve performance.
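Those RAM/VRAM figures can be sanity-checked with simple arithmetic: a model's memory footprint is roughly parameter count times bytes per weight, plus some runtime overhead. The numbers below are a back-of-envelope sketch, not measured figures; the 20% overhead factor is an assumption, and real usage also grows with context length.

```python
def model_memory_gb(params_billion: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    """Approximate memory needed to hold the model weights.

    overhead=1.2 is an assumed ~20% allowance for the KV cache and
    runtime buffers; actual usage varies by runtime and context size.
    """
    bytes_total = params_billion * 1e9 * bits_per_weight / 8 * overhead
    return bytes_total / 1e9

# A 7B model at 4-bit quantization fits in ~4-5 GB...
print(f"7B @ 4-bit : {model_memory_gb(7, 4):.1f} GB")   # ~4.2 GB
# ...while the same model at fp16 needs several times that:
print(f"7B @ 16-bit: {model_memory_gb(7, 16):.1f} GB")  # ~16.8 GB
```

This is why a 4-bit 7B model runs on a 16GB machine (or an 8GB GPU) while the unquantized version does not.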

One question I would ask is about the speed of processing these files. I assume they are not actually being indexed for RAG, but merely read and then organized. It seems to me this could be done adequately by any local LLM, perhaps even smaller models such as 1.5 GB LLMs. YMMV.