Noeda / rllama

Rust+OpenCL+AVX2 implementation of LLaMA inference code

will this support llama2?

shenjackyuanjie opened this issue · comments

I tried to load llama2-7b, but it does not work.

My GPU is an RX 550, so I cannot run any CUDA stuff.
rllama is the only way to experience LLMs on my local machine.

Well, actually, I found this