GPU offloading not working on system with AMD 5900HX CPU
vlasky opened this issue
I'm running llamafile 0.8.1 on a Windows 10 mini PC with an AMD Ryzen 9 5900HX CPU.
CPU Architecture: AMD Cezanne (Zen 3, Ryzen 5000)
GPU: AMD Radeon RX Vega 8
The mini PC has 64GB RAM installed.
When I enable llamafile GPU support with -ngl 9999, it exits with the following error:
ggml_cuda_compute_forward: RMS_NORM failed
CUDA error: invalid device function
current device: 0, in function ggml_cuda_compute_forward at ggml-cuda.cu:11444
err
GGML_ASSERT: ggml-cuda.cu:9198: !"CUDA error"
My command line is:
llamafile-0.8.1.exe -ngl 9999 -m dolphin-2.9-llama3-8b-Q5_K_M.gguf
I have also tried re-running after installing the AMD HIP SDK, but this made no difference.
Contrary to the runtime message below, amdclang++.exe was present in my Windows PATH.
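(PATH visibility can be confirmed from a command prompt with the built-in where command, e.g. where amdclang++.exe, which prints the full path of every match.)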
import_cuda_impl: initializing gpu module...
get_rocm_bin_path: note: amdclang++.exe not found on $PATH
link_cuda_dso: note: dynamically linking /C/Users/Vlad/.llamafile/ggml-rocm.dll
ggml_cuda_link: welcome to ROCm SDK with tinyBLAS
link_cuda_dso: GPU support loaded
{"build":1500,"commit":"a30b324","function":"server_cli","level":"INFO","line":2858,"msg":"build info","tid":"9442720","timestamp":1714373224}
{"function":"server_cli","level":"INFO","line":2861,"msg":"system info","n_threads":8,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LAMMAFILE = 1 | ","tid":"9442720","timestamp":1714373224,"total_threads":16}
llama_model_loader: loaded meta data with 22 key-value pairs and 291 tensors from dolphin-2.9-llama3-8b-Q5_K_M.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = dolphin-2.9-llama3-8b
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 17
llama_model_loader: - kv 11: llama.vocab_size u32 = 128258
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,128258] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,128258] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 128256
llama_model_loader: - kv 19: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 20: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 21: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q5_K: 193 tensors
llama_model_loader: - type q6_K: 33 tensors
llm_load_vocab: special tokens definition check successful ( 258/128258 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128258
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = Q5_K - Medium
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 5.33 GiB (5.70 BPW)
llm_load_print_meta: general.name = dolphin-2.9-llama3-8b
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128256 '<|im_end|>'
llm_load_print_meta: PAD token = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128256 '<|im_end|>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 ROCm devices:
Device 0: AMD Radeon(TM) Graphics, compute capability 9.0, VMM: no
llm_load_tensors: ggml ctx size = 0.30 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: ROCm0 buffer size = 5115.50 MiB
llm_load_tensors: CPU buffer size = 344.44 MiB
........................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: ROCm0 KV buffer size = 64.00 MiB
llama_new_context_with_model: KV self size = 64.00 MiB, K (f16): 32.00 MiB, V (f16): 32.00 MiB
llama_new_context_with_model: ROCm_Host output buffer size = 0.50 MiB
llama_new_context_with_model: ROCm0 compute buffer size = 258.50 MiB
llama_new_context_with_model: ROCm_Host compute buffer size = 9.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 2
ggml_cuda_compute_forward: RMS_NORM failed
CUDA error: invalid device function
current device: 0, in function ggml_cuda_compute_forward at ggml-cuda.cu:11444
err
GGML_ASSERT: ggml-cuda.cu:9198: !"CUDA error"
Hey, I believe integrated GPUs are not supported; it's probably better to run on the CPU at this time, by passing -ngl 0 instead of 9999.
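For example, using your original command line:
llamafile-0.8.1.exe -ngl 0 -m dolphin-2.9-llama3-8b-Q5_K_M.gguf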
Also, I have seen a few open issues with the same error/warning at the start when using AMD, so I'm not sure if I should open a new issue.
Regarding the line that says: get_rocm_bin_path: note: amdclang++.exe not found on $PATH
The file actually located there is named clang++.exe on Windows; on Linux it is called amdclang++.exe.
Perhaps there could be an operating system check before looking for amdclang++ or clang++ (see the sketch below), or maybe it is something else.
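If it helps, the check could be as simple as something like this. A rough C sketch only: the helper name and the way the platform is detected are placeholders, not llamafile's actual internals.

#include <stdbool.h>

/* Hypothetical helper: pick the ROCm/HIP compiler filename per platform.
 * The HIP SDK for Windows ships the compiler as clang++.exe, while ROCm
 * on Linux installs it as amdclang++. is_windows stands in for whatever
 * runtime platform check llamafile already has (it is built on
 * Cosmopolitan, which exposes IsWindows()). */
static const char *rocm_compiler_name(bool is_windows) {
    return is_windows ? "clang++.exe" : "amdclang++";
}

/* get_rocm_bin_path() could then probe $PATH for rocm_compiler_name(...)
 * instead of hard-coding amdclang++ on every platform. */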
Hey, I believe integrated GPUs are not supported; it's probably better to run on the CPU at this time, by passing -ngl 0 instead of 9999.
OK. I was curious to know whether additional acceleration could be obtained by combining the iGPU with the CPU.
In any case, I reckon the docs should explicitly state that AMD iGPUs are not supported (if they're not). Ideally, llamafile should also report this at runtime.
Also, I have seen a few open issues with the same error/warning at the start when using AMD, so I'm not sure if I should open a new issue. Regarding the line that says: get_rocm_bin_path: note: amdclang++.exe not found on $PATH — the file actually located there is named clang++.exe on Windows; on Linux it is called amdclang++.exe.
Perhaps there could be an operating system check before looking for amdclang++ or clang++, or maybe it is something else.
Yes. I copied clang++.exe to amdclang++.exe to work around this. Both executables were then in the $PATH, but the get_rocm_bin_path: note: amdclang++.exe not found on $PATH message still appeared.
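For reference, the copy was along these lines; the HIP SDK install path and version will vary per machine, 5.7 here is only an example:
copy "C:\Program Files\AMD\ROCm\5.7\bin\clang++.exe" "C:\Program Files\AMD\ROCm\5.7\bin\amdclang++.exe"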