ymcui / Chinese-LLaMA-Alpaca

Chinese LLaMA & Alpaca large language models + local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)

Home Page: https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki


How do I specify which GPU card to use for inference?

bigmover opened this issue

Check the following items before submitting

  • Make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
  • Since the related dependencies are updated frequently, make sure you follow the relevant steps in the Wiki.
  • I have read the FAQ section and searched existing issues, and found no similar problem or solution.
  • Third-party tool issues: e.g. llama.cpp, text-generation-webui, LlamaChat, etc.; it is also recommended to look for solutions in the corresponding projects.
  • Model correctness check: be sure to verify the model against SHA256.md; with an incorrect model, correct results and normal operation cannot be guaranteed.

Issue type

None

Base model

None

Operating system

None

Describe the problem in detail

On Linux, how do I specify which GPU card to use for inference? CUDA_VISIBLE_DEVICES? @iMountTai

Dependencies (required for code-related issues)

No response

Runtime logs or screenshots

# Paste the runtime log here

Set CUDA_VISIBLE_DEVICES=gpu_id, or specify --gpus gpu_id in the script arguments.
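A minimal sketch of what this looks like on the command line (the script path and --base_model flag below are illustrative placeholders; check the Wiki for the actual inference script and its arguments):

# Restrict the process to physical GPU 1; inside the script it then appears as cuda:0
CUDA_VISIBLE_DEVICES=1 python scripts/inference/inference_hf.py --base_model /path/to/merged-model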


Thanks for the advice! Since ./main -h does not list a --gpus option, I tested the speed with CUDA_VISIBLE_DEVICES=7 ./main -m llama_model/7B/ggml-model-q4.bin --color -f prompts/alpaca.txt -ins -c 2048 --temp 0.2 -n 256 --repeat_penalty 1.3.
[screenshot]
The speed seems a bit slow; is GPU support itself just poor?



We suggest that you fill in our issue template carefully; you entered None for everything. Your question is actually a llama.cpp-related issue.
The reply above describes how to use the Hugging Face-based inference script.
Your command:

CUDA_VISIBLE_DEVICES=7 ./main -m llama_model/7B/ggml-model-q4.bin --color -f prompts/alpaca.txt -ins -c 2048 --temp 0.2 -n 256 --repeat_penalty 1.3

does not use the GPU at all. With llama.cpp you first need to build with cuBLAS, and then use the -ngl parameter to specify how many model layers to offload to the GPU.
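For reference, a sketch of the cuBLAS build and offload step for llama.cpp builds of roughly this vintage (the build flag has been renamed in newer versions; the model path is a placeholder):

# Rebuild llama.cpp with cuBLAS so that -ngl actually offloads layers to the GPU
make clean && make LLAMA_CUBLAS=1
# Offload all layers of a 7B model (32 transformer layers plus extras, 35 in total) to card 7
CUDA_VISIBLE_DEVICES=7 ./main -m ./models/7B/ggml-model-q4_0.bin -ngl 35 -n 128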

Also, your model path appears to point to a LLaMA model. Note that the LLaMA model cannot be used with the Alpaca template (-f prompts/alpaca.txt in your command). If you want chat-style interaction with llama.cpp, please use an Alpaca model.
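Putting both corrections together, a hedged example of the intended chat invocation (the model path is a placeholder for your own quantized Chinese-Alpaca weights):

# Chat interaction with an Alpaca model, offloading all layers to GPU 7
CUDA_VISIBLE_DEVICES=7 ./main -m ./models/chinese-alpaca-7b/ggml-model-q4_0.bin --color -f prompts/alpaca.txt -ins -c 2048 --temp 0.2 -n 256 --repeat_penalty 1.3 -ngl 35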


Sorry about that! I'll go back and look into it. Thanks for the correction!


Fixed. Single-card int4 inference on an A100 GPU now reaches 75.91 tokens/s, which still seems a bit low. Thanks again for the correction.

(myenv) [root@alywlcb-lingjun-gpu-0014 llama.cpp]# CUDA_VISIBLE_DEVICES=7 ./build/bin/main -m ./models/llama-7b/ggml-model-q4.bin -n 128 -ngl 1000 -mmq
main: build = 992 (0919a0f)
main: seed  = 1692256552
ggml_init_cublas: found 1 CUDA devices:
  Device 0: NVIDIA A100-SXM4-80GB, compute capability 8.0
llama.cpp: loading model from ./models/llama-7b/ggml-model-q4.bin
llama_model_load_internal: format     = ggjt v3 (latest)
llama_model_load_internal: n_vocab    = 32000
llama_model_load_internal: n_ctx      = 512
llama_model_load_internal: n_embd     = 4096
llama_model_load_internal: n_mult     = 5504
llama_model_load_internal: n_head     = 32
llama_model_load_internal: n_head_kv  = 32
llama_model_load_internal: n_layer    = 32
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: n_gqa      = 1
llama_model_load_internal: rnorm_eps  = 5.0e-06
llama_model_load_internal: n_ff       = 11008
llama_model_load_internal: freq_base  = 10000.0
llama_model_load_internal: freq_scale = 1
llama_model_load_internal: ftype      = 3 (mostly Q4_1)
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size =    0.08 MB
llama_model_load_internal: using CUDA for GPU acceleration
llama_model_load_internal: mem required  =  380.21 MB (+  256.00 MB per state)
llama_model_load_internal: allocating batch_size x (512 kB + n_ctx x 128 B) = 288 MB VRAM for the scratch buffer
llama_model_load_internal: offloading 32 repeating layers to GPU
llama_model_load_internal: offloading non-repeating layers to GPU
llama_model_load_internal: offloading v cache to GPU
llama_model_load_internal: offloading k cache to GPU
llama_model_load_internal: offloaded 35/35 layers to GPU
llama_model_load_internal: total VRAM used: 4508 MB
llama_new_context_with_model: kv self size  =  256.00 MB

system_info: n_threads = 64 / 128 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 |
sampling: repeat_last_n = 64, repeat_penalty = 1.100000, presence_penalty = 0.000000, frequency_penalty = 0.000000, top_k = 40, tfs_z = 1.000000, top_p = 0.950000, typical_p = 1.000000, temp = 0.800000, mirostat = 0, mirostat_lr = 0.100000, mirostat_ent = 5.000000
generate: n_ctx = 512, n_batch = 512, n_predict = 128, n_keep = 0


 2014-present (The Greatest Hits)
Guitar, Keyboards, Vocals (1987-present)
Aaron Kamin, born on June 25, 1968, is a founding member of the rock band Live. He plays guitar and keyboards and provides background vocals in concerts when not singing lead vocals for songs where Ed Kowalczyk sings backing vocals (which are rare). His playing can be heard on most of the group's records since its debut album Mental Jewelry, with his strong blues-rock
llama_print_timings:        load time =  1823.33 ms
llama_print_timings:      sample time =    64.23 ms /   128 runs   (    0.50 ms per token,  1992.71 tokens per second)
llama_print_timings: prompt eval time =   128.71 ms /     2 tokens (   64.36 ms per token,    15.54 tokens per second)
llama_print_timings:        eval time =  1673.02 ms /   127 runs   (   13.17 ms per token,    75.91 tokens per second)
llama_print_timings:       total time =  1893.95 ms