MTL NPU Error Output
TabNahida opened this issue
Describe the bug
The portable llama.cpp CLI for NPU does not work: llama-cli-npu runs, but the generated text is garbled (see the session log below).
Screenshots
llama-cpp-ipex-llm-2.2.0b20250313-win-npu.zip was extracted to C:\Project\CPP\AI\llama.cpp\bin-npu
PS C:\Project\CPP\AI\llama.cpp\bin-npu> conda activate llm-npu
(llm-npu) PS C:\Project\CPP\AI\llama.cpp\bin-npu> set IPEX_LLM_NPU_MTL=1
(llm-npu) PS C:\Project\CPP\AI\llama.cpp\bin-npu> init-llama-cpp.bat
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\cache.json <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\intel_npu_acceleration_library\lib\Release\cache.json
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\intel_npu_acceleration_library.dll <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\intel_npu_acceleration_library\lib\Release\intel_npu_acceleration_library.dll
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\openvino.dll <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\intel_npu_acceleration_library\lib\Release\openvino.dll
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\openvino_auto_batch_plugin.dll <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\intel_npu_acceleration_library\lib\Release\openvino_auto_batch_plugin.dll
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\openvino_auto_plugin.dll <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\intel_npu_acceleration_library\lib\Release\openvino_auto_plugin.dll
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\openvino_c.dll <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\intel_npu_acceleration_library\lib\Release\openvino_c.dll
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\openvino_hetero_plugin.dll <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\intel_npu_acceleration_library\lib\Release\openvino_hetero_plugin.dll
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\openvino_intel_cpu_plugin.dll <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\intel_npu_acceleration_library\lib\Release\openvino_intel_cpu_plugin.dll
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\openvino_intel_gpu_plugin.dll <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\intel_npu_acceleration_library\lib\Release\openvino_intel_gpu_plugin.dll
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\openvino_intel_npu_plugin.dll <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\intel_npu_acceleration_library\lib\Release\openvino_intel_npu_plugin.dll
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\openvino_ir_frontend.dll <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\intel_npu_acceleration_library\lib\Release\openvino_ir_frontend.dll
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\openvino_onnx_frontend.dll <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\intel_npu_acceleration_library\lib\Release\openvino_onnx_frontend.dll
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\openvino_paddle_frontend.dll <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\intel_npu_acceleration_library\lib\Release\openvino_paddle_frontend.dll
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\openvino_pytorch_frontend.dll <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\intel_npu_acceleration_library\lib\Release\openvino_pytorch_frontend.dll
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\openvino_tensorflow_frontend.dll <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\intel_npu_acceleration_library\lib\Release\openvino_tensorflow_frontend.dll
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\openvino_tensorflow_lite_frontend.dll <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\intel_npu_acceleration_library\lib\Release\openvino_tensorflow_lite_frontend.dll
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\tbb12.dll <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\intel_npu_acceleration_library\lib\Release\tbb12.dll
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\tbb12_debug.dll <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\intel_npu_acceleration_library\lib\Release\tbb12_debug.dll
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\tbbbind_2_5.dll <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\intel_npu_acceleration_library\lib\Release\tbbbind_2_5.dll
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\tbbbind_2_5_debug.dll <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\intel_npu_acceleration_library\lib\Release\tbbbind_2_5_debug.dll
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\tbbmalloc.dll <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\intel_npu_acceleration_library\lib\Release\tbbmalloc.dll
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\tbbmalloc_debug.dll <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\intel_npu_acceleration_library\lib\Release\tbbmalloc_debug.dll
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\tbbmalloc_proxy.dll <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\intel_npu_acceleration_library\lib\Release\tbbmalloc_proxy.dll
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\tbbmalloc_proxy_debug.dll <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\intel_npu_acceleration_library\lib\Release\tbbmalloc_proxy_debug.dll
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\common.lib <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\bigdl-core-npu\common.lib
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\ggml.dll <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\bigdl-core-npu\ggml.dll
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\ggml.lib <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\bigdl-core-npu\ggml.lib
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\llama.dll <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\bigdl-core-npu\llama.dll
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\llama.lib <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\bigdl-core-npu\llama.lib
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\llm-cli.exe <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\bigdl-core-npu\llm-cli.exe
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\npu_llm.dll <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\bigdl-core-npu\npu_llm.dll
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\npu_llm.lib <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\bigdl-core-npu\npu_llm.lib
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\zlib1.dll <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\bigdl-core-npu\zlib1.dll
symbolic link created for C:\Project\CPP\AI\llama.cpp\bin-npu\__init__.py <<===>> C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages\bigdl-core-npu\__init__.py
1 file(s) copied.
(llm-npu) PS C:\Project\CPP\AI\llama.cpp\bin-npu> ./llama-cli-npu -m C:\Data\AI\LLM\Models-GGUF\DeepSeek-R1-Distill-Qwen-7B-Q6_K.gguf -n 32 --prompt "What is AI?"
build: 1 (3ac676a) with MSVC 19.39.33519.0 for x64
llama_model_loader: loaded meta data with 27 key-value pairs and 339 tensors from C:\Data\AI\LLM\Models-GGUF\DeepSeek-R1-Distill-Qwen-7B-Q6_K.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Qwen 7B
llama_model_loader: - kv 3: general.organization str = Deepseek Ai
llama_model_loader: - kv 4: general.basename str = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv 5: general.size_label str = 7B
llama_model_loader: - kv 6: qwen2.block_count u32 = 28
llama_model_loader: - kv 7: qwen2.context_length u32 = 131072
llama_model_loader: - kv 8: qwen2.embedding_length u32 = 3584
llama_model_loader: - kv 9: qwen2.feed_forward_length u32 = 18944
llama_model_loader: - kv 10: qwen2.attention.head_count u32 = 28
llama_model_loader: - kv 11: qwen2.attention.head_count_kv u32 = 4
llama_model_loader: - kv 12: qwen2.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 13: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 14: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 15: tokenizer.ggml.pre str = deepseek-r1-qwen
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 151646
llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151643
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151654
llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 24: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 25: general.quantization_version u32 = 2
llama_model_loader: - kv 26: general.file_type u32 = 18
llama_model_loader: - type f32: 141 tensors
llama_model_loader: - type q6_K: 198 tensors
llm_load_vocab: missing or unrecognized pre-tokenizer type, using: 'default'
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 152064
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 3584
llm_load_print_meta: n_layer = 28
llm_load_print_meta: n_head = 28
llm_load_print_meta: n_head_kv = 4
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 7
llm_load_print_meta: n_embd_k_gqa = 512
llm_load_print_meta: n_embd_v_gqa = 512
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 18944
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = Q6_K
llm_load_print_meta: model params = 7.62 B
llm_load_print_meta: model size = 5.82 GiB (6.56 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Qwen 7B
llm_load_print_meta: BOS token = 151646 '<｜begin▁of▁sentence｜>'
llm_load_print_meta: EOS token = 151643 '<｜end▁of▁sentence｜>'
llm_load_print_meta: PAD token = 151654 '<|vision_pad|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOG token = 151643 '<｜end▁of▁sentence｜>'
llm_load_print_meta: max token length = 256
llm_load_tensors: ggml ctx size = 0.15 MiB
llm_load_tensors: CPU buffer size = 5958.79 MiB
........................................................................................
Directory created: "C:\\Project\\CPP\\AI\\llama.cpp\\bin-npu\\NPU_models\\qwen2-28-3584-152064-Q4_0"
Directory created: "C:\\Project\\CPP\\AI\\llama.cpp\\bin-npu\\NPU_models\\qwen2-28-3584-152064-Q4_0\\model_weights"
Converting GGUF model to Q4_0 NPU model...
Model weights saved to C:\Project\CPP\AI\llama.cpp\bin-npu\NPU_models\qwen2-28-3584-152064-Q4_0\model_weights
llama_model_loader: loaded meta data with 27 key-value pairs and 339 tensors from C:\Data\AI\LLM\Models-GGUF\DeepSeek-R1-Distill-Qwen-7B-Q6_K.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Qwen 7B
llama_model_loader: - kv 3: general.organization str = Deepseek Ai
llama_model_loader: - kv 4: general.basename str = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv 5: general.size_label str = 7B
llama_model_loader: - kv 6: qwen2.block_count u32 = 28
llama_model_loader: - kv 7: qwen2.context_length u32 = 131072
llama_model_loader: - kv 8: qwen2.embedding_length u32 = 3584
llama_model_loader: - kv 9: qwen2.feed_forward_length u32 = 18944
llama_model_loader: - kv 10: qwen2.attention.head_count u32 = 28
llama_model_loader: - kv 11: qwen2.attention.head_count_kv u32 = 4
llama_model_loader: - kv 12: qwen2.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 13: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 14: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 15: tokenizer.ggml.pre str = deepseek-r1-qwen
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 151646
llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151643
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151654
llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 24: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 25: general.quantization_version u32 = 2
llama_model_loader: - kv 26: general.file_type u32 = 18
llama_model_loader: - type f32: 141 tensors
llama_model_loader: - type q6_K: 198 tensors
llm_load_vocab: missing or unrecognized pre-tokenizer type, using: 'default'
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 152064
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 7.62 B
llm_load_print_meta: model size = 5.82 GiB (6.56 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Qwen 7B
llm_load_print_meta: BOS token = 151646 '<｜begin▁of▁sentence｜>'
llm_load_print_meta: EOS token = 151643 '<｜end▁of▁sentence｜>'
llm_load_print_meta: PAD token = 151654 '<|vision_pad|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOG token = 151643 '<｜end▁of▁sentence｜>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
Model saved to C:\Project\CPP\AI\llama.cpp\bin-npu\NPU_models\qwen2-28-3584-152064-Q4_0//decoder_layer_0.blob
Model saved to C:\Project\CPP\AI\llama.cpp\bin-npu\NPU_models\qwen2-28-3584-152064-Q4_0//decoder_layer_1.blob
llama_new_context_with_model: n_ctx = 1024
llama_new_context_with_model: n_batch = 1024
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 0.0
llama_new_context_with_model: freq_scale = 1
锘縰sing 1 2 3 喙傕笡喔`箒 2喙傕笡喔`箒<think>氐賳丿 1 2喙傕笡喔`箒 "crypto "sync
llm_perf_print: load time = 45635.00 ms
llm_perf_print: prompt eval time = 6463.00 ms / 7 tokens ( 923.29 ms per token, 1.08 tokens per second)
llm_perf_print: eval time = 3049.00 ms / 31 runs ( 98.35 ms per token, 10.17 tokens per second)
llm_perf_print: total time = 55190.00 ms / 38 tokens
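One detail worth ruling out in the session above: in PowerShell, "set IPEX_LLM_NPU_MTL=1" resolves to the Set-Variable alias and creates a shell-local variable rather than an environment variable, so the MTL flag may never reach llama-cli-npu. A minimal sketch of the PowerShell-native equivalent (a hypothetical session; same paths and conda environment assumed):

(llm-npu) PS C:\Project\CPP\AI\llama.cpp\bin-npu> $env:IPEX_LLM_NPU_MTL = "1"    # exported to child processes
(llm-npu) PS C:\Project\CPP\AI\llama.cpp\bin-npu> $env:IPEX_LLM_NPU_MTL         # sanity check: should print 1
1
(llm-npu) PS C:\Project\CPP\AI\llama.cpp\bin-npu> ./llama-cli-npu -m C:\Data\AI\LLM\Models-GGUF\DeepSeek-R1-Distill-Qwen-7B-Q6_K.gguf -n 32 --prompt "What is AI?"

Alternatively, running the original commands from cmd.exe would make the "set" line behave as intended.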
Environment information
Python 3.11.11
transformers=4.45.0
torch=2.1.2+cpu
Name: ipex-llm
Version: 2.2.0b20250321
Summary: Large Language Model Develop Toolkit
Home-page: https://github.com/intel-analytics/ipex-llm
Author: BigDL Authors
Author-email: bigdl-user-group@googlegroups.com
License: Apache License, Version 2.0
Location: C:\Users\TabYe\.conda\envs\llm-npu\Lib\site-packages
Requires:
Required-by:
IPEX is not installed properly.
Total Memory: 31.466 GB
Chip 0 Memory: 4 GB | Speed: 8533 MHz
Chip 1 Memory: 4 GB | Speed: 8533 MHz
Chip 2 Memory: 4 GB | Speed: 8533 MHz
Chip 3 Memory: 4 GB | Speed: 8533 MHz
Chip 4 Memory: 4 GB | Speed: 8533 MHz
Chip 5 Memory: 4 GB | Speed: 8533 MHz
Chip 6 Memory: 4 GB | Speed: 8533 MHz
Chip 7 Memory: 4 GB | Speed: 8533 MHz
CPU Manufacturer: GenuineIntel
CPU MaxClockSpeed: 1200
CPU Name: Intel(R) Core(TM) Ultra 5 125H
CPU NumberOfCores: 14
CPU NumberOfLogicalProcessors: 18
GPU 0: Intel(R) Arc(TM) Graphics Driver Version: 32.0.101.6647
System Information
Host Name: TABREDMI
OS Name: Microsoft Windows 11 Home China
OS Version: 10.0.22631 N/A Build 22631
OS Manufacturer: Microsoft Corporation
OS Configuration: Standalone Workstation
OS Build Type: Multiprocessor Free
Registered Owner: TabYe320@outlook.com
Registered Organization: N/A
Product ID: 00342-31531-05585-AAOEM
Original Install Date: 25/2/2024, 12:41:28 PM
System Boot Time: 24/3/2025, 9:10:32 AM
System Manufacturer: XIAOMI
System Model: Redmi Book Pro 14 2024
System Type: x64-based PC
Processor(s): 1 Processor(s) Installed.
[01]: Intel64 Family 6 Model 170 Stepping 4 GenuineIntel ~1200 Mhz
BIOS Version: XIAOMI RMAMT4B0P0A0A, 4/6/2024
Windows Directory: C:\Windows
System Directory: C:\Windows\system32
Boot Device: \Device\HarddiskVolume1
System Locale: zh-cn; Chinese (**)
Input Locale: zh-cn; Chinese (**)
Time Zone: (UTC+08:00) Beijing, Chongqing, Hong Kong SAR, Urumqi
Total Physical Memory: 32,221 MB
Available Physical Memory: 13,206 MB
Virtual Memory: Max Size: 35,720 MB
Virtual Memory: Available: 13,087 MB
Virtual Memory: In Use: 22,633 MB
Page File Location(s): C:\pagefile.sys
Domain: WORKGROUP
Logon Server: \\TABREDMI
Hotfix(s): 5 Hotfix(s) Installed.
[01]: KB5049624
[02]: KB5027397
[03]: KB5033055
[04]: KB5053602
[05]: KB5052107
Network Card(s): 3 NIC(s) Installed.
[01]: Remote NDIS Compatible Device
Connection Name: Ethernet 2
DHCP Enabled: Yes
DHCP Server: 192.168.215.116
IP address(es)
[01]: 192.168.215.107
[02]: fe80::d606:248d:8f84:4f17
[03]: 240e:430:2a41:daa2:75eb:cb95:cb46:422c
[04]: 240e:430:2a41:daa2:bdbd:bbe9:cdb3:a76
[02]: Intel(R) Wi-Fi 6E AX211 160MHz
Connection Name: WLAN
Status: Media disconnected
[03]: Bluetooth Device (Personal Area Network)
Connection Name: Bluetooth Network Connection
Status: Media disconnected
Hyper-V Requirements: A hypervisor has been detected. Features required for Hyper-V will not be displayed.
'xpu-smi' is not recognized as an internal or external command, operable program or batch file.
xpu-smi is not installed properly.