couldn't allocate memory due to IOVA exceeding limits of current DMA mask
smahi opened this issue
I am trying to use the core/mayastor addon in microk8s on Ubuntu 23.10, but I am getting the following error:
$ kubectl logs -n mayastor daemonset/mayastor-io-engine
ault config
[2024-01-05T11:40:03.234274145+00:00 INFO io_engine::subsys::config::opts:opts.rs:151] Overriding NVMF_TCP_MAX_QUEUE_DEPTH value to '32'
[2024-01-05T11:40:03.234283575+00:00 INFO io_engine::subsys::config::opts:opts.rs:151] Overriding NVME_QPAIR_CONNECT_ASYNC value to 'true'
[2024-01-05T11:40:03.234290319+00:00 INFO io_engine::subsys::config:mod.rs:216] Applying Mayastor configuration settings
EAL: alloc_pages_on_heap(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask
EAL: alloc_pages_on_heap(): Please try initializing EAL with --iova-mode=pa parameter
EAL: error allocating rte services array
EAL: FATAL: rte_service_init() failed
EAL: rte_service_init() failed
thread 'main' panicked at 'Failed to init EAL', io-engine/src/core/env.rs:628:13
stack backtrace:
0: std::panicking::begin_panic
1: io_engine::core::env::MayastorEnvironment::init
2: io_engine::main
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
$ watch microk8s kubectl get pod -n mayastor
NAME READY STATUS RESTARTS AGE
etcd-operator-mayastor-6879d6b7b9-qvhtr 1/1 Running 3 (21m ago) 49m
mayastor-csi-node-vgzjf 2/2 Running 6 (21m ago) 49m
etcd-gznhhrnhnh 1/1 Running 3 (21m ago) 48m
mayastor-agent-core-7d6c88c8bf-b4ktz 1/1 Running 3 (21m ago) 49m
mayastor-operator-diskpool-566bd95944-5k2wd 1/1 Running 3 (21m ago) 49m
mayastor-api-rest-84d867b977-g5glh 1/1 Running 3 (21m ago) 49m
mayastor-csi-controller-f8b874bd8-qldt9 3/3 Running 9 (21m ago) 49m
mayastor-io-engine-24lsj 0/1 CrashLoopBackOff 24 (36s ago) 52m
$ cat /proc/sys/vm/nr_hugepages
2048
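To make the reservation survive reboots, the hugepages count can also be set through sysctl rather than writing to /proc directly (a sketch, assuming the default 2 MiB hugepage size, so 2048 pages reserve 4 GiB; the filename is an example):

```
# /etc/sysctl.d/10-hugepages.conf
# Reserve 2048 hugepages of the default 2 MiB size (4 GiB total).
vm.nr_hugepages = 2048
```

Apply it without rebooting with `sudo sysctl --system`, then re-check `cat /proc/sys/vm/nr_hugepages`.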
After changing the value of nr_hugepages to 2048, I need to issue the following command:
microk8s kubectl delete pod -n mayastor -l app=io-engine
This forces Kubernetes to create a new pod with the updated configuration. Because the DaemonSet's update strategy is OnDelete, the existing pod does not automatically restart when the configuration changes, so it has to be deleted for the replacement to pick up the new hugepages setting.
You may also need to configure the Mayastor pools using the `microk8s mayastor-pools` command.
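Alternatively, a pool can be declared through the Kubernetes API as a DiskPool resource (a sketch only; the apiVersion, node name, and device path below are assumptions that must match your cluster and the CRD version actually installed):

```
apiVersion: "openebs.io/v1beta2"  # check `kubectl api-resources` for the version your CRD serves
kind: DiskPool
metadata:
  name: pool-on-node-1            # example name
  namespace: mayastor
spec:
  node: node-1                    # must match the Kubernetes node name running io-engine
  disks: ["/dev/sdb"]             # example device; use a dedicated, empty disk
```

Apply it with `microk8s kubectl apply -f pool.yaml` and verify with `microk8s kubectl get diskpools -n mayastor`.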