cgroups not getting applied on containers launched using nomad-driver-containerd
shishir-a412ed opened this issue
When I launch a container using nomad-driver-containerd and it exceeds its resource limits, cgroups are not applied and the container doesn't get OOM killed. To compare the docker and nomad-driver-containerd drivers:
stress.nomad
job "stress" {
  datacenters = ["dc1"]

  group "stress-group" {
    task "stress-task" {
      driver = "docker"

      config {
        image = "docker.io/shm32/stress:1.0"
      }

      restart {
        attempts = 5
        delay    = "30s"
      }

      resources {
        cpu    = 500
        memory = 256

        network {
          mbits = 10
        }
      }
    }
  }
}
$ nomad job run stress.nomad
When stress.nomad exceeds its limits (500 MHz of CPU, 256 MB of memory), the container is OOM killed.
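A minimal sketch of the resource translation involved, using the values from the stress.nomad job above. The mapping shown (Nomad MB to memory.limit_in_bytes, Nomad MHz to cpu.shares) follows the conventional cgroup v1 mapping used by the docker driver; the helper name `nomad_to_cgroup_v1` is illustrative, not actual Nomad code:

```python
def nomad_to_cgroup_v1(cpu_mhz: int, memory_mb: int) -> dict:
    """Translate a Nomad resources stanza into raw cgroup v1 values
    (illustrative helper; assumes the conventional docker-driver mapping)."""
    return {
        # memory.limit_in_bytes: Nomad's memory unit is MB (MiB),
        # so 256 MB becomes 256 * 1024 * 1024 bytes
        "memory.limit_in_bytes": memory_mb * 1024 * 1024,
        # cpu.shares: a relative weight, so exceeding it leads to
        # throttling under contention rather than a kill
        "cpu.shares": cpu_mhz,
    }

print(nomad_to_cgroup_v1(500, 256))
```

Note that only the memory limit can trigger an OOM kill; the CPU value is a relative share.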
However, when I launch the same job (stress.nomad) using nomad-driver-containerd, it keeps running and never gets OOM killed.
In the case of the docker driver, IIUC, docker itself manages the cgroups for the container.
The question, then, is: how does Nomad manage resource constraints (cgroups) for workloads launched by other drivers, e.g. QEMU, Java, exec, etc.? Does Nomad apply/manage cgroups at the orchestration level?
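Whichever layer ends up owning them, the limits ultimately have to land in the container's OCI runtime spec, which containerd hands to the runtime. A rough sketch of the relevant `linux.resources` fragment for the job above (field names follow the OCI runtime spec; `oci_resources` is an illustrative stand-in, not the driver's actual code):

```python
import json

def oci_resources(cpu_mhz: int, memory_mb: int) -> dict:
    """Build the linux.resources fragment of an OCI runtime spec
    (illustrative; a containerd task driver would set these fields)."""
    return {
        "linux": {
            "resources": {
                # memory limit in bytes; the runtime writes this to the cgroup
                "memory": {"limit": memory_mb * 1024 * 1024},
                # relative CPU weight, mirroring cgroup v1 cpu.shares
                "cpu": {"shares": cpu_mhz},
            }
        }
    }

print(json.dumps(oci_resources(500, 256), indent=2))
```

If the driver never populates this section of the spec, the container runs unconstrained, which would match the behavior reported here.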