`termination-log` is not available when mounting `/dev` in containers
maybe-sybr opened this issue
I've found that kubernetes-sigs/sig-storage-local-static-provisioner attempts to mount /dev from the host into its provisioner container, which causes issues when running the DaemonSet under u7s. Given that there are no reports of this breaking on their issue tracker, I presume it's an unexpected failure case for u7s rather than them doing something wrong (feel free to correct me on this).
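For context, the part of the provisioner DaemonSet that triggers this looks roughly like the following (a paraphrased sketch, not the exact upstream manifest; the volume name is illustrative):

containers:
- name: provisioner
  image: quay.io/external_storage/local-volume-provisioner:v2.3.4
  volumeMounts:
  - name: provisioner-dev    # illustrative name, not necessarily what upstream uses
    mountPath: /dev
volumes:
- name: provisioner-dev
  hostPath:
    path: /dev/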
It's fairly trivial to work around the issue by setting the terminationMessagePath for the container to something outside /dev (the default is /dev/termination-log, which lands inside the mounted /dev), although it's not clear to me whether this would result in termination messages being lost. See the badness.yaml snippet below and its failure events in kubectl describe pod badness:
$ cat badness.yaml
apiVersion: v1
kind: Pod
metadata:
  name: badness
  labels:
    app: badness
spec:
  containers:
  - image: busybox
    name: badness
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: shared-dev
      mountPath: /dev
    #terminationMessagePath: /var/run/termination-log
  volumes:
  - name: shared-dev
    hostPath:
      path: /dev/
$ kubectl apply -f badness.yaml
pod/badness created
$ kubectl describe pod badness
...
Events:
  Type     Reason     Age              From           Message
  ----     ------     ----             ----           -------
  Normal   Scheduled  <unknown>                       Successfully assigned default/user-storage-provisioner-7sn8s to host
  Normal   Pulled     2s (x2 over 2s)  kubelet, host  Container image "quay.io/external_storage/local-volume-provisioner:v2.3.4" already present on machine
  Warning  Failed     2s               kubelet, host  Error: container create failed: open `/home/user/.local/share/usernetes/containers/storage/overlay/6d3e5e577d2174a25f2947faa0af0b5e68af13bedb662eaaf59068f173640091/merged/termination-log`: No such file or directory
  Warning  Failed     2s               kubelet, host  Error: container create failed: open `/home/user/.local/share/usernetes/containers/storage/overlay/273772f44ad6a6e0d55af8b2d3dadb154c734084d4bc82b7f7e4f0cd7e8bda8e/merged/termination-log`: No such file or directory
Uncommenting the terminationMessagePath in badness.yaml makes the pod start happily.
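For reference, when the field is left unset the API server defaults it to /dev/termination-log, and the defaulted value can be read back off the failing pod object (a quick check; assumes the badness pod from above still exists):

$ kubectl get pod badness -o jsonpath='{.spec.containers[0].terminationMessagePath}'
/dev/termination-log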
Is this only with CRI-O? Does it work with containerd?
I've not been able to try with containerd yet. I'll try it when I have a bit of free time and get back to you.
@AkihiroSuda - appears to break under containerd as well.
$ kubectl describe pod
Name:         badness
Namespace:    default
Priority:     0
Node:         host/10.0.42.100
Start Time:   Wed, 16 Sep 2020 10:06:53 +1000
Labels:       app=badness
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"app":"badness"},"name":"badness","namespace":"default"},"spec":{"c...
Status:       Running
IP:           10.88.1.23
Containers:
  badness:
    Container ID:  containerd://23de93c3fb8413c3ed9bf7ae6026962982038cad7288d046c488efd9bd016379
    Image:         busybox
    Image ID:      docker.io/library/busybox@sha256:d366a4665ab44f0648d7a00ae3fae139d55e32f9712c67accd604bb55df9d05a
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      infinity
    State:       Waiting
      Reason:    CrashLoopBackOff
    Last State:  Terminated
      Reason:    StartError
      Message:   failed to create containerd task: OCI runtime create failed: open `/run/.ro966304741/user/1000/usernetes/containerd/io.containerd.runtime.v2.task/k8s.io/23de93c3fb8413c3ed9bf7ae6026962982038cad7288d046c488efd9bd016379/rootfs/termination-log`: No such file or directory: unknown
      Exit Code:  128
      Started:    Thu, 01 Jan 1970 10:00:00 +1000
      Finished:   Wed, 16 Sep 2020 10:07:21 +1000
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /dev from shared-dev (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-dgbbv (ro)
Conditions:
  Type             Status
  Initialized      True
  Ready            False
  ContainersReady  False
  PodScheduled     True
Volumes:
  shared-dev:
    Type:          HostPath (bare host directory volume)
    Path:          /dev/
    HostPathType:
  default-token-dgbbv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-dgbbv
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                From           Message
  ----     ------     ----               ----           -------
  Normal   Scheduled  <unknown>                         Successfully assigned default/badness to host
  Normal   Pulled     34s                kubelet, host  Successfully pulled image "busybox" in 7.553314539s
  Normal   Pulled     30s                kubelet, host  Successfully pulled image "busybox" in 2.673382993s
  Normal   Pulling    16s (x3 over 41s)  kubelet, host  Pulling image "busybox"
  Normal   Created    14s (x3 over 33s)  kubelet, host  Created container badness
  Warning  Failed     14s (x3 over 33s)  kubelet, host  Error: failed to create containerd task: OCI runtime create failed: open `/run/.ro966304741/user/1000/usernetes/containerd/io.containerd.runtime.v2.task/k8s.io/badness/rootfs/termination-log`: No such file or directory: unknown
  Normal   Pulled     14s                kubelet, host  Successfully pulled image "busybox" in 2.409908214s
  Warning  BackOff    2s (x4 over 29s)   kubelet, host  Back-off restarting failed container
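For anyone who needs the provisioner itself to run under u7s in the meantime, the same workaround can be applied without editing the manifest by patching the DaemonSet in place (a sketch; the DaemonSet name, namespace, and container index below are assumptions and may differ in your deployment):

$ kubectl -n kube-system patch daemonset local-volume-provisioner --type=json \
    -p='[{"op": "add", "path": "/spec/template/spec/containers/0/terminationMessagePath", "value": "/var/run/termination-log"}]'

The DaemonSet controller then rolls out replacement pods whose termination message path no longer sits under the mounted /dev.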