kubernetes-csi / csi-driver-smb

This driver allows Kubernetes to access SMB Server on both Linux and Windows nodes.

MountVolume.NodeExpandVolume failed error for volume declared as read-only file system

Zombro opened this issue

What happened:

Mounting an SMB filesystem declared as read-only in .spec.template.spec.volumes[*] triggers an error in the kubelet logs. Scheduling, deployment, and the mounted filesystem all appear to work, but this event fires:

MountVolume.NodeExpandVolume failed for volume "smb-config" requested read-only file system

This error / event does not fire if .spec.template.spec.volumes[*].readOnly is omitted.
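For clarity, the flag in question is the pod-level one on .spec.template.spec.volumes[*], not the container-level volumeMounts[*].readOnly. A minimal sketch of the two locations (names match the manifest in the reproduction below; the container name here is just for illustration):

spec:
  volumes:
    - name: smb-config
      persistentVolumeClaim:
        claimName: smb-config
        readOnly: true        # pod-level flag: enabling this triggers the event
  containers:
    - name: app               # hypothetical container name for illustration
      image: nginx
      volumeMounts:
        - name: smb-config
          readOnly: true      # container-level flag: this alone does not fire the event
          mountPath: /config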

What you expected to happen:

No errors reported.

How to reproduce it:

Deploy a simple test workload like the one below. As presented, it works without errors and the mounted filesystem is read-only as expected.

Note that .spec.template.spec.volumes[0].persistentVolumeClaim.readOnly is commented out. When it is enabled (as in the sketch above), the mentioned error / event fires, but the workload still functions.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: smb-ro-mount
spec:
  replicas: 1
  selector:
    matchLabels:
      app: smb-ro-mount
  template:
    metadata:
      labels:
        app: smb-ro-mount
    spec:
      volumes:
        - name: smb-config
          persistentVolumeClaim:
            claimName: smb-config
            # readOnly: true
      containers:
        - name: smb-ro-mount-example
          image: nginx
          volumeMounts:
            - name: smb-config
              readOnly: true
              mountPath: /config
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: smb-config
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 10Mi
  volumeName: smb-config
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: smb-config
spec:
  capacity:
    storage: 10Mi
  csi:
    driver: smb2.csi.k8s.io
    volumeHandle: smb-config-a1b2c3
    fsType: ext4
    volumeAttributes:
      createSubDir: "true"
      source: \\smbtest.x.net\K8S\config-demo
    nodeStageSecretRef:
      name: smb-demo-creds  # Secret names must be lowercase (RFC 1123 subdomain)
      namespace: default
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - dir_mode=0555
    - file_mode=0444
    - vers=3.0
  volumeMode: Filesystem

Environment:

  • CSI Driver version: Helm chart v1.13.0, image: registry.k8s.io/sig-storage/smbplugin:v1.13.0
  • Kubernetes version: 1.28
  • OS: Windows Server 2022 & Ubuntu 22.04.3
  • Kernel(s): 10.0.20348.2159 & 5.15.0-75-generic
  • Install tools: Helm

Parting Thoughts

Maybe this isn't an issue with csi-driver-smb directly, but rather with how kubelet couples CSI volume operations to mounts. It would be nice if the documentation pointed out this behavior somewhere.
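If the root cause is that coupling, one untested idea (an assumption on my part, not anything the driver documents) is to declare read-only on the PV itself via the spec.csi.readOnly field instead of, or in addition to, the pod-level flag. Whether that suppresses the NodeExpandVolume event is speculation:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: smb-config
spec:
  capacity:
    storage: 10Mi
  csi:
    driver: smb2.csi.k8s.io
    volumeHandle: smb-config-a1b2c3
    readOnly: true                # assumption: mark the volume read-only at the CSI source
    volumeAttributes:
      createSubDir: "true"
      source: \\smbtest.x.net\K8S\config-demo
    nodeStageSecretRef:
      name: smb-demo-creds
      namespace: default
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - dir_mode=0555
    - file_mode=0444
    - vers=3.0
  volumeMode: Filesystem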

Why is MountVolume.NodeExpandVolume triggered? Have you expanded a PVC or PV?

No, I have not expanded anything.

I am also seeing this in some of my filestore logs. The only behavior I notice is that the filestore works fine most of the time, but sporadically I have mounting issues that leave pods stuck in an init stage. I am wondering if the two are connected (it doesn't seem so), but I am curious why these logs pop up.