openebs / zfs-localpv

Dynamically provision stateful, persistent, node-local volumes and filesystems for Kubernetes, integrated with a backend ZFS data storage stack.

Home Page: https://openebs.io


Microk8s - MountVolume.SetUp failed

fornof opened this issue · comments

What steps did you take and what happened:

Issue #235 is about what I'm running into, but I didn't see how to solve it in that ticket:

```
Warning  FailedMount  86s (x12 over 9m40s)  kubelet  MountVolume.SetUp failed for volume "pvc-3da01b7f-3eb3-427d-943d-42758527c8f2" : applyFSGroup failed for vol pvc-3da01b7f-3eb3-427d-943d-42758527c8f2: lstat /var/snap/microk8s/common/var/lib/kubelet/pods/b803ceff-d82e-4fe2-be34-bb05f6bee84d/volumes/kubernetes.io~csi/pvc-3da01b7f-3eb3-427d-943d-42758527c8f2/mount: no such file or directory
```

I tried again using a deployment mount and I get:

persistent volumes:

```
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                              STORAGECLASS           VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-2e49cd25-4b11-4618-817f-3095de775145   20Gi       RWO            Delete           Released   minio-operator/0-microk8s-ss-0-0   microk8s-hostpath      <unset>                          46d
pvc-3a97c089-e659-4d9e-91a6-62bae21f89f4   76Gi       RWO            Delete           Bound      postgres/postgres-fornof-zfs-pvc   openebs-zfspv-fornof   <unset>                          11h
pvc-3da01b7f-3eb3-427d-943d-42758527c8f2   76Gi       RWO            Delete           Bound      db/postgres-fornof-zfs-pvc         openebs-zfspv-fornof   <unset>                          29m
```

persistent volume claims:

```json
{
  "apiVersion": "v1",
  "items": [
    {
      "apiVersion": "v1",
      "kind": "PersistentVolumeClaim",
      "metadata": {
        "creationTimestamp": "2024-06-22T16:59:42Z",
        "finalizers": [
          "kubernetes.io/pvc-protection"
        ],
        "labels": {
          "app.kubernetes.io/component": "primary",
          "app.kubernetes.io/instance": "postgres-bitnami",
          "app.kubernetes.io/name": "postgresql"
        },
        "name": "data-postgres-bitnami-postgresql-0",
        "namespace": "db",
        "resourceVersion": "17057302",
        "uid": "9a5bb71b-76d0-4a91-a84b-78eb5f9790ee"
      },
      "spec": {
        "accessModes": [
          "ReadWriteOnce"
        ],
        "resources": {
          "requests": {
            "storage": "8Gi"
          }
        },
        "volumeMode": "Filesystem"
      },
      "status": {
        "phase": "Pending"
      }
    },
    {
      "apiVersion": "v1",
      "kind": "PersistentVolumeClaim",
      "metadata": {
        "annotations": {
          "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"PersistentVolumeClaim\",\"metadata\":{\"annotations\":{},\"name\":\"postgres-fornof-zfs-pvc\",\"namespace\":\"db\"},\"spec\":{\"accessModes\":[\"ReadWriteOnce\"],\"resources\":{\"requests\":{\"storage\":\"76Gi\"}},\"storageClassName\":\"openebs-zfspv-fornof\"}}\n",
          "pv.kubernetes.io/bind-completed": "yes",
          "pv.kubernetes.io/bound-by-controller": "yes",
          "volume.beta.kubernetes.io/storage-provisioner": "zfs.csi.openebs.io",
          "volume.kubernetes.io/storage-provisioner": "zfs.csi.openebs.io"
        },
        "creationTimestamp": "2024-06-22T17:14:01Z",
        "finalizers": [
          "kubernetes.io/pvc-protection"
        ],
        "name": "postgres-fornof-zfs-pvc",
        "namespace": "db",
        "resourceVersion": "17059608",
        "uid": "3da01b7f-3eb3-427d-943d-42758527c8f2"
      },
      "spec": {
        "accessModes": [
          "ReadWriteOnce"
        ],
        "resources": {
          "requests": {
            "storage": "76Gi"
          }
        },
        "storageClassName": "openebs-zfspv-fornof",
        "volumeMode": "Filesystem",
        "volumeName": "pvc-3da01b7f-3eb3-427d-943d-42758527c8f2"
      },
      "status": {
        "accessModes": [
          "ReadWriteOnce"
        ],
        "capacity": {
          "storage": "76Gi"
        },
        "phase": "Bound"
      }
    }
  ],
  "kind": "List",
  "metadata": {
    "resourceVersion": ""
  }
}
```


```
NAME                                 STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS           VOLUMEATTRIBUTESCLASS   AGE
data-postgres-bitnami-postgresql-0   Pending                                                                                                                       44m
postgres-fornof-zfs-pvc              Bound     pvc-3da01b7f-3eb3-427d-943d-42758527c8f2   76Gi       RWO            openebs-zfspv-fornof                           30m
```


storage classes:

```
NAME                   PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-hostpath       openebs.io/local     Delete          WaitForFirstConsumer   false                  77m
openebs-zfspv-fornof   zfs.csi.openebs.io   Delete          Immediate              false                  15h
```

openebs pods:

```
NAME                                             READY   STATUS    RESTARTS   AGE
openebs-localpv-provisioner-7cd9f85f8f-6d7cp     1/1     Running   0          76m
openebs-zfs-localpv-controller-7fdcd7f65-74l55   5/5     Running   0          76m
openebs-zfs-localpv-node-6dn8t                   2/2     Running   0          76m
```

```
rob@media:/mnt$ zfs list
NAME                                                    USED  AVAIL     REFER  MOUNTPOINT
robin-makeup                                            299M  1.76T      283M  /mnt/robin-makeup-zfs
robin-makeup/pvc-3da01b7f-3eb3-427d-943d-42758527c8f2    24K  76.0G       24K  legacy
```
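A note on the `legacy` MOUNTPOINT in that listing: LocalPV-ZFS creates datasets with `mountpoint=legacy` on purpose, so the CSI node plugin (not the ZFS mount service) mounts the dataset into the kubelet directory. A rough manual check, using the dataset name from the listing above (these commands assume shell access to the `media` node and may need `sudo`):

```shell
# The dataset should report "legacy" -- the CSI node plugin is expected
# to mount it itself during NodePublishVolume.
zfs get -H -o value mountpoint robin-makeup/pvc-3da01b7f-3eb3-427d-943d-42758527c8f2

# Check whether kubelet actually has the dataset mounted anywhere yet.
mount | grep pvc-3da01b7f || echo "not mounted"
```

If nothing is mounted, the dataset itself is fine and the failure is on the kubelet/CSI side, which matches the `lstat ... no such file or directory` event above.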
**What did you expect to happen:**
expected volume to mount

**The output of the following commands will help us better understand what's going on**:
(Pasting long output into a [GitHub gist](https://gist.github.com) or other [Pastebin](https://pastebin.com/) is fine.)

* `kubectl logs -f openebs-zfs-controller-f78f7467c-blr7q -n openebs -c openebs-zfs-plugin`
  https://nopaste.net/B3aBBGpU1q
* `kubectl logs -f openebs-zfs-node-[xxxx] -n openebs -c openebs-zfs-plugin`
  https://nopaste.net/yjl29wBl0F
* `kubectl get pods -n openebs`

```
NAME                                             READY   STATUS    RESTARTS   AGE
openebs-localpv-provisioner-7cd9f85f8f-6d7cp     1/1     Running   0          88m
openebs-zfs-localpv-controller-7fdcd7f65-74l55   5/5     Running   0          88m
openebs-zfs-localpv-node-6dn8t                   2/2     Running   0          88m
```

* `kubectl get zv -A -o yaml`

```yaml
apiVersion: v1
items:
- apiVersion: zfs.openebs.io/v1
  kind: ZFSVolume
  metadata:
    creationTimestamp: "2024-06-22T17:14:01Z"
    finalizers:
    - zfs.openebs.io/finalizer
    generation: 2
    labels:
      kubernetes.io/nodename: media
    name: pvc-3da01b7f-3eb3-427d-943d-42758527c8f2
    namespace: openebs
    resourceVersion: "17059601"
    uid: 685593db-552a-4daa-838b-4b8137a30c1a
  spec:
    capacity: "81604378624"
    compression: "off"
    dedup: "off"
    fsType: zfs
    ownerNodeID: media
    poolName: robin-makeup
    recordsize: 128k
    volumeType: DATASET
  status:
    state: Ready
kind: List
metadata:
  resourceVersion: ""
```
**Anything else you would like to add:**
I'm trying to get postgresql up and going on kubernetes and having a hard time mounting the persistent volume. 
I'm using the Bitnami PostgreSQL chart with persistence enabled, pointing it at the same volume claim in the Helm chart values.
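For context, the relevant chart values would look roughly like this (a sketch only: the `primary.persistence.*` keys are from the Bitnami PostgreSQL chart, and the claim/class names are the ones appearing elsewhere in this issue):

```yaml
# Sketch of Bitnami PostgreSQL values reusing the already-bound PVC.
primary:
  persistence:
    enabled: true
    existingClaim: postgres-fornof-zfs-pvc   # the PVC shown as Bound above
    storageClass: openebs-zfspv-fornof
    size: 76Gi
```

As I understand the chart, when `existingClaim` is set it reuses that PVC instead of templating a new one, so `storageClass` and `size` mainly matter when the chart creates the claim itself.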

**Environment:**
- LocalPV-ZFS version: openebs.io/version=4.0.0

- Kubernetes version (use `kubectl version`):

```
Client Version: v1.30.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.1
```

- Kubernetes installer & version: MicroK8s v1.30.1 revision 6876

- Cloud provider or hardware configuration: bare metal, 2 TB mirrored ZFS storage pool.
- OS (e.g. from `/etc/os-release`):
```
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
```

@fornof Did you change the kubelet dir for MicroK8s while installing the ZFS driver?

```
--set zfs-localpv.zfsNode.kubeletDir=/var/snap/microk8s/common/var/lib/kubelet/
```
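For anyone else hitting this on MicroK8s: the kubelet root lives under the snap path, so the node plugin has to be installed (or upgraded) with that directory. A sketch of how the flag above might be applied, assuming the driver was installed as Helm release `openebs` from the `openebs/openebs` umbrella chart (adjust release and chart names to match your install):

```shell
# Point the ZFS node plugin at MicroK8s's kubelet directory.
helm upgrade --install openebs openebs/openebs -n openebs \
  --set zfs-localpv.zfsNode.kubeletDir=/var/snap/microk8s/common/var/lib/kubelet/

# Verify the node DaemonSet picked up the snap path (DaemonSet name taken
# from the pod listing earlier in this issue).
kubectl -n openebs get ds openebs-zfs-localpv-node -o yaml | grep microk8s
```

If the driver was originally installed with the default `/var/lib/kubelet`, the node pods must roll after the upgrade so the CSI socket and mount paths line up with what kubelet is probing.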