topolvm / topolvm

Capacity-aware CSI plugin for Kubernetes

Can't take a volume snapshot

simonebenati opened this issue

Describe the bug
As the title states, I'm not able to take a volume snapshot using the VolumeSnapshotClass provided in the documentation.

Environments

  • Kubernetes version: 1.25.12+rke2r1
  • OS: Ubuntu 20

To Reproduce
Steps to reproduce the behavior:

  1. Download k10tools, available for free here: https://docs.kasten.io/latest/operating/k10tools.html -> https://github.com/kastenhq/external-tools/releases
  2. Create a file named csi_check.yaml with the following content:

    run_as_user: 1000
    storage_class: topolvm

  3. Run the following command:
    ./k10tools primer storage check csi -f csi_check.yaml
    It uses kubestr (https://kubestr.io/) to determine whether the CSI driver can take snapshots; a manual equivalent with kubectl is sketched below.
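
For reference, the same failure can be reproduced without k10tools by creating a PVC on the topolvm storage class and snapshotting it directly with kubectl. A minimal sketch (resource names are hypothetical; the snapshot class name csi-topolvm-snapclass is taken from the log below):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: snap-test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: topolvm
  resources:
    requests:
      storage: 1Gi
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: snap-test
spec:
  volumeSnapshotClassName: csi-topolvm-snapclass
  source:
    persistentVolumeClaimName: snap-test-pvc

Note that with volumeBindingMode: WaitForFirstConsumer (as configured in the values below), the PVC only binds once a pod consumes it, so create the snapshot after a pod has mounted the PVC.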

Expected behavior
Be able to take a snapshot

Additional context
This is the error from k10tools:

./k10tools primer storage check csi -f csi_check.yaml
Using "csi_check.yaml" file content as config source
         Found multiple snapshot API group versions, using preferred.
I1005 15:55:36.920115   47398 request.go:690] Waited for 1.008984708s due to client-side throttling, not priority and fairness, request: GET:https://rancher.infrastructure.k8s.oncloudfire.it/k8s/clusters/c-m-pwvq8c99/apis/coordination.k8s.io/v1
Creating application
  -> Created pod (kubestr-csi-original-podbdtph) and pvc (kubestr-csi-original-pvcvmdfd)
Taking a snapshot
Cleaning up resources
CSI Snapshot Walkthrough:
  Using annotated VolumeSnapshotClass (csi-topolvm-snapclass)
  Failed to create Snapshot: CSI Driver failed to create snapshot for PVC (kubestr-csi-original-pvcvmdfd) in Namespace (default): Failed to check and update snapshot content: failed to take snapshot of the volume c5e1307a-6d75-440c-a52e-2e58dfcc1f5b: "rpc error: code = Internal desc = logicalvolumes.topolvm.io \"snapshot-82477a8b-12cc-466e-bc0b-f65c4f195c01\" not found"  -  Error
Error: {"message":"Failed to create Snapshot: CSI Driver failed to create snapshot for PVC (kubestr-csi-original-pvcvmdfd) in Namespace (default): Failed to check and update snapshot content: failed to take snapshot of the volume c5e1307a-6d75-440c-a52e-2e58dfcc1f5b: \"rpc error: code = Internal desc = logicalvolumes.topolvm.io \\\"snapshot-82477a8b-12cc-466e-bc0b-f65c4f195c01\\\" not found\"","function":"kasten.io/k10/kio/tools/k10primer.(*TestRetVal).Errors","linenumber":172,"file":"kasten.io/k10/kio/tools/k10primer/k10primer.go:172"}

Could you provide the following information?

  • Which TopoLVM version is used?
  • How did you install it? Helm or something else? If you used Helm, please show me your values.yaml.
  • LV-related information: show me the vgs and lvs outputs.
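
For reference, this information can be gathered with something like the following (release and namespace names are placeholders):

helm list -A
helm get values <release-name> -n <namespace>
vgs
lvs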

Hello @toshipp :)

The TopoLVM version is:

helm.sh/chart: topolvm-11.3.0
app.kubernetes.io/name: topolvm
app.kubernetes.io/instance: helm-chart
app.kubernetes.io/version: "0.19.1"

We used Helm; the values are the following:

useLegacy: false

image:
  # image.repository -- TopoLVM image repository to use.
  repository: ghcr.io/topolvm/topolvm-with-sidecar

  # image.tag -- TopoLVM image tag to use.
  # @default -- `{{ .Chart.AppVersion }}`
  tag:  # 0.18.1

  # image.pullPolicy -- TopoLVM image pullPolicy.
  pullPolicy:  # Always

  # image.pullSecrets -- List of imagePullSecrets.
  pullSecrets: []

  csi:
    # image.csi.nodeDriverRegistrar -- Specify csi-node-driver-registrar: image.
    # If not specified, `ghcr.io/topolvm/topolvm-with-sidecar:{{ .Values.image.tag }}` will be used.
    nodeDriverRegistrar:  # registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.2.0

    # image.csi.csiProvisioner -- Specify csi-provisioner image.
    # If not specified, `ghcr.io/topolvm/topolvm-with-sidecar:{{ .Values.image.tag }}` will be used.
    csiProvisioner:  # registry.k8s.io/sig-storage/csi-provisioner:v2.2.1

    # image.csi.csiResizer -- Specify csi-resizer image.
    # If not specified, `ghcr.io/topolvm/topolvm-with-sidecar:{{ .Values.image.tag }}` will be used.
    csiResizer:  # registry.k8s.io/sig-storage/csi-resizer:v1.2.0

    # image.csi.csiSnapshotter -- Specify csi-snapshot image.
    # If not specified, `ghcr.io/topolvm/topolvm-with-sidecar:{{ .Values.image.tag }}` will be used.
    csiSnapshotter:  # registry.k8s.io/sig-storage/csi-snapshotter:v5.0.1

    # image.csi.livenessProbe -- Specify livenessprobe image.
    # If not specified, `ghcr.io/topolvm/topolvm-with-sidecar:{{ .Values.image.tag }}` will be used.
    livenessProbe:  # registry.k8s.io/sig-storage/livenessprobe:v2.3.0

# A scheduler extender for TopoLVM
scheduler:
  # scheduler.enabled --  If true, enable scheduler extender for TopoLVM
  enabled: false # was true; now false because storage capacity tracking is enabled

  # scheduler.args -- Arguments to be passed to the command.
  args: []

  # scheduler.type -- If you run with a managed control plane (such as GKE, AKS, etc), topolvm-scheduler should be deployed as Deployment and Service.
  # topolvm-scheduler should otherwise be deployed as DaemonSet in unmanaged (i.e. bare metal) deployments.
  # possible values:  daemonset/deployment
  type: daemonset

  # Use only if you choose `scheduler.type` deployment
  deployment:
    # scheduler.deployment.replicaCount -- Number of replicas for Deployment.
    replicaCount: 2

  # Use only if you choose `scheduler.type` deployment
  service:
    # scheduler.service.type -- Specify Service type.
    type: LoadBalancer
    # scheduler.service.clusterIP -- Specify Service clusterIP.
    clusterIP:  # None
    # scheduler.service.nodePort -- (int) Specify nodePort.
    nodePort:  # 30251

  # scheduler.updateStrategy -- Specify updateStrategy on the Deployment or DaemonSet.
  updateStrategy: {}
  #  rollingUpdate:
  #    maxUnavailable: 1
  #  type: RollingUpdate

  # scheduler.terminationGracePeriodSeconds -- (int) Specify terminationGracePeriodSeconds on the Deployment or DaemonSet.
  terminationGracePeriodSeconds:  # 30

  # scheduler.minReadySeconds -- (int) Specify minReadySeconds on the Deployment or DaemonSet.
  minReadySeconds:  # 0

  # scheduler.affinity -- Specify affinity on the Deployment or DaemonSet.
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: Exists
          # node-role.kubernetes.io/master label is not used in k8s 1.24+.
          # TODO: remove this when minimum supported version becomes 1.24.
          - matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: Exists

  podDisruptionBudget:
    # scheduler.podDisruptionBudget.enabled -- Specify podDisruptionBudget enabled.
    enabled: true

  # scheduler.tolerations -- Specify tolerations on the Deployment or DaemonSet.
  ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
  tolerations:
    - key: CriticalAddonsOnly
      operator: Exists
    - key: node-role.kubernetes.io/control-plane
      effect: NoSchedule
    # node-role.kubernetes.io/master taint will not be used in k8s 1.25+.
    # cf. https://github.com/kubernetes/enhancements/blob/master/keps/sig-cluster-lifecycle/kubeadm/2067-rename-master-label-taint/README.md
    # TODO: remove this when minimum supported version becomes 1.25.
    - key: node-role.kubernetes.io/master
      effect: NoSchedule

  # scheduler.nodeSelector -- Specify nodeSelector on the Deployment or DaemonSet.
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  nodeSelector: {}

  # scheduler.priorityClassName -- Specify priorityClassName on the Deployment or DaemonSet.
  priorityClassName:

  # scheduler.schedulerOptions -- Tune the Node scoring.
  # ref: https://github.com/topolvm/topolvm/blob/master/deploy/README.md
  schedulerOptions: {}
  #  default-divisor: 1
  #  divisors:
  #    ssd: 1
  #    hdd: 10

  options:
    listen:
      # scheduler.options.listen.host -- Host used by Probe.
      host: localhost
      # scheduler.options.listen.port -- Listen port.
      port: 9251

# lvmd service
lvmd:
  # lvmd.managed -- If true, set up lvmd service with DaemonSet.
  managed: true

  # lvmd.socketName -- Specify socketName.
  socketName: /run/topolvm/lvmd.sock

  # lvmd.deviceClasses -- Specify the device-class settings.
  deviceClasses:
    - name: ssd
      volume-group: vg_data01
      default: true
      spare-gb: 10

  # lvmd.lvcreateOptionClasses -- Specify the lvcreate-option-class settings.
  lvcreateOptionClasses: []
  # - name: ssd
  #   options:
  #     - --type=raid1

  # lvmd.args -- Arguments to be passed to the command.
  args: []

  # lvmd.priorityClassName -- Specify priorityClassName.
  priorityClassName:

  # lvmd.tolerations -- Specify tolerations.
  ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
  tolerations:
    - key: database
      operator: Exists

  # lvmd.nodeSelector -- Specify nodeSelector.
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  nodeSelector: {}

  # lvmd.volumes -- Specify volumes.
  volumes: []
  #  - name: lvmd-socket-dir
  #    hostPath:
  #      path: /run/topolvm
  #      type: DirectoryOrCreate

  # lvmd.volumeMounts -- Specify volumeMounts.
  volumeMounts: []
  #  - name: lvmd-socket-dir
  #    mountPath: /run/topolvm

  # lvmd.env -- extra environment variables
  env: []
  #  - name: LVM_SYSTEM_DIR
  #    value: /tmp

  # lvmd.additionalConfigs -- Define additional LVM Daemon configs if you have additional types of nodes.
  # Please ensure nodeSelectors are non-overlapping.
  additionalConfigs: []
  #  - tolerations: []
  #      nodeSelector: {}
  #      device-classes:
  #        - name: ssd
  #          volume-group: myvg2
  #          default: true
  #          spare-gb: 10

  psp:
    # lvmd.psp.allowedHostPaths -- Specify allowedHostPaths.
    allowedHostPaths: []
    #  - pathPrefix: "/run/topolvm"
    #    readOnly: false

  # lvmd.updateStrategy -- Specify updateStrategy.
  updateStrategy: {}
  #  type: RollingUpdate
  #  rollingUpdate:
  #    maxSurge: 50%
  #    maxUnavailable: 50%

# CSI node service
node:
  # node.lvmdSocket -- Specify the socket to be used for communication with lvmd.
  lvmdSocket: /run/topolvm/lvmd.sock
  # node.kubeletWorkDirectory -- Specify the work directory of Kubelet on the host.
  # For example, on microk8s it needs to be set to `/var/snap/microk8s/common/var/lib/kubelet`
  kubeletWorkDirectory: /var/lib/kubelet

  # node.args -- Arguments to be passed to the command.
  args: []

  # node.securityContext -- Container securityContext.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
  securityContext:
    privileged: true

  metrics:
    # node.metrics.enabled -- If true, enable scraping of metrics by Prometheus.
    enabled: true
    # node.metrics.annotations -- Annotations for Scrape used by Prometheus.
    annotations:
      prometheus.io/port: metrics

  prometheus:
    podMonitor:
      # node.prometheus.podMonitor.enabled -- Set this to `true` to create PodMonitor for Prometheus operator.
      enabled: false

      # node.prometheus.podMonitor.additionalLabels -- Additional labels that can be used so PodMonitor will be discovered by Prometheus.
      additionalLabels: {}

      # node.prometheus.podMonitor.namespace -- Optional namespace in which to create PodMonitor.
      namespace: ""

      # node.prometheus.podMonitor.interval -- Scrape interval. If not set, the Prometheus default scrape interval is used.
      interval: ""

      # node.prometheus.podMonitor.scrapeTimeout -- Scrape timeout. If not set, the Prometheus default scrape timeout is used.
      scrapeTimeout: ""

      # node.prometheus.podMonitor.relabelings -- RelabelConfigs to apply to samples before scraping.
      relabelings: []
      # - sourceLabels: [__meta_kubernetes_service_label_cluster]
      #   targetLabel: cluster
      #   regex: (.*)
      #   replacement: 1
      #   action: replace

      # node.prometheus.podMonitor.metricRelabelings -- MetricRelabelConfigs to apply to samples before ingestion.
      metricRelabelings: []
      # - sourceLabels: [__meta_kubernetes_service_label_cluster]
      #   targetLabel: cluster
      #   regex: (.*)
      #   replacement: 1
      #   action: replace

  # node.priorityClassName -- Specify priorityClassName.
  priorityClassName:

  # node.tolerations -- Specify tolerations.
  ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
  tolerations:
    - key: database
      operator: Exists

  # node.nodeSelector -- Specify nodeSelector.
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  nodeSelector: {}

  # node.volumes -- Specify volumes.
  volumes: []
  #  - name: registration-dir
  #    hostPath:
  #      path: /var/lib/kubelet/plugins_registry/
  #      type: Directory
  #  - name: node-plugin-dir
  #    hostPath:
  #      path: /var/lib/kubelet/plugins/topolvm.io/node
  #      type: DirectoryOrCreate
  #  - name: csi-plugin-dir
  #    hostPath:
  #      path: /var/lib/kubelet/plugins/kubernetes.io/csi
  #      type: DirectoryOrCreate
  #  - name: pod-volumes-dir
  #    hostPath:
  #      path: /var/lib/kubelet/pods/
  #      type: DirectoryOrCreate
  #  - name: lvmd-socket-dir
  #    hostPath:
  #      path: /run/topolvm
  #      type: Directory

  volumeMounts:
    # node.volumeMounts.topolvmNode -- Specify volumeMounts.
    topolvmNode: []
    # - name: node-plugin-dir
    #   mountPath: /var/lib/kubelet/plugins/topolvm.io/node
    # - name: csi-plugin-dir
    #   mountPath: /var/lib/kubelet/plugins/kubernetes.io/csi
    #   mountPropagation: "Bidirectional"
    # - name: pod-volumes-dir
    #   mountPath: /var/lib/kubelet/pods
    #   mountPropagation: "Bidirectional"
    # - name: lvmd-socket-dir
    #   mountPath: /run/topolvm

  psp:
    # node.psp.allowedHostPaths -- Specify allowedHostPaths.
    allowedHostPaths: []
    # - pathPrefix: "/var/lib/kubelet"
    #   readOnly: false
    # - pathPrefix: "/run/topolvm"
    #   readOnly: false

  # node.updateStrategy -- Specify updateStrategy.
  updateStrategy: {}
  #  type: RollingUpdate
  #  rollingUpdate:
  #    maxSurge: 50%
  #    maxUnavailable: 50%

# CSI controller service
controller:
  # controller.replicaCount -- Number of replicas for CSI controller service.
  replicaCount: 2

  # controller.args -- Arguments to be passed to the command.
  args: []

  storageCapacityTracking:
    # controller.storageCapacityTracking.enabled -- Enable Storage Capacity Tracking for csi-provisioner.
    enabled: true

  securityContext:
    # controller.securityContext.enabled -- Enable securityContext.
    enabled: true

  nodeFinalize:
    # controller.nodeFinalize.skipped -- Skip automatic cleanup of PersistentVolumeClaims when a Node is deleted.
    skipped: false

  prometheus:
    podMonitor:
      # controller.prometheus.podMonitor.enabled -- Set this to `true` to create PodMonitor for Prometheus operator.
      enabled: false

      # controller.prometheus.podMonitor.additionalLabels -- Additional labels that can be used so PodMonitor will be discovered by Prometheus.
      additionalLabels: {}

      # controller.prometheus.podMonitor.namespace -- Optional namespace in which to create PodMonitor.
      namespace: ""

      # controller.prometheus.podMonitor.interval -- Scrape interval. If not set, the Prometheus default scrape interval is used.
      interval: ""

      # controller.prometheus.podMonitor.scrapeTimeout -- Scrape timeout. If not set, the Prometheus default scrape timeout is used.
      scrapeTimeout: ""

      # controller.prometheus.podMonitor.relabelings -- RelabelConfigs to apply to samples before scraping.
      relabelings: []
      # - sourceLabels: [__meta_kubernetes_service_label_cluster]
      #   targetLabel: cluster
      #   regex: (.*)
      #   replacement: 1
      #   action: replace

      # controller.prometheus.podMonitor.metricRelabelings -- MetricRelabelConfigs to apply to samples before ingestion.
      metricRelabelings: []
      # - sourceLabels: [__meta_kubernetes_service_label_cluster]
      #   targetLabel: cluster
      #   regex: (.*)
      #   replacement: 1
      #   action: replace

  # controller.terminationGracePeriodSeconds -- (int) Specify terminationGracePeriodSeconds.
  terminationGracePeriodSeconds:  # 10

  # controller.priorityClassName -- Specify priorityClassName.
  priorityClassName:

  # controller.updateStrategy -- Specify updateStrategy.
  updateStrategy: {}
  #  type: RollingUpdate
  #  rollingUpdate:
  #    maxSurge: 50%
  #    maxUnavailable: 50%

  # controller.minReadySeconds -- (int) Specify minReadySeconds.
  minReadySeconds:  # 0

  # controller.affinity -- Specify affinity.
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app.kubernetes.io/component
                operator: In
                values:
                  - controller
              - key: app.kubernetes.io/name
                operator: In
                values:
                  - {{ include "topolvm.name" . }}
          topologyKey: kubernetes.io/hostname

  # controller.tolerations -- Specify tolerations.
  ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
  tolerations:
    - key: database
      operator: Exists

  # controller.nodeSelector -- Specify nodeSelector.
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  nodeSelector: {}

  # controller.volumes -- Specify volumes.
  volumes:
    - name: socket-dir
      emptyDir: {}

  podDisruptionBudget:
    # controller.podDisruptionBudget.enabled -- Specify podDisruptionBudget enabled.
    enabled: true

resources:
  # resources.topolvm_node -- Specify resources.
  ## ref: https://kubernetes.io/docs/user-guide/compute-resources/
  topolvm_node:
    requests:
      memory: 100Mi
      cpu: 100m
    limits:
      memory: 500Mi
      cpu: 500m
  # resources.csi_registrar -- Specify resources.
  ## ref: https://kubernetes.io/docs/user-guide/compute-resources/
  csi_registrar:
    requests:
      cpu: "25m"
      memory: "10Mi"
    limits:
      cpu: "200m"
      memory: "200Mi"
  # resources.liveness_probe -- Specify resources.
  ## ref: https://kubernetes.io/docs/user-guide/compute-resources/
  liveness_probe:
    requests:
      cpu: "25m"
      memory: "10Mi"
    limits:
      cpu: "200m"
      memory: "200Mi"
  # resources.topolvm_controller -- Specify resources.
  ## ref: https://kubernetes.io/docs/user-guide/compute-resources/
  topolvm_controller: 
    requests:
      memory: "50Mi"
      cpu: "50m"
    limits:
      memory: "200Mi"
      cpu: "200m"
  # resources.csi_provisioner -- Specify resources.
  ## ref: https://kubernetes.io/docs/user-guide/compute-resources/
  csi_provisioner:
    requests:
      memory: "50Mi"
      cpu: "50m"
    limits:
      memory: "200Mi"
      cpu: "200m"
  # resources.csi_resizer -- Specify resources.
  ## ref: https://kubernetes.io/docs/user-guide/compute-resources/
  csi_resizer:
    requests:
      memory: "50Mi"
      cpu: "50m"
    limits:
      memory: "200Mi"
      cpu: "200m"
  # resources.csi_snapshotter -- Specify resources.
  ## ref: https://kubernetes.io/docs/user-guide/compute-resources/
  csi_snapshotter:
    requests:
      memory: "50Mi"
      cpu: "50m"
    limits:
      memory: "200Mi"
      cpu: "200m"
  # resources.lvmd -- Specify resources.
  ## ref: https://kubernetes.io/docs/user-guide/compute-resources/
  lvmd:
    requests:
      memory: 100Mi
      cpu: 100m
    limits:
      memory: 500Mi
      cpu: 500m
  # resources.topolvm_scheduler -- Specify resources.
  ## ref: https://kubernetes.io/docs/user-guide/compute-resources/
  topolvm_scheduler:
    requests:
      memory: "50Mi"
      cpu: "50m"
    limits:
      memory: "200Mi"
      cpu: "200m"

livenessProbe:
  # livenessProbe.topolvm_node -- Specify resources.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
  topolvm_node:
    failureThreshold:
    initialDelaySeconds: 10
    timeoutSeconds: 3
    periodSeconds: 60
  # livenessProbe.csi_registrar -- Specify livenessProbe.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
  csi_registrar:
    failureThreshold:
    initialDelaySeconds: 10
    timeoutSeconds: 3
    periodSeconds: 60
  # livenessProbe.topolvm_controller -- Specify livenessProbe.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
  topolvm_controller:
    failureThreshold:
    initialDelaySeconds: 10
    timeoutSeconds: 3
    periodSeconds: 60
  # livenessProbe.lvmd -- Specify livenessProbe.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
  lvmd:
    failureThreshold:
    initialDelaySeconds: 10
    timeoutSeconds: 3
    periodSeconds: 60
  # livenessProbe.topolvm_scheduler -- Specify livenessProbe.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
  topolvm_scheduler:
    failureThreshold:
    initialDelaySeconds: 10
    timeoutSeconds: 3
    periodSeconds: 60

# storageClasses -- Whether to create storageclass(es)
# ref: https://kubernetes.io/docs/concepts/storage/storage-classes/
storageClasses:
  - name: topolvm  # Defines name of storage class.
    storageClass:
      # Supported filesystems are: ext4, xfs, and btrfs.
      fsType: xfs
      # reclaimPolicy
      reclaimPolicy:  # Delete
      # Additional annotations
      annotations: {}
      # Default storage class for dynamic volume provisioning
      # ref: https://kubernetes.io/docs/concepts/storage/dynamic-provisioning
      isDefaultClass: false
      # volumeBindingMode can be either WaitForFirstConsumer or Immediate. WaitForFirstConsumer is recommended because TopoLVM cannot schedule pods wisely if volumeBindingMode is Immediate.
      volumeBindingMode: WaitForFirstConsumer
      # enables CSI drivers to expand volumes. This feature is available for Kubernetes 1.16 and later releases.
      allowVolumeExpansion: true
      additionalParameters:
        "topolvm.io/device-class": "ssd"

webhook:
  # webhook.caBundle -- Specify the certificate to be used for AdmissionWebhook.
  caBundle:  # Base64-encoded, PEM-encoded CA certificate that signs the server certificate.
  # webhook.existingCertManagerIssuer -- Specify the cert-manager issuer to be used for AdmissionWebhook.
  existingCertManagerIssuer: {}
    # group: cert-manager.io
    # kind: Issuer
    # name: webhook-issuer
  podMutatingWebhook:
    # webhook.podMutatingWebhook.enabled -- Enable Pod MutatingWebhook.
    enabled: false # was true; now false because we use storage capacity tracking instead of the scheduler
  pvcMutatingWebhook:
    # webhook.pvcMutatingWebhook.enabled -- Enable PVC MutatingWebhook.
    enabled: true

# Container Security Context
# ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
securityContext:
  # securityContext.runAsUser -- Specify runAsUser.
  runAsUser: 10000
  # securityContext.runAsGroup -- Specify runAsGroup.
  runAsGroup: 10000

podSecurityPolicy:
  # podSecurityPolicy.create -- Enable pod security policy.
  ## ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/
  create: false

cert-manager:
  # cert-manager.enabled -- Install cert-manager together.
  ## ref: https://cert-manager.io/docs/installation/kubernetes/#installing-with-helm
  enabled: false

priorityClass:
  # priorityClass.enabled -- Install priorityClass.
  enabled: true
  # priorityClass.name -- Specify priorityClass resource name.
  name: topolvm
  # priorityClass.value  -- Specify priorityClass value.
  value: 1000000

snapshot:
  # snapshot.enabled -- Turn on the snapshot feature.
  enabled: true

LV-related information:

root@worker01-rancher-opensource:~# vgs
  VG        #PV #LV #SN Attr   VSize    VFree
  vg_data01   1   1   0 wz--n- <150.00g    0
root@worker01-rancher-opensource:~# lvs
  LV        VG        Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_data01 vg_data01 -wi-ao---- <150.00g

@toshipp Thank you for your reply.
How can I take a snapshot on thin volumes? Do I have to change anything in the values file?
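
For context, TopoLVM snapshots require thinly provisioned volumes, and the lvs output above shows a plain thick LV (the Attr string -wi-ao----; a thin pool would show an Attr string starting with twi-). A hedged sketch of what a thin deviceClasses entry in the values file could look like (the pool name and overprovision ratio are assumptions, and the thin pool must already exist in vg_data01):

lvmd:
  deviceClasses:
    - name: thin
      volume-group: vg_data01
      type: thin
      default: true
      thin-pool:
        name: pool0
        overprovision-ratio: 5.0

A storage class would then select this device class via the parameter "topolvm.io/device-class": "thin".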

Thank you!
How do I go about setting up lvmd? I don't have much experience; is there a section in the docs?
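
With lvmd.managed: true (as in the values above), the Helm chart already runs lvmd for you as a DaemonSet, so the remaining work is host-side LVM preparation; see docs/lvmd.md in the TopoLVM repository for the full reference. A minimal sketch for creating the volume group and a thin pool, assuming a spare disk at /dev/sdb (the device name and size are assumptions):

# on each node that should provide TopoLVM storage
pvcreate /dev/sdb
vgcreate vg_data01 /dev/sdb
# create a thin pool inside the volume group
lvcreate -T -L 140G -n pool0 vg_data01

The thin pool can then be referenced from lvmd.deviceClasses as in the sketch above.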

This issue has been automatically marked as stale because it has not had any activity for 30 days. It will be closed in a week if no further activity occurs. Thank you for your contributions.

This issue has been automatically closed due to inactivity. Please feel free to reopen this issue (or open a new one) if this still requires investigation. Thank you for your contribution.