Topolvm does not provision new logical volume
vasuakkur-s1 opened this issue
Describe the bug
TopoLVM version 0.20.0 (https://github.com/topolvm/topolvm/blob/v0.21.0/charts/topolvm/Chart.yaml) was deployed in EKS using the legacy setting useLegacy: true (https://github.com/topolvm/topolvm/blob/main/docs/proposals/rename-group.md#user-action) in the values file, so that it retains the provisioner name topolvm.cybozu.com.
We use nodes with NVMe capability: an nvme-provisioner creates the volume groups and then sets the label nvme=true on the node, so that TopoLVM can take over and create the LogicalVolume. When I try to create a new custom application StatefulSet, the pod stays in Pending, because dynamic provisioning of the PVC does not happen: the logical volume does not exist on the node the pod was assigned to.
The logical volumes are not created when the Helm chart is deployed with the useLegacy: true flag.
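For reference, a minimal sketch of a StorageClass that targets the legacy provisioner name; the class name topolvm-provisioner, the WaitForFirstConsumer binding mode, and the expansion setting here are illustrative assumptions, not copied from the affected cluster:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topolvm-provisioner          # illustrative name; the real class name may differ
provisioner: topolvm.cybozu.com      # legacy provisioner name retained by useLegacy: true
volumeBindingMode: WaitForFirstConsumer   # assumed; provisioning waits until the pod is scheduled to a node
allowVolumeExpansion: true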
Environments
- Version: EKS, 1.24 (v1.24.13-eks-0a21954)
- Chart: https://github.com/topolvm/topolvm/blob/v0.21.0/charts/topolvm/Chart.yaml
- OS: Amazon Linux 2
- Kernel: Linux 5.10.184-175.731.amzn2.aarch64
- Architecture: arm64
To Reproduce
Steps to reproduce the behavior:
- Use the following helm chart https://github.com/topolvm/topolvm/blob/v0.21.0/charts/topolvm/Chart.yaml
- Update the values file to use the following:
useLegacy: true
controller:
  storageCapacityTracking:
    enabled: true
webhook:
  podMutatingWebhook:
    enabled: false
- Deploy the above Helm chart (a sketch of the install command follows this list).
- The controller Deployment and the lvmd and node DaemonSets are running without issues.
- No logical volume is created on the node.
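A minimal sketch of the deploy step, assuming the chart is installed from the upstream Helm repository; the release name, the topolvm-system namespace, and the values file name are assumptions that may differ from the actual setup:

$ helm repo add topolvm https://topolvm.github.io/topolvm
$ helm repo update
$ # values.yaml holds the useLegacy / storageCapacityTracking / podMutatingWebhook settings shown above
$ helm install topolvm topolvm/topolvm --namespace topolvm-system --create-namespace -f values.yaml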
Expected behavior
The LogicalVolume resource should be created so that the PVC can be provisioned and bound.
I'm afraid I couldn't reproduce your problem in my local environment. Here's what I tried:
- Run git checkout v0.21.0.
- Edit e2e/manifests/values/daemonset-scheduler-legacy.yaml:
$ cat e2e/manifests/values/daemonset-scheduler-legacy.yaml
scheduler:
  type: daemonset
lvmd:
  managed: false
node:
  lvmdSocket: /tmp/topolvm/lvmd.sock
useLegacy: true
controller:
  storageCapacityTracking:
    enabled: true
webhook:
  podMutatingWebhook:
    enabled: false
- Run cd e2e && make start-lvmd && make create-cluster USE_LEGACY=true.
- Apply the following manifests to deploy a PVC and a Pod.
$ cat test-pvc1.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: topolvm-provisioner
$ cat test-pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod1
  labels:
    app.kubernetes.io/name: pause
spec:
  containers:
    - name: ubuntu
      image: ubuntu:20.04
      command:
        - bash
        - -c
        - |
          sleep inf &
          trap "kill -SIGTERM $!" SIGTERM
          wait $!
          exit
      volumeMounts:
        - mountPath: /test1
          name: my-volume
  volumes:
    - name: my-volume
      persistentVolumeClaim:
        claimName: test-pvc1
$ bin/kubectl-1.27.3 apply -f test-pvc1.yaml
$ bin/kubectl-1.27.3 apply -f test-pod1.yaml
- Check the statuses of the Pod and the PVC.
$ bin/kubectl-1.27.3 get pod
NAME        READY   STATUS    RESTARTS   AGE
test-pod1   1/1     Running   0          95s
$ bin/kubectl-1.27.3 get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-pvc1   Bound    pvc-e839127b-9adc-46ac-b264-afbb06632654   1Gi        RWO            topolvm-provisioner   19s
$ bin/kubectl-1.27.3 get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS          REASON   AGE
pvc-e839127b-9adc-46ac-b264-afbb06632654   1Gi        RWO            Delete           Bound    default/test-pvc1   topolvm-provisioner            50s
$ sudo lvs
  LV                                   VG          Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  pool0                                node1-myvg4 twi-a-tz--    4.00g             0.00   11.23
  pool0                                node2-myvg4 twi-a-tz--    4.00g             0.00   11.23
  d68bd856-9d9c-4635-9021-1eed47189496 node3-myvg1 -wi-ao----    1.00g
  pool0                                node3-myvg4 twi-a-tz--    4.00g             0.00   11.23
  ubuntu-lv                            ubuntu-vg   -wi-ao---- <254.00g
As we can see above, an LV is correctly created.
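As an additional, optional check (not part of the output above), the LogicalVolume custom resource backing the PV can also be listed. Which API group serves it (topolvm.io or the legacy topolvm.cybozu.com) depends on the chart version and legacy settings, so this is only a sketch:

$ bin/kubectl-1.27.3 get logicalvolumes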
Could you please explain your problem in more detail? I would appreciate it if you could show me the following:
- The manifests you used for the Pod and PVC
- The statuses of the Pod and PVC you deployed (e.g., the output of kubectl describe ...)
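For example, a sketch of commands that would gather that information; the placeholders, the topolvm-system namespace, and the DaemonSet/container names below are assumptions and may differ in your installation:

$ kubectl describe pod <pod-name>
$ kubectl describe pvc <pvc-name>
$ kubectl get logicalvolumes -o yaml
$ # node plugin logs from the node the pod was scheduled to; namespace and names depend on how the chart was installed
$ kubectl logs -n topolvm-system daemonset/topolvm-node -c topolvm-node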
This issue has been automatically marked as stale because it has not had any activity for 30 days. It will be closed in a week if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed due to inactivity. Please feel free to reopen this issue (or open a new one) if this still requires investigation. Thank you for your contribution.