[nfs-client-provisioner] failed to provision volume with StorageClass "nfs-client": claim Selector is not supported
weber-d opened this issue · comments
Is this a request for help?: Yes
Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug report
Version of Helm and Kubernetes:
$ helm version
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.6", GitCommit:"b1d75deca493a24a2f87eb1efde1a569e52fc8d9", GitTreeState:"clean", BuildDate:"2018-12-16T04:39:52Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.6", GitCommit:"b1d75deca493a24a2f87eb1efde1a569e52fc8d9", GitTreeState:"clean", BuildDate:"2018-12-16T04:30:10Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Which chart: nfs-client-provisioner
What happened:
The PVC remains pending and shows the following error in its events:
failed to provision volume with StorageClass "nfs-client": claim Selector is not supported
What you expected to happen:
A bound PVC with the corresponding app label.
How to reproduce it (as minimally and precisely as possible):
Create an example.yml file with the following PVC on a cluster configured with the nfs-client-provisioner addon:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: connections-test
  name: customizernfsclaim
spec:
  selector:
    matchLabels:
      app: customizervolumeapp
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: nfs-client
and create it using kubectl apply -f example.yml. Now wait a moment and inspect the created PVC using kubectl describe pvc customizernfsclaim -n <your-namespace>. You'll see that it failed:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 20s (x263 over 1h) persistentvolume-controller waiting for a volume to be created, either by external provisioner "k8s-nfs" or manually created by system administrator
Normal Provisioning 19s (x10 over 1h) k8s External provisioner is provisioning volume for claim "<your-namespace>/customizervolumeapp"
Warning ProvisioningFailed 19s (x10 over 1h) k8s failed to provision volume with StorageClass "nfs-client": claim Selector is not supported
Anything else we need to know:
It seems that the app label selector causes the issue. If we deploy the following YAML file using kubectl apply -f,
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: connections-test
  name: customizernfsclaim2
spec:
  # selector:
  #   matchLabels:
  #     app: customizervolumeapp
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: nfs-client
the customizernfsclaim2 PVC gets bound successfully. Note the commented-out selector. Simply renaming or removing the label is no real workaround: unfortunately, the software using those PVCs is proprietary and doesn't even have an official package repository, so it's unlikely that they'll fix this soon...
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
Since I haven't received an answer yet, I don't know how to keep this issue from being closed without any resolution.
Push
Push
Push as usual
I've encountered a similar problem. Any solution?
Push as usual
The selector conflicts with dynamic provisioning. If you use a selector, the controller tries to bind the claim to an existing PV with matching labels, so the volume can't be dynamically provisioned through the storage class. You can't use a selector and a dynamically provisioning storage class at the same time.
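Since a claim with a selector can only bind to a pre-existing PV whose labels match, one possible workaround (a sketch, not tested against the proprietary app above) is to statically provision an NFS-backed PV carrying the expected label and let the claim bind to it, bypassing the provisioner entirely. The PV name, NFS server address, and export path below are placeholders:

```yaml
# Statically provisioned PV carrying the label the claim selects on.
# Replace server/path with your actual NFS export.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: customizervolume
  labels:
    app: customizervolumeapp    # matches the claim's matchLabels
spec:
  capacity:
    storage: 2Gi                # must satisfy the claim's request
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client  # must match the claim's storageClassName
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: <nfs-server-ip>     # placeholder
    path: /exports/customizer   # placeholder
```

With such a PV in place, the customizernfsclaim PVC should bind to it via label matching instead of waiting on the external provisioner.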
Push again. Is nobody from the chart team looking into the issues?
Push...
We're interested in this. Could someone from the dev team please look into the issue?
This issue is being automatically closed due to inactivity.
Push
Any chance to get this fixed?
Push
Same problem here using NFS when installing the new monitoring in Rancher. It sets a selector in the storageSpec by default, and the issue above occurs. After removing the selector, the volume gets bound.
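For anyone hitting this through the Rancher monitoring chart: assuming it exposes the Prometheus Operator's storageSpec in kube-prometheus-stack style values (the exact field path may differ in your chart version), the fix amounts to a claim template with no selector block, so the nfs-client provisioner can provision the volume dynamically. A hypothetical values fragment:

```yaml
# Values fragment (field names assumed from kube-prometheus-stack):
# a volumeClaimTemplate without a selector, since claim selectors
# are not supported by the external provisioner.
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: nfs-client
          accessModes:
            - ReadWriteMany
          resources:
            requests:
              storage: 10Gi
          # no "selector:" here
```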
It's sad to see that nobody feels responsible for this issue, which has been open for over one and a half years now without any response... just the pointless bot that auto-closes the issue after a few weeks of inactivity, even while we're waiting for a maintainer/dev to reply. I don't see that as good practice for handling issues, at least not when it's the devs who need to respond.