coreos / coreos-kubernetes

CoreOS Container Linux+Kubernetes documentation & Vagrant installers

Home Page: https://coreos.com/kubernetes/docs/latest/



Upgrade bundled GlusterFS tools

bootc opened this issue

I recently tried mounting a GlusterFS volume (running outside Kubernetes) within my K8s cluster, but the mount failed with "Server is operating at an op-version which is not supported" errors. After much investigation, it appears the cause is the old GlusterFS tools bundled in the hyperkube image.

I'm using quay.io/coreos/hyperkube:v1.5.4_coreos.0 within a CoreOS 1298.5.0 environment. I deployed using Matchbox's bootkube-install example, slightly modified for my environment (latest CoreOS, etc.).

The underlying issue seems to be that hyperkube is built on a Debian Jessie (stable/8.7) image, and Debian stable only has GlusterFS 3.5.2. GlusterFS 3.8.8 is available in Stretch (testing) as well as in jessie-backports, so it should be reasonably straightforward to pull more recent versions of those packages into the hyperkube image.
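For illustration only (this is not the actual hyperkube Dockerfile; the mirror URL and exact package name below are assumptions), the build's apt steps could pull the client tools from jessie-backports along these lines:

# Sketch only: enable jessie-backports in the image and install its newer
# glusterfs-client instead of the jessie (stable) 3.5.2 package.
# The mirror URL is an example; use whatever mirror the build already configures.
echo 'deb http://deb.debian.org/debian jessie-backports main' \
    > /etc/apt/sources.list.d/backports.list
apt-get update
apt-get install -y -t jessie-backports glusterfs-client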

The errors produced go into /var/lib/kubelet/plugins/kubernetes.io/glusterfs/glusterfsvol/glusterfs-glusterfs.log and look like:

[2017-03-10 09:43:33.094004] E [glusterfsd-mgmt.c:1297:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
[2017-03-10 09:43:33.094087] E [glusterfsd-mgmt.c:1388:mgmt_getspec_cbk] 0-mgmt: Server is operating at an op-version which is not supported
[2017-03-10 09:43:33.126728] E [glusterfsd-mgmt.c:1297:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
[2017-03-10 09:43:33.126783] E [glusterfsd-mgmt.c:1388:mgmt_getspec_cbk] 0-mgmt: Server is operating at an op-version which is not supported
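For reference, a quick way to confirm the mismatch is to compare the client version shipped in the image with the server's operating op-version (assuming shell access to both; these are standard GlusterFS commands and paths, nothing specific to this setup):

# Client side (inside the hyperkube image): version of the bundled FUSE client
glusterfs --version

# Server side: glusterd records the cluster's operating op-version here
grep operating-version /var/lib/glusterd/glusterd.info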

Please consider upgrading these tools for interoperation with newer Gluster volumes.

@bootc We're using the same packages as the upstream hyperkube build (https://github.com/kubernetes/kubernetes/blob/master/cluster/images/hyperkube/Dockerfile#L21-L37) -- it might make sense to open an issue there (or a PR with the changes) so it could potentially be implemented for all hyperkube builds?

@aaronlevy Yes, that makes sense; I've now raised an issue on the kubernetes project.

Going to close this, as it's now tracked in kubernetes/kubernetes#43069.