The {code} Team Roadmap

OpenShift 3.6.x support for ScaleIO

clintkitson opened this issue

We are looking to include ScaleIO in the next available 3.6.x patch release.

  • code and PR created
  • tested by @cduchesne
  • business alliances agreement
  • merge PR
  • release communications
  • released 3.6.x

I tested functionality for OpenShift 3.6 and everything worked as expected. There are some limitations that we may want to plan for and address in the near term. Below is a summary:

  1. Ability to use a secret from a different namespace (Ceph supports this today). Without this, pretty much anyone in a namespace has access to the ScaleIO admin credentials (see the sketch after this list)
  2. Fix the detach logic so that it doesn't get skipped when the device is deemed busy. If a volume doesn't mount successfully, it is never detached during the detach request, leaving manual intervention as the only remedy
  3. Fix support for the same pool name in different protection domains. If the protection domains are different (pd1 and pd2) but both have a pool named pool1, volumes will not be managed correctly
  4. Clean up the RWO/ROX/RWX logic and supportability. It is unclear how we handle the different access modes today (e.g. volumes can be created/managed but don't function properly for ROX/RWX)
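
For reference, this is roughly how the secret and StorageClass are wired together today (a minimal sketch; the namespace and credential values are made up, while the other names and parameters follow the StorageClass shown later in this thread). Note that secretRef takes only a bare secret name, which is what limitation 1 is about:

    apiVersion: v1
    kind: Secret
    metadata:
      name: sio-secret
      namespace: myproject        # hypothetical; today the secret must live where the consumers can read it
    type: kubernetes.io/scaleio
    data:
      username: YWRtaW4=          # base64("admin"), illustrative only
      password: Y2hhbmdlbWU=      # base64("changeme"), illustrative only
    ---
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: sio-k8s
    provisioner: kubernetes.io/scaleio
    parameters:
      gateway: https://192.168.20.254/api
      system: scaleio
      protectionDomain: default
      storagePool: default
      fsType: xfs
      secretRef: sio-secret       # a bare name, no namespace field (limitation 1)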

1 and 2 are critical for deploying OpenShift in a production environment where multi-tenancy and high availability are important.

3 is important but certainly not a show-stopper; this probably needs to be fixed in our goscaleio package.

4 is important to make operating OpenShift/Kubernetes with ScaleIO smoother, but it is not critical by any means.

Also worth noting: Vlad has had changes merged into Kubernetes master that include FSGroup support. This is a very critical function because OpenShift defaults to not allowing containers to run as root. I don't know whether it is possible to bring this functionality into OpenShift, but if it is, we need to.
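
For context, FSGroup support means the plugin honors the pod-level fsGroup setting, so the mounted volume's group ownership is adjusted and a non-root container can write to it. A minimal sketch (pod name, image, IDs, and claim name are hypothetical):

    apiVersion: v1
    kind: Pod
    metadata:
      name: sio-app
    spec:
      securityContext:
        fsGroup: 1000             # the mounted volume is made group-writable for this GID
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "touch /data/ok && sleep 3600"]
        securityContext:
          runAsUser: 1000         # non-root, matching OpenShift's default policy
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: sio-pvc      # hypothetical claim name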

@vladimirvivien can you review these items?

@clintkitson is ScaleIO plugin support released in 3.6.x?

I followed the example in the documentation to integrate OpenShift 3.6 / K8s 1.6 with ScaleIO.
I'm able to create the secret, the StorageClass, and a pod with a ScaleIO volume attached, but when I try to create a PVC, it stays in the Pending state and I get the error in the title ("no volume plugin matched").
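
The claim is shaped roughly like this (a sketch; the requested size and access mode are assumptions, while the claim and class names match the output below):

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvc-sio-small
    spec:
      storageClassName: sio-k8s   # on K8s 1.6 the annotation volume.beta.kubernetes.io/storage-class also works
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 8Gi            # assumed size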

System info:
OS: Centos7
Kernel: 3.10.0-514.el7.x86_64
ScaleIO: 2.0.13

oc version 
oc v3.6.1+008f2d5
kubernetes v1.6.1+5115d708d7
features: Basic-Auth GSSAPI Kerberos SPNEGO

kubectl get pvc
NAME            STATUS    VOLUME    CAPACITY   ACCESSMODES   STORAGECLASS   AGE
pvc-sio-small   Pending                                      sio-k8s        15m
kubectl describe sc sio-k8s
Name:		sio-k8s
IsDefaultClass:	No
Annotations:	<none>
Provisioner:	kubernetes.io/scaleio
Parameters:	fsType=xfs,gateway=https://192.168.20.254/api,protectionDomain=default,secretRef=sio-secret,storagePool=default,system=scaleio
Events:		<none>
oc get event pvc-sio-small.150cced22d69293b 
LASTSEEN   FIRSTSEEN   COUNT     NAME            KIND                    SUBOBJECT   TYPE      REASON               SOURCE                        MESSAGE
4m         19m         62        pvc-sio-small   PersistentVolumeClaim               Warning   ProvisioningFailed   persistentvolume-controller   no volume plugin matched

Thanks

Hello @clintkitson, thanks.
I've already read the blog post, but unfortunately I'm running OpenShift 3.6 and I was wondering whether ScaleIO support for PVCs had been fixed.
I cannot upgrade to 3.7.
Thanks

Thanks @clintkitson. I've upgraded OpenShift to 3.7 and the plugin is now working correctly.
Just one more question, but please feel free to tell me if this is not the right place to ask.
I used the REX-Ray plugin with Docker Swarm services and was able to attach a ScaleIO volume to multiple containers (replicas of the service), because the REX-Ray plugin takes care of telling Swarm to launch the containers on the same node. I noticed that this behavior doesn't happen in K8s. If I create an application with 2 replicas (2 pods), K8s starts them on two different nodes and I get this error:
problem getting response: Only a single SDC may be mapped to this volume at a time
Is there a way to get the same behavior as with the Docker Swarm plugin? (See the sketch below for one option.)
Thanks
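
One way to approximate the Swarm behavior in Kubernetes is required pod affinity on the hostname topology key, which forces all replicas onto the same node so only a single SDC maps the volume. A minimal sketch (the Deployment, labels, and image are hypothetical, not from this thread):

    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 2
      template:
        metadata:
          labels:
            app: myapp
        spec:
          affinity:
            podAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchLabels:
                    app: myapp
                topologyKey: kubernetes.io/hostname   # co-locate all replicas on one node
          containers:
          - name: app
            image: busybox
            command: ["sleep", "3600"]
            volumeMounts:
            - name: data
              mountPath: /data
          volumes:
          - name: data
            persistentVolumeClaim:
              claimName: pvc-sio-small

Note this trades away the scheduling flexibility Kubernetes normally provides: if that node fails, all replicas go down with it.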