Datera CSI Volume Plugin

Overview

Datera is a fully disaggregated, scale-out storage platform that runs over multiple standard protocols (iSCSI, Object/S3), combining heterogeneous compute platform/framework flexibility (HPE, Dell, Fujitsu, Cisco and others) with rapid deployment velocity and access to data from anywhere. Datera® is a software-defined data infrastructure for virtualized environments, databases, cloud stacks, DevOps, microservices and container deployments. It provides operations-free delivery and orchestration of data at scale for any application within a traditional datacenter, private cloud or hybrid cloud setting.

Datera gives Kubernetes (K8s) enterprise customers the peace of mind of a future-proof data services platform that is ready for diverse and demanding workloads -- as K8s continues to dominate the container orchestration arena, increasingly higher-end workloads are likely to be containerized as well.

The Datera CSI Volume Plugin uses the Datera storage backend as distributed data storage for containers.


Supported Versions

Datera CSI Plugin Version    Supported CSI Versions    Supported Kubernetes Versions
v1.0.4                       v1.0                      v1.13.X+
v1.0.5                       v1.0                      v1.13.X+
v1.0.6                       v1.0                      v1.13.X+
v1.0.7                       v1.0                      v1.13.X+
v1.0.8                       v1.0                      v1.13.X+
v1.0.9                       v1.0                      v1.13.X+
v1.0.10                      v1.0                      v1.13.X+
v1.0.11                      v1.0                      v1.13.X+
v1.0.12                      v1.0                      v1.13.X+

Driver Installation

Prerequisites

Kubernetes Installation/Configuration (Kubernetes v1.13+ required). Note that container-based iscsid is no longer supported. The Datera implementation runs an iscsi-send inside the driver containers and an iscsi-recv service on the Kubernetes hosts. iscsi-recv in turn uses the iscsid on the Kubernetes hosts to perform iSCSI operations.

Ensure iscsid and iscsi-recv are running on the hosts. These steps MUST be performed before installing the CSI plugin:

First, install iscsid on the Kubernetes hosts:

Ubuntu

$ apt install open-iscsi

CentOS

$ yum install iscsi-initiator-utils
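
If iscsid is installed but not running, it can usually be started and enabled via systemd (the unit name is typically iscsid, though some distributions use open-iscsi; adjust as needed):

$ systemctl enable --now iscsid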

Verify iscsid is running:

$ ps -ef | grep iscsid
root     12494   996  0 09:41 pts/2    00:00:00 grep --color=auto iscsid
root     13326     1  0 Dec17 ?        00:00:01 /sbin/iscsid
root     13327     1  0 Dec17 ?        00:00:05 /sbin/iscsid

Clone the datera-csi repository

$ git clone http://github.com/Datera/datera-csi

Run the iscsi-recv service installer

$ ./assets/setup_iscsi.sh
[INFO] Dependency checking
[INFO] Downloading iscsi-recv
[INFO] Verifying checksum
[INFO] Changing file permissions
[INFO] Registering iscsi-recv service
Created symlink from /etc/systemd/system/multi-user.target.wants/iscsi-recv.service to /lib/systemd/system/iscsi-recv.service.
[INFO] Starting iscsi-recv service
[INFO] Verifying service started correctly
root      4879     1  0 19:50 ?        00:00:00 /var/datera/iscsi-recv -addr unix:////var/datera/csi-iscsi.sock

Check that the iscsi-recv service is running

$ systemctl --all | grep iscsi-recv
iscsi-recv.service       loaded    active     running   iscsi-recv container to host iscsiadm adapter service

Update install YAML

Modify deploy/kubernetes/releases/1.0/csi-datera-v1.0.x.yaml and update the values for the following environment variables in the yaml:

  • DAT_MGMT -- The management IP of the Datera system
  • DAT_USER -- The username of your Datera account
  • DAT_PASS -- The password for your Datera account
  • DAT_TENANT -- The tenant to use with your Datera account
  • DAT_API -- The API version to use when communicating (should be 2.2, currently the only version the plugin supports)

There are two locations for each value within the yaml that should be modified: one under the StatefulSet and the other under the DaemonSet. Note that the yaml does not come with a built-in StorageClass; create one using the example in the Create StorageClass section below, modifying it to suit your deployment needs.
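
For illustration only, after editing, the relevant env entries under both the StatefulSet and the DaemonSet would look roughly like the sketch below; the management IP, credentials, and tenant shown are placeholders, not values taken from this document:

        env:
          - name: DAT_MGMT
            value: "172.28.94.100"
          - name: DAT_USER
            value: "admin"
          - name: DAT_PASS
            value: "password"
          - name: DAT_TENANT
            value: "/root"
          - name: DAT_API
            value: "2.2"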

Optional Secrets

Instead of putting the username and password directly in the yaml file, you can use the Kubernetes secrets capability.

NOTE: This must be done before installing the CSI driver.

First, create the secrets. The values are base64-encoded strings, and the two required keys are "username" and "password". Modify and save the following yaml as secrets.yaml.

apiVersion: v1
kind: Secret
metadata:
  name: datera-secret
  namespace: kube-system
type: Opaque
data:
  # base64 encoded username
  # generate this via "$ echo -n 'your-username' | base64"
  username: YWRtaW4=
  # base64 encoded password
  # generate this via "$ echo -n 'your-password' | base64"
  password: cGFzc3dvcmQ=

Then create the secrets.

$ kubectl create -f secrets.yaml

Now install the CSI driver like above, but using the "secrets" yaml:

$ kubectl create -f csi-datera-secrets-v1.0.10.yaml

The only difference between the "secrets" yaml and the regular yaml is the use of secrets for the "username" and "password" fields.
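
For reference, a minimal sketch of how the "secrets" yaml typically wires in those fields, assuming the datera-secret created above in the kube-system namespace (the exact manifest shipped with the release may differ slightly):

        env:
          - name: DAT_USER
            valueFrom:
              secretKeyRef:
                name: datera-secret
                key: username
          - name: DAT_PASS
            valueFrom:
              secretKeyRef:
                name: datera-secret
                key: password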

Install Datera CSI driver

The driver install manifest file is available under the deploy/kubernetes/release/1.0 directory. Pick the latest version, then run the following command to install the Datera CSI driver.

$ kubectl create -f csi-datera-v1.0.x.yaml

For example:

# kubectl create -f csi-datera-secrets-1.0.10.yaml
storageclass.storage.k8s.io/dat-block-storage created
serviceaccount/csi-datera-controller-sa created
clusterrole.rbac.authorization.k8s.io/csi-datera-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-datera-provisioner-binding created
clusterrole.rbac.authorization.k8s.io/csi-datera-attacher-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-datera-attacher-binding created
clusterrole.rbac.authorization.k8s.io/csi-datera-snapshotter-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-datera-snapshotter-binding created
statefulset.apps/csi-provisioner created
serviceaccount/csi-datera-node-sa created
clusterrole.rbac.authorization.k8s.io/csi-datera-node-driver-registrar-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-datera-node-driver-registrar-binding created
daemonset.apps/csi-node created
#

You should see one csi-provisioner pod and N csi-node pods running in the kube-system namespace, where N is the number of Kubernetes worker nodes. The csi-provisioner pod can run on any Kubernetes node.

# kubectl get pods -n kube-system -o wide | egrep 'NAME|csi-'
NAME                                       READY   STATUS    RESTARTS   AGE     IP           NODE    NOMINATED NODE   READINESS GATES
csi-node-cwpfk                             3/3     Running   0          164m    1.1.1.1      node1   <none>           <none>
csi-node-lx66c                             3/3     Running   0          164m    2.2.2.2      node2   <none>           <none>
csi-provisioner-0                          6/6     Running   0          163m    1.1.1.1      node1   <none>           <none>
#

Performing Volume operations

Create StorageClass

Before creating a Volume, a StorageClass needs to be created. This StorageClass acts like a template where you can specify your Volume and QoS parameters. The parameters can be placed within the parameters section of the StorageClass.

In the following example we configure volumes with a replica count of 3 and a QoS limit of 1000 total IOPS. All parameters must be strings (pure numbers and booleans should be enclosed in quotes). Save the following as the 'csi-storageclass.yaml' file.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dat-block-storage
  namespace: kube-system
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: dsp.csi.daterainc.io
parameters:
  replica_count: "3"
  total_iops_max: "1000"
$ kubectl create -f csi-storageclass.yaml

Here is a list of the supported parameters for the plugin:

Name                   Default
replica_count          3
placement_mode         hybrid (Use this for Datera OS versions < 3.3)
placement_policy       default (Use this for Datera OS versions >= 3.3)
ip_pool                default
template               ""
round_robin            false
read_iops_max          0
write_iops_max         0
total_iops_max         0
read_bandwidth_max     0
write_bandwidth_max    0
total_bandwidth_max    0
iops_per_gb            0
bandwidth_per_gb       0
fs_type                ext4 (Currently the only supported values are 'ext4' and 'xfs')
fs_args                -E lazy_itable_init=0,lazy_journal_init=0,nodiscard -F
delete_on_unmount      false

NOTE:

  1. All parameters MUST be strings in the yaml file, otherwise the kubectl parser will fail. If in doubt, enclose each in double quotes ("")

  2. The 'placement_mode' parameter will continue to work in Datera OS versions >= 3.3; however, 'placement_policy' takes precedence.

  3. StorageClass parameters cannot be patched using the "kubectl apply -f <>" command. Any change requires deleting and re-creating the StorageClass with the modified parameters. You can also use "kubectl replace ..", which deletes and replaces the StorageClass. Only subsequent PVCs/PVs that reference the modified StorageClass will see the change; there is no impact to existing PVCs/PVs.

$ kubectl replace -f csi-storageclass.yaml --force
storageclass.storage.k8s.io "csi-storageclass" deleted
storageclass.storage.k8s.io/csi-storageclass replaced
$
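
As an illustration of combining several of the parameters listed above, a StorageClass for xfs volumes with per-GB QoS limits might look like the following sketch; the name and values are placeholders and should be tuned for your deployment:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dat-xfs-qos
provisioner: dsp.csi.daterainc.io
parameters:
  replica_count: "2"
  placement_policy: "default"
  fs_type: "xfs"
  iops_per_gb: "10"
  bandwidth_per_gb: "1"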

Create a Volume

A volume on the Datera backend is created automatically when a Persistent Volume Claim (PVC) is created; this PVC can then be referenced in a Pod manifest to use the volume. Save the following as csi-pvc.yaml.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: dat-block-storage
$ kubectl create -f csi-pvc.yaml
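
Once created, the PVC should bind to a dynamically provisioned PV backed by a Datera volume. You can confirm the binding with:

$ kubectl get pvc csi-pvc -n default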

Create an Application using the Volume

Create and save the following as csi-app.yaml. Note that this Pod claims a volume by referencing the PVC named "csi-pvc".

kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-app-image
      image: alpine
      volumeMounts:
      - mountPath: "/data"
        name: my-app-volume
      command: [ "sleep", "1000000" ]
  volumes:
    - name: my-app-volume
      persistentVolumeClaim:
        claimName: csi-pvc
$ kubectl create -f csi-app.yaml
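
After the Pod reaches the Running state, you can verify that the volume is mounted inside the container, for example:

$ kubectl exec -it my-csi-app -- df -h /data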

Creating and using Volume Snapshots

To create a volume snapshot in Kubernetes, you can use the following VolumeSnapshotClass and VolumeSnapshot as an example. Save the following as csi-snap-class.yaml.

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: csi-snap-class
driver: dsp.csi.daterainc.io
deletionPolicy: Retain
parameters:

Here is a list of the supported snapshot parameters for the plugin (v1.0.7+):

Name                   Default
remote_provider_uuid   ""
type                   local (options: local, remote, local_and_remote)

Example VolumeSnapshotClass yaml file with parameters (when saving snapshot to a remote provider):

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: csi-snap-class
driver: dsp.csi.daterainc.io
deletionPolicy: Retain
parameters:
  remote_provider: c7f97223-81d9-44fe-ae7b-7c27daf6c288
  type: local_and_remote
$ kubectl create -f csi-snap-class.yaml

Create and save the following as csi-snap.yaml.

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: csi-pvc-snap
spec:
  volumeSnapshotClassName: csi-snap-class
  source:
    persistentVolumeClaimName: csi-pvc
$ kubectl create -f csi-snap.yaml

We can now view the snapshot using kubectl.

# kubectl get volumesnapshot
NAME           READYTOUSE   SOURCEPVC   SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS    SNAPSHOTCONTENT                                    CREATIONTIME   AGE
csi-pvc-snap   true         csi-pvc                             1Gi           csi-snap-class   snapcontent-4ea69ee8-444c-4106-b000-0b7c91cb847f   22m            22m
# 
# kubectl get volumesnapshotcontents
NAME                                               READYTOUSE   RESTORESIZE   DELETIONPOLICY   DRIVER                 VOLUMESNAPSHOTCLASS   VOLUMESNAPSHOT   AGE
snapcontent-4ea69ee8-444c-4106-b000-0b7c91cb847f   true         1073741824    Retain           dsp.csi.daterainc.io   csi-snap-class        csi-pvc-snap     22m
#

Now we can use this snapshot to create a new PVC.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc-restore
  namespace: default
spec:
  storageClassName: dat-block-storage
  dataSource:
    name: csi-pvc-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
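
Save the above as, for example, csi-pvc-restore.yaml (the filename is arbitrary) and create it the same way as the other manifests:

$ kubectl create -f csi-pvc-restore.yaml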

More Examples

For other examples, such as resizing volumes, adding CHAP support, overriding Datera templates, using PVCs for deployment, etc., please check the 'deploy/examples' folder.

Collecting Logs

You can collect logs from the entire Datera CSI plugin via the csi_log_collect.sh script in the datera-csi/assets folder. Basic log collection is very simple: to collect logs from all the master and worker nodes, run the script with no arguments on the Kubernetes master node. To collect logs from a specific worker or master node, pass the '-p' option followed by the pod name, as shown in the examples below.

$ chmod +x ./assets/csi_log_collect.sh
$ ./assets/csi_log_collect.sh
$ ./assets/csi_log_collect.sh -p csi-provisioner
$ ./assets/csi_log_collect.sh -p csi-node

Odd Case Environment Variables

Sometimes customer setups require a bit of flexibility. These environment variables allow tuning the plugin to behave in atypical ways; a sketch of setting one of them follows the list. USE THESE WITH CAUTION.

  • DAT_SOCKET -- Socket that driver listens on
  • DAT_HEARTBEAT -- Interval to perform Datera heartbeat function
  • DAT_TYPE -- Which CSI services to expose on the binary
  • DAT_VOL_PER_NODE -- Max volumes per node setting
  • DAT_DISABLE_MULTIPATH -- Disable multipath (for use with bonded NICs)
  • DAT_REPLICA_OVERRIDE -- Override set replica counts to 1 (for single-node systems)
  • DAT_METADATA_DEBUG -- Calculates metadata size before sending (for checking 2KB hard-limit)
  • DAT_DISABLE_LOGPUSH -- Disables pushing plugin logs to the Datera system
  • DAT_LOGPUSH_INTERVAL -- Sets interval between logpushes to the Datera system
  • DAT_FORMAT_TIMEOUT -- Sets the timeout duration for volume format calls (default 60 seconds)
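
As a sketch of how one of these would be applied, add (or edit) the variable in the env section of the csi-node DaemonSet and/or csi-provisioner StatefulSet in the install yaml before applying it; DAT_DISABLE_MULTIPATH and the value below are chosen purely for illustration:

        env:
          - name: DAT_DISABLE_MULTIPATH
            value: "true"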

Note on K8S setup through Rancher

In a Rancher setup, the kubelet runs inside a container and hence may not have access to the socket /var/datera/csi-iscsi.sock on the host. Run '# nc -U /var/datera/csi-iscsi.sock' from inside the kubelet container and verify whether the socket is listening. If not, a bind mount is needed, as described at https://docs.docker.com/storage/bind-mounts/:

  --mount type=bind,source=/var/datera/csi-iscsi.sock,target=/var/datera/csi-iscsi.sock

Driver upgrades and downgrades

Driver upgrades and downgrades can be done by running 'kubectl delete -f <csi_driver_yaml_used_to_create>' followed by 'kubectl create -f <csi_driver_yaml_for_new_version>'. For example, a downgrade from v1.0.11 to v1.0.10 can be done as follows:

# kubectl delete -f csi-datera-1.0.11.yaml
# kubectl create -f csi-datera-1.0.10.yaml


