kelseyhightower / kubernetes-redis-cluster

Kubernetes Redis Cluster configs and tutorial

Simple Redis setup on GKE?

dstroot opened this issue · comments

Hey Kelsey - there are Redis examples in Kubernetes/examples (all slightly different) and this one. But basically, if I start simple with one Deployment running a single replica of a Redis pod, with Redis in append-only mode backed by a GCE persistent disk, then if it fails for whatever reason, won't the Deployment just recreate the pod and put me back in business? Oh, and a Service of course.

I'm not sure why I need all the complexity of master/slave/sentinel or clustering. If I really wanted high availability, wouldn't I just use the recommended solution, i.e. "Redis Sentinel is the official high availability solution for Redis"?

Basically I am just looking for guidance on how to set up a "reasonably" high-availability Redis service on GKE. Anything you want to point me at?

Hi @dstroot, what options have you decided to use? I am interested in your findings as I am going through the same issue at the moment.

I kept it simple - I have a Redis service and one pod; if it ever crashes (it hasn't, but I tested it using Redis' built-in command to crash itself), it just restarts another container within a few seconds. With the data on the persistent disk I can snapshot it, so the data is backed up and the pod will restart.

This is plenty 'available' for my needs. All the complex Redis clustering seems to be for people who run it outside a system like Kubernetes - and it keeps evolving; it is different again in Redis 4.
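For reference, the single-replica setup described above could be sketched roughly like this. This is a hypothetical reconstruction, not the poster's actual manifests - the names (`redis`, `redis-data`) and the 10Gi size are assumptions:

```yaml
# Sketch only: one Deployment, one replica, AOF persistence on a PD-backed claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:3.2-alpine
          # The redis image's entrypoint passes these flags to redis-server.
          args: ["--appendonly", "yes", "--dir", "/data"]
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: redis-data
              mountPath: /data
      volumes:
        - name: redis-data
          persistentVolumeClaim:
            claimName: redis-data
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    app: redis
```

If the pod dies, the Deployment recreates it and the claim reattaches, so the AOF file survives the restart.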

Hi @dstroot, would you mind sharing the setup you have in terms of Deployment and Service? I am going through Kubernetes/examples and have just come across this example too, and they are wildly different.
I only need a simple cache; ideally I'd like to be able to either persist the data or rehydrate it if it crashed and restarted.
I don't think I will need it clustered just yet, although being able to add that later would be ideal.

Thanks

@Mattchewone if you are still interested, I've got it running with a StatefulSet on GKE.
I tried killing the node it was running on, and it seems to come back with the correct data on a different node. My redis.yaml config:

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  zone: europe-west1-d
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis
  name: redis  # must match serviceName in the StatefulSet below
spec:
  ports:
    - name: redis
      protocol: TCP
      port: 6379
      targetPort: 6379
  selector:
    app: redis  # must match the pod template labels, not the Service name
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-conf
data:
  redis.conf: |
    appendonly yes
    protected-mode no
    bind 0.0.0.0
    port 6379
    dir /var/lib/redis
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: redis
          image: redis:3.2.0-alpine
          command:
            - redis-server
          args:
            - /etc/redis/redis.conf
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 6379
              name: redis
          volumeMounts:
            - name: redis-data
              mountPath: /var/lib/redis
            - name: redis-conf
              mountPath: /etc/redis
      volumes:
        - name: redis-conf
          configMap:
            name: redis-conf
            items:
              - key: redis.conf
                path: redis.conf
  volumeClaimTemplates:
    - metadata:
        name: redis-data
        annotations:
          volume.beta.kubernetes.io/storage-class: fast
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 10Gi
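To sanity-check a config like this, something along these lines should work, assuming kubectl is pointed at the right cluster and the manifests above are saved as redis.yaml (the pod name redis-0 follows the StatefulSet convention of name plus ordinal):

```shell
# Apply the manifests and confirm the pod, claim, and service come up.
kubectl apply -f redis.yaml
kubectl get statefulset,pvc,svc

# Repeat the failover test described above: delete the pod and watch
# the StatefulSet recreate it with the same volume attached.
kubectl delete pod redis-0
kubectl get pod redis-0 -w
```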

There is also this project https://github.com/corybuecker/redis-stateful-set for HA Redis, but it seems a bit fragile. I've tested it, and it recovered even when I killed all of the nodes in my cluster, but the sentinel count seemed to keep increasing after each downed node.

Hope that helps!

Thanks @gytisgreitai I will take a look at that. Thanks for taking the time to share.

@gytisgreitai I tried out your manifests, but the PVC goes to a Pending state and never recovers. How do I get a volume attached via the StorageClass? Point me to the relevant doc if there is one; google-fu failed me.

@Hashfyre do you have the StorageClass configured? E.g.:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  zone: europe-west1-b
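When a claim stays in Pending, the events on the claim usually say why (StorageClass missing under the referenced name, zone mismatch between the pd-ssd and the nodes, etc.). A quick way to check, assuming the PVC name redis-data-redis-0 produced by the volumeClaimTemplates above (template name, StatefulSet name, ordinal):

```shell
# Confirm the StorageClass exists under the name the claim references.
kubectl get storageclass

# The Events section at the bottom explains why provisioning is stuck.
kubectl describe pvc redis-data-redis-0
```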

@gytisgreitai Hey, thanks a lot for your configuration file. It was very easy to set up that way. There is a problem though, and I can't find the solution. I hope maybe you can help me:

Apparently every time the database writes to disk (I can see that in my Kubernetes logs), my Redis database loses all its keys, but not the memory associated with them. I simply can't find anything in my Redis database anymore.

After restarting, it says it recovered from my append-only file, so it does seem to persist the data somehow.
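One way to check whether the keys are really gone, rather than just invisible to the client, is to ask Redis directly from inside the pod (assuming it is named redis-0 as in the StatefulSet above):

```shell
# Count keys and inspect persistence state directly on the server.
kubectl exec -it redis-0 -- redis-cli DBSIZE
kubectl exec -it redis-0 -- redis-cli INFO keyspace
kubectl exec -it redis-0 -- redis-cli INFO persistence
```

If DBSIZE drops to zero exactly when the AOF rewrite happens, the keys were genuinely deleted (e.g. by expiry or eviction) rather than hidden; if it stays stable, the problem is on the client side.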

Here is a graph of my memory usage. The circles mark the write-to-disk events:
[image: memory usage graph]

@DavidKuennen what does the INFO command show? How are you sure that there are no keys?