redis / redis

Redis is an in-memory database that persists on disk. The data model is key-value, but many different kinds of values are supported: Strings, Lists, Sets, Sorted Sets, Hashes, Streams, HyperLogLogs, Bitmaps.

Home Page: http://redis.io

ERROR "can't update cluster config file." when using azurefile as storage

SongJinZe1 opened this issue

I'm using redis:6.0.14, the official Redis image on Docker Hub, deployed in Azure's AKS with azurefile as storage. Since K8S version 1.28, Redis fails to start with the error "Fatal: can't update cluster config file." This error does not appear on K8S 1.27, nor on 1.28 with local disk storage or azureblob storage. Does anyone have a clue about this?
This is not an isolated case; you can find other reports in bitnami/charts#20355.
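===where the error comes from===
For context (paraphrased from the Redis 6.0 cluster.c source; worth re-checking against the exact tag you run): the message is emitted by clusterSaveConfigOrDie(), which is called right after "No cluster configuration found" to write the fresh nodes file, and it exits the process when clusterSaveConfig() fails anywhere in its open/write/fsync sequence:

void clusterSaveConfigOrDie(int do_fsync) {
    if (clusterSaveConfig(do_fsync) == -1) {
        serverLog(LL_WARNING,"Fatal: can't update cluster config file.");
        exit(1);
    }
}

Note that the ls output below shows nodes-6380.conf exists with 0 bytes, which is consistent with open() succeeding on the azurefile mount and a later write()/fsync() failing.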
===log===
15:C 17 Apr 2024 01:33:34.380 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
15:C 17 Apr 2024 01:33:34.401 # Redis version=6.0.14, bits=64, commit=00000000, modified=0, pid=15, just started
15:C 17 Apr 2024 01:33:34.425 # Configuration loaded
15:M 17 Apr 2024 01:33:34.458 * No cluster configuration found, I'm b4b66dccb01f8909f617dee6ebf5810731d261fa
15:M 17 Apr 2024 01:33:34.490 # Fatal: can't update cluster config file.
===Inside the storage directory===
root@redis-6380-0:/data# ls -l
total 1
-rwxrwxrwx 1 root root 0 Apr 17 01:33 nodes-6380.conf
-rwxrwxrwx 1 root root 426 Apr 17 01:33 redis6380.log
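===syscall probe (hypothetical)===
Since clusterSaveConfig() boils down to a handful of syscalls on the nodes file, a small probe can show which one the SMB mount rejects. The sketch below is hypothetical (the program and the nodes-probe.conf file name are not part of this issue); it replays the same open/flock/write/fsync/ftruncate sequence and prints the errno of the first failure. It could be compiled with gcc inside the pod, e.g. by temporarily swapping the container command for the commented-out sleep in the statefulset below to get a shell.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/file.h>
#include <unistd.h>

/* Print the step result; abort on the first failing syscall. */
static void check(const char *step, int ok) {
    if (ok) {
        printf("%-9s OK\n", step);
    } else {
        printf("%-9s FAILED: %s\n", step, strerror(errno));
        exit(1);
    }
}

int main(int argc, char **argv) {
    /* Hypothetical probe file; pass a path on the azurefile mount. */
    const char *path = argc > 1 ? argv[1] : "/data/nodes-probe.conf";
    const char buf[] = "probe\n";

    int fd = open(path, O_WRONLY | O_CREAT, 0644);
    check("open", fd != -1);
    /* Redis also holds an exclusive flock() on the cluster config file
     * at startup; locking is a classic trouble spot on CIFS/SMB mounts. */
    check("flock", flock(fd, LOCK_EX | LOCK_NB) != -1);
    check("write", write(fd, buf, sizeof(buf) - 1) == (ssize_t)(sizeof(buf) - 1));
    check("fsync", fsync(fd) != -1);
    check("ftruncate", ftruncate(fd, (off_t)(sizeof(buf) - 1)) != -1);
    check("close", close(fd) != -1);
    unlink(path);
    puts("all steps passed");
    return 0;
}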

===version===
AKS(K8S): 1.28.5
redis:6.0.14

===redis config===
bind 0.0.0.0
cluster-enabled yes
cluster-config-file "nodes-6380.conf"
cluster-node-timeout 15000

daemonize no
supervised no
pidfile "/data/redis6380.pid"
port 6380
tcp-backlog 511
timeout 5

tcp-keepalive 0
loglevel notice
logfile "/data/redis6380.log"
databases 16

stop-writes-on-bgsave-error no
rdbcompression yes
rdbchecksum no
dbfilename "dump6380.rdb"
dir "/data"
slave-serve-stale-data yes
slave-read-only yes
repl-disable-tcp-nodelay no
slave-priority 100
maxmemory 3gb
appendonly no
appendfilename "appendonly6380.aof"
appendfsync everysec
no-appendfsync-on-rewrite yes
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
repl-backlog-size 32mb
protected-mode no

rename-command FLUSHALL XQC-FLUSHALL
rename-command FLUSHDB XQC-FLUSHDB
rename-command KEYS XQC-KEYS

===statefulset info===
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: middleware
  name: redis-6380
spec:
  serviceName: redis-6380
  replicas: 1
  selector:
    matchLabels:
      app: redis-6380
  template:
    metadata:
      labels:
        app: redis-6380
    spec:
      terminationGracePeriodSeconds: 30
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - redis-6380
              topologyKey: kubernetes.io/hostname
      containers:
      - name: redis
        image: redis:6.0.14
        ports:
        - containerPort: 6380
          name: client
        - containerPort: 16380
          name: gossip
        command: ["redis-server", "/etc/redis/redis.conf"]
        #command: ["sleep", "10000"]
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        resources:
          requests:
            cpu: "100m"
            memory: "100Mi"
        volumeMounts:
        - name: conf
          mountPath: /etc/redis/
          readOnly: false
        - name: data
          mountPath: /data
          readOnly: false
      volumes:
      - name: conf
        configMap:
          name: redis-6380
          defaultMode: 0755
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      storageClassName: azurefile
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 5Gi
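
===possible workaround (unverified)===
Comments in the linked bitnami thread point at SMB byte-range locking on the azurefile mount as the likely culprit, and a commonly suggested workaround is a custom azurefile StorageClass with the nobrl mount option, which stops the CIFS client from forwarding byte-range lock requests to the server. This is an assumption based on those reports, not something verified in this issue, and the class name redis-azurefile-nobrl below is made up:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: redis-azurefile-nobrl   # hypothetical name, not from this issue
provisioner: file.csi.azure.com
parameters:
  skuName: Standard_LRS
mountOptions:
  - mfsymlinks
  - actimeo=30
  - nobrl   # assumed fix: don't forward byte-range locks over SMB
reclaimPolicy: Delete
volumeBindingMode: Immediate

The StatefulSet's volumeClaimTemplates would then reference storageClassName: redis-azurefile-nobrl instead of azurefile.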

azurefile uses the SMB protocol, so it can't be accessed like local storage.
But it's weird that azure blob is also a special protocol, so why doesn't it fail? Is it using a proxy to handle this?

We're using azureblob to get around this temporarily, but we're using it directly, without a proxy or any other special handling.