Zeebe cluster fails when node 0 fails
ahmedbeledy opened this issue
Zeebe Version: 8.5.1
Installed locally on 4 machines: one gateway and three broker nodes.
This behavior does not happen in Docker.
Configuration: I created 3 broker nodes so that, per the Raft algorithm, the cluster should tolerate one failed node.
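For context on the expected fault tolerance: a Raft replica set of size N needs a majority (floor(N/2) + 1) to elect a leader and commit entries, so it tolerates floor((N - 1) / 2) failures. A minimal sketch of that arithmetic (illustrative only, not Zeebe code):

```python
# Raft quorum math: a replica set of size N needs floor(N/2) + 1 votes,
# so it tolerates N - quorum = floor((N - 1) / 2) simultaneous failures.
def quorum(replicas: int) -> int:
    return replicas // 2 + 1

def tolerated_failures(replicas: int) -> int:
    return replicas - quorum(replicas)

for n in (1, 3, 5):
    print(f"replicas={n} quorum={quorum(n)} tolerated_failures={tolerated_failures(n)}")

# With replicationFactor: 3, quorum is 2 and one broker may fail --
# so losing any single broker, including node 0, should not take the
# whole cluster down.
```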
Gateway config:

zeebe:
  gateway:
    network:
      host: 192.168.8.115
      port: 26500
    cluster:
      host: 192.168.8.115
      port: 26502
      initialContactPoints: [192.168.8.114:26502, 192.168.8.110:26502, 192.168.8.105:26502]
      # initialContactPoints: [192.168.8.114:26502]
    security:
      enabled: false
    multiTenancy:
      enabled: false
Broker configs

First node:

zeebe:
  broker:
    gateway:
      enable: false
    network:
      host: 192.168.8.114
      port: 26500
      security:
        enabled: false
    data:
      directory: data
    cluster:
      nodeId: 0
      partitionsCount: 2
      replicationFactor: 3
      clusterSize: 3
      initialContactPoints: [192.168.8.110:26502, 192.168.8.114:26502, 192.168.8.105:26502]
Second node:

zeebe:
  broker:
    gateway:
      enable: false
    network:
      host: 192.168.8.110
      port: 26500
      security:
        enabled: false
    data:
      directory: data
    cluster:
      nodeId: 1
      partitionsCount: 2
      replicationFactor: 3
      clusterSize: 3
      initialContactPoints: [192.168.8.110:26502, 192.168.8.114:26502, 192.168.8.105:26502]
Third node:

zeebe:
  broker:
    gateway:
      enable: false
    network:
      host: 192.168.8.105
      port: 26500
      security:
        enabled: false
    data:
      directory: data
    cluster:
      nodeId: 2
      partitionsCount: 2
      replicationFactor: 3
      clusterSize: 3
      initialContactPoints: [192.168.8.110:26502, 192.168.8.114:26502, 192.168.8.105:26502]
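With clusterSize, replicationFactor, and partitionsCount set as above, every partition is replicated on every broker, so the data itself should survive any single broker failure. A small sketch of which brokers host each partition, under the assumption of Zeebe's default round-robin partition distribution (illustrative, not Zeebe's actual code):

```python
# Assumed round-robin layout: partition p (1-based) starts at broker
# (p - 1) mod clusterSize, and the remaining replicas follow in order.
def partition_replicas(partition_id: int, cluster_size: int,
                       replication_factor: int) -> list[int]:
    start = (partition_id - 1) % cluster_size
    return [(start + i) % cluster_size for i in range(replication_factor)]

for p in (1, 2):
    print(f"partition {p} -> brokers {partition_replicas(p, 3, 3)}")

# Because replicationFactor == clusterSize here, both partitions are
# hosted on brokers 0, 1, and 2; losing node 0 leaves 2 of 3 replicas,
# which is still a Raft quorum for each partition.
```

If the cluster nevertheless becomes unavailable when node 0 dies, the problem is more likely in networking or contact-point configuration than in the replication layout itself.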
Hello @ahmedbeledy,

Is this camunda-bpm-platform ticket concerning Camunda 8? I couldn't understand enough from the ticket description; could you provide me with some context, please?
This ticket relates to the Zeebe service deployed on local machines.