cockroachdb / helm-charts

Helm charts for cockroachdb

Cluster Initialization is missing for custom `conf.join`

shaardie opened this issue · comments

I have a CockroachDB setup spanning multiple Kubernetes clusters. Since the automatically generated join list only contains the services from the local cluster but not from the other cluster, I used a custom conf.join list with the services from both clusters. Unfortunately, the job.init that initializes the cluster is then not created in either cluster, due to

{{ $isClusterInitEnabled := and (eq (len .Values.conf.join) 0) (not (index .Values.conf `single-node`)) }}
{{ $isDatabaseProvisioningEnabled := .Values.init.provisioning.enabled }}
{{- if or $isClusterInitEnabled $isDatabaseProvisioningEnabled }}

So I end up with an uninitialized database cluster.
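For reference, the values I pass look roughly like this (the hostnames are just placeholders for the per-cluster public services):

conf:
  join:
    - cockroachdb-public.cluster-a.example.com:26257
    - cockroachdb-public.cluster-b.example.com:26257

With conf.join non-empty, eq (len .Values.conf.join) 0 is false, so $isClusterInitEnabled evaluates to false and the init job is never rendered.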

What would be the best way to do this? Can we have something like

init:
  enabled: true

to explicitly enable the init on one of the clusters?

We've implemented that in our forked version of the Helm chart.

We use the join flag to join nodes from multiple clusters, which means that isClusterInitEnabled would never evaluate to true for us either.

We've changed it to this:

{{ $isClusterInitEnabled := and (not (index .Values.conf `single-node`)) .Values.init.enabled }}

We then control the enabled flag manually per cluster.
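Roughly, the per-cluster values then look like this (hostnames are placeholders; only one cluster enables the init job):

# cluster A values (runs the init job)
conf:
  join:
    - cockroachdb-public.cluster-a.example.com:26257
    - cockroachdb-public.cluster-b.example.com:26257
init:
  enabled: true

# cluster B values: same conf.join, but init disabled
init:
  enabled: false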

Personally I think it would make sense to introduce this here, and I was thinking of doing that to avoid maintaining a forked chart. @pseudomuto WDYT?

Just to note a quick workaround for this (until it is fixed), as I had the same problem.
You can specify the join list members in the statefulset args instead; since conf.join then stays empty, the chart still creates the init job:

statefulset:
  args:
    - --join=cockroachdb-0.cockroachdb.database.svc.cluster.local:26257,cockroachdb-1.cockroachdb.database.svc.cluster.local:26257,cockroachdb-2.cockroachdb.database.svc.cluster.local:26257
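In case it helps, applying this is just the usual helm upgrade with those values; the chart repo URL, release name, values filename, and the database namespace here are assumptions based on the hostnames above:

helm repo add cockroachdb https://charts.cockroachdb.com/
helm upgrade --install cockroachdb cockroachdb/cockroachdb -n database -f my-values.yaml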