install with flux not working
vincentserpoul opened this issue
Easy to reproduce:
- On any cluster, install flux
- deploy crdb using flux helm CRDs
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: cockroachdb
---
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: cockroachdb
  namespace: flux-system
spec:
  interval: 1m
  url: https://charts.cockroachdb.com
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: cockroachdb
  namespace: cockroachdb
spec:
  interval: 1m
  chart:
    spec:
      chart: "cockroachdb"
      version: ">=7.0.1"
      sourceRef:
        kind: HelmRepository
        name: cockroachdb
        namespace: flux-system
      interval: 1m
  targetNamespace: cockroachdb
```
I am not sure if it's on the flux side or the cockroachdb side, but it's worth creating an issue to centralize the fix.
I opened an issue on the flux side here.
Hello, I saw on the related flux issue that "Direct helm install works." — to clarify, do you mean you got CockroachDB working with helm but without flux? If so, this seems like an issue on the flux side. I saw that they're providing advice over there, but let us know if there's something on our side that we need to look at.
Hi David,
It seems I can reproduce the issue with the --wait option, without flux: the problem appears when I run the helm install for the first time with --wait.
I am using k3d with k3s:v1.23.4-k3s1. I'm using cert-manager for the certs (the default self-signed cert does not work) with these values:
```yaml
tls:
  certs:
    selfSigner:
      enabled: false
    certManager: true
    certManagerIssuer:
      group: cert-manager.io
      kind: ClusterIssuer
      name: mkcert-cluster-issuer
    useCertManagerV1CRDs: true
```
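For reference, `mkcert-cluster-issuer` is just a cert-manager CA ClusterIssuer backed by my local mkcert CA; a sketch of what it looks like (the secret name here is illustrative, from my setup, not from the chart):

```yaml
# Hypothetical ClusterIssuer matching the values above: a CA issuer
# signing from a CA keypair stored in a Secret in the cert-manager namespace.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: mkcert-cluster-issuer
spec:
  ca:
    # Secret holding the CA's tls.crt/tls.key (e.g. exported from mkcert)
    secretName: mkcert-ca-key-pair
```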
My first helm install:
```shell
helm install -f values.yaml cockroachdb cockroachdb/cockroachdb -n cockroachdb --wait
```
cockroachdb 0s Normal NoPods poddisruptionbudget/cockroachdb-budget No matching pods found
cockroachdb 0s Normal Issuing certificate/cockroachdb-node Issuing certificate as Secret does not exist
cockroachdb 0s Normal SuccessfulCreate statefulset/cockroachdb create Claim datadir-cockroachdb-0 Pod cockroachdb-0 in StatefulSet cockroachdb success
cockroachdb 0s Normal SuccessfulCreate statefulset/cockroachdb create Pod cockroachdb-0 in StatefulSet cockroachdb successful
cockroachdb 0s Normal Issuing certificate/cockroachdb-root-client Issuing certificate as Secret does not exist
cockroachdb 0s Normal WaitForFirstConsumer persistentvolumeclaim/datadir-cockroachdb-0 waiting for first consumer to be created before binding
cockroachdb 0s Normal WaitForFirstConsumer persistentvolumeclaim/datadir-cockroachdb-1 waiting for first consumer to be created before binding
cockroachdb 0s Normal SuccessfulCreate statefulset/cockroachdb create Claim datadir-cockroachdb-1 Pod cockroachdb-1 in StatefulSet cockroachdb success
cockroachdb 0s Normal SuccessfulCreate statefulset/cockroachdb create Pod cockroachdb-1 in StatefulSet cockroachdb successful
cockroachdb 0s Normal ExternalProvisioning persistentvolumeclaim/datadir-cockroachdb-0 waiting for a volume to be created, either by external provisioner "rancher.io/local-path" or manually created by system administrator
cockroachdb 0s Normal Provisioning persistentvolumeclaim/datadir-cockroachdb-0 External provisioner is provisioning volume for claim "cockroachdb/datadir-cockroachdb-0"
cockroachdb 0s Normal ExternalProvisioning persistentvolumeclaim/datadir-cockroachdb-0 waiting for a volume to be created, either by external provisioner "rancher.io/local-path" or manually created by system administrator
cockroachdb 0s Normal SuccessfulCreate statefulset/cockroachdb create Claim datadir-cockroachdb-2 Pod cockroachdb-2 in StatefulSet cockroachdb success
cockroachdb 0s Normal WaitForFirstConsumer persistentvolumeclaim/datadir-cockroachdb-2 waiting for first consumer to be created before binding
cockroachdb 0s Normal Provisioning persistentvolumeclaim/datadir-cockroachdb-1 External provisioner is provisioning volume for claim "cockroachdb/datadir-cockroachdb-1"
cockroachdb 0s Normal ExternalProvisioning persistentvolumeclaim/datadir-cockroachdb-1 waiting for a volume to be created, either by external provisioner "rancher.io/local-path" or manually created by system administrator
cockroachdb 0s Normal SuccessfulCreate statefulset/cockroachdb create Pod cockroachdb-2 in StatefulSet cockroachdb successful
cockroachdb 0s Normal ExternalProvisioning persistentvolumeclaim/datadir-cockroachdb-1 waiting for a volume to be created, either by external provisioner "rancher.io/local-path" or manually created by system administrator
cockroachdb 0s Normal ExternalProvisioning persistentvolumeclaim/datadir-cockroachdb-2 waiting for a volume to be created, either by external provisioner "rancher.io/local-path" or manually created by system administrator
cockroachdb 0s Normal Generated certificate/cockroachdb-root-client Stored new private key in temporary Secret resource "cockroachdb-root-client-8nhmc"
cockroachdb 0s Normal Requested certificate/cockroachdb-root-client Created new CertificateRequest resource "cockroachdb-root-client-qwp47"
cockroachdb 0s Normal cert-manager.io certificaterequest/cockroachdb-root-client-qwp47 Certificate request has been approved by cert-manager.io
cockroachdb 0s Normal CertificateIssued certificaterequest/cockroachdb-root-client-qwp47 Certificate fetched from issuer successfully
cockroachdb 0s Normal Issuing certificate/cockroachdb-root-client The certificate has been successfully issued
cockroachdb 0s Normal Generated certificate/cockroachdb-node Stored new private key in temporary Secret resource "cockroachdb-node-knq7w"
cockroachdb 0s Normal Requested certificate/cockroachdb-node Created new CertificateRequest resource "cockroachdb-node-dpvhc"
cockroachdb 0s Normal cert-manager.io certificaterequest/cockroachdb-node-dpvhc Certificate request has been approved by cert-manager.io
cockroachdb 0s Normal CertificateIssued certificaterequest/cockroachdb-node-dpvhc Certificate fetched from issuer successfully
cockroachdb 0s Normal Issuing certificate/cockroachdb-node The certificate has been successfully issued
cockroachdb 0s Normal Provisioning persistentvolumeclaim/datadir-cockroachdb-2 External provisioner is provisioning volume for claim "cockroachdb/datadir-cockroachdb-2"
kube-system 0s Normal Pulling pod/helper-pod-create-pvc-a2c405cc-ade6-49bb-9876-3d7e5898f2c6 Pulling image "rancher/mirrored-library-busybox:1.34.1"
kube-system 0s Normal Pulling pod/helper-pod-create-pvc-d9d8fc7e-4a57-473d-b8e5-01f9dce00765 Pulling image "rancher/mirrored-library-busybox:1.34.1"
kube-system 0s Normal Pulling pod/helper-pod-create-pvc-15f0f89b-65a8-4085-ae0d-340692281f3a Pulling image "rancher/mirrored-library-busybox:1.34.1"
kube-system 0s Normal Pulled pod/helper-pod-create-pvc-d9d8fc7e-4a57-473d-b8e5-01f9dce00765 Successfully pulled image "rancher/mirrored-library-busybox:1.34.1" in 10.06556221s
kube-system 0s Normal Created pod/helper-pod-create-pvc-d9d8fc7e-4a57-473d-b8e5-01f9dce00765 Created container helper-pod
kube-system 0s Normal Started pod/helper-pod-create-pvc-d9d8fc7e-4a57-473d-b8e5-01f9dce00765 Started container helper-pod
kube-system 0s Normal Pulled pod/helper-pod-create-pvc-15f0f89b-65a8-4085-ae0d-340692281f3a Successfully pulled image "rancher/mirrored-library-busybox:1.34.1" in 10.050039517s
kube-system 0s Normal Created pod/helper-pod-create-pvc-15f0f89b-65a8-4085-ae0d-340692281f3a Created container helper-pod
kube-system 0s Warning FailedMount pod/helper-pod-create-pvc-d9d8fc7e-4a57-473d-b8e5-01f9dce00765 MountVolume.SetUp failed for volume "script" : object "kube-system"/"local-path-config" not registered
cockroachdb 0s Normal ProvisioningSucceeded persistentvolumeclaim/datadir-cockroachdb-1 Successfully provisioned volume pvc-d9d8fc7e-4a57-473d-b8e5-01f9dce00765
kube-system 0s Warning FailedMount pod/helper-pod-create-pvc-d9d8fc7e-4a57-473d-b8e5-01f9dce00765 MountVolume.SetUp failed for volume "kube-api-access-tg7ts" : object "kube-system"/"kube-root-ca.crt" not registered
kube-system 0s Normal Started pod/helper-pod-create-pvc-15f0f89b-65a8-4085-ae0d-340692281f3a Started container helper-pod
kube-system 0s Warning FailedMount pod/helper-pod-create-pvc-d9d8fc7e-4a57-473d-b8e5-01f9dce00765 MountVolume.SetUp failed for volume "script" : object "kube-system"/"local-path-config" not registered
kube-system 0s Warning FailedMount pod/helper-pod-create-pvc-d9d8fc7e-4a57-473d-b8e5-01f9dce00765 MountVolume.SetUp failed for volume "kube-api-access-tg7ts" : object "kube-system"/"kube-root-ca.crt" not registered
cockroachdb 0s Normal Scheduled pod/cockroachdb-1 Successfully assigned cockroachdb/cockroachdb-1 to k3d-gitops-agent-1
kube-system 0s Warning FailedMount pod/helper-pod-create-pvc-15f0f89b-65a8-4085-ae0d-340692281f3a MountVolume.SetUp failed for volume "script" : object "kube-system"/"local-path-config" not registered
cockroachdb 0s Normal ProvisioningSucceeded persistentvolumeclaim/datadir-cockroachdb-2 Successfully provisioned volume pvc-15f0f89b-65a8-4085-ae0d-340692281f3a
cockroachdb 0s Normal Pulling pod/cockroachdb-1 Pulling image "busybox"
kube-system 0s Warning FailedMount pod/helper-pod-create-pvc-15f0f89b-65a8-4085-ae0d-340692281f3a MountVolume.SetUp failed for volume "script" : object "kube-system"/"local-path-config" not registered
cockroachdb 0s Normal Scheduled pod/cockroachdb-2 Successfully assigned cockroachdb/cockroachdb-2 to k3d-gitops-agent-0
cockroachdb 0s Normal Pulling pod/cockroachdb-2 Pulling image "busybox"
cockroachdb 0s Normal ExternalProvisioning persistentvolumeclaim/datadir-cockroachdb-0 waiting for a volume to be created, either by external provisioner "rancher.io/local-path" or manually created by system administrator
kube-system 1s Normal Pulled pod/helper-pod-create-pvc-a2c405cc-ade6-49bb-9876-3d7e5898f2c6 Successfully pulled image "rancher/mirrored-library-busybox:1.34.1" in 15.230545123s
kube-system 0s Normal Created pod/helper-pod-create-pvc-a2c405cc-ade6-49bb-9876-3d7e5898f2c6 Created container helper-pod
kube-system 0s Normal Started pod/helper-pod-create-pvc-a2c405cc-ade6-49bb-9876-3d7e5898f2c6 Started container helper-pod
cockroachdb 0s Normal ProvisioningSucceeded persistentvolumeclaim/datadir-cockroachdb-0 Successfully provisioned volume pvc-a2c405cc-ade6-49bb-9876-3d7e5898f2c6
cockroachdb 0s Normal Scheduled pod/cockroachdb-0 Successfully assigned cockroachdb/cockroachdb-0 to k3d-gitops-server-0
cockroachdb 0s Normal Pulling pod/cockroachdb-0 Pulling image "busybox"
cockroachdb 0s Normal Pulled pod/cockroachdb-1 Successfully pulled image "busybox" in 8.393255237s
cockroachdb 0s Normal Created pod/cockroachdb-1 Created container copy-certs
cockroachdb 0s Normal Started pod/cockroachdb-1 Started container copy-certs
cockroachdb 0s Normal Pulling pod/cockroachdb-1 Pulling image "cockroachdb/cockroach:v21.2.7"
cockroachdb 0s Normal Pulled pod/cockroachdb-2 Successfully pulled image "busybox" in 8.341271462s
cockroachdb 0s Normal Created pod/cockroachdb-2 Created container copy-certs
cockroachdb 0s Normal Started pod/cockroachdb-2 Started container copy-certs
cockroachdb 0s Normal Pulling pod/cockroachdb-2 Pulling image "cockroachdb/cockroach:v21.2.7"
cockroachdb 0s Normal Pulled pod/cockroachdb-0 Successfully pulled image "busybox" in 8.434923379s
cockroachdb 0s Normal Created pod/cockroachdb-0 Created container copy-certs
cockroachdb 0s Normal Started pod/cockroachdb-0 Started container copy-certs
cockroachdb 0s Normal Pulling pod/cockroachdb-0 Pulling image "cockroachdb/cockroach:v21.2.7"
cockroachdb 0s Normal Pulled pod/cockroachdb-1 Successfully pulled image "cockroachdb/cockroach:v21.2.7" in 13.975692346s
cockroachdb 0s Normal Created pod/cockroachdb-1 Created container db
cockroachdb 0s Normal Started pod/cockroachdb-1 Started container db
cockroachdb 0s Normal Pulled pod/cockroachdb-2 Successfully pulled image "cockroachdb/cockroach:v21.2.7" in 14.015553119s
cockroachdb 0s Normal Created pod/cockroachdb-2 Created container db
cockroachdb 0s Normal Started pod/cockroachdb-2 Started container db
cockroachdb 0s Normal Pulled pod/cockroachdb-0 Successfully pulled image "cockroachdb/cockroach:v21.2.7" in 16.822685584s
cockroachdb 0s Normal Created pod/cockroachdb-0 Created container db
cockroachdb 0s Normal Started pod/cockroachdb-0 Started container db
cockroachdb 0s Warning Unhealthy pod/cockroachdb-1 Readiness probe failed: HTTP probe failed with statuscode: 503
cockroachdb 0s Warning Unhealthy pod/cockroachdb-2 Readiness probe failed: HTTP probe failed with statuscode: 503
cockroachdb 0s Warning Unhealthy pod/cockroachdb-1 Readiness probe failed: HTTP probe failed with statuscode: 503
cockroachdb 0s Warning Unhealthy pod/cockroachdb-2 Readiness probe failed: HTTP probe failed with statuscode: 503
cockroachdb 0s Warning Unhealthy pod/cockroachdb-1 Readiness probe failed: HTTP probe failed with statuscode: 503
cockroachdb 0s Warning Unhealthy pod/cockroachdb-0 Readiness probe failed: HTTP probe failed with statuscode: 503
cockroachdb 0s Warning Unhealthy pod/cockroachdb-2 Readiness probe failed: HTTP probe failed with statuscode: 503
cockroachdb 0s Warning Unhealthy pod/cockroachdb-1 Readiness probe failed: HTTP probe failed with statuscode: 503
cockroachdb 0s Warning Unhealthy pod/cockroachdb-0 Readiness probe failed: HTTP probe failed with statuscode: 503
cockroachdb 0s Warning Unhealthy pod/cockroachdb-2 Readiness probe failed: HTTP probe failed with statuscode: 503
cockroachdb 0s Warning Unhealthy pod/cockroachdb-1 Readiness probe failed: HTTP probe failed with statuscode: 503
cockroachdb 0s Warning Unhealthy pod/cockroachdb-0 Readiness probe failed: HTTP probe failed with statuscode: 503
There is no mention of the init job (job/cockroachdb-init).
I uninstall and run a second time with --wait: same issue.
Now, I remove the --wait and simply run
```shell
helm install -f values.yaml cockroachdb cockroachdb/cockroachdb -n cockroachdb
```
and it works (mount errors aside): the init job runs and the server is available.
cockroachdb 0s Normal NoPods poddisruptionbudget/cockroachdb-budget No matching pods found
cockroachdb 0s Normal NoPods poddisruptionbudget/cockroachdb-budget No matching pods found
cockroachdb 0s Normal SuccessfulCreate statefulset/cockroachdb create Pod cockroachdb-0 in StatefulSet cockroachdb successful
cockroachdb 0s Normal Scheduled pod/cockroachdb-0 Successfully assigned cockroachdb/cockroachdb-0 to k3d-gitops-server-0
cockroachdb 0s Normal SuccessfulCreate statefulset/cockroachdb create Pod cockroachdb-1 in StatefulSet cockroachdb successful
cockroachdb 0s Normal Scheduled pod/cockroachdb-1 Successfully assigned cockroachdb/cockroachdb-1 to k3d-gitops-agent-1
cockroachdb 0s Normal SuccessfulCreate statefulset/cockroachdb create Pod cockroachdb-2 in StatefulSet cockroachdb successful
cockroachdb 0s Normal Scheduled pod/cockroachdb-2 Successfully assigned cockroachdb/cockroachdb-2 to k3d-gitops-agent-0
cockroachdb 0s Normal SuccessfulCreate job/cockroachdb-init Created pod: cockroachdb-init-gjfxx
cockroachdb 0s Normal Scheduled pod/cockroachdb-init-gjfxx Successfully assigned cockroachdb/cockroachdb-init-gjfxx to k3d-gitops-agent-1
cockroachdb 0s Normal Pulled pod/cockroachdb-0 Container image "busybox" already present on machine
cockroachdb 0s Normal Pulled pod/cockroachdb-1 Container image "busybox" already present on machine
cockroachdb 0s Normal Pulled pod/cockroachdb-2 Container image "busybox" already present on machine
cockroachdb 0s Normal Pulled pod/cockroachdb-init-gjfxx Container image "busybox" already present on machine
cockroachdb 0s Normal Created pod/cockroachdb-0 Created container copy-certs
cockroachdb 0s Normal Created pod/cockroachdb-1 Created container copy-certs
cockroachdb 0s Normal Created pod/cockroachdb-2 Created container copy-certs
cockroachdb 0s Normal Created pod/cockroachdb-init-gjfxx Created container copy-certs
cockroachdb 0s Normal Started pod/cockroachdb-0 Started container copy-certs
cockroachdb 0s Normal Started pod/cockroachdb-1 Started container copy-certs
cockroachdb 0s Normal Started pod/cockroachdb-2 Started container copy-certs
cockroachdb 0s Normal Started pod/cockroachdb-init-gjfxx Started container copy-certs
cockroachdb 0s Normal Pulled pod/cockroachdb-0 Container image "cockroachdb/cockroach:v21.2.7" already present on machine
cockroachdb 0s Normal Created pod/cockroachdb-0 Created container db
cockroachdb 0s Normal Pulled pod/cockroachdb-2 Container image "cockroachdb/cockroach:v21.2.7" already present on machine
cockroachdb 0s Normal Started pod/cockroachdb-0 Started container db
cockroachdb 0s Normal Created pod/cockroachdb-2 Created container db
cockroachdb 0s Normal Started pod/cockroachdb-2 Started container db
cockroachdb 0s Normal Pulled pod/cockroachdb-init-gjfxx Container image "cockroachdb/cockroach:v21.2.7" already present on machine
cockroachdb 0s Normal Pulled pod/cockroachdb-1 Container image "cockroachdb/cockroach:v21.2.7" already present on machine
cockroachdb 0s Normal Created pod/cockroachdb-init-gjfxx Created container cluster-init
cockroachdb 0s Normal Created pod/cockroachdb-1 Created container db
cockroachdb 0s Normal Started pod/cockroachdb-1 Started container db
cockroachdb 0s Normal Started pod/cockroachdb-init-gjfxx Started container cluster-init
cockroachdb 0s Normal Completed job/cockroachdb-init Job completed
cockroachdb 0s Warning FailedMount pod/cockroachdb-init-gjfxx MountVolume.SetUp failed for volume "certs-secret" : object "cockroachdb"/"cockroachdb-root" not registered
cockroachdb 0s Warning FailedMount pod/cockroachdb-init-gjfxx MountVolume.SetUp failed for volume "certs-secret" : object "cockroachdb"/"cockroachdb-root" not registered
cockroachdb 0s Warning FailedToUpdateEndpoint endpoints/cockroachdb-public Failed to update endpoint cockroachdb/cockroachdb-public: Operation cannot be fulfilled on endpoints "cockroachdb-public": the object has been modified; please apply your changes to the latest version and try again
cockroachdb 0s Warning FailedToUpdateEndpoint endpoints/cockroachdb Failed to update endpoint cockroachdb/cockroachdb: Operation cannot be fulfilled on endpoints "cockroachdb": the object has been modified; please apply your changes to the latest version and try again
ingress-nginx 0s Normal RELOAD pod/ingress-nginx-controller-644f6597b6-99mm6 NGINX reload triggered due to a change in configuration
Flux HelmReleases use something similar to --wait, hence the issue.
As mentioned on the flux side, I could reproduce the error both with and without flux, so I guess this issue now belongs here.
Deploying the operator with the helm chart requires that the `wait` flag is unset. Having the CRDB pods up and running requires that the init job runs. If the `wait` flag is set, it's a bit of a chicken-and-egg problem: the init job never starts because the pods never become ready.
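To make the ordering concrete: assuming the chart's init job is a Helm post-install hook (which matches the events above, where the job only ever appears without `--wait`), Helm creates the regular resources, then blocks on `--wait` until all pods are Ready, and only then runs post-install hooks. A sketch of such a hook Job (not the chart's actual manifest; the command and host are illustrative):

```yaml
# Sketch only: a post-install hook never runs while --wait is still
# blocking on pod readiness -- yet the pods cannot become Ready until
# the cluster init this Job performs has happened.
apiVersion: batch/v1
kind: Job
metadata:
  name: cockroachdb-init
  annotations:
    "helm.sh/hook": post-install
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: cluster-init
          image: cockroachdb/cockroach:v21.2.7
          command: ["/cockroach/cockroach", "init", "--host=cockroachdb-0.cockroachdb"]
```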
You mentioned that you got it working without `wait` here, and on the flux side it looks like you were able to disable wait as well, so are there any remaining issues on your side with starting the operator?
Not using --wait anymore, I still need to helm install once, uninstall, and then reinstall. I guess it's the time needed for the operator to come up?
Is it better to install the operator CRD first and then run the helm install? Or is there a proper way?
I would recommend following the steps specified in our Helm chart deployment docs: https://www.cockroachlabs.com/docs/stable/deploy-cockroachdb-with-kubernetes.html?filters=helm
That's what I do:
```shell
helm install my-release --values {custom-values}.yaml cockroachdb/cockroachdb
```
Right now, I need to run it once, uninstall and then reinstall for it to work.
Is it better to install the operator CRD and then the helm install? Or is there a proper way?
Are you trying to use the public operator and the helm chart? They functionally do the same thing. Does "operator" mean something different in this context?
Sorry, this sentence might have been confusing; I was just throwing out an idea.
I am simply installing the helm chart as per the docs.
It is pretty straightforward to reproduce using k3d and helm (that is what I am using; it does not mean it is specific to this platform).
As a side note, the self-signed cert doesn't work either, but I fixed that with cert-manager.
So we understand this correctly, you stated the following:
> I am simply installing the helm chart as per the docs.
> It is pretty straightforward to reproduce using k3d and helm (that is what I am using; it does not mean it is specific to this platform).
I was not able to reproduce this problem. Let's go ahead and simplify the process by taking flux out of the equation. Once you are able to install via helm alone (as shown below), we can try to figure out what might be happening with flux.
I was able to install CockroachDB using the helm charts without any issues. I am not using k3d, so that may be the problem you are running into; I'm not sure, as I have not tested with it.
That said, I was able to deploy CockroachDB using the helm charts to a lab running on 3 VMs; here is the entire process. I created my own custom dynamic volume provisioner (maybe you are missing this as well, since these mini k8s deployments might not have all the functionality you expect), and for this test I did not use certs. Please try this as well, so we can determine whether the cert part of the process is the missing piece.
Steps shown below:
Add helm repo:
[LAB] root@lab-kub01: helm # helm repo add cockroachdb https://charts.cockroachdb.com/
"cockroachdb" already exists with the same configuration, skipping
Update repo:
[LAB] root@lab-kub01: helm # helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "cockroachdb" chart repository
Update Complete. ⎈Happy Helming!⎈
Create storage class and dynamic volume provisioner (this might only be necessary in my lab, as I'm running on vagrant VMs):
[LAB] root@lab-kub01: helm # kubectl apply -f sc.yml
serviceaccount/glfs-provisioner created
clusterrole.rbac.authorization.k8s.io/glfs-provisioner-runner unchanged
clusterrolebinding.rbac.authorization.k8s.io/run-glfs-provisioner configured
storageclass.storage.k8s.io/cockroach-helm-dynamic created
Deploy helm chart:
[LAB] root@lab-kub01: helm # helm install cockroachdb --values custom-values.yml --set image.tag=v21.2.7 cockroachdb/cockroachdb
W0324 14:18:14.823945 11225 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
W0324 14:18:14.829262 11225 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
W0324 14:18:15.001830 11225 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
W0324 14:18:15.067797 11225 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
NAME: cockroachdb
LAST DEPLOYED: Thu Mar 24 14:18:14 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
CockroachDB can be accessed via port 26257 at the
following DNS name from within your cluster:
cockroachdb-public.default.svc.cluster.local
Because CockroachDB supports the PostgreSQL wire protocol, you can connect to
the cluster using any available PostgreSQL client.
For example, you can open up a SQL shell to the cluster by running:
kubectl run -it --rm cockroach-client \
--image=cockroachdb/cockroach \
--restart=Never \
--command -- \
./cockroach sql --insecure --host=cockroachdb-public.default
From there, you can interact with the SQL shell as you would any other SQL
shell, confident that any data you write will be safe and available even if
parts of your cluster fail.
Finally, to open up the CockroachDB admin UI, you can port-forward from your
local machine into one of the instances in the cluster:
kubectl port-forward cockroachdb-0 8080
Then you can access the admin UI at http://localhost:8080/ in your web browser.
For more information on using CockroachDB, please see the project's docs at:
https://www.cockroachlabs.com/docs/
Pod status:
[LAB] root@lab-kub01: helm # kubectl get pods
NAME READY STATUS RESTARTS AGE
cockroachdb-0 1/1 Running 0 88s
cockroachdb-1 1/1 Running 0 88s
cockroachdb-2 1/1 Running 0 88s
cockroachdb-init-2gvlk 0/1 Completed 0 88s
Used files for your reference:
storage class config, service account and roles:
sc.yml
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: glfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: glfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events", "pods/exec"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list", "watch", "create", "delete", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-glfs-provisioner
subjects:
  - kind: ServiceAccount
    name: glfs-provisioner
    namespace: default # update this namespace to yours in order to make this work
roleRef:
  kind: ClusterRole
  name: glfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cockroach-helm-dynamic
  annotations:
    storageclass.kubernetes.io/is-default-class: "true" # this makes this storageclass the default
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  resturl: "http://lab-kub01:8080"
  restauthenabled: "false"
```
custom-values.yml
```yaml
statefulset:
  customLivenessProbe: {}
  customReadinessProbe: {}
  resources:
    limits:
      cpu: "0.5"
      memory: "256Mi"
    requests:
      cpu: "0.5"
      memory: "256Mi"
tls:
  enabled: false
conf:
  log:
    enabled: true
    config:
      sinks:
        stderr:
          channels: all
          filter: INFO
          redact: false
          redactable: false
          exit-on-error: true
      capture-stray-errors:
        enable: true
        max-group-size: 100MiB
  logtostderr: INFO
storage:
  persistentVolume:
    enabled: true
    size: 256Mi
    storageClass: "cockroach-helm-dynamic"
```
I think I will isolate the culprit later on and create a new issue if needed, as right now it's becoming quite confusing.
The bottom line, if you are reading this and have an issue with flux: add this to your HelmRelease:
```yaml
spec:
  interval: 1m
  install:
    disableWait: true
```
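For completeness, merged into the HelmRelease from the top of this issue:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: cockroachdb
  namespace: cockroachdb
spec:
  interval: 1m
  chart:
    spec:
      chart: "cockroachdb"
      version: ">=7.0.1"
      sourceRef:
        kind: HelmRepository
        name: cockroachdb
        namespace: flux-system
      interval: 1m
  targetNamespace: cockroachdb
  install:
    # skip helm's wait-for-readiness so the init job can run
    disableWait: true
```

Note that upgrades have a separate `spec.upgrade.disableWait` flag if you hit the same problem on upgrades.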
Thanks a lot for your responsiveness @davidwding @udnay @daniel-crlabs!