splunk / qbec

configure kubernetes objects on multiple clusters using jsonnet

Home Page: https://qbec.io

`qbec apply` failing to sync while waiting on custom type

bikramnehra opened this issue

I am trying to deploy resources using `qbec apply`; however, it is failing while waiting on a custom Vault resource.

[Screenshot: qbec apply output waiting on the Vault resource, 2022-11-13 8:53 PM]

Even though the resource type is already installed and shows up in `api-resources`:

```
kubectl api-resources -o wide | grep vault
vaults                                                vault.banzaicloud.com/v1alpha1           true         Vault                            [delete deletecollection get list patch create update watch]
```

The same deployment works just fine with kustomize:

[Screenshot: the same deployment succeeding with kustomize, 2022-11-13 9:01 PM]

Here's the command I am using:

```
qbec apply branch --yes --force:k8s-namespace <namespace>
```

Also, I am looking at https://github.com/splunk/qbec/blob/main/internal/commands/remote-list.go#L80 and https://github.com/splunk/qbec/blob/main/internal/remote/k8smeta/meta.go#L85 to understand how custom resource types are registered.

I would be happy to share more information if needed.

Can you share a minimal spec for the CR you are deploying? You have the option to skip waiting, or to wait longer, for the resource to be ready; see `qbec apply -h` for tuning the wait options.
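For what it's worth, a rough sketch of the knobs involved; the flag and annotation names below are from memory of the qbec docs, not from this thread, so verify them against `qbec apply -h` and the directives documentation for your qbec version.

```sh
# Sketch only; confirm flag names with `qbec apply -h` for your qbec version.

# Give the readiness wait more time (assumed --wait-timeout flag):
qbec apply branch --yes --wait-timeout 10m

# Or opt a single object out of the readiness wait via a directive
# annotation on the Vault resource (assumed wait-policy directive):
#
#   metadata:
#     annotations:
#       directives.qbec.io/wait-policy: never
```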

This is strange. That waiting message should only show up if the CRD is also installed in the same apply session as the thing that depends on it. If it is already pre-installed, qbec should just find it.

Did you retry? Does this happen every time?

I did more investigation on this, and it turns out the issue was related to multiple clusters/contexts being present in the environment; qbec was pointing to the wrong cluster.
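For anyone who hits the same thing: the quickest way to see which cluster each tool was actually talking to is to check the active kubectl context, and qbec can be pinned to a context explicitly (the `--force:k8s-context` option is the context counterpart of the `--force:k8s-namespace` flag used above; treat its exact name as an assumption to verify against your qbec version).

```sh
# Check which context/cluster kubectl (and the api-resources listing above) used
kubectl config current-context
kubectl config get-contexts

# Pin qbec to an explicit context and namespace so it cannot pick the wrong cluster
# (assumed --force:k8s-context flag; --force:k8s-namespace appears earlier in this thread)
qbec apply branch --yes --force:k8s-context <context> --force:k8s-namespace <namespace>
```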

Closing this issue for now; I will share more details if I run into further issues.