"no matches for kind "ClusterIssuer" in group "cert-manager.io"" With terraform plan
everspader opened this issue
Hi, I am getting the following error when creating the helm_release for cert-manager together with the ClusterIssuer in the same terraform apply, because the plan fails.
I did a bit of Googling around and it seems the error happens because, at plan time, the CRDs are not yet installed in the cluster.
Is this a known issue and is there a way to circumvent it?
│ Error: Failed to determine GroupVersionResource for manifest
│
│ with module.k8s_base.kubernetes_manifest.cluster_issuer,
│ on ../../modules/k8s_base/main.tf line 35, in resource "kubernetes_manifest" "cluster_issuer":
│ 35: resource "kubernetes_manifest" "cluster_issuer" {
│
│ no matches for kind "ClusterIssuer" in group "cert-manager.io"
Hello @everspader, are you trying to add the ClusterIssuer outside of the module?
kubernetes_manifest doesn't support plan & apply while the helm_release is not deployed yet: the provider queries the cluster API at plan time to resolve the manifest's GroupVersionResource, so the ClusterIssuer CRD must already exist before the plan can succeed.
You can use kubectl_manifest instead.
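A minimal sketch of that workaround, assuming the gavinbunney/kubectl provider is configured and cert-manager is installed by a helm_release named "cert_manager" (the resource names and issuer spec here are illustrative, not the module's own):

resource "kubectl_manifest" "cluster_issuer" {
  # yaml_body is only validated against the cluster at apply time, so
  # terraform plan succeeds even before the cert-manager CRDs exist.
  yaml_body = <<-YAML
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: my-cluster-issuer
    spec:
      selfSigned: {}
  YAML

  # Still make sure cert-manager (and its CRDs) is installed first.
  depends_on = [helm_release.cert_manager]
}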
Alternatively, you can add your custom ClusterIssuer to the module via the cluster_issuer_yaml variable.
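A hedged sketch of that route, assuming cluster_issuer_yaml accepts a raw manifest as described above (the issuer spec is illustrative):

module "cert_manager" {
  source = "terraform-iaac/cert-manager/kubernetes"

  cluster_issuer_email = "admin@example.com"

  # Hypothetical custom issuer; replace with your own spec.
  cluster_issuer_yaml = <<-YAML
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: my-custom-issuer
    spec:
      selfSigned: {}
  YAML
}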
@bohdantverdyi sorry to dig this one up; I'm having the same problem. I'm not trying to create the ClusterIssuer from outside, just using:
module "cert_manager" {
source = "terraform-iaac/cert-manager/kubernetes"
cluster_issuer_email = "myownemail@gmail.com"
cluster_issuer_name = "cert-manager-global"
cluster_issuer_private_key_secret_name = "cert-manager-private-key"
}
But I'm getting the following: cert-manager-global failed to create kubernetes rest client for update of resource: resource [cert-manager.io/v1/ClusterIssuer] isn't valid for cluster, check the APIVersion and Kind fields are valid
I am on GKE 1.21.9.
@benbonnet it looks like cert-manager wasn't installed correctly. Can you try to re-apply and check whether cert-manager is running in your GKE cluster?
thx for your super quick response!
The first apply timed out (cert-manager-global failed to create kubernetes rest client for update of resource: Get "https://xx.xxx.xxx.xxx/api?timeout=32s": dial tcp xx.xxx.xxx.xxx:443: i/o timeout), although everything was running well (the cert-manager pods were all in a Running state).
I re-applied and things ended well. Everything is OK.
Have you configured the kubectl provider?
It couldn't connect to the kube API. The problem is either on your side or in the provider configuration (helm, kubernetes, kubectl).
It happened on the very first apply (first it creates the cluster, node pool, etc., then provider "kubernetes", then module "cert_manager").
...bunch of tf code...
provider "helm" {
kubernetes {
host = "https://${google_container_cluster.this.endpoint}"
token = data.google_client_config.provider.access_token
cluster_ca_certificate = base64decode(
google_container_cluster.this.master_auth[0].cluster_ca_certificate,
)
}
}
resource "google_compute_address" "ingress_ip_address" {
name = "${var.app_name}-ip"
}
module "nginx-controller" {
source = "terraform-iaac/nginx-controller/helm"
ip_address = google_compute_address.ingress_ip_address.address
}
provider "kubernetes" {
host = "https://${google_container_cluster.this.endpoint}"
token = data.google_client_config.provider.access_token
cluster_ca_certificate = base64decode(
google_container_cluster.this.master_auth[0].cluster_ca_certificate,
)
}
module "cert_manager" {
source = "terraform-iaac/cert-manager/kubernetes"
cluster_issuer_email = "myemail@gmail.com"
cluster_issuer_name = "cert-manager-global"
cluster_issuer_private_key_secret_name = "cert-manager-private-key"
}
Anyhow, subsequent applies are super smooth and everything is going fine.
Do you have helm & kubectl provider settings?
Yup, just above the kubernetes provider (updated above).
I don't see a provider "kubectl" block.
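For reference, a matching provider "kubectl" block might look like this, mirroring the kubernetes provider above (attribute names follow the gavinbunney/kubectl provider):

provider "kubectl" {
  host  = "https://${google_container_cluster.this.endpoint}"
  token = data.google_client_config.provider.access_token
  cluster_ca_certificate = base64decode(
    google_container_cluster.this.master_auth[0].cluster_ca_certificate,
  )

  # Don't fall back to ~/.kube/config; use the attributes above.
  load_config_file = false
}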
Anyway, the next apply was smooth. I think the problem was nodes that weren't ready yet, because the ClusterIssuer will not deploy if the cert-manager pods aren't ready.
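If not-ready nodes are the cause, one hedged workaround is to make the module wait for the node pool on the first apply. A sketch, assuming a node pool resource named google_container_node_pool.this (hypothetical; adjust to your config):

module "cert_manager" {
  source = "terraform-iaac/cert-manager/kubernetes"

  cluster_issuer_email                   = "myemail@gmail.com"
  cluster_issuer_name                    = "cert-manager-global"
  cluster_issuer_private_key_secret_name = "cert-manager-private-key"

  # Sequence the first apply: don't deploy cert-manager until the node
  # pool exists (module-level depends_on needs Terraform >= 0.13).
  depends_on = [google_container_node_pool.this]
}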
OK, my bad; I was confused about the kubectl vs. kubernetes providers.
I will retry from scratch with all of them.