kubernetes-sigs / cluster-api-provider-openstack

Cluster API implementation for OpenStack

Home Page: https://cluster-api-openstack.sigs.k8s.io/

Identity ref modified by CAPO when using v1alpha7 resources with v0.10.0-alpha.1

mkjpryor opened this issue

/kind bug

These steps use the Helm charts at https://github.com/stackhpc/capi-helm-charts to generate the cluster resources.

What steps did you take and what happened:

Set up a management cluster with CAPI + CAPO v0.10.0-alpha.1.
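
For reference, one way to initialise such a management cluster is clusterctl with a pinned provider version; the exact bootstrap method should not matter for reproducing this:

$ clusterctl init --infrastructure openstack:v0.10.0-alpha.1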

Create an application credential for a project in OpenStack and download the corresponding clouds.yaml.
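
For example, using the OpenStack CLI (the credential name is illustrative; any appcred scoped to the project works):

$ openstack application credential create capi-test-appcred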

Create a values file with the following minimal values:

clouds:
  openstack:
    auth:
      project_id: [PROJECTID]
    verify: false

kubernetesVersion: 1.28.8
machineImageId: [IMAGEID]

clusterNetworking:
  externalNetworkId: [NETWORKID]

controlPlane:
  machineFlavor: [FLAVORNAME]
  machineCount: 1

nodeGroups:
  - name: md-0
    machineFlavor: [FLAVORNAME]
    machineCount: 2

addons:
  enabled: false
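
Before installing, the rendered manifests can be sanity-checked with helm template using the same values (the grep filter is just for illustration):

$ helm template capi-test openstack-cluster --repo https://stackhpc.github.io/capi-helm-charts --version 0.5.0 -f clouds.yaml -f values.yaml | grep -B2 -A3 identityRef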

Create a cluster using v1alpha7 resources:

helm upgrade capi-test openstack-cluster --repo https://stackhpc.github.io/capi-helm-charts --version 0.5.0 -i -f clouds.yaml -f values.yaml

Immediately check the v1alpha7 representation of the OpenStackCluster:

$ kubectl get openstackcluster.v1alpha7.infrastructure.cluster.x-k8s.io/capi-test -o yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha7
kind: OpenStackCluster
metadata:
  annotations:
    cluster.x-k8s.io/conversion-data: '{"spec":{"h":"KNyh/LzswqI=","d":{"cloudName":"openstack","nodeCidr":"192.168.3.0/24","network":{},"subnet":{},"externalNetworkId":"[REDACTED]","apiServerLoadBalancer":{"enabled":true},"disableAPIServerFloatingIP":false,"apiServerPort":6443,"managedSecurityGroups":true,"allowAllInClusterTraffic":true,"controlPlaneEndpoint":{"host":"","port":0},"controlPlaneOmitAvailabilityZone":true,"identityRef":{"kind":"Secret","name":"capi-test-cloud-credentials"}}}}'
    helm.sh/resource-policy: keep
    meta.helm.sh/release-name: capi-test
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2024-04-10T15:39:20Z"
  finalizers:
  - openstackcluster.infrastructure.cluster.x-k8s.io
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
    capi.stackhpc.com/cluster: capi-test
    capi.stackhpc.com/infrastructure-provider: openstack
    capi.stackhpc.com/managed-by: Helm
    cluster.x-k8s.io/cluster-name: capi-test
    helm.sh/chart: openstack-cluster-0.5.0
  name: capi-test
  namespace: default
  ownerReferences:
  - apiVersion: cluster.x-k8s.io/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: Cluster
    name: capi-test
    uid: 39052247-0adf-49d4-ab06-866c47abbe73
  resourceVersion: "93095"
  uid: 748cc764-fe98-4cf7-92e4-a46c6c8c3a01
spec:
  allowAllInClusterTraffic: true
  apiServerLoadBalancer:
    enabled: true
  apiServerPort: 6443
  cloudName: openstack
  controlPlaneEndpoint:
    host: ""
    port: 0
  controlPlaneOmitAvailabilityZone: true
  disableAPIServerFloatingIP: false
  externalNetworkId: [REDACTED]
  identityRef:
    kind: Secret
    name: capi-test-cloud-credentials
  managedSecurityGroups: true
  network: {}
  nodeCidr: 192.168.3.0/24
  subnet: {}
status:
  ready: false

This looks fine.

However, looking at the same object after the load balancer has been created and the controlPlaneEndpoint has been updated:

$ kubectl get openstackcluster.v1alpha7.infrastructure.cluster.x-k8s.io/capi-test -o yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha7
kind: OpenStackCluster
metadata:
  annotations:
    cluster.x-k8s.io/conversion-data: '{"spec":{"h":"KNyh/LzswqI=","d":{"cloudName":"openstack","nodeCidr":"192.168.3.0/24","network":{},"subnet":{},"externalNetworkId":"[REDACTED]","apiServerLoadBalancer":{"enabled":true},"disableAPIServerFloatingIP":false,"apiServerPort":6443,"managedSecurityGroups":true,"allowAllInClusterTraffic":true,"controlPlaneEndpoint":{"host":"","port":0},"controlPlaneOmitAvailabilityZone":true,"identityRef":{"kind":"Secret","name":"capi-test-cloud-credentials"}}}}'
    helm.sh/resource-policy: keep
    meta.helm.sh/release-name: capi-test
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2024-04-10T15:39:20Z"
  finalizers:
  - openstackcluster.infrastructure.cluster.x-k8s.io
  generation: 2
  labels:
    app.kubernetes.io/managed-by: Helm
    capi.stackhpc.com/cluster: capi-test
    capi.stackhpc.com/infrastructure-provider: openstack
    capi.stackhpc.com/managed-by: Helm
    cluster.x-k8s.io/cluster-name: capi-test
    helm.sh/chart: openstack-cluster-0.5.0
  name: capi-test
  namespace: default
  ownerReferences:
  - apiVersion: cluster.x-k8s.io/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: Cluster
    name: capi-test
    uid: 39052247-0adf-49d4-ab06-866c47abbe73
  resourceVersion: "93403"
  uid: 748cc764-fe98-4cf7-92e4-a46c6c8c3a01
spec:
  allowAllInClusterTraffic: true
  apiServerLoadBalancer:
    enabled: true
  apiServerPort: 6443
  cloudName: openstack
  controlPlaneEndpoint:
    host: [FLOATINGIP]
    port: 6443
  controlPlaneOmitAvailabilityZone: true
  disableAPIServerFloatingIP: false
  externalNetworkId: [REDACTED]
  identityRef:
    kind: ""
    name: capi-test-cloud-credentials
  managedSecurityGroups: true
  network: {}
  nodeCidr: 192.168.3.0/24
  subnet: {}
status:
  apiServerLoadBalancer:
    id: [REDACTED]
    internalIP: 192.168.3.212
    ip: [FLOATINGIP]
    name: k8s-clusterapi-cluster-default-capi-test-kubeapi
  controlPlaneSecurityGroup:
    id: [REDACTED]
    name: k8s-cluster-default-capi-test-secgroup-controlplane
  externalNetwork:
    id: [REDACTED]
    name: [REDACTED]
  failureDomains:
    nova: {}
  network:
    id: [REDACTED]
    name: k8s-clusterapi-cluster-default-capi-test
    subnets:
    - cidr: 192.168.3.0/24
      id: [REDACTED]
      name: k8s-clusterapi-cluster-default-capi-test
  ready: true
  router:
    id: [REDACTED]
    ips:
    - 128.232.226.143
    name: k8s-clusterapi-cluster-default-capi-test
  workerSecurityGroup:
    id: [REDACTED]
    name: k8s-cluster-default-capi-test-secgroup-worker

We see that spec.identityRef.kind has been erroneously set to the empty string "".
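
This is easy to confirm with jsonpath; note that the conversion-data annotation above still records kind: Secret, so the stored round-trip data looks intact even though the converted spec does not:

$ kubectl get openstackcluster.v1alpha7.infrastructure.cluster.x-k8s.io/capi-test -o jsonpath='{.spec.identityRef}'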

This breaks subsequent updates to the cluster, even a no-op upgrade with identical values:

$ helm upgrade capi-test openstack-cluster --repo https://stackhpc.github.io/capi-helm-charts --version 0.5.0 -i -f clouds.yaml -f values.yaml
Error: UPGRADE FAILED: cannot patch "capi-test" with kind OpenStackCluster: OpenStackCluster.infrastructure.cluster.x-k8s.io "capi-test" is invalid: spec.identityRef.kind: Invalid value: "": spec.identityRef.kind in body should be at least 1 chars long
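
This presumably isn't Helm-specific: any write through the v1alpha7 endpoint should hit the same validation failure, since the converted object already contains the invalid empty kind (untested sketch; the label is arbitrary):

$ kubectl patch openstackcluster.v1alpha7.infrastructure.cluster.x-k8s.io/capi-test --type merge -p '{"metadata":{"labels":{"repro":"test"}}}'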

What did you expect to happen:

The helm upgrade to succeed, with spec.identityRef.kind preserved (or restored to Secret) by the conversion.

Environment:

  • Cluster API Provider OpenStack version (Or git rev-parse HEAD if manually built): v0.10.0-alpha.1
  • Cluster-API version: v1.6.3
  • OpenStack version: Yoga
  • Minikube/KIND version:
  • Kubernetes version (use kubectl version): v1.29.1
  • OS (e.g. from /etc/os-release):