Unable to switch to using kubeip v2, returning region-related error
MikeW1901 opened this issue
Running the latest version of KubeIP v2's DaemonSet on GKE, pods start in the correct place but then immediately throw this error:
func: "main.assignAddress"
msg: "failed to assign static public IP address to node (node id)"
error: "check if static public IP is already assigned to instance (node id): failed to list assigned addresses: failed to list available addresses: googleapi: Error 400: Invalid value for field 'region': 'us-central1-a'. Unknown region., invalid"
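For context on the error itself: `us-central1-a` is a GCE zone name, not a region, and the Compute Engine regional addresses API rejects a zone where it expects a region. The region is conventionally the zone name with its trailing single-letter suffix stripped, which can be sketched as follows (an illustration of the naming convention only, not KubeIP's actual code):

```shell
# Illustration: derive a GCE region from a zone name by stripping
# the trailing "-<letter>" zone suffix (e.g. "us-central1-a" -> "us-central1").
zone="us-central1-a"
region="${zone%-*}"   # removes the shortest trailing "-..." segment
echo "$region"        # us-central1
```

So if the agent is passing the node's zone through as the region, the API would reject it exactly as in the log above.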
(Some failure is expected here, since the node already has a static IP: it's currently managed by KubeIP v1, which we want to upgrade away from because it seems to wipe other Kubernetes labels when setting its own. But I suspect this region error is a separate problem, as the region should presumably be us-central1, not us-central1-a. No region is defined anywhere in the user-facing KubeIP config, so I'm unclear what would need tweaking here.)
Any assistance appreciated!
Current config, nothing particularly non-standard here:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kubeip
spec:
  selector:
    matchLabels:
      app: kubeip
  template:
    metadata:
      labels:
        app: kubeip
    spec:
      tolerations:
        - effect: NoSchedule
          key: app
          operator: Equal
          value: appname
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - appname
      serviceAccountName: kubeip-service-account
      terminationGracePeriodSeconds: 30
      priorityClassName: system-node-critical
      # nodeSelector:
      #   kubeip.com/public: "true"
      containers:
        - name: kubeip
          image: doitintl/kubeip-agent
          resources:
            requests:
              cpu: 100m
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: FILTER
              value: labels.kubeip=nodepoolname
            - name: LOG_LEVEL
              value: debug
            - name: LOG_JSON
              value: "true"
I had the same issue, and adding a region env entry to the container myself seems to have resolved it. E.g.:

            - name: REGION
              value: "us-central1"