Build fails when using components, a strategic merge patch, and a null node
matthewhughes-uw opened this issue
What happened?
Given the following setup:
├── annotations.yaml
├── components
│   └── kustomization.yaml
├── kustomization.yaml
└── manifests.yaml
With contents:
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- manifests.yaml
patches:
- path: annotations.yaml
components:
- components
# manifests.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      # NOTE: empty initContainers
      initContainers:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
      affinity:
        # imagine some complicated affinity setup
        # we want the component to strip these
        podAffinity: {}
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  namespace: my-namespace
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
# annotations.yaml
# placeholder patch, just need any patch
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  namespace: my-namespace
  annotations:
    example.com/my.tool: blah
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  namespace: my-namespace
  annotations:
    example.com/my.tool: blah
# components/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
- target:
    version: v1
    group: apps
  # example patch, just something that reaches inside /spec/template/spec
  patch: |-
    - op: remove
      path: /spec/template/spec/affinity/podAffinity
Then running kustomize build fails with:
Error: updating name reference in 'spec/template/spec/initContainers/env/valueFrom/configMapKeyRef/name' field of 'Deployment.v1.apps/my-nginx.my-namespace': considering field 'spec/template/spec/initContainers/env/valueFrom/configMapKeyRef/name' of object Deployment.v1.apps/my-nginx.my-namespace: expected sequence or mapping node
What did you expect to happen?
kustomize build would succeed.
How can we reproduce it (as minimally and precisely as possible)?
See details above
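For convenience, a sketch of the reproduction, assuming the four files above are saved with the contents shown in a scratch directory (the directory name repro is arbitrary):

mkdir -p repro/components
cd repro
# create kustomization.yaml, manifests.yaml, annotations.yaml and
# components/kustomization.yaml with the contents listed above, then:
kustomize build .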
Expected output
apiVersion: v1
kind: Service
metadata:
  annotations:
    example.com/my.tool: blah
  labels:
    run: my-nginx
  name: my-nginx
  namespace: my-namespace
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    example.com/my.tool: blah
  name: my-nginx
  namespace: my-namespace
spec:
  replicas: 2
  selector:
    matchLabels:
      run: my-nginx
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      affinity: {}
      containers:
      - env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        image: nginx
        name: my-nginx
        ports:
        - containerPort: 80
Actual output
The error
Error: updating name reference in 'spec/template/spec/initContainers/env/valueFrom/configMapKeyRef/name' field of 'Deployment.v1.apps/my-nginx.my-namespace': considering field 'spec/template/spec/initContainers/env/valueFrom/configMapKeyRef/name' of object Deployment.v1.apps/my-nginx.my-namespace: expected sequence or mapping node
Note: things work as expected if the patches are provided as inline JSON6902 patches, i.e. update kustomization.yaml to:
- target:
    version: v1
    group: apps
    kind: Deployment
  patch: |-
    - op: add
      path: /metadata/annotations/example.com~1my.tool
      value: blah
- target:
    version: v1
    kind: Service
  patch: |-
    - op: add
      path: /metadata/annotations/example.com~1my.tool
      value: blah
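Put together, a sketch of the full workaround kustomization.yaml: the annotations.yaml strategic merge patch is replaced by the inline JSON6902 patches above, and everything else is left unchanged.

# kustomization.yaml (workaround sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- manifests.yaml
patches:
- target:
    version: v1
    group: apps
    kind: Deployment
  patch: |-
    - op: add
      path: /metadata/annotations/example.com~1my.tool
      value: blah
- target:
    version: v1
    kind: Service
  patch: |-
    - op: add
      path: /metadata/annotations/example.com~1my.tool
      value: blah
components:
- components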
Kustomize version
v5.3.0
Operating system
Linux
This issue is currently awaiting triage. SIG CLI takes the lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label. Org members can add the triage/accepted label by writing /triage accepted in a comment.
This feels related to #5050.

Note: in our case we have e.g. initContainers: (i.e. initContainers: null) because the manifests were generated from Helm charts, and that is what Helm originally produced as output.
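To make the null-node point concrete, this is the difference in the pod template spec of manifests.yaml. Whether dropping the null key actually avoids the error is an assumption based on the failing field path running through initContainers; it is not confirmed in this issue.

# what Helm emitted: a null node
    spec:
      initContainers:
      containers:
      - name: my-nginx
        image: nginx

# presumed workaround: omit the key entirely, or use an explicit empty list
    spec:
      initContainers: []
      containers:
      - name: my-nginx
        image: nginx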