[Bug] no pods found matching service labels
devShev opened this issue
After scanning a file, the linter reports that it cannot find pods by labels, although the labels in the Deployment and the Service are correct.
web.yaml

```yaml
apiVersion: v1
kind: Service
metadata:
  name: admin
spec:
  selector:
    app.name: ivea-django
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: admin
spec:
  replicas: 1
  selector:
    matchLabels:
      app.name: ivea-django
  template:
    metadata:
      labels:
        app.name: ivea-django
      annotations:
        lastDeployedDate: "{{ now | unixEpoch }}"
    spec:
      imagePullSecrets:
        - name: gitlab-registry-credentials
      containers:
        - name: main
          imagePullPolicy: Always
          image: {{ .Values.images.admin }}
---
apiVersion: batch/v1
kind: Job
metadata:
  name: admin-migrations-job-{{ .Release.Revision }}
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
spec:
  backoffLimit: 0
  ttlSecondsAfterFinished: 3600
  template:
    spec:
      restartPolicy: Never
      imagePullSecrets:
        - name: gitlab-registry-credentials
      containers:
        - name: main
          image: {{ .Values.images.admin }}
          imagePullPolicy: Always
          command: ["python3", "manage.py", "migrate"]
          envFrom:
            - secretRef:
                name: admin
```
I had the same error; it turned out that my Deployment manifest was invalid (I had `strategy` set to `Recreate` directly instead of `strategy.type: Recreate`).
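For reference, a minimal sketch of the mistake described above, using the field names from the Kubernetes Deployment spec (this snippet is illustrative, not taken from the issue's manifests):

```yaml
# Invalid: strategy is an object in the Deployment spec,
# so a bare string here fails schema validation.
# spec:
#   strategy: Recreate

# Valid: the strategy name goes under strategy.type.
spec:
  strategy:
    type: Recreate
```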
It seems your deployment's `.spec.template.spec.imagePullSecrets` is invalid: it should be an array of strings, not an array of objects. Unfortunately, this means an invalid deployment is simply ignored by kube-linter, which is an issue on its own.
Edit: just wanted to add that I caught this issue with kubeconform.
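Since these manifests contain Helm template expressions (`{{ .Values.images.admin }}`), the file is not valid YAML until it is rendered. One way to validate it, as the comment above suggests, is to render the chart and pipe the result into kubeconform. The chart path and values file below are assumptions, not taken from this issue:

```shell
# Render the Helm chart first (path and values file are hypothetical),
# then validate the resulting plain-YAML manifests with kubeconform.
# -strict rejects unknown fields; -summary prints a result summary.
helm template ./my-chart -f values.yaml | kubeconform -strict -summary -
```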