Error during parsing: Unknown directive '.'
NadamHL opened this issue
error log:
# kubectl logs coredns-7469dffcdc-zv87j -n kube-system
/etc/coredns/Corefile:3 - Error during parsing: Unknown directive '.'
My configuration:
# kubectl get configmap coredns -n kube-system -o yaml
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        . {
            hosts /etc/k8shosts
            # health
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"Corefile":".:53 {\n errors\n . {\n hosts /etc/k8shosts\n health\n }\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n upstream\n fallthrough in-addr.arpa ip6.arpa\n }\n prometheus :9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}\n"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"coredns","namespace":"kube-system"}}
  creationTimestamp: "2020-04-01T10:11:13Z"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: coredns
  namespace: kube-system
  resourceVersion: "15520676"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: 05fe85db-d0fe-4e52-97fa-935f9fd910bc
# cat coredns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
      - name: coredns
        image: coredns/coredns:1.3.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
https://guides.github.com/features/mastering-markdown/
I wrapped your issue description in triple back ticks (```'s) so it can be read.
The Corefile is invalid; specifically, the section around hosts. A correct syntax would be:
.:53 {
    errors
    hosts . /etc/k8shosts {
        fallthrough
    }
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
    ...
Corefile: |
  .:53 {
      errors
      hosts . /etc/k8shosts {
          ttl 30
          reload 300ms
          fallthrough
      }
      health
      kubernetes cluster.local in-addr.arpa ip6.arpa {
error log:
[INFO] Reloading
2020-04-01T13:36:44.065Z [WARNING] plugin/hosts: Hosts file "." is a directory
2020-04-01T13:36:44.169Z [INFO] plugin/reload: Running configuration MD5 = c83324bafe01d7cfab8f6662ce9f398d
[INFO] Reloading complete
Please read the hosts plugin documentation: https://github.com/coredns/coredns/tree/master/plugin/hosts
Swap the arguments to hosts:
.:53 {
    errors
    hosts /etc/k8shosts . {
        fallthrough
    }
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
.:53 {
    errors
    hosts /etc/k8shosts . {
        ttl 30
        reload 300ms
        fallthrough
    }
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
error log:
[INFO] Reloading complete
[INFO] Reloading
2020-04-01T13:42:14.791Z [WARNING] plugin/hosts: File does not exist: /etc/k8shosts
2020-04-01T13:42:14.899Z [INFO] plugin/reload: Running configuration MD5 = 74163651183d1d1697304304812db0cc
[INFO] Reloading complete
On the host running the Pod:
ll /etc/k8shosts
-rwxrwxrwx 1 root root 112 Apr 1 17:07 /etc/k8shosts
Looks like CoreDNS is working as expected.
I want CoreDNS to read the /etc/k8shosts file instead of the default /etc/hosts file, but the error message says there is no /etc/k8shosts file, even though it actually exists on the host.
In Kubernetes, CoreDNS runs in a container in a Pod, not on the host. You'll need to set up a mount point in the Pod (by adding it to the Deployment spec). Anyway, that becomes a Kubernetes matter, not a CoreDNS matter. https://kubernetes.io/docs/concepts/storage/volumes/
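For example, a hostPath volume could expose the node's file inside the CoreDNS container. This is only a sketch of what that Deployment change might look like (the volume name `k8shosts` is made up here), and it assumes /etc/k8shosts exists on every node where CoreDNS can be scheduled:

```yaml
# Hypothetical excerpt of the CoreDNS Deployment's Pod spec:
# mount the node's /etc/k8shosts into the container via hostPath.
spec:
  containers:
  - name: coredns
    volumeMounts:
    - name: k8shosts          # made-up volume name
      mountPath: /etc/k8shosts
      readOnly: true
  volumes:
  - name: k8shosts
    hostPath:
      path: /etc/k8shosts
      type: File              # fail Pod start if the file is missing on the node
```

Note that hostPath ties the Pod to each node's local copy of the file, so the file must be kept in sync across nodes.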
I understand what you said, but this configuration was added to the ConfigMap, which is already mounted into the Pod.
If I need to mount the host's hosts file into CoreDNS and have CoreDNS read that mounted file, how do I implement it?
Please read the Kubernetes documentation:
https://kubernetes.io/docs/concepts/storage/volumes/#configmap
If you need further help, please move the question to a Kubernetes forum, such as https://slack.k8s.io in the kubernetes-users or kubernetes-novice channels; they should be able to help with general Kubernetes questions like this.
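Following that link, a node-independent alternative is to put the hosts entries in a ConfigMap and mount that single key as /etc/k8shosts. This is a sketch, not a manifest from this thread; the ConfigMap name `coredns-hosts` and the sample entry are made up:

```yaml
# Hypothetical ConfigMap carrying the custom hosts entries.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-hosts
  namespace: kube-system
data:
  k8shosts: |
    10.0.0.10 example.internal
---
# Corresponding excerpt of the Pod spec: subPath mounts the one
# key as a file at /etc/k8shosts instead of as a directory.
spec:
  containers:
  - name: coredns
    volumeMounts:
    - name: k8shosts
      mountPath: /etc/k8shosts
      subPath: k8shosts
      readOnly: true
  volumes:
  - name: k8shosts
    configMap:
      name: coredns-hosts
```

One caveat: a container using a ConfigMap via subPath does not receive updates to the ConfigMap automatically, so the hosts plugin's reload option will not see edits until the Pod is recreated.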