kubernetes / minikube

Run Kubernetes locally

Home Page: https://minikube.sigs.k8s.io/

docker: Ingress not exposed on MacOS

jkornata opened this issue · comments

Steps to reproduce the issue:
I can't access the ingress on a fresh installation. This is on macOS, using Docker for Mac with its built-in Kubernetes disabled.

  1. minikube start --vm-driver=docker --kubernetes-version v1.14.0
  2. minikube addons enable ingress

The issue is not affected by the Kubernetes version; it also happens on the newest one. I've tried following this guide, but it doesn't work without an ingress service. I thought that adding the service manually, as suggested here, would fix the issue, but it doesn't (a rough sketch of such a service is below).
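
A minimal sketch of the manually added NodePort service (the name ingress-nginx is just a placeholder, the selector label is taken from the controller pod described further down, and port 80 is assumed to be the controller's HTTP port):

kubectl apply -n kube-system -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx            # placeholder name
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: nginx-ingress-controller
  ports:
  - name: http
    port: 80
    targetPort: 80               # assumed HTTP port of the controller
EOF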

kubectl get ep
NAME         ENDPOINTS         AGE
kubernetes   172.17.0.2:8443   33m
web          172.18.0.5:8080   23m

But if I try to curl 172.18.0.5:8080, it cannot connect.
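
To check the backend itself, one sanity test (a sketch; it assumes the web service exposes port 8080, as the endpoint above suggests) is to port-forward it and curl through localhost:

kubectl port-forward svc/web 8080:8080
curl http://localhost:8080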

kubectl -n kube-system describe po nginx-ingress-controller-b84556868-kh8n6
Name:           nginx-ingress-controller-b84556868-kh8n6
Namespace:      kube-system
Priority:       0
Node:           minikube/172.17.0.2
Start Time:     Tue, 31 Mar 2020 10:35:06 +0200
Labels:         addonmanager.kubernetes.io/mode=Reconcile
                app.kubernetes.io/name=nginx-ingress-controller
                app.kubernetes.io/part-of=kube-system
                pod-template-hash=b84556868
Annotations:    prometheus.io/port: 10254
                prometheus.io/scrape: true
Status:         Running
IP:             172.18.0.4
IPs:            <none>

curl 172.18.0.4 doesn't work either.

kubectl get ing
NAME              HOSTS              ADDRESS      PORTS   AGE
example-ingress   hello-world.info   172.17.0.2   80      24m

Neither curl 172.17.0.2 nor curl hello-world.info works (with /etc/hosts modified accordingly).
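
As far as I understand, on macOS with the docker driver the node IP 172.17.0.2 lives inside Docker's Linux VM, so it is generally not reachable from the host directly. A workaround sketch that at least confirms the controller serves the ingress rule (the pod name is taken from the describe output above; 80 is assumed to be its HTTP port):

kubectl -n kube-system port-forward pod/nginx-ingress-controller-b84556868-kh8n6 8080:80
curl -H "Host: hello-world.info" http://localhost:8080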

docker ps
CONTAINER ID        IMAGE                                COMMAND                  CREATED             STATUS              PORTS                                                                           NAMES
2bb364a550d9        gcr.io/k8s-minikube/kicbase:v0.0.8   "/usr/local/bin/entr…"   40 minutes ago      Up 40 minutes       127.0.0.1:32773->22/tcp, 127.0.0.1:32772->2376/tcp, 127.0.0.1:32771->8443/tcp   minikube
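
Note that the minikube container only publishes ports 22, 2376 and 8443 to 127.0.0.1, so nothing forwards HTTP(S) from the macOS host into the node. A quick way to double-check the published ports:

docker port minikube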

Full output of failed command:

|-------------|------------|--------------|-----|
| NAMESPACE   | NAME       | TARGET PORT  | URL |
|-------------|------------|--------------|-----|
| default     | kubernetes | No node port |     |
| kube-system | kube-dns   | No node port |     |
|-------------|------------|--------------|-----|
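
If the ingress controller were exposed with a NodePort (for example via the placeholder ingress-nginx service sketched after the issue description above), a node port should appear in this list; a quick check along those lines:

kubectl -n kube-system get svc ingress-nginx
minikube service list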

Full output of minikube start command used, if not already included:

😄  minikube v1.9.0 on Darwin 10.12.6
✨  Using the docker driver based on user configuration
🚜  Pulling base image ...
🔥  Creating Kubernetes in docker container with (CPUs=2) (2 available), Memory=1989MB (1989MB available) ...
🐳  Preparing Kubernetes v1.14.0 on Docker 19.03.2 ...
    ▪ kubeadm.pod-network-cidr=10.244.0.0/16
🌟  Enabling addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube".

❗ /usr/local/bin/kubectl is v1.18.0, which may be incompatible with Kubernetes v1.14.0.
💡  You can also use 'minikube kubectl -- get pods' to invoke a matching version
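
Given the version-skew warning, a matching client can be invoked with the command the output itself suggests, e.g.:

minikube kubectl -- version
minikube kubectl -- get pods -n kube-system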

Optional: Full output of minikube logs command:

==> Docker <== -- Logs begin at Tue 2020-03-31 08:29:37 UTC, end at Tue 2020-03-31 08:56:14 UTC. -- Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764070218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764087347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764102400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764116769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764132540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764146535Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764191662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764209275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764224931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764239224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764676320Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764742415Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764898357Z" level=info msg=serving... 
address=/var/run/docker/containerd/containerd.sock Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.764918518Z" level=info msg="containerd successfully booted in 0.075847s" Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.768852318Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000932020, READY" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.772244255Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.772733171Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.773028740Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.773232914Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.773688407Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000663a00, CONNECTING" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.773693901Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.774303501Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000663a00, READY" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.775500487Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.775655306Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.775702803Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.775735963Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.775832092Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000663f40, CONNECTING" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.776416332Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000663f40, READY" module=grpc Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.781092553Z" level=info msg="[graphdriver] using prior storage driver: overlay2" Mar 31 08:29:43 minikube dockerd[492]: time="2020-03-31T08:29:43.796947718Z" level=info msg="Loading containers: start." Mar 31 08:29:44 minikube dockerd[492]: time="2020-03-31T08:29:44.012391294Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address" Mar 31 08:29:44 minikube dockerd[492]: time="2020-03-31T08:29:44.112329971Z" level=info msg="Loading containers: done." Mar 31 08:29:44 minikube dockerd[492]: time="2020-03-31T08:29:44.145329464Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2 Mar 31 08:29:44 minikube dockerd[492]: time="2020-03-31T08:29:44.145538169Z" level=info msg="Daemon has completed initialization" Mar 31 08:29:44 minikube systemd[1]: Started Docker Application Container Engine. 
Mar 31 08:29:44 minikube dockerd[492]: time="2020-03-31T08:29:44.200990838Z" level=info msg="API listen on /var/run/docker.sock" Mar 31 08:29:44 minikube dockerd[492]: time="2020-03-31T08:29:44.201139377Z" level=info msg="API listen on [::]:2376" Mar 31 08:31:31 minikube dockerd[492]: time="2020-03-31T08:31:31.739954908Z" level=info msg="shim containerd-shim started" address=/containerd-shim/f9656dbf0466d93ef18b4df2bd71f153525ccf97621f24ffea19318ad3e51657.sock debug=false pid=2061 Mar 31 08:31:31 minikube dockerd[492]: time="2020-03-31T08:31:31.767851969Z" level=info msg="shim containerd-shim started" address=/containerd-shim/6d0c1d199ed791ac12ed903ee38af96fcaf6f6aa88827aacf1e0522fbd4bf4f6.sock debug=false pid=2065 Mar 31 08:31:31 minikube dockerd[492]: time="2020-03-31T08:31:31.772055007Z" level=info msg="shim containerd-shim started" address=/containerd-shim/2d9cdc27b4ee56ba50d171794fea9b61119e7e5a0c205188f9ca4df157170e05.sock debug=false pid=2068 Mar 31 08:31:31 minikube dockerd[492]: time="2020-03-31T08:31:31.781207101Z" level=info msg="shim containerd-shim started" address=/containerd-shim/732f37e8c13ebed913ae0b08f53511d9bb83fbfe84b3c4f8f9267867806c4e4b.sock debug=false pid=2072 Mar 31 08:31:32 minikube dockerd[492]: time="2020-03-31T08:31:32.561984783Z" level=info msg="shim containerd-shim started" address=/containerd-shim/55ae5c97f9cc2124da21f5992d9a70a3f2c7754923206cb07403e0a7ddd60aaf.sock debug=false pid=2257 Mar 31 08:31:32 minikube dockerd[492]: time="2020-03-31T08:31:32.694220871Z" level=info msg="shim containerd-shim started" address=/containerd-shim/1969993e8979c9e1492e8bf269ed3381e5de4fcd206b6d24932168c49ad47fa6.sock debug=false pid=2295 Mar 31 08:31:32 minikube dockerd[492]: time="2020-03-31T08:31:32.708992427Z" level=info msg="shim containerd-shim started" address=/containerd-shim/4120649ccf3aacf8220077782d5711078e53fdd33dfd12f94b21e638d54ef4fd.sock debug=false pid=2302 Mar 31 08:31:32 minikube dockerd[492]: time="2020-03-31T08:31:32.728154316Z" level=info msg="shim containerd-shim started" address=/containerd-shim/e0e5c370e03b6e7ab4787dd6359ba272d56a13f17130ad0140513ffc79a1f677.sock debug=false pid=2309 Mar 31 08:32:03 minikube dockerd[492]: time="2020-03-31T08:32:03.361833574Z" level=info msg="shim containerd-shim started" address=/containerd-shim/2981a3a881af97dc083d2e2349d45f1af94116c3a445e8c9d7d2261d0eab561f.sock debug=false pid=2876 Mar 31 08:32:03 minikube dockerd[492]: time="2020-03-31T08:32:03.533736979Z" level=info msg="shim containerd-shim started" address=/containerd-shim/fcd40646714e8511819738aa14c55f375b0ab025db1aedec5e64dc99c8929c30.sock debug=false pid=2906 Mar 31 08:32:03 minikube dockerd[492]: time="2020-03-31T08:32:03.936151887Z" level=info msg="shim containerd-shim started" address=/containerd-shim/bb5bcfea109fcad86618deb96ce4906be121a8ba325cbf4a9a86ad847605c23e.sock debug=false pid=2958 Mar 31 08:32:04 minikube dockerd[492]: time="2020-03-31T08:32:04.066814231Z" level=info msg="shim containerd-shim started" address=/containerd-shim/efa7427bb2c173ea13bca75be0a9f54c7096c622a1a552e37d73107d997c1ba0.sock debug=false pid=2982 Mar 31 08:32:05 minikube dockerd[492]: time="2020-03-31T08:32:05.379357085Z" level=info msg="shim containerd-shim started" address=/containerd-shim/07478204604916b55f0526b99919d1924ce0b9e7d8bfbb883989c6f9f6cd8118.sock debug=false pid=3061 Mar 31 08:32:05 minikube dockerd[492]: time="2020-03-31T08:32:05.671648875Z" level=info msg="shim containerd-shim started" 
address=/containerd-shim/6b33c5a37b92afa15820bb1bf8b0c2eacbd64fd61e6355305ad8b8072dfbc781.sock debug=false pid=3094 Mar 31 08:32:05 minikube dockerd[492]: time="2020-03-31T08:32:05.738650640Z" level=info msg="shim containerd-shim started" address=/containerd-shim/ed999c3330247aead76ae32bd6b1a431161fc7447f333b9e7d3777ccffd87eeb.sock debug=false pid=3113 Mar 31 08:32:07 minikube dockerd[492]: time="2020-03-31T08:32:07.356036160Z" level=info msg="shim containerd-shim started" address=/containerd-shim/b76d0042d4fc4476b1849988e9512e92ee0df797ee38b34a02d831cc05b6303b.sock debug=false pid=3264 Mar 31 08:32:07 minikube dockerd[492]: time="2020-03-31T08:32:07.361647388Z" level=info msg="shim containerd-shim started" address=/containerd-shim/3ed7ecff86048f785946d4c701e90d4bc1385978017c8533fc27fd790659206e.sock debug=false pid=3265 Mar 31 08:32:27 minikube dockerd[492]: time="2020-03-31T08:32:27.075828822Z" level=info msg="shim containerd-shim started" address=/containerd-shim/e43aba448c53331f08ec1cf1a2cc3b896cd36cf09ae377d92c6ad9cda82e031d.sock debug=false pid=3557 Mar 31 08:35:07 minikube dockerd[492]: time="2020-03-31T08:35:07.062000473Z" level=info msg="shim containerd-shim started" address=/containerd-shim/bf1bfb3c846a13cca7780c6a6592d470b7840e06528d827b058b826211009772.sock debug=false pid=4850 Mar 31 08:35:09 minikube dockerd[492]: time="2020-03-31T08:35:09.544399020Z" level=warning msg="[DEPRECATION NOTICE] registry v2 schema1 support will be removed in an upcoming release. Please contact admins of the quay.io registry NOW to avoid future disruption." Mar 31 08:36:53 minikube dockerd[492]: time="2020-03-31T08:36:53.035598147Z" level=info msg="shim containerd-shim started" address=/containerd-shim/6c4828993809d8b74455926d52bbec30da38e86cafd759bc7414a0ff4c3b3d42.sock debug=false pid=5776 Mar 31 08:41:55 minikube dockerd[492]: time="2020-03-31T08:41:55.036141932Z" level=info msg="shim containerd-shim started" address=/containerd-shim/f22126523d7baaa584a3e9797d19b195849472a0d973f0ae5429d94368466590.sock debug=false pid=8113 Mar 31 08:42:00 minikube dockerd[492]: time="2020-03-31T08:42:00.287239535Z" level=info msg="shim containerd-shim started" address=/containerd-shim/d5bcd9e0e2c52dae9a7cd6398f49bf0c57acba6f6b7db9f65c458b3ea52be9c8.sock debug=false pid=8213

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
49548f067e0fb gcr.io/google-samples/hello-app@sha256:c62ead5b8c15c231f9e786250b07909daf6c266d0fcddd93fea882eb722c3be4 14 minutes ago Running web 0 cc3588d4252ea
6e356d38f6644 quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:d0b22f715fcea5598ef7f869d308b55289a3daaa12922fa52a1abf17703c88e7 19 minutes ago Running nginx-ingress-controller 0 0254de39b3801
fdf3890ae6cad kindest/kindnetd@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555 23 minutes ago Running kindnet-cni 0 fdc9efa64e13c
5987d4d29db7b eb516548c180f 24 minutes ago Running coredns 0 a48e9875ea2d7
6a507738d34a6 eb516548c180f 24 minutes ago Running coredns 0 55124d3804fb1
31fa7a07f95ed 5cd54e388abaf 24 minutes ago Running kube-proxy 0 00fed65b89e57
791695c1a1a89 4689081edb103 24 minutes ago Running storage-provisioner 0 4e5d751c70346
b82aa41df356b 2c4adeb21b4ff 24 minutes ago Running etcd 0 09a6124253491
636cbc28b02a5 00638a24688b0 24 minutes ago Running kube-scheduler 0 59929901cfb8d
a15a83b0d226f ecf910f40d6e0 24 minutes ago Running kube-apiserver 0 1702fda9a509f
c3fe71e5fc3a8 b95b1efa0436b 24 minutes ago Running kube-controller-manager 0 d91b6fdb43251

==> coredns [5987d4d29db7] <==
.:53
2020-03-31T08:32:08.712Z [INFO] CoreDNS-1.3.1
2020-03-31T08:32:08.713Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2020-03-31T08:32:08.713Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669

==> coredns [6a507738d34a] <==
.:53
2020-03-31T08:32:08.711Z [INFO] CoreDNS-1.3.1
2020-03-31T08:32:08.711Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2020-03-31T08:32:08.711Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669

==> describe nodes <==
Name: minikube
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
minikube.k8s.io/commit=8af1ea66d8a0cb7202a44a91b6dc775577868ed1
minikube.k8s.io/name=minikube
minikube.k8s.io/updated_at=2020_03_31T10_31_49_0700
minikube.k8s.io/version=v1.9.0
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 31 Mar 2020 08:31:43 +0000
Taints:
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message


MemoryPressure False Tue, 31 Mar 2020 08:55:44 +0000 Tue, 31 Mar 2020 08:31:35 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 31 Mar 2020 08:55:44 +0000 Tue, 31 Mar 2020 08:31:35 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 31 Mar 2020 08:55:44 +0000 Tue, 31 Mar 2020 08:31:35 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 31 Mar 2020 08:55:44 +0000 Tue, 31 Mar 2020 08:31:35 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 172.17.0.2
Hostname: minikube
Capacity:
cpu: 2
ephemeral-storage: 61255492Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 2037620Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 61255492Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 2037620Ki
pods: 110
System Info:
Machine ID: 8545c5f5c4eb42e884baacaf5fa1f5fb
System UUID: e80618a3-0f92-4608-98b0-196f69922a9e
Boot ID: 598d6f3e-313e-44ba-867d-08468399f9d3
Kernel Version: 4.19.76-linuxkit
OS Image: Ubuntu 19.10
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.2
Kubelet Version: v1.14.0
Kube-Proxy Version: v1.14.0
PodCIDR: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE


default web 0 (0%) 0 (0%) 0 (0%) 0 (0%) 14m
kube-system coredns-fb8b8dccf-bktjn 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 24m
kube-system coredns-fb8b8dccf-lbpbz 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 24m
kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 23m
kube-system kindnet-hcl42 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 24m
kube-system kube-apiserver-minikube 250m (12%) 0 (0%) 0 (0%) 0 (0%) 23m
kube-system kube-controller-manager-minikube 200m (10%) 0 (0%) 0 (0%) 0 (0%) 23m
kube-system kube-proxy-m7v6p 0 (0%) 0 (0%) 0 (0%) 0 (0%) 24m
kube-system kube-scheduler-minikube 100m (5%) 0 (0%) 0 (0%) 0 (0%) 23m
kube-system nginx-ingress-controller-b84556868-kh8n6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 21m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 24m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits


cpu 850m (42%) 100m (5%)
memory 190Mi (9%) 390Mi (19%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message


Normal NodeHasSufficientMemory 24m (x8 over 24m) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 24m (x8 over 24m) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 24m (x7 over 24m) kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Warning readOnlySysFS 24m kube-proxy, minikube CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
Normal Starting 24m kube-proxy, minikube Starting kube-proxy.

==> dmesg <==
[Mar31 07:34] tsc: Unable to calibrate against PIT
[ +0.597814] virtio-pci 0000:00:01.0: can't derive routing for PCI INT A
[ +0.001924] virtio-pci 0000:00:01.0: PCI INT A: no GSI
[ +0.005139] virtio-pci 0000:00:07.0: can't derive routing for PCI INT A
[ +0.001680] virtio-pci 0000:00:07.0: PCI INT A: no GSI
[ +0.058545] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
[ +0.022298] ahci 0000:00:02.0: can't derive routing for PCI INT A
[ +0.001507] ahci 0000:00:02.0: PCI INT A: no GSI
[ +0.683851] i8042: Can't read CTR while initializing i8042
[ +0.001417] i8042: probe of i8042 failed with error -5
[ +0.006370] ACPI Error: Could not enable RealTimeClock event (20180810/evxfevnt-184)
[ +0.001774] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20180810/evxface-620)
[ +0.260204] ata1.00: ATA Identify Device Log not supported
[ +0.001281] ata1.00: Security Log not supported
[ +0.002459] ata1.00: ATA Identify Device Log not supported
[ +0.001264] ata1.00: Security Log not supported
[ +0.154008] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[ +0.021992] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[Mar31 07:35] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[ +0.077989] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[Mar31 07:40] hrtimer: interrupt took 2316993 ns
[Mar31 07:47] tee (5973): /proc/5576/oom_adj is deprecated, please use /proc/5576/oom_score_adj instead.

==> etcd [b82aa41df356] <==
2020-03-31 08:31:34.136909 I | etcdmain: etcd Version: 3.3.10
2020-03-31 08:31:34.139611 I | etcdmain: Git SHA: 27fc7e2
2020-03-31 08:31:34.139688 I | etcdmain: Go Version: go1.10.4
2020-03-31 08:31:34.140806 I | etcdmain: Go OS/Arch: linux/amd64
2020-03-31 08:31:34.141644 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2020-03-31 08:31:34.144109 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-03-31 08:31:34.161365 I | embed: listening for peers on https://172.17.0.2:2380
2020-03-31 08:31:34.162964 I | embed: listening for client requests on 127.0.0.1:2379
2020-03-31 08:31:34.163139 I | embed: listening for client requests on 172.17.0.2:2379
2020-03-31 08:31:34.193488 I | etcdserver: name = minikube
2020-03-31 08:31:34.194252 I | etcdserver: data dir = /var/lib/minikube/etcd
2020-03-31 08:31:34.195167 I | etcdserver: member dir = /var/lib/minikube/etcd/member
2020-03-31 08:31:34.195636 I | etcdserver: heartbeat = 100ms
2020-03-31 08:31:34.195985 I | etcdserver: election = 1000ms
2020-03-31 08:31:34.196385 I | etcdserver: snapshot count = 10000
2020-03-31 08:31:34.196656 I | etcdserver: advertise client URLs = https://172.17.0.2:2379
2020-03-31 08:31:34.197009 I | etcdserver: initial advertise peer URLs = https://172.17.0.2:2380
2020-03-31 08:31:34.197237 I | etcdserver: initial cluster = minikube=https://172.17.0.2:2380
2020-03-31 08:31:34.236216 I | etcdserver: starting member b8e14bda2255bc24 in cluster 38b0e74a458e7a1f
2020-03-31 08:31:34.236303 I | raft: b8e14bda2255bc24 became follower at term 0
2020-03-31 08:31:34.236320 I | raft: newRaft b8e14bda2255bc24 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2020-03-31 08:31:34.236334 I | raft: b8e14bda2255bc24 became follower at term 1
2020-03-31 08:31:34.340367 W | auth: simple token is not cryptographically signed
2020-03-31 08:31:34.401667 I | etcdserver: starting server... [version: 3.3.10, cluster version: to_be_decided]
2020-03-31 08:31:34.409456 I | etcdserver: b8e14bda2255bc24 as single-node; fast-forwarding 9 ticks (election ticks 10)
2020-03-31 08:31:34.424575 I | etcdserver/membership: added member b8e14bda2255bc24 [https://172.17.0.2:2380] to cluster 38b0e74a458e7a1f
2020-03-31 08:31:34.442258 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-03-31 08:31:34.444013 I | embed: listening for metrics on http://172.17.0.2:2381
2020-03-31 08:31:34.444133 I | embed: listening for metrics on http://127.0.0.1:2381
2020-03-31 08:31:34.702254 I | raft: b8e14bda2255bc24 is starting a new election at term 1
2020-03-31 08:31:34.702335 I | raft: b8e14bda2255bc24 became candidate at term 2
2020-03-31 08:31:34.702368 I | raft: b8e14bda2255bc24 received MsgVoteResp from b8e14bda2255bc24 at term 2
2020-03-31 08:31:34.702389 I | raft: b8e14bda2255bc24 became leader at term 2
2020-03-31 08:31:34.702402 I | raft: raft.node: b8e14bda2255bc24 elected leader b8e14bda2255bc24 at term 2
2020-03-31 08:31:34.931189 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.2:2379]} to cluster 38b0e74a458e7a1f
2020-03-31 08:31:35.006979 I | etcdserver: setting up the initial cluster version to 3.3
2020-03-31 08:31:35.060823 I | embed: ready to serve client requests
2020-03-31 08:31:35.391969 N | etcdserver/membership: set the initial cluster version to 3.3
2020-03-31 08:31:35.432869 I | etcdserver/api: enabled capabilities for version 3.3
2020-03-31 08:31:35.461278 I | embed: ready to serve client requests
2020-03-31 08:31:35.497338 I | embed: serving client requests on 127.0.0.1:2379
2020-03-31 08:31:35.498302 I | embed: serving client requests on 172.17.0.2:2379
proto: no coders for int
proto: no encoder for ValueSize int [GetProperties]
2020-03-31 08:32:26.935952 W | etcdserver: request "header:<ID:13557085228049851706 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/172.17.0.2" mod_revision:439 > success:<request_put:<key:"/registry/masterleases/172.17.0.2" value_size:65 lease:4333713191195075896 >> failure:<request_range:<key:"/registry/masterleases/172.17.0.2" > >>" with result "size:16" took too long (262.086409ms) to execute
2020-03-31 08:32:26.936285 W | etcdserver: read-only range request "key:"/registry/services/endpoints/kube-system/kube-scheduler" " with result "range_response_count:1 size:430" took too long (178.640542ms) to execute
2020-03-31 08:36:01.834832 W | etcdserver: read-only range request "key:"/registry/services/endpoints/kube-system/kube-controller-manager" " with result "range_response_count:1 size:448" took too long (537.346776ms) to execute
2020-03-31 08:36:01.837558 W | etcdserver: read-only range request "key:"/registry/deployments" range_end:"/registry/deploymentt" count_only:true " with result "range_response_count:0 size:7" took too long (237.353514ms) to execute
2020-03-31 08:36:50.822689 W | etcdserver: read-only range request "key:"/registry/persistentvolumeclaims" range_end:"/registry/persistentvolumeclaimt" count_only:true " with result "range_response_count:0 size:5" took too long (268.036763ms) to execute
2020-03-31 08:36:50.823106 W | etcdserver: read-only range request "key:"/registry/leases/kube-node-lease/minikube" " with result "range_response_count:1 size:289" took too long (313.963517ms) to execute
2020-03-31 08:36:52.839697 W | etcdserver: read-only range request "key:"/registry/runtimeclasses" range_end:"/registry/runtimeclasset" count_only:true " with result "range_response_count:0 size:5" took too long (521.345081ms) to execute
2020-03-31 08:41:36.476771 I | mvcc: store.index: compact 792
2020-03-31 08:41:36.485267 I | mvcc: finished scheduled compaction at 792 (took 4.328598ms)
2020-03-31 08:46:36.273524 I | mvcc: store.index: compact 1204
2020-03-31 08:46:36.277749 I | mvcc: finished scheduled compaction at 1204 (took 1.397204ms)
2020-03-31 08:51:36.069722 I | mvcc: store.index: compact 1625
2020-03-31 08:51:36.071463 I | mvcc: finished scheduled compaction at 1625 (took 836.551µs)

==> kernel <==
08:56:17 up 1:21, 0 users, load average: 0.33, 0.36, 0.53
Linux minikube 4.19.76-linuxkit #1 SMP Thu Oct 17 19:31:58 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 19.10"

==> kube-apiserver [a15a83b0d226] <==
I0331 08:55:48.541814 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:48.542070 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:49.542325 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:49.542511 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:50.543443 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:50.543681 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:51.545228 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:51.545400 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:52.548788 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:52.549108 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:53.550212 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:53.550512 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:54.550920 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:54.559542 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:55.552142 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:55.562253 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:56.552804 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:56.563460 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:57.554372 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:57.564611 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:58.555926 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:58.565912 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:59.557787 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:59.567042 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:00.558500 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:00.567752 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:01.559200 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:01.568257 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:02.560176 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:02.568718 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:03.560969 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:03.569388 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:04.562444 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:04.570431 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:05.563591 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:05.571439 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:06.542265 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:06.551395 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:07.545431 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:07.551901 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:08.546286 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:08.552996 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:09.547546 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:09.553592 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:10.553217 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:10.554171 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:11.554591 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:11.554731 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:12.555210 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:12.555426 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:13.555827 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:13.556101 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:14.556416 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:14.556718 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:15.557116 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:15.557383 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:16.558507 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:16.558968 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:17.559695 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:17.565042 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001

==> kube-controller-manager [c3fe71e5fc3a] <==
I0331 08:32:01.282471 1 controllermanager.go:497] Started "daemonset"
W0331 08:32:01.282653 1 controllermanager.go:489] Skipping "root-ca-cert-publisher"
I0331 08:32:01.738243 1 controllermanager.go:497] Started "horizontalpodautoscaling"
I0331 08:32:01.739200 1 horizontal.go:156] Starting HPA controller
I0331 08:32:01.741221 1 controller_utils.go:1027] Waiting for caches to sync for HPA controller
I0331 08:32:01.989670 1 controllermanager.go:497] Started "tokencleaner"
W0331 08:32:01.990240 1 controllermanager.go:489] Skipping "ttl-after-finished"
E0331 08:32:01.990935 1 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
I0331 08:32:01.990176 1 tokencleaner.go:116] Starting token cleaner controller
I0331 08:32:01.994933 1 controller_utils.go:1027] Waiting for caches to sync for token_cleaner controller
W0331 08:32:02.083571 1 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0331 08:32:02.086826 1 controller_utils.go:1034] Caches are synced for bootstrap_signer controller
I0331 08:32:02.087999 1 controller_utils.go:1034] Caches are synced for deployment controller
I0331 08:32:02.089057 1 controller_utils.go:1034] Caches are synced for certificate controller
I0331 08:32:02.092192 1 controller_utils.go:1034] Caches are synced for ReplicaSet controller
I0331 08:32:02.093670 1 controller_utils.go:1034] Caches are synced for endpoint controller
I0331 08:32:02.093757 1 controller_utils.go:1034] Caches are synced for certificate controller
I0331 08:32:02.096554 1 controller_utils.go:1034] Caches are synced for token_cleaner controller
I0331 08:32:02.132764 1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"0f9b3570-732a-11ea-9f29-02429a45b1b2", APIVersion:"apps/v1", ResourceVersion:"197", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-fb8b8dccf to 2
I0331 08:32:02.135841 1 controller_utils.go:1034] Caches are synced for node controller
I0331 08:32:02.135926 1 range_allocator.go:157] Starting range CIDR allocator
I0331 08:32:02.136016 1 controller_utils.go:1027] Waiting for caches to sync for cidrallocator controller
I0331 08:32:02.139985 1 controller_utils.go:1034] Caches are synced for GC controller
I0331 08:32:02.142975 1 controller_utils.go:1034] Caches are synced for HPA controller
I0331 08:32:02.143886 1 controller_utils.go:1034] Caches are synced for TTL controller
I0331 08:32:02.153627 1 controller_utils.go:1034] Caches are synced for PV protection controller
I0331 08:32:02.156638 1 controller_utils.go:1034] Caches are synced for taint controller
I0331 08:32:02.156788 1 node_lifecycle_controller.go:1159] Initializing eviction metric for zone:
W0331 08:32:02.156892 1 node_lifecycle_controller.go:833] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0331 08:32:02.157068 1 node_lifecycle_controller.go:1059] Controller detected that zone is now in state Normal.
I0331 08:32:02.158108 1 taint_manager.go:198] Starting NoExecuteTaintManager
I0331 08:32:02.160204 1 event.go:209] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"0cf13fa1-732a-11ea-9f29-02429a45b1b2", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0331 08:32:02.170846 1 controller_utils.go:1034] Caches are synced for job controller
I0331 08:32:02.173867 1 log.go:172] [INFO] signed certificate with serial number 348836518710746890614976265293012047567942960152
I0331 08:32:02.190539 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-fb8b8dccf", UID:"18426705-732a-11ea-9f29-02429a45b1b2", APIVersion:"apps/v1", ResourceVersion:"327", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-fb8b8dccf-lbpbz
I0331 08:32:02.221681 1 controller_utils.go:1034] Caches are synced for service account controller
I0331 08:32:02.225773 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-fb8b8dccf", UID:"18426705-732a-11ea-9f29-02429a45b1b2", APIVersion:"apps/v1", ResourceVersion:"327", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-fb8b8dccf-bktjn
I0331 08:32:02.236302 1 controller_utils.go:1034] Caches are synced for cidrallocator controller
I0331 08:32:02.260197 1 controller_utils.go:1034] Caches are synced for namespace controller
I0331 08:32:02.319173 1 range_allocator.go:310] Set node minikube PodCIDR to 10.244.0.0/24
I0331 08:32:02.483880 1 controller_utils.go:1034] Caches are synced for daemon sets controller
I0331 08:32:02.553898 1 controller_utils.go:1034] Caches are synced for ClusterRoleAggregator controller
I0331 08:32:02.561597 1 controller_utils.go:1034] Caches are synced for persistent volume controller
I0331 08:32:02.594321 1 controller_utils.go:1034] Caches are synced for attach detach controller
I0331 08:32:02.623154 1 controller_utils.go:1034] Caches are synced for stateful set controller
I0331 08:32:02.626184 1 controller_utils.go:1034] Caches are synced for expand controller
I0331 08:32:02.641836 1 controller_utils.go:1034] Caches are synced for PVC protection controller
I0331 08:32:02.675653 1 controller_utils.go:1034] Caches are synced for disruption controller
I0331 08:32:02.675749 1 disruption.go:294] Sending events to api server.
I0331 08:32:02.678210 1 controller_utils.go:1034] Caches are synced for ReplicationController controller
I0331 08:32:02.693864 1 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
I0331 08:32:02.724773 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"0fb881c6-732a-11ea-9f29-02429a45b1b2", APIVersion:"apps/v1", ResourceVersion:"208", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-m7v6p
I0331 08:32:02.753727 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"109cdd5b-732a-11ea-9f29-02429a45b1b2", APIVersion:"apps/v1", ResourceVersion:"240", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-hcl42
I0331 08:32:02.815582 1 controller_utils.go:1034] Caches are synced for garbage collector controller
I0331 08:32:02.815791 1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0331 08:32:02.863222 1 controller_utils.go:1034] Caches are synced for resource quota controller
I0331 08:32:02.894120 1 controller_utils.go:1034] Caches are synced for garbage collector controller
E0331 08:32:03.056559 1 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I0331 08:35:06.194335 1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"nginx-ingress-controller", UID:"85f7245a-732a-11ea-9f29-02429a45b1b2", APIVersion:"apps/v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-ingress-controller-b84556868 to 1
I0331 08:35:06.243687 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"nginx-ingress-controller-b84556868", UID:"85f8a669-732a-11ea-9f29-02429a45b1b2", APIVersion:"apps/v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-ingress-controller-b84556868-kh8n6

==> kube-proxy [31fa7a07f95e] <==
W0331 08:32:06.518547 1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
I0331 08:32:06.672751 1 server_others.go:148] Using iptables Proxier.
I0331 08:32:06.675746 1 server_others.go:178] Tearing down inactive rules.
I0331 08:32:07.027370 1 server.go:555] Version: v1.14.0
I0331 08:32:07.066710 1 conntrack.go:52] Setting nf_conntrack_max to 131072
E0331 08:32:07.067346 1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime])
I0331 08:32:07.067633 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0331 08:32:07.067763 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0331 08:32:07.068184 1 config.go:202] Starting service config controller
I0331 08:32:07.068371 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0331 08:32:07.089152 1 config.go:102] Starting endpoints config controller
I0331 08:32:07.089722 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0331 08:32:07.195756 1 controller_utils.go:1034] Caches are synced for endpoints config controller
I0331 08:32:07.269068 1 controller_utils.go:1034] Caches are synced for service config controller

==> kube-scheduler [636cbc28b02a] <==
I0331 08:31:35.938018 1 serving.go:319] Generated self-signed cert in-memory
W0331 08:31:36.608645 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0331 08:31:36.608726 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0331 08:31:36.608757 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0331 08:31:36.621912 1 server.go:142] Version: v1.14.0
I0331 08:31:36.625207 1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0331 08:31:36.638219 1 authorization.go:47] Authorization is disabled
W0331 08:31:36.638287 1 authentication.go:55] Authentication is disabled
I0331 08:31:36.638311 1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
I0331 08:31:36.640459 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
E0331 08:31:43.052618 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0331 08:31:43.053184 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0331 08:31:43.053690 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0331 08:31:43.055118 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0331 08:31:43.055202 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0331 08:31:43.055360 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0331 08:31:43.055806 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0331 08:31:43.055849 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0331 08:31:43.056810 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0331 08:31:43.070097 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0331 08:31:44.058160 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0331 08:31:44.059737 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0331 08:31:44.059875 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0331 08:31:44.069524 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0331 08:31:44.070192 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0331 08:31:44.073620 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0331 08:31:44.073938 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0331 08:31:44.074342 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0331 08:31:44.080776 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0331 08:31:44.081063 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
I0331 08:31:45.926301 1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0331 08:31:46.026597 1 controller_utils.go:1034] Caches are synced for scheduler controller
I0331 08:31:46.027034 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-scheduler...
I0331 08:31:46.066937 1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Tue 2020-03-31 08:29:37 UTC, end at Tue 2020-03-31 08:56:19 UTC. --
Mar 31 08:31:43 minikube kubelet[1618]: E0331 08:31:43.308177 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249aa607", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3e607, ext:711236661, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3e607, ext:711236661, loc:(*time.Location)(0x7ff88e0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:43 minikube kubelet[1618]: E0331 08:31:43.363793 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249a41c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f381c8, ext:711211006, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f381c8, ext:711211006, loc:(*time.Location)(0x7ff88e0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:43 minikube kubelet[1618]: E0331 08:31:43.419479 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249abc1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3fc1a, ext:711242326, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3fc1a, ext:711242326, loc:(*time.Location)(0x7ff88e0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:43 minikube kubelet[1618]: E0331 08:31:43.481355 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249a41c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f381c8, ext:711211006, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd84f838bb8, ext:999230435, loc:(*time.Location)(0x7ff88e0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:43 minikube kubelet[1618]: E0331 08:31:43.543899 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249aa607", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3e607, ext:711236661, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd84f83b36e, ext:999240601, loc:(*time.Location)(0x7ff88e0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:43 minikube kubelet[1618]: E0331 08:31:43.627373 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249abc1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3fc1a, ext:711242326, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd84f83d361, ext:999248781, loc:(*time.Location)(0x7ff88e0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:43 minikube kubelet[1618]: E0331 08:31:43.692428 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249abc1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3fc1a, ext:711242326, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd85f75d96f, ext:1266768277, loc:(*time.Location)(0x7ff88e0)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:43 minikube kubelet[1618]: E0331 08:31:43.851375 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249a41c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f381c8, ext:711211006, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd85f75aebd, ext:1266757353, loc:(*time.Location)(0x7ff88e0)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:44 minikube kubelet[1618]: E0331 08:31:44.249636 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249aa607", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3e607, ext:711236661, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd85f75ca73, ext:1266764442, loc:(*time.Location)(0x7ff88e0)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:44 minikube kubelet[1618]: E0331 08:31:44.452700 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a4a335340", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd863f1c940, ext:1341999475, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd863f1c940, ext:1341999475, loc:(*time.Location)(0x7ff88e0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:44 minikube kubelet[1618]: E0331 08:31:44.847316 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249a41c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f381c8, ext:711211006, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd86b2d1402, ext:1463325782, loc:(*time.Location)(0x7ff88e0)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:45 minikube kubelet[1618]: E0331 08:31:45.248634 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249aa607", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3e607, ext:711236661, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd86b2eae4a, ext:1463430769, loc:(*time.Location)(0x7ff88e0)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:45 minikube kubelet[1618]: E0331 08:31:45.655875 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249abc1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3fc1a, ext:711242326, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd86b33e875, ext:1463773344, loc:(*time.Location)(0x7ff88e0)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:46 minikube kubelet[1618]: E0331 08:31:46.055732 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249a41c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f381c8, ext:711211006, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd893eb4f75, ext:2073139617, loc:(*time.Location)(0x7ff88e0)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:49 minikube kubelet[1618]: E0331 08:31:49.648591 1618 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
Mar 31 08:31:49 minikube kubelet[1618]: E0331 08:31:49.657552 1618 helpers.go:721] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
Mar 31 08:31:59 minikube kubelet[1618]: E0331 08:31:59.692174 1618 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
Mar 31 08:31:59 minikube kubelet[1618]: E0331 08:31:59.692365 1618 helpers.go:721] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.343909 1618 kuberuntime_manager.go:946] updating runtime config through cri with podcidr 10.244.0.0/24
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.345132 1618 docker_service.go:353] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.345503 1618 kubelet_network.go:77] Setting Pod CIDR: -> 10.244.0.0/24
Mar 31 08:32:02 minikube kubelet[1618]: E0331 08:32:02.399773 1618 reflector.go:126] object-"kube-system"/"coredns": Failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
Mar 31 08:32:02 minikube kubelet[1618]: E0331 08:32:02.402299 1618 reflector.go:126] object-"kube-system"/"coredns-token-sflpk": Failed to list *v1.Secret: secrets "coredns-token-sflpk" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.651242 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1846dd62-732a-11ea-9f29-02429a45b1b2-config-volume") pod "coredns-fb8b8dccf-lbpbz" (UID: "1846dd62-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.663192 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-sflpk" (UniqueName: "kubernetes.io/secret/184fbeb3-732a-11ea-9f29-02429a45b1b2-coredns-token-sflpk") pod "coredns-fb8b8dccf-bktjn" (UID: "184fbeb3-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.663343 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/12c200e7-732a-11ea-9f29-02429a45b1b2-tmp") pod "storage-provisioner" (UID: "12c200e7-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.663423 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/184fbeb3-732a-11ea-9f29-02429a45b1b2-config-volume") pod "coredns-fb8b8dccf-bktjn" (UID: "184fbeb3-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.666767 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-sflpk" (UniqueName: "kubernetes.io/secret/1846dd62-732a-11ea-9f29-02429a45b1b2-coredns-token-sflpk") pod "coredns-fb8b8dccf-lbpbz" (UID: "1846dd62-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.682574 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-cjc6f" (UniqueName: "kubernetes.io/secret/12c200e7-732a-11ea-9f29-02429a45b1b2-storage-provisioner-token-cjc6f") pod "storage-provisioner" (UID: "12c200e7-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.791486 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/18876401-732a-11ea-9f29-02429a45b1b2-cni-cfg") pod "kindnet-hcl42" (UID: "18876401-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.791875 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/18876401-732a-11ea-9f29-02429a45b1b2-lib-modules") pod "kindnet-hcl42" (UID: "18876401-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.792173 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-n82c5" (UniqueName: "kubernetes.io/secret/18876401-732a-11ea-9f29-02429a45b1b2-kindnet-token-n82c5") pod "kindnet-hcl42" (UID: "18876401-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.792706 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/18876401-732a-11ea-9f29-02429a45b1b2-xtables-lock") pod "kindnet-hcl42" (UID: "18876401-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: E0331 08:32:02.798128 1618 reflector.go:126] object-"kube-system"/"kindnet-token-n82c5": Failed to list *v1.Secret: secrets "kindnet-token-n82c5" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.893841 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/18869b9f-732a-11ea-9f29-02429a45b1b2-lib-modules") pod "kube-proxy-m7v6p" (UID: "18869b9f-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.895351 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/18869b9f-732a-11ea-9f29-02429a45b1b2-kube-proxy") pod "kube-proxy-m7v6p" (UID: "18869b9f-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.896545 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/18869b9f-732a-11ea-9f29-02429a45b1b2-xtables-lock") pod "kube-proxy-m7v6p" (UID: "18869b9f-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.900684 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-82nbp" (UniqueName: "kubernetes.io/secret/18869b9f-732a-11ea-9f29-02429a45b1b2-kube-proxy-token-82nbp") pod "kube-proxy-m7v6p" (UID: "18869b9f-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:03 minikube kubelet[1618]: E0331 08:32:03.793791 1618 secret.go:198] Couldn't get secret kube-system/coredns-token-sflpk: couldn't propagate object cache: timed out waiting for the condition
Mar 31 08:32:03 minikube kubelet[1618]: E0331 08:32:03.793998 1618 nestedpendingoperations.go:267] Operation for ""kubernetes.io/secret/184fbeb3-732a-11ea-9f29-02429a45b1b2-coredns-token-sflpk" ("184fbeb3-732a-11ea-9f29-02429a45b1b2")" failed. No retries permitted until 2020-03-31 08:32:04.293967663 +0000 UTC m=+36.055171975 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "coredns-token-sflpk" (UniqueName: "kubernetes.io/secret/184fbeb3-732a-11ea-9f29-02429a45b1b2-coredns-token-sflpk") pod "coredns-fb8b8dccf-bktjn" (UID: "184fbeb3-732a-11ea-9f29-02429a45b1b2") : couldn't propagate object cache: timed out waiting for the condition"
Mar 31 08:32:03 minikube kubelet[1618]: E0331 08:32:03.794879 1618 secret.go:198] Couldn't get secret kube-system/coredns-token-sflpk: couldn't propagate object cache: timed out waiting for the condition
Mar 31 08:32:03 minikube kubelet[1618]: E0331 08:32:03.794952 1618 nestedpendingoperations.go:267] Operation for ""kubernetes.io/secret/1846dd62-732a-11ea-9f29-02429a45b1b2-coredns-token-sflpk" ("1846dd62-732a-11ea-9f29-02429a45b1b2")" failed. No retries permitted until 2020-03-31 08:32:04.294926895 +0000 UTC m=+36.056131206 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "coredns-token-sflpk" (UniqueName: "kubernetes.io/secret/1846dd62-732a-11ea-9f29-02429a45b1b2-coredns-token-sflpk") pod "coredns-fb8b8dccf-lbpbz" (UID: "1846dd62-732a-11ea-9f29-02429a45b1b2") : couldn't propagate object cache: timed out waiting for the condition"
Mar 31 08:32:03 minikube kubelet[1618]: E0331 08:32:03.900920 1618 secret.go:198] Couldn't get secret kube-system/kindnet-token-n82c5: couldn't propagate object cache: timed out waiting for the condition
Mar 31 08:32:03 minikube kubelet[1618]: E0331 08:32:03.901234 1618 nestedpendingoperations.go:267] Operation for ""kubernetes.io/secret/18876401-732a-11ea-9f29-02429a45b1b2-kindnet-token-n82c5" ("18876401-732a-11ea-9f29-02429a45b1b2")" failed. No retries permitted until 2020-03-31 08:32:04.401170675 +0000 UTC m=+36.162375074 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "kindnet-token-n82c5" (UniqueName: "kubernetes.io/secret/18876401-732a-11ea-9f29-02429a45b1b2-kindnet-token-n82c5") pod "kindnet-hcl42" (UID: "18876401-732a-11ea-9f29-02429a45b1b2") : couldn't propagate object cache: timed out waiting for the condition"
Mar 31 08:32:04 minikube kubelet[1618]: W0331 08:32:04.418840 1618 container.go:409] Failed to create summary reader for "/system.slice/run-rfbc88cf5398744519564ad9cbf4ff678.scope": none of the resources are being tracked.
Mar 31 08:32:04 minikube kubelet[1618]: W0331 08:32:04.419588 1618 container.go:409] Failed to create summary reader for "/system.slice/run-r0435686948fa4809aafd2bfdbacf7779.scope": none of the resources are being tracked.
Mar 31 08:32:05 minikube kubelet[1618]: W0331 08:32:05.976174 1618 pod_container_deletor.go:75] Container "fdc9efa64e13c2ce2c3745c444a18be062347bf4c9dd4e17f131c14e020b9101" not found in pod's containers
Mar 31 08:32:06 minikube kubelet[1618]: W0331 08:32:06.837949 1618 pod_container_deletor.go:75] Container "55124d3804fb1e46a3df0165b6a8e99f7b1ccc3fd80da91f0645219a283f7b79" not found in pod's containers
Mar 31 08:32:06 minikube kubelet[1618]: W0331 08:32:06.868003 1618 pod_container_deletor.go:75] Container "a48e9875ea2d71897bfcb6a9d5163006cbc89e4d738c41f651c47396299b93fb" not found in pod's containers
Mar 31 08:32:08 minikube kubelet[1618]: I0331 08:32:08.373210 1618 transport.go:132] certificate rotation detected, shutting down client connections to start using new credentials
Mar 31 08:32:09 minikube kubelet[1618]: E0331 08:32:09.711731 1618 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
Mar 31 08:32:09 minikube kubelet[1618]: E0331 08:32:09.711865 1618 helpers.go:721] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
Mar 31 08:32:19 minikube kubelet[1618]: E0331 08:32:19.816629 1618 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
Mar 31 08:32:19 minikube kubelet[1618]: E0331 08:32:19.817086 1618 helpers.go:721] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
Mar 31 08:35:06 minikube kubelet[1618]: I0331 08:35:06.390555 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "nginx-ingress-token-6hbxw" (UniqueName: "kubernetes.io/secret/86005fa7-732a-11ea-9f29-02429a45b1b2-nginx-ingress-token-6hbxw") pod "nginx-ingress-controller-b84556868-kh8n6" (UID: "86005fa7-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:35:07 minikube kubelet[1618]: W0331 08:35:07.441583 1618 pod_container_deletor.go:75] Container "0254de39b3801b1cdce25aea2b15a6cf57f9d4c13e50b84459be2a1b197f73aa" not found in pod's containers
Mar 31 08:41:53 minikube kubelet[1618]: E0331 08:41:53.448884 1618 reflector.go:126] object-"default"/"default-token-jp22c": Failed to list *v1.Secret: secrets "default-token-jp22c" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node "minikube" and this object
Mar 31 08:41:53 minikube kubelet[1618]: I0331 08:41:53.535630 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-jp22c" (UniqueName: "kubernetes.io/secret/78b61bb0-732b-11ea-9f29-02429a45b1b2-default-token-jp22c") pod "web" (UID: "78b61bb0-732b-11ea-9f29-02429a45b1b2")
Mar 31 08:41:55 minikube kubelet[1618]: W0331 08:41:55.682086 1618 pod_container_deletor.go:75] Container "cc3588d4252ea6a8587eecc630d55d513d07e8630a4f8eb3bbffb6ed7c4bc995" not found in pod's containers
Mar 31 08:52:32 minikube kubelet[1618]: W0331 08:52:32.579484 1618 reflector.go:289] object-"kube-system"/"coredns": watch of *v1.ConfigMap ended with: too old resource version: 325 (1077)

==> storage-provisioner [791695c1a1a8] <==

I suspect something may be missing to forward the port with the docker driver. I don't know if this is a documentation issue or an implementation issue. @medyagh - can you comment?

Do you mind trying to see if it works properly with --driver=hyperkit?

Works just fine with --driver=hyperkit

The ingress addon is currently not supported with the docker driver on MacOS. This is due to a limitation of the docker bridge network on mac.
There is a workaround that we have implemented for the core minikube tasks such as tunnel and service.

We could add the same workaround for the ingress addon with the docker driver on mac and windows.
That said, I will mark this as a bug to fix.

Sorry that you faced this issue. The least we could do is not allow the user to enable this addon with the docker driver on macOS for now, until it is fixed.
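For context, the workarounds referred to above are minikube's own service and tunnel commands. A rough sketch, using the web deployment from this issue as the example:

# Reach a NodePort service from the host without the ingress addon (sketch):
minikube service web --url    # prints a locally reachable URL; with the docker driver the terminal must stay open

# Expose LoadBalancer services by keeping a tunnel running in another terminal:
minikube tunnel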

@jkornata
I will make a PR to fix this bug

Thank you @medyagh

@medyagh, could you please re-open this until the defect is fixed?

This issue is referenced in the CLI output when trying to enable the ingress addon, yet the status is closed? Probably better to open it up, @medyagh.

I think the bot heard it wrong; the comment said not to close this bug.

I've been trying to enable ingress on Windows 10. When I try, I get the following error:

$ minikube addons enable ingress
* Due to docker networking limitations on windows, ingress addon is not supported for this driver.
Alternatively to use this addon you can use a vm-based driver:

        'minikube start --vm=true'

To track the update on this work in progress feature please check:
https://github.com/kubernetes/minikube/issues/7332

I believe this error message was introduced as part of fix: #7393

That fix redirects users to this issue. Is this the correct ticket? If so, why does the ticket only refer to MacOS? If not, what is the correct ticket?

I'm sorry if this comment doesn't have anything to do with this ticket, but I reached a dead end with this error and I wanted to make sure I'm tracking correctly.

Yes, this error message will show up for the docker driver on both MacOS and Windows, since this ticket applies to both. This is still an outstanding bug we need to address.

@oconnelc have you tried the suggestion that minikube gave?
'minikube start --vm=true'

This issue still exists. If you want a small workaround:

I suggest you install VirtualBox and run the command
minikube addons enable ingress

If you get the below error on Mac(OS):
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
E0911 13:34:45.394430 41676 start.go:174] Error starting host: Error
creating host: Error executing step: Creating VM.
: Error setting up host only network on machine start: The host-only
adapter we just created is not visible. This is a well known
VirtualBox bug. You might want to uninstall it and reinstall at least
version 5.0.12 that is is supposed to fix this issue.

Then try the following steps:
System Preferences -> Security & Privacy -> Allow -> allow the software vendor (in this case Oracle)
Restart

Sorry for the delayed response. I swear that the last time I tried minikube start --vm=true it failed to run but it seems to be running now.

I'm also in the process of updating to use WSL2

I still am running into this error on Mac despite running minikube start --vm=true - any ideas?
I made a stackoverflow question about it https://stackoverflow.com/questions/63388065/minikube-kubernetes-wont-allow-ingress-on-mac-despite-running-as-a-vm

I still am running into this error on Mac despite running minikube start --vm=true - any ideas?
I made a stackoverflow question about it https://stackoverflow.com/questions/63388065/minikube-kubernetes-wont-allow-ingress-on-mac-despite-running-as-a-vm

minikube delete -> minikube start --vm=true

fixed my issue

Why is it not possible to expose NodePorts just like kind does?

https://kind.sigs.k8s.io/docs/user/ingress/#create-cluster

I still am running into this error on Mac despite running minikube start --vm=true - any ideas?
I made a stackoverflow question about it https://stackoverflow.com/questions/63388065/minikube-kubernetes-wont-allow-ingress-on-mac-despite-running-as-a-vm

minikube delete -> minikube start --vm=true

fixed my issue

Thank you, I've already fixed this issue by following your instructions.

I still am running into this error on Mac despite running minikube start --vm=true - any ideas?
I made a stackoverflow question about it https://stackoverflow.com/questions/63388065/minikube-kubernetes-wont-allow-ingress-on-mac-despite-running-as-a-vm

minikube delete -> minikube start --vm=true

fixed my issue

Thank you, I fixed my issue. Now I can enable the addon and it works fine.

Is there anything happening on darwin/macos for this issue, other than the workaround?

So sad :/

The goal is to use docker on macOS, not another VM... but perhaps the solution is to use the Kubernetes built into Docker Desktop for Mac...

The real question is: what is the workaround?

Thanks πŸ™

Welcome onboard @JulienBreux. I also had to roll back to VirtualBox with the --vm option to get things to work. I gave up.

Just ran into this issue. To fix it on my Mac:

minikube config set vm-driver hyperkit
minikube delete
minikube start
minikube addons enable ingress

Just ran into this issue. To fix it on my Mac:

minikube config set vm-driver hyperkit
minikube delete
minikube start
minikube addons enable ingress

This worked for me on Windows 10. Thank you.

Is there really no workaround for this? I chose the Docker driver instead of the hyperkit driver due to dnsmasq issues. Now I am told to use hyperkit again, which will cause the same issue but allow ingress-nginx.

For what it's worth:
I just upgraded to minikube v1.16.0 on Windows 10 and it worked. Thank you so much, @medyagh!

It appears this has now been fixed on Windows in #9761. Is the issue also being looked into for MacOS?


@camba1
It works for me on my mac.
Thank you a lot!

I have the same problem. I start minikube with this command,
minikube start --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers' --vm=true, and the terminal shows me this:

πŸ˜„  minikube v1.16.0 on Darwin 11.1
✨  Automatically selected the hyperkit driver. Other choices: vmware, vmwarefusion
βœ…  Using image repository registry.cn-hangzhou.aliyuncs.com/google_containers
πŸ‘  Starting control plane node minikube in cluster minikube
πŸ”₯  Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.20.0 on Docker 20.10.0 ...
    β–ͺ Generating certificates and keys ...
    β–ͺ Booting up control plane ...
    β–ͺ Configuring RBAC rules ...
πŸ”Ž  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
πŸ„  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

So, I'm happy to type the command minikube addons enable ingress. But after several minutes, it shows me this:

❌  Exiting due to MK_ENABLE: run callbacks: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: timed out waiting for the condition]

😿  If the above advice does not help, please let us know: 
πŸ‘‰  https://github.com/kubernetes/minikube/issues/new/choose

and then I list the pods with kubectl get pods -A:

NAMESPACE     NAME                                        READY   STATUS              RESTARTS   AGE
kube-system   coredns-54d67798b7-d5vwm                    1/1     Running             0          7m7s
kube-system   etcd-minikube                               1/1     Running             0          7m22s
kube-system   ingress-nginx-admission-create-44759        0/1     ImagePullBackOff    0          6m46s
kube-system   ingress-nginx-admission-patch-sp948         0/1     ImagePullBackOff    0          6m46s
kube-system   ingress-nginx-controller-5f568d55f8-dtrmv   0/1     ContainerCreating   0          6m46s
kube-system   kube-apiserver-minikube                     1/1     Running             0          7m22s
kube-system   kube-controller-manager-minikube            1/1     Running             0          7m22s
kube-system   kube-proxy-chz6n                            1/1     Running             0          7m7s
kube-system   kube-scheduler-minikube                     1/1     Running             0          7m22s
kube-system   storage-provisioner                         1/1     Running             1          7m22s

To get more information, I execute the command kubectl describe pod ingress-nginx-admission-create-44759 -n=kube-system:

Events:
  Type     Reason       Age                    From               Message
  ----     ------       ----                   ----               -------
  Normal   Scheduled    28m                    default-scheduler  Successfully assigned kube-system/ingress-nginx-admission-create-44759 to minikube
  Warning  FailedMount  28m                    kubelet            MountVolume.SetUp failed for volume "ingress-nginx-admission-token-79ldg" : failed to sync secret cache: timed out waiting for the condition
  Normal   Pulling      27m (x4 over 28m)      kubelet            Pulling image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.2.2"
  Warning  Failed       27m (x4 over 28m)      kubelet            Failed to pull image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.2.2": rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
  Warning  Failed       27m (x4 over 28m)      kubelet            Error: ErrImagePull
  Normal   BackOff      18m (x43 over 28m)     kubelet            Back-off pulling image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.2.2"
  Warning  Failed       3m16s (x109 over 28m)  kubelet            Error: ImagePullBackOff

The result of kubectl describe pod ingress-nginx-controller-5f568d55f8-dtrmv -n=kube-system:

Events:
  Type     Reason       Age                   From               Message
  ----     ------       ----                  ----               -------
  Normal   Scheduled    50m                   default-scheduler  Successfully assigned kube-system/ingress-nginx-controller-5f568d55f8-dtrmv to minikube
  Warning  FailedMount  27m (x4 over 41m)     kubelet            Unable to attach or mount volumes: unmounted volumes=[webhook-cert], unattached volumes=[ingress-nginx-token-gmz69 webhook-cert]: timed out waiting for the condition
  Warning  FailedMount  19m (x23 over 50m)    kubelet            MountVolume.SetUp failed for volume "webhook-cert" : secret "ingress-nginx-admission" not found
  Warning  FailedMount  5m17s (x14 over 48m)  kubelet            Unable to attach or mount volumes: unmounted volumes=[webhook-cert], unattached volumes=[webhook-cert ingress-nginx-token-gmz69]: timed out waiting for the condition

Well! After executing docker login, it shows the same events. Do you have any suggestions for that?

Hi, I have the same issue. If I switch from "docker" to "hyperkit" (performing minikube delete, etc.) I cannot pull any image from the Internet.
BTW, I'm trying to switch from docker to hyperkit because it seems to be a requirement for enabling Nginx ingress on Mac:
minikube addons enable ingress <-- this only seems to work with hyperkit, according to minikube's output messages.

How can I fix it? I just want to get ingress working so I can reach my local Nginx app with minikube on Mac.

  • I'm not using any VPN or local proxy.
  • minikube v1.19.0 en Darwin 10.14.6
  • Kubernetes v1.20.2 en Docker 20.10.4

Any workaround to call my Nginx until this issue is resolved?

Thank you for your time and help!

Is there a solution for this issue anywhere?

Same problem here. Please update this issue if it's fixed. Thank you.

Solved for me:
$ minikube start --vm-driver=virtualbox
$ minikube addons enable ingress

Using Hyper-V instead of docker on Windows solved the issue.
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
minikube start --driver=hyperv

β–Ά minikube start --vm=true
πŸ˜„  minikube v1.20.0 on Darwin 10.15.7
✨  Using the docker driver based on user configuration
πŸ‘  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
πŸ”₯  Creating docker container (CPUs=4, Memory=5946MB) ...
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
    β–ͺ Generating certificates and keys ...
    β–ͺ Booting up control plane ...
    β–ͺ Configuring RBAC rules ...
πŸ”Ž  Verifying Kubernetes components...
    β–ͺ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
πŸ„  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

~/Projects
β–Ά minikube addons enable ingress

❌  Exiting due to MK_USAGE: Due to networking limitations of driver docker on darwin, ingress addon is not supported.
Alternatively to use this addon you can use a vm-based driver:

	'minikube start --vm=true'

To track the update on this work in progress feature please check:
https://github.com/kubernetes/minikube/issues/7332

Using Hyper-V instead of docker on Windows solved the issue.
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
minikube start --driver=hyperv

@bajosiaa That would be helpful for a Windows topic, but this is a MacOS one...

The Docker driver for minikube allows pods to use the special DNS name host.docker.internal to communicate with other containers outside of minikube.

It would be helpful to have the ingress addon working under the minikube docker driver on macOS.
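A quick way to check that claim from inside the cluster (a sketch; host-check is just a throwaway pod name):

# Spin up a temporary busybox pod and resolve the host's special DNS name:
kubectl run host-check --rm -it --image=busybox --restart=Never -- \
  nslookup host.docker.internal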

When will the ingress addon be functional on M1 Apple Macs?

driver 'virtualbox' is not supported on darwin/arm64
The driver 'hyperkit' is not supported on darwin/arm64

We still have this issue. When is it expected to be fixed?

minikube start --vm=true
πŸ˜„ minikube v1.20.0 on Darwin 10.15.7
✨ Using the docker driver based on existing profile
πŸ‘ Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
πŸƒ Updating the running docker "minikube" container ...
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
πŸ”Ž Verifying Kubernetes components...
β–ͺ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
πŸ„ Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
$ minikube addons enable ingress

❌ Exiting due to MK_USAGE: Due to networking limitations of driver docker on darwin, ingress addon is not supported.
Alternatively to use this addon you can use a vm-based driver:

'minikube start --vm=true'

To track the update on this work in progress feature please check:
#7332

We still have this issue. When is it expected to be fixed?

minikube start --vm=true
πŸ˜„ minikube v1.20.0 on Darwin 10.15.7
✨ Using the docker driver based on existing profile
πŸ‘ Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
πŸƒ Updating the running docker "minikube" container ...
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
πŸ”Ž Verifying Kubernetes components...
β–ͺ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
πŸ„ Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
$ minikube addons enable ingress

❌ Exiting due to MK_USAGE: Due to networking limitations of driver docker on darwin, ingress addon is not supported.
Alternatively to use this addon you can use a vm-based driver:

'minikube start --vm=true'

To track the update on this work in progress feature please check:
#7332

minikube delete
minikube start --vm=true
minikube addons enable ingress

Running minikube delete before minikube start --vm=true works for me.

I think minikube won't pull the VM boot image if there is an existing docker-driver profile (created with --vm=false) in the local cache.

minikube delete might be replaced with minikube cache delete, but I haven't tried.
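A sketch of that reset sequence, with a check that the next start really picked a VM driver (the exact output columns of minikube profile list vary by version):

# Drop the existing docker-driver profile and start fresh with a VM driver:
minikube delete
minikube start --vm=true

# Confirm the active profile no longer uses the docker driver:
minikube profile list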

Ingress on the macOS docker driver is still an open issue; we just haven't had the bandwidth to address it yet. We would be happy to review a PR fixing this; otherwise we will try to get to this soon.


Here is a workaround. It looks ugly but it works.

Like what minikube tunnel does, a tunnel is required to hack the network.

  1. Open a new terminal and run the following command.

    Replace API_SERVER_SSH_PORT and your real USERNAME
    You can get API_SERVER_SSH_PORT by running docker port minikube | grep 22. The API_SERVER_SSH_PORT is 57008 in the following case.

    $ docker port minikube | grep 22
    22/tcp -> 127.0.0.1:57008
    
    sudo ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -N docker@127.0.0.1 -p [API_SERVER_SSH_PORT] -i /Users/[USERNAME]/.minikube/machines/minikube/id_rsa -L 80:127.0.0.1:80
    
  2. Add the following line to the bottom of the /etc/hosts file.
    Replace hello-world.info with your real DNS name.

    127.0.0.1 hello-world.info
    
  3. Verify that the Ingress controller is directing traffic:
    Replace hello-world.info with your real DNS name.

    curl hello-world.info
    

Reference: ingress-minikube
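A compact, untested variant of step 1 above, wrapping the port lookup into a shell variable (paths and the single-port assumption may need adjusting):

# Look up the container's SSH port and open the tunnel in one go (sketch):
SSH_PORT=$(docker port minikube 22 | head -n1 | cut -d: -f2)
sudo ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -N \
  docker@127.0.0.1 -p "$SSH_PORT" \
  -i "$HOME/.minikube/machines/minikube/id_rsa" \
  -L 80:127.0.0.1:80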

I have tried everything above; my minikube just gets stuck while verifying the ingress addon.

πŸ„  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
    β–ͺ Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
    β–ͺ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
    β–ͺ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
πŸ”Ž  Verifying ingress addon...

I have tried everything above; my minikube just gets stuck while verifying the ingress addon.

Could you please list all the commands you tried and the minikube version?

@zhan9san I get a connection refused error when I try to ssh on M1 Mac.
When I restart minikube and try again I get kex_exchange_identification: Connection closed by remote host.
That aside, kudos on the efforts with your PR to try fixing the issue

I have tried everything above; my minikube just gets stuck while verifying the ingress addon.

Could you please list all the commands you tried and the minikube version?

Here you go

❯ minikube version
minikube version: v1.22.0
commit: a03fbcf166e6f74ef224d4a63be4277d017bb62e
❯ minikube addons enable ingress

@zhan9san I get a connection refused error when I try to ssh on M1 Mac.
When I restart minikube and try again I get kex_exchange_identification: Connection closed by remote host.
That aside, kudos on the efforts with your PR to try fixing the issue

Sorry about that.

The port should not be API_SERVER_PORT but API_SERVER_SSH_PORT.

You can get API_SERVER_SSH_PORT by running docker port minikube | grep 22 on MacOS.

e.g. It's 57008

$ docker port minikube | grep 22
22/tcp -> 127.0.0.1:57008

I'll correct the comment above.

❯ minikube version
minikube version: v1.22.0
commit: a03fbcf166e6f74ef224d4a63be4277d017bb62e
❯ minikube addons enable ingress

Hi,
In fact, the Ingress add-on is not supported on MacOS at commit a03fbcf166e6f74ef224d4a63be4277d017bb62e.

I am sorry, I have no idea why minikube gets stuck in your case.

I tried it but got a different result from yours.

❯  minikube version
minikube version: v1.22.0
commit: a03fbcf166e6f74ef224d4a63be4277d017bb62e
~                                                                                                                     09:39:38
❯ minikube addons enable ingress

❌  Exiting due to MK_USAGE: Due to networking limitations of driver docker on darwin, ingress addon is not supported.
Alternatively to use this addon you can use a vm-based driver:

	'minikube start --vm=true'

To track the update on this work in progress feature please check:
https://github.com/kubernetes/minikube/issues/7332

@zhan9san thanks. The steps are OK now but I am still unable to connect. When I run the ssh command I get this: Warning: Permanently added '[127.0.0.1]:49608' (ECDSA) to the list of known hosts., which I assume is fine. But after making a request I get channel 2: open failed: connect failed: Connection refused


When I run the ssh command I get this Warning: Permanently added '[127.0.0.1]:49608' (ECDSA) to the list of known hosts. , which I assume is fine.

Yes, the warning message is fine.

Let me explain the key parameter, -L 80:127.0.0.1:80, in the ssh command. It may help you debug it.
The complete format is -L [bind_address:]port:host:hostport. The default value of bind_address is 127.0.0.1.
You can get detailed help by running man ssh.

Please keep in mind, the goal is to access a service in the Kubernetes cluster. Due to the network issue, we cannot access it directly on MacOS, so we introduce an ssh tunnel.

If we use other values for these parameters, they all work as well.
e.g.

-L 127.0.0.1:80:127.0.0.1:80
-L 80:[API_SERVER_IP]:80
-L 127.0.0.1:80:[API_SERVER_IP]:80

To get [API_SERVER_IP], run minikube ip.

Please note the two 127.0.0.1 addresses are not the same server.

After setting up a tunnel, we can access host:hostport by running curl bind_address:port.

-p [API_SERVER_SSH_PORT]: the ssh port to connect to on the remote API server container.

-L 80:127.0.0.1:80: To understand this concept, let me show the complete parameter, -L [bind_address:]port:host:hostport. For this case, it would be -L 127.0.0.1:80:127.0.0.1:80.

But after making a request I get channel 2: open failed: connect failed: Connection refused

Could you share more detailed information?

kubectl get ingress

BTW, the ssh tunnel is independent of the service you access. If the service doesn't work, the ssh tunnel won't help.
You can verify the service the following way:

$ minikube ssh
$ curl 127.0.0.1:80

channel 2: open failed: connect failed: Connection refused

Maybe it is because the service doesn't work in the Kubernetes cluster.

After you run the ssh command, it will block (though it can also run in the background), and then you can access the service in another terminal.

@sharifelgamal Hi, can we expect a release soon with @zhan9san's fix?

Our next release should be at the end of August.

Our next release should be at the end of August.

Any update regarding this issue?

Release is underway right now, 1.23.0 will be released today.

@sharifelgamal @zhan9san We ran into an issue due to this change. We were running K8s 1.17.4 using minikube version 1.23.0.
When we run minikube tunnel we see the following error:

E0927 17:11:25.166351   27424 ssh_tunnel.go:82] error listing ingresses: the server could not find the requested resource

I believe the reason is that in K8s 1.17 Ingress is only present in v1beta1 - https://v1-17.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#ingresslist-v1beta1-networking-k8s-io

The PR tries to list the ingress resources using the v1 apiVersion, which is not present in versions prior to K8s 1.19.
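A quick way to confirm which Ingress API versions a given cluster actually serves (a hedged check using standard kubectl commands):

# See which versions of the networking.k8s.io group the cluster serves:
kubectl api-versions | grep networking.k8s.io

# Ask for the v1 Ingress schema; this is expected to fail on K8s < 1.19,
# where Ingress is only served as networking.k8s.io/v1beta1:
kubectl explain ingress --api-version=networking.k8s.io/v1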


Hi @mdhume
Sorry for the inconvenience.

Would it be possible to upgrade the k8s cluster? Supporting backward compatibility would introduce more logic.

Is minikube tunnel still required on mac?

For the docker driver, yes.
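For reference, a typical flow on the docker driver once the tunnel handles ingress might look like this (a sketch, assuming the tunnel exposes ingress on 127.0.0.1 the same way the manual ssh workaround earlier in this thread does; hello-world.info is the example host):

# Keep the tunnel running in its own terminal (it may prompt for sudo):
minikube tunnel

# In another terminal, resolve the example ingress host to the tunnel endpoint:
curl --resolve hello-world.info:80:127.0.0.1 http://hello-world.info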

@zhan9san unfortunately we won't be able to, since that is the version we are currently running. One option could be to revert to the previous behavior, i.e. disable ingress support, if the K8s version detected is prior to 1.19.


How about adding an option like

minikube tunnel --service-only

or something else to set up tunnels for 'service' only?

@zhan9san that would work too πŸ‘


@sharifelgamal

To follow the convention of the existing flags, I'd like to implement the following command:

minikube tunnel --ingress would create tunnels for both service and ingress,

while

minikube tunnel would be for service only.

But this would have an impact on ingress for non-Mac systems.

Do you have any concerns?

What helped me:

minikube start --driver=virtualbox

(Since hyperkit has issues accessing the internet for me)

Here is a workaround. It looks ugly but it works.

This is (at the time of posting) still the only way to make it work on Apple silicon (M1, 2020) using:

  • Darwin 21.3.0 (MacOS 12.2.1)
  • minikube 1.25.2
  • docker 20.10.16

Is there a specific reason the workaround cannot be incorporated into master?

To date the Apple Silicon virtualization drivers are still limited. So working with docker is rather useful, and this workaround literally saved my day.

What is the workaround for using ingress with minikube on the docker driver on M1 macOS?

What is the workaround for using ingress with minikube on the docker driver on M1 macOS?

The one described by @zhan9san above

Oh, I have been trying to expose a service's NodePort to my host machine running minikube (macOS) and now I see this open issue. Well, is there a workaround? I mean, it really is the most basic thing to try to reach minikube with a client outside of minikube, isn't it? I really wonder how this can be, but maybe I am missing the point of why someone would set up a cluster without having access to it.

I was able to get ingress and ingress-dns exposed properly on minikube with docker driver by using docker-mac-net-connect

@michelesr I am not using ingress but a regular NodePort. It is only possible using the VirtualBox driver on Intel-based Macs.

@michelesr I am not using ingress but a regular NodePort. It is only possible using the VirtualBox driver on Intel-based Macs.

That would work with the tool I linked. It basically allows you to reach docker containers using their IP addresses, just like you would on a Linux machine, and so makes the minikube IP reachable from the host and your node ports accessible.
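For anyone trying this route, a rough setup sketch; the Homebrew tap and service names are taken from the docker-mac-net-connect README as I recall them, so verify them there:

# Install and start the network bridge (names per the project's README):
brew install chipmk/tap/docker-mac-net-connect
sudo brew services start chipmk/tap/docker-mac-net-connect

# With the bridge running, the minikube container IP is routable from macOS,
# so a NodePort (30080 is just an example) can be curled directly:
curl "$(minikube ip):30080"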

@michelesr I tried it out already and unfortunately it didn't work for me either. Still, thank you very much for trying to help.

I was able to get ingress and ingress-dns exposed properly on minikube with docker driver by using docker-mac-net-connect

@michelesr Thanks for sharing - that tool is incredibly useful. It's the only way I've been able to get ingress-dns to work on a Mac with an ARM64 chip.